DeepSeek V4 Pro costs $1.74/1M input and $3.48/1M output tokens while V4 Flash costs $0.14/1M input and $0.28/1M output tokens, both the cheapest in their class
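To make the price gap concrete, here is a minimal sketch of the per-request cost arithmetic using the prices quoted above; the model names and token counts are illustrative, not an official API:

```python
# Per-million-token prices quoted in the post (USD).
PRICES = {
    "deepseek-v4-pro":   {"input": 1.74, "output": 3.48},
    "deepseek-v4-flash": {"input": 0.14, "output": 0.28},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request, given its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical request: 10,000 input tokens, 1,000 output tokens.
print(f"Pro:   ${request_cost('deepseek-v4-pro', 10_000, 1_000):.6f}")    # $0.020880
print(f"Flash: ${request_cost('deepseek-v4-flash', 10_000, 1_000):.6f}")  # $0.001680
```

At these rates Flash comes in at roughly 1/12th the cost of Pro for the same traffic, which is the margin the coverage below keeps circling back to.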
Chinese AI lab DeepSeek's last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their …
Simon Willison's Weblog · Simon Willison
Related Coverage
- DeepSeek V4 goes live with massive 1.6T parameters and 1M context support Neowin · Paul Hill
- DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence Hacker News
- As agentic AI pushes rivals to raise prices and cap usage, Deepseek ships a good-enough model for almost nothing The Decoder · Maximilian Schreiner
- DeepSeek Unveils V4: The Latest Open-Source AI Model Challenging Big Tech Giants Blockonomi · Trader Edge
- DeepSeek Releases V4 Pro and Flash, Undercutting OpenAI Pricing by Up to 10x Implicator.ai · Marcus Schuler
- DeepSeek's Newest Models Take on Silicon Valley at a Fraction of the Cost Gizmodo · Bruce Gil
- DeepSeek's Long-Awaited New Model Fails to Narrow US Lead in AI Bloomberg · Seth Fiegerman
- DeepSeek says its new V4 models trail OpenAI and Google by months, not years crypto.news · Rony Roy
Discussion
- @simonw (Simon Willison) on X: More of my notes on DeepSeek V4 - the really big news is the pricing: both DeepSeek-V4-Flash and DeepSeek-V4-Pro are the cheapest models in their categories while benchmarking close to the frontier models from other providers https://simonwillison.net/... [image]
- @simonwillison.net (Simon Willison) on Bluesky: DeepSeek V4 just dropped - two models, Flash and Pro, both benchmarking well, decent pelicans and prices that put them both as the cheapest in their respective categories by a solid margin simonwillison.net/2026/Apr/24/ ... [images]
- @ErikJonker@mastodon.social (Erik Jonker) on Mastodon: Do not only look at benchmarks of AI models. Costs are also very important and the differences are big. In the end that is very important for businesses using AI at scale. Picture is from this excellent blog post from @simon: https://simonwillison.net/... #AI #deepseekv4 #…
- r/LocalLLaMA on Reddit: No multimodality yet in DeepSeek-V4. But I'll wait.
- M Mohan on LinkedIn: DeepSeek is turning model efficiency into a weapon. It's 30-44% cheaper; DeepSeek is making LLMs look like an economic design problem …