Chronicles

The story behind the story


DeepSeek V4 Pro costs $1.74/1M input and $3.48/1M output tokens while V4 Flash costs $0.14/1M input and $0.28/1M output tokens, both the cheapest in their class

Chinese AI lab DeepSeek's last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their …
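The headline pricing works out to very small per-request costs. A minimal sketch of the arithmetic, using the per-million-token rates quoted above (the model keys and the example request sizes here are illustrative, not the provider's actual API identifiers):

```python
# Listed DeepSeek V4 prices, USD per 1M tokens (from the article headline).
PRICES = {
    "v4-pro":   {"input": 1.74, "output": 3.48},
    "v4-flash": {"input": 0.14, "output": 0.28},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request: (tokens / 1e6) * price-per-million, input + output."""
    p = PRICES[model]
    return input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]

# Example: a 10k-token prompt with a 1k-token response.
print(f"{request_cost('v4-pro', 10_000, 1_000):.6f}")    # 0.020880
print(f"{request_cost('v4-flash', 10_000, 1_000):.6f}")  # 0.001680
```

Note that for both models the output rate is exactly twice the input rate, so the same ratio logic applies at any request size.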

Simon Willison's Weblog (Simon Willison)

Discussion

  • @simonw Simon Willison on x
    More of my notes on DeepSeek V4 - the really big news is the pricing: both DeepSeek-V4-Flash and DeepSeek-V4-Pro are the cheapest models in their categories while benchmarking close to the frontier models from other providers https://simonwillison.net/... [image]
  • @simonwillison.net Simon Willison on bluesky
    DeepSeek V4 just dropped - two models, Flash and Pro, both benchmarking well, decent pelicans and prices that put them both as the cheapest in their respective categories by a solid margin simonwillison.net/2026/Apr/24/ ...  [images]
  • @ErikJonker@mastodon.social Erik Jonker on mastodon
    Do not only look at benchmarks of AI models.  Costs are also very important and the differences are big.  In the end that is very important for businesses using AI at scale.  Picture is from this excellent blog/post from @simon  —  https://simonwillison.net/...  #AI #deepseekv4 #…
  • r/LocalLLaMA on reddit
    No Multimodality yet in DeepSeek-V4.  But I'll wait.
  • M Mohan on linkedin
    DeepSeek is turning model efficiency into a weapon. It's 30-44% cheaper — DeepSeek AI is making LLMs look like an economic design problem …