VOICE ARCHIVE

Baptiste Rozière

@b_roziere
4 posts
2025-12-09
Glad to release the new devstral, the best open model for code agents! It comes with vibe CLI, and you can install it easily with uv or pip and let it guide you!
uv tool install mistral-vibe
pip install mistral-vibe
TechCrunch

Mistral launches Devstral 2, a coding model with 123B parameters requiring at least four H100 GPUs, and a 24B parameter Devstral Small for local deployment

French AI startup Mistral today launched Devstral 2, a new generation of its AI model designed for coding, as the company seeks …

2024-01-30
We released a 70B version of CodeLlama today! Trained on 1T tokens, it is a much stronger base model for coding tasks. I look forward to seeing what the community will do with it! :)
VentureBeat

Meta releases Code Llama 70B, a new version of its code generation model, featuring improved code correctness, a variant optimized for Python, and more

available under the same license as previous Code Llama models. Download the models ➡️ https://ai.meta.com/...
• CodeLlama-70B
• CodeLlama-70B-Python
• CodeLlama-70B-Instruct

2023-08-25
We initialize CodeLlama with Llama 2 and train on 500B tokens of a code-heavy dataset. At mid-training, it is already equivalent to a full training from scratch. The perplexities of our models still go down at the end of training, so extra budget could be used to train longer. [image]
The Verge

Meta debuts Code Llama, which can generate code and debug human-written work, under the same community license as Llama 2, free for research and commercial use

Code Llama, Code Llama - Python, and Code Llama - Instruct, fine-tuned for understanding natural language instructions. … Yann LeCun / @yannlecun : You knew that was coming: Code L...

2023-08-25
It allows CodeLlama to outperform other open base models on programming benchmarks, even those trained on more tokens. We also obtain significant performance gains compared to Llama 2. Even CodeLlama 7B outperforms Llama 2 70B on a multilingual benchmark. [image]