Arcee AI releases Trinity-Large-Thinking, a 399B-parameter text-only reasoning model under an Apache 2.0 license, allowing full customization and commercial use
The baton of open source AI models has been passed on between several companies over the years since ChatGPT debuted in late 2022 …
VentureBeat Carl Franzen
Related Coverage
- Trinity-Large-Thinking: Scaling an Open Source Frontier Agent Arcee AI · Lucas Atkins
- Arcee AI Releases 400B Open Reasoning Model That Rivals Claude at 96% Lower Cost Implicator.ai · Harkaram Grewal
- Arcee AI Releases Trinity Large Thinking: An Apache 2.0 Open Reasoning Model for Long-Horizon Agents and Tool Use MarkTechPost · Asif Razzaq
- Four Open Models Just Proved You Can Own Frontier AI at Every Scale TheNeuron · Grant Harvey
- Google Jumps Back Into the Open Source AI Race With Gemma 4 Decrypt · Jose Antonio Lanz
Discussion
-
@arcee_ai
on x
Today we're releasing Trinity-Large-Thinking. Available now on the Arcee API, with open weights on Hugging Face under Apache 2.0. We built it for developers and enterprises that want models they can inspect, post-train, host, distill, and own. [video]
-
@willccbb
Will Brown
on x
the best American open-source model ever just dropped, and it costs less than $1 per million tokens i feel like more people should be talking about this
-
@markmcquade
Mark McQuade
on x
Today we drop Trinity-Large-Thinking. SOTA on Tau2-Airline, frontier-class on Tau2-Telecom, and the #2 model on PinchBench, right behind Opus. On BCFLv4, we're in the mix with the best. 26 people with under $50M raised and a ruthless pursuit of greatness. What this team just …
-
@primeintellect
on x
We're excited to support @Arcee_ai's Trinity-Large-Thinking — a frontier open reasoning model. Purpose-built for the agents people are actually running in production. Proud to have supported with our infra and post-training stack, including prime-rl and verifiers.
-
@xlr8harder
on x
This is a noteworthy release. I don't think there has been a real open source model from the US that is this close to the frontier, ever. Looking forward to trying it out.
-
@arcee_ai
on x
Preview showed us where the demand was going. People were already running Trinity-Large-Preview in real agent workflows, with long-horizon tool use and production constraints. So over the last two months, we pushed our SFT and RL stack to meet that moment. [image]
-
@designarena
on x
Trinity-Large-Thinking by @arcee_ai has been added to Design Arena! The current leading open model is GLM 5 by @Zai_org. Huge congrats to the @arcee_ai team for this release! [image]
-
@arcee_ai
on x
More important than any one score, Trinity-Large-Thinking is a major step up from Preview in the places that matter most for agents: better multi-turn tool use, better context coherence, cleaner instruction following, and more stable long-running behavior.
-
@teortaxestex
on x
It's a good start to the revival of American open weights. They'll have to work on reasoning efficiency from here on out. [image]
-
@leavittron
Matthew Leavitt
on x
Two things I'm particularly proud of here: 1. The pretraining data are derived entirely from publicly-available tokens. 2. No closed-source models were used in any part of the pretraining data curation pipeline.
-
@arcee_ai
on x
Our focus was clear. Build a model that stays coherent across turns, uses tools cleanly, follows instructions under constraint, and is efficient enough to serve at scale. That is the bet behind Trinity-Large-Thinking.
-
@matthewberman
Matthew Berman
on x
American Open Source 🇺🇸
-
@ivanfioravanti
Ivan Fioravanti
on x
Trinity-Large-Thinking running on M3 Ultra 512GB in 4bit using MLX! 🚀 Text generation at 48 toks/s, peak memory 224GB. Quantization uploads (4bit and 5bit) in progress on HF mlx-community! 🚀 [video]