Chronicles

The story behind the story


US-based AI startup Arcee releases Trinity Large, a 400B-parameter open-weight model that it says is competitive with Meta's Llama 4 Maverick 400B on some benchmarks

Many in the industry think the winners of the AI model market have already been decided: Big Tech will own it (Google, Meta …

TechCrunch / Julie Bort

Discussion

  • @arcee_ai on x
    Today, we're releasing the first weights from Trinity Large, our first frontier-scale model in the Trinity MoE family. [video]
  • @latkins Lucas Atkins on x
    Today, we are releasing our first weights from Trinity-Large, our first frontier-scale model in the Trinity MoE family. American Made. - Trinity-Large-Preview (instruct) - Trinity-Large-Base (pretrain checkpoint) - Trinity-Large-TrueBase (10T pre Instruct data/anneal) [video]
  • @theonejvo Jamieson O'Reilly on x
    In a twist that perfectly illustrates the threat landscape I've been writing about, Clawdbot's creator Peter had to rename the project's accounts due to alleged @AnthropicAI trademark issues, and during that transition window crypto scammers immediately snatched the old handles […
  • @scaling01 on x
    American open-weight LLMs are back! Arcee AI trained Trinity Large Preview, a 400B MoE model, in just over 30 days on 2048 Nvidia B300 GPUs. It is much faster and more efficient than comparable Chinese open-weight models like DeepSeek-V3 and GLM-4.7. Trinity Large is part of the […
  • @sasurobert Robert Sasu on x
    Do not download skills from the internet. Write them yourself, manually. Take a little time to do it. Prompt injections and skills injections are super easy and can steal everything. Thousands of “devs” simply download without verifying. This will compromise everything.
  • @kalomaze on x
    oh nothing too crazy. just a 400 billion parameter western MoE model pretrained from scratch on 15T+ tokens at an even more aggressive sparsity ratio than any other model of comparable scale
  • @beffjezos on x
    American open source isn't dead. Kudos to the team
  • @natolambert Nathan Lambert on x
    Here's my conversation with @latkins and the team at @arcee_ai on their path to training and releasing Trinity Large today. From going all in on open models built end to end in the US 6 months ago to having the model in hand is no easy feat. I loved this conversation on how to [v…
  • @juddrosenblatt Judd Rosenblatt on x
    (early) Recursively self-improving AI systems are already here and the back door is open to anyone
  • @prietschka Paul Rietschka on bluesky
    Going to note that it's pretty insane everyone is out there building and rebuilding the same transformer-based LLMs and calling it “innovation.”  —  It's not.  —  This is akin to everyone spending billions to reinvent, over and over, Coca-Cola.  —  “Look at my 400B-parameter spar…
  • @cailen on x
    Since everyone on this site takes a victory lap, so will I. Proud @arcee_ai investor since seed.
  • r/LocalLLaMA on reddit
    Arcee AI releases Trinity Large: OpenWeight 400B-A13B
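
For context on the figures quoted above: a minimal back-of-the-envelope sketch in Python, assuming the "400B-A13B" suffix follows the usual MoE shorthand (roughly 13B parameters active per token out of 400B total) and taking the training numbers from the @scaling01 post. The derived ratios are simple arithmetic, not figures reported by Arcee.

    # Hedged back-of-the-envelope numbers from the figures quoted above.
    # Assumption: "A13B" denotes ~13B parameters active per token (standard MoE shorthand).
    total_params = 400e9          # total parameters
    active_params = 13e9          # active parameters per token ("A13B")
    print(f"active fraction: {active_params / total_params:.2%}")  # ~3.25% of weights per token

    # Training-scale arithmetic from the @scaling01 post: ~30 days on 2048 GPUs.
    gpus, days = 2048, 30
    print(f"compute budget: ~{gpus * days:,} GPU-days")            # ~61,440 GPU-days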