
Chronicles

The story behind the story


Meta debuts Llama 3.1 405B, the “first frontier-level open source AI model”, as well as new Llama 3.1 70B and 8B models, and says it's working on Llama 4

Bloomberg

Discussion

  • @zuck Mark Zuckerberg on threads
    Open Source AI is the Path Forward
  • @timothybucksf Timothy Buck on threads
    My team was part of a really exciting set of announcements today!  We introduced the ability to imagine yourself 👨‍🚀.  We upgraded Meta AI to Llama 3.1, expanded to new languages and countries, and gave you the ability to edit images you've created (add, remove, change, etc).
  • @crumbler Casey Newton on threads
    Meta's largest-ever AI model is here — and likely to give new life to the debate over whether AI development can be done more safely if it's open source.  I wrote about the trade-offs: https://www.platformer.news/ ...
  • @benedictevans Benedict Evans on threads
    it appears the latest version of Llama is large enough to be a 'systemic risk' under the EU's AI Act.  Open source, freely available to any startup that wants to build on it.  Except... Tell me more about how these laws will make the EU a leader in AI. https://x.com/...
  • @daveleebbg Dave Lee on threads
    Stuffing it into the world's most popular apps and getting usage isn't a strategic triumph  —  RE: https://www.threads.net/...
  • @yannlecun Yann LeCun on threads
    💥BOOM 💥 Llama 3.1 is out 💥 405B, 70B, 8B versions.  Main takeaways: 1.  405B performance is on par with the best closed models.  2. Open/free weights and code, with a license that enables fine-tuning, distillation into other models, and deployment anywhere...
  • @brandonlive Brandon Paddock on threads
    Open weights is great, but isn't open source and shouldn't be called that.  The “source” for a model would be the training code plus the training data.
  • @deanwball Dean W. Ball on x
    Llama 3 405b is a “systemic risk” to society, according to the European Union and their AI Act. [image]
  • @aiatmeta @aiatmeta on x
    We've also updated our license to allow developers to use the outputs from Llama models — including 405B — to improve other models for the first time. We're excited about how this will enable new advancements in the field through synthetic data generation and model distillation
  • @karpathy Andrej Karpathy on x
    Huge congrats to @AIatMeta on the Llama 3.1 release!  Few notes: Today, with the 405B model release, is the first time that a frontier-capability LLM is available to everyone to work with and build on...I expect the closed model players (which imo have a role in the ecosystem too…
  • @thejackobrien Jack O'Brien on x
    Open source surpassing the closed source AI models, what a time to be alive!
  • @soumithchintala Soumith Chintala on x
    Why do 16k GPU jobs fail? The Llama3 paper has many cool details — but notably, has a huge infrastructure section that covers how we parallelize, keep things reliable, etc. We hit an overall 90% effective-training-time. https://ai.meta.com/... [image]
  • @neil_chilson Neil Chilson on x
    Why are FTC & DOJ issuing statements w/ EU competition authorities discussing “risks” in the blazingly competitive, U.S.-built AI ecosystem? And on the same day that Meta turbocharges disruptive innovation with the first-ever frontier-level open source AI model? A 🧵 [image]
  • @corbtt Kyle Corbitt on x
    Guys fine-tuned Llama 3.1 8B is completely cracked. Just ran it through our fine-tuning test suite and blows GPT-4o mini out of the water on every task. There has never been an open model this small, this good. [image]
  • @marouane53 Marouane A. Lamharzi on x
    Can't believe Llama 3.1 (405B), a model as good as GPT-4 and Claude Sonnet 3.5, is open-source and free! Multilingual, 128K context, tool-use + agents. Never thought I'd say this, but thank you, Mark Zuckerberg, for Llama. This is an amazing moment for Open-Source AI.
  • @natolambert Nathan Lambert on x
    Setting the record straight on Llama 3.1 not being truly open-source: * Revisiting Meta's AI strategy * Zuckerberg on open vs closed AI * Who the Llama 3.1 license doesn't serve * Implications of an open frontier model More on the technical details soon!
  • @emollick Ethan Mollick on x
    Early days, but on vibes alone, on complex business writing and educational prompts, Llama 3.1 405B is quite good, but doesn't take a clear lead over the other frontier models. Of course, being open(ish), there may be rapid improvement on both models & prompting techniques soon.
  • @pwendell Patrick Wendell on x
    Meta AI's Llama release today is really important, likely the most important open source AI announcement ever.  Many people don't understand why: 1.  The quality gap between the best proprietary and open models has effectively vanished.  No one really knew if this gap would get b…
  • @keunwoochoi Keunwoo Choi on x
    I admit I underestimated how quickly the community (largely thanks to Meta, of course) could catch up to OpenAI. Impressive.
  • @amuldotexe @amuldotexe on x
    @karpathy @AIatMeta Hypothesis: Maybe one of the coolest achievements of @ylecun might be to nudge someone as resourceful as Mark Zuckerberg to invest in making such high quality models open source for the greater good via @AIatMeta that's such a pragmatic far-reaching impact on …
  • @basedbeffjezos @basedbeffjezos on x
    The goal was always to ban open source models that are competitive so that incumbent closed model makers could achieve regulatory capture. AI Safety is just a pretense for this. We can't let this bill pass in California. The future of OSS AI depends on it.
  • @teortaxestex @teortaxestex on x
    Pretty insane that the cost of producing llama-3-405B, this behemoth, is like 40% of *Ant-Man and the Wasp: Quantumania* movie at most If I were Zuck, I'd have open sourced a $10B omnimodal AGI purely out of spite for the vast fortunes spent on normieslop as a matter of course [i…
  • @aravsrinivas Aravind Srinivas on x
    Llama 3 8B was better than Llama 2 70B. There's no reason Llama 4 8B/70B (which will be natively multimodal with audio, image and video) shouldn't be better than Llama 3.1 405B. Imagine that scenario where you will be able to run a model better than GPT-4o (worth a billion or two
  • @elonmusk Elon Musk on x
    @karpathy @AIatMeta It is impressive and Zuck does deserve credit for open-sourcing
  • @andrewyng Andrew Ng on x
    Thank you Meta and the Llama team for your huge contributions to open-source! Llama 3.1 with increased context length and improved capabilities is a wonderful gift to everyone. I hope foolish regulations like California's proposed SB1047 don't stop such innovations.
  • @emollick Ethan Mollick on x
    I spoke to companies in the last year who dropped a lot of money on Llama 2 powered systems because they were cheap to run, but took lots of engineering to make work well in more demanding applications. APIs can be switched, but there was a lot of work building around temporary limits
  • @minchoi Min Choi on x
    Instant Intelligence is wild with Llama 3.1 8B + Groq 🤯 [video]
  • @ramaswmysridhar Sridhar on x
    Excited to announce that Snowflake is teaming up with @AIatMeta to bring Llama 3.1 405B, their largest and most powerful open source model, to Snowflake Cortex AI! 🚀 We've optimized this state-of-the-art model to run faster and more efficiently in Snowflake so our [image]
  • @hingeloss Chris on x
    Compared leaked Llama 3.1 benchmarks with other leading models, very excited for the release! We can tier out models by price / 1M output tokens. O($0.10): 4o-mini and <10B param models. I think 4o-mini will still be best but a strong local 8B will unlock lots of applications. [i…
  • @ahmad_al_dahle Ahmad Al-Dahle on x
    With today's launch of our Llama 3.1 collection of models we're making history with the largest and most capable open source AI model ever released. 128K context length, multilingual support, and new safety tools. Download 405B and our improved 8B & 70B here. [video]
  • @skirano Pietro Schirano on x
    With the introduction of Llama 3.1 405B, we now have an open-source model that beats the best closed-source one available today on selected benchmarks. What a time. [image]
  • @matthewberman @matthewberman on x
    Tomorrow is a pivotal day in the world of AI, with the release of LLaMA 3.1 405b. Suddenly, the world will have access to an open-source model considered SOTA. It BEATS GPT4o on many benchmarks. What a time to be alive. [image]
  • @togethercompute @togethercompute on x
    🚀 Excited to partner with Meta to bring Llama 3.1 models to Together Inference and Fine-tuning! Up to 80 tokens per second for Llama 3.1 405B and up to 400 tokens per second for Llama 3.1 8B, which is 1.9x to 4.5x faster than vLLM while maintaining full accuracy. [image]
  • @alphasignalai Lior on x
    This might be the biggest moment for Open-Source AI. Meta just released Llama 3.1 and a 405 billion parameter model, the most sophisticated open model ever released. It already outperforms GPT-4o on several benchmarks. [video]
  • @hingeloss Chris on x
    Meta trained an E2E speech experience with Llama 3.1 - pretty cool! This should equal real-time speech response. Audio encoder + adapter + LLM = audio in, text out Custom TTS model uses LLM embeddings to condition output - IMO elegant to stay in latent space and avoid phonemes. [i…
  • @emollick Ethan Mollick on x
    All the current models are getting into a good grad school, especially in the humanities. This includes the small, open Llama 3.1 70B model. Nice gains over the previous generations. (Yes, human tests are never a great way to judge models, but still interesting) [image]
  • @thom_wolf Thomas Wolf on x
    Among the most impressive aspects of the Llama 3.1 release is the accompanying research paper! Close to 100 pages of deep knowledge-sharing on LLMs like we haven't seen often recently. What a treat! It covers everything: pretraining data, filtering, annealing, synthetic [imag…
  • @astonzhangaz Aston Zhang on x
    Our Llama 3.1 405B is now openly available! After a year of dedicated effort, from project planning to launch reviews, we are thrilled to open-source the Llama 3 herd of models and share our findings through the paper: 🔹Llama 3.1 405B, continuously trained with a 128K context [im…
  • @tobi Tobi Lutke on x
    An incredible gift from Meta & Zuck to all of us. Llama3.1 looks incredible. Congrats to the whole team behind it!
  • @garymarcus Gary Marcus on x
    Llama 3.1 405B is huge.  Here's why: - Roughly as good as GPT-4, but open source.  - There goes OpenAI's business model - But it's not GPT-5 level either, despite truly massive resources... - Massive convergence on GPT-4 level models means massive price war, few profits for anyone …
  • @mascobot @mascobot on x
    Llama 3.1 benchmarks side by side. This is truly a SOTA model. Beats GPT-4 on almost every single benchmark. Continuously trained with a 128K context length. Pre-trained on 15.6T tokens (405B). The fine-tuning data includes publicly available instruction datasets, as well as [imag…
  • @bbgoriginals @bbgoriginals on x
    Mark Zuckerberg says he wants to control his own tech “destiny” by releasing the largest open-source AI model ever, and avoiding the “soul-crushing” tactics of Apple. He speaks exclusively with @emilychangtv on The Circuit https://www.bloomberg.com/... [video]
  • @_akhaliq @_akhaliq on x
    The Llama 3 Herd of Models Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. […
  • @aiatmeta @aiatmeta on x
    With Llama 3.1, we evaluated performance on >150 benchmark datasets spanning a wide range of languages — in addition to extensive human evaluations in real-world scenarios. These results show that the 405B competes with leading closed source models like GPT-4, Claude 2 and Gemini…
  • @metanewsroom @metanewsroom on x
    Meta AI is now multilingual, smarter and more creative: - Now available in 7 new languages and 22 countries 🌎 - New creative tools to help bring your visions to life as images 🎨 - You can now use our largest and most capable open-source model in Meta AI for help with more
  • r/technology on reddit
    Meta releases the biggest and best open-source AI model yet
  • r/mlscaling on reddit
    Llama 3.1 Paper