VOICE ARCHIVE

@aiatmeta
57 posts
2026-02-25
Meta 🤝 AMD Today we're announcing a multi-year agreement with @AMD to integrate their latest Instinct GPUs into our global infrastructure. With approximately 6GW of planned data center capacity dedicated to this deployment, we're scaling our compute capacity to accelerate the [video]
2026-02-25 View on X
Wall Street Journal

Meta agrees to acquire up to 6GW of AMD Instinct GPUs in a deal valued at $100B+ that could see Meta own up to 10% of AMD; Meta plans to deploy 1GW in 2026

The deal could result in Meta owning as much as 10% of AMD's stock as the chip maker seeks to challenge Nvidia

2026-02-17
Our team is heading to India this week for the AI Impact Summit & Expo 🇮🇳 Stop by the Meta booth (Exhibition Hall 3, Booth No. 3.7) to meet our team and experience: 📚 Demos of research, including Omnilingual Automatic Speech Recognition (ASR) and SeamlessExpressive ⚡ [image]
2026-02-17 View on X
Reuters

Indian conglomerate Adani Group announces plans to invest $100B to build renewable energy-powered AI-ready data centers across India by 2035

Adani Enterprises (ADEL.NS) said on Tuesday that the group will invest $100 billion to build renewable energy-powered AI-ready data centres by 2035 …

Wall Street Journal

As India hosts its AI Impact Summit in New Delhi this week, the country is promoting its approach of developing cheaper AI tools aimed at solving local problems


The Economic Times

Sources: top VC firms in India like Khosla and Accel are set to commit investments from $300M to $500M each to India's AI ecosystem at the AI Impact Summit

Top venture capital firms in India are set to invest $300-500 million each into the country's AI ecosystem, covering infrastructure …

2026-02-16
Wall Street Journal

As India prepares to host its AI Impact Summit in New Delhi this week, the country is set to promote a frugal AI strategy focused on solving local issues

The world's most populous country is looking at how it can become an artificial-intelligence power without breaking the bank

2025-11-19
We've partnered with @Roboflow to enable people to annotate data, fine-tune, and deploy SAM 3 for their particular needs. Try it here: https://roboflow.com/
2025-11-19 View on X
SiliconANGLE

Meta releases SAM 3, a model for detection, segmentation, and tracking of objects in images and video, and SAM 3D, which can reconstruct objects and humans in 3D

Meta Platforms Inc. today is expanding its suite of open-source Segment Anything computer vision models with the release of SAM 3 and SAM 3D …

We're sharing SAM 3 under the SAM License so others can use it to build their own experiences. Alongside the model, we're releasing a new evaluation benchmark, model checkpoint, and open-source code for inference and fine-tuning. These resources are designed to support advanced
2025-11-19 View on X

2025-11-11
Introducing Meta Omnilingual Automatic Speech Recognition (ASR), a suite of models providing ASR capabilities for over 1,600 languages, including 500 low-coverage languages never before served by any ASR system. While most ASR systems focus on a limited set of languages that are [video]
2025-11-11 View on X
VentureBeat

Meta releases Omnilingual Automatic Speech Recognition, a suite of AI models handling automatic speech recognition for 1,600+ languages, vs. OpenAI Whisper's 99

Models that understand 1,600+ languages, including 500 that have never been supported before! 🤯 <10% character error rate for 78% of languages; in-context learning: adapt to new ...

2025-07-26
We're excited to have @shengjia_zhao at the helm as Chief Scientist of Meta Superintelligence Labs. Big things are coming! 🚀 See Mark's post: https://www.threads.com/... [image]
2025-07-26 View on X
Axios

Mark Zuckerberg names Shengjia Zhao, the former OpenAI researcher who co-created ChatGPT, as chief scientist at Meta Superintelligence Labs

Shengjia Zhao — formerly of OpenAI — will be chief scientist at Meta's new Superintelligence Lab, Mark Zuckerberg announced on Threads on Friday.

2025-05-20
Exciting news coming out of Microsoft Build: Coming soon, the Llama herd of models will be direct first-party offerings in Azure AI Foundry, hosted and sold directly by Microsoft—with all the SLAs Azure customers expect from any Microsoft product. We're thrilled to make it even
2025-05-20 View on X
The Verge

Microsoft adds xAI's Grok 3 and Grok 3 mini to its Azure AI Foundry service, risking further OpenAI tensions; sources say Satya Nadella had pushed to host Grok


2025-04-06
Today is the start of a new era of natively multimodal AI innovation. Today, we're introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality. Llama 4 Scout • 17B-active-parameter model [image]
2025-04-06 View on X
Meta

Meta launches Llama 4 Maverick with 400B parameters and Scout with 109B parameters and a 10M context window, and previews Behemoth with 2T total parameters

Takeaways  — We're sharing the first models in the Llama 4 herd, which will enable people to build more personalized multimodal experiences.

2025-03-18
Llama has now been downloaded over 1 Billion times! A note to: The researchers at Meta training these models — and those building on the research in other labs. The developers and enthusiasts on r/LocalLlama, @huggingface and more; experimenting with new models and creating
2025-03-18 View on X
TechCrunch

Mark Zuckerberg says Meta's Llama models have been downloaded 1B times since their 2023 debut, up from 650M downloads in early December 2024

In a brief message Tuesday morning on Threads, Meta CEO Mark Zuckerberg said the company's “open” AI model family, Llama, hit 1 billion downloads.

2025-02-28
Introducing Aria Gen 2, next generation glasses that we hope will enable researchers from industry and academia to unlock new work in machine perception, contextual AI, robotics and more. Aria Gen 2 details + sign up for availability updates ➡️ https://www.meta.com/... [video]
2025-02-28 View on X
Road to VR

Meta unveils Aria Gen 2, its latest research glasses, with a PPG sensor for measuring heart rate and a contact microphone to distinguish the wearer's voice


2024-12-07
As we continue to explore new post-training techniques, today we're releasing Llama 3.3 — a new open source model that delivers leading performance and quality across text-based use cases such as synthetic data generation at a fraction of the inference cost. [image]
2024-12-07 View on X
TechCrunch

Meta announces Llama 3.3 70B, a text-only model that Meta claims can deliver the performance of its largest Llama model at a lower cost


2024-10-25
We used two different techniques for quantizing these models. Quantization-Aware Training with LoRA adaptors, prioritizing accuracy. SpinQuant, a post-training quantization method which prioritizes portability. Both versions are available for download as part of this release. [image]
2024-10-25 View on X
SiliconANGLE

Meta debuts “quantized” versions of Llama 3.2 1B and 3B models, designed to run on low-powered devices and developed in collaboration with Qualcomm and MediaTek

so today we're releasing new quantized versions of Llama 3.2 1B & 3B that deliver up to 2-4x increases in inference speed and, on average, 56% reduction in model size, and 41% redu...

We want to make it easier for more people to build with Llama — so today we're releasing new quantized versions of Llama 3.2 1B & 3B that deliver up to 2-4x increases in inference speed and, on average, 56% reduction in model size, and 41% reduction in memory footprint. Details [image]
2024-10-25 View on X

Thanks to close work with @arm, @mediatek and @qualcomm, these new models are ready to deploy on even more mobile CPUs. We are also currently collaborating with partners to utilize NPUs for these quantized models for even greater performance. [image]
2024-10-25 View on X

2024-10-20
Open science is how we continue to push technology forward and today at Meta FAIR we're sharing eight new AI research artifacts including new models, datasets and code to inspire innovation in the community. More in the video from @jpineau1. This work is another important step [video]
2024-10-20 View on X
VentureBeat

Meta debuts Spirit LM, its first open-source multimodal language model capable of integrating text and speech inputs and outputs, for non-commercial use only

“…our first open source multimodal language model that freely mixes text and speech.”

Just in time for Halloween 2024, Meta has unveiled Meta Spirit LM, the company's first open-source multimodal language model capable …