VOICE ARCHIVE

@lokoyacap
2 posts
2025-11-02
Satya on BG2: “Absolutely Azure could have grown ~41-42% if we'd had more compute” 👀 h/t @TMTBreakout for the notes
@bg2pod

Q&A with Sam Altman and Satya Nadella about the Microsoft-OpenAI partnership, OpenAI's restructuring and $100B revenue target for 2027, the $3T AI buildout, and more

BG2. All things AI w @sama & @satyanadella. A Halloween Special. 🎃🔥@bg2pod @altcap — (00:00) Intro (02:28) Microsoft's Investment in OpenAI (03:19) The Nonprofit Structure and Its ...

2025-01-27
Everyone has the DeepSeek thing literally ass-backwards. The hyperscalers/frontier LLMs will learn/use whatever they can from it (along w everything else already on their roadmaps) to make their models even better and then exponentiate them with these looming giga clusters.
YouTubeTranscriptOptimizer

A bear case for Nvidia: hardware competitors, LLM code translation to avoid CUDA lock-in, DeepSeek's training and inference efficiency breakthroughs, and more

DeepSeek recently released models claiming up to 45x more efficient training and inference compared to today's best-known large language models (like OpenAI's o1). …