VOICE ARCHIVE

Sherjil Ozair

@sherjilozair
3 posts
2025-12-07
cracked team and a 🐐 shipped an s-tier open-source model
View on X
Essential AI

Essential AI, whose CEO co-wrote Google's "Attention Is All You Need" paper, unveils Rnj-1, an 8B-parameter open model with SWE-bench performance close to GPT-4o.

The long-term advancement and equitable diffusion of AI technologies crucially depend on their development in the Open.

2023-10-09
Linear probes are a classic way of grounding distributed representations! They were famously used as an evaluation protocol for unsupervised representation learning methods like CPC, SimCLR, etc. @gyomalin_ML wrote about it in 2016: https://arxiv.org/...
View on X
Anthropic

A research paper details how decomposing groups of neurons in a neural network into interpretable “features” may improve safety by enabling monitoring of LLMs.

Neural networks are trained on data, not programmed to follow rules. With each step of training …
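The linear-probe protocol the post refers to can be sketched as follows: freeze a pretrained encoder's representations, then fit only a linear classifier on top and report its accuracy. This is a minimal illustration on synthetic stand-in features — the data, dimensions, and learning rate are assumptions for the sketch, not taken from CPC, SimCLR, or any cited paper.

```python
# Linear-probe sketch: frozen features + a linear classifier trained on top.
# The "encoder output" below is synthetic stand-in data, purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen features from a pretrained encoder:
# two classes whose fixed representations are offset from each other.
n, d = 200, 16
labels = rng.integers(0, 2, size=n)            # binary class labels
features = rng.normal(size=(n, d)) + labels[:, None] * 2.0

# The probe itself: logistic regression fit by plain gradient descent.
# Only w and b are trained; the features are never updated (encoder is frozen).
w, b = np.zeros(d), 0.0
for _ in range(500):
    logits = features @ w + b
    p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid
    grad = p - labels                           # dLoss/dlogits for log-loss
    w -= 0.1 * (features.T @ grad) / n
    b -= 0.1 * grad.mean()

# High probe accuracy => the classes are linearly decodable from the features,
# which is the signal the evaluation protocol is after.
accuracy = ((features @ w + b > 0) == (labels == 1)).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

The key design point is that the probe is deliberately weak (linear): if even a linear map can read the label out of the representation, the information is encoded accessibly, which is what makes probes useful as an evaluation of unsupervised encoders.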
