Entity: model weights
Filtered to: controversy pattern
18 articles decelerating
Articles: 18 mentions
Velocity: -50.0% growth rate
Acceleration: -0.167 velocity change
Sources: 12 publications
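The Velocity and Acceleration figures read like simple finite differences over per-period mention counts: velocity as period-over-period growth, acceleration as the change in that growth rate. A minimal sketch of that reading (the dashboard's actual window and formulas are assumptions, and the counts below are hypothetical, chosen so the outputs line up with the figures shown):

```python
def velocity(counts):
    """Period-over-period growth rate of mention counts.
    Assumed formula: (current - previous) / previous."""
    prev, curr = counts[-2], counts[-1]
    return (curr - prev) / prev if prev else 0.0

def acceleration(counts):
    """Change in velocity between the last two periods."""
    v_prev = (counts[-2] - counts[-3]) / counts[-3] if counts[-3] else 0.0
    v_curr = (counts[-1] - counts[-2]) / counts[-2] if counts[-2] else 0.0
    return v_curr - v_prev

quarterly = [12, 8, 4]  # hypothetical mentions per quarter
print(f"velocity: {velocity(quarterly):+.1%}")       # → velocity: -50.0%
print(f"acceleration: {acceleration(quarterly):+.3f}")  # → acceleration: -0.167
```

With those counts, growth falls from -33.3% to -50.0%, so the velocity change is about -0.167, matching the stat shown; other count sequences would of course produce the same figures.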

Coverage Timeline

2025-10-10
Anthropic 16 related

A study finds that as few as 250 malicious documents can produce a “backdoor” vulnerability in an LLM, regardless of model size or training data volume

Data-poisoning attacks might be more practical than previously believed. @anthropicai : Previous research suggested that attackers mig...

2025-07-08
Financial Times 7 related

Sources: OpenAI overhauled its security, adding biometric checks in its offices and isolating sensitive info, to protect IP such as model weights from espionage

Artificial intelligence group has added fingerprint scans and hired military experts to protect important data

2025-04-28
CBS Mornings on YouTube 6 related

Geoffrey Hinton fears AI companies are under-investing in safety research, embracing military usage, and sharing model weights while pushing for less regulation

Nobel laureate Geoffrey Hinton, often called a “godfather of artificial intelligence,” spoke with Brook Silva-Braga …

2025-03-14
TechCrunch 15 related

OpenAI calls DeepSeek “state-controlled” and recommends that the US ban “PRC-produced equipment and models that violate user privacy and create security risks”

https://techcrunch.com/... Threads: Vishvanand Subramanian / @vishvanands : trying hard to steelman this position from openai but unless it's possible to hide malware in the model weights, what exactl...

2025-02-02
Wall Street Journal 12 related

When asked in an AMA if OpenAI would release model weights and research, Sam Altman said “we are discussing” and “it's also not our current highest priority”

www.theguardian.com/commentisfre... Mastodon: Bryan Lawrence / @bnlawrence@mastodon.nz : “For us little people, the choice seems to be between being data-jacked and screwed over by the undemo...

2025-02-01
Wall Street Journal 17 related

When asked in an AMA if OpenAI would release model weights and research, Sam Altman said “we are discussing” and “it's also not our current highest priority”

CEO of the ChatGPT maker says his company has been ‘on the wrong side of history’ with open-source software

2024-12-07
Apollo Research 5 related

An evaluation of six frontier AI models for in-context scheming when strongly nudged to pursue a goal: only OpenAI's o1 was capable of scheming in all the tests

It presents a new safety challenge that OpenAI is trying to address.  —  techcrunch.com/2024/12/05/o... Anders Sandberg / @arenamontanus : In an IVA discussion on AI yesterday evening professor Kristi...

2023-07-22
The Verge 83 related

OpenAI, Microsoft, Meta, Google, Amazon, Anthropic, and Inflection make voluntary AI promises to the White House, like cybersecurity investment and watermarking

White House Pranav Dixit / Business Today : OpenAI, Google, Meta, Amazon and others pledge to watermark AI content for safety Ryan Morrison / Tech Monitor : White House secures AI safety commitment As...

Quarterly Coverage

Top Sources

Narrative

Relationships