VOICE ARCHIVE

Andy Tseng

@andytseng
16 posts
2026-03-09
These grants fund research, museums, and local culture across the US.  Whatever damage we see now may be nothing compared to the ripple effects years from now.  —  #USPol #DOGE #ChatGPT #AI #DEI #Humanities #ResearchSky #NationalEndowmentForTheHumanities [embedded post]
2026-03-09 View on X
New York Times

Lawsuit documents: two DOGE employees used ChatGPT to identify National Endowment for the Humanities grants, worth $100M+, to be cut for being related to DEI

Documents show how A.I. was used to cancel most previously approved grants by the National Endowment for the Humanities as the agency embraced President Trump's agenda.

2026-03-08
These grants fund research, museums, and local culture across the US.  Whatever damage we see now may be nothing compared to the ripple effects years from now.  —  #USPol #DOGE #ChatGPT #AI #DEI #Humanities #ResearchSky #NationalEndowmentForTheHumanities [embedded post]
2026-03-08 View on X
New York Times

Documents show two DOGE employees used ChatGPT to identify National Endowment for the Humanities grants, worth over $100M, to be cut for being related to DEI

Documents show how A.I. was used to cancel most previously approved grants by the National Endowment for the Humanities as the agency embraced President Trump's agenda.

2026-03-03
In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of!  —  #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy [embedded post]
2026-03-03 View on X
Jessica Tillipman

A look at the rights AI companies have in US government contracts, such as the “any lawful use” standard, amid the Anthropic-DOD dispute and the OpenAI-DOD deal

But Users Aren't Buying It

In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of!  —  #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy [embedded post]
2026-03-03 View on X
Bloomberg

Sources: amid negotiations with the DOD, Anthropic submitted a bid to compete in a $100M DOD contest to develop voice-controlled, autonomous drone swarming tech

Anthropic PBC was among the artificial intelligence companies that submitted a proposal earlier this year to compete …

2026-03-02
In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of!  —  #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy [embedded post]
2026-03-02 View on X
Hyperdimensional

The Anthropic-DOD skirmish is the first major public debate on control over frontier AI, and institutions behaved erratically, maliciously, and without clarity

On Anthropic and the Department of War  —  I.  —  A little more than a decade ago, I sat with my father and watched him die.

In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of!  —  #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy [embedded post]
2026-03-02 View on X
The Verge

Sources: OpenAI agreed to follow US laws that have allowed for mass surveillance in the past, and the DOD didn't budge from its demands over bulk analyzing data

On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced …

In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of!  —  #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy [embedded post]
2026-03-02 View on X
Jessica Tillipman

A look at the rights AI companies have in US government contracts, such as the “any lawful use” standard, amid the Anthropic-DOD dispute and the OpenAI-DOD deal

It Depends on the Acquisition Pathway, the Contract Type, and the Contract Terms.

2026-02-01
This is the real danger of the new AI era: casual users cobble together apps that “work” without understanding how they're built or whether LLM outputs are accurate or secure - often bypassing SME review.  No wonder problems like this happen.  —  #Moltbook #AI #GenAI #Cybersecurity #DevOps #ResponsibleAI
2026-02-01 View on X
404 Media

A researcher says an exposed Moltbook database could have let anyone take control of the site's AI agents and post anything; the database has since been secured

‘It exploded before anyone thought to check whether the database was properly secured.’  —  Moltbook is a “social media” …

2026-01-19
We've recently started using Claude Enterprise at work, and I'm getting the same vibe many of our users are talking about, especially with Claude Code.  It feels genuinely useful in day-to-day work, not just hype.  —  #AI #Anthropic #Claude #GenAI  —  Gift Link: www.wsj.com/tech/ai/anth...
2026-01-19 View on X
Wall Street Journal

Similarweb: Claude's web audience more than doubled YoY in December 2025, as many coders spent their holiday breaks on a “Claude bender” testing Claude Opus 4.5

Developers and hobbyists are comparing the viral moment for Anthropic's Claude Code to the launch of generative AI

2026-01-16
Hard not to notice the pattern - another OpenAI safety lead heads to Anthropic.  Of all the big AI players, Anthropic seems to take safety more seriously, at least on the surface.  Not perfect, but I wish more would follow.  —  #AI #AISafety #ResponsibleAI #Anthropic #OpenAI [embedded post]
2026-01-16 View on X
The Verge

Andrea Vallone, who left OpenAI in December after serving as its Head of Model Policy, joins Anthropic's alignment team, which tries to understand AI model risk

Andrea Vallone has joined Anthropic's alignment team. … One of the most controversial issues in the AI industry over the past year was what to do when a user displays signs …

2025-09-21
The lengths people will go to scam you... nowhere is safe, stay sharp!  —  #Cybersecurity #Scams #Phishing
2025-09-21 View on X
Wired

As carriers deploy protections against fraudulent texts, scammers are using “SMS blasters” that impersonate base stations to send fake messages to nearby phones

Scammers are now using “SMS blasters” to send out up to 100,000 texts per hour to phones that are tricked into thinking the devices are cell towers.

2025-08-24
If you're wondering why #Bluesky complies with the UK's OSA but rejects Mississippi's HB1126: the OSA targets only specific content, while HB1126 mandates blanket age checks, sensitive data grabs, and tracking - impossible for smaller platforms.  —  #Privacy #ChildSafety #SocialMedia #HB1126 #OSA #USPol
2025-08-24 View on X
TechCrunch

Bluesky blocks access to its service in Mississippi, saying it doesn't have the resources to comply with the state's broad new law requiring age verification

Bluesky's decision to drop out of the Mississippi market …

2025-08-23
If you're wondering why #Bluesky complies with the UK's OSA but rejects Mississippi's HB1126: the OSA targets only specific content, while HB1126 mandates blanket age checks, sensitive data grabs, and tracking - impossible for smaller platforms.  —  #Privacy #ChildSafety #SocialMedia #HB1126 #OSA #USPol
2025-08-23 View on X
TechCrunch

Bluesky blocks access to its service in Mississippi, saying it doesn't have the resources to comply with the state's broad new law requiring age verification

Social networking startup Bluesky has made the decision to block access to its service in the state of Mississippi, rather than comply with a new age assurance law.

2025-04-21
Interesting perspective in this “AI as Normal Technology” article.  It's definitely valuable to consider #AI as a natural progression of technology, rather than a radical departure.  It's good to have diverse views when we think about AI's future.  Thanks for sharing @mariannhardey.com.  [embedded post]
2025-04-21 View on X
Knight First Amendment Institute

A deep dive into AI as a normal technology vs. a humanlike intelligence and how major public policy based on controlling superintelligence may make things worse

An alternative to the vision of AI as a potential superintelligence  —  We articulate a vision of artificial intelligence (AI) as normal technology.

2025-03-16
It's great to see @bsky.app team considering real user control over how their data is used for #AI.  In a time when nearly all #SocialMedia platforms ignore privacy concerns, this proactive approach is a breath of fresh air.  More companies should follow suit!  #PrivacyMatters #AIEthics #Bluesky
2025-03-16 View on X
TechCrunch

Bluesky proposes letting users indicate if their data can be used for AI training, web archiving, and more; critics see it as a reversal of its prior statements

Social network Bluesky recently published a proposal on GitHub outlining new options it could give users to indicate whether …

2024-12-07
We need a global AI safety standard, it's a no-brainer.  But as Wired highlights below, creating such a standard for an ever-evolving technology is no small feat.  It demands collaboration across industries, academia, and governments to ensure AI advancement stays safe and ethical.  #AISafety #AIEthics
2024-12-07 View on X
Wired

MLCommons, a nonprofit that helps companies measure their AI systems' performance, debuts the AILuminate benchmark featuring 12K+ prompts to assess LLMs' safety

MLCommons provides benchmarks that test the abilities of AI systems.  It wants to measure the bad side of AI next.