VOICE ARCHIVE

@ainowinstitute
14 posts
2025-04-13
“Double-checking” AI doesn't cut it, says @heidykhlaaf.bsky.social in @technologyreview.com.  When an AI model draws conclusions from opaque, massive data sets, human oversight becomes a myth; and for military intel, a dangerous myth.
2025-04-13 View on X
MIT Technology Review

How the Pentagon uses AI tools from Vannevar Labs, which got a DOD deal worth up to $99M, to scan open-source intelligence, write intelligence reports, and more

In a test run, a unit of Marines in the Pacific used generative AI not just to collect intelligence but to interpret it. Routine intel work is only the start. www.technologyrevie...

2024-03-28
In @nytimes, @ambaonadventure emphasized to @ceciliakang that the U.S. / China AI race is consistently invoked by industry-aligned interests to give dominant American AI firms a pass on regulation. https://www.nytimes.com/... [image]
2024-03-28 View on X
New York Times

At a DC event on May 1, over 100 tech CEOs and investors plan to lobby against AI regulation, ask to relax immigration rules, and drum up hawkishness on China

Jacob Helberg, a senior adviser to Palantir, is organizing a conference for tech leaders and Washington lawmakers on May 1.

2023-12-21
The FTC just banned RiteAid, a major retailer, from using facial recognition for five years based on its harmful and irresponsible use of a system that was set up to fail, and likely to fail Black, Asian, Latino, and women consumers the most. 🧵1/5 https://www.ftc.gov/...
2023-12-21 View on X
Bloomberg

Pharmacy chain Rite Aid must stop using facial recognition for five years as part of a settlement with the US FTC, which says the tech falsely flagged customers

- FTC says chain's surveillance system falsely flagged customers — Bankrupt pharmacy chain will stop using system, delete images

It enrolled tens of thousands of people into a ‘watchlist database’. The system's operators weren't trained that it was fallible - and false positives meant many were wrongly expelled from the store and reported to the police. 4/5
2023-12-21 View on X

2023-10-12
“When resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.” - @_KarenHao https://www.theatlantic.com/ ...
2023-10-12 View on X
The Atlantic

Sources: the US Department of Commerce is considering new export controls for general-purpose AI programs, a move that experts say could weaken US AI innovation

2023-03-03
“The stakes are high because these tools are used in very sensitive social domains like in hiring, housing and credit, and there is real evidence that over the years, A.I. tools have been flawed and biased.” — Amba Kak in @nytimes https://www.nytimes.com/...
2023-03-03 View on X
New York Times

Few US lawmakers are taking action to regulate AI, as many struggle to understand the technology and its dangers, unlike the EU which proposed AI rules in 2021

Tech innovations are again racing ahead of Washington's ability to regulate them, lawmakers and A.I. experts said.

2020-12-07
We stand with Dr. Timnit Gebru, and are grateful for her brilliant and tireless work. If you want to show your support, Dr. Gebru's colleagues have drafted this petition: https://googlewalkout.medium.com/ ... #IStandWithTimnit
2020-12-07 View on X
MIT Technology Review

A summary of the draft paper co-authored by Timnit Gebru, which outlined the main risks of large language AI models and provided suggestions for future research

The company's star ethics researcher highlighted the risks of large language models, which are key to Google's business.

The Guardian

1,200+ Google employees and 1,500+ academic, industry, and civil supporters have signed a petition condemning the termination of AI scientist Timnit Gebru

More than 1,500 researchers also sign letter after Black expert on ethics says Google tried to suppress her research on bias

Reuters

In an internal email, Google's head of AI says that Gebru threatened to resign unless she was told which colleagues deemed her draft paper as unpublishable

2019-10-20
Great article by @_KarenHao that shows why predictive algorithms don't make the judicial process more fair - and why we need impact assessments 👍 (nice to see shout outs to @ruha9 and our policy director Rashida Richardson) https://twitter.com/...
2019-10-20 View on X
MIT Technology Review

A game made with a real world dataset of defendants shows the shortcomings of COMPAS, an AI-powered risk assessment tool used in the US criminal legal system

The US criminal legal system uses predictive algorithms to try to make the judicial process less biased. But there's a deeper problem.