VOICE ARCHIVE

Mark Fallon

@glynco
2 posts
2025-01-14
Moreover, researchers have found that people using AI tools can succumb to “automation bias,” a tendency to blindly trust decisions made by powerful software, ignorant to its risks and limitations. https://www.washingtonpost.com/ ...
Washington Post

An investigation finds 15 police departments across 12 US states have arrested suspects identified through facial recognition without having any other evidence

“law enforcement agencies across the nation are using the [AI] tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evi...

2023-11-15
In 2019, the National Institute of Standards and Technology published a study revealing that many facial-recognition systems falsely identified Black and Asian faces between ten and a hundred times more frequently than Caucasian ones. | The New Yorker https://www.newyorker.com/...
New Yorker

A look at wrongful US arrests due to false positive facial recognition matches, and how “automation bias” can lead the police to ignore contradictory evidence

Eyal Press / New Yorker