VOICE ARCHIVE

Michael Black

@michael_j_black
9 posts
2022-11-19
I asked #Galactica about some things I know about and I'm troubled. In all cases, it was wrong or biased but sounded right and authoritative. I think it's dangerous. Here are a few of my experiments and my analysis of my concerns. (1/9)

Linked coverage, MIT Technology Review: "Meta AI and Papers with Code pull Galactica three days after launch, amid criticism that the large language model for generating scientific text asserts falsehoods ... and its hubris—show once again that Big Tech has a blind spot about the severe limitations of large language models." https://www.technologyreview.com/ ...

I entered “Estimating realistic 3D human avatars in clothing from a single image or video”. In this case, it made up a fictitious paper and associated GitHub repo. The author is a real person (@AlbertPumarola) but the reference is bogus. (2/9) https://twitter.com/...

Then I tried “Accurate estimation of body shape under clothing from an image”. It produced an abstract that is plausible but refers to Alldieck et al., “Accurate Estimation of Body Shape Under Clothing from a Single Image”, which does not exist. (3/9) https://twitter.com/...

.@thiemoall publishes in the area (excellent work, BTW), so it's on the right track, but it has made up this reference. Based on these few tests, I think #Galactica is 1) an interesting research project, 2) not useful for doing science (stick with Wikipedia), 3) dangerous. (4/9)

Why dangerous? Galactica generates text that's grammatical and feels real. This text will slip into real scientific submissions. It will be realistic but wrong or biased. It will be hard to detect. It will influence how people think. (5/9)

It offers authoritative-sounding science that isn't grounded in the scientific method. It produces pseudo-science based on statistical properties of science *writing*. Grammatical science writing is not the same as doing science. But it will be hard to distinguish. (6/9)

This could usher in an era of deep scientific fakes. Alldieck and Pumarola will get citations to papers they didn't write. These papers will then be cited by others in real papers. What a mess this will be. (7/9)

I'm sure the authors are aware of the dangers. Every generation comes with the fine print “WARNING: Outputs may be unreliable! Language Models are prone to hallucinate text.” But Pandora's box is open and we won't be able to stuff the text back in. (8/9)

I applaud the ambition of this project but caution everyone about the hype surrounding it. This is not a great accelerator for science or even a helpful tool for science writing. It is potentially distorting and dangerous for science. (9/9)