VOICE ARCHIVE

@liquidai_
6 posts
2024-10-01
LFMs are memory efficient: LFMs have a reduced memory footprint compared to transformer architectures. This is particularly true for long inputs, where the KV cache in transformer-based LLMs grows linearly with sequence length. [image]
2024-10-01
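The linear KV-cache growth mentioned in the post can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the layer count, head count, and head dimension are hypothetical example values, not the configuration of any Liquid AI or transformer model discussed here.

```python
# Rough KV-cache sizing for a transformer LLM, showing linear growth with
# sequence length. All model dimensions are made-up illustrative values.

def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=2, batch=1):
    """Memory for keys + values across all layers (fp16 -> 2 bytes/elem)."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem  # K and V
    return batch * seq_len * per_token

for seq_len in (1_024, 8_192, 32_768):
    gib = kv_cache_bytes(seq_len) / 2**30
    print(f"{seq_len:>6} tokens -> {gib:.2f} GiB KV cache")
```

Because the cache stores keys and values for every past token at every layer, doubling the context doubles this cost; architectures with a fixed-size recurrent state avoid that growth.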
VentureBeat

MIT spinoff Liquid AI debuts its non-transformer AI models LFM-1B, LFM-3B, and LFM-40B MoE, claiming they achieve “state-of-the-art performance at every scale”

Liquid AI, a startup co-founded by former researchers from the Massachusetts Institute of Technology (MIT) …

What Language LFMs are not good at today:
- Zero-shot code tasks
- Precise numerical calculations
- Time-sensitive information
- Counting r's in the word "Strawberry"!
Human preference optimization techniques have not yet been applied extensively to our models.
2024-10-01

LFM-1B performs well on public benchmarks in the 1B category, making it the new state-of-the-art model at this size. This is the first time a non-GPT architecture significantly outperforms transformer-based models. [image]
2024-10-01

LFM-3B delivers incredible performance for its size. It not only ranks first among 3B-parameter transformers, hybrids, and RNN models, but also outperforms the previous generation of 7B and 13B models. It is also on par with Phi-3.5-mini on multiple benchmarks, while [image]
2024-10-01

What Language LFMs are good at today:
- General and expert knowledge
- Mathematics and logical reasoning
- Efficient and effective long-context tasks
- A primary language of English, with secondary multilingual capabilities in Spanish, French, German, Chinese, Arabic, Japanese, and
2024-10-01

LFM-40B offers a new balance between model size and output quality. It uses 12B activated parameters at inference. Its performance is comparable to models larger than itself, while its MoE architecture enables higher throughput and deployment on more cost-effective hardware. [image]
2024-10-01
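The "12B activated parameters" figure reflects standard mixture-of-experts arithmetic: per token, a router selects only a few experts, so only their weights (plus the shared layers) participate in the forward pass. The sketch below illustrates this; the expert count, expert size, and shared-parameter count are invented for the example and are not LFM-40B's actual configuration.

```python
# Toy illustration of MoE activated-parameter accounting: a model's total
# parameter count includes all experts, but each token only "activates"
# the shared weights plus the top-k routed experts. Numbers are made up.

def moe_params(shared, expert_size, n_experts, top_k):
    """Return (total, activated-per-token) parameter counts."""
    total = shared + n_experts * expert_size
    active = shared + top_k * expert_size  # only routed experts run per token
    return total, active

total, active = moe_params(shared=4e9, expert_size=4e9, n_experts=9, top_k=2)
print(f"total {total/1e9:.0f}B params, {active/1e9:.0f}B activated per token")
```

This is why a sparse 40B-class MoE can have the per-token compute and memory-bandwidth profile of a much smaller dense model.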