2024-03-17
I am finally forced to take the time to give multimodal a serious look and figure out how it works. One day, relatively soon, people will be saying, "Your AI used to not understand images? That's crazy!" — the way today's youth can't comprehend life before cell phones.
VentureBeat
Apple researchers detail MM1, a series of multimodal LLMs with up to 30B parameters they say achieve state-of-the-art performance across multiple AI benchmarks
Apple researchers have developed new methods for training large language models on both text and images, enabling more powerful …
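For my own notes on what "training on both text and images" generally means in this family of models: a vision encoder turns an image into patch features, a small learned "connector" projects those features into the LLM's token-embedding space, and the projected image tokens are concatenated with the text embeddings into one sequence for the decoder. The sketch below is a generic illustration of that pattern, not Apple's actual MM1 architecture; all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not MM1's real config)
d_vision = 1024   # width of vision-encoder output features
d_model = 4096    # width of the LLM's token embeddings
n_patches = 16    # image patches coming out of the vision encoder
n_text = 8        # text tokens in the prompt

# Stand-ins for the outputs of a vision encoder and the LLM's embedder
patch_features = rng.standard_normal((n_patches, d_vision))
text_embeddings = rng.standard_normal((n_text, d_model))

# The "connector": a learned linear projection that maps vision features
# into the same embedding space the LLM's text tokens live in
W_proj = rng.standard_normal((d_vision, d_model)) / np.sqrt(d_vision)
image_tokens = patch_features @ W_proj          # shape (n_patches, d_model)

# One combined sequence — image tokens, then text — fed to the decoder
sequence = np.concatenate([image_tokens, text_embeddings], axis=0)
print(sequence.shape)  # (24, 4096)
```

Once everything lives in the same embedding space, the decoder attends over image and text positions identically; the interesting engineering (per the MM1 coverage) is in what data mix and connector design make that actually work well.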
2023-11-26
https://www.theatlantic.com/ ... Excellent article @chumpchanger
The Atlantic
As safety demands strip Llama 2, ChatGPT, and others of anything remotely controversial, some programmers are building uncensored LLMs without safety guardrails
A chatbot that can't say anything controversial isn't worth much. Bring on the uncensored models.
2023-11-06
@AB_StateSpeed I'm excited because Meta didn't publish llama-2-34b, so there's currently a big hole in the ~30-40B size bracket. I'm hopeful that Yi might fill that gap. The proof is in the performance of the fine-tuned models.
Bloomberg
Chinese startup 01.AI, launched in March 2023 by computer scientist Kai-Fu Lee, reaches a $1B+ valuation and releases its AI model Yi-34B in Chinese and English
The first public release contains two bilingual … Pandaily: Kai-Fu Lee's 01.AI Releases A Large-scale Model Yi-34B Worth Over $1 Billion