2025-03-18
@MistralAI Including a base model with Mistral Small 3.1 is huge. Can't wait to see what @NousResearch, @cognitivecompai and others do with it. Brings back the good ol' Mistral 7B memories. [image]
VentureBeat
Mistral debuts Mistral Small 3.1, a 24B-parameter multimodal and multilingual open-source model it says outperforms Gemma 3 and GPT-4o-mini and runs on 32GB RAM
SOTA. Multimodal. Multilingual. Apache 2.0. Hugging Face: Mistral-Small-3.1-24B-Base-2503 (Mistral AI model card)
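Quick note to self on trying the base checkpoint: a minimal sketch, assuming the repo id below matches the Hugging Face card above and that the weights load through the standard transformers causal-LM path (the multimodal parts may need a different loader, and bfloat16 weights for 24B won't fit the quoted 32GB RAM without quantization).

```python
# Sketch: plain text completion with the Mistral Small 3.1 base model.
# Assumptions: repo id "mistralai/Mistral-Small-3.1-24B-Base-2503" (from the
# card title), and that AutoModelForCausalLM can load it; quantize or offload
# if you actually want to stay near 32GB of memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Small-3.1-24B-Base-2503"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # ~48GB of weights at bf16; quantize to fit less RAM
    device_map="auto",
)

prompt = "Mistral Small 3.1 is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```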