MLCommons and Hugging Face release Unsupervised People's Speech, a dataset for AI research containing more than 1M hours of audio spanning at least 89 languages
Kyle Wiggers / TechCrunch:
MLCommons, a nonprofit AI safety working group, has teamed up with AI dev platform Hugging Face to release …
MLCommons, a nonprofit that helps companies measure their AI systems' performance, debuts the AILuminate benchmark featuring 12K+ prompts to assess LLMs' safety
MLCommons provides benchmarks that test the abilities of AI systems. It wants to measure the bad side of AI next.
MLCommons shares results from its MLPerf 4.0 training benchmarks, which added Google's and Intel's AI accelerators; Nvidia H100 GPUs topped all nine benchmarks
For years, Nvidia has dominated many machine learning benchmarks, and now there are two more notches in its belt.
MLCommons shares the results from its MLPerf 4.0 inferencing benchmarks, which added Llama 2 70B and Stable Diffusion XL; PCs with Nvidia GPUs came out on top
no Blackwell submissions yet, sorry
Karl Freund / Forbes: Nvidia Sweeps AI Benchmarks While AMD Misses The Boat. Again.
Intel: Intel Gaudi 2 Remains Only Benchmarked Alternative to NV H100 for GenAI...
MLCommons' MLPerf benchmark, based on a 6B-parameter LLM that summarizes CNN articles: Nvidia's H100s perform best, followed by Intel's Gaudi2 at ~10% slower
An artificial intelligence benchmark group called MLCommons unveiled the results on Monday of new tests that determine how quickly top-of-the-line hardware can run AI models.