2023-10-19
As the capabilities of foundation models wax, *transparency* wanes. How do we quantify transparency? We introduce the Foundation Model Transparency Index (FMTI), evaluating 10 foundation model developers on 100 indicators. https://crfm.stanford.edu/fmti/ [image]
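The headline percentages follow directly from the indicator counts: a minimal sketch, assuming each of the 100 FMTI indicators is scored as satisfied or not and a developer's score is the fraction met (the function name and example data are illustrative, not from the index itself).

```python
def fmti_score(indicators: list[bool]) -> float:
    """Return the percentage of transparency indicators a developer satisfies."""
    return 100 * sum(indicators) / len(indicators)

# Hypothetical developer meeting 54 of 100 indicators, matching Llama 2's 54%.
example = [True] * 54 + [False] * 46
print(fmti_score(example))  # 54.0
```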
New York Times
Stanford unveils the Foundation Model Transparency Index, featuring 100 indicators; Llama 2 led at 54%, GPT-4 placed third at 48%, and PaLM 2 took fifth at 40%
https://www.nytimes.com/... [image] Mark Coggins / @coggins@mastodon.social : This is the kind of needed AI regulation—requiring model makers to reveal how they trained their lang...
Open developers (Meta, Hugging Face, Stability) are more transparent: all score in the top 4 and well above the average. Much of that margin comes from greater upstream transparency. Closed developers exert more control over downstream use, but that control has not translated into transparency. [image]