A look at the more challenging AI evaluations emerging in response to the rapid progress of models, including FrontierMath, Humanity's Last Exam, and RE-Bench
Despite their expertise, AI developers don't always know what their most advanced systems are capable of, at least not at first.