Google and UC Berkeley researchers detail “inference-time search”, which some call a fourth AI scaling law, though experts are skeptical of its usefulness in many cases
But Can It Deliver?

Eric Zhao: Why We Can't Escape Brute-Force Search

Bluesky: Dave Lee / @davelee.me: An AI that adds “but there's reason to be skeptical” to the end of every sentence in a story about AI [embedded post]

X: Ethan Mollick / @emollick: So it looks like there's a third scaling law: you can make models better by training them with more compute, by having them “think” for longer about an answer, or by generating large numbers of answers in parallel and picking good ones. Each might be increased independently.

X: Eric Zhao / @ericzhao28: Thinking for longer (e.g. o1) is only one of many axes of test-time compute. In a new @Google_AI paper, we instead focus on scaling the search axis. By just randomly sampling 200x & self-verifying, Gemini 1.5 ➡️ o1 performance. The secret: self-verification is easier at scale! [image]
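The sampling-and-verification loop Zhao describes can be sketched in a few lines. This is a minimal best-of-N illustration, not the paper's implementation: `generate_candidate` and `self_verify` are hypothetical stand-ins for calls to a real model (in the paper, the same model both generates and verifies), here simulated with random scores.

```python
import random

# Hypothetical stand-in for sampling one answer from a model.
def generate_candidate(question, rng):
    return {"score": rng.random(), "text": f"candidate answer to {question!r}"}

# Hypothetical stand-in for asking the model to verify its own answer;
# modeled as the candidate's latent quality plus noise.
def self_verify(question, candidate, rng):
    return candidate["score"] + rng.gauss(0, 0.1)

def best_of_n(question, n=200, seed=0):
    """Sample n candidates and keep the one the self-verifier scores highest:
    the 'search axis' of test-time compute."""
    rng = random.Random(seed)
    candidates = [generate_candidate(question, rng) for _ in range(n)]
    scored = [(self_verify(question, c, rng), c["text"]) for c in candidates]
    return max(scored)[1]

print(best_of_n("What is 2+2?", n=200))
```

Scaling here means raising `n`: more samples give the verifier more chances to find and recognize a correct answer, independently of how long any single sample “thinks”.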