Recursion Is The Next Scaling Law In AI

Categories: VC, Startup, Design

Summary

A 7-million-parameter model outperforming models a thousand times its size on benchmarks like the ARC Prize: that's what recursive reasoning unlocks. In this episode of Decoded, YC's Ankit Gupta and Francois

Transcript Excerpt

Welcome back to another episode of Decoded. Today, I'm back with YC visiting partner Francois Shaard to talk about one of the most interesting recent trends in AI research: recursion. Specifically, we're going to talk about how we can improve a model's reasoning performance by using recursion at inference time, rather than by just making the model bigger and bigger. Two papers made the power of this approach really clear in 2025: one on hierarchical reasoning models, or HRM, and another on tiny recursive models, or TRM. Francois, thanks for joining us. Um, can you tell us a little bit about these two models and what was so interesting about them?

>> Sure. I guess, um, to set up a little bit of a foundation, uh, you already did an amazing lecture on RNNs and LLMs in one of the previous …
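The core idea discussed here — reusing one small network recursively at inference time instead of growing the parameter count — can be sketched in a few lines. This is a hypothetical illustration, not the actual HRM or TRM code: a single tiny weight matrix is applied repeatedly to refine a latent "answer" state, so extra reasoning comes from more iterations, not more parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # width of the latent state and input embedding (illustrative)

# One small, shared weight matrix reused at every recursion step.
# Weight tying across steps is what keeps the model tiny.
W = rng.normal(0, 0.1, size=(2 * D, D))

def step(z, x):
    """One recursion step: refine the latent answer z given input x."""
    h = np.tanh(np.concatenate([z, x]) @ W)
    return z + h  # residual update to the latent state

def recursive_infer(x, n_steps=8):
    """Run the same tiny network n_steps times at inference."""
    z = np.zeros(D)
    for _ in range(n_steps):
        z = step(z, x)  # identical weights every iteration
    return z

x = rng.normal(size=D)
shallow = recursive_infer(x, n_steps=2)
deep = recursive_infer(x, n_steps=16)
# Deeper recursion spends more inference compute with the same
# parameter count — the trade the transcript contrasts with
# "making the model bigger and bigger."
```

The function names, dimensions, and update rule above are assumptions chosen for brevity; the real papers use trained transformer-style blocks and learned halting, but the scaling knob — iterations instead of parameters — is the same.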