François Chollet: Why Scaling Alone Isn’t Enough for AGI
By Y Combinator
Categories: VC, Startup, Design
Summary
François Chollet argues that scaling alone won't achieve AGI—the industry's multibillion-dollar-per-year bet on deep learning is fundamentally inefficient. His lab Ndea is building 'symbolic descent,' a machine learning alternative that uses minimal data and symbolic models instead of parametric curves, with only 10-15% odds of success but potentially paradigm-shifting returns.
Key Takeaways
- Symbolic descent replaces gradient descent by searching for the simplest symbolic model that explains the data, requiring far less training data while achieving better generalization and composability than deep learning's parametric approaches.
- The minimum description length principle suggests the shortest model is most likely to generalize well—a principle impossible to satisfy with parametric learning but central to symbolic approaches, suggesting deep learning may be inherently limited.
- A contrarian AI bet with low odds (10-15% success) is worth pursuing when success means building something nobody else is working on; the asymmetric payoff makes it a viable founder strategy.
- Program synthesis and symbolic learning operate at a 'much lower level' than coding agents—rebuilding the entire ML stack's foundations rather than adding another layer, creating a structural moat against incremental LLM improvements.
- Everyone racing to scale current LLM approaches creates opportunity for research into alternative paradigms; concentration of capital on one path makes contrarian foundational research underexplored despite being necessary for long-term AI efficiency.
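The search-for-the-shortest-model idea in the takeaways above can be illustrated with a toy sketch. This is not Ndea's actual method—the candidate expression language, the `mdl_fit` function, and the description-length scoring are all hypothetical, invented here to show how a minimum-description-length objective prefers the simplest symbolic model consistent with the data:

```python
# Toy sketch of MDL-style symbolic model selection (hypothetical, not
# Chollet's implementation): enumerate a small language of symbolic
# expressions, keep only those consistent with every data point, and
# return the one with the shortest description.

# Each candidate is (name, description_length, function). The lengths
# here are just token counts, a stand-in for a real description-length
# measure.
CANDIDATES = [
    ("x",         1, lambda x: x),
    ("x + 1",     3, lambda x: x + 1),
    ("2 * x",     3, lambda x: 2 * x),
    ("x * x",     3, lambda x: x * x),
    ("2 * x + 1", 5, lambda x: 2 * x + 1),
    ("x * x + x", 5, lambda x: x * x + x),
]

def mdl_fit(data):
    """Return the shortest symbolic model consistent with all (x, y) pairs."""
    consistent = [
        (name, length)
        for name, length, fn in CANDIDATES
        if all(fn(x) == y for x, y in data)
    ]
    # MDL principle: among consistent models, prefer the minimal description.
    return min(consistent, key=lambda m: m[1])[0] if consistent else None

# Three examples suffice to identify the generating rule y = 2x + 1 —
# far less data than a parametric curve-fit would typically need.
print(mdl_fit([(0, 1), (1, 3), (2, 5)]))  # -> 2 * x + 1
```

The contrast with gradient descent is that nothing here is a parametric curve being nudged toward the data: the search is discrete, the winning model is an exact symbolic expression, and the shortest-description preference is what the takeaways claim drives generalization.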
Topics
- Symbolic Descent Alternative to Gradient Descent
- AGI Timeline 2030 Research
- Program Synthesis Machine Learning
- Minimum Description Length Principle
- Deep Learning Limitations and Alternatives
Transcript Excerpt
I think we're probably looking at AGI around 2030, around the time that we're going to be releasing maybe ARC 6 or ARC 7. You're not going to stop AI progress—I think it's too late for that. And so the next question is: okay, AI progress is here, and it's actually going to keep accelerating. How do you make use of it? How do you leverage it? How do you ride the wave? That's the question to ask. Today we're lucky to be joined by François Chollet, founder of the ARC Prize, a global competitio...