[FULL WORKSHOP] AI Coding For Real Engineers - Matt Pocock, AI Hero (@mattpocockuk)

By ai.engineer

Categories: AI, Tools

Summary

LLMs have a 'smart zone' that degrades past roughly 100k tokens—software engineering fundamentals like task decomposition work better with AI than trying to push through context limits. Multi-phase planning prevents the quadratic attention collapse that makes LLMs progressively dumber as conversation length grows.

Key Takeaways

  1. LLMs operate in a 'smart zone' up to ~100k tokens, then degrade due to the quadratic scaling of attention relationships; each new token must relate to every existing token, like adding teams to a football league where every team plays every other.
  2. Break large tasks into small phases that fit within the smart zone rather than pushing single prompts to context limits; avoid the temptation to continuously compact and restart which wastes tokens.
  3. Classic software engineering principles (small, focused tasks, don't bite off more than you can chew) apply directly to AI coding—this isn't a new paradigm but an application of existing best practices.
  4. Multi-phase planning with explicit phase gates is more efficient than reactive looping; recognize when phase breakdowns become repetitive patterns that should be abstracted into generic "phase n" logic.
  5. Recognize 'sediment'—accumulated context degradation from repeated compaction cycles—that compounds token waste and performance loss over long-running AI-assisted projects.
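The takeaways above can be sketched numerically. This is a hypothetical illustration (the function names, the exact 100k budget, and the phase-splitting helper are assumptions, not from the talk): it shows why pairwise attention relationships grow quadratically with context length, and why splitting work into phases that each fit the "smart zone" beats one long conversation.

```python
# Hypothetical sketch: quadratic growth of attention relationships,
# and splitting a task into smart-zone-sized phases.

def attention_pairs(tokens: int) -> int:
    """Pairwise relationships among n tokens: n*(n-1)/2 (quadratic in n)."""
    return tokens * (tokens - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_pairs(n):>14,} pairwise relationships")

# A 10x longer context means ~100x more relationships to maintain,
# which is the intuition behind degradation past the smart zone.

SMART_ZONE = 100_000  # assumed per-phase budget, per the talk's ~100k figure

def split_into_phases(task_tokens: int, budget: int = SMART_ZONE) -> int:
    """Minimum number of phases so each phase fits within the budget."""
    return -(-task_tokens // budget)  # ceiling division

print(split_into_phases(350_000))  # -> 4 phases
```

The phase count here is only a toy model of the real advice (break work at natural task boundaries, not at a fixed token count), but it captures the core trade-off: many small contexts stay in the smart zone, while one large context pays a quadratic attention cost.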

Transcript Excerpt

Yeah, we're good. >> Okay, folks, we're at capacity. Let's kick off. I don't want you waiting here for 25 more minutes before some arbitrary deadline. So, welcome. My name is Matt. I'm a teacher, and I suppose now I teach AI. We have a link up here, if you've not already been to it, which has the exercises for the stuff we're going to do today. This is going to be around two hours, so we might just sort of kick off two hours from now. Is that right, Mike? >> Yeah. Perfect. Um, and the ...