Make it So Easy a Kid Can Learn It

by Justin Skycak (@justinskycak)

Want to get notified about new posts? Join the mailing list and follow on X/Twitter.

One of the most useful experiences I'm drawing on while building out our Machine Learning course is having taught this material for years to advanced 15-year-olds (who had the prerequisite math/coding background).

There’s a certain level of chaos that’s introduced into the learning process when you try to teach a 15-year-old something that’s usually reserved for college students or adults.

If they have the prerequisite knowledge then you can succeed in teaching them, but you also have to be prepared for all the sorts of knucklehead kid mistakes they’re going to make along the way.

For instance…

  • you tell them to use learning rate alpha=0.01 in a gradient descent problem
  • they just interpret that as "use a small positive number"
  • they forget what exact value you said (even though you wrote it down in the problem statement)
  • they use alpha=0.1
  • their gradient descent algorithm goes off the rails because the learning rate is too high
  • they're banging their head trying to debug their algorithm when it's not even an algo-logic issue, it's a parameter issue
  • you check in on them and they tell you that "gradient descent doesn't work" or "Python is broken"
  • you point to the alpha=0.01 in the problem statement and the alpha=0.1 in their code
  • learning happens
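To make that failure mode concrete, here's a minimal sketch, not from the original anecdote: the quadratic loss f(x) = 15x² is my own choice, picked so that the divergence threshold falls between the two learning rates in the story.

```python
def gradient_descent(grad, x0, alpha, steps=50):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= alpha * grad(x)
    return x

# Gradient of the (assumed) loss f(x) = 15x^2, minimized at x = 0.
grad = lambda x: 30 * x

x_good = gradient_descent(grad, x0=1.0, alpha=0.01)  # each step multiplies x by 1 - 0.3 = 0.7
x_bad = gradient_descent(grad, x0=1.0, alpha=0.1)    # each step multiplies x by 1 - 3 = -2
```

With alpha=0.01 the iterates shrink by a factor of 0.7 per step and converge toward the minimum; with alpha=0.1 each step multiplies x by -2, so the iterates explode, which is exactly the "gradient descent doesn't work" symptom the student reports.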

The learning outcome here is good, but there are two issues:

  1. it's inefficient -- the student is spending a long time banging their head against the wall before getting help
  2. if you're trying to design an automated learning system at scale you're not going to be able to provide that ultra-specific help

There’s a really elegant way to solve both of these issues: just go over the common failure modes beforehand!

Don’t wait for students to implement the algorithm and then fall into every single pothole there is. Give them a pep talk beforehand:

“OK, now that you guys know how gradient descent works and you’ve chugged through a couple iterations by hand, let’s go over all the common failure modes where the algorithm goes wrong, so that you can steer clear of these while coding it up, and if and when your code breaks, you’ll have some idea of where to look for debugging.”

And don’t just talk about the failure modes, really drill them into the students. They need practice inferring and diagnosing failure modes just like they need practice working out a couple iterations of the algorithm by hand.

  • Tell me if/how the algorithm will break if I change some specific param to some specific value that may or may not be problematic.
  • Tell me what might be happening if the algorithm breaks in a specific way, producing some specific weird result.
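One hypothetical drill in that style (the specific loss and parameter values here are mine, not the post's): predict what happens to gradient descent on f(x) = 15x² if the sign of alpha is flipped, then run it to check.

```python
def gradient_descent(grad, x0, alpha, steps=50):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= alpha * grad(x)
    return x

grad = lambda x: 30 * x  # gradient of the assumed loss f(x) = 15x^2

# Drill: alpha = -0.01 turns every step into gradient *ascent*,
# so x moves away from the minimum instead of toward it.
x_ascent = gradient_descent(grad, x0=1.0, alpha=-0.01)
```

The point of the drill is the prediction, not the run: before executing anything, the student should be able to say "negative alpha means we step up the gradient, so the iterates will climb away from the minimum."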

If you can scaffold the content so well that it creates a smooth, efficient learning experience for knucklehead kids, it’s going to feel even smoother for more conscientious adults.

