An ML Model is a Decent First-Order Approximation of a Human Learner

by Justin Skycak (@justinskycak) on

Want to get notified about new posts? Join the mailing list and follow on X/Twitter.


An ML model is a decent first-order approximation of a human learner.

Even a human learner needs to incrementally update their model params / mental schema while running through a crap-ton of good training examples with feedback, and pre-train on sub-tasks before moving on to more advanced tasks.
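To make the analogy concrete, here's a minimal sketch (a hypothetical toy setup, not anything from the post) of what "incrementally update your model params while running through training examples with feedback" looks like on the ML side: a one-parameter learner nudging its weight toward the true relationship y = 3x, one example at a time.

```python
def train(examples, lr=0.1, epochs=50):
    """Incrementally update a single parameter w using per-example feedback."""
    w = 0.0  # initial "mental model": knows nothing
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            error = pred - y      # feedback: how wrong was this attempt?
            w -= lr * error * x   # small incremental correction
    return w

# lots of reps on good examples of y = 3x
examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(examples)
# w ends up close to 3 -- the "schema" converges through repetition + feedback
```

Each pass is the same loop a student runs: attempt, compare against feedback, adjust slightly, repeat.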

Many failure modes of ML models map directly onto failure modes of human learners. For instance:

  • You can create spurious connections in a learner's head by training them on a data set containing spurious patterns, just like you can fake out an ML model by feeding it data with spurious correlations.
  • Running behaviors/algorithms that make inefficient use of available data / compute power leads to underperformance (e.g., getting a question wrong and moving on to the next question instead of drilling down into the explanation to find the issue).
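The spurious-correlation failure mode in the first bullet can be demonstrated in a few lines. In this hypothetical toy example, a two-feature linear model is trained on data where a spurious feature always happens to equal the label; the model splits its credence between the real feature and the spurious one, and then underperforms when the correlation breaks at test time.

```python
def train(examples, lr=0.1, epochs=100):
    """Train a two-weight linear model y_hat = w[0]*f1 + w[1]*f2 by SGD."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for (f1, f2), y in examples:
            pred = w[0] * f1 + w[1] * f2
            err = pred - y
            w[0] -= lr * err * f1
            w[1] -= lr * err * f2
    return w

# training set: the spurious feature (f2) always equals the label,
# so it looks exactly as predictive as the real feature (f1)
train_set = [((1.0, 1.0), 1.0), ((-1.0, -1.0), -1.0), ((2.0, 2.0), 2.0)]
w = train(train_set)
# by symmetry the credit splits: w is roughly [0.5, 0.5]

# test example where the spurious correlation breaks (f2 = 0):
pred = w[0] * 1.0 + w[1] * 0.0
# pred is roughly 0.5 instead of 1.0 -- the model leaned on the fake pattern
```

A student drilled on practice problems where a surface cue always co-occurs with the right answer latches onto the cue the same way, and gets burned when the exam drops it.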

Also, hardware matters. Different students move at different paces depending (in part) on what kind of (biological) hardware they're running under the hood. (E.g., working memory capacity.)

Note: Obviously the human brain is way more sophisticated than any ML model in existence, and optimizing human learning is more complicated than optimizing an ML model. I'm just saying there are lots of parallels to be drawn.

Some differences:

  • An ML model always seeks to maximize performance. That's its objective function. However, most students are going to continually minimize their effort subject to the standards they're held to.
  • Unlike ML models, human learners have complex emotional states that affect their motivation to put forth effort (whereas the amount of compute power that a GPU puts forth on a daily basis is much more stable).


