Failure Modes Shared Between Human Learners and Machine Learning Models

by Justin Skycak (@justinskycak)

Bad / insufficient / non-comprehensive training data, inability to fit new data that's too different from the current representation, lack of compute power, running behaviors/algorithms that make inefficient use of available data / compute power.


At a fundamental level, a human learner is pretty similar to a machine learning model.

On one hand, human learners have more sophisticated models and optimization algorithms running under the hood (this sentence is doing a lot of heavy lifting), and emotions are heavily factored into their optimization algorithms.

On the other hand, you still need to have them incrementally update their model params / mental schema while running through a crap-ton of good training examples with feedback, and you still need to pre-train them on sub-tasks before moving on to more advanced tasks.
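To make that concrete, here's a minimal sketch of the machine side of the analogy: a toy linear model whose params get nudged a tiny bit on every training example, using the feedback (the error) to steer each update. The data, learning rate, and epoch count are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the "true" relationship the learner is trying to internalize.
X = rng.uniform(-1, 1, size=(500, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=500)

# Model parameters (the "mental schema") start out rough...
w, b = 0.0, 0.0
lr = 0.05

# ...and get nudged a little on every worked example with feedback.
for epoch in range(20):
    for xi, yi in zip(X[:, 0], y):
        pred = w * xi + b
        error = pred - yi      # the "feedback"
        w -= lr * error * xi   # incremental update, not a wholesale rewrite
        b -= lr * error

print(f"learned w={w:.2f}, b={b:.2f}  (true w=3.0, b=0.0)")
```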

You can create spurious connections in a learner’s head by training them on a data set with spurious patterns, just like you can fake out an ML model by giving it data with spurious correlations.
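Here's a rough sketch of that fake-out (toy data, made-up numbers): a tiny logistic regression is trained on a set where a meaningless feature happens to track the label perfectly, so it leans on that shortcut and falls apart once the shortcut stops holding.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, spurious_matches_label):
    labels = rng.integers(0, 2, size=n)
    signs = 2 * labels - 1                           # -1 / +1
    real = signs + 1.5 * rng.normal(size=n)          # genuinely predictive, but noisy
    if spurious_matches_label:
        spurious = signs.astype(float)               # perfectly tracks the label...
    else:
        spurious = rng.choice([-1.0, 1.0], size=n)   # ...until it doesn't
    return np.column_stack([real, spurious]), labels

X_train, y_train = make_data(2000, spurious_matches_label=True)
X_test, y_test = make_data(2000, spurious_matches_label=False)

# Tiny logistic regression trained by gradient descent.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X_train @ w))
    w -= 0.1 * X_train.T @ (p - y_train) / len(y_train)

def accuracy(X, y):
    return np.mean(((X @ w) > 0) == y)

print("weights (real, spurious):", np.round(w, 2))  # the spurious shortcut carries most of the weight
print("train accuracy:", accuracy(X_train, y_train))
print("test accuracy: ", accuracy(X_test, y_test))  # drops sharply once the correlation breaks
```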

Different students move at different paces depending on what kind of hardware and optimization algorithms they’re running under the hood.

What I’m trying to get at here is that the failure modes of ML models map directly onto the failure modes of human learners, e.g.:

  • bad / insufficient / non-comprehensive training data,
  • inability to fit new data that's too different from the current representation (see the sketch after this list),
  • lack of compute power (working memory issues),
  • running behaviors/algorithms that make inefficient use of available data / compute power (e.g., getting a question wrong and then going on to the next question instead of drilling down in the explanation to find the issue).
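
To sketch the second failure mode above (again with made-up toy data): a model whose representation is just a straight line, fit on a narrow slice of inputs, does fine on that slice but can't bend to fit inputs far outside it.

```python
import numpy as np

rng = np.random.default_rng(2)

# The model's "current representation" is a straight line, fit only on x in [0, 1],
# where the true relationship y = x^2 happens to look nearly linear.
x_train = rng.uniform(0, 1, size=200)
y_train = x_train ** 2

slope, intercept = np.polyfit(x_train, y_train, deg=1)

# On familiar inputs the line is roughly right...
print("prediction at x=0.5:", slope * 0.5 + intercept, "| true:", 0.25)

# ...but on inputs far outside what it was trained on, it's badly wrong,
# because the representation itself (a line) can't bend to fit the new data.
print("prediction at x=4.0:", slope * 4.0 + intercept, "| true:", 16.0)
```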

