This awesome article about the Khan Academy combines two of my passions: transforming education through technology and using data to predict human action.
Khan Academy does this by tracking hundreds of data points that describe, in numbers, the entire history of the relationship between a learner and a concept.
“If [a user is] logged in, then we have the entire history of every problem they’ve done, and how long it took them, and how they did,” says Ben Kamens, the lead developer at Khan Academy. “So whenever anybody does a problem, we see whether they got it right or wrong, how many tries it took them, what their guess was, what the problem was, how many hints they used, and how long they took between each hint.”
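The per-interaction record Kamens describes could be sketched as a simple data structure. The field names below are illustrative assumptions drawn from his list of data points, not Khan Academy's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemAttempt:
    """One learner-problem interaction, mirroring the data points
    Kamens lists (field names are hypothetical)."""
    user_id: str
    exercise: str                 # which type of problem it was
    correct: bool                 # whether they got it right or wrong
    tries: int                    # how many tries it took
    first_guess: str              # what their guess was
    hints_used: int               # how many hints they used
    seconds_between_hints: list = field(default_factory=list)
    total_seconds: float = 0.0    # how long the whole attempt took
```

A stream of such records per logged-in user is all the raw material the predictive models described below would need.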
The Khan engineers are also working to tweak the exercise platform so it does not confuse genuine mastery with “pattern matching” — a method of problem-solving wherein a student mechanically rehashes the steps necessary to solve that type of problem without necessarily grasping, conceptually, what those steps represent.
Pattern-matching is one of the human brain’s most basic learning tools, Kamens says. It is the sort of useful imitation that allows toddlers to learn how to use language without first learning how grammar works. But there is a difference between imitating problem-solving procedures and mastering the logic undergirding those procedures, Kamens says. Getting to that level of understanding, he says, is probably what determines whether students will remember how to solve a problem after the test is over, after a course is over, and — most importantly, in Khan’s view — once their formal schooling is over.
Khan has half-joked that his ideal assessment model would have professors ambush students in the hallways with random questions months after an exam, then revise their scores based on whether they’ve kept their chops. At Khan Academy, that half-joke is half-real. At a time when students are always within arm’s reach of a computer and a wireless signal, “mechanic practice schedulers” can spring questions on students at intervals to gauge how well they remember how to solve certain types of problems. This would let Khan’s team collect data on how well students retain their command of different concepts, which in turn would let them look back at students’ original interactions with those concepts and try to spot variables that correlate with long-term retention.
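A minimal version of such a scheduler might use expanding review intervals, doubling the gap after each successful recall and resetting after a failure. This is a common spaced-repetition heuristic, sketched here as an assumption; the article does not describe Khan Academy's actual scheduling rule:

```python
def next_review_gap(days: float, remembered: bool) -> float:
    """Expanding-interval heuristic: double the gap after a successful
    recall, reset to one day after a failure."""
    return days * 2 if remembered else 1.0

# A student who keeps remembering sees reviews at 2, 4, 8, ... days;
# one lapse pulls the next review back to tomorrow.
gap = 1.0
schedule = []
for remembered in [True, True, True, False, True]:
    gap = next_review_gap(gap, remembered)
    schedule.append(gap)
# schedule -> [2.0, 4.0, 8.0, 1.0, 2.0]
```

Each review outcome is itself a new data point, which is what makes the retention analysis described above possible.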
“We have already built some internal models that incorporate memory/forgetting over time into the predictions,” says Kohlmeier. “We’ll continue improving them.”
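One classic way to fold memory and forgetting into predictions is an exponential decay of recall probability over time, after the Ebbinghaus forgetting curve. This sketch is illustrative only; it is not the internal model Kohlmeier refers to:

```python
import math

def recall_probability(days_elapsed: float, strength: float) -> float:
    """Ebbinghaus-style forgetting curve: P(recall) = exp(-t / s),
    where s is a memory-strength parameter for this learner-concept
    pair (a hypothetical stand-in for a fitted model parameter)."""
    return math.exp(-days_elapsed / strength)

# The stronger the memory, the slower the predicted decay:
weak = recall_probability(7, strength=3)     # ≈ 0.10 after a week
strong = recall_probability(7, strength=30)  # ≈ 0.79 after a week
```

Fitting a strength parameter per learner and concept from review outcomes is one plausible shape such a model could take.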
The Khan engineers think that randomizing the types of problems posed during these jump-outs might serve as a shibboleth to distinguish between people who have truly mastered concepts, and those who passed the tests by temporarily memorizing a series of rote steps.
This is crucial, Kohlmeier says. When students are doing practice problems in a textbook chapter on quadratic equations, they know each problem can be solved by factorization — even if they cannot say why. But in the world, whether in finance, software or education, the right path to a solution is not always so obvious.
“A big part of real-life problem-solving,” Kohlmeier says, “is recognizing what kind of problem you’re dealing with.”
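The randomization the engineers describe is simple to sketch: drawing the problem type at random forces the student to do the recognition step Kohlmeier names before they can apply any memorized procedure. The exercise names here are made up for illustration:

```python
import random

# Hypothetical problem types from a unit on quadratic equations.
EXERCISES = ["factoring", "completing_the_square",
             "quadratic_formula", "word_problems"]

def surprise_question(rng: random.Random) -> str:
    """Draw a problem type at random, so the student must first
    recognize what kind of problem they are dealing with."""
    return rng.choice(EXERCISES)

rng = random.Random(0)          # seeded for reproducibility
picks = [surprise_question(rng) for _ in range(3)]
```

Students who pattern-matched their way through a chapter of all-factoring problems would stand out quickly against a mixed draw like this.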