If you survey the frontier of online learning, it looks heavily weighted toward the science, technology, engineering and math (STEM) side. The big MOOC platforms got rolling with courses in Circuits and Electronics, Artificial Intelligence and Robotics, and Machine Learning, respectively. Khan Academy started with math videos, and its interactive platforms heavily emphasize math. The National Center for Academic Transformation’s current work with blended course redesign is focused entirely on remedial college math. The Open Learning Initiative, one of the most venerable efforts in adaptive learning science, has 18 courses, just 4 of which fall outside STEM disciplines.
It’s entirely natural that technologists would use technology to create products that instruct future technologists. But while there are plenty of offerings for promoting basic literacy in K-12, the scarcity of learning applications developed “natively” by teachers with humanities backgrounds contributes to a pervasive doubt about the value of blending technology into non-math, non-science classes.
The problem is basic. Once you get beyond the mechanics of the alphabet, spelling, grammar and sentence structure in English and foreign languages, the humanities do not easily lend themselves to multiple choice or other machine-readable grading procedures. (Which reading of Robert Frost was more insightful? Who gave a better account of the factors leading to the Civil War?) Among the proposed alternatives, so-called “robograding” of written work gets people pretty upset. So does peer grading, which no one, to my knowledge, has proposed for schoolchildren.
The problem becomes even greater when you start to think about the potential impact of data analytics in education. Put simply, this means designing smart computer platforms that get smarter as students use them to learn. What the Open Learning Initiative calls multiple “feedback loops” are put into play when teachers can draw on a dashboard of students’ progress to customize their learning services, and when course designers use their knowledge of the students’ interaction with the platform to make the platform better at teaching them, and other students, in the future. In this video, for example, one of Khan Academy’s data scientists maps a literal “learning curve” of a student’s understanding of fractions, and then shows how enhanced attention to presenting problems at the correct level of difficulty speeds up the acquisition of that particular skill.
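To make the “feedback loop” idea concrete, here is a minimal sketch of how such a platform might work internally: a running estimate of a student’s mastery is updated with each answer, and the next problem is chosen to match that estimate. All names, numbers, and formulas here are illustrative assumptions on my part, not the actual model used by Khan Academy, the Open Learning Initiative, or anyone else.

```python
class SkillTracker:
    """Hypothetical mastery model: an exponential moving average of
    correctness, standing in for a real learning-science model."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha    # how quickly the estimate reacts to new answers
        self.mastery = 0.5    # start at "unknown": 50% expected success

    def record(self, correct):
        # The feedback loop: each answer nudges the mastery estimate
        # up (if correct) or down (if not).
        observed = 1.0 if correct else 0.0
        self.mastery = (1 - self.alpha) * self.mastery + self.alpha * observed


def pick_problem(problems, mastery):
    # Serve the problem whose difficulty (0 = easy, 1 = hard) sits closest
    # to the current mastery estimate, keeping the challenge level right.
    return min(problems, key=lambda p: abs(p["difficulty"] - mastery))


# Illustrative usage with a tiny, made-up problem bank.
bank = [
    {"id": "a", "difficulty": 0.2},
    {"id": "b", "difficulty": 0.5},
    {"id": "c", "difficulty": 0.8},
]
tracker = SkillTracker()
tracker.record(True)   # student answers correctly twice in a row...
tracker.record(True)
next_problem = pick_problem(bank, tracker.mastery)  # ...so serve a harder one
```

The same loop, run across many students, is what lets course designers see which problems move the learning curve fastest, which is the second, slower feedback loop the Open Learning Initiative describes.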
The idea of putting this kind of insight into the learning process, and these tools for efficiency, in the hands of every teacher in the nation is an exciting and powerful one. But the accuracy of the computer’s data, and thus the integrity of any modeling built on top of it, depends on the strength of the underlying relationship between questions and answers.
The dangers here are twofold: First, that research done primarily on STEM learning, which is solidly machine-gradeable, will be overgeneralized to humanities disciplines, where it may not apply.
And second, the opposite: that technology will be left out of the humanities equation altogether; that as the learning science of STEM subjects advances, we’ll continue teaching history, civics, literature and writing in the same old ways.