As innumerable companies push AI products and practices toward the classroom, a new publication lays out five reasons why cautiously pumping the brakes might be wise.
“Education Hazards of Generative AI” was co-authored by Paul Bruno (assistant professor of education policy, organization and leadership in the College of Education, University of Illinois at Urbana-Champaign) and Benjamin Riley (founder and CEO of Cognitive Resonance).
Said Riley, via email, “I see mounting evidence that concerns me that states and school districts are pumping out AI-related policy documents that assume teachers should integrate AI into their instruction, and that assume doing so will necessarily lead to important educational benefits.”
Riley and Bruno do not reject the use of AI in the classroom, but they do offer five specific caveats.
1) Understand what large language models are actually designed to do.
LLMs do not “understand” or “read.” They treat text as a sequence of tokens and, based on patterns in their training data, compute a probability for every token that might come next. They do not “understand” educational goals, nor can they plan a sequence of lessons that builds a desired learning scaffold. And because they lack actual understanding, they can easily produce statements that are incorrect.
As Tom Mullaney, a K-12 consultant and speaker, pointed out on X, “when LLMs ‘hallucinate’ they do nothing different from when they generate text we deem accurate.”
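To make that point concrete, here is a minimal sketch of next-token prediction using Hugging Face’s transformers library and the small, open GPT-2 model. The model choice and prompt are illustrative assumptions, not anything referenced in the report; larger commercial models differ in scale, not mechanism.

```python
# Illustrative sketch only: GPT-2 stands in for the larger models
# behind commercial AI products, but the mechanism is the same.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire output at this step is a probability distribution
# over its ~50,000-token vocabulary -- the same computation whether the
# continuation turns out to be accurate or a "hallucination."
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id)!r}: {prob.item():.3f}")
```

Whether the highest-probability continuation happens to be “ Paris” or something false, the model performs the identical calculation, which is exactly Mullaney’s point.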