There’s a certain kind of writing assignment that has never gone out of style in some classrooms; the rise of AI gives teachers a chance to stop inflicting it on students and turn their attention to more important things.
The shake and bake assignment, sometimes called a research paper, asks students to collect some sources about a topic and mush them together. Get some sources, shake them up, and bake the result into submittable format. The final product may suffer from misunderstandings, mistranscriptions of the sources, or simple made-up baloney that the student has added to pad out the length, but you can hand it in.
What was the student supposed to get out of the assignment? Practice citing sources? Deeper understanding of the topic? Skill in disguising obvious plagiarism? The educational value of the shake and bake was never great, but technology has reduced it further, because AI software can shake and bake like nobody’s business.
The problem is not simply that the AI can do the assignment for students. The issues run deeper.
Google’s Gemini is only the most recent example to find itself in trouble. Gemini is the next logical step in truncating the research process, moving from “Google, find me a bunch of sources about widgets” to “Google, find me a bunch of sources about widgets and then sum up what they all say.” Search, shake, bake, and hand over the result.
It’s not just that this takes an assignment that had minimal educational value to begin with and sucks the last edu-molecules from it. It’s not even that Gemini, like most AI programs, has a tendency to just make stuff up. It’s the problems with judgment.
Many AI critics have noted a recurring pattern: AI hype-men reframe and simplify the tasks that people actually do down to the level of what AI can complete, then claim success, rather than acknowledging the complexity of creative work that AI still can’t hope to manage.