It appears that Chicago Public Schools have tried to get ahead of the AI juggernaut with an AI guidebook, and it is... not good. It's an exemplar of the kind of wrong-headedness that is evident in so many educational leaders' thoughts about AI.
I've read it so that you don't have to (unless you work for CPS, I guess-- sorry). But let's just sort of skim through, rather than hammer away at every one of the twenty-some pages of silliness that sets out to provide
"guidelines for ethical use, pedagogical strategies, and approved tools" for generative artificial intelligence and "integrating those tools ethically and responsibly."
There's a vision statement, jam-packed with the sort of bureaucratic argle bargle that signals that we're a little fuzzy on what, exactly, the audience is supposed to be. Some of the radishes in this word salad:
pursuit of educational excellence and innovation, organizational operations, instructional core, drive community engagement, strategic adoption, enrich learning environments, success in a continually evolving technological world, steadfastly upholding, leveraging GenAI responsibly, enhance educational outcomes
Lordy.
We then jump into the AI portions of the guidebook, and things are looking bad right off the top.
Artificial Intelligence (AI) leverages computing power to mimic human cognitive functions such as problem-solving and decision-making.
Not really, no. Nor do we get a better explanation for generative AI in particular:
GenAI generates new content—including text, audio, code, images, or videos—based on vast amounts of “training” data, typically derived from the internet.
That's not really a useful answer, is it? Like saying "a piano is an instrument that produces notes"-- it's not wrong, but it completely omits the "how," and omitting the "how" from any discussion of GenAI is a terrible mistake, because if there's anything that is a) super important and b) widely misunderstood, it is how GenAI actually works. Here is a link to three excellent explanations, but the very short answer regarding text is that GenAI strings together a series of probable next words. It does not "understand" anything in any human sense of the word. It is not magical, and it is not smart, and anyone who is going to mess with it must understand those things. Spoiler alert: at no point will this guide clarify any of that. Just "Magic box makes smarty content stuff! Wheee!"
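If you want to see how un-magical "probable next words" really is, here's a toy sketch (my own invented example, not anything from CPS or from any actual model) of the core loop. Real models compute these probabilities with enormous neural networks trained on internet-scale data, but the basic move is the same shape: look at the words so far, pick a likely next word, repeat. Nowhere in the loop is anything you could call understanding.

```python
import random

# Toy "language model": for each word, a made-up table of likely next
# words and their probabilities. Real LLMs learn distributions like
# this (over whole contexts, not single words) from vast training
# data; this table is invented purely for illustration.
NEXT_WORD = {
    "<start>": [("the", 0.7), ("a", 0.3)],
    "the":     [("cat", 0.5), ("dog", 0.3), ("piano", 0.2)],
    "a":       [("cat", 0.6), ("piano", 0.4)],
    "cat":     [("sat", 0.6), ("purred", 0.4)],
    "dog":     [("barked", 1.0)],
    "piano":   [("played", 1.0)],
    "sat":     [("quietly", 1.0)],
    "purred":  [("<end>", 1.0)],
    "barked":  [("loudly", 1.0)],
    "played":  [("<end>", 1.0)],
    "quietly": [("<end>", 1.0)],
    "loudly":  [("<end>", 1.0)],
}

def generate():
    """String together probable next words until the model 'stops'."""
    word, output = "<start>", []
    while word != "<end>":
        choices, weights = zip(*NEXT_WORD[word])
        word = random.choices(choices, weights=weights)[0]
        if word != "<end>":
            output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat sat quietly" -- fluent-sounding, zero understanding
```

That's the whole trick, scaled up a few billion times.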
Next is some guidance for staff on GenAI use. It starts with what ought to already be a basic IT rule for staff--don't feed the AI any private information.
Much of the guidance falls into one of two categories: 1) Can be done, but will be more time-consuming than just generating materials your own damn self, and 2) Cannot be done.
For instance, CPS wants teachers to verify the tool's output. "These systems and their output require vigorous scrutiny and correction." Because the output might include "hallucinations" (aka wrong things the software just made up), the output "requires careful review." But if I'm going to have the computer kick out my lesson materials, and then I am going to spend a whole lot of time doing research and review of those materials, where am I saving time, and wouldn't I be just as far ahead to put the materials together myself in the first place?
Also, CPS wants teachers to "avoid using any GenAI outputs that might contain copyrighted material without clear ownership." You can do this, the guide says, by "examining the work for a copyright notice, considering the type of content and source (i.e. content issued by the US government is generally public), or referring to websites that store public domain works or the Copyright Database." This is bananas. You have no idea what the GenAI has trained on. GenAI companies have gone to great lengths (and face increasing numbers of lawsuits) to avoid letting you know what the AI has trained on. The chance that any content generated by any AI is the result of training on or inclusion of copyrighted materials is somewhere around 99.999%, and the chance that you can confidently check an AI's output for copyrighted material is somewhere around 0.0001%.
CPS warns us that "all models reflect the biases in their training data," but they only warn about this insofar as it might perpetuate stereotypes, discrimination, or DEI problems. All worth considering, but it's also worth mentioning that there are a zillion other biases in the world ("Lincoln was not a great President," "Certain types of data are more valid than others"), and those can also work their way in. CPS suggests:
Conduct thorough reviews to ensure outputs are not only accurate, but also free of unintended biases and align with our educational goals. Verify and assess the source information that GenAI outputs are relying upon.
Again, the first takes a whole lot of time, and the second cannot be done.
Always document the use of AI tools. I'm not sure what this accomplishes, exactly. Are teachers going to hand out worksheets marked "Generated by ChatGPT"? CPS says to make sure that "all GenAI engagements are traceable and accountable," which raises sort of an interesting question-- can GenAI results be traceable when they are not replicable? But throughout the guide, CPS is clear that they want "detailed records of when and how GenAI tools are used." If I were cynical, I might think this was a bit of a CYA paper trail for the district. But hey--maybe you can just have GenAI generate that detailed record for you.
How about academic integrity? CPS says that "students should submit work that is fundamentally their own," whatever that means. But also, "students should clearly identify any AI-generated content they have used in their assignments." This should work about as well as telling students they are obligated to show where they have cut and pasted paragraphs from Wikipedia into their paper.
Don't use AI to "create inappropriate or harmful content." But CPS does have a whole page of "positive GenAI use for students."
Use GenAI as a brainstorming partner. Synthesize a variety of opinions and propose compromise solutions (I think this one is meant to handle your group work). Use GenAI image creators to bring ideas to life. Overcome writer's block by suggesting a variety of ideas and writing prompts. Ask GenAI to propose unconventional solutions to problems. Use GenAI as an interactive tutor. Generate immediate feedback on first drafts of written assignments (but not, I guess, if that makes it not fundamentally your own). Oh, and use generative search engines like Perplexity as a research assistant (that would be the AI that is in trouble for scraping data without permission, which would seem to conflict with some of the ethical concerns CPS has).
CPS also has guidance for educators and staff. There are some restrictions suggested; for instance, Gemini and Copilot, two GenAI programs that pretty much every student has access to, should not be used by students under 18.
As with any new student-facing technology, the introduction of GenAI tools invites educators to consider how GenAI can further the underlying goals of their activities and assignments instead of impeding them.
This is wrong in the way that much ed tech introduction is wrong. Teachers should not be asking how the technology can help-- they should be asking IF the technology can help and whether or not it should be allowed to help.
But, Lord help us, CPS even has some specific suggestions of how certain assignments would go without and with GenAI.
Not all of these are terrible. The elementary science assignment in which students let the AI try to depict an animal and then figure out what the AI got wrong? Not bad at all. Some of them are pointless. The elementary social studies assignment in which students role play as leaders and answer questions--why does the teacher need an AI to generate questions? And how much magical thinking is behind the notion that having AI design interventions for individual students is better than having the teacher do it?
Some are short-circuiting education itself. Like the science assignments that suggest that, instead of having students run experiments, you just have the AI run "virtual" experiments and tell the students how they went. That is not how to have students learn science (but it is how to justify cutting labs and lab supplies out of budgets).
And some are just bananas. Have the AI role play a character in a book, or retell the story from that character's point of view? I have seen nothing to indicate that an AI is in any way capable of faithfully doing that work (at best, it will steal from human-completed versions of that assignment).
There is a link to a list of 851 CPS-approved GenAI products, and if someone at the office has extensively vetted each and every one of those, they deserve a huge raise.
This is just a mess. The guidebook repeatedly insists that students and teachers use AI ethically, but there is little evidence that the folks behind this have wrestled with many of the deep and difficult ethical questions behind generative AI. How much is too much? How do we reckon with AI programs' unauthorized and undisclosed use of people's work for "training"? And while CPS wants teachers to monitor how and how much students use AI, they have no more thoughts than anyone else about how teachers are supposed to do that.
Teachers do need some help dealing with the AI revolution. This is not that. Look online for some thoughtful and useful guides (like this one). The best thing I can say about this guidebook is that it smells like the sort of thing that arrives in teacher mailboxes from the front office and then gets ignored. This should be one of the prettiest, slickest items in many circular files.
Great stuff here, Peter. AI for Education has an online course for educators that I've taken -- many of the mistaken claims about how AI functions are included in that course, so I'm pinning the blame on them for that. I have to wonder how many other major districts will be hiring them to put out similar guidance.
The math caught my eye -- the middle school math assignment can be achieved without generative AI, since there are plenty of non-generative platforms that fit their description. And the high school "math without AI" description is basically "we make math boring and teach it badly" -- AI is not at all necessary to make math engaging or to explore complex topics.