Bill Gates, the wealthy education amateur, just won't stop talking about his genius ideas for education. He remains a pioneer in selling the idea that to be an expert in a field, you can either actually study and work in that field, or you can just be a rich guy.
Combine that limited understanding of teaching with aspirational fantasies about technology, and you get his latest wacky ideas.
Gates predicts that AI chatbots will be just "like a great high school teacher" when it comes to teaching writing. He acknowledges that they can't do it yet, that current software is "not that great" at teaching reading or writing skills, a statement on par with saying that McDonald's is "not that great" at creating fine dining experiences.
But Gates is sure that soon the algorithms will be able to provide useful feedback, like helping students make their writing clearer or their arguments more solidly reasoned and supported.
Very few students get feedback [from software programs] on an essay that this could be clearer, you really skipped this piece and the reasoning. I do think the AI will be like a great high school teacher who really marks your essay, and you go back and think, "OK, I need to step up there."
Note the "soon." Software that can do a good job of assessing writing has been coming "soon" for decades. It has not arrived (as we have chronicled repeatedly here at this blog: see here, here, here, here, here and here for starters). Not only has it not arrived, but it has shown no signs of arriving any time soon. "I put your Big Mac on a doily," says Ronald. "Are you feeling the fine dining yet?"
There's a fundamental problem with both robo-grading and robo-writing--the software does not "read" or "understand" language in any meaningful sense of the words. What the algorithm does is say, "Based on my library of samples, the most likely word to come next in this string of words is X." The notion that algorithms could assess how clear or well-reasoned a piece of writing is is absurd (note: the algorithm in my desktop is certain that the repeated "is" is a mistake).
The algorithms' predictive power, fed by a gazillion language "samples" stripped and/or plagiarized from a variety of sources, is getting greater all the time, but that growth is not getting us any closer to actual reading and understanding of text. The algorithm doesn't know what it's saying, and it doesn't know what you're saying. It's just increasingly adept at determining whether or not you have fallen within the parameters of linguistic probability.
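To make that concrete, here is a minimal sketch of what next-word prediction amounts to: count which word most often follows a given word in a pile of samples, then pick the likeliest one. The tiny corpus and function names below are invented for illustration; real chatbots use vastly larger models and longer contexts, but the underlying move is still a probability lookup, not reading.

```python
from collections import Counter, defaultdict

# A tiny, made-up "library of samples" (hypothetical; real models train on vastly more text).
corpus = (
    "the essay needs a clear thesis . "
    "the essay needs more evidence . "
    "the essay needs a clear argument ."
).split()

# Count which word follows each word (a toy bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most common follower of `word` in the samples, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# The "prediction" is just a frequency lookup; nothing here reads or judges an argument.
print(most_likely_next("needs"))  # 'a' (follows "needs" more often than 'more' does)
print(most_likely_next("clear"))  # 'thesis'
```

Nothing in that lookup evaluates whether a thesis is clear or an argument is supported; it only measures which word tends to come next.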
Gates acknowledges that this would be a huge step, and he didn't quite say that this "like a great teacher" software should replace human teachers. But it would help "overworked" teachers and provide better educational stuff for poor kids. Without, of course, having to bother the world's jillionaires by having them pay more taxes or run their corporations in ways that put resources back into the community instead of just draining it. Because one way to help "overworked" teachers is to hire more teachers, and one way to get "low-income" students better schools is to spend the money on the resources they need. But hey, let's not talk crazy; Gates would like to help the world, but only in ways that don't actually address his role in creating the current state of the world (read Winners Take All).
What the software would need, Gates suggests, is for actual teachers to give AI tutoring programs feedback about how tech could help them do their jobs. He is just SO CLOSE to a useful idea here, because for thirty years that is exactly the question ed tech executives should have been asking those same people. Instead, what we've been getting is ed tech guys saying, "I've come up with this really cool tool that will really help you if you just change what you do to fit what the software does." And now Gates offers, "Could you give us a hand in training your replacement?"
Meanwhile, various districts are playing with bot applications, like Los Angeles schools, where a chatbot named Ed will be doing... something? Answering parent phone calls about basic data that any bot could be programmed to look up and report? Or maybe writing IEPs, which would be scarier.
An algorithm that can teach and assess writing is a longtime ed tech dream, presumably because it seems like such an open market. Grading essays and papers is time-consuming, hard, and fuzzily subjective, and so an endless parade of ed tech gurus and edu-preneurs has pitched an algorithm or hawked a product that is Just Around The Corner, because either A) it would streamline and standardize a fuzzy subject area or B) the person who gets it right will make sooooo much money.
But wishes are neither horses nor English teachers, and the dreams of one of the world's richest men are still just dreams.
I give Gates a little more credit. He does read extensively and spends his time investigating issues that fascinate him. His personal journey as a learner is impressive, but understanding the social, political, and bureaucratic world of education from the perspective of a self-motivated learner has limitations that others will readily see. I am interested in the role AI might play as a tutor. I see AI more as a capable peer: useful if you don't expect perfection and stay metacognitively active in evaluating and directing the interaction.
Love the shout-out to Winners Take All. It was a great book highlighting the flaws of the anti-democratic, technocrat mindset. A good companion to it is When McKinsey Comes to Town, which highlights the dangers of relying on narrow profit motives to make business and government decisions. Gates is the best example to highlight because he is so often held up as the "good billionaire" using his money through his foundation to do good things, when in reality it's another anti-democratic system: one person who thinks he knows better than everyone else, implementing ideas that shape public policy the way he wants and end up harming everyone who didn't get to make that choice.