There’s a skit in episode 4 of Mitchell and Webb are Not Helping (Channel 4, subscription required) featuring a confused teacher, who doesn’t understand why his pupils come back from holiday knowing nothing of what he’s taught them.
He’s reminded that they’re new kids each year, and that all the effort you’ve “previously” put into teaching them mathematics (in this case) needs to be repeated for each iteration, until you die.
As horribly familiar as the gag might seem for teachers in year n+1, our day-to-day experiences – at high student volumes, at least – might well compromise our ability to see long-term trends. But that can’t stop me from speculating about them, and in short, it’s difficult to believe we’re not fucked, at least in the short term.
The various horrors of international politics overdetermine this outcome, to be sure, but the particular manifestation of impending doom we see in education is an interesting one in itself.
In preparation for an upcoming talk on AI and education, I’ve been re-reading Chalmers and others on philosophical zombies, because I’ve been going on about a “zombie interregnum” in conversation for a few years now, simply because of the increasing numbers of students in my classes who submit coherent, but seemingly thoughtless, essays.
It’s only inference and experience I rely on in saying “thoughtless”, and it’s an aggregate judgment rather than one about any particular student. The point is simply that the proportion of submissions that are surface-level informed, as well as grammatically tolerable, is trending sharply upward, and there’s no reason to believe the explanation is that people have become more sensible.
The zombie reading is motivated by the idea that we’re going to be teaching generations of students who (at least in non-quantitative fields) can entirely plausibly be awarded a degree, without having done any significant amount of reading, writing, or thinking.
It’s not the university’s fault, nor that of particular educators or any piece of software that we could deploy. The simple reality is that unless you can do oral or sit-down examinations and tests, assessment metrics are increasingly meaningless; and that if you want to educate as many people as possible (and remember that in some places, like South Africa, “output” of graduates is a key factor in funding), the resulting classroom sizes make that impossible.
So, unless rigorous assessment (and standards) can be implemented, students can bullshit their way to graduation. This is not a new problem – I’ve seen countless people graduate and then be promoted to high rank, even as I don’t understand how they can believe what they say – but the difference is that those people had at least engaged enough to know the game they were playing, and how to play it.
We might well get to the same point with the students who delegate all their work to AI, but in the interim, my concern is that there will be multiple cohorts of students who think they have learned something; employers who also believe this; and then, hypothetical children who might “learn” similar epistemic habits from these students.
So, I mean “zombies” in the sense of meta-cognition never having been encouraged, rather than as the term is used in the physicalism debates, because there used to be only a few ways you could garnish your brain’s output, and they all needed input from a human.
In other words, the nonsense had a little scaffolding, in that e.g. Grammarly requires source text; a plagiarist has to find a source; and your work would be an obvious outlier compared to the rest of your class if it were produced by a super-competent assistant such as an AI tool that had read the entire Internet.
That scaffolding is no longer necessary, and all the tools for generating plausible sense are increasingly available to all (noting, of course, that in countries like mine, certain demographic groups will have a head start because of easier access to, and more familiarity with, those tools).
There’s less need to have, or use, your brain than ever before.
Early, and often hyperbolic, commentary on AI featured the “stochastic parrot” analogy, meant to highlight how AI could mimic human expression without understanding it.
And, back to the zombies: if increasing numbers of humans might be able to get away with mimicking even a university degree, I wonder whether the stochastic parrot we should be more worried about is us – especially when we will be the ones deciding (at least, for now) on the scope of our reliance on AI, and then responding to whatever world results from those decisions.