Continuing with the theme of my previous post (on higher education and OpenAI), John Maytham invited me to join him in a discussion on CapeTalk to explore some of the implications of these AI tools for student assessment and the educational project. The audio on my side was not great, but it’s certainly audible, and the podcast is available here.
In any event, here’s a more structured version of what I said, and wanted to say. In his introduction, John referred to the comments made by other guests he’s had on the show, including some friends of mine, but those contributions seem to have largely been focused on the implications of ChatGPT/OpenAI for human creativity, how it is identified (in light of these plausible simulacra), and the implications of that for culture.
And yes – there is no doubt in my mind that these are important concerns, though not the concerns I want to focus on here, as someone working in higher education. The only short comments I’ll offer on the broader cultural aspects are: (a) that I think the moral panic some are expressing is largely overblown, seeing as many people don’t care about a high/low art culture distinction anyway (and I’m not saying that they should); and (b) that once you acknowledge that so much of what’s popular might be generated by the modern version of Stock/Aitken/Waterman (SAW), dying on the hill that it can’t be generated by an AI is a little weird.
So, that’s the wrong focus for any moral panic, regardless of whether any moral panic is justified at all. It reminds one of Plato saying, around 370 BCE, that writing will “create forgetfulness” (because we’re not using our memories), or of Malesherbes telling us (sometime in the 18th century) that newspapers will cause “social isolation”, because you’re not getting your news from the pulpit (where you can have tea with folk afterwards).
The real “get off my lawn” sort of concern is, for me, the one raised by J.S. Mill:
It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question. The other party to the comparison knows both sides.
J.S. Mill (Utilitarianism, 1863)
And, it’s not at all clear to me that – for the average citizen – there’s a moral obligation to avoid being a fool. But, it is clear to me that universities have a particular role to play, in generating thought-leaders, innovators, scientists who will help save us from the next thing that other scientists create that might kill us, and so forth.
So it’s within that context that OpenAI and ChatGPT are a particular concern to me. While some of us – perhaps of a certain age – might believe in the innate virtue of independent thought, the argument that it is actually essential for everyone is not an easy one to make.
Given that context, there is a fundamental collision of incentives in (at least) two areas. First, a massified higher education system, where everyone believes that they have a right to a university degree; and second, an international and local network that determines things like access and eligibility for bursaries or transfer on the basis of “marks”.
On the first area, the key concern is that the degree carries a status that is independent of whether a student has actually achieved course outcomes, or has even developed the capacity for independent critical thought at all.
I’m not offering any moral judgment on this, but rather making the point that if you can spend three years (or whatever) doing something that makes a paying job more likely, the assumption that you’ll also spend that time developing intellectual skills is speculative, at best.
And, you’ll be enrolled in that degree with hundreds of other students – at least in the South African context of the Humanities or Commerce, in junior years – where those who grade your work have (a) gone through the same sausage factory, and might believe its standards to be appropriate; or (b) perhaps never even developed the skills required to assess academic assignments themselves.
On the second area: as much as one might think something is amiss in how we teach, and the standards by which we award degrees, the fact remains that we get money for graduating students, and that we need to give students some sort of number – a course mark – by which they can be compared with their peers for the purposes of bursaries, transfers, or acceptance to universities in other countries.
But, the truth is that a lot of this has always been contestable in terms of its intellectual coherence, as well as in terms of consistency in application (i.e., in terms of moral principles like fairness).
Now, add ChatGPT/OpenAI, where some students have access and the knowledge to use those tools; where only some staff have the awareness, ability, and interest to engage with this development; and where all of the other incentives remain as they were before.
It’s true that many of us can spend our time listening to Bananarama, or some other SAW creation, and that things will still turn out alright in the end. But, perhaps that’s in part due to the fact that even as a large proportion of the human population watched Oprah, a significant proportion still needed to demonstrate the ability to think independently, and to express those independent thoughts in writing.
And yes, I could of course be falling prey to the same moral panic referenced in paragraph four, above. But, if we don’t find a way to encourage people to do intellectual work when they know that they can fake it instead, that smaller proportion of people – the ones who solve problems, innovate, and create – might well become small enough to make too little difference.