As submitted to The Daily Maverick.
It is not the Internet, or Google, that is making us stupid – it’s our brains. We’ve never been as smart as we’d like to think we are, and the current fashion of looking for reasons why we feel less clever than before partly amounts to a hope to find excuses – someone to blame – for our attention deficits.
It is of course true that there is more information available to us than ever before, and the amount of available information grows exponentially every day. But there has always been more information available than we can comfortably pay attention to, at least since Gutenberg made printed material available to the masses.
What has changed are our cultural dispositions regarding agency and blame – we used to understand that mastering a field took time and effort, and that work was required to filter signal from noise. Now, we blame the noise, even as we no longer invest the time and effort required for mastery of a field.
The justification for this transferral of blame is the work of cognitive neuroscientists at Stanford University, and the popularisation of this work by Nicholas Carr and others. In the labs at Stanford, the cognitive burden of multi-tasking is exposed via concentration tasks, which reveal that heavy multi-taskers perform 10-20% worse on these tasks than light multi-taskers do.
Part of the reason for this is that we don’t multi-task by performing tasks simultaneously, but rather by rapidly switching between tasks. This switching between tasks incurs a cognitive cost – we’re essentially wasting part of our attention span on the switching, rather than on any particular task.
Based on these laboratory environments, it is difficult to dispute that the more we try to do, the less efficient we may be at any particular task. Furthermore, Carr argues that because of neural plasticity (the “re-wiring” of the brain in response to experience), we may be programming ourselves to be shallow readers, and superficial interpreters of information. The corollary is of course that we might be losing some of the competencies involved in deep engagement with any particular ideas, or fields of knowledge.
Steven Pinker is one of those not convinced by the neuroplasticity arguments, arguing that the skills involved in reading, analysis and debate are not threatened by new technology. These skills, he says, “are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet”. It’s too soon to say how the brain is actually responding to these (relatively) new stimuli, both because long-term data is not available, and because brain science is in its infancy.
But on the assumption that our brains can be, and are being, rewired by our shallow engagement with various simultaneous tasks, we can still ask: is this the Internet’s fault? We still have the option of filtering out the noise we don’t want to hear, and it’s not anyone else’s fault that we choose not to do so. Instead of spending our days with multiple browser-tabs open, alongside a Twitter client and an instant-messaging program, we could choose to shut that portal down, and open a book instead.
Much of the time, I choose not to do this, as do many of you. However, most of us are sometimes required to focus on one important task, and somehow still seem able to do so. While it is certainly difficult to ignore the siren-song of the Internet, one possible reason we so often fail to do so is quite pedestrian: we’re simply being lazy, succumbing to the pleasure of the dopamine hits that constant changes of stimuli provide. In an effort to escape boredom, we instead risk sacrificing some of our ability to pay attention.
If we do choose to allow the Internet to intrude, instead of focusing on reading a book, there is still much we can do to avoid the shallows that Carr warns us of. The insights of behavioural economists like Kahneman and Tversky are no less relevant today than they were in 1974, when these pioneers of decision theory highlighted the ways in which we use heuristics as information-navigation shortcuts.
One of those heuristics might be particularly relevant here: the availability heuristic, by which we can develop a cognitive bias towards over-estimating the likelihood of something happening based on how easily examples of it come to mind. It is of course easy to think of examples of ourselves being distracted – and then, perhaps to extrapolate from those examples to this notion of the Internet, and information more generally, being something we are victims of. What we forget, though, is that experiments in a lab don’t always tell the whole story.
This particular story has to include the observation that we are in part victims of our own choices. We generally choose to try and do many things at once, even while our experiences show us that doing so is not easy. This story also has to include discussion of all the potential gains of multi-tasking, which may result in multi-tasking being a rational choice, even if it results in long-term impairment of our ability to pay attention.
Many of us no longer need the skills that are reportedly being lost, because fewer of us are required to be specialists, rather than generalists. Those who do need to specialise know that doing so requires hard work, and deep immersion into one thing rather than superficial skimming – so it’s a skill they practise, and retain.
For everyone else, it remains true that we are doing far more as a result of the Internet and other new technologies. Twitter, blogs, SMSes and all the rest allow us to remain informed to a far greater degree than ever before, and many of us are reading and writing more than we ever have. Except in specialised contexts, it’s a disputable value-judgement to say that more reading and writing – even if sometimes superficial – is a worse outcome than doing less of such things.