On robots, AI, and the future of humanity

A few weeks back, Sarah Wild asked if I’d be interested in offering a comment or two on artificial intelligence for a piece she was working on (the article in question appears in this week’s Mail & Guardian).

While I knew that only a sentence or two would make it into the article, I ended up writing quite a few more than that, and offer them below for those interested in what I had to say.


 

What role do humans have to play in a world in which computers can do everything better than they can?

In the most extreme scenario, humans might have no role to play – but we should be wary of thinking that we’re somehow deserving of playing one in any event. While it’s common for people to think of themselves, and the species, as both special and deserving of special attention, there’s no real ground for that except our high regard for ourselves, which I think unfounded. We don’t “deserve” to exist, or to thrive as a species, no matter how much we might like to. If the planet as a whole, including all sentient beings, would be better off with us taking a back seat or not existing at all, those of a Utilitarian persuasion might not think that a bad thing at all.

In a less pessimistic (for some) scenario, we’re still a very long way away from a world in which humans are redundant. Computers are capable of impressive feats of recall, but are significantly inferior to us at adapting to unpredictable situations. They’re currently more of a tool for implementing our wishes than something that can initiate and carry out projects independently, so humans will – for the foreseeable future – still be necessary for telling computers what to do, and also for building computers that are able to do what we’d like them to do more efficiently.

Elon Musk has said that AI offers humankind’s “greatest existential crisis”. What do you make of this statement?

This strikes me as bizarrely technophobic. We’re already at a point – and have been for decades – where the average human has no idea how the technology around them operates, and where we routinely place our faith in incomprehensible processes, machines and technologies. (Cf. Arthur C. Clarke’s comment that sufficiently advanced technology is “indistinguishable from magic”.) If it’s a level of alienation from the world we live and work in that triggers this crisis, I’d think we’d be in crisis already.

There seems no reason to prefer this moral panic or fear-mongering to what seems an equally plausible alternative, namely that the sort of alienation Marx was concerned about might be alleviated through AI. If machines can perform all of our routine tasks far more quickly, efficiently and cheaply than we currently can, perhaps we can spend more time having conversations, walks and dinners, rediscovering play over work, or generating art.

It’s probably true that there will be an interregnum in which class divides are accentuated, in that wealthier people and nations will be first to have access to the means for enjoying these advances; but as with all technologies, they become cheaper and more accessible as research advances. Technophobia of the sort Musk displays here runs contrary to that, in that the last thing we want to do is to disincentivise people from engaging with these technologies by making them fearful of progress.

A recent Financial Times article paints an apocalyptic AI future. What do you think a future world – with self-driving cars, care-giver robots, Watson-driven healthcare, etc. – looks like?

The key fears around an AI future tend to be driven by the concept of the singularity, popularised by Ray Kurzweil. One possibility sketched by those who take the singularity seriously is that if we invent a super-intelligent computer, it would be able to immediately create even more intelligent versions of itself – and then this concept, applied recursively, means that we’d soon end up with something unfathomably intelligent, that might or might not think us worth keeping around.

Again, I think this pessimistic. We’d be building in safeguards along the way (perhaps akin to Asimov’s laws of robotics), and we’d likely see frighteningly smart computers coming years or decades in advance, allowing us to anticipate, to some extent at least, what safeguards would be necessary. Given the current state of AI, we’re so far away from this possibility that I don’t think it worth panicking about now (despite Kurzweil’s claim that the singularity will occur 30 or so years from now).

(Incidentally, Nick Bostrom is very worth reading on these things.)

A more general reason to not be as concerned as folk like Kurzweil are is that I’d think malice against humans (or other beings) requires not only intelligence, but also sentience, and more specifically the ability to perceive pains and pleasures. Even the most intelligent AI might not be a person in the sense of being sentient and having those feelings, which seems to me to make it vanishingly unlikely that it would perceive us as a threat, seeing as it would not perceive itself to be something under threat from us. (A dissenting view is here.)

But to address the question more directly: such a world could be far superior to the world we currently live in. We make many mistakes – in healthcare, and certainly when driving – and it’s typically simply ego that stands in the way of handing these tasks over to more reliable agents. Confirmation bias is at play here, as is the mistaking of anecdotes for data: when you react instinctively to avoid driving over a squirrel, the agency you feel so acutely seems exceptional, and validates fears that the robot driver might make the wrong choice (perhaps sacrificing the life of its passenger to save other lives). On aggregate, though, the decisions that a sufficiently advanced AI would make would save more lives, and each of us is typically in the position of the aggregate, not the exception. I would therefore think it immoral not to opt for robot drivers, once the data shows that they do a better job than we do.

(An older column about driverless cars, for more on this.)

What do you think is the most interesting piece of AI research underway at the moment?

On a broad interpretation of AI, I’d vote for transhumanism, without a doubt. We’ve been artificially enhancing ourselves for some time, whether through spectacles, doping in sport, Ritalin and so forth. But AI and better technology in general open up the possibility of memory enhancement (you could perhaps even rewind your memories), or of modulating mood, strength and so forth. Perhaps these modifications will occur with the help of an AI implant that modulates some of your characteristics in real time, in response to your situation.

This would fundamentally change the nature of humans, in that we’d no longer be able to define ourselves as persons in the same way. Who you are – the philosophical conception of the person – has always been a topic of much debate, but this would detach those conversations from many of the factors we take for granted, namely that you are your attributes, such as the attribute of being a non-French speaker (with the right implant, everyone is a French speaker in the future).

It would also likely change the nature of trust, and of relationships. Charlie Brooker’s “Black Mirror” TV series had a great episode (“The Entire History of You”) on this topic, suggesting that it would be catastrophic for human relationships – nobody would be able to lie about anything. It is this area (human enhancement via AI/tech), rather than autonomous AI, that I think is potentially far more worrisome.

But to answer your question more directly – neural network design is going to open up very exciting possibilities for problem-solving and planning. In everyday applications, we’re talking about Google Voice or Siri becoming the most effective PA imaginable. But in more important contexts, we might be fortunate to consult with robot physicians who save far more lives than is currently the case, perhaps with the help of nano-bots that repair cell damage from inside the body.

While many AI applications, such as driverless cars or Watson, offer societal benefits, robot caregivers arguably could damage ideas of collective responsibility for vulnerable people or erode filial responsibilities and make people less caring. Do you think that’s a valid concern? That as we outsource more of the jobs we don’t like, we lose our humanity?

Part – I’d say most – of what we currently value about human interaction has been driven by the ways in which we’ve been forced, by circumstance, ability and environment, to engage with people. In other words, I don’t think it’s necessarily the case that those relationships, or feelings of commonality, are connected to the particular ways in which we currently care for people. We need to avoid reifying these ideas into very particular forms. Speaking for myself, if I were living with a terminally ill loved one, I can imagine my relationship with that person being enhanced by someone else performing various unpleasant tasks, which would mean that the time I spent with that person could be of a higher quality.

More generally, we’ve always outsourced jobs we don’t like to machines (or to poor people, of course) – I don’t see how this is a qualitatively different situation from the one we’re already in, rather than just another step on a continuum. Those who argue that these AI applications will cost us some humanity need to accept the burden of proof, and demonstrate that the new situations are incomparable to the old.

Joseph Conrad wrote, in Heart of Darkness, “I don’t like work — no man does — but I like what is in the work — the chance to find yourself. Your own reality — for yourself, not for others — what no other man can ever know. They can only see the mere show, and never can tell what it really means.” Do we impoverish our experience or fundamentally alter who we are by outsourcing less enjoyable work?

Much of what I said in response to the question above applies here also. We can’t restrict ourselves to one model of work, or certain sorts of activity, to find meaning – and never have. We’ve always adapted to different situations, and found whatever meaning we can in what it is that we’re engaged with. And optimistically, when we’re freed from running on various hamster-wheels, we might find forms of meaning that we never imagined existed.

Talking with Eusebius about argumentation

Earlier today, Eusebius McKaiser invited me to join him in a half-hour conversation on critical thinking – how we should do it, and how we fail. Seeing as I happened to be in Johannesburg, I was able to join him at the PowerFM studios for the conversation that ensued, which proved to be far more interesting – for me, at least! – than the more typical interview by telephone. For those interested in the topic, the Soundcloud podcast is embedded below.

Has #trolleyology gone off the rails?

I first heard about “the trolley problem” as an undergraduate philosophy student in 1991, as one of the countless thought-experiments moral philosophy uses to probe our intuitions regarding right and wrong, and whether we are consistent in our judgements of what is right/wrong. The problem, for those of you who don’t know it, is presented by its creator (Philippa Foot) as follows:

Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found guilty for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed.

Beside this example is placed another in which a pilot whose aeroplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible it may rather be supposed that he is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots the mob have five hostages, so that in both examples the exchange is supposed to be one man’s life for the lives of five.


Suarez, and inconsistent vs disproportionate responses

Those of you who watch football might have heard of Luis Suarez, and in particular of his habit (if three instances count as a habit) of occasionally biting opponents in the heat of battle. I want to briefly offer a distinction for your consideration, because – as is so often the case on social media – the reactions to his most recent offence have tended towards the hysterical.

The most recent offence occurred in a World Cup game in which Uruguay beat (and in consequence eliminated) Italy. The video below clearly shows Suarez lining himself up for a chomp at Chiellini’s shoulder, and Fifa are currently discussing how long Suarez should be banned for (6000 has the scoop on that, by the way).

There’s no question that Suarez deserves sanctions of some sort. And this is where the distinction comes in: one can – and should – separate the issue of what a proportionate punishment would be from the issue of whether we’re treating this case as we treat similar (or worse) offences.

Just like in the last World Cup, when Suarez handballed against Ghana, some folk seem to want him hanged, drawn and quartered. The reaction was disproportionate then, especially in South Africa, perhaps thanks to our bizarre adoption of Ghana as our surrogate team. Suarez is a nasty piece of work, as far as I can tell – there are the three instances of biting, the racism, and the fact that he’s rather good yet plays for Liverpool.

But despite this, we shouldn’t let the fact that biting someone seems particularly strange – animalistic – obscure the fact that biting someone is a far less serious offence than repeated violent play, of the sort that can break legs and end careers. Our reactions to Suarez are inconsistent with our reactions to other serial offenders, who offend in ways other than biting.

Take Pepe as example, with reference to the video below:

Pepe attracts cards fairly frequently – 42 yellow cards and 5 red cards over the years 2008 to 2012 – as one might expect after watching that clip. Yet while the intensity and regularity of his violence attract comment from football fans along the lines of “he’s a dirty bastard” or what have you, I don’t see many people calling for a lifetime ban or many months of suspension.

But tell me: would you rather play against Suarez, with the possibility that he might take a bite of your arm or shoulder, or against Pepe, who might kick you in the head while you’re lying on the turf? As disreputable as Suarez might be – and this fantastic ESPN profile is worth a read, to see his history in this regard, as well as the lengths people go to to defend him – his offences are more notable for their sheer oddity than for their brutality, and we should keep this in mind when calling for his head.

Neil deGrasse Tyson, and the usefulness (or not) of philosophy

Neil deGrasse Tyson has provoked some debate on the value of philosophy and its role in relation to science, following comments that he made on the Nerdist podcast in March this year. He’s not the first scientist to question the value of philosophy, and the most recent high-profile case was Lawrence Krauss, who (in a 2012 interview) said:

Philosophy used to be a field that had content, but then “natural philosophy” became physics, and physics has only continued to make inroads. Every time there’s a leap in physics, it encroaches on these areas that philosophers have carefully sequestered away to themselves, and so then you have this natural resentment on the part of philosophers.

In short, he’s saying that the space remaining in which philosophy might make an important contribution to physics shrinks all the time. I agree with that, but …

No, voyeurism is not a “human right”

Much of what I end up writing here has to do with the nuances of some or other situation. Whether I get things right or wrong is your call to make, but hopefully many of you read what I post because you at least agree that the simple or instinctive reaction is often wrong, or incomplete at best. And in another example of how hype and hyperbole can cause people to switch off their brains, this morning I heard a caller to Redi Tlhabi’s show trying to make the case that his human rights were being violated, as he was unable to watch or listen to all of the Oscar Pistorius trial.

I don’t know the details (I’m not following the trial, except through the occasional summary recap or meta-commentary like 6000’s archives of the “insight” our journalists are occasionally displaying on Twitter), but some things can be seen and some not, some can be live and some delayed, and so forth. Cricket bats sound like gunshots, and if you’re a white model, you get to have your “dignity” preserved in death in a way that Anene Booysen never could.

The caller thought it grossly unfair – a rights violation – that he couldn’t follow the soap opera, even though its outcome makes no difference to his life. Furthermore, the fact that two courtrooms had been set up for journalists to be able to observe proceedings was also grossly iniquitous – why them and not me, Lord? As I’ve argued in a different context, if you train people to expect sensation instead of subtlety, you shouldn’t be surprised if they keep expecting more of the same and, eventually, become capable of understanding nothing less.

Gender-based violence and apophenia

Earlier today, my friend @kelltrill said

and this led to a little bit of to-and-fro between her and some others who seemed to think it somehow obvious that if Oscar Pistorius had intentionally killed Reeva Steenkamp, it would have to be classified as gender-based violence. Now, that might be typical usage of the phrase gender-based violence. But if it is, I’d like to suggest to you that it’s wrong, and lazy, to speak of cases like this (i.e. a man killing a woman) as axiomatically gender-based.

None of what I say here is intended to minimise or trivialise the fact that women are overwhelmingly more likely to be the victims of domestic assault by their partners than men are. There are hundreds of things I could link you to, but the evidence is so overwhelming that there’s no need – you can easily find something yourself. (And in case any MRAs happen to wander past here, no, I’m not saying that men aren’t sometimes victims of various forms of discrimination themselves.)

Furthermore, I’m quite happy to regard this case as at least in part an instance of gender-based violence (on the assumption, for the sake of argument, that Pistorius intended to shoot Steenkamp). I’m happy to do so because Pistorius fits a classic alpha-male stereotype – proud, strong, with a history of short-temperedness and violence. The stereotype might not fit or be fair, but I’m disclosing it to wall it off, in that this case in particular is not my focus – I want to instead address the use of that generalisation (gender-based violence), with the case as a springboard for doing so.

The mere fact that a victim is female (or whatever) does not mean that the violence can be described as whatever-based. If Pistorius knew that he was shooting Steenkamp, then – obviously – the most fitting label for this action is Steenkamp-based violence, where Steenkamp is also a woman.

Even if it’s true (as it is) that more men abuse and kill their female partners than vice versa, Pistorius can’t be known to have been more likely to shoot Steenkamp than to shoot anyone else to whom he was ill-disposed, or from whose death he could have benefited.

If a person has a history of violence against a certain sex, race, nationality or whatever, the generalisation has more merit – but before establishing whether those facts hold, we shouldn’t jump from a) the existence of a general culture of violence against X to b) the conclusion that a particular instance of violence against X fits that pattern.

I’ve argued something similar in a post about “Satanic” killings: while it’s easy to generalise, doing so can obscure important details about motivation and how we should respond (for example, that psychiatrists might be more useful commentators than ghostbusters like Kobus Jonker).

The same danger of over-generalising in a confounding sort of way could occur with a murder or assault that is perpetrated across races – in South Africa, entrenched distrust between races could (more in some parts of the country than others) explain the motivations behind a murder, but it can’t be assumed to do so.

Take Eugene Terre’Blanche as an example: yes, he was a white supremacist, but the farmworkers who murdered him might have done so because he was also an abusive employer, or a rapist (as the murderers alleged). So while you could call that an instance of race-based violence, doing so would (or, could) distract from more pertinent details.

In short, what I’m arguing is that we should be careful of affixing convenient labels to events or people, even if they are often true. Harriet Hall has a review of an interesting-looking new book on critical thinking on Science-based Medicine, where I was introduced to a useful idea I hadn’t encountered before. It’s called apophenia, and

It means the spontaneous perception of connections and meaningfulness of unrelated phenomena, the tendency to find personal information in noise, seeing patterns where there are none, the kind of subjective validation that cold reading exploits.

To recap: I don’t dispute that gender-based violence is a real thing, and a real problem. But to call every instance of violence across genders (usually male on female) an example of gender-based violence is hyperbolic, in that it might be a judgement that claims more than what the evidence tells us.

This, in turn, could be problematic, not only because it’s a simple instance of laziness in not making fine discriminations regarding what data can tell us, but also because the more things you fit into a category, the more diluted that category might become.

It’s precisely because gender-based violence is such a real thing, and is such a problem, that we might want to be more cautious about affixing that label to cases that it might not fit.

Please look after the place while I’m gone

Originally published in the Daily Maverick

It’s time for a holiday. In a literal sense, because I am about to go off to a conference in Las Vegas (where some amount of holiday is difficult to avoid), but also in the more general sense of taking a break from what has become routine. One of those things is obsessing over the nuances of South Africa’s racial politics, and another is this column.

The optimism on display at the Agang launch earlier today was good to see. Many of you might share my fatigue at the constant succession of stories that don’t promote optimism – from the classification of the Nkandla report as top secret, to the ad hominem abuse of opposition parliamentarians. Last week, we even heard the absurdist – yet sadly apposite – story of how the very ambulance taking Mandela to hospital ran out of energy.

In the midst of all this, I had a Twitter argument with a black man over Dan Roodt: I was criticising Roodt’s myopic nationalism and his cherry-picking of evidence related to who was killing more of whom, while my interlocutor was defending Roodt’s right to hold those views. However long the argument went on, I couldn’t persuade this man that while I agree that Roodt’s views can be held and freely expressed, we should certainly be on the same side in condemning them.

So, it’s a South Africa where a white liberal can now find himself disagreeing with someone (who has almost certainly borne a larger share of apartheid’s burdens) over whether a racist Afrikaner nationalist has a worthwhile point of view or not. These are strange days, indeed.

This isn’t to say that I share the pessimism that many seem to feel. I’d like to take a break from a certain form of engagement, a certain sort of discourse. Many of you might already avoid social media for exactly this reason – it’s too full of over-confident ad hoc opinions that tend towards the extremes. Depending on who you listen to, either we’re doomed or we’re in great shape, with little room for any position in-between.

The truth is most likely in-between, though, as it ever is. We’ll one day be rid of Zuma, and we’ll one day somehow get to a stage where we’re a democracy in more than only name – in other words, where the incumbent party feels the real possibility of losing power, and is thus fully motivated to do its job.

In the meanwhile, there’s plenty going on that’s far more local, far more manageable, and where it’s far easier for any and each of us to make an impact. If there’s no community project you can or want to get involved with, give to an organisation or charity that does things you support – Equal Education, DignitySA, a hospice, a hospital.

And, easiest of all, remember that each of us incentivises (and dis-incentivises) certain attitudes, behaviour and speech every day, simply through what we present to others as permissible or advisable. If you have kids, they will learn how to treat others through you. If you have students, they learn how to think through you. Even in matters most prosaic – if you keep jumping the red light or rolling through the stop sign, don’t be surprised to see that behaviour becoming common.

In short, we can all contribute to upholding a social contract without indulging in the sanctimony of a LeadSA – and our despondency at the examples set by government sometimes allows us to forget that. We might think: with such a rot at the top, what difference does it make what I do? But for all the large-scale importance of what happens at the top, we affect each other’s lives frequently, and could sometimes do with a reminder that not everything can be blamed on the man in the high castle.

One of the things I’ve tried to do in most of the 158 columns I’ve written for the Daily Maverick is to deflate our certainty on various firm convictions. This is because oftentimes, it seems that we cede our responsibility to come to a reasoned conclusion and instead settle for something ready-made by emotion, political conviction or some other powerful force. In consequence, we’re less able to talk, debate and learn, and more often compelled to resort to the safety of stereotype.

In a young country, with a crippled education system, a corrupt administration, widespread economic inequality and still-seething racial tensions, the last thing we’d want to do is to stop thinking. So let’s not – and let’s keep encouraging each other to keep at it too. I’ll certainly be back to play my part – at this point it’s just not clear where or when that will be.

Dennett’s ‘seven tools for thinking’

In 2009, I had the great pleasure of sharing a number of meals and pub-sessions with Dan Dennett, when he visited South Africa for a series of lectures. The picture below is of his first encounter with something called a “bunny chow” – a hollowed-out section of bread, filled with curry. Since meeting him then, he’s always been exceedingly generous with his time and thoughtful input when requested, as I’m sure any of you who have dealt with him would concur.

In case you hadn’t noticed, he has a new book out which looks well worth our time and attention. It’s titled “Intuition Pumps and Other Tools for Thinking”, and is certainly next in line for consumption on my Kindle.

The Guardian recently carried an excerpt detailing “seven tools for thinking”. Number two on that list is certainly one I wish more of our “community” would take to heart, and deals with the tendency to caricature our opponent’s positions. I’ll paste a snippet below, but please go and read the rest – we could do with a reminder in many of these respects.

Begin quote:

The best antidote I know for this tendency to caricature one’s opponent is a list of rules promulgated many years ago by social psychologist and game theorist Anatol Rapoport.

How to compose a successful critical commentary:

  1. Attempt to re-express your target’s position so clearly, vividly and fairly that your target says: “Thanks, I wish I’d thought of putting it that way.”
  2. List any points of agreement (especially if they are not matters of general or widespread agreement).
  3. Mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

One immediate effect of following these rules is that your targets will be a receptive audience for your criticism: you have already shown that you understand their positions as well as they do, and have demonstrated good judgment (you agree with them on some important matters and have even been persuaded by something they said).

The idea of an afterlife

Of course it can be tempting to believe in the afterlife, because it reassures or comforts – perhaps we’ll see the loved one again, and perhaps (sometimes) we’ll get to shrug off some guilt we’re now left with because of hurtful things we said or did. But notice, especially with that last set of motivations, that selfishness is the governing principle, rather than a tribute to the deceased or the memory of them. If we did want to somehow acknowledge those that have left us, I’d imagine that satisfying the demands of our egos in that fashion would not be what they would recommend or hope for (if able to recommend or hope for anything).

But giving up on the belief in an afterlife does not mean that we have to give up on commemorating the lives of those we’ve lost. Rituals of significance are popular with all of us, even non-believers, and often deservedly so. They provide a narrative force that punctuates our existence, bookmarking our progress or regress, our coming into and leaving existence. Weddings, birthdays and funerals play this role, and we can engage in all of these sorts of things with as much or as little commitment to metaphysics as we like.

For a while, many decades ago, I used to light a candle on the anniversary of a particular person’s death, because he was such a treasured friend that I felt something was amiss if I didn’t remember him. Of course, that’s close to superstition. But it made me feel better, and that’s surely a respectable motivation, even if it isn’t the strongest one?

There is of course a range of significance to fictions, including the dabbling with the idea of an afterlife that might seem tempting in the immediate days after someone’s death. Fictions that allow you to sweep child molestation under the rug, to justify misogyny, or cause you to pray over a child while she dies instead of rushing her to hospital are clearly deeply significant. In addition to this spectrum of significance, it is to my mind indisputable that whether something is true or not matters.

In many cases, wishing that there were an afterlife is probably trivial. However, if a belief in an afterlife allows you to neglect duties in your “real” life, thinking you have time to make amends later, or allows you to think that your real obligations are in the hereafter rather than now (think, for example, of religiously motivated suicide bombers), then the belief can contribute to serious harms. And, because there aren’t any effective ways of preventing false beliefs from taking on these harmful forms, perhaps it’s better to avoid them as much as possible.

Even then, this shouldn’t mean telling the grieving mother that no, her child is not in heaven. But it does mean that we shouldn’t encourage such beliefs. And, not encouraging them doesn’t mean that we need to treat the deceased as if they never lived, or never meant anything to us at all.