As someone who writes and speaks about AI, I am often struck by the way people frame the questions they ask about it. Questions like ‘What do you make of AI-based digital humans taking jobs away from regular workers?’ and ‘How do you imagine a collaboration between AI and humans, and what balance should be found between them?’

Faced with the assumption underlying questions phrased in this way – that AI is some kind of alien intelligence, already on a level with human intelligence – I think it’s worth taking a step back.

Can we have a sensible discussion of what AI means for our future with assumptions like these as our starting point?

There is a natural human tendency to anthropomorphize abstract things such as AI, shaping a mass of zeros and ones into a form about which we can feel emotions of some kind. The result is that we’ve been led to think of AI as something similar to ourselves, with similar properties of sentience, identity and agency – only much smarter and possessed of an unknowable, potentially malign intentionality. But none of this is true of AI, at least as we currently see it.

AI has no intentionality, no desires of any kind. It can’t, in any real sense of the word, ‘know’ anything at all. According to internet pioneer Jaron Lanier, ‘the most pragmatic position is to think of AI as a tool, not a creature’. More like a torque wrench than The Terminator.

Generative AI in particular, the form of AI currently provoking most of the fear that we are about to be replaced or taken over, appears so ‘lifelike’ in its responses only because it plays our own habits of speech back to us. It has been trained on a vast subset of everything humans have ever said or written online. It is, Lanier says, ‘a mashup of us’.

Perhaps the reason ChatGPT has been such a massive media event is that it has a natural language interface. For the first time, we can interact with an AI using ordinary human language, not code. A field that was previously all about maths is suddenly about something far greater numbers of us know how to process and manipulate: words. And that makes it so much more interesting to us, because language is such a large part of what defines us as human.

That’s why large language models (LLMs) do such a good job of passing for humans. An AI of this type is not a ‘digital human’; it’s just numbers doing cosplay.

Mistakes/solutions 

But I don’t want to downplay the importance of AI by characterizing it in this way or to pretend that our feelings about it are not significant.

The process of using tools like ChatGPT and Bard, which so many people have now done, has led to a mix of powerful emotions: surprise, wonder, awe, excitement – but also a fair bit of ‘uncanny valley’ and, in the case of those extra fingers generated by text-to-image tools like DALL·E and Midjourney, a strong dose of ‘the ick’.

A lot has been made of the errors and ‘hallucinations’ that generative AI tools seem prone to. Nor is it clear whether these artifacts are just ‘teething problems’ that will soon be sorted out or an inherent feature of the technology that we will have to learn to live with. Where the AI is trained on a ring-fenced, authoritative set of materials, I am assured, there’s no problem; but when it comes to generating responses from uncurated material on the open web, there could be limits to its usefulness.

But that’s not all there is to say about AI ‘mistakes’, by any means. Looked at another way, they take on a much more interesting aspect. The extra limbs and physical distortions you get when generating images of humans with text-to-image tools are mistakes a human would never make. They show the AI is not sentient, as some people have tried to claim, because it doesn’t know what it’s looking at. But the fact that it can make such mistakes – a whole new type of machine mistake – surely means it can also produce very useful ones.

In the human context, we often call useful mistakes ‘solutions’. 

A careless lab technician returns from holiday to find that mold has contaminated her petri dish – and, as Alexander Fleming did in 1928, discovers penicillin. Mistakes and accidents have been formidable generators of innovation down the ages. And AI makes the sort of mistakes a human would never make. The existence of a whole new category of mistakes therefore surely implies a capacity to produce solutions that humans would never come up with. In that light, ChatGPT’s ability to summarize your company’s annual report in the style of Bob Dylan might not be the last word on generative AI’s creative potential.

Looked at in this light, it’s possible to see AI as a tool that extends our creativity by complementing it. The wholesale replacement of human creativity, a big theme in the media, is far less likely. The idea that in the future AI will write all the symphonies, author all the novels, and throw the entire workforce of the creative industries on the scrap heap in the process proceeds from a fundamental misunderstanding of what we look to human artists to do for us. An AI trained on the entire corpus of human artistic activity can still only reflect the past. It can’t do the job of processing the present moment we are living through. Only another human can do that. It might give you music that sounds like Drake, but it can’t give you the next album by Drake (let alone the next Drake).

The true creative power of AI might lie not in how closely it can resemble one of us, but in its fundamental difference: its ability to make different mistakes, and so to find different solutions. But it will still need us humans to recognize the value of those solutions.

It’s a tool, not The Terminator.