Marcus Du Sautoy

This article is a preview from the Autumn 2019 edition of New Humanist

Marcus Du Sautoy is a mathematician and Simonyi Professor for the Public Understanding of Science at Oxford University. His latest book is “The Creativity Code: How AI Is Learning to Write, Paint and Think” (4th Estate).

Your new book, about artificial intelligence, asks what it means to be creative. Is there a single answer?

The definition I took is from the philosopher Margaret Boden. She says a creative act isn’t just something new – it should also be surprising and make us see our environment in a new way, and it should be valuable too. Creativity is also tied to consciousness. I often wonder if human consciousness and creativity started to emerge at the same time. The psychologist Carl Rogers talks about creativity as a need to express our inner world: where we behave less like machines and more like conscious beings with a desire to assert our free will. So you will never see this kind of creativity in a machine until there is intention and an inner world that feels the need to be shared with others.

You write that we all run on something you call “the human code”. What is this?

It’s an algorithm working on the information that we receive through our senses and through our neuronal activity – a complex processor developed over millions of years of evolution. While many believe artistic expression is something mysterious, I try to show that it does have a rationale, even though it may be articulating mysterious things beneath our conscious worlds.

In the last couple of years, there has been a sea change in the world of artificial intelligence (AI) where we are getting a new, bottom-up kind of code. Machine learning means that you don’t know what the code is going to do in advance. That is much closer to how our own human code emerged. The challenge can be summarised by thinking hard about two questions. How quickly can this artificial code achieve something comparable to what we are doing? And is AI just exposure to data or something much more than that? Code learning is not new. What is new is a massive increase in processing power. Digital landscapes are now developed enough so that AI has a place to learn, play, develop, and become something independent of the original code.

The mathematician Alan Turing was a pioneer of AI. What was his most important contribution?

In the past, machines were created to carry out specific tasks, like telling the time, for example, or cracking the code [to intercept Nazi communications during the Second World War] at Bletchley Park. But they could not do much else. Turing hit upon the idea of what he called a universal machine: that it was better to create one machine that you could programme to do different tasks. He was essentially asking how much of that general intelligence we can capture in code and in a machine.

Famously, Turing had his own test for determining whether a computer is capable of thinking like a human being. You argue for a new version – the “Lovelace test”, named after the 19th-century mathematician Ada Lovelace. What is it?

It looks at how the code produced by the original coder is starting to become autonomous. With machine learning we are seeing that code is beginning to change, mutate and become something new.

What implications could this have for society?

Already we are seeing instances where the code is making decisions about, say, job or mortgage applications. But we often don’t understand why the code is making those decisions exactly. It’s like the subconscious: if you opened up the human brain you wouldn’t be able to understand why it comes to all of these decisions. Similarly, we are seeing the same complexity within code, which is no longer amoral or value-free.

What role does our ability to gather unprecedented amounts of data, thanks to new technology, play in this?

Previously we had a code that could learn, but it really didn’t have a rich environment which it could learn from. One of the great breakthroughs of machine learning is having a lot of data from which it can learn. Data is the key to the process of machine learning.

Machines still seem unable to master language.

Code does run into problems with the written word. It has problems producing long-term narrative structures. When a human differentiates between words, their brain is tapping into a huge bank of knowledge. A computer, by contrast, does not have this complete picture of language. Children, for example, need exposure to very little language before they can start speaking.
Therefore it may not be possible for a machine to achieve the level of language that humans can. Much of this comes down to an evolutionary process that has built up a code in our brains for language. That is impossible to fast-track in a machine.

You write that art is an expression of free will. If so, will art created by a computer always need some human assistance?

This ties into the question of why humans produce works of art, write music, novels, and create paintings. The answer essentially comes down to the challenge at the heart of consciousness: it is very difficult to know what your experience of the world is like. When I talk about pain, is that anything like your pain? We play these games that try to match up these words and these experiences. Art is a powerful way to match up our conscious worlds. AI is a fantastic tool for us as humans to extend our creativity. But machine creativity will only happen when it has a similar desire to express what it is like to be a machine.

Why do you write that mathematics is a survival mechanism for the human species?

Mathematics is the science of patterns. And that has been a very powerful tool to be able to understand our environment and make predictions for the future, which enable us to plan effectively for our own survival. This links in with the idea of consciousness, because consciousness gives us the ability to mentally time travel. Mathematics allows us to study patterns in the past which we can then look at and say: this is the direction it is going to go in the future.

How could this be applicable to, say, politics?

Climate change is a great example of the use of mathematics to realise an existential threat that faces our species. It’s all about trying to understand data to look into the future. Without the use of mathematics we would have no chance of surviving this climate crisis.

Do you think there will ever be a time when machines become conscious?

I believe there is no reason why machines one day could not become conscious. That kind of consciousness is going to be different to ours. Already we are facing a very challenging future, where machines are starting to make decisions that impact on society and we don’t understand those decisions. So, for example, does a driverless car decide to kill the five pedestrians on the pavement, or the person in the passenger seat? Or take the issue of AI weapons. We need to understand the AI as it begins to make decisions that affect us. And we need the AI to understand us too. What you are beginning to see is an empathetic AI, which understands what it means to be human.

The timescale is the key thing here. Maybe the brain had to go through this huge evolutionary process to achieve consciousness and you cannot simply fast-track that. But there isn’t an extra mysterious component to the soul or anything like that. There is nothing making up our own consciousness that cannot be understood on a material level.

So if consciousness can be understood entirely through the material world, what’s the best way to describe it?

Douglas Hofstadter’s 1979 book Gödel, Escher, Bach asks an important question: why might a system have a sense of itself? And he uses this thing in mathematics called Gödel’s incompleteness theorem: an equation consists of numbers, but those numbers can also be code for statements about mathematics. So you get this self-reference going on, where an equation can talk about itself. Hofstadter uses this as a simple model for what the brain is doing: the brain is somehow using two different levels of coding in order to encode thoughts about itself. We are beginning to see how a machine might be able to encode thoughts about itself. So the machine isn’t quite conscious yet, but it is on its way to consciousness.

How much do algorithms, like the ones you mentioned that make decisions about mortgage applications, already control our daily lives?

These things are very subtle. Data is now playing a similar role to that which labour played in the industrial revolution: it is something given away very cheaply to make huge profits.
We need to know how these algorithms are working, what value they have and whom they benefit. If we have these tools and knowledge we will be less manipulated and ultimately have more control over our lives.

Is there a dangerous relationship between capitalism, consumerism and AI?

There is huge hype around AI comparable to the dot-com boom. There is a lot of dishonesty. So we really need to think about whether a given application of AI is genuinely exciting and creating new things, or whether it’s just companies trying to get people to buy products.

There’s a lot of doom and gloom at the moment – it doesn’t sound like you share it.

We should see AI as a powerful collaborative tool: thinking about this in terms of the future of humanity, but also about the future of AI. There is far too much fear around this subject.