Self-aware machines? Not quite, but AI is already with us

By Stuart Layt

Examine, a weekly newsletter covering the latest developments in science, is sent every Tuesday. Below is an excerpt – sign up to get the whole newsletter in your inbox.

No, Google does not have a sentient artificial intelligence lurking on its servers, but the reaction to the claim, which surfaced over the weekend, suggests many people may not realise just how much artificial intelligence is already embedded in our daily lives.

The internet was driven into a frenzy after Google engineer Blake Lemoine went public with the claim that the company’s chatbot software LaMDA is self-aware.

Experts have shot down claims Google’s LaMDA AI is self-aware. Credit: DALL·E

As evidence, Lemoine published a series of logs of interviews he had conducted with the AI that appeared to show the system expressing a sense of self-identity, including saying that it has a “soul” and that it fears being turned off, which it said would be “like death”.

Experts have predictably been quick to debunk the central claim that LaMDA, which stands for Language Model for Dialogue Applications, is sentient.

International AI expert Gary Marcus called the claim “nonsense on stilts”, adding that if LaMDA was sentient its replies made it sound like a sociopath. Harvard cognitive scientist and author Steven Pinker tweeted that the claims were a “ball of confusion”.

Google itself has denied that the AI is sentient, with spokesperson Brian Gabriel telling The Washington Post the company had already reviewed Lemoine’s claims and found “no evidence that LaMDA was sentient (and lots of evidence against it).”

A key observation from all the experts who have publicly weighed in on the issue is that LaMDA is specifically a conversational AI – a chatbot designed to mimic human speech (or at least, human speech as typed out in a chat window).

The fact that it has managed to fool a Google engineer does not mean it is sentient; it means it is a very well-designed chatbot, according to Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales.

“It’s designed to be able to have a conversation, that’s what it’s designed to do,” he says.

“This Google engineer was talking to it and it says all the right things, but there’s quite a bit of cherry-picking going on, pointing to the bits that sound profound but ignoring the bits that sound nonsensical.”

Professor Toby Walsh says humans are still more intelligent than machines.

Walsh says while LaMDA is a sophisticated AI, it was developed in the same way as all machine learning models – with huge amounts of data, in this case chat logs and other conversational data, “poured into it” until it learned how to mimic human speech.

He says LaMDA is just one application of artificial intelligence, but there are many examples people use every day that are almost as sophisticated.

“The big one, of course, is Alexa, or Siri or one of those personal assistants – that is an AI, a very complicated one, which has to be able to understand lots of different types of voices ‘in the wild’ and respond,” he says.

“But there are lots of AI systems embedded in everything from your car to your phone.”

A key one he points to is the maps app on your smartphone, which uses AI to calculate the best route to travel.

Even TV now has AI embedded in it: the algorithm that suggests shows for you to watch on streaming services is a form of artificial intelligence with a specific job – to push more content to people who might be interested in watching it.

At the same time LaMDA was making news, a different AI was also taking the internet by storm – DALL·E mini.

DALL·E takes a text prompt such as “a monkey riding a bicycle” and then generates a series of images based on that prompt.

Importantly, it does not find those images on the internet but “paints” them itself, having looked at thousands of images of monkeys and bicycles, and how the two might fit together.

It has even been getting better; its paintings from several years ago are of noticeably poorer quality than those generated today. In fact, DALL·E generated the lead image in this article, in response to the prompt “a sentient robot”.

What DALL·E is doing is a technological marvel – a machine learning neural network generating wholly original images based on its own knowledge of what things look like and how they can be combined.

But no one has claimed that DALL·E, or any of the hundreds of applications like it, is self-aware, despite the fact that they all use AI processes just as intricate as LaMDA’s.

The integration of artificial intelligence into our everyday lives is not always as flashy as DALL·E but there’s no denying the impact.

A special report into the AI sector by CSIRO’s Data61 and the federal Department of Industry, Innovation and Science in 2019 estimated that digital technologies, including AI, could be worth $315 billion to the Australian economy by 2028.

But while the benefits are considerable, artificial intelligence can also be used for nefarious purposes.

So-called “deepfake” technology uses AI to generate pictures and even video of a person showing things that person never did, without any such footage ever having been filmed.

In practice, that has resulted in a lot of silly videos online of celebrities doing things they did not do, but it can be turned to darker purposes.

At the start of the Russian invasion of Ukraine, video emerged of Ukrainian President Volodymyr Zelensky urging his troops to surrender to Russian forces.

It was a deepfake, likely released by Russian intelligence agencies, and was quickly debunked. However, Walsh says there will come a point where AI becomes good enough that we will not be able to tell the difference.

What happens then is up to us, he says.

“This story says more about intelligent humans than it does about dumb machines,” he says.

“We’re very forgiving, we’re a social animal and we fill in the gaps and assume the best when it comes to other humans, and we do that also when it comes to these machines.

“We’re easily fooled by things like deepfakes, or an AI chatbot, and some people will try to exploit that.”

The Examine newsletter explains and analyses science with a rigorous focus on the evidence. Sign up to get it each week.
