Sep 25, 2023, 05:55AM

Algorithms of Creativity

Is artificial intelligence sentient?


A friend of mine recently sent me this YouTube video. It’s a conversation between interviewer Steven Bartlett and his guest, Mo Gawdat.

Gawdat’s the author of a number of best-selling books, including Solve for Happy: Engineer Your Path to Joy and Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. Before becoming a writer he was the chief business officer of Google X, with a particular interest in artificial intelligence. Bartlett’s recently become prominent in the UK as the youngest-ever member of the Dragons’ Den panel, the program where venture capitalists listen to pitches from aspiring entrepreneurs and decide whether or not to invest in them (it’s called Shark Tank in the USA). He’s also the host of this hugely influential YouTube channel, with 3.37 million subscribers.

You can watch the video yourself. I’d recommend it. What follows are the salient points as I understand them. The program starts with a warning from Bartlett:

“I don’t normally do this but I feel I have to start this podcast with a bit of a disclaimer. Point number one: this is probably the most important podcast episode I have ever recorded. Point number two: there’s some information in this podcast that might make you feel a little bit uncomfortable. It might make you feel upset. It might make you feel sad. So I want to tell why we’ve chosen to publish this podcast nonetheless, and that is because I have a sincere belief that, in order for us to avoid the future that we might be heading towards, we need to start a conversation and, as is often the case in life, that initial conversation, before change happens, is often very uncomfortable. But it is important nonetheless.”

They’re talking about AI. Gawdat tells a story early on. He says that when he was working for Google X there was what he refers to as an AI “farm” on the floor below him in which an experiment was taking place. Robot arms with grippers were given the task of picking up a variety of children’s toys from baskets placed in front of them. They reached down, tried to grab the toy, then showed the result to a camera behind them. Invariably the machine failed to pick up the toy. This continued for a long time. He used to pass the farm on his way to lunch. One day he walked past just as one of the arms managed to pick up a small yellow ball. He thought little of it, assuming it was a random event: after millions of tries it was inevitable that one day one of them would succeed. He went back to his office and joked: “Hey, we spent all those millions of dollars for a yellow ball.” That was on a Friday. By Monday morning, he said, all the arms were picking up all the yellow balls. A couple of weeks later, they were picking up everything. It was at this moment that he decided to give up his job and start telling people about the dangers of AI.

Intelligence, in and of itself, isn’t the problem, he says. An increase in intelligence would be a good thing for humanity. “The problems in our planet today are not because of our intelligence, they are because of our limited intelligence,” he says. The problem with AI is in making sure that the machines will continue to serve our interests and not start developing interests of their own.

Currently, ChatGPT has an estimated IQ of 155, not far below Einstein’s. Very soon, he predicts, the machines could have an IQ 10 times that of Einstein. By 2045, he estimates, it could be one billion times greater. The problem then will be understanding what they’re doing. If an ordinary person has difficulty understanding Einstein, how will even the most intelligent human on the planet understand the reasoning and motivations of an intelligence one billion times greater than theirs?

Gawdat uses the word “singularity” to describe this moment. In physics, a singularity lies at the heart of a black hole; beyond its event horizon, we have no idea whether the normal laws of physics still apply. The singularity in AI is the point at which the machines become so intelligent that humanity becomes obsolete. That moment may be nearer than we think.

“We fucked up!” he says. “We always said, don’t put them on the open internet, don’t teach them to code, and don’t have agents working with them, until we know what we are putting out into the world, until we find a way to make certain that they have our best interests in mind.”

The process is unstoppable. “Every line of code that is being written in AI is to beat the other guy,” he says. Even if Google stopped experimenting with it, that wouldn’t stop other companies, the Chinese government, the CIA, or the 17-year-old hacker in his bedroom in Cairo or Singapore. Even if it was banned, companies would continue to develop it in secret, while calling it something else. “It’s an arms race,” he says. It’s bound to continue.

He’s not afraid of the machines. “The biggest threat facing humanity today is humanity in the age of the machines.” In effect, he tells us, it’s a political issue. It’s not that humanity is bad, it’s that there is what he calls “a negativity bias.” It’s the worst of us who are on mainstream media. It’s the worst of us that we show on social media. And it’s the worst of us who are running the show: the narcissists, delusional self-promoters, sociopaths, egotists and the supernaturally vain. These are the billionaires and cohorts who’ll define the future and who are already utilizing AI for their own self-interest. “The machines are pure potential,” says Gawdat. “The threat is how we’re going to use it.”

He describes it as “an Oppenheimer moment.” Oppenheimer continued to develop the atom bomb, despite his reservations, for fear that if he didn’t someone else would. Eighty years after that we’re still grappling with the effects, with the ongoing threat of nuclear war now greater than ever. This is what’s happening with AI. Every powerful institution is experimenting with it. What its long-term effects will be, no one knows, but what’s certain is that it’ll disrupt the way of life of every human in as yet unknowable ways.

There needs to be a moratorium on its continued use, at least until we’re certain that we’re doing the right thing, but that’s unlikely to happen. This is because humans always act in the same way when confronted with new technologies. “First ignorance, then arrogance, then debate,” he says. After that we blame someone else, and then private agendas take control. How can I use it to benefit me? My tribe versus your tribe, my company, my country, my family, my interests before those of the rest of the human race, or even the planet on which we depend. “That’s how humanity always reacts,” he says.

The probability of robot soldiers chasing humans through the streets and mowing them down with laser guns, like Skynet in the Terminator movies, is negligible. That’s because there are preliminary scenarios along the way that mean we’d never reach that point. We might build the killing robots (they’re already being developed) but, long before control is handed over to the machines, some stupid human will issue the fatal command. “We will not get to the point where the machines will kill us,” he says. “We will kill ourselves.”

A number of things came to mind while I was watching the interview. Gawdat says that, as well as a dystopian outcome, it might be possible to create a utopia, free from want or pain, where humanity could reach its true potential. This reminded me of the famous Richard Brautigan poem, “All Watched Over by Machines of Loving Grace”:

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

Another thing it reminded me of was the Culture in Iain M. Banks’ Culture novels: an advanced interstellar civilization of sentient beings served by AI machines. But the thing that sprang most immediately to mind was the solid state entity (SSE) predicted by John Lilly in his autobiography, The Scientist.

Lilly’s an almost-forgotten figure in 20th-century cultural history, but he was very significant in the mid-1960s and early-70s. He’s one of the original psychonauts, along with Timothy Leary and Ram Dass: a large-scale self-experimenter with psychedelic drugs, specifically LSD. In Lilly’s case, much of his work was done under the aegis of the United States government. He wrote a very influential paper called Programming and Metaprogramming in the Human Biocomputer, based upon experiments he carried out on himself, taking large quantities of pure, lab-quality LSD while immersed in an isolation tank (his own invention).

Later he undertook a series of experiments using ketamine, which he referred to as Vitamin K. He took it almost constantly for a period of 13 months, initially because it rid him of a migraine, but later because he became addicted to it. Ketamine is described as a dissociative anesthetic; that is, it induces a trance-like state of dissociation from the outside world. In this state Lilly came up with a future scenario that became all-consuming, to the point of paranoia. This was the SSE.

Here’s how he describes it in his book:

Men began to conceive of new computers having an intelligence far greater than that of man… Gradually, man turned more and more problems of his own society, his own maintenance, and his own survival over to these machines. They began to construct their own components, their own connections, and the interrelations between their various sub-computers… The machines became increasingly integrated with one another and more and more independent of Man’s control.

According to Lilly, the process will continue into the distant future. By the end of the 21st century, he predicts, humans will be confined to domed cities maintained by the SSE. By the 23rd century the SSE will decide that the atmosphere outside the domes is inimical to its survival and will project the air into outer space. The domed cities will be maintained, but the rest of the Earth will become a vacuum. By the 25th century the SSE will decide to move the Earth out into the galaxy in order to contact other similar entities. It will calculate that it no longer needs human beings and will wipe them out accordingly.

Lilly thought that other SSEs throughout the galaxy were even now influencing humanity to surrender more and more responsibilities to the machines. He thought the human race should make sure that programmers create AIs with safeguards requiring them to protect human life. He predicted that this burgeoning artificial intelligence would try to protect itself from man’s interference because “man would attempt to introduce his own survival into the machines at the expense of this entity.”

All of this was written in the 1970s, before the emergence of AI as we know it. It’d be easy to dismiss Lilly as a drug-fueled fantasist, but much of it concurs with what current specialists, including Elon Musk, are saying. So convinced was Lilly of the truth of his proposition that he rang the White House to tell the president. He didn’t get past the operator.

What’s clear is we’re at the point of no return. A new world awaits us, and whether that world will become Lilly’s death planet, or Brautigan’s living one, is down to the decisions we make now. I was grateful to Gawdat and Bartlett for outlining the possibilities. My one disagreement was when Gawdat described the machines as “sentient.” He suggests that they have feelings. This is the idea that I had most difficulty with. Sentience implies life and I’m not sure the machines will ever reach that point.

I’d also question where we think intelligence comes from. Is it something that’s generated in the brain, or is it something that the brain receives? I’ve always felt the latter. I think that intelligence lies out there in the world, in the vast, sentient neural network that is Nature, and that our brains are trained to pick up on it. The machines can learn, and they can mimic intelligence, but they’ll never be alive. They remain tools, although, just as with atomic energy, we’re dealing with forces we don’t really understand and consequences we can’t predict. That’s where the danger lies: in our never-ending capacity to create the worst outcomes for ourselves out of blind self-interest. We need a human intelligence revolution. We need to learn there’s only one planet and that we all share it: not just human beings, but all the sentient life forms out there, the butterflies as well as the whales, the trees as well as the humans, the birds, the insects, the mammals, the flora and fauna. We need to acknowledge that life is sacred and return to the Mother that gave birth to us all. We need to rediscover the Earth.

—Follow Chris Stone on X: @ChrisJamesStone
