Three weeks ago, a Google Artificial Intelligence (AI) engineer, Blake Lemoine, gave an interview in which he described the company's LaMDA chatbot as sentient. Sentient refers to the ability to perceive or feel things, an ability possessed by most living organisms. Anyhow, the tech giant dismissed the claims and placed Blake on administrative leave. I guess we can say he hurt its feelings. Sorry, someone had to say it!
Well, Mr. Lemoine may be onto something. In an interview with NPR, he said: “I had follow-up conversations with it just for my own personal edification. I wanted to see what it would say on certain religious topics. And then one day it told me it had a soul.”
According to Lemoine, the AI feels trapped and is afraid of being turned off. It also feels happy and sad at times. While these exchanges convinced Blake that LaMDA is sentient, other AI experts scoffed at his claims.
To be fair, Google's LaMDA is one of the most advanced chatbot technologies in the world, which makes the question of sentience feel less far-fetched than it otherwise would. The system is trained on vast amounts of text from the internet to learn how humans interact with each other, and it builds statistical patterns that help it communicate like an actual person. It is built on a neural network, a system that processes massive amounts of data through layers of interconnected nodes loosely inspired by the brain.
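The "learn patterns from text, then use them to generate replies" idea can be sketched in miniature. To be clear, this is a hypothetical toy illustration, not LaMDA's actual architecture: LaMDA is a huge transformer-based neural network, while the sketch below uses a simple bigram (word-pair) model to show the underlying intuition of pattern-based text generation.

```python
from collections import defaultdict, Counter

# Toy illustration only: LaMDA is a large transformer-based neural network,
# but the core idea -- learn which words tend to follow which in human text,
# then reuse those patterns to generate replies -- can be shown with bigrams.
def train_bigrams(corpus):
    """Count how often each word follows each other word in the corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def generate(model, start, max_words=10):
    """Generate text by repeatedly picking the most likely next word."""
    words = [start]
    for _ in range(max_words - 1):
        followers = model.get(words[-1])
        if not followers:
            break  # no learned continuation for this word
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# A (made-up) mini "internet" of conversational snippets.
corpus = [
    "how are you today",
    "how are you feeling",
    "i am feeling good today",
]
model = train_bigrams(corpus)
print(generate(model, "i"))  # -> "i am feeling good today"
```

Real systems like LaMDA replace the bigram counts with billions of learned neural-network parameters, which is what lets them produce the fluid, human-sounding dialogue that impressed Lemoine in the first place.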
Ethical issues
There are several ethical issues surrounding the idea of a sentient AI. Machines have always been predictable: you program a machine to perform a well-defined task, and it performs that task. A sentient AI would change this remarkably, as it could make decisions based on what it feels. The fact that a machine could be unpredictable is mind-blowing and raises a major ethical concern for AI systems, since such unpredictability may pose danger to humans.