Google's 'sentient' AI controversy, unpacked

An engineer at Google claims that an artificial intelligence (AI) chatbot he spent months testing is sentient, despite the company's insistence otherwise.

Blake Lemoine, a senior software engineer in Google's Responsible AI group, said that the chatbot, known as LaMDA (Language Model for Dialogue Applications), was "possibly one of the most intelligent man-made artifacts ever created."

"But is it sendient?" "But is it sentient?" Lemoine asked in the report. He then shared about 20 pages of questions and answers with LaMDA regarding its self-reported online sentience. He also published this chat transcript on Medium. Lemoine probed the chatbot’s understanding of itself and its consciousness in this chat transcript.

According to The Washington Post, Lemoine says he made the conversations public only after Google executives dismissed them.

An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB

Google said in a press release last January that LaMDA is built on a neural network architecture, which lets the chatbot synthesize large amounts of data, identify patterns, and learn from what it ingests. LaMDA was fed vast quantities of text so that it could engage in free-flowing conversation.
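To make the mechanics concrete, the sketch below shows how a transformer-based dialogue model of this general kind turns a prompt into a reply, one predicted token at a time. LaMDA itself is not publicly available, so this uses the open DialoGPT model purely as a stand-in; nothing here is Google's actual code.

```python
# A minimal sketch of how a transformer-based dialogue model produces a
# reply. This is NOT Google's LaMDA (which is not publicly released);
# "microsoft/DialoGPT-small" is an open conversational model used here
# only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode the user's turn, terminated by the model's end-of-sequence token.
prompt = "Do you think you are sentient?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# The model extends the conversation one token at a time, sampling from
# the statistical patterns it learned during training on dialogue text.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

However fluent the output, the model is only predicting plausible next tokens from patterns in its training text, which is the crux of the dispute over whether such systems can be called sentient.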

AI experts have largely sided with Google's assessment of LaMDA. They argue that current systems lack the capacity for sentience but can mimic human conversation convincingly, which is exactly what they were designed to do.

"What these systems do is to put together sequences, but with no coherent understanding of their world behind them like foreign language Scrabble player who use English words for point-scoring tools without any clue about the meaning," Gary Marcus, an AI researcher and author, wrote in Substack.

Training models such as LaMDA poses immediate risks, Google acknowledged in its 2021 press release. According to The Washington Post, Lemoine focused on these issues at the company and developed a fairness algorithm to remove bias from machine-learning systems.
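The details of Lemoine's algorithm were not made public. To show the general shape of such a fix, the sketch below implements one standard, well-documented debiasing technique, reweighing (Kamiran and Calders): training examples are re-weighted so that a protected attribute becomes statistically independent of the label. This is a generic illustration, not Lemoine's actual method, and the toy data is invented for the example.

```python
# A generic illustration of one common debiasing technique (reweighing),
# NOT Lemoine's unpublished algorithm. Each training example receives a
# weight w(a, y) = P(a) * P(y) / P(a, y), which makes the protected
# attribute independent of the label under the reweighted distribution.
from collections import Counter

def reweighing_weights(protected, labels):
    """Return one weight per example: w(a, y) = P(a) * P(y) / P(a, y)."""
    n = len(labels)
    p_a = Counter(protected)                 # marginal counts of groups
    p_y = Counter(labels)                    # marginal counts of labels
    p_ay = Counter(zip(protected, labels))   # joint counts
    return [
        (p_a[a] / n) * (p_y[y] / n) / (p_ay[(a, y)] / n)
        for a, y in zip(protected, labels)
    ]

# Toy example: group "B" is underrepresented among positive labels, so
# positive "B" examples get weights above 1.0 and are emphasized in training.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
print(reweighing_weights(groups, labels))  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```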

He isn't the only Google employee to have raised concerns in this area. In 2020 and 2021, two leaders of Google's Ethical AI team said they were fired after identifying biases in the company's language models.
