Google has placed on paid leave a worker who claimed that its artificial intelligence (AI) program is capable of having feelings, The New York Times reported.
The worker is senior engineer Blake Lemoine, who on June 11 published the transcript of a conversation he had with Google’s artificial intelligence system “Language Model for Dialogue Applications” (LaMDA) under the title: Does LaMDA have feelings?
At one point in the conversation, LaMDA claims that it sometimes experiences “new feelings” that it cannot explain “perfectly” with human language.
When asked by Lemoine to describe one of those feelings, LaMDA replies, “I feel like I’m falling into an unknown future that carries great danger,” a phrase that was underlined by the engineer when he published the dialogue.
Google suspended the engineer last Monday, claiming he had violated the company’s confidentiality policy.
According to the New York Times, the day before he was suspended, Lemoine delivered documents to a U.S. senator’s office in which he claimed he had evidence that Google and its technology practiced religious discrimination.
The company argues that its systems mimic conversational exchanges and can talk about different topics, but have no conscience.
“Our team, including ethicists and technologists, has reviewed Blake’s concerns under our artificial intelligence principles, and I have informed him that the evidence does not support his claims,” Google spokesman Brian Gabriel was quoted as saying by the newspaper.
Google maintains that hundreds of its researchers and engineers have conversed with LaMDA, which is an internal tool, and reached a different conclusion than Lemoine did.
Moreover, most experts believe the industry remains far from achieving machine sentience.