How LaMDA works: the Google “artificial brain” that an engineer claims is self-aware

A machine with thoughts and consciousness. That is how Google engineer Blake Lemoine described LaMDA, the company’s artificial intelligence system.

The company removed Lemoine from his duties.

“Our team, which includes ethics and technology experts, has examined Blake’s concerns in accordance with our AI Principles and informed him that the evidence does not support his claims,” Google spokesman Brian Gabriel said in a statement.

But how does this machine work?

If we think of old science fiction films, we might imagine LaMDA as a robot that takes on a human appearance, opens its eyes, becomes conscious, and speaks. Or like HAL 9000, the supercomputer in the film 2001: A Space Odyssey, which, in a parody on The Simpsons (voiced by Pierce Brosnan in the original English version), falls in love with Marge and wants to kill Homer.

But the reality is a little more complex. LaMDA is an artificial brain that lives in the cloud. It is fed millions of texts, and it trains itself. On the other hand, it behaves like a parrot.

Sound complicated? Let’s take it step by step to understand it better.

LaMDA is a huge neural network that trains itself – Photo: BBC / GETTY IMAGES

LaMDA (Language Model for Dialogue Applications) is built on Transformer, a neural network architecture that Google designed in 2017 and that can be described as an assembly of deep artificial neural networks.

“This neural network trains itself with enormous amounts of text. But the learning has a goal, and it is framed as a game. The system is given a complete sentence with one word missing, and it has to guess that word,” explained Julio Gonzalo Arroyo, professor at UNED (National University of Distance Education) in Spain and principal investigator in natural language processing and information retrieval.

The system plays this game with itself. It fills in words by trial and error, and when it gets one wrong, it acts like a child with an activity book: it looks at the answers in the back, sees the correct word, and adjusts and improves its parameters accordingly.
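
For readers who want to see the idea in code, below is a minimal sketch of that fill-in-the-blank game. It assumes a toy corpus and simple frequency counting standing in for LaMDA’s billions of neural-network parameters; the corpus and function names are invented for illustration.

```python
from collections import defaultdict
import random

# Toy corpus standing in for the millions of texts the system is fed.
corpus = [
    "i started playing the guitar",
    "i love playing the guitar",
    "she started playing the piano",
]

# The "parameters": how often each hidden word has filled each context.
counts = defaultdict(lambda: defaultdict(int))

def predict(context):
    # Guess the missing word for this context, if it has been seen before.
    options = counts.get(context)
    return max(options, key=options.get) if options else None

def train(sentences, rounds=1000):
    for _ in range(rounds):
        words = random.choice(sentences).split()
        hole = random.randrange(len(words))          # hide one word
        target = words[hole]
        context = tuple(w for i, w in enumerate(words) if i != hole)
        if predict(context) != target:               # wrong guess? check the
            counts[context][target] += 1             # answer and adjust

train(corpus)
print(predict(("i", "started", "playing", "the")))   # -> "guitar"
```

When the sketch guesses wrong, it “looks up the answer” and strengthens the link between the context and the missing word; that correction loop is the whole training process in miniature.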

And it also “detects the meaning of each word and observes the words around it,” according to Gonzalo Arroyo. In this way, it becomes an expert at predicting patterns and words. It is a process similar to the predictive text on mobile phones, but raised to the nth power, with far more memory.
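
As a rough analogy in code, here is what phone-style predictive text looks like with a simple bigram count. The sample text and model are invented for illustration; LaMDA works over far longer contexts and with vastly more “memory” (parameters).

```python
from collections import Counter, defaultdict

# Invented sample text; a phone learns from what you have typed before.
text = "i started playing guitar and then i started playing drums"

# Bigram counts: which word tends to follow each word.
following = defaultdict(Counter)
words = text.split()
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def suggest(word):
    # Offer the most frequent continuation seen after `word`, if any.
    options = following.get(word)
    return options.most_common(1)[0][0] if options else None

print(suggest("started"))   # -> "playing" (seen twice after "started")
```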

Quality, specific and interesting answers

But LaMDA also generates fluid and spontaneous responses and, according to Google, has the ability to reproduce the dynamism and nuances of human conversation. In short, its answers do not look as if they were produced by a robot.

LaMDA has a unique ability to guess the most appropriate words in each context – Photo: BBC / GETTY IMAGES

That fluidity is one of Google’s goals, according to its technology blog. And the company says it achieves that goal by making sure its answers are of high quality, specific and interesting.

For an answer to be of quality, it must make sense. If I tell LaMDA, for example, “I started playing the guitar,” it should reply with something related to what I said, not with something nonsensical.

To meet the second goal (a specific answer), LaMDA should not reply simply “very good,” but with something more specific, such as: “Which brand of guitar do you prefer, Gibson or Fender?”

And for the system to produce answers that show interest and insight, it needs to reach a higher level. For example: “The Fender Stratocaster is a good guitar, but Brian May’s Red Special is unique.”
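
As a toy illustration of how candidate replies could be ranked against those three goals, here is a hedged sketch. The numeric scores and weights are invented; in practice such labels come from human raters and trained models, not hand-set numbers.

```python
# Candidate replies to "I started playing the guitar", with invented
# (sensible, specific, interesting) ratings between 0.0 and 1.0.
candidates = {
    "Very good.": (1.0, 0.1, 0.1),
    "Which brand of guitar do you prefer, Gibson or Fender?": (1.0, 0.7, 0.5),
    "The Stratocaster is good, but Brian May's Red Special is unique.": (1.0, 0.9, 0.9),
}

def score(ratings, weights=(0.5, 0.3, 0.2)):
    # Weighted sum of the three goals; the weights are made up here.
    return sum(r * w for r, w in zip(ratings, weights))

best = max(candidates, key=lambda reply: score(candidates[reply]))
print(best)   # -> the detailed Red Special reply ranks highest
```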

The key to answering with this level of detail is self-training. “After reading billions of words, [the system] has an enormous ability to intuit which words are most appropriate in each context.”

For artificial intelligence experts, transformers like the one behind LaMDA are a milestone, because “they have allowed very efficient processing [of information or texts] and have brought about a genuine revolution in the field of natural language processing.”

Another goal of LaMDA’s training is to avoid “creating violent or gory content, promoting slurs or hate speech against groups of people, or containing profanity,” according to Google’s artificial intelligence (AI) blog.

Google also wants the answers to be grounded in facts and in known external sources.

“With LaMDA, we are taking a careful and thoughtful approach to better addressing valid concerns about fairness and veracity,” said Brian Gabriel, a spokesman for Google.

He argued that the system has already undergone 11 separate reviews under the AI Principles, “in addition to rigorous research and testing based on key metrics of quality, safety and the system’s ability to produce statements grounded in facts.”

But how can a system like LaMDA avoid producing biased or hateful messages? “The key is to select which data [which text sources] are fed into the system,” says Gonzalo Arroyo.

But this is not easy. “The way we communicate reflects our biases, and machines learn them. It is difficult to remove them from the training data without losing its representativeness,” he explained.

In other words, biases may still appear.

“If there are news items about Queen Letizia [of Spain] that always comment on the clothes she is wearing, it is possible that when someone asks the system about her, it will repeat that sexist pattern and talk about clothes rather than other things,” said the professor.

In 1966, a system called ELIZA was created that simulated the conversation of a psychotherapist by applying very simple patterns.

“It encouraged the patient to keep talking, whatever the topic of conversation, and it applied patterns such as ‘if the word family is mentioned, ask how your relationship with your mother is,’” says Gonzalo.
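
A minimal sketch of such a rule-based chatbot might look like this; the rules below are made up for illustration, and the real ELIZA used a richer system of keyword-ranked decomposition and reassembly patterns:

```python
# Invented keyword rules in the spirit of ELIZA's pattern matching.
RULES = [
    ("family", "Tell me more about your relationship with your mother."),
    ("mother", "How does that make you feel?"),
    ("sad", "Why do you think you feel sad?"),
]

def reply(utterance):
    lowered = utterance.lower()
    for keyword, response in RULES:
        if keyword in lowered:      # fire the first matching rule
            return response
    return "Please, go on."         # default: encourage the patient to talk

print(reply("I argued with my family yesterday"))
# -> "Tell me more about your relationship with your mother."
```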

Some people believed that ELIZA really was a therapist; some even said it had helped them. “We humans are easily deceived,” says Gonzalo Arroyo.

According to him, Lemoine’s claim that LaMDA has become self-aware is an exaggeration. In the professor’s view, Lemoine’s statements do not help to maintain a healthy debate about artificial intelligence.

“It does no good to hear such nonsense. We run the risk of people becoming obsessed and thinking we are in The Matrix, that the machines are ready to finish us off. That is a very distant prospect; it is science fiction,” he said.

However fluent and precise the conversation may be, “it is just a gigantic formula that adjusts its parameters to better predict the next word. It has no idea what it is talking about.”

Google’s response is similar. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic. If you ask what it’s like to be an ice-cream dinosaur, they can generate text about melting and roaring and so on,” explains Google’s Gabriel.

U.S. researchers Emily Bender and Timnit Gebru have compared these language-generation systems to “stochastic parrots” that string words together at random, without understanding them.

That is why Spanish researchers Ariel Guersenvaig and Ramón Sangüesa have said that transformers like LaMDA understand what they write about as much as a parrot understands what it sings.

This text was originally published in BBC News Brazil.
