How LaMDA works: Google’s artificial brain, which an engineer has “accused” of being conscious

Alicia Hernandez @post_post – BBC News World

Posted on 6/17/2022 at 5:21 PM

LaMDA is an artificial brain that lives in the cloud. Its feed is made up of millions of texts, and it trains itself – (credit: Getty Images)

A machine that thinks and is conscious. That is how Google engineer Blake Lemoine described LaMDA, Google’s artificial intelligence system.

The company removed Lemoine from his duties.

“Our team, which includes ethics and technology experts, has examined Blake’s concerns in accordance with our AI Principles and informed him that the evidence does not support his claims,” Google spokesman Brian Gabriel said in a statement.

But how does this machine work?

If we think back to old science fiction films, we might imagine LaMDA as a robot that takes on human form, opens its eyes, becomes self-aware, and speaks. Or like HAL 9000, the supercomputer in the film 2001: A Space Odyssey, which, in a parody on The Simpsons (voiced by Pierce Brosnan in the English original), falls in love with Marge and wants to kill Homer.

But the reality is a little more complex. LaMDA is an artificial brain that lives in the cloud. Its diet consists of millions of texts, and it does its training by itself. And yet, in some ways, it behaves like a parrot.

Sounds complicated? Let’s take it step by step.

LaMDA is a huge network of self-training neurons – (credit: Getty Images)

Superbrain

LaMDA (Language Model for Dialogue Applications) is built on a transformer, an architecture of deep artificial neural networks that Google created in 2017.

“This neural network trains itself with huge quantities of text. But the learning has a purpose, and it is presented as a game: the system is given a whole sentence with one word missing, and it has to guess that word,” explains Julio Gonzalo Arroyo, professor at UNED (National University of Distance Education) in Spain and principal investigator in its Natural Language Processing and Information Retrieval group.

The system plays against itself. It fills in words by trial and error and, when it gets one wrong, it behaves like a child with an activity book: it looks up the correct answer at the back and uses it to correct and refine its parameters.
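To make the game concrete, here is a minimal sketch using the open-source Hugging Face transformers library, with the small public model BERT standing in for LaMDA (whose weights are not public). It only shows the guessing game with a model that has already been trained; during training, the system would compare each guess against the hidden word and adjust its parameters accordingly.

```python
# A minimal sketch of the "missing word" game, assuming the Hugging Face
# `transformers` library is installed (pip install transformers) and the
# model can be downloaded. BERT stands in for LaMDA here.
from transformers import pipeline

# Load a small, already-trained masked-language model.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model must guess the word hidden behind [MASK].
for guess in unmasker("I started playing the [MASK] last month.")[:3]:
    print(f"{guess['token_str']:>8}  score={guess['score']:.2f}")
# Plausible guesses: "guitar", "piano", "violin", ...
```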

It also “identifies the meaning of each word and observes the words around it,” according to Gonzalo Arroyo. In this way it becomes an expert at predicting patterns and words. The process is similar to predictive text on a mobile phone, but raised to the tenth power and with far more memory.
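The predictive-text analogy can be made literal in a few lines of Python. The toy sketch below simply counts which word most often follows which, the crudest form of next-word prediction; a transformer does the same job while weighing every word in the sentence at once, but the underlying task is identical.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on millions of documents.
corpus = ("i started playing guitar . i started playing piano . "
          "i started learning chords").split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str | None:
    """Return the word most often seen after `word`, like phone autocomplete."""
    seen = following.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("started"))  # -> "playing" (seen twice vs. once)
print(predict_next("playing"))  # -> "guitar" (first seen wins the tie)
```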

Quality, specific, and interesting answers

But LaMDA also generates agile and spontaneous responses and, according to Google, can reproduce the dynamism and nuance of human conversation. In short, its replies do not look like they were written by a robot.

LaMDA has a unique ability to intuit the most appropriate words in each context – (credit: Getty Images)

That fluency is one of Google’s goals, according to its technology blog. And the company says it achieves that goal by making sure the answers have quality, are specific, and are interesting.

For an answer to have quality, it must make sense. If I tell LaMDA, for example, “I started playing guitar,” it should reply with something related to what I said, not with nonsense.

To meet the second goal (being specific), LaMDA should not reply simply “very good,” but with something more concrete, such as: “Which guitar brand do you prefer, Gibson or Fender?”

And for its answers to show interest and insight, the system needs to reach a higher level still. For example: “The Fender Stratocaster is a good guitar, but Brian May’s Red Special is one of a kind.”

The key to answers with this level of detail is self-training. “After reading billions of words, [the system] has a great ability to intuit which words are most appropriate in each context,” says Gonzalo Arroyo.

“It doesn’t make sense to anthropomorphize today’s conversational models,” says Google – (credit: Getty Images)

For artificial intelligence experts, transformers such as LaMDA marked a turning point, because “they made very efficient processing [of information and texts] possible and brought about a genuine revolution in the field of natural language processing.”

Safety and bias

Another goal of LaMDA’s training is to avoid “creating violent or gory content, promoting slurs or hate speech against groups of people, or containing profanity,” according to Google’s artificial intelligence (AI) blog.

Google also wants the answers to be grounded in facts and in known external sources.

“With LaMDA, we are taking a careful and thoughtful approach to better addressing valid concerns about fairness and veracity,” said Google spokesman Brian Gabriel.

He argued that the system has already gone through 11 different reviews against the AI Principles, “in addition to rigorous research and testing based on key metrics of quality, safety, and the system’s ability to produce statements based on facts.”

But how can a system like LaMDA avoid producing biased or hateful messages? “The secret is choosing which data [which text sources] are fed into the system,” says Gonzalo Arroyo.

But that is not easy. “The way we communicate reflects our biases, and the machines learn them. It is hard to strip bias from the training data without losing representativeness,” he explains.

In other words, biases can still slip through.
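As a rough illustration of what “choosing which data goes in” can mean, the hypothetical sketch below screens documents against a word blocklist before they reach the training set. Real curation pipelines rely on trained safety classifiers rather than a simple list; the terms and corpus here are placeholders.

```python
# Hypothetical pre-training data filter: keep a document only if it
# contains none of the blocked terms. Real curation is far more elaborate.
BLOCKLIST = {"badword1", "badword2"}  # placeholder terms

def is_acceptable(document: str) -> bool:
    words = set(document.lower().split())
    return BLOCKLIST.isdisjoint(words)

raw_corpus = [
    "a friendly conversation about guitars",
    "an abusive rant containing badword1",
]
training_corpus = [doc for doc in raw_corpus if is_acceptable(doc)]
print(training_corpus)  # only the first document survives
```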

“We humans are quite easily deceived,” says Spanish professor Julio Gonzalo Arroyo – (credit: Getty Images)

“If you feed it news about Queen Letizia [of Spain] that comments on the clothes she wears, it is likely that, when someone asks the system about her, it will repeat that sexist pattern and talk about clothes instead of other things,” the professor points out.

A parrot’s song

In 1966, a system called ELIZA was designed that applied very simple patterns to simulate a psychotherapist’s conversation.

“The system encouraged the patient to keep talking, whatever the topic of conversation, and it applied patterns such as: if the user mentions the word family, ask how their relationship with their mother is,” says Gonzalo.
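Weizenbaum’s program amounted to a short list of pattern-matching rules of exactly that kind. The sketch below is a heavily simplified reconstruction of the idea, not the original ELIZA script.

```python
import re

# A handful of ELIZA-style rules: (pattern, canned response template).
# Illustrative only; the real ELIZA script was larger and more subtle.
RULES = [
    (re.compile(r"\b(family|mother|father)\b", re.I),
     "How is your relationship with your mother?"),
    (re.compile(r"\bI feel (.+)", re.I),
     "Why do you feel {0}?"),
]
DEFAULT = "Please, tell me more."  # keep the patient talking

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("Lately I argue a lot with my family"))
# -> "How is your relationship with your mother?"
print(respond("I feel tired all the time"))
# -> "Why do you feel tired all the time?"
print(respond("The weather was nice today"))
# -> "Please, tell me more."
```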

Some people thought ELIZA really was a therapist; some even said it had helped them. “We humans are easily deceived,” says Gonzalo Arroyo.

In his view, Lemoine’s claim that LaMDA has become self-aware is “an exaggeration.” According to the professor, such statements do not help to maintain a healthy debate about artificial intelligence.

“Hearing this kind of nonsense is not healthy. We run the risk of becoming obsessed, of people thinking we are in the Matrix and that the machines are ready to finish us off. That is a remote, far-fetched scenario. I don’t think it is the case.”

However fluent, high-quality, and accurate the conversation, “it is just a huge formula that adjusts its parameters to better predict the next word. It doesn’t know what it’s talking about.”

Google’s response is similar. “These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic. If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on,” explains Google’s Gabriel.

U.S. researchers Emily Bender and Timnit Gebru compared these language-generation systems to “stochastic parrots” that repeat words at random.

That is why Spanish researchers Ariel Guersenvaig and Ramón Sangüesa say that transformers like LaMDA understand what they write about as much as a parrot understands the songs it repeats.


This text was originally published at https://www.bbc.com/portuguese/geral-61845144
