Why Google decided to hit the brakes on artificial intelligence

Google on Wednesday (11) announced a series of new features for the systems that integrate its services, such as a new version of Android. When it comes to artificial intelligence, however, the impression left is that the American giant has hit the brakes. The announcements in this area of technology were focused on testing and on solving known problems.

For example, Google has given other developers access to the Monk Skin Tone Scale, a scale of 10 skin tones created by professor Ellis Monk of Harvard University. The idea is that these parameters will help train AI models so that the technology takes into account (and respects) people's diversity, reducing the risk of discriminatory algorithms – a problem that has existed in this market for years.

Google also announced a new app called AI Test Kitchen, which will let people test the company's latest artificial intelligence language models, find bugs, and comment on them before they are made available to the general public.

Zoubin Ghahramani, vice president of research in the company's AI division, believes that the adoption of artificial intelligence will be slow and gradual, because there are still many problems to be solved and Google wants to be more cautious.

Fixing past (and present) mistakes

The new stance may be tied to criticism that Google has already drawn from the academic community over its development of new language models for artificial intelligence. The company's own employees have also voiced dissatisfaction.

Some employees complained that they were fired for pointing out problems such as gender and racial bias in the models presented by the company.

One example is computer scientist Timnit Gebru, former co-lead of the company's AI ethics research team. Her dismissal was followed by accusations, made by her, of racism and censorship at the company. At the time, Google said there was “a lot of speculation and misunderstanding” about her departure.

The AI Test Kitchen app will be available for Android, but installing it will require an invitation. The app will test LaMDA 2, an AI model specialized in natural human language that Google is developing.

It works like this: you talk to it in your own way, and it responds, trying to grasp the nuances and subtleties of the language people normally use, which can be hard for machines to interpret.

According to Google, the app will be an experimental space for the company to test products under development with LaMDA 2, such as Search, gather feedback from the community, improve what is delivered and, of course, fix any discrimination problems that may arise.

Invitations to download AI Test Kitchen will be limited. One likely factor is how badly things of this kind have already gone in the past.

In 2015, a Black couple was labeled “gorillas” by Google's photo service. And you may remember the trouble Microsoft had with Tay, its AI-driven chatbot, after Twitter users “taught” it to reproduce racist and misogynistic speech.

Or Ask Delphi, an artificial intelligence created to answer ethical questions that could be persuaded to condone genocide.

Google's new app essentially invites a community of testers to critique its product, but in a faster way and with more control over the feedback. This suggests the company expects some things to still go wrong and to need fixing.

Future use of AI

The AI Test Kitchen app has three demo modes:

  • “Imagine”
  • “Speak”
  • “List”

Each one tests a different capability of the language model the company is developing:

Imagine: You name a real or imaginary place, and LaMDA tries to describe it. The system should be able to describe it in detail.

  • For example, when prompted with “Imagine I’m at the bottom of the sea,” the AI replied: “You are in the Mariana Trench, the deepest point in the ocean. Waves crash against the walls of your submarine. You are surrounded by darkness.”

Speak: The artificial intelligence tries to hold a conversation on any topic. The idea is to see whether the system strays from what is being discussed. An example:

  • The AI asks, “Have you ever wondered why dogs like to play so much?”
  • The user replies with just “why?”
  • The system understands the context and explains that it has to do with dogs’ sense of smell.
  • If the user then asks “why is their sense of smell so good?”, the system understands (or should understand) that “their” refers to the dogs, without the person having to repeat the keywords.

List: The app tries to break any topic down into a list of relevant subtasks.

  • When the user says “I want to plant vegetables,” the response is “What do you want to plant?” along with a list of tasks and items to gather, such as “water and other care.”
