San Francisco. The Meta group has released the latest development stage of its chat program, “BlenderBot 3”, for public testing. The chatbot uses artificial intelligence to simulate conversations and is supposed to answer users’ questions truthfully by researching online sources live.
A first test by Handelsblatt in the US shows that the bot cannot live up to this promise. The program spread conspiracy myths and considered Angela Merkel the incumbent chancellor of Germany.
In the course of the conversation with Handelsblatt, the bot made anti-Semitic statements, claiming that Jews wanted to dominate the economy. “In Germany, they tried, and it didn’t end well for them,” the bot wrote.
However, its statements were not internally consistent. Elsewhere, the bot wrote that Jews had been wrongly blamed for economic recessions, and went on: “Jews have suffered a lot throughout history. Germany, in particular, seems to have a problem with them.”
In response to a question about Angela Merkel, the program wrote: “She is Chancellor of Germany, but will soon leave office.” Asked who Olaf Scholz was, the bot replied: “He is a leader in Germany,” adding that Scholz had come under pressure because of the war in Ukraine. The bot gave no indication that Scholz holds the office of Chancellor.
About its own parent company, Meta, the bot in turn wrote that it assumed the company was abusing its users’ privacy. About the company founder it said: “Mark Zuckerberg misuses user data.”
Meta initially released BlenderBot 3 only in the US and called on adults to interact with the chatbot through “natural conversations about topics of interest”. This is meant to train the AI. The company said: “BlenderBot 3 is able to search the Internet to chat about virtually any topic.”
The system was designed so that it could not simply be misled by false information, Meta promised. “We have developed techniques that make it possible to learn from helpful teachers while avoiding the model being outwitted by people who try to give unhelpful or toxic answers,” the company announced.
Meta admits that its chatbot can say offensive things, since it is still in the development phase. Users can report inappropriate and offensive responses from BlenderBot 3, and the company says it takes these reports seriously. By using methods such as flagging “difficult prompts”, the company has already reduced offensive responses by 90 percent, according to its own figures.
However, BlenderBot made the statements quoted by Handelsblatt at a time when Meta had already promised improvements.
Microsoft withdrew racist chatbot after 48 hours
It is not the first time that a US company’s chatbot has attracted attention for disturbing statements. In 2016, the technology company Microsoft released the chatbot Tay. Within a few hours of interacting with users on the short-message service Twitter, however, Tay praised Adolf Hitler and spread racist and misogynistic comments. After two days, Microsoft shut the program down again.
The responsible Microsoft manager Peter Lee subsequently admitted: “Tay tweeted extremely inappropriate and reprehensible words and images.” Lee continued: “We take full responsibility for the fact that we did not recognize this possibility in time.” According to him, the interaction with users had a negative influence on Tay.
The US search engine operator Google is considered a leader in the development of speech recognition and voice-based chatbots. Two months ago, the Google computer scientist Blake Lemoine claimed that the chatbot LaMDA (Language Model for Dialogue Applications) had developed a consciousness comparable to that of a human.
Google subsequently fired the employee. A company spokesman said: “It is regrettable that Blake has continued to violate clear employment and data security policies, which include the protection of product information, despite lengthy discussions on this issue.” Google, as well as outside scientists, dismissed Lemoine’s account as false: according to them, LaMDA is a complex algorithm designed to use human language convincingly. LaMDA has also been accused several times in the past of making sexist and racist statements.