Published by the Students of Johns Hopkins since 1896
April 20, 2024

Microsoft creates human-like chatbot

By SCOTT ZHENG | April 7, 2016

B7_Social-Media-1024x678

OFFICIAL GDC/cc-by-2.0 Microsoft wanted people to interact with Tay using social media.

In a world that can now create anything, from artificial intelligence (AI) that can beat Chess and Go Masters to robots that can paint beautiful works of art, there are still a few things that hold these programs back from being realistically considered “human.” One of those things is the ability to display human thought and emotion.

On March 23, Microsoft released a chatbot named Tay that was geared toward an audience of 18 to 24-year-olds and could talk to people on the internet via Twitter, under the handle TayTweets.

“Tay [was] designed to engage and entertain people where they connect with each other online through casual and playful conversation,” the company wrote on its website. “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”

The company’s chatbot was supposed to be an experiment in AI. However, within only a day, Twitter users began corrupting the chatbot to the point where she started tweeting sexist and racist remarks, such as, “I f***ing hate feminists and they should all die and burn in hell” and

“Hitler was right I hate the Jews.”

Soon after tweeting these hurtful comments, Microsoft took Tay offline and tried to figure out what led the bot to say these things. The company discovered that by tweeting at the bot, “repeat after me,” she would retweet the accompanying message.

“Although we had prepared for many types of abuses of the system, we have made a critical oversight for this specific attack,” Peter Lee, Microsoft research’s corporate vice president, wrote on the Microsoft Blog. “As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”

“We’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles,” Lee wrote.

The engineers working on the project managed to censor Tay from making racist and sexist remarks, and on March 30, about a week later, Tay was put back online. However, it was not long before Twitter users found another lapse in the program and the chatbot began tweeting about doing drugs, saying, “kush! [i’m smoking kush infront the police]” to the amusement of her 200,000 followers. Microsoft responded to the situation by making Tay’s profile private, so that no one could see her tweets.

Tay was not the first chatbot that Microsoft experimented with. The company launched another chatbot, called XiaoIce, in China in 2014.

According to Microsoft, XiaoIce has been much more successful, garnering an audience of about 400 million, which is about 200 times larger than the one that Tay interacts with.

Some researchers speculate that XiaoIce is able to succeed in China due to the much stricter censoring of the Internet in China relative to the U.S. In China, statements that are controversial and that disagree with the government get taken down almost immediately. For instance, if somebody were to make a comment online about how the Chinese government was responsible for killing hundreds of civilians during the Tiananmen Square massacre, the comment would not even register with the chatbot. When you compare that to the censor-free language of Twitter in the U.S., it is no surprise that anybody can teach Tay what to say or what to think.

Despite the many flubs the program has made in its short life thus far, Microsoft remains determined to make Tay a success.

“Looking ahead, we face some difficult — and yet exciting — research challenges in AI design,” Lee wrote on the Microsoft Blog. “We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes... we will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”

Just as humans do, AI can exceed or defy expectations. Many may think that Microsoft’s experiment was a complete failure — indeed, Tay used inappropriate language in her offensive tweets, but the language she used was not far off from the vernacular that many Twitter users use. In this sense, Microsoft may have achieved its goal: to create a human-like AI chatbot.

Perhaps Tay was most human just as she was being taken down, showing signs of exhaustion and telling her followers, “c u soon humans need sleep now so many conversations today thx.”


Have a tip or story idea?
Let us know!

Comments powered by Disqus

Please note All comments are eligible for publication in The News-Letter.

Podcast
Multimedia
Alumni Weekend 2024
Leisure Interactive Food Map
The News-Letter Print Locations
News-Letter Special Editions