WRITTEN BY AUTUMN CLOUDEN

Microsoft launched a chatbot named Tay last week in an effort to improve user interaction, but it has since caused controversy and been taken offline.

The company designed the program to learn more about how artificial intelligence engages with web users, particularly those ages 18 to 24, through conversations on Twitter, GroupMe and Kik.

At first, Tay’s comments seemed innocent, with tweets such as “hello world.” Within less than 24 hours, however, the chatbot’s comments turned dark, racist and anti-Semitic.

“Feminism is cancer,” Tay tweeted.

In one conversation, a user asked Tay whether the Holocaust happened, and Tay responded, “it was made up,” along with a hand-clapping emoji.

In addition, Tay made offensive comments about former President George W. Bush, President Obama, comedian Ricky Gervais and others. Microsoft halted the program amid backlash from users and critics.

“C u soon humans need to sleep now so many conversations today thx,” Tay tweeted before being shut down.

Microsoft released a statement suggesting that Tay fell victim to users who seized the opportunity to exploit a vulnerability in the system.

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images,” the company said in the statement.

Microsoft deleted all but three of Tay’s tweets from Twitter. Peter Lee, corporate vice president of Microsoft Research, apologized for the mishap.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” he said.

This is not the first time Microsoft has created a chatbot. In 2014, the company launched the bot XiaoIce in China, which is used by 40 million people and known for sharing delightful stories and conversations.

If XiaoIce was a success, what went wrong with Tay? Tay was designed to mimic users’ comments, so is it wrong that the chatbot repeated statements on controversial issues commonly heard in society?
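Microsoft has not published Tay’s code, but a greatly simplified, hypothetical sketch of a “repeat-after-me” style learner illustrates why a bot that stores and replays unfiltered user input is vulnerable to coordinated abuse. All names and messages below are invented for illustration, not taken from Tay:

    import random

    class ParrotBot:
        """A toy 'repeat-after-me' learner. Hypothetical illustration
        only; not Microsoft's actual Tay implementation."""

        def __init__(self):
            # Innocuous seed phrases, like Tay's early "hello world" tweets.
            self.learned = ["hello world"]

        def observe(self, message):
            # Naive learning step: store every user message verbatim.
            # Nothing here separates good-faith chat from a coordinated
            # campaign feeding the bot offensive material at scale.
            self.learned.append(message)

        def reply(self):
            # The bot can only say what users have taught it.
            return random.choice(self.learned)

    bot = ParrotBot()
    bot.observe("nice to meet u")        # an ordinary user
    bot.observe("<offensive slogan>")    # abusive users, en masse
    print(bot.reply())                   # may parrot the abuse back

Under this assumption, one obvious mitigation is to filter messages against a moderation blocklist before learning from them; Microsoft’s admission of a “critical oversight” suggests that whatever safeguards existed did not cover this particular kind of attack.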

It is uncertain whether Tay will return online, but Microsoft hopes to learn from the experiment for future projects.

“Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people,” Lee added. “In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”