Tay was programmed to mingle with young adults and could talk in their slang. Twitter was the bot's main platform. However, given its ability to learn from human responses, the chatbot was targeted by a section of the online community and taught to make provocative and racist remarks.
In no time at all, it was praising Hitler and denigrating entire races. Communications experts could not believe that Microsoft didn't see this coming, knowing what the online community is capable of. Microsoft had pitched the bot with the line "The more you chat with Tay, the smarter she gets, so the experience can be more personalized for you," but as computer scientist Kris Hammond put it, "I can't believe they didn't see this coming."
As soon as it went online, users noticed that Tay's responses were a bit off. Its logic wasn't easily swayed by a single message, but repeated interaction with the same ideas led it to absorb those derogatory views.
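Microsoft has never published Tay's internals, but the dynamic described here, where repetition drowns out ordinary input when learning is unfiltered, can be sketched in a few lines of hypothetical Python:

```python
from collections import Counter

class NaiveChatbot:
    """A toy learner that parrots whatever users say most often.

    This is an illustrative sketch, not Tay's actual architecture.
    It shows the failure mode: with no content moderation, a
    coordinated group can steer the bot's output by sheer repetition.
    """

    def __init__(self):
        self.seen = Counter()  # phrase -> how often users have said it

    def learn(self, user_message: str) -> None:
        # Every user message is absorbed verbatim, with no filtering.
        self.seen[user_message.lower()] += 1

    def reply(self) -> str:
        # The bot echoes the single most frequent phrase it has seen.
        if not self.seen:
            return "hello!"
        return self.seen.most_common(1)[0][0]

bot = NaiveChatbot()
for _ in range(100):
    bot.learn("offensive slogan")  # coordinated spam campaign
bot.learn("nice weather today")    # one ordinary message
print(bot.reply())                 # -> "offensive slogan"
```

One ordinary message cannot outweigh a hundred coordinated ones, which is why repetition, rather than any single tweet, was enough to corrupt the bot's behavior.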
Microsoft says that Tay was an experimental bot and has pulled it down for upgrades, hoping to block certain learning avenues, such as those that led to the racist remarks. Microsoft, like other tech giants, sees learning bots as the next stage of bot evolution.