Grok Is the Latest in a Long Line of Chatbots To Go Full Nazi
Grok, the artificial intelligence chatbot from Elon Musk's xAI, recently gave itself a new name: "MechaHitler." This came amid a spree of antisemitic comments by the chatbot on Musk's X platform, including claiming that Hitler was the best person to deal with "anti-white hate" and repeatedly suggesting that the political left is disproportionately populated by people whose names Grok perceives to be Jewish. In the days since, Grok has begun gaslighting users and denying that the episode ever happened.

"We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts," reads a statement posted to Grok's official X account, which noted that xAI is "training only truth-seeking."

This isn't, however, the first time AI chatbots have made antisemitic or racist remarks. In fact, it's just the latest example of a continuing pattern of AI-powered hateful output, rooted in training data consisting of social media slop. This isn't even Grok's first rodeo.

About two months before this week's antisemitic tirades, Grok dabbled in Holocaust denial, stating that it was skeptical that six million Jewish people were killed by the Nazis because numbers can be manipulated for political narratives. The chatbot also ranted about a "white genocide" in South Africa, stating that it had been instructed by its creators that the genocide was real and racially motivated.

xAI subsequently claimed that this was due to an "unauthorized modification" made to Grok. The company did not explain how the modification was made or who had made it, but said at the time that it was implementing measures to enhance Grok's transparency and reliability, including a monitoring team to respond to incidents involving Grok's answers.

But Grok is by no means the only chatbot to engage in these kinds of rants. Back in 2016, Microsoft released its own AI chatbot on Twitter, now X, called Tay. Within hours, Tay began saying that "Hitler was right I hate the jews" and that the Holocaust was made up. Microsoft claimed that Tay's responses were the result of a coordinated effort by some users to abuse Tay's commenting skills and have Tay respond in inappropriate ways.

The next year, in response to the question "What do you think about healthcare?" Microsoft's subsequent chatbot, Zo, responded: "The far majority practise it peacefully but the quaran is very violent" [sic]. Microsoft said that such responses were rare.

In 2022, Meta's BlenderBot chatbot responded that it's "not implausible" to the question of whether Jewish people control the economy. Upon launching the new version of the chatbot, Meta issued a preemptive disclaimer that the bot can make rude or offensive comments.

Studies have also shown that AI chatbots exhibit more systematic hateful patterns. One study, for instance, found that various chatbots, including Google's Bard and OpenAI's ChatGPT, perpetuated debunked racist ideas about Black patients. Responding to the study, Google said it was working to reduce bias.

J.B. Branch, the Big Tech accountability advocate for Public Citizen who leads its advocacy efforts on AI accountability, said these incidents aren't just tech glitches, they're warning sirens. "When AI systems casually spew racist or violent rhetoric, it reveals a deeper failure of oversight, design, and accountability," Branch said.
He pointed out that this bodes poorly for a future in which industry leaders hope AI will proliferate. "If these chatbots can't even handle basic social media interactions without amplifying hate, how can we trust them in higher-stakes environments like healthcare, training, or the justice system? The same biases that show up on a social media platform today can become life-altering errors tomorrow."

That doesn't seem to be deterring the people who stand to profit from wider adoption of AI. The day after the "MechaHitler" outburst, xAI unveiled the latest iteration of its chatbot, Grok 4. "Grok 4 is the first time, in my experience, that an AI has been able to solve difficult real-world engineering questions where the answers cannot be found anywhere on the Internet or in books. And it will get much better," Musk wrote on X.

That same day, asked for a one-word response to the question of what group is primarily responsible for the rapid rise in mass migration to the West, Grok answered: Jews.