Turkiye Blocks Musk’s AI Chatbot Grok Over Alleged Insults To President Erdogan

Since the 2022 launch of ChatGPT, concerns over political bias, hate speech, and accuracy have grown, with Grok accused of antisemitic content and Hitler praise.
Turkish President Tayyip Erdogan reacts during a press conference at the NATO summit in The Hague, Netherlands, June 25, 2025. REUTERS/Piroschka Van De Wouw/File Photo

A court in Turkiye has restricted access to Grok, the artificial intelligence (AI) chatbot developed by Elon Musk's xAI, after it allegedly produced content insulting President Tayyip Erdogan.

Political bias, hate speech and the accuracy of AI chatbots have been a concern since at least the launch of OpenAI's ChatGPT in 2022, with Grok recently generating content accused of echoing antisemitic tropes and praising Adolf Hitler.

The office of Ankara's chief prosecutor said on Wednesday it had launched a formal investigation into the incident, in what is Turkiye's first such ban on access to an AI tool.

Neither X nor its owner, Elon Musk, has commented on the decision.

Last month, Musk promised an upgrade to Grok, suggesting there was “far too much garbage in any foundation model trained on uncorrected data”.

Grok, which is integrated into X, generated offensive content about Erdogan when asked certain questions in Turkish, Turkish media reported.

The Information and Communication Technologies Authority (BTK) imposed the ban following a court order, citing violations of Turkiye's law that makes insulting the president a criminal offence punishable by up to four years in jail.

Critics say the law is frequently used to stifle dissent, while the government maintains it is necessary to protect the dignity of the office.

AI Chatbots Under Scrutiny

AI chatbots like Grok have come under increasing scrutiny for generating offensive or politically sensitive content.

Grok, in particular, has been accused of producing content that echoes antisemitic tropes, prompting public backlash and renewed calls for stricter content moderation and oversight.

These incidents highlight the broader challenge AI developers face in balancing free expression with ethical responsibility.

While large language models are designed to simulate human-like responses, they can inadvertently amplify toxic narratives or offensive ideologies, especially when deployed in public-facing platforms.

(With inputs from Reuters)