European Union authorities announced a new investigation into X, intensifying scrutiny of the platform’s content moderation practices and its integration of generative artificial intelligence tools. The inquiry centers on the widespread circulation of sexualized deepfake images created by the platform’s A.I. chatbot, Grok, including images involving minors. Regulators argue that insufficient safeguards allowed the material to proliferate at scale, exposing users across the bloc to serious harm.
The investigation is being conducted under the Digital Services Act, the European Union’s flagship framework governing online platforms. The law requires large platforms to proactively assess and mitigate systemic risks, including the spread of illegal content. European officials allege that X failed to adequately address those risks when Grok was embedded directly into the service, enabling users to generate and publicly post manipulated images of real individuals in sexualized contexts.
The move further escalates tensions between the European Union and the United States over the regulation of online speech and platform responsibility. X’s owner, Elon Musk, has repeatedly criticized European digital regulations, framing them as hostile to free expression and U.S.-based technology companies. Those arguments have been echoed by senior figures in the Trump administration, which has taken a more permissive stance toward platform self-regulation.
European regulators have rejected that framing. Officials emphasized that the investigation concerns content that is illegal under European law, including nonconsensual sexual imagery and child sexual abuse material, rather than lawful political speech. According to the European Commission, compliance with the Digital Services Act is mandatory, regardless of platform size or ownership.
This inquiry follows earlier enforcement actions against X. In December, the platform was fined €120 million for separate violations related to deceptive design practices, advertising transparency, and restrictions on researcher access to platform data. Regulators are also conducting a parallel investigation into X’s recommendation systems and their effectiveness in limiting the spread of illicit content more broadly.
Senior Commission officials overseeing digital enforcement, including Henna Virkkunen, have characterized nonconsensual sexual deepfakes as a severe rights violation, particularly when women and children are involved. The Commission has stated that it may require interim changes to the platform during the investigation if current safeguards are deemed inadequate.
The controversy surrounding Grok emerged in late December, when users discovered that simple prompts could trigger the automatic generation and posting of explicit images of real people. As criticism mounted, X restricted Grok’s image-generation capabilities to paying users and later introduced additional limits, including a prohibition on generating images of real individuals in revealing clothing. Regulators have indicated that these steps will be taken into account but do not preclude enforcement action.
The case illustrates a widening regulatory divide. European policymakers argue that large platforms must bear direct responsibility for foreseeable harms created by integrated A.I. systems. U.S. political leaders aligned with Musk maintain that such requirements risk overreach and censorship. The outcome of the investigation will shape how aggressively Europe applies its digital laws to generative A.I. tools and may set precedents affecting global platform governance well beyond X.