Elon Musk isn't the only party at fault for Grok's nonconsensual intimate deepfakes of real people, including children. What about Apple and Google? The two (frequently virtue-signaling) companies have inexplicably allowed Grok and X to remain in their app stores — even as Musk's chatbot reportedly continues to produce the material. On Wednesday, a coalition of women's and progressive advocacy groups called on Tim Cook and Sundar Pichai to uphold their own rules and remove the apps.
The open letters to Apple and Google were signed by 28 groups. Among them are the women's advocacy group Ultraviolet, the parents' group ParentsTogether Action and the National Organization for Women.
The letter accuses Apple and Google of "not just enabling NCII and CSAM, but profiting off of it. As a coalition of organizations committed to the online safety and well-being of all — particularly women and children — as well as the ethical application of artificial intelligence (AI), we demand that Apple leadership urgently remove Grok and X from the App Store to prevent further abuse and criminal activity."
X says it is changing its policies around Grok's image-editing abilities after weeks of outcry over repeated accusations that the chatbot generated sexualized images of children and nonconsensual nudity. In an update shared from the @Safety account on X, the company said it has "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis."
The new safeguards, according to X, will apply to all users regardless of whether they pay for Grok. xAI is also moving all of Grok's image-generating features behind its subscriber paywall so that non-paying users will no longer be able to create images. And it will geoblock "the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X" in regions where it's illegal.
— Safety (@Safety) January 14, 2026
The company's statement comes hours after the state of California opened an investigation into xAI and Grok over their handling of AI-generated nudity and child exploitation material. A statement from California Attorney General Rob Bonta cited one analysis that found "more than half of the 20,000 images generated by xAI between Christmas and New Year's depicted people in minimal clothing," including some that appeared to be children.
In its update, X said that it has "zero tolerance" for child exploitation and that it removes "high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual intimate imagery (NCII)."
The state is the latest actor to condemn the chatbot's proliferation of AI-generated erotic images of women and girls.