Grok faced potential removal from the App Store

A new report from NBC said Apple privately threatened to take the artificial intelligence app off its platform in January due to deepfake nudification concerns.


X owner Elon Musk loves to project an image of himself as a rebel and a maverick of the business world, one who thumbs his nose at regulation and stands apart from upper-class boardroom types. Yet even Musk has to abide by the rules, and according to a new report from NBC News, Musk's constant boundary-pushing almost got his artificial intelligence app Grok banned from the App Store earlier this year, in a move that could have sunk X's future prospects.

As reported by NBC: "Apple privately threatened to remove Elon Musk's artificial intelligence app, Grok, from its App Store in January after Musk's xAI failed to do enough to stop it from creating nude or sexualized deepfakes, Apple told senators in a letter that was obtained by NBC News."

The Grok deepfake scandal is also still set to cost X millions in fines, stemming from the trend of users prompting Musk's Grok AI chatbot to digitally strip down images of people in the app.

Musk initially argued that this was acceptable, saying that plenty of other AI platforms facilitate the same, and that X was only being targeted for enforcement because it allows free speech, which the elites are afraid of.

However, while other AI nudification apps exist, all of them are under restrictions or investigation. The concern in Grok's case was the scale and reach of X, and how much of X's user base was taking part in this concerning trend.

According to research commissioned by Bloomberg, Grok, at one stage early in 2026, was producing more than 6,700 images every hour that would be categorized as "sexually suggestive or nudifying."

Yet, Musk remained defiant, at least initially, and said on X that the criticism of Grok was another “excuse for censorship” of the app.

Then, in mid-January, Musk apparently got the call from Apple informing him that Grok could be banned, at which point the company changed tack, with X revising Grok's code to limit its nudification capacity.

Does that mean that X has completely addressed this element, and that Grok will no longer enable users to produce offensive images of this type?

Apparently not. According to reports, while X has restricted image generation and blocked certain prompts, limiting the potential for abuse, it is still possible to get Grok to generate nude images of real people.

A separate investigation published by NBC News this week found “dozens of AI-generated sexual images and videos depicting real people posted publicly on Musk’s social media app, X, over the past month.” According to the report, the generated images depict women “whose likenesses were edited by the AI chatbot to put them in more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits or bunny costumes.” NBC reported that many of the women featured in these depictions are female pop stars or actors.

So while X said it limited the damage and restricted this as an option, it is still possible to use its Grok chatbot to generate offensive, potentially harmful images.

Elon Musk seems to be largely in support of this. Musk also oversaw the development of AI-powered NSFW chatbots in the app, and regularly re-posts AI-generated depictions of young women on his own profile.

This, combined with his initial defense of X users’ right to be able to generate fake nudes, suggests that Musk views nudification as a viable usage of the company’s AI tools and a potential means to drive engagement.

It seems problematic that Musk is looking to use this as a way to boost X usage, considering the app has more than 500 million active users, and as such has significant capacity to amplify harmful depictions of people and events.

X itself has acknowledged this. Last month, X’s Head of Product Nikita Bier said his team was working to address AI-generated deepfakes related to the conflict in Iran, in order to protect the integrity of the platform.

Does that extend to the impact of deepfake nudes and how they could harm individuals?

It remains a contentious concern, and one which could still end up costing X significantly, even as it works to promote expanded usage of its AI tools.

The full scope of the harm that AI tools can cause is not yet clear, nor are the mental health impacts that these tools could have on victims. Yet people like Musk are pushing ahead with AI development in order to win a perceived race against other billionaires investing in the same technology.

In the end, regular people could end up being the casualties of this tech war, and people like Musk have shown little regard for this aspect.