The UK’s media regulator has opened a formal investigation into X under the Online Safety Act. "There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people — which may amount to intimate image abuse or pornography — and sexualized images of children that may amount to child sexual abuse material (CSAM)," Ofcom said. The investigation will focus on whether X "has complied with its duties to protect people in the UK from content that is illegal in the UK." That includes whether X is taking appropriate measures to prevent UK users from seeing "priority" illegal content, such as CSAM and non-consensual intimate images; if the platform is removing illegal content quickly after beco [...]
Malaysia and Indonesia are the first countries to block Grok, claiming that X’s chatbot does not have sufficient safeguards in place to prevent explicit AI-generated deepfakes of women and children [...]
Large language models (LLMs) have astounded the world with their capabilities, yet they remain plagued by unpredictability and hallucinations – confidently outputting incorrect information. In high- [...]
Deploying AI agents for repository-scale tasks like bug detection, patch verification, and code review requires overcoming significant technical hurdles. One major bottleneck: the need to set up dynam [...]
California authorities have launched an investigation into xAI following weeks of reports that the chatbot was generating sexualized images of children. "xAI appears to be facilitating the large- [...]
Although X removed Grok’s ability to create nonconsensual digitally undressed images on the social platform, the standalone Grok app is another story. It reportedly continues to produce “nudified” [...]
Netflix, Disney+, Amazon's Prime Video and other major video on-demand (VOD) streaming services are set to face stricter regulation in the UK. Platforms with a monthly average of more than 500,00 [...]
Elon Musk isn't the only party at fault for Grok's nonconsensual intimate deepfakes of real people, including children. What about Apple and Google? The two (frequently virtue-signaling) com [...]