According to a report from NPR, Meta plans to shift the task of assessing its products' potential harms away from human reviewers, instead leaning more heavily on AI to speed up the process. Internal documents seen by the publication note that Meta is aiming to have up to 90 percent of risk assessments fall on AI, NPR reports, and is considering using AI reviews even in areas such as youth risk and "integrity," which covers violent content, misinformation and more. Unnamed current and former Meta employees who spoke with NPR warned AI may overlook serious risks that a human team would have been able to identify.

Updates and new features for Meta's platforms, including Instagram and WhatsApp, have long been subjected to human reviews before they hit the publi [...]
Some of the most successful creators on Facebook aren't names you'd ever recognize. In fact, many of their pages don't have a face or recognizable persona attached. Instead, they run pa [...]
At Meta Connect 2025's kickoff event, Mark Zuckerberg unveiled a trio of new smart glasses, including Meta's first model with augmented reality. Meta's boss also announced the second generation [...]
Meta Connect, the company's annual event dedicated to all things AR, VR, AI and the metaverse, is just days away. And once again, it seems like it will be a big year for smart glasses and AI. [...]
Anthropic on Monday released Code Review, a multi-agent code review system built into Claude Code that dispatches teams of AI agents to scrutinize every pull request for bugs that human reviewers rout [...]