Meta releases the third generation of its "Segment Anything Model." Unlike standard models limited to fixed categories, SAM 3 uses an open vocabulary to understand both images and videos. The system relies on a new training method combining human and AI annotators. The article Meta's SAM 3 segmentation model blurs the boundary between language and vision appeared first on THE DECODER. [...]
Some of the most successful creators on Facebook aren't names you'd ever recognize. In fact, many of their pages don't have a face or recognizable persona attached. Instead, they run pa [...]
Microsoft on Tuesday released Phi-4-reasoning-vision-15B, a compact open-weight multimodal AI model that the company says matches or exceeds the performance of systems many times its size — while co [...]
Meta has been one of the most interesting companies of the generative AI era — it initially gained a huge and loyal user following with the release of its mostly open source Llama family of large l [...]
At Meta Connect 2025's kickoff event, Mark Zuckerberg unveiled a trio of new smart glasses, including the company's first model with augmented reality. Meta's boss also announced the second generation [...]