Meta has introduced V-JEPA 2, a 1.2-billion-parameter video model designed to connect intuitive physical understanding with robot control. The system achieves state-of-the-art results on motion recognition and action prediction benchmarks, and can control robots without additional training.
The article Meta's latest model highlights the challenge AI faces in long-term planning and causal reasoning appeared first on THE DECODER. [...]
Microsoft on Tuesday released Phi-4-reasoning-vision-15B, a compact open-weight multimodal AI model that the company says matches or exceeds the performance of systems many times its size — while co [...]
Alembic Technologies has raised $145 million in Series B and growth funding at a valuation 13 times higher than its previous round, betting that the next competitive advantage in artificial intelligen [...]
AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated. The Phi-4 fine-tuning methodology [...]
Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its [...]
Meta has been one of the most interesting companies of the generative AI era, initially gaining a huge and loyal user following with the release of its mostly open-source Llama family of large l [...]
Some of the most successful creators on Facebook aren't names you'd ever recognize. In fact, many of their pages don't have a face or recognizable persona attached. Instead, they run pa [...]
Researchers at MiroMind AI and several Chinese universities have released OpenMMReasoner, a new training framework that improves the capabilities of language models in multimodal reasoning. The framewo [...]