thenextweb
Google’s new compression algorithm cut memory stocks within hours of publication

Google published a research blog post on Tuesday about a new compression algorithm for AI models. Within hours, memory stocks were falling. Micron dropped 3 per cent, Western Digital lost 4.7 per cent, and SanDisk fell 5.7 per cent, as investors recalculated how much physical memory the AI industry might actually need. The algorithm is […]


We have found tools similar to what you are looking for. Check out our suggestions for related AI tools.

venturebeat
Nvidia says it can shrink LLM memory 20x without changing model weights

Nvidia researchers have introduced a new technique that dramatically reduces how much memory large language models need to track conversation history — by as much as 20x — without modifying the mo [...]

Match Score: 168.33

venturebeat
DeepSeek drops open-source model that compresses text 10x through images, defying conventions

DeepSeek, the Chinese artificial intelligence research company that has repeatedly challenged assumptions about AI development costs, has released a new model that fundamentally reimagines how large l [...]

Match Score: 135.60

venturebeat
New KV cache compaction technique cuts LLM memory 50x without accuracy loss

Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck. As the context grows longer, so does the KV cache, the area where the model’s working me [...]

Match Score: 107.72

venturebeat
Google's new TurboQuant algorithm speeds up AI memory 8x, cutting costs by 50% or more

As Large Language Models (LLMs) expand their context windows to process massive documents and intricate conversations, they encounter a brutal hardware reality known as the "Key-Value (KV) cache [...]

Match Score: 106.72

venturebeat
'Observational memory' cuts AI agent costs 10x and outscores RAG on long-context benchmarks

RAG isn't always fast enough or intelligent enough for modern agentic AI workflows. As teams move from short-lived chatbots to long-running, tool-heavy agents embedded in production systems, thos [...]

Match Score: 93.74

venturebeat
DeepSeek’s conditional memory fixes silent LLM waste: GPU cycles lost to static lookups

When an enterprise LLM retrieves a product name, technical specification, or standard contract clause, it's using expensive GPU computation designed for complex reasoning — just to access stati [...]

Match Score: 77.51

Destination
X's 'open source' algorithm isn't a win for transparency, researchers say

When X's engineering team published the code that powers the platform's "for you" algorithm last month, Elon Musk said the move was a victory for transparency. "We know the al [...]

Match Score: 76.44

venturebeat
Google PM open-sources Always On Memory Agent, ditching vector databases for LLM-driven persistent memory

Google senior AI product manager Shubham Saboo has turned one of the thorniest problems in agent design into an open-source engineering exercise: persistent memory. This week, he published an open-sour [...]

Match Score: 74.43

venturebeat
X open sources its algorithm: 5 ways businesses can benefit

Elon Musk's social network X (formerly known as Twitter) last night released some of the code and architecture of its overhauled social recommendation algorithm under a permissive, enterprise-fri [...]

Match Score: 67.79