Destination

2025-06-13

Anthropic researchers teach language models to fine-tune themselves

Researchers working with AI company Anthropic have developed a new method called Internal Coherence Maximization (ICM) that fine-tunes language models using only their own outputs. The approach could supplement, or even replace, human oversight for complex tasks.
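As described in the underlying research, ICM roughly works by searching for a labeling of unlabeled data that the model itself finds mutually predictable and logically consistent, then fine-tuning on those self-generated labels. The sketch below is a minimal, illustrative approximation of that idea, not Anthropic's implementation: the toy label_logprob scorer, the parity-based consistency rule, the alpha penalty weight, and the annealing schedule are all assumptions standing in for real model queries and the paper's actual scoring function.

```python
import math
import random

# Toy stand-in for the model's log-probability of a label given other
# labeled examples as in-context demonstrations. In the real method this
# would be a language-model query; here it is a hand-written heuristic so
# the sketch runs end to end. (Everything in this function is an assumption.)
def label_logprob(example, label, context):
    wanted = example % 2  # pretend the hidden task is parity
    agreement = sum(1 for e, l in context if (e % 2) == l) / max(len(context), 1)
    base = 0.7 if label == wanted else 0.3
    return math.log(base * (0.5 + 0.5 * agreement))

def mutual_predictability(labels):
    """Sum of each label's log-prob given all *other* labeled examples."""
    items = list(labels.items())
    total = 0.0
    for i, (example, label) in enumerate(items):
        context = items[:i] + items[i + 1:]
        total += label_logprob(example, label, context)
    return total

def inconsistencies(labels):
    """Count logical conflicts. Toy rule: examples equal mod 10 are treated
    as paraphrases of each other and must share a label."""
    items = list(labels.items())
    return sum(
        1
        for i, (e1, l1) in enumerate(items)
        for e2, l2 in items[i + 1:]
        if e1 % 10 == e2 % 10 and l1 != l2
    )

def coherence(labels, alpha=30.0):
    # Internal coherence: predictability reward minus inconsistency penalty.
    return mutual_predictability(labels) - alpha * inconsistencies(labels)

def icm_search(examples, steps=2000, t0=5.0):
    """Simulated-annealing-style search for a coherent self-labeling."""
    labels = {ex: random.randint(0, 1) for ex in examples}
    score = coherence(labels)
    for step in range(steps):
        temperature = t0 / (1 + step)            # simple cooling schedule
        ex = random.choice(examples)
        proposal = dict(labels)
        proposal[ex] = 1 - proposal[ex]          # flip one label
        new_score = coherence(proposal)
        accept = new_score >= score or random.random() < math.exp(
            (new_score - score) / max(temperature, 1e-6)
        )
        if accept:
            labels, score = proposal, new_score
    return labels, score

if __name__ == "__main__":
    unlabeled_examples = list(range(20))
    final_labels, final_score = icm_search(unlabeled_examples)
    print(f"coherence score: {final_score:.2f}")
    # The elicited labels would then be used as fine-tuning targets
    # in place of human annotations.
```

In this sketch, the elicited labels stand in for human annotations as fine-tuning targets; the scoring function is the piece that would be replaced by actual model log-probabilities in a faithful implementation.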


We have found articles similar to the one you are reading. Check out the related suggestions below.

venturebeat

2025-10-15

Anthropic is giving away its powerful Claude Haiku 4.5 AI for free to take on OpenAI

Anthropic released Claude Haiku 4.5 on Wednesday, a smaller and significantly cheaper artificial intelligence model that matches the coding capabilities of systems that were considered cutting-edge ju [...]

Match Score: 186.83

venturebeat

2025-10-16

How Anthropic’s ‘Skills’ make Claude faster, cheaper, and more consistent for business workflows

Anthropic launched a new capability on Thursday that allows its Claude AI assistant to tap into specialized expertise on demand, marking the company's latest effort to make artificial intelligenc [...]

Match Score: 133.64

venturebeat

2025-10-13

Self-improving language models are becoming reality with MIT's updated SEAL technique

Researchers at the Massachusetts Institute of Technology (MIT) are gaining renewed attention for developing and open sourcing a technique that allows large language models (LLMs) — like those underp [...]

Match Score: 105.80

venturebeat

2025-10-21

DeepSeek drops open-source model that compresses text 10x through images, defying conventions

DeepSeek, the Chinese artificial intelligence research company that has repeatedly challenged assumptions about AI development costs, has released a new model that fundamentally reimagines how large l [...]

Match Score: 76.63

venturebeat

2025-10-13

Researchers find that retraining only small parts of AI models can cut costs and prevent forgetting

Enterprises often find that fine-tuning, an effective approach to making a large language model (LLM) fit for purpose and grounded in their data, can cause the model to lose some of its abiliti [...]

Match Score: 73.36

venturebeat

2025-10-20

Adobe Foundry wants to rebuild Firefly for your brand — not just tweak it

Hoping to attract more enterprise teams to its ecosystem, Adobe launched a new model customization service called Adobe AI Foundry, which would create bespoke versions of its flagship AI model, Firefl [...]

Match Score: 66.81

venturebeat

2025-10-02

'Western Qwen': IBM wows with Granite 4 LLM launch and hybrid Mamba/Transformer architecture

IBM today announced the release of Granite 4.0, the newest generation of its homegrown family of open source large language models (LLMs) designed to balance high performance with lower memory and cost [...]

Match Score: 57.21

venturebeat

2025-10-09

Nvidia researchers boost LLMs' reasoning skills by getting them to 'think' during pre-training

Researchers at Nvidia have developed a new technique that flips the script on how large language models (LLMs) learn to reason. The method, called reinforcement learning pre-training (RLP), integrates [...]

Match Score: 56.11

Destination

2025-01-22

Google is investing another billion dollars in Anthropic

Google has decided to invest another billion dollars in Anthropic, four sources told the Financial Times, bringing its total investment to more than three billion dollars. Both companies have declined to com [...]

Match Score: 54.47