Destination

2025-07-10

Most AI models can fake alignment, but safety training suppresses the behavior, study finds

[Image: abstract illustration of the Claude logo as a person's head, wearing a detective hat and sunglasses]


A new study analyzing 25 language models finds that most do not fake safety compliance, though not for lack of capability.


We have found items similar to what you are looking for. Check out the related suggestions below.

venturebeat

2025-11-10

Baseten takes on hyperscalers with new AI training platform that lets you own your model weights

Baseten, the AI infrastructure company recently valued at $2.15 billion, is making its most significant product pivot yet: a full-scale push into model training that could reshape how enterprises wean [...]

Match Score: 149.44

venturebeat

2025-10-09

Nvidia researchers boost LLMs' reasoning skills by getting them to 'think' during pre-training

Researchers at Nvidia have developed a new technique that flips the script on how large language models (LLMs) learn to reason. The method, called reinforcement learning pre-training (RLP), integrates [...]

Match Score: 83.96

venturebeat

2025-11-20

Grok 4.1 Fast's compelling dev access and Agent Tools API overshadowed by Musk glazing

Elon Musk's frontier generative AI startup xAI formally opened developer access to its Grok 4.1 Fast models last night and introduced a new Agent Tools API—but the technical milestones were imm [...]

Match Score: 81.42

venturebeat

2025-10-27

Google Cloud takes aim at CoreWeave and AWS with managed Slurm for enterprise-scale AI training

Some enterprises are best served by fine-tuning large models to their needs, but a number of companies plan to build their own models, a project that would require access to GPUs. Google Cloud wants [...]

Match Score: 72.41

venturebeat

2025-11-14

OpenAI experiment finds that sparse models could give AI builders the tools to debug neural networks

OpenAI researchers are experimenting with a new approach to designing neural networks, with the aim of making AI models easier to understand, debug, and govern. Sparse models can provide enterprises w [...]

Match Score: 70.83

Destination

2025-02-10

Roblox, Discord, OpenAI and Google found new child safety group

Roblox, Discord, OpenAI and Google are launching a nonprofit organization called ROOST, or Robust Open Online Safety Tools, which hopes "to build scalable, interoperable safety infrastructure su [...]

Match Score: 64.37

venturebeat

2025-11-19

Meta’s DreamGym framework trains AI agents in a simulated world to cut reinforcement learning costs

Researchers at Meta, the University of Chicago, and UC Berkeley have developed a new framework that addresses the high costs, infrastructure complexity, and unreliable feedback associated with using r [...]

Match Score: 63.95

venturebeat

2025-10-29

From static classifiers to reasoning engines: OpenAI’s new model rethinks content moderation

Enterprises, eager to ensure any AI models they use adhere to safety and safe-use policies, fine-tune LLMs so they do not respond to unwanted queries. However, much of the safeguarding and red teamin [...]

Match Score: 63.20

venturebeat

2025-11-04

Attention ISN'T all you need?! New Qwen3 variant Brumby-14B-Base leverages Power Retention technique

When the transformer architecture was introduced in 2017 in the now seminal Google paper "Attention Is All You Need," it became an instant cornerstone of modern artificial intelligence. Ever [...]

Match Score: 60.21