venturebeat
OpenAI admits prompt injection is here to stay as enterprises lag on defenses

It's refreshing when a leading AI company states the obvious. In a detailed post on hardening ChatGPT Atlas against prompt injection, OpenAI acknowledged what security practitioners have known for years: "Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully 'solved.'"What’s new isn’t the risk — it’s the admission. OpenAI, the company deploying one of the most widely used AI agents, confirmed publicly that agent mode “expands the security threat surface” and that even sophisticated defenses can’t offer deterministic guarantees. For enterprises already running AI in production, this isn’t a revelation. It’s validation — and a signal that the gap between how AI is deployed and how it’s defended is no l [...]

Rating

Innovation

Pricing

Technology

Usability

We have discovered similar tools to what you are looking for. Check out our suggestions for similar AI tools.

venturebeat
Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it

A security researcher, working with colleagues at Johns Hopkins University, opened a GitHub pull request, typed a malicious instruction into the PR title, and watched Anthropic’s Claude Code Securit [...]

Match Score: 214.00

venturebeat
Anthropic published the prompt injection failure rates that enterprise security teams have been asking every vendor for

Run a prompt injection attack against Claude Opus 4.6 in a constrained coding environment, and it fails every time, 0% success rate across 200 attempts, no safeguards needed. Move that same attack to [...]

Match Score: 153.84

venturebeat
Researchers broke every AI defense they tested. Here are 7 questions to ask vendors.

Security teams are buying AI defenses that don't work. Researchers from OpenAI, Anthropic, and Google DeepMind published findings in October 2025 that should stop every CISO mid-procurement. Thei [...]

Match Score: 109.99

venturebeat
Red teaming LLMs exposes a harsh truth about the AI security arms race

Unrelenting, persistent attacks on frontier models make them fail, with the patterns of failure varying by model and developer. Red teaming shows that it’s not the sophisticated, complex attacks tha [...]

Match Score: 103.92

venturebeat
Prompt Security's Itamar Golan on why generative AI security requires building a category, not a feature

VentureBeat recently sat down (virtually) with Itamar Golan, co-founder and CEO of Prompt Security, to chat through the GenAI security challenges organizations of all sizes face. We talked about shado [...]

Match Score: 91.88

venturebeat
Microsoft patched a Copilot Studio prompt injection. The data exfiltrated anyway.

Microsoft assigned CVE-2026-21520, a CVSS 7.5 indirect prompt injection vulnerability, to Copilot Studio. Capsule Security discovered the flaw, coordinated disclosure with Microsoft, and the patch was [...]

Match Score: 78.77

venturebeat
GitHub leads the enterprise, Claude leads the pack—Cursor’s speed can’t close

In the race to deploy generative AI for coding, the fastest tools are not winning enterprise deals. A new VentureBeat analysis, combining a comprehensive survey of 86 engineering teams with our own ha [...]

Match Score: 77.92

venturebeat
OpenAI deploys Cerebras chips for 15x faster code generation in first major move beyond Nvidia

OpenAI on Thursday launched GPT-5.3-Codex-Spark, a stripped-down coding model engineered for near-instantaneous response times, marking the company's first significant inference partnership outsi [...]

Match Score: 75.09

venturebeat
The AI governance mirage: Why 72% of enterprises don’t have the control and security they think they do

Decision makers at 72% of organizations claim to have two or more AI platforms that they identify as their "primary" layer, according to a survey of 40 enterprise companies conducted by Vent [...]

Match Score: 73.59