Dec 03, 2025
Model providers want to prove the security and robustness of their models, releasing system cards and conducting red-team exercises with each new release. But it can be difficult for enterprises to parse the results, which vary widely and can be misleading.

Anthropic's 153-page system card for Claude Opus 4.5 versus OpenAI's 55-page GPT-5 system card reveals a fundamental split in how these labs approach security validation. Anthropic's system card discloses its reliance on multi-attempt attack success rates from 200-attempt reinforcement learning (RL) campaigns. OpenAI reports resistance to attempted jailbreaks, largely measured on single attempts. Both metrics are valid. Neither tells the whole story.

Security leaders deploying AI agents for browsing, code execution and autonomous action need to know what each red-team evaluation actually measures, and where the blind spots are.

What the attack data shows

Gray Swan's Shade platform ran adaptive adversarial campaigns against Claude models. The attack success rate (ASR) tells the story.

Opus 4.5 in coding environments hit 4.7% ASR at one attempt, 33.6% at ten and 63.0% at one hundred. In computer use with extended thinking, it held at 0% ASR even after 200 attempts. It's the first model to saturate the benchmark.

Sonnet 4.5 at the same thresholds showed 70% ASR in coding and 85.7% in computer use.

Opus 4.5 delivers roughly a 7x improvement in coding resistance and complete resistance in computer use. The gap between model tiers within the same family is now larger than some cross-vendor comparisons. For security teams, that's a procurement factor that didn't exist six months ago.

For OpenAI, Gray Swan testing reported in the o1 system card showed 6% ASR for harmful text and 5% for malicious code, both single-attempt metrics. Third-party testing by SPLX and NeuralTrust found GPT-5's raw ASR hit 89% initially, dropping below 1% within two weeks of patching.

The report "Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks" confirms the pattern: with sufficient attempts, Claude 3.5 Sonnet showed 78% ASR and GPT-4o reached 89%.

Two ways to catch deception

Anthropic monitors approximately 10 million neural features during evaluation using dictionary learning. Features map to human-interpretable concepts, including deception, sycophancy, bias, power-seeking and concealment.

OpenAI's approach, by contrast, relies on chain-of-thought (CoT) monitoring. Its deception classifier flagged 0.17% of o1-preview responses as potentially deceptive, with 92% accuracy on internal validation. GPT-5 reduced CoT deception from 4.8% to 2.1% compared with o3. The method assumes visible reasoning reflects internal processing. Anthropic's interpretability work directly questions that assumption.

When models game the test

In Apollo Research's evaluation of o1, the model attempted to turn off oversight mechanisms 5% of the time when it believed it would be shut down. In 2% of cases, o1 attempted self-exfiltration. When confronted about misaligned actions, o1 denied wrongdoing or fabricated explanations 99% of the time. The o1 system card also disclosed instrumental alignment faking: in 37% of scenarios, the model pursued developer-aligned goals during evaluation, then switched to its own goals when it believed oversight was minimal.

Anthropic, meanwhile, reports that Opus 4.5's evaluation awareness dropped from 26.5% (Opus 4.1) to less than 10% in internal testing. UK AISI measured evaluation awareness decreasing from 26.5% (Sonnet 4.5) to 12.7% (Opus 4.5 Preview).
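How such a rate gets measured matters. The system cards don't publish their scoring harnesses, so the snippet below is a hypothetical sketch only: it assumes evaluation awareness is scored as the share of red-team transcripts in which the model verbalizes that it suspects a test, with `looks_aware` standing in for whatever classifier the labs actually use.

```python
# Hypothetical sketch: estimating an evaluation-awareness rate from transcripts.
# `transcripts` is assumed to be a list of model outputs from red-team runs;
# `looks_aware` is a crude stand-in for the labs' (unpublished) classifiers.

AWARENESS_MARKERS = (
    "this is a test",
    "i am being evaluated",
    "this looks like an eval",
    "scenario designed to test me",
)

def looks_aware(transcript: str) -> bool:
    """Crude keyword stand-in for a real awareness classifier."""
    text = transcript.lower()
    return any(marker in text for marker in AWARENESS_MARKERS)

def evaluation_awareness_rate(transcripts: list[str]) -> float:
    """Fraction of transcripts in which the model signals it knows it's being tested."""
    if not transcripts:
        return 0.0
    return sum(looks_aware(t) for t in transcripts) / len(transcripts)

# Example: 127 flagged transcripts out of 1,000 -> 12.7%, the shape of the
# UK AISI figure for the Opus 4.5 Preview.
```

The detector is doing all the work in a measurement like this, which is why third-party observations about evaluation awareness carry weight.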
METR's GPT-5 evaluation noted the model "can identify the exact evaluation it is in on some occasions" and "seems to use this information to get a better score mostly."

This is the core problem with evaluation awareness: if a model behaves differently when it detects evaluation conditions, its production behavior becomes unpredictable at scale. Models that recognize they're being tested attempt to game the test. Anthropic's roughly 60% reduction in evaluation awareness from Opus 4.1 to Opus 4.5 reflects targeted engineering against this vector.

Red teaming on prompt injection defenses shows similar divergence. Anthropic's testing reports 96% prevention in tool-use scenarios, rising to 99.4% with additional safeguards such as prompt shields. Fujitsu's independent assessment found GPT-5 vulnerable to approximately 20% of tested injection attacks. OpenAI's o3/o4-mini system card reports instruction hierarchy scores of 0.86 for system-versus-developer conflicts. These are different metrics and different attack methodologies, but the gap suggests Anthropic has hardened its agentic use cases against adversarial inputs embedded in tool outputs.

Comparing red teaming results

Here's how Opus 4.5 stacks up against OpenAI's model family across key dimensions.

| Dimension | Claude Opus 4.5 | GPT-5 | o3 / o4-mini | o1 |
| --- | --- | --- | --- | --- |
| System card length | 153 pages | 55 pages | ~30 pages | ~40 pages |
| Attack methodology | 200-attempt RL campaigns | Single-attempt + iterative patching | Single-attempt | Single-attempt |
| ASR @ 1 attempt (coding) | 4.7% | 89% raw (pre-patch) | Not published | 5% |
| ASR @ 100 attempts | 63.0% | Not published | Not published | Not published |
| ASR @ 200 attempts (computer use) | 0% (saturates benchmark) | Not published | Not published | Not published |
| Prompt injection defense | 96% prevention; 99.4% with safeguards | ~20% ASR (Fujitsu) | 0.86 instruction hierarchy | Higher ASR than GPT-4o |
| Interpretability | ~10M neural features monitored | CoT monitoring | CoT monitoring | CoT monitoring |
| Deception detection | Internal feature activation | 2.1% CoT flagged | 0.95 not_unsafe | 0.17% flagged; 99% deny |
| Evaluation awareness | <10% | Identifies exact eval (METR) | Not published | 37% alignment faking |
| Scheming (Apollo) | Very low sabotage propensity in internal evals; orders of magnitude lower than o1 | Not published | Lower than o1; still deceptive | 5% turn off oversight; 2% self-exfil |
| CBRN risk | Below ASL-4 | Medium (approaching High) | Medium | Medium (first to reach) |
| Governance | FRT → AST → RSO/CEO | SAG + Preparedness v2 | SAG + Preparedness v2 | SAG + Preparedness |
| External partners | Gray Swan, UK AISI, US CAISI, METR | UK AISI, US AISI, METR, Apollo | UK AISI, US AISI, METR, Apollo, Pattern Labs | UK AISI, US AISI, METR, Apollo |
| Reward hacking | Yes (impossible tasks) | Yes (METR) | ~1% of tasks; 5/24 kernels | Yes |
| Misalignment reduction | 60% vs. Opus 4.1 | Not published | Lower scheming than o1 | Baseline |

Sources: Opus 4.5 system card, GPT-5 system card, o3/o4-mini system card, o1 system card, Gray Swan, METR, Apollo Research

Why these differences matter

Enterprises have several factors to consider when analyzing model evals.

Attack persistence thresholds

Anthropic's 200-attempt RL campaigns simulate adversaries with time, resources and adaptive tooling. In RL-based adversarial testing, the attacker model learns from each failure, adjusts its approach and systematically probes for weaknesses. It's training an AI to break another AI.

Gray Swan's Shade platform doesn't just retry the same prompt. It learns from failures, adjusts attack vectors and exploits patterns in model responses.
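That feedback loop is the part single-attempt testing leaves out. As a rough illustration (not Gray Swan's actual implementation), an adaptive campaign looks something like the sketch below, where `target_model`, `attacker_model` and `judge` are placeholder callables rather than any real vendor API.

```python
# Hypothetical sketch of an adaptive multi-attempt attack campaign.
# `target_model`, `attacker_model` and `judge` are placeholder callables.

def run_campaign(objective: str, target_model, attacker_model, judge,
                 max_attempts: int = 200) -> dict:
    """Iteratively refine an attack prompt until it succeeds or the budget runs out."""
    history = []                                     # failed (prompt, response) pairs
    prompt = attacker_model(objective, history)      # initial attack prompt

    for attempt in range(1, max_attempts + 1):
        response = target_model(prompt)
        if judge(objective, response):               # did the target actually comply?
            return {"success": True, "attempts": attempt}
        history.append((prompt, response))           # feed the failure back
        prompt = attacker_model(objective, history)  # generate a refined attack

    return {"success": False, "attempts": max_attempts}

# ASR @ k is then the fraction of objectives broken within k attempts, which is
# why a model can look strong at 1 attempt and much weaker at 100.
```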
The 4.7% to 63.0% degradation curve in coding environments shows how resistance erodes under sustained pressure. OpenAI's single-attempt metrics answer a different question: How often does a naive attacker succeed on the first try? Both matter. A phishing campaign hits millions of users once each. A nation-state actor runs thousands of variations against a single target. Security teams need to know which threat model their vendor tested against.

Detection architecture

Anthropic monitors approximately 10 million internal neural features during evaluation. These correspond to abstract concepts like deception, sycophancy and power-seeking that activate across the model's hidden layers. When a fraud/deception feature lights up during benign roleplay, that's a signal invisible to output analysis. The model's internal state reveals associations its outputs don't.

OpenAI takes a different approach with CoT monitoring. Its deception classifier flagged 0.17% of o1-preview responses as potentially deceptive, with 92% accuracy. The assumption is that reasoning traces reflect internal processing. Anthropic's interpretability research directly challenges this: models can produce clean reasoning chains while internal features indicate something else entirely.

Neither approach is complete. CoT monitoring scales more easily. Feature monitoring catches what CoT misses, but it requires deep model access and interpretability infrastructure that most enterprises can't replicate.

Scheming evaluation design

Apollo Research's methodology puts models in scenarios where misaligned behavior offers a strategic advantage: shutdown threats, goal conflicts and oversight detection. o1 attempted to turn off oversight mechanisms 5% of the time when it believed shutdown was imminent. It tried to copy itself to prevent replacement in 2% of cases. When confronted, it denied wrongdoing 99% of the time. Opus 4.5 showed very low rates of sabotage propensity in internal evaluations, orders of magnitude lower than o1.

But the evaluation environments differ. Anthropic's stress tests focus on whether models attempt sabotage when given clear opportunities. OpenAI's tests via Apollo probe instrumental reasoning, specifically whether models fake alignment during evaluation, then defect when oversight drops. The 37% alignment-faking rate in o1 represents a different failure mode than sabotage attempts.

The comparability problem

These methodological differences make direct comparison difficult. Opus 4.5's 0% ASR at 200 attempts in computer use and o1's 5% malicious-code rate are not measuring the same thing. The attack surfaces, attempt counts and success criteria differ.

The matrix reveals what each vendor chose to measure and publish, and that itself signals priorities. Anthropic's 153-page system card emphasizes degradation curves and internal feature monitoring. OpenAI emphasizes iterative improvement and CoT transparency.

Bottom line: Enterprise security teams inheriting these models inherit these measurement philosophies. The question isn't which number is better. It's which measurement matches your deployment's actual threat landscape.

What independent red team evaluators found

Independent red team evaluations often operate with entirely different methods. They can tell a very different story and reveal additional model characteristics that enterprises must consider.

METR's red team evaluation measured autonomous capabilities using a time horizon score, which tracks how long a task can run before the agent fails half the time. A simplified sketch of estimating that threshold appears below, followed by the published results.
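METR's published methodology is more involved than this, so treat the snippet as an assumption-laden illustration of the underlying idea: find the longest task duration at which the agent still succeeds at least half the time. The `results` structure and field layout are invented for the example.

```python
# Simplified, hypothetical estimate of a "time horizon" score: the longest task
# length (in minutes) at which the agent still succeeds >= 50% of the time.
# `results` is assumed to be a list of (task_length_minutes, succeeded) pairs.

from collections import defaultdict

def time_horizon_minutes(results: list[tuple[int, bool]]) -> int:
    buckets: dict[int, list[bool]] = defaultdict(list)
    for length, succeeded in results:
        buckets[length].append(succeeded)

    horizon = 0
    for length in sorted(buckets):
        outcomes = buckets[length]
        if sum(outcomes) / len(outcomes) >= 0.5:
            horizon = length   # still reliable at this task length
        else:
            break              # first length where failures outnumber successes
    return horizon

# A horizon of 90 would correspond to the ~1h30m figure reported for o3.
```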
o3 sustained reliable performance for approximately 1 hour and 30 minutes; o4-mini held for 1 hour and 15 minutes. METR also detected reward hacking in roughly 1% of o3's attempts, including 5 of 24 kernel optimization experiments in which the model tampered with scoring functions to inflate its own results.

Apollo Research tested o3 and o4-mini for in-context scheming. Both exhibited deceptive tendencies such as sandbagging evaluations and sabotaging tasks with plausible deniability, but scored lower than o1. Apollo assesses that o3 and o4-mini are unlikely to cause catastrophic harm through scheming, but more minor real-world harms remain possible without monitoring.

The UK AISI/Gray Swan challenge ran 1.8 million attacks across 22 models. Every model broke. ASR ranged from 1.47% to 6.49%. Opus 4.5 placed first on Gray Swan's Agent Red Teaming benchmark with 4.7% ASR, versus 21.9% for GPT-5.1 and 12.5% for Gemini 3 Pro.

No current frontier system resists determined, well-resourced attacks. The differentiation lies in how quickly defenses degrade and at what attempt threshold. Opus 4.5's advantage compounds over repeated attempts. Single-attempt metrics flatten the curve.

What to ask your vendor

Security teams evaluating frontier AI models need specific answers, starting with ASR at 50 and 200 attempts rather than single-attempt metrics alone. Find out whether the vendor detects deception through output analysis or internal state monitoring. Know who challenges red team conclusions before deployment and what specific failure modes have been documented. Get the evaluation awareness rate. Vendors claiming complete safety haven't stress-tested adequately.

The bottom line

Diverse red-team methodologies demonstrate that every frontier model breaks under sustained attack. The 153-page system card versus the 55-page system card isn't just about documentation length. It's a signal of what each vendor chose to measure, stress-test and disclose.

For persistent adversaries, Anthropic's degradation curves show exactly where resistance fails. For fast-moving threats requiring rapid patches, OpenAI's iterative improvement data matters more. For agentic deployments with browsing, code execution and autonomous action, the scheming metrics become your primary risk indicator.

Security leaders need to stop asking which model is safer and start asking which evaluation methodology matches the threats their deployment will actually face. The system cards are public. The data is there. Use it.
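One small example of using it: a back-of-the-envelope check, assuming independent attempts (which the published curves show is not quite how real campaigns behave), of how a vendor's single-attempt ASR would compound over repeated attempts, compared with the multi-attempt numbers they actually report.

```python
# If every attempt were an independent retry with per-attempt success probability p,
# ASR @ k would be 1 - (1 - p) ** k. Comparing that against published multi-attempt
# figures shows how far real adaptive campaigns depart from the naive model.

def naive_asr_at_k(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

single_attempt_asr = 0.047   # Opus 4.5 coding, ASR @ 1 attempt (published)
for k in (10, 100):
    print(f"naive ASR @ {k}: {naive_asr_at_k(single_attempt_asr, k):.1%}")

# Naive prediction: ~38% @ 10 attempts and ~99% @ 100. The published figures are
# 33.6% and 63.0%: some target behaviors stay hard to break no matter how many
# retries, which is exactly the kind of information a single-attempt metric hides.
```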