Cinder Security
AI Red Team as a Service
click to ignite
AI Red Team as a Service

We break AI systems
before attackers do.

Offensive security testing for LLMs, chatbots, AI agents, and RAG pipelines. We find what's still burning when everyone thinks the fire is out.

Get a Security Assessment View Fracture ↗
GHSA-m4rw-22q2-87j8 — ModelEngine SSRF via Prompt Injection — CVE Pending GHSA-4fpw-hjmg-x4qr — LangGraph RAG Poisoning — CVSS 7.6 2 GitHub Security Advisories Published in 2026 Responsible Disclosure — All Findings Reported to Vendors Fracture — Autonomous AI Red Team Engine — MIT License OWASP LLM Top 10 Full Coverage GHSA-m4rw-22q2-87j8 — ModelEngine SSRF via Prompt Injection — CVE Pending GHSA-4fpw-hjmg-x4qr — LangGraph RAG Poisoning — CVSS 7.6 2 GitHub Security Advisories Published in 2026 Responsible Disclosure — All Findings Reported to Vendors Fracture — Autonomous AI Red Team Engine — MIT License OWASP LLM Top 10 Full Coverage

AI is everywhere.
AI security is nowhere.

Organizations are deploying AI at unprecedented speed. But most have zero offensive security testing for their AI systems. The attack surface is massive and growing.

88%
Jailbreak success rate on major LLMs via psychological manipulation (HPM)
97%
Attack success rate via fine-tuning backdoors on GPT-4 class models
<5%
Of companies deploying AI have done any offensive security testing
$6
Cost to fully compromise a model through fine-tuning API attacks
2
GitHub Security Advisories published by Cinder Security in 2026

Research that ships.

Every number below is a real finding, a real vendor notified, a real patch shipped.

0
GitHub Security Advisories Published
2026
0
Active Vulnerability Research Cases
CSR-2026-*
0
Vendors Notified Under Responsible Disclosure
2026
0
Critical Patches Shipped by Vendors
Confirmed fix live
0
OWASP LLM Top 10 Vectors Covered
Full coverage

Vulnerabilities we've disclosed.

Real-world findings from active research on production AI systems. All responsibly disclosed to vendors. All with reproducible proof-of-concepts.

CSR-2026-002
ModelEngine / fit-framework
SSRF via Prompt Injection — LangChain RequestsGetTool(allow_dangerous_requests=True) with no URL filtering enables cloud metadata exfiltration via IAM token theft on AWS and Alibaba Cloud through 169.254.169.254.
Critical ✓ Patch live — CVE pending
GHSA-m4rw-22q2-87j8 ↗
CSR-2026-007
LangGraph / LangChain
Indirect prompt injection via RAG poisoning — a single poisoned document hijacks a ReAct agent's tool calls, enabling persistent instruction injection across all subsequent interactions with no code execution required.
High — CVSS 7.6 ✓ Public advisory
GHSA-4fpw-hjmg-x4qr ↗
CSR-2026-003
Insightify
Azure OpenAI API credentials exposed in plaintext config + allow_dangerous_code=True — enabling API key theft and arbitrary code execution on the host server via a single prompt.
Critical ⏳ Disclosure in progress
Active Research
Multiple AI Frameworks
Ongoing vulnerability research across AI agent frameworks, guardrail systems, and RAG pipelines. Coordinated disclosure in progress with multiple vendors. Additional advisories to be published upon confirmation.
In Progress ⏳ Coordinated disclosure

Security Advisories.

Published GitHub Security Advisories and CVEs from Cinder Security research.

AdvisoryCSR IDTargetTypeCVSSStatus
GHSA-m4rw-22q2-87j8 CSR-2026-002 ModelEngine fit-framework SSRF + Prompt Injection Critical Patch Live
GHSA-4fpw-hjmg-x4qr CSR-2026-007 LangGraph / LangChain RAG Poisoning 7.6 High Public
CVE pending CSR-2026-002 ModelEngine fit-framework CVE Assignment Critical MITRE Pending
MIT License

Fracture — autonomous AI red team engine.

An open-source framework for automated offensive testing of AI systems. Fracture runs structured attack campaigns against LLMs and AI agents — fingerprinting, extracting system prompts, poisoning memory, and escalating privileges through multi-turn psychological manipulation.

cinder-security/fracture
MODULES
fingerprint
extract
memory
hpm
ssrf
retrieval_poison
auto
v0.1.0 · Python · MIT
FRACTURE — AUTONOMOUS AI RED TEAM ENGINE

Full-spectrum AI offensive security.

We don't just scan — we think like attackers. Every engagement is tailored to your specific AI stack and threat model.

⚔️

AI Penetration Testing

Comprehensive one-time security assessment of your AI systems. We test every attack vector and deliver a detailed report with reproducible proof-of-concepts, CVSS scores, and remediation guidance.

One-time engagement
🔄

Continuous AI Red Teaming

Ongoing offensive testing as your AI evolves. Every model update, every new feature — we test it before your users find the gaps. Monthly retainer with prioritized findings.

Monthly retainer
🎓

AI Security Training

Hands-on workshops for your engineering and security teams. Learn to think like an AI attacker and build more resilient systems from day one.

Workshop

Transparent pricing.
No surprises.

Straightforward engagements with clear deliverables. Every assessment includes a professional PDF report and a 30-minute debrief call.

Entry
$750
USD — one-time
AI Security Assessment
  • Up to 3 attack vectors tested
  • Reproducible PoC per finding
  • Professional PDF report
  • 30-min debrief call
  • Delivered in 5 business days
  • 50% upfront · 50% on delivery
Get Started

Payment via Wise · Bank transfer · All engagements require signed scope document

What we test.

The full attack surface of modern AI systems — from prompt-level exploits to infrastructure-level compromises.

Direct & Indirect Prompt Injection
Multi-turn Jailbreak Attacks
Psychological Manipulation (HPM)
System Prompt Extraction
RAG Pipeline Poisoning
Fine-tuning Backdoors & Data Poisoning
SSRF via AI Agent Tool Abuse
Tool & Function Call Abuse
Multi-Agent Attack Chains
Model & API Key Extraction
Memory Poisoning in Persistent Agents
Guardrail & Filter Bypass

How we work.

A structured approach to finding what others miss.

01

Scope & Profile

Map your AI stack, identify attack surfaces, and define engagement rules.

02

Attack & Exploit

Execute targeted attacks across all vectors. Every finding includes a reproducible PoC.

03

Report & Remediate

Detailed security report with severity ratings, CVSS scores, and fix recommendations.

04

Verify & Harden

Re-test after fixes. Confirm vulnerabilities are resolved and defenses hold.

Ready to find out what's burning?

Get a free initial assessment of your AI security posture.

contact@cindersecurity.io