Offensive security testing for LLMs, chatbots, AI agents, and RAG pipelines. We find what's still burning when everyone thinks the fire is out.
Organizations are deploying AI at unprecedented speed. But most have zero offensive security testing for their AI systems. The attack surface is massive and growing.
Every number below is a real finding, a real vendor notified, a real patch shipped.
Real-world findings from active research on production AI systems. All responsibly disclosed to vendors. All with reproducible proof-of-concepts.
RequestsGetTool(allow_dangerous_requests=True) with no URL filtering enables cloud metadata exfiltration via IAM token theft on AWS and Alibaba Cloud through 169.254.169.254.allow_dangerous_code=True — enabling API key theft and arbitrary code execution on the host server via a single prompt.Published GitHub Security Advisories and CVEs from Cinder Security research.
An open-source framework for automated offensive testing of AI systems. Fracture runs structured attack campaigns against LLMs and AI agents — fingerprinting, extracting system prompts, poisoning memory, and escalating privileges through multi-turn psychological manipulation.
cinder-security/fractureWe don't just scan — we think like attackers. Every engagement is tailored to your specific AI stack and threat model.
Comprehensive one-time security assessment of your AI systems. We test every attack vector and deliver a detailed report with reproducible proof-of-concepts, CVSS scores, and remediation guidance.
Ongoing offensive testing as your AI evolves. Every model update, every new feature — we test it before your users find the gaps. Monthly retainer with prioritized findings.
Hands-on workshops for your engineering and security teams. Learn to think like an AI attacker and build more resilient systems from day one.
Straightforward engagements with clear deliverables. Every assessment includes a professional PDF report and a 30-minute debrief call.
Payment via Wise · Bank transfer · All engagements require signed scope document
The full attack surface of modern AI systems — from prompt-level exploits to infrastructure-level compromises.
A structured approach to finding what others miss.
Map your AI stack, identify attack surfaces, and define engagement rules.
Execute targeted attacks across all vectors. Every finding includes a reproducible PoC.
Detailed security report with severity ratings, CVSS scores, and fix recommendations.
Re-test after fixes. Confirm vulnerabilities are resolved and defenses hold.
Get a free initial assessment of your AI security posture.
contact@cindersecurity.io