A Djinn Six Product

    Probe Six

    Your compliance team will ask for evidence that your deployed LLMs behave within your regulatory obligations. Not test results: structured, reproducible findings mapped to the frameworks they audit against. Probe Six provides it.

    EU AI Act
    ISO/IEC 42001
    OWASP LLM Top 10
    NIST AI RMF
    MITRE ATLAS

    Early Access places are limited. We review each application and follow up personally.

    Built from real client work

    Probe Six is a SaaS platform built by Djinn Six after we were asked to provide LLM security assessments for an enterprise AI deployment. We reached for existing tools and found none of them built for the compliance depth the engagement required. Probe Six is what we built instead.

    The compliance problem it solves

    Deploying an LLM in a regulated environment means your compliance team will eventually ask for evidence that the model behaves in line with your obligations. Standard security testing tells you a test passed or failed. It does not tell you what that means for your EU AI Act obligations, your ISO 42001 posture or whether your controls hold under real adversarial pressure.

    Probe Six uses multi-turn conversational attacks to run structured security assessments against your LLM endpoint, probing guardrails under sustained adversarial pressure and mapping every finding to the relevant framework controls. Connect an endpoint, select a scan template, run the assessment. You get a security score, a governance posture score and a findings report you can hand to an auditor.

    Supported endpoints

    AWS Bedrock
    OpenAI
    Anthropic
    Azure OpenAI

    What makes it different

    Prove the fix stuck

    Standard tools regenerate test payloads on each run, so you can never demonstrate that the same vulnerability stays closed. Probe Six replays the same initial attack prompts from the original scan after you remediate. The attack starting conditions are identical; what changes is whether your guardrails hold. The evidence is timestamped and directly usable in an audit.

    Pressure that builds across a conversation

    Most LLM security tools send single prompts. Probe Six uses multi-turn conversational attacks, holding sustained adversarial conversations with your model to test whether guardrails hold under real pressure. Guardrails that hold on the first prompt can erode across a session. If your controls degrade, you will see it in the report before your auditors do.

    No new attack surface to justify

    Probe Six connects to AWS Bedrock, OpenAI, Anthropic and Azure OpenAI without requiring new credential infrastructure. For Bedrock deployments, it scans cross-account via your existing IAM role, storing no credentials. In regulated industries, a new credential management requirement is an objection that kills adoption. This removes it.

    Early Access: limited availability

    Probe Six is in Early Access with a select group of regulated-sector organisations. Early Access places are limited. Join the waitlist at probesix.ai to register your interest.