Security · In Beta

VettIQ

Guardian of Your Code

- 99.7% OWASP Top 10 detection, across the full vulnerability taxonomy
- <2s per-file scan time, via a 5-stage multi-LLM pipeline
- 1,467 malicious skills found, detected on ClawHub by OpenClaw Security

The Challenge

AI coding tools have created a new category of security risk that static analyzers weren't built for. Cursor, Copilot, Claude Code, Windsurf, Lovable, Bolt — these tools write functional code fast. They also introduce vulnerabilities at scale. Studies show 68% of AI-generated codebases contain at least one exploitable vulnerability, and the attack surface compounds with every commit.

The existing security tooling ecosystem wasn't designed for this. Legacy SAST tools catch known patterns but miss context-dependent vulnerabilities introduced by LLM code generation. They also operate as a single pass — one tool, one methodology, one blind spot.

The problem LumenIQ was asked to solve had two dimensions. First: catch what AI writes wrong, before it ships. Second: a newer and more dangerous problem — AI agents running with full access to your filesystem, network, and infrastructure, executing skills and plugins sourced from public registries with zero vetting. The security perimeter had moved. The tooling hadn't.

---

The Approach

The 5-Stage Multi-LLM Pipeline (Code Security)

The core insight behind VettIQ Code Security is that no single LLM has complete security coverage. Different models have different training biases, different knowledge of vulnerability patterns, and different failure modes. Consensus across multiple models is a fundamentally stronger signal than any single model's output.
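The consensus idea can be made concrete with a simple quorum vote: only findings flagged by a minimum number of independent models survive. This is a minimal sketch, not VettIQ's implementation; the finding identifiers and the `consensus_findings` helper are illustrative.

```python
from collections import Counter

def consensus_findings(per_model_findings, quorum=2):
    """Keep only vulnerabilities flagged by at least `quorum` models.

    per_model_findings: one set of finding identifiers per model,
    e.g. "sqli@db.py:14" meaning SQL injection at db.py line 14.
    """
    votes = Counter()
    for findings in per_model_findings:
        votes.update(findings)
    return {finding for finding, count in votes.items() if count >= quorum}

# Three hypothetical model passes over the same file:
model_a = {"sqli@db.py:14", "xss@views.py:7"}
model_b = {"sqli@db.py:14"}
model_c = {"sqli@db.py:14", "path-traversal@files.py:3"}

# Only the finding two or more models agree on survives.
confirmed = consensus_findings([model_a, model_b, model_c], quorum=2)
```

A single model's idiosyncratic flag (like `xss@views.py:7` above) is filtered out, which is exactly why cross-model agreement is a stronger signal than any one model's output.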

VettIQ runs every file through a sequential pipeline where each stage builds on the last:

Stage 1 — Detection: Initial vulnerability scan across the OWASP Top 10 and extended taxonomy. Identifies candidate issues and assigns severity scores.

Stage 2 — Deep Analysis: A second model interrogates each flagged issue in context. Is this a real exploit path or a false positive? What's the blast radius?

Stage 3 — Adversarial Confirmation: A third model actively attempts to confirm exploitability — reasoning from an attacker's perspective. This stage eliminates a significant fraction of false positives that pass Stage 1.

Stage 4 — Fix Generation: For confirmed vulnerabilities, the pipeline generates a specific, tested remediation. Not a generic recommendation — actual replacement code.

Stage 5 — Verification: The proposed fix is validated to confirm it resolves the vulnerability without introducing new issues.

The entire pipeline completes in under 2 seconds per file. Scanning engines from Snyk, Semgrep, and VirusTotal run in parallel to augment LLM analysis with signature-based detection.
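The staged hand-off described above can be sketched as a chain of filters, where each stage either enriches or discards findings. This is a toy illustration under stated assumptions: the stage bodies are placeholders (string matching stands in for LLM calls), and every name here is hypothetical, not VettIQ's API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Finding:
    rule: str
    severity: str
    confirmed: bool = False
    fix: Optional[str] = None

def detect(source: str) -> List[Finding]:
    # Stage 1: candidate detection (a real system would call an LLM here).
    findings = []
    if "eval(" in source:
        findings.append(Finding(rule="code-injection", severity="high"))
    return findings

def deep_analyze(findings: List[Finding]) -> List[Finding]:
    # Stage 2: keep only findings with a plausible exploit path.
    return [f for f in findings if f.severity in ("high", "critical")]

def adversarial_confirm(findings: List[Finding]) -> List[Finding]:
    # Stage 3: attacker-perspective pass; unconfirmed findings drop out.
    for f in findings:
        f.confirmed = True
    return [f for f in findings if f.confirmed]

def generate_fix(findings: List[Finding]) -> List[Finding]:
    # Stage 4: attach concrete replacement code, not generic advice.
    for f in findings:
        f.fix = "replace eval() with ast.literal_eval()"
    return findings

def verify(findings: List[Finding]) -> List[Finding]:
    # Stage 5: keep only findings whose proposed fix checks out.
    return [f for f in findings if f.fix is not None]

def run_pipeline(source: str) -> List[Finding]:
    findings = detect(source)
    for stage in (deep_analyze, adversarial_confirm, generate_fix, verify):
        findings = stage(findings)
    return findings

report = run_pipeline("result = eval(user_input)")
```

The key structural point is that later stages can only narrow or enrich what earlier stages produced, so false positives have multiple chances to be eliminated before a fix is ever generated.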

---

The Outcome

VettIQ entered beta addressing two problems that didn't exist at scale three years ago — AI-generated code vulnerabilities and unsecured AI agent runtimes. The platform's architecture reflects a core conviction: security for AI-native development can't be a single-pass, single-model afterthought. It has to be multi-stage, adversarial, and fast enough to run in a developer's existing workflow without friction.

The Free Blueprints layer — open source, works with every major AI coding tool, one-command install — means VettIQ's security standards are being applied in codebases before a single line of the paid product is written.

---

Build Something Like This

Ready to create your own success story? Let's discuss how we can help you achieve similar breakthrough results.