Directories

Security for AI Apps tools directory

A curated directory of tools, frameworks, and reference materials specifically designed to address security vulnerabilities in LLM-integrated applications, including prompt injection, data leakage, and output validation.

Category:
Pricing Model:

Showing 12 of 12 entries

Rebuff

open-source

A multi-layered prompt injection detector that uses heuristics, dedicated LLM analyzers, and canary tokens to identify and block malicious inputs.

Pros

  • + Uses canary tokens to detect data exfiltration
  • + Multi-layered approach reduces false negatives
  • + Simple SDK integration for Python and TypeScript

Cons

  • Adds latency to the request pipeline
  • Heuristics may require tuning for specific domains
prompt-injectionsecurity-middlewarecanary-tokens
Visit ↗

Guardrails AI

open-source

A framework for adding structure, type validation, and quality constraints to LLM outputs using a pydantic-style syntax.

Pros

  • + Strong integration with Pydantic for data validation
  • + Supports corrective actions like automatic re-asking
  • + Extensive library of pre-built validators

Cons

  • Steep learning curve for complex RAIL schemas
  • Significant token overhead during re-prompting
output-validationstructured-datareliability
Visit ↗

NeMo Guardrails

open-source

NVIDIA's open-source toolkit for adding programmable guardrails to LLM-based conversational systems using the Colang modeling language.

Pros

  • + Powerful DSL for defining dialog flows and constraints
  • + Built-in support for jailbreak detection
  • + Native integration with LangChain

Cons

  • Learning Colang is required for advanced logic
  • Can be heavyweight for simple applications
conversational-aijailbreak-preventionnvidia
Visit ↗

Lakera Guard

freemium

An enterprise-grade API for real-time protection against prompt injections, jailbreaks, and PII leakage in LLM applications.

Pros

  • + Very low latency compared to self-hosted LLM analyzers
  • + Frequently updated threat intelligence database
  • + Detailed dashboard for security monitoring

Cons

  • Closed-source proprietary engine
  • External API dependency
managed-securitypii-protectionreal-time
Visit ↗

Promptfoo

open-source

A CLI tool for testing and evaluating LLM output quality, including specific test cases for prompt injection and security regressions.

Pros

  • + Enables matrix testing across different prompts and models
  • + Automated scoring for security vulnerabilities
  • + Integrates easily into CI/CD pipelines

Cons

  • Requires manual definition of test assertions
  • CLI-heavy workflow may not suit all teams
testingred-teamingevaluation
Visit ↗

OWASP Top 10 for LLM Applications

free

The definitive industry reference guide for the most critical security risks in LLM applications, including mitigations.

Pros

  • + Industry standard for compliance and auditing
  • + Comprehensive coverage of theoretical and practical risks
  • + Regularly updated by security experts

Cons

  • Informational only; no direct implementation code
  • Broad scope requires careful prioritization
standardcompliancevulnerability-list
Visit ↗

LLM Guard

open-source

A comprehensive security toolkit for LLM applications that provides sanitization and detection for both inputs and outputs.

Pros

  • + Includes PII detection and redaction
  • + Scans for toxicity and offensive content
  • + Modular architecture allows choosing specific scanners

Cons

  • High resource usage for local model-based scanners
  • Initial configuration can be verbose
sanitizationpiicontent-moderation
Visit ↗

Giskard

open-source

An open-source testing framework that automatically detects vulnerabilities, biases, and performance issues in LLM models.

Pros

  • + Automatic generation of adversarial test cases
  • + Specific focus on RAG (Retrieval Augmented Generation) security
  • + Clear reporting for stakeholders

Cons

  • Focuses more on testing than real-time protection
  • Python environment setup can be complex
adversarial-testingrag-securitybias-detection
Visit ↗

HashiCorp Vault

freemium

A tool for securely accessing secrets such as LLM API keys, providing dynamic rotation and fine-grained access control.

Pros

  • + Centralized management for multiple LLM providers
  • + Supports dynamic credential generation
  • + Robust audit logging for secret access

Cons

  • Significant infrastructure management overhead
  • Steep learning curve for policy configuration
secretsapi-keysinfrastructure
Visit ↗

PyRIT

open-source

The Python Risk Identification Tool for LLMs, developed by Microsoft to automate red teaming and risk assessment.

Pros

  • + Automates complex multi-turn jailbreak attempts
  • + Integrates with various model endpoints
  • + Built for professional security red teams

Cons

  • Requires security expertise to interpret results
  • Early-stage project with evolving API
red-teamingautomationmicrosoft
Visit ↗

Pangea AI Shield

freemium

A set of security APIs specifically for AI apps, including PII redaction, prompt injection detection, and audit logging.

Pros

  • + Unified API for multiple security functions
  • + Excellent documentation and SDK support
  • + SOC2 compliant infrastructure

Cons

  • Usage-based pricing can scale quickly
  • Requires sending data to a third-party cloud
api-securitycompliancepii-redaction
Visit ↗

Garak

open-source

An LLM vulnerability scanner that probes models for a wide range of security flaws, including halluncinations and data leaks.

Pros

  • + Extensive library of probe types
  • + Supports many model types (local and remote)
  • + Fast execution for quick security audits

Cons

  • CLI output can be overwhelming
  • Requires careful filtering of false positives
vulnerability-scannerprobingsecurity-audit
Visit ↗