How ShipSafe Detects Prompt Injection in AI Applications

Prompt injection is the #1 risk in the OWASP Top 10 for LLM Applications. ShipSafe is the only security scanner with dedicated prompt injection detection rules. Here is what they catch and why it matters.

What is Prompt Injection?

Prompt injection occurs when untrusted user input is passed directly into LLM prompts, allowing attackers to override system instructions. Think of it as SQL injection for AI applications. The attacker does not exploit a bug in the LLM itself — they exploit how your application constructs prompts.

The consequences range from data leakage (extracting system prompts, API keys, or user data) to full behavior manipulation (making the AI perform unintended actions like sending emails, calling APIs, or generating malicious content).

The 7 Prompt Injection Rules

ShipSafe ships with 7 dedicated rules that cover the major prompt injection attack vectors:

1. Unsanitized User Input in LLM Prompts

Catches when user input from request bodies, query parameters, or form data is concatenated directly into prompt strings without validation or sanitization.

2. System Prompt Leakage

Detects patterns where system prompts are included in responses or where the application structure makes it easy for users to extract system instructions.

3. Indirect Prompt Injection (RAG Poisoning)

Flags when retrieved content from databases, documents, or web pages is included in prompts without sanitization. Attackers can poison these sources with injection payloads.

4. Prompt Template Manipulation

Detects when user-controlled variables are used in prompt templates in ways that allow the template structure to be altered.

5. Missing Input Validation Before LLM Calls

Identifies code paths where user input reaches LLM API calls without any validation, length checking, or sanitization in between.

6. Jailbreak-Susceptible Patterns

Flags prompt constructions that are known to be susceptible to common jailbreak techniques like role-playing, hypothetical scenarios, and encoding tricks.

7. User Input in Function/Tool Parameters

Detects when user input is passed as tool parameters in function-calling LLMs, which can lead to unintended tool execution or parameter manipulation.

Example: Catching Unsanitized LLM Input

Consider this common pattern in AI applications:

// src/routes/chat.ts
app.post("/chat", async (req, res) => {
  const { message } = req.body;

  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: message }, // unsanitized!
    ],
  });

  res.json({ reply: response.choices[0].message.content });
});

ShipSafe flags this immediately:

$ shipsafe scan

  HIGH  prompt-injection/unsanitized-llm-input
  src/routes/chat.ts:5
  User input from req.body is passed directly to LLM prompt
  without sanitization.
  Fix: Validate and sanitize user input before passing to LLM.
  Consider input length limits, character filtering, and prompt
  boundary tokens.

Why Other Scanners Miss This

Traditional security scanners like Semgrep, Snyk, and SonarQube were built before the LLM era. Their rule sets cover classic web vulnerabilities — SQL injection, XSS, SSRF — but they have no awareness of prompt construction patterns or LLM API calls.

ShipSafe was built specifically for the AI coding era. It understands the OpenAI, Anthropic, and other LLM SDK patterns and knows which function parameters are dangerous when they contain user input.

How to Protect Your AI Application

  1. Install ShipSafe: npm install -g @shipsafe/cli
  2. Scan your project: shipsafe scan
  3. Install git hooks: shipsafe hooks install — blocks prompt injection from being committed
  4. Validate input: Add length limits, character filtering, and prompt boundary tokens to all user inputs before they reach LLM calls
  5. Sanitize retrieved content: If using RAG, sanitize retrieved documents before including them in prompts

Protect Your AI Application

ShipSafe is the only security scanner with dedicated prompt injection detection.

npm install -g @shipsafe/cliGet Started Free