HIGH prompt injectionadonisjsapi keys

Prompt Injection in Adonisjs with Api Keys

Prompt Injection in Adonisjs with Api Keys — how this specific combination creates or exposes the vulnerability

AdonisJS is a Node.js web framework commonly used to build API services. When an AdonisJS endpoint accepts an API key in an HTTP header and forwards that key to an LLM provider as part of a prompt or as context, it can create an indirect prompt injection surface. The API key itself is not a prompt, but its presence in the request flow can change how an LLM-based middleware or service interprets and processes user input.

Consider a route handler that calls an LLM to summarize user content while also logging or tagging the call with the caller’s API key. If the API key is included in the prompt template without strict separation from user-controlled content, an attacker may craft input that shifts the model’s role or instructions. For example, a user could submit a payload designed to leak the system prompt or alter the expected behavior, and the model might mistakenly treat the API key context as part of the user’s instructions.

In practice, this can happen when the API key is interpolated directly into the prompt string rather than being passed as a separate metadata field or handled by the integration layer. The LLM security check for System Prompt Leakage uses 27 regex patterns tailored to ChatML, Llama 2, Mistral, and Alpaca formats to detect when model instructions are exposed in the output. If an attacker can cause the model to echo or reveal the surrounding instructions that include the API key context, this constitutes a prompt injection path that compromises confidentiality and intended behavior.

Another scenario involves excessive agency detection. If an AdonisJS service configures the LLM with tool usage or function calling capabilities and the API key influences which tools are available or how they are invoked, an attacker may attempt to exploit the agent-like behavior. For instance, crafted inputs could try to coerce the model into invoking unintended functions or exposing sensitive operation details. Output scanning for PII, API keys, and executable code is necessary to catch unintended disclosures that arise from these interactions.

Unauthenticated LLM endpoint detection is relevant when an AdonisJS route inadvertently exposes an LLM endpoint without requiring authentication, relying only on the API key for identification. If that endpoint is reachable without proper access controls, external actors can probe it with injection probes, including system prompt extraction, instruction override, DAN jailbreak, data exfiltration, and cost exploitation sequences. MiddleBrick’s LLM security checks run these five sequential probes to identify whether user input can manipulate the model’s behavior or reveal internal instructions that reference the API key handling logic.

Api Keys-Specific Remediation in Adonisjs — concrete code fixes

To reduce prompt injection risk when using API keys in AdonisJS, keep the API key strictly outside the prompt content and manage it in the integration layer. Use environment variables for the provider key, and pass the customer’s API key as metadata rather than as part of the user-facing prompt.

Example 1: Passing the API key as metadata instead of injecting it into the prompt.

import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'
import OpenAI from 'openai'

export default class SummarizeController {
  public async store({ request, auth }: HttpContextContract) {
    const userContent = request.input('content')
    const apiKey = request.header('x-api-key')

    // Do not interpolate apiKey into the prompt string
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [
        {
          role: 'system',
          content: 'You are a summarizer. Return a concise summary.'
        },
        {
          role: 'user',
          content: userContent
        }
      ],
      // Pass the customer API key as metadata for logging or billing
      metadata: {
        customerApiKey: apiKey
      }
    })

    return { summary: completion.choices[0]?.message?.content }
  }
}

Example 2: Validating and sanitizing user input before sending it to the LLM, ensuring no executable content or prompt-like patterns are inadvertently included.

import { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'

export default class CommentController {
  public async store({ request }: HttpContextContract) {
    const userInput = request.input('comment')

    // Basic sanitization: remove code-like block markers and suspicious instructions
    const sanitized = userInput
      .replace(/```[\s\S]*?```/g, '')
      .replace(/system:/gi, '')
      .replace(/assistant:/gi, '')

    // Send only sanitized content to the LLM
    // ... invoke LLM with sanitized text
  }
}

Example 3: Using structured logging to record the API key separately from the prompt, avoiding any mixing of roles in the model context.

import logger from '@ioc:Adonis/Core/Logger'

export async function logRequest(apiKey: string, userId: string, action: string) {
  // Log metadata separately; do not include in LLM prompt
  logger.info('API request', {
    apiKeyHash: hash(apiKey),
    userId,
    action,
    timestamp: new Date().toISOString()
  })
}

These patterns help ensure the API key remains a management and billing attribute rather than a prompt element, reducing the likelihood of indirect prompt injection through role confusion or context leakage.

Related CWEs: llmSecurity

CWE IDNameSeverity
CWE-754Improper Check for Unusual or Exceptional Conditions MEDIUM

Frequently Asked Questions

Why is including the API key directly in the prompt considered risky?
Including the API key in the prompt can expose it to the LLM's output through role confusion or prompt leakage checks. If the model is tricked into revealing the system instructions or context, the API key may be disclosed, enabling unauthorized usage or bypass of intended access controls.
Does MiddleBrick detect prompt injection risks involving API keys in AdonisJS endpoints?
Yes. MiddleBrick runs LLM security checks that include system prompt leakage detection, active prompt injection probes, and output scanning for PII and API keys. These checks help identify whether user input can manipulate model behavior or expose sensitive handling logic related to API key usage.