HIGH · prompt injection · Fiber · JWT tokens

Prompt Injection in Fiber with JWT Tokens

How this specific combination creates or exposes the vulnerability

In a Fiber-based API that accepts an Authorization header containing a JWT, prompt injection becomes relevant when the application forwards user-influenced input to an LLM endpoint without sufficient validation. A JWT typically carries claims such as sub, role, or scopes in its payload, and these values may be used to tailor requests to the LLM or to enforce authorization. If the application incorporates header values, path parameters, or query strings directly into the prompt or into tool construction, an attacker can supply crafted input designed to alter the LLM's behavior.

Consider a scenario where the JWT payload includes a user role that the application uses to decide which tools are available. If the role claim is concatenated into a system prompt or passed into function descriptions, an attacker who manages to obtain or guess a valid JWT (for example, through token leakage or a confused-deputy scenario) may be able to inject instructions that override the intended system prompt. This maps to the LLM/AI Security checks in middleBrick, specifically system prompt leakage detection and active prompt injection testing, which probe for system prompt extraction and successful instruction overrides through sequential probes, including DAN-style jailbreaks and data exfiltration attempts.

Even when the JWT is verified cryptographically, the application must avoid treating trusted claims as safe prompt material. For instance, using the subject identifier directly in a user message or in dynamic tool parameters can enable prompt injection if the LLM treats that content as part of its instructions rather than as neutral data. middleBrick’s unauthenticated LLM endpoint detection and output scanning for PII or API keys highlight the risk when JWT-derived data leaks into LLM responses or when exposed endpoints allow anonymous probing.

Concrete risk patterns in Fiber include concatenating route parameters such as c.Params("id") or query values from c.Query() into system messages, or building function-call arrays whose names or descriptions include unchecked user input. Such patterns can lead to CVE-adjacent behaviors such as instruction override or unintended tool usage. Because JWTs are typically bearer tokens, any leakage (for example, via logs or error messages) compounds prompt injection exposure by handing attackers a valid token with which to probe endpoints that rely on it.

middleBrick’s report in this context would flag findings related to LLM/AI Security, providing severity and remediation guidance rather than attempting to block or fix the endpoint. The scanner exercises black-box testing against the unauthenticated attack surface, revealing whether prompt injection techniques can alter LLM behavior when JWT-based authorization is present.

JWT-Specific Remediation in Fiber: Concrete Code Fixes

Remediation focuses on strict separation between authorization data and prompt content, avoiding the use of JWT claims in LLM instructions or tool definitions, and validating all user-influenced input. Do not embed role or scope claims directly into system prompts, tool descriptions, or function parameters used by the LLM. Instead, enforce authorization at the API layer and pass only minimal, sanitized context to the LLM.

Example of an unsafe Fiber handler (Go) that concatenates a JWT claim into the system prompt, creating prompt injection risk:

package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/gofiber/fiber/v2"
	"github.com/golang-jwt/jwt/v5"
)

func main() {
	app := fiber.New()

	app.Post("/chat", func(c *fiber.Ctx) error {
		auth := c.Get("Authorization")
		tokenString := strings.TrimPrefix(auth, "Bearer ")
		token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
			return []byte(os.Getenv("JWT_SECRET")), nil
		})
		if err != nil || !token.Valid {
			return c.Status(fiber.StatusUnauthorized).JSON(fiber.Map{"error": "invalid_token"})
		}
		claims, _ := token.Claims.(jwt.MapClaims)
		// Unsafe: the JWT subject claim is concatenated into the system prompt
		systemPrompt := fmt.Sprintf("You are assisting user %v.", claims["sub"])
		var body struct {
			Message string `json:"message"`
		}
		_ = c.BodyParser(&body)
		// Further code to call the LLM omitted
		return c.JSON(fiber.Map{"systemPrompt": systemPrompt, "userMessage": body.Message})
	})

	app.Listen(":3000")
}

Safer approach: keep JWT claims out of the prompt. Use the token only for authentication and authorization, then send a sanitized context to the LLM:

package main

import (
	"os"
	"regexp"
	"strings"

	"github.com/gofiber/fiber/v2"
	"github.com/golang-jwt/jwt/v5"
)

var idSanitizer = regexp.MustCompile(`[^a-zA-Z0-9_-]`)

func main() {
	app := fiber.New()

	app.Post("/chat", func(c *fiber.Ctx) error {
		auth := c.Get("Authorization")
		tokenString := strings.TrimPrefix(auth, "Bearer ")
		token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
			return []byte(os.Getenv("JWT_SECRET")), nil
		})
		if err != nil || !token.Valid {
			return c.Status(fiber.StatusUnauthorized).JSON(fiber.Map{"error": "invalid_token"})
		}
		// Safe: the token is used for authentication and authorization only;
		// no JWT claims appear in the prompt
		claims, _ := token.Claims.(jwt.MapClaims)
		sub, _ := claims["sub"].(string)
		safeUserID := idSanitizer.ReplaceAllString(sub, "_")
		if safeUserID == "" {
			safeUserID = "user"
		}
		systemPrompt := "You are a helpful assistant."
		var body struct {
			Message string `json:"message"`
		}
		_ = c.BodyParser(&body)
		// Include the sanitized identifier only if necessary, and only as data,
		// never as an instruction override
		enrichedPrompt := systemPrompt + " User input: " + body.Message
		// Further code to call the LLM using enrichedPrompt omitted
		return c.JSON(fiber.Map{"systemPrompt": systemPrompt, "userMessage": enrichedPrompt})
	})

	app.Listen(":3000")
}

Additionally, validate and sanitize any input used in dynamic tool construction. If tools are generated based on user or JWT-derived data, ensure names, descriptions, and parameters are static or strictly whitelisted:

// Static tool definitions: names, descriptions, and parameter schemas are
// fixed at compile time, never derived from JWT claims or user input.
var tools = []fiber.Map{
	{
		"type": "function",
		"function": fiber.Map{
			"name":        "get_weather",
			"description": "Get current weather for a city",
			"parameters": fiber.Map{
				"type": "object",
				"properties": fiber.Map{
					"city": fiber.Map{"type": "string", "pattern": `^[a-zA-Z\s-]+$`},
				},
				"required": []string{"city"},
			},
		},
	},
}

// Do not inject names or descriptions from JWT or user input
app.Post("/tools", func(c *fiber.Ctx) error {
	return c.JSON(fiber.Map{"tools": tools})
})

Enforce rate limiting and monitor for anomalous token usage to reduce the window for prompt injection via compromised JWTs. middleBrick’s Pro plan supports continuous monitoring and CI/CD integration to catch regressions, while the free tier allows initial scan verification to identify obvious prompt injection surfaces.

Related CWEs (category: LLM Security)

CWE ID    Name                                                    Severity
CWE-754   Improper Check for Unusual or Exceptional Conditions    MEDIUM

Frequently Asked Questions

Can JWT claims ever be safely included in LLM prompts?
Avoid including JWT claims in LLM prompts or tool definitions. If you must reference a user identifier, sanitize it strictly and use it only as non-instructional data, not as part of the system prompt or function schema.
How does middleBrick detect prompt injection risks involving JWT tokens?
middleBrick runs active prompt injection testing, including system prompt extraction attempts and instruction override probes, against the unauthenticated endpoint. Reports highlight LLM/AI Security findings and provide remediation guidance without making changes to your application.