
LLM Data Leakage in Fiber with JWT Tokens

How this specific combination creates or exposes the vulnerability

When an API built with Fiber exposes endpoints that return or process JSON Web Tokens (JWTs) and also serves an unauthenticated LLM endpoint or logs LLM-related metadata, there is a risk of LLM data leakage. The leak occurs when error messages, debug output, or verbose server responses inadvertently include JWT values or surrounding context that an LLM can extract and reveal. For example, if a Fiber route that issues or validates tokens formats errors like "Invalid token: {token}" and that text reaches an LLM-facing response, a system prompt or generated answer may reproduce the token or its structure.

JWTs are bearer credentials: possession alone grants access. If they appear in logs, trace IDs, or stack traces visible to an LLM integration, they become candidates for extraction via prompt injection or output scanning. The LLM/AI Security checks in middleBrick include system prompt leakage detection and active prompt injection testing specifically to identify whether JWT material can be coaxed from the API through crafted inputs. Even when tokens are not intentionally returned, verbose error handling or misconfigured logging can couple JWT values with LLM-facing outputs, increasing exposure risk.
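One practical defense against this coupling is to redact anything JWT-shaped before text reaches logs or LLM-facing output. A minimal sketch in Go, assuming the three-segment base64url shape of a compact JWT as a redaction heuristic (the pattern and the `RedactJWTs` helper are illustrative, not part of middleBrick or Fiber):

```go
package main

import (
	"fmt"
	"regexp"
)

// jwtPattern matches three dot-separated base64url segments, the structural
// shape of a compact JWT. As a heuristic it may also match other dotted
// tokens, which is an acceptable trade-off for redaction.
var jwtPattern = regexp.MustCompile(`\b[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]+\b`)

// RedactJWTs replaces anything JWT-shaped with a fixed placeholder.
func RedactJWTs(s string) string {
	return jwtPattern.ReplaceAllString(s, "[REDACTED-JWT]")
}

func main() {
	msg := "invalid token: eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxMjM0In0.sflKxwRJSMeKKF2QT4fwpM"
	fmt.Println(RedactJWTs(msg)) // prints "invalid token: [REDACTED-JWT]"
}
```

Running every log line and every LLM-bound string through a filter like this gives a last line of defense even when a handler accidentally interpolates a token.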

In a black-box scan, middleBrick tests unauthenticated attack surfaces and runs checks in parallel, including LLM/AI Security. It looks for patterns where JWTs are mentioned in responses or can be inferred through techniques such as system prompt extraction or data exfiltration probes. If a Fiber API uses JWTs in authorization headers or response bodies and also serves an LLM endpoint without proper output controls, middleBrick’s LLM/AI checks can flag this as a potential leakage path, mapping findings to frameworks like OWASP API Top 10 and highlighting remediation guidance.

JWT-Specific Remediation in Fiber — concrete code fixes

To reduce LLM data leakage risk when using JWTs in Fiber, ensure tokens are never embedded in error messages, logs, or responses that an LLM can observe. Handle errors generically, avoid echoing token values, and structure logging to exclude sensitive payloads. Below are concrete code examples demonstrating secure practices.

Example 1: Safe error handling without exposing JWT values

package main

import (
	"strings"
	"time"

	"github.com/gofiber/fiber/v2"
	"github.com/golang-jwt/jwt/v5"
)

func main() {
	app := fiber.New()

	app.Post("/login", func(c *fiber.Ctx) error {
		// Validate credentials and issue a token
		token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
			"sub": "1234567890",
			"exp": time.Now().Add(time.Hour).Unix(), // short-lived expiry
		})
		// Load the signing key from configuration or a secret store in real code.
		tokenString, err := token.SignedString([]byte("your-secret-key"))
		if err != nil {
			// Generic error, no token in message
			return c.Status(fiber.StatusInternalServerError).JSON(fiber.Map{
				"error": "unable to generate token",
			})
		}
		return c.JSON(fiber.Map{"access_token": tokenString})
	})

	app.Use(func(c *fiber.Ctx) error {
		// Middleware that validates the JWT without echoing it.
		// Registered after /login, so it protects only routes added
		// below it; /login itself stays public.
		auth := c.Get("Authorization")
		const prefix = "Bearer "
		if !strings.HasPrefix(auth, prefix) {
			return c.Status(fiber.StatusUnauthorized).JSON(fiber.Map{
				"error": "authorization header missing or malformed",
			})
		}
		tokenString := strings.TrimPrefix(auth, prefix)
		_, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
			if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
				return nil, fiber.ErrUnauthorized
			}
			return []byte("your-secret-key"), nil
		})
		if err != nil {
			// Generic error: no token value or parser detail leaked
			return c.Status(fiber.StatusUnauthorized).JSON(fiber.Map{
				"error": "invalid authorization",
			})
		}
		return c.Next()
	})

	app.Listen(":3000")
}

Example 2: Structured logging that excludes JWT values

package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
)

func auditLog(c *fiber.Ctx, status int, err error) {
	// Log only metadata, never the JWT itself. The "request_id" local is
	// assumed to be set by an earlier middleware (e.g. Fiber's requestid).
	log.Printf("request_id=%v method=%s path=%s status=%d error=%v",
		c.Locals("request_id"), c.Method(), c.Path(), status, err)
}

func main() {
	app := fiber.New()

	app.Get("/profile", func(c *fiber.Ctx) error {
		auth := c.Get("Authorization")
		if auth == "" {
			auditLog(c, fiber.StatusUnauthorized, fiber.ErrUnauthorized)
			return c.Status(fiber.StatusUnauthorized).JSON(fiber.Map{
				"error": "authorization header missing",
			})
		}
		// Validate token and proceed
		// ... token validation logic ...
		auditLog(c, fiber.StatusOK, nil)
		return c.JSON(fiber.Map{"profile": "ok"})
	})

	app.Listen(":3000")
}

Example 3: Avoiding token leakage in LLM-facing outputs

package main

import (
	"github.com/gofiber/fiber/v2"
)

func safeLLMHandler(c *fiber.Ctx) error {
	// Do not include request-scoped tokens in the prompt or response.
	prompt := "Analyze this request safely without exposing credentials."
	// Send prompt to the LLM integration; never concatenate a JWT into it.
	// llmResponse, err := callLLM(prompt)
	// if err != nil {
	//     return c.Status(fiber.StatusInternalServerError).JSON(fiber.Map{"error": "analysis failed"})
	// }
	_ = prompt // placeholder until callLLM is wired up
	// For this example, return a generic response
	return c.JSON(fiber.Map{"analysis": "completed securely"})
}

func main() {
	app := fiber.New()
	app.Post("/analyze", safeLLMHandler)
	app.Listen(":3000")
}
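Beyond keeping server-side tokens out of prompts, it is also worth scrubbing user-supplied input before it is concatenated into an LLM prompt, since clients sometimes paste whole Authorization headers into free-text fields. A stdlib sketch along the same lines as Example 3 (the `SanitizePromptInput` helper and its pattern are illustrative assumptions, not a middleBrick or Fiber API):

```go
package main

import (
	"fmt"
	"regexp"
)

// bearerPattern matches "Bearer" followed by a compact-JWT-shaped value.
var bearerPattern = regexp.MustCompile(`(?i)bearer\s+[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+`)

// SanitizePromptInput strips credential-shaped material from untrusted
// text before it is embedded in an LLM prompt.
func SanitizePromptInput(s string) string {
	return bearerPattern.ReplaceAllString(s, "[CREDENTIAL REMOVED]")
}

func main() {
	user := "please debug: Bearer eyJhbGci.eyJzdWIi.sig123 returns 401"
	fmt.Println(SanitizePromptInput(user)) // prints "please debug: [CREDENTIAL REMOVED] returns 401"
}
```

Sanitizing at the prompt-assembly boundary means a prompt-injection probe cannot echo back a credential that was never placed in the model's context.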

By following these patterns, you ensure JWTs are treated as sensitive data that should not be reflected in any output that could be inspected by an LLM. middleBrick’s LLM/AI Security checks can validate that your implementation avoids leakage, and its CLI tool allows you to scan from the terminal to verify these protections in your CI/CD pipeline.

Related CWEs

CWE ID   | Name                                                 | Severity
CWE-754  | Improper Check for Unusual or Exceptional Conditions | MEDIUM

Frequently Asked Questions

Can an LLM reconstruct a JWT from error messages alone?
Yes, if error messages include or echo parts of a JWT (e.g., 'Invalid token: {partial}'), an LLM can extract and reconstruct the token through repeated probes or output scanning.
Does middleBrick automatically fix JWT leakage issues in Fiber APIs?
No. middleBrick detects and reports potential LLM data leakage involving JWTs and provides remediation guidance, but it does not automatically fix or modify your code.