Severity: HIGH

LLM Data Leakage in ASP.NET with HMAC Signatures

LLM Data Leakage in ASP.NET with HMAC Signatures — how this specific combination creates or exposes the vulnerability

LLM data leakage in ASP.NET applications that use HMAC signatures can occur when application logic unintentionally exposes sensitive request or response data to language models or when data meant for LLM consumption is derived from or includes HMAC-verified information. HMAC signatures are typically used to ensure request integrity and authenticity, for example to verify that a webhook or API callback originates from a trusted source. If an application embeds sensitive data within the payload that is also included in the HMAC calculation or exposes intermediate values used during signature validation, and that data is later passed to an LLM endpoint, sensitive information such as authentication tokens, user identifiers, or business logic details may be revealed.
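As an illustration, consider a hypothetical webhook payload like the one below (all field names are invented for this example). The sender computes the HMAC over the entire body, so the session token is part of the signed data even though it should never be forwarded to an LLM:

```json
{
  "orderId": "ORD-1001",
  "userId": "u-42",
  "amount": 19.99,
  "sessionToken": "a-bearer-token-that-must-be-redacted"
}
```

Signature verification requires the full body, but nothing downstream of verification needs `sessionToken`; the safe pattern is to drop it immediately after the HMAC check passes.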

Consider an ASP.NET Core controller that validates an HMAC signature from a header and then forwards some of the request payload to an LLM service for analysis. If the developer does not strip or redact sensitive fields before sending data to the LLM, the LLM or its logs may retain secrets. Even when using stateless HMAC verification, if the application reuses the same data structures for both verification and LLM input, leakage can occur through error messages or through logging of the raw request. This is especially risky when the LLM endpoint is unauthenticated or when output scanning is not applied, as the model might echo back fragments of the input that contain secrets embedded in the payload.
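A minimal sketch of this risky pattern follows. The `ILlmClient` interface and its `AnalyzeAsync` method are hypothetical stand-ins for whatever LLM client the application uses:

```csharp
// Anti-pattern sketch: the same deserialized object is used for both
// HMAC verification and the LLM call, so every signed field -- including
// secrets -- travels to the external LLM service.
[HttpPost("analyze")]
public async Task<IActionResult> Analyze([FromBody] OrderRequest request)
{
    if (!VerifyHmacSignature(Request))
        return Unauthorized();

    // BUG: serializes the full signed payload, SensitiveToken included,
    // and sends it to a service whose logs the application does not control.
    var analysis = await _llmClient.AnalyzeAsync(JsonSerializer.Serialize(request));
    return Ok(analysis);
}
```

The fix, shown in the remediation section below, is to map the verified payload onto a reduced model before any external call.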

middleBrick detects this class of risk under LLM/AI Security by checking for system prompt leakage, active prompt injection, output scanning for PII or API keys, and unauthenticated LLM endpoints. When combined with ASP.NET applications that use HMAC signatures, the scanner looks for patterns where signed data is transmitted to LLM endpoints without sanitization. For example, if a JSON Web Token or a user identifier is included in the signed payload and is also present in the request body sent to the LLM, middleBrick flags the scenario as a potential data exposure path. This helps developers understand that HMAC integrity does not prevent accidental data exposure to LLMs; explicit data minimization and redaction are required.

In practice, an attacker who can influence what data is sent to the LLM might coerce the application into sending signed payload fields that contain personally identifiable information or session tokens. If the LLM response is returned to the client or logged, sensitive information could be exfiltrated. This is not a weakness in HMAC itself, but a design issue where the application passes too much information to downstream AI services. middleBrick’s checks for system prompt leakage and output scanning help surface cases where sensitive data could be echoed back, even when HMAC verification passes.

middleBrick supports OpenAPI/Swagger spec analysis and can correlate runtime findings with spec definitions, including $ref resolution. For ASP.NET APIs, this means it can detect whether LLM-related endpoints are defined and whether they accept payloads that overlap with fields used in HMAC-signed structures. By cross-referencing spec definitions with runtime behavior, the scanner highlights where data flows from authenticated, signed requests into unauthenticated or poorly monitored LLM endpoints, prompting developers to review data flow and apply strict input filtering before any LLM call.
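The overlap the scanner correlates can be seen in a spec fragment like this hypothetical one, where the LLM analysis endpoint reuses the same schema as the HMAC-signed order endpoint:

```yaml
paths:
  /api/order/process:
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/OrderRequest'  # HMAC-signed payload
  /api/llm/analyze:
    post:
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/OrderRequest'  # same schema reused: flagged overlap
components:
  schemas:
    OrderRequest:
      type: object
      properties:
        orderId: { type: string }
        userId: { type: string }
        amount: { type: number }
        sensitiveToken: { type: string }  # should never appear in an LLM-facing schema
```

Defining a separate, reduced schema for the LLM endpoint makes the data-minimization boundary visible in the spec itself.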

HMAC-Specific Remediation in ASP.NET — concrete code fixes

Remediation focuses on ensuring that sensitive data included in HMAC-signed structures is never forwarded to LLM endpoints, and that redaction happens before any external call. In ASP.NET Core, developers should create explicit view models for LLM requests that exclude sensitive fields, and validate HMAC on a sanitized subset of the original payload. The following examples illustrate a secure pattern where HMAC verification is performed first, then only safe, non-sensitive data is passed to the LLM client.

First, define a model for the signed payload that includes all fields needed for business logic and HMAC validation, and a separate model for LLM consumption that omits secrets:

// Signed payload model
public class OrderRequest
{
    public string OrderId { get; set; }
    public string UserId { get; set; }
    public decimal Amount { get; set; }
    public string SensitiveToken { get; set; } // Used for HMAC, not sent to LLM
}

// LLM-safe model
public class OrderForLlm
{
    public string OrderId { get; set; }
    public decimal Amount { get; set; }
}

Next, in the controller, verify the HMAC signature using the full payload, then map to the safe model before invoking the LLM service:

using System.Security.Cryptography;
using System.Text;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/order")]
public class OrderController : ControllerBase
{
    // In production, load the key from configuration or a secret store; never hard-code it.
    private const string Secret = "your-256-bit-secret";

    [HttpPost("process")]
    public IActionResult ProcessOrder([FromBody] OrderRequest request)
    {
        if (!TryComputeHmac(request, Secret, out var computedHmac))
        {
            return Unauthorized("Invalid signature");
        }

        var providedHmac = Request.Headers["X-Hub-Signature-256"].ToString().Replace("sha256=", "");
        if (!CryptographicOperations.FixedTimeEquals(
            Encoding.UTF8.GetBytes(computedHmac),
            Encoding.UTF8.GetBytes(providedHmac)))
        {
            return Unauthorized("Invalid signature");
        }

        // Map to safe model before LLM call
        var safePayload = new OrderForLlm { OrderId = request.OrderId, Amount = request.Amount };

        // Call LLM with safePayload only
        var llmResponse = CallLlmEndpoint(safePayload);
        return Ok(llmResponse);
    }

    private bool TryComputeHmac(OrderRequest request, string key, out string hmac)
    {
        try
        {
            using var hmacAlg = new HMACSHA256(Encoding.UTF8.GetBytes(key));
            // In production, compute the HMAC over the raw request body rather than a
            // reconstructed string, so the signed bytes match exactly what the sender signed.
            var payload = $"{request.OrderId}|{request.UserId}|{request.Amount}";
            var hash = hmacAlg.ComputeHash(Encoding.UTF8.GetBytes(payload));
            // Lowercase hex matches the "sha256=<hex>" convention of the
            // X-Hub-Signature-256 header checked above.
            hmac = Convert.ToHexString(hash).ToLowerInvariant();
            return true;
        }
        catch
        {
            hmac = null;
            return false;
        }
    }

    private string CallLlmEndpoint(OrderForLlm payload)
    {
        // Implement HTTP call to LLM endpoint with payload serialization
        // Ensure no sensitive fields are included in the serialized JSON
        return "LLM response";
    }
}

Additionally, apply output scanning and input validation to ensure that any data extracted from LLM responses does not contain secrets. Avoid logging raw signed requests or including sensitive headers in telemetry. With these controls, HMAC signatures provide integrity while preventing LLM data leakage through accidental exposure of authenticated data.
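A lightweight output scan can be applied to LLM responses before they are returned or logged. This sketch uses a few illustrative regular expressions; the patterns shown are examples only, and a production system should use a vetted PII and secret detection library:

```csharp
using System.Text.RegularExpressions;

public static class LlmOutputScanner
{
    // Illustrative patterns only -- not an exhaustive secret taxonomy.
    private static readonly Regex[] SecretPatterns =
    {
        // JWT-like: three base64url segments separated by dots
        new Regex(@"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
        // API-key-like: common "sk-" prefix followed by a long token
        new Regex(@"sk-[A-Za-z0-9]{20,}"),
        // US SSN-like: NNN-NN-NNNN
        new Regex(@"\b\d{3}-\d{2}-\d{4}\b")
    };

    // Replaces any matching span with a placeholder before the response
    // leaves the application boundary.
    public static string Redact(string llmResponse)
    {
        foreach (var pattern in SecretPatterns)
            llmResponse = pattern.Replace(llmResponse, "[REDACTED]");
        return llmResponse;
    }
}
```

In the controller above, wrapping the return as `Ok(LlmOutputScanner.Redact(llmResponse))` adds a second line of defense in case the LLM echoes back a secret that slipped through input minimization.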

Related CWEs (category: LLM/AI Security)

CWE ID    Name                                                    Severity
CWE-754   Improper Check for Unusual or Exceptional Conditions    MEDIUM

Frequently Asked Questions

Does HMAC verification prevent LLM data leakage by itself?
No. HMAC ensures request integrity but does not prevent sensitive data from being forwarded to LLMs. Explicit data minimization and redaction are required.
How does middleBrick detect LLM data leakage risks in ASP.NET apps with HMAC signatures?
middleBrick checks for system prompt leakage, prompt injection, output scanning for PII/API keys, unauthenticated LLM endpoints, and maps data flows from signed payloads to LLM calls using OpenAPI analysis.