
Formula Injection in Feathers.js with JWT Tokens

Formula Injection in Feathers.js with JWT Tokens — how this specific combination creates or exposes the vulnerability

Formula Injection in Feathers.js occurs when untrusted input is used to construct executable logic such as arithmetic expressions or formulas that are later evaluated, often by downstream services or libraries. When JWT tokens are involved, the risk shifts from direct runtime code execution to authorization bypass or data integrity compromise if claims derived from or compared against token payloads are manipulated.

Consider a Feathers service that calculates a user’s access level by evaluating a formula stored in a database and supplied, in part, by a JWT claim. If the formula string is built by concatenating user-controlled values (e.g., a role weight from a decoded JWT) with unchecked external input, an attacker can inject additional expressions. For example, a token with { "roleWeight": 10 } might lead to a formula like "10 + " + userInput. If userInput is "-5", the resulting expression 10 + -5 still evaluates cleanly to a number, so the manipulation raises no error and the altered value slips through unnoticed. More critically, if the service resolves the formula with Function or an eval-like mechanism, injected code executes in the evaluation context.
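
The concatenation pattern above can be reproduced in a few lines. This is an illustrative sketch (the computeAccessLevel helper is hypothetical, not a Feathers API): benign input evaluates quietly, while a crafted payload uses the comma operator to replace the result entirely.

```javascript
// DANGEROUS: the formula string is evaluated with Function, so any
// JavaScript expression in userInput runs in this process.
function computeAccessLevel(roleWeight, userInput) {
  const formula = `${roleWeight} + ${userInput}`;
  return Function(`return (${formula});`)();
}

// Benign input behaves as expected:
console.log(computeAccessLevel(10, '-5'));      // 5

// Malicious input injects an expression that discards the role weight:
console.log(computeAccessLevel(10, '0, 9999')); // 9999 via the comma operator
```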

When JWT tokens are used for authentication, developers sometimes place trust in decoded payloads without additional validation. If a Feathers hook or service assumes a claim such as permissions is immutable because it is signed, an attacker who can tamper with the token (via weak secrets, algorithm confusion, or token leakage) can modify permissions and have them accepted as valid. Even when tokens are verified, combining claims with unchecked external data in authorization checks can lead to path or logic flaws. For instance, a service might compute effective permissions as permsFromToken.concat(userSuppliedRoles) and then evaluate whether the combined set includes "admin". If userSuppliedRoles contains values like "admin" injected via a malicious client, the check passes despite the token lacking that right.
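
A minimal sketch of that flaw (canAdminister is a hypothetical helper, not part of Feathers): merging request-supplied roles into the decision lets a client grant itself rights the signed token never contained.

```javascript
// DANGEROUS pattern: roles from the request body are merged with token claims
function canAdminister(permsFromToken, userSuppliedRoles) {
  const effective = permsFromToken.concat(userSuppliedRoles);
  return effective.includes('admin');
}

// The verified token only grants "viewer"...
console.log(canAdminister(['viewer'], []));        // false
// ...but the client escalates simply by adding "admin" to the request body:
console.log(canAdminister(['viewer'], ['admin'])); // true
```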

In practice, this manifests in endpoints that accept formula-like parameters, such as discount rules or dynamic pricing expressions, where JWT-derived user roles influence which formulas are allowed. An attacker can supply a payload like { "formula": "basePrice * (1 - 0.20); grantAccessIf(baseRole == 'admin')", "baseRole": "user" } and, if the server concatenates or interpolates baseRole from the JWT into the expression, the injected logic may alter control flow or data visibility. Because the scan reports from middleBrick include findings mapped to OWASP API Top 10 and reference real attack patterns, teams can identify such formula-injection risks during unauthenticated or authenticated scans, even when JWT validation appears intact.

Remediation begins with strict input validation and avoiding runtime evaluation of attacker-influenced strings. Do not use eval, Function, or template-based evaluators on external data. Instead, use a safe parser or a controlled rule engine. For JWT-related risks, validate all claims against an allowlist, enforce token binding where feasible, and avoid concatenating token-derived values directly into executable logic. middleBrick’s LLM/AI Security checks can also detect prompt injection patterns that parallel formula injection in AI-driven workflows, and its OpenAPI/Swagger analysis resolves $ref definitions to cross-reference runtime behavior with declared schemas.
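
As a sketch of claim allowlisting (validateClaims and ALLOWED_CLAIM_VALUES are illustrative names, not Feathers or middleBrick APIs): this runs only after signature verification, e.g. jwt.verify(token, secret, { algorithms: ['HS256'] }) from the jsonwebtoken package with a pinned algorithm, and hands downstream logic only the claims it is permitted to use.

```javascript
const ALLOWED_CLAIM_VALUES = {
  role: new Set(['admin', 'editor', 'user', 'guest'])
};

// Accepts a decoded, signature-verified payload and returns only the
// claims business logic may use, rejecting anything outside the allowlist.
function validateClaims(payload) {
  const role = payload && payload.role;
  if (typeof role !== 'string' || !ALLOWED_CLAIM_VALUES.role.has(role)) {
    throw new Error('Rejected token: unrecognized role claim');
  }
  return { role }; // never hand the raw payload to downstream logic
}

console.log(validateClaims({ role: 'editor', scope: 'ignored' })); // { role: 'editor' }
```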

JWT-Specific Remediation in Feathers.js — concrete code fixes

Secure Feathers services should treat JWT claims as untrusted inputs when constructing logic or combining them with external data. Below are concrete examples that demonstrate insecure patterns and their remediations.

Insecure example: dynamic formula assembly with JWT-derived role

// Before: unsafe concatenation feeding an eval-like step downstream
app.service('pricing').hooks({
  before: {
    create: [context => {
      const userRole = context.params.user.roleFromToken; // from JWT claim
      const externalFormula = context.data.formula; // attacker-controlled
      // Dangerous: builds a formula string that is later evaluated
      context.params.formulaToEval = `${userRole} ${externalFormula}`;
      return context;
    }]
  }
});

Remediation: validate, map, and avoid eval

// After: safe mapping and parameterized logic
const roleWeightMap = {
  admin: 10,
  user: 5,
  guest: 1
};

app.service('pricing').hooks({
  before: {
    create: [context => {
      const claimedRole = context.params.authRole; // validated JWT claim
      const weight = roleWeightMap[claimedRole];
      if (weight === undefined) {
        throw new Error('Unauthorized role');
      }
      // Plain arithmetic on validated numbers instead of eval
      const base = Number(context.data.basePrice) || 0;
      const discount = Number(context.data.discount) || 0;
      const finalPrice = (base * (100 - discount)) / 100;
      // Setting context.result in a before hook short-circuits the service call
      context.result = { finalPrice, weight };
      return context;
    }]
  }
});
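
The "safe parser or controlled rule engine" advice can also take the form of a fixed catalogue of named rules, so clients pick a rule by name instead of submitting expression strings. A minimal sketch (PRICING_RULES and applyRule are illustrative names):

```javascript
// Each rule is a plain function; nothing from the client is ever evaluated.
// A Map avoids prototype-chain lookups (e.g. a ruleName of "constructor").
const PRICING_RULES = new Map([
  ['flat', (base) => base],
  ['percentDiscount', (base, discount) => (base * (100 - discount)) / 100]
]);

function applyRule(ruleName, base, discount) {
  const rule = PRICING_RULES.get(ruleName);
  if (typeof rule !== 'function') {
    throw new Error(`Unknown pricing rule: ${ruleName}`);
  }
  return rule(Number(base) || 0, Number(discount) || 0);
}

console.log(applyRule('percentDiscount', 200, 20)); // 160
```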

Insecure example: merging JWT claims with external roles in authorization checks

// Before: trusting concatenated role lists from token and input
const allRoles = context.params.authRoles.concat(context.data.additionalRoles);
if (allRoles.includes('admin')) {
  // grant access
}

Remediation: canonicalize and validate each source independently

// After: strict allowlist and no external concatenation
const ALLOWED_ROLES = new Set(['admin', 'editor']);
const tokenRoles = Array.isArray(context.params.authRoles) ? context.params.authRoles : [];
// Canonicalize: keep only roles the application recognizes
const effectiveRoles = tokenRoles.filter(role => ALLOWED_ROLES.has(role));
if (!effectiveRoles.includes('admin')) {
  throw new Error('Forbidden');
}
// additionalRoles from the request must never feed privilege decisions unless explicitly trusted

These examples emphasize that JWT tokens provide identity and claims, but business logic must treat those claims as inputs to be validated, not as trusted instructions. By using enumerated mappings, avoiding runtime code generation, and relying on allowlists, Feathers services reduce the attack surface related to formula injection and JWT misuse. middleBrick’s CLI tool (middlebrick scan <url>) can be run against your service endpoints to surface such issues, while the GitHub Action helps prevent regressions by failing builds when risk scores degrade.

Frequently Asked Questions

How can I test my Feathers.js API for formula injection risks using middleBrick?
Run the middleBrick CLI: middlebrick scan https://your-api.example.com. The scan completes in 5–15 seconds and will flag endpoints that accept formula-like inputs or show logic flaws. For ongoing checks, add the GitHub Action to fail builds if the score drops below your chosen threshold.
Does middleBrick’s LLM/AI Security testing help detect formula injection patterns?
Yes. The LLM/AI Security checks include active prompt injection testing and system prompt leakage detection. While focused on AI endpoints, the patterns can surface similar injection risks in APIs that dynamically construct logic, and findings are mapped to frameworks such as OWASP API Top 10.