Information Disclosure in Fiber with DynamoDB
Information disclosure occurs when an API unintentionally exposes data that should be restricted. In a Fiber application that uses DynamoDB as a persistence layer, this risk arises from a combination of framework behavior, DynamoDB access patterns, and API design choices. A common scenario involves endpoints that accept an identifier (such as a user ID or record ID), construct a DynamoDB key, and retrieve an item without verifying that the requesting user is authorized to view that item.
For example, consider a route that reads a user profile using a path parameter:
// app is a *fiber.App; client is a *dynamodb.Client from the AWS SDK for Go v2.
app.Get("/profiles/:userId", func(c *fiber.Ctx) error {
	out, err := client.GetItem(c.Context(), &dynamodb.GetItemInput{
		TableName: aws.String("UserProfiles"),
		Key: map[string]types.AttributeValue{
			"userId": &types.AttributeValueMemberS{Value: c.Params("userId")},
		},
	})
	if err != nil {
		// Vulnerable: the raw SDK error is returned to the caller.
		return c.Status(fiber.StatusInternalServerError).SendString(err.Error())
	}
	return c.JSON(out.Item)
})
If the API does not ensure that the authenticated caller matches the userId in the path, an attacker can iterate over arbitrary user IDs and obtain profiles they should not see. This is an IDOR (Insecure Direct Object Reference) pattern enabled by a lack of authorization checks before the DynamoDB call.
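The missing authorization can be expressed as a small, testable predicate that runs before any DynamoDB call. The helper below is a hypothetical sketch (the function name and the admin-role override are illustrative assumptions, not part of the original code):

```go
package main

import "fmt"

// canAccessProfile is a hypothetical ownership check: a caller may read a
// profile only if it is their own, or if they hold an admin role.
func canAccessProfile(authenticatedUserID, requestedUserID string, roles []string) bool {
	if authenticatedUserID == requestedUserID {
		return true
	}
	for _, r := range roles {
		if r == "admin" {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canAccessProfile("u1", "u1", nil))               // own profile: true
	fmt.Println(canAccessProfile("u1", "u2", nil))               // someone else's profile: false
	fmt.Println(canAccessProfile("u1", "u2", []string{"admin"})) // admin override: true
}
```

Keeping the rule in one function makes it easy to unit-test the denial cases, which is exactly where IDOR bugs hide.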
DynamoDB-specific characteristics can amplify the issue. Because DynamoDB responses include only the attributes that exist on an item, a missing attribute is not inherently an error. If an application relies on the presence or absence of an attribute to enforce scoping (for example, checking whether a tenantId attribute is set), an attacker who can create or target items lacking that attribute can bypass the check. Additionally, when queries are meant to be scoped by a partition key derived from user context (such as tenantId), an application that falls back to a less constrained Query or a Scan when that value is omitted will retrieve data from other tenants' partitions.
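The safe posture is deny-by-default: a missing scoping attribute means no access, never "no restriction". A minimal sketch, using a simplified string map to stand in for a DynamoDB item (the attribute name tenantId follows the example above; the helper name is an assumption):

```go
package main

import "fmt"

// itemBelongsToTenant treats a missing tenantId attribute as a denial rather
// than a pass-through. The map is a simplified stand-in for a DynamoDB item.
func itemBelongsToTenant(item map[string]string, tenantID string) bool {
	got, ok := item["tenantId"]
	if !ok {
		return false // absent attribute means no access, not "unrestricted"
	}
	return got == tenantID
}

func main() {
	fmt.Println(itemBelongsToTenant(map[string]string{"tenantId": "t1"}, "t1")) // true
	fmt.Println(itemBelongsToTenant(map[string]string{"tenantId": "t2"}, "t1")) // false: wrong tenant
	fmt.Println(itemBelongsToTenant(map[string]string{}, "t1"))                 // false: missing attribute denies
}
```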
Another vector involves verbose error messages returned by DynamoDB and then surfaced directly to the client. For instance, a conditional write that fails due to a missing attribute may return a detailed ConditionalCheckFailedException. If this response is forwarded to the caller, it can reveal internal attribute names, key structure, or update patterns. In a Fiber service, unhandled or poorly mapped errors may expose stack traces or raw DynamoDB error payloads, further aiding an attacker in understanding the data model.
Data Exposure checks performed by middleBrick can identify these risks by correlating endpoint definitions in an OpenAPI spec with runtime behavior. For example, an endpoint that retrieves user-specific data without an authorization check before the DynamoDB operation would be flagged. The scanner also examines error handling patterns and verifies whether sensitive information, such as internal keys or DynamoDB metadata, is inadvertently returned in responses.
LLM/AI Security checks offered by middleBrick additionally look for system prompt leakage and injection attempts targeting any AI components integrated into the API. Although not specific to DynamoDB, these checks ensure that AI endpoints used alongside data access do not expose instructions or sensitive context that could lead to further disclosure.
DynamoDB-Specific Remediation in Fiber
Remediation focuses on enforcing authorization before any DynamoDB interaction, validating and normalizing inputs, and ensuring error messages do not leak internal details. The following examples illustrate secure patterns for a Fiber application.
1. Enforce ownership or role-based checks before querying DynamoDB:
app.Get("/profiles/:userId", func(c *fiber.Ctx) error {
	authenticatedUserID, _ := c.Locals("userID").(string) // set by auth middleware from the session or token
	requestedUserID := c.Params("userId")
	if authenticatedUserID != requestedUserID {
		return c.Status(fiber.StatusForbidden).SendString("Forbidden")
	}
	out, err := client.GetItem(c.Context(), &dynamodb.GetItemInput{
		TableName: aws.String("UserProfiles"),
		Key: map[string]types.AttributeValue{
			"userId": &types.AttributeValueMemberS{Value: requestedUserID},
		},
	})
	if err != nil {
		// Log the detailed error internally; return a generic message to the caller.
		log.Printf("failed to fetch profile: %v", err)
		return c.Status(fiber.StatusInternalServerError).SendString("Internal error")
	}
	if out.Item == nil {
		return c.Status(fiber.StatusNotFound).SendString("Not found")
	}
	return c.JSON(out.Item)
})
2. Use a tenant-aware query pattern that includes the tenant identifier in the key condition, preventing cross-tenant access:
app.Get("/items", func(c *fiber.Ctx) error {
	tenantID, _ := c.Locals("tenantID").(string) // enforced by authentication middleware
	out, err := client.Query(c.Context(), &dynamodb.QueryInput{
		TableName:              aws.String("TenantItems"),
		IndexName:              aws.String("TenantIdIndex"),
		KeyConditionExpression: aws.String("tenantId = :tid"),
		ExpressionAttributeValues: map[string]types.AttributeValue{
			":tid": &types.AttributeValueMemberS{Value: tenantID},
		},
	})
	if err != nil {
		log.Printf("query failed: %v", err)
		return c.Status(fiber.StatusInternalServerError).SendString("Internal error")
	}
	return c.JSON(out.Items)
})
3. Normalize attributes to avoid bypass via missing fields. If an access control rule depends on a role attribute, ensure it is always present and defaulted on write:
role := req.Role // parsed from the request body earlier in the handler
if role == "" {
	role = "user" // enforce a default so the attribute is always present
}
_, err := client.PutItem(c.Context(), &dynamodb.PutItemInput{
	TableName: aws.String("UserRoles"),
	Item: map[string]types.AttributeValue{
		"userId":    &types.AttributeValueMemberS{Value: "user-123"},
		"role":      &types.AttributeValueMemberS{Value: role},
		"updatedAt": &types.AttributeValueMemberS{Value: time.Now().UTC().Format(time.RFC3339)},
	},
})
4. Mask internal errors and avoid exposing DynamoDB metadata. Map ConditionalCheckFailedException and other internal errors to generic responses after logging details securely:
app.Put("/records/:id", func(c *fiber.Ctx) error {
	var body struct {
		Status string `json:"status"`
	}
	if err := c.BodyParser(&body); err != nil {
		return c.Status(fiber.StatusBadRequest).SendString("Invalid body")
	}
	_, err := client.UpdateItem(c.Context(), &dynamodb.UpdateItemInput{
		TableName:           aws.String("Records"),
		Key:                 map[string]types.AttributeValue{"id": &types.AttributeValueMemberS{Value: c.Params("id")}},
		UpdateExpression:    aws.String("SET #status = :s"),
		ConditionExpression: aws.String("attribute_exists(id)"),
		ExpressionAttributeNames: map[string]string{"#status": "status"},
		ExpressionAttributeValues: map[string]types.AttributeValue{
			":s": &types.AttributeValueMemberS{Value: body.Status},
		},
	})
	if err != nil {
		var ccf *types.ConditionalCheckFailedException
		if errors.As(err, &ccf) {
			log.Printf("condition failed on record %s", c.Params("id"))
			return c.Status(fiber.StatusConflict).SendString("Conflict")
		}
		log.Printf("update error: %v", err)
		return c.Status(fiber.StatusInternalServerError).SendString("Internal error")
	}
	return c.SendStatus(fiber.StatusNoContent)
})
5. Validate and sanitize all inputs used in DynamoDB expressions to prevent injection or malformed queries. Use parameterized expression attribute values rather than concatenating user input into expression strings.
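A simple allowlist check on identifiers, applied before they ever reach a key or expression value, covers most cases. A minimal sketch, assuming a hypothetical identifier format (alphanumeric plus hyphen and underscore, up to 64 characters; adjust the pattern to whatever your keys actually look like):

```go
package main

import (
	"fmt"
	"regexp"
)

// userIDPattern is an assumed identifier format for this sketch; tighten or
// relax it to match the key format your table actually uses.
var userIDPattern = regexp.MustCompile(`^[A-Za-z0-9_-]{1,64}$`)

// validUserID rejects anything that could not be a legitimate key before it
// is placed into a DynamoDB expression attribute value.
func validUserID(id string) bool {
	return userIDPattern.MatchString(id)
}

func main() {
	fmt.Println(validUserID("user-123"))         // true
	fmt.Println(validUserID(""))                 // false: empty
	fmt.Println(validUserID("x OR tenant = y"))  // false: spaces and '=' rejected
}
```

Rejecting malformed identifiers early also shrinks the error surface: requests that would otherwise trigger verbose SDK validation errors are answered with a plain 400 instead.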
middleBrick’s scans can verify that these controls are present by checking endpoint definitions and error handling flows. The CLI tool enables quick local verification, while the GitHub Action can enforce a minimum security score before merging changes. Continuous monitoring in the Pro plan helps detect regressions that might reintroduce information disclosure risks.