Insufficient Logging in FeathersJS with DynamoDB
Insufficient Logging in FeathersJS with DynamoDB — how this specific combination creates or exposes the vulnerability
FeathersJS is a popular framework for building REST and real-time APIs with JavaScript and Node.js. When using DynamoDB as the persistence layer, insufficient logging can prevent detection and investigation of security-relevant events. Unlike traditional SQL databases, DynamoDB operates as a managed NoSQL service and does not provide native query logs or connection-level audit trails. This shifts responsibility to the application layer to capture and retain meaningful operational data.
An insufficient logging strategy in a FeathersJS service backed by DynamoDB means that critical actions—such as create, update, delete, and get operations—are not recorded with enough context to support incident investigation. For example, if a record is unexpectedly modified or deleted, the absence of request metadata (user identity, source IP, timestamps, before/after states) and application-level outcomes (success, validation failure, conditional check failure) makes it difficult to distinguish between legitimate usage, misconfiguration, or an attack like BOLA (Broken Object Level Authorization).
FeathersJS hooks are the primary extensibility point where logging can be added. Without explicit hook implementations that write to an external log store, developers may rely only on default error handling or console output, which does not persist beyond container restarts and is not centralized. Attackers may exploit this gap by suppressing error responses or manipulating payloads to avoid generating visible traces. In addition, DynamoDB operations such as ConditionExpression failures can silently fail if not explicitly logged, leaving no evidence of authorization or data integrity issues. This lack of visibility aligns with the Insufficient Logging finding category in middleBrick’s 12 security checks, which tests whether endpoints produce adequate audit trails for unauthenticated and authenticated scenarios.
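Because a Feathers hook is just an async function that receives a context object, an audit hook can be sketched without touching the adapter at all. The following is a minimal sketch, not production code; the `sink` callback and the field names pulled from `context.params` are illustrative assumptions about your setup:

```javascript
// Minimal sketch of a Feathers-style audit hook. A hook receives a context
// object ({ method, path, params, result, error }); `sink` stands in for a
// real log transport (CloudWatch, ELK, etc.).
function makeAuditHook(sink) {
  return async (context) => {
    sink({
      timestamp: new Date().toISOString(),
      method: context.method,                     // 'create', 'patch', 'remove', ...
      path: context.path,
      userId: context.params?.user?.id ?? null,   // assumes an auth hook set params.user
      ip: context.params?.ip ?? null,
      outcome: context.error ? 'failure' : 'success',
      error: context.error?.message ?? null,
    });
    return context;
  };
}

module.exports = { makeAuditHook };
```

Such a hook would typically be registered in both the after and error phases (for example via `app.service('resources').hooks({ after: { all: [hook] }, error: { all: [hook] } })`) so that failed requests leave a trace too.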
Real-world examples include scenarios where an unauthenticated or low-privilege actor leverages weak logging to perform data manipulation without detection, or where sensitive fields (such as roles or PII) are included in logged payloads without appropriate masking. middleBrick’s LLM/AI Security checks highlight how missing logs can also obscure prompt injection or data exfiltration attempts targeting AI-integrated endpoints. Robust logging in FeathersJS with DynamoDB must therefore capture request identifiers, principals, operations, input and output states (with sensitive data redacted), and system-level responses to ensure traceability and support compliance mappings to frameworks such as the OWASP API Security Top 10 and SOC 2.
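Masking has to cover nested payloads as well, since PII is rarely confined to top-level keys. One way to sketch this is a recursive deny-list masker; the key list here is an illustrative assumption to be tailored to your schema:

```javascript
// Recursively mask a deny-list of sensitive keys anywhere in a payload.
// SENSITIVE_KEYS is an assumption; extend it for your own schema.
const SENSITIVE_KEYS = new Set(['email', 'ssn', 'token', 'password']);

function redactDeep(value) {
  if (Array.isArray(value)) return value.map(redactDeep);
  if (value && typeof value === 'object') {
    const out = {};
    for (const [key, val] of Object.entries(value)) {
      out[key] = SENSITIVE_KEYS.has(key) ? '**redacted**' : redactDeep(val);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}

module.exports = { redactDeep };
```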
DynamoDB-Specific Remediation in FeathersJS — concrete code fixes
Remediation centers on instrumenting FeathersJS services to emit structured, immutable logs for every DynamoDB interaction. Use Feathers hooks to capture before and after states, status outcomes, and contextual metadata. Ensure logs do not contain unprotected secrets and are written to a centralized system.
Example DynamoDB client configuration for FeathersJS
Configure the AWS SDK with appropriate region and credentials (in practice, use IAM roles or environment variables), and create a DocumentClient that your service adapter uses. This client is then passed into the FeathersJS adapter so operations can be intercepted.
// aws-sdk v2 DocumentClient (v2 is in maintenance mode; the v3 equivalent
// lives in @aws-sdk/lib-dynamodb)
const { DynamoDB } = require('aws-sdk');

const dynamoDb = new DynamoDB.DocumentClient({
  region: process.env.AWS_REGION || 'us-east-1',
  // credentials injected via environment or managed runtime
});

module.exports = dynamoDb;
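Because this module is the single choke point for all DynamoDB traffic, one option is to wrap the client in a Proxy so every call site gets logged even if an individual service forgets its hook. This is a sketch, not an SDK feature; `sink` and the wrapper are assumptions, and the client API seen by callers is unchanged:

```javascript
// Wrap a DynamoDB DocumentClient (or any client object) in a Proxy that
// reports each method invocation to `sink` before delegating to the real
// client. The first positional argument is logged as the operation params.
function withCallLogging(client, sink) {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value !== 'function') return value;
      return (...args) => {
        sink({
          operation: String(prop),
          params: args[0] ?? null,
          at: new Date().toISOString(),
        });
        return value.apply(target, args);
      };
    },
  });
}

module.exports = { withCallLogging };
```

Usage would look like `module.exports = withCallLogging(dynamoDb, entry => console.log(JSON.stringify(entry)));`, keeping the rest of the codebase unaware of the instrumentation.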
FeathersJS service over DynamoDB with audit logging
Below is a complete, syntactically correct example of a custom Feathers service object that wraps its DynamoDB operations with logging logic. The log payload includes timestamp, event type, record ID, user context (if available), input parameters, condition expression, a redacted representation of previous and current values, and the outcome.
const dynamoDb = require('./dynamodb-client');
const { v4: uuidv4 } = require('uuid');

function auditLog(event, record) {
  const redacted = (obj) => {
    if (!obj) return obj;
    const clone = { ...obj };
    if (clone.email) clone.email = '**redacted**';
    if (clone.ssn) clone.ssn = '**redacted**';
    if (clone.token) clone.token = '**redacted**';
    return clone;
  };
  const logEntry = {
    id: uuidv4(),
    timestamp: new Date().toISOString(),
    event, // 'create', 'update', 'patch', 'remove'
    recordId: record?.id ?? null,
    userId: record?.userId ?? null,
    input: redacted(record?.data ?? null),
    previous: redacted(record?.previous ?? null),
    current: redacted(record?.current ?? null),
    condition: record?.condition ?? null,
    outcome: record?.outcome ?? null,
    error: record?.error ?? null,
  };
  // Replace with your logging sink, e.g., CloudWatch, Elasticsearch, or a secure blob
  console.log(JSON.stringify(logEntry));
}
const service = {
  async create(data, params) {
    const putParams = {
      TableName: process.env.DYNAMO_TABLE,
      Item: data,
      ConditionExpression: 'attribute_not_exists(id)',
    };
    try {
      await dynamoDb.put(putParams).promise();
      const record = { id: data.id, userId: params?.account?.userId, data, outcome: 'success' };
      auditLog('create', record);
      return data; // put() returns no attributes, so return the created item
    } catch (error) {
      const record = { id: data.id, condition: 'ConditionExpression', outcome: 'failure', error: error.message };
      auditLog('create', record);
      throw error;
    }
  },

  async update(id, data, params) {
    const getParams = {
      TableName: process.env.DYNAMO_TABLE,
      Key: { id },
    };
    // Read the previous state first so before/after values can be logged
    const previous = await dynamoDb.get(getParams).promise().then(res => res.Item).catch(() => null);
    const updateParams = {
      TableName: process.env.DYNAMO_TABLE,
      Key: { id },
      UpdateExpression: 'set #attr = :val',
      ExpressionAttributeNames: { '#attr': 'attr' },
      ExpressionAttributeValues: { ':val': data.attr },
      ReturnValues: 'ALL_OLD',
    };
    try {
      const result = await dynamoDb.update(updateParams).promise();
      const record = {
        id,
        userId: params?.account?.userId,
        previous: previous || null,
        current: { id, attr: data.attr },
        condition: 'UpdateCheck',
        outcome: 'success',
      };
      auditLog('update', record);
      return result;
    } catch (error) {
      const record = {
        id,
        previous: previous || null,
        condition: 'UpdateCheck',
        outcome: 'failure',
        error: error.message,
      };
      auditLog('update', record);
      throw error;
    }
  },

  async remove(id, params) {
    const previous = await dynamoDb.get({ TableName: process.env.DYNAMO_TABLE, Key: { id } }).promise().then(res => res.Item).catch(() => null);
    try {
      await dynamoDb.delete({ TableName: process.env.DYNAMO_TABLE, Key: { id } }).promise();
      const record = { id, userId: params?.account?.userId, previous, outcome: 'success' };
      auditLog('remove', record);
      return previous ?? { id }; // Feathers convention: return the removed record
    } catch (error) {
      const record = { id, previous, outcome: 'failure', error: error.message };
      auditLog('remove', record);
      throw error;
    }
  },
};

app.use('/resources', service);
Key remediation practices
- Log at the hook level for all CRUD operations to capture both successful and failed attempts.
- Include correlation identifiers (e.g., request ID) to trace a single request across services.
- Redact sensitive fields before writing logs.
- Centralize logs to a durable, searchable backend to ensure persistence beyond runtime.
- Instrument middleware to capture unhandled errors that may otherwise produce incomplete traces.
These patterns align with how middleBrick scans and reports on logging sufficiency, emphasizing structured, contextual audit trails rather than default console output.