
Insufficient Logging in Sails with DynamoDB

Insufficient Logging in Sails with DynamoDB — how this specific combination creates or exposes the vulnerability

Sails is a Node.js web framework that encourages convention-over-configuration. When using Amazon DynamoDB as a persistence layer, developers often integrate Sails models via an ORM/ODM adapter such as sails-dynamodb. Insufficient logging in this context means that application events—such as create, update, delete, and authorization failures—are not reliably recorded with enough context to support detection, investigation, or audit.

With DynamoDB, operations are typically performed through AWS SDK calls (e.g., get, put, update, delete, query, scan). If Sails services or controllers do not explicitly log request metadata (user or service identity, tenant, target identifier, HTTP method, source IP), timestamps, and outcomes (success or error), security-relevant events remain invisible or hard to correlate. This matters most when authorization is enforced at the application layer rather than the database layer: without logs, exploitation of broken object-level authorization (BOLA/IDOR) may leave no trace.

DynamoDB itself emits CloudWatch metrics and CloudTrail events for management-plane operations (e.g., CreateTable, UpdateTable), but data-plane item-level operations are not logged by default; CloudTrail data events can be enabled for DynamoDB, yet they record API activity without application context. If Sails applications do not add their own logging around DynamoDB SDK calls, there is no durable record of who accessed which item, with what input, and with what result. In a black-box scan, this shows up as missing audit trails for sensitive endpoints, and in practice it delays detection of tampering or credential misuse. For example, an attacker leveraging a BOLA/IDOR vulnerability to enumerate or modify other users’ records may leave no identifiable log entry if the Sails layer does not log the access attempt.
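To make such access attempts visible, the application-layer authorization check itself can emit a decision entry whether it passes or fails. A minimal sketch, assuming an ownership field on each record; the function names and `event` label are illustrative:

```javascript
// Build one structured entry per authorization decision so BOLA/IDOR
// probing shows up as a run of denied entries tied to one requestId.
function authzDecisionEntry(requestContext, resource, allowed) {
  return {
    event: 'authz.decision',
    requestId: requestContext.id,
    userId: requestContext.userId,
    resourceType: resource.type,
    resourceId: resource.id,
    ownerId: resource.ownerId,
    allowed,
    timestamp: new Date().toISOString()
  };
}

// Ownership check that always logs first, then enforces.
function checkOwnership(requestContext, resource, log) {
  const allowed = resource.ownerId === requestContext.userId;
  log(authzDecisionEntry(requestContext, resource, allowed));
  return allowed;
}

module.exports = { authzDecisionEntry, checkOwnership };
```

The key property is that denials are logged with the same correlation ID as the request, so an enumeration attempt becomes a searchable pattern rather than silence.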

Additionally, insufficient logging can hide insecure data handling. Without logging request validation outcomes (e.g., which fields were checked, which constraints failed), developers lose visibility into malformed payloads that may indicate injection attempts or broken client behavior. Similarly, if responses from DynamoDB (such as conditional check failures or throttling errors) are not surfaced in structured logs, incident responders may miss patterns of abuse or misconfiguration. middleBrick’s checks for Data Exposure and Input Validation highlight these gaps by analyzing runtime behavior and spec contracts, noting where outcomes are not recorded.
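One way to surface those DynamoDB error responses is to map well-known error names to log levels before emitting, so condition failures and throttling stand out in alerting. The mapping below is a suggested policy for this article’s examples, not an AWS convention:

```javascript
// Map DynamoDB error names to log levels so abuse patterns
// (e.g., bursts of condition failures) are easy to alert on.
const ERROR_LEVELS = {
  ConditionalCheckFailedException: 'warn',        // possible tampering or stale client
  ProvisionedThroughputExceededException: 'warn', // throttling; possible scraping
  ValidationException: 'warn',                    // malformed input reached the SDK
  ResourceNotFoundException: 'error'              // misconfiguration (missing table)
};

function classifyDynamoError(err) {
  return {
    level: ERROR_LEVELS[err.name] || 'error',
    entry: { event: 'dynamo.error', errorName: err.name, message: err.message }
  };
}

module.exports = { classifyDynamoError };
```

A logging wrapper can then call `logger[level](entry)` so that the structured entry and its severity are decided in one place.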

The combination of Sails’ model-layer abstractions and DynamoDB’s managed, schema-less nature can unintentionally encourage minimal instrumentation. Because DynamoDB does not enforce schemas or row-level audit logs at the service level, responsibility shifts to the application. If Sails does not explicitly log key attributes—record keys, versioning fields (e.g., createdAt, updatedAt), and authorization decisions—organizations lack the telemetry required to meet expectations such as the OWASP API Security Top 10 and SOC 2 logging controls.

To address this, use the middleBrick CLI to scan unauthenticated attack surfaces and generate findings for Insufficient Logging, then follow the remediation guidance it provides. The scanner does not modify systems; it surfaces where logging is missing or unstructured so that developers can add the necessary telemetry.

DynamoDB-Specific Remediation in Sails — concrete code fixes

Remediation centers on instrumenting Sails controllers and services to emit structured, searchable logs for every interaction with DynamoDB. Logs should include consistent correlation identifiers, user/service context, request parameters (redacted for secrets), operation type, DynamoDB keys, condition check results, and response metadata. Below are concrete patterns you can adopt within a Sails project.
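As a target to aim for, a single entry following these conventions might look like the following (all field names and values are illustrative, matching the examples in this section):

```json
{
  "timestamp": "2024-01-15T12:34:56.789Z",
  "requestId": "7c9e6679-7425-40de-944b-e07fc1f90ae7",
  "userId": "user-123",
  "operation": "update",
  "table": "records",
  "key": { "id": "rec-42" },
  "statusCode": "ConditionalCheckFailedException",
  "latencyMs": 18,
  "error": "The conditional request failed"
}
```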

1. Centralized logger setup

Use a structured logger so logs are easy to index and query. Example with winston:

// config/log.js
// Sails 1.x lets you supply a custom logger via the `custom` key.
const winston = require('winston');

const customLogger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()]
});

module.exports.log = {
  custom: customLogger,
  level: 'info',
  // Let the custom logger handle object serialization.
  inspect: false
};

2. Service wrapper for DynamoDB operations

Create a service that wraps DynamoDB calls and logs inputs and outcomes. This keeps logging consistent across controllers.

// api/services/DynamoLogger.js
const AWS = require('aws-sdk');
const winston = require('winston');

const dynamo = new AWS.DynamoDB.DocumentClient();

// One structured JSON logger shared by every wrapper below. The
// timestamp format adds a `timestamp` field to each entry.
const logger = winston.createLogger({
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()]
});

module.exports = {
  async get(params, requestContext) {
    const startTime = Date.now();
    try {
      const data = await dynamo.get(params).promise();
      logger.info({
        requestId: requestContext.id,
        userId: requestContext.userId,
        operation: 'get',
        table: params.TableName,
        key: params.Key,
        statusCode: 'success',
        latencyMs: Date.now() - startTime,
        item: data.Item ? this.sanitize(data.Item) : null
      });
      return data;
    } catch (err) {
      logger.warn({
        requestId: requestContext.id,
        userId: requestContext.userId,
        operation: 'get',
        table: params.TableName,
        key: params.Key,
        statusCode: err.name,
        latencyMs: Date.now() - startTime,
        error: err.message
      });
      throw err;
    }
  },

  async query(params, requestContext) {
    const startTime = Date.now();
    try {
      const data = await dynamo.query(params).promise();
      logger.info({
        requestId: requestContext.id,
        userId: requestContext.userId,
        operation: 'query',
        table: params.TableName,
        indexName: params.IndexName,
        keyConditionExpression: params.KeyConditionExpression,
        statusCode: 'success',
        latencyMs: Date.now() - startTime,
        count: data.Count,
        scannedCount: data.ScannedCount
      });
      return data;
    } catch (err) {
      logger.warn({
        requestId: requestContext.id,
        userId: requestContext.userId,
        operation: 'query',
        table: params.TableName,
        indexName: params.IndexName,
        keyConditionExpression: params.KeyConditionExpression,
        statusCode: err.name,
        latencyMs: Date.now() - startTime,
        error: err.message
      });
      throw err;
    }
  },

  async put(params, requestContext) {
    const startTime = Date.now();
    try {
      const data = await dynamo.put(params).promise();
      // put takes an Item, not a Key; log a sanitized copy so
      // secrets never reach the logs.
      logger.info({
        requestId: requestContext.id,
        userId: requestContext.userId,
        operation: 'put',
        table: params.TableName,
        item: params.Item ? this.sanitize(params.Item) : null,
        statusCode: 'success',
        latencyMs: Date.now() - startTime
      });
      return data;
    } catch (err) {
      logger.warn({
        requestId: requestContext.id,
        userId: requestContext.userId,
        operation: 'put',
        table: params.TableName,
        item: params.Item ? this.sanitize(params.Item) : null,
        statusCode: err.name,
        latencyMs: Date.now() - startTime,
        error: err.message
      });
      throw err;
    }
  },

  async update(params, requestContext) {
    const startTime = Date.now();
    try {
      const data = await dynamo.update(params).promise();
      logger.info({
        requestId: requestContext.id,
        userId: requestContext.userId,
        operation: 'update',
        table: params.TableName,
        key: params.Key,
        updateExpression: params.UpdateExpression,
        statusCode: 'success',
        latencyMs: Date.now() - startTime
      });
      return data;
    } catch (err) {
      // Covers ConditionalCheckFailedException, a strong tamper/abuse
      // signal worth alerting on.
      logger.warn({
        requestId: requestContext.id,
        userId: requestContext.userId,
        operation: 'update',
        table: params.TableName,
        key: params.Key,
        updateExpression: params.UpdateExpression,
        statusCode: err.name,
        latencyMs: Date.now() - startTime,
        error: err.message
      });
      throw err;
    }
  },

  sanitize(item) {
    // Remove or hash sensitive fields before logging
    if (!item) return item;
    const { password, ssn, token, ...safe } = item;
    return safe;
  }
};

3. Example controller usage with correlation and user context

Ensure each request carries an ID and user identifier, then pass them through to the service.

// api/controllers/RecordController.js
const { v4: uuidv4 } = require('uuid');
const DynamoLogger = require('../services/DynamoLogger');

// Reuse an upstream request ID when available; mint one otherwise.
function buildRequestContext(req) {
  return {
    id: req.id || req.headers['x-request-id'] || uuidv4(),
    userId: req.user ? req.user.id : 'anonymous'
  };
}

module.exports = {
  async findOne(req, res) {
    const { id } = req.params;
    const requestContext = buildRequestContext(req);
    try {
      const result = await DynamoLogger.get(
        {
          TableName: process.env.DYNAMO_TABLE,
          Key: { id }
        },
        requestContext
      );
      // A missing item is not an error: DocumentClient.get resolves
      // with no Item property, so check for it explicitly.
      if (!result.Item) {
        return res.notFound();
      }
      return res.ok(result.Item);
    } catch (err) {
      return res.serverError(err);
    }
  },

  async update(req, res) {
    const { id } = req.params;
    const body = req.body;
    const requestContext = buildRequestContext(req);
    const params = {
      TableName: process.env.DYNAMO_TABLE,
      Key: { id },
      // Fail (and log) instead of silently upserting a missing record.
      ConditionExpression: 'attribute_exists(id)',
      UpdateExpression: 'set #status = :status, updatedAt = :updatedAt',
      ExpressionAttributeNames: { '#status': 'status' },
      ExpressionAttributeValues: {
        ':status': body.status,
        ':updatedAt': new Date().toISOString()
      },
      ReturnValues: 'UPDATED_NEW'
    };
    try {
      const result = await DynamoLogger.update(params, requestContext);
      return res.ok(result.Attributes);
    } catch (err) {
      if (err.code === 'ConditionalCheckFailedException') {
        return res.notFound();
      }
      return res.serverError(err);
    }
  }
};

4. Correlation across async flows

When operations spawn async work (e.g., via queues or background jobs), propagate the request context so logs remain traceable end to end. Also include the correlation ID in error responses returned to the caller, so that client-reported failures—including DynamoDB condition check failures—can be matched to server-side log entries.
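A minimal sketch of that propagation: carry the context object alongside the job payload instead of relying on ambient state. The `queue.push` API here is a stand-in for whatever worker library you use:

```javascript
// Carry requestContext with the job payload so the worker's logs share
// the same requestId as the HTTP request that enqueued the work.
function enqueueWithContext(queue, jobName, payload, requestContext) {
  return queue.push({
    jobName,
    payload,
    context: { id: requestContext.id, userId: requestContext.userId }
  });
}

// Worker side: restore the context before logging anything.
function handleJob(job, log) {
  log({
    event: 'job.started',
    jobName: job.jobName,
    requestId: job.context.id,
    userId: job.context.userId
  });
  // ...perform the DynamoDB work here, passing job.context through
  // to the same logging wrappers used by the HTTP path...
}

module.exports = { enqueueWithContext, handleJob };
```

With this shape, a single `requestId` query in your log store returns the HTTP request, the enqueue, and the worker's DynamoDB operations as one trace.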

5. Complementing middleBrick findings

After running a scan with the middleBrick CLI, use its output to prioritize which endpoints lack sufficient logging. The tool’s findings map to the OWASP API Security Top 10 and can guide where to add the telemetry above. middleBrick does not fix or block; it provides findings and remediation guidance to implement these patterns.

Frequently Asked Questions

Does middleBrick fix insufficient logging issues in Sails when using DynamoDB?
No. middleBrick detects and reports insufficient logging and provides remediation guidance. It does not modify application code or infrastructure.
What additional telemetry is recommended for DynamoDB-backed Sails APIs?
Log correlation identifiers, user/service context, DynamoDB keys, operation type, condition check outcomes, sanitized item representations (excluding secrets), latency, and error names. This supports detection of BOLA/IDOR, data exposure, and input validation issues.