Using Python Context Managers for Structured Logging in AWS

One of the most valuable debugging tools in a distributed or cloud-based microservices environment is structured logging. In AWS, where we rely on CloudWatch or similar tools to aggregate and search our logs, it’s especially important that our log entries are not only readable by humans, but also easily parsed and filterable by machines.

When developing APIs and backend services, I often want to know:

  • When did this operation start?
  • How long did it take?
  • What was its status at various points?
  • What contextual data (request ID, customer ID, etc.) can I attach?

While Python’s logging framework is flexible, I found myself repeating a lot of code to capture start/end times and to emit structured JSON logs. So, I decided to make this easier by wrapping everything in a simple context manager.

The Solution: log_it

Here’s the utility I came up with:

import contextlib
import json
from datetime import datetime

class LogEntry:
    ...  # Class definition omitted for brevity.
    # For the full source see: https://gist.github.com/brettschneider/c78359fe8c04ba207fed3ee8b20558a9

@contextlib.contextmanager
def log_it(msg: str, **kwargs):
    entry = LogEntry(msg, **kwargs)
    entry.begin()
    try:
        yield entry
    finally:
        # Emit the "end" entry even if the wrapped block raises
        entry.complete()
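Since the class body is elided above, here is a minimal sketch of what LogEntry might look like, reconstructed from the shape of the sample output later in this post. This is an assumption for illustration only; the authoritative version is in the linked gist.

```python
import json
from datetime import datetime

class LogEntry:
    """Minimal sketch; see the linked gist for the full implementation."""

    def __init__(self, message: str, **kwargs):
        self.message = message
        self.fields = kwargs   # static context (customer_id, request_id, ...)
        self.start = None      # set when begin() is called

    def _emit(self, status: str, **extra) -> None:
        # Build one structured record and print it as a single JSON line
        record = {"timestamp": datetime.now().isoformat(),
                  "message": self.message,
                  **self.fields,
                  "status": status,
                  **extra}
        if status != "begin" and self.start is not None:
            record["execution_time"] = str(datetime.now() - self.start)
        print(json.dumps(record))

    def begin(self) -> None:
        self.start = datetime.now()
        self._emit("begin")

    def __call__(self, **kwargs) -> None:
        # Dynamic fields added mid-block, e.g. log(api_status=status)
        self._emit("interim", **kwargs)

    def complete(self) -> None:
        self._emit("end")
```

Making the entry callable is what allows the `log(api_status=status)` style of interim logging shown in the usage example below.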

How does it work?

  • Context Manager: By using the with statement, we automatically emit a “begin” log entry at the start and an “end” log entry (with execution time) at the end of the code block.
  • Structured JSON: Log entries are always valid JSON, making them easy to index and search in CloudWatch.
  • Contextual Data: You can add any key-value pairs to your log entries, either up front (customer_id, request_id, etc.) or dynamically during execution.
  • Timing: Execution time is automatically tracked and included.

Example Usage

Here’s how you might use it in a FastAPI endpoint handler:

@app.get("/")
def api_status():
    with log_it("API Status", customer_id="n/a") as log:
        status = get_api_status()
        log(api_status=status)
        return status

Sample Output:

{"timestamp": "2025-07-30T22:39:02.113608", "message": "API Status", "customer_id": "n/a", "status": "begin"}
{"timestamp": "2025-07-30T22:39:02.113830", "message": "API Status", "customer_id": "n/a", "status": "interim", "api_status": {"status": "no meta"}, "execution_time": "0:00:00.000243"}
{"timestamp": "2025-07-30T22:39:02.113861", "message": "API Status", "customer_id": "n/a", "status": "end", "execution_time": "0:00:00.000274"}

Why Use This Pattern?

  • Consistency: All your logs have the same structure, making downstream processing much easier.
  • Automatic Timing: No need to manually subtract timestamps or remember to log durations.
  • Flexibility: Add any extra fields you need at any point in your logic.

AWS-Friendly

Because every log is JSON and includes timestamps, status, and custom fields, it’s trivial to configure CloudWatch Logs Insights to search or aggregate across these entries—whether you’re debugging a production issue or generating dashboards.
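For example, a Logs Insights query along these lines (field names taken from the sample output above) would pull the completion entries for a given operation, newest first:

```
fields @timestamp, message, status, execution_time
| filter status = "end"
| sort @timestamp desc
| limit 20
```

Because CloudWatch auto-discovers fields in JSON log lines, the custom keys you attach (customer_id, request_id, and so on) are queryable the same way.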

Next Steps

This is just a starting point. You could extend this pattern to integrate with Python’s built-in logging module (for log levels), send logs to external systems, or add error/exception handling for even richer context.
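As a hedged sketch of the first of those ideas, here is one way the JSON entries could be routed through the standard logging module instead of printed directly. The logger name and handler setup here are assumptions, not part of the original pattern:

```python
import json
import logging
from datetime import datetime

# One-time setup: a logger whose records are already-serialized JSON lines
logger = logging.getLogger("structured")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))  # message is the JSON
logger.addHandler(handler)

def emit(record: dict, level: int = logging.INFO) -> None:
    """Serialize a structured record and hand it to the logging framework."""
    record.setdefault("timestamp", datetime.now().isoformat())
    logger.log(level, json.dumps(record))

emit({"message": "API Status", "status": "begin"})
```

LogEntry could then call a helper like this instead of print, gaining log levels and the ability to fan out to any configured handler.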

If you find yourself writing a lot of “start/end” logging code in your AWS Python projects, give this context manager a try!

The full source code for this logging pattern can be found at this GitHub Gist: https://gist.github.com/brettschneider/c78359fe8c04ba207fed3ee8b20558a9