
How to Send OpenTelemetry Logs from Python in 5 Minutes

6 min read · By The LogClaw Team

OpenTelemetry is the industry standard for collecting telemetry data, but getting Python logging wired up correctly can be confusing. This guide gets you from zero to structured OTEL logs in under 5 minutes.

Why OpenTelemetry for Logging?

Python's built-in logging module is great for local development, but it falls short in production distributed systems. Logs end up in files, stdout gets lost in container orchestrators, and correlating a request across microservices becomes a manual nightmare.

OpenTelemetry solves this by providing a vendor-neutral protocol (OTLP) for shipping structured logs to any compatible backend. Your logs carry trace context, resource attributes, and structured metadata that make debugging distributed systems dramatically easier.

Step 1: Install the Packages

You need three packages: the OTEL SDK, the OTLP log exporter, and (optionally) the logging instrumentation, which stamps trace and span IDs onto stdlib log records. Note that the actual bridge from Python's stdlib logging to OTEL is the SDK's LoggingHandler, which you'll wire up in Step 2.

pip install opentelemetry-sdk \
    opentelemetry-exporter-otlp-proto-http \
    opentelemetry-instrumentation-logging

Step 2: Configure the Exporter

Add this setup code at the top of your application, before any logging calls. The key is configuring the OTLP exporter with your endpoint inline — no environment variables needed.

import logging
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk.resources import Resource

# 1. Define your service identity
resource = Resource.create({
    "service.name": "my-python-app",
    "service.version": "1.0.0",
    "deployment.environment": "production",
})

# 2. Create the OTLP exporter with inline config
# 2. Create the OTLP exporter with inline config.
# When you pass endpoint= directly, it is used as-is, so include the
# /v1/logs path (the OTEL_EXPORTER_OTLP_ENDPOINT env var, by contrast,
# appends it for you).
exporter = OTLPLogExporter(
    endpoint="https://otel.logclaw.ai/v1/logs",               # Self-hosted: http://localhost:4318/v1/logs
    headers={"x-logclaw-api-key": "lc_proj_your_key_here"},   # Self-hosted: remove this line
)

# 3. Wire it up
provider = LoggerProvider(resource=resource)
provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

# 4. Attach to Python's stdlib logging
handler = LoggingHandler(level=logging.INFO, logger_provider=provider)
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)
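One gotcha worth guarding against: BatchLogRecordProcessor buffers records in memory and exports them on an interval, so logs emitted just before the process exits can be dropped. A minimal sketch that flushes on interpreter exit (the helper name is ours, not part of the SDK):

```python
import atexit

def install_shutdown_hook(provider):
    # LoggerProvider.shutdown() flushes anything still queued in the
    # batch processor and stops the background export thread.
    # atexit.register returns the callable it registered.
    return atexit.register(provider.shutdown)

# install_shutdown_hook(provider)  # call once after the setup above
```

Call it once after the setup code; the last few log lines before a clean shutdown then make it to your backend instead of dying in the buffer.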

Step 3: Use Standard Python Logging

The beauty of the OTEL logging bridge is that you don't change how you write logs. Use Python's standard logging module exactly as you always have. The OTEL handler intercepts the log records and ships them over OTLP.

logger = logging.getLogger(__name__)

logger.info("User signed up", extra={"user_id": "usr_123", "plan": "pro"})
logger.warning("Rate limit approaching", extra={"endpoint": "/api/search", "usage": 0.85})
logger.error("Payment failed", extra={"order_id": "ord_456", "reason": "card_declined"})

The extra dictionary becomes structured attributes on your OTEL log record. This means you can filter and query by user_id, order_id, or any other attribute in your observability backend — no regex parsing required.
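Exceptions work the same way. logger.exception (or any call with exc_info=True) attaches the traceback to the log record, and the OTEL handler ships it along with the record, so you get full stack traces in your backend without any string formatting. A small sketch — the charge function and logger name are ours, purely for illustration:

```python
import logging

logger = logging.getLogger("payments")  # hypothetical logger name

def charge(order_id: str) -> None:
    try:
        raise RuntimeError("card_declined")  # simulate a gateway failure
    except RuntimeError:
        # logger.exception logs at ERROR level and attaches exc_info;
        # the OTEL handler carries the traceback with the record
        logger.exception("Payment failed", extra={"order_id": order_id})
```

Because the traceback travels as structured data rather than a multi-line string, it stays attached to the same record as order_id when you query it.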

Step 4: Verify It Works

Send a test log and verify it arrives at your backend. If you're using LogClaw, you can also verify with a quick cURL:

curl -i https://otel.logclaw.ai/v1/logs \
  -H "Content-Type: application/json" \
  -H "x-logclaw-api-key: lc_proj_your_key_here" \
  -d '{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"test"}}]},"scopeLogs":[{"logRecords":[{"body":{"stringValue":"hello from curl"},"severityText":"INFO"}]}]}]}'

A 200 OK response confirms your endpoint and API key are working. Your Python logs should appear in the LogClaw dashboard within seconds.
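If you'd rather stay in Python, the same minimal OTLP/JSON payload can be built and posted with the standard library. A sketch — endpoint and API key are the same placeholders as above:

```python
import json
import urllib.request

# The same minimal OTLP/JSON payload the cURL example sends.
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "test"}}
        ]},
        "scopeLogs": [{"logRecords": [
            {"body": {"stringValue": "hello from python"},
             "severityText": "INFO"}
        ]}],
    }]
}

def build_request(endpoint: str, api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-logclaw-api-key": api_key,
        },
    )

# urllib.request.urlopen(build_request("https://otel.logclaw.ai/v1/logs",
#                                      "lc_proj_your_key_here"))
```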

Framework-Specific Guides

The setup above works for any Python application. For popular frameworks, you can add auto-instrumentation to capture HTTP request context automatically:

  • Django: Add opentelemetry-instrumentation-django and call DjangoInstrumentor().instrument() in your wsgi.py.
  • Flask: Add opentelemetry-instrumentation-flask and call FlaskInstrumentor().instrument_app(app).
  • FastAPI: Add opentelemetry-instrumentation-fastapi and call FastAPIInstrumentor.instrument_app(app).
  • Celery: Add opentelemetry-instrumentation-celery to correlate task logs with the originating request trace.
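As one concrete sketch of the Flask flavor (assuming the packages from the bullet above are installed; the app-factory shape is ours):

```python
def create_app():
    # Imports are deferred so this sketch only needs Flask and the
    # instrumentation package when the factory is actually called.
    from flask import Flask
    from opentelemetry.instrumentation.flask import FlaskInstrumentor

    app = Flask(__name__)
    # Adds a span per HTTP request; logs emitted while handling the
    # request pick up the active trace context automatically.
    FlaskInstrumentor().instrument_app(app)
    return app
```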

See the full framework guides in our quickstart documentation.

What Happens After Ingestion?

Once your logs are flowing, LogClaw's AI engine starts working immediately. It baselines your normal error rates, detects anomalies using statistical analysis, and automatically creates incident tickets when something goes wrong. No dashboards to configure, no alert thresholds to set. Your team gets a detailed Jira or Linear ticket with the full context — affected services, error patterns, and suggested root cause.

Start sending logs in 5 minutes

Follow the full quickstart guide with examples for Python, Node.js, Go, Java, .NET, Ruby, and Rust.