An Introduction to Using OpenTelemetry & Python Together
By Community / Developer
May 22, 2023
This post was written by Mercy Kibet, a full-stack developer with a knack for learning and writing about new and intriguing tech stacks.
In today’s digital world, software applications are becoming increasingly complex and distributed, making it more challenging than ever to diagnose and troubleshoot issues when they arise. OpenTelemetry, a powerful observability framework, helps you and your operations teams gain visibility into your applications and infrastructure, enabling you to identify and resolve issues quickly.
OpenTelemetry has become essential for any organization looking to build and maintain high-performing software applications by providing a standardized approach to collecting, processing, and exporting telemetry data. This post will introduce you to using OpenTelemetry with Python.
OpenTelemetry in Python
OpenTelemetry is an open-source observability framework that collects telemetry data from your application and infrastructure. Telemetry data include metrics, traces, and logs.
OpenTelemetry is vendor-agnostic, meaning you can use it with multiple monitoring and logging tools. It supports different programming languages, including Python. Its versatility and flexibility make it a powerful observability tool.
Below is a general flow of how OpenTelemetry works in Python (a minimal code sketch follows the list):
- Instrumentation: The first step is to instrument the code to generate telemetry data. You can do this using the OpenTelemetry SDK.
- Data collection: Once you’ve instrumented the code, the OpenTelemetry agents or SDKs collect telemetry data, such as traces, metrics, and logs, from the instrumented code.
- Data processing: OpenTelemetry collectors receive telemetry data from the agents or SDKs and perform additional processing, such as filtering, aggregation, and enrichment. The processed data is then sent to the back end for storage and analysis.
- Data export: The final step is to export the telemetry data to a back end, such as a monitoring or logging system, for visualization, analysis, and alerting. OpenTelemetry supports integration with various back ends, including popular monitoring tools such as Prometheus, Grafana, and Jaeger, among others.
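To make these stages concrete, here is a minimal sketch of the same pipeline in code, assuming the opentelemetry-sdk package is installed (installation is covered later in this post). The console exporter, service name, and span name are illustrative choices, not requirements:

from opentelemetry import trace
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# data processing and export: batch spans and write them to the console
provider = TracerProvider(resource=Resource(attributes={SERVICE_NAME: "demo-service"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# instrumentation and data collection: create a span around a unit of work
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("demo-span"):
    print("doing some work")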
Automatic instrumentation
Automatic instrumentation is the ability of the OpenTelemetry framework to instrument applications and collect telemetry data from them without manual configuration or code changes. In Python, you can do this using OpenTelemetry’s instrumentation libraries and its integrations with popular frameworks like FastAPI, Django, and Flask.
The instrumentation libraries use techniques such as code modification, bytecode injection, and function wrapping. This allows the libraries to intercept application code and collect telemetry data, including traces, metrics, and logs, without altering application code.
Automatic instrumentation eliminates manual configuration and code changes, and it provides a consistent and standardized way to collect telemetry data across different applications and frameworks.
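As an illustration, here is a hedged sketch of what this looks like for a FastAPI app instrumented programmatically, assuming the opentelemetry-instrumentation-fastapi package is installed (we install the instrumentation libraries later in this post); this is roughly what the automatic tooling wires up for you:

from fastapi import FastAPI
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

app = FastAPI()

# wrap the app so every incoming request automatically produces spans
FastAPIInstrumentor.instrument_app(app)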
Manual instrumentation
Unlike automatic instrumentation, manual instrumentation requires you to modify your application’s code to collect the telemetry data you care about. You’ll use manual instrumentation when automatic instrumentation isn’t feasible or doesn’t provide enough coverage. For example, if an application uses a custom protocol or framework that the automatic instrumentation libraries don’t support, you may need to instrument the application yourself to collect telemetry data.
While manual instrumentation requires more effort, it gives you finer-grained control over the telemetry you collect and lets you capture data that the automatic instrumentation libraries don’t cover.
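For example, here is a short, illustrative sketch of that extra control: a custom span around business logic, with an application-specific attribute and event (the function, span, and attribute names are made up for illustration):

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def process_order(order_id: int) -> None:
    # a custom span around logic that no framework integration knows about
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)  # application-specific attribute
        span.add_event("order validated")  # custom event recorded on the span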
How to use OpenTelemetry in Python
Now we’ll show you how to use OpenTelemetry with Python. We’ll create a simple CRUD API using FastAPI.
Prerequisites
To follow along, you’ll need Python 3.10+ (the example code uses newer type-hint syntax such as list[dict[str, str | int]]), Jaeger, and Docker installed and configured.
Start by creating a folder, changing into it, and setting up a virtual environment. Then activate the virtual environment.
mkdir opentelemetry-python
cd opentelemetry-python
#creating a virtual environment
python3 -m venv env
#activating virtual environment for mac and linux
source env/bin/activate
#activating for windows
env\Scripts\activate
Next, install FastAPI and Uvicorn, then freeze the dependencies to a requirements.txt file.
pip install fastapi uvicorn[standard]
#freezing the dependencies
pip freeze > requirements.txt
Then create a simple to-do list application where you can create, read, update, and delete (CRUD) to-dos. We’ll use a list or array to store our to-dos. Basically, the endpoints will manipulate the array. Below is an example code with all the endpoints in a main.py file.
from fastapi import FastAPI
from pydantic import BaseModel


class Todo(BaseModel):
    id: int
    title: str
    description: str


app = FastAPI()

todos: list[dict[str, str | int]] = [
    {
        "id": 1,
        "title": "My first todo item",
        "description": "This is what I'll do first before moving on"
    },
    {
        "id": 2,
        "title": "My second todo item",
        "description": "This is what I'll do second before moving on"
    }
]


@app.get("/", tags=["Root"])
def test_route() -> dict:
    return {"hello": "world"}


@app.get("/todos", tags=["Todos"])
def get_todos() -> dict:
    return {"data": todos}


@app.get("/todos/{id}", tags=["Todos"])
def get_todo_with_id(id: int) -> list[dict]:
    return list(filter(lambda todo: todo["id"] == id, todos))


@app.post("/todos", tags=["Todos"])
def create_todo(todo: Todo):
    # store the todo as a dict so it matches the existing entries
    todos.append(todo.dict())
    return {"data": todos}


@app.put("/todos/{id}", tags=["Todos"])
def update_todo(id: int, body: dict):
    for todo in todos:
        if int(todo["id"]) == id:
            todo["title"] = body["title"]
            todo["description"] = body["description"]
            return {"data": f"Todo with id {id} has been updated"}
    return {"data": f"Todo with id {id} not found!"}


@app.delete("/todos/{id}", tags=["Todos"])
def delete_todo(id: int):
    for todo in todos:
        if int(todo["id"]) == id:
            todos.remove(todo)
            return {"data": "Todo deleted!"}
    return {"data": f"Todo with id {id} not found!"}
Run your application and navigate to localhost:8000/docs. Since FastAPI ships with OpenAPI (Swagger UI) documentation, you can query your endpoints there to ensure they work as expected.
uvicorn main:app --reload
Now spin up Jaeger. You’re using Jaeger to collect and visualize your app’s traces; once it’s running, you can confirm by visiting localhost:16686.
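If you’re running Jaeger locally with Docker (as listed in the prerequisites), one common way to start the all-in-one image with OTLP enabled looks like the following; the image tag and port mappings are typical defaults, so adjust them to your setup:

docker run -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest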
You’ll need to instrument your code to push telemetry data back to Jaeger to see what your application is doing at run time. That way, your app’s service name will appear in the services drop-down menu. To do this, install OpenTelemetry’s Python API and SDK.
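Both packages are available on PyPI:

pip install opentelemetry-api opentelemetry-sdk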
In the next section, we’ll explore two types of instrumentation: automatic and manual.
OpenTelemetry automatic instrumentation in Python
Since you’re using FastAPI, you can leverage OpenTelemetry’s automatic instrumentation. First, install opentelemetry-distro using pip.
pip install opentelemetry-distro
Installing opentelemetry-distro gives you access to opentelemetry-bootstrap, which you’ll use to install the instrumentation libraries, including the one for FastAPI. To see the list of packages it detects for your environment, use the following command:
opentelemetry-bootstrap -a requirements
You should see the FastAPI instrumentation package in the output. Locate it in the list, then copy and install it using pip. Alternatively, you can add it to your requirements.txt and run pip install again.
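For reference, the package in question is opentelemetry-instrumentation-fastapi; the pinned version in your bootstrap output may differ, so install whatever the tool shows, or let it install everything it detected:

pip install opentelemetry-instrumentation-fastapi
#or install every detected instrumentation package automatically
opentelemetry-bootstrap -a install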
Next, you’ll need to export your telemetry data (traces) to Jaeger. To do that, install the OTLP exporter, since Jaeger natively accepts the OpenTelemetry Protocol (OTLP).
pip install opentelemetry-exporter-otlp-proto-grpc
Finally, run your application by wrapping it with opentelemetry-instrument and declaring a service name for your application so that Jaeger can identify it and collect its traces.
opentelemetry-instrument --service_name my-app uvicorn main:app
Alternatively, you can set your exporters to the console, and your traces and metrics will be output to your terminal in JSON format.
opentelemetry-instrument --traces_exporter console --metrics_exporter console --service_name my_app uvicorn main:app
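You can also supply the same settings through standard OpenTelemetry environment variables instead of command-line flags, which is convenient in containerized deployments; the values below are illustrative:

export OTEL_SERVICE_NAME=my-app
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
opentelemetry-instrument uvicorn main:app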
OpenTelemetry manual instrumentation in Python
Here, we’ll modify the code to include observability. As noted above, automatic instrumentation happens at the boundary of your application’s inbound and outbound traffic, which means you have limited visibility into exactly what’s happening inside. With manual instrumentation, you can create spans that give you a peek into what’s happening inside an endpoint, so you can troubleshoot errors more easily.
Since you’ve already installed the required packages, you can now import them and initialize the provider. For this case, you’re collecting traces, so you can set up the trace provider.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# identify this service in Jaeger by name
resource = Resource(attributes={
    SERVICE_NAME: "my-app"
})

# export spans over OTLP/gRPC (Jaeger's OTLP endpoint listens on localhost:4317 by default)
otlp_exporter = OTLPSpanExporter()

provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(otlp_exporter)
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
Next, initialize a tracer that you’ll use to create a span. A span represents a unit of work that’s part of a larger distributed system. You can use a context manager or a decorator to create one, and you can use it to trace the path of a request as it flows through multiple services in the system.
tracer = trace.get_tracer(__name__)
You use the get_tracer() method to create a tracer instance. Next, you’ll create a span using a context manager, as shown below, where you’ll call the start_as_current_span() method on the tracer instance you created above and pass the “get_todos_span” name.
@app.get("/todos", tags=["Todos"])
def get_todos() -> dict:
    with tracer.start_as_current_span("get_todos_span"):
        return {"data": todos}
Alternatively, instead of using the context manager, you can apply start_as_current_span() as a decorator to start your span with the name "get_todos_span":
@app.get("/todos", tags=["Todos"])
@tracer.start_as_current_span("get_todos_span")
def get_todos() -> dict:
    return {"data": todos}
With these steps in place, your FastAPI application should send traces to Jaeger. You can view the traces in the Jaeger UI by navigating to http://localhost:16686 in your web browser.
Conclusion
OpenTelemetry in Python is a powerful and flexible observability framework that allows you to collect telemetry data, including traces, metrics, and logs, from distributed systems, and you can use it to monitor and debug systems of any complexity. Its support for automatic and manual instrumentation, along with its support for multiple languages and telemetry data types, makes it a valuable tool for any organization seeking to improve the observability and reliability of its distributed systems.