Using the Python Client Library with InfluxDB v3 Core
By Anais Dotis-Georgiou / Developer
Jan 15, 2025
The long-awaited InfluxDB 3 Core is finally here, introducing a powerful new way to manage your time series data. InfluxDB 3 Core is an open source recent-data engine for time series and event data, currently in public Alpha under the MIT/Apache 2 license. In this post, we’ll dive into how to query and write data using the Python client library, unlocking the full potential of InfluxDB v3 Core with clear, hands-on examples. Let’s get started and see what this game-changing release has to offer! For a deep dive into the InfluxDB v3 Python Client Library, I suggest reading this two-part blog series, Client Library Deep Dive: Python (Part 1) and Part 2, for detailed explanations of the data objects and file type support.
Requirements
I recommend creating a Python virtual environment before installing the Python Client Library:
$ python3 -m venv ./.venv
$ source .venv/bin/activate
$ pip install --upgrade pip
$ pip install influxdb3-python
You’ll also need to install InfluxDB v3 Core. Follow the installation instructions here.
Creating InfluxDB v3 resources and a CLI tour
Before we can use the Python Client Library to write data to InfluxDB v3 Core, we’ll first need to create a token. The InfluxDB v3 Core CLI provides a simple way to manage your authentication setup. Using the influxdb3 CLI, you can start the server, create tokens, and interact with the data through queries and writes. For example, you can use the influxdb3 create token command to create a token and the influxdb3 write and influxdb3 query commands to manage and explore your data.
First, we need to generate a new token with:
influxdb3 create token
You should see the following output:
Token: apiv3_xxx
Hashed Token: zzz
Start the server with `influxdb3 serve --bearer-token zzz`
HTTP requests require the following header: "Authorization: Bearer apiv3_xxx"
This will grant you access to every HTTP endpoint or deny it otherwise
The Hashed Token is a cryptographic representation of the plain Token. By passing the Hashed Token to the server, you avoid exposing the plain token in the command line, logs, or configuration files. When a client sends the plain bearer token in an HTTP request, the server hashes the received token and compares the result to the hashed token you provided at startup. This lets the server validate the plain token securely without needing to store or process it directly.
Now, you can elect to serve influxdb3, passing the hashed token to the bearer-token option:
influxdb3 serve --host-id=local01 --object-store memory --bearer-token zzz
Alternatively, you could serve the instance and store objects in the local filesystem:
influxdb3 serve --host-id=local01 --object-store file --data-dir ~/.influxdb3 --bearer-token zzz
Parquet files serve as the durable, persisted data format for InfluxDB 3, enabling object storage to become the preferred solution for long-term data retention. This approach significantly lowers storage costs while maintaining excellent performance. The --object-store option allows you to specify where those Parquet files are written: memory, the local file system, Amazon S3, Azure Blob Storage, Google Cloud Storage, or other supported cloud object storage.
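If you serve with the file object store, you can also inspect the persisted Parquet files directly with pandas. This is just an exploratory sketch: the glob pattern assumes files eventually land under the --data-dir used above, and the actual directory layout may differ.

import glob
import os
import pandas as pd  # reading Parquet requires pyarrow (or fastparquet) installed

# Hypothetical location: the layout under --data-dir may differ between versions
data_dir = os.path.expanduser("~/.influxdb3")
parquet_files = glob.glob(os.path.join(data_dir, "**", "*.parquet"), recursive=True)

for path in parquet_files[:3]:
    print(path)
    # Peek at the persisted columns of each file
    print(pd.read_parquet(path).head())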
Next, we can create a database and write to it with:
influxdb3 write --dbname airSensors --file test_data
In this example, test_data is a file that contains line protocol data, the ingest format for InfluxDB. You can find a selection of real-time line protocol datasets here. For example, you could use some Air Sensor data (air-sensor-data.lp):
airSensors,sensor_id=TLM0100 temperature=71.24021491535241,humidity=35.0752743309533,co=0.5098629816173851 1732669098000000000
airSensors,sensor_id=TLM0101 temperature=71.84309523593232,humidity=34.934199682459,co=0.5034259382294339 1732669098000000000
airSensors,sensor_id=TLM0102 temperature=71.95391915782443,humidity=34.92433120092046,co=0.5175197455105179 1732669098000000000
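If you’d rather generate a file like this yourself, a small sketch along the following lines works; the sensor IDs and value ranges simply mimic the sample data above.

import random
import time

def to_line_protocol(sensor_id, temperature, humidity, co, ts_ns):
    # Line protocol: measurement,tag_set field_set timestamp (nanoseconds)
    return (
        f"airSensors,sensor_id={sensor_id} "
        f"temperature={temperature},humidity={humidity},co={co} {ts_ns}"
    )

ts_ns = time.time_ns()
with open("test_data", "w") as f:
    for i in range(100, 103):
        line = to_line_protocol(
            f"TLM0{i}",
            temperature=70 + random.random() * 3,
            humidity=34 + random.random() * 2,
            co=0.5 + random.random() * 0.05,
            ts_ns=ts_ns,
        )
        f.write(line + "\n")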
After writing the data with the influxdb3 write command, you should see the following confirmation:
success
Now, we can successfully query the data with:
influxdb3 query --dbname=airSensors "SELECT * FROM airSensors LIMIT 10"
Testing with cURL
Of course, sometimes it’s helpful to test your token by executing a cURL request before using a client SDK. For example, you could write data to a test_db database with:
curl \
"http://127.0.0.1:8181/api/v2/write?bucket=test_db&precision=s" \
--header "Authorization: Bearer apiv3_xxx" \
--data-binary 'home,room=kitchen temp=72 1732669098'
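The same request can be made from Python with the requests library if you’d rather test the token in a script; this is a minimal sketch that mirrors the cURL call above.

import requests

# Same write as the cURL example; assumes the server started earlier is running
resp = requests.post(
    "http://127.0.0.1:8181/api/v2/write",
    params={"bucket": "test_db", "precision": "s"},
    headers={"Authorization": "Bearer apiv3_xxx"},
    data="home,room=kitchen temp=72 1732669098",
)
print(resp.status_code, resp.text)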
Important Notes:
- InfluxDB Core is running on port 8181 by default.
- Databases are created on write via the CLI, API, or Clients.
InfluxDB v3 Core and the Python Client Library
Now that we’ve confirmed that we can create a token, create a database on write, and write data with a cURL request, let’s use the InfluxDB v3 Python Client Library to write data as well. The write method supports writing several different data objects, including Points, Pandas DataFrames, and Polars DataFrames. In this tutorial, we’ll focus on writing and returning a Pandas DataFrame. For explanations of working with the other data objects, see this example directory or Client Library Deep Dive: Python (Part 1).
from influxdb_client_3 import InfluxDBClient3
import pandas as pd
import numpy as np

# Initialize the client with the correct authorization scheme
client = InfluxDBClient3(
    host="http://127.0.0.1:8181",
    token="apiv3_xxx",
    org="",
    database="test",
    auth_scheme="Bearer"
)
print("Connection Successful!")

# Create a range of datetime values
dates = pd.date_range(start='2025-01-01', end='2025-01-02', freq='1H')

# Create a DataFrame with random data and a datetime index
df = pd.DataFrame(
    np.random.randn(len(dates), 3),
    index=dates,
    columns=['Column 1', 'Column 2', 'Column 3']
)
df['tagkey'] = 'Hello World'
print(df)

# Write the DataFrame to InfluxDB
try:
    client.write(df, data_frame_measurement_name='table', data_frame_tag_columns=['tagkey'])
    print("DataFrame successfully written to InfluxDB!")
except Exception as e:
    print(f"Failed to write to InfluxDB: {e}")

# Query the DataFrame from InfluxDB
try:
    query = '''SELECT * from "table"'''
    table = client.query(query=query, language="sql", mode="pandas")
    print("SQL Query Results:")
    print(table)
except Exception as e:
    print(f"SQL Query failed: {e}")
This code initializes an InfluxDBClient3 instance to connect to an InfluxDB 3 Core instance and defines a Pandas DataFrame with randomly generated data. The script writes this DataFrame to the database and then uses a SQL query to retrieve all entries from the table measurement in the test database, printing the result as a Pandas DataFrame. The try-except blocks handle potential errors during the write and query operations. Please see these examples for more details on how to write other types of data (including Parquet, CSV, JSON, and Polars) and query with InfluxQL using the Python Client Library.
Final thoughts
Last but not least, if you want to stop the running influxdb3 server, first find its process ID (PID) with:
pgrep influxdb3
Then stop the process with:
kill <PID>
The InfluxDB v3 CLI is still under development. In the future, users should be able to create a database without having to write data to it. This should make using InfluxDB v3 Core easier, although there is an advantage to using the write command and verifying that your writes are successful. Additionally, as extra support is added to InfluxDB v3 Core in the future, updates will be made to the InfluxDB v3 Python Client Library. But for now, I hope this blog post helps you get started using InfluxDB v3 Core.
To get started, download InfluxDB 3 Core here. Please share your first impressions, thoughts, and feedback on the Discord for InfluxDB Core. Your experience and opinions are important to us during the Alpha release of InfluxDB Core. If you need help, please contact us on our community site or Slack channel. If you are also working on a data processing project with InfluxDB, I’d love to hear from you!