CLI Operations for InfluxDB 3 Core and Enterprise
By Anais Dotis-Georgiou, Developer
Feb 04, 2025
This blog covers the nitty-gritty of essential command-line tools and workflows to effectively manage and interact with your InfluxDB 3 Core and Enterprise instances. Whether you’re starting or stopping the server with configurations like memory, file, or object store, this guide will walk you through the process. We’ll also look at creating and writing data into databases using authentication tokens, exploring direct line protocol input versus file-based approaches for tasks like testing. You’ll learn how to query data efficiently using SQL and InfluxQL, set up performance-boosting features like the last value cache and distinct value cache, and tidy up by deleting databases and tables when necessary. Let’s get started with mastering the InfluxDB 3 Core and Enterprise CLI!
Requirements
This blog post assumes you have InfluxDB 3 Core or InfluxDB 3 Enterprise installed. If you’re just getting started, I recommend beginning with InfluxDB 3 Core. As the open-source version of InfluxDB 3 Enterprise, it’s a solid foundation for learning and an excellent choice as an edge data collector. Alternatively, you can use the free trial of Enterprise or upgrade from Core to Enterprise; the CLI commands in Core are identical to those in Enterprise. To upgrade from Core to Enterprise, download and install InfluxDB 3 Enterprise in free trial mode or with a valid license in the same environment where you installed Core, ensuring it points to the same object store.
Image of the CLI after you install it with a single cURL command. There are options to install as a simple download or with Docker.
Naming Conventions Note: If you’re a previous InfluxDB user, some changes in nomenclature might be confusing; please note that the following are synonymous:
Bucket ↔ Database
Measurement ↔ Table
Fields, Tags ↔ Columns
Start by getting --help
Once you have InfluxDB 3 Core or Enterprise installed, the influxdb3 command-line interface (CLI) becomes your go-to tool for interacting with the database. This versatile CLI allows you to manage your InfluxDB instance, create resources like databases and tokens, and perform essential operations such as querying and writing data. Running influxdb3 --help provides a complete list of commands and options, with examples to get you started. Whether you’re setting up a server with influxdb3 serve, writing data with influxdb3 write, or running queries using influxdb3 query, this tool makes it simple to work with your time series data.
Here’s a detailed explanation of each command listed in the influxdb3 --help output to help you understand their purpose and usage:
- serve: Starts the InfluxDB 3 Core or Enterprise server, the central process that makes the database operational.
- query: Executes queries against a running InfluxDB 3 Core or Enterprise server, allowing you to retrieve and analyze stored data.
- write: Writes time series data to an InfluxDB 3 Core or Enterprise server. This command allows you to add data manually or programmatically from other tools.
- create: Helps you create new resources in your InfluxDB instance, such as databases or tokens.
- help: Displays help information for the influxdb3 CLI or a specific command. For example, influxdb3 query --help provides detailed usage instructions for the query command.
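For example, you can pull up the top-level help or the flags for a specific subcommand at any time:
influxdb3 --help
influxdb3 query --help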
Starting the server with different configurations
When you first install InfluxDB 3, the CLI walks you through server configuration options. After that initial setup, use the serve command to start the instance:
influxdb3 serve --object-store file --data-dir ~/.influxdb3 --node-id node0
InfluxDB 3 runs on port 8181 by default, but you can add --http-bind='0.0.0.0:8181' to specify a different bind address or port. You must also specify the --node-id identifier, which determines the server’s storage path within the configured storage location; it must be unique for any nodes sharing the same object store configuration, such as the same bucket.
Parquet files serve as the durable, persisted data format for InfluxDB 3 Core and Enterprise, making object storage the preferred solution for long-term data retention. This approach significantly lowers storage costs while maintaining excellent performance. The --object-store option lets you specify where to write those Parquet files: to memory, the local filesystem, Amazon S3, Azure Blob Storage, or Google Cloud Storage. The options include:
- memory (default): Effectively no object persistence.
- memorythrottled: Like memory, but with latency and throughput that somewhat resemble a cloud object store. Useful for testing and benchmarking.
- file: Stores objects in the local filesystem. Must also set --data-dir.
- s3: Amazon S3. Must also set --bucket, --aws-access-key-id, --aws-secret-access-key, and possibly --aws-default-region.
- google: Google Cloud Storage. Must also set --bucket and --google-service-account.
- azure: Microsoft Azure Blob Storage. Must also set --bucket, --azure-storage-account, and --azure-storage-access-key.
For example, I could use the serve command to start the instance using memory as the “object store” with:
influxdb3 serve --object-store memory --node-id node0
In-memory storage doesn’t provide a permanent object store, and data clears on restart, but it is the fastest way to get running with InfluxDB 3.
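If you want durable cloud storage instead, a sketch using the s3 flags from the list above might look like the following (the bucket name, credentials, and region are placeholders you’d replace with your own):
influxdb3 serve \
  --object-store s3 \
  --bucket my-influxdb3-bucket \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY> \
  --aws-default-region us-east-1 \
  --node-id node0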
Create a database and write to it with a plain token
Now we’re ready to create a database and write data with one command:
influxdb3 write --database [your database name] --file [path to your line protocol data]
For example, if we wanted to write the following line protocol data we might use:
influxdb3 write --database airSensors --file /Desktop/airsensors.lp
Where airsensors.lp is a file that contains line protocol data, the ingest format for InfluxDB. You can find a selection of real-time line protocol datasets here. For example, you could download some Air Sensor data, which looks like:
airSensors,sensor_id=TLM0100 temperature=71.24021491535241,humidity=35.0752743309533,co=0.5098629816173851 1732669098000000000
airSensors,sensor_id=TLM0101 temperature=71.84309523593232,humidity=34.934199682459,co=0.5034259382294339 1732669098000000000
airSensors,sensor_id=TLM0102 temperature=71.95391915782443,humidity=34.92433120092046,co=0.5175197455105179 1732669098000000000
This dataset contains temperature, carbon monoxide, and humidity data from eight different sensors.
Or, we could write those few lines of data directly, instead of pointing to a file, with:
influxdb3 write --database [your database name] 'airSensors,sensor_id=TLM0100 temperature=71.24021491535241,humidity=35.0752743309533,co=0.5098629816173851 1732669098000000000
airSensors,sensor_id=TLM0101 temperature=71.84309523593232,humidity=34.934199682459,co=0.5034259382294339 1732669098000000000
airSensors,sensor_id=TLM0102 temperature=71.95391915782443,humidity=34.92433120092046,co=0.5175197455105179 1732669098000000000'
After writing the data with the influxdb3 write command, you should see the following confirmation:
success
In this example, and by default, we are serving InfluxDB 3 Core with the plain Token. In the next section, we’ll learn how to write data to InfluxDB 3 Core or Enterprise with the Hashed Token and the difference between the two.
You create a database on write with the influxdb3 write command. However, you can also elect to create a database with the influxdb3 create database [your database name] command.
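For example, to create the airSensors database ahead of time:
influxdb3 create database airSensors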
Writing into the database with authentication tokens
You’ll need to create a token if you want to use the HTTP API or SDKs to access your data. To create a token with the InfluxDB CLI, you can use the following command:
influxdb3 create token
You should see the following output:
Token: apiv3_xxx
Hashed Token: zzz
Start the server with `influxdb3 serve --bearer-token zzz`
HTTP requests require the following header: "Authorization: Bearer apiv3_xxx"
This will grant you access to every HTTP endpoint or deny it otherwise
Now, you can elect to serve influxdb3 with the bearer-token:
influxdb3 serve --object-store memory --bearer-token zzz --node-id node0
You can also elect to serve the instance and store objects in the local filesystem:
influxdb3 serve --object-store file --data-dir ~/.influxdb3 --bearer-token zzz --node-id node0
The hashed token is a cryptographic representation of the plain token. By passing the hashed token to the server, you avoid exposing the plain token in the command line, logs, or configuration files. So when a client sends a plain bearer token in an HTTP request, the server hashes the received token and compares the hashed result to the hashed token you provided at startup. This ensures that the server can validate the plain token securely without needing to store or process it directly. It’s best practice to serve InfluxDB 3 Core and Enterprise with the Hashed Token.
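For intuition, the server-side check works conceptually like this sketch (illustration only: sha256sum stands in for whatever hashing scheme InfluxDB actually uses, and the token values are the placeholders from above):
# Illustration only -- InfluxDB's real hashing scheme may differ.
PLAIN="apiv3_xxx"         # plain token the client sends
STORED_HASH="zzz"         # hashed token passed via --bearer-token at startup
RECEIVED_HASH=$(printf '%s' "$PLAIN" | sha256sum | awk '{print $1}')
if [ "$RECEIVED_HASH" = "$STORED_HASH" ]; then
  echo "authorized"
else
  echo "denied"
fi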
Now that we’re serving with the Hashed Token, we can use the same CLI commands above to write data to the database. Alternatively, if we wanted to write with cURL, we’d do the following:
curl \
"http://127.0.0.1:8181/api/v2/write?bucket=[your database name]&precision=s" \
--header "Authorization: Bearer zzz" \
--data-binary 'home,room=kitchen temp=72 1732669098'
Querying data with InfluxQL and SQL
Now we’re ready to query our InfluxDB 3 Core instance with SQL:
influxdb3 query --database=[your database name] "SELECT * FROM airSensors LIMIT 10"
You can also elect to set the language to InfluxQL instead of SQL if you’re an existing InfluxDB user and more familiar with that language. Simply specify the language with --language=influxql:
influxdb3 query --database=[your database name] --language=influxql "SHOW MEASUREMENTS"
Output from querying InfluxQL with the InfluxDB 3 CLI.
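If you’d rather query over the HTTP API than the CLI, a sketch against the v3 SQL query endpoint might look like this (assuming the default port, the airSensors database, and the plain token from earlier):
curl "http://127.0.0.1:8181/api/v3/query_sql?db=airSensors&q=SELECT+*+FROM+airSensors+LIMIT+10" \
  --header "Authorization: Bearer apiv3_xxx"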
Setting up last value cache and distinct value cache
InfluxDB 3 Core supports a last-n values cache (LVC), which stores the last N values in a series or column hierarchy in memory, and a distinct value cache, which retains unique values for a single column or a hierarchy of columns in RAM. The last value cache allows InfluxDB 3 to answer last-n values queries in under 10 milliseconds, while the distinct value cache is great for fast metadata lookups.
In future blog posts, we’ll dive into benchmarking the performance of these caches, but for this blog, let’s focus on creating and using them. The process for creating both caches is almost identical, so let’s demonstrate the process by creating a last-value cache with:
influxdb3 create last_cache --database [your database name] --table [your database table] [CACHE_NAME]
The [CACHE_NAME] is optional; the command automatically generates a name if you don’t provide one. So, for example, if we wanted to create a last value cache for the airSensors table/measurement:
influxdb3 create last_cache --database [your database name] --table airSensors airSensorsLVC
You should see the following output:
new cache created: {
"table": "airSensors",
"name": "airSensorsLVC",
"key_columns": [
0
],
"value_columns": "all_non_key_columns",
"count": 1,
"ttl": 14400
}
Now we can return the last values through the cache with:
influxdb3 query --database=[your database name] "SELECT * FROM last_cache(airSensors, airSensorsLVC) LIMIT 10"
The last value cache is helpful because it enables fast, efficient access to the most recent data for specific combinations of key column values. This helps users ensure they have up-to-date data for alerts or decision-making. It’s also extremely useful when working with an event-based time series application where you aren’t writing subsets of data regularly. In that case, you want to avoid full scans of historical data for queries focused on the latest values.
Creating a Last Value Cache for foo
To better understand that output, we need to dive into the additional create last_cache options that we didn’t utilize during our LVC creation. The create last_cache command has the following options, as per the documentation:
- --key-columns: A comma-separated list of columns to use as keys in the cache, for example: foo,bar,baz. This provides the top level of the cache.
- --value-columns: A comma-separated list of columns to store as values in the cache, for example: foo,bar,baz. At the leaf (or terminal) nodes of the hierarchy, a buffer is maintained to store the values. The buffer size is determined by the --count parameter.
- --count: The number of entries per unique key column combination to store in the cache. The maximum is 10.
- --ttl: The cache entries’ time-to-live (TTL) in humantime form, for example: 10s, 1min 30sec, 3 hours. If any entry in the buffer has been there for longer than the TTL, it will be removed, regardless of the buffer size and how many entries it contains.
When data is written, the values are added to the buffer corresponding to the matching combination of key column values. For example, imagine we create a cache with:
influxdb3 create last_cache --database [your database name] --table [your database table] --key-columns t1,t2 --value-columns f1 --count 5
Consider the following line protocol data, where 1 denotes data written at the first timestamp:
foo,t1=A,t2=A f1=1 1
foo,t1=A,t2=B f1=2 1
foo,t1=B,t2=C f1=3 1
foo,t1=B,t2=D f1=4 1
Data would be added to the buffer in the cache in the following way:
A hierarchical cache structure organizing data by key columns and storing values in buffers in the InfluxDB 3 Last Value Cache.
Now imagine we write another line of data: foo,t1=A,t2=A f1=2 2. It is added to the buffer for the t1=A,t2=A key combination. Values are only buffered if their time is newer than the most recent value in their respective buffer.
The distinct value cache operates similarly to the last value cache with the following option differences, as per the documentation:
- --columns (required): A comma-separated list of columns to cache distinct values for, for example: col1,col2,col3.
- --max-cardinality: The maximum number of distinct value combinations to hold in the cache.
- --max-age: The maximum age of an entry in the cache, entered as a human-readable duration, for example: 30d, 24h.
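Putting those options together, here’s a sketch of creating a distinct value cache for the sensor IDs in our airSensors table (the cache name, cardinality limit, and age below are illustrative values, not requirements):
influxdb3 create distinct_cache --database [your database name] --table airSensors --columns sensor_id --max-cardinality 1000 --max-age 24h airSensorsDVC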
As mentioned above, the distinct value cache is optimal for fast metadata lookups. It helps guide queries toward the correct subset of data by traversing the cache structure and looking for the leaf nodes that match the keys in the query. For example, in an IoT scenario, this could help developers quickly check whether a particular sensor (perhaps one of thousands of identical sensors across multiple factories) is reporting data and operating as expected.
Deletes
We can also use the influxdb3 delete command to delete databases, distinct value caches, the file index for a database, last value caches, plugins, tables in a database, and triggers. We’ll learn more about plugins and triggers in the following blog post. For now, let’s delete the database we created with:
influxdb3 delete database [your database name]
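The other delete subcommands follow the same shape. For instance, a hypothetical sketch of dropping a single table (assuming the table name is passed positionally, mirroring the database form above):
influxdb3 delete table --database [your database name] airSensors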
Stopping the server
Finally, if you want to stop the running influxdb3 server, first find its process ID (PID) with:
pgrep influxdb3
And then using:
kill [PID]
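Or, as a one-liner (pkill is standard process tooling on most Unix-like systems, not an InfluxDB command):
pkill influxdb3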
Final thoughts
I hope this blog post helps you get started with InfluxDB 3 Core or Enterprise. In an upcoming blog post, we’ll learn how to create Python Plugins for the Processing Engine for InfluxDB 3 Enterprise, a feature that is also controlled through the CLI. As always, get started with InfluxDB 3 Cloud here and Core and Enterprise here. If you need help, please contact us on our community site or Slack channel. If you are also working on a data processing project with InfluxDB, I’d love to hear from you!