InfluxDB OSS v1.11.7 (#5648)
* InfluxDB OSS v1.11.7

* Apply suggestions from code review

* updated specific references to influxdb 1.8 to 1.11

* updated 1.11.7 release notes

* fix indentation on 1.11.7 release notes

* add language about cloning a new instance with 1.11

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <[email protected]>

* Apply suggestions from code review

* corrected v1 linux binary package name

* corrected v1 linux binary package name

* bump 1.11.7 flux version to 1.194.5

* added warning about no 32-bit builds (#5661)

* updated influxdb v1 latest patch on data/products

---------

Co-authored-by: Jason Stirnaman <[email protected]>
sanderson and jstirnaman authored Oct 28, 2024
1 parent df12257 commit 91482e6
Showing 26 changed files with 808 additions and 123 deletions.
@@ -1650,21 +1650,25 @@ Number of queries allowed to execute concurrently.
Default is `0`.

#### query-initial-memory-bytes

Initial bytes of memory allocated for a query.
`0` means unlimited.
Default is `0`.

#### query-max-memory-bytes

Maximum total bytes of memory allowed for an individual query.
`0` means unlimited.
Default is `0`.

#### total-max-memory-bytes

Maximum total bytes of memory allowed for all running Flux queries.
`0` means unlimited.
Default is `0`.

#### query-queue-size

Maximum number of queries allowed in execution queue.
When queue limit is reached, new queries are rejected.
`0` means unlimited.
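Taken together, these limits might appear in a data node configuration file like the following sketch. The section name and all values here are assumptions for illustration, not taken from this page; verify them against your own `influxdb.conf`:

```toml
# Illustrative only -- section name and values are assumptions.
[flux-controller]
  # Start each query with 10 MB of memory...
  query-initial-memory-bytes = 10485760
  # ...but cap any single query at 50 MB...
  query-max-memory-bytes = 52428800
  # ...and all running Flux queries at 500 MB combined.
  total-max-memory-bytes = 524288000
  # Queue up to 100 queries; beyond that, new queries are rejected.
  query-queue-size = 100
```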
18 changes: 16 additions & 2 deletions content/enterprise_influxdb/v1/flux/optimize-queries.md
@@ -22,9 +22,14 @@ Optimize your Flux queries to reduce their memory and compute (CPU) requirements
- [Measure query performance with Flux profilers](#measure-query-performance-with-flux-profilers)

## Start queries with pushdowns

**Pushdowns** are functions or function combinations that push data operations
to the underlying data source rather than operating on data in memory.
Start queries with pushdowns to improve query performance. Once a non-pushdown
function runs, Flux pulls data into memory and runs all subsequent operations there.

#### Pushdown functions and function combinations

The following pushdowns are supported in InfluxDB Enterprise 1.10+.

| Functions | Supported |
@@ -63,6 +68,7 @@ Once a non-pushdown function runs, Flux pulls data into memory and runs all
subsequent operations there.

##### Pushdown functions in use

```js
from(bucket: "db/rp")
    |> range(start: -1h) //
    // ...
```

### Avoid processing filters inline

Avoid using mathematic operations or string manipulation inline to define data filters.
Processing filter values inline prevents `filter()` from pushing its operation down
to the underlying data source, so data returned by the
@@ -104,12 +111,14 @@ from(bucket: "db/rp")
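As a sketch of this guidance (bucket, field, and threshold values are hypothetical, not from the original page), compare a filter computed inline with one computed once outside the filter:

```js
// Avoid: the arithmetic inside fn prevents filter() from being
// pushed down, so rows are filtered in memory.
from(bucket: "db/rp")
    |> range(start: -1h)
    |> filter(fn: (r) => r._value > 100.0 * 0.95)

// Prefer: compute the threshold once, so filter() can be
// pushed down to the underlying data source.
threshold = 100.0 * 0.95

from(bucket: "db/rp")
    |> range(start: -1h)
    |> filter(fn: (r) => r._value > threshold)
```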

## Avoid short window durations

Windowing (grouping data based on time intervals) is commonly used to aggregate and downsample data.
Increase performance by avoiding short window durations.
More windows require more compute power to evaluate which window each row should be assigned to.
Reasonable window durations depend on the total time range queried.
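For example (bucket and measurement names hypothetical), widening the window duration sharply reduces how many windows Flux must create and assign rows to over a fixed range:

```js
// Over a 24h range:
// every: 1s -> 86,400 windows per series
// every: 5m ->    288 windows per series
from(bucket: "db/rp")
    |> range(start: -1d)
    |> filter(fn: (r) => r._measurement == "cpu")
    |> aggregateWindow(every: 5m, fn: mean)
```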

## Use "heavy" functions sparingly

The following functions use more memory or CPU than others.
Consider their necessity in your data processing before using them:

@@ -120,6 +129,7 @@
- [pivot()](/influxdb/v2/reference/flux/stdlib/built-in/transformations/pivot/)

## Use set() instead of map() when possible

[`set()`](/influxdb/v2/reference/flux/stdlib/built-in/transformations/set/),
[`experimental.set()`](/influxdb/v2/reference/flux/stdlib/experimental/set/),
and [`map`](/influxdb/v2/reference/flux/stdlib/built-in/transformations/map/)
@@ -132,6 +142,7 @@ Use the following guidelines to determine which to use:
- If dynamically setting a column value using **existing row data**, use `map()`.

#### Set a column value to a static value

The following queries are functionally the same, but using `set()` is more performant than using `map()`.

```js
data
    // ...
```
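As a sketch of this guidance (the column name and value are hypothetical), the same static column can be written both ways:

```js
// set() assigns the static value without evaluating a function per row
data
    |> set(key: "foo", value: "bar")

// map() produces the same result but processes every row in memory
data
    |> map(fn: (r) => ({r with foo: "bar"}))
```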

#### Dynamically set a column value using existing row data

```js
data
|> map(fn: (r) => ({ r with foo: r.bar }))
```

## Balance time range and data precision

To ensure queries are performant, balance the time range and the precision of your data.
For example, if you query data stored every second and request six months worth of data,
results would include ≈15.5 million points per series.
@@ -160,7 +173,8 @@ Use [pushdowns](#pushdown-functions-and-function-combinations) to optimize how
many points are stored in memory.

## Measure query performance with Flux profilers
Use the [Flux Profiler package](/influxdb/v2/reference/flux/stdlib/profiler/)

Use the [Flux Profiler package](/flux/v0/stdlib/profiler/)
to measure query performance and append performance metrics to your query output.
The following Flux profilers are available:

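To enable profilers in a query, import the package and set the `profiler.enabledProfilers` option; a minimal sketch (bucket and measurement names hypothetical):

```js
import "profiler"

// Append query- and operator-level performance metrics to the results
option profiler.enabledProfilers = ["query", "operator"]

from(bucket: "db/rp")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "cpu")
```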
2 changes: 1 addition & 1 deletion content/enterprise_influxdb/v1/tools/api.md
@@ -249,7 +249,7 @@
If you use the `predicate` option in your request, review [delete predicate syntax](/influxdb/v2/reference/syntax/delete-predicate/) and note its [limitations](/influxdb/v2/reference/syntax/delete-predicate/#limitations).
## InfluxDB 1.x HTTP endpoints
The following InfluxDB 1.x API endpoints are available:
Expand Down
8 changes: 4 additions & 4 deletions content/enterprise_influxdb/v1/tools/flux-vscode.md
@@ -18,15 +18,15 @@ provides Flux syntax highlighting, autocompletion, and a direct InfluxDB server
integration that lets you run Flux scripts natively and show results in VS Code.

{{% note %}}
#### Enable Flux in InfluxDB 1.11
To use the Flux VS Code extension with InfluxDB 1.11, ensure Flux is enabled in
your InfluxDB configuration file.
For more information, see [Enable Flux](/enterprise_influxdb/v1/flux/installation/).
{{% /note %}}

##### On this page
- [Install the Flux VS Code extension](#install-the-flux-vs-code-extension)
- [Connect to InfluxDB 1.11](#connect-to-influxdb-111)
- [Query InfluxDB from VS Code](#query-influxdb-from-vs-code)
- [Explore your schema](#explore-your-schema)
- [Debug Flux queries](#debug-flux-queries)
@@ -38,7 +38,7 @@ The Flux VS Code extension is available in the **Visual Studio Marketplace**.
For information about installing extensions from the Visual Studio marketplace,
see the [Extension Marketplace documentation](https://code.visualstudio.com/docs/editor/extension-gallery).

## Connect to InfluxDB 1.11
To create an InfluxDB connection in VS Code:

1. Open the **VS Code Command Palette** ({{< keybind mac="⇧⌘P" other="Ctrl+Shift+P" >}}).
114 changes: 102 additions & 12 deletions content/enterprise_influxdb/v1/tools/influx_inspect.md
@@ -29,18 +29,21 @@ influx_inspect [ [ command ] [ options ] ]

The `influx_inspect` commands are summarized here, with links to detailed information on each of the commands.

- [`buildtsi`](#buildtsi): Converts in-memory (TSM-based) shards to TSI.
- [`check-schema`](#check-schema): Checks for type conflicts between shards.
- [`deletetsm`](#deletetsm): Bulk deletes a measurement from a raw TSM file.
- [`dumptsi`](#dumptsi): Dumps low-level details about TSI files.
- [`dumptsm`](#dumptsm): Dumps low-level details about TSM files.
- [`dumptsmwal`](#dumptsmwal): Dump all data from a WAL file.
- [`export`](#export): Exports raw data from a shard in InfluxDB line protocol format.
- [`merge-schema`](#merge-schema): Merges a set of schema files from the `check-schema` command.
- [`report`](#report): Displays a shard level report.
- [`report-db`](#report-db): Estimates InfluxDB Cloud (TSM) cardinality for a database.
- [`report-disk`](#report-disk): Reports disk usage by shard and measurement.
- [`reporttsi`](#reporttsi): Reports on cardinality for measurements and shards.
- [`verify`](#verify): Verifies the integrity of TSM files.
- [`verify-seriesfile`](#verify-seriesfile): Verifies the integrity of series files.
- [`verify-tombstone`](#verify-tombstone): Verifies the integrity of tombstones.

### `buildtsi`

@@ -139,6 +142,31 @@

```
$ influx_inspect buildtsi -database mydb -datadir ~/.influxdb/data -waldir ~/.influxdb/wal
$ influx_inspect buildtsi -database stress -shard 1 -datadir ~/.influxdb/data -waldir ~/.influxdb/wal
```

### `check-schema`

Check for type conflicts between shards.

#### Syntax

```
influx_inspect check-schema [ options ]
```

#### Options

##### [ `-conflicts-file <string>` ]

Filename to which conflicts data is written. Default is `conflicts.json`.

##### [ `-path <string>` ]

Directory path where `fields.idx` files are located. Default is the current
working directory `.`.

##### [ `-schema-file <string>` ]

Filename to which schema data is written. Default is `schema.json`.
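For example, a hypothetical invocation against a data node's data directory (the path and output filename are illustrative, not from the original page):

```
$ influx_inspect check-schema -path /var/lib/influxdb/data -schema-file schema-node1.json
```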

### `deletetsm`

Use `deletetsm -measurement` to delete a measurement in a raw TSM file (from specified shards).
Expand Down Expand Up @@ -426,6 +454,26 @@ randset value=97.9296104805 1439856000000000000
randset value=25.3849066842 1439856100000000000
```

### `merge-schema`

Merge a set of schema files from the [`check-schema` command](#check-schema).

#### Syntax

```
influx_inspect merge-schema [ options ]
```

#### Options

##### [ `-conflicts-file <string>` ]

Filename to which conflicts data is written. Default is `conflicts.json`.

##### [ `-schema-file <string>` ]

Filename to which the merged schema is written. Default is `schema.json`.
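A hypothetical workflow: run `check-schema` on each data node, copy the resulting schema files to one machine, then merge them. The filenames are illustrative, and passing the per-node schema files as trailing arguments is an assumption; check `influx_inspect merge-schema -help` for the exact invocation:

```
$ influx_inspect merge-schema -conflicts-file conflicts.json -schema-file schema.json schema-node1.json schema-node2.json
```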

### `report`

Displays series metadata for all shards.
@@ -461,6 +509,48 @@ The flag to report exact cardinality counts instead of estimates.
Default value is `false`.
Note: This can use a lot of memory.

### `report-db`

Use the `report-db` command to estimate the series cardinality of data in a
database when migrated to InfluxDB Cloud (TSM). InfluxDB Cloud (TSM) includes
field keys in the series key, so unique field keys affect the total cardinality.
The total series cardinality of data in an InfluxDB 1.x database may differ
from the series cardinality of that same data when migrated to InfluxDB Cloud (TSM).

#### Syntax

```
influx_inspect report-db [ options ]
```

#### Options

##### [ `-c <int>` ]

Set worker concurrency. Default is `1`.

##### `-db-path <string>`

{{< req >}}: The path to the database.

##### [ `-detailed` ]

Include counts for fields and tags in the command output.

##### [ `-exact` ]

Report exact cardinality counts instead of estimates.
This method of calculation can use a lot of memory.

##### [ `-rollup <string>` ]

Specify the cardinality "rollup" level, the granularity of the cardinality report:

- `t`: total
- `d`: database
- `r`: retention policy
- `m`: measurement <em class="op65">(Default)</em>
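For example, a hypothetical run against a local database directory with per-measurement rollup and detailed counts (the path is illustrative):

```
$ influx_inspect report-db -db-path ~/.influxdb/data/mydb -detailed -rollup m
```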

### `report-disk`

Use the `report-disk` command to review TSM file disk usage per shard and measurement in a specified directory. Useful for capacity planning and identifying which measurement or shard is using the most disk space. The default directory path is `~/.influxdb/data/`.
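For example, a sketch of an invocation against the default data directory (passing the directory as a trailing argument and the `-detailed` flag are assumptions; check `influx_inspect report-disk -help`):

```
$ influx_inspect report-disk -detailed ~/.influxdb/data/
```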
