Transformations: Adding group by and aggregate on multiple fields transformation

* Adding Occurrences transformer

* Adding test for Occurrences Transformer

* Cleanup. Adding a test.

* Adding doc

* Modifying UI to support custom calculations options

* Implementing data transformation

* Finalizing calculations implementation

* Cleanup

* Using Fields instead of arrays in data grouping

* Renaming transformation to GroupBy

* Adding some doc

* Apply suggestions (solving TS typing errors)

Co-authored-by: Marcus Andersson <[email protected]>

* Tweaking UI

* Preventing selecting the same field name twice.

* Removing console print. No calculations by default.

* Forgot to add the current value to the GroupBy selector

* Solving some typing issues and prettifier errors

* Cleanup

* Updating test

* Ensure proper copy of options (solves some issues)

* Check if the fields exist in the data before processing

* Adding missing import in test file

* If group by field not specified, return all data untouched.

* Adding another missing import in test

* Minor updates

* Implementing GroupBy multiple fields + Improve field typing

* Removing console prints

* Allowing the exact number of fields to be added as aggregation

* Centering remove button icon

* Cleanup

* Correcting TS error

* Changing transformer options structure

* Sorting so GroupBy fields appear on top

* Cleanup

* Simplifying some operations. Adding curly brackets.

* Changing some labels on the UI

* Updating test

* Cleanup

* Updating doc

* Fixed field list. Storing options as Record instead of Array.

* Update test

* Cleaned up the group by editor UI code.

* changed the transform to a table layout instead of a flexbox layout.

* cleaned up group by transformer.

* removed unused imports.

* Added some more tests.

* Added one more test and cleaned up code.

* fixed failing test.

* Fixed so we have the proper casing on naming.

* fixed so we don't wrap on the first row.

Co-authored-by: Marcus Andersson <[email protected]>
Co-authored-by: Torkel Ödegaard <[email protected]>
Co-authored-by: Marcus Andersson <[email protected]>
4 people authored Aug 31, 2020
1 parent fe6d399 commit 7db42f0
Showing 8 changed files with 657 additions and 0 deletions.
62 changes: 62 additions & 0 deletions docs/sources/panels/transformations.md
@@ -74,6 +74,7 @@ Grafana comes with the following transformations:
- [Join by field (outer join)](#join-by-field-outer-join)
- [Add field from calculation](#add-field-from-calculation)
- [Labels to fields](#labels-to-fields)
- [Group By](#group-by)
- [Series to rows](#series-to-rows)
- [Debug transformations](#debug-transformations)

@@ -222,6 +223,67 @@ After I apply the transformation, my labels appear in the table as fields.

{{< docs-imagebox img="/img/docs/transformations/labels-to-fields-after-7-0.png" class="docs-image--no-shadow" max-width= "1100px" >}}


### Group By

This transformation groups the data by a specified field (column) value and processes calculations on each group. The available calculations are the same as for the Reduce transformation.

Here's an example of the original data.

| Time | Server ID | CPU Temperature | Server Status
|---------------------|-------------|-----------------|----------
| 2020-07-07 11:34:20 | server 1 | 80 | Shutdown
| 2020-07-07 11:34:20 | server 3 | 62 | OK
| 2020-07-07 10:32:20 | server 2 | 90 | Overload
| 2020-07-07 10:31:22 | server 3 | 55 | OK
| 2020-07-07 09:30:57 | server 3 | 62 | Rebooting
| 2020-07-07 09:30:05 | server 2 | 88 | OK
| 2020-07-07 09:28:06 | server 1 | 80 | OK
| 2020-07-07 09:25:05 | server 2 | 88 | OK
| 2020-07-07 09:23:07 | server 1 | 86 | OK

This transformation works in two steps. First, you specify one or more fields to group the data by. This groups all rows with the same values of those fields together, as if you had sorted them. For instance, if we group by the `Server ID` field, the data is grouped this way:

| Time | Server ID | CPU Temperature | Server Status
|---------------------|-------------|-----------------|----------
| 2020-07-07 11:34:20 | **server 1** | 80 | Shutdown
| 2020-07-07 09:28:06 | **server 1** | 80 | OK
| 2020-07-07 09:23:07 | **server 1** | 86 | OK
|
| 2020-07-07 10:32:20 | server 2 | 90 | Overload
| 2020-07-07 09:30:05 | server 2 | 88 | OK
| 2020-07-07 09:25:05 | server 2 | 88 | OK
|
| 2020-07-07 11:34:20 | ***server 3*** | 62 | OK
| 2020-07-07 10:31:22 | ***server 3*** | 55 | OK
| 2020-07-07 09:30:57 | ***server 3*** | 62 | Rebooting

All rows with the same value of `Server ID` are grouped together.

After choosing which field you want to group your data by, you can add various calculations on the other fields, and each calculation is applied to every group of rows. For instance, we might want to calculate the average `CPU Temperature` for each of those servers, so we can add the _mean_ calculation applied to the `CPU Temperature` field to get the following:

| Server ID | CPU Temperature (mean)
|-----------|--------------------------
| server 1 | 82
| server 2 | 88.6
| server 3 | 59.6

We can add more than one calculation. For instance:

- For the field `Time`, we can calculate the *Last* value, to know when the last data point was received for each server
- For the field `Server Status`, we can calculate the *Last* value, to know the last state of each server
- For the field `CPU Temperature`, we can also calculate the *Last* value, to know the latest monitored temperature for each server

We would then get:

| Server ID | CPU Temperature (mean) | CPU Temperature (last) | Time (last) | Server Status (last)
|-----------|-------------------------- |------------------------|------------------|----------------------
| server 1 | 82 | 80 | 2020-07-07 11:34:20 | Shutdown
| server 2 | 88.6 | 90 | 2020-07-07 10:32:20 | Overload
| server 3 | 59.6 | 62 | 2020-07-07 11:34:20 | OK

This transformation lets you extract key information from your time series and display it in a convenient way.
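
For reference, a configuration roughly equivalent to the example above could look like the following sketch. The option shape and reducer IDs mirror the GroupBy transformer's test suite in this commit; the single `@grafana/data` import path and the exported names `GroupByOperationID` and `GroupByTransformerOptions` are assumptions here, not a documented public API.

```ts
// Sketch only: assumes these symbols are exported from @grafana/data.
import {
  DataTransformerConfig,
  DataTransformerID,
  ReducerID,
  GroupByOperationID,
  GroupByTransformerOptions,
} from '@grafana/data';

const cfg: DataTransformerConfig<GroupByTransformerOptions> = {
  id: DataTransformerID.groupBy,
  options: {
    fields: {
      // Group rows by server.
      'Server ID': { operation: GroupByOperationID.groupBy, aggregations: [] },
      // Mean and latest temperature per server.
      'CPU Temperature': { operation: GroupByOperationID.aggregate, aggregations: [ReducerID.mean, ReducerID.last] },
      // Time of the last data point per server.
      Time: { operation: GroupByOperationID.aggregate, aggregations: [ReducerID.last] },
      // Last reported status per server.
      'Server Status': { operation: GroupByOperationID.aggregate, aggregations: [ReducerID.last] },
    },
  },
};

// The config can then be applied to the query result frames, e.g.:
// const grouped = transformDataFrame([cfg], data);
```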

## Series to rows

> **Note:** This documentation refers to a Grafana 7.1 feature.
2 changes: 2 additions & 0 deletions packages/grafana-data/src/transformations/transformers.ts
@@ -12,6 +12,7 @@ import { seriesToRowsTransformer } from './transformers/seriesToRows';
import { renameFieldsTransformer } from './transformers/rename';
import { labelsToFieldsTransformer } from './transformers/labelsToFields';
import { ensureColumnsTransformer } from './transformers/ensureColumns';
import { groupByTransformer } from './transformers/groupBy';
import { mergeTransformer } from './transformers/merge';

export const standardTransformers = {
@@ -30,5 +31,6 @@ export const standardTransformers = {
renameFieldsTransformer,
labelsToFieldsTransformer,
ensureColumnsTransformer,
groupByTransformer,
mergeTransformer,
};
257 changes: 257 additions & 0 deletions packages/grafana-data/src/transformations/transformers/groupBy.test.ts
@@ -0,0 +1,257 @@
import { toDataFrame } from '../../dataframe/processDataFrame';
import { groupByTransformer, GroupByTransformerOptions, GroupByOperationID } from './groupBy';
import { mockTransformationsRegistry } from '../../utils/tests/mockTransformationsRegistry';
import { transformDataFrame } from '../transformDataFrame';
import { Field, FieldType } from '../../types';
import { DataTransformerID } from './ids';
import { ArrayVector } from '../../vector';
import { ReducerID } from '../fieldReducer';
import { DataTransformerConfig } from '@grafana/data';

describe('GroupBy transformer', () => {
beforeAll(() => {
mockTransformationsRegistry([groupByTransformer]);
});

it('should not apply transformation if config is missing group by fields', () => {
const testSeries = toDataFrame({
name: 'A',
fields: [
{ name: 'time', type: FieldType.time, values: [3000, 4000, 5000, 6000, 7000, 8000] },
{ name: 'message', type: FieldType.string, values: ['one', 'two', 'two', 'three', 'three', 'three'] },
{ name: 'values', type: FieldType.string, values: [1, 2, 2, 3, 3, 3] },
],
});

const cfg: DataTransformerConfig<GroupByTransformerOptions> = {
id: DataTransformerID.groupBy,
options: {
fields: {
message: {
operation: GroupByOperationID.aggregate,
aggregations: [ReducerID.count],
},
},
},
};

const result = transformDataFrame([cfg], [testSeries]);
expect(result[0]).toBe(testSeries);
});

it('should group values by message', () => {
const testSeries = toDataFrame({
name: 'A',
fields: [
{ name: 'time', type: FieldType.time, values: [3000, 4000, 5000, 6000, 7000, 8000] },
{ name: 'message', type: FieldType.string, values: ['one', 'two', 'two', 'three', 'three', 'three'] },
{ name: 'values', type: FieldType.string, values: [1, 2, 2, 3, 3, 3] },
],
});

const cfg: DataTransformerConfig<GroupByTransformerOptions> = {
id: DataTransformerID.groupBy,
options: {
fields: {
message: {
operation: GroupByOperationID.groupBy,
aggregations: [],
},
},
},
};

const result = transformDataFrame([cfg], [testSeries]);

const expected: Field[] = [
{
name: 'message',
type: FieldType.string,
values: new ArrayVector(['one', 'two', 'three']),
config: {},
},
];

expect(result[0].fields).toEqual(expected);
});

it('should group values by message and summarize values', () => {
const testSeries = toDataFrame({
name: 'A',
fields: [
{ name: 'time', type: FieldType.time, values: [3000, 4000, 5000, 6000, 7000, 8000] },
{ name: 'message', type: FieldType.string, values: ['one', 'two', 'two', 'three', 'three', 'three'] },
{ name: 'values', type: FieldType.string, values: [1, 2, 2, 3, 3, 3] },
],
});

const cfg: DataTransformerConfig<GroupByTransformerOptions> = {
id: DataTransformerID.groupBy,
options: {
fields: {
message: {
operation: GroupByOperationID.groupBy,
aggregations: [],
},
values: {
operation: GroupByOperationID.aggregate,
aggregations: [ReducerID.sum],
},
},
},
};

const result = transformDataFrame([cfg], [testSeries]);
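// Expected sums per group: 'one' -> 1, 'two' -> 2 + 2 = 4, 'three' -> 3 + 3 + 3 = 9.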

const expected: Field[] = [
{
name: 'message',
type: FieldType.string,
values: new ArrayVector(['one', 'two', 'three']),
config: {},
},
{
name: 'values (sum)',
type: FieldType.number,
values: new ArrayVector([1, 4, 9]),
config: {},
},
];

expect(result[0].fields).toEqual(expected);
});

it('should group by and compute a few calculations for each group of values', () => {
const testSeries = toDataFrame({
name: 'A',
fields: [
{ name: 'time', type: FieldType.time, values: [3000, 4000, 5000, 6000, 7000, 8000] },
{ name: 'message', type: FieldType.string, values: ['one', 'two', 'two', 'three', 'three', 'three'] },
{ name: 'values', type: FieldType.string, values: [1, 2, 2, 3, 3, 3] },
],
});

const cfg: DataTransformerConfig<GroupByTransformerOptions> = {
id: DataTransformerID.groupBy,
options: {
fields: {
message: {
operation: GroupByOperationID.groupBy,
aggregations: [],
},
time: {
operation: GroupByOperationID.aggregate,
aggregations: [ReducerID.count, ReducerID.last],
},
values: {
operation: GroupByOperationID.aggregate,
aggregations: [ReducerID.sum],
},
},
},
};

const result = transformDataFrame([cfg], [testSeries]);
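// Per group ('one', 'two', 'three'): row counts are 1, 2, 3; last times 3000, 5000, 8000; value sums 1, 4, 9.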

const expected: Field[] = [
{
name: 'message',
type: FieldType.string,
values: new ArrayVector(['one', 'two', 'three']),
config: {},
},
{
name: 'time (count)',
type: FieldType.number,
values: new ArrayVector([1, 2, 3]),
config: {},
},
{
name: 'time (last)',
type: FieldType.time,
values: new ArrayVector([3000, 5000, 8000]),
config: {},
},
{
name: 'values (sum)',
type: FieldType.number,
values: new ArrayVector([1, 4, 9]),
config: {},
},
];

expect(result[0].fields).toEqual(expected);
});

it('should group values in data frames individually', () => {
const testSeries = [
toDataFrame({
name: 'A',
fields: [
{ name: 'time', type: FieldType.time, values: [3000, 4000, 5000, 6000, 7000, 8000] },
{ name: 'message', type: FieldType.string, values: ['one', 'two', 'two', 'three', 'three', 'three'] },
{ name: 'values', type: FieldType.string, values: [1, 2, 2, 3, 3, 3] },
],
}),
toDataFrame({
name: 'B',
fields: [
{ name: 'time', type: FieldType.time, values: [3000, 4000, 5000, 6000, 7000, 8000] },
{ name: 'message', type: FieldType.string, values: ['one', 'two', 'two', 'three', 'three', 'three'] },
{ name: 'values', type: FieldType.string, values: [0, 2, 5, 3, 3, 2] },
],
}),
];

const cfg: DataTransformerConfig<GroupByTransformerOptions> = {
id: DataTransformerID.groupBy,
options: {
fields: {
message: {
operation: GroupByOperationID.groupBy,
aggregations: [],
},
values: {
operation: GroupByOperationID.aggregate,
aggregations: [ReducerID.sum],
},
},
},
};

const result = transformDataFrame([cfg], testSeries);

const expectedA: Field[] = [
{
name: 'message',
type: FieldType.string,
values: new ArrayVector(['one', 'two', 'three']),
config: {},
},
{
name: 'values (sum)',
type: FieldType.number,
values: new ArrayVector([1, 4, 9]),
config: {},
},
];

const expectedB: Field[] = [
{
name: 'message',
type: FieldType.string,
values: new ArrayVector(['one', 'two', 'three']),
config: {},
},
{
name: 'values (sum)',
type: FieldType.number,
values: new ArrayVector([0, 7, 8]),
config: {},
},
];

expect(result[0].fields).toEqual(expectedA);
expect(result[1].fields).toEqual(expectedB);
});
});