Merge branch 'master' into task/match-fleet

kibanamachine authored Feb 22, 2021
2 parents d759b33 + 4511fe5 commit afc6278
Showing 168 changed files with 4,641 additions and 1,756 deletions.
2 changes: 2 additions & 0 deletions docs/api/alerts/find.asciidoc
@@ -49,6 +49,8 @@ Retrieve a paginated set of alerts based on condition.
NOTE: As alerts change in {kib}, the results on each page of the response also
change. Use the find API for traditional paginated results, but avoid using it to export large amounts of data.

NOTE: Alert `params` are stored as {ref}/flattened.html[flattened] and analyzed as `keyword`.
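
For example, because `params` is a flattened field, a find request can filter on arbitrary nested keys even though they are never declared in the mappings. A minimal sketch, assuming the find API's standard `filter` query parameter:

```sh
# Filter alerts on a nested params key with KQL; `params.foo` needs no
# explicit mapping because `params` is flattened and analyzed as keyword.
curl -X GET "localhost:5601/api/alerts/_find?filter=alert.attributes.params.foo:bar"
```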

[[alerts-api-find-request-codes]]
==== Response code

8 changes: 6 additions & 2 deletions docs/user/alerting/defining-alerts.asciidoc
@@ -22,12 +22,16 @@ image::images/alert-flyout-sections.png[The three sections of an alert definitio
All alerts share the following four properties:

[role="screenshot"]
image::images/alert-flyout-general-details.png[alt='All alerts have name, tags, check every, and notify every properties in common']
image::images/alert-flyout-general-details.png[alt='All alerts have name, tags, check every, and notify properties in common']

Name:: The name of the alert. This name does not have to be unique, but it can be referenced in actions and appears in the searchable alert listing in the management UI. A distinctive name can help identify and find an alert.
Tags:: A list of tag names that can be applied to an alert. Tags can help you organize and find alerts, because tags appear in the alert listing in the management UI, which is searchable by tag.
Check every:: This value determines how frequently the alert conditions below are checked. Note that the timing of background alert checks is not guaranteed, particularly for intervals of less than 10 seconds. See <<alerting-production-considerations>> for more information.
Notify every:: This value limits how often actions are repeated when an alert instance remains active across alert checks. See <<alerting-concepts-suppressing-duplicate-notifications>> for more information.
Notify:: This value limits how often actions are repeated when an alert instance remains active across alert checks. See <<alerting-concepts-suppressing-duplicate-notifications>> for more information. +
- **Only on status change**: Actions are not repeated when an alert instance remains active across checks. Actions run only when the alert status changes.
- **Every time alert is active**: Actions are repeated when an alert instance remains active across checks.
- **On a custom action interval**: Actions are suppressed for the throttle interval, but repeat when an alert instance remains active across checks for a duration longer than the throttle interval.


[float]
[[defining-alerts-type-conditions]]
Binary file modified docs/user/alerting/images/alert-flyout-alert-conditions.png
Binary file modified docs/user/alerting/images/alert-flyout-alert-type-selection.png
Binary file modified docs/user/alerting/images/alert-flyout-general-details.png
1 change: 1 addition & 0 deletions packages/kbn-ui-shared-deps/package.json
@@ -9,6 +9,7 @@
"kbn:watch": "node scripts/build --dev --watch"
},
"dependencies": {
"@kbn/analytics": "link:../kbn-analytics",
"@kbn/i18n": "link:../kbn-i18n",
"@kbn/monaco": "link:../kbn-monaco"
},
11 changes: 11 additions & 0 deletions src/core/server/logging/README.md
@@ -12,6 +12,7 @@
- [Configuration](#configuration)
- [Usage](#usage)
- [Logging config migration](#logging-config-migration)
- [Logging configuration via CLI](#logging-configuration-via-cli)
- [Log record format changes](#log-record-format-changes)

The way logging works in Kibana is inspired by `log4j 2` logging framework used by [Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html#logging).
@@ -527,6 +528,16 @@ and you can enable them by adjusting the minimum required [logging level](#log-l
#### logging.filter
TBD

### Logging configuration via CLI

| Legacy logging | Kibana Platform logging |
| -------------- | ----------------------- |
| `--verbose`    | `--logging.root.level=debug --logging.root.appenders[0]=default --logging.root.appenders[1]=console` |
| `--quiet`      | `--logging.root.level=error --logging.root.appenders[0]=default --logging.root.appenders[1]=console` |
| `--silent`     | `--logging.root.level=off` |

*Note: the `default` appender has to be passed until the legacy logging system is removed in v8.0.*
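
For instance, the first table row maps the legacy `--verbose` flag onto the new config keys; a sketch of the two equivalent invocations:

```sh
# Legacy:
bin/kibana --verbose

# Kibana Platform logging equivalent (the `default` appender must be passed
# explicitly until the legacy logging system is removed in v8.0):
bin/kibana --logging.root.level=debug \
  --logging.root.appenders[0]=default \
  --logging.root.appenders[1]=console
```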

### Log record format changes

| Parameter | Platform log record in **pattern** format | Legacy Platform log record **text** format |
@@ -722,7 +722,7 @@ describe('DocumentMigrator', () => {
});
});

it('logs the document and transform that failed', () => {
it('logs the original error and throws a transform error if a document transform fails', () => {
const log = mockLogger;
const migrator = new DocumentMigrator({
...testOpts(),
@@ -747,10 +747,13 @@
migrator.migrate(_.cloneDeep(failedDoc));
expect('Did not throw').toEqual('But it should have!');
} catch (error) {
expect(error.message).toMatch(/Dang diggity!/);
const warning = loggingSystemMock.collect(mockLoggerFactory).warn[0][0];
expect(warning).toContain(JSON.stringify(failedDoc));
expect(warning).toContain('dog:1.2.3');
expect(error.message).toMatchInlineSnapshot(`
"Failed to transform document smelly. Transform: dog:1.2.3
Doc: {\\"id\\":\\"smelly\\",\\"type\\":\\"dog\\",\\"attributes\\":{},\\"migrationVersion\\":{}}"
`);
expect(loggingSystemMock.collect(mockLoggerFactory).error[0][0]).toMatchInlineSnapshot(
`[Error: Dang diggity!]`
);
}
});

@@ -779,7 +782,7 @@ describe('DocumentMigrator', () => {
};
migrator.migrate(doc);
expect(loggingSystemMock.collect(mockLoggerFactory).info[0][0]).toEqual(logTestMsg);
expect(loggingSystemMock.collect(mockLoggerFactory).warn[1][0]).toEqual(logTestMsg);
expect(loggingSystemMock.collect(mockLoggerFactory).warn[0][0]).toEqual(logTestMsg);
});

test('extracts the latest migration version info', () => {
@@ -678,10 +678,11 @@ function wrapWithTry(
} catch (error) {
const failedTransform = `${type.name}:${version}`;
const failedDoc = JSON.stringify(doc);
log.warn(
log.error(error);

throw new Error(
`Failed to transform document ${doc?.id}. Transform: ${failedTransform}\nDoc: ${failedDoc}`
);
throw error;
}
};
}
@@ -58,12 +58,12 @@ describe('migrateRawDocs', () => {
expect(transform).toHaveBeenNthCalledWith(2, obj2);
});

test('passes invalid docs through untouched and logs error', async () => {
test('throws when encountering a corrupt saved object document', async () => {
const logger = createSavedObjectsMigrationLoggerMock();
const transform = jest.fn<any, any>((doc: any) => [
set(_.cloneDeep(doc), 'attributes.name', 'TADA'),
]);
const result = await migrateRawDocs(
const result = migrateRawDocs(
new SavedObjectsSerializer(new SavedObjectTypeRegistry()),
transform,
[
@@ -73,25 +73,11 @@
logger
);

expect(result).toEqual([
{ _id: 'foo:b', _source: { type: 'a', a: { name: 'AAA' } } },
{
_id: 'c:d',
_source: { type: 'c', c: { name: 'TADA' }, migrationVersion: {}, references: [] },
},
]);

const obj2 = {
id: 'd',
type: 'c',
attributes: { name: 'DDD' },
migrationVersion: {},
references: [],
};
expect(transform).toHaveBeenCalledTimes(1);
expect(transform).toHaveBeenCalledWith(obj2);
expect(result).rejects.toMatchInlineSnapshot(
`[Error: Unable to migrate the corrupt saved object document with _id: 'foo:b'.]`
);

expect(logger.error).toBeCalledTimes(1);
expect(transform).toHaveBeenCalledTimes(0);
});

test('handles when one document is transformed into multiple documents', async () => {
25 changes: 19 additions & 6 deletions src/core/server/saved_objects/migrations/core/migrate_raw_docs.ts
@@ -18,6 +18,23 @@ import {
import { MigrateAndConvertFn } from './document_migrator';
import { SavedObjectsMigrationLogger } from '.';

/**
* Error thrown when saved object migrations encounter a corrupt saved object.
* Corrupt saved objects cannot be serialized because:
* - there's no `[type]` property which contains the type attributes
* - the type or namespace in the _id doesn't match the `type` or `namespace`
* properties
*/
export class CorruptSavedObjectError extends Error {
constructor(public readonly rawId: string) {
super(`Unable to migrate the corrupt saved object document with _id: '${rawId}'.`);

// Set the prototype explicitly, see:
// https://github.com/Microsoft/TypeScript/wiki/Breaking-Changes#extending-built-ins-like-error-array-and-map-may-no-longer-work
Object.setPrototypeOf(this, CorruptSavedObjectError.prototype);
}
}

/**
* Applies the specified migration function to every saved object document in the list
* of raw docs. Any raw docs that are not valid saved objects will cause a CorruptSavedObjectError to be thrown.
@@ -35,7 +52,7 @@ export async function migrateRawDocs(
const migrateDocWithoutBlocking = transformNonBlocking(migrateDoc);
const processedDocs = [];
for (const raw of rawDocs) {
const options = { namespaceTreatment: 'lax' as 'lax' };
const options = { namespaceTreatment: 'lax' as const };
if (serializer.isRawSavedObject(raw, options)) {
const savedObject = serializer.rawToSavedObject(raw, options);
savedObject.migrationVersion = savedObject.migrationVersion || {};
@@ -48,11 +65,7 @@
)
);
} else {
log.error(
`Error: Unable to migrate the corrupt Saved Object document ${raw._id}. To prevent Kibana from performing a migration on every restart, please delete or fix this document by ensuring that the namespace and type in the document's id matches the values in the namespace and type fields.`,
{ rawDocument: raw }
);
processedDocs.push(raw);
throw new CorruptSavedObjectError(raw._id);
}
}
return processedDocs;
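
A side note on the `Object.setPrototypeOf` call in `CorruptSavedObjectError` above: it is what keeps `instanceof` checks working when TypeScript compiles down to ES5, and the state-action machine later in this diff relies on exactly such a check. A minimal sketch:

```ts
try {
  throw new CorruptSavedObjectError('foo:bar');
} catch (e) {
  // Without the setPrototypeOf call in the constructor, an ES5 build would
  // leave `e` with Error.prototype and this check would evaluate to false.
  if (e instanceof CorruptSavedObjectError) {
    console.log(e.rawId); // 'foo:bar'
  }
}
```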
@@ -330,7 +330,7 @@ describe('KibanaMigrator', () => {
const migrator = new KibanaMigrator(options);
migrator.prepareMigrations();
await expect(migrator.runMigrations()).rejects.toMatchInlineSnapshot(`
[Error: Unable to complete saved object migrations for the [.my-index] index. Please check the health of your Elasticsearch cluster and try again. Error: Reindex failed with the following error:
[Error: Unable to complete saved object migrations for the [.my-index] index. Error: Reindex failed with the following error:
{"_tag":"Some","value":{"type":"elatsicsearch_exception","reason":"task failed with an error"}}]
`);
expect(loggingSystemMock.collect(options.logger).error[0][0]).toMatchInlineSnapshot(`
@@ -314,20 +314,25 @@ describe('migrationsStateActionMachine', () => {
next: () => {
throw new ResponseError(
elasticsearchClientMock.createApiResponse({
body: { error: { type: 'snapshot_in_progress_exception', reason: 'error reason' } },
body: {
error: {
type: 'snapshot_in_progress_exception',
reason: 'Cannot delete indices that are being snapshotted',
},
},
})
);
},
})
).rejects.toMatchInlineSnapshot(
`[Error: Unable to complete saved object migrations for the [.my-so-index] index. Please check the health of your Elasticsearch cluster and try again. ResponseError: snapshot_in_progress_exception]`
`[Error: Unable to complete saved object migrations for the [.my-so-index] index. Please check the health of your Elasticsearch cluster and try again. Error: [snapshot_in_progress_exception]: Cannot delete indices that are being snapshotted]`
);
expect(loggingSystemMock.collect(mockLogger)).toMatchInlineSnapshot(`
Object {
"debug": Array [],
"error": Array [
Array [
"[.my-so-index] [snapshot_in_progress_exception]: error reason",
"[.my-so-index] [snapshot_in_progress_exception]: Cannot delete indices that are being snapshotted",
],
Array [
"[.my-so-index] migration failed, dumping execution log:",
Expand All @@ -352,7 +357,7 @@ describe('migrationsStateActionMachine', () => {
},
})
).rejects.toMatchInlineSnapshot(
`[Error: Unable to complete saved object migrations for the [.my-so-index] index. Please check the health of your Elasticsearch cluster and try again. Error: this action throws]`
`[Error: Unable to complete saved object migrations for the [.my-so-index] index. Error: this action throws]`
);
expect(loggingSystemMock.collect(mockLogger)).toMatchInlineSnapshot(`
Object {
@@ -10,6 +10,7 @@ import { errors as EsErrors } from '@elastic/elasticsearch';
import * as Option from 'fp-ts/lib/Option';
import { performance } from 'perf_hooks';
import { Logger, LogMeta } from '../../logging';
import { CorruptSavedObjectError } from '../migrations/core/migrate_raw_docs';
import { Model, Next, stateActionMachine } from './state_action_machine';
import { State } from './types';

@@ -153,12 +154,27 @@ export async function migrationStateActionMachine({
logger.error(
logMessagePrefix + `[${e.body?.error?.type}]: ${e.body?.error?.reason ?? e.message}`
);
dumpExecutionLog(logger, logMessagePrefix, executionLog);
throw new Error(
`Unable to complete saved object migrations for the [${
initialState.indexPrefix
}] index. Please check the health of your Elasticsearch cluster and try again. Error: [${
e.body?.error?.type
}]: ${e.body?.error?.reason ?? e.message}`
);
} else {
logger.error(e);

dumpExecutionLog(logger, logMessagePrefix, executionLog);
if (e instanceof CorruptSavedObjectError) {
throw new Error(
`${e.message} To allow migrations to proceed, please delete this document from the [${initialState.indexPrefix}_${initialState.kibanaVersion}_001] index.`
);
}

throw new Error(
`Unable to complete saved object migrations for the [${initialState.indexPrefix}] index. ${e}`
);
}
dumpExecutionLog(logger, logMessagePrefix, executionLog);
throw new Error(
`Unable to complete saved object migrations for the [${initialState.indexPrefix}] index. Please check the health of your Elasticsearch cluster and try again. ${e}`
);
}
}
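
Untangling the interleaved hunk above: the failure path now always dumps the execution log first, then tailors the thrown error. Elasticsearch `ResponseError`s keep the cluster-health hint, a `CorruptSavedObjectError` names the document to delete, and anything else is wrapped as-is. A condensed sketch of the three branches (not the literal source; `EsErrors` and `CorruptSavedObjectError` are the imports shown in this diff):

```ts
function wrapMigrationError(e: unknown, indexPrefix: string, kibanaVersion: string): Error {
  if (e instanceof EsErrors.ResponseError) {
    return new Error(
      `Unable to complete saved object migrations for the [${indexPrefix}] index. ` +
        `Please check the health of your Elasticsearch cluster and try again. ` +
        `Error: [${e.body?.error?.type}]: ${e.body?.error?.reason ?? e.message}`
    );
  }
  if (e instanceof CorruptSavedObjectError) {
    return new Error(
      `${e.message} To allow migrations to proceed, please delete this document from the ` +
        `[${indexPrefix}_${kibanaVersion}_001] index.`
    );
  }
  return new Error(`Unable to complete saved object migrations for the [${indexPrefix}] index. ${e}`);
}
```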
@@ -76,6 +76,9 @@ const mockMappings = {
},
},
},
params: {
type: 'flattened',
},
},
},
hiddenType: {
@@ -168,6 +171,12 @@ describe('Filter Utils', () => {
).toEqual(esKuery.fromKueryExpression('alert.actions:{ actionTypeId: ".server-log" }'));
});

test('Assemble filter for flattened fields', () => {
expect(
validateConvertFilterToKueryNode(['alert'], 'alert.attributes.params.foo:bar', mockMappings)
).toEqual(esKuery.fromKueryExpression('alert.params.foo:bar'));
});

test('Lets make sure that we are throwing an exception if we get an error', () => {
expect(() => {
validateConvertFilterToKueryNode(
36 changes: 23 additions & 13 deletions src/core/server/saved_objects/service/lib/filter_utils.ts
@@ -207,23 +207,33 @@ export const hasFilterKeyError = (

export const fieldDefined = (indexMappings: IndexMapping, key: string): boolean => {
const mappingKey = 'properties.' + key.split('.').join('.properties.');
const potentialKey = get(indexMappings, mappingKey);
if (get(indexMappings, mappingKey) != null) {
return true;
}

// If the `mappingKey` does not match a valid path, before returning null,
// If the `mappingKey` does not match a valid path, before returning false,
// we want to check and see if the intended path was for a multi-field
// such as `x.attributes.field.text` where `field` is mapped to both text
// and keyword
if (potentialKey == null) {
const propertiesAttribute = 'properties';
const indexOfLastProperties = mappingKey.lastIndexOf(propertiesAttribute);
const fieldMapping = mappingKey.substr(0, indexOfLastProperties);
const fieldType = mappingKey.substr(
mappingKey.lastIndexOf(propertiesAttribute) + `${propertiesAttribute}.`.length
);
const mapping = `${fieldMapping}fields.${fieldType}`;

return get(indexMappings, mapping) != null;
} else {
const propertiesAttribute = 'properties';
const indexOfLastProperties = mappingKey.lastIndexOf(propertiesAttribute);
const fieldMapping = mappingKey.substr(0, indexOfLastProperties);
const fieldType = mappingKey.substr(
mappingKey.lastIndexOf(propertiesAttribute) + `${propertiesAttribute}.`.length
);
const mapping = `${fieldMapping}fields.${fieldType}`;
if (get(indexMappings, mapping) != null) {
return true;
}

// If the path is for a flattened type field, we'll assume the mappings are defined.
const keys = key.split('.');
for (let i = 0; i < keys.length; i++) {
const path = `properties.${keys.slice(0, i + 1).join('.properties.')}`;
if (get(indexMappings, path)?.type === 'flattened') {
return true;
}
}

return false;
};
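
To make the new flattened check concrete: for a key such as `params.foo`, the loop walks every path prefix through the mappings and returns true as soon as a prefix is typed `flattened`, since flattened fields accept arbitrary sub-keys. A standalone sketch (mappings abbreviated from the test file above):

```ts
import { get } from 'lodash';

// Abbreviated mappings: `params` is a flattened field.
const indexMappings = { properties: { params: { type: 'flattened' } } };

const key = 'params.foo';
const keys = key.split('.');
for (let i = 0; i < keys.length; i++) {
  const path = `properties.${keys.slice(0, i + 1).join('.properties.')}`;
  // i = 0 yields `properties.params`, whose type is 'flattened', so the
  // deeper key `params.foo` is treated as defined.
  if (get(indexMappings, path)?.type === 'flattened') {
    console.log(`${key} is covered by the flattened field at ${path}`);
    break;
  }
}
```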
3 changes: 1 addition & 2 deletions src/core/server/ui_settings/integration_tests/routes.test.ts
Original file line number Diff line number Diff line change
Expand Up @@ -9,8 +9,7 @@
import { schema } from '@kbn/config-schema';
import * as kbnTestServer from '../../../test_helpers/kbn_server';

// FLAKY: https://github.com/elastic/kibana/issues/89191
describe.skip('ui settings service', () => {
describe('ui settings service', () => {
describe('routes', () => {
let root: ReturnType<typeof kbnTestServer.createRoot>;
beforeAll(async () => {
@@ -67,6 +67,8 @@ export async function mountManagementSection(
chrome.setBadge(readOnlyBadge);
}

chrome.docTitle.change(title);

ReactDOM.render(
<I18nProvider>
<Router history={params.history}>
@@ -90,6 +92,7 @@
params.element
);
return () => {
chrome.docTitle.reset();
ReactDOM.unmountComponentAtNode(params.element);
};
}
@@ -99,6 +99,13 @@ describe('DiscoverFieldSearch', () => {
expect(badge.text()).toEqual('0');
});

test('missing switch appears with new fields api', () => {
const component = mountComponent({ ...defaultProps, useNewFieldsApi: true });
const btn = findTestSubject(component, 'toggleFieldFilterButton');
btn.simulate('click');
expect(findTestSubject(component, 'missingSwitch').exists()).toBeTruthy();
});

test('change in filters triggers onChange', () => {
const onChange = jest.fn();
const component = mountComponent({ ...defaultProps, ...{ onChange } });