From ec242661ee02245a98e4b07614062483b3e268b2 Mon Sep 17 00:00:00 2001 From: "opensearch-trigger-bot[bot]" <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com> Date: Tue, 21 Nov 2023 17:01:51 -0800 Subject: [PATCH] Added 2.11.1 release notes. (#1306) (#1307) * Added 2.11.1 release notes. * Added 2.11.1 release notes. --------- (cherry picked from commit 06c1b8a1438bbe109f2c49ac153dbda46e173845) Signed-off-by: AWSHurneyt Signed-off-by: github-actions[bot] Co-authored-by: github-actions[bot] fix workflow security tests in alerting (#1310) (#1311) Signed-off-by: Subhobrata Dey Increment version to 2.12.0-SNAPSHOT (#1239) Signed-off-by: opensearch-ci-bot Co-authored-by: opensearch-ci-bot [Backport 2.x] Reference get monitor and search monitor action / request / responses from common-utils (#1315) * Use get monitor action / req / resp from common-utils Signed-off-by: Tyler Ohlsen * Dummy commit to retrigger Signed-off-by: Tyler Ohlsen --------- Signed-off-by: Tyler Ohlsen optimize doc-level monitor execution workflow for datastreams (#1302) (#1322) Signed-off-by: Subhobrata Dey Update to Gradle 8.5 (#1369) (#1371) Signed-off-by: Andriy Redko [Backport 2.x] Inject namedWriteableRegistry during ser/deser of SearchMonitorAction (#1382) (#1384) * Inject namedWriteableRegistry during ser/deser of SearchMonitorAction (#1382) Signed-off-by: Tyler Ohlsen * remove bin files Signed-off-by: Tyler Ohlsen * remove core bin Signed-off-by: Tyler Ohlsen --------- Signed-off-by: Tyler Ohlsen Don't attempt to parse workflow if it doesn't exist (#1346) (#1359) (cherry picked from commit 733fd4e8b307251d4cd1d7c1a8509de346fd3190) Signed-off-by: Chase Engelbrecht Signed-off-by: github-actions[bot] Co-authored-by: github-actions[bot] Set docData to empty string if actual is null (#1325) (#1334) (cherry picked from commit 008e076e52c4b549f0260554fc2c6fd04269a896) Signed-off-by: Chase Engelbrecht Signed-off-by: github-actions[bot] Co-authored-by: github-actions[bot] removed default admin credentials for alerting (#1399) (#1400) (cherry picked from commit 3c50f7d001bf645e842e95201c176cd6198e7996) Signed-off-by: github-actions[bot] Co-authored-by: github-actions[bot] Co-authored-by: Dennis Toepker ipaddress lib upgrade as part of cve fix (#1397) (#1407) (cherry picked from commit 8d5906019222b2043ae73d0d8edda54a6fc103c5) Signed-off-by: Riya Saxena Signed-off-by: github-actions[bot] Co-authored-by: github-actions[bot] Bulk index findings and sequentially invoke auto-correlations (#1355) (#1410) * Bulk index findings and sequentially invoke auto-correlations * Bulk index findings in batches of 10000 and make it configurable * Addressing review comments * Add integ tests to test bulk index findings * Fix ktlint formatting --------- (cherry picked from commit b56196557b539b2f6069dc407f301cd9c15771ea) Signed-off-by: Megha Goyal Signed-off-by: github-actions[bot] Co-authored-by: github-actions[bot] Add 2.12 release notes (#1408) (#1413) * Add 2.12 release notes * Fix release notes PR * Add 2 more PRs --------- (cherry picked from commit b10eaad8f7f25c05fde540d2f4ed97517435d59c) Signed-off-by: Chase Engelbrecht Signed-off-by: github-actions[bot] Co-authored-by: github-actions[bot] [Backport 2.x] Implemented cross-cluster monitor support (#1404) (#1412) * Implemented cross-cluster monitor support (#1404) * Updated alert mappings to accommodate cross-cluster cluster metrics monitors. Signed-off-by: AWSHurneyt * Implemented support for cross-cluster cluster metrics monitors. 
Implemented GetRemoteIndexes API to populate the frontend UI with details regarding the remote clusters, and indexes. Signed-off-by: AWSHurneyt * Fixed a writeable test after changing QueryLevelTriggerRunResult from a data class to an open class for inheritability. Signed-off-by: AWSHurneyt * Fixed ktlint errors. Signed-off-by: AWSHurneyt * Removed changes to IndexUtils as they're only needed by doc monitors. Signed-off-by: AWSHurneyt --------- Signed-off-by: AWSHurneyt (cherry picked from commit ea36996616eb91b2547b6a64274bbd9e50b1af5d) Signed-off-by: AWSHurneyt * Fixed a test. Signed-off-by: AWSHurneyt --------- Signed-off-by: AWSHurneyt Add publishToMavenLocal in build.sh (#1418) (#1419) (cherry picked from commit 4cdc1d14e0684bf31e7ef7e9a0e0ef027cbf05f6) Signed-off-by: github-actions[bot] Co-authored-by: github-actions[bot] fix for MapperException[the [enabled] parameter can't be updated for the object mapping [metadata.source_to_query_index_mapping] (#1432) (#1434) Signed-off-by: github-actions[bot] backport PRs #1445, #1430, #1441, #1435 to 2.x (#1452) * Add jvm aware setting and max num docs settings for batching docs for percolate queries (#1435) * add jvm aware and max docs settings for batching docs for percolate queries Signed-off-by: Surya Sashank Nistala * fix stats logging Signed-off-by: Surya Sashank Nistala * add queryfieldnames field in findings mapping Signed-off-by: Surya Sashank Nistala --------- Signed-off-by: Surya Sashank Nistala * optimize to fetch only fields relevant to doc level queries in doc level monitor instead of entire _source for each doc (#1441) * optimize to fetch only fields relevant to doc level queries in doc level monitor Signed-off-by: Surya Sashank Nistala * fix test for settings check Signed-off-by: Surya Sashank Nistala * fix ktlint Signed-off-by: Surya Sashank Nistala --------- Signed-off-by: Surya Sashank Nistala * clean up doc level queries on dry run (#1430) Signed-off-by: Joanne Wang * optimize sequence number calculation and reduce search requests in doc level monitor execution (#1445) * optimize sequence number calculation and reduce search requests by n where n is number of shards being queried in the execution Signed-off-by: Surya Sashank Nistala * fix tests Signed-off-by: Surya Sashank Nistala * optimize check indices and execute to query only write index of aliases and datastreams during monitor creation Signed-off-by: Surya Sashank Nistala * fix test Signed-off-by: Surya Sashank Nistala * add javadoc Signed-off-by: Surya Sashank Nistala * add tests to verify seq_no calculation Signed-off-by: Surya Sashank Nistala --------- Signed-off-by: Surya Sashank Nistala --------- Signed-off-by: Surya Sashank Nistala Signed-off-by: Joanne Wang Co-authored-by: Joanne Wang [Backport 2.x] Add an _exists_ check to document level monitor queries (#1425) (#1456) * Add an _exists_ check to document level monitor queries (#1425) * clean up and add integ tests Signed-off-by: Joanne Wang * refactored out common method and renamed test Signed-off-by: Joanne Wang * remove _exists_ flag Signed-off-by: Joanne Wang --------- Signed-off-by: Joanne Wang * fix integ test Signed-off-by: Joanne Wang --------- Signed-off-by: Joanne Wang add distributed locking to jobs in alerting (#1403) (#1458) Signed-off-by: Subhobrata Dey --- .github/workflows/security-test-workflow.yml | 10 +- .github/workflows/test-workflow.yml | 4 +- DEVELOPER_GUIDE.md | 4 +- alerting/build.gradle | 2 +- .../org/opensearch/alerting/AlertService.kt | 21 +-
.../org/opensearch/alerting/AlertingPlugin.kt | 34 +- .../alerting/DocumentLevelMonitorRunner.kt | 662 ++-- .../org/opensearch/alerting/InputService.kt | 43 +- .../alerting/MonitorMetadataService.kt | 84 +- .../alerting/MonitorRunnerExecutionContext.kt | 13 +- .../alerting/MonitorRunnerService.kt | 83 +- .../alerting/QueryLevelMonitorRunner.kt | 17 +- .../org/opensearch/alerting/TriggerService.kt | 50 + .../alerting/action/GetMonitorAction.kt | 15 - .../alerting/action/GetMonitorRequest.kt | 56 - .../alerting/action/GetMonitorResponse.kt | 133 - .../alerting/action/GetRemoteIndexesAction.kt | 15 + ...rRequest.kt => GetRemoteIndexesRequest.kt} | 25 +- .../action/GetRemoteIndexesResponse.kt | 135 + .../alerting/action/SearchMonitorAction.kt | 16 - .../model/ClusterMetricsTriggerRunResult.kt | 110 + .../model/DocumentExecutionContext.kt | 14 - .../alerting/model/IndexExecutionContext.kt | 19 + .../model/QueryLevelTriggerRunResult.kt | 6 +- .../resthandler/RestGetMonitorAction.kt | 6 +- .../resthandler/RestGetRemoteIndexesAction.kt | 50 + .../resthandler/RestSearchMonitorAction.kt | 6 +- .../alerting/service/DeleteMonitorService.kt | 10 + .../alerting/settings/AlertingSettings.kt | 127 +- .../TransportAcknowledgeAlertAction.kt | 7 +- .../TransportDeleteWorkflowAction.kt | 35 +- .../transport/TransportGetFindingsAction.kt | 12 +- .../transport/TransportGetMonitorAction.kt | 30 +- .../TransportGetRemoteIndexesAction.kt | 193 ++ .../transport/TransportIndexMonitorAction.kt | 10 +- .../transport/TransportSearchMonitorAction.kt | 34 +- .../alerting/util/CrossClusterMonitorUtils.kt | 231 ++ .../alerting/util/DocLevelMonitorQueries.kt | 84 +- .../opensearch/alerting/util/IndexUtils.kt | 43 + .../workflow/CompositeWorkflowRunner.kt | 2 +- .../alerting/alerts/alert_mapping.json | 8 + .../alerting/alerts/finding_mapping.json | 3 + .../alerting/AlertingRestTestCase.kt | 155 +- .../alerting/DocumentMonitorRunnerIT.kt | 2804 ++++++++++++----- .../alerting/MonitorDataSourcesIT.kt | 205 +- .../alerting/MonitorRunnerServiceIT.kt | 2 +- .../alerting/action/GetMonitorActionTests.kt | 16 - .../alerting/action/GetMonitorRequestTests.kt | 60 - .../action/GetMonitorResponseTests.kt | 66 - .../action/SearchMonitorActionTests.kt | 17 - .../action/SearchMonitorRequestTests.kt | 33 - .../alerting/model/WriteableTests.kt | 5 +- .../alerting/resthandler/FindingsRestApiIT.kt | 27 + .../resthandler/SecureWorkflowRestApiIT.kt | 3 + .../alerting/resthandler/WorkflowRestApiIT.kt | 43 + .../transport/AlertingSingleNodeTestCase.kt | 5 +- build-tools/merged-coverage.gradle | 24 +- build-tools/opensearchplugin-coverage.gradle | 4 +- build-tools/pkgbuild.gradle | 14 +- build.gradle | 11 +- .../alerting/core/lock/LockModel.kt | 117 + .../alerting/core/lock/LockService.kt | 311 ++ .../opensearch-alerting-config-lock.json | 18 + gradle/wrapper/gradle-wrapper.jar | Bin 61574 -> 63721 bytes gradle/wrapper/gradle-wrapper.properties | 3 +- gradlew | 29 +- ...nsearch-alerting.release-notes-2.11.1.0.md | 15 + ...nsearch-alerting.release-notes-2.12.0.0.md | 27 + scripts/build.sh | 1 + 69 files changed, 4786 insertions(+), 1691 deletions(-) delete mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorAction.kt delete mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorRequest.kt delete mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorResponse.kt create mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesAction.kt rename 
alerting/src/main/kotlin/org/opensearch/alerting/action/{SearchMonitorRequest.kt => GetRemoteIndexesRequest.kt} (53%) create mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesResponse.kt delete mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/action/SearchMonitorAction.kt create mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/model/ClusterMetricsTriggerRunResult.kt delete mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/model/DocumentExecutionContext.kt create mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/model/IndexExecutionContext.kt create mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestGetRemoteIndexesAction.kt create mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetRemoteIndexesAction.kt create mode 100644 alerting/src/main/kotlin/org/opensearch/alerting/util/CrossClusterMonitorUtils.kt delete mode 100644 alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorActionTests.kt delete mode 100644 alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorRequestTests.kt delete mode 100644 alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorResponseTests.kt delete mode 100644 alerting/src/test/kotlin/org/opensearch/alerting/action/SearchMonitorActionTests.kt delete mode 100644 alerting/src/test/kotlin/org/opensearch/alerting/action/SearchMonitorRequestTests.kt create mode 100644 core/src/main/kotlin/org/opensearch/alerting/core/lock/LockModel.kt create mode 100644 core/src/main/kotlin/org/opensearch/alerting/core/lock/LockService.kt create mode 100644 core/src/main/resources/mappings/opensearch-alerting-config-lock.json create mode 100644 release-notes/opensearch-alerting.release-notes-2.11.1.0.md create mode 100644 release-notes/opensearch-alerting.release-notes-2.12.0.0.md diff --git a/.github/workflows/security-test-workflow.yml b/.github/workflows/security-test-workflow.yml index 127962210..a096f26a0 100644 --- a/.github/workflows/security-test-workflow.yml +++ b/.github/workflows/security-test-workflow.yml @@ -12,7 +12,7 @@ jobs: build: strategy: matrix: - java: [ 11, 17 ] + java: [ 11, 17, 21 ] # Job name name: Build and test Alerting # This job runs on Linux @@ -73,20 +73,20 @@ jobs: if: env.imagePresent == 'true' run: | cd .. - docker run -p 9200:9200 -d -p 9600:9600 -e "discovery.type=single-node" opensearch-alerting:test + docker run -p 9200:9200 -d -p 9600:9600 -e "OPENSEARCH_INITIAL_ADMIN_PASSWORD=myStrongPassword123!" -e "discovery.type=single-node" opensearch-alerting:test sleep 120 - name: Run Alerting Test for security enabled test cases if: env.imagePresent == 'true' run: | - cluster_running=`curl -XGET https://localhost:9200/_cat/plugins -u admin:admin --insecure` + cluster_running=`curl -XGET https://localhost:9200/_cat/plugins -u admin:myStrongPassword123! --insecure` echo $cluster_running - security=`curl -XGET https://localhost:9200/_cat/plugins -u admin:admin --insecure |grep opensearch-security|wc -l` + security=`curl -XGET https://localhost:9200/_cat/plugins -u admin:myStrongPassword123! 
--insecure |grep opensearch-security|wc -l` echo $security if [ $security -gt 0 ] then echo "Security plugin is available" - ./gradlew :alerting:integTest -Dtests.rest.cluster=localhost:9200 -Dtests.cluster=localhost:9200 -Dtests.clustername=docker-cluster -Dsecurity=true -Dhttps=true -Duser=admin -Dpassword=admin + ./gradlew :alerting:integTest -Dtests.rest.cluster=localhost:9200 -Dtests.cluster=localhost:9200 -Dtests.clustername=docker-cluster -Dsecurity=true -Dhttps=true -Duser=admin -Dpassword=myStrongPassword123! else echo "Security plugin is NOT available skipping this run as tests without security have already been run" fi diff --git a/.github/workflows/test-workflow.yml b/.github/workflows/test-workflow.yml index 2ef1196b9..ebc4c2cec 100644 --- a/.github/workflows/test-workflow.yml +++ b/.github/workflows/test-workflow.yml @@ -21,7 +21,7 @@ jobs: WORKING_DIR: ${{ matrix.working_directory }}. strategy: matrix: - java: [11, 17] + java: [11, 17, 21] # Job name name: Build Alerting with JDK ${{ matrix.java }} on Linux # This job runs on Linux @@ -69,7 +69,7 @@ jobs: WORKING_DIR: ${{ matrix.working_directory }}. strategy: matrix: - java: [11, 17] + java: [11, 17, 21] os: [ windows-latest, macos-latest ] include: - os: windows-latest diff --git a/DEVELOPER_GUIDE.md b/DEVELOPER_GUIDE.md index ec1a3404b..5342b19d8 100644 --- a/DEVELOPER_GUIDE.md +++ b/DEVELOPER_GUIDE.md @@ -76,9 +76,9 @@ When launching a cluster using one of the above commands, logs are placed in `al 1. Setup a local opensearch cluster with security plugin. - - `./gradlew :alerting:integTest -Dtests.rest.cluster=localhost:9200 -Dtests.cluster=localhost:9200 -Dtests.clustername=opensearch -Dhttps=true -Dsecurity=true -Duser=admin -Dpassword=admin` + - `./gradlew :alerting:integTest -Dtests.rest.cluster=localhost:9200 -Dtests.cluster=localhost:9200 -Dtests.clustername=opensearch -Dhttps=true -Dsecurity=true -Duser=admin -Dpassword=` - - `./gradlew :alerting:integTest -Dtests.rest.cluster=localhost:9200 -Dtests.cluster=localhost:9200 -Dtests.clustername=opensearch -Dhttps=true -Dsecurity=true -Duser=admin -Dpassword=admin --tests "org.opensearch.alerting.MonitorRunnerIT.test execute monitor returns search result"` + - `./gradlew :alerting:integTest -Dtests.rest.cluster=localhost:9200 -Dtests.cluster=localhost:9200 -Dtests.clustername=opensearch -Dhttps=true -Dsecurity=true -Duser=admin -Dpassword= --tests "org.opensearch.alerting.MonitorRunnerIT.test execute monitor returns search result"` #### Building from the IDE diff --git a/alerting/build.gradle b/alerting/build.gradle index 9cb2488be..d0f240193 100644 --- a/alerting/build.gradle +++ b/alerting/build.gradle @@ -108,7 +108,7 @@ dependencies { implementation "org.jetbrains:annotations:13.0" api project(":alerting-core") - implementation "com.github.seancfoley:ipaddress:5.3.3" + implementation "com.github.seancfoley:ipaddress:5.4.1" testImplementation "org.antlr:antlr4-runtime:${versions.antlr4}" testImplementation "org.jetbrains.kotlin:kotlin-test:${kotlin_version}" diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/AlertService.kt b/alerting/src/main/kotlin/org/opensearch/alerting/AlertService.kt index 05e35c1b7..e0dc4625e 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/AlertService.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/AlertService.kt @@ -20,6 +20,7 @@ import org.opensearch.action.support.WriteRequest import org.opensearch.alerting.alerts.AlertIndices import org.opensearch.alerting.model.ActionRunResult import 
org.opensearch.alerting.model.ChainedAlertTriggerRunResult +import org.opensearch.alerting.model.ClusterMetricsTriggerRunResult import org.opensearch.alerting.model.QueryLevelTriggerRunResult import org.opensearch.alerting.opensearchapi.firstFailureOrNull import org.opensearch.alerting.opensearchapi.retry @@ -190,6 +191,19 @@ class AlertService( ) } + // Including a list of triggered clusters for cluster metrics monitors + var triggeredClusters: MutableList? = null + if (result is ClusterMetricsTriggerRunResult) + result.clusterTriggerResults.forEach { + if (it.triggered) { + // Add an empty list if one isn't already present + if (triggeredClusters.isNullOrEmpty()) triggeredClusters = mutableListOf() + + // Add the cluster to the list of triggered clusters + triggeredClusters!!.add(it.cluster) + } + } + // Merge the alert's error message to the current alert's history val updatedHistory = currentAlert?.errorHistory.update(alertError) return if (alertError == null && !result.triggered) { @@ -199,7 +213,8 @@ class AlertService( errorMessage = null, errorHistory = updatedHistory, actionExecutionResults = updatedActionExecutionResults, - schemaVersion = IndexUtils.alertIndexSchemaVersion + schemaVersion = IndexUtils.alertIndexSchemaVersion, + clusters = triggeredClusters ) } else if (alertError == null && currentAlert?.isAcknowledged() == true) { null @@ -212,6 +227,7 @@ class AlertService( errorHistory = updatedHistory, actionExecutionResults = updatedActionExecutionResults, schemaVersion = IndexUtils.alertIndexSchemaVersion, + clusters = triggeredClusters ) } else { val alertState = if (workflorwRunContext?.auditDelegateMonitorAlerts == true) { @@ -223,7 +239,8 @@ class AlertService( lastNotificationTime = currentTime, state = alertState, errorMessage = alertError?.message, errorHistory = updatedHistory, actionExecutionResults = updatedActionExecutionResults, schemaVersion = IndexUtils.alertIndexSchemaVersion, executionId = executionId, - workflowId = workflorwRunContext?.workflowId ?: "" + workflowId = workflorwRunContext?.workflowId ?: "", + clusters = triggeredClusters ) } } diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/AlertingPlugin.kt b/alerting/src/main/kotlin/org/opensearch/alerting/AlertingPlugin.kt index 7c61fe297..e2cb704e7 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/AlertingPlugin.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/AlertingPlugin.kt @@ -11,15 +11,15 @@ import org.opensearch.alerting.action.ExecuteWorkflowAction import org.opensearch.alerting.action.GetDestinationsAction import org.opensearch.alerting.action.GetEmailAccountAction import org.opensearch.alerting.action.GetEmailGroupAction -import org.opensearch.alerting.action.GetMonitorAction +import org.opensearch.alerting.action.GetRemoteIndexesAction import org.opensearch.alerting.action.SearchEmailAccountAction import org.opensearch.alerting.action.SearchEmailGroupAction -import org.opensearch.alerting.action.SearchMonitorAction import org.opensearch.alerting.alerts.AlertIndices import org.opensearch.alerting.core.JobSweeper import org.opensearch.alerting.core.ScheduledJobIndices import org.opensearch.alerting.core.action.node.ScheduledJobsStatsAction import org.opensearch.alerting.core.action.node.ScheduledJobsStatsTransportAction +import org.opensearch.alerting.core.lock.LockService import org.opensearch.alerting.core.resthandler.RestScheduledJobStatsHandler import org.opensearch.alerting.core.schedule.JobScheduler import 
org.opensearch.alerting.core.settings.LegacyOpenDistroScheduledJobSettings @@ -36,6 +36,7 @@ import org.opensearch.alerting.resthandler.RestGetEmailAccountAction import org.opensearch.alerting.resthandler.RestGetEmailGroupAction import org.opensearch.alerting.resthandler.RestGetFindingsAction import org.opensearch.alerting.resthandler.RestGetMonitorAction +import org.opensearch.alerting.resthandler.RestGetRemoteIndexesAction import org.opensearch.alerting.resthandler.RestGetWorkflowAction import org.opensearch.alerting.resthandler.RestGetWorkflowAlertsAction import org.opensearch.alerting.resthandler.RestIndexMonitorAction @@ -46,6 +47,7 @@ import org.opensearch.alerting.resthandler.RestSearchMonitorAction import org.opensearch.alerting.script.TriggerScript import org.opensearch.alerting.service.DeleteMonitorService import org.opensearch.alerting.settings.AlertingSettings +import org.opensearch.alerting.settings.AlertingSettings.Companion.DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE import org.opensearch.alerting.settings.DestinationSettings import org.opensearch.alerting.settings.LegacyOpenDistroAlertingSettings import org.opensearch.alerting.settings.LegacyOpenDistroDestinationSettings @@ -61,6 +63,7 @@ import org.opensearch.alerting.transport.TransportGetEmailAccountAction import org.opensearch.alerting.transport.TransportGetEmailGroupAction import org.opensearch.alerting.transport.TransportGetFindingsSearchAction import org.opensearch.alerting.transport.TransportGetMonitorAction +import org.opensearch.alerting.transport.TransportGetRemoteIndexesAction import org.opensearch.alerting.transport.TransportGetWorkflowAction import org.opensearch.alerting.transport.TransportGetWorkflowAlertsAction import org.opensearch.alerting.transport.TransportIndexMonitorAction @@ -99,6 +102,7 @@ import org.opensearch.core.xcontent.XContentParser import org.opensearch.env.Environment import org.opensearch.env.NodeEnvironment import org.opensearch.index.IndexModule +import org.opensearch.monitor.jvm.JvmStats import org.opensearch.painless.spi.PainlessExtension import org.opensearch.painless.spi.Whitelist import org.opensearch.painless.spi.WhitelistLoader @@ -136,6 +140,7 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R @JvmField val MONITOR_BASE_URI = "/_plugins/_alerting/monitors" @JvmField val WORKFLOW_BASE_URI = "/_plugins/_alerting/workflows" + @JvmField val REMOTE_BASE_URI = "/_plugins/_alerting/remote" @JvmField val DESTINATION_BASE_URI = "/_plugins/_alerting/destinations" @JvmField val LEGACY_OPENDISTRO_MONITOR_BASE_URI = "/_opendistro/_alerting/monitors" @@ -194,7 +199,8 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R RestGetWorkflowAlertsAction(), RestGetFindingsAction(), RestGetWorkflowAction(), - RestDeleteWorkflowAction() + RestDeleteWorkflowAction(), + RestGetRemoteIndexesAction(), ) } @@ -202,9 +208,9 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R return listOf( ActionPlugin.ActionHandler(ScheduledJobsStatsAction.INSTANCE, ScheduledJobsStatsTransportAction::class.java), ActionPlugin.ActionHandler(AlertingActions.INDEX_MONITOR_ACTION_TYPE, TransportIndexMonitorAction::class.java), - ActionPlugin.ActionHandler(GetMonitorAction.INSTANCE, TransportGetMonitorAction::class.java), + ActionPlugin.ActionHandler(AlertingActions.GET_MONITOR_ACTION_TYPE, TransportGetMonitorAction::class.java), ActionPlugin.ActionHandler(ExecuteMonitorAction.INSTANCE, TransportExecuteMonitorAction::class.java), - 
ActionPlugin.ActionHandler(SearchMonitorAction.INSTANCE, TransportSearchMonitorAction::class.java), + ActionPlugin.ActionHandler(AlertingActions.SEARCH_MONITORS_ACTION_TYPE, TransportSearchMonitorAction::class.java), ActionPlugin.ActionHandler(AlertingActions.DELETE_MONITOR_ACTION_TYPE, TransportDeleteMonitorAction::class.java), ActionPlugin.ActionHandler(AlertingActions.ACKNOWLEDGE_ALERTS_ACTION_TYPE, TransportAcknowledgeAlertAction::class.java), ActionPlugin.ActionHandler( @@ -221,7 +227,8 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R ActionPlugin.ActionHandler(AlertingActions.INDEX_WORKFLOW_ACTION_TYPE, TransportIndexWorkflowAction::class.java), ActionPlugin.ActionHandler(AlertingActions.GET_WORKFLOW_ACTION_TYPE, TransportGetWorkflowAction::class.java), ActionPlugin.ActionHandler(AlertingActions.DELETE_WORKFLOW_ACTION_TYPE, TransportDeleteWorkflowAction::class.java), - ActionPlugin.ActionHandler(ExecuteWorkflowAction.INSTANCE, TransportExecuteWorkflowAction::class.java) + ActionPlugin.ActionHandler(ExecuteWorkflowAction.INSTANCE, TransportExecuteWorkflowAction::class.java), + ActionPlugin.ActionHandler(GetRemoteIndexesAction.INSTANCE, TransportGetRemoteIndexesAction::class.java), ) } @@ -254,6 +261,7 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R ): Collection { // Need to figure out how to use the OpenSearch DI classes rather than handwiring things here. val settings = environment.settings() + val lockService = LockService(client, clusterService) alertIndices = AlertIndices(settings, client, threadPool, clusterService) runner = MonitorRunnerService .registerClusterService(clusterService) @@ -268,7 +276,9 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R .registerTriggerService(TriggerService(scriptService)) .registerAlertService(AlertService(client, xContentRegistry, alertIndices)) .registerDocLevelMonitorQueries(DocLevelMonitorQueries(client, clusterService)) + .registerJvmStats(JvmStats.jvmStats()) .registerWorkflowService(WorkflowService(client, xContentRegistry)) + .registerLockService(lockService) .registerConsumers() .registerDestinationSettings() scheduledJobIndices = ScheduledJobIndices(client.admin(), clusterService) @@ -293,9 +303,9 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R settings ) - DeleteMonitorService.initialize(client) + DeleteMonitorService.initialize(client, lockService) - return listOf(sweeper, scheduler, runner, scheduledJobIndices, docLevelMonitorQueries, destinationMigrationCoordinator) + return listOf(sweeper, scheduler, runner, scheduledJobIndices, docLevelMonitorQueries, destinationMigrationCoordinator, lockService) } override fun getSettings(): List> { @@ -325,6 +335,9 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R AlertingSettings.ALERT_HISTORY_MAX_DOCS, AlertingSettings.ALERT_HISTORY_RETENTION_PERIOD, AlertingSettings.ALERTING_MAX_MONITORS, + AlertingSettings.PERCOLATE_QUERY_DOCS_SIZE_MEMORY_PERCENTAGE_LIMIT, + DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE, + AlertingSettings.PERCOLATE_QUERY_MAX_NUM_DOCS_IN_MEMORY, AlertingSettings.REQUEST_TIMEOUT, AlertingSettings.MAX_ACTION_THROTTLE_VALUE, AlertingSettings.FILTER_BY_BACKEND_ROLES, @@ -345,6 +358,7 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R LegacyOpenDistroAlertingSettings.REQUEST_TIMEOUT, LegacyOpenDistroAlertingSettings.MAX_ACTION_THROTTLE_VALUE, 
LegacyOpenDistroAlertingSettings.FILTER_BY_BACKEND_ROLES, + AlertingSettings.DOC_LEVEL_MONITOR_FETCH_ONLY_QUERY_FIELDS_ENABLED, DestinationSettings.EMAIL_USERNAME, DestinationSettings.EMAIL_PASSWORD, DestinationSettings.ALLOW_LIST, @@ -357,7 +371,9 @@ internal class AlertingPlugin : PainlessExtension, ActionPlugin, ScriptPlugin, R AlertingSettings.FINDING_HISTORY_MAX_DOCS, AlertingSettings.FINDING_HISTORY_INDEX_MAX_AGE, AlertingSettings.FINDING_HISTORY_ROLLOVER_PERIOD, - AlertingSettings.FINDING_HISTORY_RETENTION_PERIOD + AlertingSettings.FINDING_HISTORY_RETENTION_PERIOD, + AlertingSettings.FINDINGS_INDEXING_BATCH_SIZE, + AlertingSettings.REMOTE_MONITORING_ENABLED ) } diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/DocumentLevelMonitorRunner.kt b/alerting/src/main/kotlin/org/opensearch/alerting/DocumentLevelMonitorRunner.kt index f69151b13..ff939418a 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/DocumentLevelMonitorRunner.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/DocumentLevelMonitorRunner.kt @@ -8,14 +8,17 @@ package org.opensearch.alerting import org.apache.logging.log4j.LogManager import org.opensearch.ExceptionsHelper import org.opensearch.OpenSearchStatusException +import org.opensearch.action.DocWriteRequest +import org.opensearch.action.admin.indices.refresh.RefreshAction +import org.opensearch.action.admin.indices.refresh.RefreshRequest +import org.opensearch.action.bulk.BulkRequest +import org.opensearch.action.bulk.BulkResponse import org.opensearch.action.index.IndexRequest -import org.opensearch.action.index.IndexResponse import org.opensearch.action.search.SearchAction import org.opensearch.action.search.SearchRequest import org.opensearch.action.search.SearchResponse -import org.opensearch.action.support.WriteRequest -import org.opensearch.alerting.model.DocumentExecutionContext import org.opensearch.alerting.model.DocumentLevelTriggerRunResult +import org.opensearch.alerting.model.IndexExecutionContext import org.opensearch.alerting.model.InputRunResults import org.opensearch.alerting.model.MonitorMetadata import org.opensearch.alerting.model.MonitorRunResult @@ -27,7 +30,6 @@ import org.opensearch.alerting.util.IndexUtils import org.opensearch.alerting.util.defaultToPerExecutionAction import org.opensearch.alerting.util.getActionExecutionPolicy import org.opensearch.alerting.workflow.WorkflowRunContext -import org.opensearch.client.Client import org.opensearch.client.node.NodeClient import org.opensearch.cluster.metadata.IndexMetadata import org.opensearch.cluster.routing.Preference @@ -56,17 +58,30 @@ import org.opensearch.index.IndexNotFoundException import org.opensearch.index.query.BoolQueryBuilder import org.opensearch.index.query.Operator import org.opensearch.index.query.QueryBuilders +import org.opensearch.index.seqno.SequenceNumbers +import org.opensearch.indices.IndexClosedException import org.opensearch.percolator.PercolateQueryBuilderExt +import org.opensearch.search.SearchHit import org.opensearch.search.SearchHits import org.opensearch.search.builder.SearchSourceBuilder import org.opensearch.search.sort.SortOrder import java.io.IOException import java.time.Instant import java.util.UUID +import java.util.stream.Collectors import kotlin.math.max -object DocumentLevelMonitorRunner : MonitorRunner() { +class DocumentLevelMonitorRunner : MonitorRunner() { private val logger = LogManager.getLogger(javaClass) + var nonPercolateSearchesTimeTakenStat = 0L + var percolateQueriesTimeTakenStat = 0L + var totalDocsQueriedStat = 
0L + var docTransformTimeTakenStat = 0L + var totalDocsSizeInBytesStat = 0L + var docsSizeOfBatchInBytes = 0L + /* Contains list of docs source that are held in memory to submit to percolate query against query index. + * Docs are fetched from the source index per shard and transformed.*/ + val transformedDocs = mutableListOf>() override suspend fun runMonitor( monitor: Monitor, @@ -120,12 +135,12 @@ object DocumentLevelMonitorRunner : MonitorRunner() { try { // Resolve all passed indices to concrete indices - val concreteIndices = IndexUtils.resolveAllIndices( + val allConcreteIndices = IndexUtils.resolveAllIndices( docLevelMonitorInput.indices, monitorCtx.clusterService!!, monitorCtx.indexNameExpressionResolver!! ) - if (concreteIndices.isEmpty()) { + if (allConcreteIndices.isEmpty()) { logger.error("indices not found-${docLevelMonitorInput.indices.joinToString(",")}") throw IndexNotFoundException(docLevelMonitorInput.indices.joinToString(",")) } @@ -141,7 +156,7 @@ object DocumentLevelMonitorRunner : MonitorRunner() { // cleanup old indices that are not monitored anymore from the same monitor val runContextKeys = updatedLastRunContext.keys.toMutableSet() for (ind in runContextKeys) { - if (!concreteIndices.contains(ind)) { + if (!allConcreteIndices.contains(ind)) { updatedLastRunContext.remove(ind) } } @@ -149,13 +164,32 @@ object DocumentLevelMonitorRunner : MonitorRunner() { // Map of document ids per index when monitor is workflow delegate and has chained findings val matchingDocIdsPerIndex = workflowRunContext?.matchingDocIdsPerIndex + val concreteIndicesSeenSoFar = mutableListOf() + val updatedIndexNames = mutableListOf() docLevelMonitorInput.indices.forEach { indexName -> - val concreteIndices = IndexUtils.resolveAllIndices( + var concreteIndices = IndexUtils.resolveAllIndices( listOf(indexName), monitorCtx.clusterService!!, monitorCtx.indexNameExpressionResolver!! ) + var lastWriteIndex: String? 
= null + if (IndexUtils.isAlias(indexName, monitorCtx.clusterService!!.state()) || + IndexUtils.isDataStream(indexName, monitorCtx.clusterService!!.state()) + ) { + lastWriteIndex = concreteIndices.find { lastRunContext.containsKey(it) } + if (lastWriteIndex != null) { + val lastWriteIndexCreationDate = + IndexUtils.getCreationDateForIndex(lastWriteIndex, monitorCtx.clusterService!!.state()) + concreteIndices = IndexUtils.getNewestIndicesByCreationDate( + concreteIndices, + monitorCtx.clusterService!!.state(), + lastWriteIndexCreationDate + ) + } + } + concreteIndicesSeenSoFar.addAll(concreteIndices) val updatedIndexName = indexName.replace("*", "_") + updatedIndexNames.add(updatedIndexName) val conflictingFields = monitorCtx.docLevelMonitorQueries!!.getAllConflictingFields( monitorCtx.clusterService!!.state(), concreteIndices @@ -174,12 +208,21 @@ object DocumentLevelMonitorRunner : MonitorRunner() { } // Prepare updatedLastRunContext for each index - val indexUpdatedRunContext = updateLastRunContext( + val indexUpdatedRunContext = initializeNewLastRunContext( indexLastRunContext.toMutableMap(), monitorCtx, - concreteIndexName + concreteIndexName, ) as MutableMap - updatedLastRunContext[concreteIndexName] = indexUpdatedRunContext + if (IndexUtils.isAlias(indexName, monitorCtx.clusterService!!.state()) || + IndexUtils.isDataStream(indexName, monitorCtx.clusterService!!.state()) + ) { + if (concreteIndexName == IndexUtils.getWriteIndex(indexName, monitorCtx.clusterService!!.state())) { + updatedLastRunContext.remove(lastWriteIndex) + updatedLastRunContext[concreteIndexName] = indexUpdatedRunContext + } + } else { + updatedLastRunContext[concreteIndexName] = indexUpdatedRunContext + } val count: Int = indexLastRunContext["shards_count"] as Int for (i: Int in 0 until count) { @@ -192,44 +235,65 @@ object DocumentLevelMonitorRunner : MonitorRunner() { } } - // Prepare DocumentExecutionContext for each index - val docExecutionContext = DocumentExecutionContext(queries, indexLastRunContext, indexUpdatedRunContext) - - val matchingDocs = getMatchingDocs( - monitor, - monitorCtx, - docExecutionContext, + val fieldsToBeQueried = mutableSetOf() + if (monitorCtx.fetchOnlyQueryFieldNames) { + for (it in queries) { + if (it.queryFieldNames.isEmpty()) { + fieldsToBeQueried.clear() + logger.debug( + "Monitor ${monitor.id} : " + + "Doc Level query ${it.id} : ${it.query} doesn't have queryFieldNames populated. " + + "Cannot optimize monitor to fetch only query-relevant fields. " + + "Querying entire doc source." 
+ ) + break + } + fieldsToBeQueried.addAll(it.queryFieldNames) + } + if (fieldsToBeQueried.isNotEmpty()) + logger.debug( + "Monitor ${monitor.id} Querying only fields " + + "${fieldsToBeQueried.joinToString()} instead of entire _source of documents" + ) + } + val indexExecutionContext = IndexExecutionContext( + queries, + indexLastRunContext, + indexUpdatedRunContext, updatedIndexName, concreteIndexName, conflictingFields.toList(), - matchingDocIdsPerIndex?.get(concreteIndexName) + matchingDocIdsPerIndex?.get(concreteIndexName), ) - if (matchingDocs.isNotEmpty()) { - val matchedQueriesForDocs = getMatchedQueries( - monitorCtx, - matchingDocs.map { it.second }, - monitor, - monitorMetadata, - updatedIndexName, - concreteIndexName - ) - - matchedQueriesForDocs.forEach { hit -> - val id = hit.id - .replace("_${updatedIndexName}_${monitor.id}", "") - .replace("_${concreteIndexName}_${monitor.id}", "") - - val docIndices = hit.field("_percolator_document_slot").values.map { it.toString().toInt() } - docIndices.forEach { idx -> - val docIndex = "${matchingDocs[idx].first}|$concreteIndexName" - inputRunResults.getOrPut(id) { mutableSetOf() }.add(docIndex) - docsToQueries.getOrPut(docIndex) { mutableListOf() }.add(id) - } - } + fetchShardDataAndMaybeExecutePercolateQueries( + monitor, + monitorCtx, + indexExecutionContext, + monitorMetadata, + inputRunResults, + docsToQueries, + updatedIndexNames, + concreteIndicesSeenSoFar, + ArrayList(fieldsToBeQueried) + ) { shard, maxSeqNo -> // function passed to update last run context with new max sequence number + indexExecutionContext.updatedLastRunContext[shard] = maxSeqNo } } } + /* if all indices are covered still in-memory docs size limit is not breached we would need to submit + the percolate query at the end */ + if (transformedDocs.isNotEmpty()) { + performPercolateQueryAndResetCounters( + monitorCtx, + monitor, + monitorMetadata, + updatedIndexNames, + concreteIndicesSeenSoFar, + inputRunResults, + docsToQueries, + ) + } monitorResult = monitorResult.copy(inputResults = InputRunResults(listOf(inputRunResults))) /* @@ -249,10 +313,7 @@ object DocumentLevelMonitorRunner : MonitorRunner() { // If there are no triggers defined, we still want to generate findings if (monitor.triggers.isEmpty()) { if (dryrun == false && monitor.id != Monitor.NO_ID) { - docsToQueries.forEach { - val triggeredQueries = it.value.map { queryId -> idQueryMap[queryId]!! 
} - createFindings(monitor, monitorCtx, triggeredQueries, it.key, true) - } + createFindings(monitor, monitorCtx, docsToQueries, idQueryMap, true) } } else { monitor.triggers.forEach { @@ -289,6 +350,9 @@ object DocumentLevelMonitorRunner : MonitorRunner() { monitorMetadata.copy(lastRunContext = updatedLastRunContext), true ) + } else { + // Clean up any queries created by the dry run monitor + monitorCtx.docLevelMonitorQueries!!.deleteDocLevelQueriesOnDryRun(monitorMetadata) } // TODO: Update the Document as part of the Trigger and return back the trigger action result @@ -303,6 +367,22 @@ object DocumentLevelMonitorRunner : MonitorRunner() { e ) return monitorResult.copy(error = alertingException, inputResults = InputRunResults(emptyList(), alertingException)) + } finally { + logger.debug( + "PERF_DEBUG_STATS: Monitor ${monitor.id} " + + "Time spent on fetching data from shards in millis: $nonPercolateSearchesTimeTakenStat" + ) + logger.debug( + "PERF_DEBUG_STATS: Monitor {} Time spent on percolate queries in millis: {}", + monitor.id, + percolateQueriesTimeTakenStat + ) + logger.debug( + "PERF_DEBUG_STATS: Monitor {} Time spent on transforming doc fields in millis: {}", + monitor.id, + docTransformTimeTakenStat + ) + logger.debug("PERF_DEBUG_STATS: Monitor {} Num docs queried: {}", monitor.id, totalDocsQueriedStat) } } @@ -341,7 +421,7 @@ object DocumentLevelMonitorRunner : MonitorRunner() { trigger: DocumentLevelTrigger, monitor: Monitor, idQueryMap: Map, - docsToQueries: Map>, + docsToQueries: MutableMap>, queryToDocIds: Map>, dryrun: Boolean, workflowRunContext: WorkflowRunContext?, @@ -350,35 +430,33 @@ object DocumentLevelMonitorRunner : MonitorRunner() { val triggerCtx = DocumentLevelTriggerExecutionContext(monitor, trigger) val triggerResult = monitorCtx.triggerService!!.runDocLevelTrigger(monitor, trigger, queryToDocIds) - val findings = mutableListOf() - val findingDocPairs = mutableListOf>() + val triggerFindingDocPairs = mutableListOf>() // TODO: Implement throttling for findings - docsToQueries.forEach { - val triggeredQueries = it.value.map { queryId -> idQueryMap[queryId]!! } - val findingId = createFindings( - monitor, - monitorCtx, - triggeredQueries, - it.key, - !dryrun && monitor.id != Monitor.NO_ID, - executionId - ) - findings.add(findingId) + val findingToDocPairs = createFindings( + monitor, + monitorCtx, + docsToQueries, + idQueryMap, + !dryrun && monitor.id != Monitor.NO_ID, + executionId + ) - if (triggerResult.triggeredDocs.contains(it.key)) { - findingDocPairs.add(Pair(findingId, it.key)) + findingToDocPairs.forEach { + // Only pick those entries whose docs have triggers associated with them + if (triggerResult.triggeredDocs.contains(it.second)) { + triggerFindingDocPairs.add(Pair(it.first, it.second)) } } val actionCtx = triggerCtx.copy( triggeredDocs = triggerResult.triggeredDocs, - relatedFindings = findings, + relatedFindings = findingToDocPairs.map { it.first }, error = monitorResult.error ?: triggerResult.error ) val alerts = mutableListOf() - findingDocPairs.forEach { + triggerFindingDocPairs.forEach { val alert = monitorCtx.alertService!!.composeDocLevelAlert( listOf(it.first), listOf(it.second), @@ -437,51 +515,92 @@ object DocumentLevelMonitorRunner : MonitorRunner() { return triggerResult } + /** + * 1. Bulk index all findings based on shouldCreateFinding flag + * 2. invoke publishFinding() to kickstart auto-correlations + * 3. 
Returns a list of pairs for finding id to doc id + */ private suspend fun createFindings( monitor: Monitor, monitorCtx: MonitorRunnerExecutionContext, - docLevelQueries: List, - matchingDocId: String, + docsToQueries: MutableMap>, + idQueryMap: Map, shouldCreateFinding: Boolean, workflowExecutionId: String? = null, - ): String { - // Before the "|" is the doc id and after the "|" is the index - val docIndex = matchingDocId.split("|") + ): List> { - val finding = Finding( - id = UUID.randomUUID().toString(), - relatedDocIds = listOf(docIndex[0]), - correlatedDocIds = listOf(docIndex[0]), - monitorId = monitor.id, - monitorName = monitor.name, - index = docIndex[1], - docLevelQueries = docLevelQueries, - timestamp = Instant.now(), - executionId = workflowExecutionId - ) + val findingDocPairs = mutableListOf>() + val findings = mutableListOf() + val indexRequests = mutableListOf() - val findingStr = finding.toXContent(XContentBuilder.builder(XContentType.JSON.xContent()), ToXContent.EMPTY_PARAMS).string() - logger.debug("Findings: $findingStr") + docsToQueries.forEach { + val triggeredQueries = it.value.map { queryId -> idQueryMap[queryId]!! } - if (shouldCreateFinding) { - val indexRequest = IndexRequest(monitor.dataSources.findingsIndex) - .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) - .source(findingStr, XContentType.JSON) - .id(finding.id) - .routing(finding.id) + // Before the "|" is the doc id and after the "|" is the index + val docIndex = it.key.split("|") - monitorCtx.client!!.suspendUntil { - monitorCtx.client!!.index(indexRequest, it) + val finding = Finding( + id = UUID.randomUUID().toString(), + relatedDocIds = listOf(docIndex[0]), + correlatedDocIds = listOf(docIndex[0]), + monitorId = monitor.id, + monitorName = monitor.name, + index = docIndex[1], + docLevelQueries = triggeredQueries, + timestamp = Instant.now(), + executionId = workflowExecutionId + ) + findingDocPairs.add(Pair(finding.id, it.key)) + findings.add(finding) + + val findingStr = + finding.toXContent(XContentBuilder.builder(XContentType.JSON.xContent()), ToXContent.EMPTY_PARAMS) + .string() + logger.debug("Findings: $findingStr") + + if (shouldCreateFinding) { + indexRequests += IndexRequest(monitor.dataSources.findingsIndex) + .source(findingStr, XContentType.JSON) + .id(finding.id) + .opType(DocWriteRequest.OpType.CREATE) } } + if (indexRequests.isNotEmpty()) { + bulkIndexFindings(monitor, monitorCtx, indexRequests) + } + try { - publishFinding(monitor, monitorCtx, finding) + findings.forEach { finding -> + publishFinding(monitor, monitorCtx, finding) + } } catch (e: Exception) { // suppress exception logger.error("Optional finding callback failed", e) } - return finding.id + return findingDocPairs + } + + private suspend fun bulkIndexFindings( + monitor: Monitor, + monitorCtx: MonitorRunnerExecutionContext, + indexRequests: List + ) { + indexRequests.chunked(monitorCtx.findingsIndexBatchSize).forEach { batch -> + val bulkResponse: BulkResponse = monitorCtx.client!!.suspendUntil { + bulk(BulkRequest().add(batch), it) + } + if (bulkResponse.hasFailures()) { + bulkResponse.items.forEach { item -> + if (item.isFailed) { + logger.error("Failed indexing the finding ${item.id} of monitor [${monitor.id}]") + } + } + } else { + logger.debug("[${bulkResponse.items.size}] All findings successfully indexed.") + } + } + monitorCtx.client!!.execute(RefreshAction.INSTANCE, RefreshRequest(monitor.dataSources.findingsIndex)) } private fun publishFinding( @@ -501,17 +620,16 @@ object DocumentLevelMonitorRunner : 
MonitorRunner() { ) } - private suspend fun updateLastRunContext( + private fun initializeNewLastRunContext( lastRunContext: Map, monitorCtx: MonitorRunnerExecutionContext, - index: String + index: String, ): Map { val count: Int = getShardsCount(monitorCtx.clusterService!!, index) val updatedLastRunContext = lastRunContext.toMutableMap() for (i: Int in 0 until count) { val shard = i.toString() - val maxSeqNo: Long = getMaxSeqNo(monitorCtx.client!!, index, shard) - updatedLastRunContext[shard] = maxSeqNo.toString() + updatedLastRunContext[shard] = SequenceNumbers.UNASSIGNED_SEQ_NO.toString() } return updatedLastRunContext } @@ -543,83 +661,166 @@ object DocumentLevelMonitorRunner : MonitorRunner() { return indexCreationDate > lastExecutionTime.toEpochMilli() } - /** - * Get the current max seq number of the shard. We find it by searching the last document - * in the primary shard. - */ - private suspend fun getMaxSeqNo(client: Client, index: String, shard: String): Long { - val request: SearchRequest = SearchRequest() - .indices(index) - .preference("_shards:$shard") - .source( - SearchSourceBuilder() - .version(true) - .sort("_seq_no", SortOrder.DESC) - .seqNoAndPrimaryTerm(true) - .query(QueryBuilders.matchAllQuery()) - .size(1) - ) - val response: SearchResponse = client.suspendUntil { client.search(request, it) } - if (response.status() !== RestStatus.OK) { - throw IOException("Failed to get max seq no for shard: $shard") - } - if (response.hits.hits.isEmpty()) { - return -1L - } - - return response.hits.hits[0].seqNo - } - private fun getShardsCount(clusterService: ClusterService, index: String): Int { val allShards: List = clusterService!!.state().routingTable().allShards(index) return allShards.filter { it.primary() }.size } - private suspend fun getMatchingDocs( + /** 1. Fetch data per shard for given index. (only 10000 docs are fetched. + * needs to be converted to scroll if not performant enough) + * 2. Transform documents to conform to format required for percolate query + * 3a. Check if docs in memory are crossing threshold defined by setting. + * 3b. If yes, perform percolate query and update docToQueries Map with all hits from percolate queries */ + private suspend fun fetchShardDataAndMaybeExecutePercolateQueries( monitor: Monitor, monitorCtx: MonitorRunnerExecutionContext, - docExecutionCtx: DocumentExecutionContext, - index: String, - concreteIndex: String, - conflictingFields: List, - docIds: List? 
= null - ): List> { - val count: Int = docExecutionCtx.updatedLastRunContext["shards_count"] as Int - val matchingDocs = mutableListOf>() + indexExecutionCtx: IndexExecutionContext, + monitorMetadata: MonitorMetadata, + inputRunResults: MutableMap>, + docsToQueries: MutableMap>, + monitorInputIndices: List, + concreteIndices: List, + fieldsToBeQueried: List, + updateLastRunContext: (String, String) -> Unit + ) { + val count: Int = indexExecutionCtx.updatedLastRunContext["shards_count"] as Int for (i: Int in 0 until count) { val shard = i.toString() try { - val maxSeqNo: Long = docExecutionCtx.updatedLastRunContext[shard].toString().toLong() - val prevSeqNo = docExecutionCtx.lastRunContext[shard].toString().toLongOrNull() - - val hits: SearchHits = searchShard( + val prevSeqNo = indexExecutionCtx.lastRunContext[shard].toString().toLongOrNull() + val from = prevSeqNo ?: SequenceNumbers.NO_OPS_PERFORMED + var to: Long = Long.MAX_VALUE + while (to >= from) { + val hits: SearchHits = searchShard( + monitorCtx, + indexExecutionCtx.concreteIndexName, + shard, + from, + to, + indexExecutionCtx.docIds, + fieldsToBeQueried, + ) + if (hits.hits.isEmpty()) { + if (to == Long.MAX_VALUE) { + updateLastRunContext(shard, (prevSeqNo ?: SequenceNumbers.NO_OPS_PERFORMED).toString()) // didn't find any docs + } + break + } + if (to == Long.MAX_VALUE) { // max sequence number of shard needs to be computed + updateLastRunContext(shard, hits.hits[0].seqNo.toString()) + } + val leastSeqNoFromHits = hits.hits.last().seqNo + to = leastSeqNoFromHits - 1 + val startTime = System.currentTimeMillis() + transformedDocs.addAll( + transformSearchHitsAndReconstructDocs( + hits, + indexExecutionCtx.indexName, + indexExecutionCtx.concreteIndexName, + monitor.id, + indexExecutionCtx.conflictingFields, + ) + ) + if ( + transformedDocs.isNotEmpty() && + shouldPerformPercolateQueryAndFlushInMemoryDocs(transformedDocs.size, monitorCtx) + ) { + performPercolateQueryAndResetCounters( + monitorCtx, + monitor, + monitorMetadata, + monitorInputIndices, + concreteIndices, + inputRunResults, + docsToQueries, + ) + } + docTransformTimeTakenStat += System.currentTimeMillis() - startTime + } + } catch (e: Exception) { + logger.error( + "Monitor ${monitor.id} :" + + "Failed to run fetch data from shard [$shard] of index [${indexExecutionCtx.concreteIndexName}]. 
" + + "Error: ${e.message}", + e + ) + if (e is IndexClosedException) { + throw e + } + } + if ( + transformedDocs.isNotEmpty() && + shouldPerformPercolateQueryAndFlushInMemoryDocs(transformedDocs.size, monitorCtx) + ) { + performPercolateQueryAndResetCounters( monitorCtx, - concreteIndex, - shard, - prevSeqNo, - maxSeqNo, - null, - docIds + monitor, + monitorMetadata, + monitorInputIndices, + concreteIndices, + inputRunResults, + docsToQueries, ) + } + } + } - if (hits.hits.isNotEmpty()) { - matchingDocs.addAll(getAllDocs(hits, index, concreteIndex, monitor.id, conflictingFields)) + private fun shouldPerformPercolateQueryAndFlushInMemoryDocs( + numDocs: Int, + monitorCtx: MonitorRunnerExecutionContext, + ): Boolean { + return isInMemoryDocsSizeExceedingMemoryLimit(docsSizeOfBatchInBytes, monitorCtx) || + isInMemoryNumDocsExceedingMaxDocsPerPercolateQueryLimit(numDocs, monitorCtx) + } + + private suspend fun performPercolateQueryAndResetCounters( + monitorCtx: MonitorRunnerExecutionContext, + monitor: Monitor, + monitorMetadata: MonitorMetadata, + monitorInputIndices: List, + concreteIndices: List, + inputRunResults: MutableMap>, + docsToQueries: MutableMap>, + ) { + try { + val percolateQueryResponseHits = runPercolateQueryOnTransformedDocs( + monitorCtx, + transformedDocs, + monitor, + monitorMetadata, + concreteIndices, + monitorInputIndices, + ) + + percolateQueryResponseHits.forEach { hit -> + var id = hit.id + concreteIndices.forEach { id = id.replace("_${it}_${monitor.id}", "") } + monitorInputIndices.forEach { id = id.replace("_${it}_${monitor.id}", "") } + val docIndices = hit.field("_percolator_document_slot").values.map { it.toString().toInt() } + docIndices.forEach { idx -> + val docIndex = "${transformedDocs[idx].first}|${transformedDocs[idx].second.concreteIndexName}" + inputRunResults.getOrPut(id) { mutableSetOf() }.add(docIndex) + docsToQueries.getOrPut(docIndex) { mutableListOf() }.add(id) } - } catch (e: Exception) { - logger.warn("Failed to run for shard $shard. Error: ${e.message}") } + totalDocsQueriedStat += transformedDocs.size.toLong() + } finally { + transformedDocs.clear() + docsSizeOfBatchInBytes = 0 } - return matchingDocs } + /** Executes search query on given shard of given index to fetch docs with sequene number greater than prevSeqNo. + * This method hence fetches only docs from shard which haven't been queried before + */ private suspend fun searchShard( monitorCtx: MonitorRunnerExecutionContext, index: String, shard: String, prevSeqNo: Long?, maxSeqNo: Long, - query: String?, - docIds: List? = null + docIds: List? = null, + fieldsToFetch: List, ): SearchHits { if (prevSeqNo?.equals(maxSeqNo) == true && maxSeqNo != 0L) { return SearchHits.empty() @@ -627,10 +828,6 @@ object DocumentLevelMonitorRunner : MonitorRunner() { val boolQueryBuilder = BoolQueryBuilder() boolQueryBuilder.filter(QueryBuilders.rangeQuery("_seq_no").gt(prevSeqNo).lte(maxSeqNo)) - if (query != null) { - boolQueryBuilder.must(QueryBuilders.queryStringQuery(query)) - } - if (!docIds.isNullOrEmpty()) { boolQueryBuilder.filter(QueryBuilders.termsQuery("_id", docIds)) } @@ -641,47 +838,66 @@ object DocumentLevelMonitorRunner : MonitorRunner() { .source( SearchSourceBuilder() .version(true) + .sort("_seq_no", SortOrder.DESC) + .seqNoAndPrimaryTerm(true) .query(boolQueryBuilder) - .size(10000) // fixme: make this configurable. 
+ .size(monitorCtx.docLevelMonitorShardFetchSize) ) .preference(Preference.PRIMARY_FIRST.type()) + + if (monitorCtx.fetchOnlyQueryFieldNames && fieldsToFetch.isNotEmpty()) { + request.source().fetchSource(false) + for (field in fieldsToFetch) { + request.source().fetchField(field) + } + } val response: SearchResponse = monitorCtx.client!!.suspendUntil { monitorCtx.client!!.search(request, it) } if (response.status() !== RestStatus.OK) { - throw IOException("Failed to search shard: $shard") + throw IOException("Failed to search shard: [$shard] in index [$index]. Response status is ${response.status()}") } + nonPercolateSearchesTimeTakenStat += response.took.millis return response.hits } - private suspend fun getMatchedQueries( + /** Executes percolate query on the docs against the monitor's query index and return the hits from the search response*/ + private suspend fun runPercolateQueryOnTransformedDocs( monitorCtx: MonitorRunnerExecutionContext, - docs: List, + docs: MutableList>, monitor: Monitor, monitorMetadata: MonitorMetadata, - index: String, - concreteIndex: String + concreteIndices: List, + monitorInputIndices: List, ): SearchHits { - val boolQueryBuilder = BoolQueryBuilder().must(QueryBuilders.matchQuery("index", index).operator(Operator.AND)) - - val percolateQueryBuilder = PercolateQueryBuilderExt("query", docs, XContentType.JSON) + val indices = docs.stream().map { it.second.indexName }.distinct().collect(Collectors.toList()) + val boolQueryBuilder = BoolQueryBuilder().must(buildShouldClausesOverPerIndexMatchQueries(indices)) + val percolateQueryBuilder = + PercolateQueryBuilderExt("query", docs.map { it.second.docSource }, XContentType.JSON) if (monitor.id.isNotEmpty()) { boolQueryBuilder.must(QueryBuilders.matchQuery("monitor_id", monitor.id).operator(Operator.AND)) } boolQueryBuilder.filter(percolateQueryBuilder) - - val queryIndex = monitorMetadata.sourceToQueryIndexMapping[index + monitor.id] - if (queryIndex == null) { - val message = "Failed to resolve concrete queryIndex from sourceIndex during monitor execution!" + - " sourceIndex:$concreteIndex queryIndex:${monitor.dataSources.queryIndex}" + val queryIndices = + docs.map { monitorMetadata.sourceToQueryIndexMapping[it.second.indexName + monitor.id] }.distinct() + if (queryIndices.isEmpty()) { + val message = + "Monitor ${monitor.id}: Failed to resolve query Indices from source indices during monitor execution!" 
+ + " sourceIndices: $monitorInputIndices" logger.error(message) throw AlertingException.wrap( OpenSearchStatusException(message, RestStatus.INTERNAL_SERVER_ERROR) ) } - val searchRequest = SearchRequest(queryIndex).preference(Preference.PRIMARY_FIRST.type()) + + val searchRequest = + SearchRequest().indices(*queryIndices.toTypedArray()).preference(Preference.PRIMARY_FIRST.type()) val searchSourceBuilder = SearchSourceBuilder() searchSourceBuilder.query(boolQueryBuilder) searchRequest.source(searchSourceBuilder) - + logger.debug( + "Monitor ${monitor.id}: " + + "Executing percolate query for docs from source indices " + + "$monitorInputIndices against query index $queryIndices" + ) var response: SearchResponse try { response = monitorCtx.client!!.suspendUntil { @@ -689,42 +905,77 @@ object DocumentLevelMonitorRunner : MonitorRunner() { } } catch (e: Exception) { throw IllegalStateException( - "Failed to run percolate search for sourceIndex [$index] and queryIndex [$queryIndex] for ${docs.size} document(s)", e + "Monitor ${monitor.id}:" + + " Failed to run percolate search for sourceIndex [${concreteIndices.joinToString()}] " + + "and queryIndex [${queryIndices.joinToString()}] for ${docs.size} document(s)", + e ) } if (response.status() !== RestStatus.OK) { - throw IOException("Failed to search percolate index: $queryIndex") + throw IOException( + "Monitor ${monitor.id}: Failed to search percolate index: ${queryIndices.joinToString()}. " + + "Response status is ${response.status()}" + ) } + logger.debug("Monitor ${monitor.id} PERF_DEBUG: Percolate query time taken millis = ${response.took}") + percolateQueriesTimeTakenStat += response.took.millis return response.hits } + /** we cannot use terms query because `index` field's mapping is of type TEXT and not keyword. Refer doc-level-queries.json*/ + private fun buildShouldClausesOverPerIndexMatchQueries(indices: List): BoolQueryBuilder { + val boolQueryBuilder = QueryBuilders.boolQuery() + indices.forEach { boolQueryBuilder.should(QueryBuilders.matchQuery("index", it)) } + return boolQueryBuilder + } - private fun getAllDocs( + /** Transform field names and index names in all the search hits to format required to run percolate search against them. + * Hits are transformed using method transformDocumentFieldNames() */ + private fun transformSearchHitsAndReconstructDocs( hits: SearchHits, index: String, concreteIndex: String, monitorId: String, - conflictingFields: List - ): List> { - return hits.map { hit -> - val sourceMap = hit.sourceAsMap - - transformDocumentFieldNames( - sourceMap, - conflictingFields, - "_${index}_$monitorId", - "_${concreteIndex}_$monitorId", - "" - ) - - var xContentBuilder = XContentFactory.jsonBuilder().map(sourceMap) - - val sourceRef = BytesReference.bytes(xContentBuilder) - - logger.debug("Document [${hit.id}] payload after transform: ", sourceRef.utf8ToString()) + conflictingFields: List, + ): List> { + return hits.mapNotNull(fun(hit: SearchHit): Pair? 
{
+ try {
+ val sourceMap = if (hit.hasSource()) {
+ hit.sourceAsMap
+ } else {
+ constructSourceMapFromFieldsInHit(hit)
+ }
+ transformDocumentFieldNames(
+ sourceMap,
+ conflictingFields,
+ "_${index}_$monitorId",
+ "_${concreteIndex}_$monitorId",
+ ""
+ )
+ var xContentBuilder = XContentFactory.jsonBuilder().map(sourceMap)
+ val sourceRef = BytesReference.bytes(xContentBuilder)
+ docsSizeOfBatchInBytes += sourceRef.ramBytesUsed()
+ totalDocsSizeInBytesStat += sourceRef.ramBytesUsed()
+ return Pair(hit.id, TransformedDocDto(index, concreteIndex, hit.id, sourceRef))
+ } catch (e: Exception) {
+ logger.error("Monitor $monitorId: Failed to transform payload $hit for percolate query", e)
+ // Skip any document that fails transformation, since we would not be able to run percolate queries against it anyway.
+ return null
+ }
+ })
+ }
- Pair(hit.id, sourceRef)
+ private fun constructSourceMapFromFieldsInHit(hit: SearchHit): MutableMap<String, Any> {
+ if (hit.fields == null)
+ return mutableMapOf()
+ val sourceMap: MutableMap<String, Any> = mutableMapOf()
+ for (field in hit.fields) {
+ if (field.value.values != null && field.value.values.isNotEmpty())
+ if (field.value.values.size == 1) {
+ sourceMap[field.key] = field.value.values[0]
+ } else sourceMap[field.key] = field.value.values
}
+ return sourceMap
}
/** @@ -777,4 +1028,33 @@ object DocumentLevelMonitorRunner : MonitorRunner() {
}
jsonAsMap.putAll(tempMap)
}
+
+ /**
+ * Returns true if the size of the docs fetched from shards so far exceeds the threshold
+ * percentage (default: 10; the setting is dynamic and configurable) of the maximum heap size.
+ */
+ private fun isInMemoryDocsSizeExceedingMemoryLimit(docsBytesSize: Long, monitorCtx: MonitorRunnerExecutionContext): Boolean {
+ val thresholdPercentage = monitorCtx.percQueryDocsSizeMemoryPercentageLimit
+ val heapMaxBytes = monitorCtx.jvmStats!!.mem.heapMax.bytes
+ val thresholdBytes = (thresholdPercentage.toDouble() / 100.0) * heapMaxBytes
+
+ return docsBytesSize > thresholdBytes
+ }
+
+ private fun isInMemoryNumDocsExceedingMaxDocsPerPercolateQueryLimit(numDocs: Int, monitorCtx: MonitorRunnerExecutionContext): Boolean {
+ val maxNumDocsThreshold = monitorCtx.percQueryMaxNumDocsInMemory
+ return numDocs >= maxNumDocsThreshold
+ }
+
+ /**
+ * POJO holding information about each doc's concrete index, id, input index pattern/alias/datastream name
+ * and doc source. A list of these POJOs is passed to the percolate query execution logic.
+ */
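+ // Hedged sketch of how these DTOs are consumed in this runner: transformed docs are
+ // accumulated per shard and flushed to runPercolateQueryOnTransformedDocs() as soon
+ // as either in-memory guard above trips (local names such as inputIndices are assumed):
+ //   if (isInMemoryDocsSizeExceedingMemoryLimit(docsSizeOfBatchInBytes, monitorCtx) ||
+ //       isInMemoryNumDocsExceedingMaxDocsPerPercolateQueryLimit(docs.size, monitorCtx)
+ //   ) {
+ //       runPercolateQueryOnTransformedDocs(monitorCtx, docs, monitor, monitorMetadata, concreteIndices, inputIndices)
+ //       docs.clear(); docsSizeOfBatchInBytes = 0
+ //   }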
+ data class TransformedDocDto(
+ var indexName: String,
+ var concreteIndexName: String,
+ var docId: String,
+ var docSource: BytesReference
+ )
}
diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/InputService.kt b/alerting/src/main/kotlin/org/opensearch/alerting/InputService.kt
index b31e21d5f..17ec4670c 100644
--- a/alerting/src/main/kotlin/org/opensearch/alerting/InputService.kt
+++ b/alerting/src/main/kotlin/org/opensearch/alerting/InputService.kt
@@ -5,6 +5,9 @@ package org.opensearch.alerting
+import kotlinx.coroutines.CoroutineScope
+import kotlinx.coroutines.Dispatchers
+import kotlinx.coroutines.launch
import org.apache.logging.log4j.LogManager
import org.opensearch.action.search.SearchRequest
import org.opensearch.action.search.SearchResponse
@@ -12,7 +15,9 @@ import org.opensearch.alerting.model.InputRunResults
import org.opensearch.alerting.model.TriggerAfterKey
import org.opensearch.alerting.opensearchapi.convertToMap
import org.opensearch.alerting.opensearchapi.suspendUntil
+import org.opensearch.alerting.settings.AlertingSettings
import org.opensearch.alerting.util.AggregationQueryRewriter
+import org.opensearch.alerting.util.CrossClusterMonitorUtils
import org.opensearch.alerting.util.addUserBackendRolesFilter
import org.opensearch.alerting.util.clusterMetricsMonitorHelpers.executeTransportAction
import org.opensearch.alerting.util.clusterMetricsMonitorHelpers.toMap
@@ -43,6 +48,8 @@ import org.opensearch.script.TemplateScript
import org.opensearch.search.builder.SearchSourceBuilder
import java.time.Instant
+private val scope: CoroutineScope = CoroutineScope(Dispatchers.IO)
+
/** Service that handles the collection of input results for Monitor executions */
class InputService(
val client: Client,
@@ -100,8 +107,9 @@ class InputService(
.newInstance(searchParams)
.execute()
+ val indexes = CrossClusterMonitorUtils.parseIndexesForRemoteSearch(input.indices, clusterService)
val searchRequest = SearchRequest()
- .indices(*input.indices.toTypedArray())
+ .indices(*indexes.toTypedArray())
.preference(Preference.PRIMARY_FIRST.type())
XContentType.JSON.xContent().createParser(xContentRegistry, LoggingDeprecationHandler.INSTANCE, searchSource).use {
searchRequest.source(SearchSourceBuilder.fromXContent(it))
@@ -115,9 +123,36 @@ class InputService(
results += searchResponse.convertToMap()
}
is ClusterMetricsInput -> {
- logger.debug("ClusterMetricsInput clusterMetricType: ${input.clusterMetricType}")
- val response = executeTransportAction(input, client)
- results += response.toMap()
+ logger.debug("ClusterMetricsInput clusterMetricType: {}", input.clusterMetricType)
+
+ val remoteMonitoringEnabled = clusterService.clusterSettings.get(AlertingSettings.REMOTE_MONITORING_ENABLED)
+ logger.debug("Remote monitoring enabled: {}", remoteMonitoringEnabled)
+
+ val responseMap = mutableMapOf<String, Map<String, Any>>()
+ if (remoteMonitoringEnabled && input.clusters.isNotEmpty()) {
+ client.threadPool().threadContext.stashContext().use {
+ scope.launch {
+ input.clusters.forEach { cluster ->
+ val targetClient = CrossClusterMonitorUtils.getClientForCluster(cluster, client, clusterService)
+ val response = executeTransportAction(input, targetClient)
+ // Not all supported APIs reference the cluster name in their response,
+ // so each response is keyed by cluster name before being added to the results.
+ // The same logic is not applied to local-only monitors to avoid breaking existing monitors.
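+ // A hedged sketch of the per-cluster client resolution this loop relies on; the
+ // helper is assumed to hand back the local node client for the local cluster and a
+ // remote-cluster client otherwise:
+ //   fun getClientForCluster(cluster: String, client: Client, clusterService: ClusterService): Client =
+ //       if (cluster == clusterService.clusterName.value()) client
+ //       else client.getRemoteClusterClient(cluster)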
+ responseMap[cluster] = response.toMap()
+ }
+ }
+ }
+ val inputTimeout = clusterService.clusterSettings.get(AlertingSettings.INPUT_TIMEOUT)
+ val startTime = Instant.now().toEpochMilli()
+ while (
+ (Instant.now().toEpochMilli() - startTime < inputTimeout.millis) &&
+ (responseMap.size < input.clusters.size)
+ ) { /* Wait for responses until all clusters respond or the input timeout elapses */ }
+ results += responseMap
+ } else {
+ val response = executeTransportAction(input, client)
+ results += response.toMap()
+ }
}
else -> {
throw IllegalArgumentException("Unsupported input type: ${input.name()}.")
diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/MonitorMetadataService.kt b/alerting/src/main/kotlin/org/opensearch/alerting/MonitorMetadataService.kt
index e4c5a318c..da167ed49 100644
--- a/alerting/src/main/kotlin/org/opensearch/alerting/MonitorMetadataService.kt
+++ b/alerting/src/main/kotlin/org/opensearch/alerting/MonitorMetadataService.kt
@@ -12,6 +12,7 @@ import kotlinx.coroutines.SupervisorJob
import org.apache.logging.log4j.LogManager
import org.opensearch.ExceptionsHelper
import org.opensearch.OpenSearchSecurityException
+import org.opensearch.OpenSearchStatusException
import org.opensearch.action.DocWriteRequest
import org.opensearch.action.DocWriteResponse
import org.opensearch.action.admin.indices.get.GetIndexRequest
@@ -28,6 +29,7 @@ import org.opensearch.alerting.model.MonitorMetadata
import org.opensearch.alerting.opensearchapi.suspendUntil
import org.opensearch.alerting.settings.AlertingSettings
import org.opensearch.alerting.util.AlertingException
+import org.opensearch.alerting.util.IndexUtils
import org.opensearch.client.Client
import org.opensearch.cluster.service.ClusterService
import org.opensearch.common.settings.Settings
@@ -77,35 +79,51 @@ object MonitorMetadataService :
@Suppress("ComplexMethod", "ReturnCount")
suspend fun upsertMetadata(metadata: MonitorMetadata, updating: Boolean): MonitorMetadata {
try {
- val indexRequest = IndexRequest(ScheduledJob.SCHEDULED_JOBS_INDEX)
- .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
- .source(metadata.toXContent(XContentFactory.jsonBuilder(), ToXContent.MapParams(mapOf("with_type" to "true"))))
- .id(metadata.id)
- .routing(metadata.monitorId)
- .setIfSeqNo(metadata.seqNo)
- .setIfPrimaryTerm(metadata.primaryTerm)
- .timeout(indexTimeout)
+ if (clusterService.state().routingTable.hasIndex(ScheduledJob.SCHEDULED_JOBS_INDEX)) {
+ val indexRequest = IndexRequest(ScheduledJob.SCHEDULED_JOBS_INDEX)
+ .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
+ .source(
+ metadata.toXContent(
+ XContentFactory.jsonBuilder(),
+ ToXContent.MapParams(mapOf("with_type" to "true"))
+ )
+ )
+ .id(metadata.id)
+ .routing(metadata.monitorId)
+ .setIfSeqNo(metadata.seqNo)
+ .setIfPrimaryTerm(metadata.primaryTerm)
+ .timeout(indexTimeout)
- if (updating) {
- indexRequest.id(metadata.id).setIfSeqNo(metadata.seqNo).setIfPrimaryTerm(metadata.primaryTerm)
- } else {
- indexRequest.opType(DocWriteRequest.OpType.CREATE)
- }
- val response: IndexResponse = client.suspendUntil { index(indexRequest, it) }
- when (response.result) {
- DocWriteResponse.Result.DELETED, DocWriteResponse.Result.NOOP, DocWriteResponse.Result.NOT_FOUND, null -> {
- val failureReason = "The upsert metadata call failed with a ${response.result?.lowercase} result"
- log.error(failureReason)
- throw AlertingException(failureReason, RestStatus.INTERNAL_SERVER_ERROR, IllegalStateException(failureReason))
+ if (updating) {
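+ // The setIfSeqNo/setIfPrimaryTerm pair above makes this an optimistic-concurrency
+ // write: OpenSearch rejects it with a version conflict if the metadata doc changed
+ // since it was read, so concurrent executions cannot silently overwrite each other.
+ // Minimal sketch of the same pattern (index and id are hypothetical):
+ //   IndexRequest("my-index").id("doc-1")
+ //       .setIfSeqNo(readSeqNo)
+ //       .setIfPrimaryTerm(readPrimaryTerm)
+ //       .source(mapOf("field" to "value"))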
indexRequest.id(metadata.id).setIfSeqNo(metadata.seqNo).setIfPrimaryTerm(metadata.primaryTerm) + } else { + indexRequest.opType(DocWriteRequest.OpType.CREATE) } - DocWriteResponse.Result.CREATED, DocWriteResponse.Result.UPDATED -> { - log.debug("Successfully upserted MonitorMetadata:${metadata.id} ") + val response: IndexResponse = client.suspendUntil { index(indexRequest, it) } + when (response.result) { + DocWriteResponse.Result.DELETED, DocWriteResponse.Result.NOOP, DocWriteResponse.Result.NOT_FOUND, null -> { + val failureReason = + "The upsert metadata call failed with a ${response.result?.lowercase} result" + log.error(failureReason) + throw AlertingException( + failureReason, + RestStatus.INTERNAL_SERVER_ERROR, + IllegalStateException(failureReason) + ) + } + + DocWriteResponse.Result.CREATED, DocWriteResponse.Result.UPDATED -> { + log.debug("Successfully upserted MonitorMetadata:${metadata.id} ") + } } + return metadata.copy( + seqNo = response.seqNo, + primaryTerm = response.primaryTerm + ) + } else { + val failureReason = "Job index ${ScheduledJob.SCHEDULED_JOBS_INDEX} does not exist to update monitor metadata" + throw OpenSearchStatusException(failureReason, RestStatus.INTERNAL_SERVER_ERROR) } - return metadata.copy( - seqNo = response.seqNo, - primaryTerm = response.primaryTerm - ) } catch (e: Exception) { throw AlertingException.wrap(e) } @@ -216,11 +234,19 @@ object MonitorMetadataService : val lastRunContext = existingRunContext?.toMutableMap() ?: mutableMapOf() try { if (index == null) return mutableMapOf() - val getIndexRequest = GetIndexRequest().indices(index) - val getIndexResponse: GetIndexResponse = client.suspendUntil { - client.admin().indices().getIndex(getIndexRequest, it) + + val indices = mutableListOf() + if (IndexUtils.isAlias(index, clusterService.state()) || + IndexUtils.isDataStream(index, clusterService.state()) + ) { + IndexUtils.getWriteIndex(index, clusterService.state())?.let { indices.add(it) } + } else { + val getIndexRequest = GetIndexRequest().indices(index) + val getIndexResponse: GetIndexResponse = client.suspendUntil { + client.admin().indices().getIndex(getIndexRequest, it) + } + indices.addAll(getIndexResponse.indices()) } - val indices = getIndexResponse.indices() indices.forEach { indexName -> if (!lastRunContext.containsKey(indexName)) { diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/MonitorRunnerExecutionContext.kt b/alerting/src/main/kotlin/org/opensearch/alerting/MonitorRunnerExecutionContext.kt index 41a26bb79..2aed10a9a 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/MonitorRunnerExecutionContext.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/MonitorRunnerExecutionContext.kt @@ -7,6 +7,7 @@ package org.opensearch.alerting import org.opensearch.action.bulk.BackoffPolicy import org.opensearch.alerting.alerts.AlertIndices +import org.opensearch.alerting.core.lock.LockService import org.opensearch.alerting.model.destination.DestinationContextFactory import org.opensearch.alerting.settings.AlertingSettings import org.opensearch.alerting.settings.DestinationSettings @@ -18,6 +19,7 @@ import org.opensearch.cluster.service.ClusterService import org.opensearch.common.settings.Settings import org.opensearch.common.unit.TimeValue import org.opensearch.core.xcontent.NamedXContentRegistry +import org.opensearch.monitor.jvm.JvmStats import org.opensearch.script.ScriptService import org.opensearch.threadpool.ThreadPool @@ -36,6 +38,7 @@ data class MonitorRunnerExecutionContext( var alertService: AlertService? 
= null, var docLevelMonitorQueries: DocLevelMonitorQueries? = null, var workflowService: WorkflowService? = null, + var jvmStats: JvmStats? = null, @Volatile var retryPolicy: BackoffPolicy? = null, @Volatile var moveAlertsRetryPolicy: BackoffPolicy? = null, @@ -47,5 +50,13 @@ data class MonitorRunnerExecutionContext( @Volatile var destinationContextFactory: DestinationContextFactory? = null, @Volatile var maxActionableAlertCount: Long = AlertingSettings.DEFAULT_MAX_ACTIONABLE_ALERT_COUNT, - @Volatile var indexTimeout: TimeValue? = null + @Volatile var indexTimeout: TimeValue? = null, + @Volatile var findingsIndexBatchSize: Int = AlertingSettings.DEFAULT_FINDINGS_INDEXING_BATCH_SIZE, + @Volatile var fetchOnlyQueryFieldNames: Boolean = true, + @Volatile var percQueryMaxNumDocsInMemory: Int = AlertingSettings.DEFAULT_PERCOLATE_QUERY_NUM_DOCS_IN_MEMORY, + @Volatile var percQueryDocsSizeMemoryPercentageLimit: Int = + AlertingSettings.DEFAULT_PERCOLATE_QUERY_DOCS_SIZE_MEMORY_PERCENTAGE_LIMIT, + @Volatile var docLevelMonitorShardFetchSize: Int = + AlertingSettings.DEFAULT_DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE, + @Volatile var lockService: LockService? = null ) diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/MonitorRunnerService.kt b/alerting/src/main/kotlin/org/opensearch/alerting/MonitorRunnerService.kt index ca223f7a0..0763bcae4 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/MonitorRunnerService.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/MonitorRunnerService.kt @@ -17,17 +17,26 @@ import org.opensearch.alerting.alerts.AlertIndices import org.opensearch.alerting.alerts.AlertMover.Companion.moveAlerts import org.opensearch.alerting.core.JobRunner import org.opensearch.alerting.core.ScheduledJobIndices +import org.opensearch.alerting.core.lock.LockModel +import org.opensearch.alerting.core.lock.LockService import org.opensearch.alerting.model.MonitorRunResult import org.opensearch.alerting.model.WorkflowRunResult import org.opensearch.alerting.model.destination.DestinationContextFactory import org.opensearch.alerting.opensearchapi.retry +import org.opensearch.alerting.opensearchapi.suspendUntil import org.opensearch.alerting.script.TriggerExecutionContext +import org.opensearch.alerting.settings.AlertingSettings import org.opensearch.alerting.settings.AlertingSettings.Companion.ALERT_BACKOFF_COUNT import org.opensearch.alerting.settings.AlertingSettings.Companion.ALERT_BACKOFF_MILLIS +import org.opensearch.alerting.settings.AlertingSettings.Companion.DOC_LEVEL_MONITOR_FETCH_ONLY_QUERY_FIELDS_ENABLED +import org.opensearch.alerting.settings.AlertingSettings.Companion.DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE +import org.opensearch.alerting.settings.AlertingSettings.Companion.FINDINGS_INDEXING_BATCH_SIZE import org.opensearch.alerting.settings.AlertingSettings.Companion.INDEX_TIMEOUT import org.opensearch.alerting.settings.AlertingSettings.Companion.MAX_ACTIONABLE_ALERT_COUNT import org.opensearch.alerting.settings.AlertingSettings.Companion.MOVE_ALERTS_BACKOFF_COUNT import org.opensearch.alerting.settings.AlertingSettings.Companion.MOVE_ALERTS_BACKOFF_MILLIS +import org.opensearch.alerting.settings.AlertingSettings.Companion.PERCOLATE_QUERY_DOCS_SIZE_MEMORY_PERCENTAGE_LIMIT +import org.opensearch.alerting.settings.AlertingSettings.Companion.PERCOLATE_QUERY_MAX_NUM_DOCS_IN_MEMORY import org.opensearch.alerting.settings.DestinationSettings.Companion.ALLOW_LIST import org.opensearch.alerting.settings.DestinationSettings.Companion.HOST_DENY_LIST import 
org.opensearch.alerting.settings.DestinationSettings.Companion.loadDestinationSettings @@ -48,6 +57,7 @@ import org.opensearch.commons.alerting.model.action.Action import org.opensearch.commons.alerting.util.isBucketLevelMonitor import org.opensearch.core.action.ActionListener import org.opensearch.core.xcontent.NamedXContentRegistry +import org.opensearch.monitor.jvm.JvmStats import org.opensearch.script.Script import org.opensearch.script.ScriptService import org.opensearch.script.TemplateScript @@ -132,6 +142,11 @@ object MonitorRunnerService : JobRunner, CoroutineScope, AbstractLifecycleCompon return this } + fun registerJvmStats(jvmStats: JvmStats): MonitorRunnerService { + this.monitorCtx.jvmStats = jvmStats + return this + } + // Must be called after registerClusterService and registerSettings in AlertingPlugin fun registerConsumers(): MonitorRunnerService { monitorCtx.retryPolicy = BackoffPolicy.constantBackoff( @@ -169,6 +184,35 @@ object MonitorRunnerService : JobRunner, CoroutineScope, AbstractLifecycleCompon monitorCtx.indexTimeout = INDEX_TIMEOUT.get(monitorCtx.settings) + monitorCtx.findingsIndexBatchSize = FINDINGS_INDEXING_BATCH_SIZE.get(monitorCtx.settings) + monitorCtx.clusterService!!.clusterSettings.addSettingsUpdateConsumer(AlertingSettings.FINDINGS_INDEXING_BATCH_SIZE) { + monitorCtx.findingsIndexBatchSize = it + } + + monitorCtx.fetchOnlyQueryFieldNames = DOC_LEVEL_MONITOR_FETCH_ONLY_QUERY_FIELDS_ENABLED.get(monitorCtx.settings) + monitorCtx.clusterService!!.clusterSettings.addSettingsUpdateConsumer(DOC_LEVEL_MONITOR_FETCH_ONLY_QUERY_FIELDS_ENABLED) { + monitorCtx.fetchOnlyQueryFieldNames = it + } + + monitorCtx.percQueryMaxNumDocsInMemory = PERCOLATE_QUERY_MAX_NUM_DOCS_IN_MEMORY.get(monitorCtx.settings) + monitorCtx.clusterService!!.clusterSettings.addSettingsUpdateConsumer(PERCOLATE_QUERY_MAX_NUM_DOCS_IN_MEMORY) { + monitorCtx.percQueryMaxNumDocsInMemory = it + } + + monitorCtx.percQueryDocsSizeMemoryPercentageLimit = + PERCOLATE_QUERY_DOCS_SIZE_MEMORY_PERCENTAGE_LIMIT.get(monitorCtx.settings) + monitorCtx.clusterService!!.clusterSettings + .addSettingsUpdateConsumer(PERCOLATE_QUERY_DOCS_SIZE_MEMORY_PERCENTAGE_LIMIT) { + monitorCtx.percQueryDocsSizeMemoryPercentageLimit = it + } + + monitorCtx.docLevelMonitorShardFetchSize = + DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE.get(monitorCtx.settings) + monitorCtx.clusterService!!.clusterSettings + .addSettingsUpdateConsumer(DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE) { + monitorCtx.docLevelMonitorShardFetchSize = it + } + return this } @@ -180,6 +224,11 @@ object MonitorRunnerService : JobRunner, CoroutineScope, AbstractLifecycleCompon return this } + fun registerLockService(lockService: LockService): MonitorRunnerService { + monitorCtx.lockService = lockService + return this + } + // Updates destination settings when the reload API is called so that new keystore values are visible fun reloadDestinationSettings(settings: Settings) { monitorCtx.destinationSettings = loadDestinationSettings(settings) @@ -251,12 +300,40 @@ object MonitorRunnerService : JobRunner, CoroutineScope, AbstractLifecycleCompon when (job) { is Workflow -> { launch { - runJob(job, periodStart, periodEnd, false) + var lock: LockModel? 
= null + try { + lock = monitorCtx.client!!.suspendUntil { + monitorCtx.lockService!!.acquireLock(job, it) + } ?: return@launch + logger.debug("lock ${lock!!.lockId} acquired") + logger.debug( + "PERF_DEBUG: executing workflow ${job.id} on node " + + monitorCtx.clusterService!!.state().nodes().localNode.id + ) + runJob(job, periodStart, periodEnd, false) + } finally { + monitorCtx.client!!.suspendUntil { monitorCtx.lockService!!.release(lock, it) } + logger.debug("lock ${lock!!.lockId} released") + } } } is Monitor -> { launch { - runJob(job, periodStart, periodEnd, false) + var lock: LockModel? = null + try { + lock = monitorCtx.client!!.suspendUntil { + monitorCtx.lockService!!.acquireLock(job, it) + } ?: return@launch + logger.debug("lock ${lock!!.lockId} acquired") + logger.debug( + "PERF_DEBUG: executing ${job.monitorType} ${job.id} on node " + + monitorCtx.clusterService!!.state().nodes().localNode.id + ) + runJob(job, periodStart, periodEnd, false) + } finally { + monitorCtx.client!!.suspendUntil { monitorCtx.lockService!!.release(lock, it) } + logger.debug("lock ${lock!!.lockId} released") + } } } else -> { @@ -300,7 +377,7 @@ object MonitorRunnerService : JobRunner, CoroutineScope, AbstractLifecycleCompon val runResult = if (monitor.isBucketLevelMonitor()) { BucketLevelMonitorRunner.runMonitor(monitor, monitorCtx, periodStart, periodEnd, dryrun, executionId = executionId) } else if (monitor.isDocLevelMonitor()) { - DocumentLevelMonitorRunner.runMonitor(monitor, monitorCtx, periodStart, periodEnd, dryrun, executionId = executionId) + DocumentLevelMonitorRunner().runMonitor(monitor, monitorCtx, periodStart, periodEnd, dryrun, executionId = executionId) } else { QueryLevelMonitorRunner.runMonitor(monitor, monitorCtx, periodStart, periodEnd, dryrun, executionId = executionId) } diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/QueryLevelMonitorRunner.kt b/alerting/src/main/kotlin/org/opensearch/alerting/QueryLevelMonitorRunner.kt index 691071517..a77121069 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/QueryLevelMonitorRunner.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/QueryLevelMonitorRunner.kt @@ -11,6 +11,7 @@ import org.opensearch.alerting.model.QueryLevelTriggerRunResult import org.opensearch.alerting.opensearchapi.InjectorContextElement import org.opensearch.alerting.opensearchapi.withClosableContext import org.opensearch.alerting.script.QueryLevelTriggerExecutionContext +import org.opensearch.alerting.settings.AlertingSettings import org.opensearch.alerting.util.isADMonitor import org.opensearch.alerting.workflow.WorkflowRunContext import org.opensearch.commons.alerting.model.Alert @@ -65,7 +66,21 @@ object QueryLevelMonitorRunner : MonitorRunner() { for (trigger in monitor.triggers) { val currentAlert = currentAlerts[trigger] val triggerCtx = QueryLevelTriggerExecutionContext(monitor, trigger as QueryLevelTrigger, monitorResult, currentAlert) - val triggerResult = monitorCtx.triggerService!!.runQueryLevelTrigger(monitor, trigger, triggerCtx) + val triggerResult = when (monitor.monitorType) { + Monitor.MonitorType.QUERY_LEVEL_MONITOR -> + monitorCtx.triggerService!!.runQueryLevelTrigger(monitor, trigger, triggerCtx) + Monitor.MonitorType.CLUSTER_METRICS_MONITOR -> { + val remoteMonitoringEnabled = + monitorCtx.clusterService!!.clusterSettings.get(AlertingSettings.REMOTE_MONITORING_ENABLED) + logger.debug("Remote monitoring enabled: {}", remoteMonitoringEnabled) + if (remoteMonitoringEnabled) + 
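+ // When remote monitoring is enabled, cluster metrics triggers are evaluated per
+ // cluster via runClusterMetricsTrigger below; otherwise the monitor falls back to
+ // the plain query-level trigger path, so existing single-cluster monitors keep
+ // their current behavior.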
monitorCtx.triggerService!!.runClusterMetricsTrigger(monitor, trigger, triggerCtx, monitorCtx.clusterService!!) + else monitorCtx.triggerService!!.runQueryLevelTrigger(monitor, trigger, triggerCtx) + } + else -> + throw IllegalArgumentException("Unsupported monitor type: ${monitor.monitorType.name}.") + } + triggerResults[trigger.id] = triggerResult if (monitorCtx.triggerService!!.isQueryLevelTriggerActionable(triggerCtx, triggerResult, workflowRunContext)) { diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/TriggerService.kt b/alerting/src/main/kotlin/org/opensearch/alerting/TriggerService.kt index f2356eddf..21ba32475 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/TriggerService.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/TriggerService.kt @@ -9,6 +9,8 @@ import org.apache.logging.log4j.LogManager import org.opensearch.alerting.chainedAlertCondition.parsers.ChainedAlertExpressionParser import org.opensearch.alerting.model.BucketLevelTriggerRunResult import org.opensearch.alerting.model.ChainedAlertTriggerRunResult +import org.opensearch.alerting.model.ClusterMetricsTriggerRunResult +import org.opensearch.alerting.model.ClusterMetricsTriggerRunResult.ClusterTriggerResult import org.opensearch.alerting.model.DocumentLevelTriggerRunResult import org.opensearch.alerting.model.QueryLevelTriggerRunResult import org.opensearch.alerting.script.BucketLevelTriggerExecutionContext @@ -16,8 +18,10 @@ import org.opensearch.alerting.script.ChainedAlertTriggerExecutionContext import org.opensearch.alerting.script.QueryLevelTriggerExecutionContext import org.opensearch.alerting.script.TriggerScript import org.opensearch.alerting.triggercondition.parsers.TriggerExpressionParser +import org.opensearch.alerting.util.CrossClusterMonitorUtils import org.opensearch.alerting.util.getBucketKeysHash import org.opensearch.alerting.workflow.WorkflowRunContext +import org.opensearch.cluster.service.ClusterService import org.opensearch.commons.alerting.aggregation.bucketselectorext.BucketSelectorIndices.Fields.BUCKET_INDICES import org.opensearch.commons.alerting.aggregation.bucketselectorext.BucketSelectorIndices.Fields.PARENT_BUCKET_PATH import org.opensearch.commons.alerting.model.AggregationResultBucket @@ -79,6 +83,52 @@ class TriggerService(val scriptService: ScriptService) { } } + fun runClusterMetricsTrigger( + monitor: Monitor, + trigger: QueryLevelTrigger, + ctx: QueryLevelTriggerExecutionContext, + clusterService: ClusterService + ): ClusterMetricsTriggerRunResult { + var runResult: ClusterMetricsTriggerRunResult? 
+ try { + val inputResults = ctx.results.getOrElse(0) { mapOf() } + var triggered = false + val clusterTriggerResults = mutableListOf() + if (CrossClusterMonitorUtils.isRemoteMonitor(monitor, clusterService)) { + inputResults.forEach { clusterResult -> + // Reducing the inputResults to only include results from 1 cluster at a time + val clusterTriggerCtx = ctx.copy(results = listOf(mapOf(clusterResult.toPair()))) + + val clusterTriggered = scriptService.compile(trigger.condition, TriggerScript.CONTEXT) + .newInstance(trigger.condition.params) + .execute(clusterTriggerCtx) + + if (clusterTriggered) { + triggered = clusterTriggered + clusterTriggerResults.add(ClusterTriggerResult(cluster = clusterResult.key, triggered = clusterTriggered)) + } + } + } else { + triggered = scriptService.compile(trigger.condition, TriggerScript.CONTEXT) + .newInstance(trigger.condition.params) + .execute(ctx) + if (triggered) clusterTriggerResults + .add(ClusterTriggerResult(cluster = clusterService.clusterName.value(), triggered = triggered)) + } + runResult = ClusterMetricsTriggerRunResult( + triggerName = trigger.name, + triggered = triggered, + error = null, + clusterTriggerResults = clusterTriggerResults + ) + } catch (e: Exception) { + logger.info("Error running script for monitor ${monitor.id}, trigger: ${trigger.id}", e) + // if the script fails we need to send an alert so set triggered = true + runResult = ClusterMetricsTriggerRunResult(trigger.name, true, e) + } + return runResult!! + } + // TODO: improve performance and support match all and match any fun runDocLevelTrigger( monitor: Monitor, diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorAction.kt deleted file mode 100644 index da209b983..000000000 --- a/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorAction.kt +++ /dev/null @@ -1,15 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.action - -import org.opensearch.action.ActionType - -class GetMonitorAction private constructor() : ActionType(NAME, ::GetMonitorResponse) { - companion object { - val INSTANCE = GetMonitorAction() - const val NAME = "cluster:admin/opendistro/alerting/monitor/get" - } -} diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorRequest.kt b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorRequest.kt deleted file mode 100644 index 365a31f8f..000000000 --- a/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorRequest.kt +++ /dev/null @@ -1,56 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.action - -import org.opensearch.action.ActionRequest -import org.opensearch.action.ActionRequestValidationException -import org.opensearch.core.common.io.stream.StreamInput -import org.opensearch.core.common.io.stream.StreamOutput -import org.opensearch.rest.RestRequest -import org.opensearch.search.fetch.subphase.FetchSourceContext -import java.io.IOException - -class GetMonitorRequest : ActionRequest { - val monitorId: String - val version: Long - val method: RestRequest.Method - val srcContext: FetchSourceContext? - - constructor( - monitorId: String, - version: Long, - method: RestRequest.Method, - srcContext: FetchSourceContext? 
- ) : super() { - this.monitorId = monitorId - this.version = version - this.method = method - this.srcContext = srcContext - } - - @Throws(IOException::class) - constructor(sin: StreamInput) : this( - sin.readString(), // monitorId - sin.readLong(), // version - sin.readEnum(RestRequest.Method::class.java), // method - if (sin.readBoolean()) { - FetchSourceContext(sin) // srcContext - } else null - ) - - override fun validate(): ActionRequestValidationException? { - return null - } - - @Throws(IOException::class) - override fun writeTo(out: StreamOutput) { - out.writeString(monitorId) - out.writeLong(version) - out.writeEnum(method) - out.writeBoolean(srcContext != null) - srcContext?.writeTo(out) - } -} diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorResponse.kt b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorResponse.kt deleted file mode 100644 index ab69a1582..000000000 --- a/alerting/src/main/kotlin/org/opensearch/alerting/action/GetMonitorResponse.kt +++ /dev/null @@ -1,133 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.action - -import org.opensearch.commons.alerting.model.Monitor -import org.opensearch.commons.alerting.util.IndexUtils.Companion._ID -import org.opensearch.commons.alerting.util.IndexUtils.Companion._PRIMARY_TERM -import org.opensearch.commons.alerting.util.IndexUtils.Companion._SEQ_NO -import org.opensearch.commons.alerting.util.IndexUtils.Companion._VERSION -import org.opensearch.core.action.ActionResponse -import org.opensearch.core.common.io.stream.StreamInput -import org.opensearch.core.common.io.stream.StreamOutput -import org.opensearch.core.rest.RestStatus -import org.opensearch.core.xcontent.ToXContent -import org.opensearch.core.xcontent.ToXContentFragment -import org.opensearch.core.xcontent.ToXContentObject -import org.opensearch.core.xcontent.XContentBuilder -import java.io.IOException - -class GetMonitorResponse : ActionResponse, ToXContentObject { - var id: String - var version: Long - var seqNo: Long - var primaryTerm: Long - var status: RestStatus - var monitor: Monitor? - var associatedWorkflows: List? 
- - constructor( - id: String, - version: Long, - seqNo: Long, - primaryTerm: Long, - status: RestStatus, - monitor: Monitor?, - associatedCompositeMonitors: List?, - ) : super() { - this.id = id - this.version = version - this.seqNo = seqNo - this.primaryTerm = primaryTerm - this.status = status - this.monitor = monitor - this.associatedWorkflows = associatedCompositeMonitors ?: emptyList() - } - - @Throws(IOException::class) - constructor(sin: StreamInput) : this( - id = sin.readString(), // id - version = sin.readLong(), // version - seqNo = sin.readLong(), // seqNo - primaryTerm = sin.readLong(), // primaryTerm - status = sin.readEnum(RestStatus::class.java), // RestStatus - monitor = if (sin.readBoolean()) { - Monitor.readFrom(sin) // monitor - } else null, - associatedCompositeMonitors = sin.readList((AssociatedWorkflow)::readFrom), - ) - - @Throws(IOException::class) - override fun writeTo(out: StreamOutput) { - out.writeString(id) - out.writeLong(version) - out.writeLong(seqNo) - out.writeLong(primaryTerm) - out.writeEnum(status) - if (monitor != null) { - out.writeBoolean(true) - monitor?.writeTo(out) - } else { - out.writeBoolean(false) - } - associatedWorkflows?.forEach { - it.writeTo(out) - } - } - - @Throws(IOException::class) - override fun toXContent(builder: XContentBuilder, params: ToXContent.Params): XContentBuilder { - builder.startObject() - .field(_ID, id) - .field(_VERSION, version) - .field(_SEQ_NO, seqNo) - .field(_PRIMARY_TERM, primaryTerm) - if (monitor != null) { - builder.field("monitor", monitor) - } - if (associatedWorkflows != null) { - builder.field("associated_workflows", associatedWorkflows!!.toTypedArray()) - } - return builder.endObject() - } - - class AssociatedWorkflow : ToXContentFragment { - val id: String - val name: String - - constructor(id: String, name: String) { - this.id = id - this.name = name - } - - override fun toXContent(builder: XContentBuilder, params: ToXContent.Params?): XContentBuilder { - builder.startObject() - .field("id", id) - .field("name", name) - .endObject() - return builder - } - - fun writeTo(out: StreamOutput) { - out.writeString(id) - out.writeString(name) - } - - @Throws(IOException::class) - constructor(sin: StreamInput) : this( - sin.readString(), - sin.readString() - ) - - companion object { - @JvmStatic - @Throws(IOException::class) - fun readFrom(sin: StreamInput): AssociatedWorkflow { - return AssociatedWorkflow(sin) - } - } - } -} diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesAction.kt new file mode 100644 index 000000000..059110af4 --- /dev/null +++ b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesAction.kt @@ -0,0 +1,15 @@ +/* + * Copyright OpenSearch Contributors + * SPDX-License-Identifier: Apache-2.0 + */ + +package org.opensearch.alerting.action + +import org.opensearch.action.ActionType + +class GetRemoteIndexesAction private constructor() : ActionType(NAME, ::GetRemoteIndexesResponse) { + companion object { + val INSTANCE = GetRemoteIndexesAction() + const val NAME = "cluster:admin/opensearch/alerting/remote/indexes/get" + } +} diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/action/SearchMonitorRequest.kt b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesRequest.kt similarity index 53% rename from alerting/src/main/kotlin/org/opensearch/alerting/action/SearchMonitorRequest.kt rename to 
alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesRequest.kt
index 069b1ed4e..733bc3a04 100644
--- a/alerting/src/main/kotlin/org/opensearch/alerting/action/SearchMonitorRequest.kt
+++ b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesRequest.kt
@@ -7,24 +7,23 @@ package org.opensearch.alerting.action
import org.opensearch.action.ActionRequest
import org.opensearch.action.ActionRequestValidationException
-import org.opensearch.action.search.SearchRequest
import org.opensearch.core.common.io.stream.StreamInput
import org.opensearch.core.common.io.stream.StreamOutput
import java.io.IOException
-class SearchMonitorRequest : ActionRequest {
+class GetRemoteIndexesRequest : ActionRequest {
+ var indexes: List<String> = listOf()
+ var includeMappings: Boolean
- val searchRequest: SearchRequest
-
- constructor(
- searchRequest: SearchRequest
- ) : super() {
- this.searchRequest = searchRequest
+ constructor(indexes: List<String>, includeMappings: Boolean) : super() {
+ this.indexes = indexes
+ this.includeMappings = includeMappings
}
@Throws(IOException::class)
constructor(sin: StreamInput) : this(
- searchRequest = SearchRequest(sin)
+ sin.readStringList(),
+ sin.readBoolean()
)
override fun validate(): ActionRequestValidationException? { @@ -33,6 +32,12 @@ class SearchMonitorRequest : ActionRequest {
@Throws(IOException::class)
override fun writeTo(out: StreamOutput) {
- searchRequest.writeTo(out)
+ out.writeStringArray(indexes.toTypedArray())
+ out.writeBoolean(includeMappings)
+ }
+
+ companion object {
+ const val INDEXES_FIELD = "indexes"
+ const val INCLUDE_MAPPINGS_FIELD = "include_mappings"
}
}
diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesResponse.kt b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesResponse.kt
new file mode 100644
index 000000000..1572b4228
--- /dev/null
+++ b/alerting/src/main/kotlin/org/opensearch/alerting/action/GetRemoteIndexesResponse.kt
@@ -0,0 +1,135 @@
+/*
+ * Copyright OpenSearch Contributors
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+package org.opensearch.alerting.action
+
+import org.opensearch.cluster.health.ClusterHealthStatus
+import org.opensearch.cluster.metadata.MappingMetadata
+import org.opensearch.core.action.ActionResponse
+import org.opensearch.core.common.io.stream.StreamInput
+import org.opensearch.core.common.io.stream.StreamOutput
+import org.opensearch.core.common.io.stream.Writeable
+import org.opensearch.core.xcontent.ToXContent
+import org.opensearch.core.xcontent.ToXContentObject
+import org.opensearch.core.xcontent.XContentBuilder
+import java.io.IOException
+
+class GetRemoteIndexesResponse : ActionResponse, ToXContentObject {
+ var clusterIndexes: List<ClusterIndexes> = emptyList()
+
+ constructor(clusterIndexes: List<ClusterIndexes>) : super() {
+ this.clusterIndexes = clusterIndexes
+ }
+
+ @Throws(IOException::class)
+ constructor(sin: StreamInput) : this(
+ clusterIndexes = sin.readList((ClusterIndexes.Companion)::readFrom)
+ )
+
+ override fun toXContent(builder: XContentBuilder, params: ToXContent.Params): XContentBuilder {
+ builder.startObject()
+ clusterIndexes.forEach {
+ it.toXContent(builder, params)
+ }
+ return builder.endObject()
+ }
+
+ override fun writeTo(out: StreamOutput) {
+ out.writeList(clusterIndexes)
+ }
+
+ data class ClusterIndexes(
+ val clusterName: String,
+ val clusterHealth: ClusterHealthStatus,
+ val hubCluster: Boolean,
+ val indexes: List<ClusterIndex> = listOf(),
+ val latency: Long
+ ) : ToXContentObject, Writeable {
+
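+ // Writeable contract: writeTo must emit exactly the fields, in exactly the order,
+ // that the StreamInput constructor below reads (clusterName, clusterHealth,
+ // hubCluster, indexes, latency); list fields carry a size prefix, which is why
+ // writeList/readList are paired instead of a bare forEach over the elements.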
@Throws(IOException::class)
+ constructor(sin: StreamInput) : this(
+ clusterName = sin.readString(),
+ clusterHealth = sin.readEnum(ClusterHealthStatus::class.java),
+ hubCluster = sin.readBoolean(),
+ indexes = sin.readList((ClusterIndex.Companion)::readFrom),
+ latency = sin.readLong()
+ )
+
+ override fun toXContent(builder: XContentBuilder, params: ToXContent.Params): XContentBuilder {
+ builder.startObject(clusterName)
+ builder.field(CLUSTER_NAME_FIELD, clusterName)
+ builder.field(CLUSTER_HEALTH_FIELD, clusterHealth)
+ builder.field(HUB_CLUSTER_FIELD, hubCluster)
+ builder.field(INDEX_LATENCY_FIELD, latency)
+ builder.startObject(INDEXES_FIELD)
+ indexes.forEach {
+ it.toXContent(builder, params)
+ }
+ return builder.endObject().endObject()
+ }
+
+ override fun writeTo(out: StreamOutput) {
+ out.writeString(clusterName)
+ out.writeEnum(clusterHealth)
+ out.writeBoolean(hubCluster)
+ out.writeList(indexes)
+ out.writeLong(latency)
+ }
+
+ companion object {
+ const val CLUSTER_NAME_FIELD = "cluster"
+ const val CLUSTER_HEALTH_FIELD = "health"
+ const val HUB_CLUSTER_FIELD = "hub_cluster"
+ const val INDEXES_FIELD = "indexes"
+ const val INDEX_LATENCY_FIELD = "latency"
+
+ @JvmStatic
+ @Throws(IOException::class)
+ fun readFrom(sin: StreamInput): ClusterIndexes {
+ return ClusterIndexes(sin)
+ }
+ }
+
+ data class ClusterIndex(
+ val indexName: String,
+ val indexHealth: ClusterHealthStatus?,
+ val mappings: MappingMetadata?
+ ) : ToXContentObject, Writeable {
+
+ @Throws(IOException::class)
+ constructor(sin: StreamInput) : this(
+ indexName = sin.readString(),
+ indexHealth = sin.readEnum(ClusterHealthStatus::class.java),
+ mappings = sin.readOptionalWriteable(::MappingMetadata)
+ )
+
+ override fun toXContent(builder: XContentBuilder, params: ToXContent.Params): XContentBuilder {
+ builder.startObject(indexName)
+ builder.field(INDEX_NAME_FIELD, indexName)
+ builder.field(INDEX_HEALTH_FIELD, indexHealth)
+ if (mappings == null) builder.startObject(MAPPINGS_FIELD).endObject()
+ else builder.field(MAPPINGS_FIELD, mappings.sourceAsMap())
+ return builder.endObject()
+ }
+
+ override fun writeTo(out: StreamOutput) {
+ out.writeString(indexName)
+ out.writeEnum(indexHealth)
+ out.writeOptionalWriteable(mappings)
+ }
+
+ companion object {
+ const val INDEX_NAME_FIELD = "name"
+ const val INDEX_HEALTH_FIELD = "health"
+ const val MAPPINGS_FIELD = "mappings"
+
+ @JvmStatic
+ @Throws(IOException::class)
+ fun readFrom(sin: StreamInput): ClusterIndex {
+ return ClusterIndex(sin)
+ }
+ }
+ }
+ }
+}
diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/action/SearchMonitorAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/action/SearchMonitorAction.kt
deleted file mode 100644
index 16725fc39..000000000
--- a/alerting/src/main/kotlin/org/opensearch/alerting/action/SearchMonitorAction.kt
+++ /dev/null
@@ -1,16 +0,0 @@
-/*
- * Copyright OpenSearch Contributors
- * SPDX-License-Identifier: Apache-2.0
- */
-
-package org.opensearch.alerting.action
-
-import org.opensearch.action.ActionType
-import org.opensearch.action.search.SearchResponse
-
-class SearchMonitorAction private constructor() : ActionType<SearchResponse>(NAME, ::SearchResponse) {
- companion object {
- val INSTANCE = SearchMonitorAction()
- const val NAME = "cluster:admin/opendistro/alerting/monitor/search"
- }
-}
diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/model/ClusterMetricsTriggerRunResult.kt b/alerting/src/main/kotlin/org/opensearch/alerting/model/ClusterMetricsTriggerRunResult.kt
new file mode 100644
index 000000000..a19de0637 --- /dev/null +++ b/alerting/src/main/kotlin/org/opensearch/alerting/model/ClusterMetricsTriggerRunResult.kt @@ -0,0 +1,110 @@ +/* + * Copyright OpenSearch Contributors + * SPDX-License-Identifier: Apache-2.0 + */ + +package org.opensearch.alerting.model + +import org.opensearch.commons.alerting.alerts.AlertError +import org.opensearch.core.common.io.stream.StreamInput +import org.opensearch.core.common.io.stream.StreamOutput +import org.opensearch.core.common.io.stream.Writeable +import org.opensearch.core.xcontent.ToXContent +import org.opensearch.core.xcontent.ToXContentObject +import org.opensearch.core.xcontent.XContentBuilder +import org.opensearch.script.ScriptException +import java.io.IOException +import java.time.Instant + +data class ClusterMetricsTriggerRunResult( + override var triggerName: String, + override var triggered: Boolean, + override var error: Exception?, + override var actionResults: MutableMap = mutableMapOf(), + var clusterTriggerResults: List = listOf() +) : QueryLevelTriggerRunResult( + triggerName = triggerName, + error = error, + triggered = triggered, + actionResults = actionResults +) { + + @Throws(IOException::class) + @Suppress("UNCHECKED_CAST") + constructor(sin: StreamInput) : this( + triggerName = sin.readString(), + error = sin.readException(), + triggered = sin.readBoolean(), + actionResults = sin.readMap() as MutableMap, + clusterTriggerResults = sin.readList((ClusterTriggerResult.Companion)::readFrom) + ) + + override fun alertError(): AlertError? { + if (error != null) { + return AlertError(Instant.now(), "Failed evaluating trigger:\n${error!!.userErrorMessage()}") + } + for (actionResult in actionResults.values) { + if (actionResult.error != null) { + return AlertError(Instant.now(), "Failed running action:\n${actionResult.error.userErrorMessage()}") + } + } + return null + } + + override fun internalXContent(builder: XContentBuilder, params: ToXContent.Params): XContentBuilder { + if (error is ScriptException) error = Exception((error as ScriptException).toJsonString(), error) + builder + .field(TRIGGERED_FIELD, triggered) + .field(ACTION_RESULTS_FIELD, actionResults as Map) + .startArray(CLUSTER_RESULTS_FIELD) + clusterTriggerResults.forEach { it.toXContent(builder, params) } + return builder.endArray() + } + + @Throws(IOException::class) + override fun writeTo(out: StreamOutput) { + super.writeTo(out) + out.writeBoolean(triggered) + out.writeMap(actionResults as Map) + clusterTriggerResults.forEach { it.writeTo(out) } + } + + companion object { + const val TRIGGERED_FIELD = "triggered" + const val ACTION_RESULTS_FIELD = "action_results" + const val CLUSTER_RESULTS_FIELD = "cluster_results" + } + + data class ClusterTriggerResult( + val cluster: String, + val triggered: Boolean, + ) : ToXContentObject, Writeable { + + @Throws(IOException::class) + constructor(sin: StreamInput) : this( + cluster = sin.readString(), + triggered = sin.readBoolean() + ) + + override fun toXContent(builder: XContentBuilder, params: ToXContent.Params): XContentBuilder { + return builder.startObject() + .startObject(cluster) + .field(TRIGGERED_FIELD, triggered) + .endObject() + .endObject() + } + + override fun writeTo(out: StreamOutput) { + out.writeString(cluster) + out.writeBoolean(triggered) + } + + companion object { + @JvmStatic + @Throws(IOException::class) + fun readFrom(sin: StreamInput): ClusterTriggerResult { + return ClusterTriggerResult(sin) + } + } + } +} diff --git 
a/alerting/src/main/kotlin/org/opensearch/alerting/model/DocumentExecutionContext.kt b/alerting/src/main/kotlin/org/opensearch/alerting/model/DocumentExecutionContext.kt deleted file mode 100644 index 0caad1f4a..000000000 --- a/alerting/src/main/kotlin/org/opensearch/alerting/model/DocumentExecutionContext.kt +++ /dev/null @@ -1,14 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.model - -import org.opensearch.commons.alerting.model.DocLevelQuery - -data class DocumentExecutionContext( - val queries: List, - val lastRunContext: Map, - val updatedLastRunContext: Map -) diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/model/IndexExecutionContext.kt b/alerting/src/main/kotlin/org/opensearch/alerting/model/IndexExecutionContext.kt new file mode 100644 index 000000000..e7aa707f9 --- /dev/null +++ b/alerting/src/main/kotlin/org/opensearch/alerting/model/IndexExecutionContext.kt @@ -0,0 +1,19 @@ +/* + * Copyright OpenSearch Contributors + * SPDX-License-Identifier: Apache-2.0 + */ + +package org.opensearch.alerting.model + +import org.opensearch.commons.alerting.model.DocLevelQuery + +/** DTO that contains all the necessary context for fetching data from shard and performing percolate queries */ +data class IndexExecutionContext( + val queries: List, + val lastRunContext: MutableMap, + val updatedLastRunContext: MutableMap, + val indexName: String, + val concreteIndexName: String, + val conflictingFields: List, + val docIds: List? = null, +) diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/model/QueryLevelTriggerRunResult.kt b/alerting/src/main/kotlin/org/opensearch/alerting/model/QueryLevelTriggerRunResult.kt index d123dbae4..5917c1ecf 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/model/QueryLevelTriggerRunResult.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/model/QueryLevelTriggerRunResult.kt @@ -14,11 +14,11 @@ import org.opensearch.script.ScriptException import java.io.IOException import java.time.Instant -data class QueryLevelTriggerRunResult( +open class QueryLevelTriggerRunResult( override var triggerName: String, - var triggered: Boolean, + open var triggered: Boolean, override var error: Exception?, - var actionResults: MutableMap = mutableMapOf() + open var actionResults: MutableMap = mutableMapOf() ) : TriggerRunResult(triggerName, error) { @Throws(IOException::class) diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestGetMonitorAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestGetMonitorAction.kt index 4d9bac033..54270b717 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestGetMonitorAction.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestGetMonitorAction.kt @@ -6,10 +6,10 @@ package org.opensearch.alerting.resthandler import org.apache.logging.log4j.LogManager import org.opensearch.alerting.AlertingPlugin -import org.opensearch.alerting.action.GetMonitorAction -import org.opensearch.alerting.action.GetMonitorRequest import org.opensearch.alerting.util.context import org.opensearch.client.node.NodeClient +import org.opensearch.commons.alerting.action.AlertingActions +import org.opensearch.commons.alerting.action.GetMonitorRequest import org.opensearch.rest.BaseRestHandler import org.opensearch.rest.BaseRestHandler.RestChannelConsumer import org.opensearch.rest.RestHandler.ReplacedRoute @@ -69,7 +69,7 @@ class RestGetMonitorAction : BaseRestHandler() 
{ val getMonitorRequest = GetMonitorRequest(monitorId, RestActions.parseVersion(request), request.method(), srcContext) return RestChannelConsumer { channel -> - client.execute(GetMonitorAction.INSTANCE, getMonitorRequest, RestToXContentListener(channel)) + client.execute(AlertingActions.GET_MONITOR_ACTION_TYPE, getMonitorRequest, RestToXContentListener(channel)) } } } diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestGetRemoteIndexesAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestGetRemoteIndexesAction.kt new file mode 100644 index 000000000..591ab2c3e --- /dev/null +++ b/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestGetRemoteIndexesAction.kt @@ -0,0 +1,50 @@ +/* + * Copyright OpenSearch Contributors + * SPDX-License-Identifier: Apache-2.0 + */ + +package org.opensearch.alerting.resthandler + +import org.apache.logging.log4j.LogManager +import org.opensearch.alerting.AlertingPlugin +import org.opensearch.alerting.action.GetRemoteIndexesAction +import org.opensearch.alerting.action.GetRemoteIndexesRequest +import org.opensearch.client.node.NodeClient +import org.opensearch.core.common.Strings +import org.opensearch.rest.BaseRestHandler +import org.opensearch.rest.RestHandler +import org.opensearch.rest.RestRequest +import org.opensearch.rest.action.RestToXContentListener + +private val log = LogManager.getLogger(RestGetRemoteIndexesAction::class.java) + +class RestGetRemoteIndexesAction : BaseRestHandler() { + val ROUTE = "${AlertingPlugin.REMOTE_BASE_URI}/indexes" + + override fun getName(): String { + return "get_remote_indexes_action" + } + + override fun routes(): List { + return mutableListOf( + RestHandler.Route(RestRequest.Method.GET, ROUTE) + ) + } + + override fun prepareRequest(request: RestRequest, client: NodeClient): RestChannelConsumer { + log.debug("${request.method()} $ROUTE") + val indexes = Strings.splitStringByCommaToArray(request.param(GetRemoteIndexesRequest.INDEXES_FIELD, "")) + val includeMappings = request.paramAsBoolean(GetRemoteIndexesRequest.INCLUDE_MAPPINGS_FIELD, false) + return RestChannelConsumer { + channel -> + client.execute( + GetRemoteIndexesAction.INSTANCE, + GetRemoteIndexesRequest( + indexes = indexes.toList(), + includeMappings = includeMappings + ), + RestToXContentListener(channel) + ) + } + } +} diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestSearchMonitorAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestSearchMonitorAction.kt index 4d4ea47bf..1bf51678e 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestSearchMonitorAction.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/resthandler/RestSearchMonitorAction.kt @@ -9,8 +9,6 @@ import org.apache.logging.log4j.LogManager import org.opensearch.action.search.SearchRequest import org.opensearch.action.search.SearchResponse import org.opensearch.alerting.AlertingPlugin -import org.opensearch.alerting.action.SearchMonitorAction -import org.opensearch.alerting.action.SearchMonitorRequest import org.opensearch.alerting.alerts.AlertIndices.Companion.ALL_ALERT_INDEX_PATTERN import org.opensearch.alerting.settings.AlertingSettings import org.opensearch.alerting.util.context @@ -20,6 +18,8 @@ import org.opensearch.common.settings.Settings import org.opensearch.common.xcontent.LoggingDeprecationHandler import org.opensearch.common.xcontent.XContentFactory.jsonBuilder import org.opensearch.common.xcontent.XContentType +import 
org.opensearch.commons.alerting.action.AlertingActions +import org.opensearch.commons.alerting.action.SearchMonitorRequest import org.opensearch.commons.alerting.model.ScheduledJob import org.opensearch.commons.alerting.model.ScheduledJob.Companion.SCHEDULED_JOBS_INDEX import org.opensearch.core.common.bytes.BytesReference @@ -101,7 +101,7 @@ class RestSearchMonitorAction( val searchMonitorRequest = SearchMonitorRequest(searchRequest) return RestChannelConsumer { channel -> - client.execute(SearchMonitorAction.INSTANCE, searchMonitorRequest, searchMonitorResponse(channel)) + client.execute(AlertingActions.SEARCH_MONITORS_ACTION_TYPE, searchMonitorRequest, searchMonitorResponse(channel)) } } diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/service/DeleteMonitorService.kt b/alerting/src/main/kotlin/org/opensearch/alerting/service/DeleteMonitorService.kt index b26ae2473..2ba1fbd17 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/service/DeleteMonitorService.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/service/DeleteMonitorService.kt @@ -22,6 +22,8 @@ import org.opensearch.action.support.IndicesOptions import org.opensearch.action.support.WriteRequest.RefreshPolicy import org.opensearch.action.support.master.AcknowledgedResponse import org.opensearch.alerting.MonitorMetadataService +import org.opensearch.alerting.core.lock.LockModel +import org.opensearch.alerting.core.lock.LockService import org.opensearch.alerting.opensearchapi.suspendUntil import org.opensearch.alerting.util.AlertingException import org.opensearch.alerting.util.ScheduledJobUtils.Companion.WORKFLOW_DELEGATE_PATH @@ -48,11 +50,14 @@ object DeleteMonitorService : private val log = LogManager.getLogger(this.javaClass) private lateinit var client: Client + private lateinit var lockService: LockService fun initialize( client: Client, + lockService: LockService ) { DeleteMonitorService.client = client + DeleteMonitorService.lockService = lockService } /** @@ -64,6 +69,7 @@ object DeleteMonitorService : val deleteResponse = deleteMonitor(monitor.id, refreshPolicy) deleteDocLevelMonitorQueriesAndIndices(monitor) deleteMetadata(monitor) + deleteLock(monitor) return DeleteMonitorResponse(deleteResponse.id, deleteResponse.version) } @@ -147,6 +153,10 @@ object DeleteMonitorService : } } + private suspend fun deleteLock(monitor: Monitor) { + client.suspendUntil { lockService.deleteLock(LockModel.generateLockId(monitor.id), it) } + } + /** * Checks if the monitor is part of the workflow * diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/settings/AlertingSettings.kt b/alerting/src/main/kotlin/org/opensearch/alerting/settings/AlertingSettings.kt index 7dd90b106..a0eda830a 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/settings/AlertingSettings.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/settings/AlertingSettings.kt @@ -17,68 +17,106 @@ class AlertingSettings { companion object { const val DEFAULT_MAX_ACTIONABLE_ALERT_COUNT = 50L + const val DEFAULT_FINDINGS_INDEXING_BATCH_SIZE = 1000 + const val DEFAULT_PERCOLATE_QUERY_NUM_DOCS_IN_MEMORY = 50000 + const val DEFAULT_PERCOLATE_QUERY_DOCS_SIZE_MEMORY_PERCENTAGE_LIMIT = 10 + const val DEFAULT_DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE = 10000 val ALERTING_MAX_MONITORS = Setting.intSetting( "plugins.alerting.monitor.max_monitors", LegacyOpenDistroAlertingSettings.ALERTING_MAX_MONITORS, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic + ) + + /** Defines the 
threshold percentage of the max heap size up to which docs are accumulated in memory before querying the percolate query
+ * index in document level monitor execution.
+ */
+ val PERCOLATE_QUERY_DOCS_SIZE_MEMORY_PERCENTAGE_LIMIT = Setting.intSetting(
+ "plugins.alerting.monitor.percolate_query_docs_size_memory_percentage_limit",
+ 10,
+ 0,
+ 100,
+ Setting.Property.NodeScope, Setting.Property.Dynamic
+ )
+
+ /** Defines the number of docs fetched from a shard per search request in doc level monitor execution; also used to verify seq_no calculation.
+ */
+ val DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE = Setting.intSetting(
+ "plugins.alerting.monitor.doc_level_monitor_shard_fetch_size",
+ DEFAULT_DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE,
+ 1,
+ 10000,
+ Setting.Property.NodeScope, Setting.Property.Dynamic
+ )
+
+ /** Defines the maximum number of docs accumulated in memory before querying the percolate query index in document
+ * level monitor execution. Docs are collected by searching the shards of the indices in the monitor's input indices field.
+ * When the number of in-memory docs reaches or exceeds this threshold, the percolate query runs immediately with the current
+ * batch of docs, the cache is cleared, and the process repeats until all indices in the current execution have been queried.
+ */
+ val PERCOLATE_QUERY_MAX_NUM_DOCS_IN_MEMORY = Setting.intSetting(
+ "plugins.alerting.monitor.percolate_query_max_num_docs_in_memory",
+ DEFAULT_PERCOLATE_QUERY_NUM_DOCS_IN_MEMORY, 1000,
+ Setting.Property.NodeScope, Setting.Property.Dynamic
+ )
+
+ /**
+ * Boolean setting to enable/disable optimizing doc level monitors by fetching only the fields mentioned in queries.
+ * Enabled by default. If disabled, the entire document source is fetched while fetching data from shards.
+ */
+ val DOC_LEVEL_MONITOR_FETCH_ONLY_QUERY_FIELDS_ENABLED = Setting.boolSetting(
+ "plugins.alerting.monitor.doc_level_monitor_query_field_names_enabled",
+ true,
+ Setting.Property.NodeScope, Setting.Property.Dynamic
)
val INPUT_TIMEOUT = Setting.positiveTimeSetting(
"plugins.alerting.input_timeout",
LegacyOpenDistroAlertingSettings.INPUT_TIMEOUT,
- Setting.Property.NodeScope,
- Setting.Property.Dynamic
+ Setting.Property.NodeScope, Setting.Property.Dynamic
)
val INDEX_TIMEOUT = Setting.positiveTimeSetting(
"plugins.alerting.index_timeout",
LegacyOpenDistroAlertingSettings.INDEX_TIMEOUT,
- Setting.Property.NodeScope,
- Setting.Property.Dynamic
+ Setting.Property.NodeScope, Setting.Property.Dynamic
)
val BULK_TIMEOUT = Setting.positiveTimeSetting(
"plugins.alerting.bulk_timeout",
LegacyOpenDistroAlertingSettings.BULK_TIMEOUT,
- Setting.Property.NodeScope,
- Setting.Property.Dynamic
+ Setting.Property.NodeScope, Setting.Property.Dynamic
)
val ALERT_BACKOFF_MILLIS = Setting.positiveTimeSetting(
"plugins.alerting.alert_backoff_millis",
LegacyOpenDistroAlertingSettings.ALERT_BACKOFF_MILLIS,
- Setting.Property.NodeScope,
- Setting.Property.Dynamic
+ Setting.Property.NodeScope, Setting.Property.Dynamic
)
val ALERT_BACKOFF_COUNT = Setting.intSetting(
"plugins.alerting.alert_backoff_count",
LegacyOpenDistroAlertingSettings.ALERT_BACKOFF_COUNT,
- Setting.Property.NodeScope,
- Setting.Property.Dynamic
+ Setting.Property.NodeScope, Setting.Property.Dynamic
)
val MOVE_ALERTS_BACKOFF_MILLIS = Setting.positiveTimeSetting(
"plugins.alerting.move_alerts_backoff_millis",
LegacyOpenDistroAlertingSettings.MOVE_ALERTS_BACKOFF_MILLIS,
- Setting.Property.NodeScope,
- Setting.Property.Dynamic
+ Setting.Property.NodeScope, Setting.Property.Dynamic
)
val MOVE_ALERTS_BACKOFF_COUNT = Setting.intSetting(
"plugins.alerting.move_alerts_backoff_count", LegacyOpenDistroAlertingSettings.MOVE_ALERTS_BACKOFF_COUNT, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val ALERT_HISTORY_ENABLED = Setting.boolSetting( "plugins.alerting.alert_history_enabled", LegacyOpenDistroAlertingSettings.ALERT_HISTORY_ENABLED, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) // TODO: Do we want to let users to disable this? If so, we need to fix the rollover logic @@ -86,95 +124,94 @@ class AlertingSettings { val FINDING_HISTORY_ENABLED = Setting.boolSetting( "plugins.alerting.alert_finding_enabled", true, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val ALERT_HISTORY_ROLLOVER_PERIOD = Setting.positiveTimeSetting( "plugins.alerting.alert_history_rollover_period", LegacyOpenDistroAlertingSettings.ALERT_HISTORY_ROLLOVER_PERIOD, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val FINDING_HISTORY_ROLLOVER_PERIOD = Setting.positiveTimeSetting( "plugins.alerting.alert_finding_rollover_period", TimeValue.timeValueHours(12), - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val ALERT_HISTORY_INDEX_MAX_AGE = Setting.positiveTimeSetting( "plugins.alerting.alert_history_max_age", LegacyOpenDistroAlertingSettings.ALERT_HISTORY_INDEX_MAX_AGE, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val FINDING_HISTORY_INDEX_MAX_AGE = Setting.positiveTimeSetting( "plugins.alerting.finding_history_max_age", TimeValue(30, TimeUnit.DAYS), - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val ALERT_HISTORY_MAX_DOCS = Setting.longSetting( "plugins.alerting.alert_history_max_docs", LegacyOpenDistroAlertingSettings.ALERT_HISTORY_MAX_DOCS, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val FINDING_HISTORY_MAX_DOCS = Setting.longSetting( "plugins.alerting.alert_finding_max_docs", 1000L, 0L, - Setting.Property.NodeScope, - Setting.Property.Dynamic, - Setting.Property.Deprecated + Setting.Property.NodeScope, Setting.Property.Dynamic, Setting.Property.Deprecated ) val ALERT_HISTORY_RETENTION_PERIOD = Setting.positiveTimeSetting( "plugins.alerting.alert_history_retention_period", LegacyOpenDistroAlertingSettings.ALERT_HISTORY_RETENTION_PERIOD, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val FINDING_HISTORY_RETENTION_PERIOD = Setting.positiveTimeSetting( "plugins.alerting.finding_history_retention_period", TimeValue(60, TimeUnit.DAYS), - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val REQUEST_TIMEOUT = Setting.positiveTimeSetting( "plugins.alerting.request_timeout", LegacyOpenDistroAlertingSettings.REQUEST_TIMEOUT, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val MAX_ACTION_THROTTLE_VALUE = Setting.positiveTimeSetting( "plugins.alerting.action_throttle_max_value", LegacyOpenDistroAlertingSettings.MAX_ACTION_THROTTLE_VALUE, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic 
) val FILTER_BY_BACKEND_ROLES = Setting.boolSetting( "plugins.alerting.filter_by_backend_roles", LegacyOpenDistroAlertingSettings.FILTER_BY_BACKEND_ROLES, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic ) val MAX_ACTIONABLE_ALERT_COUNT = Setting.longSetting( "plugins.alerting.max_actionable_alert_count", DEFAULT_MAX_ACTIONABLE_ALERT_COUNT, -1L, - Setting.Property.NodeScope, - Setting.Property.Dynamic + Setting.Property.NodeScope, Setting.Property.Dynamic + ) + + val FINDINGS_INDEXING_BATCH_SIZE = Setting.intSetting( + "plugins.alerting.alert_findings_indexing_batch_size", + DEFAULT_FINDINGS_INDEXING_BATCH_SIZE, + 1, + Setting.Property.NodeScope, Setting.Property.Dynamic + ) + + val REMOTE_MONITORING_ENABLED = Setting.boolSetting( + "plugins.alerting.remote_monitoring_enabled", + false, + Setting.Property.NodeScope, Setting.Property.Dynamic ) } } diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportAcknowledgeAlertAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportAcknowledgeAlertAction.kt index f76005193..a94a682d3 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportAcknowledgeAlertAction.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportAcknowledgeAlertAction.kt @@ -20,9 +20,6 @@ import org.opensearch.action.search.SearchResponse import org.opensearch.action.support.ActionFilters import org.opensearch.action.support.HandledTransportAction import org.opensearch.action.update.UpdateRequest -import org.opensearch.alerting.action.GetMonitorAction -import org.opensearch.alerting.action.GetMonitorRequest -import org.opensearch.alerting.action.GetMonitorResponse import org.opensearch.alerting.opensearchapi.suspendUntil import org.opensearch.alerting.settings.AlertingSettings import org.opensearch.alerting.util.AlertingException @@ -37,6 +34,8 @@ import org.opensearch.common.xcontent.XContentType import org.opensearch.commons.alerting.action.AcknowledgeAlertRequest import org.opensearch.commons.alerting.action.AcknowledgeAlertResponse import org.opensearch.commons.alerting.action.AlertingActions +import org.opensearch.commons.alerting.action.GetMonitorRequest +import org.opensearch.commons.alerting.action.GetMonitorResponse import org.opensearch.commons.alerting.model.Alert import org.opensearch.commons.alerting.model.Monitor import org.opensearch.commons.alerting.util.optionalTimeField @@ -96,7 +95,7 @@ class TransportAcknowledgeAlertAction @Inject constructor( RestRequest.Method.GET, FetchSourceContext.FETCH_SOURCE ) - execute(GetMonitorAction.INSTANCE, getMonitorRequest, it) + execute(AlertingActions.GET_MONITOR_ACTION_TYPE, getMonitorRequest, it) } if (getMonitorResponse.monitor == null) { actionListener.onFailure( diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportDeleteWorkflowAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportDeleteWorkflowAction.kt index 9b076a600..f5c062c88 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportDeleteWorkflowAction.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportDeleteWorkflowAction.kt @@ -23,6 +23,8 @@ import org.opensearch.action.search.SearchResponse import org.opensearch.action.support.ActionFilters import org.opensearch.action.support.HandledTransportAction import org.opensearch.action.support.WriteRequest.RefreshPolicy +import 
org.opensearch.alerting.core.lock.LockModel +import org.opensearch.alerting.core.lock.LockService import org.opensearch.alerting.model.MonitorMetadata import org.opensearch.alerting.model.WorkflowMetadata import org.opensearch.alerting.opensearchapi.addFilter @@ -73,6 +75,7 @@ class TransportDeleteWorkflowAction @Inject constructor( val clusterService: ClusterService, val settings: Settings, val xContentRegistry: NamedXContentRegistry, + val lockService: LockService ) : HandledTransportAction( AlertingActions.DELETE_WORKFLOW_ACTION_NAME, transportService, actionFilters, ::DeleteWorkflowRequest ), @@ -119,7 +122,7 @@ class TransportDeleteWorkflowAction @Inject constructor( ) { suspend fun resolveUserAndStart() { try { - val workflow = getWorkflow() + val workflow: Workflow = getWorkflow() ?: return val canDelete = user == null || !doFilterForUser(user) || @@ -180,6 +183,12 @@ class TransportDeleteWorkflowAction @Inject constructor( } catch (t: Exception) { log.error("Failed to delete delegate monitor metadata. But proceeding with workflow deletion $workflowId", t) } + try { + // Delete the workflow lock + client.suspendUntil { lockService.deleteLock(LockModel.generateLockId(workflowId), it) } + } catch (t: Exception) { + log.error("Failed to delete workflow lock for $workflowId") + } actionListener.onResponse(deleteWorkflowResponse) } else { actionListener.onFailure( @@ -296,17 +305,27 @@ class TransportDeleteWorkflowAction @Inject constructor( return deletableMonitors } - private suspend fun getWorkflow(): Workflow { + private suspend fun getWorkflow(): Workflow? { val getRequest = GetRequest(ScheduledJob.SCHEDULED_JOBS_INDEX, workflowId) val getResponse: GetResponse = client.suspendUntil { get(getRequest, it) } - if (getResponse.isExists == false) { - actionListener.onFailure( - AlertingException.wrap( - OpenSearchStatusException("Workflow not found.", RestStatus.NOT_FOUND) - ) - ) + if (!getResponse.isExists) { + handleWorkflowMissing() + return null } + + return parseWorkflow(getResponse) + } + + private fun handleWorkflowMissing() { + actionListener.onFailure( + AlertingException.wrap( + OpenSearchStatusException("Workflow not found.", RestStatus.NOT_FOUND) + ) + ) + } + + private fun parseWorkflow(getResponse: GetResponse): Workflow { val xcp = XContentHelper.createParser( xContentRegistry, LoggingDeprecationHandler.INSTANCE, getResponse.sourceAsBytesRef, XContentType.JSON diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetFindingsAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetFindingsAction.kt index 302a9f22e..d5d5cc9d9 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetFindingsAction.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetFindingsAction.kt @@ -18,9 +18,6 @@ import org.opensearch.action.search.SearchRequest import org.opensearch.action.search.SearchResponse import org.opensearch.action.support.ActionFilters import org.opensearch.action.support.HandledTransportAction -import org.opensearch.alerting.action.GetMonitorAction -import org.opensearch.alerting.action.GetMonitorRequest -import org.opensearch.alerting.action.GetMonitorResponse import org.opensearch.alerting.alerts.AlertIndices.Companion.ALL_FINDING_INDEX_PATTERN import org.opensearch.alerting.opensearchapi.suspendUntil import org.opensearch.alerting.settings.AlertingSettings @@ -34,6 +31,8 @@ import org.opensearch.common.xcontent.XContentType import 
org.opensearch.commons.alerting.action.AlertingActions import org.opensearch.commons.alerting.action.GetFindingsRequest import org.opensearch.commons.alerting.action.GetFindingsResponse +import org.opensearch.commons.alerting.action.GetMonitorRequest +import org.opensearch.commons.alerting.action.GetMonitorResponse import org.opensearch.commons.alerting.model.Finding import org.opensearch.commons.alerting.model.FindingDocument import org.opensearch.commons.alerting.model.FindingWithDocs @@ -170,7 +169,7 @@ class TransportGetFindingsSearchAction @Inject constructor( ) val getMonitorResponse: GetMonitorResponse = this@TransportGetFindingsSearchAction.client.suspendUntil { - execute(GetMonitorAction.INSTANCE, getMonitorRequest, it) + execute(AlertingActions.GET_MONITOR_ACTION_TYPE, getMonitorRequest, it) } indexName = getMonitorResponse.monitor?.dataSources?.findingsIndex ?: ALL_FINDING_INDEX_PATTERN } @@ -221,8 +220,9 @@ class TransportGetFindingsSearchAction @Inject constructor( val documents: MutableMap = mutableMapOf() response.responses.forEach { val key = "${it.index}|${it.id}" - val docData = if (it.isFailed) "" else it.response.sourceAsString - val findingDocument = FindingDocument(it.index, it.id, !it.isFailed, docData) + val isDocFound = !(it.isFailed || it.response.sourceAsString == null) + val docData = if (isDocFound) it.response.sourceAsString else "" + val findingDocument = FindingDocument(it.index, it.id, isDocFound, docData) documents[key] = findingDocument } diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetMonitorAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetMonitorAction.kt index 0db73db2d..3a6f090ec 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetMonitorAction.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetMonitorAction.kt @@ -11,16 +11,13 @@ import kotlinx.coroutines.launch import org.apache.logging.log4j.LogManager import org.apache.lucene.search.join.ScoreMode import org.opensearch.OpenSearchStatusException +import org.opensearch.action.ActionRequest import org.opensearch.action.get.GetRequest import org.opensearch.action.get.GetResponse import org.opensearch.action.search.SearchRequest import org.opensearch.action.search.SearchResponse import org.opensearch.action.support.ActionFilters import org.opensearch.action.support.HandledTransportAction -import org.opensearch.alerting.action.GetMonitorAction -import org.opensearch.alerting.action.GetMonitorRequest -import org.opensearch.alerting.action.GetMonitorResponse -import org.opensearch.alerting.action.GetMonitorResponse.AssociatedWorkflow import org.opensearch.alerting.opensearchapi.suspendUntil import org.opensearch.alerting.settings.AlertingSettings import org.opensearch.alerting.util.AlertingException @@ -33,9 +30,14 @@ import org.opensearch.common.settings.Settings import org.opensearch.common.xcontent.LoggingDeprecationHandler import org.opensearch.common.xcontent.XContentHelper import org.opensearch.common.xcontent.XContentType +import org.opensearch.commons.alerting.action.AlertingActions +import org.opensearch.commons.alerting.action.GetMonitorRequest +import org.opensearch.commons.alerting.action.GetMonitorResponse +import org.opensearch.commons.alerting.action.GetMonitorResponse.AssociatedWorkflow import org.opensearch.commons.alerting.model.Monitor import org.opensearch.commons.alerting.model.ScheduledJob import org.opensearch.commons.alerting.model.Workflow +import 
org.opensearch.commons.utils.recreateObject import org.opensearch.core.action.ActionListener import org.opensearch.core.rest.RestStatus import org.opensearch.core.xcontent.NamedXContentRegistry @@ -54,8 +56,8 @@ class TransportGetMonitorAction @Inject constructor( val xContentRegistry: NamedXContentRegistry, val clusterService: ClusterService, settings: Settings, -) : HandledTransportAction( - GetMonitorAction.NAME, +) : HandledTransportAction( + AlertingActions.GET_MONITOR_ACTION_NAME, transportService, actionFilters, ::GetMonitorRequest @@ -69,12 +71,17 @@ class TransportGetMonitorAction @Inject constructor( listenFilterBySettingChange(clusterService) } - override fun doExecute(task: Task, getMonitorRequest: GetMonitorRequest, actionListener: ActionListener) { + override fun doExecute(task: Task, request: ActionRequest, actionListener: ActionListener) { + val transformedRequest = request as? GetMonitorRequest + ?: recreateObject(request) { + GetMonitorRequest(it) + } + val user = readUserFromThreadContext(client) - val getRequest = GetRequest(ScheduledJob.SCHEDULED_JOBS_INDEX, getMonitorRequest.monitorId) - .version(getMonitorRequest.version) - .fetchSourceContext(getMonitorRequest.srcContext) + val getRequest = GetRequest(ScheduledJob.SCHEDULED_JOBS_INDEX, transformedRequest.monitorId) + .version(transformedRequest.version) + .fetchSourceContext(transformedRequest.srcContext) if (!validateUserBackendRoles(user, actionListener)) { return @@ -114,7 +121,7 @@ class TransportGetMonitorAction @Inject constructor( monitor?.user, actionListener, "monitor", - getMonitorRequest.monitorId + transformedRequest.monitorId ) ) { return @@ -130,7 +137,6 @@ class TransportGetMonitorAction @Inject constructor( response.version, response.seqNo, response.primaryTerm, - RestStatus.OK, monitor, associatedCompositeMonitors ) diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetRemoteIndexesAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetRemoteIndexesAction.kt new file mode 100644 index 000000000..5b35d493a --- /dev/null +++ b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportGetRemoteIndexesAction.kt @@ -0,0 +1,193 @@ +/* + * Copyright OpenSearch Contributors + * SPDX-License-Identifier: Apache-2.0 + */ + +package org.opensearch.alerting.transport + +import kotlinx.coroutines.CoroutineScope +import kotlinx.coroutines.Dispatchers +import kotlinx.coroutines.launch +import kotlinx.coroutines.newSingleThreadContext +import kotlinx.coroutines.withContext +import org.apache.logging.log4j.LogManager +import org.opensearch.OpenSearchStatusException +import org.opensearch.action.admin.cluster.health.ClusterHealthRequest +import org.opensearch.action.admin.cluster.health.ClusterHealthResponse +import org.opensearch.action.admin.indices.mapping.get.GetMappingsRequest +import org.opensearch.action.admin.indices.mapping.get.GetMappingsResponse +import org.opensearch.action.admin.indices.resolve.ResolveIndexAction +import org.opensearch.action.support.ActionFilters +import org.opensearch.action.support.HandledTransportAction +import org.opensearch.action.support.IndicesOptions +import org.opensearch.alerting.action.GetRemoteIndexesAction +import org.opensearch.alerting.action.GetRemoteIndexesRequest +import org.opensearch.alerting.action.GetRemoteIndexesResponse +import org.opensearch.alerting.action.GetRemoteIndexesResponse.ClusterIndexes +import org.opensearch.alerting.action.GetRemoteIndexesResponse.ClusterIndexes.ClusterIndex +import 
org.opensearch.alerting.opensearchapi.suspendUntil +import org.opensearch.alerting.settings.AlertingSettings +import org.opensearch.alerting.settings.AlertingSettings.Companion.REMOTE_MONITORING_ENABLED +import org.opensearch.alerting.util.AlertingException +import org.opensearch.alerting.util.CrossClusterMonitorUtils +import org.opensearch.client.Client +import org.opensearch.cluster.service.ClusterService +import org.opensearch.common.inject.Inject +import org.opensearch.common.settings.Settings +import org.opensearch.core.action.ActionListener +import org.opensearch.core.rest.RestStatus +import org.opensearch.core.xcontent.NamedXContentRegistry +import org.opensearch.tasks.Task +import org.opensearch.transport.TransportService +import java.time.Duration +import java.time.Instant + +private val log = LogManager.getLogger(TransportGetRemoteIndexesAction::class.java) +private val scope: CoroutineScope = CoroutineScope(Dispatchers.IO) + +class TransportGetRemoteIndexesAction @Inject constructor( + val transportService: TransportService, + val client: Client, + actionFilters: ActionFilters, + val xContentRegistry: NamedXContentRegistry, + val clusterService: ClusterService, + settings: Settings, +) : HandledTransportAction( + GetRemoteIndexesAction.NAME, + transportService, + actionFilters, + ::GetRemoteIndexesRequest +), + SecureTransportAction { + + @Volatile override var filterByEnabled = AlertingSettings.FILTER_BY_BACKEND_ROLES.get(settings) + + @Volatile private var remoteMonitoringEnabled = REMOTE_MONITORING_ENABLED.get(settings) + + init { + clusterService.clusterSettings.addSettingsUpdateConsumer(REMOTE_MONITORING_ENABLED) { remoteMonitoringEnabled = it } + listenFilterBySettingChange(clusterService) + } + + override fun doExecute( + task: Task, + request: GetRemoteIndexesRequest, + actionListener: ActionListener + ) { + log.debug("Remote monitoring enabled: {}", remoteMonitoringEnabled) + if (!remoteMonitoringEnabled) { + actionListener.onFailure( + AlertingException.wrap( + OpenSearchStatusException("Remote monitoring is not enabled.", RestStatus.FORBIDDEN) + ) + ) + return + } + + val user = readUserFromThreadContext(client) + if (!validateUserBackendRoles(user, actionListener)) return + + client.threadPool().threadContext.stashContext().use { + scope.launch { + val singleThreadContext = newSingleThreadContext("GetRemoteIndexesActionThread") + withContext(singleThreadContext) { + it.restore() + val clusterIndexesList = mutableListOf() + + var resolveIndexResponse: ResolveIndexAction.Response? = null + try { + resolveIndexResponse = + getRemoteClusters(CrossClusterMonitorUtils.parseIndexesForRemoteSearch(request.indexes, clusterService)) + } catch (e: Exception) { + log.error("Failed to retrieve indexes for request $request", e) + actionListener.onFailure(AlertingException.wrap(e)) + } + + val resolvedIndexes: MutableList = mutableListOf() + if (resolveIndexResponse != null) { + resolveIndexResponse.indices.forEach { resolvedIndexes.add(it.name) } + resolveIndexResponse.aliases.forEach { resolvedIndexes.add(it.name) } + } + + val clusterIndexesMap = CrossClusterMonitorUtils.separateClusterIndexes(resolvedIndexes, clusterService) + + clusterIndexesMap.forEach { (clusterName, indexes) -> + val targetClient = CrossClusterMonitorUtils.getClientForCluster(clusterName, client, clusterService) + + val startTime = Instant.now() + var clusterHealthResponse: ClusterHealthResponse? 
= null + try { + clusterHealthResponse = getHealthStatuses(targetClient, indexes) + } catch (e: Exception) { + log.error("Failed to retrieve health statuses for request $request", e) + actionListener.onFailure(AlertingException.wrap(e)) + } + val endTime = Instant.now() + val latency = Duration.between(startTime, endTime).toMillis() + + var mappingsResponse: GetMappingsResponse? = null + if (request.includeMappings) { + try { + mappingsResponse = getIndexMappings(targetClient, indexes) + } catch (e: Exception) { + log.error("Failed to retrieve mappings for request $request", e) + actionListener.onFailure(AlertingException.wrap(e)) + } + } + + val clusterIndexList = mutableListOf() + if (clusterHealthResponse != null) { + indexes.forEach { + clusterIndexList.add( + ClusterIndex( + indexName = it, + indexHealth = clusterHealthResponse.indices[it]?.status, + mappings = mappingsResponse?.mappings?.get(it) + ) + ) + } + } + + clusterIndexesList.add( + ClusterIndexes( + clusterName = clusterName, + clusterHealth = clusterHealthResponse!!.status, + hubCluster = clusterName == clusterService.clusterName.value(), + indexes = clusterIndexList, + latency = latency + ) + ) + } + actionListener.onResponse(GetRemoteIndexesResponse(clusterIndexes = clusterIndexesList)) + } + } + } + } + + private suspend fun getRemoteClusters(parsedIndexes: List): ResolveIndexAction.Response { + val resolveRequest = ResolveIndexAction.Request( + parsedIndexes.toTypedArray(), + ResolveIndexAction.Request.DEFAULT_INDICES_OPTIONS + ) + + return client.suspendUntil { + admin().indices().resolveIndex(resolveRequest, it) + } + } + private suspend fun getHealthStatuses(targetClient: Client, parsedIndexesNames: List): ClusterHealthResponse { + val clusterHealthRequest = ClusterHealthRequest() + .indices(*parsedIndexesNames.toTypedArray()) + .indicesOptions(IndicesOptions.lenientExpandHidden()) + + return targetClient.suspendUntil { + admin().cluster().health(clusterHealthRequest, it) + } + } + + private suspend fun getIndexMappings(targetClient: Client, parsedIndexNames: List): GetMappingsResponse { + val getMappingsRequest = GetMappingsRequest().indices(*parsedIndexNames.toTypedArray()) + return targetClient.suspendUntil { + admin().indices().getMappings(getMappingsRequest, it) + } + } +} diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportIndexMonitorAction.kt b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportIndexMonitorAction.kt index 2100c0593..057977079 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportIndexMonitorAction.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportIndexMonitorAction.kt @@ -198,7 +198,15 @@ class TransportIndexMonitorAction @Inject constructor( else (it as DocLevelMonitorInput).indices indices.addAll(inputIndices) } - val searchRequest = SearchRequest().indices(*indices.toTypedArray()) + val updatedIndices = indices.map { index -> + if (IndexUtils.isAlias(index, clusterService.state()) || IndexUtils.isDataStream(index, clusterService.state())) { + val metadata = clusterService.state().metadata.indicesLookup[index]?.writeIndex + metadata?.index?.name ?: index + } else { + index + } + } + val searchRequest = SearchRequest().indices(*updatedIndices.toTypedArray()) .source(SearchSourceBuilder.searchSource().size(1).query(QueryBuilders.matchAllQuery())) client.search( searchRequest, diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportSearchMonitorAction.kt 
b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportSearchMonitorAction.kt index d1b533282..7359d60ea 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportSearchMonitorAction.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/transport/TransportSearchMonitorAction.kt @@ -6,12 +6,11 @@ package org.opensearch.alerting.transport import org.apache.logging.log4j.LogManager +import org.opensearch.action.ActionRequest import org.opensearch.action.search.SearchRequest import org.opensearch.action.search.SearchResponse import org.opensearch.action.support.ActionFilters import org.opensearch.action.support.HandledTransportAction -import org.opensearch.alerting.action.SearchMonitorAction -import org.opensearch.alerting.action.SearchMonitorRequest import org.opensearch.alerting.opensearchapi.addFilter import org.opensearch.alerting.settings.AlertingSettings import org.opensearch.alerting.util.AlertingException @@ -19,11 +18,15 @@ import org.opensearch.client.Client import org.opensearch.cluster.service.ClusterService import org.opensearch.common.inject.Inject import org.opensearch.common.settings.Settings +import org.opensearch.commons.alerting.action.AlertingActions +import org.opensearch.commons.alerting.action.SearchMonitorRequest import org.opensearch.commons.alerting.model.Monitor import org.opensearch.commons.alerting.model.ScheduledJob import org.opensearch.commons.alerting.model.Workflow import org.opensearch.commons.authuser.User +import org.opensearch.commons.utils.recreateObject import org.opensearch.core.action.ActionListener +import org.opensearch.core.common.io.stream.NamedWriteableRegistry import org.opensearch.index.query.BoolQueryBuilder import org.opensearch.index.query.ExistsQueryBuilder import org.opensearch.index.query.MatchQueryBuilder @@ -38,12 +41,10 @@ class TransportSearchMonitorAction @Inject constructor( val settings: Settings, val client: Client, clusterService: ClusterService, - actionFilters: ActionFilters -) : HandledTransportAction( - SearchMonitorAction.NAME, - transportService, - actionFilters, - ::SearchMonitorRequest + actionFilters: ActionFilters, + val namedWriteableRegistry: NamedWriteableRegistry +) : HandledTransportAction( + AlertingActions.SEARCH_MONITORS_ACTION_NAME, transportService, actionFilters, ::SearchMonitorRequest ), SecureTransportAction { @Volatile @@ -52,8 +53,13 @@ class TransportSearchMonitorAction @Inject constructor( listenFilterBySettingChange(clusterService) } - override fun doExecute(task: Task, searchMonitorRequest: SearchMonitorRequest, actionListener: ActionListener) { - val searchSourceBuilder = searchMonitorRequest.searchRequest.source() + override fun doExecute(task: Task, request: ActionRequest, actionListener: ActionListener) { + val transformedRequest = request as? SearchMonitorRequest + ?: recreateObject(request, namedWriteableRegistry) { + SearchMonitorRequest(it) + } + + val searchSourceBuilder = transformedRequest.searchRequest.source() .seqNoAndPrimaryTerm(true) .version(true) val queryBuilder = if (searchSourceBuilder.query() == null) BoolQueryBuilder() @@ -62,7 +68,7 @@ class TransportSearchMonitorAction @Inject constructor( // The SearchMonitor API supports one 'index' parameter of either the SCHEDULED_JOBS_INDEX or ALL_ALERT_INDEX_PATTERN. // When querying the ALL_ALERT_INDEX_PATTERN, we don't want to check whether the MONITOR_TYPE field exists // because we're querying alert indexes. 
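The handler below follows the same pattern as TransportGetMonitorAction earlier in this patch: the action is now registered under the common-utils action name, so a request sent from another plugin can arrive as a generic ActionRequest whose concrete class does not match. A minimal sketch of that pattern, condensed from the changes in this hunk (not new API; the registry argument matters here because SearchMonitorRequest wraps a SearchRequest containing named writeables):

```kotlin
// If the incoming request is not already the concrete type, rebuild it from its
// serialized form via recreateObject, supplying the NamedWriteableRegistry so the
// nested SearchRequest can be deserialized.
override fun doExecute(task: Task, request: ActionRequest, actionListener: ActionListener<SearchResponse>) {
    val transformedRequest = request as? SearchMonitorRequest
        ?: recreateObject(request, namedWriteableRegistry) { SearchMonitorRequest(it) }
    // ... proceed with transformedRequest.searchRequest as before ...
}
```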
- if (searchMonitorRequest.searchRequest.indices().contains(ScheduledJob.SCHEDULED_JOBS_INDEX)) { + if (transformedRequest.searchRequest.indices().contains(ScheduledJob.SCHEDULED_JOBS_INDEX)) { val monitorWorkflowType = QueryBuilders.boolQuery().should(QueryBuilders.existsQuery(Monitor.MONITOR_TYPE)) .should(QueryBuilders.existsQuery(Workflow.WORKFLOW_TYPE)) queryBuilder.must(monitorWorkflowType) @@ -71,16 +77,16 @@ class TransportSearchMonitorAction @Inject constructor( searchSourceBuilder.query(queryBuilder) .seqNoAndPrimaryTerm(true) .version(true) - addOwnerFieldIfNotExists(searchMonitorRequest.searchRequest) + addOwnerFieldIfNotExists(transformedRequest.searchRequest) val user = readUserFromThreadContext(client) client.threadPool().threadContext.stashContext().use { - resolve(searchMonitorRequest, actionListener, user) + resolve(transformedRequest, actionListener, user) } } fun resolve(searchMonitorRequest: SearchMonitorRequest, actionListener: ActionListener<SearchResponse>, user: User?) { if (user == null) { - // user header is null when: 1/ security is disabled. 2/when user is super-admin. + // user header is null when: 1/ security is disabled. 2/ when user is super-admin. search(searchMonitorRequest.searchRequest, actionListener) } else if (!doFilterForUser(user)) { // security is enabled and filterby is disabled. diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/util/CrossClusterMonitorUtils.kt b/alerting/src/main/kotlin/org/opensearch/alerting/util/CrossClusterMonitorUtils.kt new file mode 100644 index 000000000..6ec14ffa2 --- /dev/null +++ b/alerting/src/main/kotlin/org/opensearch/alerting/util/CrossClusterMonitorUtils.kt @@ -0,0 +1,231 @@ +/* + * Copyright OpenSearch Contributors + * SPDX-License-Identifier: Apache-2.0 + */ + +package org.opensearch.alerting.util + +import org.opensearch.action.search.SearchRequest +import org.opensearch.client.Client +import org.opensearch.client.node.NodeClient +import org.opensearch.cluster.service.ClusterService +import org.opensearch.commons.alerting.model.ClusterMetricsInput +import org.opensearch.commons.alerting.model.DocLevelMonitorInput +import org.opensearch.commons.alerting.model.Monitor +import org.opensearch.commons.alerting.model.SearchInput + +class CrossClusterMonitorUtils { + companion object { + + /** + * Uses the monitor inputs to determine whether the monitor makes calls to remote clusters. + * @param monitor The monitor to evaluate. + * @param localClusterName The name of the local cluster. + * @return TRUE if the monitor makes calls to remote clusters; otherwise returns FALSE. + */ + @JvmStatic + fun isRemoteMonitor(monitor: Monitor, localClusterName: String): Boolean { + var isRemoteMonitor = false + monitor.inputs.forEach inputCheck@{ + when (it) { + is ClusterMetricsInput -> { + it.clusters.forEach { clusterName -> + if (clusterName != localClusterName) { + isRemoteMonitor = true + return@inputCheck + } + } + } + is SearchInput -> { + // Remote indexes follow the pattern "<CLUSTER_NAME>:<INDEX_NAME>". + // Index entries without a CLUSTER_NAME indicate they're stored on the local cluster. + it.indices.forEach { index -> + val clusterName = parseClusterName(index) + if (clusterName.isNotEmpty() && clusterName != localClusterName) { + isRemoteMonitor = true + return@inputCheck + } + } + } + is DocLevelMonitorInput -> { + // TODO: When document level monitors are supported, this check will be similar to SearchInput. + throw IllegalArgumentException("Per document monitors do not currently support cross-cluster search.") + } + else -> { + throw IllegalArgumentException("Unsupported input type: ${it.name()}.") + } + } + } + return isRemoteMonitor + } + + /** + * Uses the monitor inputs to determine whether the monitor makes calls to remote clusters. + * @param monitor The monitor to evaluate. + * @param clusterService Used to retrieve the name of the local cluster. + * @return TRUE if the monitor makes calls to remote clusters; otherwise returns FALSE. + */ + @JvmStatic + fun isRemoteMonitor(monitor: Monitor, clusterService: ClusterService): Boolean { + return isRemoteMonitor(monitor = monitor, localClusterName = clusterService.clusterName.value()) + } + + /** + * Parses the list of indexes into a map of CLUSTER_NAME to List<String>. + * @param indexes A list of index names in "<CLUSTER_NAME>:<INDEX_NAME>" format. + * @param localClusterName The name of the local cluster. + * @return A map of CLUSTER_NAME to List<String> + */ + @JvmStatic + fun separateClusterIndexes(indexes: List<String>, localClusterName: String): HashMap<String, MutableList<String>> { + val output = hashMapOf<String, MutableList<String>>() + indexes.forEach { index -> + var clusterName = parseClusterName(index) + val indexName = parseIndexName(index) + + // If the index entry does not have a CLUSTER_NAME, it indicates the index is on the local cluster. + if (clusterName.isEmpty()) clusterName = localClusterName + + output.getOrPut(clusterName) { mutableListOf() }.add(indexName) + } + return output + } + + /** + * Parses the list of indexes into a map of CLUSTER_NAME to List<String>. + * @param indexes A list of index names in "<CLUSTER_NAME>:<INDEX_NAME>" format. + * Local indexes can also be in "<INDEX_NAME>" format. + * @param clusterService Used to retrieve the name of the local cluster. + * @return A map of CLUSTER_NAME to List<String> + */ + @JvmStatic + fun separateClusterIndexes(indexes: List<String>, clusterService: ClusterService): HashMap<String, MutableList<String>> { + return separateClusterIndexes(indexes = indexes, localClusterName = clusterService.clusterName.value()) + } + + /** + * The [NodeClient] used by the plugin cannot execute searches against local indexes + * using format "<CLUSTER_NAME>:<INDEX_NAME>". That format only supports querying remote indexes. + * This function formats a list of indexes to be supplied directly to a [SearchRequest]. + * @param indexes A list of index names in "<CLUSTER_NAME>:<INDEX_NAME>" format. + * @param localClusterName The name of the local cluster. + * @return A list of indexes with any remote indexes in "<CLUSTER_NAME>:<INDEX_NAME>" format, + * and any local indexes in "<INDEX_NAME>" format. + */ + @JvmStatic + fun parseIndexesForRemoteSearch(indexes: List<String>, localClusterName: String): List<String> { + return indexes.map { + var index = it + val clusterName = parseClusterName(it) + if (clusterName.isNotEmpty() && clusterName == localClusterName) { + index = parseIndexName(it) + } + index + } + } +
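Since the helpers in this file all pivot on the same naming convention, a short usage sketch may help; every cluster and index name below is illustrative only, and the local cluster is assumed to be named "cluster-1":

```kotlin
// Illustrative values only; assumes the local cluster is named "cluster-1".
val localClusterName = "cluster-1"
val indexes = listOf("logs-000001", "cluster-2:logs", "cluster-1:metrics")

CrossClusterMonitorUtils.parseClusterName("cluster-2:logs") // "cluster-2"
CrossClusterMonitorUtils.parseIndexName("cluster-2:logs")   // "logs"
CrossClusterMonitorUtils.parseClusterName("logs-000001")    // "" -> treated as local

// {"cluster-1"=["logs-000001", "metrics"], "cluster-2"=["logs"]}
val byCluster = CrossClusterMonitorUtils.separateClusterIndexes(indexes, localClusterName)

// ["logs-000001", "cluster-2:logs", "metrics"] -- the redundant local-cluster prefix
// is stripped so the list can be passed straight to a NodeClient SearchRequest.
val searchable = CrossClusterMonitorUtils.parseIndexesForRemoteSearch(indexes, localClusterName)
```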
+ /** + * The [NodeClient] used by the plugin cannot execute searches against local indexes + * using format "<CLUSTER_NAME>:<INDEX_NAME>". That format only supports querying remote indexes. + * This function formats a list of indexes to be supplied directly to a [SearchRequest]. + * @param indexes A list of index names in "<CLUSTER_NAME>:<INDEX_NAME>" format. + * @param clusterService Used to retrieve the name of the local cluster. + * @return A list of indexes with any remote indexes in "<CLUSTER_NAME>:<INDEX_NAME>" format, + * and any local indexes in "<INDEX_NAME>" format. + */ + @JvmStatic + fun parseIndexesForRemoteSearch(indexes: List<String>, clusterService: ClusterService): List<String> { + return parseIndexesForRemoteSearch(indexes = indexes, localClusterName = clusterService.clusterName.value()) + } + + /** + * Uses the clusterName to determine whether the target client is the local or a remote client, + * and returns the appropriate client. + * @param clusterName The name of the cluster to evaluate. + * @param client The local [NodeClient]. + * @param localClusterName The name of the local cluster. + * @return The local [NodeClient] for the local cluster, or a remote client for a remote cluster. + */ + @JvmStatic + fun getClientForCluster(clusterName: String, client: Client, localClusterName: String): Client { + return if (clusterName == localClusterName) client else client.getRemoteClusterClient(clusterName) + } + + /** + * Uses the clusterName to determine whether the target client is the local or a remote client, + * and returns the appropriate client. + * @param clusterName The name of the cluster to evaluate. + * @param client The local [NodeClient]. + * @param clusterService Used to retrieve the name of the local cluster. + * @return The local [NodeClient] for the local cluster, or a remote client for a remote cluster. + */ + @JvmStatic + fun getClientForCluster(clusterName: String, client: Client, clusterService: ClusterService): Client { + return getClientForCluster(clusterName = clusterName, client = client, localClusterName = clusterService.clusterName.value()) + } + + /** + * Uses the index name to determine whether the target client is the local or a remote client, + * and returns the appropriate client. + * @param index The name of the index to evaluate. + * Can be in either "<CLUSTER_NAME>:<INDEX_NAME>" or "<INDEX_NAME>" format. + * @param client The local [NodeClient]. + * @param localClusterName The name of the local cluster. + * @return The local [NodeClient] for the local cluster, or a remote client for a remote cluster. + */ + @JvmStatic + fun getClientForIndex(index: String, client: Client, localClusterName: String): Client { + val clusterName = parseClusterName(index) + return if (clusterName.isNotEmpty() && clusterName != localClusterName) + client.getRemoteClusterClient(clusterName) else client + } + + /** + * Uses the index name to determine whether the target client is the local or a remote client, + * and returns the appropriate client. + * @param index The name of the index to evaluate. + * Can be in either "<CLUSTER_NAME>:<INDEX_NAME>" or "<INDEX_NAME>" format. + * @param client The local [NodeClient]. + * @param clusterService Used to retrieve the name of the local cluster. + * @return The local [NodeClient] for the local cluster, or a remote client for a remote cluster. + */ + @JvmStatic + fun getClientForIndex(index: String, client: Client, clusterService: ClusterService): Client { + return getClientForIndex(index = index, client = client, localClusterName = clusterService.clusterName.value()) + } + + /** + * @param index The name of the index to evaluate. + * Can be in either "<CLUSTER_NAME>:<INDEX_NAME>" or "<INDEX_NAME>" format. + * @return The cluster name if present; else an empty string. + */ + @JvmStatic + fun parseClusterName(index: String): String { + return if (index.contains(":")) index.split(":").getOrElse(0) { "" } + else "" + } + + /** + * @param index The name of the index to evaluate. + * Can be in either "<CLUSTER_NAME>:<INDEX_NAME>" or "<INDEX_NAME>" format. + * @return The index name. + */ + @JvmStatic + fun parseIndexName(index: String): String { + return if (index.contains(":")) index.split(":").getOrElse(1) { index } + else index + } + + /** + * If clusterName is provided, combines the inputs into "<CLUSTER_NAME>:<INDEX_NAME>" format. + * @param clusterName + * @param indexName + * @return The formatted string. + */ + @JvmStatic + fun formatClusterAndIndexName(clusterName: String, indexName: String): String { + return if (clusterName.isNotEmpty()) "$clusterName:$indexName" + else indexName + } + } +} diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/util/DocLevelMonitorQueries.kt b/alerting/src/main/kotlin/org/opensearch/alerting/util/DocLevelMonitorQueries.kt index 5b92624a5..52ef1d26f 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/util/DocLevelMonitorQueries.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/util/DocLevelMonitorQueries.kt @@ -13,6 +13,8 @@ import org.opensearch.action.admin.indices.alias.Alias import org.opensearch.action.admin.indices.create.CreateIndexRequest import org.opensearch.action.admin.indices.create.CreateIndexResponse import org.opensearch.action.admin.indices.delete.DeleteIndexRequest +import org.opensearch.action.admin.indices.exists.indices.IndicesExistsRequest +import org.opensearch.action.admin.indices.exists.indices.IndicesExistsResponse import org.opensearch.action.admin.indices.mapping.put.PutMappingRequest import org.opensearch.action.admin.indices.rollover.RolloverRequest import org.opensearch.action.admin.indices.rollover.RolloverResponse @@ -38,8 +40,16 @@ import org.opensearch.commons.alerting.model.DocLevelMonitorInput import org.opensearch.commons.alerting.model.DocLevelQuery import org.opensearch.commons.alerting.model.Monitor import org.opensearch.commons.alerting.model.ScheduledJob +import org.opensearch.core.action.ActionListener import org.opensearch.core.rest.RestStatus import org.opensearch.index.mapper.MapperService.INDEX_MAPPING_TOTAL_FIELDS_LIMIT_SETTING +import org.opensearch.index.query.QueryBuilders +import org.opensearch.index.reindex.BulkByScrollResponse +import org.opensearch.index.reindex.DeleteByQueryAction +import org.opensearch.index.reindex.DeleteByQueryRequestBuilder +import kotlin.coroutines.resume +import kotlin.coroutines.resumeWithException +import kotlin.coroutines.suspendCoroutine private val log = LogManager.getLogger(DocLevelMonitorQueries::class.java) @@ -134,6 +144,42 @@ return true } + suspend fun deleteDocLevelQueriesOnDryRun(monitorMetadata: MonitorMetadata) { + try { + monitorMetadata.sourceToQueryIndexMapping.forEach { (_, queryIndex) -> + val indicesExistsResponse: IndicesExistsResponse = + client.suspendUntil { + client.admin().indices().exists(IndicesExistsRequest(queryIndex), it) + } + if (!indicesExistsResponse.isExists) { + return + } + + val queryBuilder = QueryBuilders.boolQuery() + .must(QueryBuilders.existsQuery("monitor_id")) + .mustNot(QueryBuilders.wildcardQuery("monitor_id", "*")) + + val response: BulkByScrollResponse = suspendCoroutine { cont -> + DeleteByQueryRequestBuilder(client, DeleteByQueryAction.INSTANCE) + .source(queryIndex) + .filter(queryBuilder) + .refresh(true) + .execute( + object : ActionListener<BulkByScrollResponse> { + override fun onResponse(response: BulkByScrollResponse) = cont.resume(response) + override fun onFailure(t: Exception) = cont.resumeWithException(t) + } + ) + } + response.bulkFailures.forEach { + log.error("Failed deleting queries while removing dry run queries:
[${it.id}] cause: [${it.cause}] ") + } + } + } catch (e: Exception) { + log.error("Failed to delete doc level queries on dry run", e) + } + } + fun docLevelQueryIndexExists(dataSources: DataSources): Boolean { val clusterState = clusterService.state() return clusterState.metadata.hasAlias(dataSources.queryIndex) @@ -207,11 +253,25 @@ class DocLevelMonitorQueries(private val client: Client, private val clusterServ // Run through each backing index and apply appropriate mappings to query index indices.forEach { indexName -> - val concreteIndices = IndexUtils.resolveAllIndices( + var concreteIndices = IndexUtils.resolveAllIndices( listOf(indexName), monitorCtx.clusterService!!, monitorCtx.indexNameExpressionResolver!! ) + if (IndexUtils.isAlias(indexName, monitorCtx.clusterService!!.state()) || + IndexUtils.isDataStream(indexName, monitorCtx.clusterService!!.state()) + ) { + val lastWriteIndex = concreteIndices.find { monitorMetadata.lastRunContext.containsKey(it) } + if (lastWriteIndex != null) { + val lastWriteIndexCreationDate = + IndexUtils.getCreationDateForIndex(lastWriteIndex, monitorCtx.clusterService!!.state()) + concreteIndices = IndexUtils.getNewestIndicesByCreationDate( + concreteIndices, + monitorCtx.clusterService!!.state(), + lastWriteIndexCreationDate + ) + } + } val updatedIndexName = indexName.replace("*", "_") val updatedProperties = mutableMapOf() val allFlattenPaths = mutableSetOf>() @@ -336,6 +396,7 @@ class DocLevelMonitorQueries(private val client: Client, private val clusterServ var query = it.query conflictingPaths.forEach { conflictingPath -> if (query.contains(conflictingPath)) { + query = transformExistsQuery(query, conflictingPath, "", monitorId) query = query.replace("$conflictingPath:", "${conflictingPath}__$monitorId:") filteredConcreteIndices.addAll(conflictingPathToConcreteIndices[conflictingPath]!!) 
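The string rewrite above, together with the transformExistsQuery helper defined below, is easiest to follow with a concrete query; the monitor id, index, and field names in this trace are hypothetical:

```kotlin
// Hypothetical inputs: monitorId = "m1", sourceIndex = "logs", query field "region".
val query = "_exists_:region AND region:us-west-2"

// Non-conflicting-path branch (below): transformExistsQuery(query, "region", "logs", "m1")
// rewrites the _exists_ clause, then the subsequent replace() expands the field query:
//   "_exists_:region_logs_m1 AND region_logs_m1:us-west-2"

// Conflicting-path branch (above) passes "" for the index name, so "region" expands to
// "region__m1" in both the _exists_ clause and the field query instead.
```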
} @@ -358,6 +419,7 @@ class DocLevelMonitorQueries(private val client: Client, private val clusterServ var query = it.query flattenPaths.forEach { fieldPath -> if (!conflictingPaths.contains(fieldPath.first)) { + query = transformExistsQuery(query, fieldPath.first, sourceIndex, monitorId) query = query.replace("${fieldPath.first}:", "${fieldPath.first}_${sourceIndex}_$monitorId:") } } @@ -388,6 +450,26 @@ class DocLevelMonitorQueries(private val client: Client, private val clusterServ } } + /** + * Transforms the query if it includes an _exists_ clause to append the index name and the monitor id to the field value. + */ + private fun transformExistsQuery(query: String, conflictingPath: String, indexName: String, monitorId: String): String { + return query + .replace("_exists_: ", "_exists_:") // remove space to read exists query as one string + .split("\\s+".toRegex()) + .joinToString(separator = " ") { segment -> + if (segment.contains("_exists_:")) { + val trimmedSegment = segment.trim { it == '(' || it == ')' } // remove any delimiters from ends + val (_, value) = trimmedSegment.split(":", limit = 2) // split into key and value + val newString = if (value == conflictingPath) + segment.replace(conflictingPath, "${conflictingPath}_${indexName}_$monitorId") else segment + newString + } else { + segment + } + } + } + private suspend fun updateQueryIndexMappings( monitor: Monitor, monitorMetadata: MonitorMetadata, diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/util/IndexUtils.kt b/alerting/src/main/kotlin/org/opensearch/alerting/util/IndexUtils.kt index 4e34ed3c2..387f5cb22 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/util/IndexUtils.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/util/IndexUtils.kt @@ -12,6 +12,7 @@ import org.opensearch.alerting.alerts.AlertIndices import org.opensearch.alerting.core.ScheduledJobIndices import org.opensearch.client.IndicesAdminClient import org.opensearch.cluster.ClusterState +import org.opensearch.cluster.metadata.IndexAbstraction import org.opensearch.cluster.metadata.IndexMetadata import org.opensearch.cluster.metadata.IndexNameExpressionResolver import org.opensearch.cluster.service.ClusterService @@ -153,5 +154,47 @@ return result } + + @JvmStatic + fun isDataStream(name: String, clusterState: ClusterState): Boolean { + return clusterState.metadata().dataStreams().containsKey(name) + } + + @JvmStatic + fun isAlias(name: String, clusterState: ClusterState): Boolean { + return clusterState.metadata().hasAlias(name) + } + + @JvmStatic + fun getWriteIndex(index: String, clusterState: ClusterState): String?
{ + if (isAlias(index, clusterState) || isDataStream(index, clusterState)) { + val metadata = clusterState.metadata.indicesLookup[index]?.writeIndex + if (metadata != null) { + return metadata.index.name + } + } + return null + } + + @JvmStatic + fun getNewestIndicesByCreationDate(concreteIndices: List, clusterState: ClusterState, thresholdDate: Long): List { + val filteredIndices = mutableListOf() + val lookup = clusterState.metadata().indicesLookup + concreteIndices.forEach { indexName -> + val index = lookup[indexName] + val indexMetadata = clusterState.metadata.index(indexName) + if (index != null && index.type == IndexAbstraction.Type.CONCRETE_INDEX) { + if (indexMetadata.creationDate >= thresholdDate) { + filteredIndices.add(indexName) + } + } + } + return filteredIndices + } + + @JvmStatic + fun getCreationDateForIndex(index: String, clusterState: ClusterState): Long { + return clusterState.metadata.index(index).creationDate + } } } diff --git a/alerting/src/main/kotlin/org/opensearch/alerting/workflow/CompositeWorkflowRunner.kt b/alerting/src/main/kotlin/org/opensearch/alerting/workflow/CompositeWorkflowRunner.kt index 94e8b9bc3..e68896bc2 100644 --- a/alerting/src/main/kotlin/org/opensearch/alerting/workflow/CompositeWorkflowRunner.kt +++ b/alerting/src/main/kotlin/org/opensearch/alerting/workflow/CompositeWorkflowRunner.kt @@ -255,7 +255,7 @@ object CompositeWorkflowRunner : WorkflowRunner() { executionId ) } else if (delegateMonitor.isDocLevelMonitor()) { - return DocumentLevelMonitorRunner.runMonitor( + return DocumentLevelMonitorRunner().runMonitor( delegateMonitor, monitorCtx, periodStart, diff --git a/alerting/src/main/resources/org/opensearch/alerting/alerts/alert_mapping.json b/alerting/src/main/resources/org/opensearch/alerting/alerts/alert_mapping.json index 53fb5b0a2..76e5104cc 100644 --- a/alerting/src/main/resources/org/opensearch/alerting/alerts/alert_mapping.json +++ b/alerting/src/main/resources/org/opensearch/alerting/alerts/alert_mapping.json @@ -169,6 +169,14 @@ "type": "text" } } + }, + "clusters": { + "type" : "text", + "fields" : { + "keyword" : { + "type" : "keyword" + } + } } } } \ No newline at end of file diff --git a/alerting/src/main/resources/org/opensearch/alerting/alerts/finding_mapping.json b/alerting/src/main/resources/org/opensearch/alerting/alerts/finding_mapping.json index d2ecc0907..1bfea4ebc 100644 --- a/alerting/src/main/resources/org/opensearch/alerting/alerts/finding_mapping.json +++ b/alerting/src/main/resources/org/opensearch/alerting/alerts/finding_mapping.json @@ -49,6 +49,9 @@ }, "fields": { "type": "text" + }, + "query_field_names": { + "type": "keyword" } } }, diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/AlertingRestTestCase.kt b/alerting/src/test/kotlin/org/opensearch/alerting/AlertingRestTestCase.kt index 0e4216c69..50cae9d8c 100644 --- a/alerting/src/test/kotlin/org/opensearch/alerting/AlertingRestTestCase.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/AlertingRestTestCase.kt @@ -76,6 +76,8 @@ import javax.management.MBeanServerInvocationHandler import javax.management.ObjectName import javax.management.remote.JMXConnectorFactory import javax.management.remote.JMXServiceURL +import kotlin.collections.ArrayList +import kotlin.collections.HashMap /** * Superclass for tests that interact with an external test cluster using OpenSearch's RestClient @@ -911,7 +913,7 @@ abstract class AlertingRestTestCase : ODFERestTestCase() { private fun indexDoc(client: RestClient, index: String, id: String, doc: String, 
refresh: Boolean = true): Response { val requestBody = StringEntity(doc, APPLICATION_JSON) val params = if (refresh) mapOf("refresh" to "true") else mapOf() - val response = client.makeRequest("PUT", "$index/_doc/$id", params, requestBody) + val response = client.makeRequest("POST", "$index/_doc/$id?op_type=create", params, requestBody) assertTrue( "Unable to index doc: '${doc.take(15)}...' to index: '$index'", listOf(RestStatus.OK, RestStatus.CREATED).contains(response.restStatus()) @@ -947,6 +949,11 @@ abstract class AlertingRestTestCase : ODFERestTestCase() { return index } + protected fun createTestIndex(index: String, mapping: String?, alias: String): String { + createIndex(index, Settings.EMPTY, mapping?.trimIndent(), alias) + return index + } + protected fun createTestConfigIndex(index: String = "." + randomAlphaOfLength(10).lowercase(Locale.ROOT)): String { try { createIndex( @@ -983,7 +990,7 @@ abstract class AlertingRestTestCase : ODFERestTestCase() { val indicesMap = mutableMapOf() val indicesJson = jsonBuilder().startObject().startArray("actions") indices.keys.map { - val indexName = createTestIndex(index = it.lowercase(Locale.ROOT), mapping = "") + val indexName = createTestIndex(index = it, mapping = "") val isWriteIndex = indices.getOrDefault(indexName, false) indicesMap[indexName] = isWriteIndex val indexMap = mapOf( @@ -1000,17 +1007,155 @@ abstract class AlertingRestTestCase : ODFERestTestCase() { return mutableMapOf(alias to indicesMap) } + protected fun createDataStream(datastream: String, mappings: String?, useComponentTemplate: Boolean) { + val indexPattern = "$datastream*" + var componentTemplateMappings = "\"properties\": {" + + " \"netflow.destination_transport_port\":{ \"type\": \"long\" }," + + " \"netflow.destination_ipv4_address\":{ \"type\": \"ip\" }" + + "}" + if (mappings != null) { + componentTemplateMappings = mappings + } + if (useComponentTemplate) { + // Setup index_template + createComponentTemplateWithMappings( + "my_ds_component_template-$datastream", + componentTemplateMappings + ) + } + createComposableIndexTemplate( + "my_index_template_ds-$datastream", + listOf(indexPattern), + (if (useComponentTemplate) "my_ds_component_template-$datastream" else null), + mappings, + true, + 0 + ) + createDataStream(datastream) + } + + protected fun createDataStream(datastream: String? = randomAlphaOfLength(10).lowercase(Locale.ROOT)) { + client().makeRequest("PUT", "_data_stream/$datastream") + } + + protected fun deleteDataStream(datastream: String) { + client().makeRequest("DELETE", "_data_stream/$datastream") + } + + protected fun createIndexAlias(alias: String, mappings: String?) 
{ + val indexPattern = "$alias*" + var componentTemplateMappings = "\"properties\": {" + + " \"netflow.destination_transport_port\":{ \"type\": \"long\" }," + + " \"netflow.destination_ipv4_address\":{ \"type\": \"ip\" }" + + "}" + if (mappings != null) { + componentTemplateMappings = mappings + } + createComponentTemplateWithMappings( + "my_alias_component_template-$alias", + componentTemplateMappings + ) + createComposableIndexTemplate( + "my_index_template_alias-$alias", + listOf(indexPattern), + "my_alias_component_template-$alias", + mappings, + false, + 0 + ) + createTestIndex( + "$alias-000001", + null, + """ + "$alias": { + "is_write_index": true + } + """.trimIndent() + ) + } + + protected fun deleteIndexAlias(alias: String) { + client().makeRequest("DELETE", "$alias*/_alias/$alias") + } + + protected fun createComponentTemplateWithMappings(componentTemplateName: String, mappings: String?) { + val body = """{"template" : { "mappings": {$mappings} }}""" + client().makeRequest( + "PUT", + "_component_template/$componentTemplateName", + emptyMap(), + StringEntity(body, ContentType.APPLICATION_JSON), + BasicHeader("Content-Type", "application/json") + ) + } + + protected fun createComposableIndexTemplate( + templateName: String, + indexPatterns: List, + componentTemplateName: String?, + mappings: String?, + isDataStream: Boolean, + priority: Int + ) { + var body = "{\n" + if (isDataStream) { + body += "\"data_stream\": { }," + } + body += "\"index_patterns\": [" + + indexPatterns.stream().collect( + Collectors.joining(",", "\"", "\"") + ) + "]," + if (componentTemplateName == null) { + body += "\"template\": {\"mappings\": {$mappings}}," + } + if (componentTemplateName != null) { + body += "\"composed_of\": [\"$componentTemplateName\"]," + } + body += "\"priority\":$priority}" + client().makeRequest( + "PUT", + "_index_template/$templateName", + emptyMap(), + StringEntity(body, APPLICATION_JSON), + BasicHeader("Content-Type", "application/json") + ) + } + + protected fun getDatastreamWriteIndex(datastream: String): String { + val response = client().makeRequest("GET", "_data_stream/$datastream", emptyMap(), null) + var respAsMap = responseAsMap(response) + if (respAsMap.containsKey("data_streams")) { + respAsMap = (respAsMap["data_streams"] as ArrayList>)[0] + val indices = respAsMap["indices"] as List> + val index = indices.last() + return index["index_name"] as String + } else { + respAsMap = respAsMap[datastream] as Map + } + val indices = respAsMap["indices"] as Array + return indices.last() + } + + protected fun rolloverDatastream(datastream: String) { + client().makeRequest( + "POST", + datastream + "/_rollover", + emptyMap(), + null + ) + } + protected fun randomAliasIndices( alias: String, num: Int = randomIntBetween(1, 10), includeWriteIndex: Boolean = true, ): Map { val indices = mutableMapOf() - val writeIndex = randomIntBetween(0, num) + val writeIndex = randomIntBetween(0, num - 1) for (i: Int in 0 until num) { - var indexName = randomAlphaOfLength(10) + var indexName = randomAlphaOfLength(10).lowercase(Locale.ROOT) while (indexName.equals(alias) || indices.containsKey(indexName)) - indexName = randomAlphaOfLength(10) + indexName = randomAlphaOfLength(10).lowercase(Locale.ROOT) indices[indexName] = includeWriteIndex && i == writeIndex } return indices diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/DocumentMonitorRunnerIT.kt b/alerting/src/test/kotlin/org/opensearch/alerting/DocumentMonitorRunnerIT.kt index 9f9164065..0f6218f30 100644 --- 
a/alerting/src/test/kotlin/org/opensearch/alerting/DocumentMonitorRunnerIT.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/DocumentMonitorRunnerIT.kt @@ -10,6 +10,8 @@ import org.apache.http.entity.StringEntity import org.opensearch.action.search.SearchResponse import org.opensearch.alerting.alerts.AlertIndices.Companion.ALL_ALERT_INDEX_PATTERN import org.opensearch.alerting.alerts.AlertIndices.Companion.ALL_FINDING_INDEX_PATTERN +import org.opensearch.alerting.core.lock.LockService +import org.opensearch.alerting.settings.AlertingSettings import org.opensearch.client.Response import org.opensearch.client.ResponseException import org.opensearch.common.xcontent.json.JsonXContent @@ -17,16 +19,20 @@ import org.opensearch.commons.alerting.model.Alert import org.opensearch.commons.alerting.model.DataSources import org.opensearch.commons.alerting.model.DocLevelMonitorInput import org.opensearch.commons.alerting.model.DocLevelQuery +import org.opensearch.commons.alerting.model.IntervalSchedule import org.opensearch.commons.alerting.model.action.ActionExecutionPolicy import org.opensearch.commons.alerting.model.action.AlertCategory import org.opensearch.commons.alerting.model.action.PerAlertActionScope import org.opensearch.commons.alerting.model.action.PerExecutionActionScope import org.opensearch.core.rest.RestStatus import org.opensearch.script.Script +import org.opensearch.test.OpenSearchTestCase import java.time.ZonedDateTime import java.time.format.DateTimeFormatter +import java.time.temporal.ChronoUnit import java.time.temporal.ChronoUnit.MILLIS import java.util.Locale +import java.util.concurrent.TimeUnit class DocumentMonitorRunnerIT : AlertingRestTestCase() { @@ -73,10 +79,25 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { val alerts = searchAlerts(monitor) assertEquals("Alert saved for test monitor", 0, alerts.size) + + // ensure doc level query is deleted on dry run + val request = """{ + "size": 10, + "query": { + "match_all": {} + } + }""" + var httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.totalHits?.let { assertEquals("Query saved in query index", 0L, it.value) } } - fun `test execute monitor returns search result with dryrun`() { - val testIndex = createTestIndex() + fun `test dryrun execute monitor with queryFieldNames set up with correct field`() { + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) val testDoc = """{ "message" : "This is an error from IAD region", @@ -84,31 +105,45 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { "test_field" : "us-west-2" }""" - val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) - val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) + val index = createTestIndex() - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) - val monitor = randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger)) + val docQuery = + DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf(), queryFieldNames = listOf("test_field")) + val docLevelInput = DocLevelMonitorInput("description", listOf(index), 

-        indexDoc(testIndex, "1", testDoc)
-        indexDoc(testIndex, "5", testDoc)
+        val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id)
+        val monitor = randomDocumentLevelMonitor(
+            inputs = listOf(docLevelInput),
+            triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action)))
+        )
+
+        indexDoc(index, "1", testDoc)

         val response = executeMonitor(monitor, params = DRYRUN_MONITOR)

         val output = entityAsMap(response)

-        assertEquals(monitor.name, output["monitor_name"])
-        @Suppress("UNCHECKED_CAST")
-        val searchResult = (output.objectMap("input_results")["results"] as List<Map<String, Any>>).first()
-        @Suppress("UNCHECKED_CAST")
-        val matchingDocsToQuery = searchResult[docQuery.id] as List<String>
-        assertEquals("Incorrect search result", 2, matchingDocsToQuery.size)
-        assertTrue("Incorrect search result", matchingDocsToQuery.contains("1|$testIndex"))
-        assertTrue("Incorrect search result", matchingDocsToQuery.contains("5|$testIndex"))
+
+        assertEquals(1, output.objectMap("trigger_results").values.size)
+
+        for (triggerResult in output.objectMap("trigger_results").values) {
+            assertEquals(1, triggerResult.objectMap("action_results").values.size)
+            for (alertActionResult in triggerResult.objectMap("action_results").values) {
+                for (actionResult in alertActionResult.values) {
+                    @Suppress("UNCHECKED_CAST") val actionOutput = (actionResult as Map<String, Map<String, String>>)["output"]
+                        as Map<String, String>
+                    assertEquals("Hello ${monitor.name}", actionOutput["subject"])
+                    assertEquals("Hello ${monitor.name}", actionOutput["message"])
+                }
+            }
+        }
+
+        val alerts = searchAlerts(monitor)
+        assertEquals("Alert saved for test monitor", 0, alerts.size)
     }
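// The match_all search against monitor.dataSources.queryIndex above is repeated
// throughout this file to count the doc-level queries persisted for a monitor.
// A sketch of a helper the suite could share follows; it is hypothetical, not
// part of this patch, and assumes the commons-alerting Monitor model these
// tests already use. "size": 0 suffices because only the hit count is read.
private fun countQueryIndexDocs(monitor: Monitor): Long {
    val request = """{ "size": 0, "query": { "match_all": {} } }"""
    val httpResponse = adminClient().makeRequest(
        "GET", "/${monitor.dataSources.queryIndex}/_search",
        StringEntity(request, ContentType.APPLICATION_JSON)
    )
    assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus())
    val searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content))
    return searchResponse.hits.totalHits?.value ?: 0L
}
// e.g.: assertEquals("Query saved in query index", 0L, countQueryIndexDocs(monitor))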

-    fun `test execute monitor generates alerts and findings`() {
-        val testIndex = createTestIndex()
+    fun `test seq_no calculation correctness when docs are deleted`() {
+        adminClient().updateSettings(AlertingSettings.DOC_LEVEL_MONITOR_SHARD_FETCH_SIZE.key, 2)
         val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS))
         val testDoc = """{
             "message" : "This is an error from IAD region",
@@ -116,39 +151,42 @@
             "test_field" : "us-west-2"
         }"""

-        val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf())
-        val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery))
+        val index = createTestIndex()

-        val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN)
-        val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger)))
-        assertNotNull(monitor.id)
+        val docQuery =
+            DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf())
+        val docLevelInput = DocLevelMonitorInput("description", listOf(index), listOf(docQuery))

-        indexDoc(testIndex, "1", testDoc)
-        indexDoc(testIndex, "5", testDoc)
+        val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id)
+        val monitor = randomDocumentLevelMonitor(
+            inputs = listOf(docLevelInput),
+            triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action)))
+        )

-        val response = executeMonitor(monitor.id)
+        indexDoc(index, "1", testDoc)
+        indexDoc(index, "2", testDoc)
+        indexDoc(index, "3", testDoc)
+        indexDoc(index, "4", testDoc)
+        indexDoc(index, "5", testDoc)
+        indexDoc(index, "11", testDoc)
+        indexDoc(index, "21", testDoc)
+        indexDoc(index, "31", testDoc)
+        indexDoc(index, "41", testDoc)
+        indexDoc(index, "51", testDoc)
+
+        deleteDoc(index, "51")
+        val response = executeMonitor(monitor, params = mapOf("dryrun" to "false"))

         val output = entityAsMap(response)
-        assertEquals(monitor.name, output["monitor_name"])
-        @Suppress("UNCHECKED_CAST")
-        val searchResult = (output.objectMap("input_results")["results"] as List<Map<String, Any>>).first()
-        @Suppress("UNCHECKED_CAST")
-        val matchingDocsToQuery = searchResult[docQuery.id] as List<String>
-        assertEquals("Incorrect search result", 2, matchingDocsToQuery.size)
-        assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex")))
-
-        val alerts = searchAlertsWithFilter(monitor)
-        assertEquals("Alert saved for test monitor", 2, alerts.size)
-        val findings = searchFindings(monitor)
-        assertEquals("Findings saved for test monitor", 2, findings.size)
-        assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("1"))
-        assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("5"))
+        for (triggerResult in output.objectMap("trigger_results").values) {
+            assertEquals(9, triggerResult.objectMap("action_results").values.size)
+        }
     }

-    fun `test execute monitor with tag as trigger condition generates alerts and findings`() {
-        val testIndex = createTestIndex()
+    fun `test dryrun execute monitor with queryFieldNames set up with wrong field`() {
+        val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS))
         val testDoc = """{
             "message" : "This is an error from IAD region",
@@ -156,39 +194,53 @@
             "test_field" : "us-west-2"
         }"""

-        val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", tags = listOf("test_tag"), fields = listOf())
-        val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery))
+        val index = createTestIndex()
+        // using wrong field name
+        val docQuery = DocLevelQuery(
+            query = "test_field:\"us-west-2\"",
+            name = "3",
+            fields = listOf(),
+            queryFieldNames = listOf("wrong_field")
+        )
+        val docLevelInput = DocLevelMonitorInput("description", listOf(index), listOf(docQuery))

-        val trigger = randomDocumentLevelTrigger(condition = Script("query[tag=test_tag]"))
-        val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger)))
-        assertNotNull(monitor.id)
+        val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id)
+        val monitor = randomDocumentLevelMonitor(
+            inputs = listOf(docLevelInput),
+            triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action)))
+        )

-        indexDoc(testIndex, "1", testDoc)
-        indexDoc(testIndex, "5", testDoc)
+        indexDoc(index, "1", testDoc)
+        indexDoc(index, "2", testDoc)
+        indexDoc(index, "3", testDoc)
+        indexDoc(index, "4", testDoc)
+        indexDoc(index, "5", testDoc)

-        val response = executeMonitor(monitor.id)
+        val response = executeMonitor(monitor, params = DRYRUN_MONITOR)

         val output = entityAsMap(response)
-        assertEquals(monitor.name, output["monitor_name"])
-        @Suppress("UNCHECKED_CAST")
-        val searchResult = (output.objectMap("input_results")["results"] as List<Map<String, Any>>).first()
-        @Suppress("UNCHECKED_CAST")
-        val matchingDocsToQuery = searchResult[docQuery.id] as List<String>
-        assertEquals("Incorrect search result", 2, matchingDocsToQuery.size)
-        assertTrue("Incorrect search result",
matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex"))) - val alerts = searchAlertsWithFilter(monitor) - assertEquals("Alert saved for test monitor", 2, alerts.size) + assertEquals(1, output.objectMap("trigger_results").values.size) - val findings = searchFindings(monitor) - assertEquals("Findings saved for test monitor", 2, findings.size) - assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("1")) - assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("5")) + for (triggerResult in output.objectMap("trigger_results").values) { + assertEquals(0, triggerResult.objectMap("action_results").values.size) + for (alertActionResult in triggerResult.objectMap("action_results").values) { + for (actionResult in alertActionResult.values) { + @Suppress("UNCHECKED_CAST") val actionOutput = (actionResult as Map>)["output"] + as Map + assertEquals("Hello ${monitor.name}", actionOutput["subject"]) + assertEquals("Hello ${monitor.name}", actionOutput["message"]) + } + } + } + + val alerts = searchAlerts(monitor) + assertEquals("Alert saved for test monitor", 0, alerts.size) } - fun `test execute monitor input error`() { - val testIndex = createTestIndex() + fun `test fetch_query_field_names setting is disabled by configuring queryFieldNames set up with wrong field still works`() { + adminClient().updateSettings(AlertingSettings.DOC_LEVEL_MONITOR_FETCH_ONLY_QUERY_FIELDS_ENABLED.key, "false") val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) val testDoc = """{ "message" : "This is an error from IAD region", @@ -196,29 +248,48 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { "test_field" : "us-west-2" }""" - val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", tags = listOf("test_tag"), fields = listOf()) - val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) + val index = createTestIndex() + // using wrong field name + val docQuery = DocLevelQuery( + query = "test_field:\"us-west-2\"", + name = "3", + fields = listOf(), + queryFieldNames = listOf("wrong_field") + ) + val docLevelInput = DocLevelMonitorInput("description", listOf(index), listOf(docQuery)) - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) - val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) - assertNotNull(monitor.id) + val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) + val monitor = randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) + ) - deleteIndex(testIndex) + indexDoc(index, "1", testDoc) - val response = executeMonitor(monitor.id) + val response = executeMonitor(monitor, params = DRYRUN_MONITOR) val output = entityAsMap(response) assertEquals(monitor.name, output["monitor_name"]) - @Suppress("UNCHECKED_CAST") - val inputResults = output.stringMap("input_results") - assertTrue("Missing monitor error message", (inputResults?.get("error") as String).isNotEmpty()) + + assertEquals(1, output.objectMap("trigger_results").values.size) + + for (triggerResult in output.objectMap("trigger_results").values) { + assertEquals(1, triggerResult.objectMap("action_results").values.size) + for (alertActionResult in triggerResult.objectMap("action_results").values) { + for (actionResult in 
alertActionResult.values) { + @Suppress("UNCHECKED_CAST") val actionOutput = (actionResult as Map>)["output"] + as Map + assertEquals("Hello ${monitor.name}", actionOutput["subject"]) + assertEquals("Hello ${monitor.name}", actionOutput["message"]) + } + } + } val alerts = searchAlerts(monitor) - assertEquals("Alert not saved", 1, alerts.size) - assertEquals("Alert status is incorrect", Alert.State.ERROR, alerts[0].state) + assertEquals("Alert saved for test monitor", 0, alerts.size) } - fun `test execute monitor generates alerts and findings with per alert execution for actions`() { + fun `test execute monitor returns search result with dryrun`() { val testIndex = createTestIndex() val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) val testDoc = """{ @@ -230,27 +301,13 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) - val alertCategories = AlertCategory.values() - val actionExecutionScope = PerAlertActionScope( - actionableAlerts = (1..randomInt(alertCategories.size)).map { alertCategories[it - 1] }.toSet() - ) - val actionExecutionPolicy = ActionExecutionPolicy(actionExecutionScope) - val actions = (0..randomInt(10)).map { - randomActionWithPolicy( - template = randomTemplateScript("Hello {{ctx.monitor.name}}"), - destinationId = createDestination().id, - actionExecutionPolicy = actionExecutionPolicy - ) - } - - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = actions) - val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) - assertNotNull(monitor.id) + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger)) indexDoc(testIndex, "1", testDoc) indexDoc(testIndex, "5", testDoc) - val response = executeMonitor(monitor.id) + val response = executeMonitor(monitor, params = DRYRUN_MONITOR) val output = entityAsMap(response) @@ -260,33 +317,26 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { @Suppress("UNCHECKED_CAST") val matchingDocsToQuery = searchResult[docQuery.id] as List assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) - assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex"))) + assertTrue("Incorrect search result", matchingDocsToQuery.contains("1|$testIndex")) + assertTrue("Incorrect search result", matchingDocsToQuery.contains("5|$testIndex")) - for (triggerResult in output.objectMap("trigger_results").values) { - assertEquals(2, triggerResult.objectMap("action_results").values.size) - for (alertActionResult in triggerResult.objectMap("action_results").values) { - assertEquals(actions.size, alertActionResult.values.size) - for (actionResult in alertActionResult.values) { - @Suppress("UNCHECKED_CAST") val actionOutput = (actionResult as Map>)["output"] - as Map - assertEquals("Hello ${monitor.name}", actionOutput["subject"]) - assertEquals("Hello ${monitor.name}", actionOutput["message"]) - } + // ensure doc level query is deleted on dry run + val request = """{ + "size": 10, + "query": { + "match_all": {} } - } - - refreshAllIndices() - - val alerts = searchAlertsWithFilter(monitor) - assertEquals("Alert saved for test monitor", 2, alerts.size) - - val 
findings = searchFindings(monitor) - assertEquals("Findings saved for test monitor", 2, findings.size) - assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("1")) - assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("5")) + }""" + var httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.totalHits?.let { assertEquals("Query saved in query index", 0L, it.value) } } - fun `test execute monitor generates alerts and findings with per trigger execution for actions`() { + fun `test execute monitor returns search result with dryrun then without dryrun ensure dry run query not saved`() { val testIndex = createTestIndex() val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) val testDoc = """{ @@ -298,24 +348,13 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) - val actionExecutionScope = PerExecutionActionScope() - val actionExecutionPolicy = ActionExecutionPolicy(actionExecutionScope) - val actions = (0..randomInt(10)).map { - randomActionWithPolicy( - template = randomTemplateScript("Hello {{ctx.monitor.name}}"), - destinationId = createDestination().id, - actionExecutionPolicy = actionExecutionPolicy - ) - } - - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = actions) - val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) - assertNotNull(monitor.id) + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger)) indexDoc(testIndex, "1", testDoc) - indexDoc(testIndex, "5", testDoc) + indexDoc(testIndex, "2", testDoc) - val response = executeMonitor(monitor.id) + val response = executeMonitor(monitor, params = DRYRUN_MONITOR) val output = entityAsMap(response) @@ -325,36 +364,79 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { @Suppress("UNCHECKED_CAST") val matchingDocsToQuery = searchResult[docQuery.id] as List assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) - assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex"))) + assertTrue("Incorrect search result", matchingDocsToQuery.contains("1|$testIndex")) + assertTrue("Incorrect search result", matchingDocsToQuery.contains("2|$testIndex")) - for (triggerResult in output.objectMap("trigger_results").values) { - assertEquals(2, triggerResult.objectMap("action_results").values.size) - for (alertActionResult in triggerResult.objectMap("action_results").values) { - assertEquals(actions.size, alertActionResult.values.size) - for (actionResult in alertActionResult.values) { - @Suppress("UNCHECKED_CAST") val actionOutput = (actionResult as Map>)["output"] - as Map - assertEquals("Hello ${monitor.name}", actionOutput["subject"]) - assertEquals("Hello ${monitor.name}", actionOutput["message"]) - } + // ensure doc level query is deleted on dry run + val request = """{ + 
"size": 10, + "query": { + "match_all": {} } - } + }""" + var httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.totalHits?.let { assertEquals(0L, it.value) } - val alerts = searchAlertsWithFilter(monitor) + // create and execute second monitor not as dryrun + val testIndex2 = createTestIndex("test1") + val testTime2 = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc2 = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime2", + "test_field" : "us-east-1" + }""" + + val docQuery2 = DocLevelQuery(query = "test_field:\"us-east-1\"", name = "3", fields = listOf()) + val docLevelInput2 = DocLevelMonitorInput("description", listOf(testIndex2), listOf(docQuery2)) + + val trigger2 = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor2 = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput2), triggers = listOf(trigger2))) + assertNotNull(monitor2.id) + + indexDoc(testIndex2, "1", testDoc2) + indexDoc(testIndex2, "5", testDoc2) + + val response2 = executeMonitor(monitor2.id) + val output2 = entityAsMap(response2) + + assertEquals(monitor2.name, output2["monitor_name"]) + @Suppress("UNCHECKED_CAST") + val searchResult2 = (output2.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + val matchingDocsToQuery2 = searchResult2[docQuery2.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery2.size) + assertTrue("Incorrect search result", matchingDocsToQuery2.containsAll(listOf("1|$testIndex2", "5|$testIndex2"))) + + val alerts = searchAlertsWithFilter(monitor2) assertEquals("Alert saved for test monitor", 2, alerts.size) - val findings = searchFindings(monitor) + val findings = searchFindings(monitor2) assertEquals("Findings saved for test monitor", 2, findings.size) - assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("1")) - assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("5")) - } + assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("5")) + assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("1")) - fun `test execute monitor with wildcard index that generates alerts and findings for EQUALS query operator`() { - val testIndexPrefix = "test-index-${randomAlphaOfLength(10).lowercase(Locale.ROOT)}" - val testQueryName = "wildcard-test-query" - val testIndex = createTestIndex("${testIndexPrefix}1") - val testIndex2 = createTestIndex("${testIndexPrefix}2") + // ensure query from second monitor was saved + val expectedQueries = listOf("test_field_test1_${monitor2.id}:\"us-east-1\"") + httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.forEach { hit -> + val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] + assertTrue(expectedQueries.contains(query)) + } + 
searchResponse.hits.totalHits?.let { assertEquals("Query saved in query index", 1L, it.value) }
+    }

+    fun `test execute monitor generates alerts and findings`() {
+        val testIndex = createTestIndex()
         val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS))
         val testDoc = """{
             "message" : "This is an error from IAD region",
@@ -362,15 +444,15 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() {
             "test_field" : "us-west-2"
         }"""

-        val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = testQueryName, fields = listOf())
-        val docLevelInput = DocLevelMonitorInput("description", listOf("$testIndexPrefix*"), listOf(docQuery))
+        val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf())
+        val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery))

-        val trigger = randomDocumentLevelTrigger(condition = Script("query[name=$testQueryName]"))
+        val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN)
         val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger)))
         assertNotNull(monitor.id)

         indexDoc(testIndex, "1", testDoc)
-        indexDoc(testIndex2, "5", testDoc)
+        indexDoc(testIndex, "5", testDoc)

         val response = executeMonitor(monitor.id)

@@ -382,23 +464,57 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() {
         @Suppress("UNCHECKED_CAST")
         val matchingDocsToQuery = searchResult[docQuery.id] as List<String>
         assertEquals("Incorrect search result", 2, matchingDocsToQuery.size)
-        assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex2")))
+        assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex")))

         val alerts = searchAlertsWithFilter(monitor)
         assertEquals("Alert saved for test monitor", 2, alerts.size)

         val findings = searchFindings(monitor)
         assertEquals("Findings saved for test monitor", 2, findings.size)
-        val foundFindings = findings.filter { it.relatedDocIds.contains("1") || it.relatedDocIds.contains("5") }
-        assertEquals("Didn't find findings for docs 1 and 5", 2, foundFindings.size)
+        assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("5"))
+        assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("1"))
     }

-    fun `test execute monitor with wildcard index that generates alerts and findings for NOT EQUALS query operator`() {
-        val testIndexPrefix = "test-index-${randomAlphaOfLength(10).lowercase(Locale.ROOT)}"
-        val testQueryName = "wildcard-test-query"
-        val testIndex = createTestIndex("${testIndexPrefix}1")
-        val testIndex2 = createTestIndex("${testIndexPrefix}2")
+    fun `test monitor run generates no error alerts with versionconflictengineexception with locks`() {
+        val testIndex = createTestIndex()
+        val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS))
+        val testDoc = """{
+            "message" : "This is an error from IAD region",
+            "test_strict_date_time" : "$testTime",
+            "test_field" : "us-west-2"
+        }"""
+
+        val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf())
+        val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery))
+
+        val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN)
+        val monitor = createMonitor(
+            randomDocumentLevelMonitor(
+                name = "__lag-monitor-test__",
+                inputs = listOf(docLevelInput),
+                triggers = listOf(trigger),
+                schedule = IntervalSchedule(interval = 1, unit = ChronoUnit.MINUTES)
+            )
+        )
+        assertNotNull(monitor.id)
+
+        indexDoc(testIndex, "1", testDoc)
+        indexDoc(testIndex, "5", testDoc)
+        Thread.sleep(240000)
+
+        val inputMap = HashMap<String, Any>()
+        inputMap["searchString"] = monitor.name
+
+        val responseMap = getAlerts(inputMap).asMap()
+        val alerts = (responseMap["alerts"] as ArrayList<Map<String, Any>>)
+        alerts.forEach {
+            assertTrue(it["error_message"] == null)
+        }
+    }
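// The test above blocks for a fixed four minutes (Thread.sleep(240000)) so the
// one-minute schedule can fire at least once. A hedged alternative sketch, not
// part of this patch: poll with the same OpenSearchTestCase.waitUntil utility
// the next test uses, returning as soon as an alert for the monitor is visible.
// The helper name and its use of getAlerts are assumptions for illustration.
private fun waitForAlerts(monitorName: String): Boolean =
    OpenSearchTestCase.waitUntil({
        val responseMap = getAlerts(hashMapOf<String, Any>("searchString" to monitorName)).asMap()
        (responseMap["alerts"] as ArrayList<Map<String, Any>>).isNotEmpty()
    }, 240, TimeUnit.SECONDS)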
+
+    @AwaitsFix(bugUrl = "")
+    fun `test monitor run generate lock and monitor delete removes lock`() {
+        val testIndex = createTestIndex()
         val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS))
         val testDoc = """{
             "message" : "This is an error from IAD region",
@@ -406,15 +522,57 @@
             "test_field" : "us-west-2"
         }"""

-        val docQuery = DocLevelQuery(query = "NOT (test_field:\"us-west-1\")", name = testQueryName, fields = listOf())
-        val docLevelInput = DocLevelMonitorInput("description", listOf("$testIndexPrefix*"), listOf(docQuery))
+        val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf())
+        val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery))

-        val trigger = randomDocumentLevelTrigger(condition = Script("query[name=$testQueryName]"))
+        val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN)
+        val monitor = createMonitor(
+            randomDocumentLevelMonitor(
+                inputs = listOf(docLevelInput),
+                triggers = listOf(trigger),
+                schedule = IntervalSchedule(interval = 1, unit = ChronoUnit.MINUTES)
+            )
+        )
+        assertNotNull(monitor.id)
+
+        indexDoc(testIndex, "1", testDoc)
+        indexDoc(testIndex, "5", testDoc)
+        OpenSearchTestCase.waitUntil({
+            val response = client().makeRequest("HEAD", LockService.LOCK_INDEX_NAME)
+            return@waitUntil (response.restStatus().status == 200)
+        }, 240, TimeUnit.SECONDS)
+
+        var response = client().makeRequest("GET", LockService.LOCK_INDEX_NAME + "/_search")
+        var responseMap = entityAsMap(response)
+        var noOfLocks = ((responseMap["hits"] as Map<String, Any>)["hits"] as List<Any>).size
+        assertEquals(1, noOfLocks)
+
+        deleteMonitor(monitor)
+        refreshIndex(LockService.LOCK_INDEX_NAME)
+        response = client().makeRequest("GET", LockService.LOCK_INDEX_NAME + "/_search")
+        responseMap = entityAsMap(response)
+        noOfLocks = ((responseMap["hits"] as Map<String, Any>)["hits"] as List<Any>).size
+        assertEquals(0, noOfLocks)
+    }
+
+    fun `test execute monitor with tag as trigger condition generates alerts and findings`() {
+        val testIndex = createTestIndex()
+        val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS))
+        val testDoc = """{
+            "message" : "This is an error from IAD region",
+            "test_strict_date_time" : "$testTime",
+            "test_field" : "us-west-2"
+        }"""
+
+        val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", tags = listOf("test_tag"), fields = listOf())
+        val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery))
+
+        val trigger = randomDocumentLevelTrigger(condition = Script("query[tag=test_tag]"))
         val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger)))
         assertNotNull(monitor.id)

         indexDoc(testIndex, "1", testDoc)
-        indexDoc(testIndex2, "5", testDoc)
+        indexDoc(testIndex, "5", testDoc)

         val response = executeMonitor(monitor.id)

@@ -426,20 +584,19 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() {
         @Suppress("UNCHECKED_CAST")
         val
matchingDocsToQuery = searchResult[docQuery.id] as List assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) - assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex2"))) + assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex"))) val alerts = searchAlertsWithFilter(monitor) assertEquals("Alert saved for test monitor", 2, alerts.size) val findings = searchFindings(monitor) assertEquals("Findings saved for test monitor", 2, findings.size) - val foundFindings = findings.filter { it.relatedDocIds.contains("1") || it.relatedDocIds.contains("5") } - assertEquals("Didn't find findings for docs 1 and 5", 2, foundFindings.size) + assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("5")) + assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("1")) } - fun `test execute monitor with new index added after first execution that generates alerts and findings`() { - val testIndex = createTestIndex("test1") - val testIndex2 = createTestIndex("test2") + fun `test execute monitor input error`() { + val testIndex = createTestIndex() val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) val testDoc = """{ "message" : "This is an error from IAD region", @@ -447,479 +604,257 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { "test_field" : "us-west-2" }""" - val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) - val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", tags = listOf("test_tag"), fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) assertNotNull(monitor.id) - indexDoc(testIndex, "1", testDoc) - indexDoc(testIndex2, "5", testDoc) - executeMonitor(monitor.id) - - var alerts = searchAlertsWithFilter(monitor) - assertEquals("Alert saved for test monitor", 2, alerts.size) - - var findings = searchFindings(monitor) - assertEquals("Findings saved for test monitor", 2, findings.size) - - var foundFindings = findings.filter { it.relatedDocIds.contains("1") || it.relatedDocIds.contains("5") } - assertEquals("Findings saved for test monitor expected 1 and 5", 2, foundFindings.size) - - // clear previous findings and alerts - deleteIndex(ALL_FINDING_INDEX_PATTERN) - deleteIndex(ALL_ALERT_INDEX_PATTERN) - - val testIndex3 = createTestIndex("test3") - indexDoc(testIndex3, "10", testDoc) - indexDoc(testIndex, "14", testDoc) - indexDoc(testIndex2, "51", testDoc) + deleteIndex(testIndex) val response = executeMonitor(monitor.id) val output = entityAsMap(response) - assertEquals(monitor.name, output["monitor_name"]) @Suppress("UNCHECKED_CAST") - val searchResult = (output.objectMap("input_results")["results"] as List>).first() - @Suppress("UNCHECKED_CAST") - val matchingDocsToQuery = searchResult[docQuery.id] as List - assertEquals("Incorrect search result", 3, matchingDocsToQuery.size) - assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("14|$testIndex", "51|$testIndex2", "10|$testIndex3"))) - - alerts = searchAlertsWithFilter(monitor) - assertEquals("Alert saved for 
test monitor", 3, alerts.size) - - findings = searchFindings(monitor) - assertEquals("Findings saved for test monitor", 3, findings.size) + val inputResults = output.stringMap("input_results") + assertTrue("Missing monitor error message", (inputResults?.get("error") as String).isNotEmpty()) - foundFindings = findings.filter { - it.relatedDocIds.contains("14") || it.relatedDocIds.contains("51") || it.relatedDocIds.contains("10") - } - assertEquals("Findings saved for test monitor expected 14, 51 and 10", 3, foundFindings.size) + val alerts = searchAlerts(monitor) + assertEquals("Alert not saved", 1, alerts.size) + assertEquals("Alert status is incorrect", Alert.State.ERROR, alerts[0].state) } - fun `test execute monitor with indices having fields with same name but different data types`() { - val testIndex = createTestIndex( - "test1", - """"properties": { - "source.device.port": { "type": "long" }, - "source.device.hwd.id": { "type": "long" }, - "nested_field": { - "type": "nested", - "properties": { - "test1": { - "type": "keyword" - } - } - }, - "my_join_field": { - "type": "join", - "relations": { - "question": "answer" - } - }, - "test_field" : { "type" : "integer" } - } - """.trimIndent() - ) - var testDoc = """{ - "source" : { "device": {"port" : 12345 } }, - "nested_field": { "test1": "some text" }, - "test_field": 12345 + fun `test execute monitor generates alerts and findings with per alert execution for actions`() { + val testIndex = createTestIndex() + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" }""" - val docQuery1 = DocLevelQuery( - query = "(source.device.port:12345 AND test_field:12345) OR source.device.hwd.id:12345", - name = "4", - fields = listOf() - ) - val docQuery2 = DocLevelQuery( - query = "(source.device.port:\"12345\" AND test_field:\"12345\") OR source.device.hwd.id:\"12345\"", - name = "5", - fields = listOf() + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) + + val alertCategories = AlertCategory.values() + val actionExecutionScope = PerAlertActionScope( + actionableAlerts = (1..randomInt(alertCategories.size)).map { alertCategories[it - 1] }.toSet() ) - val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery1, docQuery2)) + val actionExecutionPolicy = ActionExecutionPolicy(actionExecutionScope) + val actions = (0..randomInt(10)).map { + randomActionWithPolicy( + template = randomTemplateScript("Hello {{ctx.monitor.name}}"), + destinationId = createDestination().id, + actionExecutionPolicy = actionExecutionPolicy + ) + } - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = actions) val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) assertNotNull(monitor.id) indexDoc(testIndex, "1", testDoc) - executeMonitor(monitor.id) - - var alerts = searchAlertsWithFilter(monitor) - assertEquals("Alert saved for test monitor", 1, alerts.size) + indexDoc(testIndex, "5", testDoc) - var findings = searchFindings(monitor) - assertEquals("Findings saved for test monitor", 1, findings.size) + val response = executeMonitor(monitor.id) - // clear previous 
findings and alerts - deleteIndex(ALL_FINDING_INDEX_PATTERN) - deleteIndex(ALL_ALERT_INDEX_PATTERN) + val output = entityAsMap(response) - indexDoc(testIndex, "2", testDoc) - - // no fields expanded as only index test1 is present - val oldExpectedQueries = listOf( - "(source.device.port_test__${monitor.id}:12345 AND test_field_test__${monitor.id}:12345) OR " + - "source.device.hwd.id_test__${monitor.id}:12345", - "(source.device.port_test__${monitor.id}:\"12345\" AND test_field_test__${monitor.id}:\"12345\") " + - "OR source.device.hwd.id_test__${monitor.id}:\"12345\"" - ) + assertEquals(monitor.name, output["monitor_name"]) + @Suppress("UNCHECKED_CAST") + val searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + val matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex"))) - val request = """{ - "size": 10, - "query": { - "match_all": {} + for (triggerResult in output.objectMap("trigger_results").values) { + assertEquals(2, triggerResult.objectMap("action_results").values.size) + for (alertActionResult in triggerResult.objectMap("action_results").values) { + assertEquals(actions.size, alertActionResult.values.size) + for (actionResult in alertActionResult.values) { + @Suppress("UNCHECKED_CAST") val actionOutput = (actionResult as Map>)["output"] + as Map + assertEquals("Hello ${monitor.name}", actionOutput["subject"]) + assertEquals("Hello ${monitor.name}", actionOutput["message"]) + } } - }""" - var httpResponse = adminClient().makeRequest( - "GET", "/${monitor.dataSources.queryIndex}/_search", - StringEntity(request, ContentType.APPLICATION_JSON) - ) - assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) - var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) - searchResponse.hits.forEach { hit -> - val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] - assertTrue(oldExpectedQueries.contains(query)) } - val testIndex2 = createTestIndex( - "test2", - """ - "properties" : { - "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, - "test_field" : { "type" : "keyword" }, - "number" : { "type" : "keyword" } - } - """.trimIndent() - ) - testDoc = """{ - "source" : { "device": {"port" : "12345" } }, - "nested_field": { "test1": "some text" }, - "test_field": "12345" - }""" - indexDoc(testIndex2, "1", testDoc) - executeMonitor(monitor.id) - - // only fields source.device.port & test_field is expanded as they have same name but different data types - // in indices test1 & test2 - val newExpectedQueries = listOf( - "(source.device.port_test2_${monitor.id}:12345 AND test_field_test2_${monitor.id}:12345) " + - "OR source.device.hwd.id_test__${monitor.id}:12345", - "(source.device.port_test1_${monitor.id}:12345 AND test_field_test1_${monitor.id}:12345) " + - "OR source.device.hwd.id_test__${monitor.id}:12345", - "(source.device.port_test2_${monitor.id}:\"12345\" AND test_field_test2_${monitor.id}:\"12345\") " + - "OR source.device.hwd.id_test__${monitor.id}:\"12345\"", - "(source.device.port_test1_${monitor.id}:\"12345\" AND test_field_test1_${monitor.id}:\"12345\") " + - "OR source.device.hwd.id_test__${monitor.id}:\"12345\"" - ) + refreshAllIndices() - alerts = searchAlertsWithFilter(monitor) + val alerts = 
searchAlertsWithFilter(monitor) assertEquals("Alert saved for test monitor", 2, alerts.size) - findings = searchFindings(monitor) + val findings = searchFindings(monitor) assertEquals("Findings saved for test monitor", 2, findings.size) - - httpResponse = adminClient().makeRequest( - "GET", "/${monitor.dataSources.queryIndex}/_search", - StringEntity(request, ContentType.APPLICATION_JSON) - ) - assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) - searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) - searchResponse.hits.forEach { hit -> - val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] - assertTrue(oldExpectedQueries.contains(query) || newExpectedQueries.contains(query)) - } + assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("5")) + assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("1")) } - fun `test execute monitor with indices having fields with same name but with different nesting`() { - val testIndex = createTestIndex( - "test1", - """"properties": { - "nested_field": { - "type": "nested", - "properties": { - "test1": { - "type": "keyword" - } - } - } - } - """.trimIndent() - ) - - val testIndex2 = createTestIndex( - "test2", - """"properties": { - "nested_field": { - "properties": { - "test1": { - "type": "keyword" - } - } - } - } - """.trimIndent() - ) + fun `test execute monitor generates alerts and findings with per trigger execution for actions`() { + val testIndex = createTestIndex() + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) val testDoc = """{ - "nested_field": { "test1": "12345" } + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" }""" - val docQuery = DocLevelQuery( - query = "nested_field.test1:\"12345\"", - name = "5", - fields = listOf() - ) - val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val actionExecutionScope = PerExecutionActionScope() + val actionExecutionPolicy = ActionExecutionPolicy(actionExecutionScope) + val actions = (0..randomInt(10)).map { + randomActionWithPolicy( + template = randomTemplateScript("Hello {{ctx.monitor.name}}"), + destinationId = createDestination().id, + actionExecutionPolicy = actionExecutionPolicy + ) + } + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = actions) val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) assertNotNull(monitor.id) indexDoc(testIndex, "1", testDoc) - indexDoc(testIndex2, "1", testDoc) + indexDoc(testIndex, "5", testDoc) - executeMonitor(monitor.id) + val response = executeMonitor(monitor.id) + + val output = entityAsMap(response) + + assertEquals(monitor.name, output["monitor_name"]) + @Suppress("UNCHECKED_CAST") + val searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + val matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + assertTrue("Incorrect search result", 
matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex"))) + + for (triggerResult in output.objectMap("trigger_results").values) { + assertEquals(2, triggerResult.objectMap("action_results").values.size) + for (alertActionResult in triggerResult.objectMap("action_results").values) { + assertEquals(actions.size, alertActionResult.values.size) + for (actionResult in alertActionResult.values) { + @Suppress("UNCHECKED_CAST") val actionOutput = (actionResult as Map>)["output"] + as Map + assertEquals("Hello ${monitor.name}", actionOutput["subject"]) + assertEquals("Hello ${monitor.name}", actionOutput["message"]) + } + } + } val alerts = searchAlertsWithFilter(monitor) assertEquals("Alert saved for test monitor", 2, alerts.size) val findings = searchFindings(monitor) assertEquals("Findings saved for test monitor", 2, findings.size) - - // as mappings of source.id & test_field are different so, both of them expands - val expectedQueries = listOf( - "nested_field.test1_test__${monitor.id}:\"12345\"" - ) - - val request = """{ - "size": 10, - "query": { - "match_all": {} - } - }""" - var httpResponse = adminClient().makeRequest( - "GET", "/${monitor.dataSources.queryIndex}/_search", - StringEntity(request, ContentType.APPLICATION_JSON) - ) - assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) - var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) - searchResponse.hits.forEach { hit -> - val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] - assertTrue(expectedQueries.contains(query)) - } + assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("5")) + assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("1")) } - fun `test execute monitor with indices having fields with same name but different field mappings`() { - val testIndex = createTestIndex( - "test1", - """"properties": { - "source": { - "properties": { - "id": { - "type":"text", - "analyzer":"whitespace" - } - } - }, - "test_field" : { - "type":"text", - "analyzer":"whitespace" - } - } - """.trimIndent() - ) + fun `test execute monitor with wildcard index that generates alerts and findings for EQUALS query operator`() { + val testIndexPrefix = "test-index-${randomAlphaOfLength(10).lowercase(Locale.ROOT)}" + val testQueryName = "wildcard-test-query" + val testIndex = createTestIndex("${testIndexPrefix}1") + val testIndex2 = createTestIndex("${testIndexPrefix}2") - val testIndex2 = createTestIndex( - "test2", - """"properties": { - "source": { - "properties": { - "id": { - "type":"text" - } - } - }, - "test_field" : { - "type":"text" - } - } - """.trimIndent() - ) + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) val testDoc = """{ - "source" : {"id" : "12345" }, - "nested_field": { "test1": "some text" }, - "test_field": "12345" + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" }""" - val docQuery = DocLevelQuery( - query = "test_field:\"12345\" AND source.id:\"12345\"", - name = "5", - fields = listOf() - ) - val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = testQueryName, fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("$testIndexPrefix*"), listOf(docQuery)) - val trigger = 
randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val trigger = randomDocumentLevelTrigger(condition = Script("query[name=$testQueryName]")) val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) assertNotNull(monitor.id) indexDoc(testIndex, "1", testDoc) - indexDoc(testIndex2, "1", testDoc) + indexDoc(testIndex2, "5", testDoc) - executeMonitor(monitor.id) + val response = executeMonitor(monitor.id) - val alerts = searchAlertsWithFilter(monitor) - assertEquals("Alert saved for test monitor", 2, alerts.size) + val output = entityAsMap(response) - val findings = searchFindings(monitor) - assertEquals("Findings saved for test monitor", 2, findings.size) + assertEquals(monitor.name, output["monitor_name"]) + @Suppress("UNCHECKED_CAST") + val searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + val matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex2"))) - // as mappings of source.id & test_field are different so, both of them expands - val expectedQueries = listOf( - "test_field_test2_${monitor.id}:\"12345\" AND source.id_test2_${monitor.id}:\"12345\"", - "test_field_test1_${monitor.id}:\"12345\" AND source.id_test1_${monitor.id}:\"12345\"" - ) + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) - val request = """{ - "size": 10, - "query": { - "match_all": {} - } - }""" - var httpResponse = adminClient().makeRequest( - "GET", "/${monitor.dataSources.queryIndex}/_search", - StringEntity(request, ContentType.APPLICATION_JSON) - ) - assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) - var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) - searchResponse.hits.forEach { hit -> - val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] - assertTrue(expectedQueries.contains(query)) - } + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) + val foundFindings = findings.filter { it.relatedDocIds.contains("1") || it.relatedDocIds.contains("5") } + assertEquals("Didn't find findings for docs 1 and 5", 2, foundFindings.size) } - fun `test execute monitor with indices having fields with same name but different field mappings in multiple indices`() { - val testIndex = createTestIndex( - "test1", - """"properties": { - "source": { - "properties": { - "device": { - "properties": { - "hwd": { - "properties": { - "id": { - "type":"text", - "analyzer":"whitespace" - } - } - } - } - } - } - }, - "test_field" : { - "type":"text" - } - } - """.trimIndent() - ) - - val testIndex2 = createTestIndex( - "test2", - """"properties": { - "test_field" : { - "type":"keyword" - } - } - """.trimIndent() - ) - - val testIndex4 = createTestIndex( - "test4", - """"properties": { - "source": { - "properties": { - "device": { - "properties": { - "hwd": { - "properties": { - "id": { - "type":"text" - } - } - } - } - } - } - }, - "test_field" : { - "type":"text" - } - } - """.trimIndent() - ) + fun `test execute monitor for bulk index findings`() { + val testIndexPrefix = "test-index-${randomAlphaOfLength(10).lowercase(Locale.ROOT)}" + val testQueryName = "wildcard-test-query" + val testIndex = 
createTestIndex("${testIndexPrefix}1") + val testIndex2 = createTestIndex("${testIndexPrefix}2") - val testDoc1 = """{ - "source" : {"device" : {"hwd" : {"id" : "12345"}} }, - "nested_field": { "test1": "some text" } - }""" - val testDoc2 = """{ - "nested_field": { "test1": "some text" }, - "test_field": "12345" + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" }""" - val docQuery1 = DocLevelQuery( - query = "test_field:\"12345\"", - name = "4", - fields = listOf() - ) - val docQuery2 = DocLevelQuery( - query = "source.device.hwd.id:\"12345\"", - name = "5", - fields = listOf() - ) - - val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery1, docQuery2)) + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = testQueryName, fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("$testIndexPrefix*"), listOf(docQuery)) - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val trigger = randomDocumentLevelTrigger(condition = Script("query[name=$testQueryName]")) val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) assertNotNull(monitor.id) - indexDoc(testIndex4, "1", testDoc1) - indexDoc(testIndex2, "1", testDoc2) - indexDoc(testIndex, "1", testDoc1) - indexDoc(testIndex, "2", testDoc2) + for (i in 0 until 9) { + indexDoc(testIndex, i.toString(), testDoc) + } + indexDoc(testIndex2, "3", testDoc) + adminClient().updateSettings("plugins.alerting.alert_findings_indexing_batch_size", 2) - executeMonitor(monitor.id) + val response = executeMonitor(monitor.id) - val alerts = searchAlertsWithFilter(monitor) - assertEquals("Alert saved for test monitor", 4, alerts.size) + val output = entityAsMap(response) - val findings = searchFindings(monitor) - assertEquals("Findings saved for test monitor", 4, findings.size) + assertEquals(monitor.name, output["monitor_name"]) + @Suppress("UNCHECKED_CAST") + val searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + val matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Correct search result", 10, matchingDocsToQuery.size) + assertTrue("Correct search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "2|$testIndex", "3|$testIndex2"))) - val request = """{ - "size": 0, - "query": { - "match_all": {} - } - }""" - val httpResponse = adminClient().makeRequest( - "GET", "/${monitor.dataSources.queryIndex}/_search", - StringEntity(request, ContentType.APPLICATION_JSON) - ) - assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 10, alerts.size) - val searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) - searchResponse.hits.totalHits?.let { assertEquals(5L, it.value) } + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 10, findings.size) + val foundFindings = + findings.filter { it.relatedDocIds.contains("1") || it.relatedDocIds.contains("2") || it.relatedDocIds.contains("3") } + assertEquals("Found findings for all docs", 4, foundFindings.size) } - fun `test no of queries generated for document-level monitor based on 
wildcard indexes`() { - val testIndex = createTestIndex("test1") + fun `test execute monitor with wildcard index that generates alerts and findings for NOT EQUALS query operator`() { + val testIndexPrefix = "test-index-${randomAlphaOfLength(10).lowercase(Locale.ROOT)}" + val testQueryName = "wildcard-test-query" + val testIndex = createTestIndex("${testIndexPrefix}1") + val testIndex2 = createTestIndex("${testIndexPrefix}2") + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) val testDoc = """{ "message" : "This is an error from IAD region", @@ -927,46 +862,38 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { "test_field" : "us-west-2" }""" - val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) - val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) + val docQuery = DocLevelQuery(query = "NOT (test_field:\"us-west-1\")", name = testQueryName, fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("$testIndexPrefix*"), listOf(docQuery)) - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val trigger = randomDocumentLevelTrigger(condition = Script("query[name=$testQueryName]")) val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) assertNotNull(monitor.id) indexDoc(testIndex, "1", testDoc) - executeMonitor(monitor.id) + indexDoc(testIndex2, "5", testDoc) - val request = """{ - "size": 0, - "query": { - "match_all": {} - } - }""" - var httpResponse = adminClient().makeRequest( - "GET", "/${monitor.dataSources.queryIndex}/_search", - StringEntity(request, ContentType.APPLICATION_JSON) - ) - assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + val response = executeMonitor(monitor.id) - var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) - searchResponse.hits.totalHits?.let { assertEquals(1L, it.value) } + val output = entityAsMap(response) - val testIndex2 = createTestIndex("test2") - indexDoc(testIndex2, "1", testDoc) - executeMonitor(monitor.id) + assertEquals(monitor.name, output["monitor_name"]) + @Suppress("UNCHECKED_CAST") + val searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + val matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex2"))) - httpResponse = adminClient().makeRequest( - "GET", "/${monitor.dataSources.queryIndex}/_search", - StringEntity(request, ContentType.APPLICATION_JSON) - ) - assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) - searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) - searchResponse.hits.totalHits?.let { assertEquals(1L, it.value) } + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) + val foundFindings = findings.filter { it.relatedDocIds.contains("1") || it.relatedDocIds.contains("5") } + assertEquals("Didn't find findings for docs 1 and 5", 2, foundFindings.size) } - fun `test execute monitor with new index added after first execution 
that generates alerts and findings from new query`() { + fun `test execute monitor with new index added after first execution that generates alerts and findings`() { val testIndex = createTestIndex("test1") val testIndex2 = createTestIndex("test2") val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) @@ -976,9 +903,8 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { "test_field" : "us-west-2" }""" - val docQuery1 = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) - val docQuery2 = DocLevelQuery(query = "test_field_new:\"us-west-2\"", name = "4", fields = listOf()) - val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery1, docQuery2)) + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) @@ -1001,14 +927,10 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { deleteIndex(ALL_FINDING_INDEX_PATTERN) deleteIndex(ALL_ALERT_INDEX_PATTERN) - val testDocNew = """{ - "message" : "This is an error from IAD region", - "test_strict_date_time" : "$testTime", - "test_field_new" : "us-west-2" - }""" - val testIndex3 = createTestIndex("test3") - indexDoc(testIndex3, "10", testDocNew) + indexDoc(testIndex3, "10", testDoc) + indexDoc(testIndex, "14", testDoc) + indexDoc(testIndex2, "51", testDoc) val response = executeMonitor(monitor.id) @@ -1018,304 +940,1490 @@ class DocumentMonitorRunnerIT : AlertingRestTestCase() { @Suppress("UNCHECKED_CAST") val searchResult = (output.objectMap("input_results")["results"] as List>).first() @Suppress("UNCHECKED_CAST") - val matchingDocsToQuery = searchResult[docQuery2.id] as List - assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) - assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("10|$testIndex3"))) + val matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 3, matchingDocsToQuery.size) + assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("14|$testIndex", "51|$testIndex2", "10|$testIndex3"))) alerts = searchAlertsWithFilter(monitor) - assertEquals("Alert saved for test monitor", 1, alerts.size) + assertEquals("Alert saved for test monitor", 3, alerts.size) findings = searchFindings(monitor) - assertEquals("Findings saved for test monitor", 1, findings.size) + assertEquals("Findings saved for test monitor", 3, findings.size) foundFindings = findings.filter { - it.relatedDocIds.contains("10") + it.relatedDocIds.contains("14") || it.relatedDocIds.contains("51") || it.relatedDocIds.contains("10") } - assertEquals("Findings saved for test monitor expected 10", 1, foundFindings.size) + assertEquals("Findings saved for test monitor expected 14, 51 and 10", 3, foundFindings.size) } - fun `test document-level monitor when alias only has write index with 0 docs`() { - // Monitor should execute, but create 0 findings. 
- val alias = createTestAlias(includeWriteIndex = true) - val aliasIndex = alias.keys.first() - val query = randomDocLevelQuery(tags = listOf()) - val input = randomDocLevelMonitorInput(indices = listOf(aliasIndex), queries = listOf(query)) - val trigger = randomDocumentLevelTrigger(condition = Script("query[id=\"${query.id}\"]")) - val monitor = createMonitor(randomDocumentLevelMonitor(enabled = false, inputs = listOf(input), triggers = listOf(trigger))) - - val response: Response - try { - response = executeMonitor(monitor.id) + fun `test execute monitor with indices having fields with same name but different data types`() { + val testIndex = createTestIndex( + "test1", + """"properties": { + "source.device.port": { "type": "long" }, + "source.device.hwd.id": { "type": "long" }, + "nested_field": { + "type": "nested", + "properties": { + "test1": { + "type": "keyword" + } + } + }, + "my_join_field": { + "type": "join", + "relations": { + "question": "answer" + } + }, + "test_field" : { "type" : "integer" } + } + """.trimIndent() + ) + var testDoc = """{ + "source" : { "device": {"port" : 12345 } }, + "nested_field": { "test1": "some text" }, + "test_field": 12345 + }""" + + val docQuery1 = DocLevelQuery( + query = "(source.device.port:12345 AND test_field:12345) OR source.device.hwd.id:12345", + name = "4", + fields = listOf() + ) + val docQuery2 = DocLevelQuery( + query = "(source.device.port:\"12345\" AND test_field:\"12345\") OR source.device.hwd.id:\"12345\"", + name = "5", + fields = listOf() + ) + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery1, docQuery2)) + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) + + indexDoc(testIndex, "1", testDoc) + executeMonitor(monitor.id) + + var alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 1, alerts.size) + + var findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 1, findings.size) + + // clear previous findings and alerts + deleteIndex(ALL_FINDING_INDEX_PATTERN) + deleteIndex(ALL_ALERT_INDEX_PATTERN) + + indexDoc(testIndex, "2", testDoc) + + // no fields expanded as only index test1 is present + val oldExpectedQueries = listOf( + "(source.device.port_test__${monitor.id}:12345 AND test_field_test__${monitor.id}:12345) OR " + + "source.device.hwd.id_test__${monitor.id}:12345", + "(source.device.port_test__${monitor.id}:\"12345\" AND test_field_test__${monitor.id}:\"12345\") " + + "OR source.device.hwd.id_test__${monitor.id}:\"12345\"" + ) + + val request = """{ + "size": 10, + "query": { + "match_all": {} + } + }""" + var httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.forEach { hit -> + val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] + assertTrue(oldExpectedQueries.contains(query)) + } + + val testIndex2 = createTestIndex( + "test2", + """ + "properties" : { + "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, + "test_field" : { "type" : "keyword" }, + "number" : { "type" : "keyword" } + } + 
""".trimIndent() + ) + testDoc = """{ + "source" : { "device": {"port" : "12345" } }, + "nested_field": { "test1": "some text" }, + "test_field": "12345" + }""" + indexDoc(testIndex2, "1", testDoc) + executeMonitor(monitor.id) + + // only fields source.device.port & test_field is expanded as they have same name but different data types + // in indices test1 & test2 + val newExpectedQueries = listOf( + "(source.device.port_test2_${monitor.id}:12345 AND test_field_test2_${monitor.id}:12345) " + + "OR source.device.hwd.id_test__${monitor.id}:12345", + "(source.device.port_test1_${monitor.id}:12345 AND test_field_test1_${monitor.id}:12345) " + + "OR source.device.hwd.id_test__${monitor.id}:12345", + "(source.device.port_test2_${monitor.id}:\"12345\" AND test_field_test2_${monitor.id}:\"12345\") " + + "OR source.device.hwd.id_test__${monitor.id}:\"12345\"", + "(source.device.port_test1_${monitor.id}:\"12345\" AND test_field_test1_${monitor.id}:\"12345\") " + + "OR source.device.hwd.id_test__${monitor.id}:\"12345\"" + ) + + alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) + + findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) + + httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.forEach { hit -> + val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] + assertTrue(oldExpectedQueries.contains(query) || newExpectedQueries.contains(query)) + } + } + + fun `test execute monitor with indices having fields with same name but with different nesting`() { + val testIndex = createTestIndex( + "test1", + """"properties": { + "nested_field": { + "type": "nested", + "properties": { + "test1": { + "type": "keyword" + } + } + } + } + """.trimIndent() + ) + + val testIndex2 = createTestIndex( + "test2", + """"properties": { + "nested_field": { + "properties": { + "test1": { + "type": "keyword" + } + } + } + } + """.trimIndent() + ) + val testDoc = """{ + "nested_field": { "test1": "12345" } + }""" + + val docQuery = DocLevelQuery( + query = "nested_field.test1:\"12345\"", + name = "5", + fields = listOf() + ) + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) + + indexDoc(testIndex, "1", testDoc) + indexDoc(testIndex2, "1", testDoc) + + executeMonitor(monitor.id) + + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) + + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) + + // as mappings of source.id & test_field are different so, both of them expands + val expectedQueries = listOf( + "nested_field.test1_test__${monitor.id}:\"12345\"" + ) + + val request = """{ + "size": 10, + "query": { + "match_all": {} + } + }""" + var httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", 
RestStatus.OK, httpResponse.restStatus()) + var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.forEach { hit -> + val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] + assertTrue(expectedQueries.contains(query)) + } + } + + fun `test execute monitor with indices having fields with same name but different field mappings`() { + val testIndex = createTestIndex( + "test1", + """"properties": { + "source": { + "properties": { + "id": { + "type":"text", + "analyzer":"whitespace" + } + } + }, + "test_field" : { + "type":"text", + "analyzer":"whitespace" + } + } + """.trimIndent() + ) + + val testIndex2 = createTestIndex( + "test2", + """"properties": { + "source": { + "properties": { + "id": { + "type":"text" + } + } + }, + "test_field" : { + "type":"text" + } + } + """.trimIndent() + ) + val testDoc = """{ + "source" : {"id" : "12345" }, + "nested_field": { "test1": "some text" }, + "test_field": "12345" + }""" + + val docQuery = DocLevelQuery( + query = "test_field:\"12345\" AND source.id:\"12345\"", + name = "5", + fields = listOf() + ) + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) + + indexDoc(testIndex, "1", testDoc) + indexDoc(testIndex2, "1", testDoc) + + executeMonitor(monitor.id) + + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) + + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) + + // as mappings of source.id & test_field are different so, both of them expands + val expectedQueries = listOf( + "test_field_test2_${monitor.id}:\"12345\" AND source.id_test2_${monitor.id}:\"12345\"", + "test_field_test1_${monitor.id}:\"12345\" AND source.id_test1_${monitor.id}:\"12345\"" + ) + + val request = """{ + "size": 10, + "query": { + "match_all": {} + } + }""" + var httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.forEach { hit -> + val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] + assertTrue(expectedQueries.contains(query)) + } + } + + fun `test execute monitor with indices having fields with same name but different field mappings in multiple indices`() { + val testIndex = createTestIndex( + "test1", + """"properties": { + "source": { + "properties": { + "device": { + "properties": { + "hwd": { + "properties": { + "id": { + "type":"text", + "analyzer":"whitespace" + } + } + } + } + } + } + }, + "test_field" : { + "type":"text" + } + } + """.trimIndent() + ) + + val testIndex2 = createTestIndex( + "test2", + """"properties": { + "test_field" : { + "type":"keyword" + } + } + """.trimIndent() + ) + + val testIndex4 = createTestIndex( + "test4", + """"properties": { + "source": { + "properties": { + "device": { + "properties": { + "hwd": { + "properties": { + "id": { + "type":"text" + } + } + } + } + } + } + }, + "test_field" : { + "type":"text" + } + } + 
""".trimIndent() + ) + + val testDoc1 = """{ + "source" : {"device" : {"hwd" : {"id" : "12345"}} }, + "nested_field": { "test1": "some text" } + }""" + val testDoc2 = """{ + "nested_field": { "test1": "some text" }, + "test_field": "12345" + }""" + + val docQuery1 = DocLevelQuery( + query = "test_field:\"12345\"", + name = "4", + fields = listOf() + ) + val docQuery2 = DocLevelQuery( + query = "source.device.hwd.id:\"12345\"", + name = "5", + fields = listOf() + ) + + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery1, docQuery2)) + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) + + indexDoc(testIndex4, "1", testDoc1) + indexDoc(testIndex2, "1", testDoc2) + indexDoc(testIndex, "1", testDoc1) + indexDoc(testIndex, "2", testDoc2) + + executeMonitor(monitor.id) + + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 4, alerts.size) + + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 4, findings.size) + + val request = """{ + "size": 0, + "query": { + "match_all": {} + } + }""" + val httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + + val searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.totalHits?.let { assertEquals(5L, it.value) } + } + + fun `test no of queries generated for document-level monitor based on wildcard indexes`() { + val testIndex = createTestIndex("test1") + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) + + indexDoc(testIndex, "1", testDoc) + executeMonitor(monitor.id) + + val request = """{ + "size": 0, + "query": { + "match_all": {} + } + }""" + var httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + + var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.totalHits?.let { assertEquals(1L, it.value) } + + val testIndex2 = createTestIndex("test2") + indexDoc(testIndex2, "1", testDoc) + executeMonitor(monitor.id) + + httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + + searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + 
searchResponse.hits.totalHits?.let { assertEquals(1L, it.value) } + } + + fun `test execute monitor with new index added after first execution that generates alerts and findings from new query`() { + val testIndex = createTestIndex("test1") + val testIndex2 = createTestIndex("test2") + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + + val docQuery1 = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docQuery2 = DocLevelQuery(query = "test_field_new:\"us-west-2\"", name = "4", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery1, docQuery2)) + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) + + indexDoc(testIndex, "1", testDoc) + indexDoc(testIndex2, "5", testDoc) + executeMonitor(monitor.id) + + var alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) + + var findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) + + var foundFindings = findings.filter { it.relatedDocIds.contains("1") || it.relatedDocIds.contains("5") } + assertEquals("Findings saved for test monitor expected 1 and 5", 2, foundFindings.size) + + // clear previous findings and alerts + deleteIndex(ALL_FINDING_INDEX_PATTERN) + deleteIndex(ALL_ALERT_INDEX_PATTERN) + + val testDocNew = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field_new" : "us-west-2" + }""" + + val testIndex3 = createTestIndex("test3") + indexDoc(testIndex3, "10", testDocNew) + + val response = executeMonitor(monitor.id) + + val output = entityAsMap(response) + + assertEquals(monitor.name, output["monitor_name"]) + @Suppress("UNCHECKED_CAST") + val searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + val matchingDocsToQuery = searchResult[docQuery2.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("10|$testIndex3"))) + + alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 1, alerts.size) + + findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 1, findings.size) + + foundFindings = findings.filter { + it.relatedDocIds.contains("10") + } + assertEquals("Findings saved for test monitor expected 10", 1, foundFindings.size) + } + + fun `test document-level monitor when alias only has write index with 0 docs`() { + // Monitor should execute, but create 0 findings. 
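+ // Note on the expected failure mode below (lazy index creation is an assumption, not asserted
+ // elsewhere in this patch): the findings history index is only created once a first finding is
+ // written, so running against an empty write index can 404 on `.opensearch-alerting-findings`;
+ // the catch block underneath accepts that 404 as the zero-findings outcome.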
+ val alias = createTestAlias(includeWriteIndex = true) + val aliasIndex = alias.keys.first() + val query = randomDocLevelQuery(tags = listOf()) + val input = randomDocLevelMonitorInput(indices = listOf(aliasIndex), queries = listOf(query)) + val trigger = randomDocumentLevelTrigger(condition = Script("query[id=\"${query.id}\"]")) + val monitor = createMonitor(randomDocumentLevelMonitor(enabled = false, inputs = listOf(input), triggers = listOf(trigger))) + + val response: Response + try { + response = executeMonitor(monitor.id) + } catch (e: ResponseException) { + assertNotNull("Expected an error message: $e", e.message) + e.message?.let { + assertTrue("Unexpected exception: $e", it.contains("""reason":"no such index [.opensearch-alerting-findings]""")) + } + assertEquals(404, e.response.statusLine.statusCode) + return + } + + val output = entityAsMap(response) + val inputResults = output.stringMap("input_results") + val errorMessage = inputResults?.get("error") + @Suppress("UNCHECKED_CAST") + val searchResult = (inputResults?.get("results") as List>).firstOrNull() + @Suppress("UNCHECKED_CAST") + val findings = searchFindings() + + assertEquals(monitor.name, output["monitor_name"]) + assertNull("Unexpected monitor execution failure: $errorMessage", errorMessage) + findings.findings.forEach { + val queryIds = it.finding.docLevelQueries.map { query -> query.id } + assertFalse("No findings should exist with queryId ${query.id}, but found: $it", queryIds.contains(query.id)) + } + } + + fun `test document-level monitor when docs exist prior to monitor creation`() { + // FIXME: Consider renaming this test case + // Only new docs should create findings. + val alias = createTestAlias(includeWriteIndex = true) + val aliasIndex = alias.keys.first() + val indices = alias[aliasIndex]?.keys?.toList() as List + val query = randomDocLevelQuery(tags = listOf()) + val input = randomDocLevelMonitorInput(indices = listOf(aliasIndex), queries = listOf(query)) + val trigger = randomDocumentLevelTrigger(condition = Script("query[id=\"${query.id}\"]")) + + val preExistingDocIds = mutableSetOf() + indices.forEach { index -> + val docId = index.hashCode().toString() + val doc = """{ "message" : "${query.query}" }""" + preExistingDocIds.add(docId) + indexDoc(index = index, id = docId, doc = doc) + } + assertEquals(indices.size, preExistingDocIds.size) + + val monitor = createMonitor(randomDocumentLevelMonitor(enabled = false, inputs = listOf(input), triggers = listOf(trigger))) + + val response = executeMonitor(monitor.id) + + val output = entityAsMap(response) + val inputResults = output.stringMap("input_results") + val errorMessage = inputResults?.get("error") + @Suppress("UNCHECKED_CAST") + val searchResult = (inputResults?.get("results") as List>).firstOrNull() + @Suppress("UNCHECKED_CAST") + val findings = searchFindings() + + assertEquals(monitor.name, output["monitor_name"]) + assertNull("Unexpected monitor execution failure: $errorMessage", errorMessage) + findings.findings.forEach { + val docIds = it.finding.relatedDocIds + assertTrue( + "Findings index should not contain a pre-existing doc, but found $it", + preExistingDocIds.intersect(docIds).isEmpty() + ) + } + } + + fun `test document-level monitor when alias indices only contain docs that match query`() { + // Only new docs should create findings. 
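+ // Assumed timeline for this test: the first executeMonitor() call below records the per-shard
+ // sequence-number baseline, so the preExistingDocIds indexed before monitor creation sit behind
+ // that baseline and only the (1..5) batch indexed afterwards should surface as findings.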
+ val alias = createTestAlias(includeWriteIndex = true) + val aliasIndex = alias.keys.first() + val indices = alias[aliasIndex]?.keys?.toList() as List + val query = randomDocLevelQuery(tags = listOf()) + val input = randomDocLevelMonitorInput(indices = listOf(aliasIndex), queries = listOf(query)) + val trigger = randomDocumentLevelTrigger(condition = Script("query[id=\"${query.id}\"]")) + + val preExistingDocIds = mutableSetOf() + indices.forEach { index -> + val docId = index.hashCode().toString() + val doc = """{ "message" : "${query.query}" }""" + preExistingDocIds.add(docId) + indexDoc(index = index, id = docId, doc = doc) + } + assertEquals(indices.size, preExistingDocIds.size) + + val monitor = createMonitor(randomDocumentLevelMonitor(enabled = false, inputs = listOf(input), triggers = listOf(trigger))) + executeMonitor(monitor.id) + + val newDocIds = mutableSetOf() + indices.forEach { index -> + (1..5).map { + val docId = "${index.hashCode()}$it" + val doc = """{ "message" : "${query.query}" }""" + newDocIds.add(docId) + indexDoc(index = index, id = docId, doc = doc) + } + } + assertEquals(indices.size * 5, newDocIds.size) + + val response = executeMonitor(monitor.id) + + val output = entityAsMap(response) + val inputResults = output.stringMap("input_results") + val errorMessage = inputResults?.get("error") + @Suppress("UNCHECKED_CAST") + val searchResult = (inputResults?.get("results") as List>).firstOrNull() + @Suppress("UNCHECKED_CAST") + val findings = searchFindings() + + assertEquals(monitor.name, output["monitor_name"]) + assertNull("Unexpected monitor execution failure: $errorMessage", errorMessage) + findings.findings.forEach { + val docIds = it.finding.relatedDocIds + assertTrue( + "Findings index should not contain a pre-existing doc, but found $it", + preExistingDocIds.intersect(docIds).isEmpty() + ) + assertTrue("Found an unexpected finding $it", newDocIds.intersect(docIds).isNotEmpty()) + } + } + + fun `test document-level monitor when alias indices contain docs that do and do not match query`() { + // Only matching docs should create findings. 
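+ // The non-matching docs below are derived from the query itself: inserting "difference" at
+ // offset 2 of query.query breaks the matched term while keeping the document shape intact,
+ // e.g. (illustrative values only) StringBuilder("test_field:\"us-west-2\"").insert(2, "difference")
+ // yields "tedifferencest_field:\"us-west-2\"", which the percolation step should not match.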
+ val alias = createTestAlias(includeWriteIndex = true) + val aliasIndex = alias.keys.first() + val indices = alias[aliasIndex]?.keys?.toList() as List + val query = randomDocLevelQuery(tags = listOf()) + val input = randomDocLevelMonitorInput(indices = listOf(aliasIndex), queries = listOf(query)) + val trigger = randomDocumentLevelTrigger(condition = Script("query[id=\"${query.id}\"]")) + + val preExistingDocIds = mutableSetOf() + indices.forEach { index -> + val docId = index.hashCode().toString() + val doc = """{ "message" : "${query.query}" }""" + preExistingDocIds.add(docId) + indexDoc(index = index, id = docId, doc = doc) + } + assertEquals(indices.size, preExistingDocIds.size) + + val monitor = createMonitor(randomDocumentLevelMonitor(enabled = false, inputs = listOf(input), triggers = listOf(trigger))) + executeMonitor(monitor.id) + + val matchingDocIds = mutableSetOf() + val nonMatchingDocIds = mutableSetOf() + indices.forEach { index -> + (1..5).map { + val matchingDocId = "${index.hashCode()}$it" + val matchingDoc = """{ "message" : "${query.query}" }""" + indexDoc(index = index, id = matchingDocId, doc = matchingDoc) + matchingDocIds.add(matchingDocId) + + val nonMatchingDocId = "${index.hashCode()}${it}2" + var nonMatchingDoc = StringBuilder(query.query).insert(2, "difference").toString() + nonMatchingDoc = """{ "message" : "$nonMatchingDoc" }""" + indexDoc(index = index, id = nonMatchingDocId, doc = nonMatchingDoc) + nonMatchingDocIds.add(nonMatchingDocId) + } + } + assertEquals(indices.size * 5, matchingDocIds.size) + + val response = executeMonitor(monitor.id) + + val output = entityAsMap(response) + val inputResults = output.stringMap("input_results") + val errorMessage = inputResults?.get("error") + @Suppress("UNCHECKED_CAST") + val searchResult = (inputResults?.get("results") as List>).firstOrNull() + @Suppress("UNCHECKED_CAST") + val findings = searchFindings() + + assertEquals(monitor.name, output["monitor_name"]) + assertNull("Unexpected monitor execution failure: $errorMessage", errorMessage) + findings.findings.forEach { + val docIds = it.finding.relatedDocIds + assertTrue( + "Findings index should not contain a pre-existing doc, but found $it", + preExistingDocIds.intersect(docIds).isEmpty() + ) + assertTrue("Found doc that doesn't match query: $it", nonMatchingDocIds.intersect(docIds).isEmpty()) + assertFalse("Found an unexpected finding $it", matchingDocIds.intersect(docIds).isNotEmpty()) + } + } + + fun `test document-level monitor when datastreams contain docs that do match query`() { + val dataStreamName = "test-datastream" + createDataStream( + dataStreamName, + """ + "properties" : { + "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, + "test_field" : { "type" : "keyword" }, + "number" : { "type" : "keyword" } + } + """.trimIndent(), + false + ) + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(dataStreamName), listOf(docQuery)) + + val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) + val monitor = createMonitor( + randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) + ) + ) + + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "@timestamp": "$testTime", + 
"message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + indexDoc(dataStreamName, "1", testDoc) + var response = executeMonitor(monitor.id) + var output = entityAsMap(response) + var searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + var matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + rolloverDatastream(dataStreamName) + indexDoc(dataStreamName, "2", testDoc) + response = executeMonitor(monitor.id) + output = entityAsMap(response) + searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + deleteDataStream(dataStreamName) + } + + fun `test document-level monitor when datastreams contain docs across read-only indices that do match query`() { + val dataStreamName = "test-datastream" + createDataStream( + dataStreamName, + """ + "properties" : { + "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, + "test_field" : { "type" : "keyword" }, + "number" : { "type" : "keyword" } + } + """.trimIndent(), + false + ) + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(dataStreamName), listOf(docQuery)) + + val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) + val monitor = createMonitor( + randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) + ) + ) + + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "@timestamp": "$testTime", + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + indexDoc(dataStreamName, "1", testDoc) + var response = executeMonitor(monitor.id) + var output = entityAsMap(response) + var searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + var matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + indexDoc(dataStreamName, "2", testDoc) + rolloverDatastream(dataStreamName) + rolloverDatastream(dataStreamName) + indexDoc(dataStreamName, "4", testDoc) + rolloverDatastream(dataStreamName) + response = executeMonitor(monitor.id) + output = entityAsMap(response) + searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + + indexDoc(dataStreamName, "5", testDoc) + indexDoc(dataStreamName, "6", testDoc) + response = executeMonitor(monitor.id) + output = entityAsMap(response) + searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + deleteDataStream(dataStreamName) + } + + fun `test document-level monitor when index alias contain docs that do match 
query`() { + val aliasName = "test-alias" + createIndexAlias( + aliasName, + """ + "properties" : { + "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, + "test_field" : { "type" : "keyword" }, + "number" : { "type" : "keyword" } + } + """.trimIndent() + ) + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("$aliasName"), listOf(docQuery)) + + val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) + val monitor = createMonitor( + randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) + ) + ) + + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "@timestamp": "$testTime", + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + indexDoc(aliasName, "1", testDoc) + var response = executeMonitor(monitor.id) + var output = entityAsMap(response) + var searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + var matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + rolloverDatastream(aliasName) + indexDoc(aliasName, "2", testDoc) + response = executeMonitor(monitor.id) + output = entityAsMap(response) + searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + deleteIndexAlias(aliasName) + } + + fun `test document-level monitor when multiple datastreams contain docs across read-only indices that do match query`() { + val dataStreamName1 = "test-datastream1" + createDataStream( + dataStreamName1, + """ + "properties" : { + "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, + "test_field" : { "type" : "keyword" }, + "number" : { "type" : "keyword" } + } + """.trimIndent(), + false + ) + val dataStreamName2 = "test-datastream2" + createDataStream( + dataStreamName2, + """ + "properties" : { + "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, + "test_field" : { "type" : "keyword" }, + "number" : { "type" : "keyword" } + } + """.trimIndent(), + false + ) + + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "@timestamp": "$testTime", + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + indexDoc(dataStreamName2, "-1", testDoc) + rolloverDatastream(dataStreamName2) + indexDoc(dataStreamName2, "0", testDoc) + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("test-datastream*"), listOf(docQuery)) + + val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) + val monitor = createMonitor( + randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) + ) + ) + 
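+ // Docs "-1" and "0" above were indexed into test-datastream2 before this monitor existed, so
+ // the first run below should match only doc "1" from each datastream (2 hits); later runs are
+ // expected to skip the rolled-over read-only backing indices instead of re-counting old matches.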
+ indexDoc(dataStreamName1, "1", testDoc) + indexDoc(dataStreamName2, "1", testDoc) + var response = executeMonitor(monitor.id) + var output = entityAsMap(response) + var searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + var matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + + indexDoc(dataStreamName1, "2", testDoc) + indexDoc(dataStreamName2, "2", testDoc) + rolloverDatastream(dataStreamName1) + rolloverDatastream(dataStreamName1) + rolloverDatastream(dataStreamName2) + indexDoc(dataStreamName1, "4", testDoc) + indexDoc(dataStreamName2, "4", testDoc) + rolloverDatastream(dataStreamName1) + response = executeMonitor(monitor.id) + output = entityAsMap(response) + searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 4, matchingDocsToQuery.size) + + indexDoc(dataStreamName1, "5", testDoc) + indexDoc(dataStreamName1, "6", testDoc) + indexDoc(dataStreamName2, "5", testDoc) + indexDoc(dataStreamName2, "6", testDoc) + response = executeMonitor(monitor.id) + output = entityAsMap(response) + searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 4, matchingDocsToQuery.size) + deleteDataStream(dataStreamName1) + deleteDataStream(dataStreamName2) + } + + fun `test document-level monitor ignoring old read-only indices for datastreams`() { + val dataStreamName = "test-datastream" + createDataStream( + dataStreamName, + """ + "properties" : { + "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, + "test_field" : { "type" : "keyword" }, + "number" : { "type" : "keyword" } + } + """.trimIndent(), + false + ) + + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "@timestamp": "$testTime", + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + indexDoc(dataStreamName, "-1", testDoc) + rolloverDatastream(dataStreamName) + indexDoc(dataStreamName, "0", testDoc) + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(dataStreamName), listOf(docQuery)) + + val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) + val monitor = createMonitor( + randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) + ) + ) + + indexDoc(dataStreamName, "1", testDoc) + var response = executeMonitor(monitor.id) + var output = entityAsMap(response) + var searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + var matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + rolloverDatastream(dataStreamName) + indexDoc(dataStreamName, "2", testDoc) + response = executeMonitor(monitor.id) + output = entityAsMap(response) + searchResult = (output.objectMap("input_results")["results"] as List>).first() + 
@Suppress("UNCHECKED_CAST") + matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + deleteDataStream(dataStreamName) + } + + fun `test execute monitor with non-null data sources`() { + + val testIndex = createTestIndex() + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) + + val alertCategories = AlertCategory.values() + val actionExecutionScope = PerAlertActionScope( + actionableAlerts = (1..randomInt(alertCategories.size)).map { alertCategories[it - 1] }.toSet() + ) + val actionExecutionPolicy = ActionExecutionPolicy(actionExecutionScope) + val actions = (0..randomInt(10)).map { + randomActionWithPolicy( + template = randomTemplateScript("Hello {{ctx.monitor.name}}"), + destinationId = createDestination().id, + actionExecutionPolicy = actionExecutionPolicy + ) + } + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = actions) + try { + createMonitor( + randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(trigger), + dataSources = DataSources( + findingsIndex = "custom_findings_index", + alertsIndex = "custom_alerts_index", + ) + ) + ) + fail("Expected create monitor to fail") } catch (e: ResponseException) { - assertNotNull("Expected an error message: $e", e.message) - e.message?.let { - assertTrue("Unexpected exception: $e", it.contains("""reason":"no such index [.opensearch-alerting-findings]""")) - } - assertEquals(404, e.response.statusLine.statusCode) - return + assertTrue(e.message!!.contains("illegal_argument_exception")) } + } + + fun `test execute monitor with indices removed after first run`() { + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + + val index1 = createTestIndex() + val index2 = createTestIndex() + val index4 = createTestIndex() + val index5 = createTestIndex() - val output = entityAsMap(response) - val inputResults = output.stringMap("input_results") - val errorMessage = inputResults?.get("error") - @Suppress("UNCHECKED_CAST") - val searchResult = (inputResults?.get("results") as List>).firstOrNull() - @Suppress("UNCHECKED_CAST") - val findings = searchFindings() + val docQuery = DocLevelQuery(query = "\"us-west-2\"", name = "3", fields = listOf()) + var docLevelInput = DocLevelMonitorInput("description", listOf(index1, index2, index4, index5), listOf(docQuery)) + + val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) + val monitor = createMonitor( + randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) + ) + ) + + indexDoc(index1, "1", testDoc) + indexDoc(index2, "1", testDoc) + indexDoc(index4, "1", testDoc) + indexDoc(index5, "1", testDoc) + + var response = executeMonitor(monitor.id) + var output = entityAsMap(response) assertEquals(monitor.name, 
output["monitor_name"]) - assertNull("Unexpected monitor execution failure: $errorMessage", errorMessage) - findings.findings.forEach { - val queryIds = it.finding.docLevelQueries.map { query -> query.id } - assertFalse("No findings should exist with queryId ${query.id}, but found: $it", queryIds.contains(query.id)) - } + + assertEquals(1, output.objectMap("trigger_results").values.size) + deleteIndex(index1) + deleteIndex(index2) + + indexDoc(index4, "2", testDoc) + response = executeMonitor(monitor.id) + + output = entityAsMap(response) + assertEquals(1, output.objectMap("trigger_results").values.size) } - fun `test document-level monitor when docs exist prior to monitor creation`() { - // FIXME: Consider renaming this test case - // Only new docs should create findings. - val alias = createTestAlias(includeWriteIndex = true) - val aliasIndex = alias.keys.first() - val indices = alias[aliasIndex]?.keys?.toList() as List - val query = randomDocLevelQuery(tags = listOf()) - val input = randomDocLevelMonitorInput(indices = listOf(aliasIndex), queries = listOf(query)) - val trigger = randomDocumentLevelTrigger(condition = Script("query[id=\"${query.id}\"]")) + fun `test execute monitor generates alerts and findings with NOT EQUALS query and EXISTS query`() { + val testIndex = createTestIndex() + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" - val preExistingDocIds = mutableSetOf() - indices.forEach { index -> - val docId = index.hashCode().toString() - val doc = """{ "message" : "${query.query}" }""" - preExistingDocIds.add(docId) - indexDoc(index = index, id = docId, doc = doc) - } - assertEquals(indices.size, preExistingDocIds.size) + val query = "NOT test_field: \"us-east-1\" AND _exists_: test_field" + val docQuery = DocLevelQuery(query = query, name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) - val monitor = createMonitor(randomDocumentLevelMonitor(enabled = false, inputs = listOf(input), triggers = listOf(trigger))) + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) + + indexDoc(testIndex, "1", testDoc) + indexDoc(testIndex, "5", testDoc) val response = executeMonitor(monitor.id) val output = entityAsMap(response) - val inputResults = output.stringMap("input_results") - val errorMessage = inputResults?.get("error") + + assertEquals(monitor.name, output["monitor_name"]) @Suppress("UNCHECKED_CAST") - val searchResult = (inputResults?.get("results") as List>).firstOrNull() + val searchResult = (output.objectMap("input_results")["results"] as List>).first() @Suppress("UNCHECKED_CAST") - val findings = searchFindings() + val matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex"))) - assertEquals(monitor.name, output["monitor_name"]) - assertNull("Unexpected monitor execution failure: $errorMessage", errorMessage) - findings.findings.forEach { - val docIds = it.finding.relatedDocIds - assertTrue( - "Findings index should not contain a pre-existing doc, but found $it", - 
preExistingDocIds.intersect(docIds).isEmpty() + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) + + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) + assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("1") || findings[0].relatedDocIds.contains("5")) + assertTrue("Findings saved for test monitor", findings[1].relatedDocIds.contains("1") || findings[0].relatedDocIds.contains("5")) + } + + fun `test document-level monitor when index alias contain docs that do match a NOT EQUALS query and EXISTS query`() { + val aliasName = "test-alias" + createIndexAlias( + aliasName, + """ + "properties" : { + "test_strict_date_time" : { "type" : "date", "format" : "strict_date_time" }, + "test_field" : { "type" : "keyword" }, + "number" : { "type" : "keyword" } + } + """.trimIndent() + ) + + val docQuery = DocLevelQuery(query = "NOT test_field:\"us-east-1\" AND _exists_: test_field", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("$aliasName"), listOf(docQuery)) + + val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) + val monitor = createMonitor( + randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) ) - } + ) + + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "@timestamp": "$testTime", + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + indexDoc(aliasName, "1", testDoc) + var response = executeMonitor(monitor.id) + var output = entityAsMap(response) + var searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + var matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + rolloverDatastream(aliasName) + indexDoc(aliasName, "2", testDoc) + response = executeMonitor(monitor.id) + output = entityAsMap(response) + searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 1, matchingDocsToQuery.size) + + deleteIndexAlias(aliasName) } - fun `test document-level monitor when alias indices only contain docs that match query`() { - // Only new docs should create findings. 
- val alias = createTestAlias(includeWriteIndex = true) - val aliasIndex = alias.keys.first() - val indices = alias[aliasIndex]?.keys?.toList() as List - val query = randomDocLevelQuery(tags = listOf()) - val input = randomDocLevelMonitorInput(indices = listOf(aliasIndex), queries = listOf(query)) - val trigger = randomDocumentLevelTrigger(condition = Script("query[id=\"${query.id}\"]")) + fun `test execute monitor with wildcard index that generates alerts and findings for NOT EQUALS and EXISTS query operator`() { + val testIndexPrefix = "test-index-${randomAlphaOfLength(10).lowercase(Locale.ROOT)}" + val testQueryName = "wildcard-test-query" + val testIndex = createTestIndex("${testIndexPrefix}1") + val testIndex2 = createTestIndex("${testIndexPrefix}2") - val preExistingDocIds = mutableSetOf() - indices.forEach { index -> - val docId = index.hashCode().toString() - val doc = """{ "message" : "${query.query}" }""" - preExistingDocIds.add(docId) - indexDoc(index = index, id = docId, doc = doc) - } - assertEquals(indices.size, preExistingDocIds.size) + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" - val monitor = createMonitor(randomDocumentLevelMonitor(enabled = false, inputs = listOf(input), triggers = listOf(trigger))) - executeMonitor(monitor.id) + val query = "NOT test_field:\"us-west-1\" AND _exists_: test_field" + val docQuery = DocLevelQuery(query = query, name = testQueryName, fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf("$testIndexPrefix*"), listOf(docQuery)) - val newDocIds = mutableSetOf() - indices.forEach { index -> - (1..5).map { - val docId = "${index.hashCode()}$it" - val doc = """{ "message" : "${query.query}" }""" - newDocIds.add(docId) - indexDoc(index = index, id = docId, doc = doc) - } - } - assertEquals(indices.size * 5, newDocIds.size) + val trigger = randomDocumentLevelTrigger(condition = Script("query[name=$testQueryName]")) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) + + indexDoc(testIndex, "1", testDoc) + indexDoc(testIndex2, "5", testDoc) + + val response = executeMonitor(monitor.id) + + val output = entityAsMap(response) + + assertEquals(monitor.name, output["monitor_name"]) + @Suppress("UNCHECKED_CAST") + val searchResult = (output.objectMap("input_results")["results"] as List>).first() + @Suppress("UNCHECKED_CAST") + val matchingDocsToQuery = searchResult[docQuery.id] as List + assertEquals("Incorrect search result", 2, matchingDocsToQuery.size) + assertTrue("Incorrect search result", matchingDocsToQuery.containsAll(listOf("1|$testIndex", "5|$testIndex2"))) + + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) + + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) + val foundFindings = findings.filter { it.relatedDocIds.contains("1") || it.relatedDocIds.contains("5") } + assertEquals("Didn't find findings for docs 1 and 5", 2, foundFindings.size) + } + + fun `test execute monitor with indices having fields with same name different field mappings in multiple indices with NOT EQUALS`() { + val testIndex = createTestIndex( + "test1", + """"properties": { + "source": { + "properties": { + "device": { + 
"properties": { + "hwd": { + "properties": { + "id": { + "type":"text", + "analyzer":"whitespace" + } + } + } + } + } + } + }, + "test_field" : { + "type":"text" + } + } + """.trimIndent() + ) + + val testIndex2 = createTestIndex( + "test2", + """"properties": { + "test_field" : { + "type":"keyword" + } + } + """.trimIndent() + ) + + val testIndex4 = createTestIndex( + "test4", + """"properties": { + "source": { + "properties": { + "device": { + "properties": { + "hwd": { + "properties": { + "id": { + "type":"text" + } + } + } + } + } + } + }, + "test_field" : { + "type":"text" + } + } + """.trimIndent() + ) - val response = executeMonitor(monitor.id) + val testDoc1 = """{ + "source" : {"device" : {"hwd" : {"id" : "123456"}} }, + "nested_field": { "test1": "some text" } + }""" + val testDoc2 = """{ + "nested_field": { "test1": "some text" }, + "test_field": "123456" + }""" - val output = entityAsMap(response) - val inputResults = output.stringMap("input_results") - val errorMessage = inputResults?.get("error") - @Suppress("UNCHECKED_CAST") - val searchResult = (inputResults?.get("results") as List>).firstOrNull() - @Suppress("UNCHECKED_CAST") - val findings = searchFindings() + val docQuery1 = DocLevelQuery( + query = "NOT test_field:\"12345\" AND _exists_: test_field", + name = "4", + fields = listOf() + ) + val docQuery2 = DocLevelQuery( + query = "NOT source.device.hwd.id:\"12345\" AND _exists_: source.device.hwd.id", + name = "5", + fields = listOf() + ) - assertEquals(monitor.name, output["monitor_name"]) - assertNull("Unexpected monitor execution failure: $errorMessage", errorMessage) - findings.findings.forEach { - val docIds = it.finding.relatedDocIds - assertTrue( - "Findings index should not contain a pre-existing doc, but found $it", - preExistingDocIds.intersect(docIds).isEmpty() - ) - assertTrue("Found an unexpected finding $it", newDocIds.intersect(docIds).isNotEmpty()) - } - } + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery1, docQuery2)) - fun `test document-level monitor when alias indices contain docs that do and do not match query`() { - // Only matching docs should create findings. 
- val alias = createTestAlias(includeWriteIndex = true) - val aliasIndex = alias.keys.first() - val indices = alias[aliasIndex]?.keys?.toList() as List - val query = randomDocLevelQuery(tags = listOf()) - val input = randomDocLevelMonitorInput(indices = listOf(aliasIndex), queries = listOf(query)) - val trigger = randomDocumentLevelTrigger(condition = Script("query[id=\"${query.id}\"]")) + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) - val preExistingDocIds = mutableSetOf() - indices.forEach { index -> - val docId = index.hashCode().toString() - val doc = """{ "message" : "${query.query}" }""" - preExistingDocIds.add(docId) - indexDoc(index = index, id = docId, doc = doc) - } - assertEquals(indices.size, preExistingDocIds.size) + indexDoc(testIndex4, "1", testDoc1) + indexDoc(testIndex2, "1", testDoc2) + indexDoc(testIndex, "1", testDoc1) + indexDoc(testIndex, "2", testDoc2) - val monitor = createMonitor(randomDocumentLevelMonitor(enabled = false, inputs = listOf(input), triggers = listOf(trigger))) executeMonitor(monitor.id) - val matchingDocIds = mutableSetOf() - val nonMatchingDocIds = mutableSetOf() - indices.forEach { index -> - (1..5).map { - val matchingDocId = "${index.hashCode()}$it" - val matchingDoc = """{ "message" : "${query.query}" }""" - indexDoc(index = index, id = matchingDocId, doc = matchingDoc) - matchingDocIds.add(matchingDocId) - - val nonMatchingDocId = "${index.hashCode()}${it}2" - var nonMatchingDoc = StringBuilder(query.query).insert(2, "difference").toString() - nonMatchingDoc = """{ "message" : "$nonMatchingDoc" }""" - indexDoc(index = index, id = nonMatchingDocId, doc = nonMatchingDoc) - nonMatchingDocIds.add(nonMatchingDocId) - } - } - assertEquals(indices.size * 5, matchingDocIds.size) + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 4, alerts.size) - val response = executeMonitor(monitor.id) + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 4, findings.size) - val output = entityAsMap(response) - val inputResults = output.stringMap("input_results") - val errorMessage = inputResults?.get("error") - @Suppress("UNCHECKED_CAST") - val searchResult = (inputResults?.get("results") as List>).firstOrNull() - @Suppress("UNCHECKED_CAST") - val findings = searchFindings() + val request = """{ + "size": 0, + "query": { + "match_all": {} + } + }""" + val httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) - assertEquals(monitor.name, output["monitor_name"]) - assertNull("Unexpected monitor execution failure: $errorMessage", errorMessage) - findings.findings.forEach { - val docIds = it.finding.relatedDocIds - assertTrue( - "Findings index should not contain a pre-existing doc, but found $it", - preExistingDocIds.intersect(docIds).isEmpty() - ) - assertTrue("Found doc that doesn't match query: $it", nonMatchingDocIds.intersect(docIds).isEmpty()) - assertFalse("Found an unexpected finding $it", matchingDocIds.intersect(docIds).isNotEmpty()) - } + val searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.totalHits?.let { assertEquals(5L, it.value) } } - fun `test execute 
monitor with non-null data sources`() { - - val testIndex = createTestIndex() - val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) - val testDoc = """{ - "message" : "This is an error from IAD region", - "test_strict_date_time" : "$testTime", - "test_field" : "us-west-2" - }""" - - val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) - val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) - - val alertCategories = AlertCategory.values() - val actionExecutionScope = PerAlertActionScope( - actionableAlerts = (1..randomInt(alertCategories.size)).map { alertCategories[it - 1] }.toSet() + fun `test execute monitor with indices having fields with same name but different field mappings with NOT EQUALS`() { + val testIndex = createTestIndex( + "test1", + """"properties": { + "source": { + "properties": { + "id": { + "type":"text", + "analyzer":"whitespace" + } + } + }, + "test_field" : { + "type":"text", + "analyzer":"whitespace" + } + } + """.trimIndent() ) - val actionExecutionPolicy = ActionExecutionPolicy(actionExecutionScope) - val actions = (0..randomInt(10)).map { - randomActionWithPolicy( - template = randomTemplateScript("Hello {{ctx.monitor.name}}"), - destinationId = createDestination().id, - actionExecutionPolicy = actionExecutionPolicy - ) - } - - val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = actions) - try { - createMonitor( - randomDocumentLevelMonitor( - inputs = listOf(docLevelInput), - triggers = listOf(trigger), - dataSources = DataSources( - findingsIndex = "custom_findings_index", - alertsIndex = "custom_alerts_index", - ) - ) - ) - fail("Expected create monitor to fail") - } catch (e: ResponseException) { - assertTrue(e.message!!.contains("illegal_argument_exception")) - } - } - fun `test execute monitor with indices removed after first run`() { - val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + val testIndex2 = createTestIndex( + "test2", + """"properties": { + "source": { + "properties": { + "id": { + "type":"text" + } + } + }, + "test_field" : { + "type":"text" + } + } + """.trimIndent() + ) val testDoc = """{ - "message" : "This is an error from IAD region", - "test_strict_date_time" : "$testTime", - "test_field" : "us-west-2" + "source" : {"id" : "12345" }, + "nested_field": { "test1": "some text" }, + "test_field": "12345" }""" - val index1 = createTestIndex() - val index2 = createTestIndex() - val index4 = createTestIndex() - val index5 = createTestIndex() - - val docQuery = DocLevelQuery(query = "\"us-west-2\"", name = "3", fields = listOf()) - var docLevelInput = DocLevelMonitorInput("description", listOf(index1, index2, index4, index5), listOf(docQuery)) - - val action = randomAction(template = randomTemplateScript("Hello {{ctx.monitor.name}}"), destinationId = createDestination().id) - val monitor = createMonitor( - randomDocumentLevelMonitor( - inputs = listOf(docLevelInput), - triggers = listOf(randomDocumentLevelTrigger(condition = ALWAYS_RUN, actions = listOf(action))) - ) + val docQuery = DocLevelQuery( + query = "(NOT test_field:\"123456\" AND _exists_:test_field) AND source.id:\"12345\"", + name = "5", + fields = listOf() ) + val docLevelInput = DocLevelMonitorInput("description", listOf("test*"), listOf(docQuery)) - indexDoc(index1, "1", testDoc) - indexDoc(index2, "1", testDoc) - indexDoc(index4, "1", testDoc) - indexDoc(index5, "1", testDoc) 
+ val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + assertNotNull(monitor.id) - var response = executeMonitor(monitor.id) + indexDoc(testIndex, "1", testDoc) + indexDoc(testIndex2, "1", testDoc) - var output = entityAsMap(response) - assertEquals(monitor.name, output["monitor_name"]) + executeMonitor(monitor.id) - assertEquals(1, output.objectMap("trigger_results").values.size) - deleteIndex(index1) - deleteIndex(index2) + val alerts = searchAlertsWithFilter(monitor) + assertEquals("Alert saved for test monitor", 2, alerts.size) - indexDoc(index4, "1", testDoc) - response = executeMonitor(monitor.id) + val findings = searchFindings(monitor) + assertEquals("Findings saved for test monitor", 2, findings.size) - output = entityAsMap(response) - assertEquals(1, output.objectMap("trigger_results").values.size) + // since the mappings of source.id & test_field differ between the two indices, both queries expand into per-index variants + val expectedQueries = listOf( + "(NOT test_field_test2_${monitor.id}:\"123456\" AND _exists_:test_field_test2_${monitor.id}) " + + "AND source.id_test2_${monitor.id}:\"12345\"", + "(NOT test_field_test1_${monitor.id}:\"123456\" AND _exists_:test_field_test1_${monitor.id}) " + + "AND source.id_test1_${monitor.id}:\"12345\"" + ) + + val request = """{ + "size": 10, + "query": { + "match_all": {} + } + }""" + var httpResponse = adminClient().makeRequest( + "GET", "/${monitor.dataSources.queryIndex}/_search", + StringEntity(request, ContentType.APPLICATION_JSON) + ) + assertEquals("Search failed", RestStatus.OK, httpResponse.restStatus()) + var searchResponse = SearchResponse.fromXContent(createParser(JsonXContent.jsonXContent, httpResponse.entity.content)) + searchResponse.hits.forEach { hit -> + val query = ((hit.sourceAsMap["query"] as Map)["query_string"] as Map)["query"] + assertTrue(expectedQueries.contains(query)) + } } @Suppress("UNCHECKED_CAST") diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/MonitorDataSourcesIT.kt b/alerting/src/test/kotlin/org/opensearch/alerting/MonitorDataSourcesIT.kt index ee0be50ac..963d659c3 100644 --- a/alerting/src/test/kotlin/org/opensearch/alerting/MonitorDataSourcesIT.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/MonitorDataSourcesIT.kt @@ -24,8 +24,6 @@ import org.opensearch.action.fieldcaps.FieldCapabilitiesRequest import org.opensearch.action.index.IndexRequest import org.opensearch.action.search.SearchRequest import org.opensearch.action.support.WriteRequest -import org.opensearch.alerting.action.SearchMonitorAction -import org.opensearch.alerting.action.SearchMonitorRequest import org.opensearch.alerting.alerts.AlertIndices import org.opensearch.alerting.core.ScheduledJobIndices import org.opensearch.alerting.model.DocumentLevelTriggerRunResult @@ -47,6 +45,7 @@ import org.opensearch.commons.alerting.action.DeleteMonitorRequest import org.opensearch.commons.alerting.action.GetAlertsRequest import org.opensearch.commons.alerting.action.GetAlertsResponse import org.opensearch.commons.alerting.action.IndexMonitorResponse +import org.opensearch.commons.alerting.action.SearchMonitorRequest import org.opensearch.commons.alerting.aggregation.bucketselectorext.BucketSelectorExtAggregationBuilder import org.opensearch.commons.alerting.model.Alert import org.opensearch.commons.alerting.model.ChainedAlertTrigger @@ -367,6 +366,204 @@ class MonitorDataSourcesIT : AlertingSingleNodeTestCase() { assertEquals("Didn't match 
query", 1, findings[0].docLevelQueries.size) } + fun `test all fields fetched and submitted to percolate query when one of the queries doesn't have queryFieldNames`() { + // doesn't have query field names so even if other queries pass the wrong fields to query, findings will get generated on matching docs + val docQuery1 = DocLevelQuery( + query = "source.ip.v6.v1:12345", + name = "3", + fields = listOf() + ) + val docQuery2 = DocLevelQuery( + query = "source.ip.v6.v2:16645", + name = "4", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1") + ) + val docQuery3 = DocLevelQuery( + query = "source.ip.v4.v0:120", + name = "5", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1") + ) + val docQuery4 = + DocLevelQuery( + query = "alias.some.fff:\"us-west-2\"", + name = "6", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1") + ) + val docQuery5 = DocLevelQuery( + query = "message:\"This is an error from IAD region\"", + name = "7", + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1"), + fields = listOf() + ) + val docQuery6 = + DocLevelQuery( + query = "type.subtype:\"some subtype\"", + name = "8", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1") + ) + val docQuery7 = + DocLevelQuery( + query = "supertype.type:\"some type\"", + name = "9", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1") + ) + val docLevelInput = DocLevelMonitorInput( + "description", listOf(index), listOf(docQuery1, docQuery2, docQuery3, docQuery4, docQuery5, docQuery6, docQuery7) + ) + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val customFindingsIndex = "custom_findings_index" + val customFindingsIndexPattern = "custom_findings_index-1" + val customQueryIndex = "custom_alerts_index" + var monitor = randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(trigger), + dataSources = DataSources( + queryIndex = customQueryIndex, + findingsIndex = customFindingsIndex, + findingsIndexPattern = customFindingsIndexPattern + ) + ) + val monitorResponse = createMonitor(monitor) + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + // Trying to test here few different "nesting" situations and "wierd" characters + val testDoc = """{ + "message" : "This is an error from IAD region", + "source.ip.v6.v1" : 12345, + "source.ip.v6.v2" : 16645, + "source.ip.v4.v0" : 120, + "test_bad_char" : "\u0000", + "test_strict_date_time" : "$testTime", + "test_field.some_other_field" : "us-west-2", + "type.subtype" : "some subtype", + "supertype.type" : "some type" + }""" + indexDoc(index, "1", testDoc) + client().admin().indices().putMapping( + PutMappingRequest(index).source("alias.some.fff", "type=alias,path=test_field.some_other_field") + ) + assertFalse(monitorResponse?.id.isNullOrEmpty()) + monitor = monitorResponse!!.monitor + val id = monitorResponse.id + val executeMonitorResponse = executeMonitor(monitor, id, false) + Assert.assertEquals(executeMonitorResponse!!.monitorRunResult.monitorName, monitor.name) + Assert.assertEquals(executeMonitorResponse.monitorRunResult.triggerResults.size, 1) + searchAlerts(id) + val table = Table("asc", "id", null, 1, 0, "") + var getAlertsResponse = client() + .execute(AlertingActions.GET_ALERTS_ACTION_TYPE, GetAlertsRequest(table, "ALL", "ALL", null, null)) + .get() + Assert.assertTrue(getAlertsResponse != null) + 
+ Assert.assertTrue(getAlertsResponse.alerts.size == 1) + val findings = searchFindings(id, customFindingsIndex) + assertEquals("Findings saved for test monitor", 1, findings.size) + assertTrue("Findings saved for test monitor", findings[0].relatedDocIds.contains("1")) + assertEquals("Didn't match all 7 queries", 7, findings[0].docLevelQueries.size) + } + + fun `test percolate query failure when queryFieldNames has alias`() { + // docQuery1's queryFieldNames includes an alias field; the alias cannot be fetched from _source, so percolation fails for the matching doc + val docQuery1 = DocLevelQuery( + query = "source.ip.v6.v1:12345", + name = "3", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1") + ) + val docQuery2 = DocLevelQuery( + query = "source.ip.v6.v2:16645", + name = "4", + fields = listOf(), + queryFieldNames = listOf("source.ip.v6.v2") + ) + val docQuery3 = DocLevelQuery( + query = "source.ip.v4.v0:120", + name = "5", + fields = listOf(), + queryFieldNames = listOf("source.ip.v6.v4") + ) + val docQuery4 = + DocLevelQuery( + query = "alias.some.fff:\"us-west-2\"", + name = "6", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff") + ) + val docQuery5 = DocLevelQuery( + query = "message:\"This is an error from IAD region\"", + name = "7", + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1"), + fields = listOf() + ) + val docQuery6 = + DocLevelQuery( + query = "type.subtype:\"some subtype\"", + name = "8", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1") + ) + val docQuery7 = + DocLevelQuery( + query = "supertype.type:\"some type\"", + name = "9", + fields = listOf(), + queryFieldNames = listOf("alias.some.fff", "source.ip.v6.v1") + ) + val docLevelInput = DocLevelMonitorInput( + "description", listOf(index), listOf(docQuery1, docQuery2, docQuery3, docQuery4, docQuery5, docQuery6, docQuery7) + ) + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val customFindingsIndex = "custom_findings_index" + val customFindingsIndexPattern = "custom_findings_index-1" + val customQueryIndex = "custom_alerts_index" + var monitor = randomDocumentLevelMonitor( + inputs = listOf(docLevelInput), + triggers = listOf(trigger), + dataSources = DataSources( + queryIndex = customQueryIndex, + findingsIndex = customFindingsIndex, + findingsIndexPattern = customFindingsIndexPattern + ) + ) + val monitorResponse = createMonitor(monitor) + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(MILLIS)) + // Testing a few different "nesting" situations and "weird" characters + val testDoc = """{ + "message" : "This is an error from IAD region", + "source.ip.v6.v1" : 12345, + "source.ip.v6.v2" : 16645, + "source.ip.v4.v0" : 120, + "test_bad_char" : "\u0000", + "test_strict_date_time" : "$testTime", + "test_field.some_other_field" : "us-west-2", + "type.subtype" : "some subtype", + "supertype.type" : "some type" + }""" + indexDoc(index, "1", testDoc) + client().admin().indices().putMapping( + PutMappingRequest(index).source("alias.some.fff", "type=alias,path=test_field.some_other_field") + ) + assertFalse(monitorResponse?.id.isNullOrEmpty()) + monitor = monitorResponse!!.monitor + val id = monitorResponse.id + val executeMonitorResponse = executeMonitor(monitor, id, false) + Assert.assertEquals(executeMonitorResponse!!.monitorRunResult.monitorName, monitor.name) + Assert.assertEquals(executeMonitorResponse.monitorRunResult.triggerResults.size, 0)
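+ // no trigger results: the alias in queryFieldNames cannot be resolved from _source, so percolation fails and an error alert is raised instead of findings + 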
searchAlerts(id) + val table = Table("asc", "id", null, 1, 0, "") + var getAlertsResponse = client() + .execute(AlertingActions.GET_ALERTS_ACTION_TYPE, GetAlertsRequest(table, "ALL", "ALL", null, null)) + .get() + Assert.assertTrue(getAlertsResponse != null) + Assert.assertTrue(getAlertsResponse.alerts.size == 1) + Assert.assertTrue(getAlertsResponse.alerts[0].state.toString().equals(Alert.State.ERROR.toString())) + val findings = searchFindings(id, customFindingsIndex) + assertEquals("Findings saved for test monitor", 0, findings.size) + } + fun `test execute monitor with custom query index`() { val q1 = DocLevelQuery(query = "source.ip.v6.v1:12345", name = "3", fields = listOf()) val q2 = DocLevelQuery(query = "source.ip.v6.v2:16645", name = "4", fields = listOf()) @@ -1447,12 +1644,12 @@ class MonitorDataSourcesIT : AlertingSingleNodeTestCase() { Assert.assertTrue(getAlertsResponse.alerts.size == 1) val searchRequest = SearchRequest(SCHEDULED_JOBS_INDEX) var searchMonitorResponse = - client().execute(SearchMonitorAction.INSTANCE, SearchMonitorRequest(searchRequest)) + client().execute(AlertingActions.SEARCH_MONITORS_ACTION_TYPE, SearchMonitorRequest(searchRequest)) .get() Assert.assertEquals(searchMonitorResponse.hits.hits.size, 0) searchRequest.source().query(MatchQueryBuilder("monitor.owner", "security_analytics_plugin")) searchMonitorResponse = - client().execute(SearchMonitorAction.INSTANCE, SearchMonitorRequest(searchRequest)) + client().execute(AlertingActions.SEARCH_MONITORS_ACTION_TYPE, SearchMonitorRequest(searchRequest)) .get() Assert.assertEquals(searchMonitorResponse.hits.hits.size, 1) } diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/MonitorRunnerServiceIT.kt b/alerting/src/test/kotlin/org/opensearch/alerting/MonitorRunnerServiceIT.kt index 72b7c0423..ff79afffb 100644 --- a/alerting/src/test/kotlin/org/opensearch/alerting/MonitorRunnerServiceIT.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/MonitorRunnerServiceIT.kt @@ -969,7 +969,7 @@ class MonitorRunnerServiceIT : AlertingRestTestCase() { // GIVEN val indices = (1..5).map { createTestIndex() }.toTypedArray() val pathParams = indices.joinToString(",") - val path = "/_cluster/health/" + val path = "/_cluster/health" val input = randomClusterMetricsInput( path = path, pathParams = pathParams diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorActionTests.kt b/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorActionTests.kt deleted file mode 100644 index 3d1a5dffa..000000000 --- a/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorActionTests.kt +++ /dev/null @@ -1,16 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.action - -import org.opensearch.test.OpenSearchTestCase - -class GetMonitorActionTests : OpenSearchTestCase() { - - fun `test get monitor action name`() { - assertNotNull(GetMonitorAction.INSTANCE.name()) - assertEquals(GetMonitorAction.INSTANCE.name(), GetMonitorAction.NAME) - } -} diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorRequestTests.kt b/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorRequestTests.kt deleted file mode 100644 index c8d8731c5..000000000 --- a/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorRequestTests.kt +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.action 
- -import org.opensearch.common.io.stream.BytesStreamOutput -import org.opensearch.core.common.io.stream.StreamInput -import org.opensearch.rest.RestRequest -import org.opensearch.search.fetch.subphase.FetchSourceContext -import org.opensearch.test.OpenSearchTestCase - -class GetMonitorRequestTests : OpenSearchTestCase() { - - fun `test get monitor request`() { - - val req = GetMonitorRequest("1234", 1L, RestRequest.Method.GET, FetchSourceContext.FETCH_SOURCE) - assertNotNull(req) - - val out = BytesStreamOutput() - req.writeTo(out) - val sin = StreamInput.wrap(out.bytes().toBytesRef().bytes) - val newReq = GetMonitorRequest(sin) - assertEquals("1234", newReq.monitorId) - assertEquals(1L, newReq.version) - assertEquals(RestRequest.Method.GET, newReq.method) - assertEquals(FetchSourceContext.FETCH_SOURCE, newReq.srcContext) - } - - fun `test get monitor request without src context`() { - - val req = GetMonitorRequest("1234", 1L, RestRequest.Method.GET, null) - assertNotNull(req) - - val out = BytesStreamOutput() - req.writeTo(out) - val sin = StreamInput.wrap(out.bytes().toBytesRef().bytes) - val newReq = GetMonitorRequest(sin) - assertEquals("1234", newReq.monitorId) - assertEquals(1L, newReq.version) - assertEquals(RestRequest.Method.GET, newReq.method) - assertEquals(null, newReq.srcContext) - } - - fun `test head monitor request`() { - - val req = GetMonitorRequest("1234", 2L, RestRequest.Method.HEAD, FetchSourceContext.FETCH_SOURCE) - assertNotNull(req) - - val out = BytesStreamOutput() - req.writeTo(out) - val sin = StreamInput.wrap(out.bytes().toBytesRef().bytes) - val newReq = GetMonitorRequest(sin) - assertEquals("1234", newReq.monitorId) - assertEquals(2L, newReq.version) - assertEquals(RestRequest.Method.HEAD, newReq.method) - assertEquals(FetchSourceContext.FETCH_SOURCE, newReq.srcContext) - } -} diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorResponseTests.kt b/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorResponseTests.kt deleted file mode 100644 index af7b0e5c9..000000000 --- a/alerting/src/test/kotlin/org/opensearch/alerting/action/GetMonitorResponseTests.kt +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.action - -import org.opensearch.alerting.randomUser -import org.opensearch.common.io.stream.BytesStreamOutput -import org.opensearch.commons.alerting.model.CronSchedule -import org.opensearch.commons.alerting.model.Monitor -import org.opensearch.core.common.io.stream.StreamInput -import org.opensearch.core.rest.RestStatus -import org.opensearch.test.OpenSearchTestCase -import java.time.Instant -import java.time.ZoneId - -class GetMonitorResponseTests : OpenSearchTestCase() { - - fun `test get monitor response`() { - val req = GetMonitorResponse("1234", 1L, 2L, 0L, RestStatus.OK, null, null) - assertNotNull(req) - - val out = BytesStreamOutput() - req.writeTo(out) - val sin = StreamInput.wrap(out.bytes().toBytesRef().bytes) - val newReq = GetMonitorResponse(sin) - assertEquals("1234", newReq.id) - assertEquals(1L, newReq.version) - assertEquals(RestStatus.OK, newReq.status) - assertEquals(null, newReq.monitor) - } - - fun `test get monitor response with monitor`() { - val cronExpression = "31 * * * *" // Run at minute 31. 
- val testInstance = Instant.ofEpochSecond(1538164858L) - - val cronSchedule = CronSchedule(cronExpression, ZoneId.of("Asia/Kolkata"), testInstance) - val monitor = Monitor( - id = "123", - version = 0L, - name = "test-monitor", - enabled = true, - schedule = cronSchedule, - lastUpdateTime = Instant.now(), - enabledTime = Instant.now(), - monitorType = Monitor.MonitorType.QUERY_LEVEL_MONITOR, - user = randomUser(), - schemaVersion = 0, - inputs = mutableListOf(), - triggers = mutableListOf(), - uiMetadata = mutableMapOf() - ) - val req = GetMonitorResponse("1234", 1L, 2L, 0L, RestStatus.OK, monitor, null) - assertNotNull(req) - - val out = BytesStreamOutput() - req.writeTo(out) - val sin = StreamInput.wrap(out.bytes().toBytesRef().bytes) - val newReq = GetMonitorResponse(sin) - assertEquals("1234", newReq.id) - assertEquals(1L, newReq.version) - assertEquals(RestStatus.OK, newReq.status) - assertNotNull(newReq.monitor) - } -} diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/action/SearchMonitorActionTests.kt b/alerting/src/test/kotlin/org/opensearch/alerting/action/SearchMonitorActionTests.kt deleted file mode 100644 index 61f96529d..000000000 --- a/alerting/src/test/kotlin/org/opensearch/alerting/action/SearchMonitorActionTests.kt +++ /dev/null @@ -1,17 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.action - -import org.junit.Assert -import org.opensearch.test.OpenSearchTestCase - -class SearchMonitorActionTests : OpenSearchTestCase() { - - fun `test search monitor action name`() { - Assert.assertNotNull(SearchMonitorAction.INSTANCE.name()) - Assert.assertEquals(SearchMonitorAction.INSTANCE.name(), SearchMonitorAction.NAME) - } -} diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/action/SearchMonitorRequestTests.kt b/alerting/src/test/kotlin/org/opensearch/alerting/action/SearchMonitorRequestTests.kt deleted file mode 100644 index 43a5bc873..000000000 --- a/alerting/src/test/kotlin/org/opensearch/alerting/action/SearchMonitorRequestTests.kt +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Copyright OpenSearch Contributors - * SPDX-License-Identifier: Apache-2.0 - */ - -package org.opensearch.alerting.action - -import org.opensearch.action.search.SearchRequest -import org.opensearch.common.io.stream.BytesStreamOutput -import org.opensearch.common.unit.TimeValue -import org.opensearch.core.common.io.stream.StreamInput -import org.opensearch.search.builder.SearchSourceBuilder -import org.opensearch.test.OpenSearchTestCase -import org.opensearch.test.rest.OpenSearchRestTestCase -import java.util.concurrent.TimeUnit - -class SearchMonitorRequestTests : OpenSearchTestCase() { - - fun `test search monitors request`() { - val searchSourceBuilder = SearchSourceBuilder().from(0).size(100).timeout(TimeValue(60, TimeUnit.SECONDS)) - val searchRequest = SearchRequest().indices(OpenSearchRestTestCase.randomAlphaOfLength(10)).source(searchSourceBuilder) - val searchMonitorRequest = SearchMonitorRequest(searchRequest) - assertNotNull(searchMonitorRequest) - - val out = BytesStreamOutput() - searchMonitorRequest.writeTo(out) - val sin = StreamInput.wrap(out.bytes().toBytesRef().bytes) - val newReq = SearchMonitorRequest(sin) - - assertNotNull(newReq.searchRequest) - assertEquals(1, newReq.searchRequest.indices().size) - } -} diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/model/WriteableTests.kt b/alerting/src/test/kotlin/org/opensearch/alerting/model/WriteableTests.kt index 6851c471d..6ef15f8d8 
100644 --- a/alerting/src/test/kotlin/org/opensearch/alerting/model/WriteableTests.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/model/WriteableTests.kt @@ -40,7 +40,10 @@ class WriteableTests : OpenSearchTestCase() { runResult.writeTo(out) val sin = StreamInput.wrap(out.bytes().toBytesRef().bytes) val newRunResult = QueryLevelTriggerRunResult(sin) - assertEquals("Round tripping ActionRunResult doesn't work", runResult, newRunResult) + assertEquals(runResult.triggerName, newRunResult.triggerName) + assertEquals(runResult.triggered, newRunResult.triggered) + assertEquals(runResult.error, newRunResult.error) + assertEquals(runResult.actionResults, newRunResult.actionResults) } fun `test bucket-level triggerrunresult as stream`() { diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/FindingsRestApiIT.kt b/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/FindingsRestApiIT.kt index 1839bc807..bc9f1261c 100644 --- a/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/FindingsRestApiIT.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/FindingsRestApiIT.kt @@ -31,6 +31,33 @@ class FindingsRestApiIT : AlertingRestTestCase() { assertFalse(response.findings[0].documents[0].found) } + fun `test find Finding where source docData is null`() { + val testIndex = createTestIndex() + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_field" : "us-west-2" + }""" + indexDoc(testIndex, "someId", testDoc) + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val trueMonitor = createMonitor(randomDocumentLevelMonitor(inputs = listOf(docLevelInput), triggers = listOf(trigger))) + executeMonitor(trueMonitor.id, mapOf(Pair("dryrun", "true"))) + + createFinding(matchingDocIds = listOf("someId"), index = testIndex) + val responseBeforeDelete = searchFindings() + assertEquals(1, responseBeforeDelete.totalFindings) + assertEquals(1, responseBeforeDelete.findings[0].documents.size) + assertTrue(responseBeforeDelete.findings[0].documents[0].found) + + deleteDoc(testIndex, "someId") + val responseAfterDelete = searchFindings() + assertEquals(1, responseAfterDelete.totalFindings) + assertEquals(1, responseAfterDelete.findings[0].documents.size) + assertFalse(responseAfterDelete.findings[0].documents[0].found) + } + fun `test find Finding where doc is retrieved`() { val testIndex = createTestIndex() val testDoc = """{ diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/SecureWorkflowRestApiIT.kt b/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/SecureWorkflowRestApiIT.kt index 1c838aaeb..6d0112c52 100644 --- a/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/SecureWorkflowRestApiIT.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/SecureWorkflowRestApiIT.kt @@ -394,6 +394,7 @@ class SecureWorkflowRestApiIT : AlertingRestTestCase() { assertEquals("Get workflow failed", RestStatus.FORBIDDEN.status, e.response.statusLine.statusCode) } finally { deleteRoleAndRoleMapping(TEST_HR_ROLE) + deleteRoleMapping(ALERTING_FULL_ACCESS_ROLE) deleteUser(getUser) getUserClient?.close() } @@ -444,6 +445,7 @@ class SecureWorkflowRestApiIT : AlertingRestTestCase() { } catch (e: ResponseException) { assertEquals("Create workflow failed", RestStatus.FORBIDDEN.status, 
e.response.statusLine.statusCode) } finally { + deleteRoleMapping(ALERTING_FULL_ACCESS_ROLE) deleteUser(userWithDifferentRole) userWithDifferentRoleClient?.close() } @@ -1101,6 +1103,7 @@ class SecureWorkflowRestApiIT : AlertingRestTestCase() { assertEquals("Delete workflow failed", RestStatus.OK, response?.restStatus()) } finally { deleteRoleAndRoleMapping(TEST_HR_ROLE) + deleteRoleMapping(ALERTING_FULL_ACCESS_ROLE) deleteUser(deleteUser) deleteUserClient?.close() } diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/WorkflowRestApiIT.kt b/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/WorkflowRestApiIT.kt index 8c073c4b6..94c7e940e 100644 --- a/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/WorkflowRestApiIT.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/resthandler/WorkflowRestApiIT.kt @@ -37,6 +37,8 @@ import org.opensearch.search.aggregations.bucket.terms.TermsAggregationBuilder import org.opensearch.search.builder.SearchSourceBuilder import org.opensearch.test.junit.annotations.TestLogging import java.time.Instant +import java.time.ZonedDateTime +import java.time.format.DateTimeFormatter import java.time.temporal.ChronoUnit import java.util.Collections import java.util.Locale @@ -1185,4 +1187,45 @@ class WorkflowRestApiIT : AlertingRestTestCase() { val findings = searchFindings(monitor.copy(id = monitorResponse.id)) assertEquals("Findings saved for test monitor", 1, findings.size) } + + fun `test workflow run generates no error alerts with versionconflictengineexception with locks`() { + val testIndex = createTestIndex() + val testTime = DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(ZonedDateTime.now().truncatedTo(ChronoUnit.MILLIS)) + val testDoc = """{ + "message" : "This is an error from IAD region", + "test_strict_date_time" : "$testTime", + "test_field" : "us-west-2" + }""" + + val docQuery = DocLevelQuery(query = "test_field:\"us-west-2\"", name = "3", fields = listOf()) + val docLevelInput = DocLevelMonitorInput("description", listOf(testIndex), listOf(docQuery)) + + val trigger = randomDocumentLevelTrigger(condition = ALWAYS_RUN) + val monitor = createMonitor( + randomDocumentLevelMonitor( + name = "__lag-monitor-test__", + inputs = listOf(docLevelInput), + triggers = listOf(trigger), + enabled = false, + schedule = IntervalSchedule(interval = 1, unit = ChronoUnit.MINUTES) + ) + ) + assertNotNull(monitor.id) + createWorkflow( + randomWorkflow( + monitorIds = listOf(monitor.id), + enabled = true, + schedule = IntervalSchedule(1, ChronoUnit.MINUTES) + ) + ) + + indexDoc(testIndex, "1", testDoc) + indexDoc(testIndex, "5", testDoc) + Thread.sleep(240000) + + val alerts = searchAlerts(monitor) + alerts.forEach { + assertTrue(it.errorMessage == null) + } + } } diff --git a/alerting/src/test/kotlin/org/opensearch/alerting/transport/AlertingSingleNodeTestCase.kt b/alerting/src/test/kotlin/org/opensearch/alerting/transport/AlertingSingleNodeTestCase.kt index 46b656b94..f1f8882f7 100644 --- a/alerting/src/test/kotlin/org/opensearch/alerting/transport/AlertingSingleNodeTestCase.kt +++ b/alerting/src/test/kotlin/org/opensearch/alerting/transport/AlertingSingleNodeTestCase.kt @@ -22,8 +22,6 @@ import org.opensearch.alerting.action.ExecuteMonitorResponse import org.opensearch.alerting.action.ExecuteWorkflowAction import org.opensearch.alerting.action.ExecuteWorkflowRequest import org.opensearch.alerting.action.ExecuteWorkflowResponse -import org.opensearch.alerting.action.GetMonitorAction -import 
org.opensearch.alerting.action.GetMonitorRequest import org.opensearch.alerting.alerts.AlertIndices import org.opensearch.alerting.model.MonitorMetadata import org.opensearch.alerting.model.WorkflowMetadata @@ -37,6 +35,7 @@ import org.opensearch.commons.alerting.action.DeleteMonitorRequest import org.opensearch.commons.alerting.action.DeleteWorkflowRequest import org.opensearch.commons.alerting.action.GetFindingsRequest import org.opensearch.commons.alerting.action.GetFindingsResponse +import org.opensearch.commons.alerting.action.GetMonitorRequest import org.opensearch.commons.alerting.action.GetWorkflowAlertsRequest import org.opensearch.commons.alerting.action.GetWorkflowAlertsResponse import org.opensearch.commons.alerting.action.GetWorkflowRequest @@ -346,7 +345,7 @@ abstract class AlertingSingleNodeTestCase : OpenSearchSingleNodeTestCase() { version: Long = 1L, fetchSourceContext: FetchSourceContext = FetchSourceContext.FETCH_SOURCE, ) = client().execute( - GetMonitorAction.INSTANCE, + AlertingActions.GET_MONITOR_ACTION_TYPE, GetMonitorRequest(monitorId, version, RestRequest.Method.GET, fetchSourceContext) ).get() diff --git a/build-tools/merged-coverage.gradle b/build-tools/merged-coverage.gradle index c4e07ab75..a5bb24894 100644 --- a/build-tools/merged-coverage.gradle +++ b/build-tools/merged-coverage.gradle @@ -5,7 +5,7 @@ allprojects { plugins.withId('jacoco') { - jacoco.toolVersion = '0.8.7' + jacoco.toolVersion = '0.8.11' // For some reason this dependency isn't getting setup automatically by the jacoco plugin tasks.withType(JacocoReport) { dependsOn tasks.withType(Test) @@ -13,26 +13,18 @@ allprojects { } } -task jacocoMerge(type: JacocoMerge) { +task jacocoReport(type: JacocoReport, group: 'verification') { + description = 'Generates an aggregate report from all subprojects' gradle.projectsEvaluated { - subprojects.each { - jacocoMerge.dependsOn it.tasks.withType(JacocoReport) - jacocoMerge.executionData it.tasks.withType(JacocoReport).collect { it.executionData } + subprojects.each { + jacocoReport.dependsOn it.tasks.withType(JacocoReport) + jacocoReport.executionData it.tasks.withType(JacocoReport).collect { it.executionData } } } - doFirst { - executionData = files(executionData.findAll { it.exists() }) - } -} - -task jacocoReport(type: JacocoReport, group: 'verification') { - description = 'Generates an aggregate report from all subprojects' - dependsOn jacocoMerge - executionData jacocoMerge.destinationFile reports { - html.enabled = true // human readable - xml.enabled = true + html.required = true // human readable + xml.required = true } gradle.projectsEvaluated { diff --git a/build-tools/opensearchplugin-coverage.gradle b/build-tools/opensearchplugin-coverage.gradle index adea8414e..df2c0513b 100644 --- a/build-tools/opensearchplugin-coverage.gradle +++ b/build-tools/opensearchplugin-coverage.gradle @@ -61,8 +61,8 @@ jacocoTestReport { getSourceDirectories().from(sourceSets.main.allSource) getClassDirectories().from(sourceSets.main.output) reports { - html.enabled = true // human readable - xml.enabled = true // for coverlay + html.required = true // human readable + xml.required = true // for coverlay } } diff --git a/build-tools/pkgbuild.gradle b/build-tools/pkgbuild.gradle index 4ac3eb325..8a70c13e3 100644 --- a/build-tools/pkgbuild.gradle +++ b/build-tools/pkgbuild.gradle @@ -1,4 +1,4 @@ -apply plugin: 'nebula.ospackage' +apply plugin: 'com.netflix.nebula.ospackage' // This is afterEvaluate because the bundlePlugin ZIP task is updated afterEvaluate and changes 
the ZIP name to match the plugin name afterEvaluate { @@ -8,7 +8,7 @@ afterEvaluate { version = "${project.version}" - "-SNAPSHOT" into '/usr/share/opensearch/plugins' - from(zipTree(bundlePlugin.archivePath)) { + from(zipTree(bundlePlugin.archiveFile)) { into opensearchplugin.name } @@ -39,9 +39,8 @@ afterEvaluate { task renameRpm(type: Copy) { from("$buildDir/distributions") into("$buildDir/distributions") - include archiveName - rename archiveName, "${packageName}-${version}.rpm" - doLast { delete file("$buildDir/distributions/$archiveName") } + rename "$archiveFileName", "${packageName}-${archiveVersion}.rpm" + doLast { delete file("$buildDir/distributions/$archiveFileName") } } } @@ -52,9 +51,8 @@ afterEvaluate { task renameDeb(type: Copy) { from("$buildDir/distributions") into("$buildDir/distributions") - include archiveName - rename archiveName, "${packageName}-${version}.deb" - doLast { delete file("$buildDir/distributions/$archiveName") } + rename "$archiveFileName", "${packageName}-${archiveVersion}.deb" + doLast { delete file("$buildDir/distributions/$archiveFileName") } } } } \ No newline at end of file diff --git a/build.gradle b/build.gradle index 53d66e347..0cfe30ceb 100644 --- a/build.gradle +++ b/build.gradle @@ -7,7 +7,7 @@ buildscript { apply from: 'build-tools/repositories.gradle' ext { - opensearch_version = System.getProperty("opensearch.version", "2.11.0-SNAPSHOT") + opensearch_version = System.getProperty("opensearch.version", "2.12.0-SNAPSHOT") buildVersionQualifier = System.getProperty("build.version_qualifier", "") isSnapshot = "true" == System.getProperty("build.snapshot", "true") // 2.7.0-SNAPSHOT -> 2.7.0.0-SNAPSHOT @@ -39,7 +39,7 @@ buildscript { } plugins { - id 'nebula.ospackage' version "8.3.0" apply false + id 'com.netflix.nebula.ospackage' version "11.5.0" id "com.dorongold.task-tree" version "1.5" } @@ -48,7 +48,12 @@ apply plugin: 'jacoco' apply from: 'build-tools/merged-coverage.gradle' configurations { - ktlint + ktlint { + resolutionStrategy { + force "ch.qos.logback:logback-classic:1.3.14" + force "ch.qos.logback:logback-core:1.3.14" + } + } } dependencies { diff --git a/core/src/main/kotlin/org/opensearch/alerting/core/lock/LockModel.kt b/core/src/main/kotlin/org/opensearch/alerting/core/lock/LockModel.kt new file mode 100644 index 000000000..706bd0987 --- /dev/null +++ b/core/src/main/kotlin/org/opensearch/alerting/core/lock/LockModel.kt @@ -0,0 +1,117 @@ +/* + * Copyright OpenSearch Contributors + * SPDX-License-Identifier: Apache-2.0 + */ + +package org.opensearch.alerting.core.lock + +import org.opensearch.core.xcontent.ToXContent +import org.opensearch.core.xcontent.ToXContentObject +import org.opensearch.core.xcontent.XContentBuilder +import org.opensearch.core.xcontent.XContentParser +import org.opensearch.core.xcontent.XContentParserUtils +import org.opensearch.index.seqno.SequenceNumbers +import java.io.IOException +import java.time.Instant + +class LockModel( + val lockId: String, + val scheduledJobId: String, + val lockTime: Instant, + val released: Boolean, + val seqNo: Long, + val primaryTerm: Long +) : ToXContentObject { + + constructor( + copyLock: LockModel, + seqNo: Long, + primaryTerm: Long + ) : this ( + copyLock.lockId, + copyLock.scheduledJobId, + copyLock.lockTime, + copyLock.released, + seqNo, + primaryTerm + ) + + constructor( + copyLock: LockModel, + released: Boolean + ) : this ( + copyLock.lockId, + copyLock.scheduledJobId, + copyLock.lockTime, + released, + copyLock.seqNo, + copyLock.primaryTerm + ) + + constructor( + 
copyLock: LockModel, + updateLockTime: Instant, + released: Boolean + ) : this ( + copyLock.lockId, + copyLock.scheduledJobId, + updateLockTime, + released, + copyLock.seqNo, + copyLock.primaryTerm + ) + + constructor( + scheduledJobId: String, + lockTime: Instant, + released: Boolean + ) : this ( + generateLockId(scheduledJobId), + scheduledJobId, + lockTime, + released, + SequenceNumbers.UNASSIGNED_SEQ_NO, + SequenceNumbers.UNASSIGNED_PRIMARY_TERM + ) + + override fun toXContent(builder: XContentBuilder, params: ToXContent.Params): XContentBuilder { + builder.startObject() + .field(SCHEDULED_JOB_ID, scheduledJobId) + .field(LOCK_TIME, lockTime.epochSecond) + .field(RELEASED, released) + .endObject() + return builder + } + + companion object { + const val SCHEDULED_JOB_ID = "scheduled_job_id" + const val LOCK_TIME = "lock_time" + const val RELEASED = "released" + + fun generateLockId(scheduledJobId: String): String { + return "$scheduledJobId-lock" + } + + @JvmStatic + @JvmOverloads + @Throws(IOException::class) + fun parse(xcp: XContentParser, seqNo: Long, primaryTerm: Long): LockModel { + lateinit var scheduledJobId: String + lateinit var lockTime: Instant + var released: Boolean = false + + XContentParserUtils.ensureExpectedToken(XContentParser.Token.START_OBJECT, xcp.currentToken(), xcp) + while (xcp.nextToken() != XContentParser.Token.END_OBJECT) { + val fieldName = xcp.currentName() + xcp.nextToken() + + when (fieldName) { + SCHEDULED_JOB_ID -> scheduledJobId = xcp.text() + LOCK_TIME -> lockTime = Instant.ofEpochSecond(xcp.longValue()) + RELEASED -> released = xcp.booleanValue() + } + } + return LockModel(generateLockId(scheduledJobId), scheduledJobId, lockTime, released, seqNo, primaryTerm) + } + } +} diff --git a/core/src/main/kotlin/org/opensearch/alerting/core/lock/LockService.kt b/core/src/main/kotlin/org/opensearch/alerting/core/lock/LockService.kt new file mode 100644 index 000000000..35618e156 --- /dev/null +++ b/core/src/main/kotlin/org/opensearch/alerting/core/lock/LockService.kt @@ -0,0 +1,311 @@ +package org.opensearch.alerting.core.lock + +import org.apache.logging.log4j.LogManager +import org.opensearch.ResourceAlreadyExistsException +import org.opensearch.action.DocWriteResponse +import org.opensearch.action.admin.indices.create.CreateIndexRequest +import org.opensearch.action.admin.indices.create.CreateIndexResponse +import org.opensearch.action.delete.DeleteRequest +import org.opensearch.action.delete.DeleteResponse +import org.opensearch.action.get.GetRequest +import org.opensearch.action.get.GetResponse +import org.opensearch.action.index.IndexRequest +import org.opensearch.action.index.IndexResponse +import org.opensearch.action.update.UpdateRequest +import org.opensearch.action.update.UpdateResponse +import org.opensearch.client.Client +import org.opensearch.cluster.service.ClusterService +import org.opensearch.common.settings.Settings +import org.opensearch.common.xcontent.LoggingDeprecationHandler +import org.opensearch.common.xcontent.XContentFactory +import org.opensearch.common.xcontent.XContentType +import org.opensearch.commons.alerting.model.ScheduledJob +import org.opensearch.core.action.ActionListener +import org.opensearch.core.xcontent.NamedXContentRegistry +import org.opensearch.core.xcontent.ToXContent +import org.opensearch.index.IndexNotFoundException +import org.opensearch.index.engine.DocumentMissingException +import org.opensearch.index.engine.VersionConflictEngineException +import org.opensearch.index.seqno.SequenceNumbers +import 
java.io.IOException +import java.time.Instant + +private val log = LogManager.getLogger(LockService::class.java) + +class LockService(private val client: Client, private val clusterService: ClusterService) { + private var testInstant: Instant? = null + + companion object { + const val LOCK_INDEX_NAME = ".opensearch-alerting-config-lock" + + @JvmStatic + fun lockMapping(): String? { + return LockService::class.java.classLoader.getResource("mappings/opensearch-alerting-config-lock.json") + ?.readText() + } + } + + fun lockIndexExist(): Boolean { + return clusterService.state().routingTable().hasIndex(LOCK_INDEX_NAME) + } + + fun acquireLock( + scheduledJob: ScheduledJob, + listener: ActionListener + ) { + val scheduledJobId = scheduledJob.id + acquireLockWithId(scheduledJobId, listener) + } + + fun acquireLockWithId( + scheduledJobId: String, + listener: ActionListener + ) { + val lockId = LockModel.generateLockId(scheduledJobId) + createLockIndex( + object : ActionListener { + override fun onResponse(created: Boolean) { + if (created) { + try { + findLock( + lockId, + object : ActionListener { + override fun onResponse(existingLock: LockModel?) { + if (existingLock != null) { + if (isLockReleased(existingLock)) { + log.debug("lock is released or expired: {}", existingLock) + val updateLock = LockModel(existingLock, getNow(), false) + updateLock(updateLock, listener) + } else { + log.debug("Lock is NOT released or expired. {}", existingLock) + listener.onResponse(null) + } + } else { + val tempLock = LockModel(scheduledJobId, getNow(), false) + log.debug("Lock does not exist. Creating new lock {}", tempLock) + createLock(tempLock, listener) + } + } + + override fun onFailure(e: Exception) { + listener.onFailure(e) + } + } + ) + } catch (e: VersionConflictEngineException) { + log.debug("could not acquire lock {}", e.message) + listener.onResponse(null) + } + } else { + listener.onResponse(null) + } + } + + override fun onFailure(e: Exception) { + listener.onFailure(e) + } + } + ) + } + + private fun createLock( + tempLock: LockModel, + listener: ActionListener + ) { + try { + val request = IndexRequest(LOCK_INDEX_NAME).id(tempLock.lockId) + .source(tempLock.toXContent(XContentFactory.jsonBuilder(), ToXContent.EMPTY_PARAMS)) + .setIfSeqNo(SequenceNumbers.UNASSIGNED_SEQ_NO) + .setIfPrimaryTerm(SequenceNumbers.UNASSIGNED_PRIMARY_TERM) + .create(true) + client.index( + request, + object : ActionListener { + override fun onResponse(response: IndexResponse) { + listener.onResponse(LockModel(tempLock, response.seqNo, response.primaryTerm)) + } + + override fun onFailure(e: Exception) { + if (e is VersionConflictEngineException) { + log.debug("Lock is already created. 
{}", e.message) + listener.onResponse(null) + return + } + listener.onFailure(e) + } + } + ) + } catch (ex: IOException) { + log.error("IOException occurred creating lock", ex) + listener.onFailure(ex) + } + } + + private fun updateLock( + updateLock: LockModel, + listener: ActionListener + ) { + try { + val updateRequest = UpdateRequest().index(LOCK_INDEX_NAME) + .id(updateLock.lockId) + .setIfSeqNo(updateLock.seqNo) + .setIfPrimaryTerm(updateLock.primaryTerm) + .doc(updateLock.toXContent(XContentFactory.jsonBuilder(), ToXContent.EMPTY_PARAMS)) + .fetchSource(true) + + client.update( + updateRequest, + object : ActionListener { + override fun onResponse(response: UpdateResponse) { + listener.onResponse(LockModel(updateLock, response.seqNo, response.primaryTerm)) + } + + override fun onFailure(e: Exception) { + if (e is VersionConflictEngineException) { + log.debug("could not acquire lock {}", e.message) + } + if (e is DocumentMissingException) { + log.debug( + "Document is deleted. This happens if the job is already removed and" + " this is the last run." + "{}", + e.message + ) + } + if (e is IOException) { + log.error("IOException occurred updating lock.", e) + } + listener.onResponse(null) + } + } + ) + } catch (ex: IOException) { + log.error("IOException occurred updating lock.", ex) + listener.onResponse(null) + } + } + + fun findLock( + lockId: String, + listener: ActionListener + ) { + val getRequest = GetRequest(LOCK_INDEX_NAME).id(lockId) + client.get( + getRequest, + object : ActionListener { + override fun onResponse(response: GetResponse) { + if (!response.isExists) { + listener.onResponse(null) + } else { + try { + val parser = XContentType.JSON.xContent() + .createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, response.sourceAsString) + parser.nextToken() + listener.onResponse(LockModel.parse(parser, response.seqNo, response.primaryTerm)) + } catch (e: IOException) { + log.error("IOException occurred finding lock", e) + listener.onResponse(null) + } + } + } + + override fun onFailure(e: Exception) { + log.error("Exception occurred finding lock", e) + listener.onFailure(e) + } + } + ) + } + + fun release( + lock: LockModel?, + listener: ActionListener + ) { + if (lock == null) { + log.debug("Lock is null. Nothing to release.") + listener.onResponse(false) + } else { + log.debug("Releasing lock: {}", lock) + val lockToRelease = LockModel(lock, true) + updateLock( + lockToRelease, + object : ActionListener { + override fun onResponse(releasedLock: LockModel?) { + listener.onResponse(releasedLock != null) + } + + override fun onFailure(e: Exception) { + listener.onFailure(e) + } + } + ) + } + } + + fun deleteLock( + lockId: String, + listener: ActionListener + ) { + val deleteRequest = DeleteRequest(LOCK_INDEX_NAME).id(lockId) + client.delete( + deleteRequest, + object : ActionListener { + override fun onResponse(response: DeleteResponse) { + listener.onResponse( + response.result == DocWriteResponse.Result.DELETED || response.result == DocWriteResponse.Result.NOT_FOUND + ) + } + + override fun onFailure(e: Exception) { + if (e is IndexNotFoundException || e.cause is IndexNotFoundException) { + log.debug("Index is not found to delete lock. 
{}", e.message) + listener.onResponse(true) + } else { + listener.onFailure(e) + } + } + } + ) + } + + private fun createLockIndex(listener: ActionListener) { + if (lockIndexExist()) { + listener.onResponse(true) + } else { + val indexRequest = CreateIndexRequest(LOCK_INDEX_NAME).mapping(lockMapping()) + .settings(Settings.builder().put("index.hidden", true).build()) + client.admin().indices().create( + indexRequest, + object : ActionListener { + override fun onResponse(response: CreateIndexResponse) { + listener.onResponse(response.isAcknowledged) + } + + override fun onFailure(ex: Exception) { + log.error("Failed to update config index schema", ex) + if (ex is ResourceAlreadyExistsException || ex.cause is ResourceAlreadyExistsException + ) { + listener.onResponse(true) + } else { + listener.onFailure(ex) + } + } + } + ) + } + } + + private fun isLockReleased(lock: LockModel): Boolean { + return lock.released + } + + private fun getNow(): Instant { + return if (testInstant != null) { + testInstant!! + } else { + Instant.now() + } + } + + fun setTime(testInstant: Instant) { + this.testInstant = testInstant + } +} diff --git a/core/src/main/resources/mappings/opensearch-alerting-config-lock.json b/core/src/main/resources/mappings/opensearch-alerting-config-lock.json new file mode 100644 index 000000000..401374a8f --- /dev/null +++ b/core/src/main/resources/mappings/opensearch-alerting-config-lock.json @@ -0,0 +1,18 @@ +{ + "dynamic": "strict", + "properties": { + "scheduled_job_id": { + "type": "keyword" + }, + "lock_time": { + "type": "date", + "format": "epoch_second" + }, + "lock_duration_seconds": { + "type": "long" + }, + "released": { + "type": "boolean" + } + } +} \ No newline at end of file diff --git a/gradle/wrapper/gradle-wrapper.jar b/gradle/wrapper/gradle-wrapper.jar index 943f0cbfa754578e88a3dae77fce6e3dea56edbf..7f93135c49b765f8051ef9d0a6055ff8e46073d8 100644 GIT binary patch delta 41154 zcmZ6yV|*sjvn`xVY}>YN+qUiGiTT8~ZQHhOPOOP0b~4GlbI-Z&x%YoRb#?9CzwQrJ zwRYF46@CbIaSsNeEC&XTo&pMwk%Wr|ik`&i0{UNfNZ=qKAWi@)CNPlyvttY6zZX-$ zK?y+7TS!42VgEUj;GF);E&ab2jo@+qEqcR8|M+(SM`{HB=MOl*X_-g!1N~?2{xi)n zB>$N$HJB2R|2+5jmx$;fAkfhNUMT`H8bxB3azUUBq`}|Bq^8EWjl{Ts@DTy0uM7kv zi7t`CeCti?Voft{IgV-F(fC2gvsaRj191zcu+M&DQl~eMCBB{MTmJHUoZHIUdVGA% zXaGU=qAh}0qQo^t)kR4|mKqKL-8sZQ>7-*HLFJa@zHy0_y*ua!he6^d1jMqjXEv;g z5|1we^OocE*{vq+yeYEhYL;aDUDejtRjbSCrzJ&LlFbFGZL7TtOu9F={y4$O^=evX zz%#OSQay8o6=^_YM(5N-H<35|l3C7QZUF@7aH=;k!R!Vzj=bMzl$**|Ne<1TYsn?T z@98M0#ZL9=Q&XFBoJ_Jf<0Fn;OcCl5x^koelbG4BbjMQ>*!nE0yT@6k7A+ebv`X1w zt|Xjn4FVXX9-Gr+Eak=408_Fui&@?foGz6qak-tHu>2o@ZVRQ-X;HZhb1Hw|ZAoxx z!)Cn4hxBI}ZbBCOTp3L63EU3Wv1dxk@J?)0_#oYR7HOP5Yx6W3jnagH;c}y$G^}eN z_gNT{1AanZ<}mw2ELMxx@ZzZ(2RvE4c)lH8c7Gi~3R2#hx}p9!hKPMW>ekYbK86>N zL&7Ky#*zv-P4iuIQ5RV(+vKjmwl+P}KH+$~xd=b5Dx1{hqqu0tbG{fYWstL&Kcz*d zOc@$}f?5vBmO8f3pj<+2PO7R}Jd6N{qRexKo>ElNYgVeYkyhIUY}X%clJ>unwsuOm z;;>SVKUJt$Kgz4Ax?PKY8F>##IJuP>EQ5R;Cq6}Xuvz;%La(_I4j$jv%s z_v}|apMsrN_%S~~HmEwu3RG@~x!CES{G~n#-()k{<4D?L%JT%I>3r{ML&;j7U#{u0 zJ?Wc+C3`^378b`@&yD4v8!cjFCp`ed7Vun)3h1Mkly&;(&fuUsq`8F2oWWnBfh9v! 
z)J4$;Dq$ChIB~)9 z;!~_>JhKh8&ZBy0O(j5VLgMJeISC8d^%YF=TvxYa)j2^kzB8-!dDXI*8D1Yw`rK2q zhQH}eNq)6l_HFiCa2^_HQQCFo*;EgNYz%{Zg?+H~BU(hNlr^WX5N~UOg(ORk9Tzg9p7p?ePhI3t95VTo{Sl|P zi3u2Tql^4B>8h%$3xl#v>I3nu(wY*v$3kd&nVrj%|+x~o*ljX_wTsJ^L0B}Wp^Xkr@n6*cwRMC1LfLW80+ z-wB2Jt}1H_lLfH2B)=)C>}_{;iaJ zC1wx-k!FMapJi^2mQ=w^wy6|1$U0+}<^7+mn zzmA^sW<=Cr$+);uxvZ|)OEyXvl9%DsKK?hg{x{9=nUA-JVV4jVy+;7+!XSb5 z2_D(wjg8ZzwKO#wu>uRPL z?sqe=MeOe^AkuBBm~Me5{#?q{il|V^b(-IX48Gzc)2nI@(2zzE^zD@eq6ID1%o!#8 z8*r2pBZq*Lh1F=?W{R49q9i$)w$TeTqOaY!_lkJVriR~C2f<^O*kCnwi%DCd z^4+hs*OZ4MYp;@dB*twe2boSM_k8lLu?<6G&E1#h3(X9`vZD}`5D3W|#+I}G#M$Q# zfya>mCzm=P=(cp;EJ6UrJHJQ3zWRa2y6AfHK9hc@7^}eIH>?p*1BTBsPgKiJ_24F2rV&y}hm>kSJ{ab+zVU6U{7UC-*37MG}w zqc-^cgh%Ezh+pS&w6R(H(3j}#qP)Y$UK?(|QTEfg)U9h!q{@<*FAp6kV4QIo1hTGD zuqd_mL=+2{D}t;=Lf{PuMlzmEWr{{tS9#b7VlFu9rL1r* ze3INmX~hl^lRxIraL;v`pL)(eT+=m})h6u9W)K=3WjsdphB{G$Z2W{n>XDp;Nc9tO zVu3wQ<)!d`>Ra>u<+laHI2I_nZ^t60f-W_osDBkmsZDT4oDr3PY_OI#RN3yD@E)K+Ky9SPU>c<$cQ)VtZBSrU%-lvu<)EcIA#je*I8tEm9R*;pn8 z=vK<`Ax{=>Q8^1AVlALEs^?q8q9ytc-}+tLGoMO%qd-IF0u9N=Y>RMO3(k;%XGU}~cZ5(@yoGQL;1_+Cc?B$Jo^LQ)BjC>zT)H5bK`E2s% z6)l(f@zz}Qu$w3#Ki#J0bMoN~+fQ8ZBdI=RRGlcG*Uj*1&(`cZ0NF5mcJ=P@-Z_Nd z0d)Jl3q;%_eS+*$DgNvg>zJ0OTY{Os65i!U4_uQ)?U5gPjkt8~8*IJs3wH}xk|jQh z2TGsh67|S#d-}c*^{fsOrza}HK;)-H=HK6nFaxuM$nk+1CvRO#gZPIB0oso|na_dY z#7i#;GvNa7-pD`^iQdyv!2l^DfI;5OATM#^)1U#~F7p}xeyP7npyc641%HQoz|>^? z1Nyz!f^7QjFwtjIc>evp=5w|8JG&4$@SXo+uYUZE=g;8ZnWs2GIn5& zuRIN!OpQ5jCkV%dP&dib(s$m2%2L01(kyEUBPxRt!k^H>&K4!aB+tr{rAq(@e!O+- zOb!%gw4%-9*+TGb)0fZGg2i|xd>^)KnTK-CxZC*ZT4`38Ap=I7oFke67!M;}ElzC` zH8bU0CO#?;hvshlrd44o+|xQdAcxL)kIJUpUHcnV6>fmc#D9c87x?qKtZ_?jaz{NI zex!B)se?tCII5IWanhn<+B5X^2%k4ZDC48)OE5U)M9=O1Ltw`|U6#N&mC<;x!p(0a zI>g?&|5ypOr~k}0JQhU-Y(dsE#5u2ruBIjG2RfGpZ1{vk%(VmwwmEpBFa*XCv9U7I zuoN<)Uh?Iuzl z*^f-sX>gDYm@AEAte;M}q~!;Lgdr!CTP(A(7bR#{TFPOHtDRkeRD0I?7He`DQ8O!6 zz~uJPpUlHU*fOK4&Tf&ixREuH$!wR)kenj!HXaDbf2j}FgeUz$jOm5 z2`9AV)~_Gu#Om9D$RDJ_s;y*okNuApy3q#~C&COVI5iH?ZQ$A$0D-cF=we+ZhC!^v z&mc$-){w9CC|>Aq2K{0Qw8)3GTZxk+&dmWN7+Aph7i`{tD&<0=2fkBU6}~Ks)w;#= zKV41P_Nj);C>$#Hk4uz4{8dGU+=EwX4g;G(4TQhJKq z`0;NhsHSqTi?mzWxz78?|N78eCKj>f%!A3nf3wb@6%_9~+1 zO_1UVFZxXi#Jhl}LW9H2F{Y4_yS@PnHn*~rWuT+wKSR464=5|TL$^`sFZaPGC&9-* z4gdVHXB2GS(_v+3$O0bD$wG_wYfI}yvoKuAPm(6M30jU%2K(Eut$8n5rKwy?<4764 zgET+b1?uK2 zN1}euHFy5AAA#Gbif$Sfy&WoPcTQBP9Ke%E&QSFTo!WuTV9=FONo{E&yQ1(qg9S*a>EmNRgrVQ6^E*{|( z&VRXp>r_63=x`_S6Bcu)>9iHvKaPmyl*E6%V0O+Du_OMP>)G?&H}@aOjS${D_2;jC z;GR&i0&kdf8ccgH-aFSPpVu_T@GkIH=o_gd(9rI-*DFk6D;k2kPk0Q~@`!ZJ17_ppZ7uY;^xU9wUGOwG*g-PRYv5XnNm*d>fu5lT(F!&e)9s8(aC86P>2x5=vHvP6*WpM{T=IK>=?%93X+{!`zyNu>p z*67^*vwRqE+oV5P1YGOrwv@XshI}c~u?e0K{)HKsMRWDD#$_ zaC-5~bv1jPg}9caA1D)ZWwwHV?82|Q676+6{cKY!R}L0l#cbpUYiite@IN=3i>XiM zx<1CzeucgCHY2GK+@X}gg%LtHxN@w>Q+4-TYn6s2*Akrf*>4H|217n6tx2m3fVIuu zoSr%14gmUj15kC>)A%Qlv|5mR7ROrBmG-rAu(`bW0DCovyX_y3{4!l!-}Fd<_gIIX$~1 z@9yzuH!RZ;La3J)>0`Gyh?G8Gp*m!6dZzxLVva09;b(>!59}>-JH*i@#wK&fsLHfenDqt~v_jT(Zy`0grYU;3SD1=fGe69gv5+TN z^1{UBtf4)+bx~zY758-O(Lh4)lK;EwoS|GBV8I&{|>|2 z6w=I~slaGU9wcvnU_s+!msh5Knnft7hB@AmdtQN2?IwAmFJRY5P!e$2BWEZI1R+2ZYO zo?#Sl#m-e`AUIm*_t(zgfx0*(_{L3rPElT2>~Th8XbKqxb(?8LF|IP^rzlx`*Y9u& zw*o~*!eoE5)O9==%2xn)VLhKi1)IUumvsT3IFcSucRyw1Uo*N?;>OF5mzM4fzjGfH z!WU9}UlLN-OgVEk|NS^`1-^!M=_o>2w8ph&c16C;XK8XeUE>mef(U}+k$Odo|nX}fyq z;)8PXQxG1qWla*jEIFQjwdA=Gf$GeV$)xpnX@JZOPKENfZH%qxLwt-1h3iBf>Jy^8 z!$|boym3u^N0t@nQMMr6iSZocBgtV}uJN*iN#K3`CH}Ou@cyyYlpRdA{~Tq@1h!a< z(69QMC704^DV7?Wf?C!bc+3*d4-b0(i~HYEXQL{{I%xI zEN~ve3)}cQ#0_S4@Y#pCeJt`RxXIWhEjFRLdrn_?7Ag4?#d~6cxTvcsDtt^=;|1l2 
zScA`xXcqTy#1&Jcu7K7J&Pz+)l}4Ca8PWe6xjB~nE17^;iOv9eb(&LYW!mkL@C^!L zv1G*#z&q+b>YnsR)?|;=iq`#i(V!ZOSg4}X zd?ALfDk;Xi4!>e?q#8WdYRHk#@Vbs|2!<{FDU1LDm0oj3j~ICYOCr_+Ifz>;8=Q?_ zL{T&Ymp!>BCM`N|0FU~Zd2p(JPLpxuh3#~5aBN!e1VtXUjevgZI+Zsg-zSiN7o5Ttkq{*7!=Y{GETe!wmpv& z;(_GsGH|ke!M{{crv@0KfLF+KMb6&ppYb005N0LV!dL0^4G*C9LylU=;IhXb)HJy3 z4sKtwU zH`)YtSRq^7l(JkEU!0M>lIYj4Zy?$Pa33y$5WE{q2nA#f0q{D~)^8T2;u?&y8w+TJ zd}^|Gdytl^^R7-V*fa(J!|wuIZCz14-y~PhvNPJV_;2PQbIGP&;ufD7fj_)bj*}$I zO>(2$UekO8>#0yK*e@7yGajM&*%kwt=b|+TZpqi=5V*J>As{|LM%Y%iFSE58vTV^V&B`O>K);cR7CJWxtmG%k(e2ZVc z=O=O+XnaUo(L*vxm9z@Q0e(5?Z`3o{6h!LVX1;1hh=a8(lVLAVKa0+|z@BL4@TPOR1&PMS zx|(Odg@iOl`r()z{LsXl%)tfvG{4XuN7Jzf5~_`BHDxSrDa#f!I)_+Hn)0aWm3?L7 z*7!OL?*5J?qoafHR4@k?71L^0q@1MF!P8EN?$&;5A#gc<;f+&|brE8D(jsh;JBAP8 z_Scyd6^}AelX5snpnN4+e6vKZ&Gt}I$>567X*h@+zpeM%k6@SVi9q;r4o!Z!-*Swp z$mn!;5Y1?@ywKf6cB56TTgOYy&HI&zd`NMEu3A^gVNad6UHBe7-xK|q?S}vqFgXpm< zFF}fIzIQ80-AHU9#k5YsQP@eO-H~Xlz~rVi^`S3_kqBqlhGb{@DiHF@Yy4`-kmEMo zTN3FKLInL|@am4|Bp0xkT-c0t!xbBlqi%^y=^_N#Zg>%L=1oh^yu=Q$B`yN`%C?-A z5!UX;kjE0Z9U<(TYh)aZLDtzmXF~A zoumoLY3~n5RpT_E8z`I(Ad#7j0D^PIa3-}liEI!|O(vGs!XjpBA33 z!)z;~Fpnh9KDu;6CGoW>bPa3zmmTTA(a5eSCmks1m&|u;<5+!b>~ui<)`F{ z=E&+kqIp}2yiDZqYy?yJAlfnjme5ZfL{gjnPpanDz+WYmn&ci7WNxW>$u?HMV+C=w zMJ$n@pB7a%PNh|K1*BEe6X=PTQ)ax2xmiy=1ctrAmvh49t!HcxO&a4yUY@@)lyIeg zC6Udm3O76q|Ap&?9|SwMfM$98-AP<3)mh8}$3=4)j^2mOWQAXrQDGag|1Eu%Lo5=a zxt}fvdi}_EmgP$Q_ae+yh{yNZb8Bhez6W;shqF*@9oB<~X2f~%G1K~}BxVO5sb36D z7jq0SBneD}MUy25-HfS<$wF+lz%FL}?^@aiEG4uO%5I3FvfHg+BZ%qsz+Ny(57M3h ze^8Vc8RmnT&IlC7uIOnyj1f!d%u%JApkndlnxtl9e%)TC8{=$I_FPY>wQolNG7(4aw**KwoHVV`gmq z=ynxt?lX-wkT#Qs^?79qF@NbmHfno#-)gc<$M?Rit(Il{u>;)Up2}C;e`LImXZ(bz z>2adO4&2}UgZ*Zvq{S|j%j_1;l3)Y8LgFpgaJ->86D#QHy53>*@4Wv{U0)p+)%L|p zcXvyJbPe4|3(_DUNK1D~3?U&y58W}M^cCqGx{+481%v?xP*6eM==FCm-1px6vCls1 zthHn9edcq{K6`z?tzNM;PF(!KH$+c(U+W$eeM-OIPBa%(o*D|QXz2U26qeyoAuzwq z5uAHMnrv89U1r`tt=C@TSQC&#Au&HuBj1^8Ty|As{Z&t#K_fSP!g?3B?X?Vih5GiZ zzWU9@YY!DBF5{=A8Unh?;1-(V#w-dr36<4--hm4Zi+r4S%2^Va!=o?PO^Q-eP+cVDu$}Ss#&RUI(PlziL-#O2aF}dH|I}nM!=twm3*$TVD74-Ek;f`2Yuf6 z$`07dv!+`WG+JoU?|XhtF1mygx2h-7xP7~1BvQ8Cv=6xPX7RyQ+*N|fUI?rp_P7)5 z*`*8Zix$d0yqEG(+#{KNeVvJyPMRQbHJaW%Q<3=*O{cU0uvz;uu!)Kic>Os)CUSKr zz*5_|qDeR;ZCn3-(uXP58n%12F@y740@Lyb0jPkJ{rVKKDL%InH#da`E(0&CqA*TY z2hH}F-IQO{Pa&)$FMQhpzEI9$QCV!=4Ml+ND4R|ht@-!{c!OV3PcKU|Fy-{@KwqP) zVDymHUnUdCE%&~ZyBTFbYb(E@Zlng=NgZe=0PLEWbDy~{xq{WjfQXnbW{jMVC0I-L z9&%2*z;LW^8LE1}6QeK18;TdQS;mI0P3FbBGYQN1nm!@*%v$+XDV2cFVE2?VJZYrky>gwh|#J3)t#|;@u!m0DS6(hsp%t17@ zVIb2~8c2t-+cr5}l*IVCEn)4pp`Q!jX?mWvkFTE>#NA-o3B=VOv6j>G`XkR?ETQJU zBqpQ|X_&^s=3s-T&FD13_EjFBzE`5$G$K&npyPgpg-(MTPP+%-pbR?dJg23=rEBxv zH#kRxR|Pu2&@}6i7bJ)4v|N6@{56zj*~DwYQZ#b&CVAZ}dNyE<|GJ-3roomX! 
zx1m@aQG;iKNWr;Bc<&gyynpRJsX3y4SKU2wo3_Wee6W=i5iKuI@C)Mq7k&)18~+c) zf4b4WCG7`tnaB)cYZGQ0si%M09nvsileKv+Q4Qjg^bIMj*S-3vexgRx_i;L22$azF z+PBF^YZ0QAE4q?<1W91ysE1$t)N&2&@V8G6i)H_I`l(aQU*e+u$5GHt=mpFlDX;rl zK-;F8-ZL%0gixvfi-5XV0OuJ{hqxGCHvz7Qb?DuL(jdT-vzbh;8hWud*!kBst(5v; z0?*;u0--p1<=veo-0NEGrQGzer zV@~Lee)18n*{H5L?4uL&N1vcl0I7O3ncC@kl1y#}nJtJoXU%*V`(d>^Q+{W0PLr($D3Cb9IV*&nu!s zpWHVAS16RaEmIIl*E&@IeHEZ@Kf1wI-lfvW$Z@1V$&UnBN;4$qm3Ugc&WqrW(oML^#X}wDg_yJw(bUB5tDUH7o*-V<|2*gzHp(eDb(v}N-NvC?L ze0gFGa&Ks@vDss(rVJe`ZL6E!;8g`7E94I~8Vp_SM zT!topq5#>jlvZ~p= z&+PZ6CnUZQX%?Cc*}hv6Pr52(-w{o&@_s~twZ4D7|FLN_s@bqPVd5U`h7o0brSbx^ zfB45a5ik+Y^QnlpRc&4Z8Lll);70UaEp^Rqdri#%$z{LE(J&}wdO{1pZvjOLKIOLf z4fgXnXND_1Fw-5iRwT{AwZ-KKzH4-TIg}o<$+IOclc7CBI-Vpe9siE#L@=j;N#QK% zI4g;{XC-MHOHBTI+<8(CUP98eOO#LWIV|x5Qy;1S6vd;}sAK%W%LtoKui=~tgOiDz zt(@l^j%=TEHUu9cCH7gNscy=Ctl187CpUpp3GKSC=A(JdiCMFcr};Uvs03Z zU-GJ$TC(X=eomT_H0$wsD%_Y@ z&4oP}6@VwK+DX6j{H5p-cAQY8cAa~Mvpb6^zbxzlsk#liF=r~}h1?#5gRG0Nz4?6# zknP7IM}c(OE%rPWu+B_`kKfNZ$wc3n|hiR;SO0bT~6EFn*3zaYdBs% z);gn6zIE!Jhag2^V7Y`my_r`9T< zHjmlHKt)^YRbx`G(!~W*r$%={$@-%i9ts&HTEB7R;Gu*Wq+o`q0Sw~k8tw0vK5_cp zs`;l7=agyb$gh83Xx>3b2+_&@-NGSnshnkpKx2E14Ln7#I7*n z=^6YksAc8mh`%ZG>ihK;2hu~R3f`swcdt2`g>o_dCp(i^C^PSwZN?DK=wFJV=?}xl zoQ0f)$m~oUiqh~0fei_fInE~Rk&T;gzv}91tr+?@;>_S}Ccx3hr>K1{B@D-_-mt}I zlgl0_d}0t(6Lm{ZtbaLNt_MpCzZw=lHL5&;<1cZy=5~3Rzz?y2<}iZ7 zv>}Uerg`m}G$73r%AJ}5m)9&B+V-`(aT-4cM?akS&zQUQ#!qWMmVj<>xoq~54 z8_kh;J_Nw3*K^}v*w7`5ajnp%Bleq3 z$*oNJ{q{;d$kY@wQC4iHAu#f1XY(zO8|v#&nu=7zEt-MV@+gvIYAMWTg<_O}{QM)I zy5ENG^)UWK-n=AyDNkzHXGq!+u?r4=gx)E2vJYhe?Gj^#pit%0E`|n6c85fVvR=@e z)NHBHL(Ek*>22NV;qz1G%%ruw<9P;{JC(*xNUryWp0&yM3<$c=4659B@uj83FQ)H% zvq>4#){F{lW{3__upyV}&+%R>`ZBs!!npL1SIRu#z%!sp1gr)5k2^%V8AqlDi}K! zDXlUNCQ7zN65>O1+)^mOQ7{lxqa_qd$jK&3Hb4TN>R^#N42k1Nw=QeUMJk*wGoqysEQJa66vzFru&x!} zz=Z?&kaPo*n-r5N#Y2!|dm`JF#(xkIeUH-JqJDIC`XAc9Og8~k7&hYR_9cP_i}Tmh zZR!ao*mi~ph#gGkKz{S6ZrCMSosl**mFjZ_hMFjo!U+B=EyZNsmNB;oY^S_K?bPt` z2|vFKqLiHM7s>P?IW(*l0myl?_ijYM)ac|L9Cw{JuK&SM~~CY}b|0+C|4j z=Z#e7#p(sj=8?=LQ5Zn6Jk~h2c_ztNgR`gdLA$9UHZW0*$b(Yff@QNIv|YRJfQ@c| zmNji7frMf`HdajCB$maFHBbz=I#yWP4rln;9_7Ery-^)NJKC|5<*bkA-f34Vwyr+q5ko5u6r zUO9L<2@|-mtREU25h6K07IW!s+DDC@d!o)R$74lOxG7VZae^hwvZ0%2XZc?Jl80ey zVV5Yk7e8hH3;aWxgy^8w?--?gMlhWq|A3&NdguB{Gi1GyN&N~dCnr<+ z1eoW*EQ#y2gk<0oL5>C`&EzHWMo3u`Jcm`o=DC-7F45#{H7%(tX*9{BH?A|$sU;z< zZ7?8$KxUXKu6$p8(Ew0G3vsBW5pHtE2=U!23TsICww|eYC|NWhRG$D1&VQcAWA?F{ zZEkgJHp}Tjx?m$O;21T2*z49~wf_7o zPwm3fSBsr#NEjy@os9W>>0`_9bD+q2>y-kbV&(;)o#=|bCTGW)Nz8;7Vf_hO780f@ zlj#9n2jB29n2sreqXV~3|;cUc~WpB1!G(nd*=-!F|-jGET#9}(d6P;{> z@y70OfpW74ih&&5MJ@0fbyVF3bBF-Uaankci60_BX~n!Td@$(?erTj8gY%=Bsto94 zg1Mm|j~%Cy9z!6c_>KvE)>4BBJ!A~^hB@%q5hrSaVyXO5%6nYS;0Y_;)7ocw?s>aR zquS~WhlW&krGuTL%5jxj8tk6MrE>v{QMo;7gI4aJ)Ml^*_klBZ3j&*tFA(7&V>*Pf zxSR{`qhl|*>?(`Pe0mS3rX5LgL8(BpDPfg|4Vpfl3dQw8?O}E|ZIB^C9@H3vQx`hHw2Pg$4T7Z8sveK+8ecHr>IBq zwW!%^3_Nm;H?zBn@84E3V_Shn02bXm6)+Axh4I6=73dqF>&T@HgG0BO?G-W*61qN>vu`f>(VMA=&tBF4u%)xZom<(1*Unz%bj775&+`UWbee5E3U~b|I%?8d(qaIqWitb7Lg3L3he zFk_f=5xf0+TvoKI&S&1hyr5k&^D5k#s3vzin@+qR`i35=-pIrNm^yFrPEHQLvwkMo z&?l!%lYLMf>ms1!m5Z<5V#i++qRqW;DbxG*$GPpfiT7H|AHLtcr0G5Mqj#k_t?{rf zs@c>g3^*X%kKj%WjzHAMiQ#H`sEM*A^~a$PS5U-fzqtA z_E(dTDd{AVb<$b8{Hgew&R@`V8=iZ6AYT(%gqiwSVOAGtRWetVe$oEWn(?!z=8K-` z+PW#GW5;NNTaj;*Gq59dUZ%!3cD|#=mm4LxV_F(2Rt1>O*R(;uenH*C%Mi~b~9Mgoc*OHs=vf_fuk z;)P?^i#+U4gBuM}N_jl=IIYmit~i*!KO+%`CfxAFRNV6_Kj6NT=W;*+8~5ytJRzM~ zxd)}|3Gq%aCkO|{LJbPiE)kmbA_>-a{ArLa1PGL67x)ok}))? 
zz+zvL-QuTsf7#OkyK_9P$scp*MwYzBhGb2cKS+rs0Wy3bO26l=t%5flcQKce84hiw zu9CyOPiB(YAH6dF*V~gT|LN=)UFD4T`M`Xem)<`OlP{;#CaX5xnBm{}Wumlted=?A za_+a9?Ew!2)jFhW>Pn1NXezuHYasQ=f%+<)hB|?0Z{#Z+t4iM%9NNPaMoV64e}%_) zI6(TTUIShH`-^k7pbA8S!=95GZ!-gAQ&&}Q&TG7w>!MY@4!alrXovzd4(>Fq!Oi4J zTHkezFIe{cz!ghs5zX8deeF?7T8SoM&~shXKi}#4&fU%%yNy`f?sQltINzv@cq;>&pPz07fbcjkOULeO&m_h zM+e8Mw{~7M*8wSW8&v!3n2wlC7p6Wr@KDk55 zZ_#vw4+8972#m4SaEYb6}_h2iLOGNB>y?H5VxS0GrQ3t zw!pb4%@RgF|5mROSqIGy8&FuL*{S~(r4EKBG71^$-$W0ND@@2_DSXsm$OlPC;hi9hjZ3qJj4TU061&Qgz#Uv%h`V0Yw*n$Im(g{c!QG#EV0eR{_o{( zKjNfcJpjknm*yxLf=_xO|)LmikxMT+&>`E3P^g5|Y^ebP-2L_=Y$_ zT>=c;!nmb=hfDsB`jj+yN=g#kwT*GBt-qP%!G$~IC=;^3D@PE>)BYV2sq^=^{ljVd zo0mKF6FJJT!XM4s&HPP*jObM}0uff|PQ9%Uemh}FiTGFDx0-r~B=?R9f$DD)0TqV- znA{=By+UHcPSOx33{K$VXHsB-bwoJn?^#N;Pk;h<1~cwc{Sllvq2c}A03sxq0+S2a zV*mCa-(fDf(@@i2s&vZ#UmlbHy8XXA2>&Y#5^nE-D2bNh|4oMgzF8x`N*%pIJ~Wo`c_h=DuZr z?>08@9eaf!ggpZSD)_egZ^Tc;91lg@O(J*H$AMta1L<2O|BEn)gv4}5wIdQyGCla@ z;8&}z4_Ht{v%njv!vBoC`5_Amdp0=yPzrIqJBRwu@cr@uZ_)2gX(MBRuMdAw(NMuy zP~auMg?g~te!LS!e5d-Q(%ap8t;go%p0X zb+J_aF~t9;cUDI%C{G4{i*t{D&FLD10DJhi0NTy=e-(b`ThyJxVIzNx@WE!szkCTD zx$Ub5)4wktj?jaor-Z<2D}x{F(!eI~^_3c2h;C4BN%%wM4{G&`hsv7%DZk5%b(wVxUR7hDh3;g)F$NKg)je>(rQ%S5ojB1!-c;Q4s_}T^0c^fB-6pnC+5QRsQ z!uw7+But7hfp-+wc1FuqXnkrPNLr4wXRsU3%t4nsSzZt8ZK1TwEUBW6X-v|Jh15x? z$g&kO&p3DG`rH~sw~D?*->F8-gA*)f;eZ<$oL%iStr@@IO@>VN)t#-KjG%jlDwLYH zXy@MLxId-^Ea~3dJ7f5Gtj^}i=h`vy$$2~ATHG|c3ix<&d z^@?p~tBiiMVQe}>^Ews-z;OI;i!%=^Q9v078P6urO^8>DT0<aBuUP zGRH$2`WVB%Xp@MDrZs|`YD>OrW*U2bQL({tf~tjLEy0EUs2r_|nt0}Er6-WJkx#`8 zuwura3^-xnbZ}Q!GCjWtCmY(&)b+aEG@g2F7Xo>9on$`pp!KbtKQBzltVAUE z|ITNhuFd@kT&S}?j*xR_o>xlpTx#}Oxju|dfpFI^25=!w#Ve-ioxfCVU{aO zHLoop_tPK5>ep2Pc1iJuJnk)vdy8?xB7ZH+!?X(xV7$fRn ze|_?UNnAzVGeD5CzK9{F!+&OI43Yd6)F8ppMA`hq3bqj|Mt0oORonuzZC(U(_?)X& zAJZ99WScZcF?lukuV`ZNDl2I9*)9cKHXM4u=veO#&ImdH*Hy{kovrwS_1iXwsirE)*ohdM-swhL$d|e)<{3 zyk|)t{}s>w9bWAo#`&N)QW3yG2}1;R?9-32$Ca_Qf<#CQGML^uD4J|k{FamgOJQD8 z#fafbMXAou(vKz(vM+|2LPdt-4&t>iwrQ;?r}?NqgQ|BX{fL&(jr55Z*RR zf!Xjk{Nf#oxHB4jY16@e3I-xIzA`*Eta`(fB3;+885Zq(^O-6cLl3~A`hahhoQc5G z!)47XkJMucEgpz5@#gp$P&1vV|5yb%M>}+H88DNU@RlW)R!mtxxWkqnzbIz52pn_Z zHnyz==n45B^5-d^C!@CNyZaQIfNZbg2&3>QNF$4W(_Z-J_8B#4`7`}3ODc3~e$47S zPMeaL(Y-4rw|y|HMg-uP?C3gLB_fEG#8LSyaecF0+36VH)rV|hTqCvbB$^vm-uz5H z@RS(*4wN`E``ENk)%E7>M09AsV>FOAYHbVm`gm7x`?UCaO{I5F?2Q82~GSeWL}BJvla2`h;Q zc67R-IwilL3-BNF?hfE?q{=c@(u>kUF%P7^q%|AL%LiH|(*{+An(NSu({UxX&Et}w z9IbF8SRb!(sS%~j5uzVgf_Or4<{7pwsp3NkpXnB3+ZgqPjo+)x?rN_=rWM;^() z)MTD+tPTigprFRf&rs%vd180}vljig@50dcQGkLdOo9IOFi6-AOqg;HQM<)#228Di zH1@{rBdmAWfEf2OqGYz7KX&Ce^HMhDeiSg5>m*AP^8boLo?zE*;AYdN@aNkZ4w#!a z#UaCDxwUo*YZ!-=W<(ez9-cmuDc%}SUCa#pSe0@Ysn{sr*bJDX%XXRz%-2cWerPF0 zN!)BgA0XZj@$d7Rq#)lAOIo$gvHFIp7oD$cHEv~#Zc9}bKkv};O{JzmTVqL&c}7If zw6oo!-d_(SsqUqs^xRF;#8q2-iDo zuq+>+fCs(PauI{<(AXlF=~ok2Tt-)=qljfc#WJ;_IFZ8rh`bdxP_JttVh}0#?#U^Y zFYStR4es#Zu%%ufkAjHOs+{Wd&cl?gPb7j{xaxURrIeE}rDD7uK<;x)G*|=j>$PGx z1v0RP2*sMy{SaLXSATF!;w2>x5%DdB_(7ep78&E7@LaP~C=FM(|M@n6EwultE`qj& zh{i00B`|D-7?YProY7@@Rk=aQKKIq5Wbex;WEC^sffT=X!(?2QXk_5R|bmj?rvH zG!^N}vPGacH`qKeJ3lVeT#M0-VgU9i0oe^-_xyrChUJ{U28xl8$4U2@tzzZBBku={ zkMKBzvMy@n-Ix{N3NikTWtRXNZ@&}@B7SrxIJ5GS^(^{cbYLU~y%Tpp>XBt=u}-G3 zj@FS5)R9kVRx;{)6&QJn5Pj2P4VDRE&{S)_CzTei#6>rE!{kmJ=HM))r3R#TcgJ0& z34@zRQG=Uilpjjy7p94>*Gawp4zhF|@Wgk}!Nl{n^q&xp5vCJ951+Hu@fcbCY-k92 zu`!eg>NSoSd?1y5Xs38TLOfVTo`Y2@1QXBihF*ZOa>gmjpWP#$h@7+e5KaF4uXgpn z6Tq#*ExOznFlexwf93WHZ@JN2AX*9wEa38w%(4m}w+`(T5|LgsWvXV| zcB_yGdKq10g3H(2mDrOc#N*6c@};Ym;k!(yr)7|HT;2Ct6+) z7l&Syn)Vm{3OhyC#n&X={L=te(GBtOlr>n-V2nXpL#blNlN}nkumT$1WgG 
z=o#aEY)6baU5*7_hBQCLzc<5%fAi-JLQGrr-y+mg1FfW$KFG)jTBd>twG(9WR5?sx z)>n`*$@<+_F|SQRuQApU?_L_PMq%2~W}OmuD9;-sy72S#Ug7?Cz0oW7!&!m`1EWF% z?Tb)@+Aj!!8SOJK3=PcB9isY||Fw63D?7-8Dv~^Dx61x=Zc=45WA>`H*6xWcEoa5w-rZ67xpT z7}(@U;}xR@e{AHOE!0g}LP+sg?LiGhUJo;ZY>xdsvED|IFYHIbY}?;qe0-z_hy4G- z8VT!0jNJx>jb%Q(}h1+HjOg`t(UI{4Ek87mbDcFrawqzasl|!IU%uf-Z*0Ze6m4CS?qj&aj zsPnt9#NYg$cC3{&rD%*zI@;!G3lZ$m#EYGE<$1QHuPR||a}{ebt9X9>ud8Y+X>Yv- zH0OAV2&K<1OVOE+Y1|mwl-kyyN{bM^_ow}Tl_)7Tht?$C=7gP4c9Yybd@C{M8MG(8~;68b2%8bFDeDi0t3KJ9lhxaNtkp zG3u$kz)NIYF`YF8UIG0>0 zmQ{?rzFLqpk8y4}rFlq=wnMY80ab6JEdsNN)g9jg|Fd5g&Iw*$eSRx9;!v3hczYA2 zf~hq&G~~JXgY9zFs3@fZJTnjRA1tuzoAFD_@rcZ{!Yrn4H)SF-iq zocXo?H#~CV)7QNr8N-(;i2+j2SE*O+LFzBIg$F9mxKfZzLNj?Z^8z!rdEy+@C^aUM z4k87CRGf^;f<2lN4&L39n+7|pfc{ijuu*#flx^9<=AT&+r1VvmkxrxH4?mkcnZ`mh zTag9l33ifS$JwIO0;Sn9ZF8U%MM25Y@_x<1 zyP$>cnWIiUk#CGXWg%?qKe`+kR9`}M(H#SUiq{Fes;BFAs<%s7@IGB<=*&z2(OP8EE zSgo!emqGChCgB1cG!&jK4y7=w1UZ&&R4zBGnZ~OWU`gw~eViG!&6MTvdZR{y%4axI zM1M1AMW7OR_%v=)9z5{@_n(SQvs*UEA?H!E677g4x;FmkNAMw+1z&|Jo3*4yR4^qH z3fo_3M9FEQMkP_>R};QPbH$RN=PLCtk`pIj>*9!h8P=mgb1Km9j`K0)t{NuJp?)@v z@`lkupM}q7uNzzm@-kHj_cAqz>Bg|ryUf+zMcrxJ&-&ISND^{K$r?sxCC(s zf498@*BvHpj!6(epbI$DLFoJezn1J+eMTs{;Dd114Sf~dN+5U@Oi7`HKax3=F_%0x z&Wk%*V^(o#!JO{_csHUqZTR2O(v4Utk9?=rzQcnG(ei5-r6dtF_+m5D%&tn2=t)D6 z%`!BSj)5!jMKa_oyim{lcpGU5k<~78v1LeAFZtZ;g)}zK$6!-ziKfS^Il`$RXS7@Q z(CB<`EW+#zq=m^xL2T06f{w7HCurTroyTa8pW!^3Gxi_^fcX44ZqgBuw0?wZ!UBD3BJN<6`A(*_PFzQXa3%;949&C{Q8`%}gr!rbu( zq66L^(aB1lsy{tdb7JnWeClG@QXd|?_lB?q5ac#WxTdgJEH9i(b(W z#W4`6yr25Bbe04u9juN<37j6gyh)=(55m9pqgV(i>HP|#47HH)nq6`WJZZVg@9PVM z$QVeD$Asrwq$$&(qxDdgg63Y?NJ*ZQk*8)Ao6lj~bu~wCgAHYdcuRE_TrvQj!ky4# ztyHtF8yN-W9$}j_#%j|q>MAxYeU@4$rxc4x&1-FC*dGamzvvv$crn_%y}&)Z8G?m# zikfazx(Go`I+t#&v+S&y4*dcxX;`VP+YPp)5ED`T@xm_We;m9QkXscyf@mYwXqgv` ziOgh?(^A;o zVLR%hM){dzmJ&k?<&czpnL0ZnB1tt<8`3F{g)uTY^sffvJu)V|_Rs}@0vpb)hY`)> zp5ia%bWV40Si|)di9Dehk4dpwT3g>YUzRJzEp-#RHw>p3;^v|amQtm04>!s9T|=9| z`a4ib*+6ncMiy#5V&ij43+|^?baSM3!z0EWk*1kFGFL)Vd8E*Np&EwOt340Lm`)n1 zG`!p&{>UWQPF#@|QyUy0%84;=^hIRLfv|RD!I>E63$l~Uu+QurFT^7bK++r2yLiOZ zd=u%M|3#mP!+!srJSW*b&6L4t)EcrEhcZLcc$U=%_q~I0%pD=i@i=2>1(x&cDO$9f z|3hCur)xR0^uijLRagWm6GGl#3+(-O#zT{#=WqOx$s4xSKYC1f4@C}B-HEtEqN%>j z39KQ;k`4@2_DfeVYeR`C@7BZD{SYbx+gKL<->w{K_{9Wt&8Y`lno@#O?y&d`q{8(L zkhYsoO6p>bBR<5ZVyPVXra6)Vjm1vqif@{sp`z@POKRwmrQo<0o#wz6i%q05w*pnq zIj!Gfa-8S7pVhJ=oIx4!{bkX0>5cdlS^sxI;;F?{Yd1e43U$c-z&*$U+G3?rr4jCI z-I|lW%zKm`=^ha?m(D4r<44IAKUO9aAa*{{YQ_6JiHy^$J8?)n^5n6_HDVjuRVULP z-p{bqlX+?YQut^!O{VM)KpdJEzrzA%+>jjC+$fc_Jp*j+dBrkfI_2Bxr5Rl=k;a5b zzJHZ@4rK1!i%lroP|7}EHS4|7lBVZnRN-7>uqo+OoRgLyf}`-r8O@0g%vp2+Fox)U zd2A1cL`x9KXOq(BvTSarqwHsDy+zlay_H3(OaSc7*@w{9}AT9GNOo%jb1A}>N z@_*$VG`1~pQSu%h?fv45)BNU-lL`jP>+W=~1_+dH*_@iE`{Xq{va95B@n@?wmOPf+ zd^$%m&9>!u8}vSpKZx77E(~k|>JxmVQCjlHj^#5k{K0}8Cz8w*45d64O#C8nmK@S* z>F+GHGUQzdmOtuY@zlCtf&00TS=Ahif(BNKb)Jav^dj8c!Y6_GpE7Z%E{6R-=DMX@ z#U)}dacI6GzmZ|WOnKglBm0oGf0x6wYL7+Bw}J_9P#<`WWl?!rnCvBWx<{@QtjJzW zMw=9PCyo*v<_b{zy&kAnSt$dZCY3Vdd5VqffzW%cu~`jkL0qZcjVie%V>HC8izG{~ zIM*cm-4wcx=Kj;nI180k_kqH-*dXTA>3{njVk5|~>t713`jiD=#jf?x3`! 
zj2U>nx}d^GSP$PDgt!AA%JvO48kT8+L8sq5VmQHqqp8GBW(y675DsGw1SgN$Z|WPX z$d5f~MN;IVWiptX3Yc}f7Cdw zjX#c>Oq0i7V^|H%j%*drmrEYldgR7ShO$Tyq2Y&t9;&UfA>gn5)w|!j@WObHsF~a8 zcy(4caWzi+dLy4e+U0kY9dF>7CDmE|JAR5p%YMswD(%__nl!B{eoL94F3=dyc85)4 zkjvwDP`OWSSKhv&(eXU)u&=R7kyf}J?4UFwaiRM6CAdfQA$9+LO8+Eac9X+r+4xCW zcvVBLjlg!sOQEYU+zxUAXA|4&1tqSt-orN~8kP}kY_+81E|20TGmHE#9k zIqMgw*?f#U{ZZ^zo7+0zImCpX%R=WRHu?&$)$09M!$PJX88oRGA~P=4Vb!S1A!6w# z@!vjVY|nU;cD)Ph=)wWZlcOqxYWxBu#rlsG)iOrOb(IcWg64`*rJ>qn z6jf%}XjGSZ&K{@ACe(%nHf)loa6gg$x0`1&^c|LigMk%8fPwM+x0|B~N<*Lq;5F$A zdV{0<&xntgf%>u%v@i4x_*DEig!p*&y-Gc_wnIaekYHFOW|Tq$Lab|8S;;aAaQ`^j zFdyIgu5A-ZejTB{HIjxzSMUe>I?2<;2-_@EC}U-1Y1R8?X}Ki~03XmSkyZh?L6xA4 z)g)1sj8Y0q_wArk6V1qoB2){~U&-zCrYD@+Yqvq9qoI6ao<7+C@GEVqr?UkcDqhRT zSo(El{7z66n`ka74Y%w**CjV^kS|pC&W=mWcjN9dHFva(sQWOxj*;$7C zQc_QG+_+{kgb|m`$S03TU7t9DYe#n~P`T|KyuPDV!-J^$fE$0iItty%2X#~A+FXb; zv1)4pf!eM^-Mj3Rk432 zRuRK6q1d&658~f$-IgI1t-SowN7;{%rPIy!-d{J68ox`)(^$5HS;$op^+iX&VTXze za3xAcWTeYNWB(Xuam9Xli$7ew?C-~5z-y0Ug_ayxV=3DQXvNM%yYbjQ@3gqvu58^W zJuo>J{30G#A58!F;@*+K3-c^s4=Jb5U<)mRPG-ukR=zv71tDv7S>c~~Gn3(#=Nm-_ zcEZPjEM;^xV>4#kACW?c>2wIpM^5FlBfn=Z{Z^hi5L(ZZCrY-V>sPN}h!;xx7Hxw| z_32HIMBWY$NGqh6sA7a;F4_=Wu@yE4OFY0V+y4QJa>D%$<#329WAdDozu*G)mtQi0 z>0Qa-QZ-QCRTh?^OpaI*p>5`nFMyKW_n=I45#fL-QVs5X7 z))E6+k=T_+yHl3(O zbBK0f>A31QSX$D=e+Xn%#&Y7R15I{g`(^SD;<0HMoJ+%XsLhiu7_7MA{XAxtX^G8?fb| z(KAH8A2a`e7n^mV8@PfdVvOckd;P}PbQ~i%#+DuLfc~u9?$@}%z7La@-`A-cE@CH% z#+ggIj56H7LbTsHjjMAugK6hJ8dH@@`J1?Z#_Alp6A1TA~=K%gq-%jP%vp?9;&|3Sn(sLdM7a-lYuZB zX9LUwPiB6+=i{8JHfS?MEL1O>W}JU2%~QHCv|(LK%|$qFn)k757kHJ2nj{)l!ZU@J zT7M@4F_33x2*LnT$YcLHjLah$^KoudusV|z2GG4b3J@XcV_fYHEfD~r5P+buk@1^T zSH+Eber#Ed!kgNQZU=*6c^0OEd7T%X5)h^4R~L`M-z}Ykeg%v0hUcv_%R(vYU!r*+ zWw?_Ux8`-+q@?H}Xva(19%Q`dNzxnSaa5Zjyl)rFlO;ZNUtBjgGWp@X!mh_LyK~J` z9i}y2W+j(jFtc!j2$9=oog7bVZZ5FF*eIFi1!bXykQE1x^9Q3nl6F(eS$Oc|j-I4( zsa?@C=bjdmRrF_Am=&~j7MU#WC-VPJ&^3sogixRvaaq8>UEyR5)g`n+WhZyQnrvY> z%wCSAYkdcSqejlh?x&~ZCy4u}^+C9B|5pnErzt7|3Te=@|0D05&#ceZE)e+f#W^S~ zj-!f;t-|tdSQ1Hz@lv}2gN<46OrjpP()TdT*fXS1PW#GtQLr2L#o$TgF<>^ObKu)6 zwWtqY9tALSI*kC=VOA)r^+bl|!uDw>tL5R37QxhK?*YN-Dk+N?u!i@Vw*)t8MjF{E zrmB@q=Y#=oc2|1iT2fS;kc1;WsN3VmBm_5x<9hT+c08fdq!-BKo?G;TKf*k!CDdJ~yj;e45OBIE z<`NPnU;cnm_~ADYj^BPG5O99Mo{^SEg@T|Hq5%+bJ?kkT%m_RyH~&yt*R$jzzS{MS ze?PeCb6(4$lp0`1QdC5B=2K9p!mP9k4PY{}H*s^z)$uV_U+MS^n3&$4n1K={>8F_*C))_6 zK^w%!=9K+{4>?4pn4E1s&Vpe{CAwV^;!!!WS{vd)TU*v@3&k0va&C8_YFFJVRNJd* zwDsaIr&a5KrSs1J4;8mej*0qfcFp^ebK`%a2b}PG847(ont-t%R}wZK0!rt1R!^@n z@xJ~A`(i|n{so8iuhJ*CyBvQEhJ;=3sDxGv4>5>%d5ZSch=yD^aK@YkaDF?7pafGD znMIoWQ0k4*xY!)5vT6z#E-ZIx3A$p2*A>gT_Lx>AnVhrg$Hm^t5?drC8BVO3r^MU5 z2Juf>E5N&B9zfK`n$VlA0A@M5`)Ubln4xM|yLacTA`>>X*%>iIY*8qLYs2-$s)|eI zjqw|75}?W~`6X5@#WfYmB`k|VDhWR9MR?VkV%-^a*(j<~zBIZBrwNMslm$A~I!n54 zHnA($2rbL1XL*C!WSuIl%888*ZQ7?4#Rpi}V^$40f*jL~Px zl+a2kZ5~0ksfA4uRBEVGrp_-lX5m_Q{?8cSqTs=!<6GK=D!qrHj)=)Ax=!U(xNDNu z`N|QU0uwea=l%~o19bRdQ%iXkO9?3fT)4Hd;sNK2tg)10u9e2Dc} zn9I;+td0(Ino@eTwCAcj=-kE6*76{akcHK8-T`X{!`%((4lbC%8S@|O_EF+9frnW& zN_Yjnb|kizmt7)>*{sgA-12Q{#>>&E)wY_;4mH(v2CXiD#Z#5W6jK$(YuZ0tnz-UL z7Dfz9rDWG&O6<(rs6}l~R2F?vG#X?Hw5bxt) zx}uVfuV@w6S~O;u$7Qn-pX)Z}!8IZoXJRmQZ%-4?QlJ#}s=Cx%M+BO_Ii$8j5XbtRyg_tBRfgq zFZ&zNv_zeBRasHc{|2Ew%m!ZmGuCY&Gn4!w;tMVQ^#Vl!St$Mpb3BypnUYbbR24rT z?z|^D8dIz^w0nHKc=n7no7JTXrM}d%eS>}?;e;H9qYq2j(TpQ^*LU=E7}4DAWrv7E zm8afmM+7!fw<^vWO$ok3I&30J^TE5Ib^Cq5ohrsHd8h<0kCY?aEz%mJbt4;M3sUWU zYk@Vn{8{ak!rU!)*?ABcTWNw8<|bUVS*(x&Q9=5xl}RUpbJw%PEhJ0Rm~-iX?_n+> zlN?i>(>s-uuSbt6m>1p7X2q8JF9Im<6qaSONJ?f_axt#6?cfBAt(ME4jORgf0>Qv< z+*x|fG@a9$q%6WB)qMU^waiMGtgLBm`|}z+cE}}o7^z?~;su>8%g5TBTFpG(lvIex 
zXo?u#=x{IM^o{Zk!>?ztTn${R-vune%3Vjs`fJxM_Ncb)_ok{|G`1zf&bJP_ztzNT zS)1Z7lGQ?Aj|P1CH2%RK1hX`n$OQ+u!fccfUxiGHBoK_9E7l_0G;SpR5Jx;qWQ-7( zAe9A9@X*s0@jL&KsHQBc3+(5KmJpO$(3*_s>GizM{vc6CxXYB2YK^Bx=u>JDxW?I2 zyJIrNVo}okO2;9%hw9KoGq4&*-jMNb{a^-u_@J$VJ#Xtd@) z=RESId|VHm@VdJmMOu z>f41LZX|7+B4Vf^G*%rMb;AoZz~WSR*Ki(1ohs_R2xGg-J$FebH3D^kWT}3zZ#!~P ze@W2c760{@nKj}vO%Sh2U8)!eW(uE1j8g3jHac(+yH3i6WI4KUtuI)qnipdQx@R6A zVDF@K|HL(Wb~E5UQ!DPees-j>1XjC8znSI6vAW4oqtJ7ruiGzbvH~Q(62im`KDPdei$s7E)C+{IhfUw?xp}4q`S#5l?xp7 zxWuO2qkjTh`V$>fr_FLyG(b-D?~yuwC0KgmGppaekJz4A(xuRq=!kDfJ}c#8XJ4bJ z_9-tTa(KOE`&32F{Born@k$Zvvl}O;JF7~A9Nd@Q{%Y;0Sm97#YVt)!?>*Z-OIrXn z?spsXp~J_S#nKegb`Sx;>9d?TcI1~OP>1dmW4wdPyBXmW9d-!r%ZPezjSyC;$gzIp zWLq4W#Q?KFxjkjerO5m_z5HWH9uyF9sTC5-tlxKA^+cql89DxP?onyDF^V)I5g|aT zv{(@;$r$rnA`~6TS0$gFCI+;#5t*zr%k4WbJqMuR!*Ayw+4Tc8g@w;S2@qlu@gMoH zkfr%2WE?4Gln)Q06BB1ds?2>TxHVIW$1f&$ShQ1)<>%a?(_fbMGbwG+)!z4#tPkOa zFFS1To|KEa>G8E&{c1vT>Bq3Jb88MBa!Y=ybANA`#c`i)Mkyr0{{zqS+VYHO*Y+n^ z^+y4GnY|E|SWgMSygX9Fh+56wHd4z>Q*)_ra$(_alBDL7)5bf$QgOY#;C=tDrF^RF zU8iA$-0FVMb1s$m2nlBkjeBaOWpv29yIl4sqjZOU$8ejzs*OS--OJD_GhWZ%*)yR);aLN(6SDy|8z?SvB2#D~Y`@A&m_otw=Q zXNpsSP)BWSLE|!%^dd7U*myTeUTNo9SbRF9H^W0;E_CP7bq=2iUoJq(zK!qHG?)TC z9v5~&wqFAAz%W^YKb7LYQ46oD=oKwk-aq?mVtUUePLhv*6>>%B?M$b>SEhm|TgY3L znym4wHH8u&Xg51FQzce7_YJ!&QV~kk>pcnY%nl$`dDgWZ%_$nqd?r>$>b4k&^r&Wt-AGFD6dr$l%nPZl@Cdjz zYYrx67uFKY<2jwnQ1dW zX$cu1+~a62CJrJfuc%7zVelY`+>_J%#QyMRZF_I4;&RIA6uS?=wei+FdGPFI{Kn;% z!z2WNX}K_OBf;$X!0Y_3N%gv|Y=W{J2;XQ22@aKw8NA|e+4(f=D!`A}3F%2>tLeL< z>Y{WflxKL?NK7$6o7opZLac0s`tf{vJR8{G9N#!@OAU;3{I*hOO}E$Y8MiqN|Ec_k zq7j)?%NcAG9I^Up*o5%ADH?UFbSi z$>;AYfHCtVug5VXSYCAT*l}f!3=rh8;8r9pol`ha^JD3qJbZXG{sLD=HXt5TuzfO{ zy&Ec%N}Syogc&GN{l@WiS3h0HPU*17t;4#p<&-04^;j`6xpwYKW;NWNuorLDO+VNz+X}aWtJvjI~o;B>9@+{NjPp~ z3`=;Fa-0UwSDc3qeDE3*SN#ws4rBgRHmp_F>(xUM`+iV6=4V8lH&Xh(R*64K+$&6$ z^OqSm#Ijh482NA2uM1QX>l9YS2IpwW9_tL*s7c7~<16cA+~jm9q*MvFe`)Q3RI7PP z<~?{HthMo*1_Pmfr{s^7JKN6e$lEfyf(hM1w!x!kC*9`%VgCq>P0T&ddLZSjH~=I@|mG2f6~`#M$BPm0hDi!xOv{vazLu zjKRjftd?hKSHHpD=9D^<^IXk+3YFXaGy~zFj*fQI(o8$h_K0c;cRp#4I=J2l39UhA zPIoo1u|xfn`8wu!;t|~%huMKGshvC2fil30)XWSipeLpo9nf|^Wte+J==KPdD_ zgjY?wGcSaO;fW0+Pm&qnRD7Q&_K6_3J)w2ZHKy_9Xtwj#95N9K|MiNimQb}2AjOgw z)2md;2r?I*yMi6M_hh!@P9f|kJaELi^LB*Tcl!xGT~99u{tq50VDxQBY}HdgrfZs3 zk+_w^J;GX%(;hZ!s?@Z{2E{=YhDB4#P{~c0*r>R?zl zPCUMpi}6HGcd}MGbsFzIMn(Gn{NsjlAaE^;W{bInlW2GM6N2&6AGzJ1SLiP))41j% zyMNuEkUz)I^u=PiTrIKyWu=rv1m&6dikx&6Ry)?X2BEk?X?8E0EC=O*m-i>3IMXmqt6&*0`z z=T8#36fWP`9mK+&*unY3!B7kvG4ll3AMLb53!UJ6)G0$CiRiU*J5=k~qnpWT@8n+Y z&tUYE0j}bQT-1uV;PpF8V88Wh`_0uWY2dJR;8(Rf0V}umUC3_Zh1wF#os`j`07L{< z&TCg%!{G!Cp{cD20h1exKVgNrCrC9Sbd?CxxMG{Q9!|RJ1mFA_d1LeSHV^aI4k4&(I+^#0cGfL;Rw7%A39B)e5 z3);`!jOa$pbn32!B;)K)2m?NMq%}(Q#Vm{}QW_lq^`(d+t(fglmZ}GrQI+ooLZ?#q z2=3;3q|Grr15F%`lgnHi?J0j)w$a0KVEXGb@!b`-cG#{dYoOHMPx^4D(i-I50ic3C z`sA&?{fp*}!Mm!-x@p~CCAkENqaXBtV;fI_DNPwxnE8dp7%8^sj0UPJxW#IMpN`CT6* z_$UQY5?LYHzmB>$fIwm0lonmT`BrKXj7z1arIDt?GDCH)it{%xu9Y-hmW;p@c?LP~ zsHx;BjjZ9Kmzr~$t;Rsg82>#b+&JL}_csIp;RVH}qpy+boj~{2OMj%};uh}8E6nGE zcHxAk_*r`rp?JYEm(6h&x#?aymn~AQe0qPFwkf7p{Vh#Et@-d)7iL05o#h)f(LhSV zzEH#$`YgO;wfpW@r?R>LRjnTy@oH4FfhD^JCYqagSEHX5xgD>-9e^Cu^lrqFgYyaW z9nP|a<>LADDH&v9>#zqg_w~S#5+l|*wm0VYr~H|LK@0rC2yvt6<34eS-Nj&*Q zJ@Iv6;2C~35N#xpOtZ;#XOP|?#Ub`eztyER*PfZWtJc1);mXu6Ty8AYzW(61cB@uX zPsEj{EAi|zf_=S(UoJfY|N1q>-zR&NK5NqBZ0`xsgVKKjoK3_Ah_L^b$hvRvn#3*O zA^vTfDOoT5$F^DEEnbs=ZO4Cz|JL0=2=@5@PvW;H$N3+KtGf%!I`H4r3im&Vo}gf7 zLO|EXSbD$!Qx7MVj0XtU6ODL4k%G;8wKx-FshQ6 zD0Ut(i;2|HlC!exkEmA^_n{(p6W2|(&YOhuGolShoN1Am%#MVPU9Ydjz{lT1%AkkW 
zx*&m&FY$*M82P$9P6wjkIIJ2$!E?alS#tQ4Z+Tf)#-?Br zr`s?*;_M8x?0yUrJ{B1-wpG@QimhtGaH@u-p8S(**gDX7kb2|v;2XJQlF20DWQO!K zsB6Jv7)3r}k@7P!s-i;sw{WUPojD;Z;`n1>EZ0FT2R-T=&ZG-@<@an=vyI$Gf1=G% zIDcz1LJak!HSM-RWpy8K#!7(8mP{2vSW~=*$lv|ao$uG=a-8Y0tCpK5knEvly;`TZ zQQp`S?{4)Mj+Mh@Yn1HTy1k3*dY=IT!G3$K-F_w_oygKY<*t9Ur1j+ub3*$efIx23 z_Vok5R;29tZ_q%jnh|-HcIzE#xxBxf8B{eFaj^bOt(J!K?%MgMiX(uNRd(r;v)pc) zalH^BRm$ylDF`K~C(Uj*>0YR*wY8k1ekZOnwM-TIRMoV6be;W>EkGOd6`!o?yFDZW znuGhbyV|Z|Y6UB1wdjt*L8ml_=oYD*fFCLcWPsdp&%z}e>V8qQ++CXVFP}}$s~sW1 z`{A+2?+^1v#PV~Pwd4Q=R5N?+lVOlxx!6~sq84E_>G zXihi*{J0+&qtT$Lzu|wP_fx$VKEM3h{poaMVy^PCfZXYDz@dVB5iKN|#onA?(R&Fw zFhA~P97v_QX)|dzU~9VQ2CYlO!FtX+a_O9MdqqV!hi7lHiwq5TH2^S{*IGxhR(_{r zH1zYQ*v{CI&hh5?u5hP)oEszNRWUNc3QaDlRNoSS$Z2Lj^Bf4sRq>zRzj$HvoU_zn zPL#_C?sJARN1t;FV;8f{B`aY2p(Y>5qT9pB+vO?B;4CFSV|GQx#p}EqXTIkdM=n6J zF7Aq)gc=aO5e+di5_-g=RY{r|T;&u$>93l8{vRq&D&Zx3c$u9u+6BP8qkF&}+MB+2fnV?UCMbMG@#lcT&a)}raSPV14q)0NI;li& z$FnQ`85=2RF=x(fR)%&GH9cFpVj-a5>sWvg4BV!&P7+%+(qImaxj6n-*`4H2TcDt}vXv=q}`9>9m&l_B;=}#&=DSfgams>eYF&p6?nR1878+6zipuT z0SMWgv4G4#h$(^sn}AZ;Qv;DHOxKqydTu7=&*Ddi8&S%f{>@#5RHOxT#vfiIo%Z{o zTs8t&uexkayzHybm#61VF2zAJ<60Q&beMZ;PXris_f@#_izg>5{=s;X4mw#`BtYY5>ARFYQASXqQ$;14Y8?J z!?q*(LwvQ7TkHKuzThf`BLn}3C}RW8Q2iAO42%Lq>cItUsbOg0eiFHD{9yRbN*5K< zASwff!Q7(xLo<$M9*Pm%p?Q;Og0#lb^USh%-u9-pN8ly6?E~(aa-FSZDc!RS03AN^ zcge_@cK1(jH0?gU7;t|I{r$@tRQxI<35LODI0OZUBbJfELTQG7GkGhL%0h0UDT$bF zu0QI>_A3#<8taZl#WvR;&~WMgDypbl7w&8@2!pQ5M%GShM#xR+zaC}iPa%R}qOv|2 zQ%r))WFj*f2u>1F6!wqwHObMD%i8RknT{7y?ylyg$o`5xL67(mtFec;rLl9=l)6aP zcG4mdN@KDzX*Y^BvpV7zcj}eq(U_#p*pEU zZ0$euh!rr@Awoo5oHJg!)Y5qcOy1c~F0W!=`BhVJ@5U(&mABuYzrPS1rq(f>4cZE6pVluo}otiRXZ4I zva+}!gdhGo+6=53eHGI`LvNKU1X2Y_*K*k-#cJTq+g5A5u=U>exwJryB{2Ka~D*0i4|$T_dVq6E&0$q zJsr!VP(5IC->Qvtd})&_sY@Z3X0T}8s6>^QyE1zBbgt>N@GI))ilkOeaF7|}v_jIZ zc%zGT`Dpe9p9V-hKS@3OE^Ts|pBzvK=B4Un1i88BhIZ~(#y(=%?sW~`eAg|h$Q9? 
zZD5KC$A_f^CWE*HGJ!0nkeIxLq!{ zrI4#i&3aY9cSI|i(rCCl6aaqHpRilSA9TJ&AGh&zgJ!|jB;BAAa$X~Et=8xrvjOWw zA8e`9fVq{5Pv*xuvG{jDEJY;fxvN0!w8V{BWG52&u`oZ7Czd=OQKDY;54Q^{Wpt0| z!Y(-y+I9)fH%1;8esRwXKee=fIp<*rwY~I<{@= zjcwbuZ9928>Daby+fK(;$F}X$-_+EpnW@@8Vb|XG^IYp%w?)D0iI*>G{f49GHN;an zsoF!e$uEM>^Ap^)p|1^yJvF>E>`z?1!r0GVL7Yc?zdr;W#pqUH$XJS;*{bDKOV)9M zrO;1IpftiUi5o;6PIM3!)V$8e5HJaZB|i{Ru|Rmip1?+-ILAykdN>tMf12)Hd#D6| z|F^5*`VKqM*L&=L47EfPXvxg~I&!O!|C9>0gF5DGm5_} z3@3i#SUbn_hD*f!2a?)%77|InoX_H(LU^KD15D(tYla8suoh3v}2o!512b zzK@J&Yqi`$a-{qxY*)XG(y^lgMfQX2md~?5>G6ioAAqKXg;Hu6JqZqP(Ql$tt`%Ja zvnljj>?@O1gP#}C?ZWon+LrT^(qd$WVcjR@VJUi{32$Oos-HO77p%W~5_L_ZP=sHn zquPOgRkKiZv$K`c)byS2DWFvs)-^8=LyqH*4>Q7d>^vu?Fw9G@QeG<@+7aqc@)7|Y zwr2^c1E%^k9zz~}3tRcv+l6h7MX4JQ!v}zV=8i-Q%^x@?BW^!0Iqma)W<$v5P4sEO zp;j8f6Jn={j0p-%V?AtiRU2*f!P8l4qU8W%Pc;$qrW}Q9{<=P_wTJBM(E6@WBc-zX z8t<1p4;CesS2lhE&LPV$9?9^x8|mDW09I9VR%?b5(*g9C$3usXg_3al*Q z!g}(VB){yl$b2Hc01F+$Cd`e{DcBo=L@Z4p6{u&GKH}*M{mCL_n8OlJy9XKY~ zl0^qcWk>mPo!M~ru5Tt15IS>Zn(RH1rkF|r;*P~~dy=Pzcre0Ytf-Qx6(k5vET-i^ z`x28NF~)IiDPc4&j6t+W8S3LO-KaknQe_c%-7DfZC3Ld>8v6PX#?BO?)XJ(@<0t6w z;z6!+@N^8l4NohwOCaf) zB~esXWb~_dP}B~%oRasx^>*lLuYW*%#m1LYGRG4*g#-S=hEyCy`~nep3E#c2`ztjs zACc|)oV9tJ#dpkQ@yzf2wJ`HTzR+}sO1zLYi+wiKBiWlG;`yd0fuHe)cp3+5unidF z0#N{iG%SIfrEwo@uR9bJBZG`alD6oyCWu&rCAnjAc!`Ry;ihYHa<@`#%{;@X=-TaE za}k~ZaKX(fQ!Pr(bmhQXM=y2)&1uA&5SU?1I}4L#U~F5eCOpJ3KXf0nRZGF7Lnph*TAPT-jEVf#T|L9&U79YssezB7?rx zoc@KI`_ZeJj0Uvb!aTxMDL^@>*jEXG&3-*k+qD#|b!VszQ3^qY%)^fIY6Tvp-$fLs z#a!B&sX^e??sASYJFzl*`_CG4qVvXTM?`T8GC*uvqs5KXegf;{;n)ZK4>JBExR@t|i2ZxyqSsVVaGX z3Q?3+A}qDfNHkb7O^(7~rv*kVvhyu%r~*RCbmCOA4kmW65T>T5HZ;vS9~=BL3&<)k zk0Y8E@zAT0A~??BHqQz!*qyEsdYY$VJq*Lo7-Ut{E63(|*({&_70uTDrdluXHW42O zsDec7#g%{N_d^tX zvgHrb0U7?ej_NR{^Bf?bJ%AJH&h&!#==4}dNa6w*wmB1z?- zMV#`I({HGzV@9_J+wZTRvA;R-tQi@lAc%)G%q?MU{}Ilu$incY(l{m7b%HoT>}A6A zb(8yYRM*568(3>rX!M`Mq-gqtB@9o!|3~{0TBrK9`48m!Mg8Lk@qY$_|K_-+w4r_f z!*TUYIy+Es5MvWDhB4B}JVB8df-0e)K$-T-fQnt_TB(-$;X@6F5Uyg&7&B{0oDzr8nqOtv5U9`hWprzXDq zz0UQC4@`XA@Vo;(vwjxaHX2cS>i$7SLZct>P|Ul@CU@Qp=asRrM}N}9xm z2I>--YlHCh2g6`OhyC=pV`V~%%ojDP4VCul(40gXloiTTJ=EG;v`zMdnYo?i>i8+4!fsgjjPq%%qy=+lI-py`*~r4wjRPw0_-p`)VT!04ambG*_epY!U@R1W1Yfi`IG zLIlq|DU$N2@=N%xgykz8QrzlQIyb&xgyHI-=~J%iF@&fy*&+>!Gc|?h>%rl}c+hl7 z9lG!>4g!RMzvz>H65t%!C;O;wRo!8L@F`d}t%!VfsU6x8ypiYD-hsmyPzkQ9^+=lO z<)@-;?e6klX{;f6z)|s*-NcOxC3{qK$sERw4+f6bHK+gVkKkIz_8%ID;1)nx&=vkLjUPp}* zlkUB`Q5CuuGM>r^&|KeWC@HP>@sV(OPMNNWUhmXd&*bsxXl-q6X>Nwd$ov;;L!`AW z2{p9=)5`PeON)!!ZG7_FY*+-%hE$u%D_!(7T!`wrt@vIXp=ep!MdJH~`fsG{;!g}O z4OOoZv73Dg-u{FkSKB3K8cOwdh!CRTzeDNloZDi)4a8f^S%SHnDh-P@# z3=tto?>p6T&>6XJgK2GWo5nYi-AzS%!-NvdR5L1;ZY2Z$F;Mz8C$xYa>y{ptMkWkN zAma@$DfUY0Q4fAaC@PUJn0SiwQ#2X6YVP@$td7Uks?|gKV^?I<&9eXf? z+6!|qgN3yKQ-H#1c}?29AW)Vp3u+iz9l6EtVdhUCw6#>7*I}-T0y7lB`-Dc)feU@0 z8xd!iDo1_bpQv*xR7vaJDLm=;%eg=Hz-B(Z>Zjp;Njn)vQ^`$QZ(uLd4{AnSbhCo( zu=M^qKM$oREijk+Eq)VG+dW681aq5J)&e8Gpr5yC$a*Z6tI04<-K6}CL=}QtE#gQB zqoJK}{iXrfOpl70$ONpR%=XN0i9}n!7Srh{(f zppi8<%Zk&}$=vugR%1pZGbh_uWwp1su{@5UV@*%DPUHrbT9EyV6}VM(Zpz6x zMK$5|TcVF0ch`Ro0YekEGnGclq1fj5C zM5ImG7H|FqqDN^rll6;r8(4J<@U0xl-Ca*BIi@ zJyCy)5^tSr#@Uo-H2XyR1IwQ;1v&WP70oZ>Vd=`TnK#$y3dPT6nnB7&O7aZjV(p8$ z9lb%8SI>&|mJ@MzaAbhgDY5Ir))U*Ccl-JAYXni4=K(a@`a$F^9yo${J1F@*hPOz? 
zQa+cPULFU-!s2?A(q@1a)?0R4_>m@lJ@13_8NweF4L-Ou&%pMDLOA;}@qr%QOA|8s z3LTw$P|Es+L#X;<6`i|VYx6<3oxz$e8Jb<5XL3g#{BS*I@|7pKIQJ4}^{@hU7K$cs z$vYm(#y>E_6hB~s0~DBXjKU0ZKL!d{h#$yDWsG<-l9_anNns0q3$ij_th|Wv(}biV z1%H9ybD!4UXD(-dG1uS2AI$A)w(neK{t-!Ms<~|qI`-3nV}M1+Lv7R;x}kx{dO_%N ziolC5N;PkRWrGc3VJ~f!4EZo05-5-`2{D*0GMKBK6O-|n2fna<(Jy=fY&y8Ld6r*& zjD;H>H#ajsnD^fsReA&hDYj}u^B>aYLLvHdr_TZ<8!FzM{`AUwm-G_$5WruB!Diun zm&675GQ;LSZ1^CSZw{ z?F;#v6PCft)jP(!{{VBgo6RRwgpDU~_ba2hqde2onN@3s^wfm3t*=SbM%08Xha1!M zO%I`a>@;>*LY#zX;TqDG@U8|396WMA*|af>$Ki?8JS}F$Z5yj$ zA1DSE+;)=Gn7x***92_yXEfunRaw;05H5lI z{1leLzz}@Ysz~6pW-QK|r+X;c6K&)!8s(1aF7c(d6@$f3iM((pTe52JKNtVWUz1T@ zrvlQr?ll^AlA)0z$Ip$OaQ_tzY>-GjF~VzGV6F*N2ro4{3%98+T%4*@X3V&dk;OnXszzVKxn+$vCN0YxRCjKzF8=OuJ zG3&M%(nsE)%R2pVq1Z@&jJgxfq2J%*)c@V``%329`6LnetQNTcc>(jvJ9 zta>ip5u24z|CkzAlzbYL>0)*S;PuJ&pzbbKxAD*zB-b?9MH_2np*J{JB0Vad@U~As z0byxK=&)?$JQ$(wbrFdqm0l&Gf( zUm0!k9R45`OZNNvk(p~Z%2CBW$zm>Jl%H(05l1+JlmL|c95s7t#O436xqNUxe7nm! zUW$93$ib4p&vd56q^IC-iww+&4yqIxuH^$_6I{eXp(nKEU=VprM9=6uGO% z2b_&|^XZd16nuXzfExr!$Z*#&7lEJC#^#IO{;$-BH&d{r2PeS$eKXUTE zrE03qYL79pJ#M$3>wO$KkxO5hQ@w{B9Rh(q` zp|--mi{f714WuCQ)2Ef3<;a=?Bdm(ankTvANYoPk>oHD5VIj^%1-A0a#u_4ut_9%t zUzIR9KgT4@tH%>PLLIU+Yv2>d?l^E!fXgz(FDq&J1;M1Fs0NL(mT27c@Cs%$|9F5x z=dqj!xYeLR+>7l50npK9=1Gi}dG-{~5XYp%{?HR9Q#K7sW_Fvc()4^kS%QwOTCuF#Okskg? zDTHEhm6Bq^{g(f8+_2`Y-S8MSwQTc?cF*Auz3`yAj8}Al7FdBe7X_1hk2FuWP%m9tWL`s>G=_3<|AL1la9@G-Wl=htDo_rcMt9puvD<_F%yRH zeDa}hL$_VpexK_mhgt~*Vl20hS}I**0j|Jz8z4ssp}iZV@x5 z)lbyW6}MdZ2nK=HiHQq-(_nydm*k|md+Tf(b(vJ3I}NmhTB?V3U+ci>6TO_=XCghv zv=s7_NuM&usFY`Kji~wY!eVfi`{#377-`tU*1CUR*le~e;PGdX(M03-w2xIx5H>B0 zki9!O(6AU>qHIfUiGPWSML?($A!<9c`gCgv+WMwFCjQKY{N&=yq^X))=ol~Widzf` zuTcCTjpF(*7#l3KPS-hOqp9Y#-7v2ZuJ1UN9j`VA;5j&=ZI{d zE=edsB^x}9a^5uW#8`VUBtlFO8l{y9G2r>*qvKb)pA*CsbsN68RHb7>^uV z@#0sT?LZ%zBZg2(P}ris*yBDj8;~piVLLF>{&I>_+}Gd2bawh^UxQZ+kTA=8pR01U zILzt%(Vi(EoVnqUTCGC-SovTQZUC>rWo$vr5&o@Ef{XQUo^vPgMiECUZ_Eb;vQD3u zg?QyWJ*fwlg4uPavSK5xBO$46-6zJo2889FDRs<7;FE>A@@G-1JpaslBwBn&;`9nK z&kuW3vehV@E%8YXQN^*5EUt%mqsAjcEY+BwG4;mu718P~1l4pGm(HCCc;Nrsn5PWA zXz+qvY~d5aY}W941iQR>;?mCpet}`*t8IbQ6Sx|0p5GvV|EO|NXLRnz%I)~zP8*anwVW#0VE0pX} zXxu~5O`rgx@_gXPA+D7!m%833gNLlLGqI{=>@dex=&1xOe!_3eEL)|9L8@BM+0V-Q z@=0I4N?M|SBucQhFBkhHxD(ualV2lG#AO`P&on~uw(+UB;yBdhJL84 zgP}j(lP9Xu1BUeIx+@5ZY=>6I;PM|<$uEZaf%~|PZ#C;89PpFvB60Nv^EZz*y5>Uk zcf?8EHVdv#O7kn@mmf0OEEfLO_1M4T0P7_kWclVG<1=uoG3&mj5AQy$}CLvh>UUn8}GAu^VOb zD*hui_j$xlL@b>B9~m0}qDo)ypR+vS-?bXae_<$fPmqalPf);Y4KH6b4gBw(3#JKE zP%El@TuWT)78xQf;dT@RSm+9#OocC)H#GjhMz-b{^ zIs#?ht~AKZP*xyfOq9AQ8D2c{KwVHFv#F`4Q&g`#{4P0fZ<2@p$l!z*=k%Z*bAo4Z zlzmr1==$*vLRUnuK471%Em7(!!HGj3ci=qj;gPs2z^q$oB=g36%S(p*s$VDl#;(iP zhuqjxf_%@D5|Na6V`BYz%oB49P?%dSxyQ%br7`d>>;KZgfRmPK7z zkngmIot&w)?H1TSNogNtr%0Eb3cotNA}X0kqccwreH0y0PnS2;Dm9}lfYyn|)plJ~ zSD~JYc>;R7=nzxihRQ5Gz?D@lS6z;gfm`#Mg;-M*@QE5ygb|;>otzfaf@7pE%o~;9 zFEOUs7uugqn+NcaupnbV=5NqkpOf>jZJwF=fPdJjXBF_*s&F*d$Tk=>dQ^U9g=Wah zPv|c#du6XbG^T<|hgv0gipZ6+Y3sB7rj*GXvjPf870{_kq$E>l%<(m}fB@LZb`xtZ z%CjTM01TaRNkL=s%xvML23fl#CpP^>^D)`$F(XZ?^rOzo&7rL8q_xa!fE@eQ5V=9+ z{h_wpx)gOpo6BNN6+GIda{M{CfqyCZrBta4>0hZPvbGjZ*HPuyo2ArQdpd+s^S$ab zC*b1hECSnMgXBw6kO~R`kqxDsf($3X(ThLLUfkme(VA(QuT(-Its=2rYBS{&3>-qy zZHT$0I$gdgI`Gfg_$ukMCN@urm(cieN%$BP>-uDxQ8BbM535CAgv?7*AS6?TA+Eh@ z9dag`?a{7Rn@~T7t=epeJNSA$xmIum8ZgQzwQoQ<%zo-bT(=QDYCkG4)G_7muPHqB z3tVmMej^OQNKoL+?Ty#gJ7`bYj?7C#VC_yNN>9noIcMn(?n_CK1ez~wEX@l$1(~lz zzsD_KfS`(WGd}qCDB_u zy%(I%SdOZ_D{^4Uw(2cWYkCX{A1Kc|JToMEVockjgS6^gTvgABxzilT5_u zFAj(pa>_NOe5lRAYO;`7L09yQ!+}SMIW~HWEs+xe8Pkj6p@mBO4Hv#ix{)visvb_o 
zL4VV#%VTspK>=nwc54J&)dH_18)TIz8I3Y#wcXrtmYc;C&GR%_9q^hhQg%g5$|aZb z>*M`dSRyfpcF?-0)GJRi?;gcD_A@`H*izrr6%~Bkr%8EQA4&nF`<9)m7QkPig(Jq= z8T1v&u4?4we&FbxOp()BD(jQh8k0pWX^Guq_cg{_ckOgDIvEq7rq6gXJUDuX{8KHk z3}Jc)=^|fN)(w4U=UVqAS~m0{Lm$lCP*T+T*g?nOhjGHt2qW$Jip-eFN&;>MQob6O z^==)}q{E0oX2lMLT{@MyY{0xjmkld&6nf3bzU)pzb5M_$f&nVjkmNLGJNV2Nr>7FP zgP#q%B!2cZhWRfC$4lhMv=9iC#G=g#k3H)Jsb*4I7{acXjhT7_yq~v7^OU#-c11V5 z+eqQ!IEo8;f0fSc{1>wCAgWA$;#ipUecfGl>Uuo9N$vWS?MQLp>jTf%QZniLQ-p^M zq$eh5Esel&zpgEsXe03c3NiU znrcrc5WSLhAn_=%i04`84EHB}I|1_KC~V+AG!vr7S@yITy;6w|MkuMWvc?RoAU(wn zY$G8=k<20?#E`5qLI6)e3aa{GE#XiEBo|<apV@dKp=(WY!L~cVC2vWO6;znh!)s0A7J4vTzBRPjf}N*YD6s^Z|3&IL#h)Y` zJ>*f3GUhL?xD8`1%L#`4CZSe-DBQX>KDVdKcY-Ae+QtxepxkW@Wo(XB zTvxD^o}FfN-{IUncNboj+6Y=W87UNf#ByIiZNpSB$^~kM@z)c)53A!Ltvg0?hKZ0s zN3)g>i1)S;GC3hQ-9qtI&zK1EBIxmk3*9ongp{FRJAAQQ^ zKgfVwPxBgaMfgP7?@;?eH@f(uL#QW(Dgj1LgiC86dwld){Zp#h{Ji96j<5&`N%xwU z(g-_ZL%?FBZV&gcg8X@TmIK&TN@BA7cdN{R)u+ikA` zni8tBVN(i$2pSIgNaS58qKg12MLvuAFV!_^b>6@%$V6jHJjYWvxQ=@yxMw=(#JeAh zv-+@Iz4m`E13vx_8ykQv@Br=e&o_qs4`&Db|EWax2^E-}r1u|FO~|%Yt6s;ZaP{_! zTCt=Kj1Ei&(lC}(7=;*vb;izS9h{kw)tG(d7pX(rtZ2d9kPO3z(g;iAI-)c@tK7`w zTJ|&7^;-JoKi|>^sA9j#PbW09GzCSfu{xp!F?=&FdThzO0+aN9OHbcXbOxQ!K%<6^ znQfN|E1(u`3~kPAu-;W0zJe{x_A$J6JhXaS*IJ#m-Ha8xWimmHF{1(7C%ebyBcBmr zSrc`aTk3ny4fgV##Zs{KgpFnZ^!e}6*q9)T;G_+r=rhanYJ3`IHMfjCw_`^gPlwl* zO5JALtL66NKfGOKgzj?Z{-Z6_Mw(N(SB_a{x4=^Q1uFrQH0;1A%L;k(bsq#Zywbqk z7q?xm&X`rBup`)OEahhQRz&t!rPZWL@tZK>&F3d`AA?o9kkfCIMT({PVh$Q127NH>(_iM;R-q$chX)9*!qhqe;w*hH z;V&4p{=dJp%IQ_2CU|TyG%W2QDjn6B;MNYPgyW-$(BzRpIgr;3C07gRr1#O1bE&3t z1!+^{kv``KP0PSEp>cEs0|CYfS}^JGrogljfZ!BaR(ZNDyRd-wol!a_>$p$(o`z*_ zzD;hPSnR2=YoCo-pA8Mzb}7OXBO6#3_>`e;ZaoD zB%DCq+#a5}L|7pvs=&pir@;q){QVF$dCs}rQ(js&CBhR|tucsI^ZUaX36$NU0Pg?h zNi1Z{wpY=9`~U)e{2==8_Y@vjt@WQcdP#t?v1{yR6%-_7N(tJO{9*zupBM}}rz3c>`M(j%-Obg_faT2=*ZS%*GQ$eX<~AK0kLp!z8=doIH9NJQ?{)`s zW+?*DFMi1@ziWDn&s>+8POp#OKG&25Tpu;L?suA2FvZxEQ=V272ZcarNfsr2%GdHB zN*mr`v5!c|kAvfKW{(6`A%WbU<-9n)_snEP7O+c+F9YVb+?f)N7RAb--nmfb zUhfCxCfXjlM6^aq^Z0I%`?W&328{50E)uI>?kFG>w!W&&>z3V?exe=Dyu6ZZktnPo~(Nv6)j? zlZZ20QKW;?S_3uI9jdk+rz1K>;Lp!f95xhNTXwVDmT=XX&=+$pY&mF1#jCm-=A?s6 zF4bfQZT5s2%m@fV%`(oK>g>h5_1j!=)Fs$Nqu3>a?x^K)mk(?DOMv0#8ZumR&_z%4s zI?DSqgMc(!DKWz>yZ{Ai0?FbbYHzGPXZ*UQETFXws^*cM-t@s4V<#g)h{HPCU>8-O zX@ne+RZ9_yQDvty{4BI(4ODBU6l{~O9q8`?x|I%1i*ZoOg)r$RHjlyg5HrTZ-(3KZ zg&<;OfJQ=CAugcMY#paiWmk6JhV?5@+=umxWTEZSoMdA;+ou2VmNU+6gs3`{<}xgm z6)0bNctjD#$d3s}J83FIvSIbUx``t%xUR(RHz`4LJ~$5NpZQSXi57y(!D!gQ!y<%= z*W@%NSf8qH^-=84wy;|{Bpu;$jESgrkrqMP&|G%O%GqyOuPiPqL@t1>K&~x|jx;z~ zBi}g8tg{rwU~Ga?0g-*1RM{ah8(2)*3PjMYm@_e)$Fh&bR;K|uYK#%G31j<_RAjZ} z4&sx7<(Qow7Lq6x{$psz&{~mIp^lT7$Nu-Yt)2r;wL7z%cJ*yi@^GbbH*-!HPD(LG z8>ys!-ojdxXpYIGTX%F-$WRZ_kO*Dm~Liv3gqj^VX7tRn9xQ>npnHIrn&xU@aox|1)WR-clfqs6? zGIEtO&wwP)`({_c>>y~v*tN4HO5Ij%9?Q_6abp?# zp)d*ZwJupw0BW9$6O$md%yAu5$%xlj}q)jcbI!s?*V5!3C;6sLF$w{UVp_^ zB-Rii3m;KiLZ`}d{h|QErjvNEQ-%qFbleeqwM=jCUY~vNeML-r+&stEWldm31M}_? 
zU*)r(@ASC0r=BGn6^;pdgpT#S<4-phlDqmhlu0=Ysklg2aX1qDftnjI1$)g=j&zno z9rbhp`&{%~j^=gje(I3bo^|k)LbDeG^#rE~)6+HQIpH+L25rq#$3~jRpT2^}KJ5y} zgtvOK7Wismtk-0kL%-)@DFUXSZ$EJe7hggA?l0 zHTQ!bsM(19MH}@}!0{5t$nm{?&&-8o!%UFlmKSwB6p0-?&my4kdT?3ir?mcZ?L`5Fr<=+!JW-_yqf=#XjU>#_k4^M_VBm}{JRiZT1z?{0s_VfZx##TXuf*x4# z$ON%^Sz&l-HyV{E$K_PODK|?|7btVTwHh5l`s^!m56GjEK+kfiu{x)<`8gJJhlG;a zgT?ll=)g$|TGhSDL&0?HJu0^VwSq!_fUBnQz&2h?@xCS$Z#2!GCHm?QSBoARUI6|a zr!>XN!0Wvm@Qd0QUNvX#)#9USS^%>z)};n69TXw7m9O-k;U!i0L}UKL`y1L6uHkE62<*vw=ewOynCau?ETQAv?ge2o4s<$$_Tuqv9NyG zY`lvLhNRzix1v;3ug7B1z}kDlAtr5JE|!o8rTtV_;1_9c)ore;fADwJy}?Uu+LqBu zDtGTVaN(8BAD_J11iN@a=Obj=A%b*!uh)W(x5$0|l`x_6yZpZQ#Vq?9aFxTmifsdl z_8ZlC!RadLWmjM?rPa+l^{eup=R+^=_9XPjLezd$sG`3`33-{4!!*}z>|4}z@DBz`Sb;AjKS0vDxo_Cx%YISfR2bx&o(*`vYY zOFg}hoXeO5{VIg^Wm~dK4OzGNn@sKB+f2D#AZRIFUl^ibWdp&)|%N*G&~6K zWQckUOV0IOoPqH+lS)46* zfu?p%3j-^)t#a9LXRy{#_0=MP3o8E>_gp~pyH$)a{J7N$U5rXjQmjA>-zN`10vio=>HOB@6ktb?4c;{gq*xoY)Rg zXWjbWfJ5P-9?1Ge6*m`V0x7MMq4zuhu>IE9eGOZ{fWBA#uZDIZNI$W4P$Dfe8~ue< z6R1)t!TT37ow!gM4k<=#zIgQNd=#yFo2SxT9GXmITYJn|ui3mimJ#eDB*IVhOs;fa zU->{BPh6h<*9760=+j*3?Ub=Hg@P7+y>I{*F&SdJW+GZUg$r!DO6RVsO1P0HkQ>=- z3z7i#$ufDv&kTm*k?xJOwtuLP#z&-cmFuO>&tAX9HP{ z49&csH%>l}bH(Z>!;-i=))?C@Fh~oRsqh9t6FxFsPyNlUF0;W)^!L&Q?%0Nl?wkff z6gDz7JT9rT83&D3Q+m4a+}2^VeWSzQpPzMj&cD^uZOT9}-6RHkEg+{U66H&>RfRNyORw5 zN_>?B8Wo<->CZpU-OM-m6(y%!lR8)p^W)TOIKA-x4L=a$!R4W~qMHhUbFreCDpaxJ z*Zci%JrjjrK#ycmjwpq?0*P9VH%hxM)+dTyByzmDz^jxi~+UM8EHkahI*sK#1m4?tVIcvCl zp^9aCf=lugfc9AVjad&e-1Jl9q%vXVA09HE>o&3UmDd=2>3T;YjC4=}EDHrs$X%WNh-mDbzxj9^|w-wVnUG39wyir9k?Jk$I1Aa1m|Pv-=_ z?YcMGJi3Gw#3SG1D){|0DGp=S`$Y8_z)H7+%~ly&Vu#>qT4s4quD|7SU}ePdDG$Yk zGNJ_mlB(2vrZ;Yz2`odpbT4cEN!tOE4ZE|K?TX@;U_M(~mPJ3*qsMqOyxJUTJy#oO zg1hr~%F%`eZymbL?WZFqh_!~{1LM}xzp!uTfX{KwQJsi5j?S1t`{H zeJx&j&h(oyesyc7bsw>e=OyoqwX_rC8=q;Y>(OK@K4^<4J44TwI8_oP&%aMO{qUlO z_}I`x%Q~96QjdZ*&{5J{YD?GpPP(SR@7apgwM-Imf5HNBGI`OX<@zehuYbXsveQpu$Tev%NKj?x6c7Y08d<&d2& znl{(whpHJTQ9{r2K0ctLPkNLuj(z{^*YzMQkeJ6I5Mqe$?1;r6T^3G=s&HRGah1Wz zd)$C=+WiE=ULesuY=~3JyiC=LTsLhNr9w9SY+{YQh~v@aUoD0d{MJvyriP12`@u08 za*Mej^v9KO;B>c2#$v$|+#jCyc&_jP3+< zG7~|ALeT`Hh=Zak;9E5pU>E}V2`7m!ei1n>CO;s64>WUx14i|>9PQ6fb1x+gfw7Ou zk2v-xg@q`q~RJ z@WiI~&2^95tF!AO7VN>kp_aaaw3-eQl6@tnHLnid%mj*gxC2L z*sCN!0+_v^**8mCD*u9=DgtgD!?2pr>C=3dXW>lYqN)k2!n0bjvpS=;;E{Ykv}J#| zCm+z`!$i$nR-zXbm^qfZlbO1I)b0d>CGICm>2+=WHzU<2#_1K8=OJSmU##LMzVHM1 z&Y9Scx94AHGUJ8Wzs}7wpsi=h;%Gs;E&8j0ze{FkK;#KIt06B9Kmo+uzx~1A2o#^@ z9s2s=JGgUlR1ASskpZEqjD-;5Z$A0~-ALuP9$IxP2{RsGzJAz*`=#QAhdAANbJp-y zL0g68Sg1s8E|~Fjf5XT#Lm`Ep7#HhV_OLw@({NM_FG`O(%Zfv_&nI9e39LIweL&xV zO9=u(M;b{_`xk(JHvf}PAM_%xyOmVdUC6vjvW5y~bSanF=2~h2QSzE%m2s!#Y~2b+ z4Sq9(Un?^9TpYEOvF9-|x?xS2^}bv9zZ=FY4Ien_e4@qY&Y@a36nVR$DZ9iPZ+d=h zO+)BFtAEAohj_s%7X0Bl80b+hyNN&ncGKA0(Y!S>x2JhcY=Ot~zE&!C zKzJfu`_r{|#Bv`zAad-`2`I$*{Aw9PHXc_A%6C^q*?%cGRc|o}1x>Iw!NMR-c8W4cK zN_N=qbVVD-(L#ty=f?i@$9&T(TXZ*bS@J%>EBH2g@;ji(Dv`tx%aVXy6Ps9-Yre== zyC?))l(H}9&pAfqE977AAvYyT%$0hvUJ3)b-YVM!)4ll|^+lr{$8G9ngp&wsE4i&< z&;D@i;J|N?IL>UNPuEW4d|JN&CEg@EGrr0lb7J@Q4IUiYs1qMAvcLP~o#fc8knnGd z=hm+mNTLh+9+4Sm;K9?IELgom6-ghuL|(s(dDX~FW5c2ipXon2{6sgxm;$WkEIFjx zLzq|Hofkl#SFA+6f@yZ>S6vR=emT*gC}?~$m0By)lQAgd&Ee&Z@nRnWN6|7yR9p~l z3#vxM>b?+@M#P}zIF^X*G3DrS0$g7G)Fd~HU1xk0BC`Fim585V`Nky(`&`^@myA=h z2(+!RW9$roN4yD++0bWCpqWJM zKC9<-Yd*f2Y<*K(c+wU?qS}hFERS{Q(afb`n{!a^*I!jX@t^T2B`vUa>`9IB;piP7 z``cddoG#spOo4c>4N1OvrYqF^nkzLbzEFfxo7j|FF#M#RrWrGWlt)~Rua1;3nRr1r zR6`}dnO4)El%j;Z!tAS#nO+2uiaKxl>re^d{FB8FwjX)nOO3xE|C>f=G#;t^4|IV1 zjNM2%HV*PXsDr%S79OyFtzKG$|LAJ|BflV+zyJbqm(cl40HhQ9;KNdN^{B(p^>kwx 
zQqW19MWQX?=yW`f;@Ll@C2=MaC0UG9hEC<7ZD0ZK%F@p%S0&`i5wO13+lzPS-7F?! z?9KrU4p%-d+kD449>>$~&!vR^a66p*F#A~Q+Q<^|p-YUax(JDG!J#OKX2>`U=e*@% zo zp}UO_Tl2T7Uw;l%Wxt-7^jH8|jTnas)les#0z5AE#|dl_Tc9o33OJFj?=bV1d?ReX&{F7!92WY`*8016b+gjexZc` zq987rX@jwG^ja!z_~I%9HHu|AF&6x_&y2zp!80iA`PoV^jX@9wRJnR(SYTti9+{yR zW|VUG7DqR=XYjb-kPheo8i!oy5#FD^D^Ru`bb(wZyQ~E}(%Ksm^tYAaHb+QkgIhr)~~ueF}r#wJBuy`}2{ci~c* zxZ+gZHMxSl!Ww-&qQi}4pk5%aX{;^0A_&J`>V>jY&!p|-xL7XR9%$i^d_&A$rrA{@ z#WsHE$Sh5csWw$stvB~z6T|&ounB*}!eyVfo^sT+_cYu_i{RgXohm#O;*N2WgCEUs zxv1+=WtD&JdXz+s7%~q78qzy!ajk~Q!6dUidEmIP)eD#{c!yWQb>Z?xO&!FSW5uH6 zlIf5g5>?Sx!hloj0|p#zyNDoKooDTR<@z5yMMT}pOrqwSbe8toh`>5&IQnD=un*}0 zEHz*n#CAdJBdz4QHE(7E>!UgwI8v;*>UZo>Vs+5S7ARR?|6_6nP1rS7~2o3@Y2BcbO(xi8hq7*@@ zNG}2ckuC^-;QM^!y~$cLH|Oqsa!+O^nRUxP5ejn?uA@~SO{&F`T;jT7CxnRSye&na zC`?x|D$1j7Fr{g-W}<`o#H(ZX`tHxD(@%x|KJT?Q%KcU7(0++&%nYgXf;Vy(Xl3(t1^nAlA(6uF_UJL z18TQf?ASl#kOJgx@T7Z_0knOE7zN1mPEhx|R zOlou2Yg2p?1**3XPf zh;Y-Ezi_2GZTZ62Mmb*oER$-CvNqS-UeR+-9Y6ZMO6;Y>p0%|rs8bN}$3^(q6L$cI z{1^!DN3;GWIa>7QM0361_K?>;3eJ>#Sd3^;mA$M#DN3L;-y29y{y5oaNM`-W^juteTN-66In;gFY3{66|JWD)H~#(9pXVcd1f*lT0Wboi(!02 zz^_BZRZ3}e?~Y+x1}kA=x(4*N;T77>=iU}|z9e=23=e59;RU@}VE{BZn0&AotSmSW zm}qH}Q_fl|D9ZOBnIWsjTxt=IJUJPT@I;pHw|j-mTP`3sIt3)u9A*cMT{nCxmjSj2 zF}h(m(IG!upB>63nYDS}DouV0**-*lPYzYqzaP$kOrxluPUQPVBJgp>`4`8@P$ST_ zb3CPwzSQ^A4AH57P^PeCa|kFMWdP3I`~L`e{VpjGGsK9tM)s>P-0e|w8KqnF+hd?* z7|qa7HNcPVg%p^0Su{dDQxedvFLq+x=`zM|I@MlqDqW~u zQy%Hx?3uXUfNw**twwW~k3f;Cu@n~5d>1bje>k&iLCW4`7&1xxQ9)|uQ#?IPOeHMT ztY3I+;FSp4QW8awmE$v4%C-DTgyV5^DiPPSE$eqi$9Ar~=^^7* zcq{1C**O}R^O+kK+$U;~-7t75!wQ=*c3~CDy_8<3mmZR9P9(vDZprWRMZT0a-uq-MDr$`UYi&O5 zIeK781o=F4_v}=)>q1Q-{84u?6J5n{u5)!wR7o3DXeWf;^!B(P(_8~t!;@=aWWae%n3@M%3(+nfE|G!{-Kevdw7 zRy(@dK`53aEa5fV&!BP*Z568s1}Zyag?D}Pv52}G{L-{-H`F4?G&Hq8&A&T3DCb=I z=tcc%Vd*86-+B$@jFw0vo^R`Cyd1w<&0RWTd9{k@d56|URuZ)$iWw>8tKR80^+Lah zxZaVBO#-%uGGyKIllgayP~xBNPOZgtiNpYt?^EzNFtln%Wb%1lek>tQ#az`GuXB## z^x2q=Za}<$G=7t?)pO|XUrs)`>WU~4j27H9?cFN=sEA0pYHbtwg&Ln1PyASoZYD3N zm~Os8FE;Jigb_~LsX{7Y(ed+PyJj@x(ufe1#r(x4sOY`MaA6-+Ze{S8pW#?et=&v$ zJ{=xY3|qE}-w%$bh1e5Hr-rCJVs9rD)ea@@xTTu32g8ThGf7t|!vv@`=EI51>JW5stuKXtPIzYX zsq{}8_P1tf0b}HNc=?QYc>E{Np!K_q=*Ol3HVku&M|vfxBb0!R3^xr~wQZt8t+eQ9ZT_bumwcP4Js-*iN_D+&f6mhPO-b9Jd76x6Nl*RSp8 zmam0Ku5{ldK^zV5#qHEj)J*Jr2aW?fLQxfv8<$=*fbu(SN};Mn=s zfSef9$aPm}k)apUwbvOCQpOKW_|PKcc_vcV`y0xKwU~=hyOX~T0tk^4%+0C(!IEpW zkxrgeQyz-Sjp=Wr*3Ys}bj*?l+?I>*!|u|czp^z4`71W3N2&XZJzTTw~8+q-65tn&7Pjf_bu4o`rY$tK2zMlPP`=KSJYT4Cwd z<2-`hPV|6|V9V*2WS)%F7C3zOu*l6on3<6jd#k_dzhiJ^W$b+MXLT!sOPA3ebk3*#kJN@CAP z-hi_?pKRApE1R2KpB}ZfZ!ZHS9nX&i=&?$>NGdqSfEkz^EM~sXB*{GO8{09ni0dx* z^lQ@#&$Dre zD7m;y4S^mK8RJ#Ww9~@`fBTX;=u=2aupOZq=c#a!J{6p!(k~oRnvj=v4lQuLJ;On} z5?1|zHvaSX?WRL+Qf^O3K1#rt=Yf$iOl$^epPA#9MevEWOacY7bIl6Dl#xNrJH4$_ zg-2B{@s(DEe|Il>jCFygCt%HQ9aY{8`V1d;&J$jzo9;W}L)csUhh83mj$f!*J&1o~ z74;-W%p=QXG&?FQsjN95|tBwH@%33;F=AIU}-zN0XyLCWyb+gfx$QK)eoesp& z@x`-fCdiY;0(cdVU<2sBoBq7KPk_E@gw+hb&+=Et{Gr;Q4DXgL zo33`Hyr%ACg18ZH$>yhnPwg*sy7FDLHtUQngkI$8n$^L-v>69OuitRFp3{xOg&^9dudj@iV zIM97p6tI0R#vu8e$Z{hTf=+!<<2^QFe``!+yz}dqA5wY{tR-VZ%i7h0FiDO0`9OQw>GJK?W}k`C*%T@4*t9jzt?$7ce5LN|%XNRfRs8n&ZKSV9 zz2{T&yPcWqLGR_z&w%=>A%jtbqen1G{*aHBrJnSH)XD+a~ZXQq!{ zGh&8yt6F%dQGEI}`8tm_Mb*ohMY=!!Aw{JJdefIV*;XOs>H-ahC&NwL zn20q>*!li2gW8T=6QyfZv+kt$2k5J~oGU8NE{%h0d4-YSu6LR6?71+^SjHnQICm%~y^HU?Te=@w z9?$+fJ6-*Bx}GB0PI_)onjuCyrPX5Py*z7cd4#vp^L+ICmdd!pJDv>PF|?g6&TXYw z)~O{VbDIyoxE(3Nw(SX$$7&qx&LGQv8ZMZno7=E0#p|5RA+b3ZZy8~(X?N~Id0n(( z$R2%~ej?(1uUw>%vlGwU07Ic7i&&(*FxQ~YJxl&pOMxs;8WtG^v%+htRuNK{bi@XE 
ziM~-LwcDdhue`?l9!I<5ZwREevTHauPsIsXZ4o7!RobzbdZstM;uN=5iggo|EKD|D zREINa3(Sb#zUn1e*at^Wy-gF^Q06M(iFPV>yr)=Pp@2zgmj7uT#H>sGuuIdxO~(Xj zxEh&Ql3^|qS%9&R5O|jAY^7}{6jsVk`+Uq+b|V1Y7Iq^$;JjCC!+`97u6ph_3M;H>UBVSUKm2XyNGn8m^vQ1ZPjGX51?b_B<# zoBA|PaO}#S)N8sFzseL@I@yh(chd!Y#bGS%6K-`JT<Pjb31^_dz`X294HV#6Q2LGW zU*H2v84K6I{n3ImxOZMWE+ZJBz~tgFiv1fB++$gxK=e1oQ)%rMcU>BQ&)+OhnJRs4 z(gEcq_~s{#P9!g>mhlQyc69Jl@XDj;0*6!aw7BnYhpB`76uJg2Fjd^-obbPI4;+>Gzgq(3!ph*)3r@EFvt@@3dHp8Cz#!LtzzI}upBNqn()=d?97WMT zK8^$6_WMmD2ZDll{zE(w4g|Yf{sK>~6b`XG=ns)Jl=(kI9LX{q;!E^zA{iA<$lxMQ zIKjI|e^&yYDG<`X&rbtbRSuj~?QQac@xr#?1hr334A(MP{@{W@bLcZ5V@vSV6^FCb z6Q?Pz7+@U-db|MR&-x1-C}|#`vMqheE6x;n9O%cFpjXy^1=qR_Czz=}Y4G7T9T)-a z4xHd&(}{t-iwZE?VL#;@XPgMfxlhY)3jLp;yFgDhoxu8z+$j$@{bV>Ex_SVwA+i6Q jFr01y9D;B^02>p87$Atj-#$D%S?~p?$HUVbKYsT=?Xdg@ diff --git a/gradle/wrapper/gradle-wrapper.properties b/gradle/wrapper/gradle-wrapper.properties index 774fae876..3499ded5c 100644 --- a/gradle/wrapper/gradle-wrapper.properties +++ b/gradle/wrapper/gradle-wrapper.properties @@ -1,5 +1,6 @@ distributionBase=GRADLE_USER_HOME distributionPath=wrapper/dists -distributionUrl=https\://services.gradle.org/distributions/gradle-7.6.1-bin.zip +distributionUrl=https\://services.gradle.org/distributions/gradle-8.5-bin.zip +networkTimeout=10000 zipStoreBase=GRADLE_USER_HOME zipStorePath=wrapper/dists diff --git a/gradlew b/gradlew index 65dcd68d6..1aa94a426 100755 --- a/gradlew +++ b/gradlew @@ -83,10 +83,8 @@ done # This is normally unused # shellcheck disable=SC2034 APP_BASE_NAME=${0##*/} -APP_HOME=$( cd "${APP_HOME:-./}" && pwd -P ) || exit - -# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. -DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"' +# Discard cd standard output in case $CDPATH is set (https://github.com/gradle/gradle/issues/25036) +APP_HOME=$( cd "${APP_HOME:-./}" > /dev/null && pwd -P ) || exit # Use the maximum available, or set MAX_FD != -1 to use that value. MAX_FD=maximum @@ -133,10 +131,13 @@ location of your Java installation." fi else JAVACMD=java - which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. + if ! command -v java >/dev/null 2>&1 + then + die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. Please set the JAVA_HOME variable in your environment to match the location of your Java installation." + fi fi # Increase the maximum file descriptors if we can. @@ -144,7 +145,7 @@ if ! "$cygwin" && ! "$darwin" && ! "$nonstop" ; then case $MAX_FD in #( max*) # In POSIX sh, ulimit -H is undefined. That's why the result is checked to see if it worked. - # shellcheck disable=SC3045 + # shellcheck disable=SC2039,SC3045 MAX_FD=$( ulimit -H -n ) || warn "Could not query maximum file descriptor limit" esac @@ -152,7 +153,7 @@ if ! "$cygwin" && ! "$darwin" && ! "$nonstop" ; then '' | soft) :;; #( *) # In POSIX sh, ulimit -n is undefined. That's why the result is checked to see if it worked. 
diff --git a/release-notes/opensearch-alerting.release-notes-2.11.1.0.md b/release-notes/opensearch-alerting.release-notes-2.11.1.0.md
new file mode 100644
index 000000000..409559669
--- /dev/null
+++ b/release-notes/opensearch-alerting.release-notes-2.11.1.0.md
@@ -0,0 +1,15 @@
+## Version 2.11.1.0 2023-11-21
+Compatible with OpenSearch 2.11.1
+
+### Maintenance
+* Increment version to 2.11.1-SNAPSHOT. ([#1274](https://github.com/opensearch-project/alerting/pull/1274))
+
+### Bug Fixes
+* Fix for ConcurrentModificationException with linkedHashmap. ([#1255](https://github.com/opensearch-project/alerting/pull/1255))
+
+### Refactoring
+* Handle Doc level query 'fields' param for query string query. ([#1240](https://github.com/opensearch-project/alerting/pull/1240))
+* Add nested fields param mapping findings index for doc level queries. ([#1276](https://github.com/opensearch-project/alerting/pull/1276))
+
+### Documentation
+* Added 2.11.1 release notes. ([#1306](https://github.com/opensearch-project/alerting/pull/1306))
\ No newline at end of file
diff --git a/release-notes/opensearch-alerting.release-notes-2.12.0.0.md b/release-notes/opensearch-alerting.release-notes-2.12.0.0.md
new file mode 100644
index 000000000..1ecdeadd7
--- /dev/null
+++ b/release-notes/opensearch-alerting.release-notes-2.12.0.0.md
@@ -0,0 +1,27 @@
+## Version 2.12.0.0 2024-02-06
+Compatible with OpenSearch 2.12.0
+
+### Maintenance
+* Increment version to 2.12.0-SNAPSHOT. ([#1239](https://github.com/opensearch-project/alerting/pull/1239))
+* Removed default admin credentials for alerting ([#1399](https://github.com/opensearch-project/alerting/pull/1399))
+* ipaddress lib upgrade as part of cve fix ([#1397](https://github.com/opensearch-project/alerting/pull/1397))
+
+### Bug Fixes
+* Don't attempt to parse workflow if it doesn't exist ([#1346](https://github.com/opensearch-project/alerting/pull/1346))
+* Set docData to empty string if actual is null ([#1325](https://github.com/opensearch-project/alerting/pull/1325))
+
+### Enhancements
+* Optimize doc-level monitor execution workflow for datastreams ([#1302](https://github.com/opensearch-project/alerting/pull/1302))
+* Inject namedWriteableRegistry during ser/deser of SearchMonitorAction ([#1382](https://github.com/opensearch-project/alerting/pull/1382))
+* Bulk index findings and sequentially invoke auto-correlations ([#1355](https://github.com/opensearch-project/alerting/pull/1355))
+* Implemented cross-cluster monitor support ([#1404](https://github.com/opensearch-project/alerting/pull/1404))
+
+### Refactoring
+* Reference get monitor and search monitor action / request / responses from common-utils ([#1315](https://github.com/opensearch-project/alerting/pull/1315))
+
+### Infrastructure
+* Fix workflow security tests. ([#1310](https://github.com/opensearch-project/alerting/pull/1310))
+* Upgrade to Gradle 8.5 ([#1369](https://github.com/opensearch-project/alerting/pull/1369))
+
+### Documentation
+* Added 2.12 release notes ([#1408](https://github.com/opensearch-project/alerting/pull/1408))
\ No newline at end of file
diff --git a/scripts/build.sh b/scripts/build.sh
index 5a173adfb..9f0afc3da 100755
--- a/scripts/build.sh
+++ b/scripts/build.sh
@@ -74,6 +74,7 @@ distributions="$(dirname "${zipPath}")"
 echo "COPY ${distributions}/*.zip"
 cp ${distributions}/*.zip ./$OUTPUT/plugins

+./gradlew publishToMavenLocal -Dopensearch.version=$VERSION -Dbuild.snapshot=$SNAPSHOT -Dbuild.version_qualifier=$QUALIFIER
 ./gradlew publishPluginZipPublicationToZipStagingRepository -Dopensearch.version=$VERSION -Dbuild.snapshot=$SNAPSHOT -Dbuild.version_qualifier=$QUALIFIER
 mkdir -p $OUTPUT/maven/org/opensearch
 cp -r ./build/local-staging-repo/org/opensearch/. $OUTPUT/maven/org/opensearch
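The one-line `build.sh` change above makes the script publish the plugin artifacts to the local Maven repository before staging the plugin zip. A quick, hypothetical smoke check after running `scripts/build.sh`; the `~/.m2` layout and the `org/opensearch` group path are assumptions, not something the script guarantees:

```sh
# Hypothetical verification that publishToMavenLocal produced artifacts;
# assumes the default local Maven repo and an org.opensearch groupId.
ls -R ~/.m2/repository/org/opensearch 2>/dev/null | grep -i alerting \
    || echo "no locally published alerting artifacts found"
```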