Rename all SSO references to SS4O (#1470)
* Rename all sso references to ss4o

Signed-off-by: Simeon Widdis <[email protected]>

* Fix corrupted png file

Signed-off-by: Simeon Widdis <[email protected]>

* Fix misspelled directory name

Signed-off-by: Simeon Widdis <[email protected]>

* Fix missing substitution

Signed-off-by: Simeon Widdis <[email protected]>

* Add deprecation message and alternative

Signed-off-by: Simeon Widdis <[email protected]>

---------

Signed-off-by: Simeon Widdis <[email protected]>
Swiddis authored Mar 29, 2023
1 parent cde5a2c commit abcab27
Showing 28 changed files with 97 additions and 95 deletions.
14 changes: 7 additions & 7 deletions docs/API/swagger.yaml
@@ -427,7 +427,7 @@ paths:
creationDate: '2016-08-29T09:12:33.001Z'
status: LOADED
assets:
- name: sso-logs-dashboard-new.ndjson
- name: ss4o-logs-dashboard-new.ndjson
creationDate: "'2016-08-29T09:12:33.001Z'"
status: LOADED

@@ -458,7 +458,7 @@ paths:
creationDate: '2016-08-29T09:12:33.001Z'
status: LOADED
assets:
- name: sso-logs-dashboard-new.ndjson
- name: ss4o-logs-dashboard-new.ndjson
creationDate: "'2016-08-29T09:12:33.001Z'"
status: LOADED

@@ -502,7 +502,7 @@ paths:
creationDate: '2016-08-29T09:12:33.001Z'
status: LOADED
assets:
- name: sso-logs-dashboard-new.ndjson
- name: ss4o-logs-dashboard-new.ndjson
creationDate: "'2016-08-29T09:12:33.001Z'"
status: LOADED

@@ -541,7 +541,7 @@ paths:
creationDate: '2016-08-29T09:12:33.001Z'
status: LOADED
assets:
- name: sso-logs-dashboard-new.ndjson
- name: ss4o-logs-dashboard-new.ndjson
creationDate: "'2016-08-29T09:12:33.001Z'"
status: LOADED

@@ -574,7 +574,7 @@ paths:
creationDate: '2016-08-29T09:12:33.001Z'
status: LOADED
assets:
- name: sso-logs-dashboard-new.ndjson
- name: ss4o-logs-dashboard-new.ndjson
creationDate: "'2016-08-29T09:12:33.001Z'"
status: LOADED

@@ -603,7 +603,7 @@ paths:
creationDate: '2016-08-29T09:12:33.001Z'
status: LOADED
assets:
- name: sso-logs-dashboard-new.ndjson
- name: ss4o-logs-dashboard-new.ndjson
creationDate: "'2016-08-29T09:12:33.001Z'"
status: LOADED

@@ -643,7 +643,7 @@ paths:
creationDate: '2016-08-29T09:12:33.001Z'
status: LOADED
assets:
- name: sso-logs-dashboard-new.ndjson
- name: ss4o-logs-dashboard-new.ndjson
creationDate: "'2016-08-29T09:12:33.001Z'"
status: LOADED
'400':
12 changes: 7 additions & 5 deletions docs/Integration-API.md
@@ -196,10 +196,12 @@ In case the user wants to update the data-stream / index naming details - he may
Selecting the naming convention may also display existing data-streams that the user can choose from, instead of creating new templates.

Once the user changes the data-stream / index pattern, this will be reflected in every asset that has this attribute.
- update the asset name (according to the `instance_name` field)
- `${instance_name}-assetName.json`, this can also be extended using more configurable patterns such as `${instance_name}-{dataset}-{namespace}-assetName.json`
- update the index template's `index_pattern` field with the added pattern
- "index_patterns":` ["sso_logs-*-*"]` -> `["sso_logs-*-*", "myLogs-*"]`
- Update the asset name (according to the `instance_name` field)
- `${instance_name}-assetName.json`, this can also be extended using more configurable patterns such as `${instance_name}-{dataset}-{namespace}-assetName.json`
- Update the index template's `index_patterns` field with the added pattern
- "index_patterns":` ["ss4o_logs-*-*"]` -> `["ss4o_logs-*-*", "myLogs-*"]`
- Note that the old `sso_*` pattern is deprecated and new integrations aren't created with it by default.
One can also update this field to use the old `sso_*` patterns.
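As a rough sketch of the pattern update described above (the helper name is illustrative, not the plugin's actual code), extending an index template's `index_patterns` list might look like:

```python
# Hypothetical helper: append a user-supplied pattern to an index template's
# "index_patterns" list without duplicating existing entries.

def add_index_pattern(template: dict, new_pattern: str) -> dict:
    """Return a copy of the template with the new pattern appended."""
    patterns = list(template.get("index_patterns", []))
    if new_pattern not in patterns:
        patterns.append(new_pattern)
    return {**template, "index_patterns": patterns}

template = {"index_patterns": ["ss4o_logs-*-*"]}
updated = add_index_pattern(template, "myLogs-*")
print(updated["index_patterns"])  # ['ss4o_logs-*-*', 'myLogs-*']
```

In a real deployment the updated template would then be pushed back with the `_index_template` API.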

#### Loading Integration

@@ -238,7 +240,7 @@ the current state of the integration:
"name": "nginx-prod-core",
"url": "file:///.../nginx/integration/assets/nginx-prod-core.ndjson",
"issue": [
"field cloud.version is not present in mapping sso_log-nginx-prod"
"field cloud.version is not present in mapping ss4o_log-nginx-prod"
]
}
]
6 changes: 3 additions & 3 deletions docs/Integration-fields-mapping.md
@@ -37,7 +37,7 @@ This capability allows for queries and dashboards to work seamlessly without any

For example, the field `request_url` can be connected to the `http.url` field with the following command:
```
PUT sso_logs-nginx-demo/_mapping
PUT ss4o_logs-nginx-demo/_mapping
{
  "properties": {
    "http.url": {
      "type": "alias",
      "path": "request_url"
    }
  }
}
```
@@ -51,14 +51,14 @@
This will allow queries / dashboards using the `http.url` field to execute correctly.

We can also validate whether an alias exists using the `field_caps` API:
```
GET sso_logs-nginx-demo/_field_caps?fields=http.url
GET ss4o_logs-nginx-demo/_field_caps?fields=http.url
```

Returning:
```
{
"indices": [
"sso_logs-nginx-demo"
"ss4o_logs-nginx-demo"
],
"fields": {
    "http.url": { ... }
  }
}
```
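A small sketch of how a client might interpret such a `field_caps` response once it has been parsed into a dict (the helper name is hypothetical; the sample response's type details are assumptions, not taken from the doc):

```python
# Hypothetical helper: given a parsed field_caps response, check whether a
# field resolves for a given index.

def field_exists(field_caps: dict, index: str, field: str) -> bool:
    return (index in field_caps.get("indices", [])
            and field in field_caps.get("fields", {}))

# Sample response shaped like the one above; the per-type details are
# illustrative assumptions.
response = {
    "indices": ["ss4o_logs-nginx-demo"],
    "fields": {
        "http.url": {"keyword": {"type": "keyword", "searchable": True}}
    },
}
print(field_exists(response, "ss4o_logs-nginx-demo", "http.url"))  # True
```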
10 changes: 5 additions & 5 deletions docs/Integration-plugin-tasks.md
@@ -32,7 +32,7 @@ The catalog API can be queried according to the following fields:

```json
{
"templates": ["sso_logs-template","http-template"],
"templates": ["ss4o_logs-template","http-template"],
"catalog":[...],
"category": [...],
"version": [...]
}
```

@@ -41,7 +41,7 @@ The catalog API can be queried according to the following fields:

Using the template names one can access the template directly using the `_index_template` URL:

`GET _index_template/sso_logs-template`
`GET _index_template/ss4o_logs-template`

---

@@ -123,7 +123,7 @@ After the `_integration/store/$instance_name` API was called the next steps will
- During this step the integration engine will rename all the asset names according to the user's given name `${instance_name}-assetName.json`
- `${instance_name}-assetName.json`, this can also be extended using more configurable patterns such as `${instance_name}-{dataset}-{namespace}-assetName.json`
- update the index template's `index_patterns` field with the added pattern
- "index_patterns":` ["sso_logs-*-*"]` -> `["sso_logs-*-*", "myLogs-*"]`
- "index_patterns":` ["ss4o_logs-*-*"]` -> `["ss4o_logs-*-*", "myLogs-*"]`
- if the user selected a custom index with proprietary fields, field mapping must be applied ([field aliasing](Integration-fields-mapping.md))
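The renaming step above can be sketched as follows (a minimal illustration of the `${instance_name}-{dataset}-{namespace}-assetName.json` convention; the function is hypothetical, not the engine's actual code):

```python
# Hypothetical sketch of the asset-renaming step: prefix the asset name
# with the instance name, optionally extended with dataset and namespace.

def rename_asset(asset_name, instance_name, dataset=None, namespace=None):
    parts = [instance_name] + [p for p in (dataset, namespace) if p]
    parts.append(asset_name)
    return "-".join(parts)

print(rename_asset("assetName.json", "nginx-prod"))
# nginx-prod-assetName.json
print(rename_asset("assetName.json", "nginx", dataset="access", namespace="prod"))
# nginx-access-prod-assetName.json
```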
---
- **Success**: If the user changes the data-stream / index naming pattern, this will also be changed in every asset that supports such capability.
@@ -164,7 +164,7 @@ After the `_integration/store/$instance_name` API was called the next steps will
"name": "nginx-prod-core",
"url": "file:///.../nginx/integration/assets/nginx-prod-core.ndjson",
"issue": [
"field cloud.version is not present in mapping sso_log-nginx-prod"
"field cloud.version is not present in mapping ss4o_log-nginx-prod"
]
}
]
@@ -199,7 +199,7 @@ After the `_integration/store/$instance_name` API was called the next steps will
"name": "nginx-prod-core",
"url": "file:///.../nginx/integration/assets/nginx-prod-core.ndjson",
"issue": [
"field cloud.version is not present in mapping sso_log-nginx-prod"
"field cloud.version is not present in mapping ss4o_log-nginx-prod"
]
}
]
2 changes: 1 addition & 1 deletion docs/Integration-structure.md
@@ -210,7 +210,7 @@ Let's dive into a specific log collection:
This log collects nginx access logs as described in the `info` section.
The `input_type` is a categorical classification of the log kind, which is also specified in the ECS specification.

- `dataset` is defined above and indicates the target routing index, in this example `sso_logs-nginx.access-${namespace}`
- `dataset` is defined above and indicates the target routing index, in this example `ss4o_logs-nginx.access-${namespace}`
- `labels` are general-purpose labeling tags that allow further correlation and associations.
- `schema` (optional parameter) - the location of the mapping configuration between the original log format and the Observability log format.
* * *
12 changes: 6 additions & 6 deletions docs/Integration-verification.md
@@ -10,12 +10,12 @@ Validation of an Integration is expected to be a build-time phase. It also expec

- **Schema Validation**:

* make sure all the input_types defined in the `collections` elements have a compatible transformation schema and this schema complies with the SSO versioned schema.
* make sure all the transformation’s conform to the SSO versioned schema.
* make sure all the input_types defined in the `collections` elements have a compatible transformation schema, and that this schema complies with the SS4O versioned schema.
* make sure all the transformations conform to the SS4O versioned schema.

- **Display Validation**: make sure all the display components have a valid json structure and if the explicitly reference fields - these fields must be aligned with the SSO schema type (Trace/Metrics/Logs...)
- **Display Validation**: make sure all the display components have a valid JSON structure, and if they explicitly reference fields, those fields must be aligned with the SS4O schema type (Traces/Metrics/Logs...)

- **Query** **Validation**: make sure all the queries have a valid PPL structure and if the explicitly reference fields - these fields must be aligned with the SSO schema type (Trace/Metrics/Logs...)
- **Query Validation**: make sure all the queries have a valid PPL structure, and if they explicitly reference fields, those fields must be aligned with the SS4O schema type (Traces/Metrics/Logs...)

- **Assets Validation**: make sure all the assets are valid

@@ -25,7 +25,7 @@ Validation of an Integration is expected to be a build-time phase. It also expec
***_End to End_***
- **Sample Validation:**

* make sure the sample outcome of the transformation is compatible with the SSO schema
* make sure the sample outcome of the transformation is compatible with the SS4O schema
* make sure the outcome shares all the transformable information from the input source sample
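A minimal sketch of the sample-validation idea, assuming a flat list of required top-level fields (the field names and helper are illustrative; a real validator would check the full versioned SS4O JSON schema):

```python
# Illustrative subset of required fields; not the actual SS4O schema.
REQUIRED_LOG_FIELDS = {"@timestamp", "body"}

def validate_sample(doc: dict) -> list:
    """Return the required fields missing from a transformed sample."""
    return sorted(REQUIRED_LOG_FIELDS - doc.keys())

print(validate_sample({"@timestamp": "2016-08-29T09:12:33.001Z", "body": "GET /"}))  # []
print(validate_sample({"body": "GET /"}))  # ['@timestamp']
```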

All these validations would use a dedicated validation & testing library supplied by the SimpleSchema plugin.
@@ -37,7 +37,7 @@ In order to simplify and automate the process of validating an Integration compl

- Docker Compose with the following:

* Component (Agent / Exporter) responsible of transforming the source format to the Observability SSO format.
* Component (Agent / Exporter) responsible for transforming the source format to the Observability SS4O format.
* Pipeline that will be used to push Observability signals into an OpenSearch index
* OpenSearch with the Observability plugin
* Ready-made sample from the original signals that will be used by the transformation component to produce the Observability documents.
20 changes: 10 additions & 10 deletions docs/observability/Naming-convention.md
@@ -39,28 +39,28 @@ A data stream is internally composed of multiple backing indices. Search request

The index pattern will follow the naming structure `{type}`-`{dataset}`-`{namespace}`:

- **type** - indicated the observability high level types "logs", "metrics", "traces" (prefixed by the `sso_` schema convention )
- **type** - indicates the observability high-level type: "logs", "metrics", or "traces" (prefixed by the `ss4o_` schema convention)
- **dataset** - the field can contain anything that classifies the source of the data, such as `nginx.access` (if none is specified, "**default**" will be used).
- **namespace** - a user-defined namespace, mainly useful to allow grouping of data such as production stage or geographic classification.

3) The ***sso_{type}-{dataset}-{namespace}*** Pattern address the capability of differentiation of similar information structure to different indices accordingly to customer strategy.
3) The ***ss4o_{type}-{dataset}-{namespace}*** pattern addresses the capability of routing similar information structures to different indices according to the customer's strategy.

This strategy is defined by two degrees of naming freedom: `dataset` and `namespace`.

For example, a customer may want to route the nginx logs from two geographical areas into two different indices:
- `sso_logs-nginx-us`
- `sso_logs-nginx-eu`
- `ss4o_logs-nginx-us`
- `ss4o_logs-nginx-eu`

This type of distinction also allows for creation of crosscutting queries by setting the next **index query pattern** `sso_logs-nginx-*` or by using a geographic based crosscutting query `sso_logs-*-eu`.
This type of distinction also allows for creating crosscutting queries by setting the **index query pattern** `ss4o_logs-nginx-*`, or by using a geography-based crosscutting query `ss4o_logs-*-eu`.
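The crosscutting patterns above can be sketched with shell-style globbing, used here only as an approximation of OpenSearch index-pattern matching (the index list is illustrative):

```python
# Approximate index-pattern matching with fnmatch; OpenSearch's own
# wildcard resolution behaves similarly for these simple patterns.
from fnmatch import fnmatch

indices = ["ss4o_logs-nginx-us", "ss4o_logs-nginx-eu", "ss4o_logs-mysql-eu"]

# All nginx indices, regardless of namespace:
print([i for i in indices if fnmatch(i, "ss4o_logs-nginx-*")])
# All EU indices, regardless of dataset:
print([i for i in indices if fnmatch(i, "ss4o_logs-*-eu")])
```

Note that `*` in a glob also matches across `-`, so patterns should be written with the `{type}-{dataset}-{namespace}` structure in mind.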


## Data index routing
The [ingestion component](https://github.com/opensearch-project/data-prepper), which is responsible for ingesting the Observability signals, should route the data into the relevant indices.
The `sso_{type}-{dataset}-{namespace}` combination dictates the target index, `{type}` is prefixed with the `sso_` prefix into one of the supported type:
The `ss4o_{type}-{dataset}-{namespace}` combination dictates the target index; `{type}` is prefixed with `ss4o_` to form one of the supported types:

- Traces - `sso_traces`
- Metrics - `sso_metrics`
- Logs - `sso_logs`
- Traces - `ss4o_traces`
- Metrics - `ss4o_metrics`
- Logs - `ss4o_logs`

@@ -75,7 +75,7 @@
For example, if the ingested log contains the following section:
```json
{
  "data_stream": {
    "type": "traces",
    "dataset": "mysql",
    "namespace": "prod"
  }
}
```
This indicates that the target index for this observability signal should be `sso_traces`-`mysql`-`prod` index that follows uses the traces schema mapping.
This indicates that the target index for this observability signal should be the `ss4o_traces`-`mysql`-`prod` index, which uses the traces schema mapping.

If the `data_stream` information is not present inside the signal, the default index should be used.
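The routing rule described above can be sketched as a small function (hypothetical helper; the "default" fallback for a missing namespace is an assumption, and the document does not name the default index used when `data_stream` is absent):

```python
# Sketch of the routing rule: derive the target index from the signal's
# data_stream section, following the ss4o_{type}-{dataset}-{namespace} form.

def target_index(signal: dict):
    ds = signal.get("data_stream")
    if not ds:
        # The doc only says "the default index should be used";
        # its name is not specified here.
        return None
    dataset = ds.get("dataset", "default")      # "default" per the doc
    namespace = ds.get("namespace", "default")  # assumed fallback
    return f"ss4o_{ds['type']}-{dataset}-{namespace}"

print(target_index({"data_stream": {"type": "traces",
                                    "dataset": "mysql",
                                    "namespace": "prod"}}))
# ss4o_traces-mysql-prod
```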

