
[Kubernetes] Bump package-spec format_version to 2.9.0 #7144

Merged · 1 commit merged into elastic:main from zmoog/upgrade-kubernetes-format-version on Aug 10, 2023

Conversation

zmoog (Contributor) commented Jul 25, 2023

What does this PR do?

Upgrade the integration package-spec format_version from 1.0.0 to 2.9.0 and address all the validation errors reported by elastic-package.

Here are the adjustments needed due to the new validations introduced with the newer package-spec versions:

  • Remove the deprecated attributes license and release from the manifest.yml (see the sketch below)
  • Remove duplicated pod.ip definitions
  • Update datasets in sample_event.json files

We need this upgrade in preparation for the introduction of routing rules, which require a more recent version of the package-spec.
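As a rough illustration of the manifest adjustment in the first bullet above, the change looks something like the diff below. This is a sketch, not the actual diff from this PR: the values of license and release and their position in the file are assumptions.

--- a/packages/kubernetes/manifest.yml
+++ b/packages/kubernetes/manifest.yml
-format_version: 1.0.0
+format_version: 2.9.0
 name: kubernetes
 title: Kubernetes
 version: 1.43.1
-license: basic   # deprecated top-level attribute, assumed value
-release: ga      # deprecated top-level attribute, assumed value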

Failed validations

Context

  • Stack: 8.9.0
  • Integration: 1.43.1
  • elastic-package: v0.84.0

Changes

$ git diff
diff --git a/packages/kubernetes/manifest.yml b/packages/kubernetes/manifest.yml
index 45ddd5c33..15343b610 100644
--- a/packages/kubernetes/manifest.yml
+++ b/packages/kubernetes/manifest.yml
@@ -1,4 +1,4 @@
-format_version: 1.0.0
+format_version: 2.9.0
 name: kubernetes
 title: Kubernetes
 version: 1.43.1

elastic-package check

$ elastic-package check
2023/08/04 15:44:30  WARN CommitHash is undefined, in both /Users/zmoog/.elastic-package/version and the compiled binary, config may be out of date.
Format the package
Done
Lint the package
Error: checking package failed: linting package failed: found 4 validation errors:
   1. file "/Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/manifest.yml" is invalid: field (root): Additional property release is not allowed
   2. file "/Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/manifest.yml" is invalid: field (root): Additional property license is not allowed
   3. field "kubernetes.pod.ip" is defined multiple times for data stream "pod", found in: /Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/data_stream/pod/fields/base-fields.yml, /Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/data_stream/pod/fields/fields.yml
   4. field "kubernetes.pod.ip" is defined multiple times for data stream "state_pod", found in: /Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/data_stream/state_pod/fields/base-fields.yml, /Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/data_stream/state_pod/fields/fields.yml

elastic-package test static

$ elastic-package test -vvv static
2023/08/04 16:04:59 DEBUG Enable verbose logging
Run static tests for the package
--- Test results for package: kubernetes - START ---
FAILURE DETAILS:
kubernetes/state_node Verify sample_event.json:
[0] field "event.dataset" should have value "kubernetes.state_node", it has "kubernetes.node"
kubernetes/state_persistentvolume Verify sample_event.json:
[0] field "event.dataset" should have value "kubernetes.state_persistentvolume", it has "kubernetes.persistentvolume"
kubernetes/state_resourcequota Verify sample_event.json:
[0] field "event.dataset" should have value "kubernetes.state_resourcequota", it has "kubernetes.resourcequota"
kubernetes/state_storageclass Verify sample_event.json:
[0] field "event.dataset" should have value "kubernetes.state_storageclass", it has "kubernetes.storageclass"


╭────────────┬─────────────────────────────┬───────────┬──────────────────────────┬────────────────────────────────────────────┬──────────────╮
│ PACKAGE    │ DATA STREAM                 │ TEST TYPE │ TEST NAME                │ RESULT                                     │ TIME ELAPSED │
├────────────┼─────────────────────────────┼───────────┼──────────────────────────┼────────────────────────────────────────────┼──────────────┤
│ kubernetes │ apiserver                   │ static    │ Verify sample_event.json │ PASS                                       │  38.962292ms │
│ kubernetes │ audit_logs                  │ static    │ Verify sample_event.json │ PASS                                       │  31.776584ms │
│ kubernetes │ container                   │ static    │ Verify sample_event.json │ PASS                                       │  37.501291ms │
│ kubernetes │ container_logs              │ static    │ Verify sample_event.json │ PASS                                       │  42.820959ms │
│ kubernetes │ controllermanager           │ static    │ Verify sample_event.json │ PASS                                       │  39.386125ms │
│ kubernetes │ event                       │ static    │ Verify sample_event.json │ PASS                                       │   32.25325ms │
│ kubernetes │ node                        │ static    │ Verify sample_event.json │ PASS                                       │  35.808792ms │
│ kubernetes │ pod                         │ static    │ Verify sample_event.json │ PASS                                       │  34.016709ms │
│ kubernetes │ proxy                       │ static    │ Verify sample_event.json │ PASS                                       │  38.732542ms │
│ kubernetes │ scheduler                   │ static    │ Verify sample_event.json │ PASS                                       │  32.846834ms │
│ kubernetes │ state_container             │ static    │ Verify sample_event.json │ PASS                                       │  31.724083ms │
│ kubernetes │ state_cronjob               │ static    │ Verify sample_event.json │ PASS                                       │   32.95025ms │
│ kubernetes │ state_daemonset             │ static    │ Verify sample_event.json │ PASS                                       │  32.213667ms │
│ kubernetes │ state_deployment            │ static    │ Verify sample_event.json │ PASS                                       │  34.571541ms │
│ kubernetes │ state_job                   │ static    │ Verify sample_event.json │ PASS                                       │  43.513625ms │
│ kubernetes │ state_node                  │ static    │ Verify sample_event.json │ FAIL: one or more errors found in document │  40.349708ms │
│ kubernetes │ state_persistentvolume      │ static    │ Verify sample_event.json │ FAIL: one or more errors found in document │   32.80525ms │
│ kubernetes │ state_persistentvolumeclaim │ static    │ Verify sample_event.json │ PASS                                       │  36.171792ms │
│ kubernetes │ state_pod                   │ static    │ Verify sample_event.json │ PASS                                       │  33.346625ms │
│ kubernetes │ state_replicaset            │ static    │ Verify sample_event.json │ PASS                                       │  32.471084ms │
│ kubernetes │ state_resourcequota         │ static    │ Verify sample_event.json │ FAIL: one or more errors found in document │   40.11425ms │
│ kubernetes │ state_service               │ static    │ Verify sample_event.json │ PASS                                       │  33.864791ms │
│ kubernetes │ state_statefulset           │ static    │ Verify sample_event.json │ PASS                                       │  34.204333ms │
│ kubernetes │ state_storageclass          │ static    │ Verify sample_event.json │ FAIL: one or more errors found in document │   31.76375ms │
│ kubernetes │ system                      │ static    │ Verify sample_event.json │ PASS                                       │  53.379333ms │
│ kubernetes │ volume                      │ static    │ Verify sample_event.json │ PASS                                       │  34.518375ms │
╰────────────┴─────────────────────────────┴───────────┴──────────────────────────┴────────────────────────────────────────────┴──────────────╯
--- Test results for package: kubernetes - END   ---
Done
Error: one or more test cases failed

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have verified that all data streams collect metrics or logs.
  • I have added an entry to my package's changelog.yml file (a sketch of such an entry follows this list).
  • I have verified that Kibana version constraints are current according to guidelines.
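For reference, a changelog.yml entry for a change like this would look roughly like the sketch below; the version number is a placeholder and the description is illustrative, not the actual entry added in this PR:

- version: "1.43.2"  # placeholder version, not necessarily the one bumped in this PR
  changes:
    - description: Upgrade package-spec format_version to 2.9.0.
      type: enhancement
      link: https://github.com/elastic/integrations/pull/7144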

Related issues

Commit message: Upgrade the integration package-spec format_version from 1.0.0 to 2.9.0

Here are the adjustments needed due to the new validations in place:

- remove the deprecated attributes `license` and `release` from the manifest.yml
- remove duplicated `pod.ip` definitions
- update datasets in `sample_event.json` files

We need this upgrade in preparation for other changes (routing rules)
that require a more recent version of the package-spec.
@zmoog zmoog self-assigned this Jul 25, 2023
@zmoog zmoog added the Team:Cloud-Monitoring label Jul 25, 2023
elasticmachine:

💚 Build Succeeded

Build stats

  • Start Time: 2023-07-25T21:31:28.906+0000

  • Duration: 32 min 39 sec

Test stats 🧪

Test Results: 92 total · 92 passed · 0 failed · 0 skipped

🤖 GitHub comments


To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

elasticmachine:

🌐 Coverage report

Name          Metrics % (covered/total)   Diff
Packages      100.0% (0/0) 💚
Files         100.0% (0/0) 💚
Classes       100.0% (0/0) 💚
Methods       96.154% (75/78) 👍          62.821
Lines         100.0% (0/0) 💚
Conditionals  100.0% (0/0) 💚

@zmoog zmoog marked this pull request as ready for review July 25, 2023 22:29
@zmoog zmoog requested a review from a team as a code owner July 25, 2023 22:29
@zmoog zmoog requested review from gsantoro and constanca-m July 25, 2023 22:29
@@ -81,7 +81,7 @@
     "address": "kube-state-metrics:8080"
   },
   "event": {
-    "dataset": "kubernetes.node",
+    "dataset": "kubernetes.state_node",
Member:

Hmmm, I wonder how this change occurred? It looks like a breaking change to me 🤔.

zmoog (Contributor Author):

I checked the Metricbeat module, and it seems this metricset has been using the kubernetes.state_node dataset since the early days.

Is it possible that we manually edited the sample_event.json file, and we never noticed this error until the elastic-package validation caught it?


ChrsMark (Member) commented Jul 26, 2023:

According to elastic/beats@b3f6632#diff-a60c719544b7cf37e25376068d5f8f95c520a6d0d181a1c7a8d7b0d96abdca2eR112, the state_ prefix should not be there.
I suggest waiting a bit for @constanca-m and @gizas to double-check this.

Not sure what we missed here and when, but it looks important to me.

zmoog (Contributor Author):

What values are we getting for event.dataset by running the latest released integration version?

zmoog (Contributor Author):

Quick test running the Kubernetes integration version 1.43.1 on stack version 8.8.0:

[Screenshot: CleanShot 2023-07-26 at 15 42 52@2x]

zmoog (Contributor Author) commented Jul 26, 2023:

And, for the record, here's the error that elastic-package returns during execution:

$ elastic-package test static -v
2023/07/26 14:36:37 DEBUG Enable verbose logging
Run static tests for the package
--- Test results for package: kubernetes - START ---
FAILURE DETAILS:
kubernetes/state_node Verify sample_event.json:
[0] field "event.dataset" should have value "kubernetes.state_node", it has "kubernetes.node"
kubernetes/state_persistentvolume Verify sample_event.json:
[0] field "event.dataset" should have value "kubernetes.state_persistentvolume", it has "kubernetes.persistentvolume"
kubernetes/state_resourcequota Verify sample_event.json:
[0] field "event.dataset" should have value "kubernetes.state_resourcequota", it has "kubernetes.resourcequota"
kubernetes/state_storageclass Verify sample_event.json:
[0] field "event.dataset" should have value "kubernetes.state_storageclass", it has "kubernetes.storageclass"

@@ -1,10 +1,6 @@
-- name: kubernetes.pod
-  type: group
-  fields:
-    - name: ip
Contributor:

What happened here? Don't we have pod.ip anymore?

zmoog (Contributor Author) commented Jul 26, 2023:

We already have one: according to elastic-package with the latest package-spec, this one is a duplicate.

I applied my best judgement to pick the one to remove, but maybe it is worth double-checking with you. Here's the output of elastic-package when executed on today's main branch with the package-spec format_version set to 2.9.0:

# elastic-package check -v
....
Error: checking package failed: linting package failed: found 4 validation errors:
   1. file "/Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/manifest.yml" is invalid: field (root): Additional property release is not allowed
   2. file "/Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/manifest.yml" is invalid: field (root): Additional property license is not allowed
   3. field "kubernetes.pod.ip" is defined multiple times for data stream "pod", found in: /Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/data_stream/pod/fields/base-fields.yml, /Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/data_stream/pod/fields/fields.yml
   4. field "kubernetes.pod.ip" is defined multiple times for data stream "state_pod", found in: /Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/data_stream/state_pod/fields/base-fields.yml, /Users/zmoog/code/projects/zmoog/integrations/packages/kubernetes/data_stream/state_pod/fields/fields.yml

Here are the two relevant errors (the last two), unpacked:

field "kubernetes.pod.ip" is defined multiple times for data stream "pod", found in: 
  .../packages/kubernetes/data_stream/pod/fields/base-fields.yml, 
  .../packages/kubernetes/data_stream/pod/fields/fields.yml

field "kubernetes.pod.ip" is defined multiple times for data stream "state_pod", found in: 
  .../packages/kubernetes/data_stream/state_pod/fields/base-fields.yml, 
  .../packages/kubernetes/data_stream/state_pod/fields/fields.yml

@gsantoro do you think updating fields.yml is the right choice, or should we update base-fields.yml?
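For illustration, the duplication reported by elastic-package looks roughly like the sketch below. This is a sketch only: the types and exact layout of the two files are assumptions, and only the repeated kubernetes.pod.ip definition comes from the errors above. The PR keeps one copy and drops the other (from fields.yml, which is what the question above is about).

# data_stream/pod/fields/base-fields.yml (sketch)
- name: kubernetes.pod.ip
  type: ip

# data_stream/pod/fields/fields.yml (sketch; duplicate definition removed in this PR)
- name: kubernetes.pod
  type: group
  fields:
    - name: ip
      type: ip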

Contributor:

Thanks for the details. I didn't fully understand it until now.

So both pod and state_pod define kubernetes.pod.ip twice each, once in fields.yml and once in base-fields.yml.

That's clearly a mistake. It's crazy that it was only caught now.

I'm not sure which is the preferred file for that property, fields.yml or base-fields.yml. I don't think it matters much, but maybe @ChrsMark knows better.

ChrsMark (Member) left a comment:

I think we need to clarify #7144 (comment) before moving forward (even if not directly related to this patch).
@gizas @constanca-m could you have a look?

zmoog (Contributor Author) commented Jul 26, 2023:

I think we need to clarify #7144 (comment) before moving forward (even if not directly related to this patch). @gizas @constanca-m could you have a look?

Yep, it makes sense. I cherry-picked this change from the main PR to discuss it earlier with you. Thanks for checking.

zmoog (Contributor Author) commented Aug 2, 2023:

I think we need to clarify #7144 (comment) before moving forward (even if not directly related to this patch). @gizas @constanca-m could you have a look?

@gizas @constanca-m, let me know what you think when you have time 😇

gizas (Contributor) commented Aug 3, 2023:

Quick tests on my side also confirm what you said: it is state_node (stack 8.9.0 and k8s integration version 1.43.1).
[Screenshot: 2023-08-03 at 12 02 04 PM]

What still confuses me is that in Beats the dataset is still kubernetes.node (see https://github.com/elastic/beats/blob/main/metricbeat/module/kubernetes/state_node/_meta/testdata/ksm.v2.4.2.plain-expected.json#L4). Is it then a test error?

Another question is regarding this comment: #7144 (comment)

Can we have two data streams write to the same field? What if the user decides to enable only one of the two data streams? Then we will miss the value, won't we?

gizas (Contributor) commented Aug 3, 2023:

I think I found the reason here: https://github.com/elastic/beats/blob/main/metricbeat/helper/kubernetes/state_metricset.go#L47

The init function will initialise the metricset as name = prefix + name (with prefix = "state_").

So that is why all tests have kubernetes.node inside the state_* subfolders: the replacement takes place later.

ChrsMark (Member) commented Aug 3, 2023:

I think I found the reason here: https://github.com/elastic/beats/blob/main/metricbeat/helper/kubernetes/state_metricset.go#L47

The init function will initialise the metricset as name = prefix + name (with prefix = "state_").

So that is why all tests have kubernetes.node inside the state_* subfolders: the replacement takes place later.

Did you test it? If so, which versions does this affect? Which PR introduced this?

Since I believe that was not intentional, we have a regression, and we need to spot where this change was introduced and fix + backport accordingly.

gizas (Contributor) commented Aug 3, 2023:

I can see this here: elastic/beats@b3f6632#diff-e68559ac427c7449ad78073317eb417a1163be73f47fb249282343130ab43941L86

So before, we were passing state_*.

So @constanca-m, as your PR introduced this change, could you please open a new issue to track any work needed to fix this regression?

constanca-m (Contributor):

Sorry for being late to the party, but I am confused about what the problem is. @gizas @ChrsMark

I checked the history of the data.json file inside the state_node Metricbeat metricset and there were only two versions: one before and one after this PR commit. For both, the event.dataset for kubernetes.state_node was kubernetes.node. Is this not the expected result? Should it be kubernetes.state_node? The state_ prefix is never added to event.dataset for any of the state_* datasets.

ChrsMark (Member) commented Aug 3, 2023:

Hey @constanca-m! You are right: event.dataset should always be node, pod, container, etc., not state_node, state_pod, state_container, etc.

I'm not sure either how this occurred.
See #7144 (comment) for why we are looking into this. Could you verify that elastic/beats#34432 did not introduce any change regarding this?
We need to figure out why we see https://github.com/elastic/integrations/pull/7144/files#diff-84695c89e2b99f9fa927b432447c7635b7abe0d393561472e7e93822d34b3401R84 in this PR.

ChrsMark (Member) commented Aug 3, 2023:

So I did a quick test with docker.elastic.co/beats/metricbeat:8.9.0 and I see that the dataset field is populated properly:

k -n kube-system logs -f metricbeat-f8gmd | grep dataset
...
   "dataset": "kubernetes.pod"
    "dataset": "kubernetes.pod",
    "dataset": "kubernetes.container"
    "dataset": "kubernetes.pod",
    "dataset": "kubernetes.container",
    "dataset": "kubernetes.pod",
    "dataset": "kubernetes.container",
    "dataset": "kubernetes.container"
    "dataset": "kubernetes.pod",
    "dataset": "kubernetes.pod",
    "dataset": "kubernetes.pod",
    "dataset": "kubernetes.container",
    "dataset": "kubernetes.pod",
    "dataset": "kubernetes.container",
    "dataset": "kubernetes.container",
...

So maybe something has changed on the Agent side that overrides that field?

constanca-m (Contributor):

I tested directly on Metricbeat. This PR was backported in 8.6.0, so I checked 8.5.0. I found this:

[screenshot]

So we already had some kubernetes.state_node values for the state_node metricset before the PR.

constanca-m (Contributor):

I checked what the difference was between the documents with the kubernetes.state_node and kubernetes.node event datasets, and the only difference was that kubernetes.state_node had the documents with error messages:

[screenshot]

And kubernetes.node has the correct ones. I will check if this happened in 8.6.0 as well, but I find it likely it did. This PR only moved the common code to one single file; it doesn't change what it is doing.

ChrsMark (Member) commented Aug 3, 2023:

@constanca-m that's weird that you see both 🤔 Is your ES empty? With 8.9 I don't see any state_* values.

constanca-m (Contributor):

Yes @ChrsMark , I created a new instance just to check Kubernetes module.

constanca-m (Contributor):

@constanca-m that's weird that you see both 🤔 Is your ES empty? With 8.9 I don't see any state_* values.

What if you delete all KSM deployments to generate documents with error messages? @ChrsMark

constanca-m (Contributor) commented Aug 3, 2023:

I tested again with a new instance, this time for 8.6.0. I started Metricbeat without KSM and then applied KSM. I saw the same behavior again:
[screenshot]

If I filter the documents with the kubernetes.state_node event, then all the documents have an error.message:
[screenshot]
(Edit: in this screenshot, I tried to filter documents that didn't have an error message, and I got 0. That is why I say every one of them has an error message.)

If I filter documents with kubernetes.node and an error.message, I get 0:
[screenshot]

So the conclusion: when KSM is not deployed, we receive documents that use the kubernetes.state_node event. Otherwise, the documents are received as expected.

ChrsMark (Member) commented Aug 3, 2023:

@constanca-m I can confirm that switching on/off the KSM deployment seems to be the reason. Thanks for spotting this :)!

@zmoog could you verify whether the update happens with a running KSM instance or not? In any case it seems that under normal circumstances the dataset values should not be state_*, and hence the sample docs should follow accordingly.

For Agent though maybe there is a replacement because of elastic/beats#20076?
We need to verify these and file an issue to see if we need to fix anything.
cc: @gizas

gizas (Contributor) commented Aug 3, 2023:

So, to summarise the above, this is what we see for the kube-state metricsets:

  • in agent: event.dataset: state_*
  • in beats: event.dataset: <normal_names> (without prefix)

I have opened a related issue to track the effort: elastic/beats#36227

zmoog (Contributor Author) commented Aug 3, 2023:

@zmoog could you verify whether the update happens with a running KSM instance or not? In any case it seems that under normal circumstances the dataset values should not be state_*, and hence the sample docs should follow accordingly.

Testing with:

  • Stack: 8.9.0
  • Kubernetes integration: v1.43.1

I started collecting data with KSM available and received only documents with event.dataset: kubernetes.state_node and no error.message fields.

After a few minutes, I deleted the KSM ReplicaSet. Without KSM, all documents had an error.message field. However, the event.dataset field still only has kubernetes.state_node values.

[Screenshot: CleanShot 2023-08-03 at 16 40 28@2x]

ChrsMark (Member) commented Aug 4, 2023:

So we have verified that for Beats the value of event.dataset is set to node (not state_node) when the events are collected properly (i.e. KSM is reachable).
The issue is that in any case Agent seems to be replacing node with state_node. @joshdover do you know if this is expected? Also I wonder why we only hit this now and not earlier 🤔.

In any case @zmoog, if you can roll back the changes in the sample_event.json files, I suggest we can unblock this PR and continue the investigation at elastic/beats#36227.

ChrsMark (Member) left a comment:

Apart from elastic/beats#36227, it looks good to me. If we can roll back those specific changes, it's good to go :).

@@ -6,7 +6,7 @@
   "event": {
     "module": "kubernetes",
     "duration": 12149615,
-    "dataset": "kubernetes.persistentvolume"
+    "dataset": "kubernetes.state_persistentvolume"
Member:

Can we roll back those changes until we figure out why it's happening?

joshdover (Contributor):

The issue is that in any case Agent seems to be replacing node with state_node. @joshdover do you know if this is expected? Also I wonder why we only hit this now and not earlier 🤔.

I don't have any immediate idea. I'd first suggest looking at an Agent diagnostic to figure out exactly what configuration is being passed to Metricbeat in this scenario, in the components-actual.yaml or computed-config.yaml files.
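For context, the relevant part of such a diagnostic would look roughly like the hypothetical fragment below. The IDs, hosts, and period are made up and the layout assumes the usual Agent policy shape; the point is that the stream-level data_stream.dataset is kubernetes.state_node, which, if it gets copied into event.dataset (as elastic/beats#20076 suggests), would explain the state_* values seen from Agent.

inputs:
  - id: kubernetes/metrics-kube-state-metrics-example         # hypothetical ID
    type: kubernetes/metrics
    data_stream:
      namespace: default
    streams:
      - id: kubernetes/metrics-kubernetes.state_node-example  # hypothetical ID
        data_stream:
          dataset: kubernetes.state_node
          type: metrics
        metricsets:
          - state_node
        hosts:
          - kube-state-metrics:8080
        period: 10s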

zmoog (Contributor Author) commented Aug 4, 2023:

In any case @zmoog, if you can roll back the changes in the sample_event.json files, I suggest we can unblock this PR and continue the investigation at elastic/beats#36227.

I am not sure I can easily roll back this change; IIRC it comes from a validation error reported by elastic-package after upgrading package-spec from 1.0.0 to 2.9.0.

I’ll check it again to get the exact error message.

ChrsMark (Member) commented Aug 4, 2023:

It might be elastic/beats#20076 that applies this override.

We already have some state_* values, for example at

so it's not a big deal if we merge this one.

Let's continue the investigation at elastic/beats#36227.

zmoog (Contributor Author) commented Aug 4, 2023:

In any case @zmoog, if you can roll back the changes in the sample_event.json files, I suggest we can unblock this PR and continue the investigation at elastic/beats#36227.

I am not sure I can easily roll back this change; IIRC it comes from a validation error reported by elastic-package after upgrading package-spec from 1.0.0 to 2.9.0.

I’ll check it again to get the exact error message.

I updated the PR description, adding the failed validations from elastic-package. Each change in this PR is only here to address those errors.

@zmoog zmoog merged commit 7385dcf into elastic:main Aug 10, 2023
@zmoog zmoog deleted the zmoog/upgrade-kubernetes-format-version branch August 10, 2023 10:26
Labels: Integration:kubernetes, Team:Cloud-Monitoring