
feature/add-workloadmonitors-roles #626

Merged
1 commit merged on Feb 13, 2025

Conversation

klinch0
Contributor

@klinch0 klinch0 commented Feb 10, 2025

Summary by CodeRabbit

  • Chores

    • Updated the application version to 1.6.8 and aligned version tracking for improved consistency.
  • New Features

    • Enhanced access controls for workload monitoring by extending permissions across various user roles.

@klinch0 klinch0 requested a review from kvaps as a code owner February 10, 2025 12:30
@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label Feb 10, 2025
Contributor

coderabbitai bot commented Feb 10, 2025

Walkthrough

This pull request updates version information and RBAC configurations for the tenant application. Chart.yaml has been bumped from version 1.6.7 to 1.6.8, and the versions_map has been revised with a new commit reference and version entry. Additionally, new RBAC rules for the workloadmonitors resource under the cozystack.io API group have been added across multiple role definitions in tenant.yaml.

Changes

File(s): packages/apps/.../Chart.yaml, packages/apps/.../versions_map
Change Summary: Updated tenant version: Chart.yaml version bumped from 1.6.7 to 1.6.8; versions_map updated with a new commit reference for 1.6.7 and a new 1.6.8 → HEAD entry.

File(s): packages/apps/.../tenant.yaml
Change Summary: Added new RBAC rules for the workloadmonitors resource under apiGroups: [cozystack.io] with verbs get, list, and watch across the default, view, use, admin, and super admin roles.
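Based on that summary, the added rule likely takes the following shape (a minimal sketch; the exact placement and indentation inside each role in tenant.yaml may differ):

```yaml
# Hypothetical sketch of the workloadmonitors rule described above.
- apiGroups:
  - cozystack.io
  resources:
  - workloadmonitors
  verbs:
  - get
  - list
  - watch
```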

Possibly related PRs

Suggested labels

size:L

Suggested reviewers

  • kvaps

Poem

Oh, what a thrill in code delight,
A bunny hops with joy at night.
Versions rise and roles expand,
Rules in YAML, crafted so grand.
Leaping through changes, we cheer with glee,
A carrot crunch for every PR decree!
🐇💻


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7caf632 and 4cc48e6.

📒 Files selected for processing (3)
  • packages/apps/tenant/Chart.yaml (1 hunks)
  • packages/apps/tenant/templates/tenant.yaml (5 hunks)
  • packages/apps/versions_map (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/apps/versions_map
  • packages/apps/tenant/Chart.yaml
  • packages/apps/tenant/templates/tenant.yaml

@dosubot dosubot bot added the enhancement New feature or request label Feb 10, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (19)
packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/047-Crd-kafkaconnector.yaml (2)

95-106: Verify the new 'listOffsets' property definition.
The newly added listOffsets property is well defined:

  • It correctly specifies its type as an object with detailed sub-properties.
  • The toConfigMap field is nested appropriately and marked as required.
  • The description clearly indicates its purpose for writing the list of offsets to a ConfigMap.

This structure is consistent with similar CRD modifications in the environment (e.g., KafkaMirrorMaker2). Consider, for future improvements, adding validation examples or constraints if naming conventions for config maps must be enforced.
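For illustration, the new property could be used on a connector resource roughly as follows (a sketch based on the CRD description above; the ConfigMap names are invented):

```yaml
# Hypothetical KafkaConnector snippet showing the new offset-management fields.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
spec:
  listOffsets:
    toConfigMap:
      name: my-connector-offsets      # the list of offsets is written here
  alterOffsets:
    fromConfigMap:
      name: my-connector-new-offsets  # new offset values are read from here
```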


107-118: Verify the new 'alterOffsets' property definition.
The alterOffsets addition mirrors the pattern used in listOffsets:

  • It includes a required fromConfigMap sub-property with its type and description clearly specified.
  • The structure and explanatory notes are consistent, enhancing clarity about its use for storing new offset values.

The alignment with other CRD enhancements is a strong point. If applicable later, you might add further validation or examples to demonstrate correct usage.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/046-Crd-kafkabridge.yaml (1)

1192-1233: Enhanced Environment Variable Sourcing for BridgeContainer
The new valueFrom property in the env array allows environment variables to be sourced from a Kubernetes Secret or ConfigMap as an alternative to providing a direct value. The use of a oneOf construct to require either a direct value or a valueFrom reference is a clean solution that improves flexibility and security. Verify that all container configurations using this schema are updated accordingly.
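The three allowed shapes can be sketched as follows (resource names are invented; the oneOf constraint means each entry carries either value or valueFrom, never both):

```yaml
# Hypothetical env entries illustrating the alternatives described above.
env:
  - name: LOG_LEVEL
    value: INFO                  # direct value
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:              # sourced from a Secret
        name: bridge-secrets
        key: db-password
  - name: FEATURE_FLAGS
    valueFrom:
      configMapKeyRef:           # sourced from a ConfigMap
        name: bridge-config
        key: feature-flags
```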

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/045-Crd-kafkamirrormaker.yaml (1)

1335-1377: Dynamic Environment Variable Sourcing in MirrorMakerContainer
A new valueFrom block has been introduced for environment variables within the mirrorMakerContainer definition. This allows values to be populated from either a secret or a config map, using the same oneOf pattern to enforce that one of the two is provided. This change increases the flexibility for securing sensitive configuration data.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/041-Crd-kafkaconnect.yaml (2)

1106-1125: CSI Volume Support Addition in Volume Specification

A new csi property has been introduced in the volume definition. This block defines a CSI volume’s schema—including properties such as driver, fsType, nodePublishSecretRef, readOnly, and volumeAttributes—to enable Container Storage Interface support. Please verify that these fields align with expected CSI volume definitions and that documentation reflects the revised schema.
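A volume entry using the new source could look roughly like this (a sketch only; the driver name and attributes are invented for illustration):

```yaml
# Hypothetical CSI volume entry matching the schema fields listed above.
volumes:
  - name: certs
    csi:
      driver: csi.cert-manager.io
      readOnly: true
      volumeAttributes:
        csi.cert-manager.io/issuer-name: my-issuer
```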


1214-1255: Dynamic Environment Variable Sourcing via valueFrom

A new valueFrom block has been added under the environment variable definition of the connectContainer. This addition allows environment variable values to be sourced dynamically from either a Secret (via secretKeyRef) or a ConfigMap (via configMapKeyRef). The use of oneOf to enforce mutual exclusivity with a direct value is a robust way to ensure correct configuration. Ensure that related documentation and any unit tests for CRD validation are updated accordingly.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/048-Crd-kafkamirrormaker2.yaml (4)

340-363: Offset Management Properties in Source Connector

New listOffsets and alterOffsets properties have been added under the sourceConnector specification.
• The listOffsets configuration requires a toConfigMap field that references a ConfigMap where the list of offsets will be written.
• The alterOffsets configuration requires a fromConfigMap field that references a ConfigMap holding new offset values.

These additions enhance the manageability of connector offsets. Please confirm that the required keys are documented and that similar changes in other connector specifications remain consistent.


396-419: Offset Management in Heartbeat Connector

The heartbeat connector now also supports the listOffsets and alterOffsets properties. As with the source connector, the same required structure is enforced (i.e. toConfigMap for listing and fromConfigMap for altering offsets). Verify that the schema for heartbeat connector offsets mirrors the source connector’s configuration to maintain consistency.


452-475: Offset Management in Checkpoint Connector

In the checkpoint connector specification, the new listOffsets and alterOffsets properties make it possible to manage offsets for checkpointing. The definitions follow the same pattern as in the source and heartbeat connectors. Please ensure that these new configurations correctly support the operator’s offset management flow and that required fields are properly validated.


2336-2354: CSI Volume Support in KafkaMirrorMaker2

A CSI volume option has been added to the volume specification in this CRD. The new csi property includes sub-properties for driver, fsType, nodePublishSecretRef, readOnly, and volumeAttributes. This addition mirrors the updates made in the Kafka Connect CRD, thereby enhancing consistency across CRDs. Please verify that these fields satisfy the necessary CSI volume requirements.

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/020-RoleBinding-strimzi-cluster-operator.yaml (1)

5-5: Fix trailing spaces in the template.

There are trailing spaces at the end of line 5 that should be removed.

-  {{- $watchNamespaces = list -}}  
+  {{- $watchNamespaces = list -}}
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 5-5: trailing spaces

(trailing-spaces)

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-RoleBinding-strimzi-cluster-operator.yaml (1)

4-6: Resetting $watchNamespaces Based on watchAnyNamespace

Good use of the conditional check to reset $watchNamespaces to an empty list when watchAnyNamespace is true. Note that YAMLlint flagged trailing spaces on line 5; please remove these extraneous spaces to avoid lint warnings.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 5-5: trailing spaces

(trailing-spaces)

packages/apps/kafka/templates/podscrape.yaml (1)

9-10: Indentation Adjustment for RelabelConfigs

YAMLlint reported an indentation issue on line 10. The dash item (- separator: ;) should be indented two additional spaces (i.e. 8 spaces total from the parent level) to maintain proper YAML hierarchy.

Proposed diff:

-      - separator: ;
+        - separator: ;
🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 10-10: wrong indentation: expected 8 but found 6

(indentation)

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml (1)

4-6: Resetting $watchNamespaces Based on watchAnyNamespace

The conditional reset to an empty list for $watchNamespaces when watchAnyNamespace is set is correctly implemented. As in the previous file, trailing spaces (notably on line 5) should be removed to satisfy linting standards.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 5-5: trailing spaces

(trailing-spaces)

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-ClusterRole-strimzi-cluster-operator-role.yaml (1)

82-89: Indentation Issues in the New Deletion Rule

The new rule granting deletion rights for kafkarebalances is functionally correct; however, YAMLlint has flagged indentation issues on lines 86 and 88. Please adjust the indentation so that the nested keys align with a two-space indent relative to their parent.

Proposed diff:

-    - kafkarebalances
+  - kafkarebalances

And similarly for the verbs section:

-    - delete
+  - delete

Ensure the overall block resembles:

- apiGroups:
  - "kafka.strimzi.io"
  resources:
    # The Cluster Operator needs deletion for KafkaRebalance only (during auto-rebalancing)
    - kafkarebalances
  verbs:
  - delete
🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 86-86: wrong indentation: expected 2 but found 4

(indentation)


[warning] 88-88: wrong indentation: expected 2 but found 4

(indentation)

packages/system/kafka-operator/charts/strimzi-kafka-operator/values.yaml (1)

185-185: YAMLlint: Missing Newline at End of File

YAMLlint reports a missing newline at the end of the file. Please add a newline after the last line to comply with YAML best practices.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 185-185: no new line character at the end of file

(new-line-at-end-of-file)
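A generic way to detect and fix a missing trailing newline from the shell (demonstrated on a throwaway file, since the real values.yaml path is chart-specific):

```shell
# Create a sample file without a trailing newline, detect it, then fix it.
f=$(mktemp)
printf 'generatePodDisruptionBudget: true' > "$f"

check() {
  # $(tail -c 1 "$f") is empty when the last byte is a newline,
  # because command substitution strips trailing newlines.
  if [ -n "$(tail -c 1 "$f")" ]; then echo "missing newline"; else echo "ok"; fi
}

check               # prints: missing newline
printf '\n' >> "$f" # append the final newline
check               # prints: ok
rm -f "$f"
```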

hack/download-dashboards.sh (1)

83-83: New Dashboard Entry for Strimzi Kafka

A new dashboard reference (modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kafka/strimzi-kafka.json) has been added. Please verify that this file exists and consider removing the extra slash (//) if it is unintentional to ensure consistent path formatting.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/049-Crd-kafkarebalance.yaml (1)

67-69: Review Description for brokers Field

The description for the brokers field mentions usage with rebalancing mode add-brokers and removed-brokers, while the enum value in mode is defined as remove-brokers. For consistency, please update the description to match the correct enum value.

packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md (1)

68-69: Prerequisites Version Update and List Style
The prerequisites now specify "Kubernetes 1.25+" instead of previous versions. Please ensure that this update is reflected consistently across all documentation. Also, note that the unordered list here uses a dash; if your project conventions prefer asterisks for bullet lists, consider updating the list style to comply with your markdown guidelines.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

68-68: Unordered list style
Expected: asterisk; Actual: dash

(MD004, ul-style)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3fa4dd3 and 7caf632.

📒 Files selected for processing (28)
  • dashboards/kafka/strimzi-kafka.json (1 hunks)
  • hack/download-dashboards.sh (1 hunks)
  • packages/apps/kafka/Chart.yaml (1 hunks)
  • packages/apps/kafka/templates/kafka.yaml (2 hunks)
  • packages/apps/kafka/templates/metrics-configmap.yaml (1 hunks)
  • packages/apps/kafka/templates/podscrape.yaml (1 hunks)
  • packages/apps/tenant/Chart.yaml (1 hunks)
  • packages/apps/tenant/templates/tenant.yaml (5 hunks)
  • packages/apps/versions_map (2 hunks)
  • packages/extra/monitoring/dashboards.list (1 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/Chart.yaml (2 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md (5 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/040-Crd-kafka.yaml (19 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/041-Crd-kafkaconnect.yaml (8 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/045-Crd-kafkamirrormaker.yaml (2 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/046-Crd-kafkabridge.yaml (3 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/047-Crd-kafkaconnector.yaml (1 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/048-Crd-kafkamirrormaker2.yaml (10 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/049-Crd-kafkarebalance.yaml (3 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/files/grafana-dashboards/strimzi-kraft.json (4 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/020-RoleBinding-strimzi-cluster-operator.yaml (1 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-ClusterRole-strimzi-cluster-operator-role.yaml (1 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-RoleBinding-strimzi-cluster-operator.yaml (1 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml (1 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/060-Deployment-strimzi-cluster-operator.yaml (1 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/_kafka_image_map.tpl (1 hunks)
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/values.yaml (3 hunks)
  • packages/system/kafka-operator/templates/prometheus-rules.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (3)
  • packages/apps/tenant/Chart.yaml
  • packages/apps/kafka/Chart.yaml
  • packages/system/kafka-operator/charts/strimzi-kafka-operator/Chart.yaml
🧰 Additional context used
🪛 YAMLlint (1.35.1)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-RoleBinding-strimzi-cluster-operator.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


[error] 5-5: trailing spaces

(trailing-spaces)

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


[error] 5-5: trailing spaces

(trailing-spaces)

packages/apps/kafka/templates/podscrape.yaml

[warning] 10-10: wrong indentation: expected 8 but found 6

(indentation)


[error] 4-4: syntax error: expected , but found ''

(syntax)

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-ClusterRole-strimzi-cluster-operator-role.yaml

[warning] 86-86: wrong indentation: expected 2 but found 4

(indentation)


[warning] 88-88: wrong indentation: expected 2 but found 4

(indentation)

packages/apps/kafka/templates/metrics-configmap.yaml

[error] 4-4: syntax error: expected , but found ''

(syntax)

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/020-RoleBinding-strimzi-cluster-operator.yaml

[error] 5-5: trailing spaces

(trailing-spaces)

packages/system/kafka-operator/charts/strimzi-kafka-operator/values.yaml

[error] 185-185: no new line character at the end of file

(new-line-at-end-of-file)

🪛 LanguageTool
packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md

[style] ~34-~34: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...fka® cluster via the HTTP protocol. * Includes Cruise Control - Automates the proces...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)

🪛 markdownlint-cli2 (0.17.2)
packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md

68-68: Unordered list style
Expected: asterisk; Actual: dash

(MD004, ul-style)

🔇 Additional comments (71)
packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/046-Crd-kafkabridge.yaml (3)

1095-1113: New CSI Volume Source Configuration Added
A new csi volume type is introduced here with properties such as driver, fsType, nodePublishSecretRef, readOnly, and volumeAttributes. This addition expands the data mounting options for Kafka Bridge deployments.


1120-1120: Including CSI in Volume Type Alternatives
The update in the oneOf block (adding csi: {}) integrates the new volume type as a valid option alongside secret, configMap, emptyDir, and persistentVolumeClaim. Ensure that this matches the intended CRD design and that any downstream validations are updated accordingly.


1371-1412: Consistent Environment Variable Configuration for InitContainer
A similar valueFrom block has been added to the initContainer section. This consistency between the primary container and the init container is beneficial for managing secrets and config maps uniformly. Please ensure that the oneOf enforcement is behaving as intended in runtime validations.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/045-Crd-kafkamirrormaker.yaml (2)

1271-1290: New CSI Volume Type in MirrorMaker CRD
The addition of the csi volume type with properties like driver, fsType, nodePublishSecretRef, readOnly, and volumeAttributes in the volumes section aligns with the changes implemented for KafkaBridge. This provides enhanced options for managing storage via the CSI interface. Confirm alignment with your cluster’s CSI drivers and documentation.


1296-1296: Expansion of Valid Volume Options with CSI
By extending the oneOf list with an empty mapping for csi, the CRD now accepts CSI-based volume definitions as an alternative. Ensure downstream controllers can correctly process this new volume type alongside existing volume definitions.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/041-Crd-kafkaconnect.yaml (1)

1126-1131: Extend oneOf Block to Include CSI

The oneOf block now explicitly accepts volumes with a csi: {} field alongside the traditional options (secret, configMap, emptyDir, and persistentVolumeClaim). This change ensures that CSI volumes are valid under the CRD schema. Confirm that validation tests cover this new option.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/048-Crd-kafkamirrormaker2.yaml (1)

2355-2362: Extending Volume oneOf Block for CSI

The oneOf block in the volumes section now includes an option for a csi: {} volume. This ensures that volumes defined with the new CSI specification are accepted. Confirm that the schema properly enforces mutual exclusion among the different volume source types and that existing manifests using older volume types remain valid.

packages/extra/monitoring/dashboards.list (1)

37-37: LGTM! Addition of Strimzi Kafka dashboard.

The addition of the Strimzi Kafka dashboard entry is consistent with the PR objectives and enhances the monitoring capabilities.

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/_kafka_image_map.tpl (1)

9-11: Verify compatibility of the new Kafka versions.

The version updates from 3.8.0 to 3.9.0 and addition of 3.8.1 look good. However, let's verify the compatibility of these versions with the rest of the system.

Also applies to: 15-16, 20-21, 25-26, 30-31

✅ Verification successful

🌐 Web query:

What are the latest stable versions of Strimzi Kafka Operator and what Kafka versions do they support?

💡 Result:

The latest stable version of Strimzi Kafka Operator is 0.45.0, released with support for Apache Kafka 3.8.x and 3.9.x[1][4][7]. Here's a detailed breakdown:

Supported Kafka Versions by Strimzi Releases

Strimzi Version | Supported Apache Kafka Versions | Kubernetes Support
0.45.x          | 3.8.1, 3.9.0                    | 1.25+
0.44.x          | 3.7.x, 3.8.0                    | 1.25+
0.43.x          | 3.7.x, 3.8.0                    | 1.23+
0.42.x          | 3.6.x, 3.7.x                    | 1.23+

Key details:

  • Strimzi 0.45 adds support for Kafka 3.9.0 and drops older KRaft metadata versions[4][8]
  • From Strimzi 0.44 onward:
    • Minimum Kubernetes version is 1.25[6][7]
    • Last version supporting ZooKeeper-based clusters (deprecated)[5][9]
  • Earlier releases:
    • Strimzi 0.41 supported Kafka 3.6-3.7 with JBOD storage in KRaft mode[4]
    • Strimzi 0.39 introduced KRaft upgrades[4]

Version Compatibility Notes

  • Always verify Kafka metadata version compatibility when upgrading[8]
  • Container images are available via Quay.io registry with SHA256 digests[7]
  • Full changelogs available in GitHub releases[4][7]

For production environments using Kubernetes ≥1.25, Strimzi 0.45 with Kafka 3.9 is currently recommended[1][7][9].


Compatibility Verified: The updated image tags and Kafka versions are compatible with the supported versions.

Based on the latest stable Strimzi Kafka Operator (version 0.45.0) documentation:

  • Supported Kafka Versions: Kafka 3.8.x (including 3.8.1) and 3.9.0 are fully supported.
  • Image Tags in Templates: The usage of -kafka-3.9.0 in the templates (lines 9-11 and similarly on lines 15-16, 20-21, 25-26, and 30-31) aligns with the supported versions.
  • Operator Compatibility: For production environments, Strimzi 0.45.0 with Kafka 3.9.0 is recommended on supported Kubernetes versions (≥1.25).

No issues have been identified regarding compatibility.

packages/apps/versions_map (1)

26-27: LGTM! Version updates for kafka and tenant packages.

The version updates follow the established pattern with proper commit references for fixed versions and HEAD for latest versions.

Also applies to: 104-105

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/020-RoleBinding-strimzi-cluster-operator.yaml (1)

3-7: LGTM! Namespace watching logic implementation.

The implementation of namespace watching logic with proper conditionals for RoleBinding vs ClusterRoleBinding is well-structured and follows best practices.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 5-5: trailing spaces

(trailing-spaces)

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-RoleBinding-strimzi-cluster-operator.yaml (3)

1-3: Conditional Block Initialization and Variable Definition

The initial block correctly checks for rbac.create and sets the root context as well as initializing $watchNamespaces.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


7-14: Binding Kind Determination in the Range Loop

The range loop correctly appends the primary namespace to $watchNamespaces and conditionally sets the binding kind. Using ClusterRoleBinding when watchAnyNamespace is true and RoleBinding otherwise is clear and well implemented.


15-23: Metadata and Namespace Handling

The metadata section consistently uses the same binding name regardless of condition, and the namespace is conditionally included only when not watching any namespace. This is acceptable and meets the intended design.

packages/apps/kafka/templates/podscrape.yaml (1)

1-5: VMPodScrape Resource Definition

The new VMPodScrape resource is defined appropriately. The use of Helm templating in the metadata (line 4) may trigger YAML lint errors, but these are likely false positives due to the templating syntax. Please verify that the rendered output is valid YAML.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected , but found ''

(syntax)

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml (3)

1-3: Conditional Block Initialization and Variable Definition

The initial block correctly checks .Values.rbac.create and sets up the root context with $watchNamespaces.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


7-14: Role Binding Kind Determination

The logic to choose between ClusterRoleBinding and RoleBinding based on the watchAnyNamespace flag is clear and effectively implemented.


15-19: Metadata and Namespace Handling for Entity Operator Delegation

The metadata section properly sets the name (strimzi-cluster-operator-entity-operator-delegation) and conditionally applies the namespace when not targeting all namespaces.

packages/apps/kafka/templates/kafka.yaml (1)

60-65: Kafka Metrics Configuration

The newly introduced metricsConfig section under the Kafka specification is well implemented with the jmxPrometheusExporter type. Ensure that the referenced ConfigMap ({{ .Release.Name }}-metrics) correctly contains the kafka-metrics-config.yml key.
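The described block follows the standard Strimzi metrics pattern and can be sketched as (a literal release name stands in for the Helm-templated `{{ .Release.Name }}-metrics`):

```yaml
# Sketch of the metricsConfig section described above.
spec:
  kafka:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: my-release-metrics        # templated in the chart
          key: kafka-metrics-config.yml   # must exist in the ConfigMap
```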

packages/system/kafka-operator/charts/strimzi-kafka-operator/values.yaml (4)

13-13: Updated defaultImageTag Version

The image tag has been updated to 0.45.0, which appears consistent with the intended upgrade. Please verify that related files (e.g., Chart.yaml and README.md) have been updated accordingly.


129-129: Updated kafkaBridge.image.tag Version

The Kafka Bridge image tag is now set to 0.31.1. Ensure that downstream configurations and documentation are aligned with this new version.


183-183: Unchanged connectBuildTimeoutMs Value

Although the value remains 300000, the formatting change may have been intentional. It would be good to confirm that this update is purely cosmetic and causes no functional change.


185-185: Added generatePodDisruptionBudget Option

A new configuration option generatePodDisruptionBudget: true has been introduced to control the creation of Pod Disruption Budget resources. Please ensure that the corresponding deployment template utilizes this flag correctly.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 185-185: no new line character at the end of file

(new-line-at-end-of-file)

packages/apps/kafka/templates/metrics-configmap.yaml (3)

1-5: New Metrics ConfigMap Introduction

This new ConfigMap resource for Kafka and Zookeeper metrics is well structured, with clear metadata and data sections. It cleanly separates the configurations for each component.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected , but found ''

(syntax)


6-167: Comprehensive Kafka Metrics Rules

The Kafka metrics configuration includes detailed regex patterns covering various metric types (GAUGE, COUNTER, etc.). Given the complexity, please ensure that each pattern is thoroughly tested with the JMX Prometheus Exporter so that label extraction and metric naming behave as expected.


168-190: Zookeeper Metrics Configuration

The Zookeeper metrics rules are similarly structured and provide for metrics extraction. Confirm that the regex patterns accurately capture the necessary data and that any special characters are properly escaped.

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/060-Deployment-strimzi-cluster-operator.yaml (1)

143-146: Review Pod Disruption Budget Environment Variable Logic

A conditional block has been added to inject the STRIMZI_POD_DISRUPTION_BUDGET_GENERATION environment variable when .Values.generatePodDisruptionBudget is not equal to true. Since the default in values is set to true, this block will not be rendered under default settings. Please verify that this behavior is intended and aligns with the operator’s requirements.
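A minimal sketch of the conditional in question, assuming the exact template text may differ from the chart:

```yaml
# Env var injected only when the flag is NOT true (default is true, so this
# block is normally skipped)
{{- if ne .Values.generatePodDisruptionBudget true }}
- name: STRIMZI_POD_DISRUPTION_BUDGET_GENERATION
  value: "false"
{{- end }}
```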

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/049-Crd-kafkarebalance.yaml (3)

34-41: Enhanced AdditionalPrinterColumns

A new printer column Status (with JSON path .status.conditions[*].type) has been added. This should improve visibility into the current rebalancing status. Please confirm that the wildcard correctly captures the intended conditions.
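The new column would take approximately this shape in the CRD (the description text is illustrative; the name and JSON path are as stated above):

```yaml
# Approximate shape of the new printer column in the KafkaRebalance CRD
additionalPrinterColumns:
  - name: Status
    description: Current condition types of the KafkaRebalance  # illustrative
    jsonPath: .status.conditions[*].type
    type: string
```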


57-64: Updated Rebalance Modes with remove-disks

The mode property now includes an additional enum value (remove-disks) along with an updated description that reflects its functionality. Ensure that both the documentation and the operator’s logic fully support this new mode.


105-120: New Property moveReplicasOffVolumes Addition

A new property, moveReplicasOffVolumes, has been introduced to enable finer control over replica movement by specifying brokers and the corresponding volume IDs. Please make sure the expected object schema is well documented and that the operator properly implements this feature.
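A hypothetical resource exercising the new mode might look like the following; the field names under moveReplicasOffVolumes (brokerId, volumeIds) are assumptions inferred from the description, not confirmed against the schema:

```yaml
# Hypothetical KafkaRebalance using the new remove-disks mode
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: remove-disks-example
spec:
  mode: remove-disks
  moveReplicasOffVolumes:
    - brokerId: 0        # assumed field name
      volumeIds: [1, 2]  # assumed field name
```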

packages/apps/tenant/templates/tenant.yaml (5)

37-41: RBAC Rule for Default Role - WorkloadMonitors Permission
The new rule correctly grants the get, list, and watch verbs on the workloadmonitors resource under the cozystack.io API group. The YAML indentation and list syntax are sound.
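For concreteness, the rule described above has this shape (API group, resource, and verbs taken directly from the PR):

```yaml
# RBAC rule granting read-only access to workloadmonitors
- apiGroups: ["cozystack.io"]
  resources: ["workloadmonitors"]
  verbs: ["get", "list", "watch"]
```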


110-114: RBAC Update for View Role
The addition of the workloadmonitors rule in the view role is consistent with the default role. This uniformly extends read-only access for workload monitors across roles.


187-191: RBAC Update for Use Role
The newly added workloadmonitors permission in the use role grants get, list, and watch as needed. Ensure that these privileges align with your security model and intended use cases.


283-287: RBAC Update for Admin Role
The admin role now includes the same workloadmonitors access, enabling administrators to view these resources. The consistency across roles is commendable.


356-360: RBAC Update for Super Admin Role
Adding the workloadmonitors rule here provides super admins with read access over these resources. This is coherent with the other role definitions.

packages/system/kafka-operator/templates/prometheus-rules.yaml (1)

1-76: New PrometheusRule for Kafka Monitoring
A comprehensive set of alerting rules is introduced for Kafka and related components (Zookeeper, Entity Operator, Connect, Bridge, MirrorMaker, Kafka Exporter, and Certificates). The expressions appear well structured and meaningful. As a follow-up, please verify that all alert durations (e.g. for: 10s, for: 5m) and threshold values truly reflect the operational expectations in your environment.
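The rules in the file follow the standard PrometheusRule shape; the alert name, expression, and thresholds below are hypothetical, sketched only to show where the `for:` durations and threshold values under review live:

```yaml
# Illustrative alert shape; name, expr, and values are not from the PR
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kafka-alerts
spec:
  groups:
    - name: kafka
      rules:
        - alert: KafkaUnderReplicatedPartitions   # hypothetical alert
          expr: kafka_server_replicamanager_underreplicatedpartitions > 0
          for: 5m
          labels:
            severity: warning
```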

packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md (1)

9-16: Update Migration and Deprecation Notices
The changes clearly inform users that Strimzi 0.45 is the final version supporting ZooKeeper-based Kafka clusters and MirrorMaker 1, and it details the migration requirements for KRaft and MirrorMaker 2. The provided links and deprecation notes are helpful.

packages/system/kafka-operator/charts/strimzi-kafka-operator/files/grafana-dashboards/strimzi-kraft.json (1)

1076-1158: New Grafana Panel: Current Raft State
A new table panel titled "Current Raft State" has been added to display the node’s role in KRaft mode. The panel uses the query
  max(kafka_server_raftmetrics_current_state) by (kubernetes_pod_name, current_state)
and applies a transformation to reorganize and rename columns for clarity. Please verify that this query returns the expected aggregated state data and that the table transformation meets your visualization needs.

dashboards/kafka/strimzi-kafka.json (26)

1-17: Annotations Configuration Verification
The annotations block is well defined. It uses a built‐in annotation with the “grafana” datasource and proper flags (e.g. "enable": true and "hide": true). This configuration will allow built-in annotations and alerts to display correctly.


18-23: Dashboard Metadata and General Settings
The general dashboard properties (editable flag, fiscalYearStartMonth, graphTooltip, id, and links) are defined appropriately. Consider adding any descriptive metadata (such as "description" or "tags") if future users need more context.


24-103: Brokers Online Panel
This stat panel titled "Brokers Online" correctly uses a Prometheus query to count broker metrics. The fieldConfig includes a mapping for null values (displaying “N/A”) and a clear set of thresholds. Verify that the threshold values (null, 0, 2) are appropriate for your deployment.


104-181: Active Controllers Panel
The panel shows the number of active controllers using a sum query. Configuration such as the color thresholds, mapping, and layout mirror the style of the previous panel. It would be good to double‑check that the threshold step values (using colors like "#299c46", "#e5ac0e", "#bf1b00") match your operational expectations.


182-259: Unclean Leader Election Rate Panel
The "Unclean Leader Election Rate" panel uses an irate() query over a 5‑minute window, which is appropriate to capture recent changes. The mapping for null values and the threshold settings are clear. Verifying that the chosen thresholds truly reflect a problematic rate in your cluster is recommended.


260-337: Online Replicas Panel
This panel titled "Online Replicas" accurately displays the number of replicas online using a summing query. The thresholds and unit settings are consistent with the overall dashboard theme.


338-416: Under Replicated Partitions Panel
The configuration for the under-replicated partitions is straightforward with a clear Prometheus query and threshold definitions.


417-496: Partitions at Minimum In‑Sync Replica (ISR) Panel
This panel indicates partitions that exactly meet the minimum in‑sync replica count. The use of a special mapping (displaying “0” with a color change) is inventive and clear.


497-576: Under Minimum ISR Panel – Threshold Duplication Check
In this panel, note that the threshold steps include two entries with the value 1 (one with color "#ef843c" and the next with color "#bf1b00"). This duplicate threshold value may be unintentional. Please verify whether this is by design or if the second threshold should use a different boundary to differentiate performance states.


577-655: Offline Partitions Count Panel
The panel for displaying offline partition counts is configured correctly using a summing query and common style settings.


656-668: Row Grouping for Kafka Panels
The collapsed row titled "Kafka" is used to group related panels. This improves the dashboard’s readability and organization.


670-761: Kafka Broker Pods Memory Usage Panel
This timeseries panel visualizes memory usage of Kafka broker pods. The configuration—including legend display and thresholds—is consistent. Make sure that the unit ("bytes") and the defined thresholds align with your memory monitoring goals.


762-854: Aggregated Kafka Broker Pods CPU Usage Panel
The CPU usage panel employs a rate() function over container CPU usage and organizes data by pod. The style and options are consistent with similar panels. Confirm that both the query and the interval settings are optimal for your monitoring needs.


855-947: Kafka Broker Pods Disk Usage Panel
This panel displays disk usage with a focus on available disk space. The query aggregates metrics per persistent volume claim, and styling settings (palette-classic, thresholds) are applied. Consider verifying performance when many pods are monitored simultaneously.


948-1040: Open File Descriptors Panel
The panel for open file descriptors is well configured, using a sum() aggregation over container metrics. Its visual settings match the other panels, ensuring consistency.


1041-1131: JVM Memory Used Panel
This timeseries panel displays JVM memory usage per pod using the metric jvm_memory_used_bytes. The legend configuration and aggregation support clear monitoring; ensure that the data source labels match your Kafka deployment’s conventions.


1132-1222: JVM GC Time Panel
The garbage collection time panel uses an irate() query to capture GC performance. The unit settings appear correct; consider whether additional units or scaling (e.g. seconds vs milliseconds) might improve readability.


1223-1313: JVM GC Count Panel
This panel reflects the GC count using a summation over rate metrics. Its configuration is consistent with other JVM panels.


1314-1405: JVM Thread Count Panel
The panel displaying JVM thread count is clear and maintains consistent styling. Verifying that the metric captures all necessary thread types is suggested.


1406-1486: Total Incoming Byte Rate Panel
This stat panel uses an irate-based query to derive the incoming byte rate. The thresholds and text mode provide immediate feedback.


1487-1567: Total Outgoing Byte Rate Panel
The outgoing byte rate panel mirrors the incoming byte rate panel with analogous configuration. Ensure that both panels are synchronized in terms of refresh intervals and threshold settings.


1568-1648: Incoming Messages Rate Panel
This stat panel calculates the rate of incoming messages with an irate() query. The mapping for null values and threshold steps are applied consistently.


1649-1729: Total Produce Request Rate Panel
This panel illustrates the total produce request rate and is clearly configured with proper legend formatting. The thresholds and aggregation mirror the design of similar panels for consistency.


1730-1833: Byte Rate Panel
By combining targets for both incoming and outgoing byte rates, this timeseries panel offers a comprehensive view of data flow. The dual-target approach and clear legend formatting are both effective.


2823-2929: Templating and Variable Configuration
The dashboard templating is well set up, defining variables such as DS_PROMETHEUS, kubernetes_namespace, strimzi_cluster_name, kafka_broker, kafka_topic, and kafka_partition. The use of regex expressions to capture values from query results provides dynamic filtering. Double-check that the regex patterns extract the intended segments from the metrics.


2930-2940: Time Settings and Dashboard Metadata
The time range (from “now-1h” to “now”), refresh interval ("5s"), and other metadata (title, uid, version) are correctly defined, ensuring that the dashboard displays near‑real‑time data. Verify that the UID is unique within your Grafana instance for proper dashboard management.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/040-Crd-kafka.yaml (7)

483-495: LGTM: New template properties for host configuration

The new hostTemplate and advertisedHostTemplate properties provide flexible hostname generation for brokers using placeholders like {nodeId} and {nodePodName}. The allocateLoadBalancerNodePorts property aligns with Kubernetes Service configuration.
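A sketch of a listener using these fields, assuming illustrative hostnames and port values:

```yaml
# Hedged sketch of an external listener with the new template properties
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    configuration:
      allocateLoadBalancerNodePorts: false
      hostTemplate: "broker-{nodeId}.kafka.example.com"            # placeholder domain
      advertisedHostTemplate: "broker-{nodeId}.kafka.example.com"  # placeholder domain
```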


1576-1594: LGTM: CSI volume support added

The addition of CSI volume support enables integration with Container Storage Interface drivers, providing more storage options for Kafka deployments.


6411-6432: LGTM: Auto-rebalancing configuration added

The new autoRebalance array property enables configuring automatic rebalancing when brokers are added or removed, with mode-specific templates. This improves cluster scaling operations.


8333-8364: LGTM: Auto-rebalancing status tracking

The new autoRebalance status field provides visibility into rebalancing operations with:

  • Current state (Idle/RebalanceOnScaleDown/RebalanceOnScaleUp)
  • Last transition timestamp
  • Mode-specific broker lists for running/queued operations

8320-8332: LGTM: Enhanced metadata state tracking

The kafkaMetadataState enum now includes detailed states for tracking the ZooKeeper to KRaft migration process:

  • ZooKeeper
  • KRaftMigration
  • KRaftDualWriting
  • KRaftPostMigration
  • PreKRaft
  • KRaft

This provides better visibility into the migration status.


1832-1874: LGTM: Environment variable value sourcing

The new valueFrom field enables sourcing environment variable values from Secrets and ConfigMaps across multiple container templates, improving configuration flexibility and security.

Also applies to: 4540-4582, 6014-6056, 6157-6199, 6683-6725, 7165-7207, 7826-7868, 8081-8123
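A minimal sketch of the new sourcing mechanism in a container template; the secret name and key are placeholders:

```yaml
# Env var sourced from a Secret via the new valueFrom field
template:
  kafkaContainer:
    env:
      - name: EXAMPLE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: example-secret  # placeholder
            key: password         # placeholder
```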


1-8364: Verify Kubernetes version compatibility

The CRD uses apiextensions.k8s.io/v1 API version. Let's verify the minimum supported Kubernetes version for this API version.

✅ Verification successful

🌐 Web query:

What is the minimum Kubernetes version required for apiextensions.k8s.io/v1 CRDs?

💡 Result:

The minimum Kubernetes version required to use apiextensions.k8s.io/v1 CRDs is v1.16+. This requirement arises because:

  • The apiextensions.k8s.io/v1 API became generally available (GA) in Kubernetes 1.16[3][6][9].
  • Prior versions (≤1.15) only support the older apiextensions.k8s.io/v1beta1 CRD API version[3][4][6].
  • Key differences driving this requirement:
    • v1 CRDs require structural schemas and schema validation[4][7]
    • Storage version handling was formalized in v1[4]
    • Conversion webhooks became mandatory for multi-version CRDs[4]

Projects like cert-manager enforce this by requiring Kubernetes 1.16+ when using their v1 CRD manifests[3]. Attempting to use apiextensions.k8s.io/v1 on older clusters results in errors like No kind "CustomResourceDefinition" is registered for version "apiextensions.k8s.io/v1"[8].


Kubernetes CRD API Version Compatibility Verified

The CRD uses apiextensions.k8s.io/v1, which requires Kubernetes v1.16+ as confirmed by the web query.

Comment on lines 77 to 82
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: {{ .Release.Name }}-metrics
          key: kafka-metrics-config.yml
Contributor

⚠️ Potential issue

Zookeeper Metrics Configuration Key

The metricsConfig added under the Zookeeper section currently references the key kafka-metrics-config.yml. Given that the accompanying ConfigMap is expected to contain a dedicated configuration for Zookeeper (likely named zookeeper-metrics-config.yml), please verify if this is intentional or if it should be updated.

Proposed diff if a change is intended:

-          key: kafka-metrics-config.yml
+          key: zookeeper-metrics-config.yml
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Original:

    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: {{ .Release.Name }}-metrics
          key: kafka-metrics-config.yml

Suggested:

    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: {{ .Release.Name }}-metrics
          key: zookeeper-metrics-config.yml

Member

@lllamnyp lllamnyp left a comment

LGTM

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Feb 11, 2025
@klinch0 klinch0 force-pushed the feature/add-workloadmonitors-roles branch from 7caf632 to 4cc48e6 on February 13, 2025 at 10:33
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:XXL This PR changes 1000+ lines, ignoring generated files. labels Feb 13, 2025
@lllamnyp lllamnyp merged commit eefef8b into aenix-io:main Feb 13, 2025
1 check failed
Labels
enhancement New feature or request lgtm This PR has been approved by a maintainer size:M This PR changes 30-99 lines, ignoring generated files.