feature/add-workloadmonitors-roles #626
Conversation
Walkthrough
This pull request updates version information and RBAC configurations for the tenant application. The Chart.yaml now reflects version 1.6.8 compared to 1.6.7, and the versions_map has been revised with a new commit reference and version entry. Additionally, new RBAC rules for the `workloadmonitors` resource have been added to the tenant roles.
Actionable comments posted: 1
🧹 Nitpick comments (19)
packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/047-Crd-kafkaconnector.yaml (2)

95-106: Verify the new `listOffsets` property definition.
The newly added `listOffsets` property is well defined:
- It correctly specifies its type as an object with detailed sub-properties.
- The `toConfigMap` field is nested appropriately and marked as required.
- The description clearly indicates its purpose for writing the list of offsets to a ConfigMap.

This structure is consistent with similar CRD modifications in the environment (e.g., KafkaMirrorMaker2). Consider, for future improvements, adding validation examples or constraints if naming conventions for ConfigMaps must be enforced.

107-118: Verify the new `alterOffsets` property definition.
The `alterOffsets` addition mirrors the pattern used in `listOffsets`:
- It includes a required `fromConfigMap` sub-property with its type and description clearly specified.
- The structure and explanatory notes are consistent, enhancing clarity about its use for storing new offset values.

The alignment with other CRD enhancements is a strong point. If applicable later, you might add further validation or examples to demonstrate correct usage.
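For illustration, a `KafkaConnector` using both properties might look like the sketch below; the connector class, cluster label, and ConfigMap names are placeholders, not values from this repository.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect           # placeholder KafkaConnect cluster name
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1
  listOffsets:
    toConfigMap:
      name: my-connector-offsets             # ConfigMap the offsets list is written to
  alterOffsets:
    fromConfigMap:
      name: my-connector-new-offsets         # ConfigMap holding the replacement offsets
```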
packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/046-Crd-kafkabridge.yaml (1)

1192-1233: Enhanced Environment Variable Sourcing for `bridgeContainer`
The new `valueFrom` property in the `env` array allows environment variables to be sourced from a Kubernetes Secret or ConfigMap as an alternative to providing a direct value. The use of a `oneOf` construct to require either a direct `value` or a `valueFrom` reference is a clean solution that improves flexibility and security. Verify that all container configurations using this schema are updated accordingly.
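A minimal sketch of what the new schema permits on the bridge container template; all names and keys below are placeholders.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
  template:
    bridgeContainer:
      env:
        - name: LOG_FORMAT
          value: json                        # direct value (existing behaviour)
        - name: API_TOKEN
          valueFrom:                         # new: sourced from a Secret
            secretKeyRef:
              name: bridge-secrets
              key: api-token
        - name: EXTRA_OPTS
          valueFrom:                         # new: sourced from a ConfigMap
            configMapKeyRef:
              name: bridge-config
              key: extra-opts
```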
packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/045-Crd-kafkamirrormaker.yaml (1)

1335-1377: Dynamic Environment Variable Sourcing in `mirrorMakerContainer`
A new `valueFrom` block has been introduced for environment variables within the `mirrorMakerContainer` definition. This allows values to be populated from either a Secret or a ConfigMap, using the same `oneOf` pattern to enforce that one of the two is provided. This change increases the flexibility for securing sensitive configuration data.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/041-Crd-kafkaconnect.yaml (2)
1106-1125: CSI Volume Support Addition in Volume Specification
A new `csi` property has been introduced in the volume definition. This block defines a CSI volume's schema—including properties such as `driver`, `fsType`, `nodePublishSecretRef`, `readOnly`, and `volumeAttributes`—to enable Container Storage Interface support. Please verify that these fields align with expected CSI volume definitions and that documentation reflects the revised schema.
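As a rough sketch of what such a volume could look like on a KafkaConnect resource, assuming the additional-volume mechanism under `template.pod.volumes` and `template.connectContainer.volumeMounts`; the driver and attributes are placeholders.

```yaml
spec:
  template:
    pod:
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io    # example CSI driver
            readOnly: true
            volumeAttributes:
              secretProviderClass: my-provider  # driver-specific attribute (placeholder)
    connectContainer:
      volumeMounts:
        - name: secrets-store
          mountPath: /mnt/secrets-store
```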
1214-1255: Dynamic Environment Variable Sourcing via `valueFrom`
A new `valueFrom` block has been added under the environment variable definition of the `connectContainer`. This addition allows environment variable values to be sourced dynamically from either a Secret (via `secretKeyRef`) or a ConfigMap (via `configMapKeyRef`). The use of `oneOf` to enforce mutual exclusivity with a direct `value` is a robust way to ensure correct configuration. Ensure that related documentation and any unit tests for CRD validation are updated accordingly.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/048-Crd-kafkamirrormaker2.yaml (4)
340-363: Offset Management Properties in Source Connector
New `listOffsets` and `alterOffsets` properties have been added under the `sourceConnector` specification.
- The `listOffsets` configuration requires a `toConfigMap` field that references a ConfigMap where the list of offsets will be written.
- The `alterOffsets` configuration requires a `fromConfigMap` field that references a ConfigMap holding new offset values.

These additions enhance the manageability of connector offsets. Please confirm that the required keys are documented and that similar changes in other connector specifications remain consistent.
396-419: Offset Management in Heartbeat Connector
The heartbeat connector now also supports the `listOffsets` and `alterOffsets` properties. As with the source connector, the same required structure is enforced (i.e. `toConfigMap` for listing and `fromConfigMap` for altering offsets). Verify that the schema for heartbeat connector offsets mirrors the source connector's configuration to maintain consistency.

452-475: Offset Management in Checkpoint Connector
In the checkpoint connector specification, the new `listOffsets` and `alterOffsets` properties make it possible to manage offsets for checkpointing. The definitions follow the same pattern as in the source and heartbeat connectors. Please ensure that these new configurations correctly support the operator's offset management flow and that required fields are properly validated.
2336-2354: CSI Volume Support in KafkaMirrorMaker2
A CSI volume option has been added to the volume specification in this CRD. The new `csi` property includes sub-properties for `driver`, `fsType`, `nodePublishSecretRef`, `readOnly`, and `volumeAttributes`. This addition mirrors the updates made in the Kafka Connect CRD, thereby enhancing consistency across CRDs. Please verify that these fields satisfy the necessary CSI volume requirements.

packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/020-RoleBinding-strimzi-cluster-operator.yaml (1)
5-5: Fix trailing spaces in the template.
There are trailing spaces at the end of line 5 that should be removed (the removed and added lines below differ only in the trailing whitespace).

```diff
-{{- $watchNamespaces = list -}} 
+{{- $watchNamespaces = list -}}
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 5-5: trailing spaces (trailing-spaces)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-RoleBinding-strimzi-cluster-operator.yaml (1)

4-6: Resetting `$watchNamespaces` Based on `watchAnyNamespace`
Good use of the conditional check to reset `$watchNamespaces` to an empty list when `watchAnyNamespace` is true. Note that YAMLlint flagged trailing spaces on line 5; please remove these extraneous spaces to avoid lint warnings.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 5-5: trailing spaces (trailing-spaces)
packages/apps/kafka/templates/podscrape.yaml (1)

9-10: Indentation Adjustment for RelabelConfigs
YAMLlint reported an indentation issue on line 10. The dash item (`- separator: ;`) should be indented two additional spaces (i.e. 8 spaces total from the parent level) to maintain proper YAML hierarchy.

Proposed diff:

```diff
-      - separator: ;
+        - separator: ;
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[warning] 10-10: wrong indentation: expected 8 but found 6 (indentation)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml (1)

4-6: Resetting `$watchNamespaces` Based on `watchAnyNamespace`
The conditional reset to an empty list for `$watchNamespaces` when `watchAnyNamespace` is set is correctly implemented. As in the previous file, trailing spaces (notably on line 5) should be removed to satisfy linting standards.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 5-5: trailing spaces (trailing-spaces)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-ClusterRole-strimzi-cluster-operator-role.yaml (1)

82-89: Indentation Issues in the New Deletion Rule
The new rule granting deletion rights for `kafkarebalances` is functionally correct; however, YAMLlint has flagged indentation issues on lines 86 and 88. Please adjust the indentation so that the nested keys align with a two-space indent relative to their parent.

Proposed diff:

```diff
-    - kafkarebalances
+  - kafkarebalances
```

And similarly for the verbs section:

```diff
-    - delete
+  - delete
```

Ensure the overall block resembles:

```yaml
- apiGroups:
  - "kafka.strimzi.io"
  resources:
  # The Cluster Operator needs deletion for KafkaRebalance only (during auto-rebalancing)
  - kafkarebalances
  verbs:
  - delete
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[warning] 86-86: wrong indentation: expected 2 but found 4 (indentation)
[warning] 88-88: wrong indentation: expected 2 but found 4 (indentation)
packages/system/kafka-operator/charts/strimzi-kafka-operator/values.yaml (1)

185-185: YAMLlint: Missing Newline at End of File
YAMLlint reports a missing newline at the end of the file. Please add a newline after the last line to comply with YAML best practices.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 185-185: no new line character at the end of file (new-line-at-end-of-file)
hack/download-dashboards.sh (1)

83-83: New Dashboard Entry for Strimzi Kafka
A new dashboard reference (`modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kafka/strimzi-kafka.json`) has been added. Please verify that this file exists, and consider removing the extra slash (`//`) if it is unintentional, to ensure consistent path formatting.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/049-Crd-kafkarebalance.yaml (1)
67-69: Review Description for `brokers` Field
The description for the `brokers` field mentions usage with rebalancing modes `add-brokers` and `removed-brokers`, while the enum value in `mode` is defined as `remove-brokers`. For consistency, please update the description to match the correct enum value.

packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md (1)
68-69: Prerequisites Version Update and List Style
The prerequisites now specify "Kubernetes 1.25+" instead of previous versions. Please ensure that this update is reflected consistently across all documentation. Also note that the unordered list here uses a dash; if your project conventions prefer asterisks for bullet lists, consider updating the list style to comply with your markdown guidelines.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
68-68: Unordered list style — Expected: asterisk; Actual: dash (MD004, ul-style)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (28)
- dashboards/kafka/strimzi-kafka.json (1 hunks)
- hack/download-dashboards.sh (1 hunks)
- packages/apps/kafka/Chart.yaml (1 hunks)
- packages/apps/kafka/templates/kafka.yaml (2 hunks)
- packages/apps/kafka/templates/metrics-configmap.yaml (1 hunks)
- packages/apps/kafka/templates/podscrape.yaml (1 hunks)
- packages/apps/tenant/Chart.yaml (1 hunks)
- packages/apps/tenant/templates/tenant.yaml (5 hunks)
- packages/apps/versions_map (2 hunks)
- packages/extra/monitoring/dashboards.list (1 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/Chart.yaml (2 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md (5 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/040-Crd-kafka.yaml (19 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/041-Crd-kafkaconnect.yaml (8 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/045-Crd-kafkamirrormaker.yaml (2 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/046-Crd-kafkabridge.yaml (3 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/047-Crd-kafkaconnector.yaml (1 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/048-Crd-kafkamirrormaker2.yaml (10 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/049-Crd-kafkarebalance.yaml (3 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/files/grafana-dashboards/strimzi-kraft.json (4 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/020-RoleBinding-strimzi-cluster-operator.yaml (1 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-ClusterRole-strimzi-cluster-operator-role.yaml (1 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-RoleBinding-strimzi-cluster-operator.yaml (1 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml (1 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/060-Deployment-strimzi-cluster-operator.yaml (1 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/_kafka_image_map.tpl (1 hunks)
- packages/system/kafka-operator/charts/strimzi-kafka-operator/values.yaml (3 hunks)
- packages/system/kafka-operator/templates/prometheus-rules.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- packages/apps/tenant/Chart.yaml
- packages/apps/kafka/Chart.yaml
- packages/system/kafka-operator/charts/strimzi-kafka-operator/Chart.yaml
🧰 Additional context used
🪛 YAMLlint (1.35.1)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-RoleBinding-strimzi-cluster-operator.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[error] 5-5: trailing spaces
(trailing-spaces)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[error] 5-5: trailing spaces
(trailing-spaces)
packages/apps/kafka/templates/podscrape.yaml
[warning] 10-10: wrong indentation: expected 8 but found 6
(indentation)
[error] 4-4: syntax error: expected , but found ''
(syntax)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-ClusterRole-strimzi-cluster-operator-role.yaml
[warning] 86-86: wrong indentation: expected 2 but found 4
(indentation)
[warning] 88-88: wrong indentation: expected 2 but found 4
(indentation)
packages/apps/kafka/templates/metrics-configmap.yaml
[error] 4-4: syntax error: expected , but found ''
(syntax)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/020-RoleBinding-strimzi-cluster-operator.yaml
[error] 5-5: trailing spaces
(trailing-spaces)
packages/system/kafka-operator/charts/strimzi-kafka-operator/values.yaml
[error] 185-185: no new line character at the end of file
(new-line-at-end-of-file)
🪛 LanguageTool
packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md
[style] ~34-~34: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...fka® cluster via the HTTP protocol. * Includes Cruise Control - Automates the proces...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🪛 markdownlint-cli2 (0.17.2)
packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md
68-68: Unordered list style
Expected: asterisk; Actual: dash
(MD004, ul-style)
🔇 Additional comments (71)
packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/046-Crd-kafkabridge.yaml (3)

1095-1113: New CSI Volume Source Configuration Added
A new `csi` volume type is introduced here with properties such as `driver`, `fsType`, `nodePublishSecretRef`, `readOnly`, and `volumeAttributes`. This addition expands the data mounting options for Kafka Bridge deployments.

1120-1120: Including CSI in Volume Type Alternatives
The update in the `oneOf` block (adding `csi: {}`) integrates the new volume type as a valid option alongside `secret`, `configMap`, `emptyDir`, and `persistentVolumeClaim`. Ensure that this matches the intended CRD design and that any downstream validations are updated accordingly.

1371-1412: Consistent Environment Variable Configuration for `initContainer`
A similar `valueFrom` block has been added to the `initContainer` section. This consistency between the primary container and the init container is beneficial for managing secrets and config maps uniformly. Please ensure that the `oneOf` enforcement is behaving as intended in runtime validations.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/045-Crd-kafkamirrormaker.yaml (2)
1271-1290: New CSI Volume Type in MirrorMaker CRD
The addition of the `csi` volume type with properties like `driver`, `fsType`, `nodePublishSecretRef`, `readOnly`, and `volumeAttributes` in the volumes section aligns with the changes implemented for KafkaBridge. This provides enhanced options for managing storage via the CSI interface. Confirm alignment with your cluster's CSI drivers and documentation.

1296-1296: Expansion of Valid Volume Options with CSI
By extending the `oneOf` list with an empty mapping for `csi`, the CRD now accepts CSI-based volume definitions as an alternative. Ensure downstream controllers can correctly process this new volume type alongside existing volume definitions.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/041-Crd-kafkaconnect.yaml (1)
1126-1131: Extend oneOf Block to Include CSI
The `oneOf` block now explicitly accepts volumes with a `csi: {}` field alongside the traditional options (`secret`, `configMap`, `emptyDir`, and `persistentVolumeClaim`). This change ensures that CSI volumes are valid under the CRD schema. Confirm that validation tests cover this new option.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/048-Crd-kafkamirrormaker2.yaml (1)
2355-2362: Extending Volume oneOf Block for CSI
The `oneOf` block in the volumes section now includes an option for a `csi: {}` volume. This ensures that volumes defined with the new CSI specification are accepted. Confirm that the schema properly enforces mutual exclusion among the different volume source types and that existing manifests using older volume types remain valid.

packages/extra/monitoring/dashboards.list (1)

37-37: LGTM! Addition of Strimzi Kafka dashboard.
The addition of the Strimzi Kafka dashboard entry is consistent with the PR objectives and enhances the monitoring capabilities.
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/_kafka_image_map.tpl (1)
9-11: Verify compatibility of the new Kafka versions.
The version updates from 3.8.0 to 3.9.0 and the addition of 3.8.1 look good. However, let's verify the compatibility of these versions with the rest of the system.
Also applies to: 15-16, 20-21, 25-26, 30-31
✅ Verification successful
🌐 Web query:
What are the latest stable versions of Strimzi Kafka Operator and what Kafka versions do they support?
💡 Result:
The latest stable version of Strimzi Kafka Operator is 0.45.0, released with support for Apache Kafka 3.8.x and 3.9.x[1][4][7]. Here's a detailed breakdown:
Supported Kafka Versions by Strimzi Releases

| Strimzi Version | Supported Apache Kafka Versions | Kubernetes Support |
|-----------------|---------------------------------|--------------------|
| 0.45.x          | 3.8.1, 3.9.0                    | 1.25+              |
| 0.44.x          | 3.7.x, 3.8.0                    | 1.25+              |
| 0.43.x          | 3.7.x, 3.8.0                    | 1.23+              |
| 0.42.x          | 3.6.x, 3.7.x                    | 1.23+              |

Key details:
- Strimzi 0.45 adds support for Kafka 3.9.0 and drops older KRaft metadata versions[4][8]
- From Strimzi 0.44 onward:
  - Minimum Kubernetes version is 1.25[6][7]
  - Last version supporting ZooKeeper-based clusters (deprecated)[5][9]
- Earlier releases:
  - Strimzi 0.41 supported Kafka 3.6-3.7 with JBOD storage in KRaft mode[4]
  - Strimzi 0.39 introduced KRaft upgrades[4]
Version Compatibility Notes
- Always verify Kafka metadata version compatibility when upgrading[8]
- Container images are available via Quay.io registry with SHA256 digests[7]
- Full changelogs available in GitHub releases[4][7]
For production environments using Kubernetes ≥1.25, Strimzi 0.45 with Kafka 3.9 is currently recommended[1][7][9].
Citations:
- 1: https://strimzi.io/downloads/
- 2: https://github.com/strimzi/strimzi-kafka-operator/blob/main/kafka-versions.yaml
- 3: https://strimzi.io/docs/operators/latest/full/overview
- 4: https://github.com/strimzi/strimzi-kafka-operator/blob/main/CHANGELOG.md?plain=1
- 5: https://operatorhub.io/operator/strimzi-kafka-operator
- 6: https://staging.artifacthub.io/packages/olm/community-operators/strimzi-kafka-operator/0.43.0
- 7: https://github.com/strimzi/strimzi-kafka-operator/releases
- 8: https://strimzi.io/docs/operators/latest/deploying
- 9: https://strimzi.io
- 10: https://strimzi.io/docs/operators/latest/configuring.html
Compatibility Verified: The updated image tags and Kafka versions are compatible with the supported versions.
Based on the latest stable Strimzi Kafka Operator (version 0.45.0) documentation:
- Supported Kafka Versions: Kafka 3.8.x (including 3.8.1) and 3.9.0 are fully supported.
- Image Tags in Templates: The usage of `-kafka-3.9.0` in the templates (lines 9-11 and similarly on lines 15-16, 20-21, 25-26, and 30-31) aligns with the supported versions.
- Operator Compatibility: For production environments, Strimzi 0.45.0 with Kafka 3.9.0 is recommended on supported Kubernetes versions (≥1.25).
No issues have been identified regarding compatibility.
packages/apps/versions_map (1)
26-27: LGTM! Version updates for kafka and tenant packages.
The version updates follow the established pattern with proper commit references for fixed versions and HEAD for latest versions.
Also applies to: 104-105
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/020-RoleBinding-strimzi-cluster-operator.yaml (1)
3-7: LGTM! Namespace watching logic implementation.
The implementation of the namespace watching logic, with proper conditionals for RoleBinding vs. ClusterRoleBinding, is well structured and follows best practices.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 5-5: trailing spaces (trailing-spaces)
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/023-RoleBinding-strimzi-cluster-operator.yaml (3)

1-3: Conditional Block Initialization and Variable Definition
The initial block correctly checks for `rbac.create` and sets the root context as well as initializing `$watchNamespaces`.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-' (syntax)

7-14: Binding Kind Determination in the Range Loop
The range loop correctly appends the primary namespace to `$watchNamespaces` and conditionally sets the binding kind. Using `ClusterRoleBinding` when `watchAnyNamespace` is true and `RoleBinding` otherwise is clear and well implemented.

15-23: Metadata and Namespace Handling
The metadata section consistently uses the same binding name regardless of condition, and the namespace is conditionally included only when not watching any namespace. This is acceptable and meets the intended design.
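A condensed sketch of the pattern described in these comments; it is not a copy of the chart template, and the value keys (`rbac.create`, `watchNamespaces`, `watchAnyNamespace`) are taken from the review text.

```yaml
{{- if .Values.rbac.create -}}
{{- $root := . -}}
{{- $watchNamespaces := .Values.watchNamespaces -}}
{{- if .Values.watchAnyNamespace }}
{{- $watchNamespaces = list -}}
{{- end }}
{{- range $namespace := append $watchNamespaces $root.Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
{{- if $root.Values.watchAnyNamespace }}
kind: ClusterRoleBinding
{{- else }}
kind: RoleBinding
{{- end }}
metadata:
  name: strimzi-cluster-operator
  {{- if not $root.Values.watchAnyNamespace }}
  namespace: {{ $namespace }}
  {{- end }}
# subjects and roleRef omitted for brevity
{{- end }}
{{- end }}
```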
packages/apps/kafka/templates/podscrape.yaml (1)
1-5: VMPodScrape Resource Definition
The new `VMPodScrape` resource is defined appropriately. The use of Helm templating in the metadata (line 4) may trigger YAML lint errors, but these are likely false positives due to the templating syntax. Please verify that the rendered output is valid YAML.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found '' (syntax)
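A hedged sketch of what the rendered resource might resemble, assuming the VictoriaMetrics operator's `VMPodScrape` API; the metrics port name and selector labels are assumptions, not taken from the chart.

```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
metadata:
  name: {{ .Release.Name }}-kafka
spec:
  selector:
    matchLabels:
      strimzi.io/cluster: {{ .Release.Name }}   # assumed label
  podMetricsEndpoints:
    - port: tcp-prometheus                      # assumed metrics port name
      relabelConfigs:
        - separator: ;
          regex: __meta_kubernetes_pod_label_(.+)
          action: labelmap
```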
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml (3)
1-3: Conditional Block Initialization and Variable Definition
The initial block correctly checks `.Values.rbac.create` and sets up the root context with `$watchNamespaces`.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 1-1: syntax error: expected the node content, but found '-' (syntax)

7-14: Role Binding Kind Determination
The logic to choose between `ClusterRoleBinding` and `RoleBinding` based on the `watchAnyNamespace` flag is clear and effectively implemented.

15-19: Metadata and Namespace Handling for Entity Operator Delegation
The metadata section properly sets the name (`strimzi-cluster-operator-entity-operator-delegation`) and conditionally applies the namespace when not targeting all namespaces.

packages/apps/kafka/templates/kafka.yaml (1)
60-65: Kafka Metrics Configuration
The newly introduced `metricsConfig` section under the Kafka specification is well implemented with the `jmxPrometheusExporter` type. Ensure that the referenced ConfigMap (`{{ .Release.Name }}-metrics`) correctly contains the `kafka-metrics-config.yml` key.

packages/system/kafka-operator/charts/strimzi-kafka-operator/values.yaml (4)
13-13: Updated `defaultImageTag` Version
The image tag has been updated to `0.45.0`, which appears consistent with the intended upgrade. Please verify that related files (e.g., Chart.yaml and README.md) have been updated accordingly.

129-129: Updated `kafkaBridge.image.tag` Version
The Kafka Bridge image tag is now set to `0.31.1`. Ensure that downstream configurations and documentation are aligned with this new version.

183-183: Unchanged `connectBuildTimeoutMs` Value
Although the value remains `300000`, the formatting change may have been intentional. It would be good to confirm that this update is purely cosmetic and causes no functional change.

185-185: Added `generatePodDisruptionBudget` Option
A new configuration option `generatePodDisruptionBudget: true` has been introduced to control the creation of Pod Disruption Budget resources. Please ensure that the corresponding deployment template utilizes this flag correctly.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 185-185: no new line character at the end of file (new-line-at-end-of-file)
packages/apps/kafka/templates/metrics-configmap.yaml (3)

1-5: New Metrics ConfigMap Introduction
This new ConfigMap resource for Kafka and Zookeeper metrics is well structured, with clear metadata and data sections. It cleanly separates the configurations for each component.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found '' (syntax)

6-167: Comprehensive Kafka Metrics Rules
The Kafka metrics configuration includes detailed regex patterns covering various metric types (GAUGE, COUNTER, etc.). Given the complexity, please ensure that each pattern is thoroughly tested with the JMX Prometheus Exporter so that label extraction and metric naming behave as expected.

168-190: Zookeeper Metrics Configuration
The Zookeeper metrics rules are similarly structured and provide for metrics extraction. Confirm that the regex patterns accurately capture the necessary data and that any special characters are properly escaped.
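For context, a single rule in the JMX Prometheus Exporter format typically looks like the following; the pattern and metric name here are illustrative, not copied from the ConfigMap.

```yaml
lowercaseOutputName: true
rules:
  # Map a Kafka broker MBean counter to a Prometheus metric and
  # extract the topic label from the MBean name.
  - pattern: kafka.server<type=BrokerTopicMetrics, name=MessagesInPerSec, topic=(.+)><>Count
    name: kafka_server_brokertopicmetrics_messagesin_total
    type: COUNTER
    labels:
      topic: "$1"
```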
packages/system/kafka-operator/charts/strimzi-kafka-operator/templates/060-Deployment-strimzi-cluster-operator.yaml (1)

143-146: Review Pod Disruption Budget Environment Variable Logic
A conditional block has been added to inject the `STRIMZI_POD_DISRUPTION_BUDGET_GENERATION` environment variable when `.Values.generatePodDisruptionBudget` is not equal to true. Since the default in values is set to true, this block will not be rendered under default settings. Please verify that this behavior is intended and aligns with the operator's requirements.
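A sketch of the kind of conditional being described; the exact template in the chart may differ.

```yaml
env:
  # Rendered only when PDB generation is not left at its default of true.
  {{- if ne .Values.generatePodDisruptionBudget true }}
  - name: STRIMZI_POD_DISRUPTION_BUDGET_GENERATION
    value: {{ .Values.generatePodDisruptionBudget | quote }}
  {{- end }}
```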
packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/049-Crd-kafkarebalance.yaml (3)

34-41: Enhanced AdditionalPrinterColumns
A new printer column `Status` (with JSON path `.status.conditions[*].type`) has been added. This should improve visibility into the current rebalancing status. Please confirm that the wildcard correctly captures the intended conditions.

57-64: Updated Rebalance Modes with `remove-disks`
The `mode` property now includes an additional enum value (`remove-disks`) along with an updated description that reflects its functionality. Ensure that both the documentation and the operator's logic fully support this new mode.

105-120: New Property `moveReplicasOffVolumes` Addition
A new property, `moveReplicasOffVolumes`, has been introduced to enable finer control over replica movement by specifying brokers and the corresponding volume IDs. Please make sure the expected object schema is well documented and that the operator properly implements this feature.
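A hedged sketch combining the new mode and property on a KafkaRebalance resource; broker and volume IDs are placeholders.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-disks                 # new enum value
  moveReplicasOffVolumes:            # new property: brokers and their volume IDs
    - brokerId: 0
      volumeIds: [1, 2]
    - brokerId: 1
      volumeIds: [1]
```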
packages/apps/tenant/templates/tenant.yaml (5)

37-41: RBAC Rule for Default Role - WorkloadMonitors Permission
The new rule correctly grants the `get`, `list`, and `watch` verbs on the `workloadmonitors` resource under the `cozystack.io` API group. The YAML indentation and list syntax are sound.
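The rule being reviewed takes roughly this shape (surrounding Role metadata omitted):

```yaml
rules:
  - apiGroups:
      - cozystack.io
    resources:
      - workloadmonitors
    verbs:
      - get
      - list
      - watch
```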
110-114: RBAC Update for View Role
The addition of the `workloadmonitors` rule in the view role is consistent with the default role. This uniformly extends read-only access for workload monitors across roles.

187-191: RBAC Update for Use Role
The newly added workloadmonitors permission in the use role grants `get`, `list`, and `watch` as needed. Ensure that these privileges align with your security model and intended use cases.

283-287: RBAC Update for Admin Role
The admin role now includes the same workloadmonitors access, enabling administrators to view these resources. The consistency across roles is commendable.

356-360: RBAC Update for Super Admin Role
Adding the workloadmonitors rule here provides super admins with read access over these resources. This is coherent with the other role definitions.
packages/system/kafka-operator/templates/prometheus-rules.yaml (1)

1-76: New PrometheusRule for Kafka Monitoring
A comprehensive set of alerting rules is introduced for Kafka and related components (Zookeeper, Entity Operator, Connect, Bridge, MirrorMaker, Kafka Exporter, and Certificates). The expressions appear well structured and meaningful. As a follow-up, please verify that all alert durations (e.g. `for: 10s`, `for: 5m`) and threshold values truly reflect the operational expectations in your environment.
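For orientation, a PrometheusRule of this kind has the following structure; the alert name, expression, and threshold below are illustrative only and are not taken from the file under review.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kafka-alerts
spec:
  groups:
    - name: kafka
      rules:
        - alert: KafkaOfflinePartitions          # hypothetical alert
          expr: sum(kafka_controller_kafkacontroller_offlinepartitionscount) > 0
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Kafka reports offline partitions"
```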
packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md (1)

9-16: Update Migration and Deprecation Notices
The changes clearly inform users that Strimzi 0.45 is the final version supporting ZooKeeper-based Kafka clusters and MirrorMaker 1, and they detail the migration requirements for KRaft and MirrorMaker 2. The provided links and deprecation notes are helpful.
packages/system/kafka-operator/charts/strimzi-kafka-operator/files/grafana-dashboards/strimzi-kraft.json (1)

1076-1158: New Grafana Panel: Current Raft State
A new table panel titled "Current Raft State" has been added to display the node's role in KRaft mode. The panel uses the query `max(kafka_server_raftmetrics_current_state) by (kubernetes_pod_name, current_state)` and applies a transformation to reorganize and rename columns for clarity. Please verify that this query returns the expected aggregated state data and that the table transformation meets your visualization needs.

dashboards/kafka/strimzi-kafka.json (26)
1-17
: Annotations Configuration Verification
The annotations block is well defined. It uses a built-in annotation with the "grafana" datasource and proper flags (e.g. `"enable": true` and `"hide": true`). This configuration will allow built-in annotations and alerts to display correctly.
18-23
: Dashboard Metadata and General Settings
The general dashboard properties (editable flag, fiscalYearStartMonth, graphTooltip, id, and links) are defined appropriately. Consider adding any descriptive metadata (such as "description" or "tags") if future users need more context.
24-103
: Brokers Online Panel
This stat panel titled "Brokers Online" correctly uses a Prometheus query to count broker metrics. The fieldConfig includes a mapping for null values (displaying “N/A”) and a clear set of thresholds. Verify that the threshold values (null, 0, 2) are appropriate for your deployment.
104-181
: Active Controllers Panel
The panel shows the number of active controllers using a sum query. Configuration such as the color thresholds, mapping, and layout mirror the style of the previous panel. It would be good to double‑check that the threshold step values (using colors like "#299c46", "#e5ac0e", "#bf1b00") match your operational expectations.
182-259
: Unclean Leader Election Rate Panel
The "Unclean Leader Election Rate" panel uses an irate() query over a 5‑minute window, which is appropriate to capture recent changes. The mapping for null values and the threshold settings are clear. Verifying that the chosen thresholds truly reflect a problematic rate in your cluster is recommended.
260-337
: Online Replicas Panel
This panel titled "Online Replicas" accurately displays the number of replicas online using a summing query. The thresholds and unit settings are consistent with the overall dashboard theme.
338-416
: Under Replicated Partitions Panel
The configuration for the under-replicated partitions is straightforward with a clear Prometheus query and threshold definitions.
417-496
: Partitions at Minimum In‑Sync Replica (ISR) Panel
This panel indicates partitions that exactly meet the minimum in‑sync replica count. The use of a special mapping (displaying “0” with a color change) is inventive and clear.
497-576
: Under Minimum ISR Panel – Threshold Duplication Check
In this panel, note that the threshold steps include two entries with the value `1` (one with color "#ef843c" and the next with color "#bf1b00"). This duplicate threshold value may be unintentional. Please verify whether this is by design or if the second threshold should use a different boundary to differentiate performance states.
577-655
: Offline Partitions Count Panel
The panel for displaying offline partition counts is configured correctly using a summing query and common style settings.
656-668
: Row Grouping for Kafka Panels
The collapsed row titled "Kafka" is used to group related panels. This improves the dashboard’s readability and organization.
670-761
: Kafka Broker Pods Memory Usage Panel
This timeseries panel visualizes memory usage of Kafka broker pods. The configuration—including legend display and thresholds—is consistent. Make sure that the unit ("bytes") and the defined thresholds align with your memory monitoring goals.
762-854
: Aggregated Kafka Broker Pods CPU Usage Panel
The CPU usage panel employs a rate() function over container CPU usage and organizes data by pod. The style and options are consistent with similar panels. Confirm that both the query and the interval settings are optimal for your monitoring needs.
855-947
: Kafka Broker Pods Disk Usage Panel
This panel displays disk usage with a focus on available disk space. The query aggregates metrics per persistent volume claim, and styling settings (palette-classic, thresholds) are applied. Consider verifying performance when many pods are monitored simultaneously.
948-1040
: Open File Descriptors Panel
The panel for open file descriptors is well configured, using a sum() aggregation over container metrics. Its visual settings match the other panels, ensuring consistency.
1041-1131
: JVM Memory Used Panel
This timeseries panel displays JVM memory usage per pod using the metric `jvm_memory_used_bytes`. The legend configuration and aggregation support clear monitoring; ensure that the data source labels match your Kafka deployment's conventions.
1132-1222
: JVM GC Time Panel
The garbage collection time panel uses an irate() query to capture GC performance. The unit settings appear correct; consider whether additional units or scaling (e.g. seconds vs milliseconds) might improve readability.
1223-1313
: JVM GC Count Panel
This panel reflects the GC count using a summation over rate metrics. Its configuration is consistent with other JVM panels.
1314-1405
: JVM Thread Count Panel
The panel displaying JVM thread count is clear and maintains consistent styling. Verifying that the metric captures all necessary thread types is suggested.
1406-1486
: Total Incoming Byte Rate Panel
This stat panel uses an irate-based query to derive the incoming byte rate. The thresholds and text mode provide immediate feedback.
1487-1567
: Total Outgoing Byte Rate Panel
The outgoing byte rate panel mirrors the incoming byte rate panel with analogous configuration. Ensure that both panels are synchronized in terms of refresh intervals and threshold settings.
1568-1648
: Incoming Messages Rate Panel
This stat panel calculates the rate of incoming messages with an irate() query. The mapping for null values and threshold steps are applied consistently.
1649-1729
: Total Produce Request Rate Panel
This panel illustrates the total produce request rate and is clearly configured with proper legend formatting. The thresholds and aggregation mirror the design of similar panels for consistency.
1730-1833
: Byte Rate Panel
By combining targets for both incoming and outgoing byte rates, this timeseries panel offers a comprehensive view of data flow. The dual-target approach and clear legend formatting are both effective.
2823-2929
: Templating and Variable Configuration
The dashboard templating is well set up, defining variables such as `DS_PROMETHEUS`, `kubernetes_namespace`, `strimzi_cluster_name`, `kafka_broker`, `kafka_topic`, and `kafka_partition`. The use of regex expressions to capture values from query results provides dynamic filtering. Double-check that the regex patterns extract the intended segments from the metrics.
2930-2940
: Time Settings and Dashboard Metadata
The time range (from "now-1h" to "now"), refresh interval ("5s"), and other metadata (title, uid, version) are correctly defined, ensuring that the dashboard displays near-real-time data. Verify that the UID is unique within your Grafana instance for proper dashboard management.

packages/system/kafka-operator/charts/strimzi-kafka-operator/crds/040-Crd-kafka.yaml (7)
483-495: LGTM: New template properties for host configuration
The new `hostTemplate` and `advertisedHostTemplate` properties provide flexible hostname generation for brokers using placeholders like `{nodeId}` and `{nodePodName}`. The `allocateLoadBalancerNodePorts` property aligns with Kubernetes Service configuration.
1576-1594: LGTM: CSI volume support added
The addition of CSI volume support enables integration with Container Storage Interface drivers, providing more storage options for Kafka deployments.
6411-6432: LGTM: Auto-rebalancing configuration added
The new `autoRebalance` array property enables configuring automatic rebalancing when brokers are added or removed, with mode-specific templates. This improves cluster scaling operations.
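As a rough illustration of the shape this configuration takes; the placement under `cruiseControl` and the template names are assumptions based on the CRD description, not copied from the chart.

```yaml
spec:
  cruiseControl:
    autoRebalance:
      - mode: add-brokers              # rebalance after scale-up
        template:
          name: my-add-brokers-rebalance-template
      - mode: remove-brokers           # rebalance before scale-down
        template:
          name: my-remove-brokers-rebalance-template
```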
8333-8364: LGTM: Auto-rebalancing status tracking
The new `autoRebalance` status field provides visibility into rebalancing operations with:
- Current state (Idle/RebalanceOnScaleDown/RebalanceOnScaleUp)
- Last transition timestamp
- Mode-specific broker lists for running/queued operations
8320-8332: LGTM: Enhanced metadata state tracking
The `kafkaMetadataState` enum now includes detailed states for tracking the ZooKeeper to KRaft migration process:
- ZooKeeper
- KRaftMigration
- KRaftDualWriting
- KRaftPostMigration
- PreKRaft
- KRaft
This provides better visibility into the migration status.
1832-1874: LGTM: Environment variable value sourcing
The new `valueFrom` field enables sourcing environment variable values from Secrets and ConfigMaps across multiple container templates, improving configuration flexibility and security.

Also applies to: 4540-4582, 6014-6056, 6157-6199, 6683-6725, 7165-7207, 7826-7868, 8081-8123
1-8364: Verify Kubernetes version compatibility
The CRD uses the `apiextensions.k8s.io/v1` API version. Let's verify the minimum supported Kubernetes version for this API version.

✅ Verification successful
🌐 Web query:
What is the minimum Kubernetes version required for apiextensions.k8s.io/v1 CRDs?
💡 Result:
The minimum Kubernetes version required to use `apiextensions.k8s.io/v1` CRDs is v1.16+. This requirement arises because:
- The `apiextensions.k8s.io/v1` API became generally available (GA) in Kubernetes 1.16[3][6][9].
- Prior versions (≤1.15) only support the older `apiextensions.k8s.io/v1beta1` CRD API version[3][4][6].
- Key differences driving this requirement:
  - `v1` CRDs require structural schemas and schema validation[4][7]
  - Storage version handling was formalized in `v1`[4]
  - Conversion webhooks became mandatory for multi-version CRDs[4]

Projects like cert-manager enforce this by requiring Kubernetes 1.16+ when using their `v1` CRD manifests[3]. Attempting to use `apiextensions.k8s.io/v1` on older clusters results in errors like `No kind "CustomResourceDefinition" is registered for version "apiextensions.k8s.io/v1"`[8].

Citations:
- 1: Unable to use apiextensions.k8s.io/v1 as CRD version for envtest kubernetes-sigs/controller-runtime#1034
- 2: https://github.com/Azure/bicep-types-k8s/blob/main/generated/kubernetes/apiextensions.k8s.io/v1/types.md
- 3: https://cert-manager.io/docs/releases/upgrading/upgrading-0.16-1.0/
- 4: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/
- 5: https://www.pulumi.com/registry/packages/kubernetes/api-docs/apiextensions/v1beta1/customresourcedefinition/
- 6: [v2]What is the minimum Kubernetes version required to use keda? kedacore/keda#1160
- 7: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/
- 8: https://discuss.kubernetes.io/t/no-kind-customresourcedefinition-is-registered-for-version-apiextensions-k8s-io-v1/17366
- 9: https://kubernetes.io/docs/reference/using-api/deprecation-guide/
- 10: https://www.civo.com/learn/kubernetes-api-custom-resources-definitions
Kubernetes CRD API Version Compatibility Verified
The CRD uses `apiextensions.k8s.io/v1`, which requires Kubernetes v1.16+ as confirmed by the web query.
```yaml
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: {{ .Release.Name }}-metrics
          key: kafka-metrics-config.yml
```
Zookeeper Metrics Configuration Key
The `metricsConfig` added under the Zookeeper section currently references the key `kafka-metrics-config.yml`. Given that the accompanying ConfigMap is expected to contain a dedicated configuration for Zookeeper (likely named `zookeeper-metrics-config.yml`), please verify if this is intentional or if it should be updated.

Proposed diff if a change is intended:

```diff
-          key: kafka-metrics-config.yml
+          key: zookeeper-metrics-config.yml
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: {{ .Release.Name }}-metrics
          key: zookeeper-metrics-config.yml
```
LGTM
Force-pushed from 7caf632 to 4cc48e6 (Compare)
Summary by CodeRabbit
Chores
New Features