Merge pull request #98 from rhrmo/rebase-v5.0.1
STOR-1593: Rebase to upstream v5.0.1 for 4.17
openshift-merge-bot[bot] authored Jul 22, 2024
2 parents 9e8af01 + 4fe2f18 commit d8c6952
Showing 1,554 changed files with 125,274 additions and 50,987 deletions.
2 changes: 1 addition & 1 deletion .ci-operator.yaml
@@ -1,4 +1,4 @@
 build_root_image:
   name: release
   namespace: openshift
-  tag: rhel-9-release-golang-1.21-openshift-4.16
+  tag: rhel-9-release-golang-1.22-openshift-4.17
18 changes: 0 additions & 18 deletions CHANGELOG/CHANGELOG-4.0.md
@@ -1,21 +1,3 @@
-# Release notes for v4.0.1
-
-[Documentation](https://kubernetes-csi.github.io)
-
-
-## Dependencies
-
-### Added
-_Nothing has changed._
-
-### Changed
-- github.com/golang/protobuf: [v1.5.3 → v1.5.4](https://github.com/golang/protobuf/compare/v1.5.3...v1.5.4)
-- google.golang.org/protobuf: v1.31.0 → v1.33.0
-
-### Removed
-_Nothing has changed._
-
-
 # Release notes for v4.0.0
 
 [Documentation](https://kubernetes-csi.github.io)
310 changes: 310 additions & 0 deletions CHANGELOG/CHANGELOG-5.0.md

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions Dockerfile.openshift.rhel7
@@ -1,8 +1,8 @@
-FROM registry.ci.openshift.org/ocp/builder:rhel-9-golang-1.21-openshift-4.16 AS builder
+FROM registry.ci.openshift.org/ocp/builder:rhel-9-golang-1.22-openshift-4.17 AS builder
 WORKDIR /go/src/github.com/kubernetes-csi/external-provisioner
 COPY . .
 RUN make build
 
-FROM registry.ci.openshift.org/ocp/4.16:base-rhel9
+FROM registry.ci.openshift.org/ocp/4.17:base-rhel9
 COPY --from=builder /go/src/github.com/kubernetes-csi/external-provisioner/bin/csi-provisioner /usr/bin/
 ENTRYPOINT ["/usr/bin/csi-provisioner"]
12 changes: 9 additions & 3 deletions README.md
@@ -3,7 +3,8 @@
 The external-provisioner is a sidecar container that dynamically provisions volumes by calling `CreateVolume` and `DeleteVolume` functions of CSI drivers. It is necessary because internal persistent volume controller running in Kubernetes controller-manager does not have any direct interfaces to CSI drivers.
 
 ## Overview
-The external-provisioner is an external controller that monitors `PersistentVolumeClaim` objects created by user and creates/deletes volumes for them. Full design can be found at Kubernetes proposal at [container-storage-interface.md](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/container-storage-interface.md)
+The external-provisioner is an external controller that monitors `PersistentVolumeClaim` objects created by user and creates/deletes volumes for them.
+The [Kubernetes Container Storage Interface (CSI) Documentation](https://kubernetes-csi.github.io/docs/) explains how to develop, deploy, and test a Container Storage Interface (CSI) driver on Kubernetes.
 
 ## Compatibility
 
@@ -26,7 +27,7 @@ Following table reflects the head of this branch.
 | CSIStorageCapacity | GA | On | Publish [capacity information](https://kubernetes.io/docs/concepts/storage/volumes/#storage-capacity) for the Kubernetes scheduler. | No |
 | ReadWriteOncePod | Beta | On | [Single pod access mode for PersistentVolumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). | No |
 | CSINodeExpandSecret | Beta | On | [CSI Node expansion secret](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3107-csi-nodeexpandsecret) | No |
-| HonorPVReclaimPolicy| Alpha |Off | [Honor the PV reclaim policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2644-honor-pv-reclaim-policy) | No |
+| HonorPVReclaimPolicy| Beta | On | [Honor the PV reclaim policy](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2644-honor-pv-reclaim-policy) | No |
 | PreventVolumeModeConversion | Beta |On | [Prevent unauthorized conversion of source volume mode](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3141-prevent-volume-mode-conversion) | `--prevent-volume-mode-conversion` (No in-tree feature gate) |
 | CrossNamespaceVolumeDataSource | Alpha |Off | [Cross-namespace volume data source](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/3294-provision-volumes-from-cross-namespace-snapshots) | `--feature-gates=CrossNamespaceVolumeDataSource=true` |
 
@@ -138,7 +139,7 @@ protocol](https://github.com/kubernetes/design-proposals-archive/blob/main/stora
 The [design document](./doc/design.md) explains this in more detail.
 
 ### Topology support
-When `Topology` feature is enabled and the driver specifies `VOLUME_ACCESSIBILITY_CONSTRAINTS` in its plugin capabilities, external-provisioner prepares `CreateVolumeRequest.AccessibilityRequirements` while calling `Controller.CreateVolume`. The driver has to consider these topology constraints while creating the volume. Below table shows how these `AccessibilityRequirements` are prepared:
+When `Topology` feature is enabled* and the driver specifies `VOLUME_ACCESSIBILITY_CONSTRAINTS` in its plugin capabilities, external-provisioner prepares `CreateVolumeRequest.AccessibilityRequirements` while calling `Controller.CreateVolume`. The driver has to consider these topology constraints while creating the volume. Below table shows how these `AccessibilityRequirements` are prepared:
 
 [Delayed binding](https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode) | Strict topology | [Allowed topologies](https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies) | Immediate Topology | [Resulting accessibility requirements](https://github.com/container-storage-interface/spec/blob/master/spec.md#createvolume)
 :---: |:---:|:---:|:---:|:---|
@@ -149,6 +150,11 @@ No | Irrelevant | Yes | Irrelevant | `Requisite` = Allowed topologies<br>`Prefer
 No | Irrelevant | No | Yes | `Requisite` = Aggregated cluster topology<br>`Preferred` = `Requisite` with randomly selected node topology as first element
 No | Irrelevant | No | No | `Requisite` and `Preferred` both nil
 
+*) `Topology` feature gate is enabled by default since v5.0.
+<!-- TODO: remove the feature gate in the next release - remove the whole column in the table above. -->
+
+When enabling topology support in a CSI driver that had it disabled, please make sure the topology is first enabled in the driver's node DaemonSet and topology labels are populated on all nodes. The topology can be then updated in the driver's Deployment and its external-provisioner sidecar.
+
 ### Capacity support
 
 The external-provisioner can be used to create CSIStorageCapacity
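To make the topology table in the README hunk above more concrete: when delayed binding is off and allowed topologies are set, the `CreateVolumeRequest.AccessibilityRequirements` handed to the driver carries the allowed topologies as `Requisite`, with `Preferred` typically being the same set reordered. Below is a minimal sketch of such a request using the CSI spec Go bindings; the topology key, zone values, and volume name are illustrative assumptions, not taken from this repository.

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func main() {
	// Hypothetical topology segments; real keys and values are whatever the
	// CSI driver reports in NodeGetInfo (for example a zone label).
	zoneA := &csi.Topology{Segments: map[string]string{"topology.example.com/zone": "zone-a"}}
	zoneB := &csi.Topology{Segments: map[string]string{"topology.example.com/zone": "zone-b"}}

	// Roughly the shape of what external-provisioner would pass to
	// Controller.CreateVolume when both zones appear in allowedTopologies:
	// Requisite lists the allowed topologies, Preferred is the same list with
	// the selected topology (zone-b here) moved to the front.
	req := &csi.CreateVolumeRequest{
		Name: "pvc-0123456789", // illustrative volume name
		AccessibilityRequirements: &csi.TopologyRequirement{
			Requisite: []*csi.Topology{zoneA, zoneB},
			Preferred: []*csi.Topology{zoneB, zoneA},
		},
	}
	fmt.Println(req.GetAccessibilityRequirements())
}
```

The driver is then expected to create the volume so that it is accessible from at least one of the `Requisite` topologies, preferring the first entries of `Preferred`.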
65 changes: 34 additions & 31 deletions cmd/csi-provisioner/csi-provisioner.go
@@ -52,9 +52,9 @@ import (
     _ "k8s.io/component-base/metrics/prometheus/clientgo/leaderelection" // register leader election in the default legacy registry
     _ "k8s.io/component-base/metrics/prometheus/workqueue" // register work queues in the default legacy registry
     csitrans "k8s.io/csi-translation-lib"
-    "k8s.io/klog/v2"
-    "sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller"
-    libmetrics "sigs.k8s.io/sig-storage-lib-external-provisioner/v9/controller/metrics"
+    klog "k8s.io/klog/v2"
+    "sigs.k8s.io/sig-storage-lib-external-provisioner/v10/controller"
+    libmetrics "sigs.k8s.io/sig-storage-lib-external-provisioner/v10/controller/metrics"
 
     "github.com/kubernetes-csi/csi-lib-utils/leaderelection"
     "github.com/kubernetes-csi/csi-lib-utils/metrics"
@@ -210,13 +210,13 @@ func main() {
         metrics.WithSubsystem(metrics.SubsystemSidecar),
     )
 
-    grpcClient, err := ctrl.Connect(*csiEndpoint, metricsManager)
+    grpcClient, err := ctrl.Connect(ctx, *csiEndpoint, metricsManager)
     if err != nil {
         klog.Error(err.Error())
         os.Exit(1)
     }
 
-    err = ctrl.Probe(grpcClient, *operationTimeout)
+    err = ctrl.Probe(ctx, grpcClient, *operationTimeout)
     if err != nil {
         klog.Error(err.Error())
         os.Exit(1)
@@ -244,15 +244,15 @@ func main() {
             // Will be provided via default gatherer.
             metrics.WithProcessStartTime(false),
             metrics.WithMigration())
-        migratedGrpcClient, err := ctrl.Connect(*csiEndpoint, metricsManager)
+        migratedGrpcClient, err := ctrl.Connect(ctx, *csiEndpoint, metricsManager)
         if err != nil {
             klog.Error(err.Error())
             os.Exit(1)
         }
         grpcClient.Close()
         grpcClient = migratedGrpcClient
 
-        err = ctrl.Probe(grpcClient, *operationTimeout)
+        err = ctrl.Probe(ctx, grpcClient, *operationTimeout)
         if err != nil {
             klog.Error(err.Error())
             os.Exit(1)
@@ -553,34 +553,20 @@ func main() {
         csiProvisioner = capacity.NewProvisionWrapper(csiProvisioner, capacityController)
     }
 
-    provisionController = controller.NewProvisionController(
-        clientset,
-        provisionerName,
-        csiProvisioner,
-        provisionerOptions...,
-    )
-
-    csiClaimController := ctrl.NewCloningProtectionController(
-        clientset,
-        claimLister,
-        claimInformer,
-        claimQueue,
-        controllerCapabilities,
-    )
-
-    // Start HTTP server, regardless whether we are the leader or not.
     if addr != "" {
-        // To collect metrics data from the metric handler itself, we
-        // let it register itself and then collect from that registry.
+        // Start HTTP server, regardless whether we are the leader or not.
+        // Register provisioner metrics manually to be able to add multiplexer in front of it
+        m := libmetrics.New("controller")
         reg := prometheus.NewRegistry()
         reg.MustRegister([]prometheus.Collector{
-            libmetrics.PersistentVolumeClaimProvisionTotal,
-            libmetrics.PersistentVolumeClaimProvisionFailedTotal,
-            libmetrics.PersistentVolumeClaimProvisionDurationSeconds,
-            libmetrics.PersistentVolumeDeleteTotal,
-            libmetrics.PersistentVolumeDeleteFailedTotal,
-            libmetrics.PersistentVolumeDeleteDurationSeconds,
+            m.PersistentVolumeClaimProvisionTotal,
+            m.PersistentVolumeClaimProvisionFailedTotal,
+            m.PersistentVolumeClaimProvisionDurationSeconds,
+            m.PersistentVolumeDeleteTotal,
+            m.PersistentVolumeDeleteFailedTotal,
+            m.PersistentVolumeDeleteDurationSeconds,
         }...)
+        provisionerOptions = append(provisionerOptions, controller.MetricsInstance(m))
         gatherers = append(gatherers, reg)
 
         // This is similar to k8s.io/component-base/metrics HandlerWithReset
@@ -611,6 +597,23 @@ func main() {
         }()
     }
 
+    logger := klog.FromContext(ctx)
+    provisionController = controller.NewProvisionController(
+        logger,
+        clientset,
+        provisionerName,
+        csiProvisioner,
+        provisionerOptions...,
+    )
+
+    csiClaimController := ctrl.NewCloningProtectionController(
+        clientset,
+        claimLister,
+        claimInformer,
+        claimQueue,
+        controllerCapabilities,
+    )
+
     run := func(ctx context.Context) {
         factory.Start(ctx.Done())
         if factoryForNamespace != nil {
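The main() changes above track the bump from sig-storage-lib-external-provisioner v9 to v10: `ctrl.Connect` and `ctrl.Probe` now take a context, provisioner metrics are created with `libmetrics.New` and handed over via `controller.MetricsInstance`, and `controller.NewProvisionController` now takes a logger as its first argument. A rough sketch of that v10 wiring in isolation, assuming a `provisioner` value that implements `controller.Provisioner`; the function name, registry handling, and provisioner string below are illustrative, not this repository's code.

```go
package main

import (
	"context"

	"github.com/prometheus/client_golang/prometheus"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	klog "k8s.io/klog/v2"
	"sigs.k8s.io/sig-storage-lib-external-provisioner/v10/controller"
	libmetrics "sigs.k8s.io/sig-storage-lib-external-provisioner/v10/controller/metrics"
)

func startController(ctx context.Context, cfg *rest.Config, provisioner controller.Provisioner) {
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// v10 no longer registers provisioner metrics implicitly: create a metrics
	// instance, register its collectors wherever metrics are served, and pass
	// it to the controller through the MetricsInstance option.
	m := libmetrics.New("controller")
	reg := prometheus.NewRegistry()
	reg.MustRegister(
		m.PersistentVolumeClaimProvisionTotal,
		m.PersistentVolumeClaimProvisionDurationSeconds,
		m.PersistentVolumeDeleteTotal,
		m.PersistentVolumeDeleteDurationSeconds,
	)

	options := []func(*controller.ProvisionController) error{
		controller.MetricsInstance(m),
	}

	// v10 takes a logger explicitly instead of relying on klog's global state.
	logger := klog.FromContext(ctx)
	pc := controller.NewProvisionController(
		logger,
		clientset,
		"example.csi.k8s.io", // illustrative provisioner name
		provisioner,
		options...,
	)
	pc.Run(ctx)
}
```

The explicit logger and metrics instance are what allow the sidecar above to multiplex its own HTTP handler in front of the provisioner metrics and to keep contextual logging consistent across the controller.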
2 changes: 1 addition & 1 deletion deploy/kubernetes/rbac.yaml
@@ -28,7 +28,7 @@ rules:
   #   verbs: ["get", "list"]
   - apiGroups: [""]
     resources: ["persistentvolumes"]
-    verbs: ["get", "list", "watch", "create", "delete"]
+    verbs: ["get", "list", "watch", "create", "patch", "delete"]
   - apiGroups: [""]
     resources: ["persistentvolumeclaims"]
     verbs: ["get", "list", "watch", "update"]
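The extra `patch` verb on `persistentvolumes` lines up with HonorPVReclaimPolicy moving to beta/on in the README table above: with that feature on, the sidecar manages a finalizer on the PVs it provisions, which requires patching them. Purely as an illustration of the kind of client-go call this RBAC change permits (not the provisioner's actual code path; the PV name and finalizer handling are simplified assumptions):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the code runs in a pod whose ServiceAccount is bound to the
	// ClusterRole shown above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// A JSON merge patch that sets a finalizer on a PV; this is the class of
	// operation that needs the new "patch" verb. PV name is illustrative.
	patch := []byte(`{"metadata":{"finalizers":["external-provisioner.volume.kubernetes.io/finalizer"]}}`)
	pv, err := clientset.CoreV1().PersistentVolumes().Patch(
		context.TODO(), "pvc-0123456789", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched", pv.Name, "finalizers:", pv.Finalizers)
}
```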