
Wrong AccessibilityRequirement passed in CreateVolumeRequest #221

Closed
avalluri opened this issue Jan 31, 2019 · 24 comments · Fixed by #282

Comments

@avalluri
Contributor

My CSI driver deals with storage local to a node, hence I would like to limit the topology segment to the individual node by passing the following in NodeGetInfoResponse:

AccessibleTopology: &csi.Topology{
	Segments: map[string]string{
		"pmem-csi/node": nodeID,
	},
},
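
For context, here is a minimal sketch (not the actual driver code) of a NodeGetInfo implementation that advertises such a node-unique segment, using the CSI Go bindings:

package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a minimal stand-in for the Node service of a driver whose
// storage is local to each node.
type nodeServer struct {
	nodeID string
}

// NodeGetInfo advertises a topology segment that is unique per node, so the
// external-provisioner can restrict volume placement to this one node.
func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId: ns.nodeID,
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{
				"pmem-csi/node": ns.nodeID, // node-unique key/value, as in the snippet above
			},
		},
	}, nil
}

func main() {
	resp, _ := (&nodeServer{nodeID: "host-1"}).NodeGetInfo(context.Background(), &csi.NodeGetInfoRequest{})
	fmt.Println(resp.GetAccessibleTopology().GetSegments())
}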

I am using delayed binding (WaitForFirstConsumer) so that the volume gets provisioned only after the scheduler picks the node for the pod that claims the PV.

My expectation was that CreateVolumeRequest.AccessibilityRequirements is filled with the topology of the selected node. But it is filled with all the nodes in the cluster where the driver is running.

GRPC call: /csi.v1.Controller/CreateVolume
I0131 11:23:54.079912       1 glog.go:58] GRPC request: name:"pvc-ab1837f9-254a-11e9-8f0d-deadbeef0100" capacity_range:<required_bytes:8589934592 > volume_capabilities:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > accessibility_requirements:<requisite:<segments:<key:"pmem-csi/node" value:"host-3" > > requisite:<segments:<key:"pmem-csi/node" value:"host-1" > > requisite:<segments:<key:"pmem-csi/node" value:"host-2" > > preferred:<segments:<key:"pmem-csi/node" value:"host-1" > > preferred:<segments:<key:"pmem-csi/node" value:"host-2" > > preferred:<segments:<key:"pmem-csi/node" value:"host-3" > > >

My observation is that the code in aggregateTopologies() considers only the TopologyKeys provided in the CSINodeInfo of the selected node to find nodes with similar topology, ignoring the key's value. This results in ending up with all the nodes where my driver runs, because every node has a topology label with the same key but a unique value.

		// TODO (verult) retry
		selectedNodeInfo, err := csiAPIClient.CsiV1alpha1().CSINodeInfos().Get(selectedNode.Name, metav1.GetOptions{})
		if err != nil {
			// We must support provisioning if CSINodeInfo is missing, for backward compatibility.
			glog.Warningf("error getting CSINodeInfo for selected node %q: %v; proceeding to provision without topology information", selectedNode.Name, err)
			return nil, nil
		}
		topologyKeys = getTopologyKeys(selectedNodeInfo, driverName)
	}

	if len(topologyKeys) == 0 {
		// Assuming the external provisioner is never running during node driver upgrades.
		// If selectedNode != nil, the scheduler selected a node with no topology information.
		// If selectedNode == nil, all nodes in the cluster are missing topology information.
		// In either case, provisioning needs to be allowed to proceed.
		return nil, nil
	}

	selector, err := buildTopologyKeySelector(topologyKeys)
	if err != nil {
		return nil, err
	}
	nodes, err := kubeClient.CoreV1().Nodes().List(metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return nil, fmt.Errorf("error listing nodes: %v", err)
	}

	var terms []topologyTerm
	for _, node := range nodes.Items {
		// missingKey bool can be ignored because nodes were selected by these keys.
		term, _ := getTopologyFromNode(&node, topologyKeys)
		terms = append(terms, term)
	}
	return terms, nil

Environment:

  • external-provisioner v1.0.1: with topology feature gate enabled
  • Kubernetes v1.13.2: CSINodeInfo, CSIDriverRegistry feature gates enabled.
@msau42
Collaborator

msau42 commented Jan 31, 2019

Look at the first entry of the preferred topology field to see which node the scheduler picked. Requisite is supposed to contain all valid choices

@msau42
Collaborator

msau42 commented Jan 31, 2019

Although this brings up an interesting scalability problem if it's going to list every single node. Maybe we should consider truncating the results? cc @verult @ddebroy

@avalluri
Contributor Author

avalluri commented Feb 1, 2019

Look at the first entry of the preferred topology field to see which node the scheduler picked. Requisite is supposed to contain all valid choices

@msau42 Thanks for your response. I suspect the way the valid choices are being prepared; currently the code is:

  • getting the topology keys from the selected/randomly chosen node's CSINodeInfo
  • and, to find the nodes with similar topology, doing a cluster-level node selection using only those keys; the resulting requisite ends up with all nodes, because almost every node where the CSI driver runs has a label with that topology key but a different value (see the sketch below).
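
To illustrate why that selection matches every node: a label selector built only from the topology keys (an Exists requirement) matches any node carrying the label, regardless of its value. A small sketch of the effect, assuming the provisioner builds something equivalent to this:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/selection"
)

func main() {
	// An Exists requirement on the topology key matches every node that has
	// the label at all, whatever its value - so with a per-node key/value
	// pair, all nodes running the driver are selected.
	req, err := labels.NewRequirement("pmem-csi/node", selection.Exists, nil)
	if err != nil {
		panic(err)
	}
	selector := labels.NewSelector().Add(*req)
	fmt.Println(selector.String()) // "pmem-csi/node"

	fmt.Println(selector.Matches(labels.Set{"pmem-csi/node": "host-1"})) // true
	fmt.Println(selector.Matches(labels.Set{"pmem-csi/node": "host-2"})) // true
}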

@pohly
Contributor

pohly commented Feb 1, 2019

@msau42 optimizing the case where the legitimate result is "all" or "many" nodes is one aspect that needs to be considered. But @avalluri's point is that the current code produces the wrong result for a scenario where the volume has to be created exactly on the one node chosen for the pod.

Do you agree that the AccessibilityRequirements should only match a single node in the scenario from the description?

@avalluri perhaps you can extend https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/topology_test.go with a test case for your scenario?

@msau42
Collaborator

msau42 commented Feb 1, 2019

Maybe there's a difference in interpretation of the spec.

Requisite is supposed to contain all possible choices for the topology that have not been filtered out through StorageClass.AllowedTopologies. The driver must pick a subset from requisite.

Preferred gives a suggested ordering out of the requisite to pick from. If you have delayed binding set, then the first choice will be what the scheduler chose.
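
To make that concrete: with nodes spread over two zones and delayed binding having picked a node in zone1, the request would carry something like the following (illustrative values only, expressed with the CSI Go bindings):

package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func main() {
	// Illustrative only: what accessibility_requirements might look like when
	// the scheduler has picked a node in zone1 (delayed binding).
	req := &csi.TopologyRequirement{
		// Requisite: every topology present in the cluster, after filtering
		// through StorageClass.AllowedTopologies.
		Requisite: []*csi.Topology{
			{Segments: map[string]string{"com.example.csi/zone": "zone1"}},
			{Segments: map[string]string{"com.example.csi/zone": "zone2"}},
		},
		// Preferred: the same choices, ordered so that the scheduler's pick
		// (zone1) comes first.
		Preferred: []*csi.Topology{
			{Segments: map[string]string{"com.example.csi/zone": "zone1"}},
			{Segments: map[string]string{"com.example.csi/zone": "zone2"}},
		},
	}
	fmt.Println(req)
}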

@msau42
Collaborator

msau42 commented Feb 1, 2019

This is to handle the scenario where your driver may be spread across multiple topologies. For example, it can replicate data across many nodes or zones. The scheduler only picks one of the nodes, but your driver needs to know what the valid choices for the secondary domains are.

@avalluri
Contributor Author

avalluri commented Feb 1, 2019

@avalluri perhaps you can extend https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/topology_test.go with a test case for your scenario?

There is a test case for this, but to my surprise the expectation is a bit different:

"different values across cluster": {
			nodeLabels: []map[string]string{
				{"com.example.csi/zone": "zone1"},
				{"com.example.csi/zone": "zone2"},
				{"com.example.csi/zone": "zone2"},
			},
			topologyKeys: []map[string][]string{
				{testDriverName: []string{"com.example.csi/zone"}},
				{testDriverName: []string{"com.example.csi/zone"}},
				{testDriverName: []string{"com.example.csi/zone"}},
			},
			expectedRequisite: []*csi.Topology{
				{Segments: map[string]string{"com.example.csi/zone": "zone1"}},
				{Segments: map[string]string{"com.example.csi/zone": "zone2"}},
			},
		},

The above test case has zone1 and zone2 with 1 and 2 nodes respectively, and all three nodes' info has the same topology key, i.e. "zone" (I am not sure there is a driver which could handle different topology keys on different nodes). The selected node is Node1 (the snippet above does not include this part), which is located in zone1, and StorageClass.AllowedTopologies is nil. Now the test expects AccessibilityRequirements.Requisite to contain both zone1 and zone2 (in fact it would select all the zones in the cluster).

If this expectation is correct, then the driver could choose a subset of the Requisite set {zone1, zone2} and might choose "zone2" for provisioning the volume, where the pod is not running.

My expectation in this case is that the provisioner would choose the topology of the selected node.

@msau42 can you please correct me if my expectation is wrong?

@pohly
Contributor

pohly commented Feb 1, 2019

@avalluri what you quoted is the case where no node has been selected yet, so the result is indeed that running in either zone1 or zone2 would be acceptable.

But right below it is the same case with a selected node:

"selected node; different values across cluster": {
			hasSelectedNode: true,
			nodeLabels: []map[string]string{
				{"com.example.csi/zone": "zone1"},
				{"com.example.csi/zone": "zone2"},
				{"com.example.csi/zone": "zone2"},
			},
			topologyKeys: []map[string][]string{
				{testDriverName: []string{"com.example.csi/zone"}},
				{testDriverName: []string{"com.example.csi/zone"}},
				{testDriverName: []string{"com.example.csi/zone"}},
			},
			expectedRequisite: []*csi.Topology{
				{Segments: map[string]string{"com.example.csi/zone": "zone1"}},
				{Segments: map[string]string{"com.example.csi/zone": "zone2"}},
			},
		},

And that one still allows running in zone2, although that is not compatible with the selected node. I agree that this looks fishy. @verult?

@msau42
Collaborator

msau42 commented Feb 1, 2019

So this unit test is misleading because it only checks requisite, which selectedNode has no impact on.

Requisite is the set of topology values that includes all of the values in the cluster, reduced from StorageClass.AllowedTopologies.

Preferred is where selectedNode comes into play. The first entry in preferred should contain the topology from the selectedNode.

@avalluri
Contributor Author

avalluri commented Feb 1, 2019

So this unit test is misleading because it only checks requisite, which selectedNode has no impact on.

Requisite is the set of topology values that includes all of the values in the cluster, reduced from StorageClass.AllowedTopologies.

Preferred is where selectedNode comes into play. The first entry in preferred should contain the topology from the selectedNode.

@msau42 I am a bit confused. So you mean, as per the above test case (@pohly, thanks for pointing to the right one), where StorageClass.AllowedTopologies is nil, that Requisite will contain all the zones in the cluster? How is this information useful for the driver?

Wouldn't it be more useful, in the case of a nil AllowedTopologies, to use the selected node's topology for both Requisite and Preferred? And if both SelectedNode and AllowedTopologies are nil, to send a nil AccessibilityRequirements?

@msau42
Collaborator

msau42 commented Feb 1, 2019

There's a use case for volumes that can span/replicate across multiple topologies, ie multiple zones or multiple nodes. However, the scheduler only picks one node, ie one topology. The driver can treat the scheduler choice as the primary zone/node. But then the driver needs a way to know the other available topologies in the cluster for choosing its secondary topologies. That is why we cannot just restrict requisite topologies to only the selected node by the scheduler.

Requisite is topologies that are available in the cluster, and preferred is an ordering that is influenced by the scheduler's decision.

@avalluri
Contributor Author

avalluri commented Feb 4, 2019

There's a use case for volumes that can span/replicate across multiple topologies, ie multiple zones or multiple nodes. However, the scheduler only picks one node, ie one topology. The driver can treat the scheduler choice as the primary zone/node. But then the driver needs a way to know the other available topologies in the cluster for choosing its secondary topologies.

I respect this requirement, though I couldn't understand why the driver can't get the same information, i.e. the cluster-level topology, from the SP, as the driver is the one who defined this topology.

One thing missing here is how the driver could differentiate whether the provided Requisite/Preferred topology is:

  1. really requested by the CO (scheduler SelectedNode / PVC.Selector / StorageClass.AllowedTopologies), or
  2. injected/chosen by the external-provisioner.

It would be nice if we had some way to differentiate this. Since Preferred is supposed to be a subset of Requisite, at least we could ensure that Preferred carries the real CO requirement in priority order.

@msau42
Collaborator

msau42 commented Feb 4, 2019

why can't the driver get the same information, ie, cluster level topology from SP, as the driver is the one who defined this topology.

As an example, for GCE PDs, the storage provider knows about all zones in a region, i.e. us-central1-a/b/c/d/f, but your Kubernetes cluster may only have been created in a subset of those zones. And nodes could be added/removed in new zones at any time.

At least if we can ensure that Preferred carries the real CO requirement in priority order.

Can you explain why you need to distinguish between chosen by CO and chosen by external-provisioner? External-provisioner respects the Kubernetes decision if it gave one, and it maintains backwards compatibility with the old StatefulSet hashing method if you don't enable delayed binding. So Preferred does already carry the CO requirement in priority order.

@avalluri
Contributor Author

avalluri commented Feb 5, 2019

why can't the driver get the same information, ie, cluster level topology from SP, as the driver is the one who defined this topology.

As an example, for gce pds, the storage provider knows about all zones in a region, ie us-central1a/b/c/d/f, but your Kubernetes cluster may have only be created in a subset of those zones. And nodes could be added/removed in new zones at any time.

Thanks @msau42 for the clarification, now I understand why this design was chosen.

At least if we can ensure that Preferred carries the real CO requirement in priority order.

Can you explain why you need to distinguish between chosen by CO and chosen by external-provisioner? External-provisioner respects the Kubernetes decision if it gave one, and it maintains backwards compatibility with the old StatefulSet hashing method if you don't enable delayed binding. So Preferred does already carry the CO requirement in priority order.

Our driver deals with local storage and we treat every node as being in its own zone. We would like to support volume replication where the pod can choose (we are still open on whether it is via PVC.Selector or StorageClass.AllowedTopologies) where to access the volume from, so that we can create the volumes on those nodes/zones.

In this case, the provisioner currently sends AllowedTopologies, if set, via both Requisite and Preferred (by placing the SelectedNode at index 0); here it is quite clear that we can use Preferred to create those volumes.
But if no AllowedTopologies is set, the provisioner sends "all nodes/zones" in both Requisite and Preferred, and now we don't know whether the pod really requested replication or not. We might end up creating the volumes on all nodes.

@msau42
Collaborator

msau42 commented Feb 5, 2019

For your case, can you make it explicit that replication is requested by having an additional storageclass parameter for number of replicas instead of inferring from restricted topologies? Users may not care exactly which nodes are chosen: "I want 2 replicas but choose any 2 nodes you think are best". Then that also opens it up to the storage provider to make a smarter decision based off of capacity, etc
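
For illustration, such a StorageClass could look like this, expressed with the Kubernetes Go API types (a sketch only; the "replicas" parameter name is hypothetical and would be defined and interpreted by the driver, not by Kubernetes or the external-provisioner):

package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Delayed binding: provisioning waits until the scheduler has picked a node.
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "pmem-csi-replicated"}, // placeholder name
		Provisioner: "pmem-csi.intel.com",
		// Hypothetical, driver-defined parameter: "give me 2 replicas, you
		// pick the nodes". The driver would read this in CreateVolume.
		Parameters:        map[string]string{"replicas": "2"},
		VolumeBindingMode: &mode,
	}
	fmt.Println(sc.Name, sc.Parameters)
}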

@pohly
Contributor

pohly commented Feb 5, 2019

Let's ignore replication for now, I believe that isn't relevant here.

I think some of the confusion comes from a different understanding of what topology means and how strict it is. I'm getting the feeling that currently topology is treated as a suggestion, not a hard requirement. That's why, even though Kubernetes knows that a pod using the volume will run in zone1, it (through the external-provisioner) allows the CSI driver to create the volume in zone2, and thus we get zone1 and zone2 in expectedRequisite = CreateVolumeRequest.AccessibilityRequirements.

For local storage, we have to replace "zone" with "host", and now adhering to that topology becomes a hard requirement. A CSI driver can still do that, it just has to know that it must create a volume in the first topology listed in CreateVolumeRequest.AccessibilityRequirements. This is where it gets tricky: how can the driver know that? To the driver, a CreateVolumeRequest looks exactly the same, regardless whether "wait for first consumer" was set or not.

I tried to define some usage models for local volumes here: intel/pmem-csi#160

The case that is affected by this ambiguity in CreateVolumeRequest.AccessibilityRequirements is "Delayed Persistent Volume".

@msau42
Collaborator

msau42 commented Feb 5, 2019

I think the driver should just always create in whatever the 1st topology is in the preferred. If delayed binding is set, then it will be what the scheduler chose. If not, then it will be random.
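
A minimal sketch of that convention on the driver side (assuming the CSI Go bindings; this is not code from any particular driver, just the "take preferred[0]" rule spelled out):

package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// pickTopology returns the topology the driver should provision in: the first
// preferred entry if present, otherwise the first requisite entry.
func pickTopology(req *csi.CreateVolumeRequest) (*csi.Topology, error) {
	tr := req.GetAccessibilityRequirements()
	if tr == nil {
		return nil, status.Error(codes.InvalidArgument, "no accessibility requirements given")
	}
	if len(tr.GetPreferred()) > 0 {
		// With delayed binding this is the node the scheduler picked;
		// without it, it is whatever the external-provisioner put first.
		return tr.GetPreferred()[0], nil
	}
	if len(tr.GetRequisite()) > 0 {
		return tr.GetRequisite()[0], nil
	}
	return nil, status.Error(codes.InvalidArgument, "empty accessibility requirements")
}

func main() {
	req := &csi.CreateVolumeRequest{
		Name: "pvc-example",
		AccessibilityRequirements: &csi.TopologyRequirement{
			Preferred: []*csi.Topology{
				{Segments: map[string]string{"pmem-csi/node": "host-3"}},
				{Segments: map[string]string{"pmem-csi/node": "host-1"}},
			},
		},
	}
	topo, err := pickTopology(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(topo.GetSegments()) // map[pmem-csi/node:host-3]
}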

@pohly
Contributor

pohly commented Mar 18, 2019

We just verified (intel/pmem-csi#92 (comment)) that scheduling indeed fails if:

  1. WaitForFirstConsumer is set
  2. the pod is assigned to run on one node
  3. the CSI driver picks a different node for creating the volume

I'm not happy with the suggestion that the CSI driver should "just always create in whatever the 1st topology is in the preferred". That adds a meaning to the field which isn't part of the CSI spec.

It's TopologyRequirement.requisite that is set in an odd way by the external-provisioner. For PMEM-CSI, this is what we get in a cluster where three nodes have PMEM, WaitForFirstConsumer is set, and the pod was scheduled onto host-3:

I0318 07:51:46.606639       1 glog.go:58] GRPC call: /csi.v1.Controller/CreateVolume
I0318 07:51:46.606659       1 glog.go:58] GRPC request: {"accessibility_requirements":{"preferred":[{"segments":{"pmem-csi.intel.com/node":"host-3"}},{"segments":{"pmem-csi.intel.com/node":"host-1"}},{"segments":{"pmem-csi.intel.com/node":"host-2"}}],"requisite":[{"segments":{"pmem-csi.intel.com/node":"host-1"}},{"segments":{"pmem-csi.intel.com/node":"host-2"}},{"segments":{"pmem-csi.intel.com/node":"host-3"}}]},"capacity_range":{"required_bytes":8589934592},"name":"pvc-a60f5def-4952-11e9-9505-deadbeef0100","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]}

The spec explicitly lists the case that a driver can only make the volume available in some of the required segments ("If x<n, then the SP SHALL choose x unique topologies from the list of requisite topologies. If it is unable to do so, the SP MUST fail the CreateVolume call.").

So suppose the driver replies to the request above by making the volume available on host-2 because local space on host-3 is exhausted. That's allowed by the spec, because that node is among the required ones. But then attaching the volume fails. It would be better to let the CreateVolume call fail, because that's really the problem in this case: the volume cannot be provisioned on the required host (singular!).
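
In code terms, the strict behaviour argued for here would look roughly like this inside the driver's CreateVolume (a sketch under the assumption of a node-local driver; capacityOnNode stands in for the driver's real capacity check and is not an existing API):

package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// capacityOnNode is a stand-in for the driver's real capacity check.
func capacityOnNode(node string) int64 {
	if node == "host-3" {
		return 0 // pretend host-3 is full
	}
	return 1 << 40
}

// createVolumeStrict provisions only on the node named in the first preferred
// topology and fails instead of silently falling back to another requisite node.
func createVolumeStrict(req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	tr := req.GetAccessibilityRequirements()
	if tr == nil || len(tr.GetPreferred()) == 0 {
		return nil, status.Error(codes.InvalidArgument, "no preferred topology given")
	}
	node := tr.GetPreferred()[0].GetSegments()["pmem-csi/node"]
	required := req.GetCapacityRange().GetRequiredBytes()
	if capacityOnNode(node) < required {
		// Report the real problem instead of provisioning on a node the pod
		// cannot reach; the CO can then retry or surface the error.
		return nil, status.Errorf(codes.ResourceExhausted,
			"not enough space on node %q for %d bytes", node, required)
	}
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      "example-volume-id",
			CapacityBytes: required,
			AccessibleTopology: []*csi.Topology{
				{Segments: map[string]string{"pmem-csi/node": node}},
			},
		},
	}, nil
}

func main() {
	_, err := createVolumeStrict(&csi.CreateVolumeRequest{
		Name:          "pvc-example",
		CapacityRange: &csi.CapacityRange{RequiredBytes: 8 << 30},
		AccessibilityRequirements: &csi.TopologyRequirement{
			Preferred: []*csi.Topology{{Segments: map[string]string{"pmem-csi/node": "host-3"}}},
		},
	})
	fmt.Println(err) // ResourceExhausted: not enough space on node "host-3" for 8589934592 bytes
}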

So why does the external-provisioner send a TopologyRequirement.requisite with three entries instead of just one in this case? That still looks like a bug to me.

Note that there is an easy workaround: just don't use WaitForFirstConsumer. It has the effect that local, persistent storage gets allocated based on criteria like current CPU and RAM usage, which doesn't make much sense to me. But it's not wrong either, just weird.

@avalluri
Contributor Author

So why does the external-provisioner send a TopologyRequirement.requisite with three entries instead of just one in this case? That still looks like a bug to me.

Currently, both Preferred and Requisite are filled by the external-provisioner with the complete set of available topologies (in different orders).

As per the spec, Preferred can be a subset of Requisite. I wish we could at least ensure that Preferred is always filled with the real CO requirement (i.e. in the case of WaitForFirstConsumer, only the selected node's topology), while Requisite might hold all available topologies as it does now.

@pohly
Contributor

pohly commented Mar 18, 2019

at least ensure that Preferred is always filled with the real CO requirement (i.e. in the case of WaitForFirstConsumer, only the selected node's topology)

And then what? As shown in "example 1", the expectation is that the CSI driver falls back to provisioning the volume outside of the "preferred" set if it has to, so the same problem would still occur. If the CO (Kubernetes + external-provisioner) want to enforce that the volume gets provisioned where needed or not at all, then requisite has to be limited to that one node.

@msau42
Collaborator

msau42 commented Mar 18, 2019

These are the different topology use cases I can think of:

"External" topology, where the storage topology != node topology. We don't support this well in Kubernetes today, but there have been requests to support it. We can discuss this as a separate issue/topic.

"Hyper-converged" topology, where the storage topology == node topology

  1. Simple topology: the volume can only be accessed from a single topology domain, ie a single node/rack/zone
  2. Multi topology: the volume can be accessed from multiple topology domains, ie multiple nodes/racks/zones
  3. Global topology with preferences: the volume can be accessed from anywhere, however it may prefer certain domains for locality to the workload

The problem today is that CSI does not have a way to specify how many topology domains a plugin supports. Even if it could, it's still problematic for drivers such as PD, where some PDs only support a single zone and other PDs support multi-zones. So to support this use case today, we have the current logic, which just gives all possible choices, with preferred as hints to the scheduling decision, and relies on the driver to reduce that set to the number of topology domains that it needs. As pointed out above, it causes confusion for the simple case.

What if we add a new flag to external-provisioner where a driver can specify how many topology domains it supports? By default, it can be 1 for the simple (and most common) case. That flag will restrict how many entries are populated in the CSI topology requisite + preferred. If late binding is enabled, then it will only contain the selected node. If not, it will be chosen according to a hashing function (== statefulset hashing for backwards compatibility). cc @verult @davidz627
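
Roughly, the effect on what gets passed to CreateVolume could look like this (an illustrative sketch only; the helper and the flag it would be driven by are hypothetical, not existing external-provisioner code):

package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// truncateRequirement sketches the proposed behaviour: keep only the first
// maxDomains entries of preferred (which already has the selected node, or the
// hashed choice, at index 0) and make requisite match that subset. maxDomains
// would come from the proposed new external-provisioner flag.
func truncateRequirement(tr *csi.TopologyRequirement, maxDomains int) *csi.TopologyRequirement {
	if tr == nil || maxDomains <= 0 || len(tr.GetPreferred()) <= maxDomains {
		return tr
	}
	preferred := tr.GetPreferred()[:maxDomains]
	return &csi.TopologyRequirement{
		Requisite: preferred, // with maxDomains == 1 this is just the selected node
		Preferred: preferred,
	}
}

func main() {
	full := &csi.TopologyRequirement{
		Requisite: []*csi.Topology{
			{Segments: map[string]string{"pmem-csi/node": "host-1"}},
			{Segments: map[string]string{"pmem-csi/node": "host-2"}},
			{Segments: map[string]string{"pmem-csi/node": "host-3"}},
		},
		Preferred: []*csi.Topology{
			{Segments: map[string]string{"pmem-csi/node": "host-3"}}, // scheduler's pick
			{Segments: map[string]string{"pmem-csi/node": "host-1"}},
			{Segments: map[string]string{"pmem-csi/node": "host-2"}},
		},
	}
	fmt.Println(truncateRequirement(full, 1)) // only host-3 remains
}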

@pohly
Contributor

pohly commented Mar 19, 2019

What if we add a new flag to external-provisioner where a driver can specify how many topology domains it supports? By default, it can be 1 for the simple (and most common) case. That flag will restrict how many entries are populated in the CSI topology requisite + preferred. If late binding is enabled, then it will only contain the selected node.

That makes sense to me.

@avalluri
Contributor Author

If not, it will be chosen according to a hashing function (== statefulset hashing for backwards compatibility).

While choosing this topology/node, does the external-provisioner consider the available capacity? I mean, what should the driver do if there is no free space available on the chosen node?

@avalluri
Contributor Author

If the CO (Kubernetes + external-provisioner) want to enforce that the volume gets provisioned where needed or not at all, then requisite has to be limited to that one node.

I too agree with this; the only reason I suggest keeping requisite filled with the full topologies is to support other existing use cases which currently depend on this field.
