When a CSI volume claim is freed, the server may encounter "abort" errors from the node plugins because the plugin is in the middle of detaching the volume. This is somewhat related to the workflow issues described in #10833, but the volume watcher should not log these errors until we know the operation has actually failed and hasn't just been forced to retry.
2022-01-31T16:30:43.288Z [ERROR] nomad.volumes_watcher: error releasing volume claims: namespace=default volume_id=csi-volume-nfs0
error=
| 1 error occurred:
| * could not detach from node: node detach volume: CSI.NodeDetachVolume: 2 errors occurred:
| * rpc error: code = Aborted desc = operation locked due to in progress operation(s): ["volume_id_csi-volume-nfs0"]
| * remove /var/nomad/data/client/csi/node/org.democratic-csi.nfs/per-alloc/345fb411-6d9e-85fe-0169-e94302be8bdd/csi-volume-nfs0/rw-file-system-multi-node-multi-writer: de>
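For illustration, here is a minimal Go sketch of the behavior being asked for: treat a gRPC `Aborted` response from the node plugin as retryable, and only log once the detach has genuinely failed. This is not Nomad's actual volume watcher code; the `detachVolume` helper, the attempt count, and the backoff policy are all hypothetical.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// detachVolume is a hypothetical stand-in for the CSI.NodeDetachVolume RPC.
func detachVolume(ctx context.Context, volumeID string) error {
	return nil
}

// releaseClaim retries the detach while the plugin reports Aborted, and
// only logs an error once the operation has definitively failed.
func releaseClaim(ctx context.Context, volumeID string) {
	const maxAttempts = 5
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := detachVolume(ctx, volumeID)
		if err == nil {
			return
		}
		if status.Code(err) == codes.Aborted {
			// Aborted means the plugin is mid-operation on this volume
			// ("operation locked due to in progress operation(s)"), so
			// back off and retry instead of surfacing an error.
			select {
			case <-time.After(time.Duration(attempt) * time.Second):
				continue
			case <-ctx.Done():
				return
			}
		}
		// Any other error is a real failure and worth logging.
		log.Printf("[ERROR] volumes_watcher: error releasing volume claims: volume_id=%s error=%v", volumeID, err)
		return
	}
	log.Printf("[ERROR] volumes_watcher: error releasing volume claims: volume_id=%s: retries exhausted", volumeID)
}

func main() {
	releaseClaim(context.Background(), "csi-volume-nfs0")
}
```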
This is closed by #12387, #11892, #12102, and a handful of other PRs landing in the upcoming Nomad 1.3.0. We'll still get this log in the cases we actually care about.
Wow @tgross, I just wanted to say a huge thank you for your effort in eliminating all those CSI-related bugs and problems, and for being so transparent about it ❤️ I can't wait to try Nomad 1.3.0 🤞
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.