Jenkins trying to spin up multiple slaves at the same time #2384
@jfchevrette I'm starting to look into this and could use some help understanding it.
Nothing that I know of, but I'll take a look and comment here if I find anything.
Among the few volumes I found in the jenkins DC, this is the only one mounted in rw mode (I'm assuming the default is read-only). Is this the one that cannot be shared?
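For context, whether a volume can be shared between pods depends on the access mode of its claim: a ReadWriteOnce (RWO) claim can only be attached once at a time, while ReadWriteMany (RWX) can be mounted by several pods. A minimal sketch to check this, assuming the `kubernetes` Python client and a valid kubeconfig; the namespace name is a placeholder:

```python
# Sketch: list PVCs in the Jenkins namespace and print their access modes.
# ReadWriteOnce (RWO) claims cannot be mounted by multiple pods at the same
# time; ReadWriteMany (RWX) claims can. Namespace name is a placeholder.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

namespace = "my-jenkins-namespace"  # placeholder
for pvc in core.list_namespaced_persistent_volume_claim(namespace).items:
    modes = ", ".join(pvc.spec.access_modes or [])
    print(f"{pvc.metadata.name}: {modes}")
```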
@jfchevrette @pbergene Jaseem needs your input on this issue. Thanks.
@jaseemabid that is correct, the PV mounted at /var/lib/jenkins cannot be shared between multiple pods; this is a limitation of our current architecture and storage backend. The slave pods are launched by Jenkins, that's all I know. However, as you pointed out, unlike the Jenkins master, the slave pods don't mount a PV.
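One way to see that limitation in practice is to list the pods that reference the Jenkins claim; with a ReadWriteOnce volume only one of them can have it mounted at a time. A rough sketch, using the same Python client; the namespace and claim name are placeholders, not the actual names in this deployment:

```python
# Sketch: find pods that mount a given PVC. With a ReadWriteOnce claim,
# only one such pod can run with the volume attached at a time, which is
# why a second pod using the same PV gets stuck. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

namespace = "my-jenkins-namespace"  # placeholder
claim_name = "jenkins-home"         # placeholder for the PVC behind /var/lib/jenkins

for pod in core.list_namespaced_pod(namespace).items:
    for vol in pod.spec.volumes or []:
        pvc = vol.persistent_volume_claim
        if pvc and pvc.claim_name == claim_name:
            print(f"{pod.metadata.name}: {pod.status.phase}")
```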
Below are a few observations regarding the issue after running multiple builds simultaneously:
As discussed with @rupalibehera, I guess this issue will be taken care of if @hrishin has a fix for #2729.
Linked to #2729.
Closing this as it's a duplicate of / directly related to #2729.
Just observed a namespace that had 6 OpenShift builds in the 'Running' state, and Jenkins was trying to start multiple slave pods at once. Because slaves are started with a PV mount, only one can run at a time.
Once I cancelled all the builds and made sure Jenkins had stopped trying to start slaves, I triggered one build and it completed just fine.
Is there something in Jenkins that ensures that builds are queued, especially builds coming from multiple different BuildConfigs/apps?
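As a rough way to observe the behaviour described above, here is a sketch that counts how many slave pods Jenkins is trying to run at once. The label selector is an assumption and may differ depending on how the Kubernetes plugin labels its agent pods:

```python
# Sketch: count Jenkins slave/agent pods by phase to see whether several
# are being started at the same time. The label selector is an assumption;
# adjust it to whatever labels are actually applied to slave pods.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

namespace = "my-jenkins-namespace"  # placeholder
selector = "jenkins=slave"          # assumed label on slave pods

pods = core.list_namespaced_pod(namespace, label_selector=selector).items
phases = Counter(pod.status.phase for pod in pods)
print(dict(phases))  # e.g. {'Running': 1, 'Pending': 5}
```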