Initial support for Host Volumes #5923
Conversation
I did a quick skim of the PR and left a few comments. I can roughly see how the pieces fit together and the relevance of the changes, but adding comments on the newly added structs and stating the convention for how volume vs. mount vs. host volume is used would help a lot in following the code.
I'll review again in the morning! Thanks!
return result
}

func (h *volumeHook) hostVolumeMountConfigurations(vmounts []*structs.VolumeMount, volumes map[string]*structs.VolumeRequest, client map[string]*structs.ClientHostVolumeConfig) ([]*drivers.MountConfig, error) {
I'm getting lost tracking the distinction between volume mount, volume config, mount config, and volume mount config :). I'd suggest adding a comment somewhere to explain what each type is meant to represent.
Also, I'd love to clarify the parameter names here, e.g. `client`. Maybe naming the parameters based on the source of the value would be useful if they are all volumes of some kind.
OK - so I have a better handle now. Let me phrase them as best as I understand them and have you confirm:

- `volume` is the filesystem entity that is the unit of scheduling to be assigned to an alloc.
- `host volume` is the client configuration of a volume. `HostVolume` fields and variables in this PR refer to client config values of type `ClientHostVolumeConfig` (or derived from it) and never to the volume instances themselves.
- `volume mount` refers to the task-level configuration for how the volume is mounted into the task filesystem.
- `mount config` refers to the concrete OS bind specification that executors will use to actually bind mount.

Is this roughly correct? Would be nice to call that out in the struct documentation. A minimal sketch of how I picture those four concepts fitting together is below.
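The types come from the function signature quoted above; the helper name `buildMountConfig` and the exact field names (`Path`, `Destination`, `Readonly`, and so on) are my assumptions, not necessarily what this PR uses.

```go
// A sketch only: buildMountConfig is a hypothetical helper, and the field
// names below are assumptions based on the signature quoted above.
package example

import (
	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/hashicorp/nomad/plugins/drivers"
)

// buildMountConfig turns a task-level volume mount, the group-level volume
// request, and the client's host volume config into the concrete bind-mount
// spec handed to the task driver.
func buildMountConfig(
	mount *structs.VolumeMount, // task level: which volume, and where it lands in the task
	req *structs.VolumeRequest, // group level: the requested volume, looked up by mount.Volume
	host *structs.ClientHostVolumeConfig, // client config: where the volume lives on the node
) *drivers.MountConfig {
	return &drivers.MountConfig{
		HostPath: host.Path,         // assumed field: path on the client node
		TaskPath: mount.Destination, // assumed field: path inside the task filesystem
		Readonly: mount.ReadOnly || req.ReadOnly || host.ReadOnly,
	}
}
```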
This is just a first pass. I have an additional question: what actually causes the volume to be mounted in the task directory?
I did one more round of review. I have a high-level question about the feasibility logic and the distinction between `Volume` and `VolumeRequest`.
I think I have a better handle on the names now, but I believe this PR would benefit greatly from better documentation for the new types and methods, especially the ones in the `structs` package.
I think I'll need one more review cycle before I feel I grok this PR well.
Also, I don't expect this to be addressed in this review, but I'm wondering how the feasibility logic will evolve when we support non-host volumes that may have different feasibility logic. In this iteration the scheduler is aware of the volume type requirements, but I guess it won't be with pluggable volume types?
// Create the feasibility wrapper which wraps all feasibility checks in
// which feasibility checking can be skipped if the computed node class has
// previously been marked as eligible or ineligible. Generally this will be
// checks that only needs to examine the single node to determine feasibility.
jobs := []FeasibilityChecker{s.jobConstraint}
tgs := []FeasibilityChecker{s.taskGroupDrivers, s.taskGroupConstraint, s.taskGroupDevices}
tgs := []FeasibilityChecker{s.taskGroupDrivers, s.taskGroupConstraint, s.taskGroupHostVolumes, s.taskGroupDevices}
s.wrappedChecks = NewFeasibilityWrapper(ctx, s.quota, jobs, tgs)
Noting that this needs to happen in Nomad Enterprise as well. For reference, a rough sketch of what a host-volume feasibility checker could look like follows.
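This is my own sketch, not the PR's actual implementation: a per-task-group check that filters out nodes whose client config doesn't expose every requested host volume. The type name `HostVolumeChecker`, the `SetVolumes` method, and fields such as `Node.HostVolumes` and `VolumeRequest.Source` are assumptions here and may not match the code in this PR.

```go
package scheduler

import "github.com/hashicorp/nomad/nomad/structs"

// HostVolumeChecker is an illustrative sketch of a FeasibilityChecker for
// host volumes. It assumes the node exposes its host_volume client config as
// a map keyed by volume name, and that the task group's volume requests are
// set on the checker before each placement.
type HostVolumeChecker struct {
	volumes map[string]*structs.VolumeRequest // volume requests from the task group
}

// SetVolumes updates the checker with the current task group's volume requests.
func (c *HostVolumeChecker) SetVolumes(volumes map[string]*structs.VolumeRequest) {
	c.volumes = volumes
}

// Feasible returns true only if every requested host volume is present in the
// node's client configuration.
func (c *HostVolumeChecker) Feasible(option *structs.Node) bool {
	for _, req := range c.volumes {
		if req.Type != "host" { // only host volumes are checked in this iteration
			continue
		}
		if _, ok := option.HostVolumes[req.Source]; !ok { // assumed fields
			return false
		}
	}
	return true
}
```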
Is this worth adding an e2e test for (in a separate PR)?
Maybe? I doubt it, to be honest - there's very little new behaviour overall.
Co-Authored-By: Mahmood Ali <[email protected]>
This is the minimum viable work for #5377.
Subsequent PRs to the f-host-volumes branch will include support for features like getting volume status from the API, further improvements to driver support, validating volumes on client start, etc.
Example Client Config
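A minimal sketch of a client configuration exposing a host volume via the new `host_volume` block; the `mysql` name and `/opt/mysql/data` path are illustrative, and exact attribute names may differ from the final syntax.

```hcl
# A sketch only: the volume name and path are illustrative.
client {
  enabled = true

  # Registers a directory on this node as the host volume "mysql".
  host_volume "mysql" {
    path      = "/opt/mysql/data"
    read_only = false
  }
}
```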
Example Jobspec
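A matching jobspec sketch that requests the host volume at the group level and mounts it into a task; again, the names are illustrative, and attribute names may have shifted between this PR and later releases.

```hcl
job "mysql" {
  group "db" {
    # Request the host volume registered in the client config above.
    volume "mysql" {
      type      = "host"
      source    = "mysql"
      read_only = false
    }

    task "server" {
      driver = "docker"

      config {
        image = "mysql:8.0"
      }

      # Mount the requested volume into the task's filesystem.
      volume_mount {
        volume      = "mysql"
        destination = "/var/lib/mysql"
        read_only   = false
      }
    }
  }
}
```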