Utilize and expose functions for starting up the manager #1601
Comments
/assign
Some of them don't seem necessary for a custom integration, unless you are essentially forking Kueue. My general recommendation would be to run the custom integration in a separate binary that can be released independently, similar to https://github.com/kubernetes-sigs/kueue/blob/main/cmd/experimental/podtaintstolerations/main.go
Yes, I meant a separate binary. Currently, a separate main function cannot import functions like these, which means the platform developer has to implement the main function like the following:

```go
func main() {
	...
	ctx := ctrl.SetupSignalHandler()
	setupIndexes(ctx, mgr)
	setupProbeEndpoints(mgr)
	go setupControllers(mgr, certsReady, &cfg)
	setupLog.Info("starting manager")
	if err := mgr.Start(ctx); err != nil {
		setupLog.Error(err, "could not run manager")
		os.Exit(1)
	}
}

func setupIndexes(ctx context.Context, mgr ctrl.Manager) {
	err := jobframework.ForEachIntegration(func(name string, cb jobframework.IntegrationCallbacks) error {
		if err := cb.SetupIndexes(ctx, mgr.GetFieldIndexer()); err != nil {
			return fmt.Errorf("integration %s: %w", name, err)
		}
		return nil
	})
	if err != nil {
		setupLog.Error(err, "unable to setup jobs indexes")
	}
}

func setupProbeEndpoints(mgr ctrl.Manager) {
	defer setupLog.Info("probe endpoints are configured on healthz and readyz")
	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up health check")
		os.Exit(1)
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up ready check")
		os.Exit(1)
	}
}

func setupControllers(mgr ctrl.Manager, certsReady chan struct{}, cfg *configapi.Configuration) {
	...
	opts := []jobframework.Option{
		jobframework.WithManageJobsWithoutQueueName(cfg.ManageJobsWithoutQueueName),
		jobframework.WithWaitForPodsReady(waitForPodsReady(cfg)),
	}
	err := jobframework.ForEachIntegration(func(name string, cb jobframework.IntegrationCallbacks) error {
		log := setupLog.WithValues("jobFrameworkName", name)
		if err := cb.NewReconciler(
			mgr.GetClient(),
			mgr.GetEventRecorderFor(fmt.Sprintf("%s-%s-controller", name, constants.ManagerName)),
			opts...,
	...
	if err != nil {
		os.Exit(1)
	}
	// +kubebuilder:scaffold:builder
}

func waitForPodsReady(cfg *configapi.Configuration) bool {
	return cfg.WaitForPodsReady != nil && cfg.WaitForPodsReady.Enable
}
```
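For readers unfamiliar with the `jobframework.Option` slice in `setupControllers`: it follows Go's functional options pattern. A minimal, self-contained sketch of that pattern (the names mirror Kueue's API, but this toy version is an illustration of the pattern, not Kueue's actual implementation):

```go
package main

import "fmt"

// Options collects the settings a reconciler sketch might consume.
type Options struct {
	ManageJobsWithoutQueueName bool
	WaitForPodsReady           bool
}

// Option mutates Options, mirroring the shape of jobframework.Option.
type Option func(*Options)

func WithManageJobsWithoutQueueName(v bool) Option {
	return func(o *Options) { o.ManageJobsWithoutQueueName = v }
}

func WithWaitForPodsReady(v bool) Option {
	return func(o *Options) { o.WaitForPodsReady = v }
}

// ProcessOptions applies each Option in order over zero-value defaults.
func ProcessOptions(opts ...Option) Options {
	var options Options
	for _, opt := range opts {
		opt(&options)
	}
	return options
}

func main() {
	opts := ProcessOptions(
		WithManageJobsWithoutQueueName(true),
		WithWaitForPodsReady(false),
	)
	fmt.Println(opts.ManageJobsWithoutQueueName, opts.WaitForPodsReady)
}
```

The advantage for an exported API is that new options can be added without breaking existing callers of the setup functions.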
Oh, I see. So you have a single binary that adds multiple integrations. Then it might make sense to export things like ... But I don't think you should be using Kueue's Configuration API. You should probably have your own.
Yes, I will expose only the needed functions.
In my environment, I deployed another ConfigMap embedding a Kueue Configuration to the cluster, and that Configuration is mounted on the in-house Kueue manager. This means I have two ConfigMaps embedding Kueue configs.
I think we shouldn't bake the same assumption into the exported functions. The functions could take some specific fields instead.
Maybe I didn't understand correctly what you mean.
Does this mean you suggest not mounting another Kueue Configuration on the in-house Kueue manager?
I think we should export something like ...
Ah, it makes sense. Thank you for the clarifications!
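The suggestion agreed on above — export setup helpers that take specific fields rather than Kueue's whole Configuration API type — could look roughly like this. All names and signatures here are hypothetical; the real exported API would wire controller-runtime's manager and `jobframework.IntegrationCallbacks` rather than print:

```go
package main

import (
	"errors"
	"fmt"
)

// SetupOptions carries only the fields the setup helpers actually need,
// instead of the whole Kueue Configuration struct. Names are hypothetical.
type SetupOptions struct {
	ManageJobsWithoutQueueName bool
	WaitForPodsReady           bool
}

// SetupControllers is a stand-in for an exported helper: it receives the
// narrow option set and wires up each registered integration by name.
func SetupControllers(integrations []string, opts SetupOptions) error {
	if len(integrations) == 0 {
		return errors.New("no integrations registered")
	}
	for _, name := range integrations {
		// A real implementation would call cb.NewReconciler for each
		// integration here; this sketch just records the wiring.
		fmt.Printf("wired %s (manageJobsWithoutQueueName=%t)\n",
			name, opts.ManageJobsWithoutQueueName)
	}
	return nil
}

func main() {
	opts := SetupOptions{ManageJobsWithoutQueueName: true}
	if err := SetupControllers([]string{"batch/job"}, opts); err != nil {
		panic(err)
	}
}
```

Because the helper depends only on plain fields, an in-house manager can populate them from its own configuration type without importing Kueue's Configuration API.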
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
What would you like to be cleaned:
I would like to utilize and expose the below functions for starting up the manager. We can probably cut an `initialize` package under the `pkg` directory, and then we can put the below functions there:
- kueue/cmd/kueue/main.go, line 194 (in 60177bf)
- kueue/cmd/kueue/main.go, line 221 (in 60177bf)
- kueue/cmd/kueue/main.go, line 317 (in 60177bf)
- kueue/cmd/kueue/main.go, line 343 (in 60177bf)
- kueue/cmd/kueue/main.go, line 365 (in 60177bf)
- kueue/cmd/kueue/main.go, line 369 (in 60177bf)
- kueue/cmd/kueue/main.go, line 373 (in 60177bf)
- kueue/cmd/kueue/main.go, line 403 (in 60177bf)
Why is this needed:
When platform developers implement managers for in-house custom Jobs, they can avoid copying and pasting those functions into their own manager.