[Track] Call for Machine/Device Sponsorship #1906
Comments
I will lay out my points in execution order and summarize with a diagram.

First, how does the sponsor trigger the pipeline? This is not about the integration level, which is a separate topic to discuss later; it is about where the GHA pipeline starts. Ideally the sponsor just clicks a button on the Actions page and the pipeline runs; maybe we can enable that by publishing a kepler-action on the GHA marketplace?

One step further, we then need to set up the prerequisites for Kepler, for example eBPF, RAPL, or just a container runtime if the sponsor gives us a clean instance. In some cases the sponsor may instead provide a kubeconfig file, which lets us skip setting up the container runtime and start from Prometheus, or the sponsor may already have Prometheus installed.

Triggering from GHA but running on a self-hosted runner means the sponsor has to install the GHA runner agent on a proxy instance or on an instance that can reach GHA. This matters because an instance inside a data center, or behind the sponsor's firewall, may for various reasons be unable to run the GHA agent. So "trigger from GHA" really comes down to whether the sponsor can start the pipeline on a self-hosted GHA runner they provide, or only on the default GitHub-hosted runner.

As an alternative, the sponsor may simply provide us a k8s cluster. Luckily we then do not need to set up k8s from scratch, but a similar question arises: do we need to build the metal-CI stack (such as Tekton or Prometheus) on that cluster from scratch, or do we ask the provider to set it up once?

After a run, how does the sponsor send the results (maybe both the Kepler validation result and the Kepler server model, if they are going to train) back to us? I suppose we need to define some key steps or nodes here, and decouple what is the sponsor's behavior from what is our pipeline code's behavior.
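To make the trigger part concrete, here is a minimal sketch of what a sponsor-side workflow could look like, assuming a self-hosted runner installed on the donated machine. The workflow name, runner labels, script path, and results directory are placeholders for illustration, not anything we have published:

```yaml
# Hypothetical sponsor-side workflow; all Kepler-specific names are placeholders.
name: kepler-validation
on:
  workflow_dispatch:        # the "click a button on the Actions page" trigger

jobs:
  validate:
    # Runs on the GHA runner agent the sponsor installed on their instance.
    runs-on: [self-hosted, metal]
    steps:
      - uses: actions/checkout@v4

      # Prerequisite check: fail fast if RAPL or a container runtime
      # is not available on the donated machine.
      - name: Check prerequisites
        run: |
          test -d /sys/class/powercap/intel-rapl || { echo "RAPL not exposed"; exit 1; }
          command -v docker >/dev/null || command -v podman >/dev/null || { echo "no container runtime"; exit 1; }

      # Placeholder for the community pipeline (e.g. a future kepler-action
      # from the marketplace) that deploys Kepler and runs the validation.
      - name: Run Kepler validation
        run: ./hack/run-validation.sh   # hypothetical entry point

      # Send results (validation report, and trained model if any) back to us.
      - uses: actions/upload-artifact@v4
        with:
          name: kepler-validation-result
          path: results/
```

The key point of the sketch is the split of responsibilities: installing the runner agent and satisfying the prerequisites is the sponsor's side, while everything inside the steps is our pipeline code's side.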
@SamYuan1990 Thank you so much.
Yes, definitely. We should start by defining the objectives and the actors.
Personally, I would also like to go the call-for-machine route, but we need to work through at least the concerns mentioned in the table.
@sunya-ch or @rootfs, I am not sure if the following looks good as a draft for the call-for-sponsorship article.

Must read/sign: as someone who donates a machine to contribute Kepler validation results or a kepler-model into kepler-model-db, please complete the following:
- Security
- Donation agreement and governance

As for how it is going to work, the mini scope:

After the mini scope, let's consider the integration with GHA:

Here, on the config side, we have an overview of how to integrate Kepler metal CI with GHA as the pipeline trigger, reusing the community pipeline as much as possible while keeping the machine-specific parts per sponsor (see the sketch at the end of this comment).

Finally, once this succeeds and has been running for a while, these questions come up:
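On the machine-specific config point above, a minimal sketch of how a sponsor repo could stay thin and delegate to a community reusable workflow is below. The repository path, workflow file, and all input names are made up for illustration only:

```yaml
# Hypothetical: the sponsor's repo keeps only machine-specific configuration
# and delegates the actual pipeline to a community reusable workflow.
name: metal-ci-sponsor
on:
  workflow_dispatch:

jobs:
  metal-ci:
    # Reuses the community pipeline; only the inputs are machine specific.
    # The path below is a placeholder, not an existing workflow.
    uses: sustainable-computing-io/kepler-metal-ci/.github/workflows/validation.yaml@main
    with:
      runner-label: sponsor-machine-01   # label of the sponsor's self-hosted runner
      power-meter: rapl                  # or whatever sensor the machine exposes
      train-model: true                  # also produce a model for kepler-model-db
    secrets: inherit
```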
What would you like to be added?
Currently, we have a limited number and variety of machines and sensors to train power models for the model database.
We have an idea to open a call for machine/device sponsorship with proper recognition.
Here are the tasks I could initially list:
I can take care of the first task but still need help with the rest.
Why is this needed?
As mentioned in several research works, power consumption behavior can vary with several factors.
Using the right power model to predict power consumption on a machine that has no power meter is critical to the precision of the reported values.