difference between kepler-model-server and kepler-estimator #375
Comments
@sunya-ch @KaiyiLiu1234 Can you provide a high-level usage description? Preferably here.
I put the general explanation about power estimation and deployment here. With the minimum deployment (no estimator and no model server), Kepler should use an offline model weight with an embedded linear regression model defined by the PodComponentPowerModelConfig variable (which is currently hard-coded to point to this file: Line 35 in 1a0f342).
A high-level usage description in the documentation is coming soon. I will remove the hard-coded variable and run a local test with full integration against the current version of Kepler first.
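To make the linear-regression part concrete, here is a minimal sketch of how an offline model weight could be applied to collected metrics; the function and feature names are illustrative assumptions, not Kepler's actual code.

```go
// Hedged sketch (not Kepler's actual implementation): applying an offline
// linear-regression model weight to collected resource-usage metrics to
// estimate component power for a pod.
package main

import "fmt"

// linearPower returns intercept + sum(weight[f] * feature[f]) over the
// features the model knows about.
func linearPower(intercept float64, weights, features map[string]float64) float64 {
	power := intercept
	for name, w := range weights {
		power += w * features[name] // a missing feature contributes 0
	}
	return power
}

func main() {
	// Illustrative values standing in for an offline model weight file
	// and one pod's collected metrics.
	weights := map[string]float64{"cpu_time": 0.002, "cache_miss": 0.0005}
	features := map[string]float64{"cpu_time": 1200, "cache_miss": 34000}
	fmt.Printf("estimated power: %.2f\n", linearPower(0.5, weights, features))
}
```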
@jichenjc Since some feature names may not match the current version of Kepler, the initial model weight may not be applicable. You may want to further investigate the JSON file and the metric collection call below: kepler/pkg/collector/container_energy_collector.go, lines 76 to 79 in 1a0f342.
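As a hedged illustration of that check, the sketch below loads a model-weight JSON file and flags any feature name the collector does not expose; the JSON layout and metric names are assumptions, not Kepler's actual format.

```go
// Hedged sketch: comparing the feature names in an offline model-weight JSON
// file against the metric names a collector version exposes. The JSON layout
// and the metric names here are assumptions for illustration only.
package main

import (
	"encoding/json"
	"fmt"
)

type modelWeights struct {
	Features []string  `json:"features"`
	Weights  []float64 `json:"weights"`
}

func main() {
	raw := []byte(`{"features": ["cpu_time", "cache_miss"], "weights": [0.002, 0.0005]}`)

	var m modelWeights
	if err := json.Unmarshal(raw, &m); err != nil {
		panic(err)
	}

	// Metric names assumed to be collected by this Kepler version.
	collected := map[string]bool{"cpu_time": true, "cpu_cycles": true}
	for _, f := range m.Features {
		if !collected[f] {
			fmt.Printf("feature %q is not collected; the model weight may not apply\n", f)
		}
	}
}
```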
This is great sharing, thanks! @sunya-ch About the original issue, I think by default there is no model server param provided in the manifest (pkg/collector/metric/utils.go),
and in turn all the logic related to …
@marceloamaral We have https://github.com/sustainable-computing-io/kepler/blob/main/pkg/model/model.go#L66 and https://github.com/sustainable-computing-io/kepler/blob/main/cmd/exporter.go#L84 setting the endpoint, so my proposal is to change this to keep it as a string, so we have …
but:
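A minimal sketch of that proposal, assuming the endpoint stays a plain string and an empty value means the model server is disabled (the names and the example URL are illustrative, not Kepler's actual configuration code):

```go
// Hedged sketch of the "keep it a string" proposal: an empty endpoint means
// the model server is not used, a non-empty one enables it.
package main

import "fmt"

func modelServerEnabled(endpoint string) bool {
	return endpoint != ""
}

func main() {
	for _, ep := range []string{"", "http://kepler-model-server:8100/model"} {
		fmt.Printf("endpoint=%q -> model server enabled: %v\n", ep, modelServerEnabled(ep))
	}
}
```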
IMO we should externalize this via a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: model-server-cfm
  namespace: kepler-system
data:
  ENABLED: (true or false)
  ENDPOINT: ("https://some.endpoint" or "")
  ...
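If such a ConfigMap were projected into the Kepler container, for example as environment variables, consuming it could look roughly like the sketch below; the variable names MODEL_SERVER_ENABLED and MODEL_SERVER_ENDPOINT are assumptions for illustration, not Kepler's actual configuration keys.

```go
// Hedged sketch: reading the proposed ENABLED/ENDPOINT settings, assuming the
// ConfigMap keys are exposed to the container as environment variables.
package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	enabled, _ := strconv.ParseBool(os.Getenv("MODEL_SERVER_ENABLED"))
	endpoint := os.Getenv("MODEL_SERVER_ENDPOINT")

	if enabled && endpoint != "" {
		fmt.Println("querying model server at", endpoint)
		return
	}
	fmt.Println("falling back to the offline/local model weight")
}
```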
But then we depend on the model-server? Should this be a resource defined in kepler instead of kepler-model-server?
I can
thoughts?
@jichenjc Does it mean that if there is no ConfigMap, Kepler will use the default model, and when a ConfigMap is provided by the user, the models provided by the user will be accessed via …? Both the Minimum Deployment and the Deployment with General Estimator Sidecar use offline models and thus would need the model for the local/general estimator.
@husky-parul Based on my test,
if there is no endpoint, then … and if it does have some value, … so I am curious about the logic here.
I proposed
@sunya-ch will have a discussion on the GH project; this is also going to be supported in the operator.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Is your feature request related to a problem? Please describe.
I am getting all pod energy values as 0; checking the code, it seems I need to enable at least one of kepler-model-server and kepler-estimator.
What's the difference between them, and which one should be used? The sidecar (https://github.com/sustainable-computing-io/kepler-estimator)?