Use-Case: Measuring the total carbon emissions for a given ML workload (training/inference) #59
Replies: 4 comments 1 reply
-
@buchananwp can you also add in
Thanks!
-
Will: Using a lifecycle methodology that considers training, inference, etc. will help.
Vaughn: Where the inference happens also matters: the embodied cost of a device at the edge vs. having to go to the cloud.
Abhishek: You can cache frequently used results rather than running a fresh inference every time.
Will: Triton packs inferences onto the GPU more tightly, which helps get more out of the hardware in terms of utilization.
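Abhishek's caching point can be sketched as a simple memoization layer. Here `run_inference` is a hypothetical stand-in for a real model call (e.g., a request to a Triton-served endpoint), used only for illustration:

```python
import functools

# Hypothetical stand-in for a real model call (e.g., a Triton endpoint).
def run_inference(prompt: str) -> str:
    return f"result-for-{prompt}"

# Memoize repeated inputs so identical requests reuse a prior result
# instead of spending GPU energy on a fresh inference pass.
@functools.lru_cache(maxsize=4096)
def cached_inference(prompt: str) -> str:
    return run_inference(prompt)
```

With a high repeat rate among requests, every cache hit avoids a full forward pass; `cached_inference.cache_info()` reports the hit ratio.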
-
@buchananwp what was the next step that we needed on this?
-
I am sharing some thoughts on the factors that affect carbon emissions for a given ML workload.
In terms of measurement, in my view we should look at these factors holistically to come up with a total carbon emission model.
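A holistic model along these lines might combine operational emissions (energy consumed times grid carbon intensity) with an amortized share of the hardware's embodied emissions. The function name, parameters, and numbers below are illustrative assumptions, not part of any existing specification:

```python
def total_carbon_gco2(energy_kwh: float,
                      grid_intensity_gco2_per_kwh: float,
                      embodied_gco2: float,
                      workload_hours: float,
                      hardware_lifetime_hours: float) -> float:
    """Total emissions for one workload: operational energy use plus the
    slice of the hardware's embodied carbon attributable to this workload."""
    operational = energy_kwh * grid_intensity_gco2_per_kwh
    embodied_share = embodied_gco2 * (workload_hours / hardware_lifetime_hours)
    return operational + embodied_share

# Example: a 100 kWh training run on a 400 gCO2/kWh grid, on hardware with
# 1 tCO2 embodied carbon, used for 100 of its ~100,000 lifetime hours.
print(total_carbon_gco2(100, 400, 1_000_000, 100, 100_000))  # 41000.0
```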
-
AzureML is a great use case to explore, given the high cost (including carbon) of ML workloads. AzureML workloads are discrete 'jobs' and can showcase the carbon intensity specification work.
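Because such jobs are discrete, they are natural candidates for carbon-aware scheduling: given an hourly grid-intensity forecast, start the job in the window with the lowest total intensity. A minimal sketch, with made-up forecast values:

```python
def best_start_hour(forecast_gco2_per_kwh: list, duration_hours: int) -> int:
    """Return the start hour whose window of `duration_hours` has the
    lowest summed grid carbon intensity."""
    windows = range(len(forecast_gco2_per_kwh) - duration_hours + 1)
    return min(windows, key=lambda h: sum(forecast_gco2_per_kwh[h:h + duration_hours]))

# Example: intensity dips in hours 2-3, so a 2-hour job should start at hour 2.
print(best_start_hour([500, 450, 200, 180, 300, 420], 2))  # 2
```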