Profile API enhancement #653
Conversation
Signed-off-by: Zan Niu <[email protected]>
Codecov Report
@@ Coverage Diff @@
## 2.x #653 +/- ##
============================================
+ Coverage 84.91% 84.93% +0.01%
- Complexity 986 1023 +37
============================================
Files 92 94 +2
Lines 3540 3730 +190
Branches 326 340 +14
============================================
+ Hits 3006 3168 +162
- Misses 399 415 +16
- Partials 135 147 +12
@@ -52,18 +54,29 @@ public class MLProfileInput implements ToXContentObject, Writeable {
    @Setter
    private boolean returnAllModels;

    @Setter
    private String profileAndDeployment;
It's possible that we'll add new content to the profile response later. How about renaming this to target_response?
Makes sense, I can make the change. What's your opinion on the path parameter? Do we need to change it as well? /_plugins/_ml/profile?profile_and_deployment=all
I think we should change the path parameter too.
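As a rough sketch of what the rename could look like on the request side (hypothetical helper and parameter handling, not the plugin's actual RestHandler wiring), the handler would read target_response while optionally still accepting the old name for compatibility:

```java
import java.util.Map;

// Sketch only: resolve the profile query parameter under the proposed rename.
// OLD_PARAM/NEW_PARAM mirror the names discussed above; the fallback is an assumption.
class ProfileParamSketch {
    static final String OLD_PARAM = "profile_and_deployment";
    static final String NEW_PARAM = "target_response";

    // Prefer the new parameter name; fall back to the old one if a caller still uses it.
    static String resolve(Map<String, String> params) {
        String value = params.get(NEW_PARAM);
        return value != null ? value : params.get(OLD_PARAM);
    }
}
```

Whether the old name should be kept as a fallback at all is a design choice; since the API is new in this PR, dropping it outright would also be reasonable.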
public class MultiGetRequestBuilderFactory {

    public MultiGetRequestBuilder createMultiGetRequestBuilder(Client client) {
        return new MultiGetRequestBuilder(client, MultiGetAction.INSTANCE);
It seems over-designed to add a new class and method for just one line of code.
We should generally avoid calling new directly in business logic, following the concept of IoC (inversion of control). A direct new is also a blocker for unit testing, since it can't be stubbed or mocked; a factory method is the preferred approach to avoid both problems.
@@ -235,6 +235,7 @@ List<String> jacocoExclusions = [
    'org.opensearch.ml.indices.MLIndicesHandler',
    'org.opensearch.ml.rest.RestMLPredictionAction',
    'org.opensearch.ml.profile.MLModelProfile',
    'org.opensearch.ml.profile.MLDeploymentProfile',
Why exclude this class?
I tried to cover this file with unit tests, but the branch coverage kept falling below the threshold. I saw that MLModelProfile is also excluded here, so I treated the exclusion as the existing workaround for the code coverage check.
    }
}

private void buildModelDeploymentData(
This method seems heavy. I did some refactoring of the ML task worker nodes and added the target worker nodes to the profile response; check #656. So you can get the target worker nodes directly from the profile response, with no need to query the index.
For the model name part, I think it's mainly for the frontend. How about we just return the model id in the profile API, and let the frontend query the model index to get the name? Another option: the frontend caches the model id to model name mapping, so it doesn't need to query the model index every time.
If so, I think this PR can be closed, and it seems unnecessary to add a new node under the response. Maybe we can add the target worker nodes directly to the nodes entry?
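The caching option mentioned above can be sketched in a few lines (hypothetical class; in practice the lookup function would be a query against the model index, and the frontend would likely do this in JavaScript rather than Java):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a model id -> model name cache: the index is queried only on a cache miss.
class ModelNameCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> lookup; // e.g. a query against the model index

    ModelNameCache(Function<String, String> lookup) { this.lookup = lookup; }

    String nameFor(String modelId) {
        // computeIfAbsent runs the lookup once per id and remembers the result.
        return cache.computeIfAbsent(modelId, lookup);
    }
}
```

A real implementation would also need an eviction or invalidation story, since model names can change or models can be deleted.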
Signed-off-by: Zan Niu [email protected]
Description
This is an enhancement to the ML profile API to return model deployment status information.
Issues Resolved
#646
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.