Add support for Segrep to Dashboard saved object service #4522
Thanks @wbeckler for opening up this issue. Using the primary shard for search is a good workaround for low read-throughput systems. We are exploring another option for supporting real-time reads with the GET API, as called out in the meta issue, and have captured interest from the AD and Job-Scheduler plugins.
@manasvinibs @wbeckler: the opensearch-project/OpenSearch#8536 fix is available in core. As we are approaching the code freeze for 2.10.0, please work on this issue. Let us know in case you need any help.
Ping @manasvinibs @wbeckler @seanneumann, as we are approaching the code freeze. CC @anasalkouz @mch2
I think, for the sake of completeness, I will just add the workaround to avoid any weird cases or potential performance impact (i.e., congested clusters). The saved objects call is also somewhat cached, so slightly slower performance shouldn't be too bad. But, same as the parent issue, performance test results based on the size of the data and on large clusters with a lot of replication would probably sway the decision here. I'm fairly sure there shouldn't be any crashes from an eventually consistent cluster, but I'll have the answer in a few hours. Regardless, I will still include it.

I think this can also pull in @opensearch-project/opensearch-ux. One point being: how does the user experience of OpenSearch Dashboards feel if the cluster is eventually consistent? Should there be an indication within OpenSearch Dashboards that the data is potentially stale? If we think about OpenSearch Dashboards as a data lakehouse, should I be able to configure segrep for indices?

EDIT: Nothing breaking with segment replication for the
Yes, but I think that should be one of the index configuration options in Index Management. |
Verifying through CI: https://github.com/kavilla/OpenSearch-Dashboards-1/actions/runs/6041902712. EDIT: All tests passed with the configuration.
@kavilla: Curious to know the results of your testing. Are there any changes needed in the dashboards plugin?
Looking good. Granted, the test suites do not configure every setting possible within OpenSearch Dashboards, but I would check it off as good to go. Just leaving this open until I have installed a custom plugin, but I wouldn't block configuring this as the default. Will close this issue EOD.
Will close; will just use the release candidate build to verify one more time.
Segment replication will become the default storage mechanism for remote stores. If the saved objects of a dashboard are stored on a remote store, and segment replication is in effect, then we need to make sure that the saved object service doesn't rely on strong read-after-write guarantees, which are not available with segment replication.
There is
const DEFAULT_REFRESH_SETTING = 'wait_for';
here: https://github.com/opensearch-project/OpenSearch-Dashboards/blob/main/src/core/server/saved_objects/service/lib/repository.ts#L127
Should we instead point the saved object service at the primary shard, as recommended here: #4444?
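For illustration only, a minimal sketch of what "point the saved object service at the primary shard" could look like. This is not the actual `repository.ts` change; the function name and shape are hypothetical, and it assumes the cluster accepts `preference: '_primary'` on search requests (a preference value OpenSearch supports for segment-replication-enabled indices; verify against your target version before relying on it).

```typescript
// Hypothetical helper: build search params for a saved-object lookup,
// optionally pinning the read to the primary shard so it observes the
// latest writes even when replicas lag under segment replication.
interface SearchParams {
  index: string;
  body: Record<string, unknown>;
  preference?: string;
}

function buildSavedObjectSearch(
  index: string,
  type: string,
  id: string,
  readFromPrimary: boolean
): SearchParams {
  const params: SearchParams = {
    index,
    // Saved objects are stored with `${type}:${id}` document IDs.
    body: { query: { term: { _id: `${type}:${id}` } } },
  };
  if (readFromPrimary) {
    // Only meaningful on segrep-enabled clusters, where replica
    // shards may serve slightly stale segments.
    params.preference = '_primary';
  }
  return params;
}

const params = buildSavedObjectSearch('.kibana', 'dashboard', 'abc123', true);
console.log(params.preference); // '_primary'
```

The trade-off mirrors the discussion above: routing every read to the primary avoids stale reads but concentrates load on primary shards, which is why performance testing on congested clusters would inform whether to make it the default.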