Add info about interacting with the docker solution & configure grafana data sources automatically #552
Conversation
Signed-off-by: Mikayla Thompson <[email protected]>
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##               main     #552      +/-   ##
============================================
- Coverage     76.64%   76.05%   -0.60%
- Complexity     1414     1499      +85
============================================
  Files           155      162       +7
  Lines          6033     6339     +306
  Branches        543      563      +20
============================================
+ Hits           4624     4821     +197
- Misses         1044     1143      +99
- Partials        365      375      +10

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Signed-off-by: Mikayla Thompson <[email protected]>
You can send the same calls to the source cluster while bypassing the Capture Proxy (calls will not be relayed to the
target cluster) via `localhost:19200`, and to the target cluster directly at `localhost:29200`.
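A quick way to sanity-check those endpoints from the host is a couple of `curl` calls. This is only an illustrative sketch: it assumes the local demo clusters accept unauthenticated HTTP, and `_cat/indices` is just an example API.

```sh
# Source cluster directly, bypassing the Capture Proxy (nothing is relayed to the target)
curl -s "http://localhost:19200/_cat/indices?v"

# Target cluster directly
curl -s "http://localhost:29200/_cat/indices?v"
```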
For sample data that exercises various endpoints with a range of datatypes, you can `ssh` into the Migration Console
ssh threw me here. There's no ssh involved. You could say 'exec', but that might not be clear to somebody new to Docker. Maybe s/ssh into/execute a shell within/ would be better.
ssh kept making me think that you were trying to hit some kind of a deployed cloud resource.
For sample data that exercises various endpoints with a range of datatypes, you can `ssh` into the Migration Console
(`docker exec -i -t $(docker ps -aqf "ancestor=migrations/migration_console:latest") bash` or via the Docker console)
and run `./runTestBenchmarks.sh`. By default, this runs four workloads from
s/four/four short test/ workloads.
(9200). The Migration Console contains other utility functions (`./catIndices.sh`, `kafka-tools`, etc.) to interact
with the various containers of the solution.
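Putting the pieces from the excerpt above together, a typical interactive session might look like the sketch below (the behavior of `./catIndices.sh` is inferred from its name, and the scripts are assumed to be in the console's default working directory):

```sh
# Open a shell inside the Migration Console container (command from the README excerpt above)
docker exec -i -t $(docker ps -aqf "ancestor=migrations/migration_console:latest") bash

# ...then, inside that shell:
#   ./runTestBenchmarks.sh   # run the sample workloads described above
#   ./catIndices.sh          # list the indices on the clusters
```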
You can also access the metrics generated by the solution in Grafana. While the solution is running, go to |
I would prefix this with "With the default docker-compose configuration launched with `:dockerSolution:composeUp`, instrumentation containers will be started (see below for other options)."
Jaeger and Prometheus are automatically provisioned (see them under `Connections->Data sources`), so you can go
directly to `Explore` and define a query using the supplied data from either data source.
Traces for the capture proxy and replayer are available via Jaeger at [http://localhost:16686](http://localhost:16686). |
With the prefix above, I'd pull this sentence into the last paragraph to bind it to the clause.
@@ -35,6 +35,7 @@ services:
       - "3000:3000"
     volumes:
       - ./grafana_data:/var/lib/grafana
+      - ./grafana_datasources.yaml:/usr/share/grafana/conf/provisioning/datasources/datasources.yaml
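For context, a file mounted at that path would follow Grafana's standard data source provisioning format. A minimal sketch of what `grafana_datasources.yaml` might contain is below; the data source names, the in-network `prometheus`/`jaeger` hostnames, and the ports are assumptions, not taken from this PR:

```sh
# Hypothetical contents for ./grafana_datasources.yaml, written via a heredoc
cat > grafana_datasources.yaml <<'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
  - name: Jaeger
    type: jaeger
    access: proxy
    url: http://jaeger:16686
EOF
```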
thank you for figuring this out!
Should this be a mounted file or a configuration file that's within the image? I can't think of how I would want to change this, and it seems like it might be useful for a quick-and-dirty deployment elsewhere w/out compose.
Yep, I think you're right. This was helpful for testing, but a user is quite unlikely to need that.
Signed-off-by: Mikayla Thompson <[email protected]>
@@ -28,14 +28,13 @@ services:
       - COLLECTOR_OTLP_ENABLED=true

   grafana:
-    image: grafana/grafana:latest
+    image: 'migrations/grafana:latest'
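One plausible way to bake the provisioning file into that custom image, sketched here purely as an assumption (the repository's actual Dockerfile and file layout may differ), reusing the provisioning path from the compose change above:

```sh
# Hypothetical Dockerfile for the migrations/grafana image
cat > Dockerfile <<'EOF'
FROM grafana/grafana:latest
COPY grafana_datasources.yaml /usr/share/grafana/conf/provisioning/datasources/datasources.yaml
EOF

docker build -t migrations/grafana:latest .
```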
This makes a lot more sense since the datasources are REALLY going to be specific for the compose environment. Thanks!
Merged 650093d into opensearch-project:main
Description
In the `dockerSolution` README, I find documentation on how to set up the docker solution, but nothing about what to do then. I had to puzzle out how to access the clusters, migration console, metrics, etc., but if this is a default user entrypoint to our tools, we should explain how to access them.

Issues Resolved
n/a (as far as I know)
Testing
n/a
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.