
Add info about interacting with the docker solution & configure grafana data sources automatically #552

Conversation

@mikaylathompson (Collaborator) commented Apr 4, 2024

Description

  • Category:
    • Documentation & tiny feature
  • Why are these changes required?
    • I walked through the setup process as a very rusty user. Following the link from the main README to the dockerSolution README, I found documentation on how to set up the docker solution, but nothing about what to do next. I had to puzzle out how to access the clusters, the migration console, metrics, etc.; if this is the default user entry point to our tools, we should explain how to access them.
    • Additionally, while a Grafana container was set up, it required the user to configure everything from scratch. In the future, we'd like many things to be pre-configured so the user can just pull up Grafana and see a dashboard immediately. The first step toward that is automatically configuring the data sources, which are very predictable within our docker setup, letting the user go straight to exploring metrics (a rough sketch of such a provisioning file is shown after this list).
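As a purely illustrative sketch of what that auto-configuration could look like, here is a minimal Grafana provisioning file; the service names and ports are assumptions about the compose network, not the exact file added in this PR:

```yaml
# Hypothetical datasources.yaml -- service names and ports are assumptions,
# not copied from this PR's actual provisioning file.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
  - name: Jaeger
    type: jaeger
    access: proxy
    url: http://jaeger:16686
```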

Issues Resolved

n/a (as far as I know)

Testing

n/a

Check List

  • New functionality includes testing
    • All tests pass, including unit test, integration test and doctest
  • New functionality has been documented
  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on the Developer Certificate of Origin and signing off your commits, please check here.

codecov bot commented Apr 4, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 76.05%. Comparing base (5865ca3) to head (ffd5066).
Report is 58 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main     #552      +/-   ##
============================================
- Coverage     76.64%   76.05%   -0.60%     
- Complexity     1414     1499      +85     
============================================
  Files           155      162       +7     
  Lines          6033     6339     +306     
  Branches        543      563      +20     
============================================
+ Hits           4624     4821     +197     
- Misses         1044     1143      +99     
- Partials        365      375      +10     
Flag Coverage Δ
unittests 76.05% <ø> (-0.60%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@mikaylathompson changed the title from "[DOCUMENTATION] Add info about interacting with the docker solution" to "Add info about interacting with the docker solution & configure grafana data sources automatically" on Apr 11, 2024
You can send the same calls to the source cluster while bypassing the Capture Proxy (calls will not be relayed to the
target cluster) via `localhost:19200`, and to the target cluster directly at `localhost:29200`.
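For instance, a quick sanity check from the host might look like this (assuming the local test clusters run without TLS or auth, which is an assumption on my part, not something stated in the excerpt):

```sh
# Talk to the source cluster directly, bypassing the Capture Proxy
curl http://localhost:19200

# Talk to the target cluster directly
curl http://localhost:29200
```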

For sample data that exercises various endpoints with a range of datatypes, you can `ssh` into the Migration Console
Collaborator

ssh threw me here. There's no ssh involved. You could say 'exec', but that might not be clear to somebody new to Docker. Maybe s/ssh into/execute a shell within/ would be better.
ssh kept making me think that you were trying to hit some kind of a deployed cloud resource.


For sample data that exercises various endpoints with a range of datatypes, you can `ssh` into the Migration Console
(`docker exec -i -t $(docker ps -aqf "ancestor=migrations/migration_console:latest") bash` or via the Docker console)
and run `./runTestBenchmarks.sh`. By default, this runs four workloads from
Collaborator

s/four/four short test/ workloads.

(9200). The Migration Console contains other utility functions (`./catIndices.sh`, `kafka-tools`, etc.) to interact
with the various containers of the solution.
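Putting the excerpts above together, a session might look roughly like the following; the container image name and script names are taken from the quoted text, while the exact behavior of the scripts is assumed:

```sh
# Open a shell inside the Migration Console container
# ("execute a shell within", per the wording suggested above)
docker exec -i -t $(docker ps -aqf "ancestor=migrations/migration_console:latest") bash

# Then, inside the container:
./runTestBenchmarks.sh   # run the default test workloads described in the excerpt
./catIndices.sh          # one of the other utility scripts mentioned in the excerpt
```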

You can also access the metrics generated by the solution in Grafana. While the solution is running, go to
Collaborator

I would prefix this with "With the default docker-compose configuration launched with :dockerSolution:composeUp, instrumentation containers will be started (see below for other options)."

Jaeger and Prometheus are automatically provisioned (see them under `Connections->Data sources`), so you can go
directly to `Explore` and define a query using the supplied data from either data source.

Traces for the capture proxy and replayer are available via Jaeger at [http://localhost:16686](http://localhost:16686).
Collaborator

With the prefix above, I'd pull this sentence into the last paragraph to bind it to the clause.

@@ -35,6 +35,7 @@ services:
- "3000:3000"
volumes:
- ./grafana_data:/var/lib/grafana
- ./grafana_datasources.yaml:/usr/share/grafana/conf/provisioning/datasources/datasources.yaml
Collaborator

thank you for figuring this out!
Should this be a mounted file or a configuration file that's within the image? I can't think of how I would want to change this, and it seems like it might be useful for a quick-and-dirty deployment elsewhere w/out compose.

Collaborator Author

Yep, I think you're right. This was helpful for testing, but a user is quite unlikely to need that.

@@ -28,14 +28,13 @@ services:
- COLLECTOR_OTLP_ENABLED=true

grafana:
- image: grafana/grafana:latest
+ image: 'migrations/grafana:latest'
Collaborator

This makes a lot more sense since the datasources are REALLY going to be specific for the compose environment. Thanks!
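For context on the image-based approach discussed in this thread, here is a minimal sketch of how such an image might be built; the COPY destination comes from the volume mount shown earlier, but the Dockerfile itself is hypothetical and not part of this PR:

```dockerfile
# Hypothetical Dockerfile for a migrations/grafana image -- not the one in this PR
FROM grafana/grafana:latest
COPY grafana_datasources.yaml /usr/share/grafana/conf/provisioning/datasources/datasources.yaml
```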

@mikaylathompson merged commit 650093d into opensearch-project:main on Apr 15, 2024
6 of 7 checks passed