
Stop using pytest to run our hub health checks #1232

Open
sgibson91 opened this issue Apr 25, 2022 · 3 comments
Labels
allocation:internal-eng nominated-to-be-resolved-during-q4-2023 Nomination to be resolved during q4 goal of reducing the technical debt

Comments

sgibson91 (Member) commented Apr 25, 2022

Context

We currently use pytest to run our hub health checks. On failure, pytest's output includes the parameters that were passed to the failing test function. For us, those parameters include a JupyterHub API token, which is secret. To prevent the token from leaking via CI logs, we currently suppress all output when the checks run in CI. However, this adds extra toil for the engineering team: an engineer must rerun the hub health check locally with the deployer in order to see any logs, and hope that the problem reproduces there.

Proposal

Since we don't actually use any pytest-specific functionality, we should remove the pytest dependency and call the check functions directly as normal Python functions. This would make diagnostic logs available in CI/CD without leaking the secret API token or requiring an engineer to rerun the checks locally.
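
As a rough illustration of the direct-call approach (a minimal sketch only; the function name and endpoint below are assumptions, not the actual deployer code), a health check could be a plain function that raises on failure with a message that never echoes the token:

import requests

def check_hub_health(hub_url, api_token):
    """Hit the JupyterHub REST API and raise if the hub looks unhealthy."""
    resp = requests.get(
        f"{hub_url}/hub/api",
        headers={"Authorization": f"token {api_token}"},
        timeout=30,
    )
    # Report only the hub URL and status code, never the token, so the
    # resulting traceback is safe to show in CI logs
    if resp.status_code != 200:
        raise RuntimeError(
            f"Hub health check failed for {hub_url}: HTTP {resp.status_code}"
        )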

Updates and actions

No response

Related

yuvipanda (Member)

Fully support!

sgibson91 added and removed the nominated-to-be-resolved-during-q4-2023 label on Oct 18, 2023
sgibson91 (Member, Author)

The issue is specifically referencing this section:

# On failure, pytest prints out params to the test that failed.
# This can contain sensitive info - so we hide stderr
# FIXME: Don't use pytest - just call a function instead
#
# Show errors locally but redirect on CI
gh_ci = os.environ.get("CI", "false")
pytest_args = [
    "-q",
    "deployer/tests",
    f"--hub-url={hub_url}",
    f"--api-token={service_api_token}",
    f"--hub-type={hub.spec['helm_chart']}",
]
if (hub.spec["helm_chart"] == "daskhub") and check_dask_scaling:
    pytest_args.append("--check-dask-scaling")
if gh_ci == "true":
    print_colour("Testing on CI, not printing output")
    with open(os.devnull, "w") as dn, redirect_stderr(dn), redirect_stdout(dn):
        exit_code = pytest.main(pytest_args)
else:
    print_colour("Testing locally, do not redirect output")
    exit_code = pytest.main(pytest_args)

Note that we do use pytest to verify the deployer code itself, which is fine. This is only regarding running the hub health check.
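
For comparison, here is a sketch of how the block above could look with pytest removed, assuming a hypothetical run_hub_health_check helper that wraps the existing check functions (an illustration, not the actual deployer code):

print_colour("Running hub health checks")
try:
    run_hub_health_check(
        hub_url=hub_url,
        api_token=service_api_token,
        hub_type=hub.spec["helm_chart"],
        check_dask_scaling=(
            hub.spec["helm_chart"] == "daskhub" and check_dask_scaling
        ),
    )
    exit_code = 0
except Exception as e:
    # Output no longer needs to be redirected on CI, because the error
    # message never includes the API token
    print_colour(f"Hub health check failed: {e}")
    exit_code = 1

Output can then be shown unconditionally, on CI and locally, and exit_code handled exactly as it is today.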

sgibson91 added the nominated-to-be-resolved-during-q4-2023 label on Oct 18, 2023
consideRatio (Contributor) commented Oct 19, 2023

I think this issue should be worked on before #1206, and that #1206 is suitable follow-up work once this is done.
