Nightly builds for package docs publishing workflow #1869
Conversation
Not sure if the comment on the cron schedule is really necessary. It's pretty self-explanatory, but maybe it would be helpful for some folks.
Should we deploy nightly builds to a staging repo? As I understand it, we are only pushing to prod on the manual workflow_dispatch runs, which I do support.
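For context, here is a rough sketch of the kind of trigger setup being discussed. The file name, schedule, and job name are assumptions for illustration, not the actual workflow:

```yaml
# Hypothetical file: .github/workflows/package-docs-build.yml
on:
  schedule:
    # Nightly build; scheduled runs only verify that the docs still build.
    - cron: "0 4 * * *"
  workflow_dispatch:   # manual runs are the only ones intended to publish

jobs:
  build-package-docs:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Build the package docs here"
```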
That's a good question. I guess my only concern would be that doing so would overwrite any changes on stage that we wanted to get feedback on over a period of days. For instance, if we did another big re-org of content like the user guide. I'm open to it. I just figured nightly builds only need to test for errors so we can get alerted to "broken-ness". @samccann What do you think?
We've autopublished devel docs (core and package) for years, so my nickel is we keep doing that.
And yes, not overwriting stage is a good thing for the reasons @oraNod mentions. That said, maybe we don't need that particular feature now that each PR has an RTD preview? I think the one time I still use it is when playing with the version switcher, but that beastie will go away in the not-too-distant future. Basically, I'd create a PR to update the versions in the devel branch, then hack some dummy branches for myself to add the same changes and test all active branches on staging to make sure the switching happens correctly.
Yeah, we could change the workflow to just push straight to prod for the devel branch if that's what's desired.
The RTD previews only contain the minimal core docs build, so they're different.
Core docs are on RTD and get built automatically on merge. For the package docs, I can't recall where we discussed it, but there was talk of keeping the builds to prod on manual trigger for an initial period. I think that was just for evaluation and sanity's sake while everyone gets comfortable with the workflow. I've been assuming that we'd want to turn on automatic builds at some point down the line. Thinking ahead to that, it seems excessive to build package docs on push since it's such a memory hog and a longer-running task. The best option would be a single build that does a cumulative update. So maybe we should deploy nightlies straight to prod as @gotmax23 says? We can save the stage/test environment for experimental stuff and PRs where we want to try things out. Maybe @felixfontein would find stage helpful for some of his tooling work.
I'm uncomfortable with autobuilds to /latest/ as that's the one that gets some ridiculously high number of views, like 40M a year? If something goes wrong, no one is looking at the build results until people start screaming. That said, we haven't had a /latest/ build screaming event in... hmm... well over a year. So maybe my fears are unfounded at this point. Definitely no autobuilds to /latest/ for, say, a month or so, and then we can turn it on and see what happens. We can always turn it back to manual if we end up with docs scream-events :-)
I'm going ahead and merging this one because it re-introduces changes that were approved before we had to revert them. The discussion about automatically deploying nightly builds is a good one and totally valid. I do think it could be a follow-up, though. At least this PR gets us to the point where we're building devel in community to catch failures early.
@oraNod, can you open a GitHub issue to discuss?
Thanks for holding me accountable for the follow-up discussion, @gotmax23 👍
* Nightly builds for package docs publishing workflow (#1663)
Related to #1663 and #1683
Also follows up on #1814
Thanks to @x1101 for helping fix things by adding default values for inputs. And thanks to @gotmax23 for suggestions to improve the check-deploy job so that it avoids failures on scheduled runs.
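As a rough illustration of those two fixes, with hypothetical job and input names rather than the workflow's real ones: an input default gives manual runs a safe fallback, and the deploy job is gated on the event type so scheduled nightlies stop after the build succeeds.

```yaml
on:
  schedule:
    - cron: "0 4 * * *"
  workflow_dispatch:
    inputs:
      deployment-target:
        description: "Where to publish the built docs"
        type: choice
        options: [test, production]
        default: test    # safe fallback so manual runs do not have to set every input

jobs:
  build-package-docs:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Build the package docs here"

  check-deploy:
    needs: build-package-docs
    runs-on: ubuntu-latest
    # Scheduled nightlies carry no inputs and should only verify the build,
    # so deployment is skipped unless the run was triggered manually.
    if: github.event_name == 'workflow_dispatch'
    steps:
      - run: echo "Deploying to ${{ inputs.deployment-target }}"
```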