docs: generate citations meta data #4205
Conversation
Force-pushed from c77f244 to be8080f
Force-pushed from 172e920 to 6271987
Why all the renames?
This was a change in how duvet linked to sections, so that they actually resolve once you follow them.
If this depends on duvet's implementation, should we be committing these specs? Or should they be generated when needed?
We don't make a lot of changes to duvet, so it feels OK to commit them. However, we could try auto-generating them in CI, and that would save us a step when adding new specs.
I think it's useful to commit the specs, especially for CI where the fetch from tools.ietf.org might fail.
I can definitely see it either way, but it looks like we haven't been remembering to commit them and there isn't currently a mechanism to remind us. What's to ensure we commit them going forward? I'm not super worried about the fetch failing; we can always add a retry for that step in whatever CI job we write.
I think this is the more appropriate path forward, TBH. It should be possible to run the extract script and then check in CI that nothing is left uncommitted.
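A minimal sketch of what that CI check could look like, assuming the meta-data is regenerated by a `compliance/initialize_duvet.sh` script and committed under `compliance/specs` (both paths are assumptions based on snippets quoted later in this conversation):

```sh
#!/usr/bin/env bash
set -euo pipefail

# Regenerate the citation meta-data (assumed script location).
(cd compliance && ./initialize_duvet.sh)

# Fail the job if the regenerated meta-data differs from what is committed;
# `git status --porcelain` also catches newly created, untracked spec files.
test -z "$(git status --porcelain -- compliance/specs)" \
  || { echo "specs are out of date: run initialize_duvet.sh and commit the result"; exit 1; }
```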
.github/workflows/ci_compliance.yml (Outdated)

      toolchain: stable
      override: true

  - uses: camshaft/rust-cache@v1
There is also a cache operation in the duvet action and I am not sure if the caching will mess with consecutive runs. From my own testing it didn't seem to interact. @camshaft
Why do we need to duplicate all of these steps from the workflow? Why not just run the check after it runs?
I wanted to avoid the upload to S3 (part of the duvet action) if the check fails. But we do install duvet twice and the duplication is not ideal. I like your suggestion 👍
The `duvet report` command generates some artifacts (the specs folder) that interfere with detecting uncommitted files. Instead, I introduced a step to clean up those files. Since the cleanup runs prior to the "Extract RFC spec data" phase, I think it is relatively safe.
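A rough sketch of that ordering, with the step names and paths here as assumptions rather than the exact workflow contents:

```sh
# 1. The duvet action has already run `duvet report` (and uploaded the
#    report); that step can leave generated artifacts in the specs folder.

# 2. Clean up those generated artifacts so they are not mistaken for
#    uncommitted changes (assumed location).
rm -rf compliance/specs

# 3. "Extract RFC spec data": regenerate the committed meta-data.
(cd compliance && ./initialize_duvet.sh)

# 4. Anything reported here is meta-data that should have been committed.
git status --porcelain -- compliance/specs
```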
Force-pushed from 3a43f3e to 4352fc4
# If this fails you need to run `cd compliance && ./compliance/initialize_duvet.sh`
#
If the fix is to run "initialize_duvet.sh", and this diff is produced by running "initialize_duvet.sh", why are we requiring developers to do this manually? If we need to run "initialize_duvet.sh" anyway why not just use the result?
It looks like Duvet doesn't do anything if the spec folder already exists: https://github.com/awslabs/duvet/blob/5ed1e4edf75026f83a9fc678a382aa32d16faff7/src/target.rs#L79
So this DOES still save us network calls and make this action less likely to fail.
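If that holds, a quick way to sanity-check it locally (same assumed layout as the sketches above):

```sh
# With the spec folder already committed, a re-run should skip the network
# fetch for existing targets and leave the working tree untouched.
(cd compliance && ./initialize_duvet.sh)
git status --porcelain -- compliance/specs   # expect no output
```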
Please put a comment on this so that it's clear why we're doing this, and future developers don't ask the same question :) Without context, this looks very silly.
Sadly, our duvet reporting is a bit messy at the moment and I found 2 issues with it. This does mean we are downloading the RFC specs in our CI at the moment. I am going to leave a comment linking these issues, since ideally we would not re-download the RFC specs each time.
Documented the issue in code as well: e988e26
Force-pushed from e988e26 to 802b276
Description of changes:
NOTE: In this PR I only made changes to `.github/workflows/ci_compliance.yml` to check for uncommitted changes. All other changes were auto-generated by running the `initialize_duvet.sh` script.

After this PR, we are now generating the duvet report. However, it lacks links for the TLS 1.2 RFC (5246). We need to run `duvet extract` and commit the meta-data in order for the links to work (tested locally).
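Roughly, the local regeneration went like this (the folder and script names are the ones used in this PR; the exact sequence is an approximation):

```sh
# Remove the stale meta-data, regenerate it (the script is assumed to run
# `duvet extract` for each RFC target), and commit the result so the
# report links resolve.
cd compliance
rm -rf specs
./initialize_duvet.sh
git add specs
git commit -m "docs: regenerate citation meta data"
```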
I cleaned up the `specs` folder and re-ran the `initialize_duvet.sh` script to regenerate this meta-data, so you will notice other RFC links also changed in minor ways.

Callout
A `section` keyword got added on the RFC website, which resulted in a larger change to all the meta-data. However, if we continue to re-run the `initialize_duvet.sh` script we should remain consistent.

Testing
I tested changes to the GitHub Action on my personal fork: https://github.com/toidiu/s2n-tls/commits/main
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.