fix the encoding of ECDSA public keys with leading 00's #240

Merged
@unixcharles merged 1 commit into unixcharles:master from benburkert:fix-ecdh-pubkey-encoding on Jun 14, 2024

Conversation

@benburkert (Contributor) commented Jun 5, 2024

An ECDSA public key has 'x' and 'y' components. For a new-account request that uses an EAB token, this public key is added to the protected section of the JWS. When encoding the key, it's important to preserve all of the bytes in each component, even if they are zero bytes.

The previous encoding dumped the key to hex, split the x and y components out of that hex string, encoded each into an OpenSSL bignum, then dumped each bignum back to bytes and urlsafe-base64 encoded the result. But OpenSSL's bignum does not track the number of input bytes, only the most compact representation of the number, so leading zero bytes are dropped when the bignum is re-encoded into a byte array.
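
For illustration, the old pipeline amounted to something like the following (a hedged reconstruction, not the exact removed code; assumes P-256 and a recent Ruby openssl):

require 'openssl'
require 'base64'

key = OpenSSL::PKey::EC.generate('prime256v1')

# Uncompressed point as hex: "04" followed by X and Y (64 hex chars each for P-256).
hex   = key.public_key.to_bn.to_s(16)
width = (hex.length - 2) / 2
x_hex = hex[2, width]
y_hex = hex[2 + width, width]

# The BN round-trip keeps only the most compact representation, so a
# coordinate whose hex starts with "00" comes back one byte short.
x = OpenSSL::BN.new(x_hex, 16).to_s(2)
y = OpenSSL::BN.new(y_hex, 16).to_s(2)
Base64.urlsafe_encode64(x, padding: false) # encodes 31 bytes, not 32, whenever x_hex starts with "00"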

Pushing the 'x' and 'y' components through OpenSSL::BN is unnecessary and causes issues for ECDSA keys that are randomly generated with leading zeros. The problem can be demonstrated with the following code:

irb> OpenSSL::BN.new("00FF", 16).to_s(2)
=> "\xFF"

The test generates a key with a leading zero for the encoded 'x' component, then checks to make sure the JWS contains the corresponding 0x00 bytes in the encoding for 'x'.
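
For context, here is a minimal sketch of an encoding that preserves the full coordinate width (illustrative, not necessarily the exact diff in this PR; assumes P-256 and a recent Ruby openssl). The coordinates are taken directly from the uncompressed point bytes, whose nonzero 0x04 prefix guarantees the BN round-trip of the whole point drops nothing:

require 'openssl'
require 'base64'

key   = OpenSSL::PKey::EC.generate('prime256v1')
point = key.public_key.to_bn.to_s(2) # "\x04" + X + Y; the 0x04 lead byte is nonzero, so no bytes are lost
width = (point.bytesize - 1) / 2     # 32 for P-256
x = point[1, width]                  # fixed-width slices keep any leading zero bytes
y = point[1 + width, width]

jwk = {
  kty: 'EC',
  crv: 'P-256',
  x: Base64.urlsafe_encode64(x, padding: false),
  y: Base64.urlsafe_encode64(y, padding: false)
}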

@benburkert (Contributor, Author) commented:

This bug may affect encoded RSA keys too; I haven't investigated that case.

@unixcharles (Owner) commented:

Thanks for the pull request @benburkert

> This bug may affect encoded RSA keys too; I haven't investigated that case.

I don't have enough context to know either, but given that the change will apply to any type of key, I'll need to do a bit of digging and bring myself up to date before merging. My brain completely purged all knowledge of OpenSSL::BN the minute I had it working.

Do you have a context where you found this to be necessary that I can use to test it out? I only use letsencrypt and have never had any need for external account binding. I'm just confused as to why it's only a problem now and would like to investigate.

Anyhow, I'll try to have it merged soon.

@benburkert
Copy link
Contributor Author

> I don't have enough context to know either, but given that the change will apply to any type of key, I'll need to do a bit of digging and bring myself up to date before merging. My brain completely purged all knowledge of OpenSSL::BN the minute I had it working.

I can look into this in a bit.

> Do you have a context where you found this to be necessary that I can use to test it out? I only use letsencrypt and have never had any need for external account binding. I'm just confused as to why it's only a problem now and would like to investigate.

Right, you wouldn't run into this with LE since they don't use EAB tokens. ZeroSSL uses EAB tokens, but this is not reproducible there (they likely zero-pad when the number of bytes is short; some libraries do this). I work on anchor.dev, and we've had this bug cause CI failures; I'm pretty sure it has impacted users of https://github.com/anchordotdev/puma-acme as well. You can use anchor.dev or lcl.host to reproduce this, but the chance of randomly hitting the bug is 1/128 (a leading zero byte in either the 'x' or the 'y' coordinate). So it's not a new problem, just an unlikely one. It can be reliably reproduced against anchor.dev if you set up the account with a "bad" private key; I can provide instructions if you'd like.
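
For reference, the zero-padding some libraries apply amounts to left-padding the BN-derived bytes back to the fixed coordinate width; a hypothetical Ruby sketch (not ZeroSSL's actual code):

require 'openssl'

# Restore the byte the BN round-trip dropped by padding to the expected width.
x_bytes = OpenSSL::BN.new("00FF", 16).to_s(2) # => "\xFF" (one byte short)
x_bytes.rjust(2, "\x00")                      # => "\x00\xFF", the original two bytes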

@unixcharles merged commit 3c0da04 into unixcharles:master on Jun 14, 2024.
8 checks passed.
@benburkert deleted the fix-ecdh-pubkey-encoding branch on July 1, 2024.