Documenting or creating a TAP to describe keyid_hash_algorithms #848
((Background note for others: KeyIDs matter in TUF. It is technically possible, I think, to construct a slower client for the reference implementation that could verify metadata without knowing how keyids were calculated, without substantial security issues. You could do this by -- to take signature checking as an example -- trying to verify a signature using all authorized keys, rather than just the one whose keyid matches the keyid the signature lists.... I'm not recommending that.)) There's been some debate (#442 and #434, for example) about keyid_hash_algorithms and keyid calculation in general. Personally, I don't think the keyid_hash_algorithms field is of use, and I would like to remove it from keys, as I think it's probably sensible to assume across an implementation and deployment that all keyid calculations employ a specific hash function. As for your question, there are two answers:
Sorry for the headache. ^_^ Does that clear things up enough to ensure compatibility?
Oh, sorry, I didn't answer the TAP question: If we thought that keyid_hash_algorithms was an important element of the system, it'd be a good thing to add to the spec (using a TAP or not), but I don't think we do.... We just need to discuss it a bit more at some point and (hopefully) remove it from the reference implementation in a breaking release. So I'd say this is a compatibility-with-the-reference-implementation issue rather than a specification issue, if that makes sense. Please let me know if I missed anything else. :)
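For what it's worth, the "try all authorized keys" fallback from the background note above could look roughly like this. This is only a sketch: HMAC-SHA256 stands in for a real signature scheme such as ed25519, and all names here are made up, not python-tuf APIs.

```python
import hashlib
import hmac

def verify_ignoring_keyids(signed_bytes, signature, authorized_keys):
    # Try every authorized key instead of trusting the keyid the
    # signature claims. This is O(keys) per signature instead of O(1),
    # but it removes any dependency on how keyids were calculated.
    for key in authorized_keys:
        expected = hmac.new(key, signed_bytes, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            return True
    return False

# Toy example: two authorized keys, a signature made with the second.
keys = [b"root-key-1", b"root-key-2"]
payload = b'{"_type": "targets", "version": 3}'
sig = hmac.new(b"root-key-2", payload, hashlib.sha256).hexdigest()
assert verify_ignoring_keyids(payload, sig, keys)
```

The trade-off is exactly the one noted above: correctness no longer depends on keyid calculation, at the cost of extra verification attempts.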
@awwad hello! Since it looks like 0.12 is coming up soon, has there been a discussion about removing keyid_hash_algorithms? Is it too late to get its removal into 0.12?
Cc @lukpueh
I'm wary of removing keyid_hash_algorithms for two reasons:
1. Security. There might be some unexpected issues we haven't thought about.
2. Backwards-compatibility. I'm not a big fan of removing harmless things at the expense of unnecessarily breaking old clients.
@JustinCappos thoughts?
I believe our thoughts here were that we should support multiple hash
algorithms in metadata and support updating of those algorithms so that a
client which supports both can use them together unambiguously.
We might as well make this a "party issue". Others?
@mnm678 @SantiagoTorres
…On Tue, Sep 24, 2019 at 2:58 PM Trishank K Kuppusamy wrote:
I'm wary of removing keyid_hash_algorithms for two reasons:
1. Security. There might be some unexpected issues we haven't thought
about.
2. Backwards-compatibility. I'm not a big fan of removing harmless
things at the expense of unnecessarily breaking old clients.
@JustinCappos thoughts?
If there are security advantages to it, I'm fine implementing it in go-tuf/rust-tuf, I'm just not really sure how to do it. As best as I can tell, even though python-tuf includes keyid_hash_algorithms in its keys, it's not clear how a client is meant to use it. My first thought is that we're supposed to list keys multiple times, once for each item in the keyid_hash_algorithms list. Regarding backwards compatibility, at the very least tuf 0.12 could do what I do, and treat the field as optional so python-tuf can parse metadata produced by another library that doesn't emit keyid_hash_algorithms.
I believe this happens because the runtime settings aren't the same as the ones being checked in, although I could be wrong.
@erickt Yes, I see what you mean now. I think right now there isn't a good way in TUF to group together otherwise identical keys with different keyids. Until we solve this problem (perhaps in a backwards-incompatible TAP), I agree we should ignore keyid_hash_algorithms (it's not like we use SHA-512 keyids anywhere AFAICT?), simply assume SHA-256, and document it clearly in the spec.
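If we did settle on "just assume SHA-256", keyid computation could be as simple as hashing a canonical serialization of the key with the field stripped out. A rough sketch follows; `json.dumps` with sorted keys stands in for securesystemslib's canonical JSON, and this is not the actual python-tuf code:

```python
import hashlib
import json

def keyid_sha256(key_dict):
    # Hash a canonical serialization of the key, *excluding*
    # keyid_hash_algorithms, so the field cannot change the key's
    # identity. Sorted-key JSON is a stand-in for canonical JSON.
    stripped = {k: v for k, v in key_dict.items()
                if k != "keyid_hash_algorithms"}
    canonical = json.dumps(stripped, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

key = {
    "keytype": "ed25519",
    "scheme": "ed25519",
    "keyval": {"public": "beefcafe"},
    "keyid_hash_algorithms": ["sha256", "sha512"],
}
# The keyid is the same whether or not the field is present.
without_field = {k: v for k, v in key.items()
                 if k != "keyid_hash_algorithms"}
assert keyid_sha256(key) == keyid_sha256(without_field)
```

With a rule like this, adding or dropping keyid_hash_algorithms would no longer change a key's identity or break delegations to it.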
Okay, so the idea would be to have there be a TAP that addresses
specifically this issue come out soon? We would make a (likely) backwards
incompatible change at that point to add in multiple hash algorithms.
Assuming I'm understanding this correctly, does anyone disagree with this
course of action?
…On Wed, Sep 25, 2019 at 3:39 PM Trishank K Kuppusamy wrote:
@erickt Yes, I see what you mean now. I think
right now there isn't a good way in TUF to group together otherwise
identical keys with different keyids. Until we solve this problem (perhaps
in a backwards-incompatible TAP), I agree we should ignore
keyid_hash_algorithms (it's not like we use SHA-512 keyids anywhere
AFAICT?), simply assume SHA-256, and document it clearly in the spec.
spec-wise, I'm a little bit worried about backwards compatibility and what it entails on the semver angle...
@SantiagoTorres True, it would break the reference implementation if we remove it from the code. What I'm saying is: other implementations can consume reference implementation metadata w/o breaking, but the converse is not true (because the reference implementation silently needs keyid_hash_algorithms). As for backwards-incompatibility produced by the TAP, we can worry about that later. Is this all clear?
The complication is that the same key can end up with two different keyids. This is essentially the approach I'm taking in go-tuf, where I automatically trust both keyids, even if both aren't listed in the metadata, and only count the keys once during verification (looking at the code, though, I really should be ignoring unknown keyid algorithms). Anyway, with this approach, once all the clients were updated to the latest version, you could switch towards generating the standard TUF-1.0 keyids. Perhaps 0.12 could implement parsing both IDs, and 0.13 could switch the code generation to remove keyid_hash_algorithms. Also, perhaps the spec state machine should be updated to explicitly include rules on how to deduplicate keys. Would it be sufficient to dedup on the public key value?
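To make the dedup question concrete: counting threshold progress by unique public key value rather than by keyid makes duplicate keyids harmless. A minimal sketch, with made-up names (this is neither go-tuf nor python-tuf code):

```python
def count_unique_good_keys(good_sig_keyids, keydb):
    # Count threshold progress by unique public key value, not keyid,
    # so the same key listed under two keyids counts only once.
    unique_keys = {keydb[keyid]["keyval"]["public"]
                   for keyid in good_sig_keyids}
    return len(unique_keys)

# Hypothetical key database: "aaa" and "bbb" are the same key under a
# current and a legacy keyid; "ccc" is a different key.
keydb = {
    "aaa": {"keyval": {"public": "PUBKEY1"}},
    "bbb": {"keyval": {"public": "PUBKEY1"}},
    "ccc": {"keyval": {"public": "PUBKEY2"}},
}
assert count_unique_good_keys(["aaa", "bbb"], keydb) == 1
assert count_unique_good_keys(["aaa", "bbb", "ccc"], keydb) == 2
```

Under this rule a client can trust both the legacy and the TUF-1.0 keyid during a migration without weakening the signature threshold.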
Regarding what others have said about keyid_hash_algorithms, I don't think that the desired flexibility gained by supporting different hash algorithms for keyids justifies the complexity of the (current) implementation and the associated maintenance cost.
Btw. and since this has been an argument here, the next tuf release will already be backwards incompatible. See first line of tentative changelog:
@lukpueh We are going off-topic, but would the next release break existing clients that have not updated TUF to the next release?
@trishankatdatadog, not necessarily, but let's discuss this somewhere else.
Just checking in, have you had any luck figuring out what to do with keyid_hash_algorithms?
@erickt We haven't come to a consensus, and we need to fix this. We should at least remove it from the spec, if not the reference code right now.
In theupdateframework/specification#58, @trishankatdatadog and @lukpueh clarified that the core roles always find their keys in the root role, and that delegated target roles find their keys in the delegating target role (and not by looking in any delegation chains). If this is correct, then we might be able to simplify handling of the keyid_hash_algorithms field. Could the spec be loosened to leave it up to the TUF implementation how exactly keyids are defined? They just need to guarantee that:
It is not required that keyids be globally unique. The spec could suggest, but not require, that implementations use, e.g., the SHA-256 hash of the canonical key. For the case of python-tuf, if you wanted to get rid of keyid_hash_algorithms ...
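The first guarantee could be checked mechanically: gather every (keyid, key) binding in scope for a role, from every role that delegates to it, and confirm no keyid is ambiguous. A sketch under those assumptions (all names made up):

```python
def keyids_unambiguous(bindings):
    # bindings: (keyid, public_key_value) pairs gathered from every
    # role that delegates to a given role. A keyid may repeat, but it
    # must always name the same key; global uniqueness is not required.
    seen = {}
    for keyid, public in bindings:
        if seen.setdefault(keyid, public) != public:
            return False
    return True

# Same keyid repeated for the same key across two delegating roles: fine.
assert keyids_unambiguous([("abc", "K1"), ("abc", "K1"), ("def", "K2")])
# Same keyid naming two different keys: ambiguous, must be rejected.
assert not keyids_unambiguous([("abc", "K1"), ("abc", "K2")])
```

This also covers the multiple-delegators case discussed below, since the bindings are collected across every delegating role rather than per file.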
@erickt 💯 I've been trying to say the same thing: keyids need not be globally unique, only within a metadata file.
Seconding 💯. However, I'd be very careful to say that they only need to be unique within a metadata file, because the scope of a keyid always stretches over at least two metadata files: that of the delegating role, where the key is defined, and that of the delegated role, where that key is used to create/verify signatures. And this gets even trickier if multiple roles delegate to a role. Which, IIUC, is a rather hypothetical but not strictly forbidden scenario in TUF (see @awwad's excellent elaborations on that matter in #660). So in a quite roundabout way, I'd phrase @erickt's first condition as follows:
I am aware that this wording is not suitable for the spec. But maybe someone can transform it into something more readable? @jhdalek55? Maybe we can also just get away with saying something simpler, e.g. along the lines of ...
At any rate, the information about keyids in the spec is pretty sparse so far. There are only these three places. (1) 4.2. File formats: general principles L497: ...
(2) ... and L572-L573:
(3) 4.3. File formats: root.json L630-L633:
I suggest to
I can take a look at the section you referenced above, but it's a bit hard to do out of context. Can you point to the section where this falls?
Thanks, @jhdalek55. You're probably right that it is a lot to ask without additional context. Let's see if @trishankatdatadog or @erickt have feedback on my comment. @awwad's input here would also be helpful, given that he has spent a lot of thought on "promiscuous delegations". And he has also shown (on a different channel) that he has an opinion about the scope/uniqueness of keyids.
Prior to this commit, metadata signature verification as provided by `tuf.sig.verify()` and used e.g. in `tuf.client.updater` counted multiple signatures with identical authorized keyids each separately towards the threshold. This behavior practically subverts the signature threshold check. This commit fixes the issue by counting identical authorized keyids only once towards the threshold.

The commit further clarifies the behavior of the relevant functions in the `sig` module, i.e. `get_signature_status` and `verify`, in their respective docstrings, and adds tests for those functions and also for the client updater.

NOTE: With this commit, signatures with different authorized keyids still each count separately towards the threshold, even if the keyids identify the same key. If this behavior is not desired, I propose the following fix instead. It verifies uniqueness of keys (and not keyids):

```diff
diff --git a/tuf/sig.py b/tuf/sig.py
index ae9bae15..5392e596 100755
--- a/tuf/sig.py
+++ b/tuf/sig.py
@@ -303,7 +303,14 @@ def verify(signable, role, repository_name='default', threshold=None,
   if threshold is None or threshold <= 0: #pragma: no cover
     raise securesystemslib.exceptions.Error("Invalid threshold: " + repr(threshold))

-  return len(good_sigs) >= threshold
+  # Different keyids might point to the same key.
+  # To be safe, check against unique public key values.
+  unique_good_sig_keys = set()
+  for keyid in good_sigs:
+    key = tuf.keydb.get_key(keyid, repository_name)
+    unique_good_sig_keys.add(key["keyval"]["public"])
+
+  return len(unique_good_sig_keys) >= threshold
```

Signed-off-by: Lukas Puehringer <[email protected]>
Another argument for loosening the keyid specification is interoperability with existing PKI-infrastructure. Here are two data points that support that claim.
python-tuf is [considering] getting rid of keyid_hash_algorithms, so we shouldn't default to generating keys with them specified. [considering]: theupdateframework/python-tuf#848 Change-Id: I2c3af5d5eb7b0cc30793b54e45155320164cf706
Before this patch, the resolver assumed that all tuf keys should have the `"keyid_hash_algorithms": ["sha256"]` specified. However this field was only added to check compatibility with python-tuf, and python-tuf is [considering] getting rid of them, if they can figure out how to do it without breaking their current users. So we'd like to migrate away from using them to avoid having to have these fields around for all time. The first step is to allow us to verify the initial TUF metadata with two variations of the root TUF keys, one with, and one without the keyid_hash_algorithms specified. This is safe (as in we won't double count a key) as long as the metadata doesn't list the same key multiple times with different keyids. Once everyone has migrated over to the new metadata that doesn't mention `keyid_hash_algorithms`, we can get rid of the call to `PublicKey::from_ed25519_with_keyid_hash_algorithms`. [considering]: theupdateframework/python-tuf#848 Fixed: 44490 Change-Id: Ib84ca4551b9d68f322039215ba40996608d6ca58
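The migration trick described in that patch (registering the same key under both the plain TUF-1.0 keyid and the legacy keyid that includes keyid_hash_algorithms) could be sketched like this. Illustrative only: `_keyid` uses plain sorted-key `json.dumps` in place of canonical JSON, and none of this is the actual go-tuf or rust-tuf code.

```python
import hashlib
import json

def _keyid(key_dict):
    # Stand-in for canonical-JSON keyid computation.
    canonical = json.dumps(key_dict, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def register_both_keyids(key, keydb):
    # Register one key under two keyids: the plain TUF-1.0 keyid and the
    # legacy keyid computed over the key *with* keyid_hash_algorithms,
    # so metadata written either way verifies during the migration.
    legacy_form = dict(key, keyid_hash_algorithms=["sha256"])
    keydb[_keyid(key)] = key
    keydb[_keyid(legacy_form)] = key  # same underlying key
    return keydb

keydb = register_both_keyids(
    {"keytype": "ed25519", "keyval": {"public": "PUB"}}, {})
# Two distinct keyids are registered, both resolving to the same key,
# so threshold counting must dedup on the key value, not the keyid.
assert len(keydb) == 2
```

Once all metadata in the wild omits the field, the legacy registration can simply be dropped.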
IMHO: What hashing algorithm is used when a key ID is calculated is an implementation detail. It is not a property of the key. Keys should not each individually define a list of hash algorithms that are acceptable for referring to those keys, leading the key to change any time implementation details do. Referring to a key is something that signatures, delegating metadata, and clients do. I doubt there is any serious need to permit an implementation to support multiple different key ID hash algorithms for the same key type; however, if there is, it should still not be in the hashed value of the key itself (see below). Refresher:
TODO:
I've said previously that the result of the current policy in the reference implementation (aside from the fact that it is confusing) is that trying to refer to a key in a different way requires changing the key's identity and breaking any delegations to it. It will cause (and has caused) headaches. Break metadata delegations once and remove this for the sake of clarity and simplicity, instead of breaking delegations whenever hash algorithms need to change. This change should not cost us any interoperability. Re. GPG keys: for my part, when I've had to support OpenPGP keys and signatures (basically, for YubiKey interfacing), I've used them as a vector for more typical ed25519 signatures: in signatures, in delegating metadata, and in the verification process, I still refer to and use the public key using its underlying value (not the OpenPGP keyid).
On a related note, I've been working on a proposal to change the TUF specification to allow for more flexibility in how keyids are calculated. A draft of the document is available here and I'd appreciate any feedback. This change would give implementers (including the reference implementation) more flexibility in how keyids can be used while adhering to the specification.
I suggest we close this issue. There is no longer a need for documenting or creating a TAP to describe keyid_hash_algorithms. Let's discuss the removal of the field in the corresponding issue #1084.
Hello again,
Would you consider documenting, or possibly writing a TAP, to describe the keyid_hash_algorithms field in a key definition? As I upgrade go-tuf (and eventually rust-tuf) to the TUF 1.0 Draft, I am also updating it to be compatible with the metadata produced and consumed by this project. In order to do so, I need to also produce a compatible keyid_hash_algorithms field in the metadata I produce. However, I'm not completely sure what I am supposed to do with this field when I receive it from python-tuf. It appears this field originally came from secure-systems-lab/securesystemslib#37. As best as I can tell, it is present so that a client knows which algorithm was used to compute a keyid, so it can verify that a keyid is correct. Is this right, and are there other ways the keyid_hash_algorithms field is used?