
Discussion of TAP 12: Improving keyid flexibility #135

Open
mnm678 opened this issue Apr 29, 2021 · 4 comments

Comments

@mnm678
Contributor

mnm678 commented Apr 29, 2021

This is a thread to discuss TAP 12, introduced in #112.

The corresponding issue for the reference implementation is theupdateframework/python-tuf#1084.

@mnm678
Contributor Author

mnm678 commented Apr 29, 2021

This TAP may be superseded by the signing spec project once there is a TAP for that. However, this backwards-compatible solution may still be a good intermediate step that can be taken before a TUF 2.0 release.

@lukpueh
Member

lukpueh commented Mar 23, 2022

Let's consider removing the key uniqueness check as a requirement, or at least making it clear that there is no real security benefit over checking for unique keyids only.

--

Context:

We recently had a discussion with the python-tuf maintainers about the client-side key uniqueness check prescribed by TAP 12:

"Clients MUST use each key only once during a given signature verification." During this de-duplication check, the client should use a standardized representation for keys, like the modulus and exponent for RSA or the point and curve for ECC.

@jku argues that

this protects against an attacker that

  • has the ability to add keys it wants
  • but wants the keys to look different from each other

Why, in this case, would the attacker want the two keys to be the same key with different keyids? Why not two different keys?

It seems that there has already been some confusion about this in the original TAP 12 discussion, to which @mnm678 concluded:

If all goes correctly, using the key [content to check uniqueness] does not make any difference, but if the metadata accidentally has duplicate keys with different keyids, using keyids to verify uniqueness of keys could lead to a key being applied to a threshold more than once.

On a related side-note: The reference implementation currently checks signature thresholds based on keyid uniqueness only (see theupdateframework/python-tuf@48b58d9)
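The difference between the two checks can be sketched as follows. This is a minimal illustration, not python-tuf's actual API: the key dicts, field names, and helper functions are assumptions made for the example. The point is that counting by key content collapses the same RSA key published under two keyids into one, while counting by keyid does not.

```python
# Illustrative sketch of TAP 12's content-based uniqueness check:
# keys count toward a threshold by their cryptographic content
# (RSA modulus and exponent), not by their keyid strings.
# Key dicts and names below are hypothetical, not python-tuf's API.

def rsa_fingerprint(key: dict) -> tuple[int, int]:
    """Canonical content of an RSA public key: its (modulus, exponent) pair."""
    return (key["n"], key["e"])

def count_toward_threshold(verified_keys: dict[str, dict]) -> int:
    """Count unique signing keys among keyid -> key entries whose
    signatures have already verified.  Two keyids mapping to the same
    (n, e) pair contribute only once."""
    return len({rsa_fingerprint(k) for k in verified_keys.values()})

# The same RSA key accidentally published under two different keyids:
key = {"n": 0xC0FFEE, "e": 65537}
verified = {
    "keyid-a": key,
    "keyid-b": dict(key),            # duplicate content, different keyid
    "keyid-c": {"n": 0xBEEF, "e": 65537},
}

print(count_toward_threshold(verified))  # content-based count: 2
print(len(verified))                     # keyid-based count:   3
```

With keyid-only de-duplication the duplicated key would satisfy two slots of a threshold; the content-based check counts it once.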

@trishankatdatadog
Member

If all goes correctly, using the key [content to check uniqueness] does not make any difference, but if the metadata accidentally has duplicate keys with different keyids, using keyids to verify uniqueness of keys could lead to a key being applied to a threshold more than once.

If we internally represent keys correctly (e.g., modulus and exponent for RSA instead of simply the PEM encoding), we would naturally not have this problem.

@jku
Member

jku commented Mar 23, 2022

Thanks Lukas, that makes sense: the requirement is there not to prevent an attack but to catch a mistake on the repository side:

If all goes correctly, using the key [content to check uniqueness] does not make any difference, but if the metadata accidentally has duplicate keys with different keyids, using keyids to verify uniqueness of keys could lead to a key being applied to a threshold more than once.

I think this should not lead to a client-side spec requirement to verify key content uniqueness: clients should be as easy to implement as possible, and implementing this is clearly not trivial (evidenced by the fact that, to my knowledge, no TUF client has yet implemented it).
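For contrast, the keyid-only check that the reference implementation performs (per the commit linked above) is trivial for a client to implement. The function name below is illustrative, not python-tuf's API:

```python
# Sketch of keyid-based threshold counting: each distinct keyid that
# produced a valid signature counts once, with no key-content comparison.
# This is the simpler client-side check discussed above; the function
# name is hypothetical, not taken from python-tuf.

def threshold_met(verified_keyids: list[str], threshold: int) -> bool:
    """True if at least `threshold` distinct keyids produced valid signatures."""
    return len(set(verified_keyids)) >= threshold

print(threshold_met(["keyid-a", "keyid-b", "keyid-a"], 2))  # True
print(threshold_met(["keyid-a", "keyid-a", "keyid-a"], 2))  # False
```

The simplicity is the argument: this check needs no canonical key representation per algorithm, so it cannot misparse a key, at the cost of not catching the same key listed under two keyids.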
