Add blake2b-256 algorithm #283
Conversation
This uses the same blake2b algorithm from hashlib, just with a shorter digest.
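A minimal sketch of the idea, using only the standard library (the names below are illustrative, not the library's actual API):

```python
import hashlib

# "blake2b-256" is plain hashlib blake2b constructed with digest_size=32,
# i.e. a 256-bit digest instead of the default 512-bit one.
h = hashlib.blake2b(b"some data", digest_size=32)
print(h.digest_size)       # 32 bytes
print(len(h.hexdigest()))  # 64 hex characters

# Note: digest size is a parameter of the BLAKE2 algorithm itself, so this
# output is not simply a truncation of the 64-byte blake2b digest.
```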
Add tests for blake2b, blake2b-256, blake2s. Shuffle things a bit to make sure we don't test blake2* with pyca_crypto.
LGTM, I can't think of weaknesses. Thanks!
Huh, blake2* is only supported in hashlib from Python 3.6, apparently.
I think the test rig looks OK now: it should test every algorithm if the library and Python version allow.
Should we be throwing some kind of error if a caller is trying to generate blake2* digests with Python < 3.6 or pyca?
It does throw an error... and yes: if the tests checked for that, they would be much easier to reason about.
* Make sure we actually run every test on every algorithm and library: just expect UnsupportedAlgorithmError if the algorithm is not supported on this library on this Python version
* Refactor all algorithm update tests into one function
hashlib.new('blake2b', digest_size=32) raises TypeError on Python < 3.6 because the argument is unexpected: re-raise as UnsupportedAlgorithmError.
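The re-raise described above can be sketched like this; the wrapper function is hypothetical, and only the UnsupportedAlgorithmError name comes from the discussion:

```python
import hashlib

class UnsupportedAlgorithmError(Exception):
    """Stand-in for the library's error class (name taken from the thread)."""

def new_blake2b_256():
    # Hypothetical wrapper illustrating the re-raise.
    try:
        return hashlib.new("blake2b", digest_size=32)
    except TypeError as e:
        # On Python < 3.6 hashlib does not know blake2b, so the digest_size
        # keyword argument is unexpected and surfaces as a TypeError.
        raise UnsupportedAlgorithmError("blake2b-256 needs Python >= 3.6") from e
```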
I can squash the commits into two, but did not do it yet in case Joshua wants to see the diff vs. the previously reviewed version. Notes for reviewers:
This looks much better, thanks @jku!
I wonder whether it would be clearer if we were using the unittest.skip* decorators to explicitly skip test cases where we don't expect them to work. That might lead to fewer unexpected errors than only accepting failures if it's an UnsupportedAlgorithmError, though it does require explicit tests for the UnsupportedAlgorithmError on configurations which don't support it.
Let me know what you think.
I like the fact that we now test every combination and actually verify the correct error, so skip doesn't sound right. Even with that, I don't think we can make it work, since the tests are not "unique": a single test case tests all libraries, so we can't condition the skip/expectedFailure on the library argument. Parameterizing 'library' (and maybe 'algorithm') would enable that, but it means an additional test dependency on something like https://github.com/wolever/parameterized -- I think that's too much. TL;DR: we should keep parameterizing in mind as an option in the future, but for now the decorators are IMO not helpful.
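The approach settled on above, run every library/algorithm combination and expect UnsupportedAlgorithmError on unsupported combinations, can be sketched as follows. The support matrix, digest() helper, and error class here are all illustrative stand-ins, not the library's actual API:

```python
import hashlib

class UnsupportedAlgorithmError(Exception):
    pass

SUPPORTED = {  # hypothetical support matrix
    "hashlib": {"sha256", "blake2b", "blake2b-256", "blake2s"},
    "pyca_crypto": {"sha256"},  # pyca_crypto does not support blake2*
}

def digest(library, algorithm):
    # Illustrative dispatcher: raise the expected error for unsupported combos.
    if algorithm not in SUPPORTED.get(library, set()):
        raise UnsupportedAlgorithmError(f"{algorithm} not supported by {library}")
    name = "blake2b" if algorithm == "blake2b-256" else algorithm
    kwargs = {"digest_size": 32} if algorithm == "blake2b-256" else {}
    return hashlib.new(name, b"data", **kwargs)

# Run every combination; unsupported ones must raise the expected error.
for library in SUPPORTED:
    for algorithm in ("sha256", "blake2b-256"):
        try:
            digest(library, algorithm)
        except UnsupportedAlgorithmError:
            assert algorithm not in SUPPORTED[library]
```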
Thanks for taking the time to reason through that and discuss it with me! Merging as is, with a plan to look at parameterising tests in the future.
Fixes: #282
Description of the changes being introduced by the pull request:
Add 'blake2b-256' as a new supported algorithm: it is really the same algorithm as blake2b, just with half the digest size.
Refactors the hash tests a little to ensure that blake2* gets tested, but not with pyca_crypto (which does not support it).
I've manually verified that we now get the same results as Warehouse :)
Please verify and check that the pull request fulfils the following requirements: