
Add blake2b-256 algorithm #283

Merged: 5 commits from jku:add-blake2b-256 into secure-systems-lab:master, Oct 15, 2020
Conversation

@jku (Collaborator) commented Oct 8, 2020

Fixes: #282

Description of the changes being introduced by the pull request:

Add 'blake2b-256' as a new supported algorithm: it is really the same algorithm as blake2b, just with half the digest size.
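
As a minimal illustration (plain hashlib, not code from this PR), the only difference between the two is the digest_size passed at construction time:

    import hashlib

    # blake2b-256 is plain blake2b constructed with a 32-byte (256-bit)
    # digest instead of the default 64 bytes.
    full = hashlib.blake2b(b"some data")                   # 64-byte digest
    short = hashlib.blake2b(b"some data", digest_size=32)  # 32-byte digest, i.e. "blake2b-256"

    print(len(full.digest()))   # 64
    print(len(short.digest()))  # 32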

Refactors the hash tests a little to ensure that the blake2* algorithms get tested, but not on pyca_crypto (which does not support them).

I've manually verified that we now get the same results as Warehouse :)

Please verify and check that the pull request fulfils the following requirements:

  • The code follows the Code Style Guidelines
  • Tests have been added for the bug fix or new feature
  • Docs have been added for the bug fix or new feature

Jussi Kukkonen added 2 commits October 8, 2020 17:09
This uses the same blake2b algorithm from hashlib, just with shorter
digest.
Add tests for blake2b, blake2b-256, blake2s.

Shuffle things a bit to make sure we don't test
blake2* with pyca_crypto.
@coveralls commented Oct 8, 2020

Coverage Status

Coverage increased (+0.001%) to 98.879% when pulling 800dad3 on jku:add-blake2b-256 into 0ec854b on secure-systems-lab:master.

@trishankatdatadog (Contributor) left a comment

LGTM, I can't think of weaknesses. Thanks!

@jku (Collaborator, Author) commented Oct 8, 2020

Huh, blake2* is apparently only supported in hashlib from Python 3.6 onwards.

@jku (Collaborator, Author) commented Oct 8, 2020

I think the test rig looks ok now: it should test every algorithm if the library and Python version allow.
What I don't like is that it's not very easy to see that the correct algorithms did get tested -- but I think it's better than before.

@joshuagl (Collaborator) left a comment

Should we be throwing some kind of error if a caller is trying to generate blake2* digests with Python < 3.6 or pyca?

@jku (Collaborator, Author) commented Oct 14, 2020

It does throw an error... and yes: if the tests checked for that, they would be much easier to reason about

Jussi Kukkonen added 2 commits October 14, 2020 12:01
* Make sure we actually run every test on every algorithm and library:
  just expect UnsupportedAlgorithmError if the algorithm is not
  supported on this library on this python version
* refactor all algorithm update tests into one function
This raises TypeError on Python < 3.6:
  hashlib.new('blake2b', digest_size=32)
because the digest_size argument is unexpected: re-raise it as
UnsupportedAlgorithmError.
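
A rough sketch of the re-raise that commit message describes (illustrative only; it assumes securesystemslib.exceptions.UnsupportedAlgorithmError and uses a hypothetical helper name, and is not the exact hash.py code):

    import hashlib

    from securesystemslib import exceptions

    def _new_blake2b_256():
        try:
            # Python >= 3.6: hashlib forwards digest_size to the blake2b constructor.
            return hashlib.new('blake2b', digest_size=32)
        except TypeError:
            # Python < 3.6: the digest_size argument is unexpected.
            raise exceptions.UnsupportedAlgorithmError('blake2b-256')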
@jku
Copy link
Collaborator Author

jku commented Oct 14, 2020

I can squash the commits into two, but did not do it yet in case Joshua wants to see the diff vs. the previously reviewed version.

Notes for reviewers:

  • ended up refactoring the _do_{algorithm}_update() functions into a single function, and modifying all _do_*() functions to take an algorithm as a parameter: this makes the next item possible ⬇️
  • the supported algorithms for a test are still not listed in every test, but now it's pretty easy to reason about and verify what is expected to fail and what is expected to succeed: _run_with_all_hash_libraries() decides that and can easily be made to print out those decisions if they need to be checked (see the sketch after this list)
  • found an inconsistency in my new code: see the TypeError handling in hash.py (yay for testing)
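
A hypothetical sketch of that driver pattern (the real _run_with_all_hash_libraries() and the project's actual table of unsupported combinations may differ):

    from securesystemslib.exceptions import UnsupportedAlgorithmError

    # Combinations expected to raise UnsupportedAlgorithmError (illustrative values).
    UNSUPPORTED = {
        ('pyca_crypto', 'blake2b'),
        ('pyca_crypto', 'blake2b-256'),
        ('pyca_crypto', 'blake2s'),
    }

    def run_with_all_hash_libraries(check, algorithm,
                                    libraries=('hashlib', 'pyca_crypto')):
        for library in libraries:
            if (library, algorithm) in UNSUPPORTED:
                try:
                    check(algorithm, library)
                except UnsupportedAlgorithmError:
                    continue  # this combination is expected to fail
                raise AssertionError(
                    '%s on %s should have been unsupported' % (algorithm, library))
            else:
                check(algorithm, library)  # must succeed everywhere else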

@joshuagl (Collaborator) left a comment

This looks much better, thanks @jku!

I wonder if it would be clearer or not if we were using the unittest.skip* decorators to explicitly skip test cases where we don't expect them to work. That might lead to fewer unexpected errors than only accepting failures if it's an UnsupportedAlgorithmError, though it does require explicit tests for the UnsupportedAlgorithmError on configurations which don't support it.

Let me know what you think.
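
For reference, the skip-decorator approach under discussion might look roughly like this (a hypothetical illustration, not code proposed in the PR):

    import sys
    import unittest

    class TestBlake2Digests(unittest.TestCase):

        @unittest.skipIf(sys.version_info < (3, 6),
                         'hashlib supports blake2* only on Python >= 3.6')
        def test_blake2b_256(self):
            # ... generate a blake2b-256 digest and compare it to a known value ...
            pass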

@jku (Collaborator, Author) commented Oct 15, 2020

I wonder if it would be clearer or not if we were using the unittest.skip* decorators to explicitly skip test cases where we don't expect them to work.

I like the fact that we now test every combination and actually verify the correct error, so skip doesn't sound right. A @ExpectedFailureIf(condition, error, reason) might be useful, but that does not exist out of the box.

Even with that, I don't think we can make it work, since the tests are not "unique": a single test case tests all libraries, so we can't condition the skip/expectedFailure on the library argument. Parameterizing 'library' (and maybe 'algorithm') would enable that, but it means an additional test dependency on something like https://github.com/wolever/parameterized -- I think that's too much.

TL;DR: we should keep parameterizing in mind as an option in the future, but for now the decorators are IMO not helpful.
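
For comparison, a sketch of what the parameterized option could look like with the package linked above (hypothetical test and class names, not part of this PR):

    import unittest

    from parameterized import parameterized

    class TestDigest(unittest.TestCase):

        # Each (library, algorithm) pair becomes its own test case, so skip or
        # expectedFailure decorators could target exactly the unsupported ones.
        @parameterized.expand([
            ('hashlib', 'blake2b-256'),
            ('hashlib', 'sha256'),
            ('pyca_crypto', 'sha256'),
        ])
        def test_digest(self, library, algorithm):
            # ... create a digest object for `algorithm` using `library` here ...
            pass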

@joshuagl (Collaborator) commented
Thanks for taking the time to reason through that and discuss with me! Merging as is with a plan to look at parameterising tests in future.

@joshuagl joshuagl merged commit 63423a3 into secure-systems-lab:master Oct 15, 2020
Successfully merging this pull request may close these issues.

blake2b digest length is incompatible with Warehouse