This repository has been archived by the owner on Nov 29, 2018. It is now read-only.

Optimize ChecksumIEEE helper #2

Merged · 1 commit merged into klauspost:master from fix-checksum-ieee on Aug 31, 2015

Conversation

@eapache (Contributor) commented Aug 31, 2015

It was calling the fallback `update` method directly and so wasn't seeing any
optimization. Call `updateIEEE` instead, which includes the optimized path (and
bypasses the unnecessary switch on the table vs. just calling `Update`).

@klauspost
cc @wvanbergen

This explains the confusion we ran into trying to benchmark IBM/sarama#527
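To make the invariant concrete: the helper must keep returning exactly what the general-purpose `Update` path returns over the IEEE table; this change only swaps which internal routine computes it. Below is a minimal sanity check, a sketch written against the standard library's `hash/crc32` names (`ChecksumIEEE`, `Update`, `IEEETable`), which this package is assumed to mirror as a drop-in:

```go
package crc32check_test

import (
	"hash/crc32"
	"testing"
)

// TestChecksumIEEEMatchesUpdate verifies the helper still agrees with the
// general Update path over the IEEE table after routing it through the
// optimized implementation.
func TestChecksumIEEEMatchesUpdate(t *testing.T) {
	data := []byte("payload worth checksumming")

	got := crc32.ChecksumIEEE(data)
	want := crc32.Update(0, crc32.IEEETable, data)
	if got != want {
		t.Fatalf("ChecksumIEEE = %#08x, Update(0, IEEETable, data) = %#08x", got, want)
	}
}
```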

klauspost added a commit that referenced this pull request on Aug 31, 2015
klauspost merged commit 7ae177d into klauspost:master on Aug 31, 2015
@klauspost (Owner) commented

Yes, I just noticed this when submitting the CL to the main Go repo.

Well spotted, thanks!

eapache deleted the fix-checksum-ieee branch on August 31, 2015 at 16:17
@eapache (Contributor, Author) commented Aug 31, 2015

Does that mean this could be built into Go 1.6?

@klauspost (Owner) commented

I cannot see a reason why it shouldn't. You can follow it here:

https://go-review.googlesource.com/#/c/14080/

@klauspost (Owner) commented

@eapache - you might also be interested in github.com/klauspost/compress, since I see you use gzip. This version not only uses the faster crc32, but also eliminates all allocations if you re-use/pool the writer, and it is generally much faster.
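For anyone wondering what that re-use looks like, here is a rough sketch (not code from either repo) of pooling gzip writers with `sync.Pool` and `Reset`, assuming `github.com/klauspost/compress/gzip` keeps the standard `compress/gzip` API (`NewWriter`, `Reset`, `Write`, `Close`); `compressTo` is a hypothetical helper for illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"

	"github.com/klauspost/compress/gzip"
)

// Pool of gzip writers so repeated compressions reuse internal state
// instead of allocating a fresh writer (and its buffers) on every call.
var gzipWriters = sync.Pool{
	New: func() interface{} { return gzip.NewWriter(nil) },
}

// compressTo gzips src into dst using a pooled writer.
func compressTo(dst *bytes.Buffer, src []byte) error {
	zw := gzipWriters.Get().(*gzip.Writer)
	defer gzipWriters.Put(zw)

	zw.Reset(dst) // retarget the pooled writer at the new destination
	if _, err := zw.Write(src); err != nil {
		return err
	}
	return zw.Close() // flushes the gzip trailer; the writer stays reusable
}

func main() {
	var buf bytes.Buffer
	if err := compressTo(&buf, []byte("hello hello hello hello")); err != nil {
		panic(err)
	}
	fmt.Println("compressed size:", buf.Len())
}
```

Resetting a pooled writer reuses its internal buffers, which is where the allocation savings described above come from.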

@eapache (Contributor, Author) commented Aug 31, 2015

Nobody's complained about gzip yet though :)

Actually, we just use snappy internally, and I suspect most other places do the same, so if you have an optimized snappy lib I'm interested :). I suspect they just put gzip in the protocol for completeness' sake; the algorithm isn't really suited to our use case regardless of implementation.

Thanks for all your work on this stuff!
