I tried compressing a large file generated with `dd` using Snappy, and the resulting compressed file was larger than the original.
How can I avoid this?
Is fixed-size compression possible?
Any compression algorithm that reduces the length of some strings will necessarily increase the length of others (intuitively, there are fewer possible codes of length n than strings of length n+1). Typically random-looking data will result in at least some expansion, since each literal will have some framing and Snappy doesn't use entropy coding.
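The counting argument above can be checked directly: there are strictly fewer bit strings of length less than n than there are strings of exactly length n, so no lossless scheme can map every n-bit input to a shorter output. A minimal sketch (the choice of n is arbitrary):

```python
# Pigeonhole check: 2**n distinct inputs of n bits, but only
# 2**n - 1 possible outputs of length 0..n-1 (including the empty
# string), so some input must map to an output at least as long.
n = 16
shorter = sum(2**k for k in range(n))  # count of strings of length < n
print(shorter, 2**n)
assert shorter == 2**n - 1             # always one short of 2**n
```

This is why every general-purpose compressor, Snappy included, must expand at least some inputs.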
I wouldn't expect more than a couple percent overhead even from a random file, but some expansion is possible.
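You can observe this expansion yourself. The sketch below uses `zlib` from the Python standard library as a stand-in (python-snappy is a third-party package); the same effect applies, since incompressible data forces the compressor to fall back to stored/literal blocks plus framing overhead:

```python
import os
import zlib

# Random bytes are incompressible, so the compressor emits them
# essentially verbatim plus per-block framing, expanding the output
# slightly.
data = os.urandom(1 << 20)          # 1 MiB of random data
compressed = zlib.compress(data)
overhead = len(compressed) - len(data)
print(f"expanded by {overhead} bytes "
      f"({100 * overhead / len(data):.4f}%)")
assert len(compressed) > len(data)  # expansion, but well under 1%
```

A `dd` file filled from `/dev/urandom` behaves the same way; one filled with zeros (`if=/dev/zero`) compresses extremely well instead.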