[SUPPORT] Should we introduce extensible simple bucket index ? #12210
Comments
@danny0405 Hi! I have some ideas about an extensible simple bucket index. Looking forward to your reply!
I don't think this is a real problem
Bucket join relies on engine-specific naming of the files, which is very probably different from Hudi's style; I don't think it is easy to generalize.
IMO, the query engine only needs to know the number of buckets in order to distribute records to the specified partition (bucket) according to the hash value. The consistent hashing bucket index cannot do this, because there is no fixed rule relating the number of buckets to the hash mapping.
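For context, here is a minimal sketch of why a fixed bucket count is enough for an engine to prune: with the simple bucket index the target bucket is just a function of the record key hash and the bucket number, so any engine that knows `numBuckets` can compute it. The class and method names below are illustrative, not Hudi's actual API.

```java
import java.nio.charset.StandardCharsets;

// Illustrative only: how a fixed bucket count lets any engine locate a record's bucket.
// Names here are hypothetical stand-ins, not Hudi's real classes.
public class SimpleBucketMapping {

    // Deterministic hash of the record key (a stand-in for Hudi's hashing).
    static int hashKey(String recordKey) {
        int h = 0;
        for (byte b : recordKey.getBytes(StandardCharsets.UTF_8)) {
            h = 31 * h + (b & 0xff);
        }
        return h;
    }

    // With a fixed bucket count the mapping is a pure function of (key, numBuckets),
    // so a query engine can prune to a single bucket for a point lookup.
    static int bucketFor(String recordKey, int numBuckets) {
        return Math.floorMod(hashKey(recordKey), numBuckets);
    }

    public static void main(String[] args) {
        int numBuckets = 16;
        System.out.println("uuid-42 -> bucket " + bucketFor("uuid-42", numBuckets));
    }
}
```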
This is very engine specific, because the "bucket" definition varies among different engines.
Got it~
Also, the main gain of consistent hashing is that re-hashing rewrites as few data files as possible. Otherwise you have to rewrite the entire existing data set (the whole table), which is an unacceptable cost in many cases.
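A rough, purely toy illustration of that rewrite cost (not Hudi code): when the modulus changes, most keys land in a different bucket, so resizing a modulo-based simple bucket index generally means rewriting nearly every file group, whereas a consistent-hashing ring only touches the ranges that are actually split or merged.

```java
// Toy illustration of the "rewrite everything" cost: count how many keys change
// bucket when the bucket count of a simple (modulo-based) index is changed.
public class RehashCost {
    public static void main(String[] args) {
        int oldBuckets = 8, newBuckets = 12, moved = 0, total = 100_000;
        for (int i = 0; i < total; i++) {
            String key = "key-" + i;
            int h = key.hashCode();
            if (Math.floorMod(h, oldBuckets) != Math.floorMod(h, newBuckets)) {
                moved++;
            }
        }
        // Typically the large majority of keys move, so nearly every file group
        // has to be rewritten; a consistent-hashing ring would only touch the
        // ranges that were actually split or merged.
        System.out.printf("%d of %d keys changed bucket (%.1f%%)%n",
                moved, total, 100.0 * moved / total);
    }
}
```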
So how can people who are using the simple bucket index today deal with the growing amount of data in their buckets? At present it seems the only option is to drop and rebuild the table, which is relatively expensive. If we could support dynamically adjusting the number of buckets per partition through clustering, would that be more appropriate?
Another idea is to implement a light-weight index for each bucket, a kind of bit set of the record-key hash codes.
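If it helps the discussion, here is a minimal sketch of the kind of light-weight per-bucket index being suggested: a bit set over record-key hash codes that can cheaply answer "definitely not in this bucket" (essentially a one-hash Bloom filter). All names are hypothetical, not an existing Hudi structure.

```java
import java.util.BitSet;

// Hypothetical sketch: a small bit set of record-key hash codes kept per bucket,
// so lookups can skip buckets that definitely don't contain a key.
public class BucketKeyBitSet {
    private final BitSet bits;
    private final int numBits;

    public BucketKeyBitSet(int numBits) {
        this.numBits = numBits;
        this.bits = new BitSet(numBits);
    }

    private int slot(String recordKey) {
        return Math.floorMod(recordKey.hashCode(), numBits);
    }

    public void add(String recordKey) {
        bits.set(slot(recordKey));
    }

    // False means the key is certainly absent; true means it *may* be present
    // (like a single-hash Bloom filter, so false positives are possible).
    public boolean mightContain(String recordKey) {
        return bits.get(slot(recordKey));
    }
}
```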
Currently we have two types of bucket index engines: simple bucket and consistent hashing.
When a user builds a table with the simple bucket index, the amount of data in each bucket can grow larger over time, which hurts compaction performance. However, users have no way to adjust the number of buckets other than dropping and rebuilding the table.
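For reference, this is roughly how such a table is set up today with Spark write options (the option keys are the documented Hudi bucket index configs; the dataframe, table name, path, and bucket count below are placeholders):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

// Rough sketch of creating a simple-bucket-index table; `df`, the table name,
// the path, and the bucket count are placeholders for this example.
public class CreateSimpleBucketTable {
    public static void write(Dataset<Row> df) {
        df.write().format("hudi")
          .option("hoodie.table.name", "demo_table")
          .option("hoodie.datasource.write.recordkey.field", "uuid")
          .option("hoodie.index.type", "BUCKET")
          .option("hoodie.index.bucket.engine", "SIMPLE")
          // Fixed at table creation; this is the knob the issue wants to resize later.
          .option("hoodie.bucket.index.num.buckets", "16")
          .mode(SaveMode.Append)
          .save("/tmp/hudi/demo_table");
    }
}
```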
So do we need to provide a way to dynamically adjust the bucket count?
Bucket resizing should not block reading and writing. We can use clustering to adjust the number of buckets, and dual-write new records during the resize to ensure data consistency.
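One way to picture the dual-write routing described above (a hedged sketch, not a design in code; every name is made up): while clustering rewrites old file groups into the new bucket count, each incoming record is routed against both layouts so readers stay consistent until the switch-over.

```java
// Hypothetical sketch of the dual-write routing mentioned above: while clustering
// rewrites data from oldNumBuckets to newNumBuckets, new records are applied to
// both layouts so neither current readers nor the resize job see missing data.
public class DualWriteRouter {
    private final int oldNumBuckets;
    private final int newNumBuckets;

    public DualWriteRouter(int oldNumBuckets, int newNumBuckets) {
        this.oldNumBuckets = oldNumBuckets;
        this.newNumBuckets = newNumBuckets;
    }

    public void route(String recordKey) {
        int oldBucket = Math.floorMod(recordKey.hashCode(), oldNumBuckets);
        int newBucket = Math.floorMod(recordKey.hashCode(), newNumBuckets);
        writeToOldLayout(oldBucket, recordKey);   // keeps current readers correct
        writeToNewLayout(newBucket, recordKey);   // keeps the resized layout complete
    }

    private void writeToOldLayout(int bucket, String recordKey) { /* placeholder */ }
    private void writeToNewLayout(int bucket, String recordKey) { /* placeholder */ }
}
```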
Why don't we use the consistent hashing bucket engine?
So should we introduce an extensible simple bucket index? Welcome to the discussion!