Environment
Delta-rs version: 0.16
Binding: Python
Environment:
Bug
What happened:
To test the rust engine, we cleared out any existing Delta tables in our nonprod environment and switched from pyarrow over to the rust engine with schema merging, using a `write_deltalake` call along the lines of the sketch below.
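The snippet below is a minimal sketch of that kind of call, not our exact code: the table URI, the pyarrow batch, and the DynamoDB lock-table name are placeholders, and the storage options assume the usual S3 + DynamoDB locking setup.

```python
import pyarrow as pa
from deltalake import write_deltalake

# Placeholders -- the real bucket, table path, and lock table differ.
table_uri = "s3://nonprod-bucket/events-table"
storage_options = {
    "AWS_S3_LOCKING_PROVIDER": "dynamodb",
    "DELTA_DYNAMO_TABLE_NAME": "delta_log_lock",
}

batch = pa.table({"id": [1, 2, 3], "payload": ["a", "b", "c"]})

# Rust engine with schema merging, appending from each lambda invocation.
write_deltalake(
    table_uri,
    batch,
    mode="append",
    engine="rust",
    schema_mode="merge",
    storage_options=storage_options,
)
```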
Despite it being a brand-new Delta table, after some successful writes the lambdas eventually started erroring with `Generic DeltaTable error: Version mismatch`. I believe the error is coming from here: delta-rs/crates/core/src/table/state.rs, line 192 at commit 3e6a4d6.
What you expected to happen:
Especially since we are testing against a fresh table, I'd expect all writes to succeed (not just some of them), even with the new schema-merge flag set.
How to reproduce it:
I was not able to reproduce this locally with a randomly generated dataset, so my guess is it's something to do with the DynamoDB locking on S3. If you have thoughts on how I could test this better, please let me know; a rough sketch of the kind of concurrent-write test I mean is below.
Note that we have roughly 10 concurrent lambdas that could potentially write to the table. However, before this change we had 50 writing with pyarrow and all was well.
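For reference, here is a rough sketch of the sort of concurrent-append stress test I have in mind (illustrative only: a local path stands in for S3 with DynamoDB locking, the schema is made up, and plain processes stand in for lambdas):

```python
import multiprocessing as mp

import pyarrow as pa
from deltalake import write_deltalake

# Local placeholder; the real table lives on S3 with DynamoDB locking.
TABLE_URI = "/tmp/concurrent-delta-test"


def writer(worker_id: int, n_batches: int) -> None:
    # Each worker appends several small batches, mimicking one lambda's stream of writes.
    for i in range(n_batches):
        batch = pa.table(
            {
                "worker": pa.array([worker_id] * 10, pa.int32()),
                "seq": pa.array(range(i * 10, (i + 1) * 10), pa.int64()),
            }
        )
        write_deltalake(TABLE_URI, batch, mode="append", engine="rust", schema_mode="merge")


if __name__ == "__main__":
    # Roughly mirrors ~10 lambdas writing to the same table at once.
    procs = [mp.Process(target=writer, args=(w, 20)) for w in range(10)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```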