What happened:
The timestamp in the Delta table definition is in microseconds, but in the Parquet files it is in nanoseconds (Spark's default behavior). When DataFusion reads the table, the schema appears to be inferred from the raw Parquet data rather than from the Delta Lake table definition.
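The precision mismatch can be sketched in plain Python (the epoch value below is hypothetical, standing in for a timestamp column value):

```python
# Nanosecond-precision epoch value, as Spark writes to Parquet by default.
ns = 1_700_000_000_123_456_789

# Microsecond precision, as declared in the Delta table schema.
# Truncating integer division drops the sub-microsecond digits.
us = ns // 1_000

print(us)  # 1700000000123456
```

A reader consuming the table through the Delta schema expects the microsecond value, while a reader inferring the schema from the Parquet footers sees the nanosecond one.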
This is a limitation of DataFusion; as a workaround, set the corresponding Spark config so Spark writes microsecond timestamps: SparkSession.config("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS")
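In PySpark, applying that workaround when building the session might look like this (a configuration sketch, assuming PySpark is installed and a Spark runtime is available):

```python
from pyspark.sql import SparkSession

# Force Spark to write microsecond (rather than nanosecond/INT96) timestamps
# to Parquet, matching the `timestamp` type in the Delta table definition.
spark = (
    SparkSession.builder
    .config("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS")
    .getOrCreate()
)
```

Tables written with this setting should then read consistently whether the schema comes from the Delta log or is inferred from the Parquet files.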
Environment
WSL2 Ubuntu 22.04
Delta-rs version:
0.17.1
Binding:
Environment:
Bug
What you expected to happen:
How to reproduce it:
More details: