PARQUET-906: Add LogicalType annotation. #51
Conversation
@julienledem, @mkornacker, here is an early version of additional metadata for time and timestamp types. When I looked at adding just metadata for Time and Timestamp, the work was nearly identical to the LogicalType union that this PR includes, but required more spec to state what metadata should be associated with a certain ConvertedType. I think it is simpler to solve the compatibility problem (can't add to a thrift enum) and add the extra metadata at the same time by replacing ConvertedType with this new LogicalType. I haven't updated all the docs yet because I wanted to get feedback on the approach first. Please comment. Thanks!
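For orientation, here is a rough Thrift sketch of the proposed LogicalType union, assembled from the diff fragments quoted in the review comments below. Field numbers and member names follow those fragments (the NULL member was later renamed UNKNOWN); treat it as an illustration of the approach rather than the committed spec.

```thrift
// Rough sketch assembled from the diff fragments quoted below; not the released spec.
struct StringType {}   // UTF-8 encoded character data
struct DateType {}
struct NullType {}     // annotates a column whose values are always null

union LogicalType {
  1:  StringType STRING       // use ConvertedType UTF8 for older readers
  // 2-5: MAP, LIST, ENUM, DECIMAL elided from this sketch
  6:  DateType DATE           // use ConvertedType DATE
  // 7: TIME and 8: TIMESTAMP carry parameter structs (unit, isAdjustedToUTC), sketched later in the thread
  // 9: reserved for INTERVAL
  // 10: INTEGER carries an IntType (bitWidth, isSigned), sketched later in the thread
  11: NullType UNKNOWN        // named NULL in this PR's diff; no compatible ConvertedType
}
```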
I like the general idea for this.
* Annotates a column that is always null
* Sometimes when discovering the schema of existing data
* values are always null
*/
If you don't want to duplicate the comment, maybe refer to NullType below
Oops, I intended to remove NULL entirely because it is an unreleased type that can be replaced with the LogicalType version.
src/main/thrift/parquet.thrift
/** Timestamp logical type annotation */
struct TimestampType {
1: required bool isAdjustedToUTC
How about "withoutTimeZone" meaning the opposite of "isAdjustedToUTC"
isAdjustedToUTC is clearer. "With or without time zone" is language from the SQL spec, which is very difficult to understand and apply. Rather than relying on it, this captures a very specific piece of information: whether the timestamp values were adjusted to UTC from their original offset (or would have been, if the original offset was already UTC).
If isAdjustedToUTC is true, we need to record the original TZ. So we should add an optional string timezone field (and then we don't need isAdjustedToUTC, since it is implied by timezone != null).
Something similar to Arrow:
https://github.com/apache/arrow/blob/a4f29f3a3ff1c64a6f547bfb0d5e4500142ea5ec/format/Schema.fbs#L117
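As a minimal illustration of this alternative (my sketch of the suggestion above, not anything that was committed), the annotation would carry the zone string instead of a boolean:

```thrift
// Sketch of the suggested alternative only; this is NOT what was adopted.
// Presence of the zone string would itself imply the values were adjusted to UTC.
struct TimestampType {
  1: optional string timezone   // e.g. "America/Los_Angeles"; unset means not adjusted
}
```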
The model we discussed for Parquet is different from the Arrow spec. In Arrow, both Timestamp and TimestampTz are stored in UTC, and the writer Timezone string is additionally stored to re-compute the Timestamp values. For Parquet, the idea is to store Timestamp values as-is from the epoch, without conversion to UTC; the conversion to UTC happens only for TimestampTz values. The Parquet approach is slightly more efficient since we don't have to re-compute the Timestamp values, which enables bulk loading of the column. The presence of a writer Timezone in the file metadata would also prohibit concatenating files from different timezones.
> In Arrow, both Timestamp and TimestampTz are stored in UTC
This isn't accurate, see: https://github.com/apache/arrow/blob/master/format/Schema.fbs#L120. In Arrow, if the timestamp metadata does not have a time zone, it is time zone naive, not UTC.
From naive timestamp values, we can choose later to localize to UTC (which is a no-op) or localize to some other time zone (which will adjust the values to the UTC values internally based on the tz database)
I opened https://issues.apache.org/jira/browse/ARROW-1020 to clarify this in the comments in Schema.fbs
When timezone is set, the physical values are UTC timestamps regardless of the time zone of the writer or the client, so changing the timezone does not change the underlying integers.
I don't think we need to store the original zone if isAdjustedToUTC is true. This isn't done by other systems, and there is no guarantee that there is a single zone that the timestamps were converted from. This just indicates that whatever the source zone, the values have been normalized to the same one, UTC.
I linked the Parquet and Arrow JIRAs about timestamp types. There's a discussion there about timestamp types and timezones:
https://issues.apache.org/jira/browse/PARQUET-905
https://issues.apache.org/jira/browse/ARROW-637
We spoke about this on the Arrow sync yesterday.
In Arrow, we have 3 cases:
- No time zone in the metadata: the data is time zone naive. One can later localize to a time zone (which may alter the values, because they will be internally normalized to UTC)
- Time zone in the metadata: whether the time zone is 'UTC' or 'America/Los_Angeles', the physical values themselves are the same; changing the time zone only changes the metadata, not the values of the int64 timestamps
What is proposed here simplifies this to either isAdjustedToUTC=false (what we currently call "no time zone" or "time zone naive" in Arrow) or isAdjustedToUTC=true (which covers both the case where the time zone is set to UTC and the case where it is some other time zone).
The problem I see here is that if a data processing system runs the query:
select hour(timestamp_field), count(*)
from my_parquet_data
group by 1
For timestamps with time zone, if the time zone is known then the hour function can be implemented to compute the hour values based on the time zone (e.g. America/Los_Angeles). But what's proposed here precludes that; you would need to do something like
hour_localized(timestamp_field, 'America/Los_Angeles')
Or maybe the analytics system has some means to cast to a timestamp with the additional metadata. Either way there's some awkwardness.
> This isn't done by other systems, and there is no guarantee that there is a single zone that the timestamps were converted from.
I see this concern, but if there is no consistency about the origin of the data, why not store "UTC" as the storage time zone? The fact that others may preserve a local time zone need not complicate this use case.
We can use the keyvalue metadata to preserve the time zone metadata in the Parquet file footer to at least maintain compatibility with Arrow, but it's not ideal.
We've resolved this discussion in the Parquet sync-ups. Parquet timestamps are always stored with respect to UTC and won't have a time zone.
src/main/thrift/parquet.thrift
/** Time logical type annotation */
struct TimeType {
1: required bool isAdjustedToUTC
does this actually apply to Time?
Applies to at least PostgreSQL
https://www.postgresql.org/docs/9.1/static/datatype-datetime.html
Yes, this is required by the SQL spec
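Putting the resolved pieces together, here is a hedged Thrift sketch of the Time and Timestamp annotations as discussed above; the unit structs mirror the "2: MicroSeconds MICROS" fragment quoted later in this thread, and the comments paraphrase the agreed semantics rather than the committed doc text.

```thrift
// Sketch based on the fragments quoted in this review; not the released definition.
struct MilliSeconds {}
struct MicroSeconds {}

/** Unit is a union so that new precisions could be added later. */
union TimeUnit {
  1: MilliSeconds MILLIS
  2: MicroSeconds MICROS
}

/** Time of day; isAdjustedToUTC has the same meaning as for TimestampType. */
struct TimeType {
  1: required bool isAdjustedToUTC
  2: required TimeUnit unit
}

/**
 * Timestamp. isAdjustedToUTC = true means the stored values were normalized to
 * UTC from whatever zone they were written in; false means they are "local",
 * zone-less values. No original time zone string is stored either way.
 */
struct TimestampType {
  1: required bool isAdjustedToUTC
  2: required TimeUnit unit
}
```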
src/main/thrift/parquet.thrift
* bitWidth must be 8, 16, 32, or 64.
*/
struct IntType {
1: required i32 bitWidth
byte instead of i32?
+1
src/main/thrift/parquet.thrift
*/
struct IntType {
1: required i32 bitWidth
2: required bool isSigned
just signed?
I'm fine either way. isSigned is the Java convention.
src/main/thrift/parquet.thrift
}

/** Embedded logical type annotation */
struct EmbeddedType {
I feel that we have too many layers here.
The union already replaces the notion of a format field.
Sounds fine to me. I can add JsonType and BsonType instead.
+1 LGTM
I believe the physical type for Timestamp is only going to be
6: DateType DATE // use ConvertedType DATE
7: TimeType TIME // use ConvertedType TIME_MICROS or TIME_MILLIS
8: TimestampType TIMESTAMP // use ConvertedType TIMESTAMP_MICROS or TIMESTAMP_MILLIS
// 9: reserved for INTERVAL
How about creating an empty struct IntervalType and leaving a todo with that struct instead?
but then we cannot add required fields?
I agree with Julien. Then we can't add required fields to interval and there isn't much value to adding it now.
src/main/thrift/parquet.thrift
/**
* Logical type to annotate a column that is always null.
*
* Sometimes when discovering the schema of existing data values are always
I found this sentence a bit tricky to parse. Can you add a comma to make it more clear?
I added some clarification here. Thanks for the suggestion.
src/main/thrift/parquet.thrift
2: MicroSeconds MICROS
}

/** Timestamp logical type annotation */
Could you add a sentence or two to the comments explaining the difference between TimestampType and TimeType? They're not clear to me.
These types are already documented in LogicalTypes.md. We can add some things here, like the allowed physical types, but I don't think we should maintain everything in two places.
}

/**
* Integer logical type annotation
Is this needed in addition to the physical types? Could it just be UnsignedIntType and leave out the bitWidth?
unless we want to support other widths in the future?
We should talk about this in the sync-up today.
I think I didn't make it to that sync. Was there some conclusion?
Bit width is needed to signal that all of the values will fit in a particular width. Since the spec currently has 8- and 16-bit widths, we have to capture those possible values.
This is consistent with the Arrow integer metadata https://github.com/apache/arrow/blob/master/format/Schema.fbs#L87
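For reference, a sketch of the integer annotation and how it would line up with the old ConvertedType values. The mapping comments are my reading of the discussion, not normative text, and the PR later changed bitWidth from i32 to a byte (see the update note further down).

```thrift
// Sketch only; the quoted diff used i32 for bitWidth, later changed to a byte (i8).
struct IntType {
  1: required i8 bitWidth    // must be 8, 16, 32, or 64
  2: required bool isSigned
}

// Rough correspondence to the old ConvertedType values (my reading, not spec text):
//   INT_8   -> IntType(bitWidth = 8,  isSigned = true)
//   UINT_32 -> IntType(bitWidth = 32, isSigned = false)
```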
src/main/thrift/parquet.thrift
@@ -211,6 +204,93 @@ struct Statistics {
4: optional i64 distinct_count;
}

/** Empty structs to use as logical type annotations */
I think it would help the reader if the comments for each logical type mention the physical type(s) they can annotate. Currently one needs to look them up in the table below, and then look in LogicalTypes.md or ConvertedType to find what physical types can be annotated.
+1
Added.
src/main/thrift/parquet.thrift
8: TimestampType TIMESTAMP // use ConvertedType TIMESTAMP_MICROS or TIMESTAMP_MILLIS
// 9: reserved for INTERVAL
10: IntType INTEGER // use ConvertedType INT_* or UINT_*
11: NullType NULL // no compatible ConvertedType
Does the comment mean ", so leave it empty."?
Yes.
Can we use NIL or NA here, since NULL is a reserved keyword in C++?
I've updated this to NONE. Good suggestion.
I changed it to NONE, which is a little more common than NIL or NA. Python and Scala use it.
src/main/thrift/parquet.thrift
@@ -211,6 +204,93 @@ struct Statistics {
4: optional i64 distinct_count;
}

/** Empty structs to use as logical type annotations */
+1
src/main/thrift/parquet.thrift
@@ -211,6 +204,93 @@ struct Statistics {
4: optional i64 distinct_count;
}

/** Empty structs to use as logical type annotations */
struct StringType {}
add a comment that it is UTF8 by default? (in case we want to add an encoding field at some point)
I can add a comment about UTF8, but I don't think we should allow other encodings, unless there is a new encoding that becomes the best standard, like UTF8 is today.
Added a comment about UTF-8
How about SQL engines that let the user put anything in a string without checking its encoding? Should they annotate strings with UTF8 even though they cannot guarantee that users will actually store UTF8 in them, should they omit the annotation and thereby lose the information that those binary fields are actually strings, or should there be an encoding value for unknown string encoding? (See also: IMPALA-5982)
SQL engines that store arbitrary binary shouldn't mark it as UTF-8 because it isn't guaranteed to be. Non-UTF-8 characters would cause exceptions in Java. There are two follow-ups to this:
- Start validating that data written by Impala is UTF-8 and add the UTF8 annotation to string columns. Scanning to verify an encoding shouldn't be too expensive.
- Make sure that we can add annotations later in the expected schema. This is something we should add tests for in parquet-avro and other object models. It makes sense to me that converting one of these broken Parquet schemas to Avro shouldn't assume that binary columns can be read as Strings or Utf8 objects, but if I try to read them that way by providing a schema with those fields as UTF8, I should be able to. Make sense?
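To summarize where this sub-thread landed, here is a sketch of the kind of doc comment being discussed for StringType; the wording is mine, not the committed comment.

```thrift
/**
 * String logical type: annotates binary (BYTE_ARRAY) data that is required to
 * be valid UTF-8. Columns whose encoding cannot be guaranteed should be left
 * unannotated rather than marked as strings.
 */
struct StringType {}
```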
}

/**
* Integer logical type annotation
unless we want to support other widths in the future?
PARQUET-757: Bring Parquet logical types to par with Arrow. Author: Julien Le Dem <[email protected]> Closes #45 from julienledem/types and squashes the following commits: 2956b63 [Julien Le Dem] review feedback 94236c4 [Julien Le Dem] PARQUET-757: Bring Parquet logical types to par with Arrow
Updated and, I think, ready to commit? Any more comments?
+1
@julienledem, any comments or do you think this is ready to merge?
@julienledem, can you take one last look? I think this is ready to go in.
* Removed EmbeddedTypes, made JsonType and BsonType top-level
* Changed IntType bitWidth to a byte
This hasn't been in a release, so there is no need to keep it, even though this is a forward-incompatible change.
c997ac4 to 02f3868
This fixes the Java test errors and uses UNKNOWN to avoid name conflicts.
+1 LGTM
maybe just a review from @lekv about the comment readability
UUIDs are commonly used as unique identifiers. A binary representation will reduce memory when writing or building bloom filters and will reduce cycles needed to compare values. This commit is based on PARQUET-906 / PR #51. Author: Ryan Blue <[email protected]> Closes #71 from rdblue/PARQUET-1125-add-uuid-logical-type and squashes the following commits: dc01707 [Ryan Blue] PARQUET-1125: Add UUID logical type.
* LogicalType replaces ConvertedType, but ConvertedType is still required
* for some logical types to ensure forward-compatibility in format v1.
*/
10: optional LogicalType logicalType
I just noticed that this commit uses camelCase field names while the existing ones are underscore_separated. Was this an intentional naming convention change? In any case, it's too late to do anything about it as this has already been released as 2.4.0.
ConvertedType::NA corresponds to an invalid converted type that was once added to the Parquet spec: apache/parquet-format#45 but then quickly removed in favour of the Null logical type: apache/parquet-format#51 Unfortunately, Parquet C++ could still in some cases emit the unofficial converted type. Also remove the confusingly-named LogicalType::Unknown, while "UNKNOWN" in the Thrift specification points to LogicalType::Null. Closes #9863 from pitrou/PARQUET-1990-invalid-converted-type Authored-by: Antoine Pitrou <[email protected]> Signed-off-by: Micah Kornfield <[email protected]>
This commit adds a LogicalType union and a field for this logical type to SchemaElement. Adding a new structure for logical types is needed for a few reasons:
- The existing ConvertedType enum cannot be extended without breaking older readers, while the LogicalType union is forward-compatible.
- Some logical types need extra metadata, like isAdjustedToUTC, without adding more fields to SchemaElement that don't apply to all types.
- Individual annotations can be extended later, for example by adding an encoding field to StringType when it is needed.
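Finally, a trimmed, self-contained sketch of where the new field sits in SchemaElement; field numbers other than 10 come from my reading of parquet.thrift, and the ConvertedType/LogicalType declarations here are abbreviated stand-ins rather than the real definitions.

```thrift
// Trimmed sketch; the real ConvertedType enum and LogicalType union are much larger.
struct StringType {}
union LogicalType { 1: StringType STRING }
enum ConvertedType { UTF8 = 0 }

struct SchemaElement {
  4:  required string name
  6:  optional ConvertedType converted_type  // still written for older readers (format v1)
  10: optional LogicalType logicalType       // the new annotation added by this PR
}
```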