PARQUET-675: Add INTERVAL_YEAR_MONTH and INTERVAL_DAY_TIME types #43
base: master
Conversation
The types make sense, but I'd like to have quotes from the SQL spec here to explain a bit more what year-month and day-millisecond "units" are. I think this should update the logical types doc as well.
```
@@ -162,19 +162,48 @@ enum ConvertedType {
  BSON = 20;

  /**
   * @Deprecated: use INTERVAL_YEAR_MONTH or INTERVAL_DAY_TIME
   * since the SQL standard defines either YEAR_MONTH or DAY_TIME unit.
   * This is deprecated in favor of those 2 types
```
What does the SQL spec say?
See links in https://issues.apache.org/jira/browse/PARQUET-675.
For example: "An interval is defined as the difference between two dates and times. Intervals are expressed in one of two different ways. One is a year-month interval that expresses intervals in terms of years and an integral number of months. The other is a day-time interval that expresses intervals in terms of days, minutes, and seconds. These two types of intervals are distinct and cannot be mixed, because months can have varying numbers of days."
https://msdn.microsoft.com/en-us/library/ms716506(v=vs.85).aspx
Should I add this quote in there?
SQL 92 Spec:

4.5.2 Intervals

There are two classes of intervals. One class, called year-month intervals, has an express or implied datetime precision that includes no fields other than YEAR and MONTH, though not both are required. The other class, called day-time intervals, has an express or implied interval precision that can include any fields other than YEAR or MONTH.

Further:

Operations involving items of type interval require that the interval items be mutually comparable.
...
Year-month intervals are mutually comparable only with other year-month intervals
...
Day-time intervals are mutually comparable only with other day-time intervals.
How are intervals comparable? Are there details in the spec for resolving cases like (1 day, 0 ms) <=> (0 days, 86400000 ms)?
By the SQL definition 1 day = 24 hours = 86400 seconds.
| Keyword | Valid values of INTERVAL fields |
|---------|---------------------------------|
| DAY | Unconstrained except by `<interval leading field precision>` |
| HOUR | Hours (within days) (0-23) |
| MINUTE | Minutes (within hours) (0-59) |
| SECOND | Seconds (within minutes) (0-59.999...) |
and:

When it is necessary to add new most significant datetime fields, the associated value is effectively converted to the new precision in a manner obeying the natural rules for dates and times associated with the Gregorian calendar.
Of course this is just the standard SQL, Parquet does not have to follow this.
@michal-databricks, can you post the reference for 1 day = 24 hours = 86400 seconds? I'm not sure that what you've quoted here requires it. The second paragraph, "When it is necessary...", is talking about conversion and the first constrains the fields. That doesn't mean that "1 day" and "23 hours 59 minutes 59.999999 seconds" are comparable. It just means that 86400 seconds is an invalid interval.
@rdblue, this is the complete paragraph defining the comparison of day-time intervals where the quote was taken from.
Day-time intervals are mutually comparable only with other day-time intervals. If two day-time intervals have different interval precisions, they are, for the purpose of any operations between them, effectively converted to the same precision by appending new `<datetime field>`s to either the most significant end or the least significant end of one or both day-time intervals. New least significant `<datetime field>`s are assigned a value of 0. When it is necessary to add new most significant datetime fields, the associated value is effectively converted to the new precision in a manner obeying the natural rules for dates and times associated with the Gregorian calendar.
To my understanding, this means that, for example, comparing `INTERVAL '100' HOUR` with `INTERVAL '4 04:00' DAY TO MINUTE` requires conversion to the common supertype, in this case `INTERVAL DAY TO MINUTE`. During conversion the least significant fields are filled with zeros and the most significant "in a manner obeying the natural rules for dates and times associated with the Gregorian calendar", since the non-most-significant fields must conform to the constraints in the table (which comes from the same chapter). In this case `INTERVAL '100' HOUR` becomes `INTERVAL '4 04:00' DAY TO MINUTE`. Now they are compared field by field (making them equal).

This is the best reference from the SQL '92 standard I can give to support the claim that 1 day = 24 hours. And while implementations of the standard vary, they still tend to use the above method for comparisons.
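The conversion rule described above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the proposal, and it assumes the "1 day = 24 hours" reading of the spec; the helper name is invented for the sketch:

```python
# Hypothetical sketch of the SQL-92 comparison rule quoted above, under
# the assumption that 1 day = 24 hours: both intervals are effectively
# converted to a common precision (here: total minutes) and then compared.

def day_time_to_minutes(days=0, hours=0, minutes=0):
    """Collapse a day-time interval to minutes, assuming 1 day = 24 h."""
    return (days * 24 + hours) * 60 + minutes

# INTERVAL '100' HOUR vs INTERVAL '4 04:00' DAY TO MINUTE
a = day_time_to_minutes(hours=100)
b = day_time_to_minutes(days=4, hours=4)
print(a == b)  # True: the two intervals compare equal
```

Under this model, comparison reduces to integer comparison of the collapsed value.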
```
 * the provided duration. This duration of time is independent of any
 * particular timezone or date.
 */
INTERVAL_DAY_TIME = 23;
```
Is there an alternative to embedding two values in a single byte array? The problem is that this is incompatible with stats filtering and efficient integer encodings. I think those are significant enough that we should consider alternatives.
I like the idea of using this annotation for a group of two int32 values. That way we get good compression and encoding, stats filtering works, and we can get independent dictionary encoding, which would work well for the days field. What are the downsides to that approach instead of the byte array?
The downside is that the two values are not contiguous for processing.

We can also make it an Int64 with the most significant 32 bits for days and the rest for ms, with the constraint that ms < 1 day. That would make it sortable.
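The packed-int64 idea could be sketched like this (a hypothetical illustration; the `pack`/`unpack` helpers and the non-negative example values are assumptions for the sketch, not part of the proposal):

```python
# Hypothetical sketch of packing a day-time interval into one int64:
# most significant 32 bits hold days, low 32 bits hold milliseconds,
# with the constraint that the ms component stays below one day.

MS_PER_DAY = 86_400_000

def pack(days, ms):
    assert 0 <= ms < MS_PER_DAY, "ms component must stay below one day"
    return (days << 32) | ms

def unpack(value):
    return value >> 32, value & 0xFFFFFFFF

# With the ms constraint in place, integer order matches interval order.
small = pack(1, 0)    # 1 day
large = pack(1, 1)    # 1 day, 1 ms
print(unpack(large))  # (1, 1)
print(small < large)  # True
```

This is what makes the representation sortable: the constraint guarantees a unique encoding per interval, so plain integer comparison agrees with interval comparison.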
Given that a consumer would be unlikely to read/consume the two values separately, storing them separately seems like overkill. Since we already do the set of three values in the existing interval, it seems like storing two in a single 8 bytes would be okay and consistent.
My understanding is that using a single INT64 with milliseconds (1 day = 86400000 milliseconds) would be the most convenient solution. Besides better encoding and statistics, it has the advantage of a unique representation (it does not allow for both "1 day" and "86400000 ms"). A non-unique representation would be especially confusing with negative intervals. Single-integer storage would also be consistent with the existing storage format of `TIMESTAMP`s and the storage format for `INTERVAL_YEAR_MONTH` proposed here.

On that note, it would be better to have 2 types instead: `INTERVAL_DAY_MILLIS` and `INTERVAL_DAY_MICROS`, just as there are `TIMESTAMP_MILLIS` and `TIMESTAMP_MICROS`.
I agree with Jacques that this is lossy because a day isn't consistently 86400000 ms.
We need to store 2 values, but I'm not convinced that there is much value in storing these in a single byte array. What about users that only use days or only use micros? The proposed type would use 8 bytes per value either way. I don't think it is worth that cost when we can deserialize two columns to a single arrow column instead. Is there a micro-benchmark to back up the benefit of using this format instead of a 2-column format?
Using an int64 won't work because with 2 values that aren't constrained by one another, min and max stats are invalid (days always overrides micros). In fact, because days and micros can't be converted to one another, I'd argue that there is no consistent ordering for this interval type. So we would get compression for the days value but at the cost of more confusion about how to produce stats.
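The statistics problem can be made concrete with a small sketch (hypothetical, using Python tuple comparison to mimic the "days always override micros" ordering a packed integer would impose):

```python
# Hypothetical sketch of the stats problem described above: if days and
# microseconds are independent dimensions, any total order that ranks
# days first (as a packed integer would) claims comparisons that do not
# hold semantically.

US_PER_HOUR = 3_600_000_000

a = (1, 0)                 # 1 day, 0 microseconds
b = (0, 25 * US_PER_HOUR)  # 0 days, 25 hours of microseconds

# Tuple comparison orders by days first, mirroring a packed encoding.
print(a > b)  # True under the packed ordering...
# ...but if a day may be 23, 24, or 25 hours, a could actually denote a
# shorter duration than b, so min/max stats built on this order could
# wrongly prune row groups during predicate pushdown.
```

The point is that the encoding defines *an* order, but not one that is consistent with interval semantics when the two fields are independent.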
I am guessing that by a day not equaling 86400000 ms you mean the influence of leap seconds. I don't know of any database system which implements those, especially for intervals. Most just store one component for days and seconds (among them Spark), and while, for example, Postgres uses separate components for them, it still makes a day equal 24 hours for comparison purposes. But if you do decide to store milliseconds with the intention of it being an independent dimension, 32 bits seems not enough: `2^32 / 86400000 = 49.71`. This is only a +/-25 "days" range in the signed case.
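The range arithmetic behind that claim can be checked directly:

```python
# Checking the range claim above: a 32-bit millisecond field spans
# fewer than 50 "days" worth of milliseconds in total, i.e. only about
# +/- 25 days in the signed case.

MS_PER_DAY = 86_400_000

days_unsigned = 2**32 / MS_PER_DAY   # total span if unsigned
days_signed = 2**31 // MS_PER_DAY    # whole days in each direction if signed

print(round(days_unsigned, 2))  # 49.71
print(days_signed)              # 24
```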
Oh, another reason you may mean for different day lengths is if they span different DST time zones (on the DST change dates). SQL does not provide a meaningful way to handle those, as SQL timestamps only carry a time zone offset (in minutes) and not the time zone in the traditional sense (location).
The case I was thinking about was DST. The length of a day may be 23 hours, 24 hours, or 25 hours. That makes 1 day incomparable with 24 hours, unless the SQL spec includes a way to compare them. I'm fine with not having a comparison for Parquet, even if the SQL spec defines one.
The other consideration is whether we should constrain hours, minutes, and seconds to less than 24 hours. The other comment from the SQL spec appears to support that, but constraining the microseconds field to < 86,400,000 will require conversion logic somewhere in order to support arithmetic on interval values: 0 days, 22 hours + 0 days 4 hours = ???. Parquet could just require the microseconds field to be less than 86,400,000, but that shifts the problem to processing engines. And either way, it doesn't support the idea of storing the entire interval using microseconds.
@rdblue, there are basically two approaches:

- Treat days and seconds as two dimensions. In this case seconds cannot be constrained to 86,400 (otherwise you cannot express 25-hour days on a DST change). Addition and multiplication by a scalar work per dimension: 0 days, 22 hours + 0 days, 4 hours = 0 days, 26 hours.
- Treat one day as 24 hours, making this one dimension: a scalar of seconds (which I consider the SQL way, but it seems you are not convinced that this is the SQL standard semantic). 0 days 22 hours + 0 days 4 hours = 1 day 2 hours. In this case the storage can be `days * 86400000 + ms` to make the ordering and arithmetic easier. While it could still be stored as 2 values, that would complicate the implementation and hamper the encoding, which was my original point.

Going forward with option 1 is fine (as long as it is clearly documented), but then I don't see the advantage over the existing interval type that this is deprecating.
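The two options above can be contrasted in a short sketch (hypothetical helper functions, assuming milliseconds for the sub-day component):

```python
# Hypothetical sketch contrasting the two approaches for
# "0 days 22 hours + 0 days 4 hours".

MS_PER_DAY = 86_400_000
MS_PER_HOUR = 3_600_000

# 1) Two independent dimensions: add per component, never carry.
def add_2d(a, b):
    return (a[0] + b[0], a[1] + b[1])

# 2) One dimension (1 day = 24 hours): collapse, add, re-split.
def add_1d(a, b):
    total = (a[0] + b[0]) * MS_PER_DAY + a[1] + b[1]
    return divmod(total, MS_PER_DAY)

x = (0, 22 * MS_PER_HOUR)
y = (0, 4 * MS_PER_HOUR)
print(add_2d(x, y))  # (0, 93600000) -> 0 days, 26 hours
print(add_1d(x, y))  # (1, 7200000)  -> 1 day, 2 hours
```

Option 1 preserves the distinction between "26 hours" and "1 day 2 hours"; option 2 canonicalizes it away.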
```
#### INTERVAL

`INTERVAL` is deprecated. Please use the more precise `INTERVAL_YEAR_MONTH` and
`INTERVAL_DAY_TIME` per SQL spec.
```
Can we link to the SQL spec here?
Sorry, I meant: could we add a link to the SQL spec in the .md file? It might be useful for folks reading in the future.
Sorry, I should have added a comment. The SQL spec is not free; this is a paid link. There are drafts hosted here and there. We can use those.
closes apache#43 and closes apache#50

This PR also implements:
1) PARQUET-532: Null values detection needs to be fixed and tested
2) PARQUET-502: Scanner segfaults when its batch size is smaller than the number of rows
3) PARQUET-526: Add more complete unit test coverage for column Scanner implementations
4) PARQUET-531: Can't read past first page in a column

Author: Deepak Majeti <[email protected]>

Closes apache#62 from majetideepak/PARQUET-538 and squashes the following commits:

1e56f83 [Deepak Majeti] Trigger notification
6478a7c [Deepak Majeti] TYPED_TEST
1d14171 [Deepak Majeti] Added Boolean Test and Scanner:Next API
d1da031 [Deepak Majeti] lint issue
45f10aa [Deepak Majeti] Reproducer for PARQUET-502
88e27c6 [Deepak Majeti] formatting
8aac435 [Deepak Majeti] PARQUET-526
dca7e2d [Deepak Majeti] PARQUET-532 and PARQUET-502 Fix
a622021 [Deepak Majeti] Reverted PARQUET-524 and addressed comments
859c1df [Deepak Majeti] minor comment edits
d938a13 [Deepak Majeti] PARQUET-538
df1fbd7 [Deepak Majeti] Templated single page tests
8548e3c [Deepak Majeti] PARQUET-524
c265fea [Deepak Majeti] fixed PARQUET-499 bugs
```
 * An interval of time with a year-month unit.
 *
 * This type annotates data stored as an INT32
 * This data is stored as a little endian unsigned
```
This should be signed. The SQL standard does not mention this explicitly, but it defines the `interval - interval` operation, which would be problematic without negative intervals. Also, all SQL implementations I am aware of allow negative intervals.
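A minimal sketch of the signedness point (the `ym_months` encoding is an assumption for illustration, mirroring the proposed single-int32 month count for year-month intervals):

```python
# Hypothetical sketch of why the year-month value should be signed:
# "interval - interval" naturally produces negative intervals, which a
# signed 32-bit month count can represent directly.

def ym_months(years, months):
    """Encode a year-month interval as a single signed month count."""
    return years * 12 + months

# INTERVAL '1-2' YEAR TO MONTH minus INTERVAL '2-0' YEAR TO MONTH
diff = ym_months(1, 2) - ym_months(2, 0)
print(diff)  # -10: a negative interval of 10 months
```

With an unsigned INT32 this result would have no representation at all.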
agreed
+1
Ugh, we didn't realize that this hadn't been merged. We've been using the representations here with multiple customers for the last six months using the annotations and formats above, so the reality is that there are files with the representation described in the PR in the wild at this point. I'm happy to help get this over the finish line, but we should wrap up discussion.

Michal is recommending a collapsed format for day. However, I think that is a bit surprising to users since it is lossy. A user may want to express 28 hours. The collapsed format would replace this with 1 day, 4 hours, so I'm inclined to stick with the separate, non-lossy representation. It is a trivial calculation if you want to canonicalize for internal processing, but the storage should be lossless.

@rdblue and @julienledem, what are your thoughts on the previous points? It seems like there was some discussion of a group, but Julien and myself preferred a consolidated approach since we don't see any scenario where you'd only be interacting with one of the two items in the group.

Additionally, in general, since intervals are such a small part of every dataset I've seen in the real world, I'm not inclined to spend a lot of engineering work trying to optimize them too extensively. That's probably just laziness talking... but I like to think of it as pragmatism :)
My suggestion to use the collapsed format was based on the assumption that the point of moving away from the old Interval format was to make it closer to the SQL standard definition of the interval data type. However, the classic SQL implementations are usually far from implementing the standard in the first place. MySQL and MS SQL Server do not have interval as a storable data type at all. PostgreSQL allows literals and declaration as standard (with an exception of explicit precision, like

From the common SQL implementations, Oracle seems to be the closest to the standard. Either way, I think the comparison semantics should be well defined within the Parquet format, as this influences filter-pushdown features like min/max statistics and hashing. Of course, if there is no linear order on intervals, then min/max cannot even be defined.
Whatever decision is reached here, could we incorporate this into the related Arrow patch apache/arrow#920 so that we can round-trip to Arrow memory without any data loss?
@julianhyde, would love your thoughts on this as you know more than anybody on these things.

Oracle:

```
SELECT TIMESTAMP '2017-03-12 0:00:00 US/Pacific' + INTERVAL '2' HOUR FROM DUAL
SELECT TIMESTAMP '2017-03-12 0:00:00 US/Pacific' + INTERVAL '3' HOUR FROM DUAL
SELECT TIMESTAMP '2017-03-12 0:00:00 US/Pacific' + INTERVAL '24' HOUR FROM DUAL
SELECT TIMESTAMP '2017-03-12 0:00:00 US/Pacific' + INTERVAL '1' DAY FROM DUAL
SELECT TIMESTAMP '2017-03-12 0:00:00 US/Pacific' + INTERVAL '86400' SECOND(2,4) FROM
```

Postgres:

```
SELECT TIMESTAMP '2017-03-12 0:00:00 PST' + INTERVAL '2' HOUR
SELECT TIMESTAMP '2017-03-12 0:00:00 PST' + INTERVAL '3' HOUR
SELECT TIMESTAMP '2017-03-12 0:00:00 PST' + INTERVAL '24' HOUR
SELECT TIMESTAMP '2017-03-12 0:00:00 PST' + INTERVAL '1' day
SELECT TIMESTAMP '2017-03-12 0:00:00 US/Pacific' + INTERVAL '86400' SECOND
```

So people don't agree on what adding intervals across time skips means. However, they each agree internally that skipping is the same whether we're using 1 day or 86400 seconds. This means that intervals should be comparable. On the flip side, Postgres maintains a distinction between 25 hours and 1 day, 1 hour, whereas Oracle does not.

Oracle:

```
SELECT INTERVAL '1 1' DAY TO HOUR FROM dual
```

Postgres:

```
select INTERVAL '1 1' day to hour
```
These examples are complicated. I don't recall whether the Oracle ones will use TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE literals. And Postgres will use TIMESTAMP WITH TIME ZONE, but its semantics are non-standard. Also, I don't know whether PST and US/Pacific are the same, and whether either of them time-shifts.

I'm not an expert on TIMESTAMP WITH TIME ZONE. Evidently its arithmetic is more complicated than TIMESTAMP, so someone should consult the SQL standard. Could you punt on TIMESTAMP WITH TIME ZONE for now?

Regarding your second example: I'm pretty sure both systems will give the two literals different types, interval hour and interval day to hour. So they are distinct in both systems, regardless of how they are printed. Distinct in the same sense that integer 2 and double 2 are distinct: they have different types, may or may not be printed as the same string, but yield the same value if converted to each other's type. IIRC Calcite would print '1 1' and '25', but store 25 * 60 * 60 * 1000 for both. If Arrow is being frugal it should store 25 for both.
@jacques-n, Postgres does actually make use of the difference between hours and days in some cases:

However, the docs clearly state this is not standard SQL: "(...) the SQL standard has an odd mix of date and time types and capabilities. Two obvious problems are: (...) The default time zone is specified as a constant numeric offset from UTC. It is therefore impossible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries. (...) To address these difficulties, (...) PostgreSQL assumes your local time zone for any type containing only date or time." (From 8.5.3. Time Zones.)

Without a non-standard time zone implementation the distinction between days and 24 hours wouldn't be very useful. Also, while for Postgres the DATETIME + INTERVAL semantic is as above, for DATETIME - DATETIME the result assumes 24h = 1 day, which leads to confusing results like:

Either way, the current proposal in the PR cannot represent Postgres intervals because of the small range for seconds and the potential mixing of months and seconds:
@julianhyde, it is true that [...]. From the Parquet point of view the most flexible way would be to have a parametrized interval type (like decimal) with a least-significant-field/seconds scale. So one of: DAY, HOUR, MINUTE, SECOND, SECOND(1), ... SECOND(9). As this may be excessive implementation-wise for such a minor type, a middle ground would be to have at least DAY, MILLISECOND (SECOND(3)), and MICROSECOND (SECOND(6)) scales. This is made even easier with the new logical types PR.
There's a lot of behavior to consider for processing engines here, but the conclusion I think we can draw from all of this information for Parquet is that engines may have semantics where 24 hours and 1 day are not equal, so Parquet should not force engines to consider them equal by storing a single value.

That means we're back where we started: we have two values to store for these types and need to choose a good way to store them. I don't see a down-side to separate columns, since we can easily construct a single column in Arrow from them when deserializing.
So, Postgres diverges from the SQL standard and allows time zones that don't have fixed offsets from UTC. This seems useful, but as Jacques has noted, it leads to madness. So, let's stick to the standard. And, if I read you correctly, Postgres allows intervals with a mixture of hours and months. Again, madness.

At a minimum, Arrow needs 2 interval types: INTERVAL MONTH and INTERVAL SECOND. Optionally, we could also have finer-grained types. For instance, INTERVAL HOUR TO SECOND would be compatible with INTERVAL SECOND but would use a format string of 'HH:MM:SS'. I don't feel strongly about whether Arrow has 2 interval types or goes for the full list in standard SQL: YEAR, YEAR_TO_MONTH, MONTH, DAY, DAY_TO_HOUR, DAY_TO_MINUTE, DAY_TO_SECOND, HOUR, HOUR_TO_MINUTE, HOUR_TO_SECOND, MINUTE, MINUTE_TO_SECOND, SECOND. Postgres even has a few more: QUARTER, MILLENNIUM, MILLISECOND etc.

I think a reasonable level of support in Arrow is precision and scale, e.g. DAY(5) TO SECOND(6), which implies that the value holds microsecond values between -8,640,000,000,000,000 and 8,640,000,000,000,000 (the number of microseconds in 10,000 days). Thus behind the scenes it looks like a fixed decimal type.
@julianhyde, my thinking in my patch https://github.com/apache/arrow/pull/920/files was to add a time resolution to the DAY_TIME interval type to address resolutions from seconds down to nanoseconds. This matches up with the
Do whatever you have for timestamp. If you only have an 8 byte timestamp type, an 8 byte interval will be enough. But if you have a 16 byte timestamp, it will generate 16 byte values if you subtract them.
Understood. Should it become necessary in the future, we can augment the metadata to accommodate additional precision. I think 8 bytes will be adequate for the vast majority of use cases for Arrow.
Hi everyone, I've opened #165 to implement a LogicalType for the Interval. From looking at the conversations on this thread, it looks like the Arrow-related concerns have either been addressed, or might not be issues as we have stabilised on an Interval type in Arrow. Please let me know if we're missing any details, or concerns still to address. Thanks.