[Rollup] Validate timezones based on rules not string comparison #36237

Merged
26 commits merged on Apr 17, 2019
Changes from 8 commits

Commits (26)
9037c4a
[Rollup] Compare timezones based on rules not string comparison
polyfractal Dec 4, 2018
cc6563e
Fix timestamps in test
polyfractal Dec 4, 2018
4867d99
Use ZoneId
polyfractal Dec 6, 2018
1ef04f0
Merge remote-tracking branch 'origin/master' into rollup_obsolete_tz
polyfractal Dec 7, 2018
da7ebe3
Fix test
polyfractal Dec 10, 2018
21a38f5
Merge remote-tracking branch 'origin/master' into rollup_obsolete_tz
polyfractal Dec 10, 2018
aadf68b
Deprecation warnings when using obsolete timezones
polyfractal Dec 11, 2018
38ca541
Merge remote-tracking branch 'origin/master' into rollup_obsolete_tz
polyfractal Dec 11, 2018
ef0f695
Switch DateHistoConfig over to ZoneId
polyfractal Dec 18, 2018
d41dda4
Merge remote-tracking branch 'origin/master' into rollup_obsolete_tz
polyfractal Dec 18, 2018
d049fea
Test fix: searching the wrong index
polyfractal Dec 18, 2018
e9f42ef
checkstyle
polyfractal Dec 18, 2018
1f7802f
ML needs to use zones instead of strings too
polyfractal Dec 18, 2018
5a2cac4
Revert "Switch DateHistoConfig over to ZoneId"
polyfractal Jan 9, 2019
72a1bed
Fix rule matching
polyfractal Jan 10, 2019
94abbeb
Add test for TZ validation in action
polyfractal Jan 10, 2019
d573aef
Merge remote-tracking branch 'origin/master' into rollup_obsolete_tz
polyfractal Jan 10, 2019
feba7e5
Yml test version updates
polyfractal Jan 11, 2019
9f4aada
Merge remote-tracking branch 'origin/master' into rollup_obsolete_tz
polyfractal Feb 5, 2019
de22b2f
Remove last traces of Joda
polyfractal Feb 5, 2019
b88091b
Do not set timezone on rollup request histo
polyfractal Feb 7, 2019
d80e035
Merge remote-tracking branch 'origin/master' into rollup_obsolete_tz
polyfractal Apr 8, 2019
83bf9bf
Merge remote-tracking branch 'origin/master' into rollup_obsolete_tz
polyfractal Apr 15, 2019
fbda167
checkstyle
polyfractal Apr 15, 2019
8822afd
Fix tests, make more realistic
polyfractal Apr 15, 2019
a9f2911
Remove unnecessary changes
polyfractal Apr 15, 2019
121 changes: 121 additions & 0 deletions server/src/main/java/org/elasticsearch/common/time/DateUtils.java
@@ -57,6 +57,127 @@ public static DateTimeZone zoneIdToDateTimeZone(ZoneId zoneId) {
DEPRECATED_SHORT_TZ_IDS = tzs.keySet();
}

// Map of deprecated timezones and their recommended new counterpart
public static final Map<String, String> DEPRECATED_LONG_TIMEZONES;
static {
Map<String, String> tzs = new HashMap<>();
tzs.put("Africa/Asmera","Africa/Nairobi");
tzs.put("Africa/Timbuktu","Africa/Abidjan");
tzs.put("America/Argentina/ComodRivadavia","America/Argentina/Catamarca");
tzs.put("America/Atka","America/Adak");
tzs.put("America/Buenos_Aires","America/Argentina/Buenos_Aires");
tzs.put("America/Catamarca","America/Argentina/Catamarca");
tzs.put("America/Coral_Harbour","America/Atikokan");
tzs.put("America/Cordoba","America/Argentina/Cordoba");
tzs.put("America/Ensenada","America/Tijuana");
tzs.put("America/Fort_Wayne","America/Indiana/Indianapolis");
tzs.put("America/Indianapolis","America/Indiana/Indianapolis");
tzs.put("America/Jujuy","America/Argentina/Jujuy");
tzs.put("America/Knox_IN","America/Indiana/Knox");
tzs.put("America/Louisville","America/Kentucky/Louisville");
tzs.put("America/Mendoza","America/Argentina/Mendoza");
tzs.put("America/Montreal","America/Toronto");
tzs.put("America/Porto_Acre","America/Rio_Branco");
tzs.put("America/Rosario","America/Argentina/Cordoba");
tzs.put("America/Santa_Isabel","America/Tijuana");
tzs.put("America/Shiprock","America/Denver");
tzs.put("America/Virgin","America/Port_of_Spain");
tzs.put("Antarctica/South_Pole","Pacific/Auckland");
tzs.put("Asia/Ashkhabad","Asia/Ashgabat");
tzs.put("Asia/Calcutta","Asia/Kolkata");
tzs.put("Asia/Chongqing","Asia/Shanghai");
tzs.put("Asia/Chungking","Asia/Shanghai");
tzs.put("Asia/Dacca","Asia/Dhaka");
tzs.put("Asia/Harbin","Asia/Shanghai");
tzs.put("Asia/Kashgar","Asia/Urumqi");
tzs.put("Asia/Katmandu","Asia/Kathmandu");
tzs.put("Asia/Macao","Asia/Macau");
tzs.put("Asia/Rangoon","Asia/Yangon");
tzs.put("Asia/Saigon","Asia/Ho_Chi_Minh");
tzs.put("Asia/Tel_Aviv","Asia/Jerusalem");
tzs.put("Asia/Thimbu","Asia/Thimphu");
tzs.put("Asia/Ujung_Pandang","Asia/Makassar");
tzs.put("Asia/Ulan_Bator","Asia/Ulaanbaatar");
tzs.put("Atlantic/Faeroe","Atlantic/Faroe");
tzs.put("Atlantic/Jan_Mayen","Europe/Oslo");
tzs.put("Australia/ACT","Australia/Sydney");
tzs.put("Australia/Canberra","Australia/Sydney");
tzs.put("Australia/LHI","Australia/Lord_Howe");
tzs.put("Australia/NSW","Australia/Sydney");
tzs.put("Australia/North","Australia/Darwin");
tzs.put("Australia/Queensland","Australia/Brisbane");
tzs.put("Australia/South","Australia/Adelaide");
tzs.put("Australia/Tasmania","Australia/Hobart");
tzs.put("Australia/Victoria","Australia/Melbourne");
tzs.put("Australia/West","Australia/Perth");
tzs.put("Australia/Yancowinna","Australia/Broken_Hill");
tzs.put("Brazil/Acre","America/Rio_Branco");
tzs.put("Brazil/DeNoronha","America/Noronha");
tzs.put("Brazil/East","America/Sao_Paulo");
tzs.put("Brazil/West","America/Manaus");
tzs.put("Canada/Atlantic","America/Halifax");
tzs.put("Canada/Central","America/Winnipeg");
tzs.put("Canada/East-Saskatchewan","America/Regina");
tzs.put("Canada/Eastern","America/Toronto");
tzs.put("Canada/Mountain","America/Edmonton");
tzs.put("Canada/Newfoundland","America/St_Johns");
tzs.put("Canada/Pacific","America/Vancouver");
tzs.put("Canada/Yukon","America/Whitehorse");
tzs.put("Chile/Continental","America/Santiago");
tzs.put("Chile/EasterIsland","Pacific/Easter");
tzs.put("Cuba","America/Havana");
tzs.put("Egypt","Africa/Cairo");
tzs.put("Eire","Europe/Dublin");
tzs.put("Europe/Belfast","Europe/London");
tzs.put("Europe/Tiraspol","Europe/Chisinau");
tzs.put("GB","Europe/London");
tzs.put("GB-Eire","Europe/London");
tzs.put("Greenwich","Etc/GMT");
tzs.put("Hongkong","Asia/Hong_Kong");
tzs.put("Iceland","Atlantic/Reykjavik");
tzs.put("Iran","Asia/Tehran");
tzs.put("Israel","Asia/Jerusalem");
tzs.put("Jamaica","America/Jamaica");
tzs.put("Japan","Asia/Tokyo");
tzs.put("Kwajalein","Pacific/Kwajalein");
tzs.put("Libya","Africa/Tripoli");
tzs.put("Mexico/BajaNorte","America/Tijuana");
tzs.put("Mexico/BajaSur","America/Mazatlan");
tzs.put("Mexico/General","America/Mexico_City");
tzs.put("NZ","Pacific/Auckland");
tzs.put("NZ-CHAT","Pacific/Chatham");
tzs.put("Navajo","America/Denver");
tzs.put("PRC","Asia/Shanghai");
tzs.put("Pacific/Johnston","Pacific/Honolulu");
tzs.put("Pacific/Ponape","Pacific/Pohnpei");
tzs.put("Pacific/Samoa","Pacific/Pago_Pago");
tzs.put("Pacific/Truk","Pacific/Chuuk");
tzs.put("Pacific/Yap","Pacific/Chuuk");
tzs.put("Poland","Europe/Warsaw");
tzs.put("Portugal","Europe/Lisbon");
tzs.put("ROC","Asia/Taipei");
tzs.put("ROK","Asia/Seoul");
tzs.put("Singapore","Asia/Singapore");
tzs.put("Turkey","Europe/Istanbul");
tzs.put("UCT","Etc/UCT");
tzs.put("US/Alaska","America/Anchorage");
tzs.put("US/Aleutian","America/Adak");
tzs.put("US/Arizona","America/Phoenix");
tzs.put("US/Central","America/Chicago");
tzs.put("US/East-Indiana","America/Indiana/Indianapolis");
tzs.put("US/Eastern","America/New_York");
tzs.put("US/Hawaii","Pacific/Honolulu");
tzs.put("US/Indiana-Starke","America/Indiana/Knox");
tzs.put("US/Michigan","America/Detroit");
tzs.put("US/Mountain","America/Denver");
tzs.put("US/Pacific","America/Los_Angeles");
tzs.put("US/Samoa","Pacific/Pago_Pago");
tzs.put("Universal","Etc/UTC");
tzs.put("W-SU","Europe/Moscow");
tzs.put("Zulu","Etc/UTC");
DEPRECATED_LONG_TIMEZONES = Collections.unmodifiableMap(tzs);
}

public static ZoneId dateTimeZoneToZoneId(DateTimeZone timeZone) {
if (timeZone == null) {
return null;
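A minimal sketch, outside this diff, of how a map like DEPRECATED_LONG_TIMEZONES can back the deprecation warnings this PR adds when a job uses an obsolete zone. The helper name and warning text below are illustrative assumptions, not the PR's actual logging code:

    import java.util.Map;

    public class ObsoleteTimezoneCheck {
        // Tiny illustrative stand-in for DateUtils.DEPRECATED_LONG_TIMEZONES
        private static final Map<String, String> DEPRECATED_LONG_TIMEZONES = Map.of(
            "Asia/Calcutta", "Asia/Kolkata",
            "US/Eastern", "America/New_York");

        // Accepts the requested zone but warns when it is an obsolete alias
        static String checkTimezone(String requested) {
            String replacement = DEPRECATED_LONG_TIMEZONES.get(requested);
            if (replacement != null) {
                System.err.println("Deprecation: timezone [" + requested
                    + "] is obsolete, consider [" + replacement + "] instead");
            }
            return requested;
        }

        public static void main(String[] args) {
            checkTimezone("Asia/Calcutta");   // warns and suggests Asia/Kolkata
            checkTimezone("Europe/Paris");    // no warning
        }
    }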
@@ -25,6 +25,7 @@
import org.joda.time.DateTimeZone;

import java.io.IOException;
import java.time.ZoneId;
import java.util.Map;
import java.util.Objects;

@@ -53,7 +54,7 @@ public class DateHistogramGroupConfig implements Writeable, ToXContentObject {
private static final String FIELD = "field";
public static final String TIME_ZONE = "time_zone";
public static final String DELAY = "delay";
private static final String DEFAULT_TIMEZONE = "UTC";
public static final String DEFAULT_TIMEZONE = "UTC";
private static final ConstructingObjectParser<DateHistogramGroupConfig, Void> PARSER;
static {
PARSER = new ConstructingObjectParser<>(NAME, a ->
@@ -103,7 +104,8 @@ public DateHistogramGroupConfig(final String field,
this.interval = interval;
this.field = field;
this.delay = delay;
this.timeZone = (timeZone != null && timeZone.isEmpty() == false) ? timeZone : DEFAULT_TIMEZONE;
this.timeZone = ZoneId.of((timeZone != null && timeZone.isEmpty() == false)
? timeZone : DEFAULT_TIMEZONE, ZoneId.SHORT_IDS).toString();

// validate interval
createRounding(this.interval.toString(), this.timeZone);
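For context on the constructor change above: ZoneId.of(id, ZoneId.SHORT_IDS) resolves legacy three-letter abbreviations through the JDK's built-in alias map before normal region parsing, so configs that previously stored short IDs keep working. A standalone sketch, not part of the diff; the printed values come from the standard ZoneId.SHORT_IDS table:

    import java.time.ZoneId;

    public class ShortIdResolution {
        public static void main(String[] args) {
            // Short abbreviations resolve through the alias map
            System.out.println(ZoneId.of("EST", ZoneId.SHORT_IDS)); // -05:00 (a fixed offset)
            System.out.println(ZoneId.of("CST", ZoneId.SHORT_IDS)); // America/Chicago

            // Full region IDs pass through unchanged
            System.out.println(ZoneId.of("America/New_York", ZoneId.SHORT_IDS)); // America/New_York

            // Without the alias map, bare "EST" is rejected with a ZoneRulesException on stock JDKs
            // ZoneId.of("EST");
        }
    }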
@@ -18,6 +18,7 @@
import org.elasticsearch.xpack.core.rollup.job.DateHistogramGroupConfig;
import org.joda.time.DateTimeZone;

import java.time.ZoneId;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
@@ -97,11 +98,13 @@ private static void checkDateHisto(DateHistogramAggregationBuilder source, List<
if (agg.get(RollupField.AGG).equals(DateHistogramAggregationBuilder.NAME)) {
DateHistogramInterval interval = new DateHistogramInterval((String)agg.get(RollupField.INTERVAL));

String thisTimezone = (String)agg.get(DateHistogramGroupConfig.TIME_ZONE);
String sourceTimeZone = source.timeZone() == null ? DateTimeZone.UTC.toString() : source.timeZone().toString();
ZoneId thisTimezone = ZoneId.of(((String)agg.get(DateHistogramGroupConfig.TIME_ZONE)), ZoneId.SHORT_IDS);
ZoneId sourceTimeZone = source.timeZone() == null
? ZoneId.of(DateHistogramGroupConfig.DEFAULT_TIMEZONE)
Member commented:

Using ZoneId.of("UTC") (as this effectively does) causes printing using formats with this ZoneId to be messed up. They will get a [UTC] appended to the end of the time, like 1970-01-02T10:17:36.789Z[UTC]. Instead, use ZoneOffset.UTC.

Contributor (author) replied:

Urgh, yeah not what was intended. Fixing, thanks!
: ZoneId.of(source.timeZone().toString(), ZoneId.SHORT_IDS);

// Ensure we are working on the same timezone
if (thisTimezone.equalsIgnoreCase(sourceTimeZone) == false) {
if (thisTimezone.getRules().equals(sourceTimeZone.getRules()) == false) {
continue;
}
if (source.dateHistogramInterval() != null) {
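A minimal standalone sketch of the two points discussed above: equal rules can match even when zone IDs differ (the basis for the new getRules() comparison), and formatting with ZoneId.of("UTC") appends a [UTC] suffix that ZoneOffset.UTC avoids, per the review comment:

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZoneOffset;
    import java.time.format.DateTimeFormatter;

    public class ZoneComparisonSketch {
        public static void main(String[] args) {
            // "Asia/Calcutta" is an obsolete alias of "Asia/Kolkata": the IDs differ,
            // but the underlying rules are the same, so a rules-based comparison matches.
            ZoneId alias = ZoneId.of("Asia/Calcutta");
            ZoneId canonical = ZoneId.of("Asia/Kolkata");
            System.out.println(alias.getId().equals(canonical.getId()));       // false
            System.out.println(alias.getRules().equals(canonical.getRules())); // true

            // ZoneId.of("UTC") is a region zone, so ISO formatting appends "[UTC]";
            // ZoneOffset.UTC is a plain offset and does not.
            DateTimeFormatter fmt = DateTimeFormatter.ISO_ZONED_DATE_TIME;
            Instant t = Instant.ofEpochMilli(123456789L);
            System.out.println(fmt.format(t.atZone(ZoneId.of("UTC")))); // 1970-01-02T10:17:36.789Z[UTC]
            System.out.println(fmt.format(t.atZone(ZoneOffset.UTC)));   // 1970-01-02T10:17:36.789Z
        }
    }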
@@ -11,8 +11,6 @@
import org.elasticsearch.common.io.stream.NamedWriteableAwareStreamInput;
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.TermQueryBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder;
@@ -21,7 +19,6 @@
import org.elasticsearch.search.aggregations.metrics.SumAggregationBuilder;
import org.elasticsearch.search.aggregations.support.ValuesSourceAggregationBuilder;
import org.elasticsearch.xpack.core.rollup.RollupField;
import org.elasticsearch.xpack.core.rollup.job.DateHistogramGroupConfig;
import org.joda.time.DateTimeZone;

import java.util.ArrayList;
@@ -47,7 +44,7 @@
* }</pre>
*
*
* The only publicly "consumable" API is {@link #translateAggregation(AggregationBuilder, List, NamedWriteableRegistry)}.
* The only publicly "consumable" API is {@link #translateAggregation(AggregationBuilder, NamedWriteableRegistry)}.
*/
public class RollupRequestTranslator {

@@ -116,26 +113,22 @@ public class RollupRequestTranslator {
* relevant method below.
*
* @param source The source aggregation to translate into rollup-enabled version
* @param filterConditions A list used to track any filter conditions that sub-aggs may
* require.
* @param registry Registry containing the various aggregations so that we can easily
* deserialize into a stream for cloning
* @return Returns the fully translated aggregation tree. Note that it returns a list instead
* of a single AggBuilder, since some aggregations (e.g. avg) may result in two
* translated aggs (sum + count)
*/
public static List<AggregationBuilder> translateAggregation(AggregationBuilder source,
List<QueryBuilder> filterConditions,
NamedWriteableRegistry registry) {
public static List<AggregationBuilder> translateAggregation(AggregationBuilder source, NamedWriteableRegistry registry) {

if (source.getWriteableName().equals(DateHistogramAggregationBuilder.NAME)) {
return translateDateHistogram((DateHistogramAggregationBuilder) source, filterConditions, registry);
return translateDateHistogram((DateHistogramAggregationBuilder) source, registry);
} else if (source.getWriteableName().equals(HistogramAggregationBuilder.NAME)) {
return translateHistogram((HistogramAggregationBuilder) source, filterConditions, registry);
return translateHistogram((HistogramAggregationBuilder) source, registry);
} else if (RollupField.SUPPORTED_METRICS.contains(source.getWriteableName())) {
return translateVSLeaf((ValuesSourceAggregationBuilder.LeafOnly)source, registry);
} else if (source.getWriteableName().equals(TermsAggregationBuilder.NAME)) {
return translateTerms((TermsAggregationBuilder)source, filterConditions, registry);
return translateTerms((TermsAggregationBuilder)source, registry);
} else {
throw new IllegalArgumentException("Unable to translate aggregation tree into Rollup. Aggregation ["
+ source.getName() + "] is of type [" + source.getClass().getSimpleName() + "] which is " +
@@ -195,22 +188,13 @@ public static List<AggregationBuilder> translateAggregation(AggregationBuilder s
* <li>Field: `{timestamp field}.date_histogram._count`</li>
* </ul>
* </li>
* <li>Add a filter condition:</li>
* <li>
* <ul>
* <li>Query type: TermQuery</li>
* <li>Field: `{timestamp_field}.date_histogram.interval`</li>
* <li>Value: `{source interval}`</li>
* </ul>
* </li>
* </ul>
*
*/
private static List<AggregationBuilder> translateDateHistogram(DateHistogramAggregationBuilder source,
List<QueryBuilder> filterConditions,
NamedWriteableRegistry registry) {

return translateVSAggBuilder(source, filterConditions, registry, () -> {
return translateVSAggBuilder(source, registry, () -> {
DateHistogramAggregationBuilder rolledDateHisto
= new DateHistogramAggregationBuilder(source.getName());

@@ -220,10 +204,8 @@ private static List<AggregationBuilder> translateDateHistogram(DateHistogramAggr
rolledDateHisto.interval(source.interval());
}

String timezone = source.timeZone() == null ? DateTimeZone.UTC.toString() : source.timeZone().toString();
filterConditions.add(new TermQueryBuilder(RollupField.formatFieldName(source,
DateHistogramGroupConfig.TIME_ZONE), timezone));

DateTimeZone timezone = source.timeZone() == null ? DateTimeZone.UTC : source.timeZone();
rolledDateHisto.timeZone(timezone);
rolledDateHisto.offset(source.offset());
if (source.extendedBounds() != null) {
rolledDateHisto.extendedBounds(source.extendedBounds());
@@ -245,14 +227,13 @@ private static List<AggregationBuilder> translateDateHistogram(DateHistogramAggr
* Notably, it adds a Sum metric to calculate the doc_count in each bucket.
*
* Conventions are identical to a date_histogram (excepting date-specific details), so see
* {@link #translateDateHistogram(DateHistogramAggregationBuilder, List, NamedWriteableRegistry)} for
* {@link #translateDateHistogram(DateHistogramAggregationBuilder, NamedWriteableRegistry)} for
* a complete list of conventions, examples, etc
*/
private static List<AggregationBuilder> translateHistogram(HistogramAggregationBuilder source,
List<QueryBuilder> filterConditions,
NamedWriteableRegistry registry) {

return translateVSAggBuilder(source, filterConditions, registry, () -> {
return translateVSAggBuilder(source, registry, () -> {
HistogramAggregationBuilder rolledHisto
= new HistogramAggregationBuilder(source.getName());

@@ -325,10 +306,9 @@ private static List<AggregationBuilder> translateHistogram(HistogramAggregationB
*
*/
private static List<AggregationBuilder> translateTerms(TermsAggregationBuilder source,
List<QueryBuilder> filterConditions,
NamedWriteableRegistry registry) {

return translateVSAggBuilder(source, filterConditions, registry, () -> {
return translateVSAggBuilder(source, registry, () -> {
TermsAggregationBuilder rolledTerms
= new TermsAggregationBuilder(source.getName(), source.valueType());
rolledTerms.field(RollupField.formatFieldName(source, RollupField.VALUE));
@@ -356,8 +336,6 @@ private static List<AggregationBuilder> translateTerms(TermsAggregationBuilder s
* ValueSourceBuilder. This method is called by all the agg-specific methods (e.g. translateDateHistogram())
*
* @param source The source aggregation that we wish to translate
* @param filterConditions A list of existing filter conditions, in case we need to add some
* for this particular agg
* @param registry Named registry for serializing leaf metrics. Not actually used by this method,
* but is passed downwards for leaf usage
* @param factory A factory closure that generates a new shallow clone of the `source`. E.g. if `source` is
@@ -368,15 +346,14 @@ private static List<AggregationBuilder> translateTerms(TermsAggregationBuilder s
* @return the translated multi-bucket ValueSourceAggBuilder
*/
private static <T extends ValuesSourceAggregationBuilder> List<AggregationBuilder>
translateVSAggBuilder(ValuesSourceAggregationBuilder source, List<QueryBuilder> filterConditions,
NamedWriteableRegistry registry, Supplier<T> factory) {
translateVSAggBuilder(ValuesSourceAggregationBuilder source, NamedWriteableRegistry registry, Supplier<T> factory) {

T rolled = factory.get();

// Translate all subaggs and add to the newly translated agg
// NOTE: using for loop instead of stream because compiler explodes with a bug :/
for (AggregationBuilder subAgg : source.getSubAggregations()) {
List<AggregationBuilder> translated = translateAggregation(subAgg, filterConditions, registry);
List<AggregationBuilder> translated = translateAggregation(subAgg, registry);
for (AggregationBuilder t : translated) {
rolled.subAggregation(t);
}
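A rough usage sketch of the narrowed translation API after this change: callers no longer thread a filterConditions list through translateAggregation, and the source timezone is carried on the rolled date_histogram itself rather than being turned into a term filter. The field name, interval, and zone below are invented for illustration, and the import for RollupRequestTranslator is omitted (same package assumed):

    import java.util.List;

    import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
    import org.elasticsearch.search.aggregations.AggregationBuilder;
    import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
    import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
    import org.joda.time.DateTimeZone;

    public class TranslateUsageSketch {
        static List<AggregationBuilder> rollupVersionOf(NamedWriteableRegistry registry) {
            // Hypothetical live-index aggregation that a rollup search would need to translate
            DateHistogramAggregationBuilder source = new DateHistogramAggregationBuilder("histo")
                .field("timestamp")
                .dateHistogramInterval(new DateHistogramInterval("1h"))
                .timeZone(DateTimeZone.forID("America/New_York"));

            // Two-argument form: the List<QueryBuilder> filterConditions parameter is gone,
            // and the returned date_histogram keeps the source timezone on the agg itself.
            return RollupRequestTranslator.translateAggregation(source, registry);
        }
    }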