Tests: Extend InternalStatsTests #24212
Conversation
Currently we don't test for count = 0, which will make a difference when adding tests for parsing for the high level rest client. Also, min/max/sum should be tested with negative values and on a larger range.
@colings86 I pulled this out of a current WIP PR, can you have a quick look and see if this makes sense to you?
LGTM but I left two small comments, feel free to push without another review
double sum = randomDoubleBetween(0, 100, true);
return new InternalStats(name, count, sum, minMax[0], minMax[1], DocValueFormat.RAW,
    pipelineAggregators, Collections.emptyMap());
long count = randomIntBetween(0, 50);
Might be worth using frequently() here so the 0 case has more of a chance of being tested (since it's more likely to cause issues than a random positive number). Also not sure why we only went up to 50 before; it seems to me like we should be testing much larger ranges (probably worth going above Integer.MAX_VALUE) so we make sure that we are ok for very high values, since we expect aggregations to collect a lot of documents.
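As a rough illustration of that suggestion (a sketch only, assuming the frequently() and randomIntBetween() helpers from ESTestCase are in scope; the exact bounds are not taken from the PR):
// Bias towards a positive count but still hit the count == 0 edge case on the
// rare branch; capping at Integer.MAX_VALUE avoids overflowing long when
// counts are summed during reduce (see the overflow discussion further down).
long count = frequently() ? randomIntBetween(1, Integer.MAX_VALUE) : 0;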
So that would still use a positive value more frequently, though. But I will change this and see if it breaks anything.
long count = randomIntBetween(0, 50);
double min = randomDoubleBetween(-1000, 1000, false);
double max = randomDoubleBetween(-1000, 1000, false);
double sum = randomDoubleBetween(-1000, 1000, false); |
I think these ranges are also too small; we should probably test a large range of values since this aggregation will likely be used for large ranges. Mostly I'm thinking this might catch bugs in the reduce logic rather than the serialisation logic.
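For illustration, wider bounds could look something like the sketch below (the concrete numbers are assumptions, not values taken from the PR):
// Much wider, sign-crossing ranges give the reduce logic a chance to fail on
// large magnitudes and negative values, not just on serialisation round-trips.
double min = randomDoubleBetween(-1_000_000, 1_000_000, false);
double max = randomDoubleBetween(-1_000_000, 1_000_000, false);
double sum = randomDoubleBetween(-1_000_000, 1_000_000, false);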
I increased the range already, but this can lead to rounding errors in the reduce logic, so we then need to be more lenient with the delta here: https://github.com/elastic/elasticsearch/pull/24212/files/ac7bfcc469014fb376fbbbd7396fc726d55450c8#diff-3d8a3c85fc5c06b0859a2a4749fb834dL61
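The kind of leniency referred to could look roughly like this (a sketch assuming a list of shard-level InternalStats instances named inputs and a reduced result named reduced; the relative tolerance is illustrative):
// Summing many large doubles during reduce accumulates floating point error,
// so compare the reduced sum with a delta that scales with its magnitude
// rather than using an exact or fixed-epsilon assertion.
double expectedSum = inputs.stream().mapToDouble(InternalStats::getSum).sum();
assertEquals(expectedSum, reduced.getSum(), Math.abs(expectedSum) * 1e-10);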
@colings86 thanks for the review, I made a few slight changes according to your comments and will wait for CI to pass.
double min = randomDoubleBetween(-1000, 1000, false);
double max = randomDoubleBetween(-1000, 1000, false);
double sum = randomDoubleBetween(-1000, 1000, false);
long count = frequently() ? Integer.MAX_VALUE : 0; |
I actually meant to keep this with a range of random values for the frequently branch. Something like:
long count = frequently() ? randomPositiveLong() : 0;
Oh, my mistake, that's what I also intended. I was a bit too fast here, will change that.
randomPositiveLong() won't work though, I think, because then we run the risk of overflowing long when summing those values. Integer.MAX_VALUE should work I think.
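To make the overflow concern concrete (plain arithmetic, not code from the PR):
// randomPositiveLong() can return values close to Long.MAX_VALUE (about 9.2e18),
// so adding just two such counts during reduce can already overflow long. With
// counts capped at Integer.MAX_VALUE (about 2.1e9), even thousands of shards
// stay far below the long limit.
long worstCase = 10_000L * Integer.MAX_VALUE; // ~2.1e13, comfortably within long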
035597f to 53f0982
* master: (61 commits)
  Build: Move plugin cli and tests to distribution tool (elastic#24220)
  Peer Recovery: remove maxUnsafeAutoIdTimestamp hand off (elastic#24243)
  Adds version 5.3.2 and backwards compatibility indices for 5.3.1
  Add utility method to parse named XContent objects with typed prefix (elastic#24240)
  MultiBucketsAggregation.Bucket should not extend Writeable (elastic#24216)
  Don't expose cleaned-up tasks as pending in PrioritizedEsThreadPoolExecutor (elastic#24237)
  Adds declareNamedObjects methods to ConstructingObjectParser (elastic#24219)
  ESIntegTestCase.indexRandom should not introduce types. (elastic#24202)
  Tests: Extend InternalStatsTests (elastic#24212)
  IndicesQueryCache should delegate the scorerSupplier method. (elastic#24209)
  Speed up parsing of large `terms` queries. (elastic#24210)
  [TEST] make sure that the random query_string query generator defines a default_field or a list of fields
  token_count type : add an option to count tokens (fix elastic#23227) (elastic#24175)
  Query string default field (elastic#24214)
  Make Aggregations an abstract class rather than an interface (elastic#24184)
  [TEST] ensure expected sequence no and version are set when index/delete engine operation has a document failure
  Extract batch executor out of cluster service (elastic#24102)
  Add 5.3.1 to bwc versions
  Added "release-state" support to plugin docs
  Added examples to cross cluster search of using cluster settings
  ...
Currently we don't test for count = 0, which will make a difference when adding tests for parsing for the high level rest client. Also, min/max/sum should be tested with negative values and on a larger range, and we could use a randomized numeric format.
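The "randomized numeric format" part could, for example, look like the sketch below; the particular formats chosen here are assumptions rather than what a follow-up change necessarily used:
// Randomly alternate between the raw format and a decimal pattern so that
// rendering and parsing are exercised with a non-trivial DocValueFormat too.
DocValueFormat format = randomBoolean()
        ? DocValueFormat.RAW
        : new DocValueFormat.Decimal("###.##");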