forked from elastic/elasticsearch
Merge the latest commits from author #1
Merged
…)" This reverts commit f114ef6.
Using the document update API on aliases with a write index does not work. Follow-up to #31520
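A hedged sketch of the situation described above: since the document update API could not target an alias that has a write index, a caller (or a fix) would first have to resolve the alias to its write index. The helper and data shapes below are hypothetical and illustrative only:

```python
# Hypothetical sketch: resolve an alias to its write index before updating,
# since the document update API did not accept an alias with a write index.
def resolve_write_index(aliases, alias_name):
    """Return the name of the index flagged is_write_index for the alias."""
    for index, meta in aliases.get(alias_name, {}).items():
        if meta.get("is_write_index"):
            return index
    raise ValueError(f"alias {alias_name!r} has no write index")

aliases = {
    "logs": {
        "logs-000001": {"is_write_index": False},
        "logs-000002": {"is_write_index": True},
    }
}
print(resolve_write_index(aliases, "logs"))  # logs-000002
```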
Gracefully handle the case where the index response returns null; increase and assert the timeout. Closes #45238
* Restrict which tasks can use testclusters This PR fixes a problem between the interaction of test-clusters and build cache. Before this any task could have used a cluster without tracking it as input. With this change a new interface is introduced to track the tasks that can use clusters and we do consider the cluster as input for all of them.
Introduces an IsoLocal.ROOT constant which should be used instead of java.util.Locale.ROOT in ES when dealing with dates. IsoLocal.ROOT customises the start of the week to be Monday instead of Sunday. Closes #42588 (an issue with investigation details). Relates #41670 (bug raised; this won't fix it on its own, joda.parseInto has to be reimplemented). Closes #43275 (an issue raised by a community member).
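The Monday-vs-Sunday week-start difference described above can be seen with Python's ISO-aware date APIs (a sketch for illustration; IsoLocal.ROOT itself is a Java Locale constant):

```python
import datetime

# 2019-08-25 is a Sunday. Under a Sunday-start convention it begins a new
# week; under ISO-8601 (weeks start on Monday, which is what a Monday-start
# locale encodes) it is the LAST day of the previous week.
d = datetime.date(2019, 8, 25)
print(d.weekday())         # 6: Monday-indexed, so Sunday is the last day
print(d.isocalendar()[1])  # 34: the date still belongs to the prior ISO week
```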
This commit adds a first draft of a regression analysis to data frame analytics. There is high probability that the exact syntax might change. This commit adds the new analysis type and its parameters as well as appropriate validation. It also modifies the extractor and the fields detector to be able to handle categorical fields as regression analysis supports them.
* Reduces complicated callback relations in `testSuccessfulSnapshotAndRestore` to flat steps of sequential actions * Will refactor the other tests in this suite as a follow-up * This format certainly makes it easier to create more complicated tests that involve multiple subsequent snapshots, as it would allow adding loops
Changes the order of parameters in Geometries from lat, lon to lon, lat and moves all Geometry classes to the org.elasticsearch.geometry package. Closes #45048
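A small illustrative sketch of the parameter-order swap: lon-first matches the GeoJSON and WKT conventions, whereas many older APIs took lat first. The helper name is hypothetical:

```python
# Hypothetical sketch: geometries now take (lon, lat), matching GeoJSON/WKT,
# instead of the older (lat, lon) order.
def point_lon_lat(lon, lat):
    return {"type": "Point", "coordinates": [lon, lat]}

# Berlin: lat 52.52, lon 13.405 -- note lon comes first in the coordinates
berlin = point_lon_lat(13.405, 52.52)
print(berlin["coordinates"])  # [13.405, 52.52]
```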
* Follow up to #44949 * Stop using a special code path for multi-line JSON and instead handle its detection like that of other XContent types when creating the request * Only leave a single path that holds a reference to the full REST request * In the next step we can move the copying of request content to happen before the actual request handling and make it conditional on the handler in question to stop copying bulk requests as suggested in #44564
Relates #44756
This commit makes sure that mapping parameters to `CreateIndex` and `PutIndexTemplate` are keyed by the type name. `IndexCreationTask` expects mappings to be keyed by the type name. It asserts this for template mappings but not for the mappings in the request. The `CreateIndexRequest` and `RestCreateIndexAction` mostly make sure that the mapping is keyed by a type name, but not always. When building the create-index request outside of the REST handler, there are a few methods to set the mapping for the request. Some of them add the type name and some of them do not. For example, `CreateIndexRequest#mapping(String type, Map<String, ?> source)` adds the type name, but `CreateIndexRequest#mapping(String type, XContentBuilder source)` does not. This PR asserts the type name in the request mapping inside `IndexCreationTask` and makes all `CreateIndexRequest#mapping` methods add the type name.
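The invariant described above can be sketched as a small normalization step (Python, hypothetical helper name): a mapping handed to a create-index request must end up keyed by the type name exactly once.

```python
# Hypothetical sketch: wrap a mapping under its type name unless it is
# already keyed by it, mirroring the "always add the type name" fix.
def keyed_by_type(type_name, mapping):
    """Return `mapping` keyed by `type_name`, wrapping it if necessary."""
    if list(mapping) == [type_name]:
        return mapping           # already keyed by the type name
    return {type_name: mapping}  # add the missing type key

raw = {"properties": {"field": {"type": "keyword"}}}
assert keyed_by_type("_doc", raw) == {"_doc": raw}
assert keyed_by_type("_doc", {"_doc": raw}) == {"_doc": raw}  # idempotent
```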
* Resolving the todo, cleaning up the unused `settings` parameter * Cleaning up some other minor dead code in affected classes
The current implementation makes it difficult to add new privileges (for example: a cluster privilege which is more than cluster action-based and is not exposed to the security administrator). At a high level, we would like a cluster privilege to be either: - a named cluster privilege, corresponding to the `cluster` field of the role descriptor - or a configurable cluster privilege, corresponding to the `global` field of the role descriptor, which a security administrator can configure. Some responsibilities, like merging action-based cluster privileges, are now pushed down to the cluster permission level. How the predicate is implemented (using an Automaton) is now enforced by the cluster permission. `ClusterPermission` enforces cluster-level access by checking the cluster action and, optionally, the request. It is a collection of one or more permission checks; if any check allows access, the permission allows access to the cluster action. Implementations of a cluster privilege must be able to provide the cluster permission with the predicate information it needs for enforcement. This is done by making cluster privilege implementations aware of the cluster permission builder and having them specify how the permission is built for a given privilege. This commit renames `ConditionalClusterPrivilege` to `ConfigurableClusterPrivilege`. A `ConfigurableClusterPrivilege` is a renderable cluster privilege exposed as a `global` field in the role descriptor. Beyond this, there is a requirement to know whether a cluster permission is implied by another cluster permission (`has-privileges`). This is helpful in answering queries about a user's privileges. It is not simply a check of cluster permissions, since we do not have access to runtime information (like the request object). This refactoring does not try to address those scenarios. Relates #44048
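The "any check allows access" model described above can be sketched as follows (Python, hypothetical class and method names; the real `ClusterPermission` is Java and uses Automaton-backed predicates):

```python
# Hypothetical sketch of the permission model: a cluster permission is a
# collection of checks; access is granted if ANY check allows the
# (action, request) pair.
class ClusterPermission:
    def __init__(self):
        self.checks = []  # list of (action, request) -> bool predicates

    def add(self, check):
        self.checks.append(check)
        return self  # builder-style, as in the refactoring

    def check(self, action, request=None):
        return any(c(action, request) for c in self.checks)

perm = (ClusterPermission()
        .add(lambda action, req: action.startswith("cluster:monitor/"))
        .add(lambda action, req: action == "cluster:admin/snapshot/get"))
print(perm.check("cluster:monitor/health"))  # True
print(perm.check("cluster:admin/reroute"))   # False
```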
* SQL: ODBC: document newest connection string parameters This commit adds the documentation for two newly added connection string parameters: AutoEscapePVA and IndexIncludeFrozen. It also removes the recommended OSes from the prerequisites list and places the recommendation distinctively: unmet prerequisites will fail the installation, while the driver would still install on OSes other than those recommended. * Address review suggestions: adjust phrasing for a clearer message.
As of #43939, Watcher tests now correctly block until all Watch executions kicked off by that test are finished. Previously we allowed tests to finish with outstanding watch executions. It was known that this would increase the time needed to finish a test. However, running the tests on CI can be slow, and on at least one occasion it took 60s to actually finish. This PR simply increases the max allowable timeout for Watcher tests to clean up after themselves.
Upgrades: Apache Tika 1.19.1 -> 1.22, pdfbox 2.0.12 -> 2.0.16, poi 4.0.0 -> 4.0.1.
Encapsulate the serialization/deserialization of SQL client classes. Make configuration-specific parameters (such as ZoneId) generic, just like the version, and remove the need for consumer classes to manage them individually. This is not only consistent but also provides significant savings in the cursor. Fixes #40216
* Add input and output tracking of built bwc versions This PR adds tracking of the bwc version's git hash as input and all the expected files as output. The effect is that `gradlew` is not called at all when the git hash doesn't change and the version was already built. Previously gradlew would be called for the bwc version and would have to configure the project and go through up-to-date checks to figure out that nothing changed. This helps when working on bwc tests locally and needing to run the test multiple times. This should also help CI not re-build bwc versions across different runs. * Enable caching of bwc builds
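The up-to-date check described above amounts to comparing a recorded input (the git hash) and verifying the expected outputs exist. A minimal sketch, with hypothetical names:

```python
import os

# Hypothetical sketch: skip the bwc build when the git hash (input) is
# unchanged AND every expected output file already exists on disk.
def bwc_up_to_date(git_hash, recorded_hash, expected_outputs):
    if git_hash != recorded_hash:
        return False  # input changed: must rebuild
    return all(os.path.exists(p) for p in expected_outputs)

print(bwc_up_to_date("abc123", "abc123", []))  # True: same hash, nothing missing
print(bwc_up_to_date("abc123", "def456", []))  # False: input changed, rebuild
```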
* [ML][Data Frame] fixing _start?force=true bug * removing unused import * removing old TODO
This change adds a new option called user_dictionary_rules to Kuromoji's tokenizer. It can be used to set additional tokenization rules to the Japanese tokenizer directly in the settings (instead of using a file). This commit also adds a check that no rules are duplicated since this is not allowed in the UserDictionary. Closes #25343
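A minimal sketch of the duplicate check described above (Python, hypothetical helper name), assuming the surface form is the first CSV column of each rule, as in Kuromoji's user dictionary format:

```python
# Hypothetical sketch: the UserDictionary does not allow two rules with the
# same surface form (first CSV column), so reject duplicates up front.
def validate_user_dictionary_rules(rules):
    seen = set()
    for rule in rules:
        surface = rule.split(",", 1)[0]
        if surface in seen:
            raise ValueError(f"duplicate user dictionary rule: {surface!r}")
        seen.add(surface)

validate_user_dictionary_rules(
    ["東京スカイツリー,東京 スカイツリー,トウキョウ スカイツリー,カスタム名詞"]
)  # unique surface forms: no error; a repeated surface form would raise
```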
If the background refresh is running, but not finished yet then the document might not be visible to the next search. Thus, if scheduledRefresh returns false, we need to wait until the background refresh is done. Closes #45571
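The wait-for-refresh behavior described above can be sketched like this (Python with a toy engine; all names are hypothetical, the real code is Java inside the engine):

```python
import threading

class Engine:
    """Toy engine: scheduled_refresh() returns False while a background
    refresh is in flight; the event fires when that refresh completes."""
    def __init__(self):
        self.background_refresh_done = threading.Event()
        self._busy = True  # a background refresh is currently running

    def scheduled_refresh(self):
        return not self._busy

# Hypothetical sketch of the fix: if scheduled_refresh() returns False,
# block until the in-flight background refresh finishes so the document
# is visible to the next search.
def refresh_and_wait(engine, timeout=5.0):
    if engine.scheduled_refresh():
        return True  # refresh ran inline; documents are visible
    return engine.background_refresh_done.wait(timeout)

engine = Engine()
threading.Timer(0.1, engine.background_refresh_done.set).start()
print(refresh_and_wait(engine))  # True once the background refresh finishes
```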
This PR migrates tests from MaxIT integration test to MaxAggregatorTests, as described in #42893
The java based distribution tests currently have a single Tests class which encapsulates all of the tests for a particular distribution. The test task in gradle then depends on all distributions being built, and each individual test class looks for the particular distribution it is trying to test. This means that reproducing a single test failure triggers all the distributions to be built, even though only one is needed for the test. This commit reworks the java distribution tests to pass in a particular distribution to be tested, and changes the base test classes to be actual test classes which have assumptions around which distributions they operate on. For example, the archives tests will be skipped when run with an rpm distribution, and vice versa for the package tests. This makes reproduction much more granular. It also allows better splitting up of tests around a particular use case. For example, all tests for systemd behavior can be in one test class, and run independently of all tests against rpm/deb distributions.
* Executing SLM policies on the snapshot thread will block until a snapshot finishes if the pool is completely busy executing that snapshot * Fixes #45594
* Streamlined GS indexing topic. * Incorporated review feedback * Applied formatting per the style guidelines.
This deprecated option was added in 0d8e399 and can now be removed.
Regression analysis supports missing fields. Moreover, it is expected that the dependent variable has missing values in the part of the data frame that is not used for training. This commit allows an analysis to declare that it supports missing values. For such analyses, rows with missing values are not skipped. Instead, they are written as normal, with empty strings used for the missing values. This also contains a fix to the integration test. Closes #45425
* [ML] better handle empty results when evaluating regression * adding new failure test to ml_security black list * fixing equality check for regression results
Since #45136, we use soft-deletes instead of translog in peer recovery. There's no need to retain extra translog to increase a chance of operation-based recoveries. This commit ignores the translog retention policy if soft-deletes is enabled so we can discard translog more quickly. Co-authored-by: David Turner <[email protected]> Relates #45136
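The policy change described above reduces to a simple conditional: when soft-deletes are enabled, the configured translog retention no longer applies. A hedged sketch with hypothetical names:

```python
# Hypothetical sketch: ignore the configured translog retention when
# soft-deletes are enabled, so the translog can be trimmed eagerly.
def effective_translog_retention(soft_deletes_enabled, configured_retention_bytes):
    if soft_deletes_enabled:
        return 0  # peer recovery uses soft-deletes; no extra translog kept
    return configured_retention_bytes

print(effective_translog_retention(True, 512 * 1024 * 1024))   # 0
print(effective_translog_retention(False, 512 * 1024 * 1024))  # 536870912
```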