Add Trino 449 release notes #22195

# Release 449 (31 May 2024)

## General

* Add an event listener which exposes collected events to an HTTP endpoint. ({issue}`22158`)
* Fix rare query failure or incorrect results for array types when the data is
  dictionary encoded. ({issue}`21911`)
* Fix JMX metrics not exporting for resource groups. ({issue}`21343`)

## BigQuery connector

* Improve performance when listing schemas while the
  `bigquery.case-insensitive-name-matching` configuration property is
  enabled. ({issue}`22033`)
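
A minimal sketch of where this setting lives, assuming a hypothetical catalog
file name and project ID:

```properties
# etc/catalog/example.properties (hypothetical catalog name)
connector.name=bigquery
bigquery.project-id=my-project
bigquery.case-insensitive-name-matching=true
```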

## ClickHouse connector

* Add support for pushing down execution of the `count(distinct)`, `corr`,
  `covar_samp`, and `covar_pop` functions to the underlying database, as shown
  in the sketch after this list. ({issue}`7100`)
* Improve performance when pushing down equality predicates on textual
  types. ({issue}`7100`)
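
For illustration, a query such as the following can now have its aggregations
executed by ClickHouse instead of Trino. The `clickhouse` catalog and the
`default.metrics` table with its columns are hypothetical:

```sql
-- count(DISTINCT ...), corr, covar_samp, and covar_pop run inside
-- ClickHouse; only the aggregated results are returned to Trino.
SELECT
    count(DISTINCT user_id) AS users,
    corr(latency_ms, payload_bytes) AS correlation,
    covar_samp(latency_ms, payload_bytes) AS sample_covariance,
    covar_pop(latency_ms, payload_bytes) AS population_covariance
FROM clickhouse.default.metrics;
```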

## Delta Lake connector

* Add support for [the `$partitions` system table](delta-lake-partitions-table),
  queried as in the first sketch after this list. ({issue}`18590`)
* Add support for reading from and writing to tables with
  [VACUUM Protocol Check](https://github.com/delta-io/delta/blob/master/PROTOCOL.md#vacuum-protocol-check).
  ({issue}`21398`)
* Add support for configuring the query retry policy on the S3 filesystem with
  the `s3.retry-mode` and `s3.max-error-retries` configuration properties, as
  in the second sketch after this list.
* Automatically use `varchar` in struct types as a type during table creation
  when `char` is specified. ({issue}`21511`)
* Improve performance of writing to Parquet files. ({issue}`22089`)
* Fix query failure when the `hive.metastore.glue.catalogid` configuration
  property is set. ({issue}`22048`)
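
First sketch: querying the new `$partitions` metadata table. The quoted
`"table$partitions"` form follows the convention of Trino's other metadata
tables; the catalog, schema, and table names are hypothetical:

```sql
-- Partition-level metadata of a hypothetical partitioned Delta table.
SELECT * FROM delta.example_schema."orders$partitions";
```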
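
Second sketch: the new retry properties in a catalog file that uses the native
S3 filesystem. All values shown are assumptions for illustration, not
recommendations:

```properties
# Hypothetical Delta Lake catalog with the native S3 filesystem.
connector.name=delta_lake
fs.native-s3.enabled=true
# Assumed retry settings; the modes mirror the AWS SDK retry modes.
s3.retry-mode=ADAPTIVE
s3.max-error-retries=10
```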

## Hive connector

* Add support for specifying a catalog name in the Thrift metastore with the
  `hive.metastore.thrift.catalog-name` configuration property, shown in the
  sketch after this list. ({issue}`10287`)
* Add support for configuring the query retry policy on the S3 filesystem with
  the `s3.retry-mode` and `s3.max-error-retries` configuration properties.
* Improve performance of writing to Parquet files. ({issue}`22089`)

  > Review comment: All Delta data files are Parquet, so maybe we should just
  > say "Improve writing performance".

* Fix failure when filesystem caching is enabled on Trino clusters with a
  single node. ({issue}`21987`)
* Fix failure when listing Hive tables with unsupported syntax. ({issue}`21981`)

  > Review comment: Not sure whether "syntax" here is right. Maybe "serde" or
  > "storage format"?

* Fix query failure when the `hive.metastore.glue.catalogid` configuration
  property is set. ({issue}`22048`)
* Fix failure when running the `flush_metadata_cache` table procedure with the
  Glue v2 metastore. ({issue}`22075`)
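
A sketch of the new Thrift metastore property in a Hive catalog file; the
metastore URI and the catalog name are hypothetical:

```properties
connector.name=hive
hive.metastore.uri=thrift://metastore.example.com:9083
# New in 449: select a named catalog inside the Thrift metastore.
hive.metastore.thrift.catalog-name=sales
```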

## Hudi connector

* Add support for configuring the query retry policy on the S3 filesystem with
  the `s3.retry-mode` and `s3.max-error-retries` configuration properties.
* Improve performance of writing to Parquet files. ({issue}`22089`)

## Iceberg connector

* Add support for views when using the Iceberg REST catalog, configured as in
  the first sketch after this list. ({issue}`19818`)
* Add support for configuring the query retry policy on the S3 filesystem with
  the `s3.retry-mode` and `s3.max-error-retries` configuration properties.
* Automatically use `varchar` in struct types as a type during table creation
  when `char` is specified. ({issue}`21511`)
* Automatically use microsecond precision for temporal types in struct types
  during table creation; both coercions appear in the second sketch after this
  list. ({issue}`21511`)
* Improve performance and memory usage when
  [equality delete](https://iceberg.apache.org/spec/#equality-delete-files)
  files are used. ({issue}`18396`)
* Improve performance of writing to Parquet files. ({issue}`22089`)
* Fix failure when writing to tables with Iceberg `VARBINARY`
  values. ({issue}`22072`)

  > Review comment: I believe this fixed writes to tables partitioned on a
  > `varbinary` column, not just tables with a `varbinary` column.
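
First sketch: an Iceberg catalog backed by a REST catalog, which can now serve
views as well as tables. The endpoint URI is hypothetical:

```properties
connector.name=iceberg
iceberg.catalog.type=rest
iceberg.rest-catalog.uri=https://catalog.example.com:8181
```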
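
Second sketch: a table creation statement exercising both coercions on a
hypothetical table. If this behaves as the notes describe, the `char(10)`
field in the row type is created as `varchar`, and the nanosecond timestamp is
created with microsecond precision:

```sql
CREATE TABLE iceberg.example_schema.events (
    id bigint,
    -- char(10) is stored as varchar; timestamp(9) as timestamp(6).
    payload row(code char(10), created timestamp(9))
);
```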

## Pinot connector

* {{breaking}} Remove support for non-gRPC clients and the `pinot.grpc.enabled`
  and `pinot.estimated-size-in-bytes-for-non-numeric-column` configuration
  properties. ({issue}`22213`)
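
For clusters upgrading to 449, the removed properties have to be deleted from
any Pinot catalog file. A sketch, with hypothetical values, of what stays and
what must go:

```properties
connector.name=pinot
pinot.controller-urls=pinot-controller.example.com:9000
# Remove before upgrading; these are no longer recognized:
# pinot.grpc.enabled=true
# pinot.estimated-size-in-bytes-for-non-numeric-column=100000
```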

## Snowflake connector

* Fix incorrect type mapping for numeric values. ({issue}`20977`)

  > Review comment: Should this also be `{{breaking}}`? cc @ebyhr
  >
  > Reply: I don't think so from what I can tell following along on the PR,
  > but @ebyhr should confirm.
  >
  > Reply: We wanted the initial Snowflake PR to have as little type mapping
  > as possible, because fixing it later is a breaking change (results are
  > different). Or was it not a correctness fix?

> Review comment: How rare is "rare"? What does it mean from the user's
> perspective? Asking for guidance when formulating future proposed release
> notes entries.