Certain connectors (such as the JDBC connectors) map a single remote catalog to a single Trino catalog. When the remote data source has multiple catalogs, the user has to manually set up a separate Trino catalog for each of them. From the Trino engine's point of view, each of these catalogs is separate and independent, as if they came from different connectors. As a result, we lose optimization opportunities such as join pushdown across multiple Trino catalogs that are backed by the same connector and the same remote data source.
If a connector could provision multiple catalogs, the Trino engine would be able to recognize this situation and could potentially push joins that span those catalogs down to the shared connector.
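As an illustration of the missed pushdown, consider a minimal sketch (the catalog, schema, and table names below are hypothetical, and the catalog files stand in for a typical SQL Server connector setup):

```sql
-- Assume two Trino catalogs configured against the same SQL Server instance,
-- one per remote database, e.g.:
--   etc/catalog/sales.properties:
--     connector.name=sqlserver
--     connection-url=jdbc:sqlserver://db.example.com;databaseName=sales
--   etc/catalog/finance.properties:
--     connector.name=sqlserver
--     connection-url=jdbc:sqlserver://db.example.com;databaseName=finance

-- Today the engine treats sales and finance as unrelated catalogs, so this
-- join is executed in Trino, even though both sides live on the same server
-- and the remote database could join them itself:
SELECT o.id, i.amount
FROM sales.dbo.orders o
JOIN finance.dbo.invoices i ON o.id = i.order_id;
```

With connector-provisioned catalogs, the engine could see that both catalogs share one connector instance and hand the whole join to it.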
Another benefit is that a connector could create a Trino catalog for each database (or schema) in the remote system. When a new one is added remotely, users would not have to provision a new catalog and restart Trino, and connectors would not have to implement four-part identifiers to expose the full remote hierarchy (e.g. Redshift, BigQuery).
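A small sketch of what that could look like from the user's side (catalog names are hypothetical, assuming a single Redshift connector instance that provisions one Trino catalog per remote database):

```sql
-- A remote database added later shows up automatically, with no new
-- catalog file and no restart:
SHOW CATALOGS;  -- redshift_db1, redshift_db2, ...

-- The standard three-part name still works; no fourth identifier part
-- is needed to address the remote database:
SELECT * FROM redshift_db2.public.events;
```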
CC: @wendigo @martint @findepi @hashhar