# Java 9 port, Oracle and Hive dialect improvements

## Summary
We ported Virtual Schema to Java 9 and improved the Oracle SQL dialect. We also removed the deprecated adapter entry point. Please use the entry point `com.exasol.adapter.RequestDispatcher` in all your `CREATE JAVA ADAPTER SCRIPT` statements.
Note: you need Exasol 6.2 or later to run Virtual Schema 2.0.0 and above.
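As a rough sketch, an adapter script definition using the new entry point could look like this. The schema name, script name, JAR file name, and BucketFS path are placeholders for your own installation; check the Virtual Schemas deployment documentation for the exact values.

```sql
-- Hypothetical example: replace the schema, script name, and JAR path
-- with the values from your own installation.
CREATE OR REPLACE JAVA ADAPTER SCRIPT ADAPTER.JDBC_ADAPTER AS
  %scriptclass com.exasol.adapter.RequestDispatcher;
  %jar /buckets/bfsdefault/jars/virtualschema-jdbc-adapter-dist-2.0.0.jar;
/
```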
## Unrecognized and Unsupported Data Types
Not all data types present in a source database have a matching equivalent in Exasol. Software updates on the source database can also introduce new data types that the Virtual Schema does not recognize. In version 2.0.0 we changed the handling of those types. There are a few important things you need to know about them:
- Columns of an unrecognized / unsupported data type are not mapped in a Virtual Schema. From Exasol's perspective those columns do not exist in the table. This is done so that tables containing such columns can still be mapped and do not have to be rejected as a whole.
- You can't query columns of an unrecognized / unsupported data type. If the source table contains them, you have to exclude them from the query explicitly. For example, you cannot use the asterisk (`*`) on a table that contains one or more of those columns. This results in an error issued by the Virtual Schema.
- You can't use functions that result in an unsupported / unknown data type.
## Example

Source table:

```sql
T1(C1 VARCHAR, C2 BLOB, C3 DATE);
```

Mapped table in Exasol:

```sql
T2(C1 VARCHAR, C3 DATE);
```
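Using the example mapping above, a query on the virtual table has to name the supported columns explicitly; a minimal sketch:

```sql
-- Works: only mapped columns are referenced.
SELECT C1, C3 FROM T2;

-- Fails with a Virtual Schema error: the source table contains the
-- unsupported BLOB column C2, so the asterisk is rejected.
SELECT * FROM T2;
```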
## Caveats
Note that there is a special pitfall when you explicitly select all supported columns of a table that also has columns of unsupported data types. The optimizer in the core Exasol database recognizes that all columns it is aware of are selected. It does not know about the unsupported ones and optimizes the column list to a `SELECT *`. The resulting push-down query then selects all source columns, including the unsupported ones, so the result set contains unsupported columns.

To circumvent that problem, modify the select list in a way that does not allow this optimization, for example by adding a pseudo-column with a constant value.
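With the example tables above, a sketch of that workaround could look like this (the pseudo-column name is arbitrary; it exists only to defeat the optimization):

```sql
-- Selecting C1, C3 alone could be rewritten to SELECT *, pulling in
-- the unsupported source column C2. A constant pseudo-column
-- prevents that rewrite:
SELECT C1, C3, 1 AS DUMMY FROM T2;
```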
## Changes in Hive dialect
Added the `HIVE_CAST_NUMBER_TO_DECIMAL_WITH_PRECISION_AND_SCALE` property to the Hive dialect. Use it when you want decimal numbers even if the precision is bigger than Exasol's maximum precision. Please read the Hive dialect documentation for further details.
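On an existing Virtual Schema the property can be changed with `ALTER VIRTUAL SCHEMA ... SET`; a sketch, where the schema name and the precision/scale value are placeholders and the exact value format is described in the Hive dialect documentation:

```sql
-- Hypothetical example: HIVE_VS and '36,2' are placeholders.
ALTER VIRTUAL SCHEMA HIVE_VS
  SET HIVE_CAST_NUMBER_TO_DECIMAL_WITH_PRECISION_AND_SCALE = '36,2';
```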
## Changes
- #249: Improved `CLOB` handling in Oracle dialect
- #247: Ported Virtual Schemas to Java 9
- #243: Oracle timestamps are now converted to Exasol timestamps instead of `VARCHAR`
- #241: `BLOB` columns are no longer mapped in Oracle dialect
- #252: Improved handling of unsupported columns in `SELECT *`
- #176: Added `HIVE_CAST_NUMBER_TO_DECIMAL_WITH_PRECISION_AND_SCALE` property in Hive dialect