Enable build for Databricks 13.3 [databricks] #9677

Merged 34 commits on Nov 23, 2023

Changes from 17 commits

Commits (34)
181066e
pom changes
razajafri Oct 12, 2023
664af61
pom changes
razajafri Oct 21, 2023
02dadb3
pom changes
razajafri Oct 25, 2023
4c34d3e
add databricks13.3 to premerge
razajafri Nov 6, 2023
9482ba0
Added ToPrettyString support
razajafri Nov 9, 2023
0a7fa52
xfail approximate percentile test
razajafri Nov 12, 2023
06a5770
xfail failing udf tests
razajafri Nov 12, 2023
00b498e
xfail failing tests due to WriteIntoDeltaCommand
razajafri Nov 12, 2023
ea2fd40
xfail test_delta_atomic_create_table_as_select and test_delta_atomic_…
razajafri Nov 12, 2023
f693799
Added 341db to shim-deps and removed from datagen/pom.xml
razajafri Nov 12, 2023
1eb2904
updated udf-compiler pom.xml
razajafri Nov 12, 2023
cedb635
updated sql-plugin pom.xml
razajafri Nov 12, 2023
4b6fd48
fixed multiple pom.xml
razajafri Nov 12, 2023
7a20826
updated udf-compiler pom.xml
razajafri Nov 12, 2023
80d5c47
removed TODO
razajafri Nov 12, 2023
d2365db
Signoff
razajafri Nov 12, 2023
e5acc9b
updated scala 2.13 poms
razajafri Nov 12, 2023
e2eea68
Revert "xfail failing tests due to WriteIntoDeltaCommand"
razajafri Nov 13, 2023
799ce62
Revert "xfail test_delta_atomic_create_table_as_select and test_delta…
razajafri Nov 13, 2023
f58616f
remove tests/pom.xml changes
razajafri Nov 13, 2023
df465d0
reverted 2.13 generation of tests/pom.xml
razajafri Nov 13, 2023
f65d19e
removed 341db profile from tests as we don't run unit tests on databr…
razajafri Nov 13, 2023
04b6c32
fixed the xfail reason to point to the correct issue
razajafri Nov 14, 2023
49ee94f
removed diff.patch
razajafri Nov 14, 2023
2af52e1
Revert "xfail approximate percentile test"
razajafri Nov 14, 2023
863d586
Merge branch 'branch-23.12' into final-pr
jlowe Nov 15, 2023
8175e96
build fixes
jlowe Nov 15, 2023
fbf3150
Fix spark321db build
jlowe Nov 15, 2023
cb01bb8
Skip UDF tests until UDF handling is updated
jlowe Nov 16, 2023
f43a14f
Remove xfail/skips eclipsed by module-level skip
jlowe Nov 16, 2023
8a0398b
Merge branch 'branch-23.12' into final-pr
jlowe Nov 17, 2023
e0d96e8
xfail fastparquet tests due to nulls being introduced by pandas
jlowe Nov 17, 2023
20d3e51
Fix incorrect shimplify directives for 341db
jlowe Nov 20, 2023
9cc0fbc
Fix fallback test
jlowe Nov 21, 2023
17 changes: 17 additions & 0 deletions aggregator/pom.xml
@@ -543,6 +543,23 @@
</dependency>
</dependencies>
</profile>
<profile>
<id>release341db</id>
<activation>
<property>
<name>buildver</name>
<value>341db</value>
</property>
</activation>
<dependencies>
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-delta-spark341db_${scala.binary.version}</artifactId>
<version>${project.version}</version>
<classifier>${spark.version.classifier}</classifier>
</dependency>
</dependencies>
</profile>
<profile>
<id>release333</id>
<activation>
3 changes: 2 additions & 1 deletion integration_tests/src/main/python/delta_lake_update_test.py
@@ -19,7 +19,7 @@
from delta_lake_utils import *
from marks import *
from spark_session import is_before_spark_320, is_databricks_runtime, \
supports_delta_lake_deletion_vectors, with_cpu_session, with_gpu_session
supports_delta_lake_deletion_vectors, with_cpu_session, with_gpu_session, is_spark_340_or_later

delta_update_enabled_conf = copy_and_update(delta_writes_enabled_conf,
{"spark.rapids.sql.command.UpdateCommand": "true",
@@ -71,6 +71,7 @@ def checker(data_path, do_update):
delta_writes_enabled_conf # Test disabled by default
], ids=idfn)
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_update_disabled_fallback(spark_tmp_path, disable_conf):
data_path = spark_tmp_path + "/DELTA_DATA"
def setup_tables(spark):
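The same pattern recurs throughout the test changes in this PR: an xfail or skipif marker whose condition is `is_spark_340_or_later() and is_databricks_runtime()`, so the marker only takes effect on the new Databricks 13.3 (Spark 3.4.1) runtime while every other build keeps enforcing the original assertion. Below is a minimal, self-contained sketch of that pattern; the helper bodies are assumptions for illustration only, and the real helpers live in `integration_tests/src/main/python/spark_session.py`.

```python
# Minimal standalone sketch of the conditional-xfail pattern added in this PR.
# The helper bodies below are illustrative assumptions, not the plugin's code.
import os
import pytest


def is_databricks_runtime():
    # Assumption: Databricks clusters expose DATABRICKS_RUNTIME_VERSION.
    return "DATABRICKS_RUNTIME_VERSION" in os.environ


def is_spark_340_or_later():
    # Assumption for the sketch: read the Spark version from an env var
    # instead of an active SparkSession.
    version = os.environ.get("SPARK_VERSION", "3.4.1")
    return tuple(int(x) for x in version.split(".")[:2]) >= (3, 4)


# The condition is evaluated at collection time, so the test is only marked
# as an expected failure on Databricks with Spark 3.4+; every other build
# still runs it normally and enforces the CPU fallback check.
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(),
                   reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_update_fallback_sketch():
    assert True  # placeholder; the real tests call assert_gpu_fallback_write
```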
12 changes: 11 additions & 1 deletion integration_tests/src/main/python/delta_lake_write_test.py
@@ -23,7 +23,7 @@
from marks import *
from parquet_write_test import parquet_part_write_gens, parquet_write_gens_list, writer_confs
from pyspark.sql.types import *
from spark_session import is_before_spark_320, is_before_spark_330, is_spark_340_or_later, with_cpu_session
from spark_session import is_before_spark_320, is_before_spark_330, is_spark_340_or_later, with_cpu_session, is_databricks_runtime

delta_write_gens = [x for sublist in parquet_write_gens_list for x in sublist]

@@ -65,6 +65,7 @@ def do_sql(spark, q): spark.sql(q)
{"spark.rapids.sql.format.parquet.enabled": "false"},
{"spark.rapids.sql.format.parquet.write.enabled": "false"}], ids=idfn)
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_write_disabled_fallback(spark_tmp_path, disable_conf):
data_path = spark_tmp_path + "/DELTA_DATA"
assert_gpu_fallback_write(
@@ -178,13 +179,15 @@ def do_write(spark, path):
@delta_lake
@ignore_order(local=True)
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9676")
def test_delta_atomic_create_table_as_select(spark_tmp_table_factory, spark_tmp_path):
_atomic_write_table_as_select(delta_write_gens, spark_tmp_table_factory, spark_tmp_path, overwrite=False)

@allow_non_gpu(*delta_meta_allow)
@delta_lake
@ignore_order(local=True)
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9676")
def test_delta_atomic_replace_table_as_select(spark_tmp_table_factory, spark_tmp_path):
_atomic_write_table_as_select(delta_write_gens, spark_tmp_table_factory, spark_tmp_path, overwrite=True)

@@ -403,6 +406,7 @@ def setup_tables(spark):
@ignore_order
@pytest.mark.parametrize("ts_write", ["INT96", "TIMESTAMP_MICROS", "TIMESTAMP_MILLIS"], ids=idfn)
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_write_legacy_timestamp_fallback(spark_tmp_path, ts_write):
gen = TimestampGen(start=datetime(1590, 1, 1, tzinfo=timezone.utc))
data_path = spark_tmp_path + "/DELTA_DATA"
@@ -425,6 +429,7 @@ def test_delta_write_legacy_timestamp_fallback(spark_tmp_path, ts_write):
{"parquet.encryption.column.keys": "k2:a"},
{"parquet.encryption.footer.key": "k1", "parquet.encryption.column.keys": "k2:a"}])
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_write_encryption_option_fallback(spark_tmp_path, write_options):
def write_func(spark, path):
writer = unary_op_df(spark, int_gen).coalesce(1).write.format("delta")
@@ -446,6 +451,7 @@ def write_func(spark, path):
{"parquet.encryption.column.keys": "k2:a"},
{"parquet.encryption.footer.key": "k1", "parquet.encryption.column.keys": "k2:a"}])
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_write_encryption_runtimeconfig_fallback(spark_tmp_path, write_options):
data_path = spark_tmp_path + "/DELTA_DATA"
assert_gpu_fallback_write(
@@ -462,6 +468,7 @@ def test_delta_write_encryption_runtimeconfig_fallback(spark_tmp_path, write_opt
{"parquet.encryption.column.keys": "k2:a"},
{"parquet.encryption.footer.key": "k1", "parquet.encryption.column.keys": "k2:a"}])
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_write_encryption_hadoopconfig_fallback(spark_tmp_path, write_options):
data_path = spark_tmp_path + "/DELTA_DATA"
def setup_hadoop_confs(spark):
@@ -486,6 +493,7 @@ def reset_hadoop_confs(spark):
@ignore_order
@pytest.mark.parametrize('codec', ['gzip'])
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_write_compression_fallback(spark_tmp_path, codec):
data_path = spark_tmp_path + "/DELTA_DATA"
confs=copy_and_update(delta_writes_enabled_conf, {"spark.sql.parquet.compression.codec": codec})
@@ -500,6 +508,7 @@ def test_delta_write_compression_fallback(spark_tmp_path, codec):
@delta_lake
@ignore_order
@pytest.mark.skipif(is_before_spark_320(), reason="Delta Lake writes are not supported before Spark 3.2.x")
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_write_legacy_format_fallback(spark_tmp_path):
data_path = spark_tmp_path + "/DELTA_DATA"
confs=copy_and_update(delta_writes_enabled_conf, {"spark.sql.parquet.writeLegacyFormat": "true"})
@@ -880,6 +889,7 @@ def test_delta_write_optimized_supported_types_partitioned(spark_tmp_path):
simple_string_to_string_map_gen,
StructGen([("x", ArrayGen(int_gen))]),
ArrayGen(StructGen([("x", long_gen)]))], ids=idfn)
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9675")
def test_delta_write_optimized_unsupported_sort_fallback(spark_tmp_path, gen):
data_path = spark_tmp_path + "/DELTA_DATA"
confs=copy_and_update(delta_writes_enabled_conf, {
3 changes: 2 additions & 1 deletion integration_tests/src/main/python/hash_aggregate_test.py
@@ -25,7 +25,7 @@
from pyspark.sql.types import *
from marks import *
import pyspark.sql.functions as f
from spark_session import is_databricks104_or_later, with_cpu_session, is_before_spark_330
from spark_session import is_databricks104_or_later, with_cpu_session, is_before_spark_330, is_databricks_runtime, is_spark_340_or_later

pytestmark = pytest.mark.nightly_resource_consuming_test

@@ -1652,6 +1652,7 @@ def test_hash_groupby_approx_percentile_double_single(aqe_enabled):
@ignore_order(local=True)
@allow_non_gpu('TakeOrderedAndProjectExec', 'Alias', 'Cast', 'ObjectHashAggregateExec', 'AggregateExpression',
'ApproximatePercentile', 'Literal', 'ShuffleExchangeExec', 'HashPartitioning', 'CollectLimitExec')
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9493")
def test_hash_groupby_approx_percentile_partial_fallback_to_cpu(aqe_enabled):
conf = {
'spark.rapids.sql.hashAgg.replaceMode': 'partial',
7 changes: 6 additions & 1 deletion integration_tests/src/main/python/udf_test.py
@@ -15,7 +15,7 @@
import pytest

from conftest import is_at_least_precommit_run
from spark_session import is_databricks_runtime, is_before_spark_330, is_before_spark_350, is_spark_350_or_later
from spark_session import is_databricks_runtime, is_before_spark_330, is_before_spark_350, is_spark_340_or_later

from pyspark.sql.pandas.utils import require_minimum_pyarrow_version, require_minimum_pandas_version

@@ -123,6 +123,7 @@ def group_size_udf(to_process: pd.Series) -> float:

@ignore_order
@pytest.mark.parametrize('data_gen', integral_gens, ids=idfn)
@pytest.mark.xfail(condition=is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9493")
def test_group_aggregate_udf(data_gen):
@f.pandas_udf('long')
def pandas_sum(to_process: pd.Series) -> int:
@@ -140,6 +141,7 @@ def pandas_sum(to_process: pd.Series) -> int:

@ignore_order(local=True)
@pytest.mark.parametrize('data_gen', arrow_common_gen, ids=idfn)
@pytest.mark.skipif(is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9493")
def test_group_aggregate_udf_more_types(data_gen):
@f.pandas_udf('long')
def group_size_udf(to_process: pd.Series) -> int:
@@ -181,6 +183,7 @@ def group_size_udf(to_process: pd.Series) -> int:
@ignore_order
@pytest.mark.parametrize('data_gen', integral_gens, ids=idfn)
@pytest.mark.parametrize('window', udf_windows, ids=window_ids)
@pytest.mark.skipif(is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9493")
def test_window_aggregate_udf(data_gen, window):

@f.pandas_udf('long')
@@ -199,6 +202,7 @@ def pandas_sum(to_process: pd.Series) -> int:
@ignore_order
@pytest.mark.parametrize('data_gen', [byte_gen, short_gen, int_gen], ids=idfn)
@pytest.mark.parametrize('window', udf_windows, ids=window_ids)
@pytest.mark.skipif(is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9493")
def test_window_aggregate_udf_array_from_python(data_gen, window):

@f.pandas_udf(returnType=ArrayType(LongType()))
@@ -326,6 +330,7 @@ def create_df(spark, data_gen, left_length, right_length):

@ignore_order
@pytest.mark.parametrize('data_gen', [ShortGen(nullable=False)], ids=idfn)
@pytest.mark.skipif(is_spark_340_or_later() and is_databricks_runtime(), reason="https://github.com/NVIDIA/spark-rapids/issues/9493")
def test_cogroup_apply_udf(data_gen):
def asof_join(l, r):
return pd.merge_asof(l, r, on='a', by='b')
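Per the later commits cb01bb8 ("Skip UDF tests until UDF handling is updated") and f43a14f ("Remove xfail/skips eclipsed by module-level skip"), most of the per-test UDF markers shown above were superseded by a single module-level skip. That marker is not part of this 17-commit view; the following is only a hedged sketch of how a module-level skip is typically expressed in pytest, assuming the same `spark_session` helpers.

```python
# Hedged sketch of a module-level skip; the actual marker added to udf_test.py
# in later commits of this PR is not shown in this 17-commit diff view.
import pytest

# Assumed to be importable exactly as in the test module's own imports.
from spark_session import is_databricks_runtime, is_spark_340_or_later

# Assigning to `pytestmark` applies the marker to every test in the module,
# which is what makes the individual xfail/skipif decorators above redundant.
pytestmark = pytest.mark.skipif(
    is_spark_340_or_later() and is_databricks_runtime(),
    reason="UDF handling not yet updated for Databricks 13.3 "
           "(https://github.com/NVIDIA/spark-rapids/issues/9493)")
```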
2 changes: 1 addition & 1 deletion jenkins/Jenkinsfile-blossom.premerge-databricks
@@ -88,7 +88,7 @@ pipeline {
// 'name' and 'value' only supprt literal string in the declarative Jenkins
// Refer to Jenkins issue https://issues.jenkins.io/browse/JENKINS-62127
name 'DB_RUNTIME'
values '10.4', '11.3', '12.2'
values '10.4', '11.3', '12.2', '13.3'
}
}
stages {
29 changes: 28 additions & 1 deletion pom.xml
@@ -509,6 +509,31 @@
<module>delta-lake/delta-spark332db</module>
</modules>
</profile>
<profile>
<!-- Note Databricks requires 2 properties -Ddatabricks and -Dbuildver=341db -->
<id>release341db</id>
<activation>
<property>
<name>buildver</name>
<value>341db</value>
</property>
</activation>
<properties>
<!-- Downgrade scala plugin version due to: https://github.com/sbt/sbt/issues/4305 -->
<scala.plugin.version>3.4.4</scala.plugin.version>
<spark.version.classifier>spark341db</spark.version.classifier>
<spark.version>${spark341db.version}</spark.version>
<spark.test.version>${spark341db.version}</spark.test.version>
<hadoop.client.version>3.3.1</hadoop.client.version>
<rat.consoleOutput>true</rat.consoleOutput>
<parquet.hadoop.version>1.12.0</parquet.hadoop.version>
<iceberg.version>${spark330.iceberg.version}</iceberg.version>
</properties>
<modules>
<module>shim-deps/databricks</module>
<module>delta-lake/delta-spark341db</module>
</modules>
</profile>
<profile>
<id>release350</id>
<activation>
@@ -691,6 +716,7 @@
<spark332cdh.version>3.3.2.3.3.7190.0-91</spark332cdh.version>
<spark330db.version>3.3.0-databricks</spark330db.version>
<spark332db.version>3.3.2-databricks</spark332db.version>
<spark341db.version>3.4.1-databricks</spark341db.version>
<spark350.version>3.5.0</spark350.version>
<mockito.version>3.12.4</mockito.version>
<scala.plugin.version>4.3.0</scala.plugin.version>
@@ -745,7 +771,8 @@
<databricks.buildvers>
321db,
330db,
332db
332db,
341db
</databricks.buildvers>
<!--
Build and run unit tests on one specific version for each sub-version (e.g. 311, 320, 330)
17 changes: 17 additions & 0 deletions scala2.13/aggregator/pom.xml
@@ -543,6 +543,23 @@
</dependency>
</dependencies>
</profile>
<profile>
<id>release341db</id>
<activation>
<property>
<name>buildver</name>
<value>341db</value>
</property>
</activation>
<dependencies>
<dependency>
<groupId>com.nvidia</groupId>
<artifactId>rapids-4-spark-delta-spark341db_${scala.binary.version}</artifactId>
<version>${project.version}</version>
<classifier>${spark.version.classifier}</classifier>
</dependency>
</dependencies>
</profile>
<profile>
<id>release333</id>
<activation>
29 changes: 28 additions & 1 deletion scala2.13/pom.xml
@@ -509,6 +509,31 @@
<module>delta-lake/delta-spark332db</module>
</modules>
</profile>
<profile>
<!-- Note Databricks requires 2 properties -Ddatabricks and -Dbuildver=341db -->
<id>release341db</id>
<activation>
<property>
<name>buildver</name>
<value>341db</value>
</property>
</activation>
<properties>
<!-- Downgrade scala plugin version due to: https://github.com/sbt/sbt/issues/4305 -->
<scala.plugin.version>3.4.4</scala.plugin.version>
<spark.version.classifier>spark341db</spark.version.classifier>
<spark.version>${spark341db.version}</spark.version>
<spark.test.version>${spark341db.version}</spark.test.version>
<hadoop.client.version>3.3.1</hadoop.client.version>
<rat.consoleOutput>true</rat.consoleOutput>
<parquet.hadoop.version>1.12.0</parquet.hadoop.version>
<iceberg.version>${spark330.iceberg.version}</iceberg.version>
</properties>
<modules>
<module>shim-deps/databricks</module>
<module>delta-lake/delta-spark341db</module>
</modules>
</profile>
<profile>
<id>release350</id>
<activation>
@@ -691,6 +716,7 @@
<spark332cdh.version>3.3.2.3.3.7190.0-91</spark332cdh.version>
<spark330db.version>3.3.0-databricks</spark330db.version>
<spark332db.version>3.3.2-databricks</spark332db.version>
<spark341db.version>3.4.1-databricks</spark341db.version>
<spark350.version>3.5.0</spark350.version>
<mockito.version>3.12.4</mockito.version>
<scala.plugin.version>4.3.0</scala.plugin.version>
@@ -745,7 +771,8 @@
<databricks.buildvers>
321db,
330db,
332db
332db,
341db
</databricks.buildvers>
<!--
Build and run unit tests on one specific version for each sub-version (e.g. 311, 320, 330)
41 changes: 41 additions & 0 deletions scala2.13/shim-deps/pom.xml
@@ -118,6 +118,47 @@
</dependency>
</dependencies>
</profile>
<profile>
<id>release341db</id>
<activation>
<property>
<name>buildver</name>
<value>341db</value>
</property>
</activation>
<dependencies>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.parquet</groupId>
<artifactId>parquet-format-internal_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-common-utils_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql-api_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>shaded.parquet.org.apache.thrift</groupId>
<artifactId>shaded-parquet-thrift_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
</dependencies>
</profile>
<profile>
<id>dbdeps</id>
<activation>