RoundRobinPartitioning fails with map type #5206

Closed
marin-ma opened this issue Mar 29, 2024 · 1 comment · Fixed by #5349
Labels: bug (Something isn't working), triage

Comments

@marin-ma
Contributor

Backend

VL (Velox)

Bug description

An exception is thrown when performing round-robin repartitioning if the input contains a map type. The cause is that, by default, Spark fails to resolve a HashExpression whose children contain a map type: https://github.com/apache/spark/blob/609bd4839e5d504917de74ed1cb9c23645fba51f/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/hash.scala#L279-L283
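A minimal repro sketch (the column names and local session setup are illustrative, and a Gluten/Velox-enabled SparkSession is assumed):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{lit, map}

// Sketch: round-robin repartitioning a DataFrame that contains a map column.
// Assumes a SparkSession with the Gluten/Velox plugin enabled.
val spark = SparkSession.builder().master("local[1]").getOrCreate()

val df = spark.range(10)
  .withColumn("m", map(lit("k"), lit(1))) // MapType column

// repartition(n) without keys uses RoundRobinPartitioning; Gluten builds a
// projection containing a HashExpression over the output columns, which fails
// to resolve because one of its children is map-typed.
df.repartition(4).collect()
```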

org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to qualifier on unresolved object
	at org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute.qualifier(unresolved.scala:229) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.catalyst.expressions.package$AttributeSeq.$anonfun$hasThreeOrLessQualifierParts$1(package.scala:181) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.catalyst.expressions.package$AttributeSeq.$anonfun$hasThreeOrLessQualifierParts$1$adapted(package.scala:181) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
	at scala.collection.LinearSeqOptimized.forall(LinearSeqOptimized.scala:85) ~[scala-library-2.12.15.jar:?]
	at scala.collection.LinearSeqOptimized.forall$(LinearSeqOptimized.scala:82) ~[scala-library-2.12.15.jar:?]
	at scala.collection.immutable.List.forall(List.scala:91) ~[scala-library-2.12.15.jar:?]
	at org.apache.spark.sql.catalyst.expressions.package$AttributeSeq.<init>(package.scala:181) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.catalyst.expressions.package$.AttributeSeq(package.scala:92) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
	at io.glutenproject.expression.ExpressionConverter$.replaceWithExpressionTransformerInternal(ExpressionConverter.scala:211) ~[classes/:?]
	at io.glutenproject.expression.ExpressionConverter$.$anonfun$replaceWithExpressionTransformer$1(ExpressionConverter.scala:56) ~[classes/:?]
	at scala.collection.immutable.List.map(List.scala:293) ~[scala-library-2.12.15.jar:?]
	at io.glutenproject.expression.ExpressionConverter$.replaceWithExpressionTransformer(ExpressionConverter.scala:55) ~[classes/:?]
	at io.glutenproject.execution.ProjectExecTransformer.getRelNode(BasicPhysicalOperatorTransformer.scala:216) ~[classes/:?]
	at io.glutenproject.execution.ProjectExecTransformer.doValidateInternal(BasicPhysicalOperatorTransformer.scala:171) ~[classes/:?]
	at io.glutenproject.extension.GlutenPlan.doValidate(GlutenPlan.scala:67) ~[classes/:?]
	at io.glutenproject.extension.GlutenPlan.doValidate$(GlutenPlan.scala:64) ~[classes/:?]
	at io.glutenproject.execution.ProjectExecTransformer.doValidate(BasicPhysicalOperatorTransformer.scala:155) ~[classes/:?]
	at io.glutenproject.backendsapi.velox.SparkPlanExecApiImpl.genColumnarShuffleExchange(SparkPlanExecApiImpl.scala:261) ~[classes/:?]
	at io.glutenproject.extension.columnar.transform.ImplementExchange.impl(ImplementSingleNode.scala:119) ~[classes/:?]
	at io.glutenproject.extension.columnar.MiscColumnarRules$TransformPreOverrides$$anonfun$$nestedInanonfun$apply$2$1.applyOrElse(MiscColumnarRules.scala:57) ~[classes/:?]
	at io.glutenproject.extension.columnar.MiscColumnarRules$TransformPreOverrides$$anonfun$$nestedInanonfun$apply$2$1.applyOrElse(MiscColumnarRules.scala:57) ~[classes/:?]
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformUpWithPruning$2(TreeNode.scala:515) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76) ~[spark-sql-api_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformUpWithPruning(TreeNode.scala:515) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:488) ~[spark-catalyst_2.12-3.5.1.jar:3.5.1]
	at io.glutenproject.extension.columnar.MiscColumnarRules$TransformPreOverrides.$anonfun$apply$2(MiscColumnarRules.scala:57) ~[classes/:?]
	at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126) ~[scala-library-2.12.15.jar:?]
	at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122) ~[scala-library-2.12.15.jar:?]
	at scala.collection.immutable.List.foldLeft(List.scala:91) ~[scala-library-2.12.15.jar:?]
	at io.glutenproject.extension.columnar.MiscColumnarRules$TransformPreOverrides.apply(MiscColumnarRules.scala:57) ~[classes/:?]
	at io.glutenproject.extension.columnar.MiscColumnarRules$TransformPreOverrides.apply(MiscColumnarRules.scala:46) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules.$anonfun$transformPlan$3(ColumnarOverrides.scala:363) ~[classes/:?]
	at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126) ~[scala-library-2.12.15.jar:?]
	at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122) ~[scala-library-2.12.15.jar:?]
	at scala.collection.immutable.List.foldLeft(List.scala:91) ~[scala-library-2.12.15.jar:?]
	at io.glutenproject.extension.ColumnarOverrideRules.$anonfun$transformPlan$1(ColumnarOverrides.scala:361) ~[classes/:?]
	at io.glutenproject.metrics.GlutenTimeMetric$.withNanoTime(GlutenTimeMetric.scala:41) ~[classes/:?]
	at io.glutenproject.metrics.GlutenTimeMetric$.withMillisTime(GlutenTimeMetric.scala:46) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules.transformPlan(ColumnarOverrides.scala:371) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules.$anonfun$withTransformRules$3(ColumnarOverrides.scala:339) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules.prepareFallback(ColumnarOverrides.scala:308) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules.$anonfun$withTransformRules$2(ColumnarOverrides.scala:338) ~[classes/:?]
	at io.glutenproject.utils.QueryPlanSelector.maybe(QueryPlanSelector.scala:74) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules.io$glutenproject$extension$ColumnarOverrideRules$$$anonfun$withTransformRules$1(ColumnarOverrides.scala:336) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules$$anonfun$withTransformRules$4.apply(ColumnarOverrides.scala:335) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules$$anonfun$withTransformRules$4.apply(ColumnarOverrides.scala:335) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules.io$glutenproject$extension$ColumnarOverrideRules$$$anonfun$postColumnarTransitions$1(ColumnarOverrides.scala:330) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules$$anonfun$postColumnarTransitions$2.apply(ColumnarOverrides.scala:326) ~[classes/:?]
	at io.glutenproject.extension.ColumnarOverrideRules$$anonfun$postColumnarTransitions$2.apply(ColumnarOverrides.scala:326) ~[classes/:?]
	at org.apache.spark.sql.execution.ApplyColumnarRulesAndInsertTransitions.$anonfun$apply$2(Columnar.scala:532) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.ApplyColumnarRulesAndInsertTransitions.$anonfun$apply$2$adapted(Columnar.scala:532) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) ~[scala-library-2.12.15.jar:?]
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) ~[scala-library-2.12.15.jar:?]
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) ~[scala-library-2.12.15.jar:?]
	at org.apache.spark.sql.execution.ApplyColumnarRulesAndInsertTransitions.apply(Columnar.scala:532) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.ApplyColumnarRulesAndInsertTransitions.apply(Columnar.scala:482) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec$.$anonfun$applyPhysicalRules$2(AdaptiveSparkPlanExec.scala:845) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126) ~[scala-library-2.12.15.jar:?]
	at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122) ~[scala-library-2.12.15.jar:?]
	at scala.collection.immutable.List.foldLeft(List.scala:91) ~[scala-library-2.12.15.jar:?]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec$.applyPhysicalRules(AdaptiveSparkPlanExec.scala:844) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.newQueryStage(AdaptiveSparkPlanExec.scala:592) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:538) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$createQueryStages$2(AdaptiveSparkPlanExec.scala:577) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286) ~[scala-library-2.12.15.jar:?]
	at scala.collection.Iterator.foreach(Iterator.scala:943) ~[scala-library-2.12.15.jar:?]
	at scala.collection.Iterator.foreach$(Iterator.scala:943) ~[scala-library-2.12.15.jar:?]
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) ~[scala-library-2.12.15.jar:?]
	at scala.collection.IterableLike.foreach(IterableLike.scala:74) ~[scala-library-2.12.15.jar:?]
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73) ~[scala-library-2.12.15.jar:?]
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56) ~[scala-library-2.12.15.jar:?]
	at scala.collection.TraversableLike.map(TraversableLike.scala:286) ~[scala-library-2.12.15.jar:?]
	at scala.collection.TraversableLike.map$(TraversableLike.scala:279) ~[scala-library-2.12.15.jar:?]
	at scala.collection.AbstractTraversable.map(Traversable.scala:108) ~[scala-library-2.12.15.jar:?]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.createQueryStages(AdaptiveSparkPlanExec.scala:577) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.$anonfun$getFinalPhysicalPlan$1(AdaptiveSparkPlanExec.scala:277) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900) ~[spark-sql_2.12-3.5.1.jar:?]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.getFinalPhysicalPlan(AdaptiveSparkPlanExec.scala:272) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.withFinalPlanUpdate(AdaptiveSparkPlanExec.scala:417) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec.doExecute(AdaptiveSparkPlanExec.scala:402) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:195) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:246) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) ~[spark-core_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:243) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:191) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:207) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:206) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.Dataset.rdd$lzycompute(Dataset.scala:3849) ~[spark-sql_2.12-3.5.1.jar:?]
	at org.apache.spark.sql.Dataset.rdd(Dataset.scala:3847) ~[spark-sql_2.12-3.5.1.jar:?]
	at org.apache.spark.sql.GlutenQueryTest$.$anonfun$getErrorMessageInCheckAnswer$1(GlutenQueryTest.scala:303) ~[test-classes/:?]
	at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23) ~[scala-library-2.12.15.jar:?]
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withExecutionId$1(SQLExecution.scala:177) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:175) ~[spark-sql_2.12-3.5.1.jar:3.5.1]
	at org.apache.spark.sql.GlutenQueryTest$.getErrorMessageInCheckAnswer(GlutenQueryTest.scala:303) ~[test-classes/:?]
	at org.apache.spark.sql.GlutenQueryTest$.checkAnswer(GlutenQueryTest.scala:278) ~[test-classes/:?]
	at org.apache.spark.sql.GlutenQueryTest.checkAnswer(GlutenQueryTest.scala:175) ~[test-classes/:?]
	at io.glutenproject.execution.WholeStageTransformerSuite.compareResultsAgainstVanillaSpark(WholeStageTransformerSuite.scala:276) ~[test-classes/:?]
	at io.glutenproject.execution.WholeStageTransformerSuite.runQueryAndCompare(WholeStageTransformerSuite.scala:295) ~[test-classes/:?]
...

If SQLConf.LEGACY_ALLOW_HASH_ON_MAPTYPE is set to true, the query instead fails on the native side:

16:20:17.385 [Executor task launch worker for task 0.0 in stage 5.0 (TID 6)] ERROR org.apache.spark.util.TaskResources - Task 6 failed by error: 
io.glutenproject.exception.GlutenException: java.lang.RuntimeException: Exception: VeloxUserError
Error Source: USER
Error Code: INVALID_ARGUMENT
Reason: Unsupported type for hash: ROW<f_8:ARRAY<DOUBLE>,f_9:ROW<f_12:TINYINT,f_13:DOUBLE,f_14:DATE,f_15:BIGINT,f_16:REAL>,f_10:ROW<f_17:BIGINT>,f_11:MAP<VARCHAR,DECIMAL(30, 10)>>
Retriable: False
Function: checkArgTypes
File: /home/sparkuser/gluten/ep/build-velox/build/velox_ep/velox/functions/sparksql/Hash.cpp
Line: 391
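For reference, SQLConf.LEGACY_ALLOW_HASH_ON_MAPTYPE corresponds to the SQL conf key `spark.sql.legacy.allowHashOnMapType`. A sketch of enabling it on an existing session:

```scala
// Enabling the legacy flag lets HashExpression accept map-typed children on
// the JVM side; the hash is then offloaded to Velox, which rejects map types
// in checkArgTypes (the VeloxUserError above).
spark.conf.set("spark.sql.legacy.allowHashOnMapType", "true")
```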

Spark version

None

Spark configurations

No response

System information

No response

Relevant logs

No response

@marin-ma marin-ma added bug Something isn't working triage labels Mar 29, 2024
@marin-ma
Contributor Author

Related to #4872
cc: @zjuwangg
