Describe the bug
Running NDS query 72 on Spark 3.1.2 against partitioned data throws the following exception (logged as a warning while expression codegen falls back to interpreted mode) and then produces an empty result, which is incorrect.
21/12/20 21:58:53 WARN Predicate: Expr codegen error and falling back to interpreter mode
java.lang.RuntimeException: Unsupported literal type class org.apache.spark.sql.catalyst.expressions.UnsafeRow [0,2567cd,2a41,144e]
at org.apache.spark.sql.catalyst.expressions.Literal$.apply(literals.scala:90)
at org.apache.spark.sql.catalyst.expressions.InSet.$anonfun$genCodeWithSwitch$2(predicates.scala:542)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:321)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:977)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:977)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:51)
at scala.collection.SetLike.map(SetLike.scala:104)
at scala.collection.SetLike.map$(SetLike.scala:104)
at scala.collection.AbstractSet.map(Set.scala:51)
at org.apache.spark.sql.catalyst.expressions.InSet.genCodeWithSwitch(predicates.scala:542)
at org.apache.spark.sql.catalyst.expressions.InSet.doGenCode(predicates.scala:513)
at org.apache.spark.sql.execution.InSubqueryExec.doGenCode(subquery.scala:159)
at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:146)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:141)
at org.apache.spark.sql.catalyst.expressions.DynamicPruningExpression.doGenCode(DynamicPruning.scala:93)
at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:146)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:141)
at org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.$anonfun$generateExpressions$1(CodeGenerator.scala:1187)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.immutable.List.map(List.scala:298)
at org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext.generateExpressions(CodeGenerator.scala:1187)
at org.apache.spark.sql.catalyst.expressions.codegen.GeneratePredicate$.create(GeneratePredicate.scala:41)
at org.apache.spark.sql.catalyst.expressions.codegen.GeneratePredicate$.generate(GeneratePredicate.scala:33)
at org.apache.spark.sql.catalyst.expressions.Predicate$.createCodeGeneratedObject(predicates.scala:88)
at org.apache.spark.sql.catalyst.expressions.Predicate$.createCodeGeneratedObject(predicates.scala:85)
at org.apache.spark.sql.catalyst.expressions.CodeGeneratorWithInterpretedFallback.createObject(CodeGeneratorWithInterpretedFallback.scala:52)
at org.apache.spark.sql.catalyst.expressions.Predicate$.create(predicates.scala:101)
at org.apache.spark.sql.rapids.GpuFileSourceScanExec.dynamicallySelectedPartitions$lzycompute(GpuFileSourceScanExec.scala:132)
at org.apache.spark.sql.rapids.GpuFileSourceScanExec.dynamicallySelectedPartitions(GpuFileSourceScanExec.scala:120)
at org.apache.spark.sql.rapids.GpuFileSourceScanExec.inputRDD$lzycompute(GpuFileSourceScanExec.scala:316)
at org.apache.spark.sql.rapids.GpuFileSourceScanExec.inputRDD(GpuFileSourceScanExec.scala:293)
at org.apache.spark.sql.rapids.GpuFileSourceScanExec.doExecuteColumnar(GpuFileSourceScanExec.scala:386)
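From the trace, the failure happens while GpuFileSourceScanExec builds the dynamic-partition-pruning predicate: InSet.genCodeWithSwitch calls Literal.apply on each value in the pruning set, and those values arrive as UnsafeRows rather than unwrapped partition values, which Literal.apply rejects. A minimal sketch of that underlying rejection (illustration only, not the plugin's code; building the row via UnsafeProjection/InternalRow is just one assumed way such a value could appear):

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.{Literal, UnsafeProjection}
import org.apache.spark.sql.types.{DataType, IntegerType}

// Literal.apply maps plain Scala/Java values to Catalyst literals...
val ok = Literal(42) // fine: Int maps to an IntegerType literal

// ...but it has no case for UnsafeRow, so it throws the
// "Unsupported literal type class ...UnsafeRow" error seen in the warning above.
val row = UnsafeProjection.create(Array[DataType](IntegerType)).apply(InternalRow(42))
val boom = Literal(row) // java.lang.RuntimeException: Unsupported literal type ...
```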
Steps/Code to reproduce bug
Run NDS query 72 on scale factor 100 data that is partitioned
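The full benchmark should not be needed to reach the same code path. A hypothetical, much-simplified reproducer follows (the table layout, paths, and configs are assumptions, not the actual NDS setup; whether dynamic partition pruning actually kicks in depends on statistics and broadcast thresholds):

```scala
// Assumes the RAPIDS Accelerator jar is on the classpath and
// spark.sql.optimizer.dynamicPartitionPruning.enabled is left at its default (true).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("dpp-repro")
  .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
  .config("spark.rapids.sql.enabled", "true")
  .getOrCreate()
import spark.implicits._

// A partitioned fact table and a small dimension table.
(0 until 1000).map(i => (i, i % 10)).toDF("id", "part")
  .write.partitionBy("part").mode("overwrite").parquet("/tmp/fact")
Seq((3, "keep"), (7, "keep")).toDF("part", "label")
  .write.mode("overwrite").parquet("/tmp/dim")

val fact = spark.read.parquet("/tmp/fact")
val dim  = spark.read.parquet("/tmp/dim")

// The selective filter on the dimension side lets Spark prune fact partitions
// dynamically; that pruning predicate is what fails to codegen in the trace above.
fact.join(dim.filter($"label" === "keep"), "part").count()
```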
Expected behavior
The query should run without emitting a warning and should produce the same results as when the query is run on the CPU.
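One way to check CPU/GPU agreement (a sketch only; reading the query text from a local query72.sql file is an assumption about how the benchmark is driven, and `spark` is the session from the sketch above or from spark-shell):

```scala
// Run the same SQL once with the RAPIDS plugin enabled and once disabled,
// then compare the collected rows.
val query72Sql = scala.io.Source.fromFile("query72.sql").mkString

def run(gpu: Boolean) = {
  spark.conf.set("spark.rapids.sql.enabled", gpu.toString)
  spark.sql(query72Sql).collect().toSeq
}

val gpuRows = run(gpu = true)
val cpuRows = run(gpu = false)
assert(gpuRows.toSet == cpuRows.toSet,
  s"GPU produced ${gpuRows.size} rows but CPU produced ${cpuRows.size}")
```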