Last night's CI run had many failures in hash_aggregate_test, each with an error similar to this:
[2024-01-24T14:38:42.377Z] FAILED ../../src/main/python/hash_aggregate_test.py::test_hash_multiple_filters[{'spark.rapids.sql.variableFloatAgg.enabled': 'true', 'spark.rapids.sql.castStringToFloat.enabled': 'true'}-[('a', RepeatSeq(Short)), ('b', Decimal(12,2)), ('c', Decimal(12,2))]][DATAGEN_SEED=1706101679, IGNORE_ORDER, INCOMPAT, APPROXIMATE_FLOAT, ALLOW_NON_GPU(HashAggregateExec,AggregateExpression,UnscaledValue,MakeDecimal,AttributeReference,Alias,Sum,Count,Max,Min,Average,Cast,StddevPop,StddevSamp,VariancePop,VarianceSamp,NormalizeNaNAndZero,GreaterThan,Literal,If,EqualTo,First,SortAggregateExec,Coalesce,IsNull,EqualNullSafe,PivotFirst,GetArrayItem,ShuffleExchangeExec,HashPartitioning)] - py4j.protocol.Py4JJavaError: An error occurred while calling o410972.collectToPython.
[2024-01-24T14:38:42.377Z] : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8516.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8516.0 (TID 16243) (100.103.204.113 executor 0): java.lang.AssertionError: Type conversion is not allowed from Table{columns=[
  ColumnVector{rows=48, type=INT16, nullCount=Optional.empty, offHeap=(ID: 4641795 7f3abd2103f0)},
  ColumnVector{rows=48, type=INT32, nullCount=Optional.empty, offHeap=(ID: 4641796 7f3ab605d8e0)},
  ColumnVector{rows=48, type=INT32, nullCount=Optional.empty, offHeap=(ID: 4641797 7f3ab78ee6a0)},
  ColumnVector{rows=48, type=UINT64, nullCount=Optional.empty, offHeap=(ID: 4641798 7f3b2d67a520)},
  ColumnVector{rows=48, type=UINT64, nullCount=Optional.empty, offHeap=(ID: 4641799 7f3ab6063460)},
  ColumnVector{rows=48, type=UINT64, nullCount=Optional.empty, offHeap=(ID: 4641800 7f3b309626b0)},
  ColumnVector{rows=48, type=INT64, nullCount=Optional.empty, offHeap=(ID: 4641801 7f3ab6e5f910)},
  ColumnVector{rows=48, type=INT64, nullCount=Optional.empty, offHeap=(ID: 4641802 7f3ab734d560)},
  ColumnVector{rows=48, type=INT16, nullCount=Optional.empty, offHeap=(ID: 4641803 7f3ab7648e80)},
  ColumnVector{rows=48, type=DECIMAL64 scale:-2, nullCount=Optional.empty, offHeap=(ID: 4641804 7f3ab7727ad0)}
], cudfTable=139892686375040, rows=48} to [ShortType, IntegerType, IntegerType, LongType, LongType, LongType, LongType, LongType, ShortType, DecimalType(12,2)] columns 0 to 10
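The key detail in the log is that cudf handed back UINT64 columns where the plugin expects INT64 for Spark's LongType, which is what trips the column type assertion. Below is a minimal, self-contained Java sketch of that kind of check, not the actual spark-rapids code; the enums and the assertConversionAllowed helper are hypothetical stand-ins:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of a column type check like the one failing above:
// each cudf column must carry the exact cudf type that the corresponding
// Spark Catalyst type maps to, so a UINT64 result arriving where LongType
// (INT64) is expected raises an AssertionError.
public class TypeCheckSketch {
  enum CudfType { INT16, INT32, INT64, UINT64, DECIMAL64 }          // stand-in for cudf DType
  enum SparkType { ShortType, IntegerType, LongType, DecimalType }  // stand-in for Catalyst types

  // Expected cudf type per Spark type (assumption: signed integers only).
  static CudfType expected(SparkType t) {
    switch (t) {
      case ShortType:   return CudfType.INT16;
      case IntegerType: return CudfType.INT32;
      case LongType:    return CudfType.INT64;
      case DecimalType: return CudfType.DECIMAL64;
      default: throw new IllegalArgumentException("unhandled: " + t);
    }
  }

  static void assertConversionAllowed(List<CudfType> actual, List<SparkType> wanted) {
    for (int i = 0; i < wanted.size(); i++) {
      if (actual.get(i) != expected(wanted.get(i))) {
        throw new AssertionError("Type conversion is not allowed from "
            + actual.get(i) + " to " + wanted.get(i) + " at column " + i);
      }
    }
  }

  public static void main(String[] args) {
    // Mirrors the log: an aggregation result came back UINT64 where the
    // plan says LongType, so the check throws at column 1.
    assertConversionAllowed(
        Arrays.asList(CudfType.INT16, CudfType.UINT64),
        Arrays.asList(SparkType.ShortType, SparkType.LongType));
  }
}
```

That matches the failure pattern above: every test whose aggregation produced an unsigned 64-bit column hits the same assert, while plans whose results come back as plain INT64 pass.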
This appears to be related to rapidsai/cudf#14679.