
[BUG] Active GPU thread not holding the semaphore #7729

Closed · abellina opened this issue Feb 10, 2023 · 1 comment · Fixed by #7846
Labels: bug (Something isn't working)

abellina commented Feb 10, 2023

A user tried 23.02-SNAPSHOT and reported an OOM. In addition, other threads were seen active on the GPU without holding the semaphore (hence the "Semaphore not held" message in the trace below).

I wonder if this is another instance of #6980.

Semaphore not held. Stack trace for task attempt id 4554:
    java.lang.Thread.getStackTrace(Thread.java:1564)
    com.nvidia.spark.rapids.GpuSemaphore.$anonfun$dumpActiveStackTracesToLog$1(GpuSemaphore.scala:184)
    com.nvidia.spark.rapids.GpuSemaphore.$anonfun$dumpActiveStackTracesToLog$1$adapted(GpuSemaphore.scala:181)
    java.util.concurrent.ConcurrentHashMap.forEach(ConcurrentHashMap.java:1597)
    com.nvidia.spark.rapids.GpuSemaphore.dumpActiveStackTracesToLog(GpuSemaphore.scala:181)
    com.nvidia.spark.rapids.GpuSemaphore$.dumpActiveStackTracesToLog(GpuSemaphore.scala:86)
    com.nvidia.spark.rapids.DeviceMemoryEventHandler.$anonfun$onAllocFailure$3(DeviceMemoryEventHandler.scala:145)
    com.nvidia.spark.rapids.DeviceMemoryEventHandler.$anonfun$onAllocFailure$3$adapted(DeviceMemoryEventHandler.scala:118)
    com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
    com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
    com.nvidia.spark.rapids.DeviceMemoryEventHandler.withResource(DeviceMemoryEventHandler.scala:37)
    com.nvidia.spark.rapids.DeviceMemoryEventHandler.onAllocFailure(DeviceMemoryEventHandler.scala:118)
    ai.rapids.cudf.Table.filter(Native Method)
    ai.rapids.cudf.Table.filter(Table.java:2113)
    com.nvidia.spark.rapids.GpuExpressionWithSideEffectUtils$.filterBatch(conditionalExpressions.scala:74)
    com.nvidia.spark.rapids.GpuCaseWhen.filterEvaluateWhenThen(conditionalExpressions.scala:464)
    com.nvidia.spark.rapids.GpuCaseWhen.$anonfun$columnarEvalWithSideEffects$6(conditionalExpressions.scala:427)
    com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
    com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
    com.nvidia.spark.rapids.GpuCaseWhen.withResource(conditionalExpressions.scala:314)
    com.nvidia.spark.rapids.GpuCaseWhen.$anonfun$columnarEvalWithSideEffects$1(conditionalExpressions.scala:421)
    com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
    com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
    com.nvidia.spark.rapids.GpuCaseWhen.withResource(conditionalExpressions.scala:314)
    com.nvidia.spark.rapids.GpuCaseWhen.columnarEvalWithSideEffects(conditionalExpressions.scala:388)
    com.nvidia.spark.rapids.GpuCaseWhen.columnarEval(conditionalExpressions.scala:359)
    com.nvidia.spark.rapids.RapidsPluginImplicits$ReallyAGpuExpression.columnarEval(implicits.scala:34)
    com.nvidia.spark.rapids.GpuAlias.columnarEval(namedExpressions.scala:109)
    com.nvidia.spark.rapids.RapidsPluginImplicits$ReallyAGpuExpression.columnarEval(implicits.scala:34)
    com.nvidia.spark.rapids.GpuExpressionsUtils$.columnarEvalToColumn(GpuExpressions.scala:94)
    com.nvidia.spark.rapids.GpuProjectExec$.projectSingle(basicPhysicalOperators.scala:108)
    com.nvidia.spark.rapids.GpuProjectExec$.$anonfun$project$1(basicPhysicalOperators.scala:115)
    com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.$anonfun$safeMap$1(implicits.scala:216)
    com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.$anonfun$safeMap$1$adapted(implicits.scala:213)
    scala.collection.immutable.List.foreach(List.scala:392)
    com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.safeMap(implicits.scala:213)
    com.nvidia.spark.rapids.RapidsPluginImplicits$AutoCloseableProducingSeq.safeMap(implicits.scala:248)
    com.nvidia.spark.rapids.GpuProjectExec$.project(basicPhysicalOperators.scala:115)
    com.nvidia.spark.rapids.GpuTieredProject.$anonfun$tieredProject$1(basicPhysicalOperators.scala:335)
    com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
    com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
    com.nvidia.spark.rapids.GpuTieredProject.withResource(basicPhysicalOperators.scala:286)
    com.nvidia.spark.rapids.GpuTieredProject.recurse$1(basicPhysicalOperators.scala:334)
    com.nvidia.spark.rapids.GpuTieredProject.tieredProject(basicPhysicalOperators.scala:354)
    com.nvidia.spark.rapids.GpuTieredProject.$anonfun$tieredProjectAndClose$2(basicPhysicalOperators.scala:360)
    com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
    com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
    com.nvidia.spark.rapids.GpuTieredProject.withResource(basicPhysicalOperators.scala:286)
    com.nvidia.spark.rapids.GpuTieredProject.$anonfun$tieredProjectAndClose$1(basicPhysicalOperators.scala:359)
    com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
    com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
    com.nvidia.spark.rapids.GpuTieredProject.withResource(basicPhysicalOperators.scala:286)
    com.nvidia.spark.rapids.GpuTieredProject.tieredProjectAndClose(basicPhysicalOperators.scala:358)
    com.nvidia.spark.rapids.GpuProjectExec.$anonfun$doExecuteColumnar$1(basicPhysicalOperators.scala:177)
    scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
    com.nvidia.spark.rapids.GpuHashAggregateIterator.aggregateInputBatches(aggregate.scala:285)
    com.nvidia.spark.rapids.GpuHashAggregateIterator.$anonfun$next$2(aggregate.scala:240)
    scala.Option.getOrElse(Option.scala:138)
    com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:237)
    com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:182)
    com.nvidia.spark.rapids.GpuHashAggregateIterator.aggregateInputBatches(aggregate.scala:285)
    com.nvidia.spark.rapids.GpuHashAggregateIterator.$anonfun$next$2(aggregate.scala:240)
    scala.Option.getOrElse(Option.scala:138)
    com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:237)
    com.nvidia.spark.rapids.GpuHashAggregateIterator.next(aggregate.scala:182)
    org.apache.spark.sql.rapids.GpuFileFormatDataWriter.writeWithIterator(GpuFileFormatDataWriter.scala:87)
    org.apache.spark.sql.rapids.GpuFileFormatWriter$.$anonfun$executeTask$1(GpuFileFormatWriter.scala:320)
    org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1473)
    org.apache.spark.sql.rapids.GpuFileFormatWriter$.executeTask(GpuFileFormatWriter.scala:327)
    org.apache.spark.sql.rapids.GpuFileFormatWriter$.$anonfun$write$15(GpuFileFormatWriter.scala:246)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    org.apache.spark.scheduler.Task.run(Task.scala:131)
    org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
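
For context, the GpuSemaphore limits how many Spark tasks can be on the GPU at once, and dumpActiveStackTracesToLog (at the top of the trace) runs from the allocation-failure handler to flag task threads that look active on the GPU without holding it. Below is a minimal, hypothetical sketch of that invariant using a plain JVM Semaphore; it is not the plugin's actual code, and the names (GpuSlotExample, runOnGpu, reportNotHeld) are illustrative only.

    import java.util.concurrent.{ConcurrentHashMap, Semaphore}

    object GpuSlotExample {
      // Assume at most 2 tasks on the GPU at once (the real limit is configurable).
      private val gpuSlots = new Semaphore(2)
      // Threads currently holding a permit, keyed by thread id.
      private val holders = ConcurrentHashMap.newKeySet[java.lang.Long]()

      // Every piece of GPU work is supposed to run inside this guard.
      def runOnGpu[T](body: => T): T = {
        gpuSlots.acquire()
        holders.add(Thread.currentThread().getId)
        try body
        finally {
          holders.remove(Thread.currentThread().getId)
          gpuSlots.release()
        }
      }

      // Diagnostic analogous to dumpActiveStackTracesToLog: flag any thread
      // that appears to be doing GPU work without holding a permit. This
      // issue is about that check firing for live task threads.
      def reportNotHeld(activeGpuThreadIds: Set[Long]): Unit =
        for (tid <- activeGpuThreadIds if !holders.contains(tid))
          println(s"Semaphore not held by thread id $tid")
    }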

jlowe commented Feb 13, 2023

Relates to #7743, but it is not clear whether that issue is the root cause of this one.

mattahrens removed the "? - Needs Triage" label on Feb 14, 2023
abellina self-assigned this on Mar 6, 2023