
Consuming large results with table functions causes "Query exceeded per-node memory limit" error #20398

Open
ebyhr opened this issue Jan 17, 2024 · 1 comment · May be fixed by #21558 or #21378
Labels: bug (Something isn't working)

Comments


ebyhr commented Jan 17, 2024

Consuming a large result through a table function fails with a per-node memory limit error. Reproduction:

SELECT *
FROM TABLE(exclude_columns(
 input => TABLE(tpch.sf100.orders),
 columns => DESCRIPTOR(orderstatus)));

 orderkey  | custkey  | totalprice | orderdate  |  orderpriority  |      clerk      | shippriority |                    >
-----------+----------+------------+------------+-----------------+-----------------+--------------+-------------------->
 500000001 |  9605725 |   63881.33 | 1994-10-28 | 4-NOT SPECIFIED | Clerk#000002414 |            0 | e of the carefully >
...

Query 20240117_021243_00035_3g5dg, FAILED, 3 nodes
http://localhost:8080/ui/query.html?20240117_021243_00035_3g5dg
Splits: 30 total, 21 done (70.00%)
CPU Time: 64.3s total, 1.53M rows/s,     0B/s, 69% active
Per Node: 2.8 parallelism, 4.23M rows/s,     0B/s
Parallelism: 8.3
Peak Memory: 13.6GB
7.73 [98.1M rows, 0B] [12.7M rows/s, 0B/s]

Query 20240117_021243_00035_3g5dg failed: Query exceeded per-node memory limit of 4.80GB [Allocated: 4.80GB, Delta: 497.44kB, Top Consumers: {PartitionAndSort=4.77GB, LazyOutputBuffer=26.40MB, TableScanOperator=2.90MB}]
io.trino.ExceededMemoryLimitException: Query exceeded per-node memory limit of 4.80GB [Allocated: 4.80GB, Delta: 497.44kB, Top Consumers: {PartitionAndSort=4.77GB, LazyOutputBuffer=26.40MB, TableScanOperator=2.90MB}]
	at io.trino.ExceededMemoryLimitException.exceededLocalUserMemoryLimit(ExceededMemoryLimitException.java:40)
	at io.trino.memory.QueryContext.enforceUserMemoryLimit(QueryContext.java:330)
	at io.trino.memory.QueryContext.updateUserMemory(QueryContext.java:165)
	at io.trino.memory.QueryContext.lambda$addTaskContext$0(QueryContext.java:250)
	at io.trino.memory.QueryContext$QueryMemoryReservationHandler.reserveMemory(QueryContext.java:311)
	at io.trino.memory.context.RootAggregatedMemoryContext.updateBytes(RootAggregatedMemoryContext.java:37)
	at io.trino.memory.context.ChildAggregatedMemoryContext.updateBytes(ChildAggregatedMemoryContext.java:38)
	at io.trino.memory.context.ChildAggregatedMemoryContext.updateBytes(ChildAggregatedMemoryContext.java:38)
	at io.trino.memory.context.ChildAggregatedMemoryContext.updateBytes(ChildAggregatedMemoryContext.java:38)
	at io.trino.memory.context.SimpleLocalMemoryContext.setBytes(SimpleLocalMemoryContext.java:65)
	at io.trino.operator.OperatorContext$InternalLocalMemoryContext.setBytes(OperatorContext.java:710)
	at io.trino.operator.TableScanOperator.getOutput(TableScanOperator.java:312)
	at io.trino.operator.Driver.processInternal(Driver.java:395)
	at io.trino.operator.Driver.lambda$process$8(Driver.java:298)
	at io.trino.operator.Driver.tryWithLock(Driver.java:701)
	at io.trino.operator.Driver.process(Driver.java:290)
	at io.trino.operator.Driver.processForDuration(Driver.java:261)
	at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:887)
	at io.trino.execution.executor.timesharing.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:187)
	at io.trino.execution.executor.timesharing.TimeSharingTaskExecutor$TaskRunner.run(TimeSharingTaskExecutor.java:565)
	at io.trino.$gen.Trino_testversion____20240117_015825_75.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1589)

The following query, which reads the same columns directly without the table function, does not cause this error:

SELECT orderkey, totalprice, orderdate, orderpriority, clerk, shippriority, comment FROM tpch.sf100.orders;
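Until the underlying buffering is fixed, a possible workaround sketch (assuming the goal is only to drop `orderstatus`) is to shrink the table-function input with a pre-filter, so fewer rows reach the operator that accumulates memory. The `orderdate` predicate below is purely illustrative, not part of the original report:

```sql
-- Workaround sketch (illustrative): pre-filter the input so fewer rows
-- reach exclude_columns; the orderdate predicate is a made-up example.
SELECT *
FROM TABLE(exclude_columns(
    input => TABLE(SELECT *
                   FROM tpch.sf100.orders
                   WHERE orderdate >= DATE '1997-01-01'),
    columns => DESCRIPTOR(orderstatus)));
```

Trino accepts either a table name or a full query inside a `TABLE(...)` argument, so the filter can be pushed into the table-function input directly; explicit projection (as in the query above) remains the simplest workaround when the column list is known.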
ebyhr added the bug label Apr 3, 2024
findepi linked a pull request Apr 3, 2024 that will close this issue

findepi commented Apr 3, 2024

#21378 might help
