Another problem spotted in a heavily loaded environment after upgrading from Trino 466 to 470. We started getting a lot of errors like this:
io.trino.spi.TrinoException: Unexpected response from http://10.212.98.38:8061/v1/task/20250220_151839_00045_u7m6u.5.5.0?summarize
at io.trino.server.remotetask.SimpleHttpResponseHandler.onSuccess(SimpleHttpResponseHandler.java:70)
at io.trino.server.remotetask.SimpleHttpResponseHandler.onSuccess(SimpleHttpResponseHandler.java:27)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1139)
at io.airlift.concurrent.BoundedExecutor.drainQueue(BoundedExecutor.java:79)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1575)
Caused by: java.lang.IllegalArgumentException: Unable to create class io.trino.execution.TaskInfo from JSON response:
[io.airlift.jaxrs.JsonParsingException: com.fasterxml.jackson.databind.JsonMappingException: java.util.concurrent.TimeoutException: Idle timeout 15000 ms elapsed (through reference chain: io.trino.server.TaskUpdateRequest["fragment"]->io.trino.sql.planner.PlanFragment["root"]->io.trino.sql.planner.plan.ProjectNode["source"]->io.trino.sql.planner.plan.FilterNode["source"]->io.trino.sql.planner.plan.TableScanNode["table"]->io.trino.metadata.TableHandle["connectorHandle"]->io.trino.plugin.iceberg.IcebergTableHandle["unenforcedPredicate"]->io.trino.spi.predicate.TupleDomain["columnDomains"]->java.util.ArrayList[0])
at io.airlift.jaxrs.JsonMapper.readFrom(JsonMapper.java:55)
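For reference, the "through reference chain" part of that JsonMappingException is just Jackson annotating how far deserialization had got when the underlying failure surfaced, which suggests the idle timeout hit while the request body was still being deserialized. Below is a minimal standalone sketch (not Trino code; the Request/Predicate/Domain names are made up) that produces an error of the same shape:

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.annotation.JsonDeserialize;

import java.io.IOException;
import java.util.List;
import java.util.concurrent.TimeoutException;

public class ReferenceChainSketch
{
    // Made-up classes mirroring the nested shape seen in the log above.
    public static class Request
    {
        public Predicate predicate;
    }

    public static class Predicate
    {
        public List<Domain> domains;
    }

    @JsonDeserialize(using = DomainDeserializer.class)
    public static class Domain
    {
    }

    // Deserializer that fails while a nested list element is being read,
    // standing in for a transport-level idle timeout surfacing mid-parse.
    public static class DomainDeserializer
            extends JsonDeserializer<Domain>
    {
        @Override
        public Domain deserialize(JsonParser parser, DeserializationContext context)
                throws IOException
        {
            throw new RuntimeException(new TimeoutException("Idle timeout 15000 ms elapsed"));
        }
    }

    public static void main(String[] args)
    {
        try {
            new ObjectMapper().readValue("{\"predicate\":{\"domains\":[{}]}}", Request.class);
        }
        catch (IOException e) {
            // Prints a JsonMappingException whose message ends with a reference chain of the
            // same shape as above: Request["predicate"]->Predicate["domains"]->java.util.ArrayList[0]
            System.out.println(e);
        }
    }
}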
What is notable about this cluster is that it writes a lot of Iceberg data. I initially thought it was related to the max-writer limit for Iceberg that was fixed in 471, so I cherry-picked that change, but it still times out the same way. I also can't find any configuration related to a 15000 ms timeout. Very weird.