We tried deploying Presto with the latest Hoodie jars and observed a sudden increase in the number of open sockets used by the Presto coordinator. The sockets are stuck in CLOSE_WAIT state. Since Presto is a long-running process, this issue is specific to Presto and is not observed in short-lived processes like Hive/Spark applications. Debugging further, we identified that the leak likely comes from HoodiePartitionMetadata.readFromFS() not closing the stream it opens for HOODIE_PARTITION_METAFILE.
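For illustration, here is a minimal sketch of the fix pattern, assuming the metafile is a Properties file read through a Hadoop FSDataInputStream; the constant name and method signature below are stand-ins, not the exact code in HoodiePartitionMetadata:

```java
import java.io.IOException;
import java.util.Properties;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PartitionMetadataReader {

  // Hypothetical constant; the real file name is defined in HoodiePartitionMetadata.
  private static final String HOODIE_PARTITION_METAFILE = ".hoodie_partition_metadata";

  /**
   * Reads the partition metafile, closing the underlying stream even on failure.
   * Leaving the FSDataInputStream open is what leaves the connection in CLOSE_WAIT.
   */
  public static Properties readFromFS(FileSystem fs, Path partitionPath) throws IOException {
    Path metafilePath = new Path(partitionPath, HOODIE_PARTITION_METAFILE);
    Properties props = new Properties();
    // try-with-resources guarantees the socket-backed stream is closed.
    try (FSDataInputStream is = fs.open(metafilePath)) {
      props.load(is);
    }
    return props;
  }
}
```

Wrapping the open stream in try-with-resources (or an explicit finally-close) should release the connection promptly instead of leaving it for the coordinator's GC to clean up.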