Stop double closing SerializeBatchDeserializeHostBuffer host buffers when running with Spark 3.2 #3422
Closes #3416
This PR fixes a bug that we don't seem to have been hitting until Spark 3.2.0. `SerializeConcatHostBuffersDeserializeBatch.close` closes all of its `SerializeBatchDeserializeHostBuffer` instances, which in turn closes the underlying `HostMemoryBuffer` instances. `SerializeConcatHostBuffersDeserializeBatch.close` then also tries to close those same underlying `HostMemoryBuffer` instances directly, causing the "Close called too many times" error.

I added debug logging to both close methods and did not see them get called with earlier Spark versions, so maybe this code path was only exercised in contexts where the existing code was safe. I feel like I may be missing something here.
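The failure pattern can be sketched in isolation. This is a minimal, hypothetical Java model of the double-close described above, not the actual spark-rapids code: a wrapper class owns and closes a buffer, and a containing batch class both closes the wrappers and then redundantly closes the buffers again, triggering the "Close called too many times" failure. The fix is to let each buffer be closed by exactly one owner.

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleCloseDemo {
    // Stand-in for HostMemoryBuffer: refuses to be closed twice.
    static final class HostBuffer implements AutoCloseable {
        private boolean closed = false;
        @Override public void close() {
            if (closed) {
                throw new IllegalStateException("Close called too many times");
            }
            closed = true;
        }
        boolean isClosed() { return closed; }
    }

    // Stand-in for SerializeBatchDeserializeHostBuffer: owns one buffer.
    static final class BatchWrapper implements AutoCloseable {
        final HostBuffer buffer = new HostBuffer();
        @Override public void close() { buffer.close(); }
    }

    // Stand-in for SerializeConcatHostBuffersDeserializeBatch.
    static final class ConcatBatch implements AutoCloseable {
        final List<BatchWrapper> wrappers = new ArrayList<>();
        ConcatBatch(int n) {
            for (int i = 0; i < n; i++) wrappers.add(new BatchWrapper());
        }

        // Buggy close: closing the wrappers already closes their buffers,
        // so the second loop closes each buffer a second time.
        void buggyClose() {
            for (BatchWrapper w : wrappers) w.close();
            for (BatchWrapper w : wrappers) w.buffer.close(); // double close!
        }

        // Fixed close: close only the wrappers; they own their buffers.
        @Override public void close() {
            for (BatchWrapper w : wrappers) w.close();
        }
    }

    public static void main(String[] args) {
        ConcatBatch bad = new ConcatBatch(2);
        try {
            bad.buggyClose();
        } catch (IllegalStateException e) {
            System.out.println("buggy close: " + e.getMessage());
        }

        ConcatBatch good = new ConcatBatch(2);
        good.close();
        System.out.println("fixed close: all buffers closed = "
            + good.wrappers.stream().allMatch(w -> w.buffer.isClosed()));
    }
}
```

The same ownership rule is what this PR applies: once the child objects are responsible for closing their buffers, the parent's close must not touch those buffers again.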
Here is the output with Spark 3.2.0 demonstrating that the problem described in #3416 is now resolved.
I ran `mvn verify` with the default Spark version and did not see any additional host memory leaks reported.