When exporting to SEG-Y, the distributed workers flatten the chunks along the first dimension before writing. For huge files (>2 TB), this consumes a large amount of memory during export, so the output sharding strategy needs to be optimized:
mdio-python/src/mdio/converters/mdio.py
Lines 177 to 241 in 03b9e4f
and
mdio-python/src/mdio/segy/creation.py
Line 111 in 03b9e4f
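To make the memory pressure concrete, here is a hypothetical sketch (not the actual MDIO code; all array shapes, chunk sizes, and dimension names are assumed for illustration) of the flatten-along-the-first-dimension pattern, plus a back-of-the-envelope estimate of what a single worker can end up materializing:

```python
import numpy as np

# Hypothetical illustration, not the actual MDIO code: a tiny 3D
# "seismic volume" (inline, crossline, sample) flattened along the
# first dimension so contiguous traces can be written to SEG-Y.
volume = np.arange(4 * 3 * 5, dtype=np.float32).reshape(4, 3, 5)
traces = volume.reshape(-1, volume.shape[-1])  # shape (12, 5)

# Back-of-the-envelope estimate with assumed, illustrative dimensions:
inlines, crosslines, samples = 5_000, 5_000, 20_000
bytes_per_sample = 4  # 4-byte samples, as in common SEG-Y formats

total_bytes = inlines * crosslines * samples * bytes_per_sample
print(f"file: {total_bytes / 1e12:.1f} TB")  # 2.0 TB

# A worker holding a 128-inline slab after flattening materializes:
slab_bytes = 128 * crosslines * samples * bytes_per_sample
print(f"slab: {slab_bytes / 1e9:.1f} GB per worker")  # 51.2 GB
```

With slab sizes in this range, even a handful of concurrently scheduled output tasks can exhaust worker memory, which is why the sharding strategy matters.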
ref Dask Community Post
This has significant improvements to memory usage:
dask/distributed#7128
```python
import dask
import distributed

with dask.config.set({"distributed.scheduler.worker-saturation": "1.0"}):
    client = distributed.Client(...)
```