Buffering implementation causes crash in VolumetricDataSource #1036
I started debugging the issue, and it seems that subsequently loading datasets in VolumetricDataSource causes this to happen, but only if the later datasets are larger than the first one. The error then occurs in zlib (gzread.c, line 35) when reading in data. errno reveals error code 22 (EINVAL; invalid argument). Why this happens is still unclear. For now, a workaround is to change the input file name to the larger dataset before running the project.

Further, this reveals another issue in VolumetricDataSource or RaycastVolumeRenderer: although the data source returns an error, VolumetricDataSource still increases the data hash of the left-side call. Thus, after the first "return false" from GetData, modules to the left (i.e., RaycastVolumeRenderer) keep calling for updates. Because the data source believes it has "already loaded" the current data, it now returns true, resulting in a crash when the module to the left tries to access the (never provided) data. I am not sure whether this is bad behavior of VolumetricDataSource or of RaycastVolumeRenderer. @reinago
After further debugging, the above error turns out to be just an artifact of a buffer overflow when reading in the data. It is due to the faulty buffer implementation in VolumetricDataSource: the buffers are only newly created if the incoming call does not contain any data; afterwards, they are recycled at their original size. Hence the overflow when trying to load a dataset larger than the previous one.
Describe the bug
MegaMol crashes when using RaycastVolumeRenderer with larger (but still moderately sized) volumes.
To Reproduce
Open MegaMol
Expected behavior
The dataset loads properly and MegaMol does not crash.
Screenshots
If applicable, add screenshots to help explain your problem.
Log file
15:57:8|25336|200|Successfully loaded dat file c:/users/bauerrn/Data/tobias_itlr_flow_around_cylinder_experiment/Cylinderdata_Re400_100Hz_variables.dat.
15:57:8|25336|200|The grid is cartesian.
15:57:8|25336|200|The grid is uniform in dimension 0 and has a slice distance of 1.000000.
15:57:8|25336|200|The grid is uniform in dimension 1 and has a slice distance of 1.000000.
15:57:8|25336|200|The grid is uniform in dimension 2 and has a slice distance of 1.000000.
15:57:8|25336|200|Resolution in dimension 0 is 185.
15:57:8|25336|200|Resolution in dimension 1 is 165.
15:57:8|25336|200|Resolution in dimension 2 is 640.
15:57:8|25336|200|Origin in dimension 0 is 0.000000.
15:57:8|25336|200|Origin in dimension 1 is 0.000000.
15:57:8|25336|200|Origin in dimension 2 is 0.000000.
15:57:8|25336|200|Each voxel comprises 1 scalars.
15:57:8|25336|200|Scalars are unsigned integers.
15:57:8|25336|200|Scalar values comprise 1 bytes.
15:57:8|25336|200|The data set comprises 1 frames.
15:57:8|25336|1|Loading frame 0 failed.
Environment
Additional context
The very same dataset works when it is manually limited to e.g. 512x185x165 of its original size.
My guess is that the dataset is too large, or that rendering takes so long that MegaMol somehow crashes.