`removeUnusedVertices`, and by extension `mergeDuplicateVertices`, are currently a major speed bottleneck in the main pipeline.

My impression is that `removeUnusedVertices` is slow because `mergeDuplicateVertices` ends up creating lots of holes in the buffer that need to be closed. Closing each hole requires shifting the entire rest of the buffer over, which can be very time-consuming for large models.

This could be faster if we divided the buffer into chunks, closed the gaps within each smaller chunk, and then concatenated the used parts of the chunks. It may also be faster to iterate through the buffer once and copy the used chunks into a new buffer, though that is probably less memory-efficient.
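The single-pass copy idea above can be sketched as follows. This is illustrative only, not gltf-pipeline code: `compactBuffer` and the `usedRanges` shape are hypothetical names, assuming the pipeline can produce a sorted list of the byte ranges that are still referenced. Instead of shifting the whole tail of the buffer for every hole, we walk the used ranges once and copy each into a fresh buffer, so the total work is O(buffer size) rather than O(holes × buffer size).

```javascript
// Sketch: compact a buffer by copying only the used byte ranges into a new
// buffer, instead of closing each hole in place with a tail shift.
// usedRanges must be sorted, non-overlapping [start, end) byte ranges.
function compactBuffer(buffer, usedRanges) {
  const totalUsed = usedRanges.reduce((sum, r) => sum + (r.end - r.start), 0);
  const compacted = Buffer.alloc(totalUsed);
  let offset = 0;
  for (const range of usedRanges) {
    // buf.copy(target, targetStart, sourceStart, sourceEnd)
    buffer.copy(compacted, offset, range.start, range.end);
    offset += range.end - range.start;
  }
  return compacted;
}

// Example: keep bytes [0, 2) and [4, 6); the two zero bytes are a "hole".
const original = Buffer.from([1, 2, 0, 0, 3, 4]);
const result = compactBuffer(original, [
  { start: 0, end: 2 },
  { start: 4, end: 6 },
]);
// result contains the bytes 1, 2, 3, 4
```

The trade-off the issue mentions applies here: this allocates a second buffer of up to the original size, so it trades peak memory for a single linear pass.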
I ran into this today (specifically `mergeDuplicateVertices` being slow). I have a ~500 MB OBJ model that I'm running through obj2gltf and then gltf-pipeline. If I comment out the call to `mergeDuplicateVertices` in Pipeline.js, it converts in ~150 seconds; if I run with `mergeDuplicateVertices`, it never completes (I killed it after 30 minutes). @lilleyse suggested we disable `mergeDuplicateVertices` by default until this issue can be resolved. I'm going to open a PR to put it behind a flag.
This works around #161 by adding a `mergeVertices` option and disabling it by
default. Once `mergeDuplicateVertices` is refactored to use less memory
and be orders of magnitude faster, we can take out the option and just
always do it.
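The opt-in gating the PR describes might look like the following sketch. The option name `mergeVertices` comes from the PR text; the `runPipeline` function and the stage stand-ins are hypothetical, not the actual Pipeline.js code.

```javascript
// Illustrative stand-ins for the real pipeline stages.
function mergeDuplicateVertices(gltf) {
  return Object.assign({}, gltf, { merged: true });
}
function removeUnusedVertices(gltf) {
  return Object.assign({}, gltf, { pruned: true });
}

// The slow merge step is opt-in until it is fast enough to enable by default.
function runPipeline(gltf, options) {
  options = options || {};
  if (options.mergeVertices) {
    gltf = mergeDuplicateVertices(gltf);
  }
  return removeUnusedVertices(gltf);
}

const fast = runPipeline({});                          // skips the merge step
const full = runPipeline({}, { mergeVertices: true }); // runs both stages
```

Defaulting the flag to off keeps existing callers fast, while users who need deduplicated vertices can still opt in.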