GraphNode: batch dispose mode #83
A while ago I'd been looking into using … Related: …
Possible alternatives:

```ts
class Root {
  disposeRefs<K extends RefListKeys<Attributes>>(
    attribute: K,
    refs: ((ref: Attributes[K]) => boolean) | Attributes[K][]
  ) {
    this.startBatchDispose();
    // do the dispose here
    this.commitBatchDispose();
  }
}
```
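The `refs` parameter above accepts either a predicate or an array of refs. Normalizing the two forms into a single predicate up front could look like this (a sketch only; `toPredicate` is a hypothetical helper, not part of the proposal):

```javascript
// Normalize a predicate-or-array argument into a predicate function.
// Arrays are converted to a Set so the membership test is O(1).
function toPredicate(refs) {
  if (typeof refs === 'function') return refs;
  const set = new Set(refs);
  return (ref) => set.has(ref);
}

const isEven = toPredicate((n) => n % 2 === 0);
const inList = toPredicate([1, 3, 5]);
console.log(isEven(4), inList(3), inList(2)); // → true true false
```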
```js
class GraphNode {
  disposeRef(ref, attribute, refs) {
    let index = 0;
    let changed = false;
    // Null out every occurrence; nulls are compacted later in listRefs().
    while ((index = refs.indexOf(ref, index)) > -1) {
      refs[index] = null;
      changed = true;
    }
    // if (changed) this.dirtyMap[attribute] = true;
  }
  listRefs(attribute) {
    const refs = this[$attributes][attribute];
    const length = refs.length;
    const children = [];
    let writeIndex = 0;
    for (let readIndex = 0; readIndex < length; readIndex++) {
      const ref = refs[readIndex];
      if (ref !== null) {
        if (readIndex !== writeIndex) {
          refs[writeIndex] = ref;
        }
        children.push(ref.getChild());
        writeIndex++;
      }
    }
    if (writeIndex !== refs.length) {
      refs.length = writeIndex;
      this.dispatchEvent({ type: 'change', attribute });
    }
    return children;
  }
}
```

Should note that this is optimized using an in-place compaction. Maybe plain for loops can be faster than `indexOf` here.
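The in-place compaction step can be isolated into a small helper for benchmarking on its own; a minimal sketch (the name `compactInPlace` is hypothetical):

```javascript
// Remove null entries from an array in place, preserving order of the rest.
// Returns true if anything was removed.
function compactInPlace(arr) {
  let writeIndex = 0;
  for (let readIndex = 0; readIndex < arr.length; readIndex++) {
    if (arr[readIndex] !== null) {
      if (readIndex !== writeIndex) arr[writeIndex] = arr[readIndex];
      writeIndex++;
    }
  }
  const changed = writeIndex !== arr.length;
  arr.length = writeIndex; // truncate the trailing slots
  return changed;
}

const refs = ['a', null, 'b', null, 'c'];
compactInPlace(refs);
console.log(refs); // → ['a', 'b', 'c']
```

The single backward truncation (`arr.length = writeIndex`) avoids the repeated element shifting that per-element `splice` calls would cause.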
Maybe too complex here.
That we still need to rebuild the array of everything that wasn't disposed is unfortunate. Using Sets would really be ideal, to avoid:

```js
for (const retainedRef of retained) refs.push(retainedRef);
```

But this requires breaking API changes I'm not ready to make right now. In the meantime I think the simplest and safest optimization may be something like this:
Relevant for benchmarking: the model linked below contains about 40,000 accessors. Disposing 50% of those could be a comparable test. https://sketchfab.com/3d-models/plant-3-5ecd9744ff6f4aa09766d796eec161ee
A much simpler idea for benchmarking code:
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Title</title>
</head>
<body>
  <button type="button" id="btn0">Benchmark</button>
  <script type="importmap">
    {
      "imports": {
        "@gltf-transform/core": "./packages/core/dist/core.modern.js",
        "property-graph": "./node_modules/property-graph/dist/property-graph.modern.js"
      }
    }
  </script>
  <script type="module">
    import {Document} from '@gltf-transform/core';
    console.log(Document);

    /* Randomize array in-place using Durstenfeld shuffle algorithm */
    function shuffleArray(array) {
      for (var i = array.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var temp = array[i];
        array[i] = array[j];
        array[j] = temp;
      }
    }

    document.getElementById('btn0').onclick = () => {
      const doc = new Document();
      const count = 30000;
      const listRefs = 200;
      const root = doc.getRoot();
      // root.addRef()
      doc.createBuffer();
      // const buffer = doc.createBuffer();
      for (let i = 0; i < count; i++)
        doc.createAccessor();
      let accessors = root.listAccessors();
      const removeArray = [];
      for (let i = 1; i < count; i += 3) {
        removeArray.push(accessors[i]);
      }
      shuffleArray(removeArray);
      console.time('dispose benchmark');
      for (let i = 0, length = removeArray.length; i < length; i++) {
        removeArray[i].dispose();
      }
      for (let i = 0; i < listRefs; i++)
        accessors = root.listAccessors();
      console.timeEnd('dispose benchmark');
      console.log(accessors.length === (count - removeArray.length),
        accessors.length, count, removeArray.length);
      console.log('done');
    };
  </script>
</body>
</html>
```

Edit: benchmarking the inplace removal: https://jsben.ch/OxZiG
While I fully agree it would be much faster, I don't think I can justify adding a batch API for disposal. We are dispatching events, user code may execute in those callbacks, and there's potential for this to go sideways in complex ways. Storing refs in a Set is both faster and safer, and I'd much rather do that in a future major version. In the meantime I would prefer to take a safer option (#84) for a smaller speed improvement. That said — it might help to understand better the kind of processing you're doing with glTF Transform, your performance goals, and what is working well vs. not well in general. Please feel free to start a thread if that would be helpful.
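As an aside, a Set-backed ref store (a hypothetical sketch under that future-major-version assumption, not the actual property-graph API) would make each removal O(1) with no compaction pass at all:

```javascript
// Hypothetical Set-backed ref list: removal is O(1) and needs no compaction.
// JavaScript Sets iterate in insertion order, so listRefs() stays stable.
class RefList {
  constructor() { this.refs = new Set(); }
  addRef(ref) { this.refs.add(ref); }
  removeRef(ref) { return this.refs.delete(ref); } // true if it was present
  listRefs() { return Array.from(this.refs); }
}

const list = new RefList();
list.addRef('a'); list.addRef('b'); list.addRef('c');
list.removeRef('b');
console.log(list.listRefs()); // → ['a', 'c']
```

The breaking part is that a Set cannot hold the same ref twice, so any existing behavior that depends on duplicate refs in a list would change.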
Is the …
We can't turn off …
When using the `dedup` or `prune` function in glTF-Transform, the functions can dispose a large number of accessors. In a model with a large number of accessors, the `dispose` event listener in `GraphNode#addRef` consumes ~43% of the total CPU time.

screenshot

Proposal

Add a batch dispose mode to GraphNode. In this mode, dispose events are buffered and processed later, so fewer array filter and push calls are made.

The patch to property-graph
The patch to @gltf-transform/functions

screenshot after patch

Total blocking time was reduced from ~26312ms to ~10332ms; GC time was also reduced after the patch.

Additional context

The model contains ~30000 accessors. The model contains some business data which cannot be shared, but any model with many duplicated or unused accessors or nodes could reproduce this.

This is not a breaking change, since without calls to `startBatchDispose` or `commitBatchDispose` everything behaves as if unpatched. Should note that in batch dispose mode, fewer duplicate `change` events are dispatched.
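As a rough illustration of the proposed mode (the method names come from the issue, but the buffering details here are an assumption, not the actual patch):

```javascript
// Sketch of a batch dispose mode: 'change' events are buffered while a
// batch is open, then dispatched once per attribute (deduplicated) on commit.
class Node {
  constructor() {
    this.batching = false;
    this.pendingEvents = new Set(); // attributes with a pending 'change'
    this.dispatched = [];           // stand-in for real event dispatch
  }
  startBatchDispose() { this.batching = true; }
  markChanged(attribute) {
    if (this.batching) {
      this.pendingEvents.add(attribute); // dedupe per attribute
    } else {
      this.dispatched.push(attribute);   // normal mode: dispatch immediately
    }
  }
  commitBatchDispose() {
    this.batching = false;
    for (const attribute of this.pendingEvents) this.dispatched.push(attribute);
    this.pendingEvents.clear();
  }
}

const node = new Node();
node.startBatchDispose();
node.markChanged('accessors');
node.markChanged('accessors'); // duplicate, coalesced
node.markChanged('nodes');
node.commitBatchDispose();
console.log(node.dispatched); // → ['accessors', 'nodes']
```

Without `startBatchDispose`/`commitBatchDispose`, `markChanged` dispatches immediately, which matches the issue's claim that the patch is a no-op for callers that never opt in.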