
emscripten: use wasm-exceptions for better performance #29

Closed
kzhsw opened this issue May 30, 2023 · 8 comments

Comments

kzhsw commented May 30, 2023

Using this project to parse a large (~80 MB) .stp file, I waited more than 4 hours and it never finished. When I paused the script in devtools, it was stopped in some JS glue code generated by Emscripten that appears to be exception handling.
The docs say wasm-exceptions should be faster, but I failed to compile with it.
The same model took about 2 minutes to convert to glTF with mayo, a native app that uses the same OpenCascade to handle .stp files.
I also tried foxtrot in the browser, but it simply failed.

Other useful links:

longhan commented May 31, 2023

Hello,

Can you please share your .stp?

kzhsw commented May 31, 2023

> Hello,
>
> Can you please share your .stp?

No, it's commercial data. From a quick look at the OpenCascade code, C++ exceptions are used widely throughout it, so building with wasm-exceptions should improve overall performance by reducing wasm-to-JS calls, not just for a specific model.

kovacsv commented Jun 18, 2023

@kzhsw I managed to compile the library with -fwasm-exceptions, tests are running fine, and the binary is even smaller than before. Now I would like to test with your model, could you please share it with me somehow?
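For reference, a minimal sketch of what such a build invocation might look like (the source file name and other flags here are placeholders, not the project's actual build setup; `-fwasm-exceptions` is a real Emscripten flag and must be passed at both compile and link time):

```shell
# Hypothetical emcc invocation; occt-import-js has its own build scripts.
# -fwasm-exceptions uses native Wasm exception handling instead of the
# default JS-based handling, avoiding wasm<->JS round trips on every throw.
emcc importer.cpp -O3 \
    -fwasm-exceptions \
    -o occt-import-js.js
```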

kzhsw commented Jun 19, 2023

> @kzhsw I managed to compile the library with -fwasm-exceptions, tests are running fine, and the binary is even smaller than before. Now I would like to test with your model, could you please share it with me somehow?

Thank you for this, I'll try to find a model that can be uploaded here. Could you upload the compiled binary here, or benchmark it with some smaller models?

kovacsv commented Jun 19, 2023

Some measurements for files I have at hand. It's definitely faster, though I guess the difference depends on how many exceptions the import raises.

| File Name | File Size | Old version | New version |
| --- | --- | --- | --- |
| as1_pe_203.stp | 0.2 MB | 0.671 s | 0.307 s |
| GPT-05-BL.stp | 0.5 MB | 1.650 s | 0.956 s |
| MDrive23Plus.stp | 3.2 MB | 9.127 s | 4.999 s |
| MD34AC_single.stp | 10.9 MB | 26.869 s | 13.233 s |
| Product1TamMontaj.stp | 12.2 MB | 40.710 s | 20.810 s |
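Those timings work out to a fairly consistent speedup of roughly 1.7–2.2x; a quick check (values copied from the table above):

```javascript
// Timings in seconds from the benchmark table: [old, new] per file.
const timings = {
  'as1_pe_203.stp': [0.671, 0.307],
  'GPT-05-BL.stp': [1.650, 0.956],
  'MDrive23Plus.stp': [9.127, 4.999],
  'MD34AC_single.stp': [26.869, 13.233],
  'Product1TamMontaj.stp': [40.710, 20.810],
};

// Compute old/new speedup ratio per file.
const speedups = Object.fromEntries(
  Object.entries(timings).map(([file, [oldT, newT]]) => [file, oldT / newT])
);
console.log(speedups); // every ratio lands between roughly 1.7x and 2.2x
```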

kovacsv commented Jun 19, 2023

@kzhsw even if it doesn't solve your problem, it's definitely better than before, so I committed the newly compiled version. Please check it with your file.

kovacsv closed this as completed Jun 19, 2023

kzhsw commented Jun 20, 2023

Thank you for this great work! Also note that for best browser compatibility, both versions should be compiled, and the supported one loaded based on a browser version check or wasm-feature-detect.
BTW, it would be better to upload the compiled wasm dist to Releases, so devs can get it without building from source.
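A sketch of what that dual-build selection could look like (the build file names are hypothetical placeholders; wasm-feature-detect does export an `exceptions()` detector):

```javascript
// Pick which compiled build to load based on whether the runtime
// supports native Wasm exception handling.
// The file names below are hypothetical, not the project's actual dist names.
function pickWasmBuild(supportsWasmExceptions) {
  return supportsWasmExceptions
    ? 'occt-import-js-wasmexcept.js' // built with -fwasm-exceptions
    : 'occt-import-js-jsexcept.js';  // fallback: JS-based exception glue
}

// Usage with wasm-feature-detect (assumed installed from npm):
//   import { exceptions } from 'wasm-feature-detect';
//   const script = pickWasmBuild(await exceptions());
```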

kovacsv commented Jun 21, 2023

Browsers and Node.js have supported this feature for more than a year, so I think it's fine to keep only the new version.
