This repository has been archived by the owner on Jul 24, 2024. It is now read-only.

Compile to WebAssembly #2011

Open

mafintosh opened this issue Jun 9, 2017 · 46 comments

Comments

@mafintosh

Since Node 8 supports WebAssembly, it would be nice to compile libsass to it using Emscripten so people wouldn't need to compile the C++ or depend on a prebuilt binary.

@xzyfer
Contributor

xzyfer commented Jun 10, 2017

Definitely open to it. We're also about to give N-API a shot. It's an exciting time for native extensions.

@nschonni
Contributor

There is already an official libsass Emscripten project: https://github.com/medialize/sass.js

@xzyfer
Contributor

xzyfer commented Jun 10, 2017 via email

@saper
Member

saper commented Jun 10, 2017

Dropping binaries would be fantastic... I wonder how well that setup will perform.

@xzyfer
Contributor

xzyfer commented Aug 4, 2017

@mafintosh I'm starting to look into this seriously, but I'm struggling to find resources on how to do this for native extensions in Node. Do you know of any other projects that have made the switch to WASM that I can reference?

@aduh95

aduh95 commented Sep 3, 2017

@xzyfer I tried to install the Emscripten SDK on my computer to compile libsass to WASM; good news, it compiles without changing the C++ code!

I followed these steps:

  1. Activate the Emscripten SDK in my environment.
  2. On the latest commit of libsass, run the makefile with the Emscripten compiler:
$ make clean && make --eval="CC=emcc
> CXX=emcc"
  3. Then compile the library into wasm: emcc src/*.o lib/libsass.a -O2 -s WASM=1 -o libsass.js

The libsass.wasm file should now contain the library, and it can be run by Node >= 8.
Also, I tried to change the binding.js file of node-sass to use the cross-platform WASM instead of the platform-dependent binary, but I suppose I'm missing something since Node displays an error:

In binding.js, I changed:

return ext.getBinaryPath()

with

    return require('./libsass.js'); // libsass.js is the file generated by emcc

And it outputs the following:

TypeError: binding.libsassVersion is not a function
    at Object.getVersionInfo (.../node_modules/node-sass/lib/extensions.js:368:27)
    at Object.<anonymous> (.../node_modules/node-sass/lib/index.js:448:28)
    at Module._compile (module.js:573:30)
    at Object.Module._extensions..js (module.js:584:10)
    at Module.load (module.js:507:32)
    at tryModuleLoad (module.js:470:12)
    at Function.Module._load (module.js:462:3)
    at Module.require (module.js:517:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous>

I hope this helps!
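
For reference, a likely reason for that error: node-sass's JS layer calls functions such as binding.libsassVersion() that are defined by the Nan-based binding.cpp, while the Emscripten build only exposes libsass's plain C API. A rough sketch of the kind of shim that would be needed, assuming libsass_version was exported at link time and cwrap is available (Emscripten's asynchronous startup is glossed over here):

// Hypothetical shim mapping the Emscripten module onto the method names
// node-sass's extensions.js expects. Assumes the build exported the symbol
// and the cwrap helper, e.g.:
//   emcc ... -s EXPORTED_FUNCTIONS="['_libsass_version']" \
//            -s EXTRA_EXPORTED_RUNTIME_METHODS="['cwrap']"
const Module = require('./libsass.js'); // file generated by emcc

const libsassVersion = Module.cwrap('libsass_version', 'string', []);

module.exports = {
  libsassVersion: () => libsassVersion(),
  // render/renderSync would need similar wrappers around the C API.
};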

@xzyfer
Contributor

xzyfer commented Sep 4, 2017

Thanks for this work @aduh95. You got further than I did when I looked at it. For now I would comment out the binding.libsassVersion code paths to continue making progress.

@kwonoj

kwonoj commented Jan 19, 2018

For anyone interested in this topic, I've prepped a small POC at #2220 for further discussion.

@xzyfer
Contributor

xzyfer commented Jan 19, 2018 via email

@kwonoj

kwonoj commented Jul 22, 2018

For anyone who's interested in taking this further, I spun off https://github.com/kwonoj/libsass-asm based on my PR #2220. So far it passes sass-spec for 3.5.4.

@gijsroge

I came here after watching @callahad's amazing talk about WebAssembly, where he mentioned this project's potential for WebAssembly.

Thought this might be of interest to the people who are currently putting time into this!

@justrhysism

Is there a plan to use WebAssembly (libsass-asm) as a replacement for libsass directly? As in, is it being seriously considered? Or are we still in the POC phase?

I periodically run into issues with node-sass/libsass, particularly when switching Node versions because the binaries don't match up. But also, I'd like to remove the network dependency from my toolchain, and currently downloading libsass is the only thing I haven't been able to avoid.

I understand that everyone is busy - it feels like we're on the edge of some sort of utopia... it's very exciting (and I can't wait!).

@xzyfer
Contributor

xzyfer commented Nov 22, 2018

Compiling LibSass to WASM is easy. The difficulty is in compiling the C++ code that binds LibSass to Node. This is difficult in part because we need to rely on libuv for async custom importers, and the async render function. This is pretty much where we are stuck right now. The situation may improve when some kind of background worker or threading API lands in WebAssembly.
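
For context, this is the part of the public node-sass API that forces the binding to pause a compilation and hand control back to the event loop; the importer's done callback may fire asynchronously, e.g. after disk or network I/O:

// The async importer contract node-sass has to keep supporting: the compile
// must suspend at the @import and resume once `done` is eventually called.
const sass = require('node-sass');
const fs = require('fs');

sass.render({
  data: '@import "colors"; a { color: $brand; }',
  importer(url, prev, done) {
    // Resolved asynchronously - from disk here, but it could be a cache or the network.
    fs.readFile(`./${url}.scss`, 'utf8', (err, contents) => {
      done(err || { contents });
    });
  },
}, (err, result) => {
  if (err) throw err;
  console.log(result.css.toString());
});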

@justrhysism

justrhysism commented Nov 22, 2018

Reading through this PR it appears that threading is available behind an experimental flag in Node 10.

Obviously there’s risk that might change - but is that enough to “unstuck” you? Is it even what you actually need?

Really wishing I could C and help. Might be time to level up...

Thank you for your prompt response @xzyfer :)

@chpio

chpio commented Jan 9, 2019

Compiling LibSass to WASM is easy. The difficulty is in compiling the C++ code that binds LibSass to Node.

Isn't there a way to use libsass only as a self-contained library, without any other bindings to e.g. file I/O? We could extract the core functionality into a libsass-core or something, and make libsass just a wrapper over libsass-core. That way we could make it use callback-style async file I/O and map that to Node's fs API, so it wouldn't require any direct libsass <=> node magic. We could even make this work in the browser, e.g. as a webpack runtime loader or on an SCSS-to-CSS conversion website.
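
To make the idea concrete, the wrapper could look something like the sketch below. Everything here is hypothetical (there is no libsass-core today); it only illustrates mapping a callback-style readFile onto Node's fs, which a browser build could map onto fetch instead:

// Hypothetical I/O adapter for the proposed libsass-core split.
const fs = require('fs');

// The Node flavour of the I/O interface the core would call back into.
const nodeIo = {
  readFile(path, done) {
    fs.readFile(path, 'utf8', (err, data) => done(err, data));
  },
};

// Imagined core entry point: no direct libsass <=> node magic, just callbacks
// supplied by the host environment.
// libsassCore.compile(source, { io: nodeIo }, (err, css) => { /* ... */ });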

Really wishing I could C and help. Might be time to level up...

i "leveled up" to rust, so im now even less willing to learn/"level down" to c :(

@loilo

loilo commented Mar 13, 2019

FWIW, the folks of sass.js have an experimental WASM branch (which they apparently don't actively work on) that works pretty nicely; I've at least tested it successfully in browsers:

medialize/sass.js#feature/wasm

@glebm

glebm commented Apr 15, 2019

@chpio

chpio commented Apr 15, 2019

@TomiBelan

So the main issue is how to call asynchronous JS functions (importers, Sass functions) from libsass. I see two possible ways to do it:

a) Run libsass in a Worker via worker_threads (only on Node 10+, and behind a flag in 10.*) and use SharedArrayBuffer + Atomics to implement a lock, allowing the worker to synchronously wait for the importer. That "hard part" is already implemented in https://github.com/hyperdivision/async-wasm. I guess @mafintosh already knows about that project, since he's the author. ;)

b) Modify libsass to avoid the need for async - make it somehow possible to suspend the libsass code and resume it when the importer is done. This sounds difficult, but the sass.js code linked by @loilo earlier actually already does it! It happens automatically thanks to Emscripten. They use https://github.com/emscripten-core/emscripten/wiki/Emterpreter and emterpreter_whitelist.json. I think this looks pretty promising, but one possible problem is that sass.js only supports custom importers and not custom functions. Adding custom functions might make that whitelist much longer and force most of the libsass code onto the Emterpreter "slow path". So it would need some benchmarking to find out how much slower it is.
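
A minimal sketch of option (a), assuming worker_threads is available (experimental behind a flag on Node 10; Atomics.notify was still called Atomics.wake on older V8). All names are illustrative, not node-sass internals:

// The worker (which would host the WASM compile) blocks on Atomics.wait
// while the main thread resolves the async importer and notifies it.
const {
  Worker, isMainThread, parentPort, workerData,
} = require('worker_threads');

if (isMainThread) {
  const shared = new SharedArrayBuffer(4 + 4096); // [flag/length][payload]
  const flag = new Int32Array(shared, 0, 1);
  const worker = new Worker(__filename, { workerData: shared });

  // Stand-in for a user-supplied async importer.
  const fakeAsyncImporter = async (url) => `/* resolved ${url} */`;

  worker.on('message', async ({ url }) => {
    const contents = await fakeAsyncImporter(url);
    const bytes = Buffer.from(contents, 'utf8');
    new Uint8Array(shared, 4).set(bytes); // no bounds checking in this sketch
    Atomics.store(flag, 0, bytes.length); // publish the length...
    Atomics.notify(flag, 0);              // ...and wake the worker
  });
} else {
  const shared = workerData;
  const flag = new Int32Array(shared, 0, 1);

  // This would be called synchronously from inside the WASM compile
  // whenever libsass hits an @import.
  function importSync(url) {
    Atomics.store(flag, 0, 0);
    parentPort.postMessage({ url });
    Atomics.wait(flag, 0, 0); // block until the main thread notifies
    const length = Atomics.load(flag, 0);
    return Buffer.from(new Uint8Array(shared, 4, length)).toString('utf8');
  }

  console.log(importSync('colors.scss'));
}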

@TomiBelan

Good news! I did it! 🎉

I have a working "node-sass-wasm" which supports the full node-sass API, including asynchronous importers and functions (thanks to worker_threads), and passes all of test/api.js and test/spec.js. There are no more binaries to download, it has zero dependencies, and it doesn't need rebuilding for every future major version of Node. After some remaining minor cleanups, I'll put it on my GitHub so that everyone can take a look and try it. Stay tuned.

Bad news! It's a total rewrite! 💔

It basically doesn't share any code with node-sass. Especially not the C++ part - it's all done with Emscripten and Embind instead of V8 API and Nan. (More details incoming once I publish the code.)

I'd really like to contribute this code to node-sass so everyone can benefit from it, but I don't have any idea how. Node-sass still needs to support old Node versions and it's not clear to me how to combine them - two parallel codebases sound bad... Node-sass maintainers, what do you think?

@saper
Member

saper commented Jun 18, 2019

Wow, congratulations! Have you also rewritten libsass, or has this been handled by Emscripten? What is the oldest Node version it can run on?

@xzyfer
Contributor

xzyfer commented Jun 18, 2019

I'm curious to see the implementation.

Update from our end: there's work going on right now to enable WASI in LibSass as well as libuv. This is the work we've been waiting for in order to ship node-sass as WASM.

We've investigated Emscripten before, but the resulting WASM module was large and the performance suffered significantly.

@TomiBelan

I'll try to publish the code ASAP so you can take a look. Sorry, it's not quite ready yet.

I didn't rewrite libsass. I'm compiling libsass with Emscripten and wrapping it with Embind, with a bit of special sauce to handle async importers/functions. It can run on node >= 10.5.0 with the --experimental-worker flag, and on node >= 11.7.0 without the flag.

The wasm file is 1.8 MB, which is not a lot. (The binding.node file in node-sass alone is 3.8 MB, and with all dependencies npm i node-sass; du -bsh node_modules is 18 MB.) Runtime performance seems decent, I'll get more data. Although I did notice that the startup time is quite slow: loading the wasm (during the first call to render / renderSync) can take ~0.9 s. :/

@jasonwilliams

Node-sass still needs to support old Node versions and it's not clear to me how to combine them - two parallel codebases sound bad... Node-sass maintainers, what do you think?

Could you do a @next tag or a major version bump which supports Node 10 or above?
It doesn't make much sense to keep two parallel codebases just for Node 8 (I'm assuming your work supports Node 10+).

@xzyfer
Contributor

xzyfer commented Jun 18, 2019

From our point of view we've been waiting for the ecosystem and WASM to catch up. The reality is it's not ready for our use case. Which isn't to say it's not possible, but the results thus far have negated the benefits of node-sass over pure JS implementations.

Supporting the full node-sass API is a very impressive effort and I'm excited to see it. But if we can't overcome the startup or runtime performance issues, we're unlikely to progress down this path. Alternative pure JS implementations exist, at the cost of slower compilation speeds.

We'll continue to invest in improving the installation process as much as npm and Node allow us to. And there is ongoing work in LibSass to reduce the memory footprint and further improve Sass compilation times - widening the gap over pure JS implementations.

@jasonwilliams

@xzyfer are these times measured somewhere? (the pure JS compilation and the C++ compilation)

@xzyfer
Contributor

xzyfer commented Jun 18, 2019

It's worth noting that folks working on WASM and WASI are seriously looking specifically at the issues faced by node-sass and LibSass as part of their work to evolve the specs.

@xzyfer
Contributor

xzyfer commented Jun 18, 2019

@jasonwilliams there have been various benchmarks over the years. You'll want to pay close attention to the language used in these benchmarks. I suggest looking at the raw numbers if they exist.

Some that I'm aware of are:

@xzyfer
Contributor

xzyfer commented Jun 18, 2019

Most of these are out of date now, and none of them include the recent changes on LibSass master (unreleased).

@TomiBelan

The prototype is ready. 🎉 Take a look: https://github.com/TomiBelan/node-sass-wasm

A very unprofessional benchmark says test/api.js + test/spec.js runs in 14.4 seconds on node-sass but in 25.6 seconds on node-sass-wasm. So it's not only the startup time; the execution is also ~2x slower. This is worrying to me, because most of these tests are sass-spec tests which shouldn't spend a lot of time in the binding layer or Emscripten system code. Is this performance penalty intrinsic to WebAssembly itself? It might also be interesting to compare it against sass.js and @kwonoj's libsass-asm.

Anyway, @xzyfer I think I agree: it's best for node-sass to keep using the native library for now. I hope node-sass-wasm can serve as inspiration but currently it's not realistic to merge it - especially because of (un)supported Node versions, but also because of performance. So I won't send it as a PR, but of course feel free to reuse any pieces that look useful (in line with MIT).

Still, I'll put node-sass-wasm on npm, as some users might find it to be a good middle ground between node-sass and native JS implementations. Should I make it a scoped package like @TomiBelan/node-sass-wasm to avoid name-squatting or sounding too "official"?

I'm also curious about your WASI comments; do you have any links? I thought WASI was just a libc-like / POSIX-like "system interface", which didn't sound terribly relevant. Does WASI solve the ABI problem, e.g. converting std::string <-> JS String, and more complex structures? That was the main reason I used Emscripten.
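
For readers wondering what that ABI problem looks like in practice without Embind: the JS side has to copy strings into and out of WASM linear memory by hand. A minimal sketch; compile_string, malloc and free are hypothetical exports used only for illustration, not the actual libsass C API:

// Hand-rolled string marshalling across the JS/WASM boundary.
const { TextEncoder, TextDecoder } = require('util');

const encoder = new TextEncoder();
const decoder = new TextDecoder('utf-8');

function callCompile(wasmExports, source) {
  // Copy the JS string into WASM linear memory as NUL-terminated UTF-8.
  const bytes = encoder.encode(source + '\0');
  const inPtr = wasmExports.malloc(bytes.length);
  new Uint8Array(wasmExports.memory.buffer).set(bytes, inPtr);

  const outPtr = wasmExports.compile_string(inPtr);

  // Read the NUL-terminated result back and copy it into a JS string.
  const heap = new Uint8Array(wasmExports.memory.buffer);
  let end = outPtr;
  while (heap[end] !== 0) end++;
  const result = decoder.decode(heap.slice(outPtr, end));

  wasmExports.free(inPtr);
  return result;
}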

@loilo

loilo commented Jun 22, 2019

I have no deeper insight into WebAssembly perf limits, but I had a similar disappointing experience when I compiled the Rust-written YAML parser serde_yaml to WASM only to see it being 4x slower than the pure JS implementation of js-yaml at all file sizes.

Unfortunately I couldn't get any information from WASM engineers about how WASM performance roughly relates to native, apart from Surma's talk at this year's Google I/O, which stated that WASM is not necessarily faster than JS in general.

@Bnaya

Bnaya commented Jun 22, 2019

@loilo I'm curious, what's the perf difference between js-yaml and native?

@xzyfer
Contributor

xzyfer commented Jun 22, 2019 via email

@loilo

loilo commented Jun 22, 2019

@loilo I'm curious, what's the perf difference between js-yaml and native?

What exactly do you mean by "perf difference"?

@Bnaya

Bnaya commented Jun 22, 2019

Is natively compiled serde_yaml faster than js-yaml, and by how much?

@loilo

loilo commented Jun 22, 2019

Ah, okay — haven't tried that out. I'm completely lacking experience in the systems programming language space anyway, so I can only hope that my perf tests were fair.

If you're interested in the setup, I've written down the approach I took in this Gist.

@xzyfer
Contributor

xzyfer commented Jun 22, 2019 via email

@devsnek

devsnek commented Jun 22, 2019

I'd suggest adding some perf tracing calls at various points (before parse, after parse, before marshalling, after marshalling, etc.) and then hooking them up to the Performance API. My guess is that marshalling the data into the JS world is likely the main slowdown, although there is probably about a 5-15% hit for the parser as well.

You might also want to try Node master, where we have set Node up to make WASM memory ops about 30% faster (coming in v13 unfortunately, due to relying on signals).
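
A minimal sketch of that tracing using Node's perf_hooks; marshalToWasm and wasmExports.compile are stand-ins for the real binding layer, not node-sass functions:

// Mark/measure around the boundary crossings and report the durations.
const { performance, PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: ${entry.duration.toFixed(2)} ms`);
  }
});
obs.observe({ entryTypes: ['measure'] });

// Stubs standing in for the real boundary code:
const marshalToWasm = (scss) => Buffer.from(scss, 'utf8');
const wasmExports = { compile: (buf) => `/* compiled ${buf.length} bytes */` };

function renderWasm(scss) {
  performance.mark('marshal-start');
  const input = marshalToWasm(scss);
  performance.mark('marshal-end');

  performance.mark('compile-start');
  const output = wasmExports.compile(input);
  performance.mark('compile-end');

  performance.measure('marshalling', 'marshal-start', 'marshal-end');
  performance.measure('compile', 'compile-start', 'compile-end');
  return output;
}

console.log(renderWasm('a { color: red; }'));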

@MaxGraey

Agreed with @devsnek. If the slowdown is happening in the main routine, you could try changing -O2 to -O3 + --llvm-lto 1 for emcc. Also try caching the TextEncoder/TextDecoder object creation.

@kripken

kripken commented Jun 24, 2019

A 2x slowdown with wasm is surprising, but one possible reason is crossing the JS/wasm boundary - VM overhead, type conversion overhead, etc. C, C++, and Rust are at a disadvantage when you need to communicate a lot with JS (a future wasm GC might help for GC languages). Skimming the code, it seems like this issue might be relevant, but @TomiBelan it sounds like you think it's not?

Another possible reason is JS can inline at runtime while wasm doesn't. The flags @MaxGraey mentioned might help, but there's no guarantee the LLVM optimizer will manage to do the right thing at compile time...

It may be interesting to run the node profiler on the js and wasm builds, if nothing else helps.

@Daninet

Daninet commented May 25, 2020

Over the last few months I've been porting some hashing algorithms to WASM, and I got some very good results: https://www.npmjs.com/package/hash-wasm#benchmark

For example, with SHA-1 I was able to achieve a 17x speedup compared to the fastest JavaScript-only implementation.
For small input buffers, Node.js's crypto module is about 10% slower than WASM, which makes me think that it's cheaper to call WASM functions than native bindings.
The native SHA-1 from Node.js's crypto module is about twice as fast for large buffers. That's not a surprise to me, because the native algorithm uses architecture-specific, hand-written ASM code to achieve better performance.

So I see huge potential in migrating node-sass to WASM. With a hand-tuned JS-WASM boundary, I think near-native performance can be achieved.

@StEvUgnIn

StEvUgnIn commented Sep 20, 2020

My intuition tells me that the C++ library, being optimized for GCC compilation, might need to be ported to LLVM to get better performance on WebAssembly.

That would explain why Mozilla advises using their programming language Rust, as it provides such native low-level optimization through its typing, safety and execution model.

LLVM-optimized C/C++ code needs a near-complete rewrite. So we could first experiment with an LLVM IR to Wasm compilation chain: produce and optimize LLVM IR from the code, then decompile it to C++, which would be turned into Wasm using Emscripten.

Edit: see example here https://gist.github.com/alloy/d86b007b1b14607a112f

@kripken

kripken commented Sep 20, 2020

There is probably no need to go that low-level. Writing a custom C API interface and using that from JS could run at optimal speed.

By doing it that way you have a direct call from JS to C (C to JS is possible too). That will still have some VM overhead from crossing the boundary, but it would be small unless you cross it for a very small amount of work. Typically you would write some data and then do a single call to process it all, instead of doing many individual calls for a single item, etc.

How much work that would be would depend on the size of the API layer here, but usually it's very little additional C and JS. I'd be happy to help someone with any interfacing questions if there are any.

@StEvUgnIn

There is probably no need to go that low-level. Writing a custom C API interface and using that from JS could run at optimal speed.

By doing it that way you have a direct call from JS to C (C to JS is possible too). That will still have some VM overhead from crossing the boundary, but it would be small unless you cross it for a very small amount of work. Typically you would write some data and then do a single call to process it all, instead of doing many individual calls for a single item, etc.

How much work that would be would depend on the size of the API layer here, but usually it's very little additional C and JS. I'd be happy to help someone with any interfacing questions if there are any.

No... maybe I expressed myself poorly in my first post. Why would you build a C-to-JS API for Wasm code??

@kripken

kripken commented Sep 20, 2020

@StEvUgnIn I may have misunderstood you, sorry if so. But "be ported to LLVM" and "LLVM-optimized C/C++ code needs a near-complete rewrite" both sound like a lot more work than is necessary.

The standard approach for using wasm from JS at high speed is to write a C API for the wasm. C APIs are well defined and can be called directly from JS with very little overhead. That is, JS literally calls the wasm exports, which are defined by the C API.

Tiny example (without all the details):

// foo.cpp

// C API function
extern "C" int foo() {
  return 10;
}

// foo.js

// call the C API
console.log(wasmExports.foo()); // prints 10
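
To fill in where wasmExports would come from in Node, a minimal loader sketch, assuming foo.cpp was compiled to a foo.wasm that needs no imports (achievable for a trivial function like this, e.g. via the clang/wasm-ld route shown further down or Emscripten's standalone mode):

// foo.node.js - hypothetical loader for the example above.
const fs = require('fs');

(async () => {
  const bytes = fs.readFileSync('./foo.wasm');
  const { instance } = await WebAssembly.instantiate(bytes, {});
  const wasmExports = instance.exports;
  console.log(wasmExports.foo()); // prints 10
})();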

@StEvUgnIn

StEvUgnIn commented Sep 20, 2020

@StEvUgnIn I may have misunderstood you, sorry if so. But be ported to LLVM and LLVM optimized C/C++ code need a near complete rewrite all sounds like a lot more work than is necessary.

That is why LLVM offers the possibility of producing an intermediate representation with LLVM IR (*.ll, *.bc). The only thing to do is to look at the documentation, optimize the C++ compilation to LLVM IR, and convert the LLVM IR to Wasm. LLVM 10, along with clang, now even supports targeting WebAssembly.

clang --target=wasm32 -emit-llvm -c -S lib.c -o lib.ll
llc -march=wasm32 -filetype=obj lib.ll -o lib.o
wasm-ld --no-entry --export-all lib.o -o lib.wasm

The standard approach for using wasm from JS at high speed is to write a C API for the wasm. C APIs are well defined and can be called directly from JS with very little overhead. That is, JS literally calls the wasm exports, which are defined by the C API.

We could try this option if it brings more optimization than node-sass-wasm does. Any alternative deserves to be experimented with, even a fallback to asm.js for unsupported Node.js versions.
