[Question] Compiler Performance #4156

Closed
denysvitali opened this issue Mar 16, 2017 · 12 comments

@denysvitali

Hello everyone,

I'm opening this issue because I wanted to know more about Crystal's compiler: I noticed while running some tests (check here for more info) that Crystal has the slowest release-mode compilation of the languages I tried.
Below is a small table of what I ran my tests with. Crystal took 2.248 s to compile and optimize a simple puts "Hello world".
Is that normal?

| Programming Language | Source Size (bytes) | Binary Size (bytes) | Compilation time (s) | Run time (s) |
|----------------------|---------------------|---------------------|----------------------|--------------|
| C                    | 58                  | 8368                | 0.037                | 0.002        |
| Crystal (Debug)      | 19                  | 601952              | 0.537                | 0.005        |
| Crystal (Release)    | 19                  | 227168              | 2.248                | 0.005        |
| Java                 | 89                  | 390*                | 0.349                | 0.002        |
| Rust (Debug)         | 40                  | 1911160             | 0.202                | 0.004        |
| Rust (Release)       | 40                  | 1911008             | 0.180                | 0.004        |

If you have any suggestions, please let me know.
You can find related info, like the Crystal version and the sources, here.
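
For reference, here is a minimal sketch of how timings like these could be reproduced from a shell. The file names and the C optimization flag are assumptions; the exact commands and sources used for the numbers above are in the linked repository.

    # Hypothetical reproduction of the compile-time column (Linux shell assumed)
    time gcc -O2 hello.c -o hello_c          # C (optimization flag is a guess)
    time crystal build hello.cr              # Crystal, debug
    time crystal build --release hello.cr    # Crystal, release
    time javac Hello.java                    # Java (presumably the 390-byte figure is the .class file)
    time rustc hello.rs                      # Rust, debug
    time rustc -O hello.rs                   # Rust, release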

@refi64
Contributor

refi64 commented Mar 16, 2017

There is no way on earth Rust compiled a freaking release binary faster than the debug one. Does Rust cache any object files or LLVM IR?

@refi64
Contributor

refi64 commented Mar 16, 2017

Also, only Java has a 90-byte hello world program...

@bcardiff
Member

I would suggest checking the LLVM version you are using; that might affect the results.

Also, crystal build -s might offer some insight into how the time is split among phases. Most of it is probably the bc+obj emission done by LLVM, plus I/O.
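
For example (a sketch; -s is the short form of --stats, and hello.cr stands in for the benchmarked one-liner):

    # Print per-phase timing statistics for a release build
    crystal build --release --stats hello.cr

The output breaks the build into phases (parsing, semantic analysis, codegen, etc.), which should show how much of the 2.2 s is spent inside LLVM.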

@RX14
Contributor

RX14 commented Mar 16, 2017

You'll want to run rm -Rf ~/.cache/crystal to clean up Crystal's cache. I'm not sure what caching Rust does, but I assume it does some. Also keep in mind that Crystal compiles completely from source, with no modularization. Couple that with a large, feature-rich stdlib and there's a lot of parsing, typing and compiling going on behind the scenes.

This stdlib overhead probably doesn't scale much with project size. In fact, compiling the Crystal compiler itself in debug mode (with no cache) takes only 37 seconds, while compiling hello world takes 0.6 s. That's only about 62 times slower to build the entire compiler than to build a single hello world, and I can tell you the compiler is a lot more than 62 times larger than hello world.
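
A sketch of a cold-versus-warm comparison along those lines (assuming GNU time is available as /usr/bin/time):

    rm -rf ~/.cache/crystal                                  # drop Crystal's compilation cache
    /usr/bin/time -f "cold: %e s" crystal build hello.cr     # first build repopulates the cache
    /usr/bin/time -f "warm: %e s" crystal build hello.cr     # second build reuses cached object files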

@denysvitali
Author

@kirbyfan64
Please note that the test may have been influenced by the CPU load at the time: I cannot guarantee the exact same load was maintained, but you can still try it yourself with the sources.
Also note that by "Release" I meant rustc -O hello.rs, which is not the highest optimization level (it should be equivalent to -O2). If you'd like, I'll redo the test at the highest optimization level.

@bcardiff I'll check that out, thank you!
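
For reference, a sketch of the two rustc invocations being compared (file name assumed):

    rustc -O hello.rs                  # -O is documented as equivalent to -C opt-level=2
    rustc -C opt-level=3 hello.rs      # the highest standard optimization level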

@ysbaddaden
Contributor

Crystal tells LLVM to optimize with -O3 (aggressive) in release mode.

@CapacitorSet

Regarding the discrepancies in compile times, did you take only one measurement, or did you average several? The latter would help smooth out inconsistencies due to CPU load and disk usage.
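
One way to do that, as a sketch: hyperfine is not mentioned in this thread and is only an assumed tool, but it repeats a command several times and reports the mean and standard deviation.

    # Average several cold-cache release builds
    hyperfine --runs 5 --prepare 'rm -rf ~/.cache/crystal' 'crystal build --release hello.cr'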

@ozra
Contributor

ozra commented Mar 17, 2017

@denysvitali

  • Crystal does near-global type inference, so a higher baseline compilation time is to be expected on that account alone.
  • Any additional time in "release" mode will be almost exclusively LLVM time. However, it's not impossible that Crystal generates much more redundant code than Rust, which LLVM then has to work through, optimize and finally de-duplicate (or not). So even though both are LLVM backends (I don't know which compiler you used for C), the luxury of helpful type inference may have its cost; a rough way to check the frontend/LLVM split yourself is sketched below. This can most likely be optimized a lot on the Crystal side in the future; however, remember the rules of optimization: 1. Don't do it. 2. Don't do it unless there's really a problem. 3. etc... ;-) That is, there are lots of priorities much higher than that, and compilation time for debug builds is much more important.
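
A rough way to check that split, assuming the same one-line hello.cr as above (the flags are listed in crystal build --help):

    time crystal build --no-codegen hello.cr     # frontend only: parse, macro expansion, type inference
    time crystal build hello.cr                  # debug: frontend + unoptimized codegen
    time crystal build --release hello.cr        # release: frontend + codegen + LLVM -O3 passes

The difference between the last two is roughly the extra LLVM optimization time being discussed.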

@asterite
Member

The compiler is known to be slow in release mode, mostly because of LLVM optimizations, but also because we aren't doing incremental compilation yet. That is planned, but there's nothing we can do about it at the moment.

@denysvitali
Author

Thanks everyone for answering my question, keep up the good work! 🥇 👍

@simonhf

simonhf commented Jun 3, 2020

> The compiler is known to be slow in release mode, mostly because of LLVM optimizations, but also because we aren't doing incremental compilation yet. That is planned, but there's nothing we can do about it at the moment.

I noticed incremental compilation was mentioned above on Mar 23, 2017, and today, in June 2020, it's still on the roadmap. Any idea of the time frame in which it's going to arrive?

@waj
Member

waj commented Jun 3, 2020

@simonhf Incremental compilation doesn't have a defined time frame yet. As I commented here, we're focusing on the 1.0 release now, and that implies making the compiler semantics and the standard library stable first.
