
Upgrading patternlab to v1.3.0 is doubling my compilation speed #316

Closed
bramsmulders opened this issue Apr 18, 2016 · 29 comments

@bramsmulders
Contributor

I am using Pattern Lab Node v1.3.0 on Mac, using the grunt configuration.

Quick note

I have been using PL Node for over 9 months now on a very big project. I have many patterns and lots of data, so builds were taking ~12 seconds on each save. We are in the process of updating/refactoring the styleguide to make it smaller and leaner.

Expected Behavior

Build times to take ~12 seconds.

Actual Behavior

After upgrading to pl-node v1.3.0, compiling takes ~24 seconds. This is twice as long as it took before.

Steps to Reproduce

Switch between v1.2.2 and v1.3.0 and compare build times.

@e2tha-e
Contributor

e2tha-e commented Apr 18, 2016

@bramsmulders Out of curiosity, could you check out the latest commit from PR #277, run a build, and check the time? This would be commit 470f54d from https://github.com/e2tha-e/patternlab-node.git, branch 250-with-recursion-limiter

@bmuenzenmeyer
Member

As @e2tha-e points out, there is some recursion that made it into this version to fix an issue, but without the perf improvements @e2tha-e is addressing in that PR. Unfortunately, that PR is too big to merge in its current state and is likely destined for dev-2.0-core. I take full responsibility for pushing the version out a bit half-baked and with potential performance problems. For the meantime, I've updated the release with a note regarding potential issues.

@bramsmulders
Contributor Author

@e2tha-e That branch is working as it should; compiling takes ~12 seconds again. So I can confirm the recursion limiter works.

@bmuenzenmeyer can you cherry-pick the commits that contain the perf improvements?

@e2tha-e
Contributor

e2tha-e commented Apr 19, 2016

@bramsmulders Super glad to hear! Thanks for taking the time to test!

@bmuenzenmeyer
Member

@bramsmulders I am familiar with that concept but have never tried it - especially against such a large commit set.

I am thinking now that an acceptable approach may be to review #277 for acceptance on the 1.X.X track (as it already passes tests and is merge-able) and then work to forward-port it (perhaps via a cherry-pick) into dev-2.0-core, which is also engine-aware.

Looking for feedback from you all, including @e2tha-e and @geoffp on that.

@e2tha-e
Contributor

e2tha-e commented Apr 19, 2016

@bramsmulders @bmuenzenmeyer
Just as an FYI, Git cherry-picking from PR #277 will be impossible given the scale of the PR and the number of commits, rebases, etc. You'll really have to consider how you terminate the development flow of the 1.x.x major version, and how you'll resume development on the 2.x.x major version. Given that in semver, major versions are not backwards compatible, it should be fine to just merge the PR into 1.x.x and redo the PR for 2.x.x. I am totally fine with this option. However, this is just a suggestion, and I leave the room open for discussion and debate.

@bmuenzenmeyer
Member

@e2tha-e
I'd like to think through it a bit more, but your proposal aligns with my thinking - assuming you are okay with porting this functionality over to 2.x.x as your schedule allows - it's a big ask!

@geoffp
Contributor

geoffp commented Apr 27, 2016

I'm still nervous, but from scanning it as well as I can, it looks good, and as long as @e2tha-e is committed to forward porting it to dev-2.0, I'll sign off!

When it does get forward ported, I'd expect most of this to find its way into the Mustache engine. @e2tha-e, for bonus points, move the pattern parameter and stylemodifier machinery into the Mustache engine. :) Nah, I kid, that's totally a separate PR.

@e2tha-e
Contributor

e2tha-e commented Apr 28, 2016

@bmuenzenmeyer @geoffp Yes, I'm looking forward to getting this into dev-2.0!

@e2tha-e
Contributor

e2tha-e commented Aug 11, 2016

The work for this is in the pull requests referencing issue #250

@bramsmulders
Contributor Author

bramsmulders commented Nov 7, 2016

Hello again. I was updating from PL Node v2.5.1 (where this issue was no longer present) to v2.6.1 in a new project (average build time: ~6 seconds), and build times increased from ~6 seconds to ~12 seconds.

Has something been pushed that breaks the fix mentioned here?

@bmuenzenmeyer
Member

Incremental builds are the solution for this going forward. I am interested in more performance improvements if they are small and aimed at memory management and the like.

@bramsmulders
Contributor Author

This is actually not fixed by incremental builds. The problems from this bug caused the total build time to be doubled; incremental builds only shave off half of that (already doubled) time. So no actual gains here.

The problem lies in the use of lots of data. For every separate data.json file, the build takes longer to load, even before it actually parses templates.

@tburny tburny reopened this Jan 30, 2017
@tburny

tburny commented Jan 30, 2017

I am reopening this issue because @bramsmulders is correct here.

@bramsmulders
Contributor Author

I'm going to create a demo data/pattern set to demonstrate it tonight.

@tburny

tburny commented Jan 30, 2017

That would be amazing! Maybe we can do some profiling then?

@bramsmulders
Contributor Author

bramsmulders commented Jan 30, 2017

Hmmm, I was upgrading a project of ours to v2.7.2 (lots of templates, lots of data). After upgrading and running Pattern Lab, I get a JavaScript heap out of memory error after about 2 minutes of compiling. This project was coming from v1.2.2, where builds usually took ~10 seconds.

In comparison:
The same set of data & patterns compiles easily within 2.5 seconds in PL-PHP.

I cannot grasp this amount of difference between PHP & Node. I can live with build times up to ~10 seconds, but this is affecting the workflow a LOT more.

I will recreate the same amount of data, with no company-related content, to test.
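If the heap OOM blocks testing entirely, one stopgap (not a fix) is to raise V8's old-space limit before running the build; the `grunt patternlab` task name below is an assumption, substitute your own:

```shell
# Stopgap for "JavaScript heap out of memory" (not a fix): raise V8's
# old-space limit, which defaults to roughly 1.5 GB on 64-bit Node.
export NODE_OPTIONS="--max-old-space-size=4096"   # allow up to 4 GB
# then run the build as usual, e.g. (task name is an assumption):
#   grunt patternlab
```

This at least lets a pathological build finish so it can be profiled, rather than dying mid-compile.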

@tburny

tburny commented Jan 30, 2017

By the way, which pattern engine do you use?

Can you try to rebuild your project with cleanOutput = true? This should disable incremental builds so we can rule them out as a cause of the problem.
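For reference, such a flag would live in patternlab-config.json. Note the key spelling is an assumption here: the comment above says `cleanOutput`, while some 2.x configs ship a `cleanPublic` key; check your own version's default config for the exact name.

```json
{
  "cleanPublic": true
}
```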

Also, maybe you can try to get a heap dump?
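One way to capture such a heap dump on Node >= 12 is the `--heapsnapshot-signal` flag: send the process SIGUSR2 and load the resulting `.heapsnapshot` file in Chrome DevTools (Memory tab). The sketch below uses a stand-in process; in practice you would launch the real build (e.g. `grunt patternlab`, an assumed task name):

```shell
# Sketch: capture a V8 heap snapshot from a running process by sending
# SIGUSR2 (Node >= 12). A 5-second stand-in process is used here; replace
# it with your real build command.
node --heapsnapshot-signal=SIGUSR2 -e 'setTimeout(() => {}, 5000)' &
BUILD_PID=$!
sleep 1
kill -USR2 "$BUILD_PID"   # writes Heap.<timestamp>.<pid>...heapsnapshot to the cwd
wait "$BUILD_PID"
ls *.heapsnapshot         # open this file in Chrome DevTools > Memory
```

Comparing snapshots taken early and late in a slow build should show what is accumulating on the heap.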

@bramsmulders
Contributor Author

I'm using the mustache engine, out of the box.
With incremental builds disabled, everything compiles, but a build still takes ~127 seconds.

All templates are processed, per the console output, within 90 seconds, but the task is not complete then; it waits an additional ~30 seconds before finishing.

@geoffp
Contributor

geoffp commented Jan 31, 2017

While profiling the asynchronous build work, I found that a major source of delay is the HTML beautification step. I forget what the option is called just now (I'm on my phone).

@bmuenzenmeyer
Member

> I forget what the option is called just now

cleanOutputHtml

@geoffp
Contributor

geoffp commented Feb 1, 2017

That's the one. I had the impulse to implement a worker thread pool to make it faster, but I controlled myself. @bramsmulders, do you have cleanOutputHtml set?
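For reference, disabling the beautification step would look like this in patternlab-config.json. This assumes `cleanOutputHtml` takes a boolean as its name suggests; verify against your version's default config:

```json
{
  "cleanOutputHtml": false
}
```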

@bramsmulders
Contributor Author

Will check in a few, hold on

@bmuenzenmeyer
Member

@bramsmulders can you point to dev and retest please?

@bmuenzenmeyer
Member

I think I have fixed this with PL Node 2.9.X

@bramsmulders
Contributor Author

bramsmulders commented Mar 21, 2017

@bmuenzenmeyer Oh yeah! First tests are promising: from ~12 to ~4 seconds, cold start.
Incremental builds now take ~1.3 seconds instead of ~4.

@bmuenzenmeyer
Member

@bramsmulders excellent news. Makes sense really, since I basically reverted the logic that caused the problem in the first place.

Are you alright with closing this issue, at long last? 😄

@bramsmulders
Contributor Author

bramsmulders commented Mar 22, 2017

Just do it! 💪🏻

i'll do it 😉

Thanks for all the hard work on making this a success!

@bmuenzenmeyer
Member

👍

Sorry it took so long to identify the culprit. Thanks for sticking with the tool and living with slower performance for 11 months!
