Handle i/o backpressure with sapper export #869
Conversation
@Conduitry - can we get this PR looked at? The same unit test for this in AppVeyor seems to be failing for others as well. Running the PR earlier did not trigger that failure, and we cannot reproduce it locally. From what we see it does not seem to be a valid failure caused by this code, as it only impacts …
updated the branch with a fix for the race condition brought up in #893
Would love to see some movement with this one. The issue in #893 is a blocker which this PR fixes. I'd rather not need to maintain a sapper fork just for fixes.
@Conduitry would it be possible to get a review of this? I currently have to do exports in somewhat complicated batches to make sure I get all the posts.
Bump...
Any chance this can be merged soon?
I've added the suggestion from @sw-yx in #864, as well as @benmccann's suggestion. I ran the existing export, the original PR, and my updated PR, and whilst unscientific, the original PR was 2x the speed of the existing export, and the updated PR seems to have shaved another 20% off (though this could be due to the API warming up, etc.). Either way this gets a 👍 from me.
i've spent so much time trying to figure out why sapper export often misses exporting the [slug].json for certain pages. in b0b3b83 i assumed the issue was (somehow) using hyphens in slugs, so i switched to using underscores, but this started happening again. i found the following issues, which are all "closed" or "merged", but this is _definitely_ still a bug. i ended up looking at the options for the sapper export command and saw i can turn off concurrent requests, which seems to have fixed the issue. - sveltejs/sapper#1078 - sveltejs/sapper#893 - sveltejs/sapper#869 hoping to submit a repro of the issue to sapper soon
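For reference, the workaround described above can be applied from the command line. This is a sketch assuming the documented `--concurrent` flag of `sapper export` (which controls how many pages are fetched in parallel); check `sapper export --help` for the exact flags in your version:

```shell
# Limit the export to one request at a time, effectively disabling
# concurrent fetches, as a workaround for pages being skipped.
npx sapper export --concurrent 1
```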
Rewrite of sapper export to reduce its memory footprint. Related to issue #851.
Adds a queue that allows any number of URLs to be queued up, but limits the amount of data held in memory based on the concurrent export parameter.
In our test case, a site with 69,000 pages took 17 minutes to export with a peak memory usage of around 3.8 GB. After the queue change, the export took 13 minutes with peak memory usage around 600 MB. In a use case where the export process is bottlenecked by the text parsing in the handle function or the file writing in the save function, this pattern offers performance gains and a reduction in memory usage.
This pattern offered slight performance benefits over making the whole page export process (fetch, parse URLs, and save the file) a single promise in the current queue.
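The backpressure idea behind the PR can be sketched as a concurrency-limited queue: any number of tasks may be enqueued, but at most `concurrent` of them run (and hold their data in memory) at once. This is a minimal illustrative sketch, not Sapper's actual implementation; `createQueue` and its shape are hypothetical:

```javascript
// Minimal concurrency-limited queue (illustrative sketch).
// At most `concurrent` tasks run at once; the rest wait in an array,
// so only a bounded number of pages is held in memory at any time.
function createQueue(concurrent) {
	let active = 0;
	const waiting = [];

	function next() {
		if (active >= concurrent || waiting.length === 0) return;
		active += 1;
		const { task, resolve, reject } = waiting.shift();
		task().then(resolve, reject).finally(() => {
			active -= 1;
			next(); // start the next waiting task, if any
		});
	}

	return {
		// `task` is a function returning a promise (e.g. fetch + save a page).
		// Returns a promise that settles when the task eventually runs.
		add(task) {
			return new Promise((resolve, reject) => {
				waiting.push({ task, resolve, reject });
				next();
			});
		}
	};
}
```

In the export scenario, each queued task would fetch a page, parse it for further URLs, and write the file; because `waiting` holds only URLs (not page bodies), memory stays bounded by the concurrency limit rather than by the total number of pages.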