
memory leaks when cropping -- perhaps a way to clean things up manually? #91

Open
sebastiangug opened this issue Dec 30, 2024 · 2 comments


sebastiangug commented Dec 30, 2024

I extracted the core functionality of my app into this repo to reproduce the issue: https://github.com/sebastiangug/openseadragon-svelte

Everything runs exclusively in the browser.

  1. git clone https://github.com/sebastiangug/openseadragon-svelte.git
  2. pnpm i
  3. pnpm dev
  4. navigate to localhost:3000 in the browser
  5. drag and drop (or browse for) a semi-large file, 3000px minimum; something in the region of 30 MB should do the trick. Suggested from Unsplash: https://unsplash.com/photos/a-horse-standing-in-a-field-with-a-mountain-in-the-background--dNoHZMLKI0 (original size will be ~27 MB).
  6. Click 'generate'.
  7. It will resize the image based on zoom levels.
  8. It will start cropping it into pieces.

How it works under the hood.

  • 1 service
  • 1 queue
  • web workers instantiated based on the maximum number of threads, currently hardcoded to 4 on this line:

https://github.com/sebastiangug/openseadragon-svelte/blob/973a74b34080ea4921912cdc8603dc0508e308e1/src/lib/services/vips.service.svelte.ts#L78

This can be reduced to 1, and it makes no difference; however, attempting to instantiate wasm-vips inside just 1 worker/thread/core will very quickly crash every browser I have tried on my (irrelevantly) beefy desktop machine with 64GB of RAM and 16c/32t.

  • each web worker is instantiated without wasm-vips
  • an action is sent to the web worker to load wasm-vips, putting it in a pending state which is displayed at the top of the page.
  • once a worker has imported vips, it moves to 'available' and starts processing requests from the queue.
  • only 1 request/action per web worker at any given time, so there's a great deal of control (sketched after the link below).
  • all wasm-vips code is contained in this file:

https://github.com/sebastiangug/openseadragon-svelte/blob/main/src/lib/workers/vips.worker.ts
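To make that lifecycle concrete, here is a rough sketch of the message handling I mean; the action and state names are illustrative rather than the exact ones in the repo:

type WorkerState = 'pending' | 'available' | 'busy';

let vips: any;                       // set once wasm-vips has been imported
let state: WorkerState = 'pending';  // reported to the main thread and shown at the top of the page

self.onmessage = async (event: MessageEvent) => {
  const data = event.data;
  switch (data.type) {
    case 'load':
      // Import wasm-vips on demand; the worker stays 'pending' until this resolves.
      vips = await (await import('wasm-vips')).default();
      state = 'available';
      self.postMessage({ type: 'loaded', workerId: data.workerId });
      break;
    case 'crop':
    case 'resize':
      // Only one request/action per worker at any given time.
      state = 'busy';
      // ... run the corresponding vips operation and postMessage the result ...
      state = 'available';
      break;
  }
};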

Resizing seems to be OK cleanup-wise -- I might be wrong, as I can't find a way to effectively debug this -- but cropping will hit memory limits quite quickly.

cropping function:

const crop = async (data: IVipsCropInput) => {
  const image = vips.Image.newFromBuffer(await data.blob.arrayBuffer());

  const blob = image.crop(
    data.startX,
    data.startY,
    data.tileWidth,
    data.tileHeight
  );

  const response: IVipsCropResponse = {
    blob: new Blob([blob.jpegsaveBuffer({ Q: 95 })], { type: "image/jpeg" }),
  };

  self.postMessage({
    type: "success",
    requestId: data.requestId,
    workerId,
    data: response,
  });
};

I have tried adding blob.delete() and image.delete() after posting the message.
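Concretely, the end of the crop function looked roughly like this with that change:

  self.postMessage({
    type: "success",
    requestId: data.requestId,
    workerId,
    data: response,
  });

  // attempted cleanup once the result has been posted
  blob.delete();
  image.delete();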

I have also tried ditching the blob on the service side -- I think that's still in place -- hoping that JS garbage collection would just pick the 'cropped' image up from memory.

But the memory usage seems to be largely per-worker.

Even with a source image of only 1.87MB it shits the bed quite quickly. I would love some more guidance on memory cleanup or debugging in general, because otherwise it works quite well and quite fast after initialisation.
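In case it helps, one rough way to watch overall memory from the main thread (not something the repo does; Chrome-only, it needs a cross-origin isolated page, and it only gives an estimate) would be:

// Rough probe only: requires crossOriginIsolated === true and a Chromium
// browser; resolves to an estimate covering the page and its workers.
async function logMemory(label: string): Promise<void> {
  if (crossOriginIsolated && 'measureUserAgentSpecificMemory' in performance) {
    const result = await (performance as any).measureUserAgentSpecificMemory();
    console.log(label, (result.bytes / 1024 / 1024).toFixed(1), 'MB');
  }
}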


@sebastiangug (Author)

I have made the following modifications for further testing:

  • Copy over the array buffer for resizing (so they're done in parallel) -- no problem with memory, even with 60MB files; I'll try larger over time.
  • Use transferable objects (and confirm that they work) for the cropping. This added complication meant that the source buffer must be returned by the worker and the reference updated in the main thread, since once transferred, the array buffer on the sending side is detached and stays empty forever (sketched after this list).

I made sure to assign the cropped image and the source image to two different variables, then call .delete() on both, as such:

	const image = vips.Image.newFromBuffer(data.imageBuffer);
	const cropped = image.crop(data.startX, data.startY, data.tileWidth, data.tileHeight);

	const response: IVipsCropResponse = {
		imageBuffer: cropped.jpegsaveBuffer({ Q: 90 }),
		sourceBuffer: data.imageBuffer
	};

	image.delete();
	cropped.delete();
  • No longer storing cropped blobs in an object on the main thread; instead, converting the array to a blob and using IndexedDB to keep track of them for eventual upload/saving.
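For context, the transferable round trip looks roughly like this on both sides; variable and message names are mine, and worker, file, requestId and the crop parameters (startX, startY, tileWidth, tileHeight) are assumed to already be in scope:

// Main thread: hand the source buffer to the worker via the transfer list.
// Zero-copy, but imageBuffer is detached here (byteLength === 0) until the
// worker transfers it back.
let imageBuffer: ArrayBuffer = await file.arrayBuffer();

worker.postMessage(
  { type: 'crop', requestId, startX, startY, tileWidth, tileHeight, imageBuffer },
  [imageBuffer]
);

worker.onmessage = (event: MessageEvent) => {
  if (event.data.type === 'success') {
    // Re-adopt the returned source buffer so the next tile of the same image can use it.
    imageBuffer = event.data.data.sourceBuffer;
  }
};

// Worker: include the source buffer in the transfer list of the response;
// otherwise it is structured-cloned (copied) and the main thread's original
// reference stays detached:
// self.postMessage({ type: 'success', requestId: data.requestId, workerId, data: response }, [data.imageBuffer]);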

Although with some added failure handling and worker re-instantiation I was able to somewhat achieve what I was aiming for, I'm afraid to give the worker ALL crop tasks in one go, as the cropped array buffers seem to pile up and eventually the whole thing crumbles.

I'm very very interested in figuring out either what I'm doing wrong, or if there's an actual leak that can be plugged to make this more efficient & viable.

@kleisauke (Owner)

Hi @sebastiangug,

Could you try this patch?

--- a/src/lib/workers/vips.worker.ts
+++ b/src/lib/workers/vips.worker.ts
@@ -44,6 +44,8 @@ const loadVips = async (data: IVipsLoadInput) => {
   if (!vips) {
     await import("wasm-vips").then(async (m) => {
       vips = await m.default();
+      // Disable the libvips cache
+      vips.Cache.max(0);
     });
 
     // setting worker id
@@ -58,7 +60,9 @@ const loadVips = async (data: IVipsLoadInput) => {
 };
 
 const crop = async (data: IVipsCropInput) => {
-  const image = vips.Image.newFromBuffer(await data.blob.arrayBuffer());
+  const image = vips.Image.newFromBuffer(await data.blob.arrayBuffer(), '', {
+    access: vips.Access.sequential // 'sequential'
+  });
 
   const blob = image.crop(
     data.startX,
@@ -70,6 +74,8 @@ const crop = async (data: IVipsCropInput) => {
   const response: IVipsCropResponse = {
     blob: new Blob([blob.jpegsaveBuffer({ Q: 95 })], { type: "image/jpeg" }),
   };
+  image.delete();
+  blob.delete();
 
   self.postMessage({
     type: "success",
@@ -85,7 +91,9 @@ const resize = async (data: IVipsResizeInput) => {
   meta.mime = "image/jpeg";
 
   // resizing
-  const img = vips.Image.newFromBuffer(await data.file.arrayBuffer());
+  const img = vips.Image.newFromBuffer(await data.file.arrayBuffer(), '', {
+    access: vips.Access.sequential // 'sequential'
+  });
   const scale =
     data.scale ?? scaleFactor(img.width, img.height, data.longEdge!);
   const resized = img.resize(scale);
@@ -98,6 +106,9 @@ const resize = async (data: IVipsResizeInput) => {
   const blob = new Blob([resized.jpegsaveBuffer({ Q: 95 })], {
     type: "image/jpeg",
   });
+  img.delete();
+  resized.delete();
+
   meta.size = blob.size;
   // preparing response
   const response: IVipsResizeResponse = {

This opens the images in sequential access mode, loading pixels only on demand as it generates output, which can help reduce memory usage. I've also ensured that each image instance returned from wasm-vips is properly delete()-d (see #13 (comment) for the reasoning behind this).
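Applied to the crop handler above, the patched version ends up looking roughly like this:

const crop = async (data: IVipsCropInput) => {
  // Open in sequential access mode so pixels are decoded on demand.
  const image = vips.Image.newFromBuffer(await data.blob.arrayBuffer(), '', {
    access: vips.Access.sequential // 'sequential'
  });

  const blob = image.crop(
    data.startX,
    data.startY,
    data.tileWidth,
    data.tileHeight
  );

  const response: IVipsCropResponse = {
    blob: new Blob([blob.jpegsaveBuffer({ Q: 95 })], { type: "image/jpeg" }),
  };

  // Free both wasm-vips image instances once the JPEG bytes have been
  // copied into the response Blob.
  image.delete();
  blob.delete();

  self.postMessage({
    type: "success",
    requestId: data.requestId,
    workerId,
    data: response,
  });
};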

Note that this still fails with that 11648x8736 image, as the maximum and initial memory size for wasm-vips is set to 1GiB, which is likely insufficient for processing these images.

-sINITIAL_MEMORY=1GB

This limitation is more of a historical artifact, as TypedArray indices and lengths were restricted to a maximum size of 2^30 - 1 (just under 1 GiB) back in the day.

# note 2: Browsers appear to limit the maximum initial memory size to 1GB, set `INITIAL_MEMORY` accordingly.
