
fix: throttle the rate AND number of request_state msg'es from the frontend #535

Merged

Conversation

maartenbreddels
Member

Fixes #534

Related: jupyter-widgets/ipywidgets#2765

@jasongrout this is a variant of your mapBatch that tries to keep 'room' messages in flight, so as soon as one reply comes in (and the rate is not too high), the next message is sent immediately. This should decrease latency.

Let me know what you think, we probably want this functionality to go into jupyter-widgets.
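
The approach could be sketched roughly as follows. This is a minimal illustration of the idea, not the PR's actual code; `limitInFlight` and `sendRequestState` are hypothetical names:

```typescript
// Minimal sketch: allow at most `room` messages in flight, so the next
// message goes out as soon as a slot frees up. Illustrative only;
// `sendRequestState` below is a placeholder, not Voila's actual API.
function limitInFlight(room: number) {
  let active = 0;
  const waiting: Array<() => void> = [];

  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= room) {
      // Wait for a finishing task to hand us its slot.
      await new Promise<void>(resolve => waiting.push(resolve));
      // The finisher transferred its slot; `active` already counts us.
    } else {
      active++;
    }
    try {
      return await task();
    } finally {
      if (waiting.length > 0) {
        // Transfer this slot directly to the next waiter (count unchanged).
        waiting.shift()!();
      } else {
        active--;
      }
    }
  };
}

// Usage sketch: send all request_state messages, at most 100 in flight.
// const limit = limitInFlight(100);
// await Promise.all(modelIds.map(id => limit(() => sendRequestState(id))));
```

Handing the slot directly to the next waiter (rather than decrementing and re-incrementing) avoids a race where a newly arriving task could sneak past a woken waiter and exceed the limit.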

js/package.json Outdated
@@ -27,6 +27,7 @@
"css-loader": "^3.0.0",
"file-loader": "^4.0.0",
"npm-run-all": "^4.1.5",
"promise-semaphore": "^0.2.8",
Member

This package is still using bluebird promises and hasn't been updated in 3 years.

Member Author

Yes, I'm not happy with that package either; I'm looking for something else. https://github.com/sindresorhus/p-queue may actually do all of this.

@maartenbreddels
Member Author

https://www.npmjs.com/package/p-limit is a bit easier to understand than p-queue.

@maartenbreddels
Member Author

I thought about it a bit more. My argument was that my method would have lower latency. But with 2000 widgets at 100 per batch, you get 20 sequential roundtrips of extra latency, which could be about 2 seconds, on a total time of maybe 20 seconds.

So in terms of performance, we don't actually gain much.

@maartenbreddels
Member Author

Ok, performance aside, we may still hit the iopub rate limit without this p-limit-style throttling.
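
For the rate side specifically, the usual shape is a time-based throttle that spaces sends at least a fixed interval apart. A minimal sketch, assuming an `intervalMs` chosen to stay under the kernel's iopub rate limit (`rateThrottle` is a hypothetical name, not the PR's implementation):

```typescript
// Minimal sketch: serialize sends and space them at least `intervalMs`
// apart, so a burst of messages stays under the iopub rate limit.
// Illustrative only; not the PR's actual implementation.
function rateThrottle(intervalMs: number) {
  let chain: Promise<void> = Promise.resolve();
  let lastSend = 0;

  return function schedule<T>(task: () => Promise<T> | T): Promise<T> {
    const result = chain.then(async () => {
      // Delay until at least `intervalMs` has passed since the last send.
      const wait = lastSend + intervalMs - Date.now();
      if (wait > 0) await new Promise(r => setTimeout(r, wait));
      lastSend = Date.now();
      return task();
    });
    // Keep the chain alive even if a task rejects.
    chain = result.then(() => undefined, () => undefined);
    return result;
  };
}
```

Note this complements the concurrency limit: the limit caps how many requests are outstanding, while the interval caps how fast they are issued.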

@jasongrout
Member

What about this implementation of concurrent batching?

/**
 * A batched map implementation with concurrency
 *
 * Takes a list, a batch size, a max concurrency, and a mapping function
 */
export async function mapBatch(
  list: any[],
  batchSize: number,
  concurrency: number,
  fn: Function,
  thisArg: any
): Promise<any[]> {
  const results = new Array(list.length);

  const chunks: Array<() => Promise<void>> = [];

  // Process a single chunk and resolve to the next chunk if available
  async function processChunk(chunk: any[], start: number): Promise<void> {
    const chunkResult = await Promise.all(
      chunk.map((v, i) => fn.call(thisArg, v, start + i, list))
    );

    // Splice the chunk results into the results array. We use
    // chunkResult.length because the last chunk may not be a full batchSize.
    results.splice(start, chunkResult.length, ...chunkResult);

    // Start the next work item by returning its promise
    if (chunks.length > 0) {
      return chunks.shift()!();
    }
  }

  // Make closures for each batch of work
  for (let i = 0; i < list.length; i += batchSize) {
    chunks.push(() => processChunk(list.slice(i, i + batchSize), i));
  }

  // Start the concurrent chunks. Each chunk will automatically start
  // the next available chunk when it finishes.
  await Promise.all(chunks.splice(0, concurrency).map(f => f()));
  return results;
}
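
For illustration, here is how the function above could be driven. The snippet inlines `mapBatch` so it runs standalone; `fetchState` is a stand-in for the real kernel roundtrip, not Voila's API:

```typescript
// Standalone usage sketch of the mapBatch above.
async function mapBatch(
  list: any[],
  batchSize: number,
  concurrency: number,
  fn: Function,
  thisArg: any
): Promise<any[]> {
  const results = new Array(list.length);
  const chunks: Array<() => Promise<void>> = [];

  // Process a single chunk, then start the next pending chunk if any.
  async function processChunk(chunk: any[], start: number): Promise<void> {
    const chunkResult = await Promise.all(
      chunk.map((v, i) => fn.call(thisArg, v, start + i, list))
    );
    results.splice(start, chunkResult.length, ...chunkResult);
    if (chunks.length > 0) {
      return chunks.shift()!();
    }
  }

  for (let i = 0; i < list.length; i += batchSize) {
    chunks.push(() => processChunk(list.slice(i, i + batchSize), i));
  }

  await Promise.all(chunks.splice(0, concurrency).map(f => f()));
  return results;
}

// Pretend each model id needs a roundtrip to the kernel (placeholder).
async function fetchState(id: string): Promise<string> {
  await new Promise(r => setTimeout(r, 1));
  return `state-of-${id}`;
}

// 10 ids, batches of 3, at most 2 batches in flight at a time.
const ids = Array.from({ length: 10 }, (_, i) => `model-${i}`);
const statesPromise = mapBatch(ids, 3, 2, fetchState, null);
```

Results come back in input order regardless of which batch finishes first, because each chunk splices into its own slice of `results`.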

@jasongrout
Member

What about this implementation of concurrent batching?

See the cleaned-up implementation at jupyter-widgets/ipywidgets#2765

@jasongrout
Member

Actually, I don't think we need throttling. A concurrency limit of, say, 100 naturally introduces the appropriate throttling via the network roundtrip. Since all we care about is not hitting the zmq socket with more than about 100 simultaneous requests, the concurrency limit throttles us to roughly that.

@jasongrout
Member

Actually, I don't think we need throttling.

Ah, the throttling is for the separate iopub rate limiting. Yes, good point.

@SylvainCorlay
Member

Merging this!

@SylvainCorlay SylvainCorlay merged commit 5bc76b0 into voila-dashboards:master Feb 28, 2020
Successfully merging this pull request may close these issues.

Many widgets fail to render
3 participants