From 238466bcf08f9771cc4f89905b05292e0844f252 Mon Sep 17 00:00:00 2001
From: Matteo Collina
Date: Thu, 5 Jan 2017 11:28:46 +0100
Subject: [PATCH] doc: handle backpressure when write() returns false

The doc specified that the return value of writable.write() was
advisory only. However, ignoring that value might lead to memory leaks.
This PR specifies that behavior. Moreover, it adds an example of how to
listen for the 'drain' event correctly.

See: https://github.com/nodejs/node/commit/f347dad0b7b1787092cca88789b77eb3def2d319
PR-URL: https://github.com/nodejs/node/pull/10631
Reviewed-By: Colin Ihrig
Reviewed-By: Sam Roberts
Reviewed-By: Evan Lucas
Reviewed-By: James M Snell
Reviewed-By: Joyee Cheung
---
 doc/api/stream.md | 43 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/doc/api/stream.md b/doc/api/stream.md
index 98026f6a394eb2..6732435cb60e93 100644
--- a/doc/api/stream.md
+++ b/doc/api/stream.md
@@ -443,9 +443,46 @@ first argument. To reliably detect write errors, add a listener for the
 The return value is `true` if the internal buffer does not exceed
 `highWaterMark` configured when the stream was created after admitting `chunk`.
 If `false` is returned, further attempts to write data to the stream should
-stop until the [`'drain'`][] event is emitted. However, the `false` return
-value is only advisory and the writable stream will unconditionally accept and
-buffer `chunk` even if it has not not been allowed to drain.
+stop until the [`'drain'`][] event is emitted.
+
+While a stream is not draining, calls to `write()` will buffer `chunk`, and
+return `false`. Once all currently buffered chunks are drained (accepted for
+delivery by the operating system), the `'drain'` event will be emitted.
+It is recommended that once `write()` returns false, no more chunks be written
+until the `'drain'` event is emitted. While calling `write()` on a stream that
+is not draining is allowed, Node.js will buffer all written chunks until
+maximum memory usage occurs, at which point it will abort unconditionally.
+Even before it aborts, high memory usage will cause poor garbage collector
+performance and high RSS (which is not typically released back to the system,
+even after the memory is no longer required). Since TCP sockets may never
+drain if the remote peer does not read the data, writing to a socket that is
+not draining may lead to a remotely exploitable vulnerability.
+
+Writing data while the stream is not draining is particularly
+problematic for a [Transform][], because `Transform` streams are paused
+by default until they are piped or a `'data'` or `'readable'` event handler
+is added.
+
+If the data to be written can be generated or fetched on demand, it is
+recommended to encapsulate the logic into a [Readable][] and use
+[`stream.pipe()`][]. However, if calling `write()` is preferred, it is
+possible to respect backpressure and avoid memory issues using
+the [`'drain'`][] event:
+
+```js
+function write(data, cb) {
+  if (!stream.write(data)) {
+    stream.once('drain', cb);
+  } else {
+    process.nextTick(cb);
+  }
+}
+
+// Wait for cb to be called before doing any other write.
+write('hello', () => {
+  console.log('write completed, do more writes now');
+});
+```
 
 A Writable stream in object mode will always ignore the `encoding` argument.
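
As a companion to the recommendation above to encapsulate on-demand data in a
[Readable][] and use [`stream.pipe()`][], here is a minimal sketch. It is not
part of the patch: `fetchNextChunk` is a hypothetical data source, and
`process.stdout` stands in for any Writable destination. `pipe()` handles
backpressure itself, so no explicit `'drain'` handling is needed:

```js
'use strict';
const { Readable } = require('stream');

// Hypothetical on-demand data source: yields a fixed number of chunks,
// then signals end-of-stream by returning null.
let remaining = 1000;
function fetchNextChunk() {
  return remaining-- > 0 ? 'some data\n' : null;
}

// Wrapping the generation logic in a Readable lets pipe() manage
// backpressure: read() stops being called while the destination is not
// draining, so unread chunks do not pile up without bound.
const source = new Readable({
  read() {
    this.push(fetchNextChunk());
  }
});

// process.stdout stands in for any Writable (a socket, a file stream, ...).
source.pipe(process.stdout);
```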
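
If many chunks are written with the `write(data, cb)` helper from the example,
a small driver like the following sketch can chain the writes through the
callback, so writing pauses whenever the stream reports it is not draining.
This is hypothetical and assumes the `stream` and `write` names from the
example are in scope:

```js
// Hypothetical driver for the write(data, cb) helper shown above: writes
// `count` chunks one at a time, waiting for the callback (and therefore
// for 'drain' when needed) before writing the next chunk.
function writeMany(count) {
  if (count === 0) {
    stream.end();
    return;
  }
  write(`chunk ${count}\n`, () => writeMany(count - 1));
}

writeMany(1000);
```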