fs: improve error performance for fs.mkdtempSync #49750

Closed · wants to merge 2 commits
38 changes: 38 additions & 0 deletions benchmark/fs/bench-mkdtempSync.js
@@ -0,0 +1,38 @@
'use strict';

const common = require('../common');
const fs = require('fs');
const tmpdir = require('../../test/common/tmpdir');
tmpdir.refresh();

const nonexistentDir = tmpdir.resolve('non-existent', 'foo', 'bar');
const bench = common.createBenchmark(main, {
  type: ['valid', 'invalid'],
  n: [1e3],
});

function main({ n, type }) {
  let prefix;

  switch (type) {
    case 'valid':
      prefix = `${Date.now()}`;

Review comment (Member): This should probably be wrapped with tmpdir.resolve(..)

      break;
    case 'invalid':
      prefix = nonexistentDir;
      break;
    default:
      throw new Error('Invalid type');
  }

  bench.start();
  for (let i = 0; i < n; i++) {
    try {
      const folderPath = fs.mkdtempSync(prefix, { encoding: 'utf8' });
      fs.rmdirSync(folderPath);

Review comment (Member, marked as resolved by anonrig): Do we need to remove it here? I think as long as n does not exceed the number of combinations formed by the 6 random characters it should be fine to just leave them in the loop and let test common just delete the tmpdir on exit.

    } catch {
      // do nothing
    }
  }
  bench.end(n);
}
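
Assuming the usual conventions of Node's benchmark/ directory (the exact invocations are illustrative, not taken from this PR), the file can be run standalone, e.g. `node benchmark/fs/bench-mkdtempSync.js type=invalid n=1e3`, or through `benchmark/compare.js` against a baseline build to measure the error-path change.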
18 changes: 1 addition & 17 deletions lib/fs.js
@@ -2893,23 +2893,7 @@ function mkdtemp(prefix, options, callback) {
 * @returns {string}
 */
function mkdtempSync(prefix, options) {
  options = getOptions(options);

  prefix = getValidatedPath(prefix, 'prefix');
  warnOnNonPortableTemplate(prefix);

  let path;
  if (typeof prefix === 'string') {
    path = `${prefix}XXXXXX`;
  } else {
    path = Buffer.concat([prefix, Buffer.from('XXXXXX')]);
  }

  const ctx = { path };
  const result = binding.mkdtemp(path, options.encoding,
                                 undefined, ctx);
  handleErrorFromBinding(ctx);
  return result;
  return syncFs.mkdtemp(prefix, options);
}

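For context, the user-facing behaviour of fs.mkdtempSync is intended to stay the same after this delegation; what changes is how the error object is produced on failure. A minimal sketch of the error path this PR targets (the non-existent parent path below is an assumption for illustration):

'use strict';
const fs = require('fs');

// Failure path: the parent directory does not exist, so mkdtempSync throws
// a synchronous ENOENT error. With this change the error is thrown directly
// from the binding instead of being routed through a ctx object and
// handleErrorFromBinding().
try {
  fs.mkdtempSync('/non-existent/foo/bar-');
} catch (err) {
  console.log(err.code);    // 'ENOENT'
  console.log(err.syscall); // 'mkdtemp'
}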
23 changes: 23 additions & 0 deletions lib/internal/fs/sync.js
@@ -8,8 +8,11 @@ const {
  getStatsFromBinding,
  getStatFsFromBinding,
  getValidatedFd,
  getOptions,
  warnOnNonPortableTemplate,
} = require('internal/fs/utils');
const { parseFileMode, isInt32 } = require('internal/validators');
const { Buffer } = require('buffer');

const binding = internalBinding('fs');

@@ -53,6 +56,25 @@ function copyFile(src, dest, mode) {
  );
}

function mkdtemp(prefix, options) {
  options = getOptions(options);

Review comment (Member): hmm, not sure since we started to move the sync implementations to a different file, but I gotta say I am not a fan because 1) this incurs an unnecessary hit to startup (even with deserialization) and 2) moving code around makes git blame less useful, especially when it's useful to have the sync and the async version side by side for cross-reference.


  prefix = getValidatedPath(prefix, 'prefix');
  warnOnNonPortableTemplate(prefix);

  let path;
  if (typeof prefix === 'string') {
    path = `${prefix}XXXXXX`;
  } else {
    path = Buffer.concat([prefix, Buffer.from('XXXXXX')]);
  }

  return binding.mkdtempSync(
    path,
    options.encoding,
  );
}

function stat(path, options = { bigint: false, throwIfNoEntry: true }) {
  path = getValidatedPath(path);
  const stats = binding.statSync(
@@ -93,6 +115,7 @@ module.exports = {
  exists,
  access,
  copyFile,
  mkdtemp,
  stat,
  statfs,
  open,
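As a usage-level illustration of the prefix handling implemented in the helper above (the temp-dir paths are assumptions for the example, not taken from this PR): six random characters replace the 'XXXXXX' template appended to the prefix, and a Buffer prefix is accepted alongside strings.

'use strict';
const fs = require('fs');
const os = require('os');
const path = require('path');

// String prefix: six random characters are appended, e.g. /tmp/demo-Ab3dE9.
const strDir = fs.mkdtempSync(path.join(os.tmpdir(), 'demo-'));
fs.rmdirSync(strDir);

// A Buffer prefix is also accepted; with { encoding: 'buffer' } the returned
// path is a Buffer as well.
const bufDir = fs.mkdtempSync(Buffer.from(path.join(os.tmpdir(), 'buf-')),
                              { encoding: 'buffer' });
fs.rmdirSync(bufDir);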
34 changes: 34 additions & 0 deletions src/node_file.cc
@@ -2957,6 +2957,38 @@ static void Mkdtemp(const FunctionCallbackInfo<Value>& args) {
  }
}

static void MkdtempSync(const FunctionCallbackInfo<Value>& args) {
  Environment* env = Environment::GetCurrent(args);

Review comment (Member): hmm, actually, why do we need a different binding? We could just change the sync branch of the original implementation to directly throw instead of passing the error down to ctx. This would avoid the repetition of most of the code here. We can probably tell if it's a sync call or not by checking the length of the arguments now.

Review comment (Member Author): I personally find it a lot harder to optimize since the requirements for the promise, sync and callback implementations are closely connected.

Review comment (Member, @joyeecheung, Sep 27, 2023): Why would it be a lot harder to optimize? We just need to introduce another SyncCall helper that throws instead of setting the ctx and use that in the original implementation, and it's essentially the same as what's being done here? Then that helper can be used across the board for similar optimizations.

  Isolate* isolate = env->isolate();

  CHECK_GE(args.Length(), 2);

  BufferValue tmpl(isolate, args[0]);
  CHECK_NOT_NULL(*tmpl);
  THROW_IF_INSUFFICIENT_PERMISSIONS(
      env, permission::PermissionScope::kFileSystemWrite, tmpl.ToStringView());

  const enum encoding encoding = ParseEncoding(isolate, args[1], UTF8);

  uv_fs_t req;
  auto make = OnScopeLeave([&req]() { uv_fs_req_cleanup(&req); });
  FS_SYNC_TRACE_BEGIN(mkdtemp);
  int err = uv_fs_mkdtemp(nullptr, &req, *tmpl, nullptr);
  FS_SYNC_TRACE_END(mkdtemp);
  if (err < 0) {
    return env->ThrowUVException(err, "mkdtemp", nullptr, *tmpl);
  }

  Local<Value> error;
  MaybeLocal<Value> rc =
      StringBytes::Encode(isolate, req.path, encoding, &error);
  if (rc.IsEmpty()) {
    env->isolate()->ThrowException(error);
    return;
  }
  args.GetReturnValue().Set(rc.ToLocalChecked());
}

static bool FileURLToPath(
    Environment* env,
    const ada::url_aggregator& file_url,
@@ -3409,6 +3441,7 @@ static void CreatePerIsolateProperties(IsolateData* isolate_data,
SetMethod(isolate, target, "lutimes", LUTimes);

SetMethod(isolate, target, "mkdtemp", Mkdtemp);
SetMethod(isolate, target, "mkdtempSync", MkdtempSync);

StatWatcher::CreatePerIsolateProperties(isolate_data, target);
BindingData::CreatePerIsolateProperties(isolate_data, target);
@@ -3534,6 +3567,7 @@ void RegisterExternalReferences(ExternalReferenceRegistry* registry) {
  registry->Register(LUTimes);

  registry->Register(Mkdtemp);
  registry->Register(MkdtempSync);
  registry->Register(NewFSReqCallback);

  registry->Register(FileHandle::New);
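
Since the new binding is guarded by THROW_IF_INSUFFICIENT_PERMISSIONS with the kFileSystemWrite scope, the experimental permission model still applies on this path. A hedged sketch of what that looks like from JS (the flag names reflect the permission model around the time of this PR and are assumptions, not part of the diff):

'use strict';
// Run with: node --experimental-permission --allow-fs-read=* example.js
// Without --allow-fs-write, the write-scoped mkdtempSync call is rejected.
const fs = require('fs');
const os = require('os');
const path = require('path');

try {
  fs.mkdtempSync(path.join(os.tmpdir(), 'denied-'));
} catch (err) {
  console.log(err.code); // 'ERR_ACCESS_DENIED'
}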