using introduces unnecessary complexity #244

Open
JadenSimon opened this issue Oct 4, 2024 · 11 comments


@JadenSimon

JadenSimon commented Oct 4, 2024

I've come to the conclusion that using, as defined in this proposal, does not belong in the language. using adds complexity to the language with no clear benefit over alternatives like Go's and Zig's defer.

Capturing

This is the biggest flaw in the current proposal. The syntax is tied to symbols, yet the behavior is tied to scopes. So I can write using r = new Resource(), and r may then be referenced by something after the scope ends and r has been disposed.

As far as I can tell, this has been acknowledged but not resolved. Why is the behavior of captured symbols not mentioned anywhere in the proposal? While it might be obvious to those who know the technical details of using, I believe this introduces a major "footgun" for the average developer.

Here's a simple example to demonstrate this:

const x = { 
    disposed: false,
    [Symbol.dispose]() { 
        this.disposed = true
    },
    doSomething() {
        if (this.disposed) {
            throw new Error(`I'm disposed!`)
        }
    }
}

function f() {
    using y = x
    y.doSomething()
    console.log('Returning...')

    return () => y.doSomething()
}

f()() // Error: I'm disposed!

Attempting to use y in the closure results in an error!

The behavior is the same with esbuild:

esbuild example.ts --target=node22 | node

Returning...
Error: I'm disposed!

I do not believe this aligns with the spirit of JavaScript. Normal variables captured by closures don't just suddenly break. Am I not using y within the closure? While I still have access to y, the semantics of using mean that it's effectively inoperable. The feature that aims to mitigate common footguns introduces a new one for closures, one of JavaScript's defining features.

defer with extra steps

If using doesn't work with closures (as it clearly doesn't), you can describe the same behavior using an unconditional defer statement that executes on scope exit:

const y = x
defer y[Symbol.dispose]()

Now this is interesting, because the problem with my code is now much more obvious. It's because I'm disposing of the resource after returning the closure! The very name using promises a lot but doesn't deliver anything more than defer. So why have using to begin with? We're adding complexity and hiding behavior from developers for, as far as I can tell, no benefit beyond standardization of which method to call for disposal and enabling instantiation of multiple disposable resources in a single statement.

Rigidity and friction

using imposes requirements that impede usage:

  • The result of the initializer must have an implementation of Symbol.dispose (or Symbol.asyncDispose)
    • This is problematic because existing code must be updated, otherwise using becomes even more verbose than just having defer from the start.
  • Requires declaring a binding, even if it's unused
    • Don't be surprised to see lots of using _ = x // dispose of x in future codebases
  • Requires a different modifier for the variable declaration (using instead of var/let/const)

defer has none of these requirements with the same functionality, while also offering far more immediate value to users because it "just works" with existing code. The current proposal adds unnecessary friction for the functionality of "execute this when the scope exits". Even if developers have a use-case where using makes sense, they're less likely to "do the right thing" because of the added friction. Why are we getting in the way of developers?
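
To make the first point concrete, here's a rough sketch (openFile, close, and path are stand-ins for any existing API that predates Symbol.dispose, and defer is the hypothetical statement described above):

// With using, the existing object must first be adapted to expose Symbol.dispose:
const file = openFile(path)
using f = { file, [Symbol.dispose]: () => file.close() }

// With defer, the existing API works as-is:
const file2 = openFile(path)
defer file2.close()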

Suggestions

using only makes sense if the disposal behavior is tied to the lifetime of the binding, not the scope in which the binding was declared. Currently, using is just a more confusing defer.

My current suggestions:

  1. Rework using - The current proposal should be reworked so that the disposal behavior is tied to the lifetime of the resource binding itself, meaning the resource would only be disposed of when it’s no longer referenced, even across closures. This effectively becomes an ergonomic way to add finalizers to JS objects.
  2. Introduce defer - An unconditional defer statement that executes on scope exit has the same functionality as the current proposal without the additional complexity. You can even address async use-cases by treating the await in defer await as a modifier to defer rather than an AwaitExpression. Build tools should be able to reuse existing using logic for defer because they're essentially the same thing. I've tested this myself.

These two features would no longer overlap, as each provides its own set of benefits and use-cases: defer for scope lifetimes, using for binding lifetimes. This will keep the language relatively simple and approachable for developers while still providing power and flexibility.

@bakkot

bakkot commented Oct 5, 2024

Learning the rule "defer runs cleanup on scope exit" is not any different from learning the rule "using runs cleanup on scope exit", so I don't buy that the problem of using bindings after scope exit is any more obvious with defer than with using. I agree this is a problem but I don't think defer would make it any better.

Beyond standardization of the name of the disposal method, using ties disposal to the acquisition of the resource. That's important: without that property, you can have code like

let x = getFoo(), y = getBar();
defer x.cleanup();
defer y.cleanup();

and not notice that if getBar throws then x will never get cleaned up. Simply making this harder is, IMO, sufficient reason to go with the using style over the defer style.
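
For contrast, the using version of the same code would look roughly like this (assuming getFoo and getBar return objects that implement Symbol.dispose):

using x = getFoo(); // registered for disposal at the moment it's acquired
using y = getBar(); // if getBar throws, x is still disposed as the scope unwinds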

@JadenSimon
Author

JadenSimon commented Oct 5, 2024

Learning the rule "defer runs cleanup on scope exit" is not any different from learning the rule "using runs cleanup on scope exit",

Agreed. But learning about the behavior of syntax is a problem regardless of the language. My argument is that using introduces more complexity than defer, not that defer doesn't have any complexity.

Beyond standardization of the name of the disposal method, using ties disposal to the acquisition of the resource.

Tying disposal to acquisition is a strong argument for using, and it's partially why I don't discount using entirely (edited to include that functionality). Still, I don't think needing to instantiate multiple resources that require disposal in a single statement is worth the complexity and ceremony that using currently requires. defer has problems but it's also incredibly simple comparatively. And simplicity, for practically the same functionality, sounds like the right approach to me.

@bakkot

bakkot commented Oct 5, 2024

Still, I don't think needing to instantiate multiple resources that require disposal in a single statement is worth the complexity and ceremony that using currently requires.

using generally requires less ceremony than defer: (slightly) more on the part of the library defining the resource, but less on the part of the consumer, and there are more people who use libraries than write them.

And since the whole point of the feature is to make it easier to clean up resources correctly, I think it would be worth a little ceremony in order to avoid the possibility of exceptions occurring between acquiring a resource and registering it for disposal.

@JadenSimon
Author

using generally requires less ceremony than defer: (slightly) more on the part of the library defining the resource, but less on the part of the consumer, and there are more people who use libraries than write them.

I've really only experienced using within my own work, so as both a library author and a consumer. And it's just clunky compared to defer. If you look at it only from the perspective of the consumer, yeah, it's not too bad. But should we really view this feature only through a library/user dynamic, as opposed to as a general language feature? I really did try to like and support this feature, but it just does not sit right with me.

And since the whole point of the feature is to make it easier to clean up resources correctly, I think it would be worth a little ceremony in order to avoid the possibility of exceptions occurring between acquiring a resource and registering it for disposal.

Maybe this is a bad take, but to me, a lot of the motivation behind using seems to be to "protect" developers from themselves rather than empower them with tools to make their lives easier. What exactly is the "correct" way to clean up a resource? Is it always disposing of them on scope exit? That doesn't seem to work all that well for my closures.

@bakkot

bakkot commented Oct 5, 2024

And it's just clunky compared to defer.

This really hasn't been my experience. Even with resources I'm authoring, it's literally a single extra line - resource.prototype[Symbol.dispose] = resource.prototype.cleanup.

The only clunky cases I've run into are when using something which does not already have a Symbol.dispose, which will be improved over time since this feature is brand new (and for now I just write a wrapper), and when I don't actually need a binding, for which using _ = getLock() or whatever is fine.
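
A minimal sketch of the kind of wrapper I mean (asDisposable, legacyThing, close, and doWork are illustrative names, not part of the proposal):

const asDisposable = (resource, cleanup) => ({
  resource,
  [Symbol.dispose]() { cleanup(resource); }
});

using wrapped = asDisposable(legacyThing, t => t.close());
wrapped.resource.doWork(); // use the underlying resource as before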

I accept that you find it more clunky than defer, but I find defer more clunky than using; this sort of difference happens pretty often.

What exactly is the "correct" way to clean up a resource?

In this context, it means ensuring that the cleanup logic runs in all cases, and at a predictable time. The way we'd normally do that is with try/finally, but that's very verbose and the verbosity makes it prone to errors - it's always tempting to use a single try/finally for more than one resource, just to avoid the extra level of nesting, but that has the same problem as my snippet above.
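
Roughly, the try/finally shape I mean (getA and getB are placeholders for any two disposable resources):

const a = getA();
try {
  const b = getB();
  try {
    // use a and b
  } finally {
    b[Symbol.dispose]();
  }
} finally {
  a[Symbol.dispose]();
}
// Collapsing both resources into a single try/finally is tempting, but then a is never
// cleaned up if getB() throws - the same problem as my snippet above.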

It's true that sometimes you want something other than scope exit, and that this will not work for you in that case. Neither would defer. But defer has the additional downside that it's easy to accidentally fail to register something for disposal, because acquisition isn't tied to registering for disposal.

@JadenSimon
Author

I accept that you find it more clunky than defer, but I find defer more clunky than using; this sort of difference happens pretty often.

I totally agree that, in isolation, using r = new Resource() is cleaner. defer does start to feel repetitive and more error-prone. But that verbosity is not pointless; it leads to a huge amount of flexibility, solving problems that using completely fails at. In its current state, using feels more like a feature for a framework than a core language feature. It does a good job for this one particular thing, but fails everywhere else. This, combined with the added ceremony, is why I think it's so clunky as a language feature.

Here's a good example that I've run into more than a few times: needing to asynchronously await things at the end of a block after iteration.

defer makes this simple and intuitive:

const promises = []
defer await Promise.all(promises)

for (const x of arr) {
    defer promises.push(x.dispose())
    // use x
}

using needs to introduce several new ideas to solve the same problem:

await using stack = new AsyncDisposableStack()

for (const x of arr) {
    // I need to make sure `Symbol.asyncDispose` is implemented for `x` to use this :(
    stack.use(x)
    // use x
}

Notice that I can't use await using in the for loop because then disposal happens serially rather than in parallel. I'm forced to use a mix of different tools to solve this problem, whereas defer alone would have sufficed from the start.
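
For reference, the serial version I'm ruling out would look something like this (assuming each x implements Symbol.asyncDispose):

for (const x of arr) {
    await using r = x
    // use x
} // each iteration awaits r's disposal before the next one begins, so disposal is serial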

using makes everything about "resources". defer says nothing about resources, instead offering a simple and versatile tool for developers to use as they see fit.

Also, my defer example isn't theoretical. I just implemented the functionality needed in my parser/transpiler and it works as expected.

And look, I can see the value in other parts of the proposal (DisposableStack/AsyncDisposableStack), but we don't need using to use them. Why can't we just have this?

const stack = new AsyncDisposableStack()
defer await stack.dispose()

Is that really so bad?

just to avoid the extra level of nesting, but that has the same problem as my snippet above.

Much of the disagreement here comes down to both the cost and value of using. I think the cost of using is very high with little value compared to defer. To me, the need to instantiate resources and register them for disposal within a single statement is not worth the introduction of new syntax. defer addresses practically 99% of using's use-cases; the remaining 1% is not worth it.

It's true that sometimes you want something other than scope exit, and that this will not work for you in that case. Neither would defer.

Yup. defer doesn't work here. That's not the point. using adds more confusion and friction to the language compared to defer, yet defer does practically the same thing while being significantly more versatile. Again, this is a matter of value. using is not worth it.

But defer has the additional downside that it's easy to accidentally fail to register something for disposal, because acquisition isn't tied to registering for disposal.

The value of tying resource acquisition with disposal registration is overstated. using is syntactic sugar for this:

const r = new Resource()
defer r[Symbol.dispose]()

Sure, it's a little bit easier to remember to write using r = new Resource() over adding a defer statement. But in any case, both require some amount of developer initiative. In theory, using has lower friction for developers. But only if:

  • They need disposal to happen at scope exit
  • They're working with a "resource"
  • Whatever is being disposed of has Symbol.dispose or Symbol.asyncDispose

This feels incredibly idealistic. We should seriously reconsider adding features to a mature language based on hopes and dreams. defer adds value now without any of the baggage of using. It sticks to the core need: an ergonomic way to add unconditional code execution at scope exit.

I honestly hope I'm wrong and that using turns out to be massively successful. But I currently see using becoming a niche feature that's dragged along over the years, adding confusion and complexity wherever it goes. And the worst part? We'll never get a proper defer because using combined with DisposableStack will be seen as "good enough".

@JadenSimon
Author

JadenSimon commented Oct 5, 2024

Oh wait, this example is actually wrong:

await using stack = new AsyncDisposableStack()

for (const x of arr) {
    // I need to make sure `Symbol.asyncDispose` is implemented for `x` to use this :(
    stack.use(x)
    // use x
}

Because it doesn't dispose each element until the very end. I honestly don't even know how to implement my desired behavior without resorting to .defer:

await using stack = new AsyncDisposableStack()
const promises = []
stack.defer(() => Promise.all(promises))

for (const x of arr) {
    using stack2 = new DisposableStack()
    stack2.defer(() => promises.push(x.dispose()))
    // use x
}

This seems extremely convoluted. It took significantly longer to create this solution vs. defer. And this doesn't seem like an exceptionally rare pattern either.

@rbuckton
Collaborator

rbuckton commented Oct 5, 2024

I totally agree that, in isolation, using r = new Resource() is cleaner. defer does start to feel repetitive and more error-prone. But that verbosity is not pointless; it leads to a huge amount of flexibility, solving problems that using completely fails at. In its current state, using feels more like a feature for a framework than a core language feature. It does a good job for this one particular thing, but fails everywhere else. This, combined with the added ceremony, is why I think it's so clunky as a language feature.

defer is simple for one-and-done scenarios, but it is not compositional and it doesn't lend itself towards a consistent API. However, Symbol.dispose and DisposableStack are compositional and direct users towards API consistency (e.g., Symbol.dispose).

Here's a good example that I've ran into more than a few times: needing to asynchronously await things at the end of a block after iteration.

defer makes this simple and intuitive:

const promises = []
defer await Promise.all(promises)

for (const x of arr) {
    defer promises.push(x.dispose())
    // use x
}

This would not work the way you expect in a language like Go, where defer operations only run when the function exits. If there is any code between the closing } of the for loop above and the closing } of your function, then none of that cleanup has actually happened yet. If you are writing multiple sequential operations that involve acquiring and releasing the same resource, such as a lock on a mutex, you are forced to write each operation as a separate function to guarantee cleanup occurs in the right order. This makes it hard to use defer for anything other than the most trivial operations.
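
A sketch of that pattern, assuming a hypothetical mutex API and a function-scoped defer:

function step1(mutex) {
  const lock = mutex.lock();
  defer lock.unlock(); // runs only when step1 returns
  // ...first operation under the lock...
}

function step2(mutex) {
  const lock = mutex.lock();
  defer lock.unlock();
  // ...second operation under the lock...
}

// Inlining both operations into one function would hold the first lock until that
// entire function exits, so each operation has to live in its own function.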

using needs to introduce several new ideas to solve the same problem:

await using stack = new AsyncDisposableStack()

for (const x of arr) {
    // I need to make sure `Symbol.asyncDispose` is implemented for `x` to use this :(
    stack.use(x)
    // use x
}

First, you do not need to ensure Symbol.asyncDispose is implemented. Both DisposableStack and AsyncDisposableStack contain convenience APIs to adapt non-disposable resources. You could instead have written the code like this:

await using promises = new AsyncDisposableStack();
for (const x of arr) {
  promises.defer(() => x.dispose());
  // use x
}

which would collect each promise to dispose when promises is disposed.

Second, while it introduces some new concepts, the code is structurally the same and involves fewer steps. Rather than maintaining a promises array and deferring a call to Promise.all, you are able to allocate an AsyncDisposableStack and bind it to the scope in a single statement.

Third, Go's defer doesn't understand async functions. A JS defer would either need to always await in an async function, or you'd still need something like an await defer .... Always awaiting would be slow for synchronous disposal in an async function.

Notice that I can't use await using in the for loop because then disposal happens serially rather than in parallel. I'm forced to use a mix of different tools to solve this problem, whereas defer alone would have sufficed from the start.

With the defer example and both of our versions of the using example, you're still not parallelizing efficiently. In each loop iteration you end up deferring the disposal of that loop's dispose() until the end of the outer block scope (e.g., the function body). If you want efficient parallelization and proper scoped cleanup, then you need to serially invoke dispose but asynchronously await the result. This is actually more complex with Go's defer since it is function-scoped, not block-scoped:

const promises = [];
defer await Promise.all(promises);

for (const x of arr) {
  await work(promises, x);
}

function work(promises, x) {
  defer promises.push(x.dispose());
  // use x
}

With using you can instead perform these actions in the same function:

const promises = [];
await using stack = new AsyncDisposableStack();
stack.defer(() => Promise.all(promises));

for (const x of arr) {
  using cleanup = new DisposableStack();
  cleanup.defer(() => promises.push(x.dispose()));

  // use x
} // x.dispose() is invoked serially but its result is not observed until `stack` is disposed.

Yes, this is more complex, but the operation you are performing is also complex, and easy to get wrong.

If all you need is defer-like semantics but with block-scoped cleanup, that's fairly easy to achieve with a simple wrapper:

const defer = op => ({ [Symbol.dispose]() { op(); } });
const asyncDefer = op => ({ async [Symbol.asyncDispose]() { await op(); } });

const promises = [];
await using _ = asyncDefer(() => Promise.all(promises));

for (const x of arr) {
  using _ = defer(() => promises.push(x.dispose()));

  // use x
}

And unlike Go's defer, these operations are block scoped.

using makes everything about "resources". defer says nothing about resources, instead offering a simple and versatile tool for developers to use as they see fit.

While I agree that it is simple, I disagree about its versatility. Anything you can do with defer can ultimately be done with using, but using is far more capable at addressing more advanced scenarios.

Also, my defer example isn't theoretical. I just implemented the functionality needed in my parser/transpiler and it works as expected.

And look, I can see the value in other parts of the proposal (DisposableStack/AsyncDisposableStack), but we don't need using to use them. Why can't we just have this?

const stack = new AsyncDisposableStack()
defer await stack.dispose()

Is that really so bad?

It requires far more repetition when composing the API as you must regularly repeat calls to .dispose(), and it doesn't enforce any consistency in API design so you regularly have to check documentation for .unlock(), .close(), etc.
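
As a rough illustration (mutex, connect, host, and mkdtemp are placeholder resources, and defer is the hypothetical statement):

// defer style: every call site must repeat the right cleanup method for each resource
const lock = mutex.lock();
defer lock.unlock();
const socket = connect(host);
defer socket.close();

// using style: the consumer writes the same thing regardless of the resource
using lock = mutex.lock();
using socket = connect(host);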

Syntactically, defer isn't viable due to parenthesized expressions. The following is already legal JS code:

defer (await foo.bar()).baz();

Which means you'd have to disallow ( and force everyone to write defer void ... but without a clear syntax error to indicate they actually did something incorrect.
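
To spell out the ambiguity, defer is just an identifier today, so the line above is already a call expression. A runnable sketch (getResource and cleanup are illustrative):

const defer = x => x;
const getResource = () => ({ cleanup() { console.log('cleaned up'); } });
defer (getResource()).cleanup(); // parses as (defer(getResource())).cleanup()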

just to avoid the extra level of nesting, but that has the same problem as my snippet above.

Much of the disagreement here comes down to both the cost and value of using. I think the cost of using is very high with little value compared to defer. To me, the need to instantiate resources and register them for disposal within a single statement is not worth the introduction of new syntax. defer addresses practically 99% of using's use-cases; the remaining 1% is not worth it.

IMO, defer is only valuable over using in the short-term until the ecosystem catches up with Symbol.dispose/Symbol.asyncDispose, and that's already happening. NodeJS is already shipping Symbol.asyncDispose on a number of API objects.

using will be far more convenient in the long run as adoption grows and the need to adapt pre-disposable code lessens. For example, RAII-style using declarations combined with mutexes make for a very convenient way to synchronize multi-threaded JS in the Shared Structs proposal:

{
  using lck = mutex.lock();
  ...
}

The value of tying resource acquisition with disposal registration is overstated.

I strongly disagree with this statement. This was one of the main motivations for this proposal from the start. I've seen cleanup registration issues in numerous codebases, and far too much inconsistency in cleanup APIs in both user code and in host APIs like the DOM. using not only addresses "do this at scope exit", as defer does, but also addresses these other two concerns in a way defer fails to do.

using is syntactic sugar for this:

const r = new Resource()
defer r[Symbol.dispose]()

Sure, it's a little bit easier to remember to write using r = new Resource() over adding a defer statement. But in any case, both require some amount of developer initiative. In theory, using has lower friction for developers. But only if:

  • They need disposal to happen at scope exit
  • They're working with a "resource"
  • Whatever is being disposed of has Symbol.dispose or Symbol.asyncDispose

This is actually fairly common practice in languages like C# and Python, which are both prior art for this proposal. In addition, the defer and adopt methods on DisposableStack and AsyncDisposableStack are primarily intended to address your third bullet point.
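
For example, adopt attaches a cleanup callback to a value that has no Symbol.dispose of its own (openLegacyThing and close are placeholders):

using stack = new DisposableStack();
const thing = stack.adopt(openLegacyThing(), t => t.close()); // adopt returns the value
// use thing; the callback runs when the enclosing block exits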

This feels incredibly idealistic. We should seriously reconsider adding features to a mature language based on hopes and dreams. defer adds value now without any of the baggage of using. It sticks to the core need: an ergonomic way to add unconditional code execution at scope exit.

defer is a very short-sighted feature that does not scale as the language evolves. Arguing for a feature we can use now does not mesh with a concern about changing a mature language. I'd much rather have a feature that is intended to mature with the language.

I honestly hope I'm wrong and that using turns out to be massively successful. But I currently see using becoming a niche feature that's dragged along over the years, adding confusion and complexity wherever it goes. And the worst part? We'll never get a proper defer because using combined with DisposableStack will be seen as "good enough".

There are quite a few new features that are on the way that will make use of using, including the Shared Structs Proposal. I also intend to investigate adding Symbol.dispose to ArrayBuffer in the future as a way to immediately detach and free memory, rather than wait for GC. NodeJS has already adopted Symbol.asyncDispose and has been shipping APIs that use the symbol for use with transpilers for some time now. I have no doubt that using will reach critical mass as time goes on.

@JadenSimon
Author

JadenSimon commented Oct 5, 2024

defer is simple for one-and-done scenarios, but it is not compositional and it doesn't lend itself towards a consistent API. However, Symbol.dispose and DisposableStack are compositional and direct users towards API consistency (e.g., Symbol.dispose).

Simplicity naturally leads to composition. Yeah sure, it doesn't have a consistent API because there's literally nothing that needs consistency for it to work. Both DisposableStack and Symbol.dispose can be used without using. They are not mutually exclusive with defer.

This would not work the way you expect in a language like Go, where defer operations only run when the function exits.

My defer statement unconditionally evaluates an expression on scope exit. It's literally implemented using the same logic as using in my transpiler. Just wrap the expression with a closure and push it onto the disposable stack.
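
Roughly, the lowering looks like this (a simplified sketch; __stack is an illustrative name):

// Source:
//   defer cleanup(x)
//   rest()
//
// Output:
{
    const __stack = new DisposableStack()
    try {
        __stack.defer(() => cleanup(x))
        rest()
    } finally {
        __stack.dispose()
    }
}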

First, you do not need to ensure Symbol.asyncDispose is implemented. Both DisposableStack and AsyncDisposableStack contain convenience APIs to adapt non-disposable resources. You could instead have written the code like this:

What is the point of using then? Why can't we cut out the nonsense and use my suggested defer statement from the start?

In each loop iteration you end up deferring the disposal of that loop's dispose() until the end of the outer block scope (e.g., the function body).

My defer executes at the end of the current scope. Just like using, but without binding anything. So it works as expected here.

While I agree that it is simple, I disagree about its versatility. Anything you can do with defer can ultimately be done with using,

That's because using is just a wrapper around my defer.

but using is far more capable at addressing more advanced scenarios.

So far, using seems to perform worse in the examples I've given. It would be helpful to see some of these advanced scenarios where my defer fails but using succeeds. Because I'm struggling to find them.

Yes, this is more complex, but the operation you are performing is also complex, and easy to get wrong.
If all you need is defer-like semantics but with block-scoped cleanup, that's fairly easy to achieve with a simple wrapper:

I would hope that the tools I'm using don't fall apart as the complexity of the problem grows.

It requires far more repetition when composing the API as you must regularly repeat calls to .dispose(), and it doesn't enforce any consistency in API design so you regularly have to check documentation for .unlock(), .close(), etc.

Sure, it increases repetition in some cases compared to using in the ideal case. But defer offers developers a versatile tool that doesn't lock them into any one pattern. Inconsistency is problematic, but is this a problem worth solving with new syntax? From my own experience, it doesn't solve a significant enough problem to justify its complexity. Whereas with defer, I find plenty of situations where it makes sense. Look at any Zig/Go codebase and you'll quickly run into defer. It's a useful construct in its own right. My point is: JavaScript doesn't need this particular form of consistency; it thrives on flexibility.

Syntactically, defer isn't viable due to parenthesized expressions. The following is already legal JS code:

I'm aware of this problem, but really, I think it's a fine compromise to require defer void for parenthesized expressions. That's not the common case.

Which means you'd have to disallow ( and force everyone to write defer void ... but without a clear syntax error to indicate they actually did something incorrect.

A similar problem already exists for parenthesized expression statements, requiring a semicolon before the statement in some cases. While this isn't good, I've never found it to be a massive problem. defer wouldn't be perfect, but it's better than using.

input.ts(6, 9): Are you missing a semicolon?

Seems fine to me: "Are you missing void after defer?"

until the ecosystem catches up with Symbol.dispose/Symbol.asyncDispose
using will be far more convenient in the long run as adoption grows
NodeJS is already shipping Symbol.asyncDispose on a number of API objects.

This is idealistic. What about Bun? Deno? The huge number of npm packages? That's a lot to cover.

History doesn't paint such a nice picture. ESM is still a struggle, and that has way more incentives for adoption vs. using. Yeah, it's not really the same thing, but the point is that the JS community is massive.

For example, RAII-style using declarations combined with mutexes make for a very convenient way to synchronize multi-threaded JS in the Shared Structs proposal:

Yes, it is convenient for that particular use-case. My argument isn't that using isn't useful, it's just that using is not as useful as defer in general. Using defer in this scenario is slightly less convenient, but it more than makes up for that by being general.

I strongly disagree with this statement. This was one of the main motivations for this proposal from the start. I've seen cleanup registration issues in numerous codebases, and far too much inconsistency in cleanup APIs in both user code and in host APIs like the DOM. using not only addresses "do this at scope exit", as defer does, but also addresses these other two concerns in a way defer fails to do.

Why do we have to solve all 3 problems with one solution? defer doesn't try to, and that's not a failure. If you care about standardizing methods for disposal, add a modifier to defer. defer would still be super versatile while offering a standard mechanism for calling Symbol.dispose. Something like defer dispose <Expression> and defer await dispose <Expression>. defer now offers progressive functionality whereas using is a one-trick pony.

This is actually fairly common practice in languages like C# and Python, which are both prior art for this proposal.

Consistency with other languages is good, but why weren't other languages considered? It feels like defer in Zig/Go was never given a serious chance. The proposal hardly mentions alternative solutions from other languages.

In addition, the defer and adopt methods on DisposableStack and AsyncDisposableStack are primarily intended to address your third bullet point.

And it's clunky because I now have to create a stack within the current scope, when I just wanted defer this whole time.

defer is a very short-sighted feature that does not scale as the language evolves. Arguing for a feature we can use now does not mesh with a concern about changing a mature language. I'd much rather have a feature that is intended to mature with the language.

The fact that defer works now doesn't make it a short-sighted feature at all! In fact, the simplicity of it makes it much more scalable than using will ever be. Go and Zig developers don't seem to have any problems with defer in the language.

Let me clarify the purpose of defer: it adds a new mechanism to manipulate control flow, much like if, try, while, etc.

That has staying power. I just don't see the same for using.

There are quite a few new features that are on the way that will make use of using, including the Shared Structs Proposal. I also intend to investigate adding Symbol.dispose to ArrayBuffer in the future as a way to immediately detach and free memory, rather than wait for GC. NodeJS has already adopted Symbol.asyncDispose and has been shipping APIs that use the symbol for use with transpilers for some time now. I have no doubt that using will reach critical mass as time goes on.

I don't doubt that using will do a decent job at fulfilling its particular role of resource management, but I think broader adoption and impact will depend heavily on usability and versatility. In my mind, using misses on both points compared to defer.

@JadenSimon
Author

@bakkot
@rbuckton

I've started a proposal for defer (sync and async) and have fleshed out the design: https://github.com/JadenSimon/proposal-defer

I have this implemented in my transpiler as well. I would not go out of my way to do all of this unless I thought using was detrimental to the language.

@martinheidegger

using x = resourceA()

to be equal to

const x = resourceA()
defer x[Symbol.dispose]();

seems like it would be neat syntactic sugar, offering more flexibility if need be, similar to how an explicit Promise is sometimes useful when dealing with async constructs.

await using x = resourceB()

to be equal to

const x = resourceB()
defer await x[Symbol.asyncDispose]()

seems like it would entirely make sense from an educational POV to have both.
