Having to store maxByteLength is not very useful #18
[[ArrayBufferMaxByteLength]] is also used as a brand check for RABs and GSABs, in addition to storing the max byte length.
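For illustration, here is roughly how that brand check surfaces to script. This is only a sketch assuming the draft's separate ResizableArrayBuffer(byteLength, maxByteLength) constructor; the exact API names are not settled:

```js
// Sketch: resize() starts with a brand check on [[ArrayBufferMaxByteLength]],
// so calling it on an object without that internal slot throws a TypeError.
const rab = new ResizableArrayBuffer(8, 64); // has [[ArrayBufferMaxByteLength]]
rab.resize(16);                              // brand check passes

const ab = new ArrayBuffer(8);               // plain ArrayBuffer, no such slot
ResizableArrayBuffer.prototype.resize.call(ab, 16); // TypeError
```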
I'm not sure I understand this. Are you saying there are cases where you want a …? I guess you're asking that maxByteLength becomes a hint which an implementation is allowed to round up (but not down) in an implementation-defined way?
The implementation is also allowed to round down (indirectly), since resize() & grow() are always allowed to throw. Implementations would likely do something along the lines of the sketch below: allocate whatever their heuristic decides, remember how much was actually reserved, and have resize() throw when the request exceeds that reservation.

So implementations are already allowed to ignore maxByteLength (or feed it into some heuristic) when deciding which size to allocate. The only thing which forces them to store it is the requirement that resize() must throw if newByteLength > maxByteLength. Fulfilling that requirement forces the implementation to store an additional data member. Without that requirement, the implementation could just store the return value of ImplementationDefinedHeuristic(maxByteLength).

To be clear, I'm suggesting getting rid of the feature that resize & grow must throw if newByteLength > maxByteLength (where maxByteLength is what was passed to the ctor). The rationale is that having that feature requires the implementation to store one additional data member, which IMO is not worth it.

Spec-wise, this could be achieved e.g. by saying

Let O.[[ArrayBufferMaxByteLength]] be ImplementationDefinedHeuristic(maxByteLength).

and everything else would stay the same.
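A minimal sketch of that implementation strategy (hypothetical engine-internal code, not from the proposal; RABBackingStore and the page-rounding heuristic are made up for illustration):

```js
// Made-up example heuristic: reserve maxByteLength rounded up to a 4 KiB page.
function ImplementationDefinedHeuristic(maxByteLength) {
  const PAGE = 4096;
  return Math.ceil(maxByteLength / PAGE) * PAGE;
}

// Hypothetical internal backing store: only the reserved capacity is remembered,
// not the maxByteLength that the user originally passed in.
class RABBackingStore {
  constructor(byteLength, maxByteLength) {
    this.capacity = ImplementationDefinedHeuristic(maxByteLength);
    this.byteLength = byteLength;
  }
  resize(newByteLength) {
    // Check against what was actually reserved; no stored maxByteLength needed.
    if (newByteLength > this.capacity) {
      throw new RangeError('cannot resize beyond reserved capacity');
    }
    this.byteLength = newByteLength;
  }
}
```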
Thanks for the clarification.
This is better than what I initially understood your suggestion to be. It's saying that during construction time the implementation is allowed to do some rounding, but, importantly, the rounded number becomes observable via byteLength.

On the flip side, this would expose more engine internals, which wouldn't be desirable. For example, suppose an implementation decides to round up to page-size multiples for large buffers but not for small ones. That allocation strategy becomes immediately observable, and might make it harder to change allocation strategies down the road if there's now code out there that depends on no rounding happening for small buffers. (Maybe also implications for fingerprinting and security, but I don't know much about that space.)
Hmm, why would the byteLength change? That was not part of my suggestion originally. Did you mean maxByteLength there? There are several options for what we could do:
Err, sorry, yes I meant maxByteLength.
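Concretely, the worry (with made-up numbers; the proposal mandates no particular rounding) is that a strategy like "round large requests up to a page multiple, leave small ones alone" would leak through the accessor:

```js
// Hypothetical engine behaviour if the rounded value were reported back:
const small = new ResizableArrayBuffer(8, 100);
small.maxByteLength;  // 100    -> no rounding observed for small buffers

const large = new ResizableArrayBuffer(8, 100000);
large.maxByteLength;  // 102400 (e.g. rounded up to a 4 KiB multiple) -> strategy leaks
```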
I favor option 1. Option 3 is harder because it precludes certain legitimate implementation strategies, like moving the backing buffer on every resize, even shrinks. Embedded devices with very limited memory, for example, may opt for that strategy so that every resize results in a compaction.
What's wrong with option 2? Making it unobservable seems like it preserves the most freedom while minimizing the chance users will intuit the wrong thing.
From my perspective the main undesirability of option 2 is that maxByteLength seems genuinely useful to expose. In particular @ljharb, this reasoning seems similar to me to why we want …
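As a purely hypothetical example of that usefulness (ensureCapacity is made up here, and the constructor/accessor names follow the draft):

```js
// Grow a resizable buffer in place when the declared maximum allows it,
// otherwise fall back to allocating a bigger buffer and copying.
function ensureCapacity(rab, neededBytes) {
  if (neededBytes <= rab.byteLength) return rab;
  if (neededBytes <= rab.maxByteLength) {
    rab.resize(neededBytes); // within the declared max (may still fail on OOM)
    return rab;
  }
  const bigger = new ResizableArrayBuffer(neededBytes, neededBytes * 2);
  new Uint8Array(bigger).set(new Uint8Array(rab, 0, rab.byteLength));
  return bigger;
}
```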
Makes sense, if it's useful then option 2 wouldn't be a good pick.
Afaics the maxByteLength passed as a parameter to the ctor is only needed for implementing the following behavior:
(Resizable|GrowableShared)ArrayBuffer.prototype.resize:
If newByteLength > O.[[ArrayBufferMaxByteLength]], throw a RangeError exception.
It's not needed for anything else. Typically an implementation would allocate a memory area of some length (maybe maxByteLength rounded up to a page-size multiple, maybe something else; it's allowed to ignore maxByteLength altogether per spec!) and then store the amount of memory actually allocated. In resize, we could check against what we have allocated, not against maxByteLength. (And for completeness, note that resize is allowed to throw even if newByteLength <= maxByteLength.)
The current spec leads to resize throwing even in cases where we could resize in place (because we originally allocated more than maxByteLength). That feels unnecessary too.
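To illustrate (made-up numbers, assuming the engine happened to reserve a full 4 KiB page):

```js
const rab = new ResizableArrayBuffer(8, 100); // engine might reserve 4096 bytes

rab.resize(200);
// Must throw a RangeError per the current spec text (200 > the stored max of 100),
// even though the reservation is large enough to resize in place.
```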
I think the right way to "fix" this (if deemed undesirable) would be simply to remove the above-mentioned "If" step and not store maxByteLength. That would push all throwing behavior, and the heuristic for which size to actually allocate, into CreateByteDataBlock.