Atomic move operation for element reparenting & reordering #1255
First of all, thank you! I've been vocal about this issue practically forever and was part of one of the biggest discussions you've linked. As the author of various "reactive" libraries and something of a veteran of the "DOM diffing field", I'd like to add an idea:
I understand a node can be moved from
On top of this I hope whatever solution comes to mind works well with DOM diffing, so that new nodes can even pass through the usual DOM dance when the parent is changed or they become live; removed nodes that won't land anywhere else would eventually invoke

As a quick idea to signal that a node is going to be moved in an atomic way, and assuming it's also targeting a live parent, I think something like

I hope this answer of mine makes sense and maybe triggers some even better idea / API.

edit after some thought: another companion of the API should be reflected in MutationObserver, or better, MutationRecord ... so far we have
This would be a fantastic addition of functionality for web development in general and for web libraries in particular. Currently, if developers want to preserve the state of a node when updating the DOM, they need to be extremely careful not to remove that node from the DOM. Morphing (https://github.com/patrick-steele-idem/morphdom) is an idea that has developed around addressing this. I have created an extension to the original morphdom algorithm called idiomorph (https://github.com/bigskysoftware/idiomorph/), and the demo for idiomorph shows how it preserves a video in a situation where morphdom cannot. 37signals has recently integrated idiomorph into Turbo 8 & Rails (https://radanskoric.com/articles/turbo-morphing-deep-dive-idiomorph).

If you look at the details of the idiomorph demo you will see it's set up in a particular way: namely, the video cannot change the depth in the DOM at which it is placed, nor can any of the types of the parent nodes of the video change. This is a severe restriction on what sorts of UI changes idiomorph can handle. With the ability to reparent elements, idiomorph could offer a much better user experience, handling much more significant changes to the DOM without losing state such as video playback, input focus, etc.

Note that it's not only morphing algorithms like idiomorph that would benefit from this change: nearly any library that mutates the DOM would benefit from this ability. Even virtual-DOM-based libraries, when the rubber meets the road, need to update the actual DOM and move actual elements around. This change would benefit them tremendously. Thank you for considering it!
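The reuse-by-id idea behind morphing can be sketched roughly as follows (a hedged illustration; `morphChildren` and its id-matching rule are invented for this comment, not idiomorph's actual algorithm or API):

```javascript
// Sketch: when computing a new child list, reuse an existing child with a
// matching id instead of a freshly created node, so stateful elements
// (video, iframe, input) are not torn down and recreated.
function morphChildren(parent, newChildren) {
  const byId = new Map();
  for (const child of parent.children) {
    if (child.id) byId.set(child.id, child);
  }
  return newChildren.map((desired) => {
    // Prefer the live node when ids match; fall back to the new node.
    const existing = desired.id ? byId.get(desired.id) : undefined;
    return existing || desired;
  });
}
```

Real morphing also recurses into attributes and children; the point here is only that a matched node object is kept, so browser state attached to it survives.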
This adds some complexity to selection/range: how do we deal with Shadow DOM when the host moves around and the selection is partially inside the shadow DOM?
This is a very exciting proposal! In the Microsoft Teams Platform, we extensively use iframes to host embedded apps in the Teams Web/Desktop Clients. When a user navigates away from an experience powered by one of these embedded apps and comes back to it later, we provide the ability for them to keep their iframe cached in the DOM (in a hidden state) and then re-show it later when it's needed again.

To implement this functionality, we had to resort to creating the embedded app frames under the body of our page and absolutely positioning them in the right place within our UX. This approach has lots of obvious disadvantages (e.g. it breaks the accessibility tree, requires us to run a bounds-synchronization loop, etc.) and the only reason we had to resort to it was that moving the iframe in the DOM would reload the embedded app from scratch, thus negating any benefits of caching the frame.

This proposal would allow us to implement a much more ideal iframe caching solution! Note the location of the iframe in the DOM and its absolute positioning in this recording:
The WHATNOT meetings that occurred after this issue was created deferred discussion about the topic. I wonder what next steps would be needed to move this issue forward. The next meeting is on March 28 (#10215).
I hope we can get to it in the 28.3 WHATNOT. @domfarolino @past ?
It's already on the agenda, so if the interested parties are attending we will discuss this.
Are the imperative and declarative APIs meant to slowly replace the existing APIs over time? Or do we need to choose between one or the other because of potential overhead?
If I understand the question, it's mainly for backwards compatibility. In some cases you might want the existing behavior, or something subtle in your app relies on it, so we can't just change it under the hood.
This would be very nice for React since we currently basically just live with things sometimes incorrectly resetting. A couple of notes on the API options:
The thing that does cause a change is the place where the move happens. But even then it's kind of random which one gets moved and which one implicitly moves by everything around it moving. We don't remove all children and then reinsert them. So sometimes things preserve state.

A new API for insertion/move seems like a better option. We'd basically like to just always use the same API for all moves - which can be thousands at a time. This means that this API would have to be really fast - similar to insertBefore. An API like

Something new like
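The "same API for all moves" shape described here could look something like this keyed-reorder sketch (hypothetical: `move` stands in for whatever moveBefore-style primitive lands; a real diff would skip nodes already in place rather than touching every node):

```javascript
// Apply a desired keyed order with one insertBefore-like call per node.
// Processing from last to first lets each node be placed before the node
// that was just positioned after it.
function applyOrder(parent, orderedNodes, move) {
  let ref = null; // null means "append at the end"
  for (let i = orderedNodes.length - 1; i >= 0; i--) {
    move(parent, orderedNodes[i], ref);
    ref = orderedNodes[i];
  }
}
```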
One thing that's nice to nail down is whether re-ordering of child nodes is enough or whether we need to support re-parenting (i.e. the parent node changing from one node to another). Supporting the latter is a lot more challenging than just supporting re-ordering.
Definitely would prefer full re-parenting. I gave an htmx demo of a morph-based swap at GitHub where you could flip back and forth between two pages and a video keeps working: https://www.youtube.com/watch?v=Gj6Bez2182k&t=2100s The dark secret of that demo was that I had to really carefully structure the HTML in the first and second pages to make sure that the video stayed at the same depth with the same parent element types to keep the playing video working. It would be far better for HTML authors if they could change the HTML structure entirely, just build page 1 the way they want and build page 2 the way they want, and we could swap elements into their new spots by ID.
(For the purpose of brevity, I will begin using the SPAM acronym that we've been toying around with internally, which means "state-preserving atomic move". The most obvious example is that an iframe that gets SPAM-moved doesn't lose its document or otherwise get torn down.)
@sebmarkbage I understand your hesitation around a new subtree-associated HTML attribute — in that it would be over-broad, affecting tons of nested content that a framework might not own, possibly breaking parts of an app that don't expect SPAM moves to happen. But I'm curious if a new DOM API really gets you out from under that over-broadness, while still being useful? What would you expect

I guess I had in mind that the imperative API would force-SPAM-move the "state-preservable" elements in the subtree that's moving, so that any nested iframes do not get their documents reset¹. But if that API would not preserve nested iframe state, then the only way it would be possible to actually preserve that iframe's state in this case is if the application took care to apply an iframe-specific HTML attribute to it, specifying that it opts into SPAM moves:
But it sounded like that option didn't sit well with you because the application author would be one-by-one sprinkling these attributes to random iframes without understanding the context in which the SPAM move might actually take place, by a framework way higher up the stack. So how can we best enable the scenario where an
But I would love to get more thoughts on the subtree side-effects stuff in general.
I don't think we can make this happen automatically based on a content attribute on an iframe. It most certainly needs to be a completely new DOM API.
I am very much open to that, I'm just trying to consider what subtree side-effects are acceptable. That is, if
An attribute + DOM API could work together in this case a bit, to ameliorate some of the compat concerns. For example:

```js
const nodeToAtomicallyMove = document.querySelector('......');

// Never trigger atomic moves on *this* specific sub-subtree, that was built by "old" content.
nodeToAtomicallyMove.querySelector('.built-by-legacy-app').preserve = 'none';

newParent.appendAtomic(nodeToAtomicallyMove);
```

In this case, all
That sounds like something that could be built as a userland library, not something that needs to be built into the browser's native API. We really need to keep this API proposal as simple and succinct as possible.
Can you expand on why this is impossible? I can see why it might be preferable, but I think both directions are possible.
and +1 to not limiting it to reordering. We'll end up just scratching the surface of the use cases, coming back to where we started, where we still need a full solution for reparenting.
I'm also a bit at a loss as to why we'd discuss new attributes. That seems like a pretty severe layering violation? The way I see it:
I tend to agree with the conclusion, but I want to explain the main reason to consider things like an iframe attribute, in case it raises something else. Outside "keep iframes from reloading", it's unclear exactly what the effects of this would be. For focus, we need to blur and refocus anyway, e.g. in case you're moving the element to an
I think what Seb is saying is that React can decide if a move should be state-preserving, but if React added a "preserve-state" attribute to

Our perspective is that the mover decides the move semantics rather than the tree. So any moves done by this embedded application won't preserve state b/c that is what the application was expecting, and any moves done by React would preserve state because React was updated to signal this intent by using a novel API
not adding much but, beside fully agreeing with this sentence, the hidden footgun this API is throwing at developers is that even libraries "sure enough" to move around their own nodes can't prevent other libraries from interfering with live nodes ... so, ensuring not-live nodes, or nodes moved elsewhere, where the DOM has no mechanism to provide, or prevent, node ownership, really looks like somebody overlooked the reason this API is desired in the first place: the intent is in the name, nothing else should happen ... if the intent can be clear, let it be; if it needs internal disambiguations for when that cannot be performed, let that be an internal implementation detail no Web developer asked for or cared about when

Again, this API should be the new
From #1255 (comment):
This isn't entirely hypothetical, but the impact would be mostly limited to large structural tree updates, where most of the time is really spent recomputing layout and updating paint. In creating MithrilJS/mithril.js#2982, flattening a nested try/catch in the attribute update flow saved about 10% in performance, and the mere addition of try/catch to that section caused a roughly 20% perf drop, but only in the fast case of no attributes changed (where diffs are commonly a few milliseconds). In the slow case where attributes were frequently changing, it was barely outside the margin of error, but paint times would far exceed that anyways.

This is of course in the attribute update flow, and virtual DOM frameworks have to be able to process thousands, possibly tens of thousands, of these in a single frame. In some cases, skipping even one frame with those updates would result in noticeable perf degradation. (Some users use Mithril.js to power games, and so they'd need that kind of speed.) Conversely, a keyed list might have to move hundreds if you change sort order, and users would tolerate some noticeable lag. So, as long as it clocks in at no more than about 10µs per operation for the whole try/catch, my only complaint would be just the need for that try/catch.
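For context, the try/catch wrapper whose overhead is being discussed is roughly this (a sketch assuming a `moveBefore` that throws whenever a state-preserving move is not possible):

```javascript
// Try the atomic, state-preserving move first; if the browser refuses
// (e.g. disconnected parent or cross-document move), fall back to the
// classic remove-and-insert behavior of insertBefore.
function moveOrInsert(parent, node, ref) {
  try {
    parent.moveBefore(node, ref);
  } catch {
    parent.insertBefore(node, ref);
  }
}
```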
it's been mentioned that

If

```js
parent.moveBefore(
  knownNode,
  refNode?,
  {
    fallback(knownNode, refNode?) {
      return this.insertBefore(knownNode, refNode);
    },
    // OR ...
    fallback: (parent, knownNode, refNode?) => {
      return parent.insertBefore(knownNode, refNode);
    }
  }
)
```

If such a third option is not passed along, let it throw; if it's there, use that

If the
@WebReflection those property bags have a GC impact so they can be more expensive than try/catch blocks. I think the

For example, your video editing app moves YouTube iframes around, and wants to absolutely make sure they're not reloaded during the move. If we use the fallback, this will fail silently and the unexpected outcome would be delivered to the user. If we throw, a change in the user code that breaks the move (e.g. moving via a

Also when I think about frameworks, they should probably use the throwing version internally (without
those can all be an always-the-same object to pass around though; mine was an inline example. I would write that object once and pass it every single time, so not much GC pressure / impact?
my thinking is that having

edit 'cause once again, if we need two APIs to do the same thing I'd rather have the
Probably choose a different code path for moving nodes around.

```js
// Option A
new_parent.moveBefore(iframe, ref_node);

// Option B
const fragment = document.createDocumentFragment();
fragment.moveBefore(iframe, null);
new_parent.moveBefore(fragment, ref_node);
```

If we use the fallback version, both of these would succeed and would appear to the developer as if they're doing the same thing, while in fact they have a very different effect to the user.
I have no objections to this.
Option B is obviously using the wrong methods though ... a disconnected fragment (which is all fragments) that uses

```js
if (parent.isConnected)
  parent.moveBefore(node, ref_node);
else
  parent.insertBefore(node, ref_node);
```

AFAIK that's not the end of the story though; the operation can fail on other occasions too ... the accessor I've mentioned also wouldn't work; a method such as
"makes no sense" is exactly it. This is exactly when we throw!
It would only fail when moving between connected/disconnected parents, across documents, or when trying nonsensical things like moving comment nodes. What are those "other occasions"?
I was referring to these checks #1307 (comment) but now there is a new one needed for comments ... comment nodes are used in both lit-html and my libraries, among others, to pin-point fragments, and when these fragments are moved their comment nodes move along without needing to leave the living DOM, they are just an indirection ... so it's new to me that comments can't be moved (and why? these are the least problematic thing ever when it comes to repaint/reflow) and that mentioned check should become:

```js
const moveOrInsertNode = (container, node, ref_node = null) => {
  const canMove = (
    container.isConnected &&
    node.isConnected &&
    node.nodeType !== node.COMMENT_NODE &&
    (!ref_node || ref_node.parentNode === container)
  );
  return canMove ?
    container.moveBefore(node, ref_node) :
    container.insertBefore(node, ref_node);
};
```

which is starting to become very ugly and a performance hazard due to all those checks needed per every single node that would like to be moved ... I don't think a

If the method needs to do all those checks internally, even a companion method to know if a node can be moved would be a duplication of checks and intents ... the fastest, best way to have it all at once seems to be the third argument then, with a dev-defined callback.
Sorry, the restriction about comments is for the parent. You can move comment nodes.
so ... two
Yea, you can move between disconnected parents.
to whom it might concern, just adding those two
@WebReflection What time is the "~30% slower" relative to? And same for the "~20% slowdown". Just looking for some perspective here.
multiple moves, and we're talking 0.3 vs 0.4, but for benchmarks 0.1 might mean everything ... of course more extensive tests with actually performing

```js
Node.prototype.moveBefore = Node.prototype.insertBefore;
```

right before the test suite benchmark.
@WebReflection Sorry, I meant like specific time numbers, like 60k ops/sec or 1 ms/op.
@WebReflection given that it's established that raw
@noamr the whole discussion is about not landing

I am also OK to stop discussing it, but so far we have most library authors not happy about that throwing (using such a name that is too similar to

If there is no way this method can be renamed to

Unless explicitly asked to, I will stop commenting on this, or the other, issue.
FYI React experimental reconciler integration PR: facebook/react#31596, with a preference for not throwing apparently:
To me |
The benchmark discussion was simply us assessing whether it's fast enough for it to be viable for us to use - if it's too slow, the throwing variant is outright useless to us. And switching to a method that moves instead of removing and inserting would allow us to fix a longstanding animations bug: MithrilJS/mithril.js#2612. It's incredibly important to me that it meets my performance requirements. And those requirements are stiff: around 5 µs/op firm and 50 µs/op hard. (Soft affords ~1k keyed element moves without dropping frames at 60 FPS, and hard affords ~100 moves.) Slower than that, I simply can't switch to it. And no, this is not hypothetical - check the comment at the top of this file and imagine if someone reverses such a 100-row table's order. (Namesilo's UI currently allows changing sort order, by the way, and they can display up to 250 rows per page.)
This is why I'm so concerned and pushy about it. We need the ability to just move, and we need simple node movement to be very fast.
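The per-operation budget being described can be checked with a trivial harness along these lines (illustrative only; `performMove` is a placeholder for the real DOM call under test):

```javascript
// Time n invocations of performMove and report microseconds per operation.
// Against the budget described above, a result over ~5 µs/op would miss the
// soft target, and over ~50 µs/op would miss the hard one.
function microsPerOp(performMove, n = 1000) {
  const start = Date.now();
  for (let i = 0; i < n; i++) performMove(i);
  return ((Date.now() - start) * 1000) / n;
}
```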
I want to push back lightly on the use cases (e.g. #1255 (comment)) for throwing to try to understand this more. When exactly is it valuable? We have a number of obvious cases where state can be affected, for example (1) animation, (2) focus, (3) iframe status. We have 2 important nodes, the container to move to and the target moving element. I think that means these permutations:
I can't see any value in throwing for case (3) since there's no relevant state to consider. That leaves case (4) and the question: is there ever a time where you wouldn't want to remove/disconnect an element to prevent its state from being lost? I'm struggling to find a practical case where that would be desirable. I also think that moving across documents is a corner case, and if we throw there, I suspect people won't care because that likely won't need hot-path checking code.
The way I am looking at this is whether the method performs a move or not. For the past two-and-a-half decades the DOM has only had insert and remove. And we've only been able to define a move operation for two same-document connected nodes or two same-shadow-including-root disconnected nodes (I suspect that's sufficient for @sebmarkbage's case cited above). Note that for the connected case it also comes with a new custom element callback (this is not offered in the disconnected case because custom elements never fire callbacks there; only built-in elements appear to have a need for that thus far).

Making it implicitly fall back to different semantics prevents us from building upon it in the future, as people would come to rely on it having insert or remove behavior in a certain set of scenarios. And it would also be rather magical to change the semantics under the hood. You could perhaps have a method

It will be interesting to see the performance aspects explored more. Once there is more data there we should certainly investigate why a couple of conditionals are so expensive. And perhaps adoption of abstractions is so overwhelming that it's indeed compelling to introduce

Also, let me say that this issue has become quite unreadable. Adding way too many duplicate comments (this includes repeating what other people already said, to be clear) in just a couple of days is not a good way to make your case. There are 160 people watching this repository. We all owe it to them to be more considerate of their time and attention.
last question from me: is it possible to rename and land this method as

edit to not bother more ... but ...
This is the exact reason most of us are against landing a completely new thing under a name that undermines its most meaningful behavior from veterans to new-comers. If this is all new, and somehow experimental, because:
... but in the meantime the most desirable name for such an API has been doomed forever. I strongly believe there is all the time we want, and need, to explore such a new behavior/API under a name that does not misrepresent, in name, meaning, and intent, what most Web developers would expect from such an API: let it be

As a quick reminder, the most successful and popular websites out there have infinite lists and/or tabular data, all cases where nodes inevitably and quickly get out of, and back into, the living DOM ... a sorted or filtered table is the same; it's not always a reordering of a to-do list, and as mentioned before:
so it feels like this API does not have "scaling" in mind when it comes to performance. Moreover, the state-preserving feature of this API has been overlooked as the only desired outcome, when in practice X, and similar projects, FB, and similar projects, or even GitHub, which renders code views in split chunks of nodes, won't benefit at all from the ergonomics, the duplicated checks on both the client side and inevitably in the native browser engine too, to do what is the most common thing everyone does these days: stream visual data to consume.
Agreed, but only if libraries have a way to hook carelessly into the fastest possible path, because otherwise this API is asking its consumers to add bloat to every page the world is surfing these days, because it literally couldn't compromise on anything, even if every developer that paid attention (I even suggested

I understand there are many people involved in this project, and while repeated reasons from different developers are annoying, because repeated, the fact that different developers repeatedly stated they didn't like the throwing should be visible to all others involved in this effort. It's a new thing, it's potentially a game-changer only if it lands right ... and until it does, please do not nuke the most spot-on name; that will backfire the day after it lands out there. Thanks to anyone patient enough to read this.

edit I hope for the best, which is everything but

edit 2 as mentioned in the comment after, if
Hey all, I want to weigh in with my perspective in support of both a low-level This thread is unfortunately noisy, full of long and repetitive comments by the same people. This is not helpful for making your point, and indeed is more likely to cause entrenchment and get yourself ignored. I usually subscribe to all whatwg/* issues, but had to unsubscribe from this thread (and the associated pull request) because of the excess noise a week ago. I see that since then there have been tens of comments. Please consider a more constructive strategy for engaging in standards bodies in the future, such as leaving a single, ideally short comment laying out your position and then accepting the fact that others might disagree. I'll call out @sorvell's single such comment at #1255 (comment) as a good example of this. Secondly, I think it's discouraging for us as spec writers and implementers to work on such a cool, technically-difficult, and long-awaited feature, which does the magic work of preserving iframe/canvas/etc. state even across moves… and have almost the entirety of the web developer feedback be about surface-level details. There's no appreciation or encouragement; there's just complaints about how terrible it is to write an extra Alright, on to the technical bits. I don't think performance is a serious consideration here. I agree there's value in a low-level However, unlike the Recall that there are a few error cases under discussion:
My proposal is that we allow

I claim that the cross-document case is clearly a programmer error. Just like frameworks bubble up the exception that is thrown if you try to insert two

Alternatively, frameworks could surface clearly-different APIs for same-document vs. cross-document operations, if they don't want to throw. This is not something they need to do today, because today there is no DOM API whose behavior differs significantly depending on cross- vs. same-document. But if a framework truly intends to be compatible with multiple documents, they're going to need to start surfacing this distinction to their users, because now such an operation exists. There's just no avoiding the extra complexity here. Hiding it is the wrong move, and not one we should encourage in the API design by having

The connected-to-disconnected case is a bit trickier, but only a bit. I don't claim that it's always a programmer error to be moving an element into a disconnected document or

I think my proposal gives a version of

I think we have ample evidence from this thread that this would be helpful to web developers today, and would be suitable for merging alongside the low-level

It's possible some web and framework developers feel that a state-losing fallback would be helpful, beyond what I've proposed. I am skeptical that we have enough evidence to support that conclusion. As per the above, I don't think frameworks have yet adapted to the reality where cross-document or connected-to-disconnected-to-connected are state-losing operations. I'd like to see how that goes for the next year or so before considering an even-more-lenient state-losing fallback version of this API.
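A userland sketch of these proposed semantics (fall back to insertion for moves involving a disconnected side, keep throwing for cross-document moves) might look like the following. `moveBeforeOrInsert` is a hypothetical name, the node objects only need DOM-like properties, and for simplicity this ignores the both-disconnected move case the thread also discusses:

```javascript
// Cross-document: treated as a programmer error, so we throw.
// Connected-to-connected (same document): state-preserving move.
// Anything involving a disconnected side: plain state-losing insert.
function moveBeforeOrInsert(parent, node, ref = null) {
  if (node.ownerDocument !== parent.ownerDocument) {
    throw new Error('cannot atomically move a node across documents');
  }
  if (parent.isConnected && node.isConnected) {
    return parent.moveBefore(node, ref);
  }
  return parent.insertBefore(node, ref);
}
```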
@domenic not sure you're still ignoring this thread, but worth trying to communicate my (as terse as possible) answer, and I am not speaking for others, just for myself, hoping others agree (with maybe just a thumb up instead of repeated comments):
Last, but not least, thanks for your comment and the will to fix users' expectations, one way or another ❤️

P.S. I'd like to propose placeBefore instead of

how many child.isConnected are required?

moveBefore - as it stands: users must check both parent and child.

moveBeforeOrInsert - as proposed: users must check the parent, use the method if connected, use the fallback if not.

With

The proposed default for this third argument is

Alternatively, a
I was surprised by the signature of

I had hoped to describe the move destination more ergonomically, like with
What problem are you trying to solve?
Chrome (@domfarolino, @noamr, @mfreed7) is interested in pursuing the addition of an atomic move primitive in the DOM Standard. This would allow an element to be re-parented or re-ordered without today's side effects of first being removed and then inserted.
Here are all of the prior issues/PRs I could find related to this problem space:
- insertBefore vs appendChild and transitions #880

Problem
Without an atomic move operation, re-parenting or re-ordering elements involves first removing them and then re-inserting them. With the DOM Standard's current removal/insertion model, this resets lots of state on various elements, including iframe document state, selection/focus on `<input>`s, and more. See @josepharhar's reparenting demo for a more exhaustive list of state that gets reset.

This causes lots of developer pain, as recently voiced on X by frameworks like HTMX, and other companies such as Wix, Microsoft, and internally at Google.
This state-resetting is in part caused by the DOM Standard's current insertion & removal model. While well-defined, its model of insertion and removal steps has two issues, both captured by #808:
What solutions exist today?
One very limited, partial solution that does not actually involve any DOM tree manipulation is this shadow DOM example that @emilio had posted a while back: whatwg/html#5484 (comment) (see my brief recreation of it below).
But as mentioned, this does not seem to perform any real DOM mutations; rather, the slot mutation seems to just visually compose the element in the right place. Throughout this example, the iframe's actual parent does not change.
Otherwise, we know there is some historical precedent for trying to solve this problem with WebKit's since-rolled-back "magic iframes". See whatwg/html#5484 (comment) and https://bugs.webkit.org/show_bug.cgi?id=13574#c12. We believe that the concerns from that old approach can be ameliorated by:
How would you solve it?
Solution
To lay the groundwork for an atomic move primitive in the DOM Standard, we plan on resolving #808 by introducing a model desired by @annevk, @domfarolino, @noamr, and @mfreed7, that resembles Gecko & Chromium's model of handling all script-executing insertion/removal side-effects after all DOM mutations are done, for any given insertion.
With this in place, we believe it will be much easier to separate out the cases where we can simply skip the invocation of insertion/removal side-effects for nodes that are atomically moved in the DOM. This will make us, and implementers, confident that there won't be any way to observe an inconsistent DOM state while atomically moving an element, or experience other nasty unknown side-effects.
The API shape for this new primitive is an open question. Below are a few ideas:
- `append(node, {atomic: true})`
- `replaceChild(node, {atomic: true})`
Compatibility issues here take the form of relying on insertion/removal side-effects which no longer happen during an atomic move. They vary depending on the shape of our final design.
A non-exhaustive list of additional complexities that would be nice to track/discuss before a formal design:
Anything else?
No response