Use no isolation and normal blending for default SET animation #167

Closed
khushalsagar opened this issue Jul 18, 2022 · 6 comments

@khushalsagar
Collaborator

SET (Shared Element Transitions) uses the following CSS to blend the 2 images created for a transition:

/* "wrapper", "incoming-image" and "outgoing-image" stand in for the
   transition pseudo-elements. */
wrapper {
  isolation: isolate; /* blend the two images with each other, not with the page behind */
}

incoming-image, outgoing-image {
  mix-blend-mode: plus-lighter; /* clamped addition of premultiplied colors */
}

The reason for the above is to ensure that if the visual output of a widget is the same (either completely or partially) before and after a navigation, then blending the two images is a visual no-op.
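Concretely (a worked example, not quoted from the spec text): plus-lighter adds the premultiplied colors of the two images and clamps. During the default cross-fade the outgoing image animates to opacity 1 − t while the incoming image animates to opacity t, so wherever both images hold the same premultiplied pixel value C:

result(t) = min(1, (1 − t)·C + t·C) = C, for all t in [0, 1]

With normal blending, the same cross-fade would briefly let the backdrop show through, so even identical content would visibly flicker.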

While the above ensures that there are no visual glitches irrespective of the content of the 2 images, it means we're doing an expensive blending operation in the default case: we need an offscreen pass to blend the 2 images before drawing them into the backbuffer. This is sub-optimal for 2 reasons:

  • A significant number of cases would likely not need this (since the transition will be between different widgets).

  • The API allows the developer to trivially add CSS to get this behaviour, so a developer could choose to add it only for cases where they know the visual output of the 2 widgets could be identical. For cases where the 2 widgets are completely identical (like a sticky header) but the transform or size animation is still needed, a simple approach could be to add opacity: 0 to the incoming or outgoing snapshot (see the sketch below).
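A minimal sketch of that opt-in (reusing the placeholder selector names from above, and assuming the default no longer isolates and blends):

/* Option 1: re-enable glitch-free blending for a tagged element whose
   pixels may partially overlap before and after the navigation. */
wrapper {
  isolation: isolate;
}

incoming-image, outgoing-image {
  mix-blend-mode: plus-lighter;
}

/* Option 2: for a widget known to be pixel-identical (like a sticky
   header) where only the transform/size animation matters, hide one
   snapshot instead of blending. */
outgoing-image {
  opacity: 0;
}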

So the trade-off is to either make the default case use complex rendering that is guaranteed never to be visually wrong, or make the default the performant option, with guidance for developers to opt in to the complex rendering when needed (using what they know about their UX).

@jakearchibald
Collaborator

Putting this responsibility onto developers, who don't have pixel-access to the incoming & outgoing images, seems bad.

Could we just optimise this ourselves, either:

Magically

If the fade animation is between two images that are identical, then just show the incoming-image, and skip isolation.

This optimisation wouldn't be visible in getComputedStyle etc; it's entirely behind the scenes.

When constructing the pseudo tree

If the incoming and outgoing image are identical, then don't perform the fade animation, just start with the incoming-image. Don't apply isolation.

This optimisation would be detectable via getComputedStyle etc.
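A sketch of the effective styles the second option might apply when the images are detected to be identical (hypothetical, reusing the placeholder selectors from the first comment):

wrapper {
  isolation: auto;  /* no offscreen pass needed */
}

outgoing-image {
  display: none;    /* nothing to cross-fade */
}

incoming-image {
  mix-blend-mode: normal;
  animation: none;  /* skip the fade; just show the final state */
}

Since these would be the real computed styles of the pseudo tree, this is what getComputedStyle would report.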

@vmpstr
Collaborator

vmpstr commented Jul 19, 2022

I agree that we should still have a good blending mode. The fact that we can't handle some cases efficiently is something that we should fix.

I don't think we can detect whether incoming and outgoing are the same content with any reasonable degree of confidence, but we could set up the blend mode only for cases where there are in fact both incoming and outgoing images.

@jakearchibald
Collaborator

I don't think we can detect whether incoming and outgoing are the same content with any reasonable degree of confidence,

Is comparing the pixels too slow?

@khushalsagar
Collaborator Author

Putting this responsibility onto developers, who don't have pixel-access to the incoming & outgoing images, seems bad.

That's not what I had in mind. I was expecting developers would add the CSS since they know which elements are being tagged. "This is a header which I know is the same in both", "I'm expanding this widget to reveal more content so some pixels will be the same that should blend properly."

I find it similar to how images are tagged with information about whether every pixel is opaque, which the engine can use for many optimizations (without having to visit every pixel to confirm it).

Is comparing the pixels too slow?

The issue is that the textures (for the incoming elements) whose pixels we'd need to compare are only generated when producing a frame for the animation. So for the first frame (when you don't yet know), you'd need to do a pass with plus-lighter offscreen first. You could cache whether the pixels were identical while doing this and do the more optimal thing in the next few frames, but that will be complicated to set up.

I can buy that this is too much complexity to push onto developers. We're going to have to optimize this path even with blending anyway. It's just that you might hit the limit on the number of elements you can tag sooner with blending enabled than without. And as a developer, it seemed like I would notice the visual glitch and add the requisite CSS only when it's necessary (I was assuming that would be rare). But if it's a perf gotcha, that's harder to debug.

@jakearchibald
Collaborator

"This is a header which I know is the same in both"

I don't think that's easy to know, particularly in the MPA case.

Is there something cheaper/simpler we could compare, like the Skia calls? Although I guess different Skia calls could result in the same pixels.

@vmpstr
Collaborator

vmpstr commented Jul 21, 2022

The fix here should be for the implementation to handle a number of isolation/blend-mode elements without causing undue delay. Comparing content for equality could be one approach, although I don't think that would be trivial to do. Some sort of caching / partial update should also work.

I'm going to close this for now since we agreed to keep the "visually correct by default" behavior and optimize performance as needed.

vmpstr closed this as completed Jul 21, 2022