Issues/Improvements to Shadertoy support #360

Closed
Vipitis opened this issue May 18, 2023 · 12 comments

Labels
question Further information is requested

Comments

@Vipitis
Contributor

Vipitis commented May 18, 2023

hey,

I have found this project and it looks really promising. I am working on a project that looks at language models generating shader code.
Following the examples, I got it working for myself.
However, I ran into several issues:

  • There is shader.show(), but to stop it you have to call shader._canvas.close() (in a Jupyter notebook).
  • To get the actual frames you can call shader._canvas.get_frame().
  • Shadertoy always uses the auto canvas, so you can't run it on an offscreen/fake canvas, for example.
  • I use two instances of the Shadertoy class to compare ground truth and generation, but the second shader is a frame ahead.

Now my questions:

  • Is there a way to render a single frame (perhaps at a given time)?
  • How are errors in the shader code handled? Right now I seem to get a Jupyter kernel crash (might be GPU dependent?).
  • Can you access the parser/compiler to figure out where in the source code a specific function is defined?

My suggestion:

  • add methods to the Shadertoy class to access the information inside ._canvas directly
  • expose more arguments to __init__() to set the canvas, framerate, ... (keeping the defaults), see the sketch below
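
A rough sketch of what I mean; the argument names are just a suggestion and none of them exist yet:

# Hypothetical signature, not an existing API.
class Shadertoy:
    def __init__(
        self,
        shader_code,
        resolution=(800, 450),
        canvas_class=None,   # e.g. the offscreen or Jupyter canvas; None = auto-select like today
        max_fps=60,          # passed through to the canvas
        offscreen=False,     # render without opening a window
    ):
        ...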

I will read through more docs and look at the source. The low-level stuff is a bit too advanced for me, but I might find some time to contribute myself.

@almarklein
Member

Hi!

General remarks regarding the shadertoy utility:

  • The current shadertoy is designed to be really plug-n-play, and is not (yet) very configurable.
  • That said, I'm sure it could be improved, perhaps by adding optional arguments?
  • The shadertoy was contributed by a user, and since it is not our priority, we are unlikely to improve it ourselves. Though we'd accept contributions to improve it 👍

Is there a way to render a single frame (perhaps at a given time)?

Not at the moment. Perhaps an option could be added, e.g. offscreen, that would use an offscreen canvas and return the screenshot?
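
Roughly like the sketch below; the offscreen argument is hypothetical, and shader_code stands for your Shadertoy-style source string:

# Sketch only: the "offscreen" argument does not exist in wgpu.utils.shadertoy today.
import numpy as np
from wgpu.utils.shadertoy import Shadertoy

shader = Shadertoy(shader_code, resolution=(512, 420), offscreen=True)
shader._canvas.request_draw(shader._draw_frame)  # schedule exactly one draw
frame = np.asarray(shader._canvas.draw())        # the offscreen canvas returns the rendered frame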

How are errors in the shader code handled? Right now I seem to get a Jupyter kernel crash (might be GPU dependent?).

In most cases the errors should be caught when the GPU shader module is created and a clear error should be printed. A kernel crash should be considered a bug (though it might be one that we cannot fix, because its cause lies upstream in wgpu-native).
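
For what it's worth, a best-effort guard from user code could look like the sketch below; whether an invalid shader raises a Python exception or takes the process down depends on wgpu-native, and maybe_broken_code is just a placeholder:

# Shader errors are usually reported when the module is created, i.e. during construction.
# A hard crash inside wgpu-native cannot be caught this way.
from wgpu.utils.shadertoy import Shadertoy

try:
    shader = Shadertoy(maybe_broken_code, (512, 420))
except Exception as err:
    print("Shader did not compile:", err)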

Can you access the parser/compiler to figure out where in the source code a specific function is defined?

No, it's opaque. We just call a function, pass it the WGSL, and it produces a GPU module.

expose more arguments to __init__() to set the canvas, framerate, ... (keep defaults).

This sounds like the most feasible approach to make this utility more flexible 👍

@almarklein almarklein added the question Further information is requested label May 22, 2023
@Vipitis
Contributor Author

Vipitis commented May 22, 2023

Thank you for the answers.

The original contributor is @panxinmiao, if I am reading the pull requests correctly. I will start writing an adapted Shadertoy class that incorporates some of the features mentioned above. From some more digging, it looks like the time is based on time.perf_counter(), so you don't even get a 0th frame at time 0, just slightly off. That is one of the requirements I will try to implement first.

@panxinmiao
Contributor

The current Shadertoy is a highly encapsulated utility that is used for testing and demonstrating shader code, with very limited customization options. It's great to see someone making improvements to it.

From some more digging, it looks like the time is based on time.perf_counter(), so you don't even get a 0th frame at time 0, just slightly off.

Actually, the "time" variable here is an accumulation of the time increments obtained from time.perf_counter() at each frame render. It should start from 0, unless there is a bug 😅
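
In pseudocode, the bookkeeping is roughly the following (a simplified sketch of the idea, not the exact source):

import time

elapsed = 0.0     # what the shader sees as "time"
last_time = None  # not set until the first draw

def on_frame():
    global last_time, elapsed
    now = time.perf_counter()
    if last_time is not None:
        elapsed += now - last_time  # accumulate the per-frame delta
    last_time = now
    # "elapsed" is then written into the uniform buffer, so the first frame sees 0.0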

@Vipitis
Contributor Author

Vipitis commented May 23, 2023

Here is my experimental setup (the aim is to compare two different shader codes at the same time, and do so via the frames they render). I do my development inside a Jupyter notebook. debug_code is copied from this shader https://www.shadertoy.com/view/4ts3R8, but with the decimal places for print(iTime ...) near the end set to 4.

from PIL import Image, ImageChops
import numpy as np
from wgpu.utils.shadertoy import Shadertoy as ST

resolution = (512, 420)
# test_codes = [test_code, test_code] # replace with your list of test codes
test_codes = [debug_code, debug_code]
images = []

for code in test_codes:
    shader = ST(code, resolution)
    print(shader._uniform_data["time"])
    shader.show()
    shader._canvas.close()
    frame = shader._canvas.get_frame()
    print(shader._last_time)
    print(shader._uniform_data["time"])
    print(shader._frame)
    img = Image.fromarray(frame)
    images.append(img)
    # del shader # to remove any left overs?

# remove transparent alpha
for i in range(len(images)):
    images[i] = images[i].convert('RGB')

# display images (display() is provided by IPython/Jupyter)
for img in images:
    display(img)

My output looks like the following (the numbers change around a bit every time):
[image: notebook output showing the printed time values and the rendered frames]

Do you have any idea how to grab the frame when time is exactly 0, or what changes to make instead? I have a busy week ahead, but will find time to dig around myself soon.

@panxinmiao
Contributor

panxinmiao commented May 24, 2023

_frame is used internally in Shadertoy and is not intended to be accessed externally. It may not correspond to the real-time variable values in the shader program (in fact, it should represent the variable values at the rendering of the next frame). If you want to access these real-time properties externally, the code would need to be reorganized; the simplest way would be to map the values directly from the uniform_buffer.

Currently, Shadertoy does not provide a method for users to explicitly render a specific frame (off-screen). In practice, the rendering is handled by the GUI program, which calls the rendering at the appropriate time based on the refresh rate. Your rendering result above seems to have skipped the first frame, and I'm not sure what the reason is.

However, we can provide a function like snapshot. It will immediately render one frame and return the rendered result. This should not be complex to implement.

Edit:
The shadertoy.show() method hands over the rendering call to the GUI for processing, so the frame you get later is not the first frame (the GUI may have already called the render several times).

@panxinmiao
Contributor

I just checked the code for “Jupyter Canvas”. As a temporary alternative, perhaps you can try directly calling the internal "Shadertoy._draw_frame()" method.

Like this:

.....
for code in test_codes:
    shader = ST(code, resolution)
    shader._draw_frame()
    frame = shader._canvas.get_frame()
    ......

Note that I did not test this in a Jupyter notebook 😅

@Vipitis
Contributor Author

Vipitis commented May 25, 2023

This kind of works, but it would always give me a single pixel, and I did not figure out why.
After some more digging through all the inherited classes, I found that there is shader._canvas.snapshot(),

and the following code I put together seems to do what I want: render a single frame at a given time.

from PIL import Image, ImageChops
import numpy as np
from wgpu.utils.shadertoy import Shadertoy

resolution = (512, 420)
# test_codes = [test_code, test_code] # replace with your list of test codes
test_codes = [debug_code, debug_code]
frames = []

for code in test_codes:
    shader = Shadertoy(code, resolution)
    shader._uniform_data["time"] = 1.234 #set any time you want
    shader._canvas.request_draw(shader._draw_frame)
    frame = shader._canvas.snapshot().data
    frames.append(frame)

images = []
for frame in frames:
    img = Image.fromarray(frame)
    # drop the alpha channel
    img = img.convert('RGB')
    images.append(img)
    display(img)

I have only tested this inside a Jupyter notebook, but it does work for me so far - and I don't have to implement anything myself.

@Korijn
Collaborator

Korijn commented May 25, 2023

Essentially you're not actually using the shadertoy framework anymore. You're just rendering a full screen quad with a shader. I kind of suspected this earlier.

We could perhaps provide a utility function for this process in the shadertoy namespace, similar to wgpu.utils.compute_with_buffers.
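
A rough sketch of what such a helper could look like; the name render_frame and its arguments are only illustrative, and it assumes an offscreen-capable Shadertoy (which is exactly what this issue asks for):

# Hypothetical helper in the spirit of wgpu.utils.compute_with_buffers; nothing like this exists yet.
import numpy as np
from wgpu.utils.shadertoy import Shadertoy

def render_frame(shader_code, resolution=(800, 450), time=0.0):
    # Render a single frame of shader_code offscreen and return it as an ndarray.
    shader = Shadertoy(shader_code, resolution)      # assumes the shader renders to an offscreen canvas
    shader._uniform_data["time"] = time              # pin the time uniform before drawing
    shader._canvas.request_draw(shader._draw_frame)  # request exactly one draw
    return np.asarray(shader._canvas.draw())         # the offscreen canvas returns the frame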

@Vipitis
Contributor Author

Vipitis commented Jun 5, 2023

Here is the implementation I have been using:

from PIL import Image
import numpy as np
import wgpu
from wgpu.utils.shadertoy import * 
from wgpu.gui.offscreen import WgpuCanvas as OffscreenCanvas, run as run_offscreen

# custom Class
class ShadertoyCustom(Shadertoy):
    def __init__(self, shader_code, resolution=(800, 450), canvas_class=WgpuCanvas, run_fn=run):
        self._canvas_class = canvas_class
        self._fun_fn = run_fn
        super().__init__(shader_code, resolution)
        self._uniform_data = UniformArray(
            ("mouse", "f", 4),
            ("resolution", "f", 3),
            ("time", "f", 1),
            ("time_delta", "f", 1),
            ("frame", "I", 1),
        )
        
        self._shader_code = shader_code
        self._uniform_data["resolution"] = resolution + (1,)

        self._prepare_render()
        self._bind_events()
    
    def _prepare_render(self):
        import wgpu.backends.rs  # noqa

        self._canvas = self._canvas_class(title="Shadertoy", size=self.resolution, max_fps=60)

        adapter = wgpu.request_adapter(
            canvas=self._canvas, power_preference="high-performance"
        )
        self._device = adapter.request_device()

        self._present_context = self._canvas.get_context()

        # We use "bgra8unorm" not "bgra8unorm-srgb" here because we want to let the shader fully control the color-space.
        self._present_context.configure(
            device=self._device, format=wgpu.TextureFormat.bgra8unorm
        )

        shader_type = self.shader_type
        if shader_type == "glsl":
            vertex_shader_code = vertex_code_glsl
            frag_shader_code = (
                builtin_variables_glsl + self.shader_code + fragment_code_glsl
            )
        elif shader_type == "wgsl":
            vertex_shader_code = vertex_code_wgsl
            frag_shader_code = (
                builtin_variables_wgsl + self.shader_code + fragment_code_wgsl
            )

        vertex_shader_program = self._device.create_shader_module(
            label="triangle_vert", code=vertex_shader_code
        )
        frag_shader_program = self._device.create_shader_module(
            label="triangle_frag", code=frag_shader_code
        )

        self._uniform_buffer = self._device.create_buffer(
            size=self._uniform_data.nbytes,
            usage=wgpu.BufferUsage.UNIFORM | wgpu.BufferUsage.COPY_DST,
        )

        bind_group_layout = self._device.create_bind_group_layout(
            entries=binding_layout
        )

        self._bind_group = self._device.create_bind_group(
            layout=bind_group_layout,
            entries=[
                {
                    "binding": 0,
                    "resource": {
                        "buffer": self._uniform_buffer,
                        "offset": 0,
                        "size": self._uniform_data.nbytes,
                    },
                },
            ],
        )

        self._render_pipeline = self._device.create_render_pipeline(
            layout=self._device.create_pipeline_layout(
                bind_group_layouts=[bind_group_layout]
            ),
            vertex={
                "module": vertex_shader_program,
                "entry_point": "main",
                "buffers": [],
            },
            primitive={
                "topology": wgpu.PrimitiveTopology.triangle_list,
                "front_face": wgpu.FrontFace.ccw,
                "cull_mode": wgpu.CullMode.none,
            },
            depth_stencil=None,
            multisample=None,
            fragment={
                "module": frag_shader_program,
                "entry_point": "main",
                "targets": [
                    {
                        "format": wgpu.TextureFormat.bgra8unorm,
                        "blend": {
                            "color": (
                                wgpu.BlendFactor.one,
                                wgpu.BlendFactor.zero,
                                wgpu.BlendOperation.add,
                            ),
                            "alpha": (
                                wgpu.BlendFactor.one,
                                wgpu.BlendFactor.zero,
                                wgpu.BlendOperation.add,
                            ),
                        },
                    },
                ],
            },
        )
    
    def show(self, time: float = 0.0):
        self._canvas.request_draw(self._draw_frame)
        self._fun_fn()
    
    def snapshot(self, time):
        if hasattr(self, "_last_time"): #this is left over when the draw is first called
            self.__delattr__("_last_time") #we reset this so our time can be set.
        self._uniform_data["time"] = time #set any time you want
        self._canvas.request_draw(self._draw_frame)
        if issubclass(self._canvas_class, wgpu.gui.jupyter.JupyterWgpuCanvas):
            frame = self._canvas.snapshot().data
        elif issubclass(self._canvas_class, wgpu.gui._offscreen.WgpuOffscreenCanvas):
            frame = np.asarray(self._canvas.draw())
        img = Image.fromarray(frame)
        # remove transparent pixels
        img = img.convert('RGB')
        return img

def get_image(code, time= 0.0, resolution=(512, 420)):
    shader = ShadertoyCustom(code, resolution, OffscreenCanvas, run_offscreen) #pass offscreen canvas here (or don't)
    return shader.snapshot(time)

which lets you pass the canvas class and run function, and also adds a snapshot method (for offscreen and Jupyter).

Edit (16.10.): fixed snapshot having the wrong time when called sequentially on the same object.

@Korijn
Collaborator

Korijn commented Oct 27, 2023

@Vipitis are you still interested in working on this? To improve the shadertoy class?

ShaderToy is not a core feature of wgpu-py (the wgpu-native wrappers and the GUI backends are!), so it's not a priority for us as maintainers.

If you're not interested in making the improvements and opening a pull request, please close the issue. Thanks!

@Vipitis
Contributor Author

Vipitis commented Oct 27, 2023

I am still working on the custom class that adds some functionality: validation via naga (although since it's now merged, maybe there is a less ugly way to do so), an offscreen canvas and offscreen run, and a snapshot method to render a frame at any given time (I might add additional inputs like mouse position soon).
However, I am noticing more and more that there is plenty of functionality that I really don't need. So I am close to simply using the compatibility layer (adding some shader code in front and some behind) and then using more native functions.
I will likely write some form of error parser to point back at the location in the original code; a rough sketch of that idea follows below.
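
Since the fragment source is assembled as builtin variables + user code + fragment entry point, the mapping could be as simple as subtracting the number of prepended lines. A sketch under that assumption (error message formats differ per backend):

# Map a line number reported by the compiler back to the user's shader code.
# Assumes the fragment source is built as prefix + user_code + suffix,
# like wgpu.utils.shadertoy does with builtin_variables_glsl and fragment_code_glsl.
def to_user_line(reported_line: int, prefix: str) -> int:
    prefix_lines = prefix.count("\n")  # lines added in front of the user's code
    return reported_line - prefix_lines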

I will open a PR this weekend, which includes the features I currently have, so you can cherry pick the parts that make sense.

But it's more likely that I will strip a lot of the functionality in the next 2-3 weeks for my project (which is hosted as an HF space: https://huggingface.co/spaces/Vipitis/shadermatch/blob/main/shadermatch.py#L131).

@Vipitis Vipitis mentioned this issue Oct 28, 2023
@Vipitis
Contributor Author

Vipitis commented Nov 6, 2023

Closing this for now. Will work on PRs to add additional built-ins (iDate, iFramerate) and further features (like snapshot for the Jupyter canvas) in the very near future.

The issues I encounter seem to be more appropriate for wgpu/naga directly.

@Vipitis Vipitis closed this as completed Nov 6, 2023