Issues/Improvements to Shadertoy support #360
Hi! General remarks regarding the shadertoy utility:
Not at the moment. Perhaps an option could be added e.g.
In most cases the errors should be caught when the GPU shader module is created, and a clear error should be printed. A kernel crash should be considered a bug (though it might be one that we cannot fix, because its cause lies upstream in wgpu-native).
No, it's opaque. We just call a function to which we pass the wgsl, and it produces a GPU module.
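For illustration, a minimal sketch of that call in wgpu-py (the `device` and `wgsl_source` names are just placeholders here; device setup is assumed to already exist):

```python
# Sketch only: the wgsl source is handed to the device, which produces a GPU shader module.
# Shader-compilation errors from wgpu-native surface at (or shortly after) this call.
shader_module = device.create_shader_module(code=wgsl_source)
```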
This sounds like the most feasible approach to make this utility more flexible 👍
Thank you for the answers. The original contributor is @panxinmiao if I am reading the pull requests correctly. I will start to write an adapted Shadertoy class that incorporates some of the features mentioned above. From some more digging I did, it looks like the time is based on a
The current Shadertoy is a highly encapsulated utility that is used for testing and demonstrating shader code, with very limited customization options. It's great to see someone making improvements to it.
Actually, here the variable "time" is an accumulation of the increments in time obtained by calling "time.perf_counter()" at each frame render. It should start from 0 unless there is a bug, 😅
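In pseudo-form, the accumulation works roughly like this (a sketch of the idea only, not the actual implementation; the names are illustrative):

```python
import time

accumulated_time = 0.0           # the "time" uniform, starts at 0
last = time.perf_counter()
for _ in range(3):               # pretend each iteration is one rendered frame
    now = time.perf_counter()
    time_delta = now - last      # increment since the previous frame
    accumulated_time += time_delta
    last = now
```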
Here is my experimental setup (the aim is to compare two different shader codes at the same time, and to do so via the frames they render). I do my development inside a Jupyter notebook.

```python
from PIL import Image, ImageChops
import numpy as np

from wgpu.utils.shadertoy import Shadertoy as ST

resolution = (512, 420)

# test_codes = [test_code, test_code]  # replace with your list of test codes
test_codes = [debug_code, debug_code]

images = []
for code in test_codes:
    shader = ST(code, resolution)
    print(shader._uniform_data["time"])
    shader.show()
    shader._canvas.close()
    frame = shader._canvas.get_frame()
    print(shader._last_time)
    print(shader._uniform_data["time"])
    print(shader._frame)
    img = Image.fromarray(frame)
    images.append(img)
    # del shader  # to remove any left overs?

# remove transparent alpha
for i in range(len(images)):
    images[i] = images[i].convert('RGB')

# display images
for img in images:
    display(img)
```

My output looks like the following (and the numbers change around a bit every time). Do you have any direct idea on how to grab the frame when time is exactly 0, or what changes to make instead? I've got one busy week, but will find the time to dig around myself soon.
Currently, Shadertoy does not provide a method for users to explicitly render a specific frame (off-screen). In practice, the rendering is handled by the GUI program, which triggers a render at the appropriate time based on the refresh rate. Your rendering result above seems to have skipped the first frame, and I'm not sure what the reason is. However, we can provide a function like
Edit:
I just checked the code for "Jupyter Canvas". As a temporary alternative, perhaps you can try directly calling the internal `Shadertoy._draw_frame()` method, like this:

```python
.....
for code in test_codes:
    shader = ST(code, resolution)
    shader._draw_frame()
    frame = shader._canvas.get_frame()
......
```

Note that I did not test on Jupyter notebook, 😅
This kinda works, but it would always give me a single pixel, and I did not figure out why. I put together the following code, which seems to do what I want: render a single frame at a given time.

```python
from PIL import Image, ImageChops
import numpy as np

from wgpu.utils.shadertoy import Shadertoy

resolution = (512, 420)

# test_codes = [test_code, test_code]  # replace with your list of test codes
test_codes = [debug_code, debug_code]

frames = []
for code in test_codes:
    shader = Shadertoy(code, resolution)
    shader._uniform_data["time"] = 1.234  # set any time you want
    shader._canvas.request_draw(shader._draw_frame)
    frame = shader._canvas.snapshot().data
    frames.append(frame)

images = []
for frame in frames:
    img = Image.fromarray(frame)
    # remove transparent pixels
    img = img.convert('RGB')
    images.append(img)
    display(img)
```

I have only tested this inside a Jupyter notebook. But it does work out for me so far - and I don't have to implement anything myself.
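Since the aim is to compare the two renders, the `ImageChops` import from above can then do the comparison (a sketch, assuming both images have the same size):

```python
# Pixel-wise difference of the two stills rendered at the same time value.
diff = ImageChops.difference(images[0], images[1])
print("frames identical:", diff.getbbox() is None)  # getbbox() is None when nothing differs
```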
Essentially you're not actually using the shadertoy framework anymore; you're just rendering a full-screen quad with a shader. I kind of suspected this earlier. We could perhaps provide a utility function for this process as a function in the shadertoy namespace, similar to
Here is the implementation I have been using:

```python
from PIL import Image
import numpy as np
import wgpu
from wgpu.utils.shadertoy import *
from wgpu.gui.offscreen import WgpuCanvas as OffscreenCanvas, run as run_offscreen


# custom Class
class ShadertoyCustom(Shadertoy):
    def __init__(self, shader_code, resolution=(800, 450), canvas_class=WgpuCanvas, run_fn=run):
        self._canvas_class = canvas_class
        self._fun_fn = run_fn
        super().__init__(shader_code, resolution)
        self._uniform_data = UniformArray(
            ("mouse", "f", 4),
            ("resolution", "f", 3),
            ("time", "f", 1),
            ("time_delta", "f", 1),
            ("frame", "I", 1),
        )

        self._shader_code = shader_code
        self._uniform_data["resolution"] = resolution + (1,)

        self._prepare_render()
        self._bind_events()

    def _prepare_render(self):
        import wgpu.backends.rs  # noqa

        self._canvas = self._canvas_class(title="Shadertoy", size=self.resolution, max_fps=60)

        adapter = wgpu.request_adapter(
            canvas=self._canvas, power_preference="high-performance"
        )
        self._device = adapter.request_device()

        self._present_context = self._canvas.get_context()

        # We use "bgra8unorm" not "bgra8unorm-srgb" here because we want to let the shader fully control the color-space.
        self._present_context.configure(
            device=self._device, format=wgpu.TextureFormat.bgra8unorm
        )

        shader_type = self.shader_type
        if shader_type == "glsl":
            vertex_shader_code = vertex_code_glsl
            frag_shader_code = (
                builtin_variables_glsl + self.shader_code + fragment_code_glsl
            )
        elif shader_type == "wgsl":
            vertex_shader_code = vertex_code_wgsl
            frag_shader_code = (
                builtin_variables_wgsl + self.shader_code + fragment_code_wgsl
            )

        vertex_shader_program = self._device.create_shader_module(
            label="triangle_vert", code=vertex_shader_code
        )
        frag_shader_program = self._device.create_shader_module(
            label="triangle_frag", code=frag_shader_code
        )

        self._uniform_buffer = self._device.create_buffer(
            size=self._uniform_data.nbytes,
            usage=wgpu.BufferUsage.UNIFORM | wgpu.BufferUsage.COPY_DST,
        )

        bind_group_layout = self._device.create_bind_group_layout(
            entries=binding_layout
        )

        self._bind_group = self._device.create_bind_group(
            layout=bind_group_layout,
            entries=[
                {
                    "binding": 0,
                    "resource": {
                        "buffer": self._uniform_buffer,
                        "offset": 0,
                        "size": self._uniform_data.nbytes,
                    },
                },
            ],
        )

        self._render_pipeline = self._device.create_render_pipeline(
            layout=self._device.create_pipeline_layout(
                bind_group_layouts=[bind_group_layout]
            ),
            vertex={
                "module": vertex_shader_program,
                "entry_point": "main",
                "buffers": [],
            },
            primitive={
                "topology": wgpu.PrimitiveTopology.triangle_list,
                "front_face": wgpu.FrontFace.ccw,
                "cull_mode": wgpu.CullMode.none,
            },
            depth_stencil=None,
            multisample=None,
            fragment={
                "module": frag_shader_program,
                "entry_point": "main",
                "targets": [
                    {
                        "format": wgpu.TextureFormat.bgra8unorm,
                        "blend": {
                            "color": (
                                wgpu.BlendFactor.one,
                                wgpu.BlendFactor.zero,
                                wgpu.BlendOperation.add,
                            ),
                            "alpha": (
                                wgpu.BlendFactor.one,
                                wgpu.BlendFactor.zero,
                                wgpu.BlendOperation.add,
                            ),
                        },
                    },
                ],
            },
        )

    def show(self, time: float = 0.0):
        self._canvas.request_draw(self._draw_frame)
        self._fun_fn()

    def snapshot(self, time):
        if hasattr(self, "_last_time"):  # this is left over when the draw is first called
            self.__delattr__("_last_time")  # we reset this so our time can be set.
        self._uniform_data["time"] = time  # set any time you want
        self._canvas.request_draw(self._draw_frame)
        if issubclass(self._canvas_class, wgpu.gui.jupyter.JupyterWgpuCanvas):
            frame = self._canvas.snapshot().data
        elif issubclass(self._canvas_class, wgpu.gui._offscreen.WgpuOffscreenCanvas):
            frame = np.asarray(self._canvas.draw())
        img = Image.fromarray(frame)
        # remove transparent pixels
        img = img.convert('RGB')
        return img


def get_image(code, time=0.0, resolution=(512, 420)):
    shader = ShadertoyCustom(code, resolution, OffscreenCanvas, run_offscreen)  # pass offscreen canvas here (or don't)
    return shader.snapshot(time)
```

This lets you pass the canvas class and run function, and also adds the `snapshot` method (for offscreen and Jupyter).

E: 16.10. - fixed snapshot having the wrong time when sequentially called on the same object
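For reference, a usage sketch (`debug_code` stands in for any shader source string you want to render):

```python
# Render a single still per shader code at the same timestamp, then save or compare them.
img_a = get_image(debug_code, time=1.234)
img_b = get_image(debug_code, time=1.234)
img_a.save("frame_a.png")
img_b.save("frame_b.png")
```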
@Vipitis are you still interested in working on this? To improve the shadertoy class? ShaderToy is not a core feature of wgpu-py (the wgpu-native wrappers and the GUI backends are!), so it's not a priority for us as maintainers. If you're not interested in making the improvements and opening a pull request, please close the issue. Thanks!
I am still working on the custom class that adds some functionality: validation via naga (although since it's now merged, maybe there is a less ugly way to do so), offscreen canvas and offscreen run. I will open a PR this weekend, which includes the features I currently have, so you can cherry-pick the parts that make sense. But it's more likely that I will strip a lot of the functionality in the next 2-3 weeks for my project (which is hosted as a HF space: https://huggingface.co/spaces/Vipitis/shadermatch/blob/main/shadermatch.py#L131).
Closing this for now. Will work on PRs to add additional built-ins (iDate, iFramerate) and further features (like snapshot for the Jupyter canvas) in the very near future. Issues I encounter seem to be more appropriate for wgpu/naga directly.
hey,
I have found this project and it looks really promising. I am working on a project that looks at language models generating shader code.
Following the examples, I got it working for myself.
However, I had several issues:
- `shader.show()`, but to stop there is `shader._canvas.close()` (in a Jupyter notebook)
- `shader._canvas.get_frame()`

Now my questions:

My suggestion:
- expose `._canvas` directly
- `__init__()` to set canvas, framerate, ... (keep defaults)

Will read through more docs and look at the source, but the low level stuff is too advanced for me; might find some time to contribute myself.