Add Extended Examples #3885

Merged
Changes from 3 commits
Commits
68 commits
a157419
Add the base of the example. May need refining and definitely fact-ch…
JustAnotherCodemonkey Jun 23, 2023
594f3b8
Start change to changelog.
JustAnotherCodemonkey Jun 23, 2023
68b2e68
Complete changelog change for repeated-compute.
JustAnotherCodemonkey Jun 23, 2023
f750f86
Apply suggestion to fix typos.
JustAnotherCodemonkey Jun 23, 2023
e87dccd
Add storage-texture example which currently works native but needs to…
JustAnotherCodemonkey Jun 25, 2023
4065a6c
Merge typo fixing from remote.
JustAnotherCodemonkey Jun 25, 2023
4182531
repeated-compute now works on the web. [no ci]
JustAnotherCodemonkey Jun 25, 2023
8e1f203
`storage-texture` now works on the web as well as native.
JustAnotherCodemonkey Jun 25, 2023
a27a574
Format because I forgot to do that (ugh).
JustAnotherCodemonkey Jun 25, 2023
3c8ba56
Add `storage-texture` to changelog.
JustAnotherCodemonkey Jun 25, 2023
049d516
Add `render-to-texture` example.
JustAnotherCodemonkey Jun 25, 2023
76a7baa
Not all the files got git added. Fixed it.
JustAnotherCodemonkey Jun 25, 2023
5ab1d4d
Add `render-to-texture` to changelog.
JustAnotherCodemonkey Jun 26, 2023
baf94b3
Make better READMEs and add examples to said READMEs.
JustAnotherCodemonkey Jun 26, 2023
44840e4
Oops. Put the example updates in the wrong place.
JustAnotherCodemonkey Jun 26, 2023
849dcac
Add `uniform-values` example.
JustAnotherCodemonkey Jun 27, 2023
96054a8
Apply clippy suggestions.
JustAnotherCodemonkey Jun 27, 2023
0185200
Improved readme's and documentation.
JustAnotherCodemonkey Jun 27, 2023
45e9664
Fmt. Turning into the Joker rn.
JustAnotherCodemonkey Jun 27, 2023
dca6a19
Make instructions for examples on the web more clear. \(Fmt and clipp…
JustAnotherCodemonkey Jun 27, 2023
a15fc4a
hello-workgroups It doesn't work.
JustAnotherCodemonkey Jun 29, 2023
2cbd6ef
Add basic comments and readme to hello-workgroups.
JustAnotherCodemonkey Jun 29, 2023
c7ccebc
Add hello-synchronization example. Currently doesn't have any tests b…
JustAnotherCodemonkey Jun 30, 2023
2ef67cc
Forgot to check wasm compatibility for hello-synchronization. Fixed it.
JustAnotherCodemonkey Jun 30, 2023
337afff
Add test for hello-synchronization.
JustAnotherCodemonkey Jul 1, 2023
9e972ec
Make my examples downlevel defaults.
JustAnotherCodemonkey Jul 1, 2023
24fee15
Make uniform-values downlevel defaults. (Forgot to do that last commit.)
JustAnotherCodemonkey Jul 1, 2023
2a34a8d
Fix clippy doc complaints.
JustAnotherCodemonkey Jul 7, 2023
a21cca8
Didn't fully fix the docs last commit. Got it here I think.
JustAnotherCodemonkey Jul 7, 2023
f246ac5
Fix redundant bullet point in examples/hello-workgroups/README.md.
JustAnotherCodemonkey Aug 3, 2023
5a0b591
Trim down the introduction section of examples/hello-workgroups/READM…
JustAnotherCodemonkey Aug 3, 2023
bd8e239
Add technical links section to examples/hello-workgroups/README.md.
JustAnotherCodemonkey Aug 3, 2023
4258690
Use idiomatic Rust comments, break up big text wall into paragraphs, …
JustAnotherCodemonkey Aug 3, 2023
6945d99
Move output image functions into examples/common and give output_imag…
JustAnotherCodemonkey Aug 3, 2023
530eb50
Modify changelog for moving output_image_native and output_image_wasm…
JustAnotherCodemonkey Aug 3, 2023
5deb26f
Fix output_image_wasm. (Formerly did not handle pre-existing output i…
JustAnotherCodemonkey Aug 3, 2023
b5de6a1
Make a multiline comment be made of single lines to be more idiomatic.
JustAnotherCodemonkey Aug 3, 2023
d0448e7
"Fix" more multiline comments. I think this is actually the last of t…
JustAnotherCodemonkey Aug 3, 2023
98ecd7f
Make the window a consistent, square size that's convenient for viewing.
JustAnotherCodemonkey Aug 4, 2023
abb0bd1
Make the window on uniform-values not endlessly poll, taking up 100% …
JustAnotherCodemonkey Aug 4, 2023
8b55b05
Make execute in hello-synchronization return a struct of vecs instead…
JustAnotherCodemonkey Aug 19, 2023
a9e90c6
Didn't realize the naming of wgpu_example::framework so I moved my co…
JustAnotherCodemonkey Aug 19, 2023
6e12498
Add add_web_nothing_to_see_msg function to replace all the instances …
JustAnotherCodemonkey Sep 3, 2023
fe49dc7
Add small documentation to add_web_nothing_to_see_msg and change it t…
JustAnotherCodemonkey Sep 3, 2023
d87cc1a
Add documentation to output_image_native and output_image_wasm in exa…
JustAnotherCodemonkey Sep 3, 2023
7d4c952
Do better logging for output image functions in wgpu-example::utils.
JustAnotherCodemonkey Sep 3, 2023
5eaa9e7
Remove redundant append_child'ing of the output image element in wgpu…
JustAnotherCodemonkey Sep 3, 2023
03bad9d
Fix error regarding log message for having written the image in wgpu-…
JustAnotherCodemonkey Sep 3, 2023
d71628d
Fmt.
JustAnotherCodemonkey Sep 3, 2023
ea00169
In examples/README.md, re-arrange the examples in the graph to be in …
JustAnotherCodemonkey Sep 4, 2023
773c433
Fix changelog item regarding wgpu-example::utils and the output image …
JustAnotherCodemonkey Sep 4, 2023
8737cbe
Move all the added examples into one changelog item that lists all of…
JustAnotherCodemonkey Sep 4, 2023
83a3c8e
Updated table in examples/README.md with new examples. Added new feat…
JustAnotherCodemonkey Sep 6, 2023
06a7113
Fix inaccurate comment in hello-workgroups/src/shader.wgsl.
JustAnotherCodemonkey Sep 7, 2023
54656c5
Update examples/README.md to include basic descriptions of the basic …
JustAnotherCodemonkey Sep 10, 2023
b037cbf
Remove `capture` example. See changelog entry for reasoning.
JustAnotherCodemonkey Sep 13, 2023
1a14e66
Fix typo in hello-workgroups/shader.wgsl
JustAnotherCodemonkey Sep 21, 2023
abdb200
Change the method of vertex generation in the shader code of render-t…
JustAnotherCodemonkey Sep 21, 2023
ffd1525
Modify/correct message in repeated-compute/main.rs regarding the outp…
JustAnotherCodemonkey Sep 23, 2023
291f11c
Update message in uniform-values/main.rs about writing the app state …
JustAnotherCodemonkey Sep 24, 2023
f2d89c8
Add notice in repeated-compute/main.rs about why async channels are n…
JustAnotherCodemonkey Sep 25, 2023
0a1f2eb
Revise comment in uniform-values/main.rs about why we don't cast the …
JustAnotherCodemonkey Sep 25, 2023
c5e95f5
Change uniform-values to use encase for translating AppState to WGSL …
JustAnotherCodemonkey Oct 2, 2023
f436d21
Merge trunk.
JustAnotherCodemonkey Oct 7, 2023
2c6ff78
Cargo & Clippy: My two best friends.
JustAnotherCodemonkey Oct 7, 2023
f03748d
Add MIT-0 to the list of allowed licenses.
JustAnotherCodemonkey Oct 7, 2023
373ab65
Fix docs for wasm.
JustAnotherCodemonkey Oct 7, 2023
3126657
Merge branch 'trunk' into JustAnotherCodemonkey/add-examples
JustAnotherCodemonkey Oct 7, 2023
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -58,6 +58,10 @@ Bottom level categories:

- Empty scissor rects are allowed now, matching the specification. by @PJB3005 in [#3863](https://github.com/gfx-rs/wgpu/pull/3863).

#### General

- Added example repeated-compute. By @JustAnotherCodemonkey in [#3885](https://github.com/gfx-rs/wgpu/pull/3885).

### Documentation

- Better documentation for draw, draw_indexed, set_viewport and set_scissor_rect. By @genusistimelord in [#3860](https://github.com/gfx-rs/wgpu/pull/3860)
35 changes: 34 additions & 1 deletion Cargo.lock

Some generated files are not rendered by default.

29 changes: 29 additions & 0 deletions examples/repeated-compute/Cargo.toml
@@ -0,0 +1,29 @@
[package]
name = "wgpu-repeated-compute-example"
version.workspace = true
license.workspace = true
edition.workspace = true
description = "wgpu repeated compute example"
publish = false

[[bin]]
name = "repeated-compute"
path = "src/main.rs"

[dependencies]
bytemuck.workspace = true
env_logger.workspace = true
futures-intrusive.workspace = true
log.workspace = true
pollster.workspace = true
rand = "0.8.5"
wgpu.workspace = true

[target.'cfg(target_arch = "wasm32")'.dependencies]
console_error_panic_hook.workspace = true
console_log.workspace = true
wasm-bindgen-futures.workspace = true

[dev-dependencies]
wasm-bindgen-test.workspace = true
wgpu-test.workspace = true
4 changes: 4 additions & 0 deletions examples/repeated-compute/README.md
@@ -0,0 +1,4 @@
# repeated-compute

Repeatedly performs the Collatz calculation used in `hello-compute` on sets of
random numbers and reports the timings of core events to the log.
238 changes: 238 additions & 0 deletions examples/repeated-compute/src/main.rs
@@ -0,0 +1,238 @@
//! See the hello-compute example's main.rs for more details,
//! as similar items here are not explained.
//!
//! This example does, however, elaborate on some things that the
//! hello-compute example does not, such as mapping buffers
//! and why async channels are used.

use rand::Rng;
use std::mem::size_of_val;

const OVERFLOW: u32 = 0xffffffff;

async fn run() {
let mut numbers = [0u32; 256];
let context = WgpuContext::new(size_of_val(&numbers)).await;
// Not sure how compatible this is with web but I don't know how to check.
let mut rng = rand::thread_rng();

for _ in 0..10 {
for p in numbers.iter_mut() {
*p = rng.gen::<u16>() as u32;
}

compute(&mut numbers, &context).await;

let printed_numbers = numbers
.iter()
.map(|n| match n {
&OVERFLOW => "(overflow)".to_string(),
n => n.to_string(),
})
.collect::<Vec<String>>();
log::info!("Results: {printed_numbers:?}");
}
}

async fn compute(local_buffer: &mut [u32], context: &WgpuContext) {
log::info!("Beginning GPU compute on data {local_buffer:?}.");
// Local buffer contents -> GPU storage buffer
// Adds a write-buffer command to the queue. This is more involved than it looks:
// the data is staged internally, and the actual copy into the buffer happens at the
// start of the next `Queue::submit` call.
context.queue.write_buffer(
&context.storage_buffer,
0,
bytemuck::cast_slice(local_buffer),
);
log::info!("Wrote to buffer.");

let mut command_encoder = context
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });

{
let mut compute_pass =
command_encoder.begin_compute_pass(&wgpu::ComputePassDescriptor { label: None });
compute_pass.set_pipeline(&context.pipeline);
compute_pass.set_bind_group(0, &context.bind_group, &[]);
compute_pass.dispatch_workgroups(local_buffer.len() as u32, 1, 1);
}
// We finish the compute pass by dropping it.

// Entire storage buffer -> staging buffer.
command_encoder.copy_buffer_to_buffer(
&context.storage_buffer,
0,
&context.output_staging_buffer,
0,
context.storage_buffer.size(),
);

// Finalize the command encoder, add the contained commands to the queue and flush.
context.queue.submit(Some(command_encoder.finish()));
log::info!("Submitted commands.");

// Finally time to get our results.
// First we get a buffer slice, which represents a chunk of the buffer (one that we
// can't access yet).
// We want the whole thing, so we use an unbounded range.
let buffer_slice = context.output_staging_buffer.slice(..);
// Now things get complicated. WebGPU, for safety reasons, only allows either the GPU
// or the CPU to access a buffer's contents at any one time. We need to "map" the
// buffer, which hands ownership of it over to the CPU and makes access legal. We do
// this with `BufferSlice::map_async`. The problem is that `map_async` is not an async
// function, so we can't await it. Instead, we pass in a closure that is executed once
// the slice is either mapped or the mapping has failed. That leaves us with no
// reliable way to wait in the main code for the buffer to be mapped and, even worse,
// calling `get_mapped_range` or `get_mapped_range_mut` prematurely will cause a panic
// rather than return an error. Async channels solve this: awaiting the receipt of a
// message sent from the closure forces the outside code to wait. It also doesn't hurt
// if the closure finishes before the outside code catches up, because the message is
// buffered and receiving it will simply pick it up.
let (sender, receiver) = futures_intrusive::channel::shared::oneshot_channel();
buffer_slice.map_async(wgpu::MapMode::Read, move |r| sender.send(r).unwrap());
// For the mapping to complete, the device has to be polled. On the web, devices
// are polled automatically, but natively we need to make sure this happens, so we
// call `Device::poll` ourselves.
// `Maintain::Wait` will cause the thread to wait on native but not on the web.
context.device.poll(wgpu::Maintain::Wait);
log::info!("Device polled.");
// Now we await the message and panic if anything went wrong because we're lazy.
receiver.receive().await.unwrap().unwrap();
log::info!("Result received.");
// NOW we can call get_mapped_range.
{
let view = buffer_slice.get_mapped_range();
local_buffer.copy_from_slice(bytemuck::cast_slice(&view));
}
log::info!("Results written to local buffer.");
// All `BufferView`s must be dropped before we unmap the buffer; the scope above
// ensures that happens.
// Unmap so that we can copy into the staging buffer again on the next iteration.
context.output_staging_buffer.unmap();
}

fn main() {
#[cfg(not(target_arch = "wasm32"))]
{
env_logger::builder()
.filter_level(log::LevelFilter::Info)
.format_timestamp_nanos()
.init();
pollster::block_on(run());
}
#[cfg(target_arch = "wasm32")]
{
std::panic::set_hook(Box::new(console_error_panic_hook::hook));
console_log::init_with_level(log::Level::Info).expect("could not initialize logger");
wasm_bindgen_futures::spawn_local(run());
}
}

/// A convenient way to hold all the useful wgpu objects together.
struct WgpuContext {
device: wgpu::Device,
queue: wgpu::Queue,
pipeline: wgpu::ComputePipeline,
bind_group: wgpu::BindGroup,
storage_buffer: wgpu::Buffer,
output_staging_buffer: wgpu::Buffer,
}

impl WgpuContext {
async fn new(buffer_size: usize) -> WgpuContext {
let instance = wgpu::Instance::default();
let adapter = instance
.request_adapter(&wgpu::RequestAdapterOptions::default())
.await
.unwrap();
let (device, queue) = adapter
.request_device(
&wgpu::DeviceDescriptor {
label: None,
features: wgpu::Features::empty(),
limits: wgpu::Limits::default(),
},
None,
)
.await
.unwrap();

// Our shader, kindly compiled with Naga.
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
label: None,
source: wgpu::ShaderSource::Wgsl(std::borrow::Cow::Borrowed(include_str!(
"shader.wgsl"
))),
});

// This is where the GPU will read from and write to.
let storage_buffer = device.create_buffer(&wgpu::BufferDescriptor {
label: None,
size: buffer_size as wgpu::BufferAddress,
usage: wgpu::BufferUsages::STORAGE
| wgpu::BufferUsages::COPY_DST
| wgpu::BufferUsages::COPY_SRC,
mapped_at_creation: false,
});
// In WebGPU (unlike many graphics libraries), you aren't allowed to map "primary"
// buffers, i.e. to get CPU pointers to the memory of any buffer whose usage includes
// anything besides COPY_SRC, COPY_DST, MAP_READ, or MAP_WRITE. Instead, the idea is
// that you copy the data into a dedicated readback buffer and read from there. The
// same would be true for writes if it weren't for the `Queue::write_buffer` method.
let output_staging_buffer = device.create_buffer(&wgpu::BufferDescriptor {
label: None,
size: buffer_size as wgpu::BufferAddress,
usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
mapped_at_creation: false,
});

// This can be thought of as the function signature of our CPU-to-GPU function.
let bind_group_layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
label: None,
entries: &[wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStages::COMPUTE,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Storage { read_only: false },
has_dynamic_offset: false,
// `None` means the binding's size is checked at dispatch time rather than
// at bind group creation, which keeps this example simple.
min_binding_size: None,
},
count: None,
}],
});
// This ties the actual resources stored on the GPU to our metaphorical function
// through the binding slots we defined above.
let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
label: None,
layout: &bind_group_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: storage_buffer.as_entire_binding(),
}],
});

let pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: None,
bind_group_layouts: &[&bind_group_layout],
push_constant_ranges: &[],
});
let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
label: None,
layout: Some(&pipeline_layout),
module: &shader,
entry_point: "main",
});

WgpuContext {
device,
queue,
pipeline,
bind_group,
storage_buffer,
output_staging_buffer,
}
}
}
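
The map-and-read-back sequence in `compute` above is the core pattern of this example. As a condensed reference, here is a minimal sketch of just that pattern, assuming a `MAP_READ | COPY_DST` staging buffer whose copy has already been submitted; the helper name and its standalone form are illustrative and not part of this PR.

```rust
/// Map `staging` for reading, wait for the map to complete, copy the data out,
/// and unmap so the buffer can be reused. Assumes the copy into `staging` was
/// already submitted on `device`'s queue.
async fn read_back_u32(device: &wgpu::Device, staging: &wgpu::Buffer) -> Vec<u32> {
    let slice = staging.slice(..);
    // The closure fires once mapping succeeds or fails; forward the result
    // through a oneshot channel so the async caller can wait on it.
    let (sender, receiver) = futures_intrusive::channel::shared::oneshot_channel();
    slice.map_async(wgpu::MapMode::Read, move |result| sender.send(result).unwrap());
    // Natively, the map callback only runs when the device is polled;
    // on the web, polling happens automatically.
    device.poll(wgpu::Maintain::Wait);
    receiver.receive().await.unwrap().unwrap();
    // Copy the data out while mapped; the temporary view is dropped at the end
    // of this statement, before `unmap` is called.
    let data = bytemuck::cast_slice(&slice.get_mapped_range()).to_vec();
    staging.unmap();
    data
}
```
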
38 changes: 38 additions & 0 deletions examples/repeated-compute/src/shader.wgsl
@@ -0,0 +1,38 @@
@group(0)
@binding(0)
var<storage, read_write> v_indices: array<u32>; // this is used as both input and output for convenience

// The Collatz Conjecture states that for any positive integer n:
// If n is even, n = n/2
// If n is odd, n = 3n+1
// If you repeat this process for each new n, you will always eventually reach 1.
// Though the conjecture has not been proven, no counterexample has ever been found.
// This function returns how many times this recurrence needs to be applied to reach 1.
fn collatz_iterations(n_base: u32) -> u32 {
var n: u32 = n_base;
var i: u32 = 0u;
loop {
if (n <= 1u) {
break;
}
if (n % 2u == 0u) {
n = n / 2u;
}
else {
// Overflow? (i.e. 3*n + 1 > 0xffffffffu?)
if (n >= 1431655765u) { // 0x55555555u
return 4294967295u; // 0xffffffffu
}

n = 3u * n + 1u;
}
i = i + 1u;
}
return i;
}

@compute
@workgroup_size(1)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
v_indices[global_id.x] = collatz_iterations(v_indices[global_id.x]);
}
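
Because the comments above spell out the recurrence completely, a CPU-side mirror is easy to write. The following hypothetical Rust helper (not part of this PR) reproduces `collatz_iterations`, overflow guard included, and could be used to sanity-check the GPU output:

```rust
/// CPU-side mirror of `collatz_iterations` from shader.wgsl, including the same
/// overflow guard: if 3n + 1 would exceed u32::MAX, report 0xffffffff instead.
fn collatz_iterations(n_base: u32) -> u32 {
    let mut n = n_base;
    let mut i = 0u32;
    while n > 1 {
        if n % 2 == 0 {
            n /= 2;
        } else {
            // Overflow? (i.e. would 3*n + 1 > 0xffffffff?)
            if n >= 0x5555_5555 {
                return 0xffff_ffff;
            }
            n = 3 * n + 1;
        }
        i += 1;
    }
    i
}

fn main() {
    // 27 famously takes 111 applications of the recurrence to reach 1.
    assert_eq!(collatz_iterations(27), 111);
    println!("collatz_iterations(27) = {}", collatz_iterations(27));
}
```
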