Texture support #204
First pass (not the final complete one) on what the API should look like for OpTypeImage declarations.

OpConstantSampler: not sure whether we should go with separate methods on texture objects, passing in a sampler constant value, or something else. What's weird is that OpConstantSampler requires the LiteralSampler capability, yet there doesn't seem to be any other way in the spec to create an OpTypeSampler object. More research is needed on that second part to design this bit.

```rust
struct Texture2d<T> { /* opaque */ }

impl<T> Texture2d<T> {
    fn fetch(&self, coord: Vec2<u32>) -> T { /* OpImageFetch */ }
    fn sample(&self, coord: Vec2<f32>) -> T { /* OpImageSampleImplicitLod */ }
    fn sample_lod(&self, coord: Vec2<f32>, lod: f32) -> T { /* OpImageSampleExplicitLod */ }
    fn sample_grad(&self, coord: Vec2<f32>, dx: f32, dy: f32) -> T { /* OpImageSampleExplicitLod */ }
    // no gather, Dref, projection, or sparse for now.
}
```
|
Usually samplers get bound to a slot just like textures and buffers. OpConstantSampler is separate: it's declared inline in the shader, and that is indeed a separate capability (also referred to as static samplers in other APIs). So you need a sampler to do a texture sample.

```rust
struct Texture2d<T> { /* opaque */ }
struct Sampler { /* opaque */ }

impl<T> Texture2d<T> {
    fn fetch(&self, coord: Vec2<u32>) -> T { /* OpImageFetch */ }
    fn sample(&self, sampler: &Sampler, coord: Vec2<f32>) -> T { /* OpImageSampleImplicitLod */ }
    fn sample_lod(&self, sampler: &Sampler, coord: Vec2<f32>, lod: f32) -> T { /* OpImageSampleExplicitLod */ }
    fn sample_grad(&self, sampler: &Sampler, coord: Vec2<f32>, dx: f32, dy: f32) -> T { /* OpImageSampleExplicitLod */ }
    // no gather, Dref, projection, or sparse for now.
}

// Sampler could be passed by value if it's Copy/Clone; the idea is at least
// that you should be able to use a sampler with multiple textures.
```
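A CPU-side sketch of that design (all types and the nearest-texel lookup here are mocks invented for illustration; the real types would be opaque and the methods would lower to the SPIR-V instructions named above), showing one sampler reused across multiple textures:

```rust
// Mock stand-in for OpTypeSampler; Copy so it can be reused freely.
#[derive(Clone, Copy)]
struct Sampler;

// Mock stand-in for a 2D image; real storage lives on the GPU.
struct Texture2d<T> {
    texels: Vec<T>,
    width: usize,
}

impl<T: Copy> Texture2d<T> {
    // Mock of OpImageFetch: direct texel read, no sampler needed.
    fn fetch(&self, coord: (usize, usize)) -> T {
        self.texels[coord.1 * self.width + coord.0]
    }

    // Mock of OpImageSampleImplicitLod: nearest-texel lookup from
    // normalized coordinates (invented behavior, for illustration only).
    fn sample(&self, _sampler: &Sampler, coord: (f32, f32)) -> T {
        let height = self.texels.len() / self.width;
        let x = (coord.0 * self.width as f32) as usize % self.width;
        let y = (coord.1 * height as f32) as usize % height;
        self.texels[y * self.width + x]
    }
}

fn main() {
    // One sampler, reused with two different textures.
    let sampler = Sampler;
    let albedo = Texture2d { texels: vec![1u32, 2, 3, 4], width: 2 };
    let normal = Texture2d { texels: vec![9u32, 8, 7, 6], width: 2 };

    assert_eq!(albedo.fetch((1, 0)), 2);
    assert_eq!(albedo.sample(&sampler, (0.0, 0.0)), 1);
    assert_eq!(normal.sample(&sampler, (0.5, 0.5)), 6);
}
```

The point of the sketch is only the shape of the calls: `fetch` needs no sampler, while every `sample*` method takes one by reference, so a single bound sampler can serve any number of textures.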
If we do sparse textures, I think the Gather4 ops can wait, same for the depth-testing samplers, because they're just used for shadow mapping (and could be straightforward to add later). |
Can you point out where that is in the spec? I'm not seeing anything. Like, what address space do the
What? They're used in every single one of these sample instructions except the ones that go directly into the texture without a sampler; you can't avoid using them.
I... what? Not following you here; why would we want to expose this instead of it being an implementation detail of these sample methods? (Especially considering the absolutely wild SPIR-V validation rules around it.) |
I have no idea how it actually gets translated to SPIR-V, but most of the time when writing Vulkan and Vulkan GLSL, as @Jasper-Bekkers said, you're using separate images and samplers rather than a combined ImageSampler object. Specifically, this means that CPU-side you'd be creating separate descriptors, with types like:

```glsl
layout(set = 0, binding = 0) uniform sampler tex_sampler;
layout(set = 0, binding = 1) uniform textureCube spec_cube_map;
layout(set = 0, binding = 2) uniform textureCube irradiance_cube_map;
layout(set = 0, binding = 3) uniform texture2D spec_brdf_map;
layout(set = 2, binding = 0) uniform texture2D albedo_map;
layout(set = 2, binding = 1) uniform texture2D normal_map;
layout(set = 2, binding = 2) uniform texture2D metallic_roughness_map;
layout(set = 2, binding = 3) uniform texture2D ao_map;
layout(set = 2, binding = 4) uniform texture2D emissive_map;

// ... this then gets used like so, reusing the sampler to sample each
// different image, even of different types. I think the `sampler2D()`
// constructor is combining the sampler and texture into SPIR-V's version
// of a combined image sampler in the shader, maybe?
vec3 albedo = texture(sampler2D(albedo_map, tex_sampler), f_uv).rgb;
vec3 normal = texture(sampler2D(normal_map, tex_sampler), f_uv).rgb;
vec2 metallic_roughness = texture(sampler2D(metallic_roughness_map, tex_sampler), f_uv).bg;
float metallic = metallic_roughness.x;
float roughness = metallic_roughness.y;
float ao = texture(sampler2D(ao_map, tex_sampler), f_uv).r;
vec3 emissive = texture(sampler2D(emissive_map, tex_sampler), f_uv).rgb;

// ... later, note `samplerCube()` but with the same sampler
vec3 ambient_irradiance = texture(samplerCube(irradiance_cube_map, tex_sampler), N).rgb;
vec3 ambient_spec = textureLod(samplerCube(spec_cube_map, tex_sampler), R, roughness * MAX_SPEC_LOD).rgb;
vec2 env_brdf = texture(sampler2D(spec_brdf_map, tex_sampler), vec2(NdotV, roughness)).rg;
```
|
Oh, also a thing of note: I just went and checked, and Ark is actually currently using only combined image samplers everywhere, rather than the separate version I talked about above x) It wouldn't be very hard to change that, though. |
Design question I thought of when doing #276: do we want to support textures whose
So to be clear, it's not like we can have a So, if we were to support |
There's already an issue on glam for it, so we could just track that for now: bitshifter/glam-rs#70 |
I would like to add combined image samplers.

Proposed API:

```rust
#[spirv(sampled_image)]
pub struct SampledImage<I> { .. }

impl SampledImage<Image2d> {
    pub fn sample(&self, coord: Vec3A) -> Vec4 { .. }
}
```

Usage:

```rust
#[spirv(binding = 0)] image: UniformConstant<SampledImage<Image2d>>,
...
let image = image.load();
let sample = image.sample(coord);
```
|
I believe nobody else is planning on implementing |
Would it be reasonable to have an enum in spirv-std where one variant contains an `OpTypeImage` and an `OpTypeSampler`, and the other variant contains an `OpTypeSampledImage`? That way it could be used for sampling without anyone having to worry about how it was constructed. |
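A rough sketch of that enum idea (all names hypothetical, invented here for illustration; the real spirv-std types are opaque, compiler-known types, and the validation rules discussed elsewhere in this thread may forbid storing the combined type like this):

```rust
// Hypothetical stand-ins for the opaque SPIR-V types.
struct Image2d;      // OpTypeImage
struct Sampler;      // OpTypeSampler
struct SampledImage; // OpTypeSampledImage

// One sampling entry point regardless of how the resources were bound.
enum TextureSource<'a> {
    // Separate image + sampler: would lower to OpSampledImage at the
    // use site before the sample instruction.
    Separate(&'a Image2d, &'a Sampler),
    // Pre-combined image sampler binding.
    Combined(&'a SampledImage),
}

impl TextureSource<'_> {
    // Mocked result; both arms would emit OpImageSampleImplicitLod.
    fn sample(&self, _coord: (f32, f32)) -> [f32; 4] {
        match self {
            TextureSource::Separate(_image, _sampler) => [0.0; 4],
            TextureSource::Combined(_combined) => [0.0; 4],
        }
    }

    // Which binding style this source came from, for illustration.
    fn kind(&self) -> &'static str {
        match self {
            TextureSource::Separate(..) => "separate",
            TextureSource::Combined(..) => "combined",
        }
    }
}

fn main() {
    let (img, smp, combined) = (Image2d, Sampler, SampledImage);
    assert_eq!(TextureSource::Separate(&img, &smp).kind(), "separate");
    assert_eq!(TextureSource::Combined(&combined).kind(), "combined");
}
```

The appeal is that caller code only ever sees one `sample` signature; the enum dispatch decides whether a combine step is needed first.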
This is for
Except in the case of |
Let's close this and track individual features as the need for them comes up - needs seem to have stabilized, and a solid foundation with quite a bit of support is already in. |
Implement texture support in the compiler. Exposing all of this is a lot, and needs a decent amount of UX thought, so posting this issue so people can chime in with ideas in addition to tracking the implementation.
Here's an overview of how textures work in the SPIR-V spec:
Then, lots of image read/write instructions (not sure what Dref is used for, "depth-comparison reference value"):
And, edit, to be clear: nearly all of these (except for ImageRead/ImageWrite) take the combined TypeSampledImage. There is no avoiding the combined version. The combined type has insane restrictions that pretty much guarantee it will not be exposed to users as a struct (e.g. it must be used in the same basic block it is declared in), but we cannot get around using it; it is how texture reads work in SPIR-V.
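To illustrate (a hand-written SPIR-V assembly sketch, not actual compiler output), sampling through separately bound resources means materializing the combined value at the use site and consuming it in the same basic block:

```
%combined = OpSampledImage %sampled_image_ty %image %sampler
%result   = OpImageSampleImplicitLod %v4float %combined %coord
```

This is exactly the pairing a `sample` method could emit internally, which is why the combined type can stay an implementation detail rather than a user-visible struct.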
And some misc instructions:
But, like, the ImageQuery capability implicitly declares the Shader capability, so the queries that require both the ImageQuery and the Kernel capabilities would need a program that is both a Shader and a Kernel? What.