ComputePipeline is never freed #4073
Comments
use std::borrow::Cow;

#[tokio::main]
async fn main() {
    // Instantiate a WebGPU instance.
    let instance = wgpu::Instance::default();
    // request_adapter instantiates the general connection to the GPU.
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions {
            power_preference: wgpu::PowerPreference::HighPerformance,
            force_fallback_adapter: false,
            ..wgpu::RequestAdapterOptions::default()
        })
        .await
        .unwrap();
    // request_device instantiates the feature-specific connection to the GPU,
    // defining some parameters, features being the requested features.
    let (device, _queue) = adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                features: wgpu::Features::empty(),
                limits: wgpu::Limits::downlevel_defaults(),
            },
            None,
        )
        .await
        .unwrap();
    let compiled_shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: None,
        source: wgpu::ShaderSource::Wgsl(Cow::Borrowed(
            "@compute @workgroup_size(1, 1, 1) fn main() {}",
        )),
    });
    // Each iteration creates a pipeline and immediately drops it; host
    // memory should stay flat, but it climbs instead.
    for _ in 0..100_000 {
        let _pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
            label: None,
            layout: None,
            module: &compiled_shader,
            entry_point: "main",
        });
    }
}
This does not appear with the iGPU; it is a problem when using external dGPUs (in this case a TB3 NVIDIA GPU) and the fallback (llvmpipe). Without the loop: 200 KB.
This is expected behavior. We do not clear any resources that are dropped until the device is maintained, by either a call to device.poll or a queue.submit.
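For reference, a minimal sketch of the two maintenance points that comment refers to, assuming wgpu 0.17 (submitting an empty iterator of command buffers is an idiom to force a maintain without doing any work):

    // Either of these maintains the device, which is when wgpu reclaims
    // resources that have been dropped:
    device.poll(wgpu::Maintain::Poll);  // non-blocking maintain
    queue.submit(std::iter::empty());   // submitting also maintains the device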
This is not regarding GPU memory but host memory. I have already tried calling device.poll inside the loop:
use std::borrow::Cow;

#[tokio::main]
async fn main() {
    // Instantiate a WebGPU instance.
    let instance = wgpu::Instance::default();
    // request_adapter instantiates the general connection to the GPU.
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions {
            power_preference: wgpu::PowerPreference::HighPerformance,
            force_fallback_adapter: false,
            ..wgpu::RequestAdapterOptions::default()
        })
        .await
        .unwrap();
    // request_device instantiates the feature-specific connection to the GPU,
    // defining some parameters, features being the requested features.
    let (device, _queue) = adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                features: wgpu::Features::empty(),
                limits: wgpu::Limits::downlevel_defaults(),
            },
            None,
        )
        .await
        .unwrap();
    let compiled_shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: None,
        source: wgpu::ShaderSource::Wgsl(Cow::Borrowed(
            "@compute @workgroup_size(1, 1, 1) fn main() {}",
        )),
    });
    for _ in 0..900_000 {
        let _pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
            label: None,
            layout: None,
            module: &compiled_shader,
            entry_point: "main",
        });
        // Maintain the device each iteration so dropped resources can be
        // reclaimed; memory still climbs.
        device.poll(wgpu::Maintain::Wait);
    }
}
In contrast, […], while the fallback gives […].
Either way, this is a lot of memory being used to maintain pipelines (whether used or unused). Ideally it would not keep taking more memory. Changing the loop to initialize a vector instead (a sketch of that variant follows):
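The exact vector variant was not attached; a minimal sketch, assuming the pipelines are simply collected so they all stay alive:

    // Keep every pipeline alive so the measurement reflects live pipelines
    // rather than dropped-but-unreclaimed ones.
    let mut pipelines = Vec::new();
    for _ in 0..900_000 {
        pipelines.push(device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
            label: None,
            layout: None,
            module: &compiled_shader,
            entry_point: "main",
        }));
    }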
Gives […], which is a lot more reasonable.
Alright, if this is still leaking with a poll, this is definitely a bug.
I think this is a duplicate of #5029.
Description
Memory leak from ComputePipeline never freeing memory. CommandEncoder.begin_compute_pass does not fix this. queue.submit() does not fix this either, nor does device.poll(wgpu::Maintain::Wait). Whether or not I use them, memory is never freed until the program exits. dhat leads me to believe that there is some internal storage (lots of Vecs?). Unnecessary caching or something along those lines may be unintentionally extending its lifetime.
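For context, a minimal sketch of how dhat can be attached to the repro to capture the heap profile, assuming the dhat crate's 0.3-style API (the profile is written to dhat-heap.json when the guard is dropped):

    // Route all heap allocations through dhat's instrumented allocator.
    #[global_allocator]
    static ALLOC: dhat::Alloc = dhat::Alloc;

    fn main() {
        // Profiling runs for the lifetime of this guard; on drop it writes
        // dhat-heap.json, which can be opened in DHAT's viewer.
        let _profiler = dhat::Profiler::new_heap();

        // ... run the pipeline-creation loop from the repro here ...
    }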
Repro steps
Essentially the first code block above: create compute pipelines in a loop and let them drop.
Expected vs observed behavior
Memory usage climbs instead of remaining stable.
Extra materials
Screenshots to help explain your problem.
Validation logs can be attached in case there are warnings and errors.
Zip-compressed API traces and GPU captures can also land here.
Platform
Information about your OS, version of wgpu, your tech stack, etc.
wgpu: 0.17.0