RTX 30 Series Compatibility #169
Comments
I'd like to follow up on this, as I'm running into it too and I think a good number of users will start to as well.
I had the same problem when trying to run on a Colab instance with an A100.
I'm having the same issue! Has anyone been able to find a solution?
I'm having the same issue. I'm working with an RTX 3090 and PyTorch built with CUDA 11. Is it possible that redner is built with CUDA 10? PyTorch will not let me downgrade CUDA to 10 and complains that the GPU is not compatible.
I'm having the same issue. I'm working with an RTX 3080 Laptop GPU and PyTorch built with CUDA 11.3.
Same issue with an RTX 3090 Ti and pytorch==1.12.0.
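The pattern in the comments above has a simple explanation: each GPU generation has a compute capability, and a CUDA toolkit can only compile for capabilities that existed when it shipped. A minimal sketch of the check (the capability and minimum-toolkit values come from NVIDIA's published tables; the `supported` helper is illustrative, not part of redner):

```python
# Compute capability per GPU generation (from NVIDIA's published tables).
COMPUTE_CAPABILITY = {
    "RTX 20 series (Turing)": (7, 5),
    "A100 (Ampere)": (8, 0),
    "RTX 30 series (Ampere)": (8, 6),
    "RTX 40 series (Ada)": (8, 9),
}

# Earliest CUDA toolkit that can target each compute capability.
MIN_CUDA = {
    (7, 5): (10, 0),
    (8, 0): (11, 0),
    (8, 6): (11, 1),
    (8, 9): (11, 8),
}

def supported(gpu: str, cuda: tuple) -> bool:
    """True if a binary built with toolkit `cuda` can target `gpu` natively."""
    return cuda >= MIN_CUDA[COMPUTE_CAPABILITY[gpu]]

# A redner wheel built against CUDA 10.x predates Ampere entirely,
# which is why an RTX 3090 fails where a 2080 Ti works.
print(supported("RTX 30 series (Ampere)", (10, 2)))  # False
print(supported("RTX 30 series (Ampere)", (11, 3)))  # True
```

This is why the same wheel runs on a 2080 Ti (sm_75, targetable by CUDA 10) but fails on any 30-series card.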
redner currently uses OptiX Prime, and version 5.1 is deprecated on RTX GPUs. OptiX Prime also doesn't take advantage of the RTX-capable OptiX implementations that ship as part of the RTX drivers, so renders will be about 10x slower compared to psdr-cuda or mitsuba-2. If anyone is still interested, I would suggest updating the build files to migrate to the latest OptiX Prime version supported on RTX GPUs (6.5). I have a fork here that I was able to build on Windows with an RTX 3080 Ti: You can see what I changed here: You would have to adapt it for Linux and place the binary dependencies for OptiX 6.5 under an You can build wheels by running this under root: (I will make a PR to @BachiLi soon)
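For anyone attempting the Linux adaptation described above, the core of the migration is pointing the CMake build at an unpacked OptiX 6.5 SDK instead of 5.1. The fragment below is only an illustration of that kind of change, not the actual diff from the fork; the `OptiX_INSTALL_DIR` path and the `redner` target name are placeholders:

```cmake
# Illustrative sketch only: point the build at an OptiX 6.5 SDK.
# The variable name, path, and target name are assumptions, not the
# real build files from the fork referenced above.
set(OptiX_INSTALL_DIR "${CMAKE_SOURCE_DIR}/optix-6.5" CACHE PATH
    "Root of the unpacked OptiX 6.5 SDK")

include_directories("${OptiX_INSTALL_DIR}/include")

find_library(OPTIX_PRIME_LIB optix_prime
    PATHS "${OptiX_INSTALL_DIR}/lib64")

target_link_libraries(redner "${OPTIX_PRIME_LIB}")
```

The OptiX SDK is not redistributable through package managers, which is why the binary dependencies have to be downloaded from NVIDIA and placed into the tree manually.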
@leventt I have tested using OptiX 6.5 instead of the master version (https://github.com/BachiLi/redner)
@leventt using your version (https://github.com/leventt/redner) on Ubuntu 20.04 LTS with pytorch==1.11.0+cu113 and an RTX 3090 Ti
@ForrestPi Are you perhaps running out of memory? Can you share a snippet that recreates this for you? I'm asking, but I am most likely not going to try fixing this for you; perhaps someone else can advise. I am just pointing out that I can run redner on a 3080 Ti with #187
@ForrestPi @leventt Hello there! I am facing the same problem when I try to install redner on Linux with a 3090 based on @leventt's version (https://github.com/leventt/redner), and it does install successfully. But when I try to use redner, I get the error
I was able to compile for Python 3.9 and CUDA 11.6; here it is: redner-0.4.28-cp39-cp39-win_amd64.zip. But I was having the same issue.
I just found out why it doesn't work.
Here is another version for Python 3.9, without CUDA.
Having the same issue with an RTX 4090, CUDA 11.8, and torch 2.0.0.
Hi,
I've been using PyRedner for a while and I found it very helpful. Thanks!
Recently I upgraded my GPU from 2080Ti to 3090. With the same code and identical Conda environment, I got an error on 3090 but not on 2080Ti:
After some research online, I suspect this is due to the old version of OptiX (5.1.1). Does that mean the 30 series is not supported? Is there any workaround for this problem?
(I noticed that the OptiX version needs to be older than 6.5 to compile PyRedner. Can we use the latest OptiX version?)
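Before chasing OptiX itself, it helps to confirm which CUDA toolkit the PyTorch build targets and what compute capability the visible GPU reports; a mismatch there (e.g. a CUDA 10.x redner wheel on an sm_86 card like the 3090) fails in exactly this way. A small diagnostic sketch (the `cuda_report` helper is mine; it uses only standard `torch` APIs and degrades gracefully when PyTorch is absent):

```python
def cuda_report() -> str:
    """Report the CUDA toolkit PyTorch was built with and the compute
    capability of the first visible GPU, if any. An RTX 3090 reports
    sm_86, which requires a CUDA 11.1+ toolchain; a wheel built against
    an older toolkit will fail at kernel-launch time."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed in this environment"
    lines = [f"torch CUDA build: {torch.version.cuda}"]
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        lines.append(f"GPU compute capability: sm_{major}{minor}")
    return "\n".join(lines)

print(cuda_report())
```

On a 3090 with a cu113 build this would report `sm_86` alongside CUDA 11.3; on the 2080 Ti it reports `sm_75`, which even CUDA 10 toolchains can target.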