
RTX 30 Series Compatibility #169

Open
Yozey opened this issue Aug 11, 2021 · 15 comments · May be fixed by #187

Comments

@Yozey

Yozey commented Aug 11, 2021

Hi,

I've been using PyRedner for a while and have found it very helpful. Thanks!

Recently I upgraded my GPU from a 2080 Ti to a 3090. With the same code and an identical Conda environment, I get an error on the 3090 but not on the 2080 Ti:

RuntimeError: Function "RTPresult _rtpModelUpdate(RTPmodel, unsigned int)" caught exception: Encountered a CUDA error: radix_sort_temp_size -> cub::DeviceRadixSort::SortPairs returned (8): invalid device function

After some research online, I suspect this is due to the old version of OptiX (5.1.1). Does that mean the 30 series is not supported? Is there any workaround for this problem?

(I noticed that the OptiX version needs to be older than 6.5 to compile PyRedner. Can we use the latest OptiX version?)

@teamcontact

I'd like to follow up on this, as I'm running into it too and I think a good number of users will start to as well.

@duskvirkus

I had the same problem when trying to run on a Colab instance with an A100.

@chorsch

chorsch commented Mar 15, 2022

I'm having the same issue! Has anyone been able to find a solution?

@IgorJanos

I'm having the same issue. I'm working with an RTX 3090 and PyTorch built with CUDA 11. Is it possible that redner is built with CUDA 10? PyTorch will not let me downgrade to CUDA 10 and complains that the GPU is not compatible.
Thanks.
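
For reference, you can check which CUDA version and GPU architectures a PyTorch build targets with standard torch calls; a quick sketch:

import torch
print(torch.version.cuda)                    # CUDA version this PyTorch build was compiled against
print(torch.cuda.get_device_capability(0))   # (8, 6) on an RTX 3090
print(torch.cuda.get_arch_list())            # sm_XX binaries / PTX targets compiled into this build

If neither sm_86 nor a recent compute_XX PTX target shows up in that last list, CUDA kernels fail on Ampere with the "invalid device function" error from the original report.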

@huhai463127310

I'm having the same issue. I'm working with an RTX 3080 Laptop GPU and PyTorch built with CUDA 11.3.

@ForrestPi

Same issue with an RTX 3090 Ti and pytorch==1.12.0.

@leventt

leventt commented Jul 14, 2022

redner currently uses OptiX Prime, and version 5.1 is deprecated for RTX GPUs.
The last version of OptiX that shipped OptiX Prime with RTX GPU support seems to be 6.5.

OptiX Prime doesn't take advantage of the RTX-capable OptiX implementation that ships as part of the RTX drivers, so renders will be about 10x slower than psdr-cuda or mitsuba-2.

If anyone is still interested, I would suggest updating the build files to migrate to the latest OptiX version that still supports OptiX Prime on RTX GPUs (6.5).

I have a fork here that I was able to build on Windows with an RTX 3080 Ti:
https://github.com/leventt/redner

You can see what I changed here:
master...leventt:redner:master

You would have to adapt it for Linux and place the binary dependencies for OptiX 6.5 under a redner-dependencies/optix folder at the repo root (see the layout sketch after the download link below), but I have only tried this on Windows.
You can download Optix 6.5 here:
https://developer.nvidia.com/designworks/optix/downloads/legacy
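
For Linux, a sketch of the layout I would expect, assuming the build looks for the SDK's include/ and lib64/ the same way the Windows build does:

redner-dependencies/
    optix/
        include/    # headers from the OptiX 6.5 SDK
        lib64/      # liboptix_prime.so and friends from the SDK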

You can build wheels by running this at the repo root:
pip wheel -w dist --verbose .

(I will make a PR to @BachiLi soon)

leventt linked a pull request Jul 14, 2022 that will close this issue
@ForrestPi

@leventt I have tested using OptiX 6.5 instead of the master version (https://github.com/BachiLi/redner)
on Ubuntu 20.04 LTS with pytorch==1.11.0+cu113 and an RTX 3090 Ti,
and a new error occurred:

scene = redner.Scene(camera, RuntimeError: Function "RTPresult _rtpModelUpdate(RTPmodel, unsigned int)" caught exception: Encountered a CUDA error: cudaEventRecord( m_eventEnd, stream ) returned (700): an illegal memory access was encountered

@ForrestPi

@leventt using your version (https://github.com/leventt/redner) on Ubuntu 20.04 LTS with pytorch==1.11.0+cu113 and an RTX 3090 Ti:

File "/home/ubuntu/anaconda3/lib/python3.8/site-packages/pyredner/render_pytorch.py", line 609, in unpack_args scene = redner.Scene(camera, RuntimeError: Function "RTPresult _rtpModelUpdate(RTPmodel, unsigned int)" caught exception: Encountered a CUDA error: cudaEventRecord( m_eventEnd, stream ) returned (700): an illegal memory access was encountered

@leventt

leventt commented Jul 15, 2022

@ForrestPi Are you perhaps running out of memory? Can you share a snippet that recreates this for you?

I am asking, but I am most likely not going to try fixing this for you; perhaps someone else can advise. I am just pointing out that I can run redner on a 3080 Ti with #187
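
For reference, a minimal script in the spirit of the pyredner tutorials should be enough to reproduce it (teapot.obj is a placeholder for any mesh; the calls below are illustrative, not taken from your code):

import pyredner

# load any mesh; teapot.obj is just a placeholder path
objects = pyredner.load_obj('teapot.obj', return_objects=True)
camera = pyredner.automatic_camera_placement(objects, resolution=(256, 256))
scene = pyredner.Scene(camera=camera, objects=objects)
# the C++ scene is built inside the render call (unpack_args), which is where
# the _rtpModelUpdate error fires in the tracebacks above
img = pyredner.render_albedo(scene)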

@AndyWangZH

@ForrestPi @leventt Hello there! I am facing the same issue when I try to install redner on a Linux machine with a 3090 using @leventt's version (https://github.com/leventt/redner); it does install successfully. But when I try to use redner, I get the error
RuntimeError: Function "RTPresult _rtpModelUpdate(RTPmodel, unsigned int)" caught exception: Encountered a CUDA error: cudaEventRecord( m_eventEnd, stream ) returned (700): an illegal memory access was encountered
It happens even with a small batch_size, or when rendering only one image.
Have you figured out how to fix this?

@carlosedubarreto

carlosedubarreto commented Nov 3, 2022

I was able to compile it for Python 3.9 and CUDA 11.6.
I didn't test it yet.

here it is

redner-0.4.28-cp39-cp39-win_amd64.zip

But I was having the same _rtpModelUpdate problem as others were having.

@carlosedubarreto

I just found out why it doesn't work:
OptiX Prime won't work on RTX 30 series GPUs.
https://forums.developer.nvidia.com/t/optix-6-5-prime-samples-fail-with-rtx-3080/177078
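
That means any card with compute capability 8.x or newer is affected (the A100 mentioned above is sm_80). Easy to confirm from Python; a quick sketch using standard torch calls:

import torch

major, minor = torch.cuda.get_device_capability(0)
# Ampere (RTX 30) is sm_86, Ada (RTX 40) is sm_89 -- both past what OptiX Prime supports
print(f"sm_{major}{minor}:", "OptiX Prime unsupported" if major >= 8 else "should work")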

@carlosedubarreto

Here is another version for Python 3.9, without CUDA:
redner-0.4.28-cp39-cp39-win_amd64.zip

@ggxxii

ggxxii commented Jul 30, 2024

Having the same issue with an RTX 4090, CUDA 11.8, and torch 2.0.0.
