ONNX export error when run on system with CUDA device available #29
Hey, I generated a synthetic dataset for my needs and trained the models. I wanted to export them to ONNX and then to RTEN, but it seems like I'm having trouble converting them to ONNX. Am I missing something?

Running on an A100, CUDA 12.2, driver 535.154.05.
Comments
This error means that one of the tensors is on the CPU and the other is on the GPU. In this case the dummy model input used during export is on the CPU but the model weights are on the GPU. When I've exported models in the past, I always did so on a device with only a CPU, so this is a bug in the training script: it doesn't take into account the possibility that you're doing the export on a system with a GPU (!) The fix is to create the dummy input on the same device as the model before calling the exporter. The other option would be to find the line that initializes the device and force it to the CPU for the export.
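A minimal sketch of both options, assuming a typical `torch.onnx.export` call (the function name and tensor shape here are illustrative, not the exact ocrs-models export code):

```python
import torch

def export_onnx(model: torch.nn.Module, out_path: str, force_cpu: bool = False) -> None:
    # Hypothetical sketch, not the actual ocrs-models export code.
    if force_cpu:
        # Option 2: move the model to the CPU and do the whole export there.
        model = model.to("cpu")
    # Option 1: create the dummy input on whatever device the model's
    # weights are on, so CPU-only and CUDA systems both work.
    device = next(model.parameters()).device
    dummy_input = torch.zeros((1, 1, 800, 600), device=device)  # shape is illustrative
    torch.onnx.export(model, dummy_input, out_path)
```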
OK, that fixed the export issue, but when trying to use the custom detection model with the CLI (for testing purposes), I got an error.
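For reference, the conversion and CLI invocation being tested here would look roughly like this (file names are placeholders; flags as described in the ocrs README):

```
$ pip install rten-convert
$ rten-convert text-detection.onnx text-detection.rten
$ ocrs --detect-model text-detection.rten image.jpeg
```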
Can you upload the ONNX model somewhere? Also, can you confirm which version of `ocrs` you're using?
Here is the ONNX model: text-detection.zip

```
$ ocrs --version
ocrs 0.8.0
```
Ah, I can see the problem. The …
When I publish the next release of …