
Getting HTTP Error:500 Server Error #2635

Closed
beep-love opened this issue Dec 30, 2020 · 9 comments

Comments

@beep-love

My actions before raising this issue

  • I have deployed tf-faster-rcnn-inception-v2-coco-gpu.
  • The function is running in the nuclio dashboard.
  • The CVAT task action menu 'automatic annotation' shows the model, along with another CPU-based model.
  • When I run the annotation task it gives the error: Getting HTTP Error:500 Server Error: Internal Server Error for url: http://nuclio:8070/api/function_invocations
  • I restarted the containers and went through the docker logs, but the error still exists.
  • My nuctl version is 1.5.8.
@beep-love
Author

@jahaniam can you please help on this?

@jahaniam
Contributor

jahaniam commented Jan 3, 2021

What's your GPU memory?
Change the number of workers in the YAML file from 2 to 1 and redeploy the function.
Does automatic annotation for a single image work? Also, provide the docker logs for the nuclio-nuclio-tf-faster... container.
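A minimal sketch of those checks, assuming nvidia-smi and docker are available on the host. The 8000 MiB threshold and the two-worker heuristic are illustrative guesses, not nuclio defaults, and the container name is inferred from the function name deployed above:

```shell
# Check free GPU memory and pick a conservative worker count.
# The 8000 MiB threshold is an illustrative guess, not a nuclio default.
free_mib=$(nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits 2>/dev/null | head -n1)

if [ -z "$free_mib" ]; then
  workers=1   # nvidia-smi unavailable: assume the conservative setting
elif [ "$free_mib" -ge 8000 ]; then
  workers=2   # enough free memory for two model instances
else
  workers=1
fi
echo "suggested maxWorkers: $workers"

# Tail the logs of the deployed function container
# (name inferred from the function deployed in this thread):
docker logs --tail 100 nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu 2>/dev/null || true
```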

@jahaniam
Contributor

jahaniam commented Jan 4, 2021

@beep-love

@beep-love
Author

@jahaniam My GPU is an RTX 2080 Ti.

I was able to run automatic annotation after editing the YAML file and redeploying the function.

Thanks for the help, and sorry for not updating the status here.

It is taking around 90 minutes to annotate a video with a total of 2000 frames. Is that the expected performance, or can we make it more efficient?

@jahaniam
Contributor

jahaniam commented Jan 6, 2021

May I ask what you modified in the YAML?
No, it is not efficient; maybe try adding workers if you have the memory for it.

How did automatic annotation for a whole task work for you? I think it is broken: it shows as successfully done, but when opening the task there is no result. Can you verify this, please?

@jahaniam
Contributor

jahaniam commented Jan 6, 2021

@beep-love Is it possible to deploy the CPU TensorFlow RCNN and compare the timing for reference?

@beep-love
Author

@jahaniam I haven't deployed the TensorFlow model on CPU.

But I had deployed the model in serverless/openvino/omz/public/faster_rcnn_inception_v2_coco/nuclio, which is the CPU-based model.

On my i9-9900K CPU the inference was taking around 12 hours.

I opened the task and saw annotation results on all the random frames I went through.

Also, I have replied in the next thread about some error message at issue #2529.

The change I made in the YAML file was:

```yaml
triggers:
  myHttpTrigger:
    maxWorkers: 1
    kind: 'http'
    workerAvailabilityTimeoutMilliseconds: 10000
    attributes:
      maxRequestBodySize: 33554432 # 32MB
```

This is the portion of the config I modified: I changed maxWorkers from 2 to 1.

I am using the whole GPU and CPU for this task. Can you suggest in detail how I can maximize performance?

What could be the highest number of workers? How is that set?
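For reference, the worker count is set via maxWorkers in the trigger section of the function's YAML, as in the fragment posted above. A sketch raising it back to 2; in practice the ceiling is bounded by GPU memory, since each worker holds its own model instance:

```yaml
# Hypothetical fragment of the function's YAML: each worker serves one
# request at a time, so maxWorkers is effectively capped by how many
# model instances fit in GPU memory.
triggers:
  myHttpTrigger:
    kind: 'http'
    maxWorkers: 2
```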

@azhavoro
Contributor

azhavoro commented Mar 5, 2021

@beep-love Have you solved the issue?

@azhavoro
Contributor

No response for a long time; I'll close the issue.
