server.py Errors when running on docker #8

Open
nachiketh89 opened this issue Sep 14, 2023 · 0 comments
nachiketh89 commented Sep 14, 2023

Hello respected scholars,
I am trying to reproduce the steps described in the Open-Speech-EkStep/speech-recognition-open-api GitHub repository.

I am running the following command:
docker run -itd -p 50051:50051 --env gpu=False --env languages=['en','hi'] -v C:\Users\nachi\Downloads\EKSTEP\speech-recognition-open-api\deployed_models:/opt/speech_recognition_open_api/deployed_models/ gcr.io/ekstepspeechrecognition/speech_recognition_model_api:3.2.37

In the Windows 11 Docker log, I see the following:

2023-09-13 23:50:58 /usr/local/lib/python3.8/dist-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
2023-09-13 23:50:58 return torch._C._cuda_getDeviceCount() > 0
2023-09-13 23:50:58 [NeMo W 2023-09-13 18:20:58 optimizers:46] Apex was not found. Using the lamb optimizer will error out.
2023-09-13 23:50:59 No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
2023-09-13 23:51:01 2023-09-13 18:21:01,732 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(56) - INFO - User has provided gpu as False gpu_present False
2023-09-13 23:51:01 Using server workers: 10
2023-09-13 23:51:01 2023-09-13 18:21:01,780 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(29) - INFO - Initializing realtime and batch inference service
2023-09-13 23:51:01 2023-09-13 18:21:01,781 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(38) - INFO - User has provided gpu as False
2023-09-13 23:51:01 2023-09-13 18:21:01,781 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(41) - INFO - GPU available on machine False
2023-09-13 23:51:01 2023-09-13 18:21:01,782 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(44) - INFO - Loading models from /opt/speech_recognition_open_api/deployed_models/ with gpu value: False
2023-09-13 23:51:01 2023-09-13 18:21:01,782 — [MainThread] - src.model_service - model_service.py.__init__(35) - INFO - environment requested languages ['en','hi']
2023-09-13 23:51:01 Traceback (most recent call last):
2023-09-13 23:51:01 File "/opt/speech_recognition_open_api/server.py", line 24, in <module>
2023-09-13 23:51:01 run()
2023-09-13 23:51:01 File "/opt/speech_recognition_open_api/server.py", line 17, in run
2023-09-13 23:51:01 add_SpeechRecognizerServicer_to_server(SpeechRecognizer(), server)
2023-09-13 23:51:01 File "/opt/speech_recognition_open_api/src/speech_recognition_service.py", line 45, in __init__
2023-09-13 23:51:01 self.model_service = ModelService(self.MODEL_BASE_PATH, 'kenlm', gpu, gpu)
2023-09-13 23:51:01 File "/opt/speech_recognition_open_api/src/model_service.py", line 39, in __init__
2023-09-13 23:51:01 model_config = json.load(f)
2023-09-13 23:51:01 File "/usr/lib/python3.8/json/__init__.py", line 293, in load
2023-09-13 23:51:01 return loads(fp.read(),
2023-09-13 23:51:01 File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
2023-09-13 23:51:01 return _default_decoder.decode(s)
2023-09-13 23:51:01 File "/usr/lib/python3.8/json/decoder.py", line 337, in decode
2023-09-13 23:51:01 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2023-09-13 23:51:01 File "/usr/lib/python3.8/json/decoder.py", line 353, in raw_decode
2023-09-13 23:51:01 obj, end = self.scan_once(s, idx)
2023-09-13 23:51:01 json.decoder.JSONDecodeError: Invalid \escape: line 3 column 20 (char 33)
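
If it helps to narrow this down: my guess is that the "Invalid \escape" comes from a backslash inside one of the JSON config files under the mounted deployed_models folder (for example a Windows-style path). This is only a sketch of what I suspect, not my actual config, assuming the cause is an unescaped backslash:

import json

# Minimal reproduction of my assumption (not the confirmed cause): a raw
# Windows-style path inside a JSON file contains "\U", which is not a valid
# JSON escape sequence, so json.load() fails with "Invalid \escape".
bad_config = '{\n  "language": "en",\n  "path": "C:\\Users\\nachi\\deployed_models"\n}'
try:
    json.loads(bad_config)
except json.JSONDecodeError as exc:
    print(exc)  # e.g. Invalid \escape: line 3 column ...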

When I tried with gpu=True, I get the errors below:

I ran "docker run -itd -p 50051:50051 --env gpu=True --env languages=['en','hi'] --gpus all -v C:\Users\nachi\Downloads\EKSTEP\speech-recognition-open-api\deployed_models:/opt/speech_recognition_open_api/deployed_models/ gcr.io/ekstepspeechrecognition/speech_recognition_model_api:3.2.37" in a Windows 11 command prompt in administrator mode, and I see the errors below:

=========
2023-09-14 00:11:20 [NeMo W 2023-09-13 18:41:20 optimizers:46] Apex was not found. Using the lamb optimizer will error out.
2023-09-14 00:11:23 2023-09-13 18:41:23,658 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(56) - INFO - User has provided gpu as True gpu_present True
2023-09-14 00:11:23 2023-09-13 18:41:23,659 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(61) - INFO - ### GPU Utilization ###
2023-09-14 00:11:23 | ID | GPU | MEM |
2023-09-14 00:11:23 ------------------
2023-09-14 00:11:23 | 0 | 0% | 17% |
2023-09-14 00:11:23 2023-09-13 18:41:23,839 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(67) - INFO - available GPUs ['0'], all GPUs [0], excluded GPUs [0]
2023-09-14 00:11:23 2023-09-13 18:41:23,906 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(75) - INFO - Selected GPUs: [] requested GPUs [0]
2023-09-14 00:11:23 2023-09-13 18:41:23,907 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(82) - INFO - selected gpu index: None selecting device: cuda
2023-09-14 00:11:23 Using server workers: 10
2023-09-14 00:11:23 2023-09-13 18:41:23,966 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(29) - INFO - Initializing realtime and batch inference service
2023-09-14 00:11:23 2023-09-13 18:41:23,966 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(38) - INFO - User has provided gpu as True
2023-09-14 00:11:23 2023-09-13 18:41:23,967 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(41) - INFO - GPU available on machine True
2023-09-14 00:11:23 2023-09-13 18:41:23,968 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(44) - INFO - Loading models from /opt/speech_recognition_open_api/deployed_models/ with gpu value: True
2023-09-14 00:11:23 2023-09-13 18:41:23,968 — [MainThread] - src.model_service - model_service.py.__init__(35) - INFO - environment requested languages ['en','hi']
2023-09-14 00:11:23 Traceback (most recent call last):
2023-09-14 00:11:23 File "/opt/speech_recognition_open_api/server.py", line 24, in <module>
2023-09-14 00:11:23 run()
2023-09-14 00:11:23 File "/opt/speech_recognition_open_api/server.py", line 17, in run
2023-09-14 00:11:23 add_SpeechRecognizerServicer_to_server(SpeechRecognizer(), server)
2023-09-14 00:11:23 File "/opt/speech_recognition_open_api/src/speech_recognition_service.py", line 45, in __init__
2023-09-14 00:11:23 self.model_service = ModelService(self.MODEL_BASE_PATH, 'kenlm', gpu, gpu)
2023-09-14 00:11:23 File "/opt/speech_recognition_open_api/src/model_service.py", line 39, in __init__
2023-09-14 00:11:23 model_config = json.load(f)
2023-09-14 00:11:23 File "/usr/lib/python3.8/json/__init__.py", line 293, in load
2023-09-14 00:11:23 return loads(fp.read(),
2023-09-14 00:11:23 File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
2023-09-14 00:11:23 return _default_decoder.decode(s)
2023-09-14 00:11:23 File "/usr/lib/python3.8/json/decoder.py", line 337, in decode
2023-09-14 00:11:23 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2023-09-14 00:11:23 File "/usr/lib/python3.8/json/decoder.py", line 353, in raw_decode
2023-09-14 00:11:23 obj, end = self.scan_once(s, idx)
2023-09-14 00:11:23 json.decoder.JSONDecodeError: Invalid \escape: line 3 column 20 (char 33)

Any clue about this server.py issue would help me greatly in getting your system up and running.
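
In case it is useful, here is a small check I can run on my side to find the file with the bad escape before starting the container. This is only a sketch: the local path is mine, and since I do not know exactly which JSON file model_service.py reads, it simply scans every .json file under the mounted deployed_models folder.

import json
import pathlib

# Sketch only: scan every JSON file under my local deployed_models folder
# and report which one fails to parse, and where.
base = pathlib.Path(r"C:\Users\nachi\Downloads\EKSTEP\speech-recognition-open-api\deployed_models")
for cfg in base.rglob("*.json"):
    try:
        json.loads(cfg.read_text(encoding="utf-8"))
        print("OK ", cfg)
    except json.JSONDecodeError as exc:
        print("BAD", cfg, "->", exc)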

Please help me, I am stuck!

My WhatsApp number is +91-8861636108; if you can share your WhatsApp or mobile number, it would greatly help in getting around this issue.
My email id is [email protected].

Thanking you,
Nachiketh
