Hi there!
Thanks a lot for the guide! (and to arter97)
Following the guide, I successfully installed Immich in an LXC container (Debian 12).
Hardware transcoding also works like a charm using the Quick Sync of my 12th-gen iGPU; I can see the activity in intel_gpu_top.
The Immich ML features also seem to work nicely. Contextual search is quick and smooth, and face recognition looks good enough (I'm using the default ViT-B-32__openai model for now).
The only question I have left is: what does Immich actually use for ML on my installation? I don't really know how to track it.
Following the guide, I installed the ONNX Runtime with OpenVINO (https://pypi.org/project/onnxruntime-openvino/), so I assume OpenVINO itself is at least available. But I have no idea whether it is actually being used, or whether inference runs on the CPU or on the iGPU.
According to the docs at the link above:
"By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated GPU, discrete GPU, integrated NPU (Windows only). Invoke the provider config device type argument to change the hardware on which inferencing is done."
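For what it's worth, one low-effort check (just a sketch using onnxruntime's public Python API, run inside the same venv the Immich ML service uses) is to ask the installed runtime which execution providers it was built with. This only shows whether the OpenVINO provider is available, not whether Immich actually selected it:

```python
try:
    import onnxruntime as ort
    # Execution providers compiled into this onnxruntime build, in priority order.
    # An OpenVINO-enabled build should include "OpenVINOExecutionProvider".
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime is not installed in this environment

print(providers)
```

If "OpenVINOExecutionProvider" appears in the list, the OpenVINO build is installed; whether it targets CPU or GPU then depends on the device_type provider option mentioned in the quote above.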
The logs in /var/log/immich are not really helpful here.
In web.log there is something like this (cropped directly from the shell; ANSI color codes stripped for readability):

[Nest] 376  - 12/03/2024, 12:46:35 PM VERBOSE [Api:LoggingInterceptor~g6vnpnid] {"deviceAssetId":"1000082815","deviceId":"3a8a661b4eecdc4155c3c21207113e3989627a7b4f9f19a7585f94f5a1b640ba","fileCreat>
[Nest] 129  - 12/03/2024, 12:46:35 PM   DEBUG [Microservices:PersonService] 1 faces detected in upload/thumbs/48513c9a-4997-4967-3c14-77a47a287db7/aa/7f/aa7fff0f-4db6-43c1-a812-5e1056ca73d2-preview.>
[Nest] 129  - 12/03/2024, 12:46:35 PM     LOG [Microservices:PersonService] Detected 1 new faces in asset aa7fff0f-4db6-43b1-a512-5e1036ca76d2
[Nest] 129  - 12/03/2024, 12:46:35 PM VERBOSE [Microservices:MetadataService] Exif Tags
[Nest] 376  - 12/03/2024, 12:46:35 PM   DEBUG [Api:LoggingInterceptor~pdhyq5uo] GET /api/server/storage 200 154.48ms ::ffff:192.168.100.196
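As an aside, the mess in raw copies of web.log is just ANSI color codes; a small stdlib-only sketch for stripping them before reading or sharing logs:

```python
import re

# Matches ANSI color sequences such as "\x1b[96m" (set color) and "\x1b[39m" (reset),
# which Nest uses to colorize its console output.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(line: str) -> str:
    """Remove ANSI color codes so log lines read as plain text."""
    return ANSI_RE.sub("", line)

print(strip_ansi("\x1b[96m[Nest] 376  - \x1b[39m12/03/2024, 12:46:35 PM \x1b[96mVERBOSE\x1b[39m"))
# → [Nest] 376  - 12/03/2024, 12:46:35 PM VERBOSE
```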
In ml.log there is also nothing interesting. As far as I can tell it only shows that the models downloaded successfully, and nothing about how the ML is actually running (repeated progress-bar frames trimmed for readability):

...
[2024-12-02 13:57:22 +0300] [365] [INFO] Application startup complete.
Fetching 11 files: 100%|##########| 11/11 [01:38<00:00, 8.95s/it]
...
[2024-12-02 14:07:46 +0300] [128] [INFO] Starting gunicorn 23.0.0
[2024-12-02 14:07:46 +0300] [128] [INFO] Listening at: http://127.0.0.1:3003 (128)
[2024-12-02 14:07:46 +0300] [128] [INFO] Using worker: app.config.CustomUvicornWorker
[2024-12-02 14:07:46 +0300] [365] [INFO] Booting worker with pid: 365
[2024-12-02 14:07:48 +0300] [365] [INFO] Started server process [365]
[2024-12-02 14:07:48 +0300] [365] [INFO] Waiting for application startup.
[2024-12-02 14:07:48 +0300] [365] [INFO] Application startup complete.
Fetching 4 files: 100%|##########| 4/4 [00:31<00:00, 7.82s/it]
...
[2024-12-03 12:54:54 +0300] [365] [INFO] Shutting down
[2024-12-03 12:54:54 +0300] [365] [INFO] Waiting for application shutdown.
[2024-12-03 12:54:54 +0300] [365] [INFO] Application shutdown complete.
[2024-12-03 12:54:54 +0300] [365] [INFO] Finished server process [365]
[2024-12-03 12:54:54 +0300] [128] [ERROR] Worker (pid:365) was sent SIGINT!
[2024-12-03 12:54:54 +0300] [2082] [INFO] Booting worker with pid: 2082
[2024-12-03 12:54:55 +0300] [2082] [INFO] Started server process [2082]
[2024-12-03 12:54:55 +0300] [2082] [INFO] Waiting for application startup.
[2024-12-03 12:54:55 +0300] [2082] [INFO] Application startup complete.
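In the meantime, grepping ml.log for provider or device mentions may still turn something up, since onnxruntime typically refers to the OpenVINO backend as "OpenVINOExecutionProvider" when it does log anything. A defensive sketch (the log path is taken from this post; adjust it to your setup):

```shell
# Case-insensitive search for execution-provider / device hints in the ML log.
LOG=/var/log/immich/ml.log
if [ -f "$LOG" ]; then
  grep -iE 'openvino|provider|device_type|gpu' "$LOG" || echo "no provider/device lines found"
else
  echo "no ml.log at $LOG"
fi
```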
Are there other places where logs might be kept?
Any idea how to find out what is actually being used for ML?
Thanks a lot!
loeeeee changed the title from "Discussion. How to track what is actually using for ML?" to "[Discussion] How to find the actual hardware being used for ML?" on Dec 7, 2024.