Currently, in encrypted inference, we encrypt images one by one by calling the ts.im2col_encoding() function, and then run encrypted inference as "model(context, x_enc, windows_nb)" on a single sample x_enc. I think this is the most problematic performance bottleneck. GPU acceleration shows its biggest gains when inference runs on batches of data, i.e. model(batch_x), where batch_x is a 3D or 4D tensor (num_samples, width, height). But in encrypted inference num_samples = 1, so GPU utilization is very low. I looked for something like "CKKSTensor - Batching" in TenSEAL, but I could not find any. Have you considered this feature as an improvement? It would speed things up a lot.
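To make the bottleneck concrete, here is a minimal sketch of the per-sample loop described above. The names encrypt_sample and model are hypothetical stand-ins for ts.im2col_encoding() and the encrypted forward pass (TenSEAL itself is not imported here); the point is only the loop structure, where the effective batch size is 1:

```python
def encrypt_sample(sample):
    # Stand-in for ts.im2col_encoding(context, sample, kh, kw, stride):
    # each call encrypts exactly one image into one "ciphertext".
    return [("enc", v) for v in sample]

def model(x_enc):
    # Stand-in for the encrypted forward pass on a single ciphertext.
    return sum(v for _, v in x_enc)

def infer_one_by_one(samples):
    # Current pattern: every iteration encrypts and evaluates one sample,
    # so there is no batch-level parallelism for a GPU to exploit.
    results = []
    for sample in samples:
        x_enc = encrypt_sample(sample)   # one ciphertext per image
        results.append(model(x_enc))     # inference on a single sample
    return results

print(infer_one_by_one([[1, 2], [3, 4]]))  # → [3, 7]
```

A batched CKKS encoding would instead pack many samples into one ciphertext (or one tensor of ciphertexts) so that a single model(...) call covers the whole batch.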
Nothing is running on the GPU. Ciphertext computation (even with a batch size of 1) could benefit from running on a GPU, but that is not the case here: everything is running on the CPU. So this is clearly out of reach.