Shared Cache for TileServer Proxy with TensorStore #208
There is no way to share the cache across processes, unfortunately.
Also not with tmpfs?
Certainly there are ways in principle to share a cache across processes, but the cache in tensorstore does not support that. An alternative would be to use a single-process server, but I don't know whether the GIL overhead would be too great. TensorStore does not yet support nogil Python, but it will in the future, and that will presumably allow for improved multi-threaded server performance.
I'll look into using a single-process server and benchmark it under different request patterns, or deploy single-process servers for different scale levels with background prefetching. It would also be great to compare performance with nogil Python later.
I'm implementing a tile server backed by TensorStore, using FastAPI with multiple workers. I'd like the workers to share a single common memory cache, and I figured this may be possible with the TensorStore context framework.
However, I'm unclear on how to configure such a shared context when opening a dataset. Could you provide any guidance or example code? Thank you!