Images in TensorBoard are incredibly useful to us when training generative models, as they give an intuitive feel for what the different scalar metrics mean. Currently the number of selectable steps is limited to 10, making it difficult to see transients and instabilities (common in e.g. GANs).
It would be great to have a way to select the step for an image with higher precision. For example, a manual input field for the step would be very helpful, especially if coupled with the ability to scroll around in its immediate neighborhood.
Ideally the global step slider suggested in #469 should support this higher-precision step selection as well.
The reason TensorBoard samples events from disk and serves that sampled subset to the frontend (of, say, 10 steps per image tag) is that TensorBoard serves data stored in memory (Python data structures), and storing entire log directories (all events) would require too much memory.
Also, reading from disk to find events at particular steps would be too slow in many cases, especially on distributed file systems. I tried implementing a plugin a while back that reads from disk: reading the event at the 10,000th step took, I believe, 30 seconds from my local directory, and reading events at millions of steps was basically impossible.
We are developing a new backend based on SQL. It lets the user write to, for instance, SQLite databases, which TensorBoard would then read from relatively quickly. However, this effort will probably launch in a few months, and unfortunately we might still end up sampling in order to conserve database space.
I think this issue warrants some more thought. Many have broached it, but we lack a robust solution.
FYI, PR #1138 added a --samples_per_plugin flag that can be used to set the number of samples retained on a per-plugin basis. So e.g. --samples_per_plugin=images=100 should set the image dashboard to retain 100 images per series.
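For reference, a typical invocation using that flag might look like the following (the log directory path is just an illustration):

```shell
# Keep 100 sampled images per tag in the Images dashboard;
# other plugins keep their default sample sizes.
tensorboard --logdir /tmp/runs --samples_per_plugin=images=100
```

The flag takes a comma-separated list of `plugin_name=num_samples` pairs, and a value of 0 keeps all samples, which can use a lot of memory for image-heavy runs.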
Leaving this open to track the request to be able to manually input a step and scroll around near it (without necessarily having to keep thousands of images in memory), which will probably need to wait until the SQL backend support is ready.