NeptuneObserver raises neptune.api_exceptions.ChannelsValuesSendBatchError #5130
Comments
Hi! Thanks for your contribution, great first issue!
Any ideas @jakubczakon?
Thanks for bringing this up @wjaskowski.
Hi @wjaskowski! I bumped into a similar issue a while back and, from what I remember, PL automatically adds an …

To be honest, I'm not sure what's the best way to approach this. The only Neptune-specific behaviour here is that we don't accept non-monotonic step (x) values.

Maybe someone from the PL team can weigh in? @Borda?

PS. I investigated this some time ago, so my findings may be a bit outdated.
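To make that constraint concrete, here is a minimal sketch assuming the legacy neptune-client 0.4.x API; the project name is a placeholder and a valid `NEPTUNE_API_TOKEN` is assumed to be set in the environment:

```python
# Minimal sketch of the non-monotonic-x constraint, assuming the legacy
# neptune-client 0.4.x API. "my_workspace/sandbox" is a placeholder project.
import neptune

neptune.init(project_qualified_name="my_workspace/sandbox")
exp = neptune.create_experiment(name="non-monotonic-x-demo")

exp.log_metric("val_loss", x=0, y=0.9)
exp.log_metric("val_loss", x=1, y=0.8)
# x goes backwards here; because values are sent to the server in batches in
# the background, input like this is what later surfaces as
# ChannelsValuesSendBatchError.
exp.log_metric("val_loss", x=0, y=0.7)
```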
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!
Well, it is still there. The problem won't disappear by itself.
We're approaching this here: #5510. Once it's merged, it will most likely fix the issue you're observing. Sorry for the wait!
Thanks! I posted just so it doesn't get closed by the stale bot.
This comment is relevant: #5510 (comment)
@awaelchli Regarding your comment #5510 (comment) ("Multiple calls to fit will not reset the global step."), the example at the start of this issue only contains calls to …

In particular, I was able to verify (by modifying PTL code and adding a debug statement after https://github.com/PyTorchLightning/pytorch-lightning/blob/f477c2fd2980ad128bfe79a3b859e0b81b435507/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py#L190) that when metrics are logged during …

So it does seem that calling …

Steps to reproduce:
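As a rough illustration of the kind of debugging described above — a hypothetical sketch, not the original reproduction snippet — a small callback can print the global step at the start of each validation run (including the sanity check), which makes it easy to see which x value each run would log its metrics with:

```python
# Hypothetical probe (not the commenter's original code): print
# trainer.global_step whenever a validation run starts, so the sanity-check
# pass and the later validation passes can be compared when chasing the
# non-monotonic step values discussed above.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Callback


class GlobalStepProbe(Callback):
    def on_validation_start(self, trainer, pl_module):
        print(f"validation starting at global_step={trainer.global_step}")


# Usage, assuming `model`, `train_loader` and `val_loader` are defined elsewhere:
# trainer = Trainer(max_epochs=2, callbacks=[GlobalStepProbe()])
# trainer.fit(model, train_loader, val_dataloaders=val_loader)
```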
Interestingly, the above is not the case for the Boring Model: https://colab.research.google.com/drive/1HvWVVTK8j2Nj52qU4Q4YCyzOm0_aLQF3?usp=sharing#scrollTo=FOma5cYzSWp7 It might be worth comparing the Boring Model and @wjaskowski's model to see what the difference could be - I'll try to do that later today.
🐛 Bug
NeptuneObserver throws `neptune.api_exceptions.ChannelsValuesSendBatchError`.
Expected behavior
No exception.
Environment
This happens with both pytorch-lightning 1.0.7 and 1.1, and with neptune-client 0.4.126 and 0.4.129.
Additional context
Happens only when we log during `validation_step`, and only with `on_epoch=True`, i.e.:
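What follows is a minimal, hypothetical sketch of that logging pattern, not the reporter's original code: the model, data, and project name are placeholders, and `offline_mode=True` is used only so the sketch runs without Neptune credentials (the reported error appears when logging to a real project).

```python
# Sketch of the pattern described above: a metric logged from validation_step
# with on_epoch=True while the NeptuneLogger is attached. Assumes
# pytorch-lightning ~1.1 and the legacy neptune-client are installed.
import torch
from torch.utils.data import DataLoader, TensorDataset
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.loggers import NeptuneLogger


class DemoModule(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y)
        # The call in question: aggregated at epoch end, i.e. on_epoch=True.
        self.log("val_loss", loss, on_step=False, on_epoch=True)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


def make_loader():
    data = TensorDataset(torch.randn(64, 32), torch.randn(64, 1))
    return DataLoader(data, batch_size=8)


if __name__ == "__main__":
    # Placeholder project name; offline_mode avoids needing a real API token.
    logger = NeptuneLogger(offline_mode=True, project_name="workspace/sandbox")
    trainer = Trainer(max_epochs=2, logger=logger)
    trainer.fit(DemoModule(), make_loader(), val_dataloaders=make_loader())
```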