This repository has been archived by the owner on Mar 25, 2019. It is now read-only.
Integrating with the galaxy_zoo stream. Some events arrive with the same user_id and the same created_at but different subjects. Logging this issue per our discussion with your team.
Unfortunately, this is unavoidable. When the API receives a classification, it timestamps it immediately. The timestamps you're seeing in the data are set when the classification is created.
Some common scenarios that cause this:
A mobile user, or a user on a flaky network connection (very common)
They begin classifying
Their network connection drops out
They continue classifying
When they reconnect, their classifications finish sending to the API simultaneously
Or in times of unusually high traffic (less common)
The web server receives classifications faster than the requests can be processed
The requests queue up at the server in front of the API
The requests are pulled out of the queue and processed concurrently, resulting in identical timestamps
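Either way, the practical consequence for a consumer of the stream is that (user_id, created_at) is not a unique key. A minimal sketch of how a consumer might handle that, assuming events are records with the user_id, created_at, and subject fields named in the report above (the exact event shape is an assumption):

```python
def dedupe(events):
    """Drop only true duplicates, keeping distinct subjects that share a timestamp."""
    seen = set()
    unique = []
    for e in events:
        # Key on the full triple: same user and timestamp with a different
        # subject is a legitimate event, not a duplicate.
        key = (e["user_id"], e["created_at"], e["subject"])
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique

events = [
    {"user_id": 1, "created_at": "2015-06-01T12:00:00Z", "subject": "A"},
    {"user_id": 1, "created_at": "2015-06-01T12:00:00Z", "subject": "B"},  # same timestamp, different subject
    {"user_id": 1, "created_at": "2015-06-01T12:00:00Z", "subject": "A"},  # true duplicate
]
print(len(dedupe(events)))  # 2
```

Keying on the full triple keeps both classifications from the second scenario while still filtering out genuine resends.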
The only way to approach this is to have the client timestamp the classifications before they are sent. The caveat is that there are no guarantees about what the client's system clock is set to.
I suppose you could try to estimate a client-local time offset by comparing the client's clock to a response from the server and adjusting for network latency, but that's far from reliable.
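The offset idea above can be sketched in a few lines. This is the classic NTP-style estimate: record the client clock around a request for the server's time, and assume the server's reading happened half a round trip in. All times here are plain floats in seconds; in a real web client you'd be wrapping an HTTP call.

```python
def estimate_offset(t_client_send, t_server, t_client_recv):
    """Estimate server_clock - client_clock, assuming symmetric network latency."""
    round_trip = t_client_recv - t_client_send
    # Assume the server read its clock roughly half a round trip after we sent.
    return t_server - (t_client_send + round_trip / 2)

# Example: client clock ~10 s behind the server, 0.2 s round trip.
offset = estimate_offset(100.0, 110.1, 100.2)  # roughly 10.0
```

The symmetric-latency assumption is exactly why this is unreliable: any asymmetry between the outbound and return legs lands directly in the estimate.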
In a nutshell, you could figure out the order the requests were sent in, but not the actual time each request was sent.
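That ordering point doesn't need a clock at all: the client can attach a monotonically increasing sequence number to each classification and the consumer can sort on it. A minimal sketch; the "seq" field is hypothetical and not part of the actual API:

```python
import itertools

_counter = itertools.count(1)

def make_classification(subject):
    """Client side: tag each classification with a per-session sequence number."""
    return {"subject": subject, "seq": next(_counter)}

# Classifications made offline, then delivered out of order after reconnecting.
sent = [make_classification(s) for s in ("A", "B", "C")]
received = [sent[2], sent[0], sent[1]]

in_order = sorted(received, key=lambda c: c["seq"])
print([c["subject"] for c in in_order])  # ['A', 'B', 'C']
```

This recovers relative order within one client session even when everything arrives in the same server-side second, though it says nothing about wall-clock time and can't order events across different users or sessions.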