# Use the newest `lastSyncAt` instead of the eldest one during sync #327
**lmatayoshi** added a commit that referenced this issue on Dec 25, 2020:
…328)

For #327

**Approach**

Instead of sending the server the eldest `lastSyncAt` on the Collector's end and updating the `lastSyncAt` of all the entries at the end of the `sync`, it now just sends the newest `lastSyncAt`, which is more natural (there is no need to modify the `lastSyncAt` of any other entries at the end). In layman's terms, it asks: "Ok server, can you please bring me all the records that have been created or updated since the last time I synced with you?", where "the last time" is determined by the last entry that was fetched (the last entry has the most recent `lastSyncAt` value because entries come sorted by `updated_at` in `ASCENDING` order from the server). This allows syncing to be interrupted and resumed without any problem, as it no longer depends on a completion step such as updating `lastSyncAt` for all entries at the end of the `sync` process.

**Avoiding pitfalls on upload**

What happens if other entries or updates are submitted to the server by other Collectors while the current one is uploading its own changes? Will the current Collector miss those changes? No, that's not a problem. To cover that scenario, the `lastSyncAt` of each entry being uploaded on `remoteUpload` or `remoteUploadUpdate` is set to the newest `lastSyncAt` in the database at that moment. This captures any new entries or updates that other Collectors could have uploaded to the server while the current one was uploading its own changes.
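The resumable fetch loop described in the commit might look roughly like the following sketch. All the helper names (`fetchPage`, `saveEntry`, `newestLastSyncAt`) and the `RemoteEntry` shape are hypothetical, not taken from the actual maap-collector code; this only illustrates the described technique.

```typescript
// Sketch of the sync loop: send the newest lastSyncAt, process pages in
// updated_at ASCENDING order, and persist lastSyncAt per entry as we go,
// so an interrupted sync simply resumes from the last processed entry.

interface RemoteEntry {
  id: string;
  updatedAt: string; // ISO timestamp, ascending across pages
  payload: unknown;
}

async function sync(
  newestLastSyncAt: () => Promise<string | null>,
  fetchPage: (since: string | null, page: number) => Promise<RemoteEntry[]>,
  saveEntry: (entry: RemoteEntry, lastSyncAt: string) => Promise<void>
): Promise<void> {
  // Ask the server for everything created or updated since the last time
  // we synced, where "last time" is the newest lastSyncAt we have stored.
  const since = await newestLastSyncAt();

  for (let page = 0; ; page += 1) {
    const entries = await fetchPage(since, page);
    if (entries.length === 0) break;

    for (const entry of entries) {
      // Each saved entry records its own updatedAt as lastSyncAt. Because
      // pages arrive in ascending updated_at order, the newest stored
      // lastSyncAt always reflects exactly how far the sync actually got.
      await saveEntry(entry, entry.updatedAt);
    }
  }
}
```

If the process dies mid-loop, the next call to `sync` recomputes `since` from the newest stored `lastSyncAt`, so no completion step is needed and no page is fetched twice.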
**lmatayoshi** added a commit to instedd/maap-store that referenced this issue on Dec 25, 2020:
For instedd/maap-collector#327 Collector commit: instedd/maap-collector@04ce1ee
The current behavior of the `syncing` process is determined by the eldest `lastSyncAt`: the eldest `lastSyncAt` on the Collector is sent to the server in order to fetch all the entries that have been updated later than it. That forces the Collector to update the `lastSyncAt` of all the remaining entries, so the eldest `lastSyncAt` gets updated and becomes the newest value for the next sync.

However, for the current approach to work, the sync process must complete entirely so that the `lastSyncAt` of the remaining entries gets updated. Otherwise, the next sync will fetch already-fetched entries again. This situation gets worse in instances such as Nigeria, which has a lot of Antibiotic Consumption Stats (more than 70000) and where syncing processes can last up to 8 hours.

Let's change the current approach and send the newest `lastSyncAt` to the server. By doing this, it won't be necessary to repeat the whole `syncing` process if it gets interrupted for any reason. Be careful not to update `lastSyncAt` when uploading changes to the server (`remoteUpload` and `remoteUploadUpdate`), to avoid missing entries in the next sync that other Collectors could potentially have created during the upload; a sketch of that safeguard follows.