Conversation
…hould only be updating that if a pro-active event
# To keep the single process behaviour consistent with worker mode, run the
# same logic as `update_external_syncs_row`, even though it looks weird.
if prev_state.state == PresenceState.OFFLINE:
    await self._update_states(
        [
            prev_state.copy_and_replace(
                state=PresenceState.ONLINE,
                last_active_ts=self.clock.time_msec(),
                last_user_sync_ts=self.clock.time_msec(),
            )
        ]
    )
This code block performed two functions: updating last_active_ts and last_user_sync_ts. last_active_ts is now handled by set_state(), and last_user_sync_ts will be set at the finish of the context manager, so it is unneeded here (as is actually commented on just below this).
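To make that division of labour concrete, here is a minimal self-contained sketch of the pattern being described; the names (UserPresenceState, set_state, user_syncing) are simplified stand-ins for illustration, not Synapse's actual implementation:

# Simplified model: set_state() owns last_active_ts for genuine (pro-active)
# transitions, while the sync context manager stamps last_user_sync_ts when
# it exits, which is why the extra update above is redundant.
import time
from contextlib import contextmanager
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class UserPresenceState:
    state: str = "offline"
    last_active_ts: int = 0
    last_user_sync_ts: int = 0


def now_msec() -> int:
    return int(time.time() * 1000)


def set_state(prev: UserPresenceState, new_state: str) -> UserPresenceState:
    # Only a real transition (e.g. offline -> online) counts as pro-active,
    # so only then is last_active_ts bumped.
    if new_state == "online" and prev.state != "online":
        return replace(prev, state=new_state, last_active_ts=now_msec())
    return replace(prev, state=new_state)


@contextmanager
def user_syncing(store: dict, user_id: str):
    try:
        yield
    finally:
        # last_user_sync_ts is stamped once, when the sync finishes.
        store[user_id] = replace(store[user_id], last_user_sync_ts=now_msec())


# Usage (illustrative only): an entry must already exist for the user.
store = {"@alice:example.org": UserPresenceState()}
with user_syncing(store, "@alice:example.org"):
    store["@alice:example.org"] = set_state(store["@alice:example.org"], "online")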
@@ -1866,7 +1862,7 @@ def handle_timeout(
     """Checks the presence of the user to see if any of the timers have elapsed

     Args:
-        state
+        state: the current state of the user, not the state at the setting of the timer
This was just a drive-by fix.
-        if presence == PresenceState.ONLINE or (
-            presence == PresenceState.BUSY and self._busy_presence_enabled
-        ):
+        if (
+            prev_state.state != PresenceState.ONLINE
+            and presence == PresenceState.ONLINE
+        ) or (presence == PresenceState.BUSY and self._busy_presence_enabled):
This was the meat and potatoes of the fix: if we are already online, we don't have to bump last_active_ts.
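A tiny self-contained illustration of what the new guard allows through, assuming busy presence is disabled (bumps_last_active is a hypothetical helper, not Synapse code):

# Evaluating the new condition for a user whose busy-presence feature is off.
def bumps_last_active(prev_state: str, presence: str) -> bool:
    return prev_state != "online" and presence == "online"


assert bumps_last_active("offline", "online")           # genuine transition: bump
assert bumps_last_active("unavailable", "online")       # back from idle: bump
assert not bumps_last_active("online", "online")        # periodic sync refresh: no bump
assert not bumps_last_active("online", "unavailable")   # not a pro-active event: no bump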
    self.presence_handler.get_state(UserID.from_string(user_id))
)
# last_active_ts should not have changed as no pro-active event has occurred
self.assertEqual(prev_state.last_active_ts, state.last_active_ts)
FYI: on develop this fails with twisted.trial.unittest.FailTest: 2000 != 4000.
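For context, a self-contained toy reproduction of that failure mode (a fake clock and handler of my own, not the actual Synapse test harness; the 2000/4000 values assume two syncs two seconds apart):

# On develop, every sync while already online re-stamped last_active_ts, so a
# second sync at t=4s overwrote the value set at t=2s, hence 2000 != 4000.
class FakeClock:
    def __init__(self) -> None:
        self._ms = 0

    def advance(self, seconds: int) -> None:
        self._ms += seconds * 1000

    def time_msec(self) -> int:
        return self._ms


def sync(clock: FakeClock, state: dict, buggy: bool) -> None:
    was_online = state["state"] == "online"
    state["state"] = "online"
    if buggy or not was_online:
        state["last_active_ts"] = clock.time_msec()


clock = FakeClock()
state = {"state": "offline", "last_active_ts": 0}
clock.advance(2)
sync(clock, state, buggy=False)   # first sync: user comes online at t=2000ms
clock.advance(2)
sync(clock, state, buggy=False)   # second sync: already online, no bump
assert state["last_active_ts"] == 2000  # with buggy=True this would be 4000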
Otherwise seems good
# TODO: This will be removed in future work as it's
# literally done again at the '_end' of this context manager and the
# only time last_user_sync_ts matters is when processing timeouts in
# handle_timeout().
Then can't we just remove it now rather than leaving a comment?
Not yet. And I'm on the fence, based on further experiments, about putting the other code block back in. Let's put this on hold for the moment. I think I have something to show you (out-of-band, though).
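To make the TODO above concrete: a rough sketch of why last_user_sync_ts only matters when processing timeouts (the constant and logic are illustrative, not Synapse's actual handle_timeout()):

# last_user_sync_ts decides whether a user who is not actively syncing should
# be timed out to offline; while a sync is in flight it is irrelevant, and the
# surrounding context manager re-stamps it when the sync finishes.
SYNC_ONLINE_TIMEOUT_MS = 30_000  # hypothetical value for the sketch


def should_time_out(last_user_sync_ts: int, now_msec: int, currently_syncing: bool) -> bool:
    if currently_syncing:
        return False
    return now_msec - last_user_sync_ts > SYNC_ONLINE_TIMEOUT_MS


assert not should_time_out(last_user_sync_ts=10_000, now_msec=20_000, currently_syncing=False)
assert should_time_out(last_user_sync_ts=10_000, now_msec=60_000, currently_syncing=False)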
I'm going to close this for now, as there were unforeseen side-effects I'm not happy about. I will roll this into another pull request with more in it to abate those side-effects. (Mainly, the side effects were that federation sending would ramp up and stay up for over an hour. I couldn't pin down exactly what was causing it, but fixing 'presence spam' seemed to make it stop, and that requires something broader than this one PR was scoped for.)
Replaced with #15989
last_active_ago is updated on sync when it shouldn't be #12424

Essentially, what was happening was that during an incremental sync, online is set repeatedly by the system (which is correct behavior, approximately each timeout). This means the fix was as simple as comparing the 'before' and 'after' state of online: if they matched, then don't bump last_active_ts. I spent hours checking permutations; I'm pretty sure I caught them all, but need more eyes to check me.

I did write a basic test to check the behavior (which fails on develop), but am open to ideas of what else to actually check for regression.
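To connect this to what clients actually see, a minimal sketch of the relationship (the arithmetic and values are illustrative; last_active_ago is the field the Matrix spec exposes, derived from last_active_ts):

# Re-stamping last_active_ts on every sync pins last_active_ago near zero,
# even for a user who is idle but whose client keeps syncing in the background.
def last_active_ago(last_active_ts: int, now_msec: int) -> int:
    return now_msec - last_active_ts


# With the fix: real activity at t=10s, then only background syncs until t=310s.
assert last_active_ago(last_active_ts=10_000, now_msec=310_000) == 300_000
# Without the fix, the latest sync would have re-stamped last_active_ts close
# to 310s, so clients would see last_active_ago of roughly zero instead.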
last_active_ago in the Matrix Spec

Consequences:
- last_active_change_online and last_active_change_not_online metrics to actually update correctly (which will be interesting to see how it plays out)
- _update_states() machinery, which is the heavy lift of sending notifications and persisting changes (and setting more timers).

A quick hackish (and incomplete) primer on how presence does timeouts
Pull Request Checklist
Signed-off-by: Jason Little [email protected]