Hydrogen seems unhappy with the `/messages` response in some rooms because synapse returns a different value for `start` than what I provide as the `from` query parameter. The only difference seems to be that synapse appends `_0` to the token. So I have `t37-1342550139_757284974_7971954_1695409504_1724035178_3720015_646406111_5691777095_0` in `?from` and `t37-1342550139_757284974_7971954_1695409504_1724035178_3720015_646406111_5691777095_0_0` for `start` in the response body. Hydrogen checks that the token used for the request is the same as in the database when the response comes back, to make sure no part of the timeline is being swallowed due to a race.
The error in the console is `start is not equal to prev_batch or next_batch`, see `GapWriter.writeFragmentFill`.
We should just keep the token I sent the request with in memory rather than looking at `start` in the response body.
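For illustration, here is a minimal TypeScript sketch of the kind of check described above; the names and shapes are hypothetical, not Hydrogen's actual code:

```ts
// Hypothetical sketch of the check that trips: the pagination token stored
// for the gap fragment is compared against the `start` field of the
// /messages response by strict string equality.
interface Fragment {
    previousToken: string; // the token we sent as ?from=
}

interface MessagesResponse {
    start: string; // synapse echoes the from token, but with "_0" appended
    end: string;
    chunk: object[];
}

function checkResponseMatchesFragment(fragment: Fragment, response: MessagesResponse): void {
    // Strict equality fails once synapse appends the extra "_0" field,
    // even though both tokens point at the same position in the timeline.
    if (response.start !== fragment.previousToken) {
        throw new Error("start is not equal to prev_batch or next_batch");
    }
}
```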
this error should be reported in the UI of the RoomViewModel (or TimelineViewModel?)
we can prevent the error from happening in the first place by keeping the token in memory between when we start the request and when we process the response, rather than relying on the `start` value provided by the server (see the sketch below). This seems to happen because synapse added a field to its pagination tokens: it still accepts the old format, but returns tokens with the added field, hence the `_0` being appended.
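A minimal sketch of that approach, again with hypothetical names rather than Hydrogen's actual API:

```ts
// Hypothetical sketch of the proposed fix: remember the token we paginated
// from for the lifetime of the request and use that when processing the
// response, instead of trusting the `start` value the server echoes back.
type Fragment = { previousToken: string };
type MessagesResponse = { start: string; end: string; chunk: object[] };

async function fillGap(
    fragment: Fragment,
    fetchMessages: (from: string) => Promise<MessagesResponse>,
): Promise<void> {
    const requestedFrom = fragment.previousToken; // kept in memory for this request
    const response = await fetchMessages(requestedFrom);
    // The race we still need to guard against is the fragment having been
    // updated by a concurrent fill while the request was in flight; that is
    // detected by re-reading our own stored token, not response.start.
    if (fragment.previousToken !== requestedFrom) {
        return; // another fill already consumed this gap
    }
    // ... write response.chunk and response.end to storage here
}
```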