
Apiv2 #153

Open · wants to merge 8 commits into master

Conversation

jvillafanez
Member

New notification API, already being used by the webUI. The old API is still available, mainly for the desktop client.

The new API focuses on efficient network usage, especially if the user has several pending notifications, as well as on long lists of notifications.

As part of the API changes, the behaviour of the web browser notification has changed: the browser notification is shown only once, until the user interacts with ownCloud's notifications again. Once you "know" you have pending notifications, the browser won't bother you until you fetch all of the pending notifications.
Note that the notification permission will be requested on page load (when the component is being initialized) instead of when a new notification arrives. This might be a bit problematic in some scenarios if the browser permission hasn't been accepted or rejected beforehand.
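
A minimal sketch of the behaviour described above. The hook names (`initNotificationComponent`, `onPendingNotifications`, `onNotificationListFetched`) are illustrative assumptions, not the actual webUI code:

```typescript
// Track whether the user has already been shown a browser notification.
let browserNotificationShown = false;

export function initNotificationComponent(): void {
  // Permission is requested up front, on page load, instead of when the
  // first notification arrives.
  if ('Notification' in window && Notification.permission === 'default') {
    Notification.requestPermission();
  }
}

export function onPendingNotifications(count: number): void {
  // Show the browser notification only once until the user checks the list.
  if (!browserNotificationShown && Notification.permission === 'granted') {
    new Notification(`You have ${count} pending notifications`);
    browserNotificationShown = true;
  }
}

export function onNotificationListFetched(): void {
  // The user fetched the pending notifications, so the browser may notify again.
  browserNotificationShown = false;
}
```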

@jvillafanez jvillafanez added this to the development milestone Mar 22, 2018
@codecov-io

codecov-io commented Mar 22, 2018

Codecov Report

Merging #153 into master will increase coverage by 1.8%.
The diff coverage is 98.96%.


@@             Coverage Diff             @@
##             master     #153     +/-   ##
===========================================
+ Coverage     91.73%   93.54%   +1.8%     
- Complexity      129      167     +38     
===========================================
  Files            17       18      +1     
  Lines           569      759    +190     
===========================================
+ Hits            522      710    +188     
- Misses           47       49      +2
Impacted Files Coverage Δ Complexity Δ
appinfo/routes.php 100% <100%> (ø) 0 <0> (ø) ⬇️
lib/Handler.php 99.47% <100%> (+0.19%) 45 <13> (+13) ⬆️
lib/Controller/EndpointV2Controller.php 98.34% <98.34%> (ø) 25 <25> (?)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@DeepDiver1975
Member

New notification API, already being used by the webUI. The old API is still available, mainly for the desktop client.

The new API focuses on efficient network usage, especially if the user has several pending notifications, as well as on long lists of notifications.

I don't get the necessity of this new API. Desktop, mobile and web clients can/should use the same APIs in any case - otherwise we are doing it really wrong - and I do not believe this is the case.

@jvillafanez
Member Author

The current API is very inefficient in terms of polling. It basically fetches all of the user's notifications regardless of the situation.

  • You are fetching all the notifications for the user. If the user is on vacation, they could have 100 or more notifications stored when they come back. The API will fetch all of them without any control, whether there are 100 or 1,000. This is problematic for performance because we can't optimize anything, and the time it takes to fetch the data will vary.
  • The UI isn't really prepared to handle a large number of notifications. The container is rather small even for 15-20 notifications, and you'd have to scroll too much.
  • The current polling mechanism is really bad. Every poll fetches all the notifications, so polling might take some time and won't be blazing fast. In addition, polling never stops, so you might be fetching 100 or more notifications every 30 seconds.

As said, the new API solves these problems:

  • The new API limits the number of notifications fetched to a maximum of 20; the webUI will request a maximum of 3. Both the response time and the network usage are bounded.
  • The new API also tells you whether more notifications are available while you're fetching them, minimizing the number of requests needed.
  • It also gives you the flexibility to focus on either the newest notifications or the oldest ones.
  • The new polling is independent of the number of notifications. Even if you have thousands of notifications, the polling will remain constant both in time and in size. The new polling will use only 10-15 bytes (plus HTTP(S) protocol overhead, plus an additional header), so the network usage will be extremely low compared with the current one.
  • The new polling can also be easily cached in a shared memory cache (APCu / Redis ...), reducing the workload on the DB server (not done yet, to be discussed).

From a UX point of view, the PR moves from "you have to attend to the notifications as fast as you can" (because of the heavy network usage otherwise) to "you have notifications, attend to them whenever you want".
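
A rough sketch of how a client such as the webUI might consume the limited fetch described in the list above. The URL, query parameters and response fields are assumptions for illustration, not the actual endpoint:

```typescript
// Shape of one limited batch of notifications (illustrative).
interface NotificationBatch {
  notifications: Array<{ id: number; subject: string }>;
  hasMoreNotifications: boolean; // the server says whether more are available
}

// Fetch only the newest few notifications; the webUI would ask for 3,
// while the API itself caps the limit at 20.
async function fetchLatestNotifications(limit = 3): Promise<NotificationBatch> {
  const response = await fetch(
    `/index.php/apps/notifications/api/v2/notifications?limit=${limit}&sort=newest`
  );
  return (await response.json()) as NotificationBatch;
}
```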

@DeepDiver1975
Member

The new API limits the number of notifications fetched to a maximum of 20; the webUI will request a maximum of 3. Both the response time and the network usage are bounded.

Do these query parameters exist on the v1 API? We can adopt them on the UI client side as needed.

Thanks for having a look at these issues. Can we see if this can be done without introducing v2?
Adding additional query parameters and request headers is a way to extend v1 without introducing v2.

Please give it a try. THX

@jvillafanez
Member Author

Can we see if this can be done without introducing v2?

It doesn't look good.
v1 returns an ETag in the response. Generating this ETag requires traversing all the notifications, so adding a limit to avoid fetching all of them is pointless.

I'm not sure who is actively using that ETag, but unless we remove it I don't think we can bring any improvement there; it's completely pointless. The problem is that removing it might break the service specification, so it doesn't seem we can remove it.

Even if we change the ETag generation to a better, less buggy implementation, the effect is that we wouldn't be checking whether there are changes, we'd still be fetching them. So providing an ETag there is useless unless we provide another, more efficient endpoint to check for changes.

To sum up the changes I see:

  • Include new parameters in the request
  • Provide an additional endpoint for polling, otherwise the ETag is useless
  • Change the implementation of the ETag generation

Note that the new v2 API uses plain JSON, not OCS. This also reduces the network overhead that comes with the continuous polling.
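
For illustration, this is one way a dedicated, lightweight polling endpoint with an ETag could be consumed. A sketch under assumptions: the URL and response handling here are not the shipped API:

```typescript
// Remember the ETag from the last poll so the next request can be conditional.
let lastEtag: string | null = null;

// Returns true if the notifications changed since the last poll.
async function somethingChanged(): Promise<boolean> {
  const headers: Record<string, string> = {};
  if (lastEtag !== null) {
    headers['If-None-Match'] = lastEtag;
  }
  const response = await fetch('/index.php/apps/notifications/api/v2/poll', { headers });

  if (response.status === 304) {
    return false; // nothing changed; the tiny response keeps network usage low
  }
  lastEtag = response.headers.get('ETag');
  return true; // something changed, fetch the notification list now
}
```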

Contributor

@tomneedham tomneedham left a comment


IMO this is a big loss of UX, in return for 1.2k LOC, and in order to solve a theoretical problem. Do we have any cases where this is actually a performance issue?

@jvillafanez
Member Author

A change in behaviour doesn't necessarily mean a loss of UX.

  • Browser notifications don't work in IE, so there isn't any loss here.
  • If the user is away or talking to another person, the user won't notice the notification.
  • The user might not care about the notification.

It's possible to keep the current behaviour, although the code would be more complex; that's why I'd prefer to keep the new behaviour.

Do we have any cases where this is actually a performance issue?

There won't be any reports unless a request takes more than a minute or a minute and a half, and that would be because of the browser's limit on parallel AJAX requests: other AJAX requests would need to be queued and delayed, making the problem visible to the user. This is really unlikely even in the worst scenario.
In any case, fetching all the notifications is a bad practice when we can't limit the number of notifications a user can have, and, as said, we're fetching all of them every 30 seconds.

@DeepDiver1975
Member

Note that the new v2 uses plain JSON, not OCS.

Anything which is meant to be a public API has to be either WebDAV or OCS.

No other concept is allowed - even if we don't like it.

@DeepDiver1975
Member

While playing with the current notifications API in the scope of Phoenix, I have to agree that v1 is far from optimal. What we basically want to do is sync the notifications. We might want to research implementing the notifications collection as a synced DAV collection ...

@PVince81 PVince81 modified the milestones: development, triage Apr 25, 2018
@PVince81 PVince81 modified the milestones: triage, maybe some day May 22, 2018