
Implement RPC caching proxy #66

Open · 2 of 11 tasks · Tracked by #72
valiafetisov opened this issue Jan 27, 2022 · 4 comments
Comments

valiafetisov (Contributor) commented Jan 27, 2022

Goal

Set up a basic RPC caching proxy

Context

After the preliminary investigation in #38, it seems like caching RPC read requests can be a viable solution to make our website scalable and not significantly affected by its popularity. But we still want to validate and test that it will not significantly affect the users.

  • what responses should be cached?
    • possibly all responses (since some Uniswap responses throw expected errors)
    • we may not want to cache subscriptions / socket connections
  • for how long should responses be cached? (right now the minimum of inactive and proxy_cache_valid 200 10m; see the config sketch below this list)
    • during the testing period: 2m; in production: 1 block time, ~15s
  • do we want to allow manual cache invalidation, i.e. a force refresh?
    • not yet required, may change after testing
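
For reference, a minimal sketch of the nginx directives this refers to, assuming a dedicated cache zone for the RPC proxy; the zone name, paths and upstream are placeholders, not our actual config:

```nginx
# Illustration only: cache zone, paths and upstream are placeholders.
proxy_cache_path /var/cache/nginx/rpc levels=1:2 keys_zone=rpc_cache:10m
                 max_size=1g inactive=2m use_temp_path=off;

location / {
    proxy_cache rpc_cache;
    # Testing period: cache successful responses for 2 minutes;
    # in production this would be lowered to roughly one block time (~15s).
    proxy_cache_valid 200 2m;
    # Note: JSON-RPC requests are POSTs, so the real setup would also need
    # proxy_cache_methods and a cache key that includes the request body.
    proxy_pass https://mainnet.infura.io/v3/INFURA_PROJECT_ID;
}
```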

Tasks

  • make RPC caching testable
    • make it work with Kubernetes
    • make sure that container logs indicate hit/miss
    • create a separate RPC-caching-test Infura key with no restrictions
    • make sure caching works for all networks
    • make sure that the bot is also using the proxy for its requests
    • make caching optional (e.g. provide a frontend env variable that enables or disables it)
  • (optional) never fetch auctions via MetaMask; handled in another PR (Stop fetching data through wallet #65), which should be merged here first
  • deploy the branch
  • manually test it with several connected clients
  • if all of the above goes well, let's solve the problem of not exposing the Infura key to the frontend
BracketJohn (Contributor) commented:

make sure that container logs indicate hit/miss

What container do you mean by this and why is it necessary?

create a separate RPC-caching-test infura key with no restrictions

@florianstoecker can you create this key and store it into auction-ui/cache-test.auction-ui.k8s.sidestream.tech/bot-twitter/infura_project_id?


I think we should probably split this issue into the infrastructure-related part and the implementation changes on various components (bot update, testing of HIT/MISS, adding a punch-through env variable to the frontend) that can be done by the full team.

If you agree, I would finish up in #76 by enabling the deployment of cache-test on every push; people can then open PRs against that branch to develop the feature in a separate environment (avoiding the problem of a broken main if the cache does not work out).

valiafetisov (Contributor, Author) commented:

make sure that container logs indicate hit/miss

What container do you mean by this and why is it necessary?

  • I suppose the ingress, or whatever caches the requests?
  • To validate that opening several clients together will produce a lot of hit records, so that we can later estimate the percentage of cached requests

I think we should probably split this issue

I don't see the need to split it into smaller parts, as I estimate the changes to be quite small. But if you want, you can! Agreed that caching shouldn't be merged to main until the multi-client testing by the team is complete.

BracketJohn (Contributor) commented:

To validate that opening several clients together will produce a lot of hit records and we can later estimate percentage of the cached requests

Configuring ingress logging is limited by ingress-nginx by default to a certain set of variables (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/). This is because the log format is defined as part of the http directive, but the cache hit/miss is only captured at the lower location level, so turning on logging here is not easy.

I'd instead just ask the team members to look into their networking tab to verify it. In general I don't see the point of extensively testing the nginx caching functionality: our request format is quite trivial and nginx has most likely implemented its caching correctly.
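
If helpful, one way to make this visible in the networking tab could be to expose nginx's upstream cache status as a response header, e.g. via the ingress-nginx configuration-snippet annotation (a sketch only; the header name is arbitrary):

```nginx
# Sketch only: surface the cache status as a response header so HIT/MISS
# becomes visible in the browser's networking tab.
add_header X-Cache-Status $upstream_cache_status always;
```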

But if you want, you can!

I don't want to; the main point is that I don't see the implementation of:

  • bot twitter code,
  • networks.ts related code,
  • adding hit/miss env vars,
  • testing

on my task list, but rather on the list of members. If we can manage this within this one issue, I'm fine with keeping it (:

valiafetisov (Contributor, Author) commented:

in general I don't see a point of extensive testing of the nginx caching functionality

As stated above, the main reason for logging is to see caching across connected clients. A single client already caches requests and doesn't call the same contract method more than twice per minute. The second reason is the estimation of the saved quota. The third reason is the bot, which doesn't have a console to look into.

But if you say it's not easy, it's another point we need to estimate. What would be the workaround?

I don't see the implementation on my task list
bot twitter code

I am not sure who else will be able to figure out how to make bot-twitter pass requests through the ingress.

networks.ts related code,

The expected change is to create and set an env variable so that, if it is not set, the frontend falls back to the pure Infura endpoint (see the sketch below). I think adding the env variable and making sure it works can fall into the category of tasks you can help with (:
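
A minimal sketch of what I mean, with assumed variable names (RPC_CACHE_URL, INFURA_PROJECT_ID), not the actual networks.ts code:

```typescript
// Sketch with assumed env variable names: prefer the caching proxy when
// configured, otherwise fall back to the plain Infura endpoint.
const INFURA_URL = `https://mainnet.infura.io/v3/${process.env.INFURA_PROJECT_ID ?? ''}`;

export const getRpcUrl = function (): string {
    return process.env.RPC_CACHE_URL || INFURA_URL;
};
```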

adding hit/miss env vars

Do you mean logs? Then it's answered above.

testing

This point is mainly about the functional review, but since the whole issue is not just to implement caching but also to validate that it works properly, we also need to prepare the means to conclude (e.g. logging) that:

  • Frontend worked without issues
  • Cache was actually hit
