Implement RPC caching proxy #66
Comments
What container do you mean by this and why is it necessary?
@florianstoecker can you create this key and store it into …? I think we should probably split this issue into the infrastructure-related part and the implementation changes on the various components (bot update, testing of HIT/MISS, adding a punch-through env variable to the frontend), which can be done by the full team. If you agree, I would finish up in #76 by enabling the deployment of …
I don't see the need to split it into smaller parts, as I estimate the changes to be quite small. But if you want, you can! I agree that caching shouldn't be merged to main until the team completes the multi-client testing.
Configuring ingress logging is limited by ingress-nginx per default to a certain set of variables (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/). This is because the log format is defined as part of the … I'd instead just want the team members to look into their networking tab to verify it - in general I don't see a point in extensive testing of the nginx caching functionality: our request format is quite trivial and nginx has likely implemented its caching correctly.
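For reference, here is a minimal sketch of what the caching setup and the HIT/MISS check could look like. All names and values (cache zone name, TTL, paths) are illustrative assumptions, not the actual configuration; in ingress-nginx, snippets like these would typically be injected via the controller ConfigMap and snippet annotations rather than a plain nginx.conf:

```nginx
# http context (e.g. the ingress-nginx ConfigMap "http-snippet"):
# define a small shared cache zone; name and sizes are hypothetical.
proxy_cache_path /tmp/rpc-cache levels=1:2 keys_zone=rpccache:10m
                 max_size=100m inactive=60s use_temp_path=off;

# location/server context (e.g. a "configuration-snippet" annotation):
proxy_cache         rpccache;
proxy_cache_methods POST;                          # JSON-RPC reads arrive as POST
proxy_cache_key     "$request_uri|$request_body";  # key on the RPC payload
proxy_cache_valid   200 10s;                       # short TTL for read results
# proxy_ignore_headers Cache-Control Expires;      # may be needed if the upstream forbids caching

# expose HIT/MISS so everyone can verify caching in the browser networking tab
add_header X-Cache-Status $upstream_cache_status;
```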
I don't want to; the main point is that I don't see the implementation of: … on my task list, but rather on the members' list. If we can manage this with this one issue, I'm fine with keeping it (:
As stated above, the main reason for logging is to see caching across connected clients. Single clients already cache requests and don't call the same contract method more than 2 times per minute. The second reason is the estimation of the saved quota. The third reason is the bot, which doesn't have a console to look into. But if you say it's not easy, that's another point we need to estimate. What would be the workaround?
I am not sure who else will be able to figure out how to make bot-twitter pass requests through the ingress.
The expected change is to create and set an env variable, so that if it is not set, the app falls back to the pure Infura endpoint. I think adding the env variable and making sure it works can fall into the category of tasks you can help with (:
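A minimal sketch of the punch-through variable, assuming hypothetical names (`RPC_CACHE_URL`, `INFURA_KEY`) that may differ from the actual repo. The same pattern would work for both the frontend and bot-twitter; since the bot has no browser console, it can log the cache header per response instead (assuming the proxy exposes `X-Cache-Status` as sketched above):

```ts
// Hypothetical env variable names; falls back to the pure Infura endpoint
// whenever the caching proxy URL is not set (the "punch-through").
const INFURA_URL = `https://mainnet.infura.io/v3/${process.env.INFURA_KEY}`;
const rpcUrl: string = process.env.RPC_CACHE_URL ?? INFURA_URL;

// Example request (Node 18+ / ESM for top-level await and fetch):
const res = await fetch(rpcUrl, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
});
// Useful for bot-twitter, which has no networking tab to inspect:
console.log("X-Cache-Status:", res.headers.get("x-cache-status")); // HIT or MISS
```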
Do you mean logs? Then it's answered above.
This point is mainly for the functional review, but as the whole issue is not just to implement, but also to validate that it works properly, we also need to prepare means to conclude (e.g. logging):
Goal
Set up a basic RPC caching proxy
Context
After the preliminary investigation in #38, it seems like caching RPC read requests can be a viable solution to make our website scalable and not significantly affected by its popularity. But we still want to validate and test that it will not significantly affect the users.
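To put rough, illustrative numbers on this (assumptions, not measurements): as noted in the discussion above, a single client calls the same contract method at most about 2 times per minute. Without a shared cache, 100 concurrent clients would then generate up to 200 upstream requests per minute per method; with a shared cache and, say, a 10-second TTL, the upstream sees at most 6 requests per minute per method, independent of the number of clients.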
Tasks
RPC-caching-test
infura key with no restrictions