Performance when loading huge route file #343
In fact, in terms of routes, we will need to handle up to 2 million rules in the near future (maybe more). Fabio seems to be the most efficient tool for our needs.
Am I right?
I didn't consider 2M routes a viable use case, but I'm more than happy to make this a reality. What a showcase! You are correct that fabio cannot handle this dynamically right now because of the consul limitation, which is not going to be lifted or made configurable any time soon. We get this request regularly.

However, in addition to being a data source, consul is the single source of truth and acts as a synchronization point. When consul gets updated, all fabios pull the new version from there. You don't have to know how many fabios there are or where they are; they will do the right thing. Therefore, using an API or a dynamic file provider would put the burden on you to provide consistency and timeliness of updates. Supporting a different data source would be another option: fabio could pull the routing table from a URL.

I'd still have to look at the speed of the routing parser. It is using regular expressions right now, but I've gained some experience in coding my own parsers.

How often does the routing table change, and how many fabio instances are you running?
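For context on the parser remark above, here is a minimal, hypothetical Go sketch contrasting a regexp-based line parser with a hand-written one built on string splitting. The simplified `route add <svc> <src> <dst>` grammar is an assumption for illustration only; fabio's actual routing language supports more options (weights, tags, etc.), and this is not fabio's code:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// Hypothetical simplified grammar: "route add <svc> <src> <dst>".
// The real fabio syntax is richer; this only compares the two
// parsing strategies mentioned in the comment above.

type route struct{ service, src, dst string }

var routeRe = regexp.MustCompile(`^route add (\S+) (\S+) (\S+)$`)

// parseRegexp parses one line with a regular expression.
func parseRegexp(line string) (route, bool) {
	m := routeRe.FindStringSubmatch(line)
	if m == nil {
		return route{}, false
	}
	return route{m[1], m[2], m[3]}, true
}

// parseFields parses the same line with plain string splitting,
// which typically avoids the regexp engine's overhead.
func parseFields(line string) (route, bool) {
	f := strings.Fields(line)
	if len(f) != 5 || f[0] != "route" || f[1] != "add" {
		return route{}, false
	}
	return route{f[2], f[3], f[4]}, true
}

func main() {
	input := "route add mysvc www.example.com/ http://10.0.0.1:8080/\n"
	sc := bufio.NewScanner(strings.NewReader(input))
	for sc.Scan() {
		if r, ok := parseFields(sc.Text()); ok {
			fmt.Printf("fields: %+v\n", r)
		}
		if r, ok := parseRegexp(sc.Text()); ok {
			fmt.Printf("regexp: %+v\n", r)
		}
	}
}
```

Wrapping both functions in Go benchmarks (`testing.B`) over a few million generated lines would show how much of the post-processing time is attributable to the regexp-based parser.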
Also, how many targets do you have per domain? Is it 2M domains or 2M route entries?
@smalot Is this still an issue for you? I'd like to pick this one up.
Hi,
Loading a file with 100,000 routes consumes a lot of resources.
In fact, loading the file itself seems to be very fast (5 to 10 seconds according to the logs), but the post-processing, which appears to index and parse the data, takes really long (more than 1 minute).
But once that is done, any request is handled quickly (as usual).
Is there a workaround, or is an optimization task planned?
Many thanks
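To reproduce the slow load described above, one could generate a synthetic route file of a similar size. This is a minimal sketch assuming fabio's file provider reads the standard `route add` syntax; the service names, target addresses, and the `routes.txt` filename are made-up values for illustration:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Write a synthetic route file roughly matching the size
	// reported in this issue (100,000 entries).
	f, err := os.Create("routes.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	w := bufio.NewWriter(f)
	defer w.Flush()

	const n = 100000
	for i := 0; i < n; i++ {
		// One unique path prefix per service, targets spread
		// over 1,000 hypothetical backend ports.
		fmt.Fprintf(w, "route add svc-%d /svc-%d http://10.0.0.1:%d/\n",
			i, i, 8000+i%1000)
	}
}
```

Pointing fabio's file-based routing table at the generated file and timing startup should separate the raw file-read time from the indexing/parsing phase described above.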