Handling multiple rate-limits #382

Merged
merged 1 commit into master from feature/ratelimiting on Jul 10, 2015
Conversation

subnetmarco
Member

Closes issue #205 and deprecates #224.

This pull request enables support for multiple rate-limit periods on the same API.

New plugin configuration value format

This PR introduces a new format for configuring the ratelimiting plugin: its value is now expressed as value.{period}={limit}, for example:

value.second=10 instead of the previous value.limit=10&value.period=second.

The plugin will transparently handle the old value format without requiring a migration of the data.
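To make this backward compatibility concrete, here is a minimal Lua sketch of the kind of if..else normalization it implies. This is an illustration under assumed names, not code from this PR; only the two value formats quoted above come from the source:

```lua
-- Hypothetical sketch (function and surrounding plumbing are assumptions):
-- map the old {limit, period} pair onto the new {period = limit} shape, so
-- the rest of the plugin only ever deals with the new format.
local function normalize_value(value)
  if value.limit and value.period then
    -- old format: value.limit=10&value.period=second
    return { [value.period] = value.limit }
  end
  -- new format: value.second=10, value.minute=100, ...
  return value
end

-- Both shapes normalize to the same table:
assert(normalize_value({ limit = 10, period = "second" }).second == 10)
assert(normalize_value({ second = 10, minute = 100 }).minute == 100)
```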

@subnetmarco subnetmarco added the idea/new plugin [legacy] those issues belong to Kong Nation, since GitHub issues are reserved for bug reports. label Jul 7, 2015
@subnetmarco subnetmarco self-assigned this Jul 7, 2015
@subnetmarco subnetmarco added this to the 0.4.0 milestone Jul 7, 2015
@subnetmarco subnetmarco force-pushed the feature/ratelimiting branch from 319dcb5 to c36ea77 on July 7, 2015 at 02:14
@subnetmarco subnetmarco added the pr/ready (but hold merge) No more concerns, but do not merge yet (probably a conflict of interest with another PR or release) label Jul 7, 2015
@thibaultcha
Member

Sounds good 👍, except for the manual on_insert trigger: it has been renamed self_check and is now called recursively on sub-schemas, during both insert and update. That change is not yet merged into master.

@thibaultcha
Member

However, I am not sure about supporting the old format: we would either be supporting a deprecated format forever, or merely postponing an inevitable deprecation.

We could drop it now, while we are not yet at 1.0, and optionally provide a migration script with instructions on how to run it, instead of dropping it later, when existing users will have both formats (the old and the new one) in their plugin configurations and it will be harder to migrate.
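For reference, the data migration discussed here could hypothetically look like the following sketch. The DAO collection and method names are assumptions for illustration, not Kong's actual 0.4-era API:

```lua
-- Hypothetical one-off data migration (all DAO calls are assumed names):
-- rewrite every stored ratelimiting configuration from the old
-- {limit, period} shape to the new {period = limit} shape.
local rows = dao.plugins_configurations:find_by_keys({ name = "ratelimiting" })
for _, row in ipairs(rows) do
  local v = row.value
  if v.limit and v.period then
    row.value = { [v.period] = v.limit }
    dao.plugins_configurations:update(row)
  end
end
```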

@subnetmarco
Member Author

> We could drop it now, while we are not yet at 1.0, and optionally provide a migration script with instructions on how to run it, instead of dropping it later, when existing users will have both formats (the old and the new one) in their plugin configurations and it will be harder to migrate.

I thought about it and was about to implement it, but after more careful consideration I found the following reasons to postpone it:

  • Database schema migrations were never designed for a scenario where data migration is involved too (until now we only did schema migrations), and given that 0.4.0 will be released soon, we might not have enough time to think carefully about the best implementation.
  • A data migration script would decrease adoption and create problems if not done correctly, and rate-limiting is one of the most used plugins. So to avoid potential problems for 0.4.0, or decreased adoption because a migration is involved, I believe it's better to postpone it.
  • In this specific case retro-compatibility is very easy to maintain (it's a simple if..else statement).
  • New migrations might be necessary later, and some of them might be more invasive. Because we don't really know what to expect from the future from a schema/data migration perspective, we could decide to keep retro-compatibility now (as long as it's this easy to implement) and break it when switching to 1.0: basically no migrations now, but when adopting 1.0 a full-scale migration would be required, because retro-compatibility with pre-1.0 versions would not be maintained.
  • We also talked about merging all the migration script files into one file for 1.0, so if we decide to merge these files, schema retro-compatibility will be broken anyway and a full-scale migration will be required for 1.0 regardless.

Thoughts?

Review thread on the following diff lines:

```lua
return false, DaoError(err, constants.DATABASE_ERROR_TYPES.SCHEMA)
end
end
```

@subnetmarco
Member Author

@thibaultcha to fix the problem I had to implement this check. Basically, this is the problem:

  • BaseDao invokes validate_entity. The entity is the plugin_configuration, which has a self_check.
  • self_check is never invoked on the underlying plugin's schema.
  • This code invokes the underlying plugin's self_check if it exists, as sketched below.
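A minimal reconstruction of the check being described, under assumed names: the require path, the plugin_t fields, and the self_check argument list are illustrative guesses, while the DaoError wrapping is the one visible in the diff above.

```lua
-- Sketch of the described check: after the generic validation of the
-- plugin_configuration entity, load the plugin's own schema and invoke its
-- self_check, if it defines one. Names other than DaoError/constants are assumed.
local plugin_schema = require("kong.plugins." .. plugin_t.name .. ".schema")
if plugin_schema.self_check then
  local ok, err = plugin_schema.self_check(plugin_schema, plugin_t.value, dao, is_update)
  if not ok then
    return false, DaoError(err, constants.DATABASE_ERROR_TYPES.SCHEMA)
  end
end
```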

@thibaultcha
Member

Sounds good, but you are locking self_check to a SCHEMA error, and it shouldn't be: it can be any other error type too. You should return the error exactly as self_check returns it; it is up to the plugin to decide which error it is. The error might come from Cassandra being unreachable, for example.

Otherwise the check sounds good to me. A better solution would be to investigate why it is not invoked from validate_entity: the function calls itself on sub-schemas, so if there is no other error, self_check should already be called recursively.
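In code, the suggestion amounts to propagating the self_check error untouched, under the same assumed names as the sketch above:

```lua
-- Suggested variant: let self_check decide the error type (schema, database,
-- ...) and return its error as-is instead of forcing a SCHEMA error.
if plugin_schema.self_check then
  local ok, err = plugin_schema.self_check(plugin_schema, plugin_t.value, dao, is_update)
  if not ok then
    return false, err  -- e.g. a DATABASE error if Cassandra is unreachable
  end
end
```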

@subnetmarco
Member Author

> because the function calls itself on sub-schemas

I believe this part is missing.

@subnetmarco subnetmarco force-pushed the feature/ratelimiting branch from 89def70 to 7a6a5e8 on July 10, 2015 at 06:43
@subnetmarco subnetmarco force-pushed the feature/ratelimiting branch from 62e8bd7 to 55352bc on July 10, 2015 at 08:03
subnetmarco added a commit that referenced this pull request Jul 10, 2015
@subnetmarco subnetmarco merged commit 5e94856 into master Jul 10, 2015
@subnetmarco subnetmarco deleted the feature/ratelimiting branch July 10, 2015 08:09
@tamizhgeek
Contributor

Sorry to comment on a closed PR, but do we have the requested benchmarks for when an API is configured with multiple rate limits? Just curious!
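For context on why benchmarks are interesting here: with several periods configured, a straightforward implementation performs one usage lookup per period on every request, so enforcement cost grows with the number of limits. A hypothetical sketch, where the helper and the limits table are assumptions rather than Kong's actual code:

```lua
-- Naive per-request enforcement: one usage lookup per configured period.
-- get_usage and the limits table are illustrative assumptions.
local limits = { second = 10, minute = 100, hour = 1000 }

local function check_limits(api_id, identifier)
  for period, limit in pairs(limits) do
    local usage = get_usage(api_id, identifier, period)  -- typically a datastore read
    if usage >= limit then
      return false, "rate limit exceeded for period: " .. period
    end
  end
  return true
end
```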

ctranxuan pushed a commit to streamdataio/kong that referenced this pull request Aug 25, 2015
Handling multiple rate-limits

Former-commit-id: d49df90d93864c9b6aca6018f6735e4f0ebfa4a6