[bug-654]: Zero quota should allow infinite usage #211
Conversation
@@ -46,6 +46,3 @@ $K3S kubectl create configmap volumes-create -n karavi --from-file=./volumes_create.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f -
$K3S kubectl create configmap volumes-delete -n karavi --from-file=./volumes_delete.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f -
$K3S kubectl create configmap volumes-unmap -n karavi --from-file=./volumes_unmap.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f -
$K3S kubectl create configmap volumes-map -n karavi --from-file=./volumes_map.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f -
$K3S kubectl create configmap powerflex-urls -n karavi --from-file=./url.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f - |
Should you also remove $K3S kubectl create configmap powerscale-volumes-create -n karavi --from-file=./volumes_powerscale_create.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f - since that rego file is being removed?
Yes, good catch.
run e2e test
Description
This PR implements infinite capacity usage for PowerFlex and PowerMax when the quota is zero.
It also fixes a data race and removes unnecessary code around URL request paths and the PowerScale handlers.
Infinite Quota
The volume create policy files are updated to permit roles with a zero quota alongside roles that have a non-zero allowable quota.
The Redis enforcer is also updated to skip the approved-capacity and requested-capacity check for a tenant whose quota is zero: the request is simply approved, but the approved capacity is still recorded. A minimal sketch of that branch follows.
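For illustration only, here is a minimal sketch of the zero-means-infinite branch. The names (`ApproveRequest`, `record`) are hypothetical stand-ins, not the actual Redis enforcer API:

```go
package enforcer

// ApproveRequest decides whether a volume-create request fits within
// the tenant's quota. A quota of zero means "infinite": the request is
// approved without comparing capacities, but the approved capacity is
// still recorded so usage remains observable.
func ApproveRequest(quota, approved, requested int64, record func(int64)) bool {
	if quota == 0 {
		// Zero quota: skip the capacity check entirely,
		// but keep tracking what was approved.
		record(requested)
		return true
	}
	if approved+requested > quota {
		return false // would exceed the allowable quota
	}
	record(requested)
	return true
}
```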
Data Race
The unit tests reported a data race between the storage pool cache and the token getter for PowerFlex. It occurs because both use the same goscaleio client and get and set the client's login token concurrently. The fix is to give each its own goscaleio client, as sketched below.
Another solution would be to wrap the shared goscaleio client in a mutex, but separate clients are simpler.
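A compressed sketch of the shape of the fix; the `client` type and `newPowerFlexClient` constructor are hypothetical stand-ins for the real goscaleio types:

```go
package main

// client stands in for the goscaleio client; only the mutable login
// token matters for this illustration.
type client struct{ token string }

// newPowerFlexClient is a hypothetical constructor standing in for the
// real goscaleio client setup.
func newPowerFlexClient() *client { return &client{} }

func main() {
	// Racy version (not shown): the storage pool cache and the token
	// getter shared one client and read/wrote client.token from
	// separate goroutines.
	//
	// Fixed version: each component owns its own client, so logins no
	// longer touch shared state.
	poolCacheClient := newPowerFlexClient()   // used by the storage pool cache
	tokenGetterClient := newPowerFlexClient() // used by the token getter
	poolCacheClient.token = "pool-cache-login"
	tokenGetterClient.token = "token-getter-login"
}
```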
Code Removal
For URL path policy decisions, we want to allow all paths by default, so querying OPA for a URL policy decision is unnecessary: the request would be allowed anyway. The URL policy files are removed, along with the proxy-server code that queried OPA for those policies.
The PowerScale handler should only be a credential shield, since PowerScale has its own RBAC. But we had volumeCreate and volumeDelete handlers which intercepted the requests and did essentially nothing except ask OPA whether the claims exist in the token; other middleware in the proxy-server already validates the access token. These handlers are removed, so the proxy-server now only handles session authentication with the array and proxies everything else, as sketched below.
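For a sense of the resulting shape, here is a minimal credential-shield proxy built on the Go standard library. The target address and session cookie value are illustrative, and this is not the actual karavi-authorization code:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The PowerScale array the proxy fronts (illustrative address).
	target, err := url.Parse("https://powerscale.example.com")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(target)
	director := proxy.Director
	proxy.Director = func(r *http.Request) {
		director(r)
		// Attach the array session credential; everything else is
		// proxied untouched, since PowerScale enforces its own RBAC.
		r.Header.Set("Cookie", "isisessid=<session-id>")
	}

	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```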
GitHub Issues
List the GitHub issues impacted by this PR:
Checklist:
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Please also list any relevant details for your test configuration.
New unit tests in the proxy-server handlers and in the redis enforcer, manual testing, and e2e tests.