
[bug-654]: Zero quota should allow infinite usage #211

Merged: 20 commits from bug-654-zero-quota into main on Feb 22, 2023

Conversation

@atye (Contributor) commented Feb 9, 2023

Description

This PR implements infinite capacity usage for PowerFlex and PowerMax if the quota is zero.

This also fixes a data race and cleans up unnecessary code around URL request paths and PowerScale handlers.

Infinite Quota

The volume create policy files are updated to permit roles with a zero quota in addition to roles with a non-zero allowable quota.

The Redis enforcer is also updated to skip the approved-capacity and requested-capacity check for the tenant when the quota is zero. We simply approve the request, but still keep track of the approved capacity.
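As a rough sketch of that approval path, assuming a go-redis backed ledger where a quota of zero means unlimited (the Enforcer type, key layout, and ApproveRequest signature below are illustrative, not the actual karavi-authorization code):

```go
package enforcer

import (
	"context"

	"github.com/go-redis/redis/v8"
)

type Enforcer struct {
	rdb *redis.Client
}

// ApproveRequest approves a capacity request for a tenant. A quota of
// zero is treated as infinite: the capacity check is skipped, but the
// approved capacity is still recorded for visibility.
func (e *Enforcer) ApproveRequest(ctx context.Context, tenant string, quota, requested int64) (bool, error) {
	if quota == 0 {
		// Infinite quota: approve unconditionally, but keep the ledger current.
		if err := e.rdb.HIncrBy(ctx, "quota:"+tenant, "approved_capacity", requested).Err(); err != nil {
			return false, err
		}
		return true, nil
	}

	approved, err := e.rdb.HGet(ctx, "quota:"+tenant, "approved_capacity").Int64()
	if err != nil && err != redis.Nil {
		return false, err
	}
	if approved+requested > quota {
		return false, nil // would exceed the tenant's quota
	}
	if err := e.rdb.HIncrBy(ctx, "quota:"+tenant, "approved_capacity", requested).Err(); err != nil {
		return false, err
	}
	return true, nil
}
```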

Data Race

The unit tests reported a data race between the storage pool cache and the token getter for PowerFlex. This occurs because both entities use the same goscaleio client and get/set the client's login token concurrently. The fix is to give each its own goscaleio client.

Another solution would be to wrap the goscaleio client with a mutex, but the above solution is simpler.
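For illustration, here is a minimal sketch of both options: one client per component (the chosen fix) and the mutex wrapper that was considered. Only the goscaleio calls (NewClient, GetToken, SetToken) are real; the surrounding wiring and names are hypothetical.

```go
package powerflex

import (
	"sync"

	goscaleio "github.com/dell/goscaleio"
)

// newClients builds one client per component instead of sharing one, so
// concurrent GetToken/SetToken calls no longer race on shared login
// state. goscaleio.NewClient is assumed here to read its connection
// settings from the environment.
func newClients() (poolCacheClient, tokenGetterClient *goscaleio.Client, err error) {
	if poolCacheClient, err = goscaleio.NewClient(); err != nil {
		return nil, nil, err
	}
	if tokenGetterClient, err = goscaleio.NewClient(); err != nil {
		return nil, nil, err
	}
	return poolCacheClient, tokenGetterClient, nil
}

// The alternative considered: serialize access to one shared client.
type lockedClient struct {
	mu sync.Mutex
	c  *goscaleio.Client
}

func (l *lockedClient) SetToken(t string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.c.SetToken(t)
}

func (l *lockedClient) GetToken() string {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.c.GetToken()
}
```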

Code Removal

For URL path policy decisions, we want to allow all paths by default. Since every request would be allowed anyway, querying OPA for a URL policy decision is unnecessary. The URL policy files are removed, along with the proxy-server code that queried OPA for those policies.

The PowerScale handler should only be a credential shield, since PowerScale has its own RBAC. However, we had volumeCreate and volumeDelete handlers that intercepted requests and did essentially nothing except ask OPA to confirm that the claims exist in the token; other middleware in the proxy-server already validates the access token. These handlers are removed, so the proxy-server now only handles session authentication with the array and proxies everything else.
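A minimal sketch of the slimmed-down handler, assuming a plain reverse proxy that only attaches the array session before forwarding (the constructor and the getSessionCookie helper are hypothetical, not the actual proxy-server API):

```go
package proxy

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// newPowerScaleHandler proxies every request to the array after
// attaching a session credential. There is no volumeCreate/volumeDelete
// interception and no OPA query; PowerScale's own RBAC authorizes the
// request, and the proxy acts only as a credential shield.
func newPowerScaleHandler(arrayURL string, getSessionCookie func(r *http.Request) (*http.Cookie, error)) (http.Handler, error) {
	target, err := url.Parse(arrayURL)
	if err != nil {
		return nil, err
	}
	rp := httputil.NewSingleHostReverseProxy(target)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		cookie, err := getSessionCookie(r)
		if err != nil {
			http.Error(w, "session authentication failed", http.StatusBadGateway)
			return
		}
		r.AddCookie(cookie)
		rp.ServeHTTP(w, r)
	}), nil
}
```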

GitHub Issues

List the GitHub issues impacted by this PR:

  • dell/csm#654

Checklist:

  • I have performed a self-review of my own code to ensure there are no formatting, vetting, linting, or security issues
  • I have verified that new and existing unit tests pass locally with my changes
  • I have not allowed coverage numbers to degenerate
  • I have maintained at least 90% code coverage
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • Backward compatibility is not broken

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Please also list any relevant details for your test configuration.

New unit tests in the proxy-server handlers and in the redis enforcer, manual testing, and e2e tests.

@atye force-pushed the bug-654-zero-quota branch 2 times, most recently from 05a240c to 869f3aa on February 14, 2023 16:48
@atye marked this pull request as draft on February 14, 2023 16:48
@atye marked this pull request as ready for review on February 21, 2023 14:10
alikdell previously approved these changes Feb 21, 2023
@atye dismissed stale reviews from shaynafinocchiaro and alikdell via 214d9a0 on February 21, 2023 14:32
@atye force-pushed the bug-654-zero-quota branch from 869f3aa to 214d9a0 on February 21, 2023 14:32
alikdell previously approved these changes Feb 21, 2023
@@ -46,6 +46,3 @@ $K3S kubectl create configmap volumes-create -n karavi --from-file=./volumes_cre
 $K3S kubectl create configmap volumes-delete -n karavi --from-file=./volumes_delete.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f -
 $K3S kubectl create configmap volumes-unmap -n karavi --from-file=./volumes_unmap.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f -
 $K3S kubectl create configmap volumes-map -n karavi --from-file=./volumes_map.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f -
-$K3S kubectl create configmap powerflex-urls -n karavi --from-file=./url.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f -
Collaborator:
Should you also remove $K3S kubectl create configmap powerscale-volumes-create -n karavi --from-file=./volumes_powerscale_create.rego --save-config --dry-run=client -o yaml | $K3S kubectl apply -f - since that rego file is being removed?

@atye (Contributor, Author):
Yes, good catch.

@atye dismissed stale reviews from alikdell and shaynafinocchiaro via f88e952 on February 21, 2023 16:05
@atye (Contributor, Author) commented Feb 22, 2023:

run e2e test

@atye merged commit 4cb1658 into main on Feb 22, 2023
@atye deleted the bug-654-zero-quota branch on February 22, 2023 02:21