Fsx backup checks #7252
Conversation
@kk1532 can you update the PR with a proper description?
Adding a new filter called 'consecutive-backups' for collecting daily backup reports of AWS FSx resources, so that we can determine the number of consecutive daily backups for each AWS FSx resource. The compliance reporting requires that all AWS FSx resources have at least 7 days' worth of backups. This filter calculates the number of consecutive daily backups. Sample policy:
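(A minimal sketch of such a policy, assuming the filter exposes a days parameter as the timedelta(days=days) line in the diff suggests; names and schema here are illustrative, not necessarily the merged filter's exact interface.)

policies:
  - name: fsx-consecutive-backups      # hypothetical policy name
    resource: aws.fsx
    filters:
      - type: consecutive-backups      # the filter added in this PR
        days: 7                        # assumed parameter, per timedelta(days=days)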
A similar filter was created for RDS instance and RDS cluster resources. The difference in this PR is that ONTAP and non-ONTAP FSx resources use different backup methods, so the API calls differ as well. For reference, the related RDS PR is #7190.
@kk1532 for the PR description, can you put it at the top (edit) rather than putting it as a comment down here? That way people can quickly understand what this PR is about.
This looks good to me @kk1532 , nice job adapting the existing RDS filter to a more complex resource 👍
c7n/resources/fsx.py
Outdated
'Values': ontap_fids,
}])
ontap_vids = [v['VolumeId'] for v in ontap_volumes['Volumes']]
ontap_backups = self.describe_backups(client, 'volume-id', ontap_vids)
Nice use of chunking and filters here, and handling the differences between ONTAP and non-ONTAP file systems. I wasn't familiar with the nuances of filesystem vs. volume-level backups until reviewing this, but your code looked clear anyway and that's a good sign 👍
@ajkerrigan Thank you for the approval.
Apologies, I noticed an issue late with regard to chunking. FYI I found the max number of filter values in the service data here, but you can also produce a failure on demand with an intentionally broken AWS CLI command like this:
aws fsx describe-backups --filters Name=file-system-id,Values=[fs-12345678901234567,...]
And copy the same file system ID 21 times 🙈. You'll get an error like:
An error occurred (BadRequest) when calling the DescribeBackups operation: 1 validation error detected: Value '...' at 'filters.1.member.values' failed to satisfy constraint: Member must have length less than or equal to 20
c7n/resources/fsx.py
Outdated
expected_dates.add((utcnow - timedelta(days=days)).strftime('%Y-%m-%d'))

for resource_set in chunks(
        [r for r in resources if self.annotation not in r], 50):
Actually I noticed something here that we'll need to tweak. First, it looks like the max number of filter values per API call is 20 (reference). And second, we'll need to handle the chunking at a different level. As-is, the chunking mixes ontap and non-ontap filesystems; we'd want to chunk those separately because they feed into separate API calls. Also, since ontap backups are volume-based and there may be more than one volume per filesystem, we'll need to chunk at the volume level rather than the filesystem level.
My gut says that it'd be helpful to:
- Split process_resource_set into dedicated separate methods for ontap and non-ontap resources
- Handle the chunking in those individual methods, at the volume level for ontap and the filesystem level for non-ontap (see the sketch after this list)
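A rough sketch of how the chunked lookup might respect the 20-value limit, reusing the describe_backups signature visible in the diff above; the pagination loop and the constant name are assumptions, not the PR's exact code:

from c7n.utils import chunks

FILTER_VALUE_LIMIT = 20  # DescribeBackups allows at most 20 values per filter

def describe_backups(self, client, filter_name, ids):
    # Signature mirrors the call site in the diff above; the body is a sketch.
    backups = []
    # Chunk the ids so each DescribeBackups call stays under the 20-value limit
    for id_set in chunks(ids, FILTER_VALUE_LIMIT):
        params = {'Filters': [{'Name': filter_name, 'Values': id_set}]}
        # Follow NextToken so paginated results aren't truncated
        while True:
            resp = client.describe_backups(**params)
            backups.extend(resp.get('Backups', []))
            if 'NextToken' not in resp:
                break
            params['NextToken'] = resp['NextToken']
    return backups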
- As suggested, I have split the method process_resource_set into two, for ontap and non-ontap resources
- Changed the chunking logic as well
- It should now handle the describe filter limitation issue
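For readers, a hypothetical shape of that split (the method names are guesses, not the PR's actual identifiers; FileSystemType comes from the FSx DescribeFileSystems response):

def process_resource_set(self, client, resources):
    # ONTAP backups are volume-based; other filesystem types back up at the
    # filesystem level, so each path chunks its own IDs separately.
    ontap = [r for r in resources if r['FileSystemType'] == 'ONTAP']
    non_ontap = [r for r in resources if r['FileSystemType'] != 'ONTAP']
    self.process_ontap_resources(client, ontap)          # chunks volume IDs
    self.process_non_ontap_resources(client, non_ontap)  # chunks filesystem IDs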
Looks good with the changes, thanks again for the work on this one @kk1532 !