Sk/cog rebase #4489

Open · gsa-suk wants to merge 12 commits into main from sk/cog_rebase
Conversation

gsa-suk (Contributor) commented Nov 27, 2024

Closes #4481

Changes

  1. support/cog_over.py - The audit year is now used to calculate the baseline year for cognizant assignments, and the calculated baseline year is used throughout the module's functions (see the sketch after this list).
  2. support/test_cog_over.py - Added test cases for 2024 and a future year.
  3. audit/models/models.py - The audit year is now passed into compute_cog_over.
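
For context, here is a minimal sketch of what "audit year drives baseline year" could look like. The constants, function name, and five-year cycle are illustrative assumptions, not the actual implementation in support/cog_over.py:

```python
# Hypothetical sketch: map an audit year onto the baseline year of its
# assignment cycle. FIRST_BASELINE_YEAR and CYCLE_LENGTH are assumed
# values for illustration only.
FIRST_BASELINE_YEAR = 2019
CYCLE_LENGTH = 5

def calc_baseline_year(audit_year: int) -> int:
    """Return the baseline year whose cycle contains audit_year."""
    if audit_year < FIRST_BASELINE_YEAR:
        return FIRST_BASELINE_YEAR
    cycles_elapsed = (audit_year - FIRST_BASELINE_YEAR) // CYCLE_LENGTH
    return FIRST_BASELINE_YEAR + cycles_elapsed * CYCLE_LENGTH
```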

How to test

  1. Bring up the FAC app locally with code from this branch.
  2. On a terminal, in the backend folder, run `docker compose run web python manage.py test support.test_cog_over`.

Expected test result

[Screenshot: expected test output, 2024-11-27 11:22 AM]

Advanced Test (if interested)

  1. Pull FAC/main.
  2. Bring up the FAC app locally with code from this branch.
  3. Load public data into the local dissemination_general and dissemination_federalaward tables.
  4. Truncate the local support_cognizantbaseline and support_cognizantassignment tables (see the sketch after this list).
  5. On a terminal, in the backend folder, run `docker compose run web python manage.py check_cog_over_for_year --year 2022`. Note the end result.
  6. Truncate the local support_cognizantbaseline and support_cognizantassignment tables again.
  7. Pull FAC/sk/cog_rebase.
  8. On a terminal, in the backend folder, run `docker compose run web python manage.py check_cog_over_for_year --year 2022`. Note the end result.
  9. The result from step 5 should match the result from step 8.
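
One way to clear the two support tables (steps 4 and 6) is from a Django shell inside the web container. The model names below are inferred from the table names and Django's app_modelname convention, so treat this as a sketch rather than the canonical procedure:

```python
# Run via: docker compose run web python manage.py shell
# Deletes all rows -- the ORM equivalent of emptying the tables. A raw SQL
# TRUNCATE via manage.py dbshell would also work.
from support.models import CognizantBaseline, CognizantAssignment

CognizantBaseline.objects.all().delete()
CognizantAssignment.objects.all().delete()
```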

PR Checklist: Submitter

  • Link to an issue if possible. If there’s no issue, describe what your branch does. Even if there is an issue, a brief description in the PR is still useful.
  • List any special steps reviewers have to follow to test the PR. For example, adding a local environment variable, creating a local test file, etc.
  • For extra credit, submit a screen recording like this one.
  • Make sure you’ve merged main into your branch shortly before creating the PR. (You should also be merging main into your branch regularly during development.)
  • Make sure you’ve accounted for any migrations. When you’re about to create the PR, bring up the application locally and then run `git status | grep migrations`. If there are any results, you probably need to add them to the branch for the PR. Your PR should have only one new migration file for each of the component apps, except in rare circumstances; you may need to delete some and re-run `python manage.py makemigrations` to reduce the number to one. (Also, unless in exceptional circumstances, your PR should not delete any migration files.)
  • Make sure that whatever feature you’re adding has tests that cover the feature. This includes test coverage to make sure that the previous workflow still works, if applicable.
  • Make sure the `full-submission.cy.js` Cypress test passes, if applicable.
  • Do manual testing locally. Our tests are not good enough yet to allow us to skip this step. If that’s not applicable for some reason, check this box.
  • Verify that no Git surgery was necessary, or, if it was necessary at any point, repeat the testing after it’s finished.
  • Once a PR is merged, keep an eye on it until it’s deployed to dev, and do enough testing on dev to verify that it deployed successfully, the feature works as expected, and the happy path for the broad feature area (such as submission) still works.
  • Ensure that prior to merging, the working branch is up to date with main and the terraform plan is what you expect.

PR Checklist: Reviewer

  • Pull the branch to your local environment and run `make docker-clean; make docker-first-run && docker compose up`; then run `docker compose exec web /bin/bash -c "python manage.py test"`.
  • Manually test out the changes locally, or check this box to verify that it wasn’t applicable in this case.
  • Check that the PR has appropriate tests. Look out for changes in HTML/JS/JSON Schema logic that may need to be captured in Python tests even though the logic isn’t in Python.
  • Verify that no Git surgery is necessary at any point (such as during a merge party), or, if it was, repeat the testing after it’s finished.

The larger the PR, the stricter we should be about these points.

Pre-Merge Checklist: Merger

  • Ensure that prior to approving, the terraform plan is what we expect it to be. `-/+ resource "null_resource" "cors_header"` should be destroying and recreating itself, and `~ resource "cloudfoundry_app" "clamav_api"` might be updating its sha256 for the fac-file-scanner and fac-av-${ENV} by default.
  • Ensure that the branch is up to date with main.
  • Ensure that a terraform plan has been recently generated for the pull request.

github-actions bot commented Nov 27, 2024

Terraform plan for meta

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.

📝 Plan generated in Pull Request Checks #3996

github-actions bot commented Nov 27, 2024

Terraform plan for dev

Error: Unknown Error

  with module.dev.module.newrelic.newrelic_nrql_alert_condition.infected_file_found,
  on ../shared/modules/newrelic/alerts.tf line 59, in resource "newrelic_nrql_alert_condition" "infected_file_found":
  59: resource "newrelic_nrql_alert_condition" "infected_file_found" {

❌ Failed to generate plan in Pull Request Checks #3996

phildominguez-gsa (Contributor) commented:

Lgtm. Is there a quick way to test that this works in the app itself?

rnovak338 (Contributor) commented:

> 2. `docker compose run web python manage.py test support.test_cog_over`

+1 to Phil - I was able to get successful test results myself.

gsa-suk (Contributor, Author) commented Dec 2, 2024

Added an advanced test.

github-actions bot commented Dec 2, 2024

Code Coverage

| Package | Line Rate | Branch Rate |
| --- | --- | --- |
| . | 100% | 100% |
| api | 98% | 90% |
| audit | 97% | 87% |
| audit.cross_validation | 98% | 88% |
| audit.fixtures | 84% | 50% |
| audit.intakelib | 90% | 81% |
| audit.intakelib.checks | 92% | 85% |
| audit.intakelib.common | 98% | 82% |
| audit.intakelib.transforms | 100% | 94% |
| audit.management.commands | 78% | 17% |
| audit.migrations | 100% | 100% |
| audit.models | 93% | 75% |
| audit.templatetags | 100% | 100% |
| audit.views | 60% | 39% |
| census_historical_migration | 96% | 65% |
| census_historical_migration.migrations | 100% | 100% |
| census_historical_migration.sac_general_lib | 92% | 84% |
| census_historical_migration.transforms | 95% | 90% |
| census_historical_migration.workbooklib | 68% | 69% |
| config | 76% | 31% |
| curation | 100% | 100% |
| curation.curationlib | 57% | 100% |
| curation.migrations | 100% | 100% |
| dissemination | 91% | 72% |
| dissemination.migrations | 97% | 25% |
| dissemination.searchlib | 74% | 64% |
| dissemination.templatetags | 100% | 100% |
| djangooidc | 53% | 38% |
| djangooidc.tests | 100% | 94% |
| report_submission | 93% | 88% |
| report_submission.migrations | 100% | 100% |
| report_submission.templatetags | 74% | 100% |
| support | 92% | 65% |
| support.management.commands | 96% | 100% |
| support.migrations | 100% | 100% |
| support.models | 97% | 83% |
| tools | 98% | 50% |
| users | 95% | 92% |
| users.fixtures | 100% | 83% |
| users.management | 100% | 100% |
| users.management.commands | 100% | 100% |
| users.migrations | 100% | 100% |
| **Summary** | **90% (17401 / 19228)** | **76% (2168 / 2850)** |


```python
def compute_cog_over(
    federal_awards, submission_status, auditee_ein, auditee_uei, audit_year
):
```

Contributor commented on this snippet:

This comment is not directly related to the current change, but since the PR modifies this function, I’d like to take the opportunity to clarify some points and potentially simplify the logic for improved readability.

While reviewing this function, I noticed that it checks for None or empty awards at line 36. For context, we compute cog_over as part of dissemination, and based on our current cross-validations, we do not allow reports with no awards to proceed. This makes the check at line 36 seem unnecessary to me.

If anything, I believe it would be better to throw an exception here to indicate an issue with the report (a sketch of that idea follows below). Since I wasn’t part of the discussions that led to this logic, do you recall any context or conversations around this decision? I also noticed a similar check on line 118.
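
A minimal sketch of the suggestion above. The exception type, message, and guard-function name are illustrative assumptions, not code from this PR or the FAC codebase:

```python
# Hypothetical guard that could replace the silent None/empty-awards check:
# fail loudly so a report with no awards surfaces as an error instead of
# being skipped.
def ensure_awards_present(federal_awards, auditee_uei, audit_year):
    if not federal_awards:
        raise ValueError(
            f"Report for UEI {auditee_uei}, audit year {audit_year} "
            "has no federal awards; cognizant/oversight cannot be computed."
        )
```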

sambodeme (Contributor) commented:

The code changes look good to me. After reviewing Matt's diagram and the overall cog_over logic, I think we could consider refactoring the code to align more closely with the diagram. While the current implementation does this to some extent, having the flow in the code mirror the diagram more precisely could make it easier to understand what the code is doing (or is supposed to do) when compared to the diagram. Going through this exercise may also require us to update the diagram (or the code) whenever necessary. This is unrelated to this PR and would be more of a nice-to-have improvement.

Labels: none yet · Projects: none yet

Development: successfully merging this pull request may close issue #4481, "Support multiple baseline years for cognizant assignments."

5 participants