
LNK-3007: Testing/Integration Testing - select/store data based on facility #594

Merged
4 commits merged on Dec 30, 2024

Conversation

@arianamihailescu (Contributor) commented Dec 30, 2024


🛠️ Description of Changes

Please provide a high-level overview of the changes included in this PR.

🧪 Testing Performed

Please describe the testing that was performed on the changes included in this PR.

📓 Documentation Updated

Please update any relevant sections in the project documentation that were impacted by the changes in the PR.

Summary by CodeRabbit

  • New Features

    • Introduced facility-specific Kafka consumer management.
    • Added support for facility-based message filtering and caching.
    • Created new Facility model to support facility identification.
  • Improvements

    • Enhanced Kafka consumer service with more robust message handling.
    • Updated integration testing endpoints to support facility-specific operations.
    • Improved Redis cache management with facility context.
  • Changes

    • Modified consumer creation and management methods to include facility parameter.
    • Updated endpoint method signatures to accept facility information.

coderabbitai bot (Contributor) commented Dec 30, 2024

Caution: Review failed. The pull request is closed.

Walkthrough

The pull request introduces facility-specific enhancements to Kafka consumer management across multiple components. The changes modify the KafkaConsumerManager, KafkaConsumerService, and IntegrationTestingEndpoints classes to support facility-based operations. A new Facility model is introduced to represent facility identifiers. The modifications enable more granular control over Kafka consumers, including facility-specific Redis cache management, message filtering, and endpoint interactions.

Changes

| File | Change Summary |
| --- | --- |
| DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerManager.cs | Updated consumer type from `IConsumer<Ignore, string>` to `IConsumer<string, string>`; added facility parameter to methods; introduced `ClearRedisCache` and `StopAllConsumers` methods |
| DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerService.cs | Updated consumer type and method signatures; added `extractFacility` method for parsing Kafka message keys; implemented facility-based message filtering |
| DotNet/LinkAdmin.BFF/Application/Models/Integration/Facility.cs | New class with `FacilityId` property; added XML documentation |
| DotNet/LinkAdmin.BFF/Presentation/Endpoints/IntegrationTestingEndpoints.cs | Updated endpoint method signatures to include `Facility`; changed some HTTP methods from `MapGet` to `MapPost` |

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| Update Link Admin integration endpoint functionality | ✅ | |
| Support facility-specific operations | ✅ | |

Suggested reviewers

  • edward-miller-lcg
  • amphillipsLGC

Poem

🐰 Hop, hop, through Kafka's stream,
Facility tags now reign supreme!
Consumers dance with grace and might,
Redis cache gleams oh-so bright!
Code hops forward, clean and clear! 🚀


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ab85af5 and 07247e8.

📒 Files selected for processing (2)
  • DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerManager.cs (2 hunks)
  • DotNet/LinkAdmin.BFF/Presentation/Endpoints/IntegrationTestingEndpoints.cs (4 hunks)


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerService.cs (2)

72-75: Storing updated correlation IDs in Redis
Storing the updated list back into Redis for each consumed message is a sensible approach. However, consider adding logging or instrumentation in case of Redis failures or slowdowns to aid troubleshooting.
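As a hedged illustration, the write could be wrapped like this, assuming StackExchange.Redis (`IDatabase`) and Newtonsoft.Json; the `{facility}:{topic}` key format and member names are assumptions, not the PR's code:

```csharp
// Sketch only: _cache is assumed to be a StackExchange.Redis IDatabase and
// _logger an ILogger; the key format below is illustrative.
try
{
    var key = $"{facility}:{topic}";
    await _cache.StringSetAsync(key, JsonConvert.SerializeObject(correlationIds));
}
catch (RedisException ex)
{
    // Logging here makes Redis outages and slowdowns traceable during tests.
    _logger.LogError(ex, "Failed to store correlation ids for facility {Facility}, topic {Topic}",
        facility, topic);
}
```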


95-123: Robust facility extraction logic
This method gracefully handles both valid JSON and plain string cases. As an enhancement, consider logging JSON parsing errors for better observability when malformed keys are encountered.
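For illustration, the JSON-or-plain-string handling could look like the sketch below; the actual `extractFacility` body is not reproduced here, and `FacilityId` as the key's JSON property name is an assumption:

```csharp
using Newtonsoft.Json.Linq;

// Sketch: try to parse the Kafka message key as JSON; if that fails, treat
// the raw key as the facility id. _logger is an assumed ILogger field.
private string? ExtractFacility(string messageKey)
{
    if (string.IsNullOrWhiteSpace(messageKey))
        return null;

    try
    {
        var json = JObject.Parse(messageKey);
        return json["FacilityId"]?.ToString(); // assumed property name
    }
    catch (Newtonsoft.Json.JsonReaderException ex)
    {
        // Logging the parse failure, per the suggestion above, improves
        // observability when malformed keys are encountered.
        _logger.LogDebug(ex, "Key {Key} is not JSON; treating it as a plain facility id", messageKey);
        return messageKey;
    }
}
```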

DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerManager.cs (1)

60-77: ClearRedisCache: Potential logging improvements
Clearing all facility-specific keys is correct for a fresh start. Consider logging which keys have been removed to aid in debugging.
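A possible shape for that logging, assuming a StackExchange.Redis `IConnectionMultiplexer` and a `{facilityId}:*` key pattern (both assumptions about the implementation):

```csharp
// Sketch: enumerate and delete facility-scoped keys, logging each removal.
// Keys() scans incrementally, so it avoids blocking Redis the way KEYS would.
var server = _connection.GetServer(_connection.GetEndPoints().First());
foreach (var key in server.Keys(pattern: $"{facilityId}:*"))
{
    _database.KeyDelete(key);
    _logger.LogInformation("Cleared Redis key {Key} for facility {FacilityId}", key, facilityId);
}
```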

DotNet/LinkAdmin.BFF/Presentation/Endpoints/IntegrationTestingEndpoints.cs (1)

131-133: CreateConsumersRequested
Explicitly passing facility.FacilityId clarifies scope. Consider returning an explicit response or success code to indicate that consumers have started successfully.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 96d6fce and ab85af5.

📒 Files selected for processing (4)
  • DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerManager.cs (2 hunks)
  • DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerService.cs (4 hunks)
  • DotNet/LinkAdmin.BFF/Application/Models/Integration/Facility.cs (1 hunks)
  • DotNet/LinkAdmin.BFF/Presentation/Endpoints/IntegrationTestingEndpoints.cs (4 hunks)
🔇 Additional comments (12)
DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerService.cs (3)

6-7: New references for JSON and Regex
The addition of Newtonsoft.Json.Linq and System.Text.RegularExpressions looks appropriate for parsing Kafka message keys and filtering by facility.


27-27: Added ‘facility’ parameter to StartConsumer
Including a facility parameter is a good approach to distinguish or skip messages from non-target facilities. Ensure that all invocations of this method are updated to supply the correct facility to avoid consumer mismatches.


48-59: Facility-based message filtering and Redis key handling
Skipping messages that don’t match the target facility appears correct. Confirm the approach meets functional requirements (e.g., do we need to log or handle these skipped messages differently?). Also verify concurrency scenarios for Redis access during high throughput.
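As a sketch of the skip-on-mismatch behavior using Confluent.Kafka types (the service's real loop and helper names are not shown in this excerpt):

```csharp
// Sketch of a facility-filtered consume loop; ExtractFacility is the
// illustrative helper from above, not necessarily the PR's exact method.
while (!cancellationToken.IsCancellationRequested)
{
    var result = consumer.Consume(cancellationToken);
    var messageFacility = ExtractFacility(result.Message.Key);

    if (!string.Equals(messageFacility, facility, StringComparison.OrdinalIgnoreCase))
    {
        // If requirements call for visibility into filtered traffic,
        // log skipped messages here before continuing.
        continue;
    }

    // ... process the message and persist its correlation id ...
}
```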

DotNet/LinkAdmin.BFF/Application/Commands/Integration/KafkaConsumerManager.cs (4)

79-89: CreateAllConsumers: Redis clearing prior to consumer creation
It’s logical to clear cached correlation IDs before starting new consumers. Confirm that you really want to discard prior correlation data, as this might affect historical tracking.


Line range hint 94-117: CreateConsumer: Building IConsumer with facility
Switching to IConsumer<string, string> is consistent with the new facility-based message key usage. Exception handling within Task.Run (not shown) appears adequate, but consider whether additional fault handling or logging is required.
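For reference, a minimal construction consistent with that change; the group-id scheme and configuration source are assumptions:

```csharp
// Sketch: build a facility-scoped consumer with string keys so the facility
// can be read from each message key (matching IConsumer<string, string>).
var config = new ConsumerConfig
{
    BootstrapServers = _kafkaSettings.BootstrapServers, // assumed settings object
    GroupId = $"integration-test-{facility}",           // assumed naming scheme
    AutoOffsetReset = AutoOffsetReset.Earliest
};

IConsumer<string, string> consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe(topic);
```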


121-134: readAllConsumers: Retrieving correlation IDs from Redis
Reading correlation IDs per topic is clear. If a Redis value is null or missing, ensure no unexpected exceptions occur. Also consider a fallback to an empty list or a default placeholder to better handle missing data.
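One way to realize the suggested fallback, again assuming StackExchange.Redis and Newtonsoft.Json with an illustrative key format:

```csharp
// Sketch: treat a missing or empty Redis value as an empty list instead of
// letting deserialization throw on null.
var value = await _cache.StringGetAsync($"{facility}:{topic}");
var correlationIds = value.IsNullOrEmpty
    ? new List<string>()
    : JsonConvert.DeserializeObject<List<string>>(value.ToString()) ?? new List<string>();
```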


139-152: StopAllConsumers: Cleaning up concurrency
Stopping all consumers and clearing Redis is an effective reset. However, verify if historical correlation IDs are needed post-stop for troubleshooting. If so, consider persisting them elsewhere.
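If the historical data is not needed, a stop-and-reset sequence along these lines would match the description; the cancellation-token bookkeeping is an assumption about how consumers are tracked:

```csharp
// Sketch: _cancellationTokens is an assumed Dictionary<string, CancellationTokenSource>
// mapping facility ids to their consumer loops.
public void StopAllConsumers()
{
    foreach (var (facility, cts) in _cancellationTokens)
    {
        cts.Cancel();              // each consume loop observes the token and exits
        ClearRedisCache(facility); // discard that facility's cached correlation ids
    }
    _cancellationTokens.Clear();
}
```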

DotNet/LinkAdmin.BFF/Application/Models/Integration/Facility.cs (1)

1-14: New Facility model
Defining a simple FacilityId property is a clean approach. For future enhancements, consider adding validations or constraints (e.g., non-null, length limitations) to prevent invalid facility identifiers.
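A sketch of such a model with illustrative DataAnnotations constraints; the namespace is inferred from the file path, and the attributes reflect the suggestion above rather than the PR's code:

```csharp
using System.ComponentModel.DataAnnotations;

namespace LinkAdmin.BFF.Application.Models.Integration;

/// <summary>
/// Identifies the facility a consumer or request is scoped to.
/// </summary>
public class Facility
{
    /// <summary>The unique identifier of the facility.</summary>
    [Required, StringLength(100)] // illustrative constraints
    public string FacilityId { get; set; } = string.Empty;
}
```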

DotNet/LinkAdmin.BFF/Presentation/Endpoints/IntegrationTestingEndpoints.cs (4)

93-95: POST mapping to start consumers
Switching to a POST endpoint and including facility context is consistent with the new design. Verify that clients know they must provide a Facility payload in the request body.
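A minimal-API sketch of the described mapping; `MapPost` and the `Facility` body parameter come from the PR summary, while the route string and handler wiring are assumptions (the explicit success code also addresses the `CreateConsumersRequested` note above):

```csharp
// Sketch: minimal-API POST endpoint binding a Facility from the request body.
app.MapPost("/integration/consumers", (Facility facility, KafkaConsumerManager manager) =>
{
    manager.CreateAllConsumers(facility.FacilityId); // assumed signature
    return Results.Accepted(); // explicit success code instead of an implicit 200
});
```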


104-104: POST mapping to read consumers
Using POST for retrieving consumer states is unconventional but can be valid if an input payload is needed. Ensure this aligns with your API design standards.


115-115: POST mapping to stop consumers
Uniformly using POST is coherent. Confirm that correlation or usage data isn’t prematurely wiped if multiple tests run concurrently.


137-139: ReadConsumersRequested
Reading Redis for the specified facility ensures isolation per facility. As a safeguard, consider handling empty or null FacilityId strings.
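A small guard along the suggested lines (sketch):

```csharp
// Reject requests whose facility id is missing or blank before touching Redis.
if (string.IsNullOrWhiteSpace(facility?.FacilityId))
    return Results.BadRequest("FacilityId must be provided.");
```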
