
COG-989 feat: make tasks a configurable argument in the cognify function #442

Merged

4 commits merged into dev on Jan 17, 2025

Conversation

@lxobr (Collaborator) commented Jan 15, 2025

Summary by CodeRabbit

  • New Features

    • Enhanced cognify function with optional custom task list support.
    • Added ability to specify and validate custom tasks during data processing.
    • Introduced default task generation mechanism when no tasks are provided.
  • Improvements

    • Updated function signatures to provide more flexibility in task management.
    • Improved error handling and telemetry for task creation.

@coderabbitai bot (Contributor) commented Jan 15, 2025

Walkthrough

The changes modify the cognify function in the cognify_v2.py file to introduce a new tasks parameter, allowing users to specify custom task lists for processing datasets. A new helper function get_default_tasks is added to generate default tasks when none are provided. The run_cognify_pipeline function is updated to accept and validate the tasks parameter, replacing the previous hardcoded task list approach.

Changes

File: cognee/api/v1/cognify/cognify_v2.py
  • Added a tasks: list[Task] = None parameter to the cognify function
  • Updated run_cognify_pipeline to require a tasks parameter
  • Introduced a new get_default_tasks async function to generate default tasks

Possibly related issues

  • topoteretes/cognee#441
    • Directly addresses the need to make tasks a configurable argument in the cognify function.
    • Matches the implementation of adding a get_default_tasks function and making tasks optional.

Poem

🐰 Hop, hop, through code so bright,
Tasks now dance with pure delight!
Flexible pipelines, no more the same,
Cognify's magic plays a new game!
Rabbit's code leaps with glee today! 🚀


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fb8ce21 and 4c00b0d.

📒 Files selected for processing (1)
  • cognee/api/v1/cognify/cognify_v2.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cognee/api/v1/cognify/cognify_v2.py
⏰ Context from checks skipped due to timeout of 90000ms (18)
  • GitHub Check: run_multimedia_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_dynamic_steps_example_test / test
  • GitHub Check: test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: docker-compose-test
  • GitHub Check: profiler


@lxobr lxobr self-assigned this Jan 15, 2025
@lxobr lxobr requested a review from alekszievr January 15, 2025 11:49
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
cognee/api/v1/cognify/cognify_v2.py (4)

39-39: Use List from typing module for better type safety.

Consider using List[Task] instead of list[Task] for better compatibility with older Python versions and static type checkers.

-    tasks: list[Task] = None,
+    tasks: List[Task] = None,

71-71: Track the UI lock TODO comment.

There's an unaddressed TODO comment about adding a UI lock to prevent multiple backend requests. This should be tracked for future implementation.

Would you like me to create a GitHub issue to track this TODO item?


101-106: Enhance task validation with more descriptive error messages.

The validation logic could be improved with more descriptive error messages and moved to a separate function for reusability.

Consider refactoring to:

+def validate_tasks(tasks: List[Task]) -> None:
+    """Validate that tasks is a list of Task instances."""
+    if not isinstance(tasks, list):
+        raise ValueError(f"Expected tasks to be a list, got {type(tasks)}")
+    
+    for idx, task in enumerate(tasks):
+        if not isinstance(task, Task):
+            raise ValueError(
+                f"Task at index {idx} is not a Task instance: {type(task)}"
+            )

 async def run_cognify_pipeline(dataset: Dataset, user: User, tasks: list[Task]):
     # ...
-    if not isinstance(tasks, list):
-        raise ValueError("Tasks must be a list")
-
-    for task in tasks:
-        if not isinstance(task, Task):
-            raise ValueError(f"Task {task} is not an instance of Task")
+    validate_tasks(tasks)
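The suggested helper can be exercised on its own. A quick sketch, with a stand-in `Task` class replacing cognee's real one (names follow the diff above; this is illustrative, not the shipped code):

```python
from dataclasses import dataclass

@dataclass
class Task:  # stand-in for cognee's Task
    name: str

def validate_tasks(tasks) -> None:
    """Validate that tasks is a list of Task instances."""
    if not isinstance(tasks, list):
        raise ValueError(f"Expected tasks to be a list, got {type(tasks)}")
    for idx, task in enumerate(tasks):
        if not isinstance(task, Task):
            raise ValueError(f"Task at index {idx} is not a Task instance: {type(task)}")

validate_tasks([Task("classify")])  # a valid list passes silently
try:
    validate_tasks([Task("classify"), "oops"])
except ValueError as err:
    assert "index 1" in str(err)  # the error pinpoints the offending element
```

Reporting the index of the bad element, rather than only the element itself, makes long pipelines much easier to debug.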

143-167: Add docstring and consider task configuration validation.

The function would benefit from documentation explaining the default task sequence and its purpose. Also, consider validating task configurations.

Add a docstring and enhance error handling:

 async def get_default_tasks(
     user: User = None, graph_model: BaseModel = KnowledgeGraph
 ) -> list[Task]:
+    """Generate the default sequence of tasks for the cognify pipeline.
+    
+    The default sequence includes:
+    1. Document classification
+    2. Permission checking
+    3. Chunk extraction
+    4. Graph extraction
+    5. Text summarization
+    
+    Args:
+        user: The user context for permission checking. Defaults to default user.
+        graph_model: The graph model to use. Defaults to KnowledgeGraph.
+    
+    Returns:
+        List[Task]: The default sequence of tasks.
+    
+    Raises:
+        ConfigError: If cognify configuration is invalid
+        ValueError: If task configuration is invalid
+    """
     if user is None:
         user = await get_default_user()

     try:
         cognee_config = get_cognify_config()
+        # Validate configuration
+        if not cognee_config.summarization_model:
+            raise ValueError("Summarization model not configured")
+
         default_tasks = [
             Task(classify_documents),
             Task(check_permissions_on_documents, user=user, permissions=["write"]),
             Task(extract_chunks_from_documents),
             Task(
                 extract_graph_from_data,
                 graph_model=graph_model,
                 task_config={"batch_size": 10}
             ),
             Task(
                 summarize_text,
                 summarization_model=cognee_config.summarization_model,
                 task_config={"batch_size": 10},
             ),
         ]
-    except Exception as error:
+    except (ConfigError, ValueError) as error:
         send_telemetry("cognee.cognify DEFAULT TASKS CREATION ERRORED", user.id)
         raise error
     return default_tasks
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6653d73 and fb8ce21.

📒 Files selected for processing (1)
  • cognee/api/v1/cognify/cognify_v2.py (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: docker-compose-test
🔇 Additional comments (2)
cognee/api/v1/cognify/cognify_v2.py (2)

59-61: LGTM! Good handling of default tasks.

The code properly handles the case when tasks are not provided by fetching default tasks.


39-39: Verify all callers are updated for the signature changes.

The signatures of both cognify and run_cognify_pipeline have changed. Let's verify all callers are updated.

Also applies to: 71-71

✅ Verification successful

All callers are compatible with the signature changes

The tasks parameter is optional with a default value of None, maintaining backward compatibility with all existing callers. No updates to the calling code are required.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for direct calls to these functions
echo "Searching for cognify function calls..."
rg -A 2 "cognify\(" --type py

echo "Searching for run_cognify_pipeline function calls..."
rg -A 2 "run_cognify_pipeline\(" --type py

Length of output: 4347

@gitguardian bot commented Jan 16, 2025

⚠️ GitGuardian has uncovered 2 secrets following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secrets in your pull request
GitGuardian id | Status | Secret | Commit | Filename
9573981 | Triggered | Generic Password | de85dfa | notebooks/cognee_graphiti_demo.ipynb
8719688 | Triggered | Generic Password | de85dfa | notebooks/cognee_graphiti_demo.ipynb
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secrets safely. Learn here the best practices.
  3. Revoke and rotate these secrets.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.


@lxobr lxobr merged commit 65a0c98 into dev Jan 17, 2025
23 of 25 checks passed
@lxobr lxobr deleted the COG-989-cognify-tasks-arguments branch January 17, 2025 09:21