QP sidebar filters to active slice for group datasets #5177
Conversation
Walkthrough

The pull request introduces several enhancements across multiple files, primarily focusing on the integration of new input fields in the GraphQL schema, improvements to the server's lightning query handling for group datasets, and corresponding test updates.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (5)
fiftyone/server/lightning.py (1)
331-333: Improve error handling for distinct query failures

The current try-except block catches all exceptions without specific handling or logging.
```diff
     try:
         result = await collection.distinct(query.path, filter)
-    except:
+    except Exception as e:
+        logger.warning(f"Distinct query failed for path {query.path}: {str(e)}")
         # too many results
         return None
```

Also applies to: 340-340
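Note that the `logger.warning` call above assumes a module-level logger is available; if lightning.py does not already define one, a minimal sketch would be:

```python
# Assumed module-level logger for the warning above; this is a sketch and
# lightning.py may already define an equivalent
import logging

logger = logging.getLogger(__name__)
```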
app/schema.graphql (1)
447-447: Consider adding schema documentation

While the implementation is solid, consider adding GraphQL descriptions to document the purpose and usage of the new `slice` field. This would improve schema discoverability and maintainability.

Example addition:

```diff
 input LightningInput {
   dataset: String!
   paths: [LightningPathInput!]!
+  """
+  Filter results to a specific slice of a group dataset.
+  When null, no slice filtering is applied.
+  """
   slice: String = null
 }
```

tests/unittests/lightning_tests.py (3)
1056-1139: Consider adding edge cases to the test suite

The test implementation looks good but could be more comprehensive. Consider adding test cases for the following (a sketch follows the list):
- Empty slice names
- Non-existent slice names
- Multiple samples in the same slice
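A minimal sketch of a fixture that could exercise the latter two cases, assuming FiftyOne's group-dataset API; the field name and slice layout are illustrative and are not the PR's actual tests:

```python
import fiftyone as fo

# Illustrative group dataset: one "left" sample and two "right" samples
dataset = fo.Dataset()
dataset.add_group_field("group", default="left")
dataset.add_samples(
    [
        fo.Sample(filepath="l.png", group=fo.Group().element("left"), int_field=1),
        fo.Sample(filepath="r1.png", group=fo.Group().element("right"), int_field=2),
        fo.Sample(filepath="r2.png", group=fo.Group().element("right"), int_field=3),
    ]
)

# Multiple samples in the same slice: both values should appear when the
# query is scoped to "right"
dataset.group_slice = "right"
assert dataset.distinct("int_field") == [2, 3]

# A non-existent slice name produces a filter that matches nothing, so a
# scoped query should return no values rather than raising
```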
Line range hint 1155-1170: Add type hints and docstring for the new parameters

The function implementation looks good, but could benefit from better documentation.
Consider adding a docstring and improving type hints:
```diff
 async def _execute(
     query: str,
     dataset: fo.Dataset,
     field: fo.Field,
     keys: t.Set[str],
-    frames=True,
-    slice: t.Optional[str] = None,
+    frames: bool = True,
+    slice: t.Optional[str] = None,
 ):
+    """Execute a GraphQL query on the dataset.
+
+    Args:
+        query: The GraphQL query string
+        dataset: The FiftyOne dataset
+        field: The field type to query
+        keys: Set of field keys to include
+        frames: Whether to include frame fields
+        slice: Optional slice name to filter the dataset
+
+    Returns:
+        The query execution result
+    """
```
1178-1194: Add docstring for the _get_paths function

The function implementation is correct, but would benefit from documentation.
Consider adding a docstring:
```diff
 def _get_paths(
     dataset: fo.Dataset,
     field_type: t.Type[fo.Field],
     keys: t.Set[str],
     frames: bool = True,
 ):
+    """Get the field paths for the dataset.
+
+    Args:
+        dataset: The FiftyOne dataset
+        field_type: The type of field to include
+        keys: Set of field keys to include
+        frames: Whether to include frame fields
+
+    Returns:
+        List of LightningPathInput objects
+    """
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
app/packages/relay/src/queries/__generated__/lightningQuery.graphql.ts is excluded by `!**/__generated__/**`
📒 Files selected for processing (5)
- app/packages/state/src/recoil/queryPerformance.ts (3 hunks)
- app/schema.graphql (1 hunks)
- docs/source/user_guide/app.rst (0 hunks)
- fiftyone/server/lightning.py (5 hunks)
- tests/unittests/lightning_tests.py (3 hunks)
💤 Files with no reviewable changes (1)
- docs/source/user_guide/app.rst
🧰 Additional context used
📓 Path-based instructions (1)
app/packages/state/src/recoil/queryPerformance.ts (1)
Pattern `**/*.{ts,tsx}`: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.
🔇 Additional comments (7)
app/packages/state/src/recoil/queryPerformance.ts (2)
14-14: LGTM!
The import is correctly placed and necessary for the new group slice functionality.
38-38: Verify GraphQL schema compatibility

The addition of the `slice` parameter looks good and aligns with the group dataset filtering objectives.
Let's verify the GraphQL schema compatibility:
✅ Verification successful
GraphQL schema compatibility verified
The `slice` parameter is properly defined in the GraphQL schema. The `LightningInput` type includes an optional `slice: String` field, which aligns with the addition in the code.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the GraphQL schema includes the slice field in LightningInput type
# Check if the schema file exists and contains the required field
rg -U "type LightningInput \{(\n.*?)*slice:" app/schema.graphql
Length of output: 65
Script:
#!/bin/bash
# Let's first find the GraphQL schema files in the repository
fd "schema.*graphql"
# Then let's search for LightningInput type definition in any GraphQL files
rg -U "type\s+LightningInput\s*\{[^}]*\}" --type graphql
# Also search for any input type that has a slice field
rg -U "input\s+\w+\s*\{[^}]*slice:[^}]*\}" --type graphql
# Let's also check the TypeScript interface/type definitions related to Lightning
rg -U "interface\s+Lightning|type\s+Lightning" --type typescript
Length of output: 590
Script:
#!/bin/bash
# Let's check the content of the main schema file
cat app/schema.graphql
# Also search for any input types with slice field without specifying file type
rg -U "input\s+\w+\s*\{[^}]*slice:[^}]*\}"
# Search for Lightning related types without specifying file type
rg -U "type\s+Lightning|input\s+Lightning"
Length of output: 17277
fiftyone/server/lightning.py (4)
48-48: LGTM: New optional slice field added correctly
The field is properly typed as Optional[str] with an appropriate default value of None.
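For context, a minimal sketch of what the input type looks like with the new field, assuming the strawberry-style input classes used by the FiftyOne server; the `LightningPathInput` shown here is a simplified stand-in, not the file's actual contents:

```python
import typing as t

import strawberry as gql


@gql.input
class LightningPathInput:
    # Simplified stand-in for the real path input type
    path: str


@gql.input
class LightningInput:
    dataset: str
    paths: t.List[LightningPathInput]
    slice: t.Optional[str] = None  # restricts results to one group slice when set
```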
302-302: LGTM: Filter parameter properly integrated
The filter parameter is correctly typed and properly passed through the async query chain.
Also applies to: 306-306
366-366: LGTM: Pipeline filter integration is correct
The filter is properly integrated into the pipeline while maintaining the existing functionality.
Also applies to: 369-371
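For intuition, a hedged sketch of what prepending a slice filter to an aggregation pipeline does; the stage values are illustrative and not the actual lightning.py pipeline:

```python
# A slice filter prepended as a $match stage, so every downstream stage
# only sees documents from the requested group slice
filter = {"group.name": "left"}
pipeline = [{"$sort": {"_id": 1}}, {"$limit": 1}]

if filter:
    pipeline.insert(0, {"$match": filter})

assert pipeline[0] == {"$match": {"group.name": "left"}}
```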
141-147: 🛠️ Refactor suggestion
Consider validating the slice value against available group names
While the filter construction is correct, there's no validation to ensure that the provided slice value exists in the dataset's groups. This could lead to empty results without any error indication.
Let's verify the available group names in the dataset:
Consider adding validation:
```diff
 filter = (
-    {f"{dataset.group_field}.name": input.slice}
-    if dataset.group_field and input.slice
-    else None
+    {f"{dataset.group_field}.name": input.slice}
+    if dataset.group_field and input.slice and input.slice in dataset.distinct(f"{dataset.group_field}.name")
+    else None
 )
```
✅ Verification successful
Let me gather more context about the validation of group names by checking the implementation of the `distinct()` method and its usage.

Let me check one more thing about the `distinct()` method implementation and its usage in the context of group fields.

Let me try one final search to understand the `distinct()` method implementation.

The suggested validation using `distinct()` is appropriate and safe

The `distinct()` method is implemented in the `SampleCollection` class and is designed to handle field expressions safely, including nested fields like `{group_field}.name`. The method ignores `None` values by default, making it suitable for validating group names without risk of errors.
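As an alternative to the extra `distinct()` aggregation, a hedged sketch of the same validation using the dataset's `group_slices` property; the `build_slice_filter` helper is hypothetical and not part of the PR:

```python
import typing as t

import fiftyone as fo


def build_slice_filter(
    dataset: fo.Dataset, slice_name: t.Optional[str]
) -> t.Optional[dict]:
    """Returns a MongoDB filter for the given group slice, or None if the
    dataset is not grouped or the slice name is unknown."""
    if not dataset.group_field or not slice_name:
        return None

    # group_slices lists the dataset's known slice names (None for
    # non-group datasets)
    if slice_name not in set(dataset.group_slices or []):
        return None

    return {f"{dataset.group_field}.name": slice_name}
```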
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for group validation logic in the codebase
# Look for references to group validation or group name retrieval
rg -g '!*.pyc' -g '!*test*' "group.*names|available.*groups"
Length of output: 1175
Script:
#!/bin/bash
# Search for distinct method implementation and its usage with group fields
ast-grep --pattern 'def distinct'
# Also check for any existing validation of group fields
rg -g '!*.pyc' "group_field.*valid|validate.*group" -A 3
Length of output: 4233
Script:
#!/bin/bash
# Search for distinct method implementation in dataset or collection classes
rg -g '!*.pyc' "def distinct.*group" -A 5
# Also check for any existing usage of distinct with group fields
rg -g '!*.pyc' "distinct.*group.*name" -A 2
Length of output: 89
Script:
#!/bin/bash
# Search for distinct method implementation in dataset or collection classes
rg -g '!*.pyc' "def distinct" -A 5
# Also check for any MongoDB-related distinct operations
rg -g '!*.pyc' "mongo.*distinct" -A 2
Length of output: 505
app/schema.graphql (1)
447-447: LGTM! Verify integration with group dataset queries

The addition of the optional `slice` field to `LightningInput` is well-structured and maintains backward compatibility. This enhancement aligns with the PR's objective of improving group dataset filtering capabilities.
Let's verify the integration with group dataset queries:
✅ Verification successful
Integration verified - slice parameter is properly handled
The verification shows proper integration of the `slice` parameter:

- Server-side mutation handling is implemented in mutation.py with the `set_group_slice` mutation
- Client-side integration exists in state management (useSetModalState.ts)
- The parameter is used consistently for group dataset filtering and slice management
- Implementation includes proper handling of 3D/2D slice transitions in group datasets
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent usage of the new slice parameter across the codebase
# Search for LightningInput usage to ensure proper handling of the new slice parameter
ast-grep --pattern 'new LightningInput($$$)'
# Search for references to group dataset filtering with slice
rg -l 'group.*dataset.*slice|slice.*group.*dataset'
Length of output: 740
Script:
#!/bin/bash
# Let's check the implementation details around the slice parameter usage
# Check mutation handling of slice parameter
rg -A 5 "slice.*LightningInput" app/
# Look for GraphQL query definitions using slice
rg -A 5 "slice.*group.*dataset" app/
# Check server-side handling in mutation.py
cat fiftyone/server/mutation.py
Length of output: 14753
```js
console.log(samples);
```
Remove debugging console.log statement
Debug logging should not be committed to production code.
Apply this diff to remove the debugging statement:
```diff
-    console.log(samples);
-
```
```diff
@@ -306,25 +313,31 @@ async def _do_async_query(
     dataset: fo.Dataset,
     collection: AsyncIOMotorCollection,
     query: t.Union[DistinctQuery, t.List[t.Dict]],
     filter: t.Optional[t.Mapping[str, str]],
```
🛠️ Refactor suggestion
Add type checking before modifying the query list
The current implementation assumes query is always a list when applying the filter. Add type checking to ensure safe operation.
```diff
     if filter:
+        if not isinstance(query, list):
+            raise TypeError("Expected query to be a list for filter application")
         query.insert(0, {"$match": filter})
```
Also applies to: 324-325
What changes are proposed in this pull request?
Screen.Recording.2024-11-21.at.8.51.24.PM.mov
How is this patch tested? If it is not, please explain why.
Lightning server tests
What areas of FiftyOne does this PR affect?
fiftyone - Python library changes

Summary by CodeRabbit
Release Notes
New Features
- New `slice` field in the `LightningInput` class for enhanced dataset querying.

Bug Fixes

Documentation

Tests