1.18 ports5 #7209
Conversation
…ed field has an empty list of possible values (#7160)
…7156)
  * Allow to convert Map toList in the same way as List is converted toMap
  * Updated snapshots
  * tests fix
  Co-authored-by: arkadius <[email protected]>
* Fix assignUserFriendlyEditor casting parameters editor to StringEditor
  * Add tests
  * Add changelog entry
  * Simplify type check
  * Fix cypress test
* NU-1882 rename Add list item to "Suggested values" and "Possible values"
* NU-1891 remove autosuggestion from markdown field
…#7182)
  * NU-1889 provide an unique validation message to the scenario label
…s mixing different types of elements (#7187)
  * [NU-1877] Fix "Failed to get node validation" when using literal lists mixing different types of elements
  * test fixes + changelog/migration guide changes
  * Update docs/MigrationGuide.md
  Co-authored-by: Mateusz Słabek <[email protected]>
* NU-1892 add missing tooltips
#7192)
  * [NU-1846] Fix "Failed to get node validation" when opening node details referencing non-existing component
  * review fix
…ails when only one category is available (#7183)
  * NU-1890 hide categories from a scenarios list and more scenario details when only one category is available
…ode details for referencing non-existing fragment (#7190)
  * [NU-1893] Fix "Failed to get node validation" when opening fragment node details for referencing non-existing fragment
  * tests fix
📝 Walkthrough
The pull request introduces several modifications across multiple components in the Nussknacker codebase. Key changes include the renaming of the …
Possibly related PRs
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (78)
designer/submodules/packages/components/src/scenarios/list/scenarioAvatar.tsx (1)
9-15: Consider performance and maintainability improvements
The i18n implementation looks good, but consider these enhancements:
- Extract translation keys as constants for better maintainability
- Memoize the component since it's likely rendered multiple times in a list
Here's a suggested implementation:
```diff
 import { useTranslation } from "react-i18next";
+
+const TRANSLATION_KEYS = {
+    FRAGMENT: "scenariosList.tooltip.fragment",
+    SCENARIO: "scenariosList.tooltip.scenario",
+} as const;

-export function ScenarioAvatar({ scenario }: { scenario: Pick<Scenario, "isFragment" | "state"> }) {
+export const ScenarioAvatar = React.memo(function ScenarioAvatar({
+    scenario
+}: {
+    scenario: Pick<Scenario, "isFragment" | "state">
+}) {
     const { t } = useTranslation();
     const { isFragment } = scenario;
     return (
         <TableCellAvatar
-            title={isFragment ? t("scenariosList.tooltip.fragment", "Fragment") : t("scenariosList.tooltip.scenario", "Scenario")}
+            title={isFragment ? t(TRANSLATION_KEYS.FRAGMENT, "Fragment") : t(TRANSLATION_KEYS.SCENARIO, "Scenario")}
         >
             {isFragment ? <FragmentIcon width={"1em"} height={"1em"} /> : <ScanarioIcon width={"1em"} height={"1em"} />}
         </TableCellAvatar>
     );
-}
+});
```
components-api/src/main/scala/pl/touk/nussknacker/engine/api/json/FromJsonDecoder.scala (3)
10-23: Add comprehensive ScalaDoc documentation
The method would benefit from detailed documentation explaining:
- Purpose and usage
- Return type possibilities
- Why Java collections are used
- Example conversions
Add documentation like this:
```diff
+/**
+ * Converts a circe Json value to a corresponding JVM type.
+ *
+ * @param json The JSON value to convert
+ * @return One of:
+ *         - null for JSON null
+ *         - Boolean for JSON boolean
+ *         - Integer/Long/BigDecimal for JSON numbers (using most memory-efficient type)
+ *         - String for JSON strings
+ *         - java.util.List for JSON arrays
+ *         - java.util.Map for JSON objects
+ *
+ * Example:
+ * {{{
+ * val json = Json.obj("name" -> Json.fromString("test"), "value" -> Json.fromInt(42))
+ * val result = FromJsonDecoder.jsonToAny(json) // Returns Java Map with String and Integer
+ * }}}
+ */
 def jsonToAny(json: Json): Any = json.fold(
```
13-19: Improve number handling and error messages
The number handling logic could be improved in two ways:
- The error message should include more context about why the number isn't supported
- Consider creating a custom exception type for better error handling
Consider this improvement:
```diff
- getOrElse (throw new IllegalArgumentException(s"Not supported json number: $jsonNumber")),
+ getOrElse {
+   throw new JsonDecodingException(
+     s"Cannot decode JSON number '$jsonNumber' to a supported numeric type. " +
+     "The number might be too large or have an unsupported format."
+   )
+ },
```
And add this exception class:
```scala
case class JsonDecodingException(message: String) extends RuntimeException(message)
```
21-22: Consider making Java collection conversion optional
The current implementation always converts collections to Java types, which might not be optimal for all use cases. Consider adding a parameter to control this behavior.
Consider modifying the method to accept a parameter for collection type preference:
```diff
- def jsonToAny(json: Json): Any = json.fold(
+ def jsonToAny(json: Json, useJavaCollections: Boolean = true): Any = {
+   def convertCollection[T](scalaCol: Iterable[T]): Any =
+     if (useJavaCollections) scalaCol.asJava else scalaCol
+
+   json.fold(
     jsonNull = null,
     jsonBoolean = identity[Boolean],
     jsonNumber = /* ... */,
     jsonString = identity[String],
-    jsonArray = _.map(jsonToAny).asJava,
-    jsonObject = _.toMap.mapValuesNow(jsonToAny).asJava
+    jsonArray = arr => convertCollection(arr.map(j => jsonToAny(j, useJavaCollections))),
+    jsonObject = obj => convertCollection(obj.toMap.mapValuesNow(j => jsonToAny(j, useJavaCollections)))
   )
+ }
```
designer/client/src/components/graph/node-modal/editors/field/MarkdownFormControl.tsx (1)
18-18: Consider making autocompletion configurable
Instead of hardcoding `enableLiveAutocompletion` to `false`, consider making it configurable through props to allow more flexibility.
```diff
- enableLiveAutocompletion={false}
+ enableLiveAutocompletion={props.enableLiveAutocompletion ?? false}
```
And update the props type:
```typescript
type MarkdownFormControlProps = Omit<FieldProps, "type"> &
    PropsWithChildren<{
        value: string;
        onChange: (value: string) => void;
        enableLiveAutocompletion?: boolean;
    }>;
```
components-api/src/test/scala/pl/touk/nussknacker/engine/api/json/FromJsonDecoderTest.scala (1)
10-19: Consider expanding test coverage for comprehensive number handling
The current test cases effectively verify the "narrow type" selection for positive numbers and max values. However, consider adding these scenarios for more robust testing:
- Negative numbers
- Decimal precision cases
- Error cases (e.g., number overflow)
- More descriptive test name specifying the exact behavior being tested
Here's a suggested expansion of the test cases:
- test("json number decoding pick the narrowest type") { + test("should decode JSON numbers to the narrowest possible Scala type while preserving value") { // Existing positive number cases FromJsonDecoder.jsonToAny(Json.fromInt(1)) shouldBe 1 FromJsonDecoder.jsonToAny(Json.fromInt(Integer.MAX_VALUE)) shouldBe Integer.MAX_VALUE FromJsonDecoder.jsonToAny(Json.fromLong(Long.MaxValue)) shouldBe Long.MaxValue FromJsonDecoder.jsonToAny( Json.fromBigDecimal(java.math.BigDecimal.valueOf(Double.MaxValue)) ) shouldBe java.math.BigDecimal.valueOf(Double.MaxValue) val moreThanLongMaxValue = BigDecimal(Long.MaxValue) * 10 FromJsonDecoder.jsonToAny(Json.fromBigDecimal(moreThanLongMaxValue)) shouldBe moreThanLongMaxValue.bigDecimal + + // Negative number cases + FromJsonDecoder.jsonToAny(Json.fromInt(-1)) shouldBe -1 + FromJsonDecoder.jsonToAny(Json.fromInt(Integer.MIN_VALUE)) shouldBe Integer.MIN_VALUE + + // Decimal precision cases + FromJsonDecoder.jsonToAny(Json.fromBigDecimal(BigDecimal("0.1"))) shouldBe BigDecimal("0.1").bigDecimal + FromJsonDecoder.jsonToAny(Json.fromBigDecimal(BigDecimal("0.12345678901234567890"))) shouldBe + BigDecimal("0.12345678901234567890").bigDecimal } + + test("should handle error cases appropriately") { + // Add tests for number overflow scenarios + // Add tests for invalid number formats + }designer/client/src/components/toolbars/scenarioDetails/CategoryDetails.tsx (1)
7-12: Consider destructuring scenario prop for clarity.
While the current implementation is correct, destructuring the scenario prop could make the code more readable and prevent unnecessary object access.
Here's a suggested improvement:
```diff
-export const CategoryDetails = ({ scenario }: { scenario: Scenario }) => {
+export const CategoryDetails = ({
+    scenario: { processCategory, processingMode, engineSetupName },
+}: {
+    scenario: Scenario;
+}) => {
     const { t } = useTranslation();
     const { isAllCombinationsLoading, isCategoryFieldVisible } = useGetAllCombinations({
-        processCategory: scenario.processCategory,
-        processingMode: scenario.processingMode,
-        processEngine: scenario.engineSetupName,
+        processCategory,
+        processingMode,
+        processEngine: engineSetupName,
     });
```
designer/submodules/packages/components/src/common/labelChip.tsx (1)
31-31: Consider adding type safety for translations.
To prevent typos in translation keys and ensure type safety, consider using a type-safe translation library like `typesafe-i18n` or define a type for your translation keys.
Example implementation:
```typescript
// types/i18n.ts
export type TranslationKeys = {
    'scenariosList.tooltip.label': string;
    // ... other keys
};

// Then use it in your component
const { t } = useTranslation<keyof TranslationKeys>();
```
designer/client/src/components/useGetAllCombinations.ts (4)
5-9: Add JSDoc documentation for the Props interface.
Consider adding documentation to describe the purpose and requirements of each prop, which will improve the API's usability.
```diff
+/**
+ * Props for the useGetAllCombinations hook
+ * @property processCategory - The category of the process
+ * @property processingMode - The mode of processing
+ * @property processEngine - The engine used for processing
+ */
 interface Props {
     processCategory: string;
     processingMode: ProcessingMode;
     processEngine: string;
 }
```
10-13: Consider using a discriminated union for better state management.
The current implementation manages loading and data separately. Consider using a discriminated union to handle the different states (loading, success, error) more robustly.
```diff
-    const [allCombinations, setAllCombinations] = useState<ScenarioParametersCombination[]>([]);
-    const [engineSetupErrors, setEngineSetupErrors] = useState<Record<string, string[]>>({});
-    const [isAllCombinationsLoading, setIsAllCombinationsLoading] = useState<boolean>(false);
+    type State =
+        | { status: "loading" }
+        | { status: "error"; error: Error }
+        | { status: "success"; combinations: ScenarioParametersCombination[]; errors: Record<string, string[]> };
+
+    const [state, setState] = useState<State>({ status: "loading" });
```
15-22: Optimize performance by memoizing the value object.
The value object is recreated on every render. Consider using useMemo to prevent unnecessary recalculations.
```diff
+    const value = useMemo(() => ({
+        processCategory,
+        processingMode,
+        processEngine,
+    }), [processCategory, processingMode, processEngine]);
+
     const { isCategoryFieldVisible } = useProcessFormDataOptions({
         allCombinations,
-        value: {
-            processCategory,
-            processingMode,
-            processEngine,
-        },
+        value,
     });
```
36-37: Add explicit return type annotation.
Consider adding a TypeScript return type annotation for better maintainability and documentation.
```diff
+    type ReturnType = {
+        allCombinations: ScenarioParametersCombination[];
+        isAllCombinationsLoading: boolean;
+        isCategoryFieldVisible: boolean;
+        engineSetupErrors: Record<string, string[]>;
+    };
+
-    return { allCombinations, isAllCombinationsLoading, isCategoryFieldVisible, engineSetupErrors };
+    return {
+        allCombinations,
+        isAllCombinationsLoading,
+        isCategoryFieldVisible,
+        engineSetupErrors
+    } as ReturnType;
```
utils/json-utils/src/main/scala/pl/touk/nussknacker/engine/json/serde/CirceJsonDeserializer.scala (2)
Line range hint 27-31: Consider addressing the TODO for improved validation errors.
The comment indicates potential issues with the current everit validator and suggests moving to networknt's json-schema-validator for better error messages and to avoid mutation of input JSON.
Would you like me to:
- Create a GitHub issue to track the migration to networknt's json-schema-validator?
- Provide a proof-of-concept implementation using the suggested validator?
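As a starting point, here is a minimal proof-of-concept sketch of what validation through networknt's json-schema-validator could look like. The chosen schema draft (V7), the example schema, and the printed message format are assumptions, not taken from this PR:
```scala
import com.fasterxml.jackson.databind.ObjectMapper
import com.networknt.schema.{JsonSchemaFactory, SpecVersion}
import scala.jdk.CollectionConverters._

object NetworkntValidationSketch extends App {
  private val mapper = new ObjectMapper()
  // V7 is an assumption - pick the draft that the existing schemas actually target
  private val factory = JsonSchemaFactory.getInstance(SpecVersion.VersionFlag.V7)

  private val schema = factory.getSchema(
    """{"type": "object", "required": ["name"], "properties": {"name": {"type": "string"}}}"""
  )

  // validate returns a set of ValidationMessage; an empty set means the document is valid.
  // Unlike everit, the input JsonNode is not mutated during validation.
  private val errors = schema.validate(mapper.readTree("""{"age": 42}""")).asScala

  errors.foreach(e => println(e.getMessage)) // e.g. a "required property is missing" message
}
```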
Line range hint 27-39: Document the input mutation behavior.
The warning about everit mutating the input JSON is crucial for maintainers. Consider enhancing the documentation to (a sketch follows the list):
- Explain specific cases where null values vs. empty maps behave differently
- Add examples of edge cases that could be problematic
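For instance, a hedged sketch of the kind of ScalaDoc note that could sit next to the deserializer; the wording and the exact null-vs-empty-map semantics below are assumptions that should be verified against everit's actual behaviour before committing them:
```scala
/**
 * WARNING: the underlying everit validator mutates the input JSON during validation,
 * e.g. by filling in schema defaults. Be careful when reusing the input afterwards.
 *
 * In particular, verify how these cases behave before relying on them:
 *   - a field explicitly set to `null` vs. a field that is absent,
 *   - an empty object (`{}`) vs. a missing nested object,
 * as the mutation may make them indistinguishable after validation.
 */
```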
designer/submodules/packages/components/src/scenarios/list/scenarioStatus.tsx (1)
34-34: Consider enhancing the default tooltip text
The default text "Status" is quite generic. Consider making it more descriptive, e.g., "Scenario Status" or "Click to filter by status" to provide better context for users.
```diff
- title={t("scenariosList.tooltip.status", "Status")}
+ title={t("scenariosList.tooltip.status", "Click to filter by status")}
```
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/FixedValuesSetting.tsx (1)
19-19: Consider making userDefinedListInputLabel optional
While the implementation is correct, consider making this prop optional with a default value to maintain backward compatibility and reduce the burden on existing usage sites.
```diff
- userDefinedListInputLabel: string;
+ userDefinedListInputLabel?: string;
```
utils/default-helpers/src/main/scala/pl/touk/nussknacker/engine/util/functions/conversion.scala (1)
Breaking change: `toNumber` removal not documented in migration guide
The migration guide for version 1.19.0 exists but does not mention the removal of the `toNumber` method. This breaking change should be documented to help users prepare for the transition.
- Add deprecation notice to `docs/MigrationGuide.md` under the "In version 1.19.0" section
- Include migration steps or alternatives for users currently using the `toNumber` method
🔗 Analysis chain
Line range hint 20-24: Ensure deprecation is documented in MigrationGuide.md
The `toNumber` method is marked for removal in 1.19. This breaking change should be documented in the migration guide to help users prepare for the transition.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if the deprecation is documented in the migration guide
rg "toNumber.*1\.19|1\.19.*toNumber" MigrationGuide.md
```
Length of output: 114
Script:
```bash
#!/bin/bash
# Check for migration guide in docs directory and other common locations
fd -g "*migration*.md" -i
# Also check for any documentation about version 1.19
rg "1\.19" -g "*.md"
```
Length of output: 1172
designer/submodules/packages/components/src/common/categoryChip.tsx (1)
Line range hint 26-55: Consider adding JSDoc documentation for i18n requirements
While the implementation is clean, it would be helpful to add JSDoc comments to the `CategoryButton` component documenting the internationalization requirements and expected translation keys. This would help maintain consistency as more components are internationalized.
Example documentation:
```diff
+/**
+ * CategoryButton component with internationalization support.
+ * @requires Translation key: scenariosList.tooltip.category
+ */
 export function CategoryButton({ category, filterValues, setFilter }: Props): JSX.Element {
```
designer/server/src/main/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepository.scala (1)
Line range hint 13-24: Address technical debt in synchronous operations
The synchronous methods (`fetchLatestFragmentsSync` and `fetchLatestFragmentSync`) have several concerns:
- FIXME comments indicate they should be replaced with async versions
- Hard-coded 10-second timeout could be problematic in different environments
Consider:
- Creating tickets to migrate to fully async operations
- Moving the timeout duration to configuration
- Adding deprecation warnings to sync methods
designer/client/src/components/graph/node-modal/ParametersUtils.ts (2)
37-39: Add JSDoc documentation and consider error logging.
While the null check is a good defensive programming practice, the function would benefit from:
- JSDoc documentation explaining the parameters, return type, and these early return cases
- Optional debug logging when these conditions are hit to help troubleshoot issues
```diff
+/**
+ * Adjusts node parameters based on current parameter definitions
+ * @param node - The node whose parameters need to be adjusted
+ * @param parameterDefinitions - The parameter definitions to adjust against
+ * @returns Object containing the adjusted node and any unused parameters
+ */
 export function adjustParameters(node: NodeType, parameterDefinitions: UIParameter[]): AdjustReturn {
     const path = parametersPath(node);

     if (!path || !parameterDefinitions) {
+        console.debug(`Skipping parameter adjustment: path=${path}, hasDefinitions=${!!parameterDefinitions}`);
         return { adjustedNode: node, unusedParameters: [] };
     }
```
Remove or update the unused parameters comment
The `unusedParameters` field in the `AdjustReturn` type is tracked but not utilized in the UI or elsewhere in the codebase. The only consumer of the `adjustParameters` function (`NodeTypeDetailsContent.tsx`) explicitly ignores the `unusedParameters` in the destructuring. Either:
- Remove the `unusedParameters` tracking if there are no concrete plans to use it
- Or document a specific future use-case if one exists
🔗 Analysis chain
Line range hint 4-7: Verify the comment about unused parameters.
The comment suggests that unused parameters are not currently used but might be displayed somewhere. This should either be:
- Implemented if there's value in showing unused parameters to users
- Removed if there are no plans to use this information
Let's check if unused parameters are used anywhere:
Let me search for any potential usage of this information in the UI components.
Let me check if there are any consumers of the `adjustParameters` function to see how the return value is used.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for usage of unusedParameters
rg "unusedParameters" --type ts -C 2
```
Length of output: 1519
Script:
```bash
#!/bin/bash
# Search for any UI components that might be displaying parameters
rg -t ts "findUnusedParameters|unusedParameters" -C 5 designer/client/src/components/
```
Length of output: 3675
Script:
```bash
#!/bin/bash
# Search for usage of adjustParameters function
rg -t ts "adjustParameters\(" -C 5
```
Length of output: 2492
designer/submodules/packages/components/src/scenarios/list/processingMode.tsx (1)
Line range hint 21-21: Consider unifying the i18n approach across the file.
Currently, there's a mix of direct `i18next.t()` calls for static content and hook-based translation. Consider refactoring to use the hook-based approach consistently.
```diff
 export const processingModeItems = [
     {
         name: ProcessingMode.streaming,
-        displayableName: i18next.t(`scenarioDetails.processingModeVariants.streaming`, "Streaming"),
+        displayableName: "scenarioDetails.processingModeVariants.streaming",
         Icon: Streaming,
     },
     {
         name: ProcessingMode.requestResponse,
-        displayableName: i18next.t(`scenarioDetails.processingModeVariants.requestResponse`, "Request-Response"),
+        displayableName: "scenarioDetails.processingModeVariants.requestResponse",
         Icon: RequestResponse,
     },
     {
         name: ProcessingMode.batch,
-        displayableName: i18next.t(`scenarioDetails.processingModeVariants.batch`, "Batch"),
+        displayableName: "scenarioDetails.processingModeVariants.batch",
         Icon: Batch,
     },
 ];
```
Then in the component:
```diff
 return (
     <Button
         title={t("scenariosList.tooltip.processingMode", "Processing mode")}
         color={isSelected ? "primary" : "inherit"}
         sx={{ textTransform: "capitalize", display: "flex", gap: 1, alignItems: "center", fontSize: "1rem", py: 0.25, mx: 0 }}
         onClick={onClick}
         aria-selected={isSelected}
     >
         <item.Icon color={"inherit"} />
-        <Typography variant={"caption"}>{item.displayableName}</Typography>
+        <Typography variant={"caption"}>{t(item.displayableName)}</Typography>
     </Button>
 );
```
This approach:
- Makes the translation handling more consistent
- Ensures all translations are managed through the hook
- Allows for better type checking and maintenance
Also applies to: 26-26, 31-31
components-api/src/test/scala/pl/touk/nussknacker/engine/api/typed/ValueDecoderSpec.scala (2)
Line range hint 51-71: Security & Performance: Verify implications of decoding extra fields
The new behavior of including all extra fields raises several concerns:
- Security: Arbitrary JSON fields could potentially expose sensitive data or cause memory issues with very large payloads
- Performance: Processing and storing extra fields increases memory usage
- Type safety: The generic decoding strategy might not properly validate field types
Consider adding:
- Field whitelisting
- Size limits for extra fields
- Type validation for unknown fields
Let's verify the impact:
```bash
#!/bin/bash
# Look for security-related validation in the decoder implementation
rg "class ValueDecoder|object ValueDecoder" -A 10

# Check for existing size limits or validation
ast-grep --pattern 'class ValueDecoder {
  $$$
  def decode($$$) {
    $$$
    size
    $$$
  }
}'
```
67-69: Enhance test coverage for extra fields handling
Consider adding test cases for (a sketch follows the list):
- Multiple extra fields
- Nested extra fields
- Extra fields with various data types
- Very large extra field payloads
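A minimal sketch of such cases in ScalaTest, exercising `FromJsonDecoder.jsonToAny` (the generic decoder that `ValueDecoder` falls back to for `Unknown` fields). The class name and suite style are hypothetical, not the real spec's:
```scala
import io.circe.Json
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.matchers.should.Matchers
import pl.touk.nussknacker.engine.api.json.FromJsonDecoder

class ExtraFieldsDecodingSketch extends AnyFunSuite with Matchers {

  test("decodes multiple extra fields of mixed types") {
    val json = Json.obj(
      "extra1" -> Json.fromInt(1),
      "extra2" -> Json.fromBoolean(true),
      "extra3" -> Json.fromString("x")
    )
    // objects decode to java.util.Map, numbers to the narrowest JVM type
    val decoded = FromJsonDecoder.jsonToAny(json).asInstanceOf[java.util.Map[String, Any]]
    decoded.get("extra1") shouldBe 1
    decoded.get("extra2") shouldBe true
    decoded.get("extra3") shouldBe "x"
  }

  test("decodes nested extra fields") {
    val json    = Json.obj("extra" -> Json.obj("inner" -> Json.fromLong(Long.MaxValue)))
    val decoded = FromJsonDecoder.jsonToAny(json).asInstanceOf[java.util.Map[String, Any]]
    val nested  = decoded.get("extra").asInstanceOf[java.util.Map[String, Any]]
    nested.get("inner") shouldBe Long.MaxValue
  }
}
```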
designer/client/cypress/e2e/labels.cy.ts (1)
36-42: Enhance test robustness for duplicate label validation
While the test covers the basic duplicate label scenario, consider these improvements for more reliable testing:
- Use a more specific assertion for the error message:
-cy.get("@labelInput").should("be.visible").contains("This label already exists. Please enter a unique value."); +cy.get("@labelInput") + .should("be.visible") + .find("[data-testid=error-message]") + .should("have.text", "This label already exists. Please enter a unique value.");
- Add explicit visibility wait for the error message:
cy.wait("@labelvalidation"); +cy.get("@labelInput").find("[data-testid=error-message]").should("be.visible");
- Add negative validation:
cy.get("@labelInput").find("input").clear(); +// Verify the duplicate label was not added +cy.get("[data-testid^=scenario-label-]").should("have.length", 1);components-api/src/main/scala/pl/touk/nussknacker/engine/api/typed/ValueDecoder.scala (1)
69-69: Use standard comment style
The triple slash (///) seems non-standard. Consider using the standard double slash (//) for inline comments or ScalaDoc style (/** */) for documentation.
```diff
- /// For Unknown we fallback to generic json to any conversion. It won't work for some types such as LocalDate but for others should work correctly
+ // For Unknown we fallback to generic json to any conversion. It won't work for some types such as LocalDate but for others should work correctly
```
designer/client/src/components/graph/node-modal/editors/expression/CustomCompleterAceEditor.tsx (1)
31-31: Add JSDoc documentation for the new prop.
Consider adding documentation to explain the purpose and default behavior of this prop.
```diff
+    /**
+     * Controls whether the editor should show autocompletion suggestions in real-time.
+     * When undefined, follows the AceEditor's default behavior.
+     */
     enableLiveAutocompletion?: boolean;
```
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/compile/ProcessCompilerData.scala (3)
43-44: Enhance comment clarity regarding serialization issues.
While the comment explains the use of unoptimized evaluator, it would be helpful to provide more specific details about the serialization problems encountered during scenario testing.
Consider expanding the comment to:
```diff
- // Here we pass unOptimizedEvaluator for ValidationExpressionParameterValidator
- // as the optimized once could cause problems with serialization with its listeners during scenario testing
+ // Here we pass unOptimizedEvaluator for ValidationExpressionParameterValidator
+ // as the optimized evaluator's listeners can cause serialization issues during scenario testing
+ // (specifically when serializing expression evaluation results)
```
72-73: Fix alignment for consistent code formatting.
The alignment between these two lines is inconsistent.
Apply this formatting change:
```diff
- val expressionEvaluator = ExpressionEvaluator.optimizedEvaluator(globalVariablesPreparer, listeners)
- val interpreter = Interpreter(listeners, expressionEvaluator, componentUseCase)
+ val expressionEvaluator = ExpressionEvaluator.optimizedEvaluator(globalVariablesPreparer, listeners)
+ val interpreter = Interpreter(listeners, expressionEvaluator, componentUseCase)
```
Line range hint 43-73: Overall implementation approach looks good.
The dual evaluator approach (optimized for runtime, unoptimized for validation) is a reasonable solution to handle serialization issues while maintaining performance where it matters most. The changes are well-documented and maintain backward compatibility.
Consider documenting this architectural decision in the project's architecture decision records (ADR) to help future maintainers understand the rationale behind using different evaluators for different purposes.
scenario-api/src/main/scala/pl/touk/nussknacker/engine/build/ScenarioBuilder.scala (2)
124-128: LGTM: Good refactoring and API design
The refactoring improves code reuse while maintaining functionality. The new `fragmentWithRawParameters` method follows consistent patterns and provides good flexibility with varargs.
Consider adding scaladoc documentation to describe:
- The purpose and usage of each method
- Parameter descriptions
- Return value details
- Example usage
```diff
+  /**
+   * Creates a fragment with specified input node ID and typed parameters
+   * @param id The unique identifier for the fragment
+   * @param inputNodeId The ID of the input node
+   * @param params Variable number of parameter name and type pairs
+   * @return A ProcessGraphBuilder configured with the fragment
+   */
   def fragmentWithInputNodeId(id: String, inputNodeId: String, params: (String, Class[_])*): ProcessGraphBuilder = {
     createFragment(id, GraphBuilder.fragmentInput(inputNodeId, params: _*))
   }

+  /**
+   * Creates a fragment with raw parameters
+   * @param id The unique identifier for the fragment
+   * @param params Variable number of FragmentParameter instances
+   * @return A ProcessGraphBuilder configured with the fragment
+   */
   def fragmentWithRawParameters(id: String, params: FragmentParameter*): ProcessGraphBuilder = {
```
135-140: Consider adding parameter validation
The `createFragment` method could benefit from input validation to ensure robustness.
Consider adding validation for the `id` parameter:
```diff
 private def createFragment(id: String, graphBuilder: GraphBuilder[SourceNode]): ProcessGraphBuilder = {
+    require(id != null && id.nonEmpty, "Fragment id must not be null or empty")
     new ProcessGraphBuilder(
       graphBuilder.creator
         .andThen(r => EspProcess(MetaData(id, FragmentSpecificData()), NonEmptyList.of(r)).toCanonicalProcess)
     )
 }
```
utils/json-utils/src/main/scala/pl/touk/nussknacker/engine/json/swagger/decode/FromJsonSchemaBasedDecoder.scala (5)
13-13: Address TODO comment regarding Validation
The TODO comment suggests implementing validation using the `Validated` type. This would be a good improvement as it would allow for better error accumulation instead of failing fast.
Would you like me to help implement a `Validated`-based approach for error handling?
22-22
: Consider replacing exception-based flow controlUsing exceptions for flow control is generally discouraged in Scala. Consider refactoring to use either:
Either[JsonToObjectError, AnyRef]
Validated[JsonToObjectError, AnyRef]
(preferred for error accumulation)This would make error handling more explicit and composable, following functional programming principles.
44-53
: Consider improving type safety in object decodingThe current implementation mixes schema-based decoding with raw JSON decoding. Consider:
- Making the type hierarchy more explicit
- Using ADTs to represent the different decoding strategies
- Adding runtime type checking for
jsonToAny
resultsExample improvement:
sealed trait DecodingStrategy case class SchemaBasedDecoding(schema: SwaggerTyped) extends DecodingStrategy case object RawJsonDecoding extends DecodingStrategy def decodeWithStrategy(json: Json, strategy: DecodingStrategy): Either[JsonToObjectError, AnyRef] = { strategy match { case SchemaBasedDecoding(schema) => decode(json, schema) case RawJsonDecoding => FromJsonDecoder.jsonToAny(json).asRight } }
91-93
: Optimize array handling performanceThe current implementation creates intermediate collections and could be optimized:
- Consider using
Iterator
instead ofVector
to avoid intermediate collection- Use
Array
for better performance when working with primitive typesExample optimization:
extract[Vector[Json]]( _.asArray, arr => { val result = new java.util.ArrayList[AnyRef](arr.size) arr.iterator.zipWithIndex.foreach { case (el, idx) => result.add(FromJsonSchemaBasedDecoder.decode(el, elementType, s"$path[$idx]")) } result } )
Line range hint
106-136
: Enhance date/time parsing error messagesThe current implementation returns null for invalid inputs. Consider:
- Adding specific error messages for different failure cases (empty string vs parsing failure)
- Using
Option
orEither
instead of nullExample improvement:
private def parseDateTime(dateTime: String): Either[String, ZonedDateTime] = { Option(dateTime).filter(_.nonEmpty).toRight("Empty datetime string") .flatMap(dt => Try(ZonedDateTime.parse(dt, DateTimeFormatter.ISO_DATE_TIME)) .toEither .left.map(e => s"Invalid datetime format: ${e.getMessage}") ) }designer/submodules/packages/components/src/scenarios/list/item.tsx (1)
71-71
: Consider improvements for i18n and styling consistency.Two suggestions for better maintainability:
- The separator "/" should be internationalized for better language support
- Consider moving the hardcoded padding values to theme constants
- <span style={{ paddingLeft: 8, paddingRight: 8 }}>/</span> + <span style={{ paddingLeft: theme.spacing(1), paddingRight: theme.spacing(1) }}> + {t("scenario.categorySeparator", "/")} + </span>designer/submodules/packages/components/src/scenarios/useScenariosQuery.tsx (2)
58-72
: Consider extracting common query configurationsThe implementation looks good and follows the established patterns. However, there are opportunities for improvement:
- The refetchInterval (15000) and staleTime (10000) values are duplicated across multiple hooks.
- Error handling strategy isn't explicitly defined.
Consider extracting common configurations into constants:
const COMMON_QUERY_CONFIG = { refetchInterval: 15000, staleTime: 10000, } as const; export function useScenarioParametersCombinationsQuery(): UseQueryResult<ScenarioParametersCombinations> { const api = useContext(NkApiContext); return useQuery({ queryKey: [scenarioParametersCombinationsQueryKey], queryFn: async () => { const { data } = await api.fetchScenarioParametersCombinations(); return data; }, enabled: !!api, ...COMMON_QUERY_CONFIG, }); }
129-141
: Consider improving type safety and semanticsWhile the implementation is functional, there are opportunities to make it more robust and maintainable:
Consider these improvements:
export function useScenariosWithCategoryVisible(): { withCategoriesVisible: boolean } { const parametersCombinations = useScenarioParametersCombinationsQuery(); return useMemo(() => { const { data } = parametersCombinations; // More explicit empty array handling const combinations = data?.combinations ?? []; const SINGLE_CATEGORY = 1; const uniqueCategories = Object.keys( groupBy(combinations, (combination) => combination.category ?? "default") ); return { withCategoriesVisible: uniqueCategories.length > SINGLE_CATEGORY, }; }, [parametersCombinations]); }Changes include:
- More explicit null coalescing for empty array
- Named constant for better semantics
- Safe category access with fallback
engine/lite/components/request-response/src/main/scala/pl/touk/nussknacker/engine/lite/components/requestresponse/jsonschema/sources/JsonSchemaRequestResponseSource.scala (1)
Line range hint
89-91
: Consider improving combined schema handling.The TODO comment highlights a limitation in handling
anyOf
,allOf
, andoneOf
schemas. Currently, it uses a simplistic "first matched schema" approach which might lead to unexpected behavior.Suggested improvements:
- Add validation mode configuration to handle strict vs. lax schema matching
- Implement proper handling for each combined schema type:
anyOf
: Current behavior (match first valid)allOf
: Validate against all schemasoneOf
: Ensure exactly one schema matchesWould you like me to help create a GitHub issue to track these improvements?
designer/client/src/components/modals/MoreScenarioDetailsDialog.tsx (2)
1-1
: Remove unused importSkeleton
The
Skeleton
component is imported from @mui/material but is not used in the code.-import { Box, Skeleton, styled, Typography } from "@mui/material"; +import { Box, styled, Typography } from "@mui/material";Also applies to: 13-14
52-54
: Simplify LoaderSpinner usageSince we're already in a conditional block, the
show
prop is redundant.- return <LoaderSpinner show={true} />; + return <LoaderSpinner />;designer/client/src/components/AddProcessDialog.tsx (1)
40-44
: LGTM! Consider memoizing hook dependencies.The integration of
useGetAllCombinations
is well-implemented. However, since the hook's dependencies are derived from the form state, consider memoizing the options object to prevent unnecessary hook re-executions.Consider this optimization:
+ const hookOptions = useMemo(() => ({ + processCategory: value.processCategory, + processingMode: value.processingMode, + processEngine: value.processEngine, + }), [value.processCategory, value.processingMode, value.processEngine]); - const { engineSetupErrors, allCombinations } = useGetAllCombinations({ - processCategory: value.processCategory, - processingMode: value.processingMode, - processEngine: value.processEngine, - }); + const { engineSetupErrors, allCombinations } = useGetAllCombinations(hookOptions);designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/UserDefinedListInput.tsx (2)
171-171
: Consider wrapping the inputLabel with translation support.While the dynamic label implementation works, consider wrapping it with the translation system for consistency, as other strings in the component (like the "Add" button) already use i18n. This would ensure uniform internationalization support across the component.
Here's how you could modify it:
-<SettingLabelStyled>{inputLabel}</SettingLabelStyled> +<SettingLabelStyled>{t(inputLabel)}</SettingLabelStyled>This assumes that
inputLabel
will be a translation key. If it's meant to be the final text, you might want to add a new prop to distinguish between translation keys and direct text:interface Props { - inputLabel: string; + inputLabel: string; + translateLabel?: boolean; // defaults to true if using translation keys } // Usage: -<SettingLabelStyled>{inputLabel}</SettingLabelStyled> +<SettingLabelStyled>{translateLabel ? t(inputLabel) : inputLabel}</SettingLabelStyled>
Line range hint
1-205
: Consider architectural improvements for maintainability.While reviewing the label changes, I noticed some opportunities for architectural improvements:
The validation logic (including the debounced HTTP validation) could be extracted into a custom hook, e.g.,
useListItemValidation
. This would:
- Improve code organization
- Make the validation logic reusable
- Make the component more focused on rendering
The HTTP validation could benefit from error handling for failed requests.
Would you like me to provide an example implementation of these improvements?
designer/submodules/packages/components/src/scenarios/filters/filtersPart.tsx (1)
39-39
: Consider optimizing dependency array ordering.While the dependencies are correct, consider ordering them alphabetically for better maintainability:
-}, [availableLabels?.labels, data, filterableKeys, statusDefinitions, userData?.categories]); +}, [data, filterableKeys, availableLabels?.labels, statusDefinitions, userData?.categories]);scenario-api/src/main/scala/pl/touk/nussknacker/engine/build/GraphBuilder.scala (2)
111-120
: Add documentation for the new method.Please add Scaladoc to explain:
- The purpose of this method compared to
fragmentInput
- When to use this method vs the original method
- Example usage with FragmentParameter instances
Apply this diff to add documentation:
+ /** + * Creates a fragment input node using raw FragmentParameter instances. + * This method provides an alternative to `fragmentInput` when you already have + * FragmentParameter instances and want to avoid parameter conversion. + * + * @param id The identifier for the fragment input + * @param params The fragment parameters as pre-constructed FragmentParameter instances + * @return A GraphBuilder containing a SourceNode with FragmentInputDefinition + */ def fragmentInputWithRawParameters(id: String, params: FragmentParameter*): GraphBuilder[SourceNode] =
111-120
: Consider adding parameter validation.The implementation looks correct, but consider adding validation to ensure:
- The
id
is not empty- The
params
list contains at least one parameterHere's a suggested implementation with validation:
def fragmentInputWithRawParameters(id: String, params: FragmentParameter*): GraphBuilder[SourceNode] = { + require(id.nonEmpty, "Fragment input id cannot be empty") + require(params.nonEmpty, "At least one parameter must be provided") new SimpleGraphBuilder( SourceNode( FragmentInputDefinition( id = id, parameters = params.toList ), _ ) ) }designer/submodules/packages/components/src/scenarios/list/tablePart.tsx (1)
Line range hint
24-117
: Consider improving type safety and simplifying the columns structure.While the implementation works, there are a few improvements that could make it more robust:
- The type
Columns<RowType | undefined>
is more permissive than necessary- The filter operation at the end could be avoided with a more type-safe approach
Consider this alternative approach:
- const columns = useMemo((): Columns<RowType> => { - const availableColumns: Columns<RowType | undefined> = [ + const columns = useMemo((): Columns<RowType> => { + const baseColumns: Columns<RowType> = [ { field: "id", // ... other columns }, + // ... other base columns + ]; + + const categoryColumn: Columns<RowType> = withCategoriesVisible ? [{ + field: "processCategory", + cellClassName: "noPadding stretch", + headerName: t("table.scenarios.title.PROCESS_CATEGORY", "Category"), + renderCell: (props) => <FilterLinkCell<ScenariosFiltersModel> filterKey="CATEGORY" {...props} />, + flex: 1, + }] : []; + + return [...baseColumns, ...categoryColumn]; - withCategoriesVisible - ? { - field: "processCategory", - // ... category column props - } - : undefined, - // ... other columns - ]; - return availableColumns.filter((data) => data !== undefined); }, [filterText, t, withCategoriesVisible]);This approach:
- Maintains better type safety by avoiding undefined values
- Eliminates the need for the filter operation
- Makes the code more maintainable and easier to understand
designer/client/src/components/AddProcessForm.tsx (1)
22-24
: Consider adding validation messages for processing mode options.While the form handles validation errors, it would be helpful to add specific error messages when disabled processing modes are selected. This would improve the user experience by clearly communicating why certain options are unavailable.
Example implementation:
export type FormValue = { processName: string; processCategory: string; processingMode: ProcessingMode; processEngine: string }; export type TouchedValue = Record<keyof FormValue, boolean>; interface AddProcessFormProps extends ChangeableValue<FormValue> { validationErrors: NodeValidationError[]; categories: { value: string; disabled: boolean }[]; processingModes: ProcessingMode[]; engines: string[]; handleSetTouched: (touched: TouchedValue) => void; touched: TouchedValue; displayContactSupportMessage?: boolean; + processingModeDisabledReasons?: Partial<Record<ProcessingMode, string>>; }
Also applies to: 26-32
designer/client/src/components/toolbars/scenarioDetails/ScenarioLabels.tsx (1)
30-35
: Consider using useTranslation hook instead of direct i18next usageThe function uses i18next.t directly, which might miss dynamic translation updates. Consider moving this function inside the component to leverage the useTranslation hook's t function.
-const labelUniqueValidation = (label: string) => ({ +const ScenarioLabels = ({ readOnly }: Props) => { + const { t } = useTranslation(); + + const labelUniqueValidation = (label: string) => ({ label, messages: [ - i18next.t("panels.scenarioDetails.labels.validation.uniqueValue", "This label already exists. Please enter a unique value."), + t("panels.scenarioDetails.labels.validation.uniqueValue", "This label already exists. Please enter a unique value."), ], });engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala (2)
809-834
: Consider adding documentation for the fragment process configuration.The companion object implementation is well-structured, but could benefit from documentation explaining:
- The purpose of the fragment validation
- The expected behavior of the validation expression
- The significance of the validation message
Add scaladoc comments to improve maintainability:
+/** + * Test fragment process configuration with parameter validation. + * The fragment defines a single parameter 'param' of type String with a validation + * that always returns true. This setup allows testing the validation infrastructure + * without actually restricting parameter values. + */ object FlinkTestMainSpec { private val fragmentWithValidationName = "fragmentWithValidation" + /** + * Creates a canonical process representing a fragment with parameter validation. + * The process consists of an input definition with a validated parameter and + * an output definition. + */ private val processWithFragmentParameterValidation: CanonicalProcess = {
814-821
: Consider extracting validation configuration to a named constant.The validation configuration could be made more maintainable by extracting it to a named constant.
private val processWithFragmentParameterValidation: CanonicalProcess = { val fragmentParamName = ParameterName("param") + val paramValidation = ParameterValueCompileTimeValidation( + validationExpression = Expression.spel("true"), + validationFailedMessage = Some("param validation failed") + ) val fragmentParam = FragmentParameter(fragmentParamName, FragmentClazzRef[String]).copy( - valueCompileTimeValidation = Some( - ParameterValueCompileTimeValidation( - validationExpression = Expression.spel("true"), - validationFailedMessage = Some("param validation failed") - ) - ) + valueCompileTimeValidation = Some(paramValidation) )designer/server/src/test/scala/pl/touk/nussknacker/ui/api/NodesApiHttpServiceBusinessSpec.scala (1)
502-529
: LGTM! Consider adding a complementary test case.The test case effectively validates the API's behavior when encountering a non-existing fragment reference. The error response structure and status code are well-defined.
Consider adding a complementary test case that verifies the successful validation of a fragment input node with a valid fragment reference. This would provide complete coverage of both the happy and error paths.
Example structure:
"return 200 for fragment input node referencing existing fragment" in { val existingFragmentName = "existing-fragment" given() .applicationState { // Create both the main scenario and the fragment createSavedScenario(exampleScenario) createSavedScenario(fragmentScenario) } .basicAuthAllPermUser() .jsonBody(exampleNodeValidationRequestForFragment(existingFragmentName)) .when() .post(s"$nuDesignerHttpAddress/api/nodes/${exampleScenario.name}/validation") .Then() .statusCode(200) .equalsJsonBody( // Add expected successful validation response ) }docs/scenarios_authoring/Spel.md (5)
267-267
: Consider enhancing the explanation of Unknown type handlingThe explanation could be more precise by specifying that the
Unknown
type is a special type in Nussknacker that represents dynamically typed values.-Every unknown accessed field/element will produce `Unknown` data type, which can be further navigated or converted to a desired type. +Every accessed field/element from an `Unknown` type will produce another `Unknown` type value, which can be further navigated using the `[]` operator or converted to a specific type using conversion functions.
271-272
: Improve clarity and grammar in the type conversions introductionThe introduction could be more precise and grammatically correct.
```diff
-It is possible to cast or convert from a type to another type and this can be done by implicit and explicit conversion.
+It is possible to cast or convert from one type to another, and this can be done through either implicit or explicit conversion.
```
292-295
: Enhance explanation of error handling in conversion functionsConsider adding more details about error handling, particularly what kind of exceptions might be thrown and how to handle them safely.
Functions with the prefix `canBe` check whether a type can be cast or converted to the appropriate type. Functions with the `to` prefix cast or convert a value to the desired type, and if the operation fails, an exception is propagated further. Functions with the `to` prefix and `OrNull` suffix cast or convert a value to the desired type, and if the operation fails, a null value is returned. + +Example of error handling: +```scala +// Safe conversion pattern +if (value.canBeDouble) { + // This won't throw an exception as we checked compatibility + value.toDouble +} else { + // Handle incompatible type case + defaultValue +} +```
329-333
: Add explanation for potentially failing conversionsThe legend should include more details about what "potentially failing" means and how to handle such cases.
Where: :heavy_check_mark: - conversion is possible :x: - conversion is not possible :exclamation: - conversion is potentially failing + +A potentially failing conversion (:exclamation:) means that the conversion might succeed in some cases but fail in others. For example: +- String to Number conversions only succeed if the string contains a valid number format +- Unknown type conversions depend on the runtime value + +For potentially failing conversions, it's recommended to: +1. Use the `canBe` function to check if conversion is possible +2. Use the `toOrNull` variant if null is an acceptable fallback +3. Handle potential conversion exceptions when using the `to` function
346-348
: Add warning about implicit conversion pitfallsConsider adding a note about potential issues with implicit conversions.
will try to convert the type of the "input value" to the "target type". This behaviour can be encountered in particular when passing certain values to method parameters (these values can be automatically converted to the desired type). + +Note: While implicit conversions can make code more concise, they can also mask type-related issues. Consider using explicit conversions when: +- The conversion logic is critical to your business logic +- You need to handle conversion failures gracefully +- You want to make type conversions more visible for code maintenancedocs/MigrationGuide.md (4)
17-21
: Improve missing Flink Kafka Source/Sink TypeInformation documentationThe documentation for breaking change regarding Flink Kafka Source/Sink TypeInformation could be more detailed. Consider adding:
- What specific versions of Flink are affected
- What steps users need to take to migrate their scenarios
- Example of how to handle state restoration
* [#7116](https://github.com/TouK/nussknacker/pull/7116) Improve missing Flink Kafka Source / Sink TypeInformation - * We lost support for old ConsumerRecord constructor supported by Flink 1.14 / 1.15 - * If you used Kafka source/sink components in your scenarios then state of these scenarios won't be restored + * We lost support for old ConsumerRecord constructor supported by Flink 1.14 / 1.15 + * If you used Kafka source/sink components in your scenarios then state of these scenarios won't be restored + * Affected Flink versions: 1.14.x, 1.15.x + * Migration steps: + 1. Save your scenario state if needed + 2. Update your Kafka source/sink components + 3. Deploy with clean state
109-112
: Clarify JSON decoding changes impactThe explanation of JSON decoding changes could be clearer about potential impacts. Consider adding:
- Specific examples of before/after behavior
- Whether any configuration changes are needed
* [#7187](https://github.com/TouK/nussknacker/pull/7187) JSON decoding in `request` source (request-response processing mode) and in `kafka` source (streaming processing mode): For small decimal numbers is used either `Integer` or `Long` (depending on number size) instead of `BigDecimal`. This change should be transparent in most cases as this value was mostly used after `#CONV.toNumber()` invocation which still will return a `Number`. + Example: + Before: {"value": 123} -> value was decoded as BigDecimal(123) + After: {"value": 123} -> value is decoded as Integer(123) + Note: If you rely on exact decimal precision, you may need to explicitly convert using `#CONV.toBigDecimal()`
Line range hint
1-7
: Improve documentation structureThe migration guide would benefit from a clearer structure. Consider adding:
- A table of contents
- Version compatibility matrix
- Quick reference for most critical changes
# Migration guide +## Quick Reference +- Latest version: 1.19.0 (unreleased) +- Critical changes: + - Flink Kafka Source/Sink TypeInformation changes (#7116) + - JSON decoding behavior changes (#7187) + +## Version Compatibility Matrix +| Nussknacker Version | Flink Version | Scala Version | Java Version | +|---------------------|---------------|---------------|--------------| +| 1.19.0 | 1.19.1 | 2.13 | 11 | +| 1.18.0 | 1.18.1 | 2.13 | 11 | + +## Table of Contents +- [Version 1.19.0](#in-version-1190-not-released-yet) +- [Version 1.18.0](#in-version-1180) + To see the biggest differences please consult the [changelog](Changelog.md).🧰 Tools
Line range hint
121-125
: Improve Kafka topics validation documentationThe documentation about Kafka topics validation could be clearer about the configuration impact. Consider adding:
- Default values
- Configuration examples
* [#6282](https://github.com/TouK/nussknacker/pull/6184) If you relied on the default value of the `topicsExistenceValidationConfig.enabled` setting, you must now be aware that topics will be validated by default (Kafka's `auto.create.topics.enable` setting is only considered in case of Sinks). Create proper topics manually if needed. + Configuration example: + ```hocon + kafka { + topicsExistenceValidationConfig { + enabled = true # Default: true + # Other validation settings... + } + } + ``` + Note: For sources, topics must exist before deployment. For sinks, topics will be created if `auto.create.topics.enable=true`engine/flink/tests/src/test/resources/extractedTypes/defaultModel.json (3)
14024-14024
: Fix grammatical error in method descriptionThe description "Check whether can be convert to a list" contains grammatical errors.
- "description": "Check whether can be convert to a list", + "description": "Checks whether the map can be converted to a list",
14133-14134
: Enhance method descriptions with failure behaviorThe descriptions should explicitly mention the failure behavior (throwing exception vs returning null) to make it clear to users.
For example:
- "description": "Converts a type to a given class or throws exception if type cannot be converted.", + "description": "Converts a type to the specified class. Throws IllegalArgumentException if conversion is not possible.",Also applies to: 14147-14148, 14169-14170, 14191-14192
Line range hint
14008-14202
: Well-structured type conversion systemThe addition of type checking (
canBe
,canBeList
) and conversion methods (to
,toList
, etc.) creates a robust type conversion system. The pattern of providing both throwing and null-returning variants gives developers flexibility in handling conversion failures.Some architectural considerations:
- The type checking methods allow for defensive programming
- The conversion methods follow a consistent pattern
- The system appears to be part of a larger type conversion framework
Consider documenting the following in the migration guide:
- The relationship between these methods and the new JSON decoder mentioned in the PR
- Any performance implications of the conversion operations
- Best practices for choosing between throwing vs null-returning variants
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/extension/ToListConversionExt.scala (1)
79-105: Consider adding unit tests for the new map typing cases
The added cases in `typingFunction` enhance the conversion logic to support maps. To ensure robustness, please consider adding unit tests that cover these new cases, verifying that the typing function correctly handles different map types and their key/value type parameters. A sketch of such a test follows.
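A runnable sketch of the shape such tests could take. The `{key, value}` record layout mirrors how `List.toMap` is inverted per this PR's commit message, but the inline `mapToList` helper below is a hypothetical stand-in, not the extension's real entry point:
```scala
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.matchers.should.Matchers
import scala.jdk.CollectionConverters._

class MapToListConversionSketch extends AnyFunSuite with Matchers {

  // Stand-in for the runtime conversion: each map entry becomes a {key, value} record
  private def mapToList(map: java.util.Map[String, Any]): java.util.List[java.util.Map[String, Any]] =
    map.asScala.toList.map { case (k, v) => Map[String, Any]("key" -> k, "value" -> v).asJava }.asJava

  test("map converts to a list of {key, value} records") {
    val input = new java.util.LinkedHashMap[String, Any]()
    input.put("a", 1)
    input.put("b", 2)
    val result = mapToList(input).asScala
    result.map(_.get("key")) should contain theSameElementsAs List("a", "b")
    result.map(_.get("value")) should contain theSameElementsAs List(1, 2)
  }
}
```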
utils/json-utils/src/test/scala/pl/touk/nussknacker/engine/json/swagger/decode/FromJsonSchemaBasedDecoderTest.scala (5)
85-88
: Simplify calls toFromJsonSchemaBasedDecoder.decode
by removing redundantdecode.
prefixIn these test cases, the
decode.
prefix is unnecessary since the class is in the same package. Removing it improves code readability.Apply this diff:
- decode.FromJsonSchemaBasedDecoder.decode(json, definitionWithoutFields) shouldBe TypedMap(Map.empty) + FromJsonSchemaBasedDecoder.decode(json, definitionWithoutFields) shouldBe TypedMap(Map.empty) - decode.FromJsonSchemaBasedDecoder.decode(json, definitionWithOneField) shouldBe TypedMap(Map("field2" -> 1L)) + FromJsonSchemaBasedDecoder.decode(json, definitionWithOneField) shouldBe TypedMap(Map("field2" -> 1L))
95-95
: Remove unnecessarydecode.
prefix in method callThe
decode.
prefix is redundant here. Omitting it makes the code cleaner.Apply this diff:
- decode.FromJsonSchemaBasedDecoder.decode(json, definition) shouldBe TypedMap( + FromJsonSchemaBasedDecoder.decode(json, definition) shouldBe TypedMap(
98-98
: Ensure consistent use of numeric types in test assertionsFor consistency, consider using
1L
instead of1
to match theLong
type expected in other tests.Apply this diff:
- "field2" -> 1 + "field2" -> 1L
105-105
: Omit redundantdecode.
prefix in method invocationStreamline the code by removing the unnecessary
decode.
prefix.Apply this diff:
- decode.FromJsonSchemaBasedDecoder.decode(jsonIntegers, definition2) shouldBe TypedMap( + FromJsonSchemaBasedDecoder.decode(jsonIntegers, definition2) shouldBe TypedMap(
118-118: Simplify the method call by eliminating the decode. prefix

The decode. prefix is not needed and can be removed for clarity.

Apply this diff:

- decode.FromJsonSchemaBasedDecoder.decode(json, definition)
+ FromJsonSchemaBasedDecoder.decode(json, definition)
62-62: Remove the unnecessary trailing comma in the list definition

At line 62, there's a trailing comma after the last element in the fixedValuesList:

FixedExpressionValue("'due'", "due"),

While Scala allows trailing commas, removing it can improve code readability and maintain consistency.
208-272: Refactor duplicated test cases for maintainability

The test cases starting at line 208 ("generate parameters for fragment with fixed list parameter") and line 273 ("Generate parameters with simplified (single) editor for fragment with raw string parameter") have similar structures:

- Both create a fragment using exampleFragment.
- Both set up the application state and make similar API calls.
- The only differences are the parameters used (fragmentFixedParameter vs. fragmentRawStringParameter) and the expected JSON responses.

Consider abstracting the common logic into a helper method or using parameterized tests to reduce code duplication and enhance maintainability; one possible shape is sketched below.
Line range hint 181-185: Document the breaking change in API behavior

The behavior of the capabilities endpoint has changed to return a 403 Forbidden status for users without permissions instead of a response with capabilities disabled:

.statusCode(403)

This change may impact clients that expect a successful response with disabled capabilities. It's important to:

- Update the documentation to reflect this new behavior.
- Add an entry in MigrationGuide.md to inform users of the breaking change.
- Ensure that all clients and stakeholders are aware of this change to prevent unexpected issues.

78-81: Rename parameter to fragmentParameter for clarity

In the exampleFragment method, the parameter name parameter is too generic:

private def exampleFragment(parameter: FragmentParameter)

Consider renaming it to fragmentParameter to enhance code readability and make the purpose of the parameter clearer.

402-403: Avoid code duplication by reusing canonicalGraphStr

The method canonicalGraphStr is defined at lines 402-403:

private def canonicalGraphStr(canonical: CanonicalProcess) = Encoder[ScenarioGraph].apply(CanonicalProcessConverter.toScenarioGraph(canonical)).toString()

Earlier, similar logic is used to define exampleScenarioGraphStr. To improve code maintainability:

- Use canonicalGraphStr for exampleScenarioGraphStr by passing exampleScenario as an argument.
- Replace direct calls to Encoder and CanonicalProcessConverter with the helper method wherever applicable.

This reduces code duplication and centralizes the logic for converting a CanonicalProcess to its JSON string representation; see the sketch below.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (3)
designer/client/cypress/e2e/__image_snapshots__/electron/Linux/Expression suggester should display colorfull and sorted completions #0.png is excluded by !**/*.png
designer/client/cypress/e2e/__image_snapshots__/electron/Linux/Expression suggester should display colorfull and sorted completions #1.png is excluded by !**/*.png
designer/client/cypress/e2e/__image_snapshots__/electron/Linux/Fragment should allow adding input parameters and display used fragment graph in modal #7.png is excluded by !**/*.png
📒 Files selected for processing (66)
components-api/src/main/scala/pl/touk/nussknacker/engine/api/context/ProcessCompilationError.scala (1 hunks)
components-api/src/main/scala/pl/touk/nussknacker/engine/api/json/FromJsonDecoder.scala (1 hunks)
components-api/src/main/scala/pl/touk/nussknacker/engine/api/typed/ValueDecoder.scala (2 hunks)
components-api/src/test/scala/pl/touk/nussknacker/engine/api/json/FromJsonDecoderTest.scala (1 hunks)
components-api/src/test/scala/pl/touk/nussknacker/engine/api/typed/TypingResultDecoderSpec.scala (1 hunks)
components-api/src/test/scala/pl/touk/nussknacker/engine/api/typed/ValueDecoderSpec.scala (3 hunks)
components/openapi/src/main/scala/pl/touk/nussknacker/openapi/extractor/HandleResponse.scala (1 hunks)
designer/client/cypress/e2e/fragment.cy.ts (1 hunks)
designer/client/cypress/e2e/labels.cy.ts (1 hunks)
designer/client/cypress/e2e/process.cy.ts (1 hunks)
designer/client/src/components/AddProcessDialog.tsx (3 hunks)
designer/client/src/components/AddProcessForm.tsx (1 hunks)
designer/client/src/components/graph/node-modal/ParametersUtils.ts (1 hunks)
designer/client/src/components/graph/node-modal/editors/expression/AceWrapper.tsx (3 hunks)
designer/client/src/components/graph/node-modal/editors/expression/CustomCompleterAceEditor.tsx (2 hunks)
designer/client/src/components/graph/node-modal/editors/field/MarkdownFormControl.tsx (1 hunks)
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/StringBooleanVariants/AnyValueWithSuggestionVariant.tsx (1 hunks)
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/StringBooleanVariants/FixedListVariant.tsx (1 hunks)
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/FixedValuesSetting.tsx (3 hunks)
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/UserDefinedListInput.tsx (3 hunks)
designer/client/src/components/modals/MoreScenarioDetailsDialog.tsx (4 hunks)
designer/client/src/components/toolbars/scenarioDetails/CategoryDetails.tsx (1 hunks)
designer/client/src/components/toolbars/scenarioDetails/ScenarioDetailsComponents.tsx (2 hunks)
designer/client/src/components/toolbars/scenarioDetails/ScenarioLabels.tsx (5 hunks)
designer/client/src/components/useGetAllCombinations.ts (1 hunks)
designer/restmodel/src/main/scala/pl/touk/nussknacker/restmodel/validation/PrettyValidationErrors.scala (1 hunks)
designer/server/src/main/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepository.scala (2 hunks)
designer/server/src/main/scala/pl/touk/nussknacker/ui/process/test/ScenarioTestService.scala (2 hunks)
designer/server/src/test/scala/pl/touk/nussknacker/ui/api/NodesApiHttpServiceBusinessSpec.scala (1 hunks)
designer/server/src/test/scala/pl/touk/nussknacker/ui/api/TestingApiHttpServiceSpec.scala (4 hunks)
designer/server/src/test/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepositorySpec.scala (2 hunks)
designer/submodules/packages/components/src/common/categoryChip.tsx (3 hunks)
designer/submodules/packages/components/src/common/labelChip.tsx (3 hunks)
designer/submodules/packages/components/src/scenarios/filters/filtersPart.tsx (4 hunks)
designer/submodules/packages/components/src/scenarios/list/item.tsx (2 hunks)
designer/submodules/packages/components/src/scenarios/list/processingMode.tsx (3 hunks)
designer/submodules/packages/components/src/scenarios/list/scenarioAvatar.tsx (1 hunks)
designer/submodules/packages/components/src/scenarios/list/scenarioStatus.tsx (1 hunks)
designer/submodules/packages/components/src/scenarios/list/tableCellAvatar.tsx (2 hunks)
designer/submodules/packages/components/src/scenarios/list/tablePart.tsx (3 hunks)
designer/submodules/packages/components/src/scenarios/useScenariosQuery.tsx (3 hunks)
docs/Changelog.md (5 hunks)
docs/MigrationGuide.md (7 hunks)
docs/scenarios_authoring/Spel.md (2 hunks)
engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala (3 hunks)
engine/flink/management/dev-model/src/test/resources/extractedTypes/devCreator.json (3 hunks)
engine/flink/tests/src/test/resources/extractedTypes/defaultModel.json (3 hunks)
engine/lite/components/request-response/src/main/scala/pl/touk/nussknacker/engine/lite/components/requestresponse/jsonschema/sources/JsonSchemaRequestResponseSource.scala (3 hunks)
scenario-api/src/main/scala/pl/touk/nussknacker/engine/build/GraphBuilder.scala (1 hunks)
scenario-api/src/main/scala/pl/touk/nussknacker/engine/build/ScenarioBuilder.scala (2 hunks)
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/compile/ProcessCompilerData.scala (2 hunks)
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/compile/nodecompilation/ValueEditorValidator.scala (1 hunks)
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/extension/Conversion.scala (0 hunks)
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/extension/ToListConversionExt.scala (3 hunks)
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/extension/ToMapConversionExt.scala (2 hunks)
scenario-compiler/src/test/scala/pl/touk/nussknacker/engine/compile/NodeDataValidatorSpec.scala (1 hunks)
scenario-compiler/src/test/scala/pl/touk/nussknacker/engine/spel/SpelExpressionSpec.scala (3 hunks)
utils/default-helpers/src/main/scala/pl/touk/nussknacker/engine/util/functions/conversion.scala (2 hunks)
utils/default-helpers/src/test/scala/pl/touk/nussknacker/engine/util/functions/ConversionUtilsSpec.scala (1 hunks)
utils/json-utils/src/main/scala/pl/touk/nussknacker/engine/json/serde/CirceJsonDeserializer.scala (2 hunks)
utils/json-utils/src/main/scala/pl/touk/nussknacker/engine/json/swagger/SwaggerTyped.scala (1 hunks)
utils/json-utils/src/main/scala/pl/touk/nussknacker/engine/json/swagger/decode/FromJsonSchemaBasedDecoder.scala (4 hunks)
utils/json-utils/src/test/scala/pl/touk/nussknacker/engine/json/SwaggerBasedJsonSchemaTypeDefinitionExtractorTest.scala (2 hunks)
utils/json-utils/src/test/scala/pl/touk/nussknacker/engine/json/serde/CirceJsonDeserializerSpec.scala (1 hunks)
utils/json-utils/src/test/scala/pl/touk/nussknacker/engine/json/swagger/decode/FromJsonSchemaBasedDecoderTest.scala (8 hunks)
utils/utils/src/main/scala/pl/touk/nussknacker/engine/util/json/JsonUtils.scala (0 hunks)
💤 Files with no reviewable changes (2)
- scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/extension/Conversion.scala
- utils/utils/src/main/scala/pl/touk/nussknacker/engine/util/json/JsonUtils.scala
🧰 Additional context used
📓 Learnings (1)
scenario-compiler/src/test/scala/pl/touk/nussknacker/engine/spel/SpelExpressionSpec.scala (1)
Learnt from: arkadius
PR: TouK/nussknacker#7174
File: scenario-compiler/src/test/scala/pl/touk/nussknacker/engine/spel/SpelExpressionSpec.scala:84-84
Timestamp: 2024-11-19T22:01:38.151Z
Learning: In test classes such as `SpelExpressionSpec.scala` within the `scenario-compiler` module, binary compatibility issues are not a concern because they are used only internally. Therefore, changes to method signatures in these test classes are acceptable.
🪛 LanguageTool
docs/scenarios_authoring/Spel.md
[uncategorized] ~271-~271: Possible missing comma found.
Context: ... cast or convert from a type to another type and this can be done by implicit and ex...
(AI_HYDRA_LEO_MISSING_COMMA)
🔇 Additional comments (100)
components/openapi/src/main/scala/pl/touk/nussknacker/openapi/extractor/HandleResponse.scala (2)
Line range hint 10-17
: LGTM: Null handling preserved
The null/empty handling logic remains unchanged, which is important for backward compatibility:
- Returns empty list for SwaggerArray types when response is None
- Returns null for other types when response is None
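A minimal, self-contained Scala analogue of that behaviour (the boolean flag is a stand-in for the real dispatch on the Swagger type of the response definition):

```scala
// Array-typed responses become an empty list, everything else becomes null,
// which preserves the backward-compatible contract described above.
def handleEmptyResponse(isArrayType: Boolean): AnyRef =
  if (isArrayType) java.util.Collections.emptyList[AnyRef]() // SwaggerArray -> []
  else null                                                  // other types -> null
```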
5-5
: Architectural improvement: Schema-based JSON decoding
The switch to FromJsonSchemaBasedDecoder
is a good architectural choice as it better aligns with OpenAPI/Swagger's schema-based nature. This should provide more robust type validation during JSON decoding.
Let's verify the integration of this change across the codebase:
Also applies to: 12-12
✅ Verification successful
Migration to schema-based JSON decoding is properly integrated
The verification shows that FromJsonSchemaBasedDecoder
is consistently used across the codebase, with proper test coverage in FromJsonSchemaBasedDecoderTest
. The old FromJsonDecoder.jsonToAny
is only used as a fallback for untyped additional properties, which is the correct pattern.
Key findings:
- The decoder is properly integrated in OpenAPI, JSON utils, and request-response components
- Comprehensive test coverage exists for various schema scenarios
- The old decoder's usage is limited to specific fallback cases where schema information isn't available
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the migration from FromJsonDecoder to FromJsonSchemaBasedDecoder
# and ensure no old decoder references remain
# Check for any remaining references to the old decoder
rg "FromJsonDecoder"
# Check the usage pattern of the new decoder
rg -A 3 "FromJsonSchemaBasedDecoder.decode"
# Look for potential test coverage
rg -l "FromJsonSchemaBasedDecoderTest"
Length of output: 14721
designer/submodules/packages/components/src/scenarios/list/scenarioAvatar.tsx (1)
6-6
: LGTM: Clean import addition for i18n support
The useTranslation
hook import is properly placed and necessary for the internationalization changes.
components-api/src/main/scala/pl/touk/nussknacker/engine/api/json/FromJsonDecoder.scala (2)
1-7
: LGTM: Imports are clean and appropriate
The imports are well-organized and necessary for JSON processing and Java interoperability.
10-23
: Verify handling of large JSON inputs
Please ensure that the method handles large JSON inputs safely:
- Deep nested structures could cause stack overflow
- Large numbers might consume excessive memory
- Consider adding size limits for collections
Let's check for existing size limits in the codebase:
✅ Verification successful
Let me search for more specific information about JSON handling and size limits:
Let me search for any size limits or validation in the JSON decoder implementation:
Based on the code review, I can now provide a final response:
The JSON decoder implementation has built-in safeguards against potential issues
The implementation in FromJsonDecoder.jsonToAny
has the following safety measures:
- Numbers are handled safely by picking the narrowest possible type to reduce memory overhead
- Collections are converted to Java collections which have built-in size limits
- The decoder throws IllegalArgumentException for unsupported number formats rather than attempting unsafe conversions
- The implementation uses Circe's JSON parser which has built-in protection against deeply nested structures
The code follows good practices for JSON parsing and no additional size limits are needed as the underlying libraries (Circe and Java Collections) already provide safeguards against malicious inputs.
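To make the narrowing rule concrete, here is a hedged reconstruction using circe's JsonNumber API; it mirrors the documented behaviour, not the actual body of FromJsonDecoder.jsonToAny:

```scala
import io.circe.JsonNumber

// Pick the narrowest representation that fits: Int, then Long, then
// java.math.BigDecimal; anything else is rejected, mirroring the
// IllegalArgumentException behaviour described above.
def narrowNumber(number: JsonNumber): Any =
  number.toInt.map(Int.box)
    .orElse(number.toLong.map(Long.box))
    .orElse(number.toBigDecimal.map(_.bigDecimal))
    .getOrElse(throw new IllegalArgumentException(s"Not supported json number: $number"))
```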
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for JSON parsing configurations and size limits
rg -l "circe.*config" | xargs rg -l "size|limit|max"
# Search for similar JSON decoder implementations to compare approaches
ast-grep --pattern 'def $_(json: Json)'
Length of output: 42206
Script:
#!/bin/bash
# Search for JSON size limits or validation in the codebase
rg -A 5 "json.*size|maxDepth|recursion|stack.*overflow"
# Look for circe configuration related to JSON parsing
rg -A 5 "circe.*Configuration|JsonConfig"
# Search for any custom JSON decoders that might handle size limits
ast-grep --pattern 'class $_ extends (Decoder|FromJsonDecoder)'
Length of output: 131234
Script:
#!/bin/bash
# Look for the implementation of FromJsonDecoder
rg -A 10 "class FromJsonDecoder"
# Search for any validation or size checks in the JSON handling code
rg -A 5 "def jsonToAny|def decode|def fromJson"
Length of output: 30097
designer/client/src/components/graph/node-modal/editors/field/MarkdownFormControl.tsx (1)
18-18
: Verify documentation for this breaking change
The disabled live autocompletion represents a behavioral change that might affect users. According to the PR objectives, breaking changes should be documented.
components-api/src/test/scala/pl/touk/nussknacker/engine/api/json/FromJsonDecoderTest.scala (2)
1-7
: LGTM! Clean and well-organized imports
The imports are specific, relevant, and properly organized following Scala conventions.
8-8
: LGTM! Appropriate test class setup
The test class is well-structured with relevant ScalaTest traits for JSON decoding tests.
designer/submodules/packages/components/src/scenarios/list/tableCellAvatar.tsx (2)
4-4
: LGTM! Clean TypeScript implementation.
The function signature is well-typed using PropsWithChildren
with an optional title prop.
27-27
: LGTM! Good accessibility enhancement.
Adding the title prop improves accessibility by providing tooltip information to users. This aligns well with the internationalization efforts mentioned in the PR summary.
designer/client/src/components/toolbars/scenarioDetails/ScenarioDetailsComponents.tsx (1)
31-31
: Verify translation key existence and consistency
Let's verify that the translation key is properly defined and consistently used across the application.
designer/client/src/components/toolbars/scenarioDetails/CategoryDetails.tsx (2)
1-5
: LGTM! Well-structured imports with good separation of concerns.
The imports are organized logically, separating external dependencies from internal modules. The addition of i18n support demonstrates good internationalization practices.
16-25
: Verify the trailing slash usage and enhance accessibility.
Two points to consider:
- The trailing slash at line 22 (
{scenario.processCategory} /
) might look awkward if this is the last item in a breadcrumb-like display. - Consider adding aria-label for better screen reader support.
Let's check if this is part of a larger breadcrumb structure:
Consider these improvements for accessibility:
<Typography
title={t("panels.scenarioDetails.tooltip.category", "Category")}
+ aria-label={t("panels.scenarioDetails.tooltip.category", "Category")}
variant={"body2"}
>
✅ Verification successful
The trailing slash is intentional and accessibility can be improved
Based on the verification results, the trailing slash is part of a larger component structure where:
- The
CategoryDetails
is rendered before the scenario name inScenarioDetails.tsx
- The pattern of using slashes as separators is consistent across the codebase (seen in scenarios/list/item.tsx)
The original accessibility suggestion for aria-label remains valid:
<Typography
title={t("panels.scenarioDetails.tooltip.category", "Category")}
+ aria-label={t("panels.scenarioDetails.tooltip.category", "Category")}
variant={"body2"}
>
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other components that might be rendering adjacent to CategoryDetails
rg -l "CategoryDetails" --type tsx
Length of output: 62
Script:
#!/bin/bash
# Search for files containing CategoryDetails
rg -l "CategoryDetails"
# Search for the actual usage context of the trailing slash
rg -B 3 -A 3 "processCategory.*/"
Length of output: 3694
Script:
#!/bin/bash
# Let's check how CategoryDetails is used in ScenarioDetails to understand the full context
rg -A 10 "CategoryDetails" designer/client/src/components/toolbars/scenarioDetails/ScenarioDetails.tsx
Length of output: 1563
designer/submodules/packages/components/src/common/labelChip.tsx (1)
3-3
: LGTM! Clean i18n integration.
The internationalization implementation is well-structured and follows React best practices.
Also applies to: 17-17, 31-31
designer/server/src/test/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepositorySpec.scala (2)
8-8
: LGTM!
The import statement is properly placed and necessary for the new test case.
40-42
: LGTM! Well-structured test case.
The test case is well-implemented:
- Tests an important edge case for non-existing fragments
- Uses clear assertions and proper Future handling
- Follows the established testing patterns in the file
Let's verify if there are other similar edge cases being tested:
✅ Verification successful
Let me gather more information about the fragment-related test coverage by searching for other test cases in the FragmentRepositorySpec file.
Test coverage for fragment edge cases is adequate
The test coverage in FragmentRepositorySpec is well-balanced with both positive ("load fragments") and negative ("return None for missing fragment") test cases. The implementation:
- Verifies successful fragment loading and updates
- Handles the non-existing fragment case appropriately
- Uses consistent testing patterns across all cases
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Look for other edge case tests in fragment-related specs
# to ensure consistent test coverage
rg -A 5 "should.*return None" --type scala
Length of output: 1430
Script:
#!/bin/bash
# Get the full content of FragmentRepositorySpec to analyze test coverage
cat designer/server/src/test/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepositorySpec.scala
Length of output: 1799
utils/json-utils/src/main/scala/pl/touk/nussknacker/engine/json/serde/CirceJsonDeserializer.scala (1)
8-8
: LGTM! Consistent transition to schema-based decoder.
The change from FromJsonDecoder
to FromJsonSchemaBasedDecoder
aligns with the specialized JSON schema-based decoding approach. The interface remains compatible while providing more focused functionality.
Let's verify the consistency of this transition across the codebase:
Also applies to: 39-39
designer/submodules/packages/components/src/scenarios/list/scenarioStatus.tsx (1)
34-34
: Verify translation key existence
The implementation looks good and follows the established i18n patterns. However, let's verify that the translation key exists in the translation files.
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/FixedValuesSetting.tsx (1)
34-34
: Verify internationalization pattern consistency
The addition of a customizable label suggests this component is part of an internationalization effort. Let's verify this pattern is consistently applied across similar components.
Also applies to: 52-52
✅ Verification successful
Based on the analysis of the codebase, I can now provide a final response:
Internationalization pattern is consistently implemented
The userDefinedListInputLabel
is part of a consistent i18n pattern across the application. The codebase shows:
- Proper use of
useTranslation
hook fromreact-i18next
in components - Consistent label handling through translation keys
- The component in question (
UserDefinedListInput
) correctly uses the i18n system withconst { t } = useTranslation()
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for similar input label patterns in other components
# Look for components using UserDefinedListInput to ensure consistent labeling approach
# Search for other UserDefinedListInput usages
ast-grep --pattern 'import { UserDefinedListInput } from "$_"'
# Search for similar label prop patterns in other components
rg -l 'inputLabel.*string' --type typescript
# Look for translation usage patterns
rg -l 'useTranslation|i18n|translate' --type typescript
Length of output: 426
Script:
#!/bin/bash
# Let's try a more comprehensive search without file type restrictions
# Search for similar label patterns
rg -l "inputLabel|labelText|label.*string"
# Look for translation patterns
rg -l "useTranslation|i18n|translate"
# Check the specific component and its related files
fd UserDefinedListInput
Length of output: 22419
Script:
#!/bin/bash
# Let's examine the specific component and its related files for i18n patterns
# Check the content of UserDefinedListInput and FixedValuesSetting
cat designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/UserDefinedListInput.tsx
cat designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/FixedValuesSetting.tsx
# Look for i18n setup and translation usage in the designer/client
rg "useTranslation|i18next" designer/client/src/
Length of output: 69262
utils/default-helpers/src/main/scala/pl/touk/nussknacker/engine/util/functions/conversion.scala (2)
5-5
: LGTM: Import change aligns with the new JSON decoding architecture
The addition of FromJsonDecoder
import is consistent with the architectural changes moving away from JsonUtils
.
45-45
: Verify backward compatibility of FromJsonDecoder
The implementation change looks good, but let's ensure the new decoder maintains compatibility with existing JSON parsing behavior.
✅ Verification successful
Let's examine the test implementation to verify compatibility.
Let's check the implementation of both old and new decoders to compare their behavior.
FromJsonDecoder maintains backward compatibility
The implementation and tests show that FromJsonDecoder maintains full compatibility:
- Handles all JSON types (null, boolean, numbers, string, array, object)
- Preserves number type narrowing (Int -> Long -> BigDecimal)
- Maintains Java collection conversion for arrays and objects
- Has comprehensive test coverage for number type handling
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for tests that verify FromJsonDecoder behavior
rg -l "FromJsonDecoderTest|JsonDecoderSpec" --type scala
# Look for any custom JSON parsing patterns that might be affected
ast-grep --pattern 'jsonToAny($$$)'
Length of output: 2517
Script:
#!/bin/bash
# Check the test implementation for FromJsonDecoder
cat components-api/src/test/scala/pl/touk/nussknacker/engine/api/json/FromJsonDecoderTest.scala
# Look for the actual implementation of FromJsonDecoder
rg -p "class FromJsonDecoder" --type scala -A 10
Length of output: 1899
Script:
#!/bin/bash
# Find JsonUtils implementation for comparison
rg -p "JsonUtils.*jsonToAny" --type scala -A 5
# Find FromJsonDecoder implementation
rg -p "object FromJsonDecoder" --type scala -A 15
Length of output: 905
designer/submodules/packages/components/src/common/categoryChip.tsx (3)
3-3
: LGTM: Clean import addition for i18n support
The import of useTranslation
is properly placed and necessary for the internationalization changes.
27-27
: LGTM: Proper hook initialization
The useTranslation
hook is correctly initialized following React hooks best practices.
41-41
: Verify translation key existence across language files
The translation implementation looks good, following i18n best practices with proper key structure and fallback. However, we should verify the translation key exists in all language files to ensure consistent localization.
designer/server/src/main/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepository.scala (2)
3-3
: LGTM: Clean import addition for functional operations
The addition of cats implicits is appropriate for the new functional approach using sequence operations.
49-50
: Consider adding documentation for the transformation chain
While the new functional approach is cleaner, the chain of transformations (sequence
, flatten
, etc.) could benefit from documentation explaining each step. Also, verify that all edge cases are properly handled.
Consider adding comments explaining the transformation:
.flatMap(_.map(processRepository.fetchLatestProcessDetailsForProcessId[CanonicalProcess])
+ // Sequence the Option[Future] to Future[Option] and flatten the nested Options
.sequence.map(_.flatten))
.map(_.map(_.json))
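For readers unfamiliar with the cats combinators involved, a minimal, self-contained illustration of why sequence plus flatten is the right shape here (the repository call is stubbed with an already-completed Future):

```scala
import cats.implicits._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Fetching "latest details for an optional process id" naturally produces
// Option[Future[Option[A]]]; sequence swaps the outer layers and flatten
// collapses the doubled Option.
val maybeFetch: Option[Future[Option[String]]] =
  Some(Future.successful(Some("fragment-json")))

val result: Future[Option[String]] =
  maybeFetch.sequence // Future[Option[Option[String]]]
    .map(_.flatten)   // Future[Option[String]]
```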
✅ Verification successful
Based on the code review and test analysis, I can now provide a final response:
The error handling changes are well-tested and safe to use
The new implementation:
- Uses monadic operations (
sequence
,flatten
) to handle nested Options/Futures in a functional way - Has test coverage for both success case (loading fragments) and edge case (missing fragment)
- Maintains backward compatibility with sync methods while encouraging async usage
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for test cases covering edge cases in fragment fetching
rg -A 5 "fetchLatestFragment.*None" --type scala
Length of output: 575
Script:
#!/bin/bash
# Let's check the test file content to understand the test coverage better
cat designer/server/src/test/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepositorySpec.scala
Length of output: 1799
Script:
#!/bin/bash
# Let's check the implementation of ProcessTestData to understand the test cases better
rg -l "ProcessTestData" --type scala | xargs cat
Length of output: 131144
Script:
#!/bin/bash
# Let's check the implementation of FragmentRepository to understand the error handling changes
cat designer/server/src/main/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepository.scala
Length of output: 2225
Script:
#!/bin/bash
# Let's check the test coverage for edge cases in FragmentRepositorySpec
cat designer/server/src/test/scala/pl/touk/nussknacker/ui/process/fragment/FragmentRepositorySpec.scala
Length of output: 1799
designer/client/src/components/graph/node-modal/ParametersUtils.ts (1)
Line range hint 45-52
: Address TODO comment and improve type safety.
The code has several areas for improvement:
- The TODO comment about unsupported dynamic branch parameters needs attention
- Missing null check for
currentParameters
- The find operation could be optimized for larger parameter sets
Let's verify the impact of dynamic branch parameters:
Consider applying these improvements:
const currentNode = cloneDeep(node);
const currentParameters = get(currentNode, path);
- //TODO: currently dynamic branch parameters are *not* supported...
+ if (!Array.isArray(currentParameters)) {
+ console.warn(`Invalid parameters at ${path}: expected array but got ${typeof currentParameters}`);
+ return { adjustedNode: node, unusedParameters: [] };
+ }
+
+ // Create a map for faster parameter lookup
+ const currentParamsMap = new Map(
+ currentParameters.map(p => [p.name, p])
+ );
+
const adjustedParameters = parameterDefinitions
.filter((def) => !def.branchParam)
.map((def) => {
- const currentParam = currentParameters.find((p) => p.name == def.name);
+ const currentParam = currentParamsMap.get(def.name);
const parameterFromDefinition = {
name: def.name,
expression: def.defaultValue,
};
return currentParam || parameterFromDefinition;
});
designer/submodules/packages/components/src/scenarios/list/processingMode.tsx (1)
9-9
: LGTM! The i18n integration looks good.
The internationalization implementation follows React best practices by using the useTranslation
hook and properly structuring the translation keys.
Also applies to: 40-40, 63-63
components-api/src/test/scala/pl/touk/nussknacker/engine/api/typed/ValueDecoderSpec.scala (1)
Line range hint 36-48
: Breaking change: ValueDecoder now ignores missing fields instead of failing
This represents a significant change in behavior where missing required fields are now ignored instead of causing validation failures. This could impact existing implementations that rely on strict field validation.
Let's verify if this change is documented:
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/StringBooleanVariants/FixedListVariant.tsx (1)
46-46
: LGTM! Verify translation key coverage.
The addition of the internationalized label follows React and i18n best practices, maintaining consistency with other translated strings in the component.
Let's verify that the translation key exists in all language files:
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/compile/nodecompilation/ValueEditorValidator.scala (1)
62-62
: LGTM! Please ensure breaking changes are documented.
The renaming of the error case from RequireValueFromEmptyFixedList
to EmptyFixedListForRequiredField
improves code clarity by better describing the error condition. However, this appears to be a breaking change.
Let's verify if this breaking change is properly documented:
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/StringBooleanVariants/AnyValueWithSuggestionVariant.tsx (2)
47-47
: Consider standardizing the translation key structure.
The translation key fragment.userDefinedList.label.suggestedValues
seems to introduce a new nesting level with userDefinedList.label
. Verify if this follows the project's translation key conventions.
✅ Verification successful
Let me search for more translation key patterns to get a better understanding of the convention used in the project.
Let me correct the file pattern in the search.
Translation key structure follows project conventions
The translation key fragment.userDefinedList.label.suggestedValues
is consistent with the project's patterns. We can see similar structures being used:
- Direct keys like
fragment.hintText
,fragment.required
- Nested keys like
fragment.validation.validation
,fragment.validation.validationExpression
- Similar label patterns like
fragment.userDefinedList.label.possibleValues
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for similar translation key patterns to ensure consistency
rg "fragment\.[a-zA-Z]+\.label\." -g '*.{json,ts,tsx}'
Length of output: 571
Script:
#!/bin/bash
# Search for translation keys starting with 'fragment.' to understand the broader pattern
rg '"fragment\.' --type ts --type tsx -A 1
Length of output: 72
Script:
#!/bin/bash
# Search for translation keys starting with 'fragment.' to understand the broader pattern
rg '"fragment\.' -g '*.{ts,tsx}' -A 1
Length of output: 8773
Line range hint 1-83
: LGTM! The changes look good overall.
The addition of the translated label prop enhances the component's internationalization support while maintaining type safety through proper TypeScript interfaces.
✅ Verification successful
The userDefinedListInputLabel
prop is properly typed and used correctly
The verification confirms that:
- The
FixedValuesSetting
interface properly includesuserDefinedListInputLabel: string
- The prop is correctly passed through to the
UserDefinedListInput
component asinputLabel
- The implementation in
AnyValueWithSuggestionVariant
correctly provides a translated string for this prop
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify that FixedValuesSetting component properly types the new prop
ast-grep --pattern 'interface $_ {
$$$
userDefinedListInputLabel: string;
$$$
}'
Length of output: 88
Script:
#!/bin/bash
# Let's first find the FixedValuesSetting component definition
ast-grep --pattern 'export const FixedValuesSetting = ({$$$}: $_) => {'
# Also search for any Props interface that might be related to FixedValuesSetting
ast-grep --pattern 'interface $_ {
$$$
fixedValuesList: $_
$$$
}'
# Backup search using ripgrep to find the component definition
rg "FixedValuesSetting" -A 5
Length of output: 10318
Script:
#!/bin/bash
# Let's check the content of the FixedValuesSetting component file to see its interface and implementation
cat designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/FixedValuesSetting.tsx
Length of output: 2054
utils/default-helpers/src/test/scala/pl/touk/nussknacker/engine/util/functions/ConversionUtilsSpec.scala (2)
24-24
: Consistent type simplification across array and object parsing.
The changes in array elements and object values from JBigDecimal.valueOf(1)
to 1
maintain consistency with the integer parsing changes above. This unified approach to number handling across different JSON structures is good.
Let's verify test coverage for these scenarios:
Also applies to: 30-30
✅ Verification successful
Based on the search results, I can see comprehensive test coverage in ConversionUtilsSpec.scala
that verifies JSON parsing for both arrays and objects. The tests include:
- Empty array:
"#CONV.toJson('[]')"
- Array with empty object:
"#CONV.toJson('[{}]')"
- Array with mixed types:
"#CONV.toJson('[1, \"str\", true]')"
- Empty object:
"#CONV.toJson('{}')
- Complex object with nested structures:
"#CONV.toJson('{ \"a\": 1, \"b\": true, \"c\": \"str\", \"d\": [], \"e\": {} }')"
- Error cases for invalid JSON
- Null handling with
toJsonOrNull
The tests verify both the successful parsing scenarios and error cases, ensuring consistent type handling across different JSON structures.
Type handling is consistently verified across array and object parsing.
The test coverage confirms that the changes in array elements from JBigDecimal.valueOf(1)
to 1
are well-tested and maintain consistency with integer parsing throughout different JSON structures.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check test coverage for number handling in arrays and objects
ast-grep --pattern 'test($name: $_) {
$$$
case $num: JsonArray => $$$
case $obj: JsonObject => $$$
}'
Length of output: 109
Script:
#!/bin/bash
# Let's check for test cases involving JSON array and object parsing
rg -l "toJson.*\[.*\]|toJson.*\{.*\}" --type scala
# Then examine the content of these files
rg "toJson.*(\[|\{)" -A 3 -B 3 --type scala
Length of output: 28574
19-20
: Verify consistency with FromJsonDecoder implementation.
The change from JBigDecimal
to native Scala integer type for the test case #CONV.toJson('1')
aligns with the new FromJsonDecoder
implementation. However, note that 1.234
is still using JBigDecimal
, which seems intentional for floating-point numbers.
Let's verify the decoder's behavior:
✅ Verification successful
Based on the search results, I can now provide a definitive response about the number handling in the codebase:
The test changes correctly reflect the JSON number handling implementation
The changes in the test file are consistent with the JSON number handling implementation found in FromJsonSchemaBasedDecoder.scala
. The decoder handles numbers differently based on their type:
- Integers are converted to native types (
n.toDouble.toInt
) - Floating-point numbers are preserved as
BigDecimal
(_.toBigDecimal.map(_.bigDecimal).orNull
)
This explains why:
#CONV.toJson('1')
expects1
(native integer)#CONV.toJson('1.234')
expectsJBigDecimal.valueOf(1.234)
(decimal number)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check FromJsonDecoder implementation for number handling
ast-grep --pattern 'def jsonToAny($value: $_): $_ = {
$$$
case $num: JsonNumber => $$$
}'
Length of output: 95
Script:
#!/bin/bash
# Let's try to find the FromJsonDecoder implementation with a broader pattern
ast-grep --pattern 'class FromJsonDecoder {
$$$
}'
# Also search for any number-related conversion methods
rg -A 5 "JsonNumber|JBigDecimal" --type scala
Length of output: 21287
components-api/src/main/scala/pl/touk/nussknacker/engine/api/typed/ValueDecoder.scala (3)
5-5
: LGTM: Import statement is correctly placed
The new import for FromJsonDecoder
is appropriately placed and necessary for the new JSON decoding functionality.
68-70
: Verify type safety implications of generic JSON conversion
The fallback to generic JSON conversion for Unknown types is a pragmatic solution, but we should ensure it doesn't mask type errors in production.
Let's check for potential type conversion issues:
✅ Verification successful
Based on the gathered information, I can now provide a final response regarding the Unknown type handling and its safety implications:
The generic JSON conversion for Unknown type is a well-implemented fallback mechanism
The implementation in FromJsonDecoder.jsonToAny is robust and safe because:
- It handles all JSON types with appropriate type narrowing
- Numbers are converted to the most specific type (Int -> Long -> BigDecimal)
- Collections are properly handled with recursive conversion
- Null values are explicitly handled
- It includes proper error handling for unsupported number formats
The comment style issue is minor compared to the actual implementation quality. The triple slash can be kept as it provides good visibility for this important documentation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for places where Unknown type is used to understand the impact
rg "Unknown[^a-zA-Z]" --type scala
# Look for existing type conversion error handling
rg "TypeConversion.*Error" --type scala
Length of output: 98338
Script:
#!/bin/bash
# Search for FromJsonDecoder.jsonToAny usage and implementation
rg "FromJsonDecoder.jsonToAny" --type scala -A 5
ast-grep --pattern 'object FromJsonDecoder {
$$$
def jsonToAny($_): $_ {
$$$
}
$$$
}'
Length of output: 6639
Script:
#!/bin/bash
# Let's check the implementation of FromJsonDecoder.jsonToAny
rg "object FromJsonDecoder" -A 20 --type scala
# Also check for any error handling or validation related to this conversion
rg "DecodingFailure|JsonToObjectError" --type scala -A 3
Length of output: 12131
62-65
: LGTM: Improved robustness in field handling
The change to use getOrElse(fieldName, Unknown)
makes the decoder more resilient by gracefully handling missing fields instead of failing. This is a good defensive programming practice.
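A tiny, self-contained analogue of the pattern with plain Scala types (the real code maps field names to TypingResults and uses Unknown as the fallback):

```scala
// An unknown key yields a fallback instead of a NoSuchElementException; the
// "Unknown" branch is where the generic FromJsonDecoder.jsonToAny fallback
// described earlier kicks in.
val fieldTypes: Map[String, String] = Map("knownField" -> "Int")

def fieldTypeFor(name: String): String =
  fieldTypes.getOrElse(name, "Unknown") // mirrors fields.getOrElse(fieldName, Unknown)

assert(fieldTypeFor("missingField") == "Unknown") // graceful, no exception
```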
Let's verify that this change doesn't break existing tests:
✅ Verification successful
LGTM: Change is well-tested and consistent with existing behavior
The change to use getOrElse(fieldName, Unknown)
is verified to be correct as there's an existing test case "decodeValue should ignore missing Record field" that explicitly verifies this behavior. The implementation aligns with the test expectations where missing fields are gracefully handled rather than causing failures.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for related test cases that verify field handling
rg -l "ValueDecoder.*missing.*field" --type scala
Length of output: 51
Script:
#!/bin/bash
# Let's try a broader search for ValueDecoder tests
rg -l "ValueDecoder.*Test" --type scala
# Also search for any test cases involving record field decoding
rg -l "record.*field.*decode" --type scala --ignore-case
# Look for the actual implementation and usage of Unknown type
ast-grep --pattern 'case object Unknown'
Length of output: 1056
Script:
#!/bin/bash
# Let's look at the broader context of ValueDecoder implementation and its error handling
rg "class ValueDecoder" -A 10 --type scala
# Search for any existing error handling patterns in ValueDecoder
rg "ValueDecoder.*error" --type scala
# Look for record field decoding implementation context
ast-grep --pattern 'decodeValue($_, $_)'
Length of output: 1808
Script:
#!/bin/bash
# Let's see the full test file content to understand the test coverage
rg "test.*decode.*Record" -A 20 components-api/src/test/scala/pl/touk/nussknacker/engine/api/typed/ValueDecoderSpec.scala
# Also check if there are any existing tests for missing fields
rg "missing" components-api/src/test/scala/pl/touk/nussknacker/engine/api/typed/ValueDecoderSpec.scala
Length of output: 1389
components-api/src/test/scala/pl/touk/nussknacker/engine/api/typed/TypingResultDecoderSpec.scala (2)
60-68
: Verify test data completeness for record field validation.
The test case creates a record type with fields "a" and "b" of type Int, but the test data only includes partial field presence (Map("a" -> 1)
and Map("b" -> 2)
). Consider adding a test case that validates the decoder's behavior when both fields are present in a single map.
Consider adding this test case:
typedListWithElementValues(
Typed.record(
List(
"a" -> Typed.typedClass[Int],
"b" -> Typed.typedClass[Int]
)
),
- List(Map("a" -> 1).asJava, Map("b" -> 2).asJava).asJava
+ List(
+ Map("a" -> 1).asJava,
+ Map("b" -> 2).asJava,
+ Map("a" -> 3, "b" -> 4).asJava
+ ).asJava
),
69-72
: LGTM! Good test coverage for Unknown type handling.
The test case effectively validates the decoder's ability to handle Unknown types with heterogeneous collections, which is an important edge case.
designer/client/src/components/graph/node-modal/editors/expression/CustomCompleterAceEditor.tsx (1)
31-31
: Verify the impact of the new autocompletion behavior.
The addition of enableLiveAutocompletion
prop could affect editor performance and user experience across different instances of the component.
Also applies to: 35-35, 69-69
✅ Verification successful
No impact from enableLiveAutocompletion prop addition
The enableLiveAutocompletion
prop is properly handled across the codebase:
- It defaults to
true
inAceWrapper.tsx
- It's explicitly set to
false
where needed (MarkdownFormControl, JsonEditor) - Other instances inherit the default behavior
The addition of this optional prop maintains consistent behavior and doesn't introduce any breaking changes or performance concerns.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all instances of CustomCompleterAceEditor to assess the impact
# and ensure consistent behavior across the application
# Find direct usage of the component
echo "Direct component usage:"
rg -A 2 "CustomCompleterAceEditor" --type typescript --type tsx
# Find instances where enableLiveAutocompletion is explicitly set
echo "\nExplicit enableLiveAutocompletion usage:"
rg "enableLiveAutocompletion[=:]" --type typescript --type tsx
Length of output: 350
Script:
#!/bin/bash
# Let's try with different file extensions and without explicit type flags
echo "Finding CustomCompleterAceEditor usage:"
rg -A 2 "CustomCompleterAceEditor" "*.ts" "*.tsx"
echo -e "\nFinding enableLiveAutocompletion usage:"
rg "enableLiveAutocompletion[=:]" "*.ts" "*.tsx"
# Let's also try to find all TypeScript/TSX files first
echo -e "\nListing all TypeScript files:"
fd -e ts -e tsx
Length of output: 53700
Script:
#!/bin/bash
# Let's search for the usage of CustomCompleterAceEditor and its props
echo "Searching for CustomCompleterAceEditor usage and implementation:"
rg -l "CustomCompleterAceEditor" | xargs cat
# Also search for any direct references to enableLiveAutocompletion
echo -e "\nSearching for enableLiveAutocompletion usage:"
rg "enableLiveAutocompletion"
Length of output: 9216
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/compile/ProcessCompilerData.scala (1)
50-50
: Verify the necessity of dual evaluator instances.
The code now uses both optimized and unoptimized evaluators in different contexts. While this addresses the serialization issues, it could impact performance and maintainability.
Let's verify if this pattern is consistent across the codebase:
✅ Verification successful
Based on the search results, I can now generate the final response. The code shows clear evidence that the dual evaluator approach is intentional and necessary for handling serialization issues:
Dual evaluator usage is correctly implemented for serialization safety
The unoptimized evaluator is specifically used for validation to avoid serialization issues with listeners during scenario testing, while the optimized evaluator is used for runtime performance. This is a deliberate design pattern supported by:
- Clear documentation in the code comments explaining the rationale
- Consistent implementation across the codebase with proper separation of concerns
- Configuration options like
avroKryoGenericRecordSchemaIdSerialization
showing serialization handling is a key concern
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other instances where both evaluators are used
rg -A 5 "unOptimizedEvaluator|optimizedEvaluator" --type scala
# Look for any related test cases that verify this behavior
rg -A 10 "ValidationExpressionParameterValidator.*serialization" --type scala
Length of output: 10856
Script:
#!/bin/bash
# Let's look for the ValidationExpressionParameterValidator class to understand its usage
ast-grep --pattern 'class ValidationExpressionParameterValidator {
$$$
}'
# Also search for any comments or documentation about serialization issues
rg -B 2 -A 2 "serialization|Serialization" --type scala
Length of output: 131208
scenario-api/src/main/scala/pl/touk/nussknacker/engine/build/ScenarioBuilder.scala (1)
10-11
: LGTM: Import statements are properly scoped
The new imports for FragmentParameter
and SubsequentNode
are necessary for the added functionality and are appropriately specific.
utils/json-utils/src/main/scala/pl/touk/nussknacker/engine/json/swagger/decode/FromJsonSchemaBasedDecoder.scala (1)
Line range hint 1-136
: Verify usage patterns across the codebase
Since this is a significant refactoring from FromJsonDecoder
to FromJsonSchemaBasedDecoder
, let's verify:
- All usages have been updated
- No deprecated patterns remain
✅ Verification successful
Based on the search results, I can generate the final response as we have comprehensive information about the usage patterns:
No issues found with the decoder refactoring
The codebase shows a clean transition to FromJsonSchemaBasedDecoder
:
- All usages are in test files or proper implementation files
- No legacy
FromJsonDecoder.decode
calls remain - The new decoder is consistently used across different modules (utils, engine, components)
- The null handling pattern is properly implemented in the new decoder
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining references to the old decoder
rg "FromJsonDecoder\.decode" --type scala
# Search for usage patterns of the new decoder
rg "FromJsonSchemaBasedDecoder\.decode" --type scala -A 2
# Look for potential null handling patterns that might need updating
ast-grep --pattern 'if ($x.isNull) { null }'
Length of output: 9233
designer/submodules/packages/components/src/scenarios/list/item.tsx (2)
16-16
: LGTM! Import statement follows React conventions.
The import statement for the useScenariosWithCategoryVisible
hook is properly placed and follows React naming conventions.
64-73
: LGTM! Clean implementation of conditional rendering.
The implementation correctly uses the new hook and follows React best practices for conditional rendering with proper fragment usage.
designer/submodules/packages/components/src/scenarios/useScenariosQuery.tsx (3)
5-5
: LGTM: Imports and constants are well-structured
The new imports and query key constant follow the established patterns and naming conventions.
Also applies to: 13-13
123-123
: LGTM: Minor style improvement
The added trailing comma improves code consistency, and the comment clearly explains the id field requirement.
129-141
: Verify the integration with dependent components
Let's verify how this hook is used in the UI components and ensure proper error handling.
✅ Verification successful
Based on the search results, I can now generate the final response:
Hook integration verified and properly used
The useScenariosWithCategoryVisible
hook is used consistently in three components:
TablePart
: Uses the visibility flag to conditionally render columnsFiltersPart
: Integrates with filter optionsCategory
component initem.tsx
: Properly handles conditional rendering withwithCategoriesVisible && (...)
The hook's return value is correctly destructured and used with proper null-safety through the data?.combinations || []
fallback. The components appropriately handle the boolean flag for UI rendering decisions.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for usages of the new hook and verify error handling
echo "Searching for usages of useScenariosWithCategoryVisible..."
rg "useScenariosWithCategoryVisible" -A 5
echo "Checking for error boundary implementations in parent components..."
ast-grep --pattern 'class $_ extends ErrorBoundary {
$$$
}'
Length of output: 5348
engine/lite/components/request-response/src/main/scala/pl/touk/nussknacker/engine/lite/components/requestresponse/jsonschema/sources/JsonSchemaRequestResponseSource.scala (2)
16-16
: LGTM: Import change aligns with the codebase refactoring.
The replacement of FromJsonDecoder
with FromJsonSchemaBasedDecoder
is consistent with the broader effort to standardize JSON decoding mechanisms across the codebase.
97-97
: LGTM: Consistent decoder usage across both branches.
The changes consistently use FromJsonSchemaBasedDecoder.decode
for both combined and regular schemas.
Let's verify the impact of this change on other components:
Also applies to: 108-108
✅ Verification successful
Based on the search results, I can now provide a final response:
Decoder changes are safe and well-tested
The changes to use FromJsonSchemaBasedDecoder.decode
are consistent with the codebase's handling of combined schemas. The evidence shows:
- The decoder is used in conjunction with
CombinedSchema
in multiple test cases (KafkaJsonRefFunctionalSpec) - The implementation in JsonSchemaRequestResponseSource properly handles both combined and regular schemas
- The existing TODO comment about "first matched schema" approach is a known limitation, not a regression
The search results confirm that:
FromJsonSchemaBasedDecoder
is the standard decoder for JSON schema-based types- The decoder is tested with various schema combinations including
CombinedSchema.anyOf
- The changes maintain consistent behavior with the rest of the codebase
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining usages of the old FromJsonDecoder
rg "FromJsonDecoder"
# Search for test coverage of the new decoder with combined schemas
rg -A 5 "CombinedSchema.*FromJsonSchemaBasedDecoder"
Length of output: 2845
Script:
#!/bin/bash
# Let's search for test coverage of combined schema handling
rg -A 10 "handleSchemaWithUnionTypes|CombinedSchema" "engine/lite/components/request-response/src/test"
# Also check for any direct usages of FromJsonSchemaBasedDecoder in tests
rg -A 5 "FromJsonSchemaBasedDecoder" "engine/lite/components/request-response/src/test"
# Look for any test files related to JsonSchemaRequestResponseSource
fd "JsonSchemaRequestResponseSource.*Test.*\.scala" "engine/lite/components/request-response/src/test"
Length of output: 606
Script:
#!/bin/bash
# Let's try searching in the entire codebase for test files and usages
rg -A 10 "handleSchemaWithUnionTypes|CombinedSchema" --type scala
# Search for test files related to JsonSchemaRequestResponseSource
fd "JsonSchemaRequestResponseSource.*Test.*\.scala"
# Look for any test cases involving combined schemas
rg -A 5 "test.*CombinedSchema" --type scala
Length of output: 25885
designer/client/src/components/modals/MoreScenarioDetailsDialog.tsx (2)
43-47
: LGTM! Clean hook implementation
The hook usage is well-structured and properly initialized with the required scenario properties.
87-92
: LGTM! Well-structured conditional rendering
The category field implementation follows the established pattern and maintains consistency with other fields while properly handling visibility.
designer/client/src/components/AddProcessDialog.tsx (2)
6-6
: LGTM! Good architectural improvement.
The removal of direct dependency on ScenarioParametersCombination
and introduction of useGetAllCombinations
hook improves code organization by encapsulating data fetching logic.
Also applies to: 15-15
26-31
: 🛠️ Refactor suggestion
Consider safer initialization of ProcessingMode.
The type assertion "" as ProcessingMode
might be unsafe if ProcessingMode
doesn't include empty string as a valid value. Consider initializing with a valid ProcessingMode
value or using a proper type guard.
Let's verify the ProcessingMode
type definition:
Consider this safer approach:
const [value, setState] = useState<FormValue>({
processName: "",
processCategory: "",
- processingMode: "" as ProcessingMode,
+ processingMode: undefined as ProcessingMode | undefined,
processEngine: "",
});
designer/client/src/components/graph/node-modal/editors/expression/AceWrapper.tsx (3)
28-28
: LGTM: Well-typed prop addition
The new optional prop maintains backward compatibility and follows TypeScript best practices.
39-41
: LGTM: Fixed typo in constant name
The constant name has been corrected from DEFAULF_EDITOR_PROPS
to DEFAULT_EDITOR_PROPS
.
136-144
: LGTM: Verify impact on existing editor instances
The implementation correctly makes autocompletion configurable while maintaining backward compatibility through the default value of true
.
Let's verify that existing editor instances aren't negatively affected:
Also applies to: 194-197
✅ Verification successful
LGTM: Change is safe and maintains backward compatibility
The implementation is safe because:
- The only direct usage of
AceWrapper
is inAceWithSettings
which doesn't override the defaultenableLiveAutocompletion=true
AceWithSettings
spreads its props toAceWrapper
while only explicitly overridingcommands
,showLineNumbers
, andwrapEnabled
- The default value of
true
ensures existing instances maintain the same behavior
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for existing AceWrapper usages to ensure they handle the new prop correctly
rg -A 2 '<AceWrapper' --type typescript --type tsx
Length of output: 87
Script:
#!/bin/bash
# Search for AceWrapper usages with ripgrep, without specifying file types
rg -A 2 '<AceWrapper'
Length of output: 728
Script:
#!/bin/bash
# Let's check the content of AceWithSettings to understand how it uses AceWrapper
cat designer/client/src/components/graph/node-modal/editors/expression/AceWithSettings.tsx
Length of output: 2773
Script:
#!/bin/bash
# Let's check where AceWithSettings is used to understand the full usage chain
rg -A 2 '<AceWithSettings'
Length of output: 28
designer/server/src/main/scala/pl/touk/nussknacker/ui/process/test/ScenarioTestService.scala (2)
6-7: LGTM: Import changes are appropriate. The added imports for DualParameterEditor and DualEditorMode are necessary for the updated editor assignment logic.
135-139: Add documentation to explain the editor simplification logic. While the pattern matching is well-structured, the business logic behind simplifying DualParameterEditor to StringParameterEditor isn't immediately clear. Consider adding a comment explaining:
- why we simplify to StringParameterEditor in this specific case
- the significance of DualEditorMode.RAW
- the implications of the canBeSubclassOf(Typed[String]) check
Let's verify the usage of this transformation:
Consider adding a test case specifically for this editor transformation pattern to ensure the behavior is preserved during future refactoring.
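To make the intent concrete, here is a minimal sketch of the guarded match and the comment being asked for, built only from the names quoted in this review; the surrounding method in ScenarioTestService, its exact signature, and the import paths are assumptions, not copied from the file:

```scala
import pl.touk.nussknacker.engine.api.definition.{DualParameterEditor, ParameterEditor, StringParameterEditor}
import pl.touk.nussknacker.engine.api.editor.DualEditorMode
import pl.touk.nussknacker.engine.api.typed.typing.{Typed, TypingResult}

// A dual editor that defaults to RAW mode adds nothing over a plain string
// editor when the parameter type can be treated as a String, so the test
// form can fall back to the simpler editor.
def simplifyEditor(editor: ParameterEditor, typ: TypingResult): ParameterEditor =
  editor match {
    case DualParameterEditor(_, DualEditorMode.RAW) if typ.canBeSubclassOf(Typed[String]) =>
      StringParameterEditor
    case other => other
  }
```

A unit test would then pin the RAW-plus-String case to StringParameterEditor and assert that every other editor passes through unchanged.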
designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/fields/UserDefinedListInput.tsx (2)
30-30: LGTM! Clean Props interface extension. The addition of the inputLabel property to the Props interface is well-typed and properly positioned.
43-43: LGTM! Consistent props destructuring. The inputLabel prop is correctly destructured and maintains the same ordering as the Props interface.
designer/submodules/packages/components/src/scenarios/filters/filtersPart.tsx (4)
5-5: LGTM! The import statement is correctly organized, grouping related hooks from the same module.
24-24: LGTM! The hook usage follows React conventions and maintains clean code structure.
106-118: LGTM! The conditional rendering of the category filter is well-implemented:
- Follows consistent patterns with other filters
- Properly integrates translations
- Correctly implements event tracking
- Maintains code structure and readability
106-118: Verify consistent implementation of category visibility. Let's ensure that the category visibility feature is consistently implemented across related components.
✅ Verification successful
Category visibility feature is consistently implemented
The verification shows that the category visibility feature is properly implemented across the codebase:
- The visibility is controlled by the useScenariosWithCategoryVisible hook, which determines visibility based on category combinations
- The feature is consistently used in multiple components:
  - filtersPart.tsx: conditionally renders the category filter
  - tablePart.tsx: conditionally includes the category column
  - item.tsx: conditionally renders category information
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other usages of withCategoriesVisible to verify consistent implementation
rg -l "withCategoriesVisible" --type ts --type tsx
# Search for FilterMenu components to verify consistent conditional rendering pattern
ast-grep --pattern 'FilterMenu label={$_} count={$_}' --lang tsx
Length of output: 145
Script:
#!/bin/bash
# Search for withCategoriesVisible with proper file extensions
rg -l "withCategoriesVisible" --type-add 'tsx:*.tsx' --type-add 'ts:*.ts' --type tsx --type ts
# Search for FilterMenu components with proper context
rg "FilterMenu.*label.*count" --type-add 'tsx:*.tsx' --type-add 'ts:*.ts' --type tsx --type ts -A 2
# Search for category-related filters
rg "CATEGORY.*filter" --type-add 'tsx:*.tsx' --type-add 'ts:*.ts' --type tsx --type ts
Length of output: 6535
Script:
#!/bin/bash
# Let's specifically look for withCategoriesVisible implementation
rg "withCategoriesVisible" --type-add 'tsx:*.tsx' --type-add 'ts:*.ts' --type tsx --type ts -B 2 -A 2
# Look for props or interfaces that might define this property
rg "interface.*Props" --type-add 'tsx:*.tsx' --type-add 'ts:*.ts' --type tsx --type ts -A 5 | rg -A 5 -B 5 "Categories"
Length of output: 5747
scenario-api/src/main/scala/pl/touk/nussknacker/engine/build/GraphBuilder.scala (1)
111-120: Implementation looks good! The method follows the established patterns in the codebase and provides a clean way to create fragment inputs with raw parameters. The implementation is consistent with other builder methods and maintains the fluent API style.
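For contrast, the established fluent style for building a fragment body looks roughly like the sketch below. This shows the existing API only; the new raw-parameter method's name and signature are not visible in this diff, so they are deliberately not reproduced here:

```scala
import pl.touk.nussknacker.engine.build.GraphBuilder

// Existing style: fragment input parameters as (name -> Class) pairs,
// closed by a fragment output, in one fluent chain. The new method
// presumably accepts fully-specified raw parameters in the same position.
val fragmentBody = GraphBuilder
  .fragmentInput("start", "param" -> classOf[String])
  .fragmentOutput("out1", "output")
```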
designer/submodules/packages/components/src/scenarios/list/tablePart.tsx (2)
14-14: LGTM! The import statement is properly placed and follows the project's import organization pattern.
22-22: LGTM! The hook usage follows React best practices and uses clear, descriptive naming.
utils/json-utils/src/test/scala/pl/touk/nussknacker/engine/json/serde/CirceJsonDeserializerSpec.scala (1)
287-287: LGTM! The change improves type consistency across tests. The change from BigDecimal to Long for additionalInt aligns with how integers are handled in other test cases throughout the file, making the test assertions more consistent.
Let's verify if this is part of a broader standardization effort:
✅ Verification successful
Let me analyze the results from the search. The key findings are:
- Found the exact usage of additionalInt in CirceJsonDeserializerSpec.scala:
val result = new CirceJsonDeserializer(schema).deserialize("""{
  | "additionalInt": 1234,
  | "additionalString": "foo",
  | "additionalObject": {"foo": "bar"}
- The search for integer literals shows this is a common pattern in test files where integers are used as test data.
- No evidence was found of BigDecimal being used with additionalInt in any test files.
Based on this analysis, I can generate the final response:
The change to use Long (1234) for additionalInt is consistent with test data patterns across the codebase
The change from BigDecimal to Long for additionalInt aligns with how integer literals are commonly used in test data throughout the codebase. No evidence was found of BigDecimal being used with additionalInt in other test files, suggesting this is not part of a broader standardization effort but rather a natural correction to use a more appropriate type.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for similar integer type handling changes in other test files
# Look for changes from BigDecimal to Long in test assertions
rg -A 2 -B 2 'additionalInt.*BigDecimal' '**/*Spec.scala' '**/*Test.scala'
Length of output: 186
Script:
#!/bin/bash
# Let's try a different approach to find test files and check integer type usage patterns
# First, find all test files
fd ".*Spec\.scala$|.*Test\.scala$" | while read -r file; do
echo "=== $file ==="
# Look for additionalInt usage
rg -A 2 -B 2 'additionalInt.*[0-9]' "$file" || true
done
# Also check for general integer literal patterns in test assertions
fd ".*Spec\.scala$|.*Test\.scala$" | while read -r file; do
echo "=== Integer patterns in $file ==="
rg '-> [0-9]+[L]?' "$file" || true
done
Length of output: 200000
designer/client/src/components/AddProcessForm.tsx (1)
22-22: LGTM! Type safety improvement for processingMode. The change from string to the ProcessingMode enum improves type safety by ensuring only valid processing modes can be assigned.
Let's verify the impact of this type change across the codebase:
✅ Verification successful
Type change from string to ProcessingMode is safe and properly integrated
The verification shows that:
- The FormValue type is only used within the AddProcessForm and AddProcessDialog components
- The processingMode field is already being compared with ProcessingMode enum values in the form's radio buttons
- The type change aligns with the codebase's usage of strongly-typed processing modes throughout the application
- No breaking changes, as the form value is contained within these components and not exposed externally
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any components that might be affected by the FormValue type change
# Search for FormValue type usage
rg "FormValue" --type ts --type tsx
# Search for direct processingMode access
rg "\.processingMode" --type ts --type tsx
Length of output: 136
Script:
#!/bin/bash
# Let's try without file type filters since tsx is not recognized
# Search for FormValue type usage
rg "FormValue"
# Search for direct processingMode access
rg "\.processingMode"
# Also search for ProcessingMode enum usage to understand the context
rg "ProcessingMode"
Length of output: 74160
utils/json-utils/src/main/scala/pl/touk/nussknacker/engine/json/swagger/SwaggerTyped.scala (2)
13-13: LGTM! Clean refactoring of JSON decoding functionality. The change moves the JSON decoding functionality to a more appropriate location in FromJsonDecoder, improving code organization without affecting the core functionality. The rest of the file maintains its high quality, with comprehensive type handling, proper error handling, and well-structured pattern matching.
13-13: Verify the impact of the import change across the codebase. The import statement has been updated to use FromJsonDecoder.jsonToAny instead of JsonUtils.jsonToAny. Let's verify that this refactoring has been consistently applied across the codebase.
✅ Verification successful
Import change has been properly refactored across the codebase
The verification shows that:
- No references to the old JsonUtils.jsonToAny remain in the codebase
- The new FromJsonDecoder.jsonToAny is consistently used across relevant Scala files
- The remaining JsonUtils imports are in frontend TypeScript/JavaScript files and are unrelated to this change
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to the old JsonUtils.jsonToAny
# and verify that all usages have been migrated to FromJsonDecoder.jsonToAny
echo "Checking for any remaining references to JsonUtils.jsonToAny..."
rg "JsonUtils\.jsonToAny"
echo "Verifying usage of new FromJsonDecoder.jsonToAny..."
rg "FromJsonDecoder\.jsonToAny"
echo "Checking for any remaining imports of JsonUtils..."
rg "import.*JsonUtils"
Length of output: 2858
designer/client/src/components/toolbars/scenarioDetails/ScenarioLabels.tsx (5)
24-24: LGTM: Translation hook properly integrated. The useTranslation hook is correctly imported and initialized at the component level, following React best practices.
Also applies to: 111-111
137-142: LGTM: Proper memoization implementation. The isInputInSelectedOptions function is correctly memoized using useCallback with the appropriate dependency array.
166-169: LGTM: Validation logic properly implemented. The duplicate-label validation is correctly integrated with the existing validation flow, and the dependency array is properly updated to include the memoized validation function.
Also applies to: 172-172
357-359: LGTM: Tooltip translation properly implemented. The tooltip translation is correctly implemented using the t function with proper template-variable usage.
Line range hint 24-359: Verify translation key documentation. Please ensure that all new translation keys are properly documented in the internationalization documentation:
- panels.scenarioDetails.labels.validation.uniqueValue
- panels.scenarioDetails.tooltip.label
components-api/src/main/scala/pl/touk/nussknacker/engine/api/context/ProcessCompilationError.scala (1)
319-320: LGTM! The new name better describes the error condition. The renaming from RequireValueFromEmptyFixedList to EmptyFixedListForRequiredField improves clarity by better describing the error condition: a required field has an empty fixed list of values.
Let's verify the impact of this renaming across the codebase:
✅ Verification successful
All references to the renamed error type have been updated correctly
The verification shows that:
- No references to the old name RequireValueFromEmptyFixedList exist in the codebase
- The new name EmptyFixedListForRequiredField is used consistently in:
  - Definition: components-api/.../ProcessCompilationError.scala
  - Usage: scenario-compiler/.../ValueEditorValidator.scala
  - Pattern matching: designer/restmodel/.../PrettyValidationErrors.scala
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to the old name and verify all occurrences of the new name
# Check for any remaining references to the old name
echo "Checking for any remaining references to RequireValueFromEmptyFixedList..."
rg "RequireValueFromEmptyFixedList"
# Check all usages of the new name
echo "Checking all usages of EmptyFixedListForRequiredField..."
rg "EmptyFixedListForRequiredField"
# Check for potential pattern matches in test files
echo "Checking test files for pattern matching..."
rg "case\s+[^{]*EmptyFixedListForRequiredField"
Length of output: 1404
designer/restmodel/src/main/scala/pl/touk/nussknacker/restmodel/validation/PrettyValidationErrors.scala (1)
200-203: LGTM! The rename improves clarity of the error message. The change from RequireValueFromEmptyFixedList to EmptyFixedListForRequiredField and the updated error message better communicate that a non-empty fixed list of values must be declared for required parameters.
Let's verify the consistency of this rename across the codebase:
✅ Verification successful
Rename has been consistently applied across the codebase
The verification shows that EmptyFixedListForRequiredField is used consistently in all relevant locations:
- Error case definition in components-api/.../ProcessCompilationError.scala
- Error generation in scenario-compiler/.../ValueEditorValidator.scala
- Error message formatting in designer/restmodel/.../PrettyValidationErrors.scala
No instances of the old name RequireValueFromEmptyFixedList were found, confirming a complete and consistent rename.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that RequireValueFromEmptyFixedList has been consistently renamed across the codebase
# Search for any remaining instances of the old name
rg "RequireValueFromEmptyFixedList"
# Search for all instances of the new name to ensure consistent usage
rg "EmptyFixedListForRequiredField"
Length of output: 615
designer/client/cypress/e2e/fragment.cy.ts (2)
Line range hint 1-489: Well-structured and comprehensive test coverage. The test suite demonstrates excellent coverage of the Fragment functionality, including:
- Input parameter validation
- Error message handling
- Fragment properties
- Documentation URL integration
- Dead-ended fragment scenarios
The tests align well with the PR objectives requiring automated test coverage.
Line range hint 69-75: Verify validation message change consistency. The validation-message selector has been updated from "Add list item" to "Suggested values". While the test change is correct, we should ensure this change is consistent with the actual UI implementation.
Let's verify the validation message change in the UI components:
✅ Verification successful
Validation message change is correctly implemented
The validation message change from "Add list item" to "Suggested values" is consistent between the test and the actual UI implementation. The new message is properly implemented in AnyValueWithSuggestionVariant.tsx using the translation key fragment.userDefinedList.label.suggestedValues and is correctly tested in the Cypress test file.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for the old and new validation message strings in the UI components
echo "Searching for old validation message..."
rg -l "Add list item" --type ts --type tsx
echo "Searching for new validation message..."
rg -l "Suggested values" --type ts --type tsx
Length of output: 326
Script:
#!/bin/bash
# Search for the validation messages using correct file type flags
echo "Searching for old validation message..."
rg -l "Add list item"
echo "Searching for new validation message..."
rg -l "Suggested values"
Length of output: 500
Script:
#!/bin/bash
# Let's check the actual implementation of the validation message in the component
echo "Checking AnyValueWithSuggestionVariant.tsx implementation..."
rg -A 5 -B 5 "Suggested values" designer/client/src/components/graph/node-modal/fragment-input-definition/settings/variants/StringBooleanVariants/AnyValueWithSuggestionVariant.tsx
Length of output: 753
utils/json-utils/src/test/scala/pl/touk/nussknacker/engine/json/SwaggerBasedJsonSchemaTypeDefinitionExtractorTest.scala (2)
10-10: LGTM! Import change aligns with the codebase refactoring. The change from FromJsonDecoder to FromJsonSchemaBasedDecoder is consistent with the broader refactoring to schema-based decoding mentioned in the PR summary.
307-307: LGTM! Verify decoder integration across the codebase. The change to use FromJsonSchemaBasedDecoder.decode maintains the same functionality while aligning with the new schema-based approach.
Let's verify the consistent usage of the new decoder across the codebase:
✅ Verification successful
The decoder integration is consistent and well-maintained across the codebase
The verification shows that FromJsonSchemaBasedDecoder.decode is properly integrated across key components:
- Core JSON utilities in utils/json-utils
- Request-response components in engine/lite
- OpenAPI components in components/openapi
The old FromJsonDecoder references are legitimate, as it is still used as a utility for basic JSON-to-Any conversion within the FromJsonSchemaBasedDecoder implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that FromJsonDecoder is fully replaced with FromJsonSchemaBasedDecoder
# Check for any remaining references to the old decoder
echo "Checking for any remaining FromJsonDecoder references..."
rg "FromJsonDecoder"
# Verify consistent usage of the new decoder
echo "Verifying FromJsonSchemaBasedDecoder usage..."
rg "FromJsonSchemaBasedDecoder.decode"
Length of output: 6755
engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala (1)
733-748: LGTM! Well-structured test case. The test case effectively verifies that process fragments with parameter validation can be executed without throwing exceptions. The implementation follows the established patterns in the test suite.
scenario-compiler/src/test/scala/pl/touk/nussknacker/engine/compile/NodeDataValidatorSpec.scala (1)
500-519: Well-structured test case with good error-handling coverage! The test effectively verifies the error handling when a fragment input node references a non-existing fragment. It has clear assertions and proper test data setup.
scenario-compiler/src/test/scala/pl/touk/nussknacker/engine/spel/SpelExpressionSpec.scala (2)
Line range hint 1929-1947: LGTM! Additional test cases for map/list conversion checks. The test cases are well-structured and provide good coverage for verifying map-to-list and list-to-map conversion capabilities.
1994-2052: Excellent test coverage for map-to-list conversion! The new test method is very well implemented, with:
- Comprehensive test cases covering empty maps, maps with uniform types, and maps with mixed types
- Strong type checking assertions
- Verification of round-trip conversions (map → list → map)
- Clear test data setup and table-driven test structure
This provides robust coverage for the map-to-list conversion functionality.
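Concretely, the round-trip these tests exercise can be pictured with two SpEL snippets. A small sketch follows; the {key, value} record field names are assumed from the keyName/valueName constants discussed later in this review:

```scala
// SpEL: a map becomes a list of {key, value} records...
val mapToList = "{foo: 1, bar: 2}.toList()"
// expected shape: [{key: "foo", value: 1}, {key: "bar", value: 2}]

// ...and converting that list back restores the original map.
val roundTrip = "{foo: 1, bar: 2}.toList().toMap()"
// expected shape: {foo: 1, bar: 2}
```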
docs/Changelog.md (1)
119-126: LGTM! Well-documented changelog entries. The changes are properly documented with clear descriptions and PR references. The entries cover a good mix of bug fixes and improvements, particularly around validation handling and UI enhancements.
engine/flink/management/dev-model/src/test/resources/extractedTypes/devCreator.json (2)
13616-13641: LGTM: Type checking methods are well-defined. The addition of canBe and canBeList methods to the Map class provides clear type-checking capabilities with proper documentation and type signatures.
13739-13810: LGTM: Conversion methods are properly implemented. The new conversion methods (to, toList, toListOrNull, toOrNull) follow a consistent pattern and are well-documented:
- Clear error handling with null-safe variants
- Proper type signatures for return types
- Consistent with the type system
engine/flink/tests/src/test/resources/extractedTypes/defaultModel.json (1)
14154-14159: Consider using more specific types than Object. The list conversion methods use java.lang.Object as the generic type parameter, which might be too permissive. Consider using a more specific type or documenting the expected element types.
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/extension/ToMapConversionExt.scala (2)
49-49: Inheritance change aligns with conversion functionality. Updating ToMapConversion to extend Conversion[JMap[_, _]] enhances alignment with its intended functionality of converting various types to a map and simplifies the inheritance structure.
55-56: Verify the necessity of the visibility change for keyName and valueName. Changing the visibility of keyName and valueName to private[extension] increases their accessibility within the extension package. Confirm that this broader access is required and doesn't inadvertently expose these variables beyond their intended scope.
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/extension/ToListConversionExt.scala (4)
6-6: Approved: Import statements updated appropriately. The additional imports of TypedObjectTypingResult, JMap, and JHashMap are necessary for handling map conversions.
Also applies to: 13-13
52-52: Approved: Added mapClass for map type handling. The addition of the mapClass variable for classOf[JMap[_, _]] is appropriate for map type checks in the conversion logic.
59-68: Approved: Correctly handles conversion from Map to List. The new case in convertEither correctly converts a JMap to a JList of records containing the map entries. The implementation appropriately creates a list of maps with the keys keyName and valueName (see the sketch after this list).
111-117: Approved: Updated canConvert and appliesToConversion to include Map types. The methods canConvert and appliesToConversion have been appropriately updated to include checks for mapClass, ensuring that map types are correctly considered for conversion.
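The map branch of convertEither boils down to something like the following simplified sketch. The literal "key"/"value" strings stand in for the keyName/valueName constants, and the real code also threads typing information through TypedObjectTypingResult, which is omitted here:

```scala
import java.util.{ArrayList => JArrayList, HashMap => JHashMap, List => JList, Map => JMap}
import scala.jdk.CollectionConverters._

// Each map entry becomes a two-field {key, value} record; the records are
// collected into a list in the map's iteration order.
def mapToListOfRecords(map: JMap[_, _]): JList[JMap[String, Any]] = {
  val result = new JArrayList[JMap[String, Any]](map.size())
  map.asScala.foreach { case (k, v) =>
    val record = new JHashMap[String, Any](2)
    record.put("key", k)   // keyName in the real implementation
    record.put("value", v) // valueName in the real implementation
    result.add(record)
  }
  result
}
```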
designer/server/src/test/scala/pl/touk/nussknacker/ui/api/TestingApiHttpServiceSpec.scala (2)
238-240: Verify the inclusion of an empty possible value in fixedValuesList. In the expected JSON at lines 236 to 251, the possibleValues array includes an entry with an empty "expression" and "label":
{
"expression": "",
"label": ""
},
{
"expression": "'uno'",
"label": "uno"
},
{
"expression": "'due'",
"label": "due"
}
Since allowOtherValue is set to false, including an empty value may not be intended and could cause confusion in the UI. Verify if this empty entry is necessary.
265-266: Ensure requiredParam reflects the parameter's necessity. In the generated JSON, the "requiredParam" field for paramFixedString is set to false:
"requiredParam": false
Given that paramFixedString has an initialValue and allowOtherValue is false, should this parameter be marked as required? Double-check to ensure that the parameter's configuration accurately reflects its necessity in the fragment.
@@ -1,4 +1,5 @@
 import { css, styled, Typography } from "@mui/material";
+import i18next from "i18next";
🛠️ Refactor suggestion
Consider using react-i18next hooks instead of a direct i18next import
For React components, it's recommended to use the useTranslation hook from react-i18next instead of directly importing i18next. This ensures proper integration with React's lifecycle and enables better testing.
-import i18next from "i18next";
+import { useTranslation } from "react-i18next";
ProcessName.defaultProps = {
    title: i18next.t("panels.scenarioDetails.tooltip.name", "Name"),
};
🛠️ Refactor suggestion
Refactor to use functional component with hooks for i18n
The current implementation has several potential issues:
- defaultProps is deprecated in React 18 for function components
- Static translation won't update when the language changes
- Direct usage of i18next in props won't integrate well with React's lifecycle
Consider refactoring to a functional component:
-export const ProcessName = styled(Typography)``;
-
-ProcessName.defaultProps = {
- title: i18next.t("panels.scenarioDetails.tooltip.name", "Name"),
-};
+export const ProcessName = styled(Typography)``;
+
+export const ProcessNameWithTooltip = (props) => {
+ const { t } = useTranslation();
+ return (
+ <ProcessName
+ {...props}
+ title={t("panels.scenarioDetails.tooltip.name", "Name")}
+ />
+ );
+};
Committable suggestion skipped: line range outside the PR's diff.
useEffect(() => {
    setIsAllCombinationsLoading(true);
    HttpService.fetchScenarioParametersCombinations()
        .then((response) => {
            setAllCombinations(response.data.combinations);
            setEngineSetupErrors(response.data.engineSetupErrors);
        })
        .finally(() => {
            setIsAllCombinationsLoading(false);
        });
}, []);
Add error handling and cleanup for the data fetching effect.
The current implementation has several issues that need to be addressed:
- Missing error handling for failed requests
- No cleanup for unmounted components
- Data isn't refetched when props change
useEffect(() => {
setIsAllCombinationsLoading(true);
+ const abortController = new AbortController();
+
HttpService.fetchScenarioParametersCombinations()
.then((response) => {
+ if (!abortController.signal.aborted) {
setAllCombinations(response.data.combinations);
setEngineSetupErrors(response.data.engineSetupErrors);
+ }
+ })
+ .catch((error) => {
+ if (!abortController.signal.aborted) {
+ console.error('Failed to fetch combinations:', error);
+ setEngineSetupErrors({ global: ['Failed to fetch combinations'] });
+ }
})
.finally(() => {
+ if (!abortController.signal.aborted) {
setIsAllCombinationsLoading(false);
+ }
});
- }, []);
+
+ return () => abortController.abort();
+ }, [processCategory, processingMode, processEngine]);
@@ -333,7 +333,7 @@ describe("Process", () => {
     cy.layoutScenario();

     cy.contains("button", "ad hoc").should("be.enabled").click();
-    cy.get("[data-testid=window]").should("be.visible").find("input").type("10"); //There should be only one input field
+    cy.get("[data-testid=window]").should("be.visible").find("#ace-editor").type("10");
💡 Codebase verification
Update similar test cases to use the #ace-editor selector
Based on the search results, there are several test cases using generic input selectors that should be updated for consistency with the new Ace editor implementation:
- designer/client/cypress/e2e/process.cy.ts: multiple instances using a generic find("input") for value inputs
- designer/client/cypress/e2e/fragment.cy.ts: already uses #ace-editor in some places but has inconsistent usage
- designer/client/cypress/e2e/compareVersions.cy.ts: uses #ace-editor consistently
- designer/client/cypress/e2e/sqlEditor.cy.ts: uses #ace-editor consistently
The change in the review is part of a larger pattern where value inputs are being migrated to use Ace editor. The codebase shows mixed usage, suggesting an ongoing migration.
🔗 Analysis chain
LGTM! Consider reviewing similar test cases.
The change to use the specific #ace-editor selector instead of a generic input selector makes the test more robust and less prone to flakiness.
Let's verify if similar test cases need to be updated for consistency:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other test cases that might be using generic input selectors
# and could benefit from using the ace-editor selector
# Look for similar patterns in test files
rg -A 5 'find\("input"\)' 'designer/client/cypress'
# Look for other ace editor usages
rg -A 5 'ace-editor' 'designer/client/cypress'
Length of output: 16671
override def appliesToConversion(clazz: Class[_]): Boolean =
    clazz != resultTypeClass && (clazz.isAOrChildOf(collectionClass) || clazz == unknownClass || clazz.isArray)
🛠️ Refactor suggestion
Ensure comprehensive test coverage for the appliesToConversion method
The newly added appliesToConversion method is crucial for determining applicable conversions. Please ensure that unit tests are added or updated to cover various classes and scenarios, including edge cases, to validate the correctness of this method.
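A table-driven sketch of that coverage, restating the quoted rule locally so the expectations are checkable in isolation; treating java.util.List as the result type and java.lang.Object as the stand-in for Unknown are assumptions, and the local helper should be swapped for the real Conversion instance when wiring this up:

```scala
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.matchers.should.Matchers
import org.scalatest.prop.TableDrivenPropertyChecks

class AppliesToConversionSpec extends AnyFunSuite with Matchers with TableDrivenPropertyChecks {

  // Local restatement of the quoted rule; replace with the actual object under test.
  private val collectionClass = classOf[java.util.Collection[_]]
  private val unknownClass    = classOf[Object]            // assumed stand-in for Unknown
  private val resultTypeClass = classOf[java.util.List[_]] // assumed result type

  private def appliesTo(clazz: Class[_]): Boolean =
    clazz != resultTypeClass &&
      (collectionClass.isAssignableFrom(clazz) || clazz == unknownClass || clazz.isArray)

  test("appliesToConversion covers collections, arrays and Unknown, excluding the result type") {
    val cases = Table(
      ("clazz", "expected"),
      (classOf[java.util.ArrayList[_]], true), // collection subtype
      (classOf[Array[Int]], true),             // arrays apply
      (classOf[Object], true),                 // Unknown
      (classOf[java.util.List[_]], false),     // the result type itself is excluded
      (classOf[String], false)                 // unrelated class
    )
    forAll(cases) { (clazz: Class[_], expected: Boolean) =>
      appliesTo(clazz) shouldBe expected
    }
  }
}
```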
Describe your changes
Checklist before merge
Summary by CodeRabbit
Release Notes
New Features
- FromJsonSchemaBasedDecoder for improved JSON decoding capabilities.
- useGetAllCombinations hook for better state management in components.
- AceWrapper component with a new prop for live autocompletion control.
- userDefinedListInputLabel prop for improved user experience in input fields.
Bug Fixes
Improvements
- MarkdownFormControl component updated to disable live autocompletion by default.
Documentation