DialogSet.Add: Unique dialog ids for name collisions but not for reference collisions #3918
Fixes #3913
The Add() method of DialogSet has a mechanism for supporting multiple dialogs with the same id: it appends a unique suffix for internal bookkeeping. There is a bug, however, when the exact same dialog is added multiple times because it is referenced by multiple dialogs. In that scenario there is no name collision; we are simply adding the same dialog again. Yet we still assign it a different id and treat it as a different dialog, which leads to incorrect downstream behavior.
The fix is to detect name collisions more precisely by checking whether the dialog being added was already added. If it was, we don't assign a new id and leave things as they are.
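As a rough illustration of that check (a standalone sketch with assumed names, not the SDK's actual `DialogSet` internals), the idea is to compare references before falling back to the unique-suffix path:

```csharp
using System.Collections.Generic;

// Toy registry mirroring the collision handling described above.
// Names and storage are hypothetical; the real DialogSet stores Dialog
// instances and adjusts dialog ids directly.
public class DialogRegistrySketch
{
    private readonly Dictionary<string, object> _dialogs = new Dictionary<string, object>();

    public void Add(string id, object dialog)
    {
        if (_dialogs.TryGetValue(id, out var existing))
        {
            // Reference collision: the exact same instance was already added,
            // so keep the original id and do nothing.
            if (ReferenceEquals(existing, dialog))
            {
                return;
            }

            // Genuine name collision: a different instance wants the same id,
            // so fall back to the unique-suffix bookkeeping.
            var suffix = 2;
            while (_dialogs.ContainsKey(id + suffix))
            {
                suffix++;
            }

            id += suffix;
        }

        _dialogs[id] = dialog;
    }
}
```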
Note that this is not a declarative-only bug, or even an adaptive-only bug; in adaptive code without declarative, something like the sketch below would also fail without this fix:
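A minimal sketch of the underlying collision, assuming the usual Microsoft.Bot.Builder.Dialogs types and illustrative names (the repro test in this PR builds the equivalent through an adaptive dialog tree, where the shared instance gets registered more than once as a dependency):

```csharp
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

var storage = new MemoryStorage();
var conversationState = new ConversationState(storage);
var dialogStateAccessor = conversationState.CreateProperty<DialogState>("DialogState");

var dialogs = new DialogSet(dialogStateAccessor);

// One dialog instance, shared by reference and therefore added more than once.
var shared = new TextPrompt("sharedPrompt");

dialogs.Add(shared);  // first add: id stays "sharedPrompt"
dialogs.Add(shared);  // same instance again: a reference collision, not a name collision

// Before the fix, the second Add() rewrote the shared instance's id with a
// unique suffix, so a later lookup by the original id could fail or resolve
// to the wrong registration; after the fix, re-adding the same instance is a no-op.
var found = dialogs.Find("sharedPrompt");
```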
In addition to the repro test, added another cancellation test to verify other scenarios where instances look the same but are expected to behave differently.
Did a quick performance analysis over 500 medium-sized JSON files (500 lines each) and saw no measurable difference in performance. The clone is as efficient as the previous deserialization, so there is no change in the net time spent loading resources.