
Add error message for return item without channel #241

Merged
samwaseda merged 5 commits into main from macro_add_return_error on Mar 12, 2024

Conversation

samwaseda
Member

I keep making a mistake like this:

@single_value_node("sum")
def addition(a, b):
    return a + b

@single_value_node("product")
def multiplication(a, b):
    return a * b

@macro_node("result")
def add_and_multiply(macro):
    macro.addition = addition(1, 2)
    macro.multiplication = multiplication(macro.addition, 5)
    return macro.multiplication.outputs.product.value

node = add_and_multiply()

The main problem is that in the current implementation the error only says that NOT_DATA does not have a channel, which doesn't immediately make me understand where the mistake is coming from.
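To make the failure mode concrete, here is a stdlib-only sketch of the mechanics (all class names here are invented stand-ins, not pyiron_workflow's real API): returning `channel.value` hands the framework a bare NOT_DATA sentinel, and its later attempt to read `.channel` fails with exactly this kind of unhelpful error.

```python
class NotData:
    """Stand-in for pyiron_workflow's NOT_DATA sentinel (no data computed yet)."""
    def __repr__(self):
        return "NOT_DATA"

class OutputChannel:
    """Stand-in for an output channel; it carries a (not yet computed) value."""
    def __init__(self):
        self.value = NotData()

    @property
    def channel(self):
        return self  # a channel can hand itself over for wiring

def wire_macro_output(returned):
    # The framework asks whatever the graph creator returned for its channel.
    return returned.channel

product = OutputChannel()
wire_macro_output(product)  # correct: the channel object itself was returned

try:
    wire_macro_output(product.value)  # the mistake: .value is NOT_DATA
except AttributeError as err:
    print(err)  # complains that the NOT_DATA stand-in has no 'channel'
```

The sketch only illustrates why the stock AttributeError points at NOT_DATA rather than at the user's `return` statement.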

@samwaseda samwaseda requested a review from liamhuber March 11, 2024 14:17

codacy-production bot commented Mar 11, 2024

Coverage summary from Codacy

See diff coverage on Codacy

| Coverage variation | Diff coverage |
| --- | --- |
| -0.01% (target: -1.00%) | 83.33% |

Coverage variation details

| Commit | Coverable lines | Covered lines | Coverage |
| --- | --- | --- | --- |
| Common ancestor (bb326ec) | 3146 | 2730 | 86.78% |
| Head (27fbb46) | 3151 (+5) | 2734 (+4) | 86.77% (-0.01%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details

| | Coverable lines | Covered lines | Diff coverage |
| --- | --- | --- | --- |
| Pull request (#241) | 6 | 5 | 83.33% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


@coveralls

coveralls commented Mar 11, 2024

Pull Request Test Coverage Report for Build 8254926325

Details

  • 5 of 5 (100.0%) changed or added relevant lines in 1 file are covered.
  • 12 unchanged lines in 1 file lost coverage.
  • Overall coverage decreased (-0.01%) to 91.796%

| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| macro.py | 12 | 89.4% |

Totals Coverage Status
Change from base Build 8210165337: -0.01%
Covered Lines: 5841
Relevant Lines: 6363

💛 - Coveralls

Member

@liamhuber liamhuber left a comment

I totally agree with the diagnosis, and that the proper outcome is to raise a more informative error, I only have a differing opinion on finer details of the treatment. My disclaimer is that atm I'm on the couch on mobile with a sick kid napping on me 😂 so I can't browse the code base easily. I'll try to come back and polish this if I get a chance, but in case you get to it first I wanted you to have the feedback.

My main concern is that I don't like the location of the check -- IIRC this gets used for both I and O at some point, but this is strictly an O problem; it also un-statics the method, and while I agree we need self for an informative message (label and options), it is a yellow flag to me to convert away from static.

I think I would prefer a new private _prepare_... method analogous to the input one and to do this there. At any rate, some other more specific and more specifically named location for this work please!

Otherwise just two nits:

  • I'm trying to reduce string usage in the messages to help refactoring be more stable; this might be an edge case because of what we want to say, but you could play around with sentence formulations that replace the "channel" string with {HasChannel.__name__}
  • I'm not sold on giving a link to the repo in the error message. IMO people know what the package name is and will find their way there by themselves. If we do decide to include this, I would want to do so consistently across all user-facing messages (and thus in a separate PR)
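The first nit boils down to an f-string over the class's `__name__`; a hypothetical sketch (the `HasChannel` class below is a local stub so the snippet is self-contained, and the message wording is my own, not the PR's final text):

```python
class HasChannel:
    """Local stub for pyiron_workflow's HasChannel mixin."""

def bad_return_message(obj):
    # Build the message from the class name itself, so a rename of
    # HasChannel automatically updates every string that mentions it.
    return (
        f"Expected all returned objects to be {HasChannel.__name__} "
        f"instances, but got {type(obj).__name__}"
    )

print(bad_return_message(42))
# Expected all returned objects to be HasChannel instances, but got int
```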

@liamhuber
Member

Oh, and we need a test added that catches the new message

@liamhuber
Member

Ok, I've got my computer and I stand by everything I said earlier. More specifically, somewhere in here:

returned_has_channel_objects = self.graph_creator(self, *ui_nodes)
self._configure_graph_execution()
# Update IO map(s) if a function-like graph creator interface was used
if len(ui_nodes) > 0:
    self._whitelist_inputs_map(*ui_nodes)
if returned_has_channel_objects is not None:

I'd slot in a new method like self._ensure_returned_objects_are_has_channels (or similar) that does the check you want. This is perfectly in line with the variable name, it's just that right now the code assumes that the user has correctly returned only HasChannel objects. I totally agree that's an easy mistake though, so let's add the filtering. To fail as early as possible I recommend inserting the check immediately after line 313.

In the future we could even imagine extending the check to create channel objects from non-HasChannel returns. This heads in the direction of getting macro and function nodes to be more similar, and is absolutely not necessary here and now, but I like that this change is commensurate with that direction.

@samwaseda
Member Author

Ok I messed around. In the end I didn't put it in __init__, because for my taste there are already too many lines. I also added a test.

@@ -317,13 +317,10 @@ def __init__(
        if len(ui_nodes) > 0:
            self._whitelist_inputs_map(*ui_nodes)
        if returned_has_channel_objects is not None:
            if not isinstance(returned_has_channel_objects, tuple):
Member Author

This was not intentional (also nothing changed). I can revoke this change if you prefer.

Member

Maybe we lack a test where the graph creator returns only a single value?

Member

Ah, nvm, I see your comment applies to the change in the next few lines too. It's just a refactor. I don't have strong feelings, I find both perfectly readable and don't see an efficiency difference.

@samwaseda samwaseda added enhancement New feature or request format_black trigger the Black formatting bot labels Mar 12, 2024
Member

@liamhuber liamhuber left a comment

LGTM now! The windows failure is just #48. I want the tests passing before we merge, but I would be surprised if it takes more than one re-run as the error is exceptionally rare.

@liamhuber
Member

Ugh, my rerun failed jobs button has no effect. I'm not sure if it's a problem with my browser or what, but this happens sometimes. @samwaseda could you rerun the failed windows test before merging?

@samwaseda samwaseda merged commit 9d7b286 into main Mar 12, 2024
16 checks passed
@samwaseda samwaseda deleted the macro_add_return_error branch March 12, 2024 21:41