fix: prevent expect step to fail silently #6
Conversation
Hi @Maestro31 👋 Thank you so much for your interest in clix ^^ Maybe we could catch this error when we spawn the process (line 214 in dcd6082).
We could handle the error like this: `proc.on('error', (err) => { /* todo */ });` WDYT?
@tony-go, I'm not sure, but it seems to me that with your proposal, if we catch the error there, we still have to decide what to update with it. The simplest way seems to me to let the "step matcher" choose the behavior. Today it lives in the body of a switch case, but it could be classes with a dedicated method that takes the data of the current step as input and returns a comparison result, or raises an error if the data is not expected.
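To picture the matcher idea, here is a hypothetical sketch (the `ExpectMatcher` class name and its shape are assumptions, not clix code): each matcher owns the comparison for one step kind and decides whether to throw.

```javascript
// Hypothetical matcher class: compares received child-process data
// against the expected value and throws instead of failing silently.
class ExpectMatcher {
  constructor(expected) {
    this.expected = expected;
  }

  // Returns a comparison result, or raises when the data is not expected.
  match(received) {
    const actual = received.trim();
    if (actual !== this.expected) {
      throw new Error(`Expected "${this.expected}" but received "${actual}"`);
    }
    return { ok: true, expected: this.expected, received: actual };
  }
}
```

Moving the switch-case bodies into such classes would let each step kind define its own failure behavior in one place.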
Thanks for sharing your thoughts. IMO we should stop the run there. We probably have to investigate whether we can differentiate kinds of errors. I'd like to keep the current usage.
I spent some time today experimenting with a specific branch: https://github.com/Maestro31/clix/tree/refactor-code I went with the principle of reversing the loop to start from the events received from the child process rather than from the scenario steps. With these changes, the error is effectively raised as soon as new data is received, as you suggested. By the way, I moved the responsibility for scenario construction into a scenario builder and extracted the matchers logic into separate classes. I also removed the _compare method tests, which were no longer relevant after these changes, and we can now easily add matcher-specific tests (todo). You can also add additional matchers quite easily, and potentially even provide an API to add custom matchers. I would be interested in your feedback in any case!
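A rough sketch of that inversion, assuming the steps are held in a queue consumed by the data events (`createStepConsumer` and the step shape are hypothetical names, not the branch's actual code):

```javascript
// Hypothetical sketch of the reversed loop: instead of iterating over
// scenario steps and waiting on the process, we react to each data
// event and pop the next expected step, raising as soon as data
// arrives that no step accepts.
function createStepConsumer(steps) {
  const queue = [...steps];
  return function onData(data) {
    const step = queue.shift();
    if (!step) {
      throw new Error(`Unexpected output: "${data}"`);
    }
    if (step.value !== data) {
      throw new Error(`Expected "${step.value}" but got "${data}"`);
    }
    return queue.length; // remaining steps
  };
}
```

Wired to the child process's stdout `data` events, this makes a mismatch surface immediately rather than after the step loop completes.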
Awesome 🙌
I was hesitating between both when I started 😅. Thanks for all these efforts. The implementation is quite impressive 💪 But I still have a concern: we started from a request (handle unknown command) and we now have a completely new implementation. I don't think that's the way I'd like to see this project evolve. WDYT? For the rest, we could keep it as a reference for the future, even if I'm not sure about the scenario / scenario-builder split. Maybe we could discuss this point further.
Thanks for the feedback! 🙏 Yes, I couldn't imagine opening a PR with all the changes 😅. I'm going to work on the necessary process changes. As for the scenario builder, let's say it separates the responsibilities of validation and scenario building. Another advantage is that it keeps Scenario from knowing the implementation and the way the matchers are instantiated, so we can refactor those parts independently without touching Scenario.
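A minimal illustration of that split (the method names and step shape are only assumptions about the builder, not the branch's actual API): the builder validates input and produces plain step data, so Scenario never touches matcher construction.

```javascript
// Hypothetical builder: validates user input and builds the step list,
// keeping Scenario unaware of how matchers are instantiated.
class ScenarioBuilder {
  constructor() {
    this.steps = [];
  }

  expect(value) {
    // Validation lives here, not in Scenario.
    if (typeof value !== 'string') {
      throw new TypeError('expect() needs a string');
    }
    this.steps.push({ kind: 'expect', value });
    return this;
  }

  input(value) {
    this.steps.push({ kind: 'input', value });
    return this;
  }

  build() {
    return { steps: this.steps };
  }
}
```

With this shape, swapping matcher implementations only touches the builder, while Scenario keeps consuming the same built steps.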
@tony-go, I just made the changes. In order to start from the received data instead of the steps, I had to modify the API of input: it also avoids blocking the process at an expect placed before the input, since not receiving data is blocking with this paradigm shift. We can discuss this API change. Edit: it is not necessary to unit test the process class because coverage is already reached by the other tests; it is just a refactoring without changing behavior.
Really happy to see your new commits this morning. 👍 Before making a full review of it, I'd like to highlight a couple of things:
I have a somewhat different opinion on that. We should have unit tests: having functional tests is not a reason to skip unit tests. But we could add them later if the functional tests cover everything for now.
We'll try to deliver a first version that sticks to our promise (the README), then we could think about having custom matchers (if we find proper cases for it). Again, amazing work, we'll make it 💪
(I'll try to rebase onto v0.0.2)
I succeeded in rebasing 🎉 Integrating on the stream instead of the user steps, and running functional tests on my machine.
Honestly, it was my main concern with this rebase. What is remaining?
I'm currently tweaking the code. Please don't push new updates for the moment. 🙏
I'll make a few more iterations and we should be good. |
It sounds good to me too!
* fix(type): using value instead of val
* chore(debug): improve logs
* chore(example): rename basic folder to js
* feat(scenario): handle \n before writing in proc
* chore(test): add example tests
* doc: add contributing md
* chore(ci): add windows (#12)
* chore(ci): try cross env
* chore(ci): add cross-env for remaining scripts
* chore(ci): add node 18
* chore: add jsdoc and organize methods in Scenario (#13)
* chore: organise scenario methods
* chore: split test folders (#14)
* chore(test): split unit and functional
* chore(test): fix result test
* fix: prevent expect step to fail silently and refactor (#6)
* chore: bump version to 0.0.2

Co-authored-by: Emmanuel RICHE <[email protected]>
This PR prevents scenario.run with an expect step from failing silently. Related to #3
Currently, I assume that only the expect step should raise an error, but I can imagine the input step failing silently under the same conditions.
We may have to look at a more global solution, but I feel that would require a lot of restructuring, including making the steps separate classes with their own behavior.
I'm open to discussion (in French too :D)
Thanks for the review