
fix: meltano config <plugin> test false-negative on Windows #8213

Merged
merged 26 commits into from
May 2, 2024
Conversation

@ReubenFrankel (Contributor) commented Oct 5, 2023

Closes #8212


I'd like to refactor TestExtractorTestService and TestCliConfig.test_config_test to spin up a real dummy process, rather than stub out return values through a mock. This means we could validate this fix on Windows through CI.

netlify bot commented Oct 5, 2023

Deploy Preview for meltano ready!

Name Link
🔨 Latest commit f1afcff
🔍 Latest deploy log https://app.netlify.com/sites/meltano/deploys/6631a43f69af8d00085e31a3
😎 Deploy Preview https://deploy-preview-8213--meltano.netlify.app

codecov bot commented Oct 5, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.68%. Comparing base (25da80a) to head (f1afcff).
Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #8213   +/-   ##
=======================================
  Coverage   91.68%   91.68%           
=======================================
  Files         245      245           
  Lines       19294    19299    +5     
  Branches     2148     2148           
=======================================
+ Hits        17690    17695    +5     
  Misses       1329     1329           
  Partials      275      275           


@ReubenFrankel ReubenFrankel marked this pull request as ready for review October 19, 2023 11:26
@edgarrmondragon (Collaborator)

I'd like to refactor TestExtractorTestService and TestCliConfig.test_config_test to spin up a real dummy process, rather than stub out return values through a mock. This means we could validate this fix on Windows through CI.

@ReubenFrankel Let me know if I can help with that 👆

@ReubenFrankel (Contributor, Author) commented Oct 31, 2023

@edgarrmondragon Yeah, I would appreciate some help if you have any ideas. It might unblock meltano/edk#75 (comment) too, if there is a good pattern to follow.

@edgarrmondragon (Collaborator)

@edgarrmondragon Yeah, I would appreciate some help if you have any ideas. It might unblock meltano/edk#75 (comment) too, if there is a good pattern to follow.

@ReubenFrankel So, I don't have a Windows machine with me to test, but we can probably set up a dummy fixture tap that fails with a certain config value and passes otherwise. Wdyt?

@ReubenFrankel (Contributor, Author) commented Nov 14, 2023

@edgarrmondragon Yeah, I would appreciate some help if you have any ideas. Might unblock meltano/edk#75 (comment) too if there is good pattern to follow.

@ReubenFrankel So, I don't have a windows machine with me to test but we can probably set up a dummy fixture tap that fails with a certain config value and passes otherwise. Wdyt?

The part I was struggling with was how a dummy process could be started and then properly terminated with respect to the host OS handling of SIGTERM, as per the process.terminate call...

One idea I had was to create test resources that contain tap output indicating success (e.g. valid.jsonl)

{"type": "SCHEMA"}
{"type": "RECORD"}
{"type": "STATE"}
{"type": "SCHEMA"}
{"type": "RECORD"}
{"type": "STATE"}
{"type": "SCHEMA"}
{"type": "RECORD"}
{"type": "STATE"}

and failure (e.g. invalid.jsonl)

{"type": "SCHEMA"}
{"type": "SCHEMA"}
{"type": "SCHEMA"}

and then mock the plugin_invoker.invoke_async call to stream the resource to stdout for the respective test. I'm not sure how that last part would actually translate to code, though, so perhaps I've glossed over some important details. Hopefully that makes some sense! 😅

@edgarrmondragon (Collaborator)

I'd like to refactor TestExtractorTestService and TestCliConfig.test_config_test to spin up a real dummy process, rather than stub out return values through a mock. This means we could validate this fix on Windows through CI.

@ReubenFrankel do you think that's a hard blocker for merging this PR?

@ReubenFrankel (Contributor, Author)

@edgarrmondragon Not a hard blocker, no - I just don't have an environment to manually test the fix with. Since CI is already set up with a Windows environment, I thought it made sense to refactor the existing test as described above.

@ReubenFrankel (Contributor, Author)

I think I finally made some progress on this last night - I'll push a test POC later today and we can decide whether it's worth having or not.

@ReubenFrankel (Contributor, Author)

Hmm... It's working locally with no problems, but it looks like there is some kind of race condition in reading the emulated tap process output for test_validate_failure_subprocess_err specifically. Failing tests on Windows indicate there is also still an issue here (if they are to be believed)... @edgarrmondragon What do you think of this approach? It definitely feels a bit hacked-in in its current state. 😅

@ReubenFrankel (Contributor, Author) commented May 1, 2024

The return code debug log seems to indicate that the process return code is 1 on Windows for both error and termination, which goes against what I thought was happening.

@ReubenFrankel (Contributor, Author) commented May 1, 2024

Decided to just flag if we encounter a RECORD message and use that to determine success, rather than rely on the process return code. I think this makes more sense anyway (not sure why I went with return code initially) and should be more robust since it's now decoupled from OS-specific behaviour (although it would have been nice to see it working with the test POC implementation).
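As a rough sketch of that check (hedged: `saw_record` is an illustrative helper for this comment, not the actual implementation in the PR):

```python
import json


# Scan tap output line by line and treat the test as successful once any
# RECORD message appears, independent of the process return code and
# therefore of OS-specific termination semantics.
def saw_record(output_lines):
    for line in output_lines:
        try:
            message = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-Singer output such as log lines
        if isinstance(message, dict) and message.get("type") == "RECORD":
            return True
    return False
```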

@edgarrmondragon (Collaborator)

Decided to just flag if we encounter a RECORD message and use that to determine success, rather than rely on the process return code. I think this makes more sense anyway (not sure why I went with return code initially) and should be more robust since it's now decoupled from OS-specific behaviour (although it would have been nice to see it working with the test POC implementation).

Yeah, that actually makes more sense 👍

@edgarrmondragon (Collaborator)

Thanks @ReubenFrankel!

@edgarrmondragon edgarrmondragon added this pull request to the merge queue May 2, 2024
Merged via the queue into meltano:main with commit 886cb57 May 2, 2024
86 of 87 checks passed
@ReubenFrankel ReubenFrankel deleted the fix/config-test-false-negative-on-windows branch May 3, 2024 00:10

Successfully merging this pull request may close these issues.

bug: meltano config <plugin> test incorrectly reports a plugin configuration as invalid on Windows