Add validation to module name #49
Conversation
Force-pushed from 0435fa7 to 2d7fe4b.
mdbenchmark/generate.py
Outdated
# If we are unable to find the modules path on the system or if the
# user told us to skip the validation, we do so.
if (validated_module is None) or skip_validation:
    console.warn('Not performing module name validation.')
This warning will be printed for every module that we define. We probably should put this outside the for-loop so it is only printed once.
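Something along these lines, as a rough sketch (the surrounding loop structure and variable names are my assumptions, not the actual code):

# Warn once before the loop instead of once per module.
if (available_modules is None) or skip_validation:
    console.warn('Not performing module name validation.')

for m in module:
    # per-module validation stays inside the loop
    ...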
Codecov Report
@@            Coverage Diff             @@
##           master      #49      +/-   ##
==========================================
+ Coverage   87.06%   89.35%   +2.29%
==========================================
  Files          11       11
  Lines         456      526      +70
==========================================
+ Hits          397      470      +73
+ Misses         59       56       -3
Continue to review full report at Codecov.
mdbenchmark/generate.py
Outdated
if (validated_module is None) or skip_validation:
    console.warn('Not performing module name validation.')
# If the validation fails, we throw an error and exit the script.
elif not validated_module:
We currently do not test this case.
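A test along these lines could cover it, as a sketch only. The host name, the fake module list, the name of the top-level click group, and the exact exit code are assumptions on my part, not the real fixtures:

def test_generate_unknown_module(cli_runner, monkeypatch, tmpdir):
    """Sketch: exercise the error/exit branch when validation fails."""
    # Pretend the system only provides a single GROMACS module (assumption).
    monkeypatch.setattr('mdbenchmark.generate.get_available_modules',
                        lambda: {'gromacs': ['2016.3']})
    with tmpdir.as_cwd():
        result = cli_runner.invoke(cli.cli, [
            'generate', '--module=gromacs/doesnotexist', '--host=draco'])
    # Assumes the command aborts with a non-zero exit code on invalid modules.
    assert result.exit_code == 1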
@kain88-de Any thoughts? Besides my two comments, this PR is feature complete. We could think about moving the code from
mdbenchmark/validate.py
Outdated
from . import console

SUPPORTED_MDENGINES = ('gromacs', 'namd')
Rather get that variable from a definition in the mdengines module. Right now we need to maintain a list of supported engines in two places.
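Roughly like this (the exact file layout and names are assumptions):

# mdbenchmark/mdengines/__init__.py -- single source of truth
SUPPORTED_ENGINES = ('gromacs', 'namd')

# mdbenchmark/validate.py -- import instead of redefining
from .mdengines import SUPPORTED_ENGINES as SUPPORTED_MDENGINES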
mdbenchmark/generate.py
Outdated
@@ -103,10 +115,40 @@ def generate(name, gpu, module, host, min_nodes, max_nodes, time, list_hosts):
        'If you use the {} option make sure you use the GPU compatible NAMD module!',
        '--gpu')

    # Grab list of all available and supported modules
    available_modules = get_available_modules()
Why not filter the list of modules here?
Do you mean we should duplicate the `for m in module:` code? Otherwise we cannot do the validation of module names, as we need to iterate through the list.
modules = [m for m in modules if m in available_modules]

Now all modules are actually available. If you want to warn the user about missing ones, you can do so like this:

requested_modules = set(modules)
modules = [m for m in modules if m in available_modules]
if requested_modules.difference(modules):
    console.warn("{} are not available modules".format(requested_modules.difference(modules)))

This if-clause is only triggered if the difference is not empty. If it is empty, all modules are available.
@kain88-de What is wrong with the failing tests on Python 2.7? What's wrong about that import statement?
This became way bigger than anticipated. I've done a few things in the latest commits:
Several things that we could consider:
Sorry about closing this. That was an error.
def test_get_version(test_apps, capsys):
    """Test of get_version."""
    from flask import __version__ as flask_ver
    from sys import version as py_ver

    class MockCtx(object):
        resilient_parsing = False
        color = None

        def exit(self):
            return

    ctx = MockCtx()
    get_version(ctx, None, "test")
    out, err = capsys.readouterr()
    assert flask_ver in out
    assert py_ver in out
Phew. Refactoring the code after the last merge wasn't as exciting as I was hoping for. Everything should be working now. I still want to make sure that we are testing everything correctly. I quickly went through the CLI manually and found a major bug, which I have now fixed. Once I've finished the testing, this should finally be ready to merge.
You can remove the validation tests and clean up the history a bit.
Which validation tests? The contents of
Or move them to the appropriate files. There is no file called
The PR is done. @kain88-de I assume we should keep the merge and just squash the commits in between, right?
LGTM besides the one minor issue.
About squashing: you can squash as much as you like. With the merge commit you have to be careful, though; git doesn't like rebasing merge commits very much.
mdbenchmark/tests/test_generate.py
Outdated
from mdbenchmark import cli
from mdbenchmark.ext.click_test import cli_runner
from mdbenchmark.generate import (validate_hosts, validate_module,
                                  validate_name, validate_number_of_nodes)
from mdbenchmark.mdengines import normalize_modules
This last function should be tested in test_mdengines.
It is currently not directly tested in this module either.
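Something like this in test_mdengines would do, as a sketch only. The signature and return value of normalize_modules, and where get_available_modules lives, are assumptions here:

from mdbenchmark.mdengines import normalize_modules

def test_normalize_modules(monkeypatch):
    """Sketch of a direct test for normalize_modules."""
    # Pretend the system provides exactly one GROMACS module (assumption).
    monkeypatch.setattr('mdbenchmark.mdengines.get_available_modules',
                        lambda: {'gromacs': ['2016.3']})
    # Known modules pass through; unknown ones are dropped (assumed behaviour).
    assert normalize_modules(['gromacs/2016.3']) == ['gromacs/2016.3']
    assert normalize_modules(['gromacs/2016.3', 'foo/1.0']) == ['gromacs/2016.3']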
Force-pushed from c243611 to af92fc5.
* Move validation into callbacks
* Fix import errors
* Add fixture to mock context
* Move the skip_validation check into the normalize_modules method (rough sketch below). This fixes a bug when providing an unsupported MD engine and the `--skip-validation` option at the same time.
* Add docstrings
* Add some missing tests in other parts of the module
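For reference, the rough shape of the skip_validation change as I understand it (a sketch only; names and structure are assumptions, not the final code):

def normalize_modules(modules, skip_validation):
    """Sketch: normalization now owns the skip_validation check."""
    if skip_validation:
        # Return the requested modules untouched, even for unsupported MD
        # engines -- this combination used to trigger the bug mentioned above.
        console.warn('Not performing module name validation.')
        return modules
    available_modules = get_available_modules()
    return [m for m in modules if m in available_modules]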
Changes made in this pull request:

* Module name validation in mdbenchmark.generate: when given a module name, we try to find it on the system. Users can skip validation with --skip-validation.
* Added the --skip-validation option to mdbenchmark.generate.

PR Checklist

[ ] Changelog entry in ./changelog/ (more information)?
[ ] Issue raised/referenced?