Feature/feature flexible env #53
Conversation
Force-pushed from a0307a0 to 04ba2a9
- pip install -r requirements.txt
- pip install -r test_requirements.txt
- pip install -e src/common -e src/scraper -e src/server -e src/topicmodeller -e src/learner -e src/orchestrator
- tools/install_envs.sh
- pip install coveralls
Should this be included in tools/install_envs.sh?
What I wanted to do was to put into install_envs.sh everything that was not specific to .travis.yml, i.e. what you would need to run locally.
So I left here:
- "conda install", which is a Travis hack to get numpy/scipy (we don't have conda locally; we install numpy the "normal" way)
- "coveralls", which is specific to pushing coverage metrics to the Coveralls website. It doesn't manage coverage itself; it manages the push of coverage statistics from Travis to the website (https://pypi.python.org/pypi/coveralls).
Does that seem OK to you? I could make a script like "install_env_travis", but I thought it added a useless level of indirection. I can add a comment in .travis.yml like "install steps specific to Travis are put directly here".
I mistakenly thought that coverage was managed by coveralls...
Done
Force-pushed from c95faa1 to d50af1b
# script must be executed from the root git directory
pip install -r requirements.txt
pip install -r test_requirements.txt
pip install -r src/server/server/requirements.txt
These requirements are installed both in the "default" environment and in the appengine_env virtualenv; should they not be installed only in the appengine_env virtualenv?
appengine_env is the environment used to start the Flask server, as you can see in tools/start_server.sh, to ensure the server starts the same way as on App Engine.
But the unit test and pylint tasks are run once for all tests, including the server's unit tests, so they share the same env and must have access to the server requirements.
This default env is also the environment used in dev mode (in PyCharm), so we need everything there.
I will add a comment.
Done
Those folders are generated by install_envs.sh. global_env is my local virtualenv, but it is not used on Travis yet.
Instead of doing everything in the .travis.yml script, .travis.yml calls two shell scripts, so we can also call them in the dev environment:
1) install_envs.sh does all the pip installs. It makes a specific virtualenv for the Flask server: for App Engine we need a requirements.txt that will be used to set up dependencies on App Engine, so we execute the Flask server in this virtualenv for the functests, to ensure we don't use dependencies that wouldn't be specified in this server/requirements.txt. It does not install the gcloud SDK or the numpy dependency via conda, because those are specific to Travis (and must be done in before_install).
2) start_tests.sh starts the datastore server (start_gcloud.sh) and the Flask server, then executes the tests. start_gcloud_and_exit wraps start_gcloud so that it's not blocking. start_tests takes an optional parameter ("cover") for coverage analysis.
run_pylint.sh is only used in the local dev environment; it activates the global virtualenv (with all dependencies of the gator project). This global_env virtualenv is not used on Travis, because it's hard to make it work with numpy, which is installed with conda.
- Remove (test_)remote_api.py: with the gcloud datastore API we can call the datastore without relying on remote_api; having the correct credentials is enough.
- Remove testhelpers.py: it was used to set up testbed (ndb mocking). Now all unit tests call the local gcloud datastore emulator server.
- Remove start_gae.sh and startcloud.sh: they were used for the standard App Engine and ndb server.
The machine resources are chosen to use the cheapest machine; the rest is standard (as in the tutorial).
Force-pushed from 6aecfc7 to 44429c4
Changes Unknown when pulling 514873b on feature/Feature-FlexibleEnv into * on master*.
@m-mokhtari every remark fixed, I think :-)
base.html is edited accordingly. CSS and images are in distinct folders.
The pytest plugin was used to find App Engine modules not on the Python path; the pylint plugin was used to disable warnings on not-found properties for ndb.
Non-Python files containing "py" in their name were being passed to autopep8. Copy-paste this into the .git/hooks folder.
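The hook itself isn't reproduced in this thread; here is a hypothetical sketch of what such a pre-commit hook could look like, with the extension filter made explicit (commands and names are illustrative, not the actual hook):

```python
#!/usr/bin/env python
# Hypothetical pre-commit hook: only files that actually end in .py are
# handed to autopep8, instead of any file with "py" somewhere in the name.
import subprocess

staged = subprocess.check_output(
    ['git', 'diff', '--cached', '--name-only', '--diff-filter=ACM']
).decode().splitlines()

python_files = [f for f in staged if f.endswith('.py')]
if python_files:
    subprocess.check_call(['autopep8', '--in-place'] + python_files)
    subprocess.check_call(['git', 'add'] + python_files)
```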
…setuputils https://lists.logilab.org/pipermail/python-projects/2011-October/003107.html This was not present before because it only happens with virtualenv.
Dal now keeps a connection to the gcloud datastore client and is a class, so clients of Dal need to instantiate a Dal object. This is not strictly necessary now, because "client" is a static class field rather than an instance field of the Dal class, but I made it a class field afterwards for performance. I kept instances of Dal everywhere anyway because it's more flexible (for example, to make a specific client per instance).
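A minimal sketch of that pattern, assuming the google-cloud datastore client; the method names are illustrative, not the actual Dal API:

```python
from google.cloud import datastore

class Dal(object):
    # The gcloud client as a static class field: one shared connection,
    # whichever Dal instance you go through.
    _client = None

    def __init__(self):
        if Dal._client is None:
            Dal._client = datastore.Client()

    def get_entity(self, key):
        return Dal._client.get(key)

# Callers still instantiate Dal, which keeps the door open for a
# per-instance client later.
dal = Dal()
```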
Python 2's datetime.utcnow gives a date without timezone info. This makes comparisons with dates from the database fail (because datetimes loaded from gcloud datastore have timezones, which was not the case before).
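A small illustration of the mismatch (pytz is an assumption here, used only to build an aware datetime):

```python
import datetime
import pytz

naive = datetime.datetime.utcnow()       # tzinfo is None
aware = datetime.datetime.now(pytz.utc)  # like a datetime loaded from datastore

# Comparing naive and aware datetimes does not do what you want:
# equality is always False, and ordering raises TypeError. The naive
# value has to be made timezone-aware first.
fixed = naive.replace(tzinfo=pytz.utc)
assert fixed.tzinfo is not None
```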
gator.py is renamed to main.py to match the Flask convention. The handlers.py modules are migrated to Flask one to one; the "blueprint" is the Flask way to split an app across several Python modules. requirements.txt is used by App Engine to load the server dependencies.
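A hedged sketch of the blueprint pattern; the blueprint, route, and handler names are made up for illustration, not the ones from this PR:

```python
from flask import Blueprint, Flask, jsonify

# One blueprint per former handlers.py module.
users = Blueprint('users', __name__)

@users.route('/users/<user_id>')
def get_user(user_id):
    return jsonify({'id': user_id})

app = Flask(__name__)
app.register_blueprint(users)
```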
Tests remain roughly the same, except:
- some cleanups where they used ndb directly
- new tests for empty-collection management (because on gcloud, when the value associated with an entry is an empty list, the entry is not saved at all)
dbentities is removed because there is no equivalent of a strongly typed "entity" type in gcloud; everything is just dictionaries. Dal is now a class (explained in the previous commit about Dal clients). The client methods remain the same, of the kind:
- save_entity(struct.entities)
- get_entity(struct_key)
They call private methods:
- _to_db_entity(struct.entity)
- _to_entity(db_entity)
These live outside the Dal instance because they don't use the Dal class field (the gcloud client). Some new private methods are created for refactoring. The Dal client is instantiated with a mock credential in the test environment (as advised in googleapis/google-cloud-python#1839).
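A sketch of the mock-credential trick from that issue; the exact imports depend on the google-cloud library version in use, and the helper name is illustrative:

```python
import mock
from google.auth import credentials
from google.cloud import datastore

def make_test_client(project='test-project'):
    # The local emulator never validates credentials, so a Mock that
    # satisfies the Credentials interface is enough to build a client.
    creds = mock.Mock(spec=credentials.Credentials)
    return datastore.Client(project=project, credentials=creds)
```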
…and remove steps related to app_engine. "default: true" means deployment is done to the default app address. "branch" is not specified, which means the app is only deployed from the master branch. cf. https://docs.travis-ci.com/user/deployment/google-app-engine
We get random timeouts in Travis on actions with selenium/phantomJS. This seems to be a problem specific to the Docker/phantomJS mix; more details in the issues below. Retrying is the approach of the pull request below; we apply it to getting pages and clicking links (where I spotted the issues). I don't know why we did not get those issues before the Flask migration...
Timeout problem in Travis: travis-ci/travis-ci#3251, airspeed-velocity/asv#290
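A minimal sketch of that retry idea (the helper name and parameters are illustrative, not the actual code from this PR):

```python
import time
from selenium.common.exceptions import TimeoutException

def retry_on_timeout(action, attempts=3, delay=1):
    """Run action(); on a selenium timeout, retry up to `attempts` times."""
    for attempt in range(attempts):
        try:
            return action()
        except TimeoutException:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# Usage in a functest, e.g.:
#   retry_on_timeout(lambda: driver.get(page_url))
#   retry_on_timeout(link.click)
```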
Info about Flask testing is in the tutorial: http://flask.pocoo.org/docs/0.11/testing/
Functests that call a true server/datastore are slow and painful to run and debug; it's not practical to test all possible cases from there. We add unit tests directly on the Flask handlers, mocking the datastore.
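A hedged sketch of such a handler-level unit test, assuming a hypothetical server.main/server.dal module layout and Flask's built-in test client:

```python
import mock

def test_get_user_returns_200():
    # Patch the Dal class before importing the app, so the handler
    # never touches a real datastore.
    with mock.patch('server.dal.Dal') as fake_dal:
        fake_dal.return_value.get_user.return_value = {'id': '42'}
        from server.main import app      # hypothetical module layout
        client = app.test_client()       # Flask's built-in test client
        response = client.get('/users/42')
        assert response.status_code == 200
```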
To be consistent with structs and naming elsewhere. At the same time, make the ds_client field private in Dal (renamed to _ds_client).
Since I added a shebang to the script, the 'pkill gcloud' command was killing the script itself, because its own command line contains "gcloud". I made the regex more precise to fix this.
Force-pushed from 44b2614 to 20a8f60
Migrate the app:
- from the App Engine standard environment to the flexible environment
- from webapp2 to Flask
- from ndb to the gcloud datastore API