global: pep8 and pep257 improvements #14

Merged
5 commits merged on Aug 26, 2014
15 changes: 9 additions & 6 deletions .gitignore
@@ -1,12 +1,15 @@
*.egg
*.egg-info

*.pyc
*~
*.egg-info
*.egg
.cache
.coverage
.ropeproject
.tox
MANIFEST
build
dist
build/*
coverage.xml
dist
dist/*
.ropeproject
.coverage
coverage.xml
4 changes: 3 additions & 1 deletion .travis.yml
@@ -13,11 +13,13 @@ python:

install:
- pip install --upgrade pip --use-mirrors
- pip install cloud coverage --use-mirrors
+- pip install pytest pytest-cache pytest-pep8 pytest-cov --use-mirrors
- pip install coveralls --use-mirrors
- pip install .

script:
-- coverage run --source=workflow setup.py test
+- python -m pytest

after_success:
- coveralls
40 changes: 20 additions & 20 deletions README.rst
@@ -65,10 +65,11 @@ Example:
def next_token(obj, eng):
eng.ContinueNextToken()

-There are NO explicit states, conditions, transitions - the job of the engine is
-simply to run the tasks one after another. It is the responsibility of the task
-to tell the engine what is going to happen next; whether to continue, stop,
-jump back, jump forward and few other options.
+There are NO explicit states, conditions, transitions - the job of the
+engine is simply to run the tasks one after another. It is the
+responsibility of the task to tell the engine what is going to happen
+next; whether to continue, stop, jump back, jump forward and few other
+options.
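
For example, a task can short-circuit the remaining tasks for the
current object - a sketch using only the ``ContinueNextToken`` call from
the example above; the guard condition is illustrative:

.. code-block:: python

    def skip_empty(obj, eng):
        # Hand control back to the engine and move on to the next
        # token, skipping the remaining tasks for this object.
        if not obj:
            eng.ContinueNextToken()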

This is actually a *feature*, I knew that there will be a lot of possible
exceptions and transition states to implement for NLP processing and I also
@@ -78,10 +78,8 @@ you can make more errors and workflow engine will not warn you.
The workflow module comes with many patterns that can be directly used in the
definition of the pipeline, such as IF, IF_NOT, PARALLEL_SPLIT and others.

-*This version requires Python 2 and many of the workflow patterns (such as IF,
-XOR, WHILE) are implemented using lambdas, therefore not suitable for Python 3.*
-
-The individual tasks then can influence the whole pipeline, available ''commands'' are:
+The individual tasks then can influence the whole pipeline, available
+''commands'' are:

.. code-block:: text

@@ -129,8 +128,8 @@ We can then write *workflow definition* like:
Tasks
-----

-Tasks are simple python functions, we can enforce rules (not done yet!) in a pythonic
-way using pydoc conventions, consider this:
+Tasks are simple python functions, we can enforce rules (not done yet!) in
+a pythonic way using pydoc conventions, consider this:

.. code-block:: python

@@ -150,14 +149,15 @@ way using pydoc conventions, consider this:
"""
...

-So using the python docs, we can instruct workflow engine what types of arguments
-are acceptable, what is the expected outcome and what happens after the task finished.
-And let's say, there will be a testing framework which will run the workflow
-pipeline with fake arguments and will test all sorts of conditions. So, the
-configuration is not cluttered with states and transitions that are possible,
-developers can focus on implementation of the individual tasks, and site admins
-should have a good understanding what the task is supposed to do -- the description
-of the task will be displayed through the web GUI.
+So using the python docs, we can instruct workflow engine what types of
+arguments are acceptable, what is the expected outcome and what happens
+after the task finished. And let's say, there will be a testing framework
+which will run the workflow pipeline with fake arguments and will test all
+sorts of conditions. So, the configuration is not cluttered with states
+and transitions that are possible, developers can focus on implementation
+of the individual tasks, and site admins should have a good understanding
+what the task is supposed to do -- the description of the task will be
+displayed through the web GUI.
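
Such an annotated task might look like the following - a sketch only:
the engine does not enforce any of this yet, and the tag names are
illustrative:

.. code-block:: python

    def normalize_title(obj, eng):
        """Lowercase the title of the processed object.

        @param obj: dict carrying a 'title' key
        @param eng: the workflow engine executing this task
        @return: None, obj is modified in place
        """
        obj['title'] = obj.get('title', '').lower()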

Some examples
-------------
@@ -171,8 +171,8 @@ patterns.

.. image:: http://www.yawlfoundation.org/images/patterns/basic_ps.jpg

-This pattern is called Parallel split (as tasks B,C,D are all started in parallel
-after task A). It could be implemented like this:
+This pattern is called Parallel split (as tasks B,C,D are all started in
+parallel after task A). It could be implemented like this:

.. code-block:: python

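
The snippet itself is collapsed in this view. A minimal definition in
the spirit of the text might read as follows - a sketch assuming the
``PARALLEL_SPLIT`` helper from ``workflow.patterns`` and placeholder
tasks:

.. code-block:: python

    from workflow.patterns import PARALLEL_SPLIT

    # Placeholder tasks; real ones would transform obj.
    def a(obj, eng):
        pass

    def b(obj, eng):
        pass

    def c(obj, eng):
        pass

    def d(obj, eng):
        pass

    # b, c and d are started in parallel once a has finished.
    workflow_definition = [a, PARALLEL_SPLIT(b, c, d)]
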
88 changes: 54 additions & 34 deletions bin/run_workflow.py
@@ -8,6 +8,7 @@
# more details.

import glob
+import six
import sys
import os
import imp
@@ -21,22 +22,24 @@

log = main_engine.get_logger('workflow.run-worklfow')


def run(selection,
listwf=None,
places=None,
verbose=False,
profile=None,
**kwargs):
-'''
+"""
Example usage: %prog -l
%prog 1 [to select first workflow to run]

usage: %prog glob_pattern(s) [options]
-l, --listwf: list available workflows
--i, --places = places: list of glob patterns to search for workflows (separate with commas!)
+-i, --places = places: list of glob patterns to search for workflows
+(separate with commas!)
-p, --profile=profile: profile the workflow and save output as x
-v, --verbose: makes for a lot of output
-'''
+"""

workflows = set()

@@ -59,7 +62,9 @@ def run(selection,
for i in range(len(workflows)):
print "%d - %s" % (i, short_names[i])
if not len(workflows):
-log.warning('No workflows found using default search path: \n%s' % '\n'.join(places))
+log.warning(
+'No workflows found using default search path: \n%s' % (
+'\n'.join(places)))

if workflows:
for s in selection:
@@ -76,7 +81,9 @@ if len(ids) == 0:
if len(ids) == 0:
raise Exception("I found no wf for this id: %s" % (s, ))
elif len(ids) > 1:
-raise Exception("There is more than one wf for this id: %s (%s)" % (s, ids))
+raise Exception(
+"There is more than one wf for this id: %s (%s)" % (
+s, ids))
else:
if verbose:
run_workflow(workflows[ids[0]],
@@ -95,15 +102,16 @@ def find_workflow(workflows, name):
i += 1
return candidates


def run_workflow(file_or_module,
data=None,
engine=None,
-processing_factory = None,
-callback_chooser = None,
-before_processing = None,
-after_processing = None,
-profile = None):
-"""Runs the workflow
+processing_factory=None,
+callback_chooser=None,
+before_processing=None,
+after_processing=None,
+profile=None):
+"""Run the workflow
@var file_or_module: you can pass string (filepath) to the
workflow module, the module will be loaded as an anonymous
module (from the file) and <module>.workflow will be
@@ -132,15 +140,14 @@ def run_workflow(file_or_module,
@return: workflow engine instance (after its workflow was executed)
"""

-if isinstance(file_or_module, basestring):
+if isinstance(file_or_module, six.string_types):
log.info("Loading: %s" % file_or_module)
workflow = get_workflow(file_or_module)
elif isinstance(file_or_module, list):
workflow = WorkflowModule(file_or_module)
else:
workflow = file_or_module


if workflow:
if profile:
workflow_def = PROFILE(workflow.workflow, profile)
@@ -164,7 +171,7 @@
before_processing,
after_processing)
datae.process(data)
-if data[0]: # get prepared data
+if data[0]:  # get prepared data
data = data[0]

log.info('Running the workflow')
@@ -173,12 +180,13 @@
else:
raise Exception('No workfow found in: %s' % file_or_module)


def create_workflow_engine(workflow,
engine=None,
-processing_factory = None,
-callback_chooser = None,
-before_processing = None,
-after_processing = None):
+processing_factory=None,
+callback_chooser=None,
+before_processing=None,
+after_processing=None):
"""Instantiate engine and set the workflow and callbacks
directly
@var workflow: normal workflow tasks definition
@@ -192,10 +200,12 @@ def create_workflow_engine(workflow,
"""
if engine is None:
engine = main_engine.GenericWorkflowEngine
-wf = engine(processing_factory, callback_chooser, before_processing, after_processing)
+wf = engine(processing_factory, callback_chooser,
+before_processing, after_processing)
wf.setWorkflow(workflow)
return wf


def get_workflow(file):
""" Initializes module into a separate object (not included in sys) """
name = 'XXX'
@@ -217,8 +227,8 @@ def get_workflow(file):
# old_cwd = os.getcwd()

try:
-#filedir, filename = os.path.split(file)
-#os.chdir(filedir)
+# filedir, filename = os.path.split(file)
+# os.chdir(filedir)
execfile(file, x.__dict__)
except Exception, excp:
sys.stderr.write(traceback.format_exc())
@@ -228,6 +238,7 @@ def get_workflow(file):

return x


def import_workflow(workflow):
"""Import workflow module
@var workflow: string as python import, eg: merkur.workflow.load_x"""
@@ -238,23 +249,27 @@ def import_workflow(workflow):
return mod





class TalkativeWorkflowEngine(main_engine.GenericWorkflowEngine):
counter = 0

def __init__(self, *args, **kwargs):
main_engine.GenericWorkflowEngine.__init__(self, *args, **kwargs)
-self.log = main_engine.get_logger('TalkativeWFE<%d>' % TalkativeWorkflowEngine.counter)
+self.log = main_engine.get_logger(
+'TalkativeWFE<%d>' % TalkativeWorkflowEngine.counter)
TalkativeWorkflowEngine.counter += 1

def execute_callback(self, callback, obj):
obj_rep = []
max_len = 60

def val_format(v):
return '<%s ...>' % repr(v)[:max_len]

def func_format(c):
-return '<%s ...%s:%s>' % (c.func_name, c.func_code.co_filename[-max_len:], c.func_code.co_firstlineno)
+return '<%s ...%s:%s>' % (
+c.func_name,
+c.func_code.co_filename[-max_len:],
+c.func_code.co_firstlineno)
if isinstance(obj, dict):
for k, v in obj.items():
obj_rep.append('%s:%s' % (k, val_format(v)))
@@ -268,12 +283,15 @@ def func_format(c):
self.log.debug('%s ( %s )' % (func_format(callback), obj_rep))
callback(obj, self)


class WorkflowModule(object):
"""This is used just as a replacement for when module is needed but workflow
was supplied directly"""

"""Workflow wrapper."""

def __init__(self, workflow):
self.workflow = workflow


def usage():
print """
usage: %(prog)s [options] <workflow name or pattern>
@@ -296,15 +314,17 @@ def usage():
the less messages are printed
-h, --help: this help message

""" % {'prog' : os.path.basename(__file__) }
""" % {'prog': os.path.basename(__file__)}


def main():

try:
-opts, args = getopt.getopt(sys.argv[1:], "lp:o:ve:h", ['list', 'places=', 'profile=', 'verbose', 'Vlevel=', 'help'])
+opts, args = getopt.getopt(sys.argv[1:], "lp:o:ve:h", [
+'list', 'places=', 'profile=', 'verbose', 'Vlevel=', 'help'])
except getopt.GetoptError, err:
# print help information and exit:
-print str(err) # will print something like "option -a not recognized"
+print str(err)  # will print something like "option -a not recognized"
usage()
sys.exit(2)

@@ -334,15 +354,15 @@ def main():
else:
assert False, "unhandled option %s" % o



if (not len(args) or not len(opts)) and 'listwf' not in kw_args:
usage()
sys.exit()

if 'places' not in kw_args:
d = os.path.dirname(os.path.abspath(__file__))
-kw_args['places'] = ['%s/workflows/*.py' % d, '%s/workflows/*.pyw' % d, '%s/workflows/*.cfg' % d]
+kw_args['places'] = ['%s/workflows/*.py' % d,
+'%s/workflows/*.pyw' % d,
+'%s/workflows/*.cfg' % d]

run(args, **kw_args)

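
The main compatibility change above replaces ``basestring`` with
``six.string_types`` so that the ``isinstance`` check works on both
Python 2 and Python 3. A small sketch of the pattern - the function and
its return values are illustrative:

.. code-block:: python

    import six

    def resolve(file_or_module):
        # six.string_types is (str, unicode) on Python 2 and (str,)
        # on Python 3, so a single check covers both interpreters.
        if isinstance(file_or_module, six.string_types):
            return 'load it as a file path'
        return 'use it as an already-imported module'
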
2 changes: 2 additions & 0 deletions pytest.ini
@@ -0,0 +1,2 @@
+[pytest]
+addopts = --clearcache --pep8 --cov=workflow --cov-report=term-missing
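
With these defaults a bare ``python -m pytest`` (as now called from
``.travis.yml``) clears the pytest-cache state, runs the pep8 checks and
reports missing-line coverage for the ``workflow`` package. An explicit
invocation with the same flags would be - assuming the plugins from
``requirements-test.txt`` plus pytest-cache are installed:

.. code-block:: python

    import pytest

    # Same flags as the addopts line above; with pytest.ini in place
    # a plain pytest.main() picks them up automatically.
    pytest.main(['--clearcache', '--pep8',
                 '--cov=workflow', '--cov-report=term-missing'])
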
3 changes: 3 additions & 0 deletions requirements-test.txt
@@ -0,0 +1,3 @@
+pytest
+pytest-cov
+pytest-pep8