[alpha] Addition of simple pipeline abstraction #11
Conversation
`evalem.pipelines` module is added.
Now we can parametrize data, models, and evaluators.
predictions = self.model(inputs, **kwargs)
return list(
    map(
        lambda e: e(predictions=predictions, references=references),
Might have to pass `kwargs` to each evaluator just in case, to avoid any bugs?
You mean like a catch-all `kwargs` that any evaluator might need, within the pipeline? If that's the case, it might be too overloaded imo. Then again, if we are not expecting too many evaluator-specific kwargs, I think it's fine.
Ah ya. For now, it's a bit tricky because some of the interfaces seem to lack `kwargs`. So passing `kwargs` here explicitly might cause a runtime error. Gonna let it be for now.
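For reference, a minimal sketch of what forwarding `kwargs` to each evaluator could look like. This assumes the evaluator `__call__` interfaces accept `**kwargs` and that the pipeline stores its evaluators in a `self.evaluators` attribute; neither is confirmed by this PR, and the first is exactly what the thread above flags as missing:

```python
# Sketch only: assumes each evaluator's __call__ accepts **kwargs and that
# the pipeline holds its evaluators in `self.evaluators`. Neither assumption
# is guaranteed by the current interfaces (see discussion above).
def run(self, inputs, references, **kwargs):
    predictions = self.model(inputs, **kwargs)
    return list(
        map(
            lambda e: e(predictions=predictions, references=references, **kwargs),
            self.evaluators,
        )
    )
```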
model = request.getfixturevalue(model)
predictions = model(data.get("inputs", []))
print(predictions)
assert len(predictions) == len(data.get("references", []))
Not sure if testing reference and prediction lengths is a valid test, but I might be wrong.
I am not 100% sure about the tests and their coverage. For now I think it looks good, but I'll leave it to @xhagrg @code-geek. The rest (not much) is LGTM.
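For context, here is a hypothetical sketch of how `conftest.py` fixtures can be wired so that a test parametrized over fixture names resolves them at runtime via `request.getfixturevalue(...)`, as the hunk above does. All fixture names and the stub model are illustrative, not the actual suite:

```python
# conftest.py-style sketch; names are illustrative placeholders.
import pytest


@pytest.fixture
def data():
    # Minimal stand-in dataset with the shape the test above expects.
    return {"inputs": ["some input text"], "references": ["some reference"]}


@pytest.fixture
def echo_model():
    # Stub model: any callable mapping a list of inputs to a list of
    # predictions is enough for the length assertion.
    return lambda inputs: [f"prediction for {x}" for x in inputs]


# The test receives the fixture *name* as a parameter and looks it up.
@pytest.mark.parametrize("model", ["echo_model"])
def test_model_predictions(request, model, data):
    model = request.getfixturevalue(model)
    predictions = model(data.get("inputs", []))
    assert len(predictions) == len(data.get("references", []))
```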
Major Changes
- `evalem.pipelines` module is added. A new type/component, `evalem.pipelines._base.Pipeline`, is introduced.
- `evalem.pipelines.defaults.SimpleEvaluationPipeline` is implemented, which takes in a single model, inputs, and a list of evaluators, and runs the model through the pipeline. We can invoke `.run(...)`, or call the pipeline object directly since it is callable.

Minor Changes
- The `tests/` test suite is refactored to make use of `conftest.py`.
- Format configuration is added for pytest.

Usage
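The original usage snippet is not shown here; below is a hedged sketch of what invoking the pipeline could look like, based only on the description above. The constructor and call signatures, and the placeholder model/evaluator objects, are assumptions:

```python
from evalem.pipelines.defaults import SimpleEvaluationPipeline

# Placeholders: substitute a real model wrapper and evaluators here.
model = ...
evaluators = [...]

# Assumed constructor signature: a single model plus a list of evaluators.
pipeline = SimpleEvaluationPipeline(model=model, evaluators=evaluators)

inputs = [...]      # data to run the model on
references = [...]  # ground-truth references for the evaluators

# Either invoke .run(...) explicitly...
results = pipeline.run(inputs, references)
# ...or call the pipeline object directly, since it is callable.
results = pipeline(inputs, references)
print(results)
```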