Define pipelines in YAML files #650
I wanted to define Kedro pipelines in YAML too, so I implemented this option in the PipelineX package. You can define a Kedro pipeline like this:

```yaml
# parameters.yml
PIPELINES:
  __default__:
    =: pipelinex.FlexiblePipeline
    module:  # Optionally specify the default Python module so you can omit the name of the module to which each function belongs
    decorator:  # Optionally specify function decorator(s) to apply to each node
    nodes:
      - inputs: ["params:model", train_df, "params:cols_features", "params:col_target"]
        func: sklearn_demo.train_model
        outputs: model
      - inputs: [model, test_df, "params:cols_features"]
        func: sklearn_demo.run_inference
        outputs: pred_df
```
For more options, please see the PipelineX documentation; it includes an example Kedro project using the Iris dataset. PipelineX supports Kedro 0.16.x.
Thank you a lot, @Minyus! Looks like you've done a great job with PipelineX. For better or worse, I've already switched to a newer Kedro version. However, despite all your great work, I still believe that YAML pipeline definition should be part of the Kedro framework. Let's see what the core contributors' point of view is.
@Sitin Thank you for the detailed issue/suggestion.
A team within QuantumBlack Labs has also built a plugin (internally) that allows users to define pipelines in YAML, quite similar to what you describe; @drqb or @willashford may be able to chime in here, as its developers. At this time, it's not used on the majority of Kedro projects, but having it as a plugin allows the Kedro framework itself to remain (comparatively) lightweight. P.S. You may get sparse replies from the core team until January, as a lot of people are on holiday.
Hi @Sitin, yes, I hope Kedro natively supports a YAML interface for pipelines too. Meanwhile, I have prepared Kedro starter templates. The YAML pipeline is at https://github.com/Minyus/kedro-starters-sklearn/blob/master/sklearn-mlflow-yamlholic-iris/%7B%7B%20cookiecutter.repo_name%20%7D%7D/conf/base/parameters.yml#L34-L50

To use the YAML interface for pipeline and run config, run:

```
kedro new --starter https://github.com/Minyus/kedro-starters-sklearn.git --directory sklearn-mlflow-yamlholic-iris
```

Hooks for MLflow tracking are included, but it should work as is even if MLflow is not installed.
@Sitin Thank you for your kind words, we're really happy Kedro changed your way of working for the better and made you happier as a result! Thank you for being part of the community as well by opening this issue. Currently in Kedro we do not plan to add native support for YAML pipeline definitions. Just to give more context why, we weighed several considerations.
Defining pipelines in Python has some drawbacks as well.
To summarise, Python is too powerful as a pipeline definition language, but on the flip side it has excellent tooling support.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
First of all, thank you very much for the great framework. I've found myself not working in notebooks for weeks since I switched to Kedro. Now I can debug everything and still have interactive access to my data and models. You've changed the quality of my professional life, and I am deeply grateful for this.
Description
Now, my proposal. I think that in addition to the existing Python-based API for pipeline definition, it would be nice to be able to describe pipelines in YAML files. Moreover, the more I think about this, the more I'm convinced that it should be the default way of defining pipelines.
Context
I've found that in 95% of cases my `pipeline.py` files are "static", meaning that I don't need to dynamically define pipeline nodes and their IO. Also (my subjective opinion), most "dynamic" use cases can be handled by config templates, and we already have this feature. By putting pipeline definitions into `conf` we'll make modular pipelines easier to adjust and configure, since any pipeline consumer will have a clear picture of the inputs and outputs used, which is extremely important given that we have "global" dataset naming. We can also mix nodes from different pipelines on a consumer level (check the possible implementation below).

Another reason to use YAML-first pipeline definitions is that it would be much easier to integrate pipeline consistency checks into modern IDEs, since we would just need to parse the YAML files (parameters, pipelines, and catalog) and check that the corresponding paths to Python modules are correct.
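To make that concrete, here is a minimal sketch of such a consistency check, assuming a hypothetical pipeline YAML schema in which each node has a `func` entry holding a dotted path:

```python
# Sketch of an IDE/linter-style consistency check for pipeline YAML.
# The schema (a top-level "nodes" list with "func" dotted paths) is a
# hypothetical example, not an existing Kedro format.
import importlib

import yaml  # PyYAML


def check_pipeline_yaml(path):
    """Return the `func` entries that cannot be resolved to real callables."""
    with open(path) as f:
        config = yaml.safe_load(f)

    unresolved = []
    for node in config.get("nodes", []):
        module_path, _, func_name = node["func"].rpartition(".")
        try:
            module = importlib.import_module(module_path)
            getattr(module, func_name)
        except (ImportError, AttributeError, ValueError):
            # ValueError covers a "func" with no module prefix at all
            unresolved.append(node["func"])
    return unresolved
```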
Possible Implementation
We can use the same structure as in the current `pipeline.py` and `hooks.py` created by starters, and the same conventions as in `catalog.yml`.
Current API
Let's say I have the following pipeline in `src/my_package/analysis/pipelines.py`:
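Something like the following minimal sketch, in which the node functions (`detect_outliers`, `publish_report`) and dataset names are hypothetical:

```python
# src/my_package/analysis/pipelines.py (sketch)
from kedro.pipeline import Pipeline, node

# Hypothetical node functions, for illustration only
from my_package.analysis.nodes import detect_outliers, publish_report


def create_pipeline(**kwargs) -> Pipeline:
    return Pipeline(
        [
            node(detect_outliers, inputs="model_input", outputs="inliers"),
            node(publish_report, inputs="inliers", outputs="analysis_report"),
        ]
    )
```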
I also have to register it in `src/my_package/hooks.py`:
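For instance, following the registration pattern the starters used at the time (a sketch):

```python
# src/my_package/hooks.py (sketch of the registration boilerplate)
from typing import Dict

from kedro.framework.hooks import hook_impl
from kedro.pipeline import Pipeline

from my_package.analysis.pipelines import create_pipeline


class ProjectHooks:
    @hook_impl
    def register_pipelines(self) -> Dict[str, Pipeline]:
        analysis_pipeline = create_pipeline()
        return {
            "analysis": analysis_pipeline,
            "__default__": analysis_pipeline,
        }
```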
Proposed YAML API
With the YAML API we may do the same in `conf/base/pipelines/analysis/pipelines.yml`:
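A sketch of how this could look; the schema below is just one possibility, mirroring the Python example above:

```yaml
# conf/base/pipelines/analysis/pipelines.yml (hypothetical schema)
analysis:
  nodes:
    - func: my_package.analysis.nodes.detect_outliers
      inputs: model_input
      outputs: inliers
    - func: my_package.analysis.nodes.publish_report
      inputs: inliers
      outputs: analysis_report
```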
And from this we can automatically conclude what the registered pipeline name should be, meaning no annoying boilerplate in `hooks.py` is required. It will also be much easier to mix nodes from several modular pipelines: for example, in the presented use case we might have a global `reports_publishing` pipeline which can be used to publish HTML reports.

We can also use internal YAML templating to create an alias for `05_model_input/inliers.parquet`, which is the most frequent reason I introduce any "dynamics" into my pipeline definitions.
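Plain YAML anchors would already cover the simple aliasing case; a sketch, again using the hypothetical schema from above:

```yaml
# Hypothetical sketch: anchor a frequently reused dataset path once,
# then reference it from any node that needs it.
aliases:
  inliers: &inliers data/05_model_input/inliers.parquet

analysis:
  nodes:
    - func: my_package.analysis.nodes.detect_outliers
      inputs: model_input
      outputs: *inliers
```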