
Frequently asked questions

You can find high-level FAQs about Kedro on our website and technical FAQs in the developer documentation.

If you have a different question which isn't answered here, check out the searchable archive of Slack discussions or the older archive of discussions on Discord.

To ask your own question, join Kedro's Slack workspace and use the #questions channel.

What is the data engineering convention?

Bruce Philp and Guilherme Braccialli are the brains behind a layered data engineering convention as a model for managing data. You can find an in-depth walkthrough of their convention in a blog post on Medium.

Refer to the table below for a high-level guide to each layer's purpose.

Note: The data layers don't have to exist locally in the `data` folder within your project, but we recommend that you structure your S3 buckets or other data stores in a similar way.

(Diagram: the layered data engineering convention)

| Folder in `data` | Description |
| --- | --- |
| Raw | The initial stage of the pipeline, containing the sourced data model(s) that should never be changed; it forms your single source of truth to work from. These data models are typically untyped, e.g. CSV files, but this will vary from case to case. |
| Intermediate | Optional data model(s), introduced to type your `raw` data model(s), e.g. converting string-based values into their correctly typed representation. |
| Primary | Domain-specific data model(s) containing cleansed, transformed and wrangled data from either `raw` or `intermediate`; this forms the layer that feeds your feature engineering. |
| Feature | Analytics-specific data model(s) containing a set of features defined against the `primary` data, grouped by feature area of analysis and stored against a common dimension. |
| Model input | Analytics-specific data model(s) containing all `feature` data against a common dimension and, in the case of live projects, against an analytics run date, so that you can track the historical changes of the features over time. |
| Models | Stored, serialised, pre-trained machine learning models. |
| Model output | Analytics-specific data model(s) containing the results generated by the model based on the model input data. |
| Reporting | Reporting data model(s) that combine `primary`, `feature`, `model input` and `model output` data to drive dashboards and the views constructed from them. This layer encapsulates the blending or joining of data, improves performance, and allows the presentation layer to be replaced without redefining the underlying data models. |
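
For a concrete picture of how the layers map onto a project, the Kedro starters use numbered folders, `data/01_raw` through `data/08_reporting`. Below is a minimal sketch using the Python catalog API; the dataset names are invented, the imports assume the kedro-datasets 1.x package, and in a real project you would usually declare these entries in `conf/base/catalog.yml` instead:

```python
from kedro.io import DataCatalog
from kedro_datasets.pandas import CSVDataSet, ParquetDataSet

# Hypothetical datasets, one per layer, following the numbered data
# folders from the Kedro project template. An S3 bucket can mirror the
# same structure, e.g. s3://my-bucket/01_raw/companies.csv.
catalog = DataCatalog(
    {
        "companies": CSVDataSet(filepath="data/01_raw/companies.csv"),
        "typed_companies": ParquetDataSet(filepath="data/02_intermediate/typed_companies.pq"),
        "model_input_table": ParquetDataSet(filepath="data/05_model_input/model_input_table.pq"),
    }
)
print(catalog.list())  # ['companies', 'typed_companies', 'model_input_table']
```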

Commonly asked and answered questions

This is a list of queries that we commonly see on our Slack channel (and previously on Discord). We aim to answer each of these in documentation or blog posts but, for now, it's handy to have a list of previous answers to draw upon.

Integration testing and best practice
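
Since Kedro nodes are plain Python functions, one approach is to unit-test them directly and integration-test a small pipeline in memory. A minimal sketch, assuming Kedro 0.18-era imports and a hypothetical `clean_columns` node:

```python
import pandas as pd
from kedro.io import DataCatalog, MemoryDataSet
from kedro.pipeline import Pipeline, node
from kedro.runner import SequentialRunner

def clean_columns(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical node: normalise column headers.
    df = df.copy()
    df.columns = [c.strip().lower() for c in df.columns]
    return df

def test_pipeline_cleans_headers():
    pipeline = Pipeline([node(clean_columns, "raw_table", "clean_table")])
    # Feed the pipeline from an in-memory catalog instead of files.
    catalog = DataCatalog({"raw_table": MemoryDataSet(pd.DataFrame({" Name ": [1]}))})
    # Unregistered outputs are returned by the runner, so we can assert on them.
    output = SequentialRunner().run(pipeline, catalog)
    assert list(output["clean_table"].columns) == ["name"]
```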

How to organise/re-use code in multiple nodes/pipelines

https://discord.com/channels/778216384475693066/846330075535769601/1035544689388036106
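
A common answer is to extract the shared logic into an ordinary Python module inside your package and import it from each pipeline. A sketch, with hypothetical module, package and dataset names:

```python
# src/my_project/common/transforms.py — hypothetical shared module
import pandas as pd

def drop_empty_rows(df: pd.DataFrame) -> pd.DataFrame:
    return df.dropna(how="all")

# src/my_project/pipelines/sales/pipeline.py — the same function can be
# reused as a node in any number of pipelines
from kedro.pipeline import Pipeline, node
from my_project.common.transforms import drop_empty_rows

def create_pipeline(**kwargs) -> Pipeline:
    return Pipeline([node(drop_empty_rows, "raw_sales", "sales_no_empty_rows")])
```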

Dynamic pipelines
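
A frequent pattern here is to build several copies of a template pipeline in a loop, using namespaces to keep dataset names apart. A minimal sketch (the region and dataset names are invented):

```python
from kedro.pipeline import Pipeline, node, pipeline

def process(raw_sales):
    # Stand-in transformation; real logic is project-specific.
    return raw_sales

def create_pipeline(**kwargs) -> Pipeline:
    # Build one copy of the same template per region. The namespace
    # prefixes dataset names, e.g. "us.raw_sales" / "us.clean_sales".
    pipes = [
        pipeline(
            [node(process, inputs="raw_sales", outputs="clean_sales")],
            namespace=region,
        )
        for region in ["us", "eu", "apac"]
    ]
    return sum(pipes, Pipeline([]))
```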

Environment variables [this will improve with the OmegaConf work]

https://discord.com/channels/778216384475693066/846330075535769601/1010521600740835449
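
For context, the OmegaConf work refers to resolvers such as `oc.env`, which reads an environment variable with an optional fallback; note that, at the time of writing, Kedro's `OmegaConfigLoader` documents `oc.env` as supported for credentials only. A standalone sketch of the underlying resolver:

```python
# Demonstrates the OmegaConf resolver that the Kedro work builds on.
# ${oc.env:VAR,default} reads an environment variable with a fallback.
import os
from omegaconf import OmegaConf

os.environ["S3_BUCKET"] = "my-company-data"  # stand-in for a real deployment variable
conf = OmegaConf.create({"bucket": "${oc.env:S3_BUCKET,local-fallback}"})
print(OmegaConf.to_container(conf, resolve=True))  # {'bucket': 'my-company-data'}
```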

Conditional logic

https://discord.com/channels/778216384475693066/846330075535769601/1031896717630652416
https://discord.com/channels/778216384475693066/846330075535769601/984042296720908298
https://discord.com/channels/778216384475693066/941044759009587262/941093495236595792
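
The usual advice is that a Kedro pipeline is a static DAG, so branching belongs inside a node, typically driven by parameters. A sketch with hypothetical helpers:

```python
from kedro.pipeline import Pipeline, node

def load_cached_model():
    ...  # hypothetical: load a previously serialised model

def train_model(features):
    ...  # hypothetical: fit a fresh model on `features`

def select_strategy(params: dict, features):
    # Branch on a parameter from conf/base/parameters.yml.
    if params.get("use_cached_model", False):
        return load_cached_model()
    return train_model(features)

def create_pipeline(**kwargs) -> Pipeline:
    return Pipeline(
        [node(select_strategy, ["params:model_options", "features"], "model")]
    )
```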

How to use IDE Debugger with Kedro?

Note that if you are running a debugger with tests, you may need to add the extra argument `--no-cov` to make it work properly.

VS Code: https://docs.kedro.org/en/stable/development/set_up_vscode.html
PyCharm: https://docs.kedro.org/en/stable/development/set_up_pycharm.html?highlight=ide
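
Beyond the IDE run configurations described in those pages, one option is to launch a run from a small script and debug that script directly. A minimal sketch, assuming the Kedro 0.18.x session API:

```python
# run_debug.py — set breakpoints in your node code, then debug this
# script from your IDE like any other Python file.
from pathlib import Path

from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project

bootstrap_project(Path.cwd())  # read project metadata from pyproject.toml
with KedroSession.create(project_path=Path.cwd()) as session:
    session.run()  # runs the __default__ pipeline
```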
