[CT-376] add ability to calculate metrics based off of other metrics #4884
Comments
Thanks Mary! Moving this over to the dbt Core repo - I wasn't clear in Slack, my bad 😅
Absolutely critical that metrics can serve as inputs to other metrics. If that's achieved by intermediary models initially, okay, but ultimately metrics need to be able to work like models: they can and will chain into each other. Perfectly fine for them not to be recursive, though.
Agreed, this feels super important! There are two related questions on my mind:
We're continuing to collect feedback on the initial implementation of metrics, with plans to batch up larger-scale enhancements (like this one), along with more tactical improvements, for a minor version later this year. In the meantime, let's keep the thread going!
Thanks for the additional considerations @jtcohen6 👋

"Metrics that power other metrics"

Re validation: I'm not sure I understand the issue here. Child metrics would have to call the parent metric and, in that call, pass any required properties. Re metric types that should not be sourced in downstream metric calculations, it seems like this could be tested, e.g. if source.args.metric is not None:  # testing that my metric does source other metrics

"Metrics that power models"
Thanks for the responses @mary-hallewell!
I was thinking in terms of a "child" metric that defines dimensions or time grains that are not supported by one of its "parent" metrics. Should that be allowed? Should an error be raised at parse time? Imagine: I'm building a metric off someone else's metric... maybe even a metric defined in a different package... should the person/team who defined the parent metric be able to "govern" my use of their metric? It feels important to get these validation rules right.
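To make the validation question concrete, here is a rough sketch. The syntax is hypothetical — the metric('...') input and all names are illustrative for this proposal, not existing dbt behavior — showing a child metric that requests a time grain its parent never declared:

```yaml
metrics:
  - name: weekly_revenue             # "parent" metric, perhaps owned by another team or package
    model: ref('orders')
    type: sum
    sql: amount
    timestamp: ordered_at
    time_grains: [day, week]         # note: month is not offered

  - name: revenue_per_customer       # "child" metric built on top of the parent
    model: metric('weekly_revenue')  # hypothetical: another metric as the input
    type: average
    sql: amount
    timestamp: ordered_at
    time_grains: [day, week, month]  # month isn't supported by the parent --
                                     # should this raise an error at parse time?
```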
I'm taking the proposal in this issue to be: Metrics should be reffable (DAG parents) for other metrics. Should we also enable metrics to be reffable within (DAG parents for) models, too? I'm already seeing some use cases that call for this (a model to union together multiple metrics, a snapshot to capture historical metric values, ...). I'm starting to think this would make sense, and I'm sensing that @dave-connors-3 and @matt-winkler might agree :) Conceptually, the lineage is simple: the model depends on the metric that depends on the model. From a technical perspective, we'll need to figure out how to capture that lineage at parse time (sorta like how ephemeral models work? maybe?)
* wip
* More support for ratio metrics
* Formatting and linting
* Fix unit tests
* Support disabling metrics
* mypy
* address all TODOs
* make pypy happy
* wip
* checkpoint
* refactor, remove ratio_terms
* flake8 and unit tests
* remove debugger
* quickfix for filters
* Experiment with functional testing for 'expression' metrics
* reformatting slightly
* make file and mypy fix
* remove config from metrics - wip
* add metrics back to context
* adding test changes
* fixing test metrics
* revert name audit
* pre-commit fixes
* add changelog
* Bumping manifest version to v6 (#5430)
* Bumping manifest version to v6
* Adding manifest file for tests
* Reverting unneeded changes
* Updating v6
* Updating test to add metrics field
* Adding changelog
* add v5 to backwards compatibility
* Clean up test_previous_version_state, update for v6 (#5440)
* Update test_previous_version_state for v6. Cleanup
* Regenerate, rm breakpoint
* Code checks
* Add assertion that will fail when we bump manifest version
* update tests to automatically tests all previous versions

Co-authored-by: Emily Rockman <[email protected]>
Co-authored-by: Jeremy Cohen <[email protected]>
Co-authored-by: Callum McCann <[email protected]>
Co-authored-by: Emily Rockman <[email protected]>
Co-authored-by: leahwicz <[email protected]>
In order to calculate rate-type metrics, there has to be a supported mechanism for calculating metrics based on other metrics.
This could be implemented by expanding the inputs allowed in a metric's model reference. Instead of only referencing a materialized model, you could allow referencing another metric dynamically and passing the time aggregation info from the current metric as a variable argument.
You could also enable this functionality by natively supporting more aggregation types, such as division, and then allowing dynamic metric arguments to be passed in that type field. I believe that can mostly be accomplished by overriding the type functions, but it still leaves a gap: you can't pass other metrics to be used in that aggregation type.
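For illustration, a rate metric built from two other metrics might look something like the sketch below. The syntax is hypothetical — the metric('...') references and the ratio/numerator/denominator fields are assumptions made for this proposal, not existing dbt syntax:

```yaml
metrics:
  - name: order_count
    model: ref('orders')
    type: count
    sql: order_id
    timestamp: ordered_at
    time_grains: [day, week, month]

  - name: total_revenue
    model: ref('orders')
    type: sum
    sql: amount
    timestamp: ordered_at
    time_grains: [day, week, month]

  # Hypothetical rate metric: its inputs are the two metrics above,
  # and the requested time grain is forwarded to both parents.
  - name: average_order_value
    type: ratio                            # illustrative aggregation type (division)
    numerator: metric('total_revenue')
    denominator: metric('order_count')
    timestamp: ordered_at
    time_grains: [day, week, month]
```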