Magic Feature Flags [Hackathon Project] #5777

Closed
marcushyett-ph opened this issue Sep 1, 2021 · 4 comments
Labels: enhancement, feature/feature-flags, stale

Comments

@marcushyett-ph
Contributor

Problem we're solving

When I create a feature flag, I need to understand how the feature behind the flag is performing.

Proposed solution

We show the performance (a specific metric) of the feature with the flag (or a specific variant) vs. the control case, and also provide a score for the confidence of the result.

  1. Configuring the metric

In the feature flags experience I should be able to configure my success event; this is what the success of my experiment will be measured against.


  2. View Results (In Product)
     We should show the metric % difference and confidence in the feature flags page next to each variant.


  3. Viewing Results (In Toolbar) [If we have time]
     We should show the same metric % difference and confidence in the toolbar as on the feature flags page.


Technical details

How do we calculate the % difference in metric between test / variant and control?
In order to compare the variants and measure confidence we will need some kind of aggregation; the simplest is per day. I propose we calculate the % difference between test and control as:

Mean_test_value_per_day    = SUM(SUM(Events[test]) / SUM(exposures[test]) OVER days_exposed) / days_exposed
Mean_control_value_per_day = SUM(SUM(Events[control]) / SUM(exposures[control]) OVER days_exposed) / days_exposed

% difference = (Mean_test_value_per_day - Mean_control_value_per_day) * 100.0 / Mean_control_value_per_day
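
A minimal Python sketch of this calculation (the variable names and example numbers are illustrative, not from any PostHog API):

```python
# Sketch of the proposed % difference calculation, assuming per-day event
# and exposure counts for each group are already available.

def mean_value_per_day(daily_events, daily_exposures):
    """SUM(events / exposures per day) / days_exposed."""
    days_exposed = len(daily_events)
    daily_rates = [e / x for e, x in zip(daily_events, daily_exposures)]
    return sum(daily_rates) / days_exposed

def percent_difference(test_events, test_exposures, control_events, control_exposures):
    """(mean_test - mean_control) * 100.0 / mean_control"""
    mean_test = mean_value_per_day(test_events, test_exposures)
    mean_control = mean_value_per_day(control_events, control_exposures)
    return (mean_test - mean_control) * 100.0 / mean_control

# Example with three days of made-up data:
print(percent_difference([12, 15, 9], [100, 110, 95], [10, 11, 8], [105, 100, 90]))
```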

How do we calculate the confidence score?
We can use a two-sample t-test to calculate our confidence in whether or not the metric has improved:

https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html

Where a is an array of the per-day results (SUM(Events[control]) / SUM(exposures[control])) for the control group, and b is the corresponding array of per-day results for the test group.

The t-test returns a p-value (X), which we can convert to a confidence score as below:

Confidence score % = (1 - X) * 100
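
A minimal sketch with scipy (the per-day rate arrays are made-up numbers; the issue doesn't specify equal or unequal variances, so this keeps scipy's default):

```python
# Sketch of the confidence score using scipy.stats.ttest_ind as referenced above.
from scipy import stats

# SUM(Events) / SUM(exposures) per day, for each group (illustrative values)
control_daily_rates = [0.095, 0.110, 0.089, 0.102, 0.097]
test_daily_rates = [0.121, 0.136, 0.095, 0.128, 0.117]

t_statistic, p_value = stats.ttest_ind(control_daily_rates, test_daily_rates)
confidence_score = (1 - p_value) * 100
print(f"Confidence score: {confidence_score:.1f}%")
```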

@marcushyett-ph
Contributor Author

Query to get the increase in a metric for a feature flag:

https://app.posthog.com/insights?insight=TRENDS&date_from=-30d&actions=%5B%7B%22id%22%3A%224912%22%2C%22type%22%3A%22actions%22%2C%22order%22%3A0%2C%22name%22%3A%22Discovered%20Learning%22%2C%22math%22%3A%22dau%22%2C%22properties%22%3A%5B%7B%22key%22%3A%22%24active_feature_flags%22%2C%22value%22%3A%224535-funnel-bar-viz%22%2C%22operator%22%3A%22icontains%22%2C%22type%22%3A%22event%22%7D%5D%7D%2C%7B%22id%22%3A%225043%22%2C%22type%22%3A%22actions%22%2C%22order%22%3A1%2C%22name%22%3A%22App%20Pageview%20-%20Logged%20in%22%2C%22math%22%3A%22dau%22%2C%22math_property%22%3Anull%2C%22properties%22%3A%5B%7B%22key%22%3A%22%24active_feature_flags%22%2C%22value%22%3A%224535-funnel-bar-viz%22%2C%22operator%22%3A%22icontains%22%2C%22type%22%3A%22event%22%7D%5D%7D%2C%7B%22id%22%3A%224912%22%2C%22type%22%3A%22actions%22%2C%22order%22%3A2%2C%22name%22%3A%22Discovered%20Learning%22%2C%22math%22%3A%22dau%22%2C%22properties%22%3A%5B%7B%22key%22%3A%22%24active_feature_flags%22%2C%22value%22%3A%224535-funnel-bar-viz%22%2C%22operator%22%3A%22not_icontains%22%2C%22type%22%3A%22event%22%7D%5D%7D%2C%7B%22id%22%3A%225043%22%2C%22type%22%3A%22actions%22%2C%22order%22%3A3%2C%22name%22%3A%22App%20Pageview%20-%20Logged%20in%22%2C%22math%22%3A%22dau%22%2C%22math_property%22%3Anull%2C%22properties%22%3A%5B%7B%22key%22%3A%22%24active_feature_flags%22%2C%22value%22%3A%224535-funnel-bar-viz%22%2C%22operator%22%3A%22not_icontains%22%2C%22type%22%3A%22event%22%7D%5D%7D%5D&filter_test_accounts=true&formula=((A%2FB)%20-%20(C%2FD))*100%20%2F%20(C%2FD)&interval=week&properties=%5B%5D&events=%5B%5D&new_entity=%5B%5D&display=ActionsBarValue

The parameters we will want to change per feature flag are:

  • The metric we want to measure against - Discovered Learning (in this example)
  • The feature flag we want to test on - 4535-funnel-bar-viz (in this example)
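
For illustration, here is the formula that query computes, ((A/B) - (C/D)) * 100 / (C/D), evaluated over made-up series totals:

```python
# Illustrative only: A and B are the metric and pageview series for users
# with the flag active, C and D the same pair without it. These numbers
# are made up, not real query results.
A, B = 340, 2100  # Discovered Learning DAUs / logged-in pageview DAUs, flag active
C, D = 300, 2050  # same pair, flag not active

percent_increase = ((A / B) - (C / D)) * 100 / (C / D)
print(f"{percent_increase:+.1f}% vs control")
```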

@marcushyett-ph
Contributor Author

UI Mock with result + Confidence:
[image]

@macobo added the feature/feature-flags label Sep 6, 2021
@posthog-bot
Contributor

This issue hasn't seen activity in two years! If you want to keep it open, post a comment or remove the stale label – otherwise this will be closed in two weeks.

@posthog-bot
Contributor

This issue was closed due to lack of activity. Feel free to reopen if it's still relevant.

@posthog-bot closed this as not planned (stale) Sep 21, 2023