Magic Feature Flags [Hackathon Project] #5777
Problem we're solving
When I create a feature flag, I need to understand how the feature (behind the flag) is performing.
Proposed solution
We show the performance (a specific metric) of the feature behind the flag (or a specific variant) vs. the control case, and also provide a score for the confidence of the result.
In the feature flags experience I should be able to configure my success event; this is what the success of my experiment will be measured against.
We should show the metric % difference and confidence in the feature flags page next to each variant.
We should show the same metric % difference and confidence in the feature toolbar as on the feature flags page.
Technical details
How do we calculate the % difference in metric between test / variant and control?
In order to compare the variants and measure confidence we will need some kind of aggregation. The simplest aggregation will be per day. I propose we calculate the % difference between test and control as:
Mean_test_value_per_day = SUM(SUM(Events[test]) / SUM(exposures[test]) OVER days_exposed) / days_exposed
Mean_control_value_per_day = SUM(SUM(Events[control]) / SUM(exposures[control]) OVER days_exposed) / days_exposed
% difference = (Mean_test_value_per_day - Mean_control_value_per_day) * 100.0 / Mean_control_value_per_day
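The per-day aggregation above could be sketched like this in Python (the event and exposure counts are illustrative made-up data, and the helper name is an assumption, not anything from the PostHog codebase):

```python
# Per-day event counts and exposure counts for each group (illustrative data).
test_events = [120, 135, 128, 140]
test_exposures = [1000, 1050, 990, 1020]
control_events = [100, 105, 98, 110]
control_exposures = [1000, 1040, 1000, 1010]

def mean_value_per_day(events, exposures):
    """Average the per-day rate (events / exposures) over the days exposed."""
    daily_rates = [e / x for e, x in zip(events, exposures)]
    return sum(daily_rates) / len(daily_rates)

mean_test = mean_value_per_day(test_events, test_exposures)
mean_control = mean_value_per_day(control_events, control_exposures)

pct_difference = (mean_test - mean_control) * 100.0 / mean_control
```

Averaging the daily rates (rather than dividing the totals) gives each day equal weight, which matches the formulas above and feeds naturally into the per-day t-test described below.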
How do we calculate the confidence score?
We can use a t-test to calculate our confidence in whether or not the metric has improved.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html
Where a is an array of the per-day results (SUM(Events[control]) / SUM(exposures[control])) for the control group and b is the equivalent array of per-day results for the test group.
The t-test returns a p-value (X), which we can convert to a confidence score as below:
Confidence score % = (1 - X)*100
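Putting the pieces together with scipy.stats.ttest_ind might look like the sketch below (the per-day rate arrays are illustrative assumptions, not real data):

```python
from scipy.stats import ttest_ind

# Per-day rates (SUM(events) / SUM(exposures) for each day) per group;
# these arrays are made-up illustrative data.
control_daily_rates = [0.100, 0.101, 0.098, 0.109, 0.102, 0.099, 0.103]
test_daily_rates = [0.120, 0.129, 0.128, 0.137, 0.125, 0.131, 0.122]

# Independent two-sample t-test: the p-value is the probability of seeing
# a difference this large if both groups actually had the same mean.
t_stat, p_value = ttest_ind(control_daily_rates, test_daily_rates)

confidence_score = (1 - p_value) * 100
```

One caveat worth noting: (1 - p) * 100 is a convenient display heuristic rather than a formal probability that the test variant is better; with only a handful of daily data points the t-test also assumes the per-day rates are roughly normally distributed.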