revised param description for SearchForExplanation
rfl-urbaniak committed Aug 26, 2024
1 parent b6164e4 commit 17be7f5
Showing 1 changed file with 51 additions and 12 deletions: docs/source/explainable_sir.ipynb
@@ -37,7 +37,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"The main dependencies for this example are PyTorch, Pyro, and ChiRho.\n"
]
@@ -94,7 +100,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Bayesian Epidemiological SIR model with Policies\n",
"## Bayesian Epidemiological SIR model with Policies"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Now, we build the epidemiological SIR (Susceptible, Infected, Recovered/Removed) model, one step at a time. We first encode the deterministic SIR dynamics. Then we add uncertainty about the parameters that govern these dynamics - $\\beta$ and $\\gamma$. These parameters have been described in much detail in the [dynamical systems tutorial](https://basisresearch.github.io/chirho/dynamical_intro.html). We then incorporate the resulting model into a more complex causal model that describes the policy mechanisms such as imposing lockdown and masking restrictions.\n",
"\n",
@@ -211,7 +223,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Bayesian SIR model\n",
"### Bayesian SIR model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Now suppose we are uncertain about $\\beta, \\gamma$, and want to construct a Bayesian SIR model that incorporates this uncertainty. Say we inducing $\\beta$ to be drawn from `Beta(18, 600)`, and $\\gamma$ to be drawn from distribution `Beta(1600, 1600)`. "
]
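As a quick sanity check on these choices (plain arithmetic, not part of the notebook's code), the mean of a `Beta(a, b)` distribution is $a/(a+b)$, so the priors are tightly concentrated near a low infection rate and a recovery rate of one half:

```python
def beta_mean(a, b):
    # Mean of a Beta(a, b) distribution.
    return a / (a + b)

beta_prior_mean = beta_mean(18, 600)      # ~0.029: a low infection rate
gamma_prior_mean = beta_mean(1600, 1600)  # 0.5: recovery rate centered at one half
```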
@@ -245,7 +264,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Bayesian SIR model with Policies\n",
"### Bayesian SIR model with Policies\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Now we incorporate the Bayesian SIR model into a larger model that includes the effect of two different policies, lockdown and masking, where each can be implemented with $50\\%$ probability (these probabilities won't really matter, as we will be intervening on these, the sampling is mainly used to register the parameters with Pyro). We encode their efficiencies which further affect the model. Crucially, these efficiencies interact in a fashion resembling the structure of the stone-throwing example we discussed in the tutorial on categorical variables. If lockdown is present, this limits the impact of masking as agents interact less and so masks have fewer opportunities to block anything. We assume the situation is assymetric: masking has no impact on the efficiency of lockdown. The model also computes `overshoot` and `os_too_high` for further analysis.\n",
"\n"
@@ -335,7 +360,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## But-for Analysis with Bayesian SIR model with Policies\n",
"## But-for Analysis with Bayesian SIR model with Policies"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Suppose now we introduced both policies, and this resulted in an overshoot. What intuitively is the case is that lockdown limited the efficiency of masking, and it was in fact the lockdown that in this particular context caused the overshoot (this is consistent with saying that in the context where only masking has been implemented, masking would be responsible for the resulting overshoot being too high).\n",
"\n",
@@ -546,9 +577,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Causal Explanations using `SearchForExplanation`\n",
"## Causal Explanations using `SearchForExplanation`\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"We first setup a function for performing importance sampling through the model that returns cumulative log probabilities of the samples, sample traces, handler for multiworld counterfactual reasoning and log probabilities. We use these objects later in the code to subselect the samples."
"We first setup a function for performing importance sampling through the model that returns cumulative log probabilities of the samples, sample traces, a handler object for multiworld counterfactual reasoning, and log probabilities. We use these objects later in the code to subselect the samples."
]
},
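The role of such a helper can be illustrated with a generic self-normalized importance sampler in plain Python (a toy stand-in with assumed names; the notebook's function is Pyro-based and also returns traces and a counterfactual handler):

```python
import math
import random

def importance_sample(target_logpdf, proposal_sample, proposal_logpdf, n=2000):
    """Draw from a proposal; return samples with their log importance weights."""
    samples = [proposal_sample() for _ in range(n)]
    log_w = [target_logpdf(x) - proposal_logpdf(x) for x in samples]
    return samples, log_w

def weighted_mean(samples, log_w):
    """Self-normalized importance-sampling estimate of E[x]."""
    m = max(log_w)
    w = [math.exp(lw - m) for lw in log_w]  # subtract max for numerical stability
    z = sum(w)
    return sum(wi * xi for wi, xi in zip(w, samples)) / z
```

The log weights play the same role as the cumulative log probabilities mentioned above: they let us reweight or subselect samples after the fact.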
{
@@ -594,12 +632,13 @@
"metadata": {},
"source": [
"Then, we setup the query as follows:\n",
"1. `supports`: We extract supports of the model using `ExtractSupports` and enrich it with additional information of `os_too_high` being a Boolean.\n",
"2. `antecedents`: We have put `lockdown=1` and `mask=1` as possible causes.\n",
"1. `supports`: We extract supports of the model using `ExtractSupports` and enrich it with additional information of `os_too_high` being a Boolean (constraints for deterministic nodes currently need to be specified manually).\n",
"2. `antecedents`: We postulate `lockdown=1` and `mask=1` as possible causes.\n",
"3. `alternatives`: We provide `lockdown=0` and `mask=0` as alternative values.\n",
"4. `witnesses`: We include `mask_efficiency` and `lockdown_efficiency` as candidates to be included in the context to be kept fixed.\n",
"5. `consequents`: We put `os_too_high=1` as the outcome we wish to analyze the causes for.\n",
"6. `antecedent_bias`, `witness_bias`, `consequent_scale`: We set these parameters to have equal probabilities of choosing causes and preferring minimal witness sets. Please refer to the documentation of `SearchForExplanation` for more details."
"4. `witnesses`: We include `mask_efficiency` and `lockdown_efficiency` as candidates to be included in the contexts potentially to be kept fixed.\n",
"5. `consequents`: We put `os_too_high=1` as the outcome whose causes we wish to analyze.\n",
"6. `antecedent_bias`, `witness_bias`,: We set these parameters to have equal probabilities of intervening on cause candidates, and to slightly prefer smaller witness sets. Please refer to the documentation of `SearchForExplanation` for more details.\n",
"7. `consequent_scale` is set to effectively include values near 0 and 1 depending on whether the binary outocomes differ across counterfactual worlds."
]
},
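Gathering these ingredients as plain data gives a sense of the query's shape (a hedged sketch: the notebook passes tensors to `SearchForExplanation`, and the bias values below are illustrative placeholders, not the ones used in the analysis):

```python
# Query ingredients as plain data (plain floats stand in for tensors here).
antecedents = {"lockdown": 1.0, "mask": 1.0}            # postulated causes
alternatives = {"lockdown": 0.0, "mask": 0.0}           # counterfactual values
witnesses = ["mask_efficiency", "lockdown_efficiency"]  # candidate fixed context
consequents = {"os_too_high": 1.0}                      # outcome under analysis
antecedent_bias = 0.0  # illustrative: equal chance of intervening on each cause
witness_bias = 0.2     # illustrative: mild preference for smaller witness sets
```

Note that every antecedent has a matching alternative with a different value, which is what makes the counterfactual comparison well-defined.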
{
