
Commit

finished up to the stopping point
rfl-urbaniak committed Aug 26, 2024
1 parent 17be7f5 commit 89864d3
Showing 1 changed file with 20 additions and 13 deletions.
33 changes: 20 additions & 13 deletions docs/source/explainable_sir.ipynb
@@ -591,7 +591,7 @@
},
{
"cell_type": "code",
"execution_count": 511,
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
@@ -643,14 +643,14 @@
},
{
"cell_type": "code",
"execution_count": 512,
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor(0.1215)\n"
"tensor(0.1257)\n"
]
}
],
@@ -670,7 +670,7 @@
" consequents={\"os_too_high\": torch.tensor(1.0)},\n",
" consequent_scale=1e-8,\n",
" witness_bias=0.2,\n",
" )(policy_model #it was policy_model_all earlier)\n",
" )(policy_model \n",
" )\n",
"\n",
"logp, importance_tr, mwc_imp, log_weights = importance_infer(num_samples=10000)(query)()\n",
@@ -681,12 +681,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have setup the query and drawn 10000 samples from it, we can analyze the samples and their log probabilities to compute queries of interest. We first compute the probabilities that different sets of antecedent candidates have causal effect over `os_too_high`."
"The above probability itself is not directly related to our query. It is the probability that the overshoot is too high in the antecedents-intervened workd and not too high in the alterantives-intervened world, where antecedent interventions are preempted with probabilities $0.5$ at each site, and witnesses are kept fixed at the observed values with probability $0.5+0.2$ at each site. But more fine-grained queries can be answered using the 10000 samples we have drawn in the process. We first compute the probabilities that different sets of antecedent candidates have causal effect over `os_too_high`."
]
},
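For orientation while reading the diff, here is a loose, hypothetical sketch of the per-site preemption scheme described in the cell above. It is not ChiRho's actual `SearchForExplanation` implementation; the helper name, site naming, and call signature are assumptions made for illustration. In the real trace the antecedent switches appear under names such as `__cause____antecedent_lockdown`, with `0` meaning the intervention was executed and `1` meaning it was preempted.

```python
import pyro
import pyro.distributions as dist
import torch


def preempt_with_bias(name, intervened_value, factual_value, bias=0.0):
    """Toy per-site preemption switch (illustrative only).

    A Bernoulli draw decides whether to keep the intervention (0) or revert to
    the factual value (1). Antecedents would use bias=0.0, i.e. preemption
    probability 0.5; witnesses would use bias=0.2, i.e. they stay at their
    observed values with probability 0.5 + 0.2 = 0.7, matching witness_bias above.
    """
    preempted = pyro.sample(f"__preempt_{name}", dist.Bernoulli(0.5 + bias))
    return torch.where(preempted.bool(), factual_value, intervened_value)
```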
{
"cell_type": "code",
"execution_count": 513,
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
@@ -699,34 +699,41 @@
},
{
"cell_type": "code",
"execution_count": 514,
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'__cause____antecedent_lockdown': 0, '__cause____antecedent_mask': 0} 0.19823434948921204\n",
"{'__cause____antecedent_lockdown': 0, '__cause____antecedent_mask': 1} 0.1833265870809555\n",
"{'__cause____antecedent_lockdown': 1, '__cause____antecedent_mask': 0} 0.10898739099502563\n",
"{'__cause____antecedent_lockdown': 1, '__cause____antecedent_mask': 1} 2.7269246860583962e-09\n"
"{'__cause____antecedent_lockdown': 0, '__cause____antecedent_mask': 0} 0.2081967145204544\n",
"{'__cause____antecedent_lockdown': 0, '__cause____antecedent_mask': 1} 0.192521870136261\n",
"{'__cause____antecedent_lockdown': 1, '__cause____antecedent_mask': 0} 0.1043718010187149\n",
"{'__cause____antecedent_lockdown': 1, '__cause____antecedent_mask': 1} 2.6645385897694496e-09\n"
]
}
],
"source": [
"# no preemptions on lockdown and masking, i.e. both interventions executed\n",
"compute_prob(importance_tr, log_weights, {\"__cause____antecedent_lockdown\": 0, \"__cause____antecedent_mask\": 0})\n",
"\n",
"# only lockdown executed, masking preempted\n",
"compute_prob(importance_tr, log_weights, {\"__cause____antecedent_lockdown\": 0, \"__cause____antecedent_mask\": 1})\n",
"\n",
"# only masking executed, lockdown preempted\n",
"compute_prob(importance_tr, log_weights, {\"__cause____antecedent_lockdown\": 1, \"__cause____antecedent_mask\": 0})\n",
"\n",
"# no interventions executed\n",
"compute_prob(importance_tr, log_weights, {\"__cause____antecedent_lockdown\": 1, \"__cause____antecedent_mask\": 1})"
]
},
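The cell defining `compute_prob` is collapsed in this diff, so the following is only an assumed reconstruction of what such a helper might do (the real definition may normalize or index the trace differently): select the samples whose preemption sites match the given assignment and average their importance weights, assuming the trace exposes a Pyro-style `.nodes` mapping.

```python
import math
import torch


def compute_prob_sketch(trace, log_weights, assignment):
    """Assumed reconstruction of a compute_prob-style helper (illustrative only).

    `assignment` maps preemption site names, e.g. "__cause____antecedent_lockdown",
    to the desired 0/1 value; the estimate is the importance-weighted share of
    samples matching the assignment.
    """
    mask = torch.ones_like(log_weights, dtype=torch.bool)
    for name, value in assignment.items():
        site_values = trace.nodes[name]["value"].squeeze()
        mask &= site_values == value
    masked = torch.where(mask, log_weights, torch.full_like(log_weights, -math.inf))
    # exp(logsumexp(masked weights) - log N): the weighted share of matching samples.
    return torch.exp(torch.logsumexp(masked, dim=0) - math.log(log_weights.shape[0]))
```

Reading the printed values together with the comments in the cell above: `0` at an antecedent site means that intervention was actually executed, so the near-zero last entry indicates that the queried behavior of `os_too_high` essentially never arises when both interventions are preempted.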
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that one can also compute above queries by giving specific parameters to `SearchForExplanation` instead of subselecting the samples as we did in the tutorial for explainable module for models with categorical variables.\n",
"Note that one could also compute above queries by giving specific parameters to `SearchForExplanation` instead of subselecting the samples, as we did in the tutorial for explainable module for models with categorical variables. Here, however, we illustrate that running a sufficiently general query ones produces samples that can be used to answer multiple different questions.\n",
"\n",
"Also, we use the log probabilities above to identify whether a particular combination of intervening nodes and context nodes have causal power or not. One can also obatin these results by explictly analyzing the sample trace as we do in the next section."
"Also, we use the log probabilities above to identify whether a particular combination of intervening nodes and context nodes have causal power or not, which is made possible by the fact that our handler adds appropriate log probabilities to the trace (see the previous tutorial and documentation for more explanation). One can also obtain these results by explictly analyzing the sample trace as we do in the next section."
]
},
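As a hedged illustration of the trace inspection mentioned above (node access is assumed to follow Pyro's trace API; `importance_tr` and `log_weights` come from the earlier cells, and the next section of the notebook performs this analysis properly):

```python
# Hypothetical sketch: read the preemption indicators out of the trace.
lockdown = importance_tr.nodes["__cause____antecedent_lockdown"]["value"].squeeze()
masking = importance_tr.nodes["__cause____antecedent_mask"]["value"].squeeze()

# Unweighted fraction of samples in which both interventions were executed.
# Unlike the compute_prob calls above, this ignores the importance weights,
# so it describes the proposal samples rather than the target probability.
both_executed = (lockdown == 0) & (masking == 0)
print(both_executed.float().mean())
```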
{
