Hotfix/0.4.3 (mckinsey#7) - Address broken links and grammar
* Fix documentation links in README (mckinsey#2)

* Fix links in README

* library -> libraries

* Fix github link in docs

* Clean up grammar and consistency in documentation (mckinsey#4)

* Clean up grammar and consistency in `README` files

* Add esses, mostly

* Reword feature description to not appear automatic

* Update docs/source/05_resources/05_faq.md

Co-Authored-By: Ben Horsburgh <[email protected]>

Co-authored-by: Ben Horsburgh <[email protected]>

* hotfix/0.4.3: fix broken links

Co-authored-by: Zain Patel <[email protected]>
Co-authored-by: Nikos Tsaousis <[email protected]>
Co-authored-by: Deepyaman Datta <[email protected]>
4 people authored Feb 5, 2020
1 parent 7e814fc commit e4314f5
Showing 5 changed files with 110 additions and 12 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -17,7 +17,7 @@

> "A toolkit for causal reasoning with Bayesian Networks."
-CausalNex aims to become one of the leading library for causal reasoning and "what-if" analysis using Bayesian Networks. It helps to simplify the steps:
+CausalNex aims to become one of the leading libraries for causal reasoning and "what-if" analysis using Bayesian Networks. It helps to simplify the steps:
- To learn causal structures,
- To allow domain experts to augment the relationships,
- To estimate the effects of potential interventions using data.
@@ -27,7 +27,7 @@ CausalNex aims to become one of the leading library for causal reasoning and "wh
CausalNex is built on our collective experience to leverage Bayesian Networks to identify causal relationships in data so that we can develop the right interventions from analytics. We developed CausalNex because:

- We believe **leveraging Bayesian Networks** is more intuitive to describe causality compared to traditional machine learning methodology that are built on pattern recognition and correlation analysis.
-- Causal relationships are more accurate if we can easily **encode or augment domain expertise** in the graph model
+- Causal relationships are more accurate if we can easily **encode or augment domain expertise** in the graph model.
- We can then use the graph model to **assess the impact** from changes to underlying features, i.e. counterfactual analysis, and **identify the right intervention**.

In our experience, a data scientist generally has to use at least 3-4 different open-source libraries before arriving at the final step of finding the right intervention. CausalNex aims to simplify this end-to-end process for causality and counterfactual analysis.
@@ -40,8 +40,8 @@ The main features of this library are:
- Allow domain knowledge to augment model relationship
- Build predictive models based on structural relationships
- Fit probability distribution of the Bayesian Networks
-- Evaluate model quality with standard statistical checks.
-- Visualisation that simplifies how causality is understood in Bayesian Networks
+- Evaluate model quality with standard statistical checks
+- Simplify how causality is understood in Bayesian Networks through visualisation
- Analyse the impact of interventions using Do-calculus
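
Taken together, the features listed above correspond to a short end-to-end workflow. Below is a minimal illustrative sketch (not part of this commit or of the README shown here); it assumes a small, already-discrete pandas DataFrame, and the column names, threshold and fitting options are made up, while the calls (`from_pandas`, `BayesianNetwork`, `fit_node_states`, `fit_cpds`, `predict`) follow the public CausalNex API:

```python
import pandas as pd

from causalnex.network import BayesianNetwork
from causalnex.structure.notears import from_pandas

# Hypothetical discrete dataset; any pandas DataFrame of categorical columns works.
df = pd.DataFrame(
    {
        "study": [1, 0, 1, 1, 0, 1, 0, 0],
        "tutoring": [0, 0, 1, 1, 0, 1, 1, 0],
        "pass_exam": [1, 0, 1, 1, 0, 1, 1, 0],
    }
)

# 1. Learn a candidate causal structure from data (NOTEARS), pruning weak edges.
sm = from_pandas(df)
sm.remove_edges_below_threshold(0.5)

# 2. Augment with domain knowledge, e.g. force an edge experts believe in
#    (assumes the reverse edge was not learned, otherwise this would create a cycle).
sm.add_edge("tutoring", "pass_exam")

# 3. Fit node states and conditional probability distributions on the acyclic graph.
bn = BayesianNetwork(sm.get_largest_subgraph())
df_fit = df[list(bn.nodes)]  # keep only columns present in the learned graph
bn = bn.fit_node_states(df_fit).fit_cpds(df_fit, method="BayesianEstimator", bayes_prior="K2")

# 4. Use the fitted network as a predictive model for a target node.
predictions = bn.predict(df_fit, "pass_exam")
```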

## How do I install CausalNex?
@@ -58,8 +58,8 @@ See more detailed installation instructions, including how to setup Python virtu

You can find the documentation for the latest stable release [here](https://causalnex.readthedocs.io/en/latest/). It explains:

-- An end-to-end [tutorial on how to use CausalNex](https://causalnex.readthedocs.io/en/latest/03_tutorial/03_tutorial.htm)
-- The [main concepts and methods](https://causalnex.readthedocs.io/en/latest/04_user_guide/04_user_guide.htm) in using Bayesian Networks for Causal Inference
+- An end-to-end [tutorial on how to use CausalNex](https://causalnex.readthedocs.io/en/latest/03_tutorial/03_tutorial.html)
+- The [main concepts and methods](https://causalnex.readthedocs.io/en/latest/04_user_guide/04_user_guide.html) in using Bayesian Networks for Causal Inference

> Note: You can find the notebook and markdown files used to build the docs in [`docs/source`](docs/source).
4 changes: 4 additions & 0 deletions RELEASE.md
@@ -1,3 +1,7 @@
+# Release 0.4.3:
+
+Bugfix to resolve broken links in README and minor text issues.
+
# Release 0.4.2:

Bugfix to add image to readthedocs
2 changes: 1 addition & 1 deletion causalnex/__init__.py
@@ -30,6 +30,6 @@
causalnex toolkit for causal reasoning (Bayesian Networks / Inference)
"""

-__version__ = "0.4.2"
+__version__ = "0.4.3"

__all__ = ["structure", "discretiser", "evaluation", "inference", "network", "plots"]
94 changes: 94 additions & 0 deletions docs/_templates/breadcrumbs.html
@@ -0,0 +1,94 @@
{# Support for Sphinx 1.3+ page_source_suffix, but don't break old builds. #}

{% if page_source_suffix %}
{% set suffix = page_source_suffix %}
{% else %}
{% set suffix = source_suffix %}
{% endif %}

{# modification to enable custom github_url #}

{% if meta is not defined or meta is not none%}
{% set meta = {} %}
{% endif %}

{% if github_url is defined %}
{% set _dummy = meta.update({'github_url': github_url}) %}
{% endif %}

{# // modification to enable custom github_url #}

{% if meta is defined and meta is not none %}
{% set check_meta = True %}
{% else %}
{% set check_meta = False %}
{% endif %}

{% if check_meta and 'github_url' in meta %}
{% set display_github = True %}
{% endif %}

{% if check_meta and 'bitbucket_url' in meta %}
{% set display_bitbucket = True %}
{% endif %}

{% if check_meta and 'gitlab_url' in meta %}
{% set display_gitlab = True %}
{% endif %}

<div role="navigation" aria-label="breadcrumbs navigation">

<ul class="wy-breadcrumbs">
{% block breadcrumbs %}
<li><a href="{{ pathto(master_doc) }}">{{ _('Docs') }}</a> &raquo;</li>
{% for doc in parents %}
<li><a href="{{ doc.link|e }}">{{ doc.title }}</a> &raquo;</li>
{% endfor %}
<li>{{ title }}</li>
{% endblock %}
{% block breadcrumbs_aside %}
<li class="wy-breadcrumbs-aside">
{% if hasdoc(pagename) %}
{% if display_github %}
{% if check_meta and 'github_url' in meta %}
<!-- User defined GitHub URL -->
<a href="{{ meta['github_url'] }}" class="fa fa-github"> {{ _('Edit on GitHub') }}</a>
{% else %}
<a href="https://{{ github_host|default("github.com") }}/{{ github_user }}/{{ github_repo }}/{{ theme_vcs_pageview_mode|default("blob") }}/{{ github_version }}{{ conf_py_path }}{{ pagename }}{{ suffix }}" class="fa fa-github"> {{ _('Edit on GitHub') }}</a>
{% endif %}
{% elif display_bitbucket %}
{% if check_meta and 'bitbucket_url' in meta %}
<!-- User defined Bitbucket URL -->
<a href="{{ meta['bitbucket_url'] }}" class="fa fa-bitbucket"> {{ _('Edit on Bitbucket') }}</a>
{% else %}
<a href="https://bitbucket.org/{{ bitbucket_user }}/{{ bitbucket_repo }}/src/{{ bitbucket_version}}{{ conf_py_path }}{{ pagename }}{{ suffix }}?mode={{ theme_vcs_pageview_mode|default("view") }}" class="fa fa-bitbucket"> {{ _('Edit on Bitbucket') }}</a>
{% endif %}
{% elif display_gitlab %}
{% if check_meta and 'gitlab_url' in meta %}
<!-- User defined GitLab URL -->
<a href="{{ meta['gitlab_url'] }}" class="fa fa-gitlab"> {{ _('Edit on GitLab') }}</a>
{% else %}
<a href="https://{{ gitlab_host|default("gitlab.com") }}/{{ gitlab_user }}/{{ gitlab_repo }}/{{ theme_vcs_pageview_mode|default("blob") }}/{{ gitlab_version }}{{ conf_py_path }}{{ pagename }}{{ suffix }}" class="fa fa-gitlab"> {{ _('Edit on GitLab') }}</a>
{% endif %}
{% elif show_source and source_url_prefix %}
<a href="{{ source_url_prefix }}{{ pagename }}{{ suffix }}">{{ _('View page source') }}</a>
{% elif show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}" rel="nofollow"> {{ _('View page source') }}</a>
{% endif %}
{% endif %}
</li>
{% endblock %}
</ul>

{% if (theme_prev_next_buttons_location == 'top' or theme_prev_next_buttons_location == 'both') and (next or prev) %}
<div class="rst-breadcrumbs-buttons" role="navigation" aria-label="breadcrumb navigation">
{% if next %}
<a href="{{ next.link|e }}" class="btn btn-neutral float-right" title="{{ next.title|striptags|e }}" accesskey="n">Next <span class="fa fa-arrow-circle-right"></span></a>
{% endif %}
{% if prev %}
<a href="{{ prev.link|e }}" class="btn btn-neutral float-left" title="{{ prev.title|striptags|e }}" accesskey="p"><span class="fa fa-arrow-circle-left"></span> Previous</a>
{% endif %}
</div>
{% endif %}
<hr/>
</div>
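
The template above builds the "Edit on GitHub" link from a `github_url` variable, set either per page or globally. In Sphinx, keys placed in `html_context` in `conf.py` become template variables, so the wiring would typically look like the sketch below; the actual CausalNex `conf.py` is not shown in this diff, and the values are illustrative:

```python
# docs/conf.py (illustrative sketch; not part of this commit)
html_context = {
    # Picked up by the custom breadcrumbs.html to render "Edit on GitHub".
    "github_url": "https://github.com/quantumblacklabs/causalnex",
    # Standard readthedocs-theme fallbacks used when no explicit github_url is given.
    "github_user": "quantumblacklabs",
    "github_repo": "causalnex",
    "github_version": "master",
    "conf_py_path": "/docs/source/",
}
```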
10 changes: 5 additions & 5 deletions docs/source/05_resources/05_faq.md
@@ -6,7 +6,7 @@

[CausalNex](https://github.com/quantumblacklabs/causalnex) is a python library that allows data scientists and domain experts to co-develop models which go beyond correlation to consider causal relationships. It was originally designed by [Paul Beaumont](https://www.linkedin.com/in/pbeaumont/) and [Ben Horsburgh](https://www.linkedin.com/in/benhorsburgh/) to solve challenges they faced in inferencing causality in their project work.

-This work was later turned into a product thanks to the following contributors: [Ivan Danov](https://github.com/idanov), [Dmitrii Deriabin](https://github.com/DmitryDeryabin), [Yetunde Dada](https://github.com/yetudada), [Wesley Leong](https://www.linkedin.com/in/wesleyleong/), [Steve Ler](https://www.linkedin.com/in/song-lim-steve-ler-380366106/), [Viktoriia Oliinyk](https://www.linkedin.com/in/victoria-oleynik/), [Roxana Pamfil](https://www.linkedin.com/in/roxana-pamfil-1192053b/), [Fabian Peter](https://www.linkedin.com/in/fabian-peters-6291ab105/), [Nisara Sriwattanaworachai](https://www.linkedin.com/in/nisara-sriwattanaworachai-795b357/) and [Nikolaos Tsaousis](https://www.linkedin.com/in/ntsaousis/).
+This work was later turned into a product thanks to the following contributors: [Ivan Danov](https://github.com/idanov), [Dmitrii Deriabin](https://github.com/DmitryDeryabin), [Yetunde Dada](https://github.com/yetudada), [Wesley Leong](https://www.linkedin.com/in/wesleyleong/), [Steve Ler](https://www.linkedin.com/in/song-lim-steve-ler-380366106/), [Viktoriia Oliinyk](https://www.linkedin.com/in/victoria-oleynik/), [Roxana Pamfil](https://www.linkedin.com/in/roxana-pamfil-1192053b/), [Fabian Peters](https://www.linkedin.com/in/fabian-peters-6291ab105/), [Nisara Sriwattanaworachai](https://www.linkedin.com/in/nisara-sriwattanaworachai-795b357/) and [Nikolaos Tsaousis](https://www.linkedin.com/in/ntsaousis/).

## What are the benefits of using CausalNex?

@@ -16,8 +16,8 @@ As we see it, CausalNex:

- **Generates transparency and trust in models** it creates by allowing users to collaborate with domain experts during the modelling process.
- Uses an **optimised structure learning algorithm**, [NOTEARS](https://papers.nips.cc/paper/8157-dags-with-no-tears-continuous-optimization-for-structure-learning.pdf) where the runtime to learn structure is no longer exponential but scales cubically with number of nodes.
-- **Add known relationships or remove spurious correlations** so that your model can better consider causal relationships in data
-- **Visualise networks using common tools** built upon [NetworkX](https://networkx.github.io/), allowing users to understand relationships in their data more intuitively, and work with experts to encode their knowledge
+- **Enables adding known relationships/removing spurious correlations** so that your model can better consider causal relationships in data.
+- **Visualises networks using common tools** built upon [NetworkX](https://networkx.github.io/), allowing users to understand relationships in their data more intuitively, and work with experts to encode their knowledge.
- **Streamlines the use of Bayesian Networks** for an end-to-end counterfactual analysis, which in the past was a complicated process involving the use of at least three separate open source libraries, each with its own interface.

## When should you consider using CausalNex?
@@ -42,7 +42,7 @@ According to the benchmarking done on **synthetic dataset** in-house, it is high

StructureModel is used when discovering the causal structure of a dataset. Part of this process is adding, removing, and flipping edges, until the appropriate structure is completed. As edges are modified, cycles can be temporarily introduced into the structure, which would raise an Exception within a BayesianNetwork, which is a specialised **directed acyclic graph**.

-Once the structure is finalised, and is acyclic, then it can be used to create a [BayesianNetwork](https://causalnex.readthedocs.io/en/latest/04_user_guide/04_user_guide.html)
+Once the structure is finalised, and is acyclic, then it can be used to create a [BayesianNetwork](https://causalnex.readthedocs.io/en/latest/04_user_guide/04_user_guide.html).
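
To make the distinction concrete, here is a small illustrative sketch (not taken from the documentation; the node names are invented). A `StructureModel` tolerates a temporary cycle while edges are being edited; only the final acyclic structure is passed to `BayesianNetwork`:

```python
from causalnex.network import BayesianNetwork
from causalnex.structure import StructureModel

sm = StructureModel()
sm.add_edges_from([("rain", "wet_grass"), ("wet_grass", "rain")])  # temporary cycle is tolerated here

# Edit the structure: drop the spurious reverse edge, add an edge from domain knowledge.
sm.remove_edge("wet_grass", "rain")
sm.add_edge("sprinkler", "wet_grass")

# Only now, once the graph is acyclic, is it safe to build the Bayesian Network.
bn = BayesianNetwork(sm)
```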


## Why a separate data pre-processing process for probability fitting than structure learning? / Why discretise data in probability fitting?
@@ -71,7 +71,7 @@ At the moment, the algorithm calculates the probability of **every node** in a B

The following points describe how we are unique comparing to the others:
1) We are one of the very few causal packages that use **Bayesian Networks** to model the problems. Most of the causal packages use statistical matching technique like **propensity score matching** to approach these problems.
-2) One of the main hurdle to applying Bayesian Network is to find the optimal graph structure. In CausalNex, We **simplify** this process by providing the ability for the users to learn the graph structure through: i) **encoding domain expertise** by manually adding the edges, and ii) **leveraging the data** using the state-of-the-art [structure learning algorithm](https://papers.nips.cc/paper/8157-dags-with-no-tears-continuous-optimization-for-structure-learning.pdf).
+2) One of the main hurdles to applying Bayesian Networks is to find the optimal graph structure. In CausalNex, We **simplify** this process by providing the ability for the users to learn the graph structure through: i) **encoding domain expertise** by manually adding the edges, and ii) **leveraging the data** using the state-of-the-art [structure learning algorithm](https://papers.nips.cc/paper/8157-dags-with-no-tears-continuous-optimization-for-structure-learning.pdf).
3) We provide the ability for the users to do **counterfactual analysis** using Bayesian Network by introducing **Do-Calculus**, which is not commonly found in Bayesian Network packages.
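
As a brief illustration of point 3 above (assuming a `BayesianNetwork` called `bn` whose CPDs have already been fitted; the node and state names are invented):

```python
from causalnex.inference import InferenceEngine

ie = InferenceEngine(bn)  # `bn` is a fitted BayesianNetwork from earlier steps

# Observational marginals, before any intervention.
before = ie.query()["pass_exam"]

# Do-calculus intervention: force `tutoring` into state 1 regardless of its parents,
# then re-query the marginal of interest.
ie.do_intervention("tutoring", 1)
after = ie.query()["pass_exam"]

# Restore the observational distribution for that node.
ie.reset_do("tutoring")
```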

## What version of Python does CausalNex use?
