MIT license

Responsible AI Tracker

Responsible AI Tracker is a JupyterLab extension for managing, tracking, and comparing the results of machine learning experiments for model improvement. Using this extension, users can view models, code, and visualization artifacts within the same framework, enabling fast model iteration and evaluation. The extension is a work-in-progress research prototype for testing and understanding the tooling functionalities and visualizations that can help data scientists. If you would like to propose new ideas for improvement, feel free to contact the development team at [email protected] or create a new issue in this repository.

This repo is part of the Responsible AI Toolbox, a suite of tools providing a collection of model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.

Main functionalities of the tracker include:

  • Managing and linking model improvement artifacts: the extension encourages clean and systematic data science practices by allowing users to associate the notebook used to create a model with the resulting model. These practices support careful model tracking and systematic experimentation.

  • Disaggregated model evaluation and comparison: the model comparison table in the extension provides an in-depth comparison between the different models registered in the extension. The comparison contrasts performance results across data cohorts and metrics, following a disaggregated approach that goes beyond single-score performance numbers and highlights cohorts of data for which a model may perform worse than its earlier versions. Read more about disaggregated analysis here.

  • Integration with the Responsible AI Mitigations library: as data scientists experiment and ideate different steps for model improvement, the Responsible AI Mitigations Library helps them implement, in Python, different mitigation techniques that may improve model performance and can be targeted towards specified cohorts of interest.
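The disaggregated comparison above is built into the extension's UI. As a rough, standalone illustration of what disaggregated evaluation means, here is a minimal sketch using scikit-learn; the cohort definitions, model choices, and column names are illustrative assumptions, not the extension's API:

```python
# Compare two models per data cohort instead of by a single overall score.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000).fit(X_train, y_train),
    "tree": DecisionTreeClassifier(random_state=0).fit(X_train, y_train),
}

# Define cohorts by slicing on one feature (an illustrative choice).
cohorts = {
    "feature0 <= 0": X_test[:, 0] <= 0,
    "feature0 > 0": X_test[:, 0] > 0,
}

rows = []
for name, model in models.items():
    preds = model.predict(X_test)
    row = {"model": name, "overall": accuracy_score(y_test, preds)}
    for cohort, mask in cohorts.items():
        # Per-cohort accuracy can reveal regressions hidden by the overall score.
        row[cohort] = accuracy_score(y_test[mask], preds[mask])
    rows.append(row)

table = pd.DataFrame(rows)
print(table)
```

A model whose overall accuracy improves may still score worse on one of the cohort columns; that is the regression a disaggregated table surfaces.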

Tour

Watch a video tour of the Responsible AI Tracker and follow along using the notebooks and dataset here.

(Video: Responsible AI Tracker overview)

Installation

The Responsible AI Tracker can be installed on Windows or Ubuntu, using either Anaconda or a plain Python environment.

The Responsible AI Tracker has the following prerequisites:

  • Node.js

  • Python (supported versions: 3.9 to 3.10.6)

  • JupyterLab

    • If you use pip:
    pip install jupyterlab==3.6.3
    • If you use conda:
    conda install -c conda-forge jupyterlab==3.6.3
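Since only a narrow Python range is supported, it can be worth checking the active interpreter before installing. A minimal sketch, not part of the official install steps (it assumes `python3` is on your PATH):

```shell
# Prints "supported" if the active Python is in the 3.9–3.10 range.
python3 -c "import sys; ok = (3, 9) <= sys.version_info[:2] <= (3, 10); print('supported' if ok else 'unsupported: %d.%d' % sys.version_info[:2])"
```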

The Responsible AI Tracker has two installation options:

  • The default installation only installs the essential packages.

    pip install raitracker
  • The installation with the [all] flag installs the essential packages plus PyTorch and TensorFlow.

    pip install raitracker[all]

Installation through the JupyterLab Extension Manager is coming soon.
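After either installation option, a quick sanity check is to look for the extension in JupyterLab's extension listing. This is a sketch assuming the extension appears under a "rai" name in the listing; the exact label may differ:

```shell
# Confirm the extension registered with JupyterLab after installation.
jupyter labextension list 2>/dev/null | grep -i rai || echo "extension not found"
```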

Running

Start up JupyterLab using:

jupyter lab

The extension should be available in JupyterLab's left sidebar. For ideas on getting started, watch the video tour and follow along using the notebooks and dataset here.

Dependencies
  • jupyterlab
  • fluentui
  • nodejs
  • react
  • redux
  • lumino
  • lodash
  • babel
  • codemirror
  • webpack
  • mlflow
  • numpy
  • pandas
  • scikit-learn
  • pytorch

Getting help

We encourage you to check the Responsible AI Tracker documentation.

For help with the Responsible AI Mitigations library, see the Responsible AI Mitigations documentation.

See here for further support information.

Bug reports

To report a bug, please read the guidelines and then open a GitHub issue.

Feature requests

We welcome suggestions for new features as they help make the project more useful for everyone. To request a feature please use the feature request template.

Contributing

To contribute code or documentation to the Responsible AI Tracker, please read the contribution guidelines.


Microsoft Open Source Code of conduct

The Microsoft Open Source Code of Conduct outlines expectations for participation in Microsoft-managed open source communities.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Research and Acknowledgements

Current Maintainers: ThuVan Pham, Matheus Mendonça, Besmira Nushi, Rahee Ghosh Peshawaria, Marah Abdin, Mark Encarnación, Dany Rouhana

Past Maintainers: Irina Spiridonova

Research Contributors: Besmira Nushi, Jingya Chen, Rahee Ghosh Peshawaria, ThuVan Pham, Matheus Mendonça, Ece Kamar, Dany Rouhana
