
Interpretability-of-Deep-Learning-Models

Dual degree (MTech) project thesis on "Interpretability of Deep Learning Models" (July 2019 - May 2020). Work done under the guidance of Dr. Mitesh M. Khapra at IIT Madras.

Abstract

The past few years have seen an explosion in artificial intelligence systems built to tackle a wide range of human tasks, from simple ones such as image classification to more complex ones such as answering questions about reading passages or images, translating between languages, and playing games such as Go or Minecraft. As deep learning models have grown in size and complexity, accuracy on many tasks has risen to near-human levels; it is now necessary to step back and analyze whether these models work in an explainable and interpretable way. This dual degree project consists of two concurrent threads, with Interpretability of Deep Learning Systems as the central theme: Analyzing Interpretability of Deep RCQA Systems, and Dialog-Based Image Retrieval.

List of publications based on the thesis

