Vision-Transformer-Interpretability

Attention Patterns

Currently only supports the ViT-B/16 model from Hugging Face.

Attention patterns for all layers and heads: `python attention_pattern.py -m vb16 -i PATH_TO_IMAGE`

Attention patterns for a particular layer and head: `python attention_pattern.py -m vb16 -i PATH_TO_IMAGE -l LAYER -he HEAD`
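Under the hood, attention maps for a ViT can be obtained by running the model with `output_attentions=True` and indexing into the returned per-layer tensors by layer and head. The sketch below illustrates this; it is an assumption about how `attention_pattern.py` works, not its actual code, and it uses a tiny randomly initialized config so it runs without downloading the real ViT-B/16 checkpoint (which the script presumably loads as `google/vit-base-patch16-224`).

```python
import torch
from transformers import ViTConfig, ViTModel

# Hypothetical sketch of attention extraction. A tiny random-weight config
# stands in for ViT-B/16 so no checkpoint download is needed.
config = ViTConfig(hidden_size=32, num_hidden_layers=2, num_attention_heads=4,
                   intermediate_size=64, image_size=32, patch_size=16)
model = ViTModel(config)
model.eval()

pixel_values = torch.randn(1, 3, 32, 32)  # stand-in for a preprocessed image
with torch.no_grad():
    out = model(pixel_values, output_attentions=True)

# out.attentions is a tuple with one tensor per layer, each of shape
# (batch, heads, tokens, tokens); here tokens = [CLS] + 4 patches = 5.
layer, head = 1, 2
attn = out.attentions[layer][0, head]  # the (tokens x tokens) map to plot
print(attn.shape)
```

Each row of `attn` is a softmax distribution over the tokens that the corresponding query token attends to; the script renders these maps per layer and head.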
