This repository collects solutions for the 2022 offering of Stanford's CS231N course on deep learning for computer vision, in which various machine learning and deep learning methods are implemented and studied. The purpose was to get some hands-on experience with neural nets and some common architectures under the hood. Unless PyTorch is explicitly mentioned, implementations are written in NumPy within the codebase provided for the course.
- kNN for image classification goes over the k-nearest-neighbors approach to image classification (a minimal sketch follows this list).
- Softmax Classifier goes over softmax classification, a generalization of logistic regression, for image classification (a loss sketch follows this list).
- Support Vector Machines goes over the linear support vector machine classifier for image classification.
- 2-Layer Nets uses a simple two-layer feed-forward neural net for image classification.
- Features learned by the methods above are analyzed in this notebook.
- Fully Connected Nets implements fully connected neural networks with regularization for an arbitrary number of hidden layers.
- Batch Normalization implements batch normalization for neural networks and demonstrates its usefulness for generalization (a forward-pass sketch follows this list).
- Dropout implements dropout, a popular method for regularizing neural networks (an inverted-dropout sketch follows this list).
- Convolutional Neural Networks implements convolutional neural networks, a popular architecture for computer vision.
- PyTorch goes through PyTorch examples at increasing levels of abstraction, from raw tensors to modules to `nn.Sequential` (a small example follows this list).
- RNN Captioning implements recurrent neural networks for image captioning.
- Transformer Captioning implements a Transformer for image captioning in PyTorch.
- Generative Adversarial Networks implements a basic GAN with a simple discriminator and generator in PyTorch.
- Self-Supervised Learning goes through an example of self-supervised learning, where contrastive learning is used to learn good representations of images.
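
The sketches below are small illustrations written for this README, not the notebooks' exact implementations; all function and variable names are made up for illustration.

A minimal NumPy version of the kNN classifier over flattened images, assuming integer class labels and the usual vectorized expansion of the squared L2 distance:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Classify each test image by a majority vote over its k nearest
    training images under the L2 distance."""
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, computed without explicit loops
    dists = (
        np.sum(X_test ** 2, axis=1, keepdims=True)
        - 2.0 * X_test @ X_train.T
        + np.sum(X_train ** 2, axis=1)
    )
    nearest = np.argsort(dists, axis=1)[:, :k]  # indices of the k closest training images
    votes = y_train[nearest]                    # their labels, shape (num_test, k)
    return np.array([np.bincount(row).argmax() for row in votes])

# Toy usage: 10 random "images" of 3072 pixels, 3 classes.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.standard_normal((10, 3072)), rng.integers(0, 3, 10)
X_te = rng.standard_normal((4, 3072))
print(knn_predict(X_tr, y_tr, X_te, k=3))
```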
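A sketch of the softmax classifier's loss: cross-entropy over linear class scores plus L2 regularization (the regularization strength `reg` is an arbitrary example value):

```python
import numpy as np

def softmax_loss(W, X, y, reg=1e-4):
    """Average cross-entropy loss of a linear softmax classifier with L2
    regularization. W: (D, C) weights, X: (N, D) images, y: (N,) class ids."""
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)  # shift scores for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    data_loss = -np.log(probs[np.arange(len(y)), y]).mean()
    return data_loss + reg * np.sum(W * W)
```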
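A sketch of the batch normalization forward pass at training time (running statistics for test time and the backward pass are omitted):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature to zero mean and unit variance over the batch,
    then scale and shift with the learnable parameters gamma and beta."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```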
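A sketch of inverted dropout (assumed here as the variant in question), where `p` is the probability of keeping a unit and the rescaling at training time means the test-time forward pass needs no change:

```python
import numpy as np

def dropout_forward(x, p=0.5, train=True):
    """Inverted dropout: keep each activation with probability p and rescale
    by p at training time so the test-time forward pass is unchanged."""
    if not train:
        return x
    mask = (np.random.rand(*x.shape) < p) / p
    return x * mask
```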
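Finally, a small example at the `nn.Sequential` level of abstraction in PyTorch; the layer sizes are illustrative choices for 32x32 RGB inputs such as CIFAR-10:

```python
import torch
import torch.nn as nn

# A tiny convolutional classifier expressed with nn.Sequential.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),  # 10 class scores
)

x = torch.randn(8, 3, 32, 32)  # a dummy batch of 8 images
scores = model(x)              # shape (8, 10)
print(scores.shape)
```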