# Tensorpack Examples

Training examples with reproducible performance.

The word "reproduce" should always mean reproducing *performance*. With the magic of SGD, wrong deep learning code often still appears to work, especially when tried on toy datasets. See "Unawareness of Deep Learning Mistakes".

We refuse toy examples. Instead of showing you 10 arbitrary networks trained on toy datasets with unverified final performance, the tensorpack examples try to faithfully replicate the experiments and performance reported in the papers as much as possible, so you can be confident that they are correct.

## Getting Started:

The toy examples in tensorpack are supposed to be just demos; a minimal sketch of what such a demo looks like follows below.
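For orientation, here is a minimal sketch of the structure such a demo script typically follows, loosely modeled on the MNIST example. The class and method names below are assumptions that may differ across tensorpack versions; consult the actual scripts in this directory for the real thing.

```python
# Minimal sketch of a tensorpack training script (assumed TF1-era API).
# Treat Model.inputs/build_graph/optimizer as illustrative, not canonical.
import tensorflow as tf
from tensorpack import (ModelDesc, TrainConfig, SimpleTrainer,
                        BatchData, launch_train_with_config)
from tensorpack.dataflow import dataset


class Model(ModelDesc):
    def inputs(self):
        # Declare the symbolic inputs of the graph.
        return [tf.TensorSpec((None, 28, 28), tf.float32, 'input'),
                tf.TensorSpec((None,), tf.int32, 'label')]

    def build_graph(self, image, label):
        # A deliberately tiny classifier; real examples build the full model here.
        image = tf.reshape(image, [-1, 28 * 28])
        logits = tf.layers.dense(image, 10)
        cost = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label, logits=logits)
        return tf.reduce_mean(cost, name='cost')  # the cost to minimize

    def optimizer(self):
        return tf.train.AdamOptimizer(1e-3)


if __name__ == '__main__':
    df = BatchData(dataset.Mnist('train'), 128)   # dataflow yielding (image, label) batches
    config = TrainConfig(model=Model(), dataflow=df, max_epoch=10)
    launch_train_with_config(config, SimpleTrainer())
```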

## Vision:

| Name | Performance |
| --- | --- |
| Train ResNet, ShuffleNet and other models on ImageNet | reproduce paper |
| Train Faster-RCNN / Mask-RCNN on COCO | reproduce paper |
| DoReFa-Net: training binary / low-bitwidth CNN on ImageNet | reproduce paper |
| Generative Adversarial Network (GAN) variants, including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN | visually reproduce |
| Fully-convolutional Network for Holistically-Nested Edge Detection (HED) | visually reproduce |
| Spatial Transformer Networks on MNIST addition | reproduce paper |
| Visualize CNN saliency maps | visually reproduce |
| Similarity learning on MNIST | |
| Single-image super-resolution using EnhanceNet | |
| Learn steering filters with Dynamic Filter Networks | visually reproduce |
| Load a pre-trained AlexNet, VGG, or Convolutional Pose Machines | |

## Reinforcement Learning:

| Name | Performance |
| --- | --- |
| Deep Q-Network (DQN) variants on Atari games, including DQN, DoubleDQN, DuelingDQN | reproduce paper |
| Asynchronous Advantage Actor-Critic (A3C) on Atari games | reproduce paper |

## Speech / NLP:

| Name | Performance |
| --- | --- |
| LSTM-CTC for speech recognition | reproduce paper |
| char-rnn for fun | fun |
| LSTM language model on PennTreebank | reproduce reference code |