From 6fed4bd3f7dd2387b12dca6f33030b8066af930b Mon Sep 17 00:00:00 2001 From: Aniket Maurya Date: Wed, 23 Jun 2021 19:10:34 +0530 Subject: [PATCH 1/5] add smiley --- README.md | 98 +++++++++++++++++++++++++++-------------------- docs/index.md | 104 ++++++++++++++++++++++++++++---------------------- 2 files changed, 115 insertions(+), 87 deletions(-) diff --git a/README.md b/README.md index 58e6fe05..9f882769 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,5 @@ # chitra - [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=aniketmaurya_chitra&metric=alert_status)](https://sonarcloud.io/dashboard?id=aniketmaurya_chitra) [![Lines of Code](https://sonarcloud.io/api/project_badges/measure?project=aniketmaurya_chitra&metric=ncloc)](https://sonarcloud.io/dashboard?id=aniketmaurya_chitra) [![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=aniketmaurya_chitra&metric=sqale_rating)](https://sonarcloud.io/dashboard?id=aniketmaurya_chitra) @@ -11,14 +10,15 @@ [![Documentation Status](https://readthedocs.org/projects/chitra/badge/?version=latest)](https://chitra.readthedocs.io/en/latest/?badge=latest) [![Discord](https://img.shields.io/discord/848469007443165184?style=flat)](https://discord.gg/TdnAfDw3kB) - ## What is chitra? -**chitra** (**चित्र**) is a Deep Learning Computer Vision library for easy data loading, model building and model visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. +**chitra** (**चित्र**) is a Deep Learning Computer Vision library for easy data loading, model building and model +visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. -Highlights: +### Highlights: +- [New] Data Visualization, Bounding Box visualization 🐶 +- [New] Framework Agnostic Model Serving ✨. - Faster data loading without any boilerplate. -- Framework Agnostic Model Serving. - Progressive resizing of images. - Rapid experiments with different models using `chitra.trainer` module. - Train models with cyclic learning rate. @@ -26,12 +26,11 @@ Highlights: > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. - ## Installation [![Downloads](https://pepy.tech/badge/chitra)](https://pepy.tech/project/chitra) [![Downloads](https://pepy.tech/badge/chitra/month)](https://pepy.tech/project/chitra) -![GitHub](https://img.shields.io/github/license/aniketmaurya/chitra?style=flat) +![GitHub License](https://img.shields.io/github/license/aniketmaurya/chitra?style=flat) ### Using pip (recommended) @@ -46,6 +45,7 @@ pip install -e . ``` ### From GitHub + ``` pip install git+https://github.com/aniketmaurya/chitra@master ``` @@ -54,7 +54,9 @@ pip install git+https://github.com/aniketmaurya/chitra@master ### Loading data for image classification -Chitra `dataloader` and `datagenerator` modules for loading data. `dataloader` is a minimal dataloader that returns `tf.data.Dataset` object. `datagenerator` provides flexibility to users on how they want to load and manipulate the data. +Chitra `dataloader` and `datagenerator` modules for loading data. `dataloader` is a minimal dataloader that +returns `tf.data.Dataset` object. `datagenerator` provides flexibility to users on how they want to load and manipulate +the data. 
```python import numpy as np @@ -65,22 +67,24 @@ import matplotlib.pyplot as plt clf_dl = Clf() data = clf_dl.from_folder(cat_dog_path, target_shape=(224, 224)) -clf_dl.show_batch(8, figsize=(8,8)) +clf_dl.show_batch(8, figsize=(8, 8)) ``` ![Show Batch](https://raw.githubusercontent.com/aniketmaurya/chitra/master/docs/assets/images/output_3_1.png) - ## Image datagenerator + Dataset class provides the flexibility to load image dataset by updating components of the class. Components of Dataset class are: + - image file generator - resizer - label generator - image loader -These components can be updated with custom function by the user according to their dataset structure. For example the Tiny Imagenet dataset is organized as- +These components can be updated with custom function by the user according to their dataset structure. For example the +Tiny Imagenet dataset is organized as- ``` train_folder/ @@ -95,11 +99,11 @@ train_folder/ ......imageN.jpg ``` -The inbuilt file generator search for images on the `folder1`, now we can just update the `image file generator` and rest of the functionality will remain same. +The inbuilt file generator search for images on the `folder1`, now we can just update the `image file generator` and +rest of the functionality will remain same. **Dataset also support progressive resizing of images.** - ### Updating component ```python @@ -109,6 +113,7 @@ ds = Dataset(data_path) # it will load the folders and NOT images ds.filenames[:3] ``` +
Output No item present in the image size list @@ -116,19 +121,22 @@ ds.filenames[:3] ['/Users/aniket/Pictures/data/tiny-imagenet-200/train/n02795169/n02795169_boxes.txt', '/Users/aniket/Pictures/data/tiny-imagenet-200/train/n02795169/images', '/Users/aniket/Pictures/data/tiny-imagenet-200/train/n02769748/images'] -
+ ```python def load_files(path): return glob(f'{path}/*/images/*') + def get_label(path): return path.split('/')[-3] + ds.update_component('get_filenames', load_files) ds.filenames[:3] ``` +
Output get_filenames updated with @@ -140,7 +148,6 @@ ds.filenames[:3]
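
The same component swap works for other folder layouts. Below is a minimal sketch, assuming a class-per-folder structure (`train/cat/img1.jpg`, `train/dog/img2.jpg`) with an illustrative `data_path`:

```python
from glob import glob
from pathlib import Path

from chitra.datagenerator import Dataset


def list_images(path):
    # one sub-folder per class: train/cat/xxx.jpg, train/dog/xxx.jpg
    return glob(f'{path}/*/*.jpg')


def label_from_parent(path):
    # the class name is the parent folder of the image file
    return Path(path).parent.name


ds = Dataset(data_path)  # data_path points at the train folder (illustrative)
ds.update_component('get_filenames', list_images)
ds.update_component('get_label', label_from_parent)
ds.filenames[:3]
```
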
- ### Progressive resizing > It is the technique to sequentially resize all the images while training the CNNs on smaller to bigger image sizes. Progressive Resizing is described briefly in his terrific fastai course, “Practical Deep Learning for Coders”. A great way to use this technique is to train a model with smaller image size say 64x64, then use the weights of this model to train another model on images of size 128x128 and so on. Each larger-scale model incorporates the previous smaller-scale model layers and weights in its architecture. @@ -168,6 +175,7 @@ for img, label in ds.generator(): print('third call to generator:', img.shape) break ``` +
Output get_filenames updated with @@ -176,11 +184,13 @@ for img, label in ds.generator(): first call to generator: (28, 28, 3) seconds call to generator: (32, 32, 3) third call to generator: (64, 64, 3) -
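
For reference, a minimal plain-Keras sketch of the weight-reuse idea behind progressive resizing. It uses random stand-in batches and a variable-size input so the same weights accept every resolution; in practice the images would come from `Dataset(data_path, image_size=size)` as shown above:

```python
import numpy as np
import tensorflow as tf

# fully-convolutional body + global pooling, so one set of weights
# can be trained on progressively larger images
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, None, 3)),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

for size in [(28, 28), (32, 32), (64, 64)]:
    # random stand-in batches; real images would come from the Dataset above
    x = np.random.rand(16, *size, 3).astype('float32')
    y = np.random.randint(0, 2, size=16)
    model.fit(x, y, epochs=1, verbose=0)  # same weights, larger images each round
```
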
+ ### tf.data support -Creating a `tf.data` dataloader was never as easy as this one liner. It converts the Python generator into `tf.data.Dataset` for a faster data loading, prefetching, caching and everything provided by tf.data. + +Creating a `tf.data` dataloader was never as easy as this one liner. It converts the Python generator +into `tf.data.Dataset` for a faster data loading, prefetching, caching and everything provided by tf.data. ```python image_sz_list = [(28, 28), (32, 32), (64, 64)] @@ -200,6 +210,7 @@ for e in dl.take(1): for e in dl.take(1): print(e[0].shape) ``` +
Output get_filenames updated with @@ -207,18 +218,21 @@ for e in dl.take(1): (28, 28, 3) (32, 32, 3) (64, 64, 3) +
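
Because `get_tf_dataset()` returns a regular `tf.data.Dataset`, the usual pipeline operations apply. A small sketch, assuming the `ds` from the snippet above and `(image, label)` elements as the output suggests:

```python
import tensorflow as tf

dl = ds.get_tf_dataset()

# batch and prefetch like any other tf.data pipeline
dl = dl.batch(8).prefetch(tf.data.experimental.AUTOTUNE)

for images, labels in dl.take(1):
    print(images.shape, labels.shape)
```
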
## Trainer -The Trainer class inherits from `tf.keras.Model`, it contains everything that is required for training. -It exposes trainer.cyclic_fit method which trains the model using Cyclic Learning rate discovered by [Leslie Smith](https://arxiv.org/abs/1506.01186). + +The Trainer class inherits from `tf.keras.Model`, it contains everything that is required for training. It exposes +trainer.cyclic_fit method which trains the model using Cyclic Learning rate discovered +by [Leslie Smith](https://arxiv.org/abs/1506.01186). ```python from chitra.trainer import Trainer, create_cnn from chitra.datagenerator import Dataset -ds = Dataset(cat_dog_path, image_size=(224,224)) +ds = Dataset(cat_dog_path, image_size=(224, 224)) model = create_cnn('mobilenetv2', num_classes=2, name='Cat_Dog_Model') trainer = Trainer(ds, model) # trainer.summary() @@ -226,17 +240,17 @@ trainer = Trainer(ds, model) ```python trainer.compile2(batch_size=8, - optimizer=tf.keras.optimizers.SGD(1e-3, momentum=0.9, nesterov=True), - lr_range=(1e-6, 1e-3), - loss='binary_crossentropy', - metrics=['binary_accuracy']) - + optimizer=tf.keras.optimizers.SGD(1e-3, momentum=0.9, nesterov=True), + lr_range=(1e-6, 1e-3), + loss='binary_crossentropy', + metrics=['binary_accuracy']) trainer.cyclic_fit(epochs=5, - batch_size=8, - lr_range=(0.00001, 0.0001), - ) + batch_size=8, + lr_range=(0.00001, 0.0001), +) ``` +
Training Loop... cyclic learning rate already set! @@ -256,18 +270,21 @@ trainer.cyclic_fit(epochs=5, 1/1 [==============================] - 0s 982us/step - loss: 1.9062 - binary_accuracy: 0.8750 -
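
`cyclic_fit` manages the learning-rate schedule internally. For reference, a stand-alone sketch of the triangular policy from Leslie Smith's paper that works with any Keras model; the `base_lr`, `max_lr` and `step_size` values are illustrative:

```python
import numpy as np
import tensorflow as tf


def triangular_clr(step, base_lr=1e-5, max_lr=1e-4, step_size=5):
    # one full cycle spans 2 * step_size steps: ramp up, then back down
    cycle = np.floor(1 + step / (2 * step_size))
    x = np.abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)


# use it as a per-epoch scheduler in a plain Keras training loop
clr = tf.keras.callbacks.LearningRateScheduler(lambda epoch: triangular_clr(epoch))
# model.fit(..., callbacks=[clr])
```
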
+ ## Model Visualization -It is important to understand what is going inside the model. Techniques like GradCam and Saliency Maps can visualize what the Network is learning. `trainer` module has InterpretModel class which creates GradCam and GradCam++ visualization with almost no additional code. + +It is important to understand what is going inside the model. Techniques like GradCam and Saliency Maps can visualize +what the Network is learning. `trainer` module has InterpretModel class which creates GradCam and GradCam++ +visualization with almost no additional code. ```python from chitra.trainer import InterpretModel + trainer = Trainer(ds, create_cnn('mobilenetv2', num_classes=1000, keras_applications=False)) model_interpret = InterpretModel(True, trainer) - image = ds[1][0].numpy().astype('uint8') image = Image.fromarray(image) model_interpret(image) @@ -280,7 +297,6 @@ print(IMAGENET_LABELS[285]) ![png](https://raw.githubusercontent.com/aniketmaurya/chitra/master/docs/assets/images/output_22_1.png) - ## Data Visualization ### Image annotation @@ -291,7 +307,7 @@ Bounding Box creation is based on top of `imgaug` library. from chitra.image import Chitra -bbox = [ 70, 25, 190, 210] +bbox = [70, 25, 190, 210] label = 'Dog' image = Chitra(image_path, bboxes=bbox, labels=label) @@ -300,7 +316,6 @@ plt.imshow(image.draw_boxes()) ![png](https://raw.githubusercontent.com/aniketmaurya/chitra/master/docs/assets/images/preview-bounding-box.png) - ## Utils Limit GPU memory or enable dynamic GPU memory growth for Tensorflow. @@ -309,10 +324,10 @@ Limit GPU memory or enable dynamic GPU memory growth for Tensorflow. from chitra.utils import limit_gpu, gpu_dynamic_mem_growth # limit the amount of GPU required for your training -limit_gpu(gpu_id=0, memory_limit=1024*2) +limit_gpu(gpu_id=0, memory_limit=1024 * 2) ``` - No GPU:0 found in your system! + No GPU:0 found in your system! ```python gpu_dynamic_mem_growth() @@ -320,13 +335,14 @@ gpu_dynamic_mem_growth() No GPU found on the machine! - ## Contributing -Contributions of any kind are welcome. Please check the [**Contributing Guidelines**](https://github.com/aniketmaurya/chitra/blob/master/CONTRIBUTING.md) before contributing. +Contributions of any kind are welcome. Please check the [**Contributing +Guidelines**](https://github.com/aniketmaurya/chitra/blob/master/CONTRIBUTING.md) before contributing. ## Code Of Conduct -We pledge to act and interact in ways that contribute to an open, welcoming, -diverse, inclusive, and healthy community. -Read full [**Contributor Covenant Code of Conduct**](https://github.com/aniketmaurya/chitra/blob/master/CODE_OF_CONDUCT.md) +We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. 
+ +Read full [**Contributor Covenant Code of +Conduct**](https://github.com/aniketmaurya/chitra/blob/master/CODE_OF_CONDUCT.md) diff --git a/docs/index.md b/docs/index.md index cc63aa58..9f882769 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,6 +1,5 @@ # chitra - [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=aniketmaurya_chitra&metric=alert_status)](https://sonarcloud.io/dashboard?id=aniketmaurya_chitra) [![Lines of Code](https://sonarcloud.io/api/project_badges/measure?project=aniketmaurya_chitra&metric=ncloc)](https://sonarcloud.io/dashboard?id=aniketmaurya_chitra) [![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=aniketmaurya_chitra&metric=sqale_rating)](https://sonarcloud.io/dashboard?id=aniketmaurya_chitra) @@ -11,29 +10,27 @@ [![Documentation Status](https://readthedocs.org/projects/chitra/badge/?version=latest)](https://chitra.readthedocs.io/en/latest/?badge=latest) [![Discord](https://img.shields.io/discord/848469007443165184?style=flat)](https://discord.gg/TdnAfDw3kB) - ## What is chitra? -**chitra** (**चित्र**) is a Deep Learning Computer Vision library for easy data loading, model building and model visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. +**chitra** (**चित्र**) is a Deep Learning Computer Vision library for easy data loading, model building and model +visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. -Highlights: +### Highlights: +- [New] Data Visualization, Bounding Box visualization 🐶 +- [New] Framework Agnostic Model Serving ✨. - Faster data loading without any boilerplate. -- Framework Agnostic Model Serving. - Progressive resizing of images. - Rapid experiments with different models using `chitra.trainer` module. - Train models with cyclic learning rate. - Model interpretation using GradCAM/GradCAM++ with no extra code. - > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. - - ## Installation [![Downloads](https://pepy.tech/badge/chitra)](https://pepy.tech/project/chitra) [![Downloads](https://pepy.tech/badge/chitra/month)](https://pepy.tech/project/chitra) -![GitHub](https://img.shields.io/github/license/aniketmaurya/chitra?style=flat) +![GitHub License](https://img.shields.io/github/license/aniketmaurya/chitra?style=flat) ### Using pip (recommended) @@ -48,43 +45,46 @@ pip install -e . ``` ### From GitHub + ``` pip install git+https://github.com/aniketmaurya/chitra@master - ``` ## Usage ### Loading data for image classification -Chitra `dataloader` and `datagenerator` modules for loading data. `dataloader` is a minimal dataloader that returns `tf.data.Dataset` object. `datagenerator` provides flexibility to users on how they want to load and manipulate the data. +Chitra `dataloader` and `datagenerator` modules for loading data. `dataloader` is a minimal dataloader that +returns `tf.data.Dataset` object. `datagenerator` provides flexibility to users on how they want to load and manipulate +the data. 
```python import numpy as np import chitra from chitra.dataloader import Clf, show_batch import matplotlib.pyplot as plt -``` -```python + clf_dl = Clf() data = clf_dl.from_folder(cat_dog_path, target_shape=(224, 224)) -clf_dl.show_batch(8, figsize=(8,8)) +clf_dl.show_batch(8, figsize=(8, 8)) ``` ![Show Batch](https://raw.githubusercontent.com/aniketmaurya/chitra/master/docs/assets/images/output_3_1.png) - ## Image datagenerator + Dataset class provides the flexibility to load image dataset by updating components of the class. Components of Dataset class are: + - image file generator - resizer - label generator - image loader -These components can be updated with custom function by the user according to their dataset structure. For example the Tiny Imagenet dataset is organized as- +These components can be updated with custom function by the user according to their dataset structure. For example the +Tiny Imagenet dataset is organized as- ``` train_folder/ @@ -99,11 +99,11 @@ train_folder/ ......imageN.jpg ``` -The inbuilt file generator search for images on the `folder1`, now we can just update the `image file generator` and rest of the functionality will remain same. +The inbuilt file generator search for images on the `folder1`, now we can just update the `image file generator` and +rest of the functionality will remain same. **Dataset also support progressive resizing of images.** - ### Updating component ```python @@ -113,6 +113,7 @@ ds = Dataset(data_path) # it will load the folders and NOT images ds.filenames[:3] ``` +
Output No item present in the image size list @@ -120,19 +121,22 @@ ds.filenames[:3] ['/Users/aniket/Pictures/data/tiny-imagenet-200/train/n02795169/n02795169_boxes.txt', '/Users/aniket/Pictures/data/tiny-imagenet-200/train/n02795169/images', '/Users/aniket/Pictures/data/tiny-imagenet-200/train/n02769748/images'] -
+ ```python def load_files(path): return glob(f'{path}/*/images/*') + def get_label(path): return path.split('/')[-3] + ds.update_component('get_filenames', load_files) ds.filenames[:3] ``` +
Output get_filenames updated with @@ -144,7 +148,6 @@ ds.filenames[:3]
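
To sanity-check a customized pipeline, the generator can be previewed directly. A small sketch, assuming the generator yields `(image, label)` pairs as in the loops above and pixel values in the 0–255 range:

```python
import numpy as np
import matplotlib.pyplot as plt

gen = iter(ds.generator())
fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax in axes:
    image, label = next(gen)
    ax.imshow(np.asarray(image).astype('uint8'))  # works for numpy arrays and eager tensors
    ax.set_title(str(label))
    ax.axis('off')
plt.show()
```
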
- ### Progressive resizing > It is the technique to sequentially resize all the images while training the CNNs on smaller to bigger image sizes. Progressive Resizing is described briefly in his terrific fastai course, “Practical Deep Learning for Coders”. A great way to use this technique is to train a model with smaller image size say 64x64, then use the weights of this model to train another model on images of size 128x128 and so on. Each larger-scale model incorporates the previous smaller-scale model layers and weights in its architecture. @@ -172,6 +175,7 @@ for img, label in ds.generator(): print('third call to generator:', img.shape) break ``` +
Output get_filenames updated with @@ -180,11 +184,13 @@ for img, label in ds.generator(): first call to generator: (28, 28, 3) seconds call to generator: (32, 32, 3) third call to generator: (64, 64, 3) -
+ ### tf.data support -Creating a `tf.data` dataloader was never as easy as this one liner. It converts the Python generator into `tf.data.Dataset` for a faster data loading, prefetching, caching and everything provided by tf.data. + +Creating a `tf.data` dataloader was never as easy as this one liner. It converts the Python generator +into `tf.data.Dataset` for a faster data loading, prefetching, caching and everything provided by tf.data. ```python image_sz_list = [(28, 28), (32, 32), (64, 64)] @@ -204,6 +210,7 @@ for e in dl.take(1): for e in dl.take(1): print(e[0].shape) ``` +
Output get_filenames updated with @@ -211,18 +218,21 @@ for e in dl.take(1): (28, 28, 3) (32, 32, 3) (64, 64, 3) +
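
Since the loader is a plain `tf.data.Dataset`, preprocessing and augmentation can be attached with `map` as usual. A sketch, assuming `(image, label)` elements with pixel values in 0–255:

```python
import tensorflow as tf


def preprocess(image, label):
    image = tf.cast(image, tf.float32) / 255.0      # scale to [0, 1]
    image = tf.image.random_flip_left_right(image)  # cheap augmentation
    return image, label


dl = ds.get_tf_dataset()
dl = dl.map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(8)
```
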
## Trainer -The Trainer class inherits from `tf.keras.Model`, it contains everything that is required for training. -It exposes trainer.cyclic_fit method which trains the model using Cyclic Learning rate discovered by [Leslie Smith](https://arxiv.org/abs/1506.01186). + +The Trainer class inherits from `tf.keras.Model`, it contains everything that is required for training. It exposes +trainer.cyclic_fit method which trains the model using Cyclic Learning rate discovered +by [Leslie Smith](https://arxiv.org/abs/1506.01186). ```python from chitra.trainer import Trainer, create_cnn from chitra.datagenerator import Dataset -ds = Dataset(cat_dog_path, image_size=(224,224)) +ds = Dataset(cat_dog_path, image_size=(224, 224)) model = create_cnn('mobilenetv2', num_classes=2, name='Cat_Dog_Model') trainer = Trainer(ds, model) # trainer.summary() @@ -230,17 +240,17 @@ trainer = Trainer(ds, model) ```python trainer.compile2(batch_size=8, - optimizer=tf.keras.optimizers.SGD(1e-3, momentum=0.9, nesterov=True), - lr_range=(1e-6, 1e-3), - loss='binary_crossentropy', - metrics=['binary_accuracy']) - + optimizer=tf.keras.optimizers.SGD(1e-3, momentum=0.9, nesterov=True), + lr_range=(1e-6, 1e-3), + loss='binary_crossentropy', + metrics=['binary_accuracy']) trainer.cyclic_fit(epochs=5, - batch_size=8, - lr_range=(0.00001, 0.0001), - ) + batch_size=8, + lr_range=(0.00001, 0.0001), +) ``` +
Training Loop... cyclic learning rate already set! @@ -260,18 +270,21 @@ trainer.cyclic_fit(epochs=5, 1/1 [==============================] - 0s 982us/step - loss: 1.9062 - binary_accuracy: 0.8750 -
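
Because `Trainer` inherits from `tf.keras.Model`, the trained object can be used for prediction like any Keras model. A minimal sketch; `image_path` is illustrative and any backbone-specific preprocessing is omitted:

```python
import numpy as np
from PIL import Image

img = Image.open(image_path).convert('RGB').resize((224, 224))
batch = np.expand_dims(np.asarray(img, dtype='float32'), axis=0)
probs = trainer.predict(batch)
print(probs)
```
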
+ ## Model Visualization -It is important to understand what is going inside the model. Techniques like GradCam and Saliency Maps can visualize what the Network is learning. `trainer` module has InterpretModel class which creates GradCam and GradCam++ visualization with almost no additional code. + +It is important to understand what is going inside the model. Techniques like GradCam and Saliency Maps can visualize +what the Network is learning. `trainer` module has InterpretModel class which creates GradCam and GradCam++ +visualization with almost no additional code. ```python from chitra.trainer import InterpretModel + trainer = Trainer(ds, create_cnn('mobilenetv2', num_classes=1000, keras_applications=False)) model_interpret = InterpretModel(True, trainer) - image = ds[1][0].numpy().astype('uint8') image = Image.fromarray(image) model_interpret(image) @@ -284,7 +297,6 @@ print(IMAGENET_LABELS[285]) ![png](https://raw.githubusercontent.com/aniketmaurya/chitra/master/docs/assets/images/output_22_1.png) - ## Data Visualization ### Image annotation @@ -295,7 +307,7 @@ Bounding Box creation is based on top of `imgaug` library. from chitra.image import Chitra -bbox = [ 70, 25, 190, 210] +bbox = [70, 25, 190, 210] label = 'Dog' image = Chitra(image_path, bboxes=bbox, labels=label) @@ -304,7 +316,6 @@ plt.imshow(image.draw_boxes()) ![png](https://raw.githubusercontent.com/aniketmaurya/chitra/master/docs/assets/images/preview-bounding-box.png) - ## Utils Limit GPU memory or enable dynamic GPU memory growth for Tensorflow. @@ -313,10 +324,10 @@ Limit GPU memory or enable dynamic GPU memory growth for Tensorflow. from chitra.utils import limit_gpu, gpu_dynamic_mem_growth # limit the amount of GPU required for your training -limit_gpu(gpu_id=0, memory_limit=1024*2) +limit_gpu(gpu_id=0, memory_limit=1024 * 2) ``` - No GPU:0 found in your system! + No GPU:0 found in your system! ```python gpu_dynamic_mem_growth() @@ -324,13 +335,14 @@ gpu_dynamic_mem_growth() No GPU found on the machine! - ## Contributing -Contributions of any kind are welcome. Please check the [**Contributing Guidelines**](https://github.com/aniketmaurya/chitra/blob/master/CONTRIBUTING.md) before contributing. +Contributions of any kind are welcome. Please check the [**Contributing +Guidelines**](https://github.com/aniketmaurya/chitra/blob/master/CONTRIBUTING.md) before contributing. ## Code Of Conduct -We pledge to act and interact in ways that contribute to an open, welcoming, -diverse, inclusive, and healthy community. -Read full [**Contributor Covenant Code of Conduct**](https://github.com/aniketmaurya/chitra/blob/master/CODE_OF_CONDUCT.md) +We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. + +Read full [**Contributor Covenant Code of +Conduct**](https://github.com/aniketmaurya/chitra/blob/master/CODE_OF_CONDUCT.md) From fe00a8eac1fa56230434f390f25db0a2fd32640e Mon Sep 17 00:00:00 2001 From: Aniket Maurya Date: Wed, 23 Jun 2021 19:12:16 +0530 Subject: [PATCH 2/5] update --- README.md | 3 +-- docs/index.md | 3 +-- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 9f882769..8c9f6e6b 100644 --- a/README.md +++ b/README.md @@ -21,8 +21,7 @@ visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. - Faster data loading without any boilerplate. - Progressive resizing of images. - Rapid experiments with different models using `chitra.trainer` module. 
-- Train models with cyclic learning rate. -- Model interpretation using GradCAM/GradCAM++ with no extra code. +- Model interpretation using GradCAM/GradCAM++ with no extra code 🔥 > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. diff --git a/docs/index.md b/docs/index.md index 9f882769..8c9f6e6b 100644 --- a/docs/index.md +++ b/docs/index.md @@ -21,8 +21,7 @@ visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. - Faster data loading without any boilerplate. - Progressive resizing of images. - Rapid experiments with different models using `chitra.trainer` module. -- Train models with cyclic learning rate. -- Model interpretation using GradCAM/GradCAM++ with no extra code. +- Model interpretation using GradCAM/GradCAM++ with no extra code 🔥 > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. From bff5f053cc919ed436ffa41b3fa5011332ff9b7b Mon Sep 17 00:00:00 2001 From: Aniket Maurya Date: Wed, 23 Jun 2021 19:16:34 +0530 Subject: [PATCH 3/5] precommit --- README.md | 2 ++ docs/index.md | 2 ++ 2 files changed, 4 insertions(+) diff --git a/README.md b/README.md index 8c9f6e6b..f561d228 100644 --- a/README.md +++ b/README.md @@ -24,6 +24,8 @@ visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. - Model interpretation using GradCAM/GradCAM++ with no extra code 🔥 > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. +> If you want to contribute, feel free to raise a PR. It doesn't need to be fully perfect. +> We will help you get there. ## Installation diff --git a/docs/index.md b/docs/index.md index 8c9f6e6b..f561d228 100644 --- a/docs/index.md +++ b/docs/index.md @@ -24,6 +24,8 @@ visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. - Model interpretation using GradCAM/GradCAM++ with no extra code 🔥 > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. +> If you want to contribute, feel free to raise a PR. It doesn't need to be fully perfect. +> We will help you get there. ## Installation From 0211c28dab2a448a5dfae1f7d36d29d6e3a960f7 Mon Sep 17 00:00:00 2001 From: Aniket Maurya Date: Wed, 23 Jun 2021 19:17:42 +0530 Subject: [PATCH 4/5] update --- README.md | 3 ++- docs/index.md | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index f561d228..bc97204f 100644 --- a/README.md +++ b/README.md @@ -16,6 +16,7 @@ visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. ### Highlights: + - [New] Data Visualization, Bounding Box visualization 🐶 - [New] Framework Agnostic Model Serving ✨. - Faster data loading without any boilerplate. @@ -24,7 +25,7 @@ visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. - Model interpretation using GradCAM/GradCAM++ with no extra code 🔥 > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. -> If you want to contribute, feel free to raise a PR. It doesn't need to be fully perfect. +> If you want to contribute, feel free to raise a PR. It doesn't need to be perfect. > We will help you get there. 
## Installation diff --git a/docs/index.md b/docs/index.md index f561d228..bc97204f 100644 --- a/docs/index.md +++ b/docs/index.md @@ -16,6 +16,7 @@ visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. ### Highlights: + - [New] Data Visualization, Bounding Box visualization 🐶 - [New] Framework Agnostic Model Serving ✨. - Faster data loading without any boilerplate. @@ -24,7 +25,7 @@ visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. - Model interpretation using GradCAM/GradCAM++ with no extra code 🔥 > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. -> If you want to contribute, feel free to raise a PR. It doesn't need to be fully perfect. +> If you want to contribute, feel free to raise a PR. It doesn't need to be perfect. > We will help you get there. ## Installation From e602e3007b80aed442ee44530d9b76221c026f5b Mon Sep 17 00:00:00 2001 From: Aniket Maurya Date: Wed, 23 Jun 2021 19:21:06 +0530 Subject: [PATCH 5/5] precommit --- README.md | 14 +++++++------- docs/index.md | 14 +++++++------- 2 files changed, 14 insertions(+), 14 deletions(-) diff --git a/README.md b/README.md index bc97204f..5e89cb70 100644 --- a/README.md +++ b/README.md @@ -12,16 +12,16 @@ ## What is chitra? -**chitra** (**चित्र**) is a Deep Learning Computer Vision library for easy data loading, model building and model -visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. +**chitra** (**चित्र**) is a Deep Learning Computer Vision library for easy data loading, data visualization, model building and model +analysis with GradCAM/GradCAM++ and Framework agnostic Model Serving. ### Highlights: -- [New] Data Visualization, Bounding Box visualization 🐶 -- [New] Framework Agnostic Model Serving ✨. -- Faster data loading without any boilerplate. -- Progressive resizing of images. -- Rapid experiments with different models using `chitra.trainer` module. +- [New] Data Visualization, Bounding Box Visualization 🐶 +- [New] Framework Agnostic Model Serving ✨🌟 +- Faster data loading without any boilerplate 🤺 +- Progressive resizing of images +- Rapid experiments with different models using `chitra.trainer` module 🚀 - Model interpretation using GradCAM/GradCAM++ with no extra code 🔥 > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want. diff --git a/docs/index.md b/docs/index.md index bc97204f..5e89cb70 100644 --- a/docs/index.md +++ b/docs/index.md @@ -12,16 +12,16 @@ ## What is chitra? -**chitra** (**चित्र**) is a Deep Learning Computer Vision library for easy data loading, model building and model -visualization with GradCAM/GradCAM++ and Framework agnostic Model Serving. +**chitra** (**चित्र**) is a Deep Learning Computer Vision library for easy data loading, data visualization, model building and model +analysis with GradCAM/GradCAM++ and Framework agnostic Model Serving. ### Highlights: -- [New] Data Visualization, Bounding Box visualization 🐶 -- [New] Framework Agnostic Model Serving ✨. -- Faster data loading without any boilerplate. -- Progressive resizing of images. -- Rapid experiments with different models using `chitra.trainer` module. 
+- [New] Data Visualization, Bounding Box Visualization 🐶 +- [New] Framework Agnostic Model Serving ✨🌟 +- Faster data loading without any boilerplate 🤺 +- Progressive resizing of images +- Rapid experiments with different models using `chitra.trainer` module 🚀 - Model interpretation using GradCAM/GradCAM++ with no extra code 🔥 > If you have more use case please [**raise an issue/PR**](https://github.com/aniketmaurya/chitra/issues/new/choose) with the feature you want.