Refactor README #188

Merged · 5 commits · Apr 25, 2022
README.md (9 additions & 14 deletions)
@@ -16,17 +16,17 @@ PAZ is used in the following examples (links to **real-time demos** and training
|---------------------------|--------------------------| -----------------------|
|<img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/emotion.gif" width="410">| <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/keypoints.png" width="410">| <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/mask.png" width="400">|

| [3D keypoint discovery](https://github.com/oarriaga/paz/tree/master/examples/discovery_of_latent_keypoints) | [Haar Cascade detector](https://github.com/oarriaga/paz/tree/master/examples/haar_cascade_detectors) | 6D pose estimation |
| [3D keypoint discovery](https://github.com/oarriaga/paz/tree/master/examples/discovery_of_latent_keypoints) | [Haar Cascade detector](https://github.com/oarriaga/paz/tree/master/examples/haar_cascade_detectors) | [6D pose estimation](https://github.com/oarriaga/paz/tree/master/examples/pix2pose) |
|---------------------------|-----------------------| --------------------------|
|<img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/discovery_keypoints.png" width="410"> | <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/haar_cascades.png" width="410">| <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/pose_estimation.png" width="400"> |
|<img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/discovery_keypoints.png" width="410"> | <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/haar_cascades.png" width="410">| <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/pix2pose_example.jpg" width="400"> |

| [Implicit orientation](https://github.com/oarriaga/paz/tree/master/examples/implicit_orientation_learning) | [Attention (STNs)](https://github.com/oarriaga/paz/tree/master/examples/spatial_transfomer_networks) | [Eigenfaces](https://github.com/oarriaga/paz/blob/master/examples/eigenfaces/eigenfaces.py) |
|---------------------------|-----------------------|-----------------|
|<img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/implicit_pose.png" width="360">| <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/attention.png" width="360"> | <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/eigenfaces.png" width="350">|

|[Semantic segmentation](https://github.com/oarriaga/paz/tree/master/examples/semantic_segmentation) | | |
|[Semantic segmentation](https://github.com/oarriaga/paz/tree/master/examples/semantic_segmentation) | Hand pose estimation | |
|---------------------------|-----------------------|-----------------|
| <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/semantic_segmentation.png" width="330">|<img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/your_example_here.png" width="330"> | <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/blank.png" width="330">|
| <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/semantic_segmentation.png" width="330">| <img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/minimal_hand_example.png" width="330"> |<img src="https://raw.githubusercontent.com/oarriaga/altamira-data/master/images/your_example_here.png" width="330"> |

All models can be re-trained with your own data (except for Mask-RCNN; we are working on it [here](https://github.com/oarriaga/paz/tree/mask_rcnn)).

@@ -189,26 +189,21 @@ The following models are implemented in PAZ and they can be trained with your own data
|[Detection and Segmentation](https://github.com/oarriaga/paz/tree/mask_rcnn/examples/mask_rcnn) |[MaskRCNN (in progress)](https://arxiv.org/abs/1703.06870) |
|[Keypoint estimation](https://github.com/oarriaga/paz/blob/master/paz/models/keypoint/hrnet.py)|[HRNet](https://arxiv.org/abs/1908.07919)|
|[Semantic segmentation](https://github.com/oarriaga/paz/blob/master/paz/models/segmentation/unet.py)|[U-NET](https://arxiv.org/abs/1505.04597)|
|[6D Pose estimation](https://github.com/oarriaga/paz/blob/master/paz/models/keypoint/keypointnet.py) |[KeypointNet2D](https://arxiv.org/abs/1807.03146) |
|[6D Pose estimation](https://github.com/oarriaga/paz/blob/master/paz/models/keypoint/keypointnet.py) |[Pix2Pose](https://arxiv.org/abs/1908.07433) |
|[Implicit orientation](https://github.com/oarriaga/paz/blob/master/examples/implicit_orientation_learning/model.py) |[AutoEncoder](https://arxiv.org/abs/1902.01275) |
|[Emotion classification](https://github.com/oarriaga/paz/blob/master/paz/models/classification/xception.py) |[MiniXception](https://arxiv.org/abs/1710.07557) |
|[Discovery of Keypoints](https://github.com/oarriaga/paz/blob/master/paz/models/keypoint/keypointnet.py) |[KeypointNet](https://arxiv.org/abs/1807.03146) |
|[Keypoint estimation](https://github.com/oarriaga/paz/blob/master/paz/models/keypoint/keypointnet.py) |[KeypointNet2D](https://arxiv.org/abs/1807.03146)|
|[Attention](https://github.com/oarriaga/paz/blob/master/examples/spatial_transfomer_networks/STN.py) |[Spatial Transformers](https://arxiv.org/abs/1506.02025) |
|[Object detection](https://github.com/oarriaga/paz/blob/master/paz/models/detection/haar_cascade.py) |[HaarCascades](https://link.springer.com/article/10.1023/B:VISI.0000013087.49260.fb) |
|[Hand pose estimation](https://github.com/oarriaga/paz/blob/refactor_readme/paz/models/keypoint/detnet.py) |[DetNet](https://vcai.mpi-inf.mpg.de/projects/2020-cvpr-hands/) |
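
For orientation, here is a minimal sketch of how the models listed above are typically reached through the high-level `paz.pipelines` API. It is illustrative only and not part of this diff; it assumes the usual PAZ helpers (`DetectMiniXceptionFER`, `load_image`, `show_image`) behave as in the library's public API:

```python
# Illustrative sketch (not part of this PR): run one of the listed models
# through a high-level PAZ pipeline. Assumes `pip install pypaz`; the
# returned dict keys below are assumptions based on the public examples.
from paz.pipelines import DetectMiniXceptionFER
from paz.backend.image import load_image, show_image

detect = DetectMiniXceptionFER()       # face detection + MiniXception emotion classifier
image = load_image('test_image.jpg')   # hypothetical input image path
results = detect(image)                # expected: dict with 'image' and 'boxes2D'
show_image(results['image'])           # annotated image with boxes and emotion labels
```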


## Motivation
Even though there are multiple high-level computer vision libraries in different deep learning frameworks, I felt there was no consolidated deep learning library for robot perception in my framework of choice (Keras).

### Why Keras over other frameworks/libraries?
In simple terms, I have always felt the API of Keras to be more mature.
It allowed me to express my ideas at the level of complexity that was required.
Keras is often misinterpreted as an inflexible or beginner's framework; however, once you learn to abstract `Layer`, `Callbacks`, `Loss`, `Metrics` or `Model`, the API remains intact and helpful for more complicated ideas.
It allowed me to automate and write down experiments with no extra boilerplate.
Furthermore, one can always create a custom training loop.

As a final remark, I would like to mention that I feel we tend to forget the great effort and emotional investment behind every (open-source) project.
I feel it's easy to blur a company name with the individuals behind their project, and we forget that there is someone feeling our criticism and our praise.
I feel it's easy to blur a company name with the individuals behind their work, and we forget that there is someone feeling our criticism and our praise.
Therefore, whatever good code you can find here is dedicated to the software engineers and contributors of open-source projects like PyTorch, TensorFlow and Keras.
You put your craft out there for all of us to use and appreciate, and the first thing we owe you is our thanks.

paz/pipelines/keypoints.py (7 additions & 12 deletions)
@@ -1,20 +1,15 @@
from tensorflow.keras.utils import get_file
from ..abstract import SequentialProcessor, Processor
from .. import processors as pr

from .renderer import RenderTwoViews
from ..models import KeypointNet2D
from ..models import DetNet
from .image import PreprocessImageHigherHRNet
from .heatmaps import GetHeatmapsAndTags

from .. import processors as pr
from ..abstract import SequentialProcessor, Processor
from ..models import KeypointNet2D, HigherHRNet
from ..backend.image import get_affine_transform
from ..models import KeypointNet2D, HigherHRNet, DetNet

from ..backend.image import get_affine_transform, flip_left_right
from ..datasets import JOINT_CONFIG, FLIP_CONFIG
from .image import PreprocessImageHigherHRNet
from .heatmaps import GetHeatmapsAndTags
from .renderer import RenderTwoViews
from ..backend.image import flip_left_right


class KeypointNetSharedAugmentation(SequentialProcessor):
@@ -257,7 +252,7 @@ def call(self, image):
image = self.draw_skeleton(image, keypoints)
keypoints = self.extract_keypoints_locations(keypoints)
return self.wrap(image, keypoints, scores)


class HandPoseEstimation(Processor):
"""Hand keypoints detection pipeline.
@@ -293,4 +288,4 @@ class MinimalHandPoseEstimation(HandPoseEstimation):
"""
def __init__(self):
detect_hand = DetNet()
super(MinimalHandPoseEstimation, self).__init__(detect_hand)
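
Since this PR wires `DetNet` into the new `MinimalHandPoseEstimation` pipeline, a short usage sketch may help. It assumes the pipeline follows PAZ's usual `Processor` call convention (callable on an RGB image, returning a wrapped dict) and imports the class directly from `paz.pipelines.keypoints` in case it is not re-exported:

```python
# Illustrative sketch (not part of this PR): drive the new hand pose pipeline.
# The returned field names are not guaranteed by this diff, so we only inspect them.
from paz.backend.image import load_image
from paz.pipelines.keypoints import MinimalHandPoseEstimation

estimate_hand_pose = MinimalHandPoseEstimation()  # builds a DetNet backbone internally
image = load_image('hand.jpg')                    # hypothetical RGB image of a hand
results = estimate_hand_pose(image)               # dict wrapped by the Processor
print(sorted(results.keys()))                     # inspect the returned fields
```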
setup.py (1 addition & 1 deletion)
@@ -3,7 +3,7 @@
import paz

setup(name='pypaz',
version='0.1.7',
version=paz.__version__,
description='Perception for Autonomous Systems',
author='Octavio Arriaga',
author_email='[email protected]',
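
The setup.py change replaces the hard-coded version string with the value read from the package itself, so the release version is defined in exactly one place. A minimal sketch of that single-source pattern (the `paz/__init__.py` contents shown are illustrative, not copied from the repository):

```python
# paz/__init__.py -- illustrative only; the real file may define more.
__version__ = '0.1.7'
```

```python
# setup.py -- mirrors the pattern in this diff: import the package and reuse
# its __version__ so version bumps happen in a single file.
from setuptools import setup
import paz

setup(name='pypaz',
      version=paz.__version__,
      description='Perception for Autonomous Systems')
```

One caveat with this pattern: `import paz` must succeed at build time, so `paz/__init__.py` should avoid importing heavy optional dependencies at module level.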