diff --git a/_posts/2022-07-12-image-Pipeline.md b/_posts/2022-07-12-image-Pipeline.md
new file mode 100644
index 0000000..27f264c
--- /dev/null
+++ b/_posts/2022-07-12-image-Pipeline.md
@@ -0,0 +1,150 @@
+---
+layout: post
+title: Image Pipeline
+tags: computer_vision Image_Processing
+description: Converting RAW Image to meaningful image
+---
+
+-- [Om Doiphode](https://github.com/Om-Doiphode)
+
+-- [Kedar Dhamankar](https://github.com/KedarDhamankar)
+
+
+# **Image Pipeline**
+#### This blog is about our project in the Eklavya Mentorship Program by SRA VJTI
+#### And what you can see below is what we achieved by the end of this project
+
+![Image Pipeline](/assets/posts/image-pipeline/intro.gif)
+##### _You might be thinking that we just added colors to a B&W image huh? It's more than that ; )_
+
+# About This Project
+
+#### The image pipeline takes a raw image from the sensor and converts it into a meaningful image. Several algorithms, such as debayering, auto exposure, auto white balance and gamma correction, are applied to construct a meaningful, i.e. processed, image.
+#### All the algorithms are applied to a static raw image captured from a sensor.
+
+### Domain explored through this project -
+* Image Processing
+
+## Aim
+* The main aim of the project was to implement our own RAW image reader which applies pre-processing algorithms to a RAW image and displays the processed image.
+
+* Along with this, a few post-processing algorithms are also applied to the image, and the post-processed outputs are displayed as well.
+
+## Technologies Used
+1. OpenCV
+1. C++
+1. Python (you can't really avoid this language, can you :) )
+
+## Approach
+
+#### First, the RAW image is converted into a .tiff file which stores the CFA image with missing pixel values. The missing pixel values are interpolated by applying debayering algorithms like the one proposed by Malvar-He-Cutler, and the result is stored in a 2D vector. Further algorithms like Auto White Balance, Auto Exposure and Color Correction are then applied to the image stored in this vector, and finally the processed image is displayed using OpenCV.
+
+## Theory
+
+### RAW Image
+#### Raw data from an image sensor contains light intensity values captured from a scene, but these data are not intrinsically recognizable to the human eye.
+#### Raw sensor data typically comes in the form of a Color Filter Array (CFA) where each pixel carries information about a single-color channel: red, green, or blue.
+![RAW Image](/assets/posts/image-pipeline/raw.png)
+#### The most common CFA pattern is the Bayer array. There are twice as many pixels that represent green light in a Bayer array image because the human eye is more sensitive to variation in shades of green and it is more closely correlated with the perception of light intensity of a scene.
+
+### Color Filter Array
+* It is an array of tiny color filters placed before the image sensor array of a camera.
+* The resolution of this array is the same as that of the image sensor array.
+* Each color filter may allow a different wavelength of light to pass
+* This is predetermined during the camera design.
+* The most common type of CFA is the Bayer pattern which is shown below:
+![Bayer Filter](/assets/posts/image-pipeline/CFA.png)
+
+## Pre-Processing Algorithms
+
+### Debayering
+
+#### We applied two debayering algorithms in our image pipeline: the one proposed by Malvar-He-Cutler and Bilinear Interpolation.
+#### But we'll only be discussing the Malvar-He-Cutler algorithm here. You might ask why? Well duh, it's just better and gives better results.
+#### This algorithm is also known as High-Quality Linear Interpolation and is derived as a modification of Bilinear Interpolation.
+#### Algorithm -
+* The idea behind high-quality interpolation is that, for interpolating the missing pixels in each channel, it might not be accurate to use only the adjacent pixels located on the same channel.
+* In other words, for interpolating a green pixel such as Gx in the figure, we need to use the value of its adjacent green pixels as well as the value of the channel that already exists at that location.
+* For example, if at the location of Gx we have a red value, we have to use that value as well as the adjacent available green values. They called their method gradient-corrected interpolation.
+* Finally, they came up with 8 different 5×5 filters. We convolve these filters with the pixels that we want to interpolate, as sketched below.
+![Filters](/assets/posts/image-pipeline/Filters.png)
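+
+#### To make the idea concrete, here is a minimal Python/OpenCV sketch of applying one of these gradient-corrected filters. Only the kernel for estimating green at a red/blue site is shown, and the RGGB layout and helper names are our own illustration rather than the exact code of our pipeline.
+
+```python
+import numpy as np
+import cv2
+
+# Malvar-He-Cutler kernel for estimating G at an R or B location (scaled by 1/8)
+G_AT_RB = np.array([
+    [ 0, 0, -1, 0,  0],
+    [ 0, 0,  2, 0,  0],
+    [-1, 2,  4, 2, -1],
+    [ 0, 0,  2, 0,  0],
+    [ 0, 0, -1, 0,  0],
+], dtype=np.float32) / 8.0
+
+def estimate_green(cfa):
+    """Fill in the missing green values of a single-channel Bayer (RGGB) mosaic."""
+    response = cv2.filter2D(cfa.astype(np.float32), -1, G_AT_RB)
+    green = cfa.astype(np.float32).copy()
+    # Green is missing at the R sites (even, even) and B sites (odd, odd) in RGGB
+    green[0::2, 0::2] = response[0::2, 0::2]
+    green[1::2, 1::2] = response[1::2, 1::2]
+    return np.clip(green, 0, 255)
+```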
+
+#### On applying the algorithm to a RAW image
+#### Voila! We get the following result
+![Debayered Image](/assets/posts/image-pipeline/debayered.png)
+
+### White Balance
+* Sometimes the subject appears yellow or blue because of the incorrect color temperature of the light in a scene.
+* To reveal the color that we would see as humans, what we need is a reference point, something we know should be a certain color (or more accurately, a certain chromaticity). Then, we can rescale the R, G, B values of the pixel until it is that color.
+* As it is usually possible to identify objects that should be white, we will find a pixel we know should be white (or gray), which we know should have RGB values all equal, and then we find the scaling factors necessary to force each channel's value to be equal.
+* White Balance is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo.
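+
+#### As a rough illustration of the rescaling step (assuming we have already picked a reference pixel that should be gray, and an RGB channel order), the per-channel gains can be computed like this; the function name is hypothetical:
+
+```python
+import numpy as np
+
+def white_balance(img, ref_row, ref_col):
+    """Scale R and B so the chosen reference pixel becomes neutral gray (R = G = B)."""
+    r, g, b = img[ref_row, ref_col].astype(np.float64)
+    gains = np.array([g / r, 1.0, g / b])
+    return np.clip(img.astype(np.float64) * gains, 0, 255).astype(np.uint8)
+```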
+
+#### Result of White Balancing the image -
+![White Balanced Image](/assets/posts/image-pipeline/wb.png)
+
+### Auto Exposure
+* When too much or too little light strikes the image sensor, the image appears overexposed and washed out, or underexposed, dark and lacking in detail, in different areas of the same image.
+* Auto Exposure is applied so that such unevenness between differently exposed areas of an image is corrected.
+* The image channel, with values normalized to the range 0-1, is run through a loop where each pixel value is compared to the mean intensity of the image and a correction is applied accordingly.
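+
+#### One simple way to read that last bullet is as a gain that pushes the mean intensity of the normalized channel towards a mid-gray target; the sketch below is our own simplified interpretation, not a line-by-line copy of the pipeline:
+
+```python
+import numpy as np
+
+def auto_expose(channel, target_mean=0.5):
+    """`channel` holds values in [0, 1]; scale it so its mean moves towards `target_mean`."""
+    gain = target_mean / max(channel.mean(), 1e-6)
+    return np.clip(channel * gain, 0.0, 1.0)
+```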
+#### Result of Auto Exposing the image -
+![Auto Exposed Image](/assets/posts/image-pipeline/image3.png)
+
+### Gamma Correction
+This is also the last of the pre-processing algorithms, just in case you got bored reading :P
+#### So what is it anyway?
+* When twice the number of photons hit the sensor of a digital camera, it receives twice the signal (a linear relationship).
+* However, we perceive double the amount of light as only a fraction brighter (a non-linear relationship)
+* Gamma correction or gamma is a nonlinear operation used to encode and decode luminance in an image
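+
+#### In code, gamma correction on a normalized channel is a single power operation; a typical encoding gamma of 2.2 is assumed here:
+
+```python
+import numpy as np
+
+def gamma_correct(channel, gamma=2.2):
+    """Apply encoding gamma (1/gamma power) to a channel with values in [0, 1]."""
+    return np.power(np.clip(channel, 0.0, 1.0), 1.0 / gamma)
+```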
+
+#### Result of Gamma Correction -
+![Gamma Corrected Image](/assets/posts/image-pipeline/gamma.png)
+
+#### So yeah, that's it from our Image Pipeline for the pre-processing part. The results were good, weren't they? :)
+
+#### Here's one more beautiful (processed :P) image to make your day
+![](/assets/posts/image-pipeline/Processed.png)
+
+
+## Post-Processing Algorithms
+#### So let's move on to some Post-Processing algorithms now
+
+## Color Conversion
+#### Let's play with the colors of the image first :)
+### RGB → Grayscale -
+#### Ahaa Grayscale, feelin' old yet?
+![RGB to Gray](/assets/posts/image-pipeline/RGBtoGray.png)
+#### Luminosity Method for conversion
+* The best method is the luminosity method which successfully solves the problems of previous methods.
+* Based on the observations, we should take a weighted average of the components. The contribution of blue to the final value should decrease, and the contribution of green should increase.
+* After some experiments and more in-depth analysis, researchers arrived at the equation below:
+
+  `Grayscale = 0.3 * R + 0.59 * G + 0.11 * B`
+* Here most weight is given to green-colored pixels as humans are said to perceive green light well.
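+
+#### As a quick sketch of that weighted average (channel order assumed RGB):
+
+```python
+import numpy as np
+
+def to_grayscale(rgb):
+    """Luminosity method: weighted sum of the R, G and B channels."""
+    weights = np.array([0.3, 0.59, 0.11])
+    return (rgb.astype(np.float64) @ weights).astype(np.uint8)
+```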
+
+### RGB → Binary -
+
+* Binary images are images whose pixels have only two possible intensity values. They are normally displayed as black and white.
+* Numerically, the two values are often 0 for black, and either 1 or 255 for white.
+* Binary images are often produced by thresholding a grayscale or color image, in order to separate an object in the image from the background.
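+
+#### A minimal thresholding sketch (a fixed threshold of 128 is assumed here; Otsu's method is another common choice):
+
+```python
+import cv2
+
+def to_binary(gray, thresh=128):
+    """Pixels above `thresh` become 255 (white), the rest 0 (black)."""
+    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
+    return binary
+```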
+
+![RGB to Binary](/assets/posts/image-pipeline/RGBtoBinary.png)
+
+### RGB → HSV -
+
+* HSV – (hue, saturation, value), also known as HSB (hue, saturation, brightness), is often used by artists because it is often more natural to think about a color in terms of hue and saturation than in terms of additive or subtractive color components.
+* HSV is a transformation of an RGB colorspace, and its components and colorimetry are relative to the RGB colorspace from which it was derived.
+
+![RGB to HSV](/assets/posts/image-pipeline/RGBtoHSV.png)
+
+#### Enough playing with the colors for now, let's look at some other algorithms
+
+## Edge Detection
+#### Not gonna explain this since it's pretty self-explanatory, jk here you go :P
+#### An edge in an image is a significant local change in the image intensity. As the name suggests, edge detection is the process of detecting the edges in an image. We applied sobel edge detection here.
+### Sobel Edge Detection -
+
+* The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image.
+* In theory at least, the operator consists of a pair of 3×3 convolution kernels, one of which is simply the transpose of the other.
+* These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations.
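+
+#### A hedged sketch of Sobel edge detection with OpenCV (the kernel size and the way the magnitude is clipped are illustrative choices):
+
+```python
+import cv2
+import numpy as np
+
+def sobel_edges(gray):
+    """Approximate gradient magnitude from the horizontal and vertical Sobel kernels."""
+    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
+    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
+    magnitude = np.sqrt(gx ** 2 + gy ** 2)
+    return np.uint8(np.clip(magnitude, 0, 255))
+```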
+
+![](/assets/posts/image-pipeline/edge4.png)
\ No newline at end of file
diff --git a/_posts/2022-12-7-drone-3D-reconstruction.md b/_posts/2022-12-7-drone-3D-reconstruction.md
new file mode 100644
index 0000000..b05add1
--- /dev/null
+++ b/_posts/2022-12-7-drone-3D-reconstruction.md
@@ -0,0 +1,256 @@
+---
+layout: post
+title: "Resolute with ROS: Mapping Our Journey in 3D Terrain Reconstruction"
+tags: ROS-Noetic Gazebo-Sim RViz Point-Cloud-Library
+description: Flying a drone over some terrain in ROS with a GPS and a Depth Sensor and to construct a 3D model of that terrain with the incorporation of the Point Cloud Library (PCL).
+---
+
+-- [Soham Mulye](https://github.com/Shazam213)
+
+# Resolute with ROS: Mapping Our Journey in 3D Terrain Reconstruction
+
+A weird title, we understand, but it quite aptly describes our experience with this project. A journey, which began as a set of four thrilling tasks and interviews, ended after two months of research, learning and debugging. We started out as two individuals who were new to almost all avenues of robotics. Thankfully, we had the guidance of some great mentors and an amazing community thanks to our college club: [SRA](https://sravjti.in/).
+
+With that introduction out of the way, let the revisitation begin!
+
+
+
+
+## Idea of the Project
+
+The idea of our **research-based** project stems from the visualisation of a drone flying over a terrain and reconstructing it simultaneously in simulation. We wanted to understand:
+* What do you mean when you say "data" of the terrain, and how does a drone actually obtain this data? (*answer* = Point Cloud Data, using LiDAR Sensors in ROS/Gazebo, more on that later:wink:),
+* Assuming we have the data, how do you convert it to a 3D Model to look *exactly* like the terrain? (*answer* = Point Cloud Library, more on that too) Fascinating questions to be answered.
+
+Basically, we wanted to dive into the depths of the reconstruction process, which required us to be familiar with the following technologies:
+
+
+
+## Tools & Technologies:
+
+
+
+1. **ROS (Robot Operating System)**: ROS is an open-source, meta-operating system for your robot. Basically, ROS enables us to control our drone (and its specific parts) using Python/C++ code.
+
+2. **Gazebo**: Gazebo is an open-source 3D robotics simulator for research, design and development. It has several plugins to work in sync with ROS, an extensive set of sensor and noise models, and plugin-based interfaces to physics engines and rendering engines.
+
+3. **RViz**: RViz is a ROS graphical interface that allows you to visualize a lot of information, using plugins for the many kinds of available topics.
+
+4. **LiDAR Sensors**: Depth sensors that measure distances using laser pulses and give us point cloud data of the surroundings.
+
+5. **MoveIt!**: MoveIt is the most widely used software for manipulation. Powerful out-of-the-box visual demonstrations and an easy-to-use Setup Assistant are some of its highlights; combined with Gazebo and ROS Control, it forms a powerful robotics development platform. As beautiful as MoveIt sounds in theory, we had to abandon this approach after a series of issues :disappointed:. More on that in the section on our initial approach.
+
+6. **PCL (Point Cloud Library)**: The Point Cloud Library is a standalone, large-scale, open project for 2D/3D image and point cloud processing, free for commercial and research use. PCL is split into a series of modular libraries that support processing steps like subsampling, normal estimation and meshing.
+
+
+
+
+## Method of Execution #1 (Initial Approach)
+
+**Primary Clarity**: This part describes the ~~Method That Didn't Work~~ Method That Developed our Fundamentals :smile:
+
+### STEP I: Get The Drone Working in ROS & Collect Data
+
+* For this step, we utilised the Quadrotor Model as developed by [Wil Selby](https://www.wilselby.com/research/ros-integration/3d-mapping-navigation/) that incorporated the use of state-of-the-art software called [MoveIt!](https://moveit.ros.org/)
+* At the expense of sounding like paid promotion: MoveIt is the most widely used software for manipulation, has been used on over 150 robots, and is free for industrial, commercial and research use. By incorporating the latest advances in motion planning, manipulation, 3D perception, kinematics, control and navigation, MoveIt earns its state-of-the-art title.
+
+
+
+**The Desired Way of Execution**
+
+* The package comes with a really complex diagram of what it entirely does. To oversimplify it, our quadrotor model integrated external sensors, controllers and data from something called the parameter server.
+* This allowed the creation of a ROS interface to manipulate the quadrotor in Gazebo.
+* The quadrotor traced a pre-defined path (trajectory) to incorporate any rotational and translational changes.
+* The kitchen world in Gazebo was to be traced by the quadrotor, with RViz showing the representation after the quadrotor completed navigating the pre-defined path with a Kinect motion sensor attached.
+
+
+
+This would've executed smoothly had we been operating on ROS Melodic... but we were using ROS Noetic, and contrary to our expectations it stirred up a much greater number of problems. :disappointed:
+
+
+**Comedy of Errors**
+
+* **Fixed Frame: Global Status Error**: instead of the expected output in RViz, we got a global status error on the fixed frame.
+* **Build Errors**: these persisted even after switching from `catkin_make` to `catkin build`.
+
+
+
+
+These are just two among several errors, following which we had to abandon this approach. Making it work isn't completely impossible, but there were just too many hoops to jump through.
+
+And like Damon Salvatore once said, and we quote:
+
+“There is no such thing as bad ideas. Just poorly executed awesome ideas.”
+
+This might be a good time to point out that one of our contributors (Unmani) has a known history of not getting things done right on the first attempt. Cheers to that record still being intact! :wine_glass:
+
+## Method of Execution #2 (That Worked!)
+
+For this approach, we used the **sjtu_drone** as developed by [Danping Zou and Tahsincan Köse](https://github.com/tahsinkose/sjtu-drone.git) for Shanghai Jiao Tong University.
+
+### ROS Communication
+
+
+The technical terms below aren't strictly necessary to follow the story; what you should know is that this is how we found a way to obtain the Point Cloud Data of the terrain. If you'd like the details, read on! :smile:
+
+**ROS Master & Nodes**
+
+* ROS starts with the ROS Master. The Master allows all other pieces of ROS software (Nodes) to find and talk to each other.
+* That way, we never have to specifically state "send this sensor data to that computer at 127.0.0.1". We can simply tell Node 1 to send messages to Node 2.
+* So yes, the Master is pretty much your average Godfather.
+
+**ROS Topics**
+
+* Like we said earlier, ROS Nodes are used for communication. How do Nodes do this? By publishing and subscribing to Topics, and by utilising Services.
+* The depth camera (Microsoft Kinect) that we integrated with our model publishes point clouds to ROS topics.
+* Gazebo, being the godsend that it is, is entirely a Node in itself! Hence, Gazebo (`/gazebo`, if we're being technical) publishes the PCD to a topic (let's call it `/3d_cloud`), and our processing subscriber node (`/sub_pcl`) obtains this PCD by subscribing to that very topic.
+
+**ROS Services**
+
+* ROS Services are similar to Topics in their purpose, i.e. data transmission, but differ in their manner of operation.
+* Services are used when you need a client/server architecture, i.e. data transmission occurs on request.
+* This is unlike topics, where a publisher keeps on publishing data and a subscriber may subscribe to it whenever required, with both actions independent of each other.
+* To conclude, ROS services complement topics well.
+
+
+
+
+The problem, however, didn't end there. The PCD we obtained was with reference to the **drone**, and not with respect to the fixed world frame. For this, we had to incorporate the logic of translation between frames, and we did so by obtaining the coordinates of the drone for each frame using the ROS service ```get_model_state()```, as explained above.
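+
+Here is a minimal Python (rospy) sketch of that idea: subscribe to the point cloud topic and shift every point by the drone's current position obtained from Gazebo's `get_model_state` service. The topic and model names are illustrative, and only a simple translation is applied; a full solution would also handle rotation (e.g. via tf).
+
+```python
+import rospy
+import sensor_msgs.point_cloud2 as pc2
+from sensor_msgs.msg import PointCloud2
+from gazebo_msgs.srv import GetModelState
+
+rospy.init_node('sub_pcl')
+get_state = rospy.ServiceProxy('/gazebo/get_model_state', GetModelState)
+
+world_points = []
+
+def cloud_callback(msg):
+    # Pose of the drone in the world frame at (roughly) the time of this cloud
+    pos = get_state('sjtu_drone', 'world').pose.position
+    for x, y, z in pc2.read_points(msg, field_names=('x', 'y', 'z'), skip_nans=True):
+        # Shift each sensor-frame point by the drone's position (rotation ignored here)
+        world_points.append((x + pos.x, y + pos.y, z + pos.z))
+
+rospy.Subscriber('/3d_cloud', PointCloud2, cloud_callback)
+rospy.spin()
+```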
+
+
+
+*Figures (left to right): the terrain to be mapped in simulation, the sjtu_drone quadrotor simulation model, and the PCD of the terrain.*
+
+
+*Drumrolls* And thus we had the PCD of the terrain with our Quadrotor, after weeks!:tada:
+
+### Surface Reconstruction
+
+Now that we actually had the PCD of the terrain, the next step was to create a 3D Model of the terrain from this data.
+
+This series of steps can be simply illustrated as:
+
+![image](https://user-images.githubusercontent.com/95737452/197510503-c0b727f0-c7d2-4497-9df0-20d7640d3585.png)
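+
+Our pipeline used PCL (C++) for these steps, but the same flow can be sketched in a few lines of Python with Open3D as an analogous illustration: downsample, estimate normals, then run Poisson surface reconstruction. The file names and parameter values below are made up for the example.
+
+```python
+import open3d as o3d
+
+# Load the point cloud saved from the ROS node (hypothetical file name)
+pcd = o3d.io.read_point_cloud("terrain_cloud.pcd")
+
+# Downsample and estimate normals; both are needed before meshing
+pcd = pcd.voxel_down_sample(voxel_size=0.05)
+pcd.estimate_normals(
+    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))
+
+# Poisson surface reconstruction gives a triangle mesh of the terrain
+mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
+o3d.io.write_triangle_mesh("terrain_mesh.ply", mesh)
+```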
+
+And you think this was without errors? Oh no, of course not. What is life without pain, anyway? :smile:
+
+Here's a slide that illustrates how our outputs improved sequentially:
+![image](https://user-images.githubusercontent.com/95737452/197511915-07e4a8f2-3cdf-421e-9366-929a6317e562.png)
+
+And there it was, the 3D Model of our terrain!:confetti_ball::confetti_ball:
+
+## Conclusion
+
+Being First Year students, this was the first "real" big team project we had worked on. Here are a few things we learnt (sometimes painfully so) about not only developing but working on any project in general:
+
+ - Learning becomes much more fun if you get to apply it alongside! 🧑🔬
+ - Having well-thought and curated resources can go a loooong way. 📚
+ - There SHOULD be a course for effective Googling taught in all schools. 💻
+ - The best Mentors are the ones who can guide and nudge you in the right directions while letting you figure out the solutions on your own. 🧑🏫
+ - Having a teammate who you can understand and communicate with easily makes any project 50% easier and a 100% more fun! 🥳
+ - The final vision of a project is much much different at the beginning than the end. 👓
+ - Having to scale back your original goals to meet deadlines isn't so much accepting defeat as it is an exercise in prioritization. 😣🏆
+
+
+So, to any of our fellow programmers or just anyone who cared to read till this point if there's anything to take away from this blog post, it's that **no matter what you want to do, you have the capacity to do it. Even if you have no idea how to, you can learn to.**
+
+
+After all, if two First Years with nothing more than some time and a hell lot of resolve can recreate a 3-dimensional model of a terrain, you can do whatever you put your mind to too. 😊
+
+## Links and Further Reading
+- If we managed to hold your interest for this long, then try taking a look at our project [GitHub](https://github.com/Shazam213/drone-terrain-reconstruction-.git)
+- If you want to go in depth with the code and theory, take a look at our [project report](https://github.com/Shazam213/drone-terrain-reconstruction-/blob/main/project-report.pdf)
+- If you'd like to learn more about Surface Reconstruction, take a look at this wonderful [paper](https://nccastaff.bmth.ac.uk/jmacey/OldWeb/MastersProjects/MSc13/14/Thesis/SurfaceReconstructionThesis.pdf) by Navpreet Kaur Pawar
+- Check out our ever-present and helpful mentors: [Jash Shah](https://github.com/Jash-Shah), [Sarrah Bastawala](https://github.com/sarrah-basta)
+- If you'd like to learn more about us and what we do check out our profiles: [Soham Mulye](https://github.com/Shazam213), [Unmani Shinde](https://github.com/unmani-shinde)
diff --git a/_posts/2022-12-7-follow-your-goal-avoiding-them-all.md b/_posts/2022-12-7-follow-your-goal-avoiding-them-all.md
new file mode 100644
index 0000000..cc86814
--- /dev/null
+++ b/_posts/2022-12-7-follow-your-goal-avoiding-them-all.md
@@ -0,0 +1,304 @@
+---
+layout: post
+title: Follow Your Goal, Avoiding Them All
+tags: Solidworks ROS-Noetic GAZEBO-Sim RVIZ OpenCV
+description: Obstacle Avoidance Racecar is an autonomous robot designed in SolidWorks and simulated and tested in ROS, Gazebo, RViz, etc. Its main objective is to avoid obstacles using the ODG-PF algorithm, with line following through OpenCV and PID.
+---
+
+-- [Sameer Gupta](https://github.com/sameergupta4873)
+
+# Follow Your Goal, Avoiding Them All
+As the title suggests, our project in a nutshell was to design a moving robot vehicle with an automated driving feature, capable of avoiding dynamic obstacles, with line following as an added bonus.
+
+We started this project as complete beginners and were worried about how we could manage to do all this stuff. So let's start from the beginning.
+
+---
+> ### Now where to start from ?
+>![](https://i.imgflip.com/3y6qmf.jpg)
+
+ Obviously, designing.
+
+
+### The Creator of Racecar : SolidWorks
+![](https://media-exp1.licdn.com/dms/image/C5112AQFiC-hYeyvK7A/article-inline_image-shrink_1000_1488/0/1585760509518?e=1670457600&v=beta&t=Faf4027r51OqiApwxRzTt_SU453sxc4clxyCO39RoUI)
+
+
+To start designing in SolidWorks, it's better if you have some drawing or reference that you want to improvise on and re-create.
+The best place to find references is YouTube.
+So go on YouTube and search for "How to make a car in SolidWorks".
+![](https://i.imgur.com/P0ZxHRk.jpg)
+Look for a reference that's easy to make and improvise on.
+Now, looking at this picture, making a toy car is definitely easier than making a Nissan GTR or a Ford GT 16 (I wish I could make a cooler car).
+
+Following steps were used in making the car:
+1. Make all the parts mentioned in the video (.sldprt)
+2. Assemble these parts on solidworks (.sldasm)
+3. While making the report you can use drawings (.slddrw)
+4. Make sure that you have added all the coordinate systems, points and axes for URDF
+> Tip: get a big cup of coffee and build the car along while following the video 🍵
+
+
+After a few hours of redoing and a few cups of coffee, we were finally done with the model.
+
+Tadaaa!!
+![](https://i.imgur.com/h2Idu0Q.jpg)
+![](https://i.imgur.com/xYOHDSQ.jpg)
+
+>Tip: Don't be lazy and add colors and maybe your initials too
+
+Now you have your super car ready.
+To actually be able to simulate this car and spawn it, we need to convert our CAD model to URDF ([Universal Robot Description Format](https://wiki.ros.org/urdf)). To make our lives easier, we have a [URDF exporter](http://wiki.ros.org/sw_urdf_exporter#:~:text=The%20SolidWorks%20to%20URDF%20exporter,and%20robots%20(urdf%20files)) (Bless the maker).
+
+
+---
+
+> ### Nice, the Racecar looks like a renegade. Now it's time to provide a playground to virtually simulate how it's going to behave in the real world.
+
+
+### Software and Simulators
+
+#### 1. [ROS (Robot Operating System)](http://wiki.ros.org/)
+The Robot Operating System (ROS) is an open-source framework that helped us build and reuse code between robotics applications.
+
+[ROS Node](http://wiki.ros.org/ROS/Tutorials/UnderstandingNodes)
+
+All processes in ROS run in Nodes. For example, in our car each wheel link (the joint between the wheel and the base), the camera and the IMU sensor are all nodes. The Python script we write itself creates many nodes.
+
+[ROS Topics](http://wiki.ros.org/Topics)
+
+Topics are named buses over which nodes exchange messages. Topics have anonymous publish/subscribe semantics, which decouples the production of information from its consumption for example:
+![image alt](https://learn.microsoft.com/en-us/dotnet/architecture/dapr-for-net-developers/media/publish-subscribe/pub-sub-pattern.png)
+Over here, the ```Message Broker``` plays the role of a topic.
+#### 2. [Gazebo](https://gazebosim.org/home)
+
+So now the URDF describes everything about your car, and ROS will help you control it, but where will all this happen?
+That's where Gazebo helps you.
+
+Gazebo is a powerful robot simulator used by industry and academia that calculates physics, generates sensor data and provides convenient interfaces.
+
+---
+> We hope you are familiar enough with the software now. Let's simulate our Racecar in a world to drive in.
+
+### Let's Burn Some Rubber 🏎 🔥
+![](http://img.soogif.com/C9xreA41W9rVlfQjOp7B6dLyyMcvm62m.gif)
+#### [World File](https://classic.gazebosim.org/tutorials?tut=components&cat=get_started#WorldFiles)
+Now we have everything we need, but what is your car going to be on? Will it float in the air?
+Obviously not.
+For all the movements of our car we need a platform, or you could say a world, for it to be in.
+![](https://i.imgur.com/AVSQU4o.png)
+![](https://i.imgur.com/AOlG52t.png)
+
+
+
+#### [diff_drive_controller](http://wiki.ros.org/diff_drive_controller)
+
+Now our car is all set for moving!
+But the wheels of our vehicle can only rotate, so how will our car turn?
+To counter this, we use the diff_drive_controller, which turns the car by rotating the wheels on the two sides in different directions (or at different speeds).
+
+![](https://i.imgur.com/WrYT8bL.png)
+
+We don't have to work through the math behind it, thanks to the lovely people who derived these formulas for us to use.
+
+![](https://i.imgur.com/heRtC1v.png)
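+
+In practice, driving the car just means publishing velocity commands; the controller turns them into left and right wheel speeds using those formulas. A minimal rospy sketch (the topic name `/cmd_vel` is the usual default and may differ in a specific setup):
+
+```python
+import rospy
+from geometry_msgs.msg import Twist
+
+rospy.init_node('drive_forward')
+pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
+
+cmd = Twist()
+cmd.linear.x = 0.5    # forward speed in m/s
+cmd.angular.z = 0.3   # yaw rate in rad/s; the controller converts this into wheel speeds
+
+rate = rospy.Rate(10)
+while not rospy.is_shutdown():
+    pub.publish(cmd)
+    rate.sleep()
+```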
+
+
+---
+> Woohoo... Our **Racecar** not only looks great but also glides like the wind on the track. Now, for the most important aspect of our project: the **Algorithm**.
+
+### Racecar has Self-Control ⚙️
+![](https://user-images.githubusercontent.com/95731926/198358906-54cd67f5-0ad0-480a-8173-15b993495291.gif)
+
+
+In today's developing world, the latest technologies like self-driving cars attract most of us. But have you ever thought about how these machines are built, and what their requirements are in terms of hardware, coding, testing and simulation?
+
+Talking about the hardware (ignoring the vehicle itself), the most important components are **Sensors**.
+What kind, specifically? For self-driving we need to know the surrounding environment, and a very renowned sensor, the LiDAR sensor, is used to get a rough view of the environment.
+
+#### Lidar Sensors
+
+![](https://raw.githubusercontent.com/Ford/AVData/master/ford_demo/doc/rviz.gif)
+
+Now what is a Lidar Sensor?
+
+Lidar is an acronym for “light detection and ranging.” It is sometimes called “laser scanning” or “3D scanning.” The technology uses eye-safe laser beams to create a 3D representation of the surveyed environment. Lidar is used in many industries, including automotive, infrastructure, robotics, trucking, UAV/drones, industrial, mapping, and many more. Because lidar is its own light source, the technology offers strong performance in a wide variety of lighting and weather conditions.
+
+We used the **hokuyo.dae** mesh on our Racecar for LiDAR sensing, which can easily be plugged in via the Gazebo [head_hokuyo_sensor plugin](https://classic.gazebosim.org/tutorials?tut=ros_gzplugins#AddingaSensorPlugin).
+
+```xml
+<gazebo reference="hokuyo_link">
+  <sensor type="gpu_ray" name="head_hokuyo_sensor">
+    <pose>0 0 0 0 0 0</pose>
+    <visualize>false</visualize>
+    <update_rate>40</update_rate>
+    <ray>
+      <scan>
+        <horizontal>
+          <samples>720</samples>
+          <resolution>1</resolution>
+          <min_angle>-1.570796</min_angle>
+          <max_angle>1.570796</max_angle>
+        </horizontal>
+      </scan>
+      <range>
+        <min>0.10</min>
+        <max>30.0</max>
+        <resolution>0.01</resolution>
+      </range>
+      <noise>
+        <type>gaussian</type>
+        <mean>0.0</mean>
+        <stddev>0.01</stddev>
+      </noise>
+    </ray>
+    <plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_gpu_laser.so">
+      <topicName>/rrbot/laser/scan</topicName>
+      <frameName>hokuyo_link</frameName>
+    </plugin>
+  </sensor>
+</gazebo>
+```
+We added two Hokuyo sensors to the Racecar; shown below are the GPU rays of the LiDAR.
+![](https://media.discordapp.net/attachments/1006253253064937472/1015573905676709928/Screenshot_2022-09-03_at_4.16.49_PM.png?width=1660&height=1038)
+#### Visualising the point cloud in RViz:
+![](https://media.discordapp.net/attachments/1006253253064937472/1015573905336967292/Screenshot_2022-09-03_at_4.16.26_PM.png?width=1660&height=1038)
+
+As we have two sensors, there are two point clouds made for each obstacle. On analyzing them closely, we noticed a small distance error between the point clouds due to the different positions of the LiDARs.
+
+So, we used an open-source library called ira_laser_tools for merging the LiDAR scan data; you can check it out [here](https://arxiv.org/pdf/1411.1086v1.pdf). Finally, we had clean laser scan data to be used in our algorithm (discussed in the next section).
+
+Aaand we are not done yet: we need one more sensor, the IMU, to get the orientation of the robot, especially the yaw (the orientation about the z-axis), so as to keep track of where the Racecar is heading and by how much we have to rotate it to counter the obstacles.
+
+#### [Imu Sensor](https://classic.gazebosim.org/tutorials?tut=ros_gzplugins#AddingaSensorPlugin)
+
+An IMU is a specific type of sensor that measures angular rate, force and sometimes magnetic field. IMUs are composed of a 3-axis accelerometer and a 3-axis gyroscope, which would be considered a 6-axis IMU. They can also include an additional 3-axis magnetometer, which would be considered a 9-axis IMU. Technically, the term “IMU” refers to just the sensor, but IMUs are often paired with sensor fusion software which combines data from multiple sensors to provide measures of orientation and heading. In common usage, the term “IMU” may be used to refer to the combination of the sensor and sensor fusion software; this combination is also referred to as an AHRS (Attitude Heading Reference System).
+
+As we move on to the next section, let's get to know all the available algorithms for obstacle avoidance.
+
+Here, are some of them :-
+* [Bug Algorithm](https://en.wikipedia.org/wiki/Bug_algorithm#:~:text=The%20most%20basic%20form%20of,%2C%20walking%20around%20the%20obstacle)
+* Artificial Potential Field Algorithms
+  * The Conventional Potential Field Method
+  * The Follow-The-Gap Method
+  * The Advanced Fuzzy Potential Method
+  * The Obstacle Dependent Gaussian Potential Field Method
+
+All the algorithms except the Obstacle Dependent Gaussian Potential Field Method (ODG-PF) have their own drawbacks due to errors and inefficiency.
+
+
+---
+
+> Impressive... I hope we are clear on the algorithms available in our toolbox and why **ODG-PF** is the most suitable. It's time to implement it.
+
+### :construction: Obstacles, We are not Afraid of You 🦾
+
+![](https://1734811051.rsc.cdn77.org/data/images/full/374838/tesla-autopilot.png)
+
+#### Flow of the Algorithm
+![](https://user-images.githubusercontent.com/95731926/193303926-14bc111d-c998-436c-acae-effe77a4ccc0.png)
+
+**1. Range data from the LiDAR**
+   After merging the LiDAR data of the two sensors, we get an array of 542 distance readings spanning 180 degrees, with ranges from 0 to 11 m.
+
+**2. Threshold**
+   Thresholding is the step where we specify the range at which the Racecar should start avoiding an obstacle; here we take it as 3 m.
+
+**3. Defining and enlarging the obstacles**
+   After deciding the range, we need to clearly define the obstacles and enlarge them.
+   Now, why are we enlarging the obstacles?
+   Since we want to avoid the obstacles safely, we enlarge them so that the racecar never gets too close to an object.
+
+**4. Repulsive Field**
+   We calculate the repulsive field for the range data readings using the formula shown below.
+   ![](https://i.imgur.com/QpwAqdn.png)
+
+But there is a catch: the algorithm in the research paper works over 180 degrees, from -90 to 90 degrees, while we have around 542 readings.
+So, to solve this, we used the unitary method: 542 readings <-> 180 degrees, i.e. 1 reading <-> 0.33 degrees.
+
+And here's the graphical representation of the field.
+
+
+
+![](https://cdn.discordapp.com/attachments/1006253253064937472/1019124122137133097/Screenshot_2022-09-13_at_11.24.14_AM.png)
+
+![](https://cdn.discordapp.com/attachments/1006253253064937472/1019124121768050739/Screenshot_2022-09-13_at_11.24.05_AM.png)
+
+**5. Attractive Field**
+Similar to the repulsive field, it is calculated using the formula shown below.
+![](https://i.imgur.com/8tZCu6e.png)
+
+Here, gamma is a constant, chosen to be 5 after testing, and theta_goal is taken to be zero so as to keep the racecar moving straight.
+
+**6. Total field and the angle with the minimum field value**
+Now that we have the values of the attractive as well as the repulsive fields, we just need to add them element-wise.
+The result is the total field. We then look for the minimum of the total field array; the index where the value is minimum is the direction our racecar should move in. But an index refers to the array, and we need an angle for the racecar, so we again use the unitary method to convert the index into an angle.
+
+The required Twist message is then published to the racecar to achieve this heading, using the IMU readings as a rectifier.
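+
+To tie steps 4 to 6 together, here is a rough Python sketch of the field computation. The Gaussian form of the repulsive field follows our reading of the ODG-PF paper, but the constants and helper names here are illustrative rather than our production code:
+
+```python
+import numpy as np
+
+N_READINGS = 542
+angles = np.linspace(-90, 90, N_READINGS)     # angle (deg) of each reading, ~0.33 deg apart
+GAMMA = 5.0                                   # attractive-field constant
+THETA_GOAL = 0.0                              # keep moving straight
+
+def steering_angle(ranges, threshold=3.0):
+    """Return the heading (degrees) with the minimum total field."""
+    repulsive = np.zeros(N_READINGS)
+    for i, d in enumerate(ranges):
+        if d < threshold:                      # only nearby readings count as obstacles
+            amplitude = threshold - d          # closer obstacle -> stronger field
+            sigma = 5.0                        # enlarged angular width of the obstacle (deg)
+            repulsive += amplitude * np.exp(-((angles - angles[i]) ** 2) / (2 * sigma ** 2))
+
+    attractive = GAMMA * np.abs(THETA_GOAL - angles)
+    total = repulsive + attractive
+    return angles[np.argmin(total)]            # index -> angle via the unitary method
+```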
+
+![](https://cdn.discordapp.com/attachments/1006253253064937472/1019253809077305465/Screenshot_2022-09-13_at_7.59.52_PM.png)
+
+![](https://media.discordapp.net/attachments/1006253253064937472/1019253809551245352/Screenshot_2022-09-13_at_7.59.49_PM.png?width=1660&height=1038)
+
+The image shows that we should follow an angle of around -25 degrees to avoid the obstacle, which seems acceptable.
+
+### Demos and Results
+
+#### The static obstacle avoidance:
+
+https://user-images.githubusercontent.com/95731926/195905534-36b4bf22-bf20-4839-8e3b-0d61bfe03c86.mp4
+
+#### The Dynamic obstacle avoidance:
+
+https://user-images.githubusercontent.com/95731926/198357438-de720297-426f-4120-b9eb-d4f2fabc90d5.mp4
+
+---
+> Well done... I hope you are thorough with the implementation and didn't find it intimidating. You have now reached the **Bonus Section (Line Following).**
+
+### 📈 PID help me Follow the right Way 🛣
+![car](https://user-images.githubusercontent.com/95731926/198359354-05bf350f-05b0-401b-b66c-a6900e0d71ff.gif)
+
+
+Line following is a really popular feature and you can find tons of examples, source code, explanations, etc.
+
+It's really simple; the steps for line following are:
+
+1. Subscribe to the raw image topic and convert the raw image to a cv2 image using cv_bridge.
+2. The image obtained is an RGB image; convert it to HSV for better comparison and colour definition.
+3. Define the upper and lower bounds of the colour you want the racecar to follow.
+4. After masking the image, we get the contour of the red line, whose centroid is calculated and compared with the centre of the image; the difference is the deviation that has to be countered.
+
+
+5. This error is then corrected using PID.
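+
+A compact Python sketch of steps 1 to 5 (the HSV bounds, topic names and the proportional-only controller are placeholders; a full PID would also track the integral and derivative terms):
+
+```python
+import rospy
+import cv2
+import numpy as np
+from sensor_msgs.msg import Image
+from geometry_msgs.msg import Twist
+from cv_bridge import CvBridge
+
+bridge = CvBridge()
+pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
+KP = 0.005   # proportional gain (illustrative)
+
+def image_callback(msg):
+    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
+    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
+    mask = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))  # red-ish band
+    m = cv2.moments(mask)
+    cmd = Twist()
+    cmd.linear.x = 0.4
+    if m['m00'] > 0:
+        cx = m['m10'] / m['m00']                   # centroid of the line
+        error = cx - frame.shape[1] / 2            # deviation from the image centre
+        cmd.angular.z = -KP * error                # proportional correction
+    pub.publish(cmd)
+
+rospy.init_node('line_follower')
+rospy.Subscriber('/camera/image_raw', Image, image_callback)
+rospy.spin()
+```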
+
+https://user-images.githubusercontent.com/95731926/195905945-6cd707a3-23ca-4dc1-bdb5-43566de89396.mp4
+
+
+---
+> The Racecar is perpetually following the line... Now it's time to wrap up. Let us summarise the blog.
+
+### Conclusion
+Being Second Year students, there were a few things which we learned (some of them the hard way 🥲).
+So here's what we learned:
+
+* Bunking lectures is fine as long as you're doing something productive 😜
+* Ctrl C + Ctrl V is your best friend
+* Break down the project into smaller goals 🤓
+* Working offline together is more productive than zoom and gmeets
+* Effective googling is one of the best skills 🤹🏻
+* Projects are really fun when you get along with your mentors and teammates
+* All you need is a big cup of coffee and an all nighter to cover up 😇
+
+So, for everyone who survived this tsunami of information or skipped directly to the conclusion, we would like to conclude by saying that **it's not about what you know, it's all about how much more effort you put in to learn.**
+After all, we were just 2 SYs thinking about making the next Tesla.
+
+---
+### References
+
+* The [ODG-PF paper](https://www.hindawi.com/journals/jat/2018/5041401/), supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF).
+* [ira_laser_tools](http://wiki.ros.org/ira_laser_tools).
+* [Documentation inspiration](https://github.com/Jash-Shah/Eklavya---Drone).
\ No newline at end of file
diff --git a/_posts/2022-12-7-pothole-detection-blog.md b/_posts/2022-12-7-pothole-detection-blog.md
new file mode 100644
index 0000000..b69d719
--- /dev/null
+++ b/_posts/2022-12-7-pothole-detection-blog.md
@@ -0,0 +1,102 @@
+---
+layout: post
+title: "Pothole Detection: An OpenCV Challenge"
+tags: OpenCV Machine-learning
+description: Detecting road potholes using OAK-D camera
+---
+
+-- [Dhruvanshu Joshi](https://github.com/Dhruvanshu-Joshi)
+
+# Detecting Potholes: An OpenCV Challenge
+In my opinion, OpenCV is one of the highest-performing, most versatile as well as notorious tools out there. You can spend a whole week trying to figure out why all test cases won’t pass and realise all you had to do was convert the image into a binary image. For the Eklavya Mentorship program, my teammate and I used the OAK-D camera to detect road potholes using Stereo Vision.
+The Task was fun and intimidating at the same time but we were able to complete it thanks to the guidance of some great mentors and an amazing community, all courtesy of our college club: [SRA](https://sravjti.in/)
+
+## What is an OAK-D?
+
+![src](https://www.mouser.in/images/luxonis/lrg/OAK-D_t.jpg)
+
+First things first: having read that our project depends majorly on the OAK-D camera, we googled what an OAK-D camera is. To our surprise, we found out that this palm-sized robotic eye is actually worth 200 dollars! Sounds like an assault on your pockets, right? We soon realised its worth once we started reading its specifications. This is a camera which can simultaneously run advanced neural networks, provide a real-time depth image of a scene using its two stereo cameras, and perform object tracking, person detection, motion estimation, expression estimation and whatnot! On digging up more we came across the term DepthAI.
+
+![img1](https://techcrunch.com/wp-content/uploads/2020/07/oak-opencv.gif)
+
+## Hello DepthAI!
+
+So what is DepthAI?
+In layman’s terms, DepthAI is a Spatial AI platform which, by combining Artificial Intelligence, Computer Vision, Depth perception and performance, offers an embedded, low-power solution enabling a computer to perceive the world as the human eye does.
+
+![image](https://assets.rocketstock.com/uploads/2017/07/SpidermanHUD_example.gif)
+
+First, we clone the DepthAI repo into our local directory. Then we create a Python virtualenv and install all the requirements. Once this is done, we finally run the code to generate the depth image from the OAK-D camera and save it.
+
+And this is what a depth image looks like:
+
+![image](https://scontent.xx.fbcdn.net/v/t1.15752-9/312576342_821962229008957_6632162205827036789_n.png?stp=dst-png_p206x206&_nc_cat=105&ccb=1-7&_nc_sid=aee45a&_nc_ohc=aaNOHy8-TQEAX9XJ1Qx&_nc_ad=z-m&_nc_cid=0&_nc_ht=scontent.xx&oh=03_AdSx_hN-h1A-5CHd6kQrT-n0twgAk7W1bf3V5Zq_AdaDkA&oe=63B63E14)
+
+## Disparity and Rectification: Not My Cup of Tea
+
+So the very first step of the implementation process was to take stereo images from the camera, rectify them manually and generate a depth map from them manually.
+
+Piece of cake, right? NOOOO!!! For rectification you need some camera parameters, which were unknown to us at that time. So we spent the first few days trying to conjure up a way to rectify images without knowing anything about the camera. As the sub-heading suggests, we failed to do so. So we again played with the camera to get its focal length and calibration details. Having them, we now thought we were unstoppable and that the very first task was almost done. Little did we know that we were going to get such an unwelcoming outcome.
+
+This is the outcome we got. If you look real close, you’ll notice the pothole depth that the image tries to signify.
+
+![image](https://scontent.xx.fbcdn.net/v/t1.15752-9/313280594_1560569631038696_221842432707219026_n.png?stp=dst-png_s320x320&_nc_cat=104&ccb=1-7&_nc_sid=aee45a&_nc_ohc=1ZwexE1y5esAX_Y0WTK&_nc_ad=z-m&_nc_cid=0&_nc_ht=scontent.xx&oh=03_AdTKRL9UbDD6Bh40tqkuNu5yosFUper5V6Fp7VmInYNwKw&oe=63902DB2)
+
+Anyhoo, using the OAK-D camera itself to get the depth seemed better, more convenient and more beneficial to the overall project. So we continued with that.
+
+## Surface, why won't you fit :(
+
+A major part of our project depended on road surface fitting. So what is road surface fitting? To answer this, I'd describe the road surface fit as the surface we would expect the road to have if no pothole were present. Sounds super easy, right? All you gotta do is run the simple algorithm, already laid out in a very articulate research paper served to you by your mentors, on all pixels of the image, identify the recorded depths as potholes and carry out the succeeding steps, right? Well, if you thought any of this, that's a violation. Because this “super-easy” task took me 2 weeks! To be honest, I was even convinced that this method was not legit.
+
+![image](https://quotefancy.com/media/wallpaper/3840x2160/520085-Homer-Quote-If-something-s-hard-to-do-then-it-s-not-worth-doing.jpg)
+
+Our attempts looked something like this:
+
+![image](https://scontent.xx.fbcdn.net/v/t1.15752-9/307972598_620141066560545_9089820199153739_n.png?stp=dst-png_p228x119&_nc_cat=102&ccb=1-7&_nc_sid=aee45a&_nc_ohc=tzVhWts7SNsAX8J7qQH&_nc_ad=z-m&_nc_cid=0&_nc_ht=scontent.xx&oh=03_AdTZ6eXxOrEqjGMCriG4je8Xt29LDRZBfMos9rOKmlBvyg&oe=63900298)
+
+So, after weeks of trying it out and reaching out to our mentors for slight hints and help, we (I mean our mentor :) ) finally cracked the logic behind it. We studied it, realised what we did wrong, cursed ourselves and continued with the project.
+
+This is what the surface looked like, by the way. Note that this also considers the non-road regions of the picture for the fit. If we manually select an ROI, we can get a better, scene-specific outcome.
+
+![image](https://scontent.xx.fbcdn.net/v/t1.15752-9/309617964_1473863579783506_189537991565908528_n.png?stp=dst-png_p206x206&_nc_cat=102&ccb=1-7&_nc_sid=aee45a&_nc_ohc=djBAR3d0Q48AX9Rs4SY&_nc_ad=z-m&_nc_cid=0&_nc_ht=scontent.xx&oh=03_AdSOlYsBGTQr9DNlv5YV_dokaTryViQC6K2TUEFv51MZcQ&oe=639021CC)
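+
+For the curious, the core of the fitting step boils down to a least-squares fit of a smooth surface to the depth values. Here is a rough numpy sketch assuming a simple quadratic surface z = a + b·u + c·v + d·u² + e·uv + f·v² over pixel coordinates (u, v); the actual model in the paper and in our code may differ:
+
+```python
+import numpy as np
+
+def fit_road_surface(depth):
+    """Least-squares fit of a quadratic surface to a depth map of shape (H, W)."""
+    h, w = depth.shape
+    v, u = np.mgrid[0:h, 0:w]
+    u, v, z = u.ravel(), v.ravel(), depth.ravel().astype(np.float64)
+
+    # Design matrix for z = a + b*u + c*v + d*u^2 + e*u*v + f*v^2
+    A = np.column_stack([np.ones_like(u), u, v, u**2, u*v, v**2]).astype(np.float64)
+    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
+
+    return (A @ coeffs).reshape(h, w)   # the fitted "no-pothole" surface
+```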
+
+## All that holes is not Pothole
+
+So once we have this surface fit, our next task is to identify the potholes by taking the actual coordinates in 3D space, comparing them with the corresponding coordinates on the surface map, and, if we find them to lie below it, declaring a pothole. Voila!! So simple. Let me just rephrase this. The only thing left to do is to meticulously separate the patches corresponding to the actual potholes from those that merely exist below the surface chart, plotted using the least-squares regression algorithm, which necessitates general binarization of the image with a standard mean threshold, on which we conduct various morphological operations: degrading dilation of the image followed by thickening erosion, only to be followed by a reverse of both procedures again to get the best-optimized result.
+
+![image](https://media4.giphy.com/media/75ZaxapnyMp2w/giphy.gif?cid=790b761190bef8a30853b39e7c28cd0db26756426e362775&rid=giphy.gif&ct=g/)
+
+A simple layman's translation of the above procedure is to convert the depth image into a binary image using the average depth value as the threshold. Any point deeper than this is considered a pothole candidate. To eliminate errors, we dilate the image to reduce any small inconsistencies, followed by thickening of the actual pothole regions. We also define the central portion of the image as our region of interest, to eliminate errors like those that occur at the junction between the wall and the ground.
+
+![image](https://scontent.xx.fbcdn.net/v/t1.15752-9/314836165_1049237063143755_3794358094393401786_n.png?stp=dst-png_p296x100&_nc_cat=101&ccb=1-7&_nc_sid=aee45a&_nc_ohc=TNnLhLpywIsAX9mCK6W&_nc_ad=z-m&_nc_cid=0&_nc_ht=scontent.xx&oh=03_AdTADFZj_n-hXWhPtjnMHMNT0ZcA4xQ1lVlTvLQtjREnzw&oe=6390C043)
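+
+A minimal OpenCV sketch of that translation (threshold at the mean depth, then a morphological clean-up; the kernel size and the exact order of operations are illustrative):
+
+```python
+import cv2
+import numpy as np
+
+def pothole_mask(depth):
+    """Binarize the depth image at its mean value, then clean it up morphologically."""
+    d = depth.astype(np.float32)
+    _, binary = cv2.threshold(d, float(d.mean()), 255, cv2.THRESH_BINARY)
+    binary = binary.astype(np.uint8)
+    kernel = np.ones((5, 5), np.uint8)
+    # remove small speckles, then thicken the surviving pothole regions
+    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
+    return cv2.dilate(binary, kernel, iterations=1)
+```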
+
+## Connected Component Labelling (CCL):
+
+Once the candidate pothole regions are identified, we label them so that multiple potholes can be detected. For the above picture, say we detect 3 potholes, we assign them the numbers 1, 2 and 3. Now, if even a part of any of these candidate potholes lies outside the central ROI, we eliminate it. By far the simplest step. Genuinely speaking XD.
+
+## Found you Pothole!
+
+Now, once we have successfully identified the pothole coordinates, we draw a rectangle on the depth image and annotate it with the average depth of all coordinates corresponding to that pothole. Annnddd now we are done!!
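+
+Putting the last two steps together, here is a hedged OpenCV sketch using connected-component labelling and bounding boxes (the ROI margin and drawing details are made-up values for illustration):
+
+```python
+import cv2
+import numpy as np
+
+def annotate_potholes(mask, depth, roi_margin=50):
+    """Label pothole blobs, keep those inside the central ROI, and box + annotate them."""
+    vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
+    vis = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR)
+    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
+    h, w = mask.shape
+    for i in range(1, n_labels):                       # label 0 is the background
+        x, y, bw, bh, _ = stats[i]
+        if x < roi_margin or x + bw > w - roi_margin:  # discard blobs leaking out of the ROI
+            continue
+        mean_depth = depth[labels == i].mean()
+        cv2.rectangle(vis, (x, y), (x + bw, y + bh), (0, 0, 255), 2)
+        cv2.putText(vis, f"{mean_depth:.2f}", (x, y - 5),
+                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
+    return vis
+```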
+
+![image](https://scontent.xx.fbcdn.net/v/t1.15752-9/314430571_533517651939422_48320739202229212_n.png?stp=dst-png_p206x206&_nc_cat=101&ccb=1-7&_nc_sid=aee45a&_nc_ohc=jloYAg1tTokAX-B7EU_&_nc_oc=AQlykGigh8gJLTdl1eTkbX0rQkjYhwClRw5aWTcuCNuqet9kMpc5ykE0SuOxjdb_bhrEMwv8qCmSmMZyPdQ2Vs_y&_nc_ad=z-m&_nc_cid=0&_nc_ht=scontent.xx&oh=03_AdQkOQQ0tt09pr2G0Gu1QBY_zXGE6u8nhkYBa_WhWFu_hA&oe=6390D7B6)
+
+## Conclusion
+
+As sophomores, we learned a lot from this project. To list a few things, I would say:
+
+* OpenCV can ruin you and save you simultaneously.
+* Pothole **is** a HUGE problem
+* Oak-D camera is G.O.A.T
+* Depth Estimation is fun with enormous future scope
+* Connecting with Seniors is beneficial
+* Jotting down all the Tasks to be carried out in a day helps a lot
+* Prioritizing tasks and carrying them out is a skill you’ll only learn the hard way
+* Bernie Meme superiority
+
+## Know more about our project
+
+Do read our project report, and check out and **star** our GitHub repo to get more insights into our project.
+
+[Github](https://github.com/Dhruvanshu-Joshi/Pothole-Detection)
+
+[Project Report](https://github.com/Dhruvanshu-Joshi/Pothole-Detection/blob/main/Assets/Report_Pothole_Detection.pdf)
\ No newline at end of file
diff --git a/assets/posts/image-pipeline/CFA.png b/assets/posts/image-pipeline/CFA.png
new file mode 100644
index 0000000..e348a1f
Binary files /dev/null and b/assets/posts/image-pipeline/CFA.png differ
diff --git a/assets/posts/image-pipeline/Filters.png b/assets/posts/image-pipeline/Filters.png
new file mode 100644
index 0000000..ca3a116
Binary files /dev/null and b/assets/posts/image-pipeline/Filters.png differ
diff --git a/assets/posts/image-pipeline/Processed.png b/assets/posts/image-pipeline/Processed.png
new file mode 100644
index 0000000..863a37d
Binary files /dev/null and b/assets/posts/image-pipeline/Processed.png differ
diff --git a/assets/posts/image-pipeline/RGBtoBinary.png b/assets/posts/image-pipeline/RGBtoBinary.png
new file mode 100644
index 0000000..78f6471
Binary files /dev/null and b/assets/posts/image-pipeline/RGBtoBinary.png differ
diff --git a/assets/posts/image-pipeline/RGBtoGray.png b/assets/posts/image-pipeline/RGBtoGray.png
new file mode 100644
index 0000000..65bfb66
Binary files /dev/null and b/assets/posts/image-pipeline/RGBtoGray.png differ
diff --git a/assets/posts/image-pipeline/RGBtoHSV.png b/assets/posts/image-pipeline/RGBtoHSV.png
new file mode 100644
index 0000000..c80a3a0
Binary files /dev/null and b/assets/posts/image-pipeline/RGBtoHSV.png differ
diff --git a/assets/posts/image-pipeline/debayered.png b/assets/posts/image-pipeline/debayered.png
new file mode 100644
index 0000000..1240cb9
Binary files /dev/null and b/assets/posts/image-pipeline/debayered.png differ
diff --git a/assets/posts/image-pipeline/edge4.png b/assets/posts/image-pipeline/edge4.png
new file mode 100644
index 0000000..d567105
Binary files /dev/null and b/assets/posts/image-pipeline/edge4.png differ
diff --git a/assets/posts/image-pipeline/gamma.png b/assets/posts/image-pipeline/gamma.png
new file mode 100644
index 0000000..af934b1
Binary files /dev/null and b/assets/posts/image-pipeline/gamma.png differ
diff --git a/assets/posts/image-pipeline/image3.png b/assets/posts/image-pipeline/image3.png
new file mode 100644
index 0000000..8f262f2
Binary files /dev/null and b/assets/posts/image-pipeline/image3.png differ
diff --git a/assets/posts/image-pipeline/intro.gif b/assets/posts/image-pipeline/intro.gif
new file mode 100644
index 0000000..df5b5d4
Binary files /dev/null and b/assets/posts/image-pipeline/intro.gif differ
diff --git a/assets/posts/image-pipeline/raw.png b/assets/posts/image-pipeline/raw.png
new file mode 100644
index 0000000..03d6daf
Binary files /dev/null and b/assets/posts/image-pipeline/raw.png differ
diff --git a/assets/posts/image-pipeline/wb.png b/assets/posts/image-pipeline/wb.png
new file mode 100644
index 0000000..690c755
Binary files /dev/null and b/assets/posts/image-pipeline/wb.png differ