This repository has been archived by the owner on Feb 9, 2024. It is now read-only.
1. Train an ONNX/TF model to turn "ball in video frame" into a "normalized x/y" coordinate.
2. Update the simulator to render the ball as a video frame, instead of synthesizing it directly from the physics.
3. Import that model and combine it with one or more concepts for controlling the servos.
4. Export to Moab.
5. ???
6. Profit
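As a minimal sketch of the "normalized x/y" target in step 1, the convention below maps pixel coordinates to [-1, 1] with the origin at the frame center. The exact convention (range and origin) is an assumption here, not confirmed by the issue:

```python
def normalize(px, py, width, height):
    """Map a pixel coordinate to [-1, 1] with (0, 0) at the frame center.

    Assumed convention for the model's regression target; the real
    pipeline may use a different range or origin.
    """
    nx = (px - width / 2) / (width / 2)
    ny = (py - height / 2) / (height / 2)
    return nx, ny
```

With this convention, the center of a 640x480 frame maps to (0.0, 0.0) and the top-left corner to (-1.0, -1.0).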
We use OpenCV computer-vision algorithms to find the dominant hue of the center pixel region during calibration, then use that color for the Hough circle detection algorithm. This means you have to recalibrate for different-colored balls, and detection is sensitive to changing ambient light (a dark room vs. direct sunlight).
Would require synthesizing simulated data and/or collecting many real frames. Since we now have labeled data from ideal conditions (i.e., we can easily save videos with metadata marking the circle where the ball is), we have a good starting point.
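A toy version of the synthetic-data idea: render a ball at a random position and emit the frame together with its normalized-center label. Frame size, ball radius, and the [-1, 1] label convention are all assumptions for illustration:

```python
import numpy as np

def synth_frame(rng, size=128, radius=8):
    """Render one grayscale frame with a bright ball at a random
    position; return (frame, label) with the center normalized to
    [-1, 1]. A toy stand-in for simulator-rendered training data.
    """
    cx = rng.uniform(radius, size - radius)
    cy = rng.uniform(radius, size - radius)
    yy, xx = np.mgrid[0:size, 0:size]
    frame = ((xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2).astype(np.float32)
    label = np.array([(cx - size / 2) / (size / 2),
                      (cy - size / 2) / (size / 2)], dtype=np.float32)
    return frame, label

def make_dataset(n, seed=0):
    """Build n (frame, label) pairs with a fixed seed for repeatability."""
    rng = np.random.default_rng(seed)
    pairs = [synth_frame(rng) for _ in range(n)]
    frames = np.stack([f for f, _ in pairs])
    labels = np.stack([l for _, l in pairs])
    return frames, labels
```

Real frames saved with ball-circle metadata would slot into the same (frame, label) shape, so a model could be pretrained on synthetic data and fine-tuned on the logged videos.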
Great suggestion from @mikeestee