Allow for Simulating Acquisition (#124)
* Update dummy SegmentFromPoint to publish new Mask.msg

* Update AcquireFood goal request pickles to have the new Mask.msg

* Update README with instructions on simulating bite acquisition
amalnanavati authored Feb 17, 2024
1 parent 1a466cb commit 1264852
Showing 11 changed files with 120 additions and 37 deletions.
15 changes: 15 additions & 0 deletions feeding_web_app_ros2_test/README.md
@@ -17,3 +17,18 @@ You can also toggle off certain combinations of dummy nodes with arguments:
- **Don't run the RealSense node**: `ros2 launch feeding_web_app_ros2_test feeding_web_app_dummy_nodes_launch.xml run_real_sense:=false`

You can also combine any of the above arguments.

## Simulating an AcquireFood Goal

If you launch the code with `--sim mock` (see [here](https://github.com/personalrobotics/ada_feeding/blob/ros2-devel/README.md)), using bite selection through the app should properly call AcquireFood, and you should be able to see the acquisition action in RViz. (It is recommended to add an `Axes` visualization in RViz for the `food` frame, to see the perceived top-center and orientation of the detected food item.) However, this approach has two downsides:
1. The detected food mask is a dummy mask, not a real output of SegmentAnything.
2. Each time you use the app to invoke AcquireFood, the mask and depth will be slightly different, which is a challenge for repeatability.

To address these issues, we have pickled goal requests from the app to the AcquireFood action. These goal requests were generated by the actual `SegmentFromPoint` node running on the dummy RGB and depth images (found in `./data`). To invoke bite acquisition with a static goal request, do the following:
1. `cd ~/colcon_ws`
2. `python3 src/ada_feeding/start-py --sim mock`. See [here](https://github.com/personalrobotics/ada_feeding/blob/ros2-devel/README.md) for more info.
3. `ros2 launch feeding_web_app_ros2_test feeding_dummy_acquirefood_launch.py`. This will have the robot move above the plate, and then invoke `AcquireFood` for the stored goal request.

There are 6 pickled goal requests in `./data`. You can select which one is run via a launch argument to `feeding_dummy_acquirefood_launch.py`. Be sure to also change the images published by the DummyRealSense node to correspond to the new pickled goal request; these can be changed via launch arguments to `feeding_web_app_dummy_nodes_launch.xml`.
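For example, assuming the launch argument is named `request` (as in `feeding_dummy_acquirefood_launch.py`) and takes one of the file stems in `./data`, and assuming the image arguments are forwarded (both assumptions, not confirmed by this commit), switching to a different pickled request might look like:

```shell
# Illustrative: run the dummy AcquireFood launch with a different pickled
# goal request. If rgb_path/depth_path are not forwarded by this launch file,
# pass them to feeding_web_app_dummy_nodes_launch.xml instead.
ros2 launch feeding_web_app_ros2_test feeding_dummy_acquirefood_launch.py \
    request:=above_plate_2_carrot_request \
    rgb_path:=above_plate_2_rgb.jpg \
    depth_path:=above_plate_2_depth.png
```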

To pickle more goal requests, modify `ada_feeding/config/` to [pass the `pickle_goal_path` parameter to the AcquireFood tree](https://github.com/personalrobotics/ada_feeding/blob/f889fe44351ec552e945ba028d4928826ee03710/ada_feeding/config/ada_feeding_action_servers_default.yaml#L54). That should be a full path to where you want to store the pickle. Then run the **dummy RealSense node** and the **real** perception nodes, run the web app, do bite selection for a bite, and select a mask. That mask's goal request will be stored in a pickle at the specified path.
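Since the stored goal requests are ordinary pickles, they round-trip with the standard `pickle` module. The sketch below is a minimal, ROS-free illustration; the dictionary is a hypothetical stand-in for the real AcquireFood goal message, whose exact fields are not shown in this commit:

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for an AcquireFood goal request; the real pickles in
# feeding_web_app_ros2_test/data (e.g. above_plate_1_carrot_request.pkl)
# contain an actual ROS 2 goal message with the new Mask.msg fields.
dummy_request = {"item_id": "dummy_food_id_0", "average_depth": 0.3075}

# Dump the request, as a node given a pickle_goal_path would.
path = os.path.join(tempfile.gettempdir(), "dummy_request.pkl")
with open(path, "wb") as f:
    pickle.dump(dummy_request, f)

# Load it back, mirroring what the dummy AcquireFood launch does with the
# stored goal before sending it to the action server.
with open(path, "rb") as f:
    goal_request = pickle.load(f)

print(goal_request["item_id"])  # → dummy_food_id_0
```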
Binary file modified feeding_web_app_ros2_test/data/above_plate_1_carrot_request.pkl
Binary file modified feeding_web_app_ros2_test/data/above_plate_1_grape_request.pkl
Binary file modified feeding_web_app_ros2_test/data/above_plate_2_carrot_request.pkl
@@ -65,6 +65,7 @@ def main(args=None):
action_client.get_logger().error("MoveAbovePlate Failed")
return

# Second, send the acquire food action
# Send Goal
future = action_client.acquire_food()
rclpy.spin_until_future_complete(action_client, future)
@@ -9,7 +9,7 @@
from rclpy.action import ActionServer, CancelResponse, GoalResponse
from rclpy.executors import MultiThreadedExecutor
from rclpy.node import Node
from sensor_msgs.msg import CompressedImage, RegionOfInterest
from sensor_msgs.msg import CompressedImage, Image, RegionOfInterest, CameraInfo
from shapely.geometry import MultiPoint
import threading
import time
@@ -42,9 +42,36 @@ def __init__(self, sleep_time=2.0, send_feedback_hz=10):
self.latest_img_msg = None
self.latest_img_msg_lock = threading.Lock()

# Subscribe to the depth topic, to store the latest image
self.depth_subscriber = self.create_subscription(
Image,
"/camera/aligned_depth_to_color/image_raw",
self.depth_callback,
1,
)
self.latest_depth_msg = None
self.latest_depth_msg_lock = threading.Lock()

# Subscribe to the camera info
self.camera_info_subscriber = self.create_subscription(
CameraInfo, "/camera/color/camera_info", self.camera_info_callback, 1
)
self.camera_info = None
self.camera_info_lock = threading.Lock()

# Convert between ROS and CV images
self.bridge = CvBridge()

# Create the action server
self._action_server = ActionServer(
self,
SegmentFromPoint,
"SegmentFromPoint",
self.execute_callback,
goal_callback=self.goal_callback,
cancel_callback=self.cancel_callback,
)

def image_callback(self, msg):
"""
Store the latest image message.
@@ -54,20 +54,30 @@ def image_callback(self, msg):
msg: The image message.
"""
with self.latest_img_msg_lock:
# Only create the action server after we have received at least one
# image
if self.latest_img_msg is None:
# Create the action server
self._action_server = ActionServer(
self,
SegmentFromPoint,
"SegmentFromPoint",
self.execute_callback,
goal_callback=self.goal_callback,
cancel_callback=self.cancel_callback,
)
self.latest_img_msg = msg

def depth_callback(self, msg):
"""
Store the latest depth message.
Parameters
----------
msg: The depth message.
"""
with self.latest_depth_msg_lock:
self.latest_depth_msg = msg

def camera_info_callback(self, msg):
"""
Store the latest camera info message.
Parameters
----------
msg: The camera info message.
"""
with self.camera_info_lock:
self.camera_info = msg

def goal_callback(self, goal_request):
"""
Accept a goal if this action does not already have an active goal,
@@ -85,12 +122,28 @@ def goal_callback(self, goal_request):
goal_request: The goal request message.
"""
self.get_logger().info("Received goal request")
if self.active_goal_request is None:
self.get_logger().info("Accepting goal request")
self.active_goal_request = goal_request
return GoalResponse.ACCEPT
self.get_logger().info("Rejecting goal request")
return GoalResponse.REJECT

# Reject if there is already an active goal
if self.active_goal_request is not None:
self.get_logger().info("Rejecting goal request")
return GoalResponse.REJECT

# Reject if this node has not yet received an RGB image, depth image, and camera info
with self.latest_img_msg_lock:
with self.latest_depth_msg_lock:
with self.camera_info_lock:
if (
self.latest_img_msg is None
or self.latest_depth_msg is None
or self.camera_info is None
):
self.get_logger().info("Rejecting goal request")
return GoalResponse.REJECT

# Otherwise accept
self.get_logger().info("Accepting goal request")
self.active_goal_request = goal_request
return GoalResponse.ACCEPT
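The readiness check in `goal_callback` above boils down to: snapshot the shared state under its locks, then accept only if every input has arrived. A stripped-down, ROS-free sketch of that pattern (a single lock is used here for brevity, whereas the node itself keeps one lock per message; string return values stand in for `GoalResponse`):

```python
import threading

class ReadinessGate:
    """Accepts work only once an RGB image, a depth image, and camera info
    have each been received, mirroring the dummy SegmentFromPoint node."""

    def __init__(self):
        self.latest_img_msg = None
        self.latest_depth_msg = None
        self.camera_info = None
        self.lock = threading.Lock()

    def goal_callback(self, goal_request):
        # Snapshot shared state under the lock, then decide.
        with self.lock:
            ready = (
                self.latest_img_msg is not None
                and self.latest_depth_msg is not None
                and self.camera_info is not None
            )
        return "ACCEPT" if ready else "REJECT"

gate = ReadinessGate()
print(gate.goal_callback("goal"))  # → REJECT (no inputs received yet)
gate.latest_img_msg = "rgb"
gate.latest_depth_msg = "depth"
gate.camera_info = "info"
print(gate.goal_callback("goal"))  # → ACCEPT (all inputs present)
```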

def cancel_callback(self, goal_handle):
"""
@@ -122,9 +175,15 @@ def segment_image(self, seed_point, result, segmentation_success):
latest_img_msg = None
with self.latest_img_msg_lock:
latest_img_msg = self.latest_img_msg
with self.latest_depth_msg_lock:
latest_depth_msg = self.latest_depth_msg
with self.camera_info_lock:
camera_info = self.camera_info
result.header = latest_img_msg.header
result.camera_info = camera_info
img = self.bridge.compressed_imgmsg_to_cv2(latest_img_msg, "bgr8")
height, width, _ = img.shape
depth_img = self.bridge.imgmsg_to_cv2(latest_depth_msg, "passthrough")

# Sleep (dummy segmentation)
time.sleep(self.sleep_time)
@@ -179,6 +238,12 @@ def segment_image(self, seed_point, result, segmentation_success):
format="jpeg",
data=cv2.imencode(".jpg", img[y_min:y_max, x_min:x_max])[1].tobytes(),
)
mask_msg.depth_image = self.bridge.cv2_to_imgmsg(
depth_img[y_min:y_max, x_min:x_max], "passthrough"
)
mask_msg.average_depth = (
np.median(depth_img[y_min:y_max, x_min:x_max][mask_img > 0]) / 1000.0
) # Convert to meters
mask_msg.item_id = "dummy_food_id_%d" % (i)
mask_msg.confidence = np.random.random()
result.detected_items.append(mask_msg)
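The `average_depth` computation above takes the median of the masked depth pixels and divides by 1000 to convert RealSense millimeter depths to meters. A self-contained check of that expression with synthetic data (array values are made up):

```python
import numpy as np

# Synthetic 4x4 depth crop in millimeters, standing in for the aligned-depth
# image inside the mask's bounding box.
depth_crop = np.array([
    [300, 310, 0,   0],
    [305, 315, 0,   0],
    [0,   0,   900, 900],
    [0,   0,   900, 900],
], dtype=np.uint16)

# Binary mask selecting only the food pixels (top-left 2x2 block).
mask_img = np.zeros_like(depth_crop)
mask_img[:2, :2] = 255

# Same expression as in the dummy SegmentFromPoint node: median depth of the
# masked pixels, converted from millimeters to meters.
average_depth = np.median(depth_crop[mask_img > 0]) / 1000.0
print(average_depth)  # → 0.3075
```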
@@ -27,23 +27,23 @@ def generate_launch_description():

# Initialize Arguments
request = LaunchConfiguration("request")

# Add dummy realsense node
package_prefix = FindPackageShare("feeding_web_app_ros2_test")
ld.add_action(
IncludeLaunchDescription(
AnyLaunchDescriptionSource(
[package_prefix, "/launch/feeding_web_app_dummy_nodes_launch.xml"]
),
launch_arguments={
"run_web_bridge": "false",
"run_food_detection": "false",
"run_face_detection": "false",
"run_motion": "false",
"run_real_sense": "true",
}.items(),
)
)

# # Add dummy realsense node
# ld.add_action(
# IncludeLaunchDescription(
# AnyLaunchDescriptionSource(
# [package_prefix, "/launch/feeding_web_app_dummy_nodes_launch.xml"]
# ),
# launch_arguments={
# "run_web_bridge": "false",
# "run_food_detection": "false",
# "run_face_detection": "false",
# "run_motion": "false",
# "run_real_sense": "true",
# }.items(),
# )
# )

# Add Request
request_path = {
@@ -6,6 +6,8 @@
<arg name="run_face_detection" default="true" description="Whether to run the dummy face detection node" />
<arg name="run_real_sense" default="true" description="Whether to run the dummy RealSense node" />
<arg name="run_motion" default="true" description="Whether to run the dummy motion nodes" />
<arg name="rgb_path" default="above_plate_2_rgb.jpg" description="The path to the RGB image/video to publish from the dummy node, relative to this node's share/data folder." />
<arg name="depth_path" default="above_plate_2_depth.png" description="The path to the depth image to publish from the dummy node, relative to this node's share/data folder." />

<group if="$(var run_web_bridge)">
<!-- The ROSBridge Node -->
@@ -23,8 +25,8 @@
<remap from="~/camera_info" to="/camera/color/camera_info"/>
<remap from="~/aligned_depth/camera_info" to="/camera/aligned_depth_to_color/camera_info"/>
<param name="fps" value="30"/>
<param name="rgb_path" value="$(find-pkg-share feeding_web_app_ros2_test)/../data/above_plate_1_rgb.jpg"/>
<param name="depth_path" value="$(find-pkg-share feeding_web_app_ros2_test)/../data/above_plate_1_depth.png"/>
<param name="rgb_path" value="$(find-pkg-share feeding_web_app_ros2_test)/../data/$(var rgb_path)"/>
<param name="depth_path" value="$(find-pkg-share feeding_web_app_ros2_test)/../data/$(var depth_path)"/>
</node>
</group>

