WIP Visualization using Foxglove #201

Open · wants to merge 49 commits into base: master

49 commits
ef49d55
added ROSCameraPublisher.py and visualizer operator can publish camer…
storm-light Jun 15, 2021
a4f092b
minimal name changes
storm-light Jun 15, 2021
a7ecf65
fixed variable typo
storm-light Jun 15, 2021
44ff205
visualizer operator can now publish ROS PointCloud2 messages (display…
storm-light Jun 17, 2021
83c3b98
fixed mirror image for lidar
storm-light Jun 17, 2021
cd44d2c
visualizer operator can visualize segmentation frames and depth frame…
storm-light Jun 23, 2021
8155d82
visualizes planning world
storm-light Jun 24, 2021
bf5a24d
streams can be displayed simultaneously (in foxglove), still has pyga…
storm-light Jul 3, 2021
ccce977
removed all dependencies on pygame in visualizer_operator, though sti…
storm-light Jul 4, 2021
22a3900
removed run_visualizer_control_loop
storm-light Jul 6, 2021
a7b36ce
removed visualize() methods from camera_frame.py, depth_frame.py, poi…
storm-light Jul 6, 2021
87b3a53
renamed ROSCameraPublisher.py to ros_camera_publisher.py and ROSLIDAR…
storm-light Jul 6, 2021
03e9752
added updated dockerfile (WIP) to install foxglove and ROS
storm-light Jul 6, 2021
dd73dfc
[Bug] Ensure localization sends a pose even if it is innacurate.
ICGog Jul 7, 2021
c8fa7b2
Support arbitrary traffic manager ports (#202)
pschafhalter Jul 7, 2021
1518c0a
[Bug] Ensure that the obstacle and TL bbox loggers do not overwrite b…
ICGog Jul 9, 2021
efa9333
removed pygame depedencies from operator_creator.py
storm-light Jul 14, 2021
88dc197
minor comment syntax fix
storm-light Jul 19, 2021
7227e58
minor comment syntax fix
storm-light Jul 19, 2021
ba82d01
changed import statements to include full path
storm-light Jul 19, 2021
051f1a8
added docstrings to ROSCameraPublisher and ROSLIDARPublisher classes …
storm-light Jul 19, 2021
2ad7843
resolved import merge conflict
storm-light Jul 19, 2021
ba218ef
minor comment changes
storm-light Jul 19, 2021
fdba024
Added QDTrack object tracker (#183)
mageofboy Jul 19, 2021
a24ed70
yapf'd deez files
storm-light Jul 19, 2021
13b6356
added ROSCameraPublisher.py and visualizer operator can publish camer…
storm-light Jun 15, 2021
204d60c
minimal name changes
storm-light Jun 15, 2021
5f77390
fixed variable typo
storm-light Jun 15, 2021
0b50b99
visualizer operator can now publish ROS PointCloud2 messages (display…
storm-light Jun 17, 2021
10288a5
fixed mirror image for lidar
storm-light Jun 17, 2021
c3d0913
visualizer operator can visualize segmentation frames and depth frame…
storm-light Jun 23, 2021
630b943
visualizes planning world
storm-light Jun 24, 2021
b3b0aab
streams can be displayed simultaneously (in foxglove), still has pyga…
storm-light Jul 3, 2021
389092b
removed all dependencies on pygame in visualizer_operator, though sti…
storm-light Jul 4, 2021
85cdf73
removed run_visualizer_control_loop
storm-light Jul 6, 2021
7702c64
removed visualize() methods from camera_frame.py, depth_frame.py, poi…
storm-light Jul 6, 2021
0fd5ad4
renamed ROSCameraPublisher.py to ros_camera_publisher.py and ROSLIDAR…
storm-light Jul 6, 2021
56590a5
added updated dockerfile (WIP) to install foxglove and ROS
storm-light Jul 6, 2021
2206c71
removed pygame depedencies from operator_creator.py
storm-light Jul 14, 2021
8101483
added docstrings to ROSCameraPublisher and ROSLIDARPublisher classes …
storm-light Jul 19, 2021
4b5cab5
minor comment syntax fix
storm-light Jul 19, 2021
af7fac5
minor comment syntax fix
storm-light Jul 19, 2021
08f945e
changed import statements to include full path
storm-light Jul 19, 2021
9aebae6
minor comment changes
storm-light Jul 19, 2021
2074851
yapf'd deez files
storm-light Jul 19, 2021
9d50b79
Merge remote-tracking branch 'origin/pylot-ros' into pylot-ros
storm-light Jul 20, 2021
053cdab
removed extra import statements
storm-light Jul 20, 2021
72ecd9a
runs roscore from visualizer operator's init() and kills process in d…
storm-light Jul 22, 2021
45436d7
updated Dockerfile to correctly install foxglove and ros
storm-light Jul 22, 2021
2 changes: 0 additions & 2 deletions data_gatherer.py
@@ -284,8 +284,6 @@ def main(argv):
stream.send(erdos.WatermarkMessage(erdos.Timestamp(is_top=True)))

try:
if pylot.flags.must_visualize():
pylot.utils.run_visualizer_control_loop(control_display_stream)
node_handle.wait()
except KeyboardInterrupt:
node_handle.shutdown()
27 changes: 27 additions & 0 deletions docker/Dockerfile
@@ -72,3 +72,30 @@ RUN echo "if [ -f ~/.bashrc ]; then . ~/.bashrc ; fi" >> ~/.bash_profile
# Set up ssh access to the container.
RUN cd ~/ && ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa <<<y 2>&1 >/dev/null
RUN sudo sed -i 's/#X11UseLocalhost yes/X11UseLocalhost no/g' /etc/ssh/sshd_config

# WIP (for visualization in Foxglove)
# FOR NOW: install foxglove-studio and ROS manually in the pylot container

# # Download Foxglove
# WORKDIR /home/erdos/workspace/pylot
# RUN curl -L https://github.com/foxglove/studio/releases/download/v0.10.2/foxglove-studio-0.10.2-linux-amd64.deb -o foxglove-studio-0.10.2-linux-amd64.deb
# RUN sudo apt install -y ./foxglove-studio-*.deb

# # Download ROS
# WORKDIR /home/erdos/workspace/pylot
# # From http://wiki.ros.org/melodic/Installation/Ubuntu:
# # Set up your computer to accept software from packages.ros.org:
# RUN sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
# # Set up your keys
# RUN sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
# # Make sure your Debian package index is up-to-date
# RUN sudo apt update
# # Desktop-Full Install
# RUN sudo apt install -y ros-melodic-desktop-full
# # Set up ROS environment variables. Note that `RUN source ~/.bashrc` has no
# # effect (RUN uses /bin/sh); the echo below covers interactive shells.
# RUN echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc

2 changes: 0 additions & 2 deletions pylot.py
@@ -258,8 +258,6 @@ def main(args):
client.start_recorder(FLAGS.simulation_recording_file)
node_handle, control_display_stream = driver()
signal.signal(signal.SIGINT, shutdown)
if pylot.flags.must_visualize():
pylot.utils.run_visualizer_control_loop(control_display_stream)
node_handle.wait()
except KeyboardInterrupt:
shutdown_pylot(node_handle, client, world)
25 changes: 25 additions & 0 deletions pylot/debug/ros_camera_publisher.py
@@ -0,0 +1,25 @@
import sys
import rospy
import numpy as np
from sensor_msgs.msg import Image

class ROSCameraPublisher:

def __init__(self, topic: str):
    # Publishes sensor_msgs/Image messages on the given topic.
self.image_pub = rospy.Publisher(topic, Image, queue_size=10)

def publish(self, img_arr):
    # Converts the 3-d numpy array img_arr to a sensor_msgs/Image message.
    # Inspired by https://github.com/eric-wieser/ros_numpy/blob/master/src/ros_numpy/image.py
img_msg = Image(encoding='rgb8')
if img_arr.dtype != np.uint8:
    # The rgb8 encoding expects unsigned 8-bit channels.
    img_arr = img_arr.astype(np.uint8)
img_msg.height, img_msg.width, channels = img_arr.shape
img_msg.data = img_arr.tobytes()
# step is the full row length in bytes: width * number of channels.
img_msg.step = img_msg.width * channels
img_msg.is_bigendian = (
img_arr.dtype.byteorder == '>' or
img_arr.dtype.byteorder == '=' and sys.byteorder == 'big'
)
self.image_pub.publish(img_msg)
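The field arithmetic above can be checked without a ROS installation. The sketch below mirrors the `publish()` logic in a hypothetical `image_msg_fields` helper (not part of the PR), assuming the rgb8 layout where each row is `width * 3` bytes:

```python
import sys
import numpy as np

def image_msg_fields(img_arr):
    """Computes the sensor_msgs/Image sizing fields for an RGB numpy array.

    Mirrors ROSCameraPublisher.publish() without requiring rospy, so the
    field math can be verified standalone.
    """
    if img_arr.dtype != np.uint8:
        img_arr = img_arr.astype(np.uint8)  # rgb8 expects unsigned bytes
    height, width, channels = img_arr.shape
    return {
        'height': height,
        'width': width,
        'step': width * channels,            # bytes per image row
        'data_len': len(img_arr.tobytes()),  # height * step in total
        'is_bigendian': (img_arr.dtype.byteorder == '>' or
                         (img_arr.dtype.byteorder == '=' and
                          sys.byteorder == 'big')),
    }

# A 480x640 RGB frame: step is 640 * 3 = 1920 bytes.
fields = image_msg_fields(np.zeros((480, 640, 3), dtype=np.uint8))
```

A consumer such as Foxglove reads the payload row by row using `step`, which is why `width * height` (as in an earlier revision) would corrupt the rendered image.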
29 changes: 29 additions & 0 deletions pylot/debug/ros_lidar_publisher.py
@@ -0,0 +1,29 @@
import sys
import rospy
import numpy as np
from sensor_msgs.msg import PointCloud2, PointField

class ROSLIDARPublisher:

def __init__(self, topic: str):
    # Publishes sensor_msgs/PointCloud2 messages on the given topic.
self.point_cloud_pub = rospy.Publisher(topic, PointCloud2, queue_size=10)

def publish(self, points):
    # Converts an (N, 3) point array to a sensor_msgs/PointCloud2 message.
points = points.astype(np.float32)
points_byte_array = points.tobytes()
row_step = len(points_byte_array)
point_step = len(points[0].tobytes())
fields = [PointField('x', 0, PointField.FLOAT32, 1),
PointField('y', 4, PointField.FLOAT32, 1),
PointField('z', 8, PointField.FLOAT32, 1)]
point_cloud_msg = PointCloud2(height=1,
width=len(points),
is_dense=True,
is_bigendian=False,
fields=fields,
point_step=point_step,
row_step=row_step,
data=points_byte_array)
self.point_cloud_pub.publish(point_cloud_msg)
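The PointCloud2 sizing can likewise be verified without rospy. This sketch (hypothetical `point_cloud_layout` helper, not in the PR) reproduces the byte-layout computation: three float32 fields at offsets 0, 4, and 8 give a 12-byte `point_step`, and with `height=1` a single row holds every point:

```python
import numpy as np

def point_cloud_layout(points):
    """Computes PointCloud2 sizing for an (N, 3) point array,
    mirroring ROSLIDARPublisher.publish() without rospy."""
    points = points.astype(np.float32)
    data = points.tobytes()
    point_step = points[0].nbytes  # 3 x float32 = 12 bytes per point
    row_step = len(data)           # height == 1, so one row holds all points
    return point_step, row_step, len(points)

point_step, row_step, width = point_cloud_layout(np.random.rand(100, 3))
```

The `width=len(points)` / `height=1` convention used in the operator is the standard way to publish an unorganized (non-grid) cloud.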
166 changes: 67 additions & 99 deletions pylot/debug/visualizer_operator.py
@@ -9,31 +9,31 @@

import numpy as np

import pygame
from pygame.locals import K_n

import pylot.utils
from pylot.drivers.sensor_setup import RGBCameraSetup
from pylot.perception.camera_frame import CameraFrame
from pylot.planning.world import World

DEFAULT_VIS_TIME = 30000.0

import rospy
from .ros_camera_publisher import ROSCameraPublisher
from .ros_lidar_publisher import ROSLIDARPublisher

class VisualizerOperator(erdos.Operator):
""" The `VisualizerOperator` allows developers to see the current state
of the entire pipeline by visualizing it on a pygame instance.
of the entire pipeline by visualizing it in Foxglove Studio.

This receives input data from almost the entire pipeline and renders the
results of the operator currently chosen by the developer on the screen.
results of the operator(s) in Foxglove Studio.
"""
def __init__(self, pose_stream, rgb_camera_stream, tl_camera_stream,
prediction_camera_stream, depth_camera_stream,
point_cloud_stream, segmentation_stream, imu_stream,
obstacles_stream, traffic_lights_stream,
tracked_obstacles_stream, lane_detection_stream,
prediction_stream, waypoints_stream, control_stream,
display_control_stream, pygame_display, flags):
control_display_stream, flags):
visualize_streams = []
self._pose_msgs = deque()
pose_stream.add_callback(
@@ -129,52 +129,33 @@ def __init__(self, pose_stream, rgb_camera_stream, tl_camera_stream,
erdos.add_watermark_callback(visualize_streams, [], self.on_watermark)

# Add a callback on a control stream to figure out what to display.
display_control_stream.add_callback(self.change_display)
self._logger = erdos.utils.setup_logging(self.config.name,
self.config.log_file_name)
self.display = pygame_display

# Set the font.
fonts = [x for x in pygame.font.get_fonts() if 'mono' in x]
default_font = 'ubuntumono'
mono = default_font if default_font in fonts else fonts[0]
mono = pygame.font.match_font(mono)
self.font = pygame.font.Font(mono, 14)

# Array of keys to figure out which message to display.
self.current_display = 0
self.display_array = []
self.window_titles = []

# PYLOT-ROS Integration
rospy.init_node("visualizer", anonymous=True, disable_signals=True)
self.pub = {} # dict of publishers

# Creating ROS publishers for streams to be visualized
if flags.visualize_rgb_camera:
self.display_array.append("RGB")
self.window_titles.append("RGB Camera")
self.pub["RGB"] = ROSCameraPublisher("/camera/rgb")
if flags.visualize_detected_obstacles:
self.display_array.append("Obstacle")
self.window_titles.append("Detected obstacles")
self.pub["Obstacle"] = ROSCameraPublisher("/camera/obstacle")
if flags.visualize_tracked_obstacles:
self.display_array.append("TrackedObstacle")
self.window_titles.append("Obstacle tracking")
self.pub["TrackedObstacle"] = ROSCameraPublisher("/camera/tracked_obstacle")
if flags.visualize_detected_traffic_lights:
self.display_array.append("TLCamera")
self.window_titles.append("Detected traffic lights")
self.pub["TLCamera"] = ROSCameraPublisher("/camera/tl_camera")
if flags.visualize_waypoints:
self.display_array.append("Waypoint")
self.window_titles.append("Planning")
if flags.visualize_prediction:
self.display_array.append("PredictionCamera")
self.window_titles.append("Prediction")
self.pub["Waypoint"] = ROSCameraPublisher("/camera/waypoint")
if flags.visualize_lidar:
self.display_array.append("PointCloud")
self.window_titles.append("LiDAR")
self.pub["PointCloud"] = ROSLIDARPublisher("/lidar/point_cloud")
if flags.visualize_detected_lanes:
self.display_array.append("Lanes")
self.window_titles.append("Detected lanes")
self.pub["Lanes"] = ROSCameraPublisher("/camera/lanes")
if flags.visualize_depth_camera:
self.display_array.append("Depth")
self.window_titles.append("Depth Camera")
self.pub["Depth"] = ROSCameraPublisher("/camera/depth")
self.pub["DepthPointCloud"] = ROSLIDARPublisher("/lidar/depth")
if flags.visualize_segmentation:
self.display_array.append("Segmentation")
self.window_titles.append("Segmentation")
self.pub["Segmentation"] = ROSCameraPublisher("/camera/segmentation")
if flags.visualize_world:
self._planning_world = World(flags, self._logger)
top_down_transform = pylot.utils.get_top_down_transform(
@@ -184,12 +184,9 @@ def __init__(self, pose_stream, rgb_camera_stream, tl_camera_stream,
self._bird_eye_camera_setup = RGBCameraSetup(
'bird_eye_camera', flags.camera_image_width,
flags.camera_image_height, top_down_transform, 90)
self.display_array.append("PlanningWorld")
self.window_titles.append("Planning world")
self.pub["PlanningWorld"] = ROSCameraPublisher("/camera/planning_world")
else:
self._planning_world = None
assert len(self.display_array) == len(self.window_titles), \
"The display and titles differ."

# Save the flags.
self._flags = flags
@@ -211,13 +211,6 @@ def save(self, msg, msg_type, queue):
msg.timestamp, msg_type))
queue.append(msg)

def change_display(self, display_message):
if display_message.data == K_n:
self.current_display = (self.current_display + 1) % len(
self.display_array)
self._logger.debug("@{}: Visualizer changed to {}".format(
display_message.timestamp, self.current_display))

def get_message(self, queue, timestamp, name):
msg = None
if queue:
@@ -258,21 +258,7 @@ def render_text(self, pose, control, timestamp):
"Reverse : {:.2f}".format(control.reverse),
]

# Display the information box.
info_surface = pygame.Surface(
(220, self._flags.camera_image_height // 3))
info_surface.set_alpha(100)
self.display.blit(info_surface, (0, 0))

# Render the text.
v_offset = 10
for line in info_text:
if v_offset + 18 > self._flags.camera_image_height:
break
surface = self.font.render(line, True, (255, 255, 255))
self.display.blit(surface, (8, v_offset))
v_offset += 18
pygame.display.flip()
# TODO: Render text (using ROS publisher)

def on_watermark(self, timestamp):
self._logger.debug("@{}: received watermark.".format(timestamp))
@@ -315,61 +272,70 @@ def on_watermark(self, timestamp):
if self._flags.visualize_imu:
self._visualize_imu(imu_msg)

sensor_to_display = self.display_array[self.current_display]
if sensor_to_display == "RGB" and bgr_msg:
bgr_msg.frame.visualize(self.display, timestamp=timestamp)
elif sensor_to_display == "Obstacle" and bgr_msg and obstacle_msg:
if self._flags.visualize_rgb_camera and bgr_msg:
image_np = bgr_msg.frame.as_rgb_numpy_array()
self.pub["RGB"].publish(image_np)
if self._flags.visualize_detected_obstacles and bgr_msg and obstacle_msg:
bgr_msg.frame.annotate_with_bounding_boxes(timestamp,
obstacle_msg.obstacles,
ego_transform)
bgr_msg.frame.visualize(self.display, timestamp=timestamp)
elif (sensor_to_display == "TLCamera" and tl_camera_msg
and traffic_light_msg):
image_np = bgr_msg.frame.as_rgb_numpy_array()
self.pub["Obstacle"].publish(image_np)
if self._flags.visualize_detected_traffic_lights and tl_camera_msg and traffic_light_msg:
tl_camera_msg.frame.annotate_with_bounding_boxes(
timestamp, traffic_light_msg.obstacles)
tl_camera_msg.frame.visualize(self.display, timestamp=timestamp)
elif (sensor_to_display == "TrackedObstacle" and bgr_msg
and tracked_obstacle_msg):
image_np = tl_camera_msg.frame.as_rgb_numpy_array()
self.pub["TLCamera"].publish(image_np)
if self._flags.visualize_tracked_obstacles and bgr_msg and tracked_obstacle_msg:
bgr_msg.frame.annotate_with_bounding_boxes(
timestamp, tracked_obstacle_msg.obstacle_trajectories,
ego_transform)
bgr_msg.frame.visualize(self.display)
elif sensor_to_display == "Waypoint" and (bgr_msg and pose_msg
and waypoint_msg):
image_np = bgr_msg.frame.as_rgb_numpy_array()
self.pub["TrackedObstacle"].publish(image_np)
if self._flags.visualize_waypoints and bgr_msg and pose_msg and waypoint_msg:
bgr_frame = bgr_msg.frame
if self._flags.draw_waypoints_on_camera_frames:
bgr_frame.camera_setup.set_transform(
pose_msg.data.transform * bgr_frame.camera_setup.transform)
waypoint_msg.waypoints.draw_on_frame(bgr_frame)
if self._flags.draw_waypoints_on_world:
waypoint_msg.waypoints.draw_on_world(self._world)
bgr_frame.visualize(self.display, timestamp=timestamp)
elif (sensor_to_display == "PredictionCamera" and prediction_camera_msg
and prediction_msg):
image_np = bgr_frame.as_rgb_numpy_array()
self.pub["Waypoint"].publish(image_np)
if prediction_camera_msg and prediction_msg:
frame = prediction_camera_msg.frame
frame.transform_to_cityscapes()
for obstacle_prediction in prediction_msg.predictions:
obstacle_prediction.draw_trajectory_on_frame(frame)
frame.visualize(self.display, timestamp=timestamp)
elif sensor_to_display == "PointCloud" and point_cloud_msg:
point_cloud_msg.point_cloud.visualize(
self.display, self._flags.camera_image_width,
self._flags.camera_image_height)
elif (sensor_to_display == "Lanes" and bgr_msg and lane_detection_msg):
if self._flags.visualize_lidar and point_cloud_msg:
points = point_cloud_msg.point_cloud.points
points[:,[0,2]] = points[:,[2,0]]
points[:,[1,2]] = points[:,[2,1]]
points[:,0] = -points[:,0]
points[:,2] = -points[:,2]
self.pub["PointCloud"].publish(points)
if self._flags.visualize_detected_lanes and bgr_msg and lane_detection_msg:
for lane in lane_detection_msg.data:
lane.draw_on_frame(bgr_msg.frame)
bgr_msg.frame.visualize(self.display, timestamp)
elif sensor_to_display == "Depth" and depth_msg:
depth_msg.frame.visualize(self.display, timestamp=timestamp)
elif sensor_to_display == "Segmentation" and segmentation_msg:
segmentation_msg.frame.visualize(self.display, timestamp=timestamp)
elif sensor_to_display == "PlanningWorld":
image_np = bgr_msg.frame.as_rgb_numpy_array()
self.pub["Lanes"].publish(image_np)
if self._flags.visualize_depth_camera and depth_msg:
image_np = depth_msg.frame.original_frame
image_np = image_np[:, :, ::-1]
self.pub["Depth"].publish(image_np)
depth_msg.frame.resize(854, 480)
points = depth_msg.frame.as_point_cloud()
points[:,0] = -points[:,0]
self.pub["DepthPointCloud"].publish(points)
if self._flags.visualize_segmentation and segmentation_msg:
self.pub["Segmentation"].publish(segmentation_msg.frame.as_cityscapes_palette())
if self._flags.visualize_world:
if prediction_camera_msg is None:
# Top-down prediction is not available. Show planning
# world on a black image.
black_img = np.zeros((self._bird_eye_camera_setup.height,
self._bird_eye_camera_setup.width, 3),
dtype=np.dtype("uint8"))
self._bird_eye_camera_setup.width, 3),
dtype=np.dtype("uint8"))
frame = CameraFrame(black_img, 'RGB',
self._bird_eye_camera_setup)
else:
@@ -387,9 +353,11 @@ def on_watermark(self, timestamp):
lanes=lanes)
self._planning_world.update_waypoints(None, waypoint_msg.waypoints)
self._planning_world.draw_on_frame(frame)
frame.visualize(self.display, timestamp=timestamp)
image_np = frame.as_rgb_numpy_array()
self.pub["PlanningWorld"].publish(image_np)

self.render_text(pose_msg.data, control_msg, timestamp)
# TODO: Render text in Foxglove.
#self.render_text(pose_msg.data, control_msg, timestamp)

def run(self):
# Run method is invoked after all operators finished initializing.
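The lidar branch of `on_watermark` shuffles the point-cloud axes in place before publishing. A standalone sketch (hypothetical `remap_lidar_points` helper, not in the PR) makes the net transform explicit:

```python
import numpy as np

def remap_lidar_points(points):
    """Applies the same axis shuffle as the visualizer's lidar branch:
    swap columns 0<->2, then 1<->2, then negate x and z."""
    points = points.copy()  # the operator mutates in place; copy for clarity
    points[:, [0, 2]] = points[:, [2, 0]]
    points[:, [1, 2]] = points[:, [2, 1]]
    points[:, 0] = -points[:, 0]
    points[:, 2] = -points[:, 2]
    return points

# Net effect on a single point: (x, y, z) -> (-z, x, -y).
remapped = remap_lidar_points(np.array([[1.0, 2.0, 3.0]]))
```

This looks like a conversion from the simulator's camera-style frame into a ROS-friendly frame (and matches the earlier "fixed mirror image for lidar" commit), though the PR itself does not document the target convention.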
13 changes: 0 additions & 13 deletions pylot/perception/camera_frame.py
@@ -137,19 +137,6 @@ def resize(self, width: int, height: int):
dsize=(width, height),
interpolation=cv2.INTER_NEAREST)

def visualize(self, pygame_display, timestamp=None):
"""Visualizes the frame on a pygame display."""
import pygame
if timestamp is not None:
pylot.utils.add_timestamp(self.frame, timestamp)
if self.encoding != 'RGB':
image_np = self.as_rgb_numpy_array()
else:
image_np = self.frame
image_np = np.transpose(image_np, (1, 0, 2))
pygame.surfarray.blit_array(pygame_display, image_np)
pygame.display.flip()

def save(self, timestamp: int, data_path: str, file_base: str):
"""Saves the camera frame to a file.
