
v0.4.6

@fcakyon released this 14 Jun 23:37
596eb86

new feature

  • add more mot utils (#133)
MOT Challenge formatted ground truth dataset creation:
  • import required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
  • init video:
mot_video = MotVideo(name="sequence_name")
  • init first frame:
mot_frame = MotFrame()
  • add annotations to frame:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)
  • add frame to video:
mot_video.add_frame(mot_frame)
  • export in MOT challenge format:
mot_video.export(export_dir="mot_gt", type="gt")
  • your MOT challenge formatted ground truth files are ready under the mot_gt/sequence_name/ folder (a combined sketch of these steps follows below).
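A minimal end-to-end sketch combining the steps above; the per-frame box lists below are hypothetical placeholders for your own annotations:

from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

# hypothetical per-frame ground truth boxes in [x_min, y_min, width, height] format
frames_boxes = [
    [[10, 20, 50, 40], [100, 80, 30, 60]],  # frame 0
    [[12, 22, 50, 40], [104, 82, 30, 60]],  # frame 1
]

mot_video = MotVideo(name="sequence_name")
for frame_boxes in frames_boxes:
    mot_frame = MotFrame()
    for bbox in frame_boxes:
        mot_frame.add_annotation(MotAnnotation(bbox=bbox))
    mot_video.add_frame(mot_frame)

# ground truth files are written under mot_gt/sequence_name/
mot_video.export(export_dir="mot_gt", type="gt")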
Advanced MOT Challenge formatted ground truth dataset creation:
  • you can customize the tracker while initializing the mot video object:
tracker_params = {
  'max_distance_between_points': 30,
  'min_detection_threshold': 0,
  'hit_inertia_min': 10,
  'hit_inertia_max': 12,
  'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
  • you can omit automatic track id generation and directly provide the track ids of annotations:
# create annotations with track ids:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format without automatic track id generation:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=False)
  • you can overwrite results in an already present directory by passing exist_ok=True (a combined sketch of these advanced options follows below):
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)
MOT Challenge formatted tracker output creation:
  • import required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
  • init video by providing video name:
mot_video = MotVideo(name="sequence_name")
  • init first frame:
mot_frame = MotFrame()
  • add tracker outputs to frame:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)
  • add frame to video:
mot_video.add_frame(mot_frame)
  • export in MOT challenge format:
mot_video.export(export_dir="mot_test", type="test")
  • your MOT challenge formatted tracker output files are ready as mot_test/sequence_name.txt (a combined sketch of these steps follows below).
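A minimal end-to-end sketch combining the steps above; the per-frame (track_id, bbox) pairs below are hypothetical placeholders for your tracker's results:

from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

# hypothetical tracker results: per frame, (track_id, [x_min, y_min, width, height]) pairs
tracked_frames = [
    [(1, [10, 20, 50, 40]), (2, [100, 80, 30, 60])],  # frame 0
    [(1, [12, 22, 50, 40]), (2, [104, 82, 30, 60])],  # frame 1
]

mot_video = MotVideo(name="sequence_name")
for frame_results in tracked_frames:
    mot_frame = MotFrame()
    for track_id, bbox in frame_results:
        mot_frame.add_annotation(MotAnnotation(bbox=bbox, track_id=track_id))
    mot_video.add_frame(mot_frame)

# tracker output is written as mot_test/sequence_name.txt
mot_video.export(export_dir="mot_test", type="test")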
Advanced MOT Challenge formatted tracker output creation:
  • you can enable the tracker and directly provide object detector outputs:
# add object detector outputs:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format by applying a Kalman-based tracker:
mot_video.export(export_dir="mot_test", type="test", use_tracker=True)
  • you can customize the tracker while initializing the mot video object:
tracker_params = {
  'max_distance_between_points': 30,
  'min_detection_threshold': 0,
  'hit_inertia_min': 10,
  'hit_inertia_max': 12,
  'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
  • you can overwrite results in an already present directory by passing exist_ok=True (a combined sketch of these advanced options follows below):
mot_video.export(export_dir="mot_test", type="test", exist_ok=True)

documentation

  • update coco docs (#134)
  • add colab links into readme (#135)

Check the YOLOv5 + SAHI demo notebook on Colab.

Check the MMDetection + SAHI demo notebook on Colab.

bug fixes

  • fix demo notebooks (#136)