
Finally, the working version of YoloV8 Instance Segmentation #918

Closed
wants to merge 9 commits

Conversation

sweetlhare

I redid the previous pull request and conducted testing.

@sweetlhare
Author

Tested slice/no-slice with the detection model on an image without objects and on an image with objects.

Attached results: clear_blue_sky.png, sliced_clear_blue_sky.png, sliced_small-vehicles1.jpeg, small-vehicles1.jpeg

@sweetlhare
Author

Tested slice/no-slice with the instance segmentation model on an image without objects and on an image with objects.

Attached results: clear_blue_sky-seg.png, sliced_clear_blue_sky-seg.png, sliced_small-vehicles1-seg.jpeg, small-vehicles1-seg.jpeg

@sweetlhare
Author

The interface is identical to the detection model:

```python
from sahi.sahi import AutoDetectionModel
from sahi.sahi.predict import get_prediction, get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type='yolov8',
    model_path='yolov8n-seg.pt',
    confidence_threshold=0.5,
    device='cuda:0'  # if torch.cuda.is_available() else 'cpu'
)

pred = get_sliced_prediction(
    'small-vehicles1.jpeg', detection_model,
    slice_height=640, slice_width=640,
    overlap_height_ratio=0.4, overlap_width_ratio=0.4
)

pred.export_visuals('test_results/', file_name='sliced_small-vehicles1-seg.jpeg')
```
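
For anyone who wants to go beyond the exported visuals, here is a minimal sketch of how the returned predictions could be inspected (assuming the standard sahi `ObjectPrediction` interface, where each prediction carries a category, a score and, for segmentation models, a `Mask` object holding a boolean numpy array):

```python
# Sketch only, not part of this PR: iterate over the merged predictions
# returned by get_sliced_prediction and report mask sizes where available.
for obj in pred.object_prediction_list:
    print(obj.category.name, obj.score.value)
    if obj.mask is not None:  # None when a plain detection model is used
        print("  mask foreground pixels:", obj.mask.bool_mask.sum())
```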

JakubCha added a commit to JakubCha/sahi that referenced this pull request Jul 24, 2023
@Aprilistic

Thank you for updating! I managed to use the yolov8 seg model with sahi. However, you missed the confidence_threshold at lines 72-73.

Original (lines 72-73):

```python
prediction_result_ = [(result.boxes.data[result.boxes.data[:, 4] >= 0.5],
                       result.masks.data[result.boxes.data[:, 4] >= 0.5])
                      for result in prediction_result]
```

Modified:

```python
prediction_result_ = [(result.boxes.data[result.boxes.data[:, 4] >= self.confidence_threshold],
                       result.masks.data[result.boxes.data[:, 4] >= self.confidence_threshold])
                      for result in prediction_result]
```

This code worked well.

@sweetlhare
Author

Thanks a lot! I set a fixed threshold for local tests and forgot to change it back.

@Jareco

Jareco commented Jul 24, 2023

I probably did something wrong, but I still get the following error:

File "C:\Projects\levasoft\SolarProTool-ObjectDetector\app\handlers\yolohandler.py", line 33, in predict
    result = get_sliced_prediction(
  File "C:\Projects\levasoft\SolarProTool-ObjectDetector\app\sahi\predict.py", line 261, in get_sliced_prediction
    prediction_result = get_prediction(
  File "C:\Projects\levasoft\SolarProTool-ObjectDetector\app\sahi\predict.py", line 100, in get_prediction
    detection_model.convert_original_predictions(
  File "C:\Projects\levasoft\SolarProTool-ObjectDetector\app\sahi\models\base.py", line 168, in convert_original_predictions
    self._create_object_prediction_list_from_original_predictions(
  File "C:\Projects\levasoft\SolarProTool-ObjectDetector\app\sahi\models\yolov8.py", line 182, in _create_object_prediction_list_from_original_predictions
    object_prediction = ObjectPrediction(
  File "C:\Projects\levasoft\SolarProTool-ObjectDetector\app\sahi\prediction.py", line 75, in __init__
    super().__init__(
  File "C:\Projects\levasoft\SolarProTool-ObjectDetector\app\sahi\annotation.py", line 581, in __init__
    raise ValueError("Invalid boolean mask.")
ValueError: Invalid boolean mask.

I use the following code:

    detection_model = AutoDetectionModel.from_pretrained(
        model_type='yolov8',
        model_path=config.YOLOMODEL_PATH,
        confidence_threshold=0.3,
        device="cpu", # or 'cuda:0'
    )

    result = get_sliced_prediction(
        "C:\\GRoof_22037481-dc2c-43d4-aab6-238d5f254bd6.png",
        detection_model,
        slice_height = 256,
        slice_width = 256,
        overlap_height_ratio = 0.2,
        overlap_width_ratio = 0.2
    )

Here is the picture:
https://imgur.com/a/GtcGS2Y

I detect obstacles on the roof, but somehow I get an error about an invalid boolean mask. I also had this error with the previous pull request.

At the same time, if I use other parameters, like

      result = get_sliced_prediction(
            "C:\\GRoof_22037481-dc2c-43d4-aab6-238d5f254bd6.png",
            detection_model,
            slice_height = 640,
            slice_width = 640,
            overlap_height_ratio = 0.4,
            overlap_width_ratio = 0.4
        )

then it seems to work, but `result.object_prediction_list[0].mask.bool_mask` contains `False` values instead of points.
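
For reference, one way to check whether those masks are genuinely empty or just sparse (a hedged sketch, assuming `mask.bool_mask` is a boolean numpy array as referenced above):

```python
# Hypothetical inspection snippet: count foreground pixels per predicted mask.
for i, obj in enumerate(result.object_prediction_list):
    if obj.mask is not None:
        print(i, obj.category.name, "foreground pixels:", obj.mask.bool_mask.sum())
```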

@sweetlhare
Author

I checked on your image with your parameters, and everything works. To get some detection results, I added an object to the image:
sliced_ZzvCGl1-seg.jpeg

@Jareco

Jareco commented Jul 24, 2023

If I change some parameters, I get some results, but at the same time the "mask" sometimes contains "False" values instead of numbers. See the screenshots below:

(screenshots: upload, prediction_visual)

@sweetlhare
Author

I will check what the problem may be ASAP.

@rusvagzur

Is this implemented in the main branch? I can't see the yolov8 segmentation model there

@rezat96

rezat96 commented Aug 18, 2023

Hey @sweetlhare, is there any update on this?
Thanks!

@OscarPetersen1992

Any update on this?

@FourierMourier

Any updates? Masks are correct and it works fine.

@CyVision13

CyVision13 commented Oct 2, 2023

Thanks a lot, can you update please? @sweetlhare

Guys, if you have a problem with the new version of ultralytics, you can change `from ultralytics.yolo.engine.results import Masks` to `from ultralytics.engine.results import Masks`.
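
For example, a version-tolerant import could look like this (just a sketch based on the two import paths mentioned above, not code from this PR):

```python
# Handle both the old and the new ultralytics package layout for Masks.
try:
    from ultralytics.engine.results import Masks        # newer ultralytics releases
except ImportError:
    from ultralytics.yolo.engine.results import Masks   # older releases
```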

@jehanjen

> Thank you for updating! I managed to use the yolov8 seg model with sahi. However, you missed the confidence_threshold at lines 72-73.
>
> Original, lines 72-73: `prediction_result_ = [(result.boxes.data[result.boxes.data[:, 4] >= 0.5], result.masks.data[result.boxes.data[:, 4] >= 0.5]) for result in prediction_result]`
>
> Modified: `prediction_result_ = [(result.boxes.data[result.boxes.data[:, 4] >= self.confidence_threshold], result.masks.data[result.boxes.data[:, 4] >= self.confidence_threshold]) for result in prediction_result]`
>
> This code worked well.
Could you please share the segmentation code with sahi?

@jehanjen

> Is this implemented in the main branch? I can't see the yolov8 segmentation model there

The same for me.

@jehanjen

> I checked on your image with your parameters, and everything works. To get some detection results, I added an object to the image: sliced_ZzvCGl1-seg.jpeg

Could you please share the command line for segmentation with sahi, and the code?

@jehanjen

> The interface is identical to the detection model:
>
> ```python
> from sahi.sahi import AutoDetectionModel
> from sahi.sahi.predict import get_prediction, get_sliced_prediction
>
> detection_model = AutoDetectionModel.from_pretrained(
>     model_type='yolov8',
>     model_path='yolov8n-seg.pt',
>     confidence_threshold=0.5,
>     device='cuda:0'  # if torch.cuda.is_available() else 'cpu'
> )
>
> pred = get_sliced_prediction(
>     'small-vehicles1.jpeg', detection_model,
>     slice_height=640, slice_width=640,
>     overlap_height_ratio=0.4, overlap_width_ratio=0.4
> )
>
> pred.export_visuals('test_results/', file_name='sliced_small-vehicles1-seg.jpeg')
> ```

Which code have you used for segmentation with sahi?

@libazhahei

I also wrote similar code to yours, but I found that the memory usage was very high.

@jehanjen

jehanjen commented Oct 28, 2023 via email

@sweetlhare
Author

Hello everyone!
In this version, segmentation should work fine, but memory consumption is really high when there are many masks. The problem comes from SAHI's original logic. Within our development team we have now made a simplified and optimized version; as soon as we finalize it for SAHI, we will update the pull request.

@alexkutsan

I am also very interested in YOLOv8 segmentation with SAHI.

I tried the version from

pip install git+https://github.com/sweetlhare/sahi-yolov8-instance-segmentation.git@main

It processes several images and then throws an exception.

Traceback (most recent call last):
  File "/home/alex/rnd/company/tracks_tools/./segment_compare.py", line 75, in <module>
    check_detector("yolov8_sahi", yolo8_sahi_detector,
  File "/home/alex/rnd/company/tracks_tools/./segment_compare.py", line 54, in check_detector
    result = detect_func(image)
  File "/home/alex/rnd/company/tracks_tools/adssai/tracks_extraction/detectors.py", line 73, in __call__
    results = get_prediction(image, self.model)
  File "/home/alex/rnd/company/tracks_tools/venv/lib/python3.10/site-packages/sahi/predict.py", line 100, in get_prediction
    detection_model.convert_original_predictions(
  File "/home/alex/rnd/company/tracks_tools/venv/lib/python3.10/site-packages/sahi/models/base.py", line 168, in convert_original_predictions
    self._create_object_prediction_list_from_original_predictions(
  File "/home/alex/rnd/company/tracks_tools/venv/lib/python3.10/site-packages/sahi/models/yolov8.py", line 182, in _create_object_prediction_list_from_original_predictions
    object_prediction = ObjectPrediction(
  File "/home/alex/rnd/company/tracks_tools/venv/lib/python3.10/site-packages/sahi/prediction.py", line 75, in __init__
    super().__init__(
  File "/home/alex/rnd/company/tracks_tools/venv/lib/python3.10/site-packages/sahi/annotation.py", line 581, in __init__
    raise ValueError("Invalid boolean mask.")
ValueError: Invalid boolean mask.

Let me know if you need more info or more context.
I attached the video file that I am processing.

coldwater_10_sec.mp4

@jehanjen

> Hello everyone! In this version, segmentation should work fine, but memory consumption is really high when there are many masks. The problem comes from SAHI's original logic. Within our development team we have now made a simplified and optimized version; as soon as we finalize it for SAHI, we will update the pull request.

Does sahi segmentation require COCO format? Can I use just an image to segment with sahi?

@fcakyon
Contributor

fcakyon commented Nov 5, 2023

> Hello everyone! In this version, segmentation should work fine, but memory consumption is really high when there are many masks. The problem comes from SAHI's original logic. Within our development team we have now made a simplified and optimized version; as soon as we finalize it for SAHI, we will update the pull request.

Wow amazing to hear that!

Feel free to ping me once PR is ready so that I can merge it 👍

@fcakyon
Contributor

The yolov8 demo notebook should be updated with a segmentation example, and the failing tests need to be handled.

@AntonioRodriguezRuiz

Hi! Are there any updates on this?

@fcakyon
Contributor

fcakyon commented Nov 25, 2023

> Hi! Are there any updates on this?

We need @sweetlhare's cooperation to make this PR ready (fixing tests and adding/updating a demo notebook)

@gfx73

gfx73 commented Dec 4, 2023

Hi!
As far as I understand, the feature implemented here uses a segmentation model to perform detection. Am I right? I tried this code snippet and the pred value's dictionary has Nones for the masks.

> The interface is identical to the detection model:
>
> ```python
> from sahi.sahi import AutoDetectionModel
> from sahi.sahi.predict import get_prediction, get_sliced_prediction
>
> detection_model = AutoDetectionModel.from_pretrained(
>     model_type='yolov8',
>     model_path='yolov8n-seg.pt',
>     confidence_threshold=0.5,
>     device='cuda:0'  # if torch.cuda.is_available() else 'cpu'
> )
>
> pred = get_sliced_prediction(
>     'small-vehicles1.jpeg', detection_model,
>     slice_height=640, slice_width=640,
>     overlap_height_ratio=0.4, overlap_width_ratio=0.4
> )
>
> pred.export_visuals('test_results/', file_name='sliced_small-vehicles1-seg.jpeg')
> ```


@dimka11

dimka11 commented Dec 14, 2023

How can I fix `ValueError: Invalid boolean mask`?

@AndrewArchie

I was wondering if there will be a similar update for using segmentation with YoloV5? Thanks a lot!

@ozayr

ozayr commented Jan 23, 2024

Hi, how would I get the actual polygon of the segmented area? Currently only the mask seems to be available. Is this something that is there, or is it not implemented?
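
For reference, one common way to recover polygon points from a boolean mask with OpenCV (a hedged sketch, not something this PR exposes; it assumes `obj` is a sahi `ObjectPrediction` whose `mask.bool_mask` is a boolean numpy array):

```python
import cv2
import numpy as np

# Convert the boolean mask to uint8 and extract external contours as polygons.
mask_uint8 = obj.mask.bool_mask.astype(np.uint8) * 255
contours, _ = cv2.findContours(mask_uint8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygons = [c.reshape(-1, 2).tolist() for c in contours]  # each polygon: list of [x, y] points
```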

@filipemendesdev

Does anyone have a Colab notebook for this solution? I am trying to replicate it, but with no success... :(

@jehanjen

jehanjen commented Feb 7, 2024

> Hi, how would I get the actual polygon of the segmented area? Currently only the mask seems to be available. Is this something that is there, or is it not implemented?

I also need polygons from SAHI segmentation.

@fmandel98

I included the provided code for segmentation and it works partially. Unfortunately, the bounding box prediction is worse than before; it seems to be wrongly shifted. Does anyone have the same issue?

@kyra-smith

kyra-smith commented Mar 5, 2024

> Hi! As far as I understand, the feature implemented here uses a segmentation model to perform detection. Am I right? I tried this code snippet and the pred value's dictionary has Nones for the masks.
>
> The interface is identical to the detection model:
>
> ```python
> from sahi.sahi import AutoDetectionModel
> from sahi.sahi.predict import get_prediction, get_sliced_prediction
>
> detection_model = AutoDetectionModel.from_pretrained(
>     model_type='yolov8',
>     model_path='yolov8n-seg.pt',
>     confidence_threshold=0.5,
>     device='cuda:0'  # if torch.cuda.is_available() else 'cpu'
> )
>
> pred = get_sliced_prediction(
>     'small-vehicles1.jpeg', detection_model,
>     slice_height=640, slice_width=640,
>     overlap_height_ratio=0.4, overlap_width_ratio=0.4
> )
>
> pred.export_visuals('test_results/', file_name='sliced_small-vehicles1-seg.jpeg')
> ```

I'm having the same issue. Has anyone else found a solution for this?

@fcakyon
Contributor

fcakyon commented Jun 2, 2024

Closing the PR in favor of #1039.

@fcakyon closed this Jun 2, 2024