Hello!

I am using a YOLOv8 segmentation model in my iOS app and have been running into some issues. I know this repository has a working Swift implementation for extracting and post-processing the YOLO masks, so I copied its runCoreMLInference code and post-processing code exactly. However, with my model it produces a mask that is far from what I expect, and far from what I get when I run the same model in Python. I have been debugging for a couple of days now, so I wanted to reach out and see whether anyone has done something similar or notices a glaring issue in my code. One odd thing I have seen: although the image I feed in is 1920x1080, after resizing the mask is 960x960, and I am unsure why.

Thanks!

KeypointsProcessor.txt
KeypointsUtils.txt
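(On the 960x960 question: YOLO exports typically take a fixed square input, so a 1920x1080 frame is scaled, and usually letterboxed, into the model's square input space, and the mask comes back in that same square space. Here is a minimal sketch of the coordinate mapping, assuming a 960x960 model input; the helper names are made up for illustration and are not from this repo:)

```swift
import CoreGraphics

// Hypothetical helper: describes how a source frame maps into the model's
// square input space. Assumes the exported model expects 960x960 input;
// adjust `modelSide` to match your export.
struct Letterbox {
    let scale: CGFloat  // uniform scale applied to the source image
    let dx: CGFloat     // horizontal padding in model space
    let dy: CGFloat     // vertical padding in model space
}

func letterbox(source: CGSize, modelSide: CGFloat = 960) -> Letterbox {
    let scale = min(modelSide / source.width, modelSide / source.height)
    let dx = (modelSide - source.width * scale) / 2
    let dy = (modelSide - source.height * scale) / 2
    return Letterbox(scale: scale, dx: dx, dy: dy)
}

// Maps a point from model (mask) space back into source-image space.
func unletterbox(_ p: CGPoint, using box: Letterbox) -> CGPoint {
    CGPoint(x: (p.x - box.dx) / box.scale,
            y: (p.y - box.dy) / box.scale)
}
```

If the 960x960 mask is not mapped back through a transform like this before being overlaid on the 1920x1080 frame, it will look badly misaligned even when the raw model output is correct.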
I think I have already found the problem: please run your CoreML model through the Vision framework for inference, as shown in runVisionInference().
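For reference, a minimal sketch of what the Vision path looks like, assuming `mlModel` is your compiled YOLOv8 segmentation MLModel; the result handling here is illustrative only, not this repo's code:

```swift
import Vision
import CoreML

func runVisionInference(on cgImage: CGImage, mlModel: MLModel) throws {
    let vnModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: vnModel) { request, error in
        guard error == nil else { return }
        // Segmentation exports usually surface their raw tensors as
        // VNCoreMLFeatureValueObservation (box predictions + mask prototypes).
        if let observations = request.results as? [VNCoreMLFeatureValueObservation] {
            for obs in observations {
                print(obs.featureName, obs.featureValue.multiArrayValue?.shape ?? [])
            }
        }
    }
    // Vision resizes the image into the model's input for you; this option
    // must match how the model was trained/exported, or masks will shift.
    request.imageCropAndScaleOption = .scaleFill
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```

A mismatched `imageCropAndScaleOption` is worth checking first, since it silently changes the geometry the model sees.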
The CoreML-only solution in this repo was always intended as work in progress: the implementation should be correct, yet it somehow produces wrong outputs. Since the CoreML models only work correctly through the Vision implementation, I will probably remove the code path based on CoreML alone.
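For contrast, the CoreML-only path boils down to a direct prediction call, roughly like the sketch below (the input name "image" is an assumption; check your model's metadata). The key difference from Vision is that nothing preprocesses the buffer for you:

```swift
import CoreML
import CoreVideo

// Hedged sketch of a direct CoreML prediction, not this repo's exact code.
func runCoreMLPrediction(pixelBuffer: CVPixelBuffer, model: MLModel) throws -> MLFeatureProvider {
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["image": MLFeatureValue(pixelBuffer: pixelBuffer)])
    // Unlike Vision, CoreML receives the buffer as-is: any mismatch in size,
    // pixel format, or scaling is on the caller, which is a common source of
    // "the code looks correct but the outputs are wrong".
    return try model.prediction(from: input)
}
```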