Multiple false detections for YOLO using Conda and Ultralytics

I bought a Pi 5 and a Pi Camera Module 3 to get started with YOLO object detection, and I followed Core’s most recent tutorial:

https://core-electronics.com.au/guides/how-to-set-up-yolo-computer-vision-on-a-raspberry-pi-conda-and-ultralytics/

No detections occur at all using the published “yoloe-11s-seg.pt” model, so I tried “yolov8n.pt” and “yolo11n.pt” instead.

Both 8n and 11n detect objects and draw annotated bounding boxes pretty well, but both frequently suffer from the “multiple false detections at the top of the screen” issue described here:

https://stackoverflow.com/questions/78820748/a-lot-of-incorrect-detection-using-yolov8

I fully understand that YOLO will occasionally produce false detections, but this is clearly something more: a relatively serious bug.

I tried both solutions posted on the StackOverflow page, starting from a clean distro image with the recommended version numbers and following the Core tutorial, but they either didn’t solve the issue or produced an error that cv2.imshow() was not implemented. I tried following other forum posts to resolve the cv2.imshow() error by installing GTK via Conda, but that got caught up in unresolvable dependencies.
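(Side note for anyone hitting the same cv2.imshow() error: as far as I can tell it usually means the installed OpenCV build is headless, i.e. compiled without a GUI backend. A quick check, just a sketch using nothing beyond cv2 itself:)

import cv2

# Look at the "GUI" section of the build info: if no GTK/QT backend is listed,
# this cv2 build is headless and cv2.imshow() will always raise "not implemented"
print(cv2.getBuildInformation())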

I’ve also tried the alternative/backup method listed at the end of the Core guide, but it also suffers from the multiple false detections issue.

Having spent multiple days on this so far, I’m just wondering how best to proceed…

Can anyone recommend a solution to the multiple false detection issue?

Many thanks

Dan

Hey @Daniel70781,

Cheers for doing a bit of digging and for the write-up, it makes this all a bit easier.

I can’t say I’ve encountered that bug recently. I hit it when working on the old guides while doing something wacky like feeding in the wrong resolution, but it shouldn’t be cropping up here.

The test code in that Conda guide is more or less just checking that the install works. If you want to run something YOLO11- or YOLOv8-based, I would try giving our guides for those a go (ignoring the installation process, as you already have it set up). The code for running those YOLO models is a bit different, and the guide you linked was more YOLOE focused. Here is the code from the guide you should be focusing on:

import cv2
from picamera2 import Picamera2
from ultralytics import YOLO

# Set up the camera with Picam
picam2 = Picamera2()
picam2.preview_configuration.main.size = (1280, 1280)
picam2.preview_configuration.main.format = "RGB888"
picam2.preview_configuration.align()
picam2.configure("preview")
picam2.start()

# Load YOLOv8
model = YOLO("yolov8n.pt")

while True:
    # Capture a frame from the camera
    frame = picam2.capture_array()
    
    # Run YOLO model on the captured frame and store the results
    results = model(frame)
    
    # Output the visual detection data, we will draw this on our camera preview window
    annotated_frame = results[0].plot()
    
    # Get inference time
    inference_time = results[0].speed['inference']
    fps = 1000 / inference_time  # Inference time is reported in milliseconds, so 1000 / ms = FPS
    text = f'FPS: {fps:.1f}'

    # Define font and position
    font = cv2.FONT_HERSHEY_SIMPLEX
    text_size = cv2.getTextSize(text, font, 1, 2)[0]
    text_x = annotated_frame.shape[1] - text_size[0] - 10  # 10 pixels from the right
    text_y = text_size[1] + 10  # 10 pixels from the top

    # Draw the text on the annotated frame
    cv2.putText(annotated_frame, text, (text_x, text_y), font, 1, (255, 255, 255), 2, cv2.LINE_AA)

    # Display the resulting frame
    cv2.imshow("Camera", annotated_frame)

    # Exit the program if q is pressed
    if cv2.waitKey(1) == ord("q"):
        break

# Close all windows
cv2.destroyAllWindows()
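One more knob worth trying if any stray low-confidence boxes still show up (this is just a standard Ultralytics predict argument, not something specific to our guide): raise the confidence threshold when you run the model, e.g. swap the inference line in the loop above for:

    # Only keep detections above 50% confidence (the Ultralytics default is 0.25)
    results = model(frame, conf=0.5)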

If you wish to try the YOLOE model, follow along with the guide specifically for that as well. There are a few steps needed to get it going.
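The short version, going from memory of the Ultralytics docs (treat this as a rough sketch and follow the YOLOE guide for the exact steps), is that the text-prompted YOLOE models need to be given a list of class names before they will detect anything, which would explain seeing no detections at all with yoloe-11s-seg.pt:

from ultralytics import YOLOE

# Sketch only: text-prompted YOLOE needs class prompts set before inference,
# otherwise it has nothing to look for and returns no detections
model = YOLOE("yoloe-11s-seg.pt")
names = ["person", "cup", "keyboard"]  # whatever you want it to find
model.set_classes(names, model.get_text_pe(names))

results = model("test.jpg")  # or a frame from picamera2, as in the loop above
results[0].show()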


Thanks Jaryd

I tried a few different things and ended up with success by creating a virtual env on Bookworm 64-bit per the guide, then running:
sudo apt install python3-opencv (this installed 4.11.0)
pip install ultralytics[export]==8.3.40

Then I used your Python code above and finally got good performance with the v8n and 11n models, without the dreaded multiple false detections.

My observation is that some of the pip install dependency dramas relate to the [export] extras on higher ultralytics version numbers. Some online guidance I saw suggested that the components specific to [export] are not typically needed anyway.

Anyway, this newbie has landed on a working baseline. It can be improved upon, but at least it’s a baseline. As I learn more I hope to get more recent versions working, probably using Conda.

So from here I’ll repeat the whole process, meticulously recording each step, and update my post here with detailed steps in case it’s useful to anyone else.
Thanks for your assistance.


@Daniel70781,

Glad to see you got it working!

Yeah, the newer versions of the Ultralytics package have so many dependencies required for installation that pip can’t seem to resolve the correct ordering of them. Hoping that Ultralytics fixes this soon and we can have a simple install again.


OK, I’ve wrapped up my tests and am leaving a final note in case it’s useful to others. To run YOLO (not YOLOE) without the multiple false detections issue:

Install the 64-bit version of Bookworm (listed as a legacy version under ‘Other’ in the Raspberry Pi Imager)
Follow Core Electronics’ guide instructions here https://core-electronics.com.au/guides/raspberry-pi/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/
Except instead of pip install ultralytics[export], use
pip install ultralytics[export]==8.3.40

I found that ultralytics versions around 8.3.100 or above caused a ‘resolution-too-deep’ error in pip, so until Ultralytics have resolved this, at least I can use 8.3.40 as a repeatable go-to.
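A quick way to confirm the versions actually being picked up inside the venv (just a sanity check):

import cv2
import ultralytics

# With the setup above these should report 4.11.0 and 8.3.40 respectively
print("OpenCV:", cv2.__version__)
print("Ultralytics:", ultralytics.__version__)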
