Hi everyone,
I’m working on a project with a Raspberry Pi 5 where I’m using YOLOE for real-time object detection.
I also have an AI Hat (13 TOPS) hardware accelerator that I’d like to use to improve performance, but I’m not sure how to properly integrate it with YOLOE on the Raspberry Pi.
Does anyone have experience with this, or can anyone point me in the right direction on how to configure YOLOE to leverage the accelerator?
I’ve had a look around and haven’t found anything on this so far, so you may be one of the first people trying to do it. It’s a logical next step though, so I assume others are working on it too.
First things first, the AI HAT has its own workflows and processes needed to operate it, and we have a guide on getting that going. If you work through this guide, at the end you will have all the machinery to take a model in the .HEF format (the model format the HAT needs) and run it. All you need to do is convert the YOLOE model into this format.
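As a quick sanity check once that guide's setup is done, HailoRT's command-line tool can confirm the HAT is visible and load a compiled model. A rough sketch (the .HEF filename here is just a placeholder for whatever model you end up with):

```shell
# Confirm the Hailo device on the AI HAT is detected over PCIe
hailortcli scan

# Run a compiled model on the accelerator with generated input data --
# handy for checking the .HEF loads at all and for measuring raw FPS
hailortcli run model.hef
```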
To do so, you would need to start by getting a YOLOE model you like. @ahsrab292840 is on the right track here, and we also have a guide on getting YOLOE going on the Pi. At the end of that guide you will have a YOLOE model in the .ONNX format.
That works out nicely, because Hailo’s conversion process takes a .ONNX model and converts it to .HEF! Luke Ditria has some fantastic and practical videos on using the AI HAT, and I would recommend his video on training a custom model for the AI HAT. I’ve timestamped it to the relevant section, as you can skip the first part where he is just getting the .ONNX model (which we already have).
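For reference, the Dataflow Compiler side of that conversion runs on a separate x86 Linux machine (not the Pi) and roughly follows parse, optimise, compile. A sketch under those assumptions — all the file names and the calibration folder are placeholders you'd adapt from Hailo's documentation, and the architecture flag is hailo8l because the 13 TOPS HAT uses the Hailo-8L chip:

```shell
# 1. Parse the ONNX model into Hailo's internal .har representation
hailo parser onnx yoloe.onnx --hw-arch hailo8l

# 2. Quantise/optimise using a folder of representative calibration images
hailo optimize yoloe.har --hw-arch hailo8l --calib-set-path calib_images/

# 3. Compile the optimised model down to a .HEF for the AI HAT
hailo compiler yoloe_optimized.har --hw-arch hailo8l
```

The optimisation step is where accuracy can drift, so it is worth comparing the .ONNX and .HEF outputs on a few test images before trusting the converted model.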
This should give you most of the resources you need to try and get a YOLOE model going on the AI HAT. Again, you may be one of the first people attempting to do this, so let us know if you get it going!