Pi Zero W 2 + Camera 3 streaming issue

Hi all.

I have a Pi Zero W 2 running Bookworm + a Camera Module 3 Wide. I can get the Pi to stream to VLC using the shell command:
rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o tcp://192.168.0.100:8554?listen=1
and in VLC tcp://192.168.0.100:8554
But the resolution is the default 640 x 480.

I then tried:
rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o --width 1920 --height 1080 tcp://192.168.0.100:8554?listen=1

…and while the Pi sits there listening, VLC fails with … Your input can’t be opened: VLC is unable to open the MRL.

Illumination would be appreciated!

Hi @Mark285907

You’re almost on the right track; the width and height need to come before the -o. The line below should get you sorted.

rpicam-vid -t 0 -n --codec libav --libav-format mpegts --width 1920 --height 1080 -o tcp://192.168.0.100:8554?listen=1

Make the edit according to Dan’s suggestion. If VLC keeps failing, try using FFmpeg to test the stream reception:

ffplay tcp://192.168.0.100:8554


Thanks Dan. That works.


Hey @Mark285907

No worries at all, glad you got that one sorted!


One step sorted. Now trying (and failing) to get Synology Surveillance Station to recognise the camera.

I’ve created a camera config file (.xlsx) for Synology Surveillance Station, which it accepts, but it then fails to add the camera, with no reason given in the log.

When I manually add the camera instead, Synology fails at the authentication step. This makes me think there’s a security setting on the Pi itself, or in rpicam, that I need to change to allow access to the stream?

I can access and view the stream in VLC no dramas.

rpicam-vid -t 0 -n --codec libav --libav-format mpegts --width 1280 --height 720 --framerate 4 -o - | cvlc stream:///dev/stdin --sout '#rtp{sdp=rtsp://192.168.0.100:8554/stream1}'

Hey @Mark285907

Surveillance Station expects to connect to a proper RTSP server, not a one-shot stream. Have a look at something like GitHub - bluenviron/mediamtx: Ready-to-use SRT / WebRTC / RTSP / RTMP / LL-HLS media server and media proxy that allows to read, publish, proxy, record and playback video and audio streams. It should let you run the camera stream as an RTSP server that Surveillance Station can connect to.
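For reference, a minimal mediamtx.yml for this could look like the sketch below (assuming a recent MediaMTX build for the Pi, which includes a built-in Raspberry Pi camera source; the path name cam is just an example):

```yaml
paths:
  cam:
    # Built-in Raspberry Pi camera source shipped with the
    # official MediaMTX builds for Raspberry Pi OS.
    source: rpiCamera
```

Clients such as VLC or Surveillance Station would then connect to rtsp://&lt;pi-ip&gt;:8554/cam.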


Thanks Dan.

Copilot was pushing me down that road too but I thought I’d exhaust rpicam first.

Re " Surveillance Station expects to connect to a proper RTSP server, not a one-shot stream.", where’d you get this from? If you don’t mind me asking??

Hey @Mark285907

Maybe not my best choice of words; an analogy may serve better.

Think of Surveillance Station as a TV and your Pi as a TV channel:

  • A one-shot stream is like playing a video once on a speaker — the TV can’t “tune in” because there’s no channel, just raw sound.
  • A proper RTSP server is like a TV broadcast station: it’s always transmitting on a known frequency (port), and the TV can tune in anytime, request info, pause/resume, and so on.

Dan et al.

I’ve done Pi updates, installed MediaMTX (v1.2.1_linux_armv7) on the Pi, and edited the mediamtx.yml file to add the path to the camera.

rtsp and port 8554 were already set up.

I figured I’d get the camera streaming to VLC first and then, once proven, get Synology SS online.

When I open the stream in VLC I get one of two contradictory errors in MediaMTX.

With rtsp://192.168.0.100:8554/cam I get the error “path of a SETUP request must end with a slash. This typically happens when VLC fails a request, and then switches to an unsupported RTSP dialect”.

Doing the same but with the trailing “/”, rtsp://192.168.0.100:8554/cam/, I get the error “[RTSP] [conn 192.168.0.103:13781] closed: invalid path name: can’t end with a slash (cam/)”.

No doubt it’s something simple, but I can’t see it, and Copilot just sends me round in circles.

Hey @Mark285907

You’re definitely on the right track. It looks like the issue is related to how MediaMTX interprets RTSP paths and how clients like VLC handle them.

In your mediamtx.yml, you’ve defined this:

paths:
  cam:
    source: rtsp://rpiCamera

That tells MediaMTX to serve the stream at:

rtsp://<your-pi-ip>:8554/cam

However, you’re hitting errors depending on how the URL is formatted:

  • rtsp://192.168.0.100:8554/cam → “path of a SETUP request must end with a slash.”
  • rtsp://192.168.0.100:8554/cam/ → “invalid path name: can’t end with a slash.”

This happens because VLC occasionally retries failed requests with a modified path (including a slash), which MediaMTX sees as an invalid path.

  1. Use the exact RTSP URL in VLC:
rtsp://192.168.0.100:8554/cam

Make sure VLC doesn’t auto-append a slash — it can sometimes do this on retry if it thinks the initial request failed.

  2. Test the camera source directly on the Pi:

Make sure the rtsp://rpiCamera source works locally. Use something like:

vlc rtsp://rpiCamera

or

ffplay rtsp://rpiCamera

If that fails, MediaMTX has no stream to forward — and that’s the real issue.


Thanks Dan.

Re your item 2, I can’t get it to stream locally.

Just to be clear… I’m running mediamtx in one terminal window and, while that’s waiting, I’ve opened a second terminal window and run your two suggested commands.


Hi @Mark285907

Thanks for confirming that — the fact that you can’t stream rtsp://rpiCamera locally means MediaMTX isn’t receiving anything to work with, which definitely explains the behavior.

Let’s troubleshoot the RTSP source directly:


1. Confirm the RTSP camera is actually running

If you’re using libcamera, ffmpeg, or another script to generate the stream:

  • Is the RTSP camera server process running?
  • Are there any errors output to its terminal?

For example, if you’re using v4l2rtspserver, rpicam-vid, or an ffmpeg stream, you’d need to confirm that it’s bound to the right port (e.g., 8554) and is publishing the path rpiCamera.


2. Try accessing directly via localhost

Try running:

vlc rtsp://localhost:8554/rpiCamera

or

ffplay rtsp://localhost:8554/rpiCamera

If that also fails, then the issue is either:

  • The camera stream server is not running, or
  • It’s running but not on that path, or
  • The service is bound to a different port (not 8554)
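To quickly rule out the “bound to a different port” case, a tiny Python probe can confirm whether anything is listening on the RTSP port at all (a sketch using only the standard library; the host and port are just the values from this thread):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a plain TCP connect; True means something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the Pi itself: is the RTSP port accepting connections?
print(tcp_port_open("127.0.0.1", 8554))
```

If this prints False while MediaMTX is supposedly running, the server never bound the port (or bound a different one), which narrows the problem considerably.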

3. Check MediaMTX logs

In the terminal running MediaMTX, check if it logs something like:

source 'rpiCamera' could not be read

Or anything about failed connection attempts to the upstream source.

Working!

There was some simple thing in Dan’s advice above that I (a Pi noob) couldn’t get, and so I could not get things working. I’m not sure, but it may have been not understanding that the stream AND MediaMTX had to be running at the same time.

YouTube and Copilot just served to confuse things for me but ChatGPT seemed to know what to do, only making a few small path errors.

I ended up making an executable script that starts MediaMTX and then starts an RTSP stream from rpicam to MediaMTX, and then set that up as a systemd service file.
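For anyone following along, a systemd unit of that shape might look like the sketch below (the user name, script path, and unit name are all hypothetical; adjust them to your setup):

```ini
[Unit]
Description=Pi camera RTSP stream (MediaMTX + rpicam-vid)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=pi
# Hypothetical wrapper script that starts MediaMTX, then the rpicam stream
ExecStart=/home/pi/start-camera-stream.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Saved as, e.g., /etc/systemd/system/camera-stream.service, it would be enabled with sudo systemctl enable --now camera-stream.service.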

The trick in Synology Surveillance Station is to manually add an IP camera and set the camera brand to “User Defined”, which then allows the selection of “Streaming - RTSP”. Put the path in, set the resolution and… it works!

The Zero W 2 + Camera 3 is called “Test”.

I’ll drop the frame rate to something sensible, like 4.


Hi @Mark285907,

Congratulations on getting the project up and running, it looks great!


I’ve got the Pi Zero W 2 running a Python script that captures the video using rpicam-vid, publishes the RTSP stream with ffmpeg, and serves the stream to Synology SS with MediaMTX.

The video has no name/time/date stamp which is a problem as Synology SS can’t add it when storing the vid.

I can’t use “annotate” as that command has been removed from rpicam-apps.

Is there another way to do this?

Cheers.


Hi @Mark285907,

You’re right, the annotate option was part of the legacy raspivid, and it’s not available in the newer rpicam-apps like rpicam-vid. But you can still overlay text (like timestamp and name) onto the video stream, just in a different way.

Since you’re already using ffmpeg to publish the RTSP stream, the cleanest method is to use ffmpeg’s drawtext filter to overlay a timestamp or custom text.

Here’s how:

Modify your ffmpeg command to include a drawtext filter. Example:

ffmpeg -f v4l2 -i /dev/video0 \
  -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf: \
        text='%{localtime\:%Y-%m-%d %H\\\\:%M\\\\:%S}': \
        fontcolor=white: fontsize=24: box=1: boxcolor=black@0.5: \
        x=10: y=10" \
  -f rtsp rtsp://localhost:8554/stream

This will overlay the local time in the top-left corner. Adjust font, color, size, and position as needed.

You can also hardcode a label (e.g., “PiZero Cam”):

text='PiZero Cam %{localtime\:%Y-%m-%d %H\\\\:%M\\\\:%S}'

Note:

  • The extra backslashes (\\\\:) are needed because the string is parsed more than once: by the shell or script, and again by the drawtext filter, where a bare “:” separates options.
  • Make sure the font path exists, or choose another (fc-list will show fonts available).

Let us know how that works out for your project!


Thanks Ryan.

I’m getting ffmpeg errors, which could well be down to a format error on my part.

import subprocess
import time

# Start MediaMTX
mediamtx_proc = subprocess.Popen(
    ["/usr/local/bin/mediamtx", "/home/morchard/mediamtx.yml"]
)

# Delay to allow MediaMTX to bind the RTSP port
time.sleep(2)

# Start the rpicam stream piped into FFmpeg
rpicam_cmd = [
    "rpicam-vid",
    "-t", "0",
    "--inline",
    "--width", "1920",
    "--height", "1080",
    "--framerate", "30",
    "-o", "-"
]

ffmpeg_cmd = [
    "ffmpeg",
    "-fflags", "+genpts",
    "-analyzeduration", "10000000",
    "-probesize", "5000000",
    "-re",
    "-f", "h264",
    "-i", "-",
    "-c", "copy",
    "-vf", "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf: \
             text='PiZeroW2 Cam %{localtime\:%Y-%m-%d %H\\\\:%M\\\\:%S}': \
             fontcolor=white: fontsize=24: box=1: boxcolor=black@0.5: \
             x=10: y=10",
    "-f", "rtsp",
    "rtsp://localhost:8554/cam"
]

# Launch rpicam-vid piped into ffmpeg
rpicam_proc = subprocess.Popen(rpicam_cmd, stdout=subprocess.PIPE)
ffmpeg_proc = subprocess.Popen(ffmpeg_cmd, stdin=rpicam_proc.stdout)

# Wait for subprocesses to exit
ffmpeg_proc.wait()
rpicam_proc.wait()
mediamtx_proc.terminate()



Hey @Mark285907,

Thanks for the code snippet, which helps in figuring out what’s going on!

So I think the only way around this is to re-encode the video; that lets ffmpeg actually process the frames and add the text overlay before sending the stream out. The catch is that re-encoding is more CPU-heavy, so depending on your setup it might introduce a bit of lag or a drop in frame rate.

Try removing -c copy and instead tell ffmpeg to re-encode the video so it can overlay the text.

Test your command with this:

ffmpeg_cmd = [
    "ffmpeg",
    "-fflags", "+genpts",
    "-analyzeduration", "10000000",
    "-probesize", "5000000",
    "-re",
    "-f", "h264",
    "-i", "-",
    "-vf", "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:"
           " text='PiZeroW2 Cam %{localtime\\:%Y-%m-%d %H\\\\:%M\\\\:%S}':"
           " fontcolor=white: fontsize=24: box=1: boxcolor=black@0.5:"
           " x=10: y=10",
    "-c:v", "libx264",
    "-preset", "veryfast",
    "-f", "rtsp",
    "rtsp://localhost:8554/cam"
]

So it’s pretty similar to what you’ve already got; this command just tells ffmpeg to decode the input, add the overlay, then re-encode it with libx264. The veryfast preset should help keep CPU usage manageable.

Give it a try and see how your setup handles the performance.
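If it helps to double-check the escaping, the same command list can be assembled with the drawtext filter built as a separate string, which makes the backslash layers easier to inspect (a sketch mirroring the command above; the label and font path are just examples):

```python
# Build the drawtext filter as one string so the escaping is easy to inspect.
# Inside drawtext, ':' separates options, so the colon after "localtime" is
# escaped once, and the colons inside the strftime format are escaped twice.
font = "/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf"
drawtext = (
    f"drawtext=fontfile={font}:"
    "text='PiZeroW2 Cam %{localtime\\:%Y-%m-%d %H\\\\:%M\\\\:%S}':"
    "fontcolor=white:fontsize=24:box=1:boxcolor=black@0.5:x=10:y=10"
)

# Note: "-c copy" is gone; re-encoding with libx264 is what allows the
# overlay filter to run at all.
ffmpeg_cmd = [
    "ffmpeg",
    "-fflags", "+genpts",
    "-f", "h264",
    "-i", "-",
    "-vf", drawtext,
    "-c:v", "libx264",
    "-preset", "veryfast",
    "-f", "rtsp",
    "rtsp://localhost:8554/cam",
]
```

Printing drawtext before launching ffmpeg is a quick way to confirm the filter string survived the shell and Python parsing layers intact.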