Monday, January 7, 2013

FFmpeg/VLC command lines (Linux)

Write live video to an mp4 file:

ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 test.mp4

With sound:
ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -f alsa -i hw:0 -f mp4 test3.mp4

Write to a raw .264 file:
ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -vcodec libx264 -f h264 test.264

With access unit delimiters:
ffmpeg -f video4linux2 -s cif -i /dev/video0 -x264opts slice-max-size=1400:vbv-maxrate=512:vbv-bufsize=200:fps=15:aud=1 -vcodec libx264 -f h264 rtp_aud_.264

Convert raw .264 to mp4:
ffmpeg -i test.264 test_convert.mp4

Pipe into another process:
ffmpeg -f video4linux2 -s cif -i /dev/video0 -x264opts slice-max-size=1400:vbv-maxrate=512:vbv-bufsize=200:fps=15 -vcodec libx264 -f h264 pipe:1 | ./pipe_test
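Here `./pipe_test` is whatever program consumes the H.264 byte stream on stdin. As a sanity check, a minimal stand-in (a hypothetical Python script, not part of the original post) can count the NAL units arriving on the pipe by scanning for Annex B start codes:

```python
import sys

def count_nal_units(data: bytes) -> int:
    """Count NAL units by locating Annex B start codes.

    Both the 3-byte (00 00 01) and 4-byte (00 00 00 01) start codes
    contain the 3-byte sequence, so searching for it counts each NALU
    exactly once; emulation prevention guarantees the sequence cannot
    occur inside a NAL unit's payload.
    """
    count = 0
    pos = data.find(b"\x00\x00\x01")
    while pos != -1:
        count += 1
        pos = data.find(b"\x00\x00\x01", pos + 3)
    return count

def main() -> None:
    # Usage: ffmpeg ... -f h264 pipe:1 | python3 count_nals.py
    print(count_nal_units(sys.stdin.buffer.read()), "NAL units")
```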

Send over RTP and print the generated SDP on stdout:
ffmpeg -f video4linux2 -s cif -i /dev/video0 -x264opts slice-max-size=1400:vbv-maxrate=512:vbv-bufsize=200:fps=15 -vcodec libx264 -f rtp rtp://


o=- 0 0 IN IP4
s=No Name
c=IN IP4
t=0 0
a=tool:libavformat 54.17.100
m=video 49170 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1
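The `m=`, `a=rtpmap`, and `a=fmtp` lines are what a receiver needs to configure itself. As an illustration (a hypothetical Python snippet, not from the original post), the payload type and clock rate can be extracted from such an SDP like this:

```python
def parse_rtpmap(sdp: str):
    """Return (payload_type, codec, clock_rate) from the first a=rtpmap
    line, or None if the SDP contains no rtpmap attribute."""
    for line in sdp.splitlines():
        if line.startswith("a=rtpmap:"):
            pt, encoding = line[len("a=rtpmap:"):].split(" ", 1)
            codec, clock = encoding.split("/")[:2]
            return int(pt), codec, int(clock)
    return None

sdp = "m=video 49170 RTP/AVP 96\na=rtpmap:96 H264/90000\na=fmtp:96 packetization-mode=1"
print(parse_rtpmap(sdp))  # (96, 'H264', 90000)
```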

Low-delay RTP:
ffmpeg -f video4linux2 -s cif -i /dev/video0 -x264opts slice-max-size=1400:vbv-maxrate=100:vbv-bufsize=200:fps=25 -vcodec libx264 -tune zerolatency -f rtp rtp:

The stream can be played with VLC (where test.sdp should contain the generated SDP):
./vlc -vvv test.sdp

Using VLC to stream over RTP and display the live stream at the same time:
./vlc -vvv v4l2:///dev/video0 :v4l2-standard= :v4l2-dev=/dev/video0 --v4l2-width=352 --v4l2-height=288 --sout-x264-tune zerolatency --sout-x264-bframes 0 --sout-x264-aud --sout-x264-vbv-maxrate=1000 --sout-x264-vbv-bufsize=512 --sout-x264-slice-max-size=1460 --sout '#duplicate{dst="transcode{vcodec=h264,vb=384,scale=0.75}:rtp{dst=,port=49170}",dst=display}'

However, this does not seem to generate the SPS and PPS.

--sout-x264-options repeat-headers=1 is necessary to repeat the SPS and PPS in the stream.

Low-delay capture with VLC:
./vlc --live-caching 0 --sout-rtp-caching 0 -vvv v4l2:///dev/video1 :v4l2-standard= :v4l2-dev=/dev/video1 --v4l2-width=352 --v4l2-height=288 --sout-x264-tune zerolatency --sout-x264-bframes 0 --sout-x264-options repeat-headers=1 --sout-x264-aud --sout-x264-vbv-maxrate=1000 --sout-x264-vbv-bufsize=512 --sout-x264-slice-max-size=1460 --sout '#duplicate{dst="transcode{vcodec=h264,vb=384,scale=0.75}:rtp{dst=,port=49170}",dst=display}'



  2. Hi Ralf, I found you here through your Stack Overflow profile. I know you are an expert in multimedia, and I have a question I really hope you can help me with. I am writing an app that buffers a raw H.264 stream from a remote camera and re-sends it as RTP packets; my problem is that I have no idea how to set the timestamp field in the RTP header. I've googled it for some days but was not able to find a clear answer. Do I need to parse the H.264 stream to get the sample time from the camera, or simply set the timestamp according to the time I made the RTP packet? My email is, and I really hope that you can reply, with many thanks!

  3. Hi, here's my $0.02.
    There are no timestamps in the raw H.264 stream. Like you said, you can just convert the current time to an RTP timestamp; this, for example, is what the live555 RTP/RTSP library does. Don't forget that RTP timestamps have a random starting point, and send the mapping of the RTP timestamp to the wall-clock date/time using RTCP SR reports. Another thing you need to take into account is to use a 90 kHz clock for your RTP timestamps. It's a good idea to read the relevant sections in the RFCs. Hope that helps.
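The advice above (90 kHz clock, random starting point, 32-bit wraparound, with an RTCP Sender Report carrying the RTP-to-wall-clock mapping) can be sketched roughly as follows; `RtpTimestamper` is a hypothetical helper, not part of any library mentioned here:

```python
import random

RTP_CLOCK_HZ = 90_000  # clock rate for H.264 over RTP

class RtpTimestamper:
    """Map wall-clock capture times to 32-bit RTP timestamps."""

    def __init__(self, clock_hz: int = RTP_CLOCK_HZ):
        self.clock_hz = clock_hz
        self.offset = random.getrandbits(32)  # random starting point (RFC 3550)
        self.epoch = None                     # capture time of the first packet

    def timestamp(self, capture_time: float) -> int:
        if self.epoch is None:
            self.epoch = capture_time
        ticks = round((capture_time - self.epoch) * self.clock_hz)
        return (self.offset + ticks) & 0xFFFFFFFF  # timestamps wrap at 2**32

ts = RtpTimestamper()
a = ts.timestamp(0.0)
b = ts.timestamp(0.2)        # 0.2 s later -> 18000 ticks at 90 kHz
print((b - a) % (1 << 32))   # 18000
```

The random offset means the absolute timestamp values carry no meaning on their own; the receiver recovers wall-clock time from the (RTP timestamp, NTP time) pair in RTCP Sender Reports.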

    1. I am so glad that you replied this soon. I've heard that RTCP is optional (is it?), and since my RTSP server and client are on the same host, my first attempt is not to implement RTCP. I see in RFC 2326 that the "RTP-Info" header field allows me to map the first timestamp to the first RTP packet's sequence number. If anything I assumed here is wrong, please tell me, as I am still not able to play back a single frame. Thanks!

    2. UPDATE: I just saw some rendering results. Oddly, the video gradually fades in a single frame from total black; it took about 20 minutes before the picture became distinguishable, and since the camera is standing still, it is the correct picture from the camera's perspective. Can you think of anything that could be wrong?

    3. Did you retrieve the parameter sets from the SDP and initialise the decoder with them? If they are not in the SDP but in-band, you'll have to wait until you receive them in the RTP stream, and only then can you initialise the decoder appropriately.

      Another thing you can do to debug your issue is to make sure that your reconstructed H.264 stream is valid: write the received NAL units to file in the Annex B bitstream format (i.e. prepend the appropriate start code to each NAL unit), and then you can use tools like ffmpeg to, e.g., convert the raw .264 to mp4 and view the received video.
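The Annex B dump suggested above can be sketched like this (a hypothetical helper, assuming the NAL units have already been depacketized from RTP):

```python
START_CODE = b"\x00\x00\x00\x01"  # 4-byte Annex B start code

def write_annex_b(nal_units, path):
    """Write NAL units to a file in Annex B format by prepending a start
    code to each one. The result can then be inspected with ffmpeg,
    e.g. `ffmpeg -i dump.264 dump.mp4`."""
    with open(path, "wb") as f:
        for nalu in nal_units:
            f.write(START_CODE)
            f.write(nalu)
```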

    4. Thanks Ralf! I finally made it :)
