Dump all video frames
$ ffmpeg -i input.mkv -an -qscale 1 %06d.jpg
Cat video frames in reverse order to FFmpeg as input
$ cat $(ls -t *jpg) | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - -i backwards.wav \
    -vcodec libx264 -vpre slow -crf 20 -threads 0 -acodec flac output.mkv
via ffmpeg – encode video in reverse? – Stack Overflow.
The above ffmpeg command examples turned out to be very useful. Previously I had to do this manually in Avidemux.
The output of my IP camera is VGA 640×480 and I needed to slice that up into a 10×10 array of little areas (100 jpeg files) using the following:
convert input.jpg -crop 64x48 +repage +adjoin myoutputfile_%02d.jpg
Motion detection produces a lot of false positives which otherwise must be filtered out manually. In order to automate this, I compare time n to time n+1 in only a couple of the 100 little jpegs separated out by the above command. So far I’m using this:
compare -metric MAE time_n_number.jpg time_n+1_number.jpg null: 2>&1
A changed portion of the jpeg will generate a high number which can be compared against a threshold in a script, allowing me to eliminate almost all false-positive motion detects.
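A minimal bash sketch of that thresholding step might look like the following. The threshold value, tile filenames, and helper names are my own assumptions for illustration, not taken from the actual script:

```shell
#!/bin/bash
# Hypothetical sketch of the false-positive filter; the threshold and
# file names are made up for illustration.

THRESHOLD=500   # tune empirically for your camera and tile size

# Wrap the ImageMagick call from the post: print the integer MAE
# between two tiles (compare writes the metric to stderr).
mae_between() {
    compare -metric MAE "$1" "$2" null: 2>&1 | awk '{print int($1)}'
}

# True when a metric value exceeds the threshold, i.e. the tile changed.
exceeds_threshold() {
    [ "$1" -gt "$THRESHOLD" ]
}

# In the real script this value would come from mae_between on two
# consecutive tiles; a hard-coded value keeps the sketch self-contained.
sample_mae=1234
if exceeds_threshold "$sample_mae"; then
    echo "tile changed: keep the motion event"
else
    echo "tile unchanged: discard as false positive"
fi
```

Running the thresholding on just a few "quiet" tiles per frame keeps the per-event cost low.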
More detailed explanation for the Image compare commands in the ImageMagick package can be found here:
ImageMagick v6 Examples – Image Comparing
Therefore my entire process consists of separating the video into jpegs, checking for changes in areas where there shouldn’t be any, and, if none are found, reassembling the jpegs back into a video file, all done automatically via bash scripting.
There probably are more elegant solutions but this works for now.
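Put together, the process can be outlined as a bash skeleton. The file names and working directory below are placeholders, and the real commands are left as comments since paths and camera-specific parameters will differ:

```shell
#!/bin/bash
# Skeleton of the automated pipeline described above. Names are
# placeholders; the heavy commands are shown as comments so the
# outline itself runs anywhere.
set -u
video=input.mkv
workdir=$(mktemp -d)

step() { echo "[step] $*"; }

step "split $video into jpegs"
# ffmpeg -i "$video" -an -qscale 1 "$workdir/%06d.jpg"

step "tile each 640x480 frame into 100 sub-images"
# for f in "$workdir"/*.jpg; do
#     convert "$f" -crop 64x48 +repage +adjoin "${f%.jpg}_%02d.jpg"
# done

step "compare quiet tiles between consecutive frames"
# compare -metric MAE tile_n.jpg tile_n_plus_1.jpg null: 2>&1

step "reassemble frames that survive the filter"
# cat "$workdir"/*.jpg | ffmpeg -f image2pipe -vcodec mjpeg -r 25 -i - rebuilt.mkv
```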
There are two methods within ffmpeg that can be used to concatenate files of the same type: the concat “demuxer” and the concat “protocol”. The demuxer is more flexible – it requires the same codecs, but different container formats can be used; and it can be used with any container formats, while the protocol only works with a select few containers. However, the concat protocol is available in older versions of ffmpeg, where the demuxer isn’t.
via Concatenate – FFmpeg.
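For contrast with the demuxer, the concat protocol takes a single pipe-separated pseudo-file as its input, and it only works with byte-concatenable formats such as MPEG-TS. A small sketch of building that input string, with made-up segment names:

```shell
# Build the concat-protocol input string from a list of segments.
# Segment names here are hypothetical; the protocol needs formats
# that can be joined at the byte level, e.g. MPEG-TS.
parts=(seg1.ts seg2.ts seg3.ts)
input="concat:$(IFS='|'; echo "${parts[*]}")"
echo "$input"

# The actual merge (requires ffmpeg and real .ts files) would then be:
# ffmpeg -i "$input" -c copy output.ts
```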
I needed a way to concatenate mp4 files generated by all the IP cameras connected to the open wifi. I tried compiling MP4Joiner, but there were far too many dodgy dependencies throwing errors during the compile. Then I read that plain ffmpeg, a package easily installed on a Linux box, can merge mp4 files, and it works. Unfortunately the concat demuxer only ships with later Fedora releases, Fedora 19 and above, but it’s still easier than manually merging them in Avidemux. My main server still runs Fedora 14 for many reasons, so for now merging simply requires running the command manually in a Fedora 19 or later virtual machine. When I eventually migrate to a later release this can all be scripted seamlessly.
Here’s more as to how it’s done in ffmpeg…
Create a file mylist.txt with all the files you want to have concatenated in the following form (lines starting with a # are ignored):
# this is a comment
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
Note that these can be either relative or absolute paths. Then you can stream copy or re-encode your files:
ffmpeg -f concat -i mylist.txt -c copy output
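Writing mylist.txt by hand gets tedious with many camera files, so a small loop can generate it. The directory and camera file names below are made up for illustration:

```shell
# Build mylist.txt for the concat demuxer from every .mp4 in a directory.
# Directory and file names are placeholders.
dir=$(mktemp -d)
touch "$dir/cam1.mp4" "$dir/cam2.mp4"   # stand-ins for real camera files

for f in "$dir"/*.mp4; do
    printf "file '%s'\n" "$f"
done > "$dir/mylist.txt"

cat "$dir/mylist.txt"
# Then merge with:
# ffmpeg -f concat -i "$dir/mylist.txt" -c copy merged.mp4
```

Using absolute paths in the list avoids ffmpeg's "unsafe file name" complaints when the list lives outside the working directory.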
The initial release of this open-source multi-media library came in December of 2000, but only now twelve years later has it hit the over-emphasized 1.0 milestone. Michael Niedermayer, the official FFmpeg maintainer since 2004, mentioned on the developers list that he uploaded the 1.0 release. However, he’s not updating the FFmpeg main page until after he’s got “a bit of sleep”, so the official announcement is likely still a couple of hours out.
via [Phoronix] FFmpeg Reaches Version 1.0.