The Cerro Pelado fire that was threatening Los Alamos is mostly under
control now (71% contained as of Tuesday morning), and the county
has relaxed the "prepare to evacuate" status.
That's good, and not just for Los Alamos, because it frees up more
people to fight the much larger Hermit's Peak/Calf Canyon fire,
currently 26% contained and stretching over a huge 299,565 acres.
For those of us on the Pajarito Plateau, that means we're getting
views of enormous pyrocumulus clouds towering over the Sangre de
Cristo mountains from Las Vegas to just south of Taos.
I keep missing the opportunity for photos, but on Sunday night
I took a series of images and made this time-lapse movie.
I was working on a weather project to make animated maps of the
jet stream. Getting and plotting wind data is a much longer article
(coming soon), but once I had all the images plotted, I wanted to
combine them all into a time-lapse video showing how the jet stream moves.
Like most projects, it's simple once you find the right recipe.
If your images are named outdir/filename00.png, outdir/filename01.png,
outdir/filename02.png and so on,
you can turn them into an MPEG4 video with ffmpeg:
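ffmpeg -i outdir/filename%02d.png -vf "setpts=6.0*PTS" -pix_fmt yuv420p timelapse.mp4
(The output name is whatever you like; the setpts filter and the
-pix_fmt flag are both explained below.)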
%02d, for non-programmers, just means a 2-digit decimal integer
with leading zeros. If the filenames just use 1, 2, 3, ... 10, 11 without
leading zeros, use %2d instead; if they have three digits, use %03d or
%3d, and so on.
Update:
If your first photo isn't numbered 00, you can set a
-start_number -- but it must come before the -i and
filename template. For instance:
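ffmpeg -start_number 17 -i outdir/filename%02d.png -vf "setpts=6.0*PTS" -pix_fmt yuv420p timelapse.mp4
(assuming, say, that your sequence starts with filename17.png)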
That "setpts=6.0*PTS" controls the speed of the playback,
by adding or removing frames.
PTS stands for "Presentation TimeStamps",
which apparently is a measure of how far along a frame is in the file;
setpts=6.0*PTS means for each frame, figure out how far
it would have been in the file (PTS) and multiply that by 6. So if
a frame would normally have been at timestamp 10 seconds, now it will be at
60 seconds, and the video will be six times longer and six times slower.
And yes, you can also use values less than one to speed a video up.
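For instance, a hypothetical input.mp4 can be sped up to double speed with:
ffmpeg -i input.mp4 -filter:v "setpts=0.5*PTS" fast.mp4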
You can also change a video's playback speed by
changing the
frame rate, either with the -r option, e.g. -r 30,
or with the fps filter, -filter:v fps=30.
The default frame rate is 25.
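For instance, either of these (filenames here are just placeholders)
yields a 30 frames-per-second video:
ffmpeg -i input.mp4 -r 30 output.mp4
ffmpeg -i input.mp4 -filter:v fps=30 output.mp4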
You can examine values like the frame rate, number of frames and duration
of a video file with:
ffprobe -select_streams v -show_streams filename
or with the mediainfo program (not part of ffmpeg).
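If you only want a few fields, ffprobe can be told which ones;
something like this (input.mp4 being a placeholder):
ffprobe -v error -select_streams v -show_entries stream=avg_frame_rate,nb_frames,duration -of default=noprint_wrappers=1 input.mp4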
The -pix_fmt yuv420p turned out to be the tricky part.
The recipes I found online didn't include that part, but without it,
Firefox claims "Video can't be played because the file is corrupt",
even though most other browsers can play it just fine.
If you open Firefox's web console and reload, it offers the additional
information
"Details: mozilla::SupportChecker::AddMediaFormatChecker(const mozilla::TrackInfo&)::<lambda()>: Decoder may not have the capability to handle the requested video format with YUV444 chroma subsampling."
Adding -pix_fmt yuv420p cured the problem and made the
video compatible with Firefox, though at first I had problems with
ffmpeg complaining "height not divisible by 2 (1980x1113)" (even though
the height of the images was in fact divisible by 2).
I'm not sure what was wrong; later ffmpeg stopped giving me that error
message and converted the video. It may depend on where in the ffmpeg
command you put the pix_fmt flag or what other flags are
present. ffmpeg arguments are a mystery to me.
Of course, if you're only making something to be uploaded to YouTube,
the Firefox limitation probably doesn't matter and you may not need
the -pix_fmt yuv420p argument.
Animated GIFs
Making an animated GIF is easier. You can use ImageMagick's convert:
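For instance, something like this (the -delay is in hundredths of a
second, and -loop 0 means repeat forever):
convert -delay 10 -loop 0 outdir/filename*.png animated.gif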
It doesn't just sit there until it gets warm enough to melt and run
off as water. Instead, the whole mass of snow moves together,
gradually, down the metal roof, like a glacier.
When it gets to the edge, it still doesn't fall; it somehow stays
intact, curling over and inward, until the mass is too great and it
loses cohesion and a clump falls with a Clunk!
The day after I posted that, I had a chance to see what happens as the
snow sheet slides off a roof if it doesn't have a long distance
to fall. It folds gracefully and gradually, like a sheet.
The underside of the sheets as they slide off the roof is pretty
interesting, too, with varied shapes and patterns in addition to the
imprinted pattern of the roof.
But does it really move like a glacier? I decided to set up a camera
and film it on the move. I set the Rebel on a tripod with an AC power
adaptor, pointed it out the window at a section of roof with a good
snow load, plugged in the intervalometer I bought last summer, located
the manual to re-learn how to program it, and set it for a 30-second
interval. I ran it that way for a bit over an hour -- long enough that
one section of ice had detached and fallen and a new section was
starting to slide down. Then I moved to another window and shot a series
of the same section of snow from underneath, with a 40-second interval.
I uploaded the photos to my workstation and verified that they'd
captured what I wanted. But when I stitched them into a movie, the
way I'd done for my time-lapse clouds last summer, it went way too
fast -- the movie was over in just a few seconds and you couldn't
see what it was doing. Evidently
a 30-second interval is far too slow for the motion of a roof glacier
on a day in the mid-thirties.
But surely that's solvable in software? There must be a way to get avconv
to make duplicates of each frame, if I don't mind that the movie comes
out slightly jumpy. I read through the avconv manual, but it wasn't
very clear about this. After a lot of fiddling and googling and help
from a more expert friend, I ended up with something like this:
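avconv -start_number 8252 -r 3 -i 'img_%04d.jpg' -r 30 timelapse.mp4
(That's the shape of it, at least; adjust the numbers and filename
pattern for your own image sequence.)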
In avconv, -r specifies a frame rate for the next file, input or
output, that appears on the command line. So -r 3 specifies the
frame rate for the set of input images, -i 'img_%04d.jpg';
and then the later -r 30 overrides that 3 and sets a new
frame rate for the output file, timelapse.mp4. The start
number is there because the first file in my sequence is named img_8252.jpg.
30, I'm told, is a reasonable frame rate for movies intended to be watched
on typical 60FPS monitors; 3 is a number I adjusted until the glacier in
the movie moved at what seemed like a good speed.
The movies came out quite interesting! The main movie, from the top,
is the most interesting; the one from the underside is shorter.
I wish I had a time-lapse of that folded sheet I showed above ...
but that happened overnight on the night after I made the movies.
By the next morning there wasn't enough left to be worth setting up
another time-lapse. But maybe one of these years I'll have a chance to
catch a sheet-folding roof glacier.
A few weeks ago I wrote about building a simple Arduino-driven
camera intervalometer to take repeat photos with my DSLR.
I'd been entertained by watching the clouds build and gather and
dissipate again while I stepped through all the false positives in my
crittercam, and I wanted to try capturing them intentionally so I
could make cloud movies.
Of course, you don't have to build an Arduino device.
A search for timer remote control or intervalometer
will find lots of good options around $20-30. I bought one
so I'll have a nice LCD interface rather than having to program an
Arduino every time I want to make movies.
Setting the image size
Okay, so you've set up your camera on a tripod with the intervalometer
hooked to it. (Depending on how long your movie is, you may also want
an external power supply for your camera.)
Now think about what size images you want.
If you're targeting YouTube, you probably want to use one of YouTube's
preferred settings, bitrates and resolutions, perhaps 1280x720 or
1920x1080. But you may have some other reason to shoot at higher resolution:
perhaps you want to use some of the still images as well as making video.
For my first test, I shot at the full resolution of the camera.
So I had a directory full of big ten-megapixel photos with
filenames ranging from img_6624.jpg to img_6715.jpg.
I copied these into a new directory, so I didn't overwrite the originals.
You can use ImageMagick's mogrify to scale them all:
mogrify -scale 1280x720 *.jpg
I had an additional issue, though: rain was threatening and I didn't
want to leave my camera at risk of getting wet while I went dinner shopping,
so I moved the camera back under the patio roof. But with my fisheye lens,
that meant I had a lot of extra house showing and I wanted to crop
that off. I used GIMP on one image to determine the x, y, width and height
for the crop rectangle I wanted.
You can even crop to a different aspect ratio from your target,
and then fill the extra space with black:
mogrify -crop 2720x1450+135+315 -scale 1280 -gravity center -background black -extent 1280x720 *.jpg
If you decide to rescale your images to an unusual size, make sure
both dimensions are even, otherwise avconv will complain that
they're not divisible by two.
Finally: Making your movie
I found lots of pages explaining how to stitch
together time-lapse movies using mencoder, and a few
using ffmpeg. Unfortunately, in Debian, both are deprecated.
Mplayer has been removed entirely.
The ffmpeg-vs-avconv issue is apparently a big political war, and
I have no position on the matter, except that Debian has come down
strongly on the side of avconv and I get tired of getting nagged at
every time I run a program. So I needed to figure out how to use avconv.
I found some pages on avconv, but most of them didn't actually work.
Here's what worked for me:
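avconv -f image2 -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 time-lapse.mp4
(-vcodec libx264 selects the H.264 encoder; it's a typical choice,
not the only one.)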
Update: I don't know where that -f image2 came from -- ignore it.
And avconv can take an input and an output frame rate; they're
both specified with -r, and the only way input and output are
distinguished is their position in the command line. So a more
appropriate command might be something like this:
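avconv -r 15 -start_number 6624 -i 'img_%04d.jpg' -r 30 -vcodec libx264 time-lapse.mp4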
using 30 as a good output frame rate for people viewing on 60fps monitors.
Adjust the input frame rate, the -r 15, as needed to control the speed
of your time-lapse video.
Adjust the start_number and filename appropriately for the files you have.
Avconv produces an mp4 file suitable for uploading to YouTube.
So here is my little test movie:
Time Lapse Clouds.
While testing my automated critter camera, I was getting lots of
false positives caused by clouds
gathering and growing and then evaporating away. False positives
are annoying, but I discovered that it's fun watching the clouds grow
and change in all those photos
... which got me thinking about time-lapse photography.
First, a disclaimer: it's easy and cheap to just buy an
intervalometer. Search for timer remote control
or intervalometer and you'll find plenty of options for
around $20-30. In fact, I ordered one.
But, hey, it's not here yet, and I'm impatient.
And I've always wanted to try controlling a camera from an Arduino.
This seemed like the perfect excuse.
Why an Arduino rather than a Raspberry Pi or BeagleBone? Just because
it's simpler and cheaper, and this project doesn't need much compute
power. But everything here should be applicable to any microcontroller.
My Canon Rebel Xsi has a fairly simple wired remote control plug:
a standard 2.5mm stereo phone plug.
I say "standard" as though you can just walk into Radio Shack and buy
one, but in fact it turned out to be surprisingly difficult, even when
I was in Silicon Valley, to find them. Fortunately, I had found some,
several years ago, and had cables already wired up waiting for an experiment.
The outside connector ("sleeve") of the plug is ground.
Connecting ground to the middle ("ring") conductor makes the camera focus,
like pressing the shutter button halfway; connecting ground to the center
("tip") conductor makes it take a picture.
I have a wired cable release that I use for astronomy and spent a few
minutes with an ohmmeter verifying what did what, but if you don't
happen to have a cable release and a multimeter there are plenty of
Canon remote control pinout diagrams on the web.
Now we need a way for the controller to connect one pin of the remote
to another on command.
There are ways to simulate that with transistors -- my Arduino-controlled
robotic shark project did that. However, the shark was about a $40
toy, while my DSLR cost quite a bit more than that. While I
did find several people on the web saying they'd used transistors with
a DSLR with no ill effects, I found a lot more who were nervous about
trying it. I decided I was one of the nervous ones.
The alternative to transistors is to use something like a relay. In a relay,
voltage applied across one pair of contacts -- the signal from the
controller -- creates a magnetic field that closes a switch and joins
another pair of contacts -- the wires going to the camera's remote.
But there's a problem with relays: that magnetic field, when it
collapses, can send a pulse of current back up the wire to the controller,
possibly damaging it.
There's another alternative, though. An opto-isolator works like a
relay but without the magnetic pulse problem. Instead of a magnetic
field, it uses an LED (internally, inside the chip where you can't see it)
and a photo sensor. I bought some opto-isolators a while back and had
been looking for an excuse to try one. Actually two: I needed one for
the focus pin and one for the shutter pin.
How do you choose which opto-isolator to use out of the gazillion
options available in a components catalog? I don't know, but when I
bought a selection of them a few years ago, it included a 4N25, 4N26
and 4N27, which seem to be popular and well documented, as well as a
few other models that are so unpopular I couldn't even find a
datasheet for them. So I went with the 4N25.
Wiring an opto-isolator is easy. You do need a resistor across the
inputs (presumably because it's an LED). 380Ω is apparently a good
value for the 4N25, but it's not critical. I didn't have any 380Ω
but I had a bunch of 330Ω so that's what I used. The inputs (the
signals from the Arduino) go between pins 1 and 2, with a resistor;
the outputs (the wires to the camera remote plug) go between pins 4
and 5, as shown in the diagram on this Arduino and Opto-isolators
discussion, except that I didn't use any pull-up resistor on the output.
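In other words, for each of the two isolators, assuming the standard
4N25 pinout and that the camera's remote lines sit at a small positive
voltage relative to its ground:

Arduino output pin --[330Ω]-- 4N25 pin 1 (LED anode)
Arduino GND ----------------- 4N25 pin 2 (LED cathode)
4N25 pin 5 (collector) ------ camera ring (focus) or tip (shutter)
4N25 pin 4 (emitter) -------- camera sleeve (ground)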
Then you just need a simple Arduino program to drive the inputs.
Apparently the camera wants to see a focus half-press before it gets
the input to trigger the shutter, so I put in a slight delay there,
and another delay while I "hold the shutter button down" before
releasing both of them.
Here's some Arduino code to shoot a photo every ten seconds:
// Arduino pins driving the two opto-isolator inputs:
int focusPin = 6;
int shutterPin = 7;

// Timings, all in milliseconds:
int focusDelay = 50;          // half-press focus before firing the shutter
int shutterOpen = 100;        // how long to "hold the shutter button down"
int betweenPictures = 10000;  // ten seconds between photos

void setup()
{
    pinMode(focusPin, OUTPUT);
    pinMode(shutterPin, OUTPUT);
}

void snapPhoto()
{
    digitalWrite(focusPin, HIGH);    // focus half-press
    delay(focusDelay);
    digitalWrite(shutterPin, HIGH);  // trigger the shutter
    delay(shutterOpen);
    digitalWrite(shutterPin, LOW);   // release both "buttons"
    digitalWrite(focusPin, LOW);
}

void loop()
{
    delay(betweenPictures);
    snapPhoto();
}
Naturally, since then we haven't had any dramatic clouds, and the
lightning storms have all been late at night after I went to bed.
(I don't want to leave my nice camera out unattended in a rainstorm.)
But my intervalometer seemed to work fine in short tests.
Eventually I'll make some actual time-lapse movies ... but that will
be a separate article.