Shallow Thoughts : tags : photography
Akkana's Musings on Open Source Computing and Technology, Science, and Nature.
Sat, 13 Apr 2024
I'm sorry, but I have no eclipse photos to share. I messed that up.
But I did get to see totality.
For the April 8, 2024 eclipse, Dave and I committed early to Texas.
Seemed like that was where the best long-range forecasts were.
In the last week before the eclipse, the forecasts were no longer
looking so good. But I've heard so many stories of people driving around
trying to chase holes in the clouds, only to be skunked,
while people who stayed put got a better view.
We decided to stick with our plan, which was to stay in San Angelo
(some 190 miles off the centerline) the night before,
get up fairly early and drive to somewhere near the centerline.
Read more ...
Tags: eclipse, astronomy, photography
[ 12:36 Apr 13, 2024 | More science/astro | permalink to this entry ]
Mon, 20 Nov 2023
I've been relying more on my phone
for photos I take while hiking, rather than carry a separate camera.
The Pixel 6a takes reasonably good photos, if you can put up with
the wildly excessive processing Google's camera app does whether you
want it or not.
That opens the possibility of GPS tagging photos, so I'd
have a good record of where on the trail each photo was taken.
But as it turns out: no. It seems the GPS coordinates the Pixel's
camera app records in photos are always wrong, by a significant amount.
And, weirdly, this doesn't seem to be something anyone's talking
about on the web ... or am I just using the wrong search terms?
Read more ...
Tags: mapping, GIS, cellphone, google, programming, python, photography, 30DayMapChallenge
[ 19:09 Nov 20, 2023 | More mapping | permalink to this entry ]
Mon, 17 Jul 2023
While driving back down the hill after an appointment, I had to stop
at Anderson Overlook to snap a few photos of the clouds and their shadows
on the mesa.
In a Robert B. Parker novel I read many years ago, a character, musing
on the view from a rich guy's house, comments, "I hear that after a while,
it's just what you see out the window."
Dave and I make fun of that line all the time.
Maybe it's true in Boston, but in New Mexico, I never get tired of the
view and the endlessly changing light and shadow.
I know people who have lived here fifty years or more and still aren't
tired of it.
Tags: nature, photography
[ 20:12 Jul 17, 2023 | More photo | permalink to this entry ]
Sat, 20 May 2023
The weather is great for this year's Kite Festival, going on right now
at Overlook Park. It's a little hazy, but there's a good wind,
plenty to keep the kids' small kites aloft, though the big, fancy kites
were struggling a little.
Continuing through Sunday night; if you're in the area, go take a look!
A few photos:
White Rock Kite Festival 2023.
Tags: photography, kites
[ 15:43 May 20, 2023 | More misc | permalink to this entry ]
Sun, 04 Dec 2022
I've been down for a week with the Flu from Hell that's going around.
We think it's flu because the symptoms match, and
because I got knocked out by it, while Dave caught a much milder case
— and Dave got the double-dose-for-seniors flu shot, while I
only got the regular-for-younger-folks shot. (Our COVID tests are negative
and there's no anosmia or breathing impairment.)
So I haven't been getting much done lately, nor writing blog articles.
But I'm feeling a bit better now. While I recover, here's something
from a few months ago: our annual autumn visit from the Door Bunny.
Read more ...
Tags: humor, photography
[ 08:27 Dec 04, 2022 | More humor | permalink to this entry ]
Wed, 19 Aug 2020
Late summer is whiptail season. Whiptails are long, slender,
extremely fast lizards with (as you might expect) especially long tails.
They emerge from hibernation at least a month later than the fence lizards,
but once they're awake, they're everywhere.
In addition to being pretty to look at, fun to watch as they
hit the afterburner and streak across the yard,
and challenging to photograph since they seldom sit still for long,
they're interesting for several reasons.
Read more ...
Tags: nature, lizard, photography
[ 19:56 Aug 19, 2020 | More nature | permalink to this entry ]
Sat, 25 Jul 2020
Monday was the last night it was clear enough to see Comet NEOWISE.
I shot some photos with the Rebel, but I haven't quite figured out
the alignment and stacking needed for decent astrophotos, so I don't
have much to show. I can't even see the ion tail.
The interesting thing about Monday, besides just getting to see
the comet, was the never-ending train of satellites.
Read more ...
Tags: astronomy, science, photography
[ 20:27 Jul 25, 2020 | More science/astro | permalink to this entry ]
Sun, 27 Aug 2017
My first total eclipse! The suspense had been building for years.
Dave and I were in Wyoming. We'd made a hotel reservation nine months
ago, by which time we were already too late to book a room in the zone
of totality and settled for Laramie, a few hours' drive from the centerline.
For visual observing, I had my little portable 80mm refractor. But
photography was more complicated. I'd promised myself that for my
first (and possibly only) total eclipse, I wasn't going to miss the
experience because I was spending too much time fiddling with cameras.
But I couldn't talk myself into not trying any photography at all.
Initially, my plan was to use my
90mm Mak
as a 500mm camera lens. It had worked okay for
the 2012 Venus transit.
I spent several weeks before the eclipse in a flurry of creation,
making a couple of
solar finders,
a barn-door
mount, and then wrestling with motorizing the barn-door (a failure,
because I couldn't find a place to buy decent gears for the motor;
I'm still working on that and will eventually write it up).
I wrote up a plan: what equipment I would use when, a series of
progressive exposures for totality, and so forth.
And then, a couple of days before we were due to leave, I figured I
should test my rig -- and discovered that it was basically impossible
to focus on the sun. For the Venus transit, the sun wasn't that high
in the sky, so I focused through the viewfinder. But for the total
eclipse, the sun would be almost overhead, and the viewfinder nearly
impossible to see. So I had planned to point the Mak at a distant
hillside, focus it, then slip the filter on and point it up to the sun.
It turned out the focal point was completely different through the filter.
With only a couple of days left to go, I revised my plan.
The Mak is difficult to focus under any circumstances. I decided
not to use it, and to stick to my Canon 55-250mm zoom telephoto,
with the camera on a normal tripod. I'd skip the partial eclipse
(I've photographed those before anyway) and concentrate on
getting a few shots of the diamond ring and the corona, running
through a range of exposures without needing to look at the camera
screen or do any refocusing. And since I wasn't going to be using a
telescope, my nifty solar finders wouldn't work; I designed a new
one out of popsicle sticks to fit in the camera's hot shoe.
Getting There
We stayed with relatives in Colorado Saturday night, then drove to
Laramie Sunday. I'd heard horror stories of hotels canceling people's
longstanding eclipse reservations, but fortunately our hotel honored
our reservation. WHEW! Monday morning, we left the hotel at 6am in
case we hit terrible traffic. There was already plenty of traffic on
the highway north to Casper, but we turned east hoping for fewer crowds.
A road sign said "NO PARKING ON HIGHWAY." They'd better not try
to enforce that in the totality zone!
When we got to I-25 it was moving and, oddly enough, not particularly
crowded. Glendo Reservoir had looked on the map like a nice spot on
the centerline ... but it was also a state park, so there was a risk
that everyone else would want to go there. Sure enough: although
traffic was moving on I-25 at Wheatland, a few miles north the freeway
came to a screeching halt. We backtracked and headed east toward Guernsey,
where several highways went north toward the centerline.
East of Glendo, there were crowds at every highway pullout and rest
stop. As we turned onto 270 and started north, I kept an eye on
OsmAnd on my phone, where I'd loaded
a GPX file of the eclipse path. When we were within a mile of the
centerline, we stopped at a likely looking pullout. It was maybe 9 am.
A cool wind was blowing -- very pleasant since we were expecting a hot
day -- and we got acquainted with our fellow eclipse watchers as we
waited for first contact.
Our pullout was also the beginning of a driveway to a farmhouse we could
see in the distance. Periodically people pulled up, looking lost,
checked maps or GPS, then headed down the road to the farm. Apparently
the owners had advertised it as an eclipse spot -- pay $35, and you
can see the eclipse and have access to a restroom too! But the
old farmhouse's plumbing apparently failed early on, and some of the people
who'd paid came out to the road to watch with us since we had better
equipment set up.
There's not much to say about the partial eclipse. We all traded views
-- there were five or six scopes at our pullout, including a nice
little H-alpha scope. I snapped an occasional photo through the 80mm
with my pocket camera held to the eyepiece, or with the DSLR through
an eyepiece projection adapter. Oddly, the DSLR photos came out worse
than the pocket cam ones. I guess I should try and debug that at some point.
Shortly before totality, I set up the DSLR on the tripod, focused on a
distant hillside and taped the focus with duct tape, plugged in the
shutter remote, checked the settings in Manual mode, then set the
camera to Program mode and AEB (auto exposure bracketing). I put the
lens cap back on and pointed the camera toward the sun using the
popsicle-stick solar finder. I also set a countdown timer, so I could
press START when totality began and it would beep to warn me when it was
time for the sun to come back out. It was getting chilly by then, with
the sun down to a sliver, and we put on sweaters.
The pair of eclipse veterans at our pullout had told everybody to
watch for the moon's shadow racing toward us across the hills from the
west. But I didn't see the racing shadow, nor any shadow bands.
And then Venus and Mercury appeared and the sun went away.
Totality
One thing the photos don't prepare you for is the color of the sky. I
expected it would look like twilight, maybe a little darker; but it
was an eerie, beautiful medium slate blue. With that unworldly
solar corona in the middle of it, and Venus gleaming as bright as
you've ever seen it, and Mercury shining bright on the other side.
There weren't many stars.
We didn't see birds doing anything unusual; as far as I can tell,
there are no birds in this part of Wyoming. But the cows did all
get in a line and start walking somewhere. Or so Dave tells me.
I wasn't looking at the cows.
Amazingly, I remembered to start my timer and to pull off the DSLR's
lens cap as I pushed the shutter button for the diamond-ring shots
without taking my eyes off the spectacle high above. I turned the
camera off and back on (to cancel AEB), switched to M mode, and
snapped a photo while I scuttled over to the telescope, pulled the
filter off and took a look at the corona in the wide-field eyepiece.
So beautiful! Binoculars, telescope, naked eye -- I don't know which
view was best.
I went through my exposure sequence on the camera, turning the dial a
couple of clicks each time without looking at the settings, keeping my
eyes on the sky or the telescope eyepiece. But at some point I happened
to glance at the viewfinder -- and discovered that the sun was drifting
out of the frame. Adjusting the tripod to get it back in the frame
took longer than I wanted, but I got it there and got my eyes
back on the sun as I snapped another photo ...
and my timer beeped.
I must have set it wrong! It couldn't possibly have been two
and a half minutes. It had been 30, 45 seconds tops.
But I nudged the telescope away from the sun, and looked back up -- to
another diamond ring. Totality really was ending and it was time to
stop looking.
Getting Out
The trip back to Golden, where we were staying with a relative, was
hellish. We packed up immediately after totality -- we figured we'd
seen partials before, and maybe everybody else would stay. No such luck.
By the time we got all the equipment packed there was already a steady
stream of cars heading south on 270.
A few miles north of Guernsey the traffic came to a stop. This was to
be the theme of the afternoon. Every small town in Wyoming has a stop sign
or signal, and that caused backups for miles in both directions.
We headed east, away from Denver, to take rural roads down through
eastern Wyoming and Colorado rather than I-25, but even so,
we hit small-town stop sign backups every five or ten miles.
We'd brought the Rav4 partly for this reason. I kept my eyes glued on
OsmAnd and we took dirt roads when we could, skirting the paved
highways -- but mostly there weren't any dirt roads going where we
needed to go. It took about 7 hours to get back to Golden, about twice
as long as it should have taken. And we should probably count
ourselves lucky -- I've heard from other people who took 11 hours to
get to Denver via other routes.
Lessons Learned
Dave is fond of the quote,
"No battle plan survives contact with the enemy"
(which turns out to be from Prussian military strategist
Helmuth
von Moltke the Elder).
The enemy, in this case, isn't the eclipse; it's time.
Two and a half minutes sounds like a lot, but it goes by like nothing.
Even in my drastically scaled-down plan, I had intended exposures from
1/2000 to 2 seconds (at f/5.6 and ISO 400). In practice, I only made
it to 1/320 because of fiddling with the tripod.
And that's okay. I'm thrilled with the photos I got, and definitely
wouldn't have traded any eyeball time for more photos. I'm more annoyed
that the tripod fiddling time made me miss a little bit of extra looking.
My script actually worked out better than I expected, and I was very
glad I'd done the preparation I had. The script was reasonable, the
solar finders worked really well, and the lens was even in focus
for the totality shots.
Then there's the eclipse itself.
I've read so many articles about solar eclipses as a mystical,
religious experience. It wasn't, for me. It was just an eerily
beautiful, other-worldly spectacle: that ring of cold fire staring
down from the slate blue sky, bright planets but no stars, everything
strange, like nothing I'd ever seen. Photos don't get across what it's
like to be standing there under that weird thing in the sky.
I'm not going to drop everything to become a globe-trotting eclipse
chaser ... but I sure hope I get to see another one some day.
Photos: 2017
August 21 Total Solar Eclipse in Wyoming.
Tags: eclipse, astronomy, photography
[ 20:41 Aug 27, 2017 | More science/astro | permalink to this entry ]
Thu, 20 Apr 2017
Last week, my hiking group had its annual trip, which this year
was Bluff, Utah, near Comb Ridge and Cedar Mesa, an area particularly
known for its Anasazi ruins and petroglyphs.
(I'm aware that "Anasazi" is considered a politically incorrect term
these days, though it still seems to be in common use in Utah; it isn't
in New Mexico. My view is that I can understand why Pueblo people
dislike hearing their ancestors referred to by a term that means
something like "ancient enemies" in Navajo; but if they want everyone
to switch from using a mellifluous and easy to pronounce word like
"Anasazi", they ought to come up with a better, and shorter,
replacement than "Ancestral Puebloans." I mean, really.)
The photo at right is probably the most photogenic of the ruins I saw.
It's in Mule Canyon, on Cedar Mesa, and it's called "House on Fire"
because of the colors in the rock when the light is right.
The light was not right when we encountered it, in late morning around
10 am; but fortunately, we were doing an out-and-back hike. Someone in
our group had said that the best light came when sunlight reflected
off the red rock below the ruin up onto the rock above it, an effect
I've seen in other places, most notably Bryce Canyon, where the hoodoos
look positively radiant when seen backlit, because that's when
the most reflected light adds to the reds and oranges in the rock.
Sure enough, when we got back to House on Fire at 1:30 pm, the
light was much better. It wasn't completely obvious to the eye,
but comparing the photos afterward, the difference is impressive:
Changing
light on House on Fire Ruin.
The weather was almost perfect for our trip, except for one overly hot
afternoon on Wednesday.
And the hikes were fairly perfect, too -- fantastic ruins you can see
up close, huge petroglyph panels with hundreds of different creatures
and patterns (and some that could only have been science fiction,
like brain-man at left), sweeping views of canyons and slickrock,
and the geology of Comb Ridge and the Monument Upwarp.
And in case you read my last article, on translucent windows, and are
wondering how those generated waypoints worked: they were terrific,
and in some cases made the difference between finding a ruin and
wandering lost on the slickrock. I wish I'd had that years ago.
Most of what I have to say about the trip is already in the comments on
the photos, so I'll just link to the photo page:
Photos: Bluff trip, 2017.
Tags: travel, photography
[ 19:28 Apr 20, 2017 | More travel | permalink to this entry ]
Sun, 08 Jan 2017
The snowy days here have been so pretty, the snow contrasting with the
darkness of the piñons and junipers and the black basalt.
The light fluffy crystals sparkle in a rainbow of colors when they
catch the sunlight at the right angle, but I've been unable to catch
that effect in a photo.
We've had some unusual holiday visitors, too, culminating in this
morning's visit from a huge bull elk.
Dave came down to make coffee and saw the elk in the garden right next
to the window. But by the time I saw him, he was farther out in the
yard. And my DSLR batteries were dead, so I grabbed the point-and-shoot
and got what I could through the window.
Fortunately for my photography the elk wasn't going anywhere in any hurry.
He had an injured leg and was limping badly.
He slowly made his way down the hill and into the neighbors' yard.
I hope he returns. Even with a limp that bad, an elk that size
has no predators in White Rock, so as long as he stays off the nearby
San Ildefonso reservation (where hunting is allowed) and manages to
find enough food, he should be all right. I'm tempted to buy some
hay to leave out for him.
Some of the sunsets have been pretty nice, too.
A few more photos.
Tags: nature, photography
[ 19:48 Jan 08, 2017 | More photo | permalink to this entry ]
Sun, 25 Dec 2016
Excellent Xmas to all!
We're having a white Xmas here.
Dave and I have been discussing how "Merry Christmas" isn't
alliterative like "Happy Holidays". We had trouble coming up with a
good C or K adjective to go with Christmas, but then we hit on the
answer: Have an Excellent Xmas! It also has the advantage of
inclusivity: not everyone celebrates the birth of Christ, but Xmas is
a secular holiday of lights, family and gifts, open to people of all
belief systems.
Meanwhile:
I spent a couple of nights recently learning how to photograph Xmas
lights and farolitos.
Farolitos, a New Mexico Christmas tradition, are paper bags, weighted
down with sand, with a candle inside. Sounds modest, but put a row of
them alongside a roadway or along the top of a typical New Mexican
adobe or faux-dobe and you have a beautiful display of lights.
They're also known as luminarias in southern New Mexico, but
Northern New Mexicans insist that a luminaria is a bonfire, and the
little paper bag lanterns should be called farolitos.
They're pretty, whatever you call them.
Locally, residents of several streets in Los Alamos and White Rock set
out farolitos along their roadsides for a few nights around Christmas,
and the county cooperates by turning off streetlights on those
streets. The display on Los Pueblos in Los Alamos is a zoo, a slow
exhaust-choked parade of cars that reminds me of the Griffith Park
light show in LA. But here in White Rock the farolito displays are
a lot less crowded, and this year I wanted to try photographing them.
Canon bugs affecting night photography
I have a little past experience with night photography. I went through
a brief astrophotography phase in my teens (in the pre-digital era,
so I was using film and occasionally glass plates). But I haven't done
much night photography for years.
That's partly because I've had problems taking night shots with my
current digital SLR camera, a Rebel XSi (known outside the US as a
Canon 450d). It's old and modest as DSLRs go, but I've resisted
upgrading since I don't really need more features.
Except maybe when it comes to night photography. I've tried shooting
star trails, lightning shots and other nocturnal time exposures, and
keep hitting a snag: the camera refuses to take a photo. I'll be in
Manual mode, with my aperture and shutter speed set, with the lens in
Manual Focus mode with Image Stabilization turned off. Plug in the
remote shutter release, push the button ... and nothing happens except
a lot of motorized lens whirring noises. Which shouldn't be happening
-- in MF and non-IS mode the lens should be just sitting there inert,
not whirring its motors. I couldn't seem to find a way to convince it
that the MF switch meant that, yes, I wanted to focus manually.
It seemed to be primarily a problem with the EF-S 18-55mm kit lens;
the camera will usually condescend to take a night photo with my other
two lenses. I wondered if the MF switch might be broken, but then I
noticed that in some modes the camera explicitly told me I was in
manual focus mode.
I was almost to the point of ordering another lens just for night
shots when I finally hit upon the right search terms and found,
if not the reason it's happening, at least an excellent workaround.
Back Button Focus
I'm so sad that I went so many years without knowing about Back Button Focus.
It's well hidden in the menus, under Custom Functions #10.
Normally, the shutter button does a bunch of things. When you press it
halfway, the camera both autofocuses (sadly, even in manual focus mode)
and calculates exposure settings.
But there's a custom function that lets you separate the focus and
exposure calculations. In the Custom Functions menu option #10
(the number and exact text will be different on different Canon models,
but apparently most or all Canon DSLRs have this somewhere),
the heading says: Shutter/AE Lock Button.
Following that is a list of four obscure-looking options:
- AF/AE lock
- AE lock/AF
- AF/AF lock, no AE lock
- AE/AF, no AE lock
The text before the slash indicates what the shutter button, pressed
halfway, will do in that mode; the text after the slash is what
happens when you press the * or AE lock button on the
upper right of the camera back (the same button you use to zoom out
when reviewing pictures on the LCD screen).
The first option is the default: press the shutter button halfway to
activate autofocus; the AE lock button calculates and locks exposure settings.
The second option is the revelation: pressing the shutter button halfway
will calculate exposure settings, but does nothing for focus. To focus,
press the * or AE button, after which focus will be locked. Pressing
the shutter button won't refocus. This mode is called "Back button focus"
all over the web, but not in the manual.
Back button focus is useful in all sorts of cases.
For instance, if you want to autofocus once then keep the same focus
for subsequent shots, it gives you a way of doing that.
It also solves my night focus problem: with the bug (whether it's
in the lens or the camera) that makes the lens try to autofocus even in
manual focus mode, pressing the shutter in this mode won't trigger that.
The camera assumes it's in focus and goes ahead and takes the picture.
Incidentally, the other two modes in that menu apply to AI SERVO mode
when you're letting the focus change constantly as it follows a moving
subject. The third mode makes the * button lock focus and stop
adjusting it; the fourth lets you toggle focus-adjusting on and off.
Live View Focusing
There's one other thing that's crucial for night shots: live view
focusing. Since you can't use autofocus in low light, you have to do
the focusing yourself. But most DSLR's focusing screens aren't good
enough that you can look through the viewfinder and get a reliable
focus on a star or even a string of holiday lights or farolitos.
Instead, press the SET button (the one in the middle of the
right/left/up/down buttons) to activate Live View (you may have to
enable it in the menus first). The mirror locks up and a preview of
what the camera is seeing appears on the LCD. Use the zoom button (the
one to the right of that */AE lock button) to zoom in; there are two
levels of zoom in addition to the un-zoomed view. You can use the
right/left/up/down buttons to control which part of the field the
zoomed view will show. Zoom all the way in (two clicks of the +
button) to fine-tune your manual focus. Press SET again to exit
live view.
It's not as good as a fine-grained focusing screen, but at least
it gets you close. Consider using relatively small apertures, like f/8,
since they give you more latitude for focus errors. You'll be
doing time exposures on a tripod anyway, so a narrow aperture just
means your exposures have to be a little longer than they otherwise
would have been.
After all that, my Xmas Eve farolitos photos turned out mediocre.
We had a storm blowing in, so a lot of the candles had blown out.
(In the photo below you can see how the light string on the left
is blurred, because the tree was blowing around so much during the
30-second exposure.)
But I had fun, and maybe I'll go out and try again tonight.
An excellent Xmas to you all!
Tags: photography
[ 12:30 Dec 25, 2016 | More photo | permalink to this entry ]
Mon, 05 Sep 2016
We drove up to Taos today to see the
Earthships.
Earthships are sustainable, completely off-the-grid houses built of adobe and
recycled materials. That was pretty much all I knew about them, except
that they were weird looking; I'd driven by on the highway a few times
(they're on highway 64 just west of the
beautiful Rio
Grande Gorge Bridge) but never stopped and paid the $7 admission
for the self-guided tour.
Seeing them up close was fun. The walls are made of old tires packed
with dirt, then covered with adobe. The result is quite strong, though
like all adobe structures it requires regular maintenance if you don't
want it to melt away. For non-load-bearing walls, they pack adobe
around old recycled bottles or cans.
The houses have a passive solar design, with big windows along one
side that make a greenhouse for growing food and freshening the air,
as well as collecting warmth in cold weather. Solar panels provide
power -- supposedly along with windmills, but I didn't see any
windmills in operation, and the ones they showed in photos looked
too tiny to offer much help. To help make the most of the solar power,
the house is wired for DC, and all the lighting, water pumps and so
forth run off low voltage DC. There's even a special DC refrigerator.
They do include an AC inverter for appliances like televisions and computer
equipment that can't run directly off DC.
Water is supposedly self-sustaining too, though I don't see how that
could work in drought years. As long as there's enough rainfall, water
runs off the roof into a cistern and is used for drinking, bathing etc.,
after which it's run through filters and then pumped into the greenhouse.
Waste water from the greenhouse is used for flushing toilets, after
which it finally goes to the septic tank.
All very cool. We're in a house now that makes us very happy (and has
excellent passive solar, though we do plan to add solar panels and
a greywater system some day) but if I was building a house, I'd be
all over this.
We also discovered an excellent way to get there without getting stuck
in traffic-clogged Taos (it's a lovely town, but you really don't want
to go near there on a holiday, or a weekend ... or any other time when
people might be visiting). There's a road from Pilar that crosses the
Rio Grande, then climbs to the mesa high above the river,
continuing up to highway 64 right near the earthships. We'd been a
little way up that road once, on a petroglyph-viewing hike, but never
all the way through. The map said it was dirt from the Rio all the way
up to 64, and we were in the Corolla, since the Rav4's battery started
misbehaving a few days ago and we haven't replaced it yet.
So we were hesitant. But the nice folks at the Rio Grande Gorge
visitor center at Pilar assured us that the dirt section ended at the
top of the mesa and any car could make it ("it gets bumpy -- a New
Mexico massage! You'll get to the top very relaxed"). They were
right: the Corolla made it with no difficulty and it was a much
faster route than going through Taos.
We got home just in time for the rouladen I'd left cooking in the
crockpot, and then finished dinner just in time for a great sunset sky.
A few more photos:
Earthships (and a
great sunset).
Tags: misc, photography
[ 21:05 Sep 05, 2016 | More misc | permalink to this entry ]
Tue, 09 Aug 2016
A couple of days ago we had a spectacular afternoon double rainbow.
I was out planting grama grass seeds, hoping to take advantage of
a rainy week, but I cut the planting short to run up and get my camera.
And then after shooting rainbow shots with the fisheye lens,
it occurred to me that I could switch to the zoom and take some
hummingbird shots with the rainbow in the background. How often
do you get a chance to do that? (Not to mention a great excuse not to
go back to planting grass seeds.)
(Actually, here, it isn't all that uncommon since we get a lot of
afternoon rainbows. But it's the first time I thought of trying it.)
Focus is always chancy when you're standing next to the feeder,
waiting for birds to fly by and shooting whatever you can.
Next time maybe I'll have time to set up a tripod and remote
shutter release. But I was pretty happy with what I got.
Photos:
Double rainbow, with hummingbirds.
Tags: nature, birds, rainbow, photography
[ 19:40 Aug 09, 2016 | More nature | permalink to this entry ]
Tue, 05 Jul 2016
I'll be at Texas LinuxFest in Austin, Texas this weekend.
Friday, July
8 is the big day for open source imaging:
first a morning Photo Walk led by Pat David, from 9-11,
after which Pat, an active GIMP contributor and the driving force
behind the PIXLS.US website and discussion
forums, gives a talk on "Open Source Photography Tools".
Then after lunch I'll give a GIMP tutorial.
We may also have a Graphics Hackathon/Q&A session to discuss
all the open-source graphics tools in the last slot of the day, but
that part is still tentative. I'm hoping we can get some good
discussion especially among the people who go on the photo walk.
Lots of interesting looking talks on Saturday, too. I've never been
to Texas LinuxFest before: it's a short conference, just two days,
but they're packing a lot into those two days, and it looks like
it'll be a lot of fun.
Tags: gimp, conferences, photography
[ 18:37 Jul 05, 2016 | More conferences | permalink to this entry ]
Sun, 04 Oct 2015
For the animations
I made from the lunar eclipse last week, the hard part was aligning
all the images so the moon (or, in the case of the moonrise image, the
hillside) was in the same position in every frame.
This is a problem that comes up a lot with astrophotography, where
multiple images are stacked for a variety of reasons: to increase
contrast, to increase detail, or to take an average of a series of images,
as well as animations like I was making this time.
And of course animations can be fun in any context, not just astrophotography.
In the tutorial that follows, clicking on the images will show a full
sized screenshot with more detail.
Load all the images as layers in a single GIMP image
The first thing I did was load up all the images as layers in a single image:
File->Open as Layers..., then navigate to where the images are
and use shift-click to select all the filenames I wanted.
Work on two layers at once
By clicking on the "eyeball" icon in the Layers dialog, I could
adjust which layers were visible. For each pair of layers, I made
the top layer about 50% opaque by dragging the opacity slider (it's
not important that it be exactly at 50%, as long as you can see both
images).
Then use the Move tool to drag the top image on top of the bottom image.
But it's hard to tell when they're exactly aligned
"Drag the top image on top of the bottom image":
easy to say, hard to do. When the images are dim and red like that,
and half of the image is nearly invisible, it's very hard to tell when
they're exactly aligned.
Use a Contrast display filter
What helped was a Contrast filter.
View->Display Filters... and in the dialog that pops up,
click on
Contrast, and click on the right arrow to move it to
Active Filters.
The Contrast filter changes the colors so that the dim red moon is fully
visible, and it's much easier to tell when the layers are
approximately on top of each other.
Use Difference mode for the final fine-tuning
Even with the Contrast filter, though, it's hard to see when the
images are exactly on top of each other. When you have them within a few
pixels, get rid of the contrast filter (you can keep the dialog up but
disable the filter by un-checking its checkbox in
Active Filters).
Then, in the Layers dialog, slide the top layer's Opacity back to 100%,
go to the
Mode selector and set the layer's mode to
Difference.
In Difference mode, you only see differences between the two layers.
So if your alignment is off by a few pixels, it'll be much easier to see.
Even in a case like an eclipse where the moon's appearance is changing
from frame to frame as the earth's shadow moves across it, you can still
get the best alignment by making the Difference between the two layers
as small as you can.
Use the Move tool and the keyboard: left, right, up and down arrows move
your layer by one pixel at a time. Pick a direction, hit the arrow key
a couple of times and see how the difference changes. If it got bigger,
use the opposite arrow key to go back the other way.
When you get to where there's almost no difference between the two layers,
you're done. Change Mode back to Normal, make sure Opacity is at 100%,
then move on to the next layer in the stack.
It's still a lot of work. I'd love to find a program that looks for
circular or partially-circular shapes in successive images and does
the alignment automatically. Someone on GIMP suggested I might be
able to write something using OpenCV, which has circle-finding
primitives (I've written briefly before about
SimpleCV,
a wrapper that makes OpenCV easy to use from Python).
But doing the alignment by hand in GIMP, while somewhat tedious,
didn't take as long as I expected once I got the hang of using the
Contrast display filter along with Opacity and Difference mode.
Creating the animation
Once you have your layers, how do you turn them into an animation?
The obvious solution, which I originally intended to use, is to save
as GIF and check the "animated" box. I tried that -- and discovered
that the color errors you get when converting an image to indexed make
a beautiful red lunar eclipse look absolutely awful.
So I threw together a Javascript script to animate images by loading
a series of JPEGs. That meant that I needed to export all the layers
from my GIMP image to separate JPG files.
GIMP doesn't have a built-in way to export all of an image's layers to
separate new images. But that's an easy plug-in to write, and a web
search found lots of plug-ins already written to do that job.
The one I ended up using was Lie Ryan's Python script in
How
to save different layers of a design in separate files;
though a couple of others looked promising (I didn't try them), such as
gimp-plugin-export-layers
and
save_all_layers.scm.
You can see the final animation here:
Lunar eclipse of
September 27, 2015: Animations.
Tags: gimp, photography, astronomy
[ 09:44 Oct 04, 2015 | More gimp | permalink to this entry ]
Tue, 03 Feb 2015
A few days ago, I wrote about
the snowpack we
get on the roof during snowstorms:
It doesn't just sit there until it gets warm enough to melt and run
off as water. Instead, the whole mass of snow moves together,
gradually, down the metal roof, like a glacier.
When it gets to the edge, it still doesn't fall; it somehow stays
intact, curling over and inward, until the mass is too great and it
loses cohesion and a clump falls with a Clunk!
The day after I posted that, I had a chance to see what happens as the
snow sheet slides off a roof if it doesn't have a long distance
to fall. It folds gracefully and gradually, like a sheet.
The undersides of the sheets as they slide off the roof are pretty interesting, too,
with varied shapes and patterns in addition to the imprinted pattern
of the roof.
But does it really move like a glacier? I decided to set up a camera
and film it on the move. I set the Rebel on a tripod with an AC power
adaptor, pointed it out the window at a section of roof with a good
snow load, plugged in the intervalometer I bought last summer, located
the manual to re-learn how to program it, and set it for a 30-second
interval. I ran that way for a bit over an hour -- long enough that
one section of ice had detached and fallen and a new section was
starting to slide down. Then I moved to another window and shot a series
of the same section of snow from underneath, with a 40-second interval.
I uploaded the photos to my workstation and verified that they'd
captured what I wanted. But when I stitched them into a movie, the
way I'd used for my
time-lapse
clouds last summer, it went way too fast -- the movie was over in
just a few seconds and you couldn't see what it was doing. Evidently
a 30-second interval is far too slow for the motion of a roof glacier
on a day in the mid-thirties.
But surely that's solvable in software? There must be a way to get avconv
to make duplicates of each frame, if I don't mind that the movie comes
out slightly jumpy. I read through the avconv manual, but it wasn't
very clear about this. After a lot of fiddling and googling and help
from a more expert friend, I ended up with this:
avconv -r 3 -start_number 8252 -i 'img_%04d.jpg' -vcodec libx264 -r 30 timelapse.mp4
In avconv, -r specifies a frame rate for the next file, input or
output, that will be specified. So the -r 3 specifies the frame rate
for the set of input images, -i 'img_%04d.jpg'; the later -r 30
overrides that 3 and sets a new frame rate for the output file,
timelapse.mp4. The start number is because the first file in my
sequence is named img_8252.jpg.
30, I'm told, is a reasonable frame rate for movies intended to be watched
on typical 60FPS monitors; 3 is a number I adjusted until the glacier in
the movie moved at what seemed like a good speed.
The movies came out quite interesting! The main movie, from the top,
is the most interesting; the one from the underside is shorter.
I wish I had a time-lapse of that folded sheet I showed above ...
but that happened overnight on the night after I made the movies.
By the next morning there wasn't enough left to be worth setting up
another time-lapse. But maybe one of these years I'll have a chance to
catch a sheet-folding roof glacier.
Tags: photography, time-lapse, glacier, snow
[ 19:46 Feb 03, 2015 | More photo | permalink to this entry ]
Thu, 16 Oct 2014
Last week both of the local mountain ranges turned gold simultaneously
as the aspens turned. Here are the Sangre de Cristos on a stormy day:
And then over the weekend, a windstorm blew a lot of those leaves away,
and a lot of the gold is gone now. But the aspen groves are still
beautiful up close ... here's one from Pajarito Mountain yesterday.
Tags: photography, nature
[ 13:37 Oct 16, 2014 | More nature | permalink to this entry ]
Thu, 02 Oct 2014
The wonderful summer thunderstorm season here seems to have died down.
But while it lasted, we had some spectacular double rainbows.
And I kept feeling frustrated when I took the SLR outside only to find
that my 18-55mm kit lens was nowhere near wide enough to capture it.
I could try
stitching
it together as a panorama, but panoramas of rainbows turn out to
be quite difficult -- there are no clean edges in the photo to tell
you where to join one image to the next, and automated programs like
Hugin won't even try.
There are plenty of other beautiful vistas here too -- cloudscapes,
mesas, stars. Clearly, it was time to invest in a wide-angle lens. But
how wide would it need to be to capture a double rainbow?
All over the web you can find out that a rainbow has a radius of 42
degrees, so you need a lens that covers 84 degrees to get the whole thing.
But what about a double rainbow? My web searches came to naught.
Lots of pages talk about double rainbows, but Google wasn't finding
anything that would tell me the angle.
I eventually gave up on the web and went to my physical bookshelf,
where Color and Light in Nature gave me a nice table
of primary and secondary rainbow angles of various wavelengths of light.
It turns out that the 42 degrees everybody quotes is for light of 600 nm
wavelength, an orange color. At that wavelength, the
primary angle is 42.0° and the secondary angle is 51.0°.
Armed with that information, I went back to Google and searched for
double rainbow 51 OR 102 angle
and found a nice Slate
article on a
Double
rainbow and lightning photo. The photo in the article, while
lovely (lightning and a double rainbow in the South Dakota badlands),
only shows a tiny piece of the rainbow, not the whole one I'm hoping
to capture; but the article does mention the 51-degree angle.
Okay, so 51°×2 captures both bows at 600 nm.
But what about other wavelengths?
A typical eye can see from about 400 nm (deep purple)
to about 760 nm (deep red). From the table in the book:
Wavelength (nm)   Primary   Secondary
400               40.5°     53.7°
600               42.0°     51.0°
700               42.4°     50.3°
Notice that while the primary angles get smaller with shorter
wavelengths, the secondary angles go the other way. That makes sense
if you remember that the outer rainbow has its colors reversed from
the inner one: red is on the outside of the primary bow, but the
inside of the secondary one.
So if I want to photograph a complete double rainbow in one shot,
I need a lens that can cover at least 108 degrees.
What focal length lens does that translate to?
Howard's
Astronomical Adventures has a nice focal length calculator.
If I look up my Rebel XSi on Wikipedia to find out that other
countries call it a 450D, and plug that in to the calculator, then
try various focal lengths (the calculator offers a chart but it didn't
work for me), it turns out that I need an 8mm lens, which will give me
a 108° 26′ 46″ field of view -- just about right.
So that's what I ordered -- a Rokinon 8mm fisheye. And it turns out to
be far wider than I need -- apparently the actual field of view in
fisheyes varies widely from lens to lens, and this one claims to have
a 180° field. So the focal length calculator isn't all that useful.
At any rate, this lens is plenty wide enough to capture those double
rainbows, as you can see.
About those books
By the way, that book I linked to earlier is apparently out of print
and has become ridiculously expensive. Another excellent book on
atmospheric phenomena is
Light
and Color in the Outdoors by Marcel Minnaert
(I actually have his earlier version, titled
The
Nature of Light and Color in the Open Air). Minnaert doesn't
give the useful table of frequencies and angles, but he has lots
of other fun and useful information on rainbows and related phenomena,
including detailed instructions for making rainbows indoors if you
want to measure angles or other quantities yourself.
Tags: nature, photography, rainbow
[ 13:37 Oct 02, 2014 | More photo | permalink to this entry ]
Mon, 22 Sep 2014
I had the opportunity to borrow a commercial crittercam
for a week from the local wildlife center.
Having grown frustrated with the high number of false positives on my
Raspberry Pi based
crittercam, I was looking forward to seeing how a commercial camera compared.
The Bushnell Trophycam I borrowed is a nicely compact,
waterproof unit, meant to strap to a tree or similar object.
It has an 8-megapixel camera that records photos to the SD card -- no
wi-fi. (I believe there are more expensive models that offer wi-fi.)
The camera captures IR as well as visible light, like the PiCam NoIR,
and there's an IR LED illuminator (quite a bit stronger than the cheap
one I bought for my crittercam) as well as what looks like a passive IR sensor.
I know the TrophyCam isn't immune to false positives; I've heard
complaints along those lines from a student who's using them to do
wildlife monitoring for LANL.
But how would it compare with my homebuilt crittercam?
I put out the TrophyCam the first night, with bait (sunflower seeds) in
front of the camera. In the morning I had ... nothing. No false
positives, but no critters either. I did have some shots of myself,
walking away from it after setting it up, walking up to it to adjust
it after it got dark, and some sideways shots while I fiddled with the
latches trying to turn it off in the morning, so I know it was
working. But no woodrats -- and I always catch a woodrat or two
in PiCritterCam runs. Besides, the seeds I'd put out were gone,
so somebody had definitely been by during the night. Obviously
I needed a more sensitive setting.
I fiddled with the options, changed the sensitivity from automatic
to the most sensitive setting, and set it out for a second night, side
by side with my Pi Crittercam. This time it did a little better,
though not by much: one nighttime shot with something in it,
plus one shot of someone's furry back and two shots of a mourning dove
after sunrise.
What few nighttime shots there were were mostly so blown out you
couldn't make out enough detail to be sure. Doesn't this camera know how to
adjust its exposure? The shot here has a creature in it. See it?
I didn't either, at first. It's just to the right of the bush.
You can just see the curve of its back and the beginning of a tail.
Meanwhile, the Pi cam sitting next to it caught eight reasonably exposed
nocturnal woodrat shots and two dove shots after dawn.
And 369 false positives where a leaf had moved in the wind or a dawn
shadow was marching across the ground. The TrophyCam only shot 47
photos total: 24 were of me, fiddling with the camera setup to get
them both pointing in the right direction, leaving 20 false positives.
So the Bushnell, clearly, gives you fewer false positives to hunt
through -- but you're also a lot less likely to catch an actual critter.
It also doesn't deal well with exposures in small areas and close distances:
its IR light source seems to be too bright for the camera to cope with.
I'm guessing, based on the name, that it's designed for shooting
deer walking by fifty feet away, not woodrats at a two-foot distance.
Okay, so let's see what the camera can do in a larger space. The next
two nights I set it up in large open areas to see what walked by. The
first night it caught four rabbit shots, with only five
false positives. The quality wasn't great, though: all long exposures
of blurred bunnies. The second night it caught nothing at all
overnight, but three rabbit shots the next morning. No false positives.
The final night, I strapped it to a piñon tree facing a little
clearing in the woods. Only two morning rabbits, but during the night
it caught a coyote. And only 5 false positives. I've never caught a
coyote (or anything else larger than a rabbit) with the PiCam.
So I'm not sure what to think. It's certainly a lot more relaxing to
go through the minimal output of the TrophyCam to see what I caught.
And it's certainly a lot easier to set up, and more waterproof, than
my jury-rigged milk carton setup with its two AC cords, one for the Pi
and one for the IR sensor. Being self-contained and battery operated
makes it easy to set up anywhere, not just near a power plug.
But it's made me rethink my pessimistic notion that I should give up
on this homemade PiCam setup and buy a commercial camera.
Even on its most sensitive setting, I can't make the TrophyCam
sensitive enough to catch small animals.
And the PiCam gets better picture quality than the Bushnell, not to
mention the option of hooking up a separate camera with flash.
So I guess I can't give up on the Pi setup yet. I just have to come up
with a sensible way of taming the false positives. I've been doing a lot
of experimenting with SimpleCV image processing, but alas, it's no better
at detecting actual critters than my simple pixel-counting script was.
But maybe I'll find the answer, one of these days. Meanwhile, I may
look into battery power.
Tags: crittercam, nature, raspberry pi, photography, maker
[ 14:29 Sep 22, 2014 | More hardware | permalink to this entry ]
Thu, 18 Sep 2014
A female hummingbird -- probably a black-chinned -- hanging out at
our window feeder on a cool cloudy morning.
Tags: birds, nature, photography
[ 19:04 Sep 18, 2014 | More nature/birds | permalink to this entry ]
Fri, 15 Aug 2014
A few weeks ago I wrote about building a simple
Arduino-driven
camera intervalometer to take repeat photos with my DSLR.
I'd been entertained by watching the clouds build and gather and dissipate
again while I stepped through all the false positives in my
crittercam,
and I wanted to try capturing them intentionally so I could make cloud
movies.
Of course, you don't have to build an Arduino device.
A search for timer remote control
or intervalometer
will find lots of good options around $20-30. I bought one
so I'll have a nice LCD interface rather than having to program an
Arduino every time I want to make movies.
Setting the image size
Okay, so you've set up your camera on a tripod with the intervalometer
hooked to it. (Depending on how long your movie is, you may also want
an external power supply for your camera.)
Now think about what size images you want.
If you're targeting YouTube, you probably want to use one of
YouTube's
preferred settings, bitrates and resolutions, perhaps 1280x720 or
1920x1080. But you may have some other reason to shoot at higher resolution:
perhaps you want to use some of the still images as well as making video.
For my first test, I shot at the full resolution of the camera.
So I had a directory full of big ten-megapixel photos with
filenames ranging from img_6624.jpg to img_6715.jpg.
I copied these into a new directory, so I didn't overwrite the originals.
You can use ImageMagick's mogrify to scale them all:
mogrify -scale 1280x720 *.jpg
I had an additional issue, though: rain was threatening and I didn't
want to leave my camera at risk of getting wet while I went dinner shopping,
so I moved the camera back under the patio roof. But with my fisheye lens,
that meant I had a lot of extra house showing and I wanted to crop
that off. I used GIMP on one image to determine the x, y, width and height
for the crop rectangle I wanted.
You can even crop to a different aspect ratio from your target,
and then fill the extra space with black:
mogrify -crop 2720x1450+135+315 -scale 1280 -gravity center -background black -extent 1280x720 *.jpg
If you decide to rescale your images to an unusual size, make sure
both dimensions are even, otherwise avconv will complain that
they're not divisible by two.
Finally: Making your movie
I found lots of pages explaining how to stitch
together time-lapse movies using mencoder, and a few
using ffmpeg. Unfortunately, in Debian, both are deprecated.
Mplayer has been removed entirely.
The ffmpeg-vs-avconv issue is apparently a big political war, and
I have no position on the matter, except that Debian has come down
strongly on the side of avconv and I get tired of getting nagged at
every time I run a program. So I needed to figure out how to use avconv.
I found some pages on avconv, but most of them didn't actually work.
Here's what worked for me:
avconv -f image2 -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 time-lapse.mp4
Update: I don't know where that -f image2 came from -- ignore it.
And avconv can take an input and an output frame rate; they're
both specified with -r, and the only way input and output are
distinguished is their position in the command line. So a more
appropriate command might be something like this:
avconv -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 -r 30 time-lapse.mp4
using 30 as a good output frame rate for people viewing on 60fps monitors.
Adjust the input frame rate, the -r 15, as needed to control the speed
of your time-lapse video.
Adjust the start_number and filename appropriately for the files you have.
Avconv produces an mp4 file suitable for uploading to YouTube.
So here is my little test movie:
Time Lapse Clouds.
Tags: photography, time-lapse
[ 12:05 Aug 15, 2014 | More photo | permalink to this entry ]
Sat, 09 Aug 2014
We're having a huge bloom of a lovely flower called pale trumpets
(Ipomopsis longiflora), and it turns out that sphinx moths
just love them.
The white-lined sphinx moth (Hyles lineata) is a moth the size
of a hummingbird, and it behaves like a hummingbird, too. It flies
during the day, hovering from flower to flower to suck nectar,
being far too heavy to land on flowers like butterflies do.
I've seen them before, on hikes, but only gotten blurry shots with
my pocket camera. But with the pale trumpets blooming, the sphinx
moths come right at sunset and feed until near dark. That gives a
good excuse to play with the DSLR, telephoto lens and flash ...
and I still haven't gotten a really sharp photo, but I'm making
progress.
Check out that huge eye! I guess you need good vision in order to make
your living poking a long wiggly proboscis into long skinny flowers
while laboriously hovering in midair.
Photos here:
White-lined
sphinx moths on pale trumpets.
Tags: nature, moth, photography
[ 21:23 Aug 09, 2014 | More nature | permalink to this entry ]
Wed, 16 Jul 2014
While testing my
automated critter
camera, I was getting lots of false positives caused by clouds
gathering and growing and then evaporating away. False positives
are annoying, but I discovered that it's fun watching the clouds grow
and change in all those photos
... which got me thinking about time-lapse photography.
First, a disclaimer: it's easy and cheap to just buy an
intervalometer. Search for timer remote control
or intervalometer
and you'll find plenty of options for
around $20-30. In fact, I ordered one.
But, hey, it's not here yet, and I'm impatient.
And I've always wanted to try controlling a camera from an Arduino.
This seemed like the perfect excuse.
Why an Arduino rather than a Raspberry Pi or BeagleBone? Just because
it's simpler and cheaper, and this project doesn't need much compute
power. But everything here should be applicable to any microcontroller.
My Canon Rebel XSi has a fairly simple wired remote control plug:
a standard 2.5mm stereo phone plug.
I say "standard" as though you can just walk into Radio Shack and buy
one, but in fact it turned out to be surprisingly difficult, even when
I was in Silicon Valley, to find them. Fortunately, I had found some,
several years ago, and had cables already wired up waiting for an experiment.
The outside connector ("sleeve") of the plug is ground.
Connecting ground to the middle ("ring") conductor makes the camera focus,
like pressing the shutter button halfway; connecting ground to the center
("tip") conductor makes it take a picture.
I have a wired cable release that I use for astronomy and spent a few
minutes with an ohmmeter verifying what did what, but if you don't
happen to have a cable release and a multimeter there are plenty of
Canon
remote control pinout diagrams on the web.
Now we need a way for the controller to connect one pin of the remote
to another on command.
There are ways to simulate that with transistors -- my
Arduino-controlled
robotic shark project did that. However, the shark was about a $40
toy, while my DSLR cost quite a bit more than that. While I
did find several people on the web saying they'd used transistors with
a DSLR with no ill effects, I found a lot more who were nervous about
trying it. I decided I was one of the nervous ones.
The alternative to transistors is to use something like a relay. In a relay,
voltage applied across one pair of contacts -- the signal from the
controller -- creates a magnetic field that closes a switch and joins
another pair of contacts -- the wires going to the camera's remote.
But there's a problem with relays: that magnetic field, when it
collapses, can send a pulse of current back up the wire to the controller,
possibly damaging it.
There's another alternative, though. An opto-isolator works like a
relay but without the magnetic pulse problem. Instead of a magnetic
field, it uses an LED (internally, inside the chip where you can't see it)
and a photo sensor. I bought some opto-isolators a while back and had
been looking for an excuse to try one. Actually two: I needed one for
the focus pin and one for the shutter pin.
How do you choose which opto-isolator to use out of the gazillion
options available in a components catalog? I don't know, but when I
bought a selection of them a few years ago, it included a 4N25, 4N26
and 4N27, which seem to be popular and well documented, as well as a
few other models that are so unpopular I couldn't even find a
datasheet for them. So I went with the 4N25.
Wiring an opto-isolator is easy. You do need a current-limiting resistor
in series with the input (presumably because internally it's an LED).
380Ω
is apparently a good value for the 4N25, but
it's not critical. I didn't have any 380Ω but I had a bunch of
330Ω so that's what I used. The inputs (the signals from the Arduino)
go between pins 1 and 2, with a resistor; the outputs (the wires to the
camera remote plug) go between pins 4 and 5, as shown in
the diagram on this
Arduino
and Opto-isolators discussion, except that I didn't use any pull-up
resistor on the output.
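Where does a number like 380Ω come from? Presumably the standard LED
current-limiting arithmetic. A quick sanity check in Python, with
guessed-at numbers (an Arduino pin supplies 5V; I'm assuming the 4N25's
internal LED drops about 1.2V and wants roughly 10mA -- those figures
are my guesses, not from the datasheet):

vcc = 5.0    # volts from the Arduino output pin
vf = 1.2     # assumed forward drop of the internal LED
i_f = 0.010  # assumed target forward current, 10 mA
print "R = %.0f ohms" % ((vcc - vf) / i_f)
# prints "R = 380 ohms"; 330 just pushes a bit more current, still fine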
Then you just need a simple Arduino program to drive the inputs.
Apparently the camera wants to see a focus half-press before it gets
the input to trigger the shutter, so I put in a slight delay there,
and another delay while I "hold the shutter button down" before
releasing both of them.
Here's some Arduino code to shoot a photo every ten seconds:
int focusPin = 6;            // drives the opto-isolator on the focus (ring) line
int shutterPin = 7;          // drives the opto-isolator on the shutter (tip) line
int focusDelay = 50;         // ms of "half-press" focus before firing
int shutterOpen = 100;       // ms to hold the "shutter button" down
int betweenPictures = 10000; // ms between successive photos

void setup()
{
    pinMode(focusPin, OUTPUT);
    pinMode(shutterPin, OUTPUT);
}

void snapPhoto()
{
    digitalWrite(focusPin, HIGH);    // half-press: let the camera focus
    delay(focusDelay);
    digitalWrite(shutterPin, HIGH);  // full press: take the picture
    delay(shutterOpen);
    digitalWrite(shutterPin, LOW);   // release both "buttons"
    digitalWrite(focusPin, LOW);
}

void loop()
{
    delay(betweenPictures);
    snapPhoto();
}
Naturally, since then we haven't had any dramatic clouds, and the
lightning storms have all been late at night after I went to bed.
(I don't want to leave my nice camera out unattended in a rainstorm.)
But my intervalometer seemed to work fine in short tests.
Eventually I'll make some actual time-lapse movies ... but that will
be a separate article.
Tags: arduino, hardware, photography, intervalometer, time-lapse, maker
[
18:31 Jul 16, 2014
More hardware |
permalink to this entry |
]
Thu, 03 Jul 2014
In my last crittercam installment,
the
NoIR night-vision crittercam, I was having trouble with false positives,
where the camera would trigger repeatedly after dawn as leaves moved
in the wind and the morning shadows marched across the camera's field of view.
I wondered if a passive infra-red (PIR) sensor would be the answer.
I got one, and the answer is: no. It was very easy to hook up, and
didn't cost much, so it was a worthwhile experiment; but it gets
nearly as many false positives as camera-based motion detection.
It isn't as sensitive to wind, but as the ground and the foliage heat
up at dawn, the moving shadows are just as much a problem as they were
with image-based motion detection.
Still, I might be able to combine the two, so I figure it's worth
writing up.
Reading inputs from the HC-SR501 PIR sensor
The PIR sensor I chose was the common HC-SR501 module.
It has three pins -- Vcc, ground, and signal -- and two potentiometer
adjustments.
It's easy to hook up to a Raspberry Pi because it can take 5 volts
in on its Vcc pin, but its signal is 3.3v (a digital signal -- either
motion is detected or it isn't), so you don't have to fool with
voltage dividers or other means to get a 5v signal down to the 3v
the Pi can handle.
I used GPIO pin 7 for signal, because it's right on the corner of the
Pi's GPIO header and easy to find.
There are two ways to track a digital signal like this. Either you can
poll the pin in an infinite loop:
import time
import RPi.GPIO as GPIO

pir_pin = 7     # GPIO pin carrying the PIR sensor's signal
sleeptime = 1   # seconds between polls

GPIO.setmode(GPIO.BCM)
GPIO.setup(pir_pin, GPIO.IN)

while True:
    if GPIO.input(pir_pin):
        print "Motion detected!"
    time.sleep(sleeptime)
or you can use interrupts: tell the Pi to call a function whenever it
sees a low-to-high transition on a pin:
import time
import RPi.GPIO as GPIO

pir_pin = 7
sleeptime = 300

def motion_detected(pir_pin):
    # Called by RPi.GPIO whenever it sees a rising edge on pir_pin.
    print "Motion Detected!"

GPIO.setmode(GPIO.BCM)
GPIO.setup(pir_pin, GPIO.IN)
GPIO.add_event_detect(pir_pin, GPIO.RISING, callback=motion_detected)

while True:
    print "Sleeping for %d sec" % sleeptime
    time.sleep(sleeptime)
Obviously the second method is more efficient. But I already had a
loop set up checking the camera output and comparing it against
previous output, so I tried that method first, adding support to my
motion_detect.py
script. I set up the camera pointing at the wall, and, as root, ran the script
telling it to use a PIR sensor on pin 7, and the local and remote
directories to store photos:
# python motion_detect.py -p 7 /tmp ~pi/shared/snapshots/
and whenever I walked in front of the camera, it triggered and took
a photo. That was easy!
Reliability problems with add_event_detect
So easy that I decided to switch to the more efficient interrupt-driven
model. Writing the code was easy, but I found it triggered more often:
if I walked in front of the camera (and stayed the requisite 7 seconds
or so that it takes raspistill to get around to taking a photo),
when I walked back to my desk, I would find two photos, one showing my
feet and the other showing nothing. It seemed like it was triggering
when I got there, but also when I left the scene.
A bit of web searching indicates this is fairly common: that with RPi.GPIO
a lot of people see triggers on both rising and falling edges -- e.g. when
the PIR sensor starts seeing motion, and when it stops seeing motion
and goes back to its neutral state -- when they've asked for just
GPIO.RISING. Reports for this go back to 2011.
On the other hand, it's also possible that instead of seeing a GPIO
falling edge, what was happening was that I was getting multiple calls
to my function while I was standing there, even though the RPi hadn't
finished processing the first image yet. To guard against that, I put
a line at the beginning of my callback function that disabled further
callbacks, then I re-enabled them at the end of the function after the
Pi had finished copying the photo to the remote filesystem. That reduced
the false triggers, but didn't eliminate them entirely.
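One way to do that guarding with RPi.GPIO -- a sketch of the idea, not
necessarily exactly what my script does, with handle_motion() as a
hypothetical stand-in for the photo-snapping and copying work:

def motion_detected(pir_pin):
    # Ignore further edges while we're still handling this one:
    GPIO.remove_event_detect(pir_pin)
    handle_motion()    # hypothetical stand-in for the real work
    # Done: start watching for motion again.
    GPIO.add_event_detect(pir_pin, GPIO.RISING, callback=motion_detected)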
Oh, well. The sun was getting low by this point, so I stopped
fiddling with the code and put the camera out in the yard with a pile
of birdseed and peanut suet nuggets in front of it. I powered on,
sshed to the Pi and ran the motion_detect script, came back inside
and ran a tail -f on the output file.
I had dinner and worked on other things, occasionally checking the
output -- nothing! Finally I sshed to the Pi and ran ps aux
and discovered the script was no longer running.
I started it again, this time keeping my connection to the Pi active
so I could see when the script died. Then I went outside to check the
hardware. Most of the peanut suet nuggets were gone -- animals had
definitely been by. I waved my hands in front of the camera a few
times to make sure it got some triggers.
Came back inside -- to discover that Python had gotten a segmentation
fault. It turns out that nifty GPIO.add_event_detect() code isn't all
that reliable, and can cause Python to crash and dump core. I ran it
a few more times and sure enough, it crashed pretty quickly every time.
Apparently GPIO.add_event_detect
needs a bit more debugging,
and isn't safe to use in a program that has to run unattended.
Back to polling
Bummer! Fortunately, I had saved the polling version of my program, so
I hastily copied that back to the Pi and started things up again.
I triggered it a few times with my hand, and everything worked fine.
In fact, it ran all night and through the morning, with no problems
except the excessive number of false positives, already mentioned.
False positives weren't a problem at all during the night. I'm fairly
sure the problem happens when the sun starts hitting the ground. Then
there's a hot spot that marches along the ground, changing position in
a way that's all too obvious to the infra-red sensor.
I may try cross-checking between the PIR sensor and image changes from
the camera. But I'm not optimistic about that working: they both get
the most false positives at the same times, at dawn and dusk when the
shadow angle is changing rapidly. I suspect I'll have to find a
smarter solution, doing some image processing on the images as well
as cross-checking with the PIR sensor.
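The cross-check itself would be trivial -- something like this, where
pir_triggered() and image_changed() are hypothetical stand-ins for the
two existing tests:

if pir_triggered() and image_changed(prev_image, cur_image):
    snap_photo()    # only shoot when both detectors agree

The hard part isn't the "and"; it's that both inputs go haywire at the
same times of day.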
I've been uploading photos from my various tests here:
Tests of the
Raspberry Pi Night Vision Crittercam.
And as always, the code is on
github:
scripts/motioncam with some basic documentation on my site:
motion-detect.py:
a motion sensitive camera for Raspberry Pi or other Linux machines.
(I can't use github for the documentation because I can't seem to find
a way to get github to display html as anything other than source code.)
Tags: crittercam, hardware, raspberry pi, nature, photography, maker
[
20:13 Jul 03, 2014
More hardware |
permalink to this entry |
]
Thu, 26 Jun 2014
When I built my Raspberry Pi motion camera
(http://shallowsky.com/blog/hardware/raspberry-pi-motion-camera.html,
and part 2), I always had the NoIR camera in the back of my mind. The NoIR is a
version of the Pi camera module with the infra-red blocking
filter removed, so you can shoot IR photos at night without disturbing
nocturnal wildlife (or alerting nocturnal burglars, if that's your target).
After I got the daylight version of the camera working, I ordered a NoIR
camera module and plugged it in to my RPi. I snapped some daylight
photos with raspistill and verified that it was connected and working;
then I waited for nightfall.
In the dark, I set up the camera and put my cup of hot chocolate in
front of it. Nothing. I hadn't realized that although digital camera
sensors are sensitive in the near IR, the wavelengths only slightly
longer than visible light, they aren't sensitive anywhere near
the IR wavelengths that hot objects emit. For that, you need a special
thermal camera. For a near-IR camera like the Pi NoIR, you need an
IR light source.
Knowing nothing about IR light sources, I did a search and came up
with something called a
"Infrared IR 12 Led Illuminator Board Plate for CCTV Security CCD Camera"
for about $5. It seemed similar to the light sources used on a few
pages I'd found for home-made night vision cameras, so I ordered it.
Then I waited, because I stupidly didn't notice until a week and a half
later that it was coming from China and wouldn't arrive for three weeks.
Always check the shipping time when ordering hardware!
When it finally arrived, it had a tiny 2-pin connector that I couldn't
match locally. In the end I bought a package of female-female SchmartBoard
jumpers at Radio Shack which were small enough to make decent contact
on the light's tiny-gauge power and ground pins.
I soldered up a connector that would let me use a universal power
supply, taking a guess that it wanted 12 volts (most of the cheap LED
rings for CCD cameras seem to be 12V, though this one came with no
documentation at all). I was ready to test.
Testing the IR light
One problem with buying a cheap IR light with no documentation:
how do you tell whether your power supply is working, when the light
itself is completely invisible?
The only way to find out was to check on the Pi. I didn't want to have
to run back and forth between the dark room where the camera was set
up and the desktop where I was viewing raspistill images. So I
started a video stream on the RPi:
$ raspivid -o - -t 9999999 -w 800 -h 600 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264
Then, on the desktop: I ran vlc, and opened the network stream:
rtsp://pi:8554/
(I have a "pi" entry in /etc/hosts, but using an IP address also works).
Now I could fiddle with hardware in the dark room while looking through
the doorway at the video output on my monitor.
It took some fiddling to get a good connection on that tiny connector
... but eventually I got a black-and-white view of my darkened room,
just as I'd expect under IR illumination.
I poked some holes in the milk carton and used twist-ties to secure
the light source next to the NoIR camera.
Lights, camera, action
Next problem: mute all the blinkenlights, so my camera wouldn't look
like a Christmas tree and scare off the nocturnal critters.
The Pi itself has a relatively dim red run light, and it's inside the
milk carton so I wasn't too worried about it.
But the Pi camera has quite a bright red
light that goes on whenever the camera is being used.
Even through the thick milk carton bottom, it was glaring and obvious.
Fortunately, you can
disable
the Pi camera light: edit /boot/config.txt and add this line
disable_camera_led=1
My USB wi-fi dongle has a blue light that flickers as it gets traffic.
Not super bright, but attention-grabbing. I addressed that issue
with a triple thickness of duct tape.
The IR LEDs -- remember those invisible, impossible-to-test LEDs?
Well, it turns out that in darkness, they emit a faint but still
easily visible glow. Obviously there's nothing I can do about that --
I can't cover the camera's only light source! But it's quite dim, so
with any luck it's not spooking away too many animals.
Results, and problems
For most of my daytime testing I'd used a threshold of 30 -- meaning
a pixel was considered to have changed if its value differed by more
than 30 from the previous photo. That didn't work at all in IR: changes
are much more subtle since we're seeing essentially a black-and-white
image, and I had to divide by three and use a sensitivity of 10 or 11
if I wanted the camera to trigger at all.
With that change, I did capture some nocturnal visitors, and some
early morning ones too. Note the funny colors on the daylight shots:
that's why cameras generally have IR-blocking filters if they're not
specifically intended for night shots.
Here are more photos, and larger versions of those:
Images from my
night-vision camera tests.
But I'm not happy with the setup. For one thing, it has far too many
false positives. Maybe one out of ten or fifteen images actually has
an animal in it; the rest just triggered because the wind made the
leaves blow, or because a shadow moved or the color of the light changed.
A simple count of differing pixels is clearly not enough for this task.
Of course, the software could be smarter about things: it could try to
identify large blobs that had changed, rather than small changes
(blowing leaves) all over the image. I already know
SimpleCV
runs fine on the Raspberry Pi, and I could try using it to do
object detection.
But there's another problem with detection purely through camera images:
the Pi is incredibly slow to capture an image. It takes around 20 seconds
per cycle; some of that is waiting for the network but I think most of
it is the Pi talking to the camera. With quick-moving animals,
the animal may well be gone by the time the system has noticed a change.
I've caught several images of animal tails disappearing out of the
frame, including a quail who visited yesterday morning. Adding smarts
like SimpleCV will only make that problem worse.
So I'm going to try another solution: hooking up an infra-red motion detector.
I'm already working on setting up tests for that, and should have a
report soon. Meanwhile, pure image-based motion detection has been
an interesting experiment.
Tags: crittercam, hardware, raspberry pi, photography, programming, maker
[
13:31 Jun 26, 2014
More hardware |
permalink to this entry |
]
Sat, 24 May 2014
I wrote recently about the hardware involved in my
Raspberry
Pi motion-detecting wildlife camera.
Here are some more details.
The motion detection software
I started with the simple and clever
motion-detection
algorithm posted by "brainflakes" in a Raspberry Pi forum.
It reads a camera image into a PIL (Python Imaging Library) Image object,
then compares bytes inside that Image's buffer to see how many pixels
have changed, and by how much. It allows for monitoring only a test
region instead of the whole image, and can even create a debug image
showing which pixels have changed. A perfect starting point.
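The core of the idea fits in a few lines. Here's my own minimal sketch
of that pixel-counting approach in PIL -- a simplification, not
brainflakes' actual code, and the threshold is just a placeholder:

from PIL import Image

def changed_pixels(file1, file2, threshold=30):
    im1 = Image.open(file1)
    im2 = Image.open(file2)
    buf1, buf2 = im1.load(), im2.load()
    changed = 0
    for x in xrange(im1.size[0]):
        for y in xrange(im1.size[1]):
            # Compare green channels: green carries most of the detail.
            if abs(buf1[x, y][1] - buf2[x, y][1]) > threshold:
                changed += 1
    return changed

Then you declare "motion" when the changed-pixel count exceeds a second
threshold, the sensitivity.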
Camera support
As part of the PiDoorbell project,
I had already written a camera wrapper that could control either a USB
webcam or the pi camera module, if it was installed.
Initially that plugged right in.
But I was unhappy with the Pi camera's images --
it can't focus closer than five feet (though a commenter to my
previous article pointed out that it's possible to
break
the seal on the lens and refocus it manually).
Without refocusing, the wide-angle lens means
that a bird five feet away is pretty small, and even when you get
something in focus the images aren't very sharp. And a web search for
USB webcams with good optical quality was unhelpful -- the few people
who care about webcam image quality seem to care mostly about getting
the widest-angle lens possible, the exact opposite of what I wanted
for wildlife.
Was there any way I could hook up a real camera, and drive it from the
Pi over USB as though it were a webcam? The answer turned out to be
gphoto2.
But only a small subset of cameras are controllable over USB with gphoto2.
(I think that's because the cameras don't allow control, not because
gphoto doesn't support them.) That set didn't include any of the
point-and-shoot cameras we had in the house; and while my Rebel DSLR
might be USB controllable, I'm not comfortable about leaving it out in
the backyard day and night.
With gphoto2's camera compatibility list in one tab and ebay in another,
I looked for a camera that was available, cheap
(since I didn't know if this was going to work at all),
and controllable. I ordered a used Canon A520.
As I waited for it to arrive, I fiddled with my USB-or-pi-camera
to make a start at adding gphoto2 support. I ended up refactoring the
code quite a bit to make it easy to add new types of cameras besides
the three it supports now -- pi, USB webcam, and gphoto2.
I called the module
pycamera.
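The refactor follows the obvious pattern: a tiny base class defining
the interface, plus one subclass per camera type. A hypothetical
sketch, not pycamera's actual API:

import subprocess

class CameraBase(object):
    def take_still(self, outfile):
        raise NotImplementedError

class Gphoto2Camera(CameraBase):
    def take_still(self, outfile):
        # Shell out to gphoto2 to shoot one frame and fetch it.
        subprocess.call(["gphoto2", "--capture-image-and-download",
                         "--filename", outfile])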
Using gphoto2
When the camera arrived, I spent quite a while fiddling with gphoto2
learning how to capture images. That turns out to be a bit tricky --
there's no documentation on the various options, apparently because
the options may be different for every camera, so you have to run
$ gphoto2 --set-config capture=1 --list-config
to get a list of options the camera supports, and then, for each of
those options, run
$ gphoto2 --get-config name [option]
to see what values that option can take.
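Once the configuration is sorted out, actually shooting a photo and
downloading it is a single command, something like (the filename is
just an example):
$ gphoto2 --capture-image-and-download --filename /tmp/snap.jpg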
Dual-camera option
Once I got everything working, the speed and shutter noise of capturing
made me wonder if I should worry about the lifespan of the Canon if I
used it to capture snapshots every 15 seconds or so, day and night.
Since I still had the Pi cam hooked up, I fiddled the code so that I
could use the pi cam to take the test images used to detect motion,
and save the real camera for the high-resolution photos when something
actually changes. Saves wear on the more expensive camera, and it's
certainly a lot quieter that way.
Uploading
To get the images off the Pi to where other computers can see them,
I use sshfs to mount a filesystem from another machine on our local net.
Unfortunately, sshfs on the pi doesn't work quite right.
Apparently it uses out-of-date libraries (and gives a warning
to that effect).
You have to be root to use it at all, unlike newer versions of sshfs,
and then, regardless of the permissions of the remote filesystem or
where you mount it locally,
you can only access the mounted filesystem as root.
Fortunately I normally run the motion detector as root anyway, because
the picamera Python module requires it, and I've just gotten in the
habit of using it even when I'm not using python-picamera.
But if you wanted to run as non-root, you'd probably have to use
NFS or some other remote filesystem protocol. Or find a newer version
of sshfs.
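For reference, the mount itself -- run as root, per the limitation
above, and with hostname and paths as placeholders -- looks something
like:
# sshfs user@otherhost:/export/snapshots /root/shared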
Testing the gphoto setup
For reference, here's an image using the previous version of the setup,
with the Raspberry Pi camera module. Click on the image to see a crop of
the full-resolution image in daylight -- basically the best the camera can do.
Definitely not what I was hoping for.
So I eagerly set up the tripod and hooked up the setup with the Canon.
I had a few glitches in trying to test it. First, no birds; then I
discovered Dave had stolen my extension cord -- though I didn't
realize that until after the camera's batteries needed recharging.
A new extension cord and an external power supply for the camera,
and I was back in business the next day.
And the results were worth it. As you can see here, using a
real camera does make a huge difference. I used a zoom setting of 6
(it goes to 12). Again, click on the image to see a crop of the
full-resolution photo.
In the end, I probably will order one of the NoIR Raspberry Pi cameras,
just to have an easy way of seeing what sorts of critters visit us at
night. But for daylight shots, an external camera is clearly the way
to go.
The scripts
The current version of the script is
motion_detect.py
and of course it needs my
pycamera
module.
And here's
documentation
for the motion detection camera.
Tags: crittercam, hardware, raspberry pi, photography, maker
[
20:09 May 24, 2014
More hardware |
permalink to this entry |
]
Thu, 15 May 2014
I've been working on an automated wildlife camera, to catch birds at
the feeder, and the coyotes, deer, rabbits and perhaps roadrunners (we
haven't seen one yet, but they ought to be out there) that roam the
juniper woodland.
This is a similar project to the
PiDoorbell project presented at PyCon, and my much earlier
proximity
camera project that used an Arduino and a plug computer
but for a wildlife camera I didn't want to use a sonar rangefinder.
For one thing, it won't work with a bird feeder -- the feeder is
always there, so the addition of a bird won't change anything as
far as a sonar rangefinder is concerned. For another, the rangefinders
aren't very accurate beyond about six feet.
Starting with a Raspberry Pi was fairly obvious.
It's low power, cheap, it even has an optional integrated camera module
that has reasonable resolution, and I could re-use a lot of the
camera code I'd already written for PiDoorbell.
I patched together some software for testing.
I'll write in more detail about the software in a separate article,
but I started with the simple
motion
detection code posted by "brainflakes" in the Raspberry Pi forums.
It's a slick little piece of code you'll find in various versions
all over the net; it uses PIL, the Python Imaging Library, to compare
a specified region from successive photos to see how much has changed.
One aside about the brainflakes code: most of the pages you'll find
referencing it tell you to install python-imaging-tk. But there's
nothing in the code that uses tk, and python-imaging is really all
you need to install. I wrote a GUI wrapper for my motion detection code
using gtk, so I had no real need to learn the Tk equivalent.
Once I had some software vaguely working, it was time for testing.
The hardware
One big problem I had to solve was the enclosure. I needed something
I could put the Pi in that was moderately waterproof -- maybe not
enough to handle a raging thunderstorm, but rain or snow can happen
here at any time without much warning. I didn't want to have to spend
a lot of time building and waterproofing it, because this is just a
test run and I might change everything in the final version.
I looked around the house for plastic objects that could be repurposed
into a camera enclosure. A cookie container from the local deli looked
possible, but I wasn't quite happy with it. I was putting the last of
the milk into my morning coffee when I realized I held in my hand a
perfect first-draft camera enclosure.
A milk carton must be at least somewhat waterproof, right?
Even if it's theoretically made of paper.
I could use the flat bottom as a place to mount the Pi camera with its
two tiny screw holes,
and then cut a visor to protect the camera from rain.
It didn't take long to whip it all together: a little work with an
X-acto knife, a little duct tape. Then I put the Pi inside it, took it
outside and bungeed it to the fence, pointing at the bird feeder.
A few issues I had to resolve:
Raspbian has rather complicated networking. I was using a USB wi-fi dongle,
but I had trouble getting the Pi to boot up properly configured to
talk to our WPA router. In Raspbian, networking is configured in
about six
different places, any one of which might do something like prioritize
the not-connected eth0 over the wi-fi dongle, making it impossible
to connect anywhere. I ended up uninstalling Network Manager and
turning off ifplugd and everything else I could find so it would
use my settings in /etc/network/interfaces, and in the end, even
though ifconfig says it's still prioritizing eth0 over wlan0, I got
it talking to the wi-fi.
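For what it's worth, a minimal wlan0 stanza for a WPA router in
/etc/network/interfaces looks something like this (SSID and passphrase
are placeholders):

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "mynetwork"
    wpa-psk "mypassphrase"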
I also had to run everything as root.
The python-picamera module imports RPi.GPIO and
needs access to /dev/mem, and even if you chmod /dev/mem to give
yourself adequate permissions, it still won't work except as root.
But if I used ssh -X to the Pi and then ran my GUI program with sudo,
I couldn't display any windows because the ssh permission is for the
"pi" user, not root.
Eventually I gave up on sudo, set a password for root, and used
ssh -X root@pi
to enable X.
The big issue: camera quality
But the real problem turned out to be camera quality.
The Raspberry Pi camera module has a resolution of 2592 x 1944, or 5
megapixels. That's terrific, far better than any USB webcam. Clearly
it should be perfect for this task.
Update: see below. It's not a good camera, but it turns out I had a
lens problem and it's not this bad.
So, the Pi camera module might be okay if all I want is a record of
what animals visit the house. This image is good enough, just barely,
to tell that we're looking at a house finch (only if we already rule
out similar birds like purple finch and Cassin's finch -- the photo
could never give us enough information to distinguish among similar birds).
But what good is that? I want decent photos that I can put on my web site.
I have a USB camera, but it's only one megapixel and gives lousy
images, though at least they're roughly in focus so they're better
than the Pi cam.
So now I'm working on a setup where I drive an external camera
from the Pi using gphoto2. I have most of the components working,
but the code was getting ugly handling three types of cameras instead
of just two, so I'm refactoring it. With any luck I'll have something
to write about in a week or two.
Meanwhile, the temporary code is in my
github rpi
directory -- but it will probably move from there soon.
I'm very sad that the Pi camera module turned out to be so bad. I was
really looking forward to buying one of the No-IR versions and setting up
a night wildlife camera. I've lost enthusiasm for that project
after seeing how bad the images were. I may have to investigate how
to remove the IR filter from a point-and-shoot camera, after I get
the daylight version working.
Update, a few days later: It turns out I had some spooge on the lens.
It's not quite as bad as I made it out to be.
Here's a sample.
It's still not a great camera, and it can't focus anywhere near as
close as the 2 feet I've seen claimed -- 5 feet is about the closest
mine can focus -- so I can't get very close to the wildlife, which
was a lot of the point of building a wildlife camera.
I've seen suggestions of putting reading glasses in front of the lens
as a cheap macro adaptor.
Instead, I'm going ahead with the gphoto2 option, which is about ready to
test -- but the noIR Pi camera module might be marginally acceptable for
a night wildlife camera.
Tags: crittercam, hardware, raspberry pi, photography, maker
[
13:30 May 15, 2014
More hardware |
permalink to this entry |
]
Sat, 21 Dec 2013
I inherited a big pile of photo albums from my mother when she passed
away, and when time permits, I've been scanning them in.
Today I scanned in some old photos of my father. He used to be an
actor, before I was born, and there's a wonderful collection of
shots I'd never seen before showing him in various roles and costumes.
What a marvelous find. I've only uploaded a few of them so far --
there's far more needing to be scanned -- but what I have is here:
O. Raymond Peck,
actor.
Tags: family, photography
[
19:19 Dec 21, 2013
More misc |
permalink to this entry |
]
Wed, 16 Jan 2013
The weather was a bit warmer today than it has been, so I snuck off
for an hour's hike at Arastradero, where I was amazed by all the
western bluebirds out enjoying the sunny day. I counted three of
them just on the path from the parking lot to the road crossing.
Bold, too -- they let me get close enough to snap a shot with my
pocket camera.
Farther up the trail, a white-tailed kite was calling as it
soared, and a large falcon flew by, too far away and too backlit
for me to identify it for sure as a peregrine.
But then I spotted an even more unusual beast -- a phantom horse
rearing out of the ground, ears pricked forward, eyes and mouth open
and mane whipped by a wind we could not feel on this pleasant, windless day.
Dave always teases me about my arboronecrophotography inclinations
(I like to take pictures of dead trees).
But how could I resist trying to capture a creature like this?
Tags: nature, birds, photography
[
20:26 Jan 16, 2013
More nature |
permalink to this entry |
]
Fri, 21 Sep 2012
This morning, the last space shuttle, Endeavour, made a piggyback
fly-by of California cities prior to landing at LAX, where it will be
trucked to its final resting place in Exposition Park.
And what science and astronomy fan could resist a once-in-a-lifetime
chance to see the last shuttle in flight, piggyback on its 747 transporter?
Events kept me busy all morning, so I was late getting away.
Fortunately I'd expected that and planned for it. While watching the
flyby from Griffith Observatory sounded great, I suspected there would
be huge crowds, no parking and there's no way I could get there in time.
The Times suggested Universal City -- which I took to mean that
there would be huge crowds and traffic there too. So I picked a place
off the map, Blair Dr., that looked like it was easy to get to,
reasonably high and located in between Griffith and Universal.
It turned out to be a good choice. There were plenty of people there,
but I found a parking spot a few blocks away from where everybody
was hanging out and walked back to the viewpoint where I'd seen the
crowds.
I looked down and the first thing I saw was a smashed jumbo jet among
the wreckage of some houses. Um ... not the way I wanted to see the
shuttle! But then I realized I was looking at the Universal Studios
back lot. Right. Through binoculars I could even see the tram where
the folks on the studio tour went right by the "plane crash".
And I could look across to Universal City, where the
crowds made me happy I'd decided against going there -- I bet they
had some traffic jams too.
The crowd was friendly and everybody was sharing the latest rumors
of the shuttle's location -- "It just flew over Santa Barbara!"
"It's over West Hollywood -- get ready!" "Nope, now it's going west
again, might be a while." That helped with the wait in the hot sun.
Finally, "It's coming!" And we could see it, passing south of the
crowds at Universal City and coming this way ... and disappearing
behind some trees. We all shifted around so we'd see it when it
cleared the trees.
Only it didn't! We only got brief glimpses of it, between branches,
as the shuttle flew off toward Griffith Observatory. Oh no! Were we
in exactly the wrong location?
Then the word spread, from people farther down the road -- "It's
turning -- get ready for another pass!" This time it came by south of
us, giving us all a beautiful clear view as the 747 flew by with
the shuttle and its two fighter-plane escorts.
We hung around for a few more minutes, hoping for another pass, but
eventually we dispersed. The shuttle and its escorts flew on to LAX,
where it will be unloaded and trucked to Exposition Park. I feel lucky
to have gotten such a beautiful view of the last shuttle flight.
Photos: Space shuttle Endeavour flyover.
Tags: astronomy, photography
[
21:35 Sep 21, 2012
More science/astro |
permalink to this entry |
]
Wed, 06 Jun 2012
After a heart-stopping day of rain on Monday, Tuesday -- the day of
the Venus transit astronomers had been anticipating for decades --
dawned mostly clear.
For the 3 pm ingress, Dave and I set up in the backyard -- a 4.5-inch
Newtonian, a Takahashi FS102, and an 80mm f/6 refractor with an
eyepiece projection camera mount. I'd disliked the distraction during
the annular eclipse of switching between eyepiece and camera mount,
and was looking forward to having a dedicated camera scope this time.
Venus is big! There wasn't any trouble seeing it once it started its
transit. I was surprised at how slowly it moved -- so much slower than
a Mercury transit, though it shouldn't have been a surprise, since I
knew the event would take the rest of the evening, and wouldn't be
finished until well past our local sunset.
The big challenge of the day was to see the aureole -- the arc of Venus'
atmosphere standing out from the sun. With the severely windy weather
and turbulent air (lots of cumulus clouds) I wasn't hopeful. But
as Venus reached the point where only about 1/3 of its disk remained
outside the sun, the aureole became visible as a faint arc.
We couldn't see it in the 4.5-inch, and it definitely isn't visible
in the poorly-focused photos from the 80mm, but in the FS102 it
was definitely there.
About those poorly focused pictures: I hadn't used the 80mm, an Orion
Express, for photography before. It turned out its 2-inch Crayford
focuser, so nice for visual use, couldn't hold the weight of
a camera. With the sun high overhead, as soon as I focused,
the focuser tube would start to slide downward and I couldn't lock it.
I got a few shots through the 80mm, but had better luck holding a
point-and-shoot camera to the eyepiece of the FS102.
Time for experiments
Once the excitement of ingress was over,
there was time to try some experiments. I'd written about binocular
projection as a way anyone, without special equipment, could watch
the transit; so I wanted to make sure that worked. I held my cheap
binoc (purchased for $35 many years ago at Big 5) steady on top
of a tripod -- I never have gotten around to making a tripod mount
for it; though if I'd wanted a more permanent setup, duct tape would
have worked.
I couldn't see much projecting against the ground,
and it was too windy to put a piece of paper or cardboard down, but
an old whiteboard made a perfect solar projection screen. There was no
trouble at all seeing Venus and some of the larger sunspots projected
onto the whiteboard.
As the transit went on, we settled down to a routine of popping
outside the office every now and then to check on the transit.
Very civilized. But the transit lasted until past sunset, and our
western horizon is blocked by buildings.
I wanted some sunset shots. So we took a break for dinner, then drove
up into the hills to look for a place with a good ocean view.
The sunset expedition
Our first idea, a pullout off Highway 9,
had looked promising in Google Earth but turned out to have trees
and a hill (that somehow hadn't shown up in Google Earth) blocking
the sunset. So back up highway 9 and over to Russian Ridge, where
I remembered a trail entrance on the western side of the ridge that
might serve. Sure enough, it gave us a great sunset view. There was
only parking space for one car, but fortunately that's all we needed.
And we weren't the only transit watchers there -- someone else had
hiked in from the main parking lot carrying a solar filter, so we
joined him on the hillside as we waited for sunset.
I'd brought the 80mm refractor for visual observing and the 90 Mak
for camerawork. I didn't have a filter for the Mak, but Dave had some
Baader solar film, so earlier in the afternoon I'd whipped up a filter.
A Baskin-Robbins ice cream container lid turned out to be the perfect
size. Well, almost perfect -- it was just a trifle too large, but some
pads cut from an old mouse pad and taped inside the lid made it fit
perfectly. Dave used the Baader film, some foam and masking tape to
make a couple of filters for his binocular.
The sun sank through a series of marine cloud layers. Through the scopes
it looked more like Jupiter than the sun, with Jupiter's banding -- and Venus'
silhouette even looked like the shadow of one of Jupiter's moons.
Finally the sun got so low, and so obscured by clouds, that it seemed
safe to take the solar filter off the 90mm camera rig. (Of course, we
kept the solar filters on the other scope and binocular for visual observing.)
But even at the camera's fastest shutter speed, 1/4000, the sun came out
vastly overexposed with 90mm of aperture feeding it at f/5.6.
I had suspected that might be a problem, so I'd prepared a couple of
off-axis stops for the Mak, to cover most of the aperture leaving only a
small hole open. Again, BR ice cream containers turned out to be
perfect. I painted the insides flat black to eliminate reflections,
then cut holes in the ends -- one about the size of a quarter, the
other quite a bit larger. It turned out I didn't use the larger stop
at all, and it would have been handy to have one smaller than the
quarter-sized one -- even with that stop, the sun was overexposed at
first even at 1/4000 and I had to go back to the solar filter for a while.
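For the record, the arithmetic on that quarter-sized stop: light
gathered scales with aperture area, so assuming the hole was about the
24mm diameter of a US quarter,

import math
full, hole = 90.0, 24.0      # mm: full aperture vs. the stopped-down hole
ratio = (full / hole) ** 2   # light scales with aperture area
print "%.0fx less light, about %.1f stops" % (ratio, math.log(ratio, 2))
# roughly 14x less light, about 3.8 stops

-- evidently still not enough of a cut for a sun only partly dimmed
by clouds.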
I was happy with the results, though -- I got a nice series of sunset
photos complete with Venus silhouette.
More clouds rolled in as we packed up, providing a gorgeous
blue-and-pink sunset sky backdrop for our short walk back to the car.
What a lovely day for such a rare celestial event!
Photos here:
Venus Transit, June 5 2012.
Tags: astronomy, science, photography
[
12:48 Jun 06, 2012
More science/astro |
permalink to this entry |
]
Wed, 21 Jul 2010
On Linux Planet yesterday: an article on how to write scripts for CHDK,
the Canon Hack Development Kit -- Part 3 in my series on CHDK.
Time-Lapse
Photography with your Inexpensive Canon Camera (CHDK p. 3)
I found that CHDK scripting wasn't quite as good as I'd hoped -- some
of the functions, especially the aperture and shutter setting, were
quite flaky on my A540 so it really didn't work to write a bracketing
script. But it's fantastic for simple tasks like time-lapse photography,
or taking a series of shots like the Grass Roots Mapping folk do.
If you're at OSCON and you like scripting and photos, check out my
session on Thursday afternoon at 4:30:
Writing
GIMP Plug-ins and Scripts, in which I'll walk through several GIMP
scripts in Python and Script-Fu and show some little-known tricks
you can do with Python plug-ins.
Tags: photography, writing, programming, mapping, conferences, oscon, speaking
[
10:31 Jul 21, 2010
More photo |
permalink to this entry |
]
Thu, 08 Jul 2010
Part 2 of my series on hacking Canon point-and-shoot cameras with CHDK:
Turn
Your Compact Canon Camera Into a Super-Camera With CHDK,
discusses some of CHDK's major features, like RAW image file
support, "zebra mode" and on-screen histograms, and custom video modes
(ever been annoyed that you can't zoom while shooting a video?)
Perhaps equally important, it discusses how to access these modes
and CHDK's other special menus, how to load CHDK automatically
whenever you power the camera on, and how to disable it temporarily.
Part 3, yet to come, will discuss how to write CHDK scripts.
Tags: writing, photography
[
17:27 Jul 08, 2010
More photo |
permalink to this entry |
]
Wed, 30 Jun 2010
You read so much about the dire state of amphibians in today's world.
They're delicate -- they can absorb toxins through their porous skins,
making them prey to all the pollution the human world dumps at their
doorstep, as well as being prey for a wide assortment of larger animals
and prone to infection by parasites. I remember seeing lots of frogs
around ponds in the woods when I was growing up, and these days it's
rare to see a frog in the wild at all.
But sometimes you get lucky and get an indication that maybe the
state of amphibians isn't as dire as all that.
Mark Wagner gave me a tip (thanks, Mark!) that the pond at Picchetti
Ranch was literally hopping with frogs. I thought he must be
exaggerating -- but he wasn't.
They're tiny, thumbtip-sized creatures and they're everywhere around
the margin of the lake, hopping away as you approach. It's tough to get
photos because they move so fast and like to hide under grass stems,
but like anything else, take a lot of pictures and you'll get lucky
on a few.
The scene is absolutely amazing. If you're at all a frog fan in the
south bay area, get yourself to Picchetti and take a look -- but be
very, very careful where you step, because they're everywhere and
they're hard to spot between jumps.
I unfortunately lack a good amphibian field guide, and couldn't find
much on the web either, but some people seem to think these
Picchetti frogs are Sierran tree frogs -- which apparently are
sometimes green, sometimes brown, and have a wide range of markings,
so identifying them isn't straightforward.
Photos: Tiny frogs at
Picchetti Ranch.
Tags: nature, amphibian, frog, photography
[
19:14 Jun 30, 2010
More nature |
permalink to this entry |
]
Wed, 23 Jun 2010
My latest Linux Planet article came out a day early:
RAW
Support (and more) For Your Canon Camera With CHDK.
CHDK is a cool way you can load custom firmware onto a Canon camera.
It lets you do all sorts of useful hacks, from saving in RAW format
even in cameras that supposedly don't allow that, to getting more
control over aperture, shutter speed and other parameters, to
writing scripts to control the camera.
I didn't have space for all that in one article, so today's Part 1
simply covers how to install CHDK; Part 2, in two weeks, will
discuss some of the great things you can do with CHDK.
Tags: writing, photography
[
20:02 Jun 23, 2010
More photo |
permalink to this entry |
]
Wed, 04 Feb 2009
I still haven't finished writing up a couple of blog entries from
bumming around Tasmania after LCA2009, but I did get some photos
uploaded:
Tasmania
photos. Way too many photos of cute Tassie devils and other
animals at the Bonorong wildlife park, as well as the usual
collection of scenics and silly travel photos.
Tags: travel, tasmania, lca2009, nature, photography
[
15:49 Feb 04, 2009
More travel/tasmania |
permalink to this entry |
]
Tue, 02 Sep 2008
I thought it would never happen ... I've finally joined the
Digital SLR world.
Why would it never happen? I enjoyed film SLRs for years ...
from the Olympus OM-1 (great little manual camera) I had as a teenager
to the Nikkormat EL and Nikon FG I used a decade ago. I only stopped
because processing and scanning slides was such a hassle compared
to the ease of uploading digital images. So why not a DSLR?
The problem was that when Nikon went digital, they orphaned all their
old manual-focus lenses. They're still physically compatible (they'll
screw on to the DSLR body), but peeved Nikon DSLR owners inform me
(and camera store clerks agree) that the Nikon cameras won't meter
with the old lens attached.
I don't mind doing my own focusing (manual focusing is one of the
prime advantages of an SLR, not a disadvantage) but having
to guess at the exposure setting too? "Oh, just carry a light meter,"
people say. On a camera that costs over $600? That bothers me.
So I was peeved at Nikon and not about to buy anything from them ...
but meanwhile I had all these lenses, and hated to buy some other
brand where the lenses wouldn't even screw on. So, no DSLR for me ...
Until I was pouring out my lens-mount frustrations during a camera
discussion one night on #gimp and one of the regulars (thanks, Liam!)
said "Well then, why don't you just get an adaptor that lets you use
Nikon MF lenses on a Canon?"
A what? said I.
Sure enough, there are lots of them on Ebay ... search for
canon nikon adaptor
or look at
Gadget
Infinity's "lens adaptor" section. You can even (for a little more
money) get a "confirm" lens that lights up the autofocus-confirm
points in the viewfinder to tell you when the camera thinks you're
in focus.
A few months passed (too busy to do camera research) but eventually I
found the time and budget ... and now I have a 5-day-old Canon Rebel
Xsi, which indeed takes excellent photos (correctly metered) through
my old Nikon AI-mount Sigma 70-300 APO zoom macro. And the 18-55 kit
lens (the equivalent of a 29-88 in a 35mm camera) isn't bad either --
a little slow (f/3.5 at the widest) but decently wide at the wide end
(in the years of using pocket digicams I'd forgotten how much nicer
it is to have a true wide-angle lens) and with a nice close focus
for macros at the long end.
Even the autofocus isn't bad -- there are still plenty of times when
I need manual, but the Rebel's autofocus is
much faster and more accurate than any I'd seen on earlier cameras.
It's such a great feeling to use an SLR again. The morning after the
camera arrived, I looked up and saw goldfinches at the feeder just
outside the window. I picked up the camera, switched it on, pointed,
zoomed, focused and snapped. No worries about whether the camera
might have decided to focus on the window, or the window frame, or
the tree, or the bush -- just focus and shoot. What a pleasure!
And the best part: this must be a camera made by geeks,
because when it has the Nikon lens attached ... it says F00!
Tags: photography
[
20:59 Sep 02, 2008
More photo |
permalink to this entry |
]
Thu, 25 Aug 2005
I was contacted months ago regarding a
photo
on my web site
asking whether it could be used along with an article on
molting patterns in Dowitchers in
Birding magazine.
Months went by (print magazines are slow) and I wondered if
the plan had been dropped, but last week I heard from the author,
Caleb Putnam, and the article is in the current (July/August) issue!
Yesterday I received a copy of the magazine and a modest payment.
Cool!
Even cooler, the photo is the frontispiece of the article.
The author says he's received many comments about how great a shot
it is for illustrating molt gaps. That's a pull quote if I ever
heard one: "Great shot for illustrating molt gaps."
The article is interesting as well -- I didn't know that molt patterns
could identify the two species of dowitcher. Telling long-billed and
short-billed dowitchers apart has been beyond my modest birding
skills, but perhaps I'll have better luck now. I'll be heading out
to Baylands today at lunch to see what the dowitchers are doing ...
Tags: photography
[
11:49 Aug 25, 2005
More photo |
permalink to this entry |
]
Tue, 28 Jun 2005
Some jerk decided it would be funny to throw a lit firecracker
into the dry brush beside the freeway a few blocks away from where
I live, with
predictable
results.
Fortunately the fire department responded incredibly quickly (must
have been less than five minutes from when I heard the bang to
when the fire truck arrived) and they were able to put the fire out
before it spread far.
I hope someone saw whoever threw the firecracker, and got a license
plate.
Tags: photography
[
23:12 Jun 28, 2005
More misc |
permalink to this entry |
]
Sun, 04 Jul 2004
Dan's party was last night,
including a group giving an informal workshop
on night photography.
The presentation was a little disappointing, just people
showing slides of recent photographs.
No discussion of techniques or interesting ideas for night
photography, things to try out that night.
It was mildly fun for the couple of us who were Linux users
to watch the Windows people fumble with their
JASC slideshow program trying to get it to present photos at a
reasonable size. Whenever I wonder why I bother to keep maintaining
pho,
I look at what Windows and Mac people have to go through to look
at photos and am amazed all over again.
But strangely, before heading off to Marin yesterday, I did some
searching for other linux image viewing programs, to see if they'd
solved the window manager problems I've been wrestling with for pho.
Amazingly, I couldn't find a single free program in Debian that did
what pho does (namely, view a list of images serially, at full size
or screen resolution). I had to search for xv source (not in
Debian,
probably licensing issues), which requires a couple of tweaks to get
it to build on linux, and which has the same window management
issues pho has. I guess I'll keep maintaining it after all!
After dark we trooped up the hill to photograph lights (Richmond
and the Richmond-San Rafael bridge were visible, along with parts
of Marin) and wait for moonrise. I took an SLR and the Minolta,
and wish I'd taken the Olympus -- nearly everyone else had digital
SLRs (Canon) and I wished for something with a decent zoom which
would still give me exposure feedback. It's not as if bay area
skies can support long star-trail exposures anyway. Moonrise was
lovely, a sliver of moon emerging above a thick cloudbank centered
over the San Rafael bridge, and growing into a full-sized moon.
I hope some of the film photos (on old expired PJM multispeed film!)
come out.
Most of the photographers there knew each other from previous
classes (I wasn't clear how many are students versus
instructors) and most of the group spent the hour before moonrise
clustered together taking turns taking the same shot, a person
silhouetted against the lights of Richmond while someone else fired
a flash from behind the person, back toward the camera, giving an
"aura" effect around the silhouette and lighting the nearby grass
a bit. Not really knowing anyone, I hung back and instead worked on
photos of the various photographers silhouetted against the sky
(which may or may not come out; I was shooting from 10 sec to about
3 min, betting on the Marin sky being too bright for longer star
trails, but we'll see. One of the other solo shooters was shooting
10 minute exposures and people kept walking into her frame.)
Dave shot a few Canon digicam images before the sunset light was
completely gone, then the wind got to him and he went back to the
house and didn't wait for moonrise.
I'd wondered about maybe taking one of their regular workshops,
but this outing was a bit like the couple of other photo workshops
I've done: no real instruction or sharing of ideas, basically just
a bunch of people wandering around taking photos. If you have
specific questions or know the instructors already you might be able
to get questions answered, but as a person new to the group, I felt
like I'd probably do just as well just going somewhere on my own and
taking a lot of photos.
It may be that their multi-day pay workshops involve more
instruction, and more feedback the next day on images taken at the
workshop. I'm curious about that; the few photo seminars and
classes I've taken have also promised feedback afterward, but
haven't
had much, if any.
Sometimes I think that the ideal format for a photo workshop is an
online class: give assignments, then people post their photos a few
days or a week later, and everyone discusses them, then you go off
to the next assignment with what you learned based on the feedback.
The important parts are the discussion and the feedback, not being
in the same physical place during the shooting (since not much
instruction seems to take place then, for most participants, and if
it does it seems to be of the type "everybody line up and take
exactly the same photo").
It's hard to do feedback in a several-day workshop at a place like
Death
Valley when people are shooting film and you can't get it developed
quickly enough; a digital camera might be a prerequisite to getting
much out of that sort of workshop.
Tags: photography
[
11:00 Jul 04, 2004
More photo |
permalink to this entry |
]