Shallow Thoughts

Akkana's Musings on Open Source Computing and Technology, Science, and Nature.

Tue, 06 Aug 2019

Update on my Rescued Nestlings

Last month I wrote about the orphaned nestlings I found on the ground off the back deck, and how I took them to a rehabilitator when the parents didn't come back to feed them.

Here's the rest of the story. Warning: it's only half a happy ending.

[Nestlings starting to feather out] Under the good care of our local bird rehabilitator, they started to feather out and gain weight quickly. She gave me some literature on bird rescue and let me visit them and help feed them. There's a lot of work and responsibility involved in bird rehabilitation!

[Nestlings starting to feather out] I'd sometimes thought I wanted to be a rehabilitator; now I'm not so sure I'm up to the responsibility. Though the chicks sure were adorable once they started to look like birds instead of embryos, sitting so trustingly in Sally's hand.

[Looking more like a bird] The big mystery was what species they were. Bird rehabilitators have charts where you can look up bird species according to weight, mouth color, gape color, skin color, feather color, and feet and leg size. But the charts only have a few species; they're woefully incomplete, and my babies didn't match any of the listings. We were thinking maybe robin or ash-throated flycatcher, but nothing really matched.

Fortunately, you can feed the same thing to anything but finches: Cornell makes a mixture of meat, dog food, vitamins and minerals that's suitable for most baby birds, though apparently it's dangerous to feed it to finches, so we crossed our fingers and guessed that they were too big to be house finches.

As they grew more feathers, Sally increasingly suspected they were canyon towhees (a common bird in White Rock), and although they still didn't have adult plumage by the time they left the cage, that's still what we think.

[Hopping alertly around the cage now] By about twenty days after the rescue, they were acting almost like adult birds, hopping restlessly around the cage, jumping up to the perch and fluttering back down. They were eating partly by themselves at this point, a variety of foods including lettuce, blueberries, cut up pea pods, and dried mealworms, though they weren't eating many seeds like you'd expect from towhees. They still liked being fed the Cornell meat mixture, and ate more of that than anything else.

I Get to be a Bird Mom For a While

At this point, Sally needed to go out of town, and I offered to babysit them so she didn't have to take them on her trip. (One of the big downsides of being a rehabilitator: while you're in charge of babies, they need constant care.) I took them back to my place, where I hoped I'd be able to release them: partly because they'd been born here, and partly because the towhees here in White Rock aren't so territorial as they apparently are in Los Alamos.

With the chicks safely stashed in the guest bedroom, I could tell they were getting restless and wanted out of the cage. When I opened the cage to feed them and change their water and bedding, they escaped out into the room a couple of times and I had to catch them and get them back in the cage. So I knew they could fly and wanted out. (I'm sure being moved from Sally's house to mine didn't help: the change in surroundings probably unnerved them.)

Sally advised me to leave the cage outside during the day for a couple of days prior to the release, so the birds could get used to the environment. The first day I put them outside, they immediately seemed much happier and calmer. It seemed they liked being outside.

I Fail as a Bird Mom

On their second morning outdoors, I left them with new food and water, then came back to check on them an hour later. They seemed much more agitated than before, flying madly from one side of the cage to the other. Sally had described her last tenant, a sparrow, doing that just before release; she had released the sparrow a bit earlier than planned because the bird seemed to want out so badly. I wondered if that was the case here, but decided to wait one more day.

But the larger of the two babies had other ideas. When I unzipped the top of the cage to re-fill the water dish, it was in the air immediately, and somehow shot through the tiny opening next to my arm.

It flew about thirty feet, landed in a clearing -- and was immediately taken by a Cooper's hawk that came out of nowhere.

The hawk flew off, the baby towhee squeaking pathetically in its talons, leaving me and the other baby in shock.

What a blow! The bird rescue literature Sally loaned me stresses that bad things can happen. There are so many things that can go wrong with a nestling or a release. They tell you how poor the odds are for baby birds in general. They remind you that the birds would have had no chance of survival if you hadn't rescued them; rescued, at least they have some chance.

While I know that's all true, I'm not sure it makes me feel much better.

In hindsight, Sally said the chicks' agitation that day might have been because they knew the hawk was there, though neither of us thought about that possibility at the time. She thinks the hawk must have been "stalking them", hanging out nearby, aware that there was something delectable inside the cage. She's had chicks taken by hawks too. Still ... sigh.

The Next Release Goes Better

But there was still the remaining chick to think about. Sally and I discussed options and decided that I should bring the chick back inside, and then drive it back up to her house. The hawk would probably remain around my place for a while, and the area wouldn't be safe for a new fledgling. Indeed, I saw the hawk again a few days later. (Normally I love seeing Cooper's hawks!)

The chick was obviously unhappy, whether because of being brought back inside, loneliness, or remaining trauma from hearing the attack -- even if it didn't understand exactly what had happened, I'm sure the chick heard the "towhee in mortal peril" noises just as I did.

So the chick (whom Dave dubbed "Lucky") had to wait another several days before finally being released.

The release went well. Lucky, less bold than its nestmate, was initially reluctant to leave the cage, but eventually fluttered out and flew to the shade of a nearby bush, where we could see it pecking at the ground and apparently eating various unidentifiable bits. It looked like it was finding plenty to eat there, it was mostly hidden from predators and competitors, and it had shade and shelter -- a good spot to begin a new life.

(I tried to get a video of the release but that didn't work out.)

Since then the chick has kept a low profile, but Sally thinks she saw a towhee fledgling a couple of days later. So we have our fingers crossed!

More photos: Nestling Rescue Photos.

Tags: ,
[ 09:50 Aug 06, 2019    More nature/birds | permalink to this entry | comments ]

Thu, 01 Aug 2019

Silly Moon Names: a Nice Beginning Python Project

Every time the media invents a new moon term -- super blood black wolf moon, or whatever -- I roll my eyes.

[Lunar Perigee and Apogee sizes] First, this ridiculous "supermoon" thing is basically undetectable to the human eye. Here's an image showing the relative sizes of the absolute closest and farthest moons. It's easy enough to tell when you see the biggest and smallest moons side by side, but when it's half a degree in the sky, there's no way you'd notice that one was bigger or smaller than average.

Even better, here's a link to an animation of how the moon changes size and "librates" -- tilts so that we can see a little bit over onto the moon's far side -- during the course of a month.

Anyway, the media seem to lap this stuff up and every month there's a new stupid moon term. I'm sure nearly every astronomer was relieved to see the thoroughly sensible Gizmodo article yesterday, Oh My God Stop It With the Fake Moon Names What the Hell Is a 'Black Moon' That Isn't Anything. Not that that will stop the insanity.

If You Can't Beat 'Em, Join 'Em

And then, talking about the ridiculous moon name phenom with some friends, I realized I could play this game too. So I spent twenty minutes whipping up my own Silly Moon Name Generator.

It's super simple -- it just uses Linux's built-in dictionary, with no sense of which words are common, or adjectives or nouns or what. Of course it would be funnier with a hand-picked set of words, but there's a limit to how much time I want to waste on this.

You can add a parameter ?nwords=5 (or whatever number) if you want more or fewer words than four.

How Does It Work?

Random phrase generators like this are a great project for someone just getting started with Python. Python is so good at string manipulation that it makes this sort of thing easy: it only takes half a page of code to do something fun. So it's a great beginner project that most people would probably find more rewarding than cranking out Fibonacci numbers (assuming you're not a Fibonacci geek like I am). For more advanced programmers, random phrase generation can still be a fun and educational project -- skip to the end of this article for ideas.

For the basics, this is all you need: I've added comments explaining the code.

import random


def hypermoon(filename, nwords=4):
    '''Return a silly moon name with nwords words,
       each taken from a word list in the given filename.
    '''
    fp = open(filename)
    lines = fp.readlines()

    # A list to store the words to describe the moon:
    words = []
    for i in range(nwords):    # This will be run nwords times
        # Pick a random index between 0 and the last line. Note that
        # randint is inclusive on both ends, so subtract 1 to stay in range:
        whichline = random.randint(0, len(lines) - 1)

        # readlines() includes whitespace like newline characters.
        # Use whichline to pull one line from the file, and use
        # strip() to remove any extra whitespace:
        word = lines[whichline].strip()

        # Append it to our word list:
        words.append(word)

    # The last word in the phrase will be "moon", e.g.
    # super blood wolf black pancreas moon
    words.append("moon")

    # ' '.join(list) combines all the words with spaces between them
    return ' '.join(words)


# This is called when the program runs:
if __name__ == '__main__':
    random.seed()

    print(hypermoon('/usr/share/dict/words', 4))

A More Compact Format

In that code example, I expanded everything to try to make it clear for beginning programmers. In practice, Python lets you be a lot more terse, so the way I actually wrote it was more like:

def hypermoon(filename, nwords=4):
    with open(filename, encoding='utf-8') as fp:
        lines = fp.readlines()

    words = [ lines[random.randint(0, len(lines) - 1)].strip()
              for i in range(nwords) ]
    words.append('moon')
    return ' '.join(words)

There are three important differences:

Opening a file using "with" ensures the file will be closed properly when you're done with it. That's not important in this tiny example, but it's a good habit to get into.

I specify the 'utf-8' encoding when I open the file because when I ran it as a web app, it turned out the web server used the ASCII encoding and I got Python errors because there are accented characters in the dictionary somewhere. That's one of those Python annoyances you get used to when going beyond the beginner level.

The way I define words all in one line (well, it's conceptually one long line, though I split it into two so each line stays under 72 characters) is called a list comprehension. It's a nice compact alternative to defining an empty list [] and then calling append() a bunch of times, like I did in the first example.

Initially they might seem harder to read, but list comprehensions can actually make code clearer once you get used to them.

A Python Driven Web Page

Finally, to make it work as a web page, I added the CGI module. That isn't really a beginner thing so I won't paste it here, but you can see the CGI version at hypermoon.py on GitHub.

I should mention that there's some debate over CGI in Python. The movers and shakers in the Python community don't approve of CGI, and there's a plan to remove it from upcoming Python versions. The alternative is to use technologies like Flask or Django. While I'm a fan of Flask and have used it for several projects, it's way overkill for something like this, mostly because of all the special web server configuration it requires (and Django is far more heavyweight than Flask). In any case, be aware that the CGI module may be removed from Python's standard library in the near future. With any luck, python-cgi will still be available via pip install or as Linux distro packages.

More Advanced Programmers: Making it Funnier

I mentioned earlier that I thought the app would be a lot funnier with a handpicked set of words. I did that long, long ago with my Star Party Observing Report Generator (written in Perl; I hadn't yet started using Python back in 2001). That's easy and fun if you have the time to spare, or a lot of friends contributing.

You could instead use words taken from a set of input documents. For instance, only use words that appear in Shakespeare's plays, or in company mission statements, or in Wikipedia articles about dog breeds (this involves some web scraping, but Python is good at that too; I like BeautifulSoup).
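A minimal sketch of that approach, using an inline sample text in place of scraped documents (the function name, the four-letter cutoff, and the sample text are my own choices, not anything from the generator above):

```python
import re
from collections import Counter

# Stand-in for the text of scraped documents:
SAMPLE_TEXT = """The moon, like a flower in heaven's high bower,
With silent delight sits and smiles on the night."""

def wordlist_from_text(text, nwords=20):
    '''Return the most common lowercase words of four or more letters,
       suitable for feeding to a phrase generator.'''
    words = re.findall(r"[a-z]{4,}", text.lower())
    return [w for w, count in Counter(words).most_common(nwords)]

print(wordlist_from_text(SAMPLE_TEXT))
```

Swap SAMPLE_TEXT for the concatenated text of your scraped pages and you have a themed word list instead of the whole system dictionary.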

Or you could let users contribute their own ideas for good words to use, storing the user suggestions in a database.

Another way to make the words seem more appropriate and less random might be to use one of the many natural language packages for Python, such as NLTK, the Natural Language Toolkit. That way, you could control how often you used adjectives vs. nouns, and avoid using verbs or articles at all.
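A rough sketch of that idea, using tiny hand-tagged word lists as a stand-in for real NLTK part-of-speech tagging (every name and number here is invented, so it runs without any NLTK downloads):

```python
import random

# Stand-ins for the POS-tagged vocabularies you'd build with NLTK:
ADJECTIVES = ["super", "spectral", "gibbous", "ominous", "velvet"]
NOUNS = ["blood", "wolf", "harvest", "thunder", "pancreas"]

def posmoon(nwords=4, adjective_odds=0.75):
    '''Build a silly moon name, preferring adjectives over nouns
       and never picking verbs or articles at all.'''
    words = []
    for _ in range(nwords):
        pool = ADJECTIVES if random.random() < adjective_odds else NOUNS
        words.append(random.choice(pool))
    words.append("moon")
    return ' '.join(words)

print(posmoon())
```

With a real tagger you'd build ADJECTIVES and NOUNS from a corpus instead of typing them in.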

Random word generators seem like a silly and trivial programming exercise -- because they are! But they're also a fun starting point for more advanced explorations with Python.

Tags: , , ,
[ 14:24 Aug 01, 2019    More humor | permalink to this entry | comments ]

Sat, 27 Jul 2019

Android: Saving MMS Photos and Videos and Text Conversations

Until recently, I didn't know anyone who texted much, but just recently I've been corresponding with several people who like to text. Which is fine with me, but comes with one problem: when someone texts me a photo or video, it's tough to see much on a phone screen. I'd much prefer to view it on my big computer screen.

Not to mention that I'd like to be able to archive some of these photos on a platform where I can actually do backups. Yes, I know I'm a dinosaur and modern people are supposed to entrust all our photos, addressbooks, private documents and everything else to Google or Apple. Sorry. I'm happier in the Mesozoic.

Anyway, the point is that I wanted a way to get a message that's sitting in my Android phone's Messages app onto my computer.

As far as I can tell from web searching, there's no way to back up all the texts on an Android phone if it's not rooted. If you want to keep an archive of a conversation ... well, sorry, you can't. (If you are rooted, there's apparently an sqlite db file buried somewhere under /data.)

[Saving an MMS video on Android] After much searching, I did discover that you can long-press on an individual photo or video and choose Save attachment.

A dialog will come up offering a filename and a checkbox. The box is unchecked, and the Save button won't enable unless you check the box. What good that does is beyond me, since there's no option of letting you choose a different filename.

There's also no option of seeing where it saved, and Android doesn't give any clue. But it turns out (after much searching and discovering that commands like find / -name "Download*" 2> /dev/null don't work in Android, they just fail silently; I don't know what you're supposed to use to search for files. Oh, right, you're not supposed to) to be in /sdcard/Download.

So once you know the filename, you can

adb shell ls /sdcard/Download
adb pull /sdcard/Download/whatever.jpg

This isn't so bad as long as you do it before people have sent you 100 images that you want to archive. Fortunately I have less than ten I need to save.

Saving SMS Conversations

You can save the text of an entire SMS conversation with someone, though it won't save the media associated with the conversation.

Click on the three-dots menu item in the upper right of a conversation, scroll down in the menu and choose Save messages; the filename will be something like sms-20190723122204 (the numbers are the current date and time). Select all messages, or just the ones you want to save. Click your way through a warning screen or two telling you it won't save MMS media or group messages.

When you tap Save, there's a briefly-flashing message telling you where it saved it: if you have a photographic memory, be prepared; otherwise, you might need to save the same conversation several times to read the whole path. On my old Samsung, it turns out they go into /storage/emulated/0/Messaging.

Format of a Saved Conversation

What you get is an XML file with lots of entries, in reverse chronological order. One entry looks something like:

    <message type="SMS">
      <address>5059208957</address>
      <body>We+will+probably+be+here+all+day.+Let+me+know+when+you+could+visit+</body>
      <date>1563723506834</date>
      <read>1</read>
      <type>1</type>
      <locked>0</locked>
    </message>

So to make them easily readable, you'd want something that can split out the <body> parts and then decode them, turning those plusses back into spaces and any HTML entities into readable characters.
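Turning the plusses back into spaces is one call to urllib.parse.unquote_plus; for instance, on the sample body shown above:

```python
from urllib.parse import unquote_plus

body = "We+will+probably+be+here+all+day.+Let+me+know+when+you+could+visit+"
# Plusses become spaces (including the trailing one), and any
# percent-escapes would be decoded too:
print(unquote_plus(body))
```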

Parsing XML is easy enough with Python's BeautifulSoup module -- or is it? Usually with BS4 I use the "lxml" parser, but I hit a snag in this case: see that body tag inside each message tag? It turns out the lxml parser, despite its name, is meant to be an HTML parser, not general XML, and it inserts an HTML skeleton of its own:

>>> from bs4 import BeautifulSoup
>>> with open('sms-20190723202727.xml') as fp:
>>> soup = BeautifulSoup(fp, 'lxml')
...     for tag in soup.findAll():
...         print(tag.name)
...
html
body
file
thread
message
...

See how it's added html and body tags that weren't actually in the file? Worse, it also ignores the body tags that actually are in the file -- which is a problem because body is the tag where Android's Messages app chooses to put the actual text of the message.

Instead, use the "xml" parser, which is actually intended for XML.

Once you get past the lxml problems, the rest is pretty easy, except that you need to know that the dates are in Java Timestamp format, which means you have to divide by 1000 to get a Unix timestamp you can pass to datetime. And each SMS text is URL plus-encoded, so you can unquote it with urllib.parse.unquote_plus.

from bs4 import BeautifulSoup
from datetime import datetime
import urllib.parse
import sys

def parse_sms_convo(filename):
    with open(filename) as fp:
        soup = BeautifulSoup(fp, "xml")

        for msg in soup.findAll('message'):
            d = datetime.fromtimestamp(int(msg.find('date').text)/1000)
            body = msg.find('body')
            addr = msg.find('address')
            print("%s (from %s):\n   %s" %
                  (d.strftime("%Y-%m-%d %H:%M"),
                   addr.text,
                   urllib.parse.unquote_plus(body.text)))
            print()

if __name__ == '__main__':
    parse_sms_convo(sys.argv[1])

Of course, the best would be to get the entire conversation including images and videos all together. Apparently that's possible with a rooted phone, but I haven't found any way that doesn't require rooting. (It forever amazes me that advanced users who care about things like root access aren't bothered in the least by the fact that rooting almost always requires downloading a binary from a dodgy malware-distribution website, running it and giving it complete access to your phone. Me, I just hope a day eventually comes when there's a phone OS option that gives me the choice of controlling my own phone without resorting to malware.)

[ 13:41 Jul 27, 2019    More tech | permalink to this entry | comments ]

Tue, 23 Jul 2019

360 Panoramas with Povray and/or ImageMagick

This is Part IV of a four-part article on ray tracing digital elevation model (DEM) data. The goal: render a ray-traced image of mountains from a digital elevation model (DEM).

Except there are actually several more parts on the way, related to using GRASS to make viewsheds. So maybe this is actually a five- or six-parter. We'll see.

The Easy Solution

Skipping to the chase here ... I had a whole long article written about how to make a sequence of images with povray, each pointing in a different direction, and then stitch them together with ImageMagick.

But a few days after I'd gotten it all working, I realized none of it was needed for this project, because ... ta-DA — povray accepts this argument inside its camera section:

    angle 360

Duh! That makes it so easy.

You do need to change povray's projection to cylindrical; the default is "perspective" which warps the images. If you set your look_at to point due south -- the first and second coordinates are the same as your observer coordinate, the third being zero so it's looking southward -- then povray will create a lovely strip starting at 0 degrees bearing (due north), and with south right in the middle. The camera section I ended up with was:

camera {
    cylinder 1

    location <0.344444, 0.029620, 0.519048>
    look_at  <0.344444, 0.029620, 0>

    angle 360
}
with the same light_source and height_field as in Part III.

[360 povray panorama from Overlook Point]

There are still some more steps I'd like to do. For instance, fitting names of peaks to that 360-degree pan.

The rest of this article discusses some of the techniques I would have used, which might be useful in other circumstances.

A Script to Spin the Observer Around

Angles on a globe aren't as easy as just adding 45 degrees to the bearing angle each time. You need some spherical trigonometry to make the angles even, and it depends on the observer's coordinates.

Obviously, this wasn't something I wanted to calculate by hand, so I wrote a script for it: demproj.py. Run it with the name of a DEM file and the observer's coordinates:

demproj.py demfile.png 35.827 -106.1803
It takes care of calculating the observer's elevation, normalizing to the image size and all that. It generates eight files, named outfileN.png, outfileNE.png etc.
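demproj.py itself does more bookkeeping, but the heart of the spherical trig is the standard destination-point formula: given a start point, a compass bearing and an angular distance, find the end point. A sketch (my own function and parameter choices, not the actual script's code):

```python
import math

def destination_point(lat, lon, bearing_deg, dist_deg=1.0):
    '''Given an observer at (lat, lon) in degrees, return the point
       dist_deg degrees of arc away along the given compass bearing,
       using the great-circle destination formula.'''
    lat1 = math.radians(lat)
    lon1 = math.radians(lon)
    brng = math.radians(bearing_deg)
    delta = math.radians(dist_deg)    # angular distance traveled

    lat2 = math.asin(math.sin(lat1) * math.cos(delta)
                     + math.cos(lat1) * math.sin(delta) * math.cos(brng))
    lon2 = lon1 + math.atan2(math.sin(brng) * math.sin(delta) * math.cos(lat1),
                             math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Eight evenly spaced targets around the observer at Overlook Point:
for bearing in range(0, 360, 45):
    print(bearing, destination_point(35.827, -106.1803, bearing))
```

Those eight points, converted to pixel coordinates, could then serve as the look_at targets for the eight renderings.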

Stitching Panoramas with ImageMagick

To stitch those demproj images manually in ImageMagick, this should work in theory:

convert -size 3600x600 xc:black \
    outfile000.png -geometry +0+0 -composite \
    outfile045.png -geometry +400+0 -composite \
    outfile090.png -geometry +800+0 -composite \
    outfile135.png -geometry +1200+0 -composite \
    outfile180.png -geometry +1600+0 -composite \
    outfile225.png -geometry +2000+0 -composite \
    outfile270.png -geometry +2400+0 -composite \
    outfile315.png -geometry +2800+0 -composite \
    out-composite.png
or simply
convert outfile*.png +smush -400 out-smush.png

Adjusting Panoramas in GIMP

But in practice, some of the images have a few-pixel offset, and I never did figure out why; maybe it's a rounding error in my angle calculations.

I opened the images as layers in GIMP, and used my GIMP script Pandora to lay them out as a panorama. The cylindrical projection should make the edges match perfectly, so you can turn off the layer masking.

Then use the Move tool to adjust for the slight errors (tip: when the Move tool is active, the arrow keys will move the current layer by a single pixel).

If you get the offsets perfect and want to know what they are so you can use them in ImageMagick or another program, use GIMP's Filters->Python-Fu->Console. This assumes the panorama image is the only one loaded in GIMP, otherwise you'll have to inspect gimp.image_list() to see where in the list your image is.

>>> img = gimp.image_list()[0]
>>> for layer in img.layers:
...     print layer.name, layer.offsets

Tags: , , ,
[ 15:28 Jul 23, 2019    More mapping | permalink to this entry | comments ]

Wed, 17 Jul 2019

Ray-Tracing Digital Elevation Data in 3D with Povray (Part III)

This is Part III of a four-part article on ray tracing digital elevation model (DEM) data. The goal: render a ray-traced image of mountains from a digital elevation model (DEM).

In Part II, I showed how the povray camera position and angle need to be adjusted based on the data, and the position of the light source depends on the camera position.

In particular, if the camera is too high, you won't see anything because all the relief will be tiny invisible bumps down below. If it's too low, it might be below the surface and then you can't see anything. If the light source is too high, you'll have no shadows, just a uniform grey surface.

That's easy enough to calculate for a simple test image like the one I used in Part II, where you know exactly what's in the file. But what about real DEM data where the elevations can vary?

Explore Your Test Data

[Hillshade of northern New Mexico mountains] For a test, I downloaded some data that includes the peaks I can see from White Rock in the local Jemez and Sangre de Cristo mountains.

wget -O mountains.tif 'http://opentopo.sdsc.edu/otr/getdem?demtype=SRTMGL3&west=-106.8&south=35.1&east=-105.0&north=36.5&outputFormat=GTiff'

Create a hillshade to make sure it looks like the right region:

gdaldem hillshade mountains.tif hillshade.png
pho hillshade.png
(or whatever your favorite image viewer is, if not pho). The image at right shows the hillshade for the data I'm using, with a yellow cross added at the location I'm going to use for the observer.

Sanity check: do the lowest and highest elevations look right? Let's look in both meters and feet, using the tricks from Part I.

>>> import gdal
>>> import numpy as np

>>> demdata = gdal.Open('mountains.tif')
>>> demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
>>> demarray.min(), demarray.max()
(1501, 3974)
>>> print([ x * 3.2808399 for x in (demarray.min(), demarray.max())])
[4924.5406899, 13038.057762600001]

That looks reasonable. Where are those highest and lowest points, in pixel coordinates?

>>> np.where(demarray == demarray.max())
(array([645]), array([1386]))
>>> np.where(demarray == demarray.min())
(array([1667]), array([175]))

Those coordinates are reversed because of the way numpy arrays are organized: (1386, 645) in the image looks like Truchas Peak (the highest peak in this part of the Sangres), while (175, 1667) is where the Rio Grande disappears downstream off the bottom left edge of the map -- not an unreasonable place to expect to find a low point. If you're having trouble eyeballing the coordinates, load the hillshade into GIMP and watch the coordinates reported at the bottom of the screen as you move the mouse.

While you're here, check the image width and height. You'll need it later.

>>> demarray.shape
(1680, 2160)
Again, those are backward: they're the image height, width.

Choose an Observing Spot

Let's pick a viewing spot: Overlook Point in White Rock (marked with the yellow cross on the image above). Its coordinates are -106.1803, 35.827. What are the pixel coordinates? Using the formula from the end of Part I:

>>> import affine
>>> affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
>>> inverse_transform = ~affine_transform
>>> [ round(f) for f in inverse_transform * (-106.1803, 35.827) ]
[744, 808]

Just to double-check, what's the elevation at that point in the image? Note again that the numpy array needs the coordinates in reverse order: Y first, then X.

>>> demarray[808, 744], demarray[808, 744] * 3.28
(1878, 6159.839999999999)

1878 meters, 6160 feet. That's fine for Overlook Point. We have everything we need to set up a povray file.

Convert to PNG

As mentioned in Part II, povray will only accept height maps as a PNG file, so use gdal_translate to convert:

gdal_translate -ot UInt16 -of PNG mountains.tif mountains.png

Use the Data to Set Camera and Light Angles

The camera should be at the observer's position, and povray needs that as a line like

    location <rightward, upward, forward>
where those numbers are fractions of 1.

The image size in pixels is 2160x1680, and the observer is at pixel location (744, 808). So the first and third coordinates of location should be 744/2160 and 808/1680, right? Well, almost. That Y coordinate of 808 is measured from the top, while povray measures from the bottom. So the third coordinate is actually 1. - 808/1680.

Now we need height, but how do you normalize that? That's another thing nobody seems to document anywhere I can find; but since we're using a 16-bit PNG, I'll guess the maximum is 2^16 or 65536. That's meters, so DEM files can specify some darned high mountains! So that's why that location <0, .25, 0> line I got from the Mapping Hacks book didn't work: it put the camera at .25 * 65536 or 16,384 meters elevation, waaaaay up high in the sky.

My observer at Overlook Point is at 1,878 meters elevation, which corresponds to a povray height of 1878/65536. I'll use the same value for the look_at height to look horizontally. So now we can calculate all three location coordinates: 744/2160 = .3444, 1878/65536 = 0.0287, 1. - 808/1680 = 0.5190:

    location <.3444, 0.0287, .519>
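A tiny helper makes that arithmetic reusable (the function name is mine, and it assumes the 2^16 elevation normalization guessed above):

```python
def povray_location(px, py, elev_m, img_width, img_height, max_elev=65536):
    '''Convert a pixel position and an elevation in meters into
       povray's <rightward, upward, forward> camera fractions.'''
    x = px / img_width          # fraction from the left edge
    y = elev_m / max_elev       # fraction of the 16-bit elevation range
    z = 1 - py / img_height     # pixel rows count down, povray counts up
    return x, y, z

print(povray_location(744, 808, 1878, 2160, 1680))
```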

Povray Glitches

Except, not so fast: that doesn't work. Remember how I mentioned in Part II that povray doesn't work if the camera location is at ground level? You have to put the camera some unspecified minimum distance above ground level before you see anything. I fiddled around a bit and found that if I multiplied the ground level height by 1.15 it worked, but 1.1 wasn't enough. I have no idea whether that will work in general. All I can tell you is, if you're setting location to be near ground level and the generated image looks super dark regardless of where your light source is, try raising your location a bit higher. I'll use 1878/65536 * 1.15 = 0.033.

For a first test, try setting look_at to some fixed place in the image, like the center of the top (north) edge (right .5, forward 1):

    location <.3444, 0.033, .519>
    look_at <.5, 0.033, 1>

That means you won't be looking exactly north, but that's okay, we're just testing and will worry about that later. The middle value, the elevation, is the same as the camera elevation so the camera will be pointed horizontally. (look_at can be at ground level or even lower, if you want to look down.)

Where should the light source be? I tried to be clever and put the light source at some predictable place over the observer's right shoulder, and most of the time it didn't work. I ended up just fiddling with the numbers until povray produced visible terrain. That's another one of those mysterious povray quirks. This light source worked fairly well for my DEM data, but feel free to experiment:

light_source { <2, 1, -1> color <1,1,1> }

All Together Now

Put it all together in a mountains.pov file:

camera {
    location <.3444, 0.0330, .519>
    look_at <.5, 0.0287, 1>
}

light_source { <2, 1, -1> color <1,1,1> }

height_field {
    png "mountains.png"
    smooth
    pigment {
        gradient y
        color_map {
            [ 0 color <.7, .7, .7> ]
            [ 1 color <1, 1, 1> ]
        }
    }
    scale <1, 1, 1>
}
[Povray-rendering of Black and Otowi Mesas from Overlook Point] Finally, you can run povray and generate an image!
povray +A +W800 +H600 +INAME_OF_POV_FILE +OOUTPUT_PNG_FILE

And once I finally got to this point I could immediately see it was correct. That's Black Mesa (Tunyo) out in the valley a little right of center, and I can see White Rock canyon in the foreground with Otowi Peak on the other side of the canyon. (I strongly recommend, when you experiment with this, that you choose a scene that's very distinctive and very familiar to you, otherwise you'll never be sure if you got it right.)

Next Steps

Now I've accomplished my goal: taking a DEM map and ray-tracing it. But I wanted even more. I wanted a 360-degree panorama of all the mountains around my observing point.

Povray can't do that by itself, but in Part IV I'll show how to make a series of povray renderings and stitch them together into a panorama: Part IV, Making a Panorama from Raytraced DEM Images.

[ 16:43 Jul 17, 2019    More mapping | permalink to this entry | comments ]

Fri, 12 Jul 2019

Height Fields in Povray (Ray Tracing Elevation Data, Part II)

This is Part II of a four-part article on ray tracing digital elevation model (DEM) data. (Actually, it's looking like there may be five or more parts in the end.)

The goal: render a ray-traced image of mountains from a digital elevation model (DEM), so that you seem to be inside the scene, looking out at the peaks.

I'd seen the open source ray tracer povray used for that purpose in the book Mapping Hacks: Tips & Tools for Electronic Cartography. Hack 20, "Make 3-D Raytraced Terrain Models", discusses how to use it on DEM data.

Unfortunately, the book is a decade out of date now, and lots of things have changed. When I tried following the instructions in Hack 20, no matter what DEM file I used as input I got the same distorted grey rectangle. Figuring out what was wrong meant understanding how povray works, which involved a lot of testing and poking since the documentation isn't clear.

Convert to PNG

Before you can do anything, convert the DEM file to a 16-bit greyscale PNG, the only format povray accepts for what it calls height fields:

gdal_translate -ot UInt16 -of PNG demfile.tif demfile.png

If your data is in some format like ArcGIS that has multiple files, rather than a single GeoTIFF file, try using the name of the directory containing the files in place of a filename.

Set up the .pov file

Now create a .pov file, which will look something like this:

camera {
    location <.5, .5, 2>
    look_at  <.5, .6, 0>
}

light_source { <0, 2, 1> color <1,1,1> }

height_field {
    png "YOUR_DEM_FILE.png"

    smooth
    pigment {
        gradient y
        color_map {
            [ 0 color <.5, .5, .5> ]
            [ 1 color <1, 1, 1> ]
        }
    }

    scale <1, 1, 1>
}

The trick is setting up the right values for the camera and light source. Coordinates like the camera location and look_at are specified by three numbers representing <rightward, upward, forward>, as a fraction of the image size.

Imagine your DEM tilting forward to lie flat in front of you: the bottom (southern) edge of your DEM image corresponds to 0 forward, whereas the top (northern) edge is 1 forward. 0 in the first coordinate is the western edge, 1 is the eastern. So, for instance, if you want to put the virtual camera at the middle of the bottom (south) edge of your DEM and look straight north and horizontally, neither up nor down, you'd want:

    location <.5, HEIGHT, 0>
    look_at  <.5, HEIGHT, 1>
(I'll talk about HEIGHT in a minute.)

It's okay to go negative, or to use numbers bigger than one; that just means a coordinate that's outside the height map. For instance, a camera location of

    location <-1, HEIGHT, 2>
would be off the west and north edges of the area you're mapping.

look_at, as you might guess, is the point the camera is looking at. Rather than specify an angle, you specify a point in three dimensions which defines the camera's angle.
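If, like me, you tend to think in compass bearings rather than points, the conversion is easy. Here's a little helper (my own function, not anything in povray) that places look_at one unit away from the camera along a given bearing, using the same convention as above: first coordinate increases eastward, third coordinate increases northward:

```python
import math

def look_at_from_bearing(camera, bearing_deg, distance=1.0):
    """camera: (east, height, north) povray-style coordinates.
    bearing_deg: compass bearing, 0 = north, 90 = east.
    Returns a look_at point at the same height as the camera."""
    x, y, z = camera
    rad = math.radians(bearing_deg)
    # +x is east and +z is north in this coordinate system.
    return (x + distance * math.sin(rad), y, z + distance * math.cos(rad))

print(look_at_from_bearing((.5, .033, 0), 0))   # due north: (0.5, 0.033, 1.0)
```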

What about HEIGHT? If you make it too high, you won't see anything because the relief in your DEM will be too far below you and will disappear. That's what happened with the code from the book: it specified location <0, .25, 0>, which, in current DEM files, means the camera is about 16,000 feet up in the sky, so high that the mountains shrink to invisibility.

If you make the height too low, then everything disappears because ... well, actually I don't know why. If it's 0, then you're most likely underground and I understand why you can't see anything, but you have to make it significantly higher than ground level, and I'm not sure why. Seems to be a povray quirk.

Once you have a .pov file with the right camera and light source, you can run povray like this:

povray +A +W800 +H600 +Idemfile.pov +Orendered.png
then take a look at rendered.png in your favorite image viewer.

Simple Sample Data

['bowling pin' sample DEM for testing povray] There's not much documentation for any of this. There's povray: Placing the Camera, but it doesn't explain details like which number controls which dimension or why it doesn't work if you're too high or too low. To figure out how it worked, I made a silly little test image in GIMP consisting of some circles with fuzzy edges. Those correspond to very tall pillars with steep sides: in these height maps, white means the highest point possible, black means the lowest.

Then I tried lots of different values for location and look_at until I understood what was going on.
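If you don't want to fire up GIMP, you can generate a similar test height field programmatically. This sketch (pure Python, no dependencies) writes a 16-bit greyscale PGM, another format povray's height_field accepts, containing one fuzzy Gaussian pillar; add more entries to centers to make a bowling-pin pattern:

```python
import math, struct

def fuzzy_pillars_pgm(filename, size=256, centers=((128, 128),), radius=40):
    """Write a 16-bit greyscale PGM height field: white Gaussian
    'pillars' with fuzzy edges on a black background."""
    maxval = 65535
    pixels = []
    for row in range(size):
        for col in range(size):
            h = 0.0
            for cx, cy in centers:
                d2 = (col - cx) ** 2 + (row - cy) ** 2
                h = max(h, math.exp(-d2 / (2.0 * radius ** 2)))
            pixels.append(int(h * maxval))
    with open(filename, "wb") as f:
        f.write(b"P5\n%d %d\n%d\n" % (size, size, maxval))
        # PGM stores 16-bit samples big-endian.
        f.write(struct.pack(">%dH" % len(pixels), *pixels))

fuzzy_pillars_pgm("testfield.pgm")
```

In the height_field block, use pgm "testfield.pgm" in place of the png line.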

For my bowling-pin image, it turned out looking northward (upward) from the south (the bottom of the image) didn't work, because the pillar at the point of the triangle blocked everything else. It turned out to be more useful to put the camera beyond the top (north side) of the image and look southward, back toward the image.

    location <.5, HEIGHT, 2>
    look_at  <.5, HEIGHT, 0>

[povray ray-traced bowling pin result]

The position of the light_source is also important. For instance, for my circles, the light source given in the original hack, <0, 3000, 0>, is so high that the pillars aren't visible at all, because the light is shining only on their tops and not on their sides. (That was also true for most DEM data I tried to view.) I had to move the light source much lower, so it illuminated the sides of the pillars and cast some shadows, and that was true for DEM data as well.

The .pov file above, with the camera halfway up the field (.5) and situated in the center of the north end of the field, looking southward and just slightly up from horizontal (.6), rendered like this. I can't explain the two artifacts in the middle. The artifacts at the tops and bottoms of the pillars are presumably rounding errors and don't worry me.

Finally, I felt like I was getting a handle on povray camera positioning. The next step was to apply it to real Digital Elevation Maps files. I'll cover that in Part III, Povray on real DEM data: Ray-Tracing Digital Elevation Data in 3D with Povray

[ 18:02 Jul 12, 2019    More mapping | permalink to this entry | comments ]

Sun, 07 Jul 2019

Working with Digital Elevation Models with GDAL and Python (Ray Tracing Elevation Data, Part I)

This is Part I of a four-part article on ray tracing digital elevation model (DEM) data.

One of my hiking buddies uses a phone app called Peak Finder. It's a neat program that lets you spin around and identify the mountain peaks you see.

Alas, I can't use it, because it won't work without a compass, and [expletive deleted] Samsung disables the compass in their phones, even though the hardware is there. I've often wondered if I could write a program that would do something similar. I could use the images in planetarium shows, and could even include additions like predicting exactly when and where the moon would rise on a given date.

Before plotting any mountains, first you need some elevation data, called a Digital Elevation Model or DEM.

Get the DEM data

Digital Elevation Models are available from a variety of sources in a variety of formats. But the downloaders and formats aren't as well documented as they could be, so it can be a confusing mess.

USGS

[Typical experience with USGS map tiles not loading] USGS steers you to the somewhat flaky and confusing National Map Download Client. Under Data in the left sidebar, click on Elevation Products (3DEP), select the accuracy you need, then zoom and pan the map until it shows what you need.

Current Extent doesn't seem to work consistently, so use Box/Point and sweep out a rectangle. Then click on Find products. Each "product" should have a download link next to it, or if not, you can put it in your cart and View Cart.

Except that National Map tiles often don't load, so you can end up with a mostly-empty map (as shown here) where you have no idea what area you're choosing. Once this starts happening, switching to a different set of tiles probably won't help; all you can do is wait a few hours and hope it gets better.

Or get your DEM data somewhere else. Even if you stick with the USGS, they have a different set of DEM data, called SRTM (it comes from the Shuttle Radar Topography Mission), which is downloaded from a completely different place: SRTM DEM data, Earth Explorer. It's marginally easier to use than the National Map and less flaky about tile loading, and it gives you GeoTIFF files instead of zip files containing various ArcGIS formats. Sounds good so far; but once you've spent time defining the area you want, it suddenly reveals that you can't download anything until you make an account, and the registration process demands name, address and phone number (!) before you can actually download anything.

Of course neither of these sources lets you just download data for a given set of coordinates; you have to go through the interactive website any time you want anything. So even if you don't mind giving the USGS your address and phone number, if you want something you can run from a program, you need to go elsewhere.

Unencumbered DEM Sources

Fortunately there are several other sources for elevation data. Be sure to read through the comments, which list better sources than in the main article.

The best I found is OpenTopography's SRTM API, which lets you download arbitrary areas specified by latitude/longitude bounding boxes.

Verify the Data: gdaldem

[Making a DEM visible with GIMP Levels] Okay, you've got some DEM data. Did you get the area you meant to get? Is there any data there? DEM data often comes packaged as an image, primarily GeoTIFF. You might think you could simply view that in an image viewer -- after all, those nice preview images they show you on those interactive downloaders show the terrain nicely. But the actual DEM data is scaled so that even high mountains don't show up; you probably won't be able to see anything but blackness.

One way of viewing a DEM file as an image is to load it into GIMP. Bring up Colors->Levels, go to the input slider (the upper of the two sliders) and slide the rightmost triangle leftward until it's near the right edge of the histogram. Don't save it that way (that will mess up the absolute elevations in the file); it's just a quick way of viewing the data.
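If you're curious what that Levels adjustment actually does, it's just a linear stretch of the pixel values. Here's the idea sketched in plain Python, for intuition only (use GIMP or gdal for real work):

```python
def stretch(values, out_max=255):
    """Linearly rescale values so the lowest maps to 0 and the
    highest to out_max -- what sliding the Levels triangle does."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0] * len(values)
    return [round((v - lo) * out_max / (hi - lo)) for v in values]

# Elevations spanning, say, 1600-3000 m are nearly black in a 16-bit
# image, but stretch out to the full visible range:
print(stretch([1600, 1878, 2301, 3000]))   # [0, 51, 128, 255]
```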

[hillshade generated by gdaldem] A better way to check DEM data files is a beautiful little program called gdaldem. It has several options, like generating a hillshade image:

gdaldem hillshade n35_w107_1arc_v3.tif hillshade.png

Then view hillshade.png in your favorite image viewer and see if it looks like you expect. Having read quite a few elaborate tutorials on hillshade generation over the years, I was blown away at how easy it is with gdaldem.
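In case you wonder what gdaldem is doing under the hood: hillshading computes a slope and aspect for each cell and lights it from a simulated sun at a given azimuth and altitude. Here's a toy version in plain Python, using simple central differences; gdaldem's algorithm is more careful, so treat this as a sketch of the idea, not a replacement:

```python
import math

def hillshade(grid, azimuth=315, altitude=45, cellsize=1.0):
    """grid: 2-D list of elevations. Returns 0-255 shade values;
    border cells are left at 0 for simplicity."""
    az = math.radians(360.0 - azimuth + 90.0)   # compass to math angle
    alt = math.radians(altitude)
    rows, cols = len(grid), len(grid[0])
    shade = [[0] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            # Terrain gradient from central differences.
            dzdx = (grid[y][x + 1] - grid[y][x - 1]) / (2 * cellsize)
            dzdy = (grid[y + 1][x] - grid[y - 1][x]) / (2 * cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(-dzdy, dzdx)
            v = (math.sin(alt) * math.cos(slope) +
                 math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            shade[y][x] = max(0, round(255 * v))
    return shade

flat = [[100.0] * 5 for _ in range(5)]
print(hillshade(flat)[2][2])   # flat ground under a 45-degree sun: 180
```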

Here are some other operations you can do on DEM data.

Translate the Data to Another Format

gdal has lots more useful stuff beyond gdaldem. For instance, my ultimate goal, ray tracing, will need a PNG:

gdal_translate -ot UInt16 -of PNG srtm_54_07.tif srtm_54_07.png

gdal_translate can recognize most DEM formats. If you have a complicated multi-file format like ArcGIS, try using the name of the directory where the files live.

Get Vertical Limits, for Scaling

What's the highest point in your data, and at what coordinates does that peak occur? You can find the highest and lowest points easily with Python's gdal package if you convert the gdal.Dataset into a numpy array:

# On current GDAL releases the Python module lives in the osgeo
# package; on older installs, plain "import gdal" also works.
from osgeo import gdal
import numpy as np

demdata = gdal.Open(filename)
demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
print(demarray.min(), demarray.max())

That gives you the highest and lowest elevations. But where are they in the data? That's not super intuitive in numpy; the best way I've found is:

indices = np.where(demarray == demarray.max())
ymax, xmax = indices[0][0], indices[1][0]
print("The highest point is", demarray[ymax][xmax])
print("  at pixel location", xmax, ymax)
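An alternative numpy idiom for the same lookup, if you find np.where awkward: argmax gives the flat index of the (first) maximum, and unravel_index converts it back to a row and column:

```python
import numpy as np

# Stand-in for the DEM array you'd get from
# demdata.GetRasterBand(1).ReadAsArray():
demarray = np.array([[10, 30],
                     [55, 20]])

ymax, xmax = np.unravel_index(demarray.argmax(), demarray.shape)
print(demarray[ymax, xmax], "at pixel location", xmax, ymax)
```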

Translate Between Lat/Lon and Pixel Coordinates

But now that you have the pixel coordinates of the high point, how do you map that back to latitude and longitude? That's trickier, but here's one way, using the affine package:

import affine

affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
lon, lat = affine_transform * (xmax, ymax)

What about the other way, when you have latitude and longitude and want to know the corresponding pixel location? Define an inverse transformation:

inverse_transform = ~affine_transform
px, py = [ round(f) for f in inverse_transform * (lon, lat) ]
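If you'd rather skip the affine dependency, the six-number GDAL geotransform is easy enough to apply by hand: it's (origin x, pixel width, row rotation, origin y, column rotation, pixel height), with pixel height negative for north-up images. A sketch with a made-up north-up geotransform (the numbers here are hypothetical, not from any particular file):

```python
def pixel_to_geo(gt, px, py):
    """Apply a GDAL-style geotransform to pixel coordinates."""
    lon = gt[0] + px * gt[1] + py * gt[2]
    lat = gt[3] + px * gt[4] + py * gt[5]
    return lon, lat

def geo_to_pixel(gt, lon, lat):
    """Invert it, assuming the rotation terms are zero (the usual case)."""
    return round((lon - gt[0]) / gt[1]), round((lat - gt[3]) / gt[5])

# Hypothetical 1-arc-second tile whose top-left corner is at 107W, 36N:
gt = (-107.0, 1 / 3600, 0.0, 36.0, 0.0, -1 / 3600)
lon, lat = pixel_to_geo(gt, 1800, 1800)
print(round(lon, 6), round(lat, 6))   # the tile center, 106.5W 35.5N
print(geo_to_pixel(gt, lon, lat))     # and back to (1800, 1800)
```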

Those transforms will become important once we get to Part III. But first, Part II: Height Fields in Povray.

[ 18:15 Jul 07, 2019    More mapping | permalink to this entry | comments ]

Tue, 02 Jul 2019

Orphaned Nestlings

Yesterday afternoon I was walking up the path to the back door when I noticed a bird's nest on the ground. I bent to examine it -- and spied two struggling baby birds on the rocks next to the nest.

[Orphaned nestling baby birds, gaping for food] I gently picked them up and put them back in the nest. Then what? On the ground there, they'd be easy fodder for coyotes, foxes or any other predator. But the tree it must have fallen from is a tall blue spruce; the lowest branches are way above head height, and even then I couldn't see any secure place to put a nest where it wouldn't immediately fall down again.

I chose another option: there's an upstairs deck immediately adjacent to that tree, so if I put the nest on the corner of the deck, it might be close enough to its original location that if the parents came looking for the nest, they'd be able to hear the chicks' calls. I've always read that parents will hang around and feed nestlings if they fall from the tree.

Nice theory; but the problem was that the chicks were too quiet. When they felt the nest jiggle as I moved it, they gaped, obviously wanting food; but although they did make a faint peeping, it wasn't loud enough to be heard from more than a few feet away.

Nevertheless, I left them there overnight, hoping a parent would find them in the evening or first thing in the morning. The deck is adjacent to my bedroom, so I was pretty sure I'd hear it if the parents came and fed the chicks. (It's easy to hear when the Bewick's wrens in the nest box above the other deck come to feed their chicks in the morning.)

Alas, there was no reunion. I heard no sounds and saw no activity. The chicks were still alive and active in the morning, but obviously very hungry, gaping every time they heard a noise. I wanted very badly to feed them, to find a bug or a little piece of steak or something that I could put in those gaping hungry mouths, but I was afraid of feeding them something that might turn out to be harmful. As it turned out, that was the right decision.

It was time to call for help (I'd posted on our local birders' list the evening before, but no one had any useful advice). Fortunately, we have an experienced bird rehabilitator in town, whom I know slightly, so I called her and got the okay to bring them in.

[Orphaned nestlings, now warm and fed] She weighed the babies (roughly 17 and 14 grams), put them on a heating pad and gave them a little Pedialyte solution. She said she couldn't actually feed them until they pooped; if I understood correctly, baby birds can get backed up, and feeding them in that state can kill them. Fortunately they both pooped right away after getting a drink, so she mixed up some baby bird formula and fed them with an eyedropper.

You know how parent birds always seem to shove their bill all the way down the chick's throat while feeding them? It always looks like they'd be in danger of puncturing the chick's stomach, but it turns out there's a reason for it. Much like humans, birds can have food going "down the wrong pipe", down the trachea or breathing tube rather than the esophagus or food tube. Unlike humans, birds don't have an epiglottis, the flap that closes over a mammal's trachea to keep food from getting in. When adult birds eat (I looked this up after getting home), the opening of the glottis closes during swallowing; but when feeding baby birds, you have to insert your eyedropper (or your bill, if you're a parent bird) well past the entrance to the trachea to make sure the food doesn't go down the breathing tube and drown/smother the chick. It takes training and practice to get this right, and sometimes even experienced bird rehabilitators get it wrong. So it was a good thing I didn't start randomly dropping chunks of food into the nestlings' mouths.

Sally wasn't any more able to identify the nestlings' species than I was. One possible suggestion I had was ash-throated flycatcher: they're about the right size, we have several hanging about the yard, and one had been hanging around that area of the yard all day, pestering the Bewick's wrens feeding their young in the nest box. I thought maybe the flycatchers wanted the nest box for their second nest of the season; but what if the flycatchers, normally cavity nesters, hadn't been able to find a suitable cavity, had tried building a nest in the blue spruce and done a poor job of it and the nest fell down?

It's a nice theory; but Sally showed me that these nestlings have crops (bulging places by their mouths where they were storing food as they ate), which apparently flycatchers don't. She said that's why flycatcher parents are so harried -- they're constantly on the move catching bugs to feed to their chicks, much more than most birds, because the babies can't store food themselves.

[The nest that fell] They could be canyon towhees or juniper titmice; the bird rehabilitator guides didn't include those species, so it's hard to tell. Or they could be robins or even bluebirds, but I haven't seen many of either species around the yard this summer. They seem too big to be house finches, wrens, chickadees or bushtits.

The construction of the nest might give some clues. It's a work of art, roomy and sturdy and very comfortable looking, made of tansy mustard and other weeds and lined with soft hair. Maybe I'll find someone who's good at nest identification.

Anyway, for now, their species is a mystery, but they're warm and fed and being well cared for. She warned me that nestlings don't always survive and sometimes they have injuries from falling, but with any luck, they'll grow and eventually will be released.

[ 15:40 Jul 02, 2019    More nature | permalink to this entry | comments ]