Shallow Thoughts : : Jul

Akkana's Musings on Open Source Computing and Technology, Science, and Nature.

Sat, 27 Jul 2019

Android: Saving MMS Photos and Videos and Text Conversations

Until recently, I didn't know anyone who texted much, but just recently I've been corresponding with several people who like to text. Which is fine with me, but comes with one problem: when someone texts me a photo or video, it's tough to see much on a phone screen. I'd much prefer to view it on my big computer screen.

Not to mention that I'd like to be able to archive some of these photos on a platform where I can actually do backups. Yes, I know I'm a dinosaur and modern people are supposed to entrust all our photos, addressbooks, private documents and everything else to Google or Apple. Sorry. I'm happier in the Mesozoic.

Anyway, the point is that I wanted a way to get a message that's sitting in my Android phone's Messages app onto my computer.

As far as I can tell from web searching, there's no way to back up all the texts on an Android phone if it's not rooted. If you want to keep an archive of a conversation ... well, sorry, you can't. (If you are rooted, there's apparently an sqlite db file buried somewhere under /data.)

[Saving an MMS video on Android] After much searching, I did discover that you can long-press on an individual photo or video and choose Save attachment.

A dialog will come up offering a filename and a checkbox. The box starts unchecked, and the Save button won't enable until you check it. What good that does is beyond me, since there's no option to choose a different filename.

There's also no way to see where it saved the file, and Android doesn't give any clue. It turns out to be /sdcard/Download. (I found that only after much searching, and after discovering that commands like find / -name "Download*" 2> /dev/null don't work in Android; they just fail silently. I don't know what you're supposed to use to search for files. Oh, right: you're not supposed to.)

So once you know the filename, you can

adb shell ls /sdcard/Download
adb pull /sdcard/Download/whatever.jpg

This isn't so bad as long as you do it before people have sent you 100 images that you want to archive. Fortunately I have fewer than ten I need to save.

Saving SMS Conversations

You can save the text of an entire SMS conversation with someone, though it won't save the media associated with the conversation.

Click on the three-dots menu item in the upper right of a conversation, scroll down in the menu and choose Save messages; the filename will be something like sms-20190723122204 (the numbers are the current date and time). Select all messages, or just the ones you want to save. Click your way through a warning screen or two telling you it won't save MMS media or group messages.

When you tap Save, there's a briefly-flashing message telling you where it saved. If you have a photographic memory, be prepared; otherwise, you might need to save the same conversation several times to read the whole path. On my old Samsung, they turn out to go into /storage/emulated/0/Messaging.
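Incidentally, the timestamp in those sms-YYYYMMDDHHMMSS filenames is easy to parse if you later want to sort saved conversations by date. A small Python sketch (the format string is my inference from the example filename above):

```python
from datetime import datetime

def sms_filename_to_datetime(name):
    """Parse the date/time out of a saved-conversation filename."""
    return datetime.strptime(name, "sms-%Y%m%d%H%M%S")

when = sms_filename_to_datetime("sms-20190723122204")
print(when)  # 2019-07-23 12:22:04
```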

Format of a Saved Conversation

What you get is an XML file with lots of entries, in reverse chronological order. One entry looks something like:

    <message type="SMS">
      <address>5059208957</address>
      <body>We+will+probably+be+here+all+day.+Let+me+know+when+you+could+visit+</body>
      <date>1563723506834</date>
      <read>1</read>
      <type>1</type>
      <locked>0</locked>
    </message>

So to make them easily readable, you'd want something that could split out the <body> parts, then decode them to turn those plusses and any HTML entities back into spaces and readable characters.
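Python's standard library can do that decoding on its own (a quick check; urllib.parse.unquote_plus handles the plus-encoding, and html.unescape covers any HTML entities that show up):

```python
import html
import urllib.parse

# The body text exactly as it appears in the saved XML:
body = "We+will+probably+be+here+all+day.+Let+me+know+when+you+could+visit+"

# unquote_plus turns the plusses back into spaces (and decodes any
# %xx escapes); html.unescape handles entities like &amp; if present.
text = html.unescape(urllib.parse.unquote_plus(body))
print(text)  # We will probably be here all day. Let me know when you could visit
```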

Parsing XML is easy enough with Python's BeautifulSoup module -- or is it? Usually with BS4 I use the "lxml" parser, but I hit a snag in this case: see that body tag inside each message tag? It turns out the lxml parser, despite its name, is meant to be an HTML parser, not a general XML parser, and it inserts HTML structure that was never in the file:

>>> from bs4 import BeautifulSoup
>>> with open('sms-20190723202727.xml') as fp:
...     soup = BeautifulSoup(fp, "lxml")
...     for tag in soup.findAll():
...         print(tag.name)
...
html
body
file
thread
message
...

See how it has added html and body tags that weren't actually in the file? Worse, it also ignores the body tags that actually are in the file -- a problem, because body is the tag where Android's Messages app puts the actual text of the message.

Instead, use the "xml" parser, which is actually intended for XML.

Once you get past the lxml problems, the rest is pretty easy, except that you need to know that the dates are in Java Timestamp format, which means you have to divide by 1000 to get a Unix timestamp you can pass to datetime. And each SMS text is URL plus-encoded, so you can unquote it with urllib.parse.unquote_plus.

from bs4 import BeautifulSoup
from datetime import datetime
import urllib.parse
import sys

def parse_sms_convo(filename):
    with open(filename) as fp:
        soup = BeautifulSoup(fp, "xml")

        for msg in soup.findAll('message'):
            d = datetime.fromtimestamp(int(msg.find('date').text)/1000)
            body = msg.find('body')
            addr = msg.find('address')
            print("%s (from %s):\n   %s" %
                  (d.strftime("%Y-%m-%d %H:%M"),
                   addr.text,
                   urllib.parse.unquote_plus(body.text)))
            print()

if __name__ == '__main__':
    parse_sms_convo(sys.argv[1])

Of course, the best would be to get the entire conversation including images and videos all together. Apparently that's possible with a rooted phone, but I haven't found any way that doesn't require rooting. (It forever amazes me that advanced users who care about things like root access aren't bothered in the least by the fact that rooting almost always requires downloading a binary from a dodgy malware-distribution website, running it and giving it complete access to your phone. Me, I just hope a day eventually comes when there's a phone OS option that gives me the choice of controlling my own phone without resorting to malware.)

[ 13:41 Jul 27, 2019    More tech | permalink to this entry | ]

Tue, 23 Jul 2019

360 Panoramas with Povray and/or ImageMagick

This is Part IV of a four-part article on ray tracing digital elevation model (DEM) data. The goal: render a ray-traced image of mountains from a DEM.

Except there are actually several more parts on the way, related to using GRASS to make viewsheds. So maybe this is actually a five- or six-parter. We'll see.

The Easy Solution

Skipping to the chase here ... I had a whole long article written about how to make a sequence of images with povray, each pointing in a different direction, and then stitch them together with ImageMagick.

But a few days after I'd gotten it all working, I realized none of it was needed for this project, because ... ta-DA — povray accepts this argument inside its camera section:

    angle 360

Duh! That makes it so easy.

You do need to change povray's projection to cylindrical; the default is "perspective" which warps the images. If you set your look_at to point due south -- the first and second coordinates are the same as your observer coordinate, the third being zero so it's looking southward -- then povray will create a lovely strip starting at 0 degrees bearing (due north), and with south right in the middle. The camera section I ended up with was:

camera {
    cylinder 1

    location <0.344444, 0.029620, 0.519048>
    look_at  <0.344444, 0.029620, 0>

    angle 360
}
with the same light_source and height_field as in Part III.

[360 povray panorama from Overlook Point]

There are still some more steps I'd like to do. For instance, fitting names of peaks to that 360-degree pan.

The rest of this article discusses some of the techniques I would have used, which might be useful in other circumstances.

A Script to Spin the Observer Around

Stepping the camera evenly around the compass isn't as simple as adding 45 degrees to a bearing each time: turning each bearing into a look_at point takes some spherical trigonometry, and the result depends on the observer's coordinates.

Obviously, this wasn't something I wanted to calculate by hand, so I wrote a script for it: demproj.py. Run it with the name of a DEM file and the observer's coordinates:

demproj.py demfile.png 35.827 -106.1803
It takes care of calculating the observer's elevation, normalizing to the image size and all that. It generates eight files, named outfileN.png, outfileNE.png etc.
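The core spherical-trig step is the standard destination-point formula: given the observer's position and a compass bearing, find the point a fixed distance away to aim the camera at. A minimal sketch (the function and variable names are mine, not necessarily what demproj.py uses):

```python
import math

def destination_point(lat, lon, bearing_deg, dist_km, R=6371.0):
    """Great-circle destination from (lat, lon) along bearing_deg,
       dist_km away, on a sphere of radius R km."""
    d = dist_km / R                      # angular distance in radians
    th = math.radians(bearing_deg)
    la1, lo1 = math.radians(lat), math.radians(lon)
    la2 = math.asin(math.sin(la1) * math.cos(d) +
                    math.cos(la1) * math.sin(d) * math.cos(th))
    lo2 = lo1 + math.atan2(math.sin(th) * math.sin(d) * math.cos(la1),
                           math.cos(d) - math.sin(la1) * math.sin(la2))
    return math.degrees(la2), math.degrees(lo2)

# Eight look_at targets, 45 degrees apart, 20 km from Overlook Point:
targets = [destination_point(35.827, -106.1803, b, 20)
           for b in range(0, 360, 45)]
```

Each of those points then gets converted back to pixel coordinates with the affine transform from Part I before it can become a povray look_at.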

Stitching Panoramas with ImageMagick

To stitch those demproj images manually in ImageMagick, this should work in theory:

convert -size 3600x600 xc:black \
    outfile000.png -geometry +0+0 -composite \
    outfile045.png -geometry +400+0 -composite \
    outfile090.png -geometry +800+0 -composite \
    outfile135.png -geometry +1200+0 -composite \
    outfile180.png -geometry +1600+0 -composite \
    outfile225.png -geometry +2000+0 -composite \
    outfile270.png -geometry +2400+0 -composite \
    outfile315.png -geometry +2800+0 -composite \
    out-composite.png
or simply
convert outfile*.png +smush -400 out-smush.png

Adjusting Panoramas in GIMP

But in practice, some of the images have a few-pixel offset, and I never did figure out why; maybe it's a rounding error in my angle calculations.

I opened the images as layers in GIMP, and used my GIMP script Pandora to lay them out as a panorama. The cylindrical projection should make the edges match perfectly, so you can turn off the layer masking.

Then use the Move tool to adjust for the slight errors (tip: when the Move tool is active, the arrow keys will move the current layer by a single pixel).

If you get the offsets perfect and want to know what they are so you can use them in ImageMagick or another program, use GIMP's Filters->Python-Fu->Console. This assumes the panorama image is the only one loaded in GIMP, otherwise you'll have to inspect gimp.image_list() to see where in the list your image is.

>>> img = gimp.image_list()[0]
>>> for layer in img.layers:
...     print layer.name, layer.offsets

Tags: , , , ,
[ 15:28 Jul 23, 2019    More mapping | permalink to this entry | ]

Wed, 17 Jul 2019

Ray-Tracing Digital Elevation Data in 3D with Povray (Part III)

This is Part III of a four-part article on ray tracing digital elevation model (DEM) data. The goal: render a ray-traced image of mountains from a DEM.

In Part II, I showed how the povray camera position and angle need to be adjusted based on the data, and the position of the light source depends on the camera position.

In particular, if the camera is too high, you won't see anything because all the relief will be tiny invisible bumps down below. If it's too low, it might be below the surface and then you can't see anything. If the light source is too high, you'll have no shadows, just a uniform grey surface.

That's easy enough to calculate for a simple test image like the one I used in Part II, where you know exactly what's in the file. But what about real DEM data where the elevations can vary?

Explore Your Test Data

[Hillshade of northern New Mexico mountains] For a test, I downloaded some data that includes the peaks I can see from White Rock in the local Jemez and Sangre de Cristo mountains.

wget -O mountains.tif 'http://opentopo.sdsc.edu/otr/getdem?demtype=SRTMGL3&west=-106.8&south=35.1&east=-105.0&north=36.5&outputFormat=GTiff'

Create a hillshade to make sure it looks like the right region:

gdaldem hillshade mountains.tif hillshade.png
pho hillshade.png
(or whatever your favorite image viewer is, if not pho). The image at right shows the hillshade for the data I'm using, with a yellow cross added at the location I'm going to use for the observer.

Sanity check: do the lowest and highest elevations look right? Let's look in both meters and feet, using the tricks from Part I.

>>> import gdal
>>> import numpy as np

>>> demdata = gdal.Open('mountains.tif')
>>> demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
>>> demarray.min(), demarray.max()
(1501, 3974)
>>> print([ x * 3.2808399 for x in (demarray.min(), demarray.max())])
[4924.5406899, 13038.057762600001]

That looks reasonable. Where are those highest and lowest points, in pixel coordinates?

>>> np.where(demarray == demarray.max())
(array([645]), array([1386]))
>>> np.where(demarray == demarray.min())
(array([1667]), array([175]))

Those coordinates are reversed because of the way numpy arrays are organized: (1386, 645) in the image looks like Truchas Peak (the highest peak in this part of the Sangres), while (175, 1667) is where the Rio Grande disappears downstream off the bottom left edge of the map -- not an unreasonable place to expect to find a low point. If you're having trouble eyeballing the coordinates, load the hillshade into GIMP and watch the coordinates reported at the bottom of the screen as you move the mouse.

While you're here, check the image width and height. You'll need it later.

>>> demarray.shape
(1680, 2160)
Again, those are backward: they're the image height, width.

Choose an Observing Spot

Let's pick a viewing spot: Overlook Point in White Rock (marked with the yellow cross on the image above). Its coordinates are -106.1803, 35.827. What are the pixel coordinates? Using the formula from the end of Part I:

>>> import affine
>>> affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
>>> inverse_transform = ~affine_transform
>>> [ round(f) for f in inverse_transform * (-106.1803, 35.827) ]
[744, 808]

Just to double-check, what's the elevation at that point in the image? Note again that the numpy array needs the coordinates in reverse order: Y first, then X.

>>> demarray[808, 744], demarray[808, 744] * 3.28
(1878, 6159.839999999999)

1878 meters, 6160 feet. That's fine for Overlook Point. We have everything we need to set up a povray file.

Convert to PNG

As mentioned in Part II, povray will only accept height maps as a PNG file, so use gdal_translate to convert:

gdal_translate -ot UInt16 -of PNG mountains.tif mountains.png

Use the Data to Set Camera and Light Angles

The camera should be at the observer's position, and povray needs that as a line like

    location <rightward, upward, forward>
where those numbers are fractions of 1.

The image size in pixels is 2160x1680, and the observer is at pixel location (744, 808). So the first and third coordinates of location should be 744/2160 and 808/1680, right? Well, almost. That Y coordinate of 808 is measured from the top, while povray measures from the bottom. So the third coordinate is actually 1. - 808/1680.

Now we need height, but how do you normalize that? That's another thing nobody seems to document anywhere I can find; but since we're using a 16-bit PNG, I'll guess the maximum is 2^16, or 65536. That's meters, so DEM files can specify some darned high mountains! So that's why that location <0, .25, 0> line I got from the Mapping Hacks book didn't work: it put the camera at .25 * 65536, or 16,384 meters elevation, waaaaay up high in the sky.

My observer at Overlook Point is at 1,878 meters elevation, which corresponds to a povray height of 1878/65536. I'll use the same value for the look_at height to look horizontally. So now we can calculate all three location coordinates: 744/2160 = .3444, 1878/65536 = 0.0287, 1. - 808/1680 = 0.5190:

    location <.3444, 0.0287, .519>
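That normalization is easy to wrap in a little helper. A sketch under the same assumptions as above (pixel origin at top left, 16-bit height field so elevations normalize against 65536 meters; the function name is mine):

```python
def povray_location(px, py, width, height, elev_m, elev_max=65536):
    """Observer pixel position + elevation in meters ->
       povray <rightward, upward, forward> fractions."""
    return (px / width,           # rightward: fraction across the image
            elev_m / elev_max,    # upward: elevation over the 16-bit max
            1. - py / height)     # forward: flip, since povray measures
                                  # from the bottom (south) edge

print(povray_location(744, 808, 2160, 1680, 1878))
# approximately (0.3444, 0.0287, 0.5190)
```

In practice, as the next section describes, the middle (height) value needs to be raised a bit above ground level before povray will show anything.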

Povray Glitches

Except, not so fast: that doesn't work. Remember how I mentioned in Part II that povray doesn't work if the camera location is at ground level? You have to put the camera some unspecified minimum distance above ground level before you see anything. I fiddled around a bit and found that if I multiplied the ground level height by 1.15 it worked, but 1.1 wasn't enough. I have no idea whether that will work in general. All I can tell you is, if you're setting location to be near ground level and the generated image looks super dark regardless of where your light source is, try raising your location a bit higher. I'll use 1878/65536 * 1.15 = 0.033.

For a first test, try setting look_at to some fixed place in the image, like the center of the top (north) edge (right .5, forward 1):

    location <.3444, 0.033, .519>
    look_at <.5, 0.033, 1>

That means you won't be looking exactly north, but that's okay, we're just testing and will worry about that later. The middle value, the elevation, is the same as the camera elevation so the camera will be pointed horizontally. (look_at can be at ground level or even lower, if you want to look down.)

Where should the light source be? I tried to be clever and put the light source at some predictable place over the observer's right shoulder, and most of the time it didn't work. I ended up just fiddling with the numbers until povray produced visible terrain. That's another one of those mysterious povray quirks. This light source worked fairly well for my DEM data, but feel free to experiment:

light_source { <2, 1, -1> color <1,1,1> }

All Together Now

Put it all together in a mountains.pov file:

camera {
    location <.3444, 0.0330, .519>
    look_at <.5, 0.0287, 1>
}

light_source { <2, 1, -1> color <1,1,1> }

height_field {
    png "mountains.png"
    smooth
    pigment {
        gradient y
        color_map {
            [ 0 color <.7, .7, .7> ]
            [ 1 color <1, 1, 1> ]
        }
    }
    scale <1, 1, 1>
}
[Povray-rendering of Black and Otowi Mesas from Overlook Point] Finally, you can run povray and generate an image!
povray +A +W800 +H600 +INAME_OF_POV_FILE +OOUTPUT_PNG_FILE

And once I finally got to this point I could immediately see it was correct. That's Black Mesa (Tunyo) out in the valley a little right of center, and I can see White Rock canyon in the foreground with Otowi Peak on the other side of the canyon. (I strongly recommend, when you experiment with this, that you choose a scene that's very distinctive and very familiar to you, otherwise you'll never be sure if you got it right.)

Next Steps

Now I've accomplished my goal: taking a DEM map and ray-tracing it. But I wanted even more. I wanted a 360-degree panorama of all the mountains around my observing point.

Povray can't do that by itself, but in Part IV, I'll show how to make a series of povray renderings and stitch them together into a panorama. Part IV, Making a Panorama from Raytraced DEM Images

Tags: , , , , ,
[ 16:43 Jul 17, 2019    More mapping | permalink to this entry | ]

Fri, 12 Jul 2019

Height Fields in Povray (Ray Tracing Elevation Data, Part II)

This is Part II of a four-part article on ray tracing digital elevation model (DEM) data. (Actually, it's looking like there may be five or more parts in the end.)

The goal: use ray tracing to render mountains from a DEM, showing the elevations of mountain peaks as if you're inside the image looking out at those peaks.

I'd seen the open source ray tracer povray used for that purpose in the book Mapping Hacks: Tips & Tools for Electronic Cartography: Hack 20, "Make 3-D Raytraced Terrain Models", discusses how to use it for DEM data.

Unfortunately, the book is a decade out of date now, and lots of things have changed. When I tried following the instructions in Hack 20, no matter what DEM file I used as input I got the same distorted grey rectangle. Figuring out what was wrong meant understanding how povray works, which involved a lot of testing and poking since the documentation isn't clear.

Convert to PNG

Before you can do anything, convert the DEM file to a 16-bit greyscale PNG, the only format povray accepts for what it calls height fields:

gdal_translate -ot UInt16 -of PNG demfile.tif demfile.png

If your data is in some format like ArcGIS that has multiple files, rather than a single GeoTIFF file, try using the name of the directory containing the files in place of a filename.

Set up the .pov file

Now create a .pov file, which will look something like this:

camera {
    location <.5, .5, 2>
    look_at  <.5, .6, 0>
}

light_source { <0, 2, 1> color <1,1,1> }

height_field {
    png "YOUR_DEM_FILE.png"

    smooth
    pigment {
        gradient y
        color_map {
            [ 0 color <.5, .5, .5> ]
            [ 1 color <1, 1, 1> ]
        }
    }

    scale <1, 1, 1>
}

The trick is setting up the right values for the camera and light source. Coordinates like the camera location and look_at are specified by three numbers that represent <rightward, upward, forward> as a fraction of the image size.

Imagine your DEM tilting forward to lie flat in front of you: the bottom (southern) edge of your DEM image corresponds to 0 forward, whereas the top (northern) edge is 1 forward. 0 in the first coordinate is the western edge, 1 is the eastern. So, for instance, if you want to put the virtual camera at the middle of the bottom (south) edge of your DEM and look straight north and horizontally, neither up nor down, you'd want:

    location <.5, HEIGHT, 0>
    look_at  <.5, HEIGHT, 1>
(I'll talk about HEIGHT in a minute.)

It's okay to go negative, or to use numbers bigger than one; that just means a coordinate that's outside the height map. For instance, a camera location of

    location <-1, HEIGHT, 2>
would be off the west and north edges of the area you're mapping.

look_at, as you might guess, is the point the camera is looking at. Rather than specify an angle, you specify a point in three dimensions which defines the camera's angle.

What about HEIGHT? If you make it too high, you won't see anything because the relief in your DEM will be too far below you and will disappear. That's what happened with the code from the book: it specified location <0, .25, 0>, which, in current DEM files, means the camera is more than 16,000 meters up in the sky, so high that the mountains shrink to invisibility.

If you make the height too low, then everything disappears because ... well, actually I don't know why. If it's 0, then you're most likely underground and I understand why you can't see anything, but you have to make it significantly higher than ground level, and I'm not sure why. Seems to be a povray quirk.

Once you have a .pov file with the right camera and light source, you can run povray like this:

povray +A +W800 +H600 +Idemfile.pov +Orendered.png
then take a look at rendered.png in your favorite image viewer.

Simple Sample Data

['bowling pin' sample DEM for testing povray] There's not much documentation for any of this. There's povray: Placing the Camera, but it doesn't explain details like which number controls which dimension or why it doesn't work if you're too high or too low. To figure out how it worked, I made a silly little test image in GIMP consisting of some circles with fuzzy edges. Those correspond to very tall pillars with steep sides: in these height maps, white means the highest point possible, black means the lowest.

Then I tried lots of different values for location and look_at until I understood what was going on.

For my bowling-pin image, it turned out looking northward (upward) from the south (the bottom of the image) didn't work, because the pillar at the point of the triangle blocked everything else. It turned out to be more useful to put the camera beyond the top (north side) of the image and look southward, back toward the image.

    location <.5, HEIGHT, 2>
    look_at  <.5, HEIGHT, 0>

[povray ray-traced bowling pin result]

The position of the light_source is also important. For instance, for my circles, the light source given in the original hack, <0, 3000, 0>, is so high that the pillars aren't visible at all, because the light is shining only on their tops and not on their sides. (That was also true for most DEM data I tried to view.) I had to move the light source much lower, so it illuminated the sides of the pillars and cast some shadows, and that was true for DEM data as well.

The .pov file above, with the camera halfway up the field (.5) and situated in the center of the north end of the field, looking southward and just slightly up from horizontal (.6), rendered like this. I can't explain the two artifacts in the middle. The artifacts at the tops and bottoms of the pillars are presumably rounding errors and don't worry me.

Finally, I felt like I was getting a handle on povray camera positioning. The next step was to apply it to real Digital Elevation Maps files. I'll cover that in Part III, Povray on real DEM data: Ray-Tracing Digital Elevation Data in 3D with Povray

Tags: , , ,
[ 18:02 Jul 12, 2019    More mapping | permalink to this entry | ]

Sun, 07 Jul 2019

Working with Digital Elevation Models with GDAL and Python (Ray Tracing Elevation Data, Part I)

This is Part I of a four-part article on ray tracing digital elevation model (DEM) data.

One of my hiking buddies uses a phone app called Peak Finder. It's a neat program that lets you spin around and identify the mountain peaks you see.

Alas, I can't use it, because it won't work without a compass, and [expletive deleted] Samsung disables the compass in their phones, even though the hardware is there. I've often wondered if I could write a program that would do something similar. I could use the images in planetarium shows, and could even include additions like predicting exactly when and where the moon would rise on a given date.

Before plotting any mountains, first you need some elevation data, called a Digital Elevation Model or DEM.

Get the DEM data

Digital Elevation Models are available from a variety of sources in a variety of formats. But the downloaders and formats aren't as well documented as they could be, so it can be a confusing mess.

USGS

[Typical experience with USGS map tiles not loading] USGS steers you to the somewhat flaky and confusing National Map Download Client. Under Data in the left sidebar, click on Elevation Products (3DEP), select the accuracy you need, then zoom and pan the map until it shows what you need.

Current Extent doesn't seem to work consistently, so use Box/Point and sweep out a rectangle. Then click on Find products. Each "product" should have a download link next to it, or if not, you can put it in your cart and View Cart.

Except that National Map tiles often don't load, so you can end up with a mostly-empty map (as shown here) where you have no idea what area you're choosing. Once this starts happening, switching to a different set of tiles probably won't help; all you can do is wait a few hours and hope it gets better.

Or get your DEM data somewhere else. Even if you stick with the USGS, they have a different set of DEM data, called SRTM (it comes from the Shuttle Radar Topography Mission) which is downloaded from a completely different place, SRTM DEM data, Earth Explorer. It's marginally easier to use than the National Map and less flaky about tile loading, and it gives you GeoTIFF files instead of zip files containing various ArcGIS formats. Sounds good so far; but once you've wasted time defining the area you want, suddenly it reveals that you can't download anything unless you first make an account, and you have to go through a long registration process that demands name, address and phone number (!) before you can actually download anything.

Of course neither of these sources lets you just download data for a given set of coordinates; you have to go through the interactive website any time you want anything. So even if you don't mind giving the USGS your address and phone number, if you want something you can run from a program, you need to go elsewhere.

Unencumbered DEM Sources

Fortunately there are several other sources for elevation data. Be sure to read through the comments, which list better sources than in the main article.

The best I found is OpenTopography's SRTM API, which lets you download arbitrary areas specified by latitude/longitude bounding boxes.

Verify the Data: gdaldem

[Making a DEM visible with GIMP Levels] Okay, you've got some DEM data. Did you get the area you meant to get? Is there any data there? DEM data often comes packaged as an image, primarily GeoTIFF. You might think you could simply view that in an image viewer -- after all, those nice preview images they show you on those interactive downloaders show the terrain nicely. But the actual DEM data is scaled so that even high mountains don't show up; you probably won't be able to see anything but blackness.

One way of viewing a DEM file as an image is to load it into GIMP. Bring up Colors->Levels, go to the input slider (the upper of the two sliders) and slide the rightmost triangle leftward until it's near the right edge of the histogram. Don't save it that way (that will mess up the absolute elevations in the file); it's just a quick way of viewing the data.

Update: A better, one-step way is Colors->Auto->Stretch Contrast.


[hillshade generated by gdaldem] A better way to check DEM data files is a beautiful little program called gdaldem. It has several options, like generating a hillshade image:

gdaldem hillshade n35_w107_1arc_v3.tif hillshade.png

Then view hillshade.png in your favorite image viewer and see if it looks like you expect. Having read quite a few elaborate tutorials on hillshade generation over the years, I was blown away at how easy it is with gdaldem.

Here are some other operations you can do on DEM data.

Translate the Data to Another Format

gdal has lots more useful stuff beyond gdaldem. For instance, my ultimate goal, ray tracing, will need a PNG:

gdal_translate -ot UInt16 -of PNG srtm_54_07.tif srtm_54_07.png

gdal_translate can recognize most DEM formats. If you have a complicated multi-file format like ARCGIS, try using the name of the directory where the files live.

Get Vertical Limits, for Scaling

What's the highest point in your data, and at what coordinates does that peak occur? You can find the highest and lowest points easily with Python's gdal package if you convert the gdal.Dataset into a numpy array:

import gdal
import numpy as np

demdata = gdal.Open(filename)
demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
print(demarray.min(), demarray.max())

That gives you the highest and lowest elevations. But where are they in the data? That's not super intuitive in numpy; the best way I've found is:

indices = np.where(demarray == demarray.max())
ymax, xmax = indices[0][0], indices[1][0]
print("The highest point is", demarray[ymax][xmax])
print("  at pixel location", xmax, ymax)

Translate Between Lat/Lon and Pixel Coordinates

But now that you have the pixel coordinates of the high point, how do you map that back to latitude and longitude? That's trickier, but here's one way, using the affine package:

import affine

affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
lon, lat = affine_transform * (xmax, ymax)

What about the other way? You have latitude and longitude and you want to know what pixel location that corresponds to? Define an inverse transformation:

inverse_transform = ~affine_transform
px, py = [ round(f) for f in inverse_transform * (lon, lat) ]
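If you're curious what the affine package is doing under the hood: for a north-up GeoTIFF (no rotation terms), both directions reduce to plain arithmetic on the six numbers from GetGeoTransform(). A minimal sketch, with made-up geotransform values for illustration:

```python
# gt = (x_origin, pixel_width, 0, y_origin, 0, pixel_height);
# pixel_height is negative because row 0 is the northern edge.
def pixel_to_lonlat(px, py, gt):
    return gt[0] + px * gt[1], gt[3] + py * gt[5]

def lonlat_to_pixel(lon, lat, gt):
    return round((lon - gt[0]) / gt[1]), round((lat - gt[3]) / gt[5])

# Hypothetical geotransform, roughly 3-arcsecond pixels:
gt = (-106.8, 0.000833, 0, 36.5, 0, -0.000833)
lon, lat = pixel_to_lonlat(744, 808, gt)
assert lonlat_to_pixel(lon, lat, gt) == (744, 808)   # round trip
```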

Those transforms will become important once we get to Part III. But first, Part II: Height Fields in Povray.

Tags: , , ,
[ 18:15 Jul 07, 2019    More mapping | permalink to this entry | ]

Tue, 02 Jul 2019

Orphaned Nestlings

Yesterday afternoon I was walking up the path to the back door when I noticed a bird's nest on the ground. I bent to examine it -- and spied two struggling baby birds on the rocks next to the nest.

[Orphaned nestling baby birds, gaping for food] I gently picked them up and put them back in the nest. Then what? On the ground there, they'd be easy fodder for coyotes, foxes or any other predator. But the tree it must have fallen from is a tall blue spruce; the lowest branches are way above head height, and even then I couldn't see any secure place to put a nest where it wouldn't immediately fall down again.

I chose another option: there's an upstairs deck immediately adjacent to that tree, so if I put the nest on the corner of the deck, it might be close enough to its original location that if the parents came looking for the nest, they'd be able to hear the chicks' calls. I've always read that parents will hang around and feed nestlings if they fall from the tree.

Nice theory; but the problem was that the chicks were too quiet. When they felt the nest jiggle as I moved it, they gaped, obviously wanting food; but although they did make a faint peeping, it wasn't loud enough to be heard from more than a few feet away.

Nevertheless, I left them there overnight, hoping a parent would find them in the evening or first thing in the morning. The deck is adjacent to my bedroom, so I was pretty sure I'd hear it if the parents came and fed the chicks. (It's easy to hear when the Bewick's wrens in the nest box above the other deck come to feed their chicks in the morning.)

Alas, there was no reunion. I heard no sounds and saw no activity. The chicks were still alive and active in the morning, but obviously very hungry, gaping every time they heard a noise. I wanted very badly to feed them, to find a bug or a little piece of steak or something that I could put in those gaping hungry mouths, but I was afraid of feeding them something that might turn out to be harmful. As it turned out, that was the right decision.

It was time to call for help (I'd posted on our local birders' list the evening before, but no one had any useful advice). Fortunately, we have an experienced bird rehabilitator in town, whom I know slightly, so I called her and got the okay to bring them in.

[Orphaned nestlings, now warm and fed] She weighed the babies (roughly 17 and 14 grams), put them on a heating pad and gave them a little Pedialyte solution. She said she couldn't actually feed them until they pooped; if I understood correctly, baby birds can get backed up, and if you feed them then, it can kill them. Fortunately they both pooped right away after getting a drink, so she mixed up some baby bird formula and fed them with an eyedropper.

You know how parent birds always seem to shove their bill all the way down the chick's throat while feeding them? It always looks like they'd be in danger of puncturing the chick's stomach, but it turns out there's a reason for it. Much like humans, birds can have food going "down the wrong pipe", down the trachea or breathing tube rather than the esophagus or food tube. Unlike humans, birds don't have an epiglottis, the flap that closes over a mammal's trachea to keep food from getting in. When adult birds eat (I looked this up after getting home), the opening of the glottis closes during swallowing; but when feeding baby birds, you have to insert your eyedropper (or your bill, if you're a parent bird) well past the entrance to the trachea to make sure the food doesn't go down the breathing tube and drown/smother the chick. It takes training and practice to get this right, and sometimes even experienced bird rehabilitators get it wrong. So it was a good thing I didn't start randomly dropping chunks of food into the nestlings' mouths.

Sally wasn't any more able to identify the nestlings' species than I was. One possibility I suggested was ash-throated flycatcher: they're about the right size, we have several hanging about the yard, and one had been hanging around that area of the yard all day, pestering the Bewick's wrens feeding their young in the nest box. I thought maybe the flycatchers wanted the nest box for their second nest of the season; but what if the flycatchers, normally cavity nesters, hadn't been able to find a suitable cavity, had tried building a nest in the blue spruce and done a poor job of it, and the nest fell down?

It's a nice theory; but Sally showed me that these nestlings have crops (bulging places by their mouths where they were storing food as they ate), which apparently flycatchers don't. She said that's why flycatcher parents are so harried -- they're constantly on the move catching bugs to feed to their chicks, much more than most birds, because the babies can't store food themselves.

[The nest that fell] They could be canyon towhees or juniper titmice; the bird rehabilitator's guides didn't include those species, so it's hard to tell. Or they could be robins or even bluebirds, but I haven't seen many of either species around the yard this summer. They seem too big to be house finches, wrens, chickadees or bushtits.

The construction of the nest might give some clues. It's a work of art, roomy and sturdy and very comfortable looking, made of tansy mustard and other weeds and lined with soft hair. Maybe I'll find someone who's good at nest identification.

Anyway, for now, their species is a mystery, but they're warm and fed and being well cared for. She warned me that nestlings don't always survive and sometimes they have injuries from falling, but with any luck, they'll grow and eventually will be released.

[ 15:40 Jul 02, 2019    More nature | permalink to this entry | ]