Shallow Thoughts

Akkana's Musings on Open Source Computing and Technology, Science, and Nature.

Thu, 10 Oct 2019

RSS was Down, Now Back Up, with Atom

A reader pointed out to me that the RSS page on my blog hadn't been updated since May.

That must be when I updated PyBlosxom to the latest version and switched to the python3 branch. Python 2, as you probably know, will no longer be supported starting sometime next year (the exact date in 2020 doesn't seem to be set). Anyway, PyBlosxom was one of the few tools I use that still ran only on Python 2, and since I saw they had a python3 branch, I tried it.

PyBlosxom is no longer a project in active development: as I understand it, some of the developers drifted off to other blogging systems, others decided that it worked well enough and didn't really need further tweaking. But someone, at some point, did make a try at porting it to Python 3; and when I tried the python3 branch, I was able to make it work after I made a couple of very minor tweaks (which I contributed back to the project on GitHub, and they were accepted).

Everything went fine for a few months, until I received the alert that the index.rss and index.rss20 files weren't being generated. Curiously, the RSS files for each individual post are still there; just not the index.rss and index.rss20.

I found there was already a bug filed on that. I tried the workaround mentioned in the bug, at the same time adding Atom to the RSS 0.9.1 and RSS 2.0 flavors I've had in the past. I haven't generated Atom for all the old entries, but any new entries starting now should be available as Atom.

Fingers crossed! If you're reading this, then it worked and my RSS pages are back. Sorry about the RSS hiatus.

Tags: , ,
[ 09:10 Oct 10, 2019    More blogging | permalink to this entry | comments ]

Tue, 01 Oct 2019

Making Web Maps using Python, Folium and Shapefiles

A friend recently introduced me to Folium, a quick and easy way of making web maps with Python.

The Folium Quickstart gets you started in a hurry. In just two lines of Python (plus the import line), you can write an HTML file that you can load in any browser to display a slippy map, or you can display it inline in a Jupyter notebook.

Folium uses the very mature Leaflet JavaScript library under the hood. But it lets you do all the development in a few lines of Python rather than a lot of lines of Javascript.

Having run through most of the quickstart, I was excited to try Folium for showing GeoJSON polygons. I'm helping with a redistricting advocacy project; I've gotten shapefiles for the voting districts in New Mexico, and have been wanting to build a map that shows them which I can then extend for other purposes.

Step 1: Get Some GeoJSON

The easiest place to get voting district data is from TIGER, the geographic arm of the US Census.

For the districts resulting from the 2010 Decadal Census, start here: Cartographic Boundary Files - Shapefile (you can also get them as KML, but not as GeoJSON). There's a category called "Congressional Districts: 116th Congress", and farther down the page, under "State-based Files", you can get shapefiles for the upper and lower houses of your state.

You can also likely download them from www2.census.gov/geo/tiger/TIGER2010/, as long as you can figure out how to decode the obscure directory names, like ELSD and POINTLM; the first step is to figure out what those mean, and I never found anything that could decode them.

(Before I found the TIGER district data, I took a more roundabout path that involved learning how to merge shapes; more on that in a separate post.)

Okay, now you have a shapefile (unzip the TIGER file to get a bunch of files with names like cb_2018_35_sldl_500k.* -- shape "files" are an absurd ESRI concept that actually uses seven separate files for each dataset, so they're always packaged as a zip archive, and programs that read shapefiles expect that when you pass them a .shp, there will be a bunch of other files with the same basename but different extensions in the same directory).
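As a quick sketch of what that bundle looks like on disk (demonstrated here on dummy files, since your basenames will vary):

```python
import tempfile
from pathlib import Path

# Simulate an unzipped TIGER download: one dataset spread across
# several files sharing a basename but with different extensions:
tmp = Path(tempfile.mkdtemp())
for ext in ('.shp', '.shx', '.dbf', '.prj'):
    (tmp / ('cb_2018_35_sldl_500k' + ext)).touch()

# Given the .shp, a shapefile reader expects to find the companion
# files sitting right next to it:
shp = tmp / 'cb_2018_35_sldl_500k.shp'
companions = sorted(p.suffix for p in tmp.glob(shp.stem + '.*'))
print(companions)
```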

But Folium can't handle shapefiles, only GeoJSON. You can do that translation with a GDAL command:

ogr2ogr -t_srs EPSG:4326 -f GeoJSON file.json file.shp

Or you can do it programmatically with the GDAL Python bindings:

from osgeo import gdal

def shapefile2geojson(infile, outfile):
    '''Translate a shapefile to GeoJSON, reprojecting into WGS84.'''
    options = gdal.VectorTranslateOptions(format="GeoJSON",
                                          dstSRS="EPSG:4326")
    gdal.VectorTranslate(outfile, infile, options=options)

The EPSG:4326 specifier, if you read man ogr2ogr, is supposedly for reprojecting the data into WGS84 coordinates, which is what most web maps want (EPSG:4326 is an alias for WGS84). But it has an equally important function: even if your input shapefile is already in WGS84, adding that option somehow ensures that GDAL will use degrees as the output unit. The TIGER data already uses degrees so you don't strictly need that, but some data, like the precinct data I got from UNM RGIS, uses other units, like meters, which will confuse Folium and Leaflet. And the TIGER data isn't in WGS84 anyway; it's in GRS1980 (you can tell by reading the .prj file in the same directory as the .shp). Don't ask me about details of all these different geodetic reference systems; I'm still trying to figure it all out. Anyway, I recommend adding the EPSG:4326 as the safest option.

Step 2: Show the GeoJSON in a Folium Map

In theory, looking at the Folium Quickstart, all you need is folium.GeoJson(filename, name='geojson').add_to(m). In practice, you'll probably want to do more: color the regions, and add highlighting and tooltips on mouseover.

Each of these requires some extra work.

You can color the regions with a style function:

folium.GeoJson(jsonfile, style_function=style_fcn).add_to(m)

Here's a simple style function that chooses random colors:

import random

def random_html_color():
    # randint is inclusive at both ends, so use 255, not 256:
    r = random.randint(0, 255)
    g = random.randint(0, 255)
    b = random.randint(0, 255)
    return '#%02x%02x%02x' % (r, g, b)

def style_fcn(x):
    return { 'fillColor': random_html_color() }

I wanted to let the user choose regions by clicking, but it turns out Folium doesn't have much support for that (it may be coming in a future release). You can do it by reading the GeoJSON yourself, splitting it into separate polygons and making them all separate Folium Polygons or GeoJSON objects, each with its own click behavior; but if you don't mind highlights and popups on mouseover instead of requiring a click, that's pretty easy. For highlighting in red whenever the user mouses over a polygon, set this highlight_function:

def highlight_fcn(x):
    return { 'fillColor': '#ff0000' }

For tooltips:

tooltip = folium.GeoJsonTooltip(fields=['NAME'])

In this case, 'NAME' is the field in the shapefile that I want to display when the user mouses over the region. If you're not sure of the field name, the nice thing about GeoJSON is that it's human readable. Generally you'll want to look inside "features", for "properties" to find the fields defined for each polygon. For instance, if I use jq to prettyprint the JSON generated for the NM state house districts:

$ jq . House.json | less
{
  "type": "FeatureCollection",
  "name": "cb_2018_35_sldl_500k",
  "crs": {
    "type": "name",
    "properties": {
      "name": "urn:ogc:def:crs:OGC:1.3:CRS84"
    }
  },
  "features": [
    {
      "type": "Feature",
      "properties": {
        "STATEFP": "35",
        "SLDLST": "009",
        "AFFGEOID": "620L600US35009",
        "GEOID": "35009",
        "NAME": "9",
        "LSAD": "LL",
        "LSY": "2018",
        "ALAND": 3405159792,
        "AWATER": 5020507
      },
      "geometry": {
        "type": "Polygon",
        "coordinates": [
...

If you still aren't sure which property name means what (for example, "NAME" could be anything), just keep browsing through the JSON file to see which fields change from feature to feature and give the values you're looking for, and it should become obvious pretty quickly.
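Since GeoJSON is just JSON, you can also let Python list the property fields for you. A sketch using the standard json module (the tiny FeatureCollection here stands in for House.json):

```python
import json

# A miniature stand-in for House.json:
text = '''{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"STATEFP": "35", "NAME": "9", "GEOID": "35009"},
     "geometry": {"type": "Polygon", "coordinates": []}}
  ]
}'''

data = json.loads(text)

# The fields you can hand to GeoJsonTooltip are the keys of each
# feature's "properties" dict:
fields = sorted(data['features'][0]['properties'].keys())
print(fields)
```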

Here's a working code example: polidistmap.py, and here's an example of a working map:

Tags: , ,
[ 12:29 Oct 01, 2019    More mapping | permalink to this entry | comments ]

Tue, 27 Aug 2019

Plane Fishing

[plane fishing] White Rock has a "glider port", which is just an informal spot along the edge of the canyon where sometimes people fly R/C sailplanes. On days when the winds are right, gliders can get some pretty good lift.

Last Sunday wasn't one of those days. The wind was coming from every direction but east, so the gliders were having to use their motors periodically to climb back up to altitude.

I was mostly trying to stay above the canyon rim, but I noticed all the other pilots were flying down below, so I decided maybe it wasn't that dangerous to let my plane get a little below the edge for a while before starting the motor. Wrong! Below the edge of the canyon, there's a risk of catching some evil rotors off the cliffs. One of those rotors caught my glider's wing and tossed it into a spiral. I was able to recover and get the plane flying straight again -- straight toward the cliff. It smacked hard -- I saw parts flying everywhere.

I didn't expect that the plane itself was salvageable -- it's only styrofoam, after all -- though it looked surprisingly intact. In any case, Dave and I hoped to recover the components: battery, motor, receivers, servos. Hiking to the plane proved difficult: you can get fairly near there on the Blue Dot trail, but then you need to climb three levels of cliff to reach the place where the plane sat. Coming down from above definitely would have required rapelling gear.

But Dave had an idea: let's go fishing!

It took some experimenting to figure out what sort of hook, line and pole you need to fish for a thirty-ounce plane that's fifty feet down a sheer cliff out of sight from the cliff above it. Dave did the fishing and I acted as the caller, sitting some hundred feet away where I could spot the plane through binoculars and shout out which direction he needed to move the line. But we got it in the end! I shot some quick snapshots while I wasn't busy spotting, which you can see here: Plane Fishing (photos).

[plane on the hook] Amazingly, the plane was almost undamaged. The plastic spinner was destroyed, but the motor seems fine. The nose of the plane is very slightly askew but not broken. The battery, after being plugged in to the receiver for 48 hours, was down to zero volts, but when we charged it carefully, it took a charge. The canopy went flying off at the moment of impact and is down there in the rocks somewhere, so I have a new canopy, spinner and collet on backorder, but in the meantime, the plane is probably flyable. I'll find out this weekend -- but if we fly at the gliderport, I'm not letting it get lower than cliff level, ever!

Tags:
[ 16:57 Aug 27, 2019    More misc | permalink to this entry | comments ]

Tue, 06 Aug 2019

Update on my Rescued Nestlings

Last month I wrote about the orphaned nestlings I found on the ground off the back deck, and how I took them to a rehabilitator when the parents didn't come back to feed them.

Here's the rest of the story. Warning: it's only half a happy ending.

[Nestlings starting to feather out] Under the good care of our local bird rehabilitator, they started to feather out and gain weight quickly. She gave me some literature on bird rescue and let me visit them and help feed them. There's a lot of work and responsibility involved in bird rehabilitation!

[Nestlings starting to feather out] I'd sometimes thought I wanted to be a rehabilitator; now I'm not so sure I'm up to the responsibility. Though the chicks sure were adorable once they started to look like birds instead of embryos, sitting so trustingly in Sally's hand.

[Looking more like a bird] The big mystery was what species they were. Bird rehabilitators have charts where you can look up bird species according to weight, mouth color, gape color, skin color, feather color, and feet and leg size. But the charts only have a few species; they're woefully incomplete, and my babies didn't match any of the listings. We were thinking maybe robin or ash-throated flycatcher, but nothing really matched.

Fortunately, you can feed the same thing to anything but finches: Cornell makes a mixture of meat, dog food, vitamins and minerals that's suitable for most baby birds, though apparently it's dangerous to feed it to finches, so we crossed our fingers and guessed that they were too big to be house finches.

As they grew more feathers, Sally increasingly suspected they were canyon towhees (a common bird in White Rock), and although they still didn't have adult plumage by the time they left the cage, that's still what we think.

[Hopping alertly around the cage now] By about twenty days after the rescue, they were acting almost like adult birds, hopping restlessly around the cage, jumping up to the perch and fluttering back down. They were eating partly by themselves at this point, a variety of foods including lettuce, blueberries, cut up pea pods, and dried mealworms, though they weren't eating many seeds like you'd expect from towhees. They still liked being fed the Cornell meat mixture, and ate more of that than anything else.

I Get to be a Bird Mom For a While

At this point, Sally needed to go out of town, and I offered to babysit them so she didn't have to take them on her trip. (One of the big downsides of being a rehabilitator: while you're in charge of babies, they need constant care.) I took them back to my place, where I hoped I'd be able to release them: partly because they'd been born here, and partly because the towhees here in White Rock aren't so territorial as they apparently are in Los Alamos.

With the chicks safely stashed in the guest bedroom, I could tell they were getting restless and wanted out of the cage. When I opened the cage to feed them and change their water and bedding, they escaped out into the room a couple of times and I had to catch them and get them back in the cage. So I knew they could fly and wanted out. (I'm sure being moved from Sally's house to mine didn't help: the change in surroundings probably unnerved them.)

Sally advised me to leave the cage outside during the day for a couple of days prior to the release, so the birds could get used to the environment. The first day I put them outside, they immediately seemed much happier and calmer. It seemed they liked being outside.

I Fail as a Bird Mom

On their second morning outdoors, I left them with new food and water, then came back to check on them an hour later. They seemed much more agitated than before, flying madly from one side of the cage to the other. Sally had described her last tenant, a sparrow, doing that just before release; she had released the sparrow a bit earlier than planned because the bird seemed to want out so badly. I wondered if that was the case here, but decided to wait one more day.

But the larger of the two babies had other ideas. When I unzipped the top of the cage to re-fill the water dish, it was in the air immediately, and somehow shot through the tiny opening next to my arm.

It flew about thirty feet, landed in a clearing -- and was immediately taken by a Cooper's hawk that came out of nowhere.

The hawk flew off, the baby towhee squeaking pathetically in its talons, leaving me and the other baby in shock.

What a blow! The bird rescue literature Sally loaned me stresses that bad things can happen. There are so many things that can go wrong with a nestling or a release. They tell you how poor the odds are for baby birds in general. They remind you that the birds would have had no chance of survival if you hadn't rescued them; rescued, at least they have some chance.

While I know that's all true, I'm not sure it makes me feel much better.

In hindsight, Sally said the chicks' agitation that day might have been because they knew the hawk was there, though neither of us thought about that possibility at the time. She thinks the hawk must have been "stalking them", hanging out nearby, aware that there was something delectable inside the cage. She's had chicks taken by hawks too. Still ... sigh.

The Next Release Goes Better

But there was still the remaining chick to think about. Sally and I discussed options and decided that I should bring the chick back inside, and then drive it back up to her house. The hawk would probably remain around my place for a while, and the area wouldn't be safe for a new fledgling. Indeed, I saw the hawk again a few days later. (Normally I love seeing Cooper's hawks!)

The chick was obviously unhappy, whether because of being brought back inside, loneliness, or remaining trauma from hearing the attack -- even if it didn't understand exactly what had happened, I'm sure the chick heard the "towhee in mortal peril" noises just as I did.

So the chick (whom Dave dubbed "Lucky") had to wait another several days before finally being released.

The release went well. Lucky, less bold than its nestmate, was initially reluctant to leave the cage, but eventually fluttered out and flew to the shade of a nearby bush, where we could see it pecking at the ground and apparently eating various unidentifiable bits. It looked like it was finding plenty to eat there, it was mostly hidden from predators and competitors, and it had shade and shelter -- a good spot to begin a new life.

(I tried to get a video of the release but that didn't work out.)

Since then the chick has kept a low profile, but Sally thinks she saw a towhee fledgling a couple of days later. So we have our fingers crossed!

More photos: Nestling Rescue Photos.

Tags: ,
[ 09:50 Aug 06, 2019    More nature/birds | permalink to this entry | comments ]

Thu, 01 Aug 2019

Silly Moon Names: a Nice Beginning Python Project

Every time the media invents a new moon term -- super blood black wolf moon, or whatever -- I roll my eyes.

[Lunar Perigee and Apogee sizes] First, this ridiculous "supermoon" thing is basically undetectable to the human eye. Here's an image showing the relative sizes of the absolute closest and farthest moons. It's easy enough to tell when you see the biggest and smallest moons side by side, but when it's half a degree in the sky, there's no way you'd notice that one was bigger or smaller than average.

Even better, here's a link to an animation of how the moon changes size and "librates" -- tilts so that we can see a little bit over onto the moon's far side -- during the course of a month.

Anyway, the media seem to lap this stuff up and every month there's a new stupid moon term. I'm sure nearly every astronomer was relieved to see the thoroughly sensible Gizmodo article yesterday, Oh My God Stop It With the Fake Moon Names What the Hell Is a 'Black Moon' That Isn't Anything. Not that that will stop the insanity.

If You Can't Beat 'Em, Join 'Em

And then, talking about the ridiculous moon name phenom with some friends, I realized I could play this game too. So I spent twenty minutes whipping up my own Silly Moon Name Generator.

It's super simple -- it just uses Linux' built-in dictionary, with no sense of which words are common, or adjectives or nouns or what. Of course it would be funnier with a hand-picked set of words, but there's a limit to how much time I want to waste on this.

You can add a parameter ?nwords=5 (or whatever number) if you want more or fewer words than four.
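Behind that URL parameter is ordinary query-string parsing; here's a minimal sketch using the standard library (the actual script's handling may differ):

```python
from urllib.parse import parse_qs

# A query string like the one the web server hands to the script:
qs = parse_qs('nwords=5')

# Fall back to four words if the parameter is absent:
nwords = int(qs.get('nwords', ['4'])[0])
print(nwords)
```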

How Does It Work?

Random phrase generators like this are a great project for someone just getting started with Python. Python is so good at string manipulation that it makes this sort of thing easy: it only takes half a page of code to do something fun. So it's a great beginner project that most people would probably find more rewarding than cranking out Fibonacci numbers (assuming you're not a Fibonacci geek like I am). For more advanced programmers, random phrase generation can still be a fun and educational project -- skip to the end of this article for ideas.

For the basics, this is all you need: I've added comments explaining the code.

import random


def hypermoon(filename, nwords=4):
    '''Return a silly moon name with nwords words,
       each taken from a word list in the given filename.
    '''
    fp = open(filename)
    lines = fp.readlines()

    # A list to store the words to describe the moon:
    words = []
    for i in range(nwords):    # This will be run nwords times
        # Pick a random line index. randint is inclusive at both ends,
        # so the last valid index is len(lines) - 1:
        whichline = random.randint(0, len(lines) - 1)

        # readlines() includes whitespace like newline characters.
        # Use whichline to pull one line from the file, and use
        # strip() to remove any extra whitespace:
        word = lines[whichline].strip()

        # Append it to our word list:
        words.append(word)

    # The last word in the phrase will be "moon", e.g.
    # super blood wolf black pancreas moon
    words.append("moon")

    # ' '.join(list) combines all the words with spaces between them
    return ' '.join(words)


# This is called when the program runs:
if __name__ == '__main__':
    random.seed()

    print(hypermoon('/usr/share/dict/words', 4))

A More Compact Format

In that code example, I expanded everything to try to make it clear for beginning programmers. In practice, Python lets you be a lot more terse, so the way I actually wrote it was more like:

def hypermoon(filename, nwords=4):
    with open(filename, encoding='utf-8') as fp:
        lines = fp.readlines()

    words = [ lines[random.randint(0, len(lines) - 1)].strip()
              for i in range(nwords) ]
    words.append('moon')
    return ' '.join(words)

There are three important differences:

Opening a file using "with" ensures the file will be closed properly when you're done with it. That's not important in this tiny example, but it's a good habit to get into.

I specify the 'utf-8' encoding when I open the file because when I ran it as a web app, it turned out the web server used the ASCII encoding and I got Python errors because there are accented characters in the dictionary somewhere. That's one of those Python annoyances you get used to when going beyond the beginner level.

The way I define words all in one line (well, it's conceptually one long line, though I split it into two so each line stays under 72 characters) is called a list comprehension. It's a nice compact alternative to defining an empty list [] and then calling append() a bunch of times, like I did in the first example.

Initially they might seem harder to read, but list comprehensions can actually make code clearer once you get used to them.
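As an aside, the random module can shorten this even further: random.choice picks one element of a list for you, with no index arithmetic to get wrong. A sketch (the word list is made up):

```python
import random

lines = ['super\n', 'blood\n', 'wolf\n', 'black\n', 'pancreas\n']

# Same idea as the list comprehension above, but letting
# random.choice do the indexing:
words = [random.choice(lines).strip() for i in range(4)]
words.append('moon')
print(' '.join(words))
```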

A Python Driven Web Page

Finally, to make it work as a web page, I added the CGI module. That isn't really a beginner thing so I won't paste it here, but you can see the CGI version at hypermoon.py on GitHub.

I should mention that there's some debate over CGI in Python. The movers and shakers in the Python community don't approve of CGI, and there's a plan to remove it from upcoming Python versions. The alternative is to use technologies like Flask or Django. While I'm a fan of Flask and have used it for several projects, it's way overkill for something like this, mostly because of all the special web server configuration it requires (and Django is far more heavyweight than Flask). In any case, be aware that the CGI module may be removed from Python's standard library in the near future. With any luck, python-cgi will still be available via pip install or as Linux distro packages.

More Advanced Programmers: Making it Funnier

I mentioned earlier that I thought the app would be a lot funnier with a handpicked set of words. I did that long, long ago with my Star Party Observing Report Generator (written in Perl; I hadn't yet started using Python back in 2001). That's easy and fun if you have the time to spare, or a lot of friends contributing.

You could instead use words taken from a set of input documents. For instance, only use words that appear in Shakespeare's plays, or in company mission statements, or in Wikipedia articles about dog breeds (this involves some web scraping, but Python is good at that too; I like BeautifulSoup).

Or you could let users contribute their own ideas for good words to use, storing the user suggestions in a database.

Another way to make the words seem more appropriate and less random might be to use one of the many natural language packages for Python, such as NLTK, the Natural Language Toolkit. That way, you could control how often you used adjectives vs. nouns, and avoid using verbs or articles at all.

Random word generators seem like a silly and trivial programming exercise -- because they are! But they're also a fun starting point for more advanced explorations with Python.

Tags: , , ,
[ 14:24 Aug 01, 2019    More humor | permalink to this entry | comments ]

Sat, 27 Jul 2019

Android: Saving MMS Photos and Videos and Text Conversations

Until recently, I didn't know anyone who texted much, but just recently I've been corresponding with several people who like to text. Which is fine with me, but comes with one problem: when someone texts me a photo or video, it's tough to see much on a phone screen. I'd much prefer to view it on my big computer screen.

Not to mention that I'd like to be able to archive some of these photos on a platform where I can actually do backups. Yes, I know I'm a dinosaur and modern people are supposed to entrust all our photos, addressbooks, private documents and everything else to Google or Apple. Sorry. I'm happier in the Mesozoic.

Anyway, the point is that I wanted a way to get a message that's sitting in my Android phone's Messages app onto my computer.

As far as I can tell from web searching, there's no way to back up all the texts on an Android phone if it's not rooted. If you want to keep an archive of a conversation ... well, sorry, you can't. (If you are rooted, there's apparently an sqlite db file buried somewhere under /data.)

[Saving an MMS video on Android] After much searching, I did discover that you can long-press on an individual photo or video and choose Save attachment.

A dialog will come up offering a filename and a checkbox. The box is unchecked, and the Save button won't enable unless you check the box. What good that does is beyond me, since there's no option of letting you choose a different filename.

There's also no option of seeing where it saved, and Android doesn't give any clue. But it turns out (after much searching and discovering that commands like find / -name "Download*" 2> /dev/null don't work in Android, they just fail silently; I don't know what you're supposed to use to search for files. Oh, right, you're not supposed to) to be in /sdcard/Download.

So once you know the filename, you can

adb shell ls /sdcard/Download
adb pull /sdcard/Download/whatever.jpg

This isn't so bad as long as you do it before people have sent you 100 images that you want to archive. Fortunately I have less than ten I need to save.

Saving SMS Conversations

You can save the text of an entire SMS conversation with someone, though it won't save the media associated with the conversation.

Click on the three-dots menu item in the upper right of a conversation, scroll down in the menu and choose Save messages; the filename will be something like sms-20190723122204 (the numbers are the current date and time). Select all messages, or just the ones you want to save. Click your way through a warning screen or two telling you it won't save MMS media or group messages.

When you tap Save, there's a briefly-flashing message telling you where it saved it: if you have a photographic memory, be prepared; otherwise, you might need to save the same conversation several times to read the whole path. On my old Samsung, it turns out they go into /storage/emulated/0/Messaging.

Format of a Saved Conversation

What you get is an XML file with lots of entries, in reverse chronological order. One entry looks something like:

    <message type="SMS">
      <address>5059208957</address>
      <body>We+will+probably+be+here+all+day.+Let+me+know+when+you+could+visit+</body>
      <date>1563723506834</date>
      <read>1</read>
      <type>1</type>
      <locked>0</locked>
    </message>

So to make them easily readable, you'd want something that could split out the <body> parts and then HTML decode them to turn those plusses and HTML entities into spaces and readable characters.

Parsing XML is easy enough with Python's BeautifulSoup module -- or is it? Usually with BS4 I use the "lxml" parser, but I hit a snag in this case: see that body tag inside each message tag? It turns out the lxml parser, despite its name, is meant to be an HTML parser, not general XML; it inserts HTML structure of its own:

>>> from bs4 import BeautifulSoup
>>> with open('sms-20190723202727.xml') as fp:
...     soup = BeautifulSoup(fp, "lxml")
...     for tag in soup.findAll():
...         print(tag.name)
...
html
body
file
thread
message
...

See how it's added html and body tags that weren't actually in the file? Worse, it also ignores the body tags that actually are in the file -- which is a problem because body is the tag where Android's Messages app chooses to put the actual text of the message.

Instead, use the "xml" parser, which is actually intended for XML.

Once you get past the lxml problems, the rest is pretty easy, except that you need to know that the dates are in Java Timestamp format, which means you have to divide by 1000 to get a Unix timestamp you can pass to datetime. And each SMS text is URL plus-encoded, so you can unquote it with urllib.parse.unquote_plus.

from bs4 import BeautifulSoup
from datetime import datetime
import urllib.parse
import sys

def parse_sms_convo(filename):
    with open(filename) as fp:
        soup = BeautifulSoup(fp, "xml")

        for msg in soup.findAll('message'):
            d = datetime.fromtimestamp(int(msg.find('date').text)/1000)
            body = msg.find('body')
            addr = msg.find('address')
            print("%s (from %s):\n   %s" %
                  (d.strftime("%Y-%m-%d %H:%M"),
                   addr.text,
                   urllib.parse.unquote_plus(body.text)))
            print()

if __name__ == '__main__':
    parse_sms_convo(sys.argv[1])

Of course, the best would be to get the entire conversation including images and videos all together. Apparently that's possible with a rooted phone, but I haven't found any way that doesn't require rooting. (It forever amazes me that advanced users who care about things like root access aren't bothered in the least by the fact that rooting almost always requires downloading a binary from a dodgy malware-distribution website, running it and giving it complete access to your phone. Me, I just hope a day eventually comes when there's a phone OS option that gives me the choice of controlling my own phone without resorting to malware.)

[ 13:41 Jul 27, 2019    More tech | permalink to this entry | comments ]

Tue, 23 Jul 2019

360 Panoramas with Povray and/or ImageMagick

This is Part IV of a four-part article on ray tracing digital elevation model (DEM) data. The goal: render a ray-traced image of mountains from the DEM data.

Except there are actually several more parts on the way, related to using GRASS to make viewsheds. So maybe this is actually a five- or six-parter. We'll see.

The Easy Solution

Skipping to the chase here ... I had a whole long article written about how to make a sequence of images with povray, each pointing in a different direction, and then stitch them together with ImageMagick.

But a few days after I'd gotten it all working, I realized none of it was needed for this project, because ... ta-DA — povray accepts this argument inside its camera section:

    angle 360

Duh! That makes it so easy.

You do need to change povray's projection to cylindrical; the default is "perspective" which warps the images. If you set your look_at to point due south -- the first and second coordinates are the same as your observer coordinate, the third being zero so it's looking southward -- then povray will create a lovely strip starting at 0 degrees bearing (due north), and with south right in the middle. The camera section I ended up with was:

camera {
    cylinder 1

    location <0.344444, 0.029620, 0.519048>
    look_at  <0.344444, 0.029620, 0>

    angle 360
}
with the same light_source and height_field as in Part III.

[360 povray panorama from Overlook Point]

There are still some more steps I'd like to do. For instance, fitting names of peaks to that 360-degree pan.

The rest of this article discusses some of the techniques I would have used, which might be useful in other circumstances.

A Script to Spin the Observer Around

Angles on a globe aren't as simple as adding 45 degrees to the bearing each time. You need some spherical trigonometry to space the look_at points evenly around the compass, and the result depends on the observer's coordinates.

Obviously, this wasn't something I wanted to calculate by hand, so I wrote a script for it: demproj.py. Run it with the name of a DEM file and the observer's coordinates:

demproj.py demfile.png 35.827 -106.1803
It takes care of calculating the observer's elevation, normalizing to the image size and all that. It generates eight files, named outfileN.png, outfileNE.png etc.
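The heart of that calculation is the standard "destination point along a bearing" formula from spherical trigonometry. Here's a rough sketch of the idea; the function name and the fixed angular distance are my own, not necessarily what demproj.py does internally:

```python
import math

def destination_point(lat, lon, bearing_deg, angular_dist=0.1):
    """Return the point (in degrees) at the given compass bearing
    from (lat, lon). angular_dist is the great-circle distance in
    radians; any smallish value works for aiming a look_at."""
    lat1 = math.radians(lat)
    lon1 = math.radians(lon)
    brng = math.radians(bearing_deg)
    lat2 = math.asin(math.sin(lat1) * math.cos(angular_dist)
                     + math.cos(lat1) * math.sin(angular_dist)
                       * math.cos(brng))
    lon2 = lon1 + math.atan2(
        math.sin(brng) * math.sin(angular_dist) * math.cos(lat1),
        math.cos(angular_dist) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Eight look_at targets, 45 degrees apart, around the observer:
targets = {b: destination_point(35.827, -106.1803, b)
           for b in range(0, 360, 45)}
```

Each target then gets converted to pixel coordinates (with the affine transform from Part I) and becomes a look_at for one povray run.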

Stitching Panoramas with ImageMagick

To stitch those demproj images manually in ImageMagick, this should work in theory:

convert -size 3600x600 xc:black \
    outfile000.png -geometry +0+0 -composite \
    outfile045.png -geometry +400+0 -composite \
    outfile090.png -geometry +800+0 -composite \
    outfile135.png -geometry +1200+0 -composite \
    outfile180.png -geometry +1600+0 -composite \
    outfile225.png -geometry +2000+0 -composite \
    outfile270.png -geometry +2400+0 -composite \
    outfile315.png -geometry +2800+0 -composite \
    out-composite.png
or simply
convert outfile*.png +smush -400 out-smush.png
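The offsets follow a simple pattern: assuming the tiles are 800×600 renders, each one is placed 400 pixels after the previous, so adjacent views overlap by half a tile. Rather than typing out that long command, you can build it in Python (filenames and sizes match the example above):

```python
# Build the ImageMagick composite command for the eight tiles,
# each offset half a tile width from the previous one.
tile_w, tile_h, step = 800, 600, 400
bearings = range(0, 360, 45)
canvas_w = step * (len(bearings) - 1) + tile_w   # 2800 + 800 = 3600

cmd = ['convert', '-size', f'{canvas_w}x{tile_h}', 'xc:black']
for i, bearing in enumerate(bearings):
    cmd += [f'outfile{bearing:03d}.png',
            '-geometry', f'+{i * step}+0', '-composite']
cmd.append('out-composite.png')

# Print it to check; pass cmd to subprocess.call() to run it.
print(' '.join(cmd))
```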

Adjusting Panoramas in GIMP

But in practice, some of the images have a few-pixel offset, and I never did figure out why; maybe it's a rounding error in my angle calculations.

I opened the images as layers in GIMP, and used my GIMP script Pandora to lay them out as a panorama. The cylindrical projection should make the edges match perfectly, so you can turn off the layer masking.

Then use the Move tool to adjust for the slight errors (tip: when the Move tool is active, the arrow keys will move the current layer by a single pixel).

If you get the offsets perfect and want to know what they are so you can use them in ImageMagick or another program, use GIMP's Filters->Python-Fu->Console. This assumes the panorama image is the only one loaded in GIMP, otherwise you'll have to inspect gimp.image_list() to see where in the list your image is.

>>> img = gimp.image_list()[0]
>>> for layer in img.layers:
...     print layer.name, layer.offsets

Tags: , , ,
[ 15:28 Jul 23, 2019    More mapping | permalink to this entry | comments ]

Wed, 17 Jul 2019

Ray-Tracing Digital Elevation Data in 3D with Povray (Part III)

This is Part III of a four-part article on ray tracing digital elevation model (DEM) data. The goal: render a ray-traced image of mountains from a DEM.

In Part II, I showed how the povray camera position and angle need to be adjusted based on the data, and the position of the light source depends on the camera position.

In particular, if the camera is too high, you won't see anything because all the relief will be tiny invisible bumps down below. If it's too low, it might be below the surface and then you can't see anything. If the light source is too high, you'll have no shadows, just a uniform grey surface.

That's easy enough to calculate for a simple test image like the one I used in Part II, where you know exactly what's in the file. But what about real DEM data where the elevations can vary?

Explore Your Test Data

[Hillshade of northern New Mexico mountains] For a test, I downloaded some data that includes the peaks I can see from White Rock in the local Jemez and Sangre de Cristo mountains.

wget -O mountains.tif 'http://opentopo.sdsc.edu/otr/getdem?demtype=SRTMGL3&west=-106.8&south=35.1&east=-105.0&north=36.5&outputFormat=GTiff'

Create a hillshade to make sure it looks like the right region:

gdaldem hillshade mountains.tif hillshade.png
pho hillshade.png
(or whatever your favorite image viewer is, if not pho). The image at right shows the hillshade for the data I'm using, with a yellow cross added at the location I'm going to use for the observer.

Sanity check: do the lowest and highest elevations look right? Let's look in both meters and feet, using the tricks from Part I.

>>> import gdal
>>> import numpy as np

>>> demdata = gdal.Open('mountains.tif')
>>> demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
>>> demarray.min(), demarray.max()
(1501, 3974)
>>> print([ x * 3.2808399 for x in (demarray.min(), demarray.max())])
[4924.5406899, 13038.057762600001]

That looks reasonable. Where are those highest and lowest points, in pixel coordinates?

>>> np.where(demarray == demarray.max())
(array([645]), array([1386]))
>>> np.where(demarray == demarray.min())
(array([1667]), array([175]))

Those coordinates are reversed because of the way numpy arrays are organized: (1386, 645) in the image looks like Truchas Peak (the highest peak in this part of the Sangres), while (175, 1667) is where the Rio Grande disappears downstream off the bottom left edge of the map -- not an unreasonable place to expect to find a low point. If you're having trouble eyeballing the coordinates, load the hillshade into GIMP and watch the coordinates reported at the bottom of the screen as you move the mouse.
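If the reversed ordering trips you up, numpy's unravel_index applied to argmax gives the same row/column pair a little more directly (shown here on a tiny stand-in array):

```python
import numpy as np

# A tiny stand-in array; substitute the real demarray.
demarray = np.array([[3, 9, 2],
                     [7, 1, 5]])

row, col = np.unravel_index(demarray.argmax(), demarray.shape)
print(row, col)                  # row (y) first, then column (x)
print(demarray[row, col] == demarray.max())   # True
```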

While you're here, check the image width and height. You'll need it later.

>>> demarray.shape
(1680, 2160)
Again, those are backward: they're the image height, width.

Choose an Observing Spot

Let's pick a viewing spot: Overlook Point in White Rock (marked with the yellow cross on the image above). Its coordinates are -106.1803, 35.827. What are the pixel coordinates? Using the formula from the end of Part I:

>>> import affine
>>> affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
>>> inverse_transform = ~affine_transform
>>> [ round(f) for f in inverse_transform * (-106.1803, 35.827) ]
[744, 808]

Just to double-check, what's the elevation at that point in the image? Note again that the numpy array needs the coordinates in reverse order: Y first, then X.

>>> demarray[808, 744], demarray[808, 744] * 3.28
(1878, 6159.839999999999)

1878 meters, 6160 feet. That's fine for Overlook Point. We have everything we need to set up a povray file.

Convert to PNG

As mentioned in Part II, povray will only accept height maps as a PNG file, so use gdal_translate to convert:

gdal_translate -ot UInt16 -of PNG mountains.tif mountains.png

Use the Data to Set Camera and Light Angles

The camera should be at the observer's position, and povray needs that as a line like

    location <rightward, upward, forward>
where those numbers are fractions of 1.

The image size in pixels is 2160x1680, and the observer is at pixel location (744, 808). So the first and third coordinates of location should be 744/2160 and 808/1680, right? Well, almost. That Y coordinate of 808 is measured from the top, while povray measures from the bottom. So the third coordinate is actually 1. - 808/1680.

Now we need height, but how do you normalize that? That's another thing nobody seems to document anywhere I can find; but since we're using a 16-bit PNG, I'll guess the maximum is 2^16, or 65536. That's meters, so DEM files can specify some darned high mountains! So that's why that location <0, .25, 0> line I got from the Mapping Hacks book didn't work: it put the camera at .25 * 65536 or 16,384 meters elevation, waaaaay up high in the sky.

My observer at Overlook Point is at 1,878 meters elevation, which corresponds to a povray height of 1878/65536. I'll use the same value for the look_at height to look horizontally. So now we can calculate all three location coordinates: 744/2160 = .3444, 1878/65536 = 0.0287, 1. - 808/1680 = 0.5190:

    location <.3444, 0.0287, .5190>
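Since this pixel-to-povray mapping comes up for both location and look_at, it's worth wrapping in a small helper. A minimal sketch, with names of my own invention; the 65536 divisor assumes the 16-bit PNG height field:

```python
def povray_coords(x, y, elev_m, img_w, img_h, elev_scale=65536):
    """Map an image pixel (x, y) plus an elevation in meters to
    povray's <rightward, upward, forward> location fractions.
    y is measured from the top of the image, so flip it for
    povray's bottom-up vertical axis."""
    return (x / img_w, elev_m / elev_scale, 1 - y / img_h)

# Overlook Point: pixel (744, 808) at 1878 m in a 2160x1680 image.
loc = povray_coords(744, 808, 1878, 2160, 1680)
print('location <%.4f, %.4f, %.4f>' % loc)
# prints: location <0.3444, 0.0287, 0.5190>
```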

Povray Glitches

Except, not so fast: that doesn't work. Remember how I mentioned in Part II that povray doesn't work if the camera location is at ground level? You have to put the camera some unspecified minimum distance above ground level before you see anything. I fiddled around a bit and found that if I multiplied the ground level height by 1.15 it worked, but 1.1 wasn't enough. I have no idea whether that will work in general. All I can tell you is, if you're setting location to be near ground level and the generated image looks super dark regardless of where your light source is, try raising your location a bit higher. I'll use 1878/65536 * 1.15 = 0.033.

For a first test, try setting look_at to some fixed place in the image, like the center of the top (north) edge (right .5, forward 1):

    location <.3444, 0.033, .519>
    look_at <.5, 0.033, 1>

That means you won't be looking exactly north, but that's okay, we're just testing and will worry about that later. The middle value, the elevation, is the same as the camera elevation so the camera will be pointed horizontally. (look_at can be at ground level or even lower, if you want to look down.)

Where should the light source be? I tried to be clever and put the light source at some predictable place over the observer's right shoulder, and most of the time it didn't work. I ended up just fiddling with the numbers until povray produced visible terrain. That's another one of those mysterious povray quirks. This light source worked fairly well for my DEM data, but feel free to experiment:

light_source { <2, 1, -1> color <1,1,1> }

All Together Now

Put it all together in a mountains.pov file:

camera {
    location <.3444, 0.0330, .519>
    look_at <.5, 0.0330, 1>
}

light_source { <2, 1, -1> color <1,1,1> }

height_field {
    png "mountains.png"
    smooth
    pigment {
        gradient y
        color_map {
            [ 0 color <.7, .7, .7> ]
            [ 1 color <1, 1, 1> ]
        }
    }
    scale <1, 1, 1>
}
[Povray-rendering of Black and Otowi Mesas from Overlook Point] Finally, you can run povray and generate an image!
povray +A +W800 +H600 +INAME_OF_POV_FILE +OOUTPUT_PNG_FILE

And once I finally got to this point I could immediately see it was correct. That's Black Mesa (Tunyo) out in the valley a little right of center, and I can see White Rock canyon in the foreground with Otowi Peak on the other side of the canyon. (I strongly recommend, when you experiment with this, that you choose a scene that's very distinctive and very familiar to you, otherwise you'll never be sure if you got it right.)

Next Steps

Now I've accomplished my goal: taking a DEM map and ray-tracing it. But I wanted even more. I wanted a 360-degree panorama of all the mountains around my observing point.

Povray can't do that by itself, but in Part IV I'll show how to make a series of povray renderings and stitch them together into a panorama: Part IV, Making a Panorama from Raytraced DEM Images.

Tags: , , , ,
[ 16:43 Jul 17, 2019    More mapping | permalink to this entry | comments ]