For Day 15 of the 30 Day Map Challenge, "My Data", I'm highlighting
a feature I added to PyTopo
last week: the ability to read GPS tags in image files.
JPEG, and probably other image formats as well, lets you store GPS
coordinates inside the EXIF (EXchangeable Image File format) metadata
stored within each image file.
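I won't claim this is exactly how PyTopo does it, but with Pillow,
reading those tags takes only a few lines (the tag names come from
the EXIF spec):

from PIL import Image
from PIL.ExifTags import GPSTAGS

def image_gps(filename):
    '''Return (lat, lon) from an image's EXIF GPS tags, or None.'''
    exif = Image.open(filename).getexif()
    gps_ifd = exif.get_ifd(0x8825)    # 0x8825 is the GPSInfo IFD
    if not gps_ifd:
        return None
    gps = { GPSTAGS.get(k, k): v for k, v in gps_ifd.items() }

    def to_degrees(dms, ref):
        # EXIF stores (degrees, minutes, seconds); S and W are negative.
        deg = float(dms[0]) + float(dms[1])/60. + float(dms[2])/3600.
        return -deg if ref in ('S', 'W') else deg

    return (to_degrees(gps['GPSLatitude'], gps['GPSLatitudeRef']),
            to_degrees(gps['GPSLongitude'], gps['GPSLongitudeRef']))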
Open Space advocates in Los Alamos county have been fighting the
forces of development.
Ordinarily that's not a big problem. This county is wildly supportive of
its open space; a huge percentage of residents hike, bike, watch birds or
otherwise appreciate the natural beauty around us. It helps that a lot
of the town is on finger mesas adjacent to un-developable canyons, so
you never need to go very far to be in a natural space.
But the county also loves to hire out-of-state consultants any time
anything is changing, and a couple of years ago, they hired a consulting
firm to rewrite Chapter 16 of our county code, concerning development.
That included the zoning maps. The consultants capriciously changed several
parcels previously zoned as open space to zones that allow much more
development
(like six-story apartment or commercial buildings), ignoring public input
protesting the changes, and the County Council rubber-stamped the
consultants' changes, promising to revisit the open space changes soon.
I have a new eBike! I'll write about it more before long, but for now,
what's relevant to the 30 Day
Map Challenge is that it's from Specialized, and if you use
the Specialized phone app, it can record all sorts of fun statistics
for rides, including GPS coordinates.
Like last year, I'm going to work it sporadically, since I've been
busy with a bunch of other things. But this sort of challenge can be
a great way to motivate myself to learn new technologies or get better
acquainted with old ones, so it's fun to work the challenges when I have time.
Day 1 is Points, and I'm mapping peaks in New Mexico.
I was talking to a friend about LANL's proposed new powerline.
A lot of people are opposing it because
the line would run through the Caja del Rio, an open-space
piñon-juniper area adjacent to Santa Fe which is owned by the
US Forest Service.
The proposed powerline would run from the Caja across the Rio Grande to the Lab.
It would carry not just power but also a broadband fiber line, something
Los Alamos town, if not the Lab, needs badly.
On the other hand, those opposed worry about
road-building and habitat destruction in the Caja.
I'm always puzzled reading accounts of the debate. There already is a
powerline running through the Caja and across the Rio via Powerline Point.
The discussions never say (a) whether the proposed
line would take a different route, and if so, (b) why? Why can't they
just tack on some more lines to the towers along the existing route?
For instance, in the slides from one of the public meetings, the
map
on slide 9
not only doesn't show the existing powerline, but also
uses a basemap that has no borders and NO ROADS. Why would you use a
map that doesn't show roads unless you're deliberately trying to
confuse people?
Several years ago I wrote about
Making
a Land Ownership Overlay with QGIS
and Making
Overlay Maps for OsmAnd.
I've been using that land use overlay for years. But recently
I needed to make several more overlays: land ownership for Utah for a
hiking trip, one for the eclipse, and I wanted to refresh my New Mexico
land ownership overlay since it was several years out of date.
It turns out some things have changed, so here's an update,
starting from the point where your intended overlay is loaded
as a layer in QGIS.
The dataset I used for
mapping
fire perimeters
is huge: not surprising if it's all historic fires for the US.
Classifying it in QGIS gave a warning, and operations
were very slow. Here's how to clip a big dataset in QGIS to restrict it
to a smaller geographic area.
For quite a while I've been wanting a map showing the perimeters of
the big local fires. When walking through a burned area, I wonder,
was this one from the Cerro Grande fire? Or Las Conchas? Or another fire?
I've been relying more on my phone
for photos I take while hiking, rather than carry a separate camera.
The Pixel 6a takes reasonably good photos, if you can put up with
the wildly excessive processing Google's camera app does whether you
want it or not.
That opens the possibility of GPS tagging photos, so I'd
have a good record of where on the trail each photo was taken.
But as it turns out: no. It seems the GPS coordinates the Pixel's
camera app records in photos are always wrong, by a significant amount.
And, weirdly, this doesn't seem to be something anyone's talking
about on the web ... or am I just using the wrong search terms?
I've been wistfully watching the hashtag
#30DayMapChallenge
on Mastodon. For several years in a row, I've told myself I'm going to try
the 30 Day Map Challenge
... and each time,
I get busy with other stuff. And this year is no different.
So instead of trying to do all thirty exercises, I'll just do a few of
the challenges when I have time and motivation.
Better than nothing, right?
And as it happened, yesterday I got the urge to do a map-related project.
Except it lined up with Day 2, and I didn't get it working until
this morning.
So, two days late, here is my:
30 Day Map Challenge Day 2: Lines
During a bike ride along the fast section of one of our fantastic
White Rock trails, I found myself wishing I could view my track logs
colorized according to how fast I was going. And I realized that
I could pretty easily add that to
PyTopo's track log
displaying code. And as long as I was doing that, why not also add
the ability to colorize by elevation as well?
Most GPX track logs already include elevation (though the ones I get
from OsmAnd aren't super accurate:
they're GPS elevation rather than using the barometric sensor that
some phones have). Track logs from OsmAnd sometimes include
speed, via a nonstandard construct that PyTopo already knows how to parse;
and of course, for track logs that don't include speed,
it can be calculated according to the distance
and time difference from the previous track point.
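The calculation is just great-circle distance over elapsed time;
a rough sketch (the point attributes here are illustrative, not
PyTopo's actual structures):

import math

EARTH_RADIUS_M = 6371000

def haversine_m(lat1, lon1, lat2, lon2):
    '''Great-circle distance between two points, in meters.'''
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi/2)**2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam/2)**2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def speed_m_s(prev_pt, pt):
    '''Speed between two track points having .lat, .lon
       and .time (datetime) attributes.'''
    dist = haversine_m(prev_pt.lat, prev_pt.lon, pt.lat, pt.lon)
    dt = (pt.time - prev_pt.time).total_seconds()
    return dist / dt if dt else 0.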
Indeed, it was pretty easy to add. I put it on the context menu as a new
submenu, Colorize Tracks.
I probably should play with the colormaps
and use something smarter than a simple blue-to-red gradient,
but even as it is, it's fun to look at a hike to Nambe Lake colorized
by altitude (first image) or a mountain bike ride along Potrillo Mesa and
the Boundary Trail colorized by speed (second image).
Again, that's for the challenge two days ago.
Today's Map Challenge is "A Bad Map". No promises that I'll have time for
another mapping project today ... but I'm looking forward to seeing what
other people come up with.
Somebody in a group I'm in has commented more than once that White
Rock is a hotbed of Republicanism whereas Los Alamos leans Democratic.
(For outsiders, our tiny county has two geographically-distinct towns
in it, with separate zip codes, though officially they're both part of
Los Alamos township which covers all of Los Alamos county.
White Rock is about half the size of Los Alamos.)
After I'd heard her say it a couple times, I got curious. Was it true?
I asked her for a reference, but she didn't have one. I decided to
find out.
New Mexicans are used to people thinking we're not part of the US.
Every New Mexican has stories, like trying to mail-order
something and being told "We don't ship outside the US".
I had a little spare time and decided I'd follow a tutorial that's been
on my to-do list for a while:
Creating Beautiful River Maps with Python.
It combines river watercourse data from
gaia.geosci.unc.edu
with watershed boundaries from the
HydroSheds
project using Python and GeoPandas, making a map that is,
as promised in the title, beautiful.
Five years ago, I wrote about
Clicking through a translucent window: using X11 input shapes
and how I used a translucent image window that allows click-through,
positioned on top of PyTopo, to trace an image of an old map and
create tracks or waypoints.
But the transimageviewer.py app that I wrote then was based on
GTK2, which is now obsolete and has been removed from most Linux
distro repositories. So when I found myself wanting GIS to help
investigate a
growing trail controversy in Pueblo Canyon,
I discovered I didn't have a usable click-through image viewer.
I've been using the Wildland Fires map from MappingSupport.com
to keep an eye on the Cerro Pelado fire and the larger (though more
distant from me) Hermit's Peak/Calf Canyon fires raging in the Pecos.
It's an excellent map, but it's a little sporadic in whether it shows
the fire perimeter. In any case, as a data junkie, I wanted to know how
to get the data and make my own display, maybe for a quick viewer that
I can pop up when I sign on in the morning.
Also, Los Alamos County, on its
Cerro
Pelado Information page, has a map showing the "Go" lines (if the
fire crosses these lines, we have to evacuate) for Los Alamos and
White Rock and I'd like to be able to view those lines
on the same map with the fire perimeter and hot spots.
I've been super busy this month. The New Mexico Legislature was in
session, and in addition to other projects, I've had a chance to be
involved in the process of writing
a new bill and helping it move through the legislature.
It's been interesting, educational, and sometimes frustrating.
The bill is
SB304: Voting District Geographic Data.
It's an "open data" bill:
it mandates that election district boundary data for
all voting districts, down to the county and municipal level, be publicly
available at no charge on the Secretary of State's website.
For this year's LWV NM Voter Guides at
VOTE411.org,
I've been doing a lot of GIS fiddling, since the system needs to know the
voting districts for each race.
You would think it would be easy to find GIS for voting districts —
surely that's public information? — but counties and the state
are remarkably resistant to giving out any sort of data
(they're happy to give you a PDF or a JPG),
so finding the district data takes a lot of searching.
Often, when we finally manage to get GIS info, it isn't for what we want.
For instance, for San Juan County, there's a file that claims to be
County Commission districts (which would look like the image above left),
but the shapes in the file are actually voting precincts (above right).
A district is made up of multiple precincts; in San Juan, there are 77
precincts making up five districts.
In a case like that, you need some way of combining several shapes (a
bunch of precincts) into one (a district).
GIS "Dissolving"
It turns out that the process of coalescing lots of small
shapes into a smaller number of larger shapes is unintuitively
called "dissolving".
It was surprisingly hard to come up with a "D" to write about,
without descending into Data geekery (always a temptation).
Though you may decide I've done that anyway with today's topic.
Out for a scenic drive to shake off some of the house-bound cobwebs,
I got to thinking about how so many places are named after the Devil.
California was full of them -- the Devil's Punchbowl, the Devil's
Postpile, and so forth -- and nearly every western National Park
has at least one devilish feature.
How many are there really? Happily, there's an easy way to answer
questions like this: the
Geographic Names page on the USGS website,
which hosts the Geographic Names Information System (GNIS).
You can download entire place name files for a state, or
you can search for place name matches at:
GNIS Feature Search.
When I searched there for "devil", I got 1883 hits -- but many of them
don't actually include the word "Devil". What, are they taking lessons
from Google about searching for things that don't actually match the
search terms?
I decided I wanted to download the results so I could
count them more easily.
The page offers View & Print all or
Save as pipe "|" delimited file. I chose to save the file.
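Counting matches in the saved file is then easy with Python's csv
module (GNIS calls the name column FEATURE_NAME, if memory serves;
check the first line of your download):

import csv

with open('devil_results.txt') as fp:
    reader = csv.DictReader(fp, delimiter='|')
    devils = [ row for row in reader
               if 'devil' in row['FEATURE_NAME'].lower() ]
print(len(devils), "features actually contain 'devil'")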
A friend recently introduced me to Folium, a quick and easy way of
making web maps with Python.
The Folium
Quickstart gets you started in a hurry. In just two lines of Python
(plus the import line), you can write an HTML file that you can load
in any browser to display a slippy map, or you can display it inline in a
Jupyter notebook.
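For instance, something like this (centered on White Rock; pick your
own coordinates):

import folium

m = folium.Map(location=[35.827, -106.1803], zoom_start=13)
m.save('map.html')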
Folium uses the very mature Leaflet
JavaScript library under the hood. But it lets you do all the
development in a few lines of Python rather than a lot of lines
of Javascript.
Having run through most of the quickstart, I was excited to try
Folium for showing GeoJSON polygons. I'm helping with a redistricting
advocacy project; I've gotten shapefiles for the voting districts in
New Mexico, and have been wanting to build a map that shows them
which I can then extend for other purposes.
Step 1: Get Some GeoJSON
The easiest place to get voting district data is from TIGER, the
geographic arm of the US Census.
For the districts resulting from the 2010 Decadal Census,
start here:
Cartographic
Boundary Files - Shapefile (you can also get them as KML,
but not as GeoJSON). There's a category called
"Congressional Districts: 116th Congress", and farther down the page,
under "State-based Files", you can get shapefiles for the upper and
lower houses of your state.
You can also likely download them from
www2.census.gov/geo/tiger/TIGER2010/,
as long as you can figure out how to decode the obscure directory
names, like ELSD and POINTLM; the first step is to figure out what
those mean, and I never found anything that could decode them.
(Before I found the TIGER district data, I took a more roundabout path that
involved learning how to merge shapes; more on that in a separate post.)
Okay, now you have a shapefile (unzip the TIGER file to get a bunch of
files with names like cb_2018_35_sldl_500k.* -- shape "files"
are an absurd ESRI concept that actually uses seven separate files for
each dataset, so they're always packaged as a zip archive and programs
that read shapefiles expect that when you pass them a .shp,
there will be a bunch of other files with the same basename but
different extensions in the same directory).
But Folium can't handle shapefiles, only GeoJSON. You can do that
translation with a GDAL command:
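ogr2ogr -t_srs EPSG:4326 -f GeoJSON cb_2018_35_sldl_500k.json cb_2018_35_sldl_500k.shp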
Or you can do it programmatically with the GDAL Python bindings:
from osgeo import gdal

def shapefile2geojson(infile, outfile):
    '''Translate a shapefile to GeoJSON, reprojecting to WGS84.'''
    options = gdal.VectorTranslateOptions(format="GeoJSON",
                                          dstSRS="EPSG:4326")
    gdal.VectorTranslate(outfile, infile, options=options)
The EPSG:4326 specifier, if you read man ogr2ogr, is supposedly
for reprojecting the data into WGS84 coordinates, which is what most
web maps want (EPSG:4326 is an alias for WGS84). But it has an equally
important function: even if your input shapefile is already in WGS84,
adding that option somehow ensures that GDAL will use degrees as the
output unit. The TIGER data already uses degrees so you don't strictly
need that, but some data, like the precinct data I got from UNM RGIS,
uses other units, like meters, which will confuse Folium and Leaflet.
And the TIGER data isn't in WGS84 anyway; it's in GRS1980 (you can tell
by reading the .prj file in the same directory as the .shp).
Don't ask me about details of all these different geodetic reference systems;
I'm still trying to figure it all out. Anyway, I recommend adding the
EPSG:4326 as the safest option.
Step 2: Show the GeoJSON in a Folium Map
In theory, looking at the Folium Quickstart, all you need is
folium.GeoJson(filename, name='geojson').add_to(m).
In practice, you'll probably want to do more, like:
color different regions differently
show some sort of highlight when the user chooses a region
Here's a simple style function that chooses random colors:
import random
def random_html_color():
    # randint's bounds are inclusive, so use 255, not 256.
    r = random.randint(0, 255)
    g = random.randint(0, 255)
    b = random.randint(0, 255)
    return '#%02x%02x%02x' % (r, g, b)

def style_fcn(x):
    return { 'fillColor': random_html_color() }
I wanted to let the user choose regions by clicking, but it turns out
Folium doesn't have much support for that (it may be coming in a
future release). You can do it by reading the GeoJSON yourself,
splitting it into separate polygons and making them all separate Folium
Polygons or GeoJSON objects, each with its own click behavior; but
if you don't mind highlights and popups on mouseover instead of
requiring a click, that's pretty easy. For highlighting in red whenever
the user mouses over a polygon, set this highlight_function:
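def highlight_fcn(x):
    return { 'fillColor': '#ff0000' }

# Something along these lines, tying together the style and
# highlight functions plus a tooltip for mouseovers:
folium.GeoJson(
    filename,
    style_function=style_fcn,
    highlight_function=highlight_fcn,
    tooltip=folium.GeoJsonTooltip(fields=['NAME'])
).add_to(m)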
In this case, 'NAME' is the field in the shapefile that I want to display
when the user mouses over the region.
If you're not sure of the field name, the nice thing about GeoJSON
is that it's human readable. Generally you'll want to look inside
"features", for "properties" to find the fields defined for each polygon.
For instance, if I use jq to prettyprint the JSON generated for the NM
state house districts:
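jq . cb_2018_35_sldl_500k.json | less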
If you still aren't sure which property name means what (for example,
"NAME" could be anything), just keep browsing through the JSON file to
see which fields change from feature to feature and give the values
you're looking for, and it should become obvious pretty quickly.
Here's a working code example:
polidistmap.py,
and here's an example of a working map:
This is Part IV of a four-part article on ray tracing digital elevation
model (DEM) data.
The goal: render a ray-traced image of mountains from a digital
elevation model (DEM).
Part IV, Make a 3D Panorama:
Making a Panorama from Raytraced DEM Images (this article)
Except there are actually several more parts on the way, related to
using GRASS to make viewsheds. So maybe this is actually a five- or
six-parter. We'll see.
The Easy Solution
Skipping to the chase here ... I had a whole long article
written about how to make a sequence of images with povray, each
pointing in a different direction, and then stitch them together
with ImageMagick.
But a few days after I'd gotten it all working, I realized none of it
was needed for this project, because ... ta-DA —
povray accepts this argument inside its camera section:
angle 360
Duh! That makes it so easy.
You do need to change povray's projection to cylindrical;
the default is "perspective" which warps the images.
If you set your look_at to point due south --
the first and second coordinates are the same as your observer coordinate,
the third being zero so it's looking southward -- then povray will
create a lovely strip starting at 0 degrees bearing (due north), and
with south right in the middle.
The camera section I ended up with was:
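camera {
    cylinder 1      // povray's vertical-cylinder projection
    location <.3444, .033, .519>   // the observer position from Part III
    look_at <.3444, .033, 0>       // due south of the observer
    angle 360
}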
with the same light_source and height_field as in Part III.
There are still some more steps I'd like to do.
For instance, fitting names of peaks to that 360-degree pan.
The rest of this article discusses some of the techniques I would
have used, which might be useful in other circumstances.
A Script to Spin the Observer Around
Angles on a globe aren't as easy as just adding 45 degrees to the
bearing angle each time. You need some spherical trigonometry to
make the angles even, and it depends on the observer's coordinates.
Obviously, this wasn't something I wanted to calculate by hand, so
I wrote a script for it:
demproj.py.
Run it with the name of a DEM file and the observer's coordinates:
demproj.py demfile.png 35.827 -106.1803
It takes care of calculating the observer's elevation, normalizing
to the image size and all that. It generates eight files, named
outfileN.png, outfileNE.png etc.
Stitching Panoramas with ImageMagick
To stitch those demproj images manually in ImageMagick, this should
work in theory:
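convert outfileN.png outfileNE.png outfileE.png outfileSE.png \
        outfileS.png outfileSW.png outfileW.png outfileNW.png \
        +append panorama.png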
But in practice, some of the images have a few-pixel offset,
and I never did figure out why; maybe it's a rounding error
in my angle calculations.
I opened the images as layers in GIMP, and used my GIMP script
Pandora
to lay them out as a panorama. The cylindrical projection should
make the edges match perfectly, so you can turn off the layer masking.
Then use the Move tool to adjust for the slight errors
(tip: when the Move tool is active, the arrow keys will move
the current layer by a single pixel).
If you get the offsets perfect and want to know what they are
so you can use them in ImageMagick or another program, use
GIMP's Filters->Python-Fu->Console.
This assumes the panorama image is the only one loaded in GIMP,
otherwise you'll have to inspect gimp.image_list() to see where
in the list your image is.
>>> img = gimp.image_list()[0]
>>> for layer in img.layers:
... print layer.name, layer.offsets
This is Part III of a four-part article on ray tracing digital elevation
model (DEM) data.
The goal: render a ray-traced image of mountains from a digital
elevation model (DEM).
In Part II, I showed how the povray camera position and angle
need to be adjusted based on the data, and the position of the light
source depends on the camera position.
In particular, if the camera is too high, you won't see anything
because all the relief will be tiny invisible bumps down below.
If it's too low, it might be below the surface and then you
can't see anything.
If the light source is too high, you'll have no shadows, just a
uniform grey surface.
That's easy enough to calculate for a simple test image like the one I
used in Part II, where you know exactly what's in the file.
But what about real DEM data where the elevations can vary?
Explore Your Test Data
For a test, I downloaded some data that includes the peaks I can see
from White Rock in the local Jemez and Sangre de Cristo mountains.
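Then I made a hillshade image, as described in Part I, and took a look:

gdaldem hillshade mountains.tif hillshade.png
pho hillshade.png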
(or whatever your favorite image viewer is, if not
pho).
The image at right shows the hillshade for the data I'm using, with a
yellow cross added at the location I'm going to use for the observer.
Sanity check: do the lowest and highest elevations look right?
Let's look in both meters and feet, using the tricks from Part I.
>>> import gdal
>>> import numpy as np
>>> demdata = gdal.Open('mountains.tif')
>>> demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
>>> demarray.min(), demarray.max()
(1501, 3974)
>>> print([ x * 3.2808399 for x in (demarray.min(), demarray.max())])
[4924.5406899, 13038.057762600001]
That looks reasonable. Where are those highest and lowest points,
in pixel coordinates?
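np.where can find them:

>>> np.where(demarray == demarray.max())
(array([645]), array([1386]))
>>> np.where(demarray == demarray.min())
(array([1667]), array([175]))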
Those coordinates are reversed because of the way numpy arrays
are organized: (1386, 645) in the image looks like
Truchas Peak (the highest peak in this part of the Sangres), while
(175, 1667) is where the Rio Grande disappears downstream off the
bottom left edge of the map -- not an unreasonable place to expect to
find a low point. If you're having trouble eyeballing the coordinates,
load the hillshade into GIMP and watch the coordinates reported at the
bottom of the screen as you move the mouse.
While you're here, check the image width and height. You'll need it later.
>>> demarray.shape
(1680, 2160)
Again, those are backward: they're the image height, width.
Choose an Observing Spot
Let's pick a viewing spot: Overlook Point in White Rock
(marked with the yellow cross on the image above).
Its coordinates are -106.1803, 35.827. What are the pixel coordinates?
Using the formula from the end of Part I:
>>> import affine
>>> affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
>>> inverse_transform = ~affine_transform
>>> [ round(f) for f in inverse_transform * (-106.1803, 35.827) ]
[744, 808]
Just to double-check, what's the elevation at that point in the image?
Note again that the numpy array needs the coordinates in reverse order:
Y first, then X.
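>>> demarray[808, 744]
1878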
The camera should be at the observer's position, and povray needs
that as a line like
location <rightward, upward, forward>
where those numbers are fractions of 1.
The image size in pixels is
2160x1680, and the observer is at pixel location (744, 808).
So the first and third coordinates of location should
be 744/2160 and 808/1680, right?
Well, almost. That Y coordinate of 808 is measured from the top,
while povray measures from the bottom. So the third coordinate is
actually 1. - 808/1680.
Now we need height, but how do you normalize that? That's another thing
nobody seems to document anywhere I can find; but since we're
using a 16-bit PNG, I'll guess the maximum is 2^16, or 65536.
That's meters, so DEM files can specify some darned high mountains!
So that's why that location <0, .25, 0> line I got
from the Mapping Hacks book didn't work: it put the camera at
.25 * 65536 or 16,384 meters elevation, waaaaay up high in the sky.
My observer at Overlook Point is at 1,878 meters elevation, which
corresponds to a povray height of 1878/65536. I'll use the same value
for the look_at height to look horizontally. So now we can
calculate all three location coordinates: 744/2160 = .3444,
1878/65536 = 0.0287, 1. - 808/1680 = 0.5190:
location <.3444, 0.0287, .519>
Povray Glitches
Except, not so fast: that doesn't work. Remember how I mentioned in
Part II that povray doesn't work if the camera location is at ground
level? You have to put the camera some unspecified minimum distance
above ground level before you see anything. I fiddled around a bit and
found that if I multiplied the ground level height by 1.15 it worked,
but 1.1 wasn't enough. I have no idea whether that will work in
general. All I can tell you is, if you're setting location to
be near ground level and the generated image looks super dark
regardless of where your light source is, try raising your location a
bit higher. I'll use 1878/65536 * 1.15 = 0.033.
For a first test, try setting look_at to some fixed place in
the image, like the center of the top (north) edge (right .5, forward 1):
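look_at <.5, .033, 1>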
That means you won't be looking exactly north, but that's okay, we're
just testing and will worry about that later.
The middle value, the elevation, is the same as the camera elevation
so the camera will be pointed horizontally. (look_at can be at
ground level or even lower, if you want to look down.)
Where should the light source be? I tried to be clever and put the
light source at some predictable place over the observer's right
shoulder, and most of the time it didn't work. I ended up just
fiddling with the numbers until povray produced visible terrain.
That's another one of those mysterious povray quirks. This light
source worked fairly well for my DEM data, but feel free to experiment:
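light_source { <2, .5, -1> color rgb <1, 1, 1> }
// Take the specific numbers with a grain of salt: the point is a
// light off to one side of the terrain and not too far above it.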
And once I finally got to this point I could immediately see it was correct.
That's Black Mesa (Tunyo) out in the valley a little right of center,
and I can see White Rock
canyon in the foreground with Otowi Peak on the other side of the canyon.
(I strongly recommend, when you experiment with this, that you choose a
scene that's very distinctive and very familiar to you, otherwise you'll
never be sure if you got it right.)
Next Steps
Now I've accomplished my goal: taking a DEM map and ray-tracing it.
But I wanted even more. I wanted a 360-degree panorama of
all the mountains around my observing point.
Povray can't do that by itself, but
in Part IV, I'll show how to make a series of povray renderings
and stitch them together into a panorama.
Part IV,
Making a Panorama
from Raytraced DEM Images
This is Part II of a four-part article on ray tracing digital elevation
model (DEM) data. (Actually, it's looking like there may be five or
more parts in the end.)
The goal: render a ray-traced image of mountains from a digital
elevation model (DEM).
My goal for that DEM data was to use ray tracing to show the elevations
of mountain peaks as if you're inside the image looking out at those peaks.
I'd seen the open source ray tracer povray used for that purpose in
the book Mapping Hacks: Tips & Tools for Electronic Cartography:
Hack 20, "Make 3-D Raytraced Terrain Models", discusses how to use
it for DEM data.
Unfortunately, the book is a decade out of date now, and lots of
things have changed. When I tried following the instructions in Hack
20, no matter what DEM file I used as input I got the same distorted
grey rectangle. Figuring out what was wrong meant understanding how
povray works, which involved a lot of testing and poking since the
documentation isn't clear.
Convert to PNG
Before you can do anything, convert the DEM file to a 16-bit greyscale PNG,
the only format povray accepts for what it calls height fields:
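gdal_translate -ot UInt16 -of PNG demfile.tif demfile.png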
If your data is in some format like ArcGIS that has multiple files,
rather than a single GeoTIFF file, try using the name of the directory
containing the files in place of a filename.
Set up the .pov file
Now create a .pov file, which will look something like this:
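camera {
    location <.5, HEIGHT, 0>    // HEIGHT is explained below
    look_at <.5, HEIGHT, 1>
}
// This light source is the one from the original hack; as you'll
// see below, it needs adjusting.
light_source { <0, 3000, 0> color rgb <1, 1, 1> }
height_field {
    png "demfile.png"
    smooth
    pigment { color rgb <1, 1, 1> }
}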
The trick is setting up the right values for the camera and light source.
Coordinates like the camera location and look_at,
are specified by three numbers that represent
<rightward, upward, forward>
as a fraction of the image size.
Imagine your DEM tilting forward to lie flat in
front of you: the bottom (southern) edge of your DEM image
corresponds to 0 forward, whereas the top (northern) edge is 1 forward.
0 in the first coordinate is the western edge, 1 is the eastern.
So, for instance, if you want to put the virtual camera at the
middle of the bottom (south) edge of your DEM and look straight
north and horizontally, neither up nor down, you'd want:
location <.5, HEIGHT, 0>
look_at <.5, HEIGHT, 1>
(I'll talk about HEIGHT in a minute.)
It's okay to go negative, or to use numbers bigger than one; that
just means a coordinate that's outside the height map.
For instance, a camera location of
location <-1, HEIGHT, 2>
would be off the west and north edges of the area you're mapping.
look_at, as you might guess, is the point the camera is looking at.
Rather than specify an angle, you specify a point in three dimensions
which defines the camera's angle.
What about HEIGHT?
If you make it too high, you won't see anything
because the relief in your DEM will be too far below you and will disappear.
That's what happened with the code from the book: it specified
location <0, .25, 0>, which, in current DEM files,
means the camera is about 16,000 meters up in the sky, so high that the
mountains shrink to invisibility.
If you make the height too low, then everything disappears
because ... well, actually I don't know why. If it's 0, then you're
most likely underground and I understand why you can't see anything,
but you have to make it significantly higher than ground level, and
I'm not sure why. Seems to be a povray quirk.
Once you have a .pov file with the right camera and light source,
you can run povray like this:
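povray +A +W1024 +H768 +Ifile.pov +Orendered.png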
then take a look at rendered.png in your favorite image viewer.
Simple Sample Data
There's not much documentation for any of this. There's
povray:
Placing the Camera, but it doesn't explain details like which
number controls which dimension or why it doesn't work if you're too
high or too low. To figure out how it worked, I made a silly little
test image in GIMP consisting of some circles with fuzzy edges.
Those correspond to very tall pillars with steep sides: in these
height maps, white means the highest point possible, black means
the lowest.
Then I tried lots of different values for location and look_at
until I understood what was going on.
For my bowling-pin image, it turned out looking northward (upward)
from the south (the bottom of the image) didn't work, because the
pillar at the point of the triangle blocked everything else.
It turned out to be more useful to put the camera beyond the top
(north side) of the image and look southward, back toward the image.
location <.5, HEIGHT, 2>
look_at <.5, HEIGHT, 0>
The position of the light_source is also important.
For instance, for my circles, the light source given in the original hack,
<0, 3000, 0>,
is so high that the pillars aren't visible at all,
because the light is shining only on their tops and not on their sides.
(That was also true for most DEM data I tried to view.)
I had to move the light source much lower, so it illuminated the sides
of the pillars and cast some shadows, and that was true for DEM data
as well.
The .pov file above, with the camera halfway up the field (.5)
and situated in the center of the north end of the field,
looking southward and just slightly up from horizontal (.6),
rendered like this. I can't explain the two artifacts in the middle.
The artifacts at the tops and bottoms of the pillars are presumably
rounding errors and don't worry me.
Finally, I felt like I was getting a handle on povray camera positioning.
The next step was to apply it to real Digital Elevation Maps files.
I'll cover that in Part III, Povray on real DEM data:
Ray-Tracing Digital Elevation Data in 3D with Povray
One of my hiking buddies uses a phone app called Peak Finder. It's a neat
program that lets you spin around and identify the mountain peaks you see.
Alas, I can't use it, because it won't work without a compass, and
[expletive deleted] Samsung disables the compass in their phones, even
though the hardware is there. I've often wondered if I could write a program
that would do something similar. I could use the images in planetarium
shows, and could even include additions like
predicting exactly when and where the moon would rise on a given date.
Before plotting any mountains, first you need some elevation data,
called a Digital Elevation Model or DEM.
Get the DEM data
Digital Elevation Models are available from a variety of sources in a
variety of formats. But the downloaders and formats aren't as well
documented as they could be, so it can be a confusing mess.
USGS
USGS steers you to the somewhat flaky and confusing
National Map
Download Client. Under Data in the left sidebar, click
on Elevation Products (3DEP), select the accuracy you need,
then zoom and pan the map until it shows what you need.
Current Extent doesn't seem to work consistently, so use
Box/Point and sweep out a rectangle.
Then click on Find products.
Each "product" should have a download link next to it,
or if not, you can put it in your cart and View Cart.
Except that National Map tiles often don't load, so you can end up with a
mostly-empty map (as shown here) where you have no idea what area
you're choosing. Once this starts happening, switching to a different
set of tiles probably won't help; all you can do is wait a few hours
and hope it gets better.
Or get your DEM data somewhere else. Even if you stick with the USGS,
they have a different set of DEM data, called SRTM (it comes from the
Shuttle Radar Topography Mission) which is downloaded from a completely
different place,
SRTM DEM data, Earth Explorer.
It's marginally easier to use than the National Map and less flaky
about tile loading, and it gives you
GeoTIFF files instead of zip files containing various ArcGIS formats.
Sounds good so far; but once you've wasted time defining the area you
want, suddenly it reveals that you can't download anything until you
make an account, going through a long registration process that demands
name, address and phone number (!).
Of course neither of these sources lets you just download data for
a given set of coordinates; you have to go through the interactive
website any time you want anything. So even if you don't mind giving
the USGS your address and phone number, if you want something you can
run from a program, you need to go elsewhere.
The best I found is
OpenTopography's
SRTM API, which lets you download arbitrary areas specified by
latitude/longitude bounding boxes.
Verify the Data: gdaldem
Okay, you've got some DEM data. Did you get the area you meant to get?
Is there any data there? DEM data often comes packaged as an image,
primarily GeoTIFF. You might think you could simply view that in an
image viewer -- after all, those nice preview images they show you
on those interactive downloaders show the terrain nicely.
But the actual DEM data is scaled so that even high mountains
don't show up; you probably won't be able to see anything but blackness.
One way of viewing a DEM file as an image is to load it into GIMP.
Bring up Colors->Levels, go to the input slider (the upper
of the two sliders) and slide the rightmost triangle leftward until it's
near the right edge of the histogram. Don't save it that way (that will
mess up the absolute elevations in the file); it's just
a quick way of viewing the data.
Update: A better, one-step way is Color > Auto > Stretch Contrast.
A better way to check DEM data files is a beautiful little program
called gdaldem
(part of the GDAL package).
It has several options, like generating a hillshade image:
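gdaldem hillshade demfile.tif hillshade.png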
Then view hillshade.png in your favorite image viewer and see
if it looks like you expect. Having read quite a few elaborate
tutorials on hillshade generation over the years, I was blown away at
how easy it is with gdaldem.
Here are some other operations you can do on DEM data.
Translate the Data to Another Format
gdal has lots more useful stuff beyond gdaldem. For instance, my
ultimate goal, ray tracing, will need a PNG:
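gdal_translate -ot UInt16 -of PNG demfile.tif demfile.png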
gdal_translate can recognize most DEM formats. If you have a
complicated multi-file format like ARCGIS,
try using the name of the directory where the files live.
Get Vertical Limits, for Scaling
What's the highest point in your data, and at what coordinates does that
peak occur? You can find the highest and lowest points easily with Python's
gdal package if you convert the gdal.Dataset into a numpy array:
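from osgeo import gdal    # on older systems, just "import gdal"
import numpy as np

demdata = gdal.Open('demfile.tif')
demarray = np.array(demdata.GetRasterBand(1).ReadAsArray())
print(demarray.min(), demarray.max())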
That gives you the highest and lowest elevations. But where are they
in the data? That's not super intuitive in numpy; the best way I've
found is:
indices = np.where(demarray == demarray.max())
ymax, xmax = indices[0][0], indices[1][0]
print("The highest point is", demarray[ymax][xmax])
print(" at pixel location", xmax, ymax)
Translate Between Lat/Lon and Pixel Coordinates
But now that you have the pixel coordinates of the high point, how do
you map that back to latitude and longitude?
That's trickier, but here's one way, using the affine package:
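import affine

affine_transform = affine.Affine.from_gdal(*demdata.GetGeoTransform())
# The affine transform maps (column, row) to (longitude, latitude).
lon, lat = affine_transform * (xmax, ymax)
print("Highest point at lon", lon, "lat", lat)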
What about the other way? You have latitude and longitude
and you want to know what pixel location that corresponds to?
Define an inverse transformation:
inverse_transform = ~affine_transform
px, py = [ round(f) for f in inverse_transform * (lon, lat) ]
Those transforms will become important once we get to Part III.
But first, Part II, Understand Povray:
Height Fields in Povray
Dave and I will be presenting a free program on Stonehenge at the Los
Alamos Nature Center tomorrow, June 14.
The nature center has a list of programs people have asked for, and
Stonehenge came up as a topic in our quarterly meeting half a year ago.
Remembering my seventh grade fascination
with Stonehenge and its astronomical alignments -- I discovered
Stonehenge Decoded at the local library, and built a desktop
model showing the stones and their alignments -- I volunteered.
But after some further reading, I realized that not all of those
alignments are all they're cracked up to be and that there might not
be much of astronomical interest to talk about, and I un-volunteered.
But after thinking about it for a bit, I realized that "not all
they're cracked up to be" makes an interesting topic in itself.
So in the next round of planning, I re-volunteered; the result is
tomorrow night's presentation.
The talk will include a lot of history of Stonehenge and its construction,
and a review of some other important or amusing henges around the world.
But this article is on the astronomy, or lack thereof.
The Background: Stonehenge Decoded
Stonehenge famously aligns with the summer solstice sunrise, and
that's when tens of thousands of people flock to Salisbury, UK to
see the event. (I'm told that the rest of the time, the monument is
fenced off so you can't get very close to it, though I've never had
the opportunity to visit.)
Curiously, archaeological evidence suggests that the summer solstice
wasn't the big time for prehistorical gatherings at Stonehenge; the
time when it was most heavily used was the winter solstice, when there's
a less obvious alignment in the other direction. But never mind that.
In 1963, Gerald Hawkins wrote an article in Nature, which he
followed up two years later with a book entitled Stonehenge Decoded.
Hawkins had access to an IBM 7090, capable of a then-impressive
100 Kflops (thousand floating point operations per second; compare
a Raspberry Pi 3 at about 190 Mflops, or about a hundred Gflops for
something like an Intel i5). It cost $2.9 million (nearly $20 million
in today's dollars).
Using the 7090, Hawkins mapped the positions of all of Stonehenge's
major stones, then looked for interesting alignments with the sun and moon.
He found quite a few of them.
(Hawkins and Fred Hoyle also had a theory about the fifty-six Aubrey
holes being a lunar eclipse predictor, which captured my seventh-grade
imagination but which most researchers today think was more likely
just a coincidence.)
But I got to thinking ... Hawkins mapped at least 38 stones if you
don't count the Aubrey holes. If you take 38 randomly distributed points,
what are the chances that you'll find interesting astronomical alignments?
A Modern Re-Creation of Hawkins' Work
Programmers today have it a lot easier than Hawkins did.
We have languages like Python, with libraries like PyEphem to handle
the astronomical calculations.
And it doesn't hurt that our computers are about a million times faster.
Anyway, my script,
skyalignments.py
takes a GPX file containing a list of geographic coordinates and compares
those points to sunrise and sunset at the equinoxes and solstices,
as well as the full moonrise and moonset nearest the solstice or equinox.
It can find alignments among all the points in the GPX file, or from a
specified "observer" point to each point in the file. It allows a slop
of a few degrees, 2 degrees by default; this is about four times the
diameter of the sun or moon, but a half-step from your observing
position can make a bigger difference than that. I don't know how
much slop Hawkins used; I'd love to see his code.
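The heart of such a test is simple enough: compute the bearing from
one point to another, then compare bearings modulo 360 with some slop.
A sketch:

import math

def bearing(lat1, lon1, lat2, lon2):
    '''Initial bearing in degrees from point 1 to point 2.'''
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.

def aligned(b1, b2, slop=2.):
    '''Do two bearings match, within slop degrees?'''
    diff = abs(b1 - b2) % 360.
    return min(diff, 360. - diff) <= slop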
My first thought was, what if you stand on a mountain peak and look
around you at other mountain peaks? (It's easy to get GPS coordinates
for peaks; if you can't find them online you can click on them on a map.)
So I plotted the major peaks in the Jemez and Sangre de Cristo mountains
that I figured were all mutually visible. It came to 22 points; about
half what Hawkins was working with.
Running the script on those 22 points turned up a long list of alignments.
Yikes! Way too many. What if I cut it down? So I tried eliminating all
but the really obvious ones, the ones you really notice from across
the valley. The most prominent 11 peaks: 5 in the Jemez, 6 in the Sangres.
That was a little more manageable. Now I was down to only 22 alignments.
Now, I'm pretty sure that the Ancient Ones -- or aliens -- didn't lay
out the Jemez and Sangre de Cristo mountains to align with the rising
and setting sun and moon. No, what this tells us is that pretty much any
distribution of points will give you a bunch of astronomical alignments.
And that's just the sun and moon, all Hawkins was considering. If you
look for writing on astronomical alignments in ancient monuments,
you'll find people claiming to have found alignments with all
sorts of other rising and setting bodies, like Sirius and
Orion's belt. Imagine how many alignments I could have found if I'd
included the hundred brightest stars.
So I'm not convinced.
Certainly Stonehenge's solstice alignment looks real; I'm not disputing that.
And there are lots of other archaeoastronomy sites that are even
more convincing, like the Chaco sun dagger. But I've also seen plenty of
web pages, and plenty of talks, where someone maps out a collection of
points at an ancient site and uses alignments among them as proof that
it was an ancient observatory. I suspect most of those alignments are more
evidence of random chance and wishful thinking than archeoastronomy.
Now that I know how to
make
a map overlay for OsmAnd, I wanted a land ownership overlay.
When we're hiking, we often wonder whether we're on Forest Service,
BLM, or NPS land, or private land, or Indian land. It's not easy to tell.
Finding Land Ownership Data
The first trick was finding the data. The New Mexico State Land Office has an
interactive
New Mexico Land Status map, but that's no help when walking
around, and their downloadable GIS files only cover the lands
administered by the state land office, which mostly doesn't include
any areas where we hike. They do have some detailed
PDF maps of
New Mexico Lands if you have a printer capable of printing
enormous pages, which most of us don't.
In theory I could download their 11" x 17" Land Status PDF, convert it
to a raster file, and georeference it as I described in the earlier
article; but since they obviously have the GIS data (used for the
interactive map) I'd much rather download the data and save myself
all that extra work.
Eventually I found New Mexico ownership data at
UNM's RGIS page, which has
an excellent collection of GIS data available for download. Click
on Boundaries, then download Surface Land Ownership.
It's available in a variety of formats; I chose the geojson format
because I find it the most readable and the easiest to parse with
Python, though ESRI shapefiles arguably might have been easier in QGIS.
Colorizing Polygons in QGIS
You can run qgis on a geojson file directly. When it loads it shows the
boundaries, and you can use the Info tool to click on a polygon and
see its metadata -- ownership might be BLM, DOE, FS, I, or whatever.
But they're all the same color, so it's hard to get a sense of land
ownership just clicking around.
To colorize the polygons differently, right-click on the layer name and
choose Properties. For Style, choose Categorized.
For Column, pick the attribute you want to use to choose colors:
for this dataset, it's "own", for ownership.
Color ramp is initially set to random.
Click Classify to generate an initial color ramp, then
click Apply to see what it looks like on the map.
Then you can customize the colors by doubleclicking on specific color
swatches. For instance, by unstated convention most maps show Forest
Service land as green, BLM and Indian land as various shades of brown.
Click Apply as you change colors, until you're happy with the
result.
You can export the colored layer to GeoTIFF using QGIS' confusing
and poorly documented
Print Composer.
Create one with: Project > New Print Composer,
which will open with a blank white canvas.
Zoom and pan in the QGIS window so the full extent of the image you
want to export is visible. Then, in the Print Composer,
Layout > Add Map. Click and drag in the blank canvas,
going from one corner to the opposite corner, and some portion of
the map should appear.
There doesn't seem to be any way to get Print Composer to import your
whole map automatically, or for you to control what portion of the map
from the QGIS window will show up in the Print Composer when you drag.
If you guess wrong and don't get all of your map, hit Delete, switch
to the QGIS window and drag and/or zoom your map a little, then switch
back to Print Composer and try adding it again.
You can also make adjustments by changing the Extents in
the Item Properties tab, and clicking the Set to map canvas
extent button in that tab will enlarge your extents to cover
approximately what's currently showing in the QGIS window.
It's a fiddly process and there's not much control, but when you
decide it's close enough, Composer > Export as Image... and
choose TIFF format. (Print Composer offers both TIFF and TIF; I don't
know if there's a difference. I only tried TIFF with two effs.) That
should write a GeoTIFF format; to verify that, go to a terminal and
run gdalinfo on the saved TIFF file and make sure it says it's
GeoTIFF.
Load into OsmAnd
Finally, load the image into OsmAnd's tiles folder as discussed in the
previous article,
then bring up the Configure map menu and enable the overlay.
I found that the black lines dividing the various pieces of land are
a bit thicker than I'd like. You can't get that super accurate "I'm
standing with one foot in USFS land and the other foot in BLM land"
feeling because of the thick black DMZ dividing them. But that's
probably just as well: I suspect the data doesn't have pinpoint
accuracy either. I'm sure there's a way to reduce the thickness of
the black line or eliminate it entirely, but for now, I'm happy with
what I have.
Update: Here's another, easier, way to show land use on OsmAnd using
overlay tiles from the BLM (in the US):
Adding BLM Land Use Maps to Osmand on Android.
It isn't as general (you can only show something you can get from an
online tiled source) and it updates in real-time, meaning it might use
cellphone data rather than working entirely offline, but it's still a great
option to know about.
For many years I've wished I could take a raster map image, like a
geology map, an old historical map, or a trail map, and overlay it
onto the map shown in OsmAnd
so I can use it on my phone while walking around.
I've tried many times, but there are so many steps and I never
found a method that worked.
Last week, the ever helpful Bart Eisenberg posted to the OsmAnd list
a video he'd made:
Displaying
web-based maps with MAPC2MAPC: OsmAnd Maps & Navigation.
Bart makes great videos ... but in this case, MAPC2MAPC
turned out to be a Windows program so it's no help to a Linux user.
Darn!
But seeing his steps laid out inspired me to try again, and gave me
some useful terms for web searching. And this time I finally succeeded.
I was also helped by a post to the OsmAnd list by A Thompson,
How to get aerial image into offline use?,
though I needed to change a few of the steps.
(Note: click on any of the screenshots here to see a larger version.)
Georeference the Image Using QGIS
Update in Feb 2024: Several things have changed in QGIS georeferencing
(the version I'm using now is 3.28.15-Firenze),
so note the updated sections below.
The first step is to georeference the image: turn the plain raster image
into a GeoTiff that has references showing where on Earth its corners are.
It turns out there's an open source program that can do that, QGIS.
Although it's poorly documented,
it's fairly easy once you figure out the trick.
I started with the tutorial
Georeferencing Basics,
but it omits one important point, which I finally found in BBRHUFT's
How
to Georeference a map in QGIS. Step 11 is the key: the Coordinate
Reference System (CRS) must be the same in the georeferencing window
as it is in the main QGIS window. That sounds like a no-brainer, but
in practice, the lists of possible CRSes shown in the two windows
don't overlap, so it's easy to end up with CRSes that don't match.
It'll look like it's working, but the resulting GeoTiff will have
coordinates nowhere near where they should be.
Instead, follow BBRHUFT's advice and type 3857 into the filter
box in both windows. The "WGS 84 / Pseudo Mercator" CRS will show up
and you can use it in both places; then the GeoTiff will come out in
the right place.
If you're starting from a PDF, you may need to convert it to a raster format
like PNG or JPG first. GIMP can do that.
So, the full QGIS steps are:
Start qgis, and bring up Project > Project Properties.
Click on CRS, type 3857 in the filter box,
and make sure the CRS is set to WGS 84 / Pseudo Mercator.
(optional) While you're in Project Properties, you may want to
click on General and set
Display Coordinates Using: Decimal degrees.
By default qgis always uses "meters" (measured from where?)
which seems like a bizarre and un-useful default.
In Plugins > Manage and Install Plugins...,
install two plugins: Quick Map Services and
Georeferencer GDAL.
Update: both OSM and the Georeferencer are now included by default,
so you no longer have to install these plugins.
Back in the main QGIS window, go to Web > QuickMapServices
(that's one of the plugins you just installed) and pick
a background map, probably something from OpenStreetMap.
Once you find one that loads, navigate to the area you're
interested in.
Update: Sometimes, including an OpenStreetMap is as easy as clicking on
the OpenStreetMap line in the upper left pane of the QGIS window.
Other times there is no such line, and you have to use
Web > QuickMapServices. I don't know how QGIS decides.
(All the OpenStreetMap tile sets print "API Key Required" all over
everything. You can get an API key from Thunderforest -- I use one
for PyTopo -- but it won't help you here, because nobody seems to
know how to get QGIS to use an API key, though I found dozens of
threads with people asking. If you find a way, let me know.)
Start the georeferencer:
Raster > Georeferencer > Georeferencer....
It may prompt you for a CRS; if so, choose WGS 84 / Pseudo Mercator
if it's available, and if it's not, type 3857 in the filter box
to find it.
In the Georeferencer window, File > Open Raster...
Update: If it isn't showing the files you want to open, you may
need to adjust the
Files of type menu to make sure it includes your file type.
When I ran it just now, I had two JPG files to georeference.
The first time, it showed those files automatically.
When I went to open the second file, it showed nothing, and I had
to change Files of type to JPEG JFIF before
I could open the file.
Select matching points. This part is time consuming but fun.
Zoom in to your raster map in the Georeferencer window and pick a
point that's easy to identify: ideally the intersection of two
roads, or a place where a road crosses a river, or similar
identifying features. Make sure you can identify the same point
in both the Georeferencer window and the main QGIS window
(see the yellow arrows in the diagram).
Click on the point in the Georeferencer window, then click "from
map canvas".
In the main QGIS window, click on the corresponding point.
You can zoom with the mousewheel and pan by middle mouse dragging.
Repeat for a total of four to eight points.
You'll see yellow boxes pop up in both windows showing the matching points.
If you don't, there's something wrong; probably your CRSes don't
match.
In the Georeferencer window, call up
Settings > Transformation Settings.
Choose a Transformation type (most people seem to like Thin Plate Spline),
a Resampling method (Lanczos is good), a target SRS (make sure it's
3857 WGS 84 / Pseudo Mercator to match the main window).
Set Output raster to your desired filename, ending in .tiff.
Be sure to check the "Load in QGIS when done" box at the bottom.
Click OK.
File > Start Georeferencing. It's quite fast, and will
create your .tiff file when it's finished, and should load the TIFF
as a new layer in the main QGIS window.
Check it and make sure it seems to be lined up properly with
geographic features. If you need to you can right-click on the layer
and adjust its transparency.
In the Georeferencer window, File > Save GCP Points as...
in case you want to go back and do this again. Now you can close
the Georeferencer window.
Project > Save to save the QGIS project, so if you want
to try again you don't have to repeat all the steps.
Now you can close qgis. Whew!
Convert the GeoTiff to Map Tiles
The ultimate goal is to convert to OsmAnd's sqlite format, but there's
no way to get there directly. First you have to convert it to map tiles
in a format called mbtiles.
QGIS has a plug-in called QTiles but it didn't work for me: it briefly
displayed a progress bar which then disappeared without creating any files.
Fortunately, you can do the conversion much more easily with
gdal_translate, which at least on Debian is part of
the gdal-bin package.
gdal_translate filename.tiff filename.mbtiles
That will create tiles for a limited range of zoom levels (maybe only
one zoom level). gdalinfo will tell you the zoom levels in the
file. If you want to be able to zoom out and still see your overlay,
you might want to add wider zoom levels, which you can do like this:
gdaladdo -r nearest filename.mbtiles 2 4 8 16
Incidentally, gdal can also create a directory of tiles suitable for
a web slippy map, though you don't need that for OsmAnd. For that,
use gdal2tiles, which on Debian is part of
the python-gdal package:
mkdir tiles
gdal2tiles filename.tiff tiles
Not only does it create tiles, it also includes multiple HTML files
you can use to display those tiles using the Leaflet, OpenLayers or
Google Maps JavaScript libraries. Very cool!
Create the OsmAnd sqlite file
Tarwirdur has written a nice simple Python script to translate from
mbtiles to OsmAnd sqlite:
mbtiles2osmand.py.
Download it, then run it with input and output filenames:
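mbtiles2osmand.py filename.mbtiles filename.sqlitedb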
So easy to use! Most of the other references I saw said to use
Mobile Atlas Creator (MOBAC)
and that looked a lot more complicated.
Incidentally, Bart's video says MAPC2MAPC calls the format
"Locus/Rmaps/Galileo/OSMAND (sqlite)", which might be useful to know
for web search purposes.
Install in OsmAnd
Once you have the .sqlitedb file, copy it to OsmAnd's tiles folder
in whatever way you prefer. For me, that's
adb push file.sqlitedb $androidSD/Android/data/net.osmand.plus/files/tiles
where $androidSD is the /storage/whatever location of
my device's SD card.
Then start OsmAnd and tap on the icon in the upper left for your current mode
(car, bike, walking etc.) to bring up the Configure map menu.
Scroll down to Overlay or Underlay map, enable one of
those two and you should be able to choose your newly installed map.
You can adjust the overlay's transparency with a slider that's visible
at the bottom of the map (the blue slider just above the distance scale),
so you can see your overlay and the main map at the same time.
The overlay disappears if you zoom out too far, and I haven't yet figured
out what controls that.
Sure, this process is a lot of work. But the result is worth it.
Check out the geologic layers we walked through on a portion of
a recent hike in Rendija Canyon (our hike is the purple path).
Update 2022-06-24: Although the concepts described in this article
are still valid, the program I wrote depends on GTK2 and is therefore
obsolete. I discuss versions for more modern toolkits here:
Clicking through a Translucent Image Window.
It happened again: someone sent me a JPEG file with an image of a topo
map, with a hiking trail and interesting stopping points drawn on it.
Better than nothing. But what I really want on a hike is GPX waypoints
that I can load into OsmAnd, so I can see whether I'm still on the trail
and how to get to each point from where I am now.
My PyTopo program
lets you view the coordinates of any point, so you can make a waypoint
from that. But for adding lots of waypoints, that's too much work, so
I added an "Add Waypoint" context menu item -- that was easy,
took maybe twenty minutes.
PyTopo already had the ability to save its existing tracks and waypoints
as a GPX file, so no problem there.
But how do you locate the waypoints you want? You can do it the hard
way: show the JPEG in one window, PyTopo in the other, and
do the "let's see the road bends left then right, and the point is
off to the northwest just above the right bend and about two and a half
times as far away as the distance through both road bends". Ugh.
It takes forever and it's terribly inaccurate.
More than once, I've wished for a way to put up a translucent image
overlay that would let me click through it. So I could see the image,
line it up with the map in PyTopo (resizing as needed),
then click exactly where I wanted waypoints.
I needed two features beyond what normal image viewers offer:
translucency, and the ability to pass mouse clicks through to the
window underneath.
A translucent image viewer, in Python
The first part, translucency, turned out to be trivial.
In a class inheriting from my
Python
ImageViewerWindow, I just needed to add this line to the constructor:
self.set_opacity(.5)
Plus one more step.
The window was translucent now, but it didn't look translucent,
because I'm running a simple window manager (Openbox) that
doesn't have a compositor built in. Turns out you can run a compositor on top
of Openbox. There are lots of compositors; the first one I found,
which worked fine, was
xcompmgr -c -t-6 -l-6 -o.1
The -c specifies client-side compositing. -t and -l specify top and left
offsets for window shadows (negative so they go on the bottom right).
-o.1 sets the opacity of window shadows. In the long run, -o0 is
probably best (no shadows at all) since the shadow interferes
a bit with seeing the window under the translucent one. But having a
subtle .1 shadow was useful while I was debugging.
That's all I needed: voilà, translucent windows.
Now on to the (much) harder part.
A click-through window, in C
X11 has something called the SHAPE extension, which I experimented with
once before to make a silly program called
moonroot.
It's also used for the familiar "xeyes" program.
It's used to make windows that aren't square, by passing a shape mask
telling X what shape you want your window to be.
In theory, I knew I could do something like make a mask where every
other pixel was transparent, which would simulate a translucent image,
and I'd at least be able to pass clicks through on half the pixels.
But fortunately, first I asked the estimable Openbox guru Mikael
Magnusson, who tipped me off that the SHAPE extension also allows for
an "input shape" that does exactly what I wanted: lets you catch
events on only part of the window and pass them through on the rest,
regardless of which parts of the window are visible.
Knowing that was great. Making it work was another matter.
Input shapes turn out to be something hardly anyone uses, and
there's very little documentation.
In both C and Python, I struggled with drawing onto a pixmap
and using it to set the input shape. Finally I realized that there's a
call to set the input shape from an X region. It's much easier to build
a region out of rectangles than to draw onto a pixmap.
I got a C demo working first. The essence of it was this:
if (!XShapeQueryExtension(dpy, &shape_event_base, &shape_error_base)) {
    printf("No SHAPE extension\n");
    return;
}

/* Make a shaped window, a rectangle smaller than the total
 * size of the window. The rest will be transparent.
 */
region = CreateRegion(outerBound, outerBound,
                      XWinSize-outerBound*2, YWinSize-outerBound*2);
XShapeCombineRegion(dpy, win, ShapeBounding, 0, 0, region, ShapeSet);
XDestroyRegion(region);

/* Make a frame region.
 * So in the outer frame, we get input, but inside it, it passes through.
 */
region = CreateFrameRegion(innerBound);
XShapeCombineRegion(dpy, win, ShapeInput, 0, 0, region, ShapeSet);
XDestroyRegion(region);
CreateRegion sets up rectangle boundaries, then creates a region
from those boundaries:
Region CreateRegion(int x, int y, int w, int h) {
    Region region = XCreateRegion();
    XRectangle rectangle;
    rectangle.x = x;
    rectangle.y = y;
    rectangle.width = w;
    rectangle.height = h;
    XUnionRectWithRegion(&rectangle, region, region);
    return region;
}
Next problem: once I had shaped input working, I could no longer move
or resize the window, because the window manager passed events through
the window's titlebar and decorations as well as through the rest of
the window.
That's why you'll see that CreateFrameRegion call in the gist:
I had a theory that if I omitted the outer part of the window from
the input shape, and handled input normally around the outside, maybe
that would extend to the window manager decorations. But the problem
turned out to be a minor Openbox bug, which Mikael quickly
tracked down (in openbox/frame.c, in the
XShapeCombineRectangles call on line 321,
change ShapeBounding to kind).
Openbox developers are the greatest!
Input Shapes in Python
Okay, now I had a proof of concept: X input shapes definitely can work,
at least in C. How about in Python?
There's a set of python-xlib bindings, and they even support the SHAPE
extension, but they have no documentation and didn't seem to include
input shapes. I filed a GitHub issue and traded a few notes with
the maintainer of the project.
It turned out the newest version of python-xlib had been completely
rewritten, and supposedly does support input shapes. But the API is
completely different from the C API, and after wasting about half a day
tweaking the demo program trying to reverse engineer it, I gave up.
Fortunately, it turns out there's a much easier way. Python-gtk has
shape support, even including input shapes. And if you use regions
instead of pixmaps, it's this simple:
if self.is_composited():
    region = gtk.gdk.region_rectangle(gtk.gdk.Rectangle(0, 0, 1, 1))
    self.window.input_shape_combine_region(region, 0, 0)
My transimageviewer.py
came out nice and simple, inheriting from imageviewer.py and adding only
translucency and the input shape.
If you want to define an input shape based on pixmaps instead of regions,
it's a bit harder and you need to use the Cairo drawing API. I never got as
far as working code, but I believe it should go something like this:
# Warning: untested code!
bitmap = gtk.gdk.Pixmap(None, self.width, self.height, 1)
cr = bitmap.cairo_create()
# Draw a white circle in a black rect:
cr.rectangle(0, 0, self.width, self.height)
cr.set_operator(cairo.OPERATOR_CLEAR)
cr.fill()
# draw white filled circle
cr.arc(self.width / 2, self.height / 2, self.width / 4,
       0, 2 * math.pi)
cr.set_operator(cairo.OPERATOR_OVER)
cr.fill()
self.window.input_shape_combine_mask(bitmap, 0, 0)
The translucent image viewer worked just as I'd hoped. I was able to
take a JPG of a trailmap, overlay it on top of a PyTopo window, scale
the JPG using the normal Openbox window manager handles, then
right-click on top of trail markers to set waypoints. When I was done,
a "Save as GPX" in PyTopo and I had a file ready to take with me on my
phone.
I used the Basemap package for plotting.
It used to be part of matplotlib, but it's been split off into its
own toolkit, grouped under mpl_toolkits: on Debian, it's
available as python-mpltoolkits.basemap, or you can find
Basemap on GitHub.
It's easiest to start with the
fillstates.py
example that shows
how to draw a US map with different states colored differently.
You'll need the three shapefiles (because of ESRI's silly shapefile format):
st99_d00.dbf, st99_d00.shp and st99_d00.shx, available
in the same examples directory.
Of course, to plot counties, you need county shapefiles as well.
The US Census has
county
shapefiles at several different resolutions (I used the 500k version).
Then you can plot state and counties outlines like this:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon=lllon, urcrnrlon=urlon,
                lon_0=centerlon,
                llcrnrlat=lllat, urcrnrlat=urlat,
                lat_0=centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k', 'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()
Accessing the state and county data after reading shapefiles
Great. Now that we've plotted all the states and counties, how do we
get a list of them, so that when I read out "Santa Clara, CA" from
the data I'm trying to plot, I know which map object to color?
After calling readshapefile('st99_d00', 'states'), m has two new
members, both lists: m.states and m.states_info.
m.states_info[] is a list of dicts mirroring what was in the shapefile.
For the Census state list, the useful keys are NAME, AREA, and PERIMETER.
There's also STATE, which is an integer (not restricted to 1 through 50)
but I'll get to that.
If you want the shape for, say, California,
iterate through m.states_info[] looking for the one where
m.states_info[i]["NAME"] == "California".
Note the index i: the shape coordinates will be in m.states[i]
(in basemap map coordinates, not latitude/longitude).
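For example, a little sketch of that lookup (the helper name is mine,
not part of basemap):
def state_shape(m, name):
    # Return the first m.states shape whose record matches name.
    # (A state can have several shapes, e.g. for islands.)
    for i, sinfo in enumerate(m.states_info):
        if sinfo["NAME"] == name:
            return m.states[i]
    return None

calif = state_shape(m, "California")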
Correlating states and counties in Census shapefiles
County data is similar, with county names in
m.counties_info[i]["NAME"].
Remember that STATE integer? Each county has a STATEFP,
m.counties_info[i]["STATEFP"] that matches some state's
m.states_info[i]["STATE"].
But doing that search every time would be slow. So right after calling
readshapefile for the states, I make a table of states. Empirically,
STATE in the state list goes up to 72. Why 72? Shrug.
MAXSTATEFP = 73
states = [None] * MAXSTATEFP
for state in m.states_info:
    statefp = int(state["STATE"])
    # Many states have multiple entries in m.states (because of islands).
    # Only add it once.
    if not states[statefp]:
        states[statefp] = state["NAME"]
That'll make it easy to look up a county's state name quickly when
we're looping through all the counties.
Calculating colors for each county
Time to figure out the colors from the Deleetdk election results CSV file.
Reading lines from the CSV file into a dictionary is superficially easy enough:
import csv

fp = open("tidy_data.csv")
reader = csv.DictReader(fp)

# Make a dictionary of all "county, state" and their colors.
county_colors = {}
for county in reader:
    # What color is this county?
    pop = float(county["votes"])
    blue = float(county["results.clintonh"])/pop
    red = float(county["results.trumpd"])/pop
    county_colors["%s, %s" % (county["name"], county["State"])] \
        = (red, 0, blue)
But in practice, that wasn't good enough, because the county names
in the Deleetdk data didn't always match the official Census county names.
Fuzzy matches
For instance, the CSV file had no results for Alaska or Puerto Rico,
so I had to skip those. Non-ASCII characters were a problem:
"Doña Ana" county in the census data was "Dona Ana" in the CSV.
I had to strip off " County", " Borough" and similar terms:
"St Louis" in the census data was "St. Louis County" in the CSV.
Some names were capitalized differently, like PLYMOUTH vs. Plymouth,
or Lac Qui Parle vs. Lac qui Parle.
And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".
To get around that I used SequenceMatcher to look for fuzzy matches
when I couldn't find an exact match:
from difflib import SequenceMatcher

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None
    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None
Correlate the county names from the two datasets
It's finally time to loop through the counties in the map to color and
plot them.
Remember STATE vs. STATEFP? It turns out there are a few counties in
the census county shapefile with a STATEFP that doesn't match any
STATE in the state shapefile. Mostly they're in the Virgin Islands
and I don't have election data for them anyway, so I skipped them for now.
I also skipped Puerto Rico and Alaska (no results in the election data)
and counties that had no corresponding state: I'll omit that code here,
but you can see it in the final script, linked at the end.
for i, county in enumerate(m.counties_info):
    countyname = county["NAME"]
    try:
        statename = states[int(county["STATEFP"])]
    except IndexError:
        print countyname, "has out-of-index statefp of", county["STATEFP"]
        continue

    countystate = "%s, %s" % (countyname, statename)
    try:
        ccolor = county_colors[countystate]
    except KeyError:
        # No exact match; try for a fuzzy match
        fuzzyname = fuzzy_find(countystate, county_colors.keys())
        if fuzzyname:
            ccolor = county_colors[fuzzyname]
            county_colors[countystate] = ccolor
        else:
            print "No match for", countystate
            continue

    countyseg = m.counties[i]
    poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
    ax.add_patch(poly)
Moving Hawaii
Finally, although the CSV didn't have results for Alaska, it did have
Hawaii. To display it, you can move it when creating the patches:
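The idea is a simple translation of each point in Hawaii's county
segments; the offset numbers here are placeholders, not the ones I
actually settled on:
if statename == 'Hawaii':
    # Shift each (x, y) point so Hawaii draws closer to the mainland.
    # These offsets are examples; tune them for your bounding box.
    countyseg = [(x + 5000000, y - 1400000) for x, y in countyseg]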
The offsets are in map coordinates and are empirical; I fiddled with
them until Hawaii showed up at a reasonable place.
Well, that was a much longer article than I intended. Turns out
it takes a fair amount of code to correlate several datasets and
turn them into a map. But a lot of the work will be applicable
to other datasets.
ArcGIS shapefiles are crazy. Typically they come as an archive that
includes many different files, with the same base name but different
extensions:
filename.sbn, filename.shx, filename.cpg, filename.sbx,
filename.dbf, filename.shp, filename.prj, and so forth.
Which of these are important and which aren't?
To be honest, I don't know. I found this description in my searches:
"A shape file map consists of the geometry (.shp), the spatial index
(.shx), the attribute table (.dbf) and the projection metadata file
(.prj)." Poking around, I found that most of the interesting metadata
(trail name, description, type, access restrictions and so on)
was in the .dbf file.
You can convert the whole mess into other formats using the
ogr2ogr program. On Debian it's part of the gdal-bin
package. Pass it the .shp filename, and it will look in the same
directory for files with the same basename and other shapefile-related
extensions. For instance, to convert to KML:
ogr2ogr -f KML output.kml input.shp
Unfortunately, most of the metadata -- comments on trail conditions
and access restrictions that were in the .dbf file -- didn't make it
into the KML.
GPX was even worse.
ogr2ogr knows how to convert directly to GPX,
but that printed a lot of errors like
"Field of name 'foo' is not supported in GPX schema. Use
GPX_USE_EXTENSIONS creation option to allow use of the <extensions>
element." So I tried
ogr2ogr -f "GPX" -dsco GPX_USE_EXTENSIONS=YES output.gpx input.shp
but that just led to more errors.
It did produce a GPX file, but it had almost no useful data in it,
far less than the KML did. I got a better GPX file by using ogr2ogr
to convert to KML, then using gpsbabel to convert that KML to GPX.
That preserved most, maybe all, of the metadata in the .dbf file and gave
me a nicely formatted file. ogr2ogr can also convert to GeoJSON, which
likewise preserved the metadata. The only problem was that I didn't have
any programs that could read GeoJSON ...
But JSON is a nice straightforward format, easy to
read and easy to parse, and it took surprisingly little work to add
GeoJSON parsing to
PyTopo.
Now, at least, I have a way to view the maps converted from shapefiles,
click on a trail and see the metadata from the original shapefile.
Update: Years later, I added simple track editing to my own map program,
PyTopo.
It can split an existing track, or create a new track, and
save the result as GPX.
For instance, I have some scans of old maps from the 60s and 70s
showing the trails in the local neighborhood. There's no newer
version. (In many cases, the trails have disappeared from lack of use
-- no one knows where they're supposed to be even though they're
legally trails where you're allowed to walk.) I wanted a way to turn
trails from the old map into GPX tracks.
My first thought was to trace the old PDF map. A lot of web searching
found a grand total of one page that talks about that:
How to convert image of map into vector format?.
It involves using GIMP to make an image containing just black lines
on a white background, saving as uncompressed TIFF, then using a series
of commands in GRASS. I made a start on that, but it was looking
like it might be a big job that way.
Since a lot of the old trails are still visible as faint traces in
satellite photos, I decided to investigate tracing satellite photos
in a map editor first, before trying the GRASS method.
But finding a working open source map editor turns out to be
basically impossible. (Opportunity alert: it actually wouldn't
be that hard to add that to PyTopo. Some day I'll try that, but
now I was trying to solve a problem and hoping not to get sidetracked.)
The only open source map editor I've found is called Viking, and it's
terrible. The user interface is complicated and poorly documented, and
I could input only two or three trail segments before it crashed
and I had to restart. Saving often, I did build up part of the trail
network that way, but it was so slow and tedious restoring between
crashes that I gave up.
OpenStreetMap has several editors available, and some of them are
quite good, but they're (quite understandably) oriented toward defining
roads that you're going to upload to the OpenStreetMap world map.
I do that for real trails that I've walked myself, but it doesn't seem
appropriate for historical paths between houses, some of which are now
fenced off and few of which I've actually tried walking yet.
Editing a track in Google Earth
In the end, the only reasonable map editor I found was Google Earth --
free as in beer, not speech.
It's actually quite a good track editor once I figured out how to use
it -- the documentation is sketchy and no one who writes about it tells
you the important parts, which were, for me:
Click on "My Places" in the sidebar before starting, assuming you'll
want to keep these tracks around.
Right-click on My Places and choose Add->Folder if
you're going to be creating more than one path. That way you can have
a single KML file (Google Earth creates KML/KMZ, not GPX) with all
your tracks together.
Move and zoom the map to where you can see the starting point for your path.
Click the "Add Path" button in the toolbar. This brings up a dialog
where you can name the path and choose a color that will stand out
against the map. Do not hit Return after typing the name --
that will immediately dismiss the dialog and take you out of path
editing mode, leaving you with an empty named object in your sidebar.
If you forget, like I kept doing, you'll have to right-click it and
choose Properties to get back into editing mode.
Iconify, shade or do whatever your window manager allows to get that
large, intrusive dialog out of the way of the map you're trying to edit.
Shade worked well for me in Openbox.
Click on the starting point for your path. If you forgot to move the
map so that this point is visible, you're out of luck: there's no way
I've found to move the map at this point. (You might expect something
like dragging with the middle mouse button, but you'd be wrong.)
Do not in any circumstances be tempted to drag with the left button
to move the map: this will draw lots of path points.
If you added points you don't want -- for instance, if you dragged on
the map trying to move it -- Ctrl-Z doesn't undo, and there's no Undo
in the menus, but Delete removes previous points. Whew.
Once you've started adding points, you can move the map using the arrow
keys on your keyboard. And you can always zoom with the mousewheel.
When you finish one path, click OK in its properties dialog to end it.
Save periodically: click on the folder you created in My Places and
choose Save Place As... Google Earth is a lot less crashy than
Viking, but I have seen crashes.
When you're done for the day, be sure to
File->Save->Save My Places.
Google Earth apparently doesn't do this automatically; I was forever
being confused about why it didn't remember things I had done, and why every
time I started it, it would give me syntax errors on My Places saying
it was about to correct the problem, then the next time I'd get the
exact same error. Save My Places finally fixed that, so I guess
it's something we're expected to do now and then in Google Earth.
Once I'd learned those tricks, the map-making went fairly quickly.
I had intended only to trace a few trails then stop for the night,
but when I realized I was more than halfway through I decided to push
through, and ended up with a nice set of KML tracks which I converted
to GPX and loaded onto my phone. Now I'm ready to explore.
I use map tracks quite a bit. On my Android phone, I use
OsmAnd, an excellent open-source
mapping tool that can download map data generated from
free OpenStreetMap,
then display the maps offline, so I can use them in places where
there's no cellphone signal (like nearly any hiking trail).
At my computer, I never found a decent open-source mapping program,
so I wrote my own, PyTopo,
which downloads tiles from OpenStreetMap.
In OsmAnd, I record tracks from all my hikes, upload the GPX files,
and view them in PyTopo. But it's nice to go the other way, too,
and take tracks or waypoints from other people or from the web and
view them in my own mapping programs, or use them for navigation when hiking.
Translating between KML, KMZ and GPX
Both OsmAnd and PyTopo can show track files in the GPX format.
PyTopo can also show KML and KMZ files, Google's more complicated
mapping format, but OsmAnd can't. A lot of track files are distributed
in Google formats, and I find I have to translate them fairly often --
for instance, lists of trails or lists of waypoints on a new hike I plan
to do may be distributed as KML or KMZ.
The command-line gpsbabel program does a fine job
translating KML to GPX.
But I find its syntax hard to remember, so I wrote a shell alias:
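It's roughly this (a sketch from memory -- the gpsbabel arguments are
the real ones, the function body may differ from my actual alias):
kml2gpx() {
    gpsbabel -i kml -f "$1" -o gpx -F "${1%.*}.gpx"
}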
so I can just type kml2gpx file.kml and it will create
a file.gpx for me.
More often, people distribute KMZ files, because they're smaller.
They're just zipped KML files, so you can use "unzip" to
unpack them. In Python you can use the zipfile module.
(Updated to reflect that it's zip, not gzip.)
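For instance, a minimal sketch (a KMZ is an ordinary zip archive,
with the KML usually its first member):
import zipfile

kmz = zipfile.ZipFile("file.kmz")
kml_data = kmz.read(kmz.namelist()[0])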
Of course, if you ever have a need to go from GPX to KML, you can
reverse the gpsbabel arguments appropriately; and if you need KMZ,
run zip afterward.
UTM coordinates
A couple of people I know use a different format, called
UTM,
which stands for Universal Transverse Mercator, for waypoints,
and there are some secret lists of interesting local features passed
around in that format.
It's a strange system. Instead of using latitude and longitude like
most world mapping coordinate systems, UTM breaks the world into 60
longitudinal zones. UTM coordinates don't usually specify their zone
(at least, none of the ones I've been given ever have), so if someone
gives you a UTM coordinate, you need to know what zone you're in
before you can translate it to a latitude and longitude. Then a pair
of UTM coordinates specifies easting and northing, which
tell you where you are inside the zone. Wikipedia has a
map
of UTM zones.
Note that UTM isn't a file format: it's just a way of specifying two
(really three, if you count the zone) coordinates. So if you're given
a list of UTM coordinate pairs, gpsbabel doesn't have a ready-made way
to translate them into a GPX file. Fortunately, it allows a
"universal CSV"
(comma separated values) format, where the first line specifies
which field goes where. So you can define a UTM UniCSV format that looks
like this:
name,utm_z,utm_e,utm_n,comment
Trailhead,13,0395145,3966291,Trailhead on Buckman Rd
Sierra Club TH,13,0396210,3966597,Alternate trailhead in the arroyo
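Run that through gpsbabel's unicsv reader to get GPX; assuming the
list above is saved as points.csv, the command is something like:
gpsbabel -i unicsv -f points.csv -o gpx -F points.gpx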
All the UTM coordinates I've had to deal with are in zone 13,
so that's what I used for that example and hardwired into my
alias; but if you're near a zone boundary, you'll need to figure out
which zone to use for each coordinate.
I also know someone who tends to send me single UTM coordinate pairs,
because that's what she has her Garmin configured to show her.
For instance, "We'll be using the trailhead at 0395145 3966291".
This happened often enough, and I got tired of looking up the UTM UniCSV
format every time, that I made another shell function just for that.
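It was roughly like this (a sketch from memory, with zone 13
hardwired like the alias above):
utm2gpx() {
    csv=$(mktemp /tmp/utm-XXXXXX.csv)
    echo "name,utm_z,utm_e,utm_n" > "$csv"
    echo "Point,13,$1,$2" >> "$csv"
    gpsbabel -i unicsv -f "$csv" -o gpx -F point.gpx
    echo "Created point.gpx"
}
Then utm2gpx 0395145 3966291 creates a point.gpx I can copy to the phone.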
For a year or so, I've been appending "output=classic" to any Google
Maps URL. But Google disabled Classic mode last month.
(There have been
a
few other ways to get classic Google maps back, but Google is gradually
disabling them one by one.)
I have basically three problems with the new maps:
If you search for something, the screen is taken up by a huge
box showing you what you searched for; if you click the "x" to dismiss
the huge box so you can see the map underneath, the box disappears but
so does the pin showing your search target.
A big swath at the bottom of the screen is taken up by a filmstrip
of photos from the location, and it's an extra click to dismiss that.
Moving or zooming the map is very, very slow: it relies on OpenGL
support in the browser, which doesn't work well on Linux in general,
or on a lot of graphics cards on any platform.
Now that I don't have the "classic" option any more, I've had to find
ways around the problems -- either that, or switch to Bing maps.
Here's how to make the maps usable in Firefox.
First, for the slowness: the cure is to disable webgl in Firefox.
Go to about:config and search for webgl.
Then doubleclick on the line for webgl.disabled to make it
true.
For the other two, you can add userContent lines to tell
Firefox to hide those boxes.
Locate your
Firefox profile.
Inside it, edit chrome/userContent.css (create that file
if it doesn't already exist), and add two lines like the following
(the ids here are examples; use the DOM inspector to find the ones
Google currently uses):
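#viewcard { display: none !important; }
#filmstrip { display: none !important; }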
Voilà! The boxes that used to hide the map are now invisible.
Of course, that also means you can't use anything inside them; but I
never found them useful for anything anyway.
It was pretty cool at first, but pasting every address into the
latitude/longitude web page and then pasting the resulting coordinates
into the address file, got old, fast.
That's exactly the sort of repetitive task that computers are supposed
to handle for us.
The lat/lon page used Javascript and the Google Maps API,
and I already had a Google Maps API key (they have all sorts of fun
APIs for map geeks) ... but I really wanted something
that could run locally, reading and converting a local file.
And then I discovered the
Python googlemaps
package. Exactly what I needed! It's in the Python Package Index,
so I installed it with pip install googlemaps.
That enabled me to change my
waymaker
Python script: if the first line of a
description wasn't a latitude and longitude, it looked instead for
something that might be an address.
Addresses in my data files might be one line or might be two,
but since they're all US addresses, I know they'll end with a
two-capital-letter state abbreviation and a 5-digit zip code:
2948 W Main St. Anytown, NM 12345.
You can find that with a regular expression:
match = re.search('.*[A-Z]{2}\s+\d{5}$', line)
But first I needed to check whether the first line of the entry was already
latitude/longitude coordinates, since I'd already converted some of
my files. That uses another regular expression. Python doesn't seem
to have a built-in way to search for generic numeric expressions
(containing digits, decimal points or +/- symbols) so I made one,
since I had to use it twice if I was searching for two numbers with
whitespace between them.
numeric = '[\+\-\d\.]'
match = re.search('^(%s+)\s+(%s+)$' % (numeric, numeric), line)
(For anyone who wants to quibble, I know the regular expression
isn't perfect.
For instance, it would match expressions like 23+48..6.1-64.5.
Not likely to be a problem in these files, so I didn't tune it further.)
If the script doesn't find coordinates
but does find something that looks like an address, it feeds the
address into Google Maps and gets the resulting coordinates.
That code looks like this:
import googlemaps
from googlemaps import GoogleMaps

gmaps = GoogleMaps('YOUR GOOGLE MAPS API KEY HERE')
try:
    lat, lon = gmaps.address_to_latlng(addr)
except googlemaps.GoogleMapsError, e:
    print "Oh, no! Couldn't geocode", addr
    print e
Overall, a nice simple solution made possible with python-googlemaps.
The full script is on github:
waymaker.
Dave and I have been doing some exploratory househunting trips,
and one of the challenges is how to maintain a list of houses and
navigate from location to location. It's basically like geocaching,
navigating from one known location to the next.
Sure, there are smartphone apps to do things like "show houses for
sale near here" against a Google Maps background. But we didn't want
everything, just the few gems we'd picked out ahead of time.
And some of the places we're looking are fairly remote -- you can't
always count on a consistent signal everywhere as you drive around,
let alone a connection fast enough to download map tiles.
Fortunately, I use a wonderful open-source Android program called
OsmAnd.
It's the best, bar none, at offline mapping: download data files
prepared from OpenStreetMap
vector data, and you're good to go, even into remote areas with no
network connectivity. It's saved our butts more than once exploring
remote dirt tracks in the Mojave. And since the maps come from
OpenStreetMap, if you find anything wrong with the map, you can fix it.
So the map part is taken care of. What about that list of houses?
Making waypoint files
One of OsmAnd's many cool features is that it can
show track logs. I can upload a GPX file from my Garmin, or record a
track within OsmAnd, and display the track on OsmAnd's map.
GPX track files can include waypoints. What if I made a GPX file
consisting only of waypoints and descriptions for each house?
My husband was already making text files of potentially interesting houses:
404 E David Dr
Flagstaff, AZ 86001
$355,000
3 Bed 2 Bath
1,673 Sq Ft
0.23 acres
http://blahblah/long_url
2948 W Wilson Dr
Flagstaff, AZ 86001
$285,000
3 Bed 2 Bath
1,908 Sq Ft
8,000 Sq Ft Lot
http://blahblah/long_url
... (and so on)
So I just needed to turn those into GPX.
GPX is a fairly straightforward XML format -- I've parsed GPX files
for pytopo
and for ellie,
and generating them from Python should be easier than parsing.
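To give the flavor -- this is a sketch, not waymaker's actual code --
a waypoint-only GPX file is just a header plus one <wpt> element per point:
def write_gpx(filename, waypoints):
    '''waypoints is a list of (lat, lon, description) tuples.
       (Real code should XML-escape the descriptions.)'''
    with open(filename, "w") as fp:
        fp.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        fp.write('<gpx version="1.1" creator="waymaker">\n')
        for lat, lon, desc in waypoints:
            fp.write('  <wpt lat="%f" lon="%f">\n' % (lat, lon))
            fp.write('    <name>%s</name>\n' % desc)
            fp.write('  </wpt>\n')
        fp.write('</gpx>\n')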
But first I needed latitude and longitude coordinates.
A quick web search solved that: an excellent page called
Find
latitude and longitude with Google Maps.
You paste the address in and it shows you the location on a map
along with latitude and longitude. Thanks to Bernard Vatant at Mondeca!
For each house, I copied the coordinates directly from the page
and pasted them into the file. (Though that got old after about the fifth
house; I'll write about automating that step in a separate article.)
Then I wrote a script called
waymaker
that parses a file of coordinates and descriptions and makes waypoint files.
Run it like this: waymaker infile.txt outfile.gpx
and it will create (or overwrite) a gpx file consisting of those waypoints.
Getting it into OsmAnd
I plugged my Android device into my computer's USB port, mounted it as
usb-storage and copied all the GPX files into osmand/tracks.
(I had to create the tracks subdirectory myself, since I hadn't
recorded any tracks.) After restarting OsmAnd, it was able to see all
the waypoint files.
OsmAnd has a couple of other ways of showing points besides track files.
"Favorites" lets you mark a point on the map and save it to various
Favorites categories. But although there's a file named favorites.gpx,
changes you make to it never show up in the program. Apparently they're
cached somewhere else. "POI" (short for Points of Interest) can be
uploaded, but only as a .obf OsmAnd file or a .sqlitedb database, and
there isn't much documentation on how to create either one.
GPX tracks seemed like the easiest solution, and I've been happy
with them so far.
Update: I asked on the osmand mailing list; it turns out that on the
Favorites screen (Define View, then Favorites) there's a Refresh
button that makes osmand re-read favorites.gpx. Works great.
It uses pretty much the same format as track files -- I took
<wpt></wpt> sequences I'd generated with waymaker and
added them to my existing favorites.gpx file, adding appropriate
categories. It's nice to have two different ways to display and
categorize waypoints within the app.
Using waypoints in OsmAnd
How do you view these waypoints once they're loaded?
When you're in OsmAnd's map view, tap the menu button and choose
Define View, then GPX track...
You'll see a list of all your GPX files; choose the one you want.
You'll be taken back to the map view,
at a location and zoom level that shows all your waypoints. Don't
panic if you don't see them immediately; sometimes I needed
to scroll and zoom around a little before OsmAnd noticed there were
waypoints and started drawing them.
Then you can navigate in the usual way. When you get to a waypoint,
tap on it to see the description briefly -- I was happy to find that
multiple-line descriptions work just fine. Or long-press on it to pop up a
persistent description window that will stay up until you dismiss it.
It worked beautifully for our trip, both for houses and for
other things like motels and points of interest along the way.
A new trail opened up above Alum Rock park! Actually a whole new open
space preserve, called Sierra Vista -- with an extensive set of trails
that go all sorts of interesting places.
Dave and I visit Alum Rock frequently -- we were married there --
so having so much new trail mileage is exciting. We tried to explore it
on foot, but quickly realized the mileage was more suited to mountain
bikes. Even with bikes, we'll be exploring this area for a while
(mostly due to not having biked in far too long, so it'll take us
a while to work up to that much riding ... a combination of health
problems and family issues have conspired to keep us off the bikes).
Of course, part of the fun of discovering a new trail system is poring
over maps trying to figure out where the trails will take us, then
taking GPS track logs to study later to see where we actually went.
And as usual when uploading GPS track logs and viewing them in pytopo,
I found some things that weren't working quite the way I wanted,
so the session ended up being less about studying maps and more
about hacking Python.
In the end, I fixed quite a few little bugs, improved some features,
and got saved sites with saved zoom levels working far better.
Now, PyTopo 1.0 happened quite a while ago -- but there were two of
us hacking madly on it at the time, and pinning down the exact time
when it should be called 1.0 wasn't easy. In fact, we never actually
did it. I know that sounds silly -- of all releases to not get around
to, finally reaching 1.0? Nevertheless, that's what happened.
I thought about cheating and calling this one 1.0, but we've had 1.0
beta RPMs floating around for so long (and for a much earlier release)
that that didn't seem right.
So I've called the new release PyTopo 1.1. It seems to be working
pretty solidly. It's certainly been very helpful to me in exploring
the new trails. It's great for cross-checking with Google Earth:
the OpenCycleMap database has much better trail data than Google
does, and pytopo has easy track log loading and will work offline,
while Google has the 3-D projection aerial imagery that shows
where trails and roads were historically (which may or may not
correspond to where they decide to put the new trails).
It's great to have both.
I spent Friday and Saturday at the
WhereCamp unconference on mapping,
geolocation and related topics.
This was my second year at WhereCamp. It's always a bit humbling. I
feel like I'm pretty geeky, and I've written a couple of Python
mapping apps and I know spherical geometry and stuff ... but when
I get in a room with the folks at WhereCamp I realize I don't know
anything at all. And it's all so interesting I want to learn all of it!
It's a terrific and energetic unconference.
I won't try to write up a full report, but here are some highlights.
Several Grassroots Mapping
people were there again this year.
Jeffrey Warren led people in constructing balloons from tape and mylar
space blankets in the morning, and they shot some aerial photos.
Then in a late-afternoon session he discussed how to stitch the
aerial photos together using
Cargen Knitter.
But he also had other projects to discuss:
the Passenger
Pigeon project to give cameras to people who will
be flying over environmental sites that need to be monitored -- like New York's
Gowanus Canal superfund site, next to La Guardia airport.
And the new Public Laboratory for Open
Technology and Science has a new project making vegetation maps
by taking aerial photos with two cameras simultaneously, one normal,
one modified for infra-red photography.
How do you make an IR camera? First you have to remove the IR-blocking
filter that all digital cameras come with (CCD sensors are very
sensitive to IR light). Then you need to add a filter that blocks
out most of the visible light. How? Well, it turns out that exposed
photographic film (remember film?) makes a good IR-only filter.
So you go to a camera store, buy a roll of film, rip it out of the
reel while ignoring the screams of the people in the store, then hand
it back to them and ask to have it developed. Cheap and easy.
Even cooler, you can use a similar technique to
make a
spectrometer from a camera, a cardboard box and a broken CD.
Jeffrey showed spectra for several common objects, including bacon
(actually pancetta, it turns out).
JW:
See the dip in the UV? Pork fat is very absorbent
in the UV. That's why some people use pork products as sunscreen.
Audience member:
Who are these people?
JW:
Well, I read about them on the internet.
I ask you, how can you beat a talk like that?
Two Google representatives gave an interesting demo of some of
the new Google APIs related to maps and data visualization, in
particular Fusion
Tables. Motion charts sounded especially interesting
but they didn't have a demo handy; there may be one appearing soon
in the Fusion Charts gallery.
They also showed the new enterprise-oriented
Google Earth Builder, and custom street views for Google Maps.
There were a lot of informal discussion sessions, people brainstorming
and sharing ideas. Some of the most interesting ones I went to included
Indoor orientation with Android sensors: how do you map an unfamiliar
room using just the sensors in a typical phone? How do you orient
yourself in a room for which you do have a map?
A discussion of open data and the impending shutdown of data.gov.
How do we ensure open data stays open?
Using the Twitter API to analyze linguistic changes -- who initiates
new terms and how do they spread? -- or to search for location-based
needs (any good ice cream places near where I am?)
Techniques of data visualization -- how can we move beyond basic
heat maps and use interactivity to show more information? What works
and what doesn't?
An ongoing hack session in the scheduling room included folks
working on projects like a system to get information from pilots
to firefighters in real time. It was also a great place to get help
on any map-related programming issues one might have.
Random amusing factoid that I still need to look up
(aside from the pork and UV one): Automobile tires have RFID in them?
Lightning talks included demonstrations and discussions of
global Twitter activity as the Japanese quake and tsunami
news unfolded, the new CD from OSGeo,
the upcoming PII conference --
that's privacy identity innovation -- in Santa Clara.
There were quite a few outdoor game sessions Friday. I didn't take part
myself since they all relied on having an iPhone or Android phone: my
Archos 5 isn't
reliable enough at picking up distant wi-fi signals to work as an
always-connected device, and the Stanford wi-fi net was very flaky
even with my laptop, with lots of dropped connections.
Even the OpenStreetMap mapping party
was set up to require smartphones, in contrast with past mapping
parties that used Garmin GPS units. Maybe this is ultimately a good thing:
every mapping party I've been to fizzled out after everyone got back
and tried to upload their data and discovered that nobody had
GPSBabel installed, nor the drivers for reading data off a Garmin.
I suspect most mapping party data ended up getting tossed out.
If everybody's uploading their data in realtime with smartphones,
you avoid all that and get a lot more data. But it does limit your
contributors a bit.
There were a couple of lowlights. Parking was very tight, and somewhat
expensive on Friday, and there wasn't any info on the site except
a cheerfully misleading "There's plenty of parking!" And the lunch
schedule on Saturday was a bit of a mess -- no one was sure when the
lunch break was (it wasn't on the schedule), so the afternoon schedule had
to be re-done a couple of times while everybody worked it out. Still,
those are pretty trivial complaints -- sheesh, it's a free, volunteer
conference! and they even provided free meals, and t-shirts too!
Really, WhereCamp is an astoundingly fun gathering. I always leave
full of inspiration and ideas, and appreciation for the amazing
people and projects presented there. A big thanks to the organizers
and sponsors. I can't wait 'til next year -- and I hope I'll have
something worth presenting then!
My last entry mentioned some work I'd done to one of my mapping programs,
Ellie, to gather statistics from the track logs I get from my Garmin GPS.
In the course of working on Ellie, I discovered something
phenomenally silly about the GPX files from my Garmin Vista CX,
as uploaded with gpsbabel.
Track log points, quite reasonably, have time stamps in "Zulu time"
(essentially the same as GMT, give or take some fraction of a second).
They look like this:
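<trkpt lat="35.828237" lon="-106.237899">
  <ele>2263.13</ele>
  <time>2011-05-08T16:59:23Z</time>
</trkpt>
(a reconstructed example -- the numbers are invented -- but note the
trailing Z, for Zulu)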
But the waypoints you set for specific points of interest, even if
they're in the same GPX file, have timestamps that have no time zone at all.
They look like this:
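<wpt lat="35.828237" lon="-106.237899">
  <ele>2263.13</ele>
  <name>TRAILHEAD</name>
  <cmt>08-MAY-11 10:59:23</cmt>
  <desc>08-MAY-11 10:59:23</desc>
</wpt>
(again a reconstructed example, with invented numbers)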
Notice the waypoint's time isn't actually in a time field -- it's
duplicated in two fields, cmt (comment) and desc (description).
So it's not really intended to be a time stamp -- but it sure would be
handy if you could use it as one.
You might be able to correlate waypoints with track points by
comparing coordinates ... unless you spent more than an hour hanging
around a particular location, or came back several hours later (perhaps
starting and ending your hike at the same place). In that case ...
you'd better know what the local time zone was, including daylight
savings time.
What a silly omission, considering that the GPS obviously already knows
the Zulu time and could just as easily use that!
On our recent Mojave trip, as usual I spent some of the evenings
reviewing maps and track logs from some of the neat places we explored.
There isn't really any existing open source program for offline
mapping, something that works even when you don't have a network.
So long ago, I wrote Pytopo,
a little program that can take map tiles from a Windows program called
Topo! (or tiles you generate yourself somehow) and let you navigate
around in that map.
But in the last few years, a wonderful new source of map tiles has
become available: OpenStreetMap.
On my last desert trip, I whipped up some code to show OSM tiles, but
a lot of the code was hacky and empirical because I couldn't find any
documentation for details like the tile naming scheme.
Well, that's changed. Upon returning to civilization I discovered
there's now a wonderful page explaining the
Slippy
map tilenames very clearly, with sample code and everything.
And that was the missing piece -- from there, all the things I'd
been missing in pytopo came together, and now it's a useful
self-contained mapping script that can download its own tiles, and
cache them so that when you lose net access, your maps don't disappear
along with everything else.
Pytopo can show GPS track logs and waypoints, so you can see where you
went as well as where you might want to go, and whether that road off
to the right actually would have connected with where you thought you
were heading.
Most of the pytopo work came after returning from the desert, when I
was able to google and find that OSM tile naming page. But while still
out there and with no access to the web, I wanted to review the track
logs from some of our hikes and see how much climbing we'd done.
I have a simple package for plotting elevation from track logs,
called Ellie.
But when I ran it, I discovered that I'd never gotten around to
installing the pylab Python plotting package (say that three times
fast!) on this laptop.
No hope of installing the package without a net ... so instead, I
tweaked Ellie so that without pylab you can still print out
statistics like total climb. While I was at it I added total distance,
time spent moving and time spent stopped. Not a big deal, but it gave
me the numbers I wanted. It's available as ellie 0.3.
Last week I had the opportunity to go to the
Where 2.0 conference
(thanks, Linux Pro Magazine!)
Then, on the weekend, the free WhereCamp followed it up.
I'd been to WhereCamp last year. It was wonderful, geeky, highly
technical and greatly inspiring. I thought I was the only person
interested in mapping, especially in Python, and after the first
couple of sessions I was blown away with how little I knew and what
a thriving and expert community there was. I was looking forward to
the full experience this year -- I figured Where 2.0 must be
similar but even better.
Actually they're completely different events. Where 2.0 was dominated
by location-aware startups: people with iPhone games (Foursquare and
others in a similar mold), shopping apps (find the closest pizza
place to your location!) and so on. The talks were mostly 15 minutes
long, so while there were lots of people there with fascinating apps
or great stories to tell, there was no time to get detail on anything.
I think the real point of Where 2.0 is to get a sketch of who's doing
what so you can go collar them in the "hallway track" later and make
business deals.
Here are some highlights
from Where 2.0. I'll write up WhereCamp separately.
Ignite Where
The Ignite session Tuesday night was great fun, as Ignite sessions
almost always are.
The Ignite session was broken in the middle by a half-hour interlude
where a bunch of startups gave one-minute presentations on their
products, then the audience voted on the best, then an award was given
which had already been decided and had nothing to do with the audience
vote (we didn't even get to find out which company the audience chose).
Big yawner: one minute isn't long enough for anyone to show off a
product meaningfully, and I wasn't the only one there who brought
reading material to keep them occupied until the second round of Ignite
talks started up again.
My favorites among the talks:
Crowdsourcing the Impossible: Ushahidi-Haiti, Patrick Meier
Have Chickens, Need Lasers!, Martin Isenburg -- which didn't
actually involve lasers as far as I could tell, but it was certainly lively.
Wednesday talks
Patrick Meier gave a longer version of his Ignite talk on
Mobilizing Ushahidi-Haiti, full of interesting stories of how
OpenStreetMap and other technologies like Twitter came together
to help in the Haiti rescue effort.
Clouds, Crowds, and Shrouds: How One Government Agency Seeks to
Change the Way It Spatially Enables Its Information, by Terrance Busch of
the US Defense Intelligence Agency, was an interesting look into the
challenges of setting up a serious mapping effort, then integrating
later with commercial and crowdsourced efforts.
In Complexities in Bringing Home Environmental Awareness, Kim Balassiano
of the US EPA showed the EPA
MyEnvironment page, where you can find information about local
environmental issues like toxic waste cleanups. They want users to
enter good news too, like composting workshops or community gardens,
but so far the data on the map is mostly bad. Still a useful site.
Thursday talks
There were a couple of interesting keynotes on Thursday morning, but
work kept me at home. I thought I could catch them on the live video
stream, but unfortunately the stream that had worked fine on Wednesday
wasn't working on Thursday, so I missed Mark L. DeMulder's talk
on the USGS's National Map efforts. Fortunately, they were at WhereCamp
where they gave much more detail. Likewise, I missed the big ESRI
announcement that everyone was talking about all afternoon -- they
released some web thing, but as far as I can tell they're still
totally Windows-centric and thus irrelevant to a Linux and open source
user. But I want to go back and view
the
video anyway.
There was another talk on Thursday which I won't name, but it had
a few lessons for speakers:
Be aware of when you're speaking, so somebody doesn't have to come
find you in the audience and say "Hey, you're next. Are you coming?"
If you're not bringing your own laptop, try to get access to the
presentation machine beforehand and test out your presentation.
Especially if you're planning on showing a video that may require
downloading nonstandard software.
Base Map 2.0 was a panel-slash-debate between Steve Coast
(OpenStreetMap), Timothy Trainor (U.S. Census Bureau), Peter ter Haar
(Ordnance Survey), Di-Ann Eisnor (Platial), and moderated by Ian White of
Urban Mapping. It was fabulous. I've never seen such a lively panel:
White kept things moving, told jokes, asked provocative and sometimes
inflammatory questions and was by far the best panel moderator I've
seen. The panelists kept up with him and gave cogent, interesting
and illuminating answers. Two big issues were the just-announced
release of Ordnance Survey data, and licensing issues causing mismatches
between OSM, OS and Census datasets.
Community-based Grassroots Mapping with Balloons and Kites in
Lima, Peru
by Jeffrey Warren was another fabulous talk.
He builds balloons out of garbage bags, soda bottles and a digital camera,
goes to poor communities in places like Lima and teaches the community
(including the kids) how to map their own communities. This is more than
an academic exercise for them, since maps can help them prove title to
their land. Check it out at
GrassrootsMapping.org and
build your own aerial mapping balloon!
(He was at WhereCamp, too, where we got to see the equipment up close.)
Visualizing Spatio-temporal War Casualty Data in Google Earth by
Sean Askay of Google was just as good. He's built a KML file called
Map the Fallen showing US and
allied casualties from Iraq: the soldiers' hometowns, place of death,
age, gender, and lots of other details about them with links to tribute pages,
plus temporal information showing how casualties changed as the war
progressed. It's an amazing piece of work, and sobering ... and I was
most annoyed to find out that it needs a version of Google Earth that
doesn't run on Linux, so I can't run it for myself. Boo!
Overall, a very fun conference, though it left me hungry for detail.
Happily, after a day off there was WhereCamp to fill that void.
On my last Mojave trip, I spent a lot of the evenings hacking on
PyTopo.
I was going to try to stick to OpenStreetMap and other existing mapping
applications like TangoGPS, a neat little smartphone app for
downloading OpenStreetMap tiles that also runs on the desktop --
but really, there still isn't any mapping app that works well enough
for exploring maps when you have no net connection.
In particular, uploading my GPS track logs after a day of mapping,
I discovered that Tango really wasn't a good way of exploring them,
and I already know Merkaartor, nice as it is for entering new OSM
data, isn't very good at working offline. There I was, with PyTopo
and a boring hotel room; I couldn't stop myself from tweaking a bit.
Adding tracklogs was gratifyingly easy. But other aspects of the
code bother me, and when I started looking at what I might need to
do to display those Tango/OSM tiles ... well, I've known for a while
that some day I'd need to refactor PyTopo's code, and now was the time.
Surprisingly, I completed most of the refactoring on the trip.
But even after the refactoring, displaying those OSM tiles turned out
to be a lot harder than I'd hoped, because I couldn't find any
reliable way of mapping a tile name to the coordinates of that tile.
I haven't found any documentation on that anywhere, and Tango and
several other programs all do it differently and get slightly
different coordinates. That one problem was to occupy my spare time
for weeks after I got home, and I still don't have it solved.
But meanwhile, the rest of the refactoring was done, nice features
like track logs were working, and I've had to move on to other
projects. I am going to finish the OSM tile MapCollection class,
but why hold up a release with a lot of useful changes just for that?
So here's PyTopo 0.8,
and the couple of known problems with the new features will have to wait
for 0.9.
But what I want to know about Bing is this:
Why does it think we're in Portugal when Dave runs it under Omniweb on Mac?
In every other browser it gives the screen you've probably seen,
with side menus (and a horizontal scrollbar if your window isn't
wide enough, ugh) and some sort of pretty picture as a background.
In Omniweb, you get a cleaner layout with no sidebars or horizontal
scrollbars, a different pretty picture -- often
prettier than the one you get on all the other browsers, though
both images change daily -- and a set of togglebuttons that don't
show up in any of the other browsers, letting you restrict results
to only English or only results from Portugal.
Why does it think we're in Portugal when Dave uses Omniweb?
Equally puzzling, why do only people in Portugal have the option
of restricting the results to English only?
Someone on the OSM newbies list asked how he could strip
waypoints out of a GPX track file. Seems he has track logs of an
interesting and mostly-unmapped place that he wants to add to
openstreetmap, but there
are some waypoints that shouldn't be included, and he wanted a
good way of separating them out before uploading.
Most of the replies involved "just edit the XML." Sure, GPX files
are pretty simple and readable XML -- but a user shouldn't ever have
to do that! Gpsman and gpsbabel were also mentioned, but they're not
terribly easy to use either.
That reminded me that I had another XML-parsing task I'd been wanting
to write in Python: a way to split track files from my Garmin GPS.
Sometimes, after a day of mapping, I end up with several track
segments in the same track log file. Maybe I mapped several different
trails; maybe I didn't get a chance to upload one day's mapping before
going out the next day. Invariably some of the segments are of zero
length (I don't know why the Garmin does that, but it always does).
Applications like merkaartor don't like this one bit, so I
usually end up editing the XML file and splitting it into
segments by hand. I'm comfortable with XML -- but it's still silly.
I already have some basic XML parsing as part
of PyTopo and Ellie, so I knew the parsing would be easy to do.
So, spurred on by the posting on OSM-newbies,
I wrote a little GPX parser/splitter called
gpxmgr.
gpxmgr -l file.gpx can show you how many track logs are
in the file; gpxmgr -w file.gpx can write new files for
each non-zero track log. Add -p if you want to be prompted for
each filename (otherwise it'll use the name of the track log,
which might be something like "ACTIVE\ LOG\ #2").
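The heart of it is simple. Here's a stripped-down sketch of the same
idea using Python's standard ElementTree (gpxmgr does its own parsing;
the GPX 1.0 namespace and the output naming here are just illustrative):

    import sys
    import xml.etree.ElementTree as ET

    NS = "http://www.topografix.com/GPX/1/0"    # GPX 1.0 assumed
    ET.register_namespace("", NS)

    def split_tracks(gpxfile):
        root = ET.parse(gpxfile).getroot()
        for trk in root.findall("{%s}trk" % NS):
            # Skip zero-length tracks, like the ones the Garmin writes:
            if not trk.findall(".//{%s}trkpt" % NS):
                continue
            name_el = trk.find("{%s}name" % NS)
            name = name_el.text if name_el is not None else "unnamed"
            # Wrap this one track in a fresh gpx element and write it out:
            newroot = ET.Element("{%s}gpx" % NS, root.attrib)
            newroot.append(trk)
            outfile = name.replace(" ", "_").replace("#", "") + ".gpx"
            ET.ElementTree(newroot).write(outfile, encoding="utf-8")
            print("wrote", outfile)

    if __name__ == "__main__":
        split_tracks(sys.argv[1])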
How, you may wonder, does that help the original
poster's need to separate out waypoints from track files?
It doesn't. See, my GPS won't save tracklogs and
waypoints in the same file, even if you want them that way;
you have to use two separate gpsbabel commands to upload a track
file and a waypoint file. So I don't actually know what a
tracklog-plus-waypoint file looks like.
If anyone wants to use gpxmgr to manage waypoints as well as tracks,
send me a sample GPX file that combines them both.
I missed a lot of the miniconf talks on Tuesday because I wanted to
make some last-minute changes to my talk. But I do want to comment
on one: Simon Greener's talk on "A Review of Australian Geodata
Providers." Of course, I'm not in Australia, but it was quite
interesting to hear how similar Australia's problematic geodata
situation is to the one in the US. His presentation was
entertaining and animated, and I learned some interesting facts about
GPS and geodata in general.
And Dave and I got another good astronomy opportunity with the dark
skies at Peppermint Bay at the Speakers' Dinner. Despite occasional
intrusive clouds we managed to get a great view of the Large
Magellanic Cloud and a decent view of the small one, as well as
eta Carinae and the star clouds between Crux and Carina. Pity
I'd forgotten to bring my thumpin' travel optics that I'd been using
the previous evening: a 6x20 monocular.
I've been resisting the GPS siren song for years -- mostly because I
knew it would be a huge time sink involving months of futzing with
drivers and software trying to get it to do something useful.
But my experience at an OpenStreetMap
mapping
party got me fired up about it, and I ordered a Garmin Vista Cx.
Shopping for a handheld GPS is confusing. I was fairly convinced I
wanted a Garmin, just because it's the brand used by most people in
the open source mapping community so I knew they were likely to work.
I wanted one with a barometric altimeter, because I
wanted that data from my hikes and bike rides (and besides,
it's fun to know how much you've climbed on an outing; I used to have
a bike computer with an altimeter and it was a surprisingly good
motivator for working harder and getting in better shape).
But Garmin has a bazillion models and I never found any comparison
page explaining the differences among the various hiking eTrex models.
Eventually I worked it out:
Garmin eTrex models, decoded
C: Color display. This generally also implies USB connectivity
instead of serial, just because the color models are newer.
H: High precision (a more sensitive satellite receiver).
x: Takes micro-SD cards. That may not be important for storing
tracks and waypoints (you can store quite a long track in the
built-in memory), but it means you can load extra base maps,
like topographic data or other useful features.
Vista, Summit: These models have barometric altimeters and magnetic
compasses. I never did figure out the difference between a Vista
and a Summit, except that among the color models, Vistas take
micro-SD cards while Summits don't: there's a Summit C and HC,
while Vistas come in Cx and HCx. I don't know how the monochrome
Summit and Vista differ.
Legend, Venture: These have no altimeter or compass. A Venture is
a Legend without the bundled extras (SD card, USB cable, base maps),
so it's cheaper.
For me, the price/performance curve pointed to the Vista Cx.
Loading maps
Loading base maps was simplicity itself, and I found lots of howtos
on how to use downloadable maps. Just mount the micro-SD card on any
computer, make a directory called Garmin, and name the file
gmapsupp.img.
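For instance, assuming the card mounts at /media/GARMIN (the mount
point, and the name of the map image you downloaded, will vary):

    mkdir /media/GARMIN/Garmin
    cp california.img /media/GARMIN/Garmin/gmapsupp.img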
I used the CloudMade map
for California, and it worked great.
There are lots of howtos on generating your own maps, too,
and I'm looking forward to making some with topographic data
(which the CloudMade maps don't have). The most promising
howtos I've found so far are the
OSM
Map On Garmin page on the OSM wiki and the much more difficult,
but gorgeous,
Hiking Biking Maps wiki page.
Uploading tracks and waypoints
But the real goal was to be able to take this toy out on a hike,
then come back and upload the track and waypoint files.
I already knew, from the mapping party, that Garmins have an odd
misfeature: you can connect them in usb-storage mode, where they look
like an external disk and don't need any special software ... but then
you can't upload any waypoints. (In fact, when I tried it with my
Vista Cx I didn't even see the track file.) To upload tracks and
waypoints, you need to use something that speaks Garmin protocol:
namely, the excellent GPSBabel.
So far so good. How do you call GPSBabel?
Luckily for me, just before my GPS arrived,
Iván Sánchez Ortega posted a
useful
little gpsbabel script
to the OSM newbies list and I thought I was all set.
But once I actually had the Vista in hand, complete with track and
waypoints from a walk around the block, it turned out it wasn't quite
that simple -- because Ubuntu didn't create the /dev/ttyUSB0 that
Iván's script used. A web search found tons of people having that
problem on Ubuntu and talking about various workarounds, involving
making sure the garmin_gps driver is blacklisted in
/etc/modprobe.d/blacklist (it was already), adding a
/etc/udev/rules.d/45-garmin.rules file that changes permissions
and ownership of ... um, I guess of the file that isn't being created?
That didn't make much sense. Anyway, none of it helped.
But finally I found the fix: keep the garmin_gps driver blacklisted,
and use "usb:" as the device to pass to GPSBabel rather than
"/dev/ttyUSB0". So the commands are along these lines:
Like so many other things, it's easy once you know the secret!
Viewing tracklogs works great in Merkaartor, though I haven't yet
found an app that does anything useful with the elevation data.
I may have to write one.
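If I do, the data is easy to get at: each trkpt in the GPX file
carries an ele element, in meters. A sketch of totaling a day's
climb, assuming a GPX 1.0 file:

    import xml.etree.ElementTree as ET

    NS = "http://www.topografix.com/GPX/1/0"

    def total_climb(gpxfile):
        eles = [float(el.text)
                for el in ET.parse(gpxfile).iter("{%s}ele" % NS)]
        # Sum only the uphill deltas between successive points:
        return sum(max(b - a, 0) for a, b in zip(eles, eles[1:]))

    print(total_climb("tracks.gpx"), "meters of climbing")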
Update: After I wrote this but before I was able to post it,
a discussion on the OSM Newbies list with someone
who was having similar troubles resulted in this useful wiki page:
Garmin
on GNU/Linux. It may also be worth checking the Discussion
tab on that wiki page for further information.
Update, October 2011:
As of Debian Squeeze or Ubuntu Natty, you need two steps:
1. Add a line to /etc/modprobe.d/blacklist.conf:
blacklist garmin_gps
2. Create a udev file,
/etc/udev/rules.d/51-garmin.rules, to set the permissions so
that you can access the device without being root. It contains a
rule like this one (091e is Garmin's USB vendor ID; check yours
with lsusb, and adjust the mode or group to taste):
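    # Garmin's USB vendor ID; the product ID may differ on other models
    SUBSYSTEM=="usb", ATTRS{idVendor}=="091e", ATTRS{idProduct}=="0003", MODE="0666"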
Last month, OpenStreetMap and its benefactor company
CloudMade
held a "mapping party" in Palo Alto. I love maps and mapping (I wrote
my own little topographic
map viewer when I couldn't find one ready-made) and I've been
wanting to know more about the state of open source mapping.
A mapping party sounded perfect.
The party was a loosely organized affair. We met at a coffeehouse
and discussed basics of mapping and openstreetmap. The hosts tried
to show us newbies how OSM works, but that was complicated by the
coffeehouse's wireless net being down. No big deal -- turns out the
point of a mapping party is to hand out GPSes to anyone who doesn't
already have one and send us out to do some mapping.
I attached myself to a couple of CloudMade folks who had some
experience already and we headed north on a pedestrian path. We spent
a couple of hours walking urban trails and marking waypoints.
Then we all converged on a tea shop (whose wireless worked a little
better than the one at the coffeehouse, but still not very reliably)
for lunch and transfer of track and waypoint files.
This part didn't work all that well. It turned out the units we were
using (Garmin Legend HCx) can transfer files in two modes, USB
mass storage (the easy way, just move files as if from an external
disk) or USB Garmin protocol (the hard way: you have to use software
like gpsbabel, or the Garmin software if you're on Windows).
And in mass storage mode, you get a file but the waypoints aren't there.
The folks running the event all had Macs, and there were several Linux
users there as well, but no Windows laptops. By the time the Macs both
had gpsbabel downloaded over the tea shop's flaky net, it was past
time for me to leave, so I never did get to see our waypoint files.
Still, I could see it was possible (and one of the Linux attendees
assured me that he had no trouble with any of the software; in fact,
he found it easier than what the Mac people at the party were going
through).
But I was still pretty jazzed about how easy OpenStreetMap is
to use. You can contribute to the maps even without a GPS.
Once you've registered on the site, you just click on the Edit tab
on any map, and you see a Flash application called "Potlatch" that
lets you mark trails, roads or other features based on satellite
images or the existing map. I was able to change a couple of mismarked
roads near where I live, as well as adding a new trail and correcting
the info on an existing one for one of the nearby parks.
If you prefer (as, I admit, I do) to work offline or don't like flash,
you can use a Java app, JOSM, or a native app, merkaartor. Very cool!
Merkaartor is my favorite so far (because it's faster and works
better in standalone mode) though it's still fairly rough around
the edges. They're all described on the OSM
Map Editing
page.
Of course, all this left me lusting after a GPS. But that's another
story, to be told separately.
Belated release announcement: 0.5b2 of my little map viewer
PyTopo
has been working well, so I released 0.5 last week with only a
few minor changes from the beta.
I'm sure I'll immediately find six major bugs -- but hey, that's
what point releases are for. I only did betas this time because
of the changed configuration file format.
I also made a start on a documentation page for the .pytopo file
(though it doesn't really have much that wasn't already written
in comments inside the script).
A few months ago, someone contacted me who was trying to use my
PyTopo map display script for a different set of map data, the
Topo! National Parks series. We exchanged some email about the
format the maps used.
I'd been wanting to make PyTopo more general
anyway, and already had some hacky code in my local version to
let it use a local geologic map that I'd chopped into segments.
So, faced with an Actual User (always a good incentive!), I
took the opportunity to clean up the code, use some of Python's
support for classes, and introduce several classes of map data.
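The resulting structure is roughly this (a sketch; the class names
are illustrative rather than exactly what's in the release):

    class MapCollection:
        """A set of maplets: knows the series' naming scheme and
           how to find the maplet containing a given lon/lat."""
        def get_maplet(self, lon, lat):
            raise NotImplementedError

    class Topo1MapCollection(MapCollection):
        """Topo! regional map sets."""

    class GenericMapCollection(MapCollection):
        """Locally chopped-up images, like my geologic map."""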
I called it 0.5 beta 1 since it wasn't well tested. But in the last
few days, I had occasion to do some map exploring,
cleaned up a few remaining bugs, and implemented a feature which
I hadn't gotten around to implementing in the new framework
(saving maps to a file).
I think it's ready to use now. I'm going to do some more testing:
after visiting the USGS
Open House today and watching Jim Lienkaemper's narrated
Virtual
Tour of the Hayward Fault,
I'm all fired up about trying again to find more online geologic
map data.
But meanwhile, PyTopo is feature complete and has the known
bugs fixed. The latest version is on
the PyTopo page.
I needed to print some maps for one of my geology class field trips,
so I added a "save current map" key to PyTopo (which saves to .gif,
and then I print it with gimp-print). It calls montage
from ImageMagick.
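The call itself is nothing fancy; roughly this, where the maplet
names and grid size are illustrative:

    import subprocess

    tiles = ["maplet1.gif", "maplet2.gif", "maplet3.gif", "maplet4.gif"]
    # Stitch a 2x2 grid of maplets together edge-to-edge:
    subprocess.call(["montage"] + tiles +
                    ["-tile", "2x2", "-geometry", "+0+0", "map.gif"])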
While on vacation, I couldn't resist tweaking
pytopo
so that I could use it to explore some of the areas we were
visiting.
It seems fairly usable now. You can scroll around, zoom in and out
to change between the two different map series, and get the
coordinates of a particular location by clicking. I celebrated
by making a page for it, with a silly tux-peering-over-map icon.
One annoyance: it repaints every time it gets a focus in or out,
which means, for people like me who use mouse focus, that it
repaints twice for each time the mouse moves over the window.
This isn't visible, but it would drag the CPU down a bit on a
slow machine (which matters since mapping programs are particularly
useful on laptops and handhelds).
It turns out this is a pygtk problem: any pygtk drawing area window
gets spurious Expose events every time the focus changes (whether or
not you've asked to track focus events). The event claims the
whole window needs repainting, and it isn't distinguishable
in any way from a real Expose event.
The regular gtk libraries (called from C) don't do this, nor
do Xlib C programs; only pygtk.
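A test case takes only a few lines; run something like this under
pointer focus and watch the expose handler fire on every focus
change (PyGTK, so Python 2):

    import gtk

    def expose(widget, event):
        print "expose:", event.area    # fires on focus in/out too

    win = gtk.Window()
    da = gtk.DrawingArea()
    da.set_size_request(200, 200)
    da.connect("expose-event", expose)
    win.add(da)
    win.show_all()
    gtk.main()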
I filed
bug 172842
on pygtk; perhaps someone will come up with a workaround, though
the couple of pygtk developers I found on #pygtk couldn't think
of one (and said I shouldn't worry about it since most people
don't use pointer focus ... sigh).
I couldn't stop myself -- I wrote up a little topo map viewer in
PyGTK, so I can move around with arrow keys or by clicking near the
edges. It makes it a lot easier to navigate the map directory if
I don't know the exact starting coordinates.
I think CoordsToFilename has some bugs; the data CD also has some
holes, and some directories don't seem to exist in the expected
place. I haven't figured that out yet.