Shallow Thoughts : tags : python

Akkana's Musings on Open Source Computing and Technology, Science, and Nature.

Thu, 11 Sep 2014

Making emailed LinkedIn discussion thread links actually work

I don't use web forums, the kind you have to read online, because they don't scale. If you're only interested in one subject, then they work fine: you can keep a browser tab for your one or two web forums perennially open and hit reload every few hours to see what's new. If you're interested in twelve subjects, each of which has several different web forums devoted to it -- how could you possibly keep up with that? So I don't bother with forums unless they offer an email gateway, so they'll notify me by email when new discussions get started, without my needing to check all those web pages several times per day.

LinkedIn discussions mostly work like a web forum. But for a while, they had a reasonably usable email gateway. You could set a preference to be notified of each new conversation. You still had to click on the web link to read the conversation so far, but if you posted something, you'd get the rest of the discussion emailed to you as each message was posted. Not quite as good as a regular mailing list, but it worked pretty well. I used it for several years to keep up with the very active Toastmasters group discussions.

About a year ago, something broke in their software, and they lost the ability to send email for new conversations. I filed a trouble ticket, and got a note saying they were aware of the problem and working on it. I followed up three months later (by filing another ticket -- there's no way to add to an existing one) and got a response saying be patient, they were still working on it. 11 months later, I'm still being patient, but it's pretty clear they have no intention of ever fixing the problem.

Just recently I fiddled with something in my LinkedIn prefs, and started getting "Popular Discussions" emails every day or so. The featured "popular discussion" is always something stupid that I have no interest in, but it's followed by a section headed "Other Popular Discussions" that at least gives me some idea what's been posted in the last few days. Seemed like it might be worth clicking on the links even though it means I'd always be a few days late responding to any conversations.

Except -- none of the links work. They all go to a generic page with a red header saying "Sorry it seems there was a problem with the link you followed."

I'm reading the plaintext version of the mail they send out. I tried viewing the HTML part of the mail in a browser, and sure enough, those links worked. So I tried comparing the text links with the HTML:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken=AQEqep2nxSZJIg&amp;ek=b2_anet_digest&amp;li=82&amp;m=group_discussions&amp;ts=textdisc-6&amp;itemID=5914453683503906819&amp;itemType=member&amp;anetID=98449
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken=AQEqep2nxSZJIg&ek=b2_anet_digest&li=17&m=group_discussions&ts=grouppost-disc-6&itemID=5914453683503906819&itemType=member&anetID=98449

Well, that's clear as mud, isn't it?

HTML entity substitution

I pasted the two links one on top of the other, to make it easier to compare them a little at a time. That made it fairly easy to find the first difference:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken= ...
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken= ...

Time to die laughing: they're doing HTML entity substitution on the plaintext part of their email notifications, changing & to &amp; everywhere in the link.

If you take the link from the text email and replace every &amp; with &, the link works, and takes you to the specific discussion.

Pagination

Except you can't actually read the discussion. I went to a discussion that had been open for 2 days and had 35 responses, and LinkedIn only showed four of them. I don't even know which four they are -- are they the first four, the last four, or some Facebook-style "four responses we thought you'd like"? There's a button to click on to show the most recent entries, but then I only see a few of the most recent responses, still not the whole thread.

Hooray for the web -- of course, plenty of other people have had this problem too, and a little web searching unveiled a solution. Add a pagination token to the end of the URL that tells LinkedIn to show 1000 messages at once.

&count=1000&paginationToken=

It won't actually show 1000 (or all) responses -- but if you start at the beginning of the page and scroll down reading responses one by one, it will auto-load new batches. Yes, infinite scrolling pages can be annoying, but at least it's a way to read a LinkedIn conversation in order.
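
For instance, the HTML-version link from above becomes:

http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken=AQEqep2nxSZJIg&ek=b2_anet_digest&li=17&m=group_discussions&ts=grouppost-disc-6&itemID=5914453683503906819&itemType=member&anetID=98449&count=1000&paginationToken=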

Making it automatic

Okay, now I know how to edit one of their URLs to make it work. Do I want to do that by hand any time I want to view a discussion? Noooo!

Time for a script! Since I'll be selecting the URLs from mutt, they'll be in the X PRIMARY clipboard. And unfortunately, mutt adds newlines so I might as well strip those as well as fixing the LinkedIn problems. (Firefox will strip newlines for me when I paste in a multi-line URL, but why rely on that?)

Here's the important part of the script:

import subprocess, sys, gtk

# The URL selected in mutt will be in the X PRIMARY selection:
primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if not primary.wait_is_text_available():
    sys.exit(0)
link = primary.wait_for_text()

# Strip mutt's added newlines, undo LinkedIn's entity substitution,
# and append the magic pagination token:
link = link.replace("\n", "").replace("&amp;", "&") + \
       "&count=1000&paginationToken="
subprocess.call(["firefox", "-new-tab", link])

And here's the full script: linkedinify on GitHub. I also added it to pyclip, the script I call from Openbox to open a URL in Firefox when I middle-click on the desktop.

Now I can finally go back to participating in those discussions.

[ 13:10 Sep 11, 2014    More tech/web | permalink to this entry | comments ]

Thu, 31 Jul 2014

Predicting planetary visibility with PyEphem

Part II: Predicting Conjunctions

After I'd written a basic script to calculate when planets will be visible, the next step was predicting conjunctions, times when two or more planets are close together in the sky.

Finding separation between two objects is easy in PyEphem: it's just one line once you've set up your objects, observer and date.

import ephem

p1 = ephem.Mars()
p2 = ephem.Jupiter()
observer = ephem.Observer()  # and then set it to your city, etc.
observer.date = ephem.date('2014/8/1')
p1.compute(observer)
p2.compute(observer)

ephem.separation(p1, p2)
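
One detail worth knowing: separation() returns a PyEphem Angle, which prints as degrees:minutes:seconds but behaves as a float in radians. So a threshold check in degrees might look something like this (a small sketch):

import math

sep = ephem.separation(p1, p2)
print sep                              # prints something like 21:04:43.4
if float(sep) < 4. * math.pi / 180.:   # closer than 4 degrees?
    print "Possible conjunction!"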

So all I have to do is loop over all the visible planets and see when the separation is less than some set minimum, like 4 degrees, right?

Well, not really. That tells me if there's a conjunction between a particular pair of planets, like Mars and Jupiter. But the really interesting events are when you have three or more objects close together in the sky. And events like that often span several days. If there's a conjunction of Mars, Venus, and the moon, I don't want to print something awful like

Friday:
  Conjunction between Mars and Venus, separation 2.7 degrees.
  Conjunction between the moon and Mars, separation 3.8 degrees.
Saturday:
  Conjunction between Mars and Venus, separation 2.2 degrees.
  Conjunction between Venus and the moon, separation 3.9 degrees.
  Conjunction between the moon and Mars, separation 3.2 degrees.
Sunday:
  Conjunction between Venus and the moon, separation 4.0 degrees.
  Conjunction between the moon and Mars, separation 2.5 degrees.

... and so on, for each day. I'd prefer something like:

Conjunction between Mars, Venus and the moon lasts from Friday through Sunday.
  Mars and Venus are closest on Saturday (2.2 degrees).
  The moon and Mars are closest on Sunday (2.5 degrees).

At first I tried just keeping a list of planets involved in the conjunction. So if I see Mars and Jupiter close together, I'd make a list [mars, jupiter], and then if I see Venus and Mars on the same date, I search through all the current conjunction lists and see if either Venus or Mars is already in a list, and if so, add the other one. But that got out of hand quickly. What if my conjunction list looks like [ [mars, venus], [jupiter, saturn] ] and then I see there's also a conjunction between Mars and Jupiter? Oops -- how do you merge those two lists together?

The solution to taking all these pairs and turning them into a list of groups that are all connected actually lies in graph theory: each conjunction pair, like [mars, venus], is an edge, and the trick is to find all the connected edges. But turning my list of conjunction pairs into a graph so I could use a pre-made graph theory algorithm looked like it was going to be more code -- and a lot harder to read and less maintainable -- than making a bunch of custom Python classes.

I eventually ended up with three classes: ConjunctionPair, for a single conjunction observed between two bodies on a single date; Conjunction, a collection of ConjunctionPairs covering as many bodies and dates as needed; and ConjunctionList, the list of all Conjunctions currently active. That let me write methods to handle merging multiple conjunction events together if they turned out to be connected, as well as a method to summarize the event in a nice, readable way.
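
In skeleton form, the structure looks something like this (my sketch of the idea, not the real code from conjunctions.py):

class ConjunctionPair:
    """Two bodies seen close together on one date."""
    def __init__(self, b1, b2, date, sep):
        self.bodies = (b1, b2)
        self.date = date
        self.sep = sep

class Conjunction:
    """A group of ConjunctionPairs connected by shared bodies."""
    def __init__(self):
        self.pairs = []
        self.bodies = set()

    def add(self, pair):
        self.pairs.append(pair)
        self.bodies.update(pair.bodies)

    def merge(self, other):
        for pair in other.pairs:
            self.add(pair)

class ConjunctionList:
    """All the Conjunctions currently active."""
    def __init__(self):
        self.conjunctions = []

    def add(self, pair):
        # Which existing conjunctions share a body with this pair?
        touching = [c for c in self.conjunctions
                    if set(pair.bodies) & c.bodies]
        if not touching:
            c = Conjunction()
            c.add(pair)
            self.conjunctions.append(c)
        else:
            # The new pair may connect several conjunctions into one:
            first = touching[0]
            first.add(pair)
            for c in touching[1:]:
                first.merge(c)
                self.conjunctions.remove(c)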

So predicting conjunctions ended up being a lot more code than I expected -- but only because of the problem of presenting it neatly to the user. As always, user interface represents the hardest part of coding.

The working script is on github at conjunctions.py.

[ 19:57 Jul 31, 2014    More science/astro | permalink to this entry | comments ]

Wed, 23 Jul 2014

Predicting planetary visibility with PyEphem

Part 1: Basic Planetary Visibility

All through the years I was writing the planet observing column for the San Jose Astronomical Association, I was annoyed at the lack of places to go to find out about upcoming events like conjunctions, when two or more planets are close together in the sky. It's easy to find out about conjunctions in the next month, but not so easy to find sites that will tell you several months in advance, like you need if you're writing for a print publication (even a club newsletter).

For some reason I never thought about trying to calculate it myself. I just assumed it would be hard, and wanted a source that could spoon-feed me the predictions.

The best source I know of is the RASC Observer's Handbook, which I faithfully bought every year and checked each month so I could enter that month's events by hand. Except for January and February, when I didn't have the next year's handbook yet by the time my column went to press and I was on my own. I have to confess, I was happy to get away from that aspect of the column when I moved.

In my new town, I've been helping the local nature center with their website. They had some great pages already, like a What's Blooming Now? page that keeps track of which flowers are blooming now and only shows the current ones. I've been helping them extend it by adding features like showing only flowers of a particular color, separating the data into CSV databases so it's easier to add new flowers or butterflies, and so forth. Eventually we hope to build similar databases of birds, reptiles and amphibians.

And recently someone suggested that their astronomy page could use some help. Indeed it could -- it hadn't been updated in about five years. So we got to work looking for a source of upcoming astronomy events we could use as a data source for the page, and we found sources for a few things, like moon phases and eclipses, but not much.

Someone asked about planetary conjunctions, and remembering how I'd always struggled to find that data, especially in months when I didn't have the RASC handbook yet, I got to wondering about calculating it myself. Obviously it's possible to calculate when a planet will be visible, or whether two planets are close to each other in the sky. And I've done some programming with PyEphem before, and found it fairly easy to use. How hard could it be?

Note: this article covers only the basic problem of predicting when a planet will be visible in the evening. A followup article will discuss the harder problem of conjunctions.

Calculating planet visibility with PyEphem

The first step was figuring out when planets were up. That was straightforward. Make a list of the easily visible planets (remember, this is for a nature center, so people using the page aren't expected to have telescopes):

import math
import ephem

planets = [
    ephem.Moon(),
    ephem.Mercury(),
    ephem.Venus(),
    ephem.Mars(),
    ephem.Jupiter(),
    ephem.Saturn()
    ]

Then we need an observer with the right latitude, longitude and elevation. Elevation is apparently in meters, though they never bother to mention that in the PyEphem documentation:

observer = ephem.Observer()
observer.name = "Los Alamos"
observer.lon = '-106.2978'
observer.lat = '35.8911'
observer.elevation = 2286  # meters, though the docs don't actually say

Then we loop over the date range for which we want predictions. For a given date d, we're going to need to know the time of sunset, because we want to know which planets will still be up after nightfall.

sun = ephem.Sun()
observer.date = d
sunset = observer.previous_setting(sun)

Then we need to loop over planets and figure out which ones are visible. It seems like a reasonable first approach to declare that any planet that's visible after sunset and before midnight is worth mentioning.

Now, PyEphem can tell you directly the rising and setting times of a planet on a given day. But I found it simplified the code if I just checked the planet's altitude at sunset and again at midnight. If either one of them is "high enough", then the planet is visible that night. (Fortunately, here in the mid latitudes we don't have to worry that a planet will rise after sunset and then set again before midnight. If we were closer to the arctic or antarctic circles, that would be a concern in some seasons.)

min_alt = 10. * math.pi / 180.
for planet in planets:
    observer.date = sunset
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "is already up at sunset"

Easy enough for sunset. But how do we set the date to midnight on that same night? That turns out to be a bit tricky with PyEphem's date class. Here's what I came up with:

    midnight = list(observer.date.tuple())
    midnight[3:6] = [7, 0, 0]
    observer.date = ephem.date(tuple(midnight))
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "will rise before midnight"

What's that 7 there? That's Greenwich Mean Time when it's midnight in our time zone. It's hardwired because this is for a web site meant for locals. Obviously, for a more general program, you should get the time zone from the computer and add accordingly, and you should also be smarter about daylight saving time and such. The PyEphem documentation, fortunately, gives you tips on how to deal with time zones. (In practice, though, the rise and set times of planets on a given day don't change much with time zone.)
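
One rough way to generalize that hardwired 7 -- a sketch only, still naive about DST and about the date rolling over -- is to get the offset from Python's time module:

import time

# Seconds west of Greenwich, e.g. 25200 (7 hours) for US Mountain:
utc_offset_hours = time.timezone // 3600

midnight = list(observer.date.tuple())
midnight[3:6] = [utc_offset_hours % 24, 0, 0]
observer.date = ephem.date(tuple(midnight))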

And now you have your predictions of which planets will be visible on a given date. The rest is just a matter of writing it out into your chosen database format.

In the next article, I'll cover planetary and lunar conjunctions -- which were superficially very simple, but turned out to have some tricks that made the programming harder than I expected.

[ 21:32 Jul 23, 2014    More science/astro | permalink to this entry | comments ]

Sun, 11 May 2014

Sonograms in Python

I went to a terrific workshop last week on identifying bird songs. We listened to recordings of songs from some of the trickier local species, and discussed the differences and how to remember them. I'm not a serious birder -- I don't do lists or Big Days or anything like that, and I dislike getting up at 6am just because the birds do -- but I do try to identify birds (as well as mammals, reptiles, rocks, geographic features, and pretty much anything else I see while hiking or just sitting in the yard) and I've always had trouble remembering their songs.

[Sonogram of ruby-crowned kinglet] One of the tools birders use to study bird songs is the sonogram. It's a plot of frequency (on the vertical axis) and intensity (represented by color, red being louder) versus time. Looking at a sonogram you can identify not just how fast a bird trills and whether it calls in groups of three or five, but whether it's buzzy/rattly (a vertical line, lots of frequencies at once) or a purer whistle, and whether each note is ascending or descending.

The class last week included sonograms for the species we studied. But what about other species? The class didn't cover even all the local species I'd like to be able to recognize. I have several collections of bird calls on CD (which I bought to use in combination with my "tweet" script -- yes, the name messes up google searches, but my tweet predates Twitter -- a tweet Python script and tweet in HTML for Android). It would be great to be able to make sonograms from some of those recordings too.

But a search for Linux sonogram turned up nothing useful. Audacity has a spectrogram visualization mode with lots of options, but none of them seem to result in a usable sonogram, and most discussions I found on the net agreed that it couldn't do it. There's another sound editor program called snd which can do sonograms, but it's fiddly to use and none of the many color schemes produce a sonogram that I found very readable.

Okay, what about python scripts? Surely that's been done?

I had better luck there. Matplotlib's pylab package has a specgram() call that does more or less what I wanted, and here's an example of how to use pylab.specgram(). (That post also has another example using a library called timeside, but timeside's PyPI package doesn't have any dependency information, and after playing the old RPM-chase game installing another dependency, trying it, then installing the next dependency, I gave up.)

The only problem with pylab.specgram() was that it shows the full range of the sound, both in time and frequency. The recordings I was examining can last a minute or more and go up to 20,000 Hz -- and when pylab tries to fit that all on the screen, you end up with a plot where the details are too small to show you anything useful.

You'd think there would be a way for pylab.specgram() to show only part of the spectrum, but there doesn't seem to be. I finally found a Stack Overflow discussion where one answer gives an excellent rewritten version of pylab.specgram which allows setting minimum and maximum frequency cutoffs. Worked great!

Then I did some fiddling to allow for analyzing only part of the recording -- Python's wave package has no way to read in just the first six seconds of a .wav file, so I had to read in the whole file, read the data into a numpy array, then take a slice representing the seconds of the recording I actually wanted.
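
That part boils down to something like this (a sketch assuming a 16-bit .wav file and the stock pylab.specgram(); the filename and FFT size are made up):

import wave
import numpy as np
import pylab

wf = wave.open("birdsong.wav")     # hypothetical input file
rate = wf.getframerate()
nchannels = wf.getnchannels()
# wave can't seek by seconds, so read everything and slice afterward.
# Assumes 16-bit samples (sampwidth == 2):
data = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
wf.close()

start, secs = 0, 6                 # analyze the first six seconds
chunk = data[start*rate*nchannels : (start+secs)*rate*nchannels : nchannels]
pylab.specgram(chunk, NFFT=512, Fs=rate)
pylab.show()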

But now I can plot nice sonograms of any bird song I want to see, print them out or stick them on my Android device so I can carry them with me.

Update: Oops! I forgot to include a link to the script. Here it is: Sonograms in Python.


[ 09:17 May 11, 2014    More programming | permalink to this entry | comments ]

Thu, 17 Apr 2014

Back from PyCon

I'm back from Montreal, settling back in.

The PiDoorbell tutorial went well, in the end. Of course just about everything that could go wrong, did. The hard-wired ethernet connection we'd been promised didn't materialize, and there was no way to get the Raspberry Pis onto the conference wi-fi because it used browser authentication (it still baffles me why anyone still uses that! Browser authentication made sense in 2007, when lots of people only had 802.11g and couldn't do WPA; it makes absolutely zero sense now).

Anyway, lacking a sensible way to get everyone's Pis on the net, Deepa stepped up as network engineer for the tutorial and hooked up the router she had brought to her laptop's wi-fi connection so the Pis could route through that.

Then we found we had too few SD cards. We didn't realize why until afterward: when we compared the attendee count to the sign-up list we'd gotten, we had quite a few more attendees than we'd planned for. We had a few extra SD cards, but not enough, so I and a couple of the other instructors/TAs had to loan out SD cards we'd brought for our own Pis. ("Now edit /etc/network/interfaces ... okay, pretend you didn't see that, that's the password for my home router, now delete that and change it to ...")

Then some of the SD cards turned out not to have been updated with the latest packages, Mac users couldn't find the drivers to run the serial cable, Windows users (or was it Macs?) had trouble setting static ethernet addresses so they could ssh to the Pi, all the problems we'd expected and a few we hadn't.

But despite all the problems, the TAs: Deepa (who was more like a co-presenter than a TA), Serpil, Lyz and Stuart, plus Rupa and I, were able to get everyone working. All the attendees got their LEDs blinking, their sonar rangefinders rangefinding, and the PiDoorbell script running. Many people brought cameras and got their Pis snapping pictures when the sensor registered someone in front of it. Time restrictions and network problems meant that most people didn't get the Dropbox and Twilio registration finished to get notifications sent to their phones, but that's okay -- we knew that was a long shot, and everybody got far enough that they can add the network notifications later if they want.

And the most important thing is that everybody looked like they were having a good time. We haven't seen the reviews (I'm not sure if PyCon shares reviews with the tutorial instructors; I hope so, but a lot of conferences don't) but I hope everybody had fun and felt like they got something out of it.

The rest of PyCon was excellent, too. I went to some great talks, got lots of ideas for new projects and packages I want to try, had fun meeting new people, and got to see a little of Montreal. And ate a lot of good food.

Now I'm back in the land of enchantment, with its crazy weather -- we've gone from snow to sun to cold breezes to HOT to threatening thunderstorm in the couple of days I've been back. Never a dull moment! I confess I'm missing those chocolate croissants for breakfast just a little bit. We still don't have internet: it's nearly 9 weeks since Comcast's first visit, and their latest prediction (which changes every time I talk to them) is a week from today.

But it's warm and sunny this morning, there's a white-crowned sparrow singing outside the window, and I've just seen our first hummingbird (a male -- I think it's a broad-tailed, but it'll take a while to be confident of IDs on all these new-to-me birds). PyCon was fun -- but it's nice to be home.

[ 10:20 Apr 17, 2014    More conferences | permalink to this entry | comments ]

Sun, 06 Apr 2014

Snow-Hail while preparing for Montreal

Things have been hectic in the last few days before I leave for Montreal with last-minute preparation for our PyCon tutorial, Build your own PiDoorbell - Learn Home Automation with Python next Wednesday.

[Snow-hail coming down on the Piñons] But New Mexico came through on my next-to-last full day with some pretty interesting weather. A windstorm in the afternoon gave way to thunder (but almost no lightning -- I saw maybe one indistinct flash) which gave way to a strange fluffy hail that got gradually bigger until it eventually grew to pea-sized snowballs, big enough and snow enough to capture well in photographs as they came down on the junipers and in the garden.

Then after about twenty minutes the storm stopped and the sun came out. And now I'm back to tweaking tutorial slides and thinking about packing while watching the sunset light on the Rio Grande gorge.

But tomorrow I leave it behind and fly to Montreal. See you at PyCon!

[ 18:55 Apr 06, 2014    More misc | permalink to this entry | comments ]

Wed, 29 Jan 2014

PyCon Tutorial: Build your own PiDoorbell - Learn Home Automation with Python

[Raspberry Pi from wikipedia] The first batch of hardware has been ordered for Rupa's and my tutorial at PyCon in Montreal this April!

We're presenting Build your own PiDoorbell - Learn Home Automation with Python on the afternoon of Wednesday, April 9.

It'll be a hands-on workshop, where we'll experiment with the Raspberry Pi's GPIO pins and learn how to control simple things like an LED. Then we'll hook up sonar rangefinders to the RPis, and build a little device that can be used to monitor visitors at your front door, birds at your feeder, co-workers standing in front of your monitor while you're away, or just about anything else you can think of.

Participants will bring their own Raspberry Pi computers and power supplies -- attendees of last year's PyCon got them there, but a new Model A can be gotten for $30, and a model B for $40.

We'll provide everything else. We worried that requiring participants to bring a long list of esoteric hardware was just asking for trouble, so we worked a deal with PyCon and they're sponsoring hardware for attendees. Thank you, PyCon! CodeChix is fronting the money for the kits and helping with our travel expenses, thanks to donations from some generous sponsors. We'll be passing out hardware kits and SD cards at the beginning of the workshop, which attendees can take home afterward.

We're also looking for volunteer T/As. The key to a good hardware workshop is having lots of helpers who can make sure everybody's keeping up and nobody's getting lost. We have a few top-notch T/As signed up already, but we can always use more. We can't provide hardware for T/As, but most of it's quite inexpensive if you want to buy your own kit to practice on. And we'll teach you everything you need to know about how to get your PiDoorbell up and running -- no need to be an expert at hardware or even at Python, as long as you're interested in learning and in helping other people learn.

This should be a really fun workshop! PyCon tutorial sign-ups just opened recently, so sign up for the tutorial (we do need advance registration so we know how many hardware kits to buy). And if you're going to be at PyCon and are interested in being a T/A, drop me or Rupa a line and we'll get you on the list and get you all the information you need.

See you at PyCon!

[ 20:32 Jan 29, 2014    More hardware | permalink to this entry | comments ]

Wed, 11 Dec 2013

Counting syllables in Python

When I wrote recently about my Dactylic dinosaur doggerel, I glossed over a minor problem with my final poem: the rules of double-dactylic doggerel say that the sixth line (or sometimes the seventh) should be a single double-dactyl word -- something like "paleontologist" or "hexasyllabic'ly". I used "dinosaur orchestra" -- two words, which is cheating.

I don't feel too guilty about that. If you read the post, you may recall that the verse was the result of drifting grumpily through an insomniac morning where I would have preferred to be getting back to sleep. Coming up with anything that scans at all is probably good enough.

Still, it bugged me, not being able to think of a double-dactylic word that related somehow to Parasaurolophus. So I vowed that, later that day when I was up and at the computer, I would attempt to find one and rewrite the poem accordingly.

I thought that would be fairly straightforward. Not so much. I thought there would be some utility I could run that would count syllables for me, then I could run /usr/share/dict/words through it, print out all the 6-syllable words, and find one that fit. Turns out there is no such utility.

But Python has a library for everything, doesn't it?

Some searching turned up PyHyphen, which includes some syllable-counting functions. It apparently uses the hyphenation dictionaries that come with LibreOffice.

There's a Debian package for it, python-pyhyphen -- but it doesn't work. First, it depends on another package, hyphen-en-us, but doesn't have that dependency encoded in the package, even as a suggested or recommended package. But even when you install the hyphenated dictionary, it still doesn't work because it doesn't point to the dictionary in the place it was installed. Looks like that problem was reported almost two years ago, bug 627944: python-pyhyphen: doesn't work out-of-the-box with hyphen-* packages. There's a fix there that involves editing two files, /usr/lib/python2.7/dist-packages/hyphen/config.py and /usr/lib/python2.7/dist-packages/hyphen/__init__.py.

Or you can just give up on Debian and pip install pyhyphen, which is a lot easier.

But once you get it working, you find that it's terrible. It was wrong about almost every word I tried. I hope not too many people are relying on this hyphen-en-us dictionary for important documents. Its results seemed nearly random, and I quickly gave up on it for getting a useful list of words around six syllables.

Just for fun, since my count syllables web search turned up quite a few websites claiming that functionality, I tried entering some of my long test words manually. All of the websites I tried were wrong more than half the time, and often they were off by more than two syllables. I don't mind off-by-ones -- I can look at words claiming 5 and 7 syllables while searching for double dactyls -- but if I have to include 4-syllable words as well, I'll never find what I'm looking for.

That discouraged me from using another Python suggestion I'd seen, the nltk (natural language toolkit) package. I've been looking for an excuse to play with nltk, and some day I will, but for this project I was looking for a quick approximate solution, and the nltk examples I found mostly looked like using it would require a bigger time commitment than I was willing to devote to silly poetry. And if none of the dedicated syllable-counting websites or dictionaries got it right, would a big time investment in nltk pay off?

Anyway, by this time I'd wasted more than an hour poking around various libraries and websites for this silly unimportant problem, and I decided that with that kind of time investment, I could probably do better on my own than the official solutions were giving me. Why not basically just count vowels?

So I whipped up a little script, countsyl, that did just that. I gave it a list of vowels, with a few simple rules. Obviously, you can't just say every vowel is a new syllable -- there are too many double vowels and silent letters and such. But you can't say that any run of multiple vowels together counts as one syllable, because sometimes the vowels do count; and you can't make absolute rules like "'e' at the end of a word is always silent", because sometimes it isn't. So I kept both minimum and maximum syllable counts for each word, and printed both.
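
The heart of countsyl is something like this (a simplified sketch of the approach, not the script itself):

def count_syllables(word):
    """Return (minsyl, maxsyl), lower and upper bounds on the count."""
    vowels = "aeiouy"
    minsyl = maxsyl = 0
    on_vowel = False
    for c in word.lower():
        is_vowel = c in vowels
        if is_vowel and not on_vowel:
            # New vowel run: definitely at least one new syllable.
            minsyl += 1
            maxsyl += 1
        elif is_vowel and on_vowel:
            # Adjacent vowel: might or might not be its own syllable.
            maxsyl += 1
        on_vowel = is_vowel
    # A trailing 'e' is usually (though not always) silent:
    if word.lower().endswith("e") and minsyl > 1:
        minsyl -= 1
    return minsyl, maxsyl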

And much to my surprise, without much tuning at all my silly little script immediately gave much better results than the hyphenation dictionary or the dedicated websites.

Alas, although it did give me quite a few hexasyllabic words in /usr/share/dict/words, none of them were useful at all for a poem about Parasaurolophus. What I really needed was a musical term (since that's what the poem is about). What about a musical dictionary?

I found a list of musical terms on Wikipedia: Glossary of musical terminology, saved it as a local file, ran a few vim substitutes and turned it into a plain list of words. That did a little better, and gave me some possible ideas: (non?)contrapuntally? (something)harmonically? extemporaneously?

But none of them worked out, and by then I'd run out of steam. I gave up and blogged the poem as originally written, with the cheating two-word phrase "dinosaur orchestra", and vowed to write up how to count syllables in Python -- which I have now done. Quite noncontrapuntally, and definitely not extemporaneously. But at least I have a useful little script next time I want to get an approximate syllable count.

[ 17:51 Dec 11, 2013    More programming | permalink to this entry | comments ]

Wed, 13 Nov 2013

Does scrolling output make a program slower? Followup.

Last week I wrote about some tests I'd made to answer the question Does scrolling output make a program slower? My test showed that when running a program that generates lots of output, like an rsync -av, the rsync process will slow way down as it waits for all that output to scroll across whatever terminal client you're using. Hiding the terminal helps a lot if it's an xterm or a Linux console, but doesn't help much with gnome-terminal.

A couple of people asked in the comments about the actual source of the slowdown. Is the original process -- the rsync, or my test script, that's actually producing all that output -- actually blocking waiting for the terminal? Or is it just that the CPU is so busy doing all that font rendering that it has no time to devote to the original program, and that's why it's so much slower?

I found pingu on IRC (thanks to JanC) and the group had a very interesting discussion, during which I ran a series of additional tests.

In the end, I'm convinced that CPU allocation to the original process is not the issue, and that output is indeed blocked waiting for the terminal to display the output. Here's why.

First, I installed a couple of performance meters and looked at the CPU load while rendering. With conky, CPU use went up equally (about 35-40%) on both CPU cores while the test was running. But that didn't tell me anything about which processes were getting all that CPU.

htop was more useful. It showed X first among CPU users, xterm second, and my test script third. However, the test script never got more than 10% of the total CPU during the test; X and xterm took up nearly all the remaining CPU.

Even with the xterm hidden, X and xterm were the top two CPU users. But this time the script, at number 3, got around 30% of the CPU rather than 10%. That still doesn't seem like it could account for the huge difference in speed (the test ran about 7 times faster with xterm hidden); but it's interesting to know that even a hidden xterm will take up that much CPU.

It was also suggested that I try running it to /dev/null, something I definitely should have thought to try before. The test took .55 seconds with its output redirected to /dev/null, and .57 seconds redirected to a file on disk (of course, the kernel would have been buffering, so there was no disk wait involved). For comparison, the test had taken 56 seconds with xterm visible and scrolling, and 8 seconds with xterm hidden.

I also spent a lot of time experimenting with sleeping for various amounts of time between printed lines. With time.sleep(.0001) and xterm visible, the test took 104.71 seconds. With xterm shaded and the same sleep, it took 98.36 seconds, only 6 seconds faster. Redirected to /dev/null but with a .0001 sleep, it took 97.44 sec.

I think this argues for the blocking theory rather than the CPU-bound one: the argument being that the sleep gives the program a chance to wait for the output rather than blocking the whole time. If you figure it's CPU bound, I'm not sure how you'd explain the result.

But a .0001 second sleep probably isn't very accurate anyway -- we were all skeptical that Linux can manage sleep times that small. So I made another set of tests, with a .001 second sleep every 10 lines of output. The results: 65.05 with xterm visible; 63.36 with xterm hidden; 57.12 to /dev/null. That's with a total of 50 seconds of sleeping included (my test prints 500000 lines). So with all that CPU still going toward font rendering, the visible-xterm case still only took 7 seconds longer than the /dev/null case. I think this argues even more strongly that the original test, without the sleep, is blocking, not CPU bound.

But then I realized what the ultimate test should be. What happens when I run the test over an ssh connection, with xterm and X running on my local machine but the actual script running on the remote machine?

The remote machine I used for the ssh tests was a little slower than the machine I used to run the other tests, but that probably doesn't make much difference to the results.

The results? 60.29 sec printing over ssh (LAN) to a visible xterm; 7.24 sec doing the same thing with xterm hidden. Fairly similar to what I'd seen before when the test, xterm and X were all running on the same machine.

Interestingly, the ssh process during the test took 7% of my CPU, almost as much as the python script was getting before, just to transfer all the output lines so xterm could display them.

So I'm convinced now that the performance bottleneck has nothing to do with the process being CPU bound and having all its CPU sucked away by rendering the output, and that the bottleneck is in the process being blocked in writing its output while waiting for the terminal to catch up.

I'd be interested to hear further comments -- are there other interpretations of the results besides mine? I'm also happy to run further tests.

[ 17:19 Nov 13, 2013    More linux | permalink to this entry | comments ]

Fri, 08 Nov 2013

Does scrolling output make a program slower?

While watching my rsync -av messages scroll by during a big backup, I wondered, as I often have, whether that -v (verbose) flag was slowing my backup down.

In other words: when you run a program that prints lots of output, so there's so much output the terminal can't display it all in real-time -- like an rsync -v on lots of small files -- does the program wait ("block") while the terminal catches up?

And if the program does block, can you speed up your backup by hiding the terminal, either by switching to another desktop, or by iconifying or shading the terminal window so it's not visible? Is there any difference among the different ways of hiding the terminal, like switching desktops, iconifying and shading?

Since I've never seen a discussion of that, I decided to test it myself. I wrote a very simple Python program:

import time

start = time.time()

for i in xrange(500000):
    print "Now we have printed", i, "relatively long lines to stdout."

print time.time() - start, "seconds to print", i, "lines."

I ran it under various combinations of visible and invisible terminal. The results were striking. These are rounded to the nearest tenth of a second, in most cases the average of several runs:

Terminal type                   Seconds
xterm, visible                     56.0
xterm, other desktop                8.0
xterm, shaded                       8.5
xterm, iconified                    8.0
Linux framebuffer, visible        179.1
Linux framebuffer, hidden           3.7
gnome-terminal, visible            56.9
gnome-terminal, other desktop      56.7
gnome-terminal, iconified          56.7
gnome-terminal, shaded             43.8

Discussion:

First, the answer to the original question is clear. If I'm displaying output in an xterm, then hiding it in any way will make a huge difference in how long the program takes to complete.

On the other hand, if you use gnome-terminal instead of xterm, hiding your terminal window won't make much difference. Gnome-terminal is nearly as fast as xterm when it's displaying; but it apparently lacks xterm's smarts about not doing that work when it's hidden. If you use gnome-terminal, you don't get much benefit out of hiding it.

I was surprised how slow the Linux console was (I'm using the framebuffer in the Debian 3.2.0-4-686-pae kernel on Intel graphics). But it's easy to see where that time is going when you watch the output: in xterm, you see lots of blank space as xterm skips drawing lines trying to keep up with the program's output. The framebuffer doesn't do that: it prints and scrolls every line, no matter how far behind it gets.

But equally interesting is how much faster the framebuffer is when it's not visible. (I typed Ctrl-alt-F2, logged in, ran the program, then typed Ctrl-alt-F7 to go back to X while the program ran.) Obviously xterm is doing some background processing that the framebuffer console doesn't need to do. The absolute time difference, less than four seconds, is too small to worry about, but it's interesting anyway.

I would have liked to try my test on a bare Linux console, with no framebuffer, but figuring out how to get a distro kernel out of framebuffer mode was a bigger project than I wanted to tackle that afternoon.

I should mention that I wasn't super-scientific about these tests. I avoided doing any heavy work on the machine while the tests were running, but I was still doing light editing (like this article), reading mail and running xchat. The times for multiple runs were quite consistent, so I don't think my light system activity affected the results much.

So there you have it. If you're running an output-intensive program like rsync -av and you care how fast it runs, use either xterm or the console, and leave it hidden most of the time.

[ 15:17 Nov 08, 2013    More linux | permalink to this entry | comments ]

Mon, 07 Oct 2013

Viewing HTML mail messages from Mutt (or other command-line mailers)

Command-line mailers like mutt have one disadvantage: viewing HTML mail with embedded images. Without images, HTML mail is no problem -- run it through lynx, links or w3m. But if you want to see images in place, how do you do it?

Mutt can send a message to a browser like firefox ... but only the textual part of the message. The images don't show up.

That's because mail messages include images, not as separate files, but as attachments within the same file, encoded in a format known as MIME (Multipurpose Internet Mail Extensions). An image link in the HTML, instead of looking like <img src="picture.jpg">, will look something like <img src="cid:0635428E-AE25-4FA0-93AC-6B8379300161"> (Apple's Mail.app) or <img src="cid:1.3631871432@web82503.mail.mud.yahoo.com"> (Yahoo's webmail).

CID stands for Content ID, and refers to the ID of the image as it is encoded in MIME inside the message. GUI mail programs, of course, know how to decode this and show the image. Mutt doesn't.

A web search finds a handful of shell scripts that use the munpack program (part of the mpack package on Debian systems) to split off the files; then they use various combinations of sed and awk to try to view those files. Except that none of the scripts I found actually work for messages sent from modern mailers -- they don't decode the CID links properly.

I wasted several hours fiddling with various shell scripts, trying to adjust sed and awk commands to figure out the problem, when I had the usual epiphany that always eventually arises from shell script fiddling: "Wouldn't this be a lot easier in Python?"

Python's email package

Python has a package called email that knows how to list and unpack MIME attachments. Starting from the example near the bottom of that page, it was easy to split off the various attachments and save them in a temp directory. The key is

import email

fp = open(msgfile)
msg = email.message_from_file(fp)
fp.close()

for part in msg.walk():
    # ... examine each MIME part of the message ...

That left the problem of how to match CIDs with filenames, and rewrite the links in the HTML message accordingly.

The documentation on the email package is a bit unclear, unfortunately. For instance, it doesn't give any hints about what objects you'll get when iterating over a message with walk, and if you try it, they're just of type 'instance'. So what operations can you expect are legal on them? If you run help(part) in the Python console on one of the parts you get from walk, it's generally class Message, so you can use the Message API, with functions like get_content_type(), get_filename(), and get_payload().

More useful, it has dictionary keys() for the attributes it knows about each attachment. part.keys() gets you a list like

['Content-Type', 
 'Content-Transfer-Encoding',
 'Content-ID',
 'Content-Disposition' ]

So by making a list relating part.get_filename() (with a made-up filename if it doesn't have one already) to part['Content-ID'], I'd have enough information to rewrite those links.

Case-insensitive dictionary matching

But wait! Not so simple. That list is from a Yahoo mail message, but if you try keys() on a part sent by Apple mail, instead it will be 'Content-Id'. Note the lower-case d: Id, instead of the ID that Yahoo used.

Unfortunately, Python doesn't have a way of looking up items in a dictionary with case-insensitive keys. So I used a loop:

    for k in part.keys():
        if k.lower() == 'content-id':
            print "Content ID is", part[k]

Most mailers seem to put angle brackets around the content id, so that would print things like "Content ID is <14.3631871432@web82503.mail.mud.yahoo.com>". Those angle brackets have to be removed, since the CID links in the HTML file don't have them.

for k in part.keys():
    if k.lower() == 'content-id':
        if part[k].startswith('<') and part[k].endswith('>'):
            part[k] = part[k][1:-1]

But that didn't work -- the angle brackets were still there, even though if I printed part[k][1:-1] it printed without angle brackets. What was up?

Immutable parts inside email.Message

It turned out that the parts inside an email Message (and maybe the Message itself) are immutable -- you can't change them. Python doesn't throw an exception; it just silently doesn't change anything. So I had to make a local copy:

for k in part.keys():
    if k.lower() == 'content-id':
        content_id = part[k]
        if content_id.startswith('<') and content_id.endswith('>'):
            content_id = content_id[1:-1]

and then save content_id, not part[k], in my list of filenames and CIDs.
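
Assembling those pieces, building that list might look something like this (a sketch; the temp-directory naming is my invention, not necessarily what viewhtmlmail does):

import os, tempfile

tmpdir = tempfile.mkdtemp()
subfiles = []
for i, part in enumerate(msg.walk()):
    if part.get_content_maintype() == 'multipart':
        continue                       # just a container, skip it
    filename = part.get_filename()
    if not filename:                   # make up a name if there isn't one
        filename = 'part-%03d' % i
    filename = os.path.join(tmpdir, filename)
    fp = open(filename, 'wb')
    fp.write(part.get_payload(decode=True))
    fp.close()

    content_id = None
    for k in part.keys():              # case-insensitive Content-ID lookup
        if k.lower() == 'content-id':
            content_id = part[k]
            if content_id.startswith('<') and content_id.endswith('>'):
                content_id = content_id[1:-1]
    subfiles.append({'filename': filename, 'Content-Id': content_id})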

Then the rest is easy. Assuming I've built up a list called subfiles containing dictionaries with 'filename' and 'Content-Id', I can do the substitution in the HTML source:

    htmlsrc = html_part.get_payload(decode=True)
    for sf in subfiles:
        htmlsrc = re.sub('cid: ?' + sf['Content-Id'],
                         'file://' + sf['filename'],
                         htmlsrc, flags=re.IGNORECASE)

Then all I have to do is hook it up to a key in my .muttrc:

macro  index  <F10>  "<copy-message>/tmp/mutttmpbox\n<enter><shell-escape>~/bin/viewhtmlmail.py\n" "View HTML in browser"
macro  pager  <F10>  "<copy-message>/tmp/mutttmpbox\n<enter><shell-escape>~/bin/viewhtmlmail.py\n" "View HTML in browser"

Works nicely! Here's the complete script: viewhtmlmail.

[ 11:49 Oct 07, 2013    More tech/email | permalink to this entry | comments ]

Wed, 28 Aug 2013

Python scripts for Android

Python on Android. Wouldn't that make so many things so much easier?

I've known for a long time about SL4A, but when I read, a year or two ago, that Google officially disclaimed support for languages other than Java and C and didn't want their employees working on projects like SL4A, I decided it wasn't a good bet.

But recently I heard from someone who had just discovered SL4A and its Python support and talked about it like a going thing. I had an Android scripting problem I really wanted to solve, and decided it was time to take another look.

It turns out SL4A and its Python interpreter are still being maintained, and indeed, I was able to solve my problem that way. But the documentation was scanty at best. So here are some shortcuts.

Getting Python running on Android

How do you install it in the first place? Took me three or four tries: it turns out it's extremely picky about the order in which you do things, and the documentation doesn't warn you about that. Follow these steps:

  1. Enable "Unknown Sources" under Application settings if you haven't already.
  2. Download both sl4a_r6.apk and PythonForAndroid_r4.apk
  3. Install sl4a from the apk. Do not install Python yet.
  4. Find SL4A in Applications and run it. It will say "no matches found" (i.e. no scripts) but that's okay: the important thing is that it creates the directory where the scripts will live, /sdcard/sl4a/scripts, without which PythonForAndroid would fail to install.
  5. Install PythonForAndroid from the apk.
  6. Find Python for Android in Applications and run it. Tap Install. This will install the sample scripts, and you'll be ready to go.

Make a shortcut on the home screen:

You've written a script and it does what you want. But to run it, you have to run SL4A, choose the Python interpreter, scroll around to find the script, tap on it, and indicate whether or not you want to see the console. Way too many steps!

Turns out you can make a shortcut on the home screen to an SL4A script (thanks to this tip).

This will give you the familiar twin-snake Python icon on your home screen. There doesn't seem to be any way to change this to a different icon.

Wait, what about UI?

Well, that still seems to be a big hole in the whole SL4A model. You can write great scripts that print to the console. You can even do a few specialized things, like popup menus, messages (what the Python Android module calls makeToast()) and notifications. The test.py sample script is a great illustration of how to use all those features, plus a lot more.
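
For instance, the popup and notification calls mentioned above look like this (the android module is SL4A's Python binding; the message text here is made up):

import android

droid = android.Android()
droid.makeToast("Hello from SL4A!")            # transient popup message
droid.notify("PiAlert", "Something happened")  # status-bar notification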

But what if you want to show a window, put a few buttons in it, let the user control things? Nobody seems to have thought about that possibility. I mean, it's not "sorry, we haven't had time to implement this", it isn't even mentioned as something someone would want to do on an Android device. Boggle.

The only possibility I've found is that there is apparently a way to use Android's WebView class from Python. I have not tried this yet; when I do, I'll write it up separately.

WebView may not be the best way to do UI. I've spent many hours tearing my hair out over its limitations even when called from Java. But still, it's something. And one very interesting thing about it is that it provides an easy way to call up an HTML page, either local or remote, from an Android home screen icon. So that may be the best reason yet to check out SL4A.

[ 22:31 Aug 28, 2013    More programming | permalink to this entry | comments ]

Tue, 20 Aug 2013

Using Google Maps with Python to turn a list of addresses into waypoints

A few days ago I talked about how I make waypoint files for lists of house addresses to use in OsmAnd. For waypoint files, you need latitude/longitude coordinates, and I was getting those from a web page that used the online Google Maps API to convert an address into latitude and longitude coordinates.

It was pretty cool at first, but pasting every address into the latitude/longitude web page and then pasting the resulting coordinates into the address file, got old, fast. That's exactly the sort of repetitive task that computers are supposed to handle for us.

The lat/lon page used Javascript and the Google Maps API, and I already had a Google Maps API key (they have all sorts of fun APIs for map geeks) ... but I really wanted something that could run locally, reading and converting a local file.

And then I discovered the Python googlemaps package. Exactly what I needed! It's in the Python Package Index, so I installed it with pip install googlemaps. That enabled me to change my waymaker Python script: if the first line of a description wasn't a latitude and longitude, it instead looked for something that might be an address.

Addresses in my data files might be one line or might be two, but since they're all US addresses, I know they'll end with a two-capital-letter state abbreviation and a 5-digit zip code: 2948 W Main St. Anytown, NM 12345. You can find that with a regular expression:

    match = re.search('.*[A-Z]{2}\s+\d{5}$', line)

But first I needed to check whether the first line of the entry was already latitude/longitude coordinates, since I'd already converted some of my files. That uses another regular expression. Python doesn't seem to have a built-in way to search for generic numeric expressions (containing digits, decimal points or +/- symbols) so I made one, since I had to use it twice if I was searching for two numbers with whitespace between them.

    numeric = '[\+\-\d\.]'
    match = re.search('^(%s+)\s+(%s+)$' % (numeric, numeric),
                      line)

(For anyone who wants to quibble, I know the regular expression isn't perfect. For instance, it would match expressions like 23+48..6.1-64.5. Not likely to be a problem in these files, so I didn't tune it further.)

If the script doesn't find coordinates but does find something that looks like an address, it feeds the address into Google Maps and gets the resulting coordinates. That code looks like this:

import googlemaps
from googlemaps import GoogleMaps

gmaps = GoogleMaps('YOUR GOOGLE MAPS API KEY HERE')
try:
    lat, lon = gmaps.address_to_latlng(addr)
except googlemaps.GoogleMapsError, e:
    print "Oh, no! Couldn't geocode", addr
    print e

Overall, a nice simple solution made possible with python-googlemaps. The full script is on github: waymaker.

[ 12:24 Aug 20, 2013    More mapping | permalink to this entry | comments ]

Tue, 28 May 2013

A quick URL shortener

For years I've used bookmarklets to shorten URLs. For instance, with is.gd, I set up a bookmark to javascript:document.location='http://is.gd/create.php?longurl='+encodeURIComponent(location.href);, give it a keyword like isgd, and then when I'm on a page I want to paste into Twitter (the only reason I need a URL shortener), I type Ctrl-L (to focus the URL bar) then isgd and hit return. Easy.

But with the latest rev of Firefox (I'm not sure if this started with version 20 or 21), sometimes javascript: links don't work. They just display the javascript source in the URLbar rather than executing it. Lacking a solution to the Firefox problem, I still needed a way of shortening URLs. So I looked into Python solutions.

It turns out there are a few URL shorteners with public web APIs. is.gd is one of them; shorturl.com is another. There are also APIs for bit.ly and goo.gl if you don't mind registering and getting an API key. Given that, it's pretty easy to write a Python script.

Which of course I did: shorturl.
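
The core of the idea is small. Here's a minimal sketch (not the actual shorturl script) using is.gd's format=simple API, which returns the shortened URL as plain text:

import urllib, urllib2

def shorten(url):
    # is.gd's "simple" API returns just the short URL:
    api = "http://is.gd/create.php?format=simple&url=" + \
          urllib.quote(url, safe="")
    return urllib2.urlopen(api).read().strip()

print shorten("http://shallowsky.com/blog/")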

[Python url shortening script] In the browser, I select the URL I want (e.g. by doubleclicking in the URLbar, or by right-clicking and choosing "Copy link location"). That puts the URL in the X selection. Then I run the shorturl script, with no arguments. (I have it in my window manager's root menu.)

shorturl reads the X selection and shortens the URL (it tries is.gd first, then shorturl.com if is.gd doesn't work for some reason). Then it pops up a little window showing me both the short URL and the original long one, so I can be sure I shortened the right thing. (One thing I don't like about a lot of the URL services is that they don't tell you the original URL; I only find out later that I tweeted a link to something that wasn't at all the link I intended to share.)

It also copies the short URL into the X selection, so after verifying that the long URL was the one I wanted, I can go straight to my Twitter window (in my case, a Bitlbee tab in my IRC client) and middleclick to paste it.

After I've pasted the short link, I can dismiss the window by typing q. Don't type q too early -- since the python script owns the X selection, you won't be able to paste it anywhere once you've closed the window. (Unless you're running a selection-managing app like klipper.)

I just wish there were some way to use it for Twitter's own shortener, t.co. It's so frustrating that Twitter makes us all shorten URLs to fit in 140 characters just so they can shorten them again with their own service -- in the process removing any way for readers to see where the link will go. Sorry, folks -- nothing I can do about that. Complain to Twitter about why they won't let anyone use t.co directly.

[ 12:42 May 28, 2013    More tech/web | permalink to this entry | comments ]

Sat, 25 May 2013

Telling your Raspberry Pi that your terminal is bigger than 24 lines

When I'm working with an embedded Linux box -- a plug computer, or most recently with a Raspberry Pi -- I usually use GNU screen as my terminal program. screen /dev/ttyUSB0 115200 connects to the appropriate USB serial port at the appropriate speed, and then you can log in just as if you were using telnet or ssh.

With one exception: the window size. Typically everything is fine until you use an editor, like vim. Once you fire up an editor, it assumes your terminal window is only 24 lines high, regardless of its actual size. And even after you exit the editor, somehow your window will have been changed so that it scrolls at the 24th line, leaving the bottom of the window empty.

Tracking down why it happens took some hunting. There are lots of different places the screen size can be set. Libraries like curses can ask the terminal its size (but apparently most programs don't). There's a size built into most terminfo entries (specified by the TERM environment variable) -- but it's not clear that gets used very much any more. There are environment variables LINES and COLUMNS, and a lot of programs read those; but they're often unset, and even if they are set, you can't trust them. And setting any of these didn't help -- I could change TERM and LINES and COLUMNS all I wanted, but as soon as I ran vim the terminal would revert to that scrolling-at-24-lines behavior.

In the end it turned out the important setting was the tty setting. You can get a summary of what the tty driver thinks its size is:

% stty size
32 80

But to set it, you use rows and columns rather than size. I discovered I could type stty rows 32 (or whatever my current terminal size was), and then I could run vim and it would stay at 32 rather than reverting to 24. So that was the important setting vim was following.

The basic problem was that screen, over a serial line, doesn't have a protocol for passing the terminal's size information, the way a remote login program like ssh, rsh or telnet does. So how could I get my terminal size set appropriately on login?

Auto-detecting terminal size

There's one program that will do it for you, which I remembered from the olden days of Unix, back before programs like telnet had this nice size-setting built in. It's called resize, and on Debian, it turned out to be part of the xterm package.

That's actually okay on my current Raspberry Pi, since I have X libraries installed in case I ever want to hook up a monitor. But in general, a little embedded Linux box shouldn't need X, so I wasn't very satisfied with this solution. I wanted something with no X dependencies. Could I do the same thing in Python?

How it works

Well, as I mentioned, there are ways of getting the size of the actual terminal window, by printing an escape sequence and parsing the result.

But finding the escape sequence was trickier than I expected. It isn't written about very much. I ended up running script and capturing the output that resize sent, which seemed a little crazy: '\e[7\e[r\e[999;999H\e[6n' (where \e means the escape character). Holy cow! What are all those 999s?

Apparently what's going on is that there isn't any sequence to ask xterm (or other terminal programs) "What's your size?" But there is a sequence to ask, "Where is the cursor on the screen right now?"

So what you do is send a sequence telling it to go to row 999 and column 999. It can't, of course -- the cursor just stops at the terminal's real bottom right corner. Then send another sequence asking "Where are you really?" and read the answer: it's the window size.

(Note: if we ever get monitors big enough for 1000x1000 terminals, this will fail. I'm not too worried.)

Reading the answer

Okay, great, we've asked the terminal where it is, and it responds. How do we read the answer? That was actually the trickiest part.

First, you have to write to /dev/tty, not just stdout.

Second, you need the output to be available for your program to read, not just echoed in the terminal for the user to see. Setting the tty to noncanonical mode does that.

Third, you can't just do a normal blocking read of stdin -- it'll never return. Instead, put stdin into non-blocking mode and use select() to see when there's something available to read.

And of course, you have to make sure you reset the terminal back to normal canonical line-buffered mode when you're done, whether or not your read succeeds.

Once you do all that, you can read the output, which will look something like "\e[32;80R". The two numbers, of course, are the lines and columns values you want; ignore the rest.
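
Put together, the query looks roughly like this -- a condensed sketch of the technique, not the full termsize script:

import os, re, select, termios

def query_terminal_size():
    fd = os.open('/dev/tty', os.O_RDWR)
    old = termios.tcgetattr(fd)
    new = termios.tcgetattr(fd)
    new[3] &= ~(termios.ICANON | termios.ECHO)   # noncanonical mode, no echo
    try:
        termios.tcsetattr(fd, termios.TCSANOW, new)
        # Save the cursor, jump toward 999,999, then ask where we really are:
        os.write(fd, '\0337\033[999;999H\033[6n')
        # Wait up to a second for the "\e[rows;colsR" answer:
        ready, _, _ = select.select([fd], [], [], 1)
        answer = os.read(fd, 32) if ready else ''
        os.write(fd, '\0338')                    # restore the cursor
    finally:
        # Always restore canonical line-buffered mode:
        termios.tcsetattr(fd, termios.TCSANOW, old)
        os.close(fd)
    match = re.search(r'\[(\d+);(\d+)R', answer)
    if match:
        return int(match.group(1)), int(match.group(2))
    return None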

stty in python

Oh, yes, and one other thing: once you've read the terminal size, how do you set the stty size appropriately? Running os.system('stty rows %d' % rows) seems like it should work, but it doesn't, probably because it's talking to stdout instead of /dev/tty. But I did find one way to do it, the enigmatic:

import fcntl, struct, termios   # fd: a file descriptor for /dev/tty

fcntl.ioctl(fd, termios.TIOCSWINSZ,
            struct.pack("HHHH", rows, cols, 0, 0))

Here it all is in one script, which you can install on your Raspberry Pi (or other embedded Linux box) and run from .bash_profile:
termsize: set stty size to the size of the current terminal window.

Tags: , , , ,
[ 19:47 May 25, 2013    More hardware | permalink to this entry | comments ]

Wed, 15 May 2013

Finding versions of installed packages in Debian/Ubuntu

Checking versions in Debian-based systems is a bit of a pain.

This happens to me a couple of times a month: for some reason I need to know what version of something I'm currently running -- often a library, like libgtk. aptitude show will tell you all about a package -- but only if you know its exact name. You can't do aptitude show libgtk or even aptitude show '*libgtk*' -- you have to know that the package name is libgtk2.0-0. Why is it libgtk2.0-0? I have no idea, and it makes no sense to me.

So I always have to do something like aptitude search libgtk | egrep '^i' to find out what packages I have installed that match the name libgtk, find the package I want, then copy and paste that name after typing aptitude show.

But it turns out it's super easy in Python to query Debian packages using the Python apt package. In fact, this is all the code you need:

import sys
import apt

# Load the APT package cache:
cache = apt.cache.Cache()

pat = sys.argv[1]

# Print name and installed version of every package matching the pattern:
for pkgname in cache.keys():
    if pat in pkgname:
        pkg = cache[pkgname]
        instver = pkg.installed
        if instver:
            print pkg.name, instver.version
Then run aptver libgtk and you're all set.

In practice, I wanted nicer formatting, with columns that lined up, so the actual script is a little longer. I also added a -u flag to show uninstalled packages as well as installed ones. Amusingly, the code to format the columns took about twice as many lines as the code that does the actual work. There doesn't seem to be a standard way of formatting columns in Python, though there are lots of different implementations on the web. Now there's one more -- in my aptver on github.

Tags: , , , ,
[ 16:07 May 15, 2013    More linux | permalink to this entry | comments ]

Sat, 13 Apr 2013

Parsing NOAA historical weather data

We've been considering the possibility of moving out of the Bay Area to somewhere less crowded, somewhere in the desert southwest we so love to visit. But that also means moving to somewhere with much harsher weather.

How harsh? It's pretty easy to search for a specific location and get average temperatures. But what if I want to make a table to compare several different locations? I couldn't find any site that made that easy.

No problem, I say. Surely there's a Python library, I say. Well, no, as it turns out. There are Python APIs to get the current weather anywhere; but if you want historical weather data, or weather data averaged over many years, you're out of luck.

NOAA purports to have historical climate data, but the only dataset I found was spotty and hard to use. There's an FTP site containing directories by year; inside are gzipped files with names like 723710-03162-2012.op.gz. The first two numbers are station numbers, and there's a file at the top level called ish-history.txt with a list of the station codes and corresponding numbers. Not obvious!

Once you figure out the station codes, the files themselves are easy to parse, with lines like

STN--- WBAN   YEARMODA    TEMP       DEWP      SLP        STP       VISIB      WDSP     MXSPD   GUST    MAX     MIN   PRCP   SNDP   FRSHTT
724945 23293  20120101    49.5 24    38.8 24  1021.1 24  1019.5 24    9.9 24    1.5 24    4.1  999.9    68.0    37.0   0.00G 999.9  000000
Each line represents one day (20120101 is January 1st, 2012), and the codes are explained in another file called GSOD_DESC.txt. For instance, MAX is the daily high temperature, and SNDP is snow depth.
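
Once you know the layout, parsing a line can be as simple as this sketch (field positions are taken from the sample line above; the flag-stripping is my assumption about the format):

def parse_gsod_line(line):
    fields = line.split()
    date = fields[2]       # YEARMODA, e.g. 20120101
    # MAX and MIN are the 18th and 19th whitespace-separated fields;
    # values can carry a trailing flag character like '*' (assumption):
    tmax = float(fields[17].rstrip('*'))
    tmin = float(fields[18].rstrip('*'))
    return date, tmax, tmin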

[NOAA historical temp program] So all I needed was to decode the station names, download the right files and parse them. That took about a day to write (including a lot of time wasted futzing with mysterious incantations for matplotlib).

Little accessibility refresher: I showed it to Dave -- "Neat, look at this, San Jose is the blue pair, Flagstaff is green and Page is red." His reaction: "This makes no sense. They all look the same to me. I have no idea which is which." Oops -- right. Don't use color as your only visual indicator. I knew that, supposedly! So I added markers in different shapes for each site. (I wish somebody would teach that lesson to Google Maps, which uses color as its only indicator on the traffic layer, so it's useless for red-green colorblind people.)

Back to the data -- it turns out NOAA doesn't actually have that much historical data available for download. If you search on most of these locations, you'll find sites that claim to have historical temperatures dating back 50 years or more, sometimes back to the 1800s. But NOAA typically only has files starting at about 2005 or 2006. I don't know where sites are getting this older data, or how reliable it is.

Still, averages since 2006 are still interesting to compare. Here's a run of noaatemps.py KSJC KFLG KSAF KLAM KCEZ KPGA KCNY. It's striking how moderate California weather is compared to any of these inland sites. No surprise there. Another surprise was that Los Alamos, despite its high elevation, has more moderate weather than most of the others -- lower highs, higher lows. I was a bit disappointed at how sparse the site list was -- no site in Moab? Really? So I used Canyonlands Field instead.

Anyway, it's fun for a data junkie to play around with, and it prints data on other weather factors, like precipitation and snowpack, although it doesn't plot them yet. The code is on my GitHub scripts page, under Weather.

Anyone found a better source for historical weather information? I'd love to have something that went back far enough to do some climate research, see what sites are getting warmer, colder, or seeing greater or lesser spreads between their extreme temperatures. The NOAA dataset obviously can't do that, so there must be something else that weather researchers use. Data on other countries would be interesting, too. Is there anything that's available to the public?

Tags: , , ,
[ 22:57 Apr 13, 2013    More programming | permalink to this entry | comments ]

Tue, 19 Mar 2013

Letters not used in Python keywords

One of the closing lightning talks at PyCon this year concerned the answers to a list of Python programming puzzles given at some other point during the conference. I hadn't seen the questions (I'm still not sure where they are), but some of the problems looked fun.

One of them was: "What are the letters not used in Python keywords?" I hadn't known about Python's keyword module, which could come in handy some day:

>>> import keyword
>>> keyword.kwlist
['and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'exec', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'not', 'or', 'pass', 'print', 'raise', 'return', 'try', 'while', 'with', 'yield']

So, given the list of keywords, what's the best way to find the list of unique letters?

Any time you want a list of unique anything, you want a set. For instance,

>>> set([1, 2, 3, 2, 2, 4, 5, 1, 5])
set([1, 2, 3, 4, 5])
But first you need a list of letters so you can make a set out of it.

Split the list of words into a list of letters

My first idea was to use list comprehensions. You can split a single word into letters like this:

>>> [ x for x in 'hello' ]
['h', 'e', 'l', 'l', 'o']

It took a bit of fiddling to get the right syntax to apply that to every word in the list:

>>> [[c for c in w] for w in keyword.kwlist]
[['a', 'n', 'd'], ['a', 's'], ['a', 's', 's', 'e', 'r', 't'], ... ]

Update: Dave Foster points out that [list(w) for w in keyword.kwlist] is another, simpler and cleaner way than the double list comprehension.

That's a list of lists, so it needs to be flattened into a single list of letters before we can turn it into a set.

Flatten the list of lists

There are lots of ways to flatten a list of lists. Here are four of them:

[item for sublist in [[c for c in w] for w in keyword.kwlist] for item in sublist]

reduce(lambda x,y: x+y, [[c for c in w] for w in keyword.kwlist])

import itertools
list(itertools.chain.from_iterable([[c for c in w] for w in keyword.kwlist]))

sum([[c for c in w] for w in keyword.kwlist], [])

That last one, using sum(), makes use of the fact that Python uses + for list concatenation -- in other words, that [1, 2, 3] + [4, 5, 6] is [1, 2, 3, 4, 5, 6]. But the first method (item for sublist in) is faster: see Making a flat list out of list of lists in Python on StackOverflow. And another StackOverflow thread has a nice script for plotting speed vs. list size of various flatteners.

A simpler way of making the set

But it turns out none of this list comprehension stuff is needed anyway. set('word') splits words into letters already:

>>> set('bubble')
set(['e', 'b', 'u', 'l'])
Ignore the order -- elements of a set often end up displaying in some strange order. The important thing is that it has all the letters and no repeats.

Now we have an easy way of making a set containing the letters in one word. But how do we apply that to a list of words?

Again I initially tried using list comprehensions, then realized there's an easier way. Given a list of strings, it's trivial to join them into a single string using ''.join(). And that gives us our set of letters within keywords:

>>> set(''.join(keyword.kwlist))
set(['a', 'c', 'b', 'e', 'd', 'g', 'f', 'i', 'h', 'k', 'm', 'l', 'o', 'n', 'p', 's', 'r', 'u', 't', 'w', 'y', 'x'])

What letters are not in the set?

Almost done! But the original problem was to find the letters not in keywords. We can do that by subtracting this set from the set of all letters from a to z. How do we get that? The string module has them all:

>>> string.lowercase
'abcdefghijklmnopqrstuvwxyz'

You could also use a list comprehension and ord and chr (alas, range won't give you a range of letters directly):

>>> [chr(i) for i in range(ord('a'), ord('z')+1)]
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
It's a bit longer, but doesn't require an import.

Now that you have your a-z set, just subtract the two sets:

>>> set(string.lowercase[:]) - set(''.join(keyword.kwlist))
set(['q', 'j', 'z', 'v'])

So the only letters not used in Python keywords are q, j, z and v.

Just a useless little ditty, really ... but I thought it was a fun exercise, so maybe you will too.

Tags: ,
[ 13:36 Mar 19, 2013    More programming | permalink to this entry | comments ]

Sat, 16 Mar 2013

SimpleCV on Raspberry Pi

I'm at PyCon, and I spent a lot of the afternoon in the Raspberry Pi lab.

Raspberry Pis are big at PyCon this year -- because everybody at the conference got a free RPi! To encourage everyone to play, they have a lab set up, well equipped with monitors, keyboards, power and ethernet cables, plus a collection of breadboards, wires, LEDs, switches and sensors.

I'm primarily interested in the RPi as a robotics controller, one powerful enough to run a camera and do some minimal image processing (which an Arduino can't do). And on Thursday, I attended a PyCon tutorial on the Python image processing library SimpleCV. It's a wrapper for OpenCV that makes it easy to access parts of images, do basic transforms like greyscale, monochrome, blur, flip and rotate, do edge and line detection, and even detect faces and other objects. Sounded like just the ticket, if I could get it to work on a Raspberry Pi.

SimpleCV can be a bit tricky to install on Mac and Windows, apparently. But the README on the SimpleCV git repository gives an easy 2-line install for Ubuntu. It doesn't run on Debian Squeeze (though it installs), because apparently it depends on a recent version of pygame and Squeeze's is too old; but Ubuntu Pangolin handled it just fine.

The question was, would it work on Raspbian Wheezy? Seemed like a perfect project to try out in the PyCon RPi lab. Once my RPi was set up and I'd run an apt-get update, I used netsurf (the most modern of the lightweight browsers available on the RPi) to browse to the SimpleCV installation instructions. The first line,

sudo apt-get install ipython python-opencv python-scipy python-numpy python-pygame python-setuptools python-pip
was no problem. All those packages are available in the Raspbian repositories.

But the second line,

sudo pip install https://github.com/ingenuitas/SimpleCV/zipball/master
failed miserably. Seems that pip likes to put its large downloaded files in /tmp; and on Raspbian, running off an SD card, /tmp quite reasonably is a tmpfs, running in RAM. But that means it's quite small, and programs that expect to be able to use it to store large files are doomed to failure.

I tried a couple of quick workarounds, with no success. You can't rename /tmp to replace it with a symlink to a directory on the SD card, because /tmp is always in use. And pip makes a new temp directory name each time it's run, so you can't just symlink the pip location to a place on the SD card.

I thought about rebooting after editing the tmpfs out of /etc/fstab, but it turns out it's not set up there, and it wasn't obvious how to disable the tmpfs. Searching later from home, I found that the size is set in /etc/default/tmpfs. As for disabling the tmpfs and using the SD card instead, it's not clear. There's a block of code in /etc/init.d/mountkernfs.sh that makes that decision; it looks like symlinking /tmp to somewhere else might do it, or else commenting out the code that sets RAMTMP="yes". But I haven't tested that.

Instead of rebooting, I downloaded the file to the SD card:

wget https://github.com/ingenuitas/SimpleCV/master

But it turned out it's not so easy to pip install from a local file. After much fussing around I came up with this, which worked:

pip install http:///home/pi/master --download-cache /home/pi/tmp

That worked, and the resulting SimpleCV install worked nicely! I typed some simple tests into the simplecv shell, playing around with their built-in test image "lenna":

img = Image('lenna')
img.show()
img.binarize().show()
img.toGray().show()
img.edges().show()
img.invert().show()

And, for something a little harder, some face feature detection: let's find her eyes and outline them in yellow.

img.listHaarFeatures()
img.findHaarFeatures('eye.xml').draw(color=Color.YELLOW)
[Lenna, edges] [Lenna, eyes detected]

SimpleCV is lots of fun! And the edge detection was quite fast on the RPi -- this may well be usable by a robot, once I get the motors going.

Tags: , , , ,
[ 21:43 Mar 16, 2013    More linux/install | permalink to this entry | comments ]

Thu, 21 Feb 2013

New project: Metapho image tagger

I'm excited about my new project: MetaPho, an image tagger.

It arose out of a discussion on the LinuxChix Techtalk list: photo collection management software. John Sturdy was looking for an efficient way of viewing and tagging large collections of photos. Like me, he likes fast, lightweight, keyboard-driven programs. And like me, he didn't want a database-driven system that ties you forever to one image cataloging program. I put my image tags in plaintext files, named Keywords, so that I can easily write scripts to search or modify them, or use grep, and I can even make quick changes with a text editor.

I shared some tips on how I use my Pho image viewer for tagging images, and it sounded close to what he was looking for. But as we discussed ideas about image tagging, we realized that there were things he wanted to do that pho doesn't do well, things not offered by any other image tagger we've been able to find. While discussing how we might add new tagging functionality to pho, I increasingly had the feeling that I was trying to fit off-road tires onto a Miata -- or insert your own favorite metaphor for "making something do something it wasn't designed to do."

Pho is a great image viewer, but the more I patched it to handle tagging, the uglier and more complicated the code got, and it also got more complex to use.

[metapho screenshot] And really, everything we needed for tagging could be easily done in a Python-GTK application. (Pho is written in C because it does a lot of complicated focus management to deal with how window managers handle window moving and resizing. A tagger wouldn't need any of that.)

I whipped up a demo image viewer in a few hours and showed it to John. We continued the discussion, I made a GitHub repo, and over the next week or so the code grew into an efficient and already surprisingly usable image tagger.

We have big plans for it, like tags organized into categories so we can have lots of tags without cluttering the interface too much. But really, even as it is, it's better than anything I've used before. I've been scanning in lots of photos from old family albums (like this one of my mother and grandmother, and me at 9 months) and it's been great to be able to add and review tags easily.

If you want to check out MetaPho, or contribute to it (either code or user interface design), it lives in my MetaPho repository on GitHub. And I wrote up a quick man page in markdown format: metapho.1.md.

Feedback and contributors welcome!

Tags: , , , , ,
[ 19:31 Feb 21, 2013    More programming | permalink to this entry | comments ]

Sat, 19 Jan 2013

Converting C to Python with a vi regexp

I'm fiddling with a serial motor controller board, trying to get it working with a Raspberry Pi. (It works nicely with an Arduino, but one thing I'm learning is that everything hardware-related is far easier with Arduino than with RPi.)

The excellent Arduino library helpfully provided by Pololu has a list of all the commands the board understands. Since it's Arduino, they're in C++, and look something like this:

#define QIK_GET_FIRMWARE_VERSION         0x81
#define QIK_GET_ERROR_BYTE               0x82
#define QIK_GET_CONFIGURATION_PARAMETER  0x83
[ ... ]
#define QIK_CONFIG_DEVICE_ID                        0
#define QIK_CONFIG_PWM_PARAMETER                    1
and so on.

On the Raspberry Pi side, I'd prefer to use Python, so I need to get them to look more like:

    QIK_GET_FIRMWARE_VERSION = 0x81
    QIK_GET_ERROR_BYTE = 0x82
    QIK_GET_CONFIGURATION_PARAMETER = 0x83
[ ... ]
    QIK_CONFIG_DEVICE_ID = 0
    QIK_CONFIG_PWM_PARAMETER = 1
... and so on ... with an indent at the beginning of each line since I want this to be part of a class.

There are 32 #defines, so of course, I didn't want to make all those changes by hand. So I used vim. It took a little fiddling -- mostly because I'd forgotten that vim doesn't use a bare + to mean "one or more repetitions" (in vim regexps that's \+), so I used * instead. Here's the expression I ended up with:

.,$s/\#define *\([A-Z0-9_]*\) *\(.*\)/    \1 = \2/

In English, you can read this as:

From the current line to the end of the file (.,$), look for a pattern consisting of only capital letters, digits and underscores ([A-Z0-9_]). Save that as expression #1 (\( \)). Skip over any spaces, then take the rest of the line (.*), and call it expression #2 (\( \)).

Then replace all that with a new line consisting of 4 spaces, expression 1, a spaced-out equals sign, and expression 2 (    \1 = \2).

Who knew that all you needed was a one-line regular expression to translate C into Python?

(Okay, so maybe it's not quite that simple. Too bad a regexp won't handle the logic inside the library as well, and the pin assignments.)

Tags: , , , , ,
[ 21:38 Jan 19, 2013    More linux/editors | permalink to this entry | comments ]

Wed, 31 Oct 2012

Comparing sunset times with PyEphem

We were marveling at how early it's getting dark now -- seems like a big difference even compared to a few weeks ago. Things change fast this time of year.

Since we're bouncing back and forth a lot between southern and northern California, Dave wondered how Los Angeles days differed from San Jose days. Of course, San Jose being nearly 4 degrees farther north, it should have shorter days -- but by the weirdness of orbital mechanics that doesn't necessarily mean that the sun sets earlier in San Jose. His gut feel was that LA was actually getting an earlier sunset.

"I can calculate that," I said, and fired up a Python interpreter to check with PyEphem. Since PyEphem doesn't know San Jose (hmph! San Jose is bigger than San Francisco) I used San Francisco.

Since PyEphem's Observer class only has next_rising() and next_setting(), I had to set a start date of midnight so I could subtract the two dates properly to get the length of the day.

>>> sun = ephem.Sun()
>>> la = ephem.city('Los Angeles')
>>> sf = ephem.city('San Francisco')
>>> 
>>> mid = ephem.Date('2012/10/31 8:00')
>>> 
>>> la.next_rising(sun, start=mid)
2012/10/31 14:11:57
>>> la.next_setting(sun, start=mid)
2012/11/1 01:00:45
>>> la.next_setting(sun, start=mid) - la.next_rising(sun, start=mid)
0.45055988136300584
>>> 
>>> sf.next_rising(sun, start=mid)
2012/10/31 14:34:19
>>> sf.next_setting(sun, start=mid)
2012/11/1 01:11:44
>>> sf.next_setting(sun, start=mid) - sf.next_rising(sun, start=mid)
0.4426457611261867

So Dave's intuition was right: northern California really does have a later sunset than southern California at this time of year, even though the total day length is shorter -- the difference in sunrise time makes up for the later sunset.

How much shorter?

>>> (la.next_setting(sun, start=mid) - la.next_rising(sun, start=mid)) - (sf.next_setting(sun, start=mid) - sf.next_rising(sun, start=mid))
0.007914120236819144
>>> ((la.next_setting(sun, start=mid) - la.next_rising(sun, start=mid)) - (sf.next_setting(sun, start=mid) - sf.next_rising(sun, start=mid))) * 24
0.18993888568365946
>>> ((la.next_setting(sun, start=mid) - la.next_rising(sun, start=mid)) - (sf.next_setting(sun, start=mid) - sf.next_rising(sun, start=mid))) * 24 * 60
11.396333141019568

And we have our answer -- there's about 11 minutes difference in day length between SF and LA.

Tags: , ,
[ 11:46 Oct 31, 2012    More science/astro | permalink to this entry | comments ]

Wed, 17 Oct 2012

Asynchronous sound playing in Python

A little while back I wrote about my Python xchat script to play sound alerts.

But one thing that's been annoying me about it -- it was a problem with the old perl alert script too -- is the repeated sounds. If lots of twitter updates come in on the Bitlbee channel, or if someone pastes numerous lines into a channel, I hear POPPOPPOPPOPPOPPOP or repetitions of whatever the alert sound is for that type of message. It's annoying to me, but even more so to anyone else in the same room.

It would be so much nicer if I could have it play just one repetition of any given alert, even if there are eight lines all coming in at the same time. So I decided to write a Python class to handle that.

My existing code used subprocesses to call the basic ALSA sound player, /usr/bin/aplay -q. Initially I used
if not os.fork() : os.execl(APLAY, APLAY, "-q", alertfile)
but I later switched to the cleaner
subprocess.call([APLAY, '-q', alertfile])
But of course, it would be better to do it all from Python without requiring an external process like aplay. So I looked into that first.

Sadly, it turns out Python audio support is a mess. The built-in libraries are fairly limited in functionality and formats, and the external libraries that handle sound are mostly unmaintained, unless you want to pull in a larger library like pygame. After a little web searching I decided that maybe an aplay subprocess wasn't so bad after all.

Okay, so how should I handle the subprocesses? I decided the best way was to keep track of what sound was currently playing. If another alert fires for the same sound while that sound is already playing, just ignore it. If an alert comes in for a different sound, then wait() for the current sound to finish, then start the new sound.

That's all quite easy with Python's subprocess module. subprocess.Popen() returns a Popen object that tracks a process ID and can check whether that process has finished or not. If self.curpath is the path to the sound currently playing and self.current is the Popen object for whatever aplay process is currently running, then:

    if self.current :
        if self.current.poll() == None :
            # Current process hasn't finished yet. Is this the same sound?
            if path == self.curpath :
                # A repeat of the currently playing sound.
                # Don't play it more than once.
                return
            else :
                # Trying to play a different sound.
                # Wait on the current sound then play the new one.
                self.wait()

    self.curpath = path
    self.current = subprocess.Popen([ "/usr/bin/aplay", '-q', path ] )

Finally, it's a good idea when exiting the program to check whether any aplay process is running, and wait() for it. Otherwise, you might end up with a zombie aplay process.

    def __del__(self) :
        self.wait()

I don't know if xchat actually closes down Python objects gracefully, so I don't know whether the __del__ destructor will actually be called. But at least I tried. It's possible that a context manager might be more reliable.
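
If that turns out to matter, a context-manager wrapper is only a few lines -- a sketch, not part of the actual scripts, assuming the SoundPlayer class described above:

class ManagedSoundPlayer(SoundPlayer):
    # Use as: with ManagedSoundPlayer() as player: ...
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_val, traceback):
        # Guaranteed to run when the with block exits, even on errors:
        self.wait()
        return False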

The full scripts are on github at pyplay.py for the basic SoundPlayer class, and chatsounds.py for the xchat script that includes SoundPlayer.

Tags: , ,
[ 13:07 Oct 17, 2012    More programming | permalink to this entry | comments ]

Wed, 26 Sep 2012

Writing xchat scripts in Python (to play sound alerts)

I use xchat as my IRC client. Mostly I like it, but its sound alerts aren't quite as configurable as I'd like. I have a few channels, like my Bitlbee Twitter feed, where I want a much more subtle alert, or no alert at all. And I want an easy way of turning sounds on and off, in case I get busy with something and need to minimize distractions.

Years ago I grabbed a perl xchat plug-in called "Smet's NickSound" that did something close to what I wanted. I've hacked a few things into it. But every time I try to customize it any further, I'm hit with the pain of write-only Perl. I've written Perl scripts, honest. But I always have a really hard time reading anyone else's Perl code and figuring out what it's doing. When I dove in again recently to try to figure out why I was getting so many alerts when first starting up xchat, I finally decided: learning how to write a Python xchat script couldn't be any harder than reverse engineering a Perl one.

First, of course, I looked for an existing nick sound Python script ... and totally struck out. In fact, mostly I struck out on finding any xchat Python scripts at all. I know there are Python bindings for xchat, because there's documentation for them. But sample plug-ins? Nope. For some reason, nobody's writing xchat plug-ins in Python.

I eventually found two minimal examples: this very simple example and the more elaborate utf8decoder. I was able to put them together and cobble up a working nick sound plug-in. It's easy once you have an example to work from to help you figure out the event hook arguments.
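
For reference, the skeleton such a plug-in needs is tiny. Here's a minimal sketch (the three module strings are required by xchat; the sound path is just a placeholder) that plays an alert on every channel message:

__module_name__ = "minimal-nicksound"
__module_version__ = "0.1"
__module_description__ = "Minimal example: sound alert on channel messages"

import subprocess
import xchat

def msg_cb(word, word_eol, userdata):
    # word[0] is the nick, word[1] the message text.
    subprocess.call(['/usr/bin/aplay', '-q', '/path/to/alert.wav'])
    return xchat.EAT_NONE    # let xchat finish processing the event

xchat.hook_print("Channel Message", msg_cb)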

So here's my own little example, which may help the next person trying to learn xchat Python scripting: chatsounds.py on github.

Tags: , , ,
[ 22:13 Sep 26, 2012    More programming | permalink to this entry | comments ]

Wed, 19 Sep 2012

Python: Do two dates span a particular day of the week or month?

When I'm using my RSS reader FeedMe, I normally check every feed every day. But that can be wasteful: some feeds, like World Wide Words, only update once a week. A few feeds update even less often, like serialized books that come out once a month or whenever the author has time to add something new.

So I decided it would be nice to add some "when" logic to FeedMe, so I could add when = Sat in the config section for World Wide Words and have it only update once a week.

That sounded trivial -- a little python parsing logic to tell days from numbers, a few calls to time.localtime() and I was done.

Except of course I wasn't. Because sometimes, like when I'm on vacation, I don't always update every day. If I missed a Saturday, then I'd never see that week's edition of World Wide Words. And that would be terrible!

So what I really needed was a way to ask, "Has a Saturday occurred (including today) since the last time I ran feedme?"

The last time I ran feedme is easy to determine: it's in the last modified date of the cache file. Or, in more Pythonic terms, it's os.stat(cachefile).st_mtime. And of course I can get the current time with time.localtime(). But how do I figure out whether a given week or month day falls between those two dates?

I'm sure this particular wheel has been invented many times. There's probably even a nifty Python library somewhere to do it. But how do you google for that? I tried to think of keywords and found nothing. So I went for a nice walk in the redwoods and thought about it for a bit, and came up with a solution.

Turns out for the week day case, you can just use modular arithmetic: if (weekday_2 - target_weekday) % 7 < (weekday_2 - weekday_1) then the day does indeed fall between the two dates.
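
Here's that weekday test in sketch form. Note I've phrased it with the number of days elapsed between runs (plus a shortcut for gaps of a week or more), which amounts to the same modular comparison:

def weekday_fell_between(target_wday, wday2, days_elapsed):
    """Did target_wday (0=Monday, as in time.localtime().tm_wday) occur
       in the last days_elapsed days, including today (wday2)?"""
    if days_elapsed >= 7:
        return True     # a full week passed, so every weekday occurred
    # Counting back from today, the target was (wday2 - target) % 7 days ago:
    return (wday2 - target_wday) % 7 <= days_elapsed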

Things are a little more complicated for the day of the month, though, because you don't know whether you need mod 30 or 31 or 29 or 28, so you either have to make your own table, or import the calendar module just so you can call calendar.monthrange().

I decided it was easier to use logic: if the difference between the two dates is greater than 31, then it definitely includes any month day. Otherwise, check whether they're in the same month or not, and do greater than/less than comparisons on the three dates.
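
And a sketch of the month-day version (the real falls_between.py wraps more conversions around this):

import time

def monthday_fell_between(target_day, t1, t2):
    # t1 and t2 are time.struct_time values, t1 the earlier date.
    if time.mktime(t2) - time.mktime(t1) > 31 * 24 * 60 * 60:
        return True     # over a month apart: every month-day occurred
    if t1.tm_mon == t2.tm_mon:
        return t1.tm_mday < target_day <= t2.tm_mday
    # The interval spans a month boundary:
    return target_day > t1.tm_mday or target_day <= t2.tm_mday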

Throw in a bunch of conversion to make it easy to call, and a bunch of unit tests to make sure everything works and my later tweaks don't break anything, and I had a nice function I could call from Feedme.

falls_between.py on github

Tags: ,
[ 22:07 Sep 19, 2012    More programming | permalink to this entry | comments ]

Sun, 02 Sep 2012

GIMP plug-in to export scaled-down versions of images

In a discussion on Google+ arising from my Save/Export clean plug-in, someone said to the world in general

PLEASE provide an option to select the size of the export. Having to scale the XCF then export then throw out the result without saving is really awkward.

I thought, What a good idea! Suppose you're editing a large image, with layers and text and other jazz, saving in GIMP's native XCF format, but you want to export a smaller version for the web. Every time you make a significant change, you have to: Scale (remembering the scale size or percentage you're targeting); Save a Copy (or Export in GIMP 2.8); then Undo the Scale. If you forget the Undo, you're in deep trouble and might end up overwriting the XCF original with a scaled-down version.

If I had a plug-in that would export to another file type (such as JPG) with a scale factor, remembering that scale factor so I didn't have to, it would save me both effort and risk.

And that sounded pretty easy to write, using some of the tricks I'd learned from my Save/Export Clean and wallpaper scripts. So I wrote export-scaled.py

It's still brand new, so if anyone tries it, I'd appreciate knowing if it's useful or if you have any problems with it.

Geeky programming commentary

(Folks not interested in the programming details can stop reading now.)

Linked input fields

[screenshot: linked input fields] One fun project was writing a set of linked text entries for the dialog:
Scale to: Percentage 100 % Width: 640 Height: 480

Change any one of the three, and the other two change automatically. There's no chain link between width and height: It's assumed that if you're exporting a scaled copy, you won't want to change the image's aspect ratio, so any one of the three is enough.

That turned out to be surprisingly hard to do with GTK SpinButtons: I had to read their values as strings and parse them, because the numeric values kept snapping back to their original values as soon as focus went to another field.

Image parasites

Another fun challenge was how to save the scale ratio, so the second time you call up the plug-in on the same image it uses whatever values you used the first time. If you're going to scale to 50%, you don't want to have to type that in every time. And of course, you want it to remember the exported file path, so you don't have to navigate there every time.

For that, I used GIMP parasites: little arbitrary pieces of data you can attach to any image. I've known about parasites for a long time, but I'd never had occasion to use them in a Python plug-in before. I was pleased to find that they were documented in the official GIMP Python documentation, and they worked just as documented. It was easy to test them, too: in the Python console (Filters->Python->Console...), type something like

img = gimp.image_list()[0]
img.parasite_list()
img.parasite_find(img.parasite_list()[0])
and so forth. Nice!

Not prompting for JPG settings

My plug-in was almost done. But when I ran it and told it to save to filenamecopy.jpg, it prompted me with that annoying JPEG settings dialog. Okay, being prompted once isn't so bad. But then when I exported a second time, it prompted me again, and didn't remember the values from before. So the question was, what controls whether the settings dialog is shown, and how could I prevent it?

Of course, I could prompt the user for JPEG quality, then call file-jpeg-save directly -- but what if you want to export to PNG or GIF or some other format? I needed something more general.

Turns out, nobody really remembers how this works, and it's not documented anywhere. Some people thought that passing run_mode=RUN_WITH_LAST_VALS when I called pdb.gimp_file_save() would do the trick, but it didn't help.

So I guessed that there might be a parasite that was storing those settings: if the JPEG save plug-in sees the parasite, it uses those values and doesn't prompt. Using the Python console technique I just mentioned, I tried checking the parasites on a newly created image and on an image read in from an existing JPG file, then saving each one as JPG and checking the parasite list afterward.

Bingo! When you read in a JPG file, it has a parasite called 'jpeg-settings'. (The new image doesn't have this, naturally). But after you write a file to JPG from within GIMP, it has not only 'jpeg-settings' but also a second parasite, 'jpeg-save-options'.

So I made the plug-in check the scaled image after saving it, looking for any parasites with names ending in either -settings or -save-options; any such parasites are copied to the original image. Then, the next time you invoke Export Scaled, it does the same search, and copies those parasites to the scaled image before calling gimp-file-save.
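
In sketch form, the copying step looks something like this (the exact pygimp calls in export-scaled.py may differ):

# Copy save-settings parasites from the scaled image back to the original:
for name in scaled_img.parasite_list():
    if name.endswith('-settings') or name.endswith('-save-options'):
        para = scaled_img.parasite_find(name)
        orig_img.attach_new_parasite(name, para.flags, para.data)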

That darned invisible JPG settings dialog

One niggling annoyance remained. The first time you get the JPG settings dialog, it pops up invisibly, under the Export dialog you're using. So if you didn't know to look for it by moving the dialog, you'd think the plug-in had frozen. GIMP 2.6 had a bug where that happened every time I saved, so I assumed there was nothing I could do about it.

GIMP 2.8 has fixed that bug -- yet it still happened when my plug-in called gimp_file_save: the JPG dialog popped up under the currently active dialog, at least under Openbox.

There isn't any way to pass window IDs through gimp_file_save, so the JPG dialog pops up as transient to a particular window. But a few days after I wrote export-scaled.py, I realized there was still something I could do: hide the dialog when the user clicks Save. Then make sure that I show it again if any errors occur during saving.

Of course, it wasn't quite that simple. Calling chooser.hide() by itself does nothing, because X is asynchronous and things don't happen in any predictable order. But it's possible to force X to sync the display: chooser.get_display().sync().
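
So the save path ends up looking roughly like this:

chooser.hide()
chooser.get_display().sync()    # force X to process the hide right away
# ... then do the actual save; if it raises an error,
# call chooser.show() so the user isn't left with no dialog.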

I'm not sure how robust this is going to be -- but it seems to work well in the testing I've done so far, and it's really nice to get that huge GTK file chooser dialog out of the way as soon as possible.

Tags: , ,
[ 18:34 Sep 02, 2012    More gimp | permalink to this entry | comments ]

Tue, 21 Aug 2012

GIMP: Re-uniting Save and Export

In GIMP 2.8, the developers changed the way you save files. "Save" is now used only for GIMP's native format, XCF (and compressed variants like .xcf.gz and .xcf.bz2). Other formats that may lose information on layers, fonts and other aspects of the edited image must be "Exported" rather than saved.

This has caused much consternation and flameage on the gimp-user mailing list, especially from people who use GIMP primarily for simple edits to JPEG or PNG files.

I don't particularly like the new model myself. Sometimes I use GIMP in the way the developers are encouraging, adding dozens of layers, fonts, layer masks and other effects. Much more often, I use GIMP to crop and rescale a handful of JPG photos I took with my camera on a hike. While I found it easy enough to adapt to using Ctrl-E (Export) instead of Ctrl-S (Save), it was annoying that when I exited the app, I'd always get an "Unsaved images" warning, and it was impossible to tell from the warning dialog which images were safely exported and which might not have been saved or exported at all.

But flaming on the mailing lists, much as some people seem to enjoy it (500 messages on the subject and still counting!) wasn't the answer. The developers have stated very clearly that they're not going to change the model back. So is there another solution?

Yes -- a very simple solution, in fact. Write a plug-in that saves or exports the current image back to its current file name, then marks it as clean so GIMP won't warn about it when you quit.

It turned out to be extremely easy to write, and you can get it here: GIMP: Save/export clean plug-in. If it suits your GIMP workflow, you can even bind it to Ctrl-S ... or any other key you like.

Warning: I deliberately did not add any "Are you sure you want to overwrite?" confirmation dialogs. This plug-in will overwrite your current file, without asking for permission. After all, that's its job. So be aware of that.

How it's written

Here are some details about how it works. Non software geeks can skip the rest of this article.

When I first looked into writing this, I was amazed at how simple it was: really just two lines of Python (plus the usual plug-in registration boilerplate).

    pdb.gimp_file_save(img, drawable, img.filename, img.filename)
    pdb.gimp_image_clean_all(img)

The first line saves the image back to its current filename. (The gimp-file-save PDB call still handles all types, not just XCF.) The second line marks the image as clean.

Both of those are PDB calls, which means that people who don't have GIMP Python could write script-fu to do this part.

So why didn't I use script-fu? Because I quickly found that if I bound the plug-in to Ctrl-S, I'd want to use it for new images -- images that don't have a filename yet. And for that, you need to pop up some sort of "Save as" dialog -- something Python can do easily, and Script-fu can't do at all.

A Save-as dialog with smart directory default

I couldn't use the standard GIMP save-as dialog: as far as I can tell, there's no way to call that dialog from a plug-in. But it turned out the GTK save-as dialog has no default directory to start in: you have to set the starting directory every time. So I needed a reasonable initial directory.

I didn't want to come up with some desktop twaddle like ~/My Pictures or whatever -- is there really anyone that model fits? Certainly not me. I debated maintaining a preference you could set, or saving the last used directory as a preference, but that complicates things and I wasn't sure it's really that helpful for most people anyway.

So I thought about where I usually want to save images in a GIMP session. Usually, I want to save them to the same directory where I've been saving other images in the same session, right?

I can figure that out by looping through all currently open images with for img in gimp.image_list() : and checking os.path.dirname(img.filename) for each one. Keep track of how many times each directory is being used; whichever is used the most times is probably where the user wants to store the next image.

Keeping count in Python

Looping through is easy, but what's the cleanest, most Pythonic way of maintaining the count for each directory and finding the most popular one? Naturally, Python has a class for that, collections.Counter.

Once I've counted everything, I can ask for the most common path. The code looks a bit complicated because most_common(1) returns a one-item list of a tuple of the single most common path and the number of times it's been used -- for instance, [ ('/home/akkana/public_html/images/birds', 5) ]. So the path is the first element of the first element, or most_common(1)[0][0]. Put it together:

    counts = collections.Counter()
    for img in gimp.image_list() :
        if img.filename :
            counts[os.path.dirname(img.filename)] += 1
    try :
        return counts.most_common(1)[0][0]
    except :
        return None

So that's the only tricky part of this plug-in. The rest is straightforward, and you can read the code on GitHub: save-export-clean.py.

Tags: , ,
[ 12:26 Aug 21, 2012    More gimp | permalink to this entry | comments ]

Sat, 09 Jun 2012

Viewing and modifying epub ebook tags

My epub Books folder is starting to look like my physical bookshelf at home -- huge and overflowing with books I hope to read some day. Mostly free books from the wonderful Project Gutenberg and DRM-free books from publishers and authors who support that model.

With the Nook's standard library viewer that's impossible to manage. All you can do is sort all those books alphabetically by title or author and laboriously page through, some five books to a page, hoping the one you want will catch your eye. Worse, sometimes books show up in the author view but don't show up in the title view, or vice versa. I guess Barnes & Noble think nobody keeps more than ten or so books on their shelves.

Fortunately on my rooted Nook I have the option of using better readers, like FBreader and Aldiko, that let me sort by tags. If I want to read something about the Civil War, or Astronomy, or just relax with some Science Fiction, I can browse by keyword.

Well, in theory. In practice, tagging of ebooks is inconsistent and not very useful.

For instance, the Gutenberg tag lists for Othello, Vanity Fair, The Prince and the Pauper and Captains Courageous [tag lists omitted] are each a long run of very specific subject headings, with hardly any overlap from one book to the next.
I can understand wanting to tag details like this, but few of those tags are helpful when I'm browsing books on my little handheld device. I can't imagine sitting down to read and thinking, "Let's see, what books do I have on Interracial marriage? Or Saltwater fishing? No, on second thought I'd rather read some fiction set in the time of Edward VI, King of England, 1537-1553."

And of course, with over 90 books loaded on my ebook readers, it means I have hundreds of entries in my tags list, with few of them including more than one book.

Clearly what I needed to do was to change the tags on my ebooks.

Viewing and modifying epub tags

That ought to be simple, right? But ebooks are still a very young technology, and there's surprisingly little software devoted to them. Calibre can probably do it if you don't mind maintaining your whole book collection under calibre; but I like to be able to work on files one at a time or in small groups. And I couldn't find a program that would let me do that.

What to do? Well, epub is a fairly simple XML format, right? So modifying it with Python shouldn't be that hard.

Managing epub in Python

An epub file is a collection of XML files packaged in a zip archive. So I unzipped one of my epub books and poked around. I found the tags in a file called content.opf, inside a <metadata> tag. They look like this:

<dc:subject>Science fiction</dc:subject>

So I could use Python's zipfile module to access the content.opf file inside the zip archive, then use the xml.dom.minidom parser to get to the tags. Writing a script to display existing tags was very easy.
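
Here's roughly what the tag-printing part looks like -- a sketch, assuming the package file's name ends in content.opf, which was true for all my books:

import zipfile
import xml.dom.minidom

def print_epub_tags(path):
    zf = zipfile.ZipFile(path)
    for name in zf.namelist():
        if name.endswith('content.opf'):
            dom = xml.dom.minidom.parseString(zf.read(name))
            # Each dc:subject element is one tag:
            for node in dom.getElementsByTagName('dc:subject'):
                print node.firstChild.data
            break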

What about replacing the old, unwieldy tag list with new, simple tags?

It's easy enough to add nodes in Python's minidom. So the trick is writing it back to the epub file. The zipfile module doesn't have a way to modify a zip file in place, so I created a new zip archive and copied files from the old archive to the new one, replacing content.opf with a new version.
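
The copy loop is straightforward; here's a sketch. (One caveat worth noting: a strictly conforming epub wants its mimetype entry first and uncompressed, so a careful version would treat that member specially.)

import zipfile

def replace_in_zip(inpath, outpath, member, newdata):
    zin = zipfile.ZipFile(inpath)
    zout = zipfile.ZipFile(outpath, 'w', zipfile.ZIP_DEFLATED)
    for info in zin.infolist():
        if info.filename == member:
            zout.writestr(info, newdata)   # substitute the edited XML
        else:
            zout.writestr(info, zin.read(info.filename))
    zout.close()
    zin.close()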

Python's difficulty with character sets in XML

But I hit a snag in writing the new content.opf. Python's XML classes have a toprettyxml() method to write the contents of a DOM tree. Seemed simple, and that worked for several ebooks ... until I hit one that contained a non-ASCII character. Then Python threw a UnicodeEncodeError: 'ascii' codec can't encode character u'\u2014' in position 606: ordinal not in range(128).

Of course, there are ways (lots of them) to encode that output string -- I could do

ozf.writestr(info, dom.toprettyxml().encode(encoding, 'xmlcharrefreplace'))

or

ozf.writestr(info, dom.toprettyxml(encoding=encoding))
Except ... what should I pass as the encoding? The content.opf file started with its encoding:
<?xml version='1.0' encoding='UTF-8'?>
but Python's minidom offers no way to get that information. In fact, none of Python's XML parsers seem to offer this.

Since you need a charset to avoid the UnicodeEncodeError, the only options are (1) always use a fixed charset, like utf-8, for content.opf, or (2) open content.opf and parse the charset line by hand after Python has already parsed the rest of the file. Yuck! So I chose the first option ... I can always revisit that if the utf-8 in content.opf ever causes problems.

The final script

Charset difficulties aside, though, I'm quite pleased with my epubtag.py script. It's very handy to be able to print tags on any .epub file, and after cleaning up the tags on my ebooks, it's great to be able to browse by category in FBreader. Here's the program: epubtag.py.

Tags: , ,
[ 13:05 Jun 09, 2012    More programming | permalink to this entry | comments ]

Sat, 26 May 2012

Use stdeb to make Debian packages for a Python package

I write a lot of little Python scripts. And I use Ubuntu and Debian. So why aren't any of my scripts packaged for those distros?

Because Debian packaging is absurdly hard, and there's very little documentation on how to do it. In particular, there's no help on how to take something small, like a Python script, and turn it into a package someone else could install on a Debian system. It's pretty crazy, since RPM packaging of Python scripts is so easy.

Recently at the Ubuntu Developers' Summit, Asheesh of OpenHatch pointed me toward a Python package called stdeb that simplifies a lot of the steps and makes Python packaging fairly straightforward.

You'll need a setup.py file to describe your Python script, and you'll probably want a .desktop file and an icon. If you haven't done that before, see my article on Packaging Python for MeeGo for some hints.
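
If you haven't written one before, a minimal setup.py for a single script can be as short as this sketch (every name here is a placeholder):

from distutils.core import setup

setup(name='myscript',
      version='0.1',
      description='One-line description of the script',
      author='Your Name',
      author_email='you@example.com',
      url='http://example.com/myscript',
      scripts=['myscript'],
      )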

Then install python-stdeb. The package has some requirements that aren't listed as dependencies, so you'll need to install:

apt-get install python-stdeb fakeroot python-all
(I have no idea why it needs python-all, which installs only a directory /usr/share/doc/python-all with some policy documentation files, but if you don't install it, stdeb will fail later.)

Now create a config file for stdeb to tell it what Debian/Ubuntu version you're going to be targeting, if it's anything other than Debian unstable (stdeb's default). Unfortunately, there seems to be no way to pass this on the command line rather than in a config file. So if you want to make packages for several distros, you'll have to edit the config file for every distro you want to support. Here's what I'm using for Ubuntu 12.04 Precise Pangolin:

[DEFAULT]
Suite: precise

Now you're ready to run stdeb. I know of two ways to run it. You can generate both source and binary packages, like this:

python setup.py --command-packages=stdeb.command bdist_deb
Or you can generate source packages only, like this:
python setup.py --command-packages=stdeb.command sdist_dsc

Either syntax creates a directory called deb_dist. It contains a lot of files including a source .dsc, several tarballs, a copy of your source directory, and (if you used bdist_deb) a binary .deb package.

If you used the bdist_deb form, don't be put off that it concludes with a message:

dpkg-buildpackage: binary only upload (no source included)
It's fibbing: the source .dsc is there as well as the binary .deb. I presume it prints the warning because it creates them as separate steps, and the binary is the last step.

Now you can use dpkg -i to install your binary deb, or you can use the source dsc for various purposes, like creating a repository or a Launchpad PPA. But those involve a lot more steps -- so I'll cover that in a separate article about creating PPAs.

Update: you can find that article here: Creating packages for a Launchpad PPA.

Tags: , , , ,
[ 11:44 May 26, 2012    More programming | permalink to this entry | comments ]

Fri, 27 Apr 2012

Venus is at its brightest -- why? And how to calculate it

Venus has been a beautiful sight in the evening sky for months, but at the end of April it's reaching a brightness peak, magnitude -4.7.

By then, if you look at it in a telescope or even good binoculars, you'll see it has waned to a crescent. That's a bit non-obvious: when the moon is a crescent, it's a lot fainter than a full moon. So why is Venus brightest in its crescent phase?

It has to do with their orbits. The moon is always about the same distance away, about 385,000 km or 239,000 miles (I've owned cars with more miles than that!), though it varies a little, from 362,600 km at perigee to 405,400 km at apogee.

When we look at the full moon, not only are we seeing the whole Earth-facing surface illuminated, but the central part of that light is reflecting straight up off the moon's surface. When we look at a crescent moon, we're seeing light that's near the moon's sunrise or sunset point -- dimmer and more spread out than the concentrated light of noon -- and in addition we're seeing less of it.

Venus, in contrast, varies its distance from us immensely. We can't see Venus when it's "full", because it's on the other side of the sun from us and lost in the sun's glow. It'll next be there a year from now, in April of 2013. But if we could see it when it's full, Venus would be a distant 1.7 AU from us. An AU is an Astronomical Unit, the average distance of the earth from the sun, or about 93 million miles, so Venus when it's full is about 158 million miles away. Its disk is a tiny 9.9 arcseconds (an arcsecond is 1/3600 of a degree) -- about the size of Mars this month.

In contrast, when we look at the crescent Venus around the end of this month, although we're only seeing about 28% of its surface illuminated, and that only with glancing twilight rays, it's much closer to us -- less than half an AU, or about 45 million miles -- and its disk extends a huge 37 arcseconds, bigger than Jupiter this month.

Of course, eventually, as Venus pulls between us and the sun, its crescent gets so slim that even its huge size can't compensate. So its peak brightness happens when those two curves cross, when the disk is somewhere around 27% illuminated, as happens at the end of this month and the beginning of May.

Exactly when? Good question. The RASC Handbook says Venus' "greatest illuminated extent" is on April 30, but PyEphem and XEphem say Venus is actually brighter from May 3-8 ... and when it emerges from the sun's glare and moves into the morning sky in June, it'll be slightly brighter still, peaking at magnitude -4.8 in the first week of July.)

Tracking Venus with PyEphem

When I started my Shallow Sky column this month, I saw the notice of Venus's maximum brightness and greatest illuminated extent in the RASC Handbook. But I wanted more details -- how much did its distance and size really change, when would the brightness peak again as it emerged from the sun's glare, when would it next be "full"?

PyEphem made it easy to calculate all this. Just create an ephem.Venus() object, calculate its values for any date of interest, then print out parameters like phase, mag, earth_distance and size. In just a few minutes of programming, I had a nice table of Venus data.

import ephem

venus = ephem.Venus()

print '%10s   %6s %6s %6s %6s' % ('date', '%', 'mag', 'dist', 'size')
def print_venus(when) :
    venus.compute(when)
    fmt = '%02d-%02d-%02d   %6.2f %6.2f %6.2f %6.2f'
    trip = when.triple()
    print fmt % (trip[0], trip[1], trip[2],
                 venus.phase, venus.mag, venus.earth_distance, venus.size)

# Loop from the beginning of 2012 through the middle of 2013:
d = ephem.date('2012')
end_date = ephem.date('2013/6/1')
while d < end_date :
    print_venus(d)
    # Add a day:
    d = ephem.date(d + ephem.hour * 24)

I've found PyEphem very handy for calculations like this -- and it's great to be able to double-check listings in other publications.
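
For instance, to pin down the date of peak brightness instead of eyeballing the table, a few more lines will do it. Here's a sketch (the date range is arbitrary, and remember that adding 1 to an ephem.date adds one day):

import ephem

venus = ephem.Venus()
brightest = None

d = ephem.date('2012/4/1')
end_date = ephem.date('2012/8/1')
while d < end_date :
    venus.compute(d)
    # Smaller magnitudes are brighter:
    if not brightest or venus.mag < brightest[1] :
        brightest = (d, venus.mag)
    d = ephem.date(d + 1)

print "Venus peaks on", brightest[0], "at magnitude", brightest[1]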

Tags: , ,
[ 14:44 Apr 27, 2012    More science/astro | permalink to this entry | comments ]

Fri, 16 Mar 2012

Image manipulation in Python

Someone asked me about determining whether an image was "portrait" or "landscape" mode from a script.

I've long had a script for automatically rescaling and rotating images, using ImageMagick under the hood and adjusting automatically for aspect ratio. But the scripts are kind of a mess -- I've been using them for over a decade, and they started life as a csh script back in the pnmscale days, gradually added ImageMagick and jpegtran support and eventually got translated to (not very good) Python.

I've had it in the back of my head that I should rewrite this stuff in cleaner Python using the ImageMagick bindings, rather than calling its commandline tools. So the question today spurred me to look into that. I found that ImageMagick isn't the way to go, but PIL would be a fine solution for most of what I need.

ImageMagick: undocumented and inconstant

Ubuntu has a python-pythonmagick package, which I installed. Unfortunately, it has no documentation, and there seems to be no web documentation either. If you search for it, you find a few other people asking where the documentation is.

Using things like help(PythonMagick) and help(PythonMagick.Image), you can ferret out a few details, like how to get an image's size:

import PythonMagick
filename = 'img001.jpg'
img = PythonMagick.Image(filename)
size = img.size()
print filename, "is", size.width(), "x", size.height()

Great. Now what if you want to rescale it to some other size? Web searching turned up examples of that, but they don't work, as illustrated here:

>>> img.scale('1024x768')
>>> img.size().height()
640

The built-in help was no help:

>>> help(img.scale)
Help on method scale:

scale(...) method of PythonMagick.Image instance
    scale( (Image)arg1, (Geometry)arg2) -> None :
    
        C++ signature :
            void scale(Magick::Image {lvalue},Magick::Geometry)

So what does it want for (Geometry)? Strings don't seem to work, 2-tuples don't work, and there's no Geometry object in PythonMagick. By this time I was tired of guesswork. Can the Python Imaging Library do better?

PIL -- the Python Imaging Library

PIL, happily, does have documentation. So it was easy to figure out how to get an image's size:

from PIL import Image
im = Image.open(filename)
w = im.size[0]
h = im.size[1]
print filename, "is", w, "x", h

It was equally easy to scale it to half its original size, then write it to a file:

newim = im.resize((w/2, h/2))
newim.save("small-" + filename)

Reading EXIF

Wow, that's great! How about EXIF -- can you read that? Yes, PIL has a module for that too:

import PIL.ExifTags

exif = im._getexif()    # returns None if the image has no EXIF data
for tag, value in exif.items():
    decoded = PIL.ExifTags.TAGS.get(tag, tag)
    print decoded, '->', value

There are other ways to read exif -- pyexiv2 seems highly regarded. It has documentation, a tutorial, and apparently it can even write EXIF tags.

If neither PIL nor pyexiv2 meets your needs, here's a Stack Overflow thread on other Python EXIF solutions, and here's another discussion of Python EXIF. But since you probably already have PIL, it's certainly an easy way to get started.

What about the query that started all this: how to find out whether an image is portrait or landscape? Well, the most important thing is the image dimensions themselves -- whether img.size[0] > img.size[1]. But sometimes you want to know what the camera's orientation sensor thought. For that, you can use this code snippet:

for tag, value in exif.items():
    decoded = PIL.ExifTags.TAGS.get(tag, tag)
    if decoded == 'Orientation':
        print decoded, ":", value

Then compare the number you get to this Exif Orientation table. Normal landscape-mode photos will be 1.
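
For instance, here's a sketch of auto-rotating based on that tag, reusing the exif dictionary from above (the value-to-rotation table follows the EXIF standard; 274 is the numeric ID of the Orientation tag, and the ROTATIONS helper dict is my own illustration):

# Rotation in degrees counterclockwise (as PIL's rotate() counts)
# needed to display the image upright, for the common Orientation values:
ROTATIONS = { 1: 0, 3: 180, 6: 270, 8: 90 }

orient = exif.get(274, 1)    # 274 is the Orientation tag
if ROTATIONS.get(orient) :
    im = im.rotate(ROTATIONS[orient])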

Given all this, have I actually rewritten resizeall and rotateall using PIL? Why, no! I'll put it on my to-do list, honest. But since the scripts are actually working fine (just don't look at the code), I'll leave them be for now.

Tags: , , , ,
[ 15:33 Mar 16, 2012    More programming | permalink to this entry | comments ]

Sun, 08 Jan 2012

Parsing HTML in Python

I've been having (mis)adventures learning about Python's various options for parsing HTML.

Up until now, I've avoided doing any HTML parsing in my RSS reader FeedMe. I use regular expressions to find the places where content starts and ends, to screen out content like advertising, and to rewrite links. Using regexps on HTML is generally considered to be a no-no, but it didn't seem worth parsing the whole document just for those modest goals.

But I've long wanted to add support for downloading images, so you could view the downloaded pages with their embedded images if you so chose. That means not only identifying img tags and extracting their src attributes, but also rewriting the img tag afterward to point to the locally stored image. It was time to learn how to parse HTML.

Since I'm forever seeing people flamed on the #python IRC channel for using regexps on HTML, I figured real HTML parsing must be straightforward. A quick web search led me to Python's built-in HTMLParser class. It comes with a nice example for how to use it: define a class that inherits from HTMLParser, then define some functions it can call for things like handle_starttag and handle_endtag; then call self.feed(). Something like this:

import urllib2, urlparse
from HTMLParser import HTMLParser

class MyFancyHTMLParser(HTMLParser):
  def fetch_url(self, url) :
    request = urllib2.Request(url)
    response = urllib2.urlopen(request)
    self.link = response.geturl()   # remember the page URL for relative links
    html = response.read()
    response.close()
    self.feed(html)   # feed() starts the HTMLParser parsing

  def has_attr(self, attr, attrs) :
    # attrs is a list of (attribute, value) tuples.
    # Return the index of the given attribute, or -1 if it's not there.
    for i, a in enumerate(attrs) :
      if a[0] == attr :
        return i
    return -1

  def make_absolute(self, url) :
    # One way to make a relative URL absolute:
    return urlparse.urljoin(self.link, url)

  def handle_starttag(self, tag, attrs):
    if tag == 'img' :
      srcindex = self.has_attr('src', attrs)
      if srcindex < 0 :
        return   # img with no src tag? skip it
      src = attrs[srcindex][1]
      # Make relative URLs absolute
      src = self.make_absolute(src)
      attrs[srcindex] = (attrs[srcindex][0], src)

    print '<' + tag
    for attr in attrs :
      print ' ' + attr[0]
      if len(attr) > 1 and isinstance(attr[1], basestring) :
        # make sure attr[1] doesn't have any embedded double-quotes
        val = attr[1].replace('"', '&quot;')
        print '="' + val + '"'
    print '>'

  def handle_endtag(self, tag):
    print '</' + tag + '>'

Easy, right? Of course there are a lot more details, but the basics are simple.

I coded it up and it didn't take long to get it downloading images and changing img tags to point to them. Woohoo! Whee!

The bad news about HTMLParser

Except ... after using it a few days, I was hitting some weird errors. In particular, this one:

HTMLParser.HTMLParseError: bad end tag: ''

It comes from sites that have illegal content. For instance, stories on Slate.com include Javascript lines like this one inside <script></script> tags:

document.write("<script type='text/javascript' src='whatever'></scr" + "ipt>");

This is technically illegal HTML -- but lots of sites do it, so protesting that it's technically illegal doesn't help if you're trying to read a real-world site.

Some discussions said setting self.CDATA_CONTENT_ELEMENTS = () would help, but it didn't.

HTMLParser's code is in Python, not C. So I took a look at where the errors are generated, thinking maybe I could override them. It was easy enough to redefine parse_endtag() to make it not throw an error (I had to duplicate some internal strings too). But then I hit another error, so I redefined unknown_decl() and _scan_name(). And then I hit another error. I'm sure you see where this was going. Pretty soon I had over 100 lines of duplicated code, and I was still getting errors and needed to redefine even more functions. This clearly wasn't the way to go.

Using lxml.html

I'd been trying to avoid adding dependencies on additional Python packages, but if you want to parse real-world HTML, you have to. There are two main options: Beautiful Soup and lxml.html. Beautiful Soup is popular for large projects, but the consensus seems to be that lxml.html is more error-tolerant and lighter weight.

Indeed, lxml.html is much more forgiving. You can't handle start and end tags as they pass through, like you can with HTMLParser. Instead you parse the HTML into an in-memory tree, like this:

  tree = lxml.html.fromstring(html)

How do you iterate over the tree? lxml.html is a good parser, but it has rather poor documentation, so it took some struggling to figure out what was inside the tree and how to iterate over it.

You can visit every element in the tree with

for e in tree.iter() :
  print e.tag

But that's not terribly useful if you need to know which tags are inside which other tags. Instead, define a function that iterates over the top level elements and calls itself recursively on each child.

The top of the tree itself is an element -- typically the <html></html> -- and each element has .tag and .attrib. If it contains text inside it (like a <p> tag), it also has .text. So to make something that works similarly to HTMLParser:

def crawl_tree(tree) :
  handle_starttag(tree.tag, tree.attrib)
  if tree.text :
    handle_data(tree.text)
  for node in tree :
    crawl_tree(node)
  handle_endtag(tree.tag)

But wait -- we're not quite all there. You need to handle two undocumented cases.

First, comment tags are special: their tag attribute, instead of being a string, is <built-in function Comment> so you have to handle that specially and not assume that tag is text that you can print or test against.

Second, what about cases like <p>Here is some <i>italicised</i> text.</p>? In this case, you have the p tag, and its text is "Here is some ". Then the p has a child, the i tag, with text of "italicised". But what about the rest of the string, " text."?

That's called a tail -- and it's the tail of the adjacent i tag it follows, not the parent p tag that contains it. Confusing!

So our function becomes:

def crawl_tree(tree) :
  if type(tree.tag) is str :
    handle_starttag(tree.tag, tree.attrib)
    if tree.text :
      handle_data(tree.text)
    for node in tree :
      crawl_tree(node)
    handle_endtag(tree.tag)
  if tree.tail :
    handle_data(tree.tail)

See how it works? If it's a comment (tree.tag isn't a string), we'll skip everything -- except the tail. Even a comment might have a tail:
<p>Here is some <!-- this is a comment --> text we want to show.</p>
so even if we're skipping a comment, we still need its tail.

I'm sure I'll find other gotchas I've missed, so I'm not releasing this version of feedme until it's had a lot more testing. But it looks like lxml.html is a reliable way to parse real-world pages. It even has a lot of convenience functions like link rewriting that you can use without iterating the tree at all. Definitely worth a look!
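
For instance, here's a quick sketch of those link functions (make_links_absolute and rewrite_links are real lxml.html methods; the URLs are made up for illustration):

import lxml.html

html = '<html><body><a href="faq.html">FAQ</a></body></html>'
tree = lxml.html.fromstring(html)

# Make every link in the document absolute:
tree.make_links_absolute('http://example.com/docs/')

# Or rewrite links however you like with a function of your own:
tree.rewrite_links(lambda url : url.replace('http://example.com/',
                                            'file:///tmp/cache/'))

print lxml.html.tostring(tree)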

Tags: , ,
[ 15:04 Jan 08, 2012    More programming | permalink to this entry | comments ]

Thu, 29 Dec 2011

Plotting the Analemma

My SJAA planet-observing column for January is about the Analemma and the Equation of Time.

The analemma is that funny figure-eight you see on world globes in the middle of the Pacific Ocean. Its shape is the shape traced out by the sun in the sky, if you mark its position at precisely the same time of day over the course of an entire year.

The analemma has two components: the vertical component represents the sun's declination, how far north or south it is in our sky. The horizontal component represents the equation of time.

The equation of time describes how the sun moves relatively faster or slower at different times of year. It, too, has two components: it's the sum of two sine waves, one representing how the earth speeds up and slows down as it moves in its elliptical orbit, the other a function of the tilt (or "obliquity") of the earth's axis compared to its orbital plane, the ecliptic.

[components of the Equation of time] The Wikipedia page for Equation of time includes a link to a lovely piece of R code by Thomas Steiner showing how the two components relate. It's labeled in German, but since the source is included, I was able to add English labels and use it for my article.

But if you look at photos of real analemmas in the sky, they're always tilted. Shouldn't they be vertical? Why are they tilted, and how does the tilt vary with location? To find out, I wanted a program to calculate the analemma.

Calculating analemmas in PyEphem

The very useful astronomy Python package PyEphem makes it easy to calculate the position of any astronomical object for a specific location. Install it with: easy_install pyephem for Python 2, or easy_install ephem for Python 3.

import ephem
observer = ephem.city('San Francisco')
sun = ephem.Sun()
sun.compute(observer)
print sun.alt, sun.az

The alt and az are the altitude and azimuth of the sun right now. They're printed as strings: 25:23:16.6 203:49:35.6 but they're actually type 'ephem.Angle', so float(sun.alt) will give you a number in radians that you can use for calculations.
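
For instance, to see those angles in degrees, a two-liner with the standard math module does it:

import math
print math.degrees(float(sun.alt)), math.degrees(float(sun.az))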

Of course, you can specify any location, not just major cities. PyEphem doesn't know San Jose, so here's the approximate location of Houge Park where the San Jose Astronomical Association meets:

observer = ephem.Observer()
observer.name = "San Jose"
observer.lon = '-121:56.8'
observer.lat = '37:15.55'

You can also specify elevation, barometric pressure and other parameters.

So here's a simple analemma, calculating the sun's position at noon on the 15th of each month of 2011:

    for m in range(1, 13) :
        observer.date = '2011/%d/15 12:00' % m
        sun.compute(observer)

I used a simple PyGTK window to plot sun.az and sun.alt, so once it was initialized, I drew the points like this:

    # Map altitude onto the window height. The full height spans
    # pi radians, so the horizon comes out at the bottom of the window:
    y = int(self.height - float(self.sun.alt) * self.height / math.pi)
    # Map azimuth onto the window width (the width spans 2*pi radians),
    # centered around due south: az = PI comes out at x = width/2.
    x = int(float(self.sun.az) * self.width / math.pi / 2)
    self.drawing_area.window.draw_arc(self.xgc, True, x, y, 4, 4, 0, 23040)

So now you just need to calculate the sun's position at the same time of day but different dates spread throughout the year.

[analemma in San Jose at noon clock time] And my 12-noon analemma came out almost vertical! Maybe the tilt I saw in analemma photos was just a function of taking the photo early in the morning or late in the afternoon? To find out, I calculated the analemma for 7:30am and 4:30pm, and sure enough, those were tilted.

But wait -- notice my noon analemma was almost vertical -- but it wasn't exactly vertical. Why was it skewed at all?

Time is always a problem

As always with astronomy programs, time zones turned out to be the hardest part of the project. I tried to add other locations to my program and immediately ran into a problem.

The ephem.Date class always uses UTC, and has no concept of converting to the observer's timezone. You can convert to the timezone of the person running the program with localtime, but that's not useful when you're trying to plot an analemma at local noon.
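
For reference, that conversion is a one-liner -- here's what it looks like on a machine set to Pacific time:

>>> import ephem
>>> ephem.localtime(ephem.date('2011/12/25 12:00'))
datetime.datetime(2011, 12, 25, 4, 0)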

At first, I was only calculating analemmas for my own location. So I set time to '20:00', that being the UTC for my local noon. And I got the image at right. It's an analemma, all right, and it's almost vertical. Almost ... but not quite. What was up?

Well, I was calculating for 12 noon clock time -- but clock time isn't the same as mean solar time unless you're right in the middle of your time zone.

You can calculate what your real localtime is (regardless of what politicians say your time zone should be) by using your longitude rather than your official time zone:

    date = '2011/%d/12 12:00' % (m)
    adjtime = ephem.date(ephem.date(date) \
                    - float(self.observer.lon) * 12 / math.pi * ephem.hour)
    observer.date = adjtime

Maybe that needs a little explaining. I take the initial time string, like '2011/12/15 12:00', and convert it to an ephem.date. The number of hours I want to adjust is my longitude (in radians) times 12 divided by pi -- that's because if you go pi (180) degrees to the other side of the earth, you'll be 12 hours off. Finally, I have to multiply that by ephem.hour because ... um, because that's the way to add hours in PyEphem and they don't really document the internals of ephem.Date.
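
A quick sanity check of that arithmetic, using San Jose's longitude from the observer setup earlier:

>>> import math
>>> lon = math.radians(-(121 + 56.8/60.))   # -121:56.8 as radians
>>> print lon * 12 / math.pi                # hours to shift from UTC
-8.12977777778

So San Jose's mean solar noon is around 20:08 UTC, about eight minutes off from the flat 20:00 I'd been using -- just the kind of small offset that skewed the first plot.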

[analemma in San Jose at noon clock time] Set the observer date to this adjusted time before calculating your analemma, and you get the much more vertical figure you see here. This also explains why the morning and evening analemmas weren't symmetrical in the previous run.

This code is location independent, so now I can run my analemma program on a city name, or specify longitude and latitude.

PyEphem turned out to be a great tool for exploring analemmas. But to really understand analemma shapes, I had more exploring to do. I'll write about that, and post my complete analemma program, in the next article.

Tags: , , , , ,
[ 20:54 Dec 29, 2011    More science/astro | permalink to this entry | comments ]

Thu, 22 Dec 2011

Calculating the Solstice and shortest day

Today is the winter solstice -- the official beginning of winter.

The solstice is determined by the Earth's tilt on its axis, not anything to do with the shape of its orbit: the solstice is the point when the poles come closest to pointing toward or away from the sun. To us, standing on Earth, that means the winter solstice is the day when the sun's highest point in the sky is lowest.

You can calculate the exact time of the solstice using the handy Python package PyEphem. Install it with: easy_install pyephem for Python 2, or easy_install ephem for Python 3. Then ask it for the date of the next or previous solstice. You have to give it a starting date, so I'll pick a date in late summer that's nowhere near the solstice:

>>> ephem.next_solstice('2011/8/1')
2011/12/22 05:29:52

That agrees with my RASC Observer's Handbook: Dec 22, 5:30 UTC. (Whew!)

PyEphem gives all times in UTC, so, since I'm in California, I subtract 8 hours to find out that the solstice was actually last night at 9:30. If I'm lazy, I can get PyEphem to do the subtraction for me:

>>> ephem.date(ephem.next_solstice('2011/8/1') - 8./24)
2011/12/21 21:29:52

I used 8./24 because PyEphem's dates are in decimal days, so in order to subtract 8 hours I have to convert that into a fraction of a 24-hour day. The decimal point after the 8 is to get Python to do the division in floating point, otherwise it'll do an integer division and subtract int(8/24) = 0.

The shortest day

The winter solstice also pretty much marks the shortest day of the year. But was the shortest day yesterday, or today? To check that, set up an "observer" at a specific place on Earth, since sunrise and sunset times vary depending on where you are. PyEphem doesn't know about San Jose, so I'll use San Francisco:

>>> import ephem
>>> observer = ephem.city("San Francisco")
>>> sun = ephem.Sun()
>>> for i in range(20,25) :
...   d = '2011/12/%i 20:00' % i
...   print d, (observer.next_setting(sun, d) - observer.previous_rising(sun, d)) * 24
2011/12/20 20:00 9.56007901422
2011/12/21 20:00 9.55920379754
2011/12/22 20:00 9.55932991847
2011/12/23 20:00 9.56045709446
2011/12/24 20:00 9.56258416496

I'm multiplying by 24 to get hours rather than decimal days.

So the shortest day, at least here in the bay area, was actually yesterday, 2011/12/21. Not too surprising, since the solstice wasn't that long after sunset yesterday.

If you look at the actual sunrise and sunset times, you'll find that the latest sunrise and earliest sunset don't correspond to the solstice or the shortest day. But that's all tied up with the equation of time and the analemma ... and I'll cover that in a separate article.

Tags: , , , ,
[ 11:28 Dec 22, 2011    More science/astro | permalink to this entry | comments ]

Wed, 16 Nov 2011

New trails, and new PyTopo 1.1 release

A new trail opened up above Alum Rock park! Actually a whole new open space preserve, called Sierra Vista -- with an extensive set of trails that go all sorts of interesting places.

Dave and I visit Alum Rock frequently -- we were married there -- so having so much new trail mileage is exciting. We tried to explore it on foot, but quickly realized the mileage was more suited to mountain bikes. Even with bikes, we'll be exploring this area for a while: we haven't biked in far too long (a combination of health problems and family issues has conspired to keep us off the bikes), so it'll take us a while to work up to that much riding.

Of course, part of the fun of discovering a new trail system is poring over maps trying to figure out where the trails will take us, then taking GPS track logs to study later to see where we actually went.

And as usual when uploading GPS track logs and viewing them in pytopo, I found some things that weren't working quite the way I wanted, so the session ended up being less about studying maps and more about hacking Python.

In the end, I fixed quite a few little bugs, improved some features, and got saved sites with saved zoom levels working far better.

Now, PyTopo should have hit 1.0 quite a while ago -- but there were two of us hacking madly on it at the time, and pinning down the exact moment it deserved to be called 1.0 wasn't easy. In fact, we never actually did it. I know that sounds silly -- of all releases to not get around to, finally reaching 1.0? Nevertheless, that's what happened.

I thought about cheating and calling this one 1.0, but we've had 1.0 beta RPMs floating around for so long (and for a much earlier release) that that didn't seem right.

So I've called the new release PyTopo 1.1. It seems to be working pretty solidly. It's certainly been very helpful to me in exploring the new trails. It's great for cross-checking with Google Earth: the OpenCycleMap database has much better trail data than Google does, and pytopo has easy track log loading and will work offline, while Google has the 3-D projection aerial imagery that shows where trails and roads were historically (which may or may not correspond to where they decide to put the new trails). It's great to have both.

Anyway, here's the new PyTopo.

Tags: , ,
[ 20:59 Nov 16, 2011    More mapping | permalink to this entry | comments ]

Sun, 16 Oct 2011

Monitor an Arduino's serial output from Python

Debugging Arduino sensors can sometimes be tricky. While working on my Arduino sonar project, I found myself wanting to know what values the Arduino was reading from its analog port.

It's easy enough to print from the Arduino to its USB-serial line. First add some code like this in setup():

    Serial.begin(9600);

Then in loop(), if you just read the value "val":

    Serial.println(val);

Serial output from Python

That's all straightforward -- but then you need something that reads it on the PC side.

When you're using the Arduino Java development environment, you can set it up to display serial output in a few lines at the bottom of the window. But it's not terrifically easy to read there, and I don't want to be tied to the Java IDE -- I'm much happier doing my Arduino development from the command line. But then how do you read serial output when you're debugging? In general, you can use the screen program to talk to serial ports -- it's the tool of choice for logging in to plug computers. For the Arduino, you can do something like this:

screen /dev/ttyUSB0 9600

But I found that a bit fiddly for various reasons. And I discovered that it's easy to write something like this in Python, using the serial module.

You can start with something as simple as this:

import serial

ser = serial.Serial("/dev/ttyUSB0", 9600)
while True:
    print ser.readline()

Serial input as well as output

That worked great for debugging purposes. But I had another project (which I will write up separately) where I needed to be able to send commands to the Arduino as well as reading output it printed. How do you do both at once?

With the select module, you can monitor several file descriptors at once. If the user has typed something, send it over the serial line to the Arduino; if the Arduino has printed something, read it and display it for the user to see.

That loop looks like this:

while True :
    # Check whether the user has typed anything (timeout of .2 sec):
    inp, outp, err = select.select([sys.stdin, self.ser], [], [], .2)

    # If the user has typed anything, send it to the Arduino:
    if sys.stdin in inp :
        line = sys.stdin.readline()
        self.ser.write(line)

    # If the Arduino has printed anything, display it:
    if self.ser in inp :
        line = self.ser.readline().strip()
        print "Arduino:", line

Add in a loop to find the right serial port (the Arduino doesn't always show up on /dev/ttyUSB0) and a little error and exception handling, and I had a useful script that met all my Arduino communication needs: ardmonitor.
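
That port-finding loop can be as simple as this sketch (the list of device names to try is a guess; ardmonitor does it more robustly):

import serial

def find_arduino(baud=9600) :
    # Try likely USB-serial device names until one opens:
    for portname in [ '/dev/ttyUSB0', '/dev/ttyUSB1', '/dev/ttyACM0' ] :
        try :
            return serial.Serial(portname, baud)
        except serial.SerialException :
            pass
    return None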

Tags: , , ,
[ 20:27 Oct 16, 2011    More hardware | permalink to this entry | comments ]

Tue, 27 Sep 2011

Banishing errant tooltips

Every now and then I have to run a program that doesn't manage its tooltips well. I mouse over some button to find out what it does, a tooltip pops up -- but then the tooltip won't go away. Even if I change desktops, the tooltip follows me and stays up on all desktops. Worse, it's set to stay on top of all other windows, so it blocks anything underneath it.

The places where I see this happen most often are XEphem (probably as an artifact of the broken Motif libraries we're stuck with on Linux); Adobe's acroread (Acrobat Reader), though perhaps that's gotten better since I last used it; and Wine.

I don't use Wine much, but lately I've had to use it for a medical imaging program that doesn't seem to have a Linux equivalent (viewing PET scan data). Every button has a tooltip, and once a tooltip pops up, it never goes away. Eventually I might have five or six of these little floating windows getting in the way of whatever I'm doing on other desktops, until I quit the wine program.

So how does one get rid of errant tooltips littering your screen? Could I write an Xlib program that could nuke them?

Finding window type

First we need to know what's special about tooltip windows, so the program can identify them. First I ran my wine program and produced some sticky tooltips.

Once they were up, I ran xwininfo and clicked on a tooltip. It gave me a bunch of information about the window's size and location, color depth, etc. ... but the useful part is this:

  Override Redirect State: yes

In X, override-redirect windows are windows that are immune to being controlled by the window manager. That's why they don't go away when you change desktops, or move when you move the parent window.

So what if I just find all override-redirect windows and unmap (hide) them? Or would that kill too many innocent victims?

Python-Xlib

I thought I'd have to write my little app in C, since it's doing low-level Xlib calls. But no -- there's a nice set of Python bindings, python-xlib. The documentation isn't great, but it was still pretty easy to whip something up.

The first thing I needed was a window list: I wanted to make sure I could find all the override-redirect windows. Here's how to do that:

from Xlib import display

dpy = display.Display()
screen = dpy.screen()
root = screen.root
tree = root.query_tree()

for w in tree.children :
    print w

w is a Window (documented here). I see in the documentation that I can get_attributes(). I'd also like to know which window is which -- calling get_wm_name() seems like a reasonable way to do that. Maybe if I print them, those will tell me how to find the override-redirect windows:

for w in tree.children :
    print w.get_wm_name(), w.get_attributes()

Window type, redux

Examining the list, I could see that override_redirect was one of the attributes. But there were quite a lot of override-redirect windows. It turns out many apps, such as Firefox, use them for things like menus. Most of the time they're not visible. But you can look at w.get_attributes().map_state to see that.

So that greatly reduced the number of windows I needed to examine:

for w in tree.children :
    att = w.get_attributes()
    if att.map_state and att.override_redirect :
        print w.get_wm_name(), att

I learned that tooltips from well-behaved programs like Firefox tended to set wm_name to the contents of the tooltip. Wine doesn't -- the wine tooltips had an empty string for wm_name. If I wanted to kill just the wine tooltips, that might be useful to know.

But I also noticed something more important: the tooltip windows were also "transient for" their parent windows. Transient for means a temporary window popped up on behalf of a parent window; it's kept on top of its parent window, and goes away when the parent does.

Now I had a reasonable set of attributes for the windows I wanted to unmap. I tried it:

for w in tree.children :
    att = w.get_attributes()
    if att.map_state and att.override_redirect and w.get_wm_transient_for():
        w.unmap()

It worked! At least in my first test: I ran the wine program, made a tooltip pop up, then ran my killtips program ... and the tooltip disappeared.

Multiple tooltips: flushing the display

But then I tried it with several tooltips showing (yes, wine will pop up new tooltips without hiding the old ones first) and the result wasn't so good. My program only hid the first tooltip. If I ran it again, it would hide the second, and again for the third. How odd!

I wondered if there might be a timing problem. Adding a time.sleep(1) after each w.unmap() fixed it, but sleeping surely wasn't the right solution.

But X is asynchronous: things don't necessarily happen right away. To force (well, at least encourage) X to deal with any queued events it might have stacked up, you can call dpy.flush().

I tried adding that after each w.unmap(), and it worked. But it turned out I only needed one

dpy.flush()

at the end of the program, just before exiting. Apparently if I don't do that, only the first unmap ever gets executed by the X server, and the rest are discarded. Sounds like flush() is a good idea as the last line of any python-xlib program.

killtips will hide tooltips from well-behaved programs too. If you have any tooltips showing in Firefox or any GTK programs, or any menus visible, killtips will unmap them. If I wanted to make sure the program only attacked the ones generated by wine, I could add an extra test on whether w.get_wm_name() == "".
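
That version of the loop would look like this (a sketch -- only the get_wm_name() test is new compared to the loop above):

for w in tree.children :
    att = w.get_attributes()
    if att.map_state and att.override_redirect \
       and w.get_wm_transient_for() and w.get_wm_name() == "" :
        w.unmap()
dpy.flush()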

But in practice, it doesn't seem to be a problem. Well-behaved programs handle having their tooltips unmapped just fine: the next time you call up a menu or a tooltip, the program will re-map it.

Not so in wine: once you dismiss one of those wine tooltips, it's gone forever, at least until you quit and restart the program. But that doesn't bother me much: once I've seen the tooltip for a button and found out what that button does, I'm probably not going to need to see it again for a while.

So I'm happy with killtips, and I think it will solve the problem. Here's the full script: killtips.

Tags: , , , ,
[ 11:36 Sep 27, 2011    More programming | permalink to this entry | comments ]

Fri, 09 Sep 2011

Count characters or words in the X selection from Python

This post is, above all, a lesson in doing a web search first. Even when what you're looking for is so obscure you're sure no one else has wanted it. But the script I got out of it might turn out to be useful.

It started with using Bitlbee for Twitter. I love bitlbee -- it turns a Twitter stream into just another IRC channel tab in the xchat I'm normally running anyway.

The only thing I didn't love about bitlbee is that, unlike the twitter app I'd previously used, I didn't have any way to keep track of when I neared the 140-character limit. There were various ways around that, mostly involving pasting the text into other apps before submitting it. But they were all too many steps.

It occurred to me that one way around this was to select-all, then run something that would show me the number of characters in the X selection. That sounded like an easy app to write.

Getting the X selection from Python

I was somewhat surprised to find that Python has no way of querying the X selection. It can do just about everything else -- even simulate X events. But there are several command-line applications that can print the selection, so it's easy enough to run xsel or xclip from Python and read the output.
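
A minimal version of that looks something like this sketch (it assumes xclip is installed; countsel adds the dialog and the once-a-second polling on top):

import subprocess

def get_x_selection() :
    # Ask xclip to print the current primary selection:
    return subprocess.Popen([ 'xclip', '-o' ],
                            stdout=subprocess.PIPE).communicate()[0]

sel = get_x_selection()
print len(sel), "chars,", len(sel.split()), "words"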

I ended up writing a little app that brings up a dialog showing the current count, then hangs around until you dismiss it, querying the selection once a second and updating the count. It's called countsel.

Of course, if you don't want to write a Python script you can use commandline tools directly. Here are a couple of examples, using xclip instead of xsel. This one pops up a terminal showing the "wc" counts of the selection once:

xterm -title 'lines words chars' -geometry 25x2 -e bash -c 'xclip -o | wc; read -n 1'

and this one loops over those counts, printing them once a second:

xterm -title 'lines words chars' -geometry 25x1 -e watch -t 'xclip -o | wc'

Binding commands to a key is different for every window manager. In Openbox, I added this to rc.xml to call up my program whenever I type W-t (short for Twitter):

    <keybind key="W-t">
      <action name="Execute">
        <execute>/home/akkana/bin/countsel</execute>
      </action>
    </keybind>

Now, any time I needed to check my character count, I could triple-click or type Shift-Home, then hit W-t to call up the dialog and get a count. Then I could leave the dialog up, and whenever I wanted a new count, just Shift-Home or triple-click again, and the dialog updates automatically. Not perfect, but not bad.

Xchat plug-in for a much more elegant solution

Only after getting countsel working did it occur to me to wonder if anyone else had the same Bitlbee+xchat+twitter problem. And a web search found exactly what I needed: xchat-inputcount.pl, a wonderful xchat script that adds a character-counter next to the input box as you're typing. It's a teensy bit buggy, but still, it's far better than my solution. I had no idea you could add user-interface elements to xchat like that!

But that's okay. Countsel didn't take long to write. And I've added word counting to countsel, so I can use it for word counts on anything I'm writing.

Tags: , ,
[ 12:32 Sep 09, 2011    More programming | permalink to this entry | comments ]

Wed, 31 Aug 2011

Read Excel XLS spreadsheets with Python

Someone mailed out information to a club I'm in as an .XLS file. Another Excel spreadsheet. Sigh.

I do know one way to read them. Fire up OpenOffice, listen to my CPU fan spin as I wait forever for the app to start up, open the xls file, then click in one cell after another as I deal with the fact that spreadsheet programs only show you a tiny part of the text in each cell. I'm not against spreadsheets per se -- they're great for calculating tables of interconnected numbers -- but they're a terrible way to read tabular data.

Over the years, lots of open-source programs like word2x and catdoc have sprung up to read the text in MS Word .doc files. Surely by now there must be something like that for XLS files?

Well, I didn't find any ready-made programs, but I found something better: Python's xlrd module, as well as a nice clear example at ScienceOSS of how to Read Excel files from Python.

Following that example, in six lines I had a simple program to print the spreadsheet's contents:

import sys, xlrd

for filename in sys.argv[1:] :
    wb = xlrd.open_workbook(filename)
    for sheetname in wb.sheet_names() :
        sh = wb.sheet_by_name(sheetname)
        for rownum in range(sh.nrows) :
            print sh.row_values(rownum)

Of course, having gotten that far, I wanted better formatting so I could compare the values in the spreadsheet. Didn't take long to write, and the whole thing still came out under 40 lines: xlsrd. And I was able to read that XLS file that was mailed to the club, easily and without hassle.

I'm forever amazed at all the wonderful, easy-to-use modules there are for Python.

Tags: ,
[ 10:58 Aug 31, 2011    More programming | permalink to this entry | comments ]

Thu, 25 Aug 2011

Deleting email from a mail server with Python

How do you delete email from a mail server without downloading or reading it all?

Why? Maybe you got a huge load of spam and you need to delete it. Maybe you have your laptop set up to keep a copy of your mail on the server so you can get it on your desktop later ... but after a while you realize it's not worth downloading all that mail again. In my case, I use an ISP that keeps copies of all mail forwarded from one alias to another, so I periodically need to clean out the copies.

There are quite a few reasons you might want to delete mail without reading it ... so I was surprised to find that there didn't seem to be any easy way to do so.

But POP3 is a fairly simple protocol. How hard could it be to write a Python script to do what I needed?

Not hard at all, in fact. The poplib package does most of the work for you, encapsulating both the networking and the POP3 protocol. It even does SSL, so you don't have to send your password in the clear.

Once you've authenticated, you can list() messages, which gives you a status and a list of message numbers and sizes, separated by a space. Just loop through them and delete each one.

Here's a skeleton program to delete messages:

import poplib

server = "mail.example.com"
port = 995
user = "myname"
passwd = "seekrit"

pop = poplib.POP3_SSL(server, port)
pop.user(user)
pop.pass_(passwd)

poplist = pop.list()
if poplist[0].startswith('+OK') :
    msglist = poplist[1]
    if not msglist :
        print "No messages for", user
    for msgspec in msglist :
        # msgspec is something like "3 3941",
        # msg number and size in octets
        msgnum = int(msgspec.split(' ')[0])
        print "Deleting msg %d\r" % msgnum,
        pop.dele(msgnum)
else :
    print "Couldn't list messages: status", poplist[0]
pop.quit()

Of course, you might want to add more error checking, loop through a list of users, etc. Here's the full script: deletemail.

Tags: , ,
[ 17:41 Aug 25, 2011    More programming | permalink to this entry | comments ]

Fri, 19 Aug 2011

Beginning Python: Sorting lists of objects

The Beginning Python class has pretty much died down -- although there are still a couple of interested students posting really great homework solutions, I think most people have fallen behind, and it's time to wrap up the course.

So today, I didn't post a formal lesson. But I did have something to share about how I used Python's object-oriented capabilities to solve a problem I had copying new podcast files onto my MP3 player. I used Python's built-in list sort() function, along with the easy way it lets me define operators like < and > for any object I define.
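
The general idea looks something like this sketch (the Podcast class and its date field are made up for illustration; the real code is in pods, linked below):

class Podcast :
    def __init__(self, filename, date) :
        self.filename = filename
        self.date = date

    # Defining __lt__ gives us <, which is all list.sort() needs
    # to put Podcasts in order:
    def __lt__(self, other) :
        return self.date < other.date

podcasts = [ Podcast("b.mp3", 20110817), Podcast("a.mp3", 20110815) ]
podcasts.sort()
print [ p.filename for p in podcasts ]   # ['a.mp3', 'b.mp3']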

You can read all about it in my post to the Courses list describing how I sorted my list of podcast objects. Or just go straight to the final program, pods.

Tags: , ,
[ 19:48 Aug 19, 2011    More education | permalink to this entry | comments ]

Fri, 12 Aug 2011

Beginning Python, Lesson 9: More extras

Lesson 9 in my online Python course is up: Lesson 9: Extras (requested topics), including string operations, web development and GUI toolkits.

The web development and GUI toolkits are topics which were requested by students, while the string ops are things that just seemed too useful not to include.

Tags: , ,
[ 17:45 Aug 12, 2011    More education | permalink to this entry | comments ]

Fri, 05 Aug 2011

Beginning Python, Lesson 8: Extras

Lesson 8 in my online Python course is up: Lesson 8: Extras, including exception handling, optional arguments, and running system commands. A motley collection of fun and useful topics that didn't quite fit anywhere in the earlier formal lessons, but you'll find a lot of use for them in writing real-world Python scripts. In the homework, I have some examples of some of my scripts using these techniques; I'm sure the students will have lots of interesting problems of their own.

Tags: , ,
[ 14:56 Aug 05, 2011    More education | permalink to this entry | comments ]

Sat, 30 Jul 2011

Beginning Python, Lesson 7: Object-oriented programming

Lesson 7 in my online Python course is up: Lesson 7: Object-oriented programming.

This is the last formal lesson in the Beginning Python class. But I will be posting a few more "tips and tricks" lessons, little things that didn't fit in other lessons plus suggestions for useful Python packages students may want to check out as they continue their Python hacking.

Tags: , ,
[ 10:28 Jul 30, 2011    More education | permalink to this entry | comments ]

Fri, 22 Jul 2011

Beginning Python, Lesson 6: Functions and Dictionaries

Lesson 6 in my online Python course is up: Lesson 6: Functions and Dictionaries.

We're getting near the end of the course -- partly because I think students may be saturated, though I may post one more lesson. I'll post on the list and see what the students think about it.

This afternoon, though, is pretty much booked up trying to get my mother's new Nook Touch e-book reader working with Linux. Would be easy ... except that she wants to be able to check out books from her local public library, which of course uses proprietary software from Adobe and other companies to do DRM. It remains to be seen if this will be possible ... of course, I'll post the results once we know.

Tags: , ,
[ 17:49 Jul 22, 2011    More education | permalink to this entry | comments ]

Fri, 15 Jul 2011

Beginning Python, Lesson 5

Lesson 5 in my online Python course is up: Infinite loops, modulo, and random numbers.

It's a motley mix of topics, mostly because I wanted to have a fun homework project that actually did something interesting. I hope everyone enjoys it!

Tags: , ,
[ 16:44 Jul 15, 2011    More education | permalink to this entry | comments ]

Fri, 08 Jul 2011

Beginning Python, Lesson 4

Lesson 4 in my online Python course is up: Modules and command-line arguments.

This lesson is a little longer than previous lessons, but that's partly because of a couple of digressions at the beginning. Hope I didn't overdo it! The homework includes an optional debugging problem for folks who want to dive a little deeper into this stuff.

Tags: , ,
[ 20:20 Jul 08, 2011    More education | permalink to this entry | comments ]

Sun, 03 Jul 2011

Beginning Python, Lesson 3: Strings and Lists

Lesson 3 in my online Python course is up: Fun with Strings and Lists.

There may be some backlog on the mailing list -- my first attempt to post the lesson didn't show up at all, but my second try made it. Mail seems to be flowing now, but if you try to post something and it doesn't show up, let me know or tell us on irc.linuxchix.org, so we know if there's a continuing problem that needs to be fixed, not just a one-time glitch.

Meanwhile, I'm having some trouble getting new blog entries posted. Due to some network glitches, I had to migrate shallowsky.com to a different ISP, and it turns out the PyBlosxom 1.4 I'd been using doesn't work with more recent versions of Python; but none of my PyBlosxom plug-ins work in 1.5. Aren't software upgrades a joy? So I'm getting lots of practice debugging other people's Python code trying to get the plug-ins updated, and there probably won't be many blog entries until I've figured that out.

Once that's all straightened out, I should have a cool new PyTopo feature to report on, as well as some Arduino hacks I've had on the back burner for a while.

Tags: , ,
[ 11:57 Jul 03, 2011    More education | permalink to this entry | comments ]

Fri, 24 Jun 2011

Beginning Python, Lesson 2 posted

I've just posted Lesson 2 in my online Python course, covering loops, if statements, and beer! You can read it in the list archives: Lesson 2: Loops, if, and beer, or, better, subscribe to the list so you can join the discussion.

I hope everybody has fun writing loops!

Tags: , ,
[ 16:10 Jun 24, 2011    More education | permalink to this entry | comments ]

Thu, 16 Jun 2011

Beginning Programming in Python course starting

I'm about to start a new LinuxChix course: Beginning Programming in Python.

It will be held on the Linuxchix Courses mailing list: to follow the course, subscribe to the list. Lessons will be posted weekly, on Fridays, with the first lesson starting tomorrow, Friday, June 17.

This is intended as a short course, probably only 4-5 weeks to start with, aimed mostly at people who are new to programming. Though of course anyone is welcome, even if you've programmed before. And experienced programmers are welcome to hang out, lurk and help answer questions. I might extend the course if people are still interested and having fun.

The course is free (just subscribe to the mailing list) and open to both women and men. Standard LinuxChix rules apply: Be polite, be helpful. And do the homework. :-)

Tags: , ,
[ 09:51 Jun 16, 2011    More education | permalink to this entry | comments ]

Fri, 20 May 2011

Packaging Python for MeeGo (or other RPM-based distros)

Writing Python scripts for MeeGo is easy. But how do you package a Python script in an RPM other MeeGo users can install?

It turned out to be far easier than I expected. Python and Ubuntu had all the tools I needed.

First you'll need a .desktop file describing your app, if you don't already have one. This gives window managers the information they need to show your icon and application name so the user can run it. Here's the one I wrote for PyTopo: pytopo.desktop.

Of course, you'll also want a desktop icon. Most applications on MeeGo seem to use 48x48-pixel PNG images, so that's what I made, though the format seems to be quite flexible -- an SVG is ideal.

With your script, desktop file and an icon, you're ready to create a package.

Create a setup.py file describing your package, as in the distutils simple example or the more detailed distutils setup script page. For a sample standalone script with a desktop file and icon, you can take a look at my PyTopo setup.py.
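
For a single script, it takes surprisingly little. Here's a minimal sketch (all the names and paths are placeholders, not what PyTopo actually uses):

from distutils.core import setup

setup(name = 'myapp',
      version = '1.0',
      description = 'A one-line description of the app',
      author = 'Your Name',
      url = 'http://example.com/myapp',
      # The script itself, installed into <prefix>/bin:
      scripts = [ 'myapp' ],
      # The desktop file and icon, where MeeGo expects to find them:
      data_files = [ ('share/applications', ['myapp.desktop']),
                     ('share/pixmaps',      ['myapp.png']) ])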

Starting from the Python setup script, Python's distutils can generate RPM or even Windows packages -- assuming you have the appropriate tools installed on your machine.

I'm on an Ubuntu (Debian-based) machine, and all the docs imply you have to be on an RPM-based distro to make an RPM. Happily, that's not true: Ubuntu has RPM tools you can install.

$ sudo apt-get install rpm

Then let Python do its thing:

$ python setup.py bdist_rpm

Python generates the spec file and everything else needed, and builds a multiarch RPM that's ready to install on MeeGo. Copy it to the MeeGo device:

$ scp dist/PyTopo-1.0-1.noarch.rpm meego@address.of.device:/tmp/

Then, as root on the device, install it:

$ rpm -i /tmp/PyTopo-1.0-1.noarch.rpm

You're done!

To see a working example, you can browse my latest PyTopo source (only what's in SVN; it needs a few more tweaks before it's ready for a formal release). Or try the RPM I made for MeeGo: PyTopo-1.0-1.noarch.rpm. I'd love to hear whether this works on other RPM-based distros.

What about Debian packages?

Curiously, making a Debian package on Debian/Ubuntu is much less straightforward even if you're starting on a Debian/Ubuntu machine. Distutils can't do it on its own. There's a Debian Python package recipe, but it begins with a caution that you shouldn't use it for a package you want to submit. For that, you probably have to wade through the Complete Ubuntu Packaging Guide. Clearly, that will need a separate article.

Tags: , , , ,
[ 18:44 May 20, 2011    More programming | permalink to this entry | comments ]

Fri, 13 May 2011

Children of the Code -- Derived Python projects

I got some fun email today -- two different people letting me know about new projects derived from my Python code.

One is M-Poker, originally based on a PyQt tutorial I wrote for Linux Planet. Ville Jyrkkä has taken that sketch and turned it into a real poker program. And it uses PySide now -- the new replacement for PyQt, and one I need to start using for MeeGo development. So I'll be taking a look at M-Poker myself and maybe learning things from it. There are some screenshots on the blog A Hacker's Life in Finland.

The other project is xkemu, a Python module for faking keypresses, grown out of pykey, a Python version of my Crikey keypress generation program. xkemu-server.py looks like a neat project -- you can run it and send it commands to generate key presses, rather than just running a script each time.

(Sniff) My children are going out into the world and joining other projects. I feel so proud. :-)

Tags: ,
[ 21:04 May 13, 2011    More programming | permalink to this entry | comments ]

Mon, 18 Apr 2011

A simple Python mixer (to solve a problem with sound in Natty)

I had to buy a new hard drive recently, and figured as long as I had a new install ahead of me, why not try the latest Ubuntu 11.04 beta, "Natty Narwhal"?

One of the things I noticed right away was that sound was really LOUD! -- and my usual volume keys weren't working to change that.

I have a simple setup under openbox: Meta-F7 and Meta-F8 call a shell script called "louder" and "softer" (two links to the same script), and depending on how it's invoked, the script calls aumix -v +4 or aumix -v -4.

Great, except it turns out aumix doesn't work -- at all -- under Natty (bug 684416). Rumor has it that Natty has dropped all support for OSS sound, though I don't know if that's actually true -- the bug has been sitting for four months without anyone commenting on it. (Ubuntu never seems terribly concerned about having programs in their repositories that completely fail to do anything; sadly, programs can persist that way for years.)

The command-line replacement for aumix seems to be amixer, but its documentation is sketchy at best. After a bit of experimentation, I found that if I set the Master volume to 100% using alsamixergui, I could call amixer set PCM 4+ or 4-. But I couldn't use amixer set Master 4+ -- sometimes it would work but most of the time it wouldn't.

That all seemed a bit too flaky for me -- surely there must be a better way? Some magic Python library? Sure enough, there's python-alsaaudio, and learning how to use it took a lot less time than I'd already wasted trying random amixer commands to see what worked. Here's the program:

#!/usr/bin/env python
# Set the volume louder or softer, depending on program name.

import alsaaudio, sys, os

increment = 4

# First find a mixer. Use the first one.
try :
    mixer = alsaaudio.Mixer('Master', 0)
except alsaaudio.ALSAAudioError :
    sys.stderr.write("No such mixer\n")
    sys.exit(1)

cur = mixer.getvolume()[0]
if os.path.basename(sys.argv[0]).startswith("louder") :
    mixer.setvolume(cur + increment, alsaaudio.MIXER_CHANNEL_ALL)
else :
    mixer.setvolume(cur - increment, alsaaudio.MIXER_CHANNEL_ALL)
print "Volume from", cur, "to", mixer.getvolume()[0]

Tags: , , ,
[ 21:13 Apr 18, 2011    More programming | permalink to this entry | comments ]

Fri, 18 Mar 2011

Finding Twitter references to you

Twitter is a bit frustrating when you try to have conversations there. You say something, then an hour later, someone replies to you (by making a tweet that includes your Twitter @handle). If you're away from your computer, or don't happen to be watching it with an eagle eye right then -- that's it, you'll never see it again. Some Twitter programs alert you to @ references even if they're old, but many programs don't.

Wouldn't it be nice if you could be notified regularly if anyone replied to your tweets, or mentioned you?

Happily, you can. The Twitter API is fairly simple; I wrote a Python function a while back to do searches in my Twitter app "twit", based on a code snippet I originally cribbed from Gwibber. But if you take out all the user interface code from twit and use just the simple JSON code, you get a nice short app. The full script is here: twitref, but the essence of it is this:

import sys, simplejson, urllib, urllib2

def get_search_data(query):
    s = simplejson.loads(urllib2.urlopen(
            urllib2.Request("http://search.twitter.com/search.json",
                            urllib.urlencode({"q": query}))).read())
    return s

def json_search(query):
    for data in get_search_data(query)["results"]:
        yield data

if __name__ == "__main__" :
    for searchterm in sys.argv[1:] :
        print "**** Tweets containing", searchterm
        statuses = json_search(searchterm)
        for st in statuses :
            print st['created_at']
            print "<%s> %s" % (st['from_user'], st['text'])
            print ""

You can run twitref @yourname from the commandline now and then. You can even call it as a cron job and mail yourself the output, if you want to make sure you see replies. Of course, you can use it to search for other patterns too, like twitref #vss or twitref #scale9x.

You'll need the simplejson Python library, which most distros offer as a package; on Ubuntu, install python-simplejson.

It's unclear how long any of this will continue to be supported, since Twitter recently announced that they disapprove of third-party apps using their API. Oh, well ... if Twitter stops allowing outside apps, I'm not sure how interested I'll be in continuing to use it.

On the other hand, their original announcement on Google Groups seems to have been removed -- I was going to link to it here and discovered it was no longer there. So maybe Twitter is listening to the outcry and re-thinking their position.

Tags: , , ,
[ 10:53 Mar 18, 2011    More programming | permalink to this entry | comments ]

Thu, 10 Mar 2011

On Linux Planet: Plotting mail logs with CairoPlot

[Pie chart showing origins of spam]

My latest LinuxPlanet article is on plotting pretty graphs from Python with CairoPlot.

Of course, to demonstrate a graphing package I needed some data. So I decided to plot some stats parsed from my Postfix mail log file. We bounce a lot of mail (mostly spam but some false positives from mis-configured email servers) that comes in with bogus HELO addresses. So I thought I'd take a graphical look at the geographical sources of those messages.

The majority were from IPs that weren't identifiable at all -- no reverse DNS info. But after that, the vast majority turned out to be, surprisingly, from .il (Israel) and .br (Brazil).

Surprised me! What fun to get useful and interesting data when I thought I was just looking for samples for an article.

Tags: , , ,
[ 15:08 Mar 10, 2011    More programming | permalink to this entry | comments ]

Tue, 22 Feb 2011

Python for (Cartalk) Puzzlers

Last week's Car Talk had a fun puzzler called "Three Pieces of Paper":

Three different numbers are chosen at random, and one is written on each of three slips of paper. The slips are then placed face down on the table. The objective is to choose the slip upon which is written the largest number.

Here are the rules: You can turn over any slip of paper and look at the amount written on it. If for any reason you think this is the largest, you're done; you keep it. Otherwise you discard it and turn over a second slip. Again, if you think this is the one with the biggest number, you keep that one and the game is over. If you don't, you discard that one too.

What are the odds of winning? The obvious answer is one in three, but you can do better than that. After thinking about it a little I figured out the strategy pretty quickly (I won't spoil it here; follow the link above to see the answer). But the question was: how often does the correct strategy give you the answer?

It made for a good "things to think about when trying to fall asleep" insomnia game. And I mostly convinced myself that the answer was 50%. But probability problems are tricky beasts (witness the Monty Hall Problem, which even professional mathematicians got wrong) and I wasn't confident about it. Even after hearing Click and Clack describe the answer on this week's show, asserting that the answer was 50%, I still wanted to prove it to myself.

Why not write a simple program? That way I could run lots of trials and see if the strategy wins 50% of the time.

So here's my silly Python program:

#! /usr/bin/env python

# Cartalk puzzler Feb 2011

import random, time

random.seed()

tot = 0
wins = 0

while True:
    # pick 3 numbers:
    n1 = random.randint(0, 100)
    n2 = random.randint(0, 100)
    n3 = random.randint(0, 100)

    # Always look at but discard the first number.
    # If the second number is greater than the first, stick with it;
    # otherwise choose the third number.
    if n2 > n1 :
        final = n2
    else :
        final = n3

    biggest = max(n1, n2, n3)
    win = (final == biggest)
    tot += 1
    if win :
        wins += 1
    print "%4d %4d %4d %10d %10s %6d/%-6d = %10d%%" % (n1, n2, n3, final,
                                                       str(win),
                                                       wins, tot,
                                                       int(wins*100/tot))
    if tot % 1000 == 0:
        print "(%d ...)" % tot
        time.sleep(1)

It chooses numbers between 0 and 100, for no particular reason; I could randomize that, but it wouldn't matter to the result. I made it print out all the outcomes, but pause for a second after every thousand trials ... otherwise the text scrolls too fast to read.

And indeed, the answer converges very rapidly to 50%. Hurray!

After I wrote the script, I checked Car Talk's website. They have a good breakdown of all the possible outcomes and how they map to a probability. Of course, I could have checked that first, before writing the program. But I was thinking about this in the car while driving home, with no access to the web ... and besides, isn't it always more fun to prove something to yourself than to take someone else's word for it?
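
For the still-skeptical, there's an even quicker check than random trials: with three distinct numbers only their relative order matters, so there are just six cases, and you can enumerate them. A minimal sketch:

from itertools import permutations

wins = 0
for n1, n2, n3 in permutations([1, 2, 3]):
    # Same strategy: always discard the first; keep the second
    # if it beats the first, otherwise take the third.
    final = n2 if n2 > n1 else n3
    if final == 3:
        wins += 1
print "%d wins out of 6" % wins    # prints 3: exactly 50%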

Tags: , , ,
[ 21:17 Feb 22, 2011    More programming | permalink to this entry | comments ]

Fri, 18 Feb 2011

New GIMP Arrow Designer

[arrow] While writing a blog post on GIMP's confusing Auto button (to be posted soon), I needed some arrows, and discovered a bug in my Arrow Designer script when making arrows that are mostly vertical.

So I fixed it. You can get the new Arrow Designer 0.5 on my GIMP Arrow Designer page.

It's purely a coincidence that I discovered this a week before SCALE, where I'll be speaking on Writing GIMP Scripts and Plug-Ins. Arrow Designer is one of my showpieces for making interactive plug-ins with GIMP-Python, so I'm glad I noticed the bug when I did.

Tags: , ,
[ 21:28 Feb 18, 2011    More gimp | permalink to this entry | comments ]

Mon, 31 Jan 2011

Feedme 0.7

[FeedMe, Seymour!] I've been enjoying my Android tablet e-reader for a couple of months now ... and it's made me realize some of the shortcomings in FeedMe. So of course I've been making changes along the way -- quite a few of them, from handling multiple output file types (html, plucker, ePub or FictionBook) to smarter handling of start, end and skip patterns to a different format of the output directory.

It's been fairly solid for a few weeks now, so it's time to release ... FeedMe 0.7.

Tags: , , ,
[ 22:32 Jan 31, 2011    More programming | permalink to this entry | comments ]

Tue, 18 Jan 2011

X Terminal Colors (and dark and light backgrounds)

[Displaying colors in an xterm] At work, I'm testing some web programming on a server where we use a shared account -- everybody logs in as the same user. That wouldn't be a problem, except nearly all Linuxes are set up to use colors in programs like ls and vim that are only readable against a dark background. I prefer a light background (not white) for my terminal windows.

How, then, can I set things up so that both dark- and light-backgrounded people can use the account? I could write a script to switch between different sets of aliases and configuration files, like when I changed my vim colors. Better, I could fix everything at once by changing my terminal's idea of colors -- so when the remote machine thinks it's feeding me a light color, I see a dark one.

I use xterm, which has an easy way of setting colors: it has a list of 16 colors defined in X resources. So I can change them in ~/.Xdefaults.

That's all very well. But first I needed a way of seeing the existing colors, so I knew what needed changing, and of testing my changes.

Script to show all terminal colors

I thought I remembered once seeing a program to display terminal colors, but now that I needed one, I couldn't find it. Surely it should be trivial to write. Just find the escape sequences and write a script to substitute 0 through 15, right?

Except finding the escape sequences turned out to be harder than I expected. Sure, I found them -- lots of them, pages that conflicted with each other, most giving sequences that didn't do anything visible in my xterm.

Eventually I used script to capture output from a vim session to see what it used. It used <ESC>[38;5;Nm to set color N, and <ESC>[m to reset to the default color. This more or less agreed with Wikipedia's ANSI escape code page, which says <ESC>[38;5; does "Set xterm-256 text color" with a note "Dubious - discuss". The discussion says this isn't very standard. That page also mentions the simpler sequence <ESC>[0;Nm to set the first 8 colors.

Okay, so why not write a script that shows both? Like this:

#! /usr/bin/env python

# Display the colors available in a terminal.

print "16-color mode:"
for color in range(0, 16) :
    for i in range(0, 3) :
        print "\033[0;%sm%02s\033[m" % (str(color + 30), str(color)),
    print

# Programs like ls and vim use the first 16 colors of the 256-color palette.
print "256-color mode:"
for color in range(0, 256) :
    for i in range(0, 3) :
        print "\033[38;5;%sm%03s\033[m" % (str(color), str(color)),
    print

Voilà! That shows the 8 colors I needed to see what vim and ls were doing, plus a lovely rainbow of other possible colors in case I ever want to do any serious ASCII graphics in my terminal.

Changing the X resources

The next step was to change the X resources. I started by looking for where the current resources were set, and found them in /etc/X11/app-defaults/XTerm-color:

$ grep color /etc/X11/app-defaults/XTerm-color
irrelevant stuff snipped
*VT100*color0: black
*VT100*color1: red3
*VT100*color2: green3
*VT100*color3: yellow3
*VT100*color4: blue2
*VT100*color5: magenta3
*VT100*color6: cyan3
*VT100*color7: gray90
*VT100*color8: gray50
*VT100*color9: red
*VT100*color10: green
*VT100*color11: yellow
*VT100*color12: rgb:5c/5c/ff
*VT100*color13: magenta
*VT100*color14: cyan
*VT100*color15: white
! Disclaimer: there are no standard colors used in terminal emulation.
! The choice for color4 and color12 is a tradeoff between contrast, depending
! on whether they are used for text or backgrounds.  Note that either color4 or
! color12 would be used for text, while only color4 would be used for a
! Originally color4/color12 were set to the names blue3/blue
!*VT100*color4: blue3
!*VT100*color12: blue
!*VT100*color4: DodgerBlue1
!*VT100*color12: SteelBlue1

So all I needed to do was take the ones that don't show up well -- yellow, green and so forth -- and change them to colors that work better, choosing from the color names in /etc/X11/rgb.txt or my own RGB values. So I added lines like this to my ~/.Xdefaults:

!! color2 was green3
*VT100*color2: green4
!! color8 was gray50
*VT100*color8: gray30
!! color10 was green
*VT100*color10: rgb:00/aa/00
!! color11 was yellow
*VT100*color11: dark orange
!! color14 was cyan
*VT100*color14: dark cyan
... and so on.
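
One step that's easy to forget: depending on how your session loads resources, new xterms may not see the changes until you've merged them into the resource database, with something like

xrdb -merge ~/.Xdefaults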

Now I can share accounts, and I no longer have to curse at those default ls and vim settings!

Update: Tip from Mikachu: ctlseqs.txt is an excellent reference on terminal control sequences.


Tags: , , , , ,
[ 10:56 Jan 18, 2011    More linux | permalink to this entry | comments ]

Tue, 04 Jan 2011

Fontasia v 0.5

[Fontasia, a font viewer and categorizer] I had a nice relaxing holiday season. A little too relaxing -- I didn't get much hacking done, and spent more time fighting with things that didn't work than making progress fixing things.

But I did spend quite a bit of time with my laptop, currently running Arch Linux, trying to get the fonts to work as well as they do in Ubuntu. I don't have a definite solution yet to my Arch font issues, but all the fiddling with fonts did lead me to realize that I needed an easier way to preview specific fonts in bold.

So I added Bold and Italic buttons to fontasia, and called it Fontasia 0.5. I'm finding it quite handy for previewing all my fixed-width fonts while trying to find one emacs can display.

Tags: , ,
[ 23:00 Jan 04, 2011    More programming | permalink to this entry | comments ]

Sat, 30 Oct 2010

New versions of mapping programs: Pytopo and Ellie

[pytopo logo] On our recent Mojave trip, as usual I spent some of the evenings reviewing maps and track logs from some of the neat places we explored.

There isn't really any existing open source program for offline mapping, something that works even when you don't have a network. So, long ago, I wrote Pytopo, a little program that can take map tiles from a Windows program called Topo! (or tiles you generate yourself somehow) and let you navigate around in that map.

But in the last few years, a wonderful new source of map tiles has become available: OpenStreetMap. On my last desert trip, I whipped up some code to show OSM tiles, but a lot of the code was hacky and empirical because I couldn't find any documentation for details like the tile naming scheme.

Well, that's changed. Upon returning to civilization I discovered there's now a wonderful page explaining the Slippy map tilenames very clearly, with sample code and everything. And that was the missing piece -- from there, all the things I'd been missing in pytopo came together, and now it's a useful self-contained mapping script that can download its own tiles, and cache them so that when you lose net access, your maps don't disappear along with everything else.
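
The scheme itself is pleasingly simple once someone explains it: at zoom level z, the world is a 2^z by 2^z grid of Mercator-projected tiles. The lat/lon-to-tilename math from that page boils down to this (my paraphrase of their sample code):

import math

def deg2tilenum(lat, lon, zoom):
    # x counts tiles eastward from longitude -180, y southward from
    # the north pole, in OSM's spherical Mercator projection.
    n = 2 ** zoom
    xtile = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    ytile = int((1.0 - math.log(math.tan(lat_rad) +
                                1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile    # the tile lives at the path zoom/xtile/ytile.png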

Pytopo can show GPS track logs and waypoints, so you can see where you went as well as where you might want to go, and whether that road off to the right actually would have connected with where you thought you were heading.

It's all updated in svn and on the Pytopo page.

Ellie

[Ellie icon]

Most of the pytopo work came after returning from the desert, when I was able to google and find that OSM tile naming page. But while still out there and with no access to the web, I wanted to review the track logs from some of our hikes and see how much climbing we'd done. I have a simple package for plotting elevation from track logs, called Ellie. But when I ran it, I discovered that I'd never gotten around to installing the pylab Python plotting package (say that three times fast!) on this laptop.

No hope of installing the package without a net ... so instead, I tweaked Ellie so that without pylab you can still print out statistics like total climb. While I was at it I added total distance, time spent moving and time spent stopped. Not a big deal, but it gave me the numbers I wanted. It's available as ellie 0.3.

Tags: , ,
[ 19:24 Oct 30, 2010    More mapping | permalink to this entry | comments ]

Fri, 15 Oct 2010

Snakes on a Couch! Using Python with CouchDB

Part II of my CouchDB tutorial is out at Linux Planet. In it, I use Python and CouchDB to write a simple application that keeps track of which restaurants you've been to recently, and to suggest new places to eat where you haven't been.

Snakes on a Couch, Part 2: Where do you want to eat?

Tags: , , , ,
[ 21:00 Oct 15, 2010    More writing | permalink to this entry | comments ]

Thu, 23 Sep 2010

Snakes on a Couch! Using Python with CouchDB

I've been learning CouchDB, the hot NoSQL database, as part of my new job. It's interesting -- a very different mindset compared to classic databases like MySQL.

There's a fairly good Python package for it, python-couchdb ... but the documentation is somewhat incomplete and there's very little else written about it, and virtually no sample code to steal.

That makes it a perfect topic for a Linux Planet tutorial! So here it is, Part 1:

Snakes on a Couch! Using Python with CouchDB.

I have a rather fun application for the database I introduce in the article, but you'll have to wait until Part 2, two weeks from now, to see the details.
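
Meanwhile, to give you a taste of how pythonic the bindings feel, the basics look roughly like this (a from-memory sketch with made-up data, not the article's code):

import couchdb

server = couchdb.Server("http://localhost:5984/")
db = server.create("restaurants")    # on later runs: db = server["restaurants"]

# A document is just a dictionary; CouchDB assigns the id and revision.
doc_id, rev = db.save({"name": "Taqueria", "lastvisit": "2010-09-20"})
print db[doc_id]["name"]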

Tags: , , , ,
[ 11:55 Sep 23, 2010    More writing | permalink to this entry | comments ]

Fri, 03 Sep 2010

Fontasia v 0.3

A couple of weeks ago I posted about fontasia, my new font-chooser app. [Fontasia: font viewer/categorizer] It's gone through a couple of revisions since then, and Mikael Magnusson contributed several excellent improvements, like being able to render each font in the font list.

I'd been holding off on posting 0.3, hoping to have time to do something about the font buttons -- they really need to be smaller, so there's space for more categories. But between a new job and several other commitments, I haven't had time to implement that. And the fancy font list is so cool it really ought to be shared.

So here it is: fontasia 0.3.

Tags: , ,
[ 10:31 Sep 03, 2010    More programming | permalink to this entry | comments ]

Tue, 17 Aug 2010

Fontasia: View and categorize your fonts

[Fontasia: font viewer/categorizer] We were talking about fonts again on IRC, and how there really isn't any decent font viewer on Linux that lets you group fonts into categories.

Any time you need to choose a font -- perhaps you know you need one that's fixed-width, script, cartoony, western-themed -- you have to go through your entire font list, clicking one by one on hundreds of fonts and saving the relevant ones somehow so you can compare them later. If you have a lot of fonts installed, it can take an hour or more to choose the right font for a project.

There's a program called fontypython that does some font categorization, but it's hard to use: it doesn't operate on your installed fonts, only on fonts you copy into a special directory. I never quite understood that; I want to categorize the fonts I can actually use on my system.

I've been wanting to write a font categorizer for a long time, but I always trip up on finding documentation on getting Python to render fonts. This time, when I googled, I found Jan Bodnar's ZetCode Pango tutorial, which gave me all I needed, and I was off and running.
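
The kernel of what I learned there is tiny: hand a pango layout a FontDescription, then draw the layout. Roughly like this sketch of the approach (not fontasia's actual code):

import gtk, pango

def draw_sample(widget, fontname, text):
    # Make a pango layout tied to the widget, in the requested font,
    layout = widget.create_pango_layout(text)
    layout.set_font_description(pango.FontDescription(fontname))
    # then draw it into the widget's window, e.g. from an expose handler.
    gc = widget.window.new_gc()
    widget.window.draw_layout(gc, 10, 10, layout)

Call it with something like draw_sample(area, "Sans Bold 24", "The quick brown fox") and you have the guts of a font previewer.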

At heart, Fontasia is a font viewer. It shows all your fonts in a list on the left, with a preview on the right. But it also lets you add categories: type the category name in the box and click Add category, and a button for that category will appear, with the current font added to it. A font can be in multiple categories.

Once you've categorized your fonts, a menu at the top of the window lets you show just the fonts in a particular category. So if you're working on a project that needs a Western-style font, show that category and you'll see only relevant fonts.

You can also show only the fonts you've categorized, which lets you exclude fonts you never use -- I don't speak Tamil or Urdu, so I don't need to see those fonts when I'm choosing one. Or you can show only the uncategorized fonts: that's useful when you add new fonts to your system and need to go through and categorize them.

I'm excited about fontasia. It's only a few days old, and I've already used it several times for real-world font selection problems.

If you want to try it, it's here: Fontasia: View and categorize fonts.

Tags: , ,
[ 12:20 Aug 17, 2010    More programming | permalink to this entry | comments ]

Sat, 10 Jul 2010

Interactive arrow design in GIMP

How many times have you wanted an easy way of making arrows in GIMP?

I need arrows all the time, for screenshots and diagrams. And there really isn't any easy way to do that in GIMP. There's a script-fu for making arrows in the Plug-in registry, but it's fiddly and always takes quite a few iterations to get it right. More often, I use a collection of arrow brushes I downloaded from somewhere -- I can't remember exactly where I got my collection, but there are lots of options if you google gimp arrow brushes -- then use the free rotate tool to rotate the arrow in the right direction.

[GIMP Arrow Designer] The topic of arrows came up again on #gimp yesterday, and Alexia Death mentioned her script-fu in the GIMP FX Foundry that "abuses the selection" to make shapes, like stars and polygons. She suggested that it would be easy to make arrows the same way, using the current selection as a guide to where the arrow should go.

And that got me thinking about Joao Bueno's neat Python plug-in demo that watches the size of the selection and updates a dialog every time the selection changes. Why not write an interactive Python script that monitors the selection and lets you change the arrow by changing the size of the selection, while fine-tuning the shape and size of the arrowhead interactively via a dialog?

Of course I had to write it. And it works great! I wish I'd written this five years ago.

This will also make a great demo for my OSCON 2010 talk on Writing GIMP Scripts and Plug-ins, Thursday July 22. I wish I'd had it for Libre Graphics Meeting last month.

It's here: GIMP Arrow Designer.

Tags: , , ,
[ 11:25 Jul 10, 2010    More gimp | permalink to this entry | comments ]

Fri, 16 Apr 2010

Tee in Python

I needed a way to send the output of a Python program to two places simultaneously: print it on-screen, and save it to a file.

Normally I'd use the Linux command tee for that: prog | tee prog.out saves a copy of the output to the file prog.out as well as printing it. That worked fine until I added something that needed to prompt the user for an answer. That doesn't work when you're piping through tee: the output gets buffered and doesn't show up when you need it to, even if you try to flush() it explicitly.

I investigated shell-based solutions: the output I need is on stderr, while Python's raw_input() user prompt uses stdout, so if I could get the shell to send stderr through tee without stdout, that would have worked. My preferred shell, tcsh, can't do this at all, but bash supposedly can. But the best examples I could find on the web, like the arcane prog 2>&1 >&3 3>&- | tee prog.out 3>&-, didn't work.

I considered using /dev/tty or opening a pty, but those calls only work on Linux and Unix and the program is otherwise cross-platform.

What I really wanted was a class that acts like a standard Python file object, but when you write to it it writes to two places: the log file and stderr.

I found an example of someone trying to write a Python tee class, but it didn't work: it handled write() but not print >>.

I am greatly indebted to KirkMcDonald of #python for finding the problem. In the Python source implementing >>, PyFile_WriteObject (line 2447) checks the object's type, and if it's subclassed from the built-in file object, it writes directly to the object's fd instead of calling write().

The solution is to use composition rather than inheritance. Don't make your file-like class inherit from file, but instead include a file object inside it. Like this:

import sys

class tee :
    def __init__(self, _fd1, _fd2) :
        self.fd1 = _fd1
        self.fd2 = _fd2

    def __del__(self) :
        if self.fd1 != sys.stdout and self.fd1 != sys.stderr :
            self.fd1.close()
        if self.fd2 != sys.stdout and self.fd2 != sys.stderr :
            self.fd2.close()

    def write(self, text) :
        self.fd1.write(text)
        self.fd2.write(text)

    def flush(self) :
        self.fd1.flush()
        self.fd2.flush()

stderrsav = sys.stderr
outputlog = open(logfilename, "w")   # logfilename is wherever you want the copy
sys.stderr = tee(stderrsav, outputlog)

And it works! print >>sys.stderr, "Hello, world" now goes to the file as well as stderr, and raw_input still works to prompt the user for input.

In general, I'm told, it's not safe to inherit from Python's built-in objects like file, because their C implementations tend to call each other directly instead of making virtual calls to your overridden methods. What happened here will happen for other objects too. So use composition instead when extending Python's built-in types.

Tags: ,
[ 09:48 Apr 16, 2010    More programming | permalink to this entry | comments ]

Fri, 08 Jan 2010

Python-GTK regression: How to catch mouse button release

We just had the second earthquake in two days, and I was chatting with someone about past earthquakes and wanted to measure the distance to some local landmarks. So I fired up PyTopo as the easiest way to do that. Click on one point, click on a second point and it prints distance and bearing from the first point to the second.

Except it didn't. In fact, clicks weren't working at all. And although I have hacked a bit on parts of pytopo (the most recent project was trying to get scaling working properly in tiles imported from OpenStreetMap), the click handling isn't something I've touched in quite a while.

It turned out that there's a regression in PyGTK: to get mouse button release events, you now have to set an event mask for button presses as well as button releases -- you need both, for some reason. So the code now looks like this:

drawing_area.connect("button-release-event", button_event)
drawing_area.set_events(gtk.gdk.EXPOSURE_MASK |
                        # next line wasn't needed before:
                        gtk.gdk.BUTTON_PRESS_MASK |
                        gtk.gdk.BUTTON_RELEASE_MASK )

An easy fix ... once you find it.

I filed bug 606453 to see whether the regression was intentional.

I've checked in the fix to the PyTopo svn repository on Google Code. It's so nice having a public source code repository like that! I'm planning to move Pho to Google Code soon.

Tags: , , ,
[ 14:20 Jan 08, 2010    More programming | permalink to this entry | comments ]

Wed, 25 Nov 2009

Character Sets and Encodings in Linux, part 2

Continuing the discussion of those funny characters you sometimes see in email or on web pages, today's Linux Planet article discusses how to convert and handle encoding errors, using Python or the command-line tool recode:

Mastering Character Sets in Linux (Weird Characters, part 2).

Tags: , , , , , , ,
[ 15:06 Nov 25, 2009    More writing | permalink to this entry | comments ]

Wed, 11 Nov 2009

Building a Py-Webkit-GTK presentation tool

I almost always write my presentation slides using HTML. Usually I use Firefox to present them; it's the browser I normally run, so I know it's installed and the slides all work there. But using Firefox for presentations has several disadvantages.

Last year, when I was researching lightweight browsers, one of the ones that impressed me most was something I didn't expect: the demo app that comes with pywebkitgtk (package python-webkit on Ubuntu). In just a few lines of Python, you can create your own browser with any UI you like, with a fully functional content area. Their current demo even has tabs.

So why not use pywebkitgtk to create a simple fullscreen webkit-based presentation tool?

It was even simpler than I expected. Here's the code:

#!/usr/bin/env python
# python-gtk-webkit presentation program.
# Copyright (C) 2009 by Akkana Peck.
# Share and enjoy under the GPL v2 or later.

import sys
import gobject
import gtk
import webkit

class WebBrowser(gtk.Window):
    def __init__(self, url):
        gtk.Window.__init__(self)
        self.fullscreen()

        self._browser = webkit.WebView()
        self.add(self._browser)
        self.connect('destroy', gtk.main_quit)

        self._browser.open(url)
        self.show_all()

if __name__ == "__main__":
    if len(sys.argv) <= 1 :
        print "Usage:", sys.argv[0], "url"
        sys.exit(0)

    gobject.threads_init()
    webbrowser = WebBrowser(sys.argv[1])
    gtk.main()

That's all! No navigation needed, since the slides include javascript navigation to skip to the next slide, previous, beginning and end. It does need some way to quit (for now I kill it with ctrl-C) but that should be easy to add.
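
For instance, a quit key is just one more handler on the window -- something like this (untested sketch):

    # In WebBrowser.__init__:
    #     self.connect('key-press-event', self.key_press)

    def key_press(self, widget, event):
        # Quit on q or Escape; let everything else through to the page.
        if event.keyval in (ord('q'), gtk.keysyms.Escape):
            gtk.main_quit()
            return True
        return False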

Webkit and image buffering

It works great. The only problem is that webkit's image loading turns out to be fairly poor compared to Firefox's. In a presentation where most slides are full-page images, webkit clears the browser screen to white, then loads the image, creating a noticeable flash each time. Having the images in cache, by stepping through the slide show and then starting from the beginning again, doesn't help much (these are local images on disk anyway, not loaded from the net). Firefox loads the same images with no flash and no perceptible delay.

I'm not sure if there's a solution. I asked some webkit developers and the only suggestion I got was to rewrite the javascript in the slides to do image preloading. I'd rather not do that -- it would complicate the slide code quite a bit solely for a problem that exists only in one library.

There might be some clever way to hack double-buffering in the app code. Perhaps something like catching the 'load-started' signal, switching to another gtk widget that's a static copy of the current page (if there's a way to do that), then switching back on 'load-finished'.

But that will be a separate article if I figure it out. Ideas welcome!

Update, years later: I've used this for quite a few real presentations now. Of course, I keep tweaking it: see my scripts page for the latest version.

Tags: , , , ,
[ 17:12 Nov 11, 2009    More programming | permalink to this entry | comments ]

Tue, 20 Oct 2009

Gathering RSS files for a Palm PDA: FeedMe

For years I've been reading daily news feeds on a series of PalmOS PDAs, using a program called Sitescooper that finds new pages on my list of sites, downloads them, then runs Plucker to translate them into Plucker's open Palm-compatible ebook format.

Sitescooper has an elaborate series of rules for trying to get around the complicated formatting in modern HTML web pages, plus a cache system to figure out what it's seen before. When sites change their design (which most news sites seem to do roughly monthly), that means going in, figuring out the new format and writing a new Sitescooper site file. It doesn't understand RSS, so you can't use the simplified RSS feeds most sites offer. And finally, it's no longer maintained; in fact, I was the last maintainer, after the original author lost interest.

Several weeks ago, bma tweeted about a Python RSS reader he'd hacked up using the feedparser package. His reader targeted email, not Palm, but finding out about feedparser was enough to get me started. So I wrote FeedMe (Carla Schroder came up with the all-important name).
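
feedparser does all the heavy lifting -- you hand it a URL and get back a parsed feed. The core of a reader really is this small:

import feedparser

feed = feedparser.parse("http://example.com/rss.xml")
print feed.feed.title
for entry in feed.entries:
    # Each entry carries a title, link and summary, among other fields.
    print entry.title, "->", entry.link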

I've been using it for a couple of weeks now and I'm very happy with the results. It's still quite rough, of course, but it's already producing better files than Sitescooper did, and it seems more maintainable. Time will tell.

Of course it needs to be made more flexible, adjusted so that it can produce formats besides Plucker, and so on. I'll get to it.

And the only site I miss now, because it doesn't offer an RSS feed, is Linux Planet. Maybe I'll find a solution for that eventually.

Tags: , , , ,
[ 21:08 Oct 20, 2009    More programming | permalink to this entry | comments ]

Mon, 03 Aug 2009

Twit: Now with pattern searches

During OSCON a couple of weeks ago, I kept wishing I could do Twitter searches for a pattern like #oscon in a cleaner way than keeping a tab open in Firefox where I periodically hit Refresh.

Python-twitter doesn't support searches, alas, though search is part of the Twitter API. There's an experimental branch of python-twitter with searching, but I couldn't get it to work. But it turns out Gwibber is also written in Python, and I was able to lift some JSON code from Gwibber to implement a search. (Gwibber itself doesn't work for me: it bombs out looking for the Gnome keyring. Too bad; it looks like it might be a decent client.)
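
Stripped of Twit's option handling, the search amounts to fetching and decoding one JSON URL, something like this (the search.twitter.com endpoint of the day; field names from memory):

import urllib
import simplejson

url = "http://search.twitter.com/search.json?q=" + urllib.quote("#oscon")
results = simplejson.load(urllib.urlopen(url))
for tweet in results["results"]:
    print "%-15s %s" % (tweet["from_user"], tweet["text"])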

I hacked up a "search for OSCON" program and used it a little during the week of the conference, then got home and absorbed in catching up and preparing for next week's GetSET summer camp, where I'm running an astronomy workshop and a Javascript workshop for high school girls. That's been keeping me frazzled, but I found a little time last night to clean up the search code and release Twit 0.3 with search and a few other new command-line arguments.

No big deal, but it was nice to take a hacking break from all this workshop coordinating. I'm definitely happier programming than I am organizing events, that's for sure.

Tags: , ,
[ 18:23 Aug 03, 2009    More programming | permalink to this entry | comments ]

Thu, 09 Jul 2009

Twittering -- and writing Twitter clients

I finally dragged myself into 2009 and tried Twitter.

I'd been skeptical, but it's actually fairly interesting and not that much of a time sink. While it's true that some people tweet about every detail of their lives -- "I'm waiting for a bus" / "Oh, hooray, the bus is finally here" / "I got a good seat in the second row of the bus" / "The bus just passed Second St. and two kids got on" / "Here's a blurry photo from my phone of the Broadway Av. sign as we pass it" -- it's easy enough to identify those people and un-follow them.

And there are tons of people tweeting about interesting stuff. It's like a news ticker, but customizable -- news on the latest protests in Iran, the latest progress on freeing the Mars Spirit Rover, the latest interesting publication on dinosaur fossils, and what's going on at that interesting conference halfway around the world.

The trick is to figure out how you want the information delivered. I didn't want to have to leave a tab open in Firefox all the time. There was an xchat plug-in that sounded perfect -- I have an xchat window up most of the time I'm online -- but it turned out it works by picking one of the servers you're connected to, making a private channel and posting things there. That seemed abusive to the server -- what if everyone on Freenode did that?

So I wanted a separate client. Something lightweight and simple. Unfortunately, the Twitter clients available for Linux either require installing a lot of infrastructure first (Adobe Air or Mono), or just plain don't work (a Twitter client where you can't click on links? Come on!)

But then I tried out the Python-Twitter bindings, and they were so easy to use I decided to write them up for my next Linux Planet article, which came out today: Write Your Own Linux Twitter Client In Less Time Than It Takes To Find One!.
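
To give you a taste of how easy the bindings are, here's a fragment from memory (not the article's exact code; python-twitter still used basic auth back then):

import twitter

api = twitter.Api(username="yourname", password="yourpass")
for status in api.GetFriendsTimeline():
    print "%s: %s" % (status.user.screen_name, status.text)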

The article shows how to use the bindings to write a bare-bones client. But of course, I've been hacking on the client all along, so the one I'm actually using has a lot more features like *ahem* letting you click on links. And letting you block threads, though I haven't actually tested that since I haven't seen any threads I wanted to block since my first day.

You can download the current version of Twit, and anyone who's interested can follow me on Twitter. I don't promise to be interesting -- that's up to you to decide -- but I do promise not to tweet about every block of my bus ride.

Tags: , , ,
[ 16:09 Jul 09, 2009    More writing | permalink to this entry | comments ]

Sat, 20 Jun 2009

Pytopo 0.8 released

On my last Mojave trip, I spent a lot of the evenings hacking on PyTopo.

I was going to try to stick to OpenStreetMap and other existing mapping applications like TangoGPS, a neat little smartphone app for downloading OpenStreetMap tiles that also runs on the desktop -- but really, there still isn't any mapping app that works well enough for exploring maps when you have no net connection.

In particular, uploading my GPS track logs after a day of mapping, I discovered that Tango really wasn't a good way of exploring them, and I already know Merkaartor, nice as it is for entering new OSM data, isn't very good at working offline. There I was, with PyTopo and a boring hotel room; I couldn't stop myself from tweaking a bit.

Adding tracklogs was gratifyingly easy. But other aspects of the code bother me, and when I started looking at what I might need to do to display those Tango/OSM tiles ... well, I've known for a while that some day I'd need to refactor PyTopo's code, and now was the time.

Surprisingly, I completed most of the refactoring on the trip. But even after the refactoring, displaying those OSM tiles turned out to be a lot harder than I'd hoped, because I couldn't find any reliable way of mapping a tile name to the coordinates of that tile. I haven't found any documentation on that anywhere, and Tango and several other programs all do it differently and get slightly different coordinates. That one problem was to occupy my spare time for weeks after I got home, and I still don't have it solved.

But meanwhile, the rest of the refactoring was done, nice features like track logs were working, and I've had to move on to other projects. I am going to finish the OSM tile MapCollection class, but why hold up a release with a lot of useful changes just for that?

So here's PyTopo 0.8, and the couple of known problems with the new features will have to wait for 0.9.

Tags: , , , ,
[ 20:49 Jun 20, 2009    More programming | permalink to this entry | comments ]

Fri, 19 Jun 2009

Python: show all methods in a given object or module

A silly little thing, but something that Python books mostly don't mention and I can never find via Google:

How do you find all the methods in a given class, object or module?

Ideally the documentation would tell you. Wouldn't that be nice? But in the real world, you can't count on that, and examining all of an object's available methods can often give you a good guess at how to do whatever you're trying to do.

Python objects keep their symbol table in a dictionary called __dict__ (that's two underscores on either end of the word). So just look at object.__dict__. If you just want the names of the functions, use object.__dict__.keys().

Thanks to JanC for suggesting dir(object) and help(object), which can be more helpful -- not all objects have a __dict__.
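
For instance:

import math

print math.__dict__.keys()    # names defined in the math module
print dir(math)               # similar list; works even without a __dict__
help(math.floor)              # full documentation for one function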

Tags: , ,
[ 12:44 Jun 19, 2009    More programming | permalink to this entry | comments ]

Sun, 14 Jun 2009

Programming With PyGTK, part 3: Key events and object oriented Python

Part 3 of "Graphical Python Programming With PyGTK" uses object-oriented Python to clean up the code from Part 2, and also adds handling of key events to get rid of that silly Quit button. PythonGTK Programming part 3: Screensaver, Objects, and User Input

Tags: , ,
[ 12:18 Jun 14, 2009    More writing | permalink to this entry | comments ]

Mon, 01 Jun 2009

A GPX file manager

Someone on the OSM newbies list asked how he could strip waypoints out of a GPX track file. Seems he has track logs of an interesting and mostly-unmapped place that he wants to add to openstreetmap, but there are some waypoints that shouldn't be included, and he wanted a good way of separating them out before uploading.

Most of the replies involved "just edit the XML." Sure, GPX files are pretty simple and readable XML -- but a user shouldn't ever have to do that! Gpsman and gpsbabel were also mentioned, but they're not terribly easy to use either.

That reminded me that I had another XML-parsing task I'd been wanting to write in Python: a way to split track files from my Garmin GPS.

Sometimes, after a day of mapping, I end up with several track segments in the same track log file. Maybe I mapped several different trails; maybe I didn't get a chance to upload one day's mapping before going out the next day. Invariably some of the segments are of zero length (I don't know why the Garmin does that, but it always does). Applications like merkaartor don't like this one bit, so I usually end up editing the XML file and splitting it into segments by hand. I'm comfortable with XML -- but it's still silly.

I already have some basic XML parsing as part of PyTopo and Ellie, so I knew the parsing would be easy. So, spurred on by the posting on OSM-newbies, I wrote a little GPX parser/splitter called gpxmgr. gpxmgr -l file.gpx can show you how many track logs are in the file; gpxmgr -w file.gpx can write new files for each non-zero track log. Add -p if you want to be prompted for each filename (otherwise it'll use the name of the track log, which might be something like "ACTIVE\ LOG\ #2").
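
The parsing core is small. With xml.dom.minidom, listing the track logs in a file comes down to something like this sketch (element names straight from the GPX format):

from xml.dom import minidom

dom = minidom.parse("file.gpx")
for trk in dom.getElementsByTagName("trk"):
    names = trk.getElementsByTagName("name")
    if names:
        print names[0].firstChild.data, ":",
    # Counting the points makes zero-length segments easy to spot.
    print len(trk.getElementsByTagName("trkseg")), "segments,",
    print len(trk.getElementsByTagName("trkpt")), "points"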

How, you may wonder, does that help the original poster's need to separate out waypoints from track files? It doesn't. See, my GPS won't save tracklogs and waypoints in the same file, even if you want them that way; you have to use two separate gpsbabel commands to upload a track file and a waypoint file. So I don't actually know what a tracklog-plus-waypoint file looks like. If anyone wants to use gpxmgr to manage waypoints as well as tracks, send me a sample GPX file that combines them both.

Tags: , ,
[ 20:43 Jun 01, 2009    More mapping | permalink to this entry | comments ]

Thu, 28 May 2009

Programming With PyGTK, part 2: pretty screensaver-type graphics

Part 2 of Graphical Python Programming With PyGTK gets into how to do some cool Qix screensaver-style graphics, in: Graphical Python Programming With PyGTK, part 2: Write Your Own Screensaver.

There's also a digg link.

Tags: , ,
[ 18:09 May 28, 2009    More writing | permalink to this entry | comments ]

Thu, 14 May 2009

Graphical Python Programming With PyGTK

This week's Linux Planet article is another one on Python and graphical toolkits, but this time it's a little more advanced: Graphical Python Programming With PyGTK.

This one started out as a fun and whizzy screensaver sort of program that draws lots of pretty colors -- but I couldn't quite fit it all into one article, so that will have to wait for the sequel two weeks from now.

Tags: , ,
[ 19:53 May 14, 2009    More writing | permalink to this entry | comments ]

Thu, 23 Apr 2009

Linux Planet: GIMP Python Plugins, part II

Latest Linux Planet article: How to write a "blobify" GIMP plug-in in Python to make text look three-dimensional.

Creating a Fancy 3D-Effect GIMP Plugin in Python.

Tags: , ,
[ 11:46 Apr 23, 2009    More writing | permalink to this entry | comments ]

Thu, 09 Apr 2009

Linux Planet: Writing Plugins for GIMP in Python

Latest Linux Planet article: Part 1 of a two-parter on Writing GIMP scripts in Python. As usual, there's a Digg link too.

Tags: , ,
[ 22:21 Apr 09, 2009    More writing | permalink to this entry | comments ]

Thu, 26 Mar 2009

GUI Programming in Python For Beginners

Latest on Linux Planet: another introductory programming article, this time on Python's tkinter library: GUI Programming in Python For Beginners. (As usual, there's a Digg link and also a Reddit one.)

Tags: , ,
[ 16:36 Mar 26, 2009    More writing | permalink to this entry | comments ]

Tue, 03 Mar 2009

Ellie: Plot GPS elevation profiles

Ever since I got the GPS I've been wanting something that plots the elevation data it stores. There are lots of apps that will show me the track I followed in latitude and longitude, but I couldn't find anything that would plot elevations.

But GPX (the XML-based format commonly used to upload track logs) is very straightforward -- you can look at the file and read the elevations right out of it. I knew it wouldn't be hard to write a script to plot them in Python; it just needed a few quiet hours. Sounded like just the ticket for a rainy day stuck at home with a sore throat.

Sure enough, it was fairly easy. I used xml.dom.minidom to parse the file (I'd already had some experience with it in gimplabels for converting gLabels templates), and pylab from matplotlib for doing the plotting. Easy and nice looking.
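
Stripped to its essence, the whole thing is parse-then-plot (a simplified sketch, not Ellie's actual code):

from xml.dom import minidom
import pylab

dom = minidom.parse("track.gpx")
# Each <ele> element in a GPX track log holds one elevation, in meters.
eles = [float(e.firstChild.data)
        for e in dom.getElementsByTagName("ele")]
pylab.plot(eles)
pylab.ylabel("elevation (m)")
pylab.show()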

I even threw in the nice "conditional main" code from Matt Harrison's SCALE7x Python talk, so it should be callable from other Python code.

Here's the page and a screenshot: Ellie: plot elevation from a GPS track.

Tags: , ,
[ 17:57 Mar 03, 2009    More programming | permalink to this entry | comments ]

Sat, 28 Feb 2009

langgrep: search only in scripts of a specified language

I was making a minor tweak to my garmin script that uses gpsbabel to read in tracklogs and waypoints from my GPS unit, and I needed to look up the syntax of how to do some little thing in sh script. (One of the hazards of switching languages a lot: you forget syntax details and have to look things up a lot, or at least I do.)

I have quite a collection of scripts in various languages in my ~/bin (plus, of course, all the scripts normally installed in /usr/bin on any Linux machine) so I knew I'd have lots of examples. But there are scripts of all languages sharing space in those directories; it's hard to find just sh examples. For about the two-hundredth time, I wished, "Wouldn't it be nice to have a command that can search for patterns only in files that are really sh scripts?"

And then, the inevitable followup ... "You know, that would be really easy to write."

So I did -- a little python hack called langgrep that takes a language, grep arguments and a file list, looks for a shebang line and only greps the files that have a shebang matching the specified language.

Of course, while writing langgrep I needed langgrep, to look up details of python syntax for things like string.find (I can never remember whether it's string.find(s, pat) or s.find(pat); the python libraries are usually nicely object-oriented but strings are an exception and it's the former, string.find). I experimented with various shell options -- this is Unix, so of course there are plenty of ways of doing this in the shell, without writing a script. For instance:

grep find `egrep -l '#\\!.*python' *`
grep find `file * | grep python | sed 's/:.*//'`
for i in *; do file $i | grep python && grep find $i; done    # in sh/bash

These are all pretty straightforward, but when I try to make them into tcsh aliases things get a lot trickier. tcsh lets you make aliases that take arguments, so you can use !:1 to mean the first argument, !:2-$ to mean all the arguments starting with the second one. That's all very well, but when you put them into a shell alias in a file like .cshrc that has to be parsed, characters like ! and $ can mean other things as well, so you have to escape them with \. So the second of those three lines above turns into something like

alias greplang "grep \!:2-$ `file * | grep \!:1 | sed 's/:.*//'`"

except that doesn't work either, so it probably needs more escaping somewhere. Anyway, I decided after a little alias hacking that figuring out the right collection of backslash escapes would probably take just as long as writing a python script to do the job, and writing the python script sounded more fun.

So here it is: my langgrep script. (Awful name, I know; better ideas welcome!) Use it like this (if python is the language you're looking for, find is the search pattern, and you want -w to find only "find" as a whole word):

langgrep python -w find ~/bin/*

Tags: , ,
[ 10:57 Feb 28, 2009    More programming | permalink to this entry | comments ]

Sat, 16 Aug 2008

Fast Pixel Ops in GIMP-Python

Last night Joao and I were on IRC helping someone who was learning to write gimp plug-ins. We got to talking about pixel operations and how to do them in Python. I offered my arclayer.py as an example of using pixel regions in gimp, but added that C is a lot faster for pixel operations. I wondered if reading directly from the tiles (then writing to a pixel region) might be faster.

But Joao knew a still faster way. As I understand it, one major reason Python is slow at pixel region operations compared to a C plug-in is that Python only writes to the region one pixel at a time, while C can write batches of pixels by row, column, etc. But it turns out you can grab a whole pixel region into a Python array, manipulate it as an array then write the whole array back to the region. He thought this would probably be quite a bit faster than writing to the pixel region for every pixel.

He showed me how to change the arclayer.py code to use arrays, and I tried it on a few test layers. Was it faster? I made a test I knew would take a long time in arclayer, a line of text about 1500 pixels wide. Tested it in the old arclayer; it took just over a minute to calculate the arc. Then I tried Joao's array version: timing with my wristwatch stopwatch, I call it about 1.7 seconds. Wow! That might be faster than the C version.

The updated, fast version (0.3) of arclayer.py is on my arclayer page.

If you just want the trick to using arrays, here it is:

from array import array

[ ... setting up ... ]
        # initialize the regions and get their contents into arrays:
        srcRgn = layer.get_pixel_rgn(0, 0, srcWidth, srcHeight,
                                     False, False)
        src_pixels = array("B", srcRgn[0:srcWidth, 0:srcHeight])

        dstRgn = destDrawable.get_pixel_rgn(0, 0, newWidth, newHeight,
                                            True, True)
        p_size = len(srcRgn[0,0])               
        dest_pixels = array("B", "\x00" * (newWidth * newHeight * p_size))

[ ... then inside the loop over x and y ... ]
                        src_pos = (x + srcWidth * y) * p_size
                        dest_pos = (newx + newWidth * newy) * p_size
                        
                        newval = src_pixels[src_pos: src_pos + p_size]
                        dest_pixels[dest_pos : dest_pos + p_size] = newval

[ ... when the loop is all finished ... ]
        # Copy the whole array back to the pixel region:
        dstRgn[0:newWidth, 0:newHeight] = dest_pixels.tostring() 

Good stuff!

Tags: , , ,
[ 22:02 Aug 16, 2008    More gimp | permalink to this entry | comments ]

Sun, 25 May 2008

Crikey in Python, and generating key events with XTest

A user on the One Laptop Per Child (OLPC, also known as the XO) platform wrote to ask me how to use crikey on that platform.

There are two stages to getting crikey running on a new platform:

  1. Build it, and
  2. Figure out how to make a key run a specific program.

The crikey page contains instructions I've collected for binding keys in various window managers, since that's usually the hard part. On most Linux machines the first step is no problem. But apparently the OLPC comes with gcc but without make or the X header files. (Not too surprising: it's not a machine aimed at developers, and I assume most people developing for it cross-compile from a more capable Linux box.)

We're still working on that (if my correspondent gets it working, I'll post the instructions), but while I was googling for information about the OLPC's X environment I stumbled upon a library I didn't know existed: python-xlib. It turns out it's possible to do most or all of what crikey does from Python. The OLPC is Python based; if I could write crikey in Python, it might solve the problem. So I whipped up a little key event generating script as a test.

Unfortunately, it didn't solve the OLPC problem (they don't include python-xlib on the machine either) but it was a fun exercise, and might be useful as an example of how to generate key events in python-xlib. It supports both event generating methods: the X Test extension and XSendEvent. Here's the script: /pykey-0.1.

But while I was debugging the X Test code, I had to solve a bug that I didn't remember ever solving in the C version of crikey. Sure enough, it needed the same fix I'd had to do in the python version. Two fixes, actually. First, when you send a fake key event through XTest, there's no way to specify a shift mask. So if you need a shifted character like A, you have to send KeyPress Shift, KeyPress a. But if that's all you send, XTest on some systems does exactly what the real key would do if held down and never released: it autorepeats. (But only for a little while, not forever. Go figure.)

So the real answer is to send KeyPress Shift, KeyPress a, KeyRelease a, KeyRelease Shift. Then everything works nicely. I've updated crikey accordingly and released version 0.7 (though since XTest isn't used by default, most users won't see any change from 0.6). In the XSendEvent case, crikey still doesn't send the KeyRelease event -- because some systems actually see it as another KeyPress. (Hey, what fun would computers be if they were consistent and always predictable, huh?)
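
In python-xlib terms, the working XTest sequence for a shifted character looks something like this (a sketch, not pykey's exact code):

from Xlib import display, X, XK
from Xlib.ext import xtest

d = display.Display()
shift = d.keysym_to_keycode(XK.string_to_keysym("Shift_L"))
a = d.keysym_to_keycode(XK.string_to_keysym("a"))

# Press shift and the key, then release both, so nothing autorepeats.
for keycode, event in ((shift, X.KeyPress), (a, X.KeyPress),
                       (a, X.KeyRelease), (shift, X.KeyRelease)):
    xtest.fake_input(d, event, keycode)
d.sync()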

Both C and Python versions are linked off the crikey page.

Tags: , , ,
[ 15:50 May 25, 2008    More programming | permalink to this entry | comments ]

Fri, 12 Oct 2007

PyTopo and PyGTK pixbuf memory leakage

On a recent Mojave desert trip, we tried to follow a minor dirt road that wasn't mapped correctly on any of the maps we had, and eventually had to retrace our steps. Back at the hotel, I fired up my trusty PyTopo on the East Mojave map set and tried to trace the road. But I found that as I scrolled along the road, things got slower and slower until it just wasn't usable any more.

PyTopo was taking up all of my poor laptop's memory. Why? Python is garbage collected -- you're not supposed to have to manage memory explicitly, like freeing pixbufs. I poked around in all the sample code and man pages I had available but couldn't find any pygtk examples that seemed to be doing any explicit freeing.

When we got back to civilization (read: internet access) I did some searching and found the key. It's even in the PyGTK Image FAQ, and there's also some discussion in a mailing list thread from 2003.

Turns out that although Python is supposed to handle its own garbage collection, the Python interpreter doesn't grok the size of a pixbuf object; in particular, it doesn't see the image bits as part of the object's size. So dereferencing lots of pixbuf objects doesn't trigger any "enough memory has been freed that it's time to run the garbage collector" actions.

The solution is easy enough: call gc.collect() explicitly after drawing a map (or any other time a bunch of pixbufs have been dereferenced).

So there's a new version of PyTopo, 0.6 that should run a lot better on small memory machines, plus a new collection format (yet another format from the packaged Topo! map sets) courtesy of Tom Trebisky.

Oh ... in case you're wondering, the ancient USGS maps from Topo! didn't show the road correctly either.

Tags: , , ,
[ 22:21 Oct 12, 2007    More programming | permalink to this entry | comments ]

Tue, 04 Sep 2007

Egg Timer in Python and TkInter

I left the water on too long in the garden again. I keep doing that: I'll set up something where I need to check back in five minutes or fifteen minutes, then I get involved in what I'm doing and 45 minutes later, the cornbread is burnt or the garden is flooded.

When I was growing up, my mom had a little mechanical egg timer. You twist the dial to 5 minutes or whatever, and it goes tick-tick-tick and then DING! I could probably find one of those to buy (they're probably all digital now and include clocks and USB plugs and bluetooth ports) but since the problem is always that I'm getting distracted by something on the computer, why not run an app there?

Of course, you can do this with shell commands. The simple solution is:

(sleep 300; zenity --info --text="Turn off the water!") &

But the zenity dialogs are small -- what if I don't notice it? -- and besides, I have to multiply by 60 to turn a minute delay into sleep seconds. I'm lazy -- I want the computer to do that for me!
Update: Ed Davies points out that "sleep 5m" also works.

A slightly more elaborate solution is the at command. Say something like at now + 15 minutes, and when it prompts for commands, type something like:

export DISPLAY=:0.0
zenity --info --text="Your cornbread is ready"
to pop up a window with a message. But that's too much typing and has the same problem of the small easily-ignored dialogs. I'd really rather have a great big red window that I can't possibly miss.

Surely, I thought, someone has already written a nice egg-timer application! I tried aptitude search timer and found several apps such as gtimer, which is much more complicated than I wanted (you can define named events and choose from a list of ... never mind, I stopped reading there). I tried googling, but didn't have much luck there either (lots of Windows and web apps, no Linux apps or cross-platform scripts).

Clearly just writing the damn thing was going to be easier than finding one. (Why is it that every time I want to do something simple on a computer, I have to write it? I feel so sorry for people who don't program.)

I wanted to do it in python, but what to use for the window that pops up? I've used python-gtk in the past, but I've been meaning to check out TkInter (the gui toolkit that's kinda-sorta part of Python) and this seemed like a nice opportunity since the goal was so simple.

The resulting script: eggtimer. Call it like this:

eggtimer 5 Turn off the water
and in five minutes, it will pop up a huge red window the size of the screen with your message in big letters. (Click it or hit a key to dismiss it.)
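
The guts of the script are only a dozen or so lines of TkInter. Roughly (a simplified sketch, not eggtimer itself):

import sys, time
import Tkinter

minutes = float(sys.argv[1])
msg = " ".join(sys.argv[2:])
time.sleep(minutes * 60)

root = Tkinter.Tk()
# Make the window fill the screen so it's impossible to miss.
root.geometry("%dx%d" % (root.winfo_screenwidth(),
                         root.winfo_screenheight()))
label = Tkinter.Label(root, text=msg, bg="red", fg="white",
                      font=("Helvetica", 64))
label.pack(expand=True, fill=Tkinter.BOTH)
# Any key or mouse click dismisses it.
root.bind("<Key>", lambda event: root.destroy())
root.bind("<Button>", lambda event: root.destroy())
root.mainloop()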

First Impressions of TkInter

It was good to have an excuse to try TkInter and compare it with python-gtk. TkInter has been recommended as something normally installed with Python, so the user doesn't have to install anything extra. This is apparently true on Windows (and maybe on Mac), but on Ubuntu it goes the other way: I already had pygtk, because GIMP uses it, but to use TkInter I had to install python-tk.

For developing I found TkInter irritating. Most of the irritation concerned the poor documentation: there are several tutorials demonstrating very basic uses, but not much detailed documentation for answering questions like "What class is the root Tk() window and what methods does it have?" (The best I found -- which never showed up in google, but was referenced from O'Reilly's Programming Python -- was here.) In contrast, python-gtk is very well documented.

Things I couldn't do (or, at least, couldn't figure out how to do; googling found only postings from other people wanting to do the same thing).

I expect I'll be sticking with pygtk for future projects. It's just too hard figuring things out with no documentation. But it was fun having an excuse to try something new.

Tags: , ,
[ 14:35 Sep 04, 2007    More programming | permalink to this entry | comments ]

Fri, 25 Aug 2006

PyTopo 0.5

Belated release announcement: 0.5b2 of my little map viewer PyTopo has been working well, so I released 0.5 last week with only a few minor changes from the beta. I'm sure I'll immediately find six major bugs -- but hey, that's what point releases are for. I only did betas this time because of the changed configuration file format.

I also made a start on a documentation page for the .pytopo file (though it doesn't really have much that wasn't already written in comments inside the script).

Tags: , , , ,
[ 22:10 Aug 25, 2006    More programming | permalink to this entry | comments ]

Sat, 03 Jun 2006

Cleaner, More Flexible Python Map Viewing

A few months ago, someone contacted me who was trying to use my PyTopo map display script for a different set of map data, the Topo! National Parks series. We exchanged some email about the format the maps used.

I'd been wanting to make PyTopo more general anyway, and already had some hacky code in my local version to let it use a local geologic map that I'd chopped into segments. So, faced with an Actual User (always a good incentive!), I took the opportunity to clean up the code, use some of Python's support for classes, and introduce several classes of map data.

I called it 0.5 beta 1 since it wasn't well tested. But in the last few days, I had occasion to do some map exploring, cleaned up a few remaining bugs, and implemented a feature which I hadn't gotten around to implementing in the new framework (saving maps to a file).

I think it's ready to use now. I'm going to do some more testing: after visiting the USGS Open House today and watching Jim Lienkaemper's narrated Virtual Tour of the Hayward Fault, I'm all fired up about trying again to find more online geologic map data. But meanwhile, PyTopo is feature complete and has the known bugs fixed. The latest version is on the PyTopo page.

Tags: , , , ,
[ 18:25 Jun 03, 2006    More programming | permalink to this entry | comments ]

Tue, 21 Jun 2005

A Fast Volume Control App

I updated my Debian sid system yesterday, and discovered today that gnome-volume-control has changed their UI yet again. Now the window comes up with two tabs, Playback and Capture; the default tab, Playback, has only one slider in it, PCM, and all the important sliders, like Volume, are under Capture. (I'm told this is some interaction with how ALSA sees my sound chip.)

That's just silly. I've never liked the app anyway -- it takes forever to come up, so I end up missing too much of any clip that starts out quiet. All I need is a simple, fast window with a single slider controlling master volume. But nothing like that seems to exist, except panel applets that are tied to the panels of particular window managers.

So I wrote one, in PyGTK. vol is a simple script which shows a slider, and calls aumix under the hood to get and set the volume. It's horizontal by default; vol -h gives a vertical slider.
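The whole thing only takes a couple dozen lines. Roughly -- a sketch, not the real script, and I'm quoting aumix's flags from memory, so check its man page:

    import subprocess
    import gtk

    def get_volume():
        # "aumix -vq" prints a line like "vol 70, 70"
        out = subprocess.Popen(["aumix", "-vq"],
                               stdout=subprocess.PIPE).communicate()[0]
        return int(out.split()[1].rstrip(','))  # the "70," problem -- see below

    def set_volume(adj):
        subprocess.call(["aumix", "-v", str(int(adj.value))])

    adj = gtk.Adjustment(get_volume(), 0, 100, 1, 10)
    adj.connect("value-changed", set_volume)
    win = gtk.Window()
    win.connect("destroy", gtk.main_quit)
    win.add(gtk.HScale(adj))
    win.show_all()
    gtk.main()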

Aside: it's somewhat amazing that Python has no direct way to read an integer out of a string containing more than just that integer: for example, to read 70 out of "70,". I had to write a function to handle that. It's such a terrific no-nonsense language most of the time, yet so bad at a few things. (And when I asked about a general solution in the python channel at [large IRC network], I got a bunch of replies like "use int(str[0:2])" and "use int(str[0:-1])". Shock and bafflement ensued when I pointed out that 5, 100, and -27 are all integers too and wouldn't be handled by those approaches.)
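For the record, the general-purpose fix takes only a few lines with a regex -- a sketch along the lines of the function I wrote, not its exact code:

    import re

    def leading_int(s):
        """Read the integer at the front of s, ignoring whatever follows."""
        m = re.match(r'\s*([+-]?\d+)', s)
        if not m:
            raise ValueError("no integer at the start of %r" % (s,))
        return int(m.group(1))

    # leading_int("70,") == 70; leading_int("-27;") == -27; leading_int("5") == 5

But it does seem like something the language ought to offer directly.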

[ 15:54 Jun 21, 2005    More programming | permalink to this entry | comments ]

Wed, 13 Apr 2005

PyTopo 0.3

I needed to print some maps for one of my geology class field trips, so I added a "save current map" key to PyTopo (it saves as GIF, which I then print with gimp-print). Under the hood it calls montage from ImageMagick.
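The saving step is essentially one shell command -- a sketch with invented tile names, not PyTopo's actual code:

    import os

    # The tiles currently on screen, in row-major order (names invented):
    tiles = ["a1.gif", "a2.gif", "b1.gif", "b2.gif"]
    # -tile gives the grid shape; -geometry +0+0 butts the tiles together
    os.system("montage -tile 2x2 -geometry +0+0 %s saved-map.gif"
              % " ".join(tiles))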

Get yer PyTopo 0.3 here.

[ 17:56 Apr 13, 2005    More programming | permalink to this entry | comments ]

Sat, 09 Apr 2005

Python Expose vs. Focus

A few days ago, I mentioned my woes regarding Python sending spurious expose events every time the drawing area gains or loses focus.

Since then, I've spoken with several gtk people and investigated a number of workarounds, which I'm writing up here for the benefit of anyone else trying to solve this problem.

First, "it's a feature". What's happening is that the default focus in and out handlers for the drawing area (or perhaps its parent class) assume that any widget which gains keyboard focus needs to redraw its entire window (presumably because it's locate-highlighting and therefore changing color everywhere?) to indicate the focus change. Rather than let the widget decide that on its own, the focus handler forces the issue via this expose event. This may be a bad decision, and it doesn't agree with the gtk or pygtk documentation for what an expose event means, but it's been that way for long enough that I'm told it's unlikely to be changed now (people may be depending on the current behavior).

Especially if there are workarounds -- and there are.

I wrote that this happened only in pygtk and not C gtk, but I was wrong. The spurious expose events are only passed if the CAN_FOCUS flag is set. My C gtk test snippet did not need CAN_FOCUS, because the program from which it was taken, pho, already implements the simplest workaround: put the key-press handler on the window, rather than the drawing area. Window apparently does not have the focus/expose misbehavior.

I worry about this approach, though, because if there are any other UI elements in the window which need to respond to key events, they will never get the chance. I'd rather keep the events on the drawing area.

And that becomes possible by overriding the drawing area's default focus in/out handlers. Simply write a no-op handler which returns TRUE, and set it as the handler for both focus-in and focus-out. This is the solution I've taken (and I may change pho to do the same thing, though it's unlikely ever to be a problem in pho).
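In PyGTK that workaround is just a few lines -- a sketch of the approach, assuming your drawing area needs CAN_FOCUS for key events:

    import gtk

    def ignore_focus(widget, event):
        return True     # "handled": the default full-redraw handler never runs

    da = gtk.DrawingArea()
    da.set_flags(gtk.CAN_FOCUS)
    da.connect("focus-in-event", ignore_focus)
    da.connect("focus-out-event", ignore_focus)

Returning TRUE stops the signal emission before it reaches the class's default handler, so the spurious expose never gets queued.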

In C, there's a third workaround: query the default focus handlers, and disconnect() them. That is a little more efficient (you aren't calling your nop routines all the time) but it doesn't seem to be possible from pygtk: pygtk offers disconnect(), but there's no way to locate the default handlers in order to disconnect them.

But there's a fourth workaround which might work even in pygtk: derive a class from drawing area, and set the focus in and out handlers to null. I haven't actually tried this yet, but it may be the best approach for an app big enough that it needs its own UI classes.

One other thing: it was suggested that I should try using AccelGroups for my key bindings, instead of a key-press handler, and then I could even make the bindings user-configurable. Sounded great! AccelGroups turn out to be very easy to use, and a nice feature. But they also turn out to have undocumented limitations on what can and can't be an accelerator. In particular, the arrow keys can't be accelerators, which makes AccelGroup accelerators less than useful for a widget or app that needs to handle user-initiated scrolling or movement. Too bad!
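For reference, here's roughly what the AccelGroup code looks like in PyGTK (a sketch from memory; the point is that a letter key works where gtk.keysyms.Left won't):

    import gtk

    def zoom_cb(accel_group, acceleratable, keyval, modifier):
        print("zoom!")
        return True

    win = gtk.Window()
    accel = gtk.AccelGroup()
    # 'z' works fine as an accelerator; arrow keysyms silently don't.
    accel.connect_group(ord('z'), 0, gtk.ACCEL_VISIBLE, zoom_cb)
    win.add_accel_group(accel)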

[ 21:52 Apr 09, 2005    More programming | permalink to this entry | comments ]

Wed, 06 Apr 2005

PyTopo is usable; pygtk is inefficient

While on vacation, I couldn't resist tweaking pytopo so that I could use it to explore some of the areas we were visiting.

It seems fairly usable now. You can scroll around, zoom in and out to change between the two different map series, and get the coordinates of a particular location by clicking. I celebrated by making a page for it, with a silly tux-peering-over-map icon.

One annoyance: it repaints every time it gets a focus in or out, which means, for people like me who use mouse focus, that it repaints twice for each time the mouse moves over the window. This isn't visible, but it would drag the CPU down a bit on a slow machine (which matters since mapping programs are particularly useful on laptops and handhelds).

It turns out this is a pygtk problem: any pygtk drawing area window gets spurious Expose events every time the focus changes (whether or not you've asked to track focus events). Each spurious event claims the whole window needs repainting, and there's no apparent way to distinguish it from a real Expose event. The regular gtk libraries (called from C) don't do this, nor do Xlib C programs; only pygtk.
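A stripped-down way to watch it happen under pointer focus, if you're curious (a sketch):

    import gtk

    def expose_cb(widget, event):
        print("expose: %dx%d" % (event.area.width, event.area.height))
        return False

    win = gtk.Window()
    da = gtk.DrawingArea()
    da.set_size_request(200, 200)
    da.set_flags(gtk.CAN_FOCUS)      # so it sees focus changes at all
    da.connect("expose-event", expose_cb)
    win.add(da)
    win.connect("destroy", gtk.main_quit)
    win.show_all()
    gtk.main()

Move the mouse in and out of the window, and full-window exposes scroll by even though nothing needs redrawing.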

I filed bug 172842 on pygtk; perhaps someone will come up with a workaround, though the couple of pygtk developers I found on #pygtk couldn't think of one (and said I shouldn't worry about it since most people don't use pointer focus ... sigh).

[ 17:26 Apr 06, 2005    More programming | permalink to this entry | comments ]

Sun, 27 Mar 2005

Python GTK Topographic Map Program

I couldn't stop myself -- I wrote up a little topo map viewer in PyGTK, so I can move around with arrow keys or by clicking near the edges. It makes it a lot easier to navigate the map directory if I don't know the exact starting coordinates.
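The key-handling part is tiny -- a sketch of the idea, not PyTopo's actual code:

    import gtk

    ARROWS = { gtk.keysyms.Left:  (-1, 0), gtk.keysyms.Right: (1, 0),
               gtk.keysyms.Up:    (0, -1), gtk.keysyms.Down:  (0, 1) }

    def key_press_cb(widget, event):
        if event.keyval in ARROWS:
            dx, dy = ARROWS[event.keyval]
            # shift the view by one tile in that direction, queue a redraw
            return True
        return False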

It's called PyTopo, and it's in the same place as my earlier two topo scripts.

I think CoordsToFilename has some bugs; the data CD also has some holes, and some directories don't seem to exist in the expected place. I haven't figured that out yet.

[ 18:53 Mar 27, 2005    More programming | permalink to this entry | comments ]

Topographic Maps for Linux

I've long wished for something like those topographic map packages I keep seeing in stores. The USGS (US Geological Survey) sells digitized versions of its maps, but there's a hefty setup fee on every order, so ordering is only reasonable when buying large collections all at once.

There are various Linux mapping applications which do things like download squillions of small map sections from online mapping sites, but they're all highly GPS oriented and I haven't had much luck getting them to work without one. I don't (yet?) have a GPS; but even if I had one, I usually want to make maps for places I've been or might go, not for where I am right now. (I don't generally carry a laptop along on hikes!)

The Topo! map/software packages sold in camping/hiking stores (sometimes under the aegis of National Geographic) are very reasonably priced. But of course, the software is written for Windows (and maybe also Mac), which is not much help to Linux users, and the box gives no indication of the format of the data. Googling is no help; it seems no Linux user has ever tried buying one of these packages to see what's inside. The employees at my local outdoor equipment store (Mel Cotton's) were very nice without knowing the answer, and offered the sensible suggestion of calling the phone number on the box, which turns out to belong to a small local company, "Wildflower Productions", located in San Francisco.

Calling Wildflower, alas, results in an all too familiar runaround: a touchtone menu tree where no path results in the possibility of contact with a human. Sometimes I wonder why companies bother to list a phone number at all, when they obviously have no intention of letting anyone call in.

Concluding that the only way to find out was to buy one, I did so. A worthwhile experiment, as it turned out! The maps inside are simple GIF files, digitized from the USGS 7.5-minute series and, wonder of wonders, also from the discontinued but still useful 15-minute series. Each directory contains GIF files covering the area of one 7.5 minute map, in small .75-minute square pieces, including pieces of the 15-minute map covering the same area.

A few minutes of hacking with python and ImageMagick resulted in a script to stitch together all the images in one directory to make one full USGS 7.5-minute map; after a few more hours, I could stitch a map of arbitrary size given start and end longitude and latitude. My initial scripts, such as they are.
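The heart of the directory-stitching script is just arithmetic plus a montage call -- a sketch under invented names, not my actual script:

    import glob, os

    def stitch_quad(tiledir, outfile):
        """Paste one directory's .75-minute tiles into a full 7.5' map."""
        # Assumes only one map series' tiles are globbed, in row-major order:
        tiles = sorted(glob.glob("%s/*.gif" % tiledir))
        cols = 10                # 7.5 minutes of .75-minute tiles per row
        rows = len(tiles) // cols
        os.system("montage -tile %dx%d -geometry +0+0 %s %s"
                  % (cols, rows, " ".join(tiles), outfile))

    stitch_quad("quad_dir", "full_quad.gif")   # hypothetical directory name

The arbitrary-extent version just has to work out which directories and which tiles fall between the start and end coordinates before handing the list to montage.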

Of course, I don't yet have niceties like a key, or an interactive scrolling window, or interpretation of the USGS digital elevation data. I expect I have more work to do. But for now, just being able to generate and print maps for a specific area is a huge boon, especially with all the mapping we're doing in Field Geology class. GIMP's "measure" tool will come in handy for measuring distances and angles!

[ 12:13 Mar 27, 2005    More programming | permalink to this entry | comments ]
