Shallow Thoughts : : 2018

Akkana's Musings on Open Source Computing and Technology, Science, and Nature.

Fri, 28 Dec 2018

A Post-Christmas Snow

[Snow world] The morning after Christmas we woke up to a beautiful white world, with snow still coming down.

Shoveling is a drag, but the snowy landscape is so beautiful, and still such a wonderful novelty for ex-Californians.

This morning we awoke to much the same view, except the snow was deeper -- 8-12 inches, quite a lot for White Rock.

[Roof glacier hanging] We also had the usual amusement of Roof Glaciers: as the mat of snow gradually slides off the metal roof, it hangs off the edge, gradually curling, until finally the weight is great enough that it breaks off and falls. Definitely an amusing sight from inside, and fun from outside too (a few years ago I made time-lapse movies of the roof glaciers).

[Sunny New Mexican snow world] And then, this being New Mexico, the sun came out, so even while snowflakes continued to swirl down we got a bright sunny sparkly snow vista.

Yesterday, the snow stopped falling by afternoon, so Raspberry Pi Club had its usual Thursday meeting. But the second storm came in hours earlier than predicted, and driving home from Pi Club was a bit icy. I wasn't looking forward to the drive up to PEEC and back tonight in a heavier snowstorm for our planetarium talk; but PEEC has closed the Nature Center today on account of snow, which means that tonight's planetarium talk is also canceled. We'll reschedule, probably next quarter.

Happy Holidays, everyone, whether you're huddling inside watching the snow, enjoying sunny weather, or anything in between. Stay warm, and walk in beauty.

[ 12:28 Dec 28, 2018    More misc | permalink to this entry | ]

Fri, 21 Dec 2018

A Splint for "Trigger Thumb"

It's been a while since my last blog post. Partly that's because I've been busy with other things, like a welding class, learning a lot of great new techniques. But it's also because I've been trying to keep typing to a minimum (not easy for me) because of a thumb problem.

It's called "trigger thumb" and apparently is caused by a tendon that gets stuck in its sheath. It can be caused by repetitive motion, but in my case I just woke up with it one day, after a day when I hadn't been doing anything particularly hand-intensive.

Cortisone injections and surgery are the usual treatment. I may yet try cortisone, but the number of such injections you can get is severely limited (like, twice in a lifetime), and the surgery didn't sound appealing, so I wanted to try other approaches first.

Some discussions I found mentioned splinting. I tried splinting it with a popsicle stick and tape, but a straight splint made it much worse: keeping it straight made it want to stay straight, and after removing the splint it was quite painful to try to bend it. For weeks it just kept getting worse.

But I finally found something that helped: a bent splint. I glued two pieces of popsicle stick together at an angle, and at bedtime I taped them to my thumb so it stayed a little bent overnight. That helped quite a bit. But it was a pain to set up and tended to come loose. I wanted something I could just slip on and off, without going through all that tape, that wouldn't come loose. Preferably with an adjustable angle.

[trigger thumb splint v.1: steel] So I cut some strips of steel, got out the welder and made myself a bent splint. It's tough to weld thin pieces with the MIG welder, and I melted it in places. But it worked amazingly well. I lined it with some Moleskin, and after a few days with it, the thumb definitely started to feel better. The tendon was still popping, but it hurt a lot less and I could start using my hand again. And the metal splint let me adjust how much my thumb was bent, which wasn't true at all with the popsicle stick approach.

Plus, it had a neat sort of Spanish Inquisition/Hannibal Lecter look. It looks like a torture device, but really, it's amazingly comfortable.

The only problem: it was heavy. I could feel it dragging down on my thumb all the time. I wished it was a little lighter.

[trigger thumb splint v.2: brass] The hardware store sells strips of brass that looked like just the ticket. But you can't MIG weld brass, only steel. Good thing I was taking that welding class! I asked the instructor, and he brought in some carbon-bronze filler rod and showed me how to "TIG braze". It's difficult and fiddly: brass melts very easily, and the trick is to get the base metal hot, then bring in the filler rod and blip the TIG pedal just enough to melt the rod so it flows in without melting the base metal. While my instructor made it look easy, when I tried it myself I always ended up getting the temperature too hot and melting some of the brass.

So my splint looks a bit ragged in spots. Still, the finished product works wonderfully, and it's quite a bit lighter than its steel cousin. Dave thinks it still looks Hannibal Lecterish, but that doesn't bother me. I skipped the Moleskin this time: it's comfy enough without it, and it's a lot easier to slip the splint on and off.

I'm still trying to spend less time typing until my thumb heals completely. But with the splint, and occasional ice packs, it's improving, doesn't hurt any more, and I'm hoping I can get by without cortisone.

And besides, isn't it more fun to weld up your own medical equipment? (Don't tell the AMA!)

[ 16:37 Dec 21, 2018    More misc | permalink to this entry | ]

Sun, 28 Oct 2018

How to Extend a Moonrise

Last night, as we drove home from the Pumpkin Glow -- one of Los Alamos's best annual events, a night exhibition of dozens of carved pumpkins all together in one place -- I noticed a glow on the horizon right around Truchas Peak and wondered if the moon was going to rise that far north.

Sure enough, I saw the first sliver of the moon poking over the peak as we passed the airport. "We may get an extended moonrise tonight", I said, realizing that as the moon rose, we'd be descending the "Main Hill Road", as that section of NM 502 is locally known, so we'd get lower with respect to the mountains even as the moon got higher. Which would win?

As it turns out, neither. The change of angle during the descent down the Main Hill Road exactly matches the rate of moonrise, so the size of the moon's sliver stayed almost exactly the same during the whole descent, until we got down to the "Y" where a nearby mesa blocked our view entirely. By the time we could see the moon again, it was just freeing itself of the mountains.

Neat! Made me think of The Little Prince: his home asteroid B-612 (no, that's not a real asteroid designation) was small enough that by moving his chair, he could watch sunset over and over again. I'm a sucker for moonrises -- and now I know how I can make them last longer!

[ 19:32 Oct 28, 2018    More science/astro | permalink to this entry | ]

Sun, 21 Oct 2018

How to tell sparrows apart

[Sparrow ID page] I was filing an eBird report the other day, dutifully cataloging the first junco of the year and the various other birds that have been hanging around, when a sparrow flew into my binocular field. A chipping sparrow? Probably ... but this one wasn't so clearly marked.

I always have trouble telling the dang sparrows apart. When I open the bird book, I always have to page through dozens of pages of sparrows that are never seen in this county, trying to figure out which one looks most like what I'm seeing.

I used to do that with juncos, but then I made a local copy of a wonderful comparison photo Bob Walker published a couple years ago on the PEEC blog: Bird of the Week – The Dark-eyed Junco. (I also have the same sort of crib sheet for the Raspberry Pi GPIO pins.) Obviously I needed a similar crib sheet for sparrows.

So I collected the best publicly licensed images I could find on the web, and made Sparrows of Los Alamos County, with comparison images close together so I can check them quickly before the bird flies away.

If you live somewhere else so the Los Alamos County list isn't quite what you need, you're welcome to use the code to make your own version.

[ 19:18 Oct 21, 2018    More nature | permalink to this entry | ]

Sat, 13 Oct 2018

Tape Rabbit

[Packing tape rabbit] I had to mail a package recently, and finished up a roll of packing tape.

I hadn't realized before I removed the tape roll from its built-in dispenser that packing tape was dispensed by rabbits.

[ 20:11 Oct 13, 2018    More humor | permalink to this entry | ]

Sun, 23 Sep 2018

Writing Solar System Simulations with NAIF SPICE and SpiceyPy

Someone asked me about my Javascript Jupiter code, and whether it used PyEphem. It doesn't, of course, because it's Javascript, not Python (I wish there were something as easy as PyEphem for Javascript!); instead it uses code from the book Astronomical Formulae for Calculators by Jean Meeus. (His better known Astronomical Algorithms, intended for computers rather than calculators, is actually harder to use for programming: it's written for BASIC, and its algorithms are relatively hard to translate into other languages. Astronomical Formulae for Calculators concentrates on explaining the algorithms clearly enough that you can punch them into a calculator by hand, which ends up making them fairly easy to implement in a modern computer language as well.)

Anyway, the person asking also mentioned JPL's page HORIZONS Ephemerides page, which I've certainly found useful at times. Years ago, I tried emailing the site maintainer asking if they might consider releasing the code as open source; it seemed like a reasonable request, given that it came from a government agency and didn't involve anything secret. But I never got an answer.

[SpiceyPy example: Cassini's position] But going to that page today, I find that code is now available! What's available is a massive toolkit called SPICE (it's written in all capitals, but there's no indication of what it might stand for). It comes from NAIF, NASA's Navigation and Ancillary Information Facility.

SPICE allows for accurate calculations of all sorts of solar system quantities, from the basic solar system bodies like planets to all of NASA's active and historical public missions. It has bindings for quite a few languages, including C. The official list doesn't include Python, but there's a third-party Python wrapper called SpiceyPy that works fine.

The tricky part of programming with SPICE is that most of the data is hidden away in "kernels" that are specific to the objects and quantities you're calculating. For any given program you'll probably need to download at least four "kernels", maybe more. That wouldn't be a problem except that there's not much help for figuring out which kernels you need and then finding them. There are lots of SPICE examples online, but few of them tell you which kernels they need, let alone where to find them.

After wrestling with some of the examples, I learned some tricks for finding kernels, at least enough to get the basic examples working. I've collected what I've learned so far into a new GitHub repository: NAIF SPICE Examples. The README there explains what I know so far about getting kernels; as I learn more, I'll update it.
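
As a concrete starting point, a minimal kernel set for basic planetary positions looks something like this, fetched from NAIF's generic_kernels area. (The directory layout is NAIF's; the specific file names and versions are assumptions that may have changed by the time you read this.)

# Fetch a minimal generic kernel set from NAIF; browse
# https://naif.jpl.nasa.gov/pub/naif/generic_kernels/ for current versions.
base=https://naif.jpl.nasa.gov/pub/naif/generic_kernels
wget $base/lsk/naif0012.tls          # leapseconds
wget $base/pck/pck00010.tpc          # planetary constants
wget $base/spk/planets/de430.bsp     # planetary ephemeris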

SPICE isn't easy to use, but it's probably much more accurate than simpler code like PyEphem or my Meeus-based Javascript code, and it can calculate so many more objects. It's definitely something worth knowing about for anyone doing solar system simulations.

[ 16:43 Sep 23, 2018    More programming | permalink to this entry | ]

Sun, 16 Sep 2018

Printing Two-Sided from the Command Line

The laser printers we bought recently can print on both sides of the page. Nice feature! I've never had access to a printer that can do that before.

But that requires figuring out how to tell the printer to do the right thing. Reading the man page for lp, I spotted the sides option: lp -o sides=two-sided-long-edge. But that doesn't work by itself. Adding -n 2 looked like the way to go, but nope! That gives you one sheet that has page 1 on both sides, and a second sheet that has page 2 on both sides. Because of course that's what a normal person would want. Right.

The real answer, after further research and experimentation, turned out to be the collate=true option:

lp -o sides=two-sided-long-edge -o collate=true -d printername file
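
That's a lot to type every time, so I'd wrap it in an alias (lp2 is just a hypothetical name; substitute your own printer's name):

# Convenience alias for two-sided, collated printing:
alias lp2='lp -o sides=two-sided-long-edge -o collate=true'

# Usage:
lp2 -d printername file.pdf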

[ 11:05 Sep 16, 2018    More linux | permalink to this entry | ]

Mon, 03 Sep 2018

Raspberry Pi Zero as Ethernet Gadget Part 3: An Automated Script

Continuing the discussion of USB networking from a Raspberry Pi Zero or Zero W (Part 1: Configuring an Ethernet Gadget and Part 2: Routing to the Outside World): You've connected your Pi Zero to another Linux computer, which I'll call the gateway computer, via a micro-USB cable. Configuring the Pi end is easy. Configuring the gateway end is easy as long as you know the interface name that corresponds to the gadget.

ip link gave a list of several networking devices; on my laptop right now they include lo, enp3s0, wlp2s0 and enp0s20u1. How do you tell which one is the Pi Gadget? When I tested it on another machine, it showed up as enp0s26u1u1i1. Even aside from my wanting to script it, it's tough for a beginner to guess which interface is the right one.

Try dmesg

Sometimes you can tell by inspecting the output of dmesg | tail. If you run dmesg shortly after initializing the gadget (either by plugging the USB cable into the gateway computer, or by booting the Pi with the cable already connected), you'll see some lines like:

[  639.301065] cdc_ether 3-1:1.0 enp0s20u1: renamed from usb0
[ 9458.218049] usb 3-1: USB disconnect, device number 3
[ 9458.218169] cdc_ether 3-1:1.0 enp0s20u1: unregister 'cdc_ether' usb-0000:00:14.0-1, CDC Ethernet Device
[ 9462.363485] usb 3-1: new high-speed USB device number 4 using xhci_hcd
[ 9462.504635] usb 3-1: New USB device found, idVendor=0525, idProduct=a4a2
[ 9462.504642] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 9462.504647] usb 3-1: Product: RNDIS/Ethernet Gadget
[ 9462.504660] usb 3-1: Manufacturer: Linux 4.14.50+ with 20980000.usb
[ 9462.506242] cdc_ether 3-1:1.0 usb0: register 'cdc_ether' at usb-0000:00:14.0-1, CDC Ethernet Device, f2:df:cf:71:b9:92
[ 9462.523189] cdc_ether 3-1:1.0 enp0s20u1: renamed from usb0

(Aside: whose bright idea was it to rename usb0 to enp0s26u1u1i1, or wlan0 to wlp2s0? I'm curious exactly who finds their task easier with the name enp0s26u1u1i1 than with usb0. It certainly complicated all sorts of network scripts and howtos when the name wlan0 went away.)

Anyway, from inspecting that dmesg output you can probably figure out the name of your gadget interface. But it would be nice to have something more deterministic, something that could be used from a script. My goal was to have a shell function in my .zshrc, so I could type pigadget and have it set everything up automatically. How to do that?

A More Deterministic Way

First, the name starts with en, meaning it's an ethernet interface, as opposed to wi-fi, loopback, or various other types of networking interface. My laptop also has a built-in ethernet interface, enp3s0, as well as lo, the loopback or "localhost" interface, and wlp2s0, the wi-fi chip, the one that used to be called wlan0.

Second, it has a 'u' in the name. USB ethernet interfaces start with en and then add suffixes to enumerate all the hubs involved. So the number of 'u's in the name tells you how many hubs are involved; that enp0s26u1u1i1 I saw on my desktop had two hubs in the way, the computer's internal USB hub plus the external one sitting on my desk.

So if you have no USB ethernet interfaces on your computer, looking for an interface name that starts with 'en' and has at least one 'u' would be enough. But if you have USB ethernet, that won't work so well.
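
If you do want to try that heuristic, here's a quick way to list the candidates (a sketch; your interface names will differ):

# List ethernet interfaces whose names contain a 'u', i.e. reached via USB:
ip -o link | awk -F': ' '{print $2}' | grep '^en.*u'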

Using the MAC Address

You can get some useful information from the MAC address, called "link/ether" in the ip link output. In this case, it's f2:df:cf:71:b9:92, but -- whoops! -- the next time I rebooted the Pi, it became ba:d9:9c:79:c0:ea. The address turns out to be randomly generated and will be different every time. It is possible to set it to a fixed value, and a thread I found has some suggestions on how, but I think they're out of date, since they reference a kernel module called g_ether whereas the module on my updated Raspbian Stretch is called cdc_ether. I haven't tried.

Anyway, random or not, the MAC address also has one useful property: the first octet (f2 in my first example) will always have the '2' bit set, as an indicator that it's a "locally administered" MAC address rather than one that's globally unique. See the Wikipedia page on MAC addressing for details on the structure of MAC addresses. Both f2 (11110010 in binary) and ba (10111010 binary) have the 2 (00000010) bit set.

No physical networking device, like a USB ethernet dongle, should have that bit set; physical devices have MAC addresses that indicate what company makes them. For instance, Raspberry Pis with networking, like the Pi 3 or Pi Zero W, have interfaces that start with b8:27:eb. Note the 2 bit isn't set in b8.

Most people won't have any USB ethernet devices connected that have the "locally administered" bit set. So it's a fairly good test for a USB ethernet gadget.
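
You can sanity-check that bit test on a single address before building a whole pipeline around it; a minimal sketch in the shell, using the first example address from above:

# Check the locally-administered bit of a MAC address:
mac=f2:df:cf:71:b9:92
first=${mac%%:*}    # first octet: f2
if (( (0x$first & 0x2) != 0 )); then
    echo "$mac is locally administered"
fi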

Turning That Into a Shell Script

So how do we package that into a pipeline so the shell -- zsh, bash or whatever -- can check whether that 2 bit is set?

First, use ip -o link to print out information about all network interfaces on the system. But really you only need the ones starting with en and containing a u. Splitting out the u isn't easy at this point -- you can check for it later -- but you can at least limit it to lines that have en after a colon-space. That gives output like:

$ ip -o link | grep ": en"
5: enp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000\    link/ether 74:d0:2b:71:7a:3e brd ff:ff:ff:ff:ff:ff
8: enp0s20u1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000\    link/ether f2:df:cf:71:b9:92 brd ff:ff:ff:ff:ff:ff

Within that, you only need two pieces: the interface name (the second word) and the MAC address (the 17th word). Awk is a good tool for picking particular words out of an output line:

$ ip -o link | grep ': en' | awk '{print $2, $17}'
enp3s0: 74:d0:2b:71:7a:3e
enp0s20u1: f2:df:cf:71:b9:92

The next part is harder: you have to get the shell to loop over those output lines, split them into the interface name and the MAC address, then split off the second character of the MAC address and test it as a hexadecimal number to see if the '2' bit is set. I suspected that this would be the time to give up and write a Python script, but no, it turns out zsh and even bash can test bits:

ip -o link | grep ': en' | awk '{print $2, $17}' | \
    while read -r iff mac; do
        # LON is a numeric variable containing the digit we care about.
        # The "let" is required so LON will be a numeric variable,
        # otherwise it's a string and the bitwise test fails.
        let LON=0x$(echo $mac | sed -e 's/:.*//' -e 's/.//')

        # Is the 2 bit set? Meaning it's a locally administered MAC
        if ((($LON & 0x2) != 0)); then
            echo "Bit is set, $iff is the interface"
        fi
    done

Pretty neat! So now we just need to package it up into a shell function and do something useful with $iff when you find one with the bit set: namely, break out of the loop, call ip a add and ip link set to enable networking to the Raspberry Pi gadget, and enable routing so the Pi will be able to get to networks outside this one. Here's the final function:

# Set up a Linux box to talk to a Pi0 USB gadget on the 192.168.7.x network:
pigadget() {
    iface=''

    ip -o link | grep ': en' | awk '{print $2, $17}' | \
        while read -r iff mac; do
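            # Note: in zsh the last stage of a pipeline runs in the current
            # shell, so setting $iface inside this loop works. In bash the
            # loop would run in a subshell and the assignment would be lost.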
            # LON is a numeric variable containing the digit we care about.
            # The "let" is required so zsh will know it's numeric,
            # otherwise the bitwise test will fail.
            let LON=0x$(echo $mac | sed -e 's/:.*//' -e 's/.//')

            # Is the 2 bit set? Meaning it's a locally administered MAC
            if ((($LON & 0x2) != 0)); then
                iface=$(echo $iff | sed 's/:.*//')
                break
            fi
        done

    if [[ x$iface == x ]]; then
        echo "No locally administered en interface:"
        ip a | egrep '^[0-9]+: '
        echo Bailing.
        return
    fi

    sudo ip a add 192.168.7.1/24 dev $iface
    sudo ip link set dev $iface up

    # Enable routing so the gadget can get to the outside world:
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
}
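
With that in my .zshrc, plugging in the Pi Zero and getting a working network is one command (192.168.7.2 being the Pi's static address from Part 2, and pi the default Raspbian user):

$ pigadget
$ ssh pi@192.168.7.2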

[ 18:41 Sep 03, 2018    More linux | permalink to this entry | ]

Fri, 31 Aug 2018

Raspberry Pi Zero as Ethernet Gadget Part 2: Routing to the Outside World

I wrote some time ago about how to use a Raspberry Pi over USB as an "Ethernet Gadget". It's a handy way to talk to a headless Pi Zero or Zero W if you're somewhere where it doesn't already have a wi-fi network configured.

However, the setup I gave in that article doesn't offer a way for the Pi Zero to talk to the outside world. The Pi is set up to use the machine on the other end of the USB cable for routing and DNS, but that doesn't help if the machine on the other end isn't acting as a router or a DNS host.

A lot of the ethernet gadget tutorials I found online explain how to do this on Mac and Windows, but it was tough to find an example for Linux. The best I found was for Slackware, How to connect to the internet over USB from the Raspberry Pi Zero, which should work on any Linux, not just Slackware.

Let's assume you have the Pi running as a gadget and you can talk to it, as discussed in the previous article, so you've run:

sudo ip a add 192.168.7.1/24 dev enp0s20u1
sudo ip link set dev enp0s20u1 up
substituting your network number and the interface name that the Pi created on your Linux machine, which you can find in dmesg | tail or ip link. (In Part 3 I'll talk more about how to find the right interface name if it isn't obvious.)

At this point, the network is up and you should be able to ping the Pi with the address you gave it, assuming you used a static IP: ping 192.168.7.2. If that works, you can ssh to it, assuming you've enabled ssh. But from the Pi's end, all it can see is your machine; it can't get out to the wider world.

For that, you need to enable IP forwarding and masquerading:

sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
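
One caveat: the -o eth0 in that rule names the gateway machine's upstream interface, the one that actually reaches the internet. If your gateway is a laptop on wi-fi, use its wireless interface instead (wlp2s0 here is just an example name):

# If the gateway reaches the internet over wi-fi rather than wired ethernet:
sudo iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE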

Now the Pi can route to the outside world, but it still doesn't have DNS, so it can't resolve any domain names. To test that, on the gateway machine try pinging some well-known host:

$ ping -c 2 google.com
PING google.com (216.58.219.110) 56(84) bytes of data.
64 bytes from mia07s25-in-f14.1e100.net (216.58.219.110): icmp_seq=1 ttl=56 time=78.6 ms
64 bytes from mia07s25-in-f14.1e100.net (216.58.219.110): icmp_seq=2 ttl=56 time=78.7 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 78.646/78.678/78.710/0.032 ms

Take the IP address from that -- e.g. 216.58.219.110 -- then go to a shell on the Pi and try ping -c 2 216.58.219.110, and you should see a response.

DNS with a Public DNS Server

Now all you need is DNS. The easy way is to use one of the free DNS services, like Google's 8.8.8.8. Edit /etc/resolv.conf and add a line like

nameserver 8.8.8.8
and then try pinging some well-known hostname.

If it works, you can make that permanent by editing /etc/resolvconf.conf (note: resolvconf.conf, not resolv.conf, which gets rewritten at boot), and adding this line:

name_servers=8.8.8.8

Otherwise you'll have to do it every time you boot.

Your Own DNS Server

But not everyone wants to use public nameservers like 8.8.8.8. For one thing, there are privacy implications: it means you're telling Google about every site you ever use for any reason.

Fortunately, there's an easy way around that, and you don't even have to figure out how to configure bind/named. On the gateway box, install dnsmasq, available through your distro's repositories. It will use whatever nameserver you're already using on that machine, and answer DNS queries from other machines, like your Pi, that need the information. I didn't need to configure it at all; it worked right out of the box.
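
Concretely, the whole thing is two steps (assuming the gateway addressing from earlier, with the gateway at 192.168.7.1; the same boot-persistence caveat applies to resolv.conf):

# On the gateway machine:
sudo apt install dnsmasq

# On the Pi, point DNS at the gateway's USB-network address:
echo 'nameserver 192.168.7.1' | sudo tee -a /etc/resolv.conf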

In the next article, Part 3: more about those crazy interface names (why is it enp0s20u1 on my laptop but enp0s26u1u1i1 on my desktop?), how to identify which interface is the gadget by using its MAC, and how to put it all together into a shell function so you can set it up with one command.

[ 15:25 Aug 31, 2018    More linux | permalink to this entry | ]

Thu, 23 Aug 2018

Making Sure the Debian Kernel is Up To Date

I try to avoid Grub2 on my Linux machines, for reasons I've discussed before. Even if I run it, I usually block it from auto-updating /boot since that tends to overwrite other operating systems. But on a couple of my Debian machines, that has meant needing to notice when a system update has installed a new kernel, so I can update the relevant boot files. Inevitably, I fail to notice, and end up running an out of date kernel.

But didn't Debian use to have a /boot/vmlinuz that always linked to the latest kernel? That was such a good idea: what happened to that?

I'll get to that. But before I found out, I got sidetracked trying to find a way to check whether my kernel was up-to-date, so I could have it warn me of out-of-date kernels when I log in.

That turned out to be fairly easy using uname and a little shell pipery:

# Is the kernel running behind?
kernelvers=$(uname -a | awk '{ print $3; }')
latestvers=$(cd /boot; ls -1 vmlinuz-* | sort --version-sort | tail -1 | sed 's/vmlinuz-//')
if [[ $kernelvers != $latestvers ]]; then
    echo "======= Running kernel $kernelvers but $latestvers is available"
else
    echo "The kernel is up to date"
fi

I put that in my .login. But meanwhile I discovered that that /boot/vmlinuz link still exists -- it just isn't enabled by default for some strange reason. That, of course, is the right way to make sure you're on the latest kernel, and you can do it with the linux-update-symlinks command.

linux-update-symlinks is called automatically when you install a new kernel -- but by default it updates symlinks in the root directory, /, which isn't much help if you're trying to boot off a separate /boot partition.

But you can configure it to notice your /boot partition. Edit /etc/kernel-img.conf and change link_in_boot to yes:

link_in_boot = yes

Then linux-update-symlinks will automatically update the /boot/vmlinuz link whenever you update the kernel, and whatever bootloader you prefer can point to that image. It also updates /boot/vmlinuz.old to point to the previous kernel in case you can't boot from the new one.

Update: To get linux-update-symlinks to update symlinks to reflect the current kernel, you need to reinstall the package for the current kernel, e.g. apt-get install --reinstall linux-image-4.18.0-3-amd64. Just apt-get install --reinstall linux-image-amd64 isn't enough.
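
Putting that together with the version-sort trick from the script above gives a one-liner that reinstalls whatever the newest installed kernel is (a sketch, assuming Debian's usual linux-image-<version> package naming):

# Reinstall the newest installed kernel package so the symlinks get updated:
latest=$(ls -1 /boot/vmlinuz-* | sort --version-sort | tail -1 | sed 's:.*/vmlinuz-::')
sudo apt-get install --reinstall "linux-image-$latest"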

[ 20:14 Aug 23, 2018    More linux/kernel | permalink to this entry | ]

Fri, 17 Aug 2018

Easy DIY Cellphone Stand

Over the years I've picked up a couple of cellphone stands as conference giveaways. A stand is a nice idea, especially if you like to read news articles during mealtime, but the stands I've tried never seem to be quite what I want. Either they're not adjustable, or they're too bulky or heavy to want to carry them around all the time.

A while back, I was browsing on ebay looking for something better than the ones I have. I saw a few that looked like they might be worth trying, but then it occurred to me: I could make one pretty easily that would work better than anything I'd found for sale.

I started with plans that involved wire and a hinge -- the hinge so the two sides of the stand would fold together to fit in a purse or pocket -- and spent a few hours trying different hinge options. I wasn't satisfied, though. And then I realized: all I had to do was bend the wire into the shape I needed. Voilà -- instant lightweight adjustable cellphone stand.

And it has worked great. I've been using it for months and it's much better than any of the prefab stands I had before.

Bend a piece of wire

[Bent wire]

I don't know where this wire came from: it was in my spare-metal-parts collection. You want something a little thinner than coathanger wire, so you can bend it relatively easily; "baling wire" or "rebar wire" is probably about right.

Bend the tips around

[Tips adjusted to fit your phone's width]

Adjust the curve so it's big enough that your cellphone will fit in the crook of the wires.

Bend the back end down, and spread the two halves apart

[Bend the back end down]

Adjust so it fits your phone

[Instant cellphone stand]

Coat the finished stand with rubberized coating (available at your local hardware store in either dip or spray-on varieties) so it won't slide around on tables and won't scratch anything. The finished product is adjustable to any angle you need -- so you can adjust it based on the lighting in any room -- and you can fold the two halves together to make it easy to carry.

[ 12:06 Aug 17, 2018    More hardware | permalink to this entry | ]

Sun, 12 Aug 2018

Prevent a Linux/systemd System from Auto-Sleeping

About three weeks ago, a Debian (testing) update made a significant change on my system: it added a 30-minute suspend timeout. If I left the machine unattended for half an hour, it would automatically go to sleep.

What's wrong with that? you ask. Why not just let it sleep if you're going to be away from it that long?

But sometimes there's a reason to leave it running. For instance, I might want to track an ongoing discussion in IRC, and occasionally come back to check in. Or, more important, there might be a long-running job that doesn't require user input, like a system backup, ripping a CD, or testing a web service. None of those count as "activity" to keep the system awake: only mouse and keyboard motions count.

There are lots of pages that point to the file /etc/systemd/logind.conf, where you can find commented-out lines like

#IdleAction=ignore
#IdleActionSec=30min

The comment at the top of the file says that these are the defaults and references the logind.conf man page. Indeed, man logind.conf says that setting IdleAction=ignore should prevent anything from happening, and that setting IdleActionSec=120min should lead to a longer delay.

Alas, neither is true. This file is completely ignored as far as I can tell, and I don't know why it's there, or where the 30 minute setting is coming from.

What actually did work was in Debian's Suspend wiki page. I was skeptical since the page hasn't been updated since Debian Jessie (Stretch, the successor to Jessie, has been out for more than a year now) and the change I was seeing just happened in the last month. I was also leery because the only advice it gives is "For systems which should never attempt any type of suspension, these targets can be disabled at the systemd level". I do suspend my system, frequently; I just don't want it to happen unless I tell it to, or with a much longer timeout than 30 minutes.

But it turns out the command it suggests does work:

sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
and it doesn't disable suspending entirely: I can still suspend manually, it just disables autosuspend. So that's good enough.

Be warned: the page says next:

Then run systemctl restart systemd-logind.service or reboot.

It neglects to mention that restarting systemd-logind.service will kill your X session, so don't run that command if you're in the middle of anything.

It would be nice to know where the 30-minute timeout had been coming from, so I could enable it after, say, 90 or 120 minutes. A timeout sounds like a good thing, if it's something the user can configure. But like so many systemd functions, no one who writes documentation seems to know how it actually works, and those who know aren't telling.

[ 13:36 Aug 12, 2018    More linux | permalink to this entry | ]

Wed, 08 Aug 2018

August Hailstorm

We're still not getting the regular thunderstorms one would normally expect in the New Mexico monsoon season, but at least we're getting a little relief from the drought.

Last Saturday we had a fairly impressive afternoon squall. It only lasted about ten minutes but it dumped over an inch of rain and hail in that time. ("Over an inch" means our irritating new weather station stopped recording at exactly 1.0 even though we got some more rain after that, making us suspect that it has some kind of built-in "that can't be right!" filter. It reads in hundredths of an inch and it's hard to believe that we didn't even get another .01 after that.)

[Pile of hailstones on our deck] It was typical New Mexico hail -- lentil-sized, not like the baseballs we heard about in Colorado Springs a few days later that killed some zoo animals. I hear this area does occasionally get big hailstones, but it's fortunately rare.

There was enough hail on the ground to make for wintry snow scenes, and we found an enormous pile of hailstones on our back deck that persisted through the next day (that deck is always shady). Of course, the hail out in the yard disappeared in under half an hour once the New Mexico sun came out.

[Pile of hailstones on our deck] But before that, as soon as the squall ended, we went out to walk the property and take a look at the "snow", and in particular at "La Cienega" or "the swamp", our fanciful name for an area down at the bottom of the hill where water collects and there's a little willow grove. There was indeed water there -- covered with a layer of floating hail -- but on the way down we also had a new "creek" with several tributaries, areas where the torrent carved out little streambeds.

It's fun to have our own creek ... even if it's only for part of a day.

More photos: August hailstorm.

[ 19:28 Aug 08, 2018    More nature | permalink to this entry | ]

Sun, 29 Jul 2018

Building Firefox: Changing the App Name

In my several recent articles about building Firefox from source, I omitted one minor change I made, which will probably sound a bit silly. A self-built Firefox thinks its name is "Nightly", so, for example, the Help menu includes About Nightly.

Somehow I found that unreasonably irritating. It's not a nightly build; in fact, I hope to build it as seldom as possible, ideally only after a git pull when new versions are released. Yet Firefox shows its name in quite a few places, so you're constantly faced with that "Nightly". After all the work to build Firefox, why put up with that?

To find where it was coming from, I used my recursive grep alias which skips the obj- directory plus things like object files and metadata. This is how I define it in my .zshrc (obviously, not all of these clauses are necessary for this Firefox search), and then how I called it to try to find instances of "Nightly" in the source:

gr() {
  find . \( -type f \
          -and -not -name '*.o' -and -not -name '*.so' -and -not -name '*.a' \
          -and -not -name '*.pyc' -and -not -name '*.jpg' -and -not -name '*.JPG' \
          -and -not -name '*.png' -and -not -name '*.xcf*' -and -not -name '*.gmo' \
          -and -not -name '.intltool*' -and -not -name '*.po' -and -not -name 'po' \
          -and -not -name '*.tar*' -and -not -name '*.zip' \
        -or -name '.metadata' -or -name 'build' -or -name 'obj-*' \
        -or -name '.git' -or -name '.svn' -prune \
        \) -print0 | xargs -0 grep $* /dev/null
}

gr Nightly | grep -v '//' | grep -v '#' | grep -v isNightly  | grep test | grep -v task | fgrep -v .js | fgrep -v .cpp | grep -v mobile >grep.out

Even with all those exclusions, that still ends up printing an enormous list. But it turns out all the important hits are in the browser directory, so you can get away with running it from there rather than from the top level.

I found a bunch of likely files that all had very similar "Nightly" lines in them.

Since I didn't know which one was relevant, I changed each of them to slightly different names, then rebuilt and checked to see which names I actually saw while running the browser.

It turned out that browser/branding/unofficial/locales/en-US/brand.dtd is the file that controls the application name in the Help menu and in Help->About -- though the title of the About window is still "Nightly" and I haven't found what controls that.

branding/unofficial/locales/en-US/brand.ftl controls the "Nightly" references in the Edit->Preferences window.
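
If all you want is the rename, a sed pass over those two files does the job. (A sketch: "Kitfox" is the name I use for my own builds; check the files first, since the markup may change between releases.)

# Replace the product name in the unofficial branding files:
cd browser/branding/unofficial/locales/en-US
sed -i 's/Nightly/Kitfox/g' brand.dtd brand.ftl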

I don't know what all the others do. There may be other instances of "Nightly", controlled by the other files, that appear elsewhere in the app, but I haven't seen them yet.

Past Firefox building articles: Building Firefox Quantum; Building Firefox for ALSA (non PulseAudio) Sound; Firefox Quantum: Fixing Ctrl W (or other key bindings).

[ 18:23 Jul 29, 2018    More tech/web | permalink to this entry | ]

Mon, 23 Jul 2018

Rain Song

We've been in the depths of a desperate drought. Last year's monsoon season never happened, and then the winter snow season didn't happen either.

Dave and I aren't believers in tropical garden foliage that requires a lot of water; but even piñons and junipers and other native plants need some water. You know it's bad when you find yourself carrying a watering can down to the cholla and prickly pear to keep them alive.

This year, the Forest Service closed all the trails for about a month -- too much risk of one careless cigarette-smoking hiker, or at least I think that was the reason (they never really explained it) -- and most of the other trail agencies followed suit. But then in early July, the forecasts started predicting the monsoon at last. We got some cloudy afternoons, some humid days (what qualifies as humid in New Mexico, anyway -- sometimes all the way up to 40%), and the various agencies opened their trails again. Which came as a surprise, because those clouds and muggy days didn't actually include any significant rain. Apparently mere air humidity is enough to mitigate a lot of the fire risk?

Tonight the skies finally let loose. When the thunder and lightning started in earnest, a little after dinner, Dave and I went out to the patio to soak in the suddenly cool and electric air and some spectacular lightning bolts while watching the hummingbirds squabble over territory. We could see rain to the southwest, toward Albuquerque, and more rain to the east, toward the Sangres, but nothing where we were.

Then a sound began -- a distant humming/roaring, like the tires of a big truck passing on the road. "Are we hearing rain approaching?" we both asked at the same time. Since moving to New Mexico we're familiar with being able to see rain a long way away; and of course everyone has heard rain as it falls around them, either as a light pitter-patter or the louder sound from a real storm; but we'd never been able to hear the movement of a rainstorm as it gradually moved toward us.

Sure enough, the sound got louder and louder, almost unbearably loud -- and then suddenly we were inundated with giant-sized drops, blowing way in past the patio roof to where we were sitting.

I've heard of rain dances, and songs sung to bring the rain, but I didn't know it could sing back.

We ran for the door, not soon enough. But that was okay; we didn't mind getting drenched. After a drought this long, water from the sky is cause only for celebration.

The squall dumped over a third of an inch in only a few minutes. (This according to our shiny new weather station with a sensitive tipping-bucket rain gauge that measures in hundredths of an inch.) Then it eased up to a light drizzle for a while, the lightning moved farther away, and we decided it was safe to run down the trail to "La Cienega" (Spanish for swamp) at the bottom of the property and see if any water had accumulated. Sure enough! Lake La Senda (our humorous moniker for a couple of little puddles that sometimes persist as long as a couple of days) was several inches deep. Across the road, we could hear a canyon tree frog starting to sing his ratchety song -- almost more welcome than the sound of the rain itself.

As I type this, we're reading a touch over half an inch and we're down to a light drizzle. The thunder has receded but there's still plenty of lightning.

More rain! Keep it coming!

[ 20:38 Jul 23, 2018    More nature | permalink to this entry | ]

Fri, 20 Jul 2018

Pulseaudio: the more things change, the more they stay the same

Such a classic Linux story.

For a video I'll be showing during tonight's planetarium presentation (Sextants, Stars, and Satellites: Celestial Navigation Through the Ages, for anyone in the Los Alamos area), I wanted to get HDMI audio working from my laptop, running Debian Stretch. I'd done that once before on this laptop (HDMI Presentation Setup Part I and Part II) so I had some instructions to follow; but while aplay -l showed the HDMI audio device, aplay -D plughw:0,3 didn't play anything and alsamixer and alsamixergui only showed two devices, not the long list of devices I was used to seeing.

Web searches related to Linux HDMI audio all pointed to pulseaudio, which I don't use, and I was having trouble finding anything for plain ALSA without pulse. In the old days, removing pulseaudio used to be the cure for practically every Linux audio problem. But I thought to myself, It's been a couple years since I actually tried pulse, and people have told me it's better now. And it would be a relief to have pulseaudio working so things like Firefox would Just Work. Maybe I should try installing it and see what happens.

So I ran aptitude search pulseaudio to find the package name I'd need to install. Imagine my surprise when it turned out that it was already installed!

So I did some more web searching to find out how to talk to pulse and figure out how to enable HDMI, or un-mute it, or whatever it was I needed. But to no avail: everything I found was stuff like "In the Ubuntu audio panel, do this". The few pages I found that listed commands to run didn't help -- the commands all gave errors.

Running short on time, I reverted to the old days: aptitude purge pulseaudio. Rebooted to make sure the audio system was reset, ran alsamixergui and sure enough, there were all my normal devices, including the IEC958 device for HDMI, which was indeed muted. I unmuted it, tried the video again -- and music blasted from my TV's speakers.
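
For anyone hitting the same problem, the whole fix boils down to this (assuming, like me, nothing on your system actually needs pulse):

# Remove pulseaudio and fall back to plain ALSA:
sudo aptitude purge pulseaudio
sudo reboot

# After rebooting, unmute the HDMI (IEC958) device:
alsamixergui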

I'm sure there are machines where pulseaudio works. There are even a few people who have audio setups complicated enough to need something like pulseaudio. But in 2018, just as in 2006, aptitude purge pulseaudio is the easiest solution to a Linux sound problem.

[ 14:17 Jul 20, 2018    More linux | permalink to this entry | ]

Sun, 15 Jul 2018

LWV National Convention, 2018: Plenary Sessions

or: How Sausage is Made

I'm a big fan of the League of Women Voters. Really. State and local Leagues do amazing work. They publish and distribute those non-partisan Voter Guides you've probably seen before each election. They register new voters, and advocate for voting rights and better polling access for everybody, including minorities and poor people. They advocate on lots of other issues too, like redistricting, transparency, the influence of money in politics, and health care. I've only been involved with the League for a few years; although my grandmother was active in her local League as far back as I can remember, somehow it didn't occur to me to get involved until I moved to a small town where it was more obvious what a difference the local League made.

So, local and state Leagues are great. But after returning from my second LWV national convention, I find myself wondering how all this great work manages to come out of an organization that has got to be the most undemocratic, conniving political body I've ever been involved with.

I have separate write-ups of the caucuses and other program sessions I attended at this year's convention, for other LWV members wanting to know what they missed. But the Plenary sessions are where the national League's business is conducted, and I felt I should speak publicly about how they're run.

In case there's any confusion, this article describes my personal reactions to the convention's plenary sessions. I am speaking only for myself, not for any state or local league.

The 2018 National Convention Plenary Sessions

I didn't record details of every motion; check the Convention 2018 Daily Briefing if you care. (You might think there would be a published official record of the business conducted at the national convention; good luck on finding it.)

The theme of the convention, printed as a banner on many pages of the convention handbook, was Creating a More Perfect Democracy. It should have been: Democracy: For Everyone Else.

Friday Plenary

In case you're unfamiliar with the term (as I was), "Plenary" means full or complete, from the Latin plenus, full. A plenary session is a session of a conference which all members of all parties are to attend. It doesn't seem to imply voting, though that's how the LWVUS uses the term.

After the national anthem, the welcome by a designated local official, a talk, an opening address, acceptance of various committee reports, and so on, the tone of the convention was set with the adoption of the convention rules.

A gentleman from the Oregon state League (LWVOR) proposed a motion that would have required internal decisions to be able to be questioned as part of convention business. This would include the controversial new values statement. There had been discussion of the values statement before the convention, establishing that many people disagreed with it and wanted a vote.

LWVUS president Chris Carson wasn't having any of it. First, she insisted, the correct parliamentary way to do this was to vote to approve the rest of the rules, not including this one. That passed easily. Then she stated that the motion on the table would require a 2/3 vote, because it was an amendment to the rules which had just passed. (Never mind that she had told us we were voting to pass all the rules except that one).

The Oregon delegate who had made the motion protested that the first paragraph of the convention rules on page 27 of the handbook clearly stated that amendment of the rules only requires a simple majority. Carson responded that would have been true before the convention rules were adopted, but now that we'd voted to adopt them, it now required a 2/3 vote to amend them due to some other rule somewhere else, not in the handbook. She was adamant that the motion could not now pass with a simple majority.

The Oregon delegate was incredulous. "You mean that if I'd known you were going to do this, I should have protested voting on adopting the rules before voting on the motion?"

The room erupted in unrest. Many people wanted to speak, but after only a couple, Carson unilaterally cut off further discussion. But then, after a lot of muttering with her Parliamentarian, she announced that she would take a show-of-hands vote on whether to approve her ruling requiring the 2/3 vote. She allowed only three people to speak on that motion (the motion to accept her ruling) and then called the question herself.

The vote was fairly close but was ruled to be in favor of her ruling, meaning that the original motion would require a 2/3 vote. When we finally voted on the original motion it looked roughly equal, not 2/3 in favor -- so the motion to allow debate on the values statement failed.

(We never did find out what this mysterious other rule was that supposedly mandated the 2/3 vote. The national convention has an official Parliamentarian sitting on the podium, as well as parliamentary assistants sitting next to each microphone in the audience, but somehow there's nobody who does much of a job of keeping track of what's going on or can state the rules under which we're operating. Several times during the three days of plenary, Carson and her parliamentarian lost track of things, for instance, saying she'd hear two pro and two con comments but actually calling three pro and one con.)

I notice in the daily briefing, this whole fracas is summarized as, "The motion was defeated by a hand vote."

Officer "Elections"

With the rules adopted by railroad, we were next presented with the slate of candidates for national positions. That sounds like an election but it's not.

During discussion of the previous motion, one national board member speaking against the motion (or for Carson's 2/3 ruling, I can't remember which) said "You elected us, so you should trust us." That spawned some audience muttering, too. See, in case there's any confusion, delegates at the convention do not actually get to vote for candidates. We're presented with a complete slate of candidates chosen by the nominating committee (for whom we also do not vote), and the only option is to vote yes or no on the whole slate "by acclamation".

There is one moment where it is possible to make a nomination from the floor. If nominated, such a nominee has one minute to make her case to the delegates before the final vote. Since there's obviously no chance, there are seldom any floor nominees, and on the rare occasion someone tries, they invariably lose.

Now, I understand that it's not easy getting volunteers for leadership positions in nonprofit organizations. It's fairly common, in local organizations, that you can't fill all the available positions and have to go begging for people to fill officer positions, so you'll very often see a slate of officers proposed all at once. But in the nationwide LWVUS? Out of all the LWV members in the entire US (hundreds of thousands? I can't seem to find any membership figures, though I found a history document that says there were 157,000 members in the 1960s), there aren't enough people interested in being a national officer to allow for a competitive election? Really?

Though, admittedly ... after watching the sausage being made, I'm not sure I'd want to be part of that.

Not Recommended Items

Of course, the slate of officers was approved. Then we moved on to "Not Recommended Items". How that works: in the run-up to the convention, local Leagues propose areas the National board should focus on during the upcoming two years. The National board decides what they care about, and marks the rest as as "Not recommended". During the Friday plenary session, delegates can vote to reconsider these items.

I knew that because I'd gone to the Abolish the Electoral College caucus the previous evening, and that was the first of the not-recommended items proposed for consideration.

It turned out there were two similar motions: the "Abolish the Electoral College" proposal and the "Support the National Popular Vote Compact" proposal, two different approaches to eliminating the electoral college. (The NPV is achievable -- quite a few states have already signed, totaling 172 electoral votes of the 270 that would be needed to bring the compact into effect. The "Abolish" side, on the other hand, would require a Constitutional amendment which would have to be ratified even by states that currently have a big advantage due to the electoral college. Not going to happen.)

Both proposals got enough votes to move on to consideration at Saturday's plenary, though. Someone proposed that the two groups merge their proposals, and met with them after the session, but alas, we found out on Saturday that they never came to agreement.

One more proposal that won consideration was one to advocate for implementation of the Equal Rights Amendment should it be ratified. A nice sentiment that everyone agreed with, and harmless since it's not likely to happen.

Friday morning "Transformation Journey" Presentation and Budget Discussion

I didn't take many notes on this, except during the presentation of the new IT manager, who made noise about reduced administrative burden for local Leagues and improving access to data for Leagues at all levels. These are laudable goals and badly needed, though he didn't go into any detail about how any of it was going to work. Since it was all vague high-level hand-waving, I won't bother to write up my notes (ask me if you want to see them).

The only reason I have this section here is for the sharp-eyed person who asked during the budget discussion, "What's this line item about 'mailing list rental?'"

Carson dismissed that worry -- Oh, don't worry, there are no members on that list. That's just a list of donors who aren't members.

Say what? People who donate to the LWVUS, if they aren't members, get their names on a mailing list that the League then sells? Way to treat your donors with respect.

I wish nonprofits would get a clue. There are so many charities that I'd like to donate to if I could do so without resigning myself to a flood of paper in my mailbox every day for the rest of my life. If nonprofits had half a lick of sense, they would declare "We will never give your contact info to anyone else", and offer "check this box to be excluded even from our own pleas for money more than once or twice a year." I'd be so much more willing to donate.

Saturday Plenary

The credentials committee reported: delegates present represented 762 Leagues, with 867 voting delegates from 49 states plus the District of Columbia. That's out of 1709 eligible voting delegates -- about half. Not surprising given the expense of the convention. I'm told there have been proposals in past years to change the rules to make it possible to vote without attending convention, but no luck so far.

Consideration of not-recommended items: the abolition of the electoral college failed. Advocacy for the National Popular Vote Compact passed. So the delegates agreed with me on which of the two is achievable. Too bad the Electoral Abolition people weren't willing to compromise and merge their proposal with the NPV one.

The ERA proposal passed overwhelmingly.

Rosie Rios, 43rd Treasurer of the US, gave a terrific talk on, among other things, the visibility of women on currency, in public art and in other public places, and what that means for girls growing up. I say a little more about her talk in my Caucus Summary.

We had been scheduled to go over the bylaws before Rios' talk, but that plan had been revised because there was an immigration protest (regarding the separation of children from parents) scheduled some distance north of the venue, and a lot of delegates wanted to go. So the revised plan, we'd been told Friday, was to have Rios' talk and then adjourn and discuss the bylaws on Sunday.

Machinations

What actually happened: Carson asked for a show of hands of people who wanted to go to the protest, which looked like maybe 60% of the room. She dismissed those people with well wishes.

Then she looked over the people still in the room and said, "It looks like we might still have a quorum. Let's count."

I have no idea what method they used to count the people sitting in the room, or what count they arrived at: we weren't told, and none of this is mentioned in the daily summary linked at the top of this article. But somehow she decided we still had a quorum, and announced that we would begin discussion of the bylaws.

The room erupted in angry murmurs -- she had clearly stated before dismissing the other delegates that we were done for the day and would not be discussing the bylaws until Sunday.

"It's appalling", one of our delegation, a first-timer, murmured. Indeed.

But the plenary proceeded. We voted to pass the first bylaws proposal, an uncontroversial one that merely clarified some wording, and I'm sure the intent was to sneak the second proposal through as well -- a vague proposal making it easier to withdraw recognition from a state or local league -- but enough delegates remained who had actually read the proposals and weren't willing to let it by without discussion.

On the other hand, the discussion didn't come to anything. A rewording amendment that I'm told had been universally agreed to at the Bylaws caucus the previous evening failed to go through because too many of the people who understood the issue were away at the protest. The amendment failed, so even though we ran out of time and had to stop before voting on the proposal, the amended wording had already failed and couldn't be reconsidered on Sunday when the discussion was resumed.

(In case you're curious, this strategy is also how Pluto got demoted from being a planet. The IAU did almost exactly the same thing as the LWVUS, waiting until most of the voting members were out of the room before presenting the proposal to a small minority of delegates. Astronomers who were at the meeting but out of the room for the Pluto vote have spoken out, saying the decision was a bad one and makes little sense scientifically.)

Sunday Plenary

There's not much to say about Sunday. The bylaws proposal was still controversial, especially since half the delegates never had the chance to vote on the rewording amendment; the vote required a "card vote", meaning that rather than counting hands or voices, delegates passed colored cards to the aisles to be counted. This was the only card vote of the convention.

Accessibility note: I was surprised to note that the voting cards were differentiated only by color; they didn't have anything like "yes" or "no" printed on them. I wonder how many colorblind delegates there were in that huge roomful of people who couldn't tell the cards apart.

The rest of Sunday's voting was on relatively unimportant, uncontroversial measures, ending with a bunch of proclamations that don't actually change anything. Those easily passed, rah, rah. We're against gun violence, for the ERA, against the electoral college, for pricing carbon emissions, for reproductive rights and privacy, and for climate change assessments that align with scientific principles. Nobody proposed anything about apple pie but I'm sure we would have been for that too.

And thus ended the conference and we all headed off to lunch or the airport. Feeling frustrated, a bit dirtied and not exactly fired up about Democracy.


Up: LWV National Convention, June-July 2018, Chicago

Tags: ,
[ 18:09 Jul 15, 2018    More politics | permalink to this entry | ]

Sat, 07 Jul 2018

Script to modify omni.ja for a custom Firefox

A quick followup to my article on Modifying Firefox Files Inside omni.ja:

The steps for modifying the file are fairly easy, but they have to be done a lot.

First there's the problem of Firefox updates: if a new omni.ja is part of the update, then your changes will be overwritten, so you'll have to make them again on the new omni.ja.

But, worse, even aside from updates they don't stay changed. I've had Ctrl-W mysteriously revert back to its old wired-in behavior in the middle of a Firefox session. I'm still not clear how this happens: I speculate that something in Firefox's update mechanism may allow parts of omni.ja to be overridden, even though Mike Kaply, the onetime master of overlays, told me that overlays aren't recommended any more (at least for users, though that doesn't necessarily mean they're not used for updates).

But in any case, you can be browsing merrily along and suddenly one of your changes doesn't work any more, even though the change is still right there in browser/omni.ja. And the only fix I've found so far is to download a new Firefox and re-apply the changes. Re-applying them to the current version doesn't work -- they're already there. And it doesn't help to keep the tarball you originally downloaded around so you can re-install that; Firefox updates every week or two, so that version is guaranteed to be out of date.

All this means that it's crazy not to script the omni changes so you can apply them easily with a single command. So here's a shell script that takes the path to the current Firefox, unpacks browser/omni.ja, makes a couple of simple changes and re-packs it. I called it kitfox-patch since I used to call my personally modified Firefox build "Kitfox".

Of course, if your changes are different from mine you'll want to edit the script to change the sed commands.
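In case that link ever rots, here's a minimal sketch of the idea -- not the exact kitfox-patch script (paths and error handling are simplified, and the sed edit shown is just the Ctrl-W one from the omni.ja article):

#!/bin/sh
# Minimal sketch: unpack browser/omni.ja, apply edits, re-pack.
# Usage: kitfox-patch /path/to/firefox
set -e

omni=$(realpath "$1")/browser/omni.ja
tmpdir=$(mktemp -d /tmp/omni-XXXXXX)
cd "$tmpdir"

# unzip complains about omni.ja's nonstandard header, but extracts anyway:
unzip -q "$omni" || true

# The actual changes; edit these to taste:
sed -i '/key_close/s/ reserved="true"//' chrome/browser/content/browser/browser.xul

zip -qr9XD new-omni.ja *
cp "$omni" "$omni.bak"    # keep a backup of the original
mv new-omni.ja "$omni"

cd /
rm -rf "$tmpdir"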

I hope eventually to figure out how it is that omni.ja changes stop working, and whether it's an overlay or something else, and whether there's a way to re-apply fixes without having to download a whole new Firefox. If I figure it out I'll report back.

Tags: ,
[ 15:01 Jul 07, 2018    More tech/web | permalink to this entry | ]

Sat, 23 Jun 2018

Modifying Firefox Files Inside Omni.ja

My article on Fixing key bindings in Firefox Quantum by modifying the source tree got attention from several people who offered helpful suggestions via Twitter and email on how to accomplish the same thing using just files in omni.ja, so it could be done without rebuilding the Firefox source. That would be vastly better, especially for people who need to change something like key bindings or browser messages but don't have a souped-up development machine to build the whole browser.

Brian Carpenter had several suggestions and eventually pointed me to an old post by Mike Kaply, Don’t Unpack and Repack omni.ja[r] that said there were better ways to override specific files.

Unfortunately, Mike Kaply responded that that article was written for XUL extensions, which are now obsolete, so the article ought to be removed. That's too bad, because it did sound like a much nicer solution. I looked into trying it anyway, but the instructions it points to for Overriding specific files are woefully short on detail about how to map a path inside omni.ja, like chrome://package/type/original-uri.whatever, to a URL; and the single example I could find was so old that the file it referenced didn't exist at the same location any more. After a fruitless half hour or so, I took Mike's warning to heart and decided it wasn't worth wasting more time chasing something that wasn't expected to work anyway. (If someone knows otherwise, please let me know!)

But then Paul Wise offered a solution that actually worked, as an easy-to-follow sequence of shell commands. (I've changed some of them very slightly.)

$ tar xf ~/Tarballs/firefox-60.0.2.tar.bz2
  # (This creates a "firefox" directory inside the current one.)

$ mkdir omni
$ cd omni

$ unzip -q ../firefox/browser/omni.ja
warning [../firefox/browser/omni.ja]:  34187320 extra bytes at beginning or within zipfile
  (attempting to process anyway)
error [../firefox/browser/omni.ja]:  reported length of central directory is
  -34187320 bytes too long (Atari STZip zipfile?  J.H.Holm ZIPSPLIT 1.1
  zipfile?).  Compensating...
zsh: exit 2     unzip -q ../firefox/browser/omni.ja

$ sed -i 's/or enter address/or just twiddle your thumbs/' chrome/en-US/locale/browser/browser.dtd chrome/en-US/locale/browser/browser.properties

I was a little put off by all the warnings unzip gave, but kept going.

Of course, you can just edit those two files rather than using sed; but the sed command was Paul's way of being very specific about the changes he was suggesting, which I appreciated.

Use these flags to repackage omni.ja:

$ zip -qr9XD ../omni.ja *

I had tried that before (without the q since I like to see what zip and tar commands are doing) and hadn't succeeded. And indeed, when I listed the two files, the new omni.ja I'd just packaged was about a third the size of the original:

$ ls -l ../omni.ja ../firefox/browser/omni.ja
-rw-r--r-- 1 akkana akkana 34469045 Jun  5 12:14 ../firefox/browser/omni.ja
-rw-r--r-- 1 akkana akkana 11828315 Jun 17 10:37 ../omni.ja

But still, it's worth a try:

$ cp ../omni.ja ../firefox/browser/omni.ja

Then run the new Firefox. I have a spare profile I keep around for testing, but Paul's instructions included a nifty way of running with a brand new profile and it's definitely worth knowing:

$ cd ../firefox

$ MOZILLA_DISABLE_PLUGINS=1 ./firefox -safe-mode -no-remote -profile $(mktemp -d tmp-firefox-profile-XXXXXXXXXX) -offline about:blank

Also note the flags like safe-mode and no-remote, plus disabling plugins -- all good ideas when testing something new.

And it worked! When I started up, I got the new message, "Search or just twiddle your thumbs", in the URL bar.

Fixing Ctrl-W

Of course, now I had to test it with my real change. Since I like Paul's way of using sed to specify exactly what changes to make, here's a sed version of my Ctrl-W fix:

$ sed -i '/key_close/s/ reserved="true"//' chrome/browser/content/browser/browser.xul

Then run it. To test Ctrl-W, you need a website that includes a text field you can type in, so -offline isn't an option unless you happen to have a local web page that includes some text fields. Google is an easy way to test ... and you might as well re-use that firefox profile you just made rather than making another one:

$ MOZILLA_DISABLE_PLUGINS=1 ./firefox -safe-mode -no-remote -profile tmp-firefox-profile-* https://google.com

I typed a few words in the google search field that came up, deleted them with Ctrl-W -- all was good! Thanks, Paul! And Brian, and everybody else who sent suggestions.

Why are the sizes so different?

I was still puzzled by that threefold difference in size between the omni.ja I repacked and the original that comes with Firefox. Was something missing? Paul had the key to that too: use zipinfo on both versions of the file to see what differed. Turned out Mozilla's version, after a long file listing, ends with

2650 files, 33947999 bytes uncompressed, 33947999 bytes compressed:  0.0%

while my re-packaged version ends with

2650 files, 33947969 bytes uncompressed, 11307294 bytes compressed:  66.7%

So apparently Mozilla's omni.ja uses no compression at all. It may be that this makes it start up a little faster; but Quantum takes so long to start up that any slight difference in uncompressing omni.ja isn't noticeable to me.

I was able to run through this whole procedure on my poor slow netbook, the one where building Firefox took something like 15 hours ... and in a few minutes I had a working modified Firefox. And with the sed command, this is all scriptable, so it'll be easy to re-do whenever Firefox has a security update. Win!

Update: I have a simple shell script to do this: Script to modify omni.ja for a custom Firefox.

Tags: ,
[ 20:37 Jun 23, 2018    More tech/web | permalink to this entry | ]

Thu, 14 Jun 2018

Firefox Quantum: Fixing Ctrl W (or other key bindings)

When I first tried switching to Firefox Quantum, the regression that bothered me most was Ctrl-W, which I use everywhere as word erase (try it -- you'll get addicted, like I am). Ctrl-W deletes words in the URL bar; but if you type Ctrl-W in a text field on a website, like when editing a bug report or a "Contact" form, it closes the current tab, losing everything you've just typed. It's always worked in Firefox in the past; this is a new problem with Quantum, and after losing a page of typing for about the 20th time, I was ready to give up and find another browser.

A web search found plenty of people online asking about key bindings like Ctrl-W, but apparently since the deprecation of XUL and XBL extensions, Quantum no longer offers any way to change or even just to disable its built-in key bindings.

I wasted a few days chasing a solution inspired by this clever way of remapping keys only for certain windows using xdotool getactivewindow; I even went so far as to write a Python script that intercepts keystrokes, determines the application for the window where the key was typed, and remaps it if the application and keystroke match a list of keys to be remapped. So if Ctrl-W is typed in a Firefox window, Firefox will instead receive Alt-Backspace. (Why not just type Alt-Backspace, you ask? Because it's much harder to type, can't be typed from the home position, and isn't in the same place on every keyboard the way W is.)

But sadly, that approach didn't work because it turned out my window manager, Openbox, acts on programmatically-generated key bindings as well as ones that are actually typed. If I type a Ctrl-W and it's in Firefox, that's fine: my Python program sees it, generates an Alt-Backspace and everything is groovy. But if I type a Ctrl-W in any other application, the program doesn't need to change it, so it generates a Ctrl-W, which Openbox sees and calls the program again, and you have an infinite loop. I couldn't find any way around this. And admittedly, it's a horrible hack having a program intercept every keystroke. So I needed to fix Firefox somehow.
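For the record, here's roughly what the window-specific idea looks like as a shell script bound to Ctrl-W in the window manager -- a simplified sketch, not my actual Python interceptor, and it has exactly the loop problem just described:

#!/bin/sh
# Sketch: check which application has focus, then decide what to send.
wid=$(xdotool getactivewindow)
class=$(xprop -id "$wid" WM_CLASS | cut -d'"' -f4)
if [ "$class" = "Firefox" ]; then    # class name varies with Firefox version
    xdotool key --window "$wid" --clearmodifiers alt+BackSpace
else
    # Re-sending ctrl+w is what triggers the infinite loop under Openbox:
    # the WM sees the synthetic keypress and calls this script again.
    xdotool key --window "$wid" --clearmodifiers ctrl+w
fi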

But after spending days searching for a way to customize Firefox's keys, to no avail, I came to the conclusion that the only way was to modify the source code and rebuild Firefox from source.

Ironically, one of the snags I hit in building it was that I'd named my key remapper "pykey.py", and it was still in my PYTHONPATH; it turns out the Firefox build also has a module called pykey.py and mine was interfering. But eventually I got the build working.

Firefox Key Bindings

I was lucky: building was the only hard part, because a very helpful person on Mozilla's #introduction IRC channel pointed me toward the solution, saving me hours of debugging. Edit browser/base/content/browser-sets.inc around line 240 and remove reserved="true" from key_closeWindow. It turned out I needed to remove reserved="true" from the adjacent key_close line as well.
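If you're scripting this, the same edit can be expressed as a sed one-liner -- a sketch, assuming the reserved="true" attribute appears on those lines the way it did in my tree (the /key_close/ pattern matches both key_close and key_closeWindow):

sed -i '/key_close/s/ reserved="true"//' browser/base/content/browser-sets.inc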

Another file that's related, but more general, is nsXBLWindowKeyHandler.cpp around line 832; but I didn't need that since the simpler fix worked.

Transferring omni.ja -- or Not

In theory, since browser-sets.inc isn't compiled C++, it seems like you should be able to make this fix without building the whole source tree. In an actual Firefox release, browser-sets.inc is part of omni.ja, and indeed if you unpack omni.ja you'll see the key_closeWindow and key_close lines. So it seems like you ought to be able to regenerate omni.ja without rebuilding all the C++ code.
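You can check without fully unpacking: unzip can print a single file from the archive (it grumbles about omni.ja's nonstandard layout but usually extracts anyway). The path here is the one from the omni.ja article above; adjust for your version:

$ unzip -p firefox/browser/omni.ja chrome/browser/content/browser/browser.xul | grep key_close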

Unfortunately, in practice omni.ja is more complicated than that. Although you can unzip it and edit the files, if you zip it back up, Firefox doesn't see it as valid. I guess that's why they renamed it .ja: long ago it used to be omni.jar and, like other .jar files, was a standard zip archive that you could edit. But the new .ja file isn't documented anywhere I could find, and all the web discussions I found on how to re-create it amounted to "it's complicated, you probably don't want to try".

And you'd think that I could take the omni.ja file from my desktop machine, where I built Firefox, and copy it to my laptop, replacing the omni.ja file from a released copy of Firefox. But no -- somehow, it isn't seen, and the old key bindings are still active. They must be duplicated somewhere else, and I haven't figured out where.

It sure would be nice to have a way to transfer an omni.ja. Building Firefox on my laptop takes nearly a full day (though hopefully rebuilding after pulling minor security updates won't be quite so bad). If anyone knows of a way, please let me know!

Tags: , ,
[ 16:45 Jun 14, 2018    More tech/web | permalink to this entry | ]

Sat, 09 Jun 2018

Building Firefox for ALSA (non PulseAudio) Sound

I did the work to build my own Firefox primarily to fix a couple of serious regressions that couldn't be fixed any other way. I'll start with the one that's probably more common (at least, there are many people complaining about it in many different web forums): the fact that Firefox won't play sound on Linux machines that don't use PulseAudio.

There's a bug with a long discussion of the problem, Bug 1345661 - PulseAudio requirement breaks Firefox on ALSA-only systems, and the discussion in the bug links to another discussion of the Firefox/PulseAudio problem. Some comments in those discussions suggest that some near-future version of Firefox may restore ALSA sound for non-Pulse systems; but most of those comments are six months old, and it's still not fixed in the version Mozilla is distributing now.

In theory, ALSA sound is easy to enable. Build options in Firefox are controlled through a file called mozconfig. Create that file at the top level of your build directory, then add to it:

ac_add_options --enable-alsa
ac_add_options --disable-pulseaudio

You can see other options with ./configure --help

Of course, like everything else in the computer world, there were complications. When I typed mach build, I got:

Assertion failed in _parse_loader_output:
Traceback (most recent call last):
  File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 260, in read_mozconfig
    parsed = self._parse_loader_output(output)
  File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 375, in _parse_loader_output
    assert not in_variable
AssertionError
Error loading mozconfig: /home/akkana/outsrc/gecko-dev/mozconfig

Evaluation of your mozconfig produced unexpected output.  This could be
triggered by a command inside your mozconfig failing or producing some warnings
or error messages. Please change your mozconfig to not error and/or to catch
errors in executed commands.

mozconfig output:

------BEGIN_ENV_BEFORE_SOURCE
... followed by a many-page dump of all my environment variables, twice.

It turned out that was coming from line 449 of python/mozbuild/mozbuild/mozconfig.py:

        # Lines with a quote not ending in a quote are multi-line.
        if has_quote and not value.endswith("'"):
            in_variable = name
            current.append(value)
            continue
        else:
            value = value[:-1] if has_quote else value

I'm guessing this was added because some Mozilla developer sets a multi-line environment variable that has a quote in it but doesn't end with a quote. Or something. Anyway, some fairly specific case. I, on the other hand, have a different specific case: a short environment variable that includes one or more single quotes, and the test for their specific case breaks my build.

(In case you're curious why I have quotes in an environment variable: The prompt-setting code in my .zshrc includes a variable called PRIMES. In a login shell, this is set to the empty string, but in subshells, I add ' for each level of shell under the login shell. So my regular prompt might be (hostname)-, but if I run a subshell to test something, the prompt will be (hostname')-, a subshell inside that will be (hostname'')-, and so on. It's a reminder that I'm still in a subshell and need to exit when I'm done testing. In theory, I could do that with SHLVL, but SHLVL doesn't care about login shells, so my normal shells inside X are all SHLVL=2 while shells on a console or from an ssh are SHLVL=1, so if I used SHLVL I'd have to have some special case code to deal with that.

Also, of course I could use a character other than a single-quote. But in the thirty or so years I've used this, Firefox is the first program that's ever had a problem with it. And apparently I'm not the first one to have a problem with this: bug 1455065 was apparently someone else with the same problem. Maybe that will show up in the release branch eventually.)
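For the curious, the prompt code amounts to something like this sketch, simplified from my real .zshrc: each non-login subshell inherits PRIMES and appends one quote, while login shells reset it.

if [[ -o login ]]; then
    PRIMES=''
else
    PRIMES="${PRIMES}'"
fi
PS1="(%m${PRIMES})- "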

Anyway, disabling that line fixed the problem:

        # Lines with a quote not ending in a quote are multi-line.
        if False and has_quote and not value.endswith("'"):

and after that, mach build succeeded, I built a new Firefox, and lo and behold! I can play sound in YouTube videos and on Xeno-Canto again, without needing an additional browser.

Tags: , ,
[ 16:49 Jun 09, 2018    More tech/web | permalink to this entry | ]

Sun, 03 Jun 2018

Building Firefox Quantum

With Firefox Quantum, Mozilla has moved away from letting users configure the browser the way they like. If I was going to switch to Quantum as my everyday browser, there were several problems I needed to fix first -- and they all now require modifying the source code, then building the whole browser from scratch.

I'll write separately about fixing the specific problems; but first I had to build Firefox. Although I was a Firefox developer way back in the day, the development environment has changed completely since then, so I might as well have been starting from scratch.

Setting up a Firefox build

I started with Mozilla's Linux build preparation page. There's a script called bootstrap.py that's amazingly comprehensive. It will check what's installed on your machine and install what's needed for a Firefox build -- and believe me, there are a lot of dependencies. Don't take the "quick" part of the "quick and easy" comment at the beginning of the script too seriously; I think on my machine, which already has a fairly full set of build tools, the script was downloading additional dependencies for 45 minutes or so. But it was indeed fairly easy: the script asks lots of questions about optional dependencies, and usually has suggestions, which I mostly followed.
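If memory serves, getting started amounted to fetching the script from Mozilla's build page and running it -- this URL was correct at the time, but check the build page for the current one:

$ wget https://hg.mozilla.org/mozilla-central/raw-file/default/python/mozboot/bin/bootstrap.py
$ python bootstrap.py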

Eventually bootstrap.py finishes loading the dependencies and gets to the point of wanting to check out the mozilla-unified repository, and that's where I got into trouble.

The script wants to check out the bleeding edge tip of Mozilla development. That's what you want if you're a developer checking in to the project. What I wanted was a copy of the currently released Firefox, but with a chance to make my own customizations. And that turns out to be difficult.

Getting a copy of the release tree

In theory, once you've checked out mozilla-unified with Mercurial, assuming you let bootstrap.py enable the recommended "firefoxtree" hg extension (which I did), you can switch to the release branch with:

hg pull release
hg up -c release

That didn't work for me: I tried it numerous times over the course of the day, and every time it died with "abort: HTTP request error (incomplete response; expected 5328 bytes got 2672)" after "adding manifests" when it started "adding file changes".

That sent me on a long quest aided by someone in Mozilla's helpful #introduction channel, where they help people with build issues. You might think it would be a common thing to want to build a copy of the released version of Firefox, and update it when a new release comes out. But apparently not; although #introduction is a friendly and helpful channel, everyone seemed baffled as to why hg up didn't work and what the possible alternatives might be.

Bundles and Artifacts

Eventually someone pointed me to the long list of "bundle" tarballs and advised me on how to get a release tarball there. I actually did that, and (skipping ahead briefly) it built and ran; but I later discovered that "bundles" aren't actually hg repositories and can't be updated. So once you've downloaded your 3 gigabytes or so of Mozilla stuff and built it, it's only good for a week or two until the next Mozilla release, when you're now hopelessly out of date and have to download a whole nuther bundle. Bundles definitely aren't the answer, and they aren't well supported or documented either. I recommend staying away from them.

I should also mention "artifact builds". These sound like a great idea: a lot of the source is already built for you, so you just build a little bit of it. However, artifact builds are only available for a few platforms and branches. If your OS differs in any way from whoever made the artifact build, or if you're requesting a branch, you're likely to waste a lot of time (like I did) downloading stuff only to get mysterious error messages. And even if it works, you won't be able to update it to keep on top of security fixes. Doesn't seem like a good bet.

GitHub to the rescue

Okay, so Mercurial's branch switching doesn't work. But it turns out you don't have to use Mercurial. There's a GitHub mirror for Firefox called gecko-dev, and after cloning it you can use normal git commands to switch branches:

git clone https://github.com/mozilla/gecko-dev.git
cd gecko-dev/
git checkout -t origin/release

You can verify you're on the right branch with git branch -vv, or if you want to list all branches and their remotes, git branch -avv.

Finally: a Firefox release branch that you can actually update!

Building Firefox

Once you have a source tree, you can use the all-powerful mach script to build the current release of Firefox:

./mach build

Of course that takes forever -- hours and hours, depending on how fast your machine is.

Running your New Firefox

The build, after it finishes, helpfully tells you to test it with ./mach run, which runs your newly-built firefox with a special profile, so it doesn't interfere with your running build. It also prints:

For more information on what to do now, see https://developer.mozilla.org/docs/Developer_Guide/So_You_Just_Built_Firefox

Great! Except there's no information there on how to package or run your build -- it's just a placeholder page asking people to contribute to the page.

It turns out that obj-whatever/dist/bin is the directory that corresponds to the tarball you download from Mozilla, and you can run /path/to/mozilla-release/obj-whatever/dist/bin/firefox from anywhere.

I tried filing a bug request to have a sub-page created explaining how to run a newly built Firefox, but it hasn't gotten any response. Maybe I'll just edit the "So You Just Built" page.

Update, 7 months later:
Nobody ever did respond to my bug, but someone on Mozilla's #introduction channel for help with builds gave me a blessing to modify the page directly, which I did.

Specifically, I added:

Your new Firefox executable can be found in: $OBJDIR/dist/bin/firefox. You can run it from there.

If you need to move it, e.g. to another machine, you can run:
./mach package
This should create an OS-specific package, e.g. a tarball on Linux, which will appear in $OBJDIR/dist. You can also copy the $OBJDIR/dist/bin directory -- be sure to use a copy method that expands soft links -- but the result will be much larger than what you get with mach package. On Windows you may want to read about Windows Installer Builds.

Incidentally, my gecko-dev build takes 16G of disk space, of which 9.3G is things it built, which are helpfully segregated in obj-x86_64-pc-linux-gnu.

Tags: ,
[ 15:55 Jun 03, 2018    More tech/web | permalink to this entry | ]

Thu, 31 May 2018

Trying Firefox Variants: From Firefox ESR to Pale Moon to Quantum

For the last year or so the Firefox development team has been making life ever harder for users. First they broke all the old extensions that were based on XUL and XBL, so a lot of customizations no longer worked. Then they made PulseAudio mandatory on Linux (bug 1345661), so on systems like mine that don't run Pulse, there's no way to get sound in a web page. Forget YouTube or Xeno-Canto unless you keep another browser around for that purpose.

For those reasons I'd been avoiding the Firefox upgrade, sticking to Debian's firefox-esr ("Extended Support Release"). But when Debian updated firefox-esr to Firefox 56 ESR late last year, performance became unusable. Like half a minute between when you hit Page Down and when the page actually scrolls. It was time to switch browsers.

Pale Moon

I'd been hearing about the Firefox variant Pale Moon. It's a fork of an older Firefox, supposedly with an emphasis on openness and configurability.

I installed the Debian palemoon package. Performance was fine, similar to Firefox before the tragic firefox-56. It was missing a few things -- no built-in PDF viewer or Reader mode -- but I don't use Reader mode that often, and the built-in PDF viewer is an annoyance at least as often as it's a help. (In Firefox it's fairly random about when it kicks in anyway, so I'm never sure whether I'll get the PDF viewer or a Save-as prompt on any given PDF link).

For form and password autofill, for some reason Pale Moon doesn't fill out fields until you type the first letter. For instance, if I had an account with name "myname" and a stored password, when I loaded the page, both fields would be empty, as if there's nothing stored for that page. But typing an 'm' in the username field makes both username and password fields fill in. This isn't something Firefox ever did and I don't particularly like it, but it isn't a major problem.

Then there were some minor irritations, like the fact that profiles were stored in a folder named ~/.moonchild\ productions/ -- super long so it messed up directory listings, and with a space in the middle. PaleMoon was also very insistent about using new tabs for everything, including URLs launched from other programs -- there doesn't seem to be any way to get it to open URLs in the active tab.

I used it as my main browser for several months, and it basically worked. But the irritations started to get to me, and I started considering other options. The final kicker was when I saw Pale Moon bug 86, in which, as far as I can tell, someone working on the Pale Moon port for OpenBSD tries to use system libraries instead of Pale Moon's patched libraries, and is attacked for it in the bug. Reading the exchange made me want to avoid Pale Moon for two reasons. First, the rudeness: a toxic community that doesn't treat contributors well isn't likely to last long or to have the resources to keep on top of bug and security fixes. Second, the technical question: if Pale Moon's code is so quirky that it can't use standard system libraries and needs a bunch of custom-patched libraries, what does that say about how maintainable it will be in the long term?

Firefox Quantum

Much has been made in the technical press of the latest Firefox, called "Quantum", and its supposed speed. I was a bit dubious of that: it's easy to make your program seem fast after you've forced everybody through a few years of working with a program that's degraded its performance by an order of magnitude, like Firefox had. After Firefox 56, anything would seem fast.

Still, maybe it would at least be fast enough to be usable. But I had trepidations too. What about all those extensions that don't work any more? What about sound not working? Could I live with that?

Debian has no current firefox package, so I downloaded the tarball from mozilla.org, unpacked it, made a new firefox profile and ran it.
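For the record, that amounts to something like this (-ProfileManager brings up Firefox's profile chooser so you can create a fresh profile for testing):

$ tar xf firefox-*.tar.bz2
$ cd firefox
$ ./firefox -ProfileManager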

Initial startup performance is terrible -- it takes forever to bring up the first window, and I often get a "Firefox seems slow to start up" message at the bottom of the screen, with a link to a page of a bunch of completely irrelevant hints. Still, I typically only start Firefox once a day. Once it's up, performance is a bit laggy but a lot better than firefox-esr 56 was, certainly usable.

I was able to find replacements for most of the really important extensions (the ones that control things like cookies and javascript). But sound, as predicted, didn't work. And there were several other, worse regressions from older Firefox versions.

As it turned out, the only way to make Firefox Quantum usable for me was to build a custom version where I could fix the regressions. To keep articles from being way too long, I'll write about all those issues separately: how to build Firefox, how to fix broken key bindings, and how to fix the PulseAudio problem.

Tags: ,
[ 16:07 May 31, 2018    More tech/web | permalink to this entry | ]

Sun, 27 May 2018

Faking Javascript <body onload=""> in Wordpress

After I'd switched from the Google Maps API to Leaflet to get my trail map working on my own website, the next step was to move it to the Nature Center's website to replace the broken Google Maps version.

PEEC, unfortunately for me, uses Wordpress (on the theory that this makes it easier for volunteers and non-technical staff to add content). I am not a Wordpress person at all; to me, systems like Wordpress and Drupal mostly add obstacles that mean standard HTML doesn't work right and has to be modified in nonstandard ways. This was a case in point.

The Leaflet library for displaying maps relies on calling an initialization function when the body of the page is loaded:

<body onLoad="javascript:init_trailmap();">

But in a Wordpress website, the <body> tag comes from Wordpress, so you can't edit it to add an onload.

A web search found lots of people wanting body onloads, and they had found all sorts of elaborate ruses to get around the problem. Most of the solutions seemed like they involved editing site-wide Wordpress files to add special case behavior depending on the page name. That sounded brittle, especially on a site where I'm not the Wordpress administrator: would I have to figure this out all over again every time Wordpress got upgraded?

But I found a trick in a Stack Overflow discussion, Adding onload to body, that included a tricky bit of code. There's a javascript function to add an onload to the tag; then that javascript is wrapped inside a PHP function. Then, if I'm reading it correctly, the PHP function registers itself with Wordpress so it will be called when the Wordpress footer is added; at that point the PHP runs, which adds the javascript to the body tag in time for the onload event to call the Javascript. Yikes!

But it worked. Here's what I ended up with, in the PHP page that Wordpress was already calling for the page:

<?php
/* Wordpress doesn't give you access to the <body> tag to add a call
 * to init_trailmap(). This is a workaround to dynamically add that tag.
 */
function add_onload() {
?>

<script type="text/javascript">
  document.getElementsByTagName('body')[0].onload = init_trailmap;
</script>

<?php
}

add_action( 'wp_footer', 'add_onload' );
?>

Complicated, but it's a nice trick; and it let us switch to Leaflet and get the PEEC interactive Los Alamos area trail map working again.

Tags: , , , , ,
[ 15:49 May 27, 2018    More tech/web | permalink to this entry | ]

Thu, 24 May 2018

Google Maps API No Longer Free?

A while ago I wrote an interactive trail map page for the PEEC nature center website. At the time, I wanted to use an open library, like OpenLayers or Leaflet; but there were no good sources of satellite/aerial map tiles at the time. The only one I found didn't work because they had a big blank area anywhere near LANL -- maybe because of the restricted airspace around the Lab. Anyway, I figured people would want a satellite option, so I used Google Maps instead despite its much more frustrating API.

This week we've been working on converting the website to https. Most things went surprisingly smoothly (though we had a lot more absolute URLs in our pages and databases than we'd realized). But when we got through, I discovered the trail map was broken. I'm still not clear why, but somehow the change from http to https made Google's API stop working. In trying to fix the problem, I discovered that Google's map API may soon cease to be free:

New pricing and product changes will go into effect starting June 11, 2018. For more information, check out the Guide for Existing Users.

That has a button for "Transition Tool" which, when you click it, won't tell you anything about the new pricing structure until you've already set up a billing account. Um ... no thanks, Google.

Googling for google maps api billing led to a page headed "Pricing that scales to fit your needs", which has an elaborate pricing structure listing a whole bunch of variants (I have no idea which of these I was using), of which the first $200/month is free. But since they insist on setting up a billing account, I'd probably have to give them a credit card number -- which one? My personal credit card, for a page that isn't even on my site? Does the nonprofit nature center even have a credit card? How many of these API calls is their site likely to get in a month, and what are the chances of going over the limit?

It all rubbed me the wrong way, especially given the context: "Your trail maps page that real people actually use has broken without warning, and will be held hostage until you give us a credit card number". This is what one gets for using a supposedly free (as in beer) library that's not Free open source software.

So I replaced Google with the excellent open source Leaflet library, which, as a bonus, has much better documentation than Google Maps. (It's not that Google's documentation is poorly written; it's that they keep changing their APIs, but there's no way to tell the dozen or so different APIs apart because they're all just called "Maps", so when you search for documentation you're almost guaranteed to get something that stopped working six years ago -- but the documentation is still there making it look like it's still valid.) And I was happy to discover that, in the time since I originally set up the trailmap page, some open providers of aerial/satellite map tiles have appeared. So we can use open source and have a satellite view.

Our trail map is back online with Leaflet, and with any luck, this time it will keep working. PEEC Los Alamos Area Trail Map.

Tags: , , ,
[ 16:13 May 24, 2018    More programming | permalink to this entry | ]

Tue, 22 May 2018

Downloading all the Books in a Humble Bundle

Humble Bundle has a great bundle going right now (for another 15 minutes -- sorry, I meant to post this earlier) on books by Nebula-winning science fiction authors, including some old favorites of mine, and a few I'd been meaning to read.

I like Humble Bundle a lot, but one thing about them I don't like: they make it very difficult to download books, insisting that you click on every single link (and then do whatever "Download this link / yes, really download, to this directory" dance your browser insists on) rather than offering a sane option like a tarball or zip file. I guess part of their business model includes wanting their customers to get RSI. This has apparently been a problem for quite some time; a web search found lots of discussions of ways of automating the downloads, most of which apparently no longer work (none of the ones I tried did).

But a wizard friend on IRC quickly came up with a solution: some javascript you can paste into Firefox's console. She started with a quickie function that fetched all but a few of the files, but then modified it for better error checking and the ability to get different formats.

In Firefox, open the web console (Tools/Web Developer/Web Console) and paste this in the single-line javascript text field at the bottom.

// How many seconds to delay between downloads.
var delay = 1000;
// whether to use window.location or window.open
// window.open is more convenient, but may be popup-blocked
var window_open = false;
// the filetypes to look for, in order of preference.
// Make sure your browser won't try to preview these filetypes.
var filetypes = ['epub', 'mobi', 'pdf'];

var downloads = document.getElementsByClassName('download-buttons');
var i = 0;
var success = 0;

function download() {
  var children = downloads[i].children;
  var hrefs = {};
  for (var j = 0; j < children.length; j++) {
    var href = children[j].getElementsByClassName('a')[0].href;
    for (var k = 0; k < filetypes.length; k++) {
      if (href.includes(filetypes[k])) {
        hrefs[filetypes[k]] = href;
        console.log('Found ' + filetypes[k] + ': ' + href);
      }
    }
  }
  var href = undefined;
  for (var k = 0; k < filetypes.length; k++) {
    if (hrefs[filetypes[k]] != undefined) {
      href = hrefs[filetypes[k]];
      break;
    }
  }
  if (href != undefined) {
    console.log('Downloading: ' + href);
    if (window_open) {
      window.open(href);
    } else {
      window.location = href;
    }
    success++;
  }
  i++;
  console.log(i + '/' + downloads.length + '; ' + success + ' successes.');
  if (i < downloads.length) {
    window.setTimeout(download, delay);
  }
}
download();

If you have "Always ask where to save files" checked in Preferences/General, you'll still get a download dialog for each book (but at least you don't have to click; you can hit return for each one). Even if this is your preference, you might want to consider changing it before downloading a bunch of Humble books.

Anyway, pretty cool! Takes the sting out of bundles, especially big ones like this 42-book collection.

Tags: , , ,
[ 17:49 May 22, 2018    More tech/web | permalink to this entry | ]

Mon, 14 May 2018

Plotting the Jet Stream, or Other Winds, with ECMWF Data

I've been trying to learn more about weather from a friend who used to work in the field -- in particular, New Mexico's notoriously windy spring. One of the reasons behind our spring winds relates to the location of the jet stream. But I couldn't find many good references showing how the jet stream moves throughout the year. So I decided to try to plot it myself -- if I could find the data. Getting weather data can be surprisingly hard.

In my search, I stumbled across Geert Barentsen's excellent Annual variations in the jet stream (video). It wasn't quite what I wanted -- it shows the position of the jet stream in December in successive years -- but the important thing is that he provides a Python script on GitHub that shows how he produced his beautiful animation.

[Sample jet stream image]

Well -- mostly. It turns out his data sources are no longer available, and he didn't go into a lot of detail on where he got his data, only saying that it was from the ECMWF ERA re-analysis model (with a link that's now 404). That led me on a merry chase through the ECMWF website trying to figure out which part of which database I needed. ECMWF has lots of publicly available databases (and even more), Python libraries to access them, and a lot of documentation; but somehow none of the documentation addresses questions like which database includes which variables and how to find and fetch the data you're after, and a lot of the sample code doesn't actually work. I ended up using the "ERA Interim, Daily" dataset and requesting data for only specific times and only the variables and pressure levels I was interested in. It's a great source of data once you figure out how to request it.

Sign up for an ECMWF API Key

Access ECMWF Public Datasets (there's also Access MARS and I'm not sure what the difference is), which has links you can click on to register for an API key.

Once you get the email with your initial password, log in using the URL in the email, and change the password. That gave me a "next" button that, when I clicked it, took me to a page warning me that the page was obsolete and I should update whatever bookmark I had used to get there. That page also doesn't offer a link to the new page where you can get your key details, so go here: Your API key. The API Key page gives you some lines you can paste into ~/.ecmwfapirc.
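The lines you paste look roughly like this -- placeholder values here, of course; the API Key page gives you the real ones:

{
    "url"   : "https://api.ecmwf.int/v1",
    "key"   : "XXXXXXXXXXXXXXXXXXXXXXXX",
    "email" : "you@example.com"
}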

You'll also have to accept the license terms for the databases you want to use.

Install the Python API

That sets you up to use the ECMWF api. They have a Web API and a Python library, plus some other Python packages, but after struggling with a bunch of Magics tutorial examples that mostly crashed or couldn't find data, I decided I was better off sticking to the basic Python downloader API and plotting the results with Matplotlib.

The Python data-fetching API works well. To install it, activate your preferred Python virtualenv or whatever you use for pip packages, then run the pip command shown at Web API Downloads (under "Click here to see the installation/update instructions..."). As always with pip packages, you'll have to decide on a Python version (they support both 2 and 3) and whether to use a virtualenv, the much-disrecommended sudo pip, pip3, etc. I used pip3 in a virtualenv and it worked fine.
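In my case (Python 3 in a virtualenv), the pip command amounted to something like:

$ pip3 install ecmwf-api-client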

Specify a dataset and parameters

That's great, but how do you know which dataset you want to load?

There doesn't seem to be anything that just lists which datasets have which variables. The only way I found is to go to the Web API page for a particular dataset to see the form where you can request different variables. For instance, I ended up using the "interim-full-daily" database, where you can choose date ranges and lists of parameters. There are more choices in the sidebar: for instance, clicking on "Pressure levels" lets you choose from a list of barometric pressures ranging from 1000 all the way down to 1. No units are specified, but they're millibars, also known as hectoPascals (hPa): 1000 is more or less the pressure at ground level, 250 is roughly where the jet stream is, and Los Alamos is roughly at 775 hPa (you can find charts of pressure vs. altitude on the web).

When you go to any of the Web API pages, it will show you a dialog suggesting you read about Data retrieval efficiency, which you should definitely do if you're expecting to request a lot of data, then click on the details for the database you're using to find out how data is grouped in "tape files". For instance, in the ERA-interim database, tapes are grouped by date, so if you're requesting multiple parameters for multiple months, request all the parameters for a given month together, rather than making one request for level 250, another request for level 1000, etc.

Once you've checked the boxes for the data you want, you can fetch the data via the web interface, or click on "View the MARS request" to get parameters you can plug into a Python script.

If you choose the Python script option as I did, you can start with the basic data retrieval example. Use the second example, the one that uses 'format' : "netcdf", which will (eventually) give you a file ending in .nc.

Requesting a specific area

You can request only a limited area:

"area": "75/-20/10/60",

but they're not very forthcoming on the syntax of that, and it's particularly confusing since "75/-20/10/60" supposedly means "Europe". It's hard to figure out how those numbers, as longitudes and latitudes, correspond to Europe, which doesn't go down to 10 degrees latitude, let alone -20 degrees. The Post-processing keywords page gives more information: it's North/West/South/East, which still makes no sense for Europe, until you expand the Area examples tab on that page and find out that by "Europe" they mean Europe plus Saudi Arabia and most of North Africa.

Using the data: What's in it?

Once you have the data file, assuming you requested data in netcdf format, you can read the .nc file with the netCDF4 Python module, available as the Debian package "python3-netcdf4" or via pip:

import netCDF4

data = netCDF4.Dataset('filename.nc')

But what's in that Dataset? Try running the preceding two lines in the interactive Python shell, then:

>>> for key in data.variables:
...   print(key)
... 
longitude
latitude
level
time
w
vo
u
v

You can find out more about a parameter, like its units, type, and shape (array dimensions). Let's look at "level":

>>> data['level']
<class 'netCDF4._netCDF4.Variable'>
int32 level(level)
    units: millibars
    long_name: pressure_level
unlimited dimensions: 
current shape = (3,)
filling on, default _FillValue of -2147483647 used

>>> data['level'][:]
array([ 250,  775, 1000], dtype=int32)

>>> type(data['level'][:])
<class 'numpy.ndarray'>

Levels has shape (3,): it's a one-dimensional array with three elements: 250, 775 and 1000. Those are the three levels I requested (from the web API and in my Python script). The units are millibars.

More complicated variables

How about something more complicated? u and v are the two components of wind speed.

>>> data['u']
<class 'netCDF4._netCDF4.Variable'>
int16 u(time, level, latitude, longitude)
    scale_factor: 0.002161405503194121
    add_offset: 30.095301438361684
    _FillValue: -32767
    missing_value: -32767
    units: m s**-1
    long_name: U component of wind
    standard_name: eastward_wind
unlimited dimensions: time
current shape = (30, 3, 241, 480)
filling on

u (v is the same) has a shape of (30, 3, 241, 480): it's a 4-dimensional array. Why? Looking at the numbers in the shape gives a clue. The second dimension has 3 rows: they correspond to the three levels, because there's a wind speed at every level. The first dimension has 30 rows: it corresponds to the dates I requested (the month of April 2015). I can verify that:
>>> data['time'].shape
(30,)

Sure enough, there are 30 times, so that's what the first dimension of u and v corresponds to. The other dimensions, presumably, are latitude and longitude. Let's check that:

>>> data['longitude'].shape
(480,)
>>> data['latitude'].shape
(241,)

Sure enough! So, although it would be nice if it actually told you which dimension corresponded with which parameter, you can probably figure it out. If you're not sure, print the shapes of all the variables and work out which dimensions correspond to what:

>>> for key in data.variables:
...   print(key, data[key].shape)

Iterating over times

data['time'] has all the times for which you have data (30 data points for my initial test of the days in April 2015). The easiest way to plot anything is to iterate over those values:

    timeunits = JSdata.data['time'].units
    cal = JSdata.data['time'].calendar
    for i, t in enumerate(JSdata.data['time']):
        thedate = netCDF4.num2date(t, units=timeunits, calendar=cal)

Then you can use thedate like a datetime, calling thedate.strftime or whatever you need.

So that's how to access your data. All that's left is to plot it -- and in this case I had Geert Barentsen's script to start with, so I just modified it a little to work with slightly changed data format, and then added some argument parsing and runtime options.

Converting to Video

I already wrote about how to take the still images the program produces and turn them into a video: Making Videos (that work in Firefox) from a Series of Images.

However, it turns out ffmpeg can't handle files that are named with timestamps, like jetstream-2017-06-14-250.png. It can only handle one sequential integer. So I thought, what if I removed the dashes from the name, and used names like jetstream-20170614-250.png with %8d? No dice: ffmpeg also has the limitation that the integer can have at most four digits.

So I had to rename my images. A shell command works: I ran this in zsh but I think it should work in bash too.

cd outdir
mkdir moviedir

i=1
for fil in *.png; do
  newname=$(printf "%04d.png" $i)
  ln -s ../$fil moviedir/$newname
  i=$((i+1))
done

ffmpeg -i moviedir/%4d.png -filter:v "setpts=2.5*PTS" -pix_fmt yuv420p jetstream.mp4

The -filter:v "setpts=2.5*PTS" controls the delay between frames -- I'm not clear on the units, but larger numbers have more delay, and I think it's a multiplier, so this is 2.5 times slower than the default.

When I uploaded the video to YouTube, I got a warning, "Your videos will process faster if you encode into a streamable file format." I then spent half a day trying to find a combination of ffmpeg arguments that avoided that warning, and eventually gave up. As far as I can tell, the warning only affects the 20 seconds or so of processing that happens after the 5-10 minutes it takes to upload the video, so I'm not sure it's terribly important.

Results

Here's a video of the jet stream from 2012 to early 2018, and an earlier effort with a much longer 6.0x delay.

And here's the script, updated from the original Barentsen script and with a bunch of command-line options to let you plot different collections of data: jetstream.py on GitHub.

Tags: , , ,
[ 14:18 May 14, 2018    More programming | permalink to this entry | ]

Fri, 11 May 2018

Making Videos (that work in Firefox) from a Series of Images

I was working on a weather project to make animated maps of the jet stream. Getting and plotting wind data is a much longer article (coming soon), but once I had all the images plotted, I wanted to combine them all into a time-lapse video showing how the jet stream moves.

Like most projects, it's simple once you find the right recipe. If your images are named outdir/filename00.png, outdir/filename01.png, outdir/filename02.png and so on, you can turn them into an MPEG4 video with ffmpeg:

ffmpeg -i outdir/filename%02d.png -filter:v "setpts=6.0*PTS" -pix_fmt yuv420p jetstream.mp4

%02d, for non-programmers, just means a 2-digit decimal integer with leading zeros. If the filenames just use 1, 2, 3, ... 10, 11 without leading zeros, use %2d instead; if they have three digits, use %03d or %3d, and so on.

Update: If your first photo isn't numbered 00, you can set a -start_number -- but it must come before the -i and filename template. For instance:

ffmpeg -start_number 17 -i outdir/filename%02d.png -filter:v "setpts=6.0*PTS" -pix_fmt yuv420p jetstream.mp4

That "setpts=6.0*PTS" controls the speed of the playback, by adding or removing frames. PTS stands for "Presentation TimeStamps", which apparently is a measure of how far along a frame is in the file; setpts=6.0*PTS means for each frame, figure out how far it would have been in the file (PTS) and multiply that by 6. So if a frame would normally have been at timestamp 10 seconds, now it will be at 60 seconds, and the video will be six times longer and six times slower. And yes, you can also use values less than one to speed a video up. You can also change a video's playback speed by changing the frame rate, either with the -r option, e.g. -r 30, or with the fps filter, filter:v fps=30. The default frame rate is 25.

You can examine values like the frame rate, number of frames and duration of a video file with: ffprobe -select_streams v -show_streams filename or with the mediainfo program (not part of ffmpeg).

The -pix_fmt yuv420p turned out to be the tricky part. The recipes I found online didn't include that part, but without it, Firefox claims "Video can't be played because the file is corrupt", even though most other browsers can play it just fine. If you open Firefox's web console and reload, it offers the additional information "Details: mozilla::SupportChecker::AddMediaFormatChecker(const mozilla::TrackInfo&)::<lambda()>: Decoder may not have the capability to handle the requested video format with YUV444 chroma subsampling."

Adding -pix_fmt yuv420p cured the problem and made the video compatible with Firefox, though at first I had problems with ffmpeg complaining "height not divisible by 2 (1980x1113)" (even though the height of the images was in fact divisible by 2). I'm not sure what was wrong; later ffmpeg stopped giving me that error message and converted the video. It may depend on where in the ffmpeg command you put the pix_fmt flag or what other flags are present. ffmpeg arguments are a mystery to me.

Of course, if you're only making something to be uploaded to youtube, the Firefox limitation probably doesn't matter and you may not need the -pix_fmt yuv420p argument.

Animated GIFs

Making an animated GIF is easier. You can use ImageMagick's convert:

convert -delay 30 -loop 0 *.png jetstream.gif

The GIF will be a lot larger, though. For my initial test of thirty 1000 x 500 images, the MP4 was 760K while the GIF was 4.2M.

Tags: , , ,
[ 09:59 May 11, 2018    More linux | permalink to this entry | ]

Mon, 07 May 2018

A Hissy Fit

As I came home from the market and prepared to turn into the driveway I had to stop for an obstacle: a bullsnake who had stretched himself across the road.

[pugnacious bullsnake]

I pulled off, got out of the car and ran back. A pickup truck was coming around the bend and I was afraid he would run over the snake, but he stopped and rolled down the window to help. White Rock people are like that, even the ones in pickup trucks.

The snake was pugnacious, not your usual mellow bullsnake. He coiled up and started hissing madly. The truck driver said "Aw, c'mon, you're not fooling anybody. We know you're not a rattlesnake," but the snake wasn't listening. (I guess that's understandable, since they have no ears.)

I tried to loom in front of him and stamp on the ground to herd him off the road, but he wasn't having any of it. He just kept coiling and hissing, and struck at me when I got a little closer.

I moved my hand slowly around behind his head and gently took hold of his neck -- like what you see people do with rattlesnakes, though I'd never try that with a venomous snake without a lot of practice and training. With a bullsnake, even if they bite you it's not a big deal. When I was a teenager I had a pet gopher snake (a fringe benefit of having a mother who worked on wildlife documentaries), and though "Goph" was quite tame, he once accidentally bit me when I was replacing his water dish after feeding him and he mistook my hand for a mouse. (He seemed acutely embarrassed, if such an emotion can be attributed to a reptile; he let go immediately and retreated to sulk in the far corner of his aquarium.) Anyway, it didn't hurt; their teeth are tiny and incredibly sharp, and it feels like the pinprick from a finger blood test at the doctor's office.

Anyway, the bullsnake today didn't bite. But after I moved him off the road to a nice warm basalt rock in the yard, he stayed agitated, hissing loudly, coiling and beating his tail to mimic a rattlesnake. He didn't look like he was going to run and hide any time soon, so I ran inside to grab a camera.

In the photos, I thought it was interesting how he held his mouth as he hissed. Dave thought it looked like W.C. Fields. I hadn't had a chance to see that up close before: my pet snake never had occasion to hiss, and I haven't often seen wild bullsnakes be so pugnacious either -- certainly not for long enough that I've been able to photograph it. You can also see how he puffs up his neck.

I now have a new appreciation of the term "hissy fit".

[pugnacious bullsnake]

Tags: ,
[ 15:06 May 07, 2018    More nature | permalink to this entry | ]

Fri, 27 Apr 2018

Displaying PDF with Python, Qt5 and Poppler

I had a need for a Qt widget that could display PDF. That turned out to be surprisingly hard to do. The Qt Wiki has a page on Handling PDF, which suggests only two alternatives: QtPDF, which is C++ only so I would need to write a wrapper to use it with Python (and then anyone else who used my code would have to compile and install it); or Poppler. Poppler is a common library on Linux, available as a package and used for programs like evince, so that seemed like the best route.

But Python bindings for Poppler are a bit harder to come by. I found a little one-page example using Poppler and Gtk3 via gi.repository ... but in this case I needed it to work with a Qt5 program, and my attempts to translate that example to work with Qt were futile. Poppler's page.render(ctx) takes a Cairo context, and Cairo is apparently a Gtk-centered phenomenon: I couldn't find any way to get a Cairo context from a Qt5 widget, and although I found some web examples suggesting renderToImage(), the Poppler available in gi.repository doesn't have that function.

But it turns out there's another Poppler: popplerqt5, available in the Debian package python3-poppler-qt5. That Poppler does have renderToImage, and you can take that image and paint it in a paint() callback or turn it into a pixmap you can use with a QLabel. Here's the basic sequence:

    # popplerqt5 is in the Debian package python3-poppler-qt5
    from popplerqt5 import Poppler
    from PyQt5.QtGui import QPixmap

    document = Poppler.Document.load(filename)
    document.setRenderHint(Poppler.Document.TextAntialiasing)
    page = document.page(pageno)
    img = page.renderToImage(dpi, dpi)

    # Use the rendered image as the pixmap for a label:
    pixmap = QPixmap.fromImage(img)
    label.setPixmap(pixmap)

The line to set text antialiasing is not optional. Well, theoretically it's optional; go ahead, try it without that and see for yourself. It's basically unreadable.

Of course, there are plenty of other details to take care of. For instance, you can get the size of the rendered image:

    size = page.pageSize()
... after which you can use size.width() and size.height(). They're in points. There are 72 points per inch, so calculate accordingly in the dpi values you pass to renderToImage if you're targeting a specific DPI or need it to fit in a specific window size.
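
For instance, here's a sketch of how you might pick dpi values to make the page fill a target width in pixels (the variable name is mine, not from the real code):

    size = page.pageSize()    # in points; 72 points per inch
    dpi = 72.0 * target_width_pixels / size.width()
    img = page.renderToImage(dpi, dpi)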

Window Resize and Efficient Rendering

Speaking of fitting to a window size, I wanted to resize the content whenever the window was resized, which meant redefining resizeEvent(self, event) on the widget. Initially my PDFWidget inherited from QWidget with a custom paintEvent(), like this:

        # (QPainter is in PyQt5.QtGui, QPoint in PyQt5.QtCore)
        # Create self.img once, early on:
        self.img = self.page.renderToImage(self.dpi, self.dpi)

    def paintEvent(self, event):
        qp = QPainter()
        qp.begin(self)
        qp.drawImage(QPoint(0, 0), self.img)
        qp.end()
(Poppler also has a function page.renderToPainter(), but I never did figure out how to get it to do anything useful.)

That worked, but when I added resizeEvent I got an infinite loop: paintEvent() called resizeEvent() which triggered another paintEvent(), ad infinitum. I couldn't find a way around that (GTK has similar problems -- seems like nearly everything you do generates another expose event -- but there you can temporarily disable expose events while you're drawing). So I rewrote my PDFWidget class to inherit from QLabel instead of QWidget, converted the QImage to a QPixmap and passed it to self.setPixmap(). That let me get rid of the paintEvent() function entirely and let QLabel handle the painting, which is probably more efficient anyway.
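
The QLabel version boils down to something like this sketch -- simplified from the real code, assuming self.page was saved earlier and QLabel comes from PyQt5.QtWidgets. (Re-rendering on every resize event is wasteful, so in practice you might want to throttle it.)

    class PDFWidget(QLabel):
        def resizeEvent(self, event):
            # Re-render at a dpi that fits the new width.
            # pageSize() is in points, 72 per inch:
            dpi = 72.0 * event.size().width() / self.page.pageSize().width()
            img = self.page.renderToImage(dpi, dpi)
            self.setPixmap(QPixmap.fromImage(img))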

Showing all pages in a scrolled widget

renderToImage gives you one image corresponding to one page of the PDF document. More often, you'll want to see the whole document laid out, with all the pages. So you need a way to stack a bunch of widgets vertically, one for each page. You can do that with a QVBoxLayout on a widget inside a QScrollArea.

I haven't done much Qt5 programming, so I wasn't familiar with how these QVBoxes work. Most toolkits I've worked with have a VBox container widget to which you add child widgets, but in Qt5, you create a widget (no particular type -- a QWidget is enough), then create a layout object that modifies the widget, and add the sub-widgets to the layout object. There isn't much documentation for any of this, and very few examples of doing it in Python, so it took some fiddling to get it working.
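
Here's the basic pattern as a sketch (QScrollArea, QWidget, QVBoxLayout and QLabel are all in PyQt5.QtWidgets; document and dpi are set up as earlier):

    scroll = QScrollArea()
    container = QWidget()
    vbox = QVBoxLayout()

    # One QLabel per page, stacked vertically:
    for pageno in range(document.numPages()):
        page = document.page(pageno)
        img = page.renderToImage(dpi, dpi)
        label = QLabel()
        label.setPixmap(QPixmap.fromImage(img))
        vbox.addWidget(label)

    container.setLayout(vbox)
    scroll.setWidget(container)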

Initial Window Size

One last thing: Qt5 doesn't seem to have a concept of desired initial window size. Most of the examples I found, especially the ones that use a .ui file, use setGeometry(); but that requires an (X, Y) position as well as (width, height), and there's no way to tell it to ignore the position. That means that instead of letting your window manager place the window according to your preferences, the window will insist on showing up at whatever arbitrary place you set in the code. Worse, most of the Qt5 examples I found online set the geometry to (0, 0): when I tried that, the window came up with the widget in the upper left corner of the screen and the window's titlebar hidden above the top of the screen, so there's no way to move the window to a better location unless you happen to know your window manager's hidden key binding for that. (Hint: on many Linux window managers, hold Alt down and drag anywhere in the window to move it. If that doesn't work, try holding down the "Windows" key instead of Alt.)

This may explain why I've been seeing an increasing number of these ill-behaved programs that come up with their titlebars offscreen. But if you want your programs to be better behaved, it works to self.resize(width, height) a widget when you first create it.
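
That is, something like this (the constructor signature here is just illustrative):

    win = PDFWidget(filename)
    win.resize(600, 800)    # desired width, height in pixels
    win.show()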

The current incarnation of my PDF viewer, set up as a module so you can import it and use it in other programs, is at qpdfview.py on GitHub.

Tags: , ,
[ 19:01 Apr 27, 2018    More programming | permalink to this entry | ]

Thu, 05 Apr 2018

Cave Creek Hiking and Birding Trip

A week ago I got back from a trip to the Chiricahua mountains of southern Arizona, specifically Cave Creek on the eastern side of the range. The trip was theoretically a hiking trip, but it was also for birding and wildlife watching -- southern Arizona is near the Mexican border and gets a lot of birds and other animals not seen in the rest of the US -- and an excuse to visit a friend who lives near there.

Although it's close enough that it could be driven in one fairly long day, we took a roundabout 2-day route so we could explore some other areas along the way that we'd been curious about.

First, we wanted to take a look at the White Mesa Bike Trails northwest of Albuquerque, near the Ojito Wilderness. We'll be back at some point with bikes, but we wanted to get a general idea of the country and terrain. The Ojito, too, looks like it might be worth a hiking trip, though it's rather poorly signed: we saw several kiosks with maps where the "YOU ARE HERE" was clearly completely misplaced. Still, how can you not want to go back to a place where the two main trails are named Seismosaurus and Hoodoo?

[Cabezon] The route past the Ojito also led past Cabezon Peak, a volcanic neck we've seen from a long distance away and wanted to see closer. It's apparently possible to climb it but we're told the top part is fairly technical, more than just a hike.

Finally, we went up and over Mt Taylor, something we've been meaning to do for many years. You can drive fairly close to the top, but this being late spring, there was still snow on the upper part of the road and our Rav4's tires weren't up to the challenge. We'll go back some time and hike all the way to the top.

We spent the night in Grants, then the following day, headed down through El Malpais, stopping briefly at the beautiful Sandstone Overlook, then down through the Datil and Mogollon area. We wanted to take a look at a trail called the Catwalk, but when we got there, it was cold, blustery, and starting to rain and sleet. So we didn't hike the Catwalk this time, but at least we got a look at the beginning of it, then continued down through Silver City and thence to I-10, where just short of the Arizona border we were amused by the Burma Shave dust storm signs about which I already wrote.

At Cave Creek

[Beautiful rocks at Cave Creek] Cave Creek Ranch, in Portal, AZ, turned out to be a lovely place to stay, especially for anyone interested in wildlife. I saw several "life birds" and mammals, plus quite a few more that I'd seen at some point but had never had the opportunity to photograph. Even had we not been hiking, just hanging around the ranch watching the critters was a lot of fun. They charge $5 for people who aren't staying there to come and sit in the feeder area; I'm not sure how strictly they enforce it, but given how much they must spend on feed, it would be nice to help support them.

The bird everyone was looking for was the Elegant Trogon. Supposedly one had been seen recently along the creekbed, and we all wanted to see it.

They also had a nifty suspension bridge for pedestrians crossing a dry (this year) arroyo over on another part of the property. I guess I was so busy watching the critters that I never went wandering around, and I would have missed the bridge entirely had Dave not pointed it out to me on the last day.

The only big hike I did was the Burro Trail to Horseshoe Pass, about 10 miles and maybe 1800 feet of climbing. It started with a long hike up the creek, during which everybody had eyes and ears trained on the sycamores (we were told the trogon favored sycamores). No trogon. But it was a pretty hike, and once we finally started climbing out of the creekbed there were great views of the soaring cliffs above Cave Creek Canyon. Dave opted to skip the upper part of the trail to the saddle; I went, but have to admit that it was mostly just more of the same, with a lot of scrambling and a few difficult and exposed traverses. At the time I thought it was worth it, but by the time we'd slogged all the way back to the cars I was doubting that.

[ Organ Pipe Formation at Chiricahua National Monument ] On the second day the group went over the Chiricahuas to Chiricahua National Monument, on the other side. Forest road 42 is closed in winter, but we'd been told that it was open now since the winter had been such a dry one, and it wasn't a particularly technical road, certainly easy in the Rav4. But we had plans to visit our friend over at the base of the next mountain range west, so we just made a quick visit to the monument, did a quick hike around the nature trail and headed on.

Back with the group at Cave Creek on Thursday, we opted for a shorter, more relaxed hike in the canyon to Ash Spring rather than the brutal ascent to Silver Peak. In the canyon, maybe we'd see the trogon! Nope, no trogon. But it was a very pleasant hike, with our first horned lizard ("horny toad") spotting of the year, a couple of other lizards, and some lovely views.

Critters

We'd been making a lot of trogon jokes over the past few days, as we saw visitor after visitor trudging away muttering about not having seen one. "They should rename the town of Portal to Trogon, AZ." "They should rename that B&B Trogon's Roost Bed and Breakfast." Finally, at the end of Thursday's hike, we stopped in at the local ranger station, where among other things (like admiring their caged gila monster) we asked about trogon sightings. Turns out the last one to be seen had been in November. A local thought maybe she'd heard one in January. Whoever had relayed the rumor that one had been seen recently was being wildly optimistic.

[ Northern Cardinal ] [ Coati ] [ Javalina ] [ white-tailed buck ]
Fortunately, I'm not a die-hard birder and I didn't go there specifically for the trogon. I saw lots of good birds and some mammals I'd never seen before (full list), like a coatimundi (I didn't realize those ever came up to the US) and a herd (pack? flock?) of javalinas. And white-tailed deer -- easterners will laugh, but those aren't common anywhere I've lived (mule deer are the rule in California and Northern New Mexico). Plus some good hikes with great views, and a nice visit with our friend. It was a good trip.

On the way home, again we took two days for the opportunity to visit some places we hadn't seen. First, Cloudcroft, NM: a place we'd heard a lot about because a lot of astronomers retire there. It's high in the mountains and quite lovely, with lots of hiking trails in the surrounding national forest. Worth a visit some time.

From Cloudcroft we traveled through the Mescalero Apache reservation, which was unexpectedly beautiful, mountainous and wooded and dotted with nicely kept houses and ranches, to Ruidoso, a nice little town where we spent the night.

Lincoln

[ Lincoln, NM ] Our last stop, Saturday morning, was Lincoln, site of the Lincoln County War (think Billy the Kid). The whole tiny town is set up as a tourist attraction, with old historic buildings ... that were all closed. Because why would any tourists be about on a beautiful Saturday in spring? There were two tiny museums, one at each end of town, which were open, and one of them tried to entice us into paying the entrance fee by assuring us that the ticket was good for all the sites in town. Might have worked, if we hadn't already walked the length of the town peering into windows of all the closed sites. Too bad -- some of them looked interesting, particularly the general store. But we enjoyed our stroll through the town, and we got a giggle out of the tourist town being closed on Saturday -- their approach to tourism seems about as effective as Los Alamos'.

Photos from the trip are at Cave Creek and the Chiricahuas.

Tags: , ,
[ 10:04 Apr 05, 2018    More travel | permalink to this entry | ]

Mon, 26 Mar 2018

Dust Storm Burma Shave Signs

I just got back from a trip to the Chiricahuas, specifically Cave Creek. More on that later, after I've done some more photo triaging. But first, a story from the road.

[NM Burma Shave dust storm signs]

Driving on I-10 in New Mexico near the Arizona border, we saw several signs about dust storms. The first one said,

ZERO VISIBILITY IS POSSIBLE

Dave commented, "I prefer the ones that say, 'may exist'." And as if the highway department heard him, a minute or two later we passed a much more typical New Mexico road sign:

DUST STORMS MAY EXIST
New Mexico, the existential state.

But then things got more fun. We drove for a few more miles, then we passed a sign that obviously wasn't meant to stand alone:

IN A DUST STORM

"It's a Burma Shave!" we said simultaneously. (I'm not old enough to remember Burma Shave signs in real life, but I've heard stories and love the concept.) The next sign came quickly:

PULL OFF ROADWAY

"What on earth are they going to find to rhyme with 'roadway'?" I wondered. I racked my brains but couldn't come up with anything. As it turns out, neither could NMDOT. There were three more signs:

TURN VEHICLE OFF
FEET OFF BRAKES
STAY BUCKLED

"Hmph", I thought. "What an opportunity missed." But I still couldn't come up with a rhyme for "roadway". Since we were on Interstate 10, and there's not much to do on a long freeway drive, I penned an alternative:

IN A DUST STORM
PULL OFF TEN
YOU WILL LIVE
TO DRIVE AGAIN

Much better, isn't it? But one thing bothered me: you're not really supposed to pull all the way off Interstate 10, just onto the shoulder. How about:

IN A DUST STORM
PULL TO SHOULDER
YOU WILL LIVE
TO GET MUCH OLDER

I wasn't quite happy with it. I thought my next attempt was an improvement:

IN A DUST STORM
PULL TO SHOULDER
YOU MAY CRASH IF
YOU ARE BOLDER
but Dave said I should stick with "GET MUCH OLDER".

Oh, well. Even if I'm not old enough to remember real Burma Shave signs, and even if NMDOT doesn't have the vision to make their own signs rhyme, I can still have fun with the idea.

Tags: , , ,
[ 16:05 Mar 26, 2018    More travel | permalink to this entry | ]

Sat, 10 Mar 2018

Intel Galileo v2 Linux Basics

[Intel Galileo Gen2 by Mwilde2 on Wikimedia commons] Our makerspace got a donation of a bunch of Galileo gen2 boards from Intel (image from Mwilde2 on Wikimedia commons).

The Galileo line has been discontinued, so there's no support and no community, but in theory they're fairly interesting boards. You can use a Galileo in two ways: you can treat it like an Arduino, after using the Arduino IDE to download a Galileo hardware definition since they're not ATmega chips. They even have Arduino-format headers so you can plug in an Arduino shield. That works okay (once you figure out that you need to download the Galileo v2 hardware definitions, not the regular Galileo). But they run Linux under the hood, so you can also use them as a single-board Linux computer.

Serial Cable

The first question is how to talk to the board. The documentation is terrible, and web searches aren't much help because these boards were never terribly popular. Worse, the v1 boards seem to have been more widely adopted than the v2 boards, so a lot of what you find on the web doesn't apply to v2. For instance, the v1 required a special serial cable that used a headphone jack as its connector.

Some of the Intel documentation talks about how you can load a special Arduino sketch that then disables the Arduino bootloader and instead lets you use the USB cable as a serial monitor. That made me nervous: once you load that sketch, Arduino mode no longer works until you run a command on Linux to start it up again. So if the sketch doesn't work, you may have no way to talk to the Galileo. Given the state of the documentation I'd already struggled with for Arduino mode, it didn't sound like a good gamble. I thought a real serial cable sounded like a better option.

Of course, the Galileo documentation doesn't tell you what needs to plug in where for a serial cable. The board does have a standard FTDI 6-pin header on the board next to the ethernet jack, and the labels on the pins seemed to correspond to the standard pinout on my Adafruit FTDI Friend: Gnd, CTS, VCC, TX, RX, RTS. So I tried that first, using GNU screen to connect to it from Linux just like I would a Raspberry Pi with a serial cable:

screen /dev/ttyUSB0 115200

Powered up the Galileo and sure enough, I got boot messages and was able to log in as root with no password. It annoyingly forces orange text on a black background, making it especially hard to read on a light-background terminal, but hey, it's a start.

Later I tried a Raspberry Pi serial cable, with just RX (green), TX (white) and Gnd (black) -- don't use the red VCC wire since the Galileo is already getting power from its own power brick -- and that worked too. The Galileo doesn't actually need CTS or RTS. So that's good: two easy ways to talk to the board without buying specialized hardware. Funny they didn't bother to mention it in the docs.

Blinking an LED from the Command Line

Once connected, how do you do anything? Most of the Intel tutorials on Linux are useless, devoting most of their space to things like how to run Putty on Windows and no space at all to how to talk to pins. But I finally found a discussion thread with a Python example for Galileo. That's not immediately helpful since the built-in Linux doesn't have python installed (nor gcc, natch). Fortunately, the Python example used files in /sys rather than a dedicated Python library; we can access /sys files just as well from the shell.

Of course, the first task is to blink an LED on pin 13. That apparently corresponds to GPIO 7 (what are the other Arduino/GPIO correspondences? I haven't found a reference for that yet.) So you need to export that pin (which creates /sys/class/gpio/gpio7) and set its direction to out. But that's not enough: the pin still doesn't turn on when you echo 1 > /sys/class/gpio/gpio7/value. Why not? I don't know, but the Python script exports three other pins -- 46, 30, and 31 -- and echoes 0 to 30 and 31. (It does this without first setting their directions to out, and if you try that, you'll get an error, so I'm not convinced the Python script presented as the "Correct answer" would actually have worked. Be warned.)

Anyway, I ended up with these shell lines as preparation before the Galileo can actually blink:

# echo 7 > /sys/class/gpio/export
# echo out > /sys/class/gpio/gpio7/direction

# echo 46 > /sys/class/gpio/export
# echo 30 > /sys/class/gpio/export
# echo 31 > /sys/class/gpio/export

# echo out > /sys/class/gpio/gpio30/direction
# echo out > /sys/class/gpio/gpio31/direction
# echo 0 > /sys/class/gpio/gpio30/value
# echo 0 > /sys/class/gpio/gpio31/value

And now, finally, you can control the LED on pin 13 (GPIO 7):

# echo 1 > /sys/class/gpio/gpio7/value
# echo 0 > /sys/class/gpio/gpio7/value
or run a blink loop:
# while /bin/true; do
> echo 1  > /sys/class/gpio/gpio7/value
> sleep 1
> echo 0  > /sys/class/gpio/gpio7/value
> sleep 1
> done

Searching Fruitlessly for a "Real" Linux Image

All the Galileo documentation is emphatic that you should download a Linux distro and burn it to an SD card rather than using the Yocto that comes preinstalled. The preinstalled Linux apparently has no persistent storage, so not only does it not save your Linux programs, it doesn't even remember the current Arduino sketch. And it has no programming languages and only a rudimentary busybox shell. So finding and downloading a Linux distro was the next step.

Unfortunately, that mostly led to dead ends. All the official Intel docs describe different download filenames, and they all point to generic download pages that no longer include any of the filenames mentioned. Apparently Intel changed the name for its Galileo images frequently and never updated its documentation.

After forty-five minutes of searching and clicking around, I eventually found my way to Intel® IoT Developer Kit Installer Files, which includes sizable downloads with names like

From the size, I suspect those are all Linux images. But what are they and how do they differ? Do any of them still have working repositories? Which ones come with Python, with gcc, with GPIO support, with useful development libraries? Do any of them get security updates?

As far as I can tell, the only way to tell is to download each image, burn it to a card, boot from it, then explore the filesystem trying to figure out what distro it is and how to try updating it.

But by this time I'd wasted three hours and gotten no further than the shell commands to blink a single LED, and I ran out of enthusiasm. I mean, I could spend five more hours on this, try several of the Linux images, and see which one works best. Or I could spend $10 on a Raspberry Pi Zero W that has abundant documentation, libraries, books, and community howtos. Plus wi-fi, bluetooth and HDMI, none of which the Galileo has.

Arduino and Linux Living Together

So that's as far as I've gone. But I do want to note one useful thing I stumbled upon while searching for information about Linux distributions:

Starting Arduino sketch from Linux terminal shows how to run an Arduino sketch (assuming it's already compiled) from Linux:

sketch.elf /dev/ttyGS0 &

It's a fairly cool option to have. Maybe one of these days, I'll pick one of the many available distros and try it.

Tags: , , , ,
[ 13:54 Mar 10, 2018    More hardware | permalink to this entry | ]

Thu, 01 Mar 2018

Re-enabling PHP when a Debian system upgrade disables it

I updated my Debian Testing system via apt-get upgrade, as one does during the normal course of running a Debian system. The next time I went to a locally hosted website, I discovered PHP didn't work. One of my websites gave an error, due to a directive in .htaccess; another one presented pages that were full of PHP code interspersed with the HTML of the page. Ick!

In theory, Debian updates aren't supposed to change configuration files without asking first, but in practice, silent and unexpected Apache bustage is fairly common. But for this one, I couldn't find anything in a web search, so maybe this will help.

The problem turned out to be that /etc/apache2/mods-available/ includes four files:

$ ls /etc/apache2/mods-available/*php*
/etc/apache2/mods-available/php7.0.conf
/etc/apache2/mods-available/php7.0.load
/etc/apache2/mods-available/php7.2.conf
/etc/apache2/mods-available/php7.2.load

The appropriate files are supposed to be linked from there into /etc/apache2/mods-enabled. Presumably, I previously had a link to ../mods-available/php7.0.* (or perhaps 7.1?); the upgrade to PHP 7.2 must have removed that existing link without replacing it with a link to the new ../mods-available/php7.2.*.
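
(A quick way to check whether any PHP module is still enabled:

$ ls -l /etc/apache2/mods-enabled/ | grep -i php

No output means Apache isn't loading PHP at all.)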

The solution is to restore those links, either with ln -s or with the approved apache2 commands (as root, of course):

# a2enmod php7.2
# systemctl restart apache2

Whew! Easy fix, but it took a while to realize what was broken, and it would have been nice if it hadn't broken in the first place. Why is the link version-specific anyway? Why isn't there a file called /etc/apache2/mods-available/php.* for the latest version? Does PHP really change enough between minor releases to break websites? Doesn't it break a website more to disable PHP entirely than to swap in a newer version of it?

Tags: , , ,
[ 10:31 Mar 01, 2018    More linux | permalink to this entry | ]

Fri, 23 Feb 2018

PEEC Planetarium Show: "The Analemma Dilemma"

[Analemma by Giuseppe Donatiello via Wikimedia Commons] Dave and I are giving a planetarium show at PEEC tonight on the analemma.

I've been interested in the analemma for years and have written about it before, here on the blog and in the SJAA Ephemeris. But there were a lot of things I still didn't understand as well as I liked. When we signed up three months ago to give this talk, I had plenty of lead time to investigate further, uncovering lots of interesting details regarding the analemmas of other planets, the contributions of the two factors that go into the Equation of Time, why some analemmas are figure-8s while some aren't, and the supposed "moon analemmas" that have appeared on the Astronomy Picture of the Day. I added some new features to the analemma script I'd written years ago, and corresponded with an expert who'd written some great Equation of Time code for all the planets. It's been fun.

I'll write about some of what I learned when I get a chance, but meanwhile, people in the Los Alamos area can hear all about it tonight, at our PEEC show: The Analemma Dilemma, 7 pm tonight, Friday Feb 23, at the Nature Center, admission $6/adult, $4/child.

Tags: , , , ,
[ 10:23 Feb 23, 2018    More science/astro | permalink to this entry | ]

Sat, 17 Feb 2018

Multiplexing Input or Output on a Raspberry Pi Part 2: Port Expanders

In the previous article I talked about Multiplexing input/output using shift registers for a music keyboard project. I ended up with three CD4021 8-bit shift registers cascaded. It worked; but I found that I was spending all my time in the delays between polling each bit serially. I wanted a way to read those bits faster. So I ordered some I/O expander chips.

[Keyboard wired to Raspberry Pi with two MCP23017 port expanders] I/O expander, or port expander, chips take a lot of the hassle out of multiplexing. Instead of writing code to read bits serially, you can use I2C. Some chips also have built-in pullup resistors, so you don't need all those extra wires for pullups or pulldowns. There are lots of options, but two common chips are the MCP23017, which controls 16 lines, and the MCP23008 and PCF8574p, which each handle 8. I'll only discuss the MCP23017 here, because if eight is good, surely sixteen is better! But the MCP23008 is basically the same thing with fewer I/O lines.

A good tutorial to get you started is How To Use A MCP23017 I2C Port Expander With The Raspberry Pi - 2013 Part 1 along with part 2, Python and part 3, reading input.

I'm not going to try to repeat what's in those tutorials, just fill in some gaps I found. For instance, I didn't find I needed sudo for all those I2C commands in Part 1 since my user is already in the i2c group.

Using Python smbus

Part 2 of that tutorial uses Python smbus, but it doesn't really explain all the magic numbers it uses, so it wasn't obvious how to generalize it when I added a second expander chip. It uses this code:

DEVICE = 0x20 # Device address (A0-A2)
IODIRA = 0x00 # Pin direction register
OLATA  = 0x14 # Register for outputs
GPIOA  = 0x12 # Register for inputs

# Set all GPA pins as outputs by setting
# all bits of IODIRA register to 0
bus.write_byte_data(DEVICE,IODIRA,0x00)

# Set all 8 GPA output bits to 0
bus.write_byte_data(DEVICE,OLATA,0)

DEVICE is the address on the I2C bus, the one you see with i2cdetect -y 1 (20, initially).

IODIRA is the direction: when you call

bus.write_byte_data(DEVICE, IODIRA, 0x00)
you're saying that all eight bits in GPA should be used for output. Zero specifies output, one input: so if you said
bus.write_byte_data(DEVICE, IODIRA, 0x1F)
you'd be specifying that you want to use the lowest five bits for output and the upper three for input.

OLATA = 0x14 is the command to use when writing data:

bus.write_byte_data(DEVICE, OLATA, MyData)
means write data to the eight GPA pins. But what if you want to write to the eight GPB pins instead? Then you'd use
OLATB  = 0x15
bus.write_byte_data(DEVICE, OLATB, MyData)

Likewise, if you want to read input from some of the GPB bits, use

GPIOB  = 0x13
val = bus.read_byte_data(DEVICE, GPIOB)

The MCP23017 even has internal pullup resistors you can enable:

GPPUA  = 0x0c    # Pullup resistor register for GPA
GPPUB  = 0x0d    # Pullup resistor register for GPB

# inmaskB: a bitmask with a 1 for each GPB pin that should be pulled up
bus.write_byte_data(DEVICE, GPPUB, inmaskB)

Here's a full example: MCP23017.py on GitHub.

Using WiringPi

You can also talk to an MCP23017 using the WiringPi library. In that case, you don't set all the bits at once, but instead treat each bit as though it were a separate pin. That's easier to think about conceptually -- you don't have to worry about bit shifting and masking, just use pins one at a time -- but it might be slower if the library is doing a separate read each time you ask for an input bit. It's probably not the right approach to use if you're trying to check a whole keyboard's state at once.

Start by picking a base address for the pin number -- 65 is the lowest you can pick -- and initializing:

pin_base = 65
i2c_addr = 0x20

wiringpi.wiringPiSetup()
wiringpi.mcp23017Setup(pin_base, i2c_addr)

Then you can set input or output mode for each pin:

wiringpi.pinMode(pin_base, wiringpi.OUTPUT)
wiringpi.pinMode(input_pin, wiringpi.INPUT)
and then write to or read from each pin:
wiringpi.digitalWrite(pin_no, 1)
val = wiringpi.digitalRead(pin_no)

WiringPi also gives you access to the MCP23017's internal pullup resistors:

wiringpi.pullUpDnControl(input_pin, 2)    # 2 = pull-up (PUD_UP)

Here's an example in Python: MCP23017-wiringpi.py on GitHub, and one in C: MCP23017-wiringpi.c on GitHub.

Using multiple MCP23017s

But how do you cascade several MCP23017 chips?

Well, you don't actually cascade them. Since they're I2C devices, you wire them so they each have different addresses on the I2C bus, then query them individually. Happily, that's easier than keeping track of how many bits you've looped through on a shift register.

Pins 15, 16 and 17 on the chip are the address lines, labeled A0, A1 and A2. If you ground all three you get the base address of 0x20. With all three connected to VCC, it will use 0x27 (binary 111 added to the base address). So you can send commands to your first device at 0x20, then to your second one at 0x21 and so on. If you're using WiringPi, you can call mcp23017Setup(pin_base2, i2c_addr2) for your second chip.
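
Combining that with the smbus calls from earlier, reading the GPA inputs of two chips might look like this sketch (the addresses assume the second chip has A0 tied high, and the all-input, all-pullup masks are just examples):

    import smbus

    bus = smbus.SMBus(1)
    DEVICES = [ 0x20, 0x21 ]    # A0-A2 grounded; A0 high on the 2nd chip
    IODIRA = 0x00
    GPPUA  = 0x0c
    GPIOA  = 0x12

    for dev in DEVICES:
        bus.write_byte_data(dev, IODIRA, 0xFF)   # all 8 GPA pins are inputs
        bus.write_byte_data(dev, GPPUA, 0xFF)    # pullups on all GPA pins

    values = [ bus.read_byte_data(dev, GPIOA) for dev in DEVICES ]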

I had trouble getting the addresses to work initially, and it turned out the problem wasn't in my understanding of the address line wiring, but that one of my cheap Chinese breadboards had a bad power and ground bus in one quadrant. That's a good lesson for the future: when things don't work as expected, don't assume the breadboard is above suspicion.

Using two MCP23017 chips with their built-in pullup resistors simplified the wiring for my music keyboard enormously, and it made the code cleaner too. Here's the modified code: keyboard.py on GitHub.

What about the speed? It is indeed quite a bit faster than the shift register code. But it's still too laggy to use as a real music keyboard. So I'll still need to do more profiling, and maybe find a faster way of generating notes, if I want to play music on this toy.

Tags: , ,
[ 15:44 Feb 17, 2018    More hardware | permalink to this entry | ]

Tue, 13 Feb 2018

Multiplexing Input or Output on a Raspberry Pi Part 1: Shift Registers

I was scouting for parts at a thrift shop and spotted a little 23-key music keyboard. It looked like a fun Raspberry Pi project.

I was hoping it would turn out to use some common protocol like I2C, but when I dissected it, it turned out there was a ribbon cable with 32 wires coming from the keyboard. So each key is a separate pushbutton.

[23-key keyboard wired to a Raspberry Pi] A Raspberry Pi doesn't have that many GPIO pins, and neither does an Arduino Uno. An Arduino Mega does, but buying a Mega to go between the Pi and the keyboard kind of misses the point of scavenging a $3 keyboard; I might as well just buy an I2C or MIDI keyboard. So I needed some sort of I/O multiplexer that would let me read 31 keys using a lot fewer pins.

There are a bunch of different approaches to multiplexing. A lot of keyboards use a matrix approach, but that makes more sense when you're wiring up all the buttons from scratch, not starting with a pre-wired keyboard like this. The two approaches I'll discuss here are shift registers and multiplexer chips.

If you just want to get the job done in the most efficient way, you definitely want a multiplexer (port expander) chip, which I'll cover in Part 2. But for now, let's look at the old-school way: shift registers.

PISO Shift Registers

There are lots of types of shift registers, but for reading lots of inputs, you need a PISO shift register: "Parallel In, Serial Out." That means you can tell the chip to read some number -- typically 8 -- of inputs in parallel, then switch into serial mode and read all the bits one at a time.

Some PISO shift registers can cascade: you can connect a second shift register to the first one and read twice as many bits. For 23 keys I needed three 8-bit shift registers.

Two popular cascading PISO shift registers are the CD4021 and the SN74LS165. They work similarly but they're not exactly the same.

The basic principle with both the CD4021 and the SN74LS165: connect power and ground, and wire up all your inputs to the eight data pins. You'll need pullup or pulldown resistors on each input line, just like you normally would for a pushbutton; I recommend picking up a few high-value (like 1-10k) resistor arrays: you can get these in SIP (single inline package) or DIP (dual-) form factors that plug easily into a breadboard. Resistor arrays can be either independent (two pins for each resistor in the array) or bussed (one pin in the chip is a common pin, which you wire to ground for a pulldown or V+ for a pullup; each of the rest of the pins is a resistor). I find bussed networks particularly handy because they can reduce the number of wires you need to run, and with a job where you're multiplexing lots of lines, you'll find that getting the wiring straight is a big part of the job. (See the photo above to see what a snarl this was even with resistor networks.)

For the CD4021, connect three more pins: clock and data pins (labeled CLK and either Q7 or Q8 on the chip's pinout, pins 10 and 3), plus a "latch" pin (labeled M, pin 9). For the SN74LS165, you need one more pin: you need clock and data (labeled CP and Q7, pins 2 and 9), latch (labeled PL, pin 1), and clock enable (labeled CE, pin 15).

At least for the CD4021, some people recommend a 0.1 uF bypass capacitor across the power/ground connections of each CD4021.

If you need to cascade several chips with the CD4021, wire the first chip's DS (pin 11) to the second chip's Q7 (pin 3), then wire both chips' clock lines together and both chips' latch lines together; the serial data comes out of the last chip in the chain. The SN74LS165 is the same idea: the first chip's DS (pin 10) to the second chip's Q8 (pin 9), and tie the clock and latch lines together.

Once wired up, you toggle the latch to read the parallel data, then toggle it again and use the clock pin to read the series of bits. You can see the specific details in my Python scripts: CD4021.py on GitHub and SN74LS165.py on GitHub.
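
To give the flavor of it, here's a stripped-down sketch of one CD4021 read. The BCM pin numbers are placeholders, and it assumes GPIO.setmode() and GPIO.setup() have already been called; see the linked scripts for the real details:

    import time
    import RPi.GPIO as GPIO

    LATCH, CLOCK, DATA = 4, 17, 27    # example BCM pin assignments

    def read_byte():
        # Latch the eight parallel inputs, then switch to serial mode:
        GPIO.output(LATCH, GPIO.HIGH)
        time.sleep(.000025)
        GPIO.output(LATCH, GPIO.LOW)

        byte = 0
        for i in range(8):
            byte = (byte << 1) | GPIO.input(DATA)
            GPIO.output(CLOCK, GPIO.HIGH)    # clock out the next bit
            time.sleep(.000025)
            GPIO.output(CLOCK, GPIO.LOW)
        return byte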

Some References

For wiring diagrams, more background, and Arduino code for the CD4021, read Arduino ShiftIn. For the SN74LS165, read: Arduino: SN74HC165N, 74HC165 8 bit Parallel in/Serial out Shift Register, or Sparkfun: Shift Registers.

Of course, you can use a shift register for output as well as input. In that case you need a SIPO (Serial In, Parallel Out) shift register like a 74HC595. See Arduino ShiftOut: Serial to Parallel Shifting-Out with a 74HC595, or Interfacing 74HC595 Serial Shift Register with Raspberry Pi. Another, less common option is the 74HC164N: Using a SN74HC164N Shift Register With Raspberry Pi.

For input from my keyboard, initially I used three CD4021s. It basically worked, and you can see the code for it at keyboard.py (older version, for CD4021 shift registers), on GitHub.

But it turned out that looping over all those bits was slow -- I've been advised that you should wait at least 25 microseconds between bits for the CD4021, and even at 10 microseconds I found there was a significant delay between hitting the key and hearing the note. I thought it might be all the fancy numpy code to generate waveforms for the chords, but when I used the Python profiler, it said most of the program's time was taken up in time.sleep(). Fortunately, there's a faster solution than shift registers: port expanders, which I'll talk about in Multiplexing Part 2: Port Expanders.

Tags: , ,
[ 12:23 Feb 13, 2018    More hardware | permalink to this entry | ]

Fri, 02 Feb 2018

Raspberry Pi Console over USB: Configuring an Ethernet Gadget

When I work with a Raspberry Pi from anywhere other than home, I want to make sure I can do what I need to do without a network.

With a Pi model B, you can use an ethernet cable. But that doesn't work with a Pi Zero, at least not without an adapter. The lowest common denominator is a serial cable, and I always recommend that people working with headless Pis get one of these; but there are a lot of things that are difficult or impossible over a serial cable, like file transfer, X forwarding, and running any sort of browser or other network-aware application on the Pi.

Recently I learned how to configure a Pi Zero as a USB ethernet gadget, which lets you network between the Pi and your laptop using only a USB cable. It requires a bit of setup, but it's definitely worth it. (This apparently only works with Zero and Zero W, not with a Pi 3.)

The Cable

The first step is getting the cable. For a Pi Zero or Zero W, you can use a standard micro-USB cable: you probably have a bunch of them for charging phones (if you're not an Apple person) and other devices.

Set up the Pi

Setting up the Raspberry Pi end requires editing two files in /boot, which you can do either on the Pi itself, or by mounting the first SD card partition on another machine.

In /boot/config.txt add this at the end:

dtoverlay=dwc2

In /boot/cmdline.txt, at the end of the long list of options but on the same line, add a space, followed by: modules-load=dwc2,g_ether

Set a static IP address

This step is optional. In theory you're supposed to use some kind of .local address provided by Bonjour (Apple's implementation of zeroconf, originally called Rendezvous; the Linux equivalent is Avahi). That doesn't work on my Linux machine. If you don't use Bonjour, finding the Pi over the ethernet link will be much easier if you set it up to use a static IP address. And since there will be nobody else on your USB network besides the Pi and the computer on the other end of the cable, there's no reason not to have a static address: you're not going to collide with anybody else.

You could configure a static IP in /etc/network/interfaces, but that interferes with the way Raspbian handles wi-fi via wpa_supplicant and dhcpcd; you'd have USB networking, but your wi-fi wouldn't work any more.

Instead, configure your address in Raspbian via dhcpcd. Edit /etc/dhcpcd.conf and add this:

interface usb0
static ip_address=192.168.7.2
static routers=192.168.7.1
static domain_name_servers=192.168.7.1

This will tell Raspbian to use address 192.168.7.2 for its USB interface. You'll set up your other computer to use 192.168.7.1.

Now your Pi should be ready to boot with USB networking enabled. Plug in a USB cable (if it's a model A or B) or a micro USB cable (if it's a Zero), plug the other end into your computer, then power up the Pi.

Setting up a Linux machine for USB networking

The final step is to configure your local computer's USB ethernet to use 192.168.7.1.

On Linux, find the name of the USB ethernet interface. This will only show up after you've booted the Pi with the ethernet cable plugged in to both machines.

ip a
The USB interface's name will probably start with en, and it will probably be the last interface shown.

On my Debian machine, the USB network showed up as enp0s26u1u1. So I can configure it thusly (as root, of course):

ip a add 192.168.7.1/24 dev enp0s26u1u1
ip link set dev enp0s26u1u1 up
(You can also use the older ifconfig rather than ip: sudo ifconfig enp0s26u1u1 192.168.7.1 up)

You should now be able to ssh into your Raspberry Pi using the address 192.168.7.2, and you can make an appropriate entry in /etc/hosts, if you wish.
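
For instance, an /etc/hosts line like this (pick whatever hostname you like):

192.168.7.2     pizero

after which ssh pizero does the right thing.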

For a less hands-on solution, if you're using Mac or Windows, try Adafruit's USB gadget tutorial. It's possible that might also work for Linux machines running Avahi. If you're using Windows, you might prefer CircuitBasics' ethernet gadget tutorial.

Happy networking!

Update: there's now a Part 2: Routing to the Outside World and Part 3: an Automated Script.

Tags: , ,
[ 14:53 Feb 02, 2018    More linux | permalink to this entry | ]

Thu, 25 Jan 2018

Tricks for Installing a Laser Printer on Linux in CUPS

(Wherein I rant about how bad CUPS has become.)

I had to set up two new printers recently. CUPS hasn't gotten any better since the last time I bought a printer, maybe five years ago; in fact, it's gotten quite a bit worse. I'm amazed at how difficult it was to add these fairly standard laser printers, both of which I'd researched beforehand to make sure they worked with Linux.

It took me about three hours for the first printer. The second one, a few weeks later, "only" took about 45 minutes ... at which point I realized I'd better write everything down so it'll be faster if I need to do it again, or if I get the silly notion that I might want to print from another computer, like my laptop.

I used the CUPS web interface; I didn't try any of the command-line tools.

Figure out the connection type

In the CUPS web interface, after you log in and click on Administration, whether you click on Find New Printers or Add Printer, you're faced with a bunch of identical options with no clue how to choose between them. For example, Find New Printers with a Dell E310dw connected shows:

Available Printers
  • [Add This Printer] Virtual Braille BRF Printer (CUPS-BRF)
  • [Add This Printer] Dell Printer E310dw (Dell Printer E310dw)
  • [Add This Printer] Dell Printer E310dw (Dell Printer E310dw)
  • [Add This Printer] Dell Printer E310dw (Dell Printer E310dw (driverless))

What is a normal human supposed to do with this? What's the difference between the three E310dw entries and which one am I supposed to choose? (Skipping ahead: None of them.) And why is it finding a virtual Braille BRF Printer?

The only way to find out the difference is to choose one, click on Next and look carefully at the URL. For the three E310dw options above, that gives:

Again skipping ahead: none of those are actually right. Go ahead, try all three of them and see. You'll get error messages about empty PPD files. But while you're trying them, write down, for each one, the URL listed as Connection (something like the dnssd:, lpd: or ipp: URLs listed above); and note, in the driver list after you click on your manufacturer, how many entries there are for your printer model, and where they show up in the list. You'll need that information later.

Download some drivers

Muttering about the idiocy of all this -- why ship empty drivers that won't install? Why not just omit drivers if they're not available? Why use the exact same name for three different printer entries and four different driver entries? -- the next step is to download and install the manufacturer's drivers. If you're on anything but Redhat, you'll probably either need to download an RPM and unpack it, or else google for the hidden .deb files that exist on both Dell's and Brother's websites but that their own site search won't actually find for you.

It might seem like you could just grab the PPD from inside those RPM files and put it wherever CUPS is finding empty ones, but I never got that to work. Much as I dislike installing proprietary .deb files, for both printers that was the only method I found that worked. Both Dell and Brother have two different packages to install. Why two and what's the difference? I don't know.

Once you've installed the printer driver packages, you can go back to the CUPS Add Printer screen. Which hasn't gotten any clearer than before. But for both the Brother and the Dell, ipp: is the only printer protocol that worked. So try each entry until you find the one that starts with ipp:.

Set up an IP address and the correct URL

But wait, you're not done. Because CUPS gives you a URL like ipp://DELL316BAA.local:631/ipp/print, and whatever that .local thing is, it doesn't work. You'll be able to install the printer, but when you try to print to it it fails with "unable to locate printer".

(.local apparently has something to do with assuming you're running a daemon that does "Bonjour", Apple's service discovery protocol, which was originally called Rendezvous and later renamed to Bonjour; the underlying standard is known as Zeroconf. On Linux the implementation is called Avahi, but even with an Avahi daemon this .local thing didn't work for me. At least it made me realize that I had the useless Avahi daemon running, so now I can remove it.)

So go back to Add Printer and click on Internet Printing Protocol (ipp) under Other network printers and click Continue. That takes you to a screen that suggests that you want URLs like:

http://hostname:631/ipp/
http://hostname:631/ipp/port1

ipp://hostname/ipp/
ipp://hostname/ipp/port1

lpd://hostname/queue

socket://hostname
socket://hostname:9100

None of these is actually right. What these printers want -- at least, what both the Brother and the Dell wanted -- was ipp://printerhostname:631/ipp/print

printerhostname? Oh, did I forget to mention static IP? I definitely recommend that you make a static IP for your printer, or at least add it to your router's DHCP list so it always gets the same address. Then you can make an entry in /etc/hosts for printerhostname. I guess that .local thing was supposed to compensate for an address that changes all the time, which might be a nifty idea if it worked, but since it doesn't, make a static IP and use it in your ipp: URL.
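
For example, with a (made-up) static address, the /etc/hosts line and the resulting CUPS URL would look like:

192.168.1.50    printerhostname

ipp://printerhostname:631/ipp/print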

Choose a driver

Now, finally! you can move on to choosing a driver. After you pick the manufacturer, you'll be presented with a list that probably includes at least three entries for your printer model. Here's where it helps if you paid attention to how the list looked before you installed the manufacturer's drivers: if there's a new entry for your printer that wasn't there before, that's the non-empty one you want. If there are two or more new entries for your printer that weren't there before, as there were for the Dell ... shrug, all you can do is pick one and hope.

Of course, once you manage to get through configuration to "Printer successfully added", you should immediately run Maintenance->Print Test Page. You may have to power cycle the printer first since it has probably gone to sleep while you were fighting with CUPS.

All this took me maybe three hours the first time, but it only took me about 45 minutes the second time. Hopefully now that I've written this, it'll be much faster next time. At least if I don't succumb to the siren song of thinking a fairly standard laser printer ought to have a driver that's already in CUPS, like they did a decade ago, instead of always needing a download from the manufacturer.

If laser printers are this hard I don't even want to think about what it's like to install a photo printer on Linux these days.

Tags: , , ,
[ 16:19 Jan 25, 2018    More linux | permalink to this entry | ]

Sun, 21 Jan 2018

Reading Buttons from a Raspberry Pi

When you attach hardware buttons to a Raspberry Pi's GPIO pin, reading the button's value at any given instant is easy with GPIO.input(). But what if you want to watch for button changes? And how do you do that from a GUI program where the main loop is buried in some library?

Here are some examples of ways to read buttons from a Pi. For this example, I have one side of my button wired to the Raspberry Pi's GPIO 18 and the other side wired to the Pi's 3.3v pin. I'll use the Pi's internal pulldown resistor rather than adding external resistors.

The simplest way: Polling

The obvious way to monitor a button is in a loop, checking the button's value each time:

import RPi.GPIO as GPIO
import time

button_pin = 18

GPIO.setmode(GPIO.BCM)

GPIO.setup(button_pin, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)

try:
    while True:
        if GPIO.input(button_pin):
            print("ON")
        else:
            print("OFF")

        time.sleep(1)

except KeyboardInterrupt:
    print("Cleaning up")
    GPIO.cleanup()

But if you want to be doing something else while you're waiting, instead of just sleeping for a second, it's better to use edge detection.

Edge Detection

GPIO.add_event_detect will call you back whenever it sees the pin's value change. I'll define a button_handler function that prints out the value of the pin whenever it gets called:

import RPi.GPIO as GPIO
import time

def button_handler(pin):
    print("pin %s's value is %s" % (pin, GPIO.input(pin)))

if __name__ == '__main__':
    button_pin = 18

    GPIO.setmode(GPIO.BCM)

    GPIO.setup(button_pin, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)

    # events can be GPIO.RISING, GPIO.FALLING, or GPIO.BOTH
    GPIO.add_event_detect(button_pin, GPIO.BOTH,
                          callback=button_handler,
                          bouncetime=300)

    try:
        time.sleep(1000)
    except KeyboardInterrupt:
        GPIO.cleanup()

Pretty nifty. But if you try it, you'll probably find that sometimes the value is wrong. You release the switch but it says the value is 1 rather than 0. What's up?

Debounce and Delays

The problem seems to be in the way RPi.GPIO handles that bouncetime=300 parameter.

The bouncetime is there because hardware switches are noisy. As you move the switch from ON to OFF, it doesn't go cleanly all at once from 3.3 volts to 0 volts. Most switches will flicker back and forth between the two values before settling down. To see bounce in action, try the program above without the bouncetime=300. There are ways of fixing bounce in hardware, by adding a capacitor or a Schmitt trigger to the circuit; or you can "debounce" the button in software, by waiting a while after you see a change before acting on it. That's what the bouncetime parameter is for.

But apparently RPi.GPIO, when it handles bouncetime, doesn't always wait quite long enough before calling its event function. It sometimes calls button_handler while the switch is still bouncing, and the value you read might be the wrong one. Increasing bouncetime doesn't help. This seems to be a bug in the RPi.GPIO library.

You'll get more reliable results if you wait a little while before reading the pin's value:

def button_handler(pin):
    time.sleep(.01)    # Wait a while for the pin to settle
    print("pin %s's value is %s" % (pin, GPIO.input(pin)))

Why .01 seconds? Because when I tried it, .001 wasn't enough, and if I used the full bounce time, .3 seconds (corresponding to 300 millisecond bouncetime), I found that the button handler sometimes got called multiple times with the wrong value. I wish I had a better answer for the right amount of time to wait.

Incidentally, the choice of 300 milliseconds for bouncetime is arbitrary and the best value depends on the circuit. You can play around with different values (after commenting out the .01-second sleep) and see how they work with your own circuit and switch.

You might think you could solve the problem by using two handlers:

    GPIO.add_event_detect(button_pin, GPIO.RISING, callback=button_on,
                          bouncetime=bouncetime)
    GPIO.add_event_detect(button_pin, GPIO.FALLING, callback=button_off,
                          bouncetime=bouncetime)
but that apparently isn't allowed: RuntimeError: Conflicting edge detection already enabled for this GPIO channel.

Even if you look just for GPIO.RISING, you'll still get some bogus calls, because there are both rising and falling edges as the switch bounces. Detecting GPIO.BOTH, waiting a short time and checking the pin's value is the only reliable method I've found.

Edge Detection from a GUI Program

And now, the main inspiration for all of this: when you're running a program with a graphical user interface, you don't have control over the event loop. Fortunately, edge detection works fine from a GUI program. For instance, here's a simple TkInter program that monitors a button and shows its state.

import Tkinter
from RPi import GPIO
import time

class ButtonWindow:
    def __init__(self, button_pin):
        self.tkroot = Tkinter.Tk()
        self.tkroot.geometry("100x60")

        self.label = Tkinter.Label(self.tkroot, text="????",
                                   bg="black", fg="white")
        self.label.pack(padx=5, pady=10, side=Tkinter.LEFT)

        self.button_pin = button_pin
        GPIO.setmode(GPIO.BCM)

        GPIO.setup(self.button_pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

        GPIO.add_event_detect(self.button_pin, GPIO.BOTH,
                              callback=self.button_handler,
                              bouncetime=300)

    def button_handler(self, channel):
        time.sleep(.01)
        if GPIO.input(channel):
            self.label.config(text="ON")
            self.label.configure(bg="red")
        else:
            self.label.config(text="OFF")
            self.label.configure(bg="blue")

if __name__ == '__main__':
    win = ButtonWindow(18)
    win.tkroot.mainloop()

You can see slightly longer versions of these programs in my GitHub Pi Zero Book repository.

Tags: , , ,
[ 11:32 Jan 21, 2018    More hardware | permalink to this entry | ]