Shallow Thoughts : : May
Akkana's Musings on Open Source Computing and Technology, Science, and Nature.
Thu, 31 May 2018
For the last year or so the Firefox development team has been making
life ever harder for users. First they broke all the old extensions
that were based on XUL and XBL, so a lot of customizations no longer
worked. Then they made PulseAudio mandatory on Linux (bug 1345661),
so on systems
like mine that don't run Pulse, there's no way to get sound in
a web page. Forget YouTube or XenoCanto unless you keep another
browser around for that purpose.
For those reasons I'd been avoiding the Firefox upgrade, sticking to
Debian's firefox-esr ("Extended Support Release"). But when
Debian updated firefox-esr to Firefox 56 ESR late last year, performance
became unusable. Like half a minute between when you hit Page Down
and when the page actually scrolls. It was time to switch browsers.
Pale Moon
I'd been hearing about the Firefox variant Pale Moon. It's a fork of
an older Firefox, supposedly with an emphasis on openness and configurability.
I installed the Debian palemoon package. Performance was fine,
similar to Firefox before the tragic firefox-56. It was missing a few
things -- no built-in PDF viewer or Reader mode -- but I don't use
Reader mode that often, and the built-in PDF viewer is an annoyance at
least as often as it's a help. (In Firefox it's fairly random about when
it kicks in anyway, so I'm never sure whether I'll get the PDF viewer
or a Save-as prompt on any given PDF link).
For form and password autofill, for some reason Pale Moon doesn't fill
out fields until you type the first letter. For instance, if I had an
account with name "myname" and a stored password, when I loaded the
page, both fields would be empty, as if there's nothing stored for that
page. But typing an 'm' in the username field makes both username and
password fields fill in. This isn't something Firefox ever did and I
don't particularly like it, but it isn't a major problem.
Then there were some minor irritations, like the fact that profiles
were stored in a folder named ~/.moonchild\ productions/ --
super long so it messed up directory listings, and with a space in the
middle. PaleMoon was also very insistent about using new tabs for
everything, including URLs launched from other programs -- there
doesn't seem to be any way to get it to open URLs in the active tab.
I used it as my main browser for several months, and it basically worked.
But the irritations started to get to me, and I started considering
other options. The final kicker came when I saw
Pale Moon
bug 86, in which, as far as I can tell, someone working on the
PaleMoon port for OpenBSD tries to use system libraries instead of
PaleMoon's patched libraries, and is attacked for it in the bug.
Reading the exchange made me want to avoid PaleMoon for two reasons.
First, the rudeness: a toxic community that doesn't treat contributors
well isn't likely to last long or to have the resources to keep on top
of bug and security fixes. Second, the technical question: if Pale
Moon's code is so quirky that it can't use standard system libraries
and needs a bunch of custom-patched libraries, what does that say
about how maintainable it will be in the long term?
Firefox Quantum
Much has been made in the technical press of the latest Firefox,
called "Quantum", and its supposed speed. I was a bit dubious of that:
it's easy to make your program seem fast after you force everybody
into a few years of working with a program that's degraded its
performance by an order of magnitude, like Firefox had. After
firefox 56, anything would seem fast.
Still, maybe it would at least be fast enough to be usable. But I had
trepidations too. What about all those extensions that don't work any
more? What about sound not working? Could I live with that?
Debian has no current firefox package, so I downloaded the tarball
from mozilla.org, unpacked it,
made a new firefox profile and ran it.
Initial startup performance is terrible -- it takes forever to bring
up the first window, and I often get a "Firefox seems slow to start up"
message at the bottom of the screen, with a link to a page of a bunch
of completely irrelevant hints.
Still, I typically only start Firefox once a day. Once it's up,
performance is a bit laggy but a lot better than firefox-esr 56 was,
certainly usable.
I was able to find replacements for most of the really important
extensions (the ones that control things like cookies and javascript).
But sound, as predicted, didn't work. And there were several other,
worse regressions from older Firefox versions.
As it turned out, the only way to make Firefox Quantum usable for me
was to build a custom version where I could fix the regressions.
To keep articles from being way too long, I'll write about all those
issues separately:
how to build Firefox,
how to fix broken key bindings,
and how to fix the PulseAudio problem.
Tags: web, firefox
[ 16:07 May 31, 2018 | More tech/web | permalink to this entry ]
Sun, 27 May 2018
After I'd switched from the Google Maps API to Leaflet to get my
trail map working on my own website,
the next step was to move it to the Nature Center's website
to replace the broken Google Maps version.
PEEC, unfortunately for me, uses Wordpress (on the theory that this
makes it easier for volunteers and non-technical staff to add
content). I am not a Wordpress person at all; to me, systems
like Wordpress and Drupal mostly add obstacles that mean standard HTML
doesn't work right and has to be modified in nonstandard ways.
This was a case in point.
The Leaflet library for displaying maps relies on calling an
initialization function when the body of the page is loaded:
<body onLoad="javascript:init_trailmap();">
But in a Wordpress website, the <body>
tag comes
from Wordpress, so you can't edit it to add an onload.
A web search found lots of people wanting body onloads, and
they had found all sorts of elaborate ruses to get around the problem.
Most of the solutions seemed like they involved editing
site-wide Wordpress files to add special case behavior depending
on the page name. That sounded brittle, especially on a site where
I'm not the Wordpress administrator: would I have to figure this out
all over again every time Wordpress got upgraded?
But I found a trick in a Stack Overflow discussion,
Adding onload to body,
that included a tricky bit of code. There's a javascript function to
add an onload to the <body> tag; then that javascript is wrapped
inside a PHP function. Then, if I'm reading it correctly, the PHP
function registers itself with Wordpress so it will be called when the
Wordpress footer is added; at that point, the PHP will run, which will
add the javascript to the <body> tag in time for the onload event to
call the Javascript. Yikes!
But it worked.
Here's what I ended up with, in the PHP page that Wordpress was
already calling for the page:
<?php
/* Wordpress doesn't give you access to the <body> tag to add a call
* to init_trailmap(). This is a workaround to dynamically add that tag.
*/
function add_onload() {
?>
<script type="text/javascript">
document.getElementsByTagName('body')[0].onload = init_trailmap;
</script>
<?php
}
add_action( 'wp_footer', 'add_onload' );
?>
Complicated, but it's a nice trick; and it let us switch to Leaflet
and get the
PEEC
interactive Los Alamos area trail map
working again.
Tags: web, programming, javascript, php, wordpress, mapping
[ 15:49 May 27, 2018 | More tech/web | permalink to this entry ]
Thu, 24 May 2018
A while ago I wrote an
interactive
trail map page for the PEEC nature center website.
At the time, I wanted to use an open library, like OpenLayers or
Leaflet, but there were no good sources of satellite/aerial map tiles.
The only one I found didn't work because it had a big blank area
anywhere near LANL -- maybe because of the restricted airspace
around the Lab. Anyway, I figured people would want a satellite
option, so I used Google Maps instead despite its much more
frustrating API.
This week we've been working on converting the website to https.
Most things went surprisingly smoothly (though we had a lot more
absolute URLs in our pages and databases than we'd realized).
But when we got through, I discovered the trail map was broken.
I'm still not clear why, but somehow the change from http to https
made Google's API stop working.
In trying to fix the problem, I discovered that Google's map API
may soon cease to be free:
New pricing and product changes will go into effect starting June 11,
2018. For more information, check out the
Guide for
Existing Users.
That has a button for "Transition Tool" which, when you click it,
won't tell you anything about the new pricing structure until you've
already set up a billing account. Um ... no thanks, Google.
Googling for google maps api billing
led to a page headed
"Pricing
that scales to fit your needs", which has an elaborate pricing
structure listing a whole bunch of variants (I have no idea which
of these I was using), of which the first $200/month is free.
But since they insist on setting up a billing account, I'd probably
have to give them a credit card number -- which one? My personal
credit card, for a page that isn't even on my site? Does the nonprofit
nature center even have a credit card? How many of these API calls is
their site likely to get in a month, and what are the chances of going
over the limit?
It all rubbed me the wrong way, especially in the context
of "Your trail maps page that real people actually use has
broken without warning, and will be held hostage until you give us a
credit card number". This is what one gets for using a supposedly free
(as in beer) library that's not Free open source software.
So I replaced Google with the excellent open source
Leaflet library, which, as a
bonus, has much better documentation than Google Maps. (It's not that
Google's documentation is poorly written; it's that they keep changing
their APIs, but there's no way to tell the dozen or so different APIs
apart because they're all just called "Maps", so when you search for
documentation you're almost guaranteed to get something that stopped
working six years ago -- but the documentation is still there making
it look like it's still valid.)
And I was happy to discover that, in the time since I originally set
up the trailmap page, some open providers of aerial/satellite map
tiles have appeared. So we can use open source and have a
satellite view.
Our trail map is back online with Leaflet, and with any luck,
this time it will keep working.
PEEC
Los Alamos Area Trail Map.
Tags: mapping, web, programming, javascript
[ 16:13 May 24, 2018 | More programming | permalink to this entry ]
Tue, 22 May 2018
Humble Bundle has a great
bundle going right now (for another 15 minutes -- sorry, I meant to post
this earlier) on books by Nebula-winning science fiction authors,
including some old favorites of mine, and a few I'd been meaning to read.
I like Humble Bundle a lot, but one thing about them I don't like:
they make it very difficult to download books, insisting that you click
on every single link (and then do whatever "Download this link / yes, really
download, to this directory" dance your browser insists on) rather than
offering a sane option like a tarball or zip file. I guess part of their
business model includes wanting their customers to get RSI. This has
apparently been a problem for quite some time; a web search found lots
of discussions of ways of automating the downloads, most of which
apparently no longer work (none of the ones I tried did).
But a wizard friend on IRC quickly came up with a solution:
some javascript you can paste into Firefox's console. She started
with a quickie function that fetched all but a few of the files, but
then modified it for better error checking and the ability to get
different formats.
In Firefox, open the web console (Tools/Web Developer/Web Console)
and paste this in the single-line javascript text field at the bottom.
// How many milliseconds to delay between downloads.
var delay = 1000;
// Whether to use window.location or window.open.
// window.open is more convenient, but may be popup-blocked.
var window_open = false;
// The filetypes to look for, in order of preference.
// Make sure your browser won't try to preview these filetypes.
var filetypes = ['epub', 'mobi', 'pdf'];

var downloads = document.getElementsByClassName('download-buttons');
var i = 0;
var success = 0;

function download() {
    // Collect the available formats for this book.
    var children = downloads[i].children;
    var hrefs = {};
    for (var j = 0; j < children.length; j++) {
        var href = children[j].getElementsByTagName('a')[0].href;
        for (var k = 0; k < filetypes.length; k++) {
            if (href.includes(filetypes[k])) {
                hrefs[filetypes[k]] = href;
                console.log('Found ' + filetypes[k] + ': ' + href);
            }
        }
    }
    // Download the most preferred format that was found.
    var href = undefined;
    for (var k = 0; k < filetypes.length; k++) {
        if (hrefs[filetypes[k]] != undefined) {
            href = hrefs[filetypes[k]];
            break;
        }
    }
    if (href != undefined) {
        console.log('Downloading: ' + href);
        if (window_open) {
            window.open(href);
        } else {
            window.location = href;
        }
        success++;
    }
    i++;
    console.log(i + '/' + downloads.length + '; ' + success + ' successes.');
    if (i < downloads.length) {
        window.setTimeout(download, delay);
    }
}
download();
If you have "Always ask where to save files" checked in
Preferences/General, you'll still get a download dialog for each book
(but at least you don't have to click; you can hit return for each
one). Even if this is your preference, you might want to consider
changing it before downloading a bunch of Humble books.
Anyway, pretty cool! Takes the sting out of bundles, especially big
ones like this 42-book collection.
Tags: ebook, programming, web, firefox
[ 17:49 May 22, 2018 | More tech/web | permalink to this entry ]
Mon, 14 May 2018
I've been trying to learn more about weather from a friend who
used to work in the field -- in particular, New Mexico's notoriously
windy spring. One of the reasons behind our spring winds relates to
the location of the jet stream. But I couldn't find many
good references showing how the jet stream moves throughout the year.
So I decided to try to plot it myself -- if I could find the data.
Getting weather data can be surprisingly hard.
In my search, I stumbled across Geert Barentsen's excellent
Annual
variations in the jet stream (video). It wasn't quite what I wanted --
it shows the position of the jet stream in December in successive
years -- but the important thing is that he provides a Python script
on GitHub that shows how he produced his beautiful animation.
Well -- mostly. It turns out his data sources are no longer available,
and he didn't go into a lot of detail on where he got his data, only
saying that it was from the ECMWF ERA re-analysis model (with a
link that's now 404).
That led me on a merry chase through the ECMWF website trying to
figure out which part of which database I needed. ECMWF has
lots
of publicly available databases
(and even more)
and they even have Python libraries to access them;
and they even have a lot of documentation, but somehow none of
the documentation addresses questions like which database includes
which variables and how to find and fetch the data you're after,
and a lot of the sample code doesn't actually work.
I ended up using the "ERA Interim, Daily" dataset and requesting
data for only specific times and only the variables and pressure levels
I was interested in. It's a great source of data once you figure out
how to request it.
Sign up for an ECMWF API Key
Access ECMWF Public Datasets
(there's also
Access
MARS and I'm not sure what the difference is),
which has links you can click on to register for an API key.
Once you get the email with your initial password, log in using the URL
in the email, and change the password.
That gave me a "next" button that, when I clicked it, took me to
a page warning me that the page was obsolete and I should update
whatever bookmark I had used to get there.
That page also doesn't offer a link to the new page where you can
get your key details, so go here:
Your API key.
The API Key page gives you some lines you can paste into ~/.ecmwfapirc.
You'll also have to
accept the
license terms for the databases you want to use.
Install the Python API
That sets you up to use the ECMWF api. They have a
Web
API and a Python library,
plus some
other Python packages,
but after struggling with a bunch of Magics tutorial examples that mostly
crashed or couldn't find data, I decided I was better off sticking to
the basic Python downloader API and plotting the results with Matplotlib.
The Python data-fetching API works well. To install it,
activate your preferred Python virtualenv or whatever you use
for pip packages, then run the pip command shown at
Web
API Downloads (under "Click here to see the
installation/update instructions...").
As always with pip packages, you'll have to decide on a Python version
(they support both 2 and 3) and whether to use a virtualenv, the
much-disrecommended sudo pip, pip3, etc. I used pip3 in a virtualenv
and it worked fine.
Specify a dataset and parameters
That's great, but how do you know which dataset you want to load?
There doesn't seem to be anything that just lists which datasets have
which variables. The only way I found is to go to the Web API page
for a particular dataset to see the form where you can request
different variables. For instance, I ended up using the
"interim-full-daily"
database, where you can choose date ranges and lists of parameters.
There are more choices in the sidebar: for instance, clicking on
"Pressure levels" lets you choose from a list of barometric pressures
ranging from 1000 all the way down to 1. No units are specified, but
they're millibars, also known as hectopascals (hPa): 1000 is more or
less the pressure at ground level, 250 is roughly where the jet stream
is, and Los Alamos is roughly at 775 hPa (you can find charts of
pressure vs. altitude on the web).
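Rather than looking up one of those charts, you can get a rough conversion from the standard-atmosphere barometric formula. This is my own addition, not anything from the ECMWF docs, and it's only an approximation (real pressure-altitude relationships vary with weather), but it reproduces the landmarks mentioned above:

```python
def pressure_to_altitude(p_hpa, p0=1013.25):
    """Approximate altitude in meters for a pressure level,
    using the standard-atmosphere barometric formula."""
    return 44330.0 * (1.0 - (p_hpa / p0) ** (1.0 / 5.255))

# 775 hPa comes out near Los Alamos's elevation (roughly 2200 m),
# and 250 hPa lands around 10 km, jet stream territory.
for level in (1000, 775, 250):
    print('%4d hPa ~ %5.0f m' % (level, pressure_to_altitude(level)))
```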
When you go to any of the Web API pages, it will show you a dialog suggesting
you read about
Data
retrieval efficiency, which you should definitely do if you're
expecting to request a lot of data, then click on the details for
the database you're using to find out how data is grouped in "tape files".
For instance, in the ERA-interim database, tapes are grouped by date,
so if you're requesting multiple parameters for multiple months,
request all the parameters for a given month together, rather than
making one request for level 250, another request for level 1000, etc.
Once you've checked the boxes for the data you want, you can fetch the
data via the web interface, or click on "View the MARS request" to get
parameters you can plug into a Python script.
If you choose the Python script option as I did, you can start with the
basic data retrieval example.
Use the second example, the one that uses 'format': "netcdf",
which will (eventually) give you a file ending in .nc.
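As a sketch of what such a script ends up looking like -- the specific parameter values here are my reconstruction, not copied from the post; 131.128 and 132.128 are the standard ERA-Interim GRIB codes for the U and V wind components -- the request is just a dict handed to the ECMWF Python downloader. Note that it covers a whole month in one request, per the tape-grouping advice above:

```python
def build_request(year, month, lastday, levels, target):
    """Build one ERA-Interim MARS request dict covering a whole month,
    since ERA-Interim tape files are grouped by date."""
    date_range = '%d-%02d-01/to/%d-%02d-%02d' % (year, month,
                                                 year, month, lastday)
    return {
        'class': 'ei',
        'dataset': 'interim',
        'stream': 'oper',
        'type': 'an',
        'levtype': 'pl',                 # pressure levels
        'levelist': '/'.join(str(lev) for lev in levels),
        'param': '131.128/132.128',      # U and V wind components
        'date': date_range,
        'time': '12:00:00',
        'grid': '0.75/0.75',
        'format': 'netcdf',
        'target': target,
    }

def fetch(request):
    """Submit the request. Needs the ecmwf-api-client pip package
    and an API key in ~/.ecmwfapirc."""
    from ecmwfapi import ECMWFDataServer
    ECMWFDataServer().retrieve(request)

# e.g.: fetch(build_request(2015, 4, 30, [250, 775, 1000],
#                           'jetstream-201504.nc'))
```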
Requesting a specific area
You can request only a limited area,
"area": "75/-20/10/60",
but they're not very forthcoming on the syntax of that, and it's
particularly confusing since "75/-20/10/60" supposedly means "Europe".
It's hard to figure out how those numbers as longitudes and latitudes
correspond to Europe, which doesn't go down to 10 degrees latitude,
let alone -20 degrees. The
Post-processing
keywords page gives more information: it's North/West/South/East,
which still makes no sense for Europe,
until you expand the
Area examples tab on that
page and find out that by "Europe" they mean Europe plus Saudi Arabia and
most of North Africa.
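Since the order is North/West/South/East, a tiny helper (my own, not from ECMWF) makes the intent explicit instead of leaving four bare numbers in the request:

```python
def area_string(north, west, south, east):
    """Build an ECMWF 'area' request value. The order is N/W/S/E,
    with longitudes negative west of Greenwich."""
    return '%g/%g/%g/%g' % (north, west, south, east)

# The docs' generously-defined "Europe": 75N, 20W down to 10N, 60E.
print(area_string(75, -20, 10, 60))
```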
Using the data: What's in it?
Once you have the data file, assuming you requested data in netcdf format,
you can parse the .nc file with the netCDF4 Python module -- available
as Debian package "python3-netcdf4", or via pip -- to read that file:
import netCDF4
data = netCDF4.Dataset('filename.nc')
But what's in that Dataset?
Try running the preceding two lines in the
interactive Python shell, then:
>>> for key in data.variables:
... print(key)
...
longitude
latitude
level
time
w
vo
u
v
You can find out more about a parameter, like its units, type,
and shape (array dimensions). Let's look at "level":
>>> data['level']
<class 'netCDF4._netCDF4.Variable'>
int32 level(level)
units: millibars
long_name: pressure_level
unlimited dimensions:
current shape = (3,)
filling on, default _FillValue of -2147483647 used
>>> data['level'][:]
array([ 250, 775, 1000], dtype=int32)
>>> type(data['level'][:])
<class 'numpy.ndarray'>
Levels has shape (3,): it's a one-dimensional array with three
elements: 250, 775 and 1000 -- the three levels I requested from
the web API and in my Python script. The units are millibars.
More complicated variables
How about something more complicated? u and v are the two
components of wind speed.
>>> data['u']
<class 'netCDF4._netCDF4.Variable'>
int16 u(time, level, latitude, longitude)
scale_factor: 0.002161405503194121
add_offset: 30.095301438361684
_FillValue: -32767
missing_value: -32767
units: m s**-1
long_name: U component of wind
standard_name: eastward_wind
unlimited dimensions: time
current shape = (30, 3, 241, 480)
filling on
u (v is the same) has a shape of (30, 3, 241, 480): it's a
4-dimensional array. Why? Looking at the numbers in the shape gives a clue.
The second dimension has 3 rows: they correspond to the three levels,
because there's a wind speed at every level. The first dimension has
30 rows: it corresponds to the dates I requested (the month of April 2015).
I can verify that:
>>> data['time'].shape
(30,)
Sure enough, there are 30 times, so that's what the first dimension
of u and v corresponds to. The other dimensions, presumably, are
latitude and longitude. Let's check that:
>>> data['longitude'].shape
(480,)
>>> data['latitude'].shape
(241,)
Sure enough! So, although it would be nice if it actually told you
which dimension corresponded with which parameter, you can probably
figure it out. If you're not sure, print the shapes of all the
variables and work out which dimensions correspond to what:
>>> for key in data.variables:
... print(key, data[key].shape)
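The plotting step isn't shown in this excerpt, but the natural next move is combining u and v into a wind-speed magnitude (netCDF4 applies the scale_factor and add_offset automatically, so the arrays come back as real m/s values). A minimal sketch, with the indexing into `data` assumed from the shapes above:

```python
import numpy as np

def wind_speed(u, v):
    """Wind-speed magnitude from the eastward (u) and northward (v)
    components, element-wise over arrays of any matching shape."""
    return np.sqrt(u**2 + v**2)

# With the arrays above this would be, e.g., the jet-stream level
# (250 hPa is index 0 of data['level']) on the first day:
#     speed = wind_speed(data['u'][0, 0], data['v'][0, 0])
```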
Iterating over times
data['time'] has all the times for which you have data
(30 data points for my initial test of the days in April 2015).
The easiest way to plot anything is to iterate over those values:
timeunits = data['time'].units
cal = data['time'].calendar
for i, t in enumerate(data['time']):
    thedate = netCDF4.num2date(t, units=timeunits, calendar=cal)
Then you can use thedate like a datetime, calling
thedate.strftime or whatever you need.
So that's how to access your data. All that's left is to plot it --
and in this case I had Geert Barentsen's script to start with, so I
just modified it a little to work with slightly changed data format,
and then added some argument parsing and runtime options.
Converting to Video
I already wrote about how to take the still images the program produces
and turn them into a video:
Making
Videos (that work in Firefox) from a Series of Images.
However, it turns out ffmpeg can't handle files that are named with
timestamps, like jetstream-2017-06-14-250.png. It can only
handle one sequential integer. So I thought, what if I removed the
dashes from the name, and used names like jetstream-20170614-250.png
with %8d? No dice: ffmpeg also has the limitation that the
integer can have at most four digits.
So I had to rename my images. A shell command works: I ran this in
zsh but I think it should work in bash too.
cd outdir
mkdir moviedir
i=1
for fil in *.png; do
    newname=$(printf "%04d.png" $i)
    ln -s ../$fil moviedir/$newname
    i=$((i+1))
done
ffmpeg -i moviedir/%04d.png -filter:v "setpts=2.5*PTS" -pix_fmt yuv420p jetstream.mp4
The
-filter:v "setpts=2.5*PTS"
controls the delay between
frames -- I'm not clear on the units, but larger numbers have more delay,
and I think it's a multiplier,
so this is 2.5 times slower than the default.
When I uploaded the video to YouTube, I got a warning,
"Your videos will process faster if you encode into a streamable file format."
I then spent half a day trying to find a combination of ffmpeg arguments
that avoided that warning, and eventually gave up. As far as I can tell,
the warning only affects the 20 seconds or so of processing that happens
after the 5-10 minutes it takes to upload the video, so I'm not sure
it's terribly important.
Results
Here's a
video of the jet stream from
2012 to early 2018, and an earlier effort with a
much longer 6.0x delay.
And here's the script, updated from the original Barentsen script
and with a bunch of command-line options to let you plot different
collections of data:
jetstream.py on GitHub.
Tags: programming, python, data, weather
[ 14:18 May 14, 2018 | More programming | permalink to this entry ]
Fri, 11 May 2018
I was working on a weather project to make animated maps of the
jet stream. Getting and plotting wind data is a much longer article
(coming soon), but once I had all the images plotted, I wanted to
combine them all into a time-lapse video showing how the jet stream moves.
Like most projects, it's simple once you find the right recipe.
If your images are named outdir/filename00.png, outdir/filename01.png,
outdir/filename02.png and so on,
you can turn them into an MPEG4 video with ffmpeg:
ffmpeg -i outdir/filename%02d.png -filter:v "setpts=6.0*PTS" -pix_fmt yuv420p jetstream.mp4
%02d, for non-programmers, just means a 2-digit decimal integer
with leading zeros. If the filenames just use 1, 2, 3, ... 10, 11
without leading zeros, use %2d instead; if they have three digits,
use %03d or %3d, and so on.
Update:
If your first photo isn't numbered 00, you can set a
-start_number — but it must come before the -i and
filename template. For instance:
ffmpeg -start_number 17 -i outdir/filename%02d.png -filter:v "setpts=6.0*PTS" -pix_fmt yuv420p jetstream.mp4
That "setpts=6.0*PTS"
controls the speed of the playback,
by adding or removing frames.
PTS stands for "Presentation TimeStamps",
which apparently is a measure of how far along a frame is in the file;
setpts=6.0*PTS
means for each frame, figure out how far
it would have been in the file (PTS) and multiply that by 6. So if
a frame would normally have been at timestamp 10 seconds, now it will be at
60 seconds, and the video will be six times longer and six times slower.
And yes, you can also use values less than one to speed a video up.
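The arithmetic is simple enough to sanity-check; here's a quick sketch (my own, using the thirty-frame test mentioned below and ffmpeg's default frame rate):

```python
def output_duration(n_frames, fps=25.0, setpts_factor=1.0):
    """Seconds of output video after a setpts=factor*PTS filter:
    every frame's presentation timestamp is multiplied by the factor."""
    return n_frames / fps * setpts_factor

# 30 frames at ffmpeg's default 25 fps is 1.2 seconds of video;
# setpts=6.0*PTS stretches that to 7.2 seconds, six times slower.
slow = output_duration(30, setpts_factor=6.0)
```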
You can also change a video's playback speed by changing the frame
rate, either with the -r option, e.g. -r 30, or with the fps
filter, -filter:v fps=30. The default frame rate is 25.
You can examine values like the frame rate, number of frames and duration
of a video file with:
ffprobe -select_streams v -show_streams filename
or with the mediainfo program (not part of ffmpeg).
The -pix_fmt yuv420p
turned out to be the tricky part.
The recipes I found online didn't include that part, but without it,
Firefox claims "Video can't be played because the file is corrupt",
even though most other browsers can play it just fine.
If you open Firefox's web console and reload, it offers the additional
information
"Details: mozilla::SupportChecker::AddMediaFormatChecker(const mozilla::TrackInfo&)::<lambda()>: Decoder may not have the capability to handle the requested video format with YUV444 chroma subsampling.":
Adding -pix_fmt yuv420p
cured the problem and made the
video compatible with Firefox, though at first I had problems with
ffmpeg complaining "height not divisible by 2 (1980x1113)" (even though
the height of the images was in fact divisible by 2).
I'm not sure what was wrong; later ffmpeg stopped giving me that error
message and converted the video. It may depend on where in the ffmpeg
command you put the pix_fmt flag or what other flags are
present. ffmpeg arguments are a mystery to me.
Of course, if you're only making something to be uploaded to youtube,
the Firefox limitation probably doesn't matter and you may not need
the -pix_fmt yuv420p
argument.
Animated GIFs
Making an animated GIF is easier. You can use ImageMagick's convert:
convert -delay 30 -loop 0 *.png jetstream.gif
The GIF will be a lot larger, though. For my initial test of thirty
1000 x 500 images, the MP4 was 760K while the GIF was 4.2M.
Tags: web, video, time-lapse, firefox
[ 09:59 May 11, 2018 | More linux | permalink to this entry ]
Mon, 07 May 2018
As I came home from the market and prepared to turn into the driveway
I had to stop for an obstacle: a bullsnake who had
stretched himself across the road.
I pulled off, got out of the car and ran back. A pickup truck was
coming around the bend and I was afraid he would run over the snake,
but he stopped and rolled down the window to help. White Rock people
are like that, even the ones in pickup trucks.
The snake was pugnacious, not your usual mellow bullsnake. He coiled
up and started hissing madly.
The truck driver said "Aw, c'mon, you're not fooling anybody. We
know you're not a rattlesnake," but the snake wasn't listening.
(I guess that's understandable, since they have no ears.)
I tried to loom in front of him and stamp on the ground to herd him
off the road, but he wasn't having any of it. He just kept coiling and
hissing, and struck at me when I got a little closer.
I moved my hand slowly around behind his head and gently took hold of
his neck -- like what you see people do with rattlesnakes, though I'd
never try that with a venomous snake without a lot of practice and
training. With a bullsnake, even if they bite you it's not a big deal.
When I was a teenager I had a pet gopher snake (a fringe benefit of
having a mother who worked on wildlife documentaries), and though
"Goph" was quite tame, he once accidentally bit me when I was
replacing his water dish after feeding him and he mistook my hand for
a mouse. (He seemed acutely embarrassed, if such an emotion can be
attributed to a reptile; he let go immediately and retreated to sulk
in the far corner of his aquarium.) Anyway, it didn't hurt; their
teeth are tiny and incredibly sharp, and it feels like the pinprick
from a finger blood test at the doctor's office.
Anyway, the bullsnake today didn't bite. But after I moved him off the
road to a nice warm basalt rock in the yard, he stayed agitated, hissing
loudly, coiling and beating his tail to mimic a rattlesnake. He didn't
look like he was going to run and hide any time soon, so I ran inside
to grab a camera.
In the photos, I thought it was interesting how he held his mouth
when he hissed. Dave thought it looked like W.C. Fields.
I hadn't had a chance to see that up close before: my pet snake never
had occasion to hiss, and I haven't often seen wild bullsnakes be
so pugnacious either -- certainly not for long enough that I've been
able to photograph it. You can also see how he puffs up his neck.
I now have a new appreciation of the term "hissy fit".
Tags: snake, nature
[ 15:06 May 07, 2018 | More nature | permalink to this entry ]