I record track files in OsmAnd
for most of my hikes. But about a quarter of the
time, when I get back to the car, I forget to turn tracking off.
So I end up with a track file that shows the hike plus at least
part of the car trip afterward. Not very useful for purposes like
calculating mileage.
My PyTopo can do simple
track edits, like splitting the track into two or deleting from some
point to the end. But most of the time it's easier just to edit the
GPX file with a text editor.
Eesh, edit GPX directly?? Sounds like something you oughtn't
do, doesn't it? But actually, GPX (a form of
XML) is human readable
and editable. And this specific case, separating the walking from the
car trip in an OsmAnd track file, is particularly easy, because OsmAnd
helpfully adds a speed to every point it saves.
These instructions seem long, but really, once you've done it once,
you'll realize that it's all very straightforward; explaining the steps
is harder than doing them.
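If you'd rather automate it than open an editor, the same trick is easy to script. Here's a rough Python sketch that truncates a track at the first point faster than walking pace; it assumes the speed shows up as a <speed> element under each trackpoint, so check where your OsmAnd version actually puts it:

```python
# Truncate a GPX track at the first driving-speed point.
# A rough sketch: assumes each <trkpt> carries a <speed>
# element somewhere beneath it, as my OsmAnd files do.
import xml.etree.ElementTree as ET

WALKING_MAX = 3.0   # meters/second; generous, to allow downhill jogging

def truncate_at_driving(gpx_string):
    """Return GPX with the first driving-speed point, and everything
       after it in that segment, removed."""
    root = ET.fromstring(gpx_string)
    for elem in root.iter():
        if elem.tag.split('}')[-1] != 'trkseg':
            continue
        points = [p for p in elem if p.tag.split('}')[-1] == 'trkpt']
        for i, pt in enumerate(points):
            speeds = [e for e in pt.iter()
                      if e.tag.split('}')[-1] == 'speed']
            if speeds and float(speeds[0].text) > WALKING_MAX:
                for later in points[i:]:
                    elem.remove(later)
                break
    return ET.tostring(root, encoding='unicode')
```

The tag-name splitting is just to cope with GPX namespaces without hardcoding them.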
The article describes the boot sequence, from grub to kernel loading to init
scripts to starting X. Part I covers the classic "SysV Init" model
still used to some extent by every distro; part II will cover
Upstart, the version that's gradually working its way into some of
the newer Linux releases.
The third and final part of my series on configuring Ubuntu's new grub2 boot menu.
I translate a couple of commonly-seen error messages, but most of
the article is devoted to multi-boot machines. If you have several
different operating systems or Linux distros installed on separate
disk partitions, grub2 has some unpleasant surprises, so see my
article for some (unfortunately very hacky) workarounds for its
limitations.
Part 2 of my 3-parter on configuring Ubuntu's new grub2 boot menu
covers cleaning up all the bogus menu entries (if you have a
multiple-boot system) and some tricks on setting color and image
backgrounds:
I have another PEEC Planetarium talk coming up in a few weeks:
a talk on the
summer solstice,
co-presented with Chick Keller on Fri, Jun 18 at 7pm MDT.
I'm letting Chick do most of the talking about archaeoastronomy
since he knows a lot more about it than I do, while I'll be talking
about the celestial dynamics -- what is a solstice, what is the sun
doing in our sky and why would you care, and some weirdnesses relating
to sunrise and sunset times and the length of the day.
And of course I'll be talking about the analemma, because
just try to stop me talking about analemmas whenever the topic
of the sun's motion comes up.
But besides the analemma, I need a lot of graphics of the earth
showing the terminator, the dividing line between day and night.
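The geometry behind a terminator graphic is simpler than you might expect: the terminator is just the great circle 90 degrees from the subsolar point. Here's a rough Python sketch of it, using a crude declination approximation rather than proper ephemeris code:

```python
# Rough terminator geometry. The terminator is the great circle
# 90 degrees from the subsolar point, so its latitude at each
# longitude follows from the solar declination. The declination
# formula here is a crude approximation, good to about a degree.
import math

def solar_declination(day_of_year):
    """Approximate solar declination in degrees."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0
                                          * (day_of_year + 10)))

def terminator_latitude(lon_deg, subsolar_lon_deg, decl_deg):
    """Latitude of the day/night line at a given longitude.
       Undefined at the equinoxes (decl = 0), when the
       terminator runs pole to pole."""
    h = math.radians(lon_deg - subsolar_lon_deg)  # hour angle
    decl = math.radians(decl_deg)
    return math.degrees(math.atan(-math.cos(h) / math.tan(decl)))
```

Sanity check: at the June solstice (declination about +23.44), the terminator at the subsolar longitude sits near latitude -66.56, the Antarctic circle, which is why the far south gets no daylight then.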
A couple of years ago, Dave and I acquired an H-alpha solar scope.
Neither of us had been much of a solar observer.
We'd only had white-light filters: filters you put over the
front of a regular telescope to block out most of the sun's light
so you can see sunspots.
H-alpha filters are a whole different beast:
you can see prominences, those huge arcs of fire that reach out into
space for tens of thousands of miles, many times the size of the Earth.
And you can also see all sorts of interesting flares and granulation
on the surface of the sun, something only barely hinted at in
white-light images.
I've been working on a short talk on
Fibonacci numbers
for a friend's math class.
Back when I was in high school, I did a research project on Fibonacci
numbers (their use in planning the growth of a city's power stations),
and for a while I had to explain the project endlessly, so I thought I
remembered pretty well what sorts of visuals I'd need -- some pine
cones, maybe some flower petals or branching plants, graphics of the
golden ratio and the Fibonacci/Golden Spiral, and some nice visuals
of natural wonders like the chambered nautilus and how that all fits
in with the Fibonacci sequence.
I collected my pine cones, took some pictures and made some slides,
then it was time to get to work on the golden spirals.
I wrote a little GIMP script-fu to generate a Fibonacci spiral and
set of boxes, then I went looking for a Chambered Nautilus image
on which I could superimpose the spiral, and found a pretty good
one by Chris 73 at Wikipedia.
I pasted it into GIMP, then pasted my golden spiral on top of it,
activated the Scale tool (Keep Aspect Ratio) and started scaling.
And I just couldn't get them to match!
No matter how I scaled or translated the spiral, it just didn't expand
at the same rate as the nautilus shell.
So I called up Google Images and tried a few different nautilus images
-- with exactly the same result. I just couldn't get my Fibonacci
spiral to come close.
Well, this Science News article entitled
Sea
Shell Spirals says I'm not the only one. In 1999, retired
mathematician Clement Falbo measured a series of nautilus shells
at San Francisco's California Academy of Sciences, and he found
that while they were indeed logarithmic spirals (like the golden
spiral), their ratios ranged from about 1.24 to 1.43, with an average
ratio of about 1.33 to 1, not even close to the 1.618... ratio
of the Golden Spiral. In 2002, John Sharp
noticed
the same problem (that link doesn't work for me, but maybe you'll
have better luck).
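The mismatch is easy to see in the numbers. Here's a little Python comparing the two growth rates (for the comparison I'm treating Falbo's average 1.33 as a per-quarter-turn ratio, the same way the golden spiral grows by phi each quarter turn):

```python
# The numbers behind the mismatch: successive Fibonacci ratios
# converge to the golden ratio, phi = 1.618..., while the measured
# nautilus ratio averages about 1.33. Over a few turns the two
# spirals diverge badly, which is why no overlay ever fits.
import math

def fib_ratio(n):
    """Ratio of the nth to the (n-1)th Fibonacci number."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b / a

phi = (1 + math.sqrt(5)) / 2

# Radius growth after 4 whole turns (16 quarter-turns):
golden_radius = phi ** 16      # roughly 2200
nautilus_radius = 1.33 ** 16   # roughly 96
```

No amount of scaling can hide a factor-of-twenty difference in how fast the two curves open up.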
As the Science News article points out,
"Nonetheless, many accounts still insist that a cross section of
nautilus shell shows a growth pattern of chambers governed by the
golden ratio."
No kidding! Google on fibonacci nautilus and you'll get a
boatload of pages using the chambered nautilus as an illustration
of the Fibonacci (or Golden) spiral in nature.
It's not just the web, though -- I've been reading about nautili
as Fibonacci examples for decades in books and magazines.
All these writers just pass on what they've read elsewhere ...
just like I did for all those years, never actually measuring
a nautilus shell or trying to inscribe a golden spiral on one.
Now do a Google image search for the same terms, and you'll get
lots of beautiful pictures of sectioned nautilus shells.
You'll also get quite a few pictures of fibonacci spirals.
But none of those beautiful pictures will actually have both
the nautilus and the spiral in the same image.
And now I know why -- because they don't match!
(Happily, this actually may be a better subject for my talk than
the nautilus illustration I'd originally planned. "Don't believe
everything you read" is always a good lesson for high schoolers ...
and it's just as relevant for us adults as well.)
Yesterday afternoon, I stepped out the back door and walked a few
steps along the rocky path when I noticed movement at my feet.
It was a hummingbird, hidden in the rocks, and I'd almost stepped on it.
Closer examination showed that the hummer was holding its left wing
out straight -- not a good sign. He might have flown into a window,
but there's no way to know for sure how this little guy got injured.
The first order of business was to get him off the path so
he wouldn't get stepped on.
The local bird community has gotten me using
eBird.
It's sort of social networking for birders -- you can report sightings,
keep track of what birds you've seen where, and see what other people
are seeing in your area.
The only problem is the user interface for that last part. The data is
all there, but asking a question like "Where in this county have people
seen broad-tailed hummingbirds so far this spring?" is a lengthy
process, involving clicking through many screens and typing the
county name (not even a zip code -- you have to type the name).
If you want some region smaller than the county, good luck.
I found myself wanting that so often that I wrote an entry page for it.
My Bird Maps page
is meant to be used as a smart bookmark (also known as bookmarklets
or keyword bookmarks),
so you can type birdmap hummingbird or birdmap golden eagle
in your location bar as a quick way of searching for a species.
It reads the bird name you've typed, looks through a list of
species, and if only one bird matches, takes you
straight to the eBird map to show you where people have reported
the bird so far this year.
If there's more than one match -- for instance, for birdmap hummingbird
or birdmap sparrow -- it will show you a list of possible matches,
and you can click on one to go to the map.
Like every Javascript project, it was both fun and annoying to write.
Though the hardest part wasn't programming; it was getting a list of
the nonstandard 4-letter bird codes eBird uses. I had to scrape one
of their HTML pages for that.
But it was worth it: I'm finding the page quite useful.
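The matching logic itself is nothing fancy. The real page is JavaScript, but the idea looks like this, sketched in Python, with a made-up species list standing in for the scraped eBird codes and an illustrative map URL:

```python
# The birdmap matching idea, sketched in Python (the real page
# is JavaScript). SPECIES stands in for the scraped eBird list;
# both the codes and the map URL format here are illustrative.
SPECIES = {
    "common raven": "comrav",
    "broad-tailed hummingbird": "brthum",
    "black-chinned hummingbird": "bkchum",
    "chipping sparrow": "chispa",
    "house sparrow": "houspa",
}

def match_birds(query):
    """Return (name, code) pairs whose name contains the query."""
    q = query.strip().lower()
    return [(name, code) for name, code in SPECIES.items()
            if q in name]

def birdmap_url(query):
    """Go straight to the eBird map if the match is unique;
       otherwise return None so the page can show choices."""
    matches = match_birds(query)
    if len(matches) == 1:
        return "https://ebird.org/map/" + matches[0][1]
    return None
```

So "common raven" goes straight to a map, while "sparrow" or "hummingbird" falls through to a list of candidates.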
Firefox has made it increasingly difficult with every release to make
smart bookmarks. There are a few extensions, such as "Add Bookmark Here",
which make it a little easier. But without any extensions installed,
here's how you do it in Firefox 36:
First, go to the birdmap page
(or whatever page you want to smart-bookmark) and click on the star button
that makes a bookmark. Then click on the bookmarks menu icon next to the
star, and in the menu, choose Show all bookmarks.
In the dialog that comes up, find the bookmark you just made (maybe in
Unsorted bookmarks?) and click on it.
Click the More button at the bottom of the dialog.
(Click on the image at right for a full-sized screenshot.)
Now you should see a Keyword entry under the Tags entry
in the lower right of that dialog.
Change the Location to
http://shallowsky.com/birdmap.html?bird=%s.
Then give it a Keyword of birdmap
(or anything else you want to call it).
Close the dialog.
Now, you should be able to go to your location bar and type:
birdmap common raven
or
birdmap sparrow
and it will take you to my birdmap page. If the bird name specifies
just one bird, like common raven, you'll go straight from there to
the eBird map. If there are lots of possible matches, as with sparrow,
you'll stay on the birdmap page so you can choose which sparrow you want.
How to change the default location
If you're not in Los Alamos, you probably want a way to set your own
coordinates. Fortunately, you can; but first you have to get those
coordinates.
Here's the fastest way I've found to get coordinates for a region on eBird:
Click "Explore a Region"
Type in your region and hit Enter
Click on the map in the upper right
Then look at the URL: a part of it should look something like this:
env.minX=-122.202087&env.minY=36.89291&env.maxX=-121.208778&env.maxY=37.484802
If the map isn't right where you want it, try editing the URL, hitting
Enter for each change, and watch the map reload until it points where
you want it to. Then copy the four parameters and add them to your
smart bookmark, like this:
http://shallowsky.com/birdmap.html?bird=%s&minX=-122.202087&minY=36.89291&maxX=-121.208778&maxY=37.484802
Note that all of the "env." prefixes have been removed.
The only catch is that I got my list of 4-letter eBird codes from an
eBird page for New Mexico.
I haven't found any way of getting the list for the entire US.
So if you want a bird that doesn't occur in New Mexico, my page might
not find it. If you like birdmap but want to use it in a different
state, contact me and tell me which state
you need, and I'll add those birds.
This year's drought was fierce. We only had two substantial rainfalls
all summer. And here in piñon-juniper country, that means the
piñon trees were under heavy attack by piñon Ips bark beetles,
Ips confusus.
Piñon bark beetles are apparently around all the time, but
normally, the trees can fight them off by producing extra sap.
But when it gets dry, drought-stressed trees can't make enough sap,
the beetles proliferate, and trees start dying.
Bark beetles are apparently the biggest known killer of mature
piñon trees.
We're aware of this, and we water the piñons we can reach,
and cross our fingers for the ones that are farther from the house.
But this year we lost four trees -- all of them close enough to the
house that we'd been watering them every three or four weeks.
Sometimes it seems like yellow is the color of fall.
First, in late summer, a wide variety of sunflower appear:
at the house we get mostly the ones with the uninspiring name of
cowpen daisy (Verbesina encelioides). The flower is much
prettier than its name would suggest.
Then the snakeweed (Gutierrezia sarothrae) and
chamisa (Ericameria nauseosa) take over,
with their carpets of tiny yellow flowers.
(More unfortunate names.
Chamisa has a mildly unpleasant smell when it's blooming,
which presumably explains its unfortunate scientific name;
I don't know why snakeweed is called that.)
At the New Mexico GNU & Linux User Group,
currently meeting virtually on Jitsi, someone expressed interest in scraping
websites. Since I do quite a bit of scraping, I offered to give
a tutorial on scraping with the Python module
BeautifulSoup.
"What about selenium?" he asked. Sorry, I said, I've never needed
selenium enough to figure it out.
I wrote at length about my explorations into
selenium
to fetch stories from the New York Times (as a subscriber).
But I mentioned in Part III that there was a much easier way
to fetch those stories, as long as the stories didn't need JavaScript.
That way is to use normal file fetching (using urllib or requests),
but with a CookieJar object containing the cookies from a Firefox
session where I'd logged in.
When we left off, I was learning
the
basics of selenium in order to fetch stories (as a subscriber)
from the New York Times. Fetching stories was working properly,
and all that remained was to put it in an automated script, then
move it to a server where it could run automatically without my
desktop machine needing to be on.
Unfortunately, that turned out to be the hardest part of the problem.
Part III: Handling Errors and Timeouts (this article)
At the end of Part II, selenium was running on a server with the
minimal number of X and GTK libraries installed.
But now that it can run unattended, there's another problem:
there are all kinds of ways this can fail,
and your script needs to handle those errors somehow.
Before diving in, I should mention that for my original goal,
fetching stories from the NY Times as a subscriber,
it turned out I didn't need selenium after all.
Since handling selenium errors turned out to be so brittle
(as I'll describe in this article), I'm now using requests
combined with a Python CookieJar. I'll write about that in a
future article. Meanwhile ...
Handling Errors and Timeouts
Timeouts are a particular problem with selenium,
because there doesn't seem to be any reliable way to change them
so the selenium script doesn't hang for ridiculously long periods.
At a recent LUG meeting, we were talking about various uses for web
scraping, and someone brought up a Wikipedia game: start on any page,
click on the first real link, then repeat on the page that comes up.
The claim is that this chain always gets to Wikipedia's page on
Philosophy.
We tried a few rounds, and sure enough, every page we tried did
eventually get to Philosophy, usually via languages, which goes to
communication, goes to discipline, action, intention, mental, thought,
idea, philosophy.
It's a perfect game for a discussion of scraping. It should be an easy
exercise to write a scraper to do this, right?
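It mostly is. A bare-bones version of the "first link in the article body" step might look like this, using only the standard library; it ignores the game's finer rules, like skipping links inside parentheses or italics:

```python
# Extract the first <a href> inside a <p>, a rough stand-in for
# "the first real link" in a Wikipedia article. Stdlib only;
# real play would fetch pages with urllib and loop until the
# title is "Philosophy". Ignores the parentheses/italics rules.
from html.parser import HTMLParser

class FirstLink(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.in_p = True
        elif tag == 'a' and self.in_p and self.href is None:
            self.href = dict(attrs).get('href')

    def handle_endtag(self, tag):
        if tag == 'p':
            self.in_p = False

def first_link(html):
    parser = FirstLink(); parser.feed(html)
    return parser.href
```

Restricting the search to paragraphs is what skips Wikipedia's navigation and sidebar links, which aren't part of the game.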
Dear Asus, and other manufacturers who make Eee imitations:
The Eee laptops are mondo cool. So lovely and light.
Thank you, Asus, for showing that it can be done and that there's
lots of interest in small, light, cheap laptops, thus inspiring
a bazillion imitators. And thank you even more for offering
Linux as a viable option!
Now would one of you please, please offer some models that
have at least XGA resolution so I can actually buy one? Some of us
who travel with a laptop do so in order to make presentations.
On projectors that use 1024x768.
So far HP is the only manufacturer to offer WXGA, in the Mini-Note.
But I read that Linux support is poor for the "Chrome 9" graphics
chip, and reviewers seem very underwhelmed with the Via C7
processor's performance and battery life.
Rumours of a new Mini-Note with a Via Nano or, preferably, Intel
Atom and Intel graphics chip, keep me waiting.
C'mon, won't somebody else step up and give HP some competition?
It's so weird to have my choice of about 8 different 1024x600
netbook models under $500, but if I want another 168 pixels
vertically, the price from everyone except HP jumps to over $2000.
Folks: there is a marketing niche here that you're missing.
The March League of Women Voters' Lunch With a Leader featured Jerry Smith, the county's new Broadband Manager.
I wrote it up for the LWV newsletter, but since that's PDF, I thought
I'd post a more accessible copy here.
A priest, a minister, and a rabbit walk into a bar.
The bartender asks the rabbit what he'll have to drink.
"How should I know?" says the rabbit. "I'm only here because of autocomplete."
Firefox folks like to call the location bar/URL bar the "awesomebar"
because of the suggestions it makes. Sometimes, those suggestions
are pretty great; there are a lot of sites I don't bother to bookmark
because I know they will show up as the first suggestion.
Other times, the "awesomebar" is not so awesome. It gets stuck on some site
I never use, and there's seemingly no way to make Firefox forget that site.
Firefox decided, some time ago, that whenever I try to type in a local
file pathname, as soon as I start typing /home/... I must be
looking for one specific file: an article I wrote over two months
ago and am long since done with.
Usually it happens when I'm trying to preview a new article.
I no longer have any interest in my local copy of that old article;
it's not bookmarked or anything like that; I haven't viewed it in
quite some time. But try to tell Firefox that. It's convinced that
the old one (why that one, and not one of the more recent ones?)
is what I want.
A recursive grep in ~/.mozilla/firefox showed that the only reference
to the old unwanted file was in the binary file places.sqlite.
My places.sqlite was 11Mb. A look through the Prefs window
showed that the default setting was to store history for a minimum of
90 days. That seemed rather excessive, so I reduced it drastically.
But that didn't reduce the size of the file any, nor did it banish
the spurious URLbar suggestion when I typed /home/....
After some discussion with folks on IRC, it developed that Firefox
may never actually reduce the size of the places.sqlite file at all.
Even if it did reduce the amount of data in the file (which it's not
clear it does), it never tells sqlite to compact the file to use less
space. Apparently there was some work on that about a year ago, but it
was slow and unreliable and they never got it working, and
eventually gave up on it.
You can run an sqlite compaction by hand (make sure to exit your
running firefox first!):
sqlite3 places.sqlite vacuum
But vacuuming really didn't help much. It reduced the size of the file
from 11 to 8.8 Mb (even after I'd cut the number of days of history
firefox was supposed to store to less than a third of the original), and it
didn't get rid of that spurious suggestion.
So the only remaining option seemed to be to remove the file.
It stores both history and bookmarks, so it's best to back up
bookmarks before removing it. I backed up bookmarks to the
.json format firefox likes to use for backups, and also exported
them to a more human (and browser) readable bookmarks.html.
Then I removed the places.sqlite file.
Success! The spurious recommendation was gone. Typing seems faster
too (fewer of those freezes while the "awesomebar" searches through
its list of recommendations).
So I guess firefox can't be trusted to clean up after itself,
and users who care have to do that manually.
It remains to be seen how much the file will grow now. I expect
periodic vacuumings or removals will still be warranted if I don't
want a huge file; but it's pretty easy to do, and firefox found the
bookmarks backup and reloaded them without any extra work on my part.
In the meantime, I made a new bookmark -- hidden in my bookmarklets
menu so it doesn't clutter the main bookmarks menu -- to the
directory where I preview articles I'm writing. That ought to help a
bit with future URLbar suggestions.
Yesterday I started up my browser and discovered that I had no menu.
I understand that Mozilla wants us not to use the menu ... because why
would anyone want to use any of Firefox's zillions of useful features
that aren't available through the hamburger menu? ... but they've
always made it possible to show a menubar if you really want one.
Right-click in the area where the tabs are, and there's an option
for Menu Bar that you can turn on.
And that option was still there, and the space above the tabs where
it should show up was still taking up space ... there just weren't
any menu buttons to click on.
Except they were there. I tried clicking near the left edge and a familiar
File menu popped up. Aha! The menubar is still there; it's
just invisible. (In the screenshot above, if you look hard you can
actually see the menu items, barely; in the theme I was actually
using, which got uninstalled while I flailed around trying to fix the
problem, they were much less visible.)
I maintain quite a few domains, both domains I own and domains
belonging to various nonprofits I belong to.
For testing these websites, I make virtual domains in apache,
choosing an alias for each site.
For instance, for the LWVNM website, the apache site file has
<VirtualHost *:80>
    ServerName lwvlocal
    ...
</VirtualHost>
and my host table, /etc/hosts, has
127.0.0.1 localhost lwvlocal
(The localhost line in my host table has entries for
all the various virtual hosts I use, not just this one).
That all used to work fine. If I wanted to test a new page on the LWVNM
website, I'd go to Firefox's urlbar and type something like
lwvlocal/newpage.html
and it would show me the new page, which I could work on until
it was time to push it to the web server.
A month or so ago, a new update to Firefox broke that.
I've spent a lot of the past week battling Russian spammers on
the New Mexico Bill Tracker.
The New Mexico legislature just began a special session to define the
new voting districts, which happens every 10 years after the census.
When new legislative sessions start, the BillTracker usually needs
some hand-holding to make sure it's tracking the new session. (I've
been working on code to make it notice new sessions automatically, but
it's not fully working yet). So when the session started, I checked
the log files...
and found them full of Russian spam.
Specifically, what was happening was that a bot was going to my
new user registration page and creating new accounts where the
username was a paragraph of Cyrillic spam.
Firefox's zoom settings are useful. You can zoom in on a page with
Ctrl-+ (actually Ctrl-Shift-= on a US-English keyboard, where + is the shifted =), or out with Ctrl--.
Useful, that is, until you start noticing that lots of pages you
visit have weirdly large or small font sizes, and it turns out that
Firefox is remembering a Zoom setting you used on that site
half a year ago on a different monitor.
Whenever you zoom, Firefox remembers that site, and uses that
zoom setting any time you go to that site forevermore (unless you
zoom back out).
Now that I'm using the same laptop in different modes —
sometimes plugged into a monitor, sometimes using its own screen —
that has become a problem.
Ever since a friend let us test-ride her electric bike at PEEC's
annual Electric Vehicle Show two years ago,
Dave's been stewing over the idea of getting an e-bike.
Why an e-bike?
One goal was to help us get into the back country. There are several
remote places -- most notably, in Canyonlands' Needles and Maze
districts -- that can only be accessed through trails that are beyond
our Rav4's ability. Or at least beyond our risk tolerance.
I wrote a few years ago about keeping
lists
of the books I read, first because I was curious how many books I read,
but later because I found the lists quite useful for other purposes.
But I realized this year that I hardly ever write about the good books
I discover each year. That's a shame: how are we to find out about
great new authors if we don't all make a point of sharing them?
So today I'm going to write about the best books I read in 2021.
I'll probably write catch-up articles about some of the best from
earlier years in future articles.
On most Sundays, you can find me at Overlook Park where the
Los Alamos Aeromodelers fly radio controlled model airplanes at the
big soccer field. The
Los Alamos Aeromodelers
used to be an official flying club, but now it's just a group of
friends who fly together.
I first got into R/C flying in the 1980s. Back then, model planes were
made of balsa wood. They took forever to build. The planes were heavy
(five or six pounds)
and flew fast, and so when they crashed — which they did a
LOT — you had a lot of painstaking rebuilding to do.
They were powered with internal combustion 2-stroke motors.
They were SO LOUD. You couldn't fly them in local parks;
you had to drive to a remote flying field where the noise wouldn't
disturb anyone.
Plus the motors were finicky and messy: they spewed oil
everywhere, so you had to clean the plane off with paper towels and
a degreaser after every flight. Ick.
Shoveling our long driveways and multiple decks and patios is a lot
of work, still novel and unfamiliar to a couple of refugees from California.
Especially when, like yesterday, the snow keeps coming down
so you have to do it repeatedly.
Okay, that subject line isn't likely to surprise any veteran linux
user. But here's the deal: wvdialconf in the past didn't support
winmodems (it checked the four /dev/ttyN ports, but not /dev/modem
or /dev/ttyHSF0 or anything like that) so in order to get it working
on a laptop, you had to hand-edit the modem information in the file,
or copy from another machine (if you have a machine with a real
modem) and then edit that. Whereas kppp can use /dev/modem, and
it's relatively easy to add new phone numbers. So I've been using
kppp on the laptop (which has a winmodem, alas) for years.
But with the SBC switch to Yahoo, I haven't been able to dial up.
"Access denied" and "Failed to authenticate ourselves to peer" right
after it sends the username and password, with PAP or PAP/CHAP
(with CHAP only, it fails earlier with a different error).
Just recently I discovered that wvdial now does check /dev/modem,
and works fine on winmodems (assuming of course a driver is present).
I tried it on sbc -- and it works.
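For reference, the wvdial side amounts to a few lines in /etc/wvdial.conf. The values below are placeholders, not my real settings; running wvdialconf will detect the modem and fill in the Modem and Init lines for you.

```ini
[Dialer Defaults]
Modem = /dev/modem
Baud = 115200
Init1 = ATZ
Phone = 555-0100
Username = myname
Password = mypassword
```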
I'm still not entirely sure what's wrong with kppp. Perhaps
SBC isn't actually using PAP, but their own script, and
wvdial's intelligent "Try to guess how to log in" code is figuring
it out (it's pretty simple, looking at the transcript in the log).
I could probably use the transcript to make a login script that
would work with kppp. But why bother? wvdial handles it
excellently, and doesn't flood stdout with hundreds of lines
of complaints about missing icons like kppp does.
I installed the latest Ubuntu Linux, called "Breezy Badger", just
before leaving to visit family over the holidays. My previous Ubuntu
attempt on this machine had been rather unstable (probably not
Ubuntu's fault -- 2.6 kernels and this laptop don't get along very
well) but Ubuntu seems to have some very sharp kernel developers, so
I was curious to see whether there'd been progress.
Installation:
Didn't go well. I had most of the same problems I'd had installing
Hoary to this laptop (mostly due to the installer assuming that a
CDROM and network must remain connected throughout the install,
something that's impossible on a laptop where both of those functions
require sharing the single PCMCIA port). The Breezy installer has the
additional "feature" that it tends to hang if you change things like
the CDROM while the install is in progress, trashing everything and
forcing you to restart from the beginning. (Filed bug 20443.)
Networking:
But eventually I found a sequence that let me get a network-less
Breezy onto the laptop, and I'm happy to report that Breezy's built-in
networking tools were able to add networking after the first boot
(something that hadn't worked in Hoary). Well, admittedly I did have
to add a script, /etc/hotplug/pci/3c59x, to call ifup when my cardbus
network card is plugged in; but every other distro needs that too, and
Breezy is the first 2.6-based distro which correctly calls the script
every time.
Suspend:
Once up and running, Breezy shows impressive laptop savvy.
Like Hoary, it can suspend either to disk or to RAM; unlike Hoary, it
can do this without my needing to hack any config files except to
uncomment the line enabling suspend to RAM in /etc/default/acpi-support.
It does print various error messages on stdout when it resumes from
sleep or hibernate, but that's a minor issue.
Not only that, but it restores both network and usb when resuming from
suspend (on hoary I had to hack some of the suspend scripts to make
that work).
(Kernel flakiness:
Well, mostly it suspends fine. Unplugging a usb mouse at the wrong
time still causes a kernel hang.
That's a 2.6 bug, not an Ubuntu-specific problem.
And the system also tends to hang and need to be power cycled about
one time out of five when exiting X; perhaps it's an Xorg bug.)
Ironically, my "safe" partition on this laptop (a much-modified
Debian sarge) mysteriously stopped seeing PCMCIA on the first
day away from home, so I ended up using Breezy for the whole trip
and giving it a good workout.
Hal:
One problem Breezy shares with Hoary is that every few seconds, the
hald daemon makes the hard drive beep and whir. Unlike Hoary, which
had an easy
solution, Breezy ignores the storage_media_check_enabled and
storage_automount_enabled hints. The only way I found to disable
the beeping was to kill hald entirely by renaming /usr/sbin/hald
(it's not called from /etc/init.d, and I never did find out who was
starting it so I could disable it). Removing hald seems to have caused
no ill effects; at least, hotplug of pcmcia and usb still works, as do
udev rules. (Filed bug 21238.)
Udev:
Oh, about those udev rules! Regular readers may recall that I had some
trouble with Hoary regarding
udev
choking on multiple flash card readers which I solved
on my desktop machine with a udev rule that renames the four fixed,
always present devices. But with a laptop, I don't have fixed devices;
I wanted a setup that would work regardless of what I plugged in.
That required a new udev rule. Here's the rule that worked for me:
in /etc/udev/permissions.rules, change
Note that this means that whatever scripts/removable.sh does, it's not
happening any more. That doesn't seem to have caused any problem,
though. (Filed bug 21662
on that problem.)
Conclusion:
Overall, Breezy is quite impressive and required very little tweaking
before it was usable. It was my primary distro for two weeks while
travelling; I may switch to it on the desktop once I find a workaround
for bug
352358 in GTK 2.8 (which has been fixed in gnome cvs, but
that doesn't make it any less maddening when using the buggy version).
The best thing at Linuxworld was the Powertop BOF,
despite the fact that it ended up stuck in a room with no projector.
The presenter, Arjan van de Ven, coped well with the setback and
managed just fine.
The main goal of Powertop is to find applications that are polling or
otherwise waking the CPU unnecessarily,
draining power when they don't need to.
Most of the BOF focused on "stupid stuff": programs that wake up
too often for no reason. Some examples he gave (many of these will
be fixed in upcoming versions of the software):
gnome-screensaver checked every 2 sec to see if the mouse moved
(rather than using the X notification for mouse move);
gnome volume checked 10 times a second whether the volume has changed;
gnome-clock woke up once a second to see if the minute had rolled
over, rather than checking once a minute;
firefox in an ssl layer polled 10 times a second in case there was a
notification;
the gnome file monitor woke up 40 times a second to check a queue
even if there was nothing in the queue;
evolution woke up 10 times a second;
the fedora desktop checked 10 times a second for a smartcard;
gksu used a 10000x/sec loop (he figures someone mistook milliseconds
for microseconds: this alone used up 45 minutes on one battery test run);
Adobe's closed-source flash browser plugin woke up 2.5 times a
second, and acroread had similar problems (this has been reported to
Adobe but it's not clear if a fix is coming any time soon).
And that's all just the desktop stuff, without getting into other
polling culprits like hal and the kernel's USB system. The kernel
itself is often a significant culprit: until recently, kernels woke
up once a millisecond whether they needed to or not. With the recent
"tickless" option that appeared in the most recent kernel, 2.6.22,
the CPU won't wake up unless it needs to.
A KDE user asked if the KDE desktop was similarly bad. The answer
was yes, with a caveat: Arjan said he gave a presentation a while back
to a group of KDE developers, and halfway through, one of the
developers interrupted him when he pointed out a problem
to say "That's not true any more -- I just checked in a fix while
you were talking." It's nice to hear that at least some developers
care about this stuff! Arjan said most developers responded
very well to patches he'd contributed to fix the polling issues.
(Of course, those of us who use lightweight window managers like
openbox or fvwm have already cut out most of these gnome and kde
power-suckers. The browser issues were the only ones that applied
to me, and I certainly do notice firefox's polling: when the laptop
gets slow, firefox is almost always the culprit, and killing it
usually brings performance back.)
As for hardware, he mentioned that
some linux LCD drivers don't really dim the backlight when you
reduce brightness -- they just make all the pixels darker.
(I've been making a point of dimming my screen when running off batteries;
time to use that Kill-A-Watt and find out if it actually matters!)
Wireless cards like the ipw100 use
a lot of power even when not transmitting -- sometimes even more than
when they're transmitting -- so turning them off can be a big help.
Using a USB mouse can cut as much as half an hour off a battery charge.
The 2.6.23 kernel has lots of new USB power saving code, which should help.
Many devices have activity every millisecond,
so there's lots of room to improve.
Another issue is that even if you get rid of the 10x/sec misbehavers,
some applications really do need to wake up every second or so. That's
not so bad by itself, but if you have lots of daemons all waking up at
different times, you end up with a CPU that never gets to sleep.
The solution is to synchronize them by rounding the wakeup times to
the nearest second, so that they all wake up at
about the same time, and the CPU can deal with them
all, then go back to sleep. But there's a trick: each machine has to
round to a different value. You don't want every networking
application on every machine across the internet all waking up at once
-- that's a good way to flood your network servers. Arjan's phrase:
"You don't want to round the entire internet" [to the same value].
The solution is a new routine in glib: timeout_add_seconds.
It takes a hash of the hostname (and maybe other values) and uses that
to decide where to round timeouts for the current machine.
If you write programs that wake up on a regular basis, check it out.
In the kernel, round_jiffies does something similar.
After all the theory, we were treated to a demo of powertop in action.
Not surprisingly, it looks a bit like top. High on the screen
is summary information telling you how much time your CPU is spending
in the various sleep states. Getting into the deeper sleep states is
generally best, but it's not quite that simple: if you're only getting
there for short periods, it takes longer and uses more power to get
back to a running state than it would from higher sleep states.
Below that is the list of culprits: who is waking your CPU up most
often? This updates every few seconds, much like the top
program. Some of it's clear (names of programs or library routines);
other lines are more obscure if you're not a kernel hacker, but
I'm sure they can all be tracked down.
At the bottom of the screen is a great feature: a short hint telling
you how you could eliminate the current top offender (e.g. kill the
process that's polling). Not only that, but in many cases powertop
will do it for you at the touch of a key. Very nice! You can try
disabling things and see right away whether it helped.
Arjan stepped through killing several processes and showing the
power saving benefits of each one. (I couldn't help but notice, when
he was done, that the remaining top offender, right above nautilus,
was gnome-power-manager. Oh, the irony!)
It's all very nifty and I'm looking forward to trying it myself.
Unfortunately, I can't do that on the
laptop where I really care about battery life. Powertop requires a
kernel API that went in with the "tickless" option, meaning it's
in 2.6.22 (and I believe it's available as a patch for 2.6.21).
My laptop is stuck back on 2.6.18 because of an IRQ handling bug (bug 7264).
Powertop also requires ACPI, which I have to disable
because of an infinite loop in kacpid (bug 8274,
Ubuntu bug
75174). It's frustrating to have great performance tools like
powertop available, yet not be able to use them because of kernel
regressions. But at least I can experiment with it on my desktop
machine.
I got swsusp working on blackbird. Re-reading the
kernel documentation with a night's sleep behind me revealed that
at the very end of swsusp.txt, there's code for a C program to
suspend non-ACPI machines. Worked fine! swsusp in 2.6.7 doesn't
quite resume X properly (I had to ctrl-alt-FN back and forth a few
times before I saw my X screen) but it's progress, anyway.
Tags: linux, suspend, laptop
[ 11:12 Apr 10, 2022
More linux |
permalink to this entry |
]
I gave a lightning talk at the Ubucon -- the Ubuntu miniconf -- at the
SCALE 8x, Southern
California Linux Expo yesterday. I've been writing about grub2
for Linux Planet but it left
me with some, well, opinions that I wanted to share.
A lightning talk
is an informal very short talk, anywhere from 2 to 5 minutes.
Typically a conference will have a session of lightning talks,
where anyone can get up to plug a project, tell a story or flame about
an annoyance. Anything goes.
I'm a lightning talk junkie -- I love giving them, and I
love hearing what everyone else has to say.
I had some simple slides for this particular talk. Generally I've
used bold or other set-offs to indicate terms I showed on a slide.
SCALE 8x, by
the way, is awesome so far, and I'm looking forward to the next two days.
Grub2 3-minute lightning talk
What's a grub? A soft wriggly worm.
But it's also the Ubuntu Bootloader.
And in Karmic, we have a brand new grub: grub2!
Well, sort of. Karmic uses Grub 2 version 1.97 beta4.
Aside from the fact that it's a beta -- nuff said about that --
what's this business of grub TWO being version ONE point something?
Are you hearing alarm bells go off yet?
But it must be better, right?
Like, they say it cleans up partition numbering.
Yay! So that confusing syntax in grub1, where you have to say
(hd0,0) that doesn't look like anything else on Linux,
and you're always wanting to put the parenthesis in the wrong place
-- they finally fixed that?
Well, no. Now it looks like this: (hd0,1)
THEY KEPT THE CONFUSING SYNTAX BUT CHANGED THE NUMBER!
Gee, guys, thanks for making things simpler!
But at least grub2 is better at graphics, right? Like what if
you want to add a background image under that boring boot screen?
A dark image, because the text is white.
Except now Ubuntu changes the text color to black.
So you look in the config file to find out why ...
if background_image `make_system_path_relative...
set color_normal=black/black
... there it is! But why are there two blacks?
Of course, there's no documentation. They can't be fg/bg --
black on black wouldn't make any sense, right?
Well, it turns out it DOES mean foreground and background -- but the second
"black" doesn't mean black. It's a special grub2 code for "transparent".
That's right, they wrote this brand new program from scratch, but they
couldn't make a parser that understands "none" or "transparent".
What if you actually want text with a black background? I have
no idea. I guess you're out of luck.
Okay, what about dual booting? grub's great at that, right?
I have three distros installed on this laptop. There's a shared /boot
partition. When I change something, all I have to do is edit a file
in /boot/grub. It's great -- so much better than lilo! Anybody remember
what a pain lilo was?
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by /usr/sbin/grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
Oops, wait -- not with grub2. Now I'm not supposed to edit
that file. Instead, I edit files in TWO places,
/etc/grub.d and /etc/default/grub, and then
run a program in a third place, /usr/bin/update-grub.
All this has to be done from the same machine where you installed
grub2 -- if you're booted into one of your other distros, you're out
of luck.
grub2 takes us back to the bad old days of lilo.
FAIL
Grub2 really is a soft slimy worm after all.
But I have some ideas for workarounds. If you care, watch my next
few articles on LinuxPlanet.com.
I like to keep my laptop's boot sequence lean and mean, so it
can boot very quickly. (Eventually I'll write about this in
more detail.) Recently I made some tweaks, and then went through
a couple of dist-upgrades (it's currently running Debian "sarge"),
and had some glitches. Some of what I learned was interesting
enough to be worth sharing.
First, apache stopped serving http://localhost/ -- not
important for most machines, but on the laptop it's nice to be
able to use a local web server when there's no network connected.
Further investigation revealed that this had nothing to do with
apache: it was localhost that wasn't working, for any port.
I thought perhaps my recent install of 2.4.29 was at fault, since
some configuration had changed, but that wasn't it either.
Eventually I discovered that the lo interface was there,
but wasn't being configured, because my boot-time tweaking had
disabled the ifupdown boot-time script, normally called from
/etc/rcS.d.
That's all straightforward, and I restored ifupdown to its
rightful place using update-rc.d ifupdown start 39 S .
Dancer suggested apt-get install --reinstall ifupdown
which sounds like a better way; I'll do that next time. But
meanwhile, what's this ifupdown-clean script that gets installed
as S18ifupdown-clean ?
I asked around, but nobody seemed to know, and googling doesn't
make it any clearer. The script obviously cleans up something
related to /etc/network/ifstate, which seems to be a text
file holding the names of the currently configured network
interfaces. Why? Wouldn't it be better to get this information
from the kernel or from ifconfig? I remain unclear as to what the
ifstate file is for or why ifupdown-clean is needed.
Now my loopback interface worked -- hurray!
But after another dist-upgrade, now eth0 stopped working.
It turns out there's a new hotplug in town. (I knew this
because apt-get asked me for permission to overwrite
/etc/hotplug/net.agent; the changes were significant enough
that I said yes, fully aware that this would likely break eth0.)
The new net.agent comes with comments referencing
NET_AGENT_POLICY in /etc/default/hotplug, and documentation
in /usr/share/doc/hotplug/README.Debian. I found the
documentation baffling -- did NET_AGENT_POLICY=all mean that
it would try to configure all interfaces on boot, or only that
it would try to configure them when they were hotplugged?
It turns out it means the latter. net.agent defaults to
NET_AGENT_POLICY=hotplug, which doesn't do anything unless you
edit /etc/network/interfaces and make a bunch of changes;
but changing NET_AGENT_POLICY=all makes hotplug "just work".
I didn't even have to excise LIFACE from the net.agent code,
like I needed to in the previous release. And it still works
fine with all my existing Network
Schemes entries in /etc/network/interfaces.
This new hotplug looks like a win for laptop users. I haven't
tried it with usb yet, but I have no reason to worry about that.
Speaking of usb, hotplug, and the laptop: I'm forever hoping
to switch to the 2.6 kernel, because it handles usb hotplug so much
better than 2.4; but so far, I've been prevented by PCMCIA hotplug
issues and general instability when the laptop suspends and resumes.
(2.6 works fine on the desktop, where PCMCIA and power management
don't come into play.)
A few days ago, I built both 2.4.29 and 2.6.10, since I was behind
on both branches. 2.4.29 works fine. 2.6.10, alas, is even less
stable than 2.6.9 was. On the laptop's very first resume from BIOS
suspend after the first 2.6.10 boot, it hung, in the same way I'd
been seeing sporadically from 2.6.9: no keyboard lights blinking
(so not a kernel "oops"), cpu fan sometimes spinning,
and no keyboard response to ctl-alt-Fn or anything else.
I suppose the next step is to hook up the "magic sysrq" key and see
if it responds to the keyboard at all when in that state.
In the previous article
I wrote about how the two X selections, the primary and clipboard,
work. But I glossed over the details of key bindings to copy and paste
the two selections in various apps.
That's because it's complicated. Although having the two selections
available is really a wonderful feature, I can understand why so many
people are confused about it or think that copy and paste just
generally doesn't work on Linux -- because apps are so woefully
inconsistent about their key bindings, and what they do have is
so poorly documented.
"Why don't they all just use the standard Ctrl-C, Ctrl-V?" you ask.
(Cmd-C, Cmd-V for Mac users, but I'm not going to try to include the
Mac versions of every binding in this article; Mac users will have
to generalize.)
Simple: those keys have well-known and very commonly used bindings
already, which date back to long before copy and paste were invented.
I've written before about how I'd like to get a netbook like an Asus Eee,
except that the screen resolution puts me off: no one makes a netbook
with vertical resolution of more than 600. Since most projectors prefer
1024x768, I'm wary of buying a laptop that can't display that resolution.
(What was wrong with my beloved old Vaio? Nothing, really, except that
the continued march of software bloat means that a machine that can't
use more than 256M RAM is hurting when trying to run programs
(*cough* Firefox *cough*) that start life by grabbing about 90M and
go steadily up from there. I can find lightweight alternatives for
nearly everything else, but not for the browser -- Dillo just doesn't
cut it.)
Ebay turned out to be the answer: there are lots of subnotebooks
there, nice used machines with full displays at netbook prices.
And so a month before LCA I landed a nice Vaio TX650 with 1.5G RAM,
Pentium M, Intel 915GM graphics and Centrino networking.
All nice Linux-supported hardware.
But that raised another issue: how do widescreen laptops
(the TX650 is 1366x768) talk to a projector?
I knew it was possible -- I see people presenting from widescreen
machines all the time -- but nobody ever writes about how it works.
The first step was to get it talking to an external monitor at all.
I ran a VGA cable to my monitor, plugged the other end into the Vaio
(it's so nice not to need a video dongle!) and booted. Nothing. Hmm.
But after some poking and googling, I learned that
with Intel graphics, xrandr is the answer:
xrandr --output VGA --mode 1024x768
switches the external VGA signal on, and
xrandr --auto
switches it back off.
Update, April 2010: With Ubuntu Lucid, this has changed and now it's
xrandr --output VGA1 --mode 1024x768
-- in other words, VGA changed to VGA1. You can run xrandr
with no arguments to get a list of possible output devices and find
out whether X sees the external projector or screen correctly.
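That no-argument output is easy to script against. Here's a small sketch (the helper name is mine, and the sample line format is from memory of xrandr's output, so treat it as an approximation):

```shell
# Print the names of connected outputs, given `xrandr` output on stdin.
# xrandr prints one line per output, e.g.
#   VGA1 connected 1600x900+0+0 (normal ...) 434mm x 270mm
#   LVDS1 disconnected (normal ...)
connected_outputs() {
    awk '$2 == "connected" { print $1 }'
}
```

Then `xrandr | connected_outputs` tells you whether the external device shows up at all before you try to set a mode on it.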
Well, mostly. Sometimes it doesn't work -- like, unfortunately,
at the lightning talk session, so I had to give my
talk without visuals. I haven't figured that out yet.
Does the projector have to be connected before I run xrandr?
Should it not be connected until after I've already run xrandr?
Once it's failed, it doesn't help to run xrandr again ... but
a lot of fiddling and re-plugging the cable and power cycling the
projector can sometimes fix the problem, which obviously isn't helpful
in a lightning talk situation.
Eventually I'll figure that out and blog it (ideas, anyone?)
but the real point of today's article is resolution. What I
wanted to know was: what happened to that wide 1366-pixel screen when
I was projecting 1024 pixels? Would it show me some horrible elongated
interpolated screen? Would it display on the left part of the laptop
screen, or the middle part?
The answer, I was happy to learn, is that it does the best thing
possible: it sends the leftmost 1024 pixels to the projector, while
still showing me all 1366 pixels on the laptop screen.
Why ... that means ... I can write notes for myself, to display in
the rightmost 342 screen pixels!
All it took was a little bit of
CSS hacking
in my
HTML slide
presentation package, and it worked fine.
Now I have notes just like my Mac friends with their Powerpoint and
their dual-head video cards, only I get to use Linux and HTML.
How marvellous! I could get used to this widescreen stuff.
I've used my simple network schemes setup for many years. No worries
about how distros come up with a new network configuration GUI every
release; no worries about all the bloat that NetworkManager insists
on before it will run; no extra daemons running all the time polling
my net just in case I might want to switch networks suddenly.
It's all command-line based; if I'm at home, I type
netscheme home
and my network will be configured for that setup until I tell it
otherwise.
If I go to a cafe with an open wi-fi link, I type
netscheme wifi; I have other schemes for places
I go where I need a wireless essid or WEP key. It's all very easy
and fast.
Last week for SCALE I decided it was silly to have to su and create
a new scheme file for conferences where all I really needed was the
name of the network (the essid), so I added a quick hack to my
netscheme script so that typing netscheme foo, where
there's no existing scheme by that name, will switch to a scheme
using foo as the essid. Worked nicely, and that inspired
me to update the "documentation".
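The fallback logic is simple enough to sketch. This is not the actual netscheme script; the scheme directory and the iwconfig call in the comment are placeholders for however your own setup stores schemes:

```shell
# Fallback: if no saved scheme file matches the argument, treat the
# argument itself as an essid. SCHEME_DIR is a placeholder path.
SCHEME_DIR=${SCHEME_DIR:-/etc/network/schemes}

netscheme_mode() {
    if [ -f "$SCHEME_DIR/$1" ]; then
        echo scheme   # a saved scheme exists; use it
    else
        echo essid    # no such scheme: use "$1" as the essid,
                      # e.g. iwconfig eth0 essid "$1"
    fi
}
```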
I wrote an elaborate page on my network schemes back around 2003,
but of course it's all changed since then and I haven't done much
toward updating the page. So I've rewritten it completely, taking
out most of the old cruft that doesn't seem to apply any more.
It's here:
Howto Set Up
Multiple Network Schemes on a Linux Laptop.
My new netbook (about which, more later) has a trackpad with
areas set aside for both vertical and horizontal scrolling.
Vertical scrolling worked out of the box on Squeeze without needing
any extra fiddling. My old Vaio had that too, and I loved it.
Horizontal scrolling took some extra research.
I was able to turn it on with a command:
synclient HorizEdgeScroll=1
(thank you, AbsolutelyTech).
Then it worked fine, for the duration of that session.
But it took a lot more searching to find out the right place to set
it permanently. Nearly every page you find on trackpad settings tells
you to edit /etc/X11/xorg.conf. Distros haven't normally
used xorg.conf in over three years! Sure, you can generate
one, then edit it -- but surely there's a better way.
And there is. At least in Debian Squeeze and Ubuntu Natty, you can edit
/usr/share/X11/xorg.conf.d/50-synaptics.conf
and add this Option line inside the "InputClass" section:
Option "HorizEdgeScroll" "1"
Don't forget the quotes, or X won't even start.
In theory, you can use this for any of the Synaptics driver options --
tap rate and sensitivity, even multitouch. Your mileage may vary --
horizontal scroll is the only one I've tested so far. But at least
it's easier than generating and maintaining an xorg.conf file!
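For reference, here's roughly what the whole section looks like with the option added. The Identifier and Match lines are from the stock Debian/Ubuntu file as I remember it, so check yours rather than pasting this verbatim:

```
Section "InputClass"
        Identifier "touchpad catchall"
        Driver "synaptics"
        MatchIsTouchpad "on"
        Option "HorizEdgeScroll" "1"
EndSection
```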
When I bought my Carbon X1 laptop a few years ago, the sound card was
new and not yet well supported by Linux. (I knew that before I bought,
and decided it was a good gamble that support would improve soon.)
Early on, there was a long thread on Lenovo's forum discussing,
in particular, how to get the microphone working. The key, apparently,
was the SOF (Sound Open Firmware) support, which was standard
in the 5.3 Linux kernel
that came with Ubuntu 19.10,
but needed an elaborate
script to get working in earlier kernels.
It worked fine on Ubuntu. But under Debian, the built-in
mic didn't work.
Linux Magazine has a good article by Jonathan A. Zdziarski on
Linux on
the laptop: Ten power tools for the mobile Linux user.
He gives hints such as what services to turn off for better power
management, and how to configure apmd to turn off those services
automatically; finding modules to drive various types of wireless
internet cards; various ways of minimizing disk activity;
and even making data calls with a mobile phone.
I've been setting up a new Lenovo X201 laptop at work.
(Yes, work -- I've somehow fallen into an actual job where I go in to
an actual office at least some of the time. How novel!)
At the office I have a Lenovo docking station, attached to a monitor
and keyboard. The monitor, naturally, has a different resolution from
the laptop's own monitor.
Under Gnome and compiz, when I plugged in the monitor, I could either let
the monitor mirror the laptop display -- in which case X would refuse to
work at greater than 1024x768, much smaller than the native resolution
of either the laptop screen or the external monitor -- or I could call
up the classic two-monitor configuration dialog, where I could
configure the external monitor to be its correct size and sit
alongside the computer's monitor. I had to do this every time I
plugged in.
If I wanted to work on the big screen,
then when I undocked, I had to drag all the windows on all desktops
back to the built-in LCD first, or they'd be lost.
Using just the external monitor and turning off the laptop screen
didn't seem to be an allowed option.
That all lasted for about two days. Gnome and I just don't get along.
Pretty soon gdm was mysteriously refusing to let me log in (probably didn't
like my under-1000 user id), and after wasting half a day fighting
it I gave up and reverted with relief to my familiar Openbox desktop.
But now I'm in the Openbox world and don't have that dialog anyway.
What are my options?
xrandr monitor detection
Fortunately, I already knew about
using
xrandr to send to a projector; it was only a little more complicated
using it for the monitor in the docking station. Running xrandr with
no arguments prints all the displays it currently sees, so you can tell
whether an external display or projector is connected and even what
resolutions it supports.
I used that for a code snippet in my .xinitrc:
# Check whether the external monitor is connected: returns 0 on success
xrandr | grep VGA | grep " connected "
if [ $? -eq 0 ]; then
xrandr --output VGA1 --mode 1600x900;
xrandr --output LVDS1 --off
else
xrandr --output VGA1 --off
xrandr --output LVDS1 --mode 1280x800
fi
That worked nicely. When I start X it checks for an external monitor,
and if it finds one it turns off the laptop display (so it's off when
the laptop is sitting closed in the docking station) and sets the
screen size for the external monitor.
Making it automatic
All well and good. I worked happily all day in the docking station,
suspended the laptop and un-docked it, brought it home, woke it up --
and of course the display was still off. Oops.
Okay, so it also needs the same check when resuming from suspend.
That used to be in /etc/acpi/resume.d, but in Lucid they've
moved it (because we definitely wouldn't want users to get complacent
and think they know how to configure things!) and now it lives in
/etc/pm/sleep.d. I created a new file,
/etc/pm/sleep.d/20_enable_display which looks like this:
#!/bin/sh
case "$1" in
resume)
# Check whether the external monitor is connected:
# returns 0 on success
xrandr | grep VGA | grep " connected "
if [ $? -eq 0 ]; then
xrandr --output VGA1 --mode 1600x900
xrandr --output LVDS1 --off
else
xrandr --output VGA1 --off
xrandr --output LVDS1 --mode 1280x800
fi
hsetroot -center `find -L $HOME/Backgrounds -name "*.*" | $HOME/bin/randomline`
;;
suspend|hibernate)
;;
*)
;;
esac
exit $?
Neat! Now, every time I wake from suspend, the laptop checks whether
an external monitor is connected and sets the resolution accordingly.
And it re-sets the background (using my
random
wallpaper method) so I don't get a tiled background on the big monitor.
Update: hsetroot -fill works better than -center
given that I'm displaying background images on two different
resolutions. Of course, if I wanted to get fancy I could make
separate background sets, one for each monitor, and choose images
from the appropriate set.
We're almost done. Two more possible adjustments.
Detecting undocking
First, while poking around in /etc/acpi I noticed a script
named undock.sh. In theory, I can put the same code snippet in
there, and then if I un-dock the laptop without suspending it first,
it will immediately change resolution. I haven't actually tried that yet.
Projectors
Second, this business of turning off the built-in display if there's
anything plugged into the VGA port is going to break if I use this
laptop for presentations, since a projector will also show up as VGA1.
So the code may need to be a little smarter. For example:
The theory here is that an external monitor will be able to do 1680 or
1600, so it will have a line like
VGA1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 434mm x 270mm.
The 1680x matches the 16.0x pattern in grep.
A projector isn't likely to do more than 1280, so it won't match the
pattern '16.0x'. However, that isn't very robust; it will probably
fail for one of those fancy new 1920x1080 monitors.
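A sketch of that smarter check, using the '16.0x' pattern just described (the function name is mine, and as noted it would misclassify a 1920-wide monitor):

```shell
# Guess monitor vs. projector from xrandr output on stdin: succeed
# only if the connected VGA device is in a 1600- or 1680-wide mode,
# which a projector is unlikely to manage.
is_big_monitor() {
    grep VGA | grep " connected " | grep -q ' 16.0x'
}
```

Usage would be something like: if xrandr | is_big_monitor, turn off LVDS1; otherwise mirror to the projector.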
You could extend it with
I'm forever having problems connecting to wireless networks,
especially with my Netgear Prism 54 card. The most common failure mode:
I insert the card and run /etc/init.d/networking restart
(udev is supposed to handle this, but that stopped working a month
or so ago). The card looks like it's connecting, ifconfig eth0
says it has the right IP address and it's marked up -- but try to
connect anywhere and it says "no route to host" or
"Destination host unreachable".
I've seen this both on networks which require a WEP key
and those that don't, and on nets where my older Prism2/Orinoco based
card will connect fine.
Apparently, the root of the problem
is that the Prism54 is more sensitive than the Prism2: it can see
more nearby networks. The Prism2 (with the orinoco_cs driver)
only sees the strongest network, and gloms onto it.
But the Prism54 chooses an access point according to arcane wisdom
only known to the driver developers.
So even if you're sitting right next to your access point and the
next one is half a block away and almost out of range, you need to
specify which one you want. How do you do that? Use the ESSID.
Every wireless network has a short identifier called the ESSID
to distinguish it from other nearby networks.
You can list all the access points the card sees with:
iwlist eth0 scan
(I'll be assuming eth0 as the ethernet device throughout this
article. Depending on your distro and hardware, you may need to
substitute ath0 or eth1 or whatever your wireless card
calls itself. Some cards don't support scanning,
but details like that seem to be improving in recent kernels.)
You'll probably see a lot of ESSIDs like "linksys" or
"default" or "OEM" -- the default values on typical low-cost consumer
access points. Of course, you can set your own access point's ESSID
to anything you want.
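iwlist's scan output is verbose; if you just want the names, the ESSID lines are easy to pick out. A small sketch (the helper name is mine; the ESSID:"..." line format is iwlist's):

```shell
# Extract just the network names from `iwlist eth0 scan` output,
# which contains lines like:    ESSID:"linksys"
list_essids() {
    sed -n 's/.*ESSID:"\(.*\)".*/\1/p'
}
```

So `iwlist eth0 scan | list_essids | sort -u` gives you a quick list of what's in range.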
So what if you think your wireless card should be working, but it can't
connect anywhere? Check the ESSID first. Start with iwconfig:
iwconfig eth0
iwconfig lists the access point associated with the card right now.
If it's not the one you expect, there are two ways to change that.
First, change it temporarily to make sure you're choosing the right ESSID:
iwconfig eth0 essid MyESSID
If your accesspoint requires a key, add key nnnnnnnnnn
to the end of that line. Then see if your network is working.
If that works, you can make it permanent. On Debian-derived distros,
just add lines to the entry in /etc/network/interfaces:
wireless-essid MyESSID
wireless-key nnnnnnnnnn
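Put together with the rest of the stanza, that might look like this (the interface name and dhcp setting are placeholders for whatever your entry already uses):

```
auto eth0
iface eth0 inet dhcp
    wireless-essid MyESSID
    wireless-key nnnnnnnnnn
```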
Some older howtos may suggest an interfaces line that looks like this:
up iwconfig eth0 essid MyESSID
Don't get sucked in. This "up" syntax used to work (along with pre-up
and post-up), but although man interfaces still mentions it,
it doesn't work reliably in modern releases.
Use wireless-essid instead.
Of course, you can also use a gooey tool like
gnome-network-manager to set the essid and key. Not being a
gnome user, some time ago I hacked up the beginnings of a standalone
Python GTK tool to configure networks. During this week's wi-fi
fiddlings, I dug it out and blew some of the dust off:
wifi-picker.
You can choose from a list of known networks (including both essid and
key) set up in your own configuration file, or from a list of essids
currently visible to the card, and (assuming you run it as root)
it can then set the essid and key to whatever you choose.
For networks I use often, I prefer to set up a long-term
network
scheme, but it's fun to have something I can run once to
show me the visible networks then let me set essid and key.
Emacs has various options for editing HTML, none of them especially
good. I gave up on html-mode a while back because it had so many
evil tendencies, like not ever letting you type a double dash
without reformatting several lines around the current line as an
HTML comment. I'm using web-mode now, which is better.
But there was one nice thing that html-mode had: quick key bindings
for inserting tags. For instance, C-c i would insert
the tag for italics, <i></i>, and if you
had something selected it would make that italic:
<i>whatever you had selected</i>.
It's a nice idea, but it was too smart (for some value of "smart"
that means "dumb and annoying") for its own good. For instance,
it loves to randomly insert things like newlines in places where they
don't make any sense.
When I bought my new laptop several years ago, I chose Ubuntu as
its first distro even though I usually run Debian.
For one thing, Ubuntu has an excellent installer.
Second, they seem to do more testing on cutting-edge hardware, so I
thought the chances were better that hardware on a brand-new laptop
would be supported.
Ubuntu has been working fine for a couple of years, but with 21.10
("Impish Indri") it took a precipitous downturn.
I updated my Ubuntu "dapper" yesterday. When I booted this morning,
I couldn't get to any outside sites: no DNS. A quick look at
/etc/resolv.conf revealed that it was empty -- my normal
static nameservers were missing -- except for a comment
indicating that the file is prone to be overwritten at any moment
by a program called resolvconf.
man resolvconf provided no enlightenment. Clearly it's intended
to work with packages such as PPP which get dynamic network
information, but that offered no clue as to why it should be operating
on a system with a static IP address and static nameservers.
The closest Ubuntu bug I found was bug
31057. The Ubuntu developer commenting in the bug asserts that
resolvconf is indeed the program at fault. The bug reporter
disputes this, saying that resolvconf isn't even installed on the
machine. So the cause is still a mystery.
After editing /etc/resolv.conf to restore my nameservers,
I uninstalled resolvconf along with some other packages that I clearly
don't need on this machine, hoping to prevent the problem from
happening again:
Meanwhile, I did some reading.
It turns out that resolvconf depends on
an undocumented bit of information added to the
/etc/network/interfaces file: lines like
dns-nameservers 192.168.1.1 192.168.1.2
This is not documented under man interfaces, nor under man
resolvconf; it turns out that you're supposed to find out about it
from /usr/share/doc/resolvconf/README.gz. But of course, since
it isn't a standard part of /etc/network/interfaces, no
automatic configuration programs will add the DNS lines there for you.
Hope you read the README!
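For illustration, a complete static stanza in /etc/network/interfaces with the resolvconf line added might look like this (the addresses here are made-up examples matching the line above):

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1 192.168.1.2
```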
resolvconf isn't inherently a bad idea, actually; it's supposed to
replace any old "up" commands in interfaces that copy
resolv.conf files into place.
Having all the information in the interfaces file would be a better
solution, if it were documented and supported.
Meanwhile, be careful about resolvconf, which you may have even
if you never intentionally installed it.
This
thread on a Debian list discusses the problem briefly, and this
reply quotes the relevant parts of the resolvconf README (in case
you're curious but have already removed resolvconf in a fit of pique).
The new Debian Etch installation
on my laptop was working pretty well.
But it had one weirdness: the ethernet card was on eth1, not eth0.
ifconfig -a revealed that eth0 was ... something else,
with no IP address configured and a really long MAC address.
What was it?
Poking around dmesg revealed that it was related to the IEEE 1394 and
the eth1394 module. It was firewire networking.
This laptop, being a Vaio, does have a built-in firewire interface
(Sony calls it i.Link). The Etch installer, when it detected no
network present, had noted that it was "possible, though unlikely"
that I might want to use firewire instead, and asked whether to
enable it. I declined.
Yet the installed system ended up with firewire networking not only
installed, but taking the first network slot, ahead of any network
cards. It didn't get in the way of functionality, but it was annoying
and clutters the output whenever I type ifconfig -a.
Probably took up a little extra boot time and system resources, too.
I wanted it gone.
Easier said than done, as it turns out.
I could see two possible approaches.
Figure out who was setting it to eth1, and tell it to ignore
the device instead.
Blacklist the kernel module, so it couldn't load at all.
I began with approach 1.
The obvious culprit, of course, was udev. (I had already ruled out
hal, by removing it, rebooting and observing that the bogus eth0 was
still there.) Poking around /etc/udev/rules.d revealed the file
where the naming was happening: z25_persistent-net.rules.
It looks like all you have to do is comment out the two lines
for the firewire device in that file. Don't believe it.
Upon reboot, udev sees the firewire devices and says "Oops!
persistent-net.rules doesn't have a rule for this device. I'd better
add one!" and you end up with both your commented-out line, plus a
brand new uncommented line. No help.
Where is that controlled? From another file,
z45_persistent-net-generator.rules. So all you have to do is
edit that file and comment out the lines, right?
Well, no. The firewire lines in that file merely tell udev how to add
a comment when it updates z25_persistent-net.rules.
It still updates the file, it just doesn't comment it as clearly.
There are some lines in z45_persistent-net-generator.rules
whose comments say they're disabling particular devices, by adding a rule
GOTO="persistent_net_generator_end". But adding that
in the firewire device lines caused the boot process to hang.
There may be a way to ignore a device from this file, but I haven't
found it, nor any documentation on how this system works.
Defeated, I switched to approach 2: prevent the module from loading at
all. I never expect to use firewire networking, so it's no loss. And indeed,
there are lots of other modules loaded I'd like to blacklist, since
they represent hardware this machine doesn't have. So it would be
nice to learn how.
I had a vague memory of there having been a file with a name like
/etc/modules.blacklist some time back in the Pliocene.
But apparently no such file exists any more.
I did find /etc/modprobe.d/blacklist, which looked
promising; but the comment at the beginning of that file says
# This file lists modules which will not be loaded as the result of
# alias expansion, with the purpose of preventing the hotplug subsystem
# to load them. It does not affect autoloading of modules by the kernel.
Okay, sounds like this file isn't what I wanted. (And ... hotplug? I
thought that was long gone, replaced by udev scripts.)
I tried it anyway. Sure enough, not what I wanted.
I fiddled with several other approaches before Debian diva Erinn Clark
found this helpful page.
I created a file called /etc/modprobe.d/00local
and added this line to it:
install eth1394 /bin/true
and on the next boot, the module was no longer loaded, and no longer
showed up as a bogus ethernet device. Hurray!
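To confirm the blacklisting worked after a reboot, you can check /proc/modules directly. This is my own quick sketch, not part of the original fix; it prints a line either way:

```shell
# report whether the eth1394 module is currently loaded
if grep -q '^eth1394 ' /proc/modules; then
    echo "eth1394 still loaded"
else
    echo "eth1394 not loaded"
fi
```

(lsmod would show the same thing; it's just a formatted view of /proc/modules.)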
This /etc/modprobe.d/00local technique probably doesn't bear
examining too closely. It has "hack" written all over it.
But if that's the only way to blacklist problematic modules,
I guess it's better than nothing.
My laptop has always been able to sleep (suspend to RAM), one way
or another, but I had never managed it on a desktop machine.
Every time I tried running something like
apm -s, apm -S, echo 3 >/sys/power/state, or Ubuntu's
/etc/acpi/sleep.sh, the machine would sleep nicely, then when I
resumed it would come up partway then hang, or would simply boot
rather than resuming.
Dave was annoyed by it too: his Mac G4 sleeps just fine, but none
of his Linux desktops could. And finally he got annoyed enough to
spend half a day playing with different options. With what he
learned, both he and I now have desktops that can suspend to RAM
(his under Debian Sarge, mine under Ubuntu Edgy).
One step was to install hibernate (available as
a deb package in both Sarge and Edgy, but distros which don't offer
it can probably get it from somewhere on suspend2.net).
The hibernate program suspends to disk by default (which
is what its parent project, suspend2, is all about) but it
can also suspend to RAM, with the following set of arcane arguments:
hibernate -v 4 -F /etc/hibernate/ram.conf
(the -v 4 adds a lot of debugging output; remove it once
you have things working).
Though actually, in retrospect I suspect I didn't need to install
hibernate at all, and Ubuntu's /etc/acpi/sleep.sh script would
have done just as well, once I'd finished the other step:
Fiddle with BIOS options. Most BIOSes have a submenu named something
like "Power Management Options", and they're almost always set wrong
by default (if you want suspend to work). Which ones are wrong
depends on your BIOS, of course. On Dave's old PIII system, the
key was to change "Sleep States" to include S3 (S3 is the ACPI
suspend-to-RAM state). He also enabled APM sleep, which was disabled
by default but which works better with the older Linux kernels he
uses under Sarge.
On my much newer AMD64 system, the key was an option to "Run VGABIOS
if S3 Resume", which was turned off by default. So I guess it wasn't
re-enabling the video when I resumed. (You might think this would
mean the machine comes up but doesn't have video, but it's never
as simple as that -- the machine came up with its disk light solid
red and no network access, so it wasn't just the screen that was
futzed.)
Such a simple fix! I should have fiddled with BIOS settings long
ago. It's lovely to be able to suspend my machine when I go away
for a while. Power consumption as measured on the Kill-a-Watt
goes down to 5 watts, versus 3 when the machine is "off"
(desktop machines never actually power off, they're always sitting
there on standby waiting for you to press the power button)
and about 75 watts when the machine is up and running.
Now I just have to tweak the suspend scripts so that it gives me a
new desktop background when I resume, since I've been having so much
fun with my random
wallpaper script.
Later update: Alas, I was too optimistic. Turns out it actually only
works about one time out of three. The other two times, it hangs
after X comes up, or else it initially reboots instead of resuming.
Bummer!
For many years, I used extlinux as my boot loader to avoid having to
deal with
the
annoying and difficult grub2. But that was on MBR machines.
I never got the sense that extlinux was terribly well supported
in the newer UEFI/Secure Boot world. So when I bought my current
machine a few years ago, I bit the bullet and let Ubuntu's installer
put grub2 on the hard drive.
One of the things I lost in that transition was a boot splash image.
Another in the series of "I keep forgetting how to do this and
have to figure it out again each time, so this time I'm going to
write it down!"
Enabling remote X in a distro that disables it by default:
Of course, you need xhost. For testing, xhost +
enables access from any machine; once everything is working, you
may want to be selective, xhost hostname for the hosts from
which you're likely to want to connect.
If you log in to the console and start X, check
/etc/X11/xinit/xserverrc and see if it starts X with the
-nolisten flag. This is usually the problem, at least on
Debian derivatives: remove the -nolisten tcp.
If you log in using gdm, gdmconfig has an option in the
Security tab: "Always disallow TCP connections to X server
(disables all remote connections)". Un-checking this solves the
problem, but logging out won't be enough to see the change.
You must restart X; Ctrl-Alt-Backspace will do that.
Update: If you use kdm, the configuration to change is in
/etc/kde3/kdm/kdmrc
I use wireless so seldom that it seems like each time I need it, it's
a brand new adventure finding out what has changed since the last time
to make it break in a new and exciting way.
This week's wi-fi adventure involved Ubuntu's current "Gutsy Gibbon"
release and my prism54 wireless card. I booted the machine,
switched to the right network scheme
(http://shallowsky.com/linux/networkSchemes.html),
inserted the card, and ... no lights.
ifconfig -a showed the card on eth1 rather
than eth0.
After some fiddling, I ejected the card and re-inserted it; now
ifconfig -a showed it on eth2. Each time I
inserted it, the number incremented by one.
Ah, that's something I remembered from
Debian
Etch -- a problem with the udev "persistent net rules" file in
/etc/udev.
Sure enough, /etc/udev/rules.d/70-persistent-net.rules had two entries
for the card, one on eth1 and the other on eth2. Ejecting and
re-inserting added another one for eth3. Since my network scheme is
set up to apply to eth0, this obviously wouldn't work.
A comment in that file says it's generated from
75-persistent-net-generator.rules. But unfortunately,
the rules used by that file are undocumented and opaque -- I've never
been able to figure out how to make a change in its behavior.
I fiddled around for a bit, then gave up and chose the brute force
solution:
Edit /etc/udev/70-persistent-net.rules to give the
device the right name (eth1, eth0 or whatever).
And that worked fine. Without 75-persistent-net-generator.rules
getting in the way, the name seen in 70-persistent-net.rules
works fine and I'm able to use the network.
The weird thing about this is that I've been using Gutsy with my wired
network card (a 3com) for at least a month now without this problem
showing up. For some reason, the persistent net generator doesn't work
for the Prism54 card though it works fine for the 3com.
A scan of the Ubuntu bug repository reveals lots of other people
hitting similar problems on an assortment of wireless cards;
bug
153727 is a fairly typical report, but the older
bug 31502
(marked as fixed) points to a likely reason this is so common on
wireless cards -- apparently some of them report the wrong MAC address
before the firmware is loaded.
It's nice to be back on a relatively minimal Debian install,
instead of Ubuntu-with-everything. But one thing that I have
to admit I appreciated about Ubuntu: printing "just worked".
Turn on a printer, call up the print menu in any app, and the
printer I turned on would be there in the menu, without any need
of struggling with CUPS configurations.
Ubuntu was using Avahi, the Linux version of Apple's Zeroconf/Bonjour
framework, to discover printers. I knew that I'd probably need to
install Avahi if I wanted easy printer configuration on Debian.
But as it turned out, getting printing working was both harder, and easier.
Great talk at PenLUG by Chander Kant of Linux Certified,
on Linux on Laptops. It was much the same talk he gave a month ago
at SVLUG, except at PenLUG he got to give his talk without being
pestered by a bunch of irritating flamers, and it was possible for
people to ask real questions and get answers.
The biggest revelation: "Suspend to Disk" in the 2.6 kernel is
actually ACPI S4 suspend, and won't do anything on a machine that
doesn't support ACPI S4. Why can't they say that in the help?
Sheesh!
I'm experimenting with Ubuntu's "Hardy Heron" beta on the laptop, and
one problem I've hit is that it never configures my network card properly.
The card is a cardbus 3Com card that uses the 3c59x driver.
When I plug it in, or when I boot or resume after a suspend, the
card ends up in a state where it shows up in ifconfig eth0,
but it isn't marked UP. ifup eth0 says it's already up;
ifdown eth0 complains
error: SIOCDELRT: No such process
but afterward, I can run ifup eth0 and this time it
works. I've made an alias, net, that does
sudo ifdown eth0; sudo ifup eth0. That's silly --
I wanted to fix it so it happened automatically.
Unfortunately, there's nothing written anywhere on debugging udev.
I fiddled a little with udevmonitor and
udevtest /class/net/eth0 and it looked like udev
was in fact running the ifup rule in
/etc/udev/rules.d/85-ifupdown.rules, which calls:
/sbin/start-stop-daemon --start --background --pidfile /var/run/network/bogus --startas /sbin/ifup -- --allow auto $env{INTERFACE}
So I tried running that by hand (with $env{INTERFACE} being eth0)
and, indeed, it didn't bring the interface up.
But that suggested a fix: how about adding --force
to that ifup line? I don't know why the card is already in a state
where ifup doesn't want to handle it, but it is, and maybe
--force would fix it. Sure enough: that worked fine,
and it even works when resuming after a suspend.
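For reference, the modified command in 85-ifupdown.rules would then read something like this (my reconstruction with --force added after the "--" separator, so it's passed to ifup; check the exact wording in your own file):

```
/sbin/start-stop-daemon --start --background --pidfile /var/run/network/bogus --startas /sbin/ifup -- --force --allow auto $env{INTERFACE}
```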
I filed bug
211955 including a description of the fix. Maybe there's some
reason for not wanting to use --force in 85-ifupdown
(why wouldn't you always want to configure a network card when it's
added and is specified as auto and allow-hotplug in
/etc/network/interfaces?) but if so, maybe someone will
suggest a better fix.
I've been experimenting with Ubuntu's second release, "Hoary
Hedgehog" off and on since just before it was released.
Overall, I'm very impressed. It's quite usable on a desktop machine;
but more important, I'm blown away by the fact that Ubuntu's kernel
team has made a 2.6 acpi kernel that actually works on my aging but
still beloved little Vaio SR17 laptop. It can suspend to RAM (if I
uncomment ACPI_SLEEP in /etc/default/acpi-support), it can
suspend to disk, it gets power button events (which are easily
customizable: by default it shuts the machine down, but if I replace
powerbtn.sh with a single line calling sleep.sh, it
suspends), it can read the CPU temperature. Very cool.
One thing didn't work: USB stopped working when resuming after a
suspend to RAM. It turned out this was a hotplug problem, not a kernel
problem: the solution was to add calls to /etc/init.d/hotplug
stop and /etc/init.d/hotplug start in the
/etc/acpi/sleep.sh script.
Problem solved (except now resuming takes forever, as does
booting; I need to tune that hotplug startup script and get rid of
whatever is taking so long).
Sonypi (the jogdial driver) also works. It isn't automatically loaded
(I've added it to /etc/modules), and it disables the power button (so
much for changing the script to call sleep.sh), a minor
annoyance. But when loaded, it automatically creates /dev/sonypi, so I
don't have to play the usual guessing game about which minor number it
wanted this time.
Oh, did I mention that the Hoary live CD also works on the Vaio?
It's the first live linux CD which has ever worked on this machine
(all the others, including rescue disks like the Bootable Business
Card and SuperRescue, have problems with the Sony PCMCIA-emulating-IDE
CD drive). It's too slow to use for real work, but the fact that it
works at all is amazing.
I have to balance this by saying that Ubuntu's not perfect.
The installer, which is apparently the Debian Sarge installer
dumbed down to reduce the number of choices, is inconsistent,
difficult, and can't deal with a networkless install (which, on
a laptop which can't have a CD drive and networking at the same time
because they both use the single PCMCIA slot, makes installation quite
tricky). The only way I found was to boot into expert mode, skip the
network installation step, then, after the system was up and running
(and I'd several times dismissed irritating warnings about how it
couldn't find the network, therefore "some things" in gnome wouldn't
work properly, and did I want to log in anyway?) I manually edited
/etc/network/interfaces to configure my card (none of Ubuntu's
built-in hardware or network configuration tools would let me
configure my vanilla 3Com card; presumably they depend on something
that would have been done at install time if I'd been allowed to
configure networking then). (Bug 2835.)
About that expert mode: I needed that even for the desktop,
because hoary's normal installer doesn't offer an option for
a static IP address. But on both desktop and laptop this causes a
problem. You see, hoary's normal mode of operation is to add the
first-created user to the sudoers list, and then not create a root
account at all. All of their system administration tools depend on the
user being in the sudoers file. Fine. But someone at ubuntu apparently
decided that anyone installing in expert mode probably wants a root
account (no argument so far) and therefore doesn't need to be in the
sudoers file. Which means that after the install, none of the admin
tools work; they just pop up variants on a permission denied dialog.
The solution is to use visudo to add yourself to
/etc/sudoers. (Bugs 7636 and
9832.)
Expert mode also has some other bugs, like prompting over and over for
additional kernel modules (bug 5999).
Okay, so nothing's perfect. I'm not very impressed with Hoary's
installer, though most of its problems are inherited from Sarge.
But once it's on the machine, Hoary works great. It's a modern
Debian-based Linux that gets security upgrades (something Debian
hasn't been able to do, though they keep making noises about finally
releasing Sarge). And there's that amazing kernel. Now that I have the
hotplug-on-resume problem fixed, I'm going to try using it as the
primary OS on the laptop for a while, and see how it goes.
Several years ago, my job required me to use a program -- never mind
which one -- from a prominent closed-source company. This program was
doing various annoying things in addition to its primary task --
operations that got around the window manager and left artifacts
all over my screen, operations that potentially opened files other
than the ones I asked it to open -- but in addition, I noticed that
when I ran the program, the lights on the DSL modem started going crazy.
It looked like the program was making network connections, when it had
no reason to do that. Was it really doing that?
Unfortunately, at the time I couldn't find any Linux command that would
tell me the answer. As mentioned in the above Ubuntu thread, there are
programs for Mac and even Windows to tell you this sort of information,
but there's no obvious way to find out on Linux.
The discussion ensuing in the ubuntu-devel-discuss thread tossed
around suggestions like apparmor and selinux -- massive, complex ways
of putting up fortifications around your whole system. But nobody seemed to
have a simple answer to how to find information about what apps
are making network connections.
Well, it turns out there are a couple of simple ways to get that list.
First, you can use ss:
$ ss -tp
Update:
you might also want to add listening connections where programs
are listening for incoming connections: ss -tpla
Though this may be less urgent if you have a firewall in place.
-t shows only TCP connections (so you won't see all the interprocess
communication among programs running on your machine). -p prints the
process associated with each connection.
ss can do some other useful things, too, like show all the programs
connected to your X server right now, or show all your ssh connections.
See man ss for examples.
Or you can use netstat:
$ netstat -A inet -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 imbrium.timochari:51800 linuxchix.osuosl.o:ircd ESTABLISHED 1063/xchat
tcp 0 0 imbrium.timochari:59011 ec2-107-21-74-122.:ircd ESTABLISHED 1063/xchat
tcp 0 0 imbrium.timochari:54253 adams.freenode.net:ircd ESTABLISHED 1063/xchat
tcp        0      0 imbrium.timochari:58158 s3-1-w.amazonaws.:https ESTABLISHED 1097/firefox-bin
In both cases, the input is a bit crowded and hard to read. If all you
want is a list of processes making connections, that's easy enough to do
with the usual Unix utilities like grep and sed:
$ ss -tp | grep -v Recv-Q | sed -e 's/.*users:(("//' -e 's/".*$//' | sort | uniq
$ netstat -A inet -p | grep '^tcp' | grep '/' | sed 's_.*/__' | sort | uniq
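To see what those sed expressions are pulling out, here they are applied to a single sample line of ss -tp output (the connection shown is invented for illustration):

```shell
# strip everything except the program name from an ss -tp output line
echo 'ESTAB 0 0 10.0.0.2:51800 10.0.0.1:6667 users:(("xchat",pid=1063,fd=34))' |
    sed -e 's/.*users:(("//' -e 's/".*$//'
# prints: xchat
```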
Finally, you can keep an eye on what's going on by using watch to run
one of these commands repeatedly:
watch ss -tp
Using watch with one of the pipelines to print only process names is
possible, but harder since you have to escape a lot of quotation marks.
If you want to do that, I recommend writing a script.
And back to the concerns expressed on the Ubuntu thread,
you could also write a script to keep logs of which processes made
connections over the course of a day. That's definitely a tool I'll
keep in my arsenal.
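Such a logger could be a trivial shell script along these lines. This is my own sketch -- the script name and log path are made up -- and you'd run it from cron or a loop:

```shell
#!/bin/sh
# Append a timestamped list of processes that currently have
# TCP connections open. Relies on ss (from iproute2).
LOG="${1:-/tmp/connections.log}"
{
    date
    ss -tp | grep -v Recv-Q |
        sed -e 's/.*users:(("//' -e 's/".*$//' | sort | uniq
} >> "$LOG"
```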
My laptop, a Sony Vaio TX650P, badly needed a kernel update.
2.6.28.3 had been running so well that I just hadn't gotten around
to changing it. When I finally updated to 2.6.31.5, nearly everything
worked, with one exception:
Fn-F5 and Fn-F6 no longer adjusted screen brightness.
I found that there were lots of bugs filed on this problem --
kernel.org
bug 12816,
Ubuntu
bug 414810,
Fedora
bug 519105
and so on. A few of them had terse comments like "Can you try this
patched binary release?" but none of them had a source patch,
or any hints as to the actual nature of the problem.
But one of them did point me to
/sys/class/backlight. In my working 2.6.28.3 kernel, that
directory contained a sony subdirectory containing useful files
that let me query or change the brightness:
echo 1 >/sys/class/backlight/sony/brightness
On my nonworking 2.6.31.5 kernel, I had /sys/class/backlight
but there was no sony subdirectory there.
grep SONY .config in
the two kernels revealed that my working kernel had SONY_LAPTOP set,
while the nonworking one did not. No problem! Just figure out where
SONY_LAPTOP is in the configuration (it turns out to be at the very
end of "device drivers" under "X86 Platform Specific Device Drivers"),
make menuconfig, set SONY_LAPTOP and rebuild ... right?
Well, no. make menuconfig showed me lots of laptop
manufacturers in the "Platform Specific" category, but Sony
wasn't one of them. Of course, since it didn't show the option it
also didn't offer a chance to read the help for the option either,
which might have told me what its dependencies were.
Time for a recursive grep of the kernel source:
grep -r SONY_LAPTOP .
That told me the option did indeed still exist, and where to find it:
it appears in arch/x86/configs/i386_defconfig, and drivers/platform/x86/Kconfig
lists the option itself, and says it depends on INPUT and RFKILL.
RFKILL? A bit more poking around located that one near the end of
"Networking support", with the name "RF switch subsystem support".
(I love how user-visible names for kernel options only marginally
resemble the CONFIG names.)
It's apparently intended for
"control over RF switches found on many WiFi and Bluetooth cards,"
something I don't seem to need on this laptop (my WiFi works fine
without it) -- except that the kernel for some reason won't let me
build the ACPI brightness control without it.
So that's the secret. Go to "Networking support", set
"RF switch subsystem support", then back up
to "Device drivers", scroll down to the end to find
"X86 Platform Specific Device Drivers" and inside it you'll now see
"Sony Laptop Extras". Enable that, and the Fn-F5/F6 keys (as well as
/sys/class/backlight/sony/brightness) will work again.
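Once both options are enabled, the relevant lines in the kernel .config should look something like this (=y builds them in; =m, building them as modules, works too):

```
CONFIG_RFKILL=y
CONFIG_SONY_LAPTOP=y
```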
And, being Debian, there's no real bug system so you can't just CC
yourself on the bug to see when new fixes might be available to try.
You just have to wait, try every few days and see if the system
has been fixed.
That's hard when the system doesn't work at all. Last week, I was
booting into a shell but X wouldn't run, so at least I could pull
updates. This week, X starts but the keyboard and mouse don't work at all,
making it hard to run an upgrade.
Fortunately, I have an install of Debian stable ("Jessie") on this
system as well. When I partition a large disk I always reserve several
root partitions so I can try out other Linux distros, and when running
the more experimental versions, like Sid, sometimes that's a life saver.
So I've been running Jessie while I wait for Sid to get fixed.
The only trick is: how can I upgrade my Sid partition while running
Jessie, since Sid isn't usable at all?
I have an entry in /etc/fstab that lets me mount my Sid
partition easily:
/dev/sda6 /sid ext4 defaults,user,noauto,exec 0 0
So I can type mount /sid as myself, without even needing
to be root.
But Debian's apt upgrade tools assume everything will be on /,
not on /sid. So I'll need to use chroot /sid (as root)
to change the root of the filesystem to /sid. That only affects
the shell where I type that command; the rest of my system will still be
happily running Jessie.
Mount the special filesystems
That mostly works, but not quite, because I get a lot of errors like
permission denied: /dev/null.
/dev/null is a device: you can write to it and the bytes disappear,
as if into a black hole except without Hawking radiation.
Since /dev is implemented by the kernel and udev, in the chroot
it's just an empty directory. And if a program opens /dev/null in
the chroot, it might create a regular file there and actually write to it.
You wouldn't want that: it eats up disk space and can slow things down a lot.
The way to fix that is before you chroot:
mount --bind /dev /sid/dev
which will make /sid/dev a mirror of the real /dev.
It has to be done before the chroot because inside the chroot,
you no longer have access to the running system's /dev.
But there is a different syntax you can use after chrooting:
mount -t proc proc proc/
mount --rbind /sys sys/
mount --rbind /dev dev/
It's a good idea to do this for /proc and /sys as well,
and Debian recommends adding /dev/pts (which must be done after you've
mounted /dev), even though most of these probably won't come into play
during your upgrade.
Mount /boot
Finally, on my multi-boot system, I have one shared /boot
partition with kernels for Jessie, Sid and any other distros I have
installed on this system. (That's
somewhat
hard to do using grub2
but easy on Debian
though you may need to
turn
off auto-update and
Debian
is making it harder to use extlinux now.)
Anyway, if you have a separate /boot partition, you'll want it mounted
in the chroot, in case the update needs to add a new kernel.
Since you presumably already have the same /boot mounted on the
running system, use mount --bind for that as well.
So here's the final set of commands to run, as root:
mount /sid
mount --bind /proc /sid/proc
mount --bind /sys /sid/sys
mount --bind /dev /sid/dev
mount --bind /dev/pts /sid/dev/pts
mount --bind /boot /sid/boot
chroot /sid
And then you can proceed with your apt-get update,
apt-get dist-upgrade etc.
When you're finished, you can unmount everything with one command:
But one thing that bugs me about live distros:
they're set up with default settings and don't
have a lot of the programs I want to use. Even getting a terminal
takes quite a lot of clicks on most distros. If only they would save
their settings!
It's possible to make a live USB stick "persistent", but not much is
written about it. Most of what's written tells you to create the USB
stick with usb-creator -- a GUI app that I've tried periodically for
the past two years without ever once succeeding in creating a bootable
USB stick.
Even if usb-creator did work, it wouldn't work with a multi-boot
stick like this one, because it would want to overwrite the whole drive.
So how does persistence really work? What is usb-creator doing, anyway?
How persistence works: Casper
The best howto I've found on Ubuntu persistence is
LiveCD
Persistence. But it's long and you have to wade through a lot of
fdisk commands and similar arcana. So here's how to take your
multi-distro stick and make at least one of the installs persistent.
Ubuntu persistence uses a package called casper which overlays
the live filesystem with the contents of another filesystem.
Figuring out where it looks for that filesystem is the key.
Casper looks for its persistent storage in two possible places: a
partition with the label "casper-rw", and a file named
"casper-rw" at the root of its mounted partitions.
So you could make a separate partition labeled "casper-rw", using your
favorite partitioning tool, such as gparted or fdisk. But if you already
have your multi-distro stick set up as one big partition, it's just as
easy to create a file. You'll have to decide how big to make the file,
based on the size of your USB stick.
I'm using a 4G stick, and I chose 512M for my persistent partition:
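Creating that 512M file can be done with dd from /dev/zero; something like this (a sketch -- /path/to/casper-rw is a placeholder for a location on the mounted stick):

```shell
# create a 512 MB file to hold the persistent overlay filesystem
# (/path/to/casper-rw is a placeholder path on the mounted USB stick)
dd if=/dev/zero of=/path/to/casper-rw bs=1M count=512
```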
Next, create a filesystem inside that file. I'm not sure what the
tradeoffs are among various filesystem types -- no filesystem is
optimized for being run as a loopback file read from a vfat USB stick
that was also the boot device. So I flipped a coin and used ext3:
$ mkfs.ext3 /path/to/casper-rw
/path/to/casper-rw is not a block special device.
Proceed anyway? (y,n) y
One more step: you need to add the persistent flag to your boot
options. If you're following the multi-distro USB stick tutorial I
linked to earlier, that means you should edit boot/grub/grub.cfg on
the USB stick, find the boot stanza you're using for Ubuntu, and make
the line starting with linux look something like this:
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash noprompt persistent --
Now write the stick, unmount it, and try booting your live install.
Testing: did it work?
The LiveCD/Persistence page says persistent settings aren't
necessarily saved for the default "ubuntu" user, so it's a good idea
to make a new user. I did so.
Oops -- about that Ubuntu new user thing
But, at least in Ubuntu Oneiric, there's a problem with that. If you
create a user, even as class Administrator (and of course you do want
to be an Administrator), it doesn't ask you for a password. If you
now log out or reboot, your new user should be saved -- but you won't
be able to do anything with the system, because anything that requires
sudo will prompt you for your nonexistent password. Even attempting to
set a password will prompt you for the nonexistent password.
Apparently you can "unlock" the user at the time you create it, and
then maybe it'll let you set a password. I didn't know this beforehand,
so here's how to set a password on a locked user from a terminal:
$ sudo passwd username
For some reason, sudo will let you do this without prompting for a
password, even though you can't do anything administrative through the GUI.
Testing redux
Once you're logged in as your new user, try making some changes.
Add and remove some items from the unity taskbar. Install a couple
of packages. Change the background.
Now try rebooting. If your casper-rw file worked, it should remember your
changes.
When you're not booted from your live USB stick, you can poke around
in the filesystem it uses by mounting it in "loopback" mode.
Plug the stick into a running Linux machine, mount the USB stick,
then mount the casper-rw file with
$ sudo mount -o loop /path/to/casper-rw /mnt
/path/to is wherever you mounted your USB stick -- e.g. /media/whatever.
With the file mounted in loopback mode,
you should be able to adjust settings or add new files without
needing to boot the live install -- and they should show up the
next time you use the live install.
My live Ubuntu Oneiric install is so much more fun to use now!
Someone asked on a mailing list whether to upgrade to a new OS release
when her current install was working so well. I thought I should write
up how I back up my old systems before attempting a risky upgrade
or new install.
On my disks, I make several relatively small partitions, maybe
15G or so (pause to laugh about what I would have thought ten or
even five years ago if someone told me I'd be referring to 15G
as "small"), one small shared /boot partition, a swap partition,
and use the rest of the disk for /home or other shared data.
Now you can install a new release, like 9.04, onto a new partition
without risking your existing install.
If you prefer upgrading rather than running the installer, you can
do that too. I needed a jaunty (9.04) install to test
whether a bug was fixed. But my intrepid (8.10) is working fine and
I know there are some issues with jaunty, so I didn't want to risk
the working install. So from Intrepid, I copied the whole root
partition over to one of my spare root partitions, sda5:
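The copy goes something like this (a sketch, since the exact commands depend on your layout; /dev/sda5 and the /jaunty mount point are the names used in the rest of this article):

```
# Make a fresh filesystem on the spare partition, mount it,
# and copy the running root filesystem into it.
# cp -a preserves permissions and links; -x stays on one filesystem,
# so separate partitions like /home and pseudo-filesystems are skipped.
mkfs.ext3 /dev/sda5
mkdir -p /jaunty
mount /dev/sda5 /jaunty
cp -ax /. /jaunty/
```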
(That last step takes quite a while: you're copying the whole system.)
Now there are a couple of things you have to do to make that /jaunty
partition work as a bootable install:
1. /dev on an Ubuntu system isn't a real file system, but something
magically created by the kernel and udev. But to boot, you need some
basic stuff there. When you're up and running, that's stored in
/dev/.static, so you can copy it like this:
cp -ax /dev/.static/dev/ /jaunty/
2021: Ignore this whole section.
For several years, Linux distros
have been able to create /dev on their own, without needing any seed
files. So there's no need to create any devices. Skip to #2, fstab,
which is definitely still needed.
Note: it used to work to copy it to /jaunty/dev/.
The exact semantics of copying directories in cp and rsync, and
where you need slashes, seem to vary with every release.
The important thing is that you want /jaunty/dev to end up
containing a lot of devices, not a directory called dev or
a directory called .static. So fiddle with it after the cp -ax
if you need to.
Note 2: Doesn't it just figure? A couple of days
after I posted this, I found out that the latest udev has removed
/dev/.static so this doesn't work at all any more. What you can do
instead is:
cd /jaunty/dev
/dev/MAKEDEV generic
Note 3: If you're running MAKEDEV from Fedora, it
will target /dev instead of the current directory, so you need
MAKEDEV -d /whatever/dev generic.
However, caution: on Debian and Ubuntu -d deletes the
devices. Check man MAKEDEV first to be sure.
Ain't consistency wonderful?
2. /etc/fstab on the system you just created points to the wrong
root partition, so you have to fix that. As root, edit /etc/fstab
in your favorite editor (e.g. sudo vim /etc/fstab or whatever)
and find the line for the root filesystem -- the one where the
second entry on the line is /. It'll look something like this:
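(On a stock Ubuntu install the root line is keyed by UUID; the hex string below is a made-up placeholder, not a real UUID.)

```
# /dev/sda1
UUID=1234abcd-12ab-34cd-56ef-1234567890ab / ext3 relatime,errors=remount-ro 0 1
```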
The easy fix is to change that to point to your new disk partition:
# jaunty is now on /dev/sda5
/dev/sda5 / ext3 relatime,errors=remount-ro 0 1
If you want to do it the "right", Ubuntu-approved way, with UUIDs,
you can get the UUID of your disk this way:
ls -l /dev/disk/by-uuid/ | grep sda5
Take the UUID (that's the big long hex number with the dashes) and
put it after the UUID= in the original fstab line.
While you're editing /etc/fstab, be sure to look for any lines that
might mount /dev/sda5 as something other than root and delete them
or comment them out.
The following section describes how to update grub1.
Since most distros now use grub2, it is out of date.
With any luck, you can run update-grub and it will notice
your new partition; but you might want to fiddle with entries in
/etc/grub.d to label it more clearly.
Now you should have a partition that you can boot into and upgrade.
Now you just need to tell grub about it. As root, edit
/boot/grub/menu.lst and find the line that's booting your current
kernel. If you haven't changed the file yourself, that's probably
the first stanza in the automagically generated kernel list.
Make a copy of this whole stanza, so you have two identical copies,
and edit one of them. (If you edit the first of them, the new OS
will be the default when you boot; if you're not that confident,
edit the second copy.) Change the two UUIDs to point to your new disk
partition (the same UUID you just put into /etc/fstab) and change
the Title to say 9.04 or Jaunty or My Copy or whatever you want the
title to be (this is the title that shows up in the grub menu when
you first boot the machine).
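The edited copy might end up looking something like this (the kernel version, paths and UUIDs here are made-up placeholders -- keep whatever your existing stanza has, changing only the title and the two UUIDs):

```
title    My Copy: Ubuntu 9.04, Jaunty
uuid     1234abcd-12ab-34cd-56ef-1234567890ab
kernel   /vmlinuz-2.6.28-11-generic root=UUID=1234abcd-12ab-34cd-56ef-1234567890ab ro quiet splash
initrd   /initrd.img-2.6.28-11-generic
```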
Now you should be able to boot into your new partition.
Most things should basically work -- certainly enough to start
a do-release-upgrade without risking your original
install.
I finally got around to upgrading to the current Ubuntu, Intrepid
Ibex. I know Intrepid has been out for months and Jaunty is just
around the corner; but I was busy with the run-up to a couple of
important conferences when Intrepid came out, and couldn't risk
an upgrade. Better late than never, right?
The upgrade went smoothly, though with the usual amount of
babysitting, watching messages scroll by for a couple of hours
so that I could answer the questions that popped up
every five or ten minutes. Question: Why, after all these years
of software installs, hasn't anyone come up with a way to ask all
the questions at the beginning, or at the end, so the user can
go have dinner or watch a movie or sleep or do anything besides
sit there for hours watching messages scroll by?
XKbOptions: getting Ctrl/Capslock back
The upgrade finished, I rebooted, everything seemed to work ...
except my capslock key wasn't doing ctrl as it should. I checked
/etc/X11/xorg.conf, where that's set ... and found the whole
file commented out, preceded by the comment:
# commented out by update-manager, HAL is now used
Oh, great. And thanks for the tip on where to look to get my settings
back. HAL, that really narrows it down.
Google led me to a forum thread on
Intrepid
xorg.conf - input section. The official recommendation is to
run sudo dpkg-reconfigure console-setup ... but of
course it doesn't allow for options like ctrl/capslock.
(It does let you specify which key will act as the Compose key,
which is thoughtful.)
Fortunately, the release
notes give the crucial file name: /etc/default/console-setup.
The XKBOPTIONS= line in that file is what I needed.
It also had the useful XKBOPTIONS="compose:menu" option left over
from my dpkg-reconfigure run. I hadn't known about that before; I'd
been using xmodmap to set my multi key. So my XKBOPTIONS now looks
like: "ctrl:nocaps,compose:menu".
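So the relevant line in /etc/default/console-setup now reads:

```
XKBOPTIONS="ctrl:nocaps,compose:menu"
```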
Fixing the network after resume from suspend
Another problem I hit was suspending on my desktop machine.
It still suspended, but after resuming, there was no network.
The problem turned out to lie in
/etc/acpi/suspend.d/55-down-interfaces.sh.
It makes a list of interfaces which should be brought up and down like this:
IFDOWN_INTERFACES="`cat /etc/network/run/ifstate | sed 's/=.*//'`"
IFUP_INTERFACES="`cat /etc/network/run/ifstate`"
However, there is no file /etc/network/run/ifstate, so this always
fails and so /etc/acpi/resume.d/62-ifup.sh fails to bring
up the network.
Google to the rescue again. The bad thing about Ubuntu is that they
change random stuff so things break from release to release. The good
thing about Ubuntu is a zillion other people run it too, so whatever
problem you find, someone has already written about it. Turns
out ifstate is actually in /var/run/network/ifstate
now, so making that change in
/etc/acpi/suspend.d/55-down-interfaces.sh
fixes suspend/resume.
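That is, the two lines in 55-down-interfaces.sh become (same script, just with the corrected path):

```
IFDOWN_INTERFACES="`cat /var/run/network/ifstate | sed 's/=.*//'`"
IFUP_INTERFACES="`cat /var/run/network/ifstate`"
```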
It's bug
295544, fixed in Jaunty and nominated for Intrepid (I just learned
about the "Nominate for release" button, which I'd completely missed
in the past -- very useful!). It should be interesting to see if the fix
gets pushed to Intrepid, since networking after resume is completely
broken without it.
Otherwise, it was a very clean upgrade -- and now I can build
the GIMP trunk again, which was really the point of the exercise.