Shallow Thoughts : tags : linux
Akkana's Musings on Open Source Computing and Technology, Science, and Nature.
Mon, 21 Oct 2024
An upgrade on Debian unstable ("sid") a few days ago left me unable to ping.
When I tried, I got
ping: socket: Operation not permitted
with an additional reason of
missing cap_net_raw+p capability or setuid?
Ping worked fine as root, so it was a permission problem.
After some discussion on IRC with several helpful people in
#debian-next, I learned two ways of enabling it
(but read to the end before doing either of these,
since there's a better way).
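For reference, the capability route generally looks like this (a sketch of what one of those two ways presumably is, not the better fix the post ends with; the ping path may differ on your system):
sudo setcap cap_net_raw+p /usr/bin/ping    # grant ping the raw-socket capability
getcap /usr/bin/ping                       # verify: should list cap_net_raw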
Read more ...
Tags: linux, debian
[
12:37 Oct 21, 2024
More linux |
permalink to this entry |
]
Thu, 18 Apr 2024
I mentioned last month that I'm learning guitar. It's been going well
and I'm having fun. But I've gotten to the point where I sometimes get
chords confused: a song is listed as using E major and I play D major
instead.
Also, it's important to practice transitions between chords,
which is easy when you only know three chords; but with eight or so,
I had stopped practicing transitions in general and was only practicing
the ones that occur in songs I like to play.
I found myself wishing I had something like flash cards for guitar chords.
Someone must have already written that, right? But I couldn't find
anything promising with a web search. And besides, it's more fun to
write programs than to flail at unhelpful search engines, and you
always end up learning something new.
Read more ...
Tags: guitar, linux, programming, python, audio
[
20:02 Apr 18, 2024
More linux |
permalink to this entry |
]
Sat, 23 Mar 2024
I mentioned before that I'm taking beginner guitar lessons.
Justin recommends using a
metronome for some of the practicing, and that makes sense:
I notice that sometimes when I practice I try to go too fast,
which might or might not be good for learning the chord changes
but it also leads to more mistakes and worse chord quality.
There are probably lots of phone metronome apps,
but I'm usually practicing near my computer (where I watch
the lessons and where I keep all my notes on chords and rhythms for
particular songs), so I thought it would be nice to have a metronome
on Linux.
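For what it's worth, even sox can fake a crude metronome from a shell loop (a sketch, not necessarily the tool the full post settles on; timing drifts slightly because of the sleep):
# roughly 60 beats per minute; shorten the sleep for faster tempos ("play" comes with sox)
while true; do
    play -q -n synth 0.05 sine 880
    sleep 0.95
done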
Read more ...
Tags: guitar, music, cmdline, linux
[
18:37 Mar 23, 2024
More linux |
permalink to this entry |
]
Tue, 26 Dec 2023
In October I wrote about making a
Windows 10 that Boots off a USB Stick,
From Linux.
A Debian update today or yesterday (Merry Christmas!) broke that
and I spent a few hours today chasing that down.
There's a package called ovmf
that puts BIOS/firmware
related files
in /usr/share/OVMF/. The command I used in the earlier article
included the flag -bios /usr/share/OVMF/OVMF_CODE.fd
but as of today, -bios
apparently doesn't work any more
with any of the files there.
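For reference, the usual replacement for -bios with recent OVMF packages is a pair of pflash drives (a sketch, not necessarily the fix the full post describes; the exact file names under /usr/share/OVMF/ vary by ovmf version, and /dev/sdX stands in for whatever the USB stick is):
cp /usr/share/OVMF/OVMF_VARS_4M.fd /tmp/OVMF_VARS.fd
qemu-system-x86_64 -enable-kvm -m 4096 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE_4M.fd \
  -drive if=pflash,format=raw,file=/tmp/OVMF_VARS.fd \
  -drive file=/dev/sdX,format=raw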
Read more ...
Tags: linux, windows, virtualization, qemu, debian
[
18:01 Dec 26, 2023
More linux |
permalink to this entry |
]
Fri, 06 Oct 2023
Last year I wrote about my efforts to
share
files in QEMU between a Linux host and Windows guest.
Someone in the comments pointed me to WinFSP and WinSSHFS,
and I was able to get file sharing working that way, after
installing both those packages on the Windows guest.
But I recently found a much easier way, using QEMU's built-in SMB handler.
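The built-in handler amounts to one extra option on the user-mode NIC (a sketch; win10.qcow2 is a placeholder, and the host needs the samba package installed). Inside the guest the share shows up as \\10.0.2.4\qemu:
qemu-system-x86_64 -enable-kvm -m 4096 \
  -drive file=win10.qcow2,format=qcow2 \
  -nic user,smb=$HOME/shared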
Read more ...
Tags: virtualization, QEMU, linux, samba
[
08:02 Oct 06, 2023
More linux |
permalink to this entry |
]
Sun, 01 Oct 2023
In 2019, I wrote about struggling to get any sort of Windows booting off
an external USB stick, in order to
Install Lenovo Firmware Packaged as a .exe on a Linux Machine.
I ended up needing to borrow a real Windows machine and install Rufus
on it.
In 2023, things are much better. Aki at atkdinosaurus has written a
clear, concise tutorial on that topic:
How to create a Windows 10 installation on a USB stick in UEFI mode.
I love that it's all command-line, so you can duplicate the steps exactly.
Read more ...
Tags: linux, windows, virtualization, qemu, debian
[
10:07 Oct 01, 2023
More linux |
permalink to this entry |
]
Thu, 28 Sep 2023
My search for a good desktop Mastodon client has led me down a path
that involved learning some fun ways to interact with existing
browser windows on Linux with X programs like
xdotool and wmctrl.
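For a taste of what those tools can do (window titles here are just examples):
wmctrl -l                            # list windows and their titles
wmctrl -a "Mozilla Firefox"          # raise/focus the first window matching that title
xdotool search --name "Mastodon" windowactivate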
Like many people, I've switched from The App Formerly Known As Twitter
to Mastodon (where I'm
@akkana@fosstodon.org).
But the next question was which Mastodon app to use.
Read more ...
Tags: linux, firefox, browsers, python, X11, window managers, openbox
[
11:48 Sep 28, 2023
More linux |
permalink to this entry |
]
Fri, 25 Aug 2023
When I wrote about
Getting Linux System Notifications under Openbox,
I ended up tossing out the whole notification system and using zenity
to pop up a dialog directly. Specifically, a command like
XAUTHORITY=~/.Xauthority DISPLAY=:0 zenity --title "Hello" --info --text="Hello world"
But customizing zenity to make it more attention-getting turned out to
be more difficult than expected ...
Read more ...
Tags: linux, openbox, window managers
[
13:55 Aug 25, 2023
More linux |
permalink to this entry |
]
Mon, 21 Aug 2023
I've been a fan of Linux's lightweight window managers, particularly
Openbox, for many years.
But admittedly, there are some things they don't generally handle.
One of those is system notifications.
Mostly, I've been happy to go without notifications. Firefox is
forever asking me whether I want to turn them on for particular websites
(which wouldn't work, but Firefox doesn't know that), usually websites
I'm visiting for the first time and probably don't ever want to visit again,
let alone let them spam me with ads as notifications outside my browser.
But every now and then it would be handy.
Read more ...
Tags: linux, openbox, window managers
[
14:03 Aug 21, 2023
More linux |
permalink to this entry |
]
Tue, 08 Aug 2023
A few days ago I wrote about finding a way to
triage
videos by adding captions to mplayer. Better than nothing, but I
really wanted something like
pho or
MetaPho where I could
add the tags in the program itself, rather than keeping notes on a
piece of paper.
Turns out it wasn't that hard using VLC's Python bindings.
Read more ...
Tags: linux, video
[
13:18 Aug 08, 2023
More linux |
permalink to this entry |
]
Fri, 04 Aug 2023
I've recently hit a wall that I'd been avoiding: how to triage a bunch
of new videos and decide which ones are worth keeping.
For still photos, I do that with my
Pho
program if I just want to make yes or no decisions, or maybe mark two
or three categories. When the time comes for real tagging — which
images have sunsets, which have bicycles, which have coyotes and so on
— I use my MetaPho.
I take a lot of photos, so efficient triaging and tagging is important
to me.
But what about videos?
Read more ...
Tags: linux, video
[
20:23 Aug 04, 2023
More linux |
permalink to this entry |
]
Tue, 04 Apr 2023
After learning
how to prevent RawTherapee from intercepting requests for a file manager,
I'm happy not to have unwanted RawTherapee windows randomly popping up
whenever some program decides it wants to show me a directory.
For instance, in Firefox's Download Manager, there's a little folder
icon you can click on -- but it doesn't do anything useful if you
don't have a file manager installed.
I suppose I could install a file manager; thunar is relatively lightweight.
But it seems silly to have to install a whole GUI program I'll never
otherwise use just to find out where files were stored. Once I know
where to look, a terminal, with shell autocomplete, works fine for
navigating my directories, and is much faster and less RSI-inducing
than a mouse-based file manager.
Which raises the question:
can I make the system do something useful on directory requests,
and just show me where the file was stored, or give me a terminal
already chdired to the right place? Sort of a fake file manager?
It turned out to be fairly easy.
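The general shape of the trick, sketched with hypothetical paths (the full post has the real details): a tiny script that opens a terminal already chdired to the directory it's handed,
#!/bin/sh
# ~/bin/fake-filemgr: open a terminal in the requested directory
cd "$1" && exec xterm
plus a .desktop entry for it with MimeType=inode/directory;, made the default handler with something like:
xdg-mime default fake-filemgr.desktop inode/directory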
Read more ...
Tags: linux, mime, cmdline
[
11:24 Apr 04, 2023
More linux |
permalink to this entry |
]
Mon, 27 Mar 2023
I've been annoyed for some time by the way that Zoom, when it finishes
processing a recording after a meeting, pops up a ... Raw Therapee window??
RawTherapee is a program for handling RAW image files, the kind that
many digital cameras can generate but that most image apps can't read.
It's a fine program. But it's not a file manager, nor is it a video player.
It makes absolutely no sense to pop it up to handle a video file.
And it's very slow to start up, so I would leave a Zoom meeting, and
then half a minute later this weird window would pop up for no
apparent reason.
I've seen a few other programs, like wine, pop up these RawTherapee windows.
I've been trying for many months to figure out why
this happens, and I've finally found the answer, and a fix.
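The standard places to start poking, for anyone chasing a similar mystery (the actual culprit and fix are in the full article):
xdg-mime query default inode/directory      # who handles "open this folder" requests?
xdg-mime query default video/mp4            # who handles video files?
grep -i rawtherapee ~/.config/mimeapps.list /usr/share/applications/mimeinfo.cache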
Read more ...
Tags: linux, cmdline, mime
[
16:26 Mar 27, 2023
More linux |
permalink to this entry |
]
Sun, 19 Mar 2023
I back up my computer to a local disk (well, several redundant local disks)
using rsync. (I don't particularly trust cloud providers,
and in any case our internet connection is very slow, especially for upload,
so waiting hours while the entire contents of my disk uploads isn't appealing.)
To save space and time, I have a script that includes a list of files
and directories I don't need to back up: browser cache directories,
object files, build directories, generated files like thumbnails,
large video files, downloaded source, and so on.
I also have a list of files I do want to back up even though
they'd otherwise be excluded. For instance, I sometimes have local changes
in my GIMP source directory, outsrc/gimp-master/gimp/, even
though most of outsrc doesn't need to be backed up.
Or /blog/tags/build in my local mirror of the shallowsky
website, even though I have a rule that says directories named
build shouldn't usually be backed up.
I've been using rsync's --include
and --exclude
to handle this.
But I discovered yesterday that I'd been using them wrong, and some
things I thought were getting backed up, weren't.
It took some reading and experimenting before I figured out how
these rsync flags actually work — which doesn't seem to be
well explained anywhere.
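The core of the trap, boiled down to a sketch (paths abbreviated from the examples above, destination hypothetical; the full post walks through the real rules): rsync uses the first matching rule, and an --include for a deep path only takes effect if every parent directory is also matched by an include before the exclude kicks in.
rsync -av \
    --include='outsrc/' \
    --include='outsrc/gimp-master/' \
    --include='outsrc/gimp-master/gimp/***' \
    --exclude='outsrc/**' \
    "$HOME/" /backup/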
Read more ...
Tags: backups, linux, cmdline, python
[
16:11 Mar 19, 2023
More linux/cmdline |
permalink to this entry |
]
Sun, 18 Dec 2022
Back in 2015, I wrote a script for the mutt mailer (or any
plaintext mail program, really) to
view
MS Word documents (or other unfriendly formats) attached to emails.
(This is unfortunately something that comes up constantly in email
exchanges with League of Women Voters people —
Read more ...
Tags: email, mutt, programming, python, linux
[
18:52 Dec 18, 2022
More tech/email |
permalink to this entry |
]
Wed, 23 Nov 2022
Update: I have found a much easier way, using QEMU's built-in
Samba. See QEMU Windows Guest: Sharing Files with the Host.
Unexpectedly, one of the hardest parts of
Migrating
a VirtualBox Windows Virtual Machine to qemu/kvm/virt-manager
was finding a way to exchange files between Linux and Windows.
In virtualbox, setting up a shared folder is trivial.
In QEMU, not so much.
Read more ...
Tags: linux, virtualization, QEMU, windows
[
17:37 Nov 23, 2022
More linux |
permalink to this entry |
]
Sat, 12 Nov 2022
Ever since I got my Lenovo Carbon X1, I've wished there was some way
to limit the battery charge. I keep it plugged in to a USB hub and external
monitor most of the time, which means that the battery is at 100% for
weeks on end. That isn't particularly good for lithium ion batteries:
it's better for battery life to stop charging at around 80%.
Lots of laptops, including Dells and Apples, have a charge limit feature in
their BIOS, but I searched through the CX1's BIOS several times and
never found anything, so I'd resigned myself to it.
But just this week I accidentally stumbled on a way to set this at runtime!
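A hint at where such a runtime knob usually lives on ThinkPads, in case you want to poke at your own machine (the battery may be BAT0 or BAT1; the full post has the details):
cat /sys/class/power_supply/BAT0/charge_control_end_threshold
echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold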
Read more ...
Tags: linux, laptop, lenovo
[
16:17 Nov 12, 2022
More linux/laptop |
permalink to this entry |
]
Mon, 10 Oct 2022
I boot Linux in text mode, with all the boot-time messages showing.
There are several reasons for this, but one is that
I want to be able to see any errors that might arise — boot-time
errors aren't otherwise shown to the user.
However, many Linux distros, including Debian and Ubuntu, clear the
screen before showing a login prompt, making it impossible to read the
last few messages or find any errors.
Some years back, I looked into why this was happening, and found the
answer in Stop
Clearing My God Damned Console. It comes down to a line in
getty@tty1.service: TTYVTDisallocate=yes.
Change that to TTYVTDisallocate=no
and the terminal
will stop clearing before you log in.
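A tidier way to apply that change, rather than editing the unit file the package manager owns (a suggestion, not necessarily how the full post does it):
sudo systemctl edit getty@tty1.service
# in the editor that opens, add:
#   [Service]
#   TTYVTDisallocate=no
# the override lands in /etc/systemd/system/getty@tty1.service.d/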
Read more ...
Tags: linux, debian
[
19:08 Oct 10, 2022
More linux |
permalink to this entry |
]
Mon, 25 Apr 2022
A couple of small tips on QEMU/KVM/VirtManager that I picked up while
migrating my Windows 10 virtual machine from VirtualBox, for use
once you
get
virt-manager running and
migrate
your VirtualBox VMs to virt-manager/QEMU:
Read more ...
Tags: linux, virtualization, QEMU, windows
[
14:33 Apr 25, 2022
More linux |
permalink to this entry |
]
Sat, 16 Apr 2022
A month ago I wrote about
Getting virt-manager Running on Debian.
The ultimate goal of this was to migrate my Windows 10 install from
VirtualBox to QEMU, because VirtualBox is becoming increasingly
difficult to install on Linux, especially on Debian, which has
removed VirtualBox from Bookworm (testing) and there are indications that
it might be removed from Sid (unstable) as well. I gather there's
something unsavory about the license now that Oracle owns it,
but I haven't been following the details.
Anyway, after getting virt-manager running, I'd been putting off the
rest of the migration out of a suspicion that there lay dragons.
I was right: it took several days of struggling, but I now have
Windows 10 working under virt-manager and qemu/kvm. Here's how.
Read more ...
Tags: linux, virtualization, QEMU, windows
[
18:20 Apr 16, 2022
More linux |
permalink to this entry |
]
Sun, 20 Mar 2022
When I bought my Carbon X1 laptop a few years ago, the sound card was
new and not yet well supported by Linux. (I knew that before I bought,
and decided it was a good gamble that support would improve soon.)
Early on, there was a long thread on Lenovo's forum discussing,
in particular, how to get the microphone working. The key, apparently,
was the SOF (Sound Open Firmware) support, which was standard
in the 5.3 Linux kernel
that came with Ubuntu 19.10,
but needed an elaborate
script to get working in earlier kernels.
It worked fine on Ubuntu. But under Debian, the built-in
mic didn't work.
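On Debian the SOF firmware ships in its own package, so the missing piece is presumably something along these lines (a guess; the full post has what actually worked):
sudo apt install firmware-sof-signed
# after a reboot, check whether the driver found the firmware:
dmesg | grep -i sof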
Read more ...
Tags: linux, laptop
[
11:15 Mar 20, 2022
More linux/laptop |
permalink to this entry |
]
Thu, 17 Feb 2022
A conversation that happens every so often on a Linux chat channel:
That happened again a few weeks ago, and one of the virt-manager
enthusiasts on the channel wanted to help me track down the problems.
Since I didn't have anything much going on, I agreed, and kept at it
instead of giving up after the first few iterations.
It took about 45 minutes of
fiddling, installing more packages, web searching on the error messages
and discussing them on IRC, then fiddling some more,
getting a little further with each package I installed.
In the end, I did get virt-manager running.
Here's the list of packages I had to install, as well
as adding myself to the groups kvm, libvirt and libvirt-qemu:
apt install virt-manager libvirt-daemon qemu qemu-kvm libvirt0 libvirt-bin virt-manager bridge-utils libvirt-daemon-system qemu-system-x86 qemu-utils dnsmasq gir1.2-spiceclientgtk-3.0
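The group additions themselves are one command (log out and back in afterward for them to take effect):
sudo usermod -aG kvm,libvirt,libvirt-qemu $USER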
Part of the problem, apparently, is that Debian's virt-manager
package isn't set up to depend on all the other packages it needs to run.
They might be there in the "recommends" and "suggests", but
I don't install those by default, since
they tend to pull in all sorts of silly bloatware I'll never want.
With most packages, the "depends" are all that's needed to use a
package in its basic form, and "recommends" and "suggests" are for optional
extra features.
But if you install just virt-manager without at least some
of the suggested and/or recommended packages, it won't run at all.
I haven't run any real virtual machines yet under virt-manager, but
I think it's working now.
At least I'm to the point where I can boot from a Debian installer
ISO and see the initial screen.
Tags: virtualization, linux, QEMU
[
00:00 Feb 17, 2022
More linux |
permalink to this entry |
]
Fri, 11 Feb 2022
It's nice to be back on a relatively minimal Debian install,
instead of Ubuntu-with-everything. But one thing that I have
to admit I appreciated about Ubuntu: printing "just worked".
Turn on a printer, call up the print menu in any app, and the
printer I turned on would be there in the menu, without any need
of struggling with CUPS configurations.
Ubuntu was using Avahi, the Linux version of Apple's Zeroconf/Bonjour
framework, to discover printers. I knew that I'd probably need to
install Avahi if I wanted easy printer configuration on Debian.
But as it turned out, getting printing working was both harder, and easier.
Read more ...
Tags: linux, debian, cups, printing
[
18:14 Feb 11, 2022
More linux |
permalink to this entry |
]
Mon, 17 Jan 2022
For many years, I used extlinux as my boot loader to avoid having to
deal with
the
annoying and difficult grub2. But that was on MBR machines.
I never got the sense that extlinux was terribly well supported
in the newer UEFI/Secure Boot world. So when I bought my current
machine a few years ago, I bit the bullet and let Ubuntu's installer
put grub2 on the hard drive.
One of the things I lost in that transition was a boot splash image.
Read more ...
Tags: linux, debian, ubuntu, grub, boot
[
19:29 Jan 17, 2022
More linux |
permalink to this entry |
]
Tue, 28 Dec 2021
When I bought my new laptop several years ago, I chose Ubuntu as
its first distro even though I usually run Debian.
For one thing, Ubuntu has an excellent installer.
Second, they seem to do more testing on cutting-edge hardware, so I
thought the chances were better that hardware on a brand-new laptop
would be supported.
Ubuntu has been working fine for a couple of years, but with 21.10
("Impish Indri") it took a precipitous downturn.
Read more ...
Tags: linux, boot, grub, debian, ubuntu
[
19:53 Dec 28, 2021
More linux |
permalink to this entry |
]
Fri, 11 Sep 2020
In the previous article
I wrote about how the two X selections, the primary and clipboard,
work. But I glossed over the details of key bindings to copy and paste
the two selections in various apps.
That's because it's complicated. Although having the two selections
available is really a wonderful feature, I can understand why so many
people are confused about it or think that copy and paste just
generally doesn't work on Linux -- because apps are so woefully
inconsistent about their key bindings, and what they do have is
so poorly documented.
"Why don't they all just use the standard Ctrl-C, Ctrl-V?" you ask.
(Cmd-C, Cmd-V for Mac users, but I'm not going to try to include the
Mac versions of every binding in this article; Mac users will have
to generalize.)
Simple: those keys have well-known and very commonly used bindings
already, which date back to long before copy and paste were invented.
Read more ...
Tags: linux, X11, cmdline, emacs, vim, editors
[
12:54 Sep 11, 2020
More linux |
permalink to this entry |
]
Mon, 07 Sep 2020
There's so much confusion about copy and paste in Linux.
Many people, coming from the Windows or Mac worlds, complain about
copy/paste not working right. And while it's true that some apps
don't handle copy/paste very well (Firefox in particular is notably
flaky in this area), usually the problem is that nobody has ever
told them about one of Linux's best features:
the two types of selection, Primary and Clipboard.
The Primary Selection
When you sweep your mouse across some words to highlight them,
or double-click to highlight a word, or triple-click to highlight a line,
whatever you've highlighted is now in the primary selection.
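An easy way to see the two selections from a terminal, if you have xsel (or xclip) installed:
xsel --primary      # whatever you last highlighted
xsel --clipboard    # whatever you last explicitly copied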
Read more ...
Tags: linux, X11, cmdline
[
12:28 Sep 07, 2020
More linux |
permalink to this entry |
]
Fri, 14 Aug 2020
I have a new camera, a Sony a6100. Frustrated with the bugs and
limitations of my ancient Rebel XSi, I decided to jump into the world
of mirrorless cameras.
So far I'm pleased with it (though sometimes frustrated by its Byzantine
menu hierarchy). One of its nice features, which the Rebel didn't have
at all, is movies, and I've been shooting a lot of short movies of
radio controlled airplanes when we fly at Overlook Park.
That's the problem: a lot of short movies. I'd like to make them
available on YouTube so that my fellow pilots can see them; but
uploading a movie to YouTube is a slow and elaborate process, and
uploading eight or so short clips of less than a minute each is
just too much work. It would be so much easier if I could edit them
together and upload just one movie.
Time to learn some basic video editing.
Read more ...
Tags: video, linux
[
16:02 Aug 14, 2020
More linux |
permalink to this entry |
]
Mon, 20 Jul 2020
Giving talks has certainly changed a lot since last year.
All those skills we practice in Toastmasters -- using the space,
expressive gestures, projecting your voice, making eye contact
with all sections of the audience? Meaningless now. In an age of
quarantine and video conferencing meetings, speakers need to learn new
skills.
Fortunately, there's Toastmasters for a painless, fun way to practice.
I was scheduled to give a talk on browser privacy.
My local LWV has a Privacy Study Group that I'm co-chairing,
and we had a meeting coming up on privacy while browsing the web.
I knew I wanted to show a series of demos in multiple browsers,
including additional windows like the Developer Tools window.
I also wanted to record the talk so I could upload it later.
In Zoom, the process of canceling a shared window and then
starting another share is slow, fumbly and error-prone.
I knew this was something I needed to practice before the talk,
to find a way to smooth the transitions.
Read more ...
Tags: speaking, video, linux, X11
[
16:35 Jul 20, 2020
More speaking |
permalink to this entry |
]
Sun, 05 Jul 2020
... which is what I have now on my Carbon X1 gen 7 laptop.
Early reviews of this particular laptop praised its supposedly
excellent speakers (as laptops go), but that has never been apparent on Linux.
It is an improvement over the previous version -- the microphone works
now, which it didn't in 19.10 -- though in the
meantime I acquired a Samson GoPro based on
Embedded.fm's
recommendation (it works very well).
But although the internal mic works now, the sound from the built-in
speakers is just terrible, even worse than it was before.
The laptop has four speakers, but Ubuntu is using only two.
Read more ...
Tags: linux, ubuntu, lenovo
[
12:44 Jul 05, 2020
More linux |
permalink to this entry |
]
Thu, 07 May 2020
Controlling PulseAudio from the Command Line
Controlling
PulseAudio via pavucontrol is all very nice, but it's time
consuming and fiddly: you have to do a lot of clicking in a lot of
tabs any time you want to change anything.
So I've been learning how to control PulseAudio from the command line,
so I can make aliases to switch between speakers quickly, or set audio
defaults at login time.
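The commands involved are along these lines (the sink name here is just an example; yours will differ):
pactl list short sinks                    # list output devices
pactl set-default-sink alsa_output.pci-0000_00_1f.3.analog-stereo
pactl set-sink-volume @DEFAULT_SINK@ 60%
pactl set-sink-mute @DEFAULT_SINK@ toggle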
That was going to be a blog post, but I think this is going to be an
evolving document for quite some time, so instead, I just made it a
page on the Linux section of my website:
Controlling PulseAudio from the Command Line.
I also wrote a Python script,
pulsehelper.py,
that uses some of these commands to provide clearer output and easier
switching. It even uses color and bold fonts if you have the
termcolor module installed. Like the document, this script is likely
to be evolving for quite some time.
Happy listening and recording!
Tags: linux, audio, pulseaudio, ubuntu
[
12:21 May 07, 2020
More linux |
permalink to this entry |
]
Mon, 04 May 2020
(Note: this is not an alphabet post. You may have noticed I'm
a little stuck on I. I hope to get un-stuck soon; but first, here
are a pair of articles on configuring audio on Linux.)
I'm a very late adopter for PulseAudio. In the past, on my minimal
Debian machines,
nearly
any sound problem could be made better by apt-get remove pulseaudio.
But pulse seems like it's working better since those days,
and a lot of applications (like Firefox) require it, so it's
time to learn how to use it. Especially in these days
of COVID-19 and video conferencing, when I'll need to be using the
microphone and speakers a lot more. (I'd never actually had a reason
to use the microphone on my last laptop.)
Beginner tutorials always start with something like "Go into System
Preferences and click on Audio", leaving out anyone who doesn't use
the standard desktop. The standard GUI PulseAudio controller is
pavucontrol. It has four tabs.
Read more ...
Tags: linux, audio, pulseaudio, ubuntu
[
18:04 May 04, 2020
More linux |
permalink to this entry |
]
Mon, 30 Mar 2020
It was surprisingly hard to come up with a "D" to write about,
without descending into Data geekery (always a temptation).
Though you may decide I've done that anyway with today's topic.
Out for a scenic drive to shake off some of the house-bound cobwebs,
I got to thinking about how so many places are named after the Devil.
California was full of them -- the Devil's Punchbowl, the Devil's
Postpile, and so forth -- and nearly every western National Park
has at least one devilish feature.
How many are there really? Happily, there's an easy way to answer
questions like this: the
Geographic Names page on the USGS website,
which hosts the Geographic Names Information System (GNIS).
You can download entire place name files for a state, or
you can search for place name matches at:
GNIS Feature Search.
When I searched there for "devil", I got 1883 hits -- but many of them
don't actually include the word "Devil". What, are they taking lessons
from Google about searching for things that don't actually match the
search terms?
I decided I wanted to download the results so I could
count them more easily.
The page offers View & Print all or
Save as pipe "|" delimited file. I chose to save the file.
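Once it's saved, counting the real matches is a one-liner (the file name is whatever you saved it as, and the column number is a guess; check the header line first):
grep -ci devil gnis_results.txt                               # lines mentioning "devil" at all
awk -F'|' 'tolower($2) ~ /devil/' gnis_results.txt | wc -l    # if the feature name is column 2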
Read more ...
Tags: GIS, mapping, data, cmdline, linux
[
16:30 Mar 30, 2020
More linux/cmdline |
permalink to this entry |
]
Sat, 30 Nov 2019
My new Lenovo Carbon X1 Gen 7 has one irritating problem: the trackpad
sometimes disappears, flooding dmesg with messages like
"i2c_designware i2c_designware.1: controller timed out".
Once this happens, the only fix is to reboot.
Lenovo has a fix -- new trackpad firmware -- but unlike their BIOS
updates, which are installable from Linux, device firmware updates
are distributed as Windows EXE files that require running Windows
on the bare metal, leaving Linux users out in the cold.
Ironic, since Lenovo is so popular among Linux users and is a member
of
the Linux
Firmware Service, and the CX1 is supposedly Ubuntu certified.
Those Linux users on the forums who managed to install the firmware
update raved about it, saying that indeed it solved their problem. But
finding a way to install it led me on a not-so-merry four-day quest.
Here's how I installed the firmware, in the end:
Make a Windows to Go using Rufus on a Real Windows Box
Back up anything you don't want to lose, because you never know.
Borrow a real Windows box. I tried many times
using Windows inside VirtualBox and QEmu on top of Linux,
but it never worked.
On Windows, install Rufus.
Download the
Windows
10 Installer ISO (5 gigabytes, give or take)
Find a USB stick or SD card, 16G or larger. Actually, find a bunch
of them: this process is incredibly finicky about the stick you use
and the only way you find out is that it doesn't work and you
have to try again (see below).
Use Rufus to create a Windows to Go image. The alternative
is to make a Windows installer; that won't work, because you can't
run anything useful from the installer, and you don't want to
actually install Windows, or you wouldn't be in this fix
in the first place.
Be patient: creating a W2G image takes several hours.
Click on Rufus' log file button (it's the rightmost of
four obscure icons down near the lower left of the Rufus window;
it has a mouseover tooltip) at any time to see what's happening;
if things don't go right you might at least get some idea why.
When the W2G stick is finished (whew!), move it to your Linux machine
and mount its second partition (/dev/sdb2 or whatever).
This will tell you it wasn't properly unmounted and it's fixing it,
giving you a heart attack about whether Linux is going to change the
filesystem in some way that makes it fail after you waited all that
time creating it.
Copy the firmware .exe to it. Wherever you want; I just put it at
the root of the filesystem. Sync and unmount it.
Boot your computer from the USB stick. This will
take forever and may fail if the phase of the moon is wrong.
If you're lucky and the planets are in alignment, eventually a
Windows installer will come up and ask you a bunch of annoying questions
about language, keyboard, whether you consent to having Microsoft
spy on you in a skillion different ways, name, password, three
security questions, etc. Meanwhile, you're having another heart
attack because does this mean it's going to install Windows to
your real disk on top of Linux? Hopefully not -- at least it
didn't in my case -- but here's where you really want to have that
recent backup.
If you make it all the way through the questions and get a Windows
screen, rejoice! Navigate to wherever you put your exe and run it.
Cross your fingers -- maybe you're done!
If it hangs or bluescreens during boot, or Rufus fails to create
the W2G stick in the first place, try running Rufus again with a
different USB stick.
I think I tried five before finally finding one that worked,
and the successful one (a Transcend SD card in an old Patriot USB
adapter) wasn't the newest, or the fastest, or the largest.
It's a mystery.
Some Approaches that Didn't Work
Before I finally got this working, I wasted four days trying many
other approaches. Many of them sound very clever and reasonable and
ought to work, but they didn't work for me. These include:
Install Windows
PE or the full Windows 10 installer directly onto a USB stick
using qemu using a command like this:
sudo qemu-system-x86_64 -drive file=/dev/sdX,format=raw -cdrom Win10_1903_V2_English_x64.iso -cpu host -enable-kvm -m 2048 -display gtk -nic none
That brought up a window in which the Windows installer booted and
proceeded to install using the USB stick mounted at /dev/sdX as the disk.
At least in theory.
I tried this several times first without the -nic none,
and every time, the installer eventually hit an infinite loop of
"Why did my PC restart?" bluescreens.
Adding the -nic none
cured that problem, but I ended up
with a USB stick that wasn't bootable.
This was my favorite method in theory because it's clever and I
understand at least a little what it was supposedly doing. I wish it
had actually worked. Possibly if I knew enough about Windows and UEFI
boot architecture, it might have been possible to patch the stick to
make it bootable.
Various other attempts to make a bootable WinPE -- I liked
the minimal WinPE idea so I kept trying it in different ways, but
I've lost track of all the different methods.
Use vboxmanage to clone the Win10 I already had in Virtualbox
to a USB stick, as suggested
here.
Use FreeDOS. "This program must be run under Win32."
Use Windows 10 inside Virtualbox to run the initial exe to
extract the installer inside, another exe, then run that in
FreeDOS. "This program cannot be run in DOS mode."
Boot from a Win10 installer, use Shift-F10 to get a command prompt
window, and run the exe from that. "The image file is valid, but is
for a machine type other than the current machine."
Use Windows 10 running inside Virtualbox to run Rufus to make a
Windows to Go stick. I had no trouble making a regular USB installer
using Rufus in virtual Windows, but when I tried Win to Go, it failed
every time. Sometimes it failed because it couldn't read the USB stick
properly (even though I had configured Virtualbox to pass it through);
other times it failed because it was confused by the Win10 installer
ISO, which was on a Virtualbox shared drive (because I didn't have
that much spare space in the virtual Windows instance). Apparently
there's something not entirely reliable about Virtualbox's USB handling
and its shared folders, at least for very large data transfers.
Use WinToUSB or some such program. There seem to be lots of
different programs with names similar to this, they're all closed
source (Rufus is open source), it's hard to tell which ones are real
and which are malware, and when I did think I'd finally found one from
a reliable source, it just hung when I tried to run it.
Use a backup made with Macrium Reflect on a real Windows box,
and boot from that. "The version of [exename] is not compatible with
the version of Windows you're running. Check your computer's system
information and then contact the software publisher."
So, lots of different ways. Some of them have worked at some time for
someone. Also, I never did try Wine. I don't think Wine would be able
to run the actual exe and update the trackpad firmware (I was afraid
to try it), but it's possible that Rufus in Wine might have been able
to make a Windows To Go stick.
If anyone manages that -- or any other way of getting this to work --
I'd love to hear about it.
Tags: linux, windows
[
20:16 Nov 30, 2019
More linux |
permalink to this entry |
]
Sun, 24 Nov 2019
I have a new laptop, a birthday present to myself last month.
For once, rather than buying a cut-rate netbook, I decided to
treat myself to a fancy Lenovo Carbon X1 with an up-to-date processor
and lots of RAM.
Since I have way more resources than I'm used to, I decided I'd try
installing a full Ubuntu and not trying to pare it down to a super
lightweight system. I'm still running the lightweight, fast, highly
configurable Openbox window manager instead of a full Gnome desktop:
Openbox does just what I tell it and no more, and doesn't surprise me
with random redesigns. But I did let Ubuntu install some system
utilities I've always avoided in the past, like NetworkManager and
PulseAudio. I decided I'd give them a chance, see if they've gotten
better since I last checked.
They have, though they're still a bit of a hassle to deal with.
NetworkManager can be controlled through nmcli, which is poorly
documented but works okay if you google long enough to find the
proper incantations. PulseAudio gave me a bit more trouble.
The standard GUI for controlling PulseAudio is pavucontrol.
It showed two audio devices: "USB PnP Audio Device Analog Stereo" and
"Built-in Audio Analog Stereo". Turns out the USB PnP option is a
sound card built into the USB hub, a Totu tt-hb003a 11-in-1 USB-C hub
that lets me connect to a charger, external monitor, SD and micro-SD
slots, and extra USB ports without juggling a lot of extra cables.
Pulse assumes -- probably reasonably, though it's wrong in this case
-- that if I have a USB audio device connected, I probably want to use
it in preference to the laptop's built-in audio. That would make sense
if I had external speakers plugged in, but I left all my computer
speakers behind when I moved. I should probably order some speakers.
But meanwhile, I needed to persuade PulseAudio to ignore the hub and
use the laptop's built-in sound system.
Mute/Unmute via the Keyboard
The Lenovo, like most laptops, has a dedicated key for muting, Fn-F1.
It even has a little light on it to show whether it's muted. In
Openbox, pressing Fn-F1 actually muted the sound, and even turned on
the light. This is probably because I'd previously set
key="XF86AudioMute" to run amixer set Master toggle
in .config/openbox/rc.xml, which worked on my Pulse-free
pared-down Debian netbook. The problem is that pressing Fn-F1 again
didn't bring the sound back. Instead, it was unmuting the USB hub's audio.
Clicking "Set as fallback" on the built-in audio in pavucontrol made
no difference.
It turns out that it is virtually impossible to persuade PulseAudio to
use "Built-in Audio" when a "USB PnP Audio Device" is available. I
finally found the secret: in pavucontrol's Configuration tab,
set Profile for the PnP USB device to Off. Now only the
built-in device shows up in the other tabs.
But that amixer command still wasn't unmuting properly, so the next
step was to find a command that would actually unmute. Someone on
#linux suggested pactl set-sink-mute @DEFAULT_SINK@ toggle
and that worked great from the command line. But when I tried to bind it
in Openbox to the XF86AudioMute key, it did nothing. I still
don't understand why not; I wasted a lot of time comparing my
shell environment to openbox's environment and never found the
difference.
Back to web searching, I found an
askubuntu thread suggesting
some Openbox stanzas. In particular, it apparently works better to use
alsamixer rather than pactl. This finally worked for toggling mute:
<keybind key="XF86AudioMute">
<action name="Execute">
<command>amixer -q -D pulse sset Master toggle</command>
</action>
</keybind>
Volume Controls via Function Keys
Partial success! Unfortunately, the volume control commands in that
same askubuntu post, amixer -q -D pulse sset Master 3%+ unmute,
did nothing. I had already noticed that in pavucontrol, the volume
controls didn't work either. In fact, if I started some music playing
and then called up alsamixer, channels like Master and Speaker didn't
do anything; the only channel that affected volume was ALSA PCM. After some
fiddling, I discovered that I had to change Master to PCM and
remove the -D pulse:
<keybind key="XF86AudioRaiseVolume">
<action name="Execute">
<command>amixer sset PCM 4%+ unmute</command>
</action>
</keybind>
<keybind key="XF86AudioLowerVolume">
<action name="Execute">
<command>amixer sset PCM 4%- unmute</command>
</action>
</keybind>
I'm sure I'll eventually need to fiddle some more. For one thing,
if I ever want to use audio during a talk (as I did briefly at my
Stonehenge talk earlier this year) I'll need to figure out how to
enable a temporary HDMI sound sink quickly without needing to fiddle
with pavucontrol. But for now, I'm happy to have
the basic laptop volume and mute keys working.
Tags: linux, audio, cmdline, window managers
[
15:43 Nov 24, 2019
More linux |
permalink to this entry |
]
Fri, 15 Nov 2019
Sometimes I tend to ramble on, and wonder if articles I'm writing are
really too long for a blog post. I try to keep them under about 200
lines, but sometimes a really meaty topic demands more.
It occurred to me to wonder how long a typical Shallow Thoughts post is.
A quick measure is lines, which I can measure this way starting
in the directory where I have the source files for all my past posts:
find . -name '*.blx' -exec wc -l '{}' \; | sort -h >/tmp/bloglen.dat
The find produces lines like:
79 ./linux/cmdline/random-command.blx
so if I sort -h (human-readable numbers), it will sort on the first
column and give me a sorted list of all posts in order of size.
The shortest posts, three of them, were only five lines;
the longest was 346 lines.
But what's the distribution of lengths?
I can plot the sorted data easily with gnuplot:
gnuplot -p -e 'plot "/tmp/bloglen.dat"'
or, if I didn't want the temp file, I could have done that all
with one command:
find . -name '*.blx' -exec wc -l '{}' \; | sort -h | gnuplot -p -e 'plot "/dev/stdin"'
That's kind of interesting. But I was really more interested in seeing
a frequency distribution: do I have a lot more shorter posts, or
longer ones? For that I do need the temp file.
I wasted some time trying to find a way in gnuplot to plot frequency
distribution. The best I found was
set style fill solid
plot '/tmp/bloglen.dat' u ($1):(1) t 'data' smooth frequency w boxes
pause mouse close
(put that in a file and then run gnuplot on that file).
But it's not actually right: the bargraph shows 1 for lots of blog
post lengths that aren't represented in the data.
I finally gave up on gnuplot, having wasted enough time that I could
easily have written a Python script, and did so, which only took a
few minutes.
import matplotlib.pyplot as plt
posts = []
with open('/tmp/bloglen.dat') as fp:
    for line in fp:
        posts.append(int(line.split()[0]))
plt.hist(posts, bins=max(posts))
plt.show()
Turns out I'm doing pretty well at keeping them under 200 lines.
The vast majority of posts are fairly short, with a peak around 50 lines,
and relatively few exceed 200. Only a couple of outliers get over 300.
I think I'm okay with that. Whether you, the readers, agree --
well, feel free to tell me!
For comparison, this post is 95 lines.
Tags: linux, cmdline, gnuplot, blogging
[
21:28 Nov 15, 2019
More blogging |
permalink to this entry |
]
Sun, 03 Nov 2019
I was planning to teach a class on Raspberry Pis, and I wanted to
start with the standard Raspbian image, update it, and add some
programs like GIMP that we'd use in the class. And I wanted to do
that just once, before burning the image to a bunch of different SD cards.
Is there a way to do that on my regular Linux box, with its nice
fast processor and disk?
Why yes, there is, and it's pretty easy. But there are a lot
of unclear or misleading tutorials out there, so I hope this is
a bit simpler and easier to follow.
I got most of this from a tutorial that no longer seems to be available
(but I'll include the link in case it comes back):
Solar Staker/RPI Image/Creation.
I've tested this on Ubuntu 19.10, Debian Stretch and Buster; the
instructions should be pretty general except for the name of the
loopback mount. Commands you type are in bold;
the rest is the output you should see. $ is your normal shell
prompt and # is a root prompt.
Required Packages
You'll need kpartx, qemu and some supporting packages.
On Debian or Ubuntu:
$ sudo apt install kpartx qemu binfmt-support qemu-user-static
You've probably already
downloaded
a Raspbian SD card image.
Set Up the Loopback Devices
kpartx can read the Raspbian ISO image and split it into the two
filesystems it would have if you wrote it to an SD card.
It turns the filesystems into loopback devices you can mount
like regular filesystems.
$ sudo kpartx -av 2019-09-26-raspbian-buster-lite.img
add map loop10p1 (253:1): 0 524288 linear 7:10 8192
add map loop10p2 (253:2): 0 3858432 linear 7:10 532480
Make a note of those loopback device names. They may not always
be loop10p1 and loop10p2, but they'll probably always end in
1 and 2. 1 is the Raspbian /boot filesystem, 2 is the Raspbian root.
(Optional) Check and Possibly Resize the Raspbian Filesystem
Make sure that the Raspbian filesystem is intact, and that it's a
reasonable size.
In practice, I didn't find this made any difference (everything was
fine to begin with), but it doesn't hurt to make sure.
$ sudo e2fsck -f /dev/mapper/loop10p2
e2fsck 1.45.3 (14-Jul-2019)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
rootfs: 44367/120720 files (0.3% non-contiguous), 292014/482304 blocks
$ sudo resize2fs /dev/mapper/loop10p2
resize2fs 1.45.3 (14-Jul-2019)
The filesystem is already 482304 (4k) blocks long. Nothing to do!
Mount the Loopback Filesystems
You're ready to mount the two filesystems.
A Raspbian SD card image contains two filesystems.
Partition 1 is a small vfat /boot
filesystem containing the kernel and some other files it needs for
booting, plus the two important configuration files cmdline.txt
and config.txt.
Partition 2 is the Raspbian root filesystem in ext4 format.
The Raspbian root includes an empty /boot directory; mount
the root first, then mount the Raspbian boot partition on
Raspbian's /boot:
$ sudo mkdir /mnt/pi_image
$ sudo mount /dev/mapper/loop10p2 /mnt/pi_image
$ sudo mount /dev/mapper/loop10p1 /mnt/pi_image/boot
Prepare for Chroot
You're going to chroot to the Raspbian filesystem.
Chroot limits the filesystem you can access, so when you type /,
instead of your host filesystem's / you'll see the root of the
Raspbian filesystem, /mnt/pi_image. That means you
won't have access to your host system's /usr/bin, any more.
But qemu needs /usr/bin/qemu-arm-static to be able to emulate ARM
binaries in user mode. So copy that to the Raspbian filesystem so
you'll still be able to access it after the chroot:
$ sudo cp /usr/bin/qemu-arm-static /mnt/pi_image/usr/bin
Chroot to the Raspbian System
$ sudo chroot /mnt/pi_image /bin/bash
root:/#
Now you're running in Raspbian's root filesystem. All the binaries in
your path (e.g. /bin/ls, /bin/bash) are ARM binaries,
but if you try to run them, qemu will see qemu-arm-static and
run the program as though you're on an actual Raspberry Pi.
Run Stuff!
Now you can run Raspbian commands.
# file /bin/ls
/bin/ls: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=67a394390830ea3ab4e83b5811c66fea9784ee69, stripped
#
# /bin/ls
bin dev home lost+found mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
# file /bin/cat
/bin/cat: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=2239a192f2f277bd1a4892e39a41eba97266b91f, stripped
#
# cat /etc/issue
Raspbian GNU/Linux 10 \n \l
You can even install packages or update the whole system:
# apt update
Get:1 http://raspbian.raspberrypi.org/raspbian buster InRelease [15.0 kB]
Get:2 http://archive.raspberrypi.org/debian buster InRelease [25.2 kB]
Get:3 http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages [13.0 MB]
Get:4 http://archive.raspberrypi.org/debian buster/main armhf Packages [259 kB]
Fetched 13.3 MB in 20s (652 kB/s)
Reading package lists... Done
# apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
busybox initramfs-tools initramfs-tools-core klibc-utils libklibc linux-base
pigz
The following packages will be upgraded:
dhcpcd5 e2fsprogs file firmware-atheros firmware-brcm80211 firmware-libertas
firmware-misc-nonfree firmware-realtek libcom-err2 libext2fs2 libmagic-mgc
libmagic1 libraspberrypi-bin libraspberrypi-dev libraspberrypi-doc
libraspberrypi0 libss2 libssl1.1 libxml2 libxmuu1 openssh-client
openssh-server openssh-sftp-server openssl raspberrypi-bootloader
raspberrypi-kernel raspi-config rpi-eeprom rpi-eeprom-images ssh sudo
wpasupplicant
32 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 133 MB of archives.
After this operation, 3192 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Pretty neat! Although you're not actually running Raspbian, you can
run Raspbian executables with the Raspbian root filesystem mounted
as though you were actually running on your Raspberry Pi.
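Installing the extra packages mentioned at the beginning (like GIMP for the class) works the same way from inside the chroot:
# apt install gimp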
Cleaning Up
When you're done with the chroot, just exit that shell (Ctrl-D
or exit).
If you want to undo everything else afterward:
$ sudo rm /mnt/pi_image/usr/bin/qemu-arm-static
$ sudo umount /mnt/pi_image/boot
$ sudo umount /mnt/pi_image
$ sudo kpartx -dv /dev/loop10
$ sudo losetup -d /dev/loop10
$ sudo rmdir /mnt/pi_image
Limitations
Keep in mind you're not really running Raspbian.
You never booted the Raspbian kernel, and you can't test things
that depend on Raspbian's init system, like whether networking works,
let alone running the Raspbian X desktop or accessing GPIO pins.
This is an ARM emulator, not a Raspberry Pi emulator.
More details
If you want to read more about qemu user mode and how it lets you run
binaries from other architectures, I recommend these links:
Tags: linux, raspberry pi, virtualization, QEMU
[
16:14 Nov 03, 2019
More linux |
permalink to this entry |
]
Thu, 31 Oct 2019
Someone on ##linux was talking about "bro pages", which turns out to
be a site that collects random short examples of how to use Linux
commands. It reminded me of
Command Line Magic,
a Twitter account I follow that gives sometimes entertaining or useful
command-line snippets.
I hadn't been to that page on the Twitter website in a while (I
usually use bitlbee for Twitter), and clicking through some of the
tweets on the "Who to follow" accounts took me to someone who'd made
a GNU
CoreUtils cheat sheet. I didn't really want the printed cheat
sheet, but I was interested in the commands used to generate it.
The commands involved downloading an HTML page and didn't work any
more -- the page was still there but its format has changed -- but
that got me to thinking about how it might be fun to generate
something that would show me a random command and its description,
starting not from coreutils but from the set of all commands I have
installed.
I can get a list of commands from the installed man pages in
/usr/share/man -- section 1, for basic commands, and section
8, for system-admin commands. (The other sections are for things
like library routines, system calls, files etc.)
So I can pick a random man page like this:
ls -1 /usr/share/man/man1/ /usr/share/man/man8 | shuf -n 1
which gives me a filename like
xlsfonts.1.gz.
The man pages are troff format, gzipped. You can run zcat on
them, but extracting the name and description still isn't entirely
trivial. In most cases, it comes right after the .SH NAME
line, so you could do something like
zcat $(ls -1 /usr/share/man/man1/* /usr/share/man/man8/* | shuf -n 1) | grep -A1 NAME | tail -1
(the * for the two directories causes ls to list the full pathname,
like
/usr/share/man/man1/xlsfonts.1.gz, instead of just the
filename, xlsfonts.1.gz).
But that doesn't work in every case: sometimes the description is more than
one line, or there's a line between the NAME line and the actual description.
A better way is to use apropos (man -k), which already knows how to
search through man pages and parse them to extract the command name and
description. For that, you need to
start with the filename (I'm going to drop those *s from the command since
I don't need the full pathname any more) and get rid of everything
after the first '.'.
You can do that with sed 's_\.[0-9].*__': it looks for everything starting
with a dot (\.) followed by a digit ([0-9] -- sed doesn't understand \d)
followed by anything (.*) and replaces all of it with nothing,
the empty string.
Here's the full command:
apropos $(ls -1 /usr/share/man/man1/ /usr/share/man/man8 | shuf -n 1 | sed 's_\.[0-9].*__')
Sometimes it will give more than one command: for instance,
just now, testing it, it found /usr/share/man/man8/snap.8.gz,
pared that down to just snap, and apropos snap
found ten different commands. But that's unusual; most of the time
you'll just get one or two, and of course you could add another
| shuf -n 1
if you want to make sure you get only one line.
Update: man -f
is a better solution: that will give a single
apropos-like description line for only the command picked by the
first shuf command.
man -f $(ls -1 /usr/share/man/man1/ /usr/share/man/man8 | shuf -n 1 | sed 's_\.[0-9].*__')
It's kind of a fun way to discover new commands you may not have
heard of. I'm going to put it in my .zlogin.
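In ~/.zlogin that might look something like this (a sketch):
# print one random command description at login
random_command() {
    man -f $(ls -1 /usr/share/man/man1/ /usr/share/man/man8 | shuf -n 1 | sed 's_\.[0-9].*__')
}
random_command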
Tags: linux, cmdline
[
13:22 Oct 31, 2019
More linux/cmdline |
permalink to this entry |
]
Wed, 10 Apr 2019
For many years I've wished I could take a raster map image, like a
geology map, an old historical map, or a trail map, and overlay it
onto the map shown in OsmAnd
so I can use it on my phone while walking around.
I've tried many times, but there are so many steps and I never
found a method that worked.
Last week, the ever helpful Bart Eisenberg posted to the OsmAnd list
a video he'd made:
Displaying
web-based maps with MAPC2MAPC: OsmAnd Maps & Navigation.
Bart makes great videos ... but in this case, MAPC2MAPC
turned out to be a Windows program so it's no help to a Linux user.
Darn!
But seeing his steps laid out inspired me to try again, and gave me
some useful terms for web searching. And this time I finally succeeded.
I was also helped by a post to the OsmAnd list by A Thompson,
How to get aerial image into offline use?,
though I needed to change a few of the steps.
(Note: click on any of the screenshots here to see a larger version.)
Georeference the Image Using QGIS
Update in Feb 2024: Several things have changed in QGIS georeferencing
(the version I'm using now is 3.28.15-Firenze),
so note the updated sections below.
The first step is to georeference the image: turn the plain raster image
into a GeoTiff that has references showing where on Earth its corners are.
It turns out there's an open source program that can do that, QGIS.
Although it's poorly documented,
it's fairly easy once you figure out the trick.
I started with the tutorial
Georeferencing Basics,
but it omits one important point, which I finally found in BBRHUFT's
How
to Georeference a map in QGIS. Step 11 is the key: the Coordinate
Reference System (CRS) must be the same in the georeferencing window
as it is in the main QGIS window. That sounds like a no-brainer, but
in practice, the lists of possible CRSes shown in the two windows
don't overlap, so unless you follow BBRHUFT's advice and
type 3857 into the filter box in both windows, you'll likely
end up with CRSes that don't match. It'll look like it's working, but
the resulting GeoTiff will have coordinates nowhere near where they
should be.
Instead, follow BBRHUFT's advice and type 3857 into the filter
box in both windows. The "WGS 84 / Pseudo Mercator" CRS will show up
and you can use it in both places. Then the GeoTiff will come out in
the right place.
If you're starting from a PDF, you may need to convert it to a raster format
like PNG or JPG first. GIMP can do that.
So, the full QGIS steps are:
- Start qgis, and bring up Project > Project Properties.
- Click on CRS, type 3857 in the filter box,
and make sure the CRS is set to WGS 84 / Pseudo Mercator.
- (optional) While you're in Project Properties, you may want to
click on General and set
Display Coordinates Using: Decimal degrees.
By default qgis always uses "meters" (measured from where?)
which seems like a bizarre and un-useful default.
- In Plugins > Manage and Install Plugins...,
install two plugins: Quick Map Services and
Georeferencer GDAL.
Update: both OSM and the Georeferencer are now included by default,
so you no longer have to install these plugins.
- Back in the main QGIS window, go to Web > QuickMapServices
(that's one of the plugins you just installed) and pick
a background map, probably something from OpenStreetMap.
Once you find one that loads, navigate to the area you're
interested in.
Update: Sometimes, adding an OpenStreetMap layer is as easy as clicking on
the OpenStreetMap line in the upper left pane of the QGIS window.
Other times there is no such line, and you have to use
Web > QuickMapServices. I don't know how QGIS decides.
(All the OpenStreetMap tile sets print "API Key Required" all over
everything. You can get an API key from Thunderforest -- I use one
for PyTopo -- but it won't help you here, because nobody seems to
know how to get QGIS to use an API key, though I found dozens of
threads with people asking. If you find a way, let me know.)
- Start the georeferencer:
Raster > Georeferencer > Georeferencer....
It may prompt you for a CRS; if so, choose WGS 84 / Pseudo Mercator
if it's available, and if it's not, type 3857 in the filter box
to find it.
- In the Georeferencer window, File > Open Raster...
Update: If it isn't showing the files you want to open, you may
need to adjust the
Files of type menu to make sure it includes your file type.
When I ran it just now, I had two JPG files to georeference.
The first time, it showed those files automatically.
When I went to open the second file, it showed nothing, and I had
to change Files of type to JPEG JFIF before
I could open the file.
- Select matching points. This part is time consuming but fun.
Zoom in to your raster map in the Georeferencer window and pick a
point that's easy to identify: ideally the intersection of two
roads, or a place where a road crosses a river, or similar
identifying features. Make sure you can identify the same point
in both the Georeferencer window and the main QGIS window
(see the yellow arrows in the diagram).
- Click on the point in the Georeferencer window, then click "from
map canvas".
- In the main QGIS window, click on the corresponding point.
You can zoom with the mousewheel and pan by middle mouse dragging.
Repeat for a total of four to eight points.
You'll see yellow boxes pop up in both windows showing the matching points.
If you don't, there's something wrong; probably your CRSes don't
match.
- In the Georeferencer window, call up
Settings > Transformation Settings.
Choose a Transformation type (most people seem to like Thin Plate Spline),
a Resampling method (Lanczos is good), and a target SRS (make sure it's
3857, WGS 84 / Pseudo Mercator, to match the main window).
Set Output raster to your desired filename, ending in .tiff.
Be sure to check the "Load in QGIS when done" box at the bottom.
Click OK.
- File > Start Georeferencing. It's quite fast, and will
create your .tiff file when it's finished, and should load the TIFF
as a new layer in the main QGIS window.
Check it and make sure it seems to be lined up properly with
geographic features. If you need to you can right-click on the layer
and adjust its transparency.
- In the Georeferencer window, File > Save GCP Points as...
in case you want to go back and do this again. Now you can close
the Georeferencer window.
- Project > Save to save the QGIS project, so if you want
to try again you don't have to repeat all the steps.
Now you can close qgis. Whew!
Convert the GeoTiff to Map Tiles
The ultimate goal is to convert to OsmAnd's sqlite format, but there's
no way to get there directly. First you have to convert it to map tiles
in a format called mbtiles.
QGIS has a plug-in called QTiles but it didn't work for me: it briefly
displayed a progress bar which then disappeared without creating any files.
Fortunately, you can do the conversion much more easily with
gdal_translate, which at least on Debian is part of
the gdal-bin package.
gdal_translate filename.tiff filename.mbtiles
That will create tiles for a limited range of zoom levels (maybe only
one zoom level). gdalinfo will tell you the zoom levels in the
file. If you want to be able to zoom out and still see your overlay,
you might want to add wider zoom levels, which you can do like this:
gdaladdo -r nearest filename.mbtiles 2 4 8 16
Incidentally, gdal can also create a directory of tiles suitable for
a web slippy map, though you don't need that for OsmAnd. For that,
use gdal2tiles, which on Debian is part of
the python-gdal package:
mkdir tiles
gdal2tiles filename.tiff tiles
Not only does it create tiles, it also includes multiple HTML files
you can use to display those tiles using the Leaflet, OpenLayers or
Google Maps JavaScript libraries. Very cool!
Create the OsmAnd sqlite file
Tarwirdur has written a nice simple Python script to translate from
mbtiles to OsmAnd sqlite:
mbtiles2osmand.py.
Download it then run
mbtiles2osmand.py filename.mbtiles filename.sqlitedb
So easy to use! Most of the other references I saw said to use
Mobile Atlas Creator (MOBAC)
and that looked a lot more complicated.
Incidentally, Bart's video says MAPC2MAPC calls the format
"Locus/Rmaps/Galileo/OSMAND (sqlite)", which might be useful to know
for web search purposes.
Install in OsmAnd
Once you have the .sqlitedb file, copy it to OsmAnd's tiles folder
in whatever way you prefer. For me, that's
adb push file.sqlitedb $androidSD/Android/data/net.osmand.plus/files/tiles
where $androidSD is the /storage/whatever location of
my device's SD card.
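If you're not sure which /storage path that is on your device, listing
the candidates over adb is a quick check:
adb shell ls /storage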
Then start OsmAnd and tap on the icon in the upper left for your current mode
(car, bike, walking etc.) to bring up the Configure map menu.
Scroll down to Overlay or Underlay map, enable one of
those two and you should be able to choose your newly installed map.
You can adjust the overlay's transparency with a slider that's visible
at the bottom of the map (the blue slider just above the distance scale),
so you can see your overlay and the main map at the same time.
The overlay disappears if you zoom out too far, and I haven't yet figured
out what controls that; I'm still working on those details.
Sure, this process is a lot of work. But the result is worth it.
Check out the geologic layers we walked through on a portion of
a recent hike in Rendija Canyon (our hike is the purple path).
Tags: GIS, mapping, linux, osmand, qgis, openstreetmap
[
19:08 Apr 10, 2019
More mapping |
permalink to this entry |
]
Fri, 18 Jan 2019
My machine has recently developed an overheating problem.
I haven't found a solution for that yet -- you'd think Linux would
have a way to automatically kill or suspend processes based on CPU
temperature, but apparently not -- but my investigations led me
down one interesting road: how to write
a Python script that finds CPU hogs.
The psutil module can get a list of processes with
psutil.process_iter(), which returns Process objects
that have a cpu_percent() call.
Great! Except it always returns 0.0, even for known hogs like Firefox,
or if you start up a VLC and make it play video scaled to the monitor size.
That's because cpu_percent() needs to run twice, with an interval
in between: it records the elapsed run time and sees how much it changes.
You can pass an interval to cpu_percent()
(the units aren't documented, but apparently they're seconds).
But if you're calling it on more than one process -- as you usually
will be -- it's better not to wait for each process.
You have to wait at least a quarter of a second to get useful
numbers, and longer is better. If you do that for every process on the
system, you'll be waiting a long time.
Instead, use cpu_percent() in non-blocking mode.
Pass None as the interval (or leave it blank since None is
the default), then loop over the process list and call
proc.cpu_percent(None) on each process, throwing away the
results the first time.
Then sleep for a while and repeat the loop: the second time,
cpu_percent() will give you useful numbers.
import sys
import time

import psutil

def hoglist(delay=5):
    '''Return a list of processes using a nonzero CPU percentage
       during the interval specified by delay (seconds),
       sorted so the biggest hog is first.
    '''
    processes = list(psutil.process_iter())
    for proc in processes:
        proc.cpu_percent(None)    # non-blocking; throw away first bogus value

    print("Sleeping ...")
    sys.stdout.flush()
    time.sleep(delay)
    print()

    procs = []
    for proc in processes:
        percent = proc.cpu_percent(None)
        if percent:
            procs.append((proc.name(), percent))

    procs.sort(key=lambda x: x[1], reverse=True)
    return procs

if __name__ == '__main__':
    hogs = hoglist()
    for p in hogs:
        print("%20s: %5.2f" % p)
It's a useful trick. Though actually applying this to a daemon
that responds to temperature, to solve my overheating problem, is more
complicated. For one thing, you need rules about special processes. If
your Firefox goes wonky and starts making your X server take lots of
CPU time, you want to suspend Firefox, not the X server.
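Just to sketch the idea, here's roughly what the core loop of such a daemon
might look like -- a rough sketch only: the thermal-zone path and the
threshold are assumptions that vary by machine, and it has none of the
special-case rules just mentioned, so it would happily suspend your X server.
import time
import psutil

THERMAL_FILE = "/sys/class/thermal/thermal_zone0/temp"  # assumption: path varies by machine
MAX_TEMP = 85000    # millidegrees C

def cpu_temp():
    """Return the CPU temperature in millidegrees C, as /sys reports it."""
    with open(THERMAL_FILE) as fp:
        return int(fp.read().strip())

while True:
    if cpu_temp() > MAX_TEMP:
        hogs = hoglist(2)            # the function defined above
        if hogs:
            name, percent = hogs[0]
            print("Too hot; suspending %s (%.1f%%)" % (name, percent))
            for proc in psutil.process_iter():
                if proc.name() == name:
                    proc.suspend()   # sends SIGSTOP; proc.resume() would undo it
    time.sleep(30)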
Tags: programming, python, linux
[
15:54 Jan 18, 2019
More programming |
permalink to this entry |
]
Sun, 16 Sep 2018
The laser printers we bought recently can print on both sides of the
page. Nice feature! I've never had access to a printer that can do
that before.
But that requires figuring out how to tell the printer to do the right
thing. Reading the man page for lp, I spotted the sides option:
lp -o sides=two-sided-long-edge. But that doesn't work by itself.
Adding -n 2 looked like the way to go, but
nope! That gives you one sheet that has page 1 on both sides, and a
second sheet that has page 2 on both sides. Because of course that's
what a normal person would want. Right.
The real answer, after further research and experimentation,
turned out to be the collate=true option:
lp -o sides=two-sided-long-edge -o collate=true -d printername file
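Incidentally, if you don't want to type those -o flags every time,
lpoptions should be able to save them as defaults for the printer
(untested here, just the standard CUPS way):
lpoptions -p printername -o sides=two-sided-long-edge -o collate=true
After that, a plain lp -d printername file should pick up those options.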
Tags: linux, cmdline, printing
[
11:05 Sep 16, 2018
More linux |
permalink to this entry |
]
Mon, 03 Sep 2018
Continuing the discussion of USB networking from a Raspberry Pi Zero
or Zero W (Part
1: Configuring an Ethernet Gadget and
Part
2: Routing to the Outside World): You've connected your Pi Zero to another
Linux computer, which I'll call the gateway computer, via a micro-USB cable.
Configuring the Pi end is easy. Configuring the gateway end is easy as
long as you know the interface name that corresponds to the gadget.
ip link
gave a list of several networking devices;
on my laptop right now they include lo, enp3s0, wlp2s0 and enp0s20u1.
How do you tell which one is the Pi Gadget?
When I tested it on another machine, it showed up as
enp0s26u1u1i1. Even aside from my wanting to script it, it's
tough for a beginner to guess which interface is the right one.
Try dmesg
Sometimes you can tell by inspecting the output of dmesg | tail.
If you run dmesg shortly after you initialize the gadget (for instance,
right after plugging the USB cable into the gateway computer), you'll see
some lines like:
[ 639.301065] cdc_ether 3-1:1.0 enp0s20u1: renamed from usb0
[ 9458.218049] usb 3-1: USB disconnect, device number 3
[ 9458.218169] cdc_ether 3-1:1.0 enp0s20u1: unregister 'cdc_ether' usb-0000:00:14.0-1, CDC Ethernet Device
[ 9462.363485] usb 3-1: new high-speed USB device number 4 using xhci_hcd
[ 9462.504635] usb 3-1: New USB device found, idVendor=0525, idProduct=a4a2
[ 9462.504642] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 9462.504647] usb 3-1: Product: RNDIS/Ethernet Gadget
[ 9462.504660] usb 3-1: Manufacturer: Linux 4.14.50+ with 20980000.usb
[ 9462.506242] cdc_ether 3-1:1.0 usb0: register 'cdc_ether' at usb-0000:00:14.0-1, CDC Ethernet Device, f2:df:cf:71:b9:92
[ 9462.523189] cdc_ether 3-1:1.0 enp0s20u1: renamed from usb0
(Aside: whose bright idea was it to rename
usb0 to enp0s26u1u1i1, or wlan0 to wlp2s0? I'm curious exactly who finds
their task easier with the name enp0s26u1u1i1 than with usb0. It
certainly complicated all sorts of network scripts and howtos when the
name wlan0 went away.)
Anyway, from inspecting that dmesg output you can probably figure out
the name of your gadget interface. But it would be nice to have
something more deterministic, something that could be used from a script.
My goal was to have a shell function in my .zshrc, so I could type
pigadget
and have it set everything up automatically.
How to do that?
A More Deterministic Way
First, the name starts with en, meaning it's an ethernet interface,
as opposed to wi-fi, loopback, or various other types of networking
interface. My laptop also has a built-in ethernet interface,
enp3s0, as well as lo,
the loopback or "localhost" interface, and wlp2s0,
the wi-fi chip, the one that used to be called wlan0.
Second, it has a 'u' in the name. USB ethernet interfaces start with
en and then add suffixes to enumerate all the hubs involved.
So the number of 'u's in the name tells you how many hubs are involved;
that enp0s26u1u1i1 I saw on my desktop had two hubs in the way,
the computer's internal USB hub plus the external one sitting on my desk.
So if you have no USB ethernet interfaces on your computer,
looking for an interface name that starts with 'en' and has at least
one 'u' would be enough. But if you have USB ethernet, that
won't work so well.
Using the MAC Address
You can get some useful information from the MAC address,
called "link/ether" in the ip link output.
In this case, it's f2:df:cf:71:b9:92, but -- whoops! --
the next time I rebooted the Pi, it became ba:d9:9c:79:c0:ea.
The address turns out to be
randomly
generated and will be different every time. It is possible to set
it to a fixed value, and that thread has some suggestions on how,
but I think they're out of date, since they reference a kernel module
called g_ether whereas the module on my updated Raspbian
Stretch is called cdc_ether. I haven't tried.
Anyway, random or not, the MAC address also has one useful property:
the first octet (f2 in my first example)
will always have the '2' bit set, as an indicator that it's a "locally
administered" MAC address rather than one that's globally unique.
See the Wikipedia page
on MAC addressing for details on the structure of MAC addresses.
Both f2 (11110010 in binary) and ba (10111010 binary) have the
2 (00000010) bit set.
No physical networking device, like a USB ethernet dongle, should have
that bit set; physical devices have MAC addresses that indicate what
company makes them. For instance, Raspberry Pis with networking, like
the Pi 3 or Pi Zero W, have interfaces that start with b8:27:eb.
Note the 2 bit isn't set in b8.
Most people won't have any USB ethernet devices connected that have
the "locally administered" bit set. So it's a fairly good test for
a USB ethernet gadget.
Turning That Into a Shell Script
So how do we package that into a pipeline so the shell -- zsh, bash or
whatever -- can check whether that 2 bit is set?
First, use ip -o link to print out information about all
network interfaces on the system.
But really you only need the ones starting with en and containing a u.
Splitting out the u isn't easy at this
point -- you can check for it later -- but you can at least limit it to lines
that have en after a colon-space. That gives output like:
$ ip -o link | grep ": en"
5: enp3s0: mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000\ link/ether 74:d0:2b:71:7a:3e brd ff:ff:ff:ff:ff:ff
8: enp0s20u1: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000\ link/ether f2:df:cf:71:b9:92 brd ff:ff:ff:ff:ff:ff
Within that, you only need two pieces: the interface name (the second word)
and the MAC address (the 17th word). Awk is a good tool for picking
particular words out of an output line:
$ ip -o link | grep ': en' | awk '{print $2, $17}'
enp3s0: 74:d0:2b:71:7a:3e
enp0s20u1: f2:df:cf:71:b9:92
The next part is harder: you have to get the shell to loop over those
output lines, split them into the interface name and the MAC address,
then split off the second character of the MAC address and test it
as a hexadecimal number to see if the '2' bit is set. I suspected that
this would be the time to give up and write a Python script, but no,
it turns out zsh and even bash can test bits:
ip -o link | grep en | awk '{print $2, $17}' | \
while read -r iff mac; do
# LON is a numeric variable containing the digit we care about.
# The "let" is required so LON will be a numeric variable,
# otherwise it's a string and the bitwise test fails.
let LON=0x$(echo $mac | sed -e 's/:.*//' -e 's/.//')
# Is the 2 bit set? Meaning it's a locally administered MAC
if ((($LON & 0x2) != 0)); then
echo "Bit is set, $iff is the interface"
fi
done
Pretty neat! So now we just need to package it up into a shell function
and do something useful with $iff when you find one with the bit set:
namely, break out of the loop, call ip a add and ip link set
to enable networking to the Raspberry Pi
gadget, and enable routing so the Pi will be able to get to
networks outside this one. Here's the final function:
# Set up a Linux box to talk to a Pi0 using USB gadget on 192.168.0.7:
pigadget() {
iface=''
ip -o link | grep en | awk '{print $2, $17}' | \
while read -r iff mac; do
# LON is a numeric variable containing the digit we care about.
# The "let" is required so zsh will know it's numeric,
# otherwise the bitwise test will fail.
let LON=0x$(echo $mac | sed -e 's/:.*//' -e 's/.//')
# Is the 2 bit set? Meaning it's a locally administered MAC
if ((($LON & 0x2) != 0)); then
iface=$(echo $iff | sed 's/:.*//')
break
fi
done
if [[ x$iface == x ]]; then
echo "No locally administered en interface:"
ip a | egrep '^[0-9]:'
echo Bailing.
return
fi
sudo ip a add 192.168.7.1/24 dev $iface
sudo ip link set dev $iface up
# Enable routing so the gadget can get to the outside world:
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
}
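With that function in .zshrc, bringing up the link is just:
pigadget
ssh pi@192.168.7.2
(assuming the Pi kept the static 192.168.7.2 address from Part 1).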
Tags: raspberry pi, linux, networking
[
18:41 Sep 03, 2018
More linux |
permalink to this entry |
]
Fri, 31 Aug 2018
I wrote some time ago about how to use a
Raspberry
Pi over USB as an "Ethernet Gadget". It's a handy way to talk to
a headless Pi Zero or Zero W if you're somewhere where it doesn't already
have a wi-fi network configured.
However, the setup I gave in that article doesn't offer a way for the
Pi Zero to talk to the outside world. The Pi is set up to use the
machine on the other end of the USB cable for routing and DNS, but that
doesn't help if the machine on the other end isn't acting as a router
or a DNS host.
A lot of the ethernet gadget tutorials I found online
explain how to do this on Mac and Windows, but it was tough to find
an example for Linux. The best I found was for Slackware,
How
to connect to the internet over USB from the Raspberry Pi Zero,
which should work on any Linux, not just Slackware.
Let's assume you have the Pi running as a gadget and you can talk to it,
as discussed in the previous article, so you've run:
sudo ip a add 192.168.7.1/24 dev enp0s20u1
sudo ip link set dev enp0s20u1 up
substituting your network number and the interface name that the Pi
created on your Linux machine, which you can find in
dmesg | tail or ip link.
(In Part 3 I'll talk more about how to find the right interface name
if it isn't obvious.)
At this point, the network is up and you should be able to ping the Pi
with the address you gave it, assuming you used a static IP:
ping 192.168.7.2
If that works, you can ssh to it, assuming you've enabled ssh.
But from the Pi's end, all it can see is your machine; it can't
get out to the wider world.
For that, you need to enable IP forwarding and masquerading:
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Now the Pi can route to the outside world, but it still doesn't have
DNS so it can't get any domain names. To test that, on the gateway machine
try pinging some well-known host:
$ ping -c 2 google.com
PING google.com (216.58.219.110) 56(84) bytes of data.
64 bytes from mia07s25-in-f14.1e100.net (216.58.219.110): icmp_seq=1 ttl=56 time=78.6 ms
64 bytes from mia07s25-in-f14.1e100.net (216.58.219.110): icmp_seq=2 ttl=56 time=78.7 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 78.646/78.678/78.710/0.032 ms
Take the IP address from that -- e.g. 216.58.219.110 -- then go to a shell
on the Pi and try ping -c 2 216.58.219.110, and you should
see a response.
DNS with a Public DNS Server
Now all you need is DNS. The easy way is to use one of the free DNS
services, like Google's 8.8.8.8. Edit /etc/resolv.conf and add
a line like
nameserver 8.8.8.8
and then try pinging some well-known hostname.
If it works, you can make that permanent by editing /etc/resolvconf.conf
and adding this line:
name_servers=8.8.8.8
Otherwise you'll have to do it every time you boot.
Your Own DNS Server
But not everyone wants to use public nameservers like 8.8.8.8.
For one thing, there are privacy implications: it means you're telling
Google about every site you ever use for any reason.
Fortunately, there's an easy way around that, and you don't even
have to figure out how to configure bind/named. On the gateway box,
install dnsmasq, available through your distro's repositories.
It will use whatever nameserver you're already using on that machine,
and relay it to other machines like your Pi that need the information.
I didn't need to configure it at all; it worked right out of the box.
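In case you ever do need to restrict it, a couple of lines in
/etc/dnsmasq.conf are probably all it would take (untested, since the
defaults worked for me; substitute your gadget's interface name):
interface=enp0s20u1
no-dhcp-interface=enp0s20u1
The second line tells dnsmasq to answer DNS on that interface but not
hand out DHCP leases there.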
In the next article, Part 3:
more about those crazy interface names (why is it
enp0s20u1 on my laptop but enp0s26u1u1i1 on my desktop?),
how to identify which interface is the gadget by using its MAC,
and how to put it all together into a shell function so you can
set it up with one command.
Tags: raspberry pi, linux, networking
[
15:25 Aug 31, 2018
More linux |
permalink to this entry |
]
Thu, 23 Aug 2018
I try to
avoid Grub2
on my Linux machines, for reasons I've discussed before.
Even if I run it, I usually block it from auto-updating /boot since that
tends to overwrite other operating systems.
But on a couple of my Debian machines, that has meant needing to notice
when a system update has installed a new kernel, so I can update the
relevant boot files. Inevitably, I fail to notice, and end up running
an out of date kernel.
But didn't Debian use to have a /boot/vmlinuz that always
linked to the latest kernel? That was such a good idea: what happened
to that?
I'll get to that. But before I found out, I got sidetracked trying to
find a way to check whether my kernel was up-to-date, so I could have
it warn me of out-of-date kernels when I log in.
That turned out to be fairly easy using uname and a little shell
pipery:
# Is the kernel running behind?
kernelvers=$(uname -a | awk '{ print $3; }')
latestvers=$(cd /boot; ls -1 vmlinuz-* | sort --version-sort | tail -1 | sed 's/vmlinuz-//')
if [[ $kernelvers != $latestvers ]]; then
echo "======= Running kernel $kernelvers but $latestvers is available"
else
echo "The kernel is up to date"
fi
I put that in my .login. But meanwhile I discovered that that
/boot/vmlinuz link still exists -- it just isn't enabled
by default for some strange reason. That, of course, is the right
way to make sure you're on the latest kernel, and you can do it with the
linux-update-symlinks command.
linux-update-symlinks
is called automatically when you install a new kernel -- but by
default it updates symlinks in the root directory, /, which isn't
much help if you're trying to boot off a separate /boot
partition.
But you can configure it to notice your /boot partition.
Edit /etc/kernel-img.conf and change link_in_boot
to yes:
link_in_boot = yes
Then linux-update-symlinks will automatically update the
/boot/vmlinuz link whenever you update the kernel,
and whatever bootloader you prefer can point to that image.
It also updates /boot/vmlinuz.old to point to the previous kernel
in case you can't boot from the new one.
Update: To get linux-update-symlinks to update symlinks to reflect
the current kernel, you need to reinstall the package for the current kernel,
e.g. apt-get install --reinstall linux-image-4.18.0-3-amd64.
Just apt-get install --reinstall linux-image-amd64 isn't enough.
Tags: linux, debian, shell
[
20:14 Aug 23, 2018
More linux/kernel |
permalink to this entry |
]
Sun, 12 Aug 2018
About three weeks ago, a Debian (testing) update made a significant
change on my system: it added a 30-minute suspend timeout. If I left
the machine unattended for half an hour, it would automatically go
to sleep.
What's wrong with that? you ask. Why not just let it sleep if you're
going to be away from it that long?
But sometimes there's a reason to leave it running. For instance, I
might want to track an ongoing discussion in IRC, and occasionally
come back to check in. Or, more important, a long-running job that
doesn't require user input, like a system backup, ripping a CD,
or testing a web service.
None of those count as "activity" to keep the system awake: only
mouse and keyboard motions count.
There are lots of pages that point to the file
/etc/systemd/logind.conf, where you can find commented-out
lines like
#IdleAction=ignore
#IdleActionSec=30min
The comment at the top of the file says that these are the defaults
and references the logind.conf man page. Indeed, man logind.conf
says that setting IdleAction=ignore should prevent anything from happening,
and that setting IdleActionSec=120min should lead to a longer delay.
Alas, neither is true. This file is completely ignored as far as I can
tell, and I don't know why it's there, or where the 30 minute setting
is coming from.
What actually did work was in
Debian's Suspend wiki page.
I was skeptical since the page hasn't been updated since Debian Jessie
(Stretch, the successor to Jessie, has been out for more than a year
now) and the change I was seeing just happened in the last month.
I was also leery because the only advice it gives is
"For systems which should never attempt any type of suspension, these
targets can be disabled at the systemd level". I do suspend my system,
frequently; I just don't want it to happen unless I tell it to, or with
a much longer timeout than 30 minutes.
But it turns out the command it suggests does work:
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
and it doesn't disable suspending entirely: I can still suspend manually,
it just disables autosuspend. So that's good enough.
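To double-check that the masking took, systemctl is-enabled will report
the state (just a sanity check, not from the wiki page):
systemctl is-enabled sleep.target suspend.target
which should print "masked" for each one.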
Be warned: the page says next:
Then run systemctl restart systemd-logind.service
or reboot.
It neglects to mention that restarting systemd-logind.service
will kill your X session, so don't run that command if you're
in the middle of anything.
It would be nice to know where the 30-minute timeout had been coming
from, so I could enable it after, say, 90 or 120 minutes. A timeout
sounds like a good thing, if it's something the user can configure.
But like so many systemd functions, no one who writes documentation
seems to know how it actually works, and those who know aren't telling.
Tags: linux, systemd
[
13:36 Aug 12, 2018
More linux |
permalink to this entry |
]
Fri, 20 Jul 2018
Such a classic Linux story.
For a video I'll be showing during tonight's planetarium presentation
(Sextants,
Stars, and Satellites: Celestial Navigation Through the Ages, for
anyone in the Los Alamos area),
I wanted to get HDMI audio working from my laptop, running Debian Stretch.
I'd done that once before on this laptop
(HDMI
Presentation Setup Part I and
Part
II) so I had some instructions to follow; but while aplay -l
showed the HDMI audio device, aplay -D plughw:0,3
didn't play anything and alsamixer
and alsamixergui
only showed two devices, not the long list of devices I was used to seeing.
Web searches related to Linux HDMI audio all pointed to pulseaudio,
which I don't use,
and I was having trouble finding anything for plain ALSA without pulse.
In the old days, removing pulseaudio used to be the cure
for practically every Linux audio problem. But I thought to myself,
It's been a couple years since I actually tried pulse, and people have
told me it's better now. And it would be a relief to have pulseaudio
working so things like Firefox would Just Work. Maybe I should try
installing it and see what happens.
So I ran an aptitude search pulseaudio
to find the
package name I'd need to install. Imagine my surprise when it turned
out that it was already installed!
So I did some more web searching to find out how to talk to pulse and
figure out how to enable HDMI, or un-mute it, or whatever it was I needed.
But to no avail: everything I found was stuff like "In the Ubuntu
audio panel, do this". The few pages I found that listed commands
to run didn't help -- the commands all gave errors.
Running short on time,
I reverted to the old days: aptitude purge pulseaudio.
Rebooted to make sure the audio system was reset,
ran alsamixergui
and sure enough, there were all my normal devices, including the IEC958 device
for HDMI, which was indeed muted. I unmuted it, tried the video again --
and music blasted from my TV's speakers.
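In principle the same unmuting can be done without a GUI using amixer,
though the exact control name and card number vary by hardware, so list
them first to be sure:
amixer -c 0 scontrols
amixer -c 0 sset 'IEC958' unmute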
I'm sure there are machines where pulseaudio works. There are even
a few people who have audio setups complicated enough to need
something like pulseaudio. But in 2018, just as in 2006,
aptitude purge pulseaudio
is the easiest solution
to a Linux sound problem.
Tags: linux, audio, pulseaudio, hdmi, speaking
[
14:17 Jul 20, 2018
More linux |
permalink to this entry |
]
Sat, 09 Jun 2018
I did the work to
built
my own Firefox primarily to fix a couple of serious regressions
that couldn't be fixed any other way. I'll start with the one that's
probably more common (at least, there are many people complaining
about it in many different web forums): the fact that Firefox won't
play sound on Linux machines that don't use PulseAudio.
There's a bug with a long discussion of the problem,
Bug 1345661 - PulseAudio requirement breaks Firefox on ALSA-only systems;
and the discussion in the bug links to another
discussion of the Firefox/PulseAudio problem. Some comments in those
discussions suggest that some near-future version of Firefox may
restore ALSA sound for non-Pulse systems; but most of those comments
are six months old, yet it's still not fixed in the version Mozilla
is distributing now.
In theory, ALSA sound is easy to enable.
Build options in Firefox are controlled through a file called mozconfig.
Create that file at the top level of your build directory, then add to it:
ac_add_options --enable-alsa
ac_add_options --disable-pulseaudio
You can see other options with ./configure --help.
Of course, like everything else in the computer world, there were
complications. When I typed mach build, I got:
Assertion failed in _parse_loader_output:
Traceback (most recent call last):
File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 260, in read_mozconfig
parsed = self._parse_loader_output(output)
File "/home/akkana/outsrc/gecko-dev/python/mozbuild/mozbuild/mozconfig.py", line 375, in _parse_loader_output
assert not in_variable
AssertionError
Error loading mozconfig: /home/akkana/outsrc/gecko-dev/mozconfig
Evaluation of your mozconfig produced unexpected output. This could be
triggered by a command inside your mozconfig failing or producing some warnings
or error messages. Please change your mozconfig to not error and/or to catch
errors in executed commands.
mozconfig output:
------BEGIN_ENV_BEFORE_SOURCE
... followed by a many-page dump of all my environment variables, twice.
It turned out that was coming from line 449 of
python/mozbuild/mozbuild/mozconfig.py:
# Lines with a quote not ending in a quote are multi-line.
if has_quote and not value.endswith("'"):
in_variable = name
current.append(value)
continue
else:
value = value[:-1] if has_quote else value
I'm guessing this was added because some Mozilla developer sets
a multi-line environment variable that has a quote in it but doesn't
end with a quote. Or something. Anyway, some fairly specific case.
I, on the other hand, have a different specific case: a short
environment variable that includes one or more single quotes,
and the test for their specific case breaks my build.
(In case you're curious why I have quotes in an environment variable:
The prompt-setting code in my .zshrc includes a variable called
PRIMES. In a login shell, this is set to the empty string,
but in subshells, I add ' for each level of shell under the login shell.
So my regular prompt might be (hostname)-, but if I run a
subshell to test something, the prompt will be (hostname')-,
a subshell inside that will be (hostname'')-, and so on.
It's a reminder that I'm still in a subshell and need to exit when
I'm done testing. In theory, I could do that with SHLVL, but SHLVL
doesn't care about login shells, so my normal shells inside X are
all SHLVL=2 while shells on a console or from an ssh are
SHLVL=1, so if I used SHLVL I'd have to have some special case
code to deal with that.
Also, of course I could use a character other than a single-quote.
But in the thirty or so years I've used this, Firefox is the first
program that's ever had a problem with it. And apparently I'm not the
first one to have a problem with this:
bug 1455065
was apparently someone else with the same problem. Maybe that will show
up in the release branch eventually.)
Anyway, disabling that line fixed the problem:
# Lines with a quote not ending in a quote are multi-line.
if False and has_quote and not value.endswith("'"):
and after that, mach build succeeded, I built a new
Firefox, and lo and behold! I can play sound in YouTube videos and
on Xeno-Canto again, without needing an additional browser.
Tags: web, firefox, linux
[
16:49 Jun 09, 2018
More tech/web |
permalink to this entry |
]
Sun, 03 Jun 2018
With Firefox Quantum, Mozilla has moved away from
letting users configure the browser the way they like.
If I was going to switch to Quantum as my everyday browser,
there were several problems I needed to fix first -- and they all
now require modifying the source code, then building the whole browser
from scratch.
I'll write separately about fixing the specific problems; but first
I had to build Firefox. Although I was a Firefox developer way back in
the day, the development environment has changed completely since then,
so I might as well have been starting from scratch.
Setting up a Firefox build
I started with Mozilla's
Linux
build preparation page. There's a script
called bootstrap.py that's amazingly comprehensive. It will
check what's installed on your machine and install what's needed for a
Firefox build -- and believe me, there are a lot of dependencies.
Don't take the "quick" part of the "quick and easy" comment at the
beginning of the script too seriously; I think on my machine, which
already has a fairly full set of build tools, the script was
downloading additional dependencies for 45 minutes or so. But it
was indeed fairly easy: the script asks lots of questions about
optional dependencies, and usually has suggestions, which I mostly
followed.
Eventually bootstrap.py finishes loading the dependencies and
gets to the point of wanting to check out the mozilla-unified
repository, and that's where I got into trouble.
The script wants to check out the bleeding edge tip of Mozilla development.
That's what you want if you're a developer checking in to the project.
What I wanted was a copy of the currently released Firefox, but with
a chance to make my own customizations. And that turns out to be difficult.
Getting a copy of the release tree
In theory, once you've checked out mozilla-unified with Mercurial,
assuming you let bootstrap.py enable the recommended "firefoxtree"
hg extension (which I did), you can switch to the release branch
with:
hg pull release
hg up -c release
That didn't work for me: I tried it numerous times over the course of
the day, and every time it died with
"abort: HTTP request error (incomplete response; expected 5328 bytes
got 2672)" after "adding manifests" when it started "adding file changes".
That sent me on a long quest aided by someone in Mozilla's helpful
#introduction channel, where they help people with build issues.
You might think it would be a common thing to want to build a copy of
the released version of Firefox, and update it when a new release
comes out. But apparently not; although #introduction is a friendly
and helpful channel, everyone seemed baffled as to why hg up
didn't work and what the possible alternatives might be.
Bundles and Artifacts
Eventually someone pointed me to the long list of
"bundle"
tarballs and advised me on how to get a release tarball there.
I actually did that, and (skipping ahead briefly) it built and ran;
but I later discovered that "bundles"
aren't actually hg repositories and can't be updated. So once you've
downloaded your 3 gigabytes or so of Mozilla stuff and built it, it's
only good for a week or two until the next Mozilla release, when
you're now hopelessly out of date and have to download a whole nuther
bundle. Bundles definitely aren't the answer, and they aren't well
supported or documented either. I recommend staying away from them.
I should also mention "artifact builds". These sound like a great
idea: a lot of the source is already built for you, so you just build
a little bit of it. However, artifact builds are only available for a
few platforms and branches. If your OS differs in any way from whoever
made the artifact build, or if you're requesting a branch, you're
likely to waste a lot of time (like I did) downloading stuff only to
get mysterious error messages. And even if it works, you won't be able
to update it to keep on top of security fixes. Doesn't seem like a
good bet.
GitHub to the rescue
Okay, so Mercurial's branch switching doesn't work.
But it turns out you don't have to use Mercurial. There's a
GitHub mirror for Firefox
called gecko-dev, and after cloning it you can use normal
git commands to switch branches:
git clone https://github.com/mozilla/gecko-dev.git
cd gecko-dev/
git checkout -t origin/release
You can verify you're on the right branch with git branch -vv,
or if you want to list all branches and their remotes, git branch -avv.
Finally: a Firefox release branch that you can actually update!
Building Firefox
Once you have a source tree, you can use the all-powerful mach
script to build the current release of Firefox:
./mach build
Of course that takes forever -- hours and hours, depending on how
fast your machine is.
Running your New Firefox
The build, after it finishes, helpfully tells you to test it with
./mach run
, which runs your newly-built firefox with a
special profile, so it doesn't interfere with your running build.
It also prints:
For more information on what to do now, see https://developer.mozilla.org/docs/Developer_Guide/So_You_Just_Built_Firefox
Great! Except there's no information there on how to package or run
your build -- it's just a placeholder page asking people to contribute
to the page.
It turns out that obj-whatever/dist/bin is the directory that
corresponds to the tarball you download from Mozilla, and you can run
/path/to/mozilla-release/obj-whatever/dist/bin/firefox from anywhere.
I tried filing a bug request to have a sub-page created explaining how
to run a newly built Firefox, but it hasn't gotten any response.
Maybe I'll just edit the "So You Just Built" page.
Update, 7 months later:
Nobody ever did respond to my bug, but someone on Mozilla's #introduction
channel for help with builds gave me a blessing to modify the page directly,
which I did.
Specifically, I added:
Your new Firefox executable can be found in: $OBJDIR/dist/bin/firefox. You can run it from there.
If you need to move it, e.g. to another machine, you can run:
./mach package
This should create an OS-specific package, e.g. a tarball on Linux, which will appear in $OBJDIR/dist. You can also copy the $OBJDIR/dist/bin directory -- be sure to use a copy method that expands soft links -- but the result will be much larger than what you get with mach package.
On Windows you may want to read about
Windows Installer Builds.
Incidentally, my gecko-dev build takes 16G of disk space, of which
9.3G is things it built, which are helpfully segregated in
obj-x86_64-pc-linux-gnu.
Tags: firefox, linux
[
15:55 Jun 03, 2018
More tech/web |
permalink to this entry |
]
Sat, 10 Mar 2018
Our makerspace got a donation of a bunch of Galileo gen2 boards from Intel
(image
from Mwilde2
on Wikimedia commons).
The Galileo line has been discontinued, so there's no support and
no community, but in theory they're fairly interesting boards.
You can use a Galileo in two ways: you can treat it
like an Arduino, after using the Arduino IDE to download a
Galileo hardware definition since they're not Atmega chips. They
even have Arduino-format headers so you can plug in an Arduino shield.
That works okay (once you figure out that you need to download
the Galileo v2 hardware definitions, not the regular Galileo).
But they run Linux under the hood, so you can also use them as a
single-board Linux computer.
Serial Cable
The first question is how to talk to the board. The documentation is terrible,
and web searches aren't much help because these boards were never
terribly popular. Worse, the v1 boards seem to have been more widely
adopted than the v2 boards, so a lot of what you find on the web
doesn't apply to v2. For instance, the v1 required a special serial cable
that used a headphone jack as its connector.
Some of the Intel documentation talks about how you can load a special
Arduino sketch that then disables the Arduino bootloader and instead
lets you use the USB cable as a serial monitor. That made me nervous:
once you load that sketch, Arduino mode no longer works until you
run a command on Linux to start it up again. So if the sketch doesn't
work, you may have no way to talk to the Galileo.
Given the state of the documentation I'd already struggled with for
Arduino mode, it didn't sound like a good gamble. I thought a real
serial cable sounded like a better option.
Of course, the Galileo documentation doesn't tell you what needs to
plug in where for a serial cable. The board does have a standard FTDI
6-pin header on the board next to the ethernet jack, and the labels on
the pins seemed to correspond to the standard pinout on my Adafruit
FTDI Friend: Gnd, CTS, VCC, TX, RX, RTS. So I tried that first, using
GNU screen to connect to it from Linux just like I would a Raspberry
Pi with a serial cable:
screen /dev/ttyUSB0 115200
Powered up the Galileo and sure enough, I got boot messages and was
able to log in as root with no password. It annoyingly forces orange
text on a black background, making it especially hard to read on
a light-background terminal, but hey, it's a start.
Later I tried a Raspberry Pi serial cable, with just RX (green), TX (white)
and Gnd (black) -- don't use the red VCC wire since the Galileo is already
getting power from its own power brick -- and that worked too. The Galileo
doesn't actually need CTS or RTS. So that's good: two easy ways to talk to
the board without buying specialized hardware. Funny they didn't bother
to mention it in the docs.
Blinking an LED from the Command Line
Once connected, how do you do anything? Most of the
Intel
tutorials on Linux are useless, devoting most of their space
to things like how to run Putty on Windows and no space at all to
how to talk to pins. But I finally found a
discussion thread
with a Python example for Galileo. That's not immediately helpful
since the built-in Linux doesn't have python installed (nor gcc,
natch). Fortunately, the Python example used files in /sys
rather than a dedicated Python library;
we can access /sys files just as well from the shell.
Of course, the first task is to blink an LED on pin 13. That
apparently corresponds to GPIO 7 (what are the other arduino/GPIO
correspondences? I haven't found a reference for that yet.) So you
need to export that pin (which creates /sys/class/gpio/gpio7)
and set its direction to out. But that's not enough: the
pin still doesn't turn on when you
echo 1 > /sys/class/gpio/gpio7/value. Why not?
I don't know, but the Python script exports three other pins --
46, 30, and 31 -- and echoes 0 to 30 and 31. (It does this without
first setting their directions to out, and if you try
that, you'll get an error, so I'm not convinced the Python script
presented as the "Correct answer" would actually have worked. Be warned.)
Anyway, I ended up with these shell lines as
preparation before the Galileo can actually blink:
# echo 7 >/sys/class/gpio/export
# echo out > /sys/class/gpio/gpio7/direction
# echo 46 >/sys/class/gpio/export
# echo 30 >/sys/class/gpio/export
# echo 31 >/sys/class/gpio/export
# echo out > /sys/class/gpio/gpio30/direction
# echo out > /sys/class/gpio/gpio31/direction
# echo 0 > /sys/class/gpio/gpio30/value
# echo 0 > /sys/class/gpio/gpio31/value
And now, finally, you can control the LED on pin 13 (GPIO 7):
# echo 1 > /sys/class/gpio/gpio7/value
# echo 0 > /sys/class/gpio/gpio7/value
or run a blink loop:
# while /bin/true; do
> echo 1 > /sys/class/gpio/gpio7/value
> sleep 1
> echo 0 > /sys/class/gpio/gpio7/value
> sleep 1
> done
Searching Fruitlessly for a "Real" Linux Image
All the Galileo documentation is emphatic that you should download
a Linux distro and burn it to an SD card rather than using the Yocto
that comes preinstalled. The preinstalled Linux apparently has no
persistent storage, so not only does it not save your Linux programs,
it doesn't even remember the current Arduino sketch.
And it has no programming languages and only a rudimentary busybox shell.
So finding and downloading a Linux distro was the next step.
Unfortunately, that mostly led to dead ends. All the official Intel
docs describe different download filenames, and they all point to
generic download pages that no longer include any of the filenames
mentioned. Apparently Intel changed the name for its Galileo images
frequently and never updated its documentation.
After forty-five minutes of searching and clicking around,
I eventually found my way to
Intel® IoT Developer Kit Installer Files,
which includes sizable downloads with names like
- iss-iot-linux_12-09-16.tar.bz2 (324.07 MB),
- intel-iot-yocto.tar.xz (147.53 MB),
- intel-iot-wrs-pulsar-64.tar.xz (283.86 MB),
- intel-iot-wrs-32.tar.xz (386.16 MB), and
- intel-iot-ubuntu.tar.xz (209.44 MB)
From the size, I suspect those are all Linux images. But what are they
and how do they differ? Do any of them still have working repositories?
Which ones come with Python, with gcc, with GPIO support,
with useful development libraries? Do any of them get security updates?
As far as I can tell, the only way to tell is to download each image,
burn it to a card, boot from it, then explore the filesystem
trying to figure out what distro it is and how to try updating it.
But by this time I'd wasted three hours and gotten no
further than the shell commands to blink a single LED, and I ran out of
enthusiasm. I mean, I could spend five more hours on this, try several
of the Linux images, and see which one works best. Or I could spend
$10 on a Raspberry Pi Zero W that has abundant documentation,
libraries, books, and community howtos. Plus wi-fi, bluetooth and HDMI,
none of which the Galileo has.
Arduino and Linux Living Together
So that's as far as I've gone. But I do want to note
one useful thing I stumbled upon while searching
for information about Linux distributions:
Starting Arduino
sketch from Linux terminal shows how to run an Arduino sketch
(assuming it's already compiled) from Linux:
sketch.elf /dev/ttyGS0 &
It's a fairly cool option to have. Maybe one of these days, I'll pick
one of the many available distros and try it.
Tags: hardware, linux, intel, galileo, arduino
[
13:54 Mar 10, 2018
More hardware |
permalink to this entry |
]
Thu, 01 Mar 2018
I updated my Debian Testing system via apt-get upgrade,
as one does during the normal course of running a Debian system.
The next time I went to a locally hosted website, I discovered PHP
didn't work. One of my websites gave an error, due to a directive
in .htaccess; another one presented pages that were full of PHP code
interspersed with the HTML of the page. Ick!
In theory, Debian updates aren't supposed to change configuration files
without asking first, but in practice, silent and unexpected Apache
bustage is fairly common. But for this one, I couldn't find anything
in a web search, so maybe this will help.
The problem turned out to be that /etc/apache2/mods-available/
includes four files:
$ ls /etc/apache2/mods-available/*php*
/etc/apache2/mods-available/php7.0.conf
/etc/apache2/mods-available/php7.0.load
/etc/apache2/mods-available/php7.2.conf
/etc/apache2/mods-available/php7.2.load
The appropriate files are supposed to be linked from there into
/etc/apache2/mods-enabled. Presumably, I previously had a link
to ../mods-available/php7.0.* (or perhaps 7.1?); the upgrade to
PHP 7.2 must have removed that existing link without replacing it with
a link to the new ../mods-available/php7.2.*.
The solution is to restore those links, either with ln -s
or with the approved apache2 commands (as root, of course):
# a2enmod php7.2
# systemctl restart apache2
Whew! Easy fix, but it took a while to realize what was broken, and
would have been nice if it didn't break in the first place.
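For next time, a quick way to check which PHP module Apache actually
has loaded (just a sanity check, not part of the fix):
apache2ctl -M | grep -i php
If that prints nothing, the mods-enabled link is missing again.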
Why is the link version-specific anyway? Why isn't there a file called
/etc/apache2/mods-available/php.* for the latest version?
Does PHP really change enough between minor releases to break websites?
Doesn't it break a website more to disable PHP entirely than to swap in
a newer version of it?
Tags: linux, debian, apache, web
[
10:31 Mar 01, 2018
More linux |
permalink to this entry |
]
Fri, 02 Feb 2018
When I work with a Raspberry Pi from anywhere other than home,
I want to make sure I can do what I need to do without a network.
With a Pi model B, you can use an ethernet cable. But that doesn't
work with a Pi Zero, at least not without an adapter.
The lowest common denominator is a serial cable, and I always
recommend that people working with headless Pis get one of these;
but there are a lot of things that are difficult or impossible over
a serial cable, like file transfer, X forwarding, and running any
sort of browser or other network-aware application on the Pi.
Recently I learned how to configure a Pi Zero as a USB ethernet gadget,
which lets you network between the Pi and your laptop using only a
USB cable.
It requires a bit of setup, but it's definitely worth it.
(This apparently only works with Zero and Zero W, not with a Pi 3.)
The Cable
The first step is getting the cable.
For a Pi Zero or Zero W, you can use a standard micro-USB cable:
you probably have a bunch of them for charging phones (if you're
not an Apple person) and other devices.
Set up the Pi
Setting up the Raspberry Pi end requires editing
two files in /boot, which you can do either on the Pi itself,
or by mounting the first SD card partition on another machine.
In /boot/config.txt add this at the end:
dtoverlay=dwc2
In /boot/cmdline.txt, at the end of the long list of options
but on the same line, add a space, followed by:
modules-load=dwc2,g_ether
Set a static IP address
This step is optional. In theory you're supposed to use some kind of
.local address provided by Bonjour (the Apple protocol that used to be called
zeroconf, and before that was called Rendezvous; on Linux machines
it's called Avahi). That doesn't work on
my Linux machine. If you don't use Bonjour, finding the Pi over the
ethernet link will be much easier if you set it up to use a static IP
address. And since there will be nobody else on your USB network
besides the Pi and the computer on the other end of the cable, there's
no reason not to have a static address: you're not going to collide
with anybody else.
You could configure a static IP in /etc/network/interfaces,
but that interferes with the way Raspbian handles wi-fi via
wpa_supplicant and dhcpcd; so you'd have USB networking but your
wi-fi won't work any more.
Instead, configure your address in Raspbian via dhcpcd.
Edit /etc/dhcpcd.conf and add this:
interface usb0
static ip_address=192.168.7.2
static routers=192.168.7.1
static domain_name_servers=192.168.7.1
This will tell Raspbian to use address 192.168.7.2 for its USB interface.
You'll set up your other computer to use 192.168.7.1.
Now your Pi should be ready to boot with USB networking enabled.
Plug in a USB cable (if it's a model A or B) or a micro USB cable
(if it's a Zero), plug the other end into your computer,
then power up the Pi.
Setting up a Linux machine for USB networking
The final step is to configure your local computer's USB ethernet
to use 192.168.7.1.
On Linux, find the name of the USB ethernet interface. This will only
show up after you've booted the Pi with the USB cable plugged in to
both machines.
ip a
The USB interface will probably start with en and will probably
be the last interface shown.
On my Debian machine, the USB network showed up as enp0s26u1u1.
So I can configure it thusly (as root, of course):
ip a add 192.168.7.1/24 dev enp0s26u1u1
ip link set dev enp0s26u1u1 up
(You can also use the older
ifconfig rather than
ip:
sudo ifconfig enp0s26u1u1 192.168.7.1 up
)
You should now be able to ssh into your Raspberry Pi
using the address 192.168.7.2, and you can make an appropriate entry
in /etc/hosts, if you wish.
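For example, adding a line like
192.168.7.2    pizero
to /etc/hosts means you can just ssh pi@pizero (assuming the default pi user).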
For a less hands-on solution, if you're using Mac or Windows, try
Adafruit's
USB gadget tutorial.
It's possible that might also work for Linux machines running Avahi.
If you're using Windows, you might prefer
CircuitBasics'
ethernet gadget tutorial.
Happy networking!
Update: there's now a
Part 2: Routing to the Outside World
and
Part 3: an Automated Script.
Tags: raspberry pi, linux, networking
[
14:53 Feb 02, 2018
More linux |
permalink to this entry |
]
Thu, 25 Jan 2018
(Wherein I rant about how bad CUPS has become.)
I had to set up two new printers recently. CUPS hasn't gotten any
better since the last time I bought a printer, maybe five years ago;
in fact, it's gotten quite a bit worse. I'm amazed at how difficult
it was to add these fairly standard laser printers, both of which
I'd researched beforehand to make sure they worked with Linux.
It took me about three hours for the first printer. The second one,
a few weeks later, "only" took about 45 minutes ... at which point
I realized I'd better write everything down so it'll be faster
if I need to do it again, or if I get the silly notion that I
might want to print from another computer, like my laptop.
I used the CUPS web interface; I didn't try any of the command-line tools.
Figure out the connection type
In the CUPS web interface, after you log in and click on Administration,
whether you click on Find New Printers or Add Printer,
you're faced with a bunch of identical options with no clue how to
choose between them. For example, Find New Printers
with a Dell E310dw connected shows:
Available Printers
- [Add This Printer] Virtual Braille BRF Printer (CUPS-BRF)
- [Add This Printer] Dell Printer E310dw (Dell Printer E310dw)
- [Add This Printer] Dell Printer E310dw (Dell Printer E310dw)
- [Add This Printer] Dell Printer E310dw (Dell Printer E310dw (driverless))
What is a normal human supposed to do with this? What's the difference
between the three E310dw entries and which one am I supposed to choose?
(Skipping ahead: None of them.)
And why is it finding a virtual Braille BRF Printer?
The only way to find out the difference is to choose one,
click on Next and look carefully at the URL. For the three E310dw
options above, that gives:
- dnssd://Dell%20Printer%20E310dw._ipp._tcp.local/?uuid=[long uuid here]
- lpd://DELL316BAA/BINARY_P1
- ipp://DELL316BAA.local:631/ipp/print
Again skipping ahead: none of those are actually right.
Go ahead, try all three of them and see.
You'll get error messages about empty PPD files.
But while you're trying them, write down, for each one,
the URL listed as Connection (something like the dnssd:,
lpd: or ipp: URLs listed above); and note, in the driver list
after you click on your manufacturer, how many entries there are
for your printer model, and where they show up in the list.
You'll need that information later.
Download some drivers
Muttering about the idiocy of all this -- why ship empty drivers that
won't install? Why not just omit drivers if they're not available?
Why use the exact same name for three different printer entries and
four different driver entries? --
the next step is to download and install the manufacturer's drivers.
If you're on anything but Redhat, you'll probably
either need to download an RPM and unpack it, or else google for the
hidden .deb files that exist on both Dell's and Brother's websites
that their sites won't actually find for you.
It might seem like you could just grab the PPD from inside those RPM
files and put it wherever CUPS is finding empty ones, but I never got
that to work. Much as I dislike installing proprietary .deb files,
for both printers that was the only method I found that worked.
Both Dell and Brother have two different packages to install.
Why two and what's the difference? I don't know.
Once you've installed the printer driver packages, you can go back
to the CUPS Add Printer screen. Which hasn't gotten any clearer
than before. But for both the Brother and the Dell, ipp: is the only
printer protocol that worked. So try each entry until you find
the one that starts with ipp:.
Set up an IP address and the correct URL
But wait, you're not done. Because CUPS gives you a URL like
ipp://DELL316BAA.local:631/ipp/print, and whatever that
.local thing is, it doesn't work. You'll be able to install the
printer, but when you try to print to it, it fails with
"unable to locate printer".
(.local apparently has something to do with assuming you're
running a daemon that does "Bonjour", the latest name for the Apple
service discovery protocol that was originally called Rendezvous, then
renamed to Zeroconf, then to Bonjour. On Linux it's called Avahi, but
even with an Avahi daemon this .local thing didn't work for me.
At least it made me realize that I had the useless Avahi daemon
running, so now I can remove it.)
So go back to Add Printer and click on Internet Printing
Protocol (ipp) under Other network printers and click
Continue. That takes you to a screen that suggests that you
want URLs like:
http://hostname:631/ipp/
http://hostname:631/ipp/port1
ipp://hostname/ipp/
ipp://hostname/ipp/port1
lpd://hostname/queue
socket://hostname
socket://hostname:9100
None of these is actually right. What these printers want --
at least, what both the Brother and the Dell wanted -- was
ipp://printerhostname:631/ipp/print
printerhostname?
Oh, did I forget to mention static IP? I definitely recommend that you
make a static IP for your printer, or at least add it to your router's
DHCP list so it always gets the same address. Then you can make
an entry in /etc/hosts for printerhostname. I guess that
.local thing was supposed to compensate for an address that
changes all the time, which might be a nifty idea if it worked,
but since it doesn't, make a static IP and use it in your ipp:
URL.
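For example, with a made-up address of 192.168.1.50, the /etc/hosts line
and the matching CUPS URL would look something like:
192.168.1.50    printerhostname
ipp://printerhostname:631/ipp/print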
Choose a driver
Now, finally! you can move on to choosing a driver. After you pick the
manufacturer, you'll be presented with a list that probably includes
at least three entries for your printer model. Here's where it helps
if you paid attention to how the list looked before you
installed the manufacturer's drivers: if there's a new entry for
your printer that wasn't there before, that's the non-empty one
you want. If there are two or more new entries for your printer that
weren't there before, as there were for the Dell ... shrug, all you
can do is pick one and hope.
Of course, once you manage to get through configuration to "Printer
successfully added", you should immediately run
Maintenance->Print Test Page. You may have to power cycle
the printer first since it has probably gone to sleep while you were
fighting with CUPS.
All this took me maybe three hours the first time, but it only took me
about 45 minutes the second time. Hopefully now that I've written this,
it'll be much faster next time. At least if I don't succumb to the siren
song of thinking a fairly standard laser printer ought to have a driver
that's already in CUPS, like they did a decade ago,
instead of always needing a download from the manufacturer.
If laser printers are this hard I don't even want to think about
what it's like to install a photo printer on Linux these days.
Tags: linux, printing, cups, rant
[
16:19 Jan 25, 2018
More linux |
permalink to this entry |
]
Sun, 26 Nov 2017
I wrote earlier about how to use an
IR
remote on Raspbian Jessie.
It turns out several things have changed under Raspbian Stretch.
Update, August 2019:
Apparently these updated instructions don't work any more either.
What apparently works now:
In /boot/config.txt, add:
dtoverlay=gpio-ir,gpio_pin=18
In /etc/lirc/lirc_options.conf, add:
driver=default
device = /dev/lirc0
/etc/modules and /etc/lirc/hardware.conf
don't need to be changed. Thanks to Sublim21 on #raspberrypi for the tip.
Here's the older procedure and discussion.
Here's the abbreviated procedure for Stretch:
Install LIRC
$ sudo apt-get install lirc
Enable the LIRC Overlay
Edit /boot/config.txt as root, look for
this line and uncomment it:
# Uncomment this to enable the lirc-rpi module
dtoverlay=lirc-rpi
Or if you prefer to use a pin other than 18,
change the pin assignment like this:
# Uncomment this to enable the lirc-rpi module
dtoverlay=lirc-rpi,gpio_in_pin=25,gpio_out_pin=17
See /boot/overlays/README for more information on overlays.
Fix the LIRC Options
Edit /etc/lirc/lirc_options.conf,
comment out the existing driver and device lines,
and add:
driver = default
device = /dev/lirc0
Reboot and stop the daemon
Reboot the Pi.
Now a bunch of LIRC daemons will be running. You don't want them
while you're configuring, and if you're eventually going to be
reading button presses from Python, you don't want them at all.
Disable them temporarily with
sudo systemctl stop lircd
which seems to be shorthand for
sudo systemctl stop lircd.socket
sudo systemctl stop lircd.service
Be sure to check with ps aux | grep lirc
to make sure you've
turned them off.
If you want to disable them permanently,
sudo systemctl disable lircd.socket lircd.service lircd-setup.service lircd-uinput.service lircmd.service
I got that list from:
systemctl list-unit-files | grep lirc
But you need them if you want to read from the
/var/run/lirc/lircd socket.
Use mode2 to verify it sees the buttons
With the daemons not running, a program called
mode2
can verify that your device's buttons are being seen at all.
I have no idea why it's named that, or what Mode 1 is.
mode2 -d /dev/lirc0
You should see lots of output. If you don't, double-check your wiring
and everything else you've done up to now.
Set up an lircd.conf
Here's where it gets difficult. On Jessie, you could run
irrecord -d /dev/lirc0 ~/lircd.conf
as described in my
earlier article.
However, that doesn't work on stretch. There's apparently a
bug in the
irrecord in stretch that makes it generate a file that doesn't work.
If you try it and it doesn't work, run
tail -f /var/log/messages | grep lirc
and you may see Info: Cannot configure the rc device for /dev/lirc0
and when you press buttons you'll see Notice: repeat code without
last_code received but you won't get any keys.
If you have a working lirc setup from a Jessie machine, try it first.
If it doesn't work, there's a script you can try that converts older
lirc conf files to a newer format. The safest way to try it is to
copy (with cp -a) the whole /etc/lirc directory to a local
directory and run:
/usr/share/lirc/lirc-old2new your-local-copy
Or if you feel brave, back up
/etc/lirc and run
sudo /usr/share/lirc/lirc-old2new with no arguments.
Either way, you should get an
lircd.conf that has a
chance of working with stretch.
If you don't have a working Jessie config, you're in trouble.
You might be able to edit the one from irrecord to make it work.
Here's the first part of my
working Jessie lircd.conf:
begin remote
name /home/pi/lircd.conf
bits 16
flags SPACE_ENC|CONST_LENGTH
eps 30
aeps 100
header 9117 4494
one 569 1703
zero 569 568
ptrail 575
repeat 9110 2225
pre_data_bits 16
pre_data 0xFD
gap 108337
toggle_bit_mask 0x0
begin codes
KEY_POWER 0x00FF
KEY_VOLUMEUP 0x807F
KEY_STOP 0x40BF
KEY_BACK 0x20DF
KEY_PLAYPAUSE 0xA05F
KEY_FORWARD 0x609F
KEY_DOWN 0x10EF
and here's the corresponding part of the nonworking one generated on Stretch:
begin remote
name DingMai
bits 32
flags SPACE_ENC|CONST_LENGTH
eps 30
aeps 100
header 9117 4494
one 569 1703
zero 569 568
ptrail 575
repeat 9110 2225
gap 108337
toggle_bit_mask 0x0
frequency 38000
begin codes
KEY_POWER 0x00FD00FF 0xBED8F1BC
KEY_VOLUMEUP 0x00FD807F 0xBED8F1BC
KEY_STOP 0x00FD40BF 0xBED8F1BC
KEY_BACK 0x00FD20DF 0xBED8F1BC
KEY_PLAYPAUSE 0x00FDA05F 0xBED8F1BC
KEY_FORWARD 0x00FD609F 0xBED8F1BC
KEY_DOWN 0x00FD10EF 0xBED8F1BC
It looks like setting bits to 16 and then using the second quartet
from each key might work. So try that if you're stuck.
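For instance (untested, just following that guess), you'd change bits back
to 16, add pre_data_bits and pre_data for the 0x00FD prefix, and keep only
the second quartet of each key code, so the Stretch file would start to
look like the Jessie one:
 bits            16
 pre_data_bits   16
 pre_data        0xFD
 ...
      KEY_POWER                0x00FF
      KEY_VOLUMEUP             0x807F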
Once you get irw working, you're home free. The Python modules
probably still won't do anything useful, but you can use my
pyirw.py
script as a model for a simple way to read keys from the lirc daemon.
In case you hit problems beyond what I saw, I found
this
discussion useful, which links to a complete
GitHub
gist of instructions for setting up lirc on Stretch.
Those instructions have a couple of extra steps involving module loading
that it turned out I didn't need, and on the other hand
it doesn't address the problems I saw with irrecord.
It looks like lirc configuration is a black art, not a science.
See what works for you. Good luck!
Tags: raspberry pi, linux, remote
[
12:00 Nov 26, 2017
More hardware |
permalink to this entry |
]
Thu, 26 Oct 2017
Our makerspace got some new Arduino kits that come with a bunch of fun
parts I hadn't played with before, including an IR remote and receiver.
The kits are intended for Arduino and there are Arduino libraries to
handle it, but I wanted to try it with a Raspberry Pi as well.
It turned out to be much trickier than I expected to read signals from
the IR remote in Python on the Pi. There's plenty of discussion online,
but most howtos are out of date and don't work, or else they assume you
want to use your Pi as a media center and can't be adapted to more
general purposes. So here's what I learned.
Update: this page is for Raspbian Jessie.
If you've upgraded to Stretch, read the update,
Reading an IR Remote on a Raspberry Pi Stretch with LIRC,
then come back here and jump forward to Set up a lircd.conf.
Install LIRC and enable the drivers on the Pi
The LIRC package reads and decodes IR signals, so start there:
$ sudo apt-get install lirc python-lirc python3-lirc
Then you have to enable the lirc daemon. Assuming the sensor's pin is
on the Pi's GPIO 18, edit /boot/config.txt as root, look for
this line and uncomment it:
# Uncomment this to enable the lirc-rpi module
dtoverlay=lirc-rpi
Reboot. Then use a program called mode2 to make sure you can
read from the remote at all, after first making sure the
lirc daemon isn't running:
$ sudo service lirc stop
$ ps aux | grep lirc
$ mode2 -d /dev/lirc0
Press a few keys. If you see a lot of output, you're good. If not,
check your wiring.
Set up a lircd.conf
You'll need to make an lircd.conf file mapping the codes the
buttons send to symbols like KEY_PLAY. You can do that -- in a
somewhat slow and painstaking process -- with irrecord.
First you'll need a list of valid key names. Get that with
irrecord -l
and you'll probably want to keep that window up so you can search
or grep in it. Open another window and run:
$ irrecord -d /dev/lirc0 ~/lircd.conf
I had to repeat the command a couple of times; the first few times
it couldn't read anything. But once it's running, then for
each key on the remote, first, find the key name
that most closely matches what you want the key to do (for instance,
if the key is the power button, irrecord -l | grep -i power
will suggest KEY_POWER and KEY_POWER2). Type or paste that key name
into irrecord -d, then press the key.
At the end of this, you should have a ~/lircd.conf.
Some guides say to copy that lircd.conf to /etc/lirc/ and I
did, but I'm not sure it matters if you're going to be running your
programs as you rather than root.
Then enable the lirc daemon that you stopped back when you were testing
with mode2.
In /etc/lirc/hardware.conf, START_LIRCMD is commented out,
so uncomment it.
Then edit /etc/lirc/hardware.conf as specified in
alexba.in's
"Setting Up LIRC on the RaspberryPi".
Now you can start the daemon:
sudo service lirc start
and verify that it's running:
ps aux | grep lirc
Testing with irw
Now it's time to test your lircd.conf:
irw
Press buttons, and hopefully you'll see lines like
0000000000fd8877 01 KEY_2 /home/pi/lircd.conf
0000000000fd08f7 00 KEY_1 /home/pi/lircd.conf
0000000000fd906f 00 KEY_VOLUMEDOWN /home/pi/lircd.conf
0000000000fd906f 01 KEY_VOLUMEDOWN /home/pi/lircd.conf
0000000000fda05f 00 KEY_PLAYPAUSE /home/pi/lircd.conf
If they correspond to the buttons you pressed, your lircd.conf is working.
Reading Button Presses from Python
Now, most tutorials move on to generating a .lircrc
file which sets up your machine to execute programs automatically when
buttons are pressed, and then you can test with ircat.
If you're setting up your Raspberry Pi as a media control center,
that's probably what you want (see below for hints if that's your goal).
But neither .lircrc nor ircat did anything useful for me,
and executing programs is overkill if you just want to read keys from Python.
Python has modules for everything, right?
The Raspbian repos have python-lirc, python-pylirc and python3-lirc,
and pip has a couple of additional options. But none of the packages I
tried actually worked. They all seem to be aimed at setting up media
centers and wanted lircrc files without specifying what they
need from those files. Even when I set up a .lircrc they didn't work.
For instance,
in python-lirc,
lirc.nextcode() always returned an empty list, [].
I didn't want any of the "execute a program" crap that a .lircrc implies.
All I wanted to do was read key symbols one after another -- basically
what irw does. So I looked at the
irw.c
code to see what it did, and it's remarkably simple. It opens a
socket and reads from it. So I tried implementing that in Python, and
it worked fine:
pyirw.py:
Read LIRC button input from Python.
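For reference, here's a minimal sketch of the same idea (not the full
pyirw.py; it assumes lircd's default socket path and Python 3):
#!/usr/bin/env python3
# Minimal sketch: read decoded button presses from the lircd socket,
# the same way irw does.
import socket

SOCKPATH = "/var/run/lirc/lircd"

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCKPATH)

while True:
    data = sock.recv(256)
    if not data:
        break
    # lircd sends lines like: "<hexcode> <repeat> <keyname> <remotename>"
    for line in data.decode().split('\n'):
        fields = line.split()
        if len(fields) >= 3:
            print("Got key:", fields[2], "repeat", fields[1])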
While initially debugging, I still saw those
0000000000fda05f 00 KEY_PLAYPAUSE /home/pi/lircd.conf
lines printed on the terminal, but after a reboot they went away,
so they might have been an artifact of running irw.
If You Do Want a .lircrc ...
As I mentioned, you don't need a .lircrc just to read keys
from the daemon. But if you do want a .lircrc because you're
running some sort of media center, I did find two ways of generating one.
There's a bash script called lirc-config-tool floating around
that can generate .lircrc files. It's supposed to be included
in the lirc package, but for some reason Raspbian's lirc package omits
it. You can find and download the bash script with a web search
for lirc-config-tool source, and it works fine on Raspbian. It
generates a bunch of .lircrc files that correspond to various possible
uses of the remote: for instance, you'll get an mplayer.lircrc, a
mythtv.lircrc, a vlc.lircrc and so on.
But all those lircrc files lirc-config-tool generates use only
small subsets of the keys on my remote, and I wanted one that included
everything. So I wrote a quickie script called
gen-lircrc.py
that takes your lircd.conf as input and generates a
simple lircrc containing all the buttons represented there.
I wrote it to run a program called "beep" because I was trying to
determine if LIRC was doing anything in response to the lircrc (it
wasn't); obviously, you should edit the generated .lircrc and
change the prog = beep
to call your target programs instead.
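For reference, a typical .lircrc stanza looks something like this
(mplayer and its pause command are just placeholders for whatever
you actually want to run):
begin
    prog = mplayer
    button = KEY_PLAYPAUSE
    config = pause
end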
Once you have a .lircrc, I'm not sure how you get lircd to use it
to call those programs. That's left as an exercise for the reader.
Tags: raspberry pi, linux, remote
[
11:21 Oct 26, 2017
More hardware |
permalink to this entry |
]
Thu, 28 Sep 2017
Someone at our makerspace found a fun Halloween project we could do
at Coder Dojo: a
motion
sensing pumpkin that laughs evilly when anyone comes near.
Great! I've worked with both PIR sensors and ping rangefinders,
and it sounded like a fun project to mentor. I did suggest, however,
that these days a Raspberry Pi Zero W is cheaper than an Arduino, and
playing sounds on it ought to be easier since you have frameworks like
ALSA and pygame to work with.
The key phrase is "ought to be easier".
There's a catch: the Pi Zero and Zero W don't
have an audio output jack like their larger cousins. It's possible to
get analog audio output from two GPIO pins (use the term "PWM output"
for web searches), but there's a lot of noise. Larger Pis have a built-in
low-pass filter to screen out the noise, but on a Pi Zero you have to
add a low-pass filter. Of course, you can buy HATs for Pi Zeros that
add a sound card, but if you're not super picky about audio quality,
you can make your own low-pass filter out of two resistors and two capacitors
per channel (multiply by two if you want both the left and right channels).
There are lots of tutorials scattered around the web about how to add
audio to a Pi Zero, but I found a lot of them confusing; e.g.
Adafruit's
tutorial on Pi Zero sound has three different ways to edit the
system files, and doesn't specify things like the values of the
resistors and capacitors in the circuit diagram (hint: it's clearer if you
download the Fritzing file, run Fritzing and click on each resistor).
There's a clearer diagram in
Sudomod Forums:
PWM Audio Guide, but I didn't find that until after I'd made my own,
so here's mine.
Parts list:
- 2 x 270 Ω resistor
- 2 x 150 Ω resistor
- 2 x 10 nF or 33nF capacitor
- 2 x 1μF electrolytic capacitor
- 3.5mm headphone jack, or whatever connection you want to use to
your speakers
And here's how to wire it:
(Fritzing file, pi-zero-audio.fzz.)
This wiring assumes you're using pins 13 and 18 for the left and right
channels. You'll need to configure your Pi to use those pins.
Add this to /boot/config.txt:
dtoverlay=pwm-2chan,pin=18,func=2,pin2=13,func2=4
Testing
Once you build your circuit up, you need to test it.
Plug in your speaker or headphones, then make sure you can play
anything at all:
aplay /usr/share/sounds/alsa/Front_Center.wav
If you need to adjust the volume, run alsamixer
and
use the up and down arrow keys to adjust volume. You'll have to press
up or down several times before the bargraph actually shows a change,
so don't despair if your first press does nothing.
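(If you'd rather set the volume from a script than interactively,
amixer can probably do it -- assuming the mixer control is named PCM,
as it usually is on Raspbian:
amixer set PCM 85%
)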
That should play in both channels. Next you'll probably be curious
whether stereo is actually working. Curiously, none of the tutorials
address how to test this. If you ls /usr/share/sounds/alsa/
you'll see names like Front_Left.wav, which might lead you to
believe that aplay /usr/share/sounds/alsa/Front_Left.wav
might play only on the left. Not so: it's a recording of a voice
saying "Front left" in both channels. Very confusing!
Of course, you can copy a music file to your Pi, play it (omxplayer
is a nice commandline player that's installed by default and handles
MP3) and see if it's in stereo. But the best way I found to test
audio channels is this:
speaker-test -t wav -c 2
That will play those ALSA voices in the correct channel, alternating
between left and right.
(MythTV has a good
Overview
of how to use speaker-test.)
Not loud enough?
I found the volume plenty loud via earbuds, but if you're targeting
something like a Halloween pumpkin, you might need more volume.
The easy way is to use an amplified speaker (if you don't mind
putting your nice amplified speaker amidst the yucky pumpkin guts),
but you can also build a simple amplifier.
Here's one that looks good, but I haven't built one yet:
One Transistor Audio for Pi Zero W
Of course, if you want better sound quality, there are various places
that sell HATs with a sound chip and line or headphone out.
Tags: raspberry pi, electronics, audio, linux, hardware
[
15:49 Sep 28, 2017
More hardware |
permalink to this entry |
]
Sun, 17 Sep 2017
When I upgraded my Asus laptop to Stretch, one of the things that
stopped working was the screen brightness keys (Fn-F5 and Fn-F6).
In Debian Jessie they had always just automagically worked without
my needing to do anything, so I'd never actually learned how to set
brightness on this laptop. The fix, like so many things, is easy
once you know where to look.
It turned out the relevant files are in
/sys/class/backlight/intel_backlight.
cat /sys/class/backlight/intel_backlight/brightness
tells you the current brightness;
write a number to /sys/class/backlight/intel_backlight/brightness
to change it.
That at least got me going (ow my eyes, full brightness is
migraine-inducing in low light) but of course I wanted it back
on the handy function keys.
I wrote a
script named "dimmer", with a symlink to "brighter", that goes like this:
#!/bin/zsh
# Step the backlight up or down by 200, depending on whether the
# script was invoked as "brighter" or as "dimmer".
curbright=$(cat /sys/class/backlight/intel_backlight/brightness)
echo dollar zero $0
if [[ $(basename $0) == 'brighter' ]]; then
    newbright=$((curbright + 200))
else
    newbright=$((curbright - 200))
fi
echo from $curbright to $newbright
# Writing the new value needs root, so wrap the redirect in sudo sh -c:
sudo sh -c "echo $newbright > /sys/class/backlight/intel_backlight/brightness"
That let me type "dimmer" or "brighter" to the shell to change the
brightness, with no need to remember that /sys/class/whatsit path.
I got the names of the two function keys by running xev
and typing Fn and F5, then Fn and F6.
Then I edited my Openbox ~/.config/openbox/rc.xml, and added:
<keybind key="XF86MonBrightnessDown">
    <action name="Execute">
        <execute>dimmer</execute>
    </action>
</keybind>
<keybind key="XF86MonBrightnessUp">
    <action name="Execute">
        <execute>brighter</execute>
    </action>
</keybind>
Tags: linux, laptop
[
19:57 Sep 17, 2017
More linux/laptop |
permalink to this entry |
]
Sun, 30 Jul 2017
I've remapped my CapsLock key to be another Ctrl key since time
immemorial. (Actually, since the ridiculous IBM PC layout replaced
the older keyboards that had Ctrl there already, to the left of the A.)
On normal current Debian distros, that's fairly easy:
you can edit /etc/default/keyboard to have
XKBOPTIONS="ctrl:nocaps"
You might think that would work in Raspbian, since it also has
/etc/default/keyboard and raspi-config writes keyboard
options to it if you set any (though of course CapsLock isn't among
the choices it offers you). But it doesn't work in the PIXEL
desktop: there, that key still acts as a Caps Lock.
Apparently lxde (under PIXEL's hood) overrides the keyboard options
in /etc/default/keyboard without actually giving you a UI to
set them. But you can add your own override by editing
~/.config/lxkeymap.cfg.
Make the option line look something like this:
option = ctrl:nocaps
Then when you restart PIXEL, you should have a Control key where
CapsLock used to be.
Tags: linux, raspberry pi
[
10:30 Jul 30, 2017
More linux |
permalink to this entry |
]
Sat, 24 Jun 2017
Someone forwarded me a message from the Albuquerque Journal.
It was all about "New Mexico\222s schools".
Sigh. I thought I'd gotten all my Mutt charset problems fixed long
ago. My system locale is set to en_US.UTF-8, and accented characters
in Spanish and in people's names usually show up correctly.
But I do see this every now and then.
When I see it, I usually assume it's a case of incorrect encoding:
whoever sent it perhaps pasted characters from a Windows Word document
or something, and their mailer didn't properly re-encode them into
the charset they were using to send the message.
In this case, the message had
User-Agent: SquirrelMail/1.4.13
I suspect it came from a "Share this" link on the newspaper's website.
I used vim to look at the source of the message, and it had
Content-Type: text/plain; charset=iso-8859-1
For the bad characters, in vim I saw things like
New Mexico<92>s schools
I checked an old web page I'd bookmarked years ago that had a table
of the iso-8859-1 characters, and sure enough, hex 0x92 was an apostrophe.
What was wrong?
I got some help on the #mutt IRC channel, and, to make a long story
short, that web table I was using was wrong.
ISO-8859-1 doesn't include any characters in the range 8x-9x,
as you can see on
the Wikipedia
ISO/IEC 8859-1.
What was happening was that the page was really cp1252: that's where
those extra characters, like hex 92/octal 222 for an apostrophe,
or hex 96/octal 226 for a dash (nitpick: that's an en dash, but it
was used in a context that called for an em dash; if someone is going
to use something other than the plain old ASCII dash - you'd think
they'd at least use the right one. Sheesh!)
Anyway, the fix for this is to tell mutt when it sees iso-8859-1,
use cp1252 instead:
charset-hook iso-8859-1 cp1252
Voilà! Now I could read the article about
New Mexico's schools.
A happy find related to this: it turns out there's a better way of
looking up ISO-8859 tables, and I can ditch that bookmark to the old,
erroneous page. I've known about man ascii
forever, but
somehow I'd never thought to try other charsets. Turns out
man iso_8859-1
and man iso_8859-15
have built-in tables too. Nice!
(Sadly, man utf-8
doesn't give a table. Of course,
that would be a long man page, if it did!)
Tags: mutt, charsets, linux
[
11:06 Jun 24, 2017
More linux |
permalink to this entry |
]
Mon, 05 Jun 2017
I know, I know. We use mailers like mutt because we don't believe in
HTML mail and prefer plaintext. Me, too.
But every now and then a situation comes up where it would be useful
to send something with emphasis. Or maybe you need to highlight
changes in something. For whatever reason, every now and then
I wish I had a way to send HTML mail.
I struggled with that way back, never did find a way, and ended up
writing a
Python
script, htmlmail.py to send an HTML page, including images, as email.
Sending HTML Email
But just recently I found a neat mutt hack. It turns out it's quite
easy to send HTML mail.
First, edit the HTML source in your usual mutt message editor (or
compose the HTML some other way, and insert the file). Note: if
there's any quoted text, you'll have to put a <pre> around
it, or otherwise turn it into something that will display nicely
in HTML.
Write the file and exit the editor. Then,
in the Compose menu, type Ctrl-T to edit the attachment type.
Change the type from text/plain to text/html.
That's it! Send it, and it will arrive looking like a regular HTML
email, just as if you'd used one of them newfangled gooey mail clients.
(No inline images, though.)
Viewing HTML Email
Finding out how easy that was made me wonder why the other direction
isn't easier. Of course, I have my mailcap set up so that mutt uses
lynx automatically to view HTML email:
text/html; lynx -dump %s; nametemplate=%s.html; copiousoutput
Lynx handles things like paragraph breaks and does an okay job of
showing links; but it completely drops all emphasis, like bold,
italic, headers, and colors. My terminal can display all those styles
just fine. I've also tried links, elinks, and w3m, but none of them
seem to be able to handle any text styling.
Some of them will do bold if you run them interactively, but none
of them do italic or colors, and none of them will do bold with -dump,
even if you tell them what terminal type you want to use.
Why is that so hard?
I never did find a solution, but it's worth noting some useful
sites I found along the way. Like tips for
testing
bold, italics etc. in a terminal:, and for
testing
whether the terminal supports italics, which gave me these useful
shell functions:
echo -e "\e[1mbold\e[0m"
echo -e "\e[3mitalic\e[0m"
echo -e "\e[4munderline\e[0m"
echo -e "\e[9mstrikethrough\e[0m"
echo -e "\e[31mHello World\e[0m"
echo -e "\x1B[31mHello World\e[0m"
ansi() { echo -e "\e[${1}m${*:2}\e[0m"; }
bold() { ansi 1 "$@"; }
italic() { ansi 3 "$@"; }
underline() { ansi 4 "$@"; }
strikethrough() { ansi 9 "$@"; }
red() { ansi 31 "$@"; }
And in testing, I found that a lot of fonts didn't offer italics.
One that does is Terminus, so if your normal font doesn't,
you can run a terminal with Terminus:
xterm -fn '-*-terminus-bold-*-*-*-20-*-*-*-*-*-*-*'
Not that it matters since none of the text-mode browsers offer italic
anyway. But maybe you'll find some other use for italic in a terminal.
Tags: mutt, email, linux
[
18:28 Jun 05, 2017
More linux |
permalink to this entry |
]
Tue, 25 Apr 2017
I'm taking a MOOC that includes equations involving Greek letters
like epsilon. I'm taking notes online, in Emacs, using the
iimage
mode tricks for taking MOOC class notes in emacs
that I worked out a few years back.
Iimage mode works fine for taking screenshots of the blackboard in the
videos, but sometimes I'd prefer to just put the equations inline
in my file. At first I was typing out things like
E = epsilon * sigma * T^4
but that's silly, and of course the professor isn't spelling out the
Greek letters like that when he writes the equations on the blackboard.
There's got to be a way to type Greek letters on this US keyboard.
I know how to type things like accented characters using the
"Multi key" or
"Compose key". In /etc/default/keyboard I have
XKBOPTIONS="ctrl:nocaps,compose:menu,terminate:ctrl_alt_bksp"
which, among other things, sets the compose key to be my "Menu" key,
which I never used otherwise.
There's a file, /usr/share/X11/locale/en_US.UTF-8/Compose,
that includes all the built-in compose key sequences.
And you can define your own sequences in ~/.XCompose.
Update: after changing ~/.XCompose, you don't need to reboot or
restart X, but you do need to restart any program where you want to use
the new binding: your terminal, editor or whatever.
I have a shell function in my .zshrc,
composekey() {
grep -i $1 /usr/share/X11/locale/en_US.UTF-8/Compose
}
so I can type something like
composekey epsilon
and find out how to type specific codes. But that didn't work so well
for Greek letters. It turns out this is how you type them:
<dead_greek> <A> : "Α" U0391 # GREEK CAPITAL LETTER ALPHA
<dead_greek> <a> : "α" U03B1 # GREEK SMALL LETTER ALPHA
<dead_greek> <B> : "Β" U0392 # GREEK CAPITAL LETTER BETA
<dead_greek> <b> : "β" U03B2 # GREEK SMALL LETTER BETA
<dead_greek> <D> : "Δ" U0394 # GREEK CAPITAL LETTER DELTA
<dead_greek> <d> : "δ" U03B4 # GREEK SMALL LETTER DELTA
<dead_greek> <E> : "Ε" U0395 # GREEK CAPITAL LETTER EPSILON
<dead_greek> <e> : "ε" U03B5 # GREEK SMALL LETTER EPSILON
... and so forth. And this
<dead_greek>
key isn't
actually defined in most US/English keyboard layouts: you can check
whether it's defined for you with:
xmodmap -pke | grep dead_greek
Of course you can use xmodmap to define a key to be <dead_greek>.
I stared at my keyboard for a bit, and decided that, considering how
seldom I actually need to type Greek characters, I didn't see the point
of losing a key for that purpose (though if you want to, here's a
thread on
how to map <dead_greek> with xmodmap).
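For the record, sacrificing a key that way would probably look something
like this (untested, since I went the compose-key route instead; here the
Menu key is the victim):
xmodmap -e "keysym Menu = dead_greek"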
I decided it would make much more sense to map it to the compose key
with a prefix, like 'g', that I don't need otherwise. I can do that in
~/.XCompose like this:
<Multi_key> <g> <A> : "Α" U0391 # GREEK CAPITAL LETTER ALPHA
<Multi_key> <g> <a> : "α" U03B1 # GREEK SMALL LETTER ALPHA
<Multi_key> <g> <B> : "Β" U0392 # GREEK CAPITAL LETTER BETA
<Multi_key> <g> <b> : "β" U03B2 # GREEK SMALL LETTER BETA
<Multi_key> <g> <D> : "Δ" U0394 # GREEK CAPITAL LETTER DELTA
<Multi_key> <g> <d> : "δ" U03B4 # GREEK SMALL LETTER DELTA
<Multi_key> <g> <E> : "Ε" U0395 # GREEK CAPITAL LETTER EPSILON
<Multi_key> <g> <e> : "ε" U03B5 # GREEK SMALL LETTER EPSILON
... and so forth.
And now I can type [MENU] g e
and a lovely ε appears,
at least in any app that supports Greek fonts, which is most of them
nowadays.
Tags: linux, X11
[
12:57 Apr 25, 2017
More linux |
permalink to this entry |
]
Fri, 31 Mar 2017
Used to be that you could see your mounted filesystems by typing
mount or df. But with modern Linux kernels,
all sorts are implemented as virtual filesystems -- proc, /run,
/sys/kernel/security, /dev/shm, /run/lock, /sys/fs/cgroup -- I have
no idea what most of these things are except that they make it
much more difficult to answer questions like "Where did that ebook
reader mount, and did I already unmount it so it's safe to unplug it?"
Neither mount
nor df
has a simple option
to get rid of all the extraneous virtual filesystems and only show
real filesystems.
http://unix.stackexchange.com/questions/177014/showing-only-interesting-mount-points-filtering-non-interesting-types
had some suggestions that got me started:
mount -t ext3,ext4,cifs,nfs,nfs4,zfs
mount | grep -E --color=never '^(/|[[:alnum:]\.-]*:/)'
Another answer there says it's better to use findmnt --df,
but that still shows all the tmpfs entries
(findmnt --df | grep -v tmpfs might do the job).
And real mounts are always mounted on a filesystem path starting with /,
so you can do mount | grep '^/'.
But it also turns out that mount will accept a blacklist of types
as well as a whitelist: -t notype1,notype2...
I prefer the idea of excluding a blacklist of filesystem types
versus restricting it to a whitelist; that way if I mount something
unusual like curlftpfs that I forgot to add to the whitelist,
or I mount a USB stick with a filesystem type I don't use very
often (ntfs?), I'll see it.
On my system, this was the list of types I had to disable (sheesh!):
mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl
df is easier: like findmnt, it excludes
most of those filesystem types to begin with, so there are only a few
you need to exclude:
df -hTx tmpfs -x devtmpfs -x rootfs
Obviously I don't want to have to type either of those commands every
time I want to check my mount list.
So I put this in my .zshrc.
If you call mount or df with no args, it applies the filters,
otherwise it passes your arguments through.
Of course, you could make a similar alias for findmnt
(there's a sketch after the code below).
# Mount and df are no longer useful to show mounted filesystems,
# since they show so much irrelevant crap now.
# Here are ways to clean them up:
mount() {
    if [[ $# -ne 0 ]]; then
        /bin/mount $*
        return
    fi
    # Else called with no arguments: we want to list mounted filesystems.
    /bin/mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl
}
df() {
    if [[ $# -ne 0 ]]; then
        /bin/df $*
        return
    fi
    # Else called with no arguments: we want to list mounted filesystems.
    /bin/df -hTx tmpfs -x devtmpfs -x rootfs
}
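And here's what a similar findmnt wrapper might look like (a sketch
following the same pattern; using command findmnt keeps the function
from calling itself):
findmnt() {
    if [[ $# -ne 0 ]]; then
        command findmnt $*
        return
    fi
    # Else called with no arguments: filter out the tmpfs noise.
    command findmnt --df | grep -v tmpfs
}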
Update: Chris X Edwards suggests lsblk
or lsblk -o 'NAME,MOUNTPOINT'.
It wouldn't have solved my problem because it only shows /dev devices,
not virtual filesystems like sshfs, but it's still a command
worth knowing about.
Tags: linux, cmdline
[
12:25 Mar 31, 2017
More linux |
permalink to this entry |
]
Fri, 27 Jan 2017
A web page I maintain (originally designed by someone else) specifies
Times font. On all my Linux systems, Times displays impossibly
tiny, at least two sizes smaller than any other font that's ostensibly
the same size. So the page is hard to read. I'm forever tempted to
get rid of that font specifier, but I have to assume that other people
in the organization like the professional look of Times, and that this
pathologic smallness of Times and Times New Roman is just a Linux font quirk.
In that case, a better solution is to alias it, so that pages that use
Times will choose some larger, more readable font on my system.
How to do that was in this excellent, clear post:
How To Set Default Fonts and Font Aliases on Linux .
It turned out Times came from the gsfonts package, while
Times New Roman came from msttcorefonts:
$ fc-match Times
n021003l.pfb: "Nimbus Roman No9 L" "Regular"
$ dpkg -S n021003l.pfb
gsfonts: /usr/share/fonts/type1/gsfonts/n021003l.pfb
$ fc-match "Times New Roman"
Times_New_Roman.ttf: "Times New Roman" "Normal"
$ dpkg -S Times_New_Roman.ttf
dpkg-query: no path found matching pattern *Times_New_Roman.ttf*
$ locate Times_New_Roman.ttf
/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf
(dpkg -S doesn't find the file because msttcorefonts
is a package that downloads a bunch of common fonts from Microsoft.
Debian can't distribute the font files directly due to licensing
restrictions.)
Removing gsfonts fonts isn't an option; aside from some documents and web
pages possibly not working right (if they specify Times or Times New Roman
and don't provide a fallback), removing gsfonts takes gnumeric and abiword
with it, and I do occasionally use gnumeric. And I like having the
msttcorefonts installed (hey, gotta have Comic Sans! :-) ).
So aliasing the font is a better bet.
Following Chuan Ji's page, linked above,
I edited ~/.config/fontconfig/fonts.conf (I already had one,
specifying fonts for the fantasy and cursive web families),
and added these stanzas:
<match>
    <test name="family"><string>Times New Roman</string></test>
    <edit name="family" mode="assign" binding="strong">
        <string>DejaVu Serif</string>
    </edit>
</match>
<match>
    <test name="family"><string>Times</string></test>
    <edit name="family" mode="assign" binding="strong">
        <string>DejaVu Serif</string>
    </edit>
</match>
The page says to log out and back in, but I found that restarting
firefox was enough. Now I could load up a page that specified Times
or Times New Roman and the text is easily readable.
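You can also sanity-check the alias from the command line: after the
change, fc-match should report the substitute font (the exact file and
style names depend on your DejaVu package):
$ fc-match Times
DejaVuSerif.ttf: "DejaVu Serif" "Book"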
Tags: linux, fonts
[
14:47 Jan 27, 2017
More linux |
permalink to this entry |
]
Sun, 26 Jun 2016
We had a little crisis Friday when our server suddenly stopped
accepting ssh connections.
The problem turned out to be denyhosts, a program that looks for
things like failed login attempts and blacklists IP addresses.
But why was our own IP blacklisted? It was apparently because I'd
been experimenting with a program called mailsync, which used to be
a useful program for synchronizing IMAP folders with local mail folders.
But at least on Debian, it has broken in a fairly serious way, so that
it makes three or four tries with the wrong password before it
actually uses the right one that you've configured in .mailsync.
These failed logins are a good way to get yourself blacklisted, and
there doesn't seem to be any way to fix mailsync or the c-client
library it uses under the covers.
Okay, so first, stop using mailsync. But then how to get our IP
off the server's blacklist? Just editing /etc/hosts.deny
didn't do it -- the IP reappeared there a few minutes later.
A web search found lots of solutions -- you have to edit a long
list of files, but no two articles had the same file list.
It appears that it's safest to remove the IP from every
file in /var/lib/denyhosts.
So here are the step by step instructions.
First, shut off the denyhosts service:
service denyhosts stop
Go to /var/lib/denyhosts/ and grep for any file that
includes your IP:
grep aa.bb.cc.dd *
(If you aren't sure what your IP is as far as the outside world
is concerned,
Googling what's my IP
will helpfully tell you, as well as giving you a list of other sites
that will also tell you.)
Then edit each of these files in turn, removing your IP from them
(it will probably be at the end of the file).
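If your IP turned up in a lot of files, you could let sed do the editing
instead -- but this is a blunt instrument, so back up the directory first
and substitute your real address for aa.bb.cc.dd:
cp -a /var/lib/denyhosts /var/lib/denyhosts.bak
sed -i '/aa\.bb\.cc\.dd/d' /var/lib/denyhosts/*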
When you're done with that, you have one more file to edit:
remove your IP from the end of /etc/hosts.deny
You may also want to add your IP to /etc/hosts.allow,
but it may not make much difference, and if you're on a dynamic IP it
might be a bad idea since that IP will eventually
be used by someone else.
Finally, you're ready to re-start denyhosts:
service denyhosts start
Whew, un-blocked. And stay away from mailsync. I wish I knew of
a program that actually worked to keep IMAP and mbox mailboxes in sync.
Tags: linux
[
12:59 Jun 26, 2016
More linux |
permalink to this entry |
]
Thu, 09 Jun 2016
I needed to merge some changes from a development file into the
file on the real website, and discovered that the program I most
often use for that, meld, is in one of its all too frequent periods
where its developers break it in ways that make it unusable for a few months.
(Some of this is related to GTK, which is a whole separate rant.)
That led me to explore some other diff/merge alternatives.
I've used tkdiff quite a bit for viewing diffs,
but when I tried to use it to merge one file into another
I found its merge just too hard to use. Likewise for emacs:
it's a wonderful editor but I never did figure out how to get ediff
to show diffs reliably, let alone merge from one file to another.
But vimdiff looked a lot easier and had a lot more documentation available,
and actually works pretty well.
I normally run vim in an xterm window, but for a diff/merge tool, I
want a very wide window which will show the diffs side by side.
So I used gvimdiff instead of regular vimdiff:
gvimdiff docs.dev/filename docs.production/filename
Configuring gvimdiff to see diffs
gvimdiff initially pops up a tiny little window, and it ignores Xdefaults.
Of course you can resize it, but who wants to do that every time?
You can control the initial size by setting the lines
and columns variables in .vimrc.
About 180 columns by 60 lines worked pretty well for my fonts
on my monitor, showing two 80-column files side by side.
But clearly I don't want to set that in .vimrc so that it
runs every time I run vim;
I only want that super-wide size when I'm running a side-by-side diff.
You can control that by checking the &diff variable in .vimrc:
if &diff
    set lines=58
    set columns=180
endif
If you do decide to resize the window, you'll notice that the separator
between the two files doesn't stay in the center: it gives you lots of
space for the right file and hardly any for the left.
Inside that same &diff clause, this somewhat arcane incantation
tells vim to keep the separator centered:
autocmd VimResized * exec "normal \<C-w>="
I also found that the colors, in the vim scheme I was using, made it
impossible to see highlighted text. You can go in and edit the color
scheme and make your own, of course, but an easy way quick fix is to set
all highlighting to one color, like yellow, inside the if &diff
section:
highlight DiffAdd cterm=bold gui=none guibg=Yellow
highlight DiffDelete cterm=bold gui=none guibg=Yellow
highlight DiffChange cterm=bold gui=none guibg=Yellow
highlight DiffText cterm=bold gui=none guibg=Yellow
Merging changes
Okay, once you can view the differences between the two files,
how do you merge from one to the other? Most online sources are
quite vague on that, but it's actually fairly easy:
]c      jumps to the next difference
[c      jumps to the previous difference
dp      makes them both look like the left side (apparently stands for "diff put")
do      makes them both look like the right side (apparently stands for "diff obtain")
The only difficult part is that it's not really undoable.
u (the normal vim undo keystroke) works inconsistently after
dp: the focus is generally in the left window, so u applies
to that window, while dp modified the right window and the undo doesn't
apply there. If you put this in your .vimrc
nmap du :wincmd w<cr>:normal u<cr>:wincmd w<cr>
then you can use
du to undo changes in the right window,
while
u still undoes in the left window. So you still have to
keep track of which direction your changes are going.
Worse, neither undo nor this du command restores the
highlighting showing there's a difference between the two files.
So, really, undoing should be reserved for emergencies; if you try
to rely on it much you'll end up being unsure what has and hasn't changed.
In the end, vimdiff probably works best for straightforward diffs,
and it's probably best get in the habit of always merging
from right to left, using do. In other words, run
vimdiff file-to-merge-to file-to-merge-from,
and think about each change before doing it to make it less
likely that you'll need to undo.
And hope that whatever silly transient bug in meld drove you to use
vimdiff gets fixed quickly.
Tags: linux, editors, vim
[
20:10 Jun 09, 2016
More linux/editors |
permalink to this entry |
]
Sat, 07 May 2016
I recently let Firefox upgrade itself to 46.0.1, and suddenly I
couldn't type anything any more. The emacs/readline editing bindings,
which I use probably thousands of times a day, no longer worked.
So every time I typed a Ctrl-H to delete the previous character,
or Ctrl-B to move back one character, a sidebar popped up.
When I typed Ctrl-W to delete the last word, it closed the tab.
Ctrl-U, to erase the contents of the urlbar, opened a new View Source
tab, while Ctrl-N, to go to the next line, opened a new window.
Argh!
(I know that people who don't use these bindings are rolling their
eyes and wondering "What's the big deal?" But if you're a touch typist,
once you've gotten used to being able to edit text without moving your
hands from the home position, it's hard to imagine why everyone else
seems content with key bindings that require you to move your
hands and eyes way over to keys like Backspace or Home/End that aren't
even in the same position on every keyboard. I map CapsLock to Ctrl
for the same reason, since my hands are too small to hit the
PC-positioned Ctrl key without moving my whole hand. Ctrl
was to the left of the "A" key on nearly all computer keyboards
until IBM's 1986 "101 Enhanced Keyboard", and it made a lot more
sense than IBM's redesign since few people use Caps Lock very often.)
I found a bug filed on the broken bindings, and lots of people
commenting online, but it wasn't until I found out that Firefox 46 had
switched to GTK3 that I understood had actually happened. And adding
gtk3 to my web searches finally put me on the track to finding the
solution, after trying several other supposed fixes that weren't.
Here's what actually worked: edit
~/.config/gtk-3.0/settings.ini
and add, inside the
[Settings]
section, this line:
gtk-key-theme-name = Emacs
I think that's all that was needed. But in case that doesn't do it,
here's something I had already tried, unsuccessfully,
and it's possible that you actually need it in addition to the
settings.ini change
(I don't know how to undo magic Gnome settings so I can't test it):
gsettings set org.gnome.desktop.interface gtk-key-theme "Emacs"
Tags: linux, gtk, gtk3, emacs, firefox
[
18:11 May 07, 2016
More linux |
permalink to this entry |
]
Thu, 17 Mar 2016
I switched a few weeks ago from unstable ("Sid") to testing ("Stretch")
in the hope that my system, particularly X, would break less often.
The very next day, I updated and discovered I couldn't use my system
at night any more, because the program I use to
reduce
the screen brightness by tweaking X gamma no longer worked.
Neither did other related programs, such as xgamma and xcalib.
The Dell monitor I use doesn't have reasonable hardware brightness controls:
strangely, the brightness button works when the monitor is connected
over VGA, but if I want to use the sharper HDMI connection, brightness
adjustment no longer works. So I depend on software brightness adjustment
in order to use my computer at night when the room is dim.
Fortunately, it turns out there's a workaround. xrandr
has options for both brightness and gamma:
xrandr --output HDMI1 --brightness .5
xrandr --output HDMI1 --gamma .5:.5:.5
I've always put xbrightness on a key, so I can use a function key to
adjust brightness interactively up and down according to conditions.
So a command that sets brightness to .5 or .8 isn't what I need;
I need to get the current brightness and set it a little brighter
or a little dimmer. xrandr doesn't offer that, so I needed to script it.
You can get the current brightness with
xrandr --verbose | grep -i brightness
But I was hoping there would be a more straightforward way to get
brightness from a program.
I looked into Python bindings for xrandr; there are some,
but with no documentation and no examples. After an hour
of fiddling around, I concluded that I could waste the rest of the day
poring through the source code and trying things hoping something would
work; or I could spend fifteen minutes
using subprocess.call()
to wrap the command-line xrandr.
So subprocesses it was. It made for a nice short script,
much simpler than the old xbrightness C program that used
<X11/extensions/xf86vmode.h> and XF86VidModeGetGammaRampSize():
xbright on github.
Tags: linux, X11, debian
[
11:01 Mar 17, 2016
More linux |
permalink to this entry |
]
Fri, 05 Feb 2016
Debian's Unstable ("Sid") distribution has been terrible lately.
They're switching to a version of X that doesn't require root,
and apparently
the X
transition has broken all sorts of things in ways that are hard to fix
and there's no ETA for when things might get any better.
And, being Debian, there's no real bug system so you can't just CC
yourself on the bug to see when new fixes might be available to try.
You just have to wait, try every few days and see if the system
has been fixed.
That's hard when the system doesn't work at all. Last week, I was
booting into a shell but X wouldn't run, so at least I could pull
updates. This week, X starts but the keyboard and mouse don't work at all,
making it hard to run an upgrade.
Fortunately, I have an install of Debian stable ("Jessie") on this
system as well. When I partition a large disk I always reserve several
root partitions so I can try out other Linux distros, and when running
the more experimental versions, like Sid, sometimes that's a life saver.
So I've been running Jessie while I wait for Sid to get fixed.
The only trick is: how can I upgrade my Sid partition while running
Jessie, since Sid isn't usable at all?
I have an entry in /etc/fstab that lets me mount my Sid
partition easily:
/dev/sda6 /sid ext4 defaults,user,noauto,exec 0 0
So I can type
mount /sid
as myself, without even needing
to be root.
But Debian's apt upgrade tools assume everything will be on /,
not on /sid. So I'll need to use chroot /sid
(as root)
to change the root of the filesystem to /sid. That only affects
the shell where I type that command; the rest of my system will still be
happily running Jessie.
Mount the special filesystems
That mostly works, but not quite, because I get a lot of errors like
permission denied: /dev/null
.
/dev/null is a device: you can write to it and the bytes disappear,
as if into a black hole except without Hawking radiation.
Since /dev is implemented by the kernel and udev, in the chroot
it's just an empty directory. And if a program opens /dev/null in
the chroot, it might create a regular file there and actually write to it.
You wouldn't want that: it eats up disk space and can slow things down a lot.
The way to fix that is before you chroot:
mount --bind /dev /sid/dev
which will make /sid/dev a mirror of the real /dev.
It has to be done before the chroot because inside the chroot,
you no longer have access to the running system's /dev.
But there is a different syntax you can use after chrooting:
mount -t proc proc proc/
mount --rbind /sys sys/
mount --rbind /dev dev/
It's a good idea to do this for /proc and /sys as well,
and Debian recommends adding /dev/pts (which must be done after you've
mounted /dev), even though most of these probably won't come into play
during your upgrade.
Mount /boot
Finally, on my multi-boot system, I have one shared /boot
partition with kernels for Jessie, Sid and any other distros I have
installed on this system. (That's
somewhat
hard to do using grub2
but easy on Debian
though you may need to
turn
off auto-update and
Debian
is making it harder to use extlinux now.)
Anyway, if you have a separate /boot partition, you'll want it mounted
in the chroot, in case the update needs to add a new kernel.
Since you presumably already have the same /boot mounted on the
running system, use mount --bind
for that as well.
So here's the final set of commands to run, as root:
mount /sid
mount --bind /proc /sid/proc
mount --bind /sys /sid/sys
mount --bind /dev /sid/dev
mount --bind /dev/pts /sid/dev/pts
mount --bind /boot /sid/boot
chroot /sid
And then you can proceed with your apt-get update, apt-get dist-upgrade, etc.
When you're finished, you can unmount everything with one command:
umount --recursive /sid
Some helpful background reading:
Tags: linux, debian, chroot, install, boot
[
11:43 Feb 05, 2016
More linux/install |
permalink to this entry |
]
Sun, 31 Jan 2016
My mouse died recently: the middle button started bouncing, so a
middle button click would show up as two clicks instead of one.
What a piece of junk -- I only bought that Logitech some ten years ago!
(Seriously, I'm pretty amazed how long it lasted, considering it wasn't
anything fancy.)
I replaced it with another Logitech, which turned out to be quite
difficult to find. Turns out most stores only sell cordless mice these
days. Why would I want something that depends on batteries to use
every day at my desktop?
But I finally found another basic corded Logitech mouse (at Office Depot).
Brought it home and it worked fine, except that the speed was way too fast,
much faster than my old mouse. So I needed to find out how to change
mouse speed.
X11 has traditionally made it easy to change mouse acceleration,
but that wasn't what I wanted. I like my mouse to be fairly linear,
not slow to start then suddenly zippy. There's no X11 property for
mouse speed; it turns out that to set mouse speed, you need to call
it Deceleration.
But first, you need to get the ID for your mouse.
$ xinput list| grep -i mouse
⎜ ↳ Logitech USB Optical Mouse id=11 [slave pointer (2)]
Armed with the ID of 11, we can find the current speed (deceleration)
and its ID:
$ xinput list-props 11 | grep Deceleration
Device Accel Constant Deceleration (259): 3.500000
Device Accel Adaptive Deceleration (260): 1.000000
Constant deceleration is what I want to set, so I'll use that ID of 259
and set the new deceleration to 2:
$ xinput set-prop 11 259 2
That's fine for doing it once. But what if you want it to happen
automatically when you start X? Those constants might all stay the
same, but what if they don't?
So let's build a shell pipeline that should work even if the constants aren't.
First, let's get the mouse ID out of xinput list.
We want to pull out the digits immediately following "id=", and nothing else.
$ xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/'
11
Save that in a variable (because we'll need to use it more than once)
and feed it in to list-props
to get the deceleration ID.
Then use sed again, in the same way, to pull out just the thing in
parentheses following "Deceleration":
$ mouseid=$(xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/')
$ xinput list-props $mouseid | grep 'Constant Deceleration'
Device Accel Constant Deceleration (262): 2.000000
$ xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/'
262
Whew! Now we have a way of getting both the mouse ID and the ID for
the "Constant Deceleration" parameter, and we can pass them in to
set-prop
with our desired value (I'm using 2) tacked
onto the end:
$ xinput set-prop $mouseid $(xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/') 2
Add those two lines (setting the mouseid, then the final xinput line)
wherever your window manager will run them when you start X.
For me, using Openbox, they go in .config/openbox/autostart.
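So the end of my autostart has something like this (the same two commands
from above, just pasted into the file):
mouseid=$(xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/')
xinput set-prop $mouseid $(xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/') 2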
And now my mouse will automatically be the speed I want it to be.
Tags: linux, X11, mouse
[
13:42 Jan 31, 2016
More linux |
permalink to this entry |
]
Sun, 27 Dec 2015
Debian "Sid" (unstable) stopped working on my Thinkpad X201 as of the
last upgrade -- it's dropping mouse and keyboard events. With any luck
that'll get straightened out soon -- I hear I'm not the only one
having USB problems with recent Sid updates. But meanwhile,
fortunately, I keep a couple of spare root partitions so I can
try out different Linux distros. So I decided to switch to the
current Debian stable version, "Jessie".
The mouse and keyboard worked fine there. Except it turned out I had
never fully upgraded that partition to the "Jessie"; it was still on
"Wheezy". So, with much trepidation, I attempted an
apt-get update; apt-get dist-upgrade
After an interminable wait for everything to download, though, I was
faced with a blue screen asking this:
No bootloader integration code anymore.
The extlinux package does not ship bootloader integration anymore.
If you are upgrading to this version of EXTLINUX your system will not boot any longer if EXTLINUX was the only configured bootloader.
Please install GRUB.
<Ok>
No -- it's not okay! I have
good
reasons for not using grub2 -- besides which, extlinux on this
exact machine has been working fine for years under Debian Sid.
If it worked on Wheezy and works on Sid, why wouldn't it work on
the version in between, Jessie?
And what does it mean not to ship "bootloader integration", anyway?
That term is completely unclear, and googling was no help.
There have been various Debian bugs filed but of course, no
explanation from the developers for exactly what does and doesn't work.
My best guess is that what Debian means by "bootloader integration"
is that there's a script that looks at /boot/extlinux/extlinux.conf,
figures out which stanza corresponds to the current system,
figures out whether there's a new kernel being installed that's
different from the one in extlinux.conf, and updates the
appropriate kernel and initrd lines to point to the new kernel.
If so, that's something I can do myself easily enough. But what if
there's more to it? What would actually happen if I upgraded the
extlinux package?
Of course, there's zero documentation on this. I found plenty of
questions from people who had hit this warning, but most were from
newbies who had no idea what extlinux was or why their systems were
using it, and they were advised to install grub. I only found one hit
from someone who was intentionally using extlinux. That person aborted
the install, held back the package so the potentially nonbooting new
version of extlinux wouldn't be installed, then updated extlinux.conf
by hand, and apparently that worked fine.
It sounded like a reasonable bet. So here's what I did (as root, of course):
- Open another terminal window and run
ps aux | grep apt
to find the apt-get dist-upgrade process and kill it.
(sudo pkill apt-get
is probably an easier approach.)
Ensure that apt has exited and there's a shell prompt in the window
where the scary blue extlinux warning was.
echo "extlinux hold" | dpkg --set-selections
apt-get dist-upgrade
and wait forever for all the
packages to install
aptitude search linux-image | grep '^i'
to find out
what kernel versions are installed. Pick one. I picked 3.14-2-686-pae
because that happened to be the same kernel I was already running,
from Sid.
ls -l /boot
and make sure that kernel is there,
along with an initrd.img of the same version.
- Edit /boot/extlinux/extlinux.conf and find the stanza
for the Jessie boot. Edit the kernel and append initrd
lines to use the right kernel version.
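For reference, the edited stanza ends up looking something like this
(the root partition here is hypothetical -- use whatever your Jessie
root actually is):
label Jessie
    menu label Debian Jessie
    kernel /vmlinuz-3.14-2-686-pae
    append initrd=/initrd.img-3.14-2-686-pae root=/dev/sda5 ro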
It worked fine. I booted into jessie with the kernel I had specified.
And hooray -- my keyboard and mouse work, so I can continue to use my
system until Sid becomes usable again.
Tags: linux, debian, extlinux
[
17:28 Dec 27, 2015
More linux/install |
permalink to this entry |
]
Fri, 30 Oct 2015
In
Part
I of HDMI Presentation Setup on Linux, I covered the basics of getting
video and audio working over HDMI. Now I want to cover some finer-grained
details: some problems I had, and ways to make it easier to enable
HDMI when you need it.
Testing follies, delays, and screen blinking/flashing woes
While I was initially trying to get this working, I was using my own
short sound clip (one of the alerts I use for IRC) and it wasn't
working. Then I tried the test I showed in part I,
$ aplay -D plughw:0,3 /usr/share/sounds/alsa/Front_Center.wav
and that worked fine.
Tried my sound clip again -- nothing. I noticed that my clip was mono
and 8-bit while the ALSA sample was stereo and 16-bit, and I wasted a
lot of time in web searches on why HDMI would play one and not the other.
Eventually I figured out that the reason my short
clip wasn't playing was that there's a delay when switching on HDMI sound,
and the first second or two of any audio may be skipped.
I found lots of complaints about people missing the first few seconds
of sound over HDMI, so this problem is quite common, and I haven't
found a solution.
So if you're giving a talk where you need to play short clips -- for
instance, a talk on bird calls -- be aware of this. I'm probably
going to make a clip of a few seconds of silence, so I can play silence
before every short clip to make sure I'm fully switched over to HDMI
before the clip starts:
aplay -D plughw:0,3 silence.wav osprey.wav
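If you don't have a silence clip handy, sox (assuming it's installed) can generate one; something like this makes two seconds of stereo silence at the same 48000 Hz rate:
sox -n -r 48000 -c 2 silence.wav trim 0.0 2.0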
Another problem, probably related, happens when first starting an audio file:
the screen blinks briefly off then on again, then blinks again a
little while after the clip ends. ("Flicker" turns out to be a better
term to use when web searching, though I just see a single blink, not
continued flickering). It's possible this is something about my home
TV, and I will have to try it with another monitor somewhere to see if
it's universal. It sounds like
kernel
bug 51421: Enabling HDMI sound makes HDMI video flicker,
but that bug was marked resolved in 2012 and I'm seeing this in 2015
on Debian Jessie.
Making HDMI the sound default
What a pain, to have to remember to add -D plughw:0,3 every time you
play a sound. And what do you do for other programs that don't have
that argument?
Fortunately, you can make HDMI your default sound output.
Create a file in your home directory called .asoundrc with
this in it (you may be able to edit this down -- I didn't try)
and then all audio will go to HDMI:
pcm.dmixer {
type dmix
ipc_key 1024
ipc_key_add_uid false
ipc_perm 0660
slave {
pcm "hw:0,3"
rate 48000
channels 2
period_time 0
period_size 1024
buffer_time 0
buffer_size 4096
}
}
pcm.!default {
type plug
slave.pcm "dmixer"
}
Great! But what about after you disconnect? Audio will still be going
to HDMI ... in other words, nowhere. So rename that file:
$ mv .asoundrc asoundrc-hdmi
Then when you connect to HDMI, you can copy it back:
$ cp asoundrc-hdmi .asoundrc
What a pain, you say again! This should happen automatically!
That's possible, but tricky: you have to set up udev rules and scripts.
See this
Arch
Linux discussion on HDMI audio output switching automatically
for the gory details. I haven't bothered, since this is something I'll
do only rarely, when I want to give one of those multimedia presentations
I sometimes contemplate but never actually give.
So for me, it's not worth fighting with udev when, by the time I
actually need HDMI audio, the udev syntax probably will have changed again.
Aliases to make switching easy
But when I finally do break down and design a multimedia presentation,
I'm not going to be wanting to do all this fiddling in the presentation
room right before the talk. I want to set up aliases to make it easy.
There are two things that need to be done in that case:
make HDMI output the default, and make sure it's unmuted.
Muting and unmuting can be done automatically with amixer. First run amixer
with no arguments to find out the channel name (it gives a lot of output,
but look through the "Simple mixer control" lines, or speed that up
with amixer | grep control).
Once you know the channel name (IEC958 on my laptop), you can run:
amixer sset IEC958 unmute
The rest of the alias is just shell hackery to create a file called
.asoundrc with the right stuff in it, and saving .asoundrc before
overwriting it. My alias in .zshrc is set up so that I can
say hdmisound on or hdmisound off (with no arguments, it
assumes on), and it looks like this:
# Send all audio output to HDMI.
# Usage: hdmisound [on|off], default is on.
hdmisound() {
if [[ $1 == 'off' ]]; then
if [[ -f ~/.asoundrc ]]; then
mv ~/.asoundrc ~/.asoundrc.hdmi
fi
amixer sset IEC958 mute
else
if [[ -f ~/.asoundrc ]]; then
mv ~/.asoundrc ~/.asoundrc.nohdmi
fi
cat >> ~/.asoundrc <<EOF
pcm.dmixer {
type dmix
ipc_key 1024
ipc_key_add_uid false
ipc_perm 0660
slave {
pcm "hw:0,3"
rate 48000
channels 2
period_time 0
period_size 1024
buffer_time 0
buffer_size 4096
}
}
pcm.!default {
type plug
slave.pcm "dmixer"
}
EOF
amixer sset IEC958 unmute
fi
}
Of course, I could put all that .asoundrc content into a file and just
copy/rename it each time. But then I have another file I need to make
sure is in place on every laptop; I decided I'd rather make the alias
self-contained in my .zshrc.
Tags: linux, audio, speaking
[
11:57 Oct 30, 2015
More linux/laptop |
permalink to this entry |
]
Tue, 27 Oct 2015
For about a decade now I've been happily connecting to projectors to
give talks. xrandr --output VGA1 --mode 1024x768
switches
on the laptop's external VGA port, and xrandr --auto
turns it off again after I'm disconnected. No fuss.
But increasingly, local venues are eschewing video projectors and
instead using big-screen TVs, some of which offer only an HDMI port,
no VGA. I thought I'd better figure out how to present a talk over HDMI
now, so I'll be ready when I need to know.
Fortunately, my newest laptop does have an HDMI port. But in case it
ever goes on the fritz and I have to use an older laptop, I discovered
you can buy VGA to HDMI adaptors rather cheaply (about $10) on ebay.
I bought one of those, tested it on my TV at home and it at least
worked there. Be careful when shopping: you want to make sure you're
getting something that takes VGA in and outputs HDMI, rather than the
reverse. Ebay descriptions aren't always 100% clear on that, but if you
check the gender of the connector in the photo and make sure it's right
to plug into the socket on your laptop, you should be all right.
Once you're plugged in (whether via an adaptor, or native HDMI built
into your laptop), connecting is easy, just like connecting with VGA:
xrandr --output HDMI1 --mode 1024x768
Of course, you can modify the resolution as you see fit.
I plan to continue to design my presentations for a 1024x768
resolution for the foreseeable future. Since my laptop's screen is 1366x768,
I can use the remaining 342-pixel-wide swath for my speaker notes
and leave them invisible to the audience.
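An xrandr layout along these lines should do it -- the output names (LVDS1 for the laptop panel here) vary from machine to machine, so check xrandr with no arguments first:
xrandr --output LVDS1 --mode 1366x768 --pos 0x0 \
       --output HDMI1 --mode 1024x768 --pos 342x0
The laptop panel shows the whole 1366-pixel-wide virtual screen, while the HDMI output only sees the rightmost 1024 pixels, so anything placed in the leftmost 342 pixels never reaches the projector.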
But for GIMP presentations, I'll probably want to use the full width
of my laptop screen. --mode 1366x768
didn't work -- that
resolution wasn't available -- but running xrandr
with
no arguments got me a list of available resolutions, which included
1360x768. That worked fine and is what I'll use for GIMP talks and
other live demos where I want more screen space.
Sound over HDMI
My Toastmasters club had a tech session where a few of us tried out
the new monitor in our meeting room to make sure we could use it.
One person was playing a video with sound. I've never used sound in
a talk, but I've always wanted to find an excuse to try it.
Alas, it didn't "just work" -- xrandr's video settings have nothing
to do with ALSA's audio settings. So I had to wait until I got home so
I could do web searches and piece together the answer.
First, run
aplay -l
, which should show something like this:
$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
Subdevices: 0/1
Subdevice #0: subdevice #0
card 0: Intel [HDA Intel], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
Find the device number for the HDMI device in that output (the "device 3: HDMI 0" line):
in this case, it's 3 (which seems to be common on Intel chipsets).
Now you can run a test:
$ aplay -D plughw:0,3 /usr/share/sounds/alsa/Front_Center.wav
Playing WAVE '/usr/share/sounds/alsa/Front_Center.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Mono
If you don't hear anything, don't worry: the HDMI channel is probably muted
if you've never used it before. Run either
alsamixer or alsamixergui.
Now find the channel representing your HDMI connection.
(HDMI must be plugged in for this to work.) In alsamixer, it's called
S/PDIF; in alsamixergui, it's called IEC958.
If you look up either of those terms,
Wikipedia S/PDIF
will tell you that S/PDIF is the Sony/Philips Digital Interconnect Format,
a data protocol and a set of physical specifications.
Those physical specifications appear to have nothing to do with video,
and use connectors that are nothing like HDMI. So it doesn't make much
sense. Just remember that if you see IEC958 or S/PDIF in ALSA, that's
probably your HDMI channel.
In the alsamixergui screenshot, IEC958 is muted: you can tell because
the little speaker icon at the top of the column is bright white.
If it were unmuted, the speaker icon would be grey like most of the others.
Yes, this seems backward. It's Linux audio: get used to obscure user interfaces.
In the alsamixer screenshot, the mutes are at the bottom of each column,
and MM indicates a channel is muted (like the Beep channel in the screenshot).
S/PDIF is not muted here, though it appears to be at zero volume.
(The 00 doesn't tell you it's at zero volume; 00 means it's
not muted. What did I say about Linux audio?)
ALSA apparently doesn't let you adjust the volume of HDMI output:
presumably they expect that your HDMI monitor will have its own volume
control. If your S/PDIF is muted, you can use your right-arrow key to
arrow over to the S/PDIF channel, then type m to toggle muting.
You can exit alsamixer with Ctrl-C (Q and Ctrl-Q don't work).
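If you'd rather skip the interactive mixer, amixer can unmute the channel directly (substitute whatever channel name your hardware uses):
amixer sset IEC958 unmute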
Now try that aplay -D command again and see if it works.
With any luck, it will (loudly).
A couple of other tests you might want to try:
speaker-test -t sine -f 440 -c 2 -s 1 -D hw:0,3
plays a sine wave.
speaker-test -c 2 -r 48000 -D hw:0,3
runs a general speaker test sequence.
In
Part
II of Linux HDMI Presentations,
I'll cover some problems I had, and how to write an alias
to make it easy to turn HDMI audio on and off.
Tags: linux, audio, speaking
[
14:36 Oct 27, 2015
More linux/laptop |
permalink to this entry |
]
Thu, 22 Oct 2015
I went to a night sky photography talk on Tuesday. The presenter
talked a bit about tips on camera lenses, exposures; then showed
a raw image and prepared to demonstrate how to process it to bring
out the details.
His slides disappeared, the screen went blank, and then ... nothing.
He wrestled with his laptop for a while. Finally he said "Looks like
I'm going to need a network connection", left the podium and headed
out the door to find someone to help him with that.
I'm not sure what the networking issue was:
the nature center has open wi-fi, but you know how it is during talks:
if anything can possibly go wrong with networking, it will, which is
why a good speaker tries not to rely on it. And I'm not blaming this
speaker, who had clearly done plenty of preparation and thought
he had everything lined up.
Eventually they got the network connection, and he connected to Adobe.
It turns out the problem was that Adobe Photoshop is now cloud-based.
Even if you have a local copy of the software, it insists on checking
in with Adobe at least every 30 days. At least, that's the theory.
But he had used the software on that laptop earlier that same day,
and thought he was safe. But that wasn't good enough, and Photoshop
picked the worst possible time -- a talk in front of a large audience
-- to decide it needed to check in before letting him do anything.
Someone sitting near me muttered "I'd been thinking about buying that,
but now I don't think I will." Someone else told me afterward that
all Photoshop is now cloud-based; older versions still work,
but if you buy Photoshop now, your only option is this cloud version
that may decide ... at the least opportune moment ... that you can't
use your software any more.
I'm so glad I use Free software like GIMP. Not that things can't go
wrong giving a GIMP talk, of course. Unexpected problems or bugs can
arise with any software, and you take that risk any time you give a
live demo.
But at least with Free, open source software like GIMP, you know you
own the software and it's not suddenly going to refuse to run without
a license check. That sort of freedom is what makes the difference
between free as in beer, and Free as in speech.
You can practice your demo carefully before the talk
to guard against most bugs and glitches; but all the practice in the
world won't guard against software that won't start.
I talked to the club president afterward and offered to give a GIMP
talk to the club some time soon, when their schedule allows.
Tags: linux, gimp, open source, photoshop
[
10:24 Oct 22, 2015
More gimp |
permalink to this entry |
]
Thu, 15 Oct 2015
Update, December 2022:
viewmailattachments has been integrated with another mutt helper, viewhtmlmail.py, which can show HTML messages complete with embedded images. It's described
in the article View Mail Attachments from Mutt
and the script is at
viewmailattachments.py. It no longer uses the "please wait" screen described in this article, but the rest of the discussion still applies.
I seem to have fallen into a nest of Mac users whose idea of
email is a text part, an HTML part, plus two or three or seven attachments
(no exaggeration!) in an unholy combination of .DOC, .DOCX, .PPT and other
Microsoft Office formats, plus .PDF.
Converting to text in mutt
As a mutt user who generally reads all email as plaintext,
normally my reaction to a mess like that would be "Thanks, but
no thanks". But this is an organization that does a lot of good work
despite their file format habits, and I want to help.
In mutt, HTML mail attachments are easy.
This pair of entries in ~/.mailcap takes care of them:
text/html; firefox 'file://%s'; nametemplate=%s.html
text/html; lynx -dump %s; nametemplate=%s.html; copiousoutput
Then in .muttrc, I have
auto_view text/html
alternative_order text/plain text
If a message has a text/plain part, mutt shows that. If it has text/html
but no text/plain, it looks for the "copiousoutput" mailcap entry,
runs the HTML part through lynx (or I could use links or w3m) and
displays that automatically.
If, reading the message in lynx, it looks to me like the message has
complex formatting that really needs a browser, I can go to
mutt's attachments screen and display the attachment in firefox
using the other mailcap entry.
Word attachments are not quite so easy, especially when there are a
lot of them. The straightforward way is to save each one to a file,
then run LibreOffice on each file, but that's slow and tedious
and leaves a lot of temporary files behind.
For simple documents, converting to plaintext is usually good
enough to get the gist of the attachments.
These .mailcap entries can do that:
application/msword; catdoc %s; copiousoutput
application/vnd.openxmlformats-officedocument.wordprocessingml.document; docx2txt %s -; copiousoutput
Alternatives to catdoc include wvText and antiword.
But none of them work so well when you're cross-referencing five
different attachments, or for documents where color and formatting
make a difference, like mail from someone who doesn't know how to get
their mailer to include quoted text, and instead distinguishes their
comments from the text they're replying to by making their new
comments green (ugh!).
For those, you really do need a graphical window.
I decided what I really wanted (aside from people not sending me these
crazy emails in the first place!) was to view all the attachments as
tabs in a new window. And the obvious way to do that is to convert
them to formats Firefox can read.
Converting to HTML
I'd used wvHtml to convert .doc files to HTML, and it does a decent
job and is fairly fast, but it can't handle .docx. (People who send
Office formats seem to distribute their files fairly evenly between
DOC and DOCX. You'd think they'd use the same format for everything
they wrote, but apparently not.) It turns out LibreOffice has a
command-line conversion program, unoconv, that can handle any format
LibreOffice can handle. It's a lot slower than wvHtml but it does a
pretty good job, and it can handle .ppt (PowerPoint) files too.
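For instance, a single conversion looks something like this (the filenames and output directory are just placeholders):
unoconv -f html -o /tmp/attachments/ report.docx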
For PDF files, I tried using pdftohtml, but it doesn't always do so well,
and it's hard to get it to produce a single HTML file rather than a
directory of separate page files. And about three quarters of PDF files
sent through email turn out to be PDF in name only: they're actually
collections of images of single pages, wrapped together as a PDF file.
(Mostly, when I see a PDF like that I just skip it and try to get the
information elsewhere. But I wanted my program at least to be able to
show what's in the document, and let the user choose whether to skip it.)
In the end, I decided to open a firefox tab and let Firefox's built-in
PDF reader show the file, though popping up separate mupdf windows is
also an option.
I wanted to show the HTML part of the email, too. Sometimes there's
formatting there (like the aforementioned people whose idea of quoting
messages is to type their replies in a different color), but there can
also be embedded images. Extracting the images and showing them in a
browser window is a bit tricky, but it's a problem I'd already solved
a couple of years ago:
Viewing HTML mail messages from Mutt (or other command-line mailers).
Showing it all in a new Firefox window
So that accounted for all the formats I needed to handle.
The final trick was the firefox window. Since some of these conversions,
especially unoconv, are quite slow, I wanted to pop up a window right
away with a "converting, please wait..." message.
Initially, I used a javascript: URL, running the command:
firefox -new-window "javascript:document.writeln('<br><h1>Translating documents, please wait ...</h1>');"
I didn't want to rely on Javascript, though. A data: URL, which I
hadn't used before, can do the same thing without javascript:
firefox -new-window "data:text/html,<br><br><h1>Translating documents, please wait ...</h1>"
But I wanted the first attachment to replace the contents of that same
window as soon as it was ready, and then subsequent attachments open a
new tab in that window.
But it turned out that firefox is inconsistent about what -new-window
and -new-tab do; there's no guarantee that -new-tab will show up in
the same window you recently popped up with -new-window, and
running just firefox URL
might open in either the new
window or the old, in a new tab or not, or might not open at all.
And things got even more complicated after I decided that I should use
-private-window to open these attachments in private browsing mode.
In the end, the only way firefox would behave in a repeatable,
predictable way was to use -private-window for everything.
The first call pops up the private window, and each new call opens
a new tab in the private window. If you want two separate windows
for two different mail messages, you're out of luck: you can't have
two different private windows. I decided I could live with that;
if it eventually starts to bother me, I can always give up on Firefox
and write a little python-webkit wrapper to do what I need.
Using a file redirect instead
But that still left me with no way to replace the contents of the
"Please wait..." window with useful content. Someone on #firefox
came up with a clever idea: write the content to a page with a
meta redirect.
So initially, I create a file pleasewait.html that includes the header:
<meta http-equiv="refresh" content="2;URL=pleasewait.html">
(other HTML, charset information, etc. as needed).
The meta refresh means Firefox will reload the file every two seconds.
When the first converted file is ready, I just change the header
to redirect to
URL=first_converted_file.html.
Meanwhile, I can be opening the other documents in additional tabs.
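The real script does all of this from Python, but here's a rough shell sketch of the idea, with made-up paths and filenames:
mkdir -p /tmp/viewattach
cat > /tmp/viewattach/pleasewait.html <<EOF
<html><head>
<meta http-equiv="refresh" content="2;URL=pleasewait.html">
</head><body><h1>Translating documents, please wait ...</h1></body></html>
EOF
firefox -private-window /tmp/viewattach/pleasewait.html
# ... run the slow conversions ...
# then point the refresh at the first finished file:
sed -i 's/URL=pleasewait.html/URL=first_converted_file.html/' /tmp/viewattach/pleasewait.html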
Finally, I added the command to my .muttrc. When I'm viewing a message
either in the index or pager screens, F10 will call the script and
decode all the attachments.
macro index <F10> "<pipe-message>~/bin/viewmailattachments\n" "View all attachments in browser"
macro pager <F10> "<pipe-message>~/bin/viewmailattachments\n" "View all attachments in browser"
Whew! It was trickier than I thought it would be.
But I find I'm using it quite a bit, and it takes a lot of the pain
out of those attachment-full emails.
The script is available at:
viewmailattachments.py
on GitHub.
Tags: email, mutt, programming, python, firefox, linux
[
15:18 Oct 15, 2015
More linux |
permalink to this entry |
]
Sun, 11 Oct 2015
X stopped working after my last Debian update.
Rather than run a login manager, I typically log in on the console.
Then in my .zlogin file, I have:
if [[ $(tty) == /dev/tty1 ]]; then
# do various things first, then:
startx -- -dumbSched >& $HOME/.xsession-errors
fi
Ignore
-dumbSched
for now; it's a fix for a timing problem
openbox has when bringing up initial windows. The relevant part here
is that I redirect both standard output and standard error to a
file named .xsession-errors. That means that if I run GIMP or firefox
or any other program from a menu, and later decide I need to see
their output to look for error messages, all I have to do is check
that file.
But as of my last update, that no longer works.
Plain startx
, without the output redirection, works fine.
But with the redirect, X pauses for five or ten seconds, then exits,
giving me my prompt back but with messed-up terminal settings, so I
have to type reset
before I do anything else.
Of course, I checked that .xsession-errors file for error messages, and also the
~/.local/share/xorg/Xorg.0.log file it referred me to (which is where
X stores its log now that it's no longer running as root).
It seems the problem is this:
Fatal server error:
(EE) xf86OpenConsole: VT_ACTIVATE failed: Operation not permitted
Which wasn't illuminating but at least gave me a useful search keyword.
I found a fair number of people on the web having the same problem.
It's related to the recent Xorg change that makes it possible to run
Xorg as a regular user, not root. Not that running as a user should
have anything to do with capturing standard output and error. But
apparently Xorg running as a user is dependent on what sort of virtual
terminal it was run from; and the way it determines the controlling
terminal, apparently, is by checking stderr (and maybe also stdout).
Here's a slightly longer description of what it's doing,
from the ever useful
Arch
Linux forums.
I'm fairly sure there are better ways of determining a process's
controlling terminal than using stderr. For instance, a casual web
search turned up ctermid;
or you could do checks on /dev/tty
. There are probably
other ways.
The Arch Linux thread linked above, and quite a few others, suggest
adding the server option -keeptty
when starting X. The
Xorg manual isn't encouraging about this as a solution:
-keeptty
    Prevent the server from detaching its initial controlling
    terminal. This option is only useful when debugging the server.
    Not all platforms support (or can use) this option.
But it does work.
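So the startx line in my .zlogin becomes something like this (keeping the -dumbSched option I was already passing to the server):
startx -- -keeptty -dumbSched >& $HOME/.xsession-errors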
I found several bugs filed already on the redirection problem.
Freedesktop has a bug report on it, but it's more than a year old
and has no comments or activity:
Freedesktop
bug 82732: rootless X doesn't start if stderr redirected.
Redhat has a bug report:
Xorg
without root rights breaks by streams redirection,
and supposedly added a fix way back in January
in their package version xorg-x11-xinit-1.3.4-3.fc21 ...
though it looks like
their
fix is simply to enable -keeptty
automatically,
which is better than nothing but doesn't seem ideal.
Still, it does suggest that it's probably not harmful to use that
workaround and ignore what the Xorg man page says.
Debian didn't seem to have a bug filed on the problem yet (not
terribly surprising, since they only enabled it in unstable a
few days ago), so I used reportbug
to attempt to file one.
I would link to it here if Debian had an actual bug system that
allowed searching for bugs (they do have a page entitled "BTS Search"
but it gives "Internal Server Error", and the alternate google groups
bug search doesn't find my bug), or if their bug reporting system
acknowledged new bugs by emailing the submitter the bug number.
In truth I strongly suspect that reportbug
is actually a
no-op and doesn't actually do anything with the emailed report.
But I'm not sure the Debian bug matters since the real bug is Xorg's,
and it doesn't look like they're very interested in the problem.
So anyone who wants to be able to access output of programs running
under X probably needs to use -keeptty
for the foreseeable
future.
Update: the bug acknowledgement came in six hours later.
It's bug
801529.
Tags: X11, Xorg, linux
[
12:31 Oct 11, 2015
More linux |
permalink to this entry |
]
Sun, 26 Jul 2015
I've had no end of trouble with my Asus 1015E's trackpad.
A discussion of laptops on a mailing list -- in particular, someone's
concerns that the nifty-looking Dell XPS 13, which is available preloaded
with Linux, has had reviewers say that the trackpad doesn't work well --
reminded me that I'd never posted my final solution.
The Asus's trackpad has two problems. First, it's super sensitive to
taps, so if any part of my hand gets anywhere near the trackpad while
I'm typing, suddenly it sees a mouse click at some random point on the
screen, and instead of typing into an emacs window suddenly I find I'm
typing into a live IRC client. Or, worse, instead of typing my
password into a password field, I'm typing it into IRC.
That wouldn't have been so bad on the old style of trackpad, where I
could just turn off taps altogether and use the hardware buttons; this
is one of those new-style trackpads that doesn't have any actual buttons.
Second, two-finger taps don't work. Three-finger taps work just fine,
but two-finger taps: well, I found when I wanted a right-click (which
is what two-fingers was set up to do), I had to go TAP, TAP, TAP, TAP
maybe ten or fifteen times before one of them would finally take.
But by the time the menu came up, of course, I'd done another tap
and that canceled the menu and I had to start over. Infuriating!
I struggled for many months with synclient's settings for tap
sensitivity and right and left click emulation. I tried enabling
syndaemon, which is supposed to disable clicks as long as you're
typing then enable them again afterward, and spent months playing
with its settings, but in order to get it to work at all, I had to
set the timeout so long that there was an infuriating wait after I
stopped typing before I could do anything.
I was on the verge of giving up on the Asus and going back to
my Dell Latitude 2120, which had an excellent trackpad (with buttons)
and the world's greatest 10" laptop keyboard. (What the Dell doesn't
have is battery life, and I really hated to give up the Asus's light
weight and 8-hour battery life.) As a final, desperate option, I
decided to disable taps completely.
Disable taps? Then how do you do a mouse click?
I theorized, with all Linux's flexibility, there must be some way to
get function keys to work like mouse buttons. And indeed there is.
The easiest way seemed to be to use xmodmap (strange to find xmodmap
being the simplest anything, but there you go). It turns out that a
simple line like
xmodmap -e "keysym F1 = Pointer_Button1"
is most of what you need. But to make it work, you need to enable
"mouse keys":
xkbset m
But for reasons unknown, mouse keys will expire after some set timeout
unless you explicitly tell it not to. Do that like this:
xkbset exp =m
Once that's all set up, you can disable single-finger taps with synclient:
synclient TapButton1=0
Of course, you can disable 2-finger and 3-finger taps by setting them
to 0 as well. I don't generally find them a problem (they don't work
reliably, but they don't fire on their own either), so I left them enabled.
I tried it and it worked beautifully for left click. Since I was still
having trouble with that two-finger tap for right click, I put that on
a function key too, and added middle click while I was at it. I don't
use function keys much, so devoting three function keys to mouse
buttons wasn't really a problem.
In fact, it worked so well that I decided it would be handy to have
an additional set of mouse keys over on the other side of the keyboard,
to make it easy to do mouse clicks with either hand. So I defined
F1, F2 and F3 as one set of mouse buttons, and F10, F11 and F12 as another.
And yes, this all probably sounds nutty as heck. But it really is a
nice laptop aside from the trackpad from hell; and although I thought
Fn-key mouse buttons would be highly inconvenient, it took surprisingly
little time to get used to them.
So this is what I ended up putting in .config/openbox/autostart file.
I wrap it in a test for hostname, since I like to be able to use the
same configuration file on multiple machines, but I don't need this
hack on any machine but the Asus.
if [ $(hostname) == iridum ]; then
synclient TapButton1=0 TapButton2=3 TapButton3=2 HorizEdgeScroll=1
xmodmap -e "keysym F1 = Pointer_Button1"
xmodmap -e "keysym F2 = Pointer_Button2"
xmodmap -e "keysym F3 = Pointer_Button3"
xmodmap -e "keysym F10 = Pointer_Button1"
xmodmap -e "keysym F11 = Pointer_Button2"
xmodmap -e "keysym F12 = Pointer_Button3"
xkbset m
xkbset exp =m
else
synclient TapButton1=1 TapButton2=3 TapButton3=2 HorizEdgeScroll=1
fi
Tags: linux, laptop, trackpad, synaptics, synclient, syndaemon, xmodmap
[
20:54 Jul 26, 2015
More linux |
permalink to this entry |
]
Fri, 15 May 2015
I have a bunch of devices that use VFAT filesystems. MP3 players,
camera SD cards, SD cards in my Android tablet. I mount them through
/etc/fstab, and the files always look executable, so when
I ls -f
them, they all have asterisks after their names.
I don't generally execute files on these devices; I'd prefer the
files to have a mode that doesn't make them look executable.
I'd like the files to be mode 644 (or 0644 in most programming
languages, since it's an octal, or base 8, number). 644 in binary
is 110 100 100, or as the Unix ls
command puts it,
rw-r--r--.
There's a directive, fmask, that you can put in fstab
entries to control the mode of files when the device is mounted.
(Here's Wikipedia's long
umask article.)
But how do you get from the mode you want the files to be, 644,
to the mask?
The mask (which corresponds to the umask
command)
represents the bits you don't want to have set. So, for instance,
if you don't want the world-execute bit (1) set, you'd put 1 in the mask.
If you don't want the world-write bit (2) set, as you likely don't, put
2 in the mask. So that's already a clue that I'm going to want the
rightmost digit to be 3: I don't want files mounted from my MP3 player
to be either world writable or executable.
But I also don't want to have to puzzle out the details of all nine bits
every time I set an fmask. Isn't there some way I can take the mode I
want the files to be -- 644 -- and turn them into the mask I'd need to
put in /etc/fstab or set as a umask?
Fortunately, there is. It seemed like it ought to be straightforward,
but it took a little fiddling to get it into a one-line command I can type.
I made it a shell function in my .zshrc:
# What's the complement of a number, e.g. the fmask in fstab to get
# a given file mode for vfat files? Sample usage: invertmask 755
invertmask() {
python -c "print '0%o' % (~(0777 & 0$1) & 0777)"
}
This takes whatever argument I give to it -- $1 -- and takes
only the three rightmost octal digits from it, (0777 & 0$1). It takes
the bitwise NOT of that, ~. But the result of that is a negative
number, and we only want the three rightmost octal digits of the result,
(result) & 0777, expressed as an octal number -- which
we can do in python by printing it as %o. Whew!
Here's a shorter, cleaner looking alias that does the same thing,
though it's not as clear about what it's doing:
invertmask1() {
python -c "print '0%o' % (0777 - 0$1)"
}
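A quick sanity check (this needs Python 2, which the print syntax above already assumes):
$ invertmask 644
0133
$ invertmask 755
022
and 133 is exactly the fmask that goes into the fstab line below.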
So now, for my MP3 player I can put this in /etc/fstab:
UUID=0000-009E /mp3 vfat user,noauto,exec,fmask=133,shortname=lower 0 0
Tags: linux, cmdline
[
10:27 May 15, 2015
More linux/cmdline |
permalink to this entry |
]
Mon, 23 Feb 2015
Generally, when I work on a website, I maintain a local copy of all
the files. Ideally, I use version control (git, svn or whatever),
but failing that, I use rsync over ssh to keep my files in sync with the
web server's files.
But I'm helping with a local nonprofit's website, and the cheap web
hosting plan they chose doesn't offer ssh, just ftp.
While I have to question the wisdom of an ISP that insists that its
customers use insecure ftp rather than a secure encrypted protocol, that's
their problem. My problem is how to keep my files in sync with theirs.
And the other folks working on the website aren't developers and are
very resistant to the idea of using any version control system, so
I have to be careful to check for changed files before modifying anything.
In web searches, I haven't found much written about reasonable workflows
on an ftp-only web host. I struggled a lot with scripts calling ncftp
or lftp.
But then I discovered curlftpfs, which makes things much easier.
I put a line in /etc/fstab like this:
curlftpfs#user:password@example.com/ /servername fuse rw,allow_other,noauto,user 0 0
Then all I have to do is type mount /servername
and the
ftp connection is made automagically. From then on, I can treat it like a
(very slow and somewhat limited) filesystem.
Well, that's not quite all I had to do. There are a few other steps.
Of course, you have to install curlftpfs (duh). But then you'll also
need permission to write the local directory where you're trying to
mount. I suggest using group permissions for that:
# chmod 775 /servername
# chgrp adm /servername
and then edit /etc/group and add yourself to the adm group.
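Alternatively, usermod can add the group membership without hand-editing /etc/group; as root, substituting your own username:
# usermod -aG adm yourusername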
Unfortunately, either way you'll have to log in again, since
group membership is only picked up at login.
For instance, if I want to rsync, I can
rsync -avn --size-only /servername/subdir/ ~/servername/subdir/
for any particular subdirectory I want to check. A few things to know
about this:
- I have to use --size-only because timestamps aren't reliable.
I'm not sure whether this is a problem with the ftp protocol, or
whether this particular ISP's server has problems with its dates.
I suspect it's a problem inherent in ftp, because if I ls -l,
I see things like this:
-rw-rw---- 1 root root 7651 Feb 23 2015 guide-geo.php
-rw-rw---- 1 root root 1801 Feb 14 17:16 guide-header.php
-rw-rw---- 1 root root 8738 Feb 23 2015 guide-table.php
Note that a file modified a week ago shows a modification time,
but files modified today show only a day and year, not a time.
I'm not sure what to make of this.
- Note the -n flag. I don't automatically rsync from the server
to my local directory, because if I have any local changes newer
than what's on the server they'd be overwritten. So I check the
diffs by hand with tkdiff or meld before copying.
- It's important to rsync only the specific directories you're
working on. You really don't want to see how long it takes to get
the full file tree of a web server recursively over ftp.
How do you change and update files?
It is possible to edit the files on the curlftpfs
filesystem directly. But at least with emacs, it's incredibly slow:
emacs likes to check file modification dates whenever you change anything,
and that requires an ftp round-trip so it could be ten or twenty seconds
before anything you type actually makes it into the file, with even
longer delays any time you save.
So instead, I edit my local copy, and when I'm ready to push to the server,
I cp filename /servername/path/to/filename.
Of course, I have aliases and shell functions to make all of this
easier to type, especially the long pathnames: I can't rely on
autocompletion like I usually would, because autocompleting a
file or directory name on /servername requires an ftp round-trip to ls
the remote directory.
Oh, and version control? I use a local git repository. Just because the
other people working on the website don't want version control is no
reason I can't have a record of my own changes.
None of this is as satisfactory as a nice git or svn repository and
a good ssh connection. But it's a lot better than struggling with
ftp clients every time you need to test a file.
Tags: linux, web
[
19:46 Feb 23, 2015
More linux |
permalink to this entry |
]
Thu, 19 Feb 2015
Someone on the SVLUG list posted about a shell script he'd written to
find core dumps.
It sounded like a simple task -- just
locate core | grep -w core
, right?
I mean, any sensible packager avoids naming files or directories
"core" for just that reason, don't they?
But not so: turns out in the modern world, insane numbers of software
projects include directories called "core", including projects that
are developed primarily on Linux so you'd think they would avoid it ...
even the kernel.
On my system, locate core | grep -w core | wc -l
returned 13641 filenames.
Okay, so clearly that isn't working. I had to agree with the SVLUG
poster that using "file" to find out which files were actual core
dumps is now the only reliable way to do it. The output looks like this:
$ file core
core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (375)
The poster was using a shell script, but I was fairly sure it could
be done in a single shell pipeline. Let's see: you need to run locate
to find any files with 'core" in the name.
Then you pipe it through
grep to make sure the filename is actually core: since locate gives
you a full pathname, like
/lib/modules/3.14-2-686-pae/kernel/drivers/edac/edac_core.ko
or
/lib/modules/3.14-2-686-pae/kernel/drivers/memstick/core,
you want lines where only the final component is core --
so core has a slash before it and an end-of-line (in grep
that's denoted by a dollar sign, $) after it. So grep '/core$'
should do it.
Then take the output of that locate | grep
and run file
on it, and pipe the output of that file command through grep
to find the lines that include the phrase 'core file'.
That gives you lines like
/home/akkana/geology/NorCal/pinnaclesGIS/core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (523)
But those lines are long and all you really need are the filenames;
so pass it through sed to get rid of anything to the right of "core"
followed by a colon.
Here's the final command:
file `locate core | grep '/core$'` | grep 'core file' | sed 's/core:.*/core/'
On my system that gave me 11 files, and they were all really core dumps.
I deleted them all.
Tags: linux, cmdline
[
12:54 Feb 19, 2015
More linux |
permalink to this entry |
]
Mon, 22 Dec 2014
I'm working on my Raspberry Pi crittercam again. I got a battery, so
it can be a standalone box -- it was such a hassle to set it up with
two power cords dangling from it at all times -- and set it up to run
automatically at boot time.
But there was one aspect of the camera that wasn't automated: if close
enough to the house to see the wi-fi router, I want it to mount a
filesystem from our server and store its image files there. That
makes it a lot easier to check on its progress, and also saves wear
on the Pi's SD card.
Only one problem: I was using sshfs to mount the disk remotely, and
ssh always prompts me for a password.
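For context, the mount itself is just a one-liner, something like this (hostname and paths are made up):
sshfs myserver:/export/images /mnt/images
It's the password prompt, not the mount, that's the obstacle.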
Now, there are a gazillion tutorials on how to set up an ssh key.
Just do a web search for ssh key or
passwordless ssh key. They vary a bit in their details,
but they're all the same in the important aspects. They're all the
same in one other detail: none of them work for me. I generate a new
key (various types) with no pass phrase, I copy it to the server's
authorized keys file (several different ways, two possible filenames),
I try to ssh -- and I'm prompted for a password.
After much flailing I finally found out what was missing.
In addition to those two steps, you need to modify your
.ssh/config file to tell it which key to use.
This is especially critical if you have multiple keys on the client
machine, or if you've named the file anything but the default id_dsa
or id_rsa.
So here are the real steps for making an ssh key.
Assume the server, the machine to which you want to ssh,
is named "myserver". But these steps are all run on the client
machine, the one from which you want to run ssh.
ssh-keygen -t rsa -C "Comment"
When it prompts you for a filename, give it a full pathname,
e.g.
~/.ssh/id_rsa_myserver.
Type in a pass phrase, or hit return twice if you want to be able to
ssh without a password.
Update May 2016: this now fails with
Saving key ~/.ssh/id_rsa_myserver failed: No such file or directory
(duh, of course the file doesn't exist, I'm asking you to create it).
To get around this, specify the file on the command line:
ssh-keygen -t rsa -C "Comment" -f ~/.ssh/id_rsa_myserver
Update, April 2018: Do use RSA: DSA keys have now been deprecated.
If you make a DSA rather than an RSA key, ssh will just ignore it
and prompt you for a login password. No helpful error message or anything
explaining why it's ignored.
Now copy your key to the remote machine:
ssh-copy-id -i .ssh/id_rsa_myserver user@myserver
You can omit the
user@ if you're using the same username on
both machines. You'll have to type in your password on myserver.
Then on the local machine,
edit ~/.ssh/config, and add an entry like this:
Host myserver
User my_username
IdentityFile ~/.ssh/id_rsa_myserver
The User line is optional, and refers to your username on myserver
if it's different from the one on the client. For instance, on the
Raspberry Pi, everything has to run as root because most of the
hardware and camera libraries can't work any other way. But I
want it using my user ID on the server side, not root.
Update July 2021: You may need one more step. Keyed ssh will fail
silently if it doesn't like the permissions in the .ssh/ directory.
If it's still prompting you for a password, try, on the remote server:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Eliminating strict host key checking
Of course, you can use this to go the other way too, and ssh to your Pi
without needing to type a password every time. If you do that, and if
you have several Pis, Beaglebones, plug computers or other little
Linux gizmos which sometimes share the same IP address, you may run
into the annoying whine ssh is prone to:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
The only way to get around this once it happens is by editing
~/.ssh/known_hosts, finding the line corresponding to the pi,
and removing it (or just removing the whole file).
You're supposed to be able to turn off this check with
StrictHostKeyChecking no
, but it doesn't work.
Fortunately, there's a trick I discovered several years ago
and discussed in
Three SSH tips.
Here's how the Pi entry ends up looking in my desktop's
~/.ssh/config:
Host pipi
HostName pi
User pi
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
IdentityFile ~/.ssh/id_pi
Tags: ssh, linux, raspberry pi, crittercam, security, maker
[
16:25 Dec 22, 2014
More linux |
permalink to this entry |
]
Thu, 18 Dec 2014
Recently Firefox started refusing to run flash, including youtube videos
(about the only flash I run).
A bar would appear at the top of the page saying
"This plug-in is vulnerable and should be upgraded".
Apparently Adobe had another security bug.
There's an "Update now" button in the Firefox bar, but it's a chimera:
Firefox has never known how to install plug-ins for Linux (there are
longstanding bugs filed on why it claims to be able to but can't), and
it certainly doesn't know how to update a Debian package.
I use a Firefox downloaded
from Mozilla.org, but flash from
Debian's flashplugin-nonfree package.
So I figured updating Debian -- apt-get update; apt-get dist-upgrade
-- would fix it. Nope. I still got the same message.
A little googling found several pages recommending
update-flashplugin-nonfree --install
; I tried that but
it didn't help either. It seemed to download a tarball, but as far as
I could tell it never unpacked or installed the tarball it downloaded.
What finally did the trick was
apt-get install --reinstall flashplugin-nonfree
That downloaded a new tarball, AND unpacked and installed it.
After restarting Firefox, I was able to view the video I'd been trying
to watch.
Tags: linux, firefox, flash
[
15:21 Dec 18, 2014
More linux |
permalink to this entry |
]
Tue, 02 Dec 2014
I recently discovered that my ancient stereo turntable didn't survive
our move. So all those LPs I brought along, intending to rip to mp3
when I had more time, will never see bits.
So I need to buy new versions of some of that old music. In particular,
I'd lately been wanting to listen to my old Flanders and Swann albums.
Flanders and Swann were a terrific comedy music duo (think Tom Lehrer
only less scientifically oriented) from the 1960s.
So I ordered a CD of
The
Complete Flanders & Swann, which contains all three of the
albums I inherited from my parents. Woohoo! I ran a little script I
have that rips a whole CD to a directory of separate MP3 songs, and
I was all set.
Until I listened to it. It turns out that when the LP album was turned
into a CD, they put the track breaks in the wrong place. These albums
are recordings of live performances. Each song has a spoken intro,
giving a little context for the song that follows. On the CD, each
track starts with a song, and ends with the spoken intro for the next
song. That's no problem if you always listen to whole albums in order.
But I like to play individual tracks, or listen to music on random play.
So this wasn't going to work at all.
I tried using audacity to copy the intro from the end of one track and
paste it onto the beginning of another. That worked, but it was tedious
and fiddly. A little research showed me a much better way.
First: Rip the whole CD
First I needed to rip the whole CD as one gigantic track. My script
had been running cdparanoia tracknumber filename.wav.
But it took some study of the cdparanoia manual before I finally
found the way to rip a whole CD to one track: you can specify a
range of tracks, starting at 0 and omitting the end track.
cdparanoia 0- outfile.wav
Use Audacity to split and save the tracks
Now what's the best way to split a recording into separate tracks?
Fortunately the Audacity manual has a nice page on that very subject:
Splitting a recording into separate tracks.
Mostly, the issue is setting labels -- with
Tracks->Add Label at Selection or
Tracks->Add Label at Playback Position.
Use Ctrl-1 to zoom as much as you need to see where the short pauses are.
Then listen to the audio, pausing or clicking and setting labels
appropriately.
It's a bit fiddly. For instance, if you pause your listening to set a
label, you might want to save the audacity project so you don't lose
the label positions you've set so far. But you can't save unless you
Stop the playback; and that loses the current playback position which
you may not yet have set a label for. Even if you have set a label for
it, you'll need to click to set the selection to the label you just made
if you want to continue playing from where you left off. It all seems
a little silly and unintuitive ... but after a few tries you'll find a
routine that works for you.
When all your labels are set, then File->Export Multiple....
You will have to go through a bunch of dialogs involving metadata for
each track; just hit return, since audacity ignores any metadata you
type in and won't actually write it to the MP3 file. I have no idea
why it always prompts for metadata then doesn't use it, but you can
use a program like id3tool later to add proper metadata to the tracks.
So, no, the tools aren't perfect. On the other hand, I now have a nice
set of Flanders and Swann tracks, and can listen to Misalliance, Ill
Wind and The GNU Song complete with their proper introductions.
Tags: linux, audio
[
13:35 Dec 02, 2014
More linux |
permalink to this entry |
]
Tue, 18 Nov 2014
Am I the only one who's always confused about when holidays happen?
Partly it's software, I guess.
In these days of everybody keeping their schedules on Google's or
Apple's servers, maybe most people keep up on these things.
But being the dinosaur I am, I'm still resistant to keeping my
schedule in the cloud on a public server. What if I need to check for
upcoming events while I'm on a trip out in the remote desert somewhere?
(Not to mention the obvious privacy considerations.)
For years I used PalmOS PDAs, but when I switched to Android and
discovered how poor the offline calendar options are,
I decided that I should learn how to use the old Unix standby.
It's been pretty handy. I run
remind ~/[remind-file-name]
when I log in in the
morning, and it gives me a nice summary of upcoming events:
DPU Solar surcharge meeting, 5:30-8:30 tomorrow
NMGLUG meeting in 2 days' time
Of course, I can also have it email me with reminders, or pop up a
window, but so far I haven't felt the need.
I can also display a nice calendar showing upcoming events for this
month or the next several months. I made a couple of aliases:
mycal () {
months=$1
if [[ x$months = x ]]
then
months=1
fi
remind -c$months ~/Docs/Lists/remind
}
mycalp () {
months=$1
if [[ x$months = x ]]
then
months=2
fi
remind -p$months ~/Docs/Lists/remind | rem2ps -e -l > /tmp/mycal.ps
gv /tmp/mycal.ps &
}
The first prints an ascii calendar; the second displays a nice
postscript calendar complete with little icons for phases of the moon.
But what about those holidays?
Okay, that gives me a good way of storing reminders about appointments.
But I still don't know when holidays are.
(I had that problem with the PalmOS scheduling program, too -- it
never knew about holidays either.)
Web searching didn't help much.
Unfortunately, "remind" is a terrible name in this age of search engines.
If someone has already solved this problem, I sure wasn't able to find
any evidence of it. So instead, I went to
Wikipedia's
list of US holidays, with the
remind man page in
another tab, and wrote remind stanzas for each one -- except Easter,
which is much more complicated.
But wait -- it turns out that remind already has code to calculate
Easter! It just needs a slightly more complicated stanza: instead of
the standard form of
REM 1 Apr +1 MSG April Fool's Day %b
I need to use this form:
REM [trigger(easterdate(today()))] +1 MSG Easter %b
The %b in each case is what gives you the notice of when the event is
in your reminders, e.g. "Easter tomorrow" or "Easter in two days' time".
The +1 is how far beforehand you want to be reminded of each event.
So here's my remind file for US holidays. I make no guarantees that
every one is right, though I did check them for the next 12 months
and they all seem to be working.
#
# US Holidays
#
REM 1 Jan +3 MSG New Year's Day %b
REM Mon 15 Jan +2 MSG MLK Day %b
REM 2 Feb MSG Groundhog Day %b
REM 14 Feb +2 MSG Valentine's Day %b
REM Mon 15 Feb +2 MSG President's Day %b
REM 17 Mar +2 MSG St Patrick's Day %b
REM 1 Apr +9 MSG April Fool's Day %b
REM [trigger(easterdate(today()))] +1 MSG Easter %b
REM 22 Apr +2 MSG Earth Day %b
REM Fri 1 May -7 +2 MSG Arbor Day %b
REM Sun 8 May +2 MSG Mother's Day %b
REM Mon 1 Jun -7 +2 MSG Memorial Day %b
REM Sun 15 Jun MSG Father's Day
REM 4 Jul +2 MSG 4th of July %b
REM Mon 1 Sep +2 MSG Labor Day %b
REM Mon 8 Oct +2 MSG Columbus Day %b
REM 31 Oct +2 MSG Halloween %b
REM Tue 2 Nov +4 MSG Election Day %b
REM 11 Nov +2 MSG Veteran's Day %b
REM Thu 22 Nov +3 MSG Thanksgiving %b
REM 25 Dec +3 MSG Christmas %b
Tags: linux
[
14:07 Nov 18, 2014
More linux |
permalink to this entry |
]
Sun, 14 Sep 2014
Global key bindings in emacs. What's hard about that, right?
Just something simple like
(global-set-key "\C-m" 'newline-and-indent)
and you're all set.
Well, no. global-set-key
gives you a nice key binding
that works ... until the next time you load a mode that wants
to redefine that key binding out from under you.
For many years I've had a huge collection of mode hooks that run when
specific modes load. For instance, python-mode defines \C-c\C-r, my
binding that normally runs revert-buffer, to do something called run-python.
I never need to run python inside emacs -- I do that in a shell window.
But I fairly frequently want to revert a python file back to the last
version I saved. So I had a hook that ran whenever python-mode loaded
to override that key binding and set it back to what I'd already set
it to:
(defun reset-revert-buffer ()
(define-key python-mode-map "\C-c\C-r" 'revert-buffer) )
(setq python-mode-hook 'reset-revert-buffer)
That worked fine -- but you have to do it for every mode that
overrides key bindings and every binding that gets overridden.
It's a constant chase, where you keep needing to stop editing
whatever you wanted to edit and go add yet another mode-hook to
.emacs after chasing down which mode is causing the problem.
There must be a better solution.
A web search quickly led me to the StackOverflow discussion
Globally
override key bindings. I tried the techniques there; but they
didn't work.
It took a lot of help from the kind folks on #emacs, but after an hour
or so they finally found the key: emulation-mode-map-alists
.
It's only
barely
documented -- the key there is "The “active” keymaps in each alist
are used before minor-mode-map-alist and minor-mode-overriding-map-alist" --
and there seem to be no examples anywhere on the web for how to use it.
It's a list of alists mapping names to keymaps. Oh, clears it right up! Right?
Okay, here's what it means. First you define a new keymap and add your
bindings to it:
(defvar global-keys-minor-mode-map (make-sparse-keymap)
"global-keys-minor-mode keymap.")
(define-key global-keys-minor-mode-map "\C-c\C-r" 'revert-buffer)
(define-key global-keys-minor-mode-map (kbd "C-;") 'insert-date)
Now define a minor mode that will use that keymap. You'll use that
minor mode for basically everything.
(define-minor-mode global-keys-minor-mode
"A minor mode so that global key settings override annoying major modes."
t "global-keys" 'global-keys-minor-mode-map)
(global-keys-minor-mode 1)
Now build an alist consisting of a list containing a single dotted
pair: the name of the minor mode and the keymap.
;; A keymap that's supposed to be consulted before the first
;; minor-mode-map-alist.
(defconst global-minor-mode-alist (list (cons 'global-keys-minor-mode
global-keys-minor-mode-map)))
Finally, set emulation-mode-map-alists to a list containing only
the global-minor-mode-alist.
(setf emulation-mode-map-alists '(global-minor-mode-alist))
There's one final step. Even though you want these bindings to be
global and work everywhere, there is one place where you might not
want them: the minibuffer. To be honest, I'm not sure if this part
is necessary, but it sounds like a good idea so I've kept it.
(defun my-minibuffer-setup-hook ()
(global-keys-minor-mode 0))
(add-hook 'minibuffer-setup-hook 'my-minibuffer-setup-hook)
Whew! It's a lot of work, but it'll let me clean up my .emacs file and
save me from endlessly adding new mode-hooks.
Tags: editors, emacs, linux
[
16:46 Sep 14, 2014
More linux/editors |
permalink to this entry |
]
Sun, 07 Sep 2014
I read about cool computer tricks all the time. I think "Wow, that
would be a real timesaver!" And then a week later, when it actually
would save me time, I've long since forgotten all about it.
After yet another session where I wanted to open a frequently opened
file in emacs and thought "I think I made a
bookmark
for that a while back", but then decided it's easier to type the whole
long pathname rather than go re-learn how to use emacs bookmarks,
I finally decided I needed a reminder system -- something that would
poke me and remind me of a few things I want to learn.
I used to keep cheat sheets and quick reference cards on my desk;
but that never worked for me. Quick reference cards tend to be
50 things I already know, 40 things I'll never care about and 4 really
great things I should try to remember. And eventually they get
burned in a pile of other papers on my desk and I never see them again.
My new system is working much better. I created a file in my home
directory called .reminders, in which I put a few -- just a few
-- things I want to learn and start using regularly. It started out
at about 6 lines but now it's grown to 12.
Then I put this in my .zlogin (of course, you can do this for any
shell, not just zsh, though the syntax may vary):
if [[ -f ~/.reminders ]]; then
    cat ~/.reminders
fi
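If you'd rather be shown just one tip per login, a small variation should
work: replace the cat with shuf (part of GNU coreutils), which prints a
random selection of lines:
shuf -n 1 ~/.reminders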
Now, in every login shell (which for me is each new terminal window
I create on my desktop), I see my reminders. Of course, I don't read
them every time; but I look at them often enough that I can't forget
the existence of great things like
emacs bookmarks, or
diff <(cmd1) <(cmd2).
And if I forget the exact
keystroke or syntax, I can always cat ~/.reminders to
remind myself. And after a few weeks of regular use, I finally have
internalized some of these tricks, and can remove them from my
.reminders file.
It's not just for tech tips, either; I've used a similar technique
for reminding myself of hard-to-remember vocabulary words when I was
studying Spanish. It could work for anything you want to teach yourself.
Although the details of my .reminders are specific to Linux/Unix and zsh,
of course you could use a similar system on any computer. If you don't
open new terminal windows, you can set a reminder to pop up when you
first log in, or once a day, or whatever is right for you. The
important part is to have a small set of tips that you see regularly.
Tags: linux, cmdline, tech
[
21:10 Sep 07, 2014
More tech |
permalink to this entry |
]
Tue, 02 Sep 2014
I was using strace to figure out how to set up a program, lftp, and
a friend commented that he didn't know how to use it
and would like to learn. I don't use strace often, but when I do,
it's indispensable -- and it's easy to use. So here's a little tutorial.
My problem, in this case, was that I needed to find out what
configuration file I needed to modify in order to set up an alias
in lftp. The lftp man page tells you how to define an alias, but doesn't
tell you how to save it for future sessions; apparently you have
to edit the configuration file yourself.
But where? The man page suggested
a couple of possible config file locations -- ~/.lftprc and
~/.config/lftp/rc -- but neither of those existed. I wanted
to use the one that already existed. I had already set up bookmarks
in lftp and it remembered them, so it must have a config file already,
somewhere. I wanted to find that file and use it.
So the question was, what files does lftp read when it starts up?
strace lets you snoop on a program and see what it's doing.
strace shows you all system calls being used by a program.
What's a system call? Well, it's anything in section 2 of the Unix manual.
You can get a complete list by typing: man 2 syscalls
(you may have to install developer man pages first -- on Debian that's
the manpages-dev package). But the important thing is that most
file access calls -- open, read, chmod, rename, unlink (that's how you
remove a file), and so on -- are system calls.
You can run a program under strace directly:
$ strace lftp sitename
Interrupt it with Ctrl-C when you've seen what you need to see.
Pruning the output
And of course, you'll see tons of crap you're not interested in,
like rt_sigaction(SIGTTOU) and fcntl64(0, F_GETFL). So let's get rid
of that first. The easiest way is to use grep. Let's say I want to know
every file that lftp opens. I can do it like this:
$ strace lftp sitename |& grep open
I have to use |& instead of just | because strace prints its
output on stderr instead of stdout.
That's pretty useful, but it's still too much. I really don't care
to know about strace opening a bazillion files in
/usr/share/locale/en_US/LC_MESSAGES, or libraries like
/usr/lib/i386-linux-gnu/libp11-kit.so.0.
In this case, I'm looking for config files, so I really only want to know
which files it opens in my home directory. Like this:
$ strace lftp sitename |& grep 'open.*/home/akkana'
In other words, show me just the lines that have the word "open"
followed later by the string "/home/akkana".
Digression: grep pipelines
Now, you might think that you could use a simpler pipeline with two greps:
$ strace lftp sitename |& grep open | grep /home/akkana
But that doesn't work -- nothing prints out. Why? Because grep, under
certain circumstances that aren't clear to me, buffers its output, so
in some cases when you pipe grep | grep, the second grep will wait
until it has collected quite a lot of output before it prints anything.
(This comes up a lot with tail -f
as well.)
You can avoid that with
$ strace lftp sitename |& grep --line-buffered open | grep /home/akkana
but that's too much to type, if you ask me.
Back to that strace | grep
Okay, whichever way you grep for open and your home directory,
it gives:
open("/home/akkana/.local/share/lftp/bookmarks", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.netrc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/home/akkana/.local/share/lftp/rl_history", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.inputrc", O_RDONLY|O_LARGEFILE) = 5
Now we're getting somewhere! The file where it's getting its bookmarks
is
~/.local/share/lftp/bookmarks -- and I probably can't use that
to set my alias.
But wait, why doesn't it show lftp trying to open those other config files?
Using script to save the output
At this point, you might be sick of running those grep pipelines over
and over. Most of the time, when I run strace, instead of piping it
through grep I run it under script to save the whole output.
script is one of those poorly named, ungoogleable commands, but it's
incredibly useful. It runs a subshell and saves everything that appears
in that subshell, both what you type and all the output, in a file.
Start script, then run lftp inside it:
$ script /tmp/lftp.strace
Script started on Tue 26 Aug 2014 12:58:30 PM MDT
$ strace lftp sitename
After the flood of output stops, I type Ctrl-D or Ctrl-C to exit lftp,
then another Ctrl-D to exit the subshell that script is using.
Now all the strace output was in /tmp/lftp.strace and I can
grep in it, view it in an editor or anything I want.
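(strace also has its own -o flag for writing the trace straight to a file,
e.g. strace -o /tmp/lftp.strace lftp sitename, if you don't need to capture
your own typing along with it.)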
So, what files is it looking for in my home directory and why don't
they show up as open attempts?
$ grep /home/akkana /tmp/lftp.strace
Ah, there it is! A bunch of lines like this:
access("/home/akkana/.lftprc", R_OK) = -1 ENOENT (No such file or directory)
stat64("/home/akkana/.lftp", 0xbff821a0) = -1 ENOENT (No such file or directory)
mkdir("/home/akkana/.config", 0755) = -1 EEXIST (File exists)
mkdir("/home/akkana/.config/lftp", 0755) = -1 EEXIST (File exists)
access("/home/akkana/.config/lftp/rc", R_OK) = 0
So I should have looked for access and stat as well as
open.
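If you know in advance which system calls you care about, strace can also
do the filtering for you with -e; something like this should limit the
output to just those calls (check your strace man page for the exact
syscall names on your system):
$ strace -e trace=open,access,stat lftp sitename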
Now I have the list of files it's looking for. And, curiously,
it creates ~/.config/lftp if it doesn't exist already, even though
it's not going to write anything there.
So I created ~/.config/lftp/rc and put my alias there. Worked fine.
And I was able to edit my bookmark in ~/.local/share/lftp/bookmarks
later when I had a need for that. All thanks to strace.
Tags: linux, debugging, cmdline
[
13:06 Sep 02, 2014
More linux/cmdline |
permalink to this entry |
]
Thu, 28 Aug 2014
For the last several months, I've repeatedly found myself in a mode where
my terminal isn't working quite right. In particular, Ctrl-C doesn't
work to interrupt a running program. It's always in a terminal where
I've been doing web work. The site I'm working on sadly has only ftp
access, so I've been using ncftp to upload files to the site, and git
and meld to do local version control on the copy of the site I keep on
my local machine. I was pretty sure the problem was coming from either
git, meld, or ncftp, but I couldn't reproduce it.
Running reset fixed the problem. But since I didn't know
what program was causing the problem, I didn't know when I needed to
type reset.
The first step was to find out which of the three programs was at fault.
Most of the time when this happened, I wouldn't notice until hours
later, the next time I needed to stop a program with Ctrl-C.
I speculated that there was probably some way to make zsh run a check
after every command ... if I could just figure out what to check.
Terminal modes and stty -a
It seemed like my terminal was getting put into raw mode.
In programming lingo, a terminal is in raw mode when characters
from it are processed one at a time, and special characters like
Ctrl-C, which would normally interrupt whatever program is running,
are just passed like any other character.
You can list your terminal modes with stty -a:
$ stty -a
speed 38400 baud; rows 32; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
ignbrk -brkint ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -ixon -ixoff
-iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig icanon -iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke
But that's a lot of information. Unfortunately there's no single flag
for raw mode; it's a collection of a lot of flags.
I checked the interrupt character:
yep, intr = ^C, just like it should be. So what was
the problem?
I saved the output with stty -a >/tmp/stty.bad, then
I started up a new xterm and made a copy of what it should
look like with stty -a >/tmp/stty.good. Then I looked
for differences: meld /tmp/stty.good /tmp/stty.bad.
I saw these flags differing in the bad one: ignbrk ignpar -iexten -ixon,
while the good one had -ignbrk -ignpar iexten ixon. So I should be
able to run:
$ stty -ignbrk -ignpar iexten ixon
and that would fix the problem. But it didn't. Ctrl-C still didn't work.
Setting a trap, with precmd
However, knowing some things that differed did give me something to
test for in the shell, so I could test after every command and find
out exactly when this happened. In zsh, you do that by defining a
precmd function, so here's what I did:
precmd()
{
    stty -a | fgrep -- -ignbrk > /dev/null
    if [ $? -ne 0 ]; then
        echo
        echo "STTY SETTINGS HAVE CHANGED \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!"
        echo
    fi
}
Pardon all the exclams. I wanted to make sure I saw the notice when it happened.
And this fairly quickly found the problem: it happened when I suspended
ncftp with Ctrl-Z.
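(Bash users can get the same per-prompt check with PROMPT_COMMAND; roughly
something like this should work:
PROMPT_COMMAND='stty -a | grep -q -- -ignbrk || echo "STTY SETTINGS HAVE CHANGED"'
)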
stty sane and isig
Okay, now I knew the culprit, and that if I switched to a different ftp
client the problem would probably go away. But I still wanted to know
why my stty command didn't work, and what the actual terminal
difference was.
Somewhere in my web searching I'd stumbled upon some pages suggesting
stty sane as an alternative to reset.
I tried it, and it worked.
According to man stty, stty sane is equivalent to
$ stty cread -ignbrk brkint -inlcr -igncr icrnl -iutf8 -ixoff -iuclc -ixany imaxbel opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke
Eek! But actually that's helpful. All I had to do was get a bad
terminal (easy now that I knew ncftp was the culprit), then try:
$ stty cread
$ stty -ignbrk
$ stty brkint
... and so on, trying Ctrl-C each time to see if things were back to normal.
Or I could speed up the process by grouping them:
$ stty cread -ignbrk brkint
$ stty -inlcr -igncr icrnl -iutf8 -ixoff
... and so forth. Which is what I did. And that quickly narrowed it
down to
isig. I ran reset, then ncftp again to get the terminal
in "bad" mode, and tried:
$ stty isig
and sure enough, that was the difference. (That makes sense: isig is the
flag that enables the INTR, QUIT and SUSP characters, so with -isig set,
Ctrl-C is passed through as ordinary input instead of interrupting the
running program.)
I'm still not sure why meld didn't show me the isig difference.
But if nothing else, I learned a bit about debugging stty settings,
and about stty sane, which is a much nicer way of
resetting the terminal than reset since it doesn't
clear the screen.
Tags: linux, cmdline, debugging, mysteries
[
15:41 Aug 28, 2014
More linux |
permalink to this entry |
]
Tue, 08 Jul 2014
My new home office, with the big picture windows and the light
streaming in, comes with one downside: it's harder to see my screen.
A sensible person would, no doubt, keep the shades drawn when working,
or move the office to a nice dim interior room without any windows.
But I am not sensible and I love my view of the mountains, the gorge
and the birds at the feeders. So accommodations must be made.
The biggest problem is finding the mouse cursor. When I first
sit down at my machine, I move my mouse wildly around looking for any
motion on the screen. But the default cursors, in X and in most
windows, are little subtle black things. They don't show up at all.
Sometimes it takes half a minute to figure out where the mouse pointer is.
(This wasn't helped by a recent bug in Debian Sid where the USB mouse
would disappear entirely, and need to be unplugged from USB and
plugged back in before the computer would see it. I never did find a
solution to that, and for now I've downgraded from Sid to Debian testing
to make my mouse work. I hope they fix the bug in Sid eventually,
rather than porting whatever "improvement" caused the bug to more
stable versions. Dealing with that bug trained me so that when I can't
see the mouse cursor, I always wonder whether I'm just not seeing it,
or whether it really isn't there because the kernel or X has lost
track of the mouse again.)
What I really wanted was bigger mouse cursor icons in bright colors
that are visible against any background. This is possible, but it
isn't documented at all. I did manage to get much better cursors,
though different windows use different systems.
So I wrote up what I learned.
It ended up too long for a blog post, so I put it on a separate page:
X Cursor
Themes for big and contrasty mouse cursors.
It turned out to be fairly complicated. You can replace the existing
cursor font, or install new cursor "themes" that many (but not all)
apps will honor. You can change theme name and size (if you choose a
scalable theme), and some apps will honor that. You have to specify
theme and size separately for GTK apps versus other apps. I don't know
what KDE/Qt apps do.
I still have a lot of unanswered questions. In particular, I was unable
to specify a themed cursor for xterm windows, and for non-text areas
in emacs and firefox, and I'd love to know how to do that.
But at least for now, I have a great big contrasty blue mouse cursor that
I can easily see, even when I have the shades on the big windows open
and the light streaming in.
Tags: X11, linux
[
10:25 Jul 08, 2014
More linux |
permalink to this entry |
]
Sat, 28 Dec 2013
I've been scanning a bunch of records with Audacity (using as a guide
Carla Schroder's excellent Book of
Audacity and a
Behringer
UCA222 USB audio interface -- audacity doesn't seem able to record
properly from the built-in sound card on any laptop I own, while it
works fine with the Behringer.)
Audacity's user interface isn't great for assembly-line recording of
lots of tracks one after the other, especially on a laptop with a
trackpad that doesn't work very well, so I wasn't always as organized
with directory names as I could have been, and I ended up with a mess.
I was periodically backing up the recordings to my desktop, but as I
shifted from everything-in-one-directory to an organized system, the
two directories got out of sync.
To get them back in sync, I needed a way to answer this question:
is every file inside directory A (maybe in some subdirectory of it)
also somewhere under subdirectory B? In other words, can I safely
delete all of A knowing that anything in it is safely stored in B,
even though the directory structures are completely different?
I was hoping for some clever find | xargs
way to do it,
but came up blank. So eventually I used a little zsh loop:
one find to get the list of files to test, then for each of
those, another find inside the target directory, then check whether
that find printed anything. (You can't just test find's exit code:
find exits with 0 even when it finds no match, so the test has to be
on its output.)
(I'm assuming that if the songname.aup file is there, the songname_data
directory is too.)
for fil in $(find AAA/ -name '*.aup'); do
    fil=$(basename $fil)
    if [[ -z $(find BBB -name $fil) ]]; then
        echo $fil is not in BBB
    fi
done
Worked fine. But is there an easier way?
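One possibility, sketched here assuming GNU find's -printf and filenames
without newlines: compare sorted lists of basenames with comm, which with
-23 prints only the names that are in AAA but missing from BBB:
comm -23 <(find AAA -name '*.aup' -printf '%f\n' | sort) \
         <(find BBB -name '*.aup' -printf '%f\n' | sort)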
Tags: shell, cmdline, linux, programming
[
10:36 Dec 28, 2013
More linux/cmdline |
permalink to this entry |
]
Wed, 13 Nov 2013
Last week I wrote about some tests I'd made to answer the question
Does
scrolling output make a program slower?
My test showed that when running a program that generates lots of output,
like an rsync -av, the rsync process will slow way down as it waits for
all that output to scroll across whatever terminal client you're using.
Hiding the terminal helps a lot if it's an xterm or a Linux console,
but doesn't help much with gnome-terminal.
A couple of people asked in the comments about the actual source of
the slowdown. Is the original process -- the rsync, or my test script,
that's actually producing all that output -- actually blocking waiting
for the terminal? Or is it just that the CPU is so busy doing all that
font rendering that it has no time to devote to the original program,
and that's why it's so much slower?
I found pingu on IRC (thanks to JanC) and the group had a very
interesting discussion, during which I ran a series of additional tests.
In the end, I'm convinced that CPU allocation to the original process
is not the issue, and that output is indeed blocked waiting for the
terminal to display the output. Here's why.
First, I installed a couple of performance meters and looked at the
CPU load while rendering. With conky, CPU use went up equally (about
35-40%) on both CPU cores while the test was running. But that didn't
tell me anything about which processes were getting all that CPU.
htop was more useful. It showed X first among CPU users, xterm second,
and my test script third. However, the test script never got more than
10% of the total CPU during the test; X and xterm took up nearly all
the remaining CPU.
Even with the xterm hidden, X and xterm were the top two CPU users.
But this time the script, at number 3, got around 30% of the CPU
rather than 10%. That still doesn't seem like it could account for the
huge difference in speed (the test ran about 7 times faster with xterm
hidden); but it's interesting to know that even a hidden xterm will
take up that much CPU.
It was also suggested that I try redirecting the output to /dev/null,
something I definitely should have thought to try before.
The test took .55 seconds with its output redirected to /dev/null,
and .57 seconds redirected to a file on disk (of course, the kernel
would have been buffering, so there was no disk wait involved).
For comparison, the test had taken 56 seconds with xterm visible
and scrolling, and 8 seconds with xterm hidden.
I also spent a lot of time experimenting with sleeping for various
amounts of time between printed lines.
With time.sleep(.0001) and xterm visible, the test took 104.71 seconds.
With xterm shaded and the same sleep, it took 98.36 seconds, only 6 seconds
faster. Redirected to /dev/null but with a .0001 sleep, it took 97.44 sec.
I think this argues for the blocking theory rather than the CPU-bound one:
the argument being that the sleep gives the program a chance
to wait for the output rather than blocking the whole time.
If you figure it's CPU bound, I'm not sure how you'd explain the result.
But a .0001 second sleep probably isn't very accurate anyway -- we
were all skeptical that Linux can manage sleep times that small.
So I made another set of tests, with a .001 second sleep every 10
lines of output. The results:
65.05 with xterm visible; 63.36 with xterm hidden; 57.12 to /dev/null.
That's with a total of 50 seconds of sleeping included
(my test prints 500000 lines).
So with all that CPU still going toward font rendering, the
visible-xterm case still only took 7 seconds longer than the /dev/null
case. I think this argues even more strongly that the original test,
without the sleep, is blocking, not CPU bound.
But then I realized what the ultimate test should be. What happens when
I run the test over an ssh connection, with xterm and X running on my
local machine but the actual script running on the remote machine?
The remote machine I used for the ssh tests was a little slower than the
machine I used to run the other tests, but that probably doesn't make
much difference to the results.
The results? 60.29 sec printing over ssh (LAN) to a visible xterm;
7.24 sec doing the same thing with xterm hidden. Fairly similar to
what I'd seen before when the test, xterm and X were all running on the
same machine.
Interestingly, the ssh process during the test took 7% of my CPU,
almost as much as the python script was getting before, just to
transfer all the output lines so xterm could display them.
So I'm convinced now that the performance bottleneck has nothing to do
with the process being CPU bound and having all its CPU sucked away by
rendering the output, and that the bottleneck is in the process being
blocked in writing its output while waiting for the terminal to catch up.
I'd be interested to hear further comments -- are there other
interpretations of the results besides mine?
I'm also happy to run further tests.
Tags: performance, linux, programming, python
[
17:19 Nov 13, 2013
More linux |
permalink to this entry |
]
Fri, 08 Nov 2013
While watching my rsync -av messages scroll by during a big backup,
I wondered, as I often have, whether that -v (verbose) flag was slowing
my backup down.
In other words: when you run a program that prints lots of output,
so there's so much output the terminal can't display it all in real-time
-- like an rsync -v on lots of small files --
does the program wait ("block") while the terminal catches up?
And if the program does block, can you speed up your backup by
hiding the terminal, either by switching to another desktop, or by
iconifying or shading the terminal window so it's not visible?
Is there any difference among the different ways of hiding the
terminal, like switching desktops, iconifying and shading?
Since I've never seen a discussion of that, I decided to test it myself.
I wrote a very simple Python program:
import time
start = time.time()
for i in xrange(500000):
    print "Now we have printed", i, "relatively long lines to stdout."
print time.time() - start, "seconds to print", i, "lines."
I ran it under various combinations of visible and invisible terminal.
The results were striking.
These are rounded to the nearest tenth of a second, in most cases
the average of several runs:
Terminal type                  | Seconds
xterm, visible                 |    56.0
xterm, other desktop           |     8.0
xterm, shaded                  |     8.5
xterm, iconified               |     8.0
Linux framebuffer, visible     |   179.1
Linux framebuffer, hidden      |     3.7
gnome-terminal, visible        |    56.9
gnome-terminal, other desktop  |    56.7
gnome-terminal, iconified      |    56.7
gnome-terminal, shaded         |    43.8
Discussion:
First, the answer to the original question is clear. If I'm displaying
output in an xterm, then hiding it in any way will make a huge
difference in how long the program takes to complete.
On the other hand, if you use gnome-terminal instead of xterm,
hiding your terminal window won't make much difference.
Gnome-terminal is nearly as fast as xterm when it's displaying;
but it apparently lacks xterm's smarts about not doing
that work when it's hidden. If you use gnome-terminal,
you don't get much benefit out of hiding it.
I was surprised how slow the Linux console was (I'm using the framebuffer
with the Debian 3.2.0-4-686-pae kernel on Intel graphics). But it's easy to see
where that time is going when you watch the output: in xterm, you see
lots of blank space as xterm skips drawing lines trying to keep up
with the program's output. The framebuffer doesn't do that:
it prints and scrolls every line, no matter how far behind it gets.
But equally interesting is how much faster the framebuffer is when
it's not visible. (I typed Ctrl-alt-F2, logged in, ran the program,
then typed Ctrl-alt-F7 to go back to X while the program ran.)
Obviously xterm is doing some background processing that the framebuffer
console doesn't need to do. The absolute time difference, less than four
seconds, is too small to worry about, but it's interesting anyway.
I would have liked to try my test on a bare Linux console, with no framebuffer,
but figuring out how to get a distro kernel out of framebuffer mode was
a bigger project than I wanted to tackle that afternoon.
I should mention that I wasn't super-scientific about these tests.
I avoided doing any heavy work on the machine while the tests were running,
but I was still doing light editing (like this article), reading mail and
running xchat. The times for multiple runs were quite consistent, so I
don't think my light system activity affected the results much.
So there you have it. If you're running an output-intensive program
like rsync -av and you care how fast it runs, use either xterm or the
console, and leave it hidden most of the time.
Tags: performance, linux, programming, python
[
15:17 Nov 08, 2013
More linux |
permalink to this entry |
]
Sun, 03 Nov 2013
While practicing a talk the other night on my new Asus laptop, my
Logitech remote presenter,
which had worked fine a few hours earlier, suddenly became flaky.
When I clicked the "next slide" button, sometimes there would be a
delay of up to ten seconds; sometimes it never worked at all, and I
had to click it again, whereupon the slide might advance once, twice,
or not at all. Obviously not useful.
Realizing that I'd been plugged into AC earlier in the day, and now
was running on battery, I plugged in the AC adaptor. And sure enough,
the presenter worked fine, no delays or glitches. So battery was the issue.
What's different about running on batteries?
I immediately suspected laptop-mode, which sets different power profiles
to help laptops save battery life when unplugged.
The presenter acts as a USB keyboard, sending key events like PAGE DOWN,
and on other distros (specifically Arch Linux) I've had problems with
USB keyboard devices disappearing when laptop-mode is active.
So I moved /etc/init.d/laptop-mode out of the way to disable it,
and rebooted. Tried the presenter again: no improvement.
But it was laptop-mode anyway.
Apparently, even though /etc/init.d/laptop-mode says in its header
that its purpose is to start and stop laptop-mode,
laptop-mode starts even without that file.
The key is the configuration file
/etc/laptop-mode/conf.d/usb-autosuspend.conf,
where I changed the line
CONTROL_USB_AUTOSUSPEND="auto"
to
CONTROL_USB_AUTOSUSPEND=0
In theory, I should have been able to do
service laptop-mode restart
to test it,
but I didn't trust that since I'd just established that
/etc/init.d/laptop-mode didn't actually control laptop-mode.
So I rebooted.
And the presenter worked just fine!
I was able to give my talk that afternoon without plugging in the AC cord.
Ironically, this particular talk was on giving tech talks, and one of
my points was that being prepared and practicing beforehand is
critical to giving a good talk. I'm sure glad I did that extra
practice run with the presenter and no power cord!
Tags: linux, laptop, speaking
[
16:13 Nov 03, 2013
More speaking |
permalink to this entry |
]
Sun, 11 Aug 2013
Want to get started controlling hardware from your BeagleBone Black?
I've found a lot of the documentation and tutorials a little sketchy,
so here's what I hope is a quick start guide.
I used the
Adafruit
Python GPIO library for my initial hacking.
It's easy to set up:
once you have your network set up, run these commands:
opkg update && opkg install python-pip python-setuptools python-smbus
pip install Adafruit_BBIO
Pinout diagrams
First, where can you plug things in? The BBB has two huge header blocks,
P8 and P9,
but trying to find pinout diagrams for them is a problem. Don't blindly
trust any diagram you find on the net; compare it against several
others, and you may find there are big differences. I've found a
lot of mislabeled BBB diagrams out there.
The best I've found so far are the two tables at
elinux.org/BeagleBone.
No pictures, but the tables are fairly readable and seem to be
correct.
The
official
BeagleBone Black hardware manual is the reference you're actually
supposed to use.
It's a 121-page PDF full of incomprehensible and unexplained abbreviations.
Good luck! The pin tables for P8 and P9 are on pp. 80 and 82.
P8 and P9 are identified on p. 78.
Blinking an LED: basic GPIO output
For basic GPIO output, you have a wide choice of pins.
Use the tables to identify power and ground, then pick a GPIO pin
that doesn't seem to have too many other uses.
The Adafruit library can identify pins either by their location
on the P8 and P9 headers, e.g. "P9_11", or by GPIO number, e.g.
"GPIO0_26". Except -- with the latter designation, what's that extra
zero between GPIO and _26? Is it always 0? Adafruit doesn't explain it.
So for now I'm sticking to the PN_NN format.
I plugged my LED and resistor into ground (there are lots of ground
terminals -- I used pin 0 on the P9 header) and pin 11 on P9.
It's one line to enable it, and then you can turn it on and off:
import Adafruit_BBIO.GPIO as GPIO
GPIO.setup("P9_11", GPIO.OUT)
GPIO.output("P9_11", GPIO.HIGH)
GPIO.output("P9_11", GPIO.LOW)
GPIO.output("P9_11", GPIO.HIGH)
Or make it blink:
import time
while True:
    GPIO.output("P9_11", GPIO.HIGH)
    time.sleep(.5)
    GPIO.output("P9_11", GPIO.LOW)
    time.sleep(.5)
Fading an LED: PWM output
PWM is harder. Mostly because it's not easy to find out which pins
can be used for PWM. All the promotional literature on the BBB says
it has 8 PWM outputs -- but which ones are those?
If you spend half an hour searching for "pwm" in that long PDF manual
and collecting a list of pins with "pwm" in their description,
you'll find 13 of them on P9 and 12 on P8. So that's no help.
After comparing a bunch of references and cross-checking pin numbers
against the descriptions in the hardware manual, I think this is the list:
P9_14 P9_16 P9_21 P9_22 P9_28 P9_31
P8_13 P8_19
I haven't actually verified all of them yet, though.
Once you've found a pin that works for PWM, the rest is easy.
Start it and set an initial frequency with PWM.start(pin, freq),
and then you can change the duty cycle with set_duty_cycle(pin, cycle)
where cycle is a number between 0 and 100. The duty cycle
is the reverse of what you might expect: if you have an LED plugged
in, a duty cycle of 0 will be brightest, 100 will be dimmest.
You can also change the frequency with PWM.set_frequency(pin, freq).
I'm guessing freq is in Hertz, but they don't actually say.
When you're done, you should call PWM.stop() and PWM.cleanup().
Here's how to fade an LED from dim to bright ten times:
import Adafruit_BBIO.PWM as PWM
import time

PWM.start("P9_14", 50)
for j in range(10):
    for i in range(100, 0, -1):
        PWM.set_duty_cycle("P9_14", i)
        time.sleep(.02)
PWM.stop("P9_14")
PWM.cleanup()
Tags: hardware, linux, beaglebone, maker
[
13:36 Aug 11, 2013
More hardware |
permalink to this entry |
]
Tue, 16 Jul 2013
Just a couple of tips for communicating with your BeagleBone Black
once you have it flashed with the latest Angstrom:
Configure the USB network
The Beaglebone Black running Angstrom has a wonderful feature:
when it's connected to your desktop via the mini USB cable, you
can not only mount it like a disk, but you can also set up networking
to it.
If the desktop is running Linux, you should have all the drivers you need.
But you might need a couple of udev rules to make the network device
show up. I ran the
mkudevrule.sh
from the official Getting Started page, which creates four rules
and then runs sudo udevadm control --reload-rules
to enable them
without needing to reboot your Linux machine.
Now you're ready to boot the BeagleBone Black running Angstrom.
Ideally you'll want it connected to your desktop machine by
the mini USB cable, but also plugged in to a separate power supply.
Your Linux machine should see it as a new network device, probably
eth1: run ifconfig -a
to see it.
The Beagle is already configured as 192.168.7.2. So you can talk
to it by configuring your desktop machine to be .1 on the same network:
ifconfig eth1 192.168.7.1
So now you can ssh from your desktop machine to the BBB,
or point your browser at the BBB's built in web pages.
Make your Linux machine a router for the Beaglebone
If you want the Beaglebone Black to have access to the net, there are
two more things you need to do.
First, on the BBB itself, run these two lines:
/sbin/route add default gw 192.168.7.1
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
You'll probably want to add these lines to the end of
/usr/bin/g-ether-load.sh on the BBB, so they'll be run
automatically every time you boot.
Then, back on your Linux host, do this:
sudo iptables -A POSTROUTING -t nat -j MASQUERADE
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward > /dev/null
Now you should be able to ping, ssh or otherwise use the BBB to get
anywhere on the net.
Once your network is running, you might want to run
/usr/bin/ntpdate -b -s -u pool.ntp.org
to set the time, since the BBB doesn't have a real-time clock (RTC).
Serial monitor
If you're serious about playing with hardware, you'll probably want
a serial cable, for those times when something goes wrong and your
board isn't talking properly over USB.
I use the Adafruit
console cable -- it's meant for Raspberry Pi but it works fine
with the BeagleBone, since they both use the same 3.3v logic levels.
It plugs in to the pins right next to the "P9" header, on the
power-supply-plug side of the board.
The header has six pins: plug the black wire (ground) into pin 1,
the one closest to the power plug and ethernet jack. Plug the green wire
(RXD) into pin 4, and the white wire (TXD) into 5, the next-to-last pin.
Do not plug in the red power wire -- leave it hanging.
It's 5 volts and might damage the BBB if you plug it in to the
wrong place.
In case my photo isn't clear enough (click for a larger image),
here's a diagram made by a helpful person on #beagle:
BeagleBone
Black serial connections.
Once the cable is connected, now what? Easy:
screen /dev/ttyUSB0 115200
That requires read-write privileges on the serial device /dev/ttyUSB0;
if you get a Permission Denied type error, it probably means you need
to add yourself to group dialout. (Changing groups requires logging out
and logging back in, so if you're impatient, just run screen as root.)
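On most Debian-ish systems, adding yourself to the group is something like
this (substitute your own username, then log out and back in):
sudo usermod -a -G dialout yourusername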
I used the serial monitor while flashing my new Angstrom image
(which is how I found out about how it spends most of that hour
updating Gnome desktop stuff). On the Raspberry Pi, I was dependent
on the serial cable for all sorts of things while I worked on hardware
projects; on the BeagleBone, I suspect I may use the USB networking
feature a lot more. Still, the serial cable will be handy to have when
things go wrong, or if I use a different Linux distro (like Debian) that
doesn't enable the USB networking feature.
Tags: hardware, linux, beaglebone, maker
[
19:41 Jul 16, 2013
More hardware |
permalink to this entry |
]
Sat, 13 Jul 2013
I finally got a shiny new BeagleBone Black! This little board
looks like it should be just the ticket for robotics, a small, cheap,
low-power Linux device that can also talk to hardware like an Arduino,
with plenty of GPIO pins, analog, PWM, serial, I2C and all.
I plugged in the BeagleBone Black via the mini USB cable, and it
powered up and booted. It comes with a Linux distro, Angstrom, already
installed on its built-in flash memory. I had already known that it
would show up as a USB storage device -- you can mount it like a disk,
and read the documentation (already there on the filesystem) that way.
Quite a nice feature.
What I didn't know until I read the
Getting
Started guide was that it had an even slicker feature: it also
shows up as a USB network device.
All I had to do was run a script,
mkudevrule.sh,
to set up some udev rules, and then
ifconfig -a
on my desktop showed a new device named eth1.
The Beagle is already configured as 192.168.7.2, so I configured eth1
to be on the same network:
ifconfig eth1 192.168.7.1
and I was able to point my browser directly at a mini http server
running on the device, which gives links to all the built-in documentation.
Very slick and well thought out!
But of course, what I really wanted was to log in to the machine
itself. So I tried ssh 192.168.7.2
and ... nothing.
It turns out that the Angstrom that ships on current BBBs has a bug,
and ssh often doesn't work. The cure is to download a new Angstrom
image and re-flash the machine.
It was getting late in the evening, so I postponed that until the
following day. And a good thing I did: the flashing process turned out
to be very time consuming and poorly documented, at least for Linux users.
So here's how to do it.
Step 1: Use a separate power supply
Most of the Beaglebone guides recommend just powering
the BBB through its provided mini USB cable. Don't believe it.
At least, my first attempt at flashing failed, while repeating
exactly the same steps with the addition of an external power supply
worked just fine, and I've heard from other people who have had
similar problems trying to power the BBB through the USB cable.
Fortunately, when I ordered the BBB I ordered a 2A power supply with
it. It's hard to believe that it ever really draws 2 amps, but that's
what Adafruit recommended, so that's what I ordered.
One caution: the BBB will start booting as soon as you
apply any power, whether from an external supply or the USB cable.
So it might be best to leave the USB cable disconnected during
the flashing process.
Get the eMMC-flasher image and copy it to the SD card
Download the image for the BeagleBone Black eMMC flasher from
Beagleboard
Latest Images.
They don't tell you the size of the image, but it's 369M.
Then uncompress it. It's a .xz file, which I wasn't previously familiar
with, but I already had an uncompressor for it, unxz:
unxz BBB-eMMC-flasher-2013.06.20.img.xz
After uncompressing, it was 583M.
You'll need a microSD card to copy the image to. The Beagleboard folks
don't say how much space you need, but I found a few pages
talking about needing a 4G card. I'm not clear why you'd need that
for an image barely over half a gig, but 4G is what I happened to
have handy, so that's what I used.
Put the card in whatever adapter you need, plug it in to your Linux
box, and unmount it if got mounted automatically. Then copy the image
to the card -- just the base card device, not the first partition.
Replace X with the appropriate drive name (b in my case):
dd bs=1M if=BBB-eMMC-flasher-2013.06.20.img of=/dev/sdX
The copy will take quite a while.
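A couple of sanity checks that might save some grief: lsblk (part of
util-linux) lists block devices and their sizes, so you can confirm which
device is really the SD card before overwriting it; and running sync after
the dd finishes makes sure everything is flushed before you pull the card:
lsblk
sync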
Boot off the card
With the BBB powered off, insert the microSD card.
Find the "user boot" button. It's a tiny button right on top of the
microSD card reader. While holding it down, plug in your power supply
to power the BBB on. Keep holding the button down until you see all
four of the bright blue LEDs come on, then release the button.
Then wait. A long time. A really long time.
The LEDs should flash erratically during this period.
Most estimates I found on the web estimated 30-45 minutes to flash a
new version of Angstrom, but for me it took an hour and six minutes.
You'll know when it's done when the LEDs stop blinking erratically.
Either they'll all turn on steady (success) or they'll all go off
(failure).
Over an hour? Why so long?
I wondered that, of course, so in my second attempt at flashing, once
I had the serial cable plugged in, I ran ps periodically to see what
it was doing.
And for nearly half that time -- over 25 minutes -- what it was doing
was configuring Gnome.
Seriously. This Angstrom distribution for a tiny board half the size
of your hand runs a Gnome desktop -- and when it flashes its OS,
it doesn't just copy files, it runs Gnome configuration scripts for
every damn program on the system.
Okay. I'm a little less impressed with the Beagle's Angstrom setup now.
Though I still think this USB-ethernet thing is totally slick.
Tags: hardware, linux, beaglebone, maker
[
14:30 Jul 13, 2013
More hardware |
permalink to this entry |
]
Sun, 09 Jun 2013
I recently went on an upgrading spree on my main computer. In the hope
of getting more up-to-date libraries, I updated my Ubuntu to 13.04
"Raring Ringtail", and Debian to unstable "Sid". Most things went fine
-- except for Firefox.
Under both Ringtail and Sid, Firefox became extremely unstable.
I couldn't use it for more than about fifteen minutes before it would
freeze while trying to access some web resource. The only cure when
that happened was to kill it and start another Firefox.
This was happening with the exact same Firefox -- a 21.0 build from
mozilla.org -- that I was using without any problems on older versions
of Debian and Ubuntu; and with the exact same profile. So it was
clearly something that had changed about Debian and Ubuntu.
The first thing I do when I hit a Firefox bug is test with
a fresh profile. I have all sorts of Firefox customizations, extensions
and other hacks. In fact, the customizations are what keep me tied
to Firefox rather than jumping to some other browser. But they do,
too often, cause problems. I have a generic profile I keep around
for testing, so I fired it up and used it for browsing for a day.
Firefox still froze, but not as often.
Disabling Extensions
Was it one of my extensions?
I went to Tools->Add-ons to try disabling them all ...
and Firefox froze. Bingo! That was actually good news. Problems like
"Firefox freezes a lot" are hard to debug. "Firefox freezes every time
I open Tools->Add-ons" is a whole lot easier.
Now I needed to find some other way of disabling extensions to see if
that helped.
I went to my Firefox profile directory and moved everything
in the extensions directory into a new directory I made called
extensions.sav. Then I started moving them back one by one,
each time starting Firefox and calling up Tools->Add-ons.
It turned out two extensions were causing the freeze: Open in Browser
and Custom Tab Width. So I left those off for the time being.
Disabling Themes
Along the way, I discovered that clicking on Appearance in
Tools->Add-ons would also cause a freeze, so my visual
theme was also a problem. This wasn't something I cared about:
some time back when Mozilla started trumpeting their themeability,
I clicked around and picked up some theme involving stars and planets.
I could live without that.
But how do you disable a theme?
Especially if you can't go to Tools->Add-ons->Appearance?
Turns out everything written on the web on this is wrong. First,
everything on themes on mozilla.org assumes you can get to that
Appearance tab, and doesn't even consider the possibility that you
might have to look in your profile and remove a file.
Search further and you might find references to files named
lightweighttheme-header and lightweighttheme-footer, neither of
which existed in my profile.
But I did have a directory called lwtheme.
So I removed that, plus four preferences in prefs.js that included
the term "lightweightThemes".
After a restart, my theme was gone, I was able to view that Appearance tab,
and I was able to browse the web for nearly 4 hours before firefox hung again.
Darn! That wasn't all of it.
Debugging the environment
But soon after that I had a breakthrough.
I discovered a page on my bank's website that froze Firefox every time.
But that was annoying for testing, since it required logging in then
clicking through several other pages, and you never know what a bank
website might decide to do if you start logging in over and over.
I didn't want to get locked out.
But then I was checking an episode in one of the podcasts I listen to,
which involved going to the link
http://downloads.bbc.co.uk/podcasts/radio4/moreorless/rss.xml
-- and Firefox froze, on a simple RSS link. I restarted and tried
again -- another freeze. I'd finally found the Rosetta stone,
something that hung Firefox every time. Now I could do some serious testing!
I'd had friends try this using the same version of Firefox and Ubuntu,
without seeing a freeze. Was it something about my user environment?
I created a new user, switched to another virtual console (Ctrl-Alt-F2)
and logged in as my new user, then ran X. This was a handy way to test:
I could get to my normal user's X session in Ctrl-Alt-F7, while the new
user's X session was on Ctrl-Alt-F8. Since I don't have Gnome or KDE
installed on this machine, the new user came up with a default Openbox
session. It came up at the wrong resolution -- the X11 in the newest
Linux distros apparently doesn't read the HDMI monitor properly --
but I wasn't worried about that.
And when I ran Firefox as the new user (letting it create a new profile)
and middlemouse-pasted the BBC RSS URL, it loaded it, without freezing.
Now we're getting somewhere.
Now I knew it was something about my user environment.
I tried copying all of ~/.config from my user to the new user. No hang.
I tried various other configuration files. Still no hang.
The X initialization
I'll skip some steps here, and just mention that in trying to fix the
resolution problem, so I didn't have to do all my debugging at 1024x768,
I discovered that if I used my .xinitrc file to start X, I'd get a freezy
Firefox. If I didn't use my .xinitrc, and defaulted to the system one,
Firefox was fine. Even if I removed everything else from my .xinitrc,
and simply ran openbox from it, that was enough to make Firefox hang.
Okay, what was the system doing? I poked around /etc/X11:
it was running /etc/X11/Xsession. I copied that file to my
.xinitrc and started X. No hang.
Xsession does a bunch of things, but one of the main things it does is run
every script in the /etc/X11/Xsession.d directory.
So I made a copy of that directory inside my home directory, and modified
.xinitrc to execute those files instead. Then I started moving them
aside to see which ones made a difference.
And I found it. /etc/X11/Xsession.d/75dbus_dbus-launch was the
file that mattered.
75dbus_dbus-launch takes the name of the program that's
going to be executed -- in this case that was x-session-manager, which
links to /etc/alternatives/x-session-manager, which links to
/usr/bin/openbox-session -- and instead runs
/usr/bin/dbus-launch --exit-with-session x-session-manager.
Now that I knew that, I moved everything aside and made a little
.xinitrc that ran
/usr/bin/dbus-launch --exit-with-session openbox-session.
And Firefox didn't crash.
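In other words, a minimal .xinitrc along these lines is enough (a sketch --
substitute whatever session command you want dbus-launch to wrap):
#!/bin/sh
exec /usr/bin/dbus-launch --exit-with-session openbox-session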
Dbus
So it all comes down to dbus. I was already running dbus: ps shows
/usr/bin/dbus-daemon --system running -- and that worked fine
for everything dbussy I normally do, like run "gimp image.jpg" and
have it open in my already running GIMP.
But on Ringtail and Sid, that isn't enough for Firefox. For some
reason, on these newer systems, Firefox requires a second
dbus daemon -- it shows up in ps as
/usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
-- for the X session. If it doesn't have that, it's fine for a while,
and then, hours later, it will mysteriously freeze while waiting for
a network resource.
Why? I have no idea. No one I've asked seems to know anything about
how dbus works, the difference between system and session dbus daemons,
or why any of it would have this effect on Firefox.
I filed a Firefox bug,
Bug 881122,
though I don't have much hope of anyone being interested in a bug
that only affects Linux users using nonstandard X sessions.
But maybe I'm not the only one. If your Firefox is hanging and
you found your way here, I hope I've given you some ideas.
And if anyone has a clue as to what's really happening and why
dbus would have that effect, I'd love to hear from you.
Tags: firefox, mozilla, debugging, linux, debian, ubuntu, dbus
[
20:08 Jun 09, 2013
More linux |
permalink to this entry |
]
Wed, 15 May 2013
Checking versions in Debian-based systems is a bit of a pain.
This happens to me a couple of times a month: for some reason I need
to know what version of something I'm currently running -- often a
library, like libgtk. aptitude show
will tell you all about a package -- but only if you know its exact name.
You can't do aptitude show libgtk
or even
aptitude show '*libgtk*'
-- you have to know that the
package name is libgtk2.0-0. Why is it libgtk2.0-0? I have no idea,
and it makes no sense to me.
So I always have to do something like
aptitude search libgtk | egrep '^i'
to find out what
packages I have installed that match the name libgtk, find the
package I want, then copy and paste that name after typing
aptitude show.
But it turns out it's super easy in Python to query Debian packages using the
Python
apt package. In fact, this is all the code you need:
import sys
import apt
cache = apt.cache.Cache()
pat = sys.argv[1]
for pkgname in cache.keys():
    if pat in pkgname:
        pkg = cache[pkgname]
        instver = pkg.installed
        if instver:
            print pkg.name, instver.version
Then run
aptver libgtk
and you're all set.
In practice, I wanted nicer formatting, with columns that lined up, so
the actual script is a little longer. I also added a -u flag to show
uninstalled packages as well as installed ones. Amusingly, the code to
format the columns took about twice as many lines as the code that does the
actual work. There doesn't seem to be a standard way of formatting
columns in Python, though there are lots of different implementations
on the web. Now there's one more -- in my
aptver
on github.
Tags: linux, debian, ubuntu, python, programming
[
16:07 May 15, 2013
More linux |
permalink to this entry |
]
Tue, 26 Mar 2013
Sometimes I need to take a URL from some text app -- like a shell window,
or an IRC comment -- and open it in Firefox.
If it's a standard http://, that's trivial: I highlight the URL
with my mouse (often a doubleclick will do it), go to my Firefox window
and middleclick somewhere in the content area, anywhere that's not
a link, and Firefox goes to the URL.
That works because selecting anything, in X, copies the selection to
the Primary selection buffer. The Primary selection is different from
the Clipboard selection that's used with Windows and Mac style
Ctrl-X/Ctrl-C/Ctrl-V copy and paste; it's faster and doesn't require
switching between keyboard and mouse. Since your hand is already on the
mouse (from selecting the text), you don't have to move to the keyboard
to type Ctrl-C, then back to the mouse to go to the Firefox window,
then back to the keyboard to type Ctrl-V.
But it fails in some cases. Like when someone says in IRC,
"There's a great example of that at coolhacks.org/greatexample".
You can highlight coolhacks.org/greatexample
and middleclick
in Firefox all you want, but Firefox doesn't recognize it as a URL and
won't go there. Or if I want to highlight a couple of search terms and
pass them into a Google search.
(Rant: middlemouse used to work for these cases, but it was
disabled -- without even an option for getting it back -- due to a lot of
whining in bugzilla by people coming from Windows backgrounds who
didn't like middleclick paste because they found it unexpected, yet
who weren't willing to turn it off with the
middlemouse.contentLoadURL
pref).
So in those cases, what I've been doing is:
- Highlight the URL or search terms
- Switch to the Firefox window
- Ctrl-L -- this focuses and highlights the URL bar without
changing the X selection
- Backspace (or Ctrl-U, Ctrl-K, any key that clears the URLbar)
- Move the mouse to the URL bar (a small target) and middleclick
- Hit Enter
It works, but it's a lot more steps, and entails several switches
between keyboard and mouse. Frustrating!
It would be a little less frustrating if I had a key binding in Firefox
that said "Paste the current X primary selection." A web search shows
that quite a few other people have been bothered by this problem --
for instance, here
and here
-- but without any solutions. Apparently in a lot of apps, Ctrl-Insert
inserts the Primary X selection -- but in Firefox and a few others,
it inserts the Clipboard instead, just like Ctrl-C.
I could write my own fix, by unzipping Firefox's omni.ja file
and editing various .xul and .js files inside it. But if I were doing
that, I could just as easily revert Firefox's original behavior of
going to the link. Neither of these is difficult; the problem is that
every time Firefox updates (which is about twice a week these days),
things break until I manually go in and unzip the jar and make my
changes again. I used to do that, but I got tired of needing to do it
so often. And I tried to do it via a Firefox extension, until Mozilla
changed the Firefox extension API so that extensions couldn't modify
key bindings any more.
Since Firefox changes so often, it's nicer to have a solution that's
entirely outside of Firefox. And a comment in one of those discussion
threads gave me an idea: make a key binding in my window manager that
uses xsel to copy the primary selection to the clipboard, then
use my crikey
program to insert a fake Ctrl-V that Firefox will see.
Here's a command to do that:
xsel -o -p | xsel -i -b; crikey -s 1 "^V"
xsel -o prints a current X selection, and -p specifies
the Primary. xsel -i sets an X selection to whatever it
gets on standard input (which in this case will be whatever was in the
Primary selection), and -b tells it to set the Clipboard selection.
Then crikey -s 1 "^V" waits one second (I'll probably reduce this after
more testing) and then generates an X event for a Ctrl-V.
I bound that command to Ctrl-Insert in my window manager, Openbox,
like this:
<keybind key="C-Insert">
    <action name="Execute">
        <execute>/bin/sh -c 'xsel -o -p | xsel -i -b; crikey -s 1 "^V"'</execute>
    </action>
</keybind>
Openbox didn't seem happy with the pipe, so I wrapped the whole thing in
a sh -c.
Now, whenever I type Ctrl-Insert, whatever program I'm in will do a
Ctrl-V but insert the Primary selection rather than the Clipboard.
It should work in other recalcitrant programs, like LibreOffice, as well.
In Firefox, now, I just have to type Ctrl-L Ctrl-Insert Return.
Of course, it's easy enough to make a binding specific to Firefox
that does the Ctrl-L and the Return automatically. I've bound that
to Alt-Insert, and its execute line looks like this:
<execute>/bin/sh -c 'xsel -o -p | xsel -i -b; crikey -s 1 "^L^V\\n"'</execute>
Fun with Linux! Now the only hard part will be remembering to use the
bindings instead of doing things the hard way.
Tags: linux, firefox, X11, cmdline, crikey
[
20:35 Mar 26, 2013
More linux/cmdline |
permalink to this entry |
]
Sat, 16 Mar 2013
I'm at PyCon, and I spent a lot of the afternoon in the Raspberry Pi lab.
Raspberry Pis are big at PyCon this year -- because everybody at
the conference got a free RPi! To encourage everyone to play, they
have a lab set up, well equipped with monitors, keyboards, power
and ethernet cables, plus a collection of breadboards, wires, LEDs,
switches and sensors.
I'm primarily interested in the RPi as a robotics controller,
one powerful enough to run a camera and do some minimal image processing
(which an Arduino can't do).
And on Thursday, I attended a PyCon tutorial on the Python image processing
library SimpleCV.
It's a wrapper for OpenCV that makes it easy to access parts of images,
do basic transforms like greyscale, monochrome, blur, flip and rotate,
do edge and line detection, and even detect faces and other objects.
Sounded like just the ticket, if I could get it to work on a Raspberry Pi.
SimpleCV can be a bit tricky to install on Mac and Windows, apparently.
But the README on the SimpleCV
git repository gives an easy 2-line install for Ubuntu. It doesn't
run on Debian Squeeze (though it installs), because apparently it
depends on a recent version of pygame and Squeeze's is too old;
but Ubuntu Pangolin handled it just fine.
The question was, would it work on Raspbian Wheezy? Seemed like a
perfect project to try out in the PyCon RPi lab. Once my RPi was
set up and I'd run an apt-get update, I
used netsurf (the most modern of the lightweight browsers available
on the RPi) to browse to the
SimpleCV
installation instructions.
The first line,
sudo apt-get install ipython python-opencv python-scipy python-numpy python-pygame python-setuptools python-pip
was no problem. All those packages are available in the Raspbian repositories.
But the second line,
sudo pip install https://github.com/ingenuitas/SimpleCV/zipball/master
failed miserably. Seems that pip likes to put its large downloaded
files in /tmp; and on Raspbian, running off an SD card, /tmp quite
reasonably is a tmpfs, running in RAM. But that means it's quite small,
and programs that expect to be able to use it to store large files
are doomed to failure.
I tried a couple of simple Linux patches, with no success.
You can't rename /tmp to replace it with a symlink to a directory on the
SD card, because /tmp is always in use. And pip makes a new temp directory
name each time it's run, so you can't just symlink the pip location to
a place on the SD card.
I thought about rebooting after editing the tmpfs out of /etc/fstab,
but it turns out it's not set up there, and it wasn't obvious how to
disable the tmpfs. Searching later from home, I found that the size is
set in /etc/default/tmpfs. As for disabling the tmpfs and using the
SD card instead, it's not clear. There's a block of code in
/etc/init.d/mountkernfs.sh that makes that decision; it looks like
symlinking /tmp to somewhere else might do it, or else commenting out
the code that sets RAMTMP="yes". But I haven't tested that.
Instead of rebooting, I downloaded the file to the SD card:
wget https://github.com/ingenuitas/SimpleCV/master
But it turned out it's not so easy to pip install from a local file.
After much fussing around I came up with this:
pip install http:///home/pi/master --download-cache /home/pi/tmp
That worked, and the resulting SimpleCV install worked nicely!
I typed some simple tests into the simplecv shell, playing around
with their built-in test image "lenna":
img = Image('lenna')
img.show()
img.binarize().show()
img.toGray().show()
img.edges().show()
img.invert().show()
And, for something a little harder, some face feature detection:
let's find her eyes and outline them in yellow.
img.listHaarFeatures()
img.findHaarFeatures('eye.xml').draw(color=Color.YELLOW)
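The same calls work outside the interactive simplecv shell, too. Here's a
minimal standalone sketch (not something I ran at PyCon; it just strings
together the same API calls, using the standard SimpleCV import):
#!/usr/bin/env python
# Minimal SimpleCV sketch: edge detection plus eye detection on the
# built-in "lenna" test image. Assumes SimpleCV is installed as above.
from SimpleCV import Image, Color

img = Image('lenna')
img.edges().show()                      # quick edge-detection check

eyes = img.findHaarFeatures('eye.xml')  # returns None if nothing is found
if eyes:
    eyes.draw(color=Color.YELLOW)
    img.show()

raw_input("Press Enter to quit ")       # keep the display window up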
SimpleCV is lots of fun! And the edge detection was quite fast on the RPi --
this may well be usable by a robot, once I get the motors going.
Tags: raspberry pi, python, programming, hardware, linux, maker
[
21:43 Mar 16, 2013
More linux/install |
permalink to this entry |
]
Mon, 04 Mar 2013
My Lenovo laptop has a nifty button, Fn-F5, to toggle wi-fi and bluetooth
on and off. Works fine, and the indicator lights (of which the Lenovo
has many -- it's quite nice that way) obligingly go off or on.
But when I suspend and resume, the settings aren't remembered.
The machine always comes up with wireless active, even if it wasn't
before suspending.
Since wireless can be a drain on battery life, as well as a potential
security issue, I don't want it on when I'm not actually using it.
So I wanted a way to turn it off programmatically.
The answer, it turns out, is rfkill.
$ rfkill list
0: tpacpi_bluetooth_sw: Bluetooth
Soft blocked: yes
Hard blocked: no
0: phy0: Wireless LAN
Soft blocked: yes
Hard blocked: no
tells you what hardware is currently enabled or disabled.
To toggle something off,
$ rfkill block bluetooth
$ rfkill block wifi
Type rfkill -h
for more details on arguments you can use.
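And to turn the radios back on from the command line, rfkill has a matching
unblock command:
$ rfkill unblock bluetooth
$ rfkill unblock wifi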
Fn-F5 still works to enable or disable them together.
I think this is being controlled by /etc/acpi/ibm-wireless.sh,
though I can't find where it's tied to Fn-F5.
You can make it automatic by creating a script in /etc/pm/sleep.d/.
(That's on Ubuntu; of course, the exact file location may vary with distro
and version.) To disable wireless on resume, do this:
#! /bin/sh
case "$1" in
resume)
rfkill block bluetooth
rfkill block wifi
;;
esac
exit $?
Of course, you can also tie that into other things, like your current
network scheme, or what wireless networks are visible (which you can
get with iwlist wlan0 scan).
Tags: linux, ubuntu, laptop, tip
[
19:46 Mar 04, 2013
More linux/laptop |
permalink to this entry |
]
Sat, 19 Jan 2013
I'm fiddling with a serial motor controller board, trying to get it
working with a Raspberry Pi. (It works nicely with an Arduino, but
one thing I'm learning is that everything hardware-related is far
easier with Arduino than with RPi.)
The excellent Arduino library helpfully
provided by Pololu
has a list of all the commands the board understands. Since it's Arduino,
they're in C++, and look something like this:
#define QIK_GET_FIRMWARE_VERSION 0x81
#define QIK_GET_ERROR_BYTE 0x82
#define QIK_GET_CONFIGURATION_PARAMETER 0x83
[ ... ]
#define QIK_CONFIG_DEVICE_ID 0
#define QIK_CONFIG_PWM_PARAMETER 1
and so on.
On the Raspberry Pi side, I'd prefer to use Python, so I need to get
them to look more like:
    QIK_GET_FIRMWARE_VERSION = 0x81
    QIK_GET_ERROR_BYTE = 0x82
    QIK_GET_CONFIGURATION_PARAMETER = 0x83
    [ ... ]
    QIK_CONFIG_DEVICE_ID = 0
    QIK_CONFIG_PWM_PARAMETER = 1
... and so on ... with an indent at the beginning of each line since
I want this to be part of a class.
There are 32 #defines, so of course, I didn't want to make all those
changes by hand. So I used vim. It took a little fiddling -- mostly
because I'd forgotten that vim doesn't offer + to mean "one or more
repetitions", so I had to use * instead. Here's the expression
I ended up with:
.,$s/\#define *\([A-Z0-9_]*\) *\(.*\)/    \1 = \2/
In English, you can read this as:
from the current line to the end of the file (.,$),
look for #define, some spaces, and then a pattern
consisting of only capital letters, digits and underscores ([A-Z0-9_]);
save that as expression #1 (\( \)).
Skip over any spaces, then take the rest of the line (.*)
and call it expression #2 (\( \)).
Then replace all of that with a new line consisting of 4 spaces,
expression 1, a spaced-out equals sign, and expression 2 (    \1 = \2).
Who knew that all you needed was a one-line regular expression to
translate C into Python?
(Okay, so maybe it's not quite that simple.
Too bad a regexp won't handle the logic inside the library as well,
and the pin assignments.)
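If you'd rather do the conversion from the shell than from inside vim, the
same substitution works in sed. A sketch, assuming the #defines live in a
hypothetical header file called qik.h:
sed 's/#define *\([A-Z0-9_]*\) *\(.*\)/    \1 = \2/' qik.h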
Tags: regexp, linux, programming, python, editors, vim
[
21:38 Jan 19, 2013
More linux/editors |
permalink to this entry |
]
Sat, 01 Dec 2012
Every now and then, I find myself puzzled by which Linux system
messages are going to which files in /var/log. I vaguely knew it
was all configurable via the syslog service, specifically the file
/etc/rsyslog.conf (on some systems it's still /etc/syslog.conf)
and the directory /etc/rsyslog.d.
But every time I started wading
into that 822-line rsyslog.conf(5) man page, I ended up
deciding "perhaps another time".
But when you have a problem at work and messages are being
logged to the wrong place and filling up the disk, "another time" arrives.
If you're like most people, you don't need to know esoterica like how
to use a plug-in to log in a special custom format to a named pipe;
you just want to know how to change the file so you see the messages
you want to see, or so you don't have the same messages being logged
in three different places. The man page isn't good about separating
the practical information everyone needs from the esoterica.
Strangely, there doesn't seem to be much in the way of simple rsyslog
web tutorials, either.
So now, I present to you:
rsyslog.conf(5): the "Good Parts" Version
First, remember that
you don't need to create an /etc/rsyslog.conf from scratch.
You just need to be able to read the existing one and
modify it a little. So start with the file you already have.
MODULES section
rsyslog has tons of modules available. They're listed in the man page
with no clue offered as to what they all do. Just leave that section
alone and don't worry about it.
GLOBAL DIRECTIVES
You can probably leave that section alone too.
But if you need to configure something globally -- for instance,
the user who will own the files in /var/log -- the rsyslog.conf
man page actually isn't too bad in describing the various options.
rsyslog.d
On some systems, notably Ubuntu, most of the configuration happens
in smaller files in rsyslog.d rather than in the main
rsyslog.conf. What follows applies to those files as well
as the main one.
Rules section
The rest of the file(s) comprise rules for what gets logged where.
Each rule includes a selector (what gets logged) and an action (where
it will get logged). Each selector includes a facility (what type of
message we're talking about) and a priority (how important it is).
But a selector can have several facilities/priorities, which is
what makes the file look so complicated.
Enough theory. Let's look at some practical examples.
These examples are taken from a plug computer running Debian.
auth,authpriv.* /var/log/auth.log
Messages of type auth or authpriv (the facility),
with any priority, get logged to the file /var/log/auth.log.
*.*;auth,authpriv.none -/var/log/syslog
Any message type, with any priority, gets logged to /var/log/syslog;
except for auth and authpriv messages (which is good since
they're already being logged to auth.log). The special priority
none prevents those messages from being logged even though
they would have been included in the *.*.
What's that dash in front of the filename?
It's not documented in the man page, but it turns out to mean
"Don't sync after every write to the file". Except that rsyslogd
won't sync anyway, unless you add a special directive in the Global
Directives section. So for most people, a dash makes no difference
one way or the other -- it will be ignored.
So why is it there in the file, especially since the man page doesn't
even document it? I have no idea; probably no one from the various
distros has audited these files for years.
daemon.* -/var/log/daemon.log
There are quite a few facilities in addition to auth and daemon.
Here's the list:
auth, authpriv, cron, daemon, kern, lpr, mail, news,
syslog, user, uucp and local0 through local7.
There are also a couple of deprecated ones: security (considered
to be the same as auth) and mark (for internal use only).
Selector priorities
Priorities can be:
debug, info, notice, warning, err, crit, alert, emerg;
plus the deprecated warn, error and panic (treated as
warning, err and emerg).
*.=debug;\
auth,authpriv.none;\
news.none;mail.none -/var/log/debug
Normally, specifying a priority like debug means to log anything
of that priority or higher. So specifying *.debug would
log everything. Adding an equals sign, =debug, means log only
debug messages but nothing higher. This rule also excludes
auth, authpriv, news and mail messages even if they're debug.
*.=info;*.=notice;*.=warn;\
auth,authpriv.none;\
cron,daemon.none;\
mail,news.none -/var/log/messages
Yow! The familiar /var/log/messages sure has a complicated rule.
It gets anything of priority info, notice, or warn, unless they're
facility auth, authpriv, cron, daemon, mail or news.
Note that this overlaps with some of the other rules. So you'll see
a lot of the same messages showing up in messages and
syslog (which you'll recall got *.*). That was what got me
started down this road: all that duplicated logging on our space-limited
plug computers.
daemon.*;mail.*;\
news.err;\
*.=debug;*.=info;\
*.=notice;*.=warn |/dev/xconsole
Notice that until now, the rules have logged only to filenames.
Remember all the rest of that complicated man page, explaining all
the other actions besides filenames? Here's the only one that
typically comes up: there's a device called /dev/xconsole where
you can direct errors, and then you can run the xconsole program
(you might have to specify the file, xconsole -file /dev/xconsole)
to see the output.
Applying this to specific messages
How do you figure out which messages have what facilities and priorities?
That's the tricky part.
Mostly, you can't. You can guess, or look at the source code, or
just change rules in rsyslog.conf and see what happens.
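For example (a sketch, not from any stock config file): suppose you want cron
messages in their own file and you don't want them duplicated in
/var/log/syslog. Using the same syntax as the rules above, you could add a
cron rule and tack cron.none onto the syslog selector:
cron.*                                  /var/log/cron.log
*.*;auth,authpriv.none;cron.none        -/var/log/syslog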
But with a little experimentation, this guide should help you configure
your own syslog configuration file and get rid of all those redundant
messages filling up your disk.
Good luck!
Tags: linux
[
14:34 Dec 01, 2012
More linux |
permalink to this entry |
]
Wed, 14 Nov 2012
(This is a guest post by David North.)
Debian developers tend to get overzealous in their dependency lists, probably to avoid constant headaches from fringe cases whose favorite programs fail because they also need some obscure library or package support (and yes, I'm talking to you, Ubuntu). But what if you don't want some goofy dependency (and the cascade of other crap it pulls in?)
As a small aside, aptitude/apt-get hold <pkg> is terrific if you just want to keep a package at a pre-horkage level, but for some obscure reason you can't "hold" a package that isn't installed. So that won't work as of 11/2012.
You can, however, generate an equivalent package with a higher version number and install it, which naturally blocks the offending package. Even better, the replacement package need do nothing at all other than satisfy the apt database. And best of all, the whole thing is incredibly simple.
First install the "equivs" package. This will deliver two programs:
- equivs-control
- equivs-build
Officially you should start with 'equivs-control <pkgname>' which will create a file 'pkgname' in the current directory. Inside are various fields but you only need eight and can simply delete the rest. Here's approximately what you should end up with for a fictional package "pkgname":
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: pkgname
Version: 1:42
Maintainer: Your Name <your@email.address>
Architecture: all
Description: fake pkgname to block a dumb dependency
The first three lines are just boilerplate, though you may have to increment the standards-version at some point if you reuse the file. No changes are needed now.
The pkgname does actually have to match the name of the package you want to block. The version must be higher than that of the target package. Maintainer need not be you, but it's a good idea to at least use a name you recognize as yourself. Architecture can be left as "all" unless you're doing something extra tricky. Description is not necessary but a good idea; put your notes here.
The only trick is the version. Note the 1:42 structure here. The first number is the "epoch" in debian-speak, and may or may not be used. In practice I've never seen an epoch greater than one, so I suggest using either 1 or 2 here rather than just leaving it blank. You can see the epoch number in a package when you use aptitude show <pkgname>. The version is the number immediately after the colon, and for safety's sake should be considerably larger than the version you're trying to block (to avoid future updates). I like to use "42" for obvious reasons unless the actual package version is too close. Factoid: if no "epoch" is indicated debian will assume epoch 0, which will not show up as a zero in a .deb (or in aptitude show) but rather as a blank. The version number will have no colon in this event.
Having done this, all you need do is issue the command 'equivs-build path-to-pkgname' (preferably from the same directory) and you get a fake deb to install with dpkg -i. Say goodbye to the dependency.
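Putting it all together, a whole session looks something like this ("pkgname"
is a stand-in, and the exact .deb filename equivs-build prints at the end may
vary):
$ sudo apt-get install equivs
$ equivs-control pkgname
$ editor pkgname              # fill in the eight fields described above
$ equivs-build pkgname
$ sudo dpkg -i pkgname_*_all.deb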
One more trick: once you have your file <pkgname> with the Eight
Important Fields, you can pretty much skip using equivs-control. All it
does is make the initial text file, and it will be easier to edit the
one you already have with a new package name (and rename the file at
the same time). Note, however, this handy file will not necessarily be
useful on other debian-based systems or later installs, so running
equivs-control after a big upgrade or moving to another distro is very
good practice. If you compare the files and they have the same entries,
great. If not, use the new ones.
Tags: linux, debian, ubuntu, install
[
11:50 Nov 14, 2012
More linux/install |
permalink to this entry |
]
Fri, 09 Nov 2012
I've been using my
Raspberry
Pi mostly headless -- I'm interested in using it to control hardware.
Most of my experimenting is at home, where I can plug the Pi's built-in
ethernet directly into the wired net.
But what about when I venture away from home, perhaps to a group
hacking session, or to give a talk? There's no wired net at most of
these places, and although you can buy USB wi-fi dongles, wi-fi is so
notoriously flaky that I'd never want to rely on it, especially as my
only way of talking to the Pi.
Once or twice I've carried a router along, so I could set up my own
subnet -- but that means an extra device, ten times as big as the Pi,
and needing its own power supply in a place where power plugs may be scarce.
The real solution is a crossover ethernet cable.
(My understanding is that you can't use a normal ethernet cable
between two computers, because the data send and receive lines won't end up
crossed the way they need to be.
Though I may be wrong about that -- one person on #raspberrypi reported
using a normal ethernet cable without trouble.)
Buying a crossover cable at Fry's was entertaining. After several minutes
of staring at the dozens of bins of regular ethernet cables, I finally
found the one marked crossover, and grabbed it. Immediately, a Fry's
employee who had apparently been lurking in the wings rushed over to
warn me that this wasn't a normal cable, this wasn't what I wanted,
it was a weird special cable. I thanked him and assured him that was
exactly what I'd come to buy.
Once home, with my laptop connected to wi-fi, I plugged one end into
the Pi and the other end into my laptop ... and now what? How do I
configure the network so I can talk to the Pi from the laptop, and
the Pi can gateway through the laptop to the internet?
The answer is IP masquerading. Originally I'd hoped to give the
Pi a network address on the same network (192.168.1) as the laptop.
When I use the Pi at home, it picks a network address on 192.168.1,
and it would be nice not to have to change that when I travel elsewhere.
But if that's possible, I couldn't find a way to do it.
Okay, plan B: the laptop is on 192.168.1 (or whatever network the wi-fi
happens to assign), while the Pi is on a different network, 192.168.0.
That was relatively easy, with some help from the
Masquerading
Simple Howto.
Once I got it working, I wrote a script, since there are quite a few
lines to type and I knew I wouldn't remember them all.
Of course, the script has to be run as root.
Here's the script, on github:
masq.
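The guts of it are only a few lines. Here's a minimal sketch (not the full
masq script; it assumes the laptop's wi-fi interface is wlan0, the wired port
facing the Pi is eth0, and eth0 already has a static 192.168.0.x address),
run as root:
# Let the laptop forward packets between its two interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic from the Pi's subnet out over the wi-fi interface
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT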
I had to change one thing from the howto: at the end, when it sets
up security, this line is supposed to enable incoming
connections on all interfaces except wlan0:
iptables -A INPUT -m state --state NEW -i ! wlan0 -j ACCEPT
But that gave me an error, Bad argument `wlan0'.
What worked instead was
iptables -A INPUT -m state --state NEW ! -i wlan0 -j ACCEPT
Only a tiny change: swap the order of -i and !. (I sent a correction
to the howto authors but haven't heard back yet.)
All set! It's a nice compact way to talk to your Pi anywhere.
Of course, don't forget to label your crossover cable, so you don't
accidentally try to use it as a regular ethernet cable.
Now please excuse me while I go label mine.
Update: Ed Davies has a great followup,
Crossover Cables
and Red Tape, that talks about how to set up a
subnet if you don't need the full masquerading setup, why
non-crossover cables might sometimes work, and a good convention for
labeling crossover cables: use red tape. I'm going to adopt that
convention too -- thanks, Ed!
Tags: raspberry pi, hardware, linux, networking, maker
[
16:57 Nov 09, 2012
More hardware |
permalink to this entry |
]
Sat, 08 Sep 2012
In setting up a laptop
-- Debian "Squeeze" with a Gnome 2 desktop --
for an invalid who will be doing most of her computing from bed,
we hit a snag. Two snags, actually: both related to the switching
between the trackpad and an external trackball.
Disabling and re-enabling the trackpad
First, the trackpad gets in the way when she's typing. "Disable
touchpad while typing" was already set, but it doesn't actually
work -- the mouse was always moving when her palm brushed against it.
On her desktop computer, she's always used a Logitech trackball --
never really got the hang of mice, but that trackball always worked
well for her. And fortunately, unlike a mouse, a trackball works just
fine from bed.
Once the trackball is working, there's really no need to have the
trackpad enabled. So why not just turn it off when the external trackball
is there? I thought I'd once seen a preference like that ... but
it was nowhere to be found in the Gnome 2 desktop.
It turns out the easiest way to disable a trackpad is this:
synclient TouchpadOff=1
Using 0 instead of 1 turns it back on.
So we gave her shell aliases for both these commands.
A web search will show various approaches to writing udev rules to run
something like that automatically, but she felt it was easy enough to
type a command when she switches modes, so we're going with that for now.
Emulate the middle button on an external mouse or trackball
We thought we were done -- until we tried to paste that alias into her
shell and discovered that 2-button paste doesn't work for
external mice in Squeeze.
Usually, when you have a mouse-like device that has only two buttons,
you can click the left and right buttons together to emulate a middle
click. She'd been using that on her old Ubuntu Lucid install, and it
works on pretty much every trackpad I've used.
But it didn't work with the USB trackball on Squeeze.
Gnome used to have a preference for middle button emulation,
but it's gone now.
There's a program you can install called
gpointing-device-settings that offers a 2-button emulation setting
... but it doesn't save the settings anywhere.
And since it's a GUI program you can't make it part of your login or
boot process -- you'd have to go through and click to set it every time.
Not happening.
2-button emulation is an X setting -- one of the settings that used to be
specified in Xorg.conf, and now wanders around to different places
on every distro. A little web searching didn't turn up a likely
candidate for Squeeze, but it did turn up a way that's probably
more distro independent: the xinput command.
After installing xinput, you need the X ID of the external mouse or trackball.
xinput list
should show you something like this (plus more stuff for keyboards
and possibly other devices):
$ xinput list
Virtual core pointer id=2 [master pointer (3)]
Virtual core XTEST pointer id=4 [slave pointer (2)]
SynPS/2 Synaptics TouchPad id=10 [slave pointer (2)]
Kensington Kensington USB/PS2 Orbit id=13 [slave pointer (2)]
Once you have the id of the external device, list its properties:
$ xinput list-props 13
Device 'Kensington Kensington USB/PS2 Orbit':
Device Enabled (132): 1
... long list of other properties ...
Evdev Middle Button Emulation (303): 0
Evdev Middle Button Timeout (304): 50
... more properties ...
You can see that middle button emulation is disabled (0).
So turn it on:
$ xinput --set-prop 13 "Evdev Middle Button Emulation" 1
Click both buttons together, and sure enough -- a middle button paste!
I added that to the alias that turns the trackpad off -- though of course,
it could also be added to a udev rule that fires automatically
when the mouse is plugged in.
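Since both settings are just shell commands, they can go into the same
aliases. A sketch of the pair (13 is whatever id xinput list reported for
the trackball on this machine):
alias trackball='synclient TouchpadOff=1; xinput --set-prop 13 "Evdev Middle Button Emulation" 1'
alias trackpad='synclient TouchpadOff=0'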
Tags: linux, laptop
[
15:34 Sep 08, 2012
More linux/install |
permalink to this entry |
]
Wed, 15 Aug 2012
The Linux file listing program, ls, has been frustrating me for some
time with its ever-changing behavior on symbolic links.
For instance, suppose I have a symlink named Maps that points to
a directory on another disk called /data/Maps. If I say
ls ~/Maps, I might want to see where the link points:
lrwxrwxrwx 1 akkana users 12 Jun 17 2009 Maps -> /data/Maps/
or I might equally want to see the contents of the /data/Maps directory.
Many years ago, the Unix ls program magically seemed to infer when I
wanted to see the link and what it points to, versus when I wanted to
see the contents of the directory the link points to. I'm not even
sure any more what the rule was; just that I was always pleasantly
surprised that it did what I wanted. Now, in modern Linux, it usually
manages to do the opposite of what I want. But the behavior has
changed several times until, I confess, I'm no longer even sure of
what I want it to do.
So if I'm not sure whether I usually want it to show the symlink or
follow it ... why not make it do both?
There's no ls flag that will do that. But that's okay -- I can make
a shell function to do what I want.
Current ls flags
First let's review man ls
to see the relevant flags
we do have, searching for the string "deref".
I find three different flags to tell ls to dereference a link:
-H (dereference any link explicitly mentioned on the command line --
even though ls does that by default);
--dereference-command-line-symlink-to-dir (do the same if it's a
directory -- even though -H already does that, and even though ls
without any flags also already does that); and -L (dereference links
even if they aren't mentioned on the command line). The GNU ls
maintainers are clearly enamored with dereferencing symlinks.
In contrast, there's one flag, -d, that says not to dereference
links (when used in combination with -l).
And -d isn't useful in general (you can't make it part of a
normal ls alias) because -d also has another, more primary meaning:
it also prevents you from listing the contents of normal,
non-symlinked directories.
Solution: a shell function
Let's move on to the problem of how to show both the link information
and the dereferenced file.
Since there's no ls flag to do it, I'll have to do it by looping
over the arguments of my shell function. In a shell test, you can
use -h to tell if a file is a symlink. So my first approach was to
call ls -ld
on all the symlinks to show what they point to:
ll() {
/bin/ls -laFH $*
for f in $*; do
if [[ -h $f ]]; then
echo -n Symlink:
/bin/ls -ld $f
fi
done
}
Terminally slashed
That worked on a few simple tests. But when I tried to use it for real
I hit another snag: terminal slashes.
In real life, I normally run this with autocompletion. I don't
type ll ~/Maps -- I'm more likely to type something like ll Ma<tab>
-- the tab looks for files beginning
with Ma and obligingly completes it as Maps/ -- note the
slash at the end.
And, well, it turns out /bin/ls -ld Maps/
no longer shows
the symlink, but dereferences it instead -- yes, never mind that the
man page says -d won't dereference symlinks. As I said, those ls
maintainers really love dereferencing.
Okay, so I want to not dereference. Since there's no ls flag
that means really don't dereference, I mean it -- my little zsh
function needs to find a way of stripping any terminal slash from each
directory name. Of course, I could do it with sed:
f=`echo $f | sed 's/\/$//'`
and that works fine, but ... ick. Surely zsh has a better way?
In fact, there's a better way that even works in bash (thanks
to zsh wizard Mikachu for this gem):
f=${f%/}
That "remove terminal slash" trick has already come in handy in a
couple of other shell functions I use -- definitely a useful trick
if you use autocompletion a lot.
Making the link line more readable
But wait: one more tweak, as long as I'm tweaking. That long ls -ld line,
lrwxrwxrwx 1 akkana users 12 Jun 17 2009 Maps -> /data/Maps/
is way too long and full of things I don't really care about
(the permissions, ownership and last-modified date on a symlink aren't
very interesting). I really only want the last three words,
Maps -> /data/Maps/
Of course I could use something like awk to get that. But zsh has
everything -- I bet it has a clever way to separate words.
And indeed it does: arrays. The documentation isn't very clear and
not all the array functions worked as the docs implied, but
here's what ended up working: you can set an array variable by
using parentheses after the equals sign in a normal variable-setting
statement, and after that, you can refer to it using square brackets.
You can even use negative indices, like in python, to count back
from the end of an array. That made it easy to do what I wanted:
line=( $(/bin/ls -ld $f ) )
echo -E Symlink: $line[-3,-1]
Hooray zsh! Though it turned out that -3 didn't work for directories
with spaces in the name, so I had to use [9, -1] instead.
The echo -E
is to prevent strange things happening
if there are things like backslashes in the filename.
The completed shell function
I moved the symlink-showing function into a separate function,
so I can call it from several different ls aliases, and here's
the final result:
show_symlinks() {
for f in $*; do
# Remove terminal slash.
f=${f%/}
if [[ -h $f ]]; then
line=( $(/bin/ls -ld $f ) )
echo -E Symlink: $line[9,-1]
fi
done
}
ll() {
/bin/ls -laFH $*
show_symlinks $*
}
Bash doesn't have arrays like zsh, so replace those two lines with
echo -n 'Symlink: '
/bin/ls -ld $f | cut -d ' ' -f 10-
and the rest of the function should work just fine.
Tags: shell, cmdline, linux, zsh
[
20:22 Aug 15, 2012
More linux/cmdline |
permalink to this entry |
]
Tue, 31 Jul 2012
Raspberry Pi, the tiny, cheap,
low-power Linux computer, dropped their order restrictions a few weeks
ago, and it finally became possible for anyone to order one. I
immediately (well, a day later, since the two sites that sell them
were slashdotted with people trying to order) put in an order with
Newark/element14. They said they were backordered six weeks, but
I wasn't in a hurry -- I just wanted to get in the queue.
Imagine my surprise when half a week later I got a notice that my
Pi had shipped! I got it yesterday. Thanks, Element14!
The Pi comes with no OS preloaded -- it boots off the SD card.
The Raspberry Pi site has a download page where you can get an image of
Debian Wheezy (their recommendation), Arch, or several other Linux distros.
I downloaded their latest Wheezy image and unzipped it.
But instructions on what to do from there are scanty, and tend to be
heavy on "click on this, then drag to here" directives that
make no sense if you're not using whatever desktop they assume you have.
So here's what ended up working.
Writing the SD card with dd
First, make sure you downloaded the image correctly:
sha1sum 2012-07-15-wheezy-raspbian.zip
and compare the
sum it prints out with the one on the download page.
Then get an appropriate SD card. The image is sized for a 2G card, so
that's what I used, but you can use a larger card if needed ...
you'll only get 2G initially but you can resize the partition later.
Plug the SD card into a reader on your regular Linux desktop/laptop
machine, and figure out which device it is:
I used cat /proc/partitions.
Then, assuming the SD card is in /dev/sdb (make sure of this! you don't
want to destroy your system disk by writing the wrong place!)
dd bs=1M if=2012-07-15-wheezy-raspbian.img of=/dev/sdb
sync
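By the way, dd gives no progress report while it's working. If you get
impatient, GNU dd prints its statistics so far when it receives a USR1
signal, so from another terminal (as root, if dd is running as root) you
can do something like:
kill -USR1 $(pidof dd)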
Wait a while, make sure all the lights are off in your SD drive,
then remove the SD card from the reader. (Yes, even if you're about
to mount it to change something.)
Headless Raspberry Pi
Now you have an SD card that will probably boot your Pi.
If you want to run X on it and see a desktop, you'll need a USB keyboard
and mouse, some sort of monitor, and the appropriate cable.
That stopped me.
The Pi needs either an HDMI to DVI cable -- which I don't have, though
I will buy one as soon as I get a chance -- or an RCA composite video cable.
I think our old CRT TV can take composite video, but what I see
on the net suggests this is a poor solution for the Pi since
the resolution and image quality aren't really adequate.
But in any case, one of my main intended uses for the Pi involves using
it headless, as a robotics controller, in connection with an
Arduino or other
hardware for analog device control.
So the Pi needs to be able to boot without a monitor, taking commands
via its built-in ethernet interface, probably using ssh.
That means making some changes to the SD card.
Reinsert the card. (Why not just leave it in place? Because the image
you just wrote changed the partition table, and your computer won't
see the change unless you remove and reinsert the card.)
The card now has two partitions on it -- you can check that via
/proc/partitions. The first is the /boot partition,
where you shouldn't need to change anything. The second is the root
filesystem. Mount the second partition if your system didn't do that
automatically:
mount /dev/sdb2 /mnt
Now specify a static IP address, so you'll always know how to
get to your Pi. Edit /mnt/etc/network/interfaces and change the
iface eth0 inet dhcp
line to something like this,
using numbers that will work for your local network:
iface eth0 inet static
address 192.168.1.50
netmask 255.255.255.0
gateway 192.168.1.1
Now, if you google for other people who want to ssh in to their
Raspberry Pis or run them headless, you will find approximately
1,532,776 pages telling you that to enable sshd you'll need to
rename a file named boot_enable_ssh.rc somewhere on the /boot partition
to boot.rc.
Disregard this. There is no such file on the current
official wheezy pi images, and you will go crazy looking for it.
Happily, it turns out that the current images have the ssh server
enabled by default. You can verify that by looking at /mnt/etc/init.d/ssh
and seeing that it starts sshd. So after setting a static IP, you're
ready to umount /mnt.
You're done! Remove the card, stick it in the Raspberry Pi,
plug in an ethernet cable, then power it with a micro USB cable.
Wait a minute or two (it's not the world's fastest booter),
and you should be able to ssh pi@192.168.1.50 or whatever
address you gave it. Log in with the password specified on the
Downloads page where you got the OS image ... and you're good to go.
Fun! Now I'm off to find an HDMI-DVI cable.
Tags: raspberry pi, hardware, linux, maker
[
21:26 Jul 31, 2012
More hardware |
permalink to this entry |
]
Sat, 16 Jun 2012
I ran ubuntu-bug
to report a bug. After collecting some
dependency info, the program asked me if I wanted to load the bug
report page in a browser. Of course I did -- but it launched chromium,
where I don't have any of my launchpad info loaded, rather than firefox.
So how do you change the default browser in Ubuntu?
The program that controls that, and lots of similar defaults,
is update-alternatives.
update-alternatives with no arguments gives a long usage statement that
isn't too clear. You need to know the various category names ("groups")
before you can do much. Here's how to get a list of all the groups:
update-alternatives --get-selections
But that's still a long list. To find the entries that might be pointing
to chrome or chromium, I narrowed it down:
update-alternatives --get-selections | grep chrom
That showed the two culprits:
x-www-browser and gnome-www-browser both pointed
to chromium. So let's try to change that to firefox:
$ update-alternatives --set gnome-www-browser /usr/local/firefox11/firefox
update-alternatives: error: alternative /usr/local/firefox11/firefox for gnome-www-browser not registered, not setting.
Whoops! The problem here is that I'm running a firefox installed from
Mozilla.org, not the one that comes with Ubuntu.
What if I want to make that my default browser?
What does it mean for an application to be "registered"?
Well, no one seems to have documented that.
I found it discussed briefly here:
What is Ubuntu's Definition of a “Registered Application”?,
but the only solutions seemed to involve hand-editing desktop files to
add icons, and there's no easy way to figure out how much of
the desktop file it needs. That sounded way too complicated.
Thanks to Lyz and Maco for the real answer: skip update-alternatives
entirely, and change the symbolic links in /etc/alternatives by hand.
$ sudo rm /etc/alternatives/gnome-www-browser
$ sudo ln -s /usr/local/firefox11/firefox /etc/alternatives/gnome-www-browser
$ sudo rm /etc/alternatives/x-www-browser
$ sudo ln -s /usr/local/firefox11/firefox /etc/alternatives/x-www-browser
That was much simpler, and worked fine: now applications that need to
call up a browser will use firefox instead of chromium.
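(For the record, the other way around the "not registered" error is
presumably to register the locally-installed firefox as an alternative
yourself with update-alternatives --install, giving it a high priority --
something like this, untested:
$ sudo update-alternatives --install /usr/bin/x-www-browser x-www-browser /usr/local/firefox11/firefox 200
$ sudo update-alternatives --set x-www-browser /usr/local/firefox11/firefox
But editing the symlinks by hand was good enough for me.)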
Tags: ubuntu, debian, linux
[
17:04 Jun 16, 2012
More linux |
permalink to this entry |
]
Wed, 30 May 2012
In a previous article I wrote about
how
to use stdeb to turn a Python script
into source and binary Debian/Ubuntu packages.
You can distribute a .deb file that people can download and install;
but it's a lot easier for people to install if you set up a repository,
so they can get automatic updates from you.
If you're targeting Ubuntu, the best way to do that is to set up a
Launchpad Personal
Package Archive, or PPA.
Create your PPA
First, create your PPA.
If you don't have a Launchpad account yet,
create one, add a GPG key, and sign the Code of Conduct.
Then log in to your account and click on Create a new PPA.
You'll have to pick a name and a display name for your PPA.
The default is "ppa", and many people leave personal PPAs as that.
You might want to give it a display name of yourname-ppa
or something similar if it's for a collection of stuff;
or, if you're only going to use it for software related to one program or
package, name it accordingly.
Ubuntu requires nonstandard paths
When you're creating your package with stdeb,
if you're ultimately targeting a PPA, you'll only need the source .dsc
package, not the binary deb.
But as you'll see, you'll need to rebuild it to make Launchpad happy.
If you're intending to go through the
Developer.ubuntu.com
process, there are specific requirements for version
numbering and tarball naming -- see "Packaging" in the
App
Review Board Guidelines.
Your app will also need to install to unusual locations --
in particular, any files it installs, including the script itself,
need to be in
/opt/extras.ubuntu.com/<packagename> instead of a more
standard location.
How the user is supposed to run these apps (run a script to add
each of /opt/extras.ubuntu.com/* to your path?) is not clear to me;
I'm not sure this app review thing has been fully thought out.
In any case, you may need to massage your setup.py accordingly,
and keep a separate version around for when you're creating the
Ubuntu version of your app.
There are also apparently some
problems
loading translation files for an app in /opt/extras.ubuntu.com
which may require some changes to your Python code.
Prepare and sign your package
Okay, now comes the silly part. You know that source .dsc package
you just made? Now you have to unpack it and "build" it before you
can upload it. That's partly because you have to sign it
with your GPG key -- stdeb apparently can't do the signing step.
Normally, you'd sign a package with
debsign deb_dist/packagename_version.changes
(then type your GPG passphrase when prompted).
Unfortunately, that sort of signing doesn't work here.
If you used stdeb's bdist_deb to generate both binary and
source packages, the .changes file it generates will contain
both source and binary and Launchpad will reject it.
If you used sdist_dsc to generate only the source package,
then you don't have a .changes file to sign and submit to Launchpad.
So here's how you can make a signed, source-only .changes file
Launchpad will accept.
Since this will extract all your files again, I suggest doing this in
a temporary directory to make it easier to clean up afterward:
$ mkdir tmp
$ cd tmp
$ dpkg-source -x ../deb_dist/packagename_version.dsc
$ cd packagename_version
Now is a good time to take a look at the
deb_dist/packagename_version/debian/changelog that stdeb created,
and make sure it got the right version and OS codename for the
Ubuntu release you're targeting -- oneiric, precise, quantal or whatever.
stdeb's default is "unstable" (Debian) so you'll probably need to change it.
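The part to change is the first line of the changelog, which names the
target release; with a hypothetical package name and version it should look
something like:
packagename (1.0-1) precise; urgency=low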
You can cross-check this information in the
packagename_version_source.changes file that debuild generates in the next
step, which is the file you'll actually be uploading to the PPA.
Finally, build and sign your source package:
$ debuild -S -sa
[type your GPG passphrase when prompted, twice]
This leaves a signed, source-only packagename_version_source.changes file
in the directory above (tmp/).
Upload the package
Finally, it's time to upload the package:
$ dput ppa:your-ppa-name ../packagename_version_source.changes
This will give you some output and eventually probably tell you
Successfully uploaded packages.
It's lying -- it may have failed. Watch your inbox
for messages. If Launchpad rejects your changes, you should get an
email fairly quickly.
If Launchpad accepts the changes, you'll get an Accepted email.
Great! But don't celebrate quite yet. Launchpad still has to build
your package before it can be installed. If you try to add your PPA
now, you'll get a 404.
Wait for Launchpad to build
You might as well add your repository now so you can install from it
once it's ready:
$ sudo add-apt-repository ppa:your-ppa-name
But don't apt-get update
yet!
If you try that too soon, you'll get a 404, or an Ign meaning
that the repository exists but there are no packages in it for
your architecture.
It might be as long as a few hours before Launchpad builds your package.
To keep track of this, go to your Launchpad PPA page (something like
https://launchpad.net/~yourname/+archive/ppa) and look under
PPA Statistics for something like "1 package waiting to build".
Click on that link, then in the page that comes up, click on the link
like i386 build of pkgname version in ubuntu precise RELEASE.
That should give you a time estimate.
Wondering why it's being built for i386 when Python should be
arch independent? Worry not -- that's just the architecture that's
doing the building. Once it's built, your package should install anywhere.
Once the Launchpad build page finally says the package is built,
it's finally safe to run the usual apt-get update
.
Add your key
But when you apt-get update you may get an error like this:
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 16126D3A3E5C1192
Obviously you have your own public key, so what's up?
You have to import the key from Ubuntu's keyserver,
and then export it into apt-key, before apt can use it --
even if it's your own key.
For this, you need the last 8 digits given in the NO PUBKEY message.
Take those 8 digits and run these two commands:
gpg --keyserver keyserver.ubuntu.com --recv 3E5C1192
gpg --export --armor 3E5C1192 | sudo apt-key add -
I'm told that apt-add-repository is supposed to add the key automatically,
but it didn't for me. Maybe it will if you wait until after your package
is built before calling apt-add-repository.
Now if you apt-get update
, you should see no errors.
Finally, you can apt-get install pkgname
.
Congratulations! You have a working PPA package.
Tags: ubuntu, linux, programming
[
13:34 May 30, 2012
More programming |
permalink to this entry |
]
Sat, 26 May 2012
I write a lot of little Python scripts. And I use Ubuntu and Debian.
So why aren't any of my scripts packaged for those distros?
Because Debian packaging is absurdly hard, and there's very little
documentation on how to do it. In particular, there's no help on how
to take something small, like a Python script,
and turn it into a package someone else could install on a Debian
system. It's pretty crazy, since
RPM
packaging of Python scripts is so easy.
Recently at the Ubuntu Developers' Summit, Asheesh of OpenHatch pointed me toward
a Python package called stdeb that simplifies a lot of the steps
and makes Python packaging fairly straightforward.
You'll need a setup.py file to describe your Python script, and
you'll probably want a .desktop file and an icon.
If you haven't done that before, see my article on
Packaging Python for MeeGo
for some hints.
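For a single script, setup.py can be tiny. Here's a minimal sketch (every
name in it is a placeholder; a real one would also list the .desktop file
and icon, e.g. via data_files):
from distutils.core import setup

setup(name='myscript',
      version='0.1',
      description='One-line description of the script',
      author='Your Name',
      author_email='you@example.com',
      url='http://example.com/myscript',
      license='GPLv2+',
      scripts=['myscript'],   # the script to install into the user's PATH
      )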
Then install python-stdeb.
The package has some requirements that aren't listed
as dependencies, so you'll need to install:
apt-get install python-stdeb fakeroot python-all
(I have no idea why it needs python-all, which installs only a
directory
/usr/share/doc/python-all with some policy
documentation files, but if you don't install it, stdeb will fail later.)
Now create a config file for stdeb to tell it what Debian/Ubuntu version
you're going to be targeting, if it's anything other than Debian unstable
(stdeb's default).
Unfortunately, there seems to be no way to pass this on the command
line rather than in a config file. So if you want to make packages for
several distros, you'll have to edit the config
file for every distro you want to support.
Here's what I'm using for Ubuntu 12.04 Precise Pangolin:
[DEFAULT]
Suite: precise
Now you're ready to run stdeb. I know of two ways to run it.
You can generate both source and binary packages, like this:
python setup.py --command-packages=stdeb.command bdist_deb
Or you can generate source packages only, like this:
python setup.py --command-packages=stdeb.command sdist_dsc
Either syntax creates a directory called deb_dist. It contains a lot of
files including a source .dsc, several tarballs, a copy of your source
directory, and (if you used bdist_deb) a binary .deb package.
If you used the bdist_deb form, don't be put off that
it concludes with a message:
dpkg-buildpackage: binary only upload (no source included)
It's fibbing: the source .dsc is there as well as the binary .deb.
I presume it prints the warning because it creates them as
separate steps, and the binary is the last step.
Now you can use dpkg -i to install your binary deb, or you can use
the source dsc for various purposes, like creating a repository or
a Launchpad PPA. But those involve a lot more steps -- so I'll
cover that in a separate article about creating PPAs.
Update: you can find that article here:
Creating
packages for a Launchpad PPA.
Tags: debian, ubuntu, linux, programming, python
[
11:44 May 26, 2012
More programming |
permalink to this entry |
]
Wed, 18 Apr 2012
My new toy: a Samsung Galaxy Player 5.0!
So far I love it. It's everything my old
Archos 5
wanted to be when it grew up, except the Archos never grew up.
It's the same size, a little lighter weight, reliable hardware
(no random reboots), great battery life, fast GPS, modern Android 2.3,
and the camera is even not too bad (though it certainly wouldn't tempt
me away from my Canon).
For the record, Dave got a Galaxy Player 4.0, and it's very nifty too,
and choosing between them was a tough choice -- the 4-inch is light and
so easy to hold, and it uses replaceable batteries, while the 5-inch's
screen is better for reading and maps.
USB-storage devices don't register
I love the Galaxy ... but there's one thing that bugs me about it.
When I plug it in to Linux, dmesg reports two new storage devices, one for
main storage and one for the SD card. Just like most Android devices, so far.
The difference is that these Samsung devices aren't fully there.
They don't show up in /proc/partitions or in /dev/disk/by-uuid,
dmesg doesn't show sizes for them, and, most important,
they can't be mounted by UUID from an fstab entry, like
UUID=62B0-C667 /droidsd vfat user,noauto,exec,fmask=111,shortname=lower 0 0
That meant I couldn't mount it as myself -- I had to become root,
figure out where it happened to show up this time (/dev/sdf or
wherever, depending on what else might be plugged in),
mount it, then do all my file transfers as root.
I found that if I mounted it explicitly using the device pathname --
mount /dev/sdf /mnt
-- then subsequently the device shows
up normally, even after I unmount it. So I could check dmesg to find
the device name, mount it as root, unmount as root, then mount it
again as myself using my fstab entry. What a pain!
A kernel expert I asked thought it looked like the Samsung is pretending
to be a removable device, but only "plugging in" when the system
actually tries to access it. Annoying. So how do you get Linux to
"access" it?
Udev: still an exercise in frustration
The obvious solution is a udev rule. Some scrutiny of
/lib/udev/rules.d/60-persistent-storage.rules found some
rules that did this intriguing thing:
IMPORT{program}="ata_id --export $tempnode"
Naturally, this mysterious ata_id is undocumented.
It's hidden in /lib/udev/ata_id, and I found
this tiny ata_id
man page online since there's none available in Ubuntu.
Running ata_id /dev/sdf
seemed to do what I needed:
it made the device show up in /proc/partitions and
/dev/disk/by-uuid, and after that, I could mount it without
being root.
I created a file named /etc/udev/rules.d/59-galaxy.rules, with
the rule:
KERNEL=="sd[b-g]", SUBSYSTEMS=="usb", ATTRS{idVendor}=="04e8", SYMLINK+="galaxy-%k-%n", IMPORT{program}="ata_id --export $tempnode"
When I tested it with udevadm test /block/sdf
, not only did
the rule trigger correctly, but it actually ran ata_id and the device
became visible -- even though udevadm test states clearly
no programs will be run. How do I love udev -- let me count the ways.
But a reboot revealed that udev was not actually running the rule when
I actually plugged the Galaxy in -- the devices did not become visible.
Did I mention how much I love udev?
Much simpler: a shell alias
But one thing I'd noticed in all this: side by side with
/dev/disk/by-uuid is a directory called /dev/disk/by-id.
And the Samsung devices did show up there, with names like
usb-Android_UMS_Composite_c197022a2b41c1ae-0:0.
Faced with the prospect of spending the rest of the day trying random
udev rules, rebooting each time since that's the only way to test udev,
I decided to cheat and find another way. I made a shell alias:
alias galaxy='sudo sh -c "for d in /dev/disk/by-id/usb-Android*; do /lib/udev/ata_id --export \$d; done"'
Typing galaxy
will now cause the Samsung to register
both devices; and from then on I can mount and unmount them without
needing root, using my normal fstab entries.
Update: This works for the Nook's main storage, too -- just add
/dev/disk/by-id/usb-B_N_Ebook_Disk* to the list -- but it
doesn't work reliably for the Nook's SD card. The SD card does show
up in /dev/disk/by-id along with main storage; but running ata_id
on it doesn't make its UUID appear. I may just change my fstab entry
to refer to the /dev/disk/by-id device directly.
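Such an entry would look just like the UUID line above, only with the by-id
path as the device. Something like this (untested; substitute whatever name
shows up in your own /dev/disk/by-id):
/dev/disk/by-id/usb-Android_UMS_Composite_c197022a2b41c1ae-0:0 /droidsd vfat user,noauto,exec,fmask=111,shortname=lower 0 0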
Tags: android, linux, udev
[
13:43 Apr 18, 2012
More linux/kernel |
permalink to this entry |
]
Sat, 24 Mar 2012
A thread on the Ubuntu-devel-discuss mailing list last month asked
about how
to find out what processes are making outgoing network connections
on a Linux machine. It referenced Ubuntu
bug
820895: Log File Viewer does not log "Process Name", which is
specific to Ubuntu's iptables logging of apps that are already blocked
in iptables ... but the question goes deeper.
Several years ago, my job required me to use a program -- never mind
which one -- from a prominent closed-source company. This program was
doing various annoying things in addition to its primary task --
operations that got around the window manager and left artifacts
all over my screen, operations that potentially opened files other
than the ones I asked it to open -- but in addition, I noticed that
when I ran the program, the lights on the DSL modem started going crazy.
It looked like the program was making network connections, when it had
no reason to do that. Was it really doing that?
Unfortunately, at the time I couldn't find any Linux command that would
tell me the answer. As mentioned in the above Ubuntu thread, there are
programs for Mac and even Windows to tell you this sort of information,
but there's no obvious way to find out on Linux.
The discussion ensuing in the ubuntu-devel-discuss thread tossed
around suggestions like apparmor and selinux -- massive, complex ways
of putting up fortifications your whole system. But nobody seemed to
have a simple answer to how to find information about what apps
are making network connections.
Well, it turns out there are a couple of simple ways to get that list.
First, you can use ss:
$ ss -tp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 ::1:58466 ::1:ircd users:(("xchat",1063,43))
ESTAB 0 0 192.168.1.6:57526 140.211.166.64:ircd users:(("xchat",1063,36))
ESTAB 0 0 ::1:ircd ::1:58466 users:(("bitlbee",1076,10))
ESTAB 0 0 192.168.1.6:54253 94.125.182.252:ircd users:(("xchat",1063,24))
ESTAB 0 0 192.168.1.6:52167 184.72.217.144:https
users:(("firefox-bin",1097,47))
Update:
you might also want to add listening connections where programs
are listening for incoming connections: ss -tpla
Though this may be less urgent if you have a firewall in place.
-t shows only TCP connections (so you won't see all the interprocess
communication among programs running on your machine). -p prints the
process associated with each connection.
ss can do some other useful things, too, like show all the programs
connected to your X server right now, or show all your ssh connections.
See man ss
for examples.
Or you can use netstat:
$ netstat -A inet -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 imbrium.timochari:51800 linuxchix.osuosl.o:ircd ESTABLISHED 1063/xchat
tcp 0 0 imbrium.timochari:59011 ec2-107-21-74-122.:ircd ESTABLISHED 1063/xchat
tcp 0 0 imbrium.timochari:54253 adams.freenode.net:ircd ESTABLISHED 1063/xchat
tcp 0 0 imbrium.timochari:58158 s3-1-w.amazonaws.:https ESTABLISHED
1097/firefox-bin
In both cases, the input is a bit crowded and hard to read. If all you
want is a list of processes making connections, that's easy enough to do
with the usual Unix utilities like grep and sed:
$ ss -tp | grep -v Recv-Q | sed -e 's/.*users:(("//' -e 's/".*$//' | sort | uniq
$ netstat -A inet -p | grep '^tcp' | grep '/' | sed 's_.*/__' | sort | uniq
Finally, you can keep an eye on what's going on by using watch to run
one of these commands repeatedly:
watch ss -tp
Using watch with one of the pipelines to print only process names is
possible, but harder since you have to escape a lot of quotation marks.
If you want to do that, I recommend writing a script.
And back to the concerns expressed on the Ubuntu thread,
you could also write a script to keep logs of which processes made
connections over the course of a day. That's definitely a tool I'll
keep in my arsenal.
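Such a script doesn't need to be fancy. A sketch, reusing the ss pipeline
above (the log file and one-minute interval are arbitrary choices):
#!/bin/sh
# Append a timestamped list of processes with open TCP connections,
# once a minute.
LOG=$HOME/connections.log
while true; do
    date >> $LOG
    ss -tp | grep -v Recv-Q | sed -e 's/.*users:(("//' -e 's/".*$//' | sort | uniq >> $LOG
    sleep 60
done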
Tags: linux, networking, cmdline
[
12:28 Mar 24, 2012
More linux |
permalink to this entry |
]
Tue, 03 Jan 2012
Like most Linux users, I use virtual desktops. Normally my browser
window is on a desktop of its own.
Naturally, it often happens that I encounter a link I'd like to visit
while I'm on a desktop where the browser isn't visible. From some apps,
I can click on the link and have it show up. But sometimes, the link is
just text, and I have to select it, change to the browser desktop,
paste the link into firefox, then change desktops again to do something
else while the link loads.
So I set up a way to load whatever's in the X selection in firefox no
matter what desktop I'm on.
In most browsers, including firefox, you can tell your existing
browser window to open a new link from the command line:
firefox http://example.com/
opens that link in your
existing browser window if you already have one up, rather than
starting another browser. So the trick is to get the text you've selected.
At first, I used a program called xclip. You can run this command:
firefox `xclip -o`
to open the selection. That worked
okay at first -- until I hit my first URL in weechat that was so long
that it was wrapped to the next line. It turns out xclip does odd things
with multi-line output; depending on whether it thinks the output is
a terminal or not, it may replace the newline with a space, or delete
whatever follows the newline. In any case, I couldn't find a way to
make it work reliably when pasted into firefox.
After futzing with xclip for a little too long, trying to reverse-engineer
its undocumented newline behavior, I decided it would be easier just to
write my own X clipboard app in Python. I already knew how to do that,
and it's super easy once you know the trick:
import gtk
primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if primary.wait_is_text_available() :
print primary.wait_for_text()
That just prints it directly, including any newlines or spaces.
But as long as I was writing my own app, why not handle that too?
It's not entirely necessary on Firefox: on Linux, Firefox has some
special code to deal with pasting multi-line URLs, so you can copy
a URL that spans multiple lines, middleclick in the content area and
things will work. On other platforms, that's disabled, and some Linux
distros disable it as well; you can enable it by going to
about:config and searching for single, then setting the preference
editor.singlelinepaste.pasteNewlines to 2.
However, it was easy enough to make my Python clipboard app do the
right thing so it would work in any browser. I used Python's re
(regular expressions) module:
#!/usr/bin/env python
import sys
import gtk
import re
primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if not primary.wait_is_text_available() :
sys.exit(0)
s = primary.wait_for_text()
# eliminate newlines, and any spaces immediately following a newline:
print re.sub(r'[\r\n]+ *', '', s)
That seemed to work fine, even on long URLs pasted from weechat
with newlines and spaces, ones that looked like
http://example.com/long-
url.html
All that was left was binding it so I could access it from anywhere.
Of course, that varies depending on your desktop/window manager.
In Openbox, I added two items to my desktop menu in menu.xml:
<item label="open selection in Firefox">
<action name="Execute"><execute>sh -c 'firefox `xclip -o`'</execute></action>
</item>
<item label="open selection in new tab">
<action name="Execute"><execute>sh -c 'firefox -new-tab `xclip -o`'</execute></action>
</item>
I also added some code in rc.xml inside
<context name="Desktop">, so I can middle-click
or control-middle-click on the desktop to open a link in the browser:
<mousebind button="Middle" action="Press">
<action name="Execute">
<execute>sh -c 'firefox `pyclip`'</execute>
</action>
</mousebind>
<mousebind button="C-Middle" action="Press">
<action name="Execute">
<execute>sh -c 'firefox -new-tab `pyclip`'</execute>
</action>
</mousebind>
I set this up maybe two hours ago and I've probably used it ten or
fifteen times already. This is something I should have done long ago!
Tags: tech, firefox, linux, cmdline
[
22:37 Jan 03, 2012
More linux |
permalink to this entry |
]
Tue, 13 Dec 2011
One of the distros I'm trying on my Dell Latitude 2120 laptop is Arch Linux.
I like a lot of things about Arch; I had to stop using it on my previous
laptop because something broke in the wi-fi drivers, but I always regretted
giving it up. And it seems to work quite well on the Dell, with one
exception: the system beep (which other distros don't support at all)
was ear-shatteringly loud, far too loud to consider being able to use
this laptop in a public space. Clearly that needed to be fixed.
The usual approach to system beeps, unloading or blacklisting the
pcspkr module, had no effect. xset b off
turned the beep
off in X; I could also set its pitch and duration to change it to a
nice quiet click, though I wasn't able to change the volume that way.
I actually do like having a system beep, as long as it's fairly quiet
and won't disturb people nearby. Unfortunately, xset b only affects
the bell while in X; it didn't have any effect on the deafening sound
Arch gave upon shutdown or reboot.
It turns out that on some laptops, including this Dell, the system beep
goes not through the old-style pcspkr driver, but through the
normal sound card. And the sound card has a separate channel for the
system beep, so even if you have your volume turned down, the beep
may still be at 100%. All I needed to do was run alsamixer
and find out what the channel was called: "Beep".
Given that, I could use the amixer program to ensure the beep volume
will be sane when I log in. I added the following to .zlogin
(for zsh; obviously, adjust for your own shell):
amixer -q -c 0 set Beep 5
That gave me a nice quiet beep. If I need to turn it off completely,
amixer can do that too: amixer -q -c 0 set Beep mute
(curiously, amixer -q -c 0 set Beep 0
doesn't actually
set the volume to zero, just sets it very low).
That volume setting applies to the shutdown beep, too, fortunately.
Though what I'd really like is to have quiet beeps while I'm running, but no
shutdown beep at all. I don't understand the purpose of the shutdown beep;
obviously I know when I've told my machine to shut down or reboot, so
why do I need an audible reminder? But I've been unable to find anything
explaining what's causing this beep. I tried adding an
amixer -q -c 0 set Beep mute
to /etc/rc.local.shutdown,
but it didn't help; apparently the shutdown beep is called before that
file is run. Which strongly suggests it is being run by Arch Linux,
not by something in the BIOS. But nobody I've asked had any suggestions
as to its source, or how to change it. Another enduring Linux mystery ...
Update: I mentioned xset b as a way to adjust beeps inside X -- in
addition to turning beeps totally off, you can also set pitch, duration,
and sometimes volume, though the volume part didn't work for me.
But outside X, you can make similar adjustments
with setterm, e.g. setterm -bfreq 400 -blength 50.
Thanks to Mikachu for the tip!
Tags: linux, archlinux, alsa
[
12:05 Dec 13, 2011
More linux |
permalink to this entry |
]
Sun, 11 Dec 2011
Need your Ubuntu clock to stay in sync with a dual-boot Windows install?
It seems to have changed over the years, and google wasn't finding any
pages for me offering suggestions that still worked.
Turns out, in Ubuntu Oneiric Ocelot,
it's controlled by the file /etc/default/rcS: set
UTC=no.
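For reference, a minimal sketch of what the relevant part of the file ends
up looking like (everything else stays as shipped):
# Tell Ubuntu the hardware clock is on local time, which is what
# a default Windows install expects.
UTC=no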
Apparently it's also possible to get Windows to understand a
UTC
system clock using a registry tweak.
Ironically, that page, which I found by searching for
windows system clock utc, also has the answer for setting
Ubuntu to local time. So if I'd searched for Windows in the first
place, I wouldn't have had to puzzle out the Ubuntu solution myself.
Go figure!
Tags: linux, ubuntu, tip
[
13:56 Dec 11, 2011
More linux |
permalink to this entry |
]
Thu, 24 Nov 2011
A few days ago, I wrote about
how to
set up and configure extlinux (syslinux) as a bootloader.
But on Debian or Ubuntu,
if you make changes to files like /boot/extlinux/extlinux.conf
directly, they'll be overwritten.
The configuration files are regenerated by a program
called extlinux-update, which runs automatically every time you
update your kernel. (Specifically, it runs from the postinst script of
the linux-base package:
you can see it in /var/lib/dpkg/info/linux-base.postinst.)
So what's a Debian user to do if she wants to customize the menus,
add a splash image or boot other operating systems?
First, if you decide you really don't want Debian overwriting your
configuration files, you can disable updates
by editing /etc/default/extlinux.
Just be aware you won't get your boot menu updated when you install new
kernels -- you'll have to remember to update them by hand.
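If memory serves, the switch is a single shell variable in that file;
the name below is from memory, so verify it against the comments in your
own copy before relying on it:
# In /etc/default/extlinux: stop extlinux-update from regenerating the menus
#   (variable name from memory -- double-check against your own file)
EXTLINUX_UPDATE="false"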
It might be worth it: the automatic update is nearly as annoying as
the grub2 updater: it creates two automatic entries for every kernel
you have installed. So if you have several distros installed, each
with a kernel or two in your shared /boot,
you'll get an entry to boot Debian Squeeze with the
Ubuntu Oneiric kernel, one for Squeeze with the Natty kernel,
one for Squeeze with the Fedora 16 kernel ... as well as entries
for every kernel you have that's actually owned by Debian.
And then for each of these, you'll also get a second entry,
to boot in recovery mode. If you have several distros installed,
it makes for a very long and confusing boot menu!
It's a shame that the auto-updater doesn't restrict itself to kernels
managed by the packaging system, which would be easy enough to do.
(Wonder if they would accept a patch?)
You might be able to fudge something that works right by setting up
symlinks so that the only readable kernels actually live on the root
partition, so Debian can't read the kernels from the other
distros. Sounds a bit complicated and I haven't tried it.
For now, I've turned off automatic updating on my system.
But if your setup is simpler --
perhaps just one Debian or one Ubuntu partition plus some non-Linux
entries such as BSD or Windows -- here's how to set up Debian-style
automatic updating and still keep all your non-Linux boot entries
and your nice menu customizations.
Debian automatic updates and themes
First, take a quick look at /etc/default/extlinux and customize
anything there you might need, like the names of the kernels, kernel
boot parameters or timeout.
See man extlinux-update
for details.
For configuring menu colors, image backgrounds and such, you'll need to
make a theme. You can see a sample theme by installing the package
syslinux-themes-debian -- but watch out.
If you haven't configured apt not to pull in suggested packages, that
may bring back grub or grub-legacy, which you probably don't want.
You can make a theme without needing that package, though.
Create a directory /usr/share/syslinux/themes/mythemename
(the extlinux-update man page claims you can put a theme anywhere and
specify it by its full path, but it lies). Create a directory called
extlinux inside it, and in that make a file, theme.cfg,
with everything you want from extlinux.conf. For example:
default 0
prompt 1
timeout 50
ui vesamenu.c32
menu title Welcome to my Linux machine!
menu background mysplash.png
menu color title 1;36 #ffff8888 #00000000 std
menu color unsel 0 #ffffffff #00000000 none
menu color sel 7 #ff000000 #ffffff00 none
include linux.cfg
menu separator
include themes/mythemename/other.cfg
Note that last line: you can include other files from your theme.
For instance, you can create a file called other.cfg
with entries for other partitions you want to boot:
label oneiric
menu label Ubuntu Oneiric Ocelot
kernel /vmlinuz-3.0.0-12-generic
append initrd=/initrd.img-3.0.0-12-generic root=UUID=c332b3e2-5c38-4c50-982a-680af82c00ab ro quiet
label fedora
menu label Fedora 16
kernel /vmlinuz-3.1.0-7.fc16.i686
append initrd=/initramfs-3.1.0-7.fc16.i686.img root=UUID=47f6b1fa-eb5d-4254-9fe0-79c8b106f0d9 ro quiet
menu separator
LABEL Windows
KERNEL chain.c32
APPEND hd0 1
Of course, you could have a debian.cfg, an ubuntu.cfg,
a fedora.cfg etc. if you wanted to have multiple distros
all keeping their kernels up-to-date. Or you can keep the whole
thing in one file, theme.cfg. You can make a theme as complex
or as simple as you like.
Tags: linux, boot, extlinux, syslinux, debian, ubuntu
[
12:26 Nov 24, 2011
More linux/install |
permalink to this entry |
]
Sun, 20 Nov 2011
When my new netbook arrived, I chose Debian Squeeze as the first Linux
distro to install, because I was under the impression it still used grub1,
and I wanted to
avoid grub2.
I was wrong -- Squeeze uses grub2. Uninstalling grub2, installing grub-legacy
and running grub-install and update-grub didn't help; it turns out
even in Debian's grub-legacy package, those programs come from
grub2's grub-common package.
What a hassle! But maybe it was a blessing in disguise -- I'd been
looking for an excuse to explore extlinux as a bootloader as a way
out of the grub mess.
Extlinux is one of the many spinoffs of syslinux -- the bootloader
used for live CDs and many other applications. It's not as commonly used as
a bootloader for desktops and laptops, but it's perfectly capable of that.
It's simple, well tested and has been around for years. And it supports
the few things I want out of a bootloader: it has a simple
configuration file that lives on the /boot partition;
it can chain-load Windows, on machines with a Windows partition;
it even offers pretty graphical menus with image backgrounds.
Since there isn't much written about how to use extlinux, I wrote up
my experiences along with some tips for configuring it. It came out
too long for a blog article, so instead I've made it its own page:
How to install
extlinux (syslinux) as a bootloader.
Tags: linux, boot, extlinux, syslinux
[
16:19 Nov 20, 2011
More linux/install |
permalink to this entry |
]
Mon, 31 Oct 2011
My new netbook (about which, more later) has a trackpad with
areas set aside for both vertical and horizontal scrolling.
Vertical scrolling worked out of the box on Squeeze without needing
any extra fiddling. My old Vaio had that too, and I loved it.
Horizontal scrolling took some extra research.
I was able to turn it on with a command:
synclient HorizEdgeScroll=1
(thank you,
AbsolutelyTech).
Then it worked fine, for the duration of that session.
But it took a lot more searching to find out the right place to set
it permanently. Nearly every page you find on trackpad settings tells
you to edit /etc/X11/xorg.conf. Distros haven't normally
used xorg.conf in over three years! Sure, you can generate
one, then edit it -- but surely there's a better way.
And there is. At least in Debian Squeeze and Ubuntu Natty, you can edit
/usr/share/X11/xorg.conf.d/50-synaptics.conf
and add this Option line inside the "InputClass" section:
Option "HorizEdgeScroll" "1"
Don't forget the quotes, or X won't even start.
In theory, you can use this for any of the Synaptics driver options --
tap rate and sensitivity, even multitouch. Your mileage may vary --
horizontal scroll is the only one I've tested so far. But at least
it's easier than generating and maintaining an xorg.conf file!
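Incidentally, to see every option the synaptics driver knows about, along
with its current value -- handy for deciding what else might be worth
adding to 50-synaptics.conf -- synclient will dump the whole list:
synclient -l | more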
Tags: linux, X11, laptop
[
16:20 Oct 31, 2011
More linux/laptop |
permalink to this entry |
]
Fri, 28 Oct 2011
I wrote a few days ago about my
multi-distro
Linux live USB stick. Very handy!
But one thing that bugs me about live distros:
they're set up with default settings and don't
have a lot of the programs I want to use. Even getting a terminal
takes quite a lot of clicks on most distros. If only they would save
their settings!
It's possible to make a live USB stick "persistent", but not much is
written about it. Most of what's written tells you to create the USB
stick with usb-creator -- a GUI app that I've tried periodically for
the past two years without ever once succeeding in creating a bootable
USB stick.
Even if usb-creator did work, it wouldn't work with a multi-boot
stick like this one, because it would want to overwrite the whole drive.
So how does persistence really work? What is usb-creator doing, anyway?
How persistence works: Casper
The best howto I've found on Ubuntu persistence is
LiveCD
Persistence. But it's long and you have to wade through a lot of
fdisk commands and similar arcana. So here's how to take your
multi-distro stick and make at least one of the installs persistent.
Ubuntu persistence uses a package called casper which overlays
the live filesystem with the contents of another filesystem.
Figuring out where it looks for that filesystem is the key.
Casper looks for its persistent storage in two possible places: a
partition with the label "casper-rw", and a file named
"casper-rw" at the root of its mounted partitions.
So you could make a separate partition labeled "casper-rw", using your
favorite partitioning tool, such as gparted or fdisk. But if you already
have your multi-distro stick set up as one big partition, it's just as
easy to create a file. You'll have to decide how big to make the file,
based on the size of your USB stick.
I'm using a 4G stick, and I chose 512M for my persistent partition:
$ dd if=/dev/zero of=/path/to/casper-rw bs=1M count=512
Be patient: this step takes a while.
Next, create a filesystem inside that file. I'm not sure what the
tradeoffs are among various filesystem types -- no filesystem is
optimized for being run as a loopback file read from a vfat USB stick
that was also the boot device. So I flipped a coin and used ext3:
$ mkfs.ext3 /path/to/casper-rw
/path/to/casper-rw is not a block special device.
Proceed anyway? (y,n) y
One more step: you need to add the persistent flag to your boot
options. If you're following the multi-distro USB stick tutorial I
linked to earlier, that means you should edit boot/grub/grub.cfg on
the USB stick, find the boot stanza you're using for Ubuntu, and make
the line starting with linux look something like this:
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash noprompt persistent --
Now write the stick, unmount it, and try booting your live install.
Testing: did it work?
The LiveCD/Persistence page says persistent settings aren't
necessarily saved for the default "ubuntu" user, so it's a good idea
to make a new user. I did so.
Oops -- about that Ubuntu new user thing
But at least in Ubuntu Oneiric, there's a problem with that. If you
create a user, even as class Administrator (and of course you do want
to be an Administrator), it doesn't ask you for a password. If you
now log out or reboot, your new user should be saved -- but you won't
be able to do anything with the system, because anything that requires
sudo will prompt you for your nonexistent password. Even attempting to
set a password will prompt you for the nonexistent password.
Apparently you can "unlock" the user at the time you create it, and
then maybe it'll let you set a password. I didn't know this beforehand,
so here's how to set a password on a locked user from a terminal:
$ sudo passwd username
For some reason, sudo will let you do this without prompting for a
password, even though you can't do anything administrative through the GUI.
Testing redux
Once you're logged in as your new user, try making some changes.
Add and remove some items from the unity taskbar. Install a couple
of packages. Change the background.
Now try rebooting. If your casper-rw file worked, it should remember your
changes.
When you're not booted from your live USB stick, you can poke around
in the filesystem it uses by mounting it in "loopback" mode.
Plug the stick into a running Linux machine, mount the USB stick,
then mount the casper-rw file with
$ sudo mount -o loop /path/to/casper-rw /mnt
/path/to is wherever you mounted your usb stick -- e.g. /media/whatever.
With the file mounted in loopback mode,
you should be able to adjust settings or add new files without
needing to boot the live install -- and they should show up the
next time you use the live install.
My live Ubuntu Oneiric install is so much more fun to use now!
Tags: ubuntu, linux, install, grub
[
15:41 Oct 28, 2011
More linux/install |
permalink to this entry |
]
Tue, 25 Oct 2011
Linux live USB sticks (flash drives) are awesome. You can carry them
anywhere and give a demo of Linux on anyone's computer, any time. But
how do you keep track of them? Especially since USB sticks don't have
any place to write a label. How do you remember that the shiny blue
stick is the one with Ubuntu Oneiric, the black one has Ubuntu Lucid,
the other blue one that's missing its top is Debian ... and so forth.
It's impossible! Plus, such a waste -- you can hardly buy a flash drive
smaller than 4G these days, and then you go and devote it to a 700MB
ISO designed to fit on a CD. Silly.
The answer: get one big USB stick and put lots of distros on it,
using grub to let you choose at boot time.
To create my stick, I followed the easy instructions at
HOWTO:
Booting LiveCD ISOs from USB flash drive with Grub2.
I found that tutorial quite simple, so I'm not going to duplicate
the instructions there.
I used the non-LUA version, since my grub on Ubuntu Natty didn't seem
to support LUA.
Basically you run grub-install to the stick,
create a directory called iso where you stick all your ISO files,
then create a grub.cfg with magic incantations to boot each ISO.
Ah, wait ... magic incantations?
The tutorial is missing one important part: what if you want to use an ISO
that isn't already mentioned in the tutorial? If Ubuntu's entry is
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash noprompt --
and Parted Magic's is
linux (loop)/pmagic/bzImage iso_filename=$isofile edd=off noapic load_ramdisk=1 prompt_ramdisk=0 rwnomce sleep=10 loglevel=0
then you know there's some magic going on there.
I knew I needed at least the Ubuntu "alternate installer", since it
allows installing a command-line system without the Unity desktop, and
Debian Squeeze, since that's currently the most power-efficient Linux
for laptops, in addition to the distros mentioned in the tutorial.
How do you figure out what to put in those grub.cfg lines?
Here's how to figure it out from the ISO file. I'll use the Debian Squeeze
ISO as an example.
Step 1: mount the ISO file.
$ sudo mount -o loop /pix/boot/isos/debian-6.0.0-i386-netinst.iso /mnt
Step 2: find the kernel
$ ls /mnt/*/vmlinuz /mnt/*/bzImage
/mnt/install.386/vmlinuz
Step 3: find the initrd. It might have various names, and might or
might not be compressed, but the name will almost always start with init.
$ ls /mnt/*/vmlinuz /mnt/*/init*
/mnt/install.386/initrd.gz
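While the ISO is still mounted, you can also crib any extra boot parameters
from the distro's own isolinux/syslinux configuration -- that's where
incantations like Parted Magic's come from. The config location varies by
distro, so this is just a rough sketch:
$ grep -ri append /mnt/isolinux /mnt/syslinux /mnt/boot 2>/dev/null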
Unmount the ISO file.
$ umount /mnt
The trick in steps 2 and 3 is that nearly all live ISO images put the
kernel and initrd a single directory below the root. If you're using
an ISO that doesn't, you may have to search more deeply (try /mnt/*/*).
In the case of Debian Squeeze, now I have the two filenames:
/install.386/vmlinuz and /install.386/initrd.gz. (I've removed the
/mnt part since that won't be there when I'm booting from the USB stick.)
Now I can edit boot/grub/grub.cfg and make a boot stanza for Debian:
menuentry "Debian Squeeze" {
set isofile="/boot/isos/debian-6.0.0-i386-netinst.iso"
loopback loop $isofile
linux (loop)/install.386/vmlinuz iso_filename=$isofile quiet splash noprompt --
initrd (loop)/install.386/initrd.gz
}
Here's the entry for the Ubuntu alternate installer:
menuentry "Oneiric 11.10 alternate" {
set isofile="/boot/isos/ubuntu-11.10-alternate-i386.iso"
loopback loop $isofile
linux (loop)/install/vmlinuz iso_filename=$isofile
initrd (loop)/install/initrd.gz
}
It sounds a little convoluted, I know -- but you only have to do it
once, and then you have this amazing keychain drive with every Linux
distro on it you can think of.
Amaze your friends!
Tags: linux, install, ubuntu, debian, grub
[
22:21 Oct 25, 2011
More linux/install |
permalink to this entry |
]
Sat, 03 Sep 2011
Fairly often, I want a list of subdirectories inside a
particular directory. For instance, when posting blog entries,
I may need to decide whether an entry belongs under "linux"
or some sub-category, like "linux/cmdline" -- so I need to remind
myself what categories I have under linux.
But strangely, Linux offers no straightforward way to ask that question.
The ls
command lists directories -- along with the files.
There's no way to list just the directories. You can list the directories
first, with the --group-directories-first option.
Or you can flag the directories specially: ls -F
appends a slash to each directory name, so instead of linux
you'd see linux/. But you still have to pick the directories
out of a long list of files. You can do that with grep, of course:
ls -1F ~/web/blog/linux | grep /
That's a one, not an ell: it tells ls to list files one per line.
So now you get a list of directories, one per line, with a slash
appended to each one. Not perfect, but it's a start.
Or you can use the find
program, which has an option
-type d
that lists only directories. Perfect, right?
find ~/web/blog/linux -maxdepth 1 -type d
Except that lists everything with full pathnames:
/home/akkana/web/blog/linux, /home/akkana/web/blog/linux/editors,
/home/akkana/web/blog/linux/cmdline and so forth. Way too much noise
to read quickly.
What I'd really like is to have just a list of directory names --
no slashes, no newlines. How do we get from ls or find output to that?
We can either start with find and strip off all the path information
(in a loop with basename, or with a sed command), or start with ls
-F, pick only the lines with slashes, then strip off those slashes.
The latter sounds easier.
So let's go back to that ls -1F ~/web/blog/linux | grep /
command. To strip off the slashes, you can use sed's s (substitute)
command. Normally the syntax is sed 's/oldpat/newpat/'. But since
slashes are the pattern we're substituting, it's better to use
something else as the separator character. I'll use an underscore.
The old pattern, the one I want to replace, is / -- but I only want to
replace the last slash on the line, so I'll add a $ after it,
representing end-of-line. The new pattern I want instead of the slash
is -- nothing.
So my sed argument is 's_/$__'
and the command becomes:
ls -1F ~/web/blog/linux | grep / | sed 's_/$__'
That does what I want. If I don't want them listed one per line, I can
fudge that using backquotes to pass the output of the whole command to
the shell's echo command:
echo `ls -1F ~/web/blog/linux | grep / | sed 's_/$__'`
If you have a lot of directories to list and you want ls's nice
columnar format, that's a little harder.
You can ls the list of directories (the names inside the backquotes),
ls `your long command`
-- except that now that you've stripped off the path information,
ls won't know where to find the files. So you'd have to change
directory first:
cd ~/web/blog/linux; ls -d `ls -1F | grep / | sed 's_/$__'`
That's not so good, though, because now you've changed directories
from wherever you were before. To get around that, use parentheses
to run the commands inside a subshell:
(cd ~/web/blog/linux; ls -d `ls -1F | grep / | sed 's_/$__'`)
Now the cd only applies within the subshell, and when the command
finishes, your own shell will still be wherever you started.
Finally, I don't want to have to go through this discovery process
every time I want a list of directories. So I turned it into a couple
of shell functions, where $* represents all the arguments I pass to
the command, and $1 is just the first argument.
lsdirs() {
(cd $1; /bin/ls -d `/bin/ls -1F | grep / | sed 's_/$__'`)
}
lsdirs2() {
echo `/bin/ls -1F $* | grep / | sed 's_/$__'`
}
I specify /bin/ls because I have a function overriding ls in my .zshrc.
Most people won't need to, but it doesn't hurt.
Now I can type lsdirs ~/web/blog/linux
and get a nice
list of directories.
Update, shortly after posting:
In zsh (which I use), there's yet another way: */
matches
only directories. It appends a trailing slash to them, but
*(/)
matches directories and omits the trailing slash.
So you can say
echo ~/web/blog/linux/*(/:t)
:t strips the directory part of each match.
To see other useful : modifiers, type
ls *(:
then hit TAB.
Thanks to Mikachu for the zsh tips. Zsh can do anything, if you can
just figure out how ...
Tags: cmdline, shell, pipelines, linux
[
11:22 Sep 03, 2011
More linux/cmdline |
permalink to this entry |
]
Sat, 27 Aug 2011
I switched to the current Debian release, "Squeeze", quite a few
months ago on my Sony Vaio laptop. I've found that Squeeze, with its older
kernel and good attention to power management (compared to the
power
management regressions in more recent kernels), gets much better
battery life than either Arch Linux or Ubuntu on this machine. I'm using
Squeeze as the primary OS at least until the other distros get their
kernel power management sorted out.
I did have to solve a couple of minor problems when switching over, though.
Suspend/Resume quirks
The first problem was that my Vaio TX650 would freeze on resuming from
suspend -- something that every other Linux distro has handled out of
the box on this machine.
The solution turned out to be simple though
non-obvious, apparently a problem with controlling power to the display:
sudo pm-suspend --quirk-dpms-on
That wasn't easy to find, but ever since then the machine has been
suspending without a single glitch. And it's a true suspend, unlike
Ubuntu Natty, which on this machine will use up a full battery if I
leave it suspended all day -- Natty uses nearly as much power when
suspended as it does running.
Adjusting screen brightness: debugging ACPI
Of course, once I got that sorted out, there were the usual collection
of little changes I needed to make. Number one was that it didn't
automatically handle brightness adjustment with the Fn-F5 and Fn-F6 keys.
It turned out my
previous
technique for handling the brightness keys
didn't work, because the names of the ACPI events in /etc/acpi/events
had changed. Previously, /etc/acpi/events/sony-brightness-down
had contained references to the Sony I/O Control, or SPIC:
event=sony/hotkey SPIC 00000001 00000010
action=/etc/acpi/sonybright.sh down
That device didn't exist on Squeeze. To find out what I needed now,
I ran
acpi_listen
and typed the function-key combos in question.
That gave me the codes I needed. I changed the
sony-brightness-down
file to read:
event=video/brightnessdown BRTDN 00000087 00000000
action=/etc/acpi/sonybright.sh down
It's probably a good thing, changing to be less Sony-specific ...
but as a user it's one of those niggling annoyances that I have to
go chase down every time I upgrade to a new Linux version.
Tags: linux, suspend, acpi, debian, sony, vaio
[
12:07 Aug 27, 2011
More linux/laptop |
permalink to this entry |
]
Wed, 27 Apr 2011
Today I have three tips I've found useful with ssh.
Clearing ssh control sockets
We had a network failure recently while I had a few ssh connections open.
After the network came back up, when I tried to ssh to one host, it always
complained
Control socket connect(/home/username/ssh-username@example.com:port): Connection refused
-- but then it proceeded to connect anyway.
Another server simply failed to connect.
Here's how to fix that: on the local machine -- not the remote one --
there's a file named /home/username/ssh-username@example.com:port.
Remove that file, and ssh will work normally again.
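In other words, something like this, with the hostname and port taken from
the error message (the path shown is hypothetical -- it's whatever your
ControlPath pattern produces):
$ rm ~/ssh-username@example.com:22
Newer versions of ssh can also talk to a running master connection directly
-- ssh -O check host to see if one is alive, ssh -O exit host to shut it
down -- which is worth trying before deleting the socket by hand.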
Connection Sharing
I think the stuck control socket happened because I was using
ssh connection sharing,
a nifty feature introduced a few years back that lets ssh-based commands
re-use an existing connection without re-authenticating.
Suppose you have an interactive ssh session to a remote host,
and you need to copy some files over with a program like scp.
Normally, each scp command needs to authenticate with remotehost,
sometimes more than once per command. Depending on your setup and
whether you're running a setup like ssh-agent, that might mean you
have to retype your password several times, or wait while it
verifies your host key.
But you can make those scps re-use your existing connection.
Add this to .ssh/config:
Host *
ControlMaster auto
ControlPath ~/ssh-%r@%h:%p
Eliminating strict host key checking
The final tip is for my biggest ssh pet peeve: strict host key checking.
That's the one where you ssh to a machine you use all the time
and you get:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
... and many more lines of stuff like that.
And all it really means is something like
- that machine has re-installed its OS, or rebooted into a different
distro; or
- the DHCP server has given that IP address to a different machine
from the one that had it last time.
The really frustrating thing is that there's no flag you can pass to
ssh to tell it "look, it's fine, just let me in." The only solution is to
edit ~/.ssh/known_hosts, find the line corresponding with that
host (not so easy if you've forgotten to add HashKnownHosts no
so you can actually see the hostnames) and delete it -- or delete the
whole ~/.ssh/known_hosts file. ssh does have an option
for StrictHostKeyChecking no, and the documentation implies
it might help; but it doesn't get you past this error, and it doesn't
prevent ssh asking for confirmation when it sees a new host, either.
I'm not sure what it actually does.
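One shortcut that helps with the known_hosts editing, especially if the
hostnames are hashed: ssh-keygen can remove the offending entry for you:
$ ssh-keygen -R hostname-or-ip
You still have to do it every time a host legitimately changes, but it
beats hunting through the file in an editor.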
What does get you past the error? Here's a fun trick.
Add a stanza like this to .ssh/config:
Host 192.168.1.*
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Translation: for every host on your local net (assuming it's 192.168.1),
pretend /dev/null has the list you'd normally get from ~/.ssh/known_hosts.
Since there's definitely no host line there matching your intended
remote host, it will ask you whether it's okay, then connect.
I wish I could find a way to eliminate the prompt too (I thought that
was what StrictHostKeyChecking no
was supposed to do);
but at least you can just hit return, which is a lot easier than
editing known_hosts every time.
And yes, this all makes ssh less secure. You know what? For hosts on
my local network, that are sitting in the same room with me, I'm
really not too concerned about that.
Tags: linux, ssh
[
17:53 Apr 27, 2011
More linux |
permalink to this entry |
]
Wed, 20 Apr 2011
(or: Fixing a multi flash card reader on Ubuntu Natty)
For the first time after installing Ubuntu Natty, I needed to upload
some photos from my camera -- and realized with a sinking feeling
that I now had the new UDEV, which no longer lets you use the
udev
all_partitions directive, so that cards inserted into a multi flash
card reader will show up as /dev/sdb1 or whatever the appropriate
device name is.
Without all_partitions, you get the initial
sdb, sdc, sdd and sde for the various slots in the card reader, but
since there's no card there when the machine boots, and the reader
doesn't send an event when you insert a card later,
you never get a mountable /dev/sdb1 device.
But the udev developers in their infinite wisdom removed
all_partitions some time last year, apparently without providing any
replacement for it. So you can no longer solve this problem through
udev rules.
Static udev devices
Fortunately, there's another way, which is actually easier (though
less flexible) than udev rules: udev static devices. You can create
the devices you need once, and tell udev to create exactly those
devices every time.
To begin, first find out what your base devices are.
Look through dmesg | more
for your card reader.
Mine looks something like this:
[ 3.304938] scsi 4:0:0:0: Direct-Access Generic USB SD Reader 1.00 PQ
: 0 ANSI: 0
[ 3.305440] scsi 4:0:0:1: Direct-Access Generic USB CF Reader 1.01 PQ
: 0 ANSI: 0
[ 3.305939] scsi 4:0:0:2: Direct-Access Generic USB xD/SM Reader 1.02 PQ
: 0 ANSI: 0
[ 3.306438] scsi 4:0:0:3: Direct-Access Generic USB MS Reader 1.03 PQ
: 0 ANSI: 0
[ 3.306876] sd 4:0:0:0: Attached scsi generic sg1 type 0
[ 3.307020] sd 4:0:0:1: Attached scsi generic sg2 type 0
[ 3.307165] sd 4:0:0:2: Attached scsi generic sg3 type 0
[ 3.307293] sd 4:0:0:3: Attached scsi generic sg4 type 0
[ 3.313181] sd 4:0:0:1: [sdc] Attached SCSI removable disk
[ 3.313806] sd 4:0:0:0: [sdb] Attached SCSI removable disk
[ 3.314430] sd 4:0:0:2: [sdd] Attached SCSI removable disk
[ 3.315055] sd 4:0:0:3: [sde] Attached SCSI removable disk
Notice that the SD reader is scsi 4:0:0:0, and a few lines later, 4:0:0:0
is mapped to sdb. They're out of order, so make sure you match those scsi
numbers. If I want to read SD cards, /dev/sdb is where to look.
(Note: sd in "sdb" stands for "SCSI disk", while SD in "SD card"
stands for "Secure Digital". Two completely different meanings for
the same abbreviation -- just an unfortunate coincidence to make
this all extra confusing.)
To create static devices, I'll need the major and minor device numbers
for the devices I want to create. Since I know the SD card slot is sdb,
I can get those with ls:
$ ls -l /dev/sdb
brw-rw---- 1 root disk 8, 16 2011-04-20 09:43 /dev/sdb
The b at the beginning of the line tells me it's a block device;
the major and minor device numbers for the base SD card device are 8 and 16.
To get the first partition on that card, use the same major device and
add one to the minor device: 8 and 17.
Now you can create new static block devices, as root, using mknod
in the /lib/udev/devices directory:
$ sudo mknod /lib/udev/devices/sdb1 b 8 17
$ sudo mknod /lib/udev/devices/sdb2 b 8 18
$ sudo mknod /lib/udev/devices/sdb3 b 8 19
Update: Previously I had here
$ sudo mknod b 8 17 /lib/udev/devices/sdb1
but the syntax seems to have changed as of mid-2012.
Although my camera only uses one partition, sdb1, I created devices
for a couple of extra partitions because I
sometimes
partition cards that way.
If you only use flash cards for cameras and MP3 players, you
may not need anything beyond sdb1.
You can make devices for the other slots in the card reader the same way.
The memory stick reader showed up as scsi 4:0:0:3 or sde, and
/dev/sde has device numbers 8, 64 ... so to read the memory stick
from Dave's Sony camera, I'd need:
$ sudo mknod /lib/udev/devices/sde1 b 8 65
You don't have to call the devices sdb1, either. You can call them
sdcard1 or whatever you like. However, the base device will still
be named sdb (unless you write a udev rule to change that).
fstab entry
I like to use fstab entries and keep control over what's mounted,
rather than letting the system automatically mount everything it sees.
I can do that with this entry in /etc/fstab:
/dev/sdb1 /sdcard vfat user,noauto,exec,fmask=111,shortname=lower 0 0
plus sudo mkdir /sdcard.
Now, whenever I insert an SD card and want to mount it, I type
mount /sdcard
as myself. There's no need for sudo because of the user directive.
Tags: linux, udev, ubuntu
[
20:22 Apr 20, 2011
More linux |
permalink to this entry |
]
Tue, 22 Mar 2011
Over the weekend I tried installing Debian's new release,
"Squeeze", on my Vaio TX650 laptop.
I used a "net install" CD, the one that installs only the bare minimum
then goes to the net for anything else. I used Expert mode, because I
needed to set a static IP address and keep it from overwriting my grub
configuration.
Most of the install went smoothly -- until I got to the last big step
near the end, "Select and install software", where it froze at 1%.
A little web searching (on another machine) gave me the hint that
the Debian installer prints a log on the fourth console, Ctrl-Alt-F4.
Checking that log made the problem clear:
aptitude was complaining about packages
without a proper GPG signature -- type Yes to continue without verifying
signatures. But since this was running inside the installer, there's no
place to type Yes -- that Ctrl-Alt-F4 console is merely displaying messages,
not accepting input, and the installer doesn't accept any input for
aptitude.
Fortunately, "Select and install software" isn't crucial to the net
install process. I don't actually know what software it would have
installed -- it never asked me to choose any -- but without it, you
should still have a working minimal Debian on the disk. So I made
another console on Ctrl-Alt-F2, ran ps aux
, found that
aptitude was the highest numbered process running, and killed it.
Upon returning to the installer (Ctrl-Alt-F1), I was able to skip
"Select and install software", finish the install process and reboot.
Upon rebooting, I logged in as root and ran apt-get update
.
It complained about GPG errors; but now I could do something about it.
I ran apt-get upgrade
and confirmed that I wanted to proceed
even without verifying package signatures. When that was over, the
problem was fixed: a subsequent apt-get update
ran
without errors.
This ISO was downloaded (from the kernel.org mirror, I believe)
a few days after the official release.
I'm told that Debian changes the keys at the last minute before a
release; perhaps the new keys don't make it into the ISO images on
all the mirrors. Or maybe they just messed up with the Squeeze release.
Anyway, it was fairly easily solved, but seemed like a disappointing
and silly problem. A web search found lots of people hitting
this problem; it's a shame that the installer can't run aptitude in
a mode where it won't prompt and hang up the whole install.
Alas, it's probably all academic anyway, since suspend/resume
doesn't work. It freezes on resume, with a black screen --
another common Debian problem, judging by what I see on the net.
I'm a bit surprised, since every other distro I've tried has suspended
the Vaio beautifully. But after hours of messing with it over the
weekend, I ran out of time and conceded defeat.
Tags: install, debian, linux
[
22:49 Mar 22, 2011
More linux/install |
permalink to this entry |
]
Sun, 20 Mar 2011
It's time for another installment of "Where have the control/capslock
adjustments migrated to?" This time it's for the latest Debian
release, "Squeeze".
Ever since they stopped making keyboards with the control key to the
left of the A,
I've remapped my CapsLock key to be another Control key. I never
need CapsLock, but I use Control constantly all day while editing text.
Some people prefer to swap Control and CapsLock.
But the right way to do that changes periodically.
For the last few years, since
Ubuntu
Intrepid, you could set XKbOptions for Control and Capslock in
/etc/default/console-setup. But that no longer works in Squeeze.
It turns out Squeeze introduced a new file,
/etc/default/keyboard, so any keyboard options you previously
had in console-setup need to move to keyboard.
For me, that's these lines:
XKBMODEL="pc104"
XKBLAYOUT="us"
XKBVARIANT=""
XKBOPTIONS="ctrl:nocaps,compose:menu,terminate:ctrl_alt_bksp"
though I suspect only the last line matters.
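To try an option in your current X session before committing it to the file
(the effect lasts only until X restarts), setxkbmap can apply it on the fly:
setxkbmap -option ctrl:nocaps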
This wasn't well covered on the web. There aren't many howtos covering
Squeeze yet, but I found the hint I needed in a terse Debian IRCbot factoid:
Factoid
capslock says
For console-setup, append ",ctrl:nocaps" to the value of XKBOPTIONS
within /etc/default/console-setup (/etc/default/keyboard on Squeeze).
That factoid assumes you already have XKBOPTIONS set; as shipped,
it's empty, so skip that initial comma.
I was going to conclude with a link to the documentation on XKBOPTIONS,
or XKbOptions as it was capitalized in xorg.conf ... but there
doesn't seem to be any. It's not in any of the Xorg man pages like
xorg.conf(5) where I expected to find it; nor can I find anything
on the web beyond howtos like this one from people who have figured
out a few specific options. Anyone know?
Tags: install, debian, linux
[
12:54 Mar 20, 2011
More linux/install |
permalink to this entry |
]
Tue, 15 Mar 2011
It's another episode of "How to use Linux to figure out CarTalk puzzlers"!
This time you don't even need any programming.
Last week's puzzler was
A
Seven-Letter Vacation Curiosity. Basically, one couple hiking
in Northern California and another couple carousing in Florida
both see something described by a seven-letter word containing
all five vowels -- but the two things they saw were very different.
What's the word?
That's an easy one to solve using basic Linux command-line skills --
assuming the word is in the standard dictionary. If it's some esoteric
word, all bets are off. But let's try it and see. It's a good beginning
exercise in regular expressions and how to use the command line.
There's a handy word list in /usr/share/dict/words, one word per line.
Depending on what packages you have installed, you may have bigger
dictionaries handy, but you can usually count on /usr/share/dict/words
being there on any Linux system. Some older Unix systems may have it in
/usr/dict/words instead.
We need a way to choose all seven letter words.
That's easy. In a regular expression, . (a dot) matches one letter.
So ....... (seven dots) matches any seven letters.
(There's a more direct way to do that: the expression .\{7\}
will also match 7 letters, and is really a better way. But personally,
I find it harder both to remember and to type than the seven dots.
Still, if you ever need to match 43 characters, or 114, it's good to know the
"right" syntax.)
Fine, but if you grep ....... /usr/share/dict/words
you get a list of words with seven or more letters. See why?
It's because grep prints any line where it finds a match -- and a
word with nine letters certainly contains seven letters within it.
The pattern you need to search for is '^.......$' -- the up-caret ^
matches the beginning of a line, and the dollar sign $ matches the end.
Put single quotes around the pattern so the shell won't try to interpret
the caret or dollar sign as special characters. (When in doubt, it's
always safest to put single quotes around grep patterns.)
So now we can view all seven-letter words:
grep '^.......$' /usr/share/dict/words
How do we choose only the ones that contain all the letters a e i o and u?
That's easy enough to build up using pipelines, using the pipe
character | to pipe the output of one grep into a different grep.
grep '^.......$' /usr/share/dict/words | grep a
sends that list of 7-letter words through another grep command to
make sure you only see words containing an a.
Now tack a grep for each of the other letters on the end, the same way:
grep '^.......$' /usr/share/dict/words | grep a | grep e | grep i | grep o | grep u
Voilà! I won't spoil the puzzler, but there are two words that
match, and one of them is obviously the answer.
The power of the Unix command line to the rescue!
Tags: cmdline, regexp, linux, shell, pipelines, puzzles
[
11:00 Mar 15, 2011
More linux/cmdline |
permalink to this entry |
]
Mon, 28 Feb 2011
Another year of
SCALE,
the Southern CAlifornia Linux Expo, is over, and it was as good as ever.
Talks
A few standout talks:
Leigh Honeywell's keynote was a lively and enjoyable discussion of
hackerspaces --
from the history of the movement to a discussion of some of the
coolest and most innovative hackerspaces around today. She had plenty of
stories and examples that left everyone in the audience itching to
get involved with a local hackerspace, or start one if necessary.
John Wise and Eugene Clement of
LinuxAstronomy.com
presented the entertaining
"A Reflection on Classroom Robotics with Linux Robots in classrooms".
They've taught kids to build and program robots that follow lines,
solve mazes, and avoid obstacles. The students have to figure out
how to solve problems, details like when and how far to back up.
What a fantastic class! I can't decide if I'd rather teach a class
like that or take it myself ...
but either way, I enjoyed the presentation.
They also had a booth in the exhibit
hall where they and several of their students presented their
Arduino-based robots exploring simulated Martian terrain.
Jonathan Thomas spoke about his OpenShot video editor and the
development community behind it, with lots of video samples of what
OpenShot can do.
Sounds like a great program and a great community as well:
I'll definitely be checking out OpenShot
next time I need to edit a video.
It's worth mentioning that both the robotics talk and the OpenShot one
were full of video clips that ran smoothly without errors.
That's rare at conferences -- videos so often cause problems
in presentations (OpenOffice is particularly bad at them).
These presenters made it look effortless, which most likely points to
a lot of preparation and practice work beforehand.
Good job, guys!
Larry Bushey's "Produce An Audio Podcast Using Linux" was clear and
informative, managing to cover the technology, both hardware and
software, and the social factors like how often to broadcast, where
to host, and how to get the word out and gain and keep listeners
while still leaving plenty of time for questions.
The Exhibit Hall
In between talks I tried to see some of the exhibit hall, which was
tough, with two big rooms jam-packed with interesting stuff.
Aside from LinuxAstronomy and their robots,
there were several other great projects for getting technology into schools:
Partimus from the bay area, and Computers4Kids more local to LA,
both doing excellent work.
The distro booths all looked lively. Ubuntu California's booth was
always so packed that it was tough getting near to say hi, Fedora was well
attended and well stocked with CDs, and SuSE had a huge array of
giveaways and prizes. Debian, Gentoo, Tiny Core and NetBSD were there as well.
Distro Dilemma and "the Hallway track"
Late in the game I discovered even Arch Linux had a booth hidden off
in a corner. I spent some time there hoping I might get help for my
ongoing Arch font rendering problem, but ended up waiting a long time
for nothing. That left me with a dilemma for my talk later that day:
Arch works well on my laptop except that fonts sometimes render with
chunks missing, making them ugly and hard to read; but a recent update
of Ubuntu Lucid pulled in some weird X change that keeps killing my
window manager at unpredictable times. What a choice! In the end I
went with Ubuntu, and indeed X did go on the fritz, so I had to do
without my live demo and stick to my prepared slides. Not a tragedy,
but annoying. The talk went well otherwise.
I had a great conversation with Asheesh from the
OpenHatch project
about how to make open source projects more welcoming to new
contributors. It's something I've always felt strongly about, but I
feel powerless to change existing projects so I don't do anything.
Well, OpenHatch is doing something about it, and I hope I'll be able
to help.
The Venue
Not everything was perfect. The Hilton is a new venue for SCALE,
and there were some issues.
On Saturday, every room was full, with people
lining the walls and sitting on floors. This mostly was not a room size
problem, merely a lack of chairs. Made me wonder if we should go all
opensource on them and everybody bring their own lawn chair if
the hotel can't provide enough.
Parking was a problem too. The Hilton's parking garage fills up early,
so plan on driving for ten minutes through exhaust-choked tunnels
hoping to find a space to squeeze into. We got lucky, so I didn't
find out if you have to pay if you give up and exit without
finding a spot.
Then Sunday afternoon they ran short of validation
tickets (the ones that reduce the cost from $22 to $9), and it wasn't
clear if there was any hope of more showing up (eventually some did).
To top it off, when we finally left on Sunday
the payment machine at the exit swallowed my credit card, requiring
another 15 minutes of waiting for someone to answer the buzzer.
Eventually the parking manager came down to do a magic reset rite.
So I didn't come away with a great impression of the Hilton.
But it didn't detract much from a wonderful conference full of
interesting people -- I had a great time, and would (and do) recommend
SCALE to everyone with any interest in Linux.
But it left me musing about the pros and cons of different venues ...
a topic I will discuss in a separate post.
Tags: conferences, linux, speaking, scale, scale9x
[
22:39 Feb 28, 2011
More conferences |
permalink to this entry |
]
Fri, 25 Feb 2011
This week's Linux Planet article continues the Plug Computer series,
with
Cross-compiling
Custom Kernels for Little Linux Plug Computers.
It covers how to find and install a cross-compiler (sadly, your
Linux distro probably doesn't include one), configuring the Linux
kernel build, and a few gotchas and things that I've found not to
work reliably.
It took me a lot of trial and error to figure out some of this --
there are lots of howtos on the web but a lot of them skip the basic
steps, like the syntax for the kernel's CROSS_COMPILE argument --
but you'll need it if you want to enable any unusual drivers like
GPIO. So good luck, and have fun!
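For a taste of what's involved: the build itself boils down to passing ARCH
and CROSS_COMPILE to make. The toolchain prefix below is only an example --
yours depends on which cross-compiler you installed:
make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- menuconfig
make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- uImage modules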
Tags: tech, linux, plug, kernel, writing
[
12:30 Feb 25, 2011
More tech |
permalink to this entry |
]
Thu, 10 Feb 2011
This week's Linux Planet article continues my
Plug
Computing part 1 from two weeks ago.
This week, I cover the all-important issue of "unbricking":
what to do when you mess something up and the plug doesn't boot
any more. Which happens quite a lot when you're getting started
with plugs.
Here it is:
Un-Bricking
Linux Plug Computers: uBoot, iBoot, We All Boot for uBoot.
If you want more exhaustive detail, particular on those uBoot scripts and
how they work, I go through some of the details in a brain-dump I wrote
after two weeks of struggling to unbrick my first GuruPlug:
Building
and installing a new kernel for a SheevaPlug.
But don't worry if that page isn't too clear;
I'll cover the kernel-building part more clearly in my next
LinuxPlanet article on Feb. 24.
Tags: tech, linux, plug, uboot, writing
[
10:15 Feb 10, 2011
More tech |
permalink to this entry |
]
Thu, 27 Jan 2011
My article this week on Linux Planet is an introduction to Plug
Computers: tiny Linux-based "wall wart" computers that fit in a
box not much bigger than a typical AC power adaptor.
Although they run standard Linux (usually Debian or Ubuntu),
there are some gotchas to choosing and installing plug computers.
So this week's article starts with the basics of choosing a model
and connecting to it; part II, in two weeks, will address more
difficult issues like how to talk to uBoot, flash a new kernel or
recover if things go wrong.
Read part I here: Tiny
Linux Plug Computers: Wall Wart Linux Servers.
Tags: tech, linux, plug
[
19:36 Jan 27, 2011
More tech |
permalink to this entry |
]
Tue, 18 Jan 2011
At work, I'm testing some web programming on a server where we use a
shared account -- everybody logs in as the same user. That wouldn't
be a problem, except nearly all Linuxes are set up to use colors in
programs like ls and vim that are only readable against a dark background.
I prefer a light background (not white) for my terminal windows.
How, then, can I set things up so that both dark- and light-backgrounded
people can use the account? I could set up a script that would set up
a different set of aliases and configuration files, like when I
changed
my vim colors.
Better, I could fix all of them at once by
changing my terminal's idea of colors -- so when the remote machine
thinks it's feeding me a light color, I see a dark one.
I use xterm, which has an easy way of setting colors: it has a list
of 16 colors defined in X resources. So I can change them in ~/.Xdefaults.
That's all very well. But first I needed a way of seeing the existing
colors, so I knew what needed changing, and of testing my changes.
Script to show all terminal colors
I thought I remembered once seeing a program to display terminal colors,
but now that I needed one, I couldn't find it.
Surely it should be trivial to write. Just find the
escape sequences and write a script to substitute 0 through 15, right?
Except finding the escape sequences turned out to be harder than I
expected. Sure, I found them -- lots of them, pages that
conflicted with each other, most giving sequences that
didn't do anything visible in my xterm.
Eventually I used script
to capture output from a vim session
to see what it used. It used <ESC>[38;5;Nm to set color
N, and <ESC>[m to reset to the default color.
This more or less agreed with Wikipedia's
ANSI
escape code page, which says <ESC>[38;5; does "Set xterm-256
text color" with a note "Dubious - discuss". The discussion says this
isn't very standard. That page also mentions the simpler sequence
<ESC>[0;Nm to set the
first 8 colors.
Okay, so why not write a script that shows both? Like this:
#! /usr/bin/env python
# Display the colors available in a terminal.

print "16-color mode:"
for color in range(0, 16) :
    for i in range(0, 3) :
        print "\033[0;%sm%02s\033[m" % (str(color + 30), str(color)),
    print

# Programs like ls and vim use the first 16 colors of the 256-color palette.
print "256-color mode:"
for color in range(0, 256) :
    for i in range(0, 3) :
        print "\033[38;5;%sm%03s\033[m" % (str(color), str(color)),
    print
Voilà! That shows the 8 colors I needed to see what vim and ls
were doing, plus a lovely rainbow of other possible colors in case I ever
want to do any serious ASCII graphics in my terminal.
Changing the X resources
The next step was to change the X resources. I started
by looking for where the current resources were set, and found them
in /etc/X11/app-defaults/XTerm-color:
$ grep color /etc/X11/app-defaults/XTerm-color
irrelevant stuff snipped
*VT100*color0: black
*VT100*color1: red3
*VT100*color2: green3
*VT100*color3: yellow3
*VT100*color4: blue2
*VT100*color5: magenta3
*VT100*color6: cyan3
*VT100*color7: gray90
*VT100*color8: gray50
*VT100*color9: red
*VT100*color10: green
*VT100*color11: yellow
*VT100*color12: rgb:5c/5c/ff
*VT100*color13: magenta
*VT100*color14: cyan
*VT100*color15: white
! Disclaimer: there are no standard colors used in terminal emulation.
! The choice for color4 and color12 is a tradeoff between contrast, depending
! on whether they are used for text or backgrounds. Note that either color4 or
! color12 would be used for text, while only color4 would be used for a
! Originally color4/color12 were set to the names blue3/blue
!*VT100*color4: blue3
!*VT100*color12: blue
!*VT100*color4: DodgerBlue1
!*VT100*color12: SteelBlue1
So all I needed to do was take the ones that don't show up well --
yellow, green and so forth -- and change them to colors that work
better, choosing from the color names in /etc/X11/rgb.txt
or my own RGB values. So I added lines like this to my ~/.Xdefaults:
!! color2 was green3
*VT100*color2: green4
!! color8 was gray50
*VT100*color8: gray30
!! color10 was green
*VT100*color10: rgb:00/aa/00
!! color11 was yellow
*VT100*color11: dark orange
!! color14 was cyan
*VT100*color14: dark cyan
... and so on.
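One detail that's easy to forget: depending on how your session loads
resources, existing xterms won't change, and new ones may not see the edits
until you either restart X or merge them in with xrdb:
xrdb -merge ~/.Xdefaults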
Now I can share accounts, and
I no longer have to curse at those default ls and vim settings!
Update: Tip from Mikachu: ctlseqs.txt
is an excellent reference on terminal control sequences.
Tags: color, X11, linux, programming, python, tips
[
10:56 Jan 18, 2011
More linux |
permalink to this entry |
]
Thu, 13 Jan 2011
My latest article on Linux Planet is a
review
of Arch Linux.
I've been quite favorably impressed with Arch. It's a good, solid,
straightforward distro that's very well suited to folks who like
to administer their systems via the command-line -- or who want to
learn how to do that.
I've been running it on my laptop for a few months, because it has
excellent performance, without a lot of the bloatware you see in
a lot of other distros, and it boots fast.
The only real problem I've had involves fonts. I see nasty font
artifacts -- sometimes subtle, a line or a few pixels missing from
certain letters -- but sometimes severe, as in
this screenshot
or this one.
In the article I talk about some solutions I've found that make
the problems less bad, but I haven't found any way to make them
go away entirely.
Unfortunately, since the font problems are worst inside browsers and
I use my laptop for presentations at conferences, this may eventually
drive me off Arch. I hope not -- I hope I can find a solution --
because otherwise, Arch has been nothing short of a pleasure.
Tags: writing, linux
[
20:36 Jan 13, 2011
More writing |
permalink to this entry |
]
Thu, 09 Dec 2010
My article this week on Linux Planet concerns
Advanced
Linux Server Troubleshooting (part 2).
It's two loosely related topics: exploring the /proc filesystem, and
how to use it to find information on a running process; and several
ways to get stack traces from Python programs.
This (as well as
Troubleshooting
part I) arose from a problem we had at work, where we use
Linux plug computers (ARM-based Linux appliances) running Python
scripts. It's not uncommon for Python networking scripts to go into
never-never-land, waiting forever on a network connection without
timing out. Since plug computers tend not to be outfitted with the
latest and greatest tools like gdb and debug versions of libraries,
we've needed to find more creative ways of figuring out what
processes are doing to make sure our programs are ready for anything.
Tags: writing, debugging, linux
[
11:44 Dec 09, 2010
More linux |
permalink to this entry |
]
Wed, 24 Nov 2010
How do you troubleshoot a process that's running away, sucking up
too much CPU, or not doing anything at all?
Today on Linux Planet:
Troubleshooting
Linux Servers: top and Other Basic System Tools.
This is part I, covering basics like top, strace and gdb.
Part II will get into hairier stuff and tips for debugging Python
applications.
Tags: writing, debugging, linux
[
21:06 Nov 24, 2010
More linux |
permalink to this entry |
]
Tue, 05 Oct 2010
I've previously written about
how to use 'cryptoloop' encryption on a flash drive or SD card.
An encrypted SD card or USB stick is very handy when you have personal
files you want to take with you between several different machines.
But modern Gnome systems can't read cryptoloop. Or, rather, they can,
but you have to fiddle with them as root -- they won't recognize and
mount the filesystem automatically.
It turns out that's because the "new way", instead of cryptoloop,
is to use a system called LUKS. But it has a few pitfalls, and there's
no documentation about how to use it on a system that doesn't recognize
it automatically. So here's some.
Creating a LUKS filesystem
(Updated December 2023)
Palimpsest, described below, no longer seems to exist.
But it's just as easy to set up a LUKS filesystem
with cryptsetup.
I'm using "SECRET" as the disk label; this is a name
you'll use to mount the disk later. Of course, all these commands
require root.
cryptsetup -v luksFormat --label SECRET /dev/TARGET_DISK_PARTITION
Read more about luksFormat options in
man cryptsetup-luksFormat
.
Update: This used to be enough to make the device available in /dev/mapper,
but in 2024, you have to open it explicitly:
sudo cryptsetup luksOpen /dev/PARTITION SECRET
Now your new encrypted device is available on /dev/mapper/SECRET.
Next, create a filesystem on the encrypted device:
mkfs.ext4 /dev/mapper/SECRET
Finally, make sure you can mount it:
mount /dev/mapper/SECRET /mnt
(or wherever you prefer to mount it).
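When you're done with the disk, reverse the steps so it's locked again
before you unplug it (same SECRET name as above):
umount /mnt
cryptsetup luksClose SECRET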
This next part, written in 2010, is now obsolete:
The easiest way is to use a program called palimpsest,
available on Ubuntu in the gnome-disk-utility package.
Run palimpsest with no arguments; click on the appropriate
storage device, then click the obvious buttons to create partitions,
label and format them. Click on the box to encrypt the partition,
type your password, then sit back and wait while it creates the
partition.
The label you give the partition is important: it will be used later
to mount it.
All straightforward, right? Except for the one part that isn't:
there's a button for safely removing the device after the busy cursor
has stopped, and it never works.
It always says the device is busy. Running a sync from a terminal doesn't
work; waiting ten minutes doesn't help. So just shrug, quit palimpsest
and eject the device. If you're lucky it created everything okay.
Mounting a LUKS filesystem from Gnome or Another Desktop
In theory, you should be able to plug in the device and after a few
seconds you'll be prompted for your password. If that doesn't happen,
which is not uncommon, try again; wait longer this time and cross your fingers.
If it still doesn't mount, try the command-line version in the next section.
Even if it doesn't work, you might get a useful error message.
Mounting a LUKS filesystem from the commandline
Assuming you used the partition label "SECRET" when you created the
LUKS encrypted partition, the physical partition is on /dev/sdb2,
and you want to mount it on /media/SECRET
(which already exists), these two commands (as root) will mount it:
sudo cryptsetup luksOpen /dev/sdb2 SECRET
(prompts for sudo password)
(prompts for LUKS password)
sudo mount /dev/mapper/SECRET /media/SECRET
Easy -- yet a bit frustrating. There seems to be no way to do
this purely through /etc/fstab, so you have to remember the cryptsetup
command, or write an alias or script to do the two steps for you.
And you always have to type your sudo password as well as the password
for the filesystem, whereas with cryptoloop you only needed the
filesystem's password.
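For what it's worth, a wrapper script for those two steps might look like
this -- a rough sketch, assuming the same /dev/sdb2 partition, SECRET name
and /media/SECRET mount point as above:
#!/bin/sh
# mount-secret: open and mount the LUKS partition in one step.
sudo cryptsetup luksOpen /dev/sdb2 SECRET && \
    sudo mount /dev/mapper/SECRET /media/SECRET
You still type two passwords, but at least there's only one command to remember.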
In the end, I'm not convinced LUKS is a win. But since it's so hard to
manage cryptoloop filesystems from a Gnome desktop, it's probably worth
hassling with LUKS if you need to be able to interoperate with Gnome.
Update: I wrote that yesterday. Today, maybe three weeks after I
started using the card on a fairly regular basis to transfer
personal files between home and laptop, I had a filesystem failure:
I wrote to the card from the desktop, synced, unmounted, put it in
the laptop -- and got I/O errors and "You must specify filesystem
type" trying to mount it. I was able to fsck, and it apparently
restored from an old journal -- including old data.
No loss here, because the card is just a copy of what was on the
desktop machine.
But the lesson here is: these encrypted cards are great for emergency
backups. But you probably don't want to rely on one as your main
storage for anything important.
Tags: linux, encryption, security
[
13:57 Oct 05, 2010
More linux |
permalink to this entry |
]
Wed, 29 Sep 2010
We hit an interesting problem at work recently. A coworker made a deb
package which, during installation, needed to figure out the ID of
the user running it, so it could make files writable by that user.
Of course, while a package is being installed it's run by root,
so the trick is to find out who you were before you sudo'ed or su'ed to root.
He was using the command who am i -- reasonable, since it's been a
staple since the early days of Unix. For those not familiar with the
command, /usr/bin/who, if given two arguments, regardless of what those
arguments are, will print information about the current logged-in user.
It also offers a -m option to do the same thing. So who am i,
who a b, and who -m should all print a line like:
$ who am i
akkana pts/1 2010-09-29 09:33 (:0.0)
Except they don't. For me, they printed nothing at all -- which broke
my colleague's install script.
A quick poll among friends on IRC showed that who am i
worked for some people, failed for others, with no obvious logic to it.
It's the terminal
It took some digging to find out what was going on, but the difference
turned out to be the terminal being used. The who program -- with or
without -m -- gets its info from /var/run/utmp, a file that maintains
a record of who's logged in to the system. And it turns out some
terminals create a utmp entry, while others don't. So:
Program          Creates utmp entry?
gnome-terminal   yes
konsole          yes
xterm            no
xfterm4          yes
terminator       no
rxvt             no
roxterm          yes
I use xterm myself. Xterm is documented (in its man page) to modify
the utmp entry, and it has a command-line flag, +ut, plus two X
resources, ptyHandshake and utmpInhibit. None of the three work: setting
XTerm*ptyHandshake: true
XTerm*utmpInhibit: false
then running xterm +ut still doesn't show up in who.
I guess that's a bug in xterm (or Ubuntu's version of xterm).
How do you get the real user?
Okay, so who am i
clearly isn't a reliable way of getting
the user ID. What can you use instead?
Several people suggested the id program. It has a -r option which
supposedly prints the real UID. Unfortunately, what it really does is print:
$ id -r
id: cannot print only names or real IDs in default format
The man page doesn't offer any suggestions how to use a format other
than default, so we're kinda stuck there.
Update: people keep suggesting id -ru to me. Evidently I wasn't very
clear in this article: the goal is to get the real id of the login user.
In other words, if you're logged in as mary and using sudo, you want
mary, not root.
Alas, adding -u to id's flags gets only the effective user id: -u wins
over -r. This is very easy to test: sudo id -ru prints 0, as does
id -ru inside su.
But elly on Freenode had a great suggestion:
stat -c '%U' `readlink /proc/self/fd/0`
What does this do?
/proc/self is a symlink to /proc/pid, a directory where you can find
out all sorts of information about a process. One of the things you
can find out about a process is open file descriptors: in particular,
standard input, output and error. So /proc/self/fd/0 corresponds to
standard input of the current process -- which in the example above
is readlink.
What is readlink? Well, /proc/self/fd/0, in the normal case, is
actually a symlink to the terminal controlling the process.
readlink prints the file to which that link points -- for instance,
/dev/pts/1. That's the terminal being used.
Now that we know the name of the terminal, all we need to do is find
out who owns it. (This is the information who am i would have gotten
from utmp, had there been a utmp entry.) ls -l /dev/pts/1 will show
you that it's you, even if you run it as sudo ls -l /dev/pts/1.
You could take that and strip off fields to get the username, but
stat, as elly suggested, is a much better way of doing that.
Put it all together, and stat -c '%U' `readlink /proc/self/fd/0`
gets standard input for the current process, follows the link to get
the controlling terminal, then finds out who owns that terminal.
That's you!
A similar but slightly shorter solution suggested by Mikachu:
stat -c %u `tty`
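To bring it back to the original problem -- a package install script
running as root that wants to hand files to the real login user -- the
usage might look something like this (a sketch; the path is made up):
# In a maintainer script run via sudo: find the login user who owns
# the controlling terminal, then give that user the installed files.
REALUSER=$(stat -c '%U' "$(tty)")
chown -R "$REALUSER" /opt/example/userdata
This still depends on standard input being attached to a terminal, so
it won't help in a fully non-interactive install.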
Tags: linux, cmdline
[
17:39 Sep 29, 2010
More linux/cmdline |
permalink to this entry |
]
Mon, 13 Sep 2010
I've been setting up a new Lenovo X201 laptop at work.
(Yes, work -- I've somehow fallen into an actual job where I go in to
an actual office at least some of the time. How novel!)
At the office I have a Lenovo docking station, attached to a monitor
and keyboard. The monitor, naturally, has a different resolution from
the laptop's own monitor.
Under Gnome and compiz, when I plugged in the monitor, I could either let
the monitor mirror the laptop display -- in which case X would refuse to
work at greater than 1024x768, much smaller than the native resolution
of either the laptop screen or the external monitor -- or I could call
up the classic two-monitor configuration dialog, where I could
configure the external monitor to be its correct size and sit
alongside the computer's monitor. I had to do this every time I
plugged in.
If I wanted to work on the big screen,
then when I undocked, I had to drag all the windows on all desktops
back to the built-in LCD first, or they'd be lost.
Using just the external monitor and turning off the laptop screen
didn't seem to be an allowed option.
That all lasted for about two days. Gnome and I just don't get along.
Pretty soon gdm was mysteriously refusing to let me log in (probably didn't
like my under-1000 user id), and after wasting half a day fighting
it I gave up and reverted with relief to my familiar Openbox desktop.
But now I'm in the Openbox world and don't have that dialog anyway.
What are my options?
xrandr monitor detection
Fortunately, I already knew about
using
xrandr to send to a projector; it was only a little more complicated
using it for the monitor in the docking station. Running xrandr with
no arguments prints all the displays it currently sees, so you can tell
whether an external display or projector is connected and even what
resolutions it supports.
I used that for a code snippet in my .xinitrc:
# Check whether the external monitor is connected: returns 0 on success
xrandr | grep VGA | grep " connected "
if [ $? -eq 0 ]; then
xrandr --output VGA1 --mode 1600x900;
xrandr --output LVDS1 --off
else
xrandr --output VGA1 --off
xrandr --output LVDS1 --mode 1280x800
fi
That worked nicely. When I start X it checks for an external monitor,
and if it finds one it turns off the laptop display (so it's off when
the laptop is sitting closed in the docking station) and sets the
screen size for the external monitor.
Making it automatic
All well and good. I worked happily all day in the docking station,
suspended the laptop and un-docked it, brought it home, woke it up --
and of course the display was still off. Oops.
Okay, so it also needs the same check when resuming from suspend.
That used to be in /etc/acpi/resume.d, but in Lucid they've
moved it (because we definitely wouldn't want users to get complacent
and think they know how to configure things!) and now it lives in
/etc/pm/sleep.d. I created a new file,
/etc/pm/sleep.d/20_enable_display which looks like this:
#!/bin/sh
case "$1" in
resume)
# Check whether the external monitor is connected:
# returns 0 on success
xrandr | grep VGA | grep " connected "
if [ $? -eq 0 ]; then
xrandr --output VGA1 --mode 1600x900
xrandr --output LVDS1 --off
else
xrandr --output VGA1 --off
xrandr --output LVDS1 --mode 1280x800
fi
hsetroot -center `find -L $HOME/Backgrounds -name "*.*" | $HOME/bin/randomline`
;;
suspend|hibernate)
;;
*)
;;
esac
exit $?
Neat! Now, every time I wake from suspend, the laptop checks whether
an external monitor is connected and sets the resolution accordingly.
And it re-sets the background (using my
random
wallpaper method) so I don't get a tiled background on the big monitor.
Update: hsetroot -fill works better than -center
given that I'm displaying background images on two different
resolutions. Of course, if I wanted to get fancy I could make
separate background sets, one for each monitor, and choose images
from the appropriate set.
We're almost done. Two more possible adjustments.
Detecting undocking
First, while poking around in /etc/acpi I noticed a script
named undock.sh. In theory, I can put the same code snippet in
there, and then if I un-dock the laptop without suspending it first,
it will immediately change resolution. I haven't actually tried that yet.
Projectors
Second, this business of turning off the built-in display if there's
anything plugged into the VGA port is going to break if I use this
laptop for presentations, since a projector will also show up as VGA1.
So the code may need to be a little smarter. For example:
xrandr | grep VGA | grep " connected " | grep 16.0x
The theory here is that an external monitor will be able to do 1680 or
1600, so it will have a line like
VGA1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 434mm x 270mm
.
The 1680x matches the 16.0x pattern in grep.
A projector isn't likely to do more than 1280, so it won't match the
pattern '16.0x'. However, that isn't very robust; it will probably
fail for one of those fancy new 1920x1080 monitors.
You could extend it with
xrandr | grep VGA | grep " connected " | egrep '16.0x|19.0x'
but that's getting even more hacky ... and it might be time to start
writing some more intelligent code.
Which doubtless I'll do if I ever get a 1920x1080 monitor.
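If I do, the direction I'd probably take is something like this sketch
(untested; it assumes the VGA1 and LVDS1 output names from the snippets
above): instead of pattern-matching particular widths, find the widest
mode the external output advertises and treat anything 1600 pixels or
wider as a real monitor.
#!/bin/sh
# Sketch: let the widest mode advertised on VGA1 decide monitor vs. projector.
MAXW=$(xrandr | awk '
    /^VGA1 connected/ { inlist = 1; next }
    /^[^ ]/           { inlist = 0 }   # a new output section ends the VGA1 mode list
    inlist { split($1, wh, "x"); if (wh[1] + 0 > max) max = wh[1] + 0 }
    END { print max + 0 }')

if [ "$MAXW" -ge 1600 ]; then
    # Real external monitor: use its preferred mode, turn off the panel.
    xrandr --output VGA1 --auto
    xrandr --output LVDS1 --off
else
    # Projector (or nothing) on VGA1: leave the laptop panel running.
    xrandr --output LVDS1 --mode 1280x800
fi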
Tags: linux, X11, laptop
[
23:11 Sep 13, 2010
More linux/laptop |
permalink to this entry |
]
Thu, 09 Sep 2010
Part 2 in my Hugin series is out, in which I discuss how to rescue
difficult panoramas that confuse Hugin.
Hugin is an amazing program, but if you get outside the bounds of
the normal "Assistant" steps, the user interface can be a bit
confusing -- and sometimes it does things that are Just Plain Weird.
But with help from some folks on IRC, I found out that a newer
version of Hugin can fix those problems, and worked out how to do
it (as well as lots of ways that seemed like they should work, but
didn't).
Read the gory details in:
Hugin
part 2: Rescuing Difficult Panoramas.
There will be a Hugin Part 3, and possibly even a Part 4, discussing
things Hugin can do beyond panoramas.
Tags: writing, linux, graphics, panorama, hugin
[
14:58 Sep 09, 2010
More writing |
permalink to this entry |
]
Thu, 26 Aug 2010
A couple of weeks ago in my
Fotoxx
article I discussed using Fotoxx to create panoramas.
But for panoramas bigger than a couple of images, you're much better
off using the Linux panorama app: Hugin.
Hugin is very impressive, and much too capable to be summarized in a
single short article, so I'm planning three. This week's article is a
basic introduction:
Painless
Panorama Stitching with Hugin.
Tags: writing, linux, graphics, panorama, hugin
[
15:11 Aug 26, 2010
More writing |
permalink to this entry |
]
Thu, 12 Aug 2010
Dave stumbled on a neat little photo editor while tricking out his
old Vaio (P3/650 MHz, 192M RAM) and looking for lightweight apps.
It's called Fotoxx and it's quite impressive: easy to use and packed
with useful features.
So I wrote about it in this week's Linux Planet article:
Fotoxx,
the Greatest Little Linux Photo Editor You've Never Heard Of.
At first, I was most impressed by the Warp tool -- much easier to
use than GIMP's IWarp, though it's rather slow and not quite as
flexible as IWarp. But once I got to writing the article, I was
blown away by two additional features: it has an automatic panorama
stitcher and an HDR tool. GIMP doesn't have either of these
features, at all.
Now, panorama stitching used to be a big deal, but it isn't so much
any more now that Hugin has gotten much easier to use. (My article
in two weeks will be about Hugin.) Fotoxx isn't quite that flexible:
it can only stitch two images at a time, and can't handle images
with a lot of overlap. (But Hugin has some limitations too.)
But HDR -- wow! I've been meaning to learn more about making HDR
images in GIMP -- although it has no HDR tool, there are plug-ins to
make it a bit easier to assemble one, just like my Pandora plug-in
makes it a little easier to assemble panoramas. But now I don't need
to -- fotoxx handles it automatically.
I won't be switching from GIMP any time soon for regular photo
editing, of course -- GIMP is still much more flexible. But fotoxx
is definitely worth a look, and I'll be keeping it installed to make
HDR images, if nothing else.
Tags: writing, linux, graphics, panorama, HDR
[
15:44 Aug 12, 2010
More writing |
permalink to this entry |
]
Tue, 22 Jun 2010
Another in the continuing series of
"This isn't documented anywhere and Google searches aren't helpful":
Dave's printer was failing to print on Ubuntu Lucid. The only
failure mode: pages came out with the single printed line,
Unable to open the initial device, quitting.
What a great, helpful error message! Thanks, CUPS!
Web searching found lots of people using HP printers, using various
versions of the HPLIP software to install newer drivers and otherwise
reconfigure their HP setups.
Only problem: this printer isn't an HP. It's a Brother HL2070N,
which has given very good service and, until now, worked flawlessly
with every OS we've tried including Linux.
The solution? It turns out the problem really is HPLIP -- even if
the printer isn't an HP. What this message really meant was:
HPLIP isn't installed, and Ubuntu's CUPS refuses to work without it.
apt-get install hplip hplip-data
got the printer
working again.
Wouldn't it be nice if CUPS bothered to print error messages that
gave some hint of the real problem? Ideally, visible on the computer
to the user,
rather than on the printed page, so you don't have to waste 25 pages
of dead tree while you try to narrow down the problem?
Update: After further testing, we have established that a standard
Gnome-based Ubuntu Lucid machine needs hplip and hplip-data installed,
while a Lucid machine without Gnome needs those two plus hpijs. Or you
can get around needing any of them by ignoring CUPS' recommendation
for which driver to use, and choosing the Gutenprint driver in the
CUPS configuration screens.
A reader asked me if we had checked the CUPS error log. On one
machine, the log file was virtually empty; on another, there
actually were some lines about hpijs (nothing about hplip),
intermixed with a lot of debug chatter and a large number of errors
that were fairly clearly unrelated to anything we were doing.
Tags: linux, printing, cups
[
22:46 Jun 22, 2010
More linux |
permalink to this entry |
]
Fri, 18 Jun 2010
While I was in Europe, Dave stumbled on a handy alias on his Mac to
check the time where I was:
date -v +10
(+10 is the offset from the current time). But when he tried to
translate this to Linux, he found that the -v flag from FreeBSD's
date program wasn't available in the GNU date on Linux.
But I suggested he could do the same thing with the TZ environment
variable. It's not documented well anywhere I could find, but if you
set TZ to the name of a time zone, date will print out the time for
that zone rather than your current one.
So, for bash:
$ TZ=Europe/Paris date # time in Paris
$ TZ=GB date # time in Great Britain
$ TZ=GMT-02 date # time two timezones east of GMT
or for csh:
% ( setenv TZ Europe/Paris; date)
% ( setenv TZ GB; date)
% ( setenv TZ GMT-02; date)
That's all very well. But when I tried
% ( setenv TZ UK; date)
% ( setenv TZ FR; date)
they gave the wrong time, even though Wikipedia's
list
of time zones seemed to indicate that those abbreviations were okay.
The trick seems to be that setting TZ only works for names that exist
in /usr/share/zoneinfo/, or maybe in /usr/share/zoneinfo/posix/.
If you give a name that isn't there, like UK or FR or America/San_Francisco,
it won't give you an error; it'll just print GMT as if that was what
you had asked for.
So this trick is useful for printing times abroad -- but if you want
to be safe, either stick to syntaxes like GMT-2, or make a script that
checks whether your abbreviation exists in the directory before
calling date, and warns you rather than just printing the wrong time.
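Here's a minimal sketch of that checking script, written as a shell
function (the worldtime name is just something I made up):
worldtime() {
    zone="$1"
    if [ -e "/usr/share/zoneinfo/$zone" ]; then
        TZ="$zone" date
    else
        echo "worldtime: no such time zone: $zone" >&2
        return 1
    fi
}
Then worldtime Europe/Paris prints the time in Paris, while
worldtime UK complains instead of silently printing GMT.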
Tags: linux, tips, travel, cmdline
[
14:04 Jun 18, 2010
More linux/cmdline |
permalink to this entry |
]
Sun, 30 May 2010
I've been so busy with
Libre Graphics
Meeting -- a whirlwind of GIMP caucuses, open source graphics,
free art and sharing of ideas --
that I forgot to notice that part 2 of my kdenlive
article was up on Linux Planet.
Making
Movies in Linux with Kdenlive, part 2: Spice up Those Kdenlive Videos.
Tags: writing, linux, video
[
03:45 May 30, 2010
More writing |
permalink to this entry |
]
Thu, 13 May 2010
A couple of weeks ago, I shot a lot of short video clips with my
digital camera at an indoor fun fly (in the intervals when I wasn't
crashing around with the other crazy pilots).
But then ... what to do with a bunch of disconnected video clips?
I've uploaded short clips to youtube before, but never extracted the
good parts and edited them together. And most video editing programs
look pretty complex.
The answer turned out to be kdenlive, which was surprisingly easy to
use -- once I got past one initial bug. So I wrote up the details.
Part I, covering the basics of how to get started and combine clips,
is on Linux Planet:
Making
Movies in Linux with Kdenlive.
Watch for part II in a couple of weeks, where I'll cover transition
effects, music and titles.
Tags: writing, linux, video
[
19:25 May 13, 2010
More writing |
permalink to this entry |
]
Sun, 09 May 2010
Ubuntu's latest release, 10.04 "Lucid Lynx", really seems remarkably
solid. It boots much faster than any Ubuntu of the past three years,
and has some other nice improvements too.
But like every release, they made some pointless random undocumented
changes that broke stuff. The most frustrating has been getting my
front-panel flash card reader to work under Lucid's new udev,
so I could read SD cards from my camera and PDA.
The SD card slot shows up as /dev/sdb, but unless there's a card
plugged in at boot time, there's no /dev/sdb1 that you can
actually mount.
hal vs udisks
Prior to Lucid, the "approved" way of creating sdb1 was to
let hald-addons-storage poll every USB device every so
often, to see if anyone has plugged in a card and if so, check its
partition table and create appropriate devices.
That's a lot of polling -- and in any case, hald isn't standard on
Lucid, and even when it's installed, it sometimes runs and sometimes
doesn't. (I haven't figured out what controls whether it decides to run).
Hal isn't even supposed to be needed on Lucid -- it's supposed to use
DeviceKit (since renamed to udisks) for that.
Except I guess they couldn't quite figure out how to get udisks working
in time, so they patched things together so that on Gnome systems, hald
does the same old polling stuff -- and on non Gnome systems, well,
maybe it does and maybe it doesn't. And maybe you can't read your
camera cards. Oh well!
udev rules
But on systems prior to Lucid there was another way:
make a udev rule to create sdb1 through sdb15 every time. I have an older
article
on setting up udev rules for multicard readers, but none of my old
udev rules worked on Lucid.
After many rounds of udevadm info -a -p /block/sdb, udevadm test /block/sdb,
service udev restart, and many reboots, I finally found a rule that worked.
Create a /etc/udev/rules.d/71-multicard-reader.rules file
containing the following:
# Create all devices for multicard reader:
KERNEL=="sd[b-g]", SUBSYSTEMS=="usb", ATTRS{idVendor}=="1d6b", ATTRS{idProduct}=="0002", OPTIONS+="all_partitions,last_rule"
Replace the 1d6b and 0002 with the vendor and product of your own device,
as determined with udevadm info -a -p /block/sdb ... and don't be tempted
to use the vendor and device ID you get from lsusb, because those are different.
What didn't work that used to? String matches. Some of them.
For example, this worked:
KERNEL=="sd[b-g]", SUBSYSTEMS=="scsi", ATTRS{model}=="*SD*", NAME{all_partitions}="sdcard"
but these didn't:
KERNEL=="sd[b-g]", SUBSYSTEMS=="scsi", ATTRS{model}=="*SD*Reader*", NAME{all_partitions}="sdcard"
KERNEL=="sd[a-g]", SUBSYSTEMS=="scsi", ATTRS{model}=="USB SD Reader ", NAME{all_partitions}="cardsd"
Update: The first of those two lines does indeed work now, whereas
it didn't when I was testing. It's possible that this has something
to do with saving hardware states and needing an extra
udevadm trigger
, as suggested in Alex's
Changes in Ubuntu Lucid to udev.
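In case it helps, the reload-and-retrigger sequence being referred to is
roughly this (my guess at the exact invocation, not a quote from that page):
sudo udevadm control --reload-rules
sudo udevadm trigger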
According to udevadm info, the model is "USB SD Reader " (three spaces
at the end). But somehow "*SD*" matches this while "*SD*Reader*" and
the exact string do not. Go figure.
Numeric order
I'd like to have this rule run earlier, so it runs before
/lib/udev/rules.d/60-persistent-storage.rules and could use
OPTIONS+="last_rule" to keep the persistent storage rules from firing
(they run a lot of unnecessary external programs for each device).
But if I rename the rule from 71-multicard-reader.rules to 59-,
it doesn't run at all. Why? Shrug. It's not like udevadm test
will tell me.
Other things I love (not) about the new udev
- I love how if you give the udevadm info arguments in the wrong order,
  -p -a, it means something else and gives an error message.
- I love how udevadm test doesn't actually test the same rules udev
  will use, so it's completely unrelated to anything.
- I love the complete lack of documentation on things like string
  matching and how the numeric order is handled.
- I love how you can't match both the device name (a string) and the
  USB IDs in the same rule, because one is SUBSYSTEMS=="scsi" and the
  other is SUBSYSTEMS=="usb".
- Finally, I love how there's no longer any way to test udev rules on
  a running system -- if you want it to actually create new devices,
  you have to reboot for each new test. service udev restart and
  udevadm control --reload-rules don't touch existing devices.
Gives me that warm feeling like maybe I'm not missing out on the full
Windows experience by using Linux.
Tags: linux, ubuntu, kernel, udev, install
[
21:51 May 09, 2010
More linux/kernel |
permalink to this entry |
]
Wed, 05 May 2010
On a Linux list, someone was having trouble with wireless networking,
and someone else said he'd had a similar problem and solved it by
reinstalling Kubuntu from scratch. Another poster then criticised
him for that:
"if
the answer is reinstall, you might as well downgrade to Windows.",
and later added,
"if
"we should understand a problem, and *then* choose a remedy to match."
As someone who spends quite a lot of time trying to track down root
causes of problems so that I can come up with a fix that doesn't
involve reinstalling, I thought that was unfair.
Here is how I replied on the list (or you can go straight to
the
mailing list version):
I'm a big fan of understanding the root cause of a problem and solving it
on that basis. Because I am, I waste many days chasing down problems
that ought to "just work", and probably would "just work" if I
gave in and installed a bone stock Ubuntu Gnome desktop with no
customizations. Modern Linux distros (except maybe Gentoo) are
written with the assumption that you aren't going to change anything
-- so reverting to the original (reinstalling) will often fix a problem.
Understanding this stuff *shouldn't* take days of wasted time -- but
it does, because none of this crap has decent documentation. With a
lot of the underlying processes in Linux -- networking, fonts, sound,
external storage -- there are plenty of "Click on the System Settings
menu, then click on ... here's a screenshot" howtos, but not much
"Then the foo daemon runs the /etc/acpi/bar.sh script, which calls
ifconfig with these arguments". Mostly you have to reverse-engineer
it by running experiments, or read the source code.
Sometimes I wonder why I bother. It may be sort of obsessive-compulsive
disorder, but I guess it's better than washing my hands 'til they bleed,
or hoarding 100 cats. At least I end up with a nice customized system
and more knowledge about how Linux works. And no cat food expenses.
But don't get on someone's case because he doesn't have days to
waste chasing down deep understanding of a system problem. If
you're going to get on someone's case, go after the people who
write these systems and then don't document how they actually work,
so people could debug them.
Tags: linux, tech, documentation
[
19:37 May 05, 2010
More linux |
permalink to this entry |
]
Thu, 22 Apr 2010
On Linux Planet, an article about the
/etc/fstab file and
how to customize it:
Understanding
fstab.
Tags: writing, linux
[
11:37 Apr 22, 2010
More writing |
permalink to this entry |
]
Thu, 15 Apr 2010
Quick tip from Dave, passed along to someone else trying to use
an Apple keyboard on Linux:
On Linux, for some reason Apple keyboards' function keys don't work
by default.
Most of them try to run special functions instead, like
volume up/down or play/pause.
But you can get normal function keys by talking to the kernel module
that drives the keyboard:
echo 2 > /sys/module/hid_apple/parameters/fnmode
This will only last until shutdown, so put that line in /etc/rc.local
or a similar place so it runs every time you boot.
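Another way to make the setting stick (a sketch of my own; it assumes
hid_apple is loaded from the normal root filesystem rather than from an
initramfs, in which case you'd also need to regenerate the initramfs):
# /etc/modprobe.d/hid_apple.conf
options hid_apple fnmode=2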
Here's an
Ubuntu help page on
Apple Keyboards with more information and other tricks.
Tags: linux, apple, kernel, keyboard
[
16:03 Apr 15, 2010
More linux/kernel |
permalink to this entry |
]
Thu, 08 Apr 2010
On Linux Planet, an article about how Upstart manages the Linux boot
process, how it's used in various distros, and how to explore and
control it:
The
Upstart Startup Manager (Linux Boot Camp part 2)
Tags: writing, linux, boot
[
10:49 Apr 08, 2010
More writing |
permalink to this entry |
]
Fri, 02 Apr 2010
A while back I worked on an
error
handler for bash that made the
shell a lot friendlier for newbies (or anyone else, really).
Linux Planet gave me the chance to write it up in more detail,
explaining a bit more about how it works:
Making
Bash Error Messages Friendlier.
Tags: writing, linux, shell
[
17:17 Apr 02, 2010
More writing |
permalink to this entry |
]
Thu, 01 Apr 2010
Second time I've run into this -- time to write it down.
Trying to run
debootstrap
to install the new Ubuntu beta, I kept hitting a snag right at the
beginning: debootstrap kept saying the filesystem was mounted with
the wrong permissions, and it couldn't create an executable file
or a device.
I have lines for each of my spare filesystems in /etc/fstab, like this:
/dev/sda4 /lucid ext3 relatime,user,noauto 0 0
That way, if I'm booted into one OS but I want to check a file from
another, I can mount it without needing sudo, just by typing mount /lucid.
Not being able to create executable files means it's mounted with the
noexec flag. I checked another machine and saw that it was using lines like
/dev/sda4 /lucid ext3 exec,user,noauto 0 0
I added the "exec," to fstab, unmounted and remounted ... and it was
still mounted with noexec.
Turns out on some Linux versions, making a filesystem user mountable
turns off exec even if you've specified it explicitly. You have to add
the exec after the user.
But that still didn't make debootstrap happy, because it couldn't create
a device. That's a separate fstab option, dev, and user implies nodev.
So here's the fstab entry that finally worked:
/dev/sda4 /lucid ext3 relatime,user,exec,dev,noauto 0 0
Tags: linux, filesystem, fstab, tips
[
23:07 Apr 01, 2010
More linux |
permalink to this entry |
]
Sat, 27 Mar 2010
Three times now I've gotten myself into a situation where I was trying
to install Ubuntu and for some reason couldn't burn a CD. So I
thought hey, maybe I can make a bootable USB image on this handy
thumb drive here. And spent the next three hours unsuccessfully
trying to create one. And finally gave up, got in the car and went to buy
a new CD burner or find someone who could burn the ISO to a CD because
that's really the only way you can install or run Ubuntu.
There are tons of howtos on the web for creating live USB sticks for
Ubuntu. Almost all of them start with "First, download the CD image
and burn it to a CD. Now, boot off the CD and ..."
The few that don't start that way discuss apps like usb-creator-gtk or
unetbootin, which work great if you're burning the current Ubuntu Live CD
image from a reasonably current Ubuntu machine, but which fail miserably
in every other case (wildly pathological cases like burning the current
Ubuntu alternate installer CD from the last long-term-support version of
Ubuntu. I mean, really, should that be so unusual?)
Tonight, I wanted a bootable USB of Fedora 12. I tried the Ubuntu
tools already mentioned, but usb-creator-gtk won't even try with
an image that isn't Ubuntu, and unetbootin wrote something but the
resulting stick didn't boot.
I asked on the Fedora IRC channel, where a helpful person
pointed me to this paragraph on
copying an ISO image with dd.
Holy mackerel! One command:
dd if=Fedora-12-i686-Live.iso of=/dev/sdf bs=8M
and in less than ten minutes it was ready. And it booted just fine!
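One caution of my own, not from the Fedora page: be very sure of the
device name before you run dd, since it will cheerfully overwrite
whatever disk you point it at -- /dev/sdf just happens to be where the
stick showed up here.
fdisk -l                                        # find the disk whose size matches the stick
dd if=Fedora-12-i686-Live.iso of=/dev/sdf bs=8M
sync                                            # flush writes before unplugging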
Really, Ubuntu, you should take a look at Fedora now and then.
For machines that are new enough, USB boot is much faster and easier
than CD burning -- so give people an easy way to get a bootable USB
version of your operating system. Or they might give up and try
a distro that does make it easy.
Tags: linux, install, fedora, ubuntu
[
23:01 Mar 27, 2010
More linux/install |
permalink to this entry |
]
Thu, 25 Mar 2010
My latest article is up on Linux Planet:
How
Linux Boots: Linux Boot Camp (Part I: SysV Init)
It describes the boot sequence, from grub to kernel loading to init
scripts to starting X. Part I covers the classic "SysV Init" model
still used to some extent by every distro; part II will cover
Upstart, the version that's gradually working its way into some of
the newer Linux releases.
Tags: writing, linux, boot, grub
[
15:25 Mar 25, 2010
More writing |
permalink to this entry |
]
Thu, 11 Mar 2010
Part 3 and final of my series on configuring Ubuntu's new grub2 boot menu.
I translate a couple of commonly-seen error messages, but most of
the article is devoted to multi-boot machines. If you have several
different operating systems or Linux distros installed on separate
disk partitions, grub2 has some unpleasant surprises, so see my
article for some (unfortunately very hacky) workarounds for its
limitations.
Why
use Grub2? Good question!
(Let me note that I didn't write the title, though I don't disagree
with it.)
Tags: writing, linux, boot, grub, ubuntu
[
10:56 Mar 11, 2010
More writing |
permalink to this entry |
]
Tue, 09 Mar 2010
A friend was trying to get some of her laptop's function keys working
under Ubuntu, and that reminded me that I'd been meaning to do the
same on my Vaio TX 650P.
My brightness keys worked automagically -- I suspected via the scripts
in /etc/acpi -- and that was helpful in tracking down the rest of the
information I needed. But it still took a bit of fiddling since
(surprise!) how this stuff works isn't documented.
Update: That "isn't documented" remark applies to the ACPI system.
Matt Zimmerman points out that there is some good documentation on
the rest of the key-handling system, and pointed me to two really
excellent pages:
Hotkeys architecture
and
Hotkeys
Troubleshooting.
Recommended reading!
Here's the procedure I found.
First, use acpi_listen to find out what events are generated
by the key you care about. Not all keys generate ACPI events.
I haven't yet figured out what controls this -- possibly the kernel.
When you type the key, you're looking for something like this:
sony/hotkey SPIC 00000001 00000012
You may get separate events for key down and key up. It's your choice
as to which one matters.
Once you know the code for your key, it's time to make it do something.
Create a new file in /etc/acpi/events -- I called mine sony-lcd-btn.
It doesn't matter what you call it -- acpid will read all of them.
(Yes, that means every time you start up it's reading all those
toshiba and asus files even if you have a Lenovo or Sony.
Looks like a nice place to shave off a little boot time.)
The file is very simple and should look something like this:
# /etc/acpi/events/sony-lcd-btn
event=sony/hotkey SPIC 00000001 00000012
action=/etc/acpi/sonylcd.sh
Now create a script for the action you specified in the event file.
I created a script /etc/acpi/sonylcd.sh that looks like this:
#! /bin/bash
# temporary, for testing:
echo "LCD button!" >/dev/console
Now restart acpid: service acpid restart if you're on karmic, or
/etc/init.d/acpid restart on earlier releases.
Press the button. If you're running from the console (or using a
tool like xconsole), and you got all
the codes right, you should be able to see the echo from your script.
Now you can do anything you want. For instance, when I press the LCD
button I generally want to run this:
xrandr --output VGA --mode 1024x768
Or to make it toggle, I could write a slightly smarter script using
xrandr --query to find out the current mode and behave accordingly.
I'll probably do that at some point when I have a projector handy.
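Something along these lines would probably do it -- a sketch, untested;
it assumes the output really is named VGA as in the command above, and
note that a script run by acpid won't have your X display in its
environment, so you'd need something like the two exports below,
adjusted for your own setup:
#!/bin/sh
# Toggle the external VGA output when the LCD button is pressed.
export DISPLAY=:0
export XAUTHORITY=/home/username/.Xauthority    # adjust for your login

if xrandr --query | grep -q '^VGA connected 1024x768'; then
    xrandr --output VGA --off
else
    xrandr --output VGA --mode 1024x768
fi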
Tags: linux, acpi
[
17:15 Mar 09, 2010
More linux/kernel |
permalink to this entry |
]
Thu, 25 Feb 2010
Part 2 of my 3-parter on configuring Ubuntu's new grub2 boot menu
covers cleaning up all the bogus menu entries (if you have a
multiple-boot system) and some tricks on setting color and image
backgrounds:
Cleaning
up your boot menu (Grub2 part 2).
Tags: writing, linux, boot, grub, ubuntu
[
22:49 Feb 25, 2010
More writing |
permalink to this entry |
]
Wed, 24 Feb 2010
I'm finally getting caught up after
SCALE 8x,
this year's Southern CA Linux Expo.
A few highlights (not even close to a comprehensive list):
Friday:
The UbuCon and Women in Open Source (WIOS) were both great successes,
with a great speaker list and good attendance. It was hard to choose
between them.
Malakai Wade, Mirano Cafiero, and Saskia Wade, two 12-year-olds and an
8-year-old, presented on "Ultimate Randomness - Girl voices in open source".
Great stuff! They sang, they discussed their favorite apps, they
showed an animated video made with open source tools of dolls
in a dollhouse. Lots of energy, confidence and fun. Loved it!
I hope to see more of these girls.
I liked Nathan Haines' demo of "Quickly", an app for rapid development
of python-gtk apps. It looks like a great app, especially for
beginning programmers, though his demo did also illustrate the
problems with complex UIs filled with a zillion similar toolbuttons.
(I'm not criticising Nathan; I find UIs like that very difficult to use,
especially under pressure like a live demo in front of an audience.)
Happily, the UbuCon and WIOS scheduled their lightning talks at
different times (though UbuCon's conflicted with WIOS's "How to give
a Lightning Talk" session). So lightning talk junkies enjoyed two
hours of talks back to back, plus the chance to give two different
talks to different audiences. Hectic but a lot of fun.
Saturday
I was a little disappointed with the Git Tips & Tricks panel; I wanted
more git tips and less discussion of projects that happen to use Git.
I liked Don Marti's section on IkiWiki;
it looks like a great tool and I wish Don had had more time to present.
I liked Emma Jane Hogbin's useful and interesting talk on "Looking
Beautiful in Print", full of practical tips for how to design good
flyers and brochures using tools like OpenOffice.
Diana Chen, who got introduced to open source only a year ago at SCALE
7x, gets the award for courage: she gave a talk on "Learning python
for non-programmers" using a borrowed laptop that I'm not sure she'd
even seen before the presentation. Unfortunately, the
laptop turned out to be poorly suited to the task (no Python installed?
Dvorak keymap?) so Diana struggled to show what she'd planned, but
she came through and her demos eventually worked great.
I hope she wasn't too discouraged by the difficulties, and keeps
presenting -- preferably with more time to practice ahead of time.
The room was absolutely packed --
they had to bring in lots more chairs and there were still a lot of
people standing. There's obviously a huge amount of interest in
beginner programming talks at this conference!
Shawn Powers' talk, "Linux is for Smart People, and You're Not as Dumb
as You Think", was as entertaining as the title suggested --
an excellent beginner-track talk that I think everyone enjoyed.
Sunday
I'm not going to review Sunday's program, because I was busy
obsessing over my own "Featherweight Linux" talk. I'll just say that
SCALE is a great place to give a talk -- the audience was great, with
excellent questions and no heckling and, most important, they laughed
when I hoped they would. :-)
Exhibitors
I didn't get to spend much time on the show floor, but it looked
active and fun.
The Linux Astronomy folks
had a fantastic display, with a big table with a simulated Martian landscape
and a couple of robotic rovers exploring it and a robotic telescope
driven by a milling machine program, as well as computers exhibiting a
selection of Linux astronomy, science and math-teaching software.
ZaReason had a booth, and my mom was able to get info on how to get
a spare battery for her laptop. (Can I take a moment to say how cool
it is to be wandering around a Linux conference with my mom, who's
carrying her own Linux netbook?)
An Ubuntu/Canonical table was testing people's laptops for
compatibility with the next Ubuntu release. (There may have been
other distros tested as well; I wasn't clear on that.)
Engineers
Without Borders, Orange County looked really interesting and
assured me that not all of them were in Orange County, and there's
activity up here in the Bay Area as well. Definitely on my list
to learn more.
Linux Pro magazine was giving out copies of Linux Pro and Ubuntu User,
both fantastic magazines packed with good articles.
Beginners and Hobbyists
One notable feature of SCALE is the low price. This conference is very
affordable, which means there are a lot of hobbyists, beginners and
even people just considering trying Linux. They've offered a "Beginner
track" for several years, though not all the talks in that track are
really accessible to beginners (speakers: here's your chance to propose
that great beginner talk the other conferences aren't interested in!
Help some new folks!)
There's a lot of energy and diversity and a wide range of interests
and knowledge -- yet there's still plenty of depth for hardcore
Linux geeks.
Overall, a fantastic conference. The SCALE organizers do a great job
of organizing everything, and if there were any glitches they weren't
evident from the outside.
Tags: conferences, linux
[
15:34 Feb 24, 2010
More conferences |
permalink to this entry |
]
Sat, 20 Feb 2010
I gave a lightning talk at the Ubucon -- the Ubuntu miniconf -- at the
SCALE 8x, Southern
California Linux Expo yesterday. I've been writing about grub2
for Linux Planet but it left
me with some, well, opinions that I wanted to share.
A lightning talk
is an informal very short talk, anywhere from 2 to 5 minutes.
Typically a conference will have a session of lightning talks,
where anyone can get up to plug a project, tell a story or flame about
an annoyance. Anything goes.
I'm a lightning talk junkie -- I love giving them, and I
love hearing what everyone else has to say.
I had some simple slides for this particular talk. Generally I've
used bold or other set-offs to indicate terms I showed on a slide.
SCALE 8x, by
the way, is awesome so far, and I'm looking forward to the next two days.
Grub2 3-minute lightning talk
What's a grub? A soft wriggly worm.
But it's also the Ubuntu Bootloader.
And in Karmic, we have a brand new grub: grub2!
Well, sort of. Karmic uses Grub 2 version 1.97 beta4.
Aside from the fact that it's a beta -- nuff said about that --
what's this business of grub TWO being version ONE point something?
Are you hearing alarm bells go off yet?
But it must be better, right?
Like, they say it cleans up partition numbering.
Yay! So that confusing syntax in grub1, where you have to say
(hd0,0) that doesn't look like anything else on Linux,
and you're always wanting to put the parenthesis in the wrong place
-- they finally fixed that?
Well, no. Now it looks like this: (hd0,1)
THEY KEPT THE CONFUSING SYNTAX BUT CHANGED THE NUMBER!
Gee, guys, thanks for making things simpler!
But at least grub2 is better at graphics, right? Like what if
you want to add a background image under that boring boot screen?
A dark image, because the text is white.
Except now Ubuntu changes the text color to black.
So you look in the config file to find out why ...
if background_image `make_system_path_relative...
set color_normal=black/black
... there it is! But why are there two blacks?
Of course, there's no documentation. They can't be fg/bg --
black on black wouldn't make any sense, right?
Well, it turns out it DOES mean foreground and background -- but the second
"black" doesn't mean black. It's a special grub2 code for "transparent".
That's right, they wrote this brand new program from scratch, but they
couldn't make a parser that understands "none" or "transparent".
What if you actually want text with a black background? I have
no idea. I guess you're out of luck.
Okay, what about dual booting? grub's great at that, right?
I have three distros installed on this laptop. There's a shared /boot
partition. When I change something, all I have to do is edit a file
in /boot/grub. It's great -- so much better than lilo! Anybody remember
what a pain lilo was?
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by /usr/sbin/grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
Oops, wait -- not with grub2. Now I'm not supposed to edit
that file. Instead, I edit files in TWO places,
/etc/grub.d and /etc/default/grub.conf, and then
run a program in a third place, /usr/bin/update-grub.
All this has to be done from the same machine where you installed
grub2 -- if you're booted into one of your other distros, you're out
of luck.
grub2 takes us back to the bad old days of lilo.
FAIL
Grub2 really is a soft slimy worm after all.
But I have some ideas for workarounds. If you care, watch my next
few articles on LinuxPlanet.com.
Update: links to Linux Planet articles:
Part 1: Grub2 worms into Ubuntu
Part 2: Cleaning up your boot menu
Part 3: Why use Grub2? Good question!
Tags: grub, ubuntu, linux, boot, speaking, conferences
[
11:29 Feb 20, 2010
More linux |
permalink to this entry |
]
Thu, 11 Feb 2010
Upgraded to Ubuntu 9.10 Karmic and wondering how to configure your
boot menu or set it up for multiple boots?
Grub2 Worms Into Ubuntu (part 1)
is an introductory tutorial -- just enough to get you started.
More details will follow in parts 2 and 3.
Tags: writing, linux, boot, grub, ubuntu
[
17:40 Feb 11, 2010
More writing |
permalink to this entry |
]
Mon, 25 Jan 2010
Ever since I upgraded to Ubuntu 9.10 "Karmic koala", printing text files
has been a problem. They print out with normal line height, but in a
super-wide font so I only get about 48 ugly characters per line.
Various people have reported the problem -- for instance,
bug 447961
and this
post -- but no one seemed to have an answer.
I don't have an answer either, but I do have a workaround. The problem
is that Ubuntu is scaling incorrectly. When it thinks it's putting
10 characters per inch (cpi) on a line, it's actually using a font that
only fits 6 characters. But if you tell it to fit 17 characters per inch,
that comes out pretty close to the 10cpi that's supposed to be the default:
lpr -o cpi=17 filename
As long as you have to specify the cpi, try different settings for it.
cpi=20 gives a nice crisp-looking font with about 11.8 characters per inch.
If needed, you can adjust line spacing with lpi=NN as well.
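If one of those values works for you, you can save it as the queue's
default so plain lpr picks it up from then on (a sketch using standard
CUPS tools; substitute your printer's queue name for printername):
lpoptions -p printername -o cpi=17 -o lpi=6
The settings land in ~/.cups/lpoptions for your user.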
Update: The ever-vigilant Till Kamppeter has tracked the problem
down to the font used by texttopdf for lp/lpr printing. Interesting details in
bug
447961.
Tags: printing, ubuntu, linux, tips
[
16:36 Jan 25, 2010
More linux |
permalink to this entry |
]
Thu, 14 Jan 2010
Didn't get the calendar you wanted for Christmas this year?
Print your own, with your choice of photos and holidays.
My Linux Planet Photo
Calendar article shows how.
Tags: writing, linux, calendar
[
17:53 Jan 14, 2010
More writing |
permalink to this entry |
]
Tue, 15 Dec 2009
I've been using fetchmail for a couple of years to get mail from the
mail server to my local machine. But it had one disadvantage: it meant
that I had to have postfix (or a similar large and complex MTA)
configured and running on every machine I use, even the lightweight
laptop.
I run procmail to filter my mail into folders -- Linuxchix mail into
one folder, GIMP mailing lists into another, and so forth -- and it
seemed like it ought to be possible for fetchmail to call procmail
directly, without going through postfix.
I found several suggestions on the web -- for instance,
fetchmail-procmail-sendmail -- but they didn't work for me. fetchmail
downloaded each message, passed it to procmail, and procmail appended
it to the relevant mailbox without the appropriate "From " header that
mail programs need to tell where each new message starts.
Finally, on a tip from bma on #linuxchix and after a little
experimentation, I added this line to ~/.fetchmailrc:
mda /usr/bin/procmail -f %F -m /home/username/.procmailrc
Works great! And it's a lot faster than going through postfix.
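For anyone who hasn't met procmail, here's the flavor of recipe that
~/.procmailrc holds. This is just an illustration, not my actual rules:
# File Linuxchix list mail into its own mbox folder under ~/Mail.
MAILDIR=$HOME/Mail
DEFAULT=$MAILDIR/inbox

:0:
* ^List-Id:.*linuxchix
linuxchix
Anything that doesn't match a recipe falls through to $DEFAULT.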
Tags: linux, email, fetchmail, procmail
[
15:07 Dec 15, 2009
More tech/email |
permalink to this entry |
]
Sat, 05 Dec 2009
I had been dithering about whether to buy another inkjet to replace
the Epson C86 that died earlier this year. The Epson wasn't all that
old, but its nozzles wouldn't unclog, and reviews of Epson's latest
printers aren't at all complimentary.
HP looked like the best solution, since they're the only printer
manufacturer that supports Linux directly.
I knew I wasn't going to buy Canon, because their closed protocols mean
that every Linux driver has to be reverse engineered, and I certainly
didn't want a Lexmark (see our last experience with Lexmark in
Cracking the
Lexmark Code).
But which HP? Their array of models is baffling, and no one seems to
know the difference between Deskjets, Officejets and Photosmarts,
or whether the inks fade, or whether the nozzles are built into the
ink cartridges (so a clogged nozzle doesn't mean a dead printer
like it does with Epson). And there's no way to get print samples.
So I dithered and stalled -- until Fry's put the HP Deskjet F4280 on
sale for $20.
The online reviews were fairly positive.
And for that price, and with Linux support, how bad could it be?
Answer: not bad at all.
It set up pretty easily in CUPS, though the CUPS test page didn't work
even after several tries. Fortunately, I don't need to print CUPS test pages.
Printing worked fine from GIMP, Firefox and OpenOffice.
The print quality is surprisingly good.
(Note: the F4280 is not the same printer as the Photosmart C4280,
which caused some confusion at Fry's when I tried to actually buy one).
Text and web page on regular paper come out crisp and sharp.
"High quality" on good photo paper looks like a photo as long as
you don't examine it too closely. It'll be fine for my holiday
greeting cards, business cards and most other tasks involving photos.
"Photo quality" takes a lot longer, and is indeed better
than "High" if you examine it with a loupe. Nobody's going to confuse
it with a real photo print under magnification,
but it'll look fine on the wall.
Here's the part that impressed me most: it can print all the way to
the edge of the paper with no hassle. I could never do that with the
C86: though the hardware was supposedly capable of it, the Gutenprint
drivers -- the reason I'd been sticking with Epson all those years --
never could handle it (and tended to print yellow smears on the
borders if you tried it). Good job, HP!
It's an "All in one" so it has a built-in scanner too (no fax).
SANE (on Ubuntu 9.10) doesn't see the scanner, and I haven't tried to
track that down since I already have a good scanner. I wouldn't have
bought an "all in one" except that dedicated printers are quite a bit
more expensive.
Update: it's the usual Ubuntu permissions problem,
combined with new udev rules. Root sees the scanner, users don't,
unless you add lines to two different udev rules files. In
/lib/udev/rules.d/40-libsane.rules, add:
ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="2504", ENV{libsane_matched}="yes"
then create a new file /etc/udev/rules.d/45-libsane.rules and
put in it:
SYSFS{idVendor}=="03f0", SYSFS{idProduct}=="2504", MODE="664", GROUP="scanner"
More details are in
bug
121082.
And wow, the scanner output is really bad. I mean really, really bad.
I'm happy with the printer but I'll definitely keep my old Epson scanner.
Reviews complain that the F4280 is rather ink-hungry and the
ink cartridges are overpriced; but every inkjet printer review says
that (probably with good reason).
I don't print that much, so I'm not too worried.
And of course I know nothing about long term reliability
or how fade-resistant the prints will be.
Ask me in six months. But so far I'm quite pleased.
A nice printer with excellent Linux drivers.
Tags: printing, linux
[
20:47 Dec 05, 2009
More tech |
permalink to this entry |
]
Wed, 25 Nov 2009
Continuing the discussion of those funny characters you sometimes
see in email or on web pages, today's Linux Planet article
discusses how to convert and handle encoding errors, using
Python or the command-line tool recode:
Mastering
Characters Sets in Linux (Weird Characters, part 2).
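As a one-line taste of the kind of conversion involved (my example, not
one from the article): if a file is really Latin-1 but everything is
treating it as UTF-8, recode can convert it in place:
recode latin1..utf8 somefile.txt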
Tags: writing, linux, unicode, i18n, charsets, ascii, programming, python
[
15:06 Nov 25, 2009
More writing |
permalink to this entry |
]
Mon, 16 Nov 2009
A week ago I wrote about my mouse woes and how they were
solved
by enabling the "Enable modesetting on intel by default" kernel option.
But all was still not right with X. I could boot into a console,
start X, and everything was fine -- but once X was running, I
couldn't go back to console mode. Switching to a console, e.g.
Ctrl-Alt-F2, did nothing except make the mouse cursor disappear,
and any attempt to quit X to go back to my login shell put the
monitor in sleep mode permanently -- the machine was still up, and
I could ssh in or ctrl-alt-Delete to reboot, but nothing else I did
would bring my screen back.
It wasn't strictly an Ubuntu problem, though this showed up with
Karmic; I have a Gentoo install on another partition and it had the
same problem. And I knew it was a kernel problem, because the Ubuntu
kernel did let me quit X.
I sought in vain among the kernel's various Graphics settings.
You might think that "enable modesetting" would be related to,
you know, being unable to change video modes ... but it wasn't.
I tried different DRM options and switching framebuffer on and off.
Though, oddly, enabling framebuffer didn't actually seem to enable
the framebuffer.
Finally I stepped through the Graphics section of make menuconfig,
comparing my settings with a working kernel, and saw a couple of
differences that didn't look at all important: "Select compiled-in fonts"
and "VGA 8x16 font". Silly, but what could they hurt? I switched them on
and rebuilt.
And on the next boot, I had a framebuffer, and mode switching.
So be warned: those compiled-in fonts are not optional if you
want a framebuffer; and you'd better want a framebuffer, because that
isn't optional either if you want to be able to get out of X once
you start it.
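If you'd rather verify a .config directly instead of hunting through
menuconfig, the symbols behind those two options are (as far as I can
tell) CONFIG_FONTS and CONFIG_FONT_8x16, so you want to see something like:
$ grep -E 'CONFIG_FONTS=|CONFIG_FONT_8x16=' .config
CONFIG_FONTS=y
CONFIG_FONT_8x16=y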
Tags: linux, intel, X11, kernel
[
20:22 Nov 16, 2009
More linux/kernel |
permalink to this entry |
]
Thu, 12 Nov 2009
or: Why do I See All Those Weird Characters?
Today's Linux Planet article concerns those funny characters you sometimes
see in email or on web pages, like when somebody puts
“random squiggles’ around a phrase
when they probably meant “double quotes”:
Character
Sets in Linux or: Why do I See Those Weird Characters?.
Today's article covers only what users need to know.
A followup article will discuss character encoding
from a programmer's point of view.
Tags: writing, linux, unicode, i18n, charsets, ascii
[
16:34 Nov 12, 2009
More writing |
permalink to this entry |
]
Tue, 10 Nov 2009
I've been seeing intermittent mouse failures since upgrading to Ubuntu
9.10 "Karmic".
At first, maybe one time out of five I would boot, start X, and find
that I couldn't move my mouse pointer. But after building a 2.6.31.4
kernel, things got worse and it happened nearly every time.
It wasn't purely an X problem; if I enabled gpm, the mouse failed in the
console as well as in X. And it wasn't hardware, because if I used
Ubuntu 9.10's standard kernel, my mouse worked every time.
After much poking around with kernel options, I discovered that if I
turned off the Direct Rendering Manager ("Intel 830M, 845G, 852GM, 855GM,
865G (i915 driver)"), my mouse would work. But that wasn't a
satisfactory solution; aside from not being able to run Google Earth,
it seems that Intel graphics needs DRM even to get reasonable
performance redrawing windows. Without it, every desktop switch means
watching windows slowly redraw over two or three seconds.
(Aside: why is it that Intel cards with shared CPU memory need DRM
to draw basic 2-D windows, when my ancient ATI Radeon cards without
shared memory had no such problems?)
But I think I finally have it nailed. In the kernel's Direct Rendering
Manager options (under Graphics), the "Intel 830M, 845G, 852GM, 855GM,
865G (i915 driver)" using its "i915 driver" option has a new sub-option:
"Enable modesetting on intel by default".
The help says:
CONFIG_DRM_I915_KMS:
Choose this option if you want kernel modesetting enabled by default,
and you have a new enough userspace to support this. Running old
userspaces with this enabled will cause pain. Note that this causes
the driver to bind to PCI devices, which precludes loading things
like intelfb.
Sounds optional, right? Sounds like, if I want to build a kernel that
will work on both karmic and jaunty, I should leave that off
so as not to "cause pain".
But no. It turns out it's actually mandatory on karmic. Without it,
there's a race condition where about 80-90% of the time, hal won't
see a mouse device at all, so the mouse won't work either in X or
even on the console with gpm.
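So if you're checking an existing .config before building, the lines
you want to see (assuming the option names haven't changed in your
kernel version) are something like:
$ grep DRM_I915 .config
CONFIG_DRM_I915=y
CONFIG_DRM_I915_KMS=y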
It's sort of the opposite of the
"Remove sysfs features which may confuse old userspace tools"
in General Setup, where the name implies that it's optional on
new distros like Karmic, but in fact, if you leave it on, the
kernel won't work reliably.
So be warned when configuring a kernel for brand-new distros.
There are some new pitfalls, and options that worked in the past
may not work any longer!
Update: see also the
followup
post for two more non-optional options.
Tags: linux, ubuntu, intel, X11, kernel
[
23:34 Nov 10, 2009
More linux/kernel |
permalink to this entry |
]
Sun, 08 Nov 2009
Helping people get started with Linux shells, I've noticed they
tend to make two common mistakes vastly more than any others:
- Typing a file path without a slash, like etc/fstab
- Typing just a filename, without a command in front of it
The first boils down to a misunderstanding of how the Linux file
system hierarchy works. (For a refresher, you might want to check out
my Linux Planet article
Navigating
the Linux Filesystem.)
The second problem is due to forgetting the rules of shell grammar.
Every shell sentence needs a verb, just like every sentence in English.
In the shell, the command is the verb: what do you want to do?
The arguments, if any, are the verb's direct object:
What do you want to do it to?
(For grammar geeks, there's no noun phrase for a subject because shell
commands are imperative. And yes, I ended a sentence with a preposition,
so go ahead and feel superior if you believe that's incorrect.)
The thing is, both mistakes are easy to make, especially when you're
new to the shell, perhaps coming from a "double-click on the file and let
the computer decide what you should do with it" model. The shell model
is a lot more flexible and (in my opinion) better -- you, not
the computer, get to decide what you should do with each file --
but it does take some getting used to.
But as a newbie, all you know is that you type a command and get some
message like "Permission denied." Why was permission denied? How are
you to figure out what the real problem was? And why can't the shell
help you with that?
And a few days ago I realized ... it can! Bash, zsh and
similar shells have a fairly flexible error handling mechanism.
Ubuntu users have seen one part of this, where if you type a command
you don't have installed, Ubuntu gives you a fancy error message
suggesting what you might have meant and/or what package you might
be missing:
$ catt /etc/fstab
No command 'catt' found, did you mean:
Command 'cat' from package 'coreutils' (main)
Command 'cant' from package 'swap-cwm' (universe)
catt: command not found
What if I tapped into that same mechanism and wrote a more general
handler that could offer helpful suggestions when it looked like the user
forgot the command or the leading slash?
It turns out that Ubuntu's error handler uses a ridiculously specific
function called command_not_found_handle that can't be used for
other errors. Some helpful folks I chatted with on #bash felt, as I
did, that such a specific mechanism was silly. But they pointed me to
a more general error trapping mechanism that turned out to work fine
for my purposes.
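The gist of that mechanism, for the curious: bash lets you set a trap
on ERR, and inside the trap handler $BASH_COMMAND holds the command
that just failed. A stripped-down sketch of the idea (my illustration,
not the actual code linked below, which does much more):
friendly_error() {
    # $BASH_COMMAND is the command that just returned nonzero
    case "$BASH_COMMAND" in
        *.html) echo "That's an HTML file. Did you want: firefox $BASH_COMMAND ?" ;;
        *.jpg)  echo "That's an image file. Did you want: gimp $BASH_COMMAND ?" ;;
    esac
}
trap friendly_error ERR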
It took some fussing and fighting with bash syntax, but I have a basic
proof-of-concept. Of course it could be expanded to cover a lot more
types of error cases -- and more types of files the user might want
to open.
Here are some sample errors it catches:
$ schedule.html
bash: ./schedule.html: Permission denied
schedule.html is an HTML file. Did you want to run: firefox schedule.html
$ screenshot.jpg
bash: ./screenshot.jpg: Permission denied
screenshot.jpg is an image file. Did you want to run:
pho screenshot.jpg
gimp screenshot.jpg
$ .bashrc
bash: ./.bashrc: Permission denied
.bashrc is a text file. Did you want to run:
less .bashrc
vim .bashrc
$ ls etc/fstab
/bin/ls: cannot access etc/fstab: No such file or directory
Did you forget the leading slash?
etc/fstab doesn't exist, but /etc/fstab does.
You can find the code here:
Friendly shell errors
and of course I'm happy to take suggestions or contributions for how
to make it friendlier to new shell users.
Tags: linux, shell, help, education, programming
[
15:07 Nov 08, 2009
More linux |
permalink to this entry |
]
Mon, 02 Nov 2009
The syntax to log in automatically (without gdm or kdm) has changed
yet again in Ubuntu Karmic Koala. It's similar to the
Hardy
autologin, but the file has moved:
under Karmic,
/etc/event.d is no longer used, as documented
in the
releasenotes
(though, confusingly, it isn't removed when you upgrade, so it may still
be there taking up space and looking like it's useful for something).
The new location is
/etc/init/tty1.conf.
So here are the updated instructions:
Create /usr/bin/loginscript if you haven't already,
containing something like this:
#! /bin/sh
/bin/login -f yourusername
Then edit /etc/init/tty1.conf and look for the respawn line,
and replace the line after it,
exec /sbin/getty -8 38400 tty1
with this:
exec /sbin/getty -n -l /usr/bin/loginscript 38400 tty1
As far as I know, it's safe to delete /etc/event.d since it's now unused.
I haven't verified that yet. Better rename it first, and see if anything
breaks.
Tags: linux, ubuntu, boot
[
20:46 Nov 02, 2009
More linux/install |
permalink to this entry |
]
My laptop, a Sony Vaio TX650P, badly needed a kernel update.
2.6.28.3 had been running so well that I just hadn't gotten around
to changing it. When I finally updated to 2.6.31.5, nearly everything
worked, with one exception:
Fn-F5 and Fn-F6 no longer adjusted screen brightness.
I found that there were lots of bugs filed on this problem --
kernel.org
bug 12816,
Ubuntu
bug 414810,
Fedora
bug 519105
and so on. A few of them had terse comments like "Can you try this
patched binary release?" but none of them had a source patch,
or any hints as to the actual nature of the problem.
But one of them did point me to
/sys/class/backlight. In my working 2.6.28.3 kernel, that
directory contained a sony subdirectory containing useful files
that let me query or change the brightness:
echo 1 >/sys/class/backlight/sony/brightness
On my nonworking 2.6.31.5 kernel, I had /sys/class/backlight
but there was no sony subdirectory there.
grep SONY .config in the two kernels revealed that my working kernel
had SONY_LAPTOP set,
while the nonworking one did not. No problem! Just figure out where
SONY_LAPTOP is in the configuration (it turns out to be at the very
end of "device drivers" under "X86 Platform Specific Device Drivers"),
make menuconfig, set SONY_LAPTOP and rebuild ... right?
Well, no. make menuconfig showed me lots of laptop
manufacturers in the "Platform Specific" category, but Sony
wasn't one of them. Of course, since it didn't show the option it
also didn't offer a chance to read the help for the option either,
which might have told me what its dependencies were.
Time for a recursive grep of kernel source:
grep -r SONY_LAPTOP .
arch/x86/configs/i386_defconfig told me the option did indeed
still exist, and where to find it. drivers/platform/x86/Kconfig
lists the option itself, and says it depends on INPUT and RFKILL.
RFKILL? A bit more poking around located that one near the end of
"Networking support", with the name "RF switch subsystem support".
(I love how user-visible names for kernel options only marginally
resemble the CONFIG names.)
It's apparently intended for
"control over RF switches found on many WiFi and Bluetooth cards,"
something I don't seem to need on this laptop (my WiFi works fine
without it) -- except that the kernel for some reason won't let me
build the ACPI brightness control without it.
So that's the secret. Go to "Networking support", set
"RF switch subsystem support", then back up
to "Device drivers", scroll down to the end to find
"X86 Platform Specific Device Drivers" and inside it you'll now see
"Sony Laptop Extras". Enable that, and the Fn-F5/F6 keys (as well as
/sys/class/backlight/sony/brightness) will work again.
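In .config terms, the end result should include lines something like
these (module versus built-in is your choice; symbol names as far as
I can tell for this kernel):
CONFIG_RFKILL=y
CONFIG_SONY_LAPTOP=m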
Tags: linux, kernel, sony, laptop
[
10:07 Nov 02, 2009
More linux/kernel |
permalink to this entry |
]
Sun, 01 Nov 2009
My mom got a netbook! A
ZaReason Terra,
in lovely metallic brown. (I know "metallic brown" sounds odd -- I
was skeptical before I saw it -- but take it from me, it looks great.)
It's cute and lightweight, with a nice keyboard with a clicky
IBM-keyboard-style feel, and a meta key with a Tux penguin on it
rather than a silly Windows logo. The only criticism so far is
that the comma and period keys are narrower than the rest, so all
three of us keep hitting slash when we mean period.
It comes preinstalled with Ubuntu (currently 9.04 Jaunty)
with a full Gnome desktop. I've never been much of a Gnome fan,
but this time we thought we'd try keeping it for a while and
see how Mom likes it. We can always switch to something faster,
like Openbox, later.
Of course, a lot of things needed configuration, like getting rid of
one of the two toolbars. (In this age of cinema-width screens, why is it
that the major desktops, like Gnome and even Apple, insist on sucking
away vertical space with multiple menubars/toolbars?)
(And don't get me started on Evolution's preferences panes that are
too big to fit on a netbook screen, yet have no scrollbars; and
although the preference window is resizable, Gnome won't let you
drag a window past the top of the screen so you can resize it taller.)
What stymied us, though, was the Gnome keyring and the way it
prompts you for a password -- even if you've already typed in a login
password -- whenever it tries to connect to the wireless network.
Web searches revealed that we were far from the only people who
found this annoying and wanted to turn it off.
There are lots of howtos. Unfortunately, every howto is
different -- apparently gnome-keyring changes its user interface with
every release, but somehow none of these UI changes ever make it
easier to find your way to the place where you can turn off the
password prompting. So here's one for Jaunty.
Howto turn off the Gnome-keyring master password in Ubuntu Jaunty
The key is a program called "seahorse", which you can get to
via Applications->Accessories->Password and Encryption Keys.
Click on the Passwords tab: you'll probably see two lines,
login password and master password.
According to some of the earlier howtos, these two passwords need to
be the same in order for the following steps to work.
Right-click on
login password and choose Unlock (I'm not sure if that
step is necessary, but we did it).
Then from the same right-click menu,
choose Change Password and make the new password empty.
Of course, it will warn you about this horribly insecure behavior
and how you're an idiot to want to do this. Your choice!
Tags: linux, gnome, ui
[
15:31 Nov 01, 2009
More linux |
permalink to this entry |
]
Thu, 22 Oct 2009
Part III of building your own kernel,
Tricky
kernel options, covers some of the more confusing options
you'll encounter when configuring your kernel.
Meanwhile, I'm still jazzed about the great howto that Nathan Willis
of Worldlabel.com wrote a few days ago for my Gimp labels scripts:
Fast labels and Card layout with Gimplabels.
Tags: writing, linux, kernel
[
15:03 Oct 22, 2009
More writing |
permalink to this entry |
]
Sat, 10 Oct 2009
Part II of building your own kernel,
Configuring
a New Linux Kernel, covers how to run menuconfig, how to disable
modules and slim down the kernel to only the parts you need,
and a few important options to look for.
Part III, in two weeks, will tour some specific kernel options
and what they do.
Tags: writing, linux, kernel
[
10:19 Oct 10, 2009
More writing |
permalink to this entry |
]
Fri, 25 Sep 2009
My latest article is up on Linux Planet:
Building
Your Own Linux Kernel, Part I.
"But aren't there a gazillion howtos already on the web on kernel building?"
I thought so too. But when someone showed up on LinuxChix recently
asking for help building her kernel, I went looking -- and all the
howtos I could find were out of date (even the README in the very
latest kernel gives instructions based on LILO, not GRUB).
More important, none of them offered help in that all-important
question: How do I start with a configuration file I know will work?
My quick-and-dirty howto shows you how to take your distro's
configuration file and build the latest mainstream kernel based on
it. The next article will cover how to change that configuration and
tune it for your own machine.
Tags: writing, linux, kernel
[
10:45 Sep 25, 2009
More writing |
permalink to this entry |
]
Sun, 13 Sep 2009
Dear Adobe: Please update your instructions when you update your
install packages
I had a circus a few nights ago trying to help my mom get her flash
plugin updated. Not because of anything she was doing; because Adobe's
out of date instructions were just plain wrong.
It gave me more insight into why people say "Linux is hard to
use" ... which has little to do with Linux, and everything to do
with outside forces that seem to go out of their way to make things
hard for Linux users.
See, Mom's Firefox auto-updated to a new version, which started whining
about her flash version being insecure and telling her to update it.
It pointed her to Adobe's site,
get.adobe.com/flashplayer.
She went there and was presented with a long list of options for
different types of download.
She's on Ubuntu, so the Ubuntu deb might have worked --
but it might not, since she's running a Firefox from Mozilla.org
rather than the one from Ubuntu.
(Ubuntu's Firefox on Hardy was notoriously crashy,
and she has enough problems with the Mozilla version crashing.)
I told her I usually use the tarball, and install it as myself, not as
root. In the past, the flash installer has always been very good about
noticing I'm not root and installing to ~/.mozilla/plugins.
I didn't expect problems.
So she downloaded the tarball and tried to follow their
instructions,
which look like this:
- Click the download link to begin installation. A dialog box will
appear asking you where to save the file.
- Save the .tar.gz file to your desktop and wait for the file to
download completely.
- Unpackage the file. A directory called
install_flash_player_10_linux will be created.
- In terminal, navigate to this directory and type
./flashplayer-installer to run the installer. Click Enter. The
installer will instruct you to shut down your browser(s).
- Once the installation is complete, the plug-in will be installed
in your Mozilla browser. To verify, launch Mozilla and choose Help >
About Plug-ins from the browser menu.
The first problem is "Unpackage the file."
Honestly, how hard is it to give people a hint that "unpackage" means
"type tar xf install_flash_player_10_linux.tar.gz
"?
As long as you're writing instructions anyway, why not tell people
the actual command instead of expecting them to figure it out somehow?
"In terminal, navigate to this directory" -- if you know your user
will be typing shell commands in a terminal, why not tell them to
cd rather than expecting them to figure that out from "navigate"?
(Mom figured that one out -- go Mom! -- but a lot of users wouldn't.)
Except -- OOPS! try following the instructions and you can't cd ...
because it turns out the flash 10 "installer" doesn't contain a
directory, or indeed an installer, at all.
It's a tarball containing one file, libflashplayer.so.
Now, setting aside the question of why anyone would
use tar to package a single file -- why not just make the file
available for download and tell users where to put it? --
they give you no hint as to where this libflashplayer.so
file is supposed to go. If you don't happen to know how Firefox
sets up its plugins, you're out of luck.
Fortunately, I happen to know where the file goes.
I told Mom to mv libflashplayer.so ~/.mozilla/plugins/
and all was well. But ... sheesh! With instructions like this on
something as (unfortunately) widely needed as the Flash plugin,
how can a newbie ever expect to get anywhere?
For newbies reading this, the real instructions for
installing Adobe's flash 10 tarball are:
- Download their file, which is named
install_flash_player_10_linux.tar.gz
- Open a terminal and cd to wherever you downloaded it, e.g.
cd ~/Desktop
tar xf install_flash_player_10_linux.tar.gz
mv libflashplayer.so ~/.mozilla/plugins/
- Restart firefox, make sure flash works, and (once you're sure,
at your option)
rm install_flash_player_10_linux.tar.gz
Tags: linux, help, newbie, flash
[
23:53 Sep 13, 2009
More linux |
permalink to this entry |
]
Fri, 11 Sep 2009
Linux Planet requested an article on multicore processors and how to
make the most of them. Happily, I've been playing with that anyway
lately, so I was happy to oblige:
Get
the Most Out of Your Multicore Processor:
Two heads are better than one!
Tags: writing, linux, performance
[
22:59 Sep 11, 2009
More writing |
permalink to this entry |
]
Sun, 06 Sep 2009
Someone was asking for help building XEphem on the XEphem mailing list.
It was a simple case of a missing include file, where the only trick
is to find out what package you need to install to get that file.
(This is complicated on Ubuntu, which the poster was using,
by the way they fragment the X development headers into a maze of
a zillion tiny packages.)
The solution -- apt-file -- is so simple and easy to use, and yet
a lot of people don't know about it. So here's how it works.
The poster reported getting these compiler errors:
ar rc libz.a adler32.o compress.o crc32.o uncompr.o deflate.o trees.o zutil.o inflate.o inftrees.o inffast.o
ranlib libz.a
make[1]: Leaving directory `/home/gregs/xephem-3.7.4/libz'
gcc -I../../libastro -I../../libip -I../../liblilxml -I../../libjpegd -I../../libpng -I../../libz -g -O2 -Wall -I../../libXm/linux86 -I/usr/X11R6/include -c -o aavso.o aavso.c
In file included from aavso.c:12:
../../libXm/linux86/Xm/Xm.h:56:27: error: X11/Intrinsic.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:57:23: error: X11/Shell.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:58:23: error: X11/Xatom.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:59:34: error: X11/extensions/Print.h: No such file or directory
In file included from ../../libXm/linux86/Xm/Xm.h:60,
from aavso.c:12:
../../libXm/linux86/Xm/XmStrDefs.h:1373: error: expected `=', `,', `;', `asm' or `__attribute__' before `char'
In file included from ../../libXm/linux86/Xm/Xm.h:60,
from aavso.c:12:
../../libXm/linux86/Xm/XmStrDefs.h:5439:28: error: X11/StringDefs.h: No such file or directory
In file included from ../../libXm/linux86/Xm/Xm.h:61,
from aavso.c:12:
../../libXm/linux86/Xm/VirtKeys.h:108: error: expected `)' before `*' token
In file included from ../../libXm/linux86/Xm/Display.h:49,
from ../../libXm/linux86/Xm/DragC.h:48,
from ../../libXm/linux86/Xm/Transfer.h:44,
from ../../libXm/linux86/Xm/Xm.h:62,
from aavso.c:12:
../../libXm/linux86/Xm/DropSMgr.h:88: error: expected specifier-qualifier-list before `XEvent'
../../libXm/linux86/Xm/DropSMgr.h:100: error: expected specifier-qualifier-list before `XEvent'
How do you go about figuring this out?
When interpreting compiler errors, usually what matters is the
*first* error. So try to find that. In the transcript above, the first
line saying "error:" is this one:
../../libXm/linux86/Xm/Xm.h:56:27: error: X11/Intrinsic.h: No such file or directory
So the first problem is that the compiler is trying to find a file
called Intrinsic.h that isn't installed.
On Debian-based systems, there's a great program you can use to find
files available for install: apt-file. It's not installed by default,
so install it, then update it, like this (the update will take a long time):
$ sudo apt-get install apt-file
$ sudo apt-file update
Once it's updated, you can now find out what package would install a
file like this:
$ apt-file search Intrinsic.h
libxt-dev: /usr/include/X11/Intrinsic.h
tendra: /usr/lib/TenDRA/lib/include/x5/t.api/X11/Intrinsic.h
In this case two packages could install a file by that name.
You can usually figure out from looking which one is the
"real" one (usually the one with the shorter name, or the one
where the package name sounds related to what you're trying to do).
If you're still not sure, try something like
apt-cache show libxt-dev tendra
to find out more about the packages involved.
In this case, it's pretty clear that tendra is a red herring,
and the problem is likely that the libxt-dev package is missing.
So apt-get install libxt-dev and try the build again.
Repeat the process until you have everything you need for the build.
Remember apt-file if you're not already using it.
It's tremendously useful in tracking down build dependencies.
Tags: open source, linux, programming, debian, ubuntu
[
11:25 Sep 06, 2009
More linux |
permalink to this entry |
]
Mon, 31 Aug 2009
Over the years, I've kept a few sets of records in the Datebook app
on my PalmOS PDA -- health records and such.
I've been experimenting with a few python plotting packages
(pycha, CairoPlot and a few others) and I wanted to try plotting
one of my Datebook databases.
Not so fast. It seems that it's been a year or more since I last
crunched any of this data -- and in the time since then,
pilot-link has bumped its version numbers and is now shipping
libpisock.so.9 instead of .8.
So what? Well, the problem is that Linux hasn't offered any way
to read Palm Datebook files for years. The pilot-link package
offered on most distros used to include a program
called pilot-datebook, but it was deleted from the source several
years ago. Apparently it was hard to maintain.
Back when it first disappeared, I built the previous version of
the source, stuck the pilot-datebook binary in ~/bin/linux and
have been using it ever since. Which worked fine -- until
libpisock.so.8 was no longer there. (Linking .9 to .8 didn't work either.)
This is all the more ironic because I don't need pilot-datebook to
talk to the PDA with libpisock -- all I want to do is parse the format
of a file I've already uploaded.
Off to hunt for an old version of the source. I started at
pilot-link.org, but gave up after a while -- they don't seem to
have source there except for the latest couple of versions, nor
do they have any documentation. Ironically, in their FAQ the very
first question is "How can I read the databook entries from a Palm
backup?" but the FAQ page is broken and the "answer" is actually
another unrelated FAQ question.
Anyway, no help there. I tried googling for old tarballs but there doesn't
seem to be anything like archive.org for source code.
All I found was the original
pilot-datebook
page, with a tarball that you insert into a copy of pilot-link 0.9.5
then modify the Makefile. Might work but that's really old.
So I fell back on old distributions. I guessed that Ubuntu Dapper was
old enough that it might still have pilot-datebook. So I went to the
Dapper pilot-link
source and downloaded the source tarball (curiously, they don't offer
src debs -- you have to download the tarball and patches separately).
Of course, it doesn't build on Ubuntu Jaunty. It had various
entertaining errors ranging from wanting a mysterious tcl.m4 file
not present in the code ... to not being
able to find <iostream.h> because all the C++ stdlib files have
recently been renamed to remove the .h ... to a change in the
open() system call where I needed to add a permissions argument
for O_CREAT.
But I did get it working! So now I have a pilot-datebook program
that builds and runs on Ubuntu Jaunty, and parses my DatebookDB.pdb file.
Since I bet I'm not the only one in the world who occasionally wants
to read a Palm Datebook file, I've put my working version of the
source here:
pilot-link_0.11.8.jaunty.tar.gz.
After the usual configure and make, if all you want is pilot-datebook,
cd src/pilot-datebook
then copy both pilot-datebook and the directory .libs to wherever you
want to install them.
And yeah, it would be better to write a standalone program that just
parsed the format. But it's hard to justify that for what's
essentially a dead platform. The real solution is to quit using
a Palm for this, import the data into some common format and keep it
on my Linux workstation from now on.
Tags: palm, linux, programming, patch
[
12:39 Aug 31, 2009
More linux |
permalink to this entry |
]
Thu, 27 Aug 2009
Part 2 of my Linux bloat article looks at information you can get
from the kernel via some useful files in /proc, at three scripts
that display that info, and also at how to use exmap, an app and
kernel module that shows you a lot more about what resources your
apps are using.
How
Do You Really Measure Linux Bloat?
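As a small taste of the /proc side (my example here, not one from the
article): /proc/<pid>/status has VmSize and VmRSS lines showing a
process's virtual size and resident set, e.g. for your current shell:
grep -E 'VmSize|VmRSS' /proc/$$/status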
Tags: writing, programming, linux, performance, bloat
[
20:52 Aug 27, 2009
More writing |
permalink to this entry |
]
Sun, 23 Aug 2009
Noticing that every article I write ends up including a section on
"This doesn't work under Ubuntu; here's a link to the year-old bug
with a patch and several workarounds," I decided to try Fedora 11
and see if it was any better.
I downloaded and burned the latest netinst CD ISO and booted it
on the Atom machine. It greeted me with this prompt:
1.
2.
Select CD-ROM boot type:
I am not getting warm fuzzies about Fedora being the solution to
the Linux distro problem.
What do you think? Should I choose 1. or 2. ?
Tags: linux, install, fedora
[
21:09 Aug 23, 2009
More linux/install |
permalink to this entry |
]
Wed, 19 Aug 2009
Sometimes I love open source. A user contacted me about my program
Crikey!,
which lets you generate key events to do things like assign a key
that will type in a string you don't want to type in by hand.
He had some thorny problems where crikey was failing, and a few
requests, like sending Alt and other modifier keys.
We corresponded a bit, figured out exactly how things should work
and some test cases, went through a couple iterations of changes
where I got lots of detailed and thoughtful feedback and more test cases,
and now Crikey can do an assortment of new useful stuff.
New features: crikey now handles number codes like \27, modifier
keys like \A for alt, does a better job with symbols like
\(Return\), and handles a couple of new special characters
like \e for escape.
It also works better at sending window manager commands,
like "\A\t" to change the active window.
I've added some better documentation on all the syntaxes it
understands, both on the web page and in the -h and -l (longhelp)
command-line arguments, and made a release: crikey 0.8.3.
Plus: a list of great regression tests that I can use when testing
future updates (in the file TESTING in the tarball).
Tags: programming, crikey, X11, linux
[
17:38 Aug 19, 2009
More programming |
permalink to this entry |
]
Thu, 13 Aug 2009
Continuing my Linux Planet series on Linux performance monitoring,
the latest article looks at bloat and how you can measure it:
Finding
and Trimming Linux Bloat.
This one just covers the basics.
The followup article, in two weeks, will dive into more detail
on how to analyze what resources programs are really using.
Tags: writing, programming, linux, performance, bloat
[
11:27 Aug 13, 2009
More writing |
permalink to this entry |
]
Tue, 21 Jul 2009
It's been a day -- or week, month -- of performance monitoring.
I'm posting this
while sitting in an excellent OSCON tutorial on Linux
System and Network Performance Monitoring, by
Darren Hoch.
It's full of great information and I'm sure his web site is
equally useful.
And it's a great extension to topic that's been occupying me
over the past few months: performance tracking to slim down
software that might be slowing a Linux system down.
That's the topic of one of my two OSCON talks this Wednesday:
"Featherweight Linux: How to turn a netbook or older laptop into a Ferrari."
Although I don't go into anywhere near the detail Darren does,
a lot of the principles are the same, and I know I'll find a use
for a lot of his techniques. The talk also includes a free bonus
tourist tip for San Jose visitors.
Today's Linux Planet article is related to my Featherweight talk:
What's
Bogging Down Your Linux PC? Tracking Down Resource Hogs.
Usually they publish my articles on Thursdays, but I asked for an
early release since it's related to tomorrow's talk.
For anyone at OSCON in San Jose, I hope you can come to Featherweight late
Wednesday afternoon, or to my other talk, Wednesday just after lunch,
"Bug Fixing for Everyone (even non-programmers!)" where I'll go over
the steps programmers use while fixing bugs, and show that anyone can
fix simple bugs even without any prior knowledge of programming.
Tags: linux, performance, conferences, oscon09
[
11:58 Jul 21, 2009
More conferences |
permalink to this entry |
]
Thu, 02 Jul 2009
Suspend (sleep) works very well on the dual-Atom desktop. The only
problem with it is that the mouse or keyboard wake it up. I don't mind
the keyboard, but the mouse is quite sensitive, so a breeze through
the window or a big truck driving by on the street can jiggle the
mouse and wake the machine when I'm away.
I've been through all the BIOS screens looking for a setting to flip,
but there's nothing there. Some web searching told me that under
Windows, there's a setting you can change that will affect this,
but I couldn't find anything similar for Linux, until finally
drc clued me in to /proc/acpi/wakeup.
cat /proc/acpi/wakeup
will tell you all the events that can cause your machine to wake up
from various sleep states.
Unfortunately, they're also obscurely coded. Here are mine:
Device S-state Status Sysfs node
SLPB S4 *enabled
P32 S4 disabled pci:0000:00:1e.0
UAR1 S4 enabled pnp:00:0a
PEX0 S4 disabled pci:0000:00:1c.0
PEX1 S4 disabled
PEX2 S4 disabled pci:0000:00:1c.2
PEX3 S4 disabled pci:0000:00:1c.3
PEX4 S4 disabled
PEX5 S4 disabled
UHC1 S3 disabled pci:0000:00:1d.0
UHC2 S3 disabled pci:0000:00:1d.1
UHC3 S3 disabled pci:0000:00:1d.2
UHC4 S3 disabled pci:0000:00:1d.3
EHCI S3 disabled pci:0000:00:1d.7
AC9M S4 disabled
AZAL S4 disabled pci:0000:00:1b.0
What do all those symbols mean? I have no clue. Apparently the codes
come from the BIOS's DSDT code, and since it varies from board to
board, nobody has published tables of likely translations.
The only two wakeups that were enabled for me were SLPB and UAR1.
SLPB apparently stands for SLeeP Button, and Rik suggested UAR
probably stood for Universal Asynchronous Receiver (the more familiar
term UART both receives and Transmits.)
Some of the other devices in the list can possibly be identified by
comparing their pci: codes against lspci, but not those two.
Time for some experimentation.
You can toggle any of these by writing to the wakeup device:
echo UAR1 >/proc/acpi/wakeup
It turned out that to disable mouse and keyboard wakeup, I had to
disable both SLPB and UAR1. With both disabled, the machine wakes
up when I press the power button.
(What the SLeeP Button is, if it's not the power button, I don't know.)
My mouse and keyboard are PS/2. For a USB mouse and keyboard, look
for something like USB0, UHC0, USB1.
The UAR1 setting is remembered even across boots: there's no need to
do anything to make sure the setting is remembered. But the SLPB
setting resets every time I boot. So I edited /etc/rc.local and
added this line:
echo SLPB >/proc/acpi/wakeup
Tags: linux, kernel, performance
[
10:21 Jul 02, 2009
More linux/kernel |
permalink to this entry |
]
Wed, 24 Jun 2009
I've been enjoying my
random
system beeps, different every day. At least up until yesterday,
when I didn't seem to have one. Today I didn't have one either,
and discovered that was because the beep module was no longer loaded.
Why not? Well, I updated my kernel to tweak some ACPI parameters
(fruitlessly, as it turns out; I'm trying to get powertop to give
me more information but I haven't found the magic combination of
kernel parameters it wants on this machine) and so I did a
make modules_install. And it seems that make modules_install
starts out by doing
rm -rf /lib/modules/VERSION/kernel
which removed my externally built beep module along with everything else.
I couldn't find documentation on this, but I did find
Intel
Wireless bug 556 which talks about the issue. Apparently
somewhere along the way 2.6 started doing this rm -rf,
but you can get around it by installing outside the kernel
directory.
In other words, instead of
cp beep.ko /lib/modules/2.6.29.4/kernel/drivers/input/misc/
do
cp beep.ko /lib/modules/2.6.29.4/drivers/input/misc/
Then your external module won't get wiped out at the next
modules_install.
I've let the maintainer of
Fancy Beeper
know about this, so it won't be a problem for that module,
but it's a good tip to know about in general --
I'm sure there are lots of modules that hit this problem.
Tags: linux, kernel, tips
[
10:30 Jun 24, 2009
More linux/kernel |
permalink to this entry |
]
Wed, 17 Jun 2009
A couple of years ago I figured out how to make
custom
system beep sounds on Linux, like MacOS has done forever.
But then I changed machines and somehow never got around to setting it
up on any other machine.
But the Intel dual-Atom board doesn't seem to support a system beep --
there's no obvious place on the motherboard to plug in the connector
going to the case speaker. How odd!
With the alternative being no beep at all,
I dusted off my old blog post and went to see if
Fancy Beeper
Daemon kernel module still existed. Happily, it does, and it's
up-to-date for current kernels, so all I had to do was download the
latest and build it. Easy! Then I added "beep" to the list of
automatically loaded modules in /etc/modules
,
blacklisted the pcspkr module using the
/etc/modprobe.d/00local
technique, and I was all set.
Except for the really important question: what sound to choose?
I did a little web searching for free sounds and downloaded some samples
to try out. Then I added a few bird calls from my
Stokes
Field Guide to Western Bird Songs CD,
editing them in audacity to make them shorter and
more appropriate for system beeps.
But I still couldn't decide on just one ... and why should I?
I've really been enjoying my
random
wallpaper: every time I log in, I get a different desktop background.
It's fun to see a new picture every day.
Why not do the same for my system beep?
That's no problem, using the same
randomline
script I use for wallpaper. I just put this in my .xinitrc:
$HOME/bin/mybeepd `find $HOME/Music/beeps -name "*.wav" | randomline` &
and now I get a different beep sound each day.
Yesterday it was a loon. Today it's a cow mooing.
Tags: linux, audio, desktop
[
21:11 Jun 17, 2009
More linux |
permalink to this entry |
]
Sun, 07 Jun 2009
I upgraded to Ubuntu's current 9.04 release, "Jaunty Jackalope", quite a
while ago, but I haven't been able to use it because its X server
crashes or hangs regularly. (Fortunately I only upgraded a copy
of my working 8.10 "Intrepid" install, on a separate partition.)
The really puzzling thing, though, wasn't the crashes, but the fact
that X acceleration didn't work at all. Programs like tuxracer
(etracer) and Google earth would display at something like one frame
update every two seconds, and glxinfo | grep renderer said
OpenGL renderer string: Software Rasterizer
But that was all on my old desktop machine, with an
ATI Radeon 9000 card that I know no one cares about much.
I have a new machine now! An Intel dual Atom D945GCLF2D board
with 945 graphics. Finally, a graphics chip that's supported!
Now everything would work!
Well, not quite -- there were major teething pains, including
returning the first nonworking motherboard, but that's a
separate article. Eventually I got it running nicely with Intrepid.
DRI worked! Tuxracer worked! Even Google Earth worked! Unbelievable!
I copied the Jaunty install from my old machine to a partition on
the new machine. Booted into it and -- no DRI.
Just like on the Radeon.
Now, there's a huge pile of bugs in Ubuntu's bug system on problems
with video on Jaunty, all grouped by graphics card manufacturer even
though everybody seems to be
seeing pretty much the same problems on every chipset.
But hardly any of the bugs talk about not getting any DRI at all --
they're all about whether EXA acceleration works
better or worse than XAA and whether it's worth trying UXA.
I tried them all: EXA and UXA both gave me no DRI, while XAA
crashed/rebooted the machine every time. Clearly, there was something
about my install that was disabling DRI, regardless of
graphics card. But I poked and prodded and couldn't figure out what it was.
The breakthrough came when, purely by accident, I ran that same
glxinfo | grep renderer
from a root shell. Guess what?
OpenGL renderer string: Mesa DRI Intel(R) 945G GEM 20090326 2009Q1 RC2 x86/MMX/SSE2
As me (non-root), it still said "Software Rasterizer."
It was a simple permissions problem! But wait ... doesn't X run as root?
Well, it does, but the DRI part doesn't, as it turns out.
(This is actually a good thing, sort of, in the long term:
eventually the hope is to get X not to need root permissions either.)
Armed with the keyword "permissions" I went back to the web, and the
Troubleshooting
Intel Performance page on the Ubuntu wiki, and found the
solution right away.
(I'd looked at that page before but never got past the part right at
the beginning that says it's for problems involving EXA vs. UXA
vs. XAA, which mine clearly wasn't).
The Solution
In Jaunty, the user has to be in group video to use DRI in X.
But if you've upgraded from an Ubuntu version prior to Jaunty, where
this wasn't required, you're probably not in that group. The upgrader
(I used do-release-upgrade) doesn't check for this or warn you
that you have desktop users who aren't in the video group,
so you're on your own to find out about the problem.
Fixing it is easy, though:
edit /etc/group as root and add your user(s) to the group.
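If you'd rather not hand-edit /etc/group, the equivalent command on
Ubuntu is:
sudo adduser yourusername video
Either way, you have to log out and back in before the new group
membership takes effect.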
You might think this would have been an error worth reporting,
say, at X startup, or in glxinfo, or even in /var/log/Xorg.0.log.
You'd think wrong. Xorg.0.log blithely claims that DRI is enabled
and everything is fine, and there's no indication of an error
anywhere else.
I hope this article makes it easier for other people with this problem
to find the solution.
Tags: linux, ubuntu, X11, jaunty
[
20:23 Jun 07, 2009
More linux/install |
permalink to this entry |
]
Wed, 03 Jun 2009
The Walden West pond is hopping -- literally!
This afternoon around 3pm the pond's resident bullfrogs,
who normally just float quietly in the scum on the surface,
would suddenly hop out of the water for no obvious
reason, then settle back down a few feet away.
One pair was apparently mating like that, the larger frog hopping
onto the back of the smaller frog, then immediately off again.
And the pond was full of sound, sometimes with two or more
frogs booming at once. Bullfrogs in stereo!
I didn't have the SLR along, but some of the frogs were close enough
(and calm enough not to submerge when we got near them) that I was
able to get a few decent shots.
But I really wanted to capture that sound. So I put the camera
in video mode and shot a series of videos hoping to catch some
of the music ... and did.
They sound like this:
bullfrog (mp3, 24kb).
Despite the title of this entry, the recording doesn't have any
interesting stereo effects; the only microphone was the one built
in to my Canon A540. It did okay, though! You'll just have to
use your imagination to place two frogs as you listen, one 20 feet
to the left and the other 15 feet to the right.
How to extract the audio from a camera video
(Non open source people can quit reading here.)
Extracting the audio was a little tricky. I found lots of pages ostensibly
telling me how to do it with mencoder, but none of them seemed to work.
This did:
mplayer -vc null -af volume=15 -vo null -ao pcm -benchmark mvi_8992.avi
I added that -af volume=15
argument to make the sound
louder, since it was a bit quiet as it came from the camera.
That produced a file named audiodump.wav, which I turned into an
mp3 like this:
lame audiodump.wav bullfrogs.mp3
Tags: nature, bullfrog, linux, audio
[
21:42 Jun 03, 2009
More nature |
permalink to this entry |
]
Wed, 13 May 2009
Someone asked on a mailing list whether to upgrade to a new OS release
when her current install was working so well. I thought I should write
up how I back up my old systems before attempting a risky upgrade
or new install.
On my disks, I make several relatively small partitions, maybe
15G or so (pause to laugh about what I would have thought ten or
even five years ago if someone told me I'd be referring to 15G
as "small"), one small shared /boot partition, a swap partition,
and use the rest of the disk for /home or other shared data.
Now you can install a new release, like 9.04, onto a new partition
without risking your existing install.
If you prefer upgrading rather than running the installer, you can
do that too. I needed a jaunty (9.04) install to test
whether a bug was fixed. But my intrepid (8.10) is working fine and
I know there are some issues with jaunty, so I didn't want to risk
the working install. So from Intrepid, I copied the whole root
partition over to one of my spare root partitions, sda5:
mkfs.ext3 /dev/sda5
mkdir /jaunty
mount /dev/sda5 /jaunty
cp -ax / /jaunty
(that last step takes quite a while: you're copying the whole system.)
Now there are a couple of things you have to do to make that /jaunty
partition work as a bootable install:
1. /dev on an ubuntu system isn't a real file system, but something
magically created by the kernel and udev. But to boot, you need some
basic stuff there. When you're up and running, that's stored in
/dev/.static, so you can copy it like this:
cp -ax /dev/.static/dev/ /jaunty/
2021: Ignore this whole section.
For several years, Linux distros
have been able to create /dev on their own, without needing any seed
files. So there's no need to create any devices. Skip to #2, fstab
which is definitely still needed.
Note: it used to work to copy it to /jaunty/dev/.
The exact semantics of copying directories in cp and rsync, and
where you need slashes, seem to vary with every release.
The important thing is that you want /jaunty/dev to end up
containing a lot of devices, not a directory called dev or
a directory called .static. So fiddle with it after the cp -ax
if you need to.
Note 2: Doesn't it just figure? A couple of days
after I posted this, I found out that the latest udev has removed
/dev/.static so this doesn't work at all any more. What you can do
instead is:
cd /jaunty/dev
/dev/MAKEDEV generic
Note 3: If you're running MAKEDEV from Fedora, it
will target /dev instead of the current directory, so you need
MAKEDEV -d /whatever/dev generic.
However, caution: on Debian and Ubuntu -d deletes the
devices. Check man MAKEDEV first to be sure.
Ain't consistency wonderful?
2. /etc/fstab on the system you just created points to the wrong
root partition, so you have to fix that. As root, edit /etc/fstab
in your favorite editor (e.g. sudo vim /etc/fstab or whatever)
and find the line for the root filesystem -- the one where the
second entry on the line is /. It'll look something like this:
# /dev/sda1
UUID=f7djaac8-fd44-672b-3432-5afd759bc561 / ext3 relatime,errors=remount-ro 0 1
The easy fix is to change that to point to your new disk partition:
# jaunty is now on /dev/sda5
/dev/sda5 / ext3 relatime,errors=remount-ro 0 1
If you want to do it the "right", ubuntu-approved way, with UUIDs,
you can get the UUID of your disk this way:
ls -l /dev/disk/by-uuid/ | grep sda5
Take the UUID (that's the big long hex number with the dashes) and
put it after the UUID= in the original fstab line.
While you're editing /etc/fstab, be sure to look for any lines that
might mount /dev/sda5 as something other than root and delete them
or comment them out.
The following section describes how to update grub1.
Since most distros now use grub2, it is out of date.
With any luck, you can run update-grub and it will notice
your new partition; but you might want to fiddle with entries in
/etc/grub.d to label it more clearly.
Now you should have a partition that you can boot into and upgrade.
Now you just need to tell grub about it. As root, edit
/boot/grub/menu.lst and find the line that's booting your current
kernel. If you haven't changed the file yourself, that's probably
right after a line that says:
## ## End Default Options ##
It will look something like this:
title Ubuntu 8.10, kernel 2.6.27-11-generic
uuid f7djaac8-fd44-672b-3432-5afd759bc561
kernel /vmlinuz-2.6.27-11-generic root=UUID=f7djaac8-fd44-672b-3432-5afd759bc561 ro
initrd /initrd.img-2.6.27-11-generic
Make a copy of this whole stanza, so you have two identical copies,
and edit one of them. (If you edit the first of them, the new OS
will be the default when you boot; if you're not that confident,
edit the second copy.) Change the two UUIDs to point to your new disk
partition (the same UUID you just put into /etc/fstab) and change
the Title to say 9.04 or Jaunty or My Copy or whatever you want the
title to be (this is the title that shows up in the grub menu when
you first boot the machine).
Now you should be able to boot into your new partition.
Most things should basically work -- certainly enough to start
a do-release-upgrade without risking your original install.
Tags: linux, ubuntu, install
[
10:44 May 13, 2009
More linux/install |
permalink to this entry |
]
Tue, 12 May 2009
Apress asked me to make some short screencast videos illustrating
GIMP tips, to help advertise the second edition of
Beginning GIMP.
I've never made videos (except for putting a digital camera in
video mode) so it's been an interesting learning experience, and
I was surprised at how easy it was in the end.
My Apress contact suggested XVidCap and Wink as possible options.
XVidCap looked quite interesting but didn't seem to work in its Ubuntu
incarnation. Wink worked very nicely and produced flash videos that
worked in a browser with no extra fiddling ... but unfortunately
when I tried plugging in a microphone, Wink didn't seem able to read it.
And it seemed to slow down all my cursor movements, rather than
following my actions in real-time.
But while working with those two, I stumbled across recordmydesktop.
It records mouse movements in real-time and it handles the microphone too.
It has several front ends available (such as gtk-recordmydesktop) but
I found the basic command-line version easiest to use.
To make a video in the upper left 1024x768 section of my screen:
recordmydesktop -width 1024 -height 768 -o layermask.ogv
Since I need to make sure all the action happens within that
rectangle, I made a special desktop background that has a nice, not
too distracting image in just that 1024x768 rectangle.
Any other windows I'm using in that desktop
(such the terminal window I'm using to control recordmydesktop)
stay outside that area.
recordmydesktop starts recording right away. I run through my tutorial
steps, narrating as I go, and when I'm done, I move the mouse back to
the terminal window where I started the recording and hit Ctrl-C, and
recordmydesktop stops recording and encodes the video (which takes a while).
It saves to ogg format, .ogv. Of course, most web surfers can't view
that, and youtube doesn't accept it either (at least, ogg isn't on its
list of allowed formats) so I needed to translate it into something
else. Youtube suggests mpeg4, so that's what I used. Luckily, I already had
a mencoder incantation that some helpful person gave me a long time ago:
mencoder movie.ogv -oac pcm -ovc lavc -lavcopts vcodec=mpeg4:vqmin=2:vlelim=-4:vcelim=9:lumi_mask=0.05:dark_mask=0.01:vhq -o movie.mp4
Only one problem: the audio came out very faint and difficult to hear.
I'm sure that's a problem with the microphone I'm using, a cheap OEM
model that came with some computer or other many years ago -- it's
been sitting in a box since I normally have no use for a microphone.
But it turns out mencoder can amplify the volume, with -af volume=X
where X is decibels. A little experimentation with mplayer -af volume=X
on the original ogg suggested a value around 15 or 20, so the final
encoding was:
mencoder movie.ogv -af volume=19 -oac pcm -ovc lavc -lavcopts vcodec=mpeg4:vqmin=2:vlelim=-4:vcelim=9:lumi_mask=0.05:dark_mask=0.01:vhq -o movie.mp4
But Apress tells me that their Windows boxen had trouble with the mp4
and they had to run it through something called "Handbrake", so maybe
some other format would have worked better. Here are two other mencoder
incantations I know (without the sound amplification):
mencoder movie.ogv -oac pcm -ovc lavc -o movie.divx
(divx -- Windows sometimes has trouble with this too)
mencoder movie.ogv -oac pcm -ovc lavc -lavcopts vcodec=mpeg1video -o movie.mpeg
(mpeg1)
It's not great cinema, but the end result is up on the Apress page for the
GIMP book.
Tags: linux, video, screencasts
[
11:14 May 12, 2009
More linux |
permalink to this entry |
]
Sat, 18 Apr 2009
Long ago I wrote about
getting
my multi-flash card reader to work using udev rules.
This always evokes horrified exclaimations from people in the
Ubuntu project -- "You shouldn't need to do that!" But there are
several reasons for wanting special udev rules for multi-card readers.
You might want your SD card to show up in the same place every time
(is it /dev/sdb1 or /dev/sdc1 today?); or you might be trying to
reduce polling to cut down your CPU and battery use.
But my older article referred to a script that no longer exists, and as
I recently had to update my udev rules on a fairly fresh Intrepid install,
I needed something more up-to-date and less dependent on Ubuntu's
specific udev scripts (which change frequently).
I found a wonderful forum article,
Create your
own udev rules to control removable devices,
that explains exactly how to find out the names of your devices
and make rules for them.
Another excellent article with essentially the same information is
Linux Format's
Connect your devices with udev.
Start by guessing at the current device name: for example, in this
particular session, my SD card reader showed up on /dev/sdd.
Find out the corresponding /block device name for it, like this:
udevinfo -q path -n /dev/sdd
Update: In Ubuntu jaunty, udevinfo is gone.
But you can substitute udevadm info for udevinfo, with the same flags.
In my case, the SD reader was /block/sdd. Now pass that into
udevinfo -a, like so:
udevinfo -a -p /block/sdd
and look for a few items that you can use to identify that
slot uniquely. If you can find a make or model, that's ideal.
For my card reader, I chose
KERNEL=="sdd"
SUBSYSTEMS=="scsi"
ATTRS{model}=="CardReader SD "
Note that SUBSYSTEM was scsi: usb-storage devices (handled by the scsi
system) sometimes show up as usb and sometimes as scsi.
Now you're ready to create some udev rules. In your favorite text
editor, create a new file named
/etc/udev/rules.d/59-multicard-reader.rules
.
You can name it whatever you want, but make sure the number
at the beginning is lower than the number of the udev rule
that would otherwise create the device's name -- in this case,
60-persistent-storage.rules.
Now write your udev rule. Include the identifying lines you picked out
from udevinfo -a:
KERNEL=="sd[a-g]", SUBSYSTEMS=="scsi", ATTRS{vendor}=="USB2.0 ", ATTRS{model}=="CardReader SD ", NAME{all_partitions}="card-sd", group=plugdev
A few things to notice. First, I used KERNEL=="sd[a-g]"
instead of just sdd, in case the devices might some day show up in
a different order.
The NAME field can be whatever you choose.
NAME{all_partitions}="card-sd"
will make the device show
up as /dev/card-sd, so to mount the first partition I'll use /dev/card-sd1.
The {all_partitions}
part tells the kernel to create
partitions like /dev/card-sd1 even if there's no SD card inserted
in the slot when you boot. Otherwise, you have to run
touch /dev/card-sd
after inserting a card to get
the device created -- or run a daemon like hald-addons-storage
that polls the device a few times every second checking to see if
anything has been inserted (as Ubuntu normally prefers to do).
GROUP="plugdev"
ensures the devices will be owned by
the group named "plugdev". This isn't particularly important since
you'll probably be mounting the cards using /etc/fstab lines or
some sort of automount daemon.
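For instance, a hypothetical /etc/fstab line for the first partition
might look like this (adjust the mount point and filesystem type to taste):
/dev/card-sd1  /media/card-sd  vfat  user,noauto  0  0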
Pause and reflect sadly on the confusing coincidence of "scsi disk"
and "secure digital" both having the same abbreviation, so that
you need context to tell what each of these "sd"s means.
Test your new udev line by restarting udev:
/etc/init.d/udev restart
and see if your new device is there in /dev. If it is, you're all set!
Now you can add the rest of the devices from your multicard reader:
go back to the udevinfo steps and find out what each device is
called, then add a line for each of them.
Tags: ubuntu, udev, linux
[
16:45 Apr 18, 2009
More linux |
permalink to this entry |
]
Wed, 08 Apr 2009
I was curious whether Linux could read the CPU temperature on Dave's
new Mac Mini. I normally read the temperature with something like
this:
cat /proc/acpi/thermal_zone/ATF0/temperature
(the ATF0 part varies from machine to machine).
Though this doesn't work on all machines -- on my AMD desktop
it always returns the same number, which, I'm told, means that
the BIOS probably has some code that looks something like this:
if (OS == "Win95" || OS == "Win98") {
return get_win9x_temp();
}
else if (OS == "WinNT" || OS == "WinXP" || OS == "Vista") {
return get_nt_temp();
}
else {
return 40;
}
Anyway, I wondered whether the Mac would have that problem
(with different OS names, of course).
There wasn't anything in /proc/acpi/thermal_zone on the Mac,
but /proc is deprecated and we're all supposed to be moving to /sys,
right? But nobody writes about the new way to get the temperature
from /sys; most people are still using the old /proc way.
Took some digging, but I found it:
cat /sys/class/thermal/thermal_zone0/temp
It's in thousandths of a degree C now, rather than straight degrees C.
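(So a quick way to get plain degrees, if you have awk handy, is something like
awk '{ printf "%.1f C\n", $1/1000 }' /sys/class/thermal/thermal_zone0/temp
which just divides by 1000.)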
And on the Mini? Nope, it's not there either. If Dave needs the
temperature he needs to stick to OS X, or else figure out lm_sensors.
Update: Matthew Garrett has an excellent blog article on
the OS entries
reported to ACPI. Apparently Linux since 2.6.29 has claimed to
be "Microsoft Windows NT" to avoid just the sort of problem I
mentioned. Though that leaves me confused about why my desktop
machine always reports 40C.
Thanks to JanC for pointing me to that article!
Tags: linux, acpi, kernel, tips
[
21:54 Apr 08, 2009
More linux/kernel |
permalink to this entry |
]
Tue, 07 Apr 2009
Today's award concerns clarity of error messages.
My desktop machine has been getting flakier for a week or two.
Strange messages at boot, CDROM drive unable to burn reliably or
verify after burning, and finally it culminated in a morning where
it wouldn't boot at all. Turned out (after much experimentation)
to be not one but two bad IDE cables -- and these were the
snazzy expensive heavy-duty cables, not the cheap ribbon cables,
in a box that hadn't been opened for months. Weird.
Anyway, since I had the system disk out anyway (to recover data from
it) I left it out, migrated my data to the newer, bigger disk and
installed a new Ubuntu Intrepid.
Been meaning to do that anyway -- running two disks just adds to the
noise, heat and power usage and doesn't really add that much speed.
It took a couple of hours to get the system working the way I want it
-- installing things I need, like tcsh, vim, emacs, plucker, vlc, sox
etc. and cleaning up some of the longstanding Ubuntu udev and kernel
configuration bugs that keep various hardware from working.
I thought I had everything ready when I noticed I wasn't getting
any sound alerts, so I tried playing a sample .wav file, and got
a rather unusual error:
(clavius)- play sample.wav
ALSA lib confmisc.c:768:(parse_card) cannot find card '0'
ALSA lib conf.c:3513:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings
ALSA lib conf.c:3513:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1251:(snd_func_refer) error evaluating name
ALSA lib conf.c:3513:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:3985:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2196:(snd_pcm_open_noupdate) Unknown PCM default
play soxio: Can't open output file `default': cannot open audio device
What does that mean?
Well, it turns out what it means is ... my user wasn't in the
"audio" group, so I didn't have write permission on the sound device.
I added myself to "audio" in /etc/group and sound worked fine in my
next session.
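(You can check your memberships with the groups command; and instead of editing /etc/group by hand, something like
adduser yourusername audio
run as root should do the same thing on Debian/Ubuntu. Either way it only takes effect at your next login.)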
Now, I've seen some fairly obscure error messages in my time,
but this one may just win my all-time obscurity award. 9 lines and 744
characters to say "Can't open $device."
And with all that, it still managed
to omit the one piece of information that might have been helpful:
the name of the device it was trying to open (so that an ls -l
would have told me the problem right away).
Impressive!
Tags: linux, alsa, user interface, humor
[
14:23 Apr 07, 2009
More tech |
permalink to this entry |
]
Sat, 14 Mar 2009
When I upgraded to Ubuntu Intrepid recently, I pulled in a newer GTK+,
version 2.14.4. And when I went to open a file in GIMP, I got a surprise:
my "bookmarks" were no longer visible without scrolling down.
In the place where the bookmarks used to be, instead was a list of ...
what are those things? Oh, I see ... they're all the filesystems
listed with "noauto" in my /etc/fstab --the filesystems that
aren't mounted unless somebody asks for them, typically by plugging
in some piece of hardware.
There are a lot of these. Of course there's one for the CDROM drive
(I never use floppies so at some point I dropped that entry).
I have another entry for Windows-formatted partitions that show up on
USB, like when I plug in a digital camera or a thumb drive.
I also have one of those front panel flash card readers with 4 slots,
for reading SD cards, memory sticks, compact flash, smart media etc.
Each of those shows up as a different device,
so I treat them separately and mount SD cards as /sdcard,
memory sticks as /stick and so on.
In addition, there are entries corresponding to
other operating systems installed on this multi-boot machine, and
to several different partitions on my external USB backup drive.
These are all listed in /etc/fstab with entries like this:
/dev/hdd /cdrom udf,iso9660 user,noauto 0 0
/dev/sde1 /pix vfat rw,user,fmask=133,noauto 0 0
The GTK developers, in their wisdom, have apparently decided that what the
file selector really needs is a list of all those unmounted fstab entries.
I mean, I was just thinking while opening a file in GIMP the other day,
"Browsing image files on filesystems that are actually mounted
is so tedious.
I wish I could do something else instead, like view my /etc/fstab file
to see a list of unmounted filesystems for which I might decide to
plug in an external device."
Clicking on one of the unmounted filesystems (even right-clicking!)
gives an error:
Could not mount sdcard
mount: special device /dev/sdb1 does not exist
So I guess the intent is that I'll plug in my external drive or camera,
then use the gtk file selector from a program like GIMP as the means to
mount it. Um ... don't most people already have some way of mounting
new filesystems, whether it's an automatic mount from HAL or typing
mount
in a terminal?
(And before you ask, yes, for the time being I have dbus and hal and
fam and gamin and all that crap running.)
The best part
But I haven't even told you the best part yet. Here it is:
If you mount a filesystem manually, e.g. mount /dev/sdb1 /mnt
... it doesn't show up in the list!
So this enormous list of filesystems that's keeping me from seeing
my file selector bookmarks ... doesn't even include filesystems that
are really there!
Tags: gtk, user interface, humor, linux
[
12:59 Mar 14, 2009
More linux |
permalink to this entry |
]
Wed, 11 Mar 2009
I finally got around to upgrading to the current Ubuntu, Intrepid
Ibex. I know Intrepid has been out for months and Jaunty is just
around the corner; but I was busy with the run-up to a couple of
important conferences when Intrepid came out, and couldn't risk
an upgrade. Better late than never, right?
The upgrade went smoothly, though with the usual amount of
babysitting, watching messages scroll by for a couple of hours
so that I could answer the questions that popped up
every five or ten minutes. Question: Why, after all these years
of software installs, hasn't anyone come up with a way to ask all
the questions at the beginning, or at the end, so the user can
go have dinner or watch a movie or sleep or do anything besides
sit there for hours watching messages scroll by?
XKbOptions: getting Ctrl/Capslock back
The upgrade finished, I rebooted, everything seemed to work ...
except my capslock key wasn't doing ctrl as it should. I checked
/etc/X11/xorg.conf, where that's set ... and found the whole
file commented out, preceded by the comment:
# commented out by update-manager, HAL is now used
Oh, great. And thanks for the tip on where to look to get my settings
back. HAL, that really narrows it down.
Google led me to a forum thread on
Intrepid
xorg.conf - input section. The official recommendation is to
run sudo dpkg-reconfigure console-setup
... but of
course it doesn't allow for options like ctrl/capslock.
(It does let you specify which key will act as the Compose key,
which is thoughtful.)
Fortunately, the release
notes give the crucial file name: /etc/default/console-setup.
The XKBOPTIONS= line in that file is what I needed.
It also had the useful XKBOPTIONS="compose:menu" option left over
from my dpkg-reconfigure run. I hadn't known about that before; I'd
been using xmodmap to set my multi key. So my XKBOPTIONS now looks
like: "ctrl:nocaps,compose:menu"
.
Fixing the network after resume from suspend
Another problem I hit was suspending on my desktop machine.
It still suspended, but after resuming, there was no network.
The problem turned out to lie in
/etc/acpi/suspend.d/55-down-interfaces.sh.
It makes a list of interfaces which should be brought up and down like this:
IFDOWN_INTERFACES="`cat /etc/network/run/ifstate | sed 's/=.*//'`"
IFUP_INTERFACES="`cat /etc/network/run/ifstate`"
However, there is no file /etc/network/run/ifstate, so this always
fails and so
/etc/acpi/resume.d/62-ifup.sh fails to bring
up the network.
Google to the rescue again. The bad thing about Ubuntu is that they
change random stuff so things break from release to release. The good
thing about Ubuntu is a zillion other people run it too, so whatever
problem you find, someone has already written about it. Turns
out ifstate is actually in /var/run/network/ifstate
now, so making that change in
/etc/acpi/suspend.d/55-down-interfaces.sh
fixes suspend/resume.
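In other words, those two lines should end up looking like this (assuming your script otherwise matches the stock Intrepid one):
IFDOWN_INTERFACES="`cat /var/run/network/ifstate | sed 's/=.*//'`"
IFUP_INTERFACES="`cat /var/run/network/ifstate`"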
It's bug
295544, fixed in Jaunty and nominated for Intrepid (I just learned
about the "Nominate for release" button, which I'd completely missed
in the past -- very useful!) Should be interesting to see if the fix
gets pushed to Intrepid, since networking after resume is completely
broken without it.
Otherwise, it was a very clean upgrade -- and now I can build
the GIMP trunk again, which was really the point of the exercise.
Tags: ubuntu, linux, install, X11, networking
[
18:28 Mar 11, 2009
More linux/install |
permalink to this entry |
]
Sun, 08 Mar 2009
USB flash drives and SD cards are getting so big -- you can really
carry a lot of stuff around on them. Like a backup of your mail
directory, your dot files, your website, stuff you'd really rather
not lose. You can even slip an SD card into your wallet.
But that convenient small size also means it's easy to lose your
USB key, or leave it somewhere. Someone could pick it up and have
access to anything you put there. Wouldn't it be nice to encrypt it?
There are lots of howtos on the net, like
this and
this,
but most of them are out of date and need a bit of fiddling to get
working. So here's one that works on Ubuntu hardy and a 2.6.28 kernel.
I'll assume that the drive is on /dev/sdb.
First, consider making two partitions on the flash drive.
The first partition is a normal unencrypted vfat partition, so you can
use it to transfer files to other machines, even Windows and Mac.
The second partition will be your encrypted file system.
Use fdisk, gparted or your favorite partitioning
tool to make the partitions, then create a filesystem on the first:
mkfs.vfat /dev/sdb1
Update:
On many distros you'll probably need to load the "cryptoloop" module
before proceeding with the following steps. Load it like this (as root):
modprobe cryptoloop
Easy, moderate security version
Now create the encrypted partition (you'll need to be root).
I'll start with a relatively low security version -- good enough to
keep out most casual snoops who happen to pick up your flash drive.
losetup -e aes /dev/loop0 /dev/sdb2
[type a password or pass phrase]
mkfs.ext2 /dev/loop0
losetup -d /dev/loop0
I used ext2 as the filesystem type. I want a Linux filesystem, not a
Windows one, so I can store dot-filenames like .gnupg/ and so I can
make symbolic links. I chose ext2 rather
than a journalled filesystem like ext3 because of kernel configuration
warnings: to get encrypted mounts working I had to enable
loopback and cryptoloop support under drivers/block
devices, and the cryptoloop help says:
WARNING: This device is not safe for journaled file systems like
ext3 or Reiserfs. Please use the Device Mapper crypto module
instead, which can be configured to be on-disk compatible with the
cryptoloop device.
I hunted around a bit for this "device mapper crypto module" and
couldn't figure out where or what it was (I wish kernel help files
would give either the path to the option, or the CONFIG_ name in
.config!) so I decided I'd better stick with non-journalled
filesystems for the time being.
Now the filesystem is ready to use. Mount it with:
mount -o loop,encryption=aes /dev/sdb2 /mnt
[type your password/phrase]
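(If you expect to mount it a lot, you could also add an /etc/fstab entry -- this is just a sketch, adjust the device and mount point, and I haven't checked how well the loop options play with non-root mounting:
/dev/sdb2 /crypt ext2 noauto,loop,encryption=aes 0 0
so that mount /crypt prompts for the passphrase.)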
Higher security version
There are several ways you can increase security. Of course, you can
choose other encryption algorithms than aes -- that choice is up to you.
You can also use a random key, rather than a password you type in.
Save the random key to a file on your system (obviously, you'll want
to back that up somewhere else). This has both advantages and
disadvantages: anyone who has access to your computer has the key
and can read your encrypted disk, but on the other hand, someone who
finds your flash drive somewhere will be very, very unlikely to be
able to use it. Set up a random key like this:
dd if=/dev/random of=/etc/loop.key bs=1 count=60
losetup -e aes -p 0 /dev/loop0 /dev/sdb2 < /etc/loop.key
mkfs.ext2 /dev/loop0
losetup -d /dev/loop0
(of course, you can make it quite a bit longer than 60 bytes).
(Update: skipped mkfs.ext2 step originally.)
Then mount it with:
cat /etc/loop.key | mount -p0 -o loop,encryption=aes /dev/sdb2 /crypt
Finally, most file systems write predictable data, like superblock
backups, at predictable places. This can make it easier for someone to
break your encryption. In theory, you can foil this by specifying
an offset so those locations are no longer so predictable.
Add -o 123456
(of course, use your own offset, not that
one) to the losetup
line, and ,offset=123456
in the options of the mount
command.
In practice, though, offset doesn't work for me: I get
ioctl: LOOP_SET_STATUS: Invalid argument
whenever I try to specify an offset. I haven't pursued it;
I'm not trying to hide state secrets, so
I'm more worried about putting off casual snoops and data thieves.
Tags: linux, security, filesystems, backups
[
15:10 Mar 08, 2009
More tech/security |
permalink to this entry |
]
Fri, 06 Mar 2009
We were talking about fonts on #gimp-users and someone mentioned
fontmatrix as a way to tag and organize fonts. (Tagging of resources
like fonts is an ongoing GIMP project, and with any luck will be
available in a future release.)
I tried fontmatrix and found it complex and inscrutable.
But it made me look for smaller font-tagging projects, and that led me to
FontyPython.
It's fairly small, it's Python, it's already included in Ubuntu ...
and how can you not like a project with a name like that?
When you start up, you need to choose a font folder to view any fonts.
Unfortunately,
the standard place to put fonts on a modern Linux system, ~/.fonts,
is not an option: fontypython won't look in directories starting with
a dot. The only way to view fonts installed in .fonts is to specify
it on the command line:
fontypython .fonts
I talked to the author, and it turns out the intent is quite
different: you're intended to keep your font list somewhere else
(say, ~/myfonts) and use fontypython to move fonts in and
out of ~/.fonts, with the Install button.
That model doesn't quite match my workflow --
I'd have to keep telling apps like GIMP to rescan the font list as
I added and removed fonts (and other apps besides GIMP mostly need
to be restarted to see new fonts) --
but it's probably ideal for some people.
When you first start up fontypython it displays the first page of your fonts.
Instead of a scrollbar, you page through using the Back/Forward
buttons or the option menu down below the font list.
By default, fonts are displayed quite large; you can change the size
in File->Settings if you want to see more at once.
It's time to start categorizing!
To do that, you need to create some pogs,
a silly term taken from tyPOGraphy. Pogs are just categories of font.
Click on New Pog in the buttons at the bottom right of the
window and choose a name for your first pog -- you might want pogs
for "script", or "handwriting", or "gothic", or "outline".
Once you've created some pogs, select one in
the Target list along the right edge of the window.
That's your active pog. Add fonts to the pog by clicking
on them in the list (a big red checkmark will appear over the font).
Mark as many as you want to move, then click
"Put fonts into pogname"
at the bottom of the window. Those fonts will grey out, to indicate
that they're members of the current pog.
To view fonts by pog -- to view all your handwriting or
script fonts --
use the Pogs tab near the upper left of the window.
Nothing to it! Unfortunately, the python routines fontypython uses
fail on a few fonts; but that's true of most font viewers
(like gtkfontsel), and fontypython does better than many.
It does offer a way to screen out bad fonts that can't display,
in case you have any fonts that cause serious problems like crashes
(I didn't).
Quite a useful program if you're a font junkie like I am!
I'm looking forward to using it for real projects.
Tags: fonts, linux
[
13:13 Mar 06, 2009
More linux |
permalink to this entry |
]
Fri, 27 Feb 2009
I've used my simple network schemes setup for many years. No worries
about how distros come up with a new network configuration GUI every
release; no worries about all the bloat that NetworkManager insists
on before it will run; no extra daemons running all the time polling
my net just in case I might want to switch networks suddenly.
It's all command-line based; if I'm at home, I type
netscheme home
and my network will be configured for that setup until I tell it
otherwise.
If I go to a cafe with an open wi-fi link, I type
netscheme wifi; I have other schemes for places
I go where I need a wireless essid or WEP key. It's all very easy
and fast.
Last week for SCALE I decided it was silly to have to su and create
a new scheme file for conferences where all I really needed was the
name of the network (the essid), so I added a quick hack to my
netscheme script so that typing netscheme foo, where
there's no existing scheme by that name, will switch to a scheme
using foo as the essid. Worked nicely, and that inspired
me to update the "documentation".
I wrote an elaborate page on my network schemes back around 2003,
but of course it's all changed since then and I haven't done much
toward updating the page. So I've rewritten it completely, taking
out most of the old cruft that doesn't seem to apply any more.
It's here:
Howto Set Up
Multiple Network Schemes on a Linux Laptop.
Tags: linux, laptop, networking
[
10:51 Feb 27, 2009
More linux/laptop |
permalink to this entry |
]
Fri, 06 Feb 2009
I've written before about how I'd like to get a netbook like an Asus Eee,
except that the screen resolution puts me off: no one makes a netbook
with vertical resolution of more than 600. Since most projectors prefer
1024x768, I'm wary of buying a laptop that can't display that resolution.
(What was wrong with my beloved old Vaio? Nothing, really, except that
the continued march of software bloat means that a machine that can't
use more than 256M RAM is hurting when trying to run programs
(*cough* Firefox *cough*) that start life by grabbing about 90M and
go steadily up from there. I can find lightweight alternatives for
nearly everything else, but not for the browser -- Dillo just doesn't
cut it.)
Ebay turned out to be the answer: there are lots of subnotebooks
there, nice used machines with full displays at netbook prices.
And so a month before LCA I landed a nice Vaio TX650 with 1.5G RAM,
Pentium M, Intel 915GM graphics and Centrino networking.
All nice Linux-supported hardware.
But that raised another issue: how do widescreen laptops
(the TX650 is 1366x768) talk to a projector?
I knew it was possible -- I see people presenting from widescreen
machines all the time -- but nobody ever writes about how it works.
The first step was to get it talking to an external monitor at all.
I ran a VGA cable to my monitor, plugged the other end into the Vaio
(it's so nice not to need a video dongle!) and booted. Nothing. Hmm.
But after some poking and googling, I learned that
with Intel graphics, xrandr is the answer:
xrandr --output VGA --mode 1024x768
switches the external VGA signal on, and
xrandr --auto
switches it back off.
Update, April 2010: With Ubuntu Lucid, this has changed and now it's
xrandr --output VGA1 --mode 1024x768
-- in other words, VGA changed to VGA1. You can run xrandr
with no arguments to get a list of possible output devices and find
out whether X sees the external projector or screen correctly.
Well, mostly. Sometimes it doesn't work -- like, unfortunately,
at the lightning talk session, so I had to give my
talk without visuals. I haven't figured that out yet.
Does the projector have to be connected before I run xrandr?
Should it not be connected until after I've already run xrandr?
Once it's failed, it doesn't help to run xrandr again ... but
a lot of fiddling and re-plugging the cable and power cycling the
projector can sometimes fix the problem, which obviously isn't helpful
in a lightning talk situation.
Eventually I'll figure that out and blog it (ideas, anyone?)
but the real point of today's article is resolution. What I
wanted to know was: what happened to that wide 1366-pixel screen when
I was projecting 1024 pixels? Would it show me some horrible elongated
interpolated screen? Would it display on the left part of the laptop
screen, or the middle part?
The answer, I was happy to learn, is that it does the best thing
possible: it sends the leftmost 1024 pixels to the projector, while
still showing me all 1366 pixels on the laptop screen.
Why ... that means ... I can write notes for myself, to display in
the rightmost 342 screen pixels!
All it took was a little bit of
CSS hacking
in my
HTML slide
presentation package, and it worked fine.
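The gist of the hack, sketched here with made-up class names (the real thing lives in my presentation package), is just to position a notes block past the 1024-pixel mark so only the laptop panel ever shows it:
.notes {
  position: absolute;
  left: 1024px;   /* off the right edge of the projected area */
  width: 342px;
  font-size: small;
}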
Now I have notes just like my Mac friends with their Powerpoint and
their dual-head video cards, only I get to use Linux and HTML.
How marvellous! I could get used to this widescreen stuff.
Tags: laptop, X11, linux, speaking, projector, lca2009, linux.conf.au
[
22:12 Feb 06, 2009
More linux/laptop |
permalink to this entry |
]
Tue, 13 Jan 2009
I've been wanting for a long time to make Debian and Ubuntu
repositories so people can install
pho with apt-get,
but every time I try to look it up I get bogged down.
But I got mail from a pho user who really wanted that, and even
suggested a howto.
That howto
didn't quite do it, but it got me moving to look for a better one,
which I eventually found in the
Debian
Repository Howto.
It wasn't complete either, alas, so it took some trial-and-error
before it actually worked. Here's what finally worked:
I created two web-accessible directories, called hardy and etch.
I copied all the files created by dpkg-buildpackage on each distro --
.deb, .dsc, .tar.gz, and .changes (I don't think
this last file is used by anything) -- into each directory
(renaming them to add -etch and -hardy as appropriate).
Then:
% cd hardy/
% dpkg-scanpackages . /dev/null | gzip > Packages.gz
% dpkg-scansources . /dev/null | gzip > Sources.gz
% cd ../etch/
% dpkg-scanpackages . /dev/null | gzip > Packages.gz
% dpkg-scansources . /dev/null | gzip > Sources.gz
It gives an error,
** Packages in archive but missing from override file: **
but seems to work anyway.
Now you can use one of the following /etc/apt/sources.list lines:
deb http://shallowsky.com/apt/hardy ./
deb http://shallowsky.com/apt/etch ./
After an apt-get update, it saw pho, but it warned me
WARNING: The following packages cannot be authenticated!
pho
Install these packages without verification [y/N]?
There's some discussion in the
SecureAPT page
on the Debian wiki, but it's a bit involved and I'm not clear if
it helps me if I'm not already part of the official Debian keychain.
This page on
Release
check of non Debian sources was a little more helpful, and told me
how to create the Release and Release.gpg file -- but then I just get
a different error,
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY
And worse, it's an error now, not just a warning,
preventing any
apt-get update.
Going back to the SecureApt page, under
Setting up a secure apt repository they give the two steps the
other page gave for creating Release and Release.gpg, with a third
step: "Publish the key fingerprint, that way your users will know what
key they need to import in order to authenticate the files in the
archive."
So apparently if users don't take steps to import the key manually,
they can't update at all. Whereas if I leave out the Release and
Release.gpg files, all they have to do is type y when they see the
warning. Sounds like it's better to leave off the key.
I wish, though, that there was a middle ground, where I could offer the
key for those who wanted it without making it harder for those
who don't care.
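(For reference, the Release/Release.gpg generation described on those pages boils down to roughly this, run in each repository directory and assuming you already have a GPG key set up:
apt-ftparchive release . > Release
gpg -abs -o Release.gpg Release
plus publishing the key's fingerprint somewhere your users can find it.)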
Tags: programming, pho, debian, ubuntu, linux
[
21:14 Jan 13, 2009
More linux |
permalink to this entry |
]
Sun, 04 Jan 2009
I got myself a GPS unit for Christmas.
I've been resisting the GPS siren song for years -- mostly because I
knew it would be a huge time sink involving months of futzing with
drivers and software trying to get it to do something useful.
But my experience at an OpenStreetMap
mapping
party got me fired up about it, and I ordered a Garmin Vista Cx.
Shopping for a handheld GPS is confusing. I was fairly convinced I
wanted a Garmin, just because it's the brand used by most people in
the open source mapping community so I knew they were likely to work.
I wanted one with a barometric altimeter, because I
wanted that data from my hikes and bike rides (and besides,
it's fun to know how much you've climbed on an outing; I used to have
a bike computer with an altimeter and it was a surprisingly good
motivator for working harder and getting in better shape).
But Garmin has a bazillion models and I never found any comparison
page explaining the differences among the various hiking eTrex models.
Eventually I worked it out:
Garmin eTrex models, decoded
- C
- Color display. This generally also implies USB connectivity
instead of serial, just because the color models are newer.
- H
- High precision (a more sensitive satellite receiver).
- x
- Takes micro-SD cards. This may not be important for storing
tracks and waypoints (you can store quite a long track with the
built-in memory) but they mean that you can load extra base maps,
like topographic data or other useful features.
- Vista, Summit
- These models have barometric altimeters and magnetic compasses.
(I never did figure out the difference between a Vista and a Summit,
except that in the color models (C), Vistas take micro-SD cards (x)
while Summits don't, so there's a Summit C and HC while Vistas
come in Cx and HCx. I don't know what the difference is between
a monochrome Summit and Vista.)
- Legend, Venture
- These have no altimeter or compass.
A Venture is a Legend that comes without the bundled
extras like SD card, USB cable and base maps, so it's cheaper.
For me, the price/performance curve pointed to the Vista Cx.
Loading maps
Loading base maps was simplicity itself, and I found lots of howtos
on how to use downloadable maps. Just mount the micro-SD card on any
computer, make a directory called Garmin, and name the file
gmapsupp.img.
I used the CloudMade map
for California, and it worked great.
There are lots of howtos on generating your own maps, too,
and I'm looking forward to making some with topographic data
(which the CloudMade maps don't have). The most promising
howtos I've found so far are the
OSM
Map On Garmin page on the OSM wiki and the much more difficult,
but gorgeous,
Hiking
Biking Mapswiki page.
Uploading tracks and waypoints
But the real goal was to be able to take this toy out on a hike,
then come back and upload the track and waypoint files.
I already knew, from the mapping party, that Garmins have an odd
misfeature: you can connect them in usb-storage mode, where they look
like an external disk and don't need any special software ... but then
you can't upload any waypoints. (In fact, when I tried it with my
Vista Cx I didn't even see the track file.) To upload tracks and
waypoints, you need to use something that speaks Garmin protocol:
namely, the excellent GPSBabel.
So far so good. How do you call GPSbabel?
Luckily for me, just before my GPS arrived,
Iván Sánchez Ortega posted a
useful
little gpsbabel script
to the OSM newbies list and I thought I was all set.
But once I actually had the Vista in hand, complete with track and
waypoints from a walk around the block, it turned out it wasn't quite
that simple -- because Ubuntu didn't create the /dev/ttyUSB0 that
Iván's script used. A web search found tons of people having that
problem on Ubuntu and talking about various workarounds, involving
making sure the garmin_usb driver is blacklisted in
/etc/modprobe.d/blacklist (it was already), adding a
/etc/udev/rules.d/45-garmin.rules file that changes permissions
and ownership of ... um, I guess of the file that isn't being created?
That didn't make much sense. Anyway, none of it helped.
But finally I found the fix: keep the garmin_usb driver blacklisted,
and use "usb:" as the device to pass to GPSBabel rather than
"/dev/ttyUSB0". So the commands are:
gpsbabel -t -i garmin -f usb: -o gpx -F tracks.gpx
gpsbabel -i garmin -f usb: -o gpx -F waypoints.gpx
Like so many other things, it's easy once you know the secret!
Viewing tracklogs works great in Merkaartor, though I haven't yet
found an app that does anything useful with the elevation data.
I may have to write one.
Update: After I wrote this but before I was able to post it,
a discussion on the OSM Newbies list with someone
who was having similar troubles resulted in this useful wiki page:
Garmin
on GNU/Linux. It may also be worth checking the
Discussion
tab on that wiki page for further information.
Update, October 2011:
As of Debian Squeeze or Ubuntu Natty, you need two steps:
- Add a line to /etc/modprobe.d/blacklist.conf:
blacklist garmin_gps
- Create a udev file,
/etc/udev/rules.d/51-garmin.rules, to set the permissions so
that you can access the device without being root. It contains the line:
ATTRS{idVendor}=="091e", ATTRS{idProduct}=="0003", MODE="0660", GROUP="plugdev"
Then use gpsbabel with usb:
and you should be fine.
Tags: gps, mapping, GIS, linux, ubuntu, udev, openstreetmap
[
16:31 Jan 04, 2009
More mapping |
permalink to this entry |
]
Wed, 26 Nov 2008
I love my new monitor. The colors are great, it's sharp, the angles
are good. Only one problem: it's really really bright.
It has the usual baffling "push buttons at random trying to figure out
how to navigate the menu system" brightness control -- which dims
the monitor's perceived brightness by about .003% if I take it all
the way to the bottom of the scale. (This is apparently a bug that
some of these Dells have and others don't.)
It has contrast, too -- but
the monitor won't change contrast when running through the DVI cable
(this is even documented in the monitor's manual).
I have no idea why. It makes me wonder whether there's normally a way
of changing brightness over a DVI cable; but lots of googling hasn't
brought enlightenment on that score.
I tried the VGA cable. The display was very noticeably less sharp,
though pressing the monitor's "auto adjust" improved it a lot.
Contrast adjustment did work (and helped) using the VGA cable,
but it also turned everything green. I was able to improve the
color cast a bit with
xgamma -ggamma .75 -bgamma .9
but this was all looking like quite a hassle. I wanted an easier way
to change brightness. xgamma wasn't it: it works well for fine-tuning
but its brightness curve is way off if you try to depart by much from
full brightness.
Enter xbrightness and
xbrightness-gui (Mikael Magnusson to the rescue again! He
knew about these excellent programs, and perhaps equally important,
he had a copy of xbrightness-gui, which seems to have vanished from
the web.)
xbrightness is an excellent little command-line program that sets the
X gamma curve to appropriate values. Just run xbrightness BRIGHTNESS
passing it a value between 0 and 65535. xbrightness-gui is an
interactive program that lets you drag curves around for each of
the three color curves, or the combined image, with a user interface
very similar to GIMP's Curves tool. You can even save and load curves.
xbrightness-gui's coolness notwithstanding, the simple xbrightness was
really all I needed. It does a fine job of adjusting the monitor
brightness while keeping colors neutral. The version I was using
was Mikael's version, to which he'd added the ability to adjust colors
too (much like xgamma does, but using more useful curves). It turns
out I don't need the color correction, but it's nice to know it's there.
But what I did need was a way to query the current brightness, and,
more important, a way to bump the brightness a little bit up and down.
So I added those features. Getting the current brightness isn't
actually something you can do, since the whole gamma curve for the
three channels is what you perceive as brightness. I didn't try to
estimate perceived brightness based on the whole curve; I just took
the value of the highest value for each color, and their average or
maximum.
Then I tied my new increment/decrement into key bindings in Openbox.
I bound W-F5 (the Windows key plus F5) to xbrightness -2560, and
W-F6 to xbrightness +2560, so I can go up or down in brightness by
pressing keys without having to type any five-digit numbers.
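In Openbox that's a couple of keybind entries in the <keyboard> section of ~/.config/openbox/rc.xml, something like this (a sketch, not copied from my actual config):
<keybind key="W-F5">
  <action name="Execute"><command>xbrightness -2560</command></action>
</keybind>
<keybind key="W-F6">
  <action name="Execute"><command>xbrightness +2560</command></action>
</keybind>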
I've made available the old xbrightness-gui, since it's no longer
available anywhere else; a patch that integrates my changes and Mika's
into xbrightness-0.3; and the patched xbrightness tarball.
They're all at http://shallowsky.com/software/xbrightness/.
One other fun thing about using X gamma settings to adjust
brightness. The first night I used it, I noticed at some point that my
cursor looked very different -- it had become blindingly white.
It turns out that the cursor is implemented at a lower level and
doesn't go through the X gamma system. So turning the brightness
down via gamma curves doesn't affect the cursor, which remains always
at full brightness. It's quite a nice side effect -- the cursor is
much more visible than it normally is.
Tags: linux, monitor, brightness
[
12:37 Nov 26, 2008
More linux |
permalink to this entry |
]
Sat, 15 Nov 2008
Dave and I recently acquired a lovely trinket from a Mac-using friend:
an old 20-inch Apple Cinema Display.
I know what you're thinking (if you're not a Mac user): surely
Akkana's not lustful of Apple's vastly overpriced monitors when
brand-new monitors that size are selling for under $200!
Indeed, I thought that until fairly recently. But there actually
is a reason the Apple Cinema displays cost so much more than seemingly
equivalent monitors -- and it's not the color and shape of the bezel.
The difference is that Apple cinema displays are a technology called
S-IPS, while normal consumer LCD monitors -- those ones you
see at Fry's going for around $200 for a 22-inch 1680x1050 -- are
a technology called TN. (There's a third technology in between the
two called S-PVA, but it's rare.)
The main differences are color range and viewing angle.
The TN monitors can't display full color: they're only
6 bits per channel. They simulate colors outside that range
by cycling very rapidly between two similar colors
(this is called "dithering" but it's not the usual use of the term).
Modern TN monitors are
astoundingly fast, so they can do this dithering faster than
the eye can follow, but many people say they can still see the
color difference. S-IPS monitors show a true 8 bits per color channel.
The viewing angle difference is much easier to see. The published
numbers are similar, something like 160 degrees for TN monitors versus
180 degrees for S-IPS, but that doesn't begin to tell the story.
Align yourself in front of a TN monitor, so the colors look right.
Now stand up, if you're sitting down, or squat down if you're
standing. See how the image suddenly goes all inverse-video,
like a photographic negative only worse? Try that with an S-IPS monitor,
and no matter where you stand, all that happens is that the image
gets a little less bright.
(For those wanting more background, read
TN Film, MVA,
PVA and IPS – Which one's for you?, the articles on
TFT Central,
and the wikipedia
article on LCD technology.)
Now, the comparison isn't entirely one-sided. TN monitors have their
advantages too. They're outrageously inexpensive. They're blindingly
fast -- gamers like them because they don't leave "ghosts" behind
fast-moving images. And they're very power efficient (S-IPS monitors
are only a little better than a CRT). But clearly, if you spend a lot
of time editing photos and an S-IPS monitor falls into your
possession, it's worth at least trying out.
But how? The old Apple Cinema display has a nonstandard connector,
called ADC, which provides video, power and USB1 all at once.
It turns out the only adaptor from a PC video card with DVI output
(forget about using an older card that supports only VGA) to an ADC
monitor is the $99 adaptor from the Apple store. It comes with a power
brick and USB plug.
Okay, that's a lot for an adaptor, but it's the only game in town,
so off I went to the Apple store, and a very short time later I had
the monitor plugged in to my machine and showing an image. (On Ubuntu
Hardy, simply removing xorg.conf was all I needed, and X automatically
detected the correct resolution. But eventually I put back one section
from my old xorg.conf, the keyboard section that specifies
"XkbOptions" to be "ctrl:nocaps".)
And oh, the image was beautiful. So sharp, clear, bright and colorful.
And I got it working so easily!
Of course, things weren't as good as they seemed (they never are, with
computers, are they?) Over the next few days I collected a list of
things that weren't working quite right:
- The Apple display had no brightness/contrast controls; I got
a pretty bad headache the first day sitting in front of that
full-brightness screen.
- Suspend didn't work. And here when I'd made so much progress
getting suspend to work on my desktop machine!
- While X worked great, the text console didn't.
The brightness problem was the easiest. A little web searching led me
to acdcontrol, a
commandline program to control brightness on Apple monitors.
It turns out that it works via the USB plug of the ADC connector,
which I initially hadn't connected (having not much use for another
USB 1.1 hub). Naturally, Ubuntu's udev/hal setup created the device
in a nonstandard place and with permissions that only worked for root,
so I had to figure out that I needed to edit
/etc/udev/rules.d/20-names.rules and change the hiddev line to read:
KERNEL=="hiddev[0-9]*", NAME="usb/%k", GROUP="video", MODE="0660"
That did the trick, and after that acdcontrol worked beautifully.
On the second problem, I never did figure out why suspending with
the Apple monitor always locked up the machine, either during suspend
or resume. I guess I could live without suspend on a desktop, though I
sure like having it.
The third problem was the killer. Big deal, who needs text consoles,
right? Well, I use them for debugging, but what was more important,
also broken were the grub screen (I could no longer choose
kernels or boot options) and the BIOS screen (not something
I need very often, but when you need it you really need it).
In fact, the text console itself wasn't a problem. It turns out the
problem is that the Apple display won't take a 640x480 signal.
I tried building a kernel with framebuffer enabled, and indeed,
that gave me back my boot messages and text consoles (at 1280x1024),
but still no grub or BIOS screens. It might be possible to hack a grub
that could display at 1280x1024. But never being able to change BIOS
parameters would be a drag.
The problems were mounting up. Some had solutions; some required
further hacking; some didn't have solutions at all. Was this monitor
worth the hassle? But the display was so beautiful ...
That was when Dave discovered TFT
Central's search page -- and we learned that the Dell 2005FPW
uses the exact same Philips tube as the
Apple, and there are lots of them for sale used.
That sealed it -- Dave took the Apple monitor (he has a Mac, though
he'll need a solution for his Linux box too) and I bought a Dell.
Its image is just as beautiful as the Apple (and the bezel is nicer)
and it works with DVI or VGA, works at resolutions down to 640x480
and even has a powered speaker bar attached.
Maybe it's possible to make an old Apple Cinema display work on a Mac.
But it's way too much work. On a PC, the Dell is a much better bet.
Tags: linux, tech, graphics, monitor, S-IPS, TN, ADC, DVI
[
21:57 Nov 15, 2008
More tech |
permalink to this entry |
]
Wed, 12 Nov 2008
I checked my Spam Assassin "probably" folder for the first time in too
long, and discovered that I was getting tons of false positives,
perfectly legitimate messages that were being filed as spam.
A little analysis of the X-Spam-Status: headers showed that all of
the misfiled messages (and lots of messages that didn't quite make it
over the threshold) were hitting a rule called DNS_FROM_SECURITYSAGE.
It turned out that this rule
is
obsolete and has been removed from Spam Assassin, but it
hasn't
yet been removed from Debian, at least not from Etch.
So I filed a Debian bug. Or at least I think I did -- I got an
email acknowledgement from submit@bugs.debian.org but it didn't
include a bug number and Debian's
HyperEstraier based search engine
linked off the bug page
doesn't find it (I used reportbug).
Anyway, if you're getting lots of SECURITYSAGE false hits, edit
/usr/share/spamassassin/20_dnsbl_tests.cf and comment out the
lines for DNS_FROM_SECURITYSAGE and, while you're at it, the lines
for RCVD_IN_DSBL, which is also
obsolete. Just to be safe, you might also want to add
score DNS_FROM_SECURITYSAGE 0
in your .spamassassin/user_prefs (or equivalent systemwide file) as well.
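Covering both obsolete rules, the user_prefs lines would look like:
score DNS_FROM_SECURITYSAGE 0
score RCVD_IN_DSBL 0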
Now if only I could figure out why it was setting
FORGED_RCVD_HELO and UNPARSEABLE_RELAY on messages from what seems
to be perfectly legitimate senders ...
Tags: linux, spam, bugs
[
22:54 Nov 12, 2008
More linux |
permalink to this entry |
]
Thu, 06 Nov 2008
My latest Linux Planet article,
Why
Firefox Rocks on Linux, discusses Linux-specific Firefox
shortcuts involving the middle mouse button, the URLbar and
the scrollbar.
It's getting
good
Diggs, too, and comments from people who found the tips helpful,
which is great. A lot of people don't know about some of these great
Linux time-savers, but these are the sort of things that make me
love Linux and stick with it even when it gets frustrating.
I hate to think of people missing out just because there's no
obvious way to discover some of the shortcuts!
Tags: writing, mozilla, firefox, linux
[
21:44 Nov 06, 2008
More writing |
permalink to this entry |
]
Sun, 12 Oct 2008
Someone on LinuxChix' techtalk list asked whether she could get
tcsh to print "[no output]" after any command that doesn't produce
output, so that when she makes logs to help her co-workers, they
will seem clearer.
I don't know of a way to do that in any shell (the shell would have
to capture the output of every command; emacs' shell-mode does that
but I don't think any real shells do) but it seemed like it ought
to be straightforward enough to do as a regular expression substitute
in vi. You're looking for lines where a line beginning with a prompt
is followed immediately by another line beginning with a prompt;
the goal is to insert a new line consisting only of "[no output]"
between the two lines.
It turned out to be pretty easy in vim. Here it is:
:%s/\(^% .*$\n\)\(% \)/\1[no results]\r\2/
Explanation:
- :
- starts a command
- %
- do the following command on every line of this file
- s/
- start a global substitute command
- \(
- start a "capture group" -- you'll see what it does soon
- ^
- match only patterns starting at the beginning of a line
- %
- look for a % followed by a space (your prompt)
- .*
- after the prompt, match any other characters until...
- $
- the end of the line, after which...
- \n
- there should be a newline character
- \)
- end the capture group after the newline character
- \(
- start a second capture group
- %
- look for another prompt. In other words, this whole
- expression will only match when a line starting with a prompt
- is followed immediately by another line starting with a prompt.
- \)
- end the second capture group
- /
- We're finally done with the pattern to match!
- Now we'll start the replacement pattern.
- \1
- Insert the full content of the first capture group
- (this is also called a "backreference" if you want
- to google for a more detailed explanation).
- So insert the whole first command up to the newline
- after it.
- [no results]
- After the newline, insert your desired string.
- \r
- insert a carriage return here (I thought this should be
- \n for a newline, but that made vim insert a null instead)
- \2
- insert the second capture group (that's just the second prompt)
- /
- end of the substitute pattern
Of course, if you have a different prompt, substitute it for "% ".
If you have a complicated prompt that includes time of day or
something, you'll have to use a slightly more complicated match
pattern to match it.
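To see it in action, a hypothetical log fragment like
% make clean
% make
gcc -o foo foo.c
comes out of the substitute as
% make clean
[no results]
% make
gcc -o foo foo.c
with the marker inserted only where one prompt line immediately follows another.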
Tags: regexp, shell, CLI, linux, editors
[
14:34 Oct 12, 2008
More linux/editors |
permalink to this entry |
]
Thu, 09 Oct 2008
Ever been annoyed by the file in your home directory,
.sudo_as_admin_successful? You know, the one file with the name
so long that it alone is responsible for making ls print out your
home directory in two columns rather than three or four?
And if you remove it, it comes right back after the next time
you run sudo?
Here's what's creating it (credit goes to Dave North for figuring
out most of this).
It's there because you're in the group admin,
and it's there to turn off a silly bash warning.
It's specific to Ubuntu (at least, Fedora doesn't do it).
Whenever you log in under bash, if bash sees that you're in the
admin group in /etc/group, it prints this warning:
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
Once you sudo to root, if you're in the admin group, sudo
creates an empty file named .sudo_as_admin_successful
in your home directory.
That tells bash, the next time you log in, not to print the
stupid warning any more.
Sudo creates the file even if your login shell isn't bash and so
you would never have seen the stupid warning. Hey, you might some
day go back to bash, right?
If you want to reclaim your ls columns and get rid of the file
forever, it's easy:
just edit /etc/group and remove yourself from the admin group.
If you were doing anything that required being in the admin group,
substitute another group with a different name.
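(If you'd rather not edit /etc/group by hand, something like
deluser yourusername admin
run as root should do the same thing on Debian/Ubuntu -- but remember that on Ubuntu the admin group is what grants sudo rights, so make sure you have another way to get root first.)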
Tags: linux, bash, sudo, annoyances, ubuntu
[
18:33 Oct 09, 2008
More linux |
permalink to this entry |
]
Sat, 04 Oct 2008
Dave and I were testing some ways of speeding up the booting process,
which is how he came to be looking at my Vaio's console with no X
running. "What's wrong with that font?" he asked.
I explained how Ubuntu always starts the boot process with a perfectly
fine font, then about 80% of the way through boot it deliberately
changes it to a garbled, difficult to read that was clearly not
designed for 1024x761. Been meaning for ages to figure out how to
fix it, never spent the time ... Okay, it said "Setting up console
font and keymap" just before it changes the font.
That message should be easy to find.
Maybe I should take a few minutes now and look into it.
The message comes from /etc/init.d/console-setup, which runs a
program called setupcons, which has a man page. setupcons uses
/etc/default/console-setup, which includes the following section:
# Valid font faces are: VGA (sizes 8, 14 and 16), Terminus (sizes
# 12x6, 14, 16, 20x10, 24x12, 28x14 and 32x16), TerminusBold (sizes
# 14, 16, 20x10, 24x12, 28x14 and 32x16), TerminusBoldVGA (sizes 14
# and 16), Fixed (sizes 13, 14, 15, 16 and 18), Goha (sizes 12, 14 and
# 16), GohaClassic (sizes 12, 14 and 16).
FONTFACE="Fixed"
FONTSIZE="16"
The hard part of changing the console font in the past has always been
finding out what console fonts are available. So having a list right
there in the comment is a big help.
Okay, let's try changing it to Terminus and running setupcons again.
Nope, error message. How about VGA? Success, looks fine. That was easy!
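So the working lines in /etc/default/console-setup ended up as:
FONTFACE="VGA"
FONTSIZE="16"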
But while I was in that file, what about the keymap? That's another
thing I've been meaning to fix for ages ... under Debian, Redhat and
earlier Ubuntu versions I had a .kmap.gz console map that turned my
capslock key into a Control key (the way God intended). But Ubuntu
changed things all around so the old fix didn't work any more.
I found a thread from
December from someone who wanted to make the exact same change,
for the same reason, but the only real advice in the thread involved
an elaborate ritual involving defining keymaps for X and Gnome then
applying them to the console. Surely there was a better way.
It seemed pretty clear that /etc/console-setup/boottime.kmap.gz
was the keymap it was using. I tried substituting my old keymap, but
since I'd written it to inherit from other keymaps that no longer
existed, loadkeys can't use it. Eventually I just gunzipped
boottime.kmap.gz, found the Caps Lock key (keycode 29), replaced
all the Caps_Locks
with Controls
and gzipped
it back up again. And it worked!
Gary Vollink has a more detailed description, and the process hasn't
changed much since his page on
Getting "Control"
on the "Caps Lock".
Another gem linked to from the Ubuntu thread was this
excellent
article on keyboard layouts under X by Daniel Paul O'Donnell.
It's not relevant to the problem of setting the console keymap,
but it looks like a very useful reference on how various
international character input methods work under X.
Tags: linux, ubuntu, fonts
[
22:33 Oct 04, 2008
More linux |
permalink to this entry |
]
Mon, 22 Sep 2008
Part
III in the Linux Astronomy series on Linux Planet covers two 3-D apps,
Stellarium and Celestia.
Writing this one was somewhat tricky because
the current Ubuntu, "Hardy", has a bug in its Radeon handling
and both these apps lock my machine up pretty quickly, so I went
through a lot of reboot cycles getting the screenshots.
(I found lots of bug reports and comments on the web, so I know
it's not just me.)
Fortunately I was able to test both apps and grab a few screenshots
on Fedora 8 and Ubuntu "Feisty" without encountering crashes.
(Ubuntu sure has been having a lot of
trouble with their X support lately! I'm going to start keeping
current Fedora and Suse installs around for times like this.)
Tags: writing, astronomy, linux, ubuntu, bugs
[
22:10 Sep 22, 2008
More writing |
permalink to this entry |
]
Fri, 12 Sep 2008
I have a new article on XEphem on Linux Planet,
following up to the KStars article two weeks ago:
Viewing
the Night Sky with Linux, Part II: Visit the Planets With XEphem.
Tags: writing, astronomy, linux
[
11:50 Sep 12, 2008
More writing |
permalink to this entry |
]
Sun, 31 Aug 2008
I wanted to get a list of who'd been contributing the most in a
particular open source project. Most projects of any size have a
ChangeLog file, in which check-ins have entries like this:
2008-08-26 Jane Hacker <hacker@domain.org>
* src/app/print.c: make sure the Portrait and Landscape
* buttons update according to the current setting.
I wanted to take each entry, save the name of the developer checking
in, then eventually count the number of times each name occurs (the
number of times that developer checked in) and print them in order
from most check-ins to least.
Getting the names is easy: for check-ins in the last 9 years, I just
want the lines that start with "200". (Of course, if I wanted earlier
check-ins I could make the match more general.)
grep "^200" ChangeLog
But now I want to trim the line so it includes only the
contributor's name. A bit of sed geekery can do that: the date is a
fixed format (four characters, a dash, two, dash, two, then two
spaces), so "^....-..-.. " matches that pattern.
But I want to remove the email address part too
(sometimes people use different email addresses
when they check in). So I want a sed pattern that will match
something at the front (to discard), something in the middle (keep that part)
and something at the end (discard).
Here's how to do that in sed:
grep "^200" ChangeLog | sed 's/^....-..-.. \(.*\)<.*$/\1/'
In English, that says: "For each line in the ChangeLog that starts
with 200, find a pattern at the beginning consisting of any four
characters, a dash, two characters, a dash, two characters, and
two spaces; then immediately after that, save all characters up to
a < symbol; then throw away the < and any characters that follow
until the end of the line."
That works pretty well! But it's not quite right: it includes the
two spaces after the name as part of the name. In sed, \s matches
any space character (like space or tab).
So you'd think this should work:
grep "^200" ChangeLog | sed 's/^....-..-.. \(.*\)\s+<.*$/\1/'
\s+ means it will require that at least one and maybe more space
characters immediately before the < are also discarded.
But it doesn't work. It turns out the reason is that the \(.*\)
expression is "greedier" than the \s+: so the saved name expression
grabs the first space, leaving only the second to the \s+.
The way around that is to make the name expression specify that it
can't end with a space. \S is the term for "anything that's not a
space character"; so the expression becomes
grep "^200" ChangeLog | sed 's/^....-..-.. \(.*\S\)\s\+<.*$/\1/'
(the + turned out to need a backslash before it).
We have the list of names! Add a | sort
on the end to
sort them alphabetically -- that will make sure you get all the
"Jane Hacker" lines listed together. But how to count them?
The Unix program most frequently invoked after sort
is uniq
, which gets rid of all the repeated lines.
On a hunch, I checked out the man page, man uniq,
and found the -c option: "prefix lines by the number of occurrences".
Perfect! Then just sort them by the number, from largest to
smallest:
grep "^200" ChangeLog | sed 's/^....-..-.. \(.*\S\)\s+<.*$/\1/' | sort | uniq -c | sort -rn
And we're done!
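The output is a ranked list, something like this (names and counts made up, obviously):
    142 Jane Hacker
     87 Joe Developer
     12 Some Contributor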
Now, this isn't perfect since it doesn't catch "Checking in patch
contributed by susan@otherhost.com" attributions -- but those aren't in
a standard format in most projects, so they have to be handled by hand.
Disclaimer: Of course, number of check-ins is not a good measure of
how important or productive someone is. You can check in a lot of
one-line fixes, or you can write an important new module and submit
it for someone else to merge in. The point here wasn't to rank
developers, but just to get an idea who was checking into the tree
and how often.
Well, that ... and an excuse to play with nifty Linux shell pipelines.
Tags: shell, CLI, linux, pipelines, regexp
[
12:12 Aug 31, 2008
More linux |
permalink to this entry |
]
Thu, 28 Aug 2008
I have an article on Linux Planet! The first of many, I hope.
At least the first of a short series on Linux astronomy programs,
starting with the one that's easiest to use: KStars.
It's oriented toward binocular observing, with suggestions
for good targets for beginners.
Viewing
the Night Sky with Linux, Part I: KStars
Tags: writing, astronomy, linux
[
22:46 Aug 28, 2008
More writing |
permalink to this entry |
]
Mon, 04 Aug 2008
No postings for a while -- I was too tied up with getting ready for
OSCON, and now that it's over, too tied up with catching up with
stuff that gotten behind.
A few notes about OSCON:
It was a good conference -- lots of good speakers, interesting topics
and interesting people. Best talks: anything by Paul Fenwick,
anything by Damian Conway.
The Arduino
tutorial was fun too. It's a little embedded processor with a
breadboard and sockets to control arbitrary electronic devices,
all programmed over a USB plug using a Java app.
I'm not a hardware person at all (what do
those resistor color codes mean again?) but even I, even after coming
in late, managed to catch up and build the basic circuits they
demonstrated, including programming them with my laptop. Very cool!
I'm looking forward to playing more with the Arduino when I get a
spare few moments.
The conference's wi-fi network was slow and sometimes flaky (what else is new?)
but they had a nice touch I haven't seen at any other conference:
Wired connections, lots of them, on tables and sofas scattered
around the lounge area (and more in rooms like the speakers' lounge).
The wired net was very fast and very reliable. I'm always surprised
I don't see more wired connections at hotels and conferences, and
it sure came in handy at OSCON.
The AV staff was great, very professional and helpful. I was speaking
first thing Monday morning (ulp!) so I wanted to check the room Sunday
night and make sure my laptop could talk to the projector and so
forth. Everything worked fine.
Portland is a nice place to hold a convention -- the light rail is
great, the convention center is very accessible, and street parking
isn't bad either if you have a car there.
Dave went with me, so it made more sense for us to drive.
The drive was interesting because the central valley was so thick
with smoke from all the fires (including the terrible Paradise fire
that burned for so long, plus a new one that had just started up near
Yosemite) that we couldn't see Mt Shasta when driving right by it.
It didn't get any better until just outside of Sacramento. It must
have been tough for Sacramento valley residents, living in that for
weeks! I hope they've gotten cleared out now.
I finally saw that Redding Sundial bridge I've been hearing so much
about. We got there just before sunset, so we didn't get to check the
sundial, but we did get an impressive deep red smoky sun vanishing
into the gloom.
Photos here.
End of my little blog-break, and time to get back to
scrambling to get caught up on writing and prep for the
GetSET Javascript class for high
school girls. Every year we try to make it more relevant and
less boring, with more thinking and playing and less rote typing.
I think we're making progress, but we'll see how it goes next week.
Tags: oscon08, conferences, linux, travel, portland, hardware, maker
[
23:00 Aug 04, 2008
More conferences |
permalink to this entry |
]
Sun, 22 Jun 2008
I decided to stick a tentative toe into the current millennium and
get myself a cellphone.
I sense your shock and amazement -- from people who know me, that
I would do such a thing, and from everybody else at the concept that
there's anybody in 2008 who didn't already have one.
I really don't think cellphones are evil, honest!
(Except in the hands of someone driving a car -- wouldja please
just put the phone down and pay attention to the friggin' road?)
The truth is that I just don't much like talking on the phone, and
generally manage fine with email. The land-line phone works fine for
the scant time I spend on the phone, and I have to have the land line
anyway (as part of the DSL package) so why pay another monthly bill
for a second phone?
Prepaid plans looked like just the ticket, and that's what I got.
With a cute little Motorola V195s. New toy! Rock!
It can take custom MP3 ringtones and Java games ...
but of course I don't want theirs, I want to
make my own. So I wanted to talk to the phone from Linux.
The charger plug was a familiar shape -- looked a lot like a standard
mini USB connector. Could the hardware be that easy? Sure enough, it's
a standard mini USB. Kudos to Motorola for making that so easy!
Now what about software?
My initial web searches led me down a false trail paved with programs
like wammu and gnokii. I learned that I needed to enable ACM in my
kernel (that's the modem protocol most cellphones use over USB),
so as long as I was building a new kernel anyway, I grabbed the
latest tarball from kernel.org (2.6.25.7). With that done,
I was able to talk to the phone with gnokii, but the heavily
Nokia-oriented program didn't show me much that looked useful.
Moto4lin is the answer
I set the project aside for a while. But half a week later while
looking for something else, I stumbled across
moto4lin,
which turned out to be exactly what I needed.
(I had to run as root; otherwise, when I tried to connect, it printed this on stderr:
sendControl Error:[error sending control message: Operation not permitted]
but I'm sure that can be solved somehow.)
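One likely fix -- untested, so treat it as a sketch -- is a udev rule that loosens permissions on the phone's USB device. 22b8 is Motorola's usual USB vendor ID, but check yours with lsusb before copying this:
# /etc/udev/rules.d/90-moto-phone.rules  (sketch; verify the IDs with lsusb)
SUBSYSTEM=="usb", ATTRS{idVendor}=="22b8", MODE="0666"
Then unplug and re-plug the phone so the rule gets applied.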
So run as root, click Connect, click File Manager if you're not
already in that mode, then click Update List and it reads
the files. Once they're there, you can click around in the folder
list on the left looking for the audio files (on my phone, they're in
a directory called audio somewhere under C, not A). Excellent!
Creating a ringtone leads to a kernel debugging digression
Okay, now I needed a ringtone. I wanted to use a bit of birdsong,
so I loaded one of the tracks I use for
tweet
into Audacity and fiddled semi-randomly until I figured out how
to cut and save a short clip. It would only save as WAV, but
lame clip.wav clip.mp3
solved that just fine.
(Update: the easiest way is to select the clip
you want, then do File->Export Selection...)
Except ... somewhere along the way, the clips stopped playing.
I couldn't even play the original ogg track from tweet. It *looked*
like it was playing ... it found the track, printed information about
it, showed a running time-counter for the appropriate amount of time
... but made no sound.
It eventually turned out that the problem was that shiny new 2.6.25.7
kernel I'd downloaded. A bug introduced in 2.6.24 to the ymfpci sound
card driver makes Yamaha sound cards unable to play anything with a
sample rate of 44100 Hz (which happens to be the standard CD sample rate).
After a lot of debugging I eventually filed
bug 10963
with a patch that reverts the old, working code from 2.6.23.17.
Ringtone success
Okay, a typical open source digression. But while I was still trying
to track down the kernel bug, I meanwhile found
this
Razr page that tipped me off that I might need a different
bitrate for ringtones anyway. So I converted it with:
lame -b 40 mock.wav mock.mp3
(which also made it playable on the new kernel.)
I also found some useful information in the lengthy
Ubuntu
forums discussion of moto4lin.
In the end, I was able to transfer the file easily to the motorola
phone, and to use it as my nifty new ringtone. Success! Too bad nobody
ever calls me and this phone is mostly for outgoing calls ...
Now to look for some fun Java apps.
Tags: cellphone, usb, linux
[
20:27 Jun 22, 2008
More linux |
permalink to this entry |
]
Thu, 22 May 2008
Dave needed something scanned. Oh, good! The first use of a scanner under
a new distro is always an interesting test. Though the last few
Ubuntu releases have been so good about making scanners "just work"
that I was beginning to take scanners for granted.
"Sure, no problem," I told Dave, taking the sketch he gave me.
Ha! Famous last words.
For Hardy, I guess the Ubuntu folks decided that users had
had it too easy for a while and it was time to throw us a challenge.
Under Hardy, scanning works fine as root, but normal users can't
access the scanner. sane-find-scanner
sees the scanner,
but xsane and the xsane-gimp plug-in can't talk to it (except as root).
It turns out the code for noticing you plugged in a scanner and
setting appropriate permissions (like making it group "scanner")
has been removed from udev, the obvious place for it ... and moved
into hal. Except, you guessed it, whatever hal is supposed to be
doing isn't working, so the device's group is never set to "scanner"
to make it accessible to non-root users.
Lots of people are hitting this and filing bugs (search for
scanner permissions), in particular
bug
121082 and bug
217571.
Fortunately, the fix is quite easy if you have a copy of your old
gutsy install: just copy /etc/udev/rules.d/45-libsane.rules from
gutsy to the same place on hardy.
(If you don't have your gutsy udev rules handy, I attached the file to the
latter of the two bugs I linked above.)
Then udev will handle your scanner just like it used to,
and you don't have to wait for the hal developers to figure out
what's wrong with the new hal rules.
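For the curious, the entries in that gutsy-era 45-libsane.rules are roughly this shape (an illustrative excerpt only -- the real file lists hundreds of vendor/product ID pairs, and the IDs below are just examples):
SYSFS{idVendor}=="04b8", SYSFS{idProduct}=="0110", MODE="664", GROUP="scanner"
So when a listed scanner is plugged in, udev puts its device node in the scanner group, and non-root users in that group can reach it.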
Tags: linux, ubuntu, scanner, udev, hal
[
16:56 May 22, 2008
More linux |
permalink to this entry |
]
Fri, 16 May 2008
My laptop's clock has been drifting. I suspect the clock battery is
low (not surprising on a 7-year-old machine). But after an hour of
poking and prodding, I've been unable to find a way to expose the
circuit board under the keyboard, either from the top (keyboard)
side -- though I know how to remove individual keycaps, thanks to a reader
who sent me detailed instructions a while back (thanks, Miles!) --
or the bottom. Any expert on Vaio SR laptops know how this works?
Anyway, that means I have to check and reset the time periodically.
So this morning I did a time check and found it many hours off.
No, wait -- actually it was pretty close; it only looked like it
was way off because the system had suddenly decided it was in UTC,
not PDT. But how could I change that back?
I checked /etc/timezone -- sure enough, it was set to UTC. So I
changed that, copying one from a debian machine -- "US/Pacific",
but that didn't do it, even after a reboot.
I spent some time reading man hwclock
-- there's a lot
of good reading in that manual page, about the relation between the
system (kernel) clock and the hardware clock. Did you know that
you're not supposed to use the date command to set the system
time while the system is running? Me neither -- I do that all the
time. Hmm. Anyway, interesting reading, but nothing useful about
the system time zone.
It has an extensive SEE ALSO list at the end, so I explored some
of those documents.
/usr/share/doc/util-linux/README.Debian.hwclock
is full of lots of interesting information, well worth reading,
but it didn't have the answer. man tzset
sounded
promising, but there was no such man page (or program) on my system.
Just for the heckofit, I tried typing tz
[tab]
to see if I had any other timezone-related programs installed ...
and found tzselect. And there was the answer, added almost as an
afterthought at the end of the manual page:
Note that tzselect will not actually change the timezone for you.
Use 'dpkg-reconfigure tzdata' to achieve this.
Sure enough,
dpkg-reconfigure tzdata
let me set
the time zone. And it even seems to be remembered through a reboot.
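So the whole check-and-fix boils down to something like this (a minimal sketch for Debian/Ubuntu systems):
cat /etc/timezone              # what the system thinks the zone is
date; date -u                  # compare local time against UTC
sudo dpkg-reconfigure tzdata   # pick the right zone interactively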
Tags: linux, debian, ubuntu, vaio
[
11:04 May 16, 2008
More linux |
permalink to this entry |
]
Fri, 02 May 2008
This has been a good week for fonts: two longstanding mysteries solved.
The first concerns the bitstream vera sans mono I've been using
as a terminal font in apps like rxvt and xterm. I'd been specifying it in
~/.Xdefaults like this:
XTerm*font: -bitstream-bitstream vera sans mono-bold-r-normal-*-12-*-*-*-*-*-iso10646-1
The mystery is that I'd noticed that in xterm, the font looked
slightly different -- slightly uglier -- than in rxvt (both apps
use the same X class name of XTerm). It was hard to put my finger on
what was different -- the shape of all the letters looked the same,
but it just seemed a little more ragged, and a little less compact,
in xterm. I figured it was just a minor difference in their drawing
code, or something.
Well, I was fiddling with fonts (trying to get the new-to-me
"Inconsolata" font working) and I noticed that iso10646 bit.
I didn't know what 10646 was, but shouldn't it be 8859-1 or 8859-15,
the codes for the Latin-1 alphabet? After finishing up my Inconsolata
experiments, when I set the font back to Vera I changed the line to
XTerm*font: -bitstream-bitstream vera sans mono-bold-r-normal-*-12-*-*-*-*-*-iso8859-15
and moved on to other things.
Until the next morning, when I booted up to a surprise: my main
terminal window no longer fit on the screen. It seems it had reverted
to the other (uglier) version of Vera Sans Mono, which is also very
slightly taller, so instead of being a couple of lines shorter than
the screen height, it was a couple of lines too tall to fit.
I checked .Xdefaults -- yes, it was still Vera. What was going on?
I finally remembered the one thing I had changed:
the language setting on the font, from 10646-1 to 8859-15. I changed
it back: sure enough, now the font was pretty again and the terminal
was short enough to fit.
I fired up xfontsel and did some experimenting. It turned out the
difference between the two almost-identical Vera sans mono bold roman
fonts is a field xfontsel calls "spc". It can be either 'c' or 'm'.
The 'c' version is the pretty, compact font; the 'm' is the uglier,
taller one. For some reason, specifying 10646-1 makes "spc" default
to 'c', while 8859-15 makes it default to 'm'. But specifying 'c'
in the font specifier gets the good version regardless of which
language is specified.
So this would work:
XTerm*font: -bitstream-bitstream vera sans mono-bold-r-normal-*-12-*-*-*-c-*-*-*
But then I read up on 10646-1 and it turns out to mean "the
whole unicode character set". That sounds like a good idea,
so I kept it in my font specifier after all:
XTerm*font: -bitstream-bitstream vera sans mono-bold-r-normal-*-12-*-*-*-c-*-iso10646-1
(For the moment I still didn't know what spc, c or m meant;
read on if you're curious.)
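Incidentally, a quick way to see both variants and their spc values side by side is to wildcard the spacing and registry fields and let xlsfonts list whatever matches (just a convenience; xfontsel shows the same thing interactively):
xlsfonts -fn '-bitstream-bitstream vera sans mono-bold-r-normal-*-12-*-*-*-*-*-*-*'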
The second insight concerned a longstanding mystery of Dave's.
He has been complaining for quite a while about the way
Ubuntu's modern pango-based apps all refuse to see bitmapped fonts.
(It bothered me too, but less so, because the terminal and editor
apps I use can see X fonts.)
Dave has an Ubuntu install on one machine that he's been upgrading
release after release, which does see his bitmapped fonts.
But any fresh Ubuntu installation fails to see the fonts.
What was the difference?
We knew about the trick of going into /etc/fonts/conf.d,
removing the symbolic link 70-yes-bitmaps.conf and replacing it
with a link to /etc/fonts/conf.avail/70-yes-bitmaps.conf ...
But doing that doesn't actually change anything, and bitmap
fonts still don't show up.
The secret turned out to be that you need to run
fc-cache -fv
after changing the font/conf.d links. This apparently never
happens on its own -- not on a reboot, not on installing or
uninstalling font packages. Somehow it had happened once on Dave's
good install, and that's why it worked there but nowhere else.
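Putting the two halves together, the recipe that finally makes bitmapped fonts visible looks something like this (a sketch; adjust if your conf.d link is named differently):
cd /etc/fonts/conf.d
sudo rm 70-yes-bitmaps.conf
sudo ln -s /etc/fonts/conf.avail/70-yes-bitmaps.conf .
sudo fc-cache -fv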
I'm not sure how anyone is supposed to find out about fc-cache --
there's no man fontconfig,
and the /etc/fonts/conf.avail/README offers no clue,
just misleadingly says "Fontconfig scans this directory".
man fc-cache
mentions /usr/share/doc/fontconfig/fontconfig-user.html,
which doesn't exist; it turns out on Ubuntu it's actually
/usr/share/doc/fontconfig-config/fontconfig-user.html.
But wait, that's just an html-ized manual page for fonts-conf,
so actually you could just run man fonts-conf
...
your guess is as good as mine why the fc-cache man page sends
you on a hunt for html files instead.
man fonts-conf
is good reading -- it even solves the
mystery of that spc parameter. It stands for spacing
and can be proportional, dual-width, monospace or charcell.
Aha! And there's lots more useful-looking information in that
manual page as well.
Tags: linux, fonts, i18n, mysteries
[
15:58 May 02, 2008
More linux |
permalink to this entry |
]
Tue, 29 Apr 2008
Since updating to Hardy, I've been getting mail from Anacron:
/etc/cron.weekly/slocate:
slocate: fatal error: load_file: Could not open file: /etc/updatedb.conf: No such file or directory
That's the script that updates the database for locate,
Linux's fast find system.
I figured I must have screwed something up when I moved
that slocate cron script from cron.daily
to cron.weekly (because I hate having my machine slow to a
crawl as soon as I boot it in the morning, and it doesn't bother me
if the database doesn't necessarily have files added in the
last day or two).
But after talking to some other folks and googling for Ubuntu bugs,
I discovered I wasn't the only one getting that mail, and there was
already a
bug covering it. Comparing my setup with another Hardy user's,
I found that the file slocate was failing to find, /etc/updatedb.conf,
belongs to a different package, mlocate. If mlocate is installed,
then slocate's cron script works; otherwise, it doesn't.
Sounds like slocate should have a dependency that pulls in mlocate,
no?
But wait, what do these two packages do? Let's try a little
aptitude search locate
:
p dlocate - fast alternative to dpkg -L and dpkg -S
p kio-locate - kio-slave for the locate command
i locate - maintain and query an index of a directory
p mlocate - quickly find files on the filesystem based
i slocate - Secure replacement of findutil's locate
Okay, forget the first two, but we have locate, mlocate, and slocate.
How do they relate?
Worse, if I install mlocate (so slocate will work) and then look in my
cron directories, it turns out I now have, count 'em, five
different cron scripts that run updatedb. They are:
In cron.daily:
locate: 72 lines! but a lot of that is comments and pruning,
and a lot of fiddling to figure out what version of the kernel is
running to see whether it can pass any advanced flags when it tries
to renice the process. In the end it calls
updatedb.findutils
(note no full path, though it
uses a full path when it checks for it earlier in the script).
slocate: A much simpler but unfortunately buggy 20 lines.
It checks for /etc/updatedb.conf, runs it if it exists, fiddles
with ionice, checks again for /etc/updatedb.conf, and based
on whether it finds it, runs either /usr/bin/slocate -u
or /usr/bin/slocate -u -f proc
. The latter path is what
was failing and sending root mail every time the script was run.
mlocate: an even slimmer 12 line script, which checks for
/usr/bin/updatedb.mlocate and, if it exists, fiddles ionice then
runs /usr/bin/updatedb.mlocate
.
In cron.weekly:
Two virtually identical scripts called find.notslocate and
find.notslocate.dpkg-new, which differ only in dpkg-new having
more elaborate ionice options. They both run updatedb
.
And which updatedb would that be? Probably /usr/bin/updatedb, which
links to /etc/alternatives/updatedb, which probably links to either
updatedb.mlocate or updatedb.slocate, whichever you've installed
most recently. But in either case, it's hard to see why you'd need
this script running weekly if you're already running both flavors
of updatedb from other scripts cron.daily. And having two copies
of the script is just plain wrong (and there was already a
bug
filed on it). (As long as you're poking around
in cron.daily and cron.weekly, check and see if you have
any more of these extra dpkg-new or dpkg-old scripts -- they might be
slowing down your machine for no reason.)
Further research reveals that mlocate is a new(ish) package intended
to replace slocate. (There was a long discussion of that on
ubuntu-devel,
leading to the replacement of slocate with mlocate very late in
the Hardy development cycle. There was also lots of discussion of
"tracker", apparently a GUI fast find tool that can only search in
the user's home directory.)
What is this mlocate?
The m stands for "merge": the advantage of mlocate is
that it can merge new results into its existing database instead
of replacing the whole thing every time. Sounds good, right?
However, the down side is that mlocate apparently can't
purge its database of old files that no longer
exist, and these files will clutter up your locate results.
Running locate -e
will keep them from being printed --
but there seems
to be no way to set this permanently, via an environment variable
or .locaterc file, nor to tell updatedb.mlocate to clean up its database.
So you'll need to alias locate to locate -e
if you want sensible behavior. Or go back to slocate. Sigh.
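The alias is one line in whichever shell startup file you use (both syntaxes shown):
alias locate='locate -e'     # bash
alias locate 'locate -e'     # tcsh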
Cleaning up
The important thing is to get rid of most of those spurious updatedb
cron scripts. You might choose to run updatedb daily, weekly, or only
when you choose to run it; but you probably don't want five different
scripts running two different versions of updatedb at different times.
The packages obviously aren't cleaning up after themselves, so let's
do a little manual cleanup.
That find.notslocate script looks suspicious. In fact, if you run
dpkg -S find.notslocate
, you find out that it doesn't
belong to any package -- not only should the .dpkg-new version not
be there, neither should the other one! So out they go.
As for slocate and mlocate,
it's important to know that the two packages can coexist:
installing mlocate doesn't remove slocate or vice versa.
A clean Hardy install should have only mlocate; upgrades from Gutsy
are more likely to have a broken slocate.
Having both packages probably isn't what you want. So pick one, and
remove or disable the other. If mlocate is what you want,
apt-get purge slocate
and just make sure that
/etc/cron.*/slocate disappears. If you decide you want slocate,
it's a little trickier since the slocate package is broken;
but you can fix it by creating an empty /etc/updatedb.conf so
updatedb.slocate won't fail.
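For instance, if mlocate is the keeper, the cleanup might look like this (a sketch -- look at what's actually in your cron directories before deleting anything):
sudo rm /etc/cron.weekly/find.notslocate /etc/cron.weekly/find.notslocate.dpkg-new
sudo apt-get purge slocate
ls /etc/cron.daily/*locate* /etc/cron.weekly/*locate*   # only the mlocate script should remain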
Tags: linux, boot, ubuntu, install
[
21:48 Apr 29, 2008
More linux/install |
permalink to this entry |
]
Tue, 22 Apr 2008
Seems like each new Ubuntu release makes a few gratuitous changes
to the syntax of system files. Today's change involves autologin,
controlled by the "upstart" system (here's what I wrote about the
previous syntax for
autologin
under upstart).
The /usr/bin/loginscript still hasn't changed, and this still works:
#! /bin/sh
/bin/login -f yourusername
But the syntax has changed a little for the getty line in
/etc/event.d/tty1:
respawn is now on its own line (I don't know if that matters --
I still can't find any documentation on this file's syntax,
though I found a new upstart
page that links to some blog entries illustrating how upstart
can be used to start system daemons like dbus).
And the getty now needs an exec before it.
Like this:
respawn
exec /sbin/getty -n -l /usr/bin/loginscript 38400 tty1
Update: this changed
again in Karmic Koala: the file has moved from /etc/event.d/tty1
to /etc/init/tty1.conf.
Tags: linux, ubuntu, boot
[
15:27 Apr 22, 2008
More linux |
permalink to this entry |
]
Sun, 20 Apr 2008
I finally had a moment to upgrade my desktop to Ubuntu's "Hardy Heron".
I followed the same procedure as when I went from feisty to gutsy:
- cp -ax / /hardy
- cp -ax /dev/.static/dev/* /hardy/dev/
- Fix up files like /hardy/etc/fstab and /boot/grub/menu.lst (sketched below)
- Reboot into the newly copied gutsy
- do-release-upgrade -d
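For the fix-up step, the fstab change is just pointing / at the new partition, and the grub stanza is a copy of the existing one with the root device changed -- something like this, assuming the copy went to /dev/hda3 and /boot lives on that same partition (device names and kernel version are examples; include the initrd line only if your kernel uses one):
title   Hardy (copy on hda3)
root    (hd0,2)
kernel  /boot/vmlinuz-2.6.23 root=/dev/hda3 ro
initrd  /boot/initrd.img-2.6.23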
It took an hour or two to pull down all the files, followed by a long
interval of occasionally typing Y or N, and then I was ready to start
cleaning up some of the packages I'd noticed flying by that I didn't
want. Oops! I couldn't remove or install anything with apt-get,
because it insisted that I first run dpkg --configure -a.
But I couldn't run dpkg --configure -a
because several
packages were broken.
The first broken package was plucker,
which apparently had failed to install any files.
Its postinstall script was failing because it had no
files to operate on; and then I couldn't do anything further with it
because apt-get wouldn't do anything until I did a
dpkg --configure -a.
I finally got out of that by dpkg -P plucker; then after several
more dpkg --configure -a rounds I was eventually able to apt-get
install plucker (which installed just fine the second time).
But apt still wasn't happy, because it wanted to run the trigger for
initramfs-tools, which wouldn't run because it wanted kernel modules
for some specific kernel version in /lib/modules. I didn't have any
kernel modules because I'm not running Ubuntu's kernel (I'm stuck on
2.6.23 because
bug 10118
makes all 2.6.24 variants unable to sync with USB Palm devices).
But I couldn't remove initramfs-tools because udev
(along with a bunch of other less important packages) depends on it.
I finally found my way out of that by removing
/var/lib/dpkg/triggers/initramfs-tools.
I reported it as
bug 220094.
Update: I forgot to mention one important thing I hit both on
this machine and earlier, on the laptop: /usr/bin/play (provided by
the "sox" package) no longer works because it now depends on a
zillion separate libraries. apt-get install libsox-fmt-all
to get all of them.
Tags: linux, ubuntu, debian, install
[
21:02 Apr 20, 2008
More linux/install |
permalink to this entry |
]
Thu, 10 Apr 2008
Dave has been experimenting with xorg configuration lately -- trying
to figure out why the latest Xorg no longer supports 1600x1200 on
his monitor. (I've looked for bug reports and found gazillions of
them, all blaming it on the video card but involving three different
makes of video card, so color me skeptical.)
Anyway, part of this has involved taking out parts of his
/etc/X11/xorg.conf file to see which parts might be causing the
problem, and he's found something interesting.
What do you suppose is the minimal useful xorg.conf file?
You might suppose, oh, screen and monitor sections, an input section
for the keyboard and another one for a generic mouse, and that might
be all you need ... right?
Okay, try it. Let's start with a really minimal file -- nothing --
and gradually add sections. To try it, make a backup of your current
xorg.conf, then zero out the file:
cd /etc/X11
mv xorg.conf xorg.conf.sav
cp /dev/null xorg.conf
Now exit X if you hadn't already, and start it up again (or
let gdm do it for you).
Be prepared to do repairs from the console in case X doesn't start up: e.g.
sudo cp /etc/X11/xorg.conf.sav /etc/X11/xorg.conf
What happened?
In my case, on the laptop running Hardy beta, X starts right up and
looks just the same as it did before.
xorg.conf -- who needs it?
A specious question, of course, which has a perfectly good answer:
anyone who needs a resolution other than whatever xorg picks as the default;
anyone with additional hardware, like a wacom tablet;
anyone who wants customizations like XkbOptions = ctrl:nocaps.
There are lots of reasons to have an xorg.conf. But it's fun to
know that at least on some machines, it's possible to run without one.
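If you're curious what the server picked on its own when running with an empty config, the Xorg log spells it out; a rough filter like this works, though the exact wording varies by driver and release:
grep -i -e "using default" -e modeline -e driver /var/log/Xorg.0.log | less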
Update: turns out this is part of Ubuntu's new
BulletProof X
feature. It doesn't work on other distros or older versions.
Thanks to James D for the tip.
Tags: linux, X11
[
11:25 Apr 10, 2008
More linux |
permalink to this entry |
]
I burned a CD for the Ubuntu hardy beta alternate installer.
I used k3b since that's been a good, fairly reliable burning app
with a well designed UI -- I've been using it for years despite
not running a KDE desktop.
I selected "Burn CD Image", reduced the speed (burning apps are
always wildly optimistic about speed, with the result that they
create a lot of coasters) and checked the box for "verify contents
after burning".
The burn went fine, and k3b ejected the CD, then sucked it back in
again for the verification stage. At that point k3b started spewing
lots of errors to the terminal, things like "/dev/hdd: READ 10
failed!"
and "Failed to init HAL context!"
repeated many times.
How annoying! k3b has added a new dependency on hal, and although it
can burn a CD just fine, without hal it then forgets where the CD
drive was, so it can't read the CD back in to verify it.
Fortunately dd if=/dev/cdrw | md5sum
worked fine to verify
that the burn was correct. I guess it's time to investigate other
CD burning programs.
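For the record, a slightly more careful version of that check compares the disc against the original image and limits the read to the image's size, so trailing padding on the disc can't throw off the checksum (the image name here is whatever you downloaded):
md5sum hardy-alternate-i386.iso
dd if=/dev/cdrw bs=2048 count=$(( $(stat -c %s hardy-alternate-i386.iso) / 2048 )) | md5sum
If the two checksums match, the burn is good.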
Tags: linux, hal
[
11:04 Apr 10, 2008
More linux |
permalink to this entry |
]
Mon, 07 Apr 2008
On a lunchtime bird walk on Monday I saw one blue heron and at least
five green herons (very unusual to see so many of those).
Maybe that helped prepare me for installing the latest
Ubuntu beta, "Hardy Heron", Monday afternoon.
I was trying the beta primarily in the hope that it would fix a
serious video out regression that appeared in Gutsy (the current
Ubuntu) in January.
My beloved old Vaio SR17 laptop can't switch video signals on the
fly like some laptops can; I've always needed to boot it with an
external monitor or projector connected. But as long as it saw a
monitor at boot time, it would remember that state through many
suspend cycles, so I could come out of suspend, plug in to a projector
and be ready to go. But beginning some time in late January, somehow
Gutsy started doing something that turned off the video signal when
suspending. To talk to a projector, I could reboot with the projector
connected (I hate making an audience watch that! and besides, it takes
away the magic). I also discovered that switching to one of
the alternate consoles, then back (ctl-alt-F2 ctl-alt-F7) got a signal
going out on the video port -- but I found out the hard way, in front
of an audience, that it was only a 640x480 signal, not the 1024x768
signal I expected. Not pretty! I could either go back to Feisty ...
or try upgrading to Hardy.
I've already written about the handy
debootstrap
lightweight install process I used.
(I did try the official Hardy "alternate installer" disk first, but
after finishing package installation it got into a spin lock
trying to configure kernel modules, so I had to pull the plug and
try another approach.)
This left me with a system that was very minimal indeed, so I spent
the next few hours installing packages, starting with
tcsh, vim (Ubuntu's minimal install has something called vim, but
it's not actually vim so you tend to get lots of errors about parsing
your .vimrc until you install the real vim),
acpi and acpi-support (for suspending),
and the window system: xorg and friends. To get xorg, I started with:
apt-get install xserver-xorg-video-savage xbase-clients openbox xloadimage xterm
Then there was the usual exercise of aptitude search font
and installing everything on that list that seemed relevant to
European languages (I don't really need to scroll through dozens of
Tamil, Thai, Devanagari and Bangla fonts every time I'm looking for a
fancy cursive in GIMP).
But I hit a problem with that pretty early on: turns out most of
the fonts I installed weren't actually showing up in xlsfonts,
xfontsel, gtkfontsel, or, most important, the little xlib program
I'm using for a talk I need to give in a couple weeks.
I filed it as bug
212669, but kept working on it, and when a clever person on
#ubuntu+1 ("redwhitewaldo") suggested I take a look at the
x-ttcidfont-conf README, that gave me enough clue to get me
the rest of the way. Turns out there's a Debian
bug with the solution, and the workaround is easy until the
Ubuntu folks pick up the update.
I hit a few other problems, like the
PCMCIA/udev
problem I've described elsewhere ... but mostly, my debootstrapped
Hardy Heron is working quite well.
And in case you're wondering whether Hardy fixed the video signal
problem, I'm happy to say it does. Video out is working just fine.
Tags: linux, ubuntu, install, fonts, vaio
[
19:31 Apr 07, 2008
More linux/install |
permalink to this entry |
]
Sun, 06 Apr 2008
Some time ago, I wished for a simple Linux
"Tarball
installer", something that could install a minimal install of
a Linux distribution onto an existing partition or directory,
skipping all the flaky and error-prone hardware-guessing that
installers do.
It turns out Debian (and therefore also Ubuntu) has had this for
years, and it's totally cool. It's called debootstrap.
Some folks on the #ubuntu+1 channel told me about it, and I found
a nice clear
howto
article on how to use it for Debian. It works just the same
for Ubuntu.
First, get the .deb package for the debootstrap you want to use.
Here's
debootstrap
for Ubuntu Hardy Heron. Install it with dpkg -i
.
Then run it, giving it the name of the system you want to install
and the directory (or mounted partition) where you want to install
it. Like this:
debootstrap hardy /mnt/hda3
That's all! It fetches the files it needs from the online
repositories. It takes no time at all -- this really is a minimal
system.
Then you need to do some fiddling to turn it into a bootable system.
That includes (all paths relative to the newly installed filesystem
unless otherwise stated):
- Set up etc/fstab to list the filesystems on the disk,
and to mount / from the filesystem you just installed
(etc/fstab and etc/hostname are sketched just after this list)
- Define the hostname in etc/hostname
- Set up a grub boot stanza in /boot/grub/menu.lst
(that's /boot on the current system, which should be the
same as /boot in the new fstab you just created).
Use whatever kernel you were using for your old system, for now.
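Here's roughly what the first two of those look like for an install in /mnt/hda3 (a sketch only; the device names are examples, so match them to your own disk layout). etc/fstab:
/dev/hda3   /       ext3   defaults,errors=remount-ro   0  1
proc        /proc   proc   defaults                     0  0
/dev/hda5   none    swap   sw                           0  0
and etc/hostname is just the one word you want as the machine's name, e.g.
mynewsystem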
Now you're ready to reboot into the new system. Of course, since this is
a very minimal system, you have a lot more work to do.
Hardly anything is installed, and nothing has been configured for you.
Some things may be challenging (for example, as I write this, X is
installed but most of the fonts aren't showing up properly, which
may be a bug in Hardy).
Anyway, you can get a good start by mounting your old system's root
directory and copying some starter files from there, starting with these:
- Set up your important configuration files:
/etc/network/interfaces, /etc/hosts, /etc/resolv.conf,
/etc/passwd etc.
- edit /etc/apt/sources.list to include
restricted universe multiverse
- Install a kernel package if you're using distro kernels
- Install vim if you're a vim user -- remember, ubuntu comes with
something called vim that
isn't really vim.
- Create users and homedirs and such
- Install all the other stuff you want -- X, gimp/gtk, development
tools, editors, shells -- all that stuff that makes the system
feel like home. You're on your own there, so have fun!
Tags: linux, debian, install
[
13:54 Apr 06, 2008
More linux/install |
permalink to this entry |
]
Fri, 04 Apr 2008
I'm experimenting with Ubuntu's "Hardy Heron" beta on the laptop, and
one problem I've hit is that it never configures my network card properly.
The card is a cardbus 3Com card that uses the 3c59x driver.
When I plug it in, or when I boot or resume after a suspend, the
card ends up in a state where it shows up in ifconfig eth0
,
but it isn't marked UP. ifup eth0
says it's already up;
ifdown eth0
complains
error: SIOCDELRT: No such process
but afterward, I can run ifup eth0
and this time it
works. I've made an alias, net
, that does
sudo ifdown eth0; sudo ifup eth0
. That's silly --
I wanted to fix it so it happened automatically.
Unfortunately, there's nothing written anywhere on debugging udev.
I fiddled a little with udevmonitor
and
udevtest /class/net/eth0
and it looked like udev
was in fact running the ifup rule in
/etc/udev/rules.d/85-ifupdown.rules, which calls:
/sbin/start-stop-daemon --start --background --pidfile /var/run/network/bogus --startas /sbin/ifup -- --allow auto $env{INTERFACE}
So I tried running that by hand (with $env{INTERFACE} being eth0)
and, indeed, it didn't bring the interface up.
But that suggested a fix: how about adding --force
to that ifup line? I don't know why the card is already in a state
where ifup doesn't want to handle it, but it is, and maybe
--force
would fix it. Sure enough: that worked fine,
and it even works when resuming after a suspend.
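With that change, the command in 85-ifupdown.rules ends up looking something like this (all one line in the rules file; the only difference from the original is the added --force):
/sbin/start-stop-daemon --start --background --pidfile /var/run/network/bogus --startas /sbin/ifup -- --force --allow auto $env{INTERFACE}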
I filed bug
211955 including a description of the fix. Maybe there's some
reason for not wanting to use --force
in 85-ifupdown
(why wouldn't you always want to configure a network card when it's
added and is specified as auto and allow-hotplug in
/etc/network/interfaces?) but if so, maybe someone will
suggest a better fix.
Tags: linux, ubuntu, udev, networking
[
14:41 Apr 04, 2008
More linux |
permalink to this entry |
]
Tue, 05 Feb 2008
A month or so back, I spent some time fiddling with the
options for the Synaptics touchpad driver. The Alps (not Synaptics)
trackpad on my laptop has always worked okay with just the standard
PS/2 mouse driver, but in recent kernels it's become overly sensitive
to taps, registering spurious clicks when I'm in the middle of typing
a word (so suddenly I'm typing in a completely different window
without knowing it).
I eventually got it working. I tried various options, but here's what I
settled on:
Section "InputDevice"
Identifier "Trackpad"
Driver "synaptics"
Option "SHMConfig" "true"
Option "SendCoreEvents" "true"
Option "Device" "/dev/psaux"
Option "Protocol" "auto-dev"
Option "MinSpeed" "0.5"
Option "MaxSpeed" "0.75"
# AccelFactor defaults to .0015 -- synclient -l to check
Option "TouchpadOff" "2"
Option "Emulate3Buttons" "true"
EndSection
Section "InputDevice"
Identifier "Configured Mouse"
Driver "mouse"
Option "CorePointer"
Option "Device" "/dev/input/mice"
Option "Protocol" "ExplorerPS/2"
Option "ZAxisMapping" "4 5"
Option "Emulate3Buttons" "true"
EndSection
Life was groovy (I thought).
Fast forward to LCA, a few days before my talk,
when I decide to verify that I can run my USB mouse and the
slide-advancing presentation gizmo through a hub off the single USB
port. Quel surprise: the USB mouse doesn't work at all!
I didn't really need a mouse for that presentation (it was on GIMP
scripting, not GIMP image editing) so I put it on the back burner,
and came back to it when I got home. As I suspected, the USB mouse
was working fine if I commented out the Synaptics entry from
xorg.conf; it just couldn't run both at the same time.
A little googling led me to the answer, in a thread called Can't
use Synaptics TouchPad and USB Mouse -- it wasn't the first google
hit for synaptics "xorg.conf" usb mouse, so perhaps this entry
will help its google-fu. The important part I was missing was in the
"ServerLayout" section:
InputDevice "Trackpad" "AlwaysCore"
InputDevice "Configured Mouse" "CorePointer"
Adding "AlwaysCore" and "CorePointer" parts was what did the trick.
Thanks to "finferflu" who posted the right answer in the thread.
Tags: linux, X11
[
22:54 Feb 05, 2008
More linux |
permalink to this entry |
]
Sun, 23 Dec 2007
I use wireless so seldom that it seems like each time I need it, it's
a brand new adventure finding out what has changed since the last time
to make it break in a new and exciting way.
This week's wi-fi adventure involved Ubuntu's current "Gutsy Gibbon"
release and my prism54 wireless card. I booted the machine,
switched to the right
(a href="http://shallowsky.com/linux/networkSchemes.html">network
scheme, inserted the card, and ... no lights.
ifconfig -a
showed the card on eth1 rather
than eth0.
After some fiddling, I ejected the card and re-inserted it; now
ifconfig -a
showed it on eth2. Each time I
inserted it, the number incremented by one.
Ah, that's something I remembered from
Debian
Etch -- a problem with the udev "persistent net rules" file in
/etc/udev.
Sure enough, /etc/udev/rules.d/70-persistent-net.rules had two entries
for the card, one on eth1 and the other on eth2. Ejecting and
re-inserting added another one for eth3. Since my network scheme is
set up to apply to eth0, this obviously wouldn't work.
A comment in that file says it's generated from
75-persistent-net-generator.rules. But unfortunately,
the rules used by that file are undocumented and opaque -- I've never
been able to figure out how to make a change in its behavior.
I fiddled around for a bit, then gave up and chose the brute force
solution:
- Remove /etc/udev/rules.d/75-persistent-net-generator.rules
- Edit /etc/udev/rules.d/70-persistent-net.rules to give the
device the right name (eth1, eth0 or whatever -- an example
entry is sketched just after this list).
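An entry in that file looks roughly like this (illustrative only -- the MAC address here is made up, and the exact match keys vary a bit between udev releases; keep what the generator wrote for your card and change just the NAME):
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"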
And that worked fine. Without 75-persistent-net-generator.rules
getting in the way, the name seen in 70-persistent-net.rules
works fine and I'm able to use the network.
The weird thing about this is that I've been using Gutsy with my wired
network card (a 3com) for at least a month now without this problem
showing up. For some reason, the persistent net generator doesn't work
for the Prism54 card though it works fine for the 3com.
A scan of the Ubuntu bug repository reveals lots of other people
hitting similar problems on an assortment of wireless cards;
bug
153727 is a fairly typical report, but the older
bug 31502
(marked as fixed) points to a likely reason this is so
common on wireless cards: some of them report the wrong
MAC address before the firmware is loaded.
Tags: linux, ubuntu, udev, networking
[
19:02 Dec 23, 2007
More linux |
permalink to this entry |
]
Thu, 20 Dec 2007
I had a chance to spend a day at the AGU conference last week. The
American Geophysical Union is a fabulous conference -- something like
14,000 different talks over the course of the week, on anything
related to earth or planetary sciences -- geology, solar system
astronomy, atmospheric science, geophysics, geochemistry, you name it.
I have no idea how regular attendees manage the information overload
of deciding which talks to attend. I wasn't sure how I would, either,
but I started by going
through the schedule
for the day I'd be there, picking out a (way too long) list of
potentially interesting talks, and saving them as lines in a file.
Now I had a file full of lines like:
1020 U22A MS 303 Terrestrial Impact Cratering: New Insights Into the Cratering Process From Geophysics and Geochemistry II
Fine, except that I couldn't print out something like that -- printers
stop at 80 columns. I could pass it through a program like "fold" to
wrap the long lines, but then it would be hard to scan through quickly
to find the talk titles and room numbers. What I really wanted was to
wrap it so that the above line turned into something like:
1020 U22A MS 303 Terrestrial Impact Cratering: New Insights
Into the Cratering Process From Geophysics
and Geochemistry II
But how to do that? I stared at it for a while, trying to figure out
whether there was a clever vim substitute that could handle it.
I asked on a couple of IRC channels, just in case there was some
amazing Linux smart-wrap utility I'd never heard of.
I was on the verge of concluding that the answer was no, and that I'd
have to write a python script to do the wrapping I wanted, when
Mikael emitted a burst of line noise:
%s/\(.\{72\}\)\(.*\)/\1^M^I^I^I\2/
Only it wasn't line noise. Seems Mikael just happened to have been
reading about some of the finer points of vim regular expressions
earlier that day, and he knew exactly the trick I needed -- that
.\{72\}
, which matches lines that are at least 72
characters long. And amazingly, that expression did something very
close to what I wanted.
Or at least the first step of it. It inserts the first line break,
turning my line into
1020 U22A MS 303 Terrestrial Impact Cratering: New Insights
Into the Cratering Process From Geophysics and Geochemistry II
but I still needed to wrap the second and subsequent lines.
But that was an easier problem -- just do essentially the same thing
again, but limit it to only lines starting with a tab.
After some tweaking, I arrived at exactly what I wanted:
%s/^\(.\{,65\}\) \(.*\)/\1^M^I^I^I\2/
%g/^^I^I^I.\{58\}/s/^\(.\{,55\}\) \(.*\)/\1^M^I^I^I\2/
I had to run the second line two or three times to wrap the very long
lines.
Devdas helpfully translated the second one into English:
"You have 3 tabs, followed by 58 characters, out of
which you match the first 55 and put that bit in $1, and the capture
the remaining in $2, and rewrite to $1 newline tab tab tab $2."
Here's a more detailed breakdown:
Line one:
%          Do this over the whole file
s/         Begin global substitute
^          Start at the beginning of the line
\(         Remember the result of the next match
.\{,65\}   Look for up to 65 characters with a space at the end
\) \(      End of remembered pattern #1, skip a space, and start remembered pattern #2
.*\)       Pattern #2 includes everything to the end of the line
/          End of matched pattern; begin replacement pattern
\1^M       Insert saved pattern #1 (the first 65 characters, ending with a space) followed by a newline
^I^I^I\2   On the second line, insert three tabs then saved pattern #2
/          End replacement pattern
Line two:
%g/        Over the whole file, only operate on lines with this pattern
^^I^I^I    Lines starting with three tabs
.\{58\}/   After the tabs, only match lines that still have at least 58 characters (this guards against wrapping already wrapped lines when it's run repeatedly)
s/         Begin global substitute
^          Start at the beginning of the line
\(         Remember the result of the next match
.\{,55\}   Up to 55 characters
\) \(      End of remembered pattern #1, skip a space, and start remembered pattern #2
.*\)       Pattern #2 includes everything to the end of the line
/          End of matched pattern; begin replacement pattern
\1^M       The first pattern (up to 55 chars) is one line
^I^I^I\2   Three tabs then the second pattern
/          End replacement pattern
Greedy and non-greedy brace matches
The real key is those curly-brace expressions, \{,65\}
and \{58\}
-- that's how you control how many characters
vim will match and whether or not the match is "greedy".
Here's how they work (thanks to Mikael for explaining).
The basic expression is {M,N}
--
it means between M and N matches of whatever precedes it.
(Vim requires that the first brace be escaped -- \{}. Escaping the
second brace is optional.)
So .{M,N}
can match anything between M and N characters
but "prefer" N, i.e. try to match as many as possible up to N.
To make it "non-greedy" (match as few as possible, "preferring" M),
use .{-M,N}
You can leave out M, N, or both; M defaults to 0 while N defaults to
infinity. So {} is short for {0,∞} and is
equivalent to *, while {-} means {-0,∞}, like a non-greedy
version of *.
Given the string: one, two, three, four, five
,.\{},      matches  , two, three, four,
,.\{-},     matches  , two,
,.\{5,},    matches  , two, three, four,
,.\{-5,},   matches  , two, three,
,.\{,2},    matches  nothing
,.\{,7},    matches  , two,
,.\{5,7},   matches  , three,
Of course, this syntax is purely for vim; regular expressions are
unfortunately different in sed, perl and every other program.
Here's a fun
table of
regexp terms in various programs.
Tags: linux, editors, regexp
[
12:44 Dec 20, 2007
More linux/editors |
permalink to this entry |
]
Fri, 14 Dec 2007
Looking for a volume control that might be installed on mom's
XFCE4-based Xubuntu desktop, I tried running xfce4-mixer.
The mixer came up fine -- but after I exited, I discovered that
my xchat had gone all wonky. None of my normal key bindings worked,
my cursor was blinking, and the font used for tabs was about half its
normal size. Over in my Firefox window, key bindings were also
affected.
I've seen this sort of thing happen before with Gnome apps, and
had found a way to solve it using
gconf-editor. That app was not installed, so I installed it and
discovered that it didn't help.
I tried killing the running gconfd-2, removing .gconf/ and .gconfd/
from my home directory, then removing the four gnome directories
(.gnome/, .gnome2/, .gnome2_private/, and .gnome_private/).
Nothing helped xchat (though Firefox did return to normal).
After much flailing and annoying people by restarting xchat repeatedly,
it turned out the problem was that xfce4-mixer had started a daemon
called xfce-mcs-manager, which is like gconf, only
different. Like gconf, it mucks with settings of all running gtk
programs without asking first. It runs simultaneously with gconf,
but overrides gconf, which in turn overrides the values set in
~/.gtkrc-2.0.
Killing xfce-mcs-manager caused my running xchat
to revert to its normal settings.
... Well, *almost* revert. A few key bindings didn't get reset, as
I discovered when I hit a ctrl-W to erase the last word and found
myself disconnected from the channel. Another xchat restart, with
xfce-mcs-manager not running, fixed that.
Aside from the ever-present issue of "Where do I look when some
unfriendly program decides to change the settings in running
applications?" (which begs the question,
"What genius thought it would be a good idea to give any random app
like a volume control the power to change settings in every other
gtk application currently running on the system? And do they have
their medications adjusted better now?")
there's another reason this is interesting.
See, if an arbitrary app like xfce-mcs-manager can send a message to
xchat to change key bindings like ctrl-W ... then maybe I could write
a program that could send a similar message telling xchat to cancel
those compiled-in bindings like ctrl-F and ctrl-L, ones that it doesn't
allow the user to change. If I could get something like that working,
I could use a standard xchat -- I'd no longer need to patch the source
and build my own.
Tags: linux, gnome
[
21:12 Dec 14, 2007
More linux |
permalink to this entry |
]
Fri, 07 Dec 2007
(A culture of regressions, part 2)
I've been running on Ubuntu's latest, "Gutsy gibbon", for maybe
a month now. Like any release, it has its problems that I've needed to
work around. Like many distros, these problems won't be fixed before
the next release. But unlike other distros, it's not just lack of
developer time; it turns out Ubuntu's developers point to an official
policy as a reason not to fix bugs.
Take the case of the
aumix
bug. Aumix just plain doesn't work in gutsy. It prints,
"aumix: SOUND_MIXER_READ_DEVMASK" and exits.
This turns out to be some error in the way it was compiled.
If you apt-get the official ubuntu sources, build the package
and install it yourself, it works fine. So somehow they got a glitch
during the process of building it, and produced a bad binary.
(Minor digression -- does that make this a GPL violation? Shipping
sources that don't match the distributed binary? No telling what
sources were used to produce the binary in Gutsy. Not that anyone
would actually want the sources for the broken aumix, of course.)
It's an easy fix, right? Just rebuild the binary from the source
in the repository, and push it to the servers.
Apparently not. A few days ago, Henrik Nilsen Omma wrote in the bug:
This bug was nominated for Gutsy but does currently not qualify for a 7.10 stable release update (SRU) and the nomination is therefore declined.
According to the SRU policy, the fix should already be deployed and
tested in the current development version before an update to the
stable releases will be considered. [ ... ]
See: https://wiki.ubuntu.com/StableReleaseUpdates.
Of course, I clicked on the link to receive enlightenment.
Ubuntu's Stable Release page explains
Users of the official release, in contrast, expect a high degree of
stability. They use their Ubuntu system for their day-to-day work, and
problems they experience with it can be extremely disruptive. Many of
them are less experienced with Ubuntu and with Linux, and expect a
reliable system which does not require their intervention.
by way of explaining the official Ubuntu policy on updates:
Stable release updates will, in general, only be issued in order to
fix high-impact bugs. Examples of such bugs include:
- Bugs which may, under realistic circumstances, directly cause a
security vulnerability
- Bugs which represent severe regressions from the previous release of Ubuntu
- Bugs which may, under realistic circumstances, directly cause a
loss of user data
Clearly aumix isn't a security vulnerability or a loss of user data.
But I could make a good argument that a package that doesn't work ...
ever ... for anyone ... constitutes a severe regression from
the previous version of that package.
Ubuntu apparently thinks that users get used to packages not working,
and grow to like it. I guess that if you actually fixed
packages that you broke, that would be disruptive to users of the
stable release.
I'm trying to picture these Ubuntu target users, who embrace
regressions and get upset when something that doesn't work at all gets
fixed so that it works as it did in past releases. I can't say I've
ever actually met a user like that myself. But evidently the Ubuntu
Updates Team knows better.
Update: I just have to pass along Dave's comment:
"When an organization gets to the point where it spends more energy
on institutional processes for justifying not fixing
something than on just fixing it -- it's over."
Update: Carla Schroder has also
written
about this.
Tags: linux, ubuntu, audio
[
11:21 Dec 07, 2007
More linux |
permalink to this entry |
]
Sat, 01 Dec 2007
With what I learned
last week,
I've been able to type accented characters into GTK apps such as xchat,
and a few other apps such as emacs.
That's nice -- but I was still having trouble reading accented
characters in mutt, or writing them in vim to send through mutt
(darn terminal apps).
The biggest problem was the terminal. I was using urxvt,
but it turns out that urxvt won't let me type any nonascii characters.
It just ignores my multi-key sequences, or prints a space instead
of the character I wanted.
I have no idea why, but switching to plain ol' xterm solved that problem.
Of course, I had to make sure that I was using a font that supported
the characters I wanted (ISO 8859-1 or 8859-15 or something similar),
which leaves out my favorite terminal font (Schumacher Clean bold),
but Bitstream Vera Sans Mono bold is almost as readable.
Of course, it's important to have your locale variables set
appropriately. There are several locale variables:
- LC_CTYPE: Which encodings to use for typing and displaying characters.
- LC_MESSAGES: Which translations to use, in programs that offer them.
- LC_COLLATE: How to sort alphabetically (this one also affects whether ls groups capitalized filenames first).
- LC_ALL: Overrides any of the others.
- LANG: The default, in case none of the other variables is set.
There are a few others which control very specific features like
time, numbers, money, addresses and paper size:
type
locale
to see all of them.
Once I switched to xterm, I was able to set either LANG or LC_CTYPE to
either en_US.UTF-8
or en_US.ISO-8859-1
.
I set LC_COLLATE and LANG or LC_MESSAGES to C, so that I get the
default (usually US) translations for programs and so that ls groups
all the capitalized files first.
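In shell-startup-file terms, that combination looks something like this (sh/bash syntax; tcsh users would write setenv LC_CTYPE en_US.UTF-8 and so on):
export LC_CTYPE=en_US.UTF-8
export LC_COLLATE=C
export LC_MESSAGES=C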
Along the way, I learned about yet another
way to type accented characters.
setxkbmap -model pc104 -layout us -variant intl
switches to an international layout, at which point typing certain
punctuation (like ' or ~) is assumed to be a prefix key. So instead
of typing [Multi] ~ n, I can just type ~ n. The catch: it makes it
harder to type quotes or tildes by themselves (you have to type a
space after the quote or tilde).
Even faster, the international layout also offers shortcuts to many
common characters with the "AltGr" key, which I'd heard about
for years but never knew how to enable. AltGr is the right alt
key, and typing, say, AltGr followed by n gives an ñ.
You can see a full map at
Wikipedia
(AltGr characters are blue, quote prefixes are red).
To get back to a US non-international layout:
setxkbmap -model pc104 -layout us
Of course, these aren't the only keyboard layouts to choose from --
there are lots, plus you can define your own. And I was going to
write a little bit about that, except it turns out they've changed
it all around again since I last did that two years ago (don't you
love the digital world?). So that will have to wait for another time.
But the place to start exploring is /usr/share/X11/xkb.
The file symbols/us contains the definitions for those US
keyboards, and I believe it's included via the files in the
rules directory, probably rules/base, base.xml and base.lst.
From there you're on your own. But the standard layouts probably
follow the ones in the Wikipedia article on
keyboard layouts
Tags: linux, i18n, keyboard
[
16:48 Dec 01, 2007
More linux |
permalink to this entry |
]
Fri, 30 Nov 2007
I upgraded my system to the latest Ubuntu, "Gutsy Gibbon", recently.
Of course, it's always best
to make a backup before doing a major upgrade. In this case, the goal
was to back up my root partition to another partition on the same
disk and get it working as a bootable Ubuntu, which I could then
upgrade, saving the old partition as a working backup.
I'll describe here a couple of silly snags I hit,
to save you from making the same mistakes.
Linux offers lots of ways to copy filesystems.
I've used tar in the past, with a command like (starting in /gutsy):
tar --one-file-system -cf - / | tar xvf - > /tmp/backup.out
but cp seemed like an easier way, so I want to try it.
I mounted my freshly made backup partition as /gutsy and started a
cp -ax /* /gutsy
(-a does the right thing for
permissions, owner and group, and file type; -x tells it to stay
on the original filesystem).
Count to ten, then check what's getting copied.
Whoops! It clearly wasn't staying on the original filesystem.
It turned out my mistake was that /*.
Pretty obvious in hindsight what cp was doing: for each entry in /
it did a cp -ax, staying on the filesystem for that entry, not on
the filesystem for /. So /home, /boot, /proc, etc. all got copied.
The solution was to remove the *: cp -ax / /gutsy
.
But it wasn't quite that simple.
It looked like it was working -- a long wait, then cp finished
and I had what looked like a nice filesystem backup.
I adjusted /gutsy/etc/fstab so that it would point to the right root,
set up a grub entry, and rebooted. Disaster! The boot hung right after
Attached scsi generic sg0 type 0
with no indication of
what was wrong.
Rebooting into the old partition told me that what's supposed to
happen next is: * Setting preliminary keymap...
But the crucial error message was actually
several lines earlier: Warning: unable to open an initial
console
. It hadn't been able to open /dev/console.
Now, in the newly copied filesystem,
there was no /dev/console: in fact, /dev was empty. Nothing had been
copied because /dev is a virtual file system, created by udev.
But it turns out that the boot process needs some static devices in
/dev, before udev has created anything. Of course, once udev's
virtual filesystem has been mounted on /dev, you can no longer read
whatever was in /dev on the root partition in order to copy it
somewhere else. But udev nicely gives you access to it,
in /dev/.static/dev. So what I needed to do to get my new partition
booting was:
cp -ax /dev/.static/dev/ /gutsy/dev/
With that done, I was able to boot into my new filesystem and upgrade
to Gutsy.
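To pull the whole procedure together, here's a minimal sketch of the
steps, assuming the backup partition is /dev/hda7 (a placeholder --
use your own) and gets mounted on /gutsy:
mount /dev/hda7 /gutsy                   # mount the backup partition
cp -ax / /gutsy                          # copy root, staying on one filesystem
cp -ax /dev/.static/dev/ /gutsy/dev/     # copy the static device nodes udev hides
vi /gutsy/etc/fstab                      # point the new root at the backup partition
Then add a grub entry for the new partition and reboot into it.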
Tags: linux, ubuntu, backups
[
23:48 Nov 30, 2007
More linux |
permalink to this entry |
]
Thu, 22 Nov 2007
Happy Thanksgiving, everyone! Today's holiday tip involves
how to type international characters.
For the online Spanish class I've been taking, so far I've been
able to manage without having to type characters like
ñ or á. Usually, if I need one I can find it in one of
the class examples, copy it, and paste it wherever I need it. But
obviously that would be tedious if I needed to type much.
I hacked up a quickie workaround:
a python
script that shows a set of buttons, one for each accented
character I'm likely to need. Clicking a button copies that character
to the clipboard, so I can now paste via mouse middleclick or ctrl-V.
(I'm sure that sounds pathetic to those of you who type accented
characters every day, but it's not something most US English speakers
need to do. And besides, now I know how to access the X clipboard
from Python-GTK -- hooray for learning new things from procrastination
projects!)
Anyway, Mikael Magnusson took pity on me and explained in simple
language how to use the X "Multi key" to type these characters the
right way (well, a right way, anyway). Since all the online
instructions I've seen have been rather complicated, here are the
simple instructions for any of my fellow US monolingists who'd
like to expand their horizons:
First, choose a key for the "Multi key" that you're not using for
anything else. A lot of people use one of the Alt or Windows keys,
but I use both of those already. What I don't use is the Menu key
(that little key down by the right Ctrl key, at least on my keyboard)
since not many Linux apps support it anyway.
Find the keycode for that key, by firing up xev
and
typing the key. For my Menu key, the keycode is 117.
Now type:
xmodmap -e "keycode 117 = Multi_key"
Now you're ready to type a sequence like:
[Menu] ~ n
to type an n-tilde,
[Menu] ' a
for an accented a, or
[Menu] ? ?
for the upside-down question mark,
in any app that supports those characters.
Of course, you don't want to type that xmodmap command every time you
log in, so to make it permanent, put this in your .Xmodmap (you're on
your own for figuring out whether your X environment reads .Xmodmap
automatically or whether you need to tell it to run
xmodmap .Xmodmap
when X starts up):
keycode 117 = Multi_key
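If it turns out your environment doesn't read .Xmodmap on its own, one
simple approach is to load it from your X startup script. A minimal
sketch, assuming you start X via ~/.xinitrc or ~/.xsession:
# load custom key mappings, if the file exists
[ -f "$HOME/.Xmodmap" ] && xmodmap "$HOME/.Xmodmap"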
I have one final useful international input tidbit to offer:
how to type Unicode characters by number.
Hold ctrl+shift+U, then release U but keep holding the
other two while you type a numeric sequence. (This may only work in
gtk apps.) For instance, try this: hold down ctrl and shift, then
type: u 2 6 6 c. Cool, huh?
You can use the "gucharmap" program to find other
neat sequences (hint: use View->By Unicode Block, otherwise
you'll never find anything).
Now it's time to check the turkey. Have a good day, everyone!
Update: for configuring your own Multi-key shortcuts, see
my later article on
https://shallowsky.com/blog/linux/typing-greek.html.
Tags: linux, i18n, keyboard
[
17:03 Nov 22, 2007
More linux |
permalink to this entry |
]
Wed, 14 Nov 2007
I spent a couple of fruitless hours today trying to install PCLinuxOS,
a well-reviewed new Linux distro, on my Vaio. I got lots of help
from the nice folks on the IRC channel, and tried lots of different
approaches, but no dice -- their Live CD won't boot because it doesn't
grok PCMCIA CDROM drives, and they have no other installation method
besides using the live CD.
Which brings me to today's question:
Why do Linux distros have installers at all?
That probably sounds like a silly question. Of course you need an
installer to get the system onto your disk ... don't you?
Well, yes and no. You could make it a lot simpler than anyone
currently does.
What if you distributed a Linux distro as a filesystem image?
Make it tar, zip, CD iso or whatever format you like -- but
provide the user with a tree of files that, when copied onto a
hard drive, can boot as a running Linux.
Set up this minimal installation filesystem so that when you boot
into it, you get a commandline (X hasn't been configured yet)
and a set of scripts that make it easy to go the rest of the way.
From your running minimal system, you can configure X, set up
networking, install more programs from the distro repositories (or
even from a CD image), and do all the rest of the machine-specific
configuration that an installer does.
The key is that this is all happening from a running system,
not from some cobbled-together installer kernel or live CD.
If you have a problem with any step, you still have a basic
running system, and tools to fix the problem. You avoid the
usual loop:
- Run installer
- Spend 20 minutes answering questions
- Spend 45 minutes waiting for installer
- Discover it failed
- Start over with slightly different parameters
If your X configuration fails, try running X configuration again --
no need to do another install from the beginning. If it doesn't
see your network card -- ditto. Since this debugging all happens
from a normal running Linux, you can use the normal Linux tools you're
used to, not some busybox-based installer.
This model would be much more hardware agnostic than current installers:
- You can install on systems with weirdo graphics cards;
- You can install on systems that need special drivers for the
network card or other hardware;
- You can install on systems with no CDROM or an external CDROM;
- You can install even if you don't have access to a CD burner.
Another advantage is that it makes it very easy to
make a customized version of your distro: just take the basic
system image, change the part that needs changing and tar it up again.
Some distros have gone a little way with this, with an installer
that gives you a starter system, then scripts to download the
rest -- but I've never seen one that made the minimal system
available as a filesystem image, with easy instructions on going
to the next step.
What about the people who aren't already running Linux or aren't
comfortable writing a filesystem image to a partition?
No problem. They get a CD image with a very simple installer --
it handles the partitioning, copies the minimal install to the
partition, and updates grub. It's as prone to hardware compatibility
issues as any installer (though far less so than a live CD is)
but it's still better than the current model, because it won't be
trying to configure hardware until the user reboots into a working
minimal system.
Of course, Live CDs are still cool -- on machines where they
actually work -- for showing Linux to people not ready to commit
to an install. But don't tie your installer to that. Give people
a simpler way to install, one that's fast and straightforward and
hardware agnostic and easy to understand or customize.
The tarball installer. An idea whose time has come.
Now if I could just persuade the distros to offer it.
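To make the idea concrete, here's a minimal sketch of what the
user-side install might look like, assuming a root-filesystem tarball
called minimal-root.tar.gz and a target partition /dev/hda3 (both
placeholders):
mkfs.ext3 /dev/hda3                      # make a filesystem on the target partition
mount /dev/hda3 /mnt/target
tar -xzpf minimal-root.tar.gz -C /mnt/target   # unpack the distro's filesystem image
vi /mnt/target/etc/fstab                 # point it at the new root partition
grub-install --root-directory=/mnt/target /dev/hda   # make it bootable
Everything after that -- X, networking, extra packages -- happens from
the running minimal system.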
Update: a couple of people told me about
Dynebolic, a distro that
apparently does just that -- you install by copying a directory
on the CD onto your partition, or rsyncing from their site. Nice!
Tags: linux
[
23:59 Nov 14, 2007
More linux |
permalink to this entry |
]
Wed, 07 Nov 2007
I've been a tcsh user for many years. Back in the day, there were lots
of reasons for preferring csh to sh, mostly having to do with command
history. Most of those reasons are long gone -- modern bash and
tcsh are vastly improved from those early shells, and borrow from each
other, so the differences are fairly minor.
Back in July, I solved
the last blocker that had been keeping me off bash,
so I put some effort into migrating all my .cshrc settings into
a .bashrc so I could give bash a fair shot. It almost won; but
after four months, I've switched back to tcsh, due mostly to a single
niggling bash bug that I just can't seem to solve.
After all these years, the crucial difference is still history.
Specifically, history amnesia: bash has an annoying habit of
forgetting history commands just when I most want them back.
Say I type some longish command.
After it runs, I hit return a couple of times, wait a while, do
a couple of other things, then decide I want to call that command
back from history so I can run something similar, maybe with the
filename changed or a different flag. I ctrl-P or up-arrow ... and
the command isn't there!
If I type history
at this point, I'll see most of my
command history ... with an empty line in place of the line I was
hoping to repeat. The command is gone. My only option is to remember
what I typed, and type it all again.
Nobody seems to know why this happens, and it's sporadic, doesn't
happen every time. Friends have been able to reproduce it, so it's
not just me or my weird settings. It drives me batty.
It wouldn't be so bad except it always seems to happen on the
tricky commands that I really didn't want to retype.
It's too bad, because otherwise I had bash nicely whipped into shape,
and it does have some advantages over tcsh. Some of the tradeoffs:
tcsh wins
- Totally reliable history: commands never disappear.
- History tab completion: typing !a<TAB>
expands to the last command that started with a. In bash, I have
to type !a:p to see the command, then !! to execute it.
- When I tab-complete a file and there are multiple matches, tcsh shows
them, or at least beeps (depending on configuration). In bash I have
to hit a second tab just in case there might be matches.
- When I tab-complete a directory, tcsh adds the / automatically.
(Arguable. I find I want the / roughly 3/4 of the time.)
- tcsh doesn't drop remote connections if I suspend (with ~^Z).
bash drops me within a minute or two, regardless of settings like
$TMOUT. Bash users tell me I could solve this by using screen,
but that seems like a heavyweight workaround when tcsh "just works".
- tcsh sees $REMOTEHOST and $DISPLAY automatically when I ssh.
bash doesn't: ssh -X helps, but I still need some tricky
code in .bash_profile.
- aliases can have arguments, e.g.
alias llth 'ls -laFt \!* | head'
In bash these have to be functions (see the sketch after this list),
which means they don't show up when typing "which" or "alias".
- Prompt settings are more flexible -- there are options like %B for
bold. In bash you have to get the terminal type and use the
ANSI color escape sequences, which don't include bold.
- Easier command editing setup -- behaviors like
getting
word-erase to stop at punctuation
don't involve chasing through multiple semi-documented programs,
and the answer doesn't vary with version.
- Documentation -- tcsh's is mostly in man tcsh, bash's is
scattered all over man pages for various helper programs.
And it's hard to google for bash help because "bash" as a keyword
gets you lots of stuff not related to shells.
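Here's the sketch promised above: the bash equivalent of that tcsh
alias, written as a function for ~/.bashrc:
# list the newest files first, showing only the top of the listing
llth () {
    ls -laFt "$@" | head
}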
Of course, you bash users, set me straight if I missed out
on some bash options that would have solved some of these problems.
And especially if you have a clue about the evil disappearing
history commands!
bash wins
- You don't have to rehash every time you add a program
or change your path. That's a real annoyance of tcsh, and I could
understand a person used to bash rejecting tcsh on this alone.
Update: Holger Weiß has written
a tcsh patch to fix this, and it has been accepted as of
November 2009. Looking forward to the next version!
- History remembers entire multi-line commands, and shows them
with semicolons when you arrow back through history. Very nice.
tcsh only remembers the first line and you have to retype the rest.
- Functions: although I don't like having to use them instead of
aliases, they're certainly powerful and nice to have.
Of course, bash and tcsh aren't the only shells around.
From what I hear, zsh blends the good features of bash and tcsh.
One of these days I'll try it and see.
Tags: linux, shell, CLI, bash, csh
[
22:58 Nov 07, 2007
More linux |
permalink to this entry |
]
Sat, 08 Sep 2007
It's always amazed me that Linux doesn't let you customize the system
beep. You can call
xset b volume pitch duration
to
make it beep higher or lower, or you can turn it off or switch
to "visual bell"; but there's nothing like the way most other OSes
let you customize it to any sound you want. (Some desktops let you
customize the desktop alerts, but that doesn't do anything about the
beeping you get from vim, or emacs, or bash, or a hundred other
programs that run in terminals.)
Today someone pointed me toward a
Gentoo
wiki page that led me to
Fancy Beeper
Daemon. This is a small kernel module that adds a device,
/dev/beep, which outputs a byte every time there's a beep.
You can write a very simple daemon (several samples in Python are
included with the module) which reads /dev/beep and plays the
sound of your choice.
I downloaded beep-2.6.15+.tar.gz, but it needed one tweak
to build it under 2.6.23-rc3: I needed to add
#include <linux/sched.h>
among the includes at the beginning of beep.c.
After that, it compiled and installed just fine.
Since it's a kernel module, it works in consoles as well as under X.
/dev/beep is created with owner and group root, and readable/
writable only by owner and group, so to test it I had to
chmod 666 /dev/beep
to run the daemon. I fixed that by
making a file in /etc/udev/rules.d called 41-beep.rules,
containing:
KERNEL=="beep", GROUP="audio"
Finally, based on the nice samples that came with the module, I wrote
my own very simple Python daemon,
playbeepd,
that uses aplay.
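The same idea fits in a few lines of shell, too. Here's a minimal
sketch (assuming reads on /dev/beep block until a beep arrives, and
with /path/to/beep.wav as a placeholder for your sound file):
#!/bin/sh
# play a sound each time a byte shows up on /dev/beep
while dd if=/dev/beep bs=1 count=1 >/dev/null 2>&1; do
    aplay -q /path/to/beep.wav
done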
Lots of fun! I don't know how much I'll use it, but it's good to have
the option of custom beep sounds.
Tags: linux, audio, kernel
[
21:47 Sep 08, 2007
More linux |
permalink to this entry |
]
Sat, 25 Aug 2007
On a seemingly harmless trip to Fry's,
my mother got a look at the 22-inch widescreen LCD monitors
and decided she had to have one. (Can't blame her ... I've been
feeling the urge myself lately.)
We got the lovely new monitor home, plugged it in, configured X
and discovered that the screen showed severe vertical banding.
It was beautiful at low resolutions, but whenever we went to
the monitor's maximum resolution of 1680x1050, the bands appeared.
After lots of testing, we tentatively pinned the problem
down to the motherboard.
It turns out ancient machines with 1x AGP motherboards
can't drive that many pixels properly,
even if the video card is up to the job. Who knew?
Off we trooped to check out new computers.
We'd been hinting for quite some time that it might be about
time for a new machine, and Mom was ready to take the plunge
(especially if it meant not having to return that beautiful monitor).
We were hoping to find something with a relatively efficient Intel Core 2
processor and Intel integrated graphics: I've been told the Intel
graphics chip works well with Linux using open source drivers.
(Mom, being a person of good taste, prefers Linux, and none of us
wanted to wrestle with the proprietary nvidia drivers).
We found a likely machine at PC Club. They were even willing to
knock $60 off the price since she didn't want Windows.
But that raised a new problem. During our fiddling with her old
machine, we'd tried burning a Xubuntu CD, to see if the banding
problem was due to the old XFree86 she was running. Installing it hadn't
worked: her CD burner claimed it burned correctly, but the resulting
CD had errors and didn't pass verification. So we needed a CD burned.
We asked PC Club when buying the computer whether we might burn the
ISO to CD, but apparently that counts as a "data transfer" and their
minimum data transfer charge is $80. A bit much.
No problem -- a friend was coming over for dinner that night,
and he was kind enough to bring his Mac laptop ...
and after a half hour of fiddling, we determined that his burner
didn't work either (it gave a checksum error before starting the
burn). He'd never tried burning a CD on that laptop.
What about Kinko's? They have lots of data services, right?
Maybe they can burn an ISO. So we stopped at Kinko's after dinner.
They, of course, had never heard of an ISO image and had no idea how
to burn one on their Windows box.
Fearing getting a disk with a filesystem containing one file named
"xubuntu-7.04-alternate-i386.iso", we asked if they had a mac,
since we knew how to burn an ISO there.
They did, though they said sometimes the CD burner was flaky.
We decided to take the risk.
Burning an ISO on a mac isn't straightforward -- you have to do
things in exactly the right order.
It took some fast talking to persuade them of the steps ("No, it
really won't work if you insert the blank CD first. Yes, we're quite
sure") and we had to wait a long time for Kinko's antivirus software
to decide that Xubuntu wasn't malware, but 45 minutes and $10 later,
we had a disc.
And it worked! We first set up the machine in the living room, away
from the network, so we had to kill aptitude update
when the install hung installing "xubuntu-desktop" at 85%
(thank goodness for alternate consoles on ctl-alt-F2) but otherwise
the install went just fine. We rebooted, and Xubuntu came up ...
at 1280x1024, totally wrong. Fiddling with the resolution in xorg.conf
didn't help; trying to autodetect the monitor with
dpkg-reconfigure xorg
crashed the machine and we had to
power cycle.
Back to the web ... turns out that Ubuntu "Feisty" ships with a bad
Intel driver. Lots of people have hit the problem, and we found a
few elaborate workarounds involving installing X drivers from various
places, but nothing simple. Well, we hadn't come
this far to take all the hardware back now.
First we moved the machine into the computer room, hooked up
networking and reinstalled xubuntu with a full network, just in
case. The resolution was still wrong.
Then, with Dave in the living room calling out steps off a web page
he'd found, we began the long workaround process.
"First," Dave suggested, reading, "check the version of
xserver-xorg-video-intel.
Let's make sure we're starting with the same version this guy is."
dpkg -l xserver-xorg-video-intel
... "Uh, it isn't
installed," I reported. I tried installing it. "It wants to remove
xserver-xorg-video-i810." Hmm! We decided we'd better do it,
since the rest of the instructions depended on having the
intel, not i810, driver.
And that was all it needed! The intel driver autodetected the monitor
and worked fine at 1680x1050.
So forget the elaborate instructions for trying X drivers from various
sources.
The problem was that xubuntu installed the wrong driver:
the i810 driver instead of the more generic intel driver.
(Apparently that bug is fixed for the next Ubuntu release.)
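For anyone hitting the same thing, a minimal sketch of the check and
fix that worked here (package names as of Feisty):
dpkg -l xserver-xorg-video-intel                 # is the intel driver installed?
sudo apt-get install xserver-xorg-video-intel    # pulls it in and removes the i810 driver
Then restart X so the new driver can autodetect the monitor.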
With that fix, it was only a few more minutes before Mom was
happily using her new system, widescreen monitor and all.
Tags: linux, X11, ubuntu
[
14:23 Aug 25, 2007
More linux |
permalink to this entry |
]
Sat, 18 Aug 2007
I'm forever having problems connecting to wireless networks,
especially with my Netgear Prism 54 card. The most common failure mode:
I insert the card and run
/etc/init.d/networking restart
(udev is supposed to handle this, but that stopped working a month
or so ago). The card looks like it's connecting,
ifconfig eth0
says it has the right IP address and it's marked up -- but try to
connect anywhere and it says "no route to host" or
"Destination host unreachable".
I've seen this both on networks which require a WEP key
and those that don't, and on nets where my older Prism2/Orinoco based
card will connect fine.
Apparently, the root of the problem
is that the Prism54 is more sensitive than the Prism2: it can see
more nearby networks. The Prism2 (with the orinoco_cs driver)
only sees the strongest network, and gloms onto it.
But the Prism54 chooses an access point according to arcane wisdom
only known to the driver developers.
So even if you're sitting right next to your access point and the
next one is half a block away and almost out of range, you need to
specify which one you want. How do you do that? Use the ESSID.
Every wireless network has a short identifier called the ESSID
to distinguish it from other nearby networks.
You can list all the access points the card sees with:
iwlist eth0 scan
(I'll be assuming eth0 as the ethernet device throughout this
article. Depending on your distro and hardware, you may need to
substitute ath0 or eth1 or whatever your wireless card calls itself.
Some cards don't support scanning,
but details like that seem to be improving in recent kernels.)
You'll probably see a lot of ESSIDs like "linksys" or
"default" or "OEM" -- the default values on typical low-cost consumer
access points. Of course, you can set your own access point's ESSID
to anything you want.
So what if you think your wireless card should be working, but it can't
connect anywhere? Check the ESSID first. Start with iwconfig:
iwconfig eth0
iwconfig lists the access point associated with the card right now.
If it's not the one you expect, there are two ways to change that.
First, change it temporarily to make sure you're choosing the right ESSID:
iwconfig eth0 essid MyESSID
If your access point requires a key, add key nnnnnnnnnn
to the end of that line. Then see if your network is working.
If that works, you can make it permanent. On Debian-derived distros,
just add lines to the entry in /etc/network/interfaces:
wireless-essid MyESSID
wireless-key nnnnnnnnnn
Some older howtos may suggest an interfaces line that looks like this:
up iwconfig eth0 essid MyESSID
Don't get sucked in. This "up" syntax used to work (along with pre-up
and post-up), but although man interfaces still mentions it,
it doesn't work reliably in modern releases.
Use wireless-essid instead.
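Putting it together, a complete /etc/network/interfaces entry might
look something like this (a sketch only: MyESSID and the key are
placeholders, and your device may not be eth0):
auto eth0
iface eth0 inet dhcp
    wireless-essid MyESSID
    wireless-key nnnnnnnnnn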
Of course, you can also use a gooey tool like
gnome-network-manager to set the essid and key. Not being a
gnome user, some time ago I hacked up the beginnings of a standalone
Python GTK tool to configure networks. During this week's wi-fi
fiddlings, I dug it out and blew some of the dust off:
wifi-picker.
You can choose from a list of known networks (including both essid and
key) set up in your own configuration file, or from a list of essids
currently visible to the card, and (assuming you run it as root)
it can then set the essid and key to whatever you choose.
For networks I use often, I prefer to set up a long-term
network
scheme, but it's fun to have something I can run once to
show me the visible networks then let me set essid and key.
Tags: linux, networking, debian, ubuntu
[
15:44 Aug 18, 2007
More linux |
permalink to this entry |
]
Sun, 12 Aug 2007
The best thing at Linuxworld was the Powertop BOF,
despite the fact that it ended up stuck in a room with no projector.
The presenter, Arjan van de Ven, coped well with the setback and
managed just fine.
The main goal of Powertop is to find applications that are polling or
otherwise waking the CPU unnecessarily,
draining power when they don't need to.
Most of the BOF focused on "stupid stuff": programs that wake up
too often for no reason. Some examples he gave (many of these will
be fixed in upcoming versions of the software):
- gnome-screensaver checked every 2 sec to see if the mouse moved
(rather than using the X notification for mouse move);
- gnome volume checked 10 times a second whether the volume has changed;
- gnome-clock woke up once a second to see if the minute had rolled
over, rather than checking once a minute;
- firefox in an ssl layer polled 10 times a second in case there was a
notification;
- the gnome file monitor woke up 40 times a second to check a queue
even if there was nothing in the queue;
- evolution woke up 10 times a second;
- the fedora desktop checked 10 times a second for a smartcard;
- gksu used a 10000x/sec loop (he figures someone mistook
milliseconds/microseconds: this alone used up 45 min on one battery test run)
- Adobe's closed-source flash browser plugin woke up 2.5 times a
second, and acroread had similar problems (this has been reported to
Adobe but it's not clear if a fix is coming any time soon).
And that's all just the desktop stuff, without getting into other
polling culprits like hal and the kernel's USB system. The kernel
itself is often a significant culprit: until recently, kernels woke
up once a millisecond whether they needed to or not. With the recent
"tickless" option that appeared in the most recent kernel, 2.6.22,
the CPU won't wake up unless it needs to.
A KDE user asked if the KDE desktop was similarly bad. The answer
was yes, with a caveat: Arjan said he gave a presentation a while back
to a group of KDE developers, and halfway through, one of the
developers interrupted him when he pointed out a problem
to say "That's not true any more -- I just checked in a fix while
you were talking." It's nice to hear that at least some developers
care about this stuff! Arjan said most developers responded
very well to patches he'd contributed to fix the polling issues.
(Of course, those of us who use lightweight window managers like
openbox or fvwm have already cut out most of these gnome and kde
power-suckers. The browser issues were the only ones that applied
to me, and I certainly do notice firefox' polling: when the laptop
gets slow, firefox is almost always the culprit, and killing it
usually brings performance back.)
As for hardware, he mentioned that
some linux LCD drivers don't really dim the backlight when you
reduce brightness -- they just make all the pixels darker.
(I've been making a point of dimming my screen when running off batteries;
time to use that Kill-A-Watt and find out if it actually matters!)
Wireless cards like the ipw100 use
a lot of power even when not transmitting -- sometimes even more than
when they're transmitting -- so turning them off can be a big help.
Using a USB mouse can cut as much as half an hour off a battery.
The 2.6.23 kernel has lots of new USB power saving code, which should help.
Many devices have activity every millisecond,
so there's lots of room to improve.
Another issue is that even if you get rid of the 10x/sec misbehavers,
some applications really do need to wake up every second or so. That's
not so bad by itself, but if you have lots of daemons all waking up at
different times, you end up with a CPU that never gets to sleep.
The solution is to synchronize them by rounding the wakeup times to
the nearest second, so that they all wake up at
about the same time, and the CPU can deal with them
all then go back to sleep. But there's a trick: each machine has to
round to a different value. You don't want every networking
application on every machine across the internet all waking up at once
-- that's a good way to flood your network servers. Arjan's phrase:
"You don't want to round the entire internet" [to the same value].
The solution is a new routine in glib: timeout_add_seconds.
It takes a hash of the hostname (and maybe other values) and uses that
to decide where to round timeouts for the current machine.
If you write programs that wake up on a regular basis, check it out.
In the kernel, round_jiffies does something similar.
After all the theory, we were treated to a demo of powertop in action.
Not surprisingly, it looks a bit like top. High on the screen
is summary information telling you how much time your CPU is spending
in the various sleep states. Getting into the deeper sleep states is
generally best, but it's not quite that simple: if you're only getting
there for short periods, it takes longer and uses more power to get
back to a running state than it would from higher sleep states.
Below that is the list of culprits: who is waking your CPU up most
often? This updates every few seconds, much like the top
program. Some of it's clear (names of programs or library routines);
other lines are more obscure if you're not a kernel hacker, but
I'm sure they can all be tracked down.
At the bottom of the screen is a great feature: a short hint telling
you how you could eliminate the current top offender (e.g. kill the
process that's polling). Not only that, but in many cases powertop
will do it for you at the touch of a key. Very nice! You can try
disabling things and see right away whether it helped.
Arjan stepped through killing several processes and showing the
power saving benefits of each one. (I couldn't help but notice, when
he was done, that the remaining top offender, right above nautilus,
was gnome-power-manager. Oh, the irony!)
It's all very nifty and I'm looking forward to trying it myself.
Unfortunately, I can't do that on the
laptop where I really care about battery life. Powertop requires a
kernel API that went in with the "tickless" option, meaning it's
in 2.6.22 (and I believe it's available as a patch for 2.6.21).
My laptop is stuck back on 2.6.18 because of an IRQ handling bug (bug 7264).
Powertop also requires ACPI, which I have to disable
because of an infinite loop in kacpid (bug 8274,
Ubuntu bug
75174). It's frustrating to have great performance tools like
powertop available, yet not be able to use them because of kernel
regressions. But at least I can experiment with it on my desktop
machine.
Tags: linux, conferences, linuxworld, laptop, gnome
[
14:06 Aug 12, 2007
More linux |
permalink to this entry |
]
Sat, 11 Aug 2007
Last week was the annual trek to Linuxworld.
There wasn't much of interest on the exhibit floor. Lots of
small companies doing virtualization or sysadmin tools.
The usual assortment of publishers. A few big companies,
but fewer than in past years. Not much swag. Dave commented
that there was a much higher "bunny quotient" this year than
last (lots of perky booth bunnies, very few knowledgeable people
working the floor). The ratio of Linux to Windows in the big-company
booths was much lower than last year, especially at AMD and HP,
who both had far more Windows machines visible than Linux ones.
The most interesting new hardware was the Palm Foleo. It looks
like a very thin 10-inch screen laptop, much like my own Vaio only
much thinner and lighter, with a full QWERTY keyboard with a good
feel to it. The booth staff weren't very technical, but apparently
it sports a 300MHz Intel processor, built-in wi-fi and bluetooth,
a resolution a hair under 1024x768 (I didn't write down the
exact numbers and their literature doesn't say), a claimed battery
life of 5 hours, and runs a Linux from Wind River.
The booth rep I talked to said
it would run regular Linux apps once they were recompiled for
the processor, but he didn't seem very technical and I doubt it
runs X, so I'm not sure I believe that. For a claimed price of
around $400 it looks potentially quite interesting.
Their glossy handout says it has VGA out and can display PowerPoint
presentations, which was interesting since the only powerpoint
reader I know of on Linux is OpenOffice and I don't see that
running on 300MHz (considering how slow it is on my P3 700).
Apparently they're using Documents To Go from DataVis, a PalmOS app.
Aside from that there wasn't much of interest going on.
They split up the "Dot Org Pavilion" this year so not all the
community groups were in the same place, which was a bummer --
usually that's where all the interesting booths are (local LUGs,
FSF, EFF, Debian, Ubuntu and groups like that: no Mozilla booth
this time around). But this year
the dotorgs were too spread out to offer a good hangout spot.
It didn't look like there was much of interest at the conference
either: this year they gave us Exhibit Hall pass attendees a free
ticket to attend one of the paid talks, and I couldn't find one
on the day we attended that looked interesting enough to bother.
However, that changed at the end of the day with the BOF sessions.
The Intel Powertop BOF was an easy choice -- I've been curious about
Powertop ever since it was announced, and was eager to hear more about
it from one of the developers. The BOF didn't disappoint, though the
room did: they didn't even provide a projector (!), so we all had
to cluster around the presenter's laptop when he wanted to show
something. Too bad! but it didn't keep the BOF from being full of
interesting information.
I'll split that off into a separate article.
Tags: linux, conferences, linuxworld
[
12:34 Aug 11, 2007
More conferences |
permalink to this entry |
]
Sat, 28 Jul 2007
It's a small thing, but xchat's blinking text cursor has irritated
me for a while. It's not so much the blink itself that bothers me,
but that it makes the mouse pointer flicker. (I have no idea why
blinking the cursor should make the X mouse pointer flicker,
but it was pretty clear that they were in sync.)
I've also seen fingers pointed at
cursor blink as a laptop battery eater (one more reason the CPU has
to wake up every second or so when it might otherwise have been
idling) though I've seen no numbers on how significant that might
be (probably not very, on most laptops).
Anyway, there are reasons enough to look into turning off the blink.
Xchat is a gtk application, of course. There are lots and
lots of pages on the web telling developers about gtk's
gtk-cursor-blink property (and related properties like
gtk-cursor-blink-timeout) and what they do in at a library
level, so it's clearly possible. But I found nothing about where a
user should set these properties to make gtk find them.
Here's the answer. Add to $HOME/.gtkrc-2.0
(or any other file loaded by $HOME/.gtkrc-2.0):
gtk-cursor-blink = 0
I had to restart X (maybe shutting down all gtk apps would have
been enough) before I saw any effect.
While I was searching, I did find a nice page on
How to disable
blinking cursors in lots of different apps. Its gtk section
didn't seem to do anything here (maybe it only works if a gnome
desktop is running), but it's a good page nonetheless,
full of useful advice for turning off cursor blink in other programs.
Tags: linux, gnome
[
20:50 Jul 28, 2007
More linux |
permalink to this entry |
]
Tue, 17 Jul 2007
I've been a happy csh/tcsh user for decades. But every now and then I
bow to pressure and try to join the normal bash-using Linux world.
But I always come up against one problem right away: word erase
(Control-W). For those who don't use ^W, suppose I type something like:
% ls /backups/images/trips/arizona/2007
Then I suddenly realize I want utah in 2007, not arizona.
In csh, I can hit ^W twice and it erases the last two words, and I'm
ready to type u<tab>.
In bash, ^W erases the whole path leaving
only "ls", so it's no help here.
It may seem like a small thing, but I use word erase hundreds of
times a day and it's hard to give it up. Google was no help, except
to tell me I wasn't the only one asking.
Then the other day
I was chatting about this issue with a friend who uses zsh for that
reason (zsh is much more flexible at defining key bindings)
and someone asked, "Is that like Meta-Delete?"
It turned out that Alt-Backspace
(like many Linux applications, bash calls the Alt key "Meta",
and Linux often confuses Delete and Backspace)
did exactly what I wanted. Very promising!
But Alt-Backspace is not easy to type, since it's not reachable from
the "home" typing position.
What I needed, now that I knew bash and readline had the function,
was a way to bind it to ^W.
Bash's binding syntax is documented, though the functions available
don't seem to be. But bind -p | grep word
gave me
some useful information. It seems that \C-w was bound to
"unix-word-rubout" (that was the one I didn't want) whereas "\e\C-?"
was bound to "backward-kill-word".
("\e\C-?" is an obscure way of saying Meta-DEL: \e is escape, and
apparently bash, like emacs, treats ESC followed by a key as the same
as pressing Alt and the key simultaneously. And Control-question-mark
is the Delete character in ASCII.)
So my task was to bind \C-w to backward-kill-word.
It looked like this ought to work:
bind '\C-w:backward-kill-word'
... Except it didn't.
bind -p | grep w
showed that C-W was still bound to "unix-word-rubout".
It turned out that it was the terminal (stty) settings causing
the problem: when the terminal's werase (word erase)
character is set, readline hardwires that character to do
unix-word-rubout and ignores any attempts to change it.
I found the answer in a
bash
bug report. The stty business was introduced in readline 5.0,
but due to complaints, 5.1 was slated to add a way to override
the stty settings. And happily, I had 5.2! So what was this new
override method? The posting gave no hint, but eventually I found it.
Put in your .inputrc:
set bind-tty-special-chars Off
And finally my word erase worked properly and I could use bash!
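For reference, the two pieces that made it work can both live in
~/.inputrc; a minimal sketch, assuming readline 5.2 or newer:
# let readline bindings override the terminal's special characters
set bind-tty-special-chars Off
# make Control-W erase back only to the previous word
"\C-w": backward-kill-word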
Tags: linux, CLI, shell, bash
[
16:22 Jul 17, 2007
More linux |
permalink to this entry |
]
Thu, 28 Jun 2007
I upgraded my laptop's Ubuntu partition from Edgy to Feisty.
Debian Etch works well, but it's just too old and I can't build
software like GIMP that insists on depending on cutting-edge
libraries.
But Feisty is cutting edge in other ways, so
it's been a week of workarounds, in two areas: Firefox and the kernel.
I'll start with Firefox.
Firefox crashes playing flash
First, the way Ubuntu's Firefox crashes when running Flash.
I run flashblock, fortunately, so I've been able to browse the web
just fine as long as I don't click on a flashblock button.
But I like being able to view the occasional youtube video,
and flash 7 has worked fine for me on every distro except Ubuntu.
I first saw the problem on Edgy, and upgrading to Feisty didn't cure the
problem.
But it wasn't their Firefox build; my own "kitfox" firefox
build crashed as well. And it wasn't their flash installation; I've
never had any luck with either their Adobe flash installer or their
open source libswfdec, so I'm running the same old flash 7 plug-in
that I've used all along for other distros.
To find out what was really happening, I ran Firefox from the
commandline, then went to a flash page. It turned out it was
triggering an X error:
The error was: 'BadMatch (invalid parameter attributes)'.
(Details: serial 104 error_code 8 request_code 145 minor_code 3)
That gave me something to search for. It turns out there's a longstanding
Ubuntu
bug, 14911, filed on this issue, with several workarounds.
Setting the environment variable XLIB_SKIP_ARGB_VISUALS to 1 fixed the
problem, but, reading farther in the bug, I saw that the real problem
was that Ubuntu's installer had, for some strange reason, configured
my X to use 16 bit color instead of 24. Apparently this is pretty
common, and due to some bug involving X's and Mozilla's or Flash's
handling of transparency, this causes flash to crash Mozilla.
So the solution is very simple. Edit /etc/X11/xorg.conf, look
for the DefaultDepth line, and if it's 16, that's your problem.
Change it to 24, restart X and see if flash works. It worked for me!
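A quick way to check which depth you've got without reading the whole
file (a minimal sketch):
grep -n DefaultDepth /etc/X11/xorg.conf
If that prints 16, that's the line to change.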
Eliminating Firefox's saved session pester dialog
While I was fiddling with Firefox, Dave started swearing. "Why does
Firefox always make me go through this dialog about restoring the last
session? Is there a way to turn that off?"
Sure enough, there's no exposed preference for this, so I poked around
about:config, searched for browser and found
browser.sessionstore.resume_from_crash. Doubleclick that
line to change it to false and you'll have no more pesky
dialog.
For more options related to session store, check the
Mozillazine
Session Restore page.
Kernel: runaway kacpid
Alas, having upgraded to Feisty expressly so that I could build
cutting-edge programs like GIMP, I discovered that I couldn't build
anything at all. Anything that uses heavy CPU for more than a minute
or two triggers a kernel daemon, kacpid, to grab most of the CPU for
itself. Being part of the kernel (even though it has a process ID),
kacpid is unkillable, and prevents the machine from shutting down,
so once this happens the only solution is to pull the power plug.
This has actually been a longstanding Ubuntu problem
(bug 75174)
but it used to be that disabling acpid would stop kacpid from
running away, and with feisty, that no longer helps.
The bug is also
kernel.org
bug 8274.
The Ubuntu bug suggested that disabling cpufreq solved it for one
person. Apparently the only way to do that is to build a new kernel.
There followed a long session of attempted kernel building.
It was tricky because of course I couldn't build on the
target machine (inability to build being the problem I was trying to
solve), and even if I built on my desktop machine,
a large rsync of the modules directory would trigger a
runaway kacpid. In the end, building a standalone kernel with
no modules was the only option.
But turning off cpufreq didn't help, nor did any of the other obvious
acpi options. The only option which stops kacpid is to disable acpi
altogether, and use apm. I'm sorry to lose hibernate, and temperature
monitoring, but that appears to be my only option with modern kernels.
Sigh.
Kernel: Hangs for 2 minutes at boot time initializing sound card
While Dave and I were madly trying to find a set of config options to
build a 2.6.21 that would boot on a Vaio (he was helping out with his
SR33 laptop, starting from a different set of config options) we both
hit, at about the same time, an odd bug: partway through boot, the
kernel would initialize the USB memory stick reader:
sd 0:0:0:0: Attached scsi removable disk sda
sd 0:0:0:0: Attached scsi generic sg0 type 0
and then it would hang, for a long time. Two minutes, as it turned
out. And the messages after that were pretty random: sometimes related
to the sound card, sometimes to the network, sometimes ... GConf?!
(What on earth is GConf doing in a kernel boot sequence?)
We tried disabling various options to try to pin down the culprit:
what was causing that two minute hang?
We eventually narrowed the blame to the sound card (which is a Yamaha,
using the ymfpci driver). And that was enough information for google
to find this
Linux Kernel Mailing List thread. Apparently the sound maintainer
decided, for some reason, to make the ymfpci driver depend on an
external firmware file ... and then didn't include the firmware file,
nor is it included in the alsa-firmware package he references in that
message. Lovely. I'm still a little puzzled about the timeout: the
post does not explain why, if a firmware file isn't found on the
disk, waiting for two minutes is likely to make one magically appear.
Apparently it will be fixed in 2.6.22, which isn't much help for
anyone who's trying to run a kernel on any of the 2.6.21.* series
in the meantime. (Isn't it a serious enough regression to fix in
2.6.21.*?) And he didn't suggest a workaround, except that
alsa-firmware package which doesn't actually contain the firmware
for that card.
Looks like it's left to the user to make things work.
So here's what to do: it turns out that if you take a 2.6.21 kernel,
and substitute the whole sound/pci/ymfpci directory from a 2.6.20
kernel source tree, it builds and boots just fine. And I'm off and
running with a standalone apm kernel with no acpi; sound works, and I
can finally build GIMP again.
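In command form, the directory swap looks roughly like this (a sketch
only, assuming both source trees are unpacked under /usr/src; adjust
the paths to match yours):
cd /usr/src/linux-2.6.21
rm -rf sound/pci/ymfpci
cp -a /usr/src/linux-2.6.20/sound/pci/ymfpci sound/pci/
# then configure and build the kernel as usual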
So it's been quite a week of workarounds. You know, I used to argue
with all those annoying "Linux is not ready for the desktop"
people. But sometimes I feel like Linux usability is moving in the
wrong direction. I try to imagine explaining to my mac-using friends
why they should have to edit /etc/X11/xorg.conf because their distro
set up a configuration that makes Firefox crash, or why they need to
build a new kernel because the distributed ones all crash or hang
... I love Linux and don't regret using it, but I seem to need
workarounds like this more often now than I did a few years ago.
Sometimes it really does seem like the open source world is moving
backward, not forward.
Tags: linux, ubuntu, mozilla, firefox, kernel, audio
[
23:24 Jun 28, 2007
More linux |
permalink to this entry |
]
Sun, 17 Jun 2007
It was a bit over two years ago that I
switched from
icewm to fvwm as my window manager. Fvwm proved to be very fast,
very configurable, and "good enough" most of the time. But lately,
I've found myself irritated with it, particularly with its tendency to
position windows off screen (which got a lot worse in 2.5.18).
It looked like it was time to try another window manager, so when
I learned that the
Openbox
project is headed by a fellow
LinuxChixor, I had to try it.
Openbox impressed me right away. I'd tried it once before, a couple of years
ago, when I found it a little inconsistent and immature. It's grown up a
lot since then! It's still very fast and lightweight, but it has good
focus handling, excellent window positioning, a good configuration
window (obconf), and a wide variety of themes which are pretty but
still don't take up too much of my limited screen space.
But more important, what it has is a very active and friendly
community. I hit a couple of snags, mostly having to do with focus
handling while switching desktops (the problem that drove me off
icewm to fvwm), so I hopped onto the IRC channel and found myself
chatting with the active developers, who told me that most of my
problems had already been fixed in 3.4, and there were .deb files
on the website for both of the distros I'm currently using.
Indeed, that cured the problem; and when I later hit a more esoteric
focus bug, the developers (particularly Dana Jansens) were all over it
and fixed it that same day. Wow!
Since then I've been putting it through its paces. I have yet to see
a window positioned badly in normal usage, and it handles several
other problems I'd been seeing with fvwm, like focus handling when
popping up dialogs (all those secondary GIMP Save-as dialogs that
don't get focused when they appear). It's just as flexible as fvwm
was when it comes to keyboard and mouse configuration, maybe even
more so (plus it has lots of useful default bindings I might not
have thought of, like mousewheel bindings to change desktops or
"shade" a window).
I was going to stay out of theme configuration, because there were
several pretty good installed themes already. But then in response to
a half-joking question on my part of whether a particular theme came
in blue, someone on the IRC channel made me a custom theme file -- and
I couldn't resist tweaking it from there, and discovered that tweaking
openbox themes is just as easy as fiddling with its other defaults.
I don't use transparency (I find it distracting), but my husband is
addicted to transparent windows, so when I noticed on the web site
that openbox handles transparency I pointed him there. (He's been
using an old Afterstep, from back when it was still small and light,
but it's been a constant battle getting it to build under newer gccs.)
He reports that openbox handles transparency as well as afterstep did,
so he's switched too.
I haven't looked at the openbox code yet, but based on how fast the
developers add features and fix bugs, I bet it's clean, and I
hope I can contribute at some point.
Anyway, great focus handling, great window positioning, fast,
lightweight, super configurable, and best of all a friendly and
helpful developer and user community. What more could you ask for in a
window manager?
I'm an openbox convert. Thanks, Dana, Mikachu and all the rest.
Tags: linux, X11, window managers
[
14:13 Jun 17, 2007
More linux |
permalink to this entry |
]
Tue, 15 May 2007
The new
Debian Etch installation
on my laptop was working pretty well.
But it had one weirdness: the ethernet card was on eth1, not eth0.
ifconfig -a
revealed that eth0 was ... something else,
with no IP address configured and a really long MAC address.
What was it?
Poking around dmesg revealed that it was related to the IEEE 1394 and
the eth1394 module. It was firewire networking.
This laptop, being a Vaio, does have a built-in firewire interface
(Sony calls it i.Link). The Etch installer, when it detected no
network present, had noted that it was "possible, though unlikely"
that I might want to use firewire instead, and asked whether to
enable it. I declined.
Yet the installed system ended up with firewire networking not only
installed, but taking the first network slot, ahead of any network
cards. It didn't get in the way of functionality, but it was annoying
and clutters the output whenever I type ifconfig -a.
Probably took up a little extra boot time and system resources, too.
I wanted it gone.
Easier said than done, as it turns out.
I could see two possible approaches.
- Figure out who was setting it to eth1, and tell it to ignore
the device instead.
- Blacklist the kernel module, so it couldn't load at all.
I began with approach 1.
The obvious culprit, of course, was udev. (I had already ruled out
hal, by removing it, rebooting and observing that the bogus eth0 was
still there.) Poking around /etc/udev/rules.d revealed the file
where the naming was happening: z25_persistent-net.rules.
It looks like all you have to do is comment out the two lines
for the firewire device in that file. Don't believe it.
Upon reboot, udev sees the firewire devices and says "Oops!
persistent-net.rules doesn't have a rule for this device. I'd better
add one!" and you end up with both your commented-out line, plus a
brand new uncommented line. No help.
Where is that controlled? From another file,
z45_persistent-net-generator.rules. So all you have to do is
edit that file and comment out the lines, right?
Well, no. The firewire lines in that file merely tell udev how to add
a comment when it updates z25_persistent-net.rules.
It still updates the file, it just doesn't comment it as clearly.
There are some lines in z45_persistent-net-generator.rules
whose comments say they're disabling particular devices, by adding a rule
GOTO="persistent_net_generator_end"
. But adding that
in the firewire device lines caused the boot process to hang.
There may be a way to ignore a device from this file, but I haven't
found it, nor any documentation on how this system works.
Defeated, I switched to approach 2: prevent the module from loading at
all. I never expect to use firewire networking, so it's no loss. And indeed,
there are lots of other modules loaded I'd like to blacklist, since
they represent hardware this machine doesn't have. So it would be
nice to learn how.
I had a vague memory of there having been a file with a name like
/etc/modules.blacklist some time back in the Pliocene.
But apparently no such file exists any more.
I did find /etc/modprobe.d/blacklist, which looked
promising; but the comment at the beginning of that file says
# This file lists modules which will not be loaded as the result of
# alias expansion, with the purpose of preventing the hotplug subsystem
# to load them. It does not affect autoloading of modules by the kernel.
Okay, sounds like this file isn't what I wanted. (And ... hotplug? I
thought that was long gone, replaced by udev scripts.)
I tried it anyway. Sure enough, not what I wanted.
I fiddled with several other approaches before Debian diva Erinn Clark
found this helpful page.
I created a file called /etc/modprobe.d/00local
and added this line to it:
install eth1394 /bin/true
and on the next boot, the module was no longer loaded, and no longer
showed up as a bogus ethernet device. Hurray!
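A quick way to confirm it stuck after the next boot (a minimal sketch):
lsmod | grep eth1394      # should print nothing
ifconfig -a               # the firewire pseudo-interface should be gone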
This /etc/modprobe.d/00local technique probably doesn't bear
examining too closely. It has "hack" written all over it.
But if that's the only way to blacklist problematic modules,
I guess it's better than nothing.
Tags: linux, debian, kernel, networking
[
19:10 May 15, 2007
More linux |
permalink to this entry |
]
Since I'd already tried the latest Ubuntu on my desktop, I wanted to
check out Debian's latest, "Etch", on my laptop.
The installer was the same as always, and the same as the Ubuntu
installer. No surprises, although I do like the way Debian gives
me a choice of system types to install (Basic desktop, Web server,
etc. ... though why isn't "Development" an option?) compared to
Ubuntu's "take the packages we give you and deal with it later"
approach.
Otherwise, the install went very much like a typical Ubuntu install.
I followed the usual procedures and workarounds so as not to overwrite
the existing grub, to get around the Vaio hardware issues, etc.
No big deal, and the install went smoothly.
The good
But the real surprise came on booting into the new system.
Background: my Vaio SR-17 has a quirk (which regular readers will have
heard about already): it has one PCMCIA slot, which is needed for either
the external CDROM drive or a network card. This means that at any one
time, you can have a network, or a CDROM, but not both. This tends to
throw Debian-based installers into a tizzy -- you have to go through
five or more screens (including timing out on DHCP even after you've
told it that you have no network card) to persuade the installer that
yes, you really don't have a network and it's okay to continue anyway.
That means that the first step after rebooting into the new system is
always configuring the network card. In Ubuntu installs, this
typically means either fiddling endlessly with entries in the System
or Admin menus, or editing /etc/network/interfaces.
Anticipating a vi session, I booted into my new Etch and inserted the
network card (a 3COM 3c59x which often confuses Ubuntu).
Immediately, something began spinning in the upper taskbar.
Curious, I waited, and in ten seconds or so
a popup appeared informing me "You are now connected to the wired net."
And indeed I was! The network worked fine.
Kudos to debian -- Etch is the first distro which
has ever handled this automatically.
(I still need to edit /etc/network/interfaces to set my static IP
address -- network manager doesn't handle that for me.)
Of course, since this was my laptop, the next most important feature
is power management. Happily,
both sleep and hibernate worked correctly,
once I installed the hibernate package. That had been my biggest
worry: Ubuntu was an early pioneer in getting ACPI and power
management code working properly, but it looks like Debian has
caught up.
The bad
I did see a couple of minor glitches.
First, I got a lot of system hangs in X. These turned out to be the
usual dri problem on S3 video cards. It's a well known bug, and I wish
distros would fix it!
I've also gotten at least one kernel OOPS, but I have a theory
about what might be causing that. Time will tell whether it's
a real problem.
It took a little googling to figure out the line I needed to add to
/etc/apt/sources.list in order to install programs that weren't
included on the CD.
(Etch automatically adds lines for security updates, but not for getting
new software). But fortunately, lots of other people have already asked
this in a variety of forums. The answer is:
deb http://http.us.debian.org/debian etch main contrib non-free
My husband had suggested that Etch might be lighter weight than Ubuntu
and less dependent on hal (which I always remove from my laptop,
because its
constant hardware polling
makes noise and sucks power). But no: Etch installed hal, and
any attempt to uninstall it takes with it the whole gnome desktop
environment, plus network-manager (that's apparently that nice app
that noticed my network card earlier) and rhythmbox. I don't actually
use the gnome desktop or these other programs, but it would be nice
to have the option of trying them when I want to check something out.
So for now I've resorted to the temporary solution:
mv /usr/sbin/hald /usr/sbin/hald-not
The ugly
Etch looks fairly nice, and I'm looking forward to exploring it.
I'm mostly kidding about the "ugly". I did hit one minor bit of
ugliness involving network devices which led me on a two-hour chase
... but I'll save that for its own article.
Tags: linux, debian, vaio
[
14:29 May 15, 2007
More linux |
permalink to this entry |
]
Sun, 13 May 2007
In the last installment,
I got the Visor driver working. My sitescooper process also requires
that I have a local web server (long story), so I needed Apache. It
was already there and running (curiously, Apache 1.3.34, not Apache 2),
and it was no problem to point the DocumentRoot to the right place.
But when I tested my local site,
I discovered that although I could see the text on my website, I
couldn't see any of the images. Furthermore, if I right-clicked on any
of those images and tried "View image", the link was pointing to the
right place (http://localhost/images/foo.jpg). The file
(/path/to/mysite/images/foo.jpg) existed with all the right
permissions. What was going on?
/var/log/apache/error.log gave me the clue. When I was trying to
view http://localhost/images/foo.jpg, apache was throwing this error:
[error] [client 127.0.0.1] File does not exist: /usr/share/images/foo.jpg
/usr/share/images? Huh?
Searching for usr/share/images in /etc/apache/httpd.conf gave the
answer. It turns out that Ubuntu, in their infinite wisdom, has
decided that no one would ever want a directory called images
in their webspace. Instead, they set up an alias so that any
reference to /images gets redirected to /usr/share/images.
WTF?
Anyway, the solution is to comment out that stanza of httpd.conf:
<IfModule mod_alias.c>
# Alias /icons/ /usr/share/apache/icons/
#
# <Directory /usr/share/apache/icons>
# Options Indexes MultiViews
# AllowOverride None
# Order allow,deny
# Allow from all
# </Directory>
#
# Alias /images/ /usr/share/images/
#
# <Directory /usr/share/images>
# Options MultiViews
# AllowOverride None
# Order allow,deny
# Allow from all
# </Directory>
</IfModule>
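After editing httpd.conf, Apache has to re-read its configuration
before the change takes effect. On a Debian-style system running
Apache 1.3 that should be something like this (the init script name
is an assumption -- check what's actually in /etc/init.d):
sudo /etc/init.d/apache reload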
I suppose it's nice that they provided an example for how to use
mod_alias. But at the cost of breaking any site that has directories
named /images or /icons? Is it just me, or is that a bit crazy?
Tags: linux, ubuntu, web
[
22:55 May 13, 2007
More linux |
permalink to this entry |
]
When we left off,
I had just found a workaround for my Feisty Fawn installer problems
and had gotten the system up and running.
By now, it was late in the day, time for my
daily Sitescooper run to grab some news to read on my Treo PDA.
The process starts with making a backup (pilot-xfer -s).
But pilot-xfer failed because it couldn't find the device,
/dev/ttyUSB1. The system was seeing the device connection --
dmesg said
[ 1424.598770] usb 5-2.3: new full speed USB device using ehci_hcd and address 4
[ 1424.690951] usb 5-2.3: configuration #1 chosen from 1 choice
"configuration #1"? What does that mean? I poked around /etc/udev a
bit and found this rule in rules.d/60-symlinks.rules:
# Create /dev/pilot symlink for Palm Pilots
KERNEL=="ttyUSB*", ATTRS{product}=="Palm Handheld*|Handspring *|palmOne Handheld", \
SYMLINK+="pilot"
Oh, maybe they were calling it /dev/pilot1? But no, there was nothing
matching /dev/*pilot*, just as there was nothing matching
/dev/ttyUSB*.
But this time googling led me right to the bug,
bug
108512. Turns out that for some reason (which no one has
investigated yet), feisty doesn't autoload the visor module when
you plug in a USB palm device the way other distros always have.
The temporary workaround is sudo modprobe visor;
the long-term workaround is to add visor to /etc/modules.
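The long-term fix is tiny: /etc/modules is just a list of module
names to load at boot, so as root,
echo visor >> /etc/modules
does it (or open /etc/modules in an editor and add a line saying visor).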
On the subject of Feisty's USB support, though, I do have some good
news to report.
My biggest motivation for upgrading from edgy was because USB2 had
stopped working a few months ago --
bug 54419.
I hoped that the newer kernel in Feisty might fix the problem.
So once I had the system up and running, I plugged my trusty
hated-by-edgy MP3 player into the USB2 hub, and checked dmesg.
It wasn't working -- but the error message was actually useful.
Rather than obscure complaints like
end_request: I/O error, dev sde, sector 2033440
or
device descriptor read/64, error -110
or
3:0:0:0: rejecting I/O to dead device
it had a message (which I've since lost) about "insufficient power".
Now that's something I might be able to do something about!
So I dug into my bag o' cables and found a PS/2 power adaptor that
fit my USB2 hub, plugged it in, plugged the MP3 player into the hub,
and voila! it was talking on USB2 again.
Tags: linux, ubuntu, udev, palm, pda, usb
[
21:10 May 13, 2007
More linux |
permalink to this entry |
]
Sat, 12 May 2007
I finally found some time to try the latest Ubuntu, "Feisty Fawn", on
my desktop machine.
I used a xubuntu alternate installer disk, since I don't need the
gnome desktop, and haven't had much luck booting from the Ubuntu
live CDs lately. (I should try the latest, but my husband had already
downloaded and burned the alternate disk and I couldn't work up the
incentive to download another disk image.)
The early portions of the install were typical ubuntu installer:
choose a few language options, choose manual disk partitioning,
spend forever up- and down-arrowing through the partitioner trying
to persuade it not to automount every partition on the disk (after
about the sixth time through I gave up and just let it mount the
partitions; I'll edit /etc/fstab later), then begin the install.
That's when the errors started:
Cannot find /lib/modules/2.6.20-15-generic
update-initramfs: failed for /boot/initrd.img-2.6.0-15-generic
It couldn't install grub, and warned direly, "This is a fatal error".
But then popcorn on #linuxchix found
Ubuntu
bug 37527. Turns out the problem is due to using an existing /boot
partition, which has other kernels installed. Basically, Ubuntu's
new installer can't handle this properly. The workaround is to
go through the installer without a separate /boot partition, let it
install its kernels to /boot on the root partition (but don't let it
install grub, even though it's fairly insistent about it), then reboot
into an old distro and copy the files from the newly-installed feisty
partition to the real /boot. That worked fine.
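The copying step itself is nothing fancy -- something along these
lines, where /dev/hda2 is only a placeholder for wherever the new
feisty root partition actually lives, followed by adding grub entries
for the new kernel and initrd by hand:
mount /dev/hda2 /mnt
cp -a /mnt/boot/* /boot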
The rest of the installation was smooth, and soon I was up and
running. I made some of my usual tweaks (uninstall gdm, install tcsh,
add myself to /etc/passwd with my preferred UID, install fvwm and
xloadimage, install build-essential and the zillion other packages
needed to compile anything useful) and I had a desktop.
Of course, the adventure wasn't over. There was more fun yet to come!
But I'll post about that separately.
Tags: linux, ubuntu
[
20:36 May 12, 2007
More linux |
permalink to this entry |
]
Fri, 27 Apr 2007
"Would you take a look at this?" my husband asked. I glanced over --
he was on the Gnome desktop on his newly-installed Debian Etch system,
viewing some of his system icons with
pho.
Specifically, an xchat icon, an X with some text across it.
"So?" I shrugged.
He pointed to his panel. "But it's really using that icon."
A little yellow happy-face-with-blob thing.
He right-clicked on the panel icon and brought up a dialog.
"See, it should be using /usr/share/pixmaps/xchat.png.
Now, I run pho /usr/share/pixmaps/xchat.png
..."
And sure enough, the image it said it was using wasn't the image
it was actually putting in the panel.
That jogged a memory. "That happened to me once back when I used
Gnome. Try a locate xchat | grep png.
I think it was using an icon from somewhere else -- that might find
it for you."
Sure enough, there were several xchat png images on his system.
I suggested going one step further, and actually viewing all of them:
pho `locate xchat | grep png`
We stepped through the images, and sure enough, we found the
icon he was seeing.
It was at /usr/share/icons/gnome/32x32/apps/xchat.png (with a larger
sibling at /usr/share/icons/gnome/48x48/apps/xchat.png).
Good of Gnome to pretend to let the user customize the icon location,
even though it actually doesn't bother to use the icon specified
there! At least you get a nice feeling of empowerment from pretending
to choose the icon.
Later in the day, continuing to fiddle with the desktop settings,
Dave burst out laughing. "You've got to see this. It's so Gnome."
When I saw it, I had to laugh too. You may think you know
what you want, but Gnome knows better! If you've ever tried to
customize Gnome, you'll laugh, too, when you see the short video
we took of it:
Gnome knows
best (764K).
Tags: linux, gnome, humor
[
19:21 Apr 27, 2007
More linux |
permalink to this entry |
]
Thu, 12 Apr 2007
My laptop has always been able to sleep (suspend to RAM), one way
or another, but I had never managed it on a desktop machine.
Every time I tried running something like
apm -s, apm -S, echo 3 >/sys/power/state, or Ubuntu's
/etc/acpi/sleep.sh, the machine would sleep nicely, then when I
resumed it would come up partway then hang, or would simply boot
rather than resuming.
Dave was annoyed by it too: his Mac G4 sleeps just fine, but none
of his Linux desktops could. And finally he got annoyed enough to
spend half a day playing with different options. With what he
learned, both he and I now have desktops that can suspend to RAM
(his under Debian Sarge, mine under Ubuntu Edgy).
One step was to install hibernate (available as
a deb package in both Sarge and Edgy, but distros which don't offer
it can probably get it from somewhere on suspend2.net).
The hibernate program suspends to disk by default (which
is what its parent project, suspend2, is all about) but it
can also suspend to RAM, with the following set of arcane arguments:
hibernate -v 4 -F /etc/hibernate/ram.conf
(the -v 4 adds a lot of debugging output; remove it once
you have things working).
Though actually, in retrospect I suspect I didn't need to install
hibernate at all, and Ubuntu's /etc/acpi/sleep.sh script would
have done just as well, once I'd finished the other step:
Fiddle with BIOS options. Most BIOSes have a submenu named something
like "Power Management Options", and they're almost always set wrong
by default (if you want suspend to work). Which ones are wrong
depends on your BIOS, of course. On Dave's old PIII system, the
key was to change "Sleep States" to include S3 (S3 is the ACPI
suspend-to-RAM state). He also enabled APM sleep, which was disabled
by default but which works better with the older Linux kernels he
uses under Sarge.
On my much newer AMD64 system, the key was an option to "Run VGABIOS
if S3 Resume", which was turned off by default. So I guess it wasn't
re-enabling the video when I resumed. (You might think this would
mean the machine comes up but doesn't have video, but it's never
as simple as that -- the machine came up with its disk light solid
red and no network access, so it wasn't just the screen that was
futzed.)
Such a simple fix! I should have fiddled with BIOS settings long
ago. It's lovely to be able to suspend my machine when I go away
for a while. Power consumption as measured on the Kill-a-Watt
goes down to 5 watts, versus 3 when the machine is "off"
(desktop machines never actually power off, they're always sitting
there on standby waiting for you to press the power button)
and about 75 watts when the machine is up and running.
Now I just have to tweak the suspend scripts so that it gives me a
new desktop background when I resume, since I've been having so much
fun with my random
wallpaper script.
Later update: Alas, I was too optimistic. Turns out it actually only
works about one time out of three. The other two times, it hangs
after X comes up, or else it initially reboots instead of resuming.
Bummer!
Tags: linux, laptop, suspend, ubuntu
[
11:07 Apr 12, 2007
More linux |
permalink to this entry |
]
Wed, 14 Mar 2007
Carla Schroder's latest (excellent) article,
Cheatsheet:
Master Linux Package Management,
spawned a LinuxChix discussion of the subtleties of Debian package
management (which includes other Debian-based distros such as
Ubuntu, Knoppix etc.)
Specifically, we were unclear on the differences among
apt-get upgrade, apt-get dist-upgrade, aptitude upgrade,
aptitude dist-upgrade, and aptitude -f dist-upgrade.
Most of us have just been typing whichever command we learned first,
without understanding the trade-offs.
But Erinn Clark, our Debian Diva, checked with some of her fellow
Debian experts and got us most of the answers, which I will attempt
to summarize with a little extra help from web references and man pages.
First, apt-get vs. aptitude:
we were told that the primary difference between them is
that "aptitude is less likely to remove packages." I confess
I'm still not entirely clear on what that means, but aptitude is seen
as safer and smarter and I'll go on using it.
aptitude upgrade gets updates (security, bug fixes or whatever)
to all currently installed packages. No packages will be removed,
and no new packages will be installed.
If a currently installed package changes to require a
new package that isn't installed, upgrade will refuse to update
those packages (they will be "kept back"). To install the "kept back"
packages with their dependencies, you can use:
aptitude dist-upgrade gets updates to the currently installed
packages, including any new packages which are now required.
But sometimes you'll encounter problems in the dependencies,
in which case it will suggest that you:
aptitude -f dist-upgrade tries to "fix broken packages",
packages with broken dependencies. What sort of broken dependencies?
Well, for example, if one of the new packages conflicts with another
installed package, it will offer to remove the conflicting package.
Without -f, all you get is that a package will be "held back" for
unspecified reasons, and you have to go probing with commands like
aptitude -u install pkgname or
apt-get -o Debug::pkgProblemResolver=yes dist-upgrade
to find out the reason.
The upshot is that if you want everything to just happen in
one step without pestering you, use aptitude -f dist-upgrade;
if you want to be cautious and think things through at each step,
use aptitude upgrade and be willing to type the stronger
commands when it runs into trouble.
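As a cheat sheet, here's my reading of the three commands (my own
summary, not official Debian doctrine):
aptitude upgrade          # update installed packages; add and remove nothing
aptitude dist-upgrade     # also install any newly required packages
aptitude -f dist-upgrade  # additionally try to fix or remove broken/conflicting packages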
Sections 6.2 and 6.3 of the
Debian
Reference cover these commands a little, but not in much detail.
The APT
Howto is better, and runs through some useful examples (which I
used to try to understand what -f does).
Thanks go to Erinn, Ari Pollak, and Martin Krafft (whose highly rated book,
The
Debian System: Concepts and Techniques, apparently would have
answered these questions, and I'll be checking it out).
Tags: linux, debian, ubuntu
[
22:19 Mar 14, 2007
More linux |
permalink to this entry |
]
Sat, 17 Feb 2007
Remember a bit over a year ago when
Linus Torvalds slapped GNOME
for removing all their configuration options and assuming a
"users are idiots mentality"?
Here's the latest installment.
Linus, challenged to "start a constructive dialog" by using GNOME for
a while then giving a talk on his experiences, went them one better:
he sent in patches to fix the usability problems he experienced.
An excerpt from his posting to the Desktop_architects list:
I've sent out patches. The code is actually _cleaner_ after my
patches, and the end result is more capable. We'll see what happens.
THAT is constructive.
What I find unconstructive is how the GNOME people always make
*excuses*. It took me a few hours to actually do the patches. It
wasn't that hard. So why didn't I do it years ago?
I'll tell you why: because gnome apologists don't say "please send
us patches". No. They basically make it clear that they aren't even
*interested* in fixing things, because their dear old Mum isn't
interested in the feature.
Do you think that's "constructive"?
and, later,
Instead, I _still_ (now after I sent out the patch) hear more of your
kvetching about how you actually do everything right, and it's somehow
*my* fault that I find things limiting.
Here's a damn big clue: the reason I find gnome limiting is BECAUSE
IT IS.
Linus is back, and he's pissed. Go Linus!
Read more details in the linux.com
story,
or in Linus'
actual email in the Desktop_architects list, where you can also
follow the rest of the thread.
Tags: linux, gnome
[
21:05 Feb 17, 2007
More linux |
permalink to this entry |
]
The
simple
auto-login without gdm which I described a year ago stopped
working when I upgraded to "Edgy Eft". As part of Ubuntu's new
"Upstart" boot system, they've replaced
/etc/inittab with a new
system that uses the directory
/etc/event.d.
There's a little bit
of documentation available, but in the end it just came down to
fiddling. Here's how it works:
First, use the same /usr/bin/loginscript you used for the old
setup, which contains something like this:
#! /bin/sh
/bin/login -f yourusername
Then edit /etc/event.d/tty1 and find the getty line: probably
the last line of the file, looking like
respawn /sbin/getty 38400 tty1
Change that to:
respawn /sbin/getty -n -l /usr/bin/loginscript 38400 tty1
That's it! If you want to run X (or anything else) automatically,
that works the same way as always.
Update: This changed again in Hardy.
Here
are the details.
And then changed
again in Karmic Koala: the file has moved from /etc/event.d/tty1
to /etc/init/tty1.conf.
Tags: linux, ubuntu, boot
[
13:37 Feb 17, 2007
More linux |
permalink to this entry |
]
Thu, 15 Feb 2007
I got a nifty new toy: a Logitech Cordless Presenter.
I've been watching some of the videos from
LCA2007,
and I like how some of the speakers made their presentations
smoother, and avoided being tied to standing next to their computer,
by using remote slide-changing gizmos. It wouldn't help for my GIMP
presentations, since those are usually live demos, but I do give
other types of talks.
I didn't find much on the web about remote presenters and Linux,
and wasn't at all sure whether or how they worked. As usual with
hardware, the only way to find out is to buy one and try it.
As it turns out, it works just fine, so I wrote up a
review
with details.
Tags: linux, speaking
[
15:34 Feb 15, 2007
More linux |
permalink to this entry |
]
Wed, 07 Feb 2007
A couple of udev tips I picked up at LinuxConf,
mostly from talking to folks in the hallways:
I'd been having trouble getting my laptop to read its
built-in memory stick since upgrading to Ubuntu Edgy.
It's basically the same problem I described in
an
earlier article: the machine boots, sees the built-in reader
with no card there, and udev creates /dev/sda but not /dev/sda1.
Later, I insert a memory stick, but the reader (like so many other
USB-based flash card readers) does not generate an event, so
no new device is created.
In that earlier article, the solution was to change the udev rule
that creates the device and add something like
NAME{all_partitions}="stick".
The all_partitions
tells it to create /dev/stick1, /dev/stick2 etc. up through the
possible maximum number of partitions. (It would be nice to limit it
just to /dev/stick1, but there doesn't seem to be any way to do that.)
Unfortunately, in edgy, the udev rules have been rewritten to be a lot
more general, and adding {all_partitions} wasn't working.
But LinuxConf gave me two solutions to this problem:
First, I was able to pester one of the hal
developers about hal's annoying mandatory polling.
(This is the official Ubuntu solution to the problem, by the
way: if you let hald wake up twice a second to poll every device
on the USB bus to see whether anything new has been added, then
you'll get that /dev/sda1 device appearing. I wasn't the only one at
the conference, I was happy to find, who was unhappy about this hald
misbehavior. It got mentioned in at least two other talks as an
example of inefficient behavior that can eat batteries and CPU, and
a questioner during the hal talk echoed my opinion that the polling
should be made optional for those of us who don't want it.)
Anyway, I asked him what hald does to create the
/dev/sda1 device once it sees a card. It turns out that
touch /dev/sda causes udev to wake up and re-check the device,
and create any new device nodes which might have appeared.
Hurrah! That's a much cleaner workaround than sudo mknod.
But at breakfast a few days later, I found myself sitting next to a
udev expert. He took a look at the file
I'd created, /etc/udev/rules.d/memstick.rules, and after a few
minutes of fiddling discovered what it was missing: a crucial
ACTION=="add" directive which hadn't been required under the
old system. The working line now looks like this:
KERNEL=="sda", ACTION=="add", OPTIONS=="all_partitions", NAME{all_partitions}="stick"
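With the rule in place, a quick sanity check after inserting a card
(assuming the reader really does show up as sda on your machine):
sudo touch /dev/sda
ls /dev/stick*
and /dev/stick1 should appear.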
Tags: linux, udev, hal
[
22:14 Feb 07, 2007
More linux |
permalink to this entry |
]
Sat, 27 Jan 2007
Australia! I spent the last week in Sydney as a speaker for
linux.conf.au 2007. My first time overseas, and first time in way
too long at a technical Linux conference.
I had lots of plans to write about it as it was happening. Jot down
events of the day, impressions of the talks, etc. In retrospect I have
no idea how anyone manages to do that. There's just so much
stuff going on at LCA that I was busy the whole time.
Blogging or sleep ... that might be a hard choice for some people, but
I like sleep. Sleep is good. Sleep lets me have a lot more fun at the
talks and the social events afterward.
First, about technical conferences. With the emphasis on technical.
In California we have a bunch of conferences like Linux World Expo and
the O'Reilly Emerging Tech conference, where a few
geeks-turned-PR-people make whizzy presentations to marketing and CIO
sorts. Ick. Sometimes there are a few presentations that are actually
technical, but not many. And oh, did I mention the multi-kilobuck reg
fees?
Linux.conf.au isn't like that at all. It's all geeks, all the time.
Not everyone is a programmer (though the majority are), but of the
hundreds of people I talked to during the week of the conference I
didn't meet a single person who wasn't deeply and passionately
involved with Linux in some way. You could pick any person at random,
start a conversation and immediately be deep in conversation about
interesting details of some aspect of Linux you hadn't thought much
about before.
Picking people at random and talking to them? What sort of a geek
would do that? Well, the cool thing is that in an environment like
LCA, the shyest geek can still network pretty well. If you can't
make small talk or force a fake smile, you can jump to the meaty
stuff right away and start trading notes on filesystems or network
configuration or IRQs or python GUI toolkits. I was almost late to
the post-conference LinuxChix meetup because it turned out the person
sitting next to me at breakfast was a udev expert who knew how to get
my memory stick reader recognized (more on that in a separate article).
Not that you'd really need to talk to random people if you didn't want
to. One of the many highlights of the conference was the chance to
meet people from all over the world whom I'd only met before on IRC
or mailing lists. I already "knew" lots of people there, even if I'd
never seen their faces before.
There were tons of
talks, with four or five tracks going at all
times, and all on good topics. It was quite common to want to go to
two or three or even more simultaneous presentations. Fortunately
nearly everything was video taped, so with any luck we'll be able to
catch up on sessions that we missed (and folks who couldn't afford the
trip can benefit from all the great talks). The videos are still being
uploaded and aren't all there yet, but they've done an amazing job
getting as many transcoded and uploaded as they have so far, and I'm
sure the rest won't be too far behind. (Some of them are on the
mirror
but not yet linked from the Schedule page.)
How do they get all those great talks? I must say, LCA treats its
speakers well. In addition to the super-secret "Speakers Adventure",
which we were assured was worth getting up at 6am for (it was),
they gave us a dinner cruise on scenic Sydney harbor, which
included an after-dinner talk on how to give better talks (focused on
flash rather than content). I didn't agree with all his points, but
that's okay, the point is to get people thinking. I bet every one of
us (certainly everyone I talked to) went back and revised our talks at
least a little bit based on the presentation.
I hope my GIMP
tutorial and my miniconf bugfixing talk lived up to
the organizing committee's expectations -- it's intimidating sharing a
schedule with so many smart people who are also good speakers!
The first two days of the conference were taken up by
"miniconfs".
I originally had my eyes on several of the miniconfs, on topics such
as Kernel, Education and Research, though I knew I'd start the day at the
LinuxChix
miniconf.
As it turned out, that miniconf was so excellent that I spent
the whole day there. It included a mixture of technical and social
issues: talks on women in FOSS (Sulamita), my talk on "Bug Fixing for
Non Programmers", "Demystifying PCI" (Kristin), a set of terrific
"Lightning Talks" under five minutes, and eventually concluded with
talks on networking in the social sense (Jacinta) and negotiating
wages (Val). After Jacinta's and Val's talks we broke up into small
groups and headed for the lawn outside for some very productive
discussions of networking and negotiation, which were so interesting
we kept the discussions going all afternoon.
The LinuxChix miniconf was Standing Room Only all day, with plenty of
men listening in. It was quite a rush to see so many technical women
all together, giving talks and discussing details of Linux and FOSS.
Another miniconf-like activity was Open Day, on Thursday afternoon,
when the conference invited people from the area (particularly
teachers and students) to wander through displays on all sorts of FOSS
topics. There were booths from most of the major distros handing out
CDs or inviting people to do network installs, a booth showing the One
Laptop Per Child project, booths showing games
and interesting projects such as amateur rocket and satellite projects
or the open source Segway clone, a Linuxchix booth, and booths from a
few companies such as Google. Open Day was jam packed, people seemed
to be having fun and they gave away a few amazing prizes, like Vaio
laptops (donated by IBM) which came in an amazingly small box. I was
itching to see what was in those little boxes (we never get the cool
small laptops in the US, where the national philosophy is "Bigger is
Better") but alas, I wasn't one of the lucky winners.
A couple of other notable talks I went to:
Making Things Move:
Finding Inappropriate Uses for Scripting Languages by Jonathan
Oxer, which included live demos of hooking up radio switches and
controlling them from the commandline (with a little simple C glue);
and "burning cpu and
battery on the gnome desktop" by Ryan Lortie, who not only gave an
excellent and entertaining list of programs and services which use up
system resources inefficiently by polling, opening too many files or
other evils (several other speakers offered similar lists), but also
gave concrete advice for finding such programs and fixing them.
I'm looking forward to seeing his slides uploaded (I'll link them
here when I find them).
Tags: linux, conferences, lca2007
[
23:30 Jan 27, 2007
More conferences |
permalink to this entry |
]
Fri, 29 Dec 2006
A friend called me for help with a sysadmin problem they were having
at work. The problem: find all files bigger than one gigabyte, print
all the filenames, add up all the sizes and print the total.
And for some reason (not explained to me) they needed to do this
all in one command line.
This is Unix, so of course it's possible somehow!
The obvious place to start is with the find command,
and man find showed how to find all the 1G+ files:
find / -size +1G
(Turns out that's a GNU find syntax, and BSD find, on OS X, doesn't
support it. I left it to my friend to check man find for the
OS X equivalent of -size +1G.)
But for a problem like this, it's pretty clear we'd need to get find
to execute a program that prints both the filename and the size.
Initially I used ls -ls, but Saz (who was helping on IRC)
pointed out that du on a file also does that, and looks a
bit cleaner. With find's unfortunate syntax, that becomes:
find / -size +1G -exec du "{}" \;
But now we needed awk, to collect and add up all the sizes
while printing just the filenames. A little googling (since I don't
use awk very often) and experimenting led to the final solution:
find / -size +1G -exec du "{}" \; | awk '{print $2; total += $1} END { print "Total is", total}'
Ah, the joys of Unix shell pipelines!
Update: Ed Davies suggested an easier way to do the same thing:
it turns out du will handle it all by itself:
du -hc `find . -size +1G`
Thanks, Ed!
Tags: linux, CLI, shell, backups, pipelines
[
17:53 Dec 29, 2006
More linux |
permalink to this entry |
]
Sat, 09 Dec 2006
Another person popped into #gimp today trying to get a Wacom tablet
working (this happens every few weeks). But this time it was someone
using Ubuntu's new release, "Edgy Eft", and I just happened to have
a shiny new Edgy install on my laptop (as well as a Wacom Graphire 2
gathering dust in the closet because I can never get it working
under Linux), so I dug out the Graphire and did some experimenting.
And got it working! It sees pressure changes and everything.
It actually wasn't that hard, but it did require
some changes. Here's what I had to do:
- Install wacom-tools and xinput
- Edit /etc/X11/xorg.conf and comment out those ForceDevice lines
that say "Tablet PC ONLY".
- Reconcile the difference between udev creating /dev/input/wacom
and xorg.conf using /dev/wacom: you can either change xorg.conf,
change /etc/udev/rules.d/65-wacom.rules, or symlink /dev/input/wacom
to /dev/wacom (that's what I did for testing, but it won't survive a
reboot, so I'll have to pick a device name and make udev and X
consistent).
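For the quick-test route, the symlink is just
sudo ln -s /dev/input/wacom /dev/wacom
while the reboot-proof fix is to make the two names agree, e.g. by
pointing the Option "Device" lines in xorg.conf's wacom sections at
/dev/input/wacom.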
A useful tool for testing is /usr/X11R6/bin/xinput list
(part of the xinput package).
That's a lot faster than going through GIMP's input device
preference panel every time.
I added some comments to Ubuntu's bug
73160, where people had already described some of the errors but
nobody had posted the steps to work around the problems.
While I was fiddling with GIMP on the laptop, I decided to install
the packages I needed to build the latest CVS GIMP there.
It needed a few small tweaks from the list I used on Dapper.
I've updated the package list on my GIMP Building
page accordingly.
Tags: linux, X11, imaging, gimp, ubuntu
[
16:12 Dec 09, 2006
More linux |
permalink to this entry |
]
Mon, 20 Nov 2006
I just tried Ubuntu's newest release, "Edgy
Eft", on the laptop (my trusty if aging Vaio SR17).
I used the "xubuntu" variant, in order to try out their lighter
weight xfce-based desktop.
So far it looks quite good.
But the installation process involved
quite a few snags: here follows an account of the various workarounds
I needed to get it up and running.
Live CD Problems
First, I tried to use the live CD, since I've heard it has a nice
installer. But it failed during the process of bringing up X, and
dumped me into a console screen with an (initramfs) prompt.
I thought I had pretty good Linux creds, but I have to confess
I don't know what to do with an (initramfs) prompt; so I gave up
and switched to the install CD. Too bad! I was so impressed with
Ubuntu's previous live CDs, which worked well on this machine.
Guessing Keyboard Layout
Early on, the installer gives you the option to let it guess your
keyboard layout. Don't let it! What this does is subject you
to a seemingly infinite list of questions like:
Does your keyboard have a squiggle key?
where each
squiggle is a different garbled, completely illegible
character further mangled by the fact that the installer is running at
a resolution not native to the current LCD display. After about 15 of
these you just give up and start hitting return hoping it will end
soon -- but it doesn't, so eventually you give up and try ctl-alt-del.
But that doesn't work either. Pulling the power cord and starting over
seems to be the only answer. Everyone I've talked to who's installed
Edgy has gone through this exact experience and we all had a good
laugh about it. Come to think of it, go ahead and say yes to the
keyboard guesser, just so you can chuckle about it with the rest of us.
Once I rebooted and said no to the keyboard guesser, it asked three or
four very straightforward questions about language, and the rest
of the installation went smoothly. Of course it whined about not
seeing a network, and it whined about my not wanting it to overwrite
my existing /boot, and it whined about something to do with free
space on my ext3 partitions (set up by a previous breezy install),
but it made it through.
X Hangs on the Savage
On the first reboot after installation, it
hung while trying to start X -- blank screen, no keyboard
response, and I needed to pull the plug. I was prepared for that
(longstanding bug 41340)
so I immediately suspected dri. I booted from another partition and
removed the dri lines from /etc/X11/xorg.conf,
which fixed the problem.
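For reference, the lines in question are a Load "dri" in the Module
section plus, if present, the whole DRI section; commented out they
look roughly like this (exact contents will vary by setup):
# Load "dri"
# Section "DRI"
#   Mode 0666
# EndSection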
Configuring the Network
Now I was up and running on Xubuntu Edgy.
Next I needed to configure the network (since the installer won't do
it: this machine only has one pcmcia slot, so it can't have
a CDROM drive and a network card installed at the same time).
I popped in the network card (a 3com 3c59x cardbus card) and waited
expectantly for something to happen.
Nada. So I poked around and found the network configuration tool in
the menus, set up my IP and DNS and all that, and looked in vain for
a "start network" or "enable card" or some similar button that would
perform an ifup eth0.
Nada again. Eventually I gave up, called up a terminal, and ran
ifup eth0, which worked fine.
Which leads me to ask:
Given that Ubuntu is so committed to automatic hardware detection that
it forces you to run hal, which spawns large numbers of daemons that
poll all your disks a couple of times a second -- why can't it notice
the insertion of a cardbus networking card?
And configure it in some easy way without requiring the user to know
about terminals and networking commands?
Ubuntu Still Wins for Suspend and Hibernate
Around this point I tested suspend and hibernate. They both worked
beautifully out of the box, with no additional twiddling
needed. Ubuntu is still the leader in suspending.
sudo: Timestamp Woes
Somewhere during these package management games, I lost the ability
to sudo: it complained "Timestamp too far in the future", without
telling me which file's timestamp was wrong so that I could fix it.
Googling showed that lots of other people were having the
same problem with Edgy, and found an answer: use the GUI Time and
Date tool to set the time to something farther in the future than
the timestamp sudo was complaining about, then run sudo -k to do
some magic that resets the timestamp. Then you can set the time back
to where it belongs. Why does this happen? No one seems to know, but
that's the fix.
(I found some discussion in
bug 43233.)
vim isn't vim?
I restored my normal user account, logged in as myself with my normal
fvwm environment, and went to edit (with vim) a few files. Every time,
vim complained:
"E319: Sorry, the command is not available in this version: syntax on"
after which I could edit the file normally. Eventually I googled to
the answer, which is
très bizarre: by default,
vim-common is installed but
vim is not. There's a binary
named vim, and a package which seems to be vim, but it isn't vim.
Installing the package named
vim gives you a vim that
understands "syntax on" without complaining.
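In other words, the whole fix is a one-liner:
sudo aptitude install vim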
Conclusion
That's the list. Edgy is up and running now, and looks pretty good.
The installer definitely has some rough edges, and I hope my
workarounds are helpful to someone ... but the installer is only
a tiny part of the OS, something you run only once or twice.
So don't let the rough installer stop you from installing Edgy and
trying it out. I know I look forward to using it.
Tags: linux, ubuntu
[
20:30 Nov 20, 2006
More linux |
permalink to this entry |
]
Mon, 04 Sep 2006
I've been updating some web pages with tricky JavaScript and CSS,
and testing to see if they work in IE (which they never do) is
a hassle involving a lot of pestering of long suffering friends.
I've always heard people talk about how difficult it is to get
IE working on Linux under WINE. It works in
Crossover Office
(which is a good excuse to get Crossover: the company, Codeweavers,
is a good open source citizen and has contributed lots of work
to WINE, and I've bought from them in the past) but most people
who try installing IE under regular WINE seem to have problems.
Today someone pointed me to
IEs 4
Linux. It's a script that downloads IE and installs it under WINE.
You need wine and cabextract installed.
I was sure it couldn't be that simple, but it seemed easy enough to try.
It works great! Asked me a couple of questions, downloaded IE,
installed it, gave me an easy-to-run link in ~/bin, and it runs fine.
Now I can test my pages myself without pestering my friends.
Good stuff!
Tags: tech, web, linux
[
15:21 Sep 04, 2006
More tech/web |
permalink to this entry |
]
Sat, 19 Aug 2006
It's been a week jam-packed with Linuxy stuff.
Wednesday I made my annual one-day trip to Linuxworld in San
Francisco. There wasn't much of great interest at the conference
this year: the usual collection of corporate booths (minus Redhat,
notably absent this year), virtualization as a hot keyword (but perhaps
less than the last two years) and a fair selection of sysadmin tools,
not much desktop Linux (two laptop vendors), and a somewhat light
"Dot Org" area compared to the last few years.
I was happy to notice that most of the big corporate
booths were running Linux on a majority of show machines, a nice
contrast from earlier years. (Dell was the exception, with more
Windows than Linux, but even they weren't all Windows.)
Linuxworld supposedly offers a wireless network but I never managed to
get it to work, either in the exhibit hall or in the building where
the BOFs were held.
Wednesday afternoon's BOF list didn't offer much that immediately
grabbed me, but in the end I chose one on introducing desktop
Linux to corporate environments. Run by a couple of IBM Linux
advocates, the BOF turned out to be interesting and well presented,
offering lots of sensible advice (base your arguments to management
on business advantages, like money saved or increased ability to get
the job done, not on promises of cool features; don't aim for a
wholesale switch to Linux, merely for a policy which allows employees
to choose; argue for standards-based corporate infrastructure since
that allows for more choice and avoids lock-in). There was plenty
of discussion between the audience and the folks leading the BOF,
and I think most attendees got something out of it.
More interesting than Linuxworld was Friday's Ubucon,
a free Ubuntu conference held at Google (and spilling over into
Saturday morning).
Despite a lack of advertising, the Ubucon was very well attended.
There were two tracks, ostensibly "beginner" and "expert", but
even aside from my own GIMP talk being a "beginner" topic, I
ended up hanging out in the "beginner" room for the whole day,
for topics like "Power Management", "How to Get Involved", and
"What Do Non Geeks Need?" (the last topic dovetailing into the
concluding session on Linux corporate desktops).
All of the sessions were quite interactive
with lots of discussion and questions from the audience.
Everyone looked like they were having a good time, and I'm sure
several of us are motivated to get more deeply involved with Ubuntu.
Ubucon was a great example of a low-key, fun,
somewhat technical conference on a shoestring budget and I'd love to
see more conferences like this in the bay area.
Finally, the week wrapped up with the annual Linux Picnic in
Sunnyvale, a Silicon Valley tradition for many years and always a good
time. There were some organizational glitches this year ... but it's
hard to complain much about a free geek picnic in perfect weather
complete with t-shirts, an installfest, a raffle and even (by
mid-afternoon) a wireless network. Fun stuff!
Tags: linux, conferences, linuxworld, ubucon, ubuntu
[
20:52 Aug 19, 2006
More conferences |
permalink to this entry |
]
Mon, 10 Jul 2006
An unexplained change in the screen blanking timeout sent me
down the primrose path of learning about dpms, vbetool and screen
blanking. I guess it had to happen eventually.
It started when my laptop, which normally can sit for about ten
minutes before the screen goes black, suddenly started blanking
the screen after only two minutes.
After a few days of sporadic research, I now know what pieces are
involved and how to fix the timeout. But it turns out not everything
works the way it's supposed to.
I've written up my findings:
A Primer on
Screen Blanking Under Xorg.
Tags: linux, X11
[
12:55 Jul 10, 2006
More linux |
permalink to this entry |
]
Thu, 01 Jun 2006
On Dapper, whenever I tried to add a new printer or make any
modifications to the existing printer's settings, I would eventually
come to a dialog prompting me to enter username and password for
'CUPS'. There seemed to be no right answer: there is no user called
"cups" (there's a "cupsys", but that's not what it was asking for),
and trying either my own username and password, or root's, just
popped up the dialog again. A second attempt always led to a blank
white page.
But Carla
knew the answer. You're supposed to read:
zless /usr/share/doc/cupsys/README.Debian.gz
then skip to the end of the file where there's a brief hint
about this problem, stating that "Administration over the web
interface is disabled by default since it
requires the CUPS daemon to be able to read /etc/shadow."
Note that they don't actually disable it in a way that
tells users it's disabled. CUPS apparently doesn't check for
read permission on the shadow file before opening it,
or check whether there was an error in reading it;
it just silently bombs out with no indication what went wrong.
To fix it:
adduser cupsys shadow
adduser yourname lpadmin
You may not need the second line if you're already in group
lpadmin (type
groups to find out).
Then reboot. (Restarting cups and logging out and back in might
be enough: you need to get cups and your login session seeing their
new group permissions.)
Now, magically, the CUPS web interface works!
Tags: linux, printing
[
23:15 Jun 01, 2006
More linux |
permalink to this entry |
]
Sun, 14 May 2006
I had a page of plaintext which included some URLs in it, like this:
Tour of the Hayward Fault
http://www.mcs.csuhayward.edu/~shirschf/tour-1.html
Technical Reports on Hayward Fault
http://quake.usgs.gov/research/geology/docs/lienkaemper_docs06.htm
I wanted to add links around each of the urls, so that I could make
it part of a web page, more like this:
Tour of the Hayward Fault
http://www.mcs.csuhayward.edu/~shirschf/tour-1.html
Technical Reports on Hayward Fault
http://quake.usgs.gov/research/geology/docs/lienkaemper_docs06.htm
Surely there must be a program to do this, I thought. But I couldn't
find one that was part of a standard Linux distribution.
But you can do a fair job of linkifying just using a regular
expression in an editor like vim or emacs, or by using sed or perl from
the commandline. You just need to specify the input pattern you want
to change, then how you want to change it.
Here's a recipe for linkifying with regular expressions.
Within vim:
:%s_\(https\=\|ftp\)://\S\+_<a href="&">&</a>_
If you're new to regular expressions, it might be helpful to see a
detailed breakdown of why this works:
- : -- Tell vim you're about to type a command.
- % -- The following command should be applied everywhere in the file.
- s_ -- Do a global substitute, and everything up to the next underscore
will represent the pattern to match.
- \( -- This will be a list of several alternate patterns.
- http -- If you see an "http", that counts as a match.
- s\= -- Zero or one esses after the http will match: so http and https are
okay, but httpsssss isn't.
- \| -- Here comes another alternate pattern that you might see instead
of http or https.
- ftp -- URLs starting with ftp are okay too.
- \) -- We're done with the list of alternate patterns.
- :// -- After the http, https or ftp there should always be a colon-slash-slash.
- \S -- After the ://, there must be a character which is not whitespace.
- \+ -- There can be any number of these non-whitespace characters as long
as there's at least one. Keep matching until you see a space.
- _ -- Finally, the underscore that says this is the end of the pattern
to match. Next (until the final underscore) will be the expression
which will replace the pattern.
- <a href="&"> -- An ampersand, &, in a substitute expression means "insert
everything that was in the original pattern". So the whole url will
be inserted between the quotation marks.
- &</a> -- Now, outside the <a href="..."> tag, insert the matched url
again, and follow it with a </a> to close the tag.
- _ -- The final underscore which says "this is the end of the
replacement pattern". We're done!
Linkifying from the commandline using sed
Sed is a bit trickier: it doesn't understand \S for
non-whitespace, nor = for "zero or one occurrence".
But this expression does the trick:
sed -e 's_\(http\|https\|ftp\)://[^ \t]\+_<a href="&">&</a>_' <infile.txt >outfile.html
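Since perl was the other commandline option mentioned above, a roughly
equivalent perl one-liner would be something like:
perl -pe 's{\b(?:https?|ftp)://\S+}{<a href="$&">$&</a>}g' infile.txt > outfile.html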
Addendum: George
Riley tells me about
VST for Vim 7,
which looks like a nice package to linkify, htmlify, and various
other useful things such as creating HTML presentations.
I don't have Vim 7 yet, but once I do I'll definitely check out VST.
Tags: linux, editors, pipelines, regexp, shell, CLI
[
13:40 May 14, 2006
More linux/editors |
permalink to this entry |
]
Sun, 23 Apr 2006
Firefox' print system doesn't know how to print just even or just
odd pages of a document, so when I need to print out a web page
and want it double sided, I've been doing the Duplex Dance:
hover over the printer ready to grab each page as it comes out
the front so that I can quickly flip it and feed it back into the
top of the printer fast enough that the printer doesn't time out
with "Out of Paper".
Of course, more often I just sigh and let it print single sided
because that's just way too much hassle.
But today is Earth Day, so I decided it was time to find a better
solution. The solution, obviously, begins with telling the browser
to print to a Postscript file. Then the challenge is to find a way to
print only the odd pages of the Postscript file, put the pages back
in the printer, then print only the even pages.
First I tried to use mpage, which claims to be able to do this.
It looked like this command should do it:
mpage -j 1%2 file.ps | lpr
But it didn't work -- it said it was spooling something to the
printer, but nothing ever came out. Upon saving the mpage output
to a file, I found that gv couldn't show it, citing postscript
errors.
But it turns out there's a much easier way: the CUPS lp program
has an option called page-set precisely for this purpose,
and it's smart about detecting postscript input.
This command did the trick:
lp -o page-set=odd file.ps
Then flip the sheets, insert back into the printer, and repeat
with
even instead of
odd. Lovely!
This is documented in http://localhost:631/help/options.html?QUERY=odd#PAGESET
-- and the CUPS in-browser help has a search function that took me
right to it, I'm happy to note.
Other programs which may to be able to split postscript files into
odd and even pages include psselect and perhaps dviduplex.
With a smarter print dialog (one that allowed specifying a custom
print command, like Mozilla's used to) you could even define several
custom printers, one that printed even pages and one that printed
odd. Alas, Firefox' print dialog doesn't allow such things, or even
allow defining extra printers.
(The Mozilla bug asking for odd/even printing is bug
35228).
From a quick search of about:config, it
looks like you might be able to set up something by hand using
the print.printer_CUPS/printername.print_command preference
(by default mine was set to
lpr ${MOZ_PRINTER_NAME:+'-P'}${MOZ_PRINTER_NAME}) but I
notice something even more interesting: two variables called
print.printer_CUPS/printername.print_evenpages and
print.printer_CUPS/printername.print_oddpages (both set to
true by default). Also interesting are plex.0.name and
plex.count.
Might be an easy way to get duplex printing
straight from the browser, after doing a little hand-editing to
try and persuade Firefox that there's more than one printer.
I'll try it next time I need to print something.
(It seems wrong to spend Earth Day printing more pages than I
actually need.)
Tags: linux, printing
[
20:21 Apr 23, 2006
More linux |
permalink to this entry |
]
Sat, 01 Apr 2006
When I plug in my camera (or flash card reader) to upload photos,
they always upload as executable. I knew there must be an easy
way to fix it, and finally got around to it.
I'm sure you are fully capable of reading man pages and figuring this
out, just like I was. (Hint: the solution is in man mount.)
But wouldn't you rather have it just spoon-fed to you?
(I know I would have.) So here it is:
you need the fmask option to mount.
It's a mask, so you set it to the bits you
don't want set when you mount the filesystem
(on top of your normal umask).
My /etc/fstab entry for my camera or other flash card device
now looks like this:
/dev/sda1 /pix vfat rw,user,fmask=111,noauto 0 0
(On the laptop it's sdb1 because the built-in memory stick reader
always grabs sda.)
Tags: linux, filesystems
[
22:39 Apr 01, 2006
More linux |
permalink to this entry |
]
Mon, 20 Mar 2006
Dave has been complaining for a long time about the proliferation of
files named
.serverauth.???? (where
???? is various
four-digit numbers) in his home directory under Ubuntu. I never saw
them under Hoary, but now under Breezy and Dapper I'm seeing the same
problem.
I spent some time investigating, with the help of some IRC friends.
In fact, Carla
Schroder, author of O'Reilly's Linux Cookbook, was
the one who pinned down the creation of the files to the script
/usr/bin/startx.
Here's the deal: if you use gdm, kdm, or xdm, you'll never see
this. But for some reason, Ubuntu's startx uses a program
called xauth which creates a file containing an "MIT-MAGIC-COOKIE".
(Don't ask.) Under most Linux distributions, the magic cookie goes
into a file called .Xauthority. The startx script checks an
environment variable called XENVIRONMENT for the filename; if it's not
set to something else, it defaults to $HOME/.Xenvironment.
Ubuntu's version is a little different. It still has the block of
code where it checks XENVIRONMENT and sets it to
$HOME/.Xenvironment if it isn't already set. But a few
lines later, it proceeds to create the file under another, hardwired,
name: you guessed it, $HOME/.serverauth.$$. The XENVIRONMENT
variable which was checked and set is never actually used.
Programmers take note! When adding a feature to a script, please take
a moment and think about what the script does, and check to see
whether it already does something like what you're adding. If so,
it's okay -- really -- to remove the old code, rather than leaving
redundant and obsolete code blocks in place.
Okay, so why is the current code a problem? Because startx
creates the file, calls xinit, then removes the file. In other words,
it relies on xinit (and the X server) exiting gracefully. If anything
else happens -- for example, if you shut your machine down from within X --
the startx script (which does not catch signals) can die
without ever getting to the code that removes the file. So if you
habitually shut down your machine from within X, you will have a
.serverauth.???? file from each X session you ever shut down that way.
Note that the old setup didn't remove the file either, but at least it
was always the same file. So you always have a single .Xauthority file
in your home directory whether or not it's currently in use. Not much
of a problem.
I wasn't seeing this under Hoary because under Hoary I ran gdm, while
with Dapper, gdm would no longer log me in automatically so I had to
find
another approach to auto-login.
For Ubuntu users who wish to go back to the old one-file XAUTHORITY
setup, there's a very simple fix: edit /usr/bin/startx (as
root, of course) and change the line:
xserverauthfile=$HOME/.serverauth.$$
to read instead
xserverauthfile=$XAUTHORITY
If you want to track this issue, it's bug bug
35758.
Tags: linux, X11, ubuntu
[
21:24 Mar 20, 2006
More linux |
permalink to this entry |
]
Sat, 18 Mar 2006
I needed to send a formal letter, so I thought, as a nice touch,
I'd print the address on the envelope rather than handwriting it.
I felt sure I remembered glabels, that nice, lightweight,
straightforward label printing program, having envelope options.
But nope, it doesn't have any paper sizes corresponding to envelopes.
(It has a size called A10 but it's 1.0236 x 1.4567 inches, not the 4.13 x
9.50 I'd expect for a number-ten envelope.)
Darn, that would have been perfect.
OpenOffice doesn't have anything in its templates list that
corresponds to an envelope, nor does it have anything in the Wizards
list. But googling showed me -- aha! -- that it's hidden in the
Insert menu, Insert->envelope -- so if you want to
create a new envelope document, create some other type of document
first, go to Insert and bring up the envelope dialog, click New to get
the envelope, then close the blank dummy document.
But then it doesn't offer a choice of envelope sizes, and puts the text in
odd places so you have to fiddle with the margins to get the recipient
and return address in normal places. Then, when you go to Preview or
Printer Settings, it has forgotten all about the fact that it's doing
an envelope and now tries to print in the middle of a sheet of paper.
In theory you could fix this by setting the paper size to the size of
the envelope -- except that OOo doesn't actually have any envelope
sizes in its paper list.
Okay, let's try abiword instead.
Abiword has a nice selection of paper sizes, including some common
envelope sizes. Choosing A10 envelope and Landscape mode lets me
compose a nice looking envelope. But then when I go to Print Preview,
it turns out it wants to print it in portrait mode, with the addresses
going across the short dimension of the envelope. The Print dialog
offers a Paper tab which includes an Orientation dropdown, but
changing that from Landscape to Portrait makes no difference: the
preview still shows the addresses disappearing off the short dimension
of the envelope.
I suppose I should try kword. But it depends on nine other packages,
and I was tired of fighting. I gave up and wrote the address
out by hand.
The next day, though, I went back to gLabels, poked around for a bit
and found out about "Template Designer" (in the File menu).
It's almost there ... it's easy to set up custom sizes, but it
prints them in the middle of a US-letter page, rather than lined
up against the edge of the printer's feeder. I'm dubious that you
could feed real envelopes this way with any reliability. Still,
it's a lot closer than the word processing programs could get.
Tags: linux, printing
[
18:36 Mar 18, 2006
More linux |
permalink to this entry |
]
Wed, 15 Mar 2006
I updated my Ubuntu "dapper" yesterday. When I booted this morning,
I couldn't get to any outside sites: no DNS. A quick look at
/etc/resolv.conf revealed that it was empty -- my normal
static nameservers were missing -- except for a comment
indicating that the file is prone to be overwritten at any moment
by a program called resolvconf.
man resolvconf provided no enlightenment. Clearly it's intended
to work with packages such as PPP which get dynamic network
information, but that offered no clue as to why it should be operating
on a system with a static IP address and static nameservers.
The closest Ubuntu bug I found was bug
31057. The Ubuntu developer commenting in the bug asserts that
resolvconf is indeed the program at fault. The bug reporter
disputes this, saying that resolvconf isn't even installed on the
machine. So the cause is still a mystery.
After editing /etc/resolv.conf to restore my nameservers,
I uninstalled resolvconf along with some other packages that I clearly
don't need on this machine, hoping to prevent the problem from
happening again:
aptitude purge resolvconf ppp pppconfig pppoeconf ubuntu-standard wvdial
Meanwhile, I did some reading.
It turns out that resolvconf depends on
an undocumented bit of information added to the
/etc/network/interfaces file: lines like
dns-nameservers 192.168.1.1 192.168.1.2
This is not documented under
man interfaces, nor under
man
resolvconf; it turns out that you're supposed to find out about it
from
/usr/share/doc/resolvconf/README.gz. But of course, since
it isn't a standard part of
/etc/network/interfaces, no
automatic configuration programs will add the DNS lines there for you.
Hope you read the README!
resolvconf isn't inherently a bad idea, actually; it's supposed to
replace any old "up" commands in interfaces that copy
resolv.conf files into place.
Having all the information in the interfaces file would be a better
solution, if it were documented and supported.
Meanwhile, be careful about resolvconf, which you may have even
if you never intentionally installed it.
This
thread on a Debian list discusses the problem briefly, and this
reply quotes the relevant parts of the resolvconf README (in case
you're curious but have already removed resolvconf in a fit of pique).
Tags: linux, ubuntu, networking
[
15:22 Mar 15, 2006
More linux |
permalink to this entry |
]
Sun, 05 Feb 2006
I've been unable to persuade Ubuntu's "Dapper Drake" to log me in
automatically via gdm. My desktop background flashes briefly during
the login process, then vanishes; it appears that it actually is
logging me in briefly, then immediately logging me out and presenting
me with the normal gdm login screen.
I never liked gdm much anyway. It's heavyweight and it interferes with
seeing shutdown messages. The only reason I was using it on Hoary
was for autologin, and with that reason gone, I uninstalled it.
But that presented an interesting problem: it turns out that
Dapper doesn't actually allow users to run X. The error message is:
Unable to open wrapper config file /etc/X11/Xwrapper.config
X: user not authorized to run the X server, aborting.
The fix turned out to be trivial: make the X server setuid
and setgid (
chmod 6755 /usr/bin/X). Mode 4755 (setuid only,
no setgid) also works, but other Debian systems seem to set both bits.
The next question was how to implement auto-login without gdm or kdm.
I had already found a useful
Linux Gazette
article on the subject. The gist is that you compile a short C
program that calls login with your username, then you call getty with
your new program as the "alternate login program".
Now, I have nothing against C, but wouldn't a script be easier?
It turns out a script works too. Replace the tty1 line in
/etc/inittab with a line like:
1:2345:respawn:/sbin/getty -n -l /usr/bin/myloginscript 38400 tty1
where the script in question looks like:
#! /bin/sh
/bin/login -f username
At first, I tried an even simpler approach:
1:2345:respawn:/bin/login -f username
That logged me in, but I ended up on /dev/console instead of
/dev/tty1, with a message that I had no access to the tty and
therefore wouldn't be able to use job control. X didn't work
either. The getty is needed in order to switch control
from /dev/console to a real virtual terminal like /dev/tty1.
Of course, running X automatically once you're logged in is trivial,
just a line or three added to .login or .profile (see the Linux
Gazette article referenced above for an example).
It works great, it's very fast, plus I can watch shutdown messages
again. Nice!
Update 9/9/2006: the Linux Gazette article isn't accessible any more
(apparently Linux Journal bought them and made the old articles
inaccessible). But here's an example of what I do in my .login
on Dapper -- this is for tcsh; bash users need the equivalent if/then/fi syntax (see the sketch below):
if ($tty == tty1) then
startx
endif
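For bash users, the equivalent in ~/.profile would be something like this (untested, since I use tcsh; note that bash's tty command returns the full /dev path):
if [ "`tty`" = "/dev/tty1" ]; then
startx
fi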
Tags: linux, ubuntu, boot
[
11:53 Feb 05, 2006
More linux |
permalink to this entry |
]
Sat, 04 Feb 2006
I've been meaning to upgrade my desktop machine from Ubuntu's "Hoary
Hedgehog" release for some time -- most notably so that I can get the
various packages needed to run GTK 2.8, which is now required to build
the most current GIMP.
Although I'm having good success with "Breezy Badger", the stable
Ubuntu successor to "Hoary", on my laptop, Breezy is already
borderline as far as GIMP requirements, and that can only get worse.
Since I do more development on the desktop, I figured
it was worth trying one of the pre-released versions of Ubuntu's
next release, "Dapper Drake".
Wins over hoary and breezy: it handles my multiple flash card reader
automatically (on hoary and breezy I had to hack the udev
configuration file to make it work).
I've had a few glitches, starting with the first auto-update wanting
to install a bunch of packages that didn't actually exist on the
server. This persisted for about a week, during which I got a list
of 404s and "packages held back" warnings every time I updated or
installed anything. It didn't seem to hurt anything -- just a minor
irritant -- and it did eventually get fixed. That's life with an
unstable distribution.
Dapper has the same problem that hoary and
breezy have with hald polling the hard disk every few seconds
(bug 27323).
In addition, hald seems to spawn a rather large number of
hald-addon-storage processes
(probably to handle the built-in multi flash card reader).
(Uncommenting the storage.automount_enabled_hint in
/etc/hal/fdi/policy/preferences.fdi didn't help.)
Killing hald (and nuking /usr/sbin/hald
so it won't restart) solves both these problems, but it also
stops hotplugged USB devices from working: apparently Dapper
has switched to using hal instead of hotplug for USB. Ouch!
In any case, hald came back on a dist-upgrade so it looks like
I'll have to find a more creative solution.
The printing packages have problems.
I tried to add my printer via the CUPS web interface,
but apparently it didn't install any printer drivers by default, and
it's not at all obvious where to get them. The drivers are there, in
/usr/share/cups/model/gutenprint/5.0/en, but dapper's cups apparently
isn't looking there. I eventually got around the problem by
uncompressing the ppd file and pointing CUPS directly at
/usr/share/cups/model/gutenprint/5.0/en/stp-escp2-c86.5.0.ppd.
(Filed bug
30178.)
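For anyone who prefers the command line to the CUPS web interface, the workaround boils down to something like this (the queue name and device URI here are only examples; adjust them for your own printer):
gunzip /usr/share/cups/model/gutenprint/5.0/en/stp-escp2-c86.5.0.ppd.gz
lpadmin -p stylus -E -v parallel:/dev/lp0 \
    -P /usr/share/cups/model/gutenprint/5.0/en/stp-escp2-c86.5.0.ppd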
Dapper's ImageMagick has a bug in the composite command:
basically, you can't combine two images at all. So I have to generate
web page thumbnails on another machine until that's fixed.
gdm refuses to set up my user for auto-login, and I hit an interesting
localization issue involving GIMP (I'll report on those issues separately).
Most other things work pretty well. Dapper has a decent set of
multimedia apps and codecs, and its kernel and udev setup seem to work
fine (it can't suspend my desktop machine, but neither can any other
distro, and I don't really need that anyway).
Except for the hald problem, Dapper looks like a very usable system.
Tags: linux, ubuntu
[
19:27 Feb 04, 2006
More linux |
permalink to this entry |
]
Wed, 04 Jan 2006
I installed the latest Ubuntu Linux, called "Breezy Badger", just
before leaving to visit family over the holidays. My previous Ubuntu
attempt on this machine had been rather unstable (probably not
Ubuntu's fault -- 2.6 kernels and this laptop don't get along very
well) but Ubuntu seems to have some very sharp kernel developers, so
I was curious to see whether there'd been progress.
Installation:
Didn't go well. I had most of the same problems I'd had installing
Hoary to this laptop (mostly due to the installer assuming that a
CDROM and network must remain connected throughout the install,
something that's impossible on a laptop where both of those functions
require sharing the single PCMCIA port). The Breezy installer has the
additional "feature" that it tends to hang if you change things like
the CDROM while the install is in progress, trashing everything and
forcing you to restart from the beginning. (Filed bug 20443.)
Networking:
But eventually I found a sequence that let me get a network-less
Breezy onto the laptop, and I'm happy to report that Breezy's built-in
networking tools were able to add networking after the first boot
(something that hadn't worked in Hoary). Well, admittedly I did have
to add a script, /etc/hotplug/pci/3c59x, to call ifup when my cardbus
network card is plugged in; but every other distro needs that too, and
Breezy is the first 2.6-based distro which correctly calls the script
every time.
Suspend:
Once up and running, Breezy shows impressive laptop savvy.
Like Hoary, it can suspend either to disk or to RAM; unlike Hoary, it
can do this without my needing to hack any config files except to
uncomment the line enabling suspend to RAM in /etc/default/acpi-support.
It does print various error messages on stdout when it resumes from
sleep or hibernate, but that's a minor issue.
Not only that, but it restores both network and usb when resuming from
suspend (on hoary I had to hack some of the suspend scripts to make
that work).
(Kernel flakiness:
Well, mostly it suspends fine. Unplugging a usb mouse at the wrong
time still causes a kernel hang.
That's a 2.6 bug, not an Ubuntu-specific problem.
And the system also tends to hang and need to be power cycled about
one time out of five when exiting X; perhaps it's an Xorg bug.)
Ironically, my "safe" partition on this laptop (a much-modified
Debian sarge) mysteriously stopped seeing PCMCIA on the first
day away from home, so I ended up using Breezy for the whole trip
and giving it a good workout.
Hal:
One problem Breezy shares with Hoary is that every few seconds, the
hald daemon makes the hard drive beep and whir. Unlike Hoary, which
had an easy
solution, Breezy ignores the storage_media_check_enabled and
storage_automount_enabled hints. The only way I found to disable
the beeping was to kill hald entirely by renaming /usr/sbin/hald
(it's not called from /etc/init.d, and I never did find out who was
starting it so I could disable it). Removing hald seems to have caused
no ill effects; at least, hotplug of pcmcia and usb still works, as do
udev rules. (Filed bug 21238.)
Udev:
Oh, about those udev rules! Regular readers may recall that I had some
trouble with Hoary regarding
udev
choking on multiple flash card readers which I solved
on my desktop machine with a udev rule that renames the four fixed,
always present devices. But with a laptop, I don't have fixed devices;
I wanted a setup that would work regardless of what I plugged in.
That required a new udev rule. Here's the rule that worked for me:
in /etc/udev/permissions.rules, change
BUS=="scsi", KERNEL=="sd[a-z]*", PROGRAM="/etc/udev/scripts/removable.sh %k 'usb ieee1394'", RESULT="1", MODE="0640", GROUP="plugdev"
to
BUS=="scsi", KERNEL=="sd[a-z]*", NAME{all_partitions}="%k", MODE="0640", GROUP="plugdev"
Note that this means that whatever scripts/removable.sh does, it's not
happening any more. That doesn't seem to have caused any problem,
though. (Filed
bug 21662
on that problem.)
Conclusion:
Overall, Breezy is quite impressive and required very little tweaking
before it was usable. It was my primary distro for two weeks while
travelling; I may switch to it on the desktop once I find a workaround
for bug
352358 in GTK 2.8 (which has been fixed in gnome cvs, but
that doesn't make it any less maddening when using the buggy version).
Tags: linux, ubuntu, laptop
[
22:43 Jan 04, 2006
More linux |
permalink to this entry |
]
Tue, 13 Dec 2005
I spent an entertaining morning reading the thread on
Gnome-usability that Linus Torvalds started with his posting beginning,
"I
personally just encourage people to switch to KDE."
The whole thread is interesting reading for anyone who's been
frustrated with the lack of usability and missing features in
recent Gnome user interface redesigns.
Linus continues:
This "users are idiots, and are confused by functionality" mentality of
Gnome is a disease. If you think your users are idiots, only idiots will
use it. I don't use Gnome, because in striving to be simple, it has long
since reached the point where it simply doesn't do what I need it to do.
Naturally, various Gnome people speak up to defend Gnome.
Linus clarifies his position in a
followup posting:
No. I've talked to people, and often your "fixes" are actually removing
capabilities that you had, because they were "too confusing to the
user".
That's _not_ like any other open source project I know about. Gnome seems
to be developed by interface nazis, where consistently the excuse for not
doign something is not "it's too complicated to do", but "it would confuse
users".
He gives several examples of functionality the Gnome team has removed
which every competing project still allows, such as binding
mouse buttons to functions in the window manager, and
typing a filename in the file selection dialog.
A few messages later, Jeff
Waugh defends the file selection dialog, calling it "top stuff".
Carl
Worth responds to that with a nice summary of long-standing and
long-ignored bugs on file selection dialog usability. Several
other posters follow up with additional issues not yet fixed.
Somewhere along the line, the spectre of Mac OS is raised (of course).
Michael Bernstein posts the obligatory "I
gave my mom Mac OS X and she loved it" message, adding the
novel but unlikely suggestion that the cure for Linux usability issues
is to draft non-Linux programmers to develop the UI.
(No one points out in the Mac OS subthread that Mac OS X itself includes
quite a few of the options which have been dropped by the Gnome
usability team, such as readline key bindings in every text field, or
a typable text field in the file selection dialog, yet no one seems to
be complaining about OS X's lack of usability.)
The thread eventually peters out without a conclusion. It's fairly
clear that neither side is convinced by the other. Gnome has chosen
their path, and "dumbing down" the UI and removing features needed by
users such as Linus is part of that choice. Too bad for the users,
though: especially users for whom switching entirely to KDE/Qt apps is
not a viable choice.
Tags: linux, gnome
[
18:40 Dec 13, 2005
More linux |
permalink to this entry |
]
Fri, 18 Nov 2005
I found myself in a situation where a package was mostly installed,
but it was missing some files, notably the startup file in
/etc/init.d/
packagename. No problem, right? Just reinstall
the package.
Well, no. dpkg -i packagename spun and looked busy for a
while, but the missing file didn't appear. Removing the package first
with dpkg -r packagename, then reinstalling, didn't help either,
nor did dpkg -i --force-newconfig packagename.
(I didn't try dpkg -r --purge packagename because I already
had invested some time into setting up the files in the package
and was hoping to avoid losing that work.)
Of course, I could have extracted the .deb somewhere else and pulled
the single init.d file out of it; but I was worried that I might be
missing other files, and end up with a flaky package.
Well, as far as I can tell, there really isn't any way to do this
"right" in Debian: there's no way to tell dpkg "Really install this
package, every file in it, even if you think maybe some of the files
already got installed before", or "Install any file in this package
which doesn't currently exist on disk." It's amazing (I'm pretty
sure RPM offered both of these options) but apparently this isn't
something dpkg allows.
I found a way to trick it, though:
rm /var/lib/dpkg/info/packagename.*
dpkg -i packagename
You get a lovely warning that
dpkg: serious warning: files list file for package `packagename'
missing, assuming package has no files currently installed.
and then dpkg finally goes ahead and reinstalls all the files.
Whew!
Update: Aha! It is possible after all. dpkg -i --force-confmiss is
the option I wasn't seeing. Thanks, Yosh!
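In other words, something like this (the .deb filename is just a placeholder):
dpkg -i --force-confmiss /var/cache/apt/archives/packagename_1.0-1_i386.deb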
Tags: linux, debian
[
19:01 Nov 18, 2005
More linux |
permalink to this entry |
]
Mon, 07 Nov 2005
Update: Some of this has changed; see my newer entry,
Update
on writing udev rules for flash card readers.
Dave had one of those nifty front-panel multiple flash card readers
sitting on a shelf, so I borrowed it. It's USB based, fits in a
CD drive bay, and has slots for all the common types of flash memory,
as well as a generic USB socket.
With the device installed, I booted into my usual Ubuntu (hoary)
partition, inserted an SD card and checked dmesg.
Nothing! The four logical units of the device had been seen at boot
time, but nothing new happened when I inserted a card.
I tried mounting /dev/sda1, /dev/sdb1, /dev/sdc1, /dev/sdd1, and
/dev/sde1 anyway, but got "No such device" each time.
Dave muttered darkly about udev and hal and said I should try
it under an older Debian with a normal /dev.
I rebooted my old sid partition, with a kernel I built myself.
I needed a kernel with "Probe all LUNs on each SCSI device", of course.
I still got no messages or hotplug events when inserting the card,
but /dev/sdd1 mounted the SD card.
(For anyone reading this who's not familiar with Linux' handling
of USB storage devices, sd in /dev/sdd1 stands for "SCSI disk" and has
nothing to do with the fact that I was using a "secure digital" media card.
Any USB disk or flash card is supposed to show up under
/dev/sdsomething and the main trick is figuring out the
something. Which is part of what udev and hal are supposed
to help with.)
Then I discovered
that doing an fdisk -l /dev/hdd gave the right answer (one
partition) for the SD card. And as soon as I did that, the /dev/sdd1
device appeared and I was able to mount it normally.
Apparently, when udev sees the logical units at boot time,
with no cards inserted, it decides that there's a /dev/sdd, but it has
no partitions on it so there's no such device as /dev/sdd1. Since
inserting a card later doesn't generate a hotplug event, udev never
re-evaluates this, unless somehow forced to (apparently running fdisk
forces it, though I'm not sure why). Dave was right: udev/hal are the
culprit here, and the kernel was fine.
A helpful person on #ubuntu pointed me to this
tutorial
on writing rules for udev. It mentions the problem with multi USB
card readers not getting additional events when cards are plugged in,
and suggests modifying the NAME key in the rules (which seem
to be in /etc/udev/rules.d/udev.rules) to:
BUS="usb", SYSFS{product}="USB 2.0 Storage Device", NAME{all_partitions}="usbhd"
Elsewhere in the document, it suggests getting that SYSFS{product}
string by running a command like
udevinfo -a -p /sys/block/sdd
Unfortunately, that seems to be completely ignored. udevinfo told me
the string was "CardReader SD ", but plugging that in to
udev.rules did not create any /dev/usbhd* devices.
It also seemed clear that udev is using BUS="scsi" rather than
BUS="usb" for these devices, based on the device names that are
being created (sd* rather than ub*). But making that change didn't
help.
Eventually I found a combination that worked. Ubuntu's current rules
for usb-storage devices are:
BUS="scsi", KERNEL="sd[a-z]*", PROGRAM="/etc/udev/scripts/removable.sh %k", RESULT="1", NAME="%k", MODE="0640", GROUP="plugdev"
BUS="usb", KERNEL="ub[a-z]*", NAME="%k", MODE="0640", GROUP="plugdev"
(I don't know what devices create the ub* devices. It's nothing I've
used so far.)
I changed the "sd[a-z]*" to "sd[e-z]*", so that it
wouldn't apply to the four devices grabbed by the multicard reader.
Then I added these four lines:
BUS="scsi", KERNEL="sda*", PROGRAM="/etc/udev/scripts/removable.sh %k", RESULT="1", NAME{all_partitions}="cfcard", MODE="0640", GROUP="plugdev"
BUS="scsi", KERNEL="sdb*", PROGRAM="/etc/udev/scripts/removable.sh %k", RESULT="1", NAME{all_partitions}="smcard", MODE="0640", GROUP="plugdev"
BUS="scsi", KERNEL="sdc*", PROGRAM="/etc/udev/scripts/removable.sh %k", RESULT="1", NAME{all_partitions}="mscard", MODE="0640", GROUP="plugdev"
BUS="scsi", KERNEL="sdd*", PROGRAM="/etc/udev/scripts/removable.sh %k", RESULT="1", NAME{all_partitions}="sdcard", MODE="0640", GROUP="plugdev"
That worked. Now udev creates /dev/sdcard[1-15] as well as
/dev/sdcard (and likewise for the other three flash types),
and I can make a normal /etc/fstab entry:
/dev/sdcard1 /sd auto rw,user,noauto 0 0
Now as a user I can say mount /sd without needing to su to root
or do any extra fiddling. Hurrah!
Tags: linux, udev, hal
[
16:49 Nov 07, 2005
More linux |
permalink to this entry |
]
Mon, 26 Sep 2005
I've long wondered why people like the "alternate screen" misfeature
which has become mandatory in so many Linux terminals. That's the
one where you want to find out the options to ls, so you do
man
ls, get to the option you were curious about, hit q to get out
of man so that you can run ls with the right option -- and the man
page disappears, clearing the screen so you can't see the
documentation you were just trying to read! What's up with that?
(Sigh ... just another addition to the list of clever innovations that
make Linux terminal programs harder to use than any other system's ...
like the page up key that doesn't actually page up.)
Anyway, a long time ago I worked out how to get around this
irritating misfeature (referred to as "rmcup" on newer terminfo
systems; the workaround on older termcap systems was called
"tite inhibit").
It's easy in xterm-based programs, but harder in more "modern"
terminal programs that don't make it configurable.
I guess I never got around to writing it up publically.
Tonight someone asked about it on Linuxchix Techtalk, so I
resurrected the old terminfo entry I wrote for use with
gnome-terminal (which I'm not actually using because rxvt
doesn't need it), and wrote up a sketchy page on it:
Exorcising the
Evil Alternate Screen.
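For xterm itself, by the way, the cure is just a resource setting; something like this in ~/.Xdefaults (then xrdb -merge ~/.Xdefaults and start a new xterm):
XTerm*titeInhibit: true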
Tags: linux, terminals
[
20:18 Sep 26, 2005
More linux |
permalink to this entry |
]
Sun, 14 Aug 2005
I bet I'm not the only one who uses Ubuntu (Hoary Hedgehog) and didn't
realize that it doesn't automatically put the security sources in
/etc/apt/sources.list, so apt-get and aptitude don't pick up
any of the security updates without extra help.
After about a month with no security updates on any ubuntu machines
(during which time I know there were security alerts in Debian for
packages I use), I finally tracked down the answer.
It turns out that if you use synaptic, click on "Mark All Upgrades",
then click on Apply, synaptic will pull in security updates.
However, if you use the "Ubuntu Upgrade Manager" in the
System->Administration menu, or if you use commands
like apt-get -f dist-upgrade or aptitude -f dist-upgrade,
then the sources which synaptic wrote into sources.list are not
sufficient to get the security updates.
(Where synaptic keeps its extra sources, I still don't know.)
When I asked about this on #ubuntu, I was pointed to a page on
the Ubuntu wiki which walks you through selecting sources in synaptic.
Unfortunately, the screenshots on the wiki show lots of sources that
none of my Ubuntu machines show, and the wiki doesn't give you the
sources.list lines or tell you what to do if synaptic doesn't
automagically show the security sources.
The solution: to edit /etc/apt/sources.list and make sure the
following lines are there (which some of the people on the IRC channel
were kind enough to paste for me):
## All officially supported packages, including security- and other updates
deb http://archive.ubuntu.com/ubuntu hoary main restricted
deb http://security.ubuntu.com/ubuntu hoary-security main restricted
deb http://archive.ubuntu.com/ubuntu hoary-updates main restricted
In addition, if you use "universe" and "multiverse", you probably also
want these lines:
deb http://archive.ubuntu.com/ubuntu hoary universe multiverse
deb http://security.ubuntu.com/ubuntu hoary-security universe multiverse
deb http://archive.ubuntu.com/ubuntu hoary-updates universe multiverse
Tags: linux, ubuntu, security
[
22:49 Aug 14, 2005
More linux |
permalink to this entry |
]
Thu, 11 Aug 2005
There's really not much to say about Linuxworld this year. It was
smaller than last year (they moved it across the street to the
"little" Moscone hall) and the mood seemed a bit subdued.
SWAG was way down: to get anything remotely cool you generally had
to register to watch a presentation that gave you a chance to get
something to wear that would enter you for a chance to win something
cool later. Or similar indirections. You had to be pretty
desperate. But maybe people were, since I saw lines at some of the
booths.
I did my annual sweep of the big booths to see
who was running Linux on their show machines. This year was the first
time that all the major booths used predominantly Linux (except on
machines running fullscreen presentation software, where it's
impossible to tell). It was a huge change from past shows --
I stopped keeping tabs after a while.
I saw only one or two confirmed Windows machines each
at most of the big booths, like Intel, AMD, IBM, Sun, and even HP.
They seemed fairly evenly divided between SuSE and Redhat.
At the AMD booth, lots of machines sported cardboard signs saying
"Powered by Redhat" or "Powered by SuSE". One of
the "Powered by Redhat" machines clearly had a Start menu,
so I had to ask. The AMD rep gave me a song and dance about
virtualization technologies, pointing out that although the machine
was running Windows, it displayed both Redhat and SuSE windows which
he said were running on the same machine. Okay, that's a perfectly
good reason to be running Windows at a Linux convention. No points off
there. I suspect most of the booths showing Windows had similar excuses.
"Virtualization is the wave of the future! Everybody here is
displaying virtualization technologies," the AMD rep told me.
Indeed, virtualization was everywhere. I don't know that I'm convinced
it's the wave of the future, but there was no question that it was the
wave of the present at this year's Linuxworld.
Sweeping the hall, I passed by the Adobe booth, where
someone was giving a presentation to an audience of maybe ten people.
The projector showed a window which showed ... nothing. A blank window
border with nothing inside.
"Now, it's connecting to San Jose", explained the presenter with
apparent pride, "to get permission to display the document."
I kept walking. It hadn't finished connecting yet by the time I
was out of earshot. Perhaps the audience was somehow persuaded by this
demo to buy Adobe software. I guess you never know what people will like.
A bit past Adobe was the weirdest booth of the exhibit hall:
SolovatSoft, offering offshore software development at rates starting
at $18/hour. Honest, this was an actual booth at Linuxworld. I should
have taken my camera.
Gone were most of the nifty embedded Linux displays of yesteryear. I saw
only two: one (Applieddata.net, I think) which I've seen there
before, showing an array of fun-looking custom embedded platforms of
all sizes, and another showing Linux on various cellphones and similar
consumer devices. Only one laptop maker (Emperor) made it there, and
none of the smaller-than-laptop manufacturers -- I was hoping Nokia,
Sharp, Psion or some other maker of nifty Linux PDAs might be there.
The "Dot Org Pavilion", the place where free software groups like
Debian, Mozilla, the FSF, and the EFF have their booths, was on a
completely separate floor, and would have been easy to miss if you
didn't look at the maps in the convention guide. But it wasn't all
bad: someone on a LUG mailing list pointed out that this put them in a
nice quiet area away from the raucous advertising of the big
commercial booths in the main hall, so you could actually have a
conversation with the booth folks. Also, the dot-orgs got a nice view
out the second-floor windows compared to the cavernous indoor
commercial hall.
I only went to one keynote, "The Explosive Growth of Linux and Open
Source: What Does It All Mean?" The description made it look like a
panel discussion, but it was really just five prepared speeches: three
suits repeating buzzwords (Dave and I amused ourselves counting the
uses of the word "exciting", and with Toastmasters reflexes I couldn't
help counting the "ah"s) and two more interesting talks (well, okay,
Eben Moglen was also wearing a suit but at least he didn't spend
his whole talk telling us about the exciting opportunities ahead for
company X).
I would have liked to have heard Mike Shaver's keynote on web
technologies, but it wasn't worth going back to San Francisco for a
second day just for that.
In the end, the real highlight of the day was hooking up with Sonja at
the Novell/SuSE booth for a nice lunch. Hooray for conferences that
give you an excuse to meet friends from far away! Catching up with
some of the Mozilla crowd was good, too. That made the trip worth it
even if the exhibit hall didn't offer much.
Tags: linux, conferences, linuxworld
[
22:25 Aug 11, 2005
More conferences |
permalink to this entry |
]
Mon, 01 Aug 2005
Another in the series of "I keep forgetting how to do this and
have to figure it out again each time, so this time I'm going to
write it down!"
Enabling remote X in a distro that disables it by default:
Of course, you need xhost. For testing, xhost +
enables access from any machine; once everything is working, you
may want to be selective, xhost hostname for the hosts from
which you're likely to want to connect.
If you log in to the console and start X, check
/etc/X11/xinit/xserverrc and see if it starts X with the
-nolisten flag. This is usually the problem, at least on
Debian derivatives: remove the -nolisten tcp.
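On a stock Debian-ish system the edited xserverrc ends up looking something like this (the exact options vary by release; the important part is just that -nolisten tcp is gone):
#!/bin/sh
exec /usr/bin/X11/X -dpi 100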
If you log in using gdm, gdmconfig has an option in the
Security tab: "Always disallow TCP connections to X server
(disables all remote connections)". Un-checking this solves the
problem, but logging out won't be enough to see the change.
You must restart X; Ctrl-Alt-Backspace will do that.
Update: If you use kdm, the configuration to change is in
/etc/kde3/kdm/kdmrc
Tags: linux, X11, networking
[
13:52 Aug 01, 2005
More linux |
permalink to this entry |
]
Sat, 30 Jul 2005
I got a new, large, and most important,
quiet disk for my
laptop.
The first Linux distro I installed on it was Ubuntu. Since quiet was
my biggest incentive for buying this disk (the old IBM disk was so loud
that I was embarrassed to run the laptop in meetings), as soon as the
install was finished I carried the laptop into a quiet room to listen
to the disk,
Turns out it was making a faint beep-beep noise every second or
two, plus some clicking in between.
Another, non-Linux, operating system installed on the same disk does
not make these beeping or clicking noises. It was clearly something
Ubuntu was doing.
After a long series of ps and kill, I finally narrowed
the problem down to hald. HAL is polling my disk, once per
second or so, in a way that makes it beep and click.
(HAL, if you're not familiar with it, is the Hardware Access Layer
which works hand in hand with the kernel service udev to monitor
hardware as it comes and goes. No one seems to know where the
dividing line is between udev and hal, or between the daemons
udevd and hald. Most systems which enable one, enable both.)
I floated down to the control room to dismantle HAL, humming
"Daisy".
But it turned out I didn't need to kill HAL entirely. The polling
apparently comes from HAL's attempt to query the CDROM to see if
anything has been inserted. (Even if there is no CDROM connected
to the machine. Go figure!)
The solution is to edit /etc/hal/hald.conf, and change
true to false under <storage_media_check_enabled>
and <storage_automount_enabled_hint>. This changes hald from a
"blacklist" policy, where everything is polled unless you blacklist
it, to a "whitelist" policy, where nothing is polled unless you
whitelist it. Voila! No more polling the disk, and no more
beepy-clicky noises. I suspect my drive will last longer and eat
less battery power, too.
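In other words, the two relevant lines in /etc/hal/hald.conf end up reading:
<storage_media_check_enabled>false</storage_media_check_enabled>
<storage_automount_enabled_hint>false</storage_automount_enabled_hint>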
Tags: linux, hal
[
20:00 Jul 30, 2005
More linux |
permalink to this entry |
]
Fri, 22 Jul 2005
This has bitten me too many times, and I always forget how to
recover. This time I'm saving it here for posterity.
The scene: you wanted to check something, perhaps a window manager
theme or a font setting or something, and innocently ran some gnome
app even though you aren't running a gnome desktop.
Immediately thereafter, you notice that something has changed
disastrously in your gtk apps, even apps that were already running
and working fine. Maybe it's your keyboard theme, or maybe fonts or
colors.
Now you're screwed: your previous configuration files like
~/.gtkrc-2.0 don't matter any more, because gnome has taken over
and Knows What's Best For You.
How do you fix it?
Don't bother looking for apps that start with gnome- or
gtk-. That would be too obvious.
You might think that gnome-control-center would have something
related ... but mwa-ha-ha, you'd be wrong!
The solution, it turns out, is gconf-editor, an app obviously
modeled after regedit from everyone's favorite user interface
designer, Microsoft.
In the case of key theme, you'll find it in
desktop->gnome->interface->gtk_key_theme.
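If you'd rather skip the clicking, gconftool-2 can set the same key from the command line (the value here, Emacs, is just an example; use whatever theme you had before):
gconftool-2 --type string --set /desktop/gnome/interface/gtk_key_theme Emacs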
Tags: linux, gnome
[
17:39 Jul 22, 2005
More linux |
permalink to this entry |
]
Mon, 27 Jun 2005
I needed to use OpenOffice today, and got a nasty surprise: it no
longer remembered any of my settings, key bindings, or any of those
painfully-installed templates I'm required to use for this project.
It turns the version of OpenOffice.org I pulled in with the last
Sid upgrade, 1.1.4, has a nifty new feature: it has no knowledge
of the ~/.openoffice/1.1.1 directory its predecessor used
for its configuration, and instead wants to use
~/.openoffice/1.1.0. Does it notice that there's an existing
config directory there, and offer to migrate it? No! Instead, it
silently makes a new 1.1.0 directory, with all-default
settings, and uses that. The effect to the user is that all your
settings and templates have suddenly disappeared for no obvious
reason.
The fact that the new version uses a seemingly older version number
for its configurations is a nice twist. Perhaps they were worried
that otherwise some enterprising user might figure out what had
happened, and actually recover their settings, rather than wasting
hours painfully resetting them one by one.
Aside: it's impressively hard to read OOo's settings to figure
out which ones might be yours. For example, here's a sample
line which binds Ctrl-H to delete-backward-char:
<accel:item accel:code="KEY_H" accel:mod1="true" xlink:href="slot:20926"/>
(You can still tell it involves the "h" key somehow. But I bet they're
working on purging that shameful bit of human readable information.)
That adds a touch of extra spice to the challenge of figuring out
which set of files is the right one.
Anyway, if this happens to you, move the 1.1.0
directory somewhere else, then rename or cp -a 1.1.1
to 1.1.0. That seems to bring back the lost settings.
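Spelled out as shell commands, that's roughly:
cd ~/.openoffice
mv 1.1.0 1.1.0.new-defaults
cp -a 1.1.1 1.1.0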
Tags: linux, open office
[
22:22 Jun 27, 2005
More linux |
permalink to this entry |
]
Fri, 03 Jun 2005
I've been experimenting with Ubuntu's second release, "Hoary
Hedgehog" off and on since just before it was released.
Overall, I'm very impressed. It's quite usable on a desktop machine;
but more important, I'm blown away by the fact that Ubuntu's kernel
team has made a 2.6 acpi kernel that actually works on my aging but
still beloved little Vaio SR17 laptop. It can suspend to RAM (if I
uncomment ACPI_SLEEP in /etc/default/acpi-support), it can
suspend to disk, it gets power button events (which are easily
customizable: by default it shuts the machine down, but if I replace
powerbtn.sh with a single line calling sleep.sh, it
suspends), it can read the CPU temperature. Very cool.
One thing didn't work: USB stopped working when resuming after a
suspend to RAM. It turned out this was a hotplug problem, not a kernel
problem: the solution was to add calls to /etc/init.d/hotplug
stop and /etc/init.d/hotplug start in the
/etc/acpi/sleep.sh script.
Problem solved (except now resuming takes forever, as does
booting; I need to tune that hotplug startup script and get rid of
whatever is taking so long).
Sonypi (the jogdial driver) also works. It isn't automatically loaded
(I've added it to /etc/modules), and it disables the power button (so
much for changing the script to call sleep.sh), a minor
annoyance. But when loaded, it automatically creates /dev/sonypi, so I
don't have to play the usual guessing game about which minor number it
wanted this time.
Oh, did I mention that the Hoary live CD also works on the Vaio?
It's the first live linux CD which has ever worked on this machine
(all the others, including rescue disks like the Bootable Business
Card and SuperRescue, have problems with the Sony PCMCIA-emulating-IDE
CD drive). It's too slow to use for real work, but the fact that it
works at all is amazing.
I have to balance this by saying that Ubuntu's not perfect.
The installer, which is apparently the Debian Sarge installer
dumbed down to reduce the number of choices, is inconsistent,
difficult, and can't deal with a networkless install (which, on
a laptop which can't have a CD drive and networking at the same time
because they both use the single PCMCIA slot, makes installation quite
tricky). The only way I found was to boot into expert mode, skip the
network installation step, then, after the system was up and running
(and I'd several times dismissed irritating warnings about how it
couldn't find the network, therefore "some things" in gnome wouldn't
work properly, and did I want to log in anyway?) I manually edited
/etc/network/interfaces to configure my card (none of Ubuntu's
built-in hardware or network configuration tools would let me
configure my vanilla 3Com card; presumably they depend on something
that would have been done at install time if I'd been allowed to
configure networking then). (Bug 2835.)
About that expert mode: I needed that even for the desktop,
because hoary's normal installer doesn't offer an option for
a static IP address. But on both desktop and laptop this causes a
problem. You see, hoary's normal mode of operation is to add the
first-created user to the sudoers list, and then not create a root
account at all. All of their system administration tools depend on the
user being in the sudoers file. Fine. But someone at ubuntu apparently
decided that anyone installing in expert mode probably wants a root
account (no argument so far) and therefore doesn't need to be in the
sudoers file. Which means that after the install, none of the admin
tools work; they just pop up variants on a permission denied dialog.
The solution is to use visudo to add yourself to
/etc/sudoers. (Bugs 7636 and
9832.)
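The line to add with visudo looks something like this (substitute your own username; it grants the blanket sudo access the admin tools expect):
yourusername   ALL=(ALL) ALL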
Expert mode also has some other bugs, like prompting over and over for
additional kernel modules (bug 5999).
Okay, so nothing's perfect. I'm not very impressed with Hoary's
installer, though most of its problems are inherited from Sarge.
But once it's on the machine, Hoary works great. It's a modern
Debian-based Linux that gets security upgrades (something Debian
hasn't been able to do, though they keep making noises about finally
releasing Sarge). And there's that amazing kernel. Now that I have the
hotplug-on-resume problem fixed, I'm going to try using it as the
primary OS on the laptop for a while, and see how it goes.
Tags: linux, ubuntu, laptop, vaio
[
17:29 Jun 03, 2005
More linux |
permalink to this entry |
]
Wed, 01 Jun 2005
During Debian upgrades over the last few months, apparently my
system's alsa and aumix scripts had a little private battle for
control of the mixer, and alsa won. The visible symptom was that my
volume was always at 0 when I started up.
I tried re-enabling the aumix script in /etc/init.d, which
had previously controlled my default volume, but it just said
"Saved ALSA mixer settings detected; aumix will not touch mixer."
The solution, in the end, was to remove
/var/lib/alsa/asound.state, set the volume, and
run alsactl store. Someone suggested that I use chattr -i
to make the asound.state file inviolable; it isn't on an ext2/3
filesystem, so that isn't a solution for me, but if my volume goes
wonky again at least I know where to look.
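As commands, the fix is roughly (use whatever mixer you like in place of alsamixer):
rm /var/lib/alsa/asound.state
alsamixer
alsactl store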
Tags: linux, alsa, audio
[
10:40 Jun 01, 2005
More linux |
permalink to this entry |
]
Sun, 08 May 2005
Updating the blog again after taking time off for various reasons,
including lack of time, homework, paying work, broken computer
motherboard and other hardware problems, illness, a hand injury,
and so on.
This afternoon, thanks to a very helpful Keir Mierle showing
up on #gimp, I finally got all the pieces sorted and I now have
a working tablet again. Hurrah!
I've put details of the setup that finally worked on my Linux and Wacom
page.
Tags: linux, imaging, gimp
[
19:08 May 08, 2005
More linux |
permalink to this entry |
]
Tue, 12 Apr 2005
A recent change to the Debian font system has caused some odd
font problems which Debian users might do well to know about.
The change has to do with the addition in /etc/fonts of a
directory conf.d containing symbolic links to scripts,
and the overwriting of some of the existing files in /etc/fonts.
The symptoms are varied and peculiar. On my sid system, on each
boot, the system would toggle between two different font resolutions.
I'd start xchat, and the fonts would be too teeny to read; so I'd call
up the preferences dialog, see the font was at 9, and increase it to
12, at which point I'd see the font I was used to seeing (though the
UI font in the tabs would still be teeny). Subsequent runs of xchat
would be fine (except for the still-teeny tab fonts). But upon
reboot, xchat would come up with the tab font correct and the channel
font HUGE. Prefs dialog again: it's still at 12 where I set it last
time, so now I reset it to 9, which makes it the right size.
Until the next reboot, when everything became teeny again and
I have to go back to 12.
The system resolution never changed, nor did the rendering of the
bitmapped fonts I use in emacs and terminal clients; only the
rendering of freetype scalable fonts changed with each reboot.
Back in the days when all fonts were bitmapped, I would have guessed
that the font system was alternating between 100dpi fonts and 72dpi fonts.
At a loss as to what might cause this strange behavior, I took a
peek into /etc/fonts/conf.d, which Dave had discovered a
few weeks ago when he updated his sarge system and all his bitmapped
fonts disappeared. Though my problem didn't sound remotely
similar to his: my bitmapped fonts were fine, it was the scalable
ones which were flaky.
Turns out the symlink I'd acquired in the update,
/etc/fonts/conf.d/30-debconf-no-bitmaps.conf, did indeed
point to a file called no-bitmaps.conf, just as Dave's had.
Just to see what would happen, I removed it, and made a new symlink,
30-debconf-yes-bitmaps.conf, pointing to yes-bitmaps.conf.
Voila! The size-toggling problem disappeared,
and, even better, bitmapped fonts like "clean" now show up in
gtkfontsel and in gtk font selection dialogs, which they never did
before. I can use all my fonts now!
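In shell terms the fix was roughly this (check where yes-bitmaps.conf actually lives on your system before making the link):
cd /etc/fonts/conf.d
rm 30-debconf-no-bitmaps.conf
ln -s ../yes-bitmaps.conf 30-debconf-yes-bitmaps.conf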
The moral is: if you've updated sarge or sid recently, and see
any weirdness at all in fonts, go to /etc/fonts/conf.d and
fiddle with the symlinks. Even if it doesn't seem directly related to
your problem.
As to why no-bitmaps.conf causes the system to toggle between
two different font scalings, that's still a mystery. The only
difference between no-bitmaps.conf and yes-bitmaps.conf
is that one rejects, and the other accepts, fonts that have "scalable"
set to false. Why that would change the scale at which fonts are
rendered is beyond me. I'll leave that up to someone who understands
the new debian font system. If any such person exists.
Update 5/24/2005: turns out you can change this on a per-user
basis too, with ~/.fonts.conf. man fonts.conf for details.
Tags: linux, debian, fonts
[
22:45 Apr 12, 2005
More linux |
permalink to this entry |
]
Wed, 02 Mar 2005
I like to keep my laptop's boot sequence lean and mean, so it
can boot very quickly. (Eventually I'll write about this in
more detail.) Recently I made some tweaks, and then went through
a couple of dist-upgrades (it's currently running Debian "sarge"),
and had some glitches. Some of what I learned was interesting
enough to be worth sharing.
First, apache stopped serving http://localhost/ -- not
important for most machines, but on the laptop it's nice to be
able to use a local web server when there's no network connected.
Further investigation revealed that this had nothing to do with
apache: it was localhost that wasn't working, for any port.
I thought perhaps my recent install of 2.4.29 was at fault, since
some configuration had changed, but that wasn't it either.
Eventually I discovered that the lo interface was there,
but wasn't being configured, because my boot-time tweaking had
disabled the ifupdown boot-time script, normally called from
/etc/rcS.d.
That's all straightforward, and I restored ifupdown to its
rightful place using update-rc.d ifupdown start 39 S .
Dancer suggested apt-get install --reinstall ifupdown
which sounds like a better way; I'll do that next time. But
meanwhile, what's this ifupdown-clean script that gets installed
as S18ifupdown-clean ?
I asked around, but nobody seemed to know, and googling doesn't
make it any clearer. The script obviously cleans up something
related to /etc/network/ifstate, which seems to be a text
file holding the names of the currently configured network
interfaces. Why? Wouldn't it be better to get this information
from the kernel or from ifconfig? I remain unclear as to what the
ifstate file is for or why ifupdown-clean is needed.
Now my loopback interface worked -- hurray!
But after another dist-upgrade, now eth0 stopped working.
It turns out there's a new hotplug in town. (I knew this
because apt-get asked me for permission to overwrite
/etc/hotplug/net.agent; the changes were significant enough
that I said yes, fully aware that this would likely break eth0.)
The new net.agent comes with comments referencing
NET_AGENT_POLICY in /etc/default/hotplug, and documentation
in /usr/share/doc/hotplug/README.Debian. I found the
documentation baffling -- did NET_AGENT_POLICY=all mean that
it would try to configure all interfaces on boot, or only that
it would try to configure them when they were hotplugged?
It turns out it means the latter. net.agent defaults to
NET_AGENT_POLICY=hotplug, which doesn't do anything unless you
edit /etc/network/interfaces and make a bunch of changes;
but changing NET_AGENT_POLICY=all makes hotplug "just work".
I didn't even have to excise LIFACE from the net.agent code,
like I needed to in the previous release. And it still works
fine with all my existing Network
Schemes entries in /etc/network/interfaces.
This new hotplug looks like a win for laptop users. I haven't
tried it with usb yet, but I have no reason to worry about that.
Speaking of usb, hotplug, and the laptop: I'm forever hoping
to switch to the 2.6 kernel, because it handles usb hotplug so much
better than 2.4; but so far, I've been prevented by PCMCIA hotplug
issues and general instability when the laptop suspends and resumes.
(2.6 works fine on the desktop, where PCMCIA and power management
don't come into play.)
A few days ago, I built both 2.4.29 and 2.6.10, since I was behind
on both branches. 2.4.29 works fine. 2.6.10, alas, is even less
stable than 2.6.9 was. On the laptop's very first resume from BIOS
suspend after the first 2.6.10 boot, it hung, in the same way I'd
been seeing sporadically from 2.6.9: no keyboard lights blinking
(so not a kernel "oops"), cpu fan sometimes spinning,
and no keyboard response to ctl-alt-Fn or anything else.
I suppose the next step is to hook up the "magic sysrq" key and see
if it responds to the keyboard at all when in that state.
Tags: linux, debian, networking, hotplug
[
23:06 Mar 02, 2005
More linux |
permalink to this entry |
]
Mon, 21 Feb 2005
In the storm a couple of days ago, our server crashed: turned out
we had some sort of power glitch that killed the UPS. Curiously,
the other machines stayed up, including mine. I thought everything
was fine, until I tried to power down that evening and found myself
in an infinite-reboot cycle.
Since then my machine has been increasingly flaky, sometimes
sending no video signal to the monitor at startup, sometimes not
booting at all, never able to power down. Dave suggested
downloading the latest BIOS and re-flashing.
The motherboard is a GigaByte GA-6VTXE (amusingly, the manual for it
doesn't mention the company name anywhere, so I had to google for the
model). It turns out that it has an option ("Q-Flash")
to flash a new BIOS image without needing Windows or DOS. Hooray!
Sounded good, anyway: but the download images for the BIOS updates
were a bit worrisome since they had names like bios_6vtxe_f9.exe.
I downloaded the latest and put the .exe on a DOS-formatted floppy.
The BIOS saw the file on the floppy, but said it was the wrong size
(469k when it expected 256k).
Turns out that the file does need to be extracted from Windows
in order to turn that 469k .exe file into the expected 256k image.
It can't be unpacked by unzip, unrar or any other Linux utility I've found.
In other words, GigaByte is making their download files twice as big
as they need to be in order to introduce an unnecessary
Windows requirement into the Q-Flash process, which otherwise would
be completely independent of operating system.
Sigh. (And no, the BIOS update didn't fix the problems, which are
probably hardware. But it was worth a try.)
(Update: looks like it was the obvious, the power supply.)
Tags: linux, bios
[
13:28 Feb 21, 2005
More linux |
permalink to this entry |
]
Tue, 08 Feb 2005
Turns out the
Novell
Ad requires flash 7, and just runs partially (but with no errors
explaining the problem) with flash 6. About 2/3 of the linux users
I polled on #linuxchix had the same problem as I did (still on flash 6).
I installed flash 7.0r25, and now I get video and sound (albeit with
the usual flash "way out of sync" problem), but mozilla 1.8a6 crashes
when leaving the page (I filed a talkback report).
Still not a great face to show migrating customers.
Oh, well, maybe it works better on Novell Linux ...
Tags: linux, marketing, web
[
18:33 Feb 08, 2005
More linux |
permalink to this entry |
]
Someone on IRC posted a link to a
Novell
ad trying to persuade people to migrate from Windows to Linux.
It's flash, so I saw the flash click-to-view button. I clicked it,
and something downloaded and showed play controls (a percent-done slider
and a pause button). The controls respond, but no video ever appears.
Thinking maybe it was a problem with click-to-view, I tried it in my
debug profile, with mostly default settings. No dice: even without
click-to-view, the page just plain doesn't work in Linux Mozilla.
Didn't work in Firefox either (though I don't have a Firefox profile
without click-to-view, admittedly). People on Windows and Mac
report that it works on those platforms.
I thought to myself, Novell is trying to be pro-Linux, they'll
probably want to know about this. So I went up one level to try to
find a contact address (there isn't one on the migration page).
I didn't find any email addresses but I did find a feedback link,
so I clicked it. It popped up an empty window, which sat empty
for a minute or two, then filled with "Novell Account:
Mal-formed reply from origin s". Any text which might follow
that is cut off, doesn't fit in the window size they specified.
What does Novell expect customers to think when they migrate
one machine to Linux, start using it to surf the web, and
discover that they can't even read Novell's own pro-Linux pages
from Linux? What sort of impression is that going to make on
someone considering migrating a whole shop?
Fortunately sites like Novell's which don't work in Linux and
Mozilla are the exception, not the rule. I can surf most
of the web just fine; it's only a few bad apples who can't manage
to write cross-platform web pages. But someone early in the
migration process doesn't know that. They're more likely to just
stop right there.
Tags: linux, marketing, web
[
12:30 Feb 08, 2005
More linux |
permalink to this entry |
]
Thu, 13 Jan 2005
For years I've been plagued by having web pages occasionally display
in a really ugly font that looks like some kind of ancient OCR font
blockily scaled up from a bitmap font.
For instance, look at West Valley College
page, or this news page.
I finally discovered today that pages look like this because Mozilla
thinks they're in Cyrillic! In the case of West Valley, their
server is saying in the http headers:
Content-Type: text/html; charset=WINDOWS-1251
-- WINDOWS-1251 is Cyrillic --
but the page itself specifies a Western character set:
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
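An easy way to see what charset a server is actually claiming, by the way (the URL here is just the example site):
wget -S -O /dev/null http://www.westvalley.edu/ 2>&1 | grep -i Content-Type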
On my system, Mozilla believes the server instead of the page,
and chooses a Cyrillic font to display the page in. Unfortunately,
the Cyrillic font it chooses is extremely bad -- I have good ones
installed, and I can't figure out where this bad one is coming from,
or I'd terminate it with extreme prejudice. It's not even readable
for pages that really are Cyrillic.
The easy solution for a single page is to use Mozilla's View menu:
View->Character Encoding->Western (ISO-8859-1).
Unfortunately, that has to be done again for each new link
I click on the site; there seems to be no way to say "Ignore
this server's bogus charset claims".
The harder way: I sent mail to the contact address on the server
page, and filed bug
278326 on Mozilla's ignoring the page's meta tag (which you'd
think would override the server's default), but it was closed with
the claim that the standard requires that Mozilla give precedence
to the server. (I wonder what IE does?)
At least that finally inspired me to install Mozilla 1.8a6, which
I'd downloaded a few days ago but hadn't installed yet, to verify
that it saw the same charset. It did, but almost immediately I hit
a worse bug: now mozilla -remote always opens a new window,
even if new-tab or no directive at all is specified.
The release notes have nothing matching "remote", but
someone had already filed bug
276808.
Tags: tech, web, fonts, linux
[
20:15 Jan 13, 2005
More tech/web |
permalink to this entry |
]
Tue, 04 Jan 2005
Why is it that devices which claims Linux support almost never
work with Linux?
When my mom signed up for broadband, we needed an ethernet card and
router/firewall for her machine. The router/firewall was no problem
(a nice Linksys with a 4-port switch included) but ethernet cards
are trickier. First, it turns out that lots of stores no longer
sell them, because they're trying to push wireless on everybody.
("Hey, I have a great idea! Let's take Windows users who don't
even know how to run Windows Update, and set them up with an
802.11b network that opens their connection to the whole neighborhood,
plus anyone driving by, unless they take extra security precautions!")
Second, ethernet cards are in that class of hardware that
manufacturers tend to change every month or so, without changing the
model number or adding any identifying information to the box so
you know it's not the one that worked last time.
The sale card at Fry's was an AirLink 101, and it claimed Linux
support right on the box. The obvious choice, right? We knew better,
but we tried it anyway.
Turns out that the driver on the floppy included in the box is for a
RealTek 8139 chip: a file called 8139too.c, which has already been
incorporated into the Linux kernel. Sounds great, no? Except that
it turns out that the card in the box is actually an 8039, not an
8139, according to lspci, and it doesn't work with 8139too.c. Nor
does it work with the ne2k driver, which supports the RealTek 8029
chip. No driver we could find could make head nor tail out of the
AirLink chip.
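For the record, the quick way to see what chip a card really has, regardless of what the box says (lspci -n gives the numeric vendor/device IDs if you need to look up a driver):
lspci | grep -i ethernet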
Amusingly, the Windows driver on the floppy didn't work either: it,
too, was for a RealTek 8139 and hadn't been updated to match the chip
that was actually being shipped on the card. So the AirLink is a
complete bust, and will be returned.
Fortunately, the other likely option at Fry's, a Linksys LNE100TX,
is still the same chip (DEC Tulip) that they've used in the past, and
it works just fine with Linux.
It's sad how often a claim of Linux support on the box
translates to "This is a crappy product which probably won't work
right with any operating system, since we change it every couple
of months. But three revs back someone tried it on a linux
machine and it worked, so we printed up all our packaging
to say so even though we didn't bother to retest it after we
completely redesigned the board."
Tags: linux
[
10:57 Jan 04, 2005
More linux |
permalink to this entry |
]
Mon, 20 Dec 2004
I've forever struggled with Debian's printing system.
A few months ago, Debian unstable introduced a new package called
printconf which, once I discovered by accident it required
the parallel port to be in EPP mode, actually detected and
configured my trusty Epson Photo 700. It was a happy day!
But since then, the printing system has broken again.
It wasn't so bad when printing did nothing at all, or printed
random garbage characters or postscript instead of a picture.
But now (for the past month or so), what it does is print out
a centimeter or so of reasonable graphics, after which the printer
starts to issue horrible grinding noises
and has to be powered off in order to stop the destruction.
I discovered through much fiddling that I could get the printer
working again (on a non-Debian system) by powering it off and
leaving it that way for quite a while (a few minutes doesn't seem to
be enough, but 20 minutes is), then plugging it into the SuSE 9.1
machine and running a series of clean/nozzle test/clean cycles.
Eventually, after the second round where the nozzle test prints
clean, the printer works normally again from SuSE or Redhat.
I still don't know whether all that loud grinding is doing
any permanent damage to the printer.
I suspect the actual problem may be something like paper size.
In the few months during which printing actually worked,
I had lots of problems with mozilla's printouts overrunning the
page, which turned out to be due to Xprint having its own idea of
paper size (A4) rather than following the system setting (usletter).
I never did find a place to configure Xprint's idea of paper size,
so I uninstalled Xprint, and mozilla magically became able to print
on usletter paper. But it's possible there are other parameters
buried in the debian printing system somewhere, perhaps telling
the printer to print to paper wider than it's capable of.
I've filed bugs, but they never get any response which might offer
a clue how I could help debug this; I suspect Debian's print
spooling system is basically orphaned. I've tried installing
and uninstalling every combination of the myriad print spooling
components I can find. I'd love to uninstall it all and build the
whole spooler from source, and then perhaps try to track down
the problem and fix it, but there are so many pieces which all
work together in undocumented ways that I don't know where to start.
(Perhaps by installing exactly the component set that SuSE does?)
I'm reluctantly giving up on Debian for my primary desktop machine.
I like almost everything else about Debian, and I've run it for
several years on my primary machine; but during that time I've
only had a few months here and there where printing briefly worked
before breaking again. There must be a distro that can do easy
software updates like Debian, yet is still capable of driving
a printer without damaging it!
Tags: linux, debian, printing
[
23:46 Dec 20, 2004
More linux |
permalink to this entry |
]
Mon, 13 Dec 2004
I've been alternating between icewm and openbox for my window manager,
but I'm not entirely happy with either one. They're both fast to
start up, which is important to me, but I've had frustrations
mostly relating to window focus -- which window becomes focused when
switching from one desktop to another (icewm's biggest problem) or
when a window resizes (openbox), and also with initial window positioning
and desktop location (e.g. making one window span all desktops without
having to select a menu item every time I run that app).
Someone was opining on IRC about fvwm and its wonderful configurability,
and that made me realize that I haven't really given fvwm a chance in
a long, long time. Time to see if I was missing anything!
The defaults are terrible. No wonder I didn't stick with fvwm after
newer windowmanagers came out! It's definitely not an install-and-go
sort of program. Nor is the documentation (a long and seemingly
thorough man page) clear on how to get started configuring it.
Eventually I figured out that it looks for ~/.fvwm/config, and
that some sample configs were in /usr/share/doc/fvwm/sample.fvwmrc
(which is a directory), and I went from there. After several hours
of hacking, googling, and asking questions on #fvwm, I had a setup
which rivals any window manager I've found: it's fast, lets me configure
the look of my windows, lets me bind just about anything to keys,
and has pretty well-behaved focus behavior. More important,
it also allows me to specify special behavior for certain windows,
for example, making xchat always occupy all desktops:
Style "xchat" Sticky
which is something I've wanted but haven't been able to do in
any other lightweight window manager. That alone may keep me
in fvwm for the foreseeable future.
Tips for things that were non-obvious:
Rotating through three desktops
In other window managers, I define three desktops, and use ctrl-alt-right
and left to cycle through them, rotating, so going right from 3 goes to
desktop 1. fvwm has both "pages" (a virtual desktop can be bigger than
the screen, and mousing off the right side scrolls right one page) and
desktops. I didn't want pages, only desktops, so
DeskTopSize 1x1
turns off the pages, and it was clear from the man page that
PointerKey Left A CM GotoDesk -1
would go left one desk. (The only unclear part about that is that A
in the modifier list means "Any", not "Alt", and "M" (presumably for
"meta") means alt, not the windows-key which some programs use for meta.)
"PointerKey" is needed instead of "Key" because otherwise fvwm gets
confused when using the "sloppy focus" model (the man page warns
about that).
The question was, how to limit fvwm to three
desktops, and wrap around, rather than just going left to new desktops
forever? The answer (courtesy of someone on IRC) turned out to be:
PointerKey Left A CM GotoDesk -1 0 2
PointerKey Right A CM GotoDesk 1 0 2
PointerKey Left A CMS MoveToDesk -1 0 2
PointerKey Right A CMS MoveToDesk 1 0 2
The only problem at this point is that MoveToDesk doesn't then change
to the new desktop, the way other window managers do, but I'm confident
that will be easily solved.
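Untested guess: an fvwm function along these lines might do the
move-and-follow in one binding, though I haven't verified that the
window context carries through to the function:
DestroyFunc MoveAndGoto
AddToFunc MoveAndGoto
+ I MoveToDesk $*
+ I GotoDesk $*
PointerKey Left A CMS MoveAndGoto -1 0 2
PointerKey Right A CMS MoveAndGoto 1 0 2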
Titlebar buttons
I had the hardest time getting buttons (e.g. maximize, close)
to appear on the titlebars of my windows. You'd think this would
happen by default, but it doesn't. It turned out that titlebar buttons
aren't drawn unless there's a key or mouse action bound to that button,
which they aren't by default. So to get buttons for window menu,
maximize, and close, I had to do:
Mouse 1 6 A Close
Mouse 1 8 A Maximize
Mouse 1 1 A Menu Window-Ops Nop
But then showing buttons 6 and 8 (the even buttons are numbered
from the top right) automatically turns on 2 and 4 (I chose 6 and
8 because their default shapes were vaguely mnemonic), so they have
to be turned off again:
Style * NoButton 2
Style * NoButton 4
Smaller titlebar and window frame
I also wanted to reduce the titlebar height and the width of the window
frame: I don't like wasting all that screen real estate. That, too,
took a long time to figure out. It turns out I had to define my own
theme in order to do that, then add a couple of undocumented items
to my theme. There's lots of documentation around on how to make
buttons and background images and menus and key bindings in themes,
but none of the documentation mentions simple stuff like titlebar height.
DestroyDecor MyDecor
AddToDecor MyDecor
+ TitleStyle Height 16
+ Style "*" BorderWidth 5, HandleWidth 5
+ ButtonStyle All -- UseTitleStyle
Style "*" UseDecor MyDecor
New windows should grab focus
Most window managers do this by default, but fvwm doesn't, and requires:
Style * FPGrabFocus
Full config file
For any other settings, see my
fvwm config file.
Tags: linux, X11, window managers
[
18:14 Dec 13, 2004
More linux |
permalink to this entry |
]
Wed, 08 Dec 2004
I ran out of space on the backup drive today (must replace that 40G
with a newer, bigger disk!) and decided I wanted to consolidate the
last two partitions into one. The filesystem (ext2) was on sda3,
and sda4 was blank.
Supposedly parted and qtparted can resize a filesystem,
but when I select the relevant partition in qtparted (delete sda4,
then select sda3) and tell it to resize, it gives an error message:
No Implementation: This ext2 filesystem has a rather strange layout! Parted can't resize this (yet).
I ended up using cfdisk to resize the partition, then resize2fs
to grow the filesystem.
Since there doesn't seem to be a howto on resizing filesystems,
here are the steps:
- cfdisk /dev/sda
- Select sda4 and delete it.
- Select sda3 and delete it. (Partitioning programs like fdisk
and cfdisk don't have "resize"; they only have delete and recreate.)
- Create a new partition, using all the available space.
- Write and quit.
- resize2fs -p /dev/sda3
(there's also a resize_reiserfs). This required that I run fsck
first (even though the filesystem had been unmounted cleanly).
It's possible that that was why qtparted failed to resize, but
if so, it should have said so.
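In command form, the filesystem part boils down to roughly this
(with the filesystem unmounted, and substituting your own partition):
umount /dev/sda3
e2fsck -f /dev/sda3
resize2fs -p /dev/sda3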
Tags: linux, filesystems
[
17:58 Dec 08, 2004
More linux |
permalink to this entry |
]
Tue, 07 Dec 2004
Something to do while sick, when I couldn't stand lying in bed
reading any more ...
A few of us got to talking about video formats and our confusion
about which formats were which, which ones were open, and so forth.
I've never found a web document that really explains that clearly,
but since I wasn't up to going anywhere or doing anything hard,
I spent some quality time with google and wrote up my findings:
Digital
Video Formats.
It's still very rough, but at least it tries to make clear what's
a codec vs. what's a container, and how to identify the container
and codec used in a particular file. And it's a good place to store
a few references that I found useful.
Tags: linux, video
[
23:05 Dec 07, 2004
More linux |
permalink to this entry |
]
Mon, 22 Nov 2004
My Epson 2400 Photo scanner is finally working again. It used to
work beautifully under 2.4, but since the scanner.o module
disappeared in 2.6 and sane started needing libusb,
I haven't been able to get it to work. (sane-find-scanner
would see the scanner, but scanimage -L would not, even as
root so it wasn't a permissions problem.)
Working with someone on #sane tonight (who was also having problems
with libusb and 2.6) I finally discovered the trick: I had an old
version of /etc/sane.d/epson.conf which used a line:
usb /dev/usb/scanner0
but I was completely missing a new, important, section which
includes a line that says simply
usb
preceded by a couple of all-important comment lines:
# For any system with libusb support (which is pretty much any
# recent Linux distribution) the following line is sufficient.
So I replaced the old libusbscanner script with the new one,
commented out scsi, left /dev/usb/scanner0 commented out,
and uncommented the standalone usb line. And voila, it worked!
<geeky_hotplug_details>
The old /etc/hotplug/usb/epson.scanner script (which I'd
gotten from a SANE help page long ago) was no longer being
called, since it's been replaced by libusbscanner.
The main function of either of these scripts is to do a
chown/chmod on the scanner device, so that non-root users
can use it. An interesting variation on this is a
bugzilla
attachment which changes scanner ownership to the person who is
currently logged in on the console. Might be worth doing on a
multiuser system (not an issue for my own desktop).
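The guts of such a script are simple; a hypothetical sketch (not the
actual libusbscanner code, and the "scanner" group is just an example):
#!/bin/sh
# hotplug calls this with ACTION and DEVICE in the environment
if [ "$ACTION" = "add" ] && [ -n "$DEVICE" ]; then
    chown root:scanner "$DEVICE"
    chmod 0660 "$DEVICE"
fi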
I have a line for my scanner in /etc/hotplug/usb.usermap (and
indeed that's the only line in that file):
libusbscanner 0x0003 0x04b8 0x011b 0x0000 0x0000 0x00 0x00 0x00 0x00 0x00 0x00 0x00000000
which is probably redundant with the 0x04b8 0x011b line in
libsane.usermap (
/etc/hotplug/usb.agent, which gets called
whenever a USB hotplug event occurs, looks at usb.usermap and also
usb/*.usermap)
</geeky_hotplug_details>
Tags: linux, imaging
[
19:03 Nov 22, 2004
More linux |
permalink to this entry |
]
Wed, 17 Nov 2004
Fedora Core 3 is finally out, so I tried it this evening.
The install itself went apparently smoothly, looking just like a
normal Redhat graphical install, with a few new options,
such as SElinux (which I turned off).
The first problem came during the first boot of the newly installed
system, when the boot process complained that fsck.ext3 was finding
errors on hda2.
hda2, on this disk, is a SuSE partition: I didn't tell the FC3
installer about it at all, so, correctly, it's not mentioned in
/etc/fstab; and it's reiser, so FC3 has no business running
fsck.ext3 on it!
It dumped me into a single-user shell, in which mount shows / as
mounted "rw,defaults", but any attempt to modify anything on the
root partition complains "Read-only file system".
I got around that with mount -o rw,remount / -- it turns out
that typing "mount" doesn't actually give an accurate picture of
mounted filesystems, and cat /proc/mounts is better.
Now able to edit /etc/fstab, I noticed the "LABEL=/" and
"LABEL=/boot" entries that Redhat is so fond of, and speculated
that this is what was causing the problems. After all, there are
several other OSes installed on this system (a Redhat 7.3 and a SuSE
9.1) and either or both of them might already have claimed the label
"/". So I changed the fstab entries to /dev/hda6 and /dev/hda1.
A reboot, and voila! things worked and I found myself in the
first-time boot configuration process.
(Note: Redhat bug
76467 seems to cover this; I've added a comment describing what
I saw.)
I really wish Redhat would get over this passion for using
disk partition labels, or at least detect when the labels it wants
to use are already taken.
Onward through hardware configuration. It didn't detect my LCD monitor,
but that was easy to correct. It correctly detected and configured
the sound card. It didn't try to configure a printer. At the end
of hardware configuration, it took me to a graphical login screen
(no option for non-graphical login was offered), and I logged in to
a gnome desktop, with a rather pretty background and some nicely
small and professional looking taskbars. The default gnome theme
looks nice, and the font in the terminal app (gnome-terminal)
is very readable on this 1280x1024 monitor.
The default browser is Firefox, one of the 1.0 preview releases,
with nice looking fonts.
The first step was to try and configure a printer. Double-clicking
on the "Computer" desktop icon offered only my two optical drives,
the hard drive, and "Network". The Redhat menu in the panel,
though, offered "System Settings->Printer", which ran printconf-gui,
which revealed that FC3 had in fact autodetected my Epson Photo 700
and configured it. Strangely, printconf-gui's "Test" menu was
greyed out, so I wasn't able to "print a test page" that way.
I tried quitting printconf-gui, restarting it (still grey),
left-clicking on the printer (still grey), right-clicking on the
printer (nothing test-y in the context menu) -- and the Test menu
finally ungreyed! The test page printed beautifully -- centered on
the page, something Debian's CUPS setup has never managed.
Clicking on the red ! in the taskbar took me to up2date;
clicking through the screens ended up updating only the kernel,
apparently because updates aren't auto-selected and I have to
manually "Select all" in order to update anything. Once I figured
this out, up2date, via yum, got started updating the other 75
available packages. But it only got halfway through before it
hung (the window wouldn't repaint). It turned out that kill -1
on the up2date process didn't help, but kill -1 on the
/usr/bin/python -u /usr/sbin/up2date made the window wake up and
start updating again. I had to repeat this several times during the
multi-hour update. Then it died, apparently with no memory of which
packages it had already updated.
A Fedora expert suggested that I should
- Go to /etc/yum.repos.d and add .us.west to the end of the url
in every file that has a mirrorlist entry, then
- Use yum -y update instead of up2date, because up2date doesn't
seem to work right for anyone.
Indeed, that seems to be working much better, and it turns out that
I can move the RPMs already downloaded from /var/spool/up2date to
/var/cache/yum/updates-released/packages so I don't have to
re-download them (whew!)
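(Concretely, something like
mv /var/spool/up2date/*.rpm /var/cache/yum/updates-released/packages/
after checking that the target directory exists.)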
Mixed review
So overall, FC3 gets a mixed review.
The installer is pretty good. It's a bit light on feedback: for
instance, not telling me that a printer was configured (or giving
me an option to change it), which would have added a warm fuzzy
since it turned out it handled it so well; or not giving me a
non-graphical login option. The desktop look is clean and usable.
OTOH, the boot totally failed due to the LABEL=/ problem, and
up2date totally failed. A novice user, wiping out the disk,
wouldn't see the partitioning problem, but if up2date is as
flaky as it seems (everyone I talked to has had problems with it)
it's hard to understand why they don't just use yum directly,
and offer more mirror options (up2date only gave me a choice
of one server, which was obviously overloaded).
Tags: linux, redhat, fedora
[
23:58 Nov 17, 2004
More linux |
permalink to this entry |
]
Fri, 05 Nov 2004
Printing's been broken on my Debian machine forever.
For one brief shining moment back in July I
briefly
got it working, then a week later a dist-upgrade broke it again
and it's been broken ever since.
Last week Debian Weekly
News mentioned a new package called "printconf" which supposedly
autoconfigures usb and parallel printers for CUPS. Now, setting
aside for the moment that there's already a package called
printconf, which configures a completely different spooler than
CUPS, and that it's very confusing of Debian to resurrect an old
name for a completely different purpose, of course I wanted to try
it.
At apt-get time, it asked me whether I wanted to configure my
printers now, and of course I said yes. The package installed,
it printed a message about restarting CUPS, and no more details.
Did it do anything?
I visited the CUPS configuration url (CUPS is configured via a web
browser) and the entry looked like my old printer entry. Just for
ducks I clicked "print a test page". Nada. So I removed the entry,
went back to my root shell and typed printconf. It printed
"Restarting cups ... done." No other info. Back to the web
configuration page ... no printer there.
Eventually I discovered the -v option, which at least told me that
it wasn't finding any parallel printers. I know this printer can be
detected via the parallel port (SuSE and Mandrake both autoconfigure
it), so something was wrong. Time to look at the BIOS.
A bunch of reboots later, I finally managed to get into my machine's
BIOS screen (hint: repeatedly press DEL during boot. The screen saying
DEL is the right key only flashes for a fraction of a second, so
there's no hope of ever reading it and I wasted several boot cycles
pressing function keys instead) and changed the parallel port from
"ECP" to "ECP/EPP". Back into Debian -- and voila! printconf saw
the printer, autoconfigured it with some magic the earlier entry
hadn't had, and after a year and a half I have a debian printer again!
(Incidentally, the parallel port setting isn't why the printer
wasn't working before; it was something about the CUPS
configuration. Printing used to work on this machine several
years ago and the BIOS settings haven't changed since then.)
All hail printconf! I wonder if it's ever occurred to anyone to
mention in the man page that it needs an EPP (or ECP/EPP?) parallel
port?
Tags: linux, debian, printing
[
22:05 Nov 05, 2004
More linux |
permalink to this entry |
]
Wed, 13 Oct 2004
I took a break from housepainting yesterday to try out a couple
of new linux distros on my spare machine, "imbrium", which is mostly
used as a print server since Debian's CUPS can't talk to an Epson
Photo 700 any more.
The machine is currently running the venerable Redhat 7.3 -- ancient
but very solid. But I wanted a more modern distro, something
capable of running graphics apps like GIMP 2 and gLabels 2.
I considered Fedora, but FC2 is getting old by now and I would
rather wait for FC3.
First I tried SuSE 9.1. It was very impressive.
The installer whizzed through without a hitch,
giving me lots of warning before doing anything destructive.
It auto-configured just about everything: video card,
ethernet, sound card, and even the printer. It missed my LCD monitor;
X worked fine and it got the resolution right, but when I went in to
YaST to enable 3D support (which was off by default) it kept whining
about the monitor until I configured it by hand (which was easy).
It defaulted networking to DHCP, but made it clear that it had done
so, which made it easy to change it to my normal configuration.
SuSE still uses kde by default, which is fine. The default desktop
is pretty and functional, and not too slow. I'll be
switching to something lighter weight, like icewm or openbox, but
SuSE's default looks fine for a first-time user.
I hit a small hitch in specifying a password: it has a limited set
of characters it will accept, so several of the passwords I wanted
to use were not acceptable. Finally I gave up and used a simple
string, figuring I'd change it later, and then it whined about it
being all lower case. Why not just accept the full character set,
then? (At least full printable ascii.)
Another minor hitch involved the default mirror (LA) being down when it
got to the update stage. Another SuSE user told me that mirror is
always down. Choosing another mirror solved that problem.
Oh, and the printer? Flawless. The installer auto-detected it
and configured it to use gimp-print drivers.
A full "test page with photo" (from gimp-print) worked fine,
and subsequent prints (via kprinter) from Open Office also worked.
Good job, SuSE!
The experience with Ubuntu Warty wasn't quite so positive.
The installer is a near-standard Debian installer, with the usual
awkward curses UI (I have nothing against curses UIs; it's the
debian installer UI specifically which I find hard to use, since
it does none of the "move the focus to where you need to be next"
that modern UI design calls for, and there's a lot of "Arrow down
over empty space that couldn't possibly be selectable" or "Arrow
down to somewhere where you can hit tab to change the button so you
can hit return". It's reminiscent of DOS text editors from the
early eighties.)
But okay, that's not Ubuntu's fault -- they got that from Debian.
The first step in the install, of course, is partitioning.
My disk was already partitioned, so I just needed to select / to
be formatted, /boot to be re-used (since it's being shared with the
other distros on this machine), and swap. Seemed easy, it accepted
my choices, made a reiserfs filesystem on my chosen root partition
-- then spit out a parted error screen telling me that due to an
inconsistent ext2 filesystem, it was unable to resize the /boot
partition.
Attempting to resize an existing partition without confirming it
is not cool. Fortunately, parted, for whatever reason,
decided it couldn't resize, and after a few confirmation screens
I persuaded it to continue with the install without changing /boot.
The rest of the install went smoothly, including software update
from the net, and I found myself in a nice looking gnome screen
(with, unfortunately, the usual proliferation of taskbars gnome
uses as a default).
Of course, the first thing I wanted to try was the printer.
I poked through various menus (several semi-redundant sets) and
eventually found one for printer configuration. Auto-detect didn't
detect my printer (apparently it can't detect over the parallel
port like SuSE can) so I specified Parallel Port 1 (via an option
menu that still has the gtk bug where the top half of the menu is
just blank space), selected epson, and looked ... and discovered
that they don't have any driver at all for the Photo 700. I tried
the Photo 720 driver, which printed a mangled test page, and the
generic Epson Photo driver, which printed nothing at all. So I
checked Ubuntu's
Bugzilla, where I found a bug
filed requesting a driver for
the Epson C80 (one of the most popular printers in the linux
community, as far as I can tell). Looks like Ubuntu just doesn't
include any of the gimp-print drivers right now; I signed up
for a bugzilla account and added a comment about the Photo 700,
and filed one about the partitioning error while I was there,
which was quickly duped to a more general bug about
parted and ext2 partitions.
I don't mean to sound down on Ubuntu. It's a nice looking
distro, it's still in beta and hasn't yet had an official release,
and my printer is rather old (though quite well supported by
most non-debian distros). I'm looking forward to seeing more.
But for the time being, imbrium's going to be a SuSE machine.
Tags: linux, ubuntu, suse
[
19:15 Oct 13, 2004
More linux |
permalink to this entry |
]
Thu, 07 Oct 2004
Linux Magazine has a good article by Jonathan A. Zdziarski on
Linux on
the laptop: Ten power tools for the mobile Linux user.
He gives hints such as what services to turn off for better power
management, and how to configure apmd to turn off those services
automatically; finding modules to drive various types of wireless
internet cards; various ways of minimizing disk activity;
and even making data calls with a mobile phone.
Lots of good information in there!
Tags: linux, laptop
[
20:51 Oct 07, 2004
More linux/laptop |
permalink to this entry |
]
Sat, 18 Sep 2004
Amazing! HP finally updated their web site and made it possible to
buy the SuSE Linux laptop that they've been
claiming
to have since early August. Only took a month and a half.
I wonder if anyone else has noticed that you have to buy the
high-end version of the laptop (over $1500) to get the Linux
option, and it's only $20 less than the comparable Windows version,
even though all their press releases last month said it would be
under $1100 and significantly (like $50) cheaper than the Windows
version?
Tags: linux, marketing
[
23:13 Sep 18, 2004
More linux |
permalink to this entry |
]
Tue, 10 Aug 2004
A Sun employee named James Todd has been posting
paeans to Sun and their Linux support on the svlug list
(the
thread).
I don't intend to follow up to that thread, because after
18 messages in four days
(including 9 from jwtodd spanning over 800 lines)
I expect most folks on the list would prefer to move on to other topics.
James attacks me repeatedly for my earlier blog entry
wherein I say that the machines I saw in the Sun booth were all running
Windows. He says he worked in the booth, and there were no Windows
machines there.
If that's true, then that's terrific! I'm very happy to hear that
all the machines I saw with "Start" menus and Redmond-looking icons
and themes were actually just a theme Sun puts on their Linux
(or Solaris?) desktop boxes. I don't know why Sun feels it
necessary to make Linux look just like Windows -- maybe that's part
of their theory that you don't need to know what OS you're on (which
is really quite a good idea for corporate installations, and
reportedly is working quite well internally at Sun).
Perhaps they further assume that if they make the non-Windows
installations look like Windows, people will be more accepting of
the idea. I'm not sure this part is a good idea -- wouldn't it be better
if the theme sent the message "Sun" rather than "Windows",
so customers don't get the idea that they can just zip off to Dell
or somewhere and buy cheaper machines that will do the same thing?
Wouldn't it be better marketing at a show like Linux World to show
off a theme that didn't look like Windows?
But that's all marketing. If the machines were in fact running
Linux and Solaris, I'm happy to hear that I was wrong.
Time will tell whether the Windows-like theme is the right
choice, and whether Sun sticks with Linux in the long run.
Of course I hope they do, and that they succeed in selling linux
boxes to corporate customers, and that the recent settlement
agreement with Microsoft does not herald a withdrawal from open source,
as it has with some other companies.
(Whether Sun has helped open source is not at issue,
and never was part of this debate, as far as I know.
They've already contributed quite a bit, with the Open Office
project, and with contributions to Gnome and Mozilla accessibility
and internationalization.)
Tags: linux, conferences, linuxworld, marketing
[
14:29 Aug 10, 2004
More linux |
permalink to this entry |
]
Okay, that subject line isn't likely to surprise any veteran linux
user. But here's the deal: wvdialconf in the past didn't support
winmodems (it checked the four /dev/ttySN serial ports, but not /dev/modem
or /dev/ttyHSF0 or anything like that) so in order to get it working
on a laptop, you had to hand-edit the modem information in the file,
or copy from another machine (if you have a machine with a real
modem) and then edit that. Whereas kppp can use /dev/modem, and
it's relatively easy to add new phone numbers. So I've been using
kppp on the laptop (which has a winmodem, alas) for years.
But with the SBC switch to Yahoo, I haven't been able to dial up.
"Access denied" and "Failed to authenticate ourselves to peer" right
after it sends the username and password, with PAP or PAP/CHAP
(with CHAP only, it fails earlier with a different error).
Just recently I discovered that wvdial now does check /dev/modem,
and works fine on winmodems (assuming of course a driver is present).
I tried it on sbc -- and it works.
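For anyone trying the same thing, a minimal wvdial.conf looks roughly
like this (placeholders, obviously, not my real account details):
[Dialer Defaults]
Modem = /dev/modem
Baud = 115200
Init1 = ATZ
Phone = <ISP dial-in number>
Username = <account name>
Password = <password>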
I'm still not entirely sure what's wrong with kppp. Perhaps
SBC isn't actually using PAP, but their own script, and
wvdial's intelligent "Try to guess how to log in" code is figuring
it out (it's pretty simple, looking at the transcript in the log).
I could probably use the transcript to make a login script that
would work with kppp. But why bother? wvdial handles it
excellently, and doesn't flood stdout with hundreds of lines
of complaints about missing icons like kppp does.
Yay, wvdial!
Tags: linux, networking
[
13:51 Aug 10, 2004
More linux |
permalink to this entry |
]
Mon, 09 Aug 2004
Searching, as always, for the perfect window manager ...
Helix likes ion3, because it handles key accelerators very well,
so I thought I'd try it.
I don't really like the "tiled or fullscreen" model it uses by
default, but found the answer in the FAQ (after a rude RTFM comment
which made no sense, since I had already RTFMed and the man page doesn't give
information on anything but runtime arguments): press F9 and get
it to prompt for the type of the new workspace (the correct answer
is the one with "Float" in the name).
All of the available themes use the "small grab handle around the
title, but the rest of the titlebar isn't there" look. I don't like
that (it's harder to find a place to grab to move windows around)
though I suppose I could get used to it. Not a big deal.
It does look like it has good key binding support, and ways to
specify different behavior for different apps, both of which would
be very nice. Focus behavior on resize seems to be the same as openbox
and icewm, though: if the window resizes out from under the cursor,
it loses focus. Also the root menus (right-click) are a pain: they
don't stay posted, and they're small, so it's hard to navigate them.
I'm sure ion3 has some coolnesses, but I decided that it didn't look
likely enough to solve my problems to be worth learning how to
configure it (and that unjustified RTFM left a bad taste in my
mouth about how easy that configuration was likely to be). So
I'm back on openbox for a little while, anyway.
Here's what I want out of a window manager:
- First, startup speed and basic operation comparable to icewm or openbox
(my two faves so far).
- Intelligent handling of keyboard focus even when in "focus follows mouse"
mode. That means:
- When switching workspaces, focus goes either to the window
under the mouse, or the window on that workspace which last
had the focus (either one would be fine). Openbox does this
pretty well; icewm's focus is pretty random.
- When resizing a window that has focus, it should keep the
focus even if it resizes out from under the mouse.
- Click in a window focuses (but doesn't necessarily raise) that window.
(Again, openbox does this well, icewm doesn't.)
- A new window or dialog gets focused.
- A way of setting up rules for different windows:
e.g. I want xchat to be on all workspaces, or I want moonroot to
come up with no window borders. I shouldn't have to do a
right-click-and-menu-selection operation every single time I
run xchat.
- Easy configuration for starting apps from a menu on the root
window or via key bindings (or, ideally, both). No dependence on
having a panel (which was why I quit using xfce4).
Both icewm and openbox handle this well.
- A way of navigating among windows using only the keyboard.
Both openbox and icewm allow for easy movement between
workspaces and moving windows between workspaces, but in both
of them, alt-tab toggles between the current app and the last
app on that workspace, and there seems to be no way to get to
anything besides those two.
Tags: linux, X11, window managers
[
18:36 Aug 09, 2004
More linux |
permalink to this entry |
]
Sun, 08 Aug 2004
Hey, cool! The
Linux
Picnix 13 T-shirts came in a women's version!
Looked like they had a bunch -- I hope they don't end up with
too many extras and regret making them, 'cause they're very nice
and I'd love to see this catch on.
(It's black, so maybe not too useful outdoors, but it looks great.)
(Followup: actually it's very thin fabric and even outdoors it's
okay.)
The picnic was fun, too, and well organized.
Oracle sponsored the food.
Thanks to Google, Oracle, and the Linux Picnix crew
(Bill Kendrick, Bill Ward and whoever else helped out).
Tags: linux, conferences, chix
[
00:05 Aug 08, 2004
More linux |
permalink to this entry |
]
Wed, 04 Aug 2004
The SF Chronicle this morning had a little note headlined,
"HP first to introduce Linux-based laptop".
Aside from the obvious error in "first" (the article itself admits
that several smaller manufacturers have been selling linux laptops
for quite some time), I wondered if this was real, or just another
of HP's attempts to get credit for supporting Linux without actually
risking Microsoft's wrath by selling any product.
So I went to HP's web site. No mention on the top level, so I tried
searching for "linux laptop" and got nothing useful. So I did some
clicking around to find the particular model mentioned in the
article (nx5000) and eventually found it (under business systems).
That listing did indeed list SuSE as an OS option. But clicking
through to buy or customize the machine took me to screens where
Windows XP was the only option, and the lowest price was the Windows
price (the Linux price is supposed to be $60 lower).
Later, it occurred to me that HP calls them "notebooks" rather than
"laptops", so I went back and searched for "linux notebook". This
gave several false hits, including a page on "Linux solutions from
HP" with a link to "Products", which eventually leads me to a page
where they're apparently offering drivers, but no hardware.
An excellent example of Jakob Nielsen's Alertbox topic this week,
"Deceivingly
Strong Information Scent Costs Sales".
The search also led me to a press release which was obviously the
basis for the Chron article, and a generic "Business Products" page
that looks like one I probably already went through in my search
earlier today that led to screens offering only Windows XP.
I can only conclude that this is another fakie HP marketing ploy
to claim to be supporting linux, while having no intention of
actually offering it. Mark my words, in a few months HP will
announce that it's no longer offering this option because strangely,
customers didn't buy very many of them. (HP has pulled this prank
three or four times before, with desktop machines.)
I very much hope HP proves me wrong this time, and updates its
sales pages to offer the OS option its press release is claiming.
Tags: linux, marketing
[
19:40 Aug 04, 2004
More linux |
permalink to this entry |
]
Tue, 03 Aug 2004
I just got back from Linuxworld.
The exhibit floor isn't very different from last year.
A few more desktop booths, a few fewer embedded and small (e.g.
PDA) booths, still dominated by server oriented exhibits --
clustering, network monitoring and similar sysadmin tools.
The "Dot Org Pavilion" was quite a bit bigger than last year
(which I like, since that's where most of the interest is for me).
The most interesting corporate booth was Psion, which was showing
a small device midway between a large PDA (like their old Revo)
and a small subnotebook (like my Vaio SR). It has a keyboard that
looks smaller than my Vaio's, but which types very well,
surprisingly comparable to the Sony. It uses CF as its main disk,
but also has a SD/MMC slot, and PCMCIA and USB (no actual hard drive).
And a touchscreen. They claim 8 hours battery life.
They told Dave that it might sell for around
$1000-1500 (too much, probably because of the touchscreen).
They've been trying to sell
the hardware as a WinCE box to corporate buyers and vertical
markets, and I guess it isn't doing well, so they put linux on it
(a Debian variant) and brought two of them to the show to gauge
interest. It looked like they weren't expecting any interest at
all: their booth was spare, with a table with two of the devices on
it, one guy who seemed to know something and two women who just
stood around and didn't seem inclined to talk to anyone or answer
questions. No fliers, no sign, no nuthin. There were people
crowded around the booth (not thickly, but a few) both times I
passed. Perhaps they'll decide there's enough interest to go ahead.
The linuxastronomy.com guy
was there again (with a friend) showing
off a live homebuilt seismometer, recording on the show floor.
Very cool. Someone who came to visit the booth showed his latest
hack, a knoppix that boots from an NTFS partition (so it's fast and
doesn't require a CD). I suggested he show it to the open source in education
guy in the booth next door, since I'd just been
commiserating with him about how hard it is to get people at schools
(or any Windows users, really) to try something like Knoppix.
The Mozilla booth was doing great and always had a crowd around it.
Apparently they sold out of the plush firefox toys immediately,
surprising everyone since they hadn't been selling on the web site.
The Debian and local LUG (shared between LUGoD, SVLUG, PenLUG and
BayLUG) booths were both doing well, and the Gentoo booth always had
a few visitors typing on the demo machines. The Fedora staffer
looked lonely; hardly anyone seemed interested in the Fedora booth.
I did my usual quick survey of which of the big booths were running
linux in their booth. Oracle and Redhat were clear winners, with no
definite Windows boxes (a few in each case which were running full
screen presentation software so I couldn't tell what the OS was).
Sun was the worst, with only one Linux box I saw, and the rest all
Windows: no Solaris that I saw. AMD leaned toward Linux (maybe
60%), Veritas leaned the other way (60% Windows) and IBM was about
50-50 (no better than last year).
The "Golden Penguin Bowl" was strange. It's a trivia contest
between two teams of luminaries; but in this case, one team (the
Nerds) was three big-name Linux luminaries, and the other team (the
Geeks) was all Apple or BSD people with no connection to Linux at
all. Dave kept wondering, "And his connection to Linux is ...?"
About half the questions dealt with sci-fi rather than
computers, and the Geeks had a strong lead there, but the Nerds
cleaned up on the computer questions and ended up with the prize.
Timothy D. Witham (of OSDL) was particularly impressive with his
knowledge of obscure CPUs, and got several major ovations from the
audience after correcting the judges.
Then it was BOF time, and I chose the Zeroconf BOF. I'd done a
little research into Zeroconf a month ago for a possible article,
but hadn't been able to get it to work, and ran into a snag that
the sourceforge page on zcip says it's been removed because of
Apple asserting intellectual property rights over the protocol.
I hoped that the BOF would shed some light on what I was missing.
Boy, was that wrong! The BOF was a long advertisement for Apple,
run by an Apple employee, Stuart Cheshire,
and an Apple user (and former employee? I wasn't clear). It
consisted of a powerpoint presentation followed by numerous demos of
two Mac laptops talking to each other, or a Mac laptop talking to
some obscure-but-nifty piece of hardware that implements rendezvous
(or whatever Apple is calling it now). After about an hour of this
I finally asked where linux fit in, and the answer boiled down to,
"Gee, linux doesn't really do zeroconf very well and we wish it did,
we're hoping someone writes it."
He mentioned zcip as one of the options, so I made the mistake of
asking about the message on sourceforge about Apple's patents, and
got a long lecture on how beneficent apple was and how they'd never
sue anyone other than in self defense, and anyone who says otherwise
is probably trying to push an anti-patent political agenda,
but poor apple has to have patents to protect itself from mean
companies. And besides, he thinks the patent expired a couple
of weeks ago, but it was a good patent, which may seem obvious
now but probably wasn't back when it was issued.
Hoping to get back on track after unintentionally derailing the
conversation, I asked what Linux users should use given
that his first suggestion, zcip, was unavailable. He mentioned
HOWL, by Scott Herscher (who was there at the session),
which implements all three parts of zeroconf.
Googling later at home, I found it at
Porchdog
software. It's apparently a mixture of BSD licensed code and
Apple licensed code. But it doesn't seem to require signing the
Apple developer agreement to download it. I haven't tried it yet.
He said that the zcip part of zeroconf would be much better
implemented in the kernel, where it would be only about twelve lines
of additional code. (He seemed to be talking about the "choose a
random number, check for traffic, and back off if it's occupied"
portion. Isn't there more to zcip than that? Or is the rest
already there in the kernel?) He wondered why no one was adding it.
Dave asked, "So, why don't you do it? C'mon, just twelve lines!"
Stuart was not amused, and said that while writing the twelve lines
wasn't a problem, setting up the build environment for the kernel
takes a lot of time, far more than he had available.
Speaking of that Apple license and agreement,
Stuart says that there's nothing
prohibiting Apple licensed code like zeroconf from being distributed
in any linux distro, and he seemed surprised and perturbed that no
distro was shipping it (of course, the fact that it just released
a couple weeks ago might have something to do with that. :-)
Other bits I wasn't previously clear on: a machine can have a link local
address and a regular IP address concurrently, on the same ethernet
card. I'm not clear how this shows up in ifconfig (if it does).
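(My guess, unverified, is that it appears as an alias interface,
something like
ifconfig eth0:0 169.254.1.2 netmask 255.255.0.0 up
so both eth0 and eth0:0 would show up; ip addr show eth0 would list
both addresses on one interface.)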
Link local addressing is not required for the other two parts of
zeroconf: MDNS and service discovery should work even over normal
IP addresses.
Some of the docs on the web about zeroconf say that MS is backing a
service discovery protocol called SLP, rather than the DNS-SD protocol
used by Apple, and that most people think SLP scales much better
than DNS-SD. As presented at the BOF, both Apple and MS support
Rendezvous as it exists now (but then why point out that Apple's
Rendezvous is now available for Windows? If Windows already does
it, why would anyone need to register with Apple and download
different software? I wasn't clear on that) and UPNP, backed by
MS, is losing ground and probably won't win in the long run.
Perhaps UPNP is a renaming of SLP. I declined to ask about the
scaling issues, having already unwittingly caused enough trouble
asking about patents.
Great quote from Stuart, possibly sufficient to justify sitting
through an hour and a half of Apple advertising:
he was talking about somebody having trouble with a networked
printer which it turned out had been configured to have a
non-default IP, then returned to Fry's, and added
"Fry's is the Silicon Valley Hardware Lending Library."
Incidentally, the BOF area was supposed to have wi-fi, with an
essid that was given on the signs, but I got "access point out of
range". I guess I really should try one of the other drivers,
linux-wlan or hostap. Dancer says hostap is easier to set up,
and apparently it uses the normal wireless-tools, unlike linux-wlan
which uses its own set of tools. I don't need hostap mode, but
if it's a good driver for normal client use then I guess that's
what matters. Though the Apple people didn't see a network either,
but maybe that's because they didn't know about the essid.
Tags: linux, conferences, linuxworld
[
23:36 Aug 03, 2004
More conferences |
permalink to this entry |
]
Someone sent me mail asking about my CD label page, and that
inspired me to fire up gLabels for the first time in a while.
Debian has 1.93.3 (on sarge, anyway) and it's looking very nice!
There's now a separate pane for object properties, which I'm not
entirely crazy about (takes up a lot of screen space relative to
using a dialog) but the most important thing is that the label
outlines now draw even if covered by an image. That means that
you can reasonably line up a CD label image with the template now,
which makes my old patch to gimp-print much less needed.
The gimp-print patch might not be needed at all, if libgnomeprint
could print with high quality. I wonder if that's coming?
I haven't actually tried printing to see what the quality is
like now. I should probably snarf the latest gimp-print and
update my patch anyway.
Tags: linux, printing
[
13:34 Aug 03, 2004
More linux |
permalink to this entry |
]
Sat, 31 Jul 2004
For some reason X on the laptop hasn't been seeing the external
USB mouse. But last night I got it working again. Turns out that
/dev/input/mouse0 no longer works; I have to use /dev/input/mice
because the mouse number changes each time it's plugged in
(which I don't think was a problem with earlier kernels).
Thanks to Peter S. for helping me track the problem down.
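For reference, the relevant XF86Config-4 section ends up looking
something like this (the identifier and protocol are whatever your
config already uses):
Section "InputDevice"
    Identifier "USB Mouse"
    Driver     "mouse"
    Option     "Protocol" "IMPS/2"
    Option     "Device"   "/dev/input/mice"
EndSection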
I also learned (unrelated to the mouse issue) about a couple of
very useful Debian apps, deborphan and debfoster, for finding
orphaned and no longer needed libraries. I'd always wanted
something like that to help clean up my crufty debian systems.
Tags: linux, X11, debian
[
19:39 Jul 31, 2004
More linux |
permalink to this entry |
]
Sat, 24 Jul 2004
We had dinner with Tim and Pam last night (visiting for AstroCon)
at "Skates on the Bay" in Berkeley -- excellent food.
So I told Tim about the virus attack on
Shallow-Sky
a few days ago, perpetrated in his name.
Messages were sent to the list, ostensibly from his address,
containing various attachments which were obviously Windows viruses.
Unfortunately, I was out on a hike when the attack happened, so five
of them slipped through before I found out about it and blocked
his address in order to investigate further.
The virus turned out to be
W32.Beagle.AG@mm
(W32 is obvious, and f3ew tells me that "mm" stands for "mass
mailing").
Pasc gave me a procmail rule to block this virus, to put in
smartlist's rc.submit. It should have worked, but it didn't,
so I ended up using a more general rule to block all base64 encoded
attachments (that'll probably piss off some people who like to send
images to one of our other lists, but Dave says he's asked them not
to do that anyway and doesn't mind having the rule there).
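The general rule is nothing fancy -- it's roughly of this form
(a sketch, not the exact rule, and obviously too broad for most lists):
:0 B
* ^Content-Transfer-Encoding:.*base64
/dev/null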
Of course, the messages weren't really coming from Tim: he doesn't
even use Windows (Mac and Sun, usually). It turns out they're
coming from a Comcast address, which doesn't narrow things down
much. There are nine @comcast.net addresses on the list, so I
notified them privately, but it could easily be someone else or
even someone off the list (though I suspect it's a list member,
since it's someone who has Shallow in their addressbook).
I suppose I'll probably never know who it was. The "Tim" attacks
have stopped (so I don't even know for sure that my filter works,
though it worked for a test message I sent) but I've gotten two
attempts spoofing Peter J (who is not currently on the list, so
they bounced with "Not on accept list" before they could test the
filter).
Grumble grumble Windows security grumble ...
Tags: linux, security
[
15:13 Jul 24, 2004
More linux |
permalink to this entry |
]
Tue, 20 Jul 2004
After a year of no printing on sid, I went back to sarge to see if I could
still print from there.
When I dist-upgraded my ancient sarge, one of the questions it asked me
was whether to replace printers.conf. That sounded suspicious: I saved
the old printers.conf, then allowed it to replace it with its new
version.
Well, sure enough, with the new printers.conf it didn't know about
my Epson, and when I went to the cups admin url to add it, there
was no "add printer" button. Just like I'd always seen in sid.
In sid, someone once gave me the direct url to "add a printer",
but when I followed it, I didn't get a working setup anyway.
I decided to try copying my old printers.conf on top of the new one.
And voila, it worked! Printing works okay from sarge. (It still has
the problem of the cups test page outlines not aligning well with the
physical printer page, so it may not work for printing labels, but
it's a start.)
So I moved over to sid, and tried the same printers.conf. Voila,
something came out of the printer, the first I've ever seen that
happen from sid! It didn't entirely work: I printed a few lines
using lpr, and the printer printed those lines but then didn't
eject the page, and I had to wrestle with the printer to get the
paper out. So all is not quite well in sid land, but it's much
farther along than it was using only the tools available in sid
(rather than my two-year-old printers.conf originally configured
on a much older sarge).
The other interesting file that upgrade asked me about was
epson.conf, which turns out to be for the epson scanner, not the
epson printer. Perhaps by using that (I saved the old sarge file)
I'll eventually be able to get scanning working on sid! That
would be lovely. For now, I'm using sarge a lot more.
Tags: linux, debian, printing
[
23:09 Jul 20, 2004
More linux |
permalink to this entry |
]
Mon, 12 Jul 2004
Someone posted on #gimp his
review
of the new gtk2 file chooser, someone else pointed to the
file
chooser spec. I spent most of the afternoon angry. The file
chooser has no text field to type in or paste a file path!
What is it about the idiot gtk2 designers that they continually
insist on removing accessibility features, like keyboard access,
and moving everyone to a mouse-only form of interaction? Do they
want to keep keyboard-only users from using their library?
Are they getting kickbacks from doctors treating RSI injuries?
Or are they actually Windows developers who want to make sure that
linux is even less usable than Windows?
Sometimes I think that just about the time linux is getting usable,
I'm going to have to find some other OS to use, because all the
linux apps will have purged all accessibility features and it
will be too painful to try to get any work done.
Grr.
Tags: linux, gnome
[
20:00 Jul 12, 2004
More linux |
permalink to this entry |
]
Fri, 09 Jul 2004
This evening, thanks to
Rob Weir's
Debian Font Guide and some suggestions from Rob himself, I finally
got rid of that ugly oversized scaled-bitmap helvetica (or similar)
font that has plagued all my Debian installs since I first started
using Debian. It turns out that it comes from the gsfonts-x11 package
(fonts used by ghostscript and gv). Remove gsfonts-x11, and gtk windows now use
a much much smaller, clean, truetype helvetica font. Alternately,
keeping gsfonts-x11 installed, but removing /usr/lib/X11/fonts/Type1
from the font path, gives me a medium sized, cleanly rendered
helvetica in gtk windows. I may have trouble viewing postscript
documents in gv; we'll see. But having all my other windows use
clean fonts makes it worth risking some gv breakage.
Other things I had to do: install x-ttcidfont-conf (defoma was
already installed); disable the font server (comment out the
unix/:7100 line from XF86Config-4, plus update-rc.d -f xfs remove);
reorder the FontPath lines in XF86Config-4 as suggested in the font
guide, and remove (comment out) the
/var/lib/defoma/x-ttcidfont-conf.d/dirs/CID line.
Strangely, even if I list the Type1 directory after all the others
in XF86Config-4, it still takes precedence over all the other
helveticas. Neither Rob nor I could figure out why.
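For the curious, the font-related part of my XF86Config-4 now looks
roughly like this (paths illustrative -- the exact list comes from
the font guide):
Section "Files"
    # font server disabled:
    # FontPath "unix/:7100"
    FontPath "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
    # FontPath "/var/lib/defoma/x-ttcidfont-conf.d/dirs/CID"
    FontPath "/usr/lib/X11/fonts/misc"
    FontPath "/usr/lib/X11/fonts/Type1"
EndSection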
Why do packages include fonts like that? Abiword used to have a
font like that on Redhat. It's almost always for helvetica (which
has a gazillion other implementations anyway, so it's not as though
they have to worry that they won't be able to find a helvetica on
the system if they don't install theirs).
Tags: linux, fonts
[
19:00 Jul 09, 2004
More linux |
permalink to this entry |
]
Wed, 07 Jul 2004
Got an account on alioth, akkana-guest.
Discovered that the tuxracer problem I've been having isn't actually
Debian sid having broken DRI, but merely some problem with the
commercial tuxracer (probably not loading the gl libs properly
or something). Free tuxracer still works. Yay.
Tags: linux, debian, X11, games
[
20:00 Jul 07, 2004
More linux |
permalink to this entry |
]
Mon, 05 Jul 2004
Cool -- moray on #debian-women worked with me a little on midi,
and I finally have timidity working!
The trick is that there's a tarball I downloaded a while ago
(which means I no longer have the url, but I think it was linked
from the main timidity site) which extracts into a timidity.cfg
and an "instruments" directory (plus a couple of readmes that
don't explain anything, which is why I hadn't unpacked this before).
Turns out that if you put that .cfg into /etc/timidity.cfg, then add
a line that says
dir /usr/local/share/timidity
(or wherever you unpack the tarball, as long as it contains
"instruments") then voila, timidity starts working.
That means not only that I can play .midi files (big whoop), but,
more important, I can use all those programs like denemo that come
in the education-music package, plus all the java music API stuff
like the CA music code (for some reason, all these things output
only midi!)
So I played with denemo a little (found out how to enter a chord)
in brief spurts, between long interludes in the house because it's
TOO HOT out here in the office.
Tags: linux, audio
[
20:00 Jul 05, 2004
More linux |
permalink to this entry |
]
Sun, 27 Jun 2004
I got swsusp working on blackbird. Re-reading the
kernel documentation with a night's sleep behind me revealed that
at the very end of swsusp.txt, there's code for a C program to
suspend non-ACPI machines. Worked fine! swsusp in 2.6.7 doesn't
quite resume X properly (I had to ctrl-alt-FN back and forth a few
times before I saw my X screen) but it's progress, anyway.
Tags: linux, suspend, laptop
[
21:00 Jun 27, 2004
More linux |
permalink to this entry |
]
Fri, 25 Jun 2004
Great talk at PenLUG by Chander Kant of Linux Certified,
on Linux on Laptops. It was much the same talk he gave a month ago
at SVLUG, except at PenLUG he got to give his talk without being
pestered by a bunch of irritating flamers, and it was possible for
people to ask real questions and get answers.
The biggest revelation: "Suspend to Disk" in the 2.6 kernel is
actually ACPI S4 suspend, and won't do anything on a machine that
doesn't support ACPI S4. Why can't they say that in the help?
Sheesh!
Tags: linux, laptop
[
00:00 Jun 25, 2004
More linux |
permalink to this entry |
]