Shallow Thoughts : tags : ubuntu
Akkana's Musings on Open Source Computing and Technology, Science, and Nature.
Mon, 17 Jan 2022
For many years, I used extlinux as my boot loader to avoid having to
deal with the annoying and difficult grub2. But that was on MBR machines.
I never got the sense that extlinux was terribly well supported
in the newer UEFI/Secure Boot world. So when I bought my current
machine a few years ago, I bit the bullet and let Ubuntu's installer
put grub2 on the hard drive.
One of the things I lost in that transition was a boot splash image.
Read more ...
Tags: linux, debian, ubuntu, grub, boot
[ 19:29 Jan 17, 2022 | More linux | permalink to this entry ]
Tue, 28 Dec 2021
When I bought my new laptop several years ago, I chose Ubuntu as
its first distro even though I usually run Debian.
For one thing, Ubuntu has an excellent installer.
Second, they seem to do more testing on cutting-edge hardware, so I
thought the chances were better that hardware on a brand-new laptop
would be supported.
Ubuntu has been working fine for a couple of years, but with 21.10
("Impish Indri") it took a precipitous downturn.
Read more ...
Tags: linux, boot, grub, debian, ubuntu
[ 19:53 Dec 28, 2021 | More linux | permalink to this entry ]
Sun, 05 Jul 2020
... which is what I have now on my Carbon X1 gen 7 laptop.
Early reviews of this particular laptop praised its supposedly
excellent speakers (as laptops go), but that has never been apparent on Linux.
It is an improvement over the previous version -- the microphone works
now, which it didn't in 19.10 -- though in the
meantime I acquired a Samson Go Mic based on
Embedded.fm's
recommendation (it works very well).
But although the internal mic works now, the sound from the built-in
speakers is just terrible, even worse than it was before.
The laptop has four speakers, but Ubuntu is using only two.
Read more ...
Tags: linux, ubuntu, lenovo
[ 12:44 Jul 05, 2020 | More linux | permalink to this entry ]
Thu, 07 May 2020
Controlling PulseAudio from the Command Line
Controlling
PulseAudio via pavucontrol is all very nice, but it's time
consuming and fiddly: you have to do a lot of clicking in a lot of
tabs any time you want to change anything.
So I've been learning how to control PulseAudio from the command line,
so I can make aliases to switch between speakers quickly, or set audio
defaults at login time.
That was going to be a blog post, but I think this is going to be an
evolving document for quite some time, so instead, I just made it a
page on the Linux section of my website:
Controlling PulseAudio from the Command Line.
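As a sampler, here's roughly what the command-line equivalents look
like using pactl. This is a minimal sketch, and the sink name in the
set-default-sink line is hypothetical -- substitute a name from your
own pactl output:
$ pactl list short sinks                      # list output devices
$ pactl set-default-sink alsa_output.pci-0000_00_1f.3.analog-stereo
$ pactl set-sink-volume @DEFAULT_SINK@ 60%    # volume on the default sink
$ pactl set-sink-mute @DEFAULT_SINK@ toggle   # mute/unmute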
I also wrote a Python script,
pulsehelper.py,
that uses some of these commands to provide clearer output and easier
switching. It even uses color and bold fonts if you have the
termcolor module installed. Like the document, this script is likely
to be evolving for quite some time.
Happy listening and recording!
Tags: linux, audio, pulseaudio, ubuntu
[ 12:21 May 07, 2020 | More linux | permalink to this entry ]
Mon, 04 May 2020
(Note: this is not an alphabet post. You may have noticed I'm
a little stuck on I. I hope to get un-stuck soon; but first, here
are a pair of articles on configuring audio on Linux.)
I'm a very late adopter for PulseAudio. In the past, on my minimal
Debian machines,
nearly
any sound problem could be made better by apt-get remove pulseaudio.
But pulse seems like it's working better since those days,
and a lot of applications (like Firefox) require it, so it's
time to learn how to use it. Especially in these days
of COVID-19 and video conferencing, when I'll need to be using the
microphone and speakers a lot more. (I'd never actually had a reason
to use the microphone on my last laptop.)
Beginner tutorials always start with something like "Go into System
Preferences and click on Audio", leaving out anyone who doesn't use
the standard desktop. The standard GUI PulseAudio controller is
pavucontrol. It has four tabs.
Read more ...
Tags: linux, audio, pulseaudio, ubuntu
[ 18:04 May 04, 2020 | More linux | permalink to this entry ]
Sun, 09 Jun 2013
I recently went on an upgrading spree on my main computer. In the hope
of getting more up-to-date libraries, I updated my Ubuntu to 13.04
"Raring Ringtail", and Debian to unstable "Sid". Most things went fine
-- except for Firefox.
Under both Ringtail and Sid, Firefox became extremely unstable.
I couldn't use it for more than about fifteen minutes before it would
freeze while trying to access some web resource. The only cure when
that happened was to kill it and start another Firefox.
This was happening with the exact same Firefox -- a 21.0 build from
mozilla.org -- that I was using without any problems on older versions
of Debian and Ubuntu; and with the exact same profile. So it was
clearly something that had changed about Debian and Ubuntu.
The first thing I do when I hit a Firefox bug is test with
a fresh profile. I have all sorts of Firefox customizations, extensions
and other hacks. In fact, the customizations are what keep me tied
to Firefox rather than jumping to some other browser. But they do,
too often, cause problems. I have a generic profile I keep around
for testing, so I fired it up and used it for browsing for a day.
Firefox still froze, but not as often.
Disabling Extensions
Was it one of my extensions?
I went to Tools->Add-ons to try disabling them all ...
and Firefox froze. Bingo! That was actually good news. Problems like
"Firefox freezes a lot" are hard to debug. "Firefox freezes every time
I open Tools->Add-ons" is a whole lot easier.
Now I needed to find some other way of disabling extensions to see if
that helped.
I went to my Firefox profile directory and moved everything
in the extensions directory into a new directory I made called
extensions.sav. Then I started moving them back one by one,
each time starting Firefox and calling up Tools->Add-ons.
It turned out two extensions were causing the freeze: Open in Browser
and Custom Tab Width. So I left those off for the time being.
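If you want to try the same bisection, the shell version goes something
like this -- a sketch, with a hypothetical profile directory name
(yours will be some random string; quit Firefox first):
$ cd ~/.mozilla/firefox/xxxxxxxx.default
$ mkdir extensions.sav
$ mv extensions/* extensions.sav/
# then move them back one at a time, restarting Firefox each time:
$ mv extensions.sav/someextension.xpi extensions/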
Disabling Themes
Along the way, I discovered that clicking on Appearance in
Tools->Add-ons would also cause a freeze, so my visual
theme was also a problem. This wasn't something I cared about:
some time back when Mozilla started trumpeting their themeability,
I clicked around and picked up some theme involving stars and planets.
I could live without that.
But how do you disable a theme?
Especially if you can't go to Tools->Add-ons->Appearance?
Turns out everything written on the web on this is wrong. First,
everything on themes on mozilla.org assumes you can get to that
Appearance tab, and doesn't even consider the possibility that you
might have to look in your profile and remove a file.
Search further and you might find references to files named
lightweighttheme-header and lightweighttheme-footer, neither of
which existed in my profile.
But I did have a directory called lwtheme.
So I removed that, plus four preferences in prefs.js that included
the term "lightweightThemes".
After a restart, my theme was gone, I was able to view that Appearance tab,
and I was able to browse the web for nearly 4 hours before Firefox hung again.
Darn! That wasn't all of it.
Debugging the environment
But soon after that I had a breakthrough.
I discovered a page on my bank's website that froze Firefox every time.
But that was annoying for testing, since it required logging in then
clicking through several other pages, and you never know what a bank
website might decide to do if you start logging in over and over.
I didn't want to get locked out.
But then I was checking an episode in one of the podcasts I listen to,
which involved going to the link
http://downloads.bbc.co.uk/podcasts/radio4/moreorless/rss.xml
-- and Firefox froze, on a simple RSS link. I restarted and tried
again -- another freeze. I'd finally found the Rosetta stone,
something that hung Firefox every time. Now I could do some serious testing!
I'd had friends try this using the same version of Firefox and Ubuntu,
without seeing a freeze. Was it something about my user environment?
I created a new user, switched to another virtual console (Ctrl-Alt-F2)
and logged in as my new user, then ran X. This was a handy way to test:
I could get to my normal user's X session in Ctrl-Alt-F7, while the new
user's X session was on Ctrl-Alt-F8. Since I don't have Gnome or KDE
installed on this machine, the new user came up with a default Openbox
session. It came up at the wrong resolution -- the X11 in the newest
Linux distros apparently doesn't read the HDMI monitor properly --
but I wasn't worried about that.
And when I ran Firefox as the new user (letting it create a new profile)
and middlemouse-pasted the BBC RSS URL, it loaded it, without freezing.
Now we're getting somewhere.
Now I knew it was something about my user environment.
I tried copying all of ~/.config from my user to the new user. No hang.
I tried various other configuration files. Still no hang.
The X initialization
I'll skip some steps here, and just mention that in trying to fix the
resolution problem, so I didn't have to do all my debugging at 1024x768,
I discovered that if I used my .xinitrc file to start X, I'd get a freezy
Firefox. If I didn't use my .xinitrc, and defaulted to the system one,
Firefox was fine. Even if I removed everything else from my .xinitrc,
and simply ran openbox from it, that was enough to make Firefox hang.
Okay, what was the system doing? I poked around /etc/X11:
it was running /etc/X11/Xsession. I copied that file to my
.xinitrc and started X. No hang.
Xsession does a bunch of things, but one of the main things it does is run
every script in the /etc/X11/Xsession.d directory.
So I made a copy of that directory inside my home directory, and modified
.xinitrc to execute those files instead. Then I started moving them
aside to see which ones made a difference.
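The setup was roughly this -- a crude approximation, since some
Xsession.d scripts may expect helper functions that the real
/etc/X11/Xsession defines first:
$ cp -a /etc/X11/Xsession.d ~/xsession.d
# then ~/.xinitrc sources each script in order, the way Xsession does;
# move scripts out of ~/xsession.d one by one to bisect:
for script in $HOME/xsession.d/*; do
  . $script
done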
And I found it. /etc/X11/Xsession.d/75dbus_dbus-launch was the
file that mattered.
75dbus_dbus-launch takes the name of the program that's
going to be executed -- in this case that was x-session-manager, which
links to /etc/alternatives/x-session-manager, which links to
/usr/bin/openbox-session -- and instead runs
/usr/bin/dbus-launch --exit-with-session x-session-manager.
Now that I knew that, I moved everything aside and made a little
.xinitrc that ran /usr/bin/dbus-launch --exit-with-session openbox-session.
And Firefox didn't crash.
Dbus
So it all comes down to dbus. I was already running dbus: ps shows
/usr/bin/dbus-daemon --system running -- and that worked fine
for everything dbussy I normally do, like run "gimp image.jpg" and
have it open in my already running GIMP.
But on Ringtail and Sid, that isn't enough for Firefox. For some
reason, on these newer systems, Firefox requires a second
dbus daemon -- it shows up in ps as
/usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
-- for the X session. If it doesn't have that, it's fine for a while,
and then, hours later, it will mysteriously freeze while waiting for
a network resource.
Why? I have no idea. No one I've asked seems to know anything about
how dbus works, the difference between system and session dbus daemons,
or why any of it would have this effect on Firefox.
I filed a Firefox bug,
Bug 881122,
though I don't have much hope of anyone being interested in a bug
that only affects Linux users using nonstandard X sessions.
But maybe I'm not the only one. If your Firefox is hanging and
you found your way here, I hope I've given you some ideas.
And if anyone has a clue as to what's really happening and why
dbus would have that effect, I'd love to hear from you.
Tags: firefox, mozilla, debugging, linux, debian, ubuntu, dbus
[ 20:08 Jun 09, 2013 | More linux | permalink to this entry ]
Wed, 15 May 2013
Checking versions in Debian-based systems is a bit of a pain.
This happens to me a couple of times a month: for some reason I need
to know what version of something I'm currently running -- often a
library, like libgtk. aptitude show will tell you all about a package
-- but only if you know its exact name. You can't do aptitude show libgtk
or even aptitude show '*libgtk*' -- you have to know that the
package name is libgtk2.0-0. Why is it libgtk2.0-0? I have no idea,
and it makes no sense to me.
So I always have to do something like
aptitude search libgtk | egrep '^i'
to find out what packages I have installed that match the name libgtk,
find the package I want, then copy and paste that name after typing
aptitude show.
But it turns out it's super easy in Python to query Debian packages using the
Python
apt package. In fact, this is all the code you need:
import sys
import apt

cache = apt.cache.Cache()
pat = sys.argv[1]
for pkgname in cache.keys():
    if pat in pkgname:
        pkg = cache[pkgname]
        instver = pkg.installed
        if instver:
            print pkg.name, instver.version
Then run
aptver libgtk
and you're all set.
In practice, I wanted nicer formatting, with columns that lined up, so
the actual script is a little longer. I also added a -u flag to show
uninstalled packages as well as installed ones. Amusingly, the code to
format the columns took about twice as many lines as the code that does the
actual work. There doesn't seem to be a standard way of formatting
columns in Python, though there are lots of different implementations
on the web. Now there's one more -- in my
aptver
on github.
Tags: linux, debian, ubuntu, python, programming
[ 16:07 May 15, 2013 | More linux | permalink to this entry ]
Mon, 04 Mar 2013
My Lenovo laptop has a nifty button, Fn-F5, to toggle wi-fi and bluetooth
on and off. Works fine, and the indicator lights (of which the Lenovo
has many -- it's quite nice that way) obligingly go off or on.
But when I suspend and resume, the settings aren't remembered.
The machine always comes up with wireless active, even if it wasn't
before suspending.
Since wireless can be a drain on battery life, as well as a potential
security issue, I don't want it on when I'm not actually using it.
So I wanted a way to turn it off programmatically.
The answer, it turns out, is rfkill.
$ rfkill list
0: tpacpi_bluetooth_sw: Bluetooth
Soft blocked: yes
Hard blocked: no
1: phy0: Wireless LAN
Soft blocked: yes
Hard blocked: no
tells you what hardware is currently enabled or disabled.
To toggle something off,
$ rfkill block bluetooth
$ rfkill block wifi
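And to turn them back on when you want them again:
$ rfkill unblock bluetooth
$ rfkill unblock wifi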
Type rfkill -h for more details on arguments you can use.
Fn-F5 still works to enable or disable them together.
I think this is being controlled by /etc/acpi/ibm-wireless.sh,
though I can't find where it's tied to Fn-F5.
You can make it automatic by creating a script in /etc/pm/sleep.d/.
(That's on Ubuntu; of course, the exact file location may vary with distro
and version.) To disable wireless on resume, do this:
#! /bin/sh
case "$1" in
resume)
rfkill block bluetooth
rfkill block wifi
;;
esac
exit $?
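pm-utils only runs hook scripts that are executable, so remember to set
that bit. The filename here is just an example I made up; the numeric
prefix controls ordering:
$ sudo chmod +x /etc/pm/sleep.d/55wireless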
Of course, you can also tie that into other things, like your current
network scheme, or what wireless networks are visible (which you can
get with iwlist wlan0 scan).
Tags: linux, ubuntu, laptop, tip
[ 19:46 Mar 04, 2013 | More linux/laptop | permalink to this entry ]
Wed, 14 Nov 2012
(This is a guest post by David North.)
Debian developers tend to get overzealous in their dependency lists, probably to avoid constant headaches from fringe cases whose favorite programs fail because they also need some obscure library or package support (and yes, I'm talking to you, Ubuntu). But what if you don't want some goofy dependency (and the cascade of other crap it pulls in?)
As a small aside, aptitude/apt-get hold <pkg> is terrific if you just want to keep a package at a pre-horkage level, but for some obscure reason you can't "hold" a package that isn't installed. So that won't work as of 11/2012.
You can however generate an equivalent package with a higher version number and install it, which naturally blocks the offending package. Even better, the replacement package need do nothing at all other than satisfy the apt database. Even better, the whole thing is incredibly simple.
First install the "equivs" package. This will deliver two programs:
- equivs-control
- equivs-build
Officially you should start with 'equivs-control <pkgname>' which will create a file 'pkgname' in the current directory. Inside are various fields but you only need eight and can simply delete the rest. Here's approximately what you should end up with for a fictional package "pkgname":
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: pkgname
Version: 1:42
Maintainer: Your Name <your@email.address>
Architecture: all
Description: fake pkgname to block a dumb dependency
The first three lines are just boilerplate, though you may have to increment the standards-version at some point if you reuse the file. No changes are needed now.
The pkgname does actually have to match the name of the package you want to block. The version must be higher than that of the target package. Maintainer need not be you, but it's a good idea to at least use a name you recognize as yourself. Architecture can be left as "all" unless you're doing something extra tricky. Description is not necessary but a good idea; put your notes here.
The only trick is the version. Note the 1:42 structure here. The first number is the "epoch" in debian-speak, and may or may not be used. In practice I've never seen an epoch greater than one, so I suggest using either 1 or 2 here rather than just leaving it blank. You can see the epoch number in a package when you use aptitude show <pkgname>. The version is the number immediately after the colon, and for safety's sake should be considerably larger than the version you're trying to block (to avoid future updates). I like to use "42" for obvious reasons unless the actual package version is too close. Factoid: if no "epoch" is indicated debian will assume epoch 0, which will not show up as a zero in a .deb (or in aptitude show) but rather as a blank. The version number will have no colon in this event.
Having done this, all you need do is issue the command 'equivs-build path-to-pkgname' (preferably from the same directory) and you get a fake deb to install with dpkg -i. Say goodbye to the dependency.
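Condensed, the whole dance looks like this (the .deb filename is
approximate -- equivs-build prints the real name when it finishes):
$ sudo apt-get install equivs
$ equivs-control pkgname
$ editor pkgname          # fill in the eight fields above
$ equivs-build pkgname
$ sudo dpkg -i pkgname_42_all.deb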
One more trick: once you have your file <pkgname> with the Eight
Important Fields, you can pretty much skip using equivs-control. All it
does is make the initial text file, and it will be easier to edit the
one you already have with a new package name (and rename the file at
the same time). Note, however, this handy file will not necessarily be
useful on other debian-based systems or later installs, so running
equivs-control after a big upgrade or moving to another distro is very
good practice. If you compare the files and they have the same entries,
great. If not, use the new ones.
Tags: linux, debian, ubuntu, install
[ 11:50 Nov 14, 2012 | More linux/install | permalink to this entry ]
Sat, 16 Jun 2012
I ran ubuntu-bug to report a bug. After collecting some
dependency info, the program asked me if I wanted to load the bug
report page in a browser. Of course I did -- but it launched chromium,
where I don't have any of my launchpad info loaded, rather than firefox.
So how do you change the default browser in Ubuntu?
The program that controls that, and lots of similar defaults,
is update-alternatives.
update-alternatives with no arguments gives a long usage statement that
isn't too clear. You need to know the various category names ("groups")
before you can do much. Here's how to get a list of all the groups:
update-alternatives --get-selections
But that's still a long list. To find the entries that might be pointing
to chrome or chromium, I filtered it:
update-alternatives --get-selections | grep chrom
That narrowed it down to two entries:
x-www-browser and gnome-www-browser both pointed
to chromium. So let's try to change that to firefox:
$ update-alternatives --set gnome-www-browser /usr/local/firefox11/firefox
update-alternatives: error: alternative /usr/local/firefox11/firefox for gnome-www-browser not registered, not setting.
Whoops! The problem here is that I'm running a firefox installed from
Mozilla.org, not the one that comes with Ubuntu.
What if I want to make that my default browser?
What does it mean for an application to be "registered"?
Well, no one seems to have documented that.
I found it discussed briefly here:
What is Ubuntu's Definition of a “Registered Application”?,
but the only solutions seemed to involve hand-editing desktop files to
add icons, and there's no easy way to figure out how much of
the desktop file it needs. That sounded way too complicated.
Thanks to Lyz and Maco for the real answer: skip update-alternatives
entirely, and change the symbolic links in /etc/alternatives by hand.
$ sudo rm /etc/alternatives/gnome-www-browser
$ sudo ln -s /usr/local/firefox11/firefox /etc/alternatives/gnome-www-browser
$ sudo rm /etc/alternatives/x-www-browser
$ sudo ln -s /usr/local/firefox11/firefox /etc/alternatives/x-www-browser
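A quick sanity check that the links now point where you expect:
$ ls -l /etc/alternatives/*www-browser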
That was much simpler, and worked fine: now applications that need to
call up a browser will use firefox instead of chromium.
Tags: ubuntu, debian, linux
[ 17:04 Jun 16, 2012 | More linux | permalink to this entry ]
Wed, 30 May 2012
In a previous article I wrote about
how
to use stdeb to turn a Python script
into source and binary Debian/Ubuntu packages.
You can distribute a .deb file that people can download and install;
but it's a lot easier for people to install if you set up a repository,
so they can get automatic updates from you.
If you're targeting Ubuntu, the best way to do that is to set up a
Launchpad Personal
Package Archive, or PPA.
Create your PPA
First, create your PPA.
If you don't have a Launchpad account yet,
create one, add a GPG key, and sign the Code of Conduct.
Then log in to your account and click on Create a new PPA.
You'll have to pick a name and a display name for your PPA.
The default is "ppa", and many people leave personal PPAs as that.
You might want to give it a display name of yourname-ppa
or something similar if it's for a collection of stuff;
or if you're only going to use it for software related to one program or
package, name it accordingly.
Ubuntu requires nonstandard paths
When you're creating your package with stdeb,
if you're ultimately targeting a PPA, you'll only need the source dsc
package, not the binary deb.
But as you'll see, you'll need to rebuild it to make Launchpad happy.
If you're intending to go through the
Developer.ubuntu.com
process, there are specific requirements for version
numbering and tarball naming -- see "Packaging" in the
App
Review Board Guidelines.
Your app will also need to install into unusual locations --
in particular, any files it installs, including the script itself,
need to be in
/opt/extras.ubuntu.com/<packagename> instead of a more
standard location.
How the user is supposed to run these apps (run a script to add
each of /opt/extras.ubuntu.com/* to your path?) is not clear to me;
I'm not sure this app review thing has been fully thought out.
In any case, you may need to massage your setup.py accordingly,
and keep a separate version around for when you're creating the
Ubuntu version of your app.
There are also apparently some
problems
loading translation files for an app in /opt/extras.ubuntu.com
which may require some changes to your Python code.
Prepare and sign your package
Okay, now comes the silly part. You know that source .dsc package
you just made? Now you have to unpack it and "build" it before you
can upload it. That's partly because you have to sign it
with your GPG key -- stdeb apparently can't do the signing step.
Normally, you'd sign a package with
debsign deb_dist/packagename_version.changes
(then type your GPG passphrase when prompted).
Unfortunately, that sort of signing doesn't work here.
If you used stdeb's bdist_deb to generate both binary and
source packages, the .changes file it generates will contain
both source and binary and Launchpad will reject it.
If you used sdist_dsc to generate only the source package,
then you don't have a .changes file to sign and submit to Launchpad.
So here's how you can make a signed, source-only .changes file
Launchpad will accept.
Since this will extract all your files again, I suggest doing this in
a temporary directory to make it easier to clean up afterward:
$ mkdir tmp
$ cd tmp
$ dpkg-source -x ../deb_dist/packagename_version.dsc
$ cd packagename_version
Now is a good time to take a look at the
deb_dist/packagename_version/debian/changelog that stdeb created,
and make sure it got the right version and OS codename for the
Ubuntu release you're targeting -- oneiric, precise, quantal or whatever.
stdeb's default is "unstable" (Debian) so you'll probably need to change it.
You can cross-check this information in the
deb_dist/packagename_version.changes file, which is the file
you'll actually be uploading to the PPA.
Finally, build and sign your source package:
$ debuild -S -sa
[type your GPG passphrase when prompted, twice]
Upload the package
Now it's time to upload the package -- point dput at the source-only
.changes file debuild just created, one level up from where you built:
$ dput ppa:your-ppa-name ../packagename_version_source.changes
This will give you some output and eventually probably tell you
Successfully uploaded packages.
It's lying -- it may have failed. Watch your inbox
for messages. If Launchpad rejects your changes, you should get an
email fairly quickly.
If Launchpad accepts the changes, you'll get an Accepted email.
Great! But don't celebrate quite yet. Launchpad still has to build
your package before it can be installed. If you try to add your PPA
now, you'll get a 404.
Wait for Launchpad to build
You might as well add your repository now so you can install from it
once it's ready:
$ sudo add-apt-repository ppa:your-ppa-name
But don't apt-get update yet!
If you try that too soon, you'll get a 404, or an Ign, meaning
that the repository exists but there are no packages in it for
your architecture.
It might be as long as a few hours before Launchpad builds your package.
To keep track of this, go to your Launchpad PPA page (something like
https://launchpad.net/~yourname/+archive/ppa) and look under
PPA Statistics for something like "1 package waiting to build".
Click on that link, then in the page that comes up, click on the link
like i386 build of pkgname version in ubuntu precise RELEASE.
That should give you a time estimate.
Wondering why it's being built for i386 when Python should be
arch independent? Worry not -- that's just the architecture that's
doing the building. Once it's built, your package should install anywhere.
Once the Launchpad build page finally says the package is built,
it's safe to run the usual apt-get update.
Add your key
But when you apt-get update you may get an error like this:
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 16126D3A3E5C1192
Obviously you have your own public key, so what's up?
You have to import the key from Ubuntu's keyserver,
and then export it into apt-key, before apt can use it --
even if it's your own key.
For this, you need the last 8 digits given in the NO_PUBKEY message.
Take those 8 digits and run these two commands:
gpg --keyserver keyserver.ubuntu.com --recv 3E5C1192
gpg --export --armor 3E5C1192 | sudo apt-key add -
I'm told that apt-add-repository is supposed to add the key automatically,
but it didn't for me. Maybe it will if you wait until after your package
is built before calling apt-add-repository.
Now if you apt-get update, you should see no errors.
Finally, you can apt-get install pkgname.
Congratulations! You have a working PPA package.
Tags: ubuntu, linux, programming
[ 13:34 May 30, 2012 | More programming | permalink to this entry ]
Sat, 26 May 2012
I write a lot of little Python scripts. And I use Ubuntu and Debian.
So why aren't any of my scripts packaged for those distros?
Because Debian packaging is absurdly hard, and there's very little
documentation on how to do it. In particular, there's no help on how
to take something small, like a Python script,
and turn it into a package someone else could install on a Debian
system. It's pretty crazy, since
RPM
packaging of Python scripts is so easy.
Recently at the Ubuntu Developers' Summit, Asheesh of OpenHatch pointed me toward
a Python package called stdeb that simplifies a lot of the steps
and makes Python packaging fairly straightforward.
You'll need a setup.py file to describe your Python script, and
you'll probably want a .desktop file and an icon.
If you haven't done that before, see my article on
Packaging Python for MeeGo
for some hints.
Then install python-stdeb.
The package has some requirements that aren't listed
as dependencies, so you'll need to install:
apt-get install python-stdeb fakeroot python-all
(I have no idea why it needs python-all, which installs only a
directory
/usr/share/doc/python-all with some policy
documentation files, but if you don't install it, stdeb will fail later.)
Now create a config file for stdeb to tell it what Debian/Ubuntu version
you're going to be targeting, if it's anything other than Debian unstable
(stdeb's default).
Unfortunately, there seems to be no way to pass this on the command
line rather than in a config file. So if you want to make packages for
several distros, you'll have to edit the config
file for every distro you want to support.
Here's what I'm using for Ubuntu 12.04 Precise Pangolin:
[DEFAULT]
Suite: precise
Now you're ready to run stdeb. I know of two ways to run it.
You can generate both source and binary packages, like this:
python setup.py --command-packages=stdeb.command bdist_deb
Or you can generate source packages only, like this:
python setup.py --command-packages=stdeb.command sdist_dsc
Either syntax creates a directory called deb_dist. It contains a lot of
files including a source .dsc, several tarballs, a copy of your source
directory, and (if you used bdist_deb) a binary .deb package.
If you used the bdist_deb form, don't be put off that
it concludes with a message:
dpkg-buildpackage: binary only upload (no source included)
It's fibbing: the source .dsc is there as well as the binary .deb.
I presume it prints the warning because it creates them as
separate steps, and the binary is the last step.
Now you can use dpkg -i to install your binary deb, or you can use
the source dsc for various purposes, like creating a repository or
a Launchpad PPA. But those involve a lot more steps -- so I'll
cover that in a separate article about creating PPAs.
Update: you can find that article here:
Creating
packages for a Launchpad PPA.
Tags: debian, ubuntu, linux, programming, python
[ 11:44 May 26, 2012 | More programming | permalink to this entry ]
Sun, 11 Dec 2011
Need your Ubuntu clock to stay in sync with a dual-boot Windows install?
It seems to have changed over the years, and google wasn't finding any
pages for me offering suggestions that still worked.
Turns out, in Ubuntu Oneiric Ocelot,
it's controlled by the file /etc/default/rcS: set UTC=no.
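One way to make that change non-interactively -- a sketch; take a look
at the file first, since the existing line may already be UTC=yes or
UTC=no:
$ sudo sed -i 's/^UTC=.*/UTC=no/' /etc/default/rcS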
Apparently it's also possible to get Windows to understand a
UTC
system clock using a registry tweak.
Ironically, that page, which I found by searching for
windows system clock utc, also has the answer for setting
Ubuntu to local time. So if I'd searched for Windows in the first
place, I wouldn't have had to puzzle out the Ubuntu solution myself.
Go figure!
Tags: linux, ubuntu, tip
[ 13:56 Dec 11, 2011 | More linux | permalink to this entry ]
Thu, 24 Nov 2011
A few days ago, I wrote about
how to
set up and configure extlinux (syslinux) as a bootloader.
But on Debian or Ubuntu,
if you make changes to files like /boot/extlinux/extlinux.conf
directly, they'll be overwritten.
The configuration files are regenerated by a program
called extlinux-update, which runs automatically every time you
update your kernel. (Specifically, it runs from the postinst script of
the linux-base package:
you can see it in /var/lib/dpkg/info/linux-base.postinst.)
So what's a Debian user to do if she wants to customize the menus,
add a splash image or boot other operating systems?
First, if you decide you really don't want Debian overwriting your
configuration files, you can disable updates
by editing /etc/default/extlinux.
Just be aware you won't get your boot menu updated when you install new
kernels -- you'll have to remember to update them by hand.
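If memory serves, that's a one-line change -- but check the variable
name against your own /etc/default/extlinux, since I'm going from
memory here:
# in /etc/default/extlinux
EXTLINUX_UPDATE="false"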
It might be worth it: the automatic update is nearly as annoying as
the grub2 updater: it creates two automatic entries for every kernel
you have installed. So if you have several distros installed, each
with a kernel or two in your shared /boot,
you'll get an entry to boot Debian Squeeze with the
Ubuntu Oneiric kernel, one for Squeeze with the Natty kernel,
one for Squeeze with the Fedora 16 kernel ... as well as entries
for every kernel you have that's actually owned by Debian.
And then for each of these, you'll also get a second entry,
to boot in recovery mode. If you have several distros installed,
it makes for a very long and confusing boot menu!
It's a shame that the auto-updater doesn't restrict itself to kernels
managed by the packaging system, which would be easy enough to do.
(Wonder if they would accept a patch?)
You might be able to fudge something that works right by setting up
symlinks so that the only readable kernels actually live on the root
partition, so Debian can't read the kernels from the other
distros. Sounds a bit complicated and I haven't tried it.
For now, I've turned off automatic updating on my system.
But if your setup is simpler --
perhaps just one Debian or one Ubuntu partition plus some non-Linux
entries such as BSD or Windows -- here's how to set up Debian-style
automatic updating and still keep all your non-Linux boot entries
and your nice menu customizations.
Debian automatic updates and themes
First, take a quick look at /etc/default/extlinux and customize
anything there you might need, like the names of the kernels, kernel
boot parameters or timeout.
See man extlinux-update for details.
For configuring menu colors, image backgrounds and such, you'll need to
make a theme. You can see a sample theme by installing the package
syslinux-themes-debian -- but watch out.
If you haven't configured apt not to pull in suggested packages, that
may bring back grub or grub-legacy, which you probably don't want.
You can make a theme without needing that package, though.
Create a directory /usr/share/syslinux/themes/mythemename
(the extlinux-update man page claims you can put a theme anywhere and
specify it by its full path, but it lies). Create a directory called
extlinux inside it, and make a file with everything you want
from extlinux.conf. For example:
default 0
prompt 1
timeout 50
ui vesamenu.c32
menu title Welcome to my Linux machine!
menu background mysplash.png
menu color title 1;36 #ffff8888 #00000000 std
menu color unsel 0 #ffffffff #00000000 none
menu color sel 7 #ff000000 #ffffff00 none
include linux.cfg
menu separator
include themes/mythemename/other.cfg
Note that last line: you can include other files from your theme.
For instance, you can create a file called other.cfg
with entries for other partitions you want to boot:
label oneiric
menu label Ubuntu Oneiric Ocelot
kernel /vmlinuz-3.0.0-12-generic
append initrd=/initrd.img-3.0.0-12-generic root=UUID=c332b3e2-5c38-4c50-982a-680af82c00ab ro quiet
label fedora
menu label Fedora 16
kernel /vmlinuz-3.1.0-7.fc16.i686
append initrd=/initramfs-3.1.0-7.fc16.i686.img root=UUID=47f6b1fa-eb5d-4254-9fe0-79c8b106f0d9 ro quiet
menu separator
LABEL Windows
KERNEL chain.c32
APPEND hd0 1
Of course, you could have a debian.cfg, an ubuntu.cfg,
a fedora.cfg etc. if you wanted to have multiple distros
all keeping their kernels up-to-date. Or you can keep the whole
thing in one file, theme.cfg. You can make a theme as complex
or as simple as you like.
Tags: linux, boot, extlinux, syslinux, debian, ubuntu
[ 12:26 Nov 24, 2011 | More linux/install | permalink to this entry ]
Fri, 28 Oct 2011
I wrote a few days ago about my
multi-distro
Linux live USB stick. Very handy!
But one thing that bugs me about live distros:
they're set up with default settings and don't
have a lot of the programs I want to use. Even getting a terminal
takes quite a lot of clicks on most distros. If only they would save
their settings!
It's possible to make a live USB stick "persistent", but not much is
written about it. Most of what's written tells you to create the USB
stick with usb-creator -- a GUI app that I've tried periodically for
the past two years without ever once succeeding in creating a bootable
USB stick.
Even if usb-creator did work, it wouldn't work with a multi-boot
stick like this one, because it would want to overwrite the whole drive.
So how does persistence really work? What is usb-creator doing, anyway?
How persistence works: Casper
The best howto I've found on Ubuntu persistence is
LiveCD
Persistence. But it's long and you have to wade through a lot of
fdisk commands and similar arcana. So here's how to take your
multi-distro stick and make at least one of the installs persistent.
Ubuntu persistence uses a package called casper which overlays
the live filesystem with the contents of another filesystem.
Figuring out where it looks for that filesystem is the key.
Casper looks for its persistent storage in two possible places: a
partition with the label "casper-rw", and a file named
"casper-rw" at the root of its mounted partitions.
So you could make a separate partition labeled "casper-rw", using your
favorite partitioning tool, such as gparted or fdisk. But if you already
have your multi-distro stick set up as one big partition, it's just as
easy to create a file. You'll have to decide how big to make the file,
based on the size of your USB stick.
I'm using a 4G stick, and I chose 512M for my persistent partition:
$ dd if=/dev/zero of=/path/to/casper-rw bs=1M count=512
Be patient: this step takes a while.
Next, create a filesystem inside that file. I'm not sure what the
tradeoffs are among various filesystem types -- no filesystem is
optimized for being run as a loopback file read from a vfat USB stick
that was also the boot device. So I flipped a coin and used ext3:
$ mkfs.ext3 /path/to/casper-rw
/path/to/casper-rw is not a block special device.
Proceed anyway? (y,n) y
One more step: you need to add the persistent flag to your boot
options. If you're following the multi-distro USB stick tutorial I
linked to earlier, that means you should edit boot/grub/grub.cfg on
the USB stick, find the boot stanza you're using for Ubuntu, and make
the line starting with linux look something like this:
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash noprompt persistent --
Now write the stick, unmount it, and try booting your live install.
Testing: did it work?
The LiveCD/Persistence page says persistent settings aren't
necessarily saved for the default "ubuntu" user, so it's a good idea
to make a new user. I did so.
Oops -- about that Ubuntu new user thing
But at least in Ubuntu Oneiric, there's a problem with that. If you
create a user, even as class Administrator (and of course you do want
to be an Administrator), it doesn't ask you for a password. If you
now log out or reboot, your new user should be saved -- but you won't
be able to do anything with the system, because anything that requires
sudo will prompt you for your nonexistent password. Even attempting to
set a password will prompt you for the nonexistent password.
Apparently you can "unlock" the user at the time you create it, and
then maybe it'll let you set a password. I didn't know this beforehand,
so here's how to set a password on a locked user from a terminal:
$ sudo passwd username
For some reason, sudo will let you do this without prompting for a
password, even though you can't do anything administrative through the GUI.
Testing redux
Once you're logged in as your new user, try making some changes.
Add and remove some items from the unity taskbar. Install a couple
of packages. Change the background.
Now try rebooting. If your casper-rw file worked, it should remember your
changes.
When you're not booted from your live USB stick, you can poke around
in the filesystem it uses by mounting it in "loopback" mode.
Plug the stick into a running Linux machine, mount the USB stick,
then mount the casper-rw file with
$ sudo mount -o loop /path/to/casper-rw /mnt
where /path/to is wherever you mounted the stick -- e.g. /media/whatever.
With the file mounted in loopback mode,
you should be able to adjust settings or add new files without
needing to boot the live install -- and they should show up the
next time you use the live install.
My live Ubuntu Oneiric install is so much more fun to use now!
Tags: ubuntu, linux, install, grub
[ 15:41 Oct 28, 2011 | More linux/install | permalink to this entry ]
Tue, 25 Oct 2011
Linux live USB sticks (flash drives) are awesome. You can carry them
anywhere and give a demo of Linux on anyone's computer, any time. But
how do you keep track of them? Especially since USB sticks don't have
any place to write a label. How do you remember that the shiny blue
stick is the one with Ubuntu Oneiric, the black one has Ubuntu Lucid,
the other blue one that's missing its top is Debian ... and so forth.
It's impossible! Plus, such a waste -- you can hardly buy a flash drive
smaller than 4G these days, and then you go and devote it to a 700Mb
ISO designed to fit on a CD. Silly.
The answer: get one big USB stick and put lots of distros on it,
using grub to let you choose at boot time.
To create my stick, I followed the easy instructions at
HOWTO:
Booting LiveCD ISOs from USB flash drive with Grub2.
I found that tutorial quite simple, so I'm not going to duplicate
the instructions there.
I used the non-LUA version, since my grub on Ubuntu Natty didn't seem
to support LUA.
Basically you run grub-install to the stick,
create a directory called iso where you stick all your ISO files,
then create a grub.cfg with magic incantations to boot each ISO.
Ah, wait ... magic incantations?
The tutorial is missing one important part: what if you want to use an ISO
that isn't already mentioned in the tutorial? If Ubuntu's entry is
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash noprompt --
and Parted Magic's is
linux (loop)/pmagic/bzImage iso_filename=$isofile edd=off noapic load_ramdisk=1 prompt_ramdisk=0 rwnomce sleep=10 loglevel=0
then you know there's some magic going on there.
I knew I needed at least the Ubuntu "alternate installer", since it
allows installing a command-line system without the Unity desktop, and
Debian Squeeze, since that's currently the most power-efficient Linux
for laptops, in addition to the distros mentioned in the tutorial.
How do you figure out what to put in those grub.cfg lines?
Here's how to figure it out from the ISO file. I'll use the Debian Squeeze
ISO as an example.
Step 1: mount the ISO file.
$ sudo mount -o loop /pix/boot/isos/debian-6.0.0-i386-netinst.iso /mnt
Step 2: find the kernel
$ ls /mnt/*/vmlinuz /mnt/*/bzImage
/mnt/install.386/vmlinuz
Step 3: find the initrd. It might have various names, and might or
might not be compressed, but the name will almost always start with init.
$ ls /mnt/*/init*
/mnt/install.386/initrd.gz
Unmount the ISO file.
$ umount /mnt
The trick in steps 2 and 3 is that nearly all live ISO images put the
kernel and initrd a single directory below the root. If you're using
an ISO that doesn't, you may have to search more deeply (try /mnt/*/*).
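A find command can save some squinting on ISOs with deeper layouts; the
name patterns here are approximate, since some distros call the initrd
initramfs-something:
$ find /mnt -maxdepth 3 \( -name 'vmlinuz*' -o -name 'bzImage*' -o -name 'init*' \)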
In the case of Debian Squeeze, now I have the two filenames:
/install.386/vmlinuz and /install.386/initrd.gz. (I've removed the
/mnt part since that won't be there when I'm booting from the USB stick.)
Now I can edit boot/grub/grub.cfg and make a boot stanza for Debian:
menuentry "Debian Squeeze" {
set isofile="/boot/isos/debian-6.0.0-i386-netinst.iso"
loopback loop $isofile
linux (loop)/install.386/vmlinuz iso_filename=$isofile quiet splash noprompt --
initrd (loop)/install.386/initrd.gz
}
Here's the entry for the Ubuntu alternate installer:
menuentry "Oneiric 11.10 alternate" {
set isofile="/boot/isos/ubuntu-11.10-alternate-i386.iso"
loopback loop $isofile
linux (loop)/install/vmlinuz iso_filename=$isofile
initrd (loop)/install/initrd.gz
}
It sounds a little convoluted, I know -- but you only have to do it
once, and then you have this amazing keychain drive with every Linux
distro on it you can think of.
Amaze your friends!
Tags: linux, install, ubuntu, debian, grub
[ 22:21 Oct 25, 2011 | More linux/install | permalink to this entry ]
Mon, 16 May 2011
Update and warning: My bzr diff was not accepted. It turns
out this particular package doesn't accept that format. Apparently
different packages within Ubuntu require different types of patches,
and there's no good way to find out besides submitting one type of
patch and seeing if it's rejected or ignored. In the end, I did get
a patch accepted, and will write up separately how that patch was
generated.
The process of submitting bugs and patches to Ubuntu can be deeply
frustrating. Even if you figure out how to fix a bug and attach a patch,
the patch can sit in Launchpad for years with no attention, as this
ubuntu-devel-discuss
thread attests.
The problem is that there are a lot of bugs and not enough people
qualified to review patches and check them in. To make things easier
for the packagers, sometimes people are told to "make a debdiff" or
"make a ppa".
But it's tough to find good instructions on how to do these things.
There are partial instructions at
Contributing
and on the
Packaging Guide
-- but both pages are aimed at people who want to become regular
packagers of new apps, not someone who just has one patch for a specific bug,
and they're both missing crucial steps. Apparently there's a new and better
packaging guide being written, but it's not publically available yet.
These days, Bazaar (bzr), not debdiff, is considered the best way to
make a patch easy for Ubuntu developers to review.
With a lot of help from #ubuntu-women, and particularly
Maco (THANKS!),
I worked through the steps to submit a patch I'd posted to
bug
370735 two years ago for gmemusage.
Here's what I needed to do.
Set up the tools
First, install some build tools you'll need, if you don't already have them:
sudo apt-get install bzr bzr-builddeb pbuilder
You will also need a Launchpad account. Then tell bzr who you are
and connect it to your Launchpad account:
bzr whoami "Firstname Lastname <yourname@example.com>"
bzr launchpad-login your-acct
Check out the code
Create a directory where you'll do the work:
mkdir pkgname
cd pkgname
Check out the source from bzr:
bzr branch lp:ubuntu/pkgname pkgname
Make a bzr branch for your fixes. It's probably a good idea to include the
bug number or other specifics in the branch name:
bzr branch pkgname pkgname-fix-bugnum
cd pkgname-fix-bugnum
Now you can apply the patch, e.g. patch <../mypatch.diff, or edit source files directly.
Make a package you can test
Making a package from a bzr directory requires several steps.
Making a source package is easy:
bzr bd -S -- -uc -us
This will show up as ../pkgname_version.dsc.
But if you want something you can install and test, you need a binary package.
That's quite a bit more trouble to generate.
You'll be using pbuilder to create a minimal install of Ubuntu in a chroot
environment, so the build isn't polluted by any local changes you have
on your own machine.
First create the chroot: this takes a while, maybe 10 minutes or so, or
a lot longer if you have a slow network connection. You'll also need some
disk space: on my machine it used 168M in /var/cache (plus more for
the next step). Since it uses /var/cache, it needs sudo to write there:
sudo pbuilder --create natty
Now build a .deb binary package from your .dsc source package:
sudo pbuilder --build ../pkgname_version.dsc
pbuilder will install a bunch of additional packages, like X and other
libraries that are needed to build your package but weren't included
in the minimal pbuilder setup.
And then once it's done with the build, it removes them all again.
Apparently there's a way to make it cache them so you'll have them
if you need to build again, but I'm not sure how.
pbuilder --build gives lots of output, but none of that
output tells you where it's actually creating the .deb.
Look in /var/cache/pbuilder/result for it.
And now you can finally try installing it:
sudo dpkg -i /var/cache/pbuilder/result/pkgname_blahblah.deb
You can now test your fix, and make sure
you fixed the problem and didn't break anything else.
Check in your bzr branch
Once you're confident your fix is good, it's time to check it in.
Make a new changelog entry:
dch -i
This will open your editor of choice, where you should explain briefly
what you changed and why. If it's a fix for a Launchpad bug,
list the bug number like this:
(LP: #370735).
If you're proposing a fix for an Ubuntu that's already released,
you also need to add -proposed to the release name in the top
line in the changelog, e.g.:
pkgname (0.2-11ubuntu1) natty-proposed; urgency=low
Also, pay attention to that ubuntu1 part of the version string
if the entry prior to yours doesn't include "ubuntu" in the version.
If you're proposing a change to a stable release, change that to
ubuntu0.1; if it's for the current development release, it's
okay to leave it at ubuntu1 (more details on this
Packaging
page).
Finally, you can check it in to your local repository:
debcommit
and push it to Launchpad:
bzr push lp:~yourname/ubuntu/natty/pkgname/pkgname-fix-bugnum
Notify possible sponsors
You'll want to make sure your patch gets on the sponsorship queue,
so someone can review it and check in the fix.
bzr lp-open
(For me, this opened chromium even though firefox is my preferred browser.
To use Firefox, I had to:
sudo update-alternatives --config x-www-browser
first.
Boo chromium for making itself default without asking me.)
You should now have a launchpad page open in your browser. Click on
"Propose for merging into another branch" and include a description of
your change and why it should be merged. This, I'm told, notifies potential
sponsors who can review your patch and approve it for check-in.
Whew! That's a lot of steps. You could argue that it's harder to prepare
a patch for Ubuntu than it was to fix the bug in the first place.
Stay tuned ... I'll let you know when and if my patch actually gets approved.
Tags: ubuntu, bugs, open source
[ 15:38 May 16, 2011 | More linux | permalink to this entry ]
Sat, 30 Apr 2011
Intel hosted a MeeGo developer camp on Friday where they gave out
ExoPC tablets for developers, and I was lucky enough to get one.
Intel is making a big MeeGo push -- they want lots of apps available for
this platform, so they're trying to make it as easy as possible for
developers to make new apps for their
AppUp store.
MeeGo looks fun -- it's a real Unix under the hood, with a more or less
mainstream kernel and a shell. I'm looking forward to developing for it;
in theory it can run Python programs (using Qt or possibly even gtk for
the front end) as well as C++ Qt apps. Of course, I'll be writing about
MeeGo developing once I know more about it; for now I'm still setting up
my development environment.
But on a lazy Saturday, I thought it would be fun to see if the new
Ubuntu 11.04, "Natty Narwhal", can run on the ExoPC. Natty's whizzy new "Unity"
interface (actually not new, but much revamped since the previous Ubuntu
release) is rumoured to be somewhat aimed at tablets with touchscreens.
How would it work on the ExoPC?
Making a bootable Ubuntu USB stick
The first step was to create a bootable USB stick with Ubuntu on it.
Sadly, this is
not
as easy as on Fedora or SuSE. Ubuntu is still very CD oriented, and
to make a live USB stick you need to take an ISO intended for a CDROM
then run a program that changes it to make it bootable from USB.
There are two programs for this: usb-creator and unetbootin.
In the past, I've had zero luck getting these programs to work except
when running under a Gnome desktop on the same version of Ubuntu I
was trying to install. Maybe it would be better this time.
I tried usb-creator-gtk first, since that seems to be the one Ubuntu
pushes most. It installed without too many extra dependencies -- it
did pull in several PolicyKit libraries like libpolkit-backend-1-0 and
libpolkit-gobject-1-0. When I ran it, it saw the USB stick right away,
and I chose the ubuntu-11.04-desktop-i386.iso file I'd downloaded.
But the Make Startup Disk button remained blank. I guess I needed to
click the Erase Disk button first. So I did -- and was presented
with an error dialog that said:
org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.PolicyKit1 was not provided by any service files
Dave (who's wrestled with this problem more than I have) suggested maybe
it wanted a vfat partition on the USB stick. So I quit usb-creator-gtk,
used gparted to make the stick into a single vfat partition, and restarted
usb-creator-gtk. Now everything was un-greyed -- so I clicked
Make Startup Disk and was immediately presented with another dialog:
Installation failed
No clue about what went wrong or why. Okay, on to unetbootin.
When I ran unetbootin, it gave me a helpful dialog that "unetbootin
must be run as root." Then it proceeded to show its window anyway.
I can read, so I quit and ran it again as root. I chose the iso
file, clicked OK -- and it worked! In a minute or two I had a bootable
Ubuntu USB stick.
(Update: unetbootin is better than usb-creator for another reason:
you can use it to burn CDs other than the default live desktop CD --
like if you want to burn the "alternate installer" ISO so you can
install server systems, use RAID partitions, etc.)
Booting on the ExoPC
Natty booted up just fine! I inserted the USB stick, powered on, leapt
for the XXX button that shows the boot menu and told it to boot from the
stick. Natty booted quite fast, and before long I was in the Unity desktop,
and, oddly, it started off in a banshee screen telling me I didn't
have any albums installed. I dismissed banshee ...
... at which point I found I couldn't actually do much without a keyboard.
I couldn't sign on to our wi-fi since I couldn't type the password,
and I didn't have any local files installed. But wait! I had an
SD card with some photos on it, and Ubuntu recognized it just fine and
popped up a file browser.
But I wanted to try net access.
I borrowed Dave's Mac USB keyboard to type in the WPA password. It
worked fine, and soon I was signed on to wi-fi and happily browsing the web.
"onboard" keyboard
What about an onscreen keyboard, though? I found one, called "onboard".
It's installed by default. Unfortunately, I couldn't find a way to run
it without a keyboard. Unity has a "+" button that took me to a window
with a text field labeled Search Applications, but you have to
type something there before it will show you any applications.
I couldn't find any way to get a list of applications without a
keyboard.
With a keyboard, I was able to find a terminal app, from which I was
able to run onboard. It's tiny! Far too small for me to type on a
capacitive display, even with my tiny fingers. It has no man page,
but it does have a --help argument, by which I was able to discover
the -s argument: onboard -s 900x300 did nicely.
It's ugly, but I can live with that.
Now if I can figure out how to make a custom Unity launcher for that,
I'll be all set.
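My guess is that launcher will end up as a standard .desktop file,
something like this sketch dropped into ~/.local/share/applications/
(untested on Unity so far):
[Desktop Entry]
Type=Application
Name=Onboard 900x300
Exec=onboard -s 900x300
Icon=onboard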
Unity on tablets -- not quite there yet
With onboard running, I gave Dave back his keyboard, and discovered a
few other problems. I couldn't scroll in the file browser window: the
scrollbar thumb is only a few pixels wide, too narrow to hit with a
finger on a touchscreen, and the onboard keyboard has no up/down arrows
or Page Up/Down. I tried dragging with two fingers, but no dice.
Also, when I went back to that Unity Search Applications screen,
I discovered it takes up the whole screen, covering the onscreen
keyboard, and there's no way to move it so I can type.
Update: forgot to mention that Unity, for all its huge Playskool
buttons, has a lot of very small
targets that are hard to hit with a finger. It took me two or three
tries to choose the wi-fi icon on the top bar rather than the icon
to the left or right of it, and shutdown is similarly tricky.
So Natty's usability on tablets isn't quite there. Still, I'm impressed
at how easy it was to get this far. I didn't expect it to boot, run and
be so usable without any extra work on my part. Very cool!
And no, I won't be installing Natty permanently on the ExoPC. I got this
tablet for MeeGo development and I'm not welching on that deal. But it's
fun to know that it's so easy to boot Ubuntu when I want to.
Tags: exopc, ubuntu, tablet, boot, install
[
13:46 Apr 30, 2011
More linux/install |
permalink to this entry |
]
Wed, 20 Apr 2011
(or: Fixing a multi flash card reader on Ubuntu Natty)
For the first time after installing Ubuntu Natty, I needed to upload
some photos from my camera -- and realized with a sinking feeling
that I now had the new UDEV, which no longer lets you use the
udev
all_partitions directive, so that cards inserted into a multi flash
card reader will show up as /dev/sdb1 or whatever the appropriate
device name is.
Without all_partitions, you get the initial
sdb, sdc, sdd and sde for the various slots in the card reader, but
since there's no card there when the machine boots, and the reader
doesn't send an event when you insert a card later,
you never get a mountable /dev/sdb1 device.
But the udev developers in their infinite wisdom removed
all_partitions some time last year, apparently without providing any
replacement for it. So you can no longer solve this problem through
udev rules.
Static udev devices
Fortunately, there's another way, which is actually easier (though
less flexible) than udev rules: udev static devices. You can create
the devices you need once, and tell udev to create exactly those
devices every time.
To begin, find out what your base devices are.
Look through dmesg | more
for your card reader.
Mine looks something like this:
[ 3.304938] scsi 4:0:0:0: Direct-Access Generic USB SD Reader 1.00 PQ: 0 ANSI: 0
[ 3.305440] scsi 4:0:0:1: Direct-Access Generic USB CF Reader 1.01 PQ: 0 ANSI: 0
[ 3.305939] scsi 4:0:0:2: Direct-Access Generic USB xD/SM Reader 1.02 PQ: 0 ANSI: 0
[ 3.306438] scsi 4:0:0:3: Direct-Access Generic USB MS Reader 1.03 PQ: 0 ANSI: 0
[ 3.306876] sd 4:0:0:0: Attached scsi generic sg1 type 0
[ 3.307020] sd 4:0:0:1: Attached scsi generic sg2 type 0
[ 3.307165] sd 4:0:0:2: Attached scsi generic sg3 type 0
[ 3.307293] sd 4:0:0:3: Attached scsi generic sg4 type 0
[ 3.313181] sd 4:0:0:1: [sdc] Attached SCSI removable disk
[ 3.313806] sd 4:0:0:0: [sdb] Attached SCSI removable disk
[ 3.314430] sd 4:0:0:2: [sdd] Attached SCSI removable disk
[ 3.315055] sd 4:0:0:3: [sde] Attached SCSI removable disk
Notice that the SD reader is scsi 4:0:0:0, and a few lines later, 4:0:0:0
is mapped to sdb. They're out of order, so make sure you match those scsi
numbers. If I want to read SD cards, /dev/sdb is where to look.
(Note: sd in "sdb" stands for "SCSI disk", while SD in "SD card"
stands for "Secure Digital". Two completely different meanings for
the same abbreviation -- just an unfortunate coincidence to make
this all extra confusing.)
To create static devices, I'll need the major and minor device numbers
for the devices I want to create. Since I know the SD card slot is sdb,
I can get those with ls:
$ ls -l /dev/sdb
brw-rw---- 1 root disk 8, 16 2011-04-20 09:43 /dev/sdb
The b at the beginning of the line tells me it's a block device;
the major and minor device numbers for the base SD card device are 8 and 16.
To get the first partition on that card, use the same major device and
add one to the minor device: 8 and 17.
Now you can create new static block devices, as root, using mknod
in the /lib/udev/devices directory:
$ sudo mknod /lib/udev/devices/sdb1 b 8 17
$ sudo mknod /lib/udev/devices/sdb2 b 8 18
$ sudo mknod /lib/udev/devices/sdb3 b 8 19
Update: Previously I had here
$ sudo mknod b 8 17 /lib/udev/devices/sdb1
but the syntax seems to have changed as of mid-2012.
Although my camera only uses one partition, sdb1, I created devices
for a couple of extra partitions because I
sometimes
partition cards that way.
If you only use flash cards for cameras and MP3 players, you
may not need anything beyond sdb1.
You can make devices for the other slots in the card reader the same way.
The memory stick reader showed up as scsi 4:0:0:3 or sde, and
/dev/sde has device numbers 8, 64 ... so to read the memory stick
from Dave's Sony camera, I'd need:
$ sudo mknod /lib/udev/devices/sde1 b 8 65
You don't have to call the devices sdb1, either. You can call them
sdcard1 or whatever you like. However, the base device will still
be named sdb (unless you write a udev rule to change that).
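For completeness, here's what the full set for my reader would look
like, following the same pattern (a sketch: the minor numbers assume
the standard spacing of sixteen minors per base disk, which matched my
ls -l output above -- double-check yours the same way first):
$ sudo mknod /lib/udev/devices/sdb1 b 8 17   # SD slot (scsi 4:0:0:0)
$ sudo mknod /lib/udev/devices/sdc1 b 8 33   # CF slot (scsi 4:0:0:1)
$ sudo mknod /lib/udev/devices/sdd1 b 8 49   # xD/SM slot (scsi 4:0:0:2)
$ sudo mknod /lib/udev/devices/sde1 b 8 65   # Memory Stick slot (scsi 4:0:0:3)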
fstab entry
I like to use fstab entries and keep control over what's mounted,
rather than letting the system automatically mount everything it sees.
I can do that with this entry in /etc/fstab:
/dev/sdb1 /sdcard vfat user,noauto,exec,fmask=111,shortname=lower 0 0
plus
sudo mkdir /sdcard
Now, whenever I insert an SD card and want to mount it, I type
mount /sdcard
as myself. There's no need for sudo because of the user directive.
Tags: linux, udev, ubuntu
[
20:22 Apr 20, 2011
More linux |
permalink to this entry |
]
Mon, 18 Apr 2011
I had to buy a new hard drive recently, and figured as long as I had a
new install ahead of me, why not try the latest Ubuntu 11.04 beta,
"Natty Narwhal"?
One of the things I noticed right away was that sound was really LOUD! --
and my usual volume keys weren't working to change that.
I have a simple setup under openbox: Meta-F7 and Meta-F8 call a shell
script called "louder" and "softer" (two links to the same script),
and depending on how it's invoked, the script calls
aumix -v +4
or aumix -v -4
Great, except it turns out aumix doesn't work -- at all -- under Natty
(bug
684416). Rumor has it that Natty has dropped all support for OSS
sound, though I don't know if that's actually true -- the bug has
been sitting for four months without anyone commenting on it.
(Ubuntu never seems terribly concerned about having programs in
their repositories that completely fail to do anything; sadly,
programs can persist that way for years.)
The command-line replacement for aumix seems to be amixer, but its
documentation is sketchy at best. After a bit of experimentation, I
found if I set the Master volume to 100% using alsamixergui, I could
call amixer set PCM 4+ or amixer set PCM 4-. But I couldn't
use amixer set Master 4+ -- sometimes it would work but
most of the time it wouldn't.
That all seemed a bit too flaky for me -- surely there must be a
better way? Some magic Python library? Sure enough, there's
python-alsaaudio, and learning how to use it took a lot less
time than I'd already wasted trying random amixer commands to see
what worked. Here's the program:
#!/usr/bin/env python
# Set the volume louder or softer, depending on program name
# ("louder" vs. anything else).
import alsaaudio, sys, os

increment = 4

# First find a mixer. Use the first one.
try :
    mixer = alsaaudio.Mixer('Master', 0)
except alsaaudio.ALSAAudioError :
    sys.stderr.write("No such mixer\n")
    sys.exit(1)

cur = mixer.getvolume()[0]
if os.path.basename(sys.argv[0]).startswith("louder") :
    new = cur + increment
else :
    new = cur - increment

# Clamp to ALSA's 0-100 range so setvolume() never gets an
# out-of-range value.
new = max(0, min(100, new))
mixer.setvolume(new, alsaaudio.MIXER_CHANNEL_ALL)
print "Volume from", cur, "to", mixer.getvolume()[0]
Tags: python, audio, ubuntu, natty
[
21:13 Apr 18, 2011
More programming |
permalink to this entry |
]
Sun, 13 Jun 2010
Update: though the rest of this article is still useful in
explaining how to un-blacklist the pcspkr module, unfortunately
that module works very erratically. Sometimes you'll get a beep,
sometimes not. So this article may be a good start but it still
doesn't explain why Ubuntu's kernels have such a flaky pcspkr module.
For years I've used Ubuntu with my own kernels rather than the kernels
Ubuntu provides. I have several reasons: home-built kernels boot a lot
faster (when I say a lot, I mean like saving 30 seconds off a one-minute
boot) and offer more control over options. But a minor reason is that
Ubuntu kernels generally don't support the system beep, so for example
there's no way to tell in vim when you get out of insert mode. (In the
past I've sometimes used the excellent
fancy beeper
module to play sounds, but I don't always want that.)
On Ubuntu's latest "Lucid Lynx", I'm using their kernel (so far).
The Ubuntu kernel team has made huge improvements in boot time and number
of modules loaded, so it's much more efficient than past kernels.
But it did leave me without a beeper.
modprobe pcspkr
failed to do anything except print the enigmatic:
WARNING: All config files need .conf: /etc/modprobe.d/00local, it will be ignored in a future release.
modprobe -v pcspkr
(verbose) was no help -- it printed
install /bin/true
which didn't make anything clearer.
To get my beep back, I had to do two things:
First, edit /etc/modprobe.d/blacklist.conf and comment out the line
blacklisting pcspkr. It looks like this:
# ugly and loud noise, getting on everyone's nerves; this should be done by a
# nice pulseaudio bing (Ubuntu: #77010)
blacklist pcspkr
(They don't seem to be concerned about anyone who doesn't run Pulse,
or about the various other bugs involved -- there's quite a laundry
list in
bug
486154.)
Second: pcspkr was blacklisted a second time in a different way, in that
file so confusingly alluded to by the warning. /etc/modprobe.d/00local
was apparently left over from a previous version of Ubuntu, and never
removed by any upgrade script, and consisted of this:
install pcspkr /bin/true
Aha! So that's why modprobe -v pcspkr
printed
install /bin/true
-- because that's all it was doing instead
of loading the module like I'd asked.
So rm /etc/modprobe.d/00local
was the second step, and
once I'd done that, modprobe pcspkr
loaded the module and
gave me my system beep.
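So for anyone else hitting this, the whole fix boils down to three
commands (a sketch assuming stock Lucid file locations -- check what's
actually in your /etc/modprobe.d first):
$ sudo sed -i 's/^blacklist pcspkr/# blacklist pcspkr/' /etc/modprobe.d/blacklist.conf
$ sudo rm /etc/modprobe.d/00local
$ sudo modprobe pcspkr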
Tags: ubuntu, lucid, audio
[
14:02 Jun 13, 2010
More linux/kernel |
permalink to this entry |
]
Sun, 09 May 2010
Ubuntu's latest release, 10.04 "Lucid Lynx", really seems remarkably
solid. It boots much faster than any Ubuntu of the past three years,
and has some other nice improvements too.
But like every release, they made some pointless random undocumented
changes that broke stuff. The most frustrating has been getting my
front-panel flash card reader to work under Lucid's new udev,
so I could read SD cards from my camera and PDA.
The SD card slot shows up as /dev/sdb, but unless there's a card
plugged in at boot time, there's no /dev/sdb1 that you can
actually mount.
hal vs udisks
Prior to Lucid, the "approved" way of creating sdb1 was to
let hald-addons-storage poll every USB device every so
often, to see if anyone has plugged in a card and if so, check its
partition table and create appropriate devices.
That's a lot of polling -- and in any case, hald isn't standard on
Lucid, and even when it's installed, it sometimes runs and sometimes
doesn't. (I haven't figured out what controls whether it decides to run).
Hal isn't even supposed to be needed on Lucid -- it's supposed to use
devicekit (renamed to) udisks for that.
Except I guess they couldn't quite figure out how to get udisks working
in time, so they patched things together so that on Gnome systems, hald
does the same old polling stuff -- and on non-Gnome systems, well,
maybe it does and maybe it doesn't. And maybe you can't read your
camera cards. Oh well!
udev rules
But on systems prior to Lucid there was another way:
make a udev rule to create sdb1 through sdb15 every time. I have an older
article
on setting up udev rules for multicard readers, but none of my old
udev rules worked on Lucid.
After many rounds of udevadm info -a -p /block/sdb,
udevadm test /block/sdb, service udev restart,
and many reboots, I finally found a rule that worked.
Create a /etc/udev/rules.d/71-multicard-reader.rules file
containing the following:
# Create all devices for multicard reader:
KERNEL=="sd[b-g]", SUBSYSTEMS=="usb", ATTRS{idVendor}=="1d6b", ATTRS{idProduct}=="0002", OPTIONS+="all_partitions,last_rule"
Replace the 1d6b and 0002 with the vendor and product of your own device,
as determined with udevadm info -a -p /block/sdb
... and
don't be tempted to use the vendor and device ID you get from lsusb,
because those are different.
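A quick way to fish just those two attributes out of udevadm's wall
of output:
udevadm info -a -p /block/sdb | grep -E 'idVendor|idProduct'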
What didn't work that used to? String matches. Some of them.
For example, this worked:
KERNEL=="sd[b-g]", SUBSYSTEMS=="scsi", ATTRS{model}=="*SD*", NAME{all_partitions}="sdcard"
but these didn't:
KERNEL=="sd[b-g]", SUBSYSTEMS=="scsi", ATTRS{model}=="*SD*Reader*", NAME{all_partitions}="sdcard"
KERNEL=="sd[a-g]", SUBSYSTEMS=="scsi", ATTRS{model}=="USB SD Reader ", NAME{all_partitions}="cardsd"
Update: The first of those two lines does indeed work now, whereas
it didn't when I was testing. It's possible that this has something
to do with saving hardware states and needing an extra
udevadm trigger, as suggested in Alex's
Changes in Ubuntu Lucid to udev.
According to udevadm info, the model is "USB SD Reader " (three
spaces at the end). But somehow "*SD*" matches this while "*SD*Reader*"
and the exact string do not. Go figure.
Numeric order
I'd like to have this rule run earlier, so it runs before
/lib/udev/rules.d/60-persistent-storage.rules and could use
OPTIONS+="last_rule" to keep the persistent storage rules from firing
(they run a lot of unnecessary external programs for each device).
But if I rename the rule from 71-multicard-reader.rules to 59-,
it doesn't run at all. Why? Shrug. It's not like udevadm test
will tell me.
Other things I love (not) about the new udev
- I love how if you give the udevadm info arguments in the
wrong order, -p -a, it means something else and gives an error message.
- I love how udevadm test doesn't actually test the same
rules udev will use, so it's completely unrelated to anything.
- I love the complete lack of documentation on things like string
matching and how the numeric order is handled.
- I love how you can't match both the device name (a string) and
the USB IDs in the same rule, because one is SUBSYSTEMS=="scsi" and the
other is SUBSYSTEMS=="usb".
- Finally, I love how there's no longer any way to test udev rules on
a running system -- if you want it to actually create new devices, you
have to reboot for each new test.
service udev restart and udevadm control --reload-rules
don't touch existing devices.
Gives me that warm feeling like maybe I'm not missing out on the full
Windows experience by using Linux.
Tags: linux, ubuntu, kernel, udev, install
[
21:51 May 09, 2010
More linux/kernel |
permalink to this entry |
]
Sat, 27 Mar 2010
Three times now I've gotten myself into a situation where I was trying
to install Ubuntu and for some reason couldn't burn a CD. So I
thought hey, maybe I can make a bootable USB image on this handy
thumb drive here. And spent the next three hours unsuccessfully
trying to create one. And finally gave up, got in the car and went to buy
a new CD burner or find someone who could burn the ISO to a CD because
that's really the only way you can install or run Ubuntu.
There are tons of howtos on the web for creating live USB sticks for
Ubuntu. Almost all of them start with "First, download the CD image
and burn it to a CD. Now, boot off the CD and ..."
The few that don't discuss apps like usb-creator-gtk or unetbootin
that work great if you're burning the current Ubuntu Live CD image
from a reasonably current Ubuntu machine, but which fail miserably
in every other case -- wildly pathological cases like burning the
current Ubuntu alternate installer CD from the last long-term-support
version of Ubuntu. (I mean, really, should that be so unusual?)
Tonight, I wanted a bootable USB of Fedora 12. I tried the Ubuntu
tools already mentioned, but usb-creator-gtk won't even try with
an image that isn't Ubuntu, and unetbootin wrote something but the
resulting stick didn't boot.
I asked on the Fedora IRC channel, where a helpful person
pointed me to this paragraph on
copying an ISO image with dd.
Holy mackerel! One command:
dd if=Fedora-12-i686-Live.iso of=/dev/sdf bs=8M
and in less than ten minutes it was ready. And it booted just fine!
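A caution if you try this: dd will just as happily overwrite a hard
disk if you point it at the wrong device, so confirm the stick's
device name first, and sync afterward before pulling it out:
dmesg | tail    # verify the stick really did come up as sdf
dd if=Fedora-12-i686-Live.iso of=/dev/sdf bs=8M
sync            # flush all writes before removing the stick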
Really, Ubuntu, you should take a look at Fedora now and then.
For machines that are new enough, USB boot is much faster and easier
than CD burning -- so give people an easy way to get a bootable USB
version of your operating system. Or they might give up and try
a distro that does make it easy.
Tags: linux, install, fedora, ubuntu
[
23:01 Mar 27, 2010
More linux/install |
permalink to this entry |
]
Thu, 11 Mar 2010
Part 3 and final of my series on configuring Ubuntu's new grub2 boot menu.
I translate a couple of commonly-seen error messages, but most of
the article is devoted to multi-boot machines. If you have several
different operating systems or Linux distros installed on separate
disk partitions, grub2 has some unpleasant surprises, so see my
article for some (unfortunately very hacky) workarounds for its
limitations.
Why
use Grub2? Good question!
(Let me note that I didn't write the title, though I don't disagree
with it.)
Tags: writing, linux, boot, grub, ubuntu
[
10:56 Mar 11, 2010
More writing |
permalink to this entry |
]
Thu, 25 Feb 2010
Part 2 of my 3-parter on configuring Ubuntu's new grub2 boot menu
covers cleaning up all the bogus menu entries (if you have a
multiple-boot system) and some tricks on setting color and image
backgrounds:
Cleaning
up your boot menu (Grub2 part 2).
Tags: writing, linux, boot, grub, ubuntu
[
22:49 Feb 25, 2010
More writing |
permalink to this entry |
]
Sat, 20 Feb 2010
I gave a lightning talk at the Ubucon -- the Ubuntu miniconf -- at the
SCALE 8x, Southern
California Linux Expo yesterday. I've been writing about grub2
for Linux Planet but it left
me with some, well, opinions that I wanted to share.
A lightning talk
is an informal very short talk, anywhere from 2 to 5 minutes.
Typically a conference will have a session of lightning talks,
where anyone can get up to plug a project, tell a story or flame about
an annoyance. Anything goes.
I'm a lightning talk junkie -- I love giving them, and I
love hearing what everyone else has to say.
I had some simple slides for this particular talk. Generally I've
used bold or other set-offs to indicate terms I showed on a slide.
SCALE 8x, by
the way, is awesome so far, and I'm looking forward to the next two days.
Grub2 3-minute lightning talk
What's a grub? A soft wriggly worm.
But it's also the Ubuntu Bootloader.
And in Karmic, we have a brand new grub: grub2!
Well, sort of. Karmic uses Grub 2 version 1.97 beta4.
Aside from the fact that it's a beta -- nuff said about that --
what's this business of grub TWO being version ONE point something?
Are you hearing alarm bells go off yet?
But it must be better, right?
Like, they say it cleans up partition numbering.
Yay! So that confusing syntax in grub1, where you have to say
(hd0,0) that doesn't look like anything else on Linux,
and you're always wanting to put the parenthesis in the wrong place
-- they finally fixed that?
Well, no. Now it looks like this: (hd0,1)
THEY KEPT THE CONFUSING SYNTAX BUT CHANGED THE NUMBER!
Gee, guys, thanks for making things simpler!
But at least grub2 is better at graphics, right? Like what if
you want to add a background image under that boring boot screen?
A dark image, because the text is white.
Except now Ubuntu changes the text color to black.
So you look in the config file to find out why ...
if background_image `make_system_path_relative...
set color_normal=black/black
... there it is! But why are there two blacks?
Of course, there's no documentation. They can't be fg/bg --
black on black wouldn't make any sense, right?
Well, it turns out it DOES mean foreground and background -- but the second
"black" doesn't mean black. It's a special grub2 code for "transparent".
That's right, they wrote this brand new program from scratch, but they
couldn't make a parser that understands "none" or "transparent".
What if you actually want text with a black background? I have
no idea. I guess you're out of luck.
Okay, what about dual booting? grub's great at that, right?
I have three distros installed on this laptop. There's a shared /boot
partition. When I change something, all I have to do is edit a file
in /boot/grub. It's great -- so much better than lilo! Anybody remember
what a pain lilo was?
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by /usr/sbin/grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
Oops, wait -- not with grub2. Now I'm not supposed to edit
that file. Instead, I edit files in TWO places,
/etc/grub.d and /etc/default/grub, and then
run a program in a third place, /usr/sbin/update-grub.
All this has to be done from the same machine where you installed
grub2 -- if you're booted into one of your other distros, you're out
of luck.
grub2 takes us back to the bad old days of lilo.
FAIL
Grub2 really is a soft slimy worm after all.
But I have some ideas for workarounds. If you care, watch my next
few articles on LinuxPlanet.com.
Update: links to Linux Planet articles:
Part 1: Grub2 worms into Ubuntu
Part 2: Cleaning up your boot menu
Part 3: Why use Grub2? Good question!
Tags: grub, ubuntu, linux, boot, speaking, conferences
[
11:29 Feb 20, 2010
More linux |
permalink to this entry |
]
Thu, 11 Feb 2010
Upgraded to Ubuntu 9.10 Karmic and wondering how to configure your
boot menu or set it up for multiple boots?
Grub2 Worms Into Ubuntu (part 1)
is an introductory tutorial -- just enough to get you started.
More details will follow in parts 2 and 3.
Tags: writing, linux, boot, grub, ubuntu
[
17:40 Feb 11, 2010
More writing |
permalink to this entry |
]
Mon, 25 Jan 2010
Ever since I upgraded to Ubuntu 9.10 "Karmic koala", printing text files
has been a problem. They print out with normal line height, but in a
super-wide font so I only get about 48 ugly characters per line.
Various people have reported the problem -- for instance,
bug 447961
and this
post -- but no one seemed to have an answer.
I don't have an answer either, but I do have a workaround. The problem
is that Ubuntu is scaling incorrectly. When it thinks it's putting
10 characters per inch (cpi) on a line, it's actually using a font that
only fits 6 characters. But if you tell it to fit 17 characters per inch,
that comes out pretty close to the 10cpi that's supposed to be the default:
lpr -o cpi=17 filename
As long as you have to specify the cpi, try different settings for it.
cpi=20
gives a nice crisp looking font with about 11.8
characters per inch.
If needed, you can adjust line spacing with lpi=NN
as well.
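Since that's a lot to retype every time, a shell alias helps. A
sketch -- "lptext" is just a name I made up, and the value is the one
that worked for me:
alias lptext='lpr -o cpi=17'   # add -o lpi=NN too if the line spacing needs help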
Update: The ever-vigilant Till Kamppeter has tracked the problem
down to the font used by texttopdf for lp/lpr printing. Interesting details in
bug
447961.
Tags: printing, ubuntu, linux, tips
[
16:36 Jan 25, 2010
More linux |
permalink to this entry |
]
Tue, 10 Nov 2009
I've been seeing intermittent mouse failures since upgrading to Ubuntu
9.10 "Karmic".
At first, maybe one time out of five I would boot, start X, and find
that I couldn't move my mouse pointer. But after building a 2.6.31.4
kernel, things got worse and it happened nearly every time.
It wasn't purely an X problem; if I enabled gpm, the mouse failed in the
console as well as in X. And it wasn't hardware, because if I used
Ubuntu 9.10's standard kernel, my mouse worked every time.
After much poking around with kernel options, I discovered that if I
turned off the Direct Rendering Manager ("Intel 830M, 845G, 852GM, 855GM,
865G (i915 driver)"), my mouse would work. But that wasn't a
satisfactory solution; aside from not being able to run Google Earth,
it seems that Intel graphics needs DRM even to get reasonable
performance redrawing windows. Without it, every desktop switch means
watching windows slowly redraw over two or three seconds.
(Aside: why is it that Intel cards with shared CPU memory need DRM
to draw basic 2-D windows, when my ancient ATI Radeon cards without
shared memory had no such problems?)
But I think I finally have it nailed. In the kernel's Direct Rendering
Manager options (under Graphics), the "Intel 830M, 845G, 852GM, 855GM,
865G (i915 driver)" using its "i915 driver" option has a new sub-option:
"Enable modesetting on intel by default".
The help says:
CONFIG_DRM_I915_KMS:
Choose this option if you want kernel modesetting enabled by default,
and you have a new enough userspace to support this. Running old
userspaces with this enabled will cause pain. Note that this causes
the driver to bind to PCI devices, which precludes loading things
like intelfb.
Sounds optional, right? Sounds like, if I want to build a kernel that
will work on both karmic and jaunty, I should leave that off
so as not to "cause pain".
But no. It turns out it's actually mandatory on karmic. Without it,
there's a race condition where about 80-90% of the time, hal won't
see a mouse device at all, so the mouse won't work either in X or
even on the console with gpm.
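If you're building your own kernels, it's worth grepping the config
before you build to make sure the option took (run from the top of
the kernel source tree):
grep CONFIG_DRM_I915 .config
and check that CONFIG_DRM_I915_KMS=y shows up in the results.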
It's sort of the opposite of the
"Remove sysfs features which may confuse old userspace tools"
in General Setup, where the name implies that it's optional on
new distros like Karmic, but in fact, if you leave it on, the
kernel won't work reliably.
So be warned when configuring a kernel for brand-new distros.
There are some new pitfalls, and options that worked in the past
may not work any longer!
Update: see also the
followup
post for two more non-optional options.
Tags: linux, ubuntu, intel, X11, kernel
[
23:34 Nov 10, 2009
More linux/kernel |
permalink to this entry |
]
Mon, 02 Nov 2009
The syntax to log in automatically (without gdm or kdm) has changed
yet again in Ubuntu Karmic Koala. It's similar to the
Hardy
autologin, but the file has moved:
under Karmic,
/etc/event.d is no longer used, as documented
in the
release notes
(though, confusingly, it isn't removed when you upgrade, so it may still
be there taking up space and looking like it's useful for something).
The new location is
/etc/init/tty1.conf.
So here are the updated instructions:
Create /usr/bin/loginscript if you haven't already,
containing something like this:
#! /bin/sh
/bin/login -f yourusername
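Make sure it's executable, since getty will be running it directly:
sudo chmod +x /usr/bin/loginscript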
Then edit /etc/init/tty1.conf and look for the respawn line,
and replace the line after it,
exec /sbin/getty -8 38400 tty1
with this:
exec /sbin/getty -n -l /usr/bin/loginscript 38400 tty1
As far as I know, it's safe to delete /etc/event.d since it's now unused.
I haven't verified that yet. Better rename it first, and see if anything
breaks.
Tags: linux, ubuntu, boot
[
20:46 Nov 02, 2009
More linux/install |
permalink to this entry |
]
Sun, 06 Sep 2009
Someone was asking for help building XEphem on the XEphem mailing list.
It was a simple case of a missing include file, where the only trick
is to find out what package you need to install to get that file.
(This is complicated on Ubuntu, which the poster was using,
by the way they fragment the X development headers into a maze of
a zillion tiny packages.)
The solution -- apt-file -- is so simple and easy to use, and yet
a lot of people don't know about it. So here's how it works.
The poster reported getting these compiler errors:
ar rc libz.a adler32.o compress.o crc32.o uncompr.o deflate.o trees.o zutil.o inflate.o inftrees.o inffast.o
ranlib libz.a
make[1]: Leaving directory `/home/gregs/xephem-3.7.4/libz'
gcc -I../../libastro -I../../libip -I../../liblilxml -I../../libjpegd -I../../libpng -I../../libz -g -O2 -Wall -I../../libXm/linux86 -I/usr/X11R6/include -c -o aavso.o aavso.c
In file included from aavso.c:12:
../../libXm/linux86/Xm/Xm.h:56:27: error: X11/Intrinsic.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:57:23: error: X11/Shell.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:58:23: error: X11/Xatom.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:59:34: error: X11/extensions/Print.h: No such file or directory
In file included from ../../libXm/linux86/Xm/Xm.h:60,
from aavso.c:12:
../../libXm/linux86/Xm/XmStrDefs.h:1373: error: expected `=', `,', `;', `asm' or `__attribute__' before `char'
In file included from ../../libXm/linux86/Xm/Xm.h:60,
from aavso.c:12:
../../libXm/linux86/Xm/XmStrDefs.h:5439:28: error: X11/StringDefs.h: No such file or directory
In file included from ../../libXm/linux86/Xm/Xm.h:61,
from aavso.c:12:
../../libXm/linux86/Xm/VirtKeys.h:108: error: expected `)' before `*' token
In file included from ../../libXm/linux86/Xm/Display.h:49,
from ../../libXm/linux86/Xm/DragC.h:48,
from ../../libXm/linux86/Xm/Transfer.h:44,
from ../../libXm/linux86/Xm/Xm.h:62,
from aavso.c:12:
../../libXm/linux86/Xm/DropSMgr.h:88: error: expected specifier-qualifier-list before `XEvent'
../../libXm/linux86/Xm/DropSMgr.h:100: error: expected specifier-qualifier-list before `XEvent'
How do you go about figuring this out?
When interpreting compiler errors, usually what matters is the
*first* error. So try to find that. In the transcript above, the first
line saying "error:" is this one:
../../libXm/linux86/Xm/Xm.h:56:27: error: X11/Intrinsic.h: No such file or directory
So the first problem is that the compiler is trying to find a file
called Intrinsic.h that isn't installed.
On Debian-based systems, there's a great program you can use to find
files available for install: apt-file. It's not installed by default,
so install it, then update it, like this (the update will take a long time):
$ sudo apt-get install apt-file
$ sudo apt-file update
Once it's updated, you can now find out what package would install a
file like this:
$ apt-file search Intrinsic.h
libxt-dev: /usr/include/X11/Intrinsic.h
tendra: /usr/lib/TenDRA/lib/include/x5/t.api/X11/Intrinsic.h
In this case, two packages could install a file by that name.
You can usually figure out from looking which one is the
"real" one (usually the one with the shorter name, or the one
where the package name sounds related to what you're trying to do).
If you're still not sure, try something like
apt-cache show libxt-dev tendra
to find out more
about the packages involved.
In this case, it's pretty clear that tendra is a red herring,
and the problem is likely that the libxt-dev package is missing.
So apt-get install libxt-dev
and try the build again.
Repeat the process until you have everything you need for the build.
Remember apt-file if you're not already using it.
It's tremendously useful in tracking down build dependencies.
Tags: open source, linux, programming, debian, ubuntu
[
11:25 Sep 06, 2009
More linux |
permalink to this entry |
]
Sun, 07 Jun 2009
I upgraded to Ubuntu's current 9.04 release, "Jaunty Jackalope", quite a
while ago, but I haven't been able to use it because its X server
crashes or hangs regularly. (Fortunately I only upgraded a copy
of my working 8.10 "Intrepid" install, on a separate partition.)
The really puzzling thing, though, wasn't the crashes, but the fact
that X acceleration didn't work at all. Programs like tuxracer
(etracer) and Google Earth would display at something like one frame
update every two seconds, and glxinfo | grep renderer
said
OpenGL renderer string: Software Rasterizer
But that was all on my old desktop machine, with an
ATI Radeon 9000 card that I know no one cares about much.
I have a new machine now! An Intel dual Atom D945GCLF2D board
with 945 graphics. Finally, a graphics chip that's supported!
Now everything would work!
Well, not quite -- there were major teething pains, including
returning the first nonworking motherboard, but that's a
separate article. Eventually I got it running nicely with Intrepid.
DRI worked! Tuxracer worked! Even Google Earth worked! Unbelievable!
I copied the Jaunty install from my old machine to a partition on
the new machine. Booted into it and -- no DRI.
Just like on the Radeon.
Now, there's a huge pile of bugs in Ubuntu's bug system on problems
with video on Jaunty, all grouped by graphics card manufacturer even
though everybody seems to be
seeing pretty much the same problems on every chipset.
But hardly any of the bugs talk about not getting any DRI at all --
they're all about whether EXA acceleration works
better or worse than XAA and whether it's worth trying UXA.
I tried them all: EXA and UXA both gave me no DRI, while XAA
crashed/rebooted the machine every time. Clearly, there was something
about my install that was disabling DRI, regardless of
graphics card. But I poked and prodded and couldn't figure out what it was.
The breakthrough came when, purely by accident, I ran that same
glxinfo | grep renderer
from a root shell. Guess what?
OpenGL renderer string: Mesa DRI Intel(R) 945G GEM 20090326 2009Q1 RC2 x86/MMX/SSE2
As me (non-root), it still said "Software Rasterizer."
It was a simple permissions problem! But wait ... doesn't X run as root?
Well, it does, but the DRI part doesn't, as it turns out.
(This is actually a good thing, sort of, in the long term:
eventually the hope is to get X not to need root permissions either.)
Armed with the keyword "permissions" I went back to the web, and the
Troubleshooting
Intel Performance page on the Ubuntu wiki, and found the
solution right away.
(I'd looked at that page before but never got past the part right at
the beginning that says it's for problems involving EXA vs. UXA
vs. XAA, which mine clearly wasn't).
The Solution
In Jaunty, the user has to be in group video to use DRI in X.
But if you've upgraded from an Ubuntu version prior to Jaunty, where
this wasn't required, you're probably not in that group. The upgrader
(I used do-release-upgrade) doesn't check for this or warn you
that you have desktop users who aren't in the video group,
so you're on your own to find out about the problem.
Fixing it is easy, though:
edit /etc/group as root and add your user(s) to the group.
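Or let adduser do the file editing for you (substitute your own
username), then log out and back in so the new group membership
takes effect:
sudo adduser yourusername video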
You might think this would have been an error worth reporting,
say, at X startup, or in glxinfo, or even in /var/log/Xorg.0.log.
You'd think wrong. Xorg.0.log blithely claims that DRI is enabled
and everything is fine, and there's no indication of an error
anywhere else.
I hope this article makes it easier for other people with this problem
to find the solution.
Tags: linux, ubuntu, X11, jaunty
[
20:23 Jun 07, 2009
More linux/install |
permalink to this entry |
]
Wed, 13 May 2009
Someone asked on a mailing list whether to upgrade to a new OS release
when her current install was working so well. I thought I should write
up how I back up my old systems before attempting a risky upgrade
or new install.
On my disks, I make several relatively small partitions, maybe
15G or so (pause to laugh about what I would have thought ten or
even five years ago if someone told me I'd be referring to 15G
as "small"), one small shared /boot partition, a swap partition,
and use the rest of the disk for /home or other shared data.
Now you can install a new release, like 9.04, onto a new partition
without risking your existing install.
If you prefer upgrading rather than running the installer, you can
do that too. I needed a jaunty (9.04) install to test
whether a bug was fixed. But my intrepid (8.10) is working fine and
I know there are some issues with jaunty, so I didn't want to risk
the working install. So from Intrepid, I copied the whole root
partition over to one of my spare root partitions, sda5:
mkfs.ext3 /dev/sda5
mkdir /jaunty
mount /dev/sda5 /jaunty
cp -ax / /jaunty
(that last step takes quite a while: you're copying the whole system.)
Now there are a couple of things you have to do to make that /jaunty
partition work as a bootable install:
1. /dev on an ubuntu system isn't a real file system, but something
magically created by the kernel and udev. But to boot, you need some
basic stuff there. When you're up and running, that's stored in
/dev/.static, so you can copy it like this:
cp -ax /dev/.static/dev/ /jaunty/
(2021 update: Ignore this whole step. For several years now, Linux
distros have been able to create /dev on their own, without needing
any seed files, so there's no need to create any devices. Skip to
step 2, fstab, which is definitely still needed.)
Note: it used to work to copy it to /jaunty/dev/.
The exact semantics of copying directories in cp and rsync, and
where you need slashes, seem to vary with every release.
The important thing is that you want /jaunty/dev to end up
containing a lot of devices, not a directory called dev or
a directory called .static. So fiddle with it after the cp -ax
if you need to.
Note 2: Doesn't it just figure? A couple of days
after I posted this, I found out that the latest udev has removed
/dev/.static so this doesn't work at all any more. What you can do
instead is:
cd /jaunty/dev
/dev/MAKEDEV generic
Note 3: If you're running MAKEDEV from Fedora, it
will target /dev instead of the current directory, so you need
MAKEDEV -d /whatever/dev generic
instead. However, caution: on Debian and Ubuntu, -d deletes the
devices. Check man MAKEDEV first to be sure.
Ain't consistency wonderful?
2. /etc/fstab on the system you just created points to the wrong
root partition, so you have to fix that. As root, edit /etc/fstab
in your favorite editor (e.g. sudo vim /etc/fstab or whatever)
and find the line for the root filesystem -- the one where the
second entry on the line is /. It'll look something like this:
# /dev/sda1
UUID=f7djaac8-fd44-672b-3432-5afd759bc561 / ext3 relatime,errors=remount-ro 0 1
The easy fix is to change that to point to your new disk partition:
# jaunty is now on /dev/sda5
/dev/sda5 / ext3 relatime,errors=remount-ro 0 1
If you want to do it the "right", ubuntu-approved way, with UUIDs,
you can get the UUID of your disk this way:
ls -l /dev/disk/by-uuid/ | grep sda5
Take the UUID (that's the big long hex number with the dashes) and
put it after the UUID= in the original fstab line.
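Alternatively, blkid prints the UUID directly, without the grep:
sudo blkid /dev/sda5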
While you're editing /etc/fstab, be sure to look for any lines that
might mount /dev/sda5 as something other than root and delete them
or comment them out.
The following section describes how to update grub1.
Since most distros now use grub2, it is out of date.
With any luck, you can run update-grub and it will notice
your new partition; but you might want to fiddle with entries in
/etc/grub.d to label it more clearly.
Now you should have a partition that you can boot into and upgrade.
Now you just need to tell grub about it. As root, edit
/boot/grub/menu.lst and find the line that's booting your current
kernel. If you haven't changed the file yourself, that's probably
right after a line that says:
## ## End Default Options ##
It will look something like this:
title Ubuntu 8.10, kernel 2.6.27-11-generic
uuid f7djaac8-fd44-672b-3432-5afd759bc561
kernel /vmlinuz-2.6.27-11-generic root=UUID=f7djaac8-fd44-672b-3432-5afd759bc561 ro
initrd /initrd.img-2.6.27-11-generic
Make a copy of this whole stanza, so you have two identical copies,
and edit one of them. (If you edit the first of them, the new OS
will be the default when you boot; if you're not that confident,
edit the second copy.) Change the two UUIDs to point to your new disk
partition (the same UUID you just put into /etc/fstab) and change
the Title to say 9.04 or Jaunty or My Copy or whatever you want the
title to be (this is the title that shows up in the grub menu when
you first boot the machine).
Now you should be able to boot into your new partition.
Most things should basically work -- certainly enough to start
a do-release-upgrade
without risking your original
install.
Tags: linux, ubuntu, install
[
10:44 May 13, 2009
More linux/install |
permalink to this entry |
]
Sat, 18 Apr 2009
Long ago I wrote about
getting
my multi-flash card reader to work using udev rules.
This always evokes horrified exclamations from people in the
Ubuntu project -- "You shouldn't need to do that!" But there are
several reasons for wanting special udev rules for multi-card readers.
You might want your SD card to show up in the same place every time
(is it /dev/sdb1 or /dev/sdc1 today?); or you might be trying to
reduce polling to cut down your CPU and battery use.
But my older article referred to a script that no longer exists, and as
I recently had to update my udev rules on a fairly fresh Intrepid install,
I needed something more up-to-date and less dependent on Ubuntu's
specific udev scripts (which change frequently).
I found a wonderful forum article,
Create your
own udev rules to control removable devices,
that explains exactly how to find out the names of your devices
and make rules for them.
Another excellent article with essentially the same information is
Linux Format's
Connect your devices with udev.
Start by guessing at the current device name: for example, in this
particular session, my SD card reader showed up on /dev/sdd.
Find out the corresponding /block device name for it, like this:
udevinfo -q path -n /dev/sdd
Update: In Ubuntu jaunty, udevinfo is gone.
But you can substitute udevadm info
for udevinfo,
with the same flags.
In my case, the SD reader was /block/sdd. Now pass that into
udevinfo -a, like so:
udevinfo -a -p /block/sdd
and look for a few items that you can use to identify that
slot uniquely. If you can find a make or model, that's ideal.
For my card reader, I chose
KERNEL=="sdd"
SUBSYSTEMS=="scsi"
ATTRS{model}=="CardReader SD "
Note that SUBSYSTEM was scsi: usb-storage devices (handled by the scsi
system) sometimes show up as usb and sometimes as scsi.
Now you're ready to create some udev rules. In your favorite text
editor, create a new file named
/etc/udev/rules.d/59-multicard-reader.rules.
You can name it whatever you want, but make sure the number
at the beginning is lower than the number of the udev rule
that would otherwise create the device's name -- in this case,
60-persistent-storage.rules.
Now write your udev rule. Include the identifying lines you picked out
from udevinfo -a:
KERNEL=="sd[a-g]", SUBSYSTEMS=="scsi", ATTRS{vendor}=="USB2.0 ", ATTRS{model}=="CardReader SD ", NAME{all_partitions}="card-sd", group=plugdev
A few things to notice. First, I used KERNEL=="sd[a-g]"
instead of just sdd, in case the devices might some day show up in
a different order.
The NAME field can be whatever you choose.
NAME{all_partitions}="card-sd"
will make the device show
up as /dev/card-sd, so to mount the first partition I'll use /dev/card-sd1.
The {all_partitions}
part tells the kernel to create
partitions like /dev/card-sd1 even if there's no SD card inserted
in the slot when you boot. Otherwise, you have to run
touch /dev/card-sd
after inserting a card to get
the device created -- or run a daemon like hald-addons-storage
that polls the device a few times every second checking to see if
anything has been inserted (as Ubuntu normally prefers to do).
GROUP="plugdev"
ensures the devices will be owned by
the group named "plugdev". This isn't particularly important since
you'll probably be mounting the cards using /etc/fstab lines or
some sort of automount daemon.
Pause and reflect sadly on the confusing coincidence of "scsi disk"
and "secure digital" both having the same abbreviation, so that
you need context to tell what each of these "sd"s means.
Test your new udev line by restarting udev:
/etc/init.d/udev restart
and see if your new device is there in /dev. If it is, you're all set!
Now you can add the rest of the devices from your multicard reader:
go back to the udevinfo steps and find out what each device is
called, then add a line for each of them.
Tags: ubuntu, udev, linux
[
16:45 Apr 18, 2009
More linux |
permalink to this entry |
]
Sun, 05 Apr 2009
(after attempting to install Ubuntu onto it)
I'm not a Mac person, but Dave hit this a few days ago on a shiny
new Mac Mini and it was somewhat traumatic. Since none of the
pages we found were helpful, here's my contribution to the googosphere.
Ubuntu releases through Intrepid (supposedly this will be fixed in
Jaunty; we didn't test it) have a major bug in their installer which
will render Macs unable to boot -- even off a CD.
(I should mention that the problem isn't limited to Ubuntu --
I did see a few Fedora discussions while I was googling.)
What happens is that in the grub install stage, gparted writes the
partition table (even if you didn't repartition) to the disk in a format
that's incompatible with the Mac boot loader.
For the gory details, it's
Bug
222126: Partition Table is cleared during install on Intel Macs.
There's some discussion in a couple of Ubuntu forums threads:
8.04 won't boot
and
Intel Macs with Hardy 'no bootable devices'
and all of them point to an open source Mac app called
rEFIt (yes, it's supposed
to be capitalized like that). But Dave had already tried to install
rEFIt, he thought, unsuccessfully (it turned out it was installed but
wasn't showing its menu properly, perhaps due to an issue of his Apple
keyboard not properly passing keys through the USB KVM at boot time.
Ah, the Apple world!)
Anyway, none of the usual tricks like holding Alt or C during boot
were working, even when he took the KVM out of the loop. After much
despair and teeth gnashing, though, he finally hit on the solution:
Cmd-Option-P-R during boot to reset the Parameter RAM back
to factory defaults.
We still aren't clear how the Ubuntu installer managed to change the
Parameter RAM. But a couple of iterations of Cmd-Option-P-R cleared
up the Mini's boot problem and made it able to boot from CD again,
and even made rEFIt start showing its menu properly.
There's one more step: once you get the machine straightened out
enough to show the rEFIt menu, you have to right-arrow into rEFIt's
partition manager, hit return, and hit return when it asks whether to
synchronize the partitions. That will reformat the incorrect
gparted-format partition table so the Mac can use it.
(And with any luck, that is the last time that I will EVER have
to type rEFIt!)
(Though a better way, if you could go back and do it over again, is
to click the Advanced button on the last screen of the Ubuntu live
installer, or else use the alternate installer instead. Either way
gives you the option of writing grub to the root partition where
you installed Ubuntu, rather than to the MBR, leaving you much less
horked. You'll still need to rewrite the partitions with rEFIt (grumble,
I knew I wasn't quite through typing that!) but you might avoid the
Parameter RAM scare of not being able to boot at all.)
That's the story as I understand it. I hope this helps someone else
who hits this problem.
Tags: ubuntu, mac
[
23:49 Apr 05, 2009
More linux/install |
permalink to this entry |
]
Wed, 11 Mar 2009
I finally got around to upgrading to the current Ubuntu, Intrepid
Ibex. I know Intrepid has been out for months and Jaunty is just
around the corner; but I was busy with the run-up to a couple of
important conferences when Intrepid came out, and couldn't risk
an upgrade. Better late than never, right?
The upgrade went smoothly, though with the usual amount of
babysitting, watching messages scroll by for a couple of hours
so that I could answer the questions that popped up
every five or ten minutes. Question: Why, after all these years
of software installs, hasn't anyone come up with a way to ask all
the questions at the beginning, or at the end, so the user can
go have dinner or watch a movie or sleep or do anything besides
sit there for hours watching messages scroll by?
XKbOptions: getting Ctrl/Capslock back
The upgrade finished, I rebooted, everything seemed to work ...
except my capslock key wasn't doing ctrl as it should. I checked
/etc/X11/xorg.conf, where that's set ... and found the whole
file commented out, preceded by the comment:
# commented out by update-manager, HAL is now used
Oh, great. And thanks for the tip on where to look to get my settings
back. HAL, that really narrows it down.
Google led me to a forum thread on
Intrepid
xorg.conf - input section. The official recommendation is to
run sudo dpkg-reconfigure console-setup
... but of
course it doesn't allow for options like ctrl/capslock.
(It does let you specify which key will act as the Compose key,
which is thoughtful.)
Fortunately, the release
notes give the crucial file name: /etc/default/console-setup.
The XKBOPTIONS= line in that file is what I needed.
It also had the useful XKBOPTIONS="compose:menu" option left over
from my dpkg-configure run. I hadn't known about that before; I'd
been using xmodmap to set my multi key. So my XKBOPTIONS line now
reads: XKBOPTIONS="ctrl:nocaps,compose:menu"
Fixing the network after resume from suspend
Another problem I hit was suspending on my desktop machine.
It still suspended, but after resuming, there was no network.
The problem turned out to lie in
/etc/acpi/suspend.d/55-down-interfaces.sh.
It makes a list of interfaces which should be brought up and down like this:
IFDOWN_INTERFACES="`cat /etc/network/run/ifstate | sed 's/=.*//'`"
IFUP_INTERFACES="`cat /etc/network/run/ifstate`"
However, there is no file /etc/network/run/ifstate, so this always
fails and so
/etc/acpi/resume.d/62-ifup.sh fails to bring
up the network.
Google to the rescue again. The bad thing about Ubuntu is that they
change random stuff so things break from release to release. The good
thing about Ubuntu is a zillion other people run it too, so whatever
problem you find, someone has already written about it. Turns
out ifstate is actually in /var/run/network/ifstate
now, so making that change in
/etc/acpi/suspend.d/55-down-interfaces.sh
fixes suspend/resume.
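If you'd rather not fire up an editor for it, the whole fix is one
sed line (check the script first, since the exact path string could
differ between releases):
sudo sed -i 's|/etc/network/run/ifstate|/var/run/network/ifstate|g' /etc/acpi/suspend.d/55-down-interfaces.sh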
It's bug
295544, fixed in Jaunty and nominated for Intrepid (I just learned
about the "Nominate for release" button, which I'd completely missed
in the past -- very useful!) Should be interesting to see if the fix
gets pushed to Intrepid, since networking after resume is completely
broken without it.
Otherwise, it was a very clean upgrade -- and now I can build
the GIMP trunk again, which was really the point of the exercise.
Tags: ubuntu, linux, install, X11, networking
[
18:28 Mar 11, 2009
More linux/install |
permalink to this entry |
]
Sun, 01 Mar 2009
"Pinning" is the usual way Debian derivatives (like Ubuntu) deal
with pulling software from multiple releases. For instance, you
need an updated gtk or qt library in order to build some program,
but you don't want to pull in everything else from the newer release.
But most people, upon trying to actually set up pinning,
get lost in the elaborate documentation
and end up deciding maybe they don't really need it after all.
For years, I've been avoiding needing to learn pinning because of a wonderful
LinuxChix Techtalk posting from Hamster years ago on
easier
method of pinning releases.
Basically, you add a line like:
APT::Default-Release "hardy";
to your
/etc/apt/apt.conf (creating it if it doesn't already exist).
Then when you need to pull something from the newer repository you
pull with
apt-get install -t hardy-backports packagename
That's generally worked for me, until yesterday when I tried to pull
a -dev package and found out it was incompatible with the library
package I already had installed. It turned out that the lib package
came from hardy-security, which is considered a different archive
from hardy, so my Default-Release didn't apply to security updates
(or bugfixes, which come from hardy-updates).
You can apparently only have one Default-Release. Since Ubuntu
uses three different archives for hardy the only way to
handle it is pinning. Pinning is documented in the man page
apt_preferences(5) -- which is a perfect example of a well-intentioned
geek-written Unix man page.
There's tons of information there -- someone went to
a lot of work, bless their heart, to document exactly what happens
and why, down to the algorithms used to decide priorities -- but
straightforward "type X to achieve effect Y" examples are lost in
the noise. If you want to figure out how to actually set this up
on your own system, expect to spend a long time going back and
forward and back and forward in the man page correlating bits from
different sections.
Ubuntu guru Mackenzie Morgan was nice enough to help me out, and with
her help I got the problem fixed pretty quickly. Here's the quick recipe:
First, remove the Default-Release thing from apt.conf.
Next, create /etc/apt/preferences and put this in it:
Package: *
Pin: release a=hardy-security
Pin-Priority: 950
Package: *
Pin: release a=hardy-updates
Pin-Priority: 940
Package: *
Pin: release a=hardy
Pin-Priority: 900
# Pin backports negative so it'll never try to auto-upgrade
Package: *
Pin: release a=hardy-backports
Pin-Priority: -1
Here's what it means:
a= means archive, though it's apparently not really needed.
The hardy-security archive has the highest priority, 950.
hardy-updates is right behind it with 940 (actually, setting these
equal might be smarter but I'm not sure it matters).
hardy, which apparently is just the software initially installed,
is lower priority so it won't override the other two.
Finally, hardy-backports has a negative priority so that apt will
never try to upgrade automatically from it; it'll only grab things
from there if I specify apt-get install -t hardy-backports.
You can put comments (with #) in /etc/apt/preferences
but not in apt.conf -- they're a syntax error there (so don't
bother trying to comment out that Default-Release line).
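To double-check that the pins came out the way you intended,
apt-cache policy lists every archive along with the priority apt
assigned it; give it a package name to see where a particular
package would come from:
apt-cache policy
apt-cache policy packagename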
And while you're editing apt.conf, a useful thing to put there is:
APT::Install-Recommends "false";
APT::Install-Suggests "false";
which prevents apt from automatically installing recommended or
suggested packages. Aptitude will still install the recommends and
suggests; it's supposed to be configurable in aptitude as well, but
turning it off never worked for me, so mostly I just stick to apt-get.
Tags: debian, ubuntu, pinning
[
21:19 Mar 01, 2009
More linux/install |
permalink to this entry |
]
Tue, 13 Jan 2009
I've been wanting for a long time to make Debian and Ubuntu
repositories so people can install
pho with apt-get,
but every time I try to look it up I get bogged down.
But I got mail from a pho user who really wanted that, and even
suggested a howto.
That howto
didn't quite do it, but it got me moving to look for a better one,
which I eventually found in the
Debian
Repository Howto.
It wasn't complete either, alas, so it took some trial-and-error
before it actually worked. Here's what finally worked:
I created two web-accessible directories, called hardy and etch.
I copied all the files created by dpkg-buildpackage on each distro --
.deb, .dsc, .tar.gz, and .changes (I don't think
this last file is used by anything) -- into each directory
(renaming them to add -etch and -hardy as appropriate).
Then:
% cd hardy/
% dpkg-scanpackages . /dev/null | gzip > Packages.gz
% dpkg-scansources . /dev/null | gzip > Sources.gz
% cd ../etch/
% dpkg-scanpackages . /dev/null | gzip > Packages.gz
% dpkg-scansources . /dev/null | gzip > Sources.gz
It gives an error,
** Packages in archive but missing from override file: **
but seems to work anyway.
Now you can use one of the following /etc/apt/sources.list lines:
deb http://shallowsky.com/apt/hardy ./
deb http://shallowsky.com/apt/etch ./
After an apt-get update, it saw pho, but it warned me
WARNING: The following packages cannot be authenticated!
pho
Install these packages without verification [y/N]?
There's some discussion in the
SecureAPT page
on the Debian wiki, but it's a bit involved and I'm not clear if
it helps me if I'm not already part of the official Debian keyring.
This page on
Release
check of non Debian sources was a little more helpful, and told me
how to create the Release and Release.gpg file -- but then I just get
a different error,
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY
And worse, it's an error now, not just a warning,
preventing any
apt-get update.
Going back to the SecureApt page, under
Setting up a secure apt repository they give the two steps the
other page gave for creating Release and Release.gpg, with a third
step: "Publish the key fingerprint, that way your users will know what
key they need to import in order to authenticate the files in the
archive."
So apparently if users don't take steps to import the key manually,
they can't update at all. Whereas if I leave out the Release and
Release.gpg files, all they have to do is type y when they see the
warning. Sounds like it's better to leave off the key.
I wish, though, that there was a middle ground, where I could offer the
key for those who wanted it without making it harder for those
who don't care.
Tags: programming, pho, debian, ubuntu, linux
[
21:14 Jan 13, 2009
More linux |
permalink to this entry |
]
Sun, 04 Jan 2009
I got myself a GPS unit for Christmas.
I've been resisting the GPS siren song for years -- mostly because I
knew it would be a huge time sink involving months of futzing with
drivers and software trying to get it to do something useful.
But my experience at an OpenStreetMap
mapping
party got me fired up about it, and I ordered a Garmin Vista Cx.
Shopping for a handheld GPS is confusing. I was fairly convinced I
wanted a Garmin, just because it's the brand used by most people in
the open source mapping community so I knew they were likely to work.
I wanted one with a barometric altimeter, because I
wanted that data from my hikes and bike rides (and besides,
it's fun to know how much you've climbed on an outing; I used to have
a bike computer with an altimeter and it was a surprisingly good
motivator for working harder and getting in better shape).
But Garmin has a bazillion models and I never found any comparison
page explaining the differences among the various hiking eTrex models.
Eventually I worked it out:
Garmin eTrex models, decoded
- C
- Color display. This generally also implies USB connectivity
instead of serial, just because the color models are newer.
- H
- High precision (a more sensitive satellite receiver).
- x
- Takes micro-SD cards. This may not be important for storing
tracks and waypoints (you can store quite a long track with the
built-in memory) but they mean that you can load extra base maps,
like topographic data or other useful features.
- Vista, Summit
- These models have barometric altimeters and magnetic compasses.
(I never did figure out the difference between a Vista and a Summit,
except that in the color models (C), Vistas take micro-SD cards (x)
while Summits don't, so there's a Summit C and HC while Vistas
come in Cx and HCx. I don't know what the difference is between
a monochrome Summit and Vista.)
- Legend, Venture
- These have no altimeter or compass.
A Venture is a Legend that comes without the bundled
extras like SD card, USB cable and base maps, so it's cheaper.
For me, the price/performance curve pointed to the Vista Cx.
Loading maps
Loading base maps was simplicity itself, and I found lots of howtos
on how to use downloadable maps. Just mount the micro-SD card on any
computer, make a directory called Garmin, and put the downloaded map
file in it under the name gmapsupp.img.
I used the CloudMade map
for California, and it worked great.
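(In other words, something like this -- the mount point and the
downloaded map's filename here are made up; substitute wherever your
card actually mounts:)
% mkdir /media/card/Garmin
% cp california.img /media/card/Garmin/gmapsupp.img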
There are lots of howtos on generating your own maps, too,
and I'm looking forward to making some with topographic data
(which the CloudMade maps don't have). The most promising
howtos I've found so far are the
OSM
Map On Garmin page on the OSM wiki and the much more difficult,
but gorgeous,
Hiking
Biking Mapswiki page.
Uploading tracks and waypoints
But the real goal was to be able to take this toy out on a hike,
then come back and upload the track and waypoint files.
I already knew, from the mapping party, that Garmins have an odd
misfeature: you can connect them in usb-storage mode, where they look
like an external disk and don't need any special software ... but then
you can't upload any waypoints. (In fact, when I tried it with my
Vista Cx I didn't even see the track file.) To upload tracks and
waypoints, you need to use something that speaks Garmin protocol:
namely, the excellent GPSBabel.
So far so good. How do you call GPSBabel?
Luckily for me, just before my GPS arrived,
Iván Sánchez Ortega posted a
useful
little gpsbabel script
to the OSM newbies list and I thought I was all set.
But once I actually had the Vista in hand, complete with track and
waypoints from a walk around the block, it turned out it wasn't quite
that simple -- because Ubuntu didn't create the /dev/ttyUSB0 that
Iván's script used. A web search found tons of people having that
problem on Ubuntu and talking about various workarounds, involving
making sure the garmin_usb driver is blacklisted in
/etc/modprobe.d/blacklist (it was already), adding a
/etc/udev/rules.d/45-garmin.rules file that changes permissions
and ownership of ... um, I guess of the file that isn't being created?
That didn't make much sense. Anyway, none of it helped.
But finally I found the fix: keep the garmin_usb driver blacklisted,
and use "usb:" as the device to pass to GPSBabel rather than
"/dev/ttyUSB0". So the commands are:
gpsbabel -t -i garmin -f usb: -o gpx -F tracks.gpx
gpsbabel -i garmin -f usb: -o gpx -F waypoints.gpx
Like so many other things, it's easy once you know the secret!
Viewing tracklogs works great in Merkaartor, though I haven't yet
found an app that does anything useful with the elevation data.
I may have to write one.
Update: After I wrote this but before I was able to post it,
a discussion on the OSM Newbies list with someone
who was having similar troubles resulted in this useful wiki page:
Garmin
on GNU/Linux. It may also be worth checking the
Discussion
tab on that wiki page for further information.
Update, October 2011:
As of Debian Squeeze or Ubuntu Natty, you need two steps:
- Add a line to /etc/modprobe.d/blacklist.conf:
blacklist garmin_gps
- Create a udev file,
/etc/udev/rules.d/51-garmin.rules, to set the permissions so
that you can access the device without being root. It contains the line:
ATTRS{idVendor}=="091e", ATTRS{idProduct}=="0003", MODE="0660", GROUP="plugdev"
Then use gpsbabel with usb:
and you should be fine.
Tags: gps, mapping, GIS, linux, ubuntu, udev
[
16:31 Jan 04, 2009
More mapping |
permalink to this entry |
]
Thu, 09 Oct 2008
Ever been annoyed by the file in your home directory,
.sudo_as_admin_successful? You know, the one file with the name
so long that it alone is responsible for making ls print out your
home directory in two columns rather than three or four?
And if you remove it, it comes right back after the next time
you run sudo?
Here's what's creating it (credit goes to Dave North for figuring
out most of this).
It's there because you're in the group admin,
and it's there to turn off a silly bash warning.
It's specific to Ubuntu (at least, Fedora doesn't do it).
Whenever you log in under bash, if bash sees that you're in the
admin group in /etc/group, it prints this warning:
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
Once you sudo to root, if you're in the admin group, sudo
creates an empty file named .sudo_as_admin_successful
in your home directory.
That tells bash, the next time you log in, not to print the
stupid warning any more.
Sudo creates the file even if your login shell isn't bash and so
you would never have seen the stupid warning. Hey, you might some
day go back to bash, right?
If you want to reclaim your ls columns and get rid of the file
forever, it's easy:
just edit /etc/group and remove yourself from the admin group.
If you were doing anything that required being in the admin group
(on Ubuntu that's usually sudo rights, granted via the %admin line
in /etc/sudoers), substitute another group with a different name.
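(A minimal sketch of that substitution, with a made-up group name,
"staffers": add the group to /etc/group, put yourself in it, then
give it the admin group's old rights by adding this line with visudo:)
%staffers   ALL=(ALL) ALL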
Tags: linux, bash, sudo, annoyances, ubuntu
[
18:33 Oct 09, 2008
More linux |
permalink to this entry |
]
Sat, 04 Oct 2008
Dave and I were testing some ways of speeding up the booting process,
which is how he came to be looking at my Vaio's console with no X
running. "What's wrong with that font?" he asked.
I explained how Ubuntu always starts the boot process with a perfectly
fine font, then about 80% of the way through boot it deliberately
changes it to a garbled, difficult-to-read font that was clearly not
designed for 1024x768. Been meaning for ages to figure out how to
fix it, never spent the time ... Okay, it said "Setting up console
font and keymap" just before it changed the font.
That message should be easy to find.
Maybe I should take a few minutes now and look into it.
The message comes from /etc/init.d/console-setup, which runs a
program called setupcons, which has a man page. setupcons uses
/etc/default/console-setup, which includes the following section:
# Valid font faces are: VGA (sizes 8, 14 and 16), Terminus (sizes
# 12x6, 14, 16, 20x10, 24x12, 28x14 and 32x16), TerminusBold (sizes
# 14, 16, 20x10, 24x12, 28x14 and 32x16), TerminusBoldVGA (sizes 14
# and 16), Fixed (sizes 13, 14, 15, 16 and 18), Goha (sizes 12, 14 and
# 16), GohaClassic (sizes 12, 14 and 16).
FONTFACE="Fixed"
FONTSIZE="16"
The hard part of changing the console font in the past has always been
finding out what console fonts are available. So having a list right
there in the comment is a big help.
Okay, let's try changing it to Terminus and running setupcons again.
Nope, error message. How about VGA? Success, looks fine. That was easy!
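So the working configuration was simply this (only FONTFACE changed
from the default):
FONTFACE="VGA"
FONTSIZE="16"
followed by running setupcons again (as root) to apply it.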
But while I was in that file, what about the keymap? That's another
thing I've been meaning to fix for ages ... under Debian, Redhat and
earlier Ubuntu versions I had a .kmap.gz console map that turned my
capslock key into a Control key (the way God intended). But Ubuntu
changed things all around so the old fix didn't work any more.
I found a thread from
December from someone who wanted to make the exact same change,
for the same reason, but the only real advice in the thread involved
an elaborate ritual involving defining keymaps for X and Gnome then
applying them to the console. Surely there was a better way.
It seemed pretty clear that /etc/console-setup/boottime.kmap.gz
was the keymap it was using. I tried substituting my old keymap, but
since I'd written it to inherit from other keymaps that no longer
existed, loadkeys couldn't use it. Eventually I just gunzipped
boottime.kmap.gz, found the Caps Lock key (keycode 29), replaced
all the Caps_Locks with Controls, and gzipped it back up again.
And it worked!
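(In command form, the whole edit is roughly this -- I did the
replacement in an editor, but sed does the same thing, as long as
Caps_Lock doesn't appear anywhere you want to keep it:)
% cd /etc/console-setup
% sudo gunzip boottime.kmap.gz
% sudo sed -i 's/Caps_Lock/Control/g' boottime.kmap
% sudo gzip boottime.kmap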
Gary Vollink has a more detailed description, and the process hasn't
changed much since his page on
Getting "Control"
on the "Caps Lock".
Another gem linked to from the Ubuntu thread was this
excellent
article on keyboard layouts under X by Daniel Paul O'Donnell.
It's not relevant to the problem of setting the console keymap,
but it looks like a very useful reference on how various
international character input methods work under X.
Tags: linux, ubuntu, fonts
[
22:33 Oct 04, 2008
More linux |
permalink to this entry |
]
Mon, 22 Sep 2008
Part
III in the Linux Astronomy series on Linux Planet covers two 3-D apps,
Stellarium and Celestia.
Writing this one was somewhat tricky because
the current Ubuntu, "Hardy", has a bug in its Radeon handling
and both these apps lock my machine up pretty quickly, so I went
through a lot of reboot cycles getting the screenshots.
(I found lots of bug reports and comments on the web, so I know
it's not just me.)
Fortunately I was able to test both apps and grab a few screenshots
on Fedora 8 and Ubuntu "Feisty" without encountering crashes.
(Ubuntu sure has been having a lot of
trouble with their X support lately! I'm going to start keeping
current Fedora and Suse installs around for times like this.)
Tags: writing, astronomy, linux, ubuntu, bugs
[
22:10 Sep 22, 2008
More writing |
permalink to this entry |
]
Sat, 31 May 2008
Ah, I so love progress. I was working with powertop to try to make my
system more efficient, and kept seeing a USB device I didn't recognize
showing up as a frequent source of wakeups.
lsusb didn't show it either, so I tried firing up usbview.
Except it didn't work: on Hardy it brings up an error window
complaining about not being able to open /proc/bus/usb, which, indeed,
is not mounted despite being enabled in my kernel.
A little googling showed this was an oft-reported bug in Ubuntu Hardy:
for instance, bug
156085 and bug
151585, both with the charming attitude I so love in open
source projects, "No, we won't enable this simple fix that reverts
the software to the way it worked in the last release; we'd prefer
to keep it completely broken indefinitely until someone happens to get
around to fixing it right."
Okay, that's being a little harsh:
admittedly, most of the programs broken by this are in the "universe"
repository and thus not an official part of Ubuntu. Still, why be rude
to users who are just trying to find a way around bustage that was
deliberately introduced? Doesn't Ubuntu have any sort of process to
assign bugs in universe packages to a maintainer who might care about them?
Anyway, the workaround, in case you need usbview or qemu/kvm or
anything else that needs /proc/bus/usb, is to edit the file
/etc/init.d/mountdevsubfs.sh
and look for the line that says:
# Magic to make /proc/bus/usb work
Uncomment the lines immediately following that line, then either
reboot or run the last command there by hand.
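(Quoting from memory, the uncommented stanza looks roughly like this --
domount is a helper function defined by the init scripts, so
double-check against your own copy of the file:)
mkdir -p /dev/bus/usb/.usbfs
domount usbfs "" /dev/bus/usb/.usbfs -obusmode=0700,devmode=0600,listmode=0644
ln -s .usbfs/devices /dev/bus/usb/devices
mount --rbind /dev/bus/usb /proc/bus/usb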
(In case you're wondering, usbview showed that the USB device causing
the powertop wakeups was the multi-flash card reader. I'm suspecting
hald-addons-storage is involved -- powertop already flagged hal's cdrom
polling as the number-one power waster. I don't know why the flash
multicard reader shows up in usbview but not in lsusb.)
Tags: ubuntu, usb
[
21:45 May 31, 2008
More linux |
permalink to this entry |
]
Thu, 22 May 2008
Dave needed something scanned. Oh, good! The first use of a scanner under
a new distro is always an interesting test. Though the last few
Ubuntu releases have been so good about making scanners "just work"
that I was beginning to take scanners for granted.
"Sure, no problem," I told Dave, taking the sketch he gave me.
Ha! Famous last words.
For Hardy, I guess the Ubuntu folks decided that users had
had it too easy for a while and it was time to throw us a challenge.
Under Hardy, scanning works fine as root, but normal users can't
access the scanner. sane-find-scanner
sees the scanner,
but xsane and the xsane-gimp plug-in can't talk to it (except as root).
It turns out the code for noticing you plugged in a scanner and
setting appropriate permissions (like making it group "scanner")
has been removed from udev, the obvious place for it ... and moved
into hal. Except, you guessed it, whatever hal is supposed to be
doing isn't working, so the device's group is never set to "scanner"
to make it accessible to non-root users.
Lots of people are hitting this and filing bugs (search for
scanner permissions), in particular
bug
121082 and bug
217571.
Fortunately, the fix is quite easy if you have a copy of your old
gutsy install: just copy /etc/udev/rules.d/45-libsane.rules from
gutsy to the same place on hardy.
(If you don't have your gutsy udev rules handy, I attached the file to the
latter of the two bugs I linked above.)
Then udev will handle your scanner just like it used to,
and you don't have to wait for the hal developers to figure out
what's wrong with the new hal rules.
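(The copy is a one-liner -- /media/gutsy here is just my example
mount point for the old install -- then unplug and replug the scanner
so udev applies the rule:)
% sudo cp /media/gutsy/etc/udev/rules.d/45-libsane.rules /etc/udev/rules.d/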
Tags: linux, ubuntu, scanner, udev, hal
[
16:56 May 22, 2008
More linux |
permalink to this entry |
]
Fri, 16 May 2008
My laptop's clock has been drifting. I suspect the clock battery is
low (not surprising on a 7-year-old machine). But after an hour of
poking and prodding, I've been unable to find a way to expose the
circuit board under the keyboard, either from the top (keyboard)
side -- though I know how to remove individual keycaps, thanks to a reader
who sent me detailed instructions a while back (thanks, Miles!) --
or the bottom. Any expert on Vaio SR laptops know how this works?
Anyway, that means I have to check and reset the time periodically.
So this morning I did a time check and found it many hours off.
No, wait -- actually it was pretty close; it only looked like it
was way off because the system had suddenly decided it was in UTC,
not PDT. But how could I change that back?
I checked /etc/timezone -- sure enough, it was set to UTC. So I
changed it, copying the value from a Debian machine -- "US/Pacific" --
but that didn't do it, even after a reboot.
I spent some time reading man hwclock
-- there's a lot
of good reading in that manual page, about the relation between the
system (kernel) clock and the hardware clock. Did you know that
you're not supposed to use the date command to set the system
time while the system is running? Me neither -- I do that all the
time. Hmm. Anyway, interesting reading, but nothing useful about
the system time zone.
It has an extensive SEE ALSO list at the end, so I explored some
of those documents.
/usr/share/doc/util-linux/README.Debian.hwclock
is full of lots of interesting information, well worth reading,
but it didn't have the answer. man tzset
sounded
promising, but there was no such man page (or program) on my system.
Just for the heckofit, I tried typing tz
[tab]
to see if I had any other timezone-related programs installed ...
and found tzselect. And there was the answer, added almost as an
afterthought at the end of the manual page:
Note that tzselect will not actually change the timezone for you.
Use 'dpkg-reconfigure tzdata' to achieve this.
Sure enough,
dpkg-reconfigure tzdata
let me set
the time zone. And it even seems to be remembered through a reboot.
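(In theory you can even skip the dialog: in noninteractive mode,
dpkg-reconfigure just reads whatever /etc/timezone already says.
Something like:)
% sudo sh -c 'echo US/Pacific > /etc/timezone'
% sudo dpkg-reconfigure -f noninteractive tzdata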
Tags: linux, debian, ubuntu, vaio
[
11:04 May 16, 2008
More linux |
permalink to this entry |
]
Tue, 29 Apr 2008
Since updating to Hardy, I've been getting mail from Anacron:
/etc/cron.weekly/slocate:
slocate: fatal error: load_file: Could not open file: /etc/updatedb.conf: No such file or directory
That's the script that updates the database for locate,
Linux's fast find system.
I figured I must have screwed something up when I moved
that slocate cron script from cron.daily
to cron.weekly (because I hate having my machine slow to a
crawl as soon as I boot it in the morning, and it doesn't bother me
if the database doesn't necessarily have files added in the
last day or two).
But after talking to some other folks and googling for Ubuntu bugs,
I discovered I wasn't the only one getting that mail, and there was
already a
bug covering it. Comparing my setup with another Hardy user's,
I found that the file slocate was failing to find, /etc/updatedb.conf,
belongs to a different package, mlocate. If mlocate is installed,
then slocate's cron script works; otherwise, it doesn't.
Sounds like slocate should have a dependency that pulls in mlocate,
no?
But wait, what do these two packages do? Let's try a little
aptitude search locate:
p dlocate - fast alternative to dpkg -L and dpkg -S
p kio-locate - kio-slave for the locate command
i locate - maintain and query an index of a directory
p mlocate - quickly find files on the filesystem based
i slocate - Secure replacement of findutil's locate
Okay, forget the first two, but we have locate, mlocate, and slocate.
How do they relate?
Worse, if I install mlocate (so slocate will work) and then look in my
cron directories, it turns out I now have, count 'em, five
different cron scripts that run updatedb. They are:
In cron.daily:
locate: 72 lines! but a lot of that is comments and pruning,
and a lot of fiddling to figure out what version of the kernel is
running to see whether it can pass any advanced flags when it tries
to renice the process. In the end it calls
updatedb.findutils
(note no full path, though it
uses a full path when it checks for it earlier in the script).
slocate: A much simpler but unfortunately buggy 20 lines.
It checks for /etc/updatedb.conf, runs it if it exists, fiddles
with ionice, checks again for /etc/updatedb.conf, and based
on whether it finds it, runs either /usr/bin/slocate -u
or /usr/bin/slocate -u -f proc. The latter path is what
was failing and sending root mail every time the script was run.
mlocate: an even slimmer 12-line script, which checks for
/usr/bin/updatedb.mlocate and, if it exists, fiddles with ionice,
then runs /usr/bin/updatedb.mlocate.
In cron.weekly:
Two virtually identical scripts called find.notslocate and
find.notslocate.dpkg-new, which differ only in dpkg-new having
more elaborate ionice options. They both run updatedb.
And which updatedb would that be? Probably /usr/bin/updatedb, which
links to /etc/alternatives/updatedb, which probably links to either
updatedb.mlocate or updatedb.slocate, whichever you've installed
most recently. But in either case, it's hard to see why you'd need
this script running weekly if you're already running both flavors
of updatedb from other scripts in cron.daily. And having two copies
of the script is just plain wrong (and there was already a
bug
filed on it). (As long as you're poking around
in cron.daily and cron.weekly, check and see if you have
any more of these extra dpkg-new or dpkg-old scripts -- they might be
slowing down your machine for no reason.)
Further research reveals that mlocate is a new(ish) package intended
to replace slocate. (There was a long discussion of that on
ubuntu-devel,
leading to the replacement of slocate with mlocate very late in
the Hardy development cycle. There was also lots of discussion of
"tracker", apparently a GUI fast find tool that can only search in
the user's home directory.)
What is this mlocate?
The m stands for "merge": the advantage of mlocate is
that it can merge new results into its existing database instead
of replacing the whole thing every time. Sounds good, right?
However, the down side is that mlocate apparently can't
purge its database of old files that no longer
exist, and these files will clutter up your locate results.
Running locate -e
will keep them from being printed --
but there seems
to be no way to set this permanently, via an environment variable
or .locaterc file, nor to tell updatedb.mlocate to clean up its database.
So you'll need to alias locate to locate -e
if you want sensible behavior. Or go back to slocate. Sigh.
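(That is, a line like this in your shell startup file -- bash syntax
shown; for csh/tcsh it's alias locate 'locate -e':)
alias locate='locate -e'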
Cleaning up
The important thing is to get rid of most of those spurious updatedb
cron scripts. You might choose to run updatedb daily, weekly, or only
when you choose to run it; but you probably don't want five different
scripts running two different versions of updatedb at different times.
The packages obviously aren't cleaning up after themselves, so let's
do a little manual cleanup.
That find.notslocate script looks suspicious. In fact, if you run
dpkg -S find.notslocate, you find out that it doesn't
belong to any package -- not only should the .dpkg-new version not
be there, neither should the other one! So out they go.
As for slocate and mlocate,
it's important to know that the two packages can coexist:
installing mlocate doesn't remove slocate or vice versa.
A clean Hardy install should have only mlocate; upgrades from Gutsy
are more likely to have a broken slocate.
Having both packages probably isn't what you want. So pick one, and
remove or disable the other. If mlocate is what you want,
apt-get purge slocate
and just make sure that
/etc/cron.*/slocate disappears. If you decide you want slocate,
it's a little trickier since the slocate package is broken;
but you can fix it by creating an empty /etc/updatedb.conf so
updatedb.slocate won't fail.
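To summarize the mlocate route as commands (the rm cleans up the
orphaned weekly scripts discussed above):
% sudo rm /etc/cron.weekly/find.notslocate /etc/cron.weekly/find.notslocate.dpkg-new
% sudo apt-get purge slocate
% ls /etc/cron.daily/slocate /etc/cron.weekly/slocate
(that ls should now come up empty). And if you choose slocate instead,
the one-line fix for its broken cron script:
% sudo touch /etc/updatedb.conf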
Tags: linux, boot, ubuntu, install
[
21:48 Apr 29, 2008
More linux/install |
permalink to this entry |
]
Tue, 22 Apr 2008
Seems like each new Ubuntu release makes a few gratuitous changes
to the syntax of system files. Today's change involves autologin,
controlled by the "upstart" system (here's what I wrote about the
previous syntax for
autologin
under upstart).
The /usr/bin/loginscript still hasn't changed, and this still works:
#! /bin/sh
/bin/login -f yourusername
But the syntax has changed a little for the getty line in
/etc/event.d/tty1:
respawn is now on its own line (I don't know if that matters --
I still can't find any documentation on this file's syntax,
though I found a new upstart
page that links to some blog entries illustrating how upstart
can be used to start system daemons like dbus).
And the getty now needs an exec before it.
Like this:
respawn
exec /sbin/getty -n -l /usr/bin/loginscript 38400 tty1
Update: this changed
again in Karmic Koala: the file has moved from /etc/event.d/tty1
to /etc/init/tty1.conf.
Tags: linux, ubuntu, boot
[
15:27 Apr 22, 2008
More linux |
permalink to this entry |
]
Sun, 20 Apr 2008
I finally had a moment to upgrade my desktop to Ubuntu's "Hardy Heron".
I followed the same procedure as when I went from feisty to gutsy:
- cp -ax / /hardy
- cp -ax /dev/.static/dev/* /hardy/dev/
- Fix up files like /hardy/etc/fstab and /boot/grub/menu.lst
- Reboot into the newly copied gutsy
- do-release-upgrade -d
It took an hour or two to pull down all the files, followed by a long
interval of occasionally typing Y or N, and then I was ready to start
cleaning up some of the packages I'd noticed flying by that I didn't
want. Oops! I couldn't remove or install anything with apt-get,
because it insisted that I first run dpkg --configure -a.
But I couldn't run dpkg --configure -a, because several
packages were broken.
The first broken package was plucker,
which apparently had failed to install any files.
Its postinstall script was failing because it had no
files to operate on; and then I couldn't do anything further with it
because apt-get wouldn't do anything until I did a
dpkg --configure -a.
I finally got out of that by dpkg -P plucker; then after several
more dpkg --configure -a rounds I was eventually able to apt-get
install plucker (which installed just fine the second time).
But apt still wasn't happy, because it wanted to run the trigger for
initramfs-tools, which wouldn't run because it wanted kernel modules
for some specific kernel version in /lib/modules. I didn't have any
kernel modules because I'm not running Ubuntu's kernel (I'm stuck on
2.6.23 because
bug 10118
makes all 2.6.24 variants unable to sync with USB Palm devices).
But I couldn't remove initramfs-tools because udev
(along with a bunch of other less important packages) depends on it.
I finally found my way out of that by removing
/var/lib/dpkg/triggers/initramfs-tools.
I reported it as
bug 220094.
Update: I forgot to mention one important thing I hit both on
this machine and earlier, on the laptop: /usr/bin/play (provided by
the "sox" package) no longer works because it now depends on a
zillion separate libraries. apt-get install libsox-fmt-all
to get all of them.
Tags: linux, ubuntu, debian, install
[
21:02 Apr 20, 2008
More linux/install |
permalink to this entry |
]
Mon, 07 Apr 2008
On a lunchtime bird walk on Monday I saw one blue heron and at least
five green herons (very unusual to see so many of those).
Maybe that helped prepare me for installing the latest
Ubuntu beta, "Hardy Heron", Monday afternoon.
I was trying the beta primarily in the hope that it would fix a
serious video out regression that appeared in Gutsy (the current
Ubuntu) in January.
My beloved old Vaio SR17 laptop can't switch video signals on the
fly like some laptops can; I've always needed to boot it with an
external monitor or projector connected. But as long as it saw a
monitor at boot time, it would remember that state through many
suspend cycles, so I could come out of suspend, plug in to a projector
and be ready to go. But beginning some time in late January, somehow
Gutsy started doing something that turned off the video signal when
suspending. To talk to a projector, I could reboot with the projector
connected (I hate making an audience watch that! and besides, it takes
away the magic). I also discovered that switching to one of
the alternate consoles, then back (ctl-alt-F2 ctl-alt-F7) got a signal
going out on the video port -- but I found out the hard way, in front
of an audience, that it was only a 640x480 signal, not the 1024x768
signal I expected. Not pretty! I could either go back to Feisty ...
or try upgrading to Hardy.
I've already written about the handy
debootstrap
lightweight install process I used.
(I did try the official Hardy "alternate installer" disk first, but
after finishing package installation it got into a spin lock
trying to configure kernel modules, so I had to pull the plug and
try another approach.)
This left me with a system that was very minimal indeed, so I spent
the next few hours installing packages, starting with
tcsh, vim (Ubuntu's minimal install has something called vim, but
it's not actually vim so you tend to get lots of errors about parsing
your .vimrc until you install the real vim),
acpi and acpi-support (for suspending),
and the window system: xorg and friends. To get xorg, I started with:
apt-get install xserver-xorg-video-savage xbase-clients openbox xloadimage xterm
Then there was the usual exercise of aptitude search font
and installing everything on that list that seemed relevant to
European languages (I don't really need to scroll through dozens of
Tamil, Thai, Devanagari and Bangla fonts every time I'm looking for a
fancy cursive in GIMP).
But I hit a problem with that pretty early on: turns out most of
the fonts I installed weren't actually showing up in xlsfonts,
xfontsel, gtkfontsel, or, most important, the little xlib program
I'm using for a talk I need to give in a couple weeks.
I filed it as bug
212669, but kept working on it, and when a clever person on
#ubuntu+1 ("redwhitewaldo") suggested I take a look at the
x-ttcidfont-conf README, that gave me enough clue to get me
the rest of the way. Turns out there's a Debian
bug with the solution, and the workaround is easy until the
Ubuntu folks pick up the update.
I hit a few other problems, like the
PCMCIA/udev
problem I've described elsewhere ... but mostly, my debootstrapped
Hardy Heron is working quite well.
And in case you're wondering whether Hardy fixed the video signal
problem, I'm happy to say it does. Video out is working just fine.
Tags: linux, ubuntu, install, fonts, vaio
[
19:31 Apr 07, 2008
More linux/install |
permalink to this entry |
]
Fri, 04 Apr 2008
I'm experimenting with Ubuntu's "Hardy Heron" beta on the laptop, and
one problem I've hit is that it never configures my network card properly.
The card is a cardbus 3Com card that uses the 3c59x driver.
When I plug it in, or when I boot or resume after a suspend, the
card ends up in a state where it shows up in ifconfig eth0,
but it isn't marked UP. ifup eth0 says it's already up;
ifdown eth0 complains
error: SIOCDELRT: No such process
but afterward, I can run ifup eth0 and this time it
works. I've made an alias, net, that does
sudo ifdown eth0; sudo ifup eth0. That's silly --
I wanted to fix it so it happened automatically.
Unfortunately, there's nothing written anywhere on debugging udev.
I fiddled a little with udevmonitor and
udevtest /class/net/eth0, and it looked like udev
was in fact running the ifup rule in
/etc/udev/rules.d/85-ifupdown.rules, which calls:
/sbin/start-stop-daemon --start --background --pidfile /var/run/network/bogus --startas /sbin/ifup -- --allow auto $env{INTERFACE}
So I tried running that by hand (with $env{INTERFACE} being eth0)
and, indeed, it didn't bring the interface up.
But that suggested a fix: how about adding --force
to that ifup line? I don't know why the card is already in a state
where ifup doesn't want to handle it, but it is, and maybe
--force would fix it. Sure enough: that worked fine,
and it even works when resuming after a suspend.
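(So the modified rule in 85-ifupdown.rules ends up something like
this -- I may not have the option order exactly right:)
/sbin/start-stop-daemon --start --background --pidfile /var/run/network/bogus --startas /sbin/ifup -- --allow auto --force $env{INTERFACE}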
I filed bug
211955 including a description of the fix. Maybe there's some
reason for not wanting to use --force
in 85-ifupdown
(why wouldn't you always want to configure a network card when it's
added and is specified as auto and allow-hotplug in
/etc/network/interfaces?) but if so, maybe someone will
suggest a better fix.
Tags: linux, ubuntu, udev, networking
[
14:41 Apr 04, 2008
More linux |
permalink to this entry |
]
Sun, 23 Dec 2007
I use wireless so seldom that it seems like each time I need it, it's
a brand new adventure finding out what has changed since the last time
to make it break in a new and exciting way.
This week's wi-fi adventure involved Ubuntu's current "Gutsy Gibbon"
release and my prism54 wireless card. I booted the machine,
switched to the right
network scheme,
inserted the card, and ... no lights.
ifconfig -a
showed the card on eth1 rather
than eth0.
After some fiddling, I ejected the card and re-inserted it; now
ifconfig -a
showed it on eth2. Each time I
inserted it, the number incremented by one.
Ah, that's something I remembered from
Debian
Etch -- a problem with the udev "persistent net rules" file in
/etc/udev.
Sure enough, /etc/udev/rules.d/70-persistent-net.rules had two entries
for the card, one on eth1 and the other on eth2. Ejecting and
re-inserting added another one for eth3. Since my network scheme is
set up to apply to eth0, this obviously wouldn't work.
A comment in that file says it's generated from
75-persistent-net-generator.rules. But unfortunately,
the rules used by that file are undocumented and opaque -- I've never
been able to figure out how to make a change in its behavior.
I fiddled around for a bit, then gave up and chose the brute force
solution:
- Remove /etc/udev/rules.d/75-persistent-net-generator.rules
- Edit /etc/udev/70-persistent-net.rules to give the
device the right name (eth1, eth0 or whatever).
And that worked fine. Without 75-persistent-net-generator.rules
getting in the way, the name seen in 70-persistent-net.rules
works fine and I'm able to use the network.
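(For reference, each entry in 70-persistent-net.rules is a single line
something like the following -- the MAC address here is made up, and
the exact match keys vary between udev versions; the part you care
about is NAME:)
SUBSYSTEM=="net", DRIVERS=="?*", ATTRS{address}=="00:11:22:33:44:55", NAME="eth0"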
The weird thing about this is that I've been using Gutsy with my wired
network card (a 3com) for at least a month now without this problem
showing up. For some reason, the persistent net generator doesn't work
for the Prism54 card though it works fine for the 3com.
A scan of the Ubuntu bug repository reveals lots of other people
hitting similar problems on an assortment of wireless cards;
bug
153727 is a fairly typical report, but the older
bug 31502
(marked as fixed) points to a likely reason this is apparently so
common on wireless cards -- apparently some of them report the wrong
MAC address before the firmware is loaded.
Tags: linux, ubuntu, udev, networking
[
19:02 Dec 23, 2007
More linux |
permalink to this entry |
]
Fri, 07 Dec 2007
(A culture of regressions, part 2)
I've been running on Ubuntu's latest, "Gutsy gibbon", for maybe
a month now. Like any release, it has its problems that I've needed to
work around. Like many distros, these problems won't be fixed before
the next release. But unlike other distros, it's not just lack of
developer time; it turns out Ubuntu's developers point to an official
policy as a reason not to fix bugs.
Take the case of the
aumix
bug. Aumix just plain doesn't work in gutsy. It prints,
"aumix: SOUND_MIXER_READ_DEVMASK" and exits.
This turns out to be some error in the way it was compiled.
If you apt-get the official ubuntu sources, build the package
and install it yourself, it works fine. So somehow they got a glitch
during the process of building it, and produced a bad binary.
(Minor digression -- does that make this a GPL violation? Shipping
sources that don't match the distributed binary? No telling what
sources were used to produce the binary in Gutsy. Not that anyone
would actually want the sources for the broken aumix, of course.)
It's an easy fix, right? Just rebuild the binary from the source
in the repository, and push it to the servers.
Apparently not. A few days ago, Henrik Nilsen Omma wrote in the bug:
This bug was nominated for Gutsy but does currently not qualify for a 7.10 stable release update (SRU) and the nomination is therefore declined.
According the the SRU policy, the fix should already be deployed and
tested in the current development version before an update to the
stable releases will be considered. [ ... ]
See: https://wiki.ubuntu.com/StableReleaseUpdates.
Of course, I clicked on the link to receive enlightenment.
Ubuntu's Stable Release page explains
Users of the official release, in contrast, expect a high degree of
stability. They use their Ubuntu system for their day-to-day work, and
problems they experience with it can be extremely disruptive. Many of
them are less experienced with Ubuntu and with Linux, and expect a
reliable system which does not require their intervention.
by way of explaining the official Ubuntu policy on updates:
Stable release updates will, in general, only be issued in order to
fix high-impact bugs. Examples of such bugs include:
- Bugs which may, under realistic circumstances, directly cause a
security vulnerability
- Bugs which represent severe regressions from the previous release of Ubuntu
- Bugs which may, under realistic circumstances, directly cause a
loss of user data
Clearly aumix isn't a security vulnerability or a loss of user data.
But I could make a good argument that a package that doesn't work ...
ever ... for anyone ... constitutes a severe regression from
the previous version of that package.
Ubuntu apparently thinks that users get used to packages not working,
and grow to like it. I guess that if you actually fixed
packages that you broke, that would be disruptive to users of the
stable release.
I'm trying to picture these Ubuntu target users, who embrace
regressions and get upset when something that doesn't work at all gets
fixed so that it works as it did in past releases. I can't say I've
ever actually met a user like that myself. But evidently the Ubuntu
Updates Team knows better.
Update: I just have to pass along Dave's comment:
"When an organization gets to the point where it spends more energy
on institutional processes for justifying not fixing
something than on just fixing it -- it's over."
Update: Carla Schroder has also
written
about this.
Tags: linux, ubuntu, audio
[
11:21 Dec 07, 2007
More linux |
permalink to this entry |
]
Fri, 30 Nov 2007
I upgraded my system to the latest Ubuntu, "Gutsy Gibbon", recently.
Of course, it's always best
to make a backup before doing a major upgrade. In this case, the goal
was to back up my root partition to another partition on the same
disk and get it working as a bootable Ubuntu, which I could then
upgrade, saving the old partition as a working backup.
I'll describe here a couple of silly snags I hit,
to save you from making the same mistakes.
Linux offers lots of ways to copy filesystems.
I've used tar in the past, with a command like (starting in /gutsy):
tar --one-file-system -cf - / | tar xvf - > /tmp/backup.out
but cp seemed like an easier way, so I want to try it.
I mounted my freshly made backup partition as /gutsy and started a
cp -ax /* /gutsy
(-a does the right thing for
permissions, owner and group, and file type; -x tells it to stay
on the original filesystem).
Count to ten, then check what's getting copied.
Whoops! It clearly wasn't staying on the original filesystem.
It turned out my mistake was the /*.
Pretty obvious in hindsight what cp was doing: for each entry in /
it did a cp -ax, staying on the filesystem for that entry, not on
the filesystem for /. So /home, /boot, /proc, etc. all got copied.
The solution was to remove the *: cp -ax / /gutsy.
But it wasn't quite that simple.
It looked like it was working -- a long wait, then cp finished
and I had what looked like a nice filesystem backup.
I adjusted /gutsy/etc/fstab so that it would point to the right root,
set up a grub entry, and rebooted. Disaster! The boot hung right after
Attached scsi generic sg0 type 0
with no indication of
what was wrong.
Rebooting into the old partition told me that what's supposed to
happen next is: * Setting preliminary keymap...
But the crucial error message was actually
several lines earlier: Warning: unable to open an initial
console. It hadn't been able to open /dev/console.
Now, in the newly copied filesystem,
there was no /dev/console: in fact, /dev was empty. Nothing had been
copied because /dev is a virtual file system, created by udev.
But it turns out that the boot process needs some static devices in
/dev, before udev has created anything. Of course, once udev's
virtual filesystem has been mounted on /dev, you can no longer read
whatever was in /dev on the root partition in order to copy it
somewhere else. But udev nicely gives you access to it,
in /dev/.static/dev. So what I needed to do to get my new partition
booting was:
cp -ax /dev/.static/dev/ /gutsy/dev/
With that done, I was able to boot into my new filesystem and upgrade
to Gutsy.
Tags: linux, ubuntu, backups
[
23:48 Nov 30, 2007
More linux |
permalink to this entry |
]
Sat, 25 Aug 2007
On a seemingly harmless trip to Fry's,
my mother got a look at the 22-inch widescreen LCD monitors
and decided she had to have one. (Can't blame her ... I've been
feeling the urge myself lately.)
We got the lovely new monitor home, plugged it in, configured X
and discovered that the screen showed severe vertical banding.
It was beautiful at low resolutions, but whenever we went to
the monitor's maximum resolution of 1680x1050, the bands appeared.
After lots of testing, we tentatively pinned the problem
down to the motherboard.
It turns out ancient machines with 1x AGP motherboards
can't drive that many pixels properly,
even if the video card is up to the job. Who knew?
Off we trooped to check out new computers.
We'd been hinting for quite some time that it might be about
time for a new machine, and Mom was ready to take the plunge
(especially if it meant not having to return that beautiful monitor).
We were hoping to find something with a relatively efficient Intel Core 2
processor and Intel integrated graphics: I've been told the Intel
graphics chip works well with Linux using open source drivers.
(Mom, being a person of good taste, prefers Linux, and none of us
wanted to wrestle with the proprietary nvidia drivers).
We found a likely machine at PC Club. They were even willing to
knock $60 off the price since she didn't want Windows.
But that raised a new problem. During our fiddling with her old
machine, we'd tried burning a Xubuntu CD, to see if the banding
problem was due to the old XFree86 she was running. Installing it hadn't
worked: her CD burner claimed it burned correctly, but the resulting
CD had errors and didn't pass verification. So we needed a CD burned.
We asked PC Club when buying the computer whether we might burn the
ISO to CD, but apparently that counts as a "data transfer" and their
minimum data transfer charge is $80. A bit much.
No problem -- a friend was coming over for dinner that night,
and he was kind enough to bring his Mac laptop ...
and after a half hour of fiddling, we determined that his burner
didn't work either (it gave a checksum error before starting the
burn). He'd never tried burning a CD on that laptop.
What about Kinko's? They have lots of data services, right?
Maybe they can burn an ISO. So we stopped at Kinko's after dinner.
They, of course, had never heard of an ISO image and had no idea how
to burn one on their Windows box.
Fearing getting a disk with a filesystem containing one file named
"xubuntu-7.04-alternate-i386.iso", we asked if they had a mac,
since we knew how to burn an ISO there.
They did, though they said sometimes the CD burner was flaky.
We decided to take the risk.
Burning an ISO on a mac isn't straightforward -- you have to do
things in exactly the right order.
It took some fast talking to persuade them of the steps ("No, it
really won't work if you insert the blank CD first. Yes, we're quite
sure") and we had to wait a long time for Kinko's antivirus software
to decide that Xubuntu wasn't malware, but 45 minutes and $10 later,
we had a disc.
And it worked! We first set up the machine in the living room, away
from the network, so we had to kill aptitude update
when the install hung installing "xubuntu-desktop" at 85%
(thank goodness for alternate consoles on ctl-alt-F2) but otherwise
the install went just fine. We rebooted, and Xubuntu came up ...
at 1280x1024, totally wrong. Fiddling with the resolution in xorg.conf
didn't help; trying to autodetect the monitor with
dpkg-reconfigure xorg
crashed the machine and we had to
power cycle.
Back to the web ... turns out that Ubuntu "Feisty" ships with a bad
Intel driver. Lots of people have hit the problem, and we found a
few elaborate workarounds involving installing X drivers from various
places, but nothing simple. Well, we hadn't come
this far to take all the hardware back now.
First we moved the machine into the computer room, hooked up
networking and reinstalled xubuntu with a full network, just in
case. The resolution was still wrong.
Then, with Dave in the living room calling out steps off a web page
he'd found, we began the long workaround process.
"First," Dave suggested, reading, "check the version of
xserver-xorg-video-intel.
Let's make sure we're starting with the same version this guy is."
dpkg -l xserver-xorg-video-intel
... "Uh, it isn't
installed," I reported. I tried installing it. "It wants to remove
xserver-xorg-video-i810." Hmm! We decided we'd better do it,
since the rest of the instructions depended on having the
intel, not i810, driver.
And that was all it needed! The intel driver autodetected the monitor
and worked fine at 1680x1050.
So forget the elaborate instructions for trying X drivers from various
sources.
The problem was that xubuntu installed the wrong driver:
the i810 driver instead of the more generic intel driver.
(Apparently that bug is fixed for the next Ubuntu release.)
With that fix, it was only a few more minutes before Mom was
happily using her new system, widescreen monitor and all.
Tags: linux, X11, ubuntu
[
14:23 Aug 25, 2007
More linux |
permalink to this entry |
]
Sat, 18 Aug 2007
I'm forever having problems connecting to wireless networks,
especially with my Netgear Prism 54 card. The most common failure mode:
I insert the card and run
/etc/init.d/networking restart
(udev is supposed to handle this, but that stopped working a month
or so ago). The card looks like it's connecting,
ifconfig eth0
says it has the right IP address and it's marked up -- but try to
connect anywhere and it says "no route to host" or
"Destination host unreachable".
I've seen this both on networks which require a WEP key
and those that don't, and on nets where my older Prism2/Orinoco based
card will connect fine.
Apparently, the root of the problem
is that the Prism54 is more sensitive than the Prism2: it can see
more nearby networks. The Prism2 (with the orinoco_cs driver)
only sees the strongest network, and gloms onto it.
But the Prism54 chooses an access point according to arcane wisdom
only known to the driver developers.
So even if you're sitting right next to your access point and the
next one is half a block away and almost out of range, you need to
specify which one you want. How do you do that? Use the ESSID.
Every wireless network has a short identifier called the ESSID
to distinguish it from other nearby networks.
You can list all the access points the card sees with:
iwlist eth0 scan
(I'll be assuming
eth0 as the ethernet device throughout this
article. Depending on your distro and hardware, you may need to
substitute
ath0 or
eth1 or whatever your wireless card
calls itself. Some cards don't support scanning,
but details like that seem to be improving in recent kernels.)
You'll probably see a lot of ESSIDs like "linksys" or
"default" or "OEM" -- the default values on typical low-cost consumer
access points. Of course, you can set your own access point's ESSID
to anything you want.
So what if you think your wireless card should be working, but it can't
connect anywhere? Check the ESSID first. Start with iwconfig:
iwconfig eth0
iwconfig lists the access point associated with the card right now.
If it's not the one you expect, there are two ways to change that.
First, change it temporarily to make sure you're choosing the right ESSID:
iwconfig eth0 essid MyESSID
If your access point requires a key, add key nnnnnnnnnn
to the end of that line. Then see if your network is working.
If that works, you can make it permanent. On Debian-derived distros,
just add lines to the entry in /etc/network/interfaces:
wireless-essid MyESSID
wireless-key nnnnnnnnnn
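So a complete wireless entry might look like this (with your real
ESSID and key substituted, and dhcp assumed):
auto eth0
iface eth0 inet dhcp
    wireless-essid MyESSID
    wireless-key nnnnnnnnnn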
Some older howtos may suggest an interfaces line that looks like this:
up iwconfig eth0 essid MyESSID
Don't get sucked in. This "up" syntax used to work (along with pre-up
and post-up), but although man interfaces still mentions it,
it doesn't work reliably in modern releases.
Use wireless-essid instead.
Of course, you can also use a gooey tool like
gnome-network-manager to set the essid and key. Not being a
gnome user, some time ago I hacked up the beginnings of a standalone
Python GTK tool to configure networks. During this week's wi-fi
fiddlings, I dug it out and blew some of the dust off:
wifi-picker.
You can choose from a list of known networks (including both essid and
key) set up in your own configuration file, or from a list of essids
currently visible to the card, and (assuming you run it as root)
it can then set the essid and key to whatever you choose.
For networks I use often, I prefer to set up a long-term
network
scheme, but it's fun to have something I can run once to
show me the visible networks then let me set essid and key.
Tags: linux, networking, debian, ubuntu
[
15:44 Aug 18, 2007
More linux |
permalink to this entry |
]
Thu, 28 Jun 2007
I upgraded my laptop's Ubuntu partition from Edgy to Feisty.
Debian Etch works well, but it's just too old and I can't build
software like GIMP that insists on depending on cutting-edge
libraries.
But Feisty is cutting edge in other ways, so
it's been a week of workarounds, in two areas: Firefox and the kernel.
I'll start with Firefox.
Firefox crashes playing flash
First, the way Ubuntu's Firefox crashes when running Flash.
I run flashblock, fortunately, so I've been able to browse the web
just fine as long as I don't click on a flashblock button.
But I like being able to view the occasional youtube video,
and flash 7 has worked fine for me on every distro except Ubuntu.
I first saw the problem on Edgy, and upgrading to Feisty didn't cure the
problem.
But it wasn't their Firefox build; my own "kitfox" Firefox
build crashed as well. And it wasn't their flash installation; I've
never had any luck with either their Adobe flash installer or their
open source libswfdec, so I'm running the same old flash 7 plug-in
that I've used all along for other distros.
To find out what was really happening, I ran Firefox from the
commandline, then went to a flash page. It turned out it was
triggering an X error:
The error was: 'BadMatch (invalid parameter attributes)'.
(Details: serial 104 error_code 8 request_code 145 minor_code 3)
That gave me something to search for. It turns out there's a longstanding
Ubuntu
bug, 14911 filed on this issue, with several workarounds.
Setting the environment variable XLIB_SKIP_ARGB_VISUALS to 1 fixed the
problem, but, reading farther in the bug, I saw that the real problem
was that Ubuntu's installer had, for some strange reason, configured
my X to use 16 bit color instead of 24. Apparently this is pretty
common, and due to some bug involving X's and Mozilla's or Flash's
handling of transparency, this causes flash to crash Mozilla.
So the solution is very simple. Edit /etc/X11/xorg.conf, look
for the DefaultDepth line, and if it's 16, that's your problem.
Change it to 24, restart X and see if flash works. It worked for me!
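(The relevant part of the Screen section ends up looking like this;
keep whatever Identifier and other lines yours already has:)
Section "Screen"
    Identifier   "Default Screen"
    DefaultDepth 24
EndSection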
Eliminating Firefox's saved session pester dialog
While I was fiddling with Firefox, Dave started swearing. "Why does
Firefox always make me go through this dialog about restoring the last
session? Is there a way to turn that off?"
Sure enough, there's no exposed preference for this, so I poked around
about:config, searched for browser and found
browser.sessionstore.resume_from_crash. Double-click that
line to change it to false and you'll have no more pesky
dialog.
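(Equivalently, you can put the setting in user.js in your Firefox
profile directory:)
user_pref("browser.sessionstore.resume_from_crash", false);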
For more options related to session store, check the
Mozillazine
Session Restore page.
Kernel: runaway kacpid
Alas, having upgraded to Feisty expressly so that I could build
cutting-edge programs like GIMP, I discovered that I couldn't build
anything at all. Anything that uses heavy CPU for more than a minute
or two triggers a kernel daemon, kacpid, to grab most of the CPU for
itself. Being part of the kernel (even though it has a process ID),
kacpid is unkillable, and prevents the machine from shutting down,
so once this happens the only solution is to pull the power plug.
This has actually been a longstanding Ubuntu problem
(bug 75174)
but it used to be that disabling acpid would stop kacpid from
running away, and with feisty, that no longer helps.
The bug is also
kernel.org
bug 8274.
The Ubuntu bug suggested that disabling cpufreq solved it for one
person. Apparently the only way to do that is to build a new kernel.
There followed a long session of attempted kernel building.
It was tricky because of course I couldn't build on the
target machine (inability to build being the problem I was trying to
solve), and even if I built on my desktop machine,
a large rsync of the modules directory would trigger a
runaway kacpid. In the end, building a standalone kernel with
no modules was the only option.
But turning off cpufreq didn't help, nor did any of the other obvious
acpi options. The only option which stops kacpid is to disable acpi
altogether, and use apm. I'm sorry to lose hibernate, and temperature
monitoring, but that appears to be my only option with modern kernels.
Sigh.
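(In .config terms, the combination that finally worked was roughly
this -- ACPI off entirely, APM on:)
# CONFIG_ACPI is not set
CONFIG_APM=y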
Kernel: Hangs for 2 minutes at boot time initializing sound card
While Dave and I were madly trying to find a set of config options to
build a 2.6.21 that would boot on a Vaio (he was helping out with his
SR33 laptop, starting from a different set of config options) we both
hit, at about the same time, an odd bug: partway through boot, the
kernel would initialize the USB memory stick reader:
sd 0:0:0:0: Attached scsi removable disk sda
sd 0:0:0:0: Attached scsi generic sg0 type 0
and then it would hang, for a long time. Two minutes, as it turned
out. And the messages after that were pretty random: sometimes related
to the sound card, sometimes to the network, sometimes ... GConf?!
(What on earth is GConf doing in a kernel boot sequence?)
We tried disabling various options to try to pin down the culprit:
what was causing that two minute hang?
We eventually narrowed the blame to the sound card (which is a Yamaha,
using the ymfpci driver). And that was enough information for google
to find this
Linux Kernel Mailing List thread. Apparently the sound maintainer
decided, for some reason, to make the ymfpci driver depend on an
external firmware file ... and then didn't include the firmware file,
nor is it included in the alsa-firmware package he references in that
message. Lovely. I'm still a little puzzled about the timeout: the
post does not explain why, if a firmware file isn't found on the
disk, waiting for two minutes is likely to make one magically appear.
Apparently it will be fixed in 2.6.22, which isn't much help for
anyone who's trying to run a kernel on any of the 2.6.21.* series
in the meantime. (Isn't it a serious enough regression to fix in
2.6.21.*?) And he didn't suggest a workaround, except that
alsa-firmware package which doesn't actually contain the firmware
for that card.
Looks like it's left to the user to make things work.
So here's what to do: it turns out that if you take a 2.6.21 kernel,
and substitute the whole sound/pci/ymfpci directory from a 2.6.20
kernel source tree, it builds and boots just fine. And I'm off and
running with a standalone apm kernel with no acpi; sound works, and I
can finally build GIMP again.
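(That is, assuming both source trees are unpacked under /usr/src --
adjust the paths to wherever yours live:)
% cd /usr/src/linux-2.6.21
% rm -rf sound/pci/ymfpci
% cp -a /usr/src/linux-2.6.20/sound/pci/ymfpci sound/pci/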
So it's been quite a week of workarounds. You know, I used to argue
with all those annoying "Linux is not ready for the desktop"
people. But sometimes I feel like Linux usability is moving in the
wrong direction. I try to imagine explaining to my mac-using friends
why they should have to edit /etc/X11/xorg.conf because their distro
set up a configuration that makes Firefox crash, or why they need to
build a new kernel because the distributed ones all crash or hang
... I love Linux and don't regret using it, but I seem to need
workarounds like this more often now than I did a few years ago.
Sometimes it really does seem like the open source world is moving
backward, not forward.
Tags: linux, ubuntu, mozilla, firefox, kernel, audio
[
23:24 Jun 28, 2007
More linux |
permalink to this entry |
]
Sun, 13 May 2007
In the last installment,
I got the Visor driver working. My sitescooper process also requires
that I have a local web server (long story), so I needed Apache. It
was already there and running (curiously, Apache 1.3.34, not Apache 2),
and it was no problem to point the DocumentRoot to the right place.
But when I tested my local site,
I discovered that although I could see the text on my website, I
couldn't see any of the images. Furthermore, if I right-clicked on any
of those images and tried "View image", the link was pointing to the
right place (http://localhost/images/foo.jpg). The file
(/path/to/mysite/images/foo.jpg) existed with all the right
permissions. What was going on?
/var/log/apache/error.log gave me the clue. When I was trying to
view http://localhost/images/foo.jpg, apache was throwing this error:
[error] [client 127.0.0.1] File does not exist: /usr/share/images/foo.jpg
/usr/share/images? Huh?
Searching for usr/share/images in /etc/apache/httpd.conf gave the
answer. It turns out that Ubuntu, in their infinite wisdom, has
decided that no one would ever want a directory called images
in their webspace. Instead, they set up an alias so that any
reference to /images gets redirected to /usr/share/images.
WTF?
Anyway, the solution is to comment out that stanza of httpd.conf:
<IfModule mod_alias.c>
# Alias /icons/ /usr/share/apache/icons/
#
# <Directory /usr/share/apache/icons>
# Options Indexes MultiViews
# AllowOverride None
# Order allow,deny
# Allow from all
# </Directory>
#
# Alias /images/ /usr/share/images/
#
# <Directory /usr/share/images>
# Options MultiViews
# AllowOverride None
# Order allow,deny
# Allow from all
# </Directory>
</IfModule>
I suppose it's nice that they provided an example for how to use
mod_alias. But at the cost of breaking any site that has directories
named /images or /icons? Is it just me, or is that a bit crazy?
Tags: linux, ubuntu, web
[
22:55 May 13, 2007
More linux |
permalink to this entry |
]
When we left off,
I had just found a workaround for my Feisty Fawn installer problems
and had gotten the system up and running.
By now, it was late in the day, time for my
daily Sitescooper run to grab some news to read on my Treo PDA.
The process starts with making a backup (pilot-xfer -s).
But pilot-xfer failed because it couldn't find the device,
/dev/ttyUSB1. The system was seeing the device connection --
dmesg said
[ 1424.598770] usb 5-2.3: new full speed USB device using ehci_hcd and address 4
[ 1424.690951] usb 5-2.3: configuration #1 chosen from 1 choice
"configuration #1"? What does that mean? I poked around /etc/udev a
bit and found this rule in rules.d/60-symlinks.rules:
# Create /dev/pilot symlink for Palm Pilots
KERNEL=="ttyUSB*", ATTRS{product}=="Palm Handheld*|Handspring *|palmOne Handheld", \
SYMLINK+="pilot"
Oh, maybe they were calling it /dev/pilot1? But no, there was nothing
matching /dev/*pilot*, just as there was nothing matching
/dev/ttyUSB*.
But this time googling led me right to the bug,
bug
108512. Turns out that for some reason (which no one has
investigated yet), feisty doesn't autoload the visor module when
you plug in a USB palm device the way other distros always have.
The temporary workaround is sudo modprobe visor;
the long-term workaround is to add visor to /etc/modules.
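Both steps together (the tee trick is just one way to append to a
root-owned file):
sudo modprobe visor                    # works right now
echo visor | sudo tee -a /etc/modules  # works after the next reboot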
On the subject of Feisty's USB support, though, I do have some good
news to report.
My biggest motivation for upgrading from edgy was that USB2 had
stopped working a few months ago --
bug 54419.
I hoped that the newer kernel in Feisty might fix the problem.
So once I had the system up and running, I plugged my trusty
hated-by-edgy MP3 player into the USB2 hub, and checked dmesg.
It wasn't working -- but the error message was actually useful.
Rather than obscure complaints like
end_request: I/O error, dev sde, sector 2033440
or
device descriptor read/64, error -110
or
3:0:0:0: rejecting I/O to dead device
it had a message (which I've since lost) about "insufficient power".
Now that's something I might be able to do something about!
So I dug into my bag o' cables and found a PS/2 power adaptor that
fit my USB2 hub, plugged it in, plugged the MP3 player into the hub,
and voila! it was talking on USB2 again.
Tags: linux, ubuntu, udev, palm, pda, usb
[
21:10 May 13, 2007
More linux |
permalink to this entry |
]
Sat, 12 May 2007
I finally found some time to try the latest Ubuntu, "Feisty Fawn", on
my desktop machine.
I used a xubuntu alternate installer disk, since I don't need the
gnome desktop, and haven't had much luck booting from the Ubuntu
live CDs lately. (I should try the latest, but my husband had already
downloaded and burned the alternate disk and I couldn't work up the
incentive to download another disk image.)
The early portions of the install were typical of the Ubuntu installer:
choose a few language options, choose manual disk partitioning,
spend forever up- and down-arrowing through the partitioner trying
to persuade it not to automount every partition on the disk (after
about the sixth time through I gave up and just let it mount the
partitions; I'll edit /etc/fstab later), then begin the install.
Partway through, the install failed with:
Cannot find /lib/modules/2.6.20-15-generic
update-initramfs: failed for /boot/initrd.img-2.6.0-15-generic
It couldn't install grub, and warned direly, "This is a fatal error".
But then popcorn on #linuxchix found
Ubuntu
bug 37527. Turns out the problem is due to using an existing /boot
partition, which has other kernels installed. Basically, Ubuntu's
new installer can't handle this properly. The workaround is to
go through the installer without a separate /boot partition, let it
install its kernels to /boot on the root partition (but don't let it
install grub, even though it's fairly insistent about it), then reboot
into an old distro and copy the files from the newly-installed feisty
partition to the real /boot. That worked fine.
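For concreteness, the copy step went something like this (a sketch;
the partition and mount point names here are made up):
# booted into an older distro, with feisty's root on /dev/hda7:
mount /dev/hda7 /mnt/feisty
cp -a /mnt/feisty/boot/* /boot/
# then add a stanza for the new kernel to /boot/grub/menu.lst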
The rest of the installation was smooth, and soon I was up and
running. I made some of my usual tweaks (uninstall gdm, install tcsh,
add myself to /etc/passwd with my preferred UID, install fvwm and
xloadimage, install build-essential and the zillion other packages
needed to compile anything useful) and I had a desktop.
Of course, the adventure wasn't over. There was more fun yet to come!
But I'll post about that separately.
Tags: linux, ubuntu
[
20:36 May 12, 2007
More linux |
permalink to this entry |
]
Thu, 12 Apr 2007
My laptop has always been able to sleep (suspend to RAM), one way
or another, but I had never managed it on a desktop machine.
Every time I tried running something like
apm -s, apm -S, echo mem >/sys/power/state, or Ubuntu's
/etc/acpi/sleep.sh, the machine would sleep nicely, then when I
resumed it would come up partway then hang, or would simply boot
rather than resuming.
Dave was annoyed by it too: his Mac G4 sleeps just fine, but none
of his Linux desktops could. And finally he got annoyed enough to
spend half a day playing with different options. With what he
learned, both he and I now have desktops that can suspend to RAM
(his under Debian Sarge, mine under Ubuntu Edgy).
One step was to install hibernate (available as
a deb package in both Sarge and Edgy, but distros which don't offer
it can probably get it from somewhere on suspend2.net).
The hibernate program suspends to disk by default (which
is what its parent project, suspend2, is all about) but it
can also suspend to RAM, with the following set of arcane arguments:
hibernate -v 4 -F /etc/hibernate/ram.conf
(the
-v 4 adds a lot of debugging output; remove it once
you have things working).
Though actually, in retrospect I suspect I didn't need to install
hibernate at all, and Ubuntu's /etc/acpi/sleep.sh script would
have done just as well, once I'd finished the other step:
Fiddle with BIOS options. Most BIOSes have a submenu named something
like "Power Management Options", and they're almost always set wrong
by default (if you want suspend to work). Which ones are wrong
depends on your BIOS, of course. On Dave's old PIII system, the
key was to change "Sleep States" to include S3 (S3 is the ACPI
suspend-to-RAM state). He also enabled APM sleep, which was disabled
by default but which works better with the older Linux kernels he
uses under Sarge.
On my much newer AMD64 system, the key was an option to "Run VGABIOS
if S3 Resume", which was turned off by default. So I guess it wasn't
re-enabling the video when I resumed. (You might think this would
mean the machine comes up but doesn't have video, but it's never
as simple as that -- the machine came up with its disk light solid
red and no network access, so it wasn't just the screen that was
futzed.)
Such a simple fix! I should have fiddled with BIOS settings long
ago. It's lovely to be able to suspend my machine when I go away
for a while. Power consumption as measured on the Kill-a-Watt
goes down to 5 watts, versus 3 when the machine is "off"
(desktop machines never actually power off, they're always sitting
there on standby waiting for you to press the power button)
and about 75 watts when the machine is up and running.
Now I just have to tweak the suspend scripts so that it gives me a
new desktop background when I resume, since I've been having so much
fun with my random
wallpaper script.
Later update: Alas, I was too optimistic. Turns out it actually only
works about one time out of three. The other two times, it hangs
after X comes up, or else it initially reboots instead of resuming.
Bummer!
Tags: linux, laptop, suspend, ubuntu
[
11:07 Apr 12, 2007
More linux |
permalink to this entry |
]
Wed, 14 Mar 2007
Carla Schroder's latest (excellent) article,
Cheatsheet:
Master Linux Package Management,
spawned a LinuxChix discussion of the subtleties of Debian package
management (which applies just as well to Debian-based distros such
as Ubuntu and Knoppix).
Specifically, we were unclear on the differences among
apt-get
upgrade or
dist-upgrade,
aptitude upgrade,
aptitude dist-upgrade,
and
aptitude -f dist-upgrade.
Most of us have just been typing whichever command we learned first,
without understanding the trade-offs.
But Erinn Clark, our Debian Diva, checked with some of her fellow
Debian experts and got us most of the answers, which I will attempt
to summarize with a little extra help from web references and man pages.
First, apt-get vs. aptitude:
we were told that the primary difference between them is
that "aptitude is less likely to remove packages." I confess
I'm still not entirely clear on what that means, but aptitude is seen
as safer and smarter and I'll go on using it.
aptitude upgrade gets updates (security, bug fixes or whatever)
to all currently installed packages. No packages will be removed,
and no new packages will be installed.
If a currently installed package changes to require a
new package that isn't installed, upgrade will refuse to update
those packages (they will be "kept back"). To install the "kept back"
packages with their dependencies, you can use:
aptitude dist-upgrade gets updates to the currently installed
packages, including any new packages which are now required.
But sometimes you'll encounter problems in the dependencies,
in which case it will suggest that you:
aptitude -f dist-upgrade tries to "fix broken packages",
packages with broken dependencies. What sort of broken dependencies?
Well, for example, if one of the new packages conflicts with another
installed package, it will offer to remove the conflicting package.
Without -f, all you get is that a package will be "held back" for
unspecified reasons, and you have to go probing with commands like
aptitude -u install pkgname or
apt-get -o Debug::pkgProblemResolver=yes dist-upgrade
to find out the reason.
The upshot is that if you want everything to just happen in
one step without pestering you, use aptitude -f dist-upgrade;
if you want to be cautious and think things through at each step,
use aptitude upgrade and be willing to type the stronger
commands when it runs into trouble.
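Put together, a typical cautious session looks like this:
aptitude update           # refresh the package lists first
aptitude upgrade          # safe: nothing added or removed
aptitude dist-upgrade     # if packages were "kept back"
aptitude -f dist-upgrade  # if dist-upgrade hits broken dependencies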
Sections 6.2 and 6.3 of the
Debian
Reference cover these commands a little, but not in much detail.
The APT
Howto is better, and runs through some useful examples (which I
used to try to understand what -f does).
Thanks go to Erinn, Ari Pollak, and Martin Krafft (whose highly rated book,
The
Debian System: Concepts and Techniques, apparently would have
answered these questions, and I'll be checking it out).
Tags: linux, debian, ubuntu
[
22:19 Mar 14, 2007
More linux |
permalink to this entry |
]
Sat, 17 Feb 2007
The
simple
auto-login without gdm which I described a year ago stopped
working when I upgraded to "Edgy Eft". As part of Ubuntu's new
"Upstart" boot system, they've replaced
/etc/inittab with a new
system that uses the directory
/etc/event.d.
There's a little bit
of documentation available, but in the end it just came down to
fiddling. Here's how it works:
First, use the same /usr/bin/loginscript you used for the old
setup, which contains something like this:
#! /bin/sh
/bin/login -f yourusername
Then edit /etc/event.d/tty1 and find the getty line: probably
the last line of the file, looking like
respawn /sbin/getty 38400 tty1
Change that to:
respawn /sbin/getty -n -l /usr/bin/loginscript 38400 tty1
That's it! If you want to run X (or anything else) automatically,
that works the same way as always.
Update: This changed again in Hardy.
Here
are the details.
And then changed
again in Karmic Koala: the file has moved from /etc/event.d/tty1
to /etc/init/tty1.conf.
Tags: linux, ubuntu, boot
[
13:37 Feb 17, 2007
More linux |
permalink to this entry |
]
Sat, 09 Dec 2006
Another person popped into #gimp today trying to get a Wacom tablet
working (this happens every few weeks). But this time it was someone
using Ubuntu's new release, "Edgy Eft", and I just happened to have
a shiny new Edgy install on my laptop (as well as a Wacom Graphire 2
gathering dust in the closet because I can never get it working
under Linux), so I dug out the Graphire and did some experimenting.
And got it working! It sees pressure changes and everything.
It actually wasn't that hard, but it did require
some changes. Here's what I had to do:
- Install wacom-tools and xinput
- Edit /etc/X11/xorg.conf and comment out those ForceDevice lines
that say "Tablet PC ONLY".
- Reconcile the difference between udev creating /dev/input/wacom
and xorg.conf using /dev/wacom: you can either change xorg.conf,
change /etc/udev/rules.d/65-wacom.rules, or symlink /dev/input/wacom
to /dev/wacom (that's what I did for testing, but it won't survive a
reboot, so I'll have to pick a device name and make udev and X
consistent; see the sketch below).
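For reference, the testing symlink is a one-liner, and the xorg.conf
route is a one-line change per section (a sketch; the Option syntax
is from the linuxwacom driver as I remember it):
sudo ln -s /dev/input/wacom /dev/wacom   # quick test; gone after reboot
# or edit each wacom InputDevice section in /etc/X11/xorg.conf:
#   Option "Device" "/dev/input/wacom"   # instead of "/dev/wacom"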
A useful tool for testing is /usr/X11R6/bin/xinput list
(part of the xinput package).
That's a lot faster than going through GIMP's input device
preference panel every time.
I added some comments to Ubuntu's bug
73160, where people had already described some of the errors but
nobody had posted the steps to work around the problems.
While I was fiddling with GIMP on the laptop, I decided to install
the packages I needed to build the latest CVS GIMP there.
It needed a few small tweaks from the list I used on Dapper.
I've updated the package list on my GIMP Building
page accordingly.
Tags: linux, X11, imaging, gimp, ubuntu
[
16:12 Dec 09, 2006
More linux |
permalink to this entry |
]
Mon, 20 Nov 2006
I just tried Ubuntu's newest release, "Edgy
Eft", on the laptop (my trusty if aging Vaio SR17).
I used the "xubuntu" variant, in order to try out their lighter
weight xfce-based desktop.
So far it looks quite good.
But the installation process involved
quite a few snags: here follows an account of the various workarounds
I needed to get it up and running.
Live CD Problems
First, I tried to use the live CD, since I've heard it has a nice
installer. But it failed during the process of bringing up X, and
dumped me into a console screen with an (initramfs) prompt.
I thought I had pretty good Linux creds, but I have to confess
I don't know what to do with an (initramfs) prompt; so I gave up
and switched to the install CD. Too bad! I was so impressed with
Ubuntu's previous live CDs, which worked well on this machine.
Guessing Keyboard Layout
Early on, the installer gives you the option to let it guess your
keyboard layout. Don't let it! What this does is subject you
to a seemingly infinite list of questions like:
Does your keyboard have a squiggle key?
where each
squiggle is a different garbled, completely illegible
character further mangled by the fact that the installer is running at
a resolution not native to the current LCD display. After about 15 of
these you just give up and start hitting return hoping it will end
soon -- but it doesn't, so eventually you give up and try ctl-alt-del.
But that doesn't work either. Pulling the power cord and starting over
seems to be the only answer. Everyone I've talked to who's installed
Edgy has gone through this exact experience and we all had a good
laugh about it. Come to think of it, go ahead and say yes to the
keyboard guesser, just so you can chuckle about it with the rest of us.
Once I rebooted and said no to the keyboard guesser, it asked three or
four very straightforward questions about language, and the rest
of the installation went smoothly. Of course it whined about not
seeing a network, and it whined about my not wanting it to overwrite
my existing /boot, and it whined about something to do with free
space on my ext3 partitions (set up by a previous breezy install),
but it made it through.
X Hangs on the Savage
On the first reboot after installation, it
hung while trying to start X -- blank screen, no keyboard
response, and I needed to pull the plug. I was prepared for that
(longstanding bug 41340)
so I immediately suspected dri. I booted from another partition and
removed the dri lines from /etc/X11/xorg.conf,
which fixed the problem.
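For anyone hitting the same hang: the edit is just commenting out the
dri lines in /etc/X11/xorg.conf, roughly like this (a sketch of the
relevant fragments, not my whole config):
Section "Module"
 ...
# Load "dri"
EndSection
# plus the Section "DRI" block, if present:
# Section "DRI"
# Mode 0666
# EndSection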
Configuring the Network
Now I was up and running on Xubuntu Edgy.
Next I needed to configure the network (since the installer won't do
it: this machine only has one pcmcia slot, so it can't have
a CDROM drive and a network card installed at the same time).
I popped in the network card (a 3com 3c59x cardbus card) and waited
expectantly for something to happen.
Nada. So I poked around and found the network configuration tool in
the menus, set up my IP and DNS and all that, and looked in vain for
a "start network" or "enable card" or some similar button that would
perform an ifup eth0.
Nada again. Eventually I gave up, called up a terminal, and ran
ifup eth0, which worked fine.
Which leads me to ask:
Given that Ubuntu is so committed to automatic hardware detection that
it forces you to run hal, which spawns large numbers of daemons that
poll all your disks a couple of times a second -- why can't it notice
the insertion of a cardbus networking card?
And configure it in some easy way without requiring the user to know
about terminals and networking commands?
Ubuntu Still Wins for Suspend and Hibernate
Around this point I tested suspend and hibernate. They both worked
beautifully out of the box, with no additional twiddling
needed. Ubuntu is still the leader in suspending.
sudo: Timestamp Woes
Somewhere during these package management games, I lost the ability
to sudo: it complained "Timestamp too far in the future", without
telling me which file's timestamp was wrong so that I could fix it.
Googling showed that lots of other people were having the
same problem with Edgy, and found an answer: use the GUI Time and
Date tool to set the time to something farther in the future than
the timestamp sudo was complaining about, then run sudo -k to do
some magic that resets the timestamp. Then you can set the time back
to where it belongs. Why does this happen? No one seems to know, but
that's the fix.
(I found some discussion in
bug 43233.)
vim isn't vim?
I restored my normal user account, logged in as myself with my normal
fvwm environment, and went to edit (with vim) a few files. Every time,
vim complained:
"E319: Sorry, the command is not available in this version: syntax on"
after which I could edit the file normally. Eventually I googled to
the answer, which is
très bizarre: by default,
vim-common is installed but
vim is not. There's a binary
named vim, and a package which seems to be vim, but it isn't vim.
Installing the package named
vim gives you a vim that
understands "syntax on" without complaining.
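So the whole fix is:
sudo aptitude install vim   # yes, even though "vim" seems to be there already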
Conclusion
That's the list. Edgy is up and running now, and looks pretty good.
The installer definitely has some rough edges, and I hope my
workarounds are helpful to someone ... but the installer is only
a tiny part of the OS, something you run only once or twice.
So don't let the rough installer stop you from installing Edgy and
trying it out. I know I look forward to using it.
Tags: linux, ubuntu
[
20:30 Nov 20, 2006
More linux |
permalink to this entry |
]
Sat, 19 Aug 2006
It's been a week jam-packed with Linuxy stuff.
Wednesday I made my annual one-day trip to Linuxworld in San
Francisco. There wasn't much of great interest at the conference
this year: the usual collection of corporate booths (minus Redhat,
notably absent this year), virtualization as a hot keyword (but perhaps
less than the last two years) and a fair selection of sysadmin tools,
not much desktop Linux (two laptop vendors), and a somewhat light
"Dot Org" area compared to the last few years.
I was happy to notice that most of the big corporate
booths were running Linux on a majority of show machines, a nice
contrast from earlier years. (Dell was the exception, with more
Windows than Linux, but even they weren't all Windows.)
Linuxworld supposedly offers a wireless network but I never managed to
get it to work, either in the exhibit hall or in the building where
the BOFs were held.
Wednesday afternoon's BOF list didn't offer much that immediately
grabbed me, but in the end I chose one on introducing desktop
Linux to corporate environments. Run by a couple of IBM Linux
advocates, the BOF turned out to be interesting and well presented,
offering lots of sensible advice (base your arguments to management
on business advantages, like money saved or increased ability to get
the job done, not on promises of cool features; don't aim for a
wholesale switch to Linux, merely for a policy which allows employees
to choose; argue for standards-based corporate infrastructure since
that allows for more choice and avoids lock-in). There was plenty
of discussion between the audience and the folks leading the BOF,
and I think most attendees got something out of it.
More interesting than Linuxworld was Friday's Ubucon,
a free Ubuntu conference held at Google (and spilling over into
Saturday morning).
Despite a lack of advertising, the Ubucon was very well attended.
There were two tracks, ostensibly "beginner" and "expert", but
even aside from my own GIMP talk being a "beginner" topic, I
ended up hanging out in the "beginner" room for the whole day,
for topics like "Power Management", "How to Get Involved", and
"What Do Non Geeks Need?" (the last topic dovetailing into the
concluding session on Linux corporate desktops).
All of the sessions were quite interactive
with lots of discussion and questions from the audience.
Everyone looked like they were having a good time, and I'm sure
several of us are motivated to get more deeply involved with Ubuntu.
Ubucon was a great example of a low-key, fun,
somewhat technical conference on a shoestring budget and I'd love to
see more conferences like this in the bay area.
Finally, the week wrapped up with the annual Linux Picnic in
Sunnyvale, a Silicon Valley tradition for many years and always a good
time. There were some organizational glitches this year ... but it's
hard to complain much about a free geek picnic in perfect weather
complete with t-shirts, an installfest, a raffle and even (by
mid-afternoon) a wireless network. Fun stuff!
Tags: linux, conferences, linuxworld, ubucon, ubuntu
[
20:52 Aug 19, 2006
More conferences |
permalink to this entry |
]
Mon, 20 Mar 2006
Dave has been complaining for a long time about the proliferation of
files named
.serverauth.???? (where
???? is various
four-digit numbers) in his home directory under Ubuntu. I never saw
them under Hoary, but now under Breezy and Dapper I'm seeing the same
problem.
I spent some time investigating, with the help of some IRC friends.
In fact, Carla
Schroder, author of O'Reilly's Linux Cookbook, was
the one who pinned down the creation of the files to the script
/usr/bin/startx.
Here's the deal: if you use gdm, kdm, or xdm, you'll never see
this. But for some reason, Ubuntu's startx uses a program
called xauth which creates a file containing an "MIT-MAGIC-COOKIE".
(Don't ask.) Under most Linux distributions, the magic cookie goes
into a file called .Xauthority. The startx script checks an
environment variable called XAUTHORITY for the filename; if it's not
set to something else, it defaults to $HOME/.Xauthority.
Ubuntu's version is a little different. It still has the block of
code where it checks XAUTHORITY and sets it to
$HOME/.Xauthority if it isn't already set. But a few
lines later, it proceeds to create the file under another, hardwired,
name: you guessed it, $HOME/.serverauth.$$. The XAUTHORITY
variable which was checked and set is never actually used.
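From memory, the relevant lines look roughly like this (paraphrased,
not Ubuntu's verbatim code):
if [ x"$XAUTHORITY" = x ]; then
    XAUTHORITY=$HOME/.Xauthority
    export XAUTHORITY
fi
# ... and a few lines later, ignoring the variable it just set:
xserverauthfile=$HOME/.serverauth.$$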
Programmers take note! When adding a feature to a script, please take
a moment and think about what the script does, and check to see
whether it already does something like what you're adding. If so,
it's okay -- really -- to remove the old code, rather than leaving
redundant and obsolete code blocks in place.
Okay, so why is the current code a problem? Because startx
creates the file, calls xinit, then removes the file. In other words,
it relies on xinit (and the X server) exiting gracefully. If anything
else happens -- for example, if you shut your machine down from within X --
the startx script (which does not catch signals) can die
without ever getting to the code that removes the file. So if you
habitually shut down your machine from within X, you will have a
.serverauth.???? file from each X session you ever shut down that way.
Note that the old setup didn't remove the file either, but at least it
was always the same file. So you always have a single .Xauthority file
in your home directory whether or not it's currently in use. Not much
of a problem.
I wasn't seeing this under Hoary because under Hoary I ran gdm, while
with Dapper, gdm would no longer log me in automatically so I had to
find
another approach to auto-login.
For Ubuntu users who wish to go back to the old one-file XAUTHORITY
setup, there's a very simple fix: edit /usr/bin/startx (as
root, of course) and change the line:
xserverauthfile=$HOME/.serverauth.$$
to read instead
xserverauthfile=$XAUTHORITY
If you want to track this issue, it's bug 35758.
Tags: linux, X11, ubuntu
[
21:24 Mar 20, 2006
More linux |
permalink to this entry |
]
Wed, 15 Mar 2006
I updated my Ubuntu "dapper" yesterday. When I booted this morning,
I couldn't get to any outside sites: no DNS. A quick look at
/etc/resolv.conf revealed that it was empty -- my normal
static nameservers were missing -- except for a comment
indicating that the file is prone to be overwritten at any moment
by a program called resolvconf.
man resolvconf provided no enlightenment. Clearly it's intended
to work with packages such as PPP which get dynamic network
information, but that offered no clue as to why it should be operating
on a system with a static IP address and static nameservers.
The closest Ubuntu bug I found was bug
31057. The Ubuntu developer commenting in the bug asserts that
resolvconf is indeed the program at fault. The bug reporter
disputes this, saying that resolvconf isn't even installed on the
machine. So the cause is still a mystery.
After editing /etc/resolv.conf to restore my nameservers,
I uninstalled resolvconf along with some other packages that I clearly
don't need on this machine, hoping to prevent the problem from
happening again:
aptitude purge resolvconf ppp pppconfig pppoeconf ubuntu-standard wvdial
Meanwhile, I did some reading.
It turns out that resolvconf depends on
an undocumented bit of information added to the
/etc/network/interfaces file: lines like
dns-nameservers 192.168.1.1 192.168.1.2
This is not documented under
man interfaces, nor under
man
resolvconf; it turns out that you're supposed to find out about it
from
/usr/share/doc/resolvconf/README.gz. But of course, since
it isn't a standard part of
/etc/network/interfaces, no
automatic configuration programs will add the DNS lines there for you.
Hope you read the README!
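So if you want resolvconf to work with a static setup, your stanza in
/etc/network/interfaces has to look something like this (the
addresses here are examples):
iface eth0 inet static
    address 192.168.1.5
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1 192.168.1.2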
resolvconf isn't inherently a bad idea, actually; it's supposed to
replace any old "up" commands in interfaces that copy
resolv.conf files into place.
Having all the information in the interfaces file would be a better
solution, if it were documented and supported.
Meanwhile, be careful about resolvconf, which you may have even
if you never intentionally installed it.
This
thread on a Debian list discusses the problem briefly, and this
reply quotes the relevant parts of the resolvconf README (in case
you're curious but have already removed resolvconf in a fit of pique).
Tags: linux, ubuntu, networking
[
15:22 Mar 15, 2006
More linux |
permalink to this entry |
]
Sun, 05 Feb 2006
I've been unable to persuade Ubuntu's "Dapper Drake" to log me in
automatically via gdm. My desktop background flashes briefly during
the login process, then vanishes; it appears that it actually is
logging me in briefly, then immediately logging me out and presenting
me with the normal gdm login screen.
I never liked gdm much anyway. It's heavyweight and it interferes with
seeing shutdown messages. The only reason I was using it on Hoary
was for autologin, and with that reason gone, I uninstalled it.
But that presented an interesting problem: it turns out that
Dapper doesn't actually allow users to run X. The error message is:
Unable to open wrapper config file /etc/X11/Xwrapper.config
X: user not authorized to run the X server, aborting.
The fix turned out to be trivial: make the X server setuid
and setgid (
chmod 6755 /usr/bin/X). Mode 4755 (setuid only,
no setgid) also works, but other Debian systems seem to set both bits.
The next question was how to implement auto-login without gdm or kdm.
I had already found a useful
Linux Gazette
article on the subject. The gist is that you compile a short C
program that calls login with your username, then you call getty with
your new program as the "alternate login program".
Now, I have nothing against C, but wouldn't a script be easier?
It turns out a script works too. Replace the tty1 line in
/etc/inittab with a line like:
1:2345:respawn:/sbin/getty -n -l /usr/bin/myloginscript 38400 tty1
where the script in question looks like:
#! /bin/sh
/bin/login -f username
At first, I tried an even simpler approach:
1:2345:respawn:/bin/login -f username
That logged me in, but I ended up on /dev/console instead of
/dev/tty1, with a message that I had no access to the tty and
therefore wouldn't be able to use job control. X didn't work
either. The getty is needed in order to switch control
from /dev/console to a real virtual terminal like /dev/tty1.
Of course, running X automatically once you're logged in is trivial,
just a line or three added to .login or .profile (see the Linux
Gazette article referenced above for an example).
It works great, it's very fast, plus I can watch shutdown messages
again. Nice!
Update 9/9/2006: the Linux Gazette article isn't accessible any more
(apparently Linux Journal bought them and made the old articles
inaccessible). But here's an example of what I do in my .login
on Dapper -- this is for tcsh (bash users, see the sketch below):
if ($tty == tty1) then
startx
endif
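A rough bash equivalent for .profile (an untested sketch):
if [ "$(tty)" = "/dev/tty1" ]; then
    startx
fi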
Tags: linux, ubuntu, boot
[
11:53 Feb 05, 2006
More linux |
permalink to this entry |
]
Sat, 04 Feb 2006
I've been meaning to upgrade my desktop machine from Ubuntu's "Hoary
Hedgehog" release for some time -- most notably so that I can get the
various packages needed to run GTK 2.8, which is now required to build
the most current GIMP.
Although I'm having good success with "Breezy Badger", the stable
Ubuntu successor to "Hoary", on my laptop, Breezy is already
borderline as far as GIMP requirements, and that can only get worse.
Since I do more development on the desktop, I figured
it was worth trying one of the pre-released versions of Ubuntu's
next release, "Dapper Drake".
Wins over hoary and breezy: it handles my multi-slot flash card
reader automatically (on hoary and breezy I had to hack the udev
configuration file to make it work).
I've had a few glitches, starting with the first auto-update wanting
to install a bunch of packages that didn't actually exist on the
server. This persisted for about a week, during which I got a list
of 404s and "packages held back" warnings every time I updated or
installed anything. It didn't seem to hurt anything -- just a minor
irritant -- and it did eventually get fixed. That's life with an
unstable distribution.
Dapper has the same problem that hoary and
breezy have with hald polling the hard disk every few seconds
(bug 27323).
In addition, hald seems to spawn a rather large number of
hald-addon-storage processes
(probably to handle the built-in multi flash card reader).
(Uncommenting the storage.automount_enabled_hint in
/etc/hal/fdi/policy/preferences.fdi didn't help.)
Killing hald (and nuking /usr/sbin/hald
so it won't restart) solves both these problems, but it also
stops hotplugged USB devices from working: apparently Dapper
has switched to using hal instead of hotplug for USB. Ouch!
In any case, hald came back on a dist-upgrade so it looks like
I'll have to find a more creative solution.
The printing packages have problems.
I tried to add my printer via the CUPS web interface,
but apparently it didn't install any printer drivers by default, and
it's not at all obvious where to get them. The drivers are there, in
/usr/share/cups/model/gutenprint/5.0/en, but dapper's cups apparently
isn't looking there. I eventually got around the problem by
uncompressing the ppd file and pointing CUPS directly at
/usr/share/cups/model/gutenprint/5.0/en/stp-escp2-c86.5.0.ppd.
(Filed bug
30178.)
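The workaround, roughly (a sketch; the .gz name is an assumption --
adjust for your own printer's ppd):
cd /usr/share/cups/model/gutenprint/5.0/en
sudo gunzip stp-escp2-c86.5.0.ppd.gz
# then give the CUPS web interface the full path to the .ppd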
Dapper's ImageMagick has a bug in the composite command:
basically, you can't combine two images at all. So I have to generate
web page thumbnails on another machine until that's fixed.
gdm refuses to set up my user for auto-login, and I hit an interesting
localization issue involving GIMP (I'll report on those issues separately).
Most other things work pretty well. Dapper has a decent set of
multimedia apps and codecs, and its kernel and udev setup seem to work
fine (it can't suspend my desktop machine, but neither can any other
distro, and I don't really need that anyway).
Except for the hald problem, Dapper looks like a very usable system.
Tags: linux, ubuntu
[
19:27 Feb 04, 2006
More linux |
permalink to this entry |
]
Wed, 04 Jan 2006
I installed the latest Ubuntu Linux, called "Breezy Badger", just
before leaving to visit family over the holidays. My previous Ubuntu
attempt on this machine had been rather unstable (probably not
Ubuntu's fault -- 2.6 kernels and this laptop don't get along very
well) but Ubuntu seems to have some very sharp kernel developers, so
I was curious to see whether there'd been progress.
Installation:
Didn't go well. I had most of the same problems I'd had installing
Hoary to this laptop (mostly due to the installer assuming that a
CDROM and network must remain connected throughout the install,
something that's impossible on a laptop where both of those functions
require sharing the single PCMCIA port). The Breezy installer has the
additional "feature" that it tends to hang if you change things like
the CDROM while the install is in progress, trashing everything and
forcing you to restart from the beginning. (Filed bug 20443.)
Networking:
But eventually I found a sequence that let me get a network-less
Breezy onto the laptop, and I'm happy to report that Breezy's built-in
networking tools were able to add networking after the first boot
(something that hadn't worked in Hoary). Well, admittedly I did have
to add a script, /etc/hotplug/pci/3c59x, to call ifup when my cardbus
network card is plugged in; but every other distro needs that too, and
Breezy is the first 2.6-based distro which correctly calls the script
every time.
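That script is tiny; mine amounts to something like this (a sketch):
#! /bin/sh
# /etc/hotplug/pci/3c59x: bring up the network when the card appears
ifup eth0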
Suspend:
Once up and running, Breezy shows impressive laptop savvy.
Like Hoary, it can suspend either to disk or to RAM; unlike Hoary, it
can do this without my needing to hack any config files except to
uncomment the line enabling suspend to RAM in /etc/default/acpi-support.
It does print various error messages on stdout when it resumes from
sleep or hibernate, but that's a minor issue.
Not only that, but it restores both network and usb when resuming from
suspend (on hoary I had to hack some of the suspend scripts to make
that work).
(Kernel flakiness:
Well, mostly it suspends fine. Unplugging a usb mouse at the wrong
time still causes a kernel hang.
That's a 2.6 bug, not an Ubuntu-specific problem.
And the system also tends to hang and need to be power cycled about
one time out of five when exiting X; perhaps it's an Xorg bug.)
Ironically, my "safe" partition on this laptop (a much-
modified Debian sarge) mysteriously stopped seeing PCMCIA on the first
day away from home, so I ended up using Breezy for the whole trip
and giving it a good workout.
Hal:
One problem Breezy shares with Hoary is that every few seconds, the
hald daemon makes the hard drive beep and whir. Unlike Hoary, which
had an easy
solution, Breezy ignores the storage_media_check_enabled and
storage_automount_enabled hints. The only way I found to disable
the beeping was to kill hald entirely by renaming /usr/sbin/hald
(it's not called from /etc/init.d, and I never did find out who was
starting it so I could disable it). Removing hald seems to have caused
no ill effects; at least, hotplug of pcmcia and usb still works, as do
udev rules. (Filed bug 21238.)
Udev:
Oh, about those udev rules! Regular readers may recall that I had some
trouble with Hoary regarding
udev
choking on multiple flash card readers which I solved
on my desktop machine with a udev rule that renames the four fixed,
always present devices. But with a laptop, I don't have fixed devices;
I wanted a setup that would work regardless of what I plugged in.
That required a new udev rule. Here's the rule that worked for me:
in /etc/udev/permissions.rules, change
BUS=="scsi", KERNEL=="sd[a-z]*", PROGRAM="/etc/udev/scripts/removable.sh %k 'usb ieee1394'", RESULT="1", MODE="0640", GROUP="plugdev"
to
BUS=="scsi", KERNEL=="sd[a-z]*", NAME{all_partitions}="%k", MODE="0640", GROUP="plugdev"
Note that this means that whatever scripts/removable.sh does, it's not
happening any more. That doesn't seem to have caused any problem,
though. (Filed
bug 21662
on that problem.)
Conclusion:
Overall, Breezy is quite impressive and required very little tweaking
before it was usable. It was my primary distro for two weeks while
travelling; I may switch to it on the desktop once I find a workaround
for bug
352358 in GTK 2.8 (which has been fixed in gnome cvs, but
that doesn't make it any less maddening when using the buggy version).
Tags: linux, ubuntu, laptop
[
22:43 Jan 04, 2006
More linux |
permalink to this entry |
]
Sun, 14 Aug 2005
I bet I'm not the only one who uses Ubuntu (Hoary Hedgehog) and didn't
realize that it doesn't automatically put the security sources in
/etc/apt/sources.list, so apt-get and aptitude don't pick up
any of the security updates without extra help.
After about a month with no security updates on any ubuntu machines
(during which time I know there were security alerts in Debian for
packages I use), I finally tracked down the answer.
It turns out that if you use synaptic, click on "Mark All Upgrades",
then click on Apply, synaptic will pull in security updates.
However, if you use the "Ubuntu Upgrade Manager" in the
System->Administration menu, or if you use commands
like apt-get -f dist-upgrade or aptitude -f dist-upgrade,
then the sources which synaptic wrote into sources.list are not
sufficient to get the security updates.
(Where synaptic keeps its extra sources, I still don't know.)
When I asked about this on #ubuntu, I was pointed to a page on
the Ubuntu wiki which walks you through selecting sources in synaptic.
Unfortunately, the screenshots on the wiki show lots of sources that
none of my Ubuntu machines show, and the wiki doesn't give you the
sources.list lines or tell you what to do if synaptic doesn't
automagically show the security sources.
The solution: to edit /etc/apt/sources.list and make sure the
following lines are there (which some of the people on the IRC channel
were kind enough to paste for me):
## All officially supported packages, including security- and other updates
deb http://archive.ubuntu.com/ubuntu hoary main restricted
deb http://security.ubuntu.com/ubuntu hoary-security main restricted
deb http://archive.ubuntu.com/ubuntu hoary-updates main restricted
In addition, if you use "universe" and "multiverse", you probably also
want these lines:
deb http://archive.ubuntu.com/ubuntu hoary universe multiverse
deb http://security.ubuntu.com/ubuntu hoary-security universe multiverse
deb http://archive.ubuntu.com/ubuntu hoary-updates universe multiverse
Tags: linux, ubuntu, security
[
22:49 Aug 14, 2005
More linux |
permalink to this entry |
]
Fri, 03 Jun 2005
I've been experimenting with Ubuntu's second release, "Hoary
Hedgehog" off and on since just before it was released.
Overall, I'm very impressed. It's quite usable on a desktop machine;
but more important, I'm blown away by the fact that Ubuntu's kernel
team has made a 2.6 acpi kernel that actually works on my aging but
still beloved little Vaio SR17 laptop. It can suspend to RAM (if I
uncomment ACPI_SLEEP in /etc/default/acpi-support), it can
suspend to disk, it gets power button events (which are easily
customizable: by default it shuts the machine down, but if I replace
powerbtn.sh with a single line calling sleep.sh, it
suspends), it can read the CPU temperature. Very cool.
One thing didn't work: USB stopped working when resuming after a
suspend to RAM. It turned out this was a hotplug problem, not a kernel
problem: the solution was to add calls to /etc/init.d/hotplug
stop and /etc/init.d/hotplug start in the
/etc/acpi/sleep.sh script.
Problem solved (except now resuming takes forever, as does
booting; I need to tune that hotplug startup script and get rid of
whatever is taking so long).
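The change wraps the line that actually suspends, something like this
(a sketch; the surrounding details of sleep.sh are from memory):
/etc/init.d/hotplug stop
echo -n mem > /sys/power/state   # the existing suspend-to-RAM line
/etc/init.d/hotplug start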
Sonypi (the jogdial driver) also works. It isn't automatically loaded
(I've added it to /etc/modules), and it disables the power button (so
much for changing the script to call sleep.sh), a minor
annoyance. But when loaded, it automatically creates /dev/sonypi, so I
don't have to play the usual guessing game about which minor number it
wanted this time.
Oh, did I mention that the Hoary live CD also works on the Vaio?
It's the first live linux CD which has ever worked on this machine
(all the others, including rescue disks like the Bootable Business
Card and SuperRescue, have problems with the Sony PCMCIA-emulating-IDE
CD drive). It's too slow to use for real work, but the fact that it
works at all is amazing.
I have to balance this by saying that Ubuntu's not perfect.
The installer, which is apparently the Debian Sarge installer
dumbed down to reduce the number of choices, is inconsistent,
difficult, and can't deal with a networkless install (which, on
a laptop which can't have a CD drive and networking at the same time
because they both use the single PCMCIA slot, makes installation quite
tricky). The only way I found was to boot into expert mode, skip the
network installation step, then, after the system was up and running
(and I'd several times dismissed irritating warnings about how it
couldn't find the network, therefore "some things" in gnome wouldn't
work properly, and did I want to log in anyway?) I manually edited
/etc/network/interfaces to configure my card (none of Ubuntu's
built-in hardware or network configuration tools would let me
configure my vanilla 3Com card; presumably they depend on something
that would have been done at install time if I'd been allowed to
configure networking then). (Bug 2835.)
About that expert mode: I needed that even for the desktop,
because hoary's normal installer doesn't offer an option for
a static IP address. But on both desktop and laptop this causes a
problem. You see, hoary's normal mode of operation is to add the
first-created user to the sudoers list, and then not create a root
account at all. All of their system administration tools depend on the
user being in the sudoers file. Fine. But someone at ubuntu apparently
decided that anyone installing in expert mode probably wants a root
account (no argument so far) and therefore doesn't need to be in the
sudoers file. Which means that after the install, none of the admin
tools work; they just pop up variants on a permission denied dialog.
The solution is to use visudo to add yourself to
/etc/sudoers. (Bugs 7636 and
9832.)
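The visudo addition is a single line (the username is an example):
yourname   ALL=(ALL) ALL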
Expert mode also has some other bugs, like prompting over and over for
additional kernel modules (bug 5999).
Okay, so nothing's perfect. I'm not very impressed with Hoary's
installer, though most of its problems are inherited from Sarge.
But once it's on the machine, Hoary works great. It's a modern
Debian-based Linux that gets security upgrades (something Debian
hasn't been able to do, though they keep making noises about finally
releasing Sarge). And there's that amazing kernel. Now that I have the
hotplug-on-resume problem fixed, I'm going to try using it as the
primary OS on the laptop for a while, and see how it goes.
Tags: linux, ubuntu, laptop, vaio
[
17:29 Jun 03, 2005
More linux |
permalink to this entry |
]
Wed, 13 Oct 2004
I took a break from housepainting yesterday to try out a couple
of new linux distros on my spare machine, "imbrium", which is mostly
used as a print server since Debian's CUPS can't talk to an Epson
Photo 700 any more.
The machine is currently running the venerable Redhat 7.3 -- ancient
but very solid. But I wanted a more modern distro, something
capable of running graphics apps like GIMP 2 and gLabels 2.
I considered Fedora, but FC2 is getting old by now and I would
rather wait for FC3.
First I tried SuSE 9.1. It was very impressive.
The installer whizzed through without a hitch,
giving me lots of warning before doing anything destructive.
It auto-configured just about everything: video card,
ethernet, sound card, and even the printer. It missed my LCD monitor;
X worked fine and it got the resolution right, but when I went in to
YaST to enable 3D support (which was off by default) it kept whining
about the monitor until I configured it by hand (which was easy).
It defaulted networking to DHCP, but made it clear that it had done
so, which made it easy to change it to my normal configuration.
SuSE still uses kde by default, which is fine. The default desktop
is pretty and functional, and not too slow. I'll be
switching to something lighter weight, like icewm or openbox, but
SuSE's default looks fine for a first-time user.
I hit a small hitch in specifying a password: it has a limited set
of characters it will accept, so several of the passwords I wanted
to use were not acceptable. Finally I gave up and used a simple
string, figuring I'd change it later, and then it whined about it
being all lower case. Why not just accept the full character set,
then? (At least full printable ascii.)
Another minor hitch involved the default mirror (LA) being down when it
got to the update stage. Another SuSE user told me that mirror is
always down. Choosing another mirror solved that problem.
Oh, and the printer? Flawless. The installer auto-detected it
and configured it to use the gimp-print drivers, which worked fine
in a full "test page with photo"; subsequent prints (via kprinter)
from Open Office also worked.
Good job, SuSE!
The experience with Ubuntu Warty wasn't quite so positive.
The installer is a near-standard Debian installer, with the usual
awkward curses UI. (I have nothing against curses UIs; it's the
debian installer UI specifically which I find hard to use, since
it does none of the "move the focus to where you need to be next"
that modern UI design calls for, and there's a lot of "Arrow down
over empty space that couldn't possibly be selectable" or "Arrow
down to somewhere where you can hit tab to change the button so you
can hit return". It's reminiscent of DOS text editors from the
early eighties.)
But okay, that's not Ubuntu's fault -- they got that from Debian.
The first step in the install, of course, is partitioning.
My disk was already partitioned, so I just needed to select / to
be formatted, /boot to be re-used (since it's being shared with the
other distros on this machine), and swap. Seemed easy, it accepted
my choices, made a reiserfs filesystem on my chosen root partition
-- then spit out a parted error screen telling me that due to an
inconsistent ext2 filesystem, it was unable to resize the /boot
partition.
Attempting to resize an existing partition without confirming it
is not cool. Fortunately, parted, for whatever reason,
decided it couldn't resize, and after a few confirmation screens
I persuaded it to continue with the install without changing /boot.
The rest of the install went smoothly, including software update
from the net, and I found myself in a nice looking gnome screen
(with, unfortunately, the usual proliferation of taskbars gnome
uses as a default).
Of course, the first thing I wanted to try was the printer.
I poked through various menus (several semi-redundant sets) and
eventually found one for printer configuration. Auto-detect didn't
detect my printer (apparently it can't detect over the parallel
port like SuSE can) so I specified Parallel Port 1 (via an option
menu that still has the gtk bug where the top half of the menu is
just blank space), selected epson, and looked ... and discovered
that they don't have any driver at all for the Photo 700. I tried
the Photo 720 driver, which printed a mangled test page, and the
generic Epson Photo driver, which printed nothing at all. So I
checked Ubuntu's
Bugzilla, where I found a bug
filed requesting a driver for
the Epson C80 (one of the most popular printers in the linux
community, as far as I can tell). Looks like Ubuntu just doesn't
include any of the gimp-print drivers right now; I signed up
for a bugzilla account and added a comment about the Photo 700,
and filed one about the partitioning error while I was there,
which was quickly duped to a more general bug about
parted and ext2 partitions.
I don't mean to sound down on Ubuntu. It's a nice looking
distro, it's still in beta and hasn't yet had an official release,
and my printer is rather old (though quite well supported by
most non-debian distros). I'm looking forward to seeing more.
But for the time being, imbrium's going to be a SuSE machine.
Tags: linux, ubuntu, suse
[
19:15 Oct 13, 2004
More linux |
permalink to this entry |
]