Shallow Thoughts : tags : debian
Akkana's Musings on Open Source Computing and Technology, Science, and Nature.
Mon, 21 Oct 2024
An upgrade on Debian unstable ("sid") a few days ago left me unable to ping.
When I tried, I got
ping: socket: Operation not permitted
with an additional reason of
missing cap_net_raw+p capability or setuid?
Ping worked fine as root, so it was a permission problem.
After some discussion on IRC with several helpful people in
#debian-next, I learned two ways of enabling it
(but read to the end before doing either of these,
since there's a better way).
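If you just need ping working again in a hurry, the quick fix is to give the binary the capability it's asking for -- though this may not be the "better way" the rest of the post gets to:
# Grant the raw-socket capability to ping (assumes it lives in /usr/bin):
sudo setcap cap_net_raw+ep /usr/bin/ping
# Check that it took:
getcap /usr/bin/ping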
Read more ...
Tags: linux, debian
[ 12:37 Oct 21, 2024 More linux | permalink to this entry ]
Tue, 26 Dec 2023
In October I wrote about making a
Windows 10 that Boots off a USB Stick,
From Linux.
A Debian update today or yesterday (Merry Christmas!) broke that setup,
and I spent a few hours today chasing the problem down.
There's a package called ovmf
that puts BIOS/firmware
related files
in /usr/share/OVMF/. The command I used in the earlier article
included the flag -bios /usr/share/OVMF/OVMF_CODE.fd
but as of today, -bios
apparently doesn't work any more
with any of the files there.
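If you hit the same thing, the usual replacement for -bios is to hand qemu the firmware as a pflash drive instead. Something like this in place of the old -bios flag (the exact filename under /usr/share/OVMF/ varies between ovmf versions, so check what's actually installed):
-drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE_4M.fd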
Read more ...
Tags: linux, windows, virtualization, qemu, debian
[ 18:01 Dec 26, 2023 More linux | permalink to this entry ]
Sun, 01 Oct 2023
In 2019, I wrote about struggling to get any sort of Windows booting off
an external USB stick, in order to
Install Lenovo Firmware Packaged as a .exe on a Linux Machine.
I ended up needing to borrow a real Windows machine and install Rufus
on it.
In 2023, things are much better. Aki at atkdinosaurus has written a
clear, concise tutorial on that topic:
How to create a Windows 10 installation on a USB stick in UEFI mode.
I love that it's all command-line, so you can duplicate the steps exactly.
Read more ...
Tags: linux, windows, virtualization, qemu, debian
[ 10:07 Oct 01, 2023 More linux | permalink to this entry ]
Mon, 10 Oct 2022
I boot Linux in text mode, with all the boot-time messages showing.
There are several reasons for this, but one is that
I want to be able to see any errors that might arise — boot-time
errors aren't otherwise shown to the user.
However, many Linux distros, including Debian and Ubuntu, clear the
screen before showing a login prompt, making it impossible to read the
last few messages or find any errors.
Some years back, I looked into why this was happening, and found the
answer in Stop
Clearing My God Damned Console. It comes down to a line in
getty@tty1.service: TTYVTDisallocate=yes.
Change that to TTYVTDisallocate=no and the terminal
will stop clearing before you log in.
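These days the change is best made as a systemd drop-in, so a package upgrade doesn't quietly revert it. A sketch:
sudo systemctl edit getty@tty1.service
Then, in the override file it opens, add:
[Service]
TTYVTDisallocate=no
and restart the unit (or just reboot):
sudo systemctl restart getty@tty1.service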
Read more ...
Tags: linux, debian
[ 19:08 Oct 10, 2022 More linux | permalink to this entry ]
Fri, 11 Feb 2022
It's nice to be back on a relatively minimal Debian install,
instead of Ubuntu-with-everything. But one thing that I have
to admit I appreciated about Ubuntu: printing "just worked".
Turn on a printer, call up the print menu in any app, and the
printer I turned on would be there in the menu, without any need
of struggling with CUPS configurations.
Ubuntu was using Avahi, the Linux version of Apple's Zeroconf/Bonjour
framework, to discover printers. I knew that I'd probably need to
install Avahi if I wanted easy printer configuration on Debian.
But as it turned out, getting printing working was both harder and easier.
Read more ...
Tags: linux, debian, cups, printing
[ 18:14 Feb 11, 2022 More linux | permalink to this entry ]
Mon, 17 Jan 2022
For many years, I used extlinux as my boot loader to avoid having to
deal with
the
annoying and difficult grub2. But that was on MBR machines.
I never got the sense that extlinux was terribly well supported
in the newer UEFI/Secure Boot world. So when I bought my current
machine a few years ago, I bit the bullet and let Ubuntu's installer
put grub2 on the hard drive.
One of the things I lost in that transition was a boot splash image.
Read more ...
Tags: linux, debian, ubuntu, grub, boot
[ 19:29 Jan 17, 2022 More linux | permalink to this entry ]
Tue, 28 Dec 2021
When I bought my new laptop several years ago, I chose Ubuntu as
its first distro even though I usually run Debian.
For one thing, Ubuntu has an excellent installer.
Second, they seem to do more testing on cutting-edge hardware, so I
thought the chances were better that hardware on a brand-new laptop
would be supported.
Ubuntu has been working fine for a couple of years, but with 21.10
("Impish Indri") it took a precipitous downturn.
Read more ...
Tags: linux, boot, grub, debian, ubuntu
[ 19:53 Dec 28, 2021 More linux | permalink to this entry ]
Thu, 23 Aug 2018
I try to
avoid Grub2
on my Linux machines, for reasons I've discussed before.
Even if I run it, I usually block it from auto-updating /boot since that
tends to overwrite other operating systems.
But on a couple of my Debian machines, that has meant needing to notice
when a system update has installed a new kernel, so I can update the
relevant boot files. Inevitably, I fail to notice, and end up running
an out of date kernel.
But didn't Debian use to have a /boot/vmlinuz that always
linked to the latest kernel? That was such a good idea: what happened
to that?
I'll get to that. But before I found out, I got sidetracked trying to
find a way to check whether my kernel was up-to-date, so I could have
it warn me of out-of-date kernels when I log in.
That turned out to be fairly easy using uname and a little shell
pipery:
# Is the kernel running behind?
kernelvers=$(uname -a | awk '{ print $3; }')
latestvers=$(cd /boot; ls -1 vmlinuz-* | sort --version-sort | tail -1 | sed 's/vmlinuz-//')
if [[ $kernelvers != $latestvers ]]; then
    echo "======= Running kernel $kernelvers but $latestvers is available"
else
    echo "The kernel is up to date"
fi
I put that in my .login. But meanwhile I discovered that that
/boot/vmlinuz link still exists -- it just isn't enabled
by default for some strange reason. That, of course, is the right
way to make sure you're on the latest kernel, and you can do it with the
linux-update-symlinks command.
linux-update-symlinks
is called automatically when you install a new kernel -- but by
default it updates symlinks in the root directory, /, which isn't
much help if you're trying to boot off a separate /boot
partition.
But you can configure it to notice your /boot partition.
Edit /etc/kernel-img.conf and change link_in_boot
to yes:
link_in_boot = yes
Then linux-update-symlinks will automatically update the
/boot/vmlinuz link whenever you update the kernel,
and whatever bootloader you prefer can point to that image.
It also updates /boot/vmlinuz.old to point to the previous kernel
in case you can't boot from the new one.
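Once link_in_boot is set, a kernel install (or reinstall, per the update below) should maintain the links; an easy sanity check is:
ls -l /boot/vmlinuz /boot/vmlinuz.old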
Update: To get linux-update-symlinks to update symlinks to reflect
the current kernel, you need to reinstall the package for the current kernel,
e.g. apt-get install --reinstall linux-image-4.18.0-3-amd64.
Just apt-get install --reinstall linux-image-amd64 isn't enough.
Tags: linux, debian, shell
[ 20:14 Aug 23, 2018 More linux/kernel | permalink to this entry ]
Thu, 01 Mar 2018
I updated my Debian Testing system via apt-get upgrade,
as one does during the normal course of running a Debian system.
The next time I went to a locally hosted website, I discovered PHP
didn't work. One of my websites gave an error, due to a directive
in .htaccess; another one presented pages that were full of PHP code
interspersed with the HTML of the page. Ick!
In theory, Debian updates aren't supposed to change configuration files
without asking first, but in practice, silent and unexpected Apache
bustage is fairly common. But for this one, I couldn't find anything
in a web search, so maybe this will help.
The problem turned out to be that /etc/apache2/mods-available/
includes four files:
$ ls /etc/apache2/mods-available/*php*
/etc/apache2/mods-available/php7.0.conf
/etc/apache2/mods-available/php7.0.load
/etc/apache2/mods-available/php7.2.conf
/etc/apache2/mods-available/php7.2.load
The appropriate files are supposed to be linked from there into
/etc/apache2/mods-enabled. Presumably, I previously had a link
to ../mods-available/php7.0.* (or perhaps 7.1?); the upgrade to
PHP 7.2 must have removed that existing link without replacing it with
a link to the new ../mods-available/php7.2.*.
The solution is to restore those links, either with ln -s
or with the approved apache2 commands (as root, of course):
# a2enmod php7.2
# systemctl restart apache2
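A quick way to see which PHP module Apache actually has enabled, before and after the fix:
$ ls -l /etc/apache2/mods-enabled/ | grep php
$ apache2ctl -M | grep php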
Whew! Easy fix, but it took a while to realize what was broken, and
it would have been nice if it hadn't broken in the first place.
Why is the link version-specific anyway? Why isn't there a file called
/etc/apache2/mods-available/php.* for the latest version?
Does PHP really change enough between minor releases to break websites?
Doesn't it break a website more to disable PHP entirely than to swap in
a newer version of it?
Tags: linux, debian, apache, web
[ 10:31 Mar 01, 2018 More linux | permalink to this entry ]
Sat, 26 Mar 2016
Recently I wrote about
building the
Debian hexchat package to correct a key binding bug.
I built my own version of the hexchat packages, then installed the ones
I needed:
dpkg -i hexchat_2.10.2-1_i386.deb hexchat-common_2.10.2-1_all.deb hexchat-python_2.10.2-1_i386.deb hexchat-perl_2.10.2-1_i386.deb
That's fine, but of course, a few days later Debian had an update to
the hexchat package that wiped out my changes.
The solution to that is to hold the packages so they won't be overwritten
on the next apt-get upgrade:
aptitude hold hexchat hexchat-common hexchat-perl hexchat-python
If you forget which packages you've held, you can find out with aptitude:
aptitude search '~ahold'
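(If you'd rather not use aptitude, apt-mark can do the same job on current Debian and Ubuntu systems:)
apt-mark hold hexchat hexchat-common hexchat-perl hexchat-python
apt-mark showhold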
Simplifying the rebuilding process
But now I wanted an easier way to build the package.
I didn't want to have to search for my old blog post and paste the
lines one by one every time there was an update -- then I'd get lazy
and never update the package, and I'd never get security fixes.
I solved that with a zsh function:
newhexchat() {
    # Can't set errreturn yet, because that will cause mv and rm
    # (even with -f) to exit if there's nothing to remove.
    cd ~/outsrc/hexchat
    echo "Removing what was in old previously"
    rm -rf old
    echo "Moving everything here to old/"
    mkdir old
    mv *.* old/
    # Make sure this exits on errors from here on!
    setopt localoptions errreturn
    echo "Getting source ..."
    apt-get source hexchat
    cd hexchat-2*
    echo "Patching ..."
    patch -p0 < ~/outsrc/hexchat-2.10.2.patch
    echo "Building ..."
    debuild -b -uc -us
    echo
    echo 'Installing' ../hexchat{,-python,-perl}_2*.deb
    sudo dpkg -i ../hexchat{,-python,-perl}_2*.deb
}
Now I can type newhexchat
and pull a new version of the
source, build it, and install the new packages.
How do you know if you need to rebuild?
One more thing. How can I find out when there's a new version of hexchat,
so I know I need to build new source in case there's a security fix?
One way is the
Debian Package Tracking System.
You can subscribe to a package and get emails when a new version
is released.
There's supposed to be a package tracker web interface, e.g.
package
tracker: hexchat with a form you can fill out to subscribe to
updates -- but for some packages, including hexchat, there's no form.
Clicking on the link for the new package tracker goes to a similar page
that also doesn't have a form.
So I guess the only option is to subscribe by email.
Send mail to pts@qa.debian.org containing this line:
subscribe hexchat [your-email-address]
You'll get a reply asking for confirmation.
This may turn out to generate too much mail: I've only just subscribed,
so I don't know yet.
There are supposedly keywords you can use to limit the subscription,
such as upload-binary and upload-source,
but the instructions aren't at all clear on how to include them
in your subscription mail -- you say keyword, or keyword your-email,
so where do you put the actual keywords you want to accept?
They offer no examples.
Use apt to check whether your version is current
If you can't get the email interface to work or suspect it'll be too
much email, you can use apt to check whether the current version in
the repository is higher than the one you're running:
apt-cache policy hexchat
You might want to automate that, to make it easy to check on every
package you've held to see if there's a new version.
Here's a little shell function to do that:
# Check on status of all held packages:
check_holds() {
    for pkg in $( aptitude search '~ahold' | awk '{print $2}' ); do
        policy=$(apt-cache policy "$pkg")
        installed=$(echo "$policy" | grep Installed: | awk '{print $2}')
        candidate=$(echo "$policy" | grep Candidate: | awk '{print $2}')
        if [[ "$installed" == "$candidate" ]]; then
            echo "$pkg : nothing new"
        else
            echo "$pkg : new version $candidate available"
        fi
    done
}
Tags: debian, hexchat, irc, apt
[ 11:11 Mar 26, 2016 More linux | permalink to this entry ]
Thu, 17 Mar 2016
I switched a few weeks ago from unstable ("Sid") to testing ("Stretch")
in the hope that my system, particularly X, would break less often.
The very next day, I updated and discovered I couldn't use my system
at night any more, because the program I use to
reduce
the screen brightness by tweaking X gamma no longer worked.
Neither did other related programs, such as xgamma and xcalib.
The Dell monitor I use doesn't have reasonable hardware brightness controls:
strangely, the brightness button works when the monitor is connected
over VGA, but if I want to use the sharper HDMI connection, brightness
adjustment no longer works. So I depend on software brightness adjustment
in order to use my computer at night when the room is dim.
Fortunately, it turns out there's a workaround. xrandr
has options for both brightness and gamma:
xrandr --output HDMI1 --brightness .5
xrandr --output HDMI1 --gamma .5:.5:.5
I've always put xbrightness on a key, so I can use a function key to
adjust brightness interactively up and down according to conditions.
So a command that sets brightness to .5 or .8 isn't what I need;
I need to get the current brightness and set it a little brighter
or a little dimmer. xrandr doesn't offer that, so I needed to script it.
You can get the current brightness with
xrandr --verbose | grep -i brightness
But I was hoping there would be a more straightforward way to get
brightness from a program.
I looked into Python bindings for xrandr; there are some,
but with no documentation and no examples. After an hour
of fiddling around, I concluded that I could waste the rest of the day
poring through the source code and trying things hoping something would
work; or I could spend fifteen minutes
using subprocess.call()
to wrap the command-line xrandr.
So subprocesses it was. It made for a nice short script,
much simpler than the old xbrightness C program that used
<X11/extensions/xf86vmode.h> and XF86VidModeGetGammaRampSize():
xbright
on github.
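If you just want the gist, the relative-adjustment idea looks roughly like this as a tiny shell wrapper around xrandr -- a sketch, not the actual xbright script, and the HDMI1 output name is an assumption:
#!/bin/sh
# Nudge screen brightness up or down relative to the current xrandr value.
# HDMI1 is an assumption -- check xrandr's output list for your monitor's name.
OUTPUT=HDMI1
cur=$(xrandr --verbose | grep -i brightness | head -1 | awk '{print $2}')
case "$1" in
    down) new=$(echo "$cur - 0.1" | bc) ;;
    *)    new=$(echo "$cur + 0.1" | bc) ;;
esac
xrandr --output $OUTPUT --brightness $new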
Tags: linux, X11, debian
[ 11:01 Mar 17, 2016 More linux | permalink to this entry ]
Fri, 05 Feb 2016
Debian's Unstable ("Sid") distribution has been terrible lately.
They're switching to a version of X that doesn't require root,
and apparently
the X
transition has broken all sorts of things in ways that are hard to fix
and there's no ETA for when things might get any better.
And, being Debian, there's no real bug system so you can't just CC
yourself on the bug to see when new fixes might be available to try.
You just have to wait, try every few days and see if the system
has been fixed. That's hard when the system doesn't work at all.
Last week, I was booting into a shell but X wouldn't run, so at least
I could pull updates. This week, X starts but the keyboard and mouse
don't work at all, making it hard to run an upgrade.
Fortunately, I have an install of Debian stable ("Jessie") on this
system as well. When I partition a large disk I always reserve several
root partitions so I can try out other Linux distros, and when running
the more experimental versions, like Sid, sometimes that's a life saver.
So I've been running Jessie while I wait for Sid to get fixed.
The only trick is: how can I upgrade my Sid partition while running
Jessie, since Sid isn't usable at all?
I have an entry in /etc/fstab that lets me mount my Sid
partition easily:
/dev/sda6 /sid ext4 defaults,user,noauto,exec 0 0
So I can type
mount /sid
as myself, without even needing
to be root.
But Debian's apt upgrade tools assume everything will be on /,
not on /sid. So I'll need to use chroot /sid
(as root)
to change the root of the filesystem to /sid. That only affects
the shell where I type that command; the rest of my system will still be
happily running Jessie.
Mount the special filesystems
That mostly works, but not quite, because I get a lot of errors like
permission denied: /dev/null.
/dev/null is a device: you can write to it and the bytes disappear,
as if into a black hole except without Hawking radiation.
Since /dev is implemented by the kernel and udev, in the chroot
it's just an empty directory. And if a program opens /dev/null in
the chroot, it might create a regular file there and actually write to it.
You wouldn't want that: it eats up disk space and can slow things down a lot.
The way to fix that is before you chroot:
mount --bind /dev /sid/dev
which will make /sid/dev a mirror of the real /dev.
It has to be done before the chroot because inside the chroot,
you no longer have access to the running system's /dev.
But there is a different syntax you can use after chrooting:
mount -t proc proc proc/
mount --rbind /sys sys/
mount --rbind /dev dev/
It's a good idea to do this for /proc and /sys as well,
and Debian recommends adding /dev/pts (which must be done after you've
mounted /dev), even though most of these probably won't come into play
during your upgrade.
Mount /boot
Finally, on my multi-boot system, I have one shared /boot
partition with kernels for Jessie, Sid and any other distros I have
installed on this system. (That's somewhat hard to do using grub2,
but easy on Debian, though you may need to turn off auto-update,
and Debian is making it harder to use extlinux now.)
Anyway, if you have a separate /boot partition, you'll want it mounted
in the chroot, in case the update needs to add a new kernel.
Since you presumably already have the same /boot mounted on the
running system, use mount --bind
for that as well.
So here's the final set of commands to run, as root:
mount /sid
mount --bind /proc /sid/proc
mount --bind /sys /sid/sys
mount --bind /dev /sid/dev
mount --bind /dev/pts /sid/dev/pts
mount --bind /boot /sid/boot
chroot /sid
And then you can proceed with your apt-get update,
apt-get dist-upgrade, etc.
When you're finished, you can unmount everything with one command:
umount --recursive /sid
Tags: linux, debian, chroot, install, boot
[ 11:43 Feb 05, 2016 More linux/install | permalink to this entry ]
Sun, 27 Dec 2015
Debian "Sid" (unstable) stopped working on my Thinkpad X201 as of the
last upgrade -- it's dropping mouse and keyboard events. With any luck
that'll get straightened out soon -- I hear I'm not the only one
having USB problems with recent Sid updates. But meanwhile,
fortunately, I keep a couple of spare root partitions so I can
try out different Linux distros. So I decided to switch to the
current Debian stable version, "Jessie".
The mouse and keyboard worked fine there. Except it turned out I had
never fully upgraded that partition to the "Jessie"; it was still on
"Wheezy". So, with much trepidation, I attempted an
apt-get update; apt-get dist-upgrade
After an interminable wait for everything to download, though, I was
faced with a blue screen asking this:
No bootloader integration code anymore.
The extlinux package does not ship bootloader integration anymore.
If you are upgrading to this version of EXTLINUX your system will not boot any longer if EXTLINUX was the only configured bootloader.
Please install GRUB.
<Ok>
No -- it's not okay! I have
good
reasons for not using grub2 -- besides which, extlinux on this
exact machine has been working fine for years under Debian Sid.
If it worked on Wheezy and works on Sid, why wouldn't it work on
the version in between, Jessie?
And what does it mean not to ship "bootloader integration", anyway?
That term is completely unclear, and googling was no help.
There have been various Debian bugs filed but of course, no
explanation from the developers for exactly what does and doesn't work.
My best guess is that what Debian means by "bootloader integration"
is that there's a script that looks at /boot/extlinux/extlinux.conf,
figures out which stanza corresponds to the current system,
figures out whether there's a new kernel being installed that's
different from the one in extlinux.conf, and updates the
appropriate kernel and initrd lines to point to the new kernel.
If so, that's something I can do myself easily enough. But what if
there's more to it? What would actually happen if I upgraded the
extlinux package?
Of course, there's zero documentation on this. I found plenty of
questions from people who had hit this warning, but most were from
newbies who had no idea what extlinux was or why their systems were
using it, and they were advised to install grub. I only found one hit
from someone who was intentionally using extlinux. That person aborted
the install, held back the package so the potentially nonbooting new
version of extlinux wouldn't be installed, then updated extlinux.conf
by hand, and apparently that worked fine.
It sounded like a reasonable bet. So here's what I did (as root, of course):
- Open another terminal window and run ps aux | grep apt
  to find the apt-get dist-upgrade process and kill it.
  (sudo pkill apt-get is probably an easier approach.)
- Ensure that apt has exited and there's a shell prompt in the window
  where the scary blue extlinux warning was.
- echo "extlinux hold" | dpkg --set-selections
- apt-get dist-upgrade and wait forever for all the packages to install.
- aptitude search linux-image | grep '^i' to find out
  what kernel versions are installed. Pick one. I picked 3.14-2-686-pae
  because that happened to be the same kernel I was already running, from Sid.
- ls -l /boot and make sure that kernel is there,
  along with an initrd.img of the same version.
- Edit /boot/extlinux/extlinux.conf and find the stanza
  for the Jessie boot. Edit the kernel and append initrd
  lines to use the right kernel version.
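For that last step, here's roughly the shape of the stanza after editing (the root device here is a made-up placeholder, not my actual setup):
label jessie
    menu label Debian Jessie
    kernel /vmlinuz-3.14-2-686-pae
    append initrd=/initrd.img-3.14-2-686-pae root=/dev/sda5 ro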
It worked fine. I booted into jessie with the kernel I had specified.
And hooray -- my keyboard and mouse work, so I can continue to use my
system until Sid becomes usable again.
Tags: linux, debian, extlinux
[ 17:28 Dec 27, 2015 More linux/install | permalink to this entry ]
Sun, 09 Jun 2013
I recently went on an upgrading spree on my main computer. In the hope
of getting more up-to-date libraries, I updated my Ubuntu to 13.04
"Raring Ringtail", and Debian to unstable "Sid". Most things went fine
-- except for Firefox.
Under both Ringtail and Sid, Firefox became extremely unstable.
I couldn't use it for more than about fifteen minutes before it would
freeze while trying to access some web resource. The only cure when
that happened was to kill it and start another Firefox.
This was happening with the exact same Firefox -- a 21.0 build from
mozilla.org -- that I was using without any problems on older versions
of Debian and Ubuntu; and with the exact same profile. So it was
clearly something that had changed about Debian and Ubuntu.
The first thing I do when I hit a Firefox bug is test with
a fresh profile. I have all sorts of Firefox customizations, extensions
and other hacks. In fact, the customizations are what keep me tied
to Firefox rather than jumping to some other browser. But they do,
too often, cause problems. I have a generic profile I keep around
for testing, so I fired it up and used it for browsing for a day.
Firefox still froze, but not as often.
Disabling Extensions
Was it one of my extensions?
I went to the Tools->Add-ons to try disabling them all ...
and Firefox froze. Bingo! That was actually good news. Problems like
"Firefox freezes a lot" are hard to debug. "Firefox freezes every time
I open Tools->Add-ons" are a whole lot easier.
Now I needed to find some other way of disabling extensions to see if
that helped.
I went to my Firefox profile directory and moved everything
in the extensions directory into a new directory I made called
extensions.sav. Then I started moving them back one by one,
each time starting Firefox and calling up Tools->Add-ons.
It turned out two extensions were causing the freeze: Open in Browser
and Custom Tab Width. So I left those off for the time being.
Disabling Themes
Along the way, I discovered that clicking on Appearance in
Tools->Add-ons would also cause a freeze, so my visual
theme was also a problem. This wasn't something I cared about:
some time back when Mozilla started trumpeting their themeability,
I clicked around and picked up some theme involving stars and planets.
I could live without that.
But how do you disable a theme?
Especially if you can't go to Tools->Add-ons->Appearance?
Turns out everything written on the web on this is wrong. First,
everything on themes on mozilla.org assumes you can get to that
Appearance tab, and doesn't even consider the possibility that you
might have to look in your profile and remove a file.
Search further and you might find references to files named
lightweighttheme-header and lightweighttheme-footer, neither of
which existed in my profile.
But I did have a directory called lwtheme.
So I removed that, plus four preferences in prefs.js that included
the term "lightweightThemes".
After a restart, my theme was gone, I was able to view that Appearance tab,
and I was able to browse the web for nearly 4 hours before Firefox hung again.
Darn! That wasn't all of it.
Debugging the environment
But soon after that I had a breakthrough.
I discovered a page on my bank's website that froze Firefox every time.
But that was annoying for testing, since it required logging in then
clicking through several other pages, and you never know what a bank
website might decide to do if you start logging in over and over.
I didn't want to get locked out.
But then I was checking an episode in one of the podcasts I listen to,
which involved going to the link
http://downloads.bbc.co.uk/podcasts/radio4/moreorless/rss.xml
-- and Firefox froze, on a simple RSS link. I restarted and tried
again -- another freeze. I'd finally found the Rosetta stone,
something that hung Firefox every time. Now I could do some serious testing!
I'd had friends try this using the same version of Firefox and Ubuntu,
without seeing a freeze. Was it something about my user environment?
I created a new user, switched to another virtual console (Ctrl-Alt-F2)
and logged in as my new user, then ran X. This was a handy way to test:
I could get to my normal user's X session in Ctrl-Alt-F7, while the new
user's X session was on Ctrl-Alt-F8. Since I don't have Gnome or KDE
installed on this machine, the new user came up with a default Openbox
session. It came up at the wrong resolution -- the X11 in the newest
Linux distros apparently doesn't read the HDMI monitor properly --
but I wasn't worried about that.
And when I ran Firefox as the new user (letting it create a new profile)
and middlemouse-pasted the BBC RSS URL, it loaded it, without freezing.
Now we're getting somewhere.
Now I knew it was something about my user environment.
I tried copying all of ~/.config from my user to the new user. No hang.
I tried various other configuration files. Still no hang.
The X initialization
I'll skip some steps here, and just mention that in trying to fix the
resolution problem, so I didn't have to do all my debugging at 1024x768,
I discovered that if I used my .xinitrc file to start X, I'd get a freezy
Firefox. If I didn't use my .xinitrc, and defaulted to the system one,
Firefox was fine. Even if I removed everything else from my .xinitrc,
and simply ran openbox from it, that was enough to make Firefox hang.
Okay, what was the system doing? I poked around /etc/X11:
it was running /etc/X11/Xsession. I copied that file to my
.xinitrc and started X. No hang.
Xsession does a bunch of things, but one of the main things it does is run
every script in the /etc/X11/Xsession.d directory.
So I made a copy of that directory inside my home directory, and modified
.xinitrc to execute those files instead. Then I started moving them
aside to see which ones made a difference.
And I found it. /etc/X11/Xsession.d/75dbus_dbus-launch was the
file that mattered.
75dbus_dbus-launch takes the name of the program that's
going to be executed -- in this case that was x-session-manager, which
links to /etc/alternatives/x-session-manager, which links to
/usr/bin/openbox-session -- and instead runs
/usr/bin/dbus-launch --exit-with-session x-session-manager.
Now that I knew that, I moved everything aside and made a little
.xinitrc that ran
/usr/bin/dbus-launch --exit-with-session openbox-session.
And Firefox didn't crash.
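So the working .xinitrc boils down to something like this (a sketch; substitute your own session command for openbox-session):
#!/bin/sh
# Start the session under dbus-launch so programs like Firefox get a session bus.
exec /usr/bin/dbus-launch --exit-with-session openbox-session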
Dbus
So it all comes down to dbus. I was already running dbus: ps shows
/usr/bin/dbus-daemon --system running -- and that worked fine
for everything dbussy I normally do, like run "gimp image.jpg" and
have it open in my already running GIMP.
But on Ringtail and Sid, that isn't enough for Firefox. For some
reason, on these newer systems, Firefox requires a second
dbus daemon -- it shows up in ps as
/usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
-- for the X session. If it doesn't have that, it's fine for a while,
and then, hours later, it will mysteriously freeze while waiting for
a network resource.
Why? I have no idea. No one I've asked seems to know anything about
how dbus works, the difference between system and session dbus daemons,
or why any of it would have this effect on Firefox.
I filed a Firefox bug,
Bug 881122,
though I don't have much hope of anyone being interested in a bug
that only affects Linux users using nonstandard X sessions.
But maybe I'm not the only one. If your Firefox is hanging and
you found your way here, I hope I've given you some ideas.
And if anyone has a clue as to what's really happening and why
dbus would have that effect, I'd love to hear from you.
Tags: firefox, mozilla, debugging, linux, debian, ubuntu, dbus
[ 20:08 Jun 09, 2013 More linux | permalink to this entry ]
Wed, 15 May 2013
Checking versions in Debian-based systems is a bit of a pain.
This happens to me a couple of times a month: for some reason I need
to know what version of something I'm currently running -- often a
library, like libgtk. aptitude show
will tell you all about a package -- but only if you know its exact name.
You can't do aptitude show libgtk
or even
aptitude show '*libgtk*'
-- you have to know that the
package name is libgtk2.0-0. Why is it libgtk2.0-0? I have no idea,
and it makes no sense to me.
So I always have to do something like
aptitude search libgtk | egrep '^i'
to find out what packages I have installed that match the name libgtk,
find the package I want, then copy and paste that name after typing
aptitude show.
But it turns out it's super easy in Python to query Debian packages using the
Python
apt package. In fact, this is all the code you need:
import sys
import apt

cache = apt.cache.Cache()
pat = sys.argv[1]

for pkgname in cache.keys():
    if pat in pkgname:
        pkg = cache[pkgname]
        instver = pkg.installed
        if instver:
            print pkg.name, instver.version
Then run
aptver libgtk
and you're all set.
In practice, I wanted nicer formatting, with columns that lined up, so
the actual script is a little longer. I also added a -u flag to show
uninstalled packages as well as installed ones. Amusingly, the code to
format the columns took about twice as many lines as the code that does the
actual work. There doesn't seem to be a standard way of formatting
columns in Python, though there are lots of different implementations
on the web. Now there's one more -- in my
aptver
on github.
Tags: linux, debian, ubuntu, python, programming
[ 16:07 May 15, 2013 More linux | permalink to this entry ]
Wed, 14 Nov 2012
(This is a guest post by David North.)
Debian developers tend to get overzealous in their dependency lists, probably to avoid constant headaches from fringe cases whose favorite programs fail because they also need some obscure library or package support (and yes, I'm talking to you, Ubuntu). But what if you don't want some goofy dependency (and the cascade of other crap it pulls in?)
As a small aside, aptitude/apt-get hold <pkg> is terrific if you just want to keep a package at a pre-horkage level, but for some obscure reason you can't "hold" a package that isn't installed. So that won't work as of 11/2012.
You can however generate an equivalent package with a higher version number and install it, which naturally blocks the offending package. Even better, the replacement package need do nothing at all other than satisfy the apt database. Better still, the whole thing is incredibly simple.
First install the "equivs" package. This will deliver two programs:
- equivs-control
- equivs-build
Officially you should start with 'equivs-control <pkgname>', which will create a file 'pkgname' in the current directory. Inside are various fields but you only need eight and can simply delete the rest. Here's approximately what you should end up with for a fictional package "pkgname":
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: pkgname
Version: 1:42
Maintainer: Your Name <your@email.address>
Architecture: all
Description: fake pkgname to block a dumb dependency
The first three lines are just boilerplate, though you may have to increment the standards-version at some point if you reuse the file. No changes are needed now.
The pkgname does actually have to match the name of the package you want to block. The version must be higher than that of the target package. Maintainer need not be you, but it's a good idea to at least use a name you recognize as yourself. Architecture can be left as "all" unless you're doing something extra tricky. Description is not necessary but a good idea; put your notes here.
The only trick is the version. Note the 1:42 structure here. The first number is the "epoch" in debian-speak, and may or may not be used. In practice I've never seen an epoch greater than one, so I suggest using either 1 or 2 here rather than just leaving it blank. You can see the epoch number in a package when you use aptitude show <pkgname>. The version is the number immediately after the colon, and for safety's sake should be considerably larger than the version you're trying to block (to avoid future updates). I like to use "42" for obvious reasons unless the actual package version is too close. Factoid: if no "epoch" is indicated debian will assume epoch 0, which will not show up as a zero in a .deb (or in aptitude show) but rather as a blank. The version number will have no colon in this event.
Having done this, all you need do is issue the command 'equivs-build path-to-pkgname' (preferably from the same directory) and you get a fake deb to install with dpkg -i. Say goodbye to the dependency.
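To recap the whole procedure for a hypothetical package "pkgname":
equivs-control pkgname          # creates the template control file ./pkgname
# edit pkgname down to the eight fields described above
equivs-build pkgname            # builds a fake .deb in the current directory
dpkg -i pkgname_*_all.deb       # installs it, blocking the real package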
One more trick: once you have your file <pkgname> with the Eight
Important Fields, you can pretty much skip using equivs-control. All it
does is make the initial text file, and it will be easier to edit the
one you already have with a new package name (and rename the file at
the same time). Note, however, this handy file will not necessarily be
useful on other debian-based systems or later installs, so running
equivs-control after a big upgrade or moving to another distro is very
good practice. If you compare the files and they have the same entries,
great. If not, use the new ones.
Tags: linux, debian, ubuntu, install
[ 11:50 Nov 14, 2012 More linux/install | permalink to this entry ]
Sat, 16 Jun 2012
I ran ubuntu-bug
to report a bug. After collecting some
dependency info, the program asked me if I wanted to load the bug
report page in a browser. Of course I did -- but it launched chromium,
where I don't have any of my launchpad info loaded, rather than firefox.
So how do you change the default browser in Ubuntu?
The program that controls that, and lots of similar defaults,
is update-alternatives.
update-alternatives with no arguments gives a long usage statement that
isn't too clear. You need to know the various category names ("groups")
before you can do much. Here's how to get a list of all the groups:
update-alternatives --get-selections
But that's still a long list. To find the entries that might be pointing
to chrome or chromium, I filtered it:
update-alternatives --get-selections | grep chrom
That narrowed it down: x-www-browser and gnome-www-browser both pointed
to chromium. So let's try to change that to firefox:
$ update-alternatives --set gnome-www-browser /usr/local/firefox11/firefox
update-alternatives: error: alternative /usr/local/firefox11/firefox for gnome-www-browser not registered, not setting.
Whoops! The problem here is that I'm running a firefox installed from
Mozilla.org, not the one that comes with Ubuntu.
What if I want to make that my default browser?
What does it mean for an application to be "registered"?
Well, no one seems to have documented that.
I found it discussed briefly here:
What is Ubuntu's Definition of a “Registered Application”?,
but the only solutions seemed to involve hand-editing desktop files to
add icons, and there's no easy way to figure out how much of
the desktop file it needs. That sounded way too complicated.
Thanks to Lyz and Maco for the real answer: skip update-alternatives
entirely, and change the symbolic links in /etc/alternatives by hand.
$ sudo rm /etc/alternatives/gnome-www-browser
$ sudo ln -s /usr/local/firefox11/firefox /etc/alternatives/gnome-www-browser
$ sudo rm /etc/alternatives/x-www-browser
$ sudo ln -s /usr/local/firefox11/firefox /etc/alternatives/x-www-browser
That was much simpler, and worked fine: now applications that need to
call up a browser will use firefox instead of chromium.
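(Another route would be to register the hand-installed Firefox as an alternative yourself, so update-alternatives knows about it -- roughly, with an arbitrary priority number:)
$ sudo update-alternatives --install /usr/bin/x-www-browser x-www-browser /usr/local/firefox11/firefox 200
$ sudo update-alternatives --set x-www-browser /usr/local/firefox11/firefox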
Tags: ubuntu, debian, linux
[ 17:04 Jun 16, 2012 More linux | permalink to this entry ]
Sat, 26 May 2012
I write a lot of little Python scripts. And I use Ubuntu and Debian.
So why aren't any of my scripts packaged for those distros?
Because Debian packaging is absurdly hard, and there's very little
documentation on how to do it. In particular, there's no help on how
to take something small, like a Python script,
and turn it into a package someone else could install on a Debian
system. It's pretty crazy, since
RPM
packaging of Python scripts is so easy.
Recently at the Ubuntu Developers' Summit, Asheesh of OpenHatch pointed me toward
a Python package called stdeb that simplifies a lot of the steps
and makes Python packaging fairly straightforward.
You'll need a setup.py file to describe your Python script, and
you'll probably want a .desktop file and an icon.
If you haven't done that before, see my article on
Packaging Python for MeeGo
for some hints.
Then install python-stdeb.
The package has some requirements that aren't listed
as dependencies, so you'll need to install:
apt-get install python-stdeb fakeroot python-all
(I have no idea why it needs python-all, which installs only a
directory
/usr/share/doc/python-all with some policy
documentation files, but if you don't install it, stdeb will fail later.)
Now create a config file for stdeb to tell it what Debian/Ubuntu version
you're going to be targeting, if it's anything other than Debian unstable
(stdeb's default).
Unfortunately, there seems to be no way to pass this on the command
line rather than in a config file. So if you want to make packages for
several distros, you'll have to edit the config
file for every distro you want to support.
Here's what I'm using for Ubuntu 12.04 Precise Pangolin:
[DEFAULT]
Suite: precise
Now you're ready to run stdeb. I know of two ways to run it.
You can generate both source and binary packages, like this:
python setup.py --command-packages=stdeb.command bdist_deb
Or you can generate source packages only, like this:
python setup.py --command-packages=stdeb.command sdist_dsc
Either syntax creates a directory called deb_dist. It contains a lot of
files including a source .dsc, several tarballs, a copy of your source
directory, and (if you used bdist_deb) a binary .deb package.
If you used the bdist_deb form, don't be put off that
it concludes with a message:
dpkg-buildpackage: binary only upload (no source included)
It's fibbing: the source .dsc is there as well as the binary .deb.
I presume it prints the warning because it creates them as
separate steps, and the binary is the last step.
Now you can use dpkg -i to install your binary deb, or you can use
the source dsc for various purposes, like creating a repository or
a Launchpad PPA. But those involve a lot more steps -- so I'll
cover that in a separate article about creating PPAs.
Update: you can find that article here:
Creating
packages for a Launchpad PPA.
Tags: debian, ubuntu, linux, programming, python
[ 11:44 May 26, 2012 More programming | permalink to this entry ]
Thu, 24 Nov 2011
A few days ago, I wrote about
how to
set up and configure extlinux (syslinux) as a bootloader.
But on Debian or Ubuntu,
if you make changes to files like /boot/extlinux/extlinux.conf
directly, they'll be overwritten.
The configuration files are regenerated by a program
called extlinux-update, which runs automatically every time you
update your kernel. (Specifically, it runs from the postinst script of
the linux-base package:
you can see it in /var/lib/dpkg/info/linux-base.postinst.)
So what's a Debian user to do if she wants to customize the menus,
add a splash image or boot other operating systems?
First, if you decide you really don't want Debian overwriting your
configuration files, you can disable updates
by editing /etc/default/extlinux.
Just be aware you won't get your boot menu updated when you install new
kernels -- you'll have to remember to update them by hand.
It might be worth it: the automatic update is nearly as annoying as
the grub2 updater: it creates two automatic entries for every kernel
you have installed. So if you have several distros installed, each
with a kernel or two in your shared /boot,
you'll get an entry to boot Debian Squeeze with the
Ubuntu Oneiric kernel, one for Squeeze with the Natty kernel,
one for Squeeze with the Fedora 16 kernel ... as well as entries
for every kernel you have that's actually owned by Debian.
And then for each of these, you'll also get a second entry,
to boot in recovery mode. If you have several distros installed,
it makes for a very long and confusing boot menu!
It's a shame that the auto-updater doesn't restrict itself to kernels
managed by the packaging system, which would be easy enough to do.
(Wonder if they would accept a patch?)
You might be able to fudge something that works right by setting up
symlinks so that the only readable kernels actually live on the root
partition, so Debian can't read the kernels from the other
distros. Sounds a bit complicated and I haven't tried it.
For now, I've turned off automatic updating on my system.
But if your setup is simpler --
perhaps just one Debian or one Ubuntu partition plus some non-Linux
entries such as BSD or Windows -- here's how to set up Debian-style
automatic updating and still keep all your non-Linux boot entries
and your nice menu customizations.
Debian automatic updates and themes
First, take a quick look at /etc/default/extlinux and customize
anything there you might need, like the names of the kernels, kernel
boot parameters or timeout.
See man extlinux-update
for details.
For configuring menu colors, image backgrounds and such, you'll need to
make a theme. You can see a sample theme by installing the package
syslinux-themes-debian -- but watch out.
If you haven't configured apt not to pull in suggested packages, that
may bring back grub or grub-legacy, which you probably don't want.
You can make a theme without needing that package, though.
Create a directory /usr/share/syslinux/themes/mythemename
(the extlinux-update man page claims you can put a theme anywhere and
specify it by its full path, but it lies). Create a directory called
extlinux inside it, and make a file with everything you want
from extlinux.conf. For example:
default 0
prompt 1
timeout 50
ui vesamenu.c32
menu title Welcome to my Linux machine!
menu background mysplash.png
menu color title 1;36 #ffff8888 #00000000 std
menu color unsel 0 #ffffffff #00000000 none
menu color sel 7 #ff000000 #ffffff00 none
include linux.cfg
menu separator
include themes/mythemename/other.cfg
Note that last line: you can include other files from your theme.
For instance, you can create a file called other.cfg
with entries for other partitions you want to boot:
label oneiric
menu label Ubuntu Oneiric Ocelot
kernel /vmlinuz-3.0.0-12-generic
append initrd=/initrd.img-3.0.0-12-generic root=UUID=c332b3e2-5c38-4c50-982a-680af82c00ab ro quiet
label fedora
menu label Fedora 16
kernel /vmlinuz-3.1.0-7.fc16.i686
append initrd=/initramfs-3.1.0-7.fc16.i686.img root=UUID=47f6b1fa-eb5d-4254-9fe0-79c8b106f0d9 ro quiet
menu separator
LABEL Windows
KERNEL chain.c32
APPEND hd0 1
Of course, you could have a debian.cfg, an ubuntu.cfg,
a fedora.cfg etc. if you wanted to have multiple distros
all keeping their kernels up-to-date. Or you can keep the whole
thing in one file, theme.cfg. You can make a theme as complex
or as simple as you like.
Tags: linux, boot, extlinux, syslinux, debian, ubuntu
[ 12:26 Nov 24, 2011 More linux/install | permalink to this entry ]
Tue, 25 Oct 2011
Linux live USB sticks (flash drives) are awesome. You can carry them
anywhere and give a demo of Linux on anyone's computer, any time. But
how do you keep track of them? Especially since USB sticks don't have
any place to write a label. How do you remember that the shiny blue
stick is the one with Ubuntu Oneiric, the black one has Ubuntu Lucid,
the other blue one that's missing its top is Debian ... and so forth.
It's impossible! Plus, such a waste -- you can hardly buy a flash drive
smaller than 4G these days, and then you go and devote it to a 700Mb
ISO designed to fit on a CD. Silly.
The answer: get one big USB stick and put lots of distros on it,
using grub to let you choose at boot time.
To create my stick, I followed the easy instructions at
HOWTO:
Booting LiveCD ISOs from USB flash drive with Grub2.
I found that tutorial quite simple, so I'm not going to duplicate
the instructions there.
I used the non-LUA version, since my grub on Ubuntu Natty didn't seem
to support LUA.
Basically you run grub-install to the stick,
create a directory called iso where you stick all your ISO files,
then create a grub.cfg with magic incantations to boot each ISO.
Ah, wait ... magic incantations?
The tutorial is missing one important part: what if you want to use an ISO
that isn't already mentioned in the tutorial? If Ubuntu's entry is
linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=$isofile quiet splash noprompt --
and Parted Magic's is
linux (loop)/pmagic/bzImage iso_filename=$isofile edd=off noapic load_ramdisk=1 prompt_ramdisk=0 rwnomce sleep=10 loglevel=0
then you know there's some magic going on there.
I knew I needed at least the Ubuntu "alternate installer", since it
allows installing a command-line system without the Unity desktop, and
Debian Squeeze, since that's currently the most power-efficient Linux
for laptops, in addition to the distros mentioned in the tutorial.
How do you figure out what to put in those grub.cfg lines?
Here's how to figure it out from the ISO file. I'll use the Debian Squeeze
ISO as an example.
Step 1: mount the ISO file.
$ sudo mount -o loop /pix/boot/isos/debian-6.0.0-i386-netinst.iso /mnt
Step 2: find the kernel
$ ls /mnt/*/vmlinuz /mnt/*/bzImage
/mnt/install.386/vmlinuz
Step 3: find the initrd. It might have various names, and might or
might not be compressed, but the name will almost always start with init.
$ ls /mnt/*/vmlinuz /mnt/*/init*
/mnt/install.386/initrd.gz
Unmount the ISO file.
$ umount /mnt
The trick in steps 2 and 3 is that nearly all live ISO images put the
kernel and initrd a single directory below the root. If you're using
an ISO that doesn't, you may have to search more deeply (try /mnt/*/*).
In the case of Debian Squeeze, now I have the two filenames:
/install.386/vmlinuz and /install.386/initrd.gz. (I've removed the
/mnt part since that won't be there when I'm booting from the USB stick.)
Now I can edit boot/grub/grub.cfg and make a boot stanza for Debian:
menuentry "Debian Squeeze" {
set isofile="/boot/isos/debian-6.0.0-i386-netinst.iso"
loopback loop $isofile
linux (loop)/install.386/vmlinuz iso_filename=$isofile quiet splash noprompt --
initrd (loop)/install.386/initrd.gz
}
Here's the entry for the Ubuntu alternate installer:
menuentry "Oneiric 11.10 alternate" {
set isofile="/boot/isos/ubuntu-11.10-alternate-i386.iso"
loopback loop $isofile
linux (loop)/install/vmlinuz iso_filename=$isofile
initrd (loop)/install/initrd.gz
}
It sounds a little convoluted, I know -- but you only have to do it
once, and then you have this amazing keychain drive with every Linux
distro on it you can think of.
Amaze your friends!
Tags: linux, install, ubuntu, debian, grub
[ 22:21 Oct 25, 2011 More linux/install | permalink to this entry ]
Sat, 27 Aug 2011
I switched to the current Debian release, "Squeeze", quite a few
months ago on my Sony Vaio laptop. I've found that Squeeze, with its older
kernel and good attention to power management (compared to the
power
management regressions in more recent kernels), gets much better
battery life than either Arch Linux or Ubuntu on this machine. I'm using
Squeeze as the primary OS at least until the other distros get their
kernel power management sorted out.
I did have to solve a couple of minor problems when switching over, though.
Suspend/Resume quirks
The first problem was that my Vaio TX650 would freeze on resuming from
suspend -- something that every other Linux distro has handled out of
the box on this machine.
The solution turned out to be simple though
non-obvious, apparently a problem with controlling power to the display:
sudo pm-suspend --quirk-dpms-on
That wasn't easy to find, but ever since then the machine has been
suspending without a single glitch. And it's a true suspend, unlike
Ubuntu Natty, which on this machine will use up a full battery if I
leave it suspended all day -- Natty uses nearly as much power when
suspended as it does running.
Adjusting screen brightness: debugging ACPI
Of course, once I got that sorted out, there were the usual collection
of little changes I needed to make. Number one was that it didn't
automatically handle brightness adjustment with the Fn-F5 and Fn-F6 keys.
It turned out my
previous
technique for handling the brightness keys
didn't work, because the names of the ACPI events in /etc/acpi/events
had changed. Previously, /etc/acpi/events/sony-brightness-down
had contained references to the Sony I/O Control, or SPIC:
event=sony/hotkey SPIC 00000001 00000010
action=/etc/acpi/sonybright.sh down
That device didn't exist on Squeeze. To find out what I needed now,
I ran acpi_listen
and typed the function-key combos in question.
That gave me the codes I needed. I changed the
sony-brightness-down
file to read:
event=video/brightnessdown BRTDN 00000087 00000000
action=/etc/acpi/sonybright.sh down
It's probably a good thing, changing to be less Sony-specific ...
but as a user it's one of those niggling annoyances that I have to
go chase down every time I upgrade to a new Linux version.
Tags: linux, suspend, acpi, debian, sony, vaio
[ 12:07 Aug 27, 2011 More linux/laptop | permalink to this entry ]
Tue, 22 Mar 2011
Over the weekend I tried installing Debian's new release,
"Squeeze", on my Vaio TX650 laptop.
I used a "net install" CD, the one that installs only the bare minimum
then goes to the net for anything else. I used Expert mode, because I
needed to set a static IP address and keep it from overwriting my grub
configuration.
Most of the install went smoothly -- until I got to the last big step
near the end, "Select and install software", where it froze at 1%.
A little web searching (on another machine) gave me the hint that
the Debian installer prints a log on the fourth console, Ctrl-Alt-F4.
Checking that log made the problem clear:
aptitude was complaining about packages
without a proper GPG signature -- type Yes to continue without verifying
signatures. But since this was running inside the installer, there's no
place to type Yes -- that Ctrl-Alt-F4 console is merely displaying messages,
not accepting input, and the installer doesn't accept any input for
aptitude.
Fortunately, "Select and install software" isn't crucial to the net
install process. I don't actually know what software it would have
installed -- it never asked me to choose any -- but without it, you
should still have a working minimal Debian on the disk. So I made
another console on Ctrl-Alt-F2, ran ps aux
, found that
aptitude was the highest numbered process running, and killed it.
Upon returning to the installer (Ctrl-Alt-F1), I was able to skip
"Select and install software", finish the install process and reboot.
Upon rebooting, I logged in as root and ran apt-get update.
It complained about GPG errors; but now I could do something about it.
I ran apt-get upgrade
and confirmed that I wanted to proceed
even without verifying package signatures. When that was over, the
problem was fixed: a subsequent apt-get update
ran
without errors.
This ISO was downloaded (from the kernel.org mirror, I believe)
a few days after the official release.
I'm told that Debian changes the keys at the last minute before a
release; perhaps the new keys don't make it into the ISO images on
all the mirrors. Or maybe they just messed up with the Squeeze release.
Anyway, it was fairly easily solved, but seemed like a disappointing
and silly problem. A web search found lots of people hitting
this problem; it's a shame that the installer can't run aptitude in
a mode where it won't prompt and hang up the whole install.
Alas, it's probably all academic anyway, since suspend/resume
doesn't work. It freezes on resume, with a black screen --
another common Debian problem, judging by what I see on the net.
I'm a bit surprised, since every other distro I've tried has suspended
the Vaio beautifully. But after hours of messing with it over the
weekend, I ran out of time and conceded defeat.
Tags: install, debian, linux
[ 22:49 Mar 22, 2011 More linux/install | permalink to this entry ]
Sun, 20 Mar 2011
It's time for another installment of "Where have the control/capslock
adjustments migrated to?" This time it's for the latest Debian
release, "Squeeze".
Ever since they stopped making keyboards with the control key to the
left of the A,
I've remapped my CapsLock key to be another Control key. I never
need CapsLock, but I use Control constantly all day while editing text.
Some people prefer to swap Control and CapsLock.
But the right way to do that changes periodically.
For the last few years, since
Ubuntu
Intrepid, you could set XKbOptions for Control and Capslock in
/etc/default/console-setup. But that no longer works in Squeeze.
It turns out Squeeze introduced a new file,
/etc/default/keyboard, so any keyboard options you previously
had in console-setup need to move to keyboard.
For me, that's these lines:
XKBMODEL="pc104"
XKBLAYOUT="us"
XKBVARIANT=""
XKBOPTIONS="ctrl:nocaps,compose:menu,terminate:ctrl_alt_bksp"
though I suspect only the last line matters.
This wasn't well covered on the web. There aren't many howtos covering
Squeeze yet, but I found the hint I needed in a terse Debian IRCbot factoid:
Factoid
capslock says
For console-setup, append ",ctrl:nocaps" to the value of XKBOPTIONS
within /etc/default/console-setup (/etc/default/keyboard on Squeeze).
That factoid assumes you already have XKBOPTIONS set; as shipped,
it's empty, so skip that initial comma.
I was going to conclude with a link to the documentation on XKBOPTIONS,
or XKbOptions as it was capitalized in xorg.conf ... but there
doesn't seem to be any. It's not in any of the Xorg man pages like
xorg.conf(5) where I expected to find it; nor can I find anything
on the web beyond howtos like this one from people who have figured
out a few specific options. Anyone know?
Tags: install, debian, linux
[ 12:54 Mar 20, 2011 More linux/install | permalink to this entry ]
Mon, 05 Jul 2010
We had a server that was still running Debian Etch -- for which
Debian just dropped support.
We would have upgraded that machine to Lenny long ago except for one
impediment: upgrading the live web server from apache 1 to apache 2.2.
Installing etch's apache 2.2.3 package and getting the website running
under it was no problem. Debian has vastly improved their apache2 setup
from years past -- for instance, installing PHP also enables it now,
so you don't need to track down all the places it needs to be turned on.
But when we upgraded to Lenny and its apache 2.2.9, things broke.
Getting it working again was tricky because most of the documentation
is standard Apache documentation, not based on Debian's more complex setup.
Here are the solutions we found.
Enabling virtual hosts
As soon as the new apache 2.2.9 was running, we lost all our
websites, because the virtual hosts that had worked fine on
Etch broke under Lenny's 2.2.9. Plus, every restart complained:
[warn] NameVirtualHost *:80 has no VirtualHosts
All the web documentation said that we had to change the
<VirtualHost *> lines to <VirtualHost *:80>. But that didn't help.
Most documentation said we would also need the line:
NameVirtualHost *:80
Usually people seemed to find it worked best to put that in a newly
created file called conf.d/virtualhosts. Our Lenny upgrade had
already created that line and put it in ports.conf, but it
didn't work either there or in conf.d/virtualhosts.
It turned out the key was to remove the NameVirtualHost *:80
line from ports.conf, and add it in sites-available/default.
Removing it from ports was the important step: if it was in ports.conf
at all, then it didn't matter if it was also in the default virtual host.
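For reference, here's roughly the shape of the working setup; the server
name and document root are placeholders, not our real configuration:
# in /etc/apache2/sites-available/default -- NameVirtualHost lives here,
# and must NOT also appear in ports.conf
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>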
Enabling CGI scripts
Another problem to track down: CGI scripts had stopped working.
I knew about Options +ExecCGI, but adding it wasn't helping.
Turned out it also needed an AddHandler, which I
don't remember having to add in recent versions on Ubuntu.
I added this in the relevant virtual host file in sites-available:
<Directory />
AddHandler cgi-script .cgi
Options ExecCGI
</Directory>
Enabling .htaccess
We have one enduring mystery: .htaccess files work without needing
a line like AllowOverride FileInfo
anywhere. I've
needed to add that directive in Ubuntu-based apache2 installations,
but Lenny seems to allow .htaccess without any override for it.
I'm still not sure why it works. It's not supposed to. But hey,
without a few mysteries, computers would be boring, right?
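For comparison, the directive I'd normally have expected to need looks
something like this (the path is a placeholder):
<Directory /var/www/>
    AllowOverride FileInfo
</Directory>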
Tags: web, apache, debian
[
21:46 Jul 05, 2010
More tech/web |
permalink to this entry |
]
Sun, 06 Sep 2009
Someone was asking for help building XEphem on the XEphem mailing list.
It was a simple case of a missing include file, where the only trick
is to find out what package you need to install to get that file.
(This is complicated on Ubuntu, which the poster was using,
by the way they fragment the X development headers into a maze of
a zillion tiny packages.)
The solution -- apt-file -- is so simple and easy to use, and yet
a lot of people don't know about it. So here's how it works.
The poster reported getting these compiler errors:
ar rc libz.a adler32.o compress.o crc32.o uncompr.o deflate.o trees.o zutil.o inflate.o inftrees.o inffast.o
ranlib libz.a
make[1]: Leaving directory `/home/gregs/xephem-3.7.4/libz'
gcc -I../../libastro -I../../libip -I../../liblilxml -I../../libjpegd -I../../libpng -I../../libz -g -O2 -Wall -I../../libXm/linux86 -I/usr/X11R6/include -c -o aavso.o aavso.c
In file included from aavso.c:12:
../../libXm/linux86/Xm/Xm.h:56:27: error: X11/Intrinsic.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:57:23: error: X11/Shell.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:58:23: error: X11/Xatom.h: No such file or directory
../../libXm/linux86/Xm/Xm.h:59:34: error: X11/extensions/Print.h: No such file or directory
In file included from ../../libXm/linux86/Xm/Xm.h:60,
from aavso.c:12:
../../libXm/linux86/Xm/XmStrDefs.h:1373: error: expected `=', `,', `;', `asm' or `__attribute__' before `char'
In file included from ../../libXm/linux86/Xm/Xm.h:60,
from aavso.c:12:
../../libXm/linux86/Xm/XmStrDefs.h:5439:28: error: X11/StringDefs.h: No such file or directory
In file included from ../../libXm/linux86/Xm/Xm.h:61,
from aavso.c:12:
../../libXm/linux86/Xm/VirtKeys.h:108: error: expected `)' before `*' token
In file included from ../../libXm/linux86/Xm/Display.h:49,
from ../../libXm/linux86/Xm/DragC.h:48,
from ../../libXm/linux86/Xm/Transfer.h:44,
from ../../libXm/linux86/Xm/Xm.h:62,
from aavso.c:12:
../../libXm/linux86/Xm/DropSMgr.h:88: error: expected specifier-qualifier-list before `XEvent'
../../libXm/linux86/Xm/DropSMgr.h:100: error: expected specifier-qualifier-list before `XEvent'
How do you go about figuring this out?
When interpreting compiler errors, usually what matters is the
*first* error. So try to find that. In the transcript above, the first
line saying "error:" is this one:
../../libXm/linux86/Xm/Xm.h:56:27: error: X11/Intrinsic.h: No such file or directory
So the first problem is that the compiler is trying to find a file
called Intrinsic.h that isn't installed.
On Debian-based systems, there's a great program you can use to find
files available for install: apt-file. It's not installed by default,
so install it, then update it, like this (the update will take a long time):
$ sudo apt-get install apt-file
$ sudo apt-file update
Once it's updated, you can now find out what package would install a
file like this:
$ apt-file search Intrinsic.h
libxt-dev: /usr/include/X11/Intrinsic.h
tendra: /usr/lib/TenDRA/lib/include/x5/t.api/X11/Intrinsic.h
In this case two packages could install a file by that name.
You can usually figure out from looking which one is the
"real" one (usually the one with the shorter name, or the one
where the package name sounds related to what you're trying to do).
If you're still not sure, try something like
apt-cache show libxt-dev tendra
to find out more about the packages involved.
In this case, it's pretty clear that tendra is a red herring,
and the problem is likely that the libxt-dev package is missing.
So apt-get install libxt-dev
and try the build again.
Repeat the process until you have everything you need for the build.
Remember apt-file if you're not already using it.
It's tremendously useful in tracking down build dependencies.
Tags: open source, linux, programming, debian, ubuntu
[
11:25 Sep 06, 2009
More linux |
permalink to this entry |
]
Sun, 01 Mar 2009
"Pinning" is the usual way Debian derivatives (like Ubuntu) deal
with pulling software from multiple releases. For instance, you
need an updated gtk or qt library in order to build some program,
but you don't want to pull in everything else from the newer release.
But most people, upon trying to actually set up pinning,
get lost in the elaborate documentation
and end up deciding maybe they don't really need it after all.
For years, I've been avoiding needing to learn pinning because of a wonderful
LinuxChix Techtalk posting from Hamster years ago on an
easier method of pinning releases.
Basically, you add a line like:
APT::Default-Release "hardy";
to your
/etc/apt/apt.conf (creating it if it doesn't already exist).
Then when you need to pull something from the newer repository you
pull with
apt-get install -t hardy-backports packagename.
That's generally worked for me, until yesterday when I tried to pull
a -dev package and found out it was incompatible with the library
package I already had installed. It turned out that the lib package
came from hardy-security, which is considered a different archive
from hardy, so my Default-Release didn't apply to security updates
(or bugfixes, which come from hardy-updates).
You can apparently only have one Default-Release. Since Ubuntu
uses three different archives for hardy, the only way to
handle it is pinning. Pinning is documented in the man page
apt_preferences(5) -- which is a perfect example of a well-intentioned
geek-written Unix man page.
There's tons of information there -- someone went to
a lot of work, bless their heart, to document exactly what happens
and why, down to the algorithms used to decide priorities -- but
straightforward "type X to achieve effect Y" examples are lost in
the noise. If you want to figure out how to actually set this up
on your own system, expect to spend a long time going back and
forward and back and forward in the man page correlating bits from
different sections.
Ubuntu guru Mackenzie Morgan was nice enough to help me out, and with
her help I got the problem fixed pretty quickly. Here's the quick recipe:
First, remove the Default-Release thing from apt.conf.
Next, create /etc/apt/preferences and put this in it:
Package: *
Pin: release a=hardy-security
Pin-Priority: 950
Package: *
Pin: release a=hardy-updates
Pin-Priority: 940
Package: *
Pin: release a=hardy
Pin-Priority: 900
# Pin backports negative so it'll never try to auto-upgrade
Package: *
Pin: release a=hardy-backports
Pin-Priority: -1
Here's what it means:
a= means archive, though it's apparently not really needed.
The hardy-security archive has the highest priority, 950.
hardy-updates is right behind it with 940 (actually, setting these
equal might be smarter but I'm not sure it matters).
hardy, which apparently is just the software initially installed,
is lower priority so it won't override the other two.
Finally, hardy-backports has a negative priority so that apt will
never try to upgrade automatically from it; it'll only grab things
from there if I specify apt-get install -t hardy-backports.
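One easy way to sanity-check the result -- not part of the recipe, just a
standard apt tool -- is apt-cache policy, which prints the priority assigned
to each archive and, given a package name, where that package would come from:
$ apt-cache policy
$ apt-cache policy packagename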
You can put comments (with #) in /etc/apt/preferences
but not in apt.conf -- they're a syntax error there (so don't
bother trying to comment out that Default-Release line).
And while you're editing apt.conf, a useful thing to put there is:
APT::Install-Recommends "false";
APT::Install-Suggests "false";
which prevents apt from automatically installing recommended or
suggested packages. Aptitude will still install the recommends and
suggests; it's supposed to be configurable in aptitude as well, but
turning it off never worked for me, so mostly I just stick to apt-get.
Tags: debian, ubuntu, pinning
[
21:19 Mar 01, 2009
More linux/install |
permalink to this entry |
]
Tue, 13 Jan 2009
I've been wanting for a long time to make Debian and Ubuntu
repositories so people can install
pho with apt-get,
but every time I try to look it up I get bogged down.
But I got mail from a pho user who really wanted that, and even
suggested a howto.
That howto
didn't quite do it, but it got me moving to look for a better one,
which I eventually found in the
Debian
Repository Howto.
It wasn't complete either, alas, so it took some trial-and-error
before it actually worked. Here's what finally worked:
I created two web-accessible directories, called hardy and etch.
I copied all the files created by dpkg-buildpackage on each distro --
.deb, .dsc, .tar.gz, and .changes (I don't think
this last file is used by anything) -- into each directory
(renaming them to add -etch and -hardy as appropriate).
Then:
% cd hardy/
% dpkg-scanpackages . /dev/null | gzip > Packages.gz
% dpkg-scansources . /dev/null | gzip > Sources.gz
% cd ../etch/
% dpkg-scanpackages . /dev/null | gzip > Packages.gz
% dpkg-scansources . /dev/null | gzip > Sources.gz
It gives an error,
** Packages in archive but missing from override file: **
but seems to work anyway.
Now you can use one of the following /etc/apt/sources.list lines:
deb http://shallowsky.com/apt/hardy ./
deb http://shallowsky.com/apt/etch ./
After an apt-get update, it saw pho, but it warned me
WARNING: The following packages cannot be authenticated!
pho
Install these packages without verification [y/N]?
There's some discussion in the
SecureAPT page
on the Debian wiki, but it's a bit involved and I'm not clear if
it helps me if I'm not already part of the official Debian keychain.
This page on
Release
check of non Debian sources was a little more helpful, and told me
how to create the Release and Release.gpg file -- but then I just get
a different error,
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY
And worse, it's an error now, not just a warning,
preventing any
apt-get update.
Going back to the SecureApt page, under
Setting up a secure apt repository they give the two steps the
other page gave for creating Release and Release.gpg, with a third
step: "Publish the key fingerprint, that way your users will know what
key they need to import in order to authenticate the files in the
archive."
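For the record, the two steps both pages describe boil down to roughly this,
run in each repository directory (treat the exact flags as a sketch of the
usual recipe rather than something I'm quoting from those pages):
% apt-ftparchive release . > Release
% gpg -abs -o Release.gpg Release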
So apparently if users don't take steps to import the key manually,
they can't update at all. Whereas if I leave out the Release and
Release.gpg files, all they have to do is type y when they see the
warning. Sounds like it's better to leave off the key.
I wish, though, that there was a middle ground, where I could offer the
key for those who wanted it without making it harder for those
who don't care.
Tags: programming, pho, debian, ubuntu, linux
[
21:14 Jan 13, 2009
More linux |
permalink to this entry |
]
Fri, 16 May 2008
My laptop's clock has been drifting. I suspect the clock battery is
low (not surprising on a 7-year-old machine). But after an hour of
poking and prodding, I've been unable to find a way to expose the
circuit board under the keyboard, either from the top (keyboard)
side -- though I know how to remove individual keycaps, thanks to a reader
who sent me detailed instructions a while back (thanks, Miles!) --
or the bottom. Any expert on Vaio SR laptops know how this works?
Anyway, that means I have to check and reset the time periodically.
So this morning I did a time check and found it many hours off.
No, wait -- actually it was pretty close; it only looked like it
was way off because the system had suddenly decided it was in UTC,
not PDT. But how could I change that back?
I checked /etc/timezone -- sure enough, it was set to UTC. So I
changed that, copying one from a debian machine -- "US/Pacific",
but that didn't do it, even after a reboot.
I spent some time reading man hwclock
-- there's a lot
of good reading in that manual page, about the relation between the
system (kernel) clock and the hardware clock. Did you know that
you're not supposed to use the date command to set the system
time while the system is running? Me neither -- I do that all the
time. Hmm. Anyway, interesting reading, but nothing useful about
the system time zone.
It has an extensive SEE ALSO list at the end, so I explored some
of those documents.
/usr/share/doc/util-linux/README.Debian.hwclock
is full of lots of interesting information, well worth reading,
but it didn't have the answer. man tzset
sounded
promising, but there was no such man page (or program) on my system.
Just for the heck of it, I tried typing tz [Tab]
to see if I had any other timezone-related programs installed ...
and found tzselect. And there was the answer, added almost as an
afterthought at the end of the manual page:
Note that tzselect will not actually change the timezone for you.
Use 'dpkg-reconfigure tzdata' to achieve this.
Sure enough,
dpkg-reconfigure tzdata
let me set
the time zone. And it even seems to be remembered through a reboot.
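So, boiling the whole chase down to the commands that actually mattered:
cat /etc/timezone            # check what the system currently thinks
dpkg-reconfigure tzdata      # actually change the time zone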
Tags: linux, debian, ubuntu, vaio
[
11:04 May 16, 2008
More linux |
permalink to this entry |
]
Sun, 20 Apr 2008
I finally had a moment to upgrade my desktop to Ubuntu's "Hardy Heron".
I followed the same procedure as when I went from feisty to gutsy:
- cp -ax / /hardy
- cp -ax /dev/.static/dev/* /hardy/dev/
- Fix up files like /hardy/etc/fstab and /boot/grub/menu.lst
- Reboot into the newly copied gutsy
- do-release-upgrade -d
It took an hour or two to pull down all the files, followed by a long
interval of occasionally typing Y or N, and then I was ready to start
cleaning up some of the packages I'd noticed flying by that I didn't
want. Oops! I couldn't remove or install anything with apt-get,
because it insisted I first run dpkg --configure -a.
But I couldn't run dpkg --configure -a, because several
packages were broken.
The first broken package was plucker,
which apparently had failed to install any files.
Its postinstall script was failing because it had no
files to operate on; and then I couldn't do anything further with it
because apt-get wouldn't do anything until I did a
dpkg --configure -a.
I finally got out of that by dpkg -P plucker; then after several
more dpkg --configure -a rounds I was eventually able to apt-get
install plucker (which installed just fine the second time).
But apt still wasn't happy, because it wanted to run the trigger for
initramfs-tools, which wouldn't run because it wanted kernel modules
for some specific kernel version in /lib/modules. I didn't have any
kernel modules because I'm not running Ubuntu's kernel (I'm stuck on
2.6.23 because
bug 10118
makes all 2.6.24 variants unable to sync with USB Palm devices).
But I couldn't remove initramfs-tools because udev
(along with a bunch of other less important packages) depends on it.
I finally found my way out of that by removing
/var/lib/dpkg/triggers/initramfs-tools.
I reported it as
bug 220094.
Update: I forgot to mention one important thing I hit both on
this machine and earlier, on the laptop: /usr/bin/play (provided by
the "sox" package) no longer works because it now depends on a
zillion separate libraries. apt-get install libsox-fmt-all
to get all of them.
Tags: linux, ubuntu, debian, install
[
21:02 Apr 20, 2008
More linux/install |
permalink to this entry |
]
Sun, 06 Apr 2008
Some time ago, I wished for a simple Linux
"Tarball
installer", something that could install a minimal install of
a Linux distribution onto an existing partition or directory,
skipping all the flaky and error-prone hardware-guessing that
installers do.
It turns out Debian (and therefore also Ubuntu) has had this for
years, and it's totally cool. It's called debootstrap.
Some folks on the #ubuntu+1 channel told me about it, and I found
a nice clear
howto
article on how to use it for Debian. It works just the same
for Ubuntu.
First, get the .deb package for the debootstrap you want to use.
Here's
debootstrap
for Ubuntu Hardy Heron. Install it with dpkg -i.
Then run it, giving it the name of the system you want to install
and the directory (or mounted partition) where you want to install
it. Like this:
debootstrap hardy /mnt/hda3
That's all! It fetches the files it needs from the online
repositories. It takes no time at all -- this really is a minimal
system.
Then you need to do some fiddling to turn it into a bootable system.
That includes (all paths relative to the newly installed filesystem
unless otherwise stated):
- Set up etc/fstab to list the filesystems on the disk,
and to mount / from the filesystem you just installed
- Define the hostname in etc/hostname
- Set up a grub boot stanza in /boot/grub/menu.lst
(that's /boot on the current system, which should be the
same as /boot in the new fstab you just created).
Use whatever kernel you were using for your old system, for now.
(A rough sketch of both files follows right after this list.)
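Here's roughly what those two files might look like. Everything here --
device names, partition numbers, kernel version -- is a made-up example for
illustration; debootstrap doesn't generate any of it for you:
# etc/fstab in the new system, assuming it lives on /dev/hda3
/dev/hda3   /       ext3    defaults,errors=remount-ro   0   1
/dev/hda2   none    swap    sw                           0   0
proc        /proc   proc    defaults                     0   0

# stanza added to /boot/grub/menu.lst; grub's root() and the kernel paths
# depend on where your existing /boot lives, so treat these as placeholders
title   Hardy (debootstrap) on /dev/hda3
root    (hd0,0)
kernel  /boot/vmlinuz-2.6.22-14-generic root=/dev/hda3 ro
initrd  /boot/initrd.img-2.6.22-14-generic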
Now you're ready to reboot into the new system. Of course, since this is
a very minimal system, you have a lot more work to do.
Hardly anything is installed, and nothing has been configured for you.
Some things may be challenging (for example, as I write this, X is
installed but most of the fonts aren't showing up properly, which
may be a bug in Hardy).
Anyway, you can get a good start by mounting your old system's root
directory and copying some starter files from there, starting with these:
- Set up your important configuration files:
/etc/network/interfaces, /etc/hosts, /etc/resolv.conf,
/etc/passwd etc.
- edit /etc/apt/sources.list to include
restricted universe multiverse (see the example line after this list)
- Install a kernel package if you're using distro kernels
- Install vim if you're a vim user -- remember, ubuntu comes with
something called vim that
isn't really vim.
- Create users and homedirs and such
- Install all the other stuff you want -- X, gimp/gtk, development
tools, editors, shells -- all that stuff that makes the system
feel like home. You're on your own there, so have fun!
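For the sources.list item above, a typical hardy line would be something
like this (the mirror URL is just the main Ubuntu archive; substitute
whichever mirror you normally use):
deb http://archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse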
Tags: linux, debian, install
[
13:54 Apr 06, 2008
More linux/install |
permalink to this entry |
]
Sat, 18 Aug 2007
I'm forever having problems connecting to wireless networks,
especially with my Netgear Prism 54 card. The most common failure mode:
I insert the card and run
/etc/init.d/networking restart
(udev is supposed to handle this, but that stopped working a month
or so ago). The card looks like it's connecting,
ifconfig eth0
says it has the right IP address and it's marked up -- but try to
connect anywhere and it says "no route to host" or
"Destination host unreachable".
I've seen this both on networks which require a WEP key
and those that don't, and on nets where my older Prism2/Orinoco based
card will connect fine.
Apparently, the root of the problem
is that the Prism54 is more sensitive than the Prism2: it can see
more nearby networks. The Prism2 (with the orinoco_cs driver)
only sees the strongest network, and gloms onto it.
But the Prism54 chooses an access point according to arcane wisdom
only known to the driver developers.
So even if you're sitting right next to your access point and the
next one is half a block away and almost out of range, you need to
specify which one you want. How do you do that? Use the ESSID.
Every wireless network has a short identifier called the ESSID
to distinguish it from other nearby networks.
You can list all the access points the card sees with:
iwlist eth0 scan
(I'll be assuming
eth0 as the ethernet device throughout this
article. Depending on your distro and hardware, you may need to
substitute
ath0 or
eth1 or whatever your wireless card
calls itself. Some cards don't support scanning,
but details like that seem to be improving in recent kernels.)
You'll probably see a lot of ESSIDs like "linksys" or
"default" or "OEM" -- the default values on typical low-cost consumer
access points. Of course, you can set your own access point's ESSID
to anything you want.
So what if you think your wireless card should be working, but it can't
connect anywhere? Check the ESSID first. Start with iwconfig:
iwconfig eth0
iwconfig lists the access point associated with the card right now.
If it's not the one you expect, there are two ways to change that.
First, change it temporarily to make sure you're choosing the right ESSID:
iwconfig eth0 essid MyESSID
If your access point requires a key, add key nnnnnnnnnn
to the end of that line. Then see if your network is working.
If that works, you can make it permanent. On Debian-derived distros,
just add lines to the entry in /etc/network/interfaces:
wireless-essid MyESSID
wireless-key nnnnnnnnnn
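Put together, a minimal DHCP entry in /etc/network/interfaces might look
something like this (ESSID and key are placeholders, as above):
auto eth0
iface eth0 inet dhcp
    wireless-essid MyESSID
    wireless-key nnnnnnnnnn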
Some older howtos may suggest an interfaces line that looks like this:
up iwconfig eth0 essid MyESSID
Don't get sucked in. This "up" syntax used to work (along with pre-up
and post-up), but although man interfaces still mentions it,
it doesn't work reliably in modern releases.
Use wireless-essid instead.
Of course, you can also use a gooey tool like
gnome-network-manager to set the essid and key. Not being a
gnome user, some time ago I hacked up the beginnings of a standalone
Python GTK tool to configure networks. During this week's wi-fi
fiddlings, I dug it out and blew some of the dust off:
wifi-picker.
You can choose from a list of known networks (including both essid and
key) set up in your own configuration file, or from a list of essids
currently visible to the card, and (assuming you run it as root)
it can then set the essid and key to whatever you choose.
For networks I use often, I prefer to set up a long-term
network
scheme, but it's fun to have something I can run once to
show me the visible networks then let me set essid and key.
Tags: linux, networking, debian, ubuntu
[
15:44 Aug 18, 2007
More linux |
permalink to this entry |
]
Tue, 15 May 2007
The new
Debian Etch installation
on my laptop was working pretty well.
But it had one weirdness: the ethernet card was on eth1, not eth0.
ifconfig -a
revealed that eth0 was ... something else,
with no IP address configured and a really long MAC address.
What was it?
Poking around dmesg revealed that it was related to the IEEE 1394 and
the eth1394 module. It was firewire networking.
This laptop, being a Vaio, does have a built-in firewire interface
(Sony calls it i.Link). The Etch installer, when it detected no
network present, had noted that it was "possible, though unlikely"
that I might want to use firewire instead, and asked whether to
enable it. I declined.
Yet the installed system ended up with firewire networking not only
installed, but taking the first network slot, ahead of any network
cards. It didn't get in the way of functionality, but it was annoying
and cluttered the output whenever I typed ifconfig -a.
It probably took up a little extra boot time and system resources, too.
I wanted it gone.
Easier said than done, as it turns out.
I could see two possible approaches.
- Figure out who was setting it to eth1, and tell it to ignore
the device instead.
- Blacklist the kernel module, so it couldn't load at all.
I began with approach 1.
The obvious culprit, of course, was udev. (I had already ruled out
hal, by removing it, rebooting and observing that the bogus eth0 was
still there.) Poking around /etc/udev/rules.d revealed the file
where the naming was happening: z25_persistent-net.rules.
It looks like all you have to do is comment out the two lines
for the firewire device in that file. Don't believe it.
Upon reboot, udev sees the firewire devices and says "Oops!
persistent-net.rules doesn't have a rule for this device. I'd better
add one!" and you end up with both your commented-out line, plus a
brand new uncommented line. No help.
Where is that controlled? From another file,
z45_persistent-net-generator.rules. So all you have to do is
edit that file and comment out the lines, right?
Well, no. The firewire lines in that file merely tell udev how to add
a comment when it updates z25_persistent-net.rules.
It still updates the file, it just doesn't comment it as clearly.
There are some lines in z45_persistent-net-generator.rules
whose comments say they're disabling particular devices, by adding a rule
GOTO="persistent_net_generator_end"
. But adding that
in the firewire device lines caused the boot process to hang.
There may be a way to ignore a device from this file, but I haven't
found it, nor any documentation on how this system works.
Defeated, I switched to approach 2: prevent the module from loading at
all. I never expect to use firewire networking, so it's no loss. And indeed,
there are lots of other modules loaded I'd like to blacklist, since
they represent hardware this machine doesn't have. So it would be
nice to learn how.
I had a vague memory of there having been a file with a name like
/etc/modules.blacklist some time back in the Pliocene.
But apparently no such file exists any more.
I did find /etc/modprobe.d/blacklist, which looked
promising; but the comment at the beginning of that file says
# This file lists modules which will not be loaded as the result of
# alias expansion, with the purpose of preventing the hotplug subsystem
# to load them. It does not affect autoloading of modules by the kernel.
Okay, sounds like this file isn't what I wanted. (And ... hotplug? I
thought that was long gone, replaced by udev scripts.)
I tried it anyway. Sure enough, not what I wanted.
I fiddled with several other approaches before Debian diva Erinn Clark
found this helpful page.
I created a file called /etc/modprobe.d/00local
and added this line to it:
install eth1394 /bin/true
and on the next boot, the module was no longer loaded, and no longer
showed up as a bogus ethernet device. Hurray!
This /etc/modprobe.d/00local technique probably doesn't bear
examining too closely. It has "hack" written all over it.
But if that's the only way to blacklist problematic modules,
I guess it's better than nothing.
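Presumably the same trick extends to any other module you never want loaded;
something like this, where the second module name is purely a placeholder:
# /etc/modprobe.d/00local -- one "install <module> /bin/true" line per module
install eth1394 /bin/true
install some_unwanted_module /bin/true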
Tags: linux, debian, kernel, networking
[
19:10 May 15, 2007
More linux |
permalink to this entry |
]
Since I'd already tried the latest Ubuntu on my desktop, I wanted to
check out Debian's latest, "Etch", on my laptop.
The installer was the same as always, and the same as the Ubuntu
installer. No surprises, although I do like the way Debian gives
me a choice of system types to install (Basic desktop, Web server,
etc. ... though why isn't "Development" an option?) compared to
Ubuntu's "take the packages we give you and deal with it later"
approach.
Otherwise, the install went very much like a typical Ubuntu install.
I followed the usual procedures and workarounds so as not to overwrite
the existing grub, to get around the Vaio hardware issues, etc.
No big deal, and the install went smoothly.
The good
But the real surprise came on booting into the new system.
Background: my Vaio SR-17 has a quirk (which regular readers will have
heard about already): it has one PCMCIA slot, which is needed for either
the external CDROM drive or a network card. This means that at any one
time, you can have a network, or a CDROM, but not both. This tends to
throw Debian-based installers into a tizzy -- you have to go through
five or more screens (including timing out on DHCP even after you've
told it that you have no network card) to persuade the installer that
yes, you really don't have a network and it's okay to continue anyway.
That means that the first step after rebooting into the new system is
always configuring the network card. In Ubuntu installs, this
typically means either fiddling endlessly with entries in the System
or Admin menus, or editing /etc/network/interfaces.
Anticipating a vi session, I booted into my new Etch and inserted the
network card (a 3COM 3c59x which often confuses Ubuntu).
Immediately, something began spinning in the upper taskbar.
Curious, I waited, and in ten seconds or so
a popup appeared informing me "You are now connected to the wired net."
And indeed I was! The network worked fine.
Kudos to debian -- Etch is the first distro which
has ever handled this automatically.
(I still need to edit /etc/network/interfaces to set my static IP
address -- network manager doesn't handle that part.)
Of course, since this was my laptop, the next most important feature
is power management. Happily,
both sleep and hibernate worked correctly,
once I installed the hibernate package. That had been my biggest
worry: Ubuntu was an early pioneer in getting ACPI and power
management code working properly, but it looks like Debian has
caught up.
The bad
I did see a couple of minor glitches.
First, I got a lot of system hangs in X. These turned out to be the
usual dri problem on S3 video cards. It's a well known bug, and I wish
distros would fix it!
I've also gotten at least one kernel OOPS, but I have a theory
about what might be causing that. Time will tell whether it's
a real problem.
It took a little googling to figure out the line I needed to add to
/etc/apt/sources.list in order to install programs that weren't
included on the CD.
(Etch automatically adds lines for security updates, but not for getting
new software). But fortunately, lots of other people have already asked
this in a variety of forums. The answer is:
deb http://http.us.debian.org/debian etch main contrib non-free
My husband had suggested that Etch might be lighter weight than Ubuntu
and less dependent on hal (which I always remove from my laptop,
because its
constant hardware polling
makes noise and sucks power). But no: Etch installed hal, and
any attempt to uninstall it takes with it the whole gnome desktop
environment, plus network-manager (that's apparently that nice app
that noticed my network card earlier) and rhythmbox. I don't actually
use the gnome desktop or these other programs, but it would be nice
to have the option of trying them when I want to check something out.
So for now I've resorted to the temporary solution:
mv /usr/sbin/hald /usr/sbin/hald-not
The ugly
Etch looks fairly nice, and I'm looking forward to exploring it.
I'm mostly kidding about the "ugly". I did hit one minor bit of
ugliness involving network devices which led me on a two-hour chase
... but I'll save that for its own article.
Tags: linux, debian, vaio
[
14:29 May 15, 2007
More linux |
permalink to this entry |
]
Wed, 14 Mar 2007
Carla Schroder's latest (excellent) article,
Cheatsheet:
Master Linux Package Management,
spawned a LinuxChix discussion of the subtleties of Debian package
management (which includes other Debian-based distros such as
Ubuntu, Knoppix etc.)
Specifically, we were unclear on the differences among
apt-get upgrade,
apt-get dist-upgrade,
aptitude upgrade,
aptitude dist-upgrade,
and
aptitude -f dist-upgrade.
Most of us have just been typing whichever command we learned first,
without understanding the trade-offs.
But Erinn Clark, our Debian Diva, checked with some of her fellow
Debian experts and got us most of the answers, which I will attempt
to summarize with a little extra help from web references and man pages.
First, apt-get vs. aptitude:
we were told that the primary difference between them is
that "aptitude is less likely to remove packages." I confess
I'm still not entirely clear on what that means, but aptitude is seen
as safer and smarter and I'll go on using it.
aptitude upgrade gets updates (security, bug fixes or whatever)
to all currently installed packages. No packages will be removed,
and no new packages will be installed.
If a currently installed package changes to require a
new package that isn't installed, upgrade will refuse to update
those packages (they will be "kept back"). To install the "kept back"
packages with their dependencies, you can use:
aptitude dist-upgrade gets updates to the currently installed
packages, including any new packages which are now required.
But sometimes you'll encounter problems in the dependencies,
in which case it will suggest that you:
aptitude -f dist-upgrade tries to "fix broken packages",
packages with broken dependencies. What sort of broken dependencies?
Well, for example, if one of the new packages conflicts with another
installed package, it will offer to remove the conflicting package.
Without -f, all you get is that a package will be "held back" for
unspecified reasons, and you have to go probing with commands like
aptitude -u install pkgname or
apt-get -o Debug::pkgProblemResolver=yes dist-upgrade
to find out the reason.
The upshot is that if you want everything to just happen in
one step without pestering you, use aptitude -f dist-upgrade;
if you want to be cautious and think things through at each step,
use aptitude upgrade and be willing to type the stronger
commands when it runs into trouble.
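Put as a bare recipe -- these are just the commands discussed above, in the
order you'd escalate through them:
aptitude update             # refresh package lists first
aptitude upgrade            # safe: never installs new or removes packages
aptitude dist-upgrade       # if packages were "kept back"
aptitude -f dist-upgrade    # if dist-upgrade reports broken dependencies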
Sections 6.2 and 6.3 of the
Debian
Reference cover these commands a little, but not in much detail.
The APT
Howto is better, and runs through some useful examples (which I
used to try to understand what -f does).
Thanks go to Erinn, Ari Pollak, and Martin Krafft (whose highly rated book,
The
Debian System: Concepts and Techniques, apparently would have
answered these questions, and I'll be checking it out).
Tags: linux, debian, ubuntu
[
22:19 Mar 14, 2007
More linux |
permalink to this entry |
]
Fri, 18 Nov 2005
I found myself in a situation where a package was mostly installed,
but it was missing some files, notably the startup file in
/etc/init.d/packagename. No problem, right? Just reinstall
the package.
Well, no. dpkg -i packagename spun and looked busy for a
while, but the missing file didn't appear. Removing the package first
with dpkg -r packagename, then reinstalling, didn't help either,
nor did dpkg -i --force-newconfig packagename.
(I didn't try dpkg -r --purge packagename because I already
had invested some time into setting up the files in the package
and was hoping to avoid losing that work.)
Of course, I could have extracted the .deb somewhere else and pulled
the single init.d file out of it; but I was worried that I might be
missing other files, and end up with a flaky package.
Well, as far as I can tell, there really isn't any way to do this
"right" in Debian: there's no way to tell dpkg "Really install this
package, every file in it, even if you think maybe some of the files
already got installed before", or "Install any file in this package
which doesn't currently exist on disk." It's amazing (I'm pretty
sure RPM offered both of these options) but apparently this isn't
something dpkg allows.
I found a way to trick it, though:
rm /var/lib/dpkg/info/packagename.*
dpkg -i packagename
You get a lovely warning that
dpkg: serious warning: files list file for package `packagename'
missing, assuming package has no files currently installed.
and then dpkg finally goes ahead and reinstalls all the files.
Whew!
Update: Aha! It is possible after all. dpkg -i --force-confmiss is
the option I wasn't seeing. Thanks, Yosh!
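A minimal usage sketch (the .deb filename is of course a placeholder):
dpkg -i --force-confmiss packagename.deb   # installs any conffiles that have gone missing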
Tags: linux, debian
[
19:01 Nov 18, 2005
More linux |
permalink to this entry |
]
Tue, 12 Apr 2005
A recent change to the Debian font system has caused some odd
font problems which Debian users might do well to know about.
The change has to do with the addition in /etc/fonts of a
directory conf.d containing symbolic links to scripts,
and the overwriting of some of the existing files in /etc/fonts.
The symptoms are varied and peculiar. On my sid system, on each
boot, the system would toggle between two different font resolutions.
I'd start xchat, and the fonts would be too teeny to read; so I'd call
up the preferences dialog, see the font was at 9, and increase it to
12, at which point I'd see the font I was used to seeing (though the
UI font in the tabs would still be teeny). Subsequent runs of xchat
would be fine (except for the still-teeny tab fonts). But upon
reboot, xchat would come up with the tab font correct and the channel
font HUGE. Prefs dialog again: it's still at 12 where I set it last
time, so now I reset it to 9, which makes it the right size.
Until the next reboot, when everything became teeny again and
I have to go back to 12.
The system resolution never changed, nor did the rendering of the
bitmapped fonts I use in emacs and terminal clients; only the
rendering of freetype scalable fonts changed with each reboot.
Back in the days when all fonts were bitmapped, I would have guessed
that the font system was alternating between 100dpi fonts and 72dpi fonts.
At a loss as to what might cause this strange behavior, I took a
peek into /etc/fonts/conf.d, which Dave had discovered a
few weeks ago when he updated his sarge system and all his bitmapped
fonts disappeared. Though my problem didn't sound remotely
similar to his: my bitmapped fonts were fine, it was the scalable
ones which were flaky.
Turns out the symlink I'd acquired in the update,
/etc/fonts/conf.d/30-debconf-no-bitmaps.conf, did indeed
point to a file called no-bitmaps.conf, just as Dave's had.
Just to see what would happen, I removed it, and made a new symlink,
30-debconf-yes-bitmaps.conf, pointing to yes-bitmaps.conf.
Voila! The size-toggling problem disappeared,
and, even better, bitmapped fonts like "clean" now show up in
gtkfontsel and in gtk font selection dialogs, which they never did
before. I can use all my fonts now!
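In command form, the swap amounted to something like this (I'm writing the
target path from memory, so check where yes-bitmaps.conf actually lives on
your system before trusting it):
cd /etc/fonts/conf.d
rm 30-debconf-no-bitmaps.conf
ln -s /etc/fonts/yes-bitmaps.conf 30-debconf-yes-bitmaps.conf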
The moral is: if you've updated sarge or sid recently, and see
any weirdness at all in fonts, go to /etc/fonts/conf.d and
fiddle with the symlinks. Even if it doesn't seem directly related to
your problem.
As to why no-bitmaps.conf causes the system to toggle between
two different font scalings, that's still a mystery. The only
difference between no-bitmaps.conf and yes-bitmaps.conf
is that one rejects, and the other accepts, fonts that have "scalable"
set to false. Why that would change the scale at which fonts are
rendered is beyond me. I'll leave that up to someone who understands
the new debian font system. If any such person exists.
Update 5/24/2005: turns out you can change this on a per-user
basis too, with ~/.fonts.conf. man fonts.conf for details.
Tags: linux, debian, fonts
[
22:45 Apr 12, 2005
More linux |
permalink to this entry |
]
Wed, 02 Mar 2005
I like to keep my laptop's boot sequence lean and mean, so it
can boot very quickly. (Eventually I'll write about this in
more detail.) Recently I made some tweaks, and then went through
a couple of dist-upgrades (it's currently running Debian "sarge"),
and had some glitches. Some of what I learned was interesting
enough to be worth sharing.
First, apache stopped serving http://localhost/ -- not
important for most machines, but on the laptop it's nice to be
able to use a local web server when there's no network connected.
Further investigation revealed that this had nothing to do with
apache: it was localhost that wasn't working, for any port.
I thought perhaps my recent install of 2.4.29 was at fault, since
some configuration had changed, but that wasn't it either.
Eventually I discovered that the lo interface was there,
but wasn't being configured, because my boot-time tweaking had
disabled the ifupdown boot-time script, normally called from
/etc/rcS.d.
That's all straightforward, and I restored ifupdown to its
rightful place using update-rc.d ifupdown start 39 S .
Dancer suggested apt-get install --reinstall ifupdown
which sounds like a better way; I'll do that next time. But
meanwhile, what's this ifupdown-clean script that gets installed
as S18ifupdown-clean?
I asked around, but nobody seemed to know, and googling doesn't
make it any clearer. The script obviously cleans up something
related to /etc/network/ifstate, which seems to be a text
file holding the names of the currently configured network
interfaces. Why? Wouldn't it be better to get this information
from the kernel or from ifconfig? I remain unclear as to what the
ifstate file is for or why ifupdown-clean is needed.
Now my loopback interface worked -- hurray!
But after another dist-upgrade, now eth0 stopped working.
It turns out there's a new hotplug in town. (I knew this
because apt-get asked me for permission to overwrite
/etc/hotplug/net.agent; the changes were significant enough
that I said yes, fully aware that this would likely break eth0.)
The new net.agent comes with comments referencing
NET_AGENT_POLICY in /etc/default/hotplug, and documentation
in /usr/share/doc/hotplug/README.Debian. I found the
documentation baffling -- did NET_AGENT_POLICY=all mean that
it would try to configure all interfaces on boot, or only that
it would try to configure them when they were hotplugged?
It turns out it means the latter. net.agent defaults to
NET_AGENT_POLICY=hotplug, which doesn't do anything unless you
edit /etc/network/interfaces and make a bunch of changes;
but changing NET_AGENT_POLICY=all makes hotplug "just work".
I didn't even have to excise LIFACE from the net.agent code,
like I needed to in the previous release. And it still works
fine with all my existing Network
Schemes entries in /etc/network/interfaces.
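For reference, the whole change is one line, exactly as described above:
# in /etc/default/hotplug
NET_AGENT_POLICY=all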
This new hotplug looks like a win for laptop users. I haven't
tried it with usb yet, but I have no reason to worry about that.
Speaking of usb, hotplug, and the laptop: I'm forever hoping
to switch to the 2.6 kernel, because it handles usb hotplug so much
better than 2.4; but so far, I've been prevented by PCMCIA hotplug
issues and general instability when the laptop suspends and resumes.
(2.6 works fine on the desktop, where PCMCIA and power management
don't come into play.)
A few days ago, I built both 2.4.29 and 2.6.10, since I was behind
on both branches. 2.4.29 works fine. 2.6.10, alas, is even less
stable than 2.6.9 was. On the laptop's very first resume from BIOS
suspend after the first 2.6.10 boot, it hung, in the same way I'd
been seeing sporadically from 2.6.9: no keyboard lights blinking
(so not a kernel "oops"), cpu fan sometimes spinning,
and no keyboard response to ctl-alt-Fn or anything else.
I suppose the next step is to hook up the "magic sysrq" key and see
if it responds to the keyboard at all when in that state.
Tags: linux, debian, networking, hotplug
[
23:06 Mar 02, 2005
More linux |
permalink to this entry |
]
Mon, 20 Dec 2004
I've forever struggled with Debian's printing system.
A few months ago, Debian unstable introduced a new package called
printconf which, once I discovered by accident it required
the parallel port to be in EPP mode, actually detected and
configured my trusty Epson Photo 700. It was a happy day!
But since then, the printing system has broken again.
It wasn't so bad when printing did nothing at all, or printed
random garbage characters or postscript instead of a picture.
But now (for the past month or so), what it does is print out
a centimeter or so of reasonable graphics, after which the printer
starts to issue horrible grinding noises
and has to be powered off in order to stop the destruction.
I discovered through much fiddling that I could get the printer
working again (on a non-Debian system) by powering it off and
leaving it that way for quite a while (a few minutes doesn't seem to
be enough, but 20 minutes is), then plugging it into the SuSE 9.1
machine and running a series of clean/nozzle test/clean cycles.
Eventually, after the second round where the nozzle test prints
clean, the printer works normally again from SuSE or Redhat.
I still don't know whether all that loud grinding is doing
any permanent damage to the printer.
I suspect the actual problem may be something like paper size.
In the few months during which printing actually worked,
I had lots of problems with mozilla's printouts overrunning the
page, which turned out to be due to Xprint having its own idea of
paper size (A4) rather than following the system setting (usletter).
I never did find a place to configure Xprint's idea of paper size,
so I uninstalled Xprint, and mozilla magically became able to print
on usletter paper. But it's possible there are other parameters
buried in the debian printing system somewhere, perhaps telling
the printer to print to paper wider than it's capable of.
I've filed bugs, but they never get any response which might offer
a clue how I could help debug this; I suspect Debian's print
spooling system is basically orphaned. I've tried installing
and uninstalling every combination of the myriad print spooling
components I can find. I'd love to uninstall it all and build the
whole spooler from source, and then perhaps try to track down
the problem and fix it, but there are so many pieces which all
work together in undocumented ways that I don't know where to start.
(Perhaps by installing exactly the component set that SuSE does?)
I'm reluctantly giving up on Debian for my primary desktop machine.
I like almost everything else about Debian, and I've run it for
several years on my primary machine; but during that time I've
only had a few months here and there where printing briefly worked
before breaking again. There must be a distro that can do easy
software updates like Debian, yet is still capable of driving
a printer without damaging it!
Tags: linux, debian, printing
[
23:46 Dec 20, 2004
More linux |
permalink to this entry |
]
Fri, 05 Nov 2004
Printing's been broken on my Debian machine forever.
For one brief shining moment back in July I
briefly
got it working, then a week later a dist-upgrade broke it again
and it's been broken ever since.
Last week Debian Weekly
News mentioned a new package called "printconf" which supposedly
autoconfigures usb and parallel printers for CUPS. Now, setting
aside for the moment that there's already a package called
printconf, which configures a completely different spooler than
CUPS, and that it's very confusing of Debian to resurrect an old
name for a completely different purpose, of course I wanted to try
it.
At apt-get time, it asked me whether I wanted to configure my
printers now, and of course I said yes. The package installed,
it printed a message about restarting CUPS, and no more details.
Did it do anything?
I visited the CUPS configuration url (CUPS is configured via a web
browser) and the entry looked like my old printer entry. Just for
ducks I clicked "print a test page". Nada. So I removed the entry,
went back to my root shell and typed printconf. It printed
"Restarting cups ... done." No other info. Back to the web
configuration page ... no printer there.
Eventually I discovered the -v option, which at least told me that
it wasn't finding any parallel printers. I know this printer can be
detected via the parallel port (SuSE and Mandrake both autoconfigure
it), so something was wrong. Time to look at the BIOS.
A bunch of reboots later, I finally managed to get into my machine's
BIOS screen (hint: repeatedly press DEL during boot. The screen saying
DEL is the right key only flashes for a fraction of a second, so
there's no hope of ever reading it and I wasted several boot cycles
pressing function keys instead) and changed the parallel port from
"ECP" to "ECP/EPP". Back into Debian -- and voila! printconf saw
the printer, autoconfigured it with some magic the earlier entry
hadn't had, and after a year and a half I have a debian printer again!
(Incidentally, the parallel port setting isn't why the printer
wasn't working before; it was something about the CUPS
configuration. Printing used to work on this machine several
years ago and the BIOS settings haven't changed since then.)
All hail printconf! I wonder if it's ever occurred to anyone to
mention in the man page that it needs an EPP (or ECP/EPP?) parallel
port?
Tags: linux, debian, printing
[
22:05 Nov 05, 2004
More linux |
permalink to this entry |
]
Sat, 31 Jul 2004
For some reason X on the laptop hasn't been seeing the external
USB mouse. But last night I got it working again. Turns out that
/dev/input/mouse0 no longer works; I have to use /dev/input/mice
because the mouse number changes each time it's plugged in
(which I don't think was a problem with earlier kernels).
Thanks to Peter S. for helping me track the problem down.
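For anyone hitting the same thing, the relevant InputDevice section in
XF86Config-4/xorg.conf looks something like this (the Protocol value is
just a common choice for wheel mice, not necessarily what yours needs):
Section "InputDevice"
    Identifier "Configured Mouse"
    Driver     "mouse"
    Option     "Device"   "/dev/input/mice"
    Option     "Protocol" "ImPS/2"
EndSection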
I also learned (unrelated to the mouse issue) about a couple of
very useful Debian apps, deborphan and debfoster, for finding
orphaned and no longer needed libraries. I'd always wanted
something like that to help clean up my crufty debian systems.
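Typical usage, for what it's worth -- the second line is the common idiom
for acting on deborphan's output, not something from the package docs, so
read the list carefully before agreeing to remove anything:
deborphan                               # list orphaned libraries
apt-get remove --purge $(deborphan)     # review the list before confirming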
Tags: linux, X11, debian
[
19:39 Jul 31, 2004
More linux |
permalink to this entry |
]
Tue, 20 Jul 2004
After a year of no printing on sid, I went back to sarge to see if I could
still print from there.
When I dist-upgraded my ancient sarge, one of the questions it asked me
was whether to replace printers.conf. That sounded suspicious: I saved
the old printers.conf, then allowed it to replace it with its new
version.
Well, sure enough, with the new printers.conf it didn't know about
my Epson, and when I went to the cups admin url to add it, there
was no "add printer" button. Just like I'd always seen in sid.
In sid, someone once gave me the direct url to "add a printer",
but when I followed it, I didn't get a working setup anyway.
I decided to try copying my old printers.conf on top of the new one.
And voila, it worked! Printing works okay from sarge. (It still has
the problem of the cups test page outlines not aligning well with the
physical printer page, so it may not work for printing labels, but
it's a start.)
So I moved over to sid, and tried the same printers.conf. Voila,
something came out of the printer, the first I've ever seen that
happen from sid! It didn't entirely work: I printed a few lines
using lpr, and the printer printed those lines but then didn't
eject the page, and I had to wrestle with the printer to get the
paper out. So all is not quite well in sid land, but it's much
farther along than it was using only the tools available in sid
(rather than my two-year-old printers.conf originally configured
on a much older sarge).
The other interesting file that upgrade asked me about was
epson.conf, which turns out to be for the epson scanner, not the
epson printer. Perhaps by using that (I saved the old sarge file)
I'll eventually be able to get scanning working on sid! That
would be lovely. For now, I'm using sarge a lot more.
Tags: linux, debian, printing
[
23:09 Jul 20, 2004
More linux |
permalink to this entry |
]
Wed, 07 Jul 2004
Got an account on alioth, akkana-guest.
Discovered that the tuxracer problem I've been having isn't actually
Debian sid having broken DRI, but merely some problem with the
commercial tuxracer (probably not loading the gl libs properly
or something). Free tuxracer still works. Yay.
Tags: linux, debian, X11, games
[
20:00 Jul 07, 2004
More linux |
permalink to this entry |
]