How Do You Really Measure Linux Bloat?
Finding and Trimming Linux Bloat.
This one just covers the basics. The followup article, in two weeks, will dive into more detail on how to analyze what resources programs are really using.
I'm posting this while sitting in an excellent OSCON tutorial on Linux System and Network Performance Monitoring, by Darren Hoch. It's full of great information and I'm sure his web site is equally useful.
And it's a great extension to a topic that's been occupying me over the past few months: performance tracking to slim down software that might be slowing a Linux system down. That's the topic of one of my two OSCON talks this Wednesday: "Featherweight Linux: How to turn a netbook or older laptop into a Ferrari." Although I don't go into anywhere near the detail Darren does, a lot of the principles are the same, and I know I'll find a use for a lot of his techniques. The talk also includes a free bonus tourist tip for San Jose visitors.
Today's Linux Planet article is related to my Featherweight talk: What's Bogging Down Your Linux PC? Tracking Down Resource Hogs. Usually they publish my articles on Thursdays, but I asked for an early release since it's related to tomorrow's talk.
For anyone at OSCON in San Jose, I hope you can come to Featherweight late Wednesday afternoon, or to my other talk, Wednesday just after lunch, "Bug Fixing for Everyone (even non-programmers!)" where I'll go over the steps programmers use while fixing bugs, and show that anyone can fix simple bugs even without any prior knowledge of programming.
I've been through all the BIOS screens looking for a setting to flip, but there's nothing there. Some web searching told me that under Windows, there's a setting you can change that will affect this, but I couldn't find anything similar for Linux, until finally drc clued me in to /proc/acpi/wakeup.
cat /proc/acpi/wakeup will tell you all the events that can cause your machine to wake up from various sleep states.
Unfortunately, they're also obscurely coded. Here are mine:
Device  S-state   Status    Sysfs node
SLPB      S4     *enabled
P32       S4      disabled  pci:0000:00:1e.0
UAR1      S4      enabled   pnp:00:0a
PEX0      S4      disabled  pci:0000:00:1c.0
PEX1      S4      disabled
PEX2      S4      disabled  pci:0000:00:1c.2
PEX3      S4      disabled  pci:0000:00:1c.3
PEX4      S4      disabled
PEX5      S4      disabled
UHC1      S3      disabled  pci:0000:00:1d.0
UHC2      S3      disabled  pci:0000:00:1d.1
UHC3      S3      disabled  pci:0000:00:1d.2
UHC4      S3      disabled  pci:0000:00:1d.3
EHCI      S3      disabled  pci:0000:00:1d.7
AC9M      S4      disabled
AZAL      S4      disabled  pci:0000:00:1b.0
What do all those symbols mean? I have no clue. Apparently the codes come from the BIOS's DSDT code, and since it varies from board to board, nobody has published tables of likely translations.
The only two wakeups that were enabled for me were SLPB and UAR1.
SLPB apparently stands for SLeeP Button, and Rik suggested UAR probably stood for Universal Asynchronous Receiver (the more familiar term UART both Receives and Transmits).
Some of the other devices in the list can possibly be identified by comparing their pci: codes against lspci, but not those two.
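For example (just a sketch; 00:1d.0 is one of the addresses from my table above, so substitute whichever entry you're curious about):

lspci -s 00:1d.0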
Time for some experimentation. You can toggle any of these by writing to the wakeup device:
echo UAR1 >/proc/acpi/wakeup
It turned out that to disable mouse and keyboard wakeup, I had to disable both SLPB and UAR1. With both disabled, the machine wakes up when I press the power button. (What the SLeeP Button is, if it's not the power button, I don't know.)
My mouse and keyboard are PS/2. For a USB mouse and keyboard, look for something like USB0, UHC0, USB1.
The UAR1 setting persists even across boots: there's no need to do anything to make it stick. But the SLPB setting resets every time I boot. So I edited /etc/rc.local and added this line:
echo SLPB >/proc/acpi/wakeup
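One wrinkle: since writing a name to /proc/acpi/wakeup toggles the state rather than setting it, a blind echo at boot could re-enable something that's already off. A slightly more careful sketch for rc.local (SLPB and UAR1 are the names from my table; substitute whatever yours shows):

# Disable wakeup for these devices, but only if they're currently enabled,
# since echoing a name toggles its state rather than setting it.
for dev in SLPB UAR1; do
    if grep -q "^$dev.*enabled" /proc/acpi/wakeup; then
        echo $dev > /proc/acpi/wakeup
    fi
done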
A recursive grep in ~/.mozilla/firefox showed that the only reference to the old unwanted file was in the binary file places.sqlite.
My places.sqlite was 11Mb. A look through the Prefs window showed that the default setting was to store history for a minimum of 90 days. That seemed rather excessive, so I reduced it drastically. But that didn't reduce the size of the file at all, nor did it banish the spurious URLbar suggestion when I typed /home/....
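Since places.sqlite is just a SQLite database, you can also peek at the offending entries directly. A rough sketch, assuming the standard moz_places table and a single *.default profile (quit Firefox first so the database isn't locked):

cd ~/.mozilla/firefox/*.default
sqlite3 places.sqlite "SELECT url, visit_count FROM moz_places WHERE url LIKE '%/home/%';"

The LIKE pattern is just an example; substitute a fragment of whatever URL keeps coming up.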
After some discussion with folks on IRC, it developed that Firefox may never actually reduce the size of the places.sqlite file at all. Even if it did reduce the amount of data in the file (which it's not clear it does), it never tells sqlite to compact the file to use less space. Apparently there was some work on that about a year ago, but it was slow and unreliable and they never got it working, and eventually gave up on it.
You can run an sqlite compaction by hand (make sure to exit your running firefox first!):
sqlite3 places.sqlite vacuum
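To see whether the vacuum actually buys you anything, compare the file size before and after (run from your profile directory):

ls -l places.sqlite     # note the size
sqlite3 places.sqlite vacuum
ls -l places.sqlite     # compare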
But vacuuming really didn't help much. It only reduced the file from 11 to 8.8 Mb (and that was after cutting the number of days of history Firefox was supposed to keep to less than a third of the original setting), and it didn't get rid of that spurious suggestion.
So the only remaining option seemed to be to remove the file. It stores both history and bookmarks, so it's best to back up bookmarks before removing it. I backed up bookmarks to the .json format firefox likes to use for backups, and also exported them to a more human (and browser) readable bookmarks.html. Then I removed the places.sqlite file.
Success! The spurious recommendation was gone. Typing seems faster too (fewer of those freezes while the "awesomebar" searches through its list of recommendations).
So I guess firefox can't be trusted to clean up after itself, and users who care have to do that manually. It remains to be seen how much the file will grow now. I expect periodic vacuumings or removals will still be warranted if I don't want a huge file; but it's pretty easy to do, and firefox found the bookmarks backup and reloaded them without any extra work on my part.
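If you'd rather hedge your bets when removing it, one approach (just a sketch; quit Firefox first, and note the glob assumes a single *.default profile) is to move the file aside instead of deleting it, since Firefox recreates places.sqlite on its next startup:

cd ~/.mozilla/firefox/*.default
mv places.sqlite places.sqlite.bak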
In the meantime, I made a new bookmark -- hidden in my bookmarklets menu so it doesn't clutter the main bookmarks menu -- to the directory where I preview articles I'm writing. That ought to help a bit with future URLbar suggestions.
Since I don't run a gnome desktop, I assumed it probably had something to do with requiring gnome-vfs services that I don't have. But yesterday I finally got some time to chase it down with help from various folk on #gimp.
I had libgnomevfs (and its associated dev package) installed on my Ubuntu Hardy machine, but I didn't have gvfs. It was suggested that I install the gvfs-backends package. I tried that, but it didn't help; apparently gvfs requires not just libgvfs and gvfs-backends, but also running a new daemon, gvfsd. Finding an alternative was starting to sound appealing.
Turns out gimp now has three compile-time configure options related to opening URIs:
--without-gvfs       build without GIO/GVfs support
--without-gnomevfs   build without gnomevfs support
--without-libcurl    build without curl support
These correspond to four URI-getting backends in the source, in plug-ins/file-uri: gvfs, gnomevfs, libcurl, and plain wget. GIMP can degrade from gvfs to gnomevfs to libcurl to wget, but only at compile time, not at runtime: only one of the four is built.
On my desktop machine, --without-gvfs was all I needed.
Even without running the gnome desktop, the gnomevfs front-end seems to work fine. But it's good to know about the other options too, in case I need to make a non-gnomevfs version to run on the laptop or other lightweight machines.
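For reference, the rebuild boiled down to something like this (a sketch rather than my exact invocation; the prefix is just an example, and the usual GIMP build dependencies still apply):

./configure --prefix=/usr/local --without-gvfs
make
make install   # as root, if your prefix needs it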
Last night Joao and I were on IRC helping someone who was learning to write gimp plug-ins. We got to talking about pixel operations and how to do them in Python. I offered my arclayer.py as an example of using pixel regions in gimp, but added that C is a lot faster for pixel operations. I wondered if reading directly from the tiles (then writing to a pixel region) might be faster.
But Joao knew a still faster way. As I understand it, one major reason Python is slow at pixel region operations compared to a C plug-in is that Python only writes to the region one pixel at a time, while C can write batches of pixels by row, column, etc. But it turns out you can grab a whole pixel region into a Python array, manipulate it as an array then write the whole array back to the region. He thought this would probably be quite a bit faster than writing to the pixel region for every pixel.
He showed me how to change the arclayer.py code to use arrays, and I tried it on a few test layers. Was it faster? I made a test I knew would take a long time in arclayer, a line of text about 1500 pixels wide. Tested it in the old arclayer; it took just over a minute to calculate the arc. Then I tried Joao's array version: timing with my wristwatch stopwatch, I call it about 1.7 seconds. Wow! That might be faster than the C version.
The updated, fast version (0.3) of arclayer.py is on my arclayer page.
If you just want the trick to using arrays, here it is:
from array import array

[ ... setting up ... ]

# initialize the regions and get their contents into arrays:
srcRgn = layer.get_pixel_rgn(0, 0, srcWidth, srcHeight, False, False)
src_pixels = array("B", srcRgn[0:srcWidth, 0:srcHeight])

dstRgn = destDrawable.get_pixel_rgn(0, 0, newWidth, newHeight, True, True)
p_size = len(srcRgn[0,0])
dest_pixels = array("B", "\x00" * (newWidth * newHeight * p_size))

[ ... then inside the loop over x and y ... ]

src_pos = (x + srcWidth * y) * p_size
dest_pos = (newx + newWidth * newy) * p_size

newval = src_pixels[src_pos : src_pos + p_size]
dest_pixels[dest_pos : dest_pos + p_size] = newval

[ ... when the loop is all finished ... ]

# Copy the whole array back to the pixel region:
dstRgn[0:newWidth, 0:newHeight] = dest_pixels.tostring()
Time passed ... and Back/Forward didn't get faster. In fact, they got a lot slower. The "Blazingly Fast Back" code did get checked in (here's how to enable it) but somehow it never seemed to make any difference.
The problem, it turns out, is that the landing of bug 101832 added code to respect a couple of HTTP Cache-Control header settings, no-store and no-cache. There's also a third cache control header, must-revalidate, which is similar (the difference among the three settings is fairly subtle, and Firefox seems to treat them pretty much the same way).
Translated, that means that web servers, when they send you a page, can send some information along with the page that asks the browser "Please don't keep a local copy of this page -- any time you want it again, go back to the web and get a new copy."
There are pages for which this makes sense. Consider a secure bank site. You log in, you do your banking, you view your balance and other details, you log out and go to lunch ... then someone else comes by and clicks Back on your browser and can now see all those bank pages you were just viewing. That's why banks like to set no-cache headers.
But those are secure pages (https, not http). There are probably reasons for some non-secure pages to use no-cache or no-store ... um ... I can't think of any offhand, but I'm sure there are some.
But for most pages it's just silly. If I click Back, why wouldn't I want to go back to the exact same page I was just looking at? Why would I want to wait for it to reload everything from the server?
The problem is that modern Content Management Systems (CMSes) almost always set one or more of these headers. Consider the Linux.conf.au site. Linux.conf.au is one of the most clueful, geeky conferences around. Yet the software running their site sets

Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache

on every page. I'm sure this isn't intentional -- it makes no sense for a bunch of basically static pages showing information about a conference several months away. Drupal, the CMS used by LinuxChix, sets

Cache-Control: must-revalidate

-- again, pointless. All it does is make you afraid to click on links, because then if you want to go Back it'll take forever. (I asked some Drupal folks about this and they said it could be changed with drupal_set_header.)
(By the way, you can check the http headers on any page with:
wget -S -O /dev/null http://...
or, if you have curl,
curl --head http://... )
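For example, to pull out just the caching-related headers (example.com is only a stand-in for whatever site you're checking):

curl -s --head http://example.com/ | grep -i -E 'cache-control|pragma|expires'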
Here's an excellent summary of the options in an Opera developer's blog, explaining why the way Firefox handles caching is not only unfriendly to the user, but also wrong according to the specs. (Darn it, reading sensible articles like that makes me wish I wasn't so deeply invested in Mozilla technology -- Opera cares so much more about the user experience.)
But, short of a switch to Opera, how could I fix it on my end? Google wasn't any help, but I figured that this must be a reported Mozilla bug, so I turned to Bugzilla and found quite a lot. Here's the scoop. First, the code to respect the cache settings (slowing down Back/Forward) was apparently added in response to bug 101832. People quickly noticed the performance problem, and filed bug 112564. (This was back in late 2001.) There was a long debate, but in the end, a fix was checked in which allowed no-cache http (non-secure) sites to cache and get a fast Back/Forward. This didn't help no-store and must-revalidate sites, which were still just as slow as ever.
Then a few months later, bug 135289 changed this code around quite a bit. I'm still getting my head around the code involved in the two bugs, but I think this update didn't change the basic rules governing which pages get revalidated.
(Warning: geekage alert for next two paragraphs. Use this fix at your own risk, etc.)
Unfortunately, it looks like the only way to fix this is in the C++ code. For folks not afraid of building Firefox, the code lives in nsDocShell::ShouldDiscardLayoutState, which controls the no-cache and no-store directives. In that function (currently line 8224, but don't count on it), the final line is:
return (noStore || (noCache && securityInfo));

Change that to

return ((noStore || noCache) && securityInfo);

and Back/Forward will get instantly faster, while still preserving security for https. (If you don't care about that security issue and want pages to cache no matter what, just make the function always return false.)
The must-revalidate setting is handled in a completely different place, in nsHttpChannel. However, for some reason, fixing nsDocShell also fixes Drupal pages which set only must-revalidate. I don't quite understand why yet. More study required. (End geekage.)
Any Mozilla folks are welcome to tell me why I shouldn't be doing this, or if there's a better way (especially if it's possible in a way that would work from an extension or preference). I'd also be interested in hearing from Drupal or other CMS folks defending why so many CMSes destroy the user experience like this. But please first read the Opera article referenced above, so that you understand why I and so many other users have complained about it. I'm happy to share any comments I receive (let me know if you want your comments to be public or not).