Shallow Thoughts :: Mar

Akkana's Musings on Open Source Computing and Technology, Science, and Nature.

Wed, 30 Mar 2011

Reading, converting and editing EPUB ebooks

Since switching to the Archos 5 Android tablet for my daily feed reading, I've also been using it to read books in EPUB format.

There are tons of places to get EPUB ebooks -- I won't try to list them all, but Project Gutenberg is a good place to start. The next question was how to read them.

Reading EPUB books: Aldiko or FBReader

I've already mentioned Aldiko in my post on Android as an RSS reader. It's not so good for reading short RSS feeds, but it's excellent for ebooks.

But Aldiko has one fatal flaw: it insists on keeping its books in one place, and you can't change it. When I tried to add a big technical book, Aldiko spun for several minutes with no feedback, then finally declared it was out of space on the device. Frustrating, since I have a nearly empty 8-gigabyte micro-SD card and there's no way to get Aldiko to use it. Fiddling with symlinks didn't help.

A reader gave me a tip a while back that I should check out FBReader. I'd been avoiding it because of a bad experience with the early FBReader on the Nokia 770 -- but it's come a long way since then, and FBReaderJ, the Android port, works very nicely. It's as good a reader as Aldiko (except I wish the line spacing were more configurable). It has better navigation: I can see how far along in the book I am or jump to an arbitrary point, tasks Aldiko makes quite difficult. Most important, it lets me keep my books anywhere I want them. Plus it's open source.

Creating EPUB books: Calibre and ebook-convert

I hadn't had the tablet for long before I encountered an article that was only available as PDF. Wouldn't it be nice to read it on my tablet?

Of course, Android has lots of PDF readers. But most of them aren't smart about things like rewrapping lines or changing fonts and colors, so it's an unpleasant experience to try to read PDF on a five-inch screen. Could I convert the PDF to an EPUB?

Sadly, there aren't very many open-source options for handling EPUB. For converting from other formats, you have one choice: Calibre. It's a big complex GUI program for organizing your ebook library and a whole bunch of other things I would never want to do, and it has a ton of prerequisites, like Qt4. But the important thing is that it comes with a small Python script called ebook-convert.

ebook-convert has no built-in help -- it takes lots of options, but to find out what they are, you have to go to the ebook-convert page on Calibre's site. But here's all you typically need:

ebook-convert --authors "Mark Twain" --title "Huckleberry Finn" infile.pdf huckfinn.epub
Update: They've changed the syntax as of Calibre v. 0.7.44, and now it insists on having the input and output filenames first:
ebook-convert infile.pdf huckfinn.epub --authors "Mark Twain" --title "Huckleberry Finn"

Pretty easy; the only hard part is remembering that it's --authors and not --author.

Calibre (and ebook-convert) can take lots of different input formats, not just PDF. If you're converting ebooks, you need it. I wish ebook-convert was available by itself, so I could run it on a server; I took a quick stab at separating it, but even once I separated out the Qt parts it still required Python libraries not yet available on Debian stable. I may try again some day, but for now, I'll stick to running it on desktop systems.

Editing EPUB books: Sigil

But we're not quite done yet. Calibre and ebook-convert do a fairly good job, but they're not perfect. When I tried converting my GIMP book from a PDF, the chapter headings were a mess and there was no table of contents. And of course I wanted the cover page to be right, instead of the default Calibre image. I needed a way to edit it.

An EPUB book is basically a zip archive of XML and XHTML files, so in theory I could have unpacked it and fixed all this with a text editor, but I wanted to avoid that if possible.
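For the record, the hand-editing route looks roughly like this -- the filenames and the OEBPS directory here are just examples, since the layout varies from book to book:

unzip book.epub -d book-src
cd book-src
# ... edit the XHTML and XML files ...
zip -X0 ../book-fixed.epub mimetype           # mimetype must come first, uncompressed
zip -rX9 ../book-fixed.epub META-INF OEBPS    # then the rest of the book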

And I found Sigil. Wikipedia claims it's the only application that can edit EPUB books.

There's no sigil package in Ubuntu (though Arch has one), but it was very easy to install from the sigil website.

And it worked beautifully. I cleaned up the font problems at the beginnings of chapters, added chapter breaks where they were missing, and deleted headings that didn't belong. Then I had Sigil auto-generate a table of contents from headers in the document. I was also able to fix the title and put the real book cover on the title page.

It all worked flawlessly, and the ebook I generated with Sigil looks very nice and has working navigation when I view it in FBReaderJ (it's still too big for Aldiko to handle). Very impressive. If you've ever wanted to generate your own ebook, or edit one you already have, you should definitely check out Sigil.

[ 11:17 Mar 30, 2011    More tech | permalink to this entry | ]

Sun, 27 Mar 2011

Automated mail: check the plaintext part (or don't send one)

Funny thing happened last week.

I'm on the mailing list for a volunteer group. Round about last December, I started getting emails every few weeks congratulating me on RSVPing for the annual picnic meeting on October 17.

This being well past October, when the meeting apparently occurred -- and considering I'd never heard of the meeting before, let alone RSVPed for it -- I couldn't figure out why I kept getting these notices.

After about the third time I got the same notice, I tried replying, telling them there must be something wrong with their mailer. I never got a reply, and a few weeks later I got another copy of the message about the October meeting.

I continued sending replies, getting nothing in return -- until last week, when I got a nice apologetic note from someone in the organization, and an explanation of what had happened. And the explanation made me laugh.

Seems their automated email system sends messages as multipart, both HTML and plaintext. Many user mailers do that; if you haven't explicitly set yours to do otherwise, you're probably sending out two copies of every mail you send, one in HTML and one in plain text.
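If you haven't seen one, a multipart/alternative message is just the same body packed in twice, something like this (the boundary string and charsets will vary):

Content-Type: multipart/alternative; boundary="xyzzy"

--xyzzy
Content-Type: text/plain; charset=utf-8

The plain text version of the message.

--xyzzy
Content-Type: text/html; charset=utf-8

<p>The HTML version of the same message.</p>

--xyzzy--

The receiving mailer picks whichever part it prefers (or whichever you tell it to prefer).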

But in this automated system, the plaintext part was broken. When it sent out new messages in HTML format, it apparently attached the same old message -- the one about the October meeting -- as the plaintext part every time. Evidently no one in the organization had ever checked the configuration, or looked at the plaintext part, to realize it was broken. They probably didn't even know it was sending out multiple formats.

I have my mailer configured to show me plaintext in preference to HTML. Even if I didn't use a text mailer (mutt), I'd still use that setting -- Thunderbird, Apple Mail, Claws and many other mailers offer it. It protects you from lots of scams and phishing attacks, "web bugs" that track you, and people who think it's the height of style to send mail in blinking yellow comic sans on a red plaid background.

And reading the plaintext messages from this organization, I'd never noticed that the message had an HTML part, or thought to look at it to see if it was different.

It's not the first time I've seen automated mailers send multipart mail with the text part broken. An astronomy club I used to belong to set up a new website last year, and now all their meeting notices, which used to come in plaintext over a Yahoo groups mailing list, have a text part that looks like this actual example from a few days ago:

Subject: Members' Night at the Monthly Meeting
<p>&#60;&#115;&#116;&#121;&#108;&#101;&#32;&#116;&#121;&#112;&#101;&#61;&#34;&#1
16;&#101;&#120;&#116;&#47;&#99;&#115;&#115;&#34;&#62;@font-face {
  font-family: "MS 明朝";
}@font-face {
  font-family: "MS 明朝";
}@font-face {
  font-family: "Cambria";
}p.MsoNormal, li.MsoNormal, div.MsoNormal { margin: 0in 0in 0.0001pt; font-size:
12pt; font-family: Cambria; }a:link, span.MsoHyperlink { color: blue;
text-decoration: underline; }a:visited, span.MsoHyperlinkFollowed { color:
purple; text-decoration: underline; }.MsoChpDefault { font-family: Cambria;
}div.WordSection1 { page: WordSection1;
}&#60;&#47;&#115;&#116;&#121;&#108;&#101;&#62;
<p class="MsoNormal">Friday April 8<sup>th</sup> is members’ night at the
monthly meeting of the PAS.<span style="">&#160; </span>We are asking for
anyone, who has astronomical photographs that they would like to share, to
present them at the meeting.<span style="">&#160; </span>Each presenter will
have about 15 minutes to present and discuss his pictures.<span style=""> We
already have some presenters. &#160; </span></p>
<p class="MsoNormal">&#160;</p>
... on and on for pages full of HTML tags and no line breaks. I contacted the webmaster, but he was just using packaged software and didn't seem to grok that the software was broken and was sending HTML for the plaintext part as well as for the HTML part. His response was fairly typical: "It looks fine to me". I eventually gave up even trying to read their meeting announcements, and now I just delete them.

The silly thing about this is that I can read HTML mail just fine, if they'd just send HTML mail. What causes the problem is these automated systems that insist on sending both HTML and plaintext, but then get the plaintext part wrong. You'll see it on a lot of spam, too, where the plaintext portion says something like "Get a better mailer" (why? so I can see your phishing attack in all its glory?).

Folks, if you're setting up an automated email system, just pick one format and send it. Don't configure it to send multiple formats unless you're willing to test that all the formats actually work.

And developers, if you're writing an automated email system: don't use MIME multipart/alternative by default unless you're actually sending the same message in different formats. And if you must use multipart ... test it. Because your users, the administrators deploying your system for their organizations, won't know how to.

[ 14:19 Mar 27, 2011    More tech/email | permalink to this entry | ]

Thu, 24 Mar 2011

How to use Bitlbee for Twitter

I've been using Bitlbee to follow Twitter from my IRC client (xchat) for many months now. I love it -- it's a great interface, really easy to use.

But every now and then I have to install it on a new machine, and I remember its one flaw: it has no documentation to speak of. What docs there are cover only pieces of the puzzle, and nobody covers basics like "How do I connect in the first place?"

So here's mine.

First, install bitlbee. The download page has tarballs, but if you're on Ubuntu or Debian, the easiest way is to add the bitlbee repository to your sources.list.

Once bitlbee is installed (the server should start automatically), it will run an IRC server on port 6667. So connect your IRC client to localhost/6667.
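In xchat, for example, that's just:

/server localhost 6667

Most other clients have an equivalent command or a server dialog.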

In the bitlbee server window that comes up, type this: register passwd
This will be your bitlbee password. It isn't related to your Twitter password.

Set your IRC client to send identify passwd automatically when it connects, so you don't have to type the bitlbee password every time.

Tell Bitlbee your Twitter account handle: account add twitter your-twitter-handle passwd
The password is just a placeholder; it doesn't have to match the one you just set up for bitlbee.

Then enable it: account on
Bitlbee should print:

<root> twitter - Logging in: Connecting
<root> twitter - Logging in: Requesting OAuth request token

Before long, you should see a new channel called twitter_, with a long URL. Paste this URL into your browser to authenticate. You'll have to log in with your Twitter handle and password.

Twitter will give you a code number. Paste this back into the Bitlbee twitter_ channel.

That should be all you need! Bitlbee should now log in to Twitter and give you statuses in a #twitter channel.
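To recap, the whole exchange in the bitlbee control window is just three lines -- where the first passwd is the bitlbee password you registered, and the second is a throwaway placeholder:

register passwd
account add twitter your-twitter-handle passwd
account on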

(Slightly updated from initial post to clarify the two passwords -- thanks pleia2 and wilmer.)

[ 17:19 Mar 24, 2011    More tech | permalink to this entry | ]

Tue, 22 Mar 2011

Installing Debian Squeeze

Over the weekend I tried installing Debian's new release, "Squeeze", on my Vaio TX650 laptop.

I used a "net install" CD, the one that installs only the bare minimum then goes to the net for anything else. I used Expert mode, because I needed to set a static IP address and keep it from overwriting my grub configuration.

Most of the install went smoothly -- until I got to the last big step near the end, "Select and install software", where it froze at 1%.

A little web searching (on another machine) gave me the hint that the Debian installer prints a log on the fourth console, Ctrl-Alt-F4. Checking that log made the problem clear: aptitude was complaining about packages without a proper GPG signature, and prompting the user to type Yes to continue without verifying signatures. But since it was running inside the installer, there was no place to type Yes -- the Ctrl-Alt-F4 console merely displays messages, it doesn't accept input, and the installer offers no way to answer aptitude's prompt.

Fortunately, "Select and install software" isn't crucial to the net install process. I don't actually know what software it would have installed -- it never asked me to choose any -- but without it, you should still have a working minimal Debian on the disk. So I made another console on Ctrl-Alt-F2, ran ps aux, found that aptitude was the highest numbered process running, and killed it. Upon returning to the installer (Ctrl-Alt-F1), I was able to skip "Select and install software", finish the install process and reboot.

Upon rebooting, I logged in as root and ran apt-get update. It complained about GPG errors; but now I could do something about it. I ran apt-get upgrade and confirmed that I wanted to proceed even without verifying package signatures. When that was over, the problem was fixed: a subsequent apt-get update ran without errors.
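In other words, the fix boiled down to running, as root:

apt-get update       # complains about missing GPG keys
apt-get upgrade      # answer Yes to installing packages without verification
apt-get update       # now runs cleanly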

This ISO was downloaded (from the kernel.org mirror, I believe) a few days after the official release. I'm told that Debian changes the keys at the last minute before a release; perhaps the new keys don't make it into the ISO images on all the mirrors. Or maybe they just messed up with the Squeeze release.

Anyway, it was fairly easily solved, but seemed like a disappointing and silly problem. A web search found lots of people hitting this problem; it's a shame that the installer can't run aptitude in a mode where it won't prompt and hang up the whole install.

Alas, it's probably all academic anyway, since suspend/resume doesn't work. It freezes on resume, with a black screen -- another common Debian problem, judging by what I see on the net. I'm a bit surprised, since every other distro I've tried has suspended the Vaio beautifully. But after hours of messing with it over the weekend, I ran out of time and conceded defeat.

[ 22:49 Mar 22, 2011    More linux/install | permalink to this entry | ]

Sun, 20 Mar 2011

Making CapsLock equal Control in Debian Squeeze

It's time for another installment of "Where have the control/capslock adjustments migrated to?" This time it's for the latest Debian release, "Squeeze".

Ever since they stopped making keyboards with the control key to the left of the A, I've remapped my CapsLock key to be another Control key. I never need CapsLock, but I use Control constantly all day while editing text. Some people prefer to swap Control and CapsLock.

But the right way to do that changes periodically. For the last few years, since Ubuntu Intrepid, you could set XKbOptions for Control and Capslock in /etc/default/console-setup. But that no longer works in Squeeze.

It turns out Squeeze introduced a new file, /etc/default/keyboard, so any keyboard options you previously had in console-setup need to move to keyboard. For me, that's these lines:

XKBMODEL="pc104"
XKBLAYOUT="us"
XKBVARIANT=""
XKBOPTIONS="ctrl:nocaps,compose:menu,terminate:ctrl_alt_bksp"
though I suspect only the last line matters.

This wasn't well covered on the web. There aren't many howtos covering Squeeze yet, but I found the hint I needed in a terse factoid from the Debian IRC bot. The capslock factoid says:

For console-setup, append ",ctrl:nocaps" to the value of XKBOPTIONS within /etc/default/console-setup (/etc/default/keyboard on Squeeze).

That factoid assumes you already have XKBOPTIONS set; as shipped, it's empty, so skip that initial comma.
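So if yours started out empty, the relevant line in /etc/default/keyboard ends up as simply:

XKBOPTIONS="ctrl:nocaps"

(or ctrl:swapcaps if you'd rather swap the two keys than lose CapsLock entirely).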

I was going to conclude with a link to the documentation on XKBOPTIONS, or XKbOptions as it was capitalized in xorg.conf ... but there doesn't seem to be any. It's not in any of the Xorg man pages like xorg.conf(5) where I expected to find it; nor can I find anything on the web beyond howtos like this one from people who have figured out a few specific options. Anyone know?

[ 12:54 Mar 20, 2011    More linux/install | permalink to this entry | ]

Fri, 18 Mar 2011

Finding Twitter references to you

Twitter is a bit frustrating when you try to have conversations there. You say something, then an hour later, someone replies to you (by making a tweet that includes your Twitter @handle). If you're away from your computer, or don't happen to be watching it with an eagle eye right then -- that's it, you'll never see it again. Some Twitter programs alert you to @ references even if they're old, but many programs don't.

Wouldn't it be nice if you could be notified regularly if anyone replied to your tweets, or mentioned you?

Happily, you can. The Twitter API is fairly simple; I wrote a Python function a while back to do searches in my Twitter app "twit", based on a code snippet I originally cribbed from Gwibber. But if you take out all the user interface code from twit and use just the simple JSON code, you get a nice short app. The full script is here: twitref, but the essence of it is this:

import sys, simplejson, urllib, urllib2

def get_search_data(query):
    # Fetch search results from Twitter's JSON search API
    # and parse them into a Python dictionary.
    s = simplejson.loads(urllib2.urlopen(
            urllib2.Request("http://search.twitter.com/search.json",
                            urllib.urlencode({"q": query}))).read())
    return s

def json_search(query):
    # Yield each tweet in the "results" list, one at a time.
    for data in get_search_data(query)["results"]:
        yield data

if __name__ == "__main__" :
    # Treat each command-line argument as a separate search term.
    for searchterm in sys.argv[1:] :
        print "**** Tweets containing", searchterm
        statuses = json_search(searchterm)
        for st in statuses :
            print st['created_at']
            print "<%s> %s" % (st['from_user'], st['text'])
            print ""

You can run twitref @yourname from the commandline now and then. You can even call it as a cron job and mail yourself the output, if you want to make sure you see replies. Of course, you can use it to search for other patterns too, like twitref #vss or twitref #scale9x.
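For instance, a crontab line along these lines -- the path and addresses are made up, so adjust them for your system -- would mail you any mentions every morning at 8:

0 8 * * * /home/yourname/bin/twitref @yourname | mail -s "twitter mentions" you@example.com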

You'll need the simplejson Python library, which most distros offer as a package; on Ubuntu, install python-simplejson.

It's unclear how long any of this will continue to be supported, since Twitter recently announced that they disapprove of third-party apps using their API. Oh, well ... if Twitter stops allowing outside apps, I'm not sure how interested I'll be in continuing to use it.

On the other hand, their original announcement on Google Groups seems to have been removed -- I was going to link to it here and discovered it was no longer there. So maybe Twitter is listening to the outcry and re-thinking their position.

[ 10:53 Mar 18, 2011    More programming | permalink to this entry | ]

Tue, 15 Mar 2011

Using grep to solve another Cartalk puzzler

It's another episode of "How to use Linux to figure out CarTalk puzzlers"! This time you don't even need any programming.

Last week's puzzler was A Seven-Letter Vacation Curiosity. Basically, one couple hiking in Northern California and another couple carousing in Florida both see something described by a seven-letter word containing all five vowels -- but the two things they saw were very different. What's the word?

That's an easy one to solve using basic Linux command-line skills -- assuming the word is in the standard dictionary. If it's some esoteric word, all bets are off. But let's try it and see. It's a good beginning exercise in regular expressions and how to use the command line.

There's a handy word list in /usr/share/dict/words, one word per line. Depending on what packages you have installed, you may have bigger dictionaries handy, but you can usually count on /usr/share/dict/words being there on any Linux system. Some older Unix systems may have it in /usr/dict/words instead.

We need a way to choose all seven letter words. That's easy. In a regular expression, . (a dot) matches one letter. So ....... (seven dots) matches any seven letters.

(There's a more direct way to do that: the expression .\{7\} will also match 7 letters, and is really a better way. But personally, I find it harder both to remember and to type than the seven dots. Still, if you ever need to match 43 characters, or 114, it's good to know the "right" syntax.)
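For instance, these two commands match exactly the same lines:

grep '.......' /usr/share/dict/words
grep '.\{7\}' /usr/share/dict/words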

Fine, but if you grep ....... /usr/share/dict/words you get a list of words with seven or more letters. See why? It's because grep prints any line where it finds a match -- and a word with nine letters certainly contains seven letters within it.

The pattern you need to search for is '^.......$' -- the up-caret ^ matches the beginning of a line, and the dollar sign $ matches the end. Put single quotes around the pattern so the shell won't try to interpret the caret or dollar sign as special characters. (When in doubt, it's always safest to put single quotes around grep patterns.)

So now we can view all seven-letter words: grep '^.......$' /usr/share/dict/words
How do we choose only the ones that contain all the letters a e i o and u?

That's easy enough to build up using pipelines, using the pipe character | to pipe the output of one grep into a different grep. grep '^.......$' /usr/share/dict/words | grep a sends that list of 7-letter words through another grep command to make sure you only see words containing an a.

Now tack a grep for each of the other letters on the end, the same way:
grep '^.......$' /usr/share/dict/words | grep a | grep e | grep i | grep o | grep u

Voilà! I won't spoil the puzzler, but there are two words that match, and one of them is obviously the answer.

The power of the Unix command line to the rescue!

[ 11:00 Mar 15, 2011    More linux/cmdline | permalink to this entry | ]

Sun, 13 Mar 2011

Measuring Mars Methane -- Messy!

I write a monthly column for the San Jose Astronomical Association. Usually I don't reprint the columns here, but last month's column, Worlds of Controversy, discussed several recently controversial topics in planetary science.

One of the topics was the issue of methane on Mars -- or lack thereof. We've all read the articles about how the measurements of Mars methane point to possible signs of life, woohoo! But none of the articles cover the problems with those measurements, as described in a recent paper by Kevin Zahnle, Richard S. Freedman and David C. Catling: Is there methane on Mars?

Lack of life on Mars isn't sexy, I guess; The Economist was the only mainstream publication covering Kevin's paper, in an excellent article, Methane on Mars: Now you don't...

Here's the short summary from my column last month:

I'm sure you've seen articles on Martian methane. Methane doesn't last long in the atmosphere -- only a few hundred years -- so if it's there, it's being replenished somehow. On Earth, one of the most common ways to produce methane is through biological processes. Life on Mars! Whoopee! So everyone wants to see methane on Mars, and it makes for great headlines.

The problem, according to Kevin, is that the Mars measurements show changes on a scale much shorter than hundreds of years: they fluctuate on a seasonal basis. That's tough to explain. Known atmospheric oxidation processes wouldn't get rid of methane fast enough, so you'd need to invent some even more exotic process -- perhaps methane-eating bacteria in the Martian soil? -- to account for the drops.

Worse, the measurements showing methane aren't very reliable. The evidence is spectroscopic: methane absorbs light at several fixed wavelengths, so you can measure methane by looking for its absorption lines.

But any Earth-based measurement of Martian methane has to cope with the fact that Earth's atmosphere has far more methane than Mars. How do you separate possible Mars methane absorption lines from Terran ones? There's one clever way: you can measure Mars at quadrature, when it's coming toward us or going away from us, and any methane spectral lines would be red- or blue-shifted compared to the Terran ones. But then the lines overlap with other absorption lines from Earth's atmosphere. It's very difficult to get a reliable measurement. Of course, a measurement from space would avoid those problems, so the spectrograph on the ESA Mars orbiter has been pressed into service. But there are questions about its accuracy.
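(To put rough numbers on it: the Doppler shift is Δλ/λ = v/c, and the Earth-Mars relative radial velocity is at best of order 15 km/s, so Δλ/λ ≈ 15 / 300,000 ≈ 5×10⁻⁵. A methane line near 3.3 µm moves by only about 0.00017 µm -- enough to separate it from the corresponding Terran methane line, but not enough to get it clear of Earth's other absorption lines.)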

The published evidence so far for Martian methane just isn't convincing, especially with those unlikely seasonal fluctuations. That doesn't mean there's no methane there; it means we need better data. The next Mars Rover, dubbed "Curiosity", will include a laser spectrometer which can give us much more accurate methane measurements. Curiosity is set to launch this fall and arrive at Mars in August of next year.

It gets worse: the kapton tape issue

But it gets worse. That Curiosity rover whose sensitive equipment is going to answer the question for us? Well, check out an article in Wired last week: Space Duct Tape Could Confuse Mars Rover.

It seems the materials used to build Curiosity, notably the kapton tape used in large quantities to hold the rover together ... emit methane! Andrew C. Schuerger, Christian Clausen and Daniel Britt, in Methane Evolution from UV-irradiated Spacecraft Materials under Simulated Martian Conditions: Implications for the Mars Science Laboratory (MSL) Mission (abstract), take a selection of materials used in the rover, plus bacteria that might be expected to contaminate it, and subject them to simulated Mars conditions. They conclude:

... the large amount of kapton tape used on the MSL rover (lower bound estimated at 3 m2) is likely to create a significant source of terrestrial methane contamination during the early part of the mission.

A skeptical eye

So let's sum up:

* We desperately want to see methane on Mars, because it might point to biological processes and that would be cool.
* But we don't currently have any reliable way to measure Martian methane.
* So we build a special mission one of whose primary purposes is to get accurate measurements of Martian methane.
* But we build the probe with materials that will make the measurements unreliable.

It's apparently too late to fix the problem; so instead the response seems to be to shrug and say, well, it might not be so bad if we measure at night, or if we wait a while (how long?) until most of the methane has outgassed. The methane emission from the kapton tape is fairly small -- though it's hard to know exactly how small, since it's impossible to test it in a real Martian environment.

So in a couple of years, when you start seeing news releases trumpeting Curiosity's methane measurements and talking about life on Mars, read them with a skeptical eye.

Maybe Curiosity will see methane levels on Mars so large that they swamp any contamination issues. Maybe not. But we won't be able to tell from the reports we read in the popular press.

[ 12:38 Mar 13, 2011    More science/astro | permalink to this entry | ]

Fri, 11 Mar 2011

Testing blog comments

Based on the offline comments I got, I'm going to try Disqus for blog comments.

Setting up an account was easy, and I think I have the PyBloxsom side working now. So this is a test post to see if comments are working. Feel free to post comments and see! In theory, you should be able to use OpenID, Twitter, Disqus or various other types of accounts.

Comments aren't visible on the blog home page, only on the pages for individual stories.

If you want to try commenting but can't think of anything to say, how about the Japan earthquake? Wow! My heart goes out to everyone affected by the huge quake or the tsunami that followed. Any good links to information about the quake?

[ 17:09 Mar 11, 2011    More blogging | permalink to this entry | ]

Thu, 10 Mar 2011

On Linux Planet: Plotting mail logs with CairoPlot

[Pie chart showing origins of spam]

My latest LinuxPlanet article is on plotting pretty graphs from Python with CairoPlot.

Of course, to demonstrate a graphing package I needed some data. So I decided to plot some stats parsed from my Postfix mail log file. We bounce a lot of mail (mostly spam but some false positives from mis-configured email servers) that comes in with bogus HELO addresses. So I thought I'd take a graphical look at the geographical sources of those messages.
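The article has the real code, including the CairoPlot calls; but just to give the flavor of the parsing half, tallying the rejected senders by top-level domain boils down to something like this. This is a rough sketch, not the code from the article -- the regex assumes Postfix's usual "from host[a.b.c.d]" reject lines, and the log path will differ on your system:

import re

counts = {}
reject = re.compile(r'reject: .*? from (\S+)\[[0-9.]+\]')

for line in open("/var/log/mail.log"):
    m = reject.search(line)
    if not m:
        continue
    host = m.group(1)
    if host == "unknown":                  # no reverse DNS at all
        key = "unknown"
    else:                                  # tally by TLD: .il, .br, .com ...
        key = "." + host.rsplit(".", 1)[-1]
    counts[key] = counts.get(key, 0) + 1

for key in sorted(counts, key=counts.get, reverse=True):
    print key, counts[key]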

The majority were from IPs that weren't identifiable at all -- no reverse DNS info. But after that, the vast majority turned out to be, surprisingly, from .il (Israel) and .br (Brazil).

Surprised me! What fun to get useful and interesting data when I thought I was just looking for samples for an article.

[ 15:08 Mar 10, 2011    More programming | permalink to this entry | ]

Tue, 08 Mar 2011

Blog Comment Services

People periodically ask me why I don't have comments on my blog.

It's not because I don't want to see user discussion -- I'd love that. In particular, several people had opinions on my recent post about locations for the SCALE conference, and I would have loved to see and participate in a discussion on that.

The hold-up is purely technical: my current blogging setup makes it difficult to add them and keep the system maintained.

But lately several services have arisen that apparently make it easy to add comments to otherwise static pages, and I'm considering trying one.

The candidates I know about are:

None of these is perfect. I believe they all require signing up, though the first two can use an OpenID account, and of course a lot of people already have a Facebook account. But it might be a lot better than no comments.

Readers of my blog: do you have a preference, or any experience with any of these services? ("Don't bother adding any of these" is also a valid option, as are suggestions for services I didn't list.) If there's a preference, I'll go with it ... otherwise, I'll probably just pick one and try it.

Please mail me your thoughts or suggestions. Thanks!

[ 22:16 Mar 08, 2011    More blogging | permalink to this entry | ]

Fri, 04 Mar 2011

Thoughts on moving SCALE?

I've been going to SCALE for about four years now, and this year's SCALE9x was as good as ever.

There's really only one aspect of SCALE I'm not wild about: the LAX location.

On the conference's #scale-chat IRC channel Sunday night, some folks were discussing whether it might be possible to move away from LAX. It seemed like a great idea, which I want to examine.

Apparently there's a Marriott at the Burbank airport that handles conferences very well, if being near an airport is important. Other folks have suggested Pasadena, a great conference venue if it's not so important to be right next to an airport.

In Burbank or Pasadena, there would be more space, better and cheaper parking, nice scenery, and options for lunch besides overpriced hotel restaurants and fast food. But there's another factor, too: out-of-towners would come away with a much better impression of LA.

I grew up in the Los Angeles area, and I love going back to visit. But I've lost count of the number of times I've heard "Ugh! I bet you're glad to be out of there!" I always ask how much time they've spent in LA, and where; the answer is invariably, "Not much, just a few days near LAX."

It makes me wince. The area around LAX is one of the most smog-ridden, characterless hives of asphalt in four counties. It's a long way from either culture or nature, it's hard for locals to get there on traffic-choked freeways, and it's difficult and expensive to park. It's not even easy to fly in and out of, last I tried; the smaller airports are much friendlier. But face it: a lot of people never see anything of Los Angeles besides LAX. And those folks go away thinking what a pit LA is -- even if the conference itself was great.

While I was at SCALE, my husband amused himself in Burbank. On Saturday, it snowed (!) and he drove around watching folks having snowball fights and ogling the snow piled on their cars. Sunday dawned clear and beautiful, and he went for a hike in the Verdugo hills, with spectacular views of the snowy San Gabriel mountains, and the resident raven flock practicing aerobatics like snap rolls, inverted and knife-edge flight.

Okay, so you won't see any of that while listening to talks. But in Burbank or Pasadena, you could get out during lunch, walk to a restaurant, see the mountains looming over you, make a Trader Joe's run (I heard more than one attendee asking about the nearest TJs). And parking and hotels would be much cheaper, for those who can't afford to stay at the conference venue.

A reader points out that I forgot to mention there's a Fry's Electronics just across the street from the Burbank Marriott -- geek paradise! Even more important than Trader Joe's!

I know, you're thinking people don't go to computer conferences to walk around outside, or to go to zoos or museums or whatever. But ... don't they? I've sure had fun exploring the attractions of cities like Melbourne or Brussels, hiking with friends in the Blue Mountains near Sydney, visiting Powell's books in Portland, or petting a koala in Hobart before or after Linux conferences. I'm sure I'm not the only one.

Sure, they could rent a car and go driving after the conference. But if all they've seen is LAX, they probably don't even know any of that other stuff is there. LA is just endless freeways and parking lots -- everybody knows that, right?

I know there are lots of arguments for staying at LAX, and I'm sure it's a lot easier for international visitors flying in. But, SCALE organizers, you do such a fantastic job running the conference; please consider some day moving it to a venue that lives up to the rest of the conference.

[ 20:59 Mar 04, 2011    More conferences | permalink to this entry | ]