Shallow Thoughts : tags : backups
Akkana's Musings on Open Source Computing and Technology, Science, and Nature.
Sun, 19 Mar 2023
I back up my computer to a local disk (well, several redundant local disks)
using rsync. (I don't particularly trust cloud providers,
and in any case our internet connection is very slow, especially for upload,
so waiting hours while the entire contents of my disk uploads isn't appealing.)
To save space and time, I have a script that includes a list of files
and directories I don't need to back up: browser cache directories,
object files, build directories, generated files like thumbnails,
large video files, downloaded source, and so on.
I also have a list of files I do want to back up even though
they'd otherwise be excluded. For instance, I sometimes have local changes
in my GIMP source directory, outsrc/gimp-master/gimp/, even
though most of outsrc doesn't need to be backed up.
Or /blog/tags/build in my local mirror of the shallowsky
website, even though I have a rule that says directories named
build shouldn't usually be backed up.
I've been using rsync's --include and --exclude to handle this.
But I discovered yesterday that I'd been using them wrong, and some
things I thought were getting backed up, weren't.
It took some reading and experimenting before I figured out how
these rsync flags actually work — which doesn't seem to be
well explained anywhere.
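To give the flavor (a sketch with a made-up source and destination, using the examples above; my real script has many more rules): rsync reads its filter rules in order and the first match wins, so the includes for a directory and its parents have to come before the exclude that would otherwise swallow them:
rsync -av \
    --include='outsrc/' \
    --include='outsrc/gimp-master/' \
    --include='outsrc/gimp-master/gimp/***' \
    --exclude='outsrc/**' \
    ~/ /backup/
(The *** is rsync's pattern for a directory plus everything inside it.)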
Tags: backups, linux, cmdline, python
[ 16:11 Mar 19, 2023 | More linux/cmdline | permalink to this entry ]
Sun, 08 Mar 2009
USB flash drives and SD cards are getting so big -- you can really
carry a lot of stuff around on them. Like a backup of your mail
directory, your dot files, your website, stuff you'd really rather
not lose. You can even slip an SD card into your wallet.
But that convenient small size also means it's easy to lose your
USB key, or leave it somewhere. Someone could pick it up and have
access to anything you put there. Wouldn't it be nice to encrypt it?
There are lots of howtos on the net, like this and this,
but most of them are out of date and need a bit of fiddling to get
working. So here's one that works on Ubuntu hardy and a 2.6.28 kernel.
I'll assume that the drive is on /dev/sdb.
First, consider making two partitions on the flash drive.
The first partition is a normal unencrypted vfat partition, so you can
use it to transfer files to other machines, even Windows and Mac.
The second partition will be your encrypted file system.
Use fdisk, gparted or your favorite partitioning
tool to make the partitions, then create a filesystem on the first:
mkfs.vfat /dev/sdb1
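If you'd rather script the partitioning than do it interactively, something like this should work before the mkfs.vfat step (a sketch only -- the 1G split is arbitrary, and triple-check the device name first):
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary fat32 1MiB 1GiB
parted -s /dev/sdb mkpart primary ext2 1GiB 100%
(parted's filesystem argument is just a partition-type hint; the actual filesystems come from the mkfs commands.)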
Update:
On many distros you'll probably need to load the "cryptoloop" module
before proceeding with the following steps. Load it like this (as root):
modprobe cryptoloop
Easy, moderate security version
Now create the encrypted partition (you'll need to be root).
I'll start with a relatively low security version -- good enough to
keep out most casual snoops who happen to pick up your flash drive.
losetup -e aes /dev/loop0 /dev/sdb2
[type a password or pass phrase]
mkfs.ext2 /dev/loop0
losetup -d /dev/loop0
I used ext2 as the filesystem type. I want a Linux filesystem, not a
Windows one, so I can store dot-filenames like .gnupg/ and so I can
make symbolic links. I chose ext2 rather
than a journalled filesystem like ext3 because of kernel configuration
warnings: to get encrypted mounts working I had to enable
loopback and cryptoloop support under drivers/block
devices, and the cryptoloop help says:
WARNING: This device is not safe for journaled file systems like
ext3 or Reiserfs. Please use the Device Mapper crypto module
instead, which can be configured to be on-disk compatible with the
cryptoloop device.
I hunted around a bit for this "device mapper crypto module" and
couldn't figure out where or what it was (I wish kernel help files
would give either the path to the option, or the CONFIG_ name in
.config!) so I decided I'd better stick with non-journalled
filesystems for the time being.
Now the filesystem is ready to use. Mount it with:
mount -o loop,encryption=aes /dev/sdb2 /mnt
[type your password/phrase]
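When you're done, unmount as usual:
umount /mnt
Since mount set up the loop device itself, it should detach it automatically when you unmount.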
Higher security version
There are several ways you can increase security. Of course, you can
choose other encryption algorithms than aes -- that choice is up to you.
You can also use a random key, rather than a password you type in.
Save the random key to a file on your system (obviously, you'll want
to back that up somewhere else). This has both advantages and
disadvantages: anyone who has access to your computer has the key
and can read your encrypted disk, but on the other hand, someone who
finds your flash drive somewhere will be very, very unlikely to be
able to use it. Set up a random key like this:
dd if=/dev/random of=/etc/loop.key bs=1 count=60
losetup -e aes -p 0 /dev/loop0 /dev/sdb2 < /etc/loop.key
mkfs.ext2 /dev/loop0
(of course, you can make it quite a bit longer than 60 bytes).
(Update: I originally skipped the mkfs.ext2 step. Note it has to come after losetup, once the loop device is attached.)
Then mount it with:
cat /etc/loop.key | mount -p0 -o loop,encryption=aes /dev/sdb2 /crypt
Finally, most file systems write predictable data, like superblock
backups, at predictable places. This can make it easier for someone to
break your encryption. In theory, you can foil this by specifying
an offset so those locations are no longer so predictable.
Add -o 123456 (of course, use your own offset, not that one) to the losetup line, and ,offset=123456 in the options of the mount command.
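Putting that together, the commands would look something like this (a sketch; as I say below, it didn't actually work for me):
losetup -e aes -o 123456 /dev/loop0 /dev/sdb2
mount -o loop,encryption=aes,offset=123456 /dev/sdb2 /mnt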
In practice, though, offset doesn't work for me: I get
ioctl: LOOP_SET_STATUS: Invalid argument
whenever I try to specify an offset. I haven't pursued it;
I'm not trying to hide state secrets, so
I'm more worried about putting off casual snoops and data thieves.
Tags: linux, security, filesystems, backups
[ 15:10 Mar 08, 2009 | More tech/security | permalink to this entry ]
Fri, 30 Nov 2007
I upgraded my system to the latest Ubuntu, "Gutsy Gibbon", recently.
Of course, it's always best
to make a backup before doing a major upgrade. In this case, the goal
was to back up my root partition to another partition on the same
disk and get it working as a bootable Ubuntu, which I could then
upgrade, saving the old partition as a working backup.
I'll describe here a couple of silly snags I hit,
to save you from making the same mistakes.
Linux offers lots of ways to copy filesystems.
I've used tar in the past, with a command like (starting in /gutsy):
tar --one-file-system -cf - / | tar xvf - > /tmp/backup.out
but cp seemed like an easier way, so I wanted to try it.
I mounted my freshly made backup partition as /gutsy and started a
cp -ax /* /gutsy
(-a does the right thing for
permissions, owner and group, and file type; -x tells it to stay
on the original filesystem).
Count to ten, then check what's getting copied.
Whoops! It clearly wasn't staying on the original filesystem.
It turned out my mistake was the /*.
Pretty obvious in hindsight what cp was doing: for each entry in /
it did a cp -ax, staying on the filesystem for that entry, not on
the filesystem for /. So /home, /boot, /proc, etc. all got copied.
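You can see the problem by letting the shell show you the expansion (the listing here is illustrative):
echo cp -ax /* /gutsy
which prints something like cp -ax /bin /boot /dev /home /proc ... /gutsy -- each top-level directory becomes a separate source argument, so -x applies to each of them individually.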
The solution was to remove the *: cp -ax / /gutsy.
But it wasn't quite that simple.
It looked like it was working -- a long wait, then cp finished
and I had what looked like a nice filesystem backup.
I adjusted /gutsy/etc/fstab so that it would point to the right root,
set up a grub entry, and rebooted. Disaster! The boot hung right after
Attached scsi generic sg0 type 0
with no indication of
what was wrong.
Rebooting into the old partition told me that what's supposed to
happen next is: * Setting preliminary keymap...
But the crucial error message was actually
several lines earlier: Warning: unable to open an initial console. It hadn't been able to open /dev/console.
Now, in the newly copied filesystem,
there was no /dev/console: in fact, /dev was empty. Nothing had been
copied because /dev is a virtual file system, created by udev.
But it turns out that the boot process needs some static devices in
/dev, before udev has created anything. Of course, once udev's
virtual filesystem has been mounted on /dev, you can no longer read
whatever was in /dev on the root partition in order to copy it
somewhere else. But udev nicely gives you access to it,
in /dev/.static/dev. So what I needed to do to get my new partition
booting was:
cp -ax /dev/.static/dev/ /gutsy/dev/
With that done, I was able to boot into my new filesystem and upgrade
to Gutsy.
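(If your copy of udev doesn't provide /dev/.static/dev, creating just the device nodes early boot needs might be enough -- a sketch, using the standard Linux major/minor numbers:
mknod -m 600 /gutsy/dev/console c 5 1
mknod -m 666 /gutsy/dev/null c 1 3
)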
Tags: linux, ubuntu, backups
[ 23:48 Nov 30, 2007 | More linux | permalink to this entry ]
Fri, 29 Dec 2006
A friend called me for help with a sysadmin problem they were having
at work. The problem: find all files bigger than one gigabyte, print
all the filenames, add up all the sizes and print the total.
And for some reason (not explained to me) they needed to do this
all in one command line.
This is Unix, so of course it's possible somehow!
The obvious place to start is with the find command,
and man find showed how to find all the 1G+ files:
find / -size +1G
(Turns out that's GNU find syntax, and BSD find, on OS X, doesn't support it. I left it to my friend to check man find for the OS X equivalent of -size +1G.)
But for a problem like this, it's pretty clear we'd need to get find
to execute a program that prints both the filename and the size.
Initially I used ls -ls, but Saz (who was helping on IRC)
pointed out that du on a file also does that, and looks a
bit cleaner. With find's unfortunate syntax, that becomes:
find / -size +1G -exec du "{}" \;
But now we needed awk, to collect and add up all the sizes
while printing just the filenames. A little googling (since I don't
use awk very often) and experimenting led to the final solution:
find / -size +1G -exec du "{}" \; | awk '{print $2; total += $1} END { print "Total is", total}'
Ah, the joys of Unix shell pipelines!
Update: Ed Davies suggested an easier way to do the same thing.
Turns out du will handle it all by itself: du -hc `find . -size +1G`
Thanks, Ed!
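(One caveat for both versions: filenames with spaces will confuse them. With GNU tools, a whitespace-safe variant -- same idea, just NUL-separated names -- would be:
find / -size +1G -print0 | du -hc --files0-from=-
)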
Tags: linux, CLI, shell, backups, pipelines
[ 17:53 Dec 29, 2006 | More linux | permalink to this entry ]