Jim's Depository

this code is not yet written
 

One person regularly corresponds with me using Hotmail. I’m frequently amused at the odd non-sequiturs at the end of his sometimes delicate email.

Lately though, the Hotmail graffiti ads don’t even make sense.

For example:

Send e-mail anywhere. No map, no compass. Get your Hotmail account now.

Because I need maps and a compass to use gmail? My Safari icon is a compass, should I not use Safari for Hotmail? Or is it a reference to paper mail where I used a map and a compass to post letters? 

Or perhaps:

Send e-mail faster without improving your typing skills. Get your Hotmail account now.

Faster? My bits already move at about light speed, that can’t be it. Perhaps the delay between pressing SEND and when it leaves the sending computer could be a few milliseconds shorter. How many people sit down to deliberately improve their typing speed so they can send email faster? Who, other than spammers, even cares how fast they send email?

Bus error - founder dumped

That was a nice sixteen years.

I needed a machine to do some DNS server tests. I settled on a $280 EEE PC 900A (stripped of webcam and half of its storage) from Best Buy. That gets me a 1.6GHz x86 server with 1GB of RAM that burns only 10 watts and comes with its own little console for when I need it. Not a bad deal.

Only 4GB of storage, but I’m using only about 60% even with a bunch of heavy eye-candy GNOME and Compiz stuff I installed to see what would happen. (It is pretty fast; a lower-end graphics accelerator driving not many pixels comes out well.)

I wiped the friendly Linux it came with and installed Debian Lenny and all was good, except I kept noticing intermittent disk hangs lasting several seconds. I think I finally tracked this down to the kernel syncing out written pages. The fix is to not write so much: by mounting the partitions noatime most of my writes go away and I don’t notice hangs anymore.
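As a sketch, the change is one mount option per filesystem in /etc/fstab; the device, filesystem type, and options here are assumptions from a typical single-disk install, not my exact layout:

```shell
# /etc/fstab — adding noatime stops every read from generating a
# metadata write (the access-time update) back to the slow flash:
#
#   /dev/sda1  /  ext2  noatime,errors=remount-ro  0  1

# Or try it without rebooting:
mount -o remount,noatime /
```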

Reading the first byte of every file in /usr went from 131 seconds to 92 seconds with the change (after a fresh boot each time), that is about a 30% speedup.

I’m pleased with the EEE. My code builds from clean in 1.6 seconds. I rarely use more than 10% of the RAM doing development which leaves plenty of RAM for caches to mitigate the slow flash disk. 

At last, I can put my /boot partition in LVM.

  • Get the Debian box up to Lenny.
  • Note that I accidentally trashed my MBR and had to boot into rescue mode while working out these steps. You shouldn’t do this if you follow all of the instructions, but you ought to have media handy.
  • aptitude install grub-pc (Note: this will remove the old grub package and offer to chain load grub2 from your existing grub. Do this. If you have problems you can still boot.)
  • Verify you can reboot.
  • Remove the old grub MBR and put in the grub2 one with upgrade-from-grub-legacy
  • Hide your /boot/grub/menu.lst so you aren’t tempted to edit it.
  • Your basic configuration, like kernel command-line parameters, is now in /etc/default/grub; there is also /etc/grub.d/*, which I hope never to touch.
  • Move your /boot into the LVM. You could tar up your /boot partition, unmount it, and extract it onto the root partition. You could also make a new LVM managed /boot if you like it on its own partition. I was out of space in the volume groups, so I went with /. If you didn’t make a new /boot, remember to take /boot out of /etc/fstab.
  • dd zeros onto your old boot partition to make sure you aren’t deluding yourself.
  • Edit /etc/default/grub to add GRUB_PRELOAD_MODULES=lvm
  • Go back and make certain you did the previous step. I made an unbootable system before I learned that little tidbit.
  • Do an update-grub and a grub-install /dev/sda or whatever your disk is.
  • Go back and make sure you did the grub-install… Just update-grub is not enough to pick up the lvm module.
  • Reboot and rejoice.
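Condensed into one place, the steps above look roughly like this shell session. It is a sketch of my path, not a script to run blindly; /dev/sda and the tar-based /boot move are assumptions from my setup:

```shell
aptitude install grub-pc            # offers to chain-load GRUB 2 from legacy grub
# ... reboot here and verify the chain load works ...

upgrade-from-grub-legacy            # replace the legacy MBR with GRUB 2
mv /boot/grub/menu.lst /boot/grub/menu.lst.old   # remove temptation to edit it

# Move /boot onto the LVM root (if /boot was its own partition):
tar -C /boot -cf /tmp/boot.tar .
umount /boot                        # also delete the /boot line in /etc/fstab
tar -C /boot -xf /tmp/boot.tar

# The step that makes it actually bootable:
echo 'GRUB_PRELOAD_MODULES=lvm' >> /etc/default/grub
update-grub
grub-install /dev/sda               # or whatever your disk is
```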

I am left wondering what silliness led to GRUB 2 being version 1.96, but I am happy.

You saved me many (more) hours of head pounding with this blog entry.  I am thoroughly grateful.

At some point in the past I managed to screw up my file server's lenny install in such a way that I ended up with the non-lvm ext2 boot partition commented out of fstab and a separate /boot directory on the lvm root.

I forgot about this incident and went about continuing to run apt-get dist-upgrade periodically.  Everything worked until I went to squeeze and rebooted, at which point I made some more poor choices ("Why am I not running the new kernel?  I'll just apt-get remove the old one!") and ended up unable to mount ext2 partitions (while still able to boot from one).

After about eight hours of head scratching I found this page and by following your steps had no trouble upgrading to GRUB 2 which booted the new kernel which fixed all the problems, allowing me to get on with my life (such as it is).

You are awesome and so is GRUB 2.

You should note that grub-pc for lenny is missing part_msdos.mod and will not install; you need to fetch it from backports to get the file.

Thanks for the info though, it helped me to confirm that what I was doing would work before I sent a remote machine through a reboot. (I know, dangerous, but it couldn't be helped.)
I have been trying to do this for 3 whole days and I finally found this post by accident. I have tried to do this with Arch Linux and Fedora and had no success. Even Google yielded no answers until I tried "Debian grub2 lvm /boot" and found this. Even though this is not written for squeeze I managed to take the bottom half and make this work. I even went as far as asking on 2 forums and in IRC channels for multiple OS's, even bugged some friends and got the usual "why would you want to do that?". Sir, I cannot thank you enough. Looks like I'm a Debian user now thanks to blogs like this.
Debian Squeeze, VirtualBox 4.0 from backports.

Debian 6.0.4 installer .iso

6 virtual disk devices
-> one physical raid partition per device
-> one RAID6 md0

One physical volume group
logical volume /
logical volume /home

The system does not manage to boot, even after going into rescue mode and making sure GRUB is both configured and installed with the lvm module preloading (/etc/default/grub -> update-grub -> grub-install).

It all trips up in grub-pc (GRUB2) somehow and I do not know how to debug it.

GRUB loading.
Welcome to GRUB!

error: file not found.
Entering rescue mode...
grub rescue>


Any pointers? Did I miss and mess up any of the rescue mode superhero stuff?
I tried to wave a rubber chicken and just throw every plausibly connected module at it, but it's still a no-go.

I've seen some indirect talk about mdraid + LVM + EXT4 not being a bootable combo yet, but then again I've seen people boast about their mdraid + dmcrypt + LVM + EXT4 / BTRFS setups.

I guess I will just fall back to a good old fashioned EXT2 boot partition.

Here's my rubber chicken waving approach for the internet:

GRUB_PRELOAD_MODULES="search_fs_uuid raid raid5rec raid6rec mdraid lvm ext2 chain pci"

update-grub

grub-install --modules="search_fs_uuid raid raid5rec raid6rec mdraid lvm ext2 chain pci" /dev/sda

Little known and not terribly documented, but true:

You can tell your Debian machine to do an apt update and download any files that you might need by adding two lines to your apt config (which I’ll bet you didn’t know you had).

cat > /etc/apt/apt.conf.d/50autoupdate <<EOF
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
EOF

Then the script /etc/cron.daily/apt will keep you up to date, saving you seconds to minutes every time you decide to upgrade.

Another good one: to use a caching proxy for your packages, add this line to a similar file:

Acquire::http::Proxy "http://YOURHOST:YOURPORT";

This way you don't have to mess with your /etc/apt/sources.list file to make all the proxy changes.
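The same drop-in pattern works for the proxy setting; here is a sketch, where the fragment name 02proxy is my own choice and YOURHOST:YOURPORT is of course a placeholder:

```shell
cat > /etc/apt/apt.conf.d/02proxy <<'EOF'
Acquire::http::Proxy "http://YOURHOST:YOURPORT";
EOF
```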

Caching common reports from a database inside the database itself is surprisingly easy and makes a huge difference for Ajax-style web pages.

I have several systems that record data at regular intervals. For the sake of example let us consider a weather station which reports temperature, humidity, pressure, and wind every 10 minutes. If I want to graph this data for a three-day period, I have to query out 432 of these rows (three days × 144 ten-minute samples per day) and send them down to my browser. Unfortunately this is not fast enough.

Step 1: Get the browser cache working for me.

If I break the request into midnight-aligned 24 hour periods, then I can cache the result for any completed day. This way I only need to pull new days of data.

This helps, but it turns out I don’t revisit days often during a session.
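The Step 1 bucketing can be sketched as a tiny helper (GNU date assumed): each completed UTC day gets a stable key, so its report URL never changes and the browser may cache it indefinitely.

```shell
# Map a Unix timestamp to its midnight-aligned UTC day key.
# Completed days are immutable, so their reports are safely cacheable.
day_key() {
  date -u -d "@$1" +%Y-%m-%d
}

day_key 1257894000   # → 2009-11-10
```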

Step 2: Tune the database indices.

Fail miserably. It turns out SQLite does my query faster without indices, so I took them off completely. (Sequential read on my virtual server is much faster than random.)

Step 3: Server-side caching. In the database.

Now we get to the meat. I can cache pre-compressed reports for each of my daily periods. There are a couple of wrinkles though. I need some way to invalidate a report when the underlying data changes. (Sometimes some observations can be delayed and trickle in later.) I can’t think of a good way to have the database delete a cache file, so instead I store the cached copies in the database.

This turns out to be surprisingly clean to code, and a simple set of triggers on the underlying data can remove any affected report.
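A minimal sketch of the idea using SQLite's command-line shell; the observations and report_cache tables and their columns are invented for illustration, not the actual weather-station schema:

```shell
DB=$(mktemp)
sqlite3 "$DB" <<'EOF'
CREATE TABLE observations (ts INTEGER, temp REAL);

-- Pre-built (and pre-compressed) daily reports live beside the data.
CREATE TABLE report_cache (day TEXT PRIMARY KEY, body BLOB);

-- A late-arriving observation invalidates that day's cached report.
CREATE TRIGGER obs_insert AFTER INSERT ON observations
BEGIN
  DELETE FROM report_cache WHERE day = date(NEW.ts, 'unixepoch');
END;
EOF
```

Matching triggers on UPDATE and DELETE would cover corrections to already-recorded observations.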

It would be more efficient to keep the cached reports as files, or better yet in an httpd cache, but then they could not be invalidated by the database.

Step 4: Clean up the little PHP wrapper and stick it here.

I’ll get to that.

Certain corrupt JPEG files will explode Quicktime Player if it encounters them in an “Import Image Sequence…” operation. This bug has been present for many years and shows no sign of going away.

If you are archiving images from cheap webcams you will encounter these.

I am not aware of anything included with Mac OS X Leopard that validates a JPEG file, so I built jpeginfo for Leopard. 

With that, you can do things like… for v in *.jpg; do jpeginfo -c "$v" || rm "$v"; done …to delete all the corrupted JPEGs.

You can find the sources at http://www.iki.fi/tjko/projects.html. You will also need to build libjpeg, from ftp://ftp.uu.net/graphics/jpeg/jpegsrc.v6b.tar.gz

Build libjpeg first (./configure ; make), then build jpeginfo (./configure --with-libjpeg=THE-RIGHT_DIRECTORY ; make).

But you don’t need to do that… I’ve attached the copy I built.

Attachments

jpeginfo 103388 bytes
I might mention that there are still some JPEGs that will crash the QuickTime encoder even though they are valid according to jpeginfo, but it catches most of them.
I found a much simpler solution, there is a tool 'Corrupt JPEG Checker' for Mac OSX.
It can be found here: http://www.failedinsider.com/corruptjpegcheckermacosx/ or in the mac app store.

Saved me a lot of time.

Unfortunately jpeginfo will not catch all of them; the tool above catches the rest.

Safari is odd with contentEditable divs. It doesn’t assume you will have a pre-formatted div as a container so it puts each line into its own div… sometimes marked with code class, sometimes not.

That makes a mess but is tolerable, until my HTML sanitizer burps out the clean version with newlines between the divs and causes accidental double spacing.

I could rewrite the ->saveHTML() method of the PHP DOMDocument to not put newlines in between divs, but that would give awful looking HTML.

For now I added a function to the sanitizer to remove extra divs from inside code-formatted divs. It tries to be smart about inserting newline characters, but it may not be smart enough.

Someone should revisit the whole contentEditable thing and specify precisely what is meant by all of the operations.

If you run your own bind server for your domain you can easily support dynamic updates from your machines that have transient IP addresses. bind is capable of many things, but I’ll just show the bits you need…

First, use bind 9.3 or better.

On the server, go edit the file that contains your zone… perhaps it looks like this…

zone "studt.net" {
        type master;
        file "/etc/bind/studt.net";
};

… you are going to need to generate a key for each host that has a transient address, add it to this file, and then tell the zone that machines are allowed to update their own addresses.

dnssec-keygen -a HMAC-MD5 -b 512 -n HOST jimshouse.studt.net

… will generate two files with long names. Keep them, you will need them on the client machines. But look inside the K*.private file and copy out the value of the “Key: “ line. You are about to paste it into your zone file.

Add a key for each host that will update its name, and an update-policy to the zone; you may end up looking like this…

key jimshouse.studt.net {
        algorithm HMAC-MD5;
        secret "eQGH–lots-of-more-key-i-left-out==";
};

zone "studt.net" {
        type master;
        notify yes;
        forwarders { };
        file "/etc/bind/studt.net";
        update-policy {
                grant * self * A TXT;
        };
};

Restart your bind server and your server is ready to go. (Yes, you could reload but there was a bug in 1999 or so and I have never gotten over it.)

Possible Bug: Your bind process has to be able to write “.jnl” files. Debian etch is configured to put them in /etc/bind but the bind user can’t write there. I chmodded /etc/bind to 775 to deal with that. You’ll know you have this problem when your client update fails with a SERVFAIL and you tail your syslog on the server and read the error messages.

Dangerous Note: Before you edit the zone file you have to first stop the dynamic updates so the .jnl file gets merged with the zone file:

rndc freeze studt.net
    (edit the studt.net zone file)
rndc unfreeze studt.net
Now for the client side. You could set dhclient.conf to do this automatically, but for primitive cave programmers like me you can just execute a command. First… copy your K*.private key file for the client to the client machine, you’re going to need it. Second… use the nsupdate command to set the name’s value. I do it in a script sort of like this…

#!/bin/sh
TTL=600
SERVER=ns.studt.net.
ZONE=studt.net
HOSTNAME=jimshouse.studt.net.
KEYFILE=/path/to/where/you/keep/Kjimshouse.studt.net.+157+26806.private
IP=99.153.198.165

nsupdate -v -k $KEYFILE > /dev/null << EOF
server $SERVER
zone $ZONE
update delete $HOSTNAME A
update add $HOSTNAME $TTL A $IP
send
EOF

See that TTL? That says 10 minutes, so a computer on the internet might keep using your old address for up to 10 minutes after your address changes. You can adjust that number for your situation. Clients won’t like you if you make it too short.
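To confirm an update actually landed, you can query the master directly; this assumes the server and host names from my setup, so substitute your own:

```shell
# Ask the authoritative server for the record, bypassing any caches.
dig +short @ns.studt.net jimshouse.studt.net A
```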

I’d just like to mention that I would feel better if Tyrannosaurus Rex had lived in Europe. 1,000 miles and 65 million years is too close. I want there to be an ocean between them and me as well.
