SIGBUS
Bus error - founder dumped
That was a nice sixteen years.
I needed a machine to do some DNS server tests. I settled on a $280 EEE PC 900A (stripped of its webcam and half of its storage) from Best Buy. That gets me a 1.6 GHz x86 server with 1 GB of RAM that burns only 10 watts and comes with its own little console for when I need it. Not a bad deal.
Only 4 GB of storage, but I'm only using about 60% even with a bunch of heavy eye-candy GNOME and Compiz stuff I installed to see what would happen. (It is pretty fast; a lower-end graphics accelerator driving not many pixels comes out well.)
I wiped the friendly linux it came with and installed Debian Lenny and all is good, except I kept noticing intermittent disk hangs lasting several seconds. I think I finally tracked this down to the kernel syncing out written pages. The fix is to not write so much. By mounting the partitions noatime most of my writes go away and I don’t notice hangs anymore.
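For reference, the change is one mount option per filesystem. A sketch of the relevant /etc/fstab line (the device and filesystem type here are assumptions from a typical install; adjust to yours):

```
# noatime stops the access-time writes that were triggering the sync stalls
/dev/sda1  /  ext3  defaults,noatime,errors=remount-ro  0  1
```

You can try the effect without rebooting with `mount -o remount,noatime /`.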
Reading the first byte of every file in /usr went from 131 seconds to 92 seconds with the change (after a fresh boot each time); that is about a 30% speedup.
I’m pleased with the EEE. My code builds from clean in 1.6 seconds. I rarely use more than 10% of the RAM doing development which leaves plenty of RAM for caches to mitigate the slow flash disk.
At last, I can put my /boot partition in LVM.
aptitude install grub-pc

(Note: this will remove the old grub package and offer to chain load grub2 from your existing grub. Do this. If you have problems you can still boot the old way.)

Once you are happy with the chain-loaded boot, run upgrade-from-grub-legacy to install grub2 for real. You can dd zeros onto your old boot partition to make sure you aren't deluding yourself.

Set GRUB_PRELOAD_MODULES=lvm in /etc/default/grub, then run update-grub and a grub-install /dev/sda or whatever your disk is. The grub-install matters... just update-grub is not enough to pick up the lvm module.

I am left wondering what silliness led to GRUB 2 being version 1.96, but I am happy.
GRUB loading.
Welcome to GRUB!
error: file not found.
Entering rescue mode...
grub rescue>
Little known and not terribly documented, but true:
You can tell your Debian machine to do an apt update and download any files you might need by adding two lines to your apt config (which I'll bet you didn't know you had).
cat > /etc/apt/apt.conf.d/50autoupdate <<EOF
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
EOF
Then the script /etc/cron.daily/apt will keep you all up to date, saving you seconds to minutes every time you decide to upgrade.
Caching common reports from a database inside the database itself is surprisingly easy and makes a huge difference for Ajax-style web pages.
I have several systems that record data at regular intervals. For the sake of example let us consider a weather station which reports temperature, humidity, pressure, and wind every 10 minutes. If I want to graph this data for a three day period, I have to query out 432 of these rows and send them down to my browser. Unfortunately this is not fast enough.
Step 1: Get the browser cache working for me.
If I break the request into midnight-aligned 24 hour periods, then I can cache the result for any completed day. This way I only need to pull new days of data.
This helps, but it turns out I don’t revisit days often during a session.
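The splitting itself is simple. Here is a minimal Python sketch of the idea (the function name is mine, not from the actual site):

```python
from datetime import datetime, timedelta

def day_buckets(start: datetime, end: datetime):
    """Split [start, end) into midnight-aligned pieces.

    Completed days get stable cache keys; only the partial
    current day has to be fetched fresh each time."""
    buckets = []
    cursor = start
    while cursor < end:
        next_midnight = datetime.combine(cursor.date() + timedelta(days=1),
                                         datetime.min.time())
        buckets.append((cursor, min(next_midnight, end)))
        cursor = next_midnight
    return buckets

# A three-day window starting mid-day splits into four pieces:
# a partial first day, two whole cacheable days, and a partial last day.
pieces = day_buckets(datetime(2009, 3, 1, 14, 30), datetime(2009, 3, 4, 14, 30))
```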
Step 2: Tune the database indices.
Fail miserably. It turns out SQLite does my query faster without indices, so I took them off completely. (Sequential read on my virtual server is much faster than random.)
Step 3: Server side caching. In the database.
Now we get to the meat. I can cache pre-compressed reports for each of my daily periods. There are a couple of wrinkles though. I need some way to invalidate a report when the underlying data changes. (Sometimes some observations can be delayed and trickle in later.) I can’t think of a good way to have the database delete a cache file, so instead I store the cached copies in the database.
This turns out to be surprisingly clean to code, and a simple set of triggers on the underlying data can remove any affected report.
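As a sketch of the scheme (table and column names are invented for illustration; the real system stores pre-compressed report blobs):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE observations (day TEXT, taken_at TEXT, temperature REAL);
    CREATE TABLE report_cache (day TEXT PRIMARY KEY, body BLOB);

    -- Any change to a day's observations throws away that day's cached report.
    CREATE TRIGGER obs_ins AFTER INSERT ON observations
        BEGIN DELETE FROM report_cache WHERE day = NEW.day; END;
    CREATE TRIGGER obs_upd AFTER UPDATE ON observations
        BEGIN DELETE FROM report_cache WHERE day IN (OLD.day, NEW.day); END;
    CREATE TRIGGER obs_del AFTER DELETE ON observations
        BEGIN DELETE FROM report_cache WHERE day = OLD.day; END;
""")

db.execute("INSERT INTO report_cache VALUES ('2009-03-01', x'00')")
# A late observation trickles in for a day we already rendered...
db.execute("INSERT INTO observations VALUES ('2009-03-01', '23:50', -2.5)")
# ...and the stale cached report is gone.
stale = db.execute("SELECT count(*) FROM report_cache").fetchone()[0]
```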
It would be more efficient to keep the cached reports as files, or better yet in an httpd cache, but then they could not be invalidated by the database.
Step 4: Clean up the little PHP wrapper and stick it here.
I’ll get to that.
Certain corrupt JPEG files will explode Quicktime Player if it encounters them in an “Import Image Sequence…” operation. This bug has been present for many years and shows no sign of going away.
If you are archiving images from cheap webcams you will encounter these.
I am not aware of anything included with Mac OS X Leopard that validates a JPEG file, so I built jpeginfo for Leopard.
With that, you can do things like…

for v in *.jpg ; do jpeginfo -c $v || rm $v ; done

… to delete all the corrupted JPEGs.
You can find the sources at http://www.iki.fi/tjko/projects.html. You will also need to build libjpeg, from ftp://ftp.uu.net/graphics/jpeg/jpegsrc.v6b.tar.gz
Build libjpeg first (./configure ; make), then build jpeginfo (./configure --with-libjpeg=THE-RIGHT-DIRECTORY ; make).
But you don’t need to do that… I’ve attached the copy I built.
Safari is odd with contentEditable divs. It doesn’t assume you will have a pre-formatted div as a container so it puts each line into its own div… sometimes marked with code class, sometimes not.
That makes a mess but is tolerable, until my HTML sanitizer burps out the clean version with newlines between the divs and causes accidental double spacing.
I could rewrite the ->saveHTML() method of the PHP DOMDocument to not put newlines in between divs, but that would give awful looking HTML.
For now I added a function to the sanitizer to remove extra divs from inside code-formatted divs. It tries to be smart about inserting newline characters, but it may not be smart enough.
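A rough illustration of what that cleanup does (a regex-based Python sketch, much simpler than the real DOM-walking PHP sanitizer):

```python
import re

def flatten_editable_divs(html: str) -> str:
    """Collapse Safari's one-div-per-line markup back into
    newline-separated text: boundaries between sibling divs become
    newlines, and the div tags themselves are dropped. A real
    sanitizer would walk the DOM instead of using regexes."""
    html = re.sub(r"</div>\s*<div[^>]*>", "\n", html)  # sibling boundary -> newline
    html = re.sub(r"</?div[^>]*>", "", html)           # strip remaining div tags
    return html

sample = '<div class="code">first line</div><div>second line</div>'
flat = flatten_editable_divs(sample)
```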
Someone should revisit the whole contentEditable thing and specify precisely what is meant by each of the operations.
If you run your own bind server for your domain you can easily support dynamic updates from your machines that have transient IP addresses. bind is capable of many things, but I’ll just show the bits you need…
First, use bind 9.3 or better.
On the server, go edit the file that contains your zone… perhaps it looks like this…

zone "studt.net" {
    type master;
    file "/etc/bind/studt.net";
};
… you are going to need to generate a key for each host that has a transient address, add it to this file, and then tell the zone that machines are allowed to update their own addresses.
dnssec-keygen -a HMAC-MD5 -b 512 -n HOST jimshouse.studt.net
… will generate two files with long names. Keep them; you will need them on the client machines. But look inside the K*.private file and copy out the value of the "Key:" line. You are about to paste it into your zone file.
Add a key for each host that will update its name, and an update-policy to the zone. You may end up with something like this…
key jimshouse.studt.net {
    algorithm HMAC-MD5;
    secret "eQGH–lots-of-more-key-i-left-out==";
};

zone "studt.net" {
    type master;
    notify yes;
    forwarders { };
    file "/etc/bind/studt.net";
    update-policy { grant * self * A TXT; };
};
Restart bind and your server is ready to go. (Yes, you could reload, but there was a bug in 1999 or so and I have never gotten over it.)
Possible Bug: Your bind process has to be able to write “.jnl” files. Debian etch is configured to put them in /etc/bind but the bind user can’t write there. I chmodded /etc/bind to 775 to deal with that. You’ll know you have this problem when your client update fails with a SERVFAIL and you tail your syslog on the server and read the error messages.
Dangerous Note: Before you edit the zone file you have to first stop the dynamic updates so the .jnl file gets merged with the zone file…

rndc freeze studt.net
(edit the studt.net zone file)
rndc unfreeze studt.net
Now for the client side. You could set dhclient.conf to do this automatically, but primitive cave programmers like me can just execute a command. First… copy the K*.private key file for the client to the client machine; you're going to need it. Second… use the nsupdate command to set the name's value. I do it in a script sort of like this…
#!/bin/sh
TTL=600
SERVER=ns.studt.net.
ZONE=studt.net
HOSTNAME=jimshouse.studt.net.
KEYFILE=/path/to/where/you/keep/Kjimshouse.studt.net.+157+26806.private
IP=99.153.198.165

nsupdate -v -k $KEYFILE > /dev/null <<EOF
server $SERVER
zone $ZONE
update delete $HOSTNAME A
update add $HOSTNAME $TTL A $IP
send
EOF
See that TTL? That says 10 minutes, so a computer on the internet might keep using your old address for up to 10 minutes after your address changes. You can adjust that number for your situation. Clients won't like you if you make it too short.
I’d just like to mention that I would feel better if Tyrannosaurus Rex had lived in Europe. 1,000 miles and 65 million years is too close. I want there to be an ocean between them and me as well.
I generally forget some nice packages when I toss up a new Debian machine and then spend too much time trying to remember which ones they are. Now I keep them listed here. Maybe you will like them too.
That attachment is an Etch build of dfu-programmer. I can't test it, but it probably works on Etch.