System Administration Bits of Knowledge

I keep learning things about system administration that don't seem to be written down anywhere. So I'm writing them down here for your search-engine to find.


PDFedit complains that the document is read-only

I was pleased to find a PDF editor that would let me make minor changes to documents that show up in PDF. It's called
PDFedit, and it's been packaged for Ubuntu, so it's trivial to install. There's a good tutorial on how to use it in HowtoForge. But when I tried to change some minor text in a PDF document, it complained "ContentStream.saveChange : Document is read-only".

The problem turned out to be that the document was "linearized PDF". It mentioned this in the window title. So under "Tools" I found a "Delinearize PDF" menu item. I picked it, gave it the filename of my document, and a different filename to output the document into, and it made an editable (de-linearized) version of the document. Then I was able to open that editable file, make the simple text change that I'd been trying to make, and save the file. It worked. Counter-intuitive, but it worked. Thank you, PDFedit developers!

Removing the Dell preinstalled Ubuntu EULA

I bought a nice (and cheap!) new Dell Mini 10v from the Dell online store. I was excited to buy my first netbook (other than the OLPC) and my first laptop that came with Linux preinstalled and supported by the manufacturer. It didn't arrive for a month, but when it did and I plugged it in and turned it on, I was shocked to discover that it presented me with an End-User License Agreement (EULA):
  Dell    Dell End User Software License Agreement

  * Before using your computer, please read the Dell End User Software License
    Agreement (DELL EULA) that came with your computer.  To comply with the terms
    and conditions of the DELL EULA, you must consider any CD or diskette set of
    Dell-installed software as BACKUP copies of the software installed on your
    computer's hard-disk drive.

  * If you do not accept the DELL EULA terms, please call the customer assistance
    telephone number listed in your system documentation.

              Press any key on the keyboard to indicate that you have read
                     the DELL EULA and agree to its terms.
Well, that shot the day as far as getting any work done on my new Linux netbook. Of course, I wasn't going to agree to that crap. Free software doesn't need a contract of adhesion to spell out what I can and can't do; it is governed by its own license, which Dell can't expand upon, only limit. And when I got back to where I'd chucked the pamphlet of legalese, I found that there was no particular proposed contract called the "Dell End User Software License Agreement". There was a "U.S. Terms and Conditions of Sale", a "U.S. Retail Purchaser End User Agreement", a "Limited Warranties and Return Policy", a "Dell Software License Agreement", and various other things like a "Dell Software and Peripherals (Canada only)" section and "Third Party Software and Peripherals Products" that were either proposed contracts or sub-parts of other contracts -- it wasn't clear.

In reading some of those proposed contracts, I discovered that they want me to agree not to exercise a bunch of rights that I have with respect to the free software they're selling me -- such as the right to reverse-engineer, decompile, or disassemble it. They also claimed that "This agreement is not for the sale of Software", i.e. they claim they never sold it to me, despite the fact that I paid money for it before ever seeing their proposed contract of adhesion. (They also proposed that "Any open source software provided by Dell can be used according to the terms and conditions of the specific license under which the open source software is distributed." Nice of them. That was in the same proposed legalese that claimed I couldn't reverse-engineer "the Software".)

Well, since I had no intention of agreeing with any of those proposed EULAs, it was time to do the work so I could use my computer despite it. Computers are very programmable and one of the joys of free software is that if you don't like what the software does, you can change it yourself. I didn't like that "power me down or agree to this contract" ultimatum, so I resolved to reprogram the computer to stop doing that.

When I got home, I plugged in a USB DVD drive and the latest Ubuntu live DVD. It booted up without trouble when I told the BIOS to boot from the DVD. It didn't give me any EULA crap. Then I looked at the hard drive. /proc/partitions showed that it had three partitions -- /dev/sda1, sda2, and sda3. I took them in order. /dev/sda1 is a small 24MB FAT (Microsoft-formatted) partition. I mounted it (mount -r /dev/sda1 /mnt/sda1) and looked inside. After a bit of looking at file names and reading the English-language text in the "seal.ini" file, I discovered that this little partition contains both hardware diagnostics and the "Electronic Break the Seal" package, or "EBTS".

The theory of EBTS is apparently that Dell can extend legal precedents that relate to "sealed" products wrapped in plastic. Those old products showed you a proposed contract printed on the box, and if you "broke the seal", removed the plastic and opened the box, the legal theory was that you had "agreed" to the proposed contract. You could use the box as a doorstop or a paperweight without removing the plastic and agreeing to the proposal, of course. Under EBTS, unlike shrink-wrapped plastic, they DON'T actually show you a proposed contract before you have a choice about whether to agree to it, there is no plastic wrapper that requires breaking before you can get at the real product, and there are many more ways to use the product without ever "breaking the seal".

It appears to me that the disk drive's "Master Boot Record" is set up so that it boots that useless little first partition -- not only by default, but always.

The seal.ini control file defines a whole set of possible actions that this program can take. One such action is "SEAL", which apparently does something to the hard drive. Another is "SHOWBMP", which shows a bitmap and waits until a key is pressed. Another section shows the "Service Tag" for the laptop (I think this is some sort of identifier that Dell uses to keep track of your service calls about it). The file controls what order these things are done in.

It occurred to me to wonder what operating system this laptop is running while it executes these BAT scripts and EXE programs. Surely Dell wouldn't be dumb enough to pay Microsoft for a copy of DOS or Windows just in order to put up a EULA. So I looked at "HIMEM.SYS", guessing that it is part of the operating system. It says "PKLITE Copr. 1990-92 PKWARE Inc. All Rights Reserved", but that's just from PKZIP, the compression program. It also says "XMSXXXX0Copyright(C)1992,1996 Caldera,Inc." That is also recognizable, and I think it means this may be from Caldera's "OpenDOS" -- the DR-DOS line that Caldera acquired from Digital Research's successors -- rather than from FreeDOS, the independently written free-software MS-DOS replacement. I haven't quite figured out which it is. If it IS a free DOS, then I'm free to reverse-engineer it.

So then I looked at the second partition (/dev/sda2). It seems to be a 107GB standard Linux partition with Ubuntu 8.04.1, booted with Grub. It also allows "System Restore" to be booted (using the third partition, which is a 1.4GB FAT filesystem).

Oddly, fdisk shows the three partitions like this:

Disk /dev/sda: 120.0 GB
255 heads, 63 sectors/track, 14593 cylinders
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xe611e8fb

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63       48194       24066   de  Dell Utility
/dev/sda2           48195   231496649   115724227+  83  Linux
/dev/sda3       231496650   234436544     1469947+  1c  Hidden W95 FAT32 (LBA)
It seems to me that just by changing the boot flag so that it boots from /dev/sda2, I might well be done. Indeed, I did that, and it worked fine.
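For the record, the "boot flag" is just one byte per partition-table entry in the disk's first sector: 0x80 means bootable, the first entry's flag lives at offset 0x1BE, and each entry is 16 bytes long. Here's a sketch of the flip on a scratch image file -- never aim dd at a real disk like this unless you trust your offsets; fdisk's "a" command does the same job interactively:

```shell
# Demo on a scratch image file, NOT a real disk.
IMG=mbr-demo.img
dd if=/dev/zero of="$IMG" bs=512 count=1 2>/dev/null

# Entry 1's flag byte lives at offset 446 (0x1BE); each table entry
# is 16 bytes, so entry 2's flag is at 462 (0x1CE).  0x80 = bootable.
printf '\200' | dd of="$IMG" bs=1 seek=446 conv=notrunc 2>/dev/null  # sda1 bootable

# Now move the flag from entry 1 to entry 2 -- booting sda2 instead:
printf '\000' | dd of="$IMG" bs=1 seek=446 conv=notrunc 2>/dev/null  # clear sda1
printf '\200' | dd of="$IMG" bs=1 seek=462 conv=notrunc 2>/dev/null  # set sda2

# Inspect both flag bytes: expect 00 for entry 1, 80 for entry 2.
od -An -tx1 -j 446 -N1 "$IMG"
od -An -tx1 -j 462 -N1 "$IMG"
```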

There's another EULA inside the Ubuntu installation itself. Of course, they put it at the end of the interaction (after you've burned some time telling it your timezone and your username and so on). It's a EULA for the Fluendo codec package. This one was a bitch to find. It's in the "oem-config" package, which is all written in Python, so you'd think it would be easy to hack. Eventually I figured out that the text it shows you and prompts you with isn't visible in the code at all. It's in the Ubuntu package installation crud, hiding out in /var/cache/debconf/templates.dat. Replace "Accept and continue" with "Decline contract   " (note the 3 spaces at the end, keeping the two strings the same length). To do this, you'll have to mount the filesystem on /dev/sda2 into your live-DVD environment, then edit the file, unmount, and reboot from the hard drive. And enjoy your "free" software free of the constraints of contract law, limited only by copyright law.
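A sketch of that same-length edit, run on an invented stand-in file rather than debconf's real templates.dat (only the two 19-character strings come from the real file); keeping the replacement exactly as long as the original means the file's size and internal layout are untouched:

```shell
# Invented one-line stand-in for the real debconf template entry.
printf 'Choices: Accept and continue, Decline\n' > templates.dat
cp templates.dat templates.dat.orig

# "Accept and continue" and "Decline contract   " are both 19 characters,
# so nothing else in the file shifts position.
sed -i 's/Accept and continue/Decline contract   /' templates.dat

# Sizes before and after should match exactly.
wc -c < templates.dat.orig
wc -c < templates.dat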

Removing the Firefox-3 EULA

I tried installing Firefox version 3.0 from the Linux tar file that they offer at firefox.com. I wanted to bring it up on my OLPC, as an alternative to the clumsy "Browse" program.

I unpacked the tar file and ran "firefox/firefox". Eek!!! I was shocked when it filled the screen with a EULA (End User License Agreement) and refused to let me continue unless I clicked that I agreed to it.

Well, of course I declined, then went to another machine and searched the web for mentions of this. I found complaints here: http://blogs.gnome.org/hughsie/2008/05/23/firefox-eula/, along with comments from a few weasels from Mozilla (plus Blizzard, hi Chris) defending their decision.

In particular, they kept repeating, "Fedora has a EULA, so why shouldn't we?". That seemed to be the main reason for it.

I have news for them. Fedora doesn't have a EULA any more.

The reason it doesn't is that I complained about it, and made the case to their management, lawyers, and release manager that it should go.

The Fedora Project Leader, Max Spevack, was nice enough to send me a note thanking me for making it go away. Getting rid of it was on his list of battles to fight, but he had had some more pressing things ahead of it. He also pointed out that if you did a text-mode install, it never presented you with the EULA anyway, so if Red Hat's lawyers had been serious about really, really demanding that every copy of Fedora was accompanied by a signed contract with the end user, their releases weren't doing that anyway.

So it's gone now. It's been gone since Fedora 7.

And here it is 2008, and we have the Mozilla Corporation not only going "see, Fedora is doing it", but also saying they're "working with distros to include our EULA as part of the first EULA that is shown to users". They're not only being evil, they're being eeeeevil: encouraging more free software distributions to add EULAs, and to add even more pages of legalese to any that they already have.

Now here's the free-as-in-freedom bit:

Firefox is free software and you are free to modify it, either before or after you install it. I chose to modify it before I installed it. I modified it by removing the EULA. So there is no EULA, no agreement between me and the Mozilla Corporation, no contract. Just the free software. Thank you for the free software.

It would be easier to modify the source, and I'm happy to come up with a source patch for the Mozilla team (or any distro) if they'll install it. But it was more expedient to modify the binary: it was sitting in front of me, and it turned out to be easy.

The EULA in the Firefox 3.0 linux installer is hiding in firefox/chrome/en-US.jar. This is a binary file, something Java-ish, but it contains a lot of visible text. I found the text of the EULA's dialog boxes. I unpacked the tar file, modified Firefox, then started it up, with these commands:

tar xvvjf firefox-3.0.tar.bz2
mv firefox/chrome/en-US.jar en-US.jar.orig
sed -e "s/Terms and conditions/Lack of conditions  /" -e "s/Please read the following/Ignore the following evil/" -e "s/I accept the terms/I reject the terms/" <en-US.jar.orig >firefox/chrome/en-US.jar
firefox/firefox &
Then you can happily agree to reject the terms, click OK, and your browser will be installed.

You can't get too creative with this kind of patch, because the binary file has length fields that get screwed up if you replace one string with a string of a different length. Note the two spaces between the word "conditions" and the following slash and quote. The essence of art is in working within the constraints of your medium.
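A quick way to keep yourself honest before patching a binary like this: check that each replacement is exactly as long as the original. The string pairs below are the ones from the sed command above:

```shell
# Each pair must match in length, or the jar's length fields break.
check() {
  if [ ${#1} -eq ${#2} ]; then
    echo "ok (${#1} chars): '$1' -> '$2'"
  else
    echo "MISMATCH: '$1' (${#1}) vs '$2' (${#2})"
  fi
}
check "Terms and conditions"      "Lack of conditions  "
check "Please read the following" "Ignore the following evil"
check "I accept the terms"        "I reject the terms"
```

All three print "ok" at 20, 25, and 18 characters respectively; when a pair doesn't match, pad the shorter string with spaces, as I did above.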

Enjoy your truly free browser. Thanks for the improvements, Mozilla. It's a shame you have to deal with such lemming-like lawyers as your new Chief Counsel, Harvey Anderson. Just because most proprietary software comes with a EULA, you don't need one too.

Getting to setup mode on serial console on PC servers

When you run PC-clone server hardware over a serial console, here are the undocumented ASCII codes to send to get into BIOS setup mode while the server reboots:

Phoenix BIOS: ESC dash (escape key, minus key).

Another one to try for Award and AMI BIOSes: F4 (I don't know what actual ASCII codes this sends).

Or try: Tab, ESC, ESC
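The Phoenix sequence, for instance, is just two bytes on the wire: ESC (0x1b) followed by minus (0x2d). Piped through od, you can see exactly what your terminal program needs to send:

```shell
# Show the raw bytes of the Phoenix "enter setup" sequence.
# (On a live serial console you'd simply type ESC then '-'
# in your terminal program while the server does its POST.)
printf '\033-' | od -An -tx1    # prints: 1b 2d
```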

Using 5GB of RAM in an HP dc5750 Linux PC

15 March 2008 - I got tired of running with just the 1GB of RAM that I bought with my nice quiet fast HP dc5750 desktop computers. I'd figured when I got them that RAM would get cheaper and fatter by the time I needed it. So this week I bought a 4GB memory set for each computer. These sets each contain G.Skill 2 x 2GB "240-pin DDR2-800 SDRAM (PC2 6400)".

The dc5750 motherboard has four RAM slots. Two were occupied by the existing 1GB of RAM (2 x 512MB, PC2 6400) from HP. I didn't really think about it before ordering, but the HP manuals say that only 4GB of total memory is supported by the dc5750. There's no obvious reason for this restriction; I suspect it's because they didn't have big enough RAM chips when they were testing it, and they haven't bothered to update the documentation since then.

I suspect this because I popped the new RAM dimms into the two empty slots, and turned it on, and it all works fine! There's 5GB of RAM in these machines. So shoot me.

The Fedora and Ubuntu kernels I was running couldn't use more than 4GB of this RAM. Yawn. What is this, breaking the 640k barrier yet again? I only noticed because "top" reported total memory as being 3106861k, and a very early Fedora kernel message said, "Warning only 4GB will be used. Use a PAE enabled kernel." Many people had been here before me, though, so the fix was easy. On Fedora, "yum install kernel-PAE kernel-PAE-devel", changing one line in /etc/grub.conf, and rebooting, was all I needed to do. I hate yum, but if you're willing to wait, it does its little job. Now "top" reports 4922736k total.
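For reference, the one-line grub.conf change is just pointing "default" at the PAE stanza that the kernel-PAE package adds. A sketch of the shape of the file -- the version strings here are invented, and your stanza order may differ:

```
# /etc/grub.conf -- sketch; kernel version strings are made up
default=1        # was 0; now points at the PAE stanza below
timeout=5

title Fedora (2.6.23.1-42.fc8)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=LABEL=/
        initrd /initrd-2.6.23.1-42.fc8.img

title Fedora PAE (2.6.23.1-42.fc8PAE)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-42.fc8PAE ro root=LABEL=/
        initrd /initrd-2.6.23.1-42.fc8PAE.img
```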

On the other hand, Ubuntu's kernel maintainers want to make this a pain. There are four longstanding bugs reported against this, and it's always marked "wontfix" in every release. They require you to install the "server" kernel, because who would ever have a desktop computer with more than 3 GB of RAM? (Many 4GB machines have some of the RAM at an address higher than 4GB.) "apt-get install linux-headers-server linux-image-server linux-server" didn't work, despite being recommended by some person on some web forum, but adding "linux-image-2.6.28-11-server" to that made it work, for some non-obvious reason. It deleted the "linux-image-virtual" packages that I'd installed so I could play around with virtualization (I have no idea why, nor do I know whether virtualization will still work for me). apt-get is still my favorite, despite getting increasingly obscure results from how people build bizarre dependencies. After a reboot, "top" reports "5023452k total". My graphical boot messages went away -- the screen stays dark until the login screen -- but I can use all my RAM.
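Either way, the authoritative check that the kernel really sees the memory is /proc/meminfo, which is where "top" gets its number anyway:

```shell
# MemTotal is what the running kernel actually mapped;
# "top" just reformats this figure.
grep MemTotal /proc/meminfo
```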

One of the other things I've been wondering is why HP doesn't support ECC memory in these systems. The AMD chip set they're using supports it. I should've been daring enough to buy a little ECC DRAM just to see if they're snowing me again, to protect their "server" revenues or something. After working on professional computers and workstations for decades, it spooks me that if a RAM chip ever dropped a bit, the system would never notice or report it. Does anybody else have a clue about this?

Kensington (mice, etc) is particularly stupid about tech support

I was trying to make a Kensington bluetooth mouse (left by a friend) work today with my Linux smartphone (Neo1973). No manuals were left with the mouse, so I looked on the Kensington web site. Of course there are no manuals there. I tried their tech support website, but it had little or no useful information. (I can't even get the mouse to turn itself on. Simple instructions, e.g. a manual, would be the perfect thing.) I tried calling their tech support line listed on their contact page, but it is answered by a robot that wants me to press "1" if I'm using the product with a Windows machine, or "2" if I'm using it with a Macintosh. When I pressed "0" it hung up on me.

I suggest that you buy your products from a different company, whenever you have a choice.

Promise Technology Ultra133 TX2 can't boot from CDROM or DVD drives

This PCI card provides two parallel ATA interfaces, so you can plug in hard drives, CD readers and/or burners, and DVD readers and/or burners.

The catch is that you can never boot from your CD or DVD drives if they are plugged into this controller card. You can access the CD or DVD drives just fine after you're running an operating system, but you can't install a new operating system from CD or DVD while the CD or DVD drive is plugged into this card.

This is undocumented in the card's manual, and on Promise's web site.

I called the Promise Technology tech support department about this. The good news is that once I got the right phone number, they answered promptly and gave me the answer: "It can't boot from a CDROM and we are not planning to fix this." Indeed the tech support person seemed surprised that I would want to do that, and asserted that you can't boot from a CDROM on any PCI card, though he's just ignorant. The Adaptec AIC-7876 SCSI interface PCI card in my other machine has no trouble booting from a SCSI CDROM drive.

I'm not planning on buying any more cards from Promise. I do like that some of their cards are supported under Linux. But I'll try my luck with another vendor next time, who may take the whole idea of building generic interfaces (that work no matter what you plug into them) a bit more seriously.

The bad news is that it took four tries to get the right phone number. The one in the manual, +1 408 282 6402, leads to a non-telco recording that repeats "This phone number is out of service" over and over, doesn't tell you the new number, and won't take messages. The telephone directory that MCI runs at "411" has no Promise Technology listed in Milpitas, CA. WhoWhere.com lists them at +1 408 719 0438, which is disconnected and has no forwarding. A web search located +1 408 452 1180, which leads to a recording telling you the new number is 282 6402 -- the same out-of-service number that's printed in the manual. I had lots of trouble reaching their web site, which kept going up and down. Eventually http://www.promise.com came up and told me the correct number: +1 408 228 1400. You have to press 4 for tech support.

Telephone extension cords aren't twisted pairs

I tried running telephone extension cords around my house rather than doing custom installation of phone wire and little jacks in boxes. In general I find that pre-made cables complete with connectors are more reliable than the connections I make out of raw cables and attached connectors.

However, I discovered that the ordinary "telephone extension cords", the kind that have an RJ11 plug on one end, and one or two RJ11 jacks on the other end, don't actually use twisted-pair wiring in the cable. The wire is apparently running straight, not twisted. The result is that there's audible crosstalk between the two phone lines carried on the four wires in the cable.

I'm looking for a source of telephone extension cords that DOES use twisted-pair wire, or which otherwise eliminates crosstalk.

Some people have suggested using Ethernet cables, with little adapters on each end to convert the 8-pin plugs into 4-pin RJ11 plugs and RJ11 jacks. Ethernet cables won't work very well at all unless they really do twist the pairs of wires together, so any vendor who shipped straight wires with Ethernet connectors would be quickly eliminated from the market. But I don't know a good source of the necessary adapters... and it seems kind of stupid to me that nobody I know of makes real twisted-pair cables with RJ11 jacks on the ends.

Revised RPMs for Red Hat 7.3 zlib and libpng security fixes

Some of us are lazy about constantly reinstalling the operating system. There are Red Hat 7.3 machines still happily running. For such people, I've created new RPMs that include the recent security patches for libpng and zlib. You can install them with "rpm --upgrade ...". Note that merely installing these will not fix your Mozilla browser; it apparently uses its own version of these libraries, rather than linking to the system version. Upgrading your browser is a much more arduous process than updating these RPMs.

Addendum: There's a site, fedoralegacy.org that purports to provide RPMs with security fixes, for Red Hat 7.3, 9.0, and early Fedora Core releases. I have no idea whether their fixes are trustworthy. The key they use to sign them has no signatures on it -- and there's apparently an RPM bug that PREVENTS them from signing the keys used to validate RPMs. Rather than fix the bug and release a new version of RPM, they merely don't bother to provide any trust path for their revised RPMs.

(Yes, I know, I haven't either fixed the RPM bug, or signed the above RPMs, either. But at least I know a problem when I see one, even if I haven't had time to fix it yet.)

USB "External Hard Drive Enclosure" may not provide enough power

Having lots of hard drives lying around, I thought it would be good to be able to plug them in on USB ports and access them without opening cases and fussing with delicate cables. So I got a "CompUSA USB 2.0 External Hard Drive Enclosure", SKU 312100.

It doesn't work for me. Perhaps it's that my hard drives tend to be large, but the 3.5" drives that I have require more power than the wall-wart power supply can provide. When I put a 250GB drive in it, for example, the drive would start to spin up for about half a second, then quit. Then start to spin up, then quit. It would do this as long as I would let it (which wasn't long, since I figured it was straining something). Cleverly, the power supplied by the enclosure is not spec'd in its manual. The wall-wart says it provides 1.5A at 12V. Some of that will clearly be used for the 5V supply to the disk.

I did find a 160GB 3.5" drive that works in it. But what a pain to have some drives work and some drives fail, depending on the fine print on the drives. This particular USB drive enclosure isn't worth the cheap $40 I paid for it.

Here are a few more reports of the same problem. Motto: check the power ratings of the wall-wart and of your disk drives before buying an "external enclosure", especially a cheap brand.

UPDATE (December 2005): These external hard drive enclosures also don't supply enough cooling, if you're going to use the drive extensively. I ran a pair of 400GB drives for months in these enclosures, which have no fans, and made them seek pretty hard by mirroring other web sites using slow but highly parallel wget's. I ran them vertically to help their heat dissipation, but that wasn't enough. Eventually the drives started failing, returning unreadable sectors that stayed unreadable even after the drives cooled. These cheap consumer products aren't really designed for serious professional work. Look for one that has a fan!

Keyspan USB serial port with "Linux support" unfree; use Prolific's instead

UPDATE (10 Oct 2005): I've found a company selling a good Linux-supported USB to serial adapter: Prolific (tech.prolific.com.tw). Their PL-2303 USB 1.1 to Serial Bridge Controller chip is fully supported by a free Linux device driver. Their web site even publishes the driver, and a user guide for Linux users. (Though they are clueless enough to do their site navigation with idiotic Javascript buttons rather than links, so I can't give you a URL for the right page; and they distribute their Linux users' guide as a Microsoft Windows .doc file inside a Zip file.) This chip is in various products, such as the IOGear USB to Serial Adapter, model GUC232A ($34, available as CompUSA SKU #50177640). I recommend briefly forgiving them for the MS-Word manual, and buying their product, rather than the proprietary and inferior Keyspan product.

I foolishly bought the Keyspan USA-19HS in August 2005, because I was impressed that the company had even *heard of* Linux and included it on its packaging. However, what the package said was "Linux: Supported. Please visit website for details."

After buying it, I went to the web site, and its Linux page hasn't been updated since 2003. There's nothing about the USA-19HS model that I bought (and which seems to be the only model currently for sale).

I tried plugging it in anyway. Debian's Sarge 2.6.11 kernel doesn't recognize the device. Nothing created the device nodes for it. The keyspan web site doesn't say how to create device nodes -- it has a long song-and-dance about how newbies can run scripts under Linux, but never provides the &%&$^# script that you need to run!

So I fell back on Google. And discovered that there's been a big tempest over the binary firmware that's in the driver module. It's not released under the GNU General Public License like the rest of the driver (for no obvious reason -- any real competitor could reverse-engineer it without trouble, so this just causes pain for the product's users). The result is that Debian won't put it into their release, because they actually care whether their release contains free software or not. The code appears to have been relegated to a package called "kernel-source-nonfree-2.6.11_2.6.11-1_i386" and "kernel-nonfree-modules-2.6.11-[kerneltype]" (or something like that). So far, I haven't been able to find clear documentation on how I might get such non-free modules with apt-get (it doesn't work for me). It's not clear to me that I want to run non-free software anyway; I've long made a point of owning no Microsoft or other proprietary software.

Google also showed me a page at keyspan.com that actually does tell you how to manually create the relevant device nodes. This page is foolishly not linked from the main Linux support page, which is why I needed Google to find it for me. This page is: http://www.keyspan.com/support/linux/LinuxReadMe.html. Creating the device nodes doesn't help, though, since I don't have the driver.

Would Keyspan consider releasing its firmware under the GNU General Public License, or under an X- or BSD-style license? The GNU license would protect you best from competitors: if they use your code, they have to distribute the sources themselves (and you can sue them if they don't), and if they improve it, they have to release the source of the changes, which you are then free to use in your own products.

I don't know that there are many competitors in the USB-to-serial market who are unable to write their own firmware. What problem is Keyspan protecting itself from?

I got no response at all from Keyspan to the above message -- and got derision from Greg KH, the kernel driver maintainer who put this non-free code into the stock Linux kernel sources. He said:

This is due to Debian's issues, nothing that I, as the kernel driver
maintainer can do about that.  Perhaps you should switch to a different
distro?  :)

Anyway, this was brought up about 5 years ago on debian-legal.  Back
then I proposed a solution for this (move the firmware to userspace
which would solve all of the issues) and asked for a patch.  I have yet
to receive it...
Both of these paragraphs are factually incorrect. This is not merely due to Debian's issues -- I have the same issues. He could, as kernel driver maintainer, reject contributed nonfree code, or write free code that did the same thing, like every other maintainer in GNU/Linux. But he doesn't. And moving the nonfree code to userspace would not solve any problem for people who only want to use free software. But that's not a problem he cares to solve. What's this guy doing as a Linux kernel maintainer?

If I felt like supporting this company, I'd spend a couple days and reverse-engineer the damn thing myself. But I don't feel like encouraging people to buy this product from this company. I'd rather keep looking for a company whose products have free software drivers. And a company that answers emails sent to the email address listed in their web site for Linux support questions.

DAT (digital audio tape) drive software to read and write audio tapes

Computers have used DAT drives for backup for years. Initially these drives did not read or write DAT audio tapes (the application for which the format was invented). SGI (Silicon Graphics) eventually caused firmware to be created for a few DAT drives so they could read and write audio DAT tapes, making SGI's workstations useful for professional audio editing. Since then, this firmware has fallen into disrepair (neither SGI nor the manufacturers of the drives will support it -- while both claiming that the other one is the problem). An active community of users is now spreading the firmware around and helping each other to read and write their audio tapes.

An influential piece of software that uses such drives on Solaris and Linux is called DATlib. The last known version I found links to was called DATlib-0.81.tar.gz, but all the links were dead. When I couldn't find any versions of it on the net in October 2005, I emailed Marcus Meissner, its author, who is now working with SuSE. He pointed me to http://www.lst.de/~mm/datlib.tgz, and I have mirrored a copy of datlib.tgz here as well. It unpacks as "DATlib-current", with the latest change in the archive made on September 30, 1997. It refers to itself as DATlib-0.7. This version does not have Linux support -- it is for Solaris (SunOS 5.5). There's a good summary of the firmware's early history in its README.datlib.

Presumably the 0.81 version was a Linux port done by someone. An earlier version of this page appealed to anyone to send me a copy; Lamar Owen responded with copies of DATlib-0.81.tar.gz and DATlib-0.81-wh0.2.tar.gz (which unpacks to DATlib-0.81-wh0.1, not wh0.2). I believe that the standard Linux kernel's SCSI tape driver supports these audio drives, if you set the density to 0x80 ("mt setdensity 128" or "mt setdensity 0x80"). If you try this code on a modern Linux system, please let me know how it works, and send any relevant patches to gnu@datlib.toad.com.

Debian Sarge Source Code DVDs via "trackerless" BitTorrent

UPDATE: I offered these for a few months, but nobody ever accessed them. I couldn't even access them myself, from elsewhere on the Internet. After talking with Bram, who says trackerless torrents don't really work due to too many broken (non-official-BT) trackerless peers, I've stopped offering these trackerless torrents. But for history's sake, here's the original notice:

The Debian release now takes 15 (fifteen!) CDs to hold its source code. The sources fit on only three DVDs, though.

For some reason, at cdimage.debian.org, you can get the source code DVDs via "Jigdo" but not via BitTorrent. I got them via Jigdo and am offering them up via BitTorrent.

  debian-31r0a-source-1.iso.torrent (352K torrent for 4.4GB file)
  debian-31r0a-source-2.iso.torrent (352K torrent for 4.4GB file)
  debian-31r0a-source-3.iso.torrent ( 20K torrent for 0.2GB file)
I'm using "trackerless" Torrent files, which is a feature of the 4.1.x and 4.2.x and 4.3.x releases of BitTorrent. Get a stable or newer beta version of BitTorrent from www.bittorrent.com; it's got a decent GUI and many other improvements besides trackerless operation.

These Torrents probably won't work with most implementations of BitTorrent, since only a few of them have implemented trackerless support. Encourage your BT client supplier to implement it -- trackerless operation eliminates one point of control which can be used for censoring particular uses of BitTorrent. Meanwhile, use the official implementation, at least for these and other Trackerless torrents.

Don't put these Torrents (or the .iso images) in the same directory as copies of the Debian Sarge Source CDs, if you have them. Unfortunately the three DVDs' ISO images have the exact same names as the first three CDs' ISO images. But of course the CD images and the DVD images are very different (600MB versus 4.4GB).

Disk drive recovery: ddrescue, dd_rescue, dd_rhelp

If you have a disk drive with errors on it, that you'd like to be able to read the recoverable data from, GNU ddrescue is your best friend.

It is modeled after the two preceding programs, dd_rescue (with an underbar) and dd_rhelp, but GNU ddrescue is far better than either -- I've tried all three on the same drive, as well as trying to use plain old "dd". You should skip my learning process and just head straight for the best tool, which is GNU ddrescue. I'll tell you about it.

So, a brief tutorial on things I learned about copying disk drives. "dd" will make a copy of a disk drive with errors, if you set "conv=noerror" so it will keep going after errors. The catch is that it just *removes* the erroneous sectors from its output, as if they didn't exist, which totally screws up the file system image. Fsck will tell you just how unhappy it is with such an image; it's unrecoverable without massive manual work, shifting big blocks of data around. Instead, you can use "dd conv=noerror,sync" which will write an output record (zeros) even if the input record has an error. You had better do this on single disk sectors, thus "dd bs=512 conv=noerror,sync". If you use a larger blocksize (read multiple disk sectors) at once, the first one that has an error will stop the read, and what will get written out will be zeroes for not only the bad sector, but for all subsequent sectors in that block.
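The dd recipe above can be sketched as a runnable script; a scratch file stands in for the failing drive (which would really be something like /dev/sdX) so the flags can be tried safely anywhere.

```shell
# Sketch of the sector-by-sector copy described above. A scratch file
# stands in for the failing drive so this can be run safely anywhere.
dd if=/dev/urandom of=source.img bs=512 count=100 2>/dev/null

# noerror: keep going after a read error; sync: pad each failed read
# with zeros so the output stays the same size and alignment as the
# input. bs=512 keeps each read to a single disk sector.
dd if=source.img of=recovered.img bs=512 conv=noerror,sync 2>/dev/null

cmp source.img recovered.img && echo "image copied intact"
```

On an error-free input like this scratch file the two images come out identical; on a real failing drive, the bad sectors come out as zeros but everything else stays in its proper place.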

"dd bs=512 conv=noerror,sync" works, but has many drawbacks. It's slow even on the error-free stuff since it's doing tiny reads and writes. It spends a lot of time chewing through the erroneous parts of the drive, rather than reading as much error-free stuff as it can, THEN going back to do the hard stuff. (When your drive is crapping out, it has a tendency to die the big death at any moment. You'd like to get as much info off it as possible before that happens. One example is if small particles of stuff are rattling around inside the drive; they make more and more errors, as you run the drive. Sometimes, putting the drive in the freezer for a few hours, in a ziploc bag to keep the moisture off, will revive it briefly; electronics work better at low temperatures than when they get hot.)

Kurt Garloff's dd_rescue was the first attempt to improve on this. It reads and copies bigger blocks until it sees an error, then slows down and goes back, and reads single sectors. After a while it speeds up again. It can also read backward, and can quit after it gets some specified number of errors. It keeps a 3-line display updated in your text window so you can see what it's doing. If you run it simply, it just does what "dd bs=64k" does until it sees an error, then backs up and does "dd bs=512". If it gets an error reading a sector, it doesn't write to that sector of the output file, but it skips past it to write the next good one, so everything stays in sync. It seeks the input and output in parallel so it makes an exact copy of the parts that it can read.

LAB Valentin's dd_rhelp is a complex shell script that runs dd_rescue many times, trying to be strategic about copying the drive. It copies forward until it gets errors, then jumps forward by a big jump looking for either the end of the drive, or more easy-to-read stuff. Once it finds the end of the drive, then it starts working backward, trying to close up the "hole" that it hasn't read yet. As it encounters errors, it skips around looking for more error-free parts of the drive. It only reads each sector once. It reads the logfile output of dd_rescue to see what happened and to figure out what to do next.

One problem with dd_rhelp is that it's a shell script, so it's really slow and consumes massive resources. On one of my drives that had about 2900 bad sectors on it, dd_rhelp would waste upwards of 15 minutes deciding what blocks to tell dd_rescue to try reading next. During that time it makes about 100 new Unix processes every second.

Antonio Diaz Diaz's GNU ddrescue learned from these experiences. It combines both dd_rescue's ability to read big blocks and then shift gears, with dd_rhelp's ability to remember what parts of the disk have been looked at already. It keeps this info in a really simple logfile format, and keeps it updated every 30 seconds, or whenever it stops or is interrupted. It's written in C++ and it's small and fast.

It starts off running like "dd", blasting through large error-free areas. When it gets an error, it writes out any partial data that it received during that read, and KEEPS GOING to the next big block. It notes in the logfile that a bunch of sectors (the first erroneous one, plus whatever ones followed it in the multi-sector read) were skipped. And keeps going. So it reads through the entire disk in big blocks first. Then it goes back to "split" the skipped parts, trying to read each sector individually. The compact logfile always shows which chunks of disk have been read OK, have been read with errors, have been read one sector at a time with errors, or have never been read yet.
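The whole flow boils down to a couple of invocations. This sketch assumes GNU ddrescue's usual "input output logfile" argument order (check your version's manual); a scratch file stands in for the failing drive (/dev/sdX) so the example can run anywhere, and it falls back to printing the commands if ddrescue isn't installed.

```shell
# Sketch of the recovery flow described above, assuming GNU ddrescue's
# usual "input output logfile" argument order. A scratch file stands in
# for the failing drive (/dev/sdX).
dd if=/dev/urandom of=bad.img bs=512 count=64 2>/dev/null

if command -v ddrescue >/dev/null 2>&1; then
    ddrescue     bad.img copy.img rescue.log  # first pass: big blocks, skip errors
    ddrescue -r3 bad.img copy.img rescue.log  # go back and retry bad areas 3 times
    cmp -s bad.img copy.img && echo "image recovered intact"
else
    echo "ddrescue not installed; on a real drive you would run:"
    echo "  ddrescue /dev/sdX image.img rescue.log"
fi
```

Because the logfile records exactly what has been read, you can interrupt ddrescue at any point and re-run the same command later; it picks up where it left off.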

One catch about GNU ddrescue is that the author has some strange ideas about what "ought to" be in the C++ library. So in the current version (1.1), you'll be lucky to get it to compile without errors. All the errors are minor, and are not in key parts of the software, so you can dike them out if you need to. Sometime soon I'll make some nice clean patches for these parts, and submit them, but it's been this way for a year and people complain about this every month on the bug-ddrescue mailing list, and the maintainer doesn't fix it, so I'm not optimistic that the patches will be accepted. But use his software anyway; other than this quirk, it's really nice.

As an aside, it takes a lot of time and screen space for the kernel to log all the error messages from when you're reading from a failing disk drive. The messages also tend to screw up the screen that you're trying to work in. To speed up the logging, you can edit /etc/syslog.conf and insert a "-" before "/var/log/messages", then restart the syslog daemon. This tells syslog to not do an "fsync" after every log message it writes out. If you crash you'll be missing the last few messages, but if you don't crash you'll run about eight times as fast. Also, to make it stop printing those messages on your console, you have to edit the arguments to the "klogd" daemon, which is usually started by the same script that starts the syslog daemon (/etc/init.d/syslog). On my Red Hat 7.3 system, you can edit /etc/sysconfig/syslog and change the KLOGD_OPTIONS line so it includes " -c 0 " which will suppress all console messages, then restart the syslog daemon. (If you can figure out the "logging level" of the disk error messages, you can set the level higher than 0, but I was in a hurry.) Change these things back when you're done doing disk recovery, so you'll see kernel error messages, and so they'll get logged reliably, when the kernel is crashing a year later for some totally unrelated reason.
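The syslog tweak above can be scripted; here it's applied to a scratch copy of the relevant config line rather than the live /etc/syslog.conf, so you can see exactly what changes.

```shell
# The syslog speed-up described above, applied to a scratch copy of the
# relevant line rather than the live /etc/syslog.conf. The leading "-"
# on the filename tells syslogd to skip the fsync after each message.
printf '*.info;mail.none;authpriv.none\t/var/log/messages\n' > syslog.conf.test

sed -i 's|\t/var/log/messages|\t-/var/log/messages|' syslog.conf.test

grep -- '-/var/log/messages' syslog.conf.test
# then restart the daemon, e.g.:  /etc/init.d/syslog restart
```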

Speculation about fixing bad disk blocks to keep using the disk

Once you have copied your entire disk drive off to some other drive (or to a file on a bigger drive, which is often easier), what do you do with the failing drive? If there are only a few errors, and the drive is fairly modern, you can probably just rewrite those sectors, and the drive will reallocate them automatically to new, "spare" sectors that it keeps lying around for just this purpose. From the drive's point of view, it doesn't matter what you write to those sectors -- it could be zeros, garbage, or good data; it will either reallocate them and write your data there, or it will just try writing your new data over top of the bad data and see if it "sticks" (is readable afterward).

When I have a drive with only one or two failing sectors, I often find them with "smartctl -t long" (one at a time, sigh) and then write to them with a complicated series of commands. The very smart smartmontools maintainer, Bruce Allen, has written a BadBlockHowTo.txt about how to do this. (I hope somebody automates this error-prone process soon.) I have done this many times, on many generations of disk drives; on old 1980s SCSI drives there were utility programs that would reallocate individual sectors. I know of no free software program for doing this kind of low-level drive formatting, unfortunately. Modern drives just do it when you write.
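The BadBlockHowTo recipe boils down to a single-sector write at the failing LBA. Sketched here with BAD_LBA standing in for the sector number that "smartctl -l selftest" reports, and a scratch file standing in for the real drive:

```shell
# Sketch of the bad-sector rewrite described above. BAD_LBA stands in
# for the failing LBA reported by smartctl's self-test log; a scratch
# file stands in for the real drive (/dev/sdX).
BAD_LBA=17
dd if=/dev/urandom of=disk.img bs=512 count=64 2>/dev/null

# Overwrite just the one 512-byte sector with zeros. conv=notrunc
# writes in place without truncating the rest of the image; seek=
# counts output blocks of bs= bytes, so this lands exactly at the LBA.
dd if=/dev/zero of=disk.img bs=512 count=1 seek=$BAD_LBA conv=notrunc 2>/dev/null
```

On a real drive, that single write is what gives the firmware the chance to reallocate the sector to a spare.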

OK, this next part is pure uninformed speculation. I HAVEN'T DONE THIS ON MY DISKS, AND YOU SHOULD NOT DO THIS TO YOUR DISKS UNLESS YOU ARE A WIZARD AND YOU KNOW WHAT YOU ARE DOING, AND YOU'RE WILLING TO TAKE THE RISKS YOURSELF WITHOUT WHINING.

It occurs to me that it OUGHT to work to do this: First, copy the entire drive to somewhere else with GNU ddrescue. The logfile will show you exactly where the erroneous sectors are. Look at it with a text editor. Make sure that's what it says. Then, make a copy of that logfile, and (here's the tricky part) run GNU ddrescue in a very strange way to write zeroes onto those bad sectors:

   ddrescue  -r1  /dev/zero  /dev/baddisk  my-logfile-copy

Note that your bad disk is the OUTPUT of this command, while the system file "/dev/zero" (as many zeroes as you ever wanted) is the INPUT to the command.

If you hadn't specified the logfile and the "-r1", this would copy zeroes over your entire bad disk. The logfile points out what disk blocks it couldn't read, so ddrescue will only try to read the parts it hasn't read before. The "-r1" tells it to go back and try to read them again, even though it failed the first time. But since it's reading from /dev/zero this time, all of those reads are going to succeed -- and then it will write those zeros into the exact places on the bad disk drive where you need to write new data to reallocate the bad sectors.

In the process, it destroys the logfile, which is why you made a copy of it. When it's done, the logfile will report that the entire disk was readable, because it was able to read from every "bad" sector in /dev/zero.
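A file-based stand-in for the same idea: take a list of bad regions (byte offset and size -- a simplified form of what the ddrescue logfile records, not its actual syntax) and write zeros over exactly those spans with dd, leaving everything else untouched.

```shell
# Write zeros over exactly the listed regions of an image, a file-based
# stand-in for the /dev/zero trick above. The region list format here
# is illustrative, not ddrescue's actual logfile syntax.
dd if=/dev/urandom of=disk.img bs=512 count=64 2>/dev/null
printf '0x2000 0x200\n0x4400 0x400\n' > bad-regions.txt  # offset size, in bytes

while read off size; do
    # bs=1 so seek/count are in bytes; conv=notrunc writes in place.
    dd if=/dev/zero of=disk.img bs=1 seek=$(($off)) count=$(($size)) \
       conv=notrunc 2>/dev/null
done < bad-regions.txt
```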

LIKE I SAID ABOVE, this is all pure speculation. I have never done this on my own drives. I might try it someday; the drive with >2000 errors would be a good candidate, except that it's a very old 2GB drive, it has already reallocated every spare sector it has, and I don't know what trashed those 2000 sectors, so I don't really trust the drive anyway. Better to try it on a 200GB or larger drive (newer, with better error recovery firmware) with only a few errors (that probably weren't caused by some massive internal problem). Don't blame me if you trash your drive this way. But do tell me what you think of the idea.

Disk drive recovery: ddrescue on MacOSX

My friend's Macintosh started acting very strange and slow today, and we eventually figured out that it's the disk drive, which shows failing SMART status. (MacOS is too stupid to give access to the real SMART info the way the free smartctl does; it just provides a red FAILED indication. However, smartctl may have been ported to MacOSX by now.)

So the race is on to get as much data as possible off the drive.

We immediately copied off a few key documents onto a USB Flash drive, then hooked up a spare USB hard drive. Then the question became: what software do we have to copy this failing drive to the external USB drive?

The snooty Mac utility Carbon Copy Cloner refused to copy to a USB drive. (And I don't know whether it would handle a source drive full of errors, anyway.)

I looked for MacOS ports of ddrescue, but there weren't any in a form that I could simply download and run (or even download, install, and run). Instead there was Darwinports, which wanted me to install almost a gigabyte of Apple proprietary versions of the GNU tools (Xcode), with a license agreement that you MUST CLICK AGREE on or it REFUSES TO INSTALL, even though the GNU license does not work that way. Plus their own small wart on the side of that, which seems to be some scripts that suck down sources and run them through "./configure; make; make install". I aborted that all-night download as soon as I found an alternative; I don't know if this disk drive is up to installing a gigabyte of software and then configuring and compiling a GNU program.

Fink at least has the concept of installing binaries rather than compiling everything from sources. But I downloaded it and tried running its installer, and the installer failed with an error message that didn't appear in the documentation, saying that the installation had failed but I should retry it. When I retried it, of course, it said it couldn't install on top of an existing installation. When I tried to run the half-installed one, it would instantly fail. And when I removed it according to the documentation, and tried a reinstall, I got the exact same problem. Yeah, I know, it's free software, so it's my problem. I don't recommend Fink to you.

So I have fallen back on trusty old "dd conv=sync,noerror" and I hope it doesn't lose my friend too much of the drive. My hat's off to the Mac community for the elaborateness of its infrastructure. Doesn't it need a few more bells and whistles, guys? Someday if I ever waste a week installing a compiler on a Mac, I'll build a simple binary of ddrescue, and put it up on this web site to help the next person who's trying to rescue their data from a failing drive.

OK, so John Perry Barlow came over, and had the development environment handy, so we built ddrescue by downloading the sources (ddrescue-1.1.tar.bz2), unpacking them in the Finder, opening a terminal, going into the unpacked ddrescue-1.1 folder, and typing "./configure" and "make". Here's the resulting binary of ddrescue and the matching documentation on how to use it. As I type this, it is copying my friend's whole dying hard drive onto an identical Firewire drive.

If your disk dies on MacOS, grab a copy of this binary and use it to copy the whole disk onto an identical (or larger) spare disk drive. If it works at all, you'll probably recover 99+% of your files. In fact, why don't you grab it now and put it on your Mac, so it'll be handy when you need it?

Bandwidth used by a Tor server

  • Bandwidth used by my Tor server (named "tor-ture")

Asus A7M266-D motherboard problems

In 2002 I got a PC clone with an Asus A7M266-D motherboard and two AMD Athlon MP processors, for GNU Radio development. I've used it off and on for general computing since then. It originally ran Mandrake Linux, but I switched it to Debian Sarge.

The system was flakey for many years. It would run fine for days and then hang inexplicably; the mouse would stop moving on the screen and nothing would happen further until you hit RESET.

Eric Blossom told me that there's a bug in the chipset (probably the AMD 762) that causes the bus arbitration to hang when the onboard IDE controller is used. I installed a pair of Promise Technology Ultra133 TX2 boards, which eliminated that problem. However, as mentioned above, you can't boot from CDROM or DVD drives attached to these boards. So when I need to boot a new OS CD or DVD, I have to open up the case, attach the DVD drive to the motherboard IDE, re-enable it in the BIOS, and then boot from the DVD. And reverse the process when I'm done.

That circumvention kept the system doing pretty well for years. But in the last year or two, it has gotten flakey again. I tried changing various settings, turned down the bus clock, etc. These "tended" to work, but it was hard to tell, since the hangs would happen infrequently and without warning or obvious provocation.

This week (2007 Feb 16), I was using the system and it froze. I hit RESET and it didn't come back up. (Luckily I'd backed it up recently.) Power cycling it didn't bring it back up either. The system does not beep when it gets power, though all the drives spin up, as well as the CPU fans. I've tried removing all the PCI and video boards, changing the CMOS battery (which was weak at 2.35V), and checking the power supply voltages. No obvious problems, except that it doesn't do anything! When I remove the RAM, it doesn't beep or complain. It's very dim, Jed.

I have another PC clone on order and may just move to that. This was a noisy system. It served for five years, doing hard work much of that time. It was far flakier than any other PC I've owned, so I don't particularly recommend it.

Installing HP dc5750 with Fedora Core 6

I recently bought an HP dc5750 minitower. I was looking for a very quiet system with modern performance and peripherals, with good Linux support. (In surveying my equipment, I realized the fastest working machine I had used a pair of Xeons at 1GHz.) The dc5750 offers many options that make it possible to build a very low power, low heat, quiet system. And it is extremely quiet; it's the quietest computer I have, by far.

I bought it with the high efficiency Energy Star 4.0 power supply, Athlon64 X2 3800+ 35-watt processor, and "FreeDOS". Yes, HP won't ship this unit with Linux of any sort -- but you can buy it without the Microsoft tax (which I refuse on principle to pay). Shipping the GPL'd FreeDOS on the hard drive is a good circumvention of Microsoft's lockin contract, and nobody expects them to actually support it, the way a preinstalled Linux would attract support calls.

It took HP several weeks longer than their promised delivery date to build and ship the system, but I've been pleased since it arrived.

I installed Fedora Core 6 from a torrent-downloaded DVD I made. There was one glitch around the built-in SATA controller, which is part of the graphics chip. Fedora's install CD could access the disk drives just fine through it, but couldn't access the Lightscribe CD/DVD writer. I avoided this problem by adding a $25 Rosewill RC-210 PCI card with two SATA ports, and cabling the Lightscribe to it. Then Fedora had no trouble.

There are still a few teething troubles with the system. Its Cool'n'Quiet support isn't working; it always runs at 2 GHz. There's something broken about the kernel module that's supposed to manage this; I get this message sometime during the Fedora init scripts on bootup:

  Checking for hardware changes   [OK]
  FATAL: Error inserting acpi-cpufreq (/lib/modules/2.6.18-1.2798.fc6/kernel/arch/i386/kernel/cpu/cpufreq/acpi-cpufreq.ko): No such device
  Bringing up loopback interface...	[OK]

This message is reproducible by running "modprobe acpi-cpufreq".

Also, while the X server runs without trouble, I can't use any of the fancy graphics stuff because it's run by an ATI chip that doesn't have very good free software support. The X.org logfile says:

  X Window System Version 7.1.1
  ...
  (--) PCI:*(1:5:0) ATI Technologies Inc RS482 [Radeon Xpress 200] rev 0, Mem @ 0xc8000000/27, 0xd8500000/16, I/O @ 0x2100/8
  (--) PCI: (1:5:1) ATI Technologies Inc unknown chipset (0x5874) rev 0, Mem @ 0xd0000000/27, 0xd8510000/16
  ...
  (II) RADEON(0): Direct rendering broken on XPRESS 200 and 200M
  (II) RADEON(0): Generation 1 PCI interface in multifunction mode, accessible memory limited to one aperture
  (II) RADEON(0): Detected total video RAM=31744K, accessible=65536K (PCI BAR=131072K)
  (--) RADEON(0): Mapped VideoRAM: 31744 kByte (64 bit DDR SDRAM)

Something is burning up about 85MB of the 1GB RAM before the kernel gets it; the kernel reports:

  Feb 26 14:21:00 quiet kernel: Memory: 900968k/916188k available (2105k kernel code, 14616k reserved, 844k data, 240k init, 0k highmem)
    
I tried running the "BIOS firmware tester for Linux" that comes from Intel (http://www.linuxfirmwarekit.org), and it found all kinds of problems in the HP BIOS. Some of these are also reported by the Fedora kernel as it comes up. I'll have to test it with the new BIOS they released to support Windross Bistro.

I think there's a BIOS option to set how much video memory is reserved; I can probably set it lower.

Getting back to administering an RPM-based system is a bit clunky, compared to Debian's debs. But neither one is very good at dealing with source code, which is the part I tend to care about most.

Addendum: Yum is such a pain that I have essentially stopped maintaining this Fedora system. I have a similarly configured system running Ubuntu Feisty, and when I need some obscure program, "apt-get install obscure-program" and in ten seconds it's there. (Either from the local copy of the install ISO that's on the hard drive, or from the net if it's been updated.) When I get a chance, I'll rebuild this machine as an Ubuntu machine also. The Fedora developers claim they aren't wedded to RPM and Yum -- and if somebody would just port their distro to dpkg and apt-get, they'd adopt it. I'll reconsider Fedora after somebody gets around to that.

Addendum 2: I've upgraded the Ubuntu system to Gutsy (7.10) and the DVD drive problem is gone. It was a kernel bug involving how the driver for the SATA host interface responded to an obscure hardware error. The one-line change did the proper kind of reset, and now it works, so no extra SATA PCI card is needed.