Archive for the ‘Computer Hardware Interfacing’ Category

Why does my Antec case not support high speed front USB operation?

Saturday, May 2nd, 2009

On a 2004-vintage Antec SX series case, I was surprised to find that while the rear ports on the motherboard supported high-speed USB 2.0 operation, the front ports limited the USB device (an iPod) to USB 1.1 speed (“full speed”). I checked and rechecked the wiring.

I came across the following posts, which describe a similar problem: USB 2.0 devices are unable to connect to the front ports or simply don’t work. This is due to a no-connect pin being connected to the front USB panel. The solution is to remove the header pin or cut the wire going to that pin.

Then I found the solution to my precise problem: USB 2.0 devices run only at USB 1.1 speed on these Antec cases. The fix is to remove and bypass the filter circuit on the small board containing the front USB and FireWire ports! It looks like Antec may even offer a free upgrade of the front USB board to one that works correctly with USB 2.0 devices, so it doesn’t hurt to ask at Antec’s tech support site.

What if my SATA controller doesn’t see my new 1TB SATA hard disk?

Wednesday, April 29th, 2009

First-generation SATA controllers have several problems getting along with newer SATA drives, among them an inability to negotiate the drive’s transfer rate down from 3.0Gb/s to 1.5Gb/s, and a failure to account for the drive’s large capacity (>527GB). There is more information on this issue in this excellent post.

Unfortunately, it is difficult to tell which issue is being experienced when the BIOS merely locks up, or when the controller locks up during drive inquiry and is invisible to the system thereafter. Assuming, first, that the problem persists under a newer version of the Linux kernel with the libata drivers mostly sorted out, there are several other potential compatibility fixes that do not involve buying and installing a newer SATA controller card that you are sure will work:

(more…)

Compaq Proliant DL380 Generation 1

Wednesday, February 4th, 2009

I was given a Compaq Proliant DL380 (Generation 1 or G1) as part of an exchange for doing some filesystem recovery for a customer. It came with 2x800MHz/256K SECC2 Pentium III CPUs; 512MB PC133 CL3 SDRAM DIMMs; five Compaq 18.2GB Ultra2 SCSI hard disks (two failed); a Compaq “Integrated Smart Array” onboard RAID controller; a floppy drive, slimline CD-ROM, and single 275W hotswap Compaq power supply. Here are HP’s specs for the system. Here is where you can find manuals and documentation for the system. Now, this isn’t a bad machine as-is, but let’s turn it into a beast with some smart buys on the used market.

(more…)

Why does Linux only see 1GB of my 2GB SD Card?

Friday, August 31st, 2007

This mailing list post explains why.

A 2GB SD card reports a 1024 byte block size instead of the 512 byte block size that all smaller SD cards employ. However, I/O is performed in 512 byte blocks by the USB mass storage driver. The card reader is supposed to know about this and report a 512 byte block size, while multiplying the number of blocks reported by the card by 2, in order to report the correct card geometry to the operating system.
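
Just to illustrate the arithmetic (the block count below is a made-up example, not taken from the mailing list post), the translation a well-behaved reader performs amounts to this:

	/* Hedged sketch: converting the geometry a 2GB SD card reports (1024-byte
	   blocks) into the 512-byte blocks the USB mass storage layer expects. */
	#include <stdio.h>

	int main(void)
	{
		unsigned long card_blocks    = 1966080UL; /* example count reported by a 2GB card */
		unsigned long card_blocksize = 1024UL;    /* 2GB cards report 1024-byte blocks */

		/* A correct reader reports 512-byte blocks and doubles the count. */
		unsigned long host_blocksize = 512UL;
		unsigned long host_blocks    = card_blocks * (card_blocksize / host_blocksize);

		printf("card: %lu blocks x %lu bytes\n", card_blocks, card_blocksize);
		printf("host: %lu blocks x %lu bytes\n", host_blocks, host_blocksize);
		/* A broken reader passes card_blocks through unchanged with a 512-byte
		   block size, which is exactly half the real capacity. */
		return 0;
	}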

Older card readers do not know about this conversion, and may even assume that all SD cards have a 512 byte block size. Thus, a 512 byte block size is reported, along with the number of blocks reported by the card. This cuts the reported capacity of the card in half.

But the filesystem that exists on the card reports a 2GB size. On a system where the card physically only shows up as 1GB, this causes the FAT filesystem driver to read past the end of the card. This will produce read/write errors, and could even crash the filesystem driver if it is not equipped to deal with this case.

Windows reportedly employs one of two solutions to ensure that 2GB cards are correctly recognized.

The first solution is to assume that the card has a PC partition table on it (a mostly correct assumption for any card purchased retail). The Windows USB mass storage driver then examines the partition table to determine the number of blocks on the physical device, and ignores the number of blocks reported by the reader. A possible flaw with this scheme is that the Windows driver may not account for cases where a partition table has been erroneously or maliciously constructed, leading to an incorrect physical size being reported, and thus a device which cannot be correctly repartitioned or reformatted. It may also not account for the device being divided into several partitions. And of course this scheme won’t work for a card which is formatted without a partition table.
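
A rough sketch of this first approach (this is not Windows’ code, just the idea; the field offsets are the standard MBR layout, and the image file name is hypothetical):

	#include <stdio.h>

	static unsigned long le32(const unsigned char *p)
	{
		return (unsigned long)p[0] | ((unsigned long)p[1] << 8) |
		       ((unsigned long)p[2] << 16) | ((unsigned long)p[3] << 24);
	}

	int main(void)
	{
		unsigned char mbr[512];
		unsigned long max_end = 0;
		int i;
		FILE *f = fopen("card.img", "rb");   /* hypothetical raw image of the card */

		if (!f || fread(mbr, 1, 512, f) != 512)
			return 1;
		if (mbr[510] != 0x55 || mbr[511] != 0xAA)
			return 1;                    /* no PC partition table: this scheme fails */

		for (i = 0; i < 4; i++) {
			const unsigned char *e = mbr + 446 + 16 * i;
			unsigned long start = le32(e + 8);   /* starting LBA */
			unsigned long count = le32(e + 12);  /* sectors in partition */
			if (count && start + count > max_end)
				max_end = start + count;
		}
		printf("device is at least %lu 512-byte sectors\n", max_end);
		fclose(f);
		return 0;
	}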

The other solution is to ignore the number of blocks reported by the reader, and to probe the size of the card by issuing test reads, probably in large increments at first and then smaller increments, until a read failure occurs. The read failure is assumed to occur because the read occurred past the end of the device. A possible flaw in this scheme is that a card with one or more defective sectors could cause the storage driver to believe it has found the end of the device, when in fact the read error occurs because of a bad sector.
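
Again just as an illustration (not Windows’ actual code, and the device path is hypothetical), the probing approach boils down to a binary search over test reads, with exactly the bad-sector weakness described above:

	#define _FILE_OFFSET_BITS 64
	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>

	static int sector_readable(int fd, unsigned long long sector)
	{
		char buf[512];
		return pread(fd, buf, sizeof(buf), (off_t)sector * 512) == (ssize_t)sizeof(buf);
	}

	int main(void)
	{
		int fd = open("/dev/sdx", O_RDONLY);      /* hypothetical card reader device */
		unsigned long long lo = 0, hi = 1;        /* assumes at least sector 0 is readable */

		if (fd < 0)
			return 1;
		/* Grow in large steps until a read fails... */
		while (sector_readable(fd, hi)) {
			lo = hi;
			hi *= 2;
		}
		/* ...then bisect between the last good and the first bad sector. */
		while (hi - lo > 1) {
			unsigned long long mid = lo + (hi - lo) / 2;
			if (sector_readable(fd, mid))
				lo = mid;
			else
				hi = mid;
		}
		printf("device appears to have %llu sectors\n", hi);
		close(fd);
		return 0;
	}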

At the moment, for Linux and other operating systems which do not implement such hacks, and which instead trust the card reader to report a useful block count and block size for the inserted card, the solution is to buy a new card reader.

Corrupted NTFS filesystem recovery

Monday, March 19th, 2007

The quick guide to recovering a corrupt Windows NTFS filesystem from a dead or dying hard drive:
1) If the drive does not power up or respond at all to host I/O, replace the drive controller board with a compatible one (i.e. from an identical drive purchased on eBay), unless it is a drive known not to work with a controller board swap. Don’t bother doing this if the drive responds but clicks when accessing certain files. If a controller swap doesn’t get the drive to at least respond to ID, the drive has serious problems and will require professional service (or a do-it-yourself head stack/preamp replacement, and possibly a reserved region rewrite…not for the faint of heart).
2) Put the hard drive in a Linux system with excess hard disk capacity.
3) Attempt to mount the partition. Recover any utterly irreplaceable files immediately, in order of necessity. You may not be able to get anything, and it may take several reboots if you “poke” the drive in the wrong place, but if you do get something, at least you know you have _that_.
4) Use dd_rescue, and dd_rhelp if necessary, to make a “clone” image of the drive. The clone image can be a file or another blank hard disk. This may take several weeks, and the drive may die while it is being cloned. If that happens, there is not much you can do but send it in to the recovery house, as you would have had to do anyway.
5) Attempt to loop-mount the NTFS filesystem (mount -o loop /tmp/image.img /mnt). If it succeeds, try to copy the data you need out of /mnt that way. Very likely that the filesystem will not mount. Even more likely that it will mount, but then attempting to read certain files crashes the kernel.
6) If you couldn’t get the files you need, copy the image to a sufficiently sized blank hard disk if you hadn’t already (dd if=/tmp/image.img of=/dev/hdd bs=10M), and then attach the cloned drive to a Windows XP machine. Do NOT allow Windows to “Chkdsk” the drive when it boots.
7) If Windows blue screens when it looks at the drive while booting up, wipe out the partition table in Linux (dd if=/dev/zero of=/dev/hdd bs=512 count=1). This will cause Windows to effectively ignore the drive.
8) Use EasyRecovery from Ontrack in “Advanced” mode to scan the disk for directory structure, and recover as necessary. The result can be copied to another disk or uploaded to an FTP server.

Hints for EasyRecovery:

  • Don’t bother with the Undelete tool because it does not deal with massive filesystem corruption.
  • The Format recovery tool will only work on an existing NTFS volume, which it won’t see because yours is corrupted.
  • The Raw scan should only be used as a last resort because it omits all file and directory names, resulting in a disorganized mess. However, it may find files that the Advanced scan does not, because they have been severed from the directory structure by corruption. If you know the contents of the file you are looking for, you can do a Raw recovery, and then “grep” through the recovered files for a pattern that you know is in the interesting file.

If EasyRecovery cannot find your file, use a hex editor to search through the raw disk image for a piece of the file’s contents. You may get lucky and find it in the hex dump; if so, use the hex editor to save it to a file, or copy and paste from the hex editor into another program. If you don’t, well, time to decide whether that file is worth $500+ for an attempted professional recovery…
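
The same idea can be automated with a few lines of C (the pattern and file name below are just placeholders; nothing here is Ontrack-specific): scan the raw clone image for a string you know is in the file and print the byte offsets, so you know where to point the hex editor.

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char *pattern = "Quarterly Report";     /* text you know is in the file */
		size_t plen = strlen(pattern);
		unsigned char buf[1 << 16];
		size_t kept = 0, got;
		unsigned long long offset = 0;                /* file offset of buf[0] */
		FILE *f = fopen("image.img", "rb");           /* the dd_rescue clone image */

		if (!f)
			return 1;
		while ((got = fread(buf + kept, 1, sizeof(buf) - kept, f)) > 0) {
			size_t avail = kept + got, i;
			for (i = 0; i + plen <= avail; i++)
				if (memcmp(buf + i, pattern, plen) == 0)
					printf("match at offset %llu\n", offset + i);
			/* keep the tail so matches spanning a buffer boundary aren't missed */
			kept = plen > 1 ? plen - 1 : 0;
			if (kept > avail)
				kept = avail;
			memmove(buf, buf + avail - kept, kept);
			offset += avail - kept;
		}
		fclose(f);
		return 0;
	}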

Parallel ports

Monday, November 6th, 2006

Parallel switchboxes
Don't use a parallel switchbox if you want to do high speed transfers. I have not yet found one, of either the manual or automatic type, that is reliable at high byte rates.

PS/2
PS/2 (or Extended) mode takes the standard parallel port (SPP) and adds an output latch and direction control for bidirectional port operation. Unlike in ECP/EPP modes, the behavior of the nACK interrupt is also changed so that the IRQ becomes active on the trailing edge instead of mirroring the pin; also, bit 2 of the status register reflects the status of the ACK interrupt (latched when the IRQ is generated, cleared on read), a violation of the ECP spec.
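
As a minimal sketch (using the same inp()/outp() style as the DOS code further down, and assuming lpt_base holds the port's base address), a PS/2-mode read looks like this:

	outp(lpt_base + 2, inp(lpt_base + 2) | 0x20);   /* DCR bit 5: tristate data drivers (input mode) */
	value = inp(lpt_base);                          /* read what the peripheral is driving */
	outp(lpt_base + 2, inp(lpt_base + 2) & ~0x20);  /* back to output (driving) mode */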

EPP
The difference between EPP 1.7 and EPP 1.9, as set in a system BIOS, is a single trivial detail of the EPP handshake. EPP 1.9 is equivalent to IEEE 1284. The only purpose of the EPP 1.7 setting is to support EPP devices built before IEEE 1284 that malfunction with the IEEE 1284/EPP 1.9 handshake. Note: IEEE 1284 defines the electrical characteristics and handshake protocols of an EPP port, not the register definition.
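
For illustration, here is a hedged sketch of what EPP access usually looks like from software (base+3 and base+4 are the customary EPP address and data registers; reg_number and value_to_send are placeholders). The port hardware runs the handshake itself when these registers are touched:

	outp(lpt_base + 3, reg_number);           /* EPP address write cycle */
	outp(lpt_base + 4, value_to_send);        /* EPP data write cycle */
	reply = inp(lpt_base + 4);                /* EPP data read cycle */
	/* Status register bit 0 is the EPP timeout flag; check and clear it after each cycle. */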

ECP
An ECP-capable port is a functional superset of IEEE 1284. An ECP-capable port in ECP mode is incompatible with non-ECP devices, however.

Many ECP ports do not implement the full ECP specification. Common elements to leave out are:
– nFault IRQ generation (full/empty FIFO can still generate IRQ though!)
– Hardware RLE compression (not required by spec)
– DMA (PCI cards cannot implement ECP DMA and generally do not need to since PCI write buffers provide sufficient speed)
– IRQ/DMA resource configuration (PCI cards cannot implement this)

Handling a parallel port interrupt
By definition, when sharing interrupts it is necessary for your device driver to be able to determine whether your device is the source of the interrupt or not (so you can pass the interrupt on unclaimed to other drivers if it is not). This is exceedingly difficult to do in a generic fashion for PCI parallel cards. Whether or not an interrupt is delivered in a particular operating mode, and where the status of that interrupt is reflected, is highly implementation dependent.

There are four places where an interrupt can be enabled:
– Control register bit 4 (~ACK interrupt)
– ECP Extended Control register (ECR) bit 4 (~ERR interrupt)
– ECP Extended Control register (ECR) bit 3 (DMA interrupt)
– ECP Extended Control register (ECR) bit 2 (FIFO interrupts)

There are at least six places where an interrupt can be generated:
– ~ACK transition
– ECP ~ERR transition
– ECP DMA completion
– ECP read FIFO filling
– ECP write FIFO emptying
– Some devices (NS) generate an interrupt on an unexpected EPP read

There are at least three places where an interrupt can be detected:
– Status register bit 2 (latched after ~ACK transition)
– ECP Config B register bit 6 (follows interrupt pin on bus)
– ECP Extended Control register (ECR) bit 2 (check for 0->1 transition)

PCI multifunction cards usually also have a global control register, which has some location outside of the usual parallel port register set that reflects the status of a parallel interrupt.

We don't really care what in particular caused the interrupt, but we do need to find some proof somewhere in the registers that this card was the one responsible for the interrupt, or things will go horribly wrong.

Problems:
– ~ACK transition is only latched to Status[2] in PS/2 mode by many cards. In SPP and other modes, it either reads 1 or follows the IRQ pin. Since a spec-conforming PCI card will use a level triggered interrupt, we can in theory use this to test for the interrupt (but only on PCI cards!)
– ECP Config B register can be used, but first the port has to be switched into Test mode to read it, which means the ECP FIFOs must be flushed and current ECP transaction terminated, possibly too high a cost for interrupt handling.
– There is no way to determine whether a ~ERR transition caused the interrupt or not. On an ISA card or one without a shared interrupt, it can be determined by a process of elimination (since a spec-conforming driver disables the ~ACK interrupt when in ECP mode), but on a shared interrupt it is impossible.
– ECR bit 2 is only useful if in ECP mode and FIFOs are being used.

Basically, the most useful parallel interrupts (those generated by external events) give us no reliable way to determine which card owns the interrupt. The ~ACK interrupt could be probed, had the PC parallel port's designers thought to put in a loop-back test, but they did not.

The best thing you can do to handle PCI parallel interrupt sharing in a generic fashion is to:
– Disable the ~ERR interrupt.
– The DMA interrupt is not an issue on PCI cards since they don't support it anyway.
– Keep track of the state of the ECR bit 2 when you set it to 0 (unmasks the ECP FIFO interrupt) so that you can check if it changed in your interrupt handler (meaning we generated an interrupt).
– Ensure that your card cannot both have the ~ACK interrupt enabled AND be in a mode that will not latch that interrupt in Status[2] (reflecting the pin state is not enough!). Then you can assume that a Status[2]==0 event means that we generated the interrupt. Note: On most/all PCI cards, the status register must be read in order to clear the level-triggered interrupt.
– Assure yourself to whatever degree of confidence required that your card will not produce ANY other type of interrupt (vendor's logic equation for IRQ event helps)!

If you are lucky enough to have a global interrupt flag for the parallel port on your PCI card, USE THAT INSTEAD! Then you can use ~ERR and ~ACK as external interrupt sources without worries, and you can also handle spurious interrupts with a high degree of confidence! Only use the above “generic” mechanism as a last resort. If someone would look into using the ECP Register B to check for the interrupt and see how well that works, that may be an even better “generic” solution for PCI parallel cards.
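
For what it's worth, here is a rough sketch of the "generic" claim test described above, in the same Turbo C style as the code below. lpt_base is assumed to be set up as in the detection code, and last_ecr_intr is assumed to hold the value you last wrote to ECR bit 2; this is just the heuristic from the list spelled out, not a canonical driver:

	int lpt_irq_is_ours(void)
	{
		unsigned char status = inp(lpt_base + 1);       /* reading status also clears the level IRQ on most PCI cards */
		unsigned char ecr    = inp(lpt_base + 0x402);

		if (!(status & 0x04))                           /* Status[2] == 0: latched ~ACK interrupt */
			return 1;
		if (last_ecr_intr == 0 && (ecr & 0x04))         /* ECR[2] went 0 -> 1: FIFO service interrupt */
			return 1;
		return 0;                                       /* not ours; pass it on to the next handler on the shared IRQ */
	}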

Simple PC parallel port detection in DOS


/* Headers and constants needed to make this fragment self-contained (Borland/Turbo C).
   The PIC port, EOI, and strobe-bit defines were not in the original listing but are
   the standard values. */
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <conio.h>   /* inp(), outp() */
#include <dos.h>     /* MK_FP(), getvect(), setvect(), disable(), enable(), sleep() */

#ifdef __cplusplus
#define __CPPARGS ...
#else
#define __CPPARGS
#endif

#define PICA_0 0x20       /* PIC1 command port (EOI) */
#define PICA_1 0x21       /* PIC1 mask register */
#define PICB_0 0xA0       /* PIC2 command port (EOI) */
#define PICB_1 0xA1       /* PIC2 mask register */
#define EOI    0x20       /* non-specific end-of-interrupt */
#define LPT_STROBE 0x01   /* DCR bit 0 */

unsigned short lpt_base;
char lpt_irq = -1;        /* -1 = not yet determined; checked after detection below */
unsigned char lpt_vector;
unsigned char lpt_pic;    /* 0 = pic1, 1 = pic2 */
unsigned char lpt_mask;   /* bit in PIC OCW to unmask/mask */
unsigned char received;   /* The last byte received */
char is_ecp;
void interrupt (*old_lpt_irqhandler)(__CPPARGS);
void interrupt lpt_irqhandler(__CPPARGS);   /* ISR defined further below */

// The following code should be inserted into a setup function, which should also
// allow the user to override the base address and IRQ
{
	// setup parallel port
	if (lpt_base == 0) {
		// Use BDA to find base address of system's first parallel port
		unsigned short far *bda_lpt = (unsigned short far*)MK_FP(0x40, 8);

		lpt_base = *bda_lpt;
		//printf("lpt_base %0.4x", *bda_lpt);
		assert(lpt_base == 0x3bc || lpt_base == 0x378 || lpt_base == 0x278);
	}

	if (lpt_base == 0x3bc) {
	  // We can assume a port at 0x3BC has IRQ 7 unless we find otherwise
	  lpt_irq = 7;
	}
	// Detect ECP port according to ECP spec p.31
	// ECR is at lpt_base + 0x402
	unsigned char test = inp(lpt_base+0x402);
	if ((test & 1) /* fifo empty */ && !(test & 2) /* fifo not full */) {
		// Attempt to write a read only bit (fifo empty) in ECR
		outp(lpt_base+0x402, 0x34);
		test = inp(lpt_base+0x402);
		if (test == 0x35)
			is_ecp = 1;
	}

	// If ECP port, read cnfgB to find parallel port IRQ number
	if (is_ecp) {
		// Put port into configuration mode
		test = inp(lpt_base+0x402);
		test |= 0xE0;
		outp(lpt_base+0x402, test);
		// Read cnfgB
		unsigned char irq = inp(lpt_base+0x401);
		irq &= 0x38;
		irq >>= 3;
		// irq0 means selected via jumper, user will have to hard code the irq
		if (irq != 0) {
			switch(irq){
				case 1: lpt_irq = 7; break;
				case 2: lpt_irq = 9; break;
				case 3: lpt_irq = 10; break;
				case 4: lpt_irq = 11; break;
				case 5: lpt_irq = 14; break;
				case 6: lpt_irq = 15; break;
				case 7: lpt_irq = 5; break;
				default: break;
			}
		}
		// Set ECP port mode to PS2
		test = inp(lpt_base+0x402);
		test &= ~0xE0;
		test |= 0x20;
		outp(lpt_base+0x402, test);
	}

	if (lpt_irq == -1) {
		fprintf(stderr, "Couldn't find interrupt for parallel port at 0x%x !\n", lpt_base);
		sleep(2);
		exit(EXIT_FAILURE);
	}

	// Convert IRQ number to interrupt vector
	switch(lpt_irq) {
		case 5: lpt_vector = 0x0d; lpt_mask = (1 << 5); break;
		case 7: lpt_vector = 0x0f; lpt_mask = (1 << 7); break;
		case 9: lpt_vector = 0x71; lpt_pic = 1; lpt_mask = (1 << 1); break;
		case 10: lpt_vector = 0x72; lpt_pic = 1; lpt_mask = (1 << 2); break;
		case 11: lpt_vector = 0x73; lpt_pic = 1; lpt_mask = (1 << 3); break;
		case 14: lpt_vector = 0x76; lpt_pic = 1; lpt_mask = (1 << 6); break;
		case 15: lpt_vector = 0x77; lpt_pic = 1; lpt_mask = (1 << 7); break;
		default: abort();
        }


        fprintf(stderr, "Parallel port at 0x%x, irq %d", lpt_base, lpt_irq);
        if (is_ecp)
                fprintf(stderr, ", ECP");
        fprintf(stderr, "\n");

        // set to data input mode using DCR
        outp(lpt_base+2, inp(lpt_base+2) | 0x20);

        // check that data lines are not driven by us
        int fail = 1;
        int i;

        for (i = 0; i < 5; i++) {
                outp(lpt_base, 0x5a+i);
                if (inp(lpt_base) != 0x5a+i) {
                        fail = 0;
                        break;
                }
        }
        
        if (fail) {
                fprintf(stderr, "Parallel port does not appear to be bidirectional!\n");        
                sleep(2);
                exit(EXIT_FAILURE);
        }
        
        disable();  // cli()
        // grab IRQ vector
        old_lpt_irqhandler=getvect(lpt_vector);
        setvect(lpt_vector, lpt_irqhandler);
        
        if (lpt_pic > 0) {
                // unmask our IRQ
		outp(PICB_1, inp(PICB_1) & ~lpt_mask);
		// then unmask IRQ2
		outp(PICA_1, inp(PICA_1) & ~0x04);

	}
	else {
		// unmask our IRQ
		outp(PICA_1, inp(PICA_1) & ~lpt_mask);
	}
	// enable parallel port interrupt via ACK line
	outp(lpt_base+2, inp(lpt_base+2) | 0x10);

	enable(); // sti()
}

Simple bidirectional communication between two PCs with a standard parallel port cable
Swap the STROBE and nACK pins on one end of the parallel cable. Ensure that the parallel port nACK interrupt is enabled on both ends (DCR bit 4, i.e. OR the control register with 0x10). Then the communication looks like the following:


// Parallel port ISR, Turbo C++ 3.1 DOS code
void interrupt lpt_irqhandler(__CPPARGS)
{
  disable();

  received = inp(lpt_base);
  // Interrupt the sender, since STROBE on this end
  // is connected to ACK on the other end
  unsigned char tmp = inp(lpt_base+2);
  outp(lpt_base+2, tmp ^ LPT_STROBE);
  outp(lpt_base+2, tmp);

  old_lpt_irqhandler(); // chain old IRQ handler
  outp(PICA_0, EOI); // EOI
  if (lpt_pic > 0)
	outp(PICB_0, EOI); // also send EOI to PIC2

  enable();
}

I have found this to be a sufficient quick & dirty way of transferring bytes from one PC to another in interrupt driven fashion.
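
For completeness, here is a hedged sketch of the matching send side under the same assumptions (lpt_base and LPT_STROBE as set up in the detection code above). It is not part of the original program, just the mirror image of the ISR:

	void lpt_send_byte(unsigned char b)
	{
		unsigned char dcr;

		outp(lpt_base + 2, inp(lpt_base + 2) & ~0x20);  /* drive the data lines (leave PS/2 input mode) */
		outp(lpt_base, b);                              /* put the byte on the wire */
		dcr = inp(lpt_base + 2);
		outp(lpt_base + 2, dcr ^ LPT_STROBE);           /* pulse STROBE, which is wired to the far end's nACK */
		outp(lpt_base + 2, dcr);
		/* The far end's ISR pulses its STROBE back as an acknowledgement, which arrives
		   here as a nACK interrupt; wait for that before tristating the data lines
		   (DCR bit 5) again or sending the next byte. */
	}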

Transferring data off that old MFM or RLL drive

Wednesday, July 26th, 2006

ST506 is an interface consisting of a 34-pin control cable and a 20-pin data cable. MFM drives are ST506 drives formatted with 17 sectors per track, and RLL drives are those formatted with 26 sectors per track. It is possible to format any RLL drive as MFM, and any MFM drive as RLL. A drive not certified for RLL formatting may lose data when formatted as RLL, however.
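
To put rough numbers on the difference (example geometry, not any particular drive): a drive with 820 cylinders and 6 heads holds 820 × 6 × 17 × 512 bytes ≈ 42.8MB when formatted MFM, and 820 × 6 × 26 × 512 bytes ≈ 65.5MB when formatted RLL, about 53% more capacity from the same mechanism.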

PC/XT and AT machines are different, and so are the controllers designed for each. A 16-bit controller (i.e. one designed for an AT) usually does not have its own BIOS, because the AT BIOS has built-in INT13 support for hard disks. (Many AT-compatible BIOSes have built-in surface scan and low-level format routines.) 16-bit BIOSless ST506 controllers should be interchangeable with a given drive. There are a few exceptions where a particular controller will write the formatting information in an incompatible fashion (example: WD1006V-SR1).

However, in general no controller with an onboard BIOS is interchangeable with any other controller with an onboard BIOS. Since the PC/XT BIOS has no INT13 hard disk support, 8-bit cards designed for use with a PC/XT always have an onboard BIOS, which is required to low-level format the disk; this makes it imperative that any drive that originated in a PC/XT be matched with the controller that low-level formatted it in order to read the data on the drive. The same goes for the limited number of 16-bit controllers that had an onboard BIOS with a low-level format utility. If a drive is formatted with a standard AT BIOS, however, any other standard AT BIOS should be able to read the low-level format.

Here is a book with a fascinating chapter detailing low-level formats and other little-known facts about older drive technology.

(If you wish to have an arsenal of controllers for attempting to deal with bare MFM/RLL drives formatted with XT controllers, start with the DTC 5150(X/CX) and WD1002-WX1 MFM controllers and the WD1002(A)-27X RLL controller. From there, the ST11M/R, WDXT-GEN series, DTC 5160 series, WD1002(A)-WX2, WD1004(A)-27X, OMTI 552x series, Adaptec ACB-2010 and ACB2072 series, and XEBEC 1210/1220 are other common XT controllers.)

When you have a controller/drive combination that has its own BIOS, you don’t have to worry about the drive types defined in the AT BIOS. Those are only used by the INT13 interface built into the BIOS. When a controller BIOS is invoked at boot, it installs its own INT13 routines, making the built in INT13 routines irrelevant.

In the case of a 16-bit BIOSless controller, however, it is necessary to know the C/H/S geometry of the drive in order for it to be used. If the battery in the system has died, you can try to guess the parameters by trying all the built-in types (all besides type 47) that have a formatted size equivalent to the drive’s size. For example, an ST-251 40MB MFM drive is usually formatted as 860/6/17, which corresponds to Type 40.

If guessing all the built in types doesn’t work, it is probably a user defined type which will be impossible to guess. To rescue the parameters, you can run the Spinrite utility and near the beginning of the operation it will tell you that the CMOS geometry does not match the formatted geometry, and it will tell you the formatted geometry. You can then enter this geometry as a user defined type, and the drive should then be accessible.

Sometimes drives will be accessible but impossible to boot from, for various reasons. Just because you cannot boot from a drive does not mean you should give up: boot from a floppy and attempt to access drive C:.

Cabling is important to check. Have a few sets of cables around. The cables may have rotted and will cause behavior like DOS “not ready” errors and intermittent operation.

Usually you would disable the floppy controller on the ST506 controller board so as not to conflict with the floppy controller already in the system. However, some ST506 controllers have built in floppy controllers that cannot be disabled. In this case, you’ll have to hook up the floppy drive to the ST506 controller and disable the system floppy controller. If the controller is an older model, you should verify that it supports the type of floppy drive you are connecting to it (most will support standard 1.44MB drives though). The system BIOS must also support the floppy drive type that is connected.

It is usually difficult to put a ST506 drive and controller in the same system as a modern IDE drive, due to BIOS issues. Also, depending on the system, the ST506 controller may not even work due to incompatibility with the system BIOS and timing issues (disabling internal and external caches and maxing out I/O wait states may help). Also, the BIOS in a modern system may simply hang when confronted with a particular drive or controller, making it impossible to use them in that system. So you will need another mechanism to transfer the files off the drive.

If the filesystem is readable, you can use a null modem / laplink program (such as INTERLNK, Laplink, Fastlynx, or LPTransfer) to transfer the readable files to another computer. Or set up an IPX network using a packet driver or ODI, and use IPXFER or similar to transfer the files. Or use the backup/restore utilities that come with DOS to transfer the data via floppies. You could also use a parallel port Zip drive with the associated DOS utilities (GUEST.EXE). Another option is to use an ISA NE2000-compatible network card with jumpers for the I/O base address and IRQ, an NE2000 packet driver, WATTCP, and SSHDOS. (This may all be possible to fit on a single floppy disk.)

If the filesystem is unreadable (the C: drive exists, but errors occur when you attempt to obtain a directory listing), or if the files you are interested in cause errors like Sector not found or Data error when attempting to read them, Spinrite is the best program to salvage the data. Run Spinrite on Level 2 (Recover data), and it will do its best to recover and relocate the data so that the file can be accessed again.

If the MBR/FAT areas of the drive are unreadable, and Spinrite can’t recover the data stored on them, the prognosis is somewhat hopeless. The best you can do here is to take a raw image of the filesystem and transfer it elsewhere for analysis. Then the disk will need to be low level formatted to possibly be used again. Just because an old disk is unreadable doesn’t mean the disk itself is unfit for future use. The low level format on a MFM/RLL drive will drift with time if the recorded data is not refreshed on a regular basis (such as with Spinrite’s level 3 option). This is not a failure of the drive itself, but rather a maintenance failure.

The controller, if it has a BIOS, will usually have format and/or diagnostic routines in the BIOS that can be called by the user. To locate the BIOS, you can use a utility like Quarterdeck’s Manifest that will map the area between 640K and 1MB. Then you can use the DOS debug utility to dump any areas that Manifest reveals as option ROM areas. The most common region where controller BIOSes are mapped is C800-EFFFh, at multiples of 400h. (The video BIOS usually lives at C000h and the system BIOS at F000h.) In Debug, just do:


- d c800

for the suspected area. You should see 55 AA as the first two bytes; if you continue with - d, you should see text that gives away the nature of the ROM in that area. If you don’t see 55 AA, or if you see all FF’s, there is no ROM that begins at that address.

Once you have located the controller BIOS’s base address, you will then want to use Debug to jump to the diagnostic routine. This is almost always located at offset 5. (The first two bytes are the option ROM signature 55 AA, the third byte is the ROM length in 512-byte blocks, and offset 3 is the initialization entry point that the system BIOS calls when it scans for option ROMs.) So if your controller is at CC00, for example:


- g=cc00:5
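
If you would rather automate the hunt, here is a small Turbo C sketch that does the same thing as the Debug session above: walk the C800-EFFFh region in 400h-byte steps and report any 55 AA option ROM signatures it finds. It is purely illustrative and only checks the signature bytes.

	#include <stdio.h>
	#include <dos.h>

	int main(void)
	{
		unsigned seg;

		/* step by 0x40 paragraphs = 400h bytes per probe */
		for (seg = 0xC800; seg < 0xF000; seg += 0x40) {
			unsigned char far *p = (unsigned char far *)MK_FP(seg, 0);
			if (p[0] == 0x55 && p[1] == 0xAA)
				printf("option ROM at %04X:0000, length %lu bytes\n",
				       seg, (unsigned long)p[2] * 512);
		}
		return 0;
	}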

For 16-bit controllers that do not have a BIOS, you can low-level format the disk using a utility like WDFMT, or the system BIOS’s built-in format utility. You should low-level format the disk first, then perform a media analysis. You can do the low-level format repeatedly if the media analysis continues to show too many bad sectors. (This may be necessary on drives which have been in storage for a long time.)

After you have formatted, allowed the media analysis to fill the defect list, and optionally entered by hand any manufacturer defects listed on the drive label, you should fdisk and DOS-format the drive. When the DOS format encounters a sector which has been marked bad by the low-level media analysis, the sector will be unreadable, so DOS will mark the whole cluster containing it as bad. ST506 drives do not “hide” their low-level defects with an internal reallocation pool like modern drives do, so it is normal for a perfectly working ST506 drive to have several kilobytes worth of bad clusters.

The next thing you should do after low-leveling the drive is to run Spinrite at Level 5 (Restore good sectors) several times. If the results are stable after several runs, then the disk is ready for use. Remember to consider thermal issues as well as mounting issues (the drive should be screwed into the mounts and mounted in the same horizontal/vertical position that it will be used in) when deploying the drive.

8-Bit / XT IDE disks

These types of older disks use an 8-bit interface and are thus incompatible with typical IDE controllers (which use a 16-bit interface). Some of them can be jumpered either for 8-bit or 16-bit operation. If you take an IDE disk out of an older system for data recovery and it appears that the drive does not respond in a newer system, make sure that it is not set for 8-bit operation. (A tell-tale sign will be that it was hooked up to an IDE controller with an 8-bit edge connector.)