Archive for May, 2006

Why do hippies drive VW microbuses?

Wednesday, May 31st, 2006

Right-wing types, always giddy at an opportunity to pick on counterculture types, attempt to use the Microbus as an example of what is wrong with hippies. Surely a hippie, being concerned with everything that is earth and the environment, would not drive an SUV-like vehicle with correspondingly poor gas mileage and no modern emissions controls?

– The Type 2 shipped with anywhere from a 1.2L 25HP motor to a 1.8L 50HP motor, depending on the model year. Gas mileage was between 20 and 30 mpg, still unheard of for a modern SUV of similar size.

– The Type 2 seated at least 5 people with cooking and camping gear. Makes sense for taking the commune on a road trip to live it rough for a while. Doesn't make sense for 1 person to drive around in, but how many hippies are loners?

– Disposing of a Type 2 would saddle the recipient with 3600 lbs. of metal and plastics. It makes 'green' sense to keep it running.

– Replacement parts were and still are cheap due to basic design.

– While emissions controls did not exist, creating more ground level pollutants, a Microbus produces less CO2 than a modern 'clean' vehicle.

New Testament Christians are sheeple

Wednesday, May 31st, 2006

Romans 13:1-7
Let every soul be subject to the governing authorities. For there is no authority except from God, and the authorities that exist are appointed by God. 2 Therefore whoever resists the authority resists the ordinance of God, and those who resist will bring judgment on themselves. 3 For rulers are not a terror to good works, but to evil. Do you want to be unafraid of the authority? Do what is good, and you will have praise from the same. 4 For he is God's minister to you for good. But if you do evil, be afraid; for he does not bear the sword in vain; for he is God's minister, an avenger to execute wrath on him who practices evil. 5 Therefore you must be subject, not only because of wrath but also for conscience' sake. 6 For because of this you also pay taxes, for they are God's ministers attending continually to this very thing. 7 Render therefore to all their due: taxes to whom taxes are due, customs to whom customs, fear to whom fear, honor to whom honor.

1 Peter 2:13-17
Therefore submit yourselves to every ordinance of man for the Lord's sake, whether to the king as supreme, 14 or to governors, as to those who are sent by him for the punishment of evildoers and for the praise of those who do good. 15 For this is the will of God, that by doing good you may put to silence the ignorance of foolish men — 16 as free, yet not using liberty as a cloak for vice, but as bondservants of God. 17 Honor all people. Love the brotherhood. Fear God. Honor the king.

Titus 3:1
Remind them to be subject to rulers and authorities, to obey ….

But on the other hand:

Acts 5:29
We must obey God rather than men.

So obey the rules of men, unless they conflict with the rules of God.

Since the rules of God generally subtract freedom as well, a dictator would have carte blanche with Christians to rule as he pleases. As a Christian, you can protest all you want, but civil disobedience or revolution is an unavailable option to you.

How convenient for a totalitarian government. “The authorities that exist are appointed by God.” That certainly seems to echo in some eerie statements President Bush has made while appealing to the religious right.

Breathing new life into that BX system

Wednesday, May 3rd, 2006

Since 2000, I have had a system based around a Micro-Star MS-6163 BX Master mainboard. When I bought it, I also bought a MSI MS-6905 Master 1.1B slocket and a Celeron-II 533MHz which I had run at 840MHz since day one with no problems after increasing the voltage to 1.65V from the stock 1.5V. Two sticks of PC100 CL2 Micron RAM for a total of 256MB rounded out the system.

Lately, due to software becoming more bloated and inefficient, I was forced to contemplate an upgrade. CPU, memory, and video card were all considered. I decided to stay with the MSI BX board in the end, since it was still possible to obtain good performance from the platform, and I didn't want any external problems such as video card incompatibility, power supply issues, or problems fitting a new board into the Inwin A500 mid-tower case. I also wanted to keep the MSI board because its six PCI slots and single ISA slot are a perfect match for the expansion cards I am using. (Newer mainboards with ISA slots are very rare.)

Unfortunately, the Matrox G400 MAX appears to still be the best overall choice for 3D games, dualhead and TV-out performance, video playback, and open source drivers. (An option was to obtain an ATI 9200.) I ended up staying with the G400MAX. It has served me well.

Memory was a quick fix, with the caveat that I observe the compatibility constraints with BX boards and memory (http://homepage.hispeed.ch/rscheidegger/ram_bx_faq.html) and obtain memory modules with RAM chips organized as 16×8. The mainboard has four memory slots, for a potential total of 1GB memory (to install more than 3 memory modules, buffered memory is required). So, I obtained four 256MB buffered ECC 16×8 Viking sticks from Ebay for $60. The seller claimed they were CL2, but they turned out to be CL3. I obtained a $5 refund on that basis.
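As a sanity check on that shopping constraint, here is a quick sketch (my own arithmetic, not from the BX FAQ) of why sixteen chips organized as 16M×8 add up to a 256MB module; the extra parity chips an ECC module carries contribute no addressable capacity:

```python
# Capacity check for a 256MB module built from chips organized as 16M x 8.
# Each chip stores 16M addresses of 8 bits, i.e. 16MB per chip.
chips = 16               # data chips on the module (ECC parity chips extra)
addresses = 16 * 2**20   # 16M locations per chip
width_bits = 8           # 8 data bits per location

chip_bytes = addresses * width_bits // 8
module_bytes = chips * chip_bytes
print(module_bytes // 2**20, "MB")  # -> 256 MB
```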

Now for the CPU issue.

Since this is a BX board, only 100MHz bus speeds are guaranteed to be stable with respect to the AGP slot (since the AGP bus only has 1/1 and 2/3 dividers). So a 133MHz CPU is a possibility, but I preferred to limit my search to 100MHz CPUs (also convenient since I had PC100 memory).
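The divider arithmetic above can be sketched quickly (my own illustration; the ~66MHz figure is the AGP spec target, not something from the board manual):

```python
# The BX chipset derives the AGP clock from the FSB with only
# 1/1 and 2/3 dividers; the AGP spec calls for roughly 66 MHz.
def agp_clock(fsb_mhz, num, den):
    return fsb_mhz * num / den

print(round(agp_clock(100, 2, 3), 1))  # 66.7 -> in spec at 100MHz FSB
print(round(agp_clock(133, 2, 3), 1))  # 88.7 -> a heavy AGP overclock at 133MHz FSB
```

This is why a 133MHz CPU is only "a possibility": the video card would have to tolerate a one-third AGP overclock.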

I also questioned whether the motherboard power electronics and my 300W Antec power supply could feed a hungrier CPU, but these concerns turned out to be irrelevant. The Celeron Tualatin 1400/100/256KB (SL6JV) draws 33.2W at 1.50 volts, or about 22.1 amps.
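The current figure falls straight out of Intel's published numbers; a one-line check (my own arithmetic):

```python
# Current drawn by the Tualatin Celeron at full load, from Intel's
# published TDP and core voltage (I = P / V).
tdp_watts = 33.2   # SL6JV thermal design power
vcore = 1.50       # core voltage

amps = tdp_watts / vcore
print(round(amps, 1))  # -> 22.1
```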

The motherboard BIOS could be an issue, because the CPU settings are done through software. In this case, the locked multiplier of Intel CPUs turns out to be beneficial, because we aren't dependent on the motherboard to set the CPU multiplier (only the bus speed). Some boards require an update to the embedded Intel microcode: http://www.mrufer.ch/pc/tualatin4_e.html The slocket also has several adjustable settings for voltage and bus speed.

Several CPU choices were available, all Socket-370. (SECC-2 CPUs are hopelessly obsolete and expensive in comparison.) The board can physically take:

– Coppermine PIII, made up to 1.1GHz at 100MHz and 133MHz bus speeds on a 0.18µm process, with 256K of L2 cache
– Tualatin PIII, made up to 1.4GHz at 133MHz bus speed on a 0.13µm process, with 256K or 512K (Pentium III-S) of L2 cache
– Celeron-II, made up to 1.1GHz at 66MHz and 100MHz bus speeds on a 0.18µm process, with 128K of L2 cache
– Tualatin Celeron (Tualeron), made up to 1.4GHz at 100MHz bus speed on a 0.13µm process, with 256K of L2 cache

More information can be found here: http://processorfinder.intel.com/scripts/default.asp

We want the best combination of price, high clock speed, small process (for lower power consumption), and large cache. The 1.4GHz Pentium III-S is the best performing choice, but easily fetches $100 due to its rarity. It also runs on a 133MHz system bus and thus would have required more expensive PC133 memory. The second best choice is a 1.4GHz Tualeron, which runs on a 100MHz system bus. I obtained one of these for $30, which I consider to be a bargain.

The next challenge was interfacing the Tualatin to a motherboard designed to take slot CPUs, through a slocket built for Socket-370 Coppermine CPUs. While the Tualeron will physically fit in a Socket-370, the electrical signals are different. I obtained what is referred to as a “Lin-Lin” adapter on eBay. This cost approximately $7; it is a roughly 5mm-thick shim socket that sits between the Tualeron CPU and a socket designed to take Coppermine CPUs.

So what I then had was a slocket, with the LinLin adapter installed, the 1.4GHz Tualeron CPU installed into that, and the Global Win HSF on top of it. When I reinstalled this, I was left with less than 1mm of clearance between the top of the CPU fan and the second memory DIMM. I decided that the new Tualeron CPU, being built on a smaller process, did not need as massive a heatsink as the old one did, so I ordered a CoolJag 1U cooler from 1coolpc.com. This is a low-profile cooler designed for use in a 1U-height rack mount. It works perfectly for giving a slocket more room, too!

However, I was not so lucky when it came to correct functioning. Upon boot-up, the system would freeze in the BIOS. After extensive troubleshooting, I found this page: http://overclockers.com/tips61/ which detailed a long-expired RMA program for the MSI slocket I was using. Apparently, while the 533MHz Celeron I had happened to be compatible with this slocket, MSI's opinion was that it was unusable with any Coppermine CPU, and by extension with my Tualeron+LinLin, which appears to the system as a Coppermine CPU. New slockets cannot be found for sale anymore and are incredibly rare even on the used market. I thought I was out of luck until a friend mentioned that he had an MS-6905 1.1 in storage. I traded him my 1.1B + Celeron (guaranteed to work with the 1.1B) for his 1.1, installed the Tualeron stack, and the system worked from that point on.


If you are having problems getting the system to boot, clear the CMOS, and then set 400MHz (100×4) in the CPU soft menu. Also make sure both JP2 and JP3 are jumpered on the board (for automatic FSB selection).

In the end, I now have a 1.4GHz/256K/100MHz/1024MB/CL3/ECC system, which performs MUCH better with current software than the previous 840MHz/128K/105MHz/256MB/CL2 system. And there was no need to upgrade any other part of the system, such as motherboard, power supply, case, video card, etc. I have not yet done extensive 3D game testing, but I would imagine significant improvement in that area. Total cost of the upgrade was around $80, which is much less expensive than a system platform change would have been.

The only possible upgrade now, without changing the underlying platform to something more current, would be to use CL2 memory instead, but the CL2/CL3 distinction amounts to only a small performance hit.

The LunchBox discussion forum was very helpful in figuring out what to do about problems with this project: http://www.geocities.com/_lunchbox/

Keyboard repair

Tuesday, May 2nd, 2006

Here's a couple of things you can do to beat that old electric piano back into shape:

1. Clean the contacts on the keys. Under the key, at least on a velocity sensitive keyboard like this Ensoniq SDP-1, there is a spring that makes contact with a busbar. The bar and the spring will corrode and get dirty. Use contact cleaner and an old toothbrush to scrub them gently until they shine.

2. Check the ESR of the electrolytic caps and replace as necessary.

3. Check the “action” springs (the ones that return the key to its original position) for weakness and replace as necessary.

4. Check the amplifier jacks and make sure the retainer mechanism is in place, usually a thin nut that threads around the outside of the jack. If this is missing, the jacks will eventually break off the PC board. (The replacement part number for surface-mounted 1/4″ mono jacks is Kobiconn 16PJ500.) It also would not hurt to reflow the solder between the jacks at the board as a preventative measure.

I make my own patch cords out of bulk 20 gauge coax instrument cord and bulk spring-relief ends.

To do this, take a length of cable, cut the outer jacket away about 1″ down, twist up the outer strands, and then use a wire stripper to remove about 1/2″ of the inner jacket. Take an end and insert the cable into it, getting some of the strands for the outer/ground part through the ground hole and some of the strands for the inner/signal part through the signal hole, and bend the strands around so they stay (and aren't touching the wrong part of the jack). Use a pair of pliers and “crimp” the retaining clip around the cable. Now apply a bit of solder to both holes, securing the strands into the hole. Slide the plastic insulator sleeve over the junction area, then screw on the outer shell and the spring strain relief.

Do the other end in the same way and give the cord a test by wiggling both ends while playing; there should be no drop-outs or fuzz in the signal. Congratulations, you just saved $20 or more…

The open source video driver debate

Monday, May 1st, 2006

Axioms:

1. Users should not be prevented from choosing whatever operating system, video driver and video hardware suits their needs. A copyright license violation (as opposed to an EULA violation) cannot occur on the end user's machine, since he is not redistributing anything.

2. Vendors should not be prevented from distributing video driver software in whatever form they see fit and for whatever platform they see fit, as long as they are complying with the law in doing so.

3. Open source software is categorically superior to binary-only software in terms of maintainability, in terms of supporting different host hardware platforms including those that did not exist at the time of original development, and in terms of building a knowledge base that later innovations can be built upon.

4. Most open source software is superior to binary only software in terms of the license conditions and in terms of flexibility.

5. Any open source software can be better designed and better tested, and therefore more usable, than any binary-only software – given sufficient developer time and a common, fixed set of user requirements between the two.

6. Binary only software provides a superior, but not infallible, vehicle within which to protect trade secrets, hide evidence that could substantiate a patent infringement injunction or award, and hide copyright violations.

7. A system with binary only modules that fails is not practical to debug by code reading and must be debugged interactively. The level of encapsulation does not matter because data that has passed through a binary only module is tainted for purposes of debugging. Strict assertion checking may mitigate this somewhat.

8. An operating system kernel with binary only modules cannot protect other parts of the kernel without sacrificing performance.

9. Precedent has shown that given access to the kernel, third-party binary only code will subvert the user's control of the system, either on behalf of the author of the code, or on behalf of someone else who exploits the binary-only code. Therefore, for a reasonable guarantee of security, it is necessary but not sufficient that the source code is made available to all modules that have access to the operating system kernel.

10. A high frame rate is the number one requirement for a gaming-oriented 3D graphics video card driver.

11. Vendors of 3D graphics cards intended for the consumer market prioritize frame rate and apparent image quality over rendering correctness, reliability, security, or flexibility. Vendors of 3D graphics cards intended for the professional market prioritize image quality, rendering correctness, and operating system stability over other characteristics.

12. The DRI 3D graphics framework that is used by open source operating systems is the best available framework in terms of flexibility and security. Graphics card vendors who have a number one priority of delivering performance and image quality prefer not to use the DRI framework.

13. Open source developers categorically consider flexibility to be a virtue. Therefore, they would prefer a given graphics card to be usable both in a consumer and in a professional context, as well as possessing all of the above mentioned qualities such as reliability and security, to the fullest extent that the hardware permits.

14. Once a driver is developed, the source code of the driver is sufficient reference material in terms of maintainability, portability, and auditability. Therefore, it is irrelevant whether a databook for a piece of hardware is available publicly or whether its availability is restricted under NDA.

There seem to be two issues, with several schools of thought.

a. If hardware manufacturers should provide necessary materials that open source video driver development requires, what the extent of expectations for such materials should be.

b. Whether open source operating systems under the GNU license should attempt to use the GNU license terms with respect to derivative works to compel distribution of driver source code.

Some supporting information on both issues. Any discussion of “drivers” or “databooks” below is limited strictly to the context of 3D hardware drivers and open source developers. When binary-only drivers are mentioned, assume they are for Linux on IA-32 processors unless otherwise noted. 2D acceleration hardware that assumes a planar windowing system is largely considered obsolete and so open source drivers and availability of documentation are not a hotly contended issue. (Some vendors, such as NVidia, contribute directly to open source 2D driver development.)

NVidia and ATI have not released any driver source code recently. They have released binary-only drivers for Linux. NVidia's only driver source code release, in 1999, was deliberately obfuscated. NVidia has never released databooks. ATI has released databooks for the Rage Pro, Rage 128, and Radeon 9200, but not for the Rage Fury MAXX. A reverse engineering effort for the binary-only R300 driver exists and has met with limited success.

Matrox released driver source code developed externally (by Precision Insight, a subsidiary of VA Linux, now VA Software). Matrox also publicly released databooks for the G200 and G400. Matrox's driver did not include microcode source for the graphics processor (or for the HAL used to configure the display). Without the microcode source, it is impossible to improve the security architecture of the microcode, or to allow the microcode to accept standard OpenGL vertex formats (currently conversion to Direct3D 6 format is necessary, with the associated overhead). It is also impossible to improve the microcode to add new features, such as offloading the T&L rendering stage. Matrox has not released databooks for the G550 or Parhelia products, if such databooks exist. Matrox has not released source code for any Parhelia or video capture/editing product. Matrox has released binary-only Linux drivers for Parhelia. Matrox has removed access to the G200 and G400 databooks. Matrox has also placed an EULA prohibiting open source development on their driver source downloads, though it is possible to avoid agreeing to it; this further divulges their current attitude. No known open source developer is part of Matrox's development partner program, nor has anyone been known to receive a response from their developer inquiry form. Matrox is largely considered irrelevant to the consumer 3D industry.

3dfx released driver source code developed internally. 3dfx also released databooks to developers. Precision Insight developed the open source Linux drivers for 3dfx hardware. 3dfx is now irrelevant, buried by lawsuits unrelated to its source code release. Documentation or code for the VSA-100 SLI and T-buffer features was not released.

S3, VIA, SiS, and Trident Microsystems did not release source code or binary-only code for any graphics product when they were separate companies. SiS and Trident Microsystems have never released databooks for any graphics product.

S3 had a developer program to receive databooks, with the exception of the Savage2000 which no one has been known to possess a databook for. Based on those databooks, an open source S3 ViRGE driver was created.

VIA used Trident graphics cores until acquiring S3. VIA/S3 has released source code for Unichrome, including the MPEG decoding hardware, but no databooks. Based on that source code, an open source driver for the S3 Savage3D/Savage4 was constructed. The quality of the released Unichrome source code is contested and some hardware features are considered unreliable as long as the source code released as of 2006 is the only documentation. No source code or databooks for Deltachrome or Gammachrome were released. VIA/S3 has released binary-only Linux drivers for all of their integrated graphics platforms.

ST has never released PowerVR, including Kyro, databooks or source code. A binary-only Kyro Linux driver exists. Kyro is considered a dead product. PowerVR and SGX cores are currently being licensed, again with no databooks or source code.

Rendition never released databooks, source code, microcode source, or Linux drivers. Rendition considered the binary-only Verite SDK sufficient for third-party developers. A V2200 databook was leaked after Rendition became irrelevant.

XGI, a spinoff of SiS's graphics division, acquired Trident. XGI has released binary-only drivers for its Volari products but no databooks. The situation has not changed since XGI's graphics unit has been purchased by ATI.

3DLabs has released or leaked a Permedia2 databook and no others. 3DLabs has not otherwise released source code or databooks for Permedia2, Permedia3, GLINT R3/R4, or Wildcat. Binary-only drivers for Linux were released. 3DLabs is now owned by Creative Labs and pursues core licensing.

Intel released databooks for i810 and a mix of source code and binary-only drivers for later integrated graphics solutions. No other databooks have been released.

Based on the above, we can summarize the places where vendors would like to hide programming information about their cards.

– BIOS/firmware
– HAL, if applicable (may be part of BIOS as in Intel)
– 3D processor microcode
– 3D driver

There can be significant assets inside the driver source code. Some of these, such as Macrovision and S3TC texture compression, are patented technologies which are licensed from third parties or covered by cross-licensing agreements. Others are in-house optimization techniques, either hardware-dependent or hardware-independent, which provide a competitive advantage in the market.

The patent system encourages the publication of techniques, but its current climate discourages the publication of source code. Software and technology-related patents tend to border on idea patents. Some are patents on a functional block that takes some input and transforms it to some output; the functional block and output can be described so vaguely as to apply to any transformative process involving the given input. Vague patents can cover software techniques as well as hardware techniques. Even without knowing the physical hardware design, NVidia and ATI representatives have argued to Jon Smirl that any revelation of hardware interface and internal state would be dangerous with respect to patents. However, it is questionable how much weight this holds, given that engineers flow back and forth within the industry.

There is no statute of limitations for patent infringement, so any related patent would have to expire before it would be considered safe to reveal one's hand with respect to potential patent violations. This means that a piece of hardware and the accompanying software must be at least 20 years old to be considered safe with respect to patent infringement.

So in summary:
– Companies cannot open-source their drivers that contain externally licensed software and patents.
– Companies that remove the externally licensed software and patents and open-source the rest of the driver would be giving up a competitive advantage in terms of implementation secrets, and potentially opening themselves to patent suits.
– Companies that remove external assets as well as remove any code that would dissolve a competitive advantage if divulged would still suffer from patent liability.
– Companies that externally publish hardware specifications would suffer from patent liability.
– For some companies, it is speculated that the driver source code is the hardware specification.
– An NDA does not change the fact that if the NDA is violated and the individual penalized, the company could still be sunk.
– 20 years must pass before hardware can be safely “opened”. By that time, the hardware's capabilities will be irrelevant, and it is likely that if the company is even still in business, that the associated documents have been lost (possibly intentionally).

The current patent system seems to be the problem. By encouraging vague patents, it invokes the paranoia that discourages publication of driver source code or hardware specifications (if they exist) to third parties.

There is little to be done about the patent system. Since a contract between ATI and NVidia not to sue each other over information divulged through open source development is unlikely, we have few options.

1) Reverse-engineer existing drivers.
2) Convince ATI and NVidia to hire open source developers to work on an open source driver tree free of legal liabilities.
3) Create hardware on our own like OpenGraphics.

1 is difficult, but not as unattractive as it would appear. A reverse-engineered driver will never catch up with hardware to be able to play the latest games, but it will be capable of running all of the existing base of software up until the point the driver was released. So as time passes, this existing software library is larger for a given piece of hardware developed at that time than for a piece of hardware developed at a previous time. This phenomenon is also due to the fact that new software targeting older hardware still in deployment continues to be developed. The hardware itself becomes more complex and difficult to reverse-engineer, but the reward for successfully reverse-engineering successive pieces of hardware is a monotonically increasing percentage of application coverage.

A reverse-engineered driver cannot, however, expose features left unimplemented in the original driver, and it provides only limited information on how the driver architecture could be improved through more efficient use of the hardware.

3 is difficult for a community development project due to the nature of hardware development. FPGA hardware is at least an order of magnitude less efficient/capable than ASIC hardware. However, since graphics is a task of highly parallel nature, it should be possible to exploit this to create fast FPGA-based graphics boards. The cost will not be reasonable compared to ASIC boards, but it will provide an option for those who cannot use or cannot tolerate binary-only drivers.

To be continued.

Using a vfat filesystem sanely in Linux

Monday, May 1st, 2006

Here is how I set up my fstab to automatically mount my vfat partitions, owned by a single user, at boot:

/dev/hdg1 /home/nemesis/mnt/c vfat defaults,user,dmask=022,fmask=113,uid=nemesis,gid=disk,noatime 0 0

The mount point is under user nemesis's home directory and owned by the user.

The 'user' mount option allows the user owning the mount point to mount and unmount the volume. Even though it is mounted automatically at boot, this can still be handy. It also implies the 'noexec', 'nosuid' and 'nodev' options, meaning that execution of files is not permitted (WINE will still work), files with the suid bit are not allowed, and device special files are not allowed.

Since vfat (FAT32) has no concept of security, the ownership and permissions must be synthesized.

uid=nemesis means the user 'nemesis' will be the Unix owner of all files and directories in the filesystem.

gid=disk means that all files and directories will be under group 'disk'.

dmask=022 means that directories will have a mode of 755, corresponding to rwxr-xr-x.

fmask=113 means that files will have a mode of 664, corresponding to rw-rw-r--.

This setup means that files will not be falsely flagged as executable while the noexec option disallows executing files. It also means that only user nemesis and group disk can write to the filesystem.
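The mask arithmetic behind those modes can be sketched as follows (my own illustration): each mask simply clears its bits from a base mode of 777, since FAT32 files carry no real Unix modes.

```python
# How vfat synthesizes permissions: each mask clears bits from a
# base mode of 777 (FAT32 stores no Unix modes of its own).
def synth_mode(mask_octal):
    return 0o777 & ~mask_octal

print(oct(synth_mode(0o022)))  # 0o755 -> rwxr-xr-x (directories, dmask)
print(oct(synth_mode(0o113)))  # 0o664 -> rw-rw-r-- (files, fmask)
```

Note that fmask=113 clears all three execute bits, which is why files are not falsely flagged as executable.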

Unfortunately, the vfat driver has no 'sync' or even 'dirsync' support, so preventing data corruption in an unclean shutdown (i.e. due to a WINE/video driver crash) or in an unexpected media removal (i.e., a USB flash device) is impossible. Using autofs to manage removable vfat media is currently the best option.

The 'noatime' option prevents the vfat driver from updating the access time in the file metadata on each access, a typically useless source of overhead.

Climate Change

Monday, May 1st, 2006

Is global warming actually occurring? If so, why do we seem to be in a local cooling trend?

If global warming is occurring, is the magnitude of the warming unprecedented?

If unprecedented global warming is occurring, is it the result of the greenhouse effect, or some other natural process?

If global warming is occurring because of the greenhouse effect, is it due to CO2 or some other greenhouse gas, is it possible that some other atmospheric agent that was previously absorbing solar radiation is dwindling, and what is the magnitude of the greenhouse effect's contribution compared to that of other sources?

Is it possible that elevated CO2, and thus a greater magnitude of warming due to the greenhouse effect, follows global warming that is initiated by some other means, as opposed to CO2 being the direct cause of the warming process?

If global warming is occurring primarily because of CO2's contribution to the greenhouse effect, how much CO2 emission is under our control? Can we reverse the warming trend even if we zeroed CO2 emissions overnight?

Is a precedented level of global warming a net benefit or a net detriment to the US? What about globally? What about an unprecedented level?

If global warming is a net detriment, what would be the impact of emissions restrictions significant enough to reverse the warming process, in terms of economic growth and in terms of poverty and lost lives, especially in undeveloped nations?