Do you have any idea how hard it is to write perfect non-trivial software on the first try?
Did I say somewhere that it needs to be done on the first try? This is precisely the problem: companies are releasing "first tries" as finished products and letting their customers do the QA for them. Economically, this works out better for them, because customers like yourself are wusses who apologize for them instead of holding them accountable for their shoddy engineering practices.
I imagine you think a stock RH 4.2 box would be fine on the Internet, unpatched and unfirewalled for eternity. Or 5.2. Or 6.2. Or 7.3. Or 8. Or 9. Or FC3 or RHEL or SuSe or any other distro you can name.
Absolutely not, and where did I claim this? Red Hat's release policy is atrocious. I never said it was impossible to build an insecure system out of open source software. My assertion is that open source has the *capability* of producing a system in which more confidence can be placed than in even the best proprietary system.

From my perspective, Debian is very close to an ideal secure system. Full disclosure is encouraged and is followed promptly by patches; cron-apt then checks twice a day for _security_updates_only_ and downloads and installs them. Security fixes are backported to the release version of the package, so the patch doesn't drag in unrelated changes that could break anything else.
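
For the record, the plumbing behind this is nothing exotic. Setting aside cron-apt's own config files, the hand-rolled equivalent (the paths and times here are illustrative, not gospel) is just a separate sources list pointed at the security archive plus a cron entry that upgrades from that list alone:

  # /etc/apt/security.sources.list -- the security archive and nothing else
  deb http://security.debian.org/ stable/updates main contrib non-free

  # /etc/crontab -- check twice a day, install only what the security list offers
  0 4,16 * * * root apt-get -qq -o Dir::Etc::SourceList=/etc/apt/security.sources.list update && apt-get -qq -y -o Dir::Etc::SourceList=/etc/apt/security.sources.list dist-upgrade

Since only the security archive is in that list, the upgrade can never pull in a new upstream version from testing or unstable.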

A window of exploitation still remains (between when an exploit is developed from the disclosure and when the patched package is installed), but because running services as root is strongly discouraged (as opposed to Red Hat, which still embraces the godawful sendmail), an attacker has to mount a multi-stage attack within that window: first remotely compromise the unprivileged service, then use that foothold as a lever to exploit a local root vulnerability in some other package. That is a far cry from something like the Windows RPC worm, which needs to do nothing more than send a message to an unpatched machine to get root on it.
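
To make the unprivileged-service point concrete, here's the standard Unix dance in miniature (a rough sketch, not anyone's actual daemon; the "daemon" account and port 25 are just placeholders): bind the privileged port while still root, then drop privileges for good before touching any network input.

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <pwd.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  int main(void)
  {
      /* Bind a privileged port (<1024) while we are still root. */
      int s = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr;
      memset(&addr, 0, sizeof addr);
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(25);
      if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
          perror("bind");
          return 1;
      }

      /* Drop root permanently: group first, then uid.  (A real daemon
         would also clear supplementary groups with setgroups().) */
      struct passwd *pw = getpwnam("daemon");
      if (!pw || setgid(pw->pw_gid) < 0 || setuid(pw->pw_uid) < 0) {
          perror("drop privileges");
          return 1;
      }

      /* Everything that parses hostile network input runs as "daemon".
         A bug past this point hands the attacker an unprivileged shell;
         they still need a separate local root hole to finish the job. */
      listen(s, 16);
      /* ... accept() and serve ... */
      return 0;
  }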

So: peer review by competent developers means the software starts out closer to correct; full disclosure demands immediate fixes for the security bugs that sneak in anyway; and a multi-layer security model provides damage control when those social measures fail. The question is, do all of those measures offset the security-by-obscurity that proprietary software enjoys and open source gives up?

First of all, this is one area where having multiple competing implementations of a service is great: you can deploy one FTP server, for example, and spoof its banner so it appears to be some other popular FTP server. Beyond that, though, obscurity is only a slowdown for someone actively hunting for vulnerabilities, and it can lull a vendor and its customers into a false sense of security if the software has lurking faults. In the real world, Microsoft's closed-source and limited-disclosure policies have not spared it the worst vulnerabilities of the last ten years, while on the open source side it has been a few years since the last remote-push root vulnerability (OpenSSH) that affected a significant majority of deployed machines.
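
On the banner-spoofing aside: with vsftpd, for instance, it amounts to a one-line config change (the greeting text below is just an example of impersonating another server's banner):

  # /etc/vsftpd.conf
  ftpd_banner=ProFTPD 1.2.9 Server ready.

It won't fool careful fingerprinting, but it cheaply poisons the version data that script kiddies scan for.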

So no, open source obviously isn't immune to security problems, but good luck writing the equivalent of the MS-Blaster worm for open source systems. The mitigating factors above, coupled with the inherent diversity of open source systems, are going to be a real headache for an attacker.
