Security Through Obscurity

/si-kyu̇r-ə-tē thrü äb-skyu̇r-ə-tē/

n. A name applied by hackers to most OS vendors' favorite way of coping with security holes -- namely, ignoring them, leaving them undocumented, and trusting that nobody will find out about them and that those who do won't exploit them. This never works for long and occasionally sets the world up for debacles like the RTM worm of 1988, but once the brief moments of panic created by such events subside, most vendors are all too willing to turn over and go back to sleep. After all, actually fixing the bugs would siphon off the resources needed to implement the next user-interface frill on marketing's wish list -- and besides, if they started fixing security bugs, customers might begin to *expect* it and imagine that their warranties of merchantability gave them some sort of *right* to a system with fewer holes in it than a shotgunned Swiss cheese, and then where would we be?
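The canonical form of the pattern is the undocumented backdoor shipped in production code -- the 1988 worm spread in part through sendmail's then-little-known DEBUG mode. A minimal sketch of the anti-pattern in C (the command names here are invented for illustration, not taken from any real program):

    /* Hypothetical server command loop illustrating "security through
     * obscurity": a debug backdoor ships in production, undocumented,
     * on the theory that nobody will ever stumble across it. */
    #include <stdio.h>
    #include <string.h>

    static void handle_command(const char *cmd)
    {
        if (strcmp(cmd, "HELP") == 0) {
            puts("Commands: HELP, QUIT");   /* the documented interface */
        } else if (strcmp(cmd, "XYZZY_DEBUG") == 0) {
            /* Undocumented: grants maintenance access.  "Secure" only
             * until someone reads the binary with strings(1). */
            puts("debug mode enabled");
        } else {
            puts("unknown command");
        }
    }

    int main(void)
    {
        char line[256];
        while (fgets(line, sizeof line, stdin)) {
            line[strcspn(line, "\r\n")] = '\0';   /* strip newline */
            if (strcmp(line, "QUIT") == 0)
                break;
            handle_command(line);
        }
        return 0;
    }

The "hidden" command is exactly as secret as the binary that contains it; the obscurity buys nothing once anyone bothers to look.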

Historical note: It is claimed (with dissent from ITS fans, who say they had long used 'security through obscurity' in a positive sense) that this term was first used in the USENET newsgroup comp.sys.apollo during a campaign to get HP/Apollo to fix security problems in its UNIX-clone Aegis/DomainOS. They didn't change a thing.