by CSO Contributor

A Good Worm Is Hard to Find

Aug 30, 2004

In August last year, a week after the Blaster worm infected computers across the

Internet, a “benevolent” worm started spreading in its wake. Called Nachi, Blast.D and

Welchia (why can’t the people who name these things pick a single name and stick with

it?), it infected computers through the same vulnerability that Blaster did. But its effects

were different. If it found Blaster, it deleted it and then applied the relevant Microsoft

patch to close the vulnerability so Blaster could not reinfect. Then Nachi scanned the

network for other infected machines and repaired them, too.

Blast.D represents a cool-sounding idea that we hear about again and again. Why don’t

we use worms for good instead of evil? Worms are great at infecting computers, so why

don’t we use them to patch vulnerabilities, update systems, and improve security?

Benevolent worms are attractive for several reasons. One, they are poetic: turning

weapons against themselves. Two, they let ethical programmers share in the fun of

designing worms, and it is fun. And three, they sound like a promising solution to

one of the nastiest online security problems: patching vulnerabilities.

Everyone knows that patching is in shambles. Users, especially home users, don’t do it.

At the corporate level, the best patching techniques involve a lot of negotiation, pleading

and manual labor, things that nobody enjoys very much. From the point of view of a

software engineer, benevolent worms look like a killer app. You turn a difficult social

problem into a fun technical problem. You don’t have to convince people to install

patches. You use technology to force them to do it.

And that’s exactly why they’re a terrible idea. Patching other people’s machines without

annoying them is good; patching other people’s machines without their consent is not. A

worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are

inherently bad, and giving them beneficial payloads doesn’t make things better. A worm

is no tool for any rational network administrator, regardless of intent. When Nachi was

released, no company suggested that it be allowed to infect the Internet, even though its

payload was ostensibly benevolent.

A successful worm runs without the consent of the user. It has a small amount of code,

and once it starts to spread, it is self-propagating and will keep going automatically until

it’s halted.

These characteristics are simply incompatible with a good software distribution

mechanism. The characteristics of good software distribution (giving the user more

choice, making installation flexible and universal, allowing for uninstallation) make

for a worse worm. The characteristics of good worms (quieter and less obvious to the

user, smaller and easier to propagate, impossible to contain) all make for bad

software distribution.

Experimentation, most of it involuntary, proves that worms are very hard to debug

successfully. In other words, once worms start spreading it’s hard to predict exactly what

they will do. Some worms were written to propagate harmlessly but did damage,

ranging from crashed machines to clogged networks, because of bugs in their

code. Many worms were written to do damage and turned out to be harmless (which is

even more revealing).

Intentional experimentation by well-meaning system administrators proves that in your

average office environment, code that successfully patches one machine won't work on

another. Indeed, sometimes a patch does more damage than the external attack it was

meant to prevent.

Combining a tricky problem with a distribution mechanism that’s impossible to debug and

difficult to control is fraught with danger. Every system administrator who’s ever

distributed software automatically on his network has had the “I just automatically, with

the press of a button, destroyed the software on hundreds of machines at once!”

experience. And that’s with systems you can debug and control; self-propagating

systems don’t even let you shut them down when you find the problem. Patching systems

is fundamentally a human problem, and beneficial worms are a technical solution that

doesn’t work.

HP is currently struggling with this issue and claims to have found a middle route. Its

“Active Countermeasures” strategy tries to use the spreading capabilities of worms to

patch network systems. I am not impressed. Patching systems is good, and there are

existing software distribution mechanisms to do that. Going around them is simply bad

system administration.

Similar issues also arise with spyware. Spyware doesn’t spread, but it can “infect” a

user’s machine without his knowledge. Again, we can use the distinction between

software distribution and viral propagation: If the user knowingly and willingly invites the

spyware into his computer, then it’s okay. If the spyware surreptitiously installs itself, then

it’s a worm. It’s the propagation mechanism that matters, in addition to the payload.

I’m a big fan of automated network updates. They ease the workload on system

administrators and make keeping up with patches possible. The key is control; corporate

network administrators (or whatever sysadmins they outsource the problem to)

need to maintain it. Once they lose control, they lose the ability to manage their

networks. And that’s bad.