
When Content Goes Missing - A Lesson From McAfee

Should there be an ethical consideration when removing content from a site without leaving at least some evidence that something existed there? What if the site in question is an Information Security repository of information? What if the content was brand new and was erased before anyone noticed its presence (or at least before you think anyone noticed it)?

There will probably never be a consensus on how to treat information that has appeared and disappeared like this, and when it happens it can be difficult to prove that the content ever existed in the first place - especially if it was withdrawn too quickly for the various webcrawlers to reach and index it.

A posting on the McAfee Avert Labs blog carried out just such a disappearing act within hours of being added to the site.

Despite the content being pulled from the blog (if it ever made it there in the first place), it did make it into the RSS feed for a few hours. Cached versions of the feed pulled down during this window show the content, but when the feed was scraped again a few hours later, it had disappeared from the feed as well.

It seems that McAfee is going to some lengths to make the content disappear completely from the Internet but, thanks to those who captured the content before it could be pulled, it is going to stay around for a while (Google it). Ironically, one of the sites that managed to preserve the full content as delivered is a McAfee site which appears to include verbatim copies of the Avert Labs posts amongst its content (though it may not be available there for much longer).

So why might the content have been pulled? Was there something in it that might have been an embarrassment for McAfee? The entry starts out claiming that the growth in malware attacks is "partly due to how successful AV technology really is". That might sound reasonable, but it is a specious claim. Does the author really believe that "if AV scanners were not so successful in blocking trojans and viruses there would be little need for the bad guys to write new ones"?

After waving the "we must be great, otherwise the malware authors wouldn't be writing so many variants" flag, the author proceeds to complain that developing a functional product is far too hard, and that other developers should stop doing things the way they are in order to make life easier for the AV companies.

Singled out in the post is the use of packers and protectors in producing a final software product. Packing, through tools like UPX and Petite, can help legitimate software authors reduce the space that their final binaries take up. The same tools are favourites of malware authors, who use them to squeeze more content into whatever limited space they have allowed themselves, especially when trying to push payloads through as small a gap in the network as possible. The smaller the payload and the quicker it crosses the network, the less likely it is to attract the attention of anyone or anything watching for suspicious behaviour.
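
The core mechanics of a packer are easy to sketch: compress the payload and pair it with a stub that restores it at run time. The following minimal example uses Python's zlib as a stand-in for a real packer like UPX, and the payload file name is hypothetical:

    import zlib

    # Read a binary payload (hypothetical file name) and "pack" it by
    # compressing it. Real packers like UPX also prepend a decompression
    # stub so that the packed file remains directly executable.
    with open("payload.bin", "rb") as f:
        payload = f.read()

    packed = zlib.compress(payload, 9)
    print(f"original: {len(payload)} bytes, packed: {len(packed)} bytes")

    # The stub's half of the trick: at run time the packed bytes are
    # expanded back to the original image before control transfers to it.
    unpacked = zlib.decompress(packed)
    assert unpacked == payload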

The use of a packer also allows the same payload to be presented as a different binary, which is one of the reasons why so many variants of malware are being detected by scanning software (or at least being claimed by the AV companies) - a lot of them are slightly different packed forms of a payload that is identical once unpacked. For signature based scanners, this is both a goldmine and a threat - the goldmine of being able to claim each variant as a unique detection, and the threat of somebody using a freshly repacked version that the signature libraries haven't been told to recognise yet.
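
That repacking trick can be demonstrated in a few lines. In this toy sketch, the same payload is wrapped with two different single-byte XOR keys, standing in for two runs of a repacking tool: the packed forms hash differently, so a signature over the packed bytes of one variant misses the other, while the unpacked payload is byte-for-byte identical.

    import hashlib

    payload = b"MALICIOUS-PAYLOAD " * 64  # toy stand-in for a real payload

    def xor_pack(data: bytes, key: int) -> bytes:
        # Trivial "packer": XOR every byte with a one-byte key.
        return bytes(b ^ key for b in data)

    variant_a = xor_pack(payload, 0x41)
    variant_b = xor_pack(payload, 0x7F)

    # The packed forms hash differently, so a signature taken over
    # variant A's bytes says nothing about variant B...
    print(hashlib.sha256(variant_a).hexdigest())
    print(hashlib.sha256(variant_b).hexdigest())

    # ...but unpacking either variant recovers the identical payload.
    assert xor_pack(variant_a, 0x41) == xor_pack(variant_b, 0x7F) == payload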

Likewise, the use of protectors (Armadillo and Themida are quoted in the post), designed to obfuscate runtime code and make reverse engineering more difficult, is also an area where legitimate software developers and malware authors have found overlapping interests. With a protector, not only can the same malware source compile to different resultant application binaries, but it also makes it difficult for a scanner operating on a non-interference basis to work out what is going on inside. Again, it can both help and hinder signature based approaches in much the same way as with the packer tools.
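
A toy illustration of why a protector frustrates a scanner working on a non-interference basis: the real logic is stored in an encoded form and only decoded at run time, so nothing recognisable appears in the file's static bytes. This is only a hedged sketch of the principle, not how Armadillo or Themida actually work:

    # In a genuinely protected binary only the encoded bytes would exist
    # on disk; they are derived inline here to keep the demo self-contained.
    ENCODED = bytes(b ^ 0x55 for b in b"print('real behaviour runs here')")

    def run_protected() -> None:
        # Decode and execute only at run time - a scanner that refuses
        # to execute (or emulate) the file never sees the decoded logic.
        source = bytes(b ^ 0x55 for b in ENCODED).decode()
        exec(source)

    run_protected()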

Lazy AV developers, and those trading accuracy for speed and system resources (they may not necessarily be the same group), build their systems to alert on files that identify as having been packed with a packer and / or protector. It is then up to the developer whether to spend the time and resources looking inside the file for proof of maliciousness, perhaps even forcing partial execution to trace the logic in a protected binary, or to just wave the flag of surrender and isolate what might be a legitimate application. This last course of action might be the quickest, but it unnecessarily scares users, leaves them with a false sense of protection (snake oil, anyone?) and potentially harms legitimate developers who have made use of the same tools as malware authors. If your application keeps being flagged as a false positive by AV scanners, it shouldn't be your job to go and show the AV developer that they are wrong. It almost borders on a restriction of trade for developers in that situation.
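
That cheap heuristic is straightforward to sketch: flag any file that carries a known packer marker (UPX-packed binaries carry the "UPX!" magic and UPX0/UPX1 section names) or whose byte entropy is suspiciously high, without ever establishing what the file actually does. A minimal version in Python, with the 7.2 bits-per-byte threshold being an assumed, commonly quoted rule of thumb:

    import math
    import sys
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        # Bits per byte; compressed or encrypted data sits close to 8.0.
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def looks_packed(path: str, entropy_threshold: float = 7.2) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        if not data:
            return False
        # Marker check: UPX leaves its magic and section names in the file.
        if b"UPX!" in data or b"UPX0" in data:
            return True
        # Entropy check: high entropy is simply treated as "probably packed".
        return shannon_entropy(data) > entropy_threshold

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path, "-> flagged" if looks_packed(path) else "-> clean")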

McAfee's approach, in the pulled post, seems to be to discourage legitimate developers from using these tools: "We would urge all developers who use software protection to think twice before doing so. There is an increasing risk that your legitimate files will be blocked by AV software by mistake."

Despite the body of the post alluding to the effectiveness of packers and protectors at actually protecting binary content, the closing point runs contrary to what came before it: "The point is that software protectors are just not a secure software technology any longer because they have been misused so much. Do not use it if you can avoid it."

Were McAfee justified in pulling the content, based on how it might be seen against their commercial offerings, and even how it might reflect on the antivirus industry as a whole? You can make up your own mind - the pulled content appears in full below:

"Who digs the elephant trap?

Igor Muttik

Yesterday, 9:40 PM (May 28th, 2009)

It is ironic but the extreme growth rate of malware attacks is actually partly due to how successful AV technology really is. Quite simply - if AV scanners were not so successful in blocking trojans and viruses there would be little need for the bad guys to write new ones. One can even say that both AV developers and malware writers inadvertently work together in digging an elephant trap for users because more malware means advancing scanning capabilities which often means slower computer operation for all of us.

Figuratively speaking, the primary tools that the bad guys are using to dig their side of the trap and evade detection are packers (like UPX and Petite) and protectors (like Armadillo and Themida). Packers are legitimately used to reduce the size of programs (saving disk space), while protectors are legitimately used to prevent patching, hacking or reverse engineering. For malware production, however, packers and protectors are useful as they can often obfuscate original malware beyond recognition by AV.

Commercial protectors are especially loved by malware writers because they can put a protective envelope on top of, say, their spam-bot and it will be well hidden inside. Additionally, it will now really look more like a legitimate file obfuscated with the same protector. Malware writers use this trick more and more frequently.

As a result, on any average computer, AV can frequently encounter, say, a Themida-packed computer game and a Themida-packed spam-bot. To determine what is what an AV product has to know what is "under" the protecting envelope. Unfortunately, this simply cannot be done very quickly. It takes computing cycles....

We would urge all developers who use software protection to think twice before doing so. There is an increasing risk that your legitimate files will be blocked by AV software by mistake or that there will be an unpleasant slowdown due to long analysis. Either can cause troubles for users. If you feel that you really must use an obfuscating protector at least digitally sign your files. That would reduce the level of suspicion by introducing traceability to the source.

The point is that software protectors are just not a secure software technology any longer because they have been misused so much. Do not use it if you can avoid it."
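
The post's parting advice - digitally sign your files to introduce traceability to the source - can at least be sketched. The hedged example below uses Ed25519 from the third-party cryptography package to produce and verify a detached signature over a (hypothetical) release file; a real release process would use a code-signing certificate such as Authenticode rather than a raw key pair:

    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()

    with open("release.bin", "rb") as f:  # hypothetical release artifact
        data = f.read()

    signature = private_key.sign(data)    # detached signature over the file

    # Anyone holding the public key can verify that the file is untampered
    # and traceable to whoever controls the private key.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, data)
        print("signature OK")
    except InvalidSignature:
        print("signature mismatch")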

Update - Intrigued by this story? Why not look at this related one about Pace suppressing open reporting of disassembly and reverse engineering efforts against Interlok.

29 May 2009

