Re: Large scale test of AV




You're "Oliver".  I'm "Perry".  He's "Bill".

:)

I wasn't replying to you, Oliver; I was explaining to Bill why we
trusted our business to "some half-assed product like AntiVir".

Missing the machine-gun burst of a new worm that *most* anti-virus
programs couldn't detect until they were updated doesn't make
AntiVir a generically "bad" product.  We were told six detection
failures in four days was a "once-in-a-lifetime probability", but
even so, relying on AntiVir after it went tits-up like that was an
unacceptable risk in our mission-critical environment, so it had
to go.

AntiVir detected 98.13% of the polymorphic viruses used in Virus
Bulletin's latest test.  A miss rate of "only" 1.87% doesn't sound
like much, but that 1.87% translates to a possible 128 attacks that
NOD32 (with its 100% detection in the same test) would have
blocked.
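
To save you the arithmetic, here's the back-of-the-envelope behind
that 128.  The ~6,845 sample count is inferred from the two figures
above, not quoted from Virus Bulletin:

    # How big a sample set does a 1.87% miss rate need in order
    # to produce 128 misses?
    miss_rate = 1 - 0.9813        # AntiVir's quoted detection rate
    misses = 128                  # misses cited above
    samples = misses / miss_rate  # implied test-set size
    print(round(samples))        # -> 6845 polymorphic samples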

Add to that NOD32's claim of being able to identify most "zero
day" malware on sight (129 attacks by unknown crap blocked
since the rollout in November gives that claim serious weight)
and you'll see why I wondered why the OP switched to AntiVir.

AntiVir was a much better solution *for us* than the Symantec
it replaced.  NOD32 has proved to be a much better solution
*for us* than the AntiVir it replaced.  YMMV.

Perry




Re: Large scale test of AV


The proactive capability of an AV product is certainly one of the
most important aspects to consider when choosing a product.  Both
AV-Test.org and AV-Comparatives have conducted tests designed
to evaluate this aspect.  In the dim past, the Uni Hamburg VTC
conducted their version of such a test.

Large scale detection tests alone leave out so much of importance.
I heard recently about a type of false positive that really "drives me
up the wall" :) It has been alleged that a couple of vendors in
particular have a bad habit of alerting "heuristically" on certain
runtime packers used by malware authors. Thus, if some legit program
happens to use one of those packers, it will be flagged as
"suspicious" (or even worse) regardless of the code in the file.

Another aspect I've noticed is the "generic" detections done
by some products, where many different variants of a malware
family are lumped together and reported under one generic name.
Often the generic detection is "too loose", and a misidentification
occurs.
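
A loose generic works roughly like this; the byte pattern below is
invented for illustration, not taken from any product's signatures:

    # A "generic" signature: one short, permissive byte pattern
    # meant to cover every variant of a family.
    GENERIC_NAME = "Worm.Generic"
    GENERIC_PATTERN = b"\x90\x90\xeb"   # tiny made-up pattern

    def identify(data):
        if GENERIC_PATTERN in data:
            # Every match, related variant or not, gets reported
            # under the same family name -- which is how the
            # misidentifications happen.
            return GENERIC_NAME
        return None

The shorter and more permissive the shared pattern, the more
unrelated files get swept under one name.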

It would be nice IMO to see large scale tests which include other
categories such as spyware and adware as well, but limited to
products that meet certain criteria, such as:

1. Good proactive capability consistent with
2. "real" heuristics and generics

where products that do "heuristics" in a phoney way, and/or
far too often misidentify malware due to overly loose generics,
are excluded from the tests. The effect would be to put pressure
on vendors to improve quality as well as quantity.

Art
http://home.epix.net/~artnpeg
