| By Mark Joseph Edwards |
Several labs conduct tests that pit antivirus solutions against sets of malware circulating in the wild on the Internet.
Now questions are being raised about how well those tests represent real-world PC threats and how accurately their results reflect an antivirus program’s ability to prevent infections.
Antivirus testing procedures come under fire
In my May 1 column, I described the five antivirus programs that aced Virus Bulletin’s PC defense tests. Virus Bulletin is probably the biggest name in the field of such testing; the service’s results are highly credible.
Antivirus tests are useful as long as you know the basis of the tests and what it takes to achieve a given score. To conduct its tests, Virus Bulletin uses a set of malware selected from the WildList, a roster of malware known to be actively circulating on the Internet.
For years, the WildList has been the de facto standard for measuring the effectiveness of antivirus tools. The list dates back to the mid-1990s and is currently maintained by ICSA Labs, which is owned by Cybertrust. Cybertrust was bought by Verizon about a year ago.
Various antivirus labs use the WildList as part of their testing of an antivirus product’s ability to detect and remove malware. In 2007, some antivirus makers began to complain that the WildList was no longer sufficient for antivirus tests. Several companies suggested that a more complete set of tests be run to examine a security suite’s entire arsenal of protection.