I’ve decided to move us away from McAfee and onto Kaspersky. I’ve used McAfee’s product here for more than ten years and have been pretty happy with it; its protection has been top notch, too ‘top notch’ as a matter of fact. I’ve actually gotten away from even installing McAfee on mission-critical systems due to its penchant for bringing systems to their knees at seemingly random intervals. It had gotten to the point that I didn’t see the point of paying for McAfee when I had installed it so sparsely.
It was at that point I knew a change was required: a virus scanner barely works to begin with, and not at all if it isn’t installed. I’ve had a foul experience with Symantec (doesn’t seem to stop anything) and Trend (ditto, at least for their home product), so I decided to go with Kaspersky.
Interestingly, though, when I first went to install it on a batch of PCs, I got a blue screen error of 0x000000d1 on one of the PCs (my boss’s system!).
As it turned out, the issue had nothing to do with Kaspersky, and everything to do with some bum DNS entries. In my initial testing I was installing to two computers belonging to users who weren’t in that day, but then my boss called and said that it was installing on her machine. I thought this was odd, but when I checked the logs Kaspersky did indeed say that I had installed it to the incorrect system. Flustered, I ran it again while double-checking the computer name (which is fairly similar), and around that time my boss’s PC blue screened and Kaspersky again said that I was installing to the wrong computer. At that point I resolved to use the IP address of the computer I actually wanted, so I pinged it and plugged the address into the script. As a joke, before running it I pinged my boss’s computer to see what its address was, and it turned out to be the same. My desired target PC had the wrong address assigned to it in DNS.
Kaspersky actually proved rather adept here: it detected the name mismatch and helpfully replaced the ‘wrong’ name with the ‘right’ name that the system was reporting. The blue screen was caused by trying to force an install over the existing installation.
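A simple sanity check would have caught this before the first install: resolve every target name up front and flag any address that two names share. Here’s a rough Python sketch of that idea; the hostnames and addresses are made up for illustration, and the canned resolver just stands in for a real DNS lookup (you’d pass the default `socket.gethostbyname` in practice).

```python
import socket
from collections import defaultdict

def find_dns_collisions(hostnames, resolve=socket.gethostbyname):
    """Resolve each hostname and report any IP shared by two or more names.

    Two names resolving to the same address usually means a stale DNS
    record, i.e. an install aimed at one name may land on another machine.
    """
    by_ip = defaultdict(list)
    for name in hostnames:
        try:
            by_ip[resolve(name)].append(name)
        except OSError:
            by_ip["unresolved"].append(name)
    return {ip: names for ip, names in by_ip.items() if len(names) > 1}

# Canned resolver standing in for the office DNS server (made-up data):
fake_dns = {"WS-BOSS": "10.0.0.15", "WS-TARGET": "10.0.0.15", "WS-OTHER": "10.0.0.22"}
collisions = find_dns_collisions(fake_dns.keys(), resolve=fake_dns.__getitem__)
print(collisions)  # {'10.0.0.15': ['WS-BOSS', 'WS-TARGET']}
```

Running something like this against the whole install list turns a surprise blue screen into a one-line warning before any package goes out.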
A couple of days after I had updated a series of products within the McAfee EPO, I started getting complaints from users about slow access times over the WAN. After running a technically intensive test (ping) I determined that their complaints were well founded. In an earlier time I would have hopped on the router and done who knows what to find the offending party, but I’ve been spoiled these last couple of years by having outsourced routers (inaccessible to me) with our MPLS setup. Not knowing what was causing the issue, I tried toggling some Internet services, investigated file shares, e-mail usage, etc., before taking a ‘what the heck’ approach and stopping the EPO Server service. The instant I stopped it, the bandwidth issue cleared up. Started it back up, and the issue came back.
Thinking that the issue lay with the EPO program itself, I figured the best approach would be to try to upgrade myself out of the problem by moving from EPO 4.0 to EPO 4.5. This was an event all its own and required a bit of work to get past a database upgrade issue. After I was done the system came back up and…same issue: the WAN pipe gets completely clogged (apart from our class of service specs, of course). I tried following some bandwidth minimization strategies put forward by McAfee, but they weren’t really a good fit for the issue we were having. I wasn’t getting anywhere with the logs in trying to determine what the huge chunk of data being sent to the server was, so I fired up Network Monitor on the off chance that some XML file was being sent in clear text and would let me determine what the data was.
When I got into the captured data I began scanning packets, and while none of them were plain text, I did notice a huge disparity in which machines were communicating with the server. It was so lopsided that two PCs, one at each of our remote locations, appeared to be the sole users of the server over that brief window. These PCs were also communicating over port 8085, which is the agent communication port for the EPO server. I opened the Services console on the trouble units and stopped the McAfee Framework service, and the bandwidth issue cleared up immediately. When I started the service back up, the bandwidth issue would spring back, though it took a variable amount of time.
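The eyeball method worked here, but the same ‘who is hammering the agent port’ question can be answered mechanically from a capture summary. Below is a hypothetical Python sketch: it assumes you’ve exported packet records as (source host, destination port, byte count) tuples from Network Monitor or any capture tool, and it simply sums bytes per source for traffic aimed at port 8085. The function name and sample data are mine, not part of any McAfee tooling.

```python
from collections import Counter

def top_talkers(packets, port=8085, n=2):
    """Sum bytes per source host for traffic to the given destination port
    and return the n heaviest senders, biggest first."""
    totals = Counter()
    for src, dst_port, nbytes in packets:
        if dst_port == port:
            totals[src] += nbytes
    return totals.most_common(n)

# Made-up capture summary: two remote PCs dominating the agent port.
capture = [
    ("PC-REMOTE1", 8085, 52000),
    ("PC-REMOTE2", 8085, 48000),
    ("PC-LOCAL3", 8085, 900),
    ("PC-LOCAL4", 80, 30000),   # web traffic, filtered out
]
print(top_talkers(capture))  # [('PC-REMOTE1', 52000), ('PC-REMOTE2', 48000)]
```

With the port parameterized, the same tally works for any chatty service, not just the EPO agent channel.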
I’m going to try redoing the agents on the affected systems to see if I can clear this issue up…
UPDATE: Forcing a reinstall of the agent through EPO cleared the issue up on the affected systems.
UPDATE 2: Not so fast! It appears that for whatever reason my two problem PCs were not applying the second patch for McAfee VirusScan 8.7. If I had to guess, they were constantly trying to download the patch, leading to my bandwidth issue. The problem now seems permanently cleared up after manually applying the patch to the systems. The misdiagnosis in the earlier update was caused by a very long lag between when the agent was installed and when it checked in with the EPO.
I went to upgrade my old MSDE database to SQL 2005 Express on our McAfee EPO server and was receiving a ‘-1’ error during the upgrade process, with the detail:
You selected Mixed Mode authentication, but did not provide a strong password. To continue you must provide a strong sa password.
I went through a bunch of different fixes, but this link detailed that the error is in fact caused by a bum installer of Microsoft’s design:
Copy C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\BPA\bin\BPAClient.dll to C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\BPA
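The fix is literally that one file copy. If you wanted to script it across several servers, a throwaway Python sketch like the following does the same thing with the Setup Bootstrap path parameterized; the function name is mine, not anything Microsoft ships.

```python
import os
import shutil

def apply_bpa_workaround(bootstrap_dir):
    """Copy BPAClient.dll from BPA\\bin up one level into the BPA folder.

    Mirrors the manual fix for the bogus 'strong sa password' error: the
    upgrade looks for the DLL in BPA but the installer puts it in BPA\\bin.
    `bootstrap_dir` is the '...\\90\\Setup Bootstrap' directory.
    """
    src = os.path.join(bootstrap_dir, "BPA", "bin", "BPAClient.dll")
    dst = os.path.join(bootstrap_dir, "BPA", "BPAClient.dll")
    shutil.copy2(src, dst)  # copy2 preserves timestamps, like a plain copy
    return dst
```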
This was one of those ‘no way’ errors that aggravated me to no end. Why didn’t they fix this?