Tag Archives: Windows Server

Bluescreen 0xc00002e2

Just a smidge of background first. I came in this morning and the Hyper-V guest of our Windows Server domain controller was getting paused while booting, with a disk I/O error of some sort inside the VM (there were no errors within the cluster itself). It turned out that, due to an issue I’m having with the DPM backups, I had a half dozen snapshots stacked up for the server. After merging them all the server stopped getting paused, but it then bluescreened with the error 0xc00002e2 and the Internet was less than helpful.

The error is caused by an issue with the Active Directory database, and running a repair on it helpfully fixed it up. I first booted to the repair mode command prompt and tried to run esentutl /p against ntds.dit, but it failed with the error “Unable to find the callback library ntdsai.dll”; the fix for that was to run the utility from the Windows instance that was being repaired. In this case the repair tool dropped me at an X:\ drive, but the Windows instance I wanted to repair was actually on the E: drive. So I changed to e:\windows\system32, making the commands something like:

cd e:\windows\system32
esentutl /p e:\windows\ntds\ntds.dit

I honestly figured it would fail, but it ran through successfully and the server booted up fine afterwards. Note that I have other DCs for this domain, so I didn’t have to worry too much about the actual data and state of the DS database since it was just going to be plowed over with the objects from one of the other DCs.
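
For anyone in the same situation: the usual follow-up to a raw esentutl /p is an integrity and semantic check of the database. I skipped it given that the DC was just going to be overwritten by replication, but a rough sketch would look something like the following (the ntdsutil syntax assumes a Server 2008-era DC, and the semantic analysis is run from Directory Services Restore Mode once the box will boot):

rem Offline Jet-level integrity check, using the same paths as above
esentutl /g e:\windows\ntds\ntds.dit
rem Semantic check of the AD data itself, from Directory Services Restore Mode
ntdsutil "activate instance ntds" "semantic database analysis" "go fixup" quit quit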

Upgrading to Server 2008

I’m a little late on this, but it has more to do with what’s supported on Server 2008 and having the time to upgrade the servers.  Of note are the two bigger issues when upgrading:

  • One of my servers was formerly a Server 2000 box and as such doesn’t have the largest hard drive.  Now that it exists as a Server 2003 virtual machine, I decided to boost the drive size.  This required boosting the drive size in Hyper-V, changing the simple dynamic disk to a basic disk by using this Microsoft tip (sorry, I don’t remember which site gave me the tip), and then finishing up with PartedMagic (a rough diskpart sketch follows this list).
  • Next up is removing/uninstalling PowerShell.  Since my disks were low on space, I was rather diligent about removing the compressed patch uninstall files under the Windows directory, so I didn’t have the PowerShell uninstall sitting around anywhere.  Since I had no other choice, I went through the lengthy process of building an un-SP* Server 2003 virtual machine just so I could get the PowerShell uninstall files.  To save others the pain I am sharing the files here.
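
On the disk resizing, the inside-the-guest half is just diskpart once the VHD has been grown in Hyper-V and the disk is back to basic.  A minimal sketch follows; the volume number is an assumption (check the list volume output first), and keep in mind that Server 2003’s diskpart won’t extend the boot volume, which is exactly why PartedMagic was needed here.

diskpart
rem find the volume that now has unallocated space behind it
list volume
select volume 1
extend
exit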

As well, perhaps due to the age of the OS on the box, the FrontPage extensions had a nasty hook into the OS, with the constant upgrade error of “setup has detected that Frontpage server extensions is installed on this computer” blah blah blah.  I worked through the tip here, and the one that did it was deleting everything in the registry labeled “web server extensions”.
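
If you’d rather find those keys from a command prompt before hand-deleting them in regedit, a quick sketch (review the output before deleting anything):

rem List every key under HKLM\SOFTWARE whose name contains "web server extensions"
reg query HKLM\SOFTWARE /f "web server extensions" /s /k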

Does Anyone Else Use Microsoft RMS?

I think not, which leads to a dearth of searchable content when I encounter an issue.

My latest round of technical snafus with Microsoft’s digital rights management software for Windows 2003 revolves around the fact that I’d like to upgrade the server to Windows Server 2008.  I’d love even more to port it to the 64-bit Windows Server 2008 R2, but I gave up on ever getting that to work.

However, I did get a virtual lab install of the server to upgrade to regular old 32-bit Server 2008, except that RMS on 2008 runs off of SQL Server 2005 Express.  Unfortunately, my existing install of RMS was running on the old MSDE 2000.  After making sure that the SQL upgrade worked, I upgraded MSDE on the server to Express and everything seemed to run okay, except that occasionally the DRMS_Logging service wouldn’t start.  I’d start it back up, and at some point it would stop again (sometimes stopping right away after I tried to start it).  Finally, I couldn’t get it to start at all (to be fair to myself, I figured it was a timing issue with IIS, since cycling IIS seemed to get the RMS service to start, though obviously this turned out to be a coincidence).
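
For reference, the in-place MSDE-to-Express upgrade is driven from the SQL Server 2005 Express setup.  Treat this as a hedged sketch rather than the exact command I ran, and note that the instance name below is a placeholder for whatever name the RMS MSDE instance actually uses:

rem Upgrade an existing MSDE 2000 instance in place to SQL Server 2005 Express
SQLEXPR32.EXE /qb UPGRADE=SQL_Engine INSTANCENAME=YOURINSTANCE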

When I first looked at the server I noticed that I was getting ‘file full’ SQL errors, which I figured came about because SQL Express was hitting its space limit.  When I looked at the MOM/OnePoint database I noticed that it had grown quite large.  I looked up how to purge data, and the posts all seemed to go back to the ‘sqlagent’ running a process that ran a stored procedure that handled the ‘grooming’.  After messing around with the SQL Express installer a bit looking for the agent install, I found that although the SQL agent at least appeared to be included with MSDE 2000, it isn’t with Express 2005.  I then went through the database and determined that the procedure ‘dbo.MOMXGrooming’ was the winner.  Executing that and then shrinking the DB and files cleared up the SQL space issue.  Yet still, the DRMS_Logging service wouldn’t start.
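
For anyone retracing this, the grooming and the shrink can both be run from sqlcmd once the procedure is found.  A sketch, with the instance name as a placeholder for wherever the MOM/OnePoint database actually lives:

rem Run the grooming procedure, then shrink the database and its files back down
sqlcmd -S .\YOURINSTANCE -E -d OnePoint -Q "EXEC dbo.MOMXGrooming"
sqlcmd -S .\YOURINSTANCE -E -d OnePoint -Q "DBCC SHRINKDATABASE (OnePoint)"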

I looked at the logging database for RMS and it was rather large as well, but with no built-in grooming procedure I just dropped the table and recreated it (backing up first, of course).  The service still wouldn’t start; no errors, no nothing.  I figured that even though it wasn’t working, it was worth looking at the web management piece to see if it would let me configure it.  When I tried to pull it up, the site kicked out an error saying that it could not run because the event log was full – the application event log had filled up with SQL space errors.  After purging the log, the service started and people could get into their documents yet again.
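
If you hit the same full application log and would rather purge it from a command prompt than from Event Viewer, something along these lines should do it on Server 2003 (a sketch; back up the log first if you need the entries):

rem Clear the Application event log via WMI
wmic nteventlog where "LogfileName='Application'" call ClearEventLog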

Obscure IIS 7 Issue

On my WSUS implementations on my Windows 2008 servers I’ve had an issue on two occasions where clients became unable to download the wuident.cab file.  Attempting to manually download the file results in a “403-Forbidden: Access is denied” error.  The first time I was getting the error I had an update to the Windows Update Service that I had been putting off, and after installing it the error cleared up.  The second time it came up, only one of my update servers had the issue, and I was befuddled since (just like the first time) the server was working fine and then began getting the issue seemingly out of the blue (more than likely due to an update of some sort?  The DPM install on the same server?).  One caveat, though, was that it all worked fine locally.

After hunting through the GUI and checking permissions, I finally tracked down this web link.  For some reason the ‘<location path=”Default Web Site/SimpleAuthWebService”>’ section of the applicationhost.config file was getting set to all the ‘NoRemote’ settings.  After setting the handler section to “<handlers accessPolicy=”Read, Script” />”, WSUS began functioning properly again.
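
If you’d rather not hand-edit applicationhost.config, the same change can presumably be made with appcmd.  This is a sketch of the equivalent command rather than what I actually ran:

rem Restore the handler access policy for the WSUS SimpleAuthWebService application
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/SimpleAuthWebService" /section:system.webServer/handlers /accessPolicy:"Read, Script" /commit:apphost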

I’m not a total glutton for the GUI, but it would be nice to know where its purview ends and the text-based editing begins (maybe an embedded link in the GUI?).  It could also be that I’m just not quite familiar enough with it yet, since I’m constantly having to switch between the IIS 6 and 7 interfaces.

Too Much At One Time

I’ve had quite a bit of off-hours work, but not a lot of off-hours time in which to do it (mostly having to do with house issues; apartment dwellers should forgive me for being jealous of them a fair portion of the time).  The end result of this is cramming several days’ worth of work into a window of a few hours.

Last night I had to patch several servers via Windows Update, upgrade the memory in one server (which required re-cabling due to a half-installed cable arm), update the MS DPM agent on two other servers, update the firmware on our Barracuda spam firewall (which was the second Barracuda update in a row that created more problems than it solved), and replace the batteries in our battery backup unit (which itself required carefully shutting down several different servers and processes).

After everything came back up I caught a non-production virtual machine that wasn’t starting (which will be a story for a different post), the Citrix servers were running slowly, and I was having issues getting a database process to start correctly.  After wrestling with the host of issues for an hour and resolving them for the most part, I took off while I was ahead, or so I thought.

In my rush to wrap up I forgot my cardinal rule when touching anything to do with e-mail: send an outbound test and have it replied back.  That night my diagnosis of Exchange consisted of making sure Outlook wasn’t popping an error up in the tray before I got bogged down in the other issues.  To make matters worse, a user e-mailed me to let me know it wasn’t working, but unfortunately he e-mailed me at the time when the server was down due to the battery replacement, so I thought nothing more of it and told him that it should be working (while I had only tested the OWA splash page).  I admit that I also improperly relied on my Windows Mobile phone for testing, an unreliable device even when everything is working properly.

In the end the error was caused by the Exchange 2003 server booting up before (probably by seconds) any of the domain controllers, and as a result most of the Exchange services did not start.  It’s worth noting that my near-production Exchange 2007 server did not experience this fault.  Long term I should have a more reliable test mechanism (this happened once before, after an extended power outage), but most of all I just need to remember to perform my diagnostic procedures before attempting to fix the first issue that grabs my time.
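
A minimal post-reboot sanity check (a sketch, not my actual procedure) would at least confirm the core Exchange 2003 services came up before moving on to the next fire, with a round-trip test message to follow:

rem Confirm the core Exchange 2003 services are running; start them if they aren't
sc query MSExchangeSA | find "RUNNING" || net start MSExchangeSA
sc query MSExchangeIS | find "RUNNING" || net start MSExchangeIS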