All posts by Steven McNutt

About Steven McNutt

I am a technical support analyst and manager with more than fifteen years of experience. Although I specialize in the Microsoft line of products, I am also familiar with and have worked with (among others) the IBM AIX and Red Hat Linux operating systems, as well as the installation and maintenance of Cisco and Nortel networking equipment. I have obtained the following certifications at differing times over my years of working in the IT field: CISSP, MCSE (NT4 and Windows 2003), RHCE (v3), and CNA (v4). I currently work and reside in Cleveland, but I also frequently work in Cincinnati and central Michigan.

Great Moments in Database Design

I am by no means an expert DBA, but the design mentality behind a product that my company uses leaves me scratching my head.  This product, which shall go unnamed (Exact JobBOSS), has a variety of faults, but none more perplexing than its 'double keying' of tables.

Now I’m not talking about a compound key here.  No, this is where they’ve gone through and put the key in twice, such as this example from the ‘Address’ table:

[Screenshot: JobBoss Address table keys]
No problem, they're the same (green), until they're not! (red)

The logic behind this design decision is somewhat puzzling. I guess they figure that a user may want, for example, a new customer to have an old customer's Address number? Not only does that not make any sense on its face, but A) the user has no control over how this number is assigned, and (the kicker) B) this second key is allowed to be null within the database. Only the application rules, by making sure that this second 'key' is never null, keep a catastrophic event from occurring.
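If the column really is meant to behave like a key, this is exactly the sort of rule the database itself could enforce rather than the application. A minimal sketch of what that might look like (the column type is a guess on my part, since I don't have the actual schema definition in front of me):

-- Hypothetical sketch: push the "never null" rule down into the schema
-- instead of trusting the application code to enforce it
ALTER TABLE [dbo].[Address]
    ALTER COLUMN [Address] INT NOT NULL;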

Now you might be wondering: what's the big deal if this second key is null? After all, there's a primary key on the table already. Here we get to square the circle of this puzzling design decision, because as it turns out other tables do not 'key up' to the primary key of the table, but to the second, fake, 'key'. Here's the foreign key constraint from a table that 'keys up' to the Address table (emphasis mine):

ALTER TABLE [dbo].[RptPref] WITH NOCHECK
    ADD CONSTRAINT [FK_Addr_on_RptP] FOREIGN KEY ([Address])
    REFERENCES [dbo].[Address] ([Address])

Not [Address].[AddressKey], but [Address].[Address]?!?*  Obviously there are two issues that immediately spring to mind in such a situation, both of which the developer is attempting to control strictly through their Franken-code.  The first is, as I stated above, that the field can be null.  What if a record is not inserted properly and it is null?  Fun times, I'm sure!
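If you're stuck administering a schema like this, at least the null case is easy to check for yourself. A quick sanity-check query, using the table and column names from the example above (AddressKey is assumed to be the real primary key):

-- Hypothetical sanity check: find Address rows where the second 'key'
-- never got populated by the application
SELECT [AddressKey]
FROM [dbo].[Address]
WHERE [Address] IS NULL;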

The second issue is that since the field is not a key, there can be more than one record with that same value.  Is their code tight enough to prevent this possible corruption issue?  Like much else with this product, you'll have to take it on trust!
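The duplicate case is just as easy to script a check for, and probably worth running periodically if you don't entirely trust the application rules. Another hedged sketch using the same names:

-- Hypothetical sanity check: look for second-'key' values that appear
-- on more than one Address row
SELECT [Address], COUNT(*) AS [Occurrences]
FROM [dbo].[Address]
WHERE [Address] IS NOT NULL
GROUP BY [Address]
HAVING COUNT(*) > 1;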

Addendum: I should also point out a basic programmatic issue with this whole design methodology.  If I want to modify records in a table containing a column that is referenced as a key by a different table, but that isn't a key in the table I'm modifying, there's no (easy) way to catch this.  In the back of my mind, I'm thinking that maybe the database was designed as a complicated mess in order to make it semi-proprietary.
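To be fair, SQL Server will at least tell you which declared foreign keys point at a given table, which takes some of the guesswork out of it (though it obviously can't reveal relationships the application enforces entirely on its own). A sketch against the system catalog views:

-- Hypothetical helper: list every declared foreign key that references
-- dbo.Address, with the referencing table and column
SELECT fk.name AS ForeignKeyName,
       OBJECT_NAME(fkc.parent_object_id) AS ReferencingTable,
       cp.name AS ReferencingColumn,
       cr.name AS ReferencedColumn
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc
    ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns AS cp
    ON cp.object_id = fkc.parent_object_id AND cp.column_id = fkc.parent_column_id
JOIN sys.columns AS cr
    ON cr.object_id = fkc.referenced_object_id AND cr.column_id = fkc.referenced_column_id
WHERE fk.referenced_object_id = OBJECT_ID('dbo.Address');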

*(I should point out that the database for this product is small enough that it's feasible to make a (tiny) entity chart of it within Visual Studio; but beware that easily half the tables will appear to have no relationship with any other table because of situations like the above: 'keying up' to a nullable non-key that's actually never supposed to be null.)

**(While I'm ranting about JobBOSS, I figured I'd bring to light the package's great fondness for the 'rtl70.bpl access violation' errors that it is prone to getting.  I long thought that it was an issue with my installation, but I'm now convinced that they've coded the Delphi portions with little to no error handling (is there no 'try/except' in Delphi?).)

Unrecommendations

I had been a mild advocate of HP hardware for a brief time (best of the worst, you might say), but I'll have to withdraw even that mild support.  First, two people that I've recommended HP to have had their systems die (a dead laptop display on one, a dead desktop with the other).  Second, I liked their thin clients from several years ago and for some reason I keep ordering the things even though the software therein has treated me horribly.  The terminal I got last year was a t5570 with Windows Embedded 2009.  This travesty came with a stripped-out version of Windows XP that required a secret handshake to boot to admin mode, and then required several attempts to install a certificate onto it for RDP Network Level Authentication (NLA) since the decrepit OS only has certificates from 2004, or something, I don't know.  HP was ZERO help in getting this thing to behave.  Not to be outdone, I later ordered some of the t510 models.  These time sinks feature a butchered version of Ubuntu that cannot hook into NLA*.  Why HP thinks that it's a-okay to ship new hardware with such basic functionality missing is beyond me.

To go with the terminals is a batch of WCS9000 CCD Wasp scanners: absolute crap.  If the barcode is huge, shiny, and very close, there is no problem; otherwise expect to be keying in the info.  They would probably suggest one of their 'up' models, but this piece of junk is already closing in on $200 and it doesn't work.  If they have no scruples about shipping something that doesn't work, why would I buy something else from them?  (We ended up getting a Honeywell 3800g, which is an amazing device for a handheld scanner; they may be hard to come by though.)

Ah, that brings us to Microsoft's latest offerings.  By now we all know about the horrible Windows 8 interface, but why did they choose to curse the server version with it?  Rare are the cases anymore where someone is physically at a server (if there even is a server to be physically at).  Who at MS thought that it was excessively clever to use those floating corner cursor moves on a remote control interface?  It barely works when you're at the system itself, but due to the inevitable lag on even the fastest remote connections, it's hard to tell if the menu will ever pop up.  As well, Server 2012 has ZERO Metro apps, so every app that's opened just boots you back to the desktop, and if you haven't pinned everything to the taskbar you'll be forced to remotely float in the corner again to bring up the useless Start screen.  They've also removed various management tools, especially those related to Remote Desktop Services.

And then there’s Microsoft Exchange 2013.  Here MS has completely removed the management app, replacing it with a buggy, stunted web interface.  They’ve also taken the time to remove some functionality from the package (have fun trying to get the certificates and names to behave).

*With NO help from HP, I was able to get the t510 thin client to hook into Terminal Servers running NLA.  It turns out that the issue is somehow related to NLA terminal servers that are using commercially signed SSL certificates.  If you use a self-signed certificate it works fine (after a warning).  Note that the terminal will still not work (by default) when NLA is optional on the server but the server still uses a commercially signed certificate, as the t510 RDP client will automatically try to upscale the encryption and fail with an error like “RDP CLIENT ERROR: Critical RDP client error” (GUI) or “segmentation fault” (terminal shell).  Anyway, I created a custom app on the terminal that executes the RDP client as a shell command: ‘xfreerdp -u userName --ignore-certificate serverName’.

Lack of Laptop Focus

I’m on the hunt for a new spare/traveling laptop at work after our old Dell Latitude D820 finally died.  I loved that Dell, but it had its faults, primarily its desktop-worthy weight, which my boss hated (she actually preferred to use a lighter nine-year-old ThinkPad because the D820 was so heavy).

Typically for work I’ve been buying HP laptops, but the giant (heavy) model of EliteBook that I prefer can be hard to come by (with a flash drive), and since my boss is concerned with weight I was eyeballing the MacBook Air as well.  I figured I’d check out both sites, and I have to say that I don’t get the sales strategies of Windows resellers.  It’s enough to just look at the URLs for HP and Apple:

HP’s Laptop and Tablet page: http://shopping1.hp.com/is-bin/INTERSHOP.enfinity/WFS/WW-USSMBPublicStore-Site/en_US/-/USD/ViewStandardCatalog-Browse?CatalogCategoryID=kWIQ7EN5dVcAAAEtGpgoSe36&hiderightpanel=true

Apple’s: http://www.apple.com/mac/

But a screenshot tells the tale as well.

Apple: [screenshot of Apple’s Mac page]

HP: [screenshot of HP’s laptop and tablet page]

If you open up the gaudy HP page you’ll see that they list 157 different models of tablets/laptops, and in order to keep consumer confusion at its peak, no effort is made by HP to explain the differences between the models in a form that makes sense (one wonders if it’s even possible, though, to be fair).  Does this product-diarrhea strategy work?

DPM and Hyper-V and iSCSI

It seemed like a good idea, and it still seems like a good idea when it works, but when my combination of Data Protection Manager, Hyper-V, and iSCSI gets grumpy, it gives me ulcers.

My recent issue started, as it usually does with this setup, with a power outage.  The power went out, and when it came back on all the servers came back up properly.  “No problem,” I thought; however, a little bit later I noticed that DPM was having issues backing up the virtual servers on one of the virtual hosts.  I tried to kick the backup jobs off manually, but they failed with a VSS error of event ID 12305, “Volume/disk not connected or not found”, and a note that the VSS provider was in a “bad state”.  I tried some of the easy things that I found on the ’net, but to no avail.  I figured I’d reboot the box and that it would “figure itself out” when it came back up (which is often the case when my DPM backups delightfully start bombing out on a virtual host).  This time, however, the server came up and my iSCSI virtual machines were gone, unlisted in the Hyper-V manager.  Hyper-V has delisted virtual machines on me several times in the past, but it never ceases to make me question my career choice when it happens.

Investigating, I saw that the iSCSI drive, along with the virtual disks, was there, so my hope was that the VHDs were okay.  When Hyper-V tried to add the virtual machines it kicked out an error along the lines of an OID that couldn’t be found.  Distressed, I decided to just recreate the virtual machines from the VHDs (not a first for me either); however, I made the fortunate mistake of letting Hyper-V store the virtual machines in the default directory on the system partition.  The first two machines started up fine, but the third kicked out an error that it couldn’t write the memory file.  I had forgotten that Hyper-V keeps a swap file of the memory for each virtual machine, and I had run out of space for such files on the system partition.  I figured that I’d have to recreate the virtual machines, again, but in their original directories on the iSCSI drive.  Before I went through that work again, though, I figured I would take a shot at modifying the XML config for the servers that I had just created.

It was then, in the Hyper-V ProgramData folder, that I noticed something peculiar: the links to the original non-operational virtual machines weren’t working.  When I pulled the directory listing I found them pointing at ‘F:’, but there was a different drive on ‘F:’ and the virtual machine iSCSI drive had been moved to ‘H:’.  It turns out that after the power outage the server had decided to snag an unassigned iSCSI drive that was on an attached Netgear ReadyNAS box and assign it to ‘F:’.  The virtual host had worked fine through the week because the drive mapping didn’t take effect until after I rebooted the server.  After reassigning the drives my Hyper-V and DPM are happy again, but I’m sure they’ll get back at me eventually.