Many, many years ago I had a domain controller installed on a rather naughty IBM server. This server, as it turned out later, had bad firmware on its drives which would cause occasional system oddities: blue screens, hangs on boot, and the like. I let it go for too long, but the issues were vague and infrequent; many a techie knows the rut one can get into when it seems safer to leave well enough alone. That line of thinking is always a catch-22, and the server’s issues finally came to a head when it crashed the right files and my domain controller no longer thought that it was a domain controller.
What about backups, you say? Well, we had our handy-dandy untested disaster recovery backup from Arcserve, which turned out to require a special boot disk that needed to be made from the server in question before the disaster (this later turned out to be bogus anyway, as even under the best lab circumstances I couldn’t get their worthless product to work). I was in a pinch, so I called Microsoft support and somehow, over the course of the night, they were able to get the trashed Active Directory operational again. The call spanned two shifts, so the guy I talked to at the end of the call was not the same one from the beginning.
Compare that experience to my recent experience with my Exchange 2013 setup where, it appears, no one in the eastern hemisphere was given proper training on the issues that this product is prone to having. No one calls back in time, despite the use of cut-rate help, and if your support rep hits the end of a shift, they may abandon you until the next morning. Don’t bother with web/phone support unless you’ve exhausted the list below.
First and foremost, Exchange 2013 as released was a beta product. Cumulative Update 1 made it seem usable, but please be sure to install CU2 if you want a product that comes close to behaving! Before installing CU2, the ‘RPC over HTTP’ function was sketchy: I would get prompted for authentication when making a new profile, external users would work while internal ones would not, and running tests would produce HTTP 500 web server errors in OWA and ‘X-CasErrorCode: ServerLocatorError’ when running the connectivity tester.
After I installed Cumulative Update 2 on two different servers, it failed to start both the Transport service and the Frontend Transport service on each (while making sure to start ‘manual’ services that we don’t use, like Unified Messaging).
The new Exchange control panel is a marvel to behold, and crap at the same time (much like Microsoft’s whole product portfolio at this point in time). If everything works it’s great, but when it comes time to set the ‘InternalUrl’ and whatnot on the virtual directories, it’s best to get familiar with the Get-/Set- cmdlets for these in PowerShell.
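If you end up doing it by hand, the pattern looks roughly like this from the Exchange Management Shell (the server name and URLs below are hypothetical placeholders, so substitute your own):

```powershell
# Review the current internal/external URLs on a few of the virtual directories
Get-OwaVirtualDirectory -Server EX01 | Format-List InternalUrl, ExternalUrl
Get-EcpVirtualDirectory -Server EX01 | Format-List InternalUrl, ExternalUrl
Get-WebServicesVirtualDirectory -Server EX01 | Format-List InternalUrl, ExternalUrl

# Point them at the name on the SSL certificate (hypothetical host name)
Set-OwaVirtualDirectory -Identity "EX01\owa (Default Web Site)" `
    -InternalUrl "https://mail.example.com/owa"
Set-EcpVirtualDirectory -Identity "EX01\ecp (Default Web Site)" `
    -InternalUrl "https://mail.example.com/ecp"
Set-WebServicesVirtualDirectory -Identity "EX01\EWS (Default Web Site)" `
    -InternalUrl "https://mail.example.com/EWS/Exchange.asmx"
```

There are matching cmdlets for the other virtual directories (OAB, ActiveSync), and the same Get-/Set- naming convention applies to all of them.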
Please make a note that the ‘Microsoft Exchange Service Host’ service has a nasty habit of resetting/changing the RPC folder settings in IIS. On one server it would change the backend RPC folder to point to the frontend folder, and on another it would turn off SSL on all of the RPC folders. Why does it do this? No one knows, though Microsoft did tell me that it can be effectively managed by going to the registry key ‘HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeServiceHost\RpcHttpConfigurator’ and setting ‘PeriodicPollingMinutes’ to zero. If the server reboots, though, be sure to double-check these settings again.
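The same change can be scripted rather than clicked through regedit; this is a sketch using the registry path above, and restarting the service afterward is my assumption rather than anything Microsoft told me:

```powershell
# Stop the RpcHttpConfigurator task from rewriting the RPC settings in IIS
# (registry path as given by Microsoft support)
Set-ItemProperty `
    -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeServiceHost\RpcHttpConfigurator' `
    -Name 'PeriodicPollingMinutes' -Value 0 -Type DWord

# Restart the offending service so it picks up the new value (my assumption)
Restart-Service MSExchangeServiceHost
```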
An amateur mistake on my part, but when migrating mailboxes keep in mind that the move will eat up double the disk space of the source mailboxes until a backup can roll the logs off. Please note that backing up with DPM 2010 apparently does not count as a backup. (Also note that, for whatever reason, mailboxes get a 5-10% size boost when going from Exchange 2010 to 2013.)
Setup will create the proper receive connectors, but not the send connector. When creating it through the Exchange control panel, I had to uncheck ‘‘ so that my send connector would work properly.
Another note: within IIS, if you cannot access Exchange properly through Outlook (but Outlook Web Access works), you might be missing the ‘Negotiate’ provider for Windows Authentication. Just add/check it by right-clicking Windows Authentication under the virtual directories->Authentication applet and clicking advanced.
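The same check/fix can be done from the command line with IIS’s appcmd tool; the ‘Default Web Site/rpc’ path here is an assumption on my part, so substitute whichever virtual directory is misbehaving for you:

```powershell
# Show the current Windows Authentication providers on the virtual directory
& "$env:windir\System32\inetsrv\appcmd.exe" list config "Default Web Site/rpc" `
    -section:system.webServer/security/authentication/windowsAuthentication

# Add the Negotiate provider if it is missing from the list
& "$env:windir\System32\inetsrv\appcmd.exe" set config "Default Web Site/rpc" `
    -section:system.webServer/security/authentication/windowsAuthentication `
    "/+providers.[value='Negotiate']" /commit:apphost
```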
One site had issues with Outlook hanging on the new message notification and a general slowness in trying to do anything else. Were it not for the niggling issues from the migration, I might have cottoned on to the culprit sooner: Kaspersky.
Of course, I wonder how many places still run their single Exchange server in-house; I’d imagine that it’s getting to be a pretty lonely existence. If this is your situation (migrating a single Exchange 2007/2010 server to Exchange 2013), I should point out an issue with the SSL certificate. Chances are that if you have one Exchange server, you have one simple commercial SSL certificate as well, though even if you cheap out and use self-signing this issue still might apply. The issue is that once the users are migrated, and you then migrate the certificate, the Outlook profiles will need to be rebuilt for every user. I am guessing that there are at least two possible workarounds, though I haven’t tested them. One is to get a certificate with a different name; this way Outlook knows that it’s a different server and will redo its security settings. Another idea is that it might be possible to migrate everyone to the new server, let Outlook catch the settings change (this part does work), and then move the certificate later (I’m sketchy on how Outlook will behave here). The main issue with that is remote e-mail support, since the old server cannot proxy to the new one (I believe?). Otherwise, without rebuilding the profiles, end users will just get logon/credential prompts and not be able to access their e-mail.
I am by no means an expert DBA, but the design mentality behind a product that my company uses leaves me scratching my head. This product which shall go unnamed (Exact JobBOSS) has a variety of faults, but none more perplexing than their ‘double keying’ of tables.
Now I’m not talking about a compound key here. No, this is where they’ve gone through and put the key in twice, such as this example from the ‘Address’ table:
No problem they’re the same (green), until they’re not! (red)
The logic behind this design decision is somewhat puzzling. I guess they figure that a user may want, for example, a new customer to have an old customer’s Address number? Not only does that not make any sense on its face, but A) the user has no control over how this number is assigned, and (the kicker) B) this second key is allowed to be null within the database. Only the application rules keep a catastrophic event from occurring by making sure that this second ‘key’ is not null.
Now you might be wondering: what’s the big deal if this second key is null? After all, there’s a primary key on the table already. Here we get to square the circle of this puzzling design decision, because as it turns out other tables do not ‘key up’ to the primary key of the table, but to the second, fake ‘key’. Here’s the create code from a table that ‘keys up’ to the Address table (emphasis mine):
ALTER TABLE [dbo].[RptPref] WITH NOCHECK ADD CONSTRAINT [FK_Addr_on_RptP] FOREIGN KEY([Address]) REFERENCES [dbo].[Address] ([Address])
Not [Address].[AddressKey], but [Address].[Address]?!?* Obviously there are two issues that immediately spring to mind in such a situation, both of which the developer is attempting to control strictly through their Franken-code. The first is, like I stated above, that the field can be null. What if a record is not inserted properly and it is null? Fun times, I’m sure!
The second issue is that since the field is not a key, there can be more than one record with that same value. Is their code tight enough to prevent this possible corruption issue? Like much to do with this product, you’ll have to take it on trust!
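Using the table and column names from the constraint above, a couple of quick sanity checks will flag both failure modes (a sketch only; I haven’t run these against a live JobBOSS database):

```sql
-- Failure mode 1: records where the second 'key' never got populated
SELECT AddressKey
FROM dbo.Address
WHERE [Address] IS NULL;

-- Failure mode 2: more than one record sharing the same second 'key' value
SELECT [Address], COUNT(*) AS Dupes
FROM dbo.Address
GROUP BY [Address]
HAVING COUNT(*) > 1;
```

Either query returning rows would mean the application-side rules have already failed to do the database’s job.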
Addendum: I should also point out a basic programmatic issue with this whole design methodology. If I want to modify records in a table containing a column that is referenced as a key in a different table, but isn’t a key in the source table that I want to modify, there’s no (easy) way to catch this. In the back of my mind, I’m thinking that maybe the database was designed as a complicated mess in order to make it semi-proprietary.
*(I should point out that the database for this product is small enough that it’s feasible to make a (tiny) entity chart of it within Visual Studio; but beware that easily half the tables will appear to have no relationship with any other table because of situations like the above (‘keying up’ to a possible null non-key, that’s actually never supposed to be null)).
**(While I’m ranting about JobBOSS, I figured I’d bring to light the package’s great love of the ‘rtl70.bpl access violation’ errors that it is prone to getting. I long thought that it was an issue with my installation, but I’m now convinced that they’ve coded the Delphi portions with little to no error handling (is there no ‘try/catch’ in Delphi?)).
I had been a mild advocate of HP hardware for a brief time (best of the worst, you might say), but I’ll have to withdraw even that mild support. First, two people that I’ve recommended HP to have had their systems die (a dead laptop display on one, a dead desktop for the other). Secondly, I liked their thin clients from several years ago, and for some reason I keep ordering the things even though the software therein has treated me horribly. The terminal I got last year was a t5570 with Windows Embedded 2009. This travesty came with a stripped-out version of Windows XP that required a secret handshake to boot into admin mode, and then required several attempts to install a certificate onto it for RDP Network Level Authentication (NLA), since the decrepit OS only has certificates from 2004, or something, I don’t know. HP was ZERO help in getting this thing to behave. Not to be outdone, I later ordered some of the t510 models. These time sinks feature a butchered version of Ubuntu that cannot hook into NLA.* Why HP thinks that it’s a-okay to ship new hardware with such basic functionality missing is beyond me.
To go with the terminals came a batch of WCS9000 CCD Wasp scanners: absolute crap. If the barcode is huge, shiny, and very close, there is no problem; otherwise expect to be keying in the info. They would probably suggest one of their ‘up’ models, but this piece of junk is already closing in on $200 and it doesn’t work. If they have no scruples about shipping something that doesn’t work, why would I buy something else from them? (We ended up getting a Honeywell 3800g scanner, which is an amazing device for a handheld scanner; they may be hard to come by though.)
Ah, that brings us to Microsoft’s latest offerings. By now we all know about the horrible Windows 8 interface, but why did they choose to curse the server version with it? Rare are the cases anymore where someone is physically at a server (if there even is a server to be physically at). Who at MS thought that it was excessively clever to use those floating corner cursor moves on a remote control interface? It barely works when you’re at the system itself, but due to the inevitable lag on even the fastest remote connections, it’s hard to tell if the menu will ever pop up. As well, Server 2012 has ZERO Metro apps, so every app that’s opened just boots you back to the desktop, and if you haven’t pinned everything to the taskbar you’ll be forced to remotely float in the corner again to bring up the useless Start screen. I also need to add that they’ve removed various management tools as well, especially those related to Remote Desktop Services.
And then there’s Microsoft Exchange 2013. Here MS has completely removed the management app, replacing it with a buggy, stunted web interface. They’ve also taken the time to remove some functionality from the package as well (have fun trying to get the certificates and names to behave).
*With NO help from HP, I was able to get the t510 thin client to hook into terminal servers running NLA. It turns out that the issue is somehow related to NLA terminal servers that use commercially signed SSL certificates. If you use a self-signed certificate, it works fine (after a warning). Note that the terminal will still not work (by default) if NLA is optional on the server but the server still uses a commercially signed certificate, as the t510 RDP client will automatically try to upscale the encryption and fail with an error like “RDP CLIENT ERROR: Critical RDP client error” (GUI) or “segmentation fault” (terminal shell). Anyway, I created a custom app on the terminal that executes the RDP client as a shell command: ‘xfreerdp -u userName --ignore-certificate serverName’.
I’m on the hunt for a new spare/traveling laptop at work after our old Dell Latitude D820 finally died. I loved that Dell, but it had its faults, primarily its desktop-worthy weight, which my boss hated (she actually preferred to use a lighter nine-year-old Thinkpad because the D820 was so heavy).
Typically for work I’ve been buying HP laptops, but the giant (heavy) model of Elitebook that I prefer can be hard to come by (with a flash drive), and since my boss is concerned with weight I was eyeballing the MacBook Air as well. I figured I’d check out both sites, and I have to say that I don’t get the sales strategies of Windows resellers. It’s enough if you just look at the URLs for HP and Apple:
If you open up the gaudy HP page you’ll see that they list 157 different models of tablets/laptops, and in order to keep consumer confusion at its peak, no effort is made by HP to relate the differences between the models in a form that makes sense (one wonders if that’s even possible, though, to be fair). Does this product-diarrhea strategy work?