At the heart of every application, and this includes your System Center Service Manager (SCSM) deployment, are one or more databases. All information that is generated or consumed lives in those databases, so having a healthy SQL Server infrastructure is a prerequisite for an IT Service Management (ITSM) deployment that performs well and makes end users, and bosses, happy.
But SQL Server is a pretty complex animal with lots of knobs and levers to adjust for optimum performance. You could just execute the installer and hit “next-next-finish,” and you’ll have a running SQL Server instance. But the question is, will it run well?
In addition, we live in an age of virtualized workloads. Do the same rules apply to virtual servers as they do to physical deployments? (The good news is yes – well, at least mostly.)
Below, you will learn how to deploy well-performing SQL Server instances, based on the gazillions of SQL Server deployments the team has performed over the years.
Before starting your installation, there are a couple of items you should make sure you have lined up.
Make sure you have the appropriate hardware to run SQL Server. Regardless of whether your deployment is physical or virtual (for virtual environments, look at the underlying hardware the hypervisor is using), use the fastest disks available and ensure you have redundancy at every level:
Whether you’re using software-defined storage technologies (e.g. Storage Spaces Direct “S2D”, vSAN, etc.) or big iron, ensure you have implemented the levels of redundancy that meet your requirements. In addition, make sure your storage is accessible via multiple paths. This means using multiple NICs or HBAs. Note that using multiple ports on a multi-port card doesn’t really give you true redundancy, since it’s usually the entire card that fails, and not just a single port.
With both physical and virtual setups, use multiple NICs and team these, where applicable. Ensure you are teaming across cards. As mentioned above, teaming multiple ports on a multi-port NIC doesn’t give you true redundancy.
On HA setups, ensure there are multiple communication paths. Loss of communication between HA nodes can take the database(s) down, and that’s the best-case scenario. The worst-case scenario is a split-brain situation, which can be incredibly difficult to recover from.
In addition, make sure you have redundancy in your network topology. Switches do go bad occasionally, and enterprise-grade switches are not something you can get at your local Best Buy. So, if a switch does go bad, you might be looking at a week or two of downtime, possibly more with current supply chain disruptions. If you’re using IP networks for your storage (iSCSI, S2D, vSAN, etc.) it’s a good idea to separate storage traffic from application traffic, either via VLANs or by using separate hardware switches. In addition, depending on how busy you expect the database to be, the use of 10Gb (or faster) networking hardware is highly recommended. 10Gb switches are pretty mainstream nowadays, and even 25Gb and 40Gb equipment is becoming more affordable.
Lastly, if you’re relying on external connectivity for your deployment, ensure you have multiple paths to the “outside” across two (or more) different carriers. This can be challenging from a cost perspective if you’re hosting your systems privately and need to contract multiple carriers, but if you’re in a colocation scenario, then the datacenter should have a “blend” of carriers in their service catalog (consider moving to a different datacenter if they don’t.)
- Compute and Platform Components
You obviously don’t get compute (CPU and RAM) redundancies on physical deployments, so you should consider highly available instances (“HA” – aka. clusters) in this case. On virtualization platforms you will likely have more than one node anyway, so you do get compute redundancy there.
With both physical and virtual architectures, ensure you deploy systems with dual power supplies; these also go bad on occasion. Also, order your hardware with out-of-band management cards (HPE iLO, Dell iDRAC, or similar), and make sure these are properly configured and accessible. Trust me, there’s almost nothing worse than having to go to the datacenter at 2 a.m. because you can no longer access your server remotely.
Choosing an operating system for your SQL Server deployment should be pretty straightforward. Select the appropriate supported version of Windows for the version of SQL Server you’re planning on deploying, and then choose between Windows Standard and Windows Datacenter edition. There really aren’t any scalability differences from a compute perspective anymore, so for a SQL Server base operating system deployment it essentially boils down to the question of whether you intend to deploy S2D to use as a Cluster Shared Volume (“CSV”). S2D (and SDN, software-defined networking) is only available in Datacenter editions of Windows. Note that you do not need Windows Datacenter Edition to run SQL Server Enterprise.
After choosing the appropriate Windows edition, consider deploying on Server Core. There are many advantages to deploying Server Core, and with the Server Core App Compatibility Feature on Demand (FOD) package introduced in Windows Server 2019 there are hardly any disadvantages anymore. That being said, be aware that the SQL Server installation GUI will not run on Server Core, which means you have to create a configuration file for unattended setup via the command line (or worse: deal with 20-30 command line switches). Instead of combing through documentation to get the syntax of this file right, you can generate an “unattend” file by running through setup on a system with a GUI, making all your install choices, and then grabbing the ConfigurationFile.ini right before the install begins. You can then execute SQL Server setup from the command line using the following syntax:
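The exact invocation depends on your SQL Server version and where you saved the captured file, so treat this as a sketch (the .ini path below is illustrative):

```shell
:: Run from the root of the SQL Server installation media (cmd.exe).
:: Point /ConfigurationFile at the ConfigurationFile.ini you captured
:: from the GUI run -- the path below is just an example. Add /Q for a
:: fully quiet install if the captured file still specifies a UI mode.
Setup.exe /ConfigurationFile=C:\Temp\ConfigurationFile.ini /IAcceptSQLServerLicenseTerms
```

Keep the captured .ini with your build documentation; it doubles as a record of exactly how the instance was configured.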
SQL Server Edition
Make sure you have the right edition that will meet your requirements. This is not necessarily just a question of cost. What other requirements has the business expressed? Will there be personally identifiable information (PII) stored on the platform? You’re going to need encryption then (SQL Server Transparent Data Encryption – TDE). And that means you’re going to need SQL Server Enterprise Edition. You want to deploy complex Availability Groups for high availability and disaster recovery? You’re going to need SQL Server Enterprise Edition.
Here’s a list of some notable features that usually tilt the scales towards SQL Server Enterprise Edition (see Microsoft’s edition comparison documentation for more details):
- Scalability (memory etc.)
- Number of cluster nodes (Std Edition is limited to two)
- Transparent Data Encryption (TDE)
- Availability Groups*
- Online Indexing
- DB Snapshots
- In-Memory OLTP
- Advanced Integration Services
*Std Edition supports limited Availability Group architectures
In addition, note that if you’re licensing SQL Server per core, which is highly recommended for the reason stated in the next paragraph, you need to purchase a minimum of two 2-core licenses (this applies to both Standard and Enterprise editions).
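To illustrate the per-core math, here’s a small sketch. The assumed rules (every physical core licensed, a 4-core minimum per processor, licenses sold in 2-core packs) reflect Microsoft’s published per-core licensing model, but verify the numbers against your own licensing agreement:

```shell
#!/usr/bin/env bash
# Illustrative per-core licensing math for a physical SQL Server host.
# Assumed rules: all physical cores must be licensed, each processor
# carries a 4-core minimum, and licenses are sold in 2-core packs.
core_license_packs() {
  local sockets=$1 cores_per_socket=$2
  # Apply the per-processor 4-core minimum.
  local per_socket=$(( cores_per_socket > 4 ? cores_per_socket : 4 ))
  local total=$(( sockets * per_socket ))
  # Round up to whole 2-core packs.
  echo $(( (total + 1) / 2 ))
}

core_license_packs 1 2   # small host: 4-core minimum applies -> 2 packs
core_license_packs 2 8   # dual 8-core host: 16 cores -> 8 packs
```

Even a tiny single-socket, dual-core VM host still needs two 2-core packs because of the per-processor minimum, which is exactly the floor the paragraph above describes.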
If you’re licensing via Client Access Licenses (CALs), you must make sure you are licensing the correct number of users. SCSM accesses the database via just a couple of service accounts, so you may be tempted to conclude that you only need a few CALs. However, you may have hundreds or even thousands of users writing and consuming data on the database through those service accounts. Microsoft calls this “License Multiplexing,” and let’s just say they majorly frown upon this. This is particularly important should you ever be selected for a License Audit.
Stay tuned for part two which goes over SQL Server deployment.
As always, we’d love to hear your questions and comments in the Cireson Community.