Deployment of a SQL Server Failover Cluster Instance on Huawei Cloud
*DISCLAIMER: While the following covers the high availability portion within the scope of our product, it is a setup “guide” only and should be adapted to your own configuration.
Overview
HUAWEI CLOUD is a leading cloud service provider, not only in China but globally, with many data centers around the world. It brings together Huawei’s 30-plus years of expertise in ICT infrastructure products and solutions and is committed to providing reliable, secure, and cost-effective cloud services that empower applications, harness the power of data, and help organizations of all sizes grow in today’s intelligent world. HUAWEI CLOUD is also committed to bringing affordable, effective, and reliable cloud and AI services through technological innovation.
DataKeeper Cluster Edition provides replication within a virtual private cloud (VPC) in a single region, across availability zones, on Huawei Cloud. In this particular SQL Server clustering example, we will launch four instances (one domain controller instance, two SQL Server instances and a quorum/witness instance) into three availability zones.
DataKeeper Cluster Edition also provides support for a data replication node outside of the cluster, with all nodes in Huawei Cloud. In that configuration, four instances are launched (one domain controller instance, two SQL Server instances and a quorum/witness instance) into three availability zones, and an additional DataKeeper instance is launched in a second region, with a VPN instance in each region. Please see Configuration of Data Replication From a Cluster Node to External DR Site for more information. For additional information on using multiple regions, please see Connecting Two VPCs in Different Regions.
DataKeeper Cluster Edition also provides support for a data replication node outside of the cluster with only the node outside of the cluster in Huawei Cloud. In this particular SQL Server clustering example, WSFC1 and WSFC2 are in an on-site cluster replicating to a Huawei Cloud instance. Then an additional DataKeeper instance is launched in a region in Huawei Cloud. Please see Configuration of Data Replication From a Cluster Node to External DR Site for more information.
Requirements
Description | Requirement
--- | ---
Virtual Private Cloud | In a single region with three availability zones
Instance Type | Minimum recommended instance type: s3.large.2
Operating System | See the DKCE Support Matrix
Elastic IP | One elastic IP address connected to the domain controller
Four instances | One domain controller instance, two SQL Server instances and one quorum/witness instance
Each SQL Server | ENI (Elastic Network Interface) with four IPs: the primary ENI IP statically defined in Windows and used by DataKeeper Cluster Edition, plus three IPs maintained by ECS and used by Windows Failover Clustering, MSDTC and the SQL FCI
Volumes | Three volumes (EVS and NTFS only): one primary volume (C drive) plus two additional volumes, one for Failover Clustering and one for MSDTC
Release Notes
Before beginning, make sure you read the DataKeeper Cluster Edition Release Notes for the latest information. It is highly recommended that you read and understand the DataKeeper Cluster Edition Installation Guide.
Create a Virtual Private Cloud (VPC)
A virtual private cloud is the first object you create when using DataKeeper Cluster Edition.
*A virtual Private Cloud (VPC) is an isolated private cloud consisting of a configurable pool of shared computing resources in a public cloud.
- Using the email address and password specified when signing up for Huawei Cloud, sign in to the Huawei Cloud Management Console.
- From the Services dropdown, select Virtual Private Cloud.
- On the right side of the screen, click on Create VPC and select the region that you want to use.
- Enter the name that you want to use for the VPC.
- Define your virtual private cloud subnet by entering its CIDR (Classless Inter-Domain Routing) block; for example, a VPC CIDR of 15.0.0.0/16 with a 15.0.1.0/24 subnet matches the addresses used later in this guide.
- Input the subnet name, then click Create Now.
*A Route Table will automatically be created with a “main” association to the new VPC. You can use it later or create another Route Table.
*HELPFUL LINK:
Huawei’s Creating a Virtual Private Cloud (VPC)
Launch an Instance
The following walks you through launching an instance into your subnet. You will want to launch two instances into one availability zone, one for your domain controller instance and one for your SQL instance. Then you will launch another SQL instance into another availability zone and a quorum witness instance into yet another availability zone.
*HELPFUL LINKS:
Huawei Cloud ECS Instances
- Using the email address and password specified when signing up for Huawei Cloud, sign in to the Huawei Cloud Management Console.
- From the Service List dropdown, select Elastic Cloud Server.
- Select the Buy ECS button and choose the Billing Mode, Region and AZ (Availability Zone) in which to deploy the instance.
- Select your Instance Type. (Note: Select s3.large.2 or larger.)
- Choose an Image. Under Public Image, select the Windows Server 2019 Datacenter 64bit English image.
- For Configure Network, select your VPC.
- For Subnet, select the subnet that you want to use, select Manually-specified IP address, and enter the IP address that you want to use.
- Select the Security Group to use, or click Edit and select an existing one.
- Assign an EIP if you need the ECS instance to access the internet.
- Click Configure Advanced Settings, provide a name for the ECS, use Password for the Login Mode, and provide a secure password for Administrator login.
- Click Configure Now under Advanced Options, add a Tag to name your instance, and click Confirm.
- Perform final review of the Instance and click on Submit.
*IMPORTANT: Make a note of this initial administrator password. It will be needed to log on to your instance.
Repeat the above steps for all instances.
Connect to Instances
You can connect to your domain controller instance via Remote Login from the ECS pane.
Login as administrator and enter your administrator password.
*BEST PRACTICE: Once logged on, it is best practice to change your password.
Configure the Domain Controller Instance
Now that the instances have been created, start by setting up the domain controller instance.
This guide is not a tutorial on how to set up an Active Directory domain controller. We recommend reading articles on how to set up and configure an Active Directory server. It is very important to understand that even though the instance is running in Huawei Cloud, this is a regular installation of Active Directory.
Static IP Addresses
Configure Static IP Addresses for your Instances
- Connect to your domain controller instance.
- Click Start/ Control Panel.
- Click Network and Sharing Center.
- Select your network interface.
- Click Properties.
- Click Internet Protocol Version 4 (TCP/IPv4), then Properties.
- Obtain your current IPv4 address, default gateway and DNS server for the network interface from the Huawei Cloud console.
- In the Internet Protocol Version 4 (TCP/IPv4) Properties dialog box, under Use the following IP address, enter your IPv4 address.
- In the Subnet mask box, type the subnet mask associated with your virtual private cloud subnet.
- In the Default Gateway box, type the IP address of the default gateway and then click OK.
- For the Preferred DNS Server, enter the primary IP address of your domain controller (ex. 15.0.1.72).
- Click OK, then select Close. Exit Network and Sharing Center.
- Repeat the above steps on your other instances.
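If you prefer to script these settings, the following PowerShell sketch (for the Windows Server 2019 image used in this guide) sets a static IPv4 address and points DNS at the domain controller. The interface alias, prefix length and all addresses below are placeholders; replace them with the values shown for your instance in the ECS console.
$alias = "Ethernet"    # interface alias; confirm with Get-NetAdapter
New-NetIPAddress -InterfaceAlias $alias -IPAddress 15.0.1.72 -PrefixLength 24 -DefaultGateway 15.0.1.1    # example addresses only
Set-DnsClientServerAddress -InterfaceAlias $alias -ServerAddresses 15.0.1.72    # primary IP of the domain controller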
Join the Two SQL Instances and the Witness Instance to Domain
*Before attempting to join a domain, make these network adjustments: on your network adapter, change the Preferred DNS server to the address of the new domain controller, which also hosts DNS. Run ipconfig /flushdns to refresh the DNS resolver cache after this change. Do this before attempting to join the domain.
*Ensure that Core Networking and File and Printer Sharing options are permitted in Windows Firewall.
- On each instance, click Start, then right-click Computer and select Properties.
- On the far right, select Change Settings.
- Click on Change.
- Enter a new Computer Name.
- Select Domain.
- Enter the Domain Name (ex. docs.huawei.com).
- Click Apply.
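As a scripted alternative to the steps above (a sketch only; the domain name is the example used earlier and the credential prompt expects a domain account), you can update DNS, flush the resolver cache and join the domain from PowerShell:
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 15.0.1.72    # point DNS at the new domain controller
ipconfig /flushdns                                                                  # refresh the DNS resolver cache
Add-Computer -DomainName "docs.huawei.com" -Credential (Get-Credential) -Restart    # joins the domain and reboots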
*Use Control Panel to make sure all instances are using the correct time zone for your location.
*BEST PRACTICE: It is recommended that the System Page File is set to system managed (not automatic) and always uses the C: drive.
Control Panel > Advanced system settings > Performance > Settings > Advanced > Virtual Memory. Select System managed size, Volume C: only, then select Set to save.
Assign Secondary Private IPs to the Two SQL Instances
In addition to the Primary IP, you will need to add three additional IPs (Secondary IPs) to the elastic network interface for each SQL instance.
- From the Service List dropdown, select Elastic Cloud Server.
- Click the instance for which you want to add secondary private IP addresses.
- Select NICs > Manage Virtual IP Address.
- Click Assign Virtual IP Address, select Manual, and enter an IP address that is within the subnet range for the instance (ex. for 15.0.1.25, enter 15.0.1.26). Click OK.
- Click the More dropdown on the IP address row, select Bind to Server, then select the server to bind the IP address to and the NIC.
- Click OK to save your work.
- Perform the above on both SQL Instances.
*HELPFUL LINKS:
Managing Virtual IP Addresses
Binding a Virtual IP Address to an EIP or ECS
Create and Attach Volumes
DataKeeper is a block-level volume replication solution and requires that each node in the cluster have additional volume(s) (other than the system drive) that are the same size and same drive letters. Please review Volume Considerations for additional information regarding storage requirements.
Create Volumes
Create two volumes in each availability zone for each SQL server instance, a total of four volumes.
- From the Service List dropdown, select Elastic Cloud Server.
- Click the instance that you want to manage.
- Go to the Disks tab.
- Click Add Disk to add a new volume of the type and size you need. Make sure the volume is created in the same AZ as the SQL Server instance that you intend to attach it to.
- Select the check box to agree to the SLA and click Submit.
- Click Back to Server Console.
- If necessary, attach the disk to the SQL instance.
- Do this for all four volumes.
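After a disk is attached, it must be brought online, initialized and formatted with NTFS, using the same size and drive letter on both SQL instances. A minimal PowerShell sketch, assuming the new disk shows up as disk number 1 and will become the E: drive (adjust both to your environment):
Get-Disk | Where-Object PartitionStyle -eq 'RAW'                 # identify the newly attached, uninitialized disk
Initialize-Disk -Number 1 -PartitionStyle GPT                    # bring the disk under Windows management
New-Partition -DiskNumber 1 -DriveLetter E -UseMaximumSize       # create a single partition using the whole disk
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "SQLFC" -Confirm:$false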
*HELPFUL LINKS:
Elastic Volume Service
Configure the Cluster
Prior to installing DataKeeper Cluster Edition, it is important to have Windows Server configured as a cluster using either a node majority quorum (if there is an odd number of nodes) or a node and file share majority quorum (if there is an even number of nodes). Consult the Microsoft documentation on clustering in addition to this topic for step-by-step instructions. Note: Microsoft released a hotfix for Windows 2008R2 that allows disabling of a node’s vote which may help achieve a higher level of availability in certain multi-site cluster configurations.
Add Failover Clustering
Add the Failover Clustering feature to both SQL instances.
- Launch Server Manager.
- Select Features in the left pane and click Add Features in the Features pane. This starts the Add Features Wizard.
- Select Failover Clustering.
- Select Install.
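The same feature can also be added from PowerShell; run this on both SQL instances:
Install-WindowsFeature Failover-Clustering -IncludeManagementTools    # installs the feature plus the Failover Cluster Manager tools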
Validate a Configuration
- Open Failover Cluster Manager.
- Select Failover Cluster Manager, select Validate a Configuration.
- Click Next, then add your two SQL instances.
Note: To search, select Browse, then click on Advanced and Find Now. This will list available instances.
- Click Next.
- Select Run Only Tests I Select and click Next.
- In the Test Selection screen, deselect Storage and click Next.
- At the resulting confirmation screen, click Next.
- Review Validation Summary Report then click Finish.
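The equivalent validation can be run from PowerShell, skipping the storage tests just as above (the node names SQL1 and SQL2 are examples):
Test-Cluster -Node SQL1, SQL2 -Ignore "Storage"    # validates the nodes while skipping the shared-storage tests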
Create Cluster
- In Failover Cluster Manager, click on Create a Cluster then click Next.
- Enter your two SQL instances.
- On the Validation Warning page, select No then click Next.
- On the Access Point for Administering the Cluster page, enter a unique name for your WSFC Cluster. Then enter the Failover Clustering IP address for each node involved in the cluster. This is the first of the three secondary IP addresses added previously to each instance.
- IMPORTANT! Uncheck the “Add all available storage to the cluster” checkbox. DataKeeper mirrored drives must not be managed natively by the cluster. They will be managed as DataKeeper Volumes.
- Click Next on the Confirmation page.
- On the Summary page, review any warnings, then select Finish.
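For reference, the same cluster can be created from PowerShell; the cluster name, node names and the two Failover Clustering secondary IPs below are placeholders for your own values:
New-Cluster -Name WSFCCLUSTER -Node SQL1, SQL2 -StaticAddress 15.0.1.26, 15.0.2.26 -NoStorage    # -NoStorage keeps the cluster from claiming the DataKeeper volumes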
Configure Quorum/Witness
- Create a folder on your quorum/witness instance (witness).
- Share the folder.
- Right-click the folder and select Share With / Specific People….
- From the dropdown, select Everyone and click Add.
- Under Permission Level, select Read/Write.
- Click Share, then Done. (Make note of the path of this file share to be used below.)
- In Failover Cluster Manager, right-click the cluster, choose More Actions, then Configure Cluster Quorum Settings. Click Next.
- On the Select Quorum Configuration page, choose Node and File Share Majority and click Next.
- On the Configure File Share Witness screen, enter the path to the file share previously created and click Next.
- On the Confirmation page, click Next.
- On the Summary page, click Finish.
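Alternatively, the quorum can be configured from PowerShell once the share exists (the UNC path below is an example; use the share you created above):
Set-ClusterQuorum -NodeAndFileShareMajority \\WITNESS\quorum    # witness host name and share name are placeholders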
Install and Configure DataKeeper
After the basic cluster is configured but prior to any cluster resources being created, install and license DataKeeper Cluster Edition on all cluster nodes. See the DataKeeper Cluster Edition Installation Guide for detailed instructions.
- Run DataKeeper setup to install DataKeeper Cluster Edition on both SQL instances.
- Enter your license key and reboot when prompted.
- Launch the DataKeeper GUI and connect to server.
*Note: The domain or server account used must be added to the Local System Administrators Group. The account must have administrator privileges on each server that DataKeeper is installed on. Refer to DataKeeper Service Log On ID and Password Selection for additional information.
- Right click on Jobs and connect to both SQL servers.
- Create a Job for each mirror you will create: one for your DTC resource and one for your SQL resource.
- When asked if you would like to auto-register the volume as a cluster volume, select Yes.
*Note: If installing DataKeeper Cluster Edition on Windows “Core” (GUI-less Windows), make sure to read Installing and Using DataKeeper on Windows 2008R2/2012 Server Core Platforms for detailed instructions.
Configure MSDTC
- For Windows Server 2012 and 2016, in the Failover Cluster Manager GUI, select Roles, then select Configure Role.
- Select Distributed Transaction Coordinator (DTC), and click Next.
*For Windows Server 2008, in the Failover Cluster Manager GUI, select Services and Applications, then select Configure a Service or Application and click Next.
- On the Client Access Point screen, enter a name, then enter the MSDTC IP address for each node involved in the cluster. This is the second of the three secondary IP addresses added previously to each instance. Click Next.
- Select the MSDTC volume and click Next.
- On the Confirmation page, click Next.
- Once the Summary page displays, click Finish.
Install SQL on the First SQL Instance
- On the domain controller server, create a folder and share it.
- For example, “TEMPSHARE” with Everyone permission.
- Create a sub folder “SQL” and copy the SQL .iso installer into that sub folder.
- On the SQL server, map a network drive to the shared folder on the domain controller.
- For example: net use S: \\<domain controller>\TEMPSHARE
- On the SQL server the S: drive will appear. Change to the SQL folder and find the SQL .iso installer. Right-click the .iso file and select Mount. The mounted image exposes the setup.exe installer; run setup from the mounted drive:
F:\>Setup /SkipRules=Cluster_VerifyForErrors /Action=InstallFailoverCluster
- On Setup Support Rules, click OK.
- On the Product Key dialog, enter your product key and click Next.
- On the License Terms dialog, accept the license agreement and click Next.
- On the Product Updates dialog, click Next.
- On the Setup Support Files dialog, click Install.
- On the Setup Support Rules dialog, you will receive a warning. Click Next, ignoring this message, since it is expected in a multi-site or non-shared storage cluster.
- Verify Cluster Node Configuration and click Next.
- Configure your Cluster Network by adding the “third” secondary IP address for your SQL instance and click Next. Click Yes to proceed with multi-subnet configuration.
- Enter passwords for service accounts and click Next.
- On the Error Reporting dialog, click Next.
- On the Add Node Rules dialog, skipped operation warnings can be ignored. Click Next.
- Verify features and click Install.
- Click Close to complete the installation process.
Install SQL on the Second SQL Instance
Installing the second SQL instance is similar to the first one.
- On the SQL server, create a network drive and attach it to the shared folder on the domain controller as explained above for the first SQL server.
- Once the .iso installer is mounted, run SQL setup once again from the command line in order to skip the validation checks. Open a Command window, browse to your SQL install directory, and type the following command:
Setup /SkipRules=Cluster_VerifyForErrors /Action=AddNode /INSTANCENAME=”MSSQLSERVER”
(Note: This assumes you installed the default instance on the first node)
- On Setup Support Rules, click OK.
- On the Product Key dialog, enter your product key and click Next.
- On the License Terms dialog, accept the license agreement and click Next.
- On the Product Updates dialog, click Next.
- On the Setup Support Files dialog, click Install.
- On the Setup Support Rules dialog, you will receive a warning. Click Next, ignoring this message, since it is expected in a multi-site or non-shared storage cluster.
- Verify Cluster Node Configuration and click Next.
- Configure your Cluster Network by adding the “third” secondary IP address for your SQL Instance and click Next. Click Yes to proceed with multi-subnet configuration.
- Enter passwords for service accounts and click Next.
- On the Error Reporting dialog, click Next.
- On the Add Node Rules dialog, skipped operation warnings can be ignored. Click Next.
- Verify features and click Install.
- Click Close to complete the installation process.
Common Cluster Configuration
This section describes a common 2-node replicated cluster configuration.
- The initial configuration must be done from the DataKeeper UI running on one of the cluster nodes. If it is not possible to run the DataKeeper UI on a cluster node, such as when running DataKeeper on a Windows Core only server, install the DataKeeper UI on any computer running Windows XP or higher and follow the instructions in the Core Only section for creating a mirror and registering the cluster resources via the command line.
- Once the DataKeeper UI is running, connect to each of the nodes in the cluster.
- Create a Job using the DataKeeper UI. This process creates a mirror and adds the DataKeeper Volume resource to the Available Storage.
!IMPORTANT: Make sure that Virtual Network Names for NIC connections are identical on all cluster nodes.
- If additional mirrors are required, you can Add a Mirror to a Job.
- With the DataKeeper Volume(s) now in Available Storage, you are able to create cluster resources (SQL, File Server, etc.) in the same way as if there were a shared disk resource in the cluster. Refer to Microsoft documentation in addition to the above for step-by-step cluster configuration instructions.
Connectivity to the cluster (virtual) IPs
In addition to the Primary IP and secondary IP, you will also need to configure the virtual IP addresses in the Huawei Cloud so that they can be routed to the active node.
- From the Service List dropdown, select Elastic Cloud Server.
- Click on one of the SQL instances for which you want to add the cluster virtual IP addresses (one for MSDTC, one for the SQL failover cluster).
- Select NICs > Manage Virtual IP Address.
- Click Assign Virtual IP Address, select Manual, and enter an IP address that is within the subnet range for the instance (ex. for 15.0.1.25, enter 15.0.1.26). Click OK.
- Click the More dropdown on the IP address row, select Bind to Server, then select both the server to bind the IP address to and the NIC.
- Repeat the previous two steps for both the MSDTC and SQLFC virtual IPs.
- Click OK to save your work.
Management
Once a DataKeeper volume is registered with Windows Server Failover Clustering, all of the management of that volume will be done through the Windows Server Failover Clustering interface. All of the management functions normally available in DataKeeper will be disabled on any volume that is under cluster control. Instead, the DataKeeper Volume cluster resource will control the mirror direction, so when a DataKeeper Volume comes online on a node, that node becomes the source of the mirror. The properties of the DataKeeper Volume cluster resource also display basic mirroring information such as the source, target, type and state of the mirror.
Troubleshooting
Use the following resources to help troubleshoot issues:
- Troubleshooting issues section
- For customers with a support contract – http://us.sios.com/support/overview/
- For evaluation customers only – Pre-sales support
Additional Resources:
Step-by-Step: Configuring a 2-Node Multi-Site Cluster on Windows Server 2008 R2 – Part 1 — http://clusteringformeremortals.com/2009/09/15/step-by-step-configuring-a-2-node-multi-site-cluster-on-windows-server-2008-r2-%E2%80%93-part-1/
Step-by-Step: Configuring a 2-Node Multi-Site Cluster on Windows Server 2008 R2 – Part 3 — http://clusteringformeremortals.com/2009/10/07/step-by-step-configuring-a-2-node-multi-site-cluster-on-windows-server-2008-r2-%E2%80%93-part-3/
Major Cloud Outage Impacts Google Compute Engine – Were You Prepared?
Google first reported an “Issue” on Jun 2, 2019 at 12:25 PDT. As is now common in any type of disaster, reports of this outage first appeared on social media. Social media now seems to be the most reliable place to get any type of information early in a disaster.
Many services that rely on Google Compute Engine were impacted. I have three teenage kids at home, and I knew something was up when all three of them emerged from their caves, aka bedrooms, at the same time with worried looks on their faces. Snapchat, YouTube and Discord were all offline!
They must have thought that surely this was the first sign of the apocalypse. I reassured them this was not the beginning of the new dark ages. And instead they should go outside and do some yard work. That scared them back to reality and they quickly scurried away to find something else to occupy their time.
All kidding aside, there were many services being reported as down, or only available in certain areas. The dust is still settling on the cause, breadth and scope of the outage. But it certainly seems that the outage was pretty significant in size and scope, impacting many customers and services including Gmail and other G-Suite services, Vimeo and more.
While we wait for the official root cause analysis on this latest Google Compute Engine outage, Google reported “high levels of network congestion in the eastern USA” caused the downtime. We will have to wait to see what they determine caused the network issues. Was it human error, cyber-attack, hardware failure, or something else?
Were You Prepared For This Cloud Outage?
As I wrote during the last major cloud outage, if you are running business critical workloads in the cloud, regardless of the cloud service provider, it is incumbent upon you to plan for the inevitable outage. The multi-day Azure outage of September 4th, 2018 was caused by the failure of a secondary HVAC system to kick in during a power surge related to an electrical storm. While the failure was confined to a single datacenter, the outage exposed multiple services that had dependencies on that one datacenter, making the datacenter itself a single point of failure.
Have A Sound Disaster Recovery Plan
Leveraging the cloud’s infrastructure, you can minimize risk by continuously replicating critical data between Availability Zones, Regions or even cloud service providers. In addition to data protection, having a procedure in place to rapidly recover business critical applications is an essential part of any disaster recovery plan. There are various replication and recovery options available, ranging from services provided by the cloud vendors themselves, like Azure Site Recovery, to application-specific solutions like SQL Server Always On Availability Groups, to third-party solutions like SIOS DataKeeper that protect a wide range of applications running on both Windows and Linux.
Having a disaster recovery strategy that is wholly dependent on a single cloud provider leaves you susceptible to a scenario that might impact multiple regions within a single cloud. Multi-datacenter or multi-region disasters are not likely. However, as we saw with this recent outage and the Azure outage last fall, even if a failure is local to a single datacenter, the impact can be wide reaching across multiple datacenters or even regions within a cloud. To minimize your risks, consider a multi-cloud or hybrid cloud scenario where the disaster recovery site resides outside of your primary cloud platform.
The cloud is just as susceptible to outages as your own datacenter. You must take steps to prepare for disasters. I suggest you start by looking at your most business critical apps first. What would you do if they were offline and the cloud portal to manage them was not even available? Could you recover? Would you meet your RTO and RPO objectives? If not, maybe it is time to re-evaluate your Disaster Recovery strategy.
“By failing to prepare, you are preparing to fail.”
― Benjamin Franklin
Step-By-Step: How To Configure A SQL Server 2008 R2 Failover Cluster Instance On Windows Server 2008 R2 In Azure
Intro
On July 9, 2019, support for SQL Server 2008 and 2008 R2 will end. That means the end of regular security updates. However, if you move those SQL Server instances to Azure, Microsoft will give you three years of Extended Security Updates at no additional charge. If you are currently running SQL Server 2008/2008 R2 and you are unable to update to a later version of SQL Server before the July 9th deadline, you will want to take advantage of this offer rather than running the risk of facing a future security vulnerability. An unpatched instance of SQL Server could lead to data loss, downtime or a devastating data breach.
One of the challenges you will face when running SQL Server 2008/2008 R2 in Azure is ensuring high availability. On premises you may be running a SQL Server Failover Cluster Instance (FCI) for high availability, or possibly you are running SQL Server in a virtual machine and are relying on VMware HA or a Hyper-V cluster for availability. When moving to Azure, none of those options are available. Downtime in Azure is a very real possibility that you must take steps to mitigate.
In order to mitigate the possibility of downtime and qualify for Azure’s 99.95% or 99.99% SLA, you have to leverage SIOS DataKeeper. DataKeeper overcomes Azure’s lack of shared storage and allows you to build a SQL Server FCI in Azure that leverages the locally attached storage on each instance. SIOS DataKeeper not only supports SQL Server 2008 R2 and Windows Server 2008 R2 as documented in this guide, it supports any version of Windows Server, from 2008 R2 through Windows Server 2019, and any version of SQL Server from SQL Server 2008 through SQL Server 2019.
This guide will walk through the process of creating a two-node SQL Server 2008 R2 Failover Cluster Instance (FCI) in Azure, running on Windows Server 2008 R2. Although SIOS DataKeeper also supports clusters that span Availability Zones or Regions, this guide assumes each node resides in the same Azure Region, but different Fault Domains. SIOS DataKeeper will be used in place of the shared storage normally required to create a SQL Server 2008 R2 FCI.
Create The First SQL Server Instance In Azure
This guide will leverage the SQL Server 2008R2SP3 on Windows Server 2008R2 image that is published in the Azure Marketplace.
When you provision the first instance you will have to create a new Availability Set. During this process be sure to increase the number of Fault Domains to 3. This allows the two cluster nodes and the file share witness each to reside in their own Fault Domain.
Add additional disks to each instance. Premium or Ultra SSD are recommended. Disable caching on the disks used for the SQL log files. Enable read-only caching on the disk used for the SQL data files. Refer to Performance guidelines for SQL Server in Azure Virtual Machines for additional information on storage best practices.
If you don’t already have a virtual network configured, allow the creation wizard to create a new one for you.
Once the instance is created, go in to the IP configurations and make the Private IP address static. This is required for SIOS DataKeeper and is best practice for clustered instances.
Make sure that your virtual network is configured to set the DNS server to be a local Windows AD controller. This is to ensure you will be able to join the domain in a later step.
Create The Second SQL Server Instance In Azure
Follow the same steps as above, except be sure to place this instance in the same virtual network and Availability Set that you created with the first instance.
Create A File Share Witness (FSW) Instance
In order for the Windows Server Failover Cluster (WSFC) to work optimally you are required to create another Windows Server instance and place it in the same Availability Set as the SQL Server instances. By placing it in the same Availability Set, you ensure that each cluster node and the FSW reside in different Fault Domains, thereby ensuring your cluster stays online should an entire Fault Domain go offline. This instance does not require SQL Server; it can be a simple Windows Server, as all it needs to do is host a simple file share.
This instance will host the file share witness required by WSFC. It does not need to be the same size as the SQL Server instances, nor does it require any additional disks. Its only purpose is to host a simple file share, and it can in fact be used for other purposes. In my lab environment my FSW is also my domain controller.
Uninstall SQL Server 2008 R2
Each of the two SQL Server instances provisioned already have SQL Server 2008 R2 installed on them. However, they are installed as standalone SQL Server instances, not clustered instances. SQL Server must be uninstalled from each of these instances before we can install the cluster instance. The easiest way to do that is to run the SQL Setup as shown below.
When you run setup.exe /Action=RunDiscovery you will see everything that is preinstalled.
setup.exe /Action=RunDiscovery
Running setup.exe /Action=Uninstall /FEATURES=SQL,AS,RS,IS,Tools /INSTANCENAME=MSSQLSERVER kicks off the uninstall process.
setup.exe /Action=Uninstall /FEATURES=SQL,AS,RS,IS,Tools /INSTANCENAME=MSSQLSERVER
Running setup.exe /Action=RunDiscovery again confirms the uninstallation completed.
setup.exe /Action=RunDiscovery
Run this uninstallation process again on the 2nd instance.
Add Instances To The Domain
All three of these instances will need to be added to a Windows Domain.
Add Windows Failover Clustering Feature
The Failover Clustering Feature needs to be added to the two SQL Server instances
Add-WindowsFeature Failover-Clustering
Turn Off Windows Firewall
For simplicity’s sake, turn off the Windows Firewall during the installation and configuration of the SQL Server FCI. Consult Azure Network Security Best Practices for advice on securing your Azure resources. Details on the required Windows ports, SQL Server ports and SIOS DataKeeper ports can be found in the linked documentation. The Internal Load Balancer that we will configure later also requires access on port 59999, so be sure to account for that in your security configuration.
NetSh Advfirewall set allprofiles state off
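If you would rather keep the firewall enabled, here is a minimal netsh sketch for two of the ports referenced above (the default SQL listener on TCP 1433 and the load balancer probe on TCP 59999); the clustering and DataKeeper ports from the linked documentation would need equivalent rules:
netsh advfirewall firewall add rule name="SQL Server 1433" dir=in action=allow protocol=TCP localport=1433
netsh advfirewall firewall add rule name="ILB Probe 59999" dir=in action=allow protocol=TCP localport=59999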
Install Convenience Rollup Update For Windows Server 2008 R2 SP1
There is a critical update (KB2854082) that is required in order to configure a Windows Server 2008 R2 instance in Azure. That update, and many more, are included in the Convenience Rollup Update for Windows Server 2008 R2 SP1. Install this update on each of the two SQL Server instances.
Format The Storage
The additional disks that were attached when the two SQL Server instances were provisioned need to be formatted. Do the following for each volume on each instance.
Microsoft best practices says the following…
“NTFS allocation unit size: When formatting the data disk, it is recommended that you use a 64-KB allocation unit size for data and log files as well as TempDB.”
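A hedged example of formatting one of the data disks with the recommended 64-KB allocation unit size (the drive letter and volume label are examples):
format F: /FS:NTFS /A:64K /V:SQLDATA /Q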
Run Cluster Validation
Run cluster validation to ensure everything is ready to be clustered.
Your report will contain WARNINGS about Storage and Networking. You can ignore those warnings as we know there are no shared disks and only a single network connection exists between the servers. You may also receive a warning about network binding order which can also be ignored. If you encounter any ERRORS you must address those before you continue.
Create The Cluster
Best practices for creating a cluster in Azure would be to use Powershell as shown below. Powershell allows us to specify a Static IP Address, whereas the GUI method does not. Unfortunately, Azure’s implementation of DHCP does not work well with Windows Server Failover Clustering. If you use the GUI method you will wind up with a duplicate IP address as the Cluster IP Address. It’s not the end of the world, but you will need to fix that as I show.
As I said, the Powershell method generally works best. However, for some reason, it seems to be failing on Windows Server 2008 R2 as shown below.
New-Cluster -Name cluster1 -Node sql1,sql2 -StaticAddress 10.1.0.100 -NoStorage
You can try that method and if it works for you – great! I need to go back and investigate this a bit more to see if it was a fluke. Another option I need to explore if Powershell is not working is Cluster.exe. Running cluster /create /? gives the proper syntax to use for creating clusters with the deprecated cluster.exe command.
However, if Powershell or Cluster.exe fails you, the steps below illustrate how to create a cluster via the Windows Server Failover Clustering UI, including fixing the duplicate IP address that will be assigned to the cluster.
Remember, the name you specify here is just the Cluster Name Object (CNO). This is not the name that your SQL clients will use to connect to the cluster; we will define that during the SQL Server cluster setup in a later step.
At this point, the cluster is created, but you may not be able to connect to it with the Windows Server Failover Clustering UI due to the duplicate IP address problem.
Fix The Duplicate IP Address
As I mentioned earlier, if you create the cluster using the GUI, you are not given the opportunity to choose an IP address for the cluster. Because your instances are configured to use DHCP (required in Azure), the GUI wants to automatically assign an IP address using DHCP. Unfortunately, Azure’s implementation of DHCP does not work as expected and the cluster will be assigned the same address that is already being used by one of the nodes. Although the cluster will create properly, you will have a hard time connecting to it until you fix this problem.
To fix this problem, from one of the nodes run the following command to ensure the Cluster service is started on that node.
Net start clussvc /fq
On that same node you should now be able to connect to the Windows Server Failover Clustering UI, where you will see the IP Address has failed to come online.
Open the properties of the Cluster IP address and change it from DHCP to Static, and assign it an unused IP address.
Bring the Name resource online
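If you prefer the command line, the deprecated cluster.exe syntax used later in this guide can make the same change; the resource name, address and subnet mask below are examples only:
cluster res "Cluster IP Address" /priv enabledhcp=0 address=10.1.0.100 subnetmask=255.255.255.0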
Add The File Share Witness
Next we need to add the File Share Witness. On the third server we provisioned as the FSW, create a folder and share it. You will need to grant the Cluster Name Object (CNO) read/write permissions at both the Share and Security levels.
Once the share is created, run the Configure Cluster Quorum wizard on one of the cluster nodes and follow the steps illustrated below.
Create Service Account For DataKeeper
We are almost ready to install DataKeeper. However, before we do that you need to create a Domain account and add it to the Local Administrators group on each of the SQL Server cluster instances. We will specify this account when we install DataKeeper.
Install DataKeeper
Install DataKeeper on each of the two SQL Server cluster nodes as shown below.
This is where we will specify the Domain account that we added to the Local Administrators group on each node.
Configure DataKeeper
Once DataKeeper is installed on each of the two cluster nodes, you are ready to configure DataKeeper.
NOTE – The most common error encountered in the following steps is security related, most often by pre-existing Azure Security groups blocking required ports. Please refer to the SIOS documentation to ensure the servers can communicate over the required ports.
First, you must connect to each of the two nodes.
If everything is configured properly, you should then see the following in the Server Overview report.
Next, create a New Job and follow the steps illustrated below
Choose Yes here to register the DataKeeper Volume resource in Available Storage
Complete the above steps for each of the volumes. Once you are finished, you should see the following in the Windows Server Failover Clustering UI.
You are now ready to install SQL Server into the cluster.
NOTE – At this point the replicated volume is only accessible on the node that is currently hosting Available Storage. That is expected, so don’t worry!
Install SQL Server On The First Node
On the first node, run the SQL Server setup.
Choose New SQL Server Failover Cluster Installation and follow the steps as illustrated.
Choose only the options you need.
Please note, this document assumes you are using the Default instance of SQL Server. If you use a Named Instance, you need to make sure you lock down the port that it listens on, and use that port later on when you configure the load balancer. You also will need to create a load balancer rule for the SQL Server Browser Service (UDP 1434) in order to connect to a Named Instance. Neither of those two requirements is covered in this guide, but if you require a Named Instance, it will work if you do those two additional steps.
Here you will need to specify an unused IP address
Go to the Data Directories tab and relocate data and log files. At the end of this guide we talk about relocating tempdb to a non-mirrored DataKeeper Volume for optimal performance. For now, just keep it on one of the clustered disks.
Install SQL On Second Node
Run the SQL Server setup again on the second node. Then, choose Add node to a SQL Server Failover Cluster.
Congratulations, you are almost done! However, due to Azure’s lack of support for gratuitous ARP, we will need to configure an Internal Load Balancer (ILB) to assist with client redirection as shown in the following steps.
Update The SQL Cluster IP Address
In order for the ILB to function properly, you must run the following command from one of the cluster nodes. It enables the SQL Cluster IP address to respond to the ILB health probe and sets the subnet mask to 255.255.255.255 in order to avoid IP address conflicts with the health probe.
cluster res <IPResourceName> /priv enabledhcp=0 address=<ILBIP> probeport=59999 subnetmask=255.255.255.255
NOTE – I don’t know if it is a fluke. On occasion I have run this command and it looks like it works, but it doesn’t complete the job and I have to start again. The way I can tell if it worked is by looking at the Subnet Mask of the SQL Server IP Resource. If it is not 255.255.255.255 then you know it didn’t run successfully. It may simply be a GUI refresh issue. Do try restarting the cluster GUI to verify the subnet mask was updated.
After it runs successfully, take the resource offline and bring it back online for the changes to take effect.
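On Windows Server 2008 R2 this can also be done with cluster.exe; the resource name is an example, so use the actual name of your SQL Server IP Address resource, and note that dependent resources such as the SQL Server service will briefly go offline with it:
cluster res "SQL IP Address 1 (sqlcluster)" /offline
cluster res "SQL IP Address 1 (sqlcluster)" /online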
Create The Load Balancer
The final step is to create the load balancer. In this case we are assuming you are running the Default Instance of SQL Server, listening on port 1433.
The Private IP Address you define when you Create the load balancer will be the exact same address your SQL Server FCI uses.
Add just the two SQL Server instances to the backend pool. Do NOT add the FSW to the backend pool.
In this load balancing rule, you must enable Floating IP.
Test The Cluster
The simplest test is to open SQL Server Management Studio on the passive node and connect to the cluster. If it connects, congratulations, you did everything correctly! If you can’t connect, don’t fear; I wrote a blog article to help troubleshoot the issue. Managing the cluster is exactly the same as managing a traditional shared storage cluster. Everything is controlled through Failover Cluster Manager.
Optional – Relocate TempDB
For optimal performance it would be advisable to move tempdb to the local, non-replicated SSD. However, SQL Server 2008 R2 requires tempdb to reside on a clustered disk. SIOS has a solution called a Non-Mirrored Volume Resource which addresses this issue. It would be advisable to create a non-mirrored volume resource on the local SSD drive and move tempdb there. Do note that the local SSD drive is non-persistent, so you must take care to ensure the folder holding tempdb and the permissions on that folder are recreated each time the server reboots.
After you create the Non-Mirrored Volume Resource of the local SSD, follow the steps in this article to relocate tempdb. The startup script described in that article must be added to each cluster node.
Reproduced with permission from Clusteringformeremortals.com
New Azure ILB Feature Allows You To Build A Multi-Instance SQL Server Failover Cluster
At Microsoft Ignite this past September, Microsoft made some announcements around Azure. One of these announcements was the general availability of multiple VIPs on internal load balancers. Why is this so important to a SQL Server DBA? Well, up until now if you want to deploy highly available SQL Server in Azure you were limited to a single SQL Server FCI per cluster or a single Availability Group listener.
This limitation forced you to deploy a new cluster for each instance of SQL Server you wanted to protect in a Failover Cluster. It also forced you to group all of your databases into a single Availability Group if you wanted automatic failover and client redirection in your AlwaysOn AG configuration.
How To Get Out Of These Restrictions?
Those restrictions have now been lifted with these new ILB features. In this post I am going to walk you through the process of deploying a SQL Server FCI in Azure that contains two SQL Server instances. In a future post I will walk you through the same process for SQL Server AlwaysOn AG.
Let’s Start With A Multi-Instance SQL Server Failover Cluster
Build a basic, single-instance SQL Server FCI in Azure as I describe in my post Deploying Microsoft SQL Server 2014 Failover Clusters in Azure Resource Manager.
That post walks through creating the cluster, using DataKeeper to create the replicated volume resources used in the cluster, creating the Internal Load Balancer (ILB), and fixing the SQL Server Cluster IP Resource to work with the ILB. If you want to skip that process and jumpstart your configuration, you can always use the Azure Deployment Template that creates a 2-Node SQL Server FCI using SIOS DataKeeper.
Assuming you now have a basic two node SQL Server FCI, the steps to add a 2nd named instance are as follows:
- Create another DataKeeper Volume Resource on another volume that is not currently being used. You may need to add additional disks to your Azure instance if you have no available volumes. As part of this volume creation process the new DataKeeper Volume resource will be registered in Available Storage in the cluster. Refer to the article referenced earlier for the details.
- Install a named instance of SQL Server on the first node, specifying the DataKeeper Volume that we just created as the storage location.
- “Add a node” to the cluster on the second node.
- Lock down the port number of this new named instance to a port that is not in use. In my example I use port 1440.
Adjust ILB To Second Instance
Next we have to adjust the ILB to redirect traffic to this second instance. Here are the steps you need to follow:
Add a frontend IP address that is identical to the SQL cluster IP address you used for the second instance of SQL Server as shown below.
Next, we will need to add another probe since the instances could be running on different servers. As shown below, I added a probe on port 59998 (instead of the usual 59999). We will need to make sure the new rules reference this probe, and we will need to remember that port number since we will use it when we update the IP address associated with this instance during the last step of this process.
Now we need to add two new rules to the ILB to direct traffic destined for this 2nd instance of SQL. Of course we need to add a rule to redirect TCP port 1440 (the port I used for the named instance of SQL), but because we are now using named instances we will also need a rule to support the SQL Server Browser Service, UDP port 1434.
In the picture below depicting the rule for the SQL Server Browser Service, take note that the Front End IP Address is referencing the new FrontendIP address (10.0.0.201), UDP port 1434 for both the Port and Backend Port. In the pool you will need to specify the two servers in the cluster, and finally make sure you choose the new Health Probe we just created.
We will now add a rule for TCP/1440. As shown in the picture below, add a new rule for port TCP 1440, or whatever port you locked down for the named instance of SQL Server. Again, be sure to choose the new FrontEnd IP Address and the new Health Probe (59998). Also, make sure Floating IP (direct server return) is enabled.
The Last Step
Now that the load balancer is configured, the final step is to run the PowerShell script to update the new Cluster IP address associated with this 2nd instance of SQL Server. This PowerShell script only needs to be run on one of the cluster nodes.
# Define variables
$ClusterNetworkName = ""   # the cluster network name (use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)
$IPResourceName = ""       # the IP Address resource name of the second instance of SQL Server
$ILBIP = ""                # the IP Address of the second instance of SQL, which should be the same as the new Frontend IP address
Import-Module FailoverClusters
# If you are using Windows Server 2012 or higher:
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{Address=$ILBIP;ProbePort=59998;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}
# If you are using Windows Server 2008 R2 use this instead:
# cluster res $IPResourceName /priv enabledhcp=0 address=$ILBIP probeport=59998 subnetmask=255.255.255.255
You now have a fully functional multi-instance SQL Server FCI in Azure. Let me know if you have any questions about building a multi-instance SQL Server Failover Cluster with the new Azure ILB feature.
Reproduced from Clusteringformeremortals.com