August 24, 2022 |
White Paper: High Availability Solutions Provide Cost Savings and Flexibility for Oracle
Migrating Oracle databases can be a tall order. The process is often complex and can be costly. Sometimes, however, migration is unavoidable, especially when support and licensing considerations come together to make it a necessity. And when migrating business-critical databases, it is important to provide high availability protection to prevent downtime, which results in lost revenue and productivity. Oracle has announced that support for Oracle Database 12c will soon end, so organizations running Oracle Database 12c face an imminent requirement to migrate. Download White Paper: High Availability Solutions Provide Cost Savings and Flexibility for Oracle |
August 20, 2022 |
White Paper: Understanding the Complexity of High Availability for Business-Critical Applications
|
August 18, 2022 |
Introducing the Generic Load-Balancer Kit for SIOS LifeKeeper and Google Cloud
This post discusses the Generic Load-Balancer Application Recovery Kit (ARK) for SIOS LifeKeeper for Linux and, specifically, how to configure it on Google Cloud. SIOS ARKs are plug-ins to SIOS LifeKeeper products that add platform or application awareness. Using a two-node NFS cluster as an example, this blog shows how the NFS exports it provides can ultimately be accessed via the load balancer. SIOS created this ARK to facilitate client redirection in SIOS LifeKeeper clusters running in GCP. Because GCP does not support gratuitous ARP – a broadcast in which a host announces its own IP address and MAC mapping – clients cannot connect directly to traditional cluster virtual IP addresses. Instead, clients must connect to a load balancer, which redirects traffic to the active cluster node. GCP implements separate load-balancer solutions that operate on layer 4 TCP, layer 4 UDP, or layer 7 HTTP/HTTPS. The load balancer can be configured with private or public frontend IP(s), a health probe that determines which node is active, a set of backend IP addresses (one for each node in the cluster), and incoming/outgoing network traffic rules. Traditionally, the health probe would monitor the active port of an application to determine which node that application is active on. The SIOS Generic Load-Balancer ARK instead has the active node listen on a user-defined port, and that port is configured in the GCP load balancer as the health probe port. This allows the active cluster node to respond to TCP health check probes, enabling automatic client redirection.
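The redirection mechanism can be illustrated with a short sketch. This is not the ARK's actual implementation (the kit ships as LifeKeeper scripts); it is a minimal Python illustration of the idea that only the active node accepts TCP connections on the user-defined health-check port, so the load balancer's TCP probe succeeds only against that node. The port number 54321 is just an example value.

```python
# Minimal illustration (not the actual ARK, which is script-based): only the
# active cluster node runs a listener on the user-defined health-check port,
# so the GCP load balancer's TCP probe succeeds only against that node.
import socket
import threading
import time

HEALTH_CHECK_PORT = 54321  # example value; must match the GCP health probe port

def serve_health_checks(stop_event, host="127.0.0.1", port=HEALTH_CHECK_PORT):
    """Accept TCP connections so the load balancer's probe sees this node as healthy."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    srv.settimeout(0.5)           # wake periodically to check for shutdown
    while not stop_event.is_set():
        try:
            conn, _ = srv.accept()
            conn.close()          # completing the TCP handshake is all a TCP probe needs
        except socket.timeout:
            continue
    srv.close()

if __name__ == "__main__":
    stop = threading.Event()
    t = threading.Thread(target=serve_health_checks, args=(stop,), daemon=True)
    t.start()
    time.sleep(0.2)               # let the listener bind
    # Simulate the load balancer's TCP health probe against the "active node"
    with socket.create_connection(("127.0.0.1", HEALTH_CHECK_PORT), timeout=2.0):
        print("probe succeeded: this node would be marked active")
    stop.set()
    t.join()
```

On a standby node no listener is running, so the same probe fails and the load balancer routes all traffic to the active node.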
Installation and configuration in GCP is straightforward and detailed below. Within the GCP portal, select Load balancing; in this instance we want TCP Load Balancing. Create a load balancer: you will select the resource group where you want it deployed as well as a name. I like to use a name that lines up with the cluster it fronts; for example, IMA-NFS-LB will sit in front of both IMA-NFS nodes. You can also choose whether the load balancer will be Internet-facing or internal to your VPC. In this case I am configuring an internal-only load balancer to front my NFS server for use only within my VPC.
Once you have chosen the name, region, etc., you will be asked to assign a backend configuration; this requires an instance group that contains the HA nodes you will front-end. Once you have assigned an instance group, you will define a health check. This port must match the port you will use in the LifeKeeper Generic Load-Balancer configuration; in this case I am using 54321.
Again, pay attention to the port number, as it will be used with LifeKeeper. I kept the default values for the health criteria. Once the backend configuration and health check are entered for the load balancer, you will need to define the frontend configuration. This consists of the subnet, region, and IP you want to create for the load balancer. The IP you configure here will match the LifeKeeper IP that you are protecting. Once you are happy with the configuration, you can either review it or simply click Create. GCP will then start deploying the load balancer; this can take several minutes. Once it completes, the configuration moves on to the SIOS Protection Suite.
Configuration with SIOS Protection Suite
For this blog I have configured three NFS exports to be protected using SPS-L. The three exports are configured to use the same IP as the GCP load balancer's frontend IP, and I am using DataKeeper to replicate the data stored on the exports. The first step is to obtain the scripts. The simplest way is to use wget, but you can also download the entire package and upload the RPM directly to the nodes using WinSCP or a similar tool. You need to install the hotfix on all nodes in the LifeKeeper cluster. The entire recovery kit can be obtained here: http://ftp.us.sios.com/pickup/LifeKeeper_Linux_Core_en_9.5.1/patches/Gen-LB-PL-7172-9.5.1 The parts can be fetched with wget: wget http://ftp.us.sios.com/pickup/Gen-LB-PL-7172-9.5.1/Gen-LB-readme.txt Once downloaded, verify the MD5 sum against the value recorded on the FTP site.
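On Linux the MD5 check is usually a one-line `md5sum` call; as an illustration, here is a small Python equivalent. The RPM filename is taken from the install command that follows, but the expected digest is a placeholder you must replace with the value published on the FTP site.

```python
# Sketch of the MD5 verification step. The expected digest below is a
# placeholder -- substitute the value recorded on the SIOS FTP site.
import hashlib
import os

def md5sum(path, chunk_size=65536):
    """Return the MD5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    rpm = "steeleye-lkHOTFIX-Gen-LB-PL-7172-9.5.1-7154.x86_64.rpm"
    expected = "replace-with-md5-from-ftp-site"   # placeholder value
    if os.path.exists(rpm):
        actual = md5sum(rpm)
        print("OK" if actual == expected else f"MISMATCH: got {actual}")
    else:
        print(f"{rpm} not found; download it first")
```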
Install the RPM as follows: rpm -ivh steeleye-lkHOTFIX-Gen-LB-PL-7172-9.5.1-7154.x86_64.rpm Check that the install was successful by running: rpm -qa | grep steeleye-lkHOTFIX-Gen-LB-PL-7172 Should you need to remove the RPM for some reason, you can do so by running: rpm -e steeleye-lkHOTFIX-Gen-LB-PL-7172-9.5.1-7154.x86_64 Below is the GUI showing the three NFS exports I have already configured. Within the SIOS Protection Suite, we need to define the load balancer resource using the hotfix scripts provided by SIOS:
● First, create a new resource hierarchy and select Generic Application from the drop-down.
● Define the restore.pl script located in /opt/Lifekeeper/SIOS_Hotfixes/Gen-LB-PL-7172/
● Define the remove.pl script located in /opt/Lifekeeper/SIOS_Hotfixes/Gen-LB-PL-7172/
● Define the quickCheck script located in /opt/Lifekeeper/SIOS_Hotfixes/Gen-LB-PL-7172/
● There is no local recovery script, so make sure you clear this input.
● When asked for Application Info, enter the same port number configured as the health check port, e.g. 54321.
● Choose to bring the resource into service once it is created.
● The Resource Tag is the name displayed in the SPS-L GUI; I like to use something that makes the resource easy to identify.
If everything is configured correctly you will see "END successful restore". We can then extend the hierarchy to the other node so that the resource can be hosted on either node. This shows the completed load balancer configuration following extension to both nodes. The last step for this cluster is to create child dependencies for the three NFS exports. This means that all the NFS exports, complete with DataKeeper mirrors and IPs, will rely on the load balancer; should a serious issue occur on the active node, all of these resources will fail over to the other functioning node. Above, the completed hierarchy in the LifeKeeper GUI.
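Conceptually, the quickCheck script's job is to report whether the resource is healthy, which for this resource means the health-check port is accepting connections. The actual kit ships its own scripts; the Python sketch below only illustrates the convention (assumed here) that a 0 exit status means healthy and non-zero means failed.

```python
# Conceptual quickCheck-style monitor (illustration only; the real ARK ships
# its own scripts). "Healthy" means the user-defined health-check port
# accepts a TCP connection on this node.
import socket

HEALTH_CHECK_PORT = 54321  # the same port entered as Application Info

def check_port(host="127.0.0.1", port=HEALTH_CHECK_PORT, timeout=2.0):
    """Return 0 if the port accepts a TCP connection (healthy), 1 otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0
    except OSError:
        return 1

if __name__ == "__main__":
    status = check_port()
    print("resource healthy" if status == 0 else "resource failed")
```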
Below, the expanded GUI view shows the NFS exports, IP, filesystems, and DataKeeper replicated volumes as children of the load balancer resource. This is just one example of how you can use SIOS LifeKeeper in GCP to protect a simple NFS cluster. The same concept applies to any business-critical application you need to protect: simply leverage the Load Balancer ARK supplied by SIOS to allow the GCP load balancer (Internet-facing or internal) to determine which node is currently hosting the application. |
August 12, 2022 |
How to Reduce Downtime in SAP
Thinking about how to reduce downtime in SAP is an important topic that should be visited during initial solution design. Changes can be made to existing SAP landscapes, but they are trickier in existing production environments where downtime will be an issue. There are several typical components in an SAP landscape that can be considered single points of failure: ASCS (Central Services), the HANA DB, NFS nodes, and SAP application servers. Ideally these should be protected by using redundant servers in a high availability configuration.
HA/DR Goals for SAP
Core goals when designing components of high availability/disaster recovery for SAP should be:
● Minimize downtime
In today's modern cloud environments, the underlying hardware infrastructure is typically well protected from failures through multiple redundant NICs, redundant storage, and hardware availability zones. However, this still doesn't guarantee that your SAP application will be running and responding to requests. A high availability solution such as the SIOS Protection Suite adds intelligent high availability coupled with local disk replication to ensure that your SAP applications and services are continually monitored and protected, and can automatically switch to redundant hardware when a failure is detected. Now let's consider a simple example of an SAP configuration that is not HA protected; it might look something like this (figure 1). If this environment processes transactions from a web server used to sell clothing to customers, SAP is being used to process sales, track orders, track inventory, and provide automated ordering based on these transactions.
Now let's imagine that this sales processing environment (pictured above) was configured in the cloud without HA because the architect thought that highly redundant cloud hardware was good enough to protect against failures. If that HANA DB experiences an issue and shuts down, let's look at the steps typically required to get the database back up and running:
● Even if HANA is configured with HANA System Replication, failover to the secondary HANA DB system isn't automated. After the failure is detected and someone who knows HANA is notified of the outage, that person must perform the failover manually.
If this small clothing retailer transacts about $10 million annually from web-based sales, that equates to roughly $1,150 per hour in sales equalized over the year; peak times would cost much more per hour. This report from IBM suggests that the average downtime cost per hour is $10k. Figure 2: SAP Landscape with HA/DR. If HA software had been in use (figure 2), HANA DB failover would have been automatic, the interruption to the web server would have stayed within the configured timeouts, and no sales would have been lost. An alert would be generated, and the cause could be diagnosed at a more leisurely pace than in a system-down situation. Scale up the customer size and it is very likely that any system-down situation would start to cost hundreds of thousands of dollars and consume significant people resources to resolve. Another IBM report suggests that a staggering 44% of respondents had bi-monthly unplanned outages and another 35% had monthly unplanned outages. Planned outages are another potential problem, with 46% of respondents reporting monthly planned outages and a further 29% reporting yearly planned outages. Having applications and services protected by HA software can also mitigate these planned outages by allowing services to be moved to running systems during maintenance activities.
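The per-hour figure in the example is simple arithmetic; the sketch below reproduces it. The four-hour outage at the end is a hypothetical illustration, not a number from the article.

```python
# Reproduce the article's back-of-the-envelope downtime cost: $10M in annual
# web sales spread evenly across the year.
annual_sales = 10_000_000      # USD per year
hours_per_year = 365 * 24      # 8760

avg_sales_per_hour = annual_sales / hours_per_year
print(f"Sales at risk per hour of downtime: ${avg_sales_per_hour:,.0f}")
# about $1,142/hour, which the article rounds to roughly $1,150

# The cited IBM average of $10k/hour applied to a hypothetical 4-hour outage:
ibm_cost_per_hour = 10_000     # USD per hour of downtime
outage_hours = 4               # hypothetical manual-recovery outage
print(f"Cost of a {outage_hours}-hour outage at the IBM average: ${ibm_cost_per_hour * outage_hours:,}")
```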
Learn more about high availability for SAP and S/4HANA. |
August 8, 2022 |
White Paper: Exploring High Availability Use Cases in Regulated Industries
While downtime in business-critical systems, databases, and applications imposes costs on every organization, different industries face different consequences from unplanned downtime. In this tech brief, we explore high availability (HA) use cases and SIOS customer success stories in the financial services, healthcare, manufacturing, and education industries. Reproduced with permission from SIOS
|