High-availability clusters (also known as HA clusters or fail-over clusters) are groups of computers that support server applications that can be reliably utilized with a minimum of downtime. They operate by using high availability software to harness redundant computers in groups or clusters that provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed. HA clustering remedies this situation by detecting hardware/software faults and immediately restarting the application on another system without requiring administrative intervention, a process known as failover. As part of this process, clustering software may configure the node before starting the application on it. For example, appropriate filesystems may need to be imported and mounted, network hardware may have to be configured, and some supporting applications may need to be running as well.
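The failover sequence above (prepare the node, then restart the application on it) can be sketched as follows. This is a minimal illustration, not a real cluster manager's API; all class and method names are hypothetical.

```python
# Minimal sketch of a failover sequence: configure the standby node
# (mount filesystems, take over the service address), then restart the
# application on it. All names here are illustrative.

class Node:
    def __init__(self, name):
        self.name = name
        self.mounted = []   # filesystems imported/mounted on this node
        self.ip = None      # virtual service IP, if taken over
        self.running = []   # applications started on this node

    def mount_filesystems(self, filesystems):
        self.mounted.extend(filesystems)

    def configure_network(self, virtual_ip):
        self.ip = virtual_ip

    def start(self, service):
        self.running.append(service)

def failover(standby, app, filesystems, virtual_ip):
    """Configure the standby node, then restart the application on it."""
    standby.mount_filesystems(filesystems)
    standby.configure_network(virtual_ip)
    standby.start(app)

standby = Node("node2")
failover(standby, "database", ["/data"], "10.0.0.100")
print(standby.running)  # ['database']
```

Real cluster managers (Pacemaker, for example) drive equivalent steps through resource agents, and typically also fence the failed node before the standby takes over.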
HA clusters are often used for critical databases, file sharing on a network, business applications, and customer services such as electronic commerce websites.
HA cluster implementations attempt to build redundancy into a cluster to eliminate single points of failure, including multiple network connections and data storage that is redundantly connected via storage area networks (SANs).
HA clusters usually use a private heartbeat network connection to monitor the health and status of each node in the cluster. One subtle but serious condition all clustering software must be able to handle is split-brain, which occurs when all of the private links go down simultaneously, but the cluster nodes are still running. If that happens, each node in the cluster may mistakenly decide that every other node has gone down and attempt to start services that other nodes are still running. Having duplicate instances of services may cause data corruption on the shared storage.
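A common guard against split-brain is a quorum rule: a partition of the cluster may only run services if it can see a strict majority of the nodes, so at most one partition stays active. A minimal sketch of the voting logic (the rule shown is illustrative):

```python
# Illustrative quorum check used to avoid split-brain: a partition may
# only start services if it holds a strict majority of cluster votes.

def has_quorum(visible_nodes, cluster_size):
    """Return True if this partition can see a strict majority of nodes."""
    return visible_nodes > cluster_size // 2

# In a 5-node cluster split 3/2, only the 3-node side keeps running:
print(has_quorum(3, 5))  # True  -> this partition may run services
print(has_quorum(2, 5))  # False -> this partition must stand down
```

Note that in an even split (e.g. 2/2 in a 4-node cluster) neither side has a majority, which is why quorum is often combined with a tiebreaker node or fencing.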
Clusters are groups of similar components that work together to service client requests.
A cluster is a parallel or distributed computing system made up of many discrete hosts that form a single, unified computing resource. Through clustering, you can partition the system load across redundant Unwired Platform components to design a highly available system.
There are two tiers you can install in a cluster: the server tier (Unwired Server) and the data tier (runtime databases). The server tier may further be supported by a relay server and a load balancer, depending on the scale of your rollout.
Each tier uses a different type of cluster model:
Load-balancing server tier clusters – improve system performance by sharing workloads and requests, and improving the efficiency of Unwired Server services (like synchronization and deployment). Client-initiated requests are managed by a load balancer (or, in some IIS deployments, application request routing services) fronting two or more relay servers; the load balancer, or some other mechanism, distributes these requests among all the Unwired Server cluster nodes. Each node in the Unwired Server cluster scales independently and automatically when another Unwired Server node joins or leaves the cluster.
Failover data tier clusters – improve the availability of runtime database services to Unwired Servers. Failover clusters have a redundant node, which provides data tier services when the primary node fails. The most common size for a failover consolidated database cluster is two nodes, which is the minimum requirement to provide redundancy and thereby eliminate single points of failure.
Note: The data tier does not need a relay server, since it will not be accessed directly from outside the firewall.
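The request distribution described for the server tier can be illustrated with a simple round-robin scheme. This is only a sketch: the actual distribution algorithm depends on the load balancer in your deployment.

```python
from itertools import cycle

# Simple round-robin distribution of client requests across server-tier
# nodes. Real load balancers may also weight by node load or health, but
# round-robin shows the basic idea. Node names are illustrative.

nodes = ["server1", "server2", "server3"]
next_node = cycle(nodes)

def route(request):
    """Send each incoming request to the next node in rotation."""
    return next(next_node)

assignments = [route(f"req{i}") for i in range(6)]
print(assignments)
# ['server1', 'server2', 'server3', 'server1', 'server2', 'server3']
```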
Each Unwired Server cluster has a primary node:
The primary node contains the master copy of the configuration repository for all nodes in the cluster. The primary node distributes its configuration to the other nodes. When a primary node fails, a new primary is elected from the remaining nodes.
Each secondary node gets its configuration from the primary node. Nodes must have unique names, but are identified as members of the same cluster as the primary node.
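The primary-node behavior described above can be sketched in a few lines: configuration lives on the primary and is copied to the secondaries, and when the primary fails a new primary is elected from the remaining nodes. The election rule below (lowest node name wins) and the configuration keys are purely illustrative.

```python
# Sketch of configuration distribution and primary election. The
# election rule (lowest surviving name wins) and the config values
# are illustrative, not the product's actual mechanism.

cluster = {"nodeA": {}, "nodeB": {}, "nodeC": {}}
primary = "nodeA"
cluster[primary] = {"cluster_name": "unwired1", "sync_port": 2480}

def distribute_config(cluster, primary):
    """Each secondary copies its configuration from the primary."""
    for name in cluster:
        if name != primary:
            cluster[name] = dict(cluster[primary])

def elect_primary(cluster, failed):
    """On primary failure, elect a new primary from the remaining nodes."""
    survivors = sorted(n for n in cluster if n != failed)
    return survivors[0]

distribute_config(cluster, primary)
primary = elect_primary(cluster, failed="nodeA")
print(primary)                         # nodeB
print(cluster[primary]["sync_port"])   # 2480 (copied before the failure)
```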
Each Unwired Server in a cluster runs on a separate host. Each node connects to the data tier independently and has its own copy of the system files required for execution. Sybase recommends that each cluster node run from its own installation directory.
Load balancing is a high-availability strategy that prevents any one host from getting overloaded with work, thereby adversely affecting overall system performance.
Failover is another high-availability strategy that allows a secondary node to take over in the event of a fault or failure in the first node, thus allowing normal use to continue. Automatic failover is the cluster's default behavior; it does not require manual intervention.
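Automatic failover can be pictured as a monitoring loop that promotes the secondary once the primary misses a threshold of consecutive heartbeats. The threshold and the boolean-heartbeat representation below are illustrative assumptions:

```python
# Illustrative automatic-failover trigger: the secondary takes over
# once the primary misses a threshold of consecutive heartbeats.

MISSED_LIMIT = 3  # heartbeats missed before declaring the primary dead

def monitor(heartbeats):
    """heartbeats: sequence of booleans, True = heartbeat received.

    Returns the index at which failover triggers, or None if the
    primary never misses MISSED_LIMIT heartbeats in a row."""
    missed = 0
    for i, beat in enumerate(heartbeats):
        missed = 0 if beat else missed + 1
        if missed >= MISSED_LIMIT:
            return i  # promote the secondary; no manual intervention
    return None

print(monitor([True, True, False, False, False, True]))  # 4
print(monitor([True, False, True, False, True]))         # None
```

A single missed heartbeat resets on the next successful one, so transient network hiccups do not trigger a takeover; only a sustained outage does.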