The Red Hat High Availability Add-On and Load Balancer are distinct solutions designed to address different aspects of system availability and performance, though they can also be used in conjunction to create more robust environments.
Here are the fundamental differences and use cases for each:
Red Hat High Availability Add-On
The Red Hat High Availability Add-On is a clustered system primarily focused on providing reliability, scalability, and availability to critical production services by eliminating single points of failure. It achieves this mainly through failover of services from one cluster node to another if a node becomes inoperative.
Core Components and Concepts:
• Pacemaker: This is the cluster resource manager used by the High Availability Add-On. It oversees cluster membership, manages services, and monitors resources.
• Cluster Infrastructure: Provides fundamental functions like configuration file management, membership management, lock management, and fencing, enabling nodes to work together as a cluster.
• High Availability Service Management: Facilitates the failover of services in case of node failure.
• Fencing (STONITH): This is a critical mechanism to ensure data integrity by physically isolating or “shooting” an unresponsive node, preventing it from corrupting shared data or resources. Red Hat only supports clusters with fencing enabled.
• Quorum: Cluster systems use quorum to prevent data corruption and loss, especially in “split-brain” scenarios where network communication issues could cause parts of the cluster to operate independently. A cluster has quorum when more than half of its nodes are online.
• Cluster Resources: These are instances of programs, data, or applications managed by the cluster service through “agents” that provide a standard interface. Resources can be configured with constraints (location, ordering, colocation) to determine their behavior within the cluster.
• LVM Support: It supports LVM volumes in two configurations:
◦ Active/Passive (HA-LVM): Only a single node accesses storage at any given time, avoiding cluster coordination overhead for increased performance. This is suitable for applications not designed for concurrent operation.
◦ Active/Active (LVM with lvmlockd): Multiple nodes require simultaneous read/write access to LVM volumes, with lvmlockd coordinating activation and changes to LVM metadata. This is used for cluster-aware applications and file systems like GFS2.
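The components above are configured with the pcs command-line interface. As a minimal sketch, assuming two hypothetical nodes (node1/node2) and an IPMI fence device whose address and credentials are placeholders, bootstrapping a cluster with fencing enabled might look like this (RHEL 7 pcs syntax):

```shell
# Hedged sketch: bootstrap a two-node Pacemaker cluster (RHEL 7 pcs syntax).
# Hostnames and fence-device parameters below are hypothetical placeholders.

# Authenticate the nodes to each other and create the cluster
pcs cluster auth node1.example.com node2.example.com
pcs cluster setup --name mycluster node1.example.com node2.example.com
pcs cluster start --all

# Configure fencing (STONITH) -- required for a supported cluster
pcs stonith create myfence fence_ipmilan \
    pcmk_host_list="node1.example.com node2.example.com" \
    ipaddr=10.0.0.10 login=admin passwd=secret

# Verify membership, quorum, and resource status
pcs status
```

These commands must run against real cluster nodes, so they are shown as a command sketch rather than a runnable example.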
Key Use Cases for Red Hat High Availability Add-On:
• Maintaining High Availability: Ensuring critical services like Apache HTTP servers, NFS servers, or Samba servers remain available even if a node fails, by failing them over to another healthy node.
• Data Integrity: Crucial for services that read and write data via shared file systems, ensuring data consistency during failover.
• Active/Passive Configurations: For most applications not designed for concurrent execution.
• Active/Active Configurations: For specific cluster-aware applications like GFS2 or Samba that require simultaneous access to shared storage.
• Virtual Environments: Managing virtual domains as cluster resources and individual services within them.
• Disaster Recovery: Configuring two clusters (primary and disaster recovery) where resources can be manually failed over to the recovery site if the primary fails.
• Multi-site Clusters: Using Booth cluster ticket manager to span clusters across multiple sites and manage resources based on granted tickets, ensuring resources run at only one site at a time.
• Remote Node Integration: Integrating nodes not running corosync into the cluster to manage their resources remotely, allowing for scalability beyond standard node limits.
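As an illustration of the active/passive use case, a failover Apache service can be modeled as a resource group containing a floating IP and the web server. This is a hedged sketch: the IP address, netmask, and status URL are hypothetical, and the group name is arbitrary.

```shell
# Hedged sketch: an active/passive Apache HTTP service as cluster resources.
# The IP address and configuration paths are hypothetical placeholders.

# Floating IP that follows the service to the surviving node on failover
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
    ip=192.168.122.120 cidr_netmask=24 --group apachegroup

# The Apache HTTP server itself, monitored via its status URL
pcs resource create Website ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd.conf \
    statusurl="http://127.0.0.1/server-status" --group apachegroup
```

Placing both resources in the same group implies colocation and ordering, so explicit constraints are not needed for this simple case.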
Load Balancer
The Load Balancer (specifically using Keepalived and HAProxy in Red Hat Enterprise Linux 7) is designed to provide load balancing and high-availability to network services, dispatching network service requests to multiple cluster nodes to distribute the request load.
Core Components and Concepts:
• Keepalived:
◦ Runs on active and passive Linux Virtual Server (LVS) routers.
◦ Uses the Virtual Router Redundancy Protocol (VRRP) to elect an active router and manage failover of the virtual IP address (VIP) to backup routers if the active one fails.
◦ Performs load balancing for real servers and health checks on service integrity.
◦ Operates primarily at Layer 4 (Transport layer) for TCP connections.
◦ Supports various scheduling algorithms (e.g., Round-Robin, Least-Connection) to distribute traffic.
◦ Offers persistence and firewall marks to ensure client requests consistently go to the same real server for stateful connections (e.g., multi-screen web forms, FTP).
• HAProxy:
◦ Offers load-balanced services for HTTP and TCP-based services.
◦ Processes events on thousands of connections across a pool of real servers.
◦ Allows defining proxy services with front-end (VIP and port) and back-end (pool of real servers) systems.
◦ Performs load-balancing management at Layer 7 (Application layer).
◦ Supports various scheduling algorithms (e.g., Round-Robin, Least-Connection, Source, URI, URL Parameter).
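To make the Keepalived side concrete, the following is a minimal keepalived.conf sketch combining a VRRP instance (VIP failover) with an LVS virtual server (Layer 4 load balancing). All addresses, the interface name, and the password are hypothetical.

```
# Hedged sketch of /etc/keepalived/keepalived.conf on the active LVS router.
# Addresses, interface name, and password are hypothetical placeholders.

vrrp_instance VI_1 {
    state MASTER              # BACKUP on the passive router
    interface eth0
    virtual_router_id 51
    priority 100              # set lower on the backup router
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.100         # the VIP that fails over via VRRP
    }
}

virtual_server 192.168.1.100 80 {
    delay_loop 6              # seconds between health checks
    lb_algo rr                # Round-Robin scheduling
    lb_kind NAT               # NAT routing (use DR for Direct Routing)
    protocol TCP
    real_server 10.0.0.11 80 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 10.0.0.12 80 {
        TCP_CHECK { connect_timeout 3 }
    }
}
```

The backup router carries the same configuration with state BACKUP and a lower priority, so VRRP elects the higher-priority router as active.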
Key Use Cases for Load Balancer:
• Traffic Distribution: Balancing network service requests across multiple “real servers” to optimize performance and throughput.
• Scalability: Cost-effectively scaling services by adding more real servers to handle increased load.
• High-Volume Services: Ideal for production web applications and other Internet-connected services that experience high traffic.
• Router Failover: Keepalived ensures that the Virtual IP (VIP) address and, consequently, access to the load-balanced services, remains available even if the primary load balancer router fails.
• Diverse Hardware: Weighted scheduling algorithms allow for efficient load distribution among real servers with varying capacities.
• Stateful Connections: Using persistence or firewall marks to direct a client’s subsequent connections to the same real server for applications requiring session consistency (e.g., e-commerce, FTP).
• Flexible Routing: Supports both NAT (Network Address Translation) routing and Direct Routing, offering flexibility in network topology and performance.
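The HAProxy counterpart of these use cases is a front-end/back-end pair in haproxy.cfg. This sketch shows Layer 7 persistence via an inserted cookie; the addresses and server names are hypothetical.

```
# Hedged sketch of /etc/haproxy/haproxy.cfg. Addresses are hypothetical.

frontend web_vip
    bind 192.168.1.100:80           # the VIP and port (front end)
    default_backend web_servers

backend web_servers
    balance roundrobin              # or leastconn, source, uri, url_param
    cookie SERVERID insert indirect # Layer 7 persistence via a cookie
    server web1 10.0.0.11:80 check cookie web1
    server web2 10.0.0.12:80 check cookie web2
```

The `check` keyword enables health checks on each real server, and the cookie directs a client's subsequent requests to the same back end, which addresses the stateful-connection use case above.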
Fundamental Differences
| Feature | Red Hat High Availability Add-On | Load Balancer |
| --- | --- | --- |
| Primary Goal | Ensuring service availability through failover to eliminate single points of failure. | Distributing network traffic across multiple servers to enhance scalability and performance. |
| Core Mechanism | Manages application and service resources and moves them between nodes upon failure. | Directs client requests to multiple backend servers based on load-balancing algorithms. |
| Key Components | Pacemaker, Corosync, Fencing (STONITH), CIB, CRMd, LRMd, GFS2, LVM. | Keepalived (LVS, VRRP) and HAProxy. |
| Operating Layer | Application/service layer (manages state and startup/shutdown of services). | Layer 4 (TCP) and/or Layer 7 (HTTP/HTTPS). |
| Data Integrity | Actively ensures data integrity during failover, especially with shared storage (e.g., lvmlockd, GFS2). | Does not directly manage data integrity; relies on backend servers to handle data consistency. |
| Redundancy Type | Primarily active/passive failover for services (one active, others standby), though active/active is supported with specific tools. | Typically active/active for real servers (all serving requests) with active/passive for load balancer routers. |
| Configuration | Uses the pcs command-line interface or pcsd Web UI to configure Pacemaker and Corosync. | Configured via the keepalived.conf and haproxy.cfg files. |
In summary, the High Availability Add-On focuses on maintaining uptime of a service or application by ensuring it can reliably restart or move to another server if its current host fails, with a strong emphasis on data integrity. The Load Balancer, conversely, focuses on distributing incoming client requests across multiple servers to handle higher traffic volumes and improve overall system performance, while also providing failover at the routing level. They can complement each other, with an HA cluster protecting the backend services that are being load-balanced.