Keepalived Overview
Keepalived is high-availability (HA) software that can be paired with virtually any application.
What is high availability?
Typically, two machines run an identical copy of the business system; when one machine goes down, the other takes over quickly, and the switch is invisible to users.
High-availability solutions
- Hardware
  - F5
- Software
  - keepalived
  - heartbeat
- MySQL
  - MGR
  - MHA
- Redis
  - Redis-Cluster
  - Sentinel
How keepalived works
Keepalived's underlying protocol is VRRP (Virtual Router Redundancy Protocol).
Suppose the office reaches the Internet through a gateway router. If that router fails, the gateway can no longer forward packets and nobody can get online. What then?

The usual fix is to add a backup router. The problem is that if the master gateway fails, every user has to repoint their gateway to the backup by hand, which becomes very painful with many users.

How can failover happen automatically? This is where VRRP comes in. VRRP, implemented in software or hardware, places a virtual MAC address (VMAC) and a virtual IP address (VIP) in front of the Master and Backup. When a PC sends traffic to the VIP, it only ever records the VMAC/VIP pair in its ARP cache, regardless of whether the Master or the Backup actually handles the request.
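As a concrete illustration, the VRRP specification (RFC 5798) derives the VMAC directly from the virtual router ID as 00:00:5e:00:01:{VRID in hex}, which is why the address a client caches stays stable no matter which node is answering. A small sketch, using the virtual_router_id 50 configured later in this document:

```shell
# Per RFC 5798, the VRRP virtual MAC is 00:00:5e:00:01:{VRID in hex}.
# For virtual_router_id 50 (hex 0x32) that gives 00:00:5e:00:01:32.
vrid=50
vmac=$(printf '00:00:5e:00:01:%02x' "$vrid")
echo "$vmac"
```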
When to use keepalived
Business systems often have to stay up 7×24. An internal OA system, for example, is used by everyone in the company every day, so downtime is unacceptable: the system must be available at all times.

Keepalived core concepts
1) A vote decides which node is the master (server) and which is the backup (election).
2) If the Master fails, the Backup takes over automatically; when the Master recovers, does it take the role back? (preemptive vs. non-preemptive)
3) If both servers believe they are the master, a fault called split-brain occurs.
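Concept 1 boils down to a simple rule: the node advertising the higher priority becomes Master (keepalived performs the election itself). A toy illustration, not keepalived code:

```shell
# Toy model of the VRRP election: the higher priority wins.
# (A real tie is broken by the higher primary IP address, omitted here.)
elect_master() {
    # $1 = lb01's priority, $2 = lb02's priority
    if [ "$1" -gt "$2" ]; then
        echo "lb01"
    else
        echo "lb02"
    fi
}
elect_master 150 100   # with the priorities used below, lb01 wins
```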
Installing and configuring keepalived
Environment
| Hostname | WanIp    | LanIP      | Role                   | Application |
| -------- | -------- | ---------- | ---------------------- | ----------- |
| lb01     | 10.0.0.5 | 172.16.1.5 | Master keepalived node | keepalived  |
| lb02     | 10.0.0.6 | 172.16.1.6 | Backup keepalived node | keepalived  |
Deploying keepalived
# 1. Install keepalived
[root@lb01 ~]# yum install -y keepalived
[root@lb02 ~]# yum install -y keepalived
# 2. Locate the keepalived configuration file
[root@lb01 ~]# rpm -ql keepalived
/etc/keepalived/keepalived.conf
# 3. Edit the keepalived configuration file on the Master
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {                   # global settings
    router_id lb01              # unique identifier (name) for this node
}
vrrp_instance VI_1 {
    state MASTER                # role of this node
    interface eth0              # NIC the VRRP instance binds to
    virtual_router_id 50        # virtual router ID (must match on both nodes)
    priority 150                # election priority
    advert_int 1                # advertisement interval in seconds
    authentication {            # authentication
        auth_type PASS          # authentication type
        auth_pass 1111          # authentication password
    }
    virtual_ipaddress {
        10.0.0.3                # the virtual IP (VIP)
    }
}
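Before starting the daemon it can be worth validating the syntax. Recent keepalived releases (1.3.6 and later) accept a `-t`/`--config-test` flag; a hedged sketch that skips gracefully where keepalived is not installed:

```shell
# Validate the configuration syntax without starting the daemon.
# -t/--config-test requires keepalived >= 1.3.6.
if command -v keepalived >/dev/null 2>&1; then
    keepalived -t -f /etc/keepalived/keepalived.conf && msg="config OK"
else
    msg="keepalived not installed, skipping check"
fi
echo "$msg"
```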
# 4. Edit the keepalived configuration file on the Backup
[root@lb02 ~]# vim /etc/keepalived/keepalived.conf
global_defs {                   # global settings
    router_id lb02              # unique identifier (name) for this node
}
vrrp_instance VI_1 {
    state BACKUP                # role of this node
    interface eth0              # NIC the VRRP instance binds to
    virtual_router_id 50        # virtual router ID (must match on both nodes)
    priority 100                # election priority
    advert_int 1                # advertisement interval in seconds
    authentication {            # authentication
        auth_type PASS          # authentication type
        auth_pass 1111          # authentication password
    }
    virtual_ipaddress {
        10.0.0.3                # the virtual IP (VIP)
    }
}

| keepalived config item | Master node | Backup node |
| ---------------------- | ----------- | ----------- |
| router_id              | lb01        | lb02        |
| state                  | MASTER      | BACKUP      |
| priority               | 150         | 100         |
# 5. Start keepalived on the Master first
[root@lb01 ~]# systemctl start keepalived
[root@lb01 ~]# systemctl enable keepalived
# 6. Then start keepalived on the Backup
[root@lb02 ~]# systemctl start keepalived
[root@lb02 ~]# systemctl enable keepalived
###### Testing ######
## Stop keepalived on node 1
[root@lb01 ~]# systemctl stop keepalived
## Node 2 loses contact with node 1 and takes over the VIP
[root@lb02 ~]# ip addr | grep 10.0.0.3
inet 10.0.0.3/32 scope global eth0
## Now restart keepalived on the Master: it preempts the VIP back
[root@lb01 ~]# systemctl start keepalived
[root@lb01 ~]# ip a
inet 10.0.0.3/32 scope global eth0

Note: whenever keepalived is stopped, the VIP floats to the other node.
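The failover check above can be wrapped in a small helper that either node can run (a sketch assuming iproute2 and the VIP 10.0.0.3 from the config; on a machine that does not hold the VIP it simply reports so):

```shell
# Report whether this machine currently holds the VIP.
vip="10.0.0.3"
if ip addr 2>/dev/null | grep -q "$vip"; then
    state="holds"
else
    state="does not hold"
fi
echo "this node $state the VIP $vip"
```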
Non-preemptive configuration
## Requirements
1. state must be set to BACKUP on both nodes.
2. Both nodes must add the nopreempt option.
3. One node's priority must still be higher than the other's.
## Master node configuration
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {                   # global settings
    router_id lb01              # unique identifier (name) for this node
}
vrrp_instance VI_1 {
    state BACKUP                # both nodes use BACKUP in non-preemptive mode
    interface eth0              # NIC the VRRP instance binds to
    nopreempt                   # disable preemption
    virtual_router_id 50        # virtual router ID (must match on both nodes)
    priority 150                # election priority
    advert_int 1                # advertisement interval in seconds
    authentication {            # authentication
        auth_type PASS          # authentication type
        auth_pass 1111          # authentication password
    }
    virtual_ipaddress {
        10.0.0.3                # the virtual IP (VIP)
    }
}
## Backup node configuration
[root@lb02 ~]# vim /etc/keepalived/keepalived.conf
global_defs {                   # global settings
    router_id lb02              # unique identifier (name) for this node
}
vrrp_instance VI_1 {
    state BACKUP                # both nodes use BACKUP in non-preemptive mode
    interface eth0              # NIC the VRRP instance binds to
    nopreempt                   # disable preemption
    virtual_router_id 50        # virtual router ID (must match on both nodes)
    priority 100                # election priority
    advert_int 1                # advertisement interval in seconds
    authentication {            # authentication
        auth_type PASS          # authentication type
        auth_pass 1111          # authentication password
    }
    virtual_ipaddress {
        10.0.0.3                # the virtual IP (VIP)
    }
}
## Reload keepalived
[root@lb01 ~]# systemctl reload keepalived
[root@lb02 ~]# systemctl reload keepalived

Causes of split-brain
1) Network faults, e.g. a loose network cable on a server.
2) A server crashing due to hardware failure.
3) firewalld enabled on both master and backup (VRRP advertisements get dropped).
## Fixing a split-brain fault
# If split-brain occurs, simply kill one of the two nodes.
# On the backup, run a detection script: if the backup can ping the master
# yet still holds the VIP, assume split-brain has occurred.
[root@lb02 ~]# cat check_split_brain.sh
#!/bin/bash
vip=10.0.0.3
lb01_ip=10.0.0.5
while true; do
    # master reachable AND we still hold the VIP -> split-brain
    if ping -c 2 "$lb01_ip" &>/dev/null && ip addr | grep -q "$vip"; then
        echo "ha is split brain. warning."
    else
        echo "ha is ok"
    fi
    sleep 5
done

Combining keepalived with Nginx for high availability
Environment
| Hostname | WanIp    | LanIp      | Role                                   | Application       |
| -------- | -------- | ---------- | -------------------------------------- | ----------------- |
| lb01     | 10.0.0.5 | 172.16.1.5 | keepalived master, nginx load balancer | keepalived, nginx |
| lb02     | 10.0.0.6 | 172.16.1.6 | keepalived backup, nginx load balancer | keepalived, nginx |
| web01    | 10.0.0.7 | 172.16.1.7 | web server                             | nginx, php        |
| web02    | 10.0.0.8 | 172.16.1.8 | web server                             | nginx, php        |
Hooking up Nginx
# 1. Simple test script (classroom version)
[root@lb01 ~]# vim check_count.sh
nginx_count=$(ps -ef|grep [n]ginx|wc -l)

## If Nginx is not running, stop keepalived so the VIP fails over
if [ $nginx_count -eq 0 ];then
        systemctl stop keepalived
fi
# 2. Make the script executable
[root@lb01 ~]# chmod +x /root/check_web.sh
#### Production version of the script ####
[root@lb01 ~]# vim check_web.sh
nginx_count=$(ps -ef|grep [n]ginx|wc -l)

#1. If Nginx is not running, try to start it
if [ $nginx_count -eq 0 ];then
    systemctl start nginx
    sleep 3
    #2. Wait 3 seconds, then check Nginx again
    nginx_count=$(ps -ef|grep [n]ginx|wc -l)
    #3. If Nginx is still down, stop keepalived so the VIP floats away, and exit
    if [ $nginx_count -eq 0 ];then
        systemctl stop keepalived
    fi
fi

First, configure the two load balancers
## lb01
[root@lb01 ~]# cat /etc/nginx/conf.d/www.jin.com.conf
upstream www.jin.com {
  server 172.16.1.7;
  server 172.16.1.8;
}
server {
  listen 80;
  server_name www.jin.com;
  rewrite (.*) https://www.jin.com$1 redirect;
}
server {
  listen 443 ssl;
  server_name www.jin.com;
  ssl_certificate ssl/server.crt;
  ssl_certificate_key ssl/server.key;
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 1440m;
  ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  location / {
    proxy_pass http://www.jin.com;
    include proxy_params;
  }
}
## lb02
[root@lb02 ~]# cat /etc/nginx/conf.d/www.jin.com.conf
upstream www.jin.com {
  server 172.16.1.7;
  server 172.16.1.8;
}
server {
  listen 80;
  server_name www.jin.com;
  rewrite (.*) https://www.jin.com$1 redirect;
}
server {
  listen 443 ssl;
  server_name www.jin.com;
  ssl_certificate ssl/server.crt;
  ssl_certificate_key ssl/server.key;
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 1440m;
  ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  location / {
    proxy_pass http://www.jin.com;
    include proxy_params;
  }
}
Linking keepalived to Nginx
## Edit the keepalived configuration file
[root@lb01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {           
    router_id lb01      
}
 
# Run the script every 5 seconds; it must finish within 5 seconds,
# otherwise it is killed and re-run from scratch
vrrp_script check_web {
    script "/root/check_web.sh"
    interval 5
}
 
vrrp_instance VI_1 {
    state MASTER        
    interface eth0      
    virtual_router_id 50    
    priority 150        
    advert_int 1        
    authentication {    
        auth_type PASS  
        auth_pass 1111  
    }
    virtual_ipaddress { 
        10.0.0.3    
    }
 
    # invoke the health-check script defined above
    track_script {
        check_web
    }
}
# The script is invoked from the Master's keepalived; in preemptive mode, configuring it on the master alone is enough. (Note: in non-preemptive mode, both servers need the script.)
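The decision flow of check_web.sh can be reasoned about without a live system by simulating it with injected process counts (purely illustrative; the real script uses ps and systemctl):

```shell
# Simulation of check_web.sh's logic.
# $1 = nginx process count at the first check,
# $2 = count after the restart attempt.
check_web_sim() {
    if [ "$1" -eq 0 ]; then
        # the real script runs: systemctl start nginx; sleep 3
        if [ "$2" -eq 0 ]; then
            echo "stop keepalived (let the VIP fail over)"
            return 0
        fi
    fi
    echo "nginx healthy, keep the VIP"
}
check_web_sim 5 5   # nginx was alive from the start
check_web_sim 0 3   # restart succeeded
check_web_sim 0 0   # restart failed -> release the VIP
```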









