Tag Archives: High Availability

Keepalived Nginx Dual-Network (Internal/External) Non-Synchronized-Failover Active-Active Dual-Master Mode (Hands-On)

Introduction:

With a high-performance combination like Keepalived+LVS available, why would you still want Keepalived+Nginx? Keepalived was designed for LVS. LVS is a layer-4 load balancer: it performs very well, but it has no health-check mechanism of its own for the backend servers. Keepalived supplies LVS with a set of health checks, such as TCP_CHECK, UDP_CHECK, and HTTP_GET; you can also write your own health-check scripts for LVS, or pair it with ldirectord to monitor the backends. Still, LVS cannot escape being a layer-4 device and cannot parse upper-layer protocols. Nginx is different: it is a layer-7 device that can parse layer-7 protocols, filter certain requests, and cache responses, advantages unique to Nginx. On the other hand, Keepalived provides no built-in health check for Nginx, so you have to write the check scripts yourself.
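
For reference, this is roughly what an LVS health check looks like in keepalived.conf; a minimal sketch, with the VIP, backend address, and port used here only as placeholders:

virtual_server 10.16.8.100 80 {
    real_server 10.16.8.8 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
        }
    }
}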

This article covers only the Keepalived+Nginx mode, without LVS. Unless you are handling a very large load you generally do not need LVS; if you do, see the article "Keepalived LVS-DR Nginx Single-Network Active-Active Dual-Master Configuration Mode (Hands-On)" below.

Prepare four servers or virtual machines:

Web Nginx, internal: 10.16.8.8 / 10.16.8.9

Keepalived internal: 10.16.8.10 (ka67) / 10.16.8.11 (ka68)
Keepalived public: 172.16.8.10 / 172.16.8.11

Keepalived internal VIPs: 10.16.8.100 / 10.16.8.101
Keepalived public VIPs: 172.16.8.100 / 172.16.8.101

OS: CentOS Linux release 7.4.1708 (Core)

Prerequisites:

Install Keepalived.
Synchronize time.
Configure SELinux and the firewall.
Add each other's hostnames to /etc/hosts (optional).
Confirm the network interfaces support multicast (modern NICs do by default).

For these steps, see: "Keepalived Installation and Configuration Explained".
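
To confirm that an interface supports multicast, look for the MULTICAST flag; a quick sanity check, substituting your own interface name:

$ ip link show eth0 | grep -o MULTICAST
MULTICAST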

1. ka67 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state MASTER
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"  
}
vrrp_instance External_2 {
    state BACKUP
    interface eth1
    virtual_router_id 172
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"  
}
vrrp_instance Internal_1 {
    state MASTER
    interface eth0
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}
vrrp_instance Internal_2 {
    state BACKUP
    interface eth0
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}

2. ka68 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}
 
vrrp_instance External_2 {
    state MASTER
    interface eth1
    virtual_router_id 172
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}
   
vrrp_instance Internal_1 {
    state BACKUP
    interface eth0
    virtual_router_id 191
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}
vrrp_instance Internal_2 {
    state MASTER
    interface eth0
    virtual_router_id 192
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}

3. Create a common notification script

$ vim /usr/local/keepalived/etc/keepalived/notify.sh
#!/bin/bash
#
contact='root@localhost'
                
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
                
case $1 in
master)
    notify master   
    ;;
backup)
    notify backup
    systemctl start nginx   # with this in place, Nginx is started again automatically if it has died
    ;;
fault)
    notify fault    
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
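
Make the script executable on both nodes; you can also invoke it by hand once to confirm that mail delivery works (this assumes a mail command, e.g. from the mailx package, is available):

$ chmod +x /usr/local/keepalived/etc/keepalived/notify.sh
$ /usr/local/keepalived/etc/keepalived/notify.sh fault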

4. Start the keepalived service and test

Start ka67 and check its interfaces:

[root@ka67 ~]# systemctl start keepalived
[root@ka67 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:78 brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.10/24 brd 172.16.8.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.8.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.8.101/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::436e:b837:43b:797c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:84 brd ff:ff:ff:ff:ff:ff
    inet 10.16.8.10/24 brd 10.16.8.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.16.8.100/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.16.8.101/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1261:7633:b595:7719/64 scope link
       valid_lft forever preferred_lft forever

Before ka68 is started, ka67 has added all four VIPs:

Public, eth0:

172.16.8.100/32
172.16.8.101/32

Internal, eth1:

10.16.8.100/32
10.16.8.101/32

Start ka68 and check its interfaces:

[root@ka68 ~]# systemctl start keepalived
[root@ka68 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:79 brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.11/24 brd 172.16.8.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.8.101/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::3d2c:ecdc:5e6d:70ba/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:82 brd ff:ff:ff:ff:ff:ff
    inet 10.16.8.11/24 brd 10.16.8.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.16.8.101/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::4fb3:d0a8:f08c:4536/64 scope link
       valid_lft forever preferred_lft forever

ka68 added two VIPs:

Public, eth0:

172.16.8.101/32

Internal, eth1:

10.16.8.101/32

Check ka67's interfaces again:

[root@ka67 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:78 brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.10/24 brd 172.16.8.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.8.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::436e:b837:43b:797c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:84 brd ff:ff:ff:ff:ff:ff
    inet 10.16.8.10/24 brd 10.16.8.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.16.8.100/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1261:7633:b595:7719/64 scope link
       valid_lft forever preferred_lft forever

Note that 172.16.8.101 and 10.16.8.101 have been removed from ka67. From now on, stopping either server will not interrupt traffic on any of the four VIPs.

You can also watch the heartbeat on the multicast address from ka67 or ka68 with:

[root@ka67 ~]# tcpdump -nn -i eth1 host 224.0.0.111
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
02:00:15.690389 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20
02:00:15.692654 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 172, prio 100, authtype simple, intvl 1s, length 20
02:00:16.691552 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20
02:00:16.693814 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 172, prio 100, authtype simple, intvl 1s, length 20
02:00:17.692710 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20

At this point the VRRP high-availability configuration and testing are done; next we configure the Nginx web service.

5. Install and configure Nginx

Install Nginx on the backend servers 10.16.8.8/10.16.8.9:

For Nginx, see: "Compiling and Installing Nginx from Source on CentOS 7".

Or install it with yum, which is quick and simple:

$ yum install epel-release -y
$ yum install nginx -y

To tell the machines apart in this test setup, each web page is set to the server's IP address; in production the content served would be identical.

Run the following on 10.16.8.8 and 10.16.8.9 respectively:

$ echo "Server 10.16.8.8" > /usr/share/nginx/html/index.html
$ echo "Server 10.16.8.9" > /usr/share/nginx/html/index.html

Verify that it responds:

$ curl http://10.16.8.8
Server 10.16.8.8

Install Nginx on ka67 and ka68 as well; here I use yum:

$ yum install nginx psmisc -y

Note: psmisc provides the fuser, killall, and pstree commands, among others.
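
killall -0 sends no signal at all; it merely reports, via its exit code, whether a process with the given name exists, which is what makes it a cheap health probe for the vrrp_script used later. A quick demonstration (assuming Nginx is running; output abridged):

$ killall -0 nginx; echo $?
0
$ systemctl stop nginx; killall -0 nginx; echo $?
nginx: no process found
1
$ systemctl start nginx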

Configure Nginx on ka67/ka68:

Back up the default configuration files:

$ mv /etc/nginx/conf.d/default.conf{,.bak}
$ mv /etc/nginx/nginx.conf{,.bak}

On both ka67 and ka68, add the following to the main Nginx configuration file:

$ vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    upstream webserverapps {
    server 10.16.8.8:80;
    server 10.16.8.9:80;
    #server 127.0.0.1:8080 backup;
   }

server {
        listen 80;
        server_name _;
location / {
     proxy_pass http://webserverapps;
     proxy_redirect off;
     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     client_max_body_size 10m;
     client_body_buffer_size 128k;
     proxy_connect_timeout 90;
     proxy_send_timeout 90;
     proxy_read_timeout 90;
     proxy_buffer_size 4k;
     proxy_buffers 4 32k;
     proxy_busy_buffers_size 64k;
     proxy_temp_file_write_size 64k;
     add_header Access-Control-Allow-Origin *;
       }
    }

}

Note: the additions above are mainly the upstream and the proxying server block; everything else is default and for testing only. Adjust the configuration for production as needed.

Restart Nginx on ka67/ka68:

$ systemctl restart nginx

Test from ka67 and ka68 respectively:

[root@ka67 ~]# for i in `seq 10`; do curl 10.16.8.10; done
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.9
Server 10.16.8.9

Nginx reverse proxying is now working; next we wire Nginx into Keepalived to make it highly available.

6. Configure Keepalived + Nginx high availability

On both ka67 and ka68, add a vrrp_script block below the global_defs block in /usr/local/keepalived/etc/keepalived/keepalived.conf:

vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight -10
    fall 2
    rise 2
}

Add a track_script block inside every vrrp_instance block:

track_script {
    chk_nginx
}
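
With weight -10, two consecutive check failures (fall 2) lower this node's effective priority from 100 to 90, below the peer's 95, so the VIP fails over; two consecutive successes (rise 2) restore the priority and the VIP returns. The change is visible in the advertised priority on the multicast group (a sample line, following the tcpdump output format shown earlier):

$ tcpdump -nn -i eth1 host 224.0.0.111
02:10:15.690389 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 90, authtype simple, intvl 1s, length 20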

For example:

...
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    track_script {
    chk_nginx
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
...

Once configured, restart keepalived on both ka67 and ka68:

$ systemctl stop keepalived
$ systemctl start keepalived
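
To exercise the health check end to end, stop Nginx on ka67 and confirm that requests to a VIP it owned still succeed; this is a sketch of the test, and the VIP needs a couple of advertisement intervals to move:

[root@ka67 ~]# systemctl stop nginx
[root@ka68 ~]# curl http://10.16.8.100
Server 10.16.8.8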

Summary:

During this setup I hit a case where the VIP would not fail over, caused by a cross-subnet problem. The way through is to read the logs and analyze carefully; the problem can be solved in the end. Whatever happens, since you chose Keepalived, stick with your decision.
If you run into any problems while configuring, feel free to leave a comment so we can solve them together.

Keepalived LVS-DR Nginx Single-Network Active-Active Dual-Master Configuration Mode (Hands-On)

What is LVS/DR mode?

LVS is short for Linux Virtual Server, a virtual server cluster system. LVS currently offers three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and ten scheduling algorithms (rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq).

On Unix-like systems LVS acts as a front end (Director), also called the scheduler. It provides no service itself; it accepts requests coming in from the Internet and forwards them to the real servers (RealServer) running behind it, which do the actual work, and the responses are then returned to the client.

An LVS cluster uses IP load balancing and content-based request distribution. The scheduler has excellent throughput, spreads requests evenly across the servers, and automatically masks server failures, turning a group of servers into one high-performance, highly available virtual server. The structure of the cluster is transparent to clients, and neither client nor server programs need modification, so the design has to provide transparency, scalability, high availability, and manageability.

LVS has two important components: IPVS and ipvsadm. IPVS is the core of LVS; it is only a framework, similar to iptables, and runs in kernel space. ipvsadm defines the LVS forwarding rules and runs in user space.
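
As an illustration of this division of labor, these are roughly the manual ipvsadm rules for a round-robin DR-mode service; a sketch only, since the virtual_server blocks configured later in this article make keepalived program the equivalent rules for you:

$ ipvsadm -A -t 10.16.8.100:80 -s rr
$ ipvsadm -a -t 10.16.8.100:80 -r 10.16.8.8:80 -g -w 1
$ ipvsadm -a -t 10.16.8.100:80 -r 10.16.8.9:80 -g -w 1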

LVS has three forwarding types:

LVS-NAT mode:

Network address translation is relatively simple to set up. All RealServer cluster nodes and the front-end Director must be in the same subnet. This model supports port mapping, and the RealServers may run any operating system. The Director has to handle both the requests from clients and the responses from the RealServers, forwarding the responses back to the clients, so it easily becomes the performance bottleneck of the whole cluster. Normally the RealServer IPs (RIP for short) are private addresses, which is convenient for communication between cluster nodes. The Director usually has two IPs: the VIP, the virtual address clients send requests to, and the DIP, the Director's real address; the RIPs' gateway must point at the DIP.

LVS-DR mode:

DR, direct routing, forwards by rewriting MAC addresses. All RealServer nodes and the Director must be on the same physical network. This mode does not support port mapping, but its performance is better than LVS-NAT. The RIPs may be public addresses, and the RIPs' gateway must not point at the DIP.

Advantages:

Compared with LVS/NAT, DR mode does not pass the returning data back through the load balancer. For this to pay off, the response packets should far outnumber and outweigh the request packets; fortunately most web services have exactly this asymmetry between responses and requests, so DR mode suits common web workloads.

This way the load balancer is no longer the system's bottleneck: even if it only has a 100 Mbit/s full-duplex NIC, horizontally scaling the cluster can bring total system traffic to 1 Gbit/s.

Test results from the LVS project site also show that LVS-DR can front more than 100 real application servers, which is more than enough for typical services.

Limitations:

DR mode cannot forward traffic across subnets; if you must load balance across subnets, use LVS/TUN.

LVS-TUN mode:

In the tunnel model, the RealServers and the Director may be on different networks. This model does not support port mapping either, and the RealServers must run an operating system that supports IP tunneling. The Director only handles the client requests and forwards them to the RealServers, which respond to the clients directly without going back through the Director. The RIP must not be a private address. In both DR and TUN modes the packets return directly to the user, so the VIP must be configured on the Director Server and on every cluster node: on the Real Servers it is usually bound to the loopback, e.g. lo:0, while on the Director Server it is bound to a real interface, e.g. eth0:0.

Getting started:

Prepare four servers or virtual machines:

Web Nginx: 10.16.8.8 / 10.16.8.9
Keepalived: 10.16.8.10 / 10.16.8.11
Keepalived VIPs: 10.16.8.100 / 10.16.8.101
OS: CentOS Linux release 7.4.1708 (Core)

Prerequisites:

Install Keepalived.
Synchronize time.
Configure SELinux and the firewall.
Add each other's hostnames to /etc/hosts (optional).
Confirm the network interfaces support multicast (modern NICs do by default).

For these steps, see: "Keepalived Installation and Configuration Explained".

1. ka67 configuration file

$ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 60
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"        
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth1
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"         
}

2. ka68 configuration file

$ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 60
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 191
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"         
}
vrrp_instance VI_2 {
    state MASTER
    interface eth1
    virtual_router_id 192
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"        
}

3. Create the common notify.sh notification script

Create the script on each node:

$ vim /usr/local/keepalived/etc/keepalived/notify.sh
#!/bin/bash
#
contact='root@localhost'
                
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
                
case $1 in
master)
    notify master   
    ;;
backup)
    notify backup   
    ;;
fault)
    notify fault    
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

4. Start the keepalived service

$ systemctl start keepalived
$ systemctl enable keepalived

5. Check the multicast heartbeat

On any keepalived node you can watch the multicast heartbeat with tcpdump, for example:

$ tcpdump -nn -i eth1 host 224.0.0.111
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
00:32:31.714987 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
00:32:31.715739 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20
00:32:32.716150 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
00:32:32.716292 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20
00:32:33.717327 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
00:32:33.721361 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20

If tcpdump is missing (-bash: tcpdump: command not found), install it:

$ yum install tcpdump -y

6. Configure LVS

Install LVS on both nodes. CentOS 7 already ships the LVS core (IPVS) in the kernel, so only the management tool needs to be installed:

$ yum -y install ipvsadm

Stop the keepalived service on both ka67 and ka68:

$ systemctl stop keepalived

Append the following Virtual Server configuration to the end of the keepalived configuration file on both ka67 and ka68:

$ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
virtual_server 10.16.8.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    # sorry_server 127.0.0.1 80
    real_server 10.16.8.8 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.16.8.9 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
virtual_server 10.16.8.101 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    # sorry_server 127.0.0.1 80
    real_server 10.16.8.8 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.16.8.9 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

7. Configure the RS (Real Server) web service

Install Apache httpd or Nginx as the web service on each web server; here we install Nginx.

For Nginx, see: "Compiling and Installing Nginx from Source on CentOS 7".

Or install it the quick way:

$ yum install epel-release -y
$ yum install nginx -y

To tell the machines apart in this test setup, each page is set to the server's IP address; in production the content served would be identical.

Run the following on web8 and web9 respectively:

$ echo "Server 10.16.8.8" > /usr/share/nginx/html/index.html
$ echo "Server 10.16.8.9" > /usr/share/nginx/html/index.html

Verify that it responds:

$ curl http://127.0.0.1
Server 10.16.8.8

8. Add the RS script

Some commands used by this script are missing from a CentOS 7 minimal install, so install the net-tools package first:

$ yum install net-tools -y

Add the rs.sh script on each web server:

$ vim /tmp/rs.sh
#!/bin/bash
vip1=10.16.8.100
vip2=10.16.8.101
dev1=lo:1
dev2=lo:2
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev1 $vip1 netmask 255.255.255.255 broadcast $vip1 up
    ifconfig $dev2 $vip2 netmask 255.255.255.255 broadcast $vip2 up
    echo "VS Server is Ready!"
    ;;
stop)
    ifconfig $dev1 down
    ifconfig $dev2 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "VS Server is Cancel!"
    ;;
*)
    echo "Usage `basename $0` start|stop"
    exit 1
    ;;
esac

Then start the script on each server:

$ /tmp/rs.sh start

To stop it, run:

$ /tmp/rs.sh stop
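
You can verify that the script did its job by checking the loopback aliases and the ARP settings; quick checks, with output abridged:

$ ifconfig lo:1
lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 10.16.8.100  netmask 255.255.255.255
$ cat /proc/sys/net/ipv4/conf/all/arp_ignore
1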

9. Test

From another machine, check that the VIPs respond:

[root@localhost ~]# for i in `seq 5`; do
>     curl 10.16.8.100
>     curl 10.16.8.101
> done
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.9
Server 10.16.8.8

The test results show that the Keepalived + LVS-DR + Nginx high-availability failover setup is working.
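
On either director you can also inspect the IPVS rule set and per-backend counters that keepalived programmed; a sketch of the expected output for the configuration above:

$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.16.8.100:80 rr
  -> 10.16.8.8:80                 Route   1      0          0
  -> 10.16.8.9:80                 Route   1      0          0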

Keepalived Dual-Network (Internal/External) Synchronized-Failover Active-Active Dual-Master Mode

Preface:

In production the internal and public networks are separate, hence "dual network". To make the public and internal VIPs fail over together, as with Keepalived+LVS-NAT, you need vrrp_sync_group to define failover groups. For an active-active dual-master setup, add two VIPs on each side so the nodes act as master and backup for each other.

1. Diagram:

  • The multicast IP is 224.0.0.111.
  • Internal VIP1 and internal VIP2 back each other up.
  • Public VIP1 and public VIP2 back each other up.
  • Internal VIP1 and public VIP1 form one sync group.
  • Internal VIP2 and public VIP2 form one sync group.
                        +------+
                        |Client|
                        +------+
                           /\
                       +--------+
                       |Internet|
                       +--------+
                           /\
                      +-----------+
                      |NAT network|
                      +-----------+
                           /\
             +----------------------------+
             | Internal VIP1: 10.16.8.100 |
             | Internal VIP2: 10.16.8.101 |
             +----------------------------+
                  /                \
+---------------------------+      +---------------------------+
|        KA+Lvs-NAT         |      |        KA+Lvs-NAT         |
|Int VIP1: MASTER    (eth1) |      |Int VIP1: BACKUP    (eth1) |
|Int VIP2: BACKUP    (eth1) |      |Int VIP2: MASTER    (eth1) |
|Int IP: 10.16.8.10  (eth1) |<---->|Int IP: 10.16.8.11  (eth1) |
|---------------------------|mcast |---------------------------|
|Pub VIP1: MASTER    (eth2) |<---->|Pub VIP1: BACKUP    (eth2) |
|Pub VIP2: BACKUP    (eth2) |      |Pub VIP2: MASTER    (eth2) |
|Pub IP: 172.16.8.10 (eth2) |      |Pub IP: 172.16.8.11 (eth2) |
+---------------------------+      +---------------------------+
                  \                /
             +----------------------------+
             |  Public VIP1: 172.16.8.100 |
             |  Public VIP2: 172.16.8.101 |
             +----------------------------+
                           \/
                      +-------------+
                      |Resource pool|
                      +-------------+

2. ka67 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka67
   vrrp_mcast_group4 224.0.0.111
}
vrrp_sync_group VG_1 {
    group {
        External_1
        Internal_1
    }
}
vrrp_sync_group VG_2 {
    group {
        External_2
        Internal_2
    }
}
vrrp_instance External_1 {
    state MASTER
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance External_2 {
    state BACKUP
    interface eth1
    virtual_router_id 172
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state MASTER
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_2 {
    state BACKUP
    interface eth2
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

3. ka68 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka68
   vrrp_mcast_group4 224.0.0.111
}
vrrp_sync_group VG_1 {
    group {
        External_1
        Internal_1
    }
}
vrrp_sync_group VG_2 {
    group {
        External_2
        Internal_2
    }
}
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance External_2 {
    state MASTER
    interface eth1
    virtual_router_id 172
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state BACKUP
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_2 {
    state MASTER
    interface eth2
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

Keepalived Dual-Network (Internal/External) Non-Synchronized-Failover Active-Active Dual-Master Mode

Preface:

In production the internal and public networks are separate, hence "dual network". The configuration below lets the internal and public VIPs fail over independently, which suits Keepalived+LVS-DR, Keepalived+Nginx, and Keepalived+HAProxy; Keepalived+LVS-NAT, by contrast, requires synchronized failover.

1. Diagram:

  • The multicast IP is 224.0.0.111.
  • For each VIP, internal and public alike, the two machines back each other up.
                        +------+
                        |Client|
                        +------+
                           /\
                       +--------+
                       |Internet|
                       +--------+
                           /\
                      +-----------+
                      |NAT network|
                      +-----------+
                           /\
             +----------------------------+
             | Internal VIP1: 10.16.8.100 |
             | Internal VIP2: 10.16.8.101 |
             +----------------------------+
                  /                \
+---------------------------+      +---------------------------+
|  KA+Lvs-DR/Nginx/HAProxy  |      |  KA+Lvs-DR/Nginx/HAProxy  |
|Int VIP1: MASTER    (eth1) |      |Int VIP1: BACKUP    (eth1) |
|Int VIP2: BACKUP    (eth1) |      |Int VIP2: MASTER    (eth1) |
|Int IP: 10.16.8.10  (eth1) |<---->|Int IP: 10.16.8.11  (eth1) |
|---------------------------|mcast |---------------------------|
|Pub VIP1: MASTER    (eth2) |<---->|Pub VIP1: BACKUP    (eth2) |
|Pub VIP2: BACKUP    (eth2) |      |Pub VIP2: MASTER    (eth2) |
|Pub IP: 172.16.8.10 (eth2) |      |Pub IP: 172.16.8.11 (eth2) |
+---------------------------+      +---------------------------+
                  \                /
             +----------------------------+
             |  Public VIP1: 172.16.8.100 |
             |  Public VIP2: 172.16.8.101 |
             +----------------------------+
                           \/
                      +-------------+
                      |Resource pool|
                      +-------------+

2. ka67 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka67
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state MASTER
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance External_2 {
    state BACKUP
    interface eth1
    virtual_router_id 172
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state MASTER
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_2 {
    state BACKUP
    interface eth2
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

3. ka68 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka68
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance External_2 {
    state MASTER
    interface eth1
    virtual_router_id 172
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state BACKUP
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_2 {
    state MASTER
    interface eth2
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

Keepalived Dual-Network (Internal/External) Synchronized-Failover Active-Standby Single-Master Mode

Preface:

In production the internal and public networks are separate. To make the internal and public VIPs fail over together, as with Keepalived+LVS-NAT, you must define vrrp_sync_group failover groups. Unlike the previous configurations, the one below is active-standby (single master), not dual-master.

1. Diagram:

  • The multicast IP is 224.0.0.111.
  • The MASTER's internal and public VIPs belong to one sync group.
  • The BACKUP's internal and public VIPs belong to one sync group.
                        +------+
                        |Client|
                        +------+
                           /\
                       +--------+
                       |Internet|
                       +--------+
                           /\
                      +-----------+
                      |NAT network|
                      +-----------+
                           /\
             +---------------------------+
             | Internal VIP: 10.16.8.100 |
             +---------------------------+
                  /                \
+---------------------------+      +---------------------------+
|   KA+Lvs/Nginx/HAProxy    |      |   KA+Lvs/Nginx/HAProxy    |
|Int VIP: MASTER     (eth1) |      |Int VIP: BACKUP     (eth1) |
|Int IP: 10.16.8.10  (eth1) |<---->|Int IP: 10.16.8.11  (eth1) |
|---------------------------|mcast |---------------------------|
|Pub VIP: MASTER     (eth2) |<---->|Pub VIP: BACKUP     (eth2) |
|Pub IP: 172.16.8.10 (eth2) |      |Pub IP: 172.16.8.11 (eth2) |
+---------------------------+      +---------------------------+
                  \                /
             +---------------------------+
             |  Public VIP: 172.16.8.100 |
             +---------------------------+
                           \/
                      +-------------+
                      |Resource pool|
                      +-------------+

2. ka67 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka67
   vrrp_mcast_group4 224.0.0.111
}
vrrp_sync_group VG_1 {
    group {
        External_1
        Internal_1
    }
}
vrrp_instance External_1 {
    state MASTER
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state MASTER
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

3. ka68 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka68
   vrrp_mcast_group4 224.0.0.111
}
vrrp_sync_group VG_1 {
    group {
        External_1
        Internal_1
    }
}
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state BACKUP
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

Keepalived Dual-Network (Internal/External) Non-Synchronized-Failover Active-Standby Single-Master Mode

Preface:

In production the internal and public networks are separate, and the internal and public VIPs do not need to fail over together; Keepalived+LVS-DR, Keepalived+Nginx, and Keepalived+HAProxy all work without synchronized failover.

Note: Keepalived+LVS-NAT is the exception.

1. Diagram:

The multicast IP is 224.0.0.111.

                        +------+
                        |Client|
                        +------+
                           /\
                       +--------+
                       |Internet|
                       +--------+
                           /\
                      +-----------+
                      |NAT network|
                      +-----------+
                           /\
             +---------------------------+
             | Internal VIP: 10.16.8.100 |
             +---------------------------+
                  /                \
+---------------------------+      +---------------------------+
|   KA+Lvs/Nginx/HAProxy    |      |   KA+Lvs/Nginx/HAProxy    |
|Int VIP: MASTER     (eth1) |      |Int VIP: BACKUP     (eth1) |
|Int IP: 10.16.8.10  (eth1) |<---->|Int IP: 10.16.8.11  (eth1) |
|---------------------------|mcast |---------------------------|
|Pub VIP: MASTER     (eth2) |<---->|Pub VIP: BACKUP     (eth2) |
|Pub IP: 172.16.8.10 (eth2) |      |Pub IP: 172.16.8.11 (eth2) |
+---------------------------+      +---------------------------+
                  \                /
             +---------------------------+
             |  Public VIP: 172.16.8.100 |
             +---------------------------+
                           \/
                      +-------------+
                      |Resource pool|
                      +-------------+

2. ka67 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka67
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state MASTER
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state MASTER
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

3. ka68 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka68
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state BACKUP
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

Keepalived Single-Network Active-Active Dual-Master Configuration Mode

Preface:

This mode needs nothing particularly complex; compared with the single-network single-master mode it simply adds a second active master, giving single-network dual-master failover.

1. Architecture diagram:

The multicast IP is 224.0.0.111.
Adjust the NAT network to your actual environment.

                        +------+
                        |Client|
                        +------+
                           /\
                       +--------+
                       |Internet|
                       +--------+
                           /\
                      +-----------+
                      |NAT network|
                      +-----------+
                           /\
             +----------------------------+
             |  Public VIP1: 172.16.8.100 |
             |  Public VIP2: 172.16.8.101 |
             +----------------------------+
                  /                \
+---------------------------+      +---------------------------+
|   KA+Lvs/Nginx/HAProxy    |      |   KA+Lvs/Nginx/HAProxy    |
|VIP1: MASTER        (eth1) |<---->|VIP1: BACKUP        (eth1) |
|VIP2: BACKUP        (eth1) |mcast |VIP2: MASTER        (eth1) |
|IP1: 172.16.8.10    (eth1) |<---->|IP1: 172.16.8.11    (eth1) |
+---------------------------+      +---------------------------+
                  \                /
             +----------------------------+
             |  Public VIP1: 172.16.8.100 |
             |  Public VIP2: 172.16.8.101 |
             +----------------------------+
                           \/
                      +-------------+
                      |Resource pool|
                      +-------------+

2. ka67 configuration file:

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka67
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance VG_1 {
    state MASTER
    interface eth0
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}
vrrp_instance VG_2 {
    state BACKUP
    interface eth0
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}

3. ka68 configuration file:

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka68
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance VG_1 {
    state BACKUP
    interface eth0
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"  
}
vrrp_instance VG_2 {
    state MASTER
    interface eth0
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

Keepalived Single-Network Active-Standby Single-Master Configuration Mode (Hands-On)

Preface:

This article walks through the simplest active-standby configuration in Keepalived; from this single-network single-master setup, the other articles work up to dual-network dual-master modes with synchronized failover.

Keepalived itself is not introduced again here; see the earlier article:

"Compiling and Installing Keepalived from Source and Configuration Explained".

Architecture diagram:

The multicast IP is 224.0.0.111.
Adjust the NAT network to your actual environment.

                        +------+
                        |Client|
                        +------+
                           /\
                       +--------+
                       |Internet|
                       +--------+
                           /\
                      +-----------+
                      |NAT network|
                      +-----------+
                           /\
             +---------------------------+
             | Public VIP1: 172.16.8.100 |
             +---------------------------+
                  /                \
+---------------------------+      +---------------------------+
|   KA+Lvs/Nginx/HAProxy    |      |   KA+Lvs/Nginx/HAProxy    |
|VIP1: MASTER        (eth1) |<---->|VIP1: BACKUP        (eth1) |
|IP1: 172.16.8.10    (eth1) |mcast |IP1: 172.16.8.11    (eth1) |
+---------------------------+      +---------------------------+
                  \                /
             +---------------------------+
             | Public VIP1: 172.16.8.100 |
             +---------------------------+
                           \/
                      +-------------+
                      |Resource pool|
                      +-------------+

Environment:

MASTER: 172.16.8.10
BACKUP: 172.16.8.11
VIP: 172.16.8.100
OS: CentOS Linux release 7.4.1708 (Core)

Prerequisites:

  • Synchronize time.
  • Configure SELinux and the firewall.
  • Add each other's hostnames to /etc/hosts (optional).
  • Confirm the interfaces support multicast (modern NICs do by default).

"Compiling and Installing Keepalived from Source and Configuration Explained" already covers all of the prerequisites above.

1. Single-network active-standby configuration files

MASTER configuration file:

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka67
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance VG_1 {
    state MASTER
    interface eth0
    virtual_router_id 103
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}

BACKUP configuration file:

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka68@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka68
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance VG_1 {
    state BACKUP
    interface eth0
    virtual_router_id 103
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"             
}

2. Common script

The following is the generic notify.sh notification script:

$ cat /usr/local/keepalived/etc/keepalived/notify.sh
#!/bin/bash
contact='root@localhost'
                
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
                
case $1 in
master)
    notify master   
    ;;
backup)
    notify backup   
    ;;
fault)
    notify fault    
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

3. Active-standby test

Test the MASTER

Before starting keepalived, check the interface:

[root@ka67 keepalived]# ip a
...
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
   link/ether 00:15:5d:ae:02:78 brd ff:ff:ff:ff:ff:ff
   inet 172.16.8.10/24 brd 172.16.8.255 scope global eth0
      valid_lft forever preferred_lft forever
   inet6 fe80::436e:b837:43b:797c/64 scope link
      valid_lft forever preferred_lft forever

After starting keepalived, check the interface again:

[root@ka67 keepalived]# ip a
...
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
   link/ether 00:15:5d:ae:02:78 brd ff:ff:ff:ff:ff:ff
   inet 172.16.8.10/24 brd 172.16.8.255 scope global eth0
      valid_lft forever preferred_lft forever
   inet 172.16.8.100/32 scope global eth0
      valid_lft forever preferred_lft forever
   inet6 fe80::436e:b837:43b:797c/64 scope link
      valid_lft forever preferred_lft forever

VIP 172.16.8.100 has been added successfully.

Test the BACKUP

Start keepalived on the BACKUP:

[root@ka68 keepalived]# systemctl start keepalived

Now stop the MASTER and see whether the VIP floats to the BACKUP:

[root@ka67 keepalived]# systemctl stop keepalived

Check the BACKUP's log:

[root@ka68 keepalived]# cat /var/log/messages
...
Keepalived_vrrp[1451]: VRRP_Instance(VG_1) Transition to MASTER STATE
Keepalived_vrrp[1451]: VRRP_Instance(VG_1) Entering MASTER STATE
Keepalived_vrrp[1451]: VRRP_Instance(VG_1) setting protocol VIPs.
Keepalived_vrrp[1451]: Sending gratuitous ARP on eth0 for 172.16.8.100
...

The VIP has successfully floated to the BACKUP host.

Start the MASTER again:

[root@ka67 keepalived]# systemctl start keepalived

Check the keepalived service status on the BACKUP:

[root@ka68 keepalived]# systemctl status keepalived
keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-03-02 22:13:14 EST; 15min ago
  Process: 1448 ExecStart=/usr/local/keepalived/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1449 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─1449 /usr/local/keepalived/sbin/keepalived -D
           ├─1450 /usr/local/keepalived/sbin/keepalived -D
           └─1451 /usr/local/keepalived/sbin/keepalived -D

Keepalived_vrrp[1451]: Sending gratuitous ARP on eth0 for 172.16.8.100
Keepalived_vrrp[1451]: VRRP_Instance(VG_1) Sending/queueing gratuitous ARPs on eth0 for 172.16.8.100
Keepalived_vrrp[1451]: Sending gratuitous ARP on eth0 for 172.16.8.100
Keepalived_vrrp[1451]: Sending gratuitous ARP on eth0 for 172.16.8.100
Keepalived_vrrp[1451]: Sending gratuitous ARP on eth0 for 172.16.8.100
Keepalived_vrrp[1451]: Sending gratuitous ARP on eth0 for 172.16.8.100
Keepalived_vrrp[1451]: VRRP_Instance(VG_1) Entering BACKUP STATE
Keepalived_vrrp[1451]: VRRP_Instance(VG_1) removing protocol VIPs.
Keepalived_vrrp[1451]: Opening script file /usr/local/keepalived/etc/keepalived/notify.sh

This shows that once the MASTER recovers, the VIP automatically floats back from the BACKUP to the MASTER, because the MASTER has the higher priority. The log above shows the BACKUP returning to the BACKUP state.
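
If this automatic failback (preemption) is undesirable, because each move briefly interrupts traffic, VRRP preemption can be disabled. A common pattern, shown here only as a sketch and not part of the setup above, is to run both nodes with state BACKUP and give the preferred node nopreempt plus the higher priority:

vrrp_instance VG_1 {
    state BACKUP
    nopreempt
    priority 100
    ...
}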

Installing and Configuring an Apache Kafka Distributed Messaging Cluster on CentOS 7

Apache Kafka is a popular distributed message broker designed to handle large volumes of real-time data efficiently. A Kafka cluster is highly scalable and fault-tolerant, and offers higher throughput than other brokers such as ActiveMQ and RabbitMQ. Although it is usually used as a pub/sub messaging system, many organizations also use it for log aggregation, because it provides persistent storage for published messages.

You can deploy Kafka on a single server or build a distributed Kafka cluster for higher performance. This article describes installing Apache Kafka on multiple CentOS 7 server instances.

Prerequisites:

Before installing the Kafka cluster servers, install the following components:

Linux JAVA JDK JRE environment variables: installation and configuration
Installing and configuring an Apache Zookeeper distributed cluster across multiple Linux nodes

Server list:

10.10.204.63
10.10.204.64
10.10.204.65

1. Installation

Create the kafka user and group:

 # groupadd kafka
 # useradd -g kafka -s /sbin/nologin kafka

Download the Kafka package:

 # cd /usr/local
 # wget http://apache.fayea.com/kafka/0.10.2.1/kafka_2.10-0.10.2.1.tgz

Unpack it and create a symlink:

 # tar zxvf kafka_2.10-0.10.2.1.tgz
 # ln -s kafka_2.10-0.10.2.1 kafka

Set ownership and create the Kafka log directory:

 # chown -R kafka:kafka kafka_2.10-0.10.2.1 kafka
 # mkdir -p /usr/local/kafka/logs

Add environment variables:

Edit /etc/profile and append the following at the bottom:

 export KAFKA_HOME=/usr/local/kafka_2.10-0.10.2.1
 export PATH=$KAFKA_HOME/bin:$PATH

Reload the profile so the variables take effect:

 # source /etc/profile

2. Configuration

Edit the Kafka server configuration file, modifying and adding the following:

 # cd /usr/local/kafka/config
 # vim server.properties

#Unique ID; must be different on every broker.
broker.id=63
#Allow topic deletion.
delete.topic.enable=true
#Modify: protocol, this broker's IP, and port; multiple listeners can be configured (relevant to SSL etc.).
listeners=PLAINTEXT://10.10.204.63:9092
#Modify: where Kafka stores its data; separate multiple paths with commas, e.g. /data/kafka-logs-1,/data/kafka-logs-2.
log.dirs=/usr/local/kafka/logs/kafka-logs
#Default number of partitions per topic; overridden by the value given when a topic is created.
num.partitions=3
#New: maximum message size, in bytes.
message.max.bytes=5242880
#New: default replication factor for automatically created topics.
default.replication.factor=2
#New: maximum amount of data a replica fetches at a time, in bytes.
replica.fetch.max.bytes=5242880
#New: must be set, otherwise topics are only marked for deletion rather than actually deleted.
delete.topic.enable=true
#New: whether automatic leader rebalancing is allowed; boolean, default true.
auto.leader.rebalance.enable=true
#Zookeeper connection string; identical on every broker.
zookeeper.connect=10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181

#Optional settings
#Whether topics may be created automatically; boolean, default true.
auto.create.topics.enable=true
#Topic compression type; string, one of: gzip, snappy, lz4, producer, uncompressed.
compression.type=producer
#Flush all logs to disk on shutdown so log recovery is skipped on restart, shortening restart time.
controlled.shutdown.enable=true

Note: each broker's configuration contains the zookeeper addresses and its own broker ID; when a broker starts, it creates a znode in zookeeper.

Edit the other configuration files:

 # vim zookeeper.properties

Change it to:

 dataDir=/usr/local/zookeeper/data
Then add:

 server.1=10.10.204.63:2888:3888
 server.2=10.10.204.64:2888:3888
 server.3=10.10.204.65:2888:3888
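
Each zookeeper node also needs a myid file under dataDir whose number matches its server.N entry; assuming the dataDir above, something like:

 # echo 1 > /usr/local/zookeeper/data/myid   # on 10.10.204.63
 # echo 2 > /usr/local/zookeeper/data/myid   # on 10.10.204.64
 # echo 3 > /usr/local/zookeeper/data/myid   # on 10.10.204.65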

Also edit the following configuration files:

# vim producer.properties

bootstrap.servers=10.10.204.63:9092,10.10.204.64:9092,10.10.204.65:9092

# vim consumer.properties

zookeeper.connect=10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181

3. Startup

Start the kafka service on all nodes (check the logs or the process status to make sure the cluster started successfully):

# /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties

The command above prints rolling startup output until it goes quiet; open a new terminal to check whether startup succeeded:
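
Alternatively, kafka-server-start.sh accepts a -daemon flag that detaches the broker so no second terminal is needed:

# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties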

# jps
 9939 Jps
 2201 QuorumPeerMain
 2303 Kafka

4. Usage test

The following can be done on any node; open a new terminal:

Run the following command to create a topic named renwole:

 # cd /usr/local/kafka/bin
 # ./kafka-topics.sh --create --zookeeper 10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181 --replication-factor 1 --partitions 1 --topic renwole
 Created topic "renwole".

Explanation:

 --replication-factor 1   keep one replica
 --partitions 1           create one partition
 --topic renwole          the topic name

List the topics that have been created:

# ./kafka-topics.sh --list --zookeeper 10.10.204.63:2181
 __consumer_offsets
 renwole

Note: brokers can also be configured to create topics automatically.

Send messages. Kafka ships a simple command-line producer: type anything, press Enter to send, Ctrl+C to exit; each line is sent as one message:

# ./kafka-console-producer.sh --broker-list 10.10.204.64:9092 --topic renwole

On the receiving side, run the following to see the consumed messages:

# ./kafka-console-consumer.sh --bootstrap-server 10.10.204.63:9092 --topic renwole --from-beginning

Run the following to delete the topic:

# ./kafka-topics.sh --delete --zookeeper 10.10.204.63:2181,10.10.204.64:2181,10.10.204.65:2181 --topic renwole

5. Check cluster status

Kafka is now installed; check the cluster's broker IDs:

Note: you can connect with the zookeeper client from any node.

 # cd /usr/local/zookeeper/bin
 # ./zkCli.sh
 Connecting to localhost:2181
 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
 [myid:] - INFO [main:Environment@100] - Client environment:host.name=10-10-204-63.10.10.204.63
 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_144
 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_144/jre
 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper/bin/../build/classes:/usr/local/zookeeper/bin/../build/lib/*.jar:/usr/local/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper/bin/../zookeeper-3.4.10.jar:/usr/local/zookeeper/bin/../src/java/lib/*.jar:/usr/local/zookeeper/bin/../conf:.:/usr/java/jdk1.8.0_144/jre/lib/rt.jar:/usr/java/jdk1.8.0_144/lib/dt.jar:/usr/java/jdk1.8.0_144/lib/tools.jar:/usr/java/jdk1.8.0_144/jre/lib
 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=
 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.10.0-514.21.2.el7.x86_64
 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/usr/local/zookeeper-3.4.10/bin
 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921
 Welcome to ZooKeeper!
 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
 JLine support is enabled
 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x35ddf80430b0008, negotiated timeout = 30000
 WATCHER::
 WatchedEvent state:SyncConnected type:None path:null
 [zk: localhost:2181(CONNECTED) 0] ls /brokers/ids # list broker IDs
 [63, 64, 65]

All three Kafka instance IDs are online; if any node goes down, the result changes accordingly (these IDs are the broker.id values set earlier).
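
To watch that happen, stop the broker on one node and repeat the query; the stopped ID should drop out of the list within a few seconds (a sketch, using 10.10.204.65 here):

 On 10.10.204.65, stop the broker:
 # /usr/local/kafka/bin/kafka-server-stop.sh
 Back in the Zookeeper client:
 [zk: localhost:2181(CONNECTED) 1] ls /brokers/ids
 [63, 64]

Restart the broker and its ID re-registers shortly afterwards.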

Open the Kafka port in the firewall:

# firewall-cmd --permanent --add-port=9092/tcp
# firewall-cmd --reload
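
To confirm the rule is active after the reload (firewalld must be running; 9092/tcp should appear in the output):

# firewall-cmd --list-ports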

6. Start on boot

Create a systemd unit file:

Create kafka.service under /usr/lib/systemd/system with the following content:

[Unit]
 Description=Apache Kafka server (broker)
 Documentation=https://kafka.apache.org/documentation/
 Requires=network.target remote-fs.target
 After=network.target remote-fs.target

[Service]
 Type=simple
 Environment="LOG_DIR=/usr/local/kafka/logs"
 User=kafka
 Group=kafka
 #Environment=JAVA_HOME=/usr/java/jdk1.8.0_144
 ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
 ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh
 Restart=on-failure
 SyslogIdentifier=kafka

[Install]
 WantedBy=multi-user.target

Start the Kafka service via systemd:

# systemctl daemon-reload
# systemctl enable kafka.service
# systemctl start kafka.service
# systemctl status kafka.service

7. Kafka clusters can also be managed through a web UI; see:

Installing and Configuring the Kafka Manager Distributed Management Tool

And that's it: the Kafka distributed message queue is now fully deployed, including the essential tuning covered above. Kafka client libraries are available for most programming languages, so you can create Kafka producers and consumers and put the cluster to use in your own projects.

References:
https://kafka.apache.org/documentation.html#quickstart
https://www.ibm.com/developerworks/cn/opensource/os-cn-kafka/
https://tech.meituan.com/kafka-fs-design-theory.html
https://blog.jobbole.com/99195/

Installing and Configuring a Multi-Node Apache Zookeeper Distributed Cluster on Linux

Planning:

Three physical servers form a quorum, the minimum for fault tolerance. For a highly available cluster you can use any larger odd number; with 5 servers, for example, the cluster can survive two failed nodes, since an ensemble of N servers tolerates floor((N-1)/2) failures.

The servers must accept inbound connections on ports 2888, 3888, and 2181. If IPtables or Firewalld is enabled, make sure these ports are open, because Zookeeper communicates over them.

OS: CentOS 7.4 x64
Zookeeper-3.4.10

In this tutorial, we will deploy the Zookeeper distributed cluster on the following 3 servers:

10.10.204.63
10.10.204.64
10.10.204.65

Prerequisites:

Before installing Zookeeper, you should have the Oracle Java 8 JDK installed and configured on the system; Zookeeper runs on top of it.

Installing and Configuring Java JDK/JRE Environment Variables on Linux

Step 1: Install Zookeeper on every node.

 Download Zookeeper
 # cd /tmp
 # wget https://apache.fayea.com/zookeeper/stable/zookeeper-3.4.10.tar.gz
 Unpack it
 # tar zxvf zookeeper-3.4.10.tar.gz
 Move Zookeeper to /usr/local/
 # mv zookeeper-3.4.10 /usr/local/
 Create a symlink
 # ln -s /usr/local/zookeeper-3.4.10 /usr/local/zookeeper
 Copy the sample configuration file
 # cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg
 Create the data and log directories
 # mkdir -p /usr/local/zookeeper/data
 # mkdir -p /usr/local/zookeeper/logs
 Create the user
 # groupadd zookeeper
 # useradd -g zookeeper -s /sbin/nologin zookeeper
 Set ownership of the Zookeeper directories
 # chown -R zookeeper:zookeeper /usr/local/zookeeper-3.4.10 /usr/local/zookeeper
 # chmod 755 /usr/local/zookeeper-3.4.10

Step 2: Edit the configuration file.

 # vim /usr/local/zookeeper/conf/zoo.cfg

Defaults:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Change it to:

#Heartbeat interval, in milliseconds, between servers and between clients and servers; one heartbeat is sent every tickTime.
tickTime=2000
#Maximum number of heartbeat intervals the initial connection phase may take. The 'clients' here are the Follower servers connecting to the Leader within the ensemble, not user clients. If the server has received nothing after 10 heartbeats (tickTimes), the connection is considered failed; the total allowance is 10*2000 = 20 seconds.
initLimit=10
#Maximum number of tickTimes a request/response exchange between Leader and Follower may take; the total here is 5*2000 = 10 seconds (applies to version 3.4 and later).
syncLimit=5
#Used together with purgeInterval below; number of snapshots to retain. The default is 3.
autopurge.snapRetainCount=3
#Purge frequency, in hours; use an integer of 1 or greater. The default is 0, which disables auto purge (applies to version 3.4 and later).
autopurge.purgeInterval=1
maxClientCnxns=60
#Change the data directory (any path will do).
dataDir=/usr/local/zookeeper/data
#Add a transaction log directory (any path will do).
dataLogDir=/usr/local/zookeeper/logs
#Port Zookeeper listens on for client connection requests.
clientPort=2181
#Add the following ensemble members.
server.1=10.10.204.63:2888:3888
server.2=10.10.204.64:2888:3888
server.3=10.10.204.65:2888:3888

Step 3: Create the myid file on each Zookeeper node.

 # echo "1" >> /usr/local/zookeeper/data/myid
 # echo "2" >> /usr/local/zookeeper/data/myid
 # echo "3" >> /usr/local/zookeeper/data/myid

Step 4: Add system environment variables.

Edit /etc/profile and add the following:

 export ZOOKEEPER_HOME=/usr/local/zookeeper/
 export PATH=$ZOOKEEPER_HOME/bin:$PATH

Run the following to apply the variables (the profile entry makes them permanent):

 # source /etc/profile

Step 5: Create the systemd unit file.

Create zookeeper.service under /usr/lib/systemd/system and fill in the following:

[Unit]
 Description=zookeeper.service
 After=network.target

[Service]
 Type=forking
 Environment=ZOO_LOG_DIR=/usr/local/zookeeper/
 ExecStart=/usr/local/zookeeper/bin/zkServer.sh start
 ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop
 ExecReload=/usr/local/zookeeper/bin/zkServer.sh restart
 Restart=always
 User=zookeeper
 Group=zookeeper

[Install]
 WantedBy=multi-user.target

Step 6: Start Zookeeper.

Reload unit files: systemctl daemon-reload
Start the service: systemctl start zookeeper.service
Stop the service: systemctl stop zookeeper.service
Check process status and logs: systemctl status zookeeper.service
Enable start on boot: systemctl enable zookeeper.service
Disable start on boot: systemctl disable zookeeper.service

Step 7: Open ports 2888, 3888, and 2181.

 # firewall-cmd --permanent --zone=public --add-port=2888/tcp
 # firewall-cmd --permanent --zone=public --add-port=3888/tcp
 # firewall-cmd --permanent --zone=public --add-port=2181/tcp

Reload the firewall:

 # firewall-cmd --reload

Step 8: Check Zookeeper status

Verify that all three servers are running normally.

On the 10.10.204.63 node:

 [root@10-10-204-63 ~]# /usr/local/zookeeper/bin/zkServer.sh status
 ZooKeeper JMX enabled by default
 Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
 Mode: leader

On the 10.10.204.64 node:

 [root@10-10-204-64 ~]# /usr/local/zookeeper/bin/zkServer.sh status
 ZooKeeper JMX enabled by default
 Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
 Mode: follower

On the 10.10.204.65 node:

 [root@10-10-204-65 ~]# /usr/local/zookeeper/bin/zkServer.sh status
 ZooKeeper JMX enabled by default
 Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
 Mode: follower

The ZooKeeper servers each play a different role: one acts as the leader while the other two are followers. If you see the same results, your ZooKeeper cluster is installed and configured correctly.
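
You can also rehearse a failover (a sketch, assuming the systemd unit from step 5). Stop the current leader, and the two survivors elect a new one within seconds:

 On the current leader (10.10.204.63 in this example):
 # systemctl stop zookeeper.service
 On either remaining node:
 # /usr/local/zookeeper/bin/zkServer.sh status

One of the two remaining servers should now report Mode: leader. Start the stopped node again with systemctl start zookeeper.service and it rejoins as a follower.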

Step 9: Connect with the client from any of the three servers.

A client session looks like this:

 [root@10-10-204-63 ~]# /usr/local/zookeeper/bin/zkCli.sh -server 10.10.204.64:2181
 Connecting to 10.10.204.64:2181
 2017-08-13 20:30:11,816 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
 2017-08-13 20:30:11,863 [myid:] - INFO [main:Environment@100] - Client environment:host.name=103-28-204-63.10.10.204.63
 2017-08-13 20:30:11,863 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_144
 2017-08-13 20:30:11,875 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
 2017-08-13 20:30:11,883 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_144/jre
 2017-08-13 20:30:11,883 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper/bin/../build/classes:/usr/local/zookeeper/bin/../build/lib/*.jar:/usr/local/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper/bin/../zookeeper-3.4.10.jar:/usr/local/zookeeper/bin/../src/java/lib/*.jar:/usr/local/zookeeper/bin/../conf:.:/usr/java/jdk1.8.0_144/jre/lib/rt.jar:/usr/java/jdk1.8.0_144/lib/dt.jar:/usr/java/jdk1.8.0_144/lib/tools.jar:/usr/java/jdk1.8.0_144/jre/lib
 2017-08-13 20:30:11,884 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
 2017-08-13 20:30:11,884 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
 2017-08-13 20:30:11,884 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=
 2017-08-13 20:30:11,884 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
 2017-08-13 20:30:11,884 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
 2017-08-13 20:30:11,885 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.10.0-514.21.2.el7.x86_64
 2017-08-13 20:30:11,885 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
 2017-08-13 20:30:11,885 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
 2017-08-13 20:30:11,885 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/root
 2017-08-13 20:30:11,893 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=10.10.204.64:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921
 Welcome to ZooKeeper!
 2017-08-13 20:30:12,103 [myid:] - INFO [main-SendThread(10.10.204.64:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server 10.10.204.64/10.10.204.64:2181. Will not attempt to authenticate using SASL (unknown error)
 JLine support is enabled
 2017-08-13 20:30:12,768 [myid:] - INFO [main-SendThread(10.10.204.64:2181):ClientCnxn$SendThread@876] - Socket connection established to 10.10.204.64/10.10.204.64:2181, initiating session
 2017-08-13 20:30:12,935 [myid:] - INFO [main-SendThread(10.10.204.64:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server 10.10.204.64/10.10.204.64:2181, sessionid = 0x15dda7deb6c0000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
 [zk: 10.10.204.64:2181(CONNECTED) 2] create /renwoledb 'renwole' # create a data znode
 Created /renwoledb
 [zk: 10.10.204.64:2181(CONNECTED) 3] get /renwoledb # read the znode back
 renwole
 cZxid = 0x500000002
 ctime = Sun Aug 13 21:19:24 CST 2017
 mZxid = 0x500000002
 mtime = Sun Aug 13 21:19:24 CST 2017
 pZxid = 0x500000002
 cversion = 0
 dataVersion = 0
 aclVersion = 0
 ephemeralOwner = 0x0
 dataLength = 7
 numChildren = 0
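
A few more zkCli commands are useful for tidying up the test data (the session numbering continues from the transcript above):

 [zk: 10.10.204.64:2181(CONNECTED) 4] ls / # list the children of the root znode
 [zk: 10.10.204.64:2181(CONNECTED) 5] delete /renwoledb # remove the test znode (allowed because it has no children)
 [zk: 10.10.204.64:2181(CONNECTED) 6] quit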

The Zookeeper cluster is now fully built and tested. If the leader node fails, the remaining followers vote to elect a new leader; that is exactly the distributed behavior we want from Zookeeper.

Please credit this page when reposting.