Tag Archives: nginx

Keepalived + Nginx Dual-Network (Internal/External) Active-Active Dual-Master Mode with Independent Failover (Hands-On)

Introduction:

With the high-performance Keepalived+LVS combination available, why would you still want Keepalived+Nginx? Keepalived was originally designed for LVS. LVS is a layer-4 load balancer: its performance is excellent, but it has no built-in health checks for back-end servers. Keepalived supplies LVS with a range of health-check mechanisms, such as TCP_CHECK, UDP_CHECK, and HTTP_GET; you can also write your own health-check scripts for LVS, or combine it with ldirectord for back-end health detection. Still, LVS remains a layer-4 device and cannot parse higher-layer protocols. Nginx is different: as a layer-7 device it can parse layer-7 protocols, filter requests, and cache responses. These are advantages unique to Nginx. However, keepalived provides no built-in health check for Nginx, so you have to write your own scripts for that.
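For reference, an LVS back-end health check in keepalived looks roughly like this (a sketch only; the real-server address and timeout values are illustrative, not from this deployment):

```
real_server 10.16.8.8 80 {
    weight 1
    TCP_CHECK {
        connect_timeout 3
        connect_port 80
    }
}
```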

This post covers the Keepalived+Nginx mode only, without LVS. Unless you are running a very large load, you generally will not need LVS; if you do, see: 《Keepalived + LVS-DR + Nginx Single-Network Active-Active Dual-Master Configuration (Hands-On)》.

Prepare four servers or VMs:

Web Nginx, internal: 10.16.8.8 / 10.16.8.9

Keepalived, internal: 10.16.8.10 (ka67) / 10.16.8.11 (ka68)
Keepalived, public: 172.16.8.10 / 172.16.8.11

Keepalived internal VIPs: 10.16.8.100 / 10.16.8.101
Keepalived public VIPs: 172.16.8.100 / 172.16.8.101

OS: CentOS Linux release 7.4.1708 (Core)

Prerequisites:

Install keepalived.
Synchronize time.
Configure SELinux and the firewall.
Add each node's hostname to the other's /etc/hosts file (optional).
Confirm the network interfaces support multicast (modern NICs do by default).

For these steps, see: 《keepalived installation and configuration file walkthrough》.
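The multicast prerequisite can be checked from the flags line of `ip link`. A minimal sketch, assuming the VRRP interface is eth1 (adjust the device name to your setup):

```shell
# has_multicast inspects an `ip link` flags string such as
# "<BROADCAST,MULTICAST,UP,LOWER_UP>" and reports whether the
# MULTICAST flag is present.
has_multicast() {
    case "$1" in
        *MULTICAST*) echo yes ;;
        *)           echo no  ;;
    esac
}

# On a real node, feed it the first line of `ip link show eth1`:
has_multicast "$(ip link show eth1 2>/dev/null | head -n1)"

# Illustrative flag strings:
has_multicast "<BROADCAST,MULTICAST,UP,LOWER_UP>"   # yes
has_multicast "<LOOPBACK,UP,LOWER_UP>"              # no
```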

1. ka67 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state MASTER
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"  
}
vrrp_instance External_2 {
    state BACKUP
    interface eth1
    virtual_router_id 172
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"  
}
vrrp_instance Internal_1 {
    state MASTER
    interface eth0
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}
vrrp_instance Internal_2 {
    state BACKUP
    interface eth0
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}

2. ka68 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}

vrrp_instance External_2 {
    state MASTER
    interface eth1
    virtual_router_id 172
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}

vrrp_instance Internal_1 {
    state BACKUP
    interface eth0
    virtual_router_id 191
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}
vrrp_instance Internal_2 {
    state MASTER
    interface eth0
    virtual_router_id 192
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"          
}

3. Create the common notification script

$ vim /usr/local/keepalived/etc/keepalived/notify.sh
#!/bin/bash
#
contact='root@localhost'
                
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
                
case $1 in
master)
    notify master   
    ;;
backup)
    notify backup
    systemctl start nginx   # restart Nginx here, so a node that dropped to BACKUP because Nginx died recovers automatically
    ;;
fault)
    notify fault    
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

4. Start the keepalived service and test

Start ka67 and check its interface state:

[root@ka67 ~]# systemctl start keepalived
[root@ka67 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:78 brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.10/24 brd 172.16.8.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.8.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.8.101/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::436e:b837:43b:797c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:84 brd ff:ff:ff:ff:ff:ff
    inet 10.16.8.10/24 brd 10.16.8.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.16.8.100/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.16.8.101/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1261:7633:b595:7719/64 scope link
       valid_lft forever preferred_lft forever

With ka68 not yet started, ka67 holds all four VIPs:

Public, eth0:

172.16.8.100/32
172.16.8.101/32

Internal, eth1:

10.16.8.100/32
10.16.8.101/32

Start ka68 and check its interface state:

[root@ka68 ~]# systemctl start keepalived
[root@ka68 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:79 brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.11/24 brd 172.16.8.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.8.101/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::3d2c:ecdc:5e6d:70ba/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:82 brd ff:ff:ff:ff:ff:ff
    inet 10.16.8.11/24 brd 10.16.8.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.16.8.101/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::4fb3:d0a8:f08c:4536/64 scope link
       valid_lft forever preferred_lft forever

ka68 holds two VIPs:

Public, eth0:

172.16.8.101/32

Internal, eth1:

10.16.8.101/32

Check ka67's interface state again:

[root@ka67 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:78 brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.10/24 brd 172.16.8.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.16.8.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::436e:b837:43b:797c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:ae:02:84 brd ff:ff:ff:ff:ff:ff
    inet 10.16.8.10/24 brd 10.16.8.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.16.8.100/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::1261:7633:b595:7719/64 scope link
       valid_lft forever preferred_lft forever

Note that 172.16.8.101 and 10.16.8.101 have been removed from ka67. From this point on, stopping either server leaves all four VIPs reachable.

You can also watch the multicast heartbeat on ka67/ka68 with the following command:

[root@ka67 ~]# tcpdump -nn -i eth1 host 224.0.0.111
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
02:00:15.690389 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20
02:00:15.692654 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 172, prio 100, authtype simple, intvl 1s, length 20
02:00:16.691552 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20
02:00:16.693814 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 172, prio 100, authtype simple, intvl 1s, length 20
02:00:17.692710 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20

At this point the VRRP high-availability configuration and tests are complete; next we configure the Nginx web service.

5. Install and configure Nginx

Install Nginx on the back-end servers 10.16.8.8/10.16.8.9.

For building from source, see: 《Compiling and Installing Nginx from Source on CentOS 7》.

Or install it quickly via yum:

$ yum install epel-release -y
$ yum install nginx -y

To tell the machines apart in this test environment, each web page is set to the server's IP address; in production the content would be identical.

Run the following on 10.16.8.8 and 10.16.8.9 respectively:

$ echo "Server 10.16.8.8" > /usr/share/nginx/html/index.html
$ echo "Server 10.16.8.9" > /usr/share/nginx/html/index.html

Test that it responds:

$ curl http://10.16.8.8
Server 10.16.8.8

Install Nginx on ka67/ka68 as well; yum is used here:

$ yum install nginx psmisc -y

Note: psmisc provides fuser, killall, pstree, and other commands.

Configure Nginx on ka67/ka68.

Back up the default configuration files:

$ mv /etc/nginx/conf.d/default.conf{,.bak}
$ mv /etc/nginx/nginx.conf{,.bak}

On both ka67 and ka68, put the following in the main nginx configuration file:

$ vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    upstream webserverapps {
        server 10.16.8.8:80;
        server 10.16.8.9:80;
        #server 127.0.0.1:8080 backup;
    }

    server {
        listen 80;
        server_name _;

        location / {
            proxy_pass http://webserverapps;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            add_header Access-Control-Allow-Origin *;
        }
    }

}

Note: the key additions to the stock configuration are the upstream block and the proxy server block; everything else is default and for testing only. Adjust the configuration to your own needs in production.

Restart the Nginx service on ka67/ka68:

$ systemctl restart nginx

Test on both ka67 and ka68:

[root@ka67 ~]# for i in `seq 10`; do curl 10.16.8.10; done
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.9
Server 10.16.8.9
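The alternating output matches the plain round-robin default of the upstream block. nginx's upstream module also supports per-server tuning; an illustrative fragment (the weights and failure thresholds are made-up values, not from this deployment):

```
upstream webserverapps {
    server 10.16.8.8:80 weight=2 max_fails=3 fail_timeout=10s;
    server 10.16.8.9:80 weight=1 max_fails=3 fail_timeout=10s;
}
```

With these weights, 10.16.8.8 would receive roughly two of every three requests, and a server that fails three times within 10 seconds is temporarily taken out of rotation.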

Nginx reverse proxying now works; next we wire Nginx into Keepalived so that Nginx itself is highly available.

6. Configure Keepalived + Nginx high availability

On both ka67 and ka68, add the following vrrp_script block just below the global_defs block in /usr/local/keepalived/etc/keepalived/keepalived.conf:

vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight -10
    fall 2
    rise 2
}
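A note on the check command: `killall -0` (from psmisc, installed earlier) sends no signal at all; its exit status simply reports whether any process named nginx exists, and keepalived subtracts 10 from this node's priority when the check fails, dropping it below its 95-priority peer. The same probe idea with `kill -0`, as a minimal self-contained sketch:

```shell
# kill -0 performs only the existence/permission check on a PID; exit
# status 0 means the process is alive. $$ (the current shell) always is.
if kill -0 "$$" 2>/dev/null; then
    echo "process alive"
else
    echo "process gone"
fi
```

`killall -0 nginx` applies the same check to every process named nginx, which is exactly what keepalived runs here every 2 seconds.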

Add a track_script block inside every vrrp_instance block:

track_script {
    chk_nginx
}

For example:

...
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    track_script {
        chk_nginx
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
...

After the changes, restart keepalived on ka67/ka68:

$ systemctl stop keepalived
$ systemctl start keepalived

Summary:

During configuration the VIPs at one point failed to float because of a cross-subnet issue. The way out was to read the logs, analyze, and reason it through; the problem was eventually solved. Whatever the situation, having chosen keepalived, stick with it.
If you run into any problem while following along, feel free to leave a comment and we can work through it together.

Keepalived + LVS-DR + Nginx Single-Network Active-Active Dual-Master Configuration (Hands-On)

What is LVS/DR mode?

LVS stands for Linux Virtual Server, a virtual server cluster system. LVS currently offers three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and ten scheduling algorithms (rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq).

On Unix-like systems, LVS acts as a front end (the Director, also called the scheduler). It provides no service itself: it accepts requests arriving from the Internet and forwards them to the real servers (RealServer) running behind it, which do the actual processing; the responses then go back to the client.

An LVS cluster uses IP load balancing plus content-based request dispatching. The scheduler achieves high throughput, spreads requests evenly across the servers, and automatically masks server failures, turning a group of servers into a single high-performance, highly available virtual server. The cluster's structure is transparent to clients, and neither client nor server programs need modification. Transparency, scalability, high availability, and manageability therefore all have to be considered in the design.

LVS has two important components: IPVS and ipvsadm. ipvs is the core of LVS; it is only a framework, similar to iptables, and works in kernel space. ipvsadm defines the LVS forwarding rules and works in user space.

LVS has three forwarding types:

LVS-NAT mode:

Network address translation mode is the simplest to set up. All RealServer cluster nodes and the front-end Director must be on the same subnet. This model supports port mapping, and the RealServers can run any operating system. The front-end Director handles both the client requests and the RealServers' responses, forwarding the responses back to the clients, so it easily becomes the performance bottleneck of the whole cluster. The RealServer IP addresses (RIP below) are usually private, which makes communication between cluster nodes easy. The front-end Director usually has two IP addresses: the VIP, a virtual IP that clients send requests to, and the DIP, the Director's real IP. The RIPs' gateway must point to the Director's DIP.

LVS-DR mode:

DR means direct routing. This mode forwards by MAC address, so all RealServer cluster nodes and the front-end Director must be on the same physical network. Port mapping is not supported. Its performance is better than LVS-NAT. The RIPs may be public addresses, and their gateway must not point to the DIP.

Advantages:

Unlike LVS/NAT, DR mode does not route the return traffic through the load balancer. It pays off when responses are far larger and more numerous than requests; fortunately, most web services have exactly this asymmetric request/response pattern, so common web workloads can use this mode.

This way, the load balancer is no longer the system bottleneck. Even if the balancer has only a 100 Mbit/s full-duplex NIC, horizontal scaling of the cluster can still push the whole system to 1 Gbit/s of traffic.

Test results from the official LVS site also show that LVS-DR can accommodate more than 100 application servers, which is plenty for typical services.

Drawbacks:

DR mode cannot forward traffic across subnets; if you must load-balance across subnets, use LVS/TUN.

LVS-TUN mode:

The tunnel model: the RealServers and the front-end Director may be on different networks. Port mapping is again unsupported, and the RealServers must run an operating system that supports IP tunneling. The Director only handles client requests and forwards them to the RealServers, which respond to the clients directly, bypassing the Director. The RIP must not be a private IP. In both DR and TUN modes, packets return directly to the user, so the VIP must be configured on the Director Server and on every cluster node. On the Real Servers it is usually bound to the loopback interface, e.g. lo:0; on the Director Server it is bound to a real network interface, e.g. eth0:0.

Deployment:

Prepare four servers or VMs:

Web Nginx: 10.16.8.8 / 10.16.8.9
Keepalived: 10.16.8.10 / 10.16.8.11
Keepalived VIPs: 10.16.8.100 / 10.16.8.101
OS: CentOS Linux release 7.4.1708 (Core)

Prerequisites:

Install keepalived.
Synchronize time.
Configure SELinux and the firewall.
Add each node's hostname to the other's /etc/hosts file (optional).
Confirm the network interfaces support multicast (modern NICs do by default).

For these steps, see: 《keepalived installation and configuration file walkthrough》.

1. ka67 configuration file

$ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 60
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"        
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth1
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"         
}

2. ka68 configuration file

$ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 60
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 191
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"         
}
vrrp_instance VI_2 {
    state MASTER
    interface eth1
    virtual_router_id 192
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"        
}

3. Create the common notify.sh script

Create this script on both nodes:

$ vim /usr/local/keepalived/etc/keepalived/notify.sh
#!/bin/bash
#
contact='root@localhost'
                
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
                
case $1 in
master)
    notify master   
    ;;
backup)
    notify backup   
    ;;
fault)
    notify fault    
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
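The case dispatch above can be smoke-tested without a mail server. A minimal sketch that reproduces only the dispatch logic, with the notify/mail step stubbed out by echo:

```shell
# Reproduce notify.sh's argument dispatch with the mail step stubbed out,
# so the branch logic can be exercised anywhere.
dispatch() {
    case $1 in
    master|backup|fault)
        echo "would notify: $1"
        ;;
    *)
        echo "Usage: notify.sh {master|backup|fault}"
        return 1
        ;;
    esac
}

dispatch master                      # prints: would notify: master
dispatch bogus || echo "rejected"    # unknown states are refused
```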

4. Start the keepalived service

$ systemctl start keepalived
$ systemctl enable keepalived

5. Check the multicast heartbeat

You can watch the multicast heartbeat from any keepalived node with the tcpdump command, for example:

$ tcpdump -nn -i eth1 host 224.0.0.111
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
00:32:31.714987 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
00:32:31.715739 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20
00:32:32.716150 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
00:32:32.716292 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20
00:32:33.717327 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
00:32:33.721361 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20

If you get the error: -bash: tcpdump: command not found,

install tcpdump:

$ yum install tcpdump -y

6. Configure LVS

Install LVS on both nodes. CentOS 7 already ships the LVS core (IPVS) in the kernel, so only the management tool needs installing:

$ yum -y install ipvsadm

Stop the keepalived service on both ka67 and ka68:

$ systemctl stop keepalived

On both ka67 and ka68, append the following virtual_server configuration to the end of the configuration file:

$ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
virtual_server 10.16.8.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    # sorry_server 127.0.0.1 80
    real_server 10.16.8.8 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.16.8.9 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
virtual_server 10.16.8.101 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    # sorry_server 127.0.0.1 80
    real_server 10.16.8.8 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.16.8.9 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

7. Configure the RS (Real Server) web service

Install Apache httpd or Nginx as the web service on each web server; Nginx is used here.

For building from source, see: 《Compiling and Installing Nginx from Source on CentOS 7》.

Or install it quickly via yum:

$ yum install epel-release -y
$ yum install nginx -y

To tell the machines apart in this test environment, each web page is set to the server's IP address; in production the content would be identical.

Run the following on web8/web9 respectively:

$ echo "Server 10.16.8.8" > /usr/share/nginx/html/index.html
$ echo "Server 10.16.8.9" > /usr/share/nginx/html/index.html

Test that it responds:

$ curl http://127.0.0.1
Server 10.16.8.8

8. Add the RS script

Some commands used by this script are missing from a CentOS 7 minimal install, so install the network tools package first:

$ yum install net-tools -y

Add the rs.sh script on both web servers:

$ vim /tmp/rs.sh
#!/bin/bash
vip1=10.16.8.100
vip2=10.16.8.101
dev1=lo:1
dev2=lo:2
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev1 $vip1 netmask 255.255.255.255 broadcast $vip1 up
    ifconfig $dev2 $vip2 netmask 255.255.255.255 broadcast $vip2 up
    echo "VS Server is Ready!"
    ;;
stop)
    ifconfig $dev1 down
    ifconfig $dev2 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "VS Server is Cancel!"
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
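The arp_ignore/arp_announce values written above live under /proc and are lost on reboot. A hedged sketch of persisting them with sysctl (the file name is arbitrary):

```
# /etc/sysctl.d/90-lvs-dr-arp.conf  (illustrative file name)
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
```

Apply with `sysctl --system`. The loopback VIPs themselves still need the rs.sh script (or a network-profile equivalent) at boot.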

Then start the script on each server:

$ /tmp/rs.sh start

To stop it, run:

$ /tmp/rs.sh stop

9. Test

From another machine, test that both VIPs respond:

[root@localhost ~]# for i in `seq 5`; do
>     curl 10.16.8.100
>     curl 10.16.8.101
> done
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.9
Server 10.16.8.8
Server 10.16.8.8
Server 10.16.8.9
Server 10.16.8.9
Server 10.16.8.8

Judging from the test output, the Keepalived+LVS-DR+Nginx high-availability failover setup is working.

Keepalived Dual-Network (Internal/External) Active-Active Dual-Master Mode with Independent Failover

Preface:

In production, the public and internal networks are separate, hence "dual network". The configuration below makes internal and public VIPs fail over independently rather than in lockstep; for example, Keepalived+LVS-DR, Keepalived+Nginx, and Keepalived+HAProxy all work without synchronized floating. Keepalived+LVS-NAT, by contrast, does require synchronized floating.

1. Diagram:

  • The multicast IP is 224.0.0.111.
  • Each VIP is MASTER on one node and BACKUP on the other, on both networks.
                        +------+
                        |Client|
                        +------+
                           /\
                       +--------+
                       |Internet|
                       +--------+
                           /\
                      +-----------+
                      |NAT network|
                      +-----------+
                           /\
               +----------------------------+
               | Internal VIP1: 10.16.8.100 |
               | Internal VIP2: 10.16.8.101 |
               +----------------------------+
                   /                    \
+---------------------------+      +---------------------------+
| KA+LVS-DR/Nginx/HAProxy   |      | KA+LVS-DR/Nginx/HAProxy   |
| Int VIP1: MASTER   (eth1) |      | Int VIP1: BACKUP   (eth1) |
| Int VIP2: BACKUP   (eth1) |      | Int VIP2: MASTER   (eth1) |
| Int: 10.16.8.10    (eth1) |<---->| Int: 10.16.8.11    (eth1) |
|---------------------------|mcast |---------------------------|
| Pub VIP1: MASTER   (eth2) |<---->| Pub VIP1: BACKUP   (eth2) |
| Pub VIP2: BACKUP   (eth2) |      | Pub VIP2: MASTER   (eth2) |
| Pub: 172.16.8.10   (eth2) |      | Pub: 172.16.8.11   (eth2) |
+---------------------------+      +---------------------------+
                   \                    /
               +----------------------------+
               | Public VIP1: 172.16.8.100  |
               | Public VIP2: 172.16.8.101  |
               +----------------------------+
                           \/
                      +-------------+
                      |Resource pool|
                      +-------------+

2. ka67 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka67
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state MASTER
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance External_2 {
    state BACKUP
    interface eth1
    virtual_router_id 172
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state MASTER
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_2 {
    state BACKUP
    interface eth2
    virtual_router_id 192
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

3. ka68 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka68
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 95
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance External_2 {
    state MASTER
    interface eth1
    virtual_router_id 172
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        10.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state BACKUP
    interface eth2
    virtual_router_id 191
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole2
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_2 {
    state MASTER
    interface eth2
    virtual_router_id 192
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole3
    }
    virtual_ipaddress {
        172.16.8.101
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

Keepalived Dual-Network (Internal/External) Master-Backup Single-Master Mode with Independent Failover

Preface:

In production, the internal and public networks are separate, so internal and public VIPs need not fail over in lockstep; for example, Keepalived+LVS-DR, Keepalived+Nginx, and Keepalived+HAProxy all work without synchronized floating.

Note: Keepalived+LVS-NAT mode is the exception; it does require synchronized floating.

1. Diagram:

The multicast IP is 224.0.0.111.

                        +------+
                        |Client|
                        +------+
                           /\
                       +--------+
                       |Internet|
                       +--------+
                           /\
                      +-----------+
                      |NAT network|
                      +-----------+
                           /\
               +---------------------------+
               | Internal VIP: 10.16.8.100 |
               +---------------------------+
                   /                    \
+---------------------------+      +---------------------------+
| KA+LVS/Nginx/HAProxy      |      | KA+LVS/Nginx/HAProxy      |
| Int VIP: MASTER    (eth1) |      | Int VIP: BACKUP    (eth1) |
| Int: 10.16.8.10    (eth1) |<---->| Int: 10.16.8.11    (eth1) |
|---------------------------|mcast |---------------------------|
| Pub VIP: MASTER    (eth2) |<---->| Pub VIP: BACKUP    (eth2) |
| Pub: 172.16.8.10   (eth2) |      | Pub: 172.16.8.11   (eth2) |
+---------------------------+      +---------------------------+
                   \                    /
               +---------------------------+
               | Public VIP: 172.16.8.100  |
               +---------------------------+
                           \/
                      +-------------+
                      |Resource pool|
                      +-------------+

2. ka67 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka67
   vrrp_mcast_group4 224.0.0.111
}
vrrp_sync_group VG_1 {
    group {
        External_1
        Internal_1
    }
}
vrrp_instance External_1 {
    state MASTER
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state MASTER
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}

3. ka68 configuration file

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from ka@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id ka68
   vrrp_mcast_group4 224.0.0.111
}
vrrp_instance External_1 {
    state BACKUP
    interface eth1
    virtual_router_id 171
    priority 100
    advert_int 1    
    authentication {
        auth_type PASS
        auth_pass renwole0
    }
    virtual_ipaddress {
        10.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
vrrp_instance Internal_1 {
    state BACKUP
    interface eth2
    virtual_router_id 191
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass renwole1
    }
    virtual_ipaddress {
        172.16.8.100
    }
    notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
    notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
    notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
}
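Both node configurations invoke a notify.sh script on VRRP state changes, but keepalived does not ship one; you write it yourself. Below is a minimal sketch of what such a script could look like; the message format and the commented-out mail command are assumptions, not the actual script from this deployment.

```shell
#!/bin/bash
# notify.sh - keepalived calls this as: notify.sh {master|backup|fault}

status="${1:-master}"   # default exists only so the sketch runs standalone

case "$status" in
    master|backup|fault)
        # Compose and emit a one-line record of the VRRP transition.
        msg="$(hostname) VRRP state changed to ${status} at $(date '+%F %T')"
        echo "$msg"
        # A production script might mail the admin instead, e.g.:
        #   echo "$msg" | mail -s "keepalived: $status" root@localhost
        ;;
    *)
        echo "Usage: $0 {master|backup|fault}" >&2
        exit 1
        ;;
esac
```

Install it at the path referenced in the configuration and make it executable (chmod +x); keepalived silently skips a script it cannot execute.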

Nginx: Password-Protecting Website Directories

In production, there are many scenarios where access to part of a site must be restricted, such as database administration tools (phpMyAdmin, MysqlUpBackUp, and so on); sometimes private directories and files need protection as well. To achieve this we use Nginx location matching rules, explained below.

1. Create the htpasswd file

$ vim /usr/local/nginx/conf/htpasswd

Add the following content:
renwole:xEmWnqjTJipoE

The format of this file is:

username:password

Note: one user and password per line. The password field is not plaintext; it is the string produced by encrypting the password with crypt(3).

2. Generate the password

You can generate the password hash by entering the user information at the following site:

https://tool.oschina.net/htpasswd
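If you prefer not to paste credentials into a website, the hash can also be generated locally. A sketch using openssl (the username and password here are placeholders; apr1 is an Apache-style MD5 scheme that nginx's auth_basic also accepts):

```shell
# Generate a salted apr1 hash and print the "user:hash" line
# to append to /usr/local/nginx/conf/htpasswd.
user=renwole                              # placeholder username
hash=$(openssl passwd -apr1 'S3cret!')    # placeholder password
echo "${user}:${hash}"
```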

3. Configure the protected directory in Nginx

Add the following in the appropriate place in your Nginx virtual-host configuration.

To protect the tools directory:

location ^~ /tools/ {
    auth_basic            "Restricted Area";
    auth_basic_user_file  conf/htpasswd;
}

Note: without `^~`, regex locations (such as a `\.php$` handler) take precedence for files under the directory, so those files can be accessed without authentication; `^~` gives this prefix match priority, so everything under the directory triggers the prompt.

To protect the entire site root:

location / {
    auth_basic            "Restricted Area";
    auth_basic_user_file  conf/htpasswd;
}

After adding the directories to protect, reload the Nginx configuration, or the change will not take effect.

Apache/Nginx: Preventing PHP Execution in a Directory

When building a site, you may need to set permissions on individual directories to achieve the security you want. Below are examples of how to prevent a directory from executing PHP files under Apache and under Nginx.

1. Apache configuration

<Directory /apps/web/renwole/wp-content/uploads>
 php_flag engine off
</Directory>
<Directory ~ "^/apps/web/renwole/wp-content/uploads">
 <Files ~ ".php">
 Order allow,deny
 Deny from all
 </Files>
</Directory>

2. Nginx configuration

location /wp-content/uploads {
    location ~ .*\.(php)?$ {
        deny all;
    }
}

To prevent multiple directories from executing PHP in Nginx:

location ~* ^/(css|uploads)/.*\.(php)$ {
    deny all;
}

After the configuration is done, reload the configuration file or restart the Apache or Nginx service. From then on, any PHP file requested through uploads returns 403, which greatly improves the security of the web directory.

Nginx Reverse Proxy: Adding Login Authentication to Kibana

Since version 5.5, Kibana no longer ships with authentication: the management page is open to anyone who loads it, which is plainly unsafe. The official X-Pack plugin does provide authentication, but only for a limited trial period; after all, X-Pack is a commercial product.

Below I show how to implement authentication for Kibana using an Nginx reverse proxy.

Prerequisite:

《CentOS 7: Compiling and Installing Nginx from Source》

1. Install the Apache httpd password tools

$ yum install httpd-tools -y

2. Generate the Kibana authentication password

$ mkdir -p /usr/local/nginx/conf/passwd
$ htpasswd -c -b /usr/local/nginx/conf/passwd/kibana.passwd Userrenwolecom GN5SKorJ
Adding password for user Userrenwolecom

3. Configure the Nginx reverse proxy

Add the following to the Nginx configuration (or include it from a new file):

$ vim /usr/local/nginx/conf/nginx.conf

server {
listen 103.28.204.65:5601;
auth_basic "Restricted Access";
auth_basic_user_file /usr/local/nginx/conf/passwd/kibana.passwd;
location / {
proxy_pass http://10.28.204.65:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade; 
}
}

4. Configure Kibana

Uncomment the following line:

$ vim /usr/local/kibana/config/kibana.yml

server.host: "10.28.204.65"

5. Restart the Kibana and Nginx services to apply the configuration

$ systemctl restart kibana.service
$ systemctl restart nginx.service

Next, open http://103.28.204.65:5601/ in a browser; an authentication prompt appears. Log in with the username and password generated above.

An Optimized Nginx Configuration for Production

Below is an Nginx tuning configuration suitable for production use; you can also adjust it to your own needs.

 user www www;
 #user & group
 worker_processes auto;
 #usually the number of CPU cores, the number of data disks, and the load pattern; when unsure, set it to the number of available CPU cores ("auto" tries to detect this automatically)
 error_log /usr/local/nginx/logs/error.log crit;
 pid /usr/local/nginx/logs/nginx.pid;
 #location of the pid file; the default is fine

 worker_rlimit_nofile 65535;
 #raise the limit on open files per worker process
 events {
 use epoll;
 multi_accept on;
 #after Nginx is notified of a new connection, call accept() to take as many connections as possible
 worker_connections 65535;
 #maximum number of client connections; must not exceed the worker_rlimit_nofile value
 }
 http {
 include mime.types;
 default_type application/octet-stream;
 #default MIME type
 log_format main '$remote_addr - $remote_user [$time_local] "$request" '
 '$status $body_bytes_sent "$http_referer" '
 '"$http_user_agent" "$http_x_forwarded_for"';
 #log format definition (named "main")
 charset UTF-8;
 #default charset for response headers
 server_tokens off;
 #hide the Nginx version number in error pages and response headers
 access_log off;
 sendfile on;
 tcp_nopush on;
 #send all response headers in one packet instead of one after another
 tcp_nodelay on;
 #disable the Nagle buffering algorithm; tell nginx not to delay small writes
 sendfile_max_chunk 512k;
 #limit on data transferred per sendfile() call; the default 0 means no limit
 keepalive_timeout 65;
 #HTTP keep-alive duration; larger values keep more idle connections around, 0 disables keep-alive, the default is 75
 client_header_timeout 10;
 client_body_timeout 10;
 #timeouts for reading the request header and request body, respectively
 reset_timedout_connection on;
 #tell nginx to close connections of clients that stop responding
 send_timeout 30;
 #timeout for responding to the client; if the client stops reading data, the stale connection is released; default 60s
 limit_conn_zone $binary_remote_addr zone=addr:5m;
 #shared memory zone for per-key state such as the current connection count; 5m means 5 megabytes, large enough for roughly 160K 32-byte states or 80K 64-byte states
 limit_conn addr 100;
 #maximum connections per key; the key here is addr, and the value 100 allows each IP address at most 100 concurrent connections
 server_names_hash_bucket_size 128;
 #raise this value when nginx fails to start with "could not build the server_names_hash, you should increase ..."; 64 is usually enough
 client_body_buffer_size 10K;
 client_header_buffer_size 32k;
 #buffer size for client request headers; can be set according to your system's page size
 large_client_header_buffers 4 32k;
 client_max_body_size 8m;
 #upload size limit, mainly for dynamic applications

 #thread pool tuning; requires nginx built with the --with-threads option
 #aio threads;
 #thread_pool default threads=32 max_queue=65536;
 #aio threads=default;
 #see the nginx thread-pool documentation for more

 #FastCGI performance tuning

 fastcgi_connect_timeout 300;
 #timeout for connecting to the backend FastCGI server
 fastcgi_send_timeout 300;
 #how long an established FastCGI connection may go without sending data before being closed
 fastcgi_read_timeout 300;
 #timeout for receiving the FastCGI response
 fastcgi_buffers 4 64k;
 #set to the size of a typical FastCGI response so most requests are handled in memory; larger responses are buffered to disk
 fastcgi_buffer_size 64k;
 #buffer size for the first part of the FastCGI response; can match the buffer size set by fastcgi_buffers
 fastcgi_busy_buffers_size 128k;
 #buffer size under load; can be twice fastcgi_buffer_size
 fastcgi_temp_file_write_size 128k;
 #block size used when writing to fastcgi_temp_path; defaults to twice fastcgi_buffers; the smaller the value, the likelier a 502 Bad Gateway
 fastcgi_intercept_errors on;
 #whether to pass 4xx/5xx errors to the client, or let nginx handle them with error_page

 #fastcgi_cache tuning (for multiple virtual hosts, move everything except fastcgi_cache_path, noting the keys_zone name, into the PHP location block)

 fastcgi_cache fastcgi_cache;
 #enable the FastCGI cache and name its zone; caching lowers CPU load and helps prevent 502 errors
 fastcgi_cache_valid 200 302 301 1h;
 #which response codes to cache, and for how long
 fastcgi_cache_min_uses 1;
 #number of requests after which a URL is cached
 fastcgi_cache_use_stale error timeout invalid_header http_500;
 #cases in which stale cache entries may be served
 #fastcgi_temp_path /usr/local/nginx/fastcgi_temp;
 fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=fastcgi_cache:15m inactive=1d max_size=1g;
 #keys_zone=name of the cache zone, 15m=memory used for keys, inactive=default expiry, max_size=maximum disk usage
 #cache directory; levels sets the hierarchy, e.g. 1:2 creates 16*256 subdirectories
 fastcgi_cache_key $scheme$request_method$host$request_uri;
 #definition of the fastcgi_cache key
 #fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

 #response headers

 add_header X-Cache $upstream_cache_status;
 #report cache hit status
 add_header X-Frame-Options SAMEORIGIN;
 #response header introduced to mitigate clickjacking
 add_header X-Content-Type-Options nosniff;

 #gzip performance tuning

 gzip on;
 gzip_min_length 1100;
 #minimum response size to compress; e.g. don't compress responses under about 1K, since compressing tiny payloads slows every process handling the request
 gzip_buffers 4 16k;
 gzip_proxied any;
 #allow or forbid compressing proxied responses based on request and response; "any" compresses all of them
 gzip_http_version 1.0;
 gzip_comp_level 9;
 #gzip compression level, 0-9; the higher the value, the better the compression and the higher the CPU cost
 gzip_types text/plain text/css application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json image/jpeg image/gif image/png;
 #MIME types to compress
 gzip_vary on;
 #Vary header support, letting front-end caches and proxies distinguish compressed responses
 include /usr/local/nginx/conf/vhosts/*.conf;
 #directive that includes the contents of other configuration files here

 #static file cache tuning

 open_file_cache max=65535 inactive=20s;
 #cache of open file descriptors; max sets the number of entries and should roughly match the open-file limit; inactive is how long a file may go unrequested before its entry is dropped
 open_file_cache_valid 30s;
 #how often to revalidate cached entries; e.g. a file accessed continuously is rechecked for changes after 30 seconds
 open_file_cache_min_uses 2;
 #minimum number of uses within the inactive period for an entry to remain cached
 open_file_cache_errors on;
 #also cache file-lookup errors; this must be enabled for error caching, and with it nginx reports the same error on access without looking the resource up again

 #asset caching
 server {

 #hotlink protection

 location ~* \.(jpg|gif|png|swf|flv|wma|asf|mp3|mmf|zip|rar)$ {
 #file types to protect
 valid_referers none blocked *.renwole.com renwole.com;
 #the none and blocked parameters are optional; list the domains allowed to use the assets
 if ($invalid_referer) {
 return 403;
 #rewrite ^/ https://renwole.com
 #for referrers that do not match, return 403 or 404, or redirect to a domain
 }
 }
 location ~ .*\.(js|css)$ {
 access_log off;
 expires 180d;
 #health checks and image/JS/CSS requests need no logging: PV statistics are counted per page, and frequent log writes cost IO
 }
 location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|swf|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
 access_log off;
 log_not_found off;
 expires 180d;
 #these assets rarely change, so cache them on the client; repeat visits skip the download, saving bandwidth and speeding up access; cached for 180 days
 }
 }
 server {
 listen 80 default_server;
 server_name .renwole.com;
 rewrite ^ https://renwole.com$request_uri?;
 }
 server {
 listen 443 ssl http2 default_server;
 listen [::]:443 ssl http2;
 server_name .renwole.com;
 root /home/web/renwole;
 index index.html index.php;

 ssl_certificate /etc/letsencrypt/live/renwole.com/fullchain.pem;
 ssl_certificate_key /etc/letsencrypt/live/renwole.com/privkey.pem;

 ssl_dhparam /etc/nginx/ssl/dhparam.pem;
 ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
 ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';

 ssl_session_cache shared:SSL:50m;
 ssl_session_timeout 1d;
 ssl_session_tickets off;
 ssl_prefer_server_ciphers on;
 add_header Strict-Transport-Security max-age=15768000;
 ssl_stapling on;
 ssl_stapling_verify on;

 include /usr/local/nginx/conf/rewrite/wordpress.conf;
 access_log /usr/local/nginx/logs/renwole.log;

 location ~ \.php$ {
 root /home/web/renwole;
 #fastcgi_pass 127.0.0.1:9000;
 fastcgi_pass unix:/var/run/www/php-cgi.sock;
 fastcgi_index index.php;
 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
 include fastcgi_params;
 }
 }
 }

These are the optimizations used on this site; if you have a better approach, feel free to share it. Please bear with any shortcomings.
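The values of worker_processes, worker_rlimit_nofile and worker_connections above depend on the host. A quick way to inspect the numbers they should be matched against (a sketch; run it on the target server):

```shell
nproc                       # CPU cores: what worker_processes auto resolves to
ulimit -n                   # per-process open-file limit, cf. worker_rlimit_nofile
cat /proc/sys/fs/file-max   # system-wide file handle ceiling
```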

For more on the available modules, see https://nginx.org/en/docs/

Nginx Cluster Load Balancing: NFS File Storage Installation, Configuration and Tuning

The previous article, 《Nginx 1.13 Cluster Load Balancer: Reverse Proxy Installation, Configuration and Tuning》, covered the load balancer; this one focuses on the File Storage setup.

10.10.204.62 Load Balancing
10.10.204.63 Nginx Web server
10.10.204.64 Nginx Web server
10.10.204.65 File Storage

1. Install NFS on the File Storage server

yum -y install nfs-utils

2. Configure NFS and create the shared directory

# mkdir -p /Data/webapp
# vim /etc/exports

/Data/webapp 10.10.204.0/24(rw,sync,no_subtree_check,no_root_squash)

3. Enable the services at boot and start them

# systemctl enable rpcbind
# systemctl enable nfs-server
# systemctl start rpcbind
# systemctl start nfs-server

4. Option reference:

rw: read-write. ro: read-only. sync: data is written to both disk and memory.
no_root_squash: a visiting root user keeps root privileges; enabling this is clearly unsafe.
root_squash: map visiting root users to the anonymous user or group, usually nobody or nfsnobody.
all_squash: map all visiting users to the anonymous user or group.
anonuid: the UID of the anonymous user, settable here. anongid: the GID of the anonymous user.
sync: write data synchronously to the memory buffer and the disk; less efficient, but it guarantees consistency.
async: keep data in memory instead of writing it to disk immediately; faster, but recent writes are lost if the server crashes.
no_subtree_check: even when the exported directory is a subdirectory, the NFS server skips checking the parent directory's permissions, which improves performance.
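Putting several of these options together, a hypothetical export that squashes every client user to a dedicated unprivileged account (the UID/GID 1200 here is an assumed example, not part of this deployment) would look like:

```
# /etc/exports - illustrative entry combining the options above
/Data/webapp 10.10.204.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1200,anongid=1200)
```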

5. File Storage server firewall configuration

# firewall-cmd --permanent --add-service=rpc-bind
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --reload

The mountd service is also required for the NFSv3 mounts used below.

6. Install the NFS utilities on the Nginx web servers and mount the share

# yum -y install nfs-utils
# mkdir -p /Data/webapp
# mount -t nfs 10.10.204.65:/Data/webapp /Data/webapp

7. To mount automatically at boot, append one line to this file:

# vim /etc/fstab

10.10.204.65:/Data/webapp /Data/webapp nfs auto,rw,vers=3,hard,intr,tcp,rsize=32768,wsize=32768 0 0

8. Testing from an Nginx web server

Write 16384 16 KB blocks to the testfile file in the NFS directory:

# time dd if=/dev/zero of=/Data/webapp/testfile bs=16k count=16384

  16384+0 records in
  16384+0 records out
  268435456 bytes (268 MB) copied, 2.89525 s, 92.7 MB/s
  real 0m2.944s
  user 0m0.015s
  sys 0m0.579s

Test read performance:

# time dd if=/Data/webapp/testfile of=/dev/null bs=16k

  16384+0 records in
  16384+0 records out
  268435456 bytes (268 MB) copied, 0.132925 s, 2.0 GB/s
  real 0m0.138s
  user 0m0.003s
  sys 0m0.127s

Overall, NFS throughput is acceptable. If it feels slow, adjust the mount options, remount, and re-run the read/write tests repeatedly until you find a configuration that suits your workload.
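Note that the 2.0 GB/s read figure above mostly measures the client's page cache, since the file had just been written. For a write figure that excludes cache effects, dd can be told to flush to stable storage before reporting; a local sketch (the temporary file is a stand-in for a path on the NFS mount):

```shell
f=$(mktemp)
# Write 16 MiB; conv=fdatasync forces a flush before dd reports timing.
dd if=/dev/zero of="$f" bs=16k count=1024 conv=fdatasync
wc -c < "$f"   # 16777216 bytes written
rm -f "$f"
```

For the read test, dropping the client cache first (echo 3 > /proc/sys/vm/drop_caches as root) gives a more honest number.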

Nginx 1.13 Cluster Load Balancer: Reverse Proxy Installation, Configuration and Tuning

What is load balancing? In short: distributing incoming network traffic efficiently across a group of backend servers (also called a server cluster or server pool). Like a traffic signal, it routes client requests across all servers so as to maximize speed and capacity utilization and ensure no single server is overloaded, which would degrade performance. If a server goes down or fails, the load balancer redirects its traffic to the remaining online servers; when a new server is added to the group, the load balancer automatically starts sending it requests.

Environment:
OS: CentOS Linux release 7.3.1611 (Core) x86_64
Web server: nginx/1.13.3

10.10.204.62 Load Balancing
10.10.204.63 Nginx Web server
10.10.204.64 Nginx Web server
10.10.204.65 File Storage

1. Installing the Nginx web servers is not covered here; see 《Installing Nginx》.

2. Set the hostname of each of the four servers (usually based on its IP address), then reboot them.

[root@localhost ~]# vim /etc/hostname

3. Complete Nginx configuration file for the 10.10.204.62 load balancer; pay particular attention to the upstream and proxy_pass sections.

[root@10-10-204-62 ~]# cat /usr/local/nginx/conf/nginx.conf
 user www www;
 worker_processes 1;

 #error_log logs/error.log;
 #error_log logs/error.log notice;
 #error_log logs/error.log info;

pid /usr/local/nginx/logs/nginx.pid;

worker_rlimit_nofile 65535;

events {
 use epoll;
 worker_connections 65535;
 }

http {
 include mime.types;
 default_type application/octet-stream;
 server_tokens off;
 #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
 # '$status $body_bytes_sent "$http_referer" '
 # '"$http_user_agent" "$http_x_forwarded_for"';
 #access_log logs/access.log main;

 sendfile on;
 tcp_nopush on;

 server_names_hash_bucket_size 128;
 client_header_buffer_size 32k;
 large_client_header_buffers 4 32k;
 client_max_body_size 8m;
 tcp_nodelay on;
 keepalive_timeout 65;
 fastcgi_connect_timeout 300;
 fastcgi_send_timeout 300;
 fastcgi_read_timeout 300;
 fastcgi_buffer_size 64k;
 fastcgi_buffers 4 64k;
 fastcgi_busy_buffers_size 128k;
 fastcgi_temp_file_write_size 128k;

 gzip on;
 gzip_min_length 1100;
 gzip_buffers 4 16k;
 gzip_http_version 1.0;
 #gzip compression level, 0-9; the higher the value, the better the compression and the higher the CPU cost
 gzip_comp_level 9;
 gzip_types text/plain text/html text/css application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
 gzip_vary on;
 #
 upstream webserverapps {
 ip_hash;
 server 10.10.204.63:8023 weight=1 max_fails=2 fail_timeout=2;
 server 10.10.204.64:8023 weight=1 max_fails=2 fail_timeout=2;
 #server 127.0.0.1:8080 backup;
 }
 server {
        listen 80;
        server_name www.myname.com myname.com;
 location / {
     proxy_pass http://webserverapps;
     proxy_redirect off;
     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     client_max_body_size 10m;
     client_body_buffer_size 128k;
     proxy_connect_timeout 90;
     proxy_send_timeout 90;
     proxy_read_timeout 90;
     proxy_buffer_size 4k;
     proxy_buffers 4 32k;
     proxy_busy_buffers_size 64k;
     proxy_temp_file_write_size 64k;
     add_header Access-Control-Allow-Origin *;
       }
    }
 }

4. Nginx web server configuration, applied on 10.10.204.63 & 10.10.204.64:

server {
 listen 8023;
 server_name IP_ADDRESS;
 location / {
 root /home/web/wwwroot/;
 index index.html index.htm index.php;
 #getting the real client IP requires Nginx built with the http_realip_module
 set_real_ip_from 10.10.204.0/24;
 real_ip_header X-Real-IP;
     }
 }

5. Load balancing in Nginx is done with upstream, part of Nginx's HTTP Upstream module.

The upstream module currently supports 6 scheduling algorithms, introduced below; fair and url_hash come from third-party modules.

Round robin (default): requests are assigned to the backends one by one in order of arrival; if a backend goes down, the failed system is removed automatically so user access is unaffected. weight sets the round-robin weight: the larger the value, the higher the probability of being selected.
ip_hash: each request is assigned by the hash of the visiting IP, so visitors from the same IP always reach the same backend; this effectively solves the session-sharing problem of dynamic pages.
fair: a smarter algorithm than the two above; it distributes load by page load time, assigning requests according to each backend's response time, with shorter response times served first.
url_hash: assigns requests by the hash of the visited URL, directing each URL to the same backend, which further improves the efficiency of backend cache servers.
least_conn: picks the backend with the fewest current connections each time (the count is not shared; each worker keeps its own array of per-backend connection counts).
hash: this module supports two hashing modes: plain hash and consistent hash (consistent).

6. In the HTTP Upstream module, the server directive specifies each backend's IP address and port, and can also set each backend's state for load-balancing scheduling. Commonly used states:

down: the server temporarily does not participate in load balancing.
backup: a reserved backup machine; it receives requests only when all non-backup machines are down or busy, so it carries the lightest load.
max_fails: the number of allowed failed requests, default 1; when the maximum is exceeded, the error defined by proxy_next_upstream is returned.
fail_timeout: how long the server is paused after max_fails failures; max_fails and fail_timeout can be used together.

Note: when the scheduling algorithm is ip_hash, a backend server's state cannot be backup.

Note: upstream is defined outside server{ }, never inside it. Once defined, reference the upstream with proxy_pass. Also note that every modification of nginx.conf requires reloading the Nginx service.

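Tying the scheduling algorithm and server-state parameters together, an illustrative upstream (the addresses, weights, and least_conn choice are examples, not the configuration used above) might look like:

```nginx
upstream webserverapps {
    least_conn;                       # pick the backend with the fewest active connections
    server 10.10.204.63:8023 weight=2 max_fails=2 fail_timeout=10s;
    server 10.10.204.64:8023 weight=1 max_fails=2 fail_timeout=10s;
    server 10.10.204.66:8023 down;    # temporarily removed from rotation
    server 127.0.0.1:8080 backup;     # used only when all others are unavailable
}
server {
    listen 80;
    location / {
        proxy_pass http://webserverapps;
    }
}
```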
7. For more solutions to the session-sharing problem, see 《Sharing PHP 7.2 Session Variables with a Redis Cache》.

8. File server

Note: you might wonder where all of these distributed servers get their files, and how to keep the data both consistent and safe. That is what the file storage server is for: simply share the data files on the 10.10.204.65 file server with 10.10.204.63 & 10.10.204.64. The sharing is usually done with the NFS network file system; I have already set it up here. If you have not, see 《Nginx Cluster Load Balancing: NFS File Storage Installation, Configuration and Tuning》.