k8s Series [5. k8s High Availability Cluster Deployment]

1. High Availability Cluster Overview

To deploy a highly available k8s cluster, the goal is to have no single point of failure. This rests on two things: load balancing in front of the cluster, and the high-availability mechanisms of the k8s components themselves.

The kube-apiserver is stateless and has no built-in HA of its own. To make it a highly available service that other nodes and components can reach at all times, you put a TCP load balancer in front of it. A k8s cluster generally has at least three master nodes, so the TCP layer load-balances across those three kube-apiserver instances. Some may ask: why TCP load balancing rather than HTTP? Because kube-apiserver traffic is HTTPS-encrypted, terminating TLS on the load balancer is awkward, while plain TCP forwarding is simpler and more efficient, and recent nginx versions support TCP forwarding directly (a sketch of such a configuration follows the component list below).

Where should the three master servers go? That depends on your resources. If you only have one datacenter, the masters all live in that room: put a VIP across the three servers and let keepalived float it, which makes the kube-apiserver endpoint highly available. If you have three datacenters in the same city, place one master in each, deploy a highly available load balancer per datacenter, and have each load balancer forward requests to the kube-apiservers in all three datacenters.

The etcd database is highly available by design; deploying three or more members is enough. kube-scheduler and kube-controller-manager each elect a leader internally: with a default kubeadm install the --leader-elect flag is already set to true, which guarantees that only one kube-scheduler and one kube-controller-manager are active across the master cluster at any time. The roles of the core k8s components are summarized below:

kube-apiserver: the core of the cluster, serving the cluster API and acting as the communication hub between all components; it also enforces cluster security controls.

etcd: the cluster's data store, holding all configuration and state. It is critical: if its data is lost the cluster cannot be recovered, so an HA deployment starts with an HA etcd cluster.

kube-scheduler: the Pod scheduling center of the cluster. With a default kubeadm install the --leader-elect flag is already set to true, ensuring only one kube-scheduler is active across the master cluster.

kube-controller-manager: the cluster state manager. When the actual state drifts from the desired state, kcm works to bring the cluster back; for example, when a pod dies, kcm creates a new one to restore the replica count expected by the corresponding ReplicaSet. With a default kubeadm install the --leader-elect flag is already set to true, ensuring only one kube-controller-manager is active across the master cluster.

kubelet: the kubernetes node agent, responsible for talking to the docker engine on its node.

kube-proxy: runs on every node and forwards traffic from service VIPs to endpoint pods, currently implemented mainly through iptables rules.
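
As mentioned above, recent nginx versions can do plain TCP forwarding through the stream module, which is exactly what the API load balancer needs. Below is a minimal sketch of such a configuration, assuming three kube-apiservers on port 6443 at the addresses used later in this post and a load balancer listening on 16443; the actual nginx-lb.conf generated later by create-config.sh may differ in detail.

    # nginx-lb.conf (sketch): TCP passthrough to the three kube-apiservers
    events {
        worker_connections 1024;
    }
    stream {
        upstream apiserver {
            server 192.168.3.148:6443 max_fails=3 fail_timeout=10s;
            server 192.168.3.149:6443 max_fails=3 fail_timeout=10s;
            server 192.168.3.150:6443 max_fails=3 fail_timeout=10s;
        }
        server {
            listen 16443;
            proxy_connect_timeout 2s;
            proxy_pass apiserver;
        }
    }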

2. k8s High Availability Architecture Diagram

The only difference between the single-datacenter and multi-datacenter layouts is that a single datacenter has one VIP for the whole room, whereas with multiple datacenters each one has its own VIP and its own API entry point.
[Figure: Kubernetes high availability architecture diagram]

3. Deployment Process

  • This walkthrough is a single-datacenter deployment of v1.9.2. The version is quite old because these notes are old, but newer versions follow largely the same process.

3.1: Resource inventory

Host node inventory
    Hostname        IP address       Role               Components
    k8s-master1     192.168.3.148    master             keepalived, nginx, etcd, kubelet, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kube-dashboard, heapster, calico
    k8s-master2     192.168.3.149    master             keepalived, nginx, etcd, kubelet, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kube-dashboard, heapster, calico
    k8s-master3     192.168.3.150    master             keepalived, nginx, etcd, kubelet, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kube-dashboard, heapster, calico
                    192.168.3.157    virtual IP (VIP)
    k8s-node1       192.168.3.154    node               kubelet, kube-proxy
    k8s-node2       192.168.3.155    node               kubelet, kube-proxy

Kubernetes-to-Docker version compatibility
    Kubernetes 1.9  <--Docker 1.11.2 to 1.13.1 and 17.03.x
    Kubernetes 1.8  <--Docker 1.11.2 to 1.13.1 and 17.03.x
    Kubernetes 1.7  <--Docker 1.10.3,  1.11.2,  1.12.6
    Kubernetes 1.6  <--Docker 1.10.3,  1.11.2,  1.12.6
    Kubernetes 1.5  <--Docker 1.10.3,  1.11.2,  1.12.3

Docker image preparation
    Master nodes:
    gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3
    gcr.io/google_containers/kube-proxy-amd64:v1.9.2
    gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
    gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
    gcr.io/google_containers/kube-scheduler-amd64:v1.9.2
    gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
    gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
    gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
    gcr.io/google_containers/etcd-amd64:3.1.10
    gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
    gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
    gcr.io/google_containers/heapster-amd64:v1.4.2
    gcr.io/google_containers/pause-amd64:3.0
    quay.io/calico/node:v3.0.3
    quay.io/calico/kube-controllers:v2.0.1
    quay.io/calico/cni:v2.0.1
    quay.io/coreos/flannel:v0.9.1-amd64
    nginx:latest

    Node (worker) nodes:
    gcr.io/google_containers/pause-amd64:3.0
    gcr.io/google_containers/kube-proxy-amd64:v1.9.2
    quay.io/calico/node:v3.0.3
    quay.io/calico/kube-controllers:v2.0.1
    quay.io/calico/cni:v2.0.1
    quay.io/coreos/flannel:v0.9.1-amd64
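
Since gcr.io and quay.io are often unreachable from the deployment environment, it helps to pull these images on a machine that does have access and ship them to the nodes as a tar archive. A rough sketch of that, assuming the image list above has been saved to a file called images.txt (a hypothetical helper file, not part of the original procedure):

    # on a machine with registry access: pull every image and bundle them into one archive
    while read -r img; do
        docker pull "$img"
    done < images.txt
    docker save $(cat images.txt) -o k8s-v1.9.2-images.tar

    # on each cluster node: load the archive into the local docker daemon
    docker load -i k8s-v1.9.2-images.tar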

3.2: Software installation (run the same steps on all nodes)

    mkdir -p /data/soft/docker
    yum -y install epel-release
    yum install -y git wget lrzsz vim net-tools yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    cat > /etc/yum.repos.d/kubernetes.repo <<EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    EOF

    cd /data/soft/docker
    yumdownloader docker-ce-17.03.0.ce
    yumdownloader docker-ce-selinux-17.03.0.ce
    yum remove docker docker-common docker-selinux docker-engine
    yum localinstall *
    yum -y install kubelet-1.9.2 kubectl-1.9.2 kubeadm-1.9.2

    systemctl stop firewalld && systemctl disable firewalld
    systemctl start docker && systemctl enable docker
    systemctl start kubelet && systemctl enable kubelet

    Master nodes only:
    yum install -y keepalived
    systemctl enable keepalived && systemctl start keepalived
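
Before moving on it is worth confirming that the intended versions were actually installed and that the services are up; a quick check along these lines (a suggestion, not part of the original notes):

    docker version --format '{{.Server.Version}}'    # expect 17.03.0-ce
    kubeadm version
    kubelet --version                                # expect v1.9.2
    systemctl is-active docker kubelet
    systemctl is-active keepalived                   # masters only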

3.3: System settings (run the same steps on all nodes)

#selinux
    setenforce 0
    cp /etc/selinux/config /etc/selinux/config.bak
    sed -i '/SELINUX=enforcing/s/enforcing/disabled/' /etc/selinux/config

#configure kernel routing/bridge parameters to avoid kubeadm routing warnings, and disable swap
    echo "
    vm.swappiness = 0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward=1
    " >> /etc/sysctl.conf
    sysctl -p
    swapoff -a
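
#(assumption, not in the original notes) swapoff -a only lasts until the next reboot; to keep swap
#disabled permanently, also comment out any swap entries in /etc/fstab:
    sed -i.bak '/ swap / s/^/#/' /etc/fstab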

#adjust the kubelet environment: change KUBELET_CGROUP_ARGS to --cgroup-driver=cgroupfs so it matches docker's cgroup driver

    cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.bak
    sed -i '/KUBELET_CGROUP_ARGS/s/systemd/cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.test.sui.internal/test/pause-amd64:3.0"
    EOF

#adjust the docker configuration to trust the insecure private registry
    cp -p /usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.service.bak
    sed -i '/ExecStart=/s/$/& --insecure-registry registry.test.sui.internal/' /usr/lib/systemd/system/docker.service

    systemctl daemon-reload
    systemctl restart kubelet
    systemctl restart docker
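
#(assumption, a sanity check not in the original notes) kubelet and docker must agree on the cgroup
#driver or kubelet will fail to start; both commands below should report cgroupfs:
    docker info 2>/dev/null | grep -i 'cgroup driver'
    grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf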

#firewall settings
    master: open the relevant firewalld ports on all master nodes (since these services run in docker, this step can be skipped with docker 17.x, because docker automatically adds the required iptables rules)
    Protocol   Direction   Port         Description
    TCP        Inbound    16443*        Load balancer Kubernetes API server port
    TCP        Inbound    6443*        Kubernetes API server
    TCP        Inbound    4001        etcd listen client port
    TCP        Inbound    2379-2380    etcd server client API
    TCP        Inbound    10250        Kubelet API
    TCP        Inbound    10251        kube-scheduler
    TCP        Inbound    10252        kube-controller-manager
    TCP        Inbound    10255        Read-only Kubelet API
    TCP        Inbound    30000-32767    NodePort Services

    $ systemctl status firewalld
    $ firewall-cmd --zone=public --add-port=16443/tcp --permanent
    $ firewall-cmd --zone=public --add-port=6443/tcp --permanent
    $ firewall-cmd --zone=public --add-port=4001/tcp --permanent
    $ firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
    $ firewall-cmd --zone=public --add-port=10250/tcp --permanent
    $ firewall-cmd --zone=public --add-port=10251/tcp --permanent
    $ firewall-cmd --zone=public --add-port=10252/tcp --permanent
    $ firewall-cmd --zone=public --add-port=10255/tcp --permanent
    $ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
    $ firewall-cmd --reload
    $ firewall-cmd --list-all --zone=public

    public (active)
      target: default
      icmp-block-inversion: no
      interfaces: ens2f1 ens1f0 nm-bond
      sources:
      services: ssh dhcpv6-client
      ports: 4001/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 30000-32767/tcp
      protocols:
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:

#node: open the relevant firewalld ports on all worker nodes (as above, this can be skipped with docker 17.x, because docker automatically adds the required iptables rules)
    Protocol   Direction   Port         Description
    TCP        Inbound    10250        Kubelet API
    TCP        Inbound    10255        Read-only Kubelet API
    TCP        Inbound    30000-32767    NodePort Services

    $ systemctl status firewalld

    $ firewall-cmd --zone=public --add-port=10250/tcp --permanent
    $ firewall-cmd --zone=public --add-port=10255/tcp --permanent
    $ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent

    $ firewall-cmd --reload

    $ firewall-cmd --list-all --zone=public
    public (active)
      target: default
      icmp-block-inversion: no
      interfaces: ens2f1 ens1f0 nm-bond
      sources:
      services: ssh dhcpv6-client
      ports: 10250/tcp 10255/tcp 30000-32767/tcp
      protocols:
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:

All nodes: allow kube-proxy forwarding on every kubernetes node
    $ firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment "kube-proxy redirects"
    $ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment "docker subnet"
    $ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -i flannel.1 -j ACCEPT -m comment --comment "flannel subnet"
    $ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o flannel.1 -j ACCEPT -m comment --comment "flannel subnet"
    $ firewall-cmd --reload

    $ firewall-cmd --direct --get-all-rules
    ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment 'kube-proxy redirects'
    ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment 'docker subnet'
    ipv4 filter FORWARD 1 -i flannel.1 -j ACCEPT -m comment --comment 'flannel subnet'
    ipv4 filter FORWARD 1 -o flannel.1 -j ACCEPT -m comment --comment 'flannel subnet'

    #on all kubernetes nodes, delete this iptables rule so that kube-proxy can serve NodePort services (note: the command must be re-run every time firewalld restarts; a sketch of how to automate that follows this block)
    iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
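
Since that REJECT rule reappears every time firewalld restarts, one way to avoid forgetting the command is a small oneshot systemd unit tied to firewalld. This is only a sketch and the unit name is made up for illustration; adapt it to your environment if you use it at all:

    cat > /etc/systemd/system/k8s-strip-reject.service <<EOF
    [Unit]
    Description=Remove the firewalld REJECT rule that blocks NodePort traffic
    After=firewalld.service
    PartOf=firewalld.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/sh -c '/usr/sbin/iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited || true'

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload && systemctl enable k8s-strip-reject && systemctl start k8s-strip-reject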

3.4: Configuration file initialization


#on all master nodes, clone the repository and change into its directory
    git clone https://github.com/cookeem/kubeadm-ha
    cd kubeadm-ha

    #on all master nodes, fill in the settings of the initialization script; every item is explained by the comments in the script, and it is important to get them all right. This script generates the key configuration files used below.
    [root@k8s-master1 kubeadm-ha]# cat create-config.sh
    #!/bin/bash

    # local machine ip address
    export K8SHA_IPLOCAL=192.168.3.148

    # local machine etcd name, options: etcd1, etcd2, etcd3
    export K8SHA_ETCDNAME=etcd1

    # local machine keepalived state config, options: MASTER, BACKUP. A keepalived cluster has only one MASTER; the others are BACKUP
    export K8SHA_KA_STATE=MASTER

    # local machine keepalived priority config, options: 102, 101, 100. MASTER must be 102
    export K8SHA_KA_PRIO=102

    # local machine keepalived network interface name config, for example: eth0
    export K8SHA_KA_INTF=ens3

    #######################################
    # all masters settings below must be same
    #######################################

    # master keepalived virtual ip address
    export K8SHA_IPVIRTUAL=192.168.3.157

    # master01 ip address
    export K8SHA_IP1=192.168.3.148

    # master02 ip address
    export K8SHA_IP2=192.168.3.149

    # master03 ip address
    export K8SHA_IP3=192.168.3.150

    # master01 hostname
    export K8SHA_HOSTNAME1=k8s-master1

    # master02 hostname
    export K8SHA_HOSTNAME2=k8s-master2

    # master03 hostname
    export K8SHA_HOSTNAME3=k8s-master3

    # keepalived auth_pass config, all masters must be same
    export K8SHA_KA_AUTH=4cdf7dc3b4c90194d1600c483e10ad1d

    # kubernetes cluster token, you can use 'kubeadm token generate' to get a new one
    export K8SHA_TOKEN=7f276c.0741d82a5337f526

    # kubernetes CIDR pod subnet, if CIDR pod subnet is "10.244.0.0/16" please set to "10.244.0.0\\/16"
    export K8SHA_CIDR=10.244.0.0\\/16

    # kubernetes CIDR service subnet, if CIDR service subnet is "10.96.0.0/12" please set to "10.96.0.0\\/12"
    export K8SHA_SVC_CIDR=10.96.0.0\\/16

    # calico network settings, set a reachable ip address for the cluster network interface, for example you can use the gateway ip address
    export K8SHA_CALICO_REACHABLE_IP=192.168.3.1

    ##############################
    # please do not modify anything below
    ##############################

    # set etcd cluster docker-compose.yaml file
    sed \
    -e "s/K8SHA_ETCDNAME/$K8SHA_ETCDNAME/g" \
    -e "s/K8SHA_IPLOCAL/$K8SHA_IPLOCAL/g" \
    -e "s/K8SHA_IP1/$K8SHA_IP1/g" \
    -e "s/K8SHA_IP2/$K8SHA_IP2/g" \
    -e "s/K8SHA_IP3/$K8SHA_IP3/g" \
    etcd/docker-compose.yaml.tpl > etcd/docker-compose.yaml

    echo 'set etcd cluster docker-compose.yaml file success: etcd/docker-compose.yaml'

    # set keepalived config file
    mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

    cp keepalived/check_apiserver.sh /etc/keepalived/

    sed \
    -e "s/K8SHA_KA_STATE/$K8SHA_KA_STATE/g" \
    -e "s/K8SHA_KA_INTF/$K8SHA_KA_INTF/g" \
    -e "s/K8SHA_IPLOCAL/$K8SHA_IPLOCAL/g" \
    -e "s/K8SHA_KA_PRIO/$K8SHA_KA_PRIO/g" \
    -e "s/K8SHA_IPVIRTUAL/$K8SHA_IPVIRTUAL/g" \
    -e "s/K8SHA_KA_AUTH/$K8SHA_KA_AUTH/g" \
    keepalived/keepalived.conf.tpl > /etc/keepalived/keepalived.conf

    echo 'set keepalived config file success: /etc/keepalived/keepalived.conf'

    # set nginx load balancer config file
    sed \
    -e "s/K8SHA_IP1/$K8SHA_IP1/g" \
    -e "s/K8SHA_IP2/$K8SHA_IP2/g" \
    -e "s/K8SHA_IP3/$K8SHA_IP3/g" \
    nginx-lb/nginx-lb.conf.tpl > nginx-lb/nginx-lb.conf

    echo 'set nginx load balancer config file success: nginx-lb/nginx-lb.conf'

    # set kubeadm init config file
    sed \
    -e "s/K8SHA_HOSTNAME1/$K8SHA_HOSTNAME1/g" \
    -e "s/K8SHA_HOSTNAME2/$K8SHA_HOSTNAME2/g" \
    -e "s/K8SHA_HOSTNAME3/$K8SHA_HOSTNAME3/g" \
    -e "s/K8SHA_IP1/$K8SHA_IP1/g" \
    -e "s/K8SHA_IP2/$K8SHA_IP2/g" \
    -e "s/K8SHA_IP3/$K8SHA_IP3/g" \
    -e "s/K8SHA_IPVIRTUAL/$K8SHA_IPVIRTUAL/g" \
    -e "s/K8SHA_TOKEN/$K8SHA_TOKEN/g" \
    -e "s/K8SHA_CIDR/$K8SHA_CIDR/g" \
    -e "s/K8SHA_SVC_CIDR/$K8SHA_SVC_CIDR/g" \
    kubeadm-init.yaml.tpl > kubeadm-init.yaml

    echo 'set kubeadm init config file success: kubeadm-init.yaml'

    # set canal deployment config file

    sed \
    -e "s/K8SHA_CIDR/$K8SHA_CIDR/g" \
    -e "s/K8SHA_CALICO_REACHABLE_IP/$K8SHA_CALICO_REACHABLE_IP/g" \
    kube-canal/canal.yaml.tpl > kube-canal/canal.yaml

    echo 'set canal deployment config file success: kube-canal/canal.yaml'
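
Keep in mind that the "local machine" block at the top of the script differs on each master; only the values below the "all masters settings" divider must be identical. On k8s-master2, for example, the local values would presumably be adjusted roughly like this (a sketch based on the options listed in the script's own comments; k8s-master3 would use etcd3, priority 100 and its own IP):

    export K8SHA_IPLOCAL=192.168.3.149
    export K8SHA_ETCDNAME=etcd2
    export K8SHA_KA_STATE=BACKUP
    export K8SHA_KA_PRIO=101
    export K8SHA_KA_INTF=ens3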

3.5: Run the configuration script on all master nodes to generate the corresponding configuration files, which include:

1. the etcd cluster docker-compose.yaml file
2. the keepalived configuration file
3. the nginx load balancer configuration file (nginx-lb.conf)
4. the kubeadm init configuration file
5. the canal configuration file

    $ ./create-config.sh
    set etcd cluster docker-compose.yaml file success: etcd/docker-compose.yaml
    set keepalived config file success: /etc/keepalived/keepalived.conf
    set nginx load balancer config file success: nginx-lb/nginx-lb.conf
    set kubeadm init config file success: kubeadm-init.yaml
    set canal deployment config file success: kube-canal/canal.yaml

    [root@k8s-master1 kubeadm-ha]# cat etcd/docker-compose.yaml
    version: '2'
    services:
      etcd:
        image: gcr.io/google_containers/etcd-amd64:3.1.10
        container_name: etcd
        hostname: etcd
        volumes:
        - /etc/ssl/certs:/etc/ssl/certs
        - /var/lib/etcd-cluster:/var/lib/etcd
        ports:
        - 4001:4001
        - 2380:2380
        - 2379:2379
        restart: always
        command: ["sh", "-c", "etcd --name=etcd1 \
          --advertise-client-urls=http://192.168.3.148:2379,http://192.168.3.148:4001 \
          --listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
          --initial-advertise-peer-urls=http://192.168.3.148:2380 \
          --listen-peer-urls=http://0.0.0.0:2380 \
          --initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
          --initial-cluster=etcd1=http://192.168.3.148:2380,etcd2=http://192.168.3.149:2380,etcd3=http://192.168.3.150:2380 \
          --initial-cluster-state=new \
          --auto-tls \
          --peer-auto-tls \
          --data-dir=/var/lib/etcd"]

    [root@k8s-master1 kubeadm-ha]# ls keepalived/
    check_apiserver.sh  keepalived.conf.tpl
    [root@k8s-master1 kubeadm-ha]# cat /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 2
        weight -5
        fall 3  
        rise 2
    }
    vrrp_instance VI_1 {
        state MASTER
        interface ens3
        mcast_src_ip 192.168.3.148
        virtual_router_id 157
        priority 102
        advert_int 2
        authentication {
            auth_type PASS
            auth_pass 4cdf7dc3b4c90194d1600c483e10ad1d
        }
        virtual_ipaddress {
            192.168.3.157
        }
        track_script {
           chk_apiserver
        }
    }

    [root@k8s-master1 kubeadm-ha]# cat nginx-lb/docker-compose.yaml
    version: '2'
    services:
      etcd:
        image: nginx:latest
        container_name: nginx-lb
        hostname: nginx-lb
        volumes:
        - ./nginx-lb.conf:/etc/nginx/nginx.conf
        ports:
        - 16443:16443
        restart: always

    [root@k8s-master1 kubeadm-ha]# cat kubeadm-init.yaml
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    kubernetesVersion: v1.9.2
    networking:
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/16
    apiServerCertSANs:
    - k8s-master1
    - k8s-master2
    - k8s-master3
    - 192.168.3.148
    - 192.168.3.149
    - 192.168.3.150
    - 192.168.3.157
    - 127.0.0.1
    etcd:
      endpoints:
      - http://192.168.3.148:2379
      - http://192.168.3.149:2379
      - http://192.168.3.150:2379
    token: 7f276c.0741d82a5337f526
    tokenTTL: "0"

#standalone etcd cluster deployment; if etcd was installed before, clear the old data first
    rm -rf /var/lib/etcd
    docker-compose --file etcd/docker-compose.yaml stop
    docker-compose --file etcd/docker-compose.yaml rm -f

#start the etcd cluster
    docker-compose --file etcd/docker-compose.yaml up -d

#verify that the etcd cluster is healthy
    docker exec -ti etcd etcdctl cluster-health
    [root@k8s-master1 kubeadm-ha]# docker exec -ti etcd etcdctl cluster-health
    member 89e5a572671ddffc is healthy: got healthy result from http://192.168.3.148:2379
    member bf48e493e3fe022d is healthy: got healthy result from http://192.168.3.149:2379
    member d9c99f531d70dd69 is healthy: got healthy result from http://192.168.3.150:2379
    cluster is healthy

    docker exec -ti etcd etcdctl member list
    [root@k8s-master1 kubeadm-ha]# docker exec -ti etcd etcdctl member list
    89e5a572671ddffc: name=etcd1 peerURLs=http://192.168.3.148:2380 clientURLs=http://192.168.3.148:2379,http://192.168.3.148:4001 isLeader=false
    bf48e493e3fe022d: name=etcd2 peerURLs=http://192.168.3.149:2380 clientURLs=http://192.168.3.149:2379,http://192.168.3.149:4001 isLeader=true
    d9c99f531d70dd69: name=etcd3 peerURLs=http://192.168.3.150:2380 clientURLs=http://192.168.3.150:2379,http://192.168.3.150:4001 isLeader=false
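
Beyond cluster-health it is worth confirming that writes actually replicate across members. A simple smoke test (not part of the original steps; /test is just a throwaway key, using the v2 etcdctl commands bundled in this image):

    # on k8s-master1: write a key through the local member
    docker exec -ti etcd etcdctl set /test ok
    # on k8s-master2 or k8s-master3: read it back, then clean up
    docker exec -ti etcd etcdctl get /test
    docker exec -ti etcd etcdctl rm /test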

3.6: Master initialization

#initialize the first master
Run kubeadm init:
    kubeadm init --config=kubeadm-init.yaml

#configure the kubectl client connection on all master nodes
    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc
    source ~/.bashrc

#install the base add-ons
#install the canal network components
    kubectl apply -f kube-canal/

    wait until all pods are running normally
    kubectl get pods --all-namespaces -o wide

#install the dashboard
    kubectl apply -f kube-dashboard/

    serviceaccount "admin-user" created
    clusterrolebinding "admin-user" created
    secret "kubernetes-dashboard-certs" created
    serviceaccount "kubernetes-dashboard" created
    role "kubernetes-dashboard-minimal" created
    rolebinding "kubernetes-dashboard-minimal" created
    deployment "kubernetes-dashboard" created
    service "kubernetes-dashboard" created

    access the dashboard in a browser at
    https://k8s-master1:30000/#!/login

    get a token and paste it into the token field on the login page to enter the dashboard
    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

#install heapster
    kubectl apply -f kube-heapster/influxdb/
    service "monitoring-grafana" created
    serviceaccount "heapster" created
    deployment "heapster" created
    service "heapster" created
    deployment "monitoring-influxdb" created
    service "monitoring-influxdb" created

    kubectl apply -f kube-heapster/rbac/
    clusterrolebinding "heapster" created

    kubectl get pods --all-namespaces

#master cluster HA setup
#copy the certificates
    on k8s-master1, copy the /etc/kubernetes/pki directory to k8s-master2 and k8s-master3; starting with v1.9.x, kubeadm checks whether the pki directory already contains certificates and skips certificate generation if it does
    scp -r /etc/kubernetes/pki k8s-master2:/etc/kubernetes/
    scp -r /etc/kubernetes/pki k8s-master3:/etc/kubernetes/

#initialize the remaining master nodes
#initialize k8s-master2; wait until all pods have started normally before initializing the next master, and in particular make sure kube-apiserver-{current-node-name} is in the running state
    kubeadm init --config=kubeadm-init.yaml

#initialize k8s-master3 the same way, again waiting for all pods, especially kube-apiserver-{current-node-name}, to reach the running state
    kubeadm init --config=kubeadm-init.yaml

#run dns on multiple nodes
    kubectl scale --replicas=2 -n kube-system deployment/kube-dns

#check the cluster state
    kubectl get pods --all-namespaces -o wide
    [root@k8s-master1 kubeadm-ha]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
    kube-system   canal-kcnfp                             3/3       Running   0          3h
    kube-system   canal-lzzf8                             3/3       Running   17         2d
    kube-system   canal-sf8l6                             3/3       Running   0          2d
    kube-system   canal-trdqt                             3/3       Running   0          2d
    kube-system   canal-znmxq                             3/3       Running   1          3h
    kube-system   heapster-698c5f45bd-7xqc9               1/1       Running   0          5h
    kube-system   kube-apiserver-k8s-master1              1/1       Running   13         4h
    kube-system   kube-apiserver-k8s-master2              1/1       Running   0          5h
    kube-system   kube-apiserver-k8s-master3              1/1       Running   0          2d
    kube-system   kube-controller-manager-k8s-master1     1/1       Running   2          4h
    kube-system   kube-controller-manager-k8s-master2     1/1       Running   0          5h
    kube-system   kube-controller-manager-k8s-master3     1/1       Running   0          2d
    kube-system   kube-dns-6f4fd4bdf-bjq2m                3/3       Running   0          5h
    kube-system   kube-dns-6f4fd4bdf-bqxmc                3/3       Running   0          2d
    kube-system   kube-proxy-5g4q7                        1/1       Running   0          4h
    kube-system   kube-proxy-c2fwf                        1/1       Running   1          3h
    kube-system   kube-proxy-j4fcd                        1/1       Running   0          4h
    kube-system   kube-proxy-jqbgr                        1/1       Running   0          4h
    kube-system   kube-proxy-wsqlr                        1/1       Running   1          3h
    kube-system   kube-scheduler-k8s-master1              1/1       Running   1          4h
    kube-system   kube-scheduler-k8s-master2              1/1       Running   0          5h
    kube-system   kube-scheduler-k8s-master3              1/1       Running   0          2d
    kube-system   kubernetes-dashboard-54cc6684f5-7t6nx   1/1       Running   0          5h
    kube-system   monitoring-grafana-5ffb49ff84-cb7b5     1/1       Running   0          5h
    kube-system   monitoring-influxdb-5b77d47fdd-lw76t    1/1       Running   0          5h

#keepalived: restart keepalived on the masters
    systemctl restart keepalived

#check that the virtual IP responds
    ping -c 2 192.168.3.157

#nginx load balancing: start nginx as the load balancer on the masters
    docker-compose -f nginx-lb/docker-compose.yaml up -d

#on the masters, verify that the load balancer and keepalived work
    curl -k https://192.168.3.157:16443

#kube-proxy: on k8s-master1, make kube-proxy highly available by pointing server at the virtual IP and the load balancer's port 16443
    kubectl edit -n kube-system configmap/kube-proxy
    server: https://192.168.3.157:16443

#restart the kube-proxy pods
    kubectl delete pod -n kube-system $(kubectl get pods --all-namespaces -o wide | grep kube-proxy|awk '{print $2}')

#join the worker nodes to the HA cluster with kubeadm
#run the join on every worker node; here the apiserver address of k8s-master1 is used uniformly to join the cluster
    kubeadm join --token xxx 192.168.3.148:6443 --discovery-token-ca-cert-hash sha256:xxxx

#on every worker node, update the cluster settings so that server points to the virtual IP and the load balancer's port 16443
    sed -i '/server:/s#https:.*#https://192.168.3.157:16443#g' /etc/kubernetes/bootstrap-kubelet.conf
    sed -i '/server:/s#https:.*#https://192.168.3.157:16443#g' /etc/kubernetes/kubelet.conf

    grep 192.168.3.157 /etc/kubernetes/*.conf
    [root@k8s-node1 ~]# grep 192.168.3.157 /etc/kubernetes/*.conf
    /etc/kubernetes/bootstrap-kubelet.conf:    server: https://192.168.3.157:16443
    /etc/kubernetes/kubelet.conf:    server: https://192.168.3.157:16443

    systemctl restart docker kubelet

#label the worker nodes
    kubectl label nodes k8s-node1 role=worker
    kubectl label nodes k8s-node2 role=worker
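
With the workers joined, a quick failover test confirms that the HA pieces actually take over. A rough sketch, assuming the interface name and VIP used throughout this post; the container name filter is an assumption about how kubelet names the static-pod containers:

    # on k8s-master1: stop the local apiserver container to simulate a failure
    docker stop $(docker ps -q --filter name=k8s_kube-apiserver)

    # on another master: the VIP should still answer through the load balancer
    curl -k https://192.168.3.157:16443/healthz
    kubectl --kubeconfig /etc/kubernetes/admin.conf --server=https://192.168.3.157:16443 get nodes

    # back on k8s-master1: check whether keepalived has floated the VIP away
    ip addr show ens3 | grep 192.168.3.157 || echo "VIP has moved to another master"

    # kubelet will restart the stopped apiserver container shortly, after which the VIP can return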

4. Summary

Overall there are a lot of steps. I later wrote a deployment script that can roll out the whole thing with one command once the node IPs are filled in. It does require localizing the resources first, that is, putting the prepared tools and components on the internal network so that every download is reliable and the deployment is not affected by external networks. I will tidy it up and publish it on github later.

