Building a highly available Kubernetes cluster with kubeadm (personally tested and working)

kubeadm is a tool released by the official Kubernetes community for quickly deploying a Kubernetes cluster.

It can stand up a cluster with just two commands:

# Create a master node
$ kubeadm init

# Join a worker node to the cluster
$ kubeadm join <master-ip>:<port>

Installation requirements

Before you begin, the machines used for the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Internet access for pulling images; if the servers are offline, download the images in advance and import them on each node (a sketch follows this list)
  • Swap disabled
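
If the servers have no internet access, one rough approach (once Docker and kubeadm are installed as described in section 5) is to pull the required images on a machine that does have access, export them, and import them on every node. The image name below is only an example; the authoritative list comes from kubeadm itself:

# On a machine with internet access: list the required images, then pull and export each one
kubeadm config images list --kubernetes-version v1.16.3
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.3
docker save -o kube-apiserver-v1.16.3.tar registry.aliyuncs.com/google_containers/kube-apiserver:v1.16.3

# Copy the tar files to every node and import them there
docker load -i kube-apiserver-v1.16.3.tar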

Prepare the environment

Role               IP
master1            192.168.3.155
master2            192.168.3.156
node1              192.168.3.157
VIP (virtual IP)   192.168.3.158

# Stop the firewall (a minimal install may not have firewalld at all)
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary

# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent

# Set the hostname according to the plan
hostnamectl set-hostname <hostname> # master1, master2 and node1 respectively

# Add hosts entries on the masters
cat >> /etc/hosts << EOF
192.168.3.158 master.k8s.io k8s-vip
192.168.3.155 master01.k8s.io master1
192.168.3.156 master02.k8s.io master2
192.168.3.157 node01.k8s.io node1
EOF
ping node1 # or ping node01.k8s.io, to confirm the entries take effect

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

Deploy keepalived on all master nodes

3.1 Install the required packages and keepalived

yum install -y conntrack-tools libseccomp libtool-ltdl

yum install -y keepalived

3.2 Configure the master nodes

Configuration on master1

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eno33554984
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.3.158
    }
    track_script {
        check_haproxy
    }
}
EOF

Configuration on master2

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno33554984
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.3.158
    }
    track_script {
        check_haproxy
    }
}
EOF
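
Note that the interface eno33554984 line in both files must match the actual NIC name on your hosts (on other machines it is often something like ens33 or eth0); check it before starting keepalived, for example:

# Find the interface that carries the 192.168.3.x address and substitute its name into keepalived.conf
ip -o -4 addr show | grep 192.168.3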

3.3 Start and verify

Run the following on both master nodes

# Start keepalived
$ systemctl start keepalived.service
# Enable it to start at boot
$ systemctl enable keepalived.service
# Check the status
$ systemctl status keepalived.service # master1 shown as an example
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2022-02-06 02:42:13 EST; 11s ago
Main PID: 2985 (keepalived)
CGroup: /system.slice/keepalived.service
├─2985 /usr/sbin/keepalived -D
├─2986 /usr/sbin/keepalived -D
└─2987 /usr/sbin/keepalived -D

Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158

After starting keepalived, check the NIC on master1; the VIP should be bound to it

$ ip a s eno33554984
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:b8:e6:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.155/24 brd 192.168.3.255 scope global eno33554984
valid_lft forever preferred_lft forever
inet 192.168.3.158/32 scope global eno33554984
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feb8:e6c1/64 scope link
valid_lft forever preferred_lft forever
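
As an optional sanity check of the failover (assuming both masters are configured as above), stopping keepalived on master1 should move the VIP to master2 within a few seconds, and starting it again lets master1, which has the higher priority, reclaim it:

# On master1
systemctl stop keepalived
# On master2: the VIP 192.168.3.158 should now show up here
ip a s eno33554984 | grep 192.168.3.158
# On master1: restore keepalived afterwards
systemctl start keepalived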

Deploy haproxy

4.1 Install

yum install -y haproxy

4.2 Configure

The configuration is identical on both master nodes. It declares the two master API servers as backends and binds haproxy to port 16443, so port 16443 is the entry point of the cluster.

cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      master01.k8s.io   192.168.3.155:6443 check
    server      master02.k8s.io   192.168.3.156:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF

4.3 Start and verify

Start it on both masters

# Enable at boot
$ systemctl enable haproxy
# Start haproxy
$ systemctl start haproxy
# Check the status
$ systemctl status haproxy # master1 shown as an example
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2022-02-06 02:43:21 EST; 7s ago
Main PID: 3067 (haproxy-systemd)
CGroup: /system.slice/haproxy.service
├─3067 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
├─3068 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
└─3069 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Feb 06 02:43:21 master1 systemd[1]: Started HAProxy Load Balancer.
Feb 06 02:43:21 master1 systemd[1]: Starting HAProxy Load Balancer...
Feb 06 02:43:21 master1 haproxy-systemd-wrapper[3067]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Feb 06 02:43:21 master1 haproxy-systemd-wrapper[3067]: [WARNING] 036/024321 (3068) : config : 'option forwardfor' ignored for frontend 'kubernetes-api...TP mode.
Feb 06 02:43:21 master1 haproxy-systemd-wrapper[3067]: [WARNING] 036/024321 (3068) : config : 'option forwardfor' ignored for backend 'kubernetes-apis...TP mode.
Hint: Some lines were ellipsized, use -l to show in full.

Check the listening ports

$ yum install -y net-tools
$ netstat -lntup|grep haproxy
tcp 0 0 0.0.0.0:1080 0.0.0.0:* LISTEN 3069/haproxy
tcp 0 0 0.0.0.0:16443 0.0.0.0:* LISTEN 3069/haproxy
udp 0 0 0.0.0.0:52599 0.0.0.0:* 3068/haproxy

Install Docker/kubeadm/kubelet on all nodes

Kubernetes uses Docker as the default CRI (container runtime) here, so install Docker first.

5.1 Install Docker

$ yum install -y wget
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ systemctl status docker # master1 shown as an example
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2022-02-06 02:46:29 EST; 6s ago
Docs: https://docs.docker.com
Main PID: 14229 (dockerd)
Memory: 49.1M
CGroup: /system.slice/docker.service
├─14229 /usr/bin/dockerd
└─14236 docker-containerd --config /var/run/docker/containerd/containerd.toml

Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.698832784-05:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4...odule=grpc
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.698859262-05:00" level=info msg="Loading containers: start."
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.708720986-05:00" level=warning msg="Running modprobe bridge br_netfilter failed with message...
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.815060362-05:00" level=info msg="Default bridge (docker0) is assigned with an IP a...P address"
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.879757245-05:00" level=info msg="Loading containers: done."
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.897387839-05:00" level=info msg="Docker daemon" commit=e68fc7a graphdriver(s)=devi...18.06.1-ce
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.897655494-05:00" level=info msg="Daemon has completed initialization"
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.903063067-05:00" level=warning msg="Could not register builder git source: failed ... in $PATH"
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29.918047621-05:00" level=info msg="API listen on /var/run/docker.sock"
Feb 06 02:46:29 master1 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

$ docker --version # master1 shown as an example
Docker version 18.06.1-ce, build e68fc7a

$ cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

$ systemctl restart docker
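
As a quick optional check that the mirror configuration was picked up after the restart, docker info should list it under Registry Mirrors:

$ docker info | grep -A 1 "Registry Mirrors"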

5.2 Add the Aliyun YUM repository

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5.3 Install kubeadm, kubelet and kubectl

Because versions are updated frequently, pin the version numbers explicitly:

$ yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
$ systemctl enable kubelet

Deploy the Kubernetes master

6.1 Create the kubeadm configuration file

Run this on the master that currently holds the VIP, which is master1 here

$ mkdir /usr/local/kubernetes/manifests -p

$ cd /usr/local/kubernetes/manifests/

$ vi kubeadm-config.yaml # create the file and paste the following content
apiServer:
  certSANs:
    - master1
    - master2
    - master.k8s.io
    - 192.168.3.158
    - 192.168.3.155
    - 192.168.3.156
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
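
Optionally, the control-plane images can be pulled ahead of time using the same configuration file, so that problems with the image repository show up before kubeadm init runs (an extra step, not part of the original flow):

$ kubeadm config images pull --config kubeadm-config.yaml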

6.2 Run on master1

$ kubeadm init --config kubeadm-config.yaml

Save the following lines from the kubeadm init output; they will be needed later when the other nodes join:

kubeadm join master.k8s.io:16443 --token a8r4cl.ipnc8uwnwg35alhn \
--discovery-token-ca-cert-hash sha256:2686517c55d2093a7e59ca34ecf72a1a44b36b416e1c2d9ac15565e5b2affb39 \
--control-plane
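
The token in this join command expires after about 24 hours by default. If it has expired by the time another node needs to join, a fresh worker join command can be generated on master1 (for a control-plane join, append --control-plane and make sure the certificates have been copied as in section 8.1):

$ kubeadm token create --print-join-command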

Configure the environment variables as prompted so that kubectl can be used:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 NotReady master 68s v1.16.3

$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-r5q69 0/1 Pending 0 59s
coredns-58cc8c89f4-tkhpq 0/1 Pending 0 59s
etcd-master1 1/1 Running 0 19s
kube-apiserver-master1 1/1 Running 0 7s
kube-controller-manager-master1 0/1 Pending 0 2s
kube-proxy-68d6s 1/1 Running 0 59s
kube-scheduler-master1 0/1 Pending 0 4s

Check the cluster status (on Kubernetes 1.16, kubectl get cs only prints NAME and AGE, and AGE may show as <unknown>; this is a known display quirk, so use the pod list below to judge component health)

$ kubectl get cs
NAME AGE
controller-manager <unknown>
scheduler <unknown>
etcd-0 <unknown>

$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-r5q69 0/1 Pending 0 79s
coredns-58cc8c89f4-tkhpq 0/1 Pending 0 79s
etcd-master1 1/1 Running 0 39s
kube-apiserver-master1 1/1 Running 0 27s
kube-controller-manager-master1 1/1 Running 0 22s
kube-proxy-68d6s 1/1 Running 0 79s
kube-scheduler-master1 1/1 Running 0 24s

Install the cluster network

Create kube-flannel.yml on master1 by running the following:

cat > kube-flannel.yml << EOF
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-arm64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-arm64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-arm
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-arm
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-ppc64le
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-ppc64le
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-s390x
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-s390x
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
EOF

Install the flannel network

$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Verify

$ kubectl get pods -n kube-system # wait a short while after the apply before checking
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-r5q69 0/1 Pending 0 2m18s
coredns-58cc8c89f4-tkhpq 0/1 Pending 0 2m18s
etcd-master1 1/1 Running 0 98s
kube-apiserver-master1 1/1 Running 0 86s
kube-controller-manager-master1 1/1 Running 0 81s
kube-flannel-ds-amd64-7qhgr 1/1 Running 0 35s
kube-proxy-68d6s 1/1 Running 0 2m18s
kube-scheduler-master1 1/1 Running 0 83s

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 2m46s v1.16.3

Join master2 to the cluster

8.1 Copy the certificates and related files

Copy the certificates and related files from master1 to master2

$ ssh root@192.168.3.156 mkdir -p /etc/kubernetes/pki/etcd

$ scp /etc/kubernetes/admin.conf root@192.168.3.156:/etc/kubernetes

$ scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.3.156:/etc/kubernetes/pki

$ scp /etc/kubernetes/pki/etcd/ca.* root@192.168.3.156:/etc/kubernetes/pki/etcd
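
As an alternative to copying the certificates by hand, kubeadm (1.15 and later) can distribute them itself: running init with --upload-certs stores the control-plane certificates in a kubeadm-certs Secret and prints a certificate key, which master2 then passes to kubeadm join. This is an alternative path, not the method used in this article; replace the placeholders with the real values:

# On master1: re-upload the certificates and print the certificate key
$ kubeadm init phase upload-certs --upload-certs
# On master2: join as a control-plane node using that key
$ kubeadm join master.k8s.io:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>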

8.2 Join master2 to the cluster

Run the join command that kubeadm init printed on master1, keeping the --control-plane flag so that master2 joins as a control-plane node (this is the output saved earlier):

kubeadm join master.k8s.io:16443 --token a8r4cl.ipnc8uwnwg35alhn \
--discovery-token-ca-cert-hash sha256:2686517c55d2093a7e59ca34ecf72a1a44b36b416e1c2d9ac15565e5b2affb39 \
--control-plane

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status (run on master1)

$ kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready master 6m28s v1.16.3
master2 Ready master 43s v1.16.3

$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-58cc8c89f4-r5q69 1/1 Running 0 6m28s
kube-system coredns-58cc8c89f4-tkhpq 1/1 Running 0 6m28s
kube-system etcd-master1 1/1 Running 0 5m48s
kube-system etcd-master2 1/1 Running 0 60s
kube-system kube-apiserver-master1 1/1 Running 0 5m36s
kube-system kube-apiserver-master2 1/1 Running 0 61s
kube-system kube-controller-manager-master1 1/1 Running 1 5m31s
kube-system kube-controller-manager-master2 1/1 Running 0 61s
kube-system kube-flannel-ds-amd64-2ntbn 1/1 Running 0 61s
kube-system kube-flannel-ds-amd64-7qhgr 1/1 Running 0 4m45s
kube-system kube-proxy-68d6s 1/1 Running 0 6m28s
kube-system kube-proxy-kdjwc 1/1 Running 0 61s
kube-system kube-scheduler-master1 1/1 Running 1 5m33s
kube-system kube-scheduler-master2 1/1 Running 0 61s

Join a Kubernetes worker node

Run on node1

To add a new worker node to the cluster, run the kubeadm join command from the kubeadm init output (the lines saved earlier, this time without --control-plane):

$ kubeadm join master.k8s.io:16443 --token a8r4cl.ipnc8uwnwg35alhn \
--discovery-token-ca-cert-hash sha256:2686517c55d2093a7e59ca34ecf72a1a44b36b416e1c2d9ac15565e5b2affb39
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Reinstall the cluster network, since a new worker node has been added (run on master1)

kubectl delete -f kube-flannel.yml
kubectl apply -f kube-flannel.yml

Check the status (run on master1)

$ kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready master 9m10s v1.16.3
master2 Ready master 3m25s v1.16.3
node1 Ready <none> 104s v1.16.3

$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-58cc8c89f4-r5q69 1/1 Running 0 9m8s
kube-system coredns-58cc8c89f4-tkhpq 1/1 Running 0 9m8s
kube-system etcd-master1 1/1 Running 0 8m28s
kube-system etcd-master2 1/1 Running 0 3m40s
kube-system kube-apiserver-master1 1/1 Running 0 8m16s
kube-system kube-apiserver-master2 1/1 Running 0 3m41s
kube-system kube-controller-manager-master1 1/1 Running 1 8m11s
kube-system kube-controller-manager-master2 1/1 Running 0 3m41s
kube-system kube-flannel-ds-amd64-44fdc 1/1 Running 0 93s
kube-system kube-flannel-ds-amd64-swgdm 1/1 Running 0 93s
kube-system kube-flannel-ds-amd64-swwck 1/1 Running 0 93s
kube-system kube-proxy-68d6s 1/1 Running 0 9m8s
kube-system kube-proxy-kdjwc 1/1 Running 0 3m41s
kube-system kube-proxy-lvc4v 1/1 Running 0 2m
kube-system kube-scheduler-master1 1/1 Running 1 8m13s
kube-system kube-scheduler-master2 1/1 Running 0 3m41s

Test the Kubernetes cluster

Create a pod in the cluster and verify that it runs normally:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-86c57db685-2mwds 0/1 ContainerCreating 0 8s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 9m48s
service/nginx NodePort 10.1.4.26 <none> 80:31030/TCP 5s

Access URL:

http://192.168.3.158:31030

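The NodePort is open on every node, so the page can also be fetched from the command line against the VIP or any node IP, for example:

$ curl http://192.168.3.158:31030
$ curl http://192.168.3.157:31030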