Quickly Setting Up a k8s Cluster Environment

Linux | 2024-03-02 | Tags: Kubernetes

1 Create Three Virtual Machines

1.1 Plan the three virtual machines

k8s-node1  192.168.56.100
k8s-node2  192.168.56.101
k8s-node3  192.168.56.102

1.2 Create the Vagrantfile

Create a folder on the host machine (here F:\javatool\virtual\guli) and inside it create a file named Vagrantfile with the following content:

Vagrant.configure("2") do |config|
  (1..3).each do |i|
      config.vm.define "k8s-node#{i}" do |node|
          # Box used for the VM
          node.vm.box = "centos/7"
          # Hostname of the VM
          node.vm.hostname="k8s-node#{i}"

          # Private-network IP of the VM (192.168.56.100-102)
          node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"

          # Shared folder between host and VM
          # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"

          # VirtualBox-specific settings
          node.vm.provider "virtualbox" do |v|
              # Name of the VM
              v.name = "k8s-node#{i}"
              # Memory in MB (assumed value; adjust to your host)
              v.memory = 4096
              # Number of CPUs (assumed value; adjust to your host)
              v.cpus = 4
          end
      end
  end
end
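Before booting anything you can sanity-check the file from the same directory; Vagrant has a built-in validate subcommand:

vagrant validate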

1.3 Run vagrant up

From a Windows CMD prompt, run vagrant up in the directory containing the Vagrantfile. This takes a while; the log looks like this:

F:\javatool\virtual\guli>vagrant up
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
Bringing machine 'k8s-node3' up with 'virtualbox' provider...
==> k8s-node1: Importing base box 'centos/7'...
==> k8s-node1: Matching MAC address for NAT networking...
==> k8s-node1: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node1: Setting the name of the VM: k8s-node1
==> k8s-node1: Clearing any previously set network interfaces...
==> k8s-node1: Preparing network interfaces based on configuration...
  k8s-node1: Adapter 1: nat
  k8s-node1: Adapter 2: hostonly
==> k8s-node1: Forwarding ports...
  k8s-node1: 22 (guest) => 2222 (host) (adapter 1)
==> k8s-node1: Running 'pre-boot' VM customizations...
==> k8s-node1: Booting VM...
==> k8s-node1: Waiting for machine to boot. This may take a few minutes...
  k8s-node1: SSH address: 127.0.0.1:2222
  k8s-node1: SSH username: vagrant
  k8s-node1: SSH auth method: private key
  k8s-node1:
  k8s-node1: Vagrant insecure key detected. Vagrant will automatically replace
  k8s-node1: this with a newly generated keypair for better security.
  k8s-node1:
  k8s-node1: Inserting generated public key within guest...
  k8s-node1: Removing insecure key from the guest if it's present...
  k8s-node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node1: Machine booted and ready!
==> k8s-node1: Checking for guest additions in VM...
  k8s-node1: No guest additions were detected on the base box for this VM! Guest
  k8s-node1: additions are required for forwarded ports, shared folders, host only
  k8s-node1: networking, and more. If SSH fails on this machine, please install
  k8s-node1: the guest additions and repackage the box to continue.
  k8s-node1:
  k8s-node1: This is not an error message; everything may continue to work properly,
  k8s-node1: in which case you may ignore this message.
==> k8s-node1: Setting hostname...
==> k8s-node1: Configuring and enabling network interfaces...
==> k8s-node1: Rsyncing folder: /cygdrive/f/javatool/virtual/guli/ => /vagrant
==> k8s-node2: Importing base box 'centos/7'...
==> k8s-node2: Matching MAC address for NAT networking...
==> k8s-node2: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node2: Setting the name of the VM: k8s-node2
==> k8s-node2: Fixed port collision for 22 => 2222. Now on port 2200.
==> k8s-node2: Clearing any previously set network interfaces...
==> k8s-node2: Preparing network interfaces based on configuration...
  k8s-node2: Adapter 1: nat
  k8s-node2: Adapter 2: hostonly
==> k8s-node2: Forwarding ports...
  k8s-node2: 22 (guest) => 2200 (host) (adapter 1)
==> k8s-node2: Running 'pre-boot' VM customizations...
==> k8s-node2: Booting VM...
==> k8s-node2: Waiting for machine to boot. This may take a few minutes...
  k8s-node2: SSH address: 127.0.0.1:2200
  k8s-node2: SSH username: vagrant
  k8s-node2: SSH auth method: private key
  k8s-node2:
  k8s-node2: Vagrant insecure key detected. Vagrant will automatically replace
  k8s-node2: this with a newly generated keypair for better security.
  k8s-node2:
  k8s-node2: Inserting generated public key within guest...
  k8s-node2: Removing insecure key from the guest if it's present...
  k8s-node2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node2: Machine booted and ready!
==> k8s-node2: Checking for guest additions in VM...
  k8s-node2: No guest additions were detected on the base box for this VM! Guest
  k8s-node2: additions are required for forwarded ports, shared folders, host only
  k8s-node2: networking, and more. If SSH fails on this machine, please install
  k8s-node2: the guest additions and repackage the box to continue.
  k8s-node2:
  k8s-node2: This is not an error message; everything may continue to work properly,
  k8s-node2: in which case you may ignore this message.
==> k8s-node2: Setting hostname...
==> k8s-node2: Configuring and enabling network interfaces...
==> k8s-node2: Rsyncing folder: /cygdrive/f/javatool/virtual/guli/ => /vagrant
==> k8s-node3: Importing base box 'centos/7'...
==> k8s-node3: Matching MAC address for NAT networking...
==> k8s-node3: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node3: Setting the name of the VM: k8s-node3
==> k8s-node3: Fixed port collision for 22 => 2222. Now on port 2201.
==> k8s-node3: Clearing any previously set network interfaces...
==> k8s-node3: Preparing network interfaces based on configuration...
  k8s-node3: Adapter 1: nat
  k8s-node3: Adapter 2: hostonly
==> k8s-node3: Forwarding ports...
  k8s-node3: 22 (guest) => 2201 (host) (adapter 1)
==> k8s-node3: Running 'pre-boot' VM customizations...
==> k8s-node3: Booting VM...
==> k8s-node3: Waiting for machine to boot. This may take a few minutes...
  k8s-node3: SSH address: 127.0.0.1:2201
  k8s-node3: SSH username: vagrant
  k8s-node3: SSH auth method: private key
  k8s-node3:
  k8s-node3: Vagrant insecure key detected. Vagrant will automatically replace
  k8s-node3: this with a newly generated keypair for better security.
  k8s-node3:
  k8s-node3: Inserting generated public key within guest...
  k8s-node3: Removing insecure key from the guest if it's present...
  k8s-node3: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node3: Machine booted and ready!
==> k8s-node3: Checking for guest additions in VM...
  k8s-node3: No guest additions were detected on the base box for this VM! Guest
  k8s-node3: additions are required for forwarded ports, shared folders, host only
  k8s-node3: networking, and more. If SSH fails on this machine, please install
  k8s-node3: the guest additions and repackage the box to continue.
  k8s-node3:
  k8s-node3: This is not an error message; everything may continue to work properly,
  k8s-node3: in which case you may ignore this message.
==> k8s-node3: Setting hostname...
==> k8s-node3: Configuring and enabling network interfaces...
==> k8s-node3: Rsyncing folder: /cygdrive/f/javatool/virtual/guli/ => /vagrant

1.4 Virtual machine initialization complete

When the command finishes, open VirtualBox and you will see three running VMs named k8s-node1, k8s-node2, and k8s-node3.

1.5 Enable password access to the VMs

Run vagrant ssh k8s-node1, then switch to the root user with su root; when prompted, the initial password is vagrant.

Open the SSH daemon configuration with vi /etc/ssh/sshd_config.

Change PasswordAuthentication no to PasswordAuthentication yes, then save and exit.

Restart sshd: service sshd restart

Run exit once to leave the root user, then run exit a second time to leave the VM.
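If you prefer to script the change instead of editing by hand, a one-line sketch (it assumes the stock "PasswordAuthentication no" wording in sshd_config):

sed -i 's/^PasswordAuthentication no$/PasswordAuthentication yes/' /etc/ssh/sshd_config
service sshd restart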

F:\javatool\virtual\guli>vagrant ssh k8s-node1
[vagrant@k8s-node1 ~]$ su root
Password:
[root@k8s-node1 vagrant]# vi /etc/ssh/sshd_config
[root@k8s-node1 vagrant]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service
[root@k8s-node1 vagrant]# exit;
exit
[vagrant@k8s-node1 ~]$ exit;
logout
Connection to 127.0.0.1 closed.

Repeat the commands above on k8s-node2 and k8s-node3.

Connect to the three VMs with Xshell; VM creation is now complete.
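If you do not have Xshell, plain OpenSSH from the host works the same way once password authentication is on (assuming the box's sshd permits root login; the password is vagrant):

ssh root@192.168.56.100
ssh root@192.168.56.101
ssh root@192.168.56.102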

2 Prerequisite Environment Setup for the k8s Cluster

2.1 Check the routing table on all three machines

Run ip route show:

[root@k8s-node1 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 101
[root@k8s-node2 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 101
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 101
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.101 metric 100
[root@k8s-node3 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.102 metric 101

Notice that all three VMs route through the eth0 NIC with the same IP, 10.0.2.15. This is because VirtualBox's default networking mode is Network Address Translation (NAT), which gives every VM the same private address.

We need to change this network mode. Open VirtualBox and choose File -> Preferences -> Network -> add a new NAT Network.

Then, for each VM, set the network attachment to "NAT Network", choose the network you just created as the name, and refresh the MAC address.

Run ip route show again:

[root@k8s-node1 ~]# ip route show
default via 10.0.2.1 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 101
[root@k8s-node2 ~]# ip route show
default via 10.0.2.1 dev eth0 proto dhcp metric 101
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.4 metric 101
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.101 metric 100
[root@k8s-node3 ~]# ip route show
default via 10.0.2.1 dev eth0 proto dhcp metric 101
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.5 metric 101
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.102 metric 100

Now the three VMs have different eth0 addresses (10.0.2.15, 10.0.2.4, 10.0.2.5).

Ping each machine's IP from the other two to verify connectivity between the nodes, and ping www.baidu.com from each machine to verify external access, as in the example below.
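For example, from k8s-node1 (-c 3 sends three probes):

ping -c 3 192.168.56.101
ping -c 3 192.168.56.102
ping -c 3 www.baidu.com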

2.2 Configure the Linux environment

# (1) Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# (2) Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# (3) Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# (4) Reset iptables to ACCEPT rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# (5) Set kernel parameters for bridged traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
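A few quick checks to confirm the settings took effect (standard commands; run on each node):

getenforce                                   # should print Permissive
free -m | grep -i swap                       # the Swap line should be all zeros
sysctl net.bridge.bridge-nf-call-iptables    # should print 1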

Add the hostname-to-IP mappings:

vi /etc/hosts

10.0.2.15 k8s-node1

10.0.2.4 k8s-node2

10.0.2.5 k8s-node3

Then set each machine's own hostname with hostnamectl set-hostname <newhostname>.
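For example, matching the /etc/hosts entries above:

hostnamectl set-hostname k8s-node1   # on the 10.0.2.15 machine
hostnamectl set-hostname k8s-node2   # on the 10.0.2.4 machine
hostnamectl set-hostname k8s-node3   # on the 10.0.2.5 machine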

2.3 Install Docker

# (1) Remove any previous Docker installation
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

# (2) Install the required dependencies
sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

# (3) Set up the Docker repository (you can also configure an Aliyun mirror repo;
#     a later lesson covers running your own Docker hub as well)
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

[Log in to the Aliyun console with your own account; in the lower-left of the menu you will find a registry mirror accelerator (镜像加速器) address, which section 2.4 uses.]

# (4) Install Docker
sudo yum install -y docker-ce docker-ce-cli containerd.io

# (5) Start Docker
sudo systemctl start docker

# (6) Verify the installation
sudo docker run hello-world

# (7) Start Docker on boot
sudo systemctl enable docker

2.4 Configure the registry mirror accelerator

Replace the address below with the mirror accelerator address from your own Aliyun console:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
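To verify Docker picked up the mirror, docker info lists configured registry mirrors near the end of its output:

docker info | grep -A 1 -i "registry mirrors"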

2.5 Installing kubeadm, kubelet and kubectl

2.5.1 Configure the yum repository

# The Aliyun Kubernetes mirror; substitute another yum mirror if you prefer
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.5.2 Install kubeadm, kubelet, and kubectl

yum install -y kubeadm-1.17.3 kubelet-1.17.3 kubectl-1.17.3
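A quick sanity check that all three tools landed at the pinned version:

kubeadm version
kubelet --version
kubectl version --client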

2.5.3 Use the same cgroup driver for Docker and k8s

# docker: add the following line inside /etc/docker/daemon.json
vi /etc/docker/daemon.json
   "exec-opts": ["native.cgroupdriver=systemd"],

systemctl restart docker

# kubelet: if this reports that the file or directory does not exist, that is fine too; just continue
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet && systemctl start kubelet
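You can confirm the Docker side of the change with:

docker info | grep -i "cgroup driver"   # expected: Cgroup Driver: systemd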

2.5.4 Initialize the master node

For reference, see the official kubeadm init documentation.

Note: this step is performed on the master node only. If the required images are not present yet, run master_images.sh first (script below).

kubeadm init --kubernetes-version=v1.17.3 --apiserver-advertise-address=10.0.2.15 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16

[To reinitialize the cluster state: run kubeadm reset, then repeat the command above.]

The master_images.sh script:
#!/bin/bash
images=(
    kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
    kube-controller-manager:v1.17.3
    kube-scheduler:v1.17.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
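Assuming you saved the script above as master_images.sh (the name referenced earlier), pull the images on the master with:

chmod +x master_images.sh
./master_images.sh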

Log output on a successful master installation:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.15:6443 --token njegi6.oj7rc4x6agiu1go2 \
    --discovery-token-ca-cert-hash sha256:c282026afc2e329f4f80f6793966a906a75940bee4daeb36261ea383f69b4154

Following the prompt, run:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
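At this point kubectl can reach the cluster, but the master will report NotReady until a pod network add-on is installed; you can confirm with:

kubectl get nodes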

Install the flannel network plugin by running kubectl apply -f kube-flannel.yml.

The content of kube-flannel.yml is as follows:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

After the apply completes, check whether the installation succeeded; allow around three minutes for the pods to come up.

[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-f44-2qrmj                   1/1     Running   0          14m
kube-system   coredns-f44-s4t2r                   1/1     Running   0          14m
kube-system   etcd-k8s-node1                      1/1     Running   0          14m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          14m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          14m
kube-system   kube-flannel-ds-amd64-nlwhz         1/1     Running   0          2m4s
kube-system   kube-proxy-mlv                      1/1     Running   0          14m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          14m

When every pod shows the Running status, the installation succeeded.
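Instead of re-running the query, you can keep a watch open and see the pods flip to Running as they start:

kubectl get pods --all-namespaces -w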

2.5.5 Join the worker nodes to the master

On each worker node, run the join command printed earlier by kubeadm init:

kubeadm join 10.0.2.15:6443 --token njegi6.oj7rc4x6agiu1go2 \
    --discovery-token-ca-cert-hash sha256:c282026afc2e329f4f80f6793966a906a75940bee4daeb36261ea383f69b4154
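Join tokens expire after 24 hours by default; if yours has expired, generate a fresh join command on the master with:

kubeadm token create --print-join-command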

This step takes a while; run kubectl get pod -n kube-system -o wide to watch the status:

[root@k8s-node1 k8s]# kubectl get pod -n kube-system -o wide
NAME                                READY   STATUS              RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
coredns-f44-2qrmj                   1/1     Running             0          27m    10.244.0.2   k8s-node1   <none>           <none>
coredns-f44-s4t2r                   1/1     Running             0          27m    10.244.0.3   k8s-node1   <none>           <none>
etcd-k8s-node1                      1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
kube-apiserver-k8s-node1            1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
kube-controller-manager-k8s-node1   1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
kube-flannel-ds-amd64-dhnt8         0/1     Init:0/1            0          108s   10.0.2.5     k8s-node3   <none>           <none>
kube-flannel-ds-amd64-nlwhz         1/1     Running             0          15m    10.0.2.15    k8s-node1   <none>           <none>
kube-flannel-ds-amd64-zqhzv         0/1     Init:0/1            0          115s   10.0.2.4     k8s-node2   <none>           <none>
kube-proxy-cvjd                     0/1     ContainerCreating   0          108s   10.0.2.5     k8s-node3   <none>           <none>
kube-proxy-ml8                      0/1     ContainerCreating   0          115s   10.0.2.4     k8s-node2   <none>           <none>
kube-proxy-mlv                      1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
kube-scheduler-k8s-node1            1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   12m     v1.17.3
k8s-node2   Ready    <none>   9m44s   v1.17.3
k8s-node3   Ready    <none>   6m35s   v1.17.3
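As an optional smoke test, deploy something small and check that it gets scheduled and exposed:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx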