Kubernetes 1.28.2 Cluster Installation
Deployment environment and components
| Hostname | IP address | Node type | OS version |
|---|---|---|---|
| master | 192.168.200.10 | master, etcd | CentOS 7.9 |
| node1 | 192.168.200.20 | worker | CentOS 7.9 |
| node2 | 192.168.200.30 | worker | CentOS 7.9 |

| Component | Version | Description |
|---|---|---|
| kubernetes | v1.28.2 | main program |
| containerd | 1.6.33 | container runtime |
| calico | v3.28.0 | network plugin |
| etcd | 3.5.4 | datastore |
| coredns | v1.9.3 | DNS component |
Cluster Deployment
Cluster deployment is divided into the following parts:
- Environment preparation
- Deploy the master
- Add worker nodes
- Install the network plugin
- Verification
Environment Preparation
All three hosts must have network connectivity.
Preparation must be performed on every node and covers the following steps:
- Configure hostnames
- Configure /etc/hosts
- Clear the firewall rules
- Set up the yum repository
- Configure time synchronization
- Disable swap
- Configure kernel parameters
- Load the ip_vs kernel modules
- Install containerd
- Install kubelet, kubectl, and kubeadm
Set the hostnames:
```bash
# master
hostnamectl set-hostname master
# node1
hostnamectl set-hostname node1
# node2
hostnamectl set-hostname node2
```
Add the host entries to /etc/hosts:

```bash
# /etc/hosts
192.168.200.10 master
192.168.200.20 node1
192.168.200.30 node2
```
Disable the firewall, flush the iptables rules, and disable SELinux:

```bash
systemctl disable --now firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
iptables -F; iptables -X; iptables -Z; iptables-save
```
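An optional check that the changes took effect (not in the original steps):

```bash
# SELinux should report Permissive now (Disabled after a reboot), firewalld should be disabled
getenforce
systemctl is-enabled firewalld
```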
Set up the yum repository:

```bash
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
```
Configure time synchronization:

```bash
yum install -y chrony
systemctl enable --now chronyd; systemctl restart chronyd
```
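Optionally, confirm that chrony can reach its time sources and that the clock is synchronized (not in the original steps):

```bash
# List the configured time sources and show tracking status
chronyc sources -v
chronyc tracking
```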
Disable swap:

```bash
# Kubernetes does not allow swap on its nodes; if a swap partition exists, disable it.
# The swap partition can also simply be removed when installing the OS.
# Temporarily disable swap
swapoff -a
# Comment out the swap mount in /etc/fstab so swap stays disabled after the node reboots
# Use free -m to check whether swap is disabled
free -m
# Swap values of 0 mean it is disabled
              total        used        free      shared  buff/cache   available
Mem:           7963         366        6868          11         728        7345
Swap:             0           0           0
```
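The fstab change above is only described in the comments; a minimal sketch of doing it non-interactively is shown below (it assumes a typical fstab layout, so review the file before and after running it):

```bash
# Comment out any non-comment line that mounts swap so it stays off after reboot
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
grep swap /etc/fstab
```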
Load the kernel modules:

```bash
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- br_netfilter        # key module for bridged traffic filtering
modprobe -- ip_vs               # high-performance load balancing
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && \
bash /etc/sysconfig/modules/ipvs.modules && \
lsmod | grep -E "ip_vs|nf_conntrack_ipv4"
# On newer Linux kernels (4.19+) nf_conntrack_ipv4 has been merged into nf_conntrack,
# so running `modprobe nf_conntrack_ipv4` there may fail.
```
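On kernels 4.19 and later, where nf_conntrack_ipv4 no longer exists as a separate module, a variant of the same script would load nf_conntrack instead; this is a sketch, not part of the original steps:

```bash
# For kernel >= 4.19: nf_conntrack replaces nf_conntrack_ipv4
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -E "ip_vs|nf_conntrack"
```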
Enable traffic filtering and forwarding:

```bash
cat >> /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
```
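The bridge-related keys only exist once br_netfilter is loaded (done in the previous step); a quick optional check that the values took effect:

```bash
# All three values should print as 1
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```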
Install containerd:

```bash
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Generate the containerd configuration file
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl restart containerd

cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 5
debug: false
EOF
```
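An optional check that containerd itself is healthy (crictl only becomes available after cri-tools is pulled in with kubeadm in the next step, so ctr is used here):

```bash
# containerd should be active, and the ctr client should reach the daemon
systemctl is-active containerd
ctr version
```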
Install kubeadm, kubelet, and kubectl:

```bash
cat << EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
EOF
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
systemctl enable kubelet && systemctl start kubelet
```
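A quick sanity check of the installed versions (optional, not part of the original steps). Note that kubelet keeps restarting until kubeadm init or kubeadm join has run; that is expected at this stage.

```bash
# Confirm the installed versions line up with 1.28.2
kubeadm version -o short
kubelet --version
kubectl version --client
```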
Deploy the Master
The following only needs to be run on the master node:
```bash
# Initialize the cluster
kubeadm init --apiserver-advertise-address=192.168.200.10 \
  --kubernetes-version=v1.28.2 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16

# Output like the following means initialization is complete
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.10:6443 --token 82c61s.adnjs7dxyjqivoqz \
    --discovery-token-ca-cert-hash sha256:7c8f8ba7d2033229af0828e05b95b5bfd46269b6918551e47af08989f489d020

# Configure environment variables
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
echo 'source <(kubectl completion bash)' >> /etc/profile
source /etc/profile
```
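The bootstrap token in the join command above expires after 24 hours by default; if it is lost or expired, a fresh join command can be printed on the master (standard kubeadm command, not shown in the original):

```bash
# Print a new join command with a newly created token
kubeadm token create --print-join-command
```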
Add Worker Nodes
Run on all worker nodes:
```bash
kubeadm join 192.168.200.10:6443 --token 82c61s.adnjs7dxyjqivoqz \
    --discovery-token-ca-cert-hash sha256:7c8f8ba7d2033229af0828e05b95b5bfd46269b6918551e47af08989f489d020
```
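Immediately after joining, the new nodes usually report NotReady on the master until the network plugin is installed in the next step; this can be confirmed with a quick optional check:

```bash
# On the master: nodes remain NotReady until the CNI plugin (Calico) is running
kubectl get nodes -o wide
```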
Install the Network Plugin
Because of network restrictions, the Calico images currently cannot be pulled directly over the domestic (mainland China) network, so the installation uses offline images. Upload the Calico_v3.28 directory to the server.
The package can be downloaded online; the one used here came from a Baidu Netdisk share.
```bash
# Run on the master node: distribute and import the images
scp -r /root/Calico_v3.28/images node1:/root/
scp -r /root/Calico_v3.28/images node2:/root/
ctr -n k8s.io image import /root/Calico_v3.28/images/apiserver-v3.28.0.tar
ctr -n k8s.io image import /root/Calico_v3.28/images/cni-v3.28.0.tar
ctr -n k8s.io image import /root/Calico_v3.28/images/csi-v3.28.0.tar
ctr -n k8s.io image import /root/Calico_v3.28/images/kube-controllers-v3.28.0.tar
ctr -n k8s.io image import /root/Calico_v3.28/images/node-driver-registrar-v3.28.0.tar
ctr -n k8s.io image import /root/Calico_v3.28/images/node-v3.28.0.tar
ctr -n k8s.io image import /root/Calico_v3.28/images/operator-v1.34.0.tar
ctr -n k8s.io image import /root/Calico_v3.28/images/pod2daemon-flexvol-v3.28.0.tar
ctr -n k8s.io image import /root/Calico_v3.28/images/typha-v3.28.0.tar

# Run on the worker nodes: import the images
ctr -n k8s.io image import /root/images/apiserver-v3.28.0.tar
ctr -n k8s.io image import /root/images/cni-v3.28.0.tar
ctr -n k8s.io image import /root/images/csi-v3.28.0.tar
ctr -n k8s.io image import /root/images/kube-controllers-v3.28.0.tar
ctr -n k8s.io image import /root/images/node-driver-registrar-v3.28.0.tar
ctr -n k8s.io image import /root/images/node-v3.28.0.tar
ctr -n k8s.io image import /root/images/operator-v1.34.0.tar
ctr -n k8s.io image import /root/images/pod2daemon-flexvol-v3.28.0.tar
ctr -n k8s.io image import /root/images/typha-v3.28.0.tar

# Run only on the master node
kubectl create -f Calico_v3.28/tigera-operator-v3.28.0.yaml
kubectl create -f Calico_v3.28/custom-resources-v3.28.0.yaml
```
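The Tigera operator takes a few minutes to roll out the Calico components; one way to watch the rollout (an optional check, not in the original steps):

```bash
# All pods in these namespaces should eventually reach Running
kubectl get pods -n tigera-operator
watch kubectl get pods -n calico-system
```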
Verification

```
[root@master ~]# kubectl get pod -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-6757859c78-5dz9z          1/1     Running   0          2m6s
calico-apiserver   calico-apiserver-6757859c78-ztst4          1/1     Running   0          2m6s
calico-system      calico-kube-controllers-8559cc859c-59pgk   1/1     Running   0          2m43s
calico-system      calico-node-45wqq                          1/1     Running   0          2m43s
calico-system      calico-node-5m7wd                          1/1     Running   0          2m
calico-system      calico-node-llj4t                          1/1     Running   0          2m3s
calico-system      calico-typha-84676bc4c8-4mvjw              1/1     Running   0          2m43s
calico-system      calico-typha-84676bc4c8-hnzfx              1/1     Running   0          112s
calico-system      csi-node-driver-gbwjj                      2/2     Running   0          2m3s
calico-system      csi-node-driver-knv96                      2/2     Running   0          2m43s
calico-system      csi-node-driver-whktb                      2/2     Running   0          2m
kube-system        coredns-66f779496c-dv7wz                   1/1     Running   0          19m
kube-system        coredns-66f779496c-r567m                   1/1     Running   0          19m
kube-system        etcd-master                                1/1     Running   0          19m
kube-system        kube-apiserver-master                      1/1     Running   0          19m
kube-system        kube-controller-manager-master             1/1     Running   0          19m
kube-system        kube-proxy-c6mbr                           1/1     Running   0          19m
kube-system        kube-proxy-npjfw                           1/1     Running   0          2m3s
kube-system        kube-proxy-nrkrz                           1/1     Running   0          119s
kube-system        kube-scheduler-master                      1/1     Running   0          19m
tigera-operator    tigera-operator-76c4974c85-cdv2s           1/1     Running   0          2m55s
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE    VERSION
master   Ready    control-plane   19m    v1.28.2
node1    Ready    <none>          2m7s   v1.28.2
node2    Ready    <none>          2m4s   v1.28.2
```
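As an optional final smoke test (not part of the original write-up, and assuming the nodes can pull the nginx and busybox images from a reachable registry), a throwaway Deployment can confirm that scheduling, Service networking, and DNS all work:

```bash
# Hypothetical smoke test: schedule pods, expose them, and reach the Service by name
kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl expose deployment nginx-test --port=80
kubectl run curl-test --image=busybox --rm -it --restart=Never -- wget -qO- http://nginx-test
# Clean up afterwards
kubectl delete service nginx-test
kubectl delete deployment nginx-test
```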