Setting Up a Kubernetes Cluster
1. Prepare the Environment
# Linux version
Rocky Linux 9.4 Minimal
# Update the system
dnf clean all
dnf update
# The three Kubernetes servers
10.10.14.200 k8s-master
10.10.14.201 k8s-node1
10.10.14.202 k8s-node2
# Docker image registry
10.10.14.203 K8S-IMAGES
2. System Initialization
Set the system time zone to Shanghai:
timedatectl set-timezone Asia/Shanghai
hwclock -w
# Check the time zone
ls -l /etc/localtime
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
Disable the swap partition:
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
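Optionally verify that swap is completely off; both commands should report no active swap devices:
swapon --show
free -h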
Run on the master:
hostnamectl set-hostname k8s-master
Run on node1:
hostnamectl set-hostname k8s-node1
Run on node2:
hostnamectl set-hostname k8s-node2
Add hosts entries on every node:
cat >> /etc/hosts << EOF
10.10.14.200 k8s-master
10.10.14.201 k8s-node1
10.10.14.202 k8s-node2
EOF
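A quick sanity check that hostname resolution works between the nodes (optional):
ping -c 1 k8s-node1
ping -c 1 k8s-node2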
Pass bridged IPv4 traffic to the iptables chains. Run the following on every node:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
Load the br_netfilter module on every node:
modprobe br_netfilter
# Apply the settings
sysctl --system
Check that the module is loaded:
lsmod | grep br_netfilter
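Note that modprobe does not persist across reboots. To have br_netfilter loaded automatically at boot, one option is a systemd modules-load entry:
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF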
Configure time synchronization on every node. Install chrony:
dnf install chrony -y
systemctl enable --now chronyd
Edit the configuration and set the NTP servers:
vi /etc/chrony.conf
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
Restart the service:
systemctl restart chronyd
Force an immediate synchronization:
chronyc makestep
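To confirm that chrony has picked up a time source (optional):
chronyc sources -v
timedatectl status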
Install ipset and ipvsadm on every node:
yum -y install ipset ipvsadm
Configure the IPVS kernel modules:
mkdir -p /etc/sysconfig/modules/
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable, run it, and check that the modules are loaded:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
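Note: /etc/sysconfig/modules/*.modules is a legacy EL6-era convention and is not executed automatically at boot on Rocky Linux 9. If the modules should survive a reboot, a systemd modules-load entry is one way to do it:
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF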
3. Install Docker/kubeadm/kubelet/kubectl on All Nodes
Kubernetes 1.20 still uses Docker as its default CRI (container runtime), so Docker must be installed first.
Install Docker on all nodes:
Add the Docker package repository:
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install:
# List the available Docker versions; this guide uses the 20.10 series
yum list docker-ce --showduplicates | sort -r
yum list docker-ce-cli --showduplicates | sort -r
yum list containerd.io --showduplicates | sort -r
docker-ce.x86_64 3:20.10.24-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.23-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.22-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.21-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.20-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.19-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.18-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.17-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.16-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.15-3.el9 docker-ce-stable
...
docker-ce-cli.x86_64 1:20.10.24-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.23-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.22-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.21-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.20-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.19-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.18-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.17-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.16-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.15-3.el9 docker-ce-stable
...
yum install -y docker-ce-20.10.15-3.el9 docker-ce-cli-20.10.15-3.el9 containerd.io-1.6.10-3.1.el9
Enable Docker on boot and start it:
systemctl enable docker && systemctl start docker
Configure the Docker daemon (registry mirror, systemd cgroup driver, log options):
# Create the configuration directory
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
systemctl daemon-reload
systemctl restart docker
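Confirm that the systemd cgroup driver from daemon.json took effect; kubelet must use the same cgroup driver as Docker (optional check):
docker info | grep -i 'cgroup driver'
# Expected output: Cgroup Driver: systemd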
4. Install kubeadm
The steps in this section must complete successfully on every machine in the cluster (the master and all nodes).
# Configure the Kubernetes package repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# Install the three core packages
yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
# Enable and start kubelet
systemctl enable --now kubelet
Verify that kubeadm, kubelet, and kubectl are installed:
kubeadm version
kubelet --version
kubectl version --client
5. Deploy the Kubernetes Cluster
This step only needs to run on the master node.
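Optionally pre-pull the control-plane images first, so that kubeadm init runs faster and image-pull problems surface early:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.9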
# Reference command
kubeadm init --kubernetes-version=1.19.0 --apiserver-advertise-address=<master-ip> --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
# Command adjusted for this environment
kubeadm init --kubernetes-version=1.20.9 --apiserver-advertise-address=10.10.14.200 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.14.200:6443 --token ivocyb.4f2p3qu1nc5jptwf \
--discovery-token-ca-cert-hash sha256:e088f075df466e689b8db3ace62a7650f27a11b6f7b36ee61d1ebbbd8a720c16
Next, run the commands from the init output on the corresponding machines.
On the master:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
On the nodes:
# Join both nodes to the cluster: run the join command printed at the end of kubeadm init on node1 and node2
# Note: the trailing "\" copied from the log is a line continuation; remove it if you paste the command as a single line
kubeadm join 10.10.14.200:6443 --token ivocyb.4f2p3qu1nc5jptwf \
--discovery-token-ca-cert-hash sha256:e088f075df466e689b8db3ace62a7650f27a11b6f7b36ee61d1ebbbd8a720c16
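The bootstrap token from kubeadm init expires after 24 hours by default. If it has expired by the time a node joins, generate a fresh join command on the master:
kubeadm token create --print-join-command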
After both nodes have joined, deploy the flannel network from the master node.
Download the manifest:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Apply the flannel manifest (the DaemonSet will schedule a flannel pod on every node):
kubectl apply -f kube-flannel.yml
Verify the nodes and pods:
kubectl get nodes
kubectl get pod -A
6. Deploy the Kubernetes Dashboard
# The manifest is hosted on GitHub, which may be blocked; download it from a network that can reach it
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
rz -be  # then select the downloaded recommended.yaml to upload it
kubectl apply -f recommended.yaml
# Change type: ClusterIP to type: NodePort
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
# Find the assigned NodePort
kubectl get svc -A |grep kubernetes-dashboard
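The NodePort can also be read directly, without grep (optional):
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'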
As shown above, the Dashboard is now exposed on NodePort 31475 and can be reached externally at https://10.10.14.202:31475.
Note: in a multi-node cluster, you must use the IP of the node that is actually running the Dashboard pod, not the master's IP. Find that node with:
kubectl get pod -owide --namespace kubernetes-dashboard
Here the Dashboard pod is running on k8s-node2 (10.10.14.202), so the URL to visit is https://10.10.14.202:31475.
(1) The browser will most likely warn that "your connection is not private"; click through to continue anyway.
(Note: if there is no option to continue, try a different browser; I had to switch from Chrome and Edge to Firefox.)
(2) Once the login page opens, do not click anything yet; run the following steps first.
# Create an access account: upload dash.yaml
rz -be  # select D:\dsWork\dsExam\操作文档\dash.yaml
# Apply it
kubectl apply -f dash.yaml
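The contents of dash.yaml are not shown here; the Dashboard project's documented admin-user example, which this file presumably mirrors, binds a service account to cluster-admin:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard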
# Retrieve the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6ImpWR1F0b3o3LUEzeXR2NXlhNE5xUDNLUnNmUkoyaHkzWmNocC1NQURBZjQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWo1a3piIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyN2FjNDJiYi05ODk5LTQzNjctOGQzNC01NzZjYjEyNWYwZGMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.p6SzEz6JTEtqAiXGHEeXp1nSRfNgQtUIu0kF3obON_bsmev5p-vw14SAYKoU7Tw320hzJpD-Db7mv3nQ5ppXKxzO6HdOhSyrOldwS_2PpA8omSdIb2rQefxrjoXqdn1QWD4wwffyFadjLpAlKla4D33TKlgXYEtItWRjMphhG7aj_rFJFqWJ3LYXB6kbWKx23mXl5lMMTIjGWc_kHJo_a_8Sr7kshNcuZSYeyjVP42vYZMLPRA0_GCT_K-MXYlFlaLwLogTt9hDnnlXMgs5H8zEap1ARXfzIs1EYDGZgPDDj86RfDD2zX74SnEdqtBvEdW_roQpyihzMIgTAX7-Giw
# Copy the token printed above into the Dashboard login page
A successful login lands on the Dashboard overview page.
Official image registry (access may require a proxy)
References:
[BUG] runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady