Setting up a $k8s$ Cluster

[Version $V1.29$]

1. Preparation

1.1 Prepare the Environment

# Linux version
Rocky Linux 9.4 Mini

# Update the system (optional; fine to skip)
# dnf clean all -y
# dnf update -y

# The three K8S servers
10.10.14.200 k8s-master
10.10.14.201 k8s-node1
10.10.14.202 k8s-node2

1.2 System Initialization

Set the system time zone to Shanghai

timedatectl set-timezone Asia/Shanghai
hwclock -w

# Check the time zone
ls -l /etc/localtime

Disable the firewall and SELinux:

systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

Disable the $swap$ partition:

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
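
A quick check that swap is really off (swapon should print nothing and free should show 0B of swap):

swapon --show
free -h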

Run on $master$:

hostnamectl set-hostname k8s-master

Run on $node1$:

hostnamectl set-hostname k8s-node1

Run on $node2$:

hostnamectl set-hostname k8s-node2

Add $hosts$ entries on every node

cat >> /etc/hosts << EOF
10.10.14.200 k8s-master
10.10.14.201 k8s-node1
10.10.14.202 k8s-node2
EOF

Pass bridged $IPv4$ traffic to the $iptables$ chains:

Add the following on every node:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

Load the $br_netfilter$ module on every node

modprobe br_netfilter
sysctl --system

Verify the module is loaded

lsmod | grep br_netfilter
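
To confirm the sysctl values actually took effect, read them back:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward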

Configure time synchronization on every node:

Install the $chrony$ time-synchronization service

dnf install chrony -y
systemctl enable --now chronyd

Edit the configuration

cat > /etc/chrony.conf << EOF
	server 0.pool.ntp.org iburst
	server 1.pool.ntp.org iburst
	server 2.pool.ntp.org iburst
	server 3.pool.ntp.org iburst
EOF
systemctl restart chronyd

Synchronize manually

chronyc makestep
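
To verify that chrony is actually synchronizing, check the sources and tracking status:

chronyc sources -v
chronyc tracking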

Install $ipset$ and $ipvsadm$ on every node

Install:

yum -y install ipset ipvsadm

Configure:

mkdir -p /etc/sysconfig/modules/
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Make the script executable, run it, and check that the modules are loaded:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
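
Note: /etc/sysconfig/modules/ is a legacy location and is not processed automatically at boot on Rocky Linux 9, so if the modules should also load after a reboot, one option is a systemd modules-load drop-in (a minimal sketch):

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF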

2. Deployment and Installation

2.1 Install Docker

Install $Docker$ on all nodes

# Add the repository mirror
yum install -y yum-utils
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache

# Install
yum install -y docker-ce docker-ce-cli containerd.io

# Enable at boot and start
systemctl enable docker && systemctl start docker

Configure the Docker daemon (cgroup driver)

mkdir -p /etc/docker 
tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload
systemctl restart docker
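
Confirm that Docker is now using the systemd cgroup driver:

docker info | grep -i "cgroup driver"
# expected: Cgroup Driver: systemd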

Since k8s v1.24 the kubelet talks to the container runtime only through a CRI shim; the call chain is kubelet (client) -> CRI shim -> containerd -> containerd-shim -> runc. To keep using Docker Engine as the runtime, install cri-dockerd as that CRI shim.

# https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8.amd64.tgz
# Upload cri-dockerd-0.3.8.amd64.tgz via SFTP

tar xf cri-dockerd-0.3.8.amd64.tgz 

mv cri-dockerd/cri-dockerd  /usr/bin/
rm -rf  cri-dockerd  cri-dockerd-0.3.8.amd64.tgz

# Configure the systemd service
# (quote the heredoc delimiter so $MAINPID below is not expanded by the shell)
cat > /etc/systemd/system/cri-docker.service <<'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
# ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
# The container image used as the base ("pause") container for Pods
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.k8s.io/pause:3.9 --container-runtime-endpoint fd:// 
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

cat > /etc/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

systemctl daemon-reload 
systemctl enable cri-docker && systemctl start cri-docker && systemctl status cri-docker

2.2 Install kubelet, kubeadm, and kubectl

Configure the $k8s$ repository (all nodes)

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
# exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Install kubelet, kubeadm, and kubectl (all nodes)

# Install the default version
#yum install -y kubelet kubeadm kubectl 

# Install a specific version
yum -y install kubeadm-1.29.0-150500.1.1 kubelet-1.29.0-150500.1.1 kubectl-1.29.0-150500.1.1

Configure the cgroup driver to match $docker$ (all nodes)

cp /etc/sysconfig/kubelet{,.bak}
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
systemctl enable kubelet

Install the bash auto-completion tool [optional]

yum install bash-completion -y 
source /usr/share/bash-completion/bash_completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source  ~/.bashrc 
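
Optionally add a short alias that reuses the same completion (the setup suggested by the standard kubectl completion docs):

echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc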

List the required images

kubeadm config images list --kubernetes-version=v1.29.0

If your network is restricted, download the required images in advance

# Important! Important! Important!
# kubeadm needs the k8s control-plane images plus the Calico network images
# The default registry (registry.k8s.io) is unreachable from mainland China and I did not find a complete domestic mirror, so download and import the images ahead of time.

# All required images (k8s control-plane images and Calico network images)
# docker images
REPOSITORY                                TAG        IMAGE ID       CREATED         SIZE
calico/kube-controllers                   v3.27.0    4e87edec0297   12 days ago     75.5MB
calico/cni                                v3.27.0    8e8d96a874c0   12 days ago     211MB
calico/pod2daemon-flexvol                 v3.27.0    6506d2e0be2d   12 days ago     15.4MB
calico/node                               v3.27.0    1843802b91be   13 days ago     340MB
registry.k8s.io/kube-apiserver            v1.29.0    1443a367b16d   2 weeks ago     127MB
registry.k8s.io/kube-scheduler            v1.29.0    7ace497ddb8e   2 weeks ago     59.5MB
registry.k8s.io/kube-controller-manager   v1.29.0    0824682bcdc8   2 weeks ago     122MB
registry.k8s.io/kube-proxy                v1.29.0    98262743b26f   2 weeks ago     82.2MB
registry.k8s.io/etcd                      3.5.10-0   a0eed15eed44   8 weeks ago     148MB
registry.k8s.io/coredns/coredns           v1.11.1    cbb01a7bd410   4 months ago    59.8MB
registry.k8s.io/pause                     3.9        e6f181688397   14 months ago   744kB

Workaround

# These four can be pulled directly
docker pull calico/kube-controllers:v3.27.0
docker pull calico/cni:v3.27.0
docker pull calico/pod2daemon-flexvol:v3.27.0
docker pull calico/node:v3.27.0 

# Mirror site in mainland China
# https://docker.aityp.com/

# Pull from the mirror, then re-tag to the expected name (the mirror carries v1.29.8, re-tagged here as v1.29.0)
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-apiserver:v1.29.8
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-apiserver:v1.29.8 registry.k8s.io/kube-apiserver:v1.29.0

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-scheduler:v1.29.8
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-scheduler:v1.29.8 registry.k8s.io/kube-scheduler:v1.29.0

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-controller-manager:v1.29.8
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-controller-manager:v1.29.8 registry.k8s.io/kube-controller-manager:v1.29.0

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-proxy:v1.29.8
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-proxy:v1.29.8 registry.k8s.io/kube-proxy:v1.29.0

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/etcd:3.5.10-0
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/etcd:3.5.10-0

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/coredns/coredns:v1.11.1
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/coredns/coredns:v1.11.1 

docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/pause:3.9
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/pause:3.9 registry.k8s.io/pause:3.9
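
The four control-plane images above can also be pulled and re-tagged in one small loop (a sketch using the same mirror prefix; etcd, coredns and pause keep their original tags as shown above):

MIRROR=swr.cn-north-4.myhuaweicloud.com/ddn-k8s
for img in kube-apiserver:v1.29.8 kube-scheduler:v1.29.8 kube-controller-manager:v1.29.8 kube-proxy:v1.29.8; do
  docker pull $MIRROR/registry.k8s.io/$img
  # re-tag to the name and version kubeadm expects
  docker tag $MIRROR/registry.k8s.io/$img registry.k8s.io/${img%:*}:v1.29.0
done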

Initialize the cluster (run on the $master$ node)

# Initialize the cluster
kubeadm init --apiserver-advertise-address 10.10.14.200 --kubernetes-version v1.29.0 --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock

# If the network is restricted, initialize with the Aliyun image repository instead (I did not need this)
# --apiserver-advertise-address is the master node IP; --pod-network-cidr is the Pod IP range
kubeadm init \
  --apiserver-advertise-address 10.10.14.200 \
  --kubernetes-version v1.29.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

Record the command below:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.14.200:6443 --token 3yk4me.5k595v6hm2qz463s \
        --discovery-token-ca-cert-hash sha256:9e83f5ebfaefa83523e16d546d56b9f3803d4083a71d18fe49217f72306a2058 

Create the kubectl configuration directory (on $master$)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Create a token that never expires and print the join command
kubeadm token create --ttl 0  --print-join-command

Run the following command on the $node$ nodes to join them to the cluster (run on each $node$)

Note: you must append --cri-socket unix:///var/run/cri-dockerd.sock to the join command returned above.

kubeadm join 10.10.14.200:6443 --token 3yk4me.5k595v6hm2qz463s \
        --discovery-token-ca-cert-hash sha256:9e83f5ebfaefa83523e16d546d56b9f3803d4083a71d18fe49217f72306a2058  --cri-socket unix:///var/run/cri-dockerd.sock

2.3 Deploy the Cluster Network Plugin (Calico)

(run on the $master$ node)

It is recommended to use the $flannel$ component

# Download this file and apply it directly
# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# I have already downloaded it; just upload it

kubectl apply -f kube-flannel.yml

Apply the $operator$ resource manifest

# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
kubectl create -f tigera-operator.yaml

Install via the custom resources

#wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml

Edit line 13 of the file so the cidr matches the IP range passed to kubeadm init with --pod-network-cidr

vi custom-resources.yaml
 11     ipPools:
 12     - blockSize: 26
 13       cidr: 10.244.0.0/16 
 14       encapsulation: VXLANCrossSubnet
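
If you prefer a non-interactive edit, a sed one-liner can patch the range instead (a sketch, assuming the manifest still ships with Calico's default cidr of 192.168.0.0/16):

sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
# confirm the change
grep -n 'cidr' custom-resources.yaml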

Apply the resource manifest

kubectl apply -f custom-resources.yaml

Watch the $pod$s in the $calico-system$ namespace

watch kubectl get pods -n calico-system

Check whether $calico$ is running properly

kubectl get pods -n calico-system

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7bc767bbcb-pxppk   1/1     Running   0          5m46s
calico-node-6cc8l                          0/1     Running   0          5m46s
calico-node-vkvjz                          1/1     Running   0          5m46s
calico-node-wvk6q                          1/1     Running   0          5m46s
calico-typha-74545574b-6jpgq               1/1     Running   0          5m46s
calico-typha-74545574b-vx9kv               1/1     Running   0          5m40s
csi-node-driver-7pxtt                      2/2     Running   0          5m46s
csi-node-driver-lflc6                      2/2     Running   0          5m46s
csi-node-driver-r5npp                      2/2     Running   0          5m46s

Check whether the cluster nodes are ready

kubectl get nodes

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   15m   v1.29.0
k8s-node1    Ready    <none>          12m   v1.29.0
k8s-node2    Ready    <none>          12m   v1.29.0

Check whether all $pod$s are running

kubectl get pod -A 

NAMESPACE          NAME                                       READY   STATUS             RESTARTS       AGE
calico-apiserver   calico-apiserver-6bb6cf484f-48c57          1/1     Running            0              56s
calico-apiserver   calico-apiserver-6bb6cf484f-xlbh4          1/1     Running            0              56s
calico-system      calico-kube-controllers-7bc767bbcb-pxppk   1/1     Running            0              6m59s
calico-system      calico-node-6cc8l                          1/1     Running            0              6m59s
calico-system      calico-node-vkvjz                          1/1     Running            0              6m59s
calico-system      calico-node-wvk6q                          1/1     Running            0              6m59s
calico-system      calico-typha-74545574b-6jpgq               1/1     Running            0              6m59s
calico-system      calico-typha-74545574b-vx9kv               1/1     Running            0              6m53s
calico-system      csi-node-driver-7pxtt                      2/2     Running            0              6m59s
calico-system      csi-node-driver-lflc6                      2/2     Running            0              6m59s
calico-system      csi-node-driver-r5npp                      2/2     Running            0              6m59s
kube-flannel       kube-flannel-ds-qf7tg                      0/1     CrashLoopBackOff   6 (5m6s ago)   11m
kube-flannel       kube-flannel-ds-tlczf                      1/1     Running            0              11m
kube-flannel       kube-flannel-ds-xn98c                      1/1     Running            0              11m
kube-system        coredns-76f75df574-q6vps                   1/1     Running            0              16m
kube-system        coredns-76f75df574-srxnf                   1/1     Running            0              16m
kube-system        etcd-k8s-master                            1/1     Running            0              16m
kube-system        kube-apiserver-k8s-master                  1/1     Running            0              16m
kube-system        kube-controller-manager-k8s-master         1/1     Running            0              16m
kube-system        kube-proxy-8t78q                           1/1     Running            0              13m
kube-system        kube-proxy-glwfx                           1/1     Running            0              16m
kube-system        kube-proxy-qg4t7                           1/1     Running            0              13m
kube-system        kube-scheduler-k8s-master                  1/1     Running            0              16m
tigera-operator    tigera-operator-7f8cd97876-7s58q           1/1     Running            0              9m19s

From now on, all YAML files are applied only on the Master node. Installation directory: /etc/kubernetes/; component manifest directory: /etc/kubernetes/manifests/

[root@k8s-master ~]#  kubectl get pods -n kube-flannel
NAME                    READY   STATUS             RESTARTS         AGE
kube-flannel-ds-qf7tg   0/1     CrashLoopBackOff   30 (3m21s ago)   133m
kube-flannel-ds-tlczf   1/1     Running            0                133m
kube-flannel-ds-xn98c   1/1     Running            0                133m

One $pod$ is having problems and keeps going into CrashLoopBackOff.

Reference: https://www.cnblogs.com/williamzheng/p/18357226

[root@k8s-master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
# note the interface name shown above: ens192
kubectl edit ds kube-flannel-ds -n kube-flannel

# add the following argument to the kube-flannel container args
- --iface=ens192

# save and quit the editor
:wq
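
After the edit, the container args in the DaemonSet should look roughly like this (a sketch; the first two flags are flannel's defaults):

      args:
      - --ip-masq
      - --kube-subnet-mgr
      - --iface=ens192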

Check again

kubectl get pod -A 

NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-6bb6cf484f-48c57          1/1     Running   0          136m
calico-apiserver   calico-apiserver-6bb6cf484f-xlbh4          1/1     Running   0          136m
calico-system      calico-kube-controllers-7bc767bbcb-pxppk   1/1     Running   0          142m
calico-system      calico-node-6cc8l                          1/1     Running   0          142m
calico-system      calico-node-vkvjz                          1/1     Running   0          142m
calico-system      calico-node-wvk6q                          1/1     Running   0          142m
calico-system      calico-typha-74545574b-6jpgq               1/1     Running   0          142m
calico-system      calico-typha-74545574b-vx9kv               1/1     Running   0          142m
calico-system      csi-node-driver-7pxtt                      2/2     Running   0          142m
calico-system      csi-node-driver-lflc6                      2/2     Running   0          142m
calico-system      csi-node-driver-r5npp                      2/2     Running   0          142m
default            web-76fd95c67-ckvcn                        1/1     Running   0          134m
default            web-76fd95c67-zl9dz                        1/1     Running   0          134m
kube-flannel       kube-flannel-ds-48wgr                      1/1     Running   0          3m32s
kube-flannel       kube-flannel-ds-9fmfz                      1/1     Running   0          3m38s
kube-flannel       kube-flannel-ds-nfhhb                      1/1     Running   0          3m39s
kube-system        coredns-76f75df574-q6vps                   1/1     Running   0          152m
kube-system        coredns-76f75df574-srxnf                   1/1     Running   0          152m
kube-system        etcd-k8s-master                            1/1     Running   0          152m
kube-system        kube-apiserver-k8s-master                  1/1     Running   0          152m
kube-system        kube-controller-manager-k8s-master         1/1     Running   0          152m
kube-system        kube-proxy-8t78q                           1/1     Running   0          149m
kube-system        kube-proxy-glwfx                           1/1     Running   0          152m
kube-system        kube-proxy-qg4t7                           1/1     Running   0          149m
kube-system        kube-scheduler-k8s-master                  1/1     Running   0          152m
tigera-operator    tigera-operator-7f8cd97876-7s58q           1/1     Running   0          145m

Now everything is finally running normally.

3. Testing

Test whether the cluster can run $pod$s normally

# Create a test nginx deployment (2 replicas)
kubectl create deployment web -r 2 --image=nginx

deployment.apps/web created

# Expose the port with a NodePort service
kubectl expose deployment web --port=80  --type=NodePort

service/web exposed

Check the $pod$ and service status

kubectl get pod,svc

NAME                      READY   STATUS              RESTARTS   AGE
pod/web-76fd95c67-ckvcn   0/1     ContainerCreating   0          23s
pod/web-76fd95c67-zl9dz   0/1     ContainerCreating   0          23s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        17m
service/web          NodePort    10.107.0.63   <none>        80:31129/TCP   11s

3.1 Command-line Test

curl 10.10.14.201:31129
111-test

curl 10.10.14.201:31129
2222-test
kubectl get pods

4. Management Tool: Kuboard

Official documentation

https://kuboard.cn/v4/install/quickstart.html#%E9%9B%86%E6%88%90%E5%A4%96%E9%83%A8%E7%94%A8%E6%88%B7%E5%BA%93
docker run -d --restart=unless-stopped --name=kuboard -p 80:80/tcp -p 10081:10081/tcp -e KUBOARD_ENDPOINT="http://10.10.14.200:80" -e KUBOARD_AGENT_SERVER_TCP_PORT="10081"   -v /root/kuboard-data:/data \
 eipwork/kuboard:v3

Access URL

http://10.10.14.200/
Username: admin
Password: Kuboard123

Get the cluster information

cat ~/.kube/config 

Output:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQ3pNUkpBbEJVSzB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBNU1URXdPREUyTlRSYUZ3MHpOREE1TURrd09ESXhOVFJhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNobzZhakc4RVFiNktyUDhnd3BpOEoyMFFNRGxRNGYvYkJJSGVCVS9mN28yc3ZjcmRPUktvVGxDVnQKMHpvWmJuZHhrd0pNaURBZVJ4Z3R2NHBQTnFwNlV4M2tMZ1NqRE55Mnl4NThUOXdEc3M5YkJvNUlza01GZ2JNUQp3K3NGWEVCelN5R3B0aVA2L3FKbU5mYVlQcWhIRkRpdlJkRjFqTVJKK2JpNlNTakhPSmJ6aXhnd2VjSFMxdVY0CmZldkczNWROckZCZEI4WVNmczY0cGwvOXdiWC84S0s5M1ovWUF1K1RVNUF4T0FhY0c5U3FKWEZQajJoS01QT3gKcHYwc1ExTVIxSmduazF1MEx6T2RoWDdOOTkydThRRVZHQ1hwQmRjOWxVQSt4MmhPc2lUWjE1WW1GWmxQdStmYgo4UjhvYitNZ0NIdGNkUUtzOXFFSnB3L1h3aU1qQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUZk9hNWlyRmJBSGdxaVZkMFFnRURia2s1QlpEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWpzQ3pXUTFnKwprc0lvNTg5Y1ZncFVOSFlIaWhVRXlpcDhUbE1ySGR3eFBuU0dOaFNyS0haTGJkVWZnWWorR2x3QlRHcXVrL3JzCmR0dWZWUmxaNXA1ZlUzZSs2SW5CY0xmWFdEaGRRMFlvSUpzSU4rbUZCL01haDBjQkxlQzZqOWN1eXArcWllUXAKMml6ZDJzb1F6cjN2TmlQM3l1T0NOU0dGdFBvdXRHUis5YWJzVC9lYUMzVTJvcFo4Tm5KenVFb2ljQnV4ZWt2SApIZVZadHhCT3l5QkpWbzEyZzdmTVhuSm1PRlR6TmhCdVpzVFZ1cndTZTdPTmtLOUVsbHBXeCtWcGRhOXdPZ3NOCmFId010Qkc1YUJVTDU0T0Fac004eXpJaldtb0dPZzBySkl1Mjk0YTNXTjJhR2UrMHFWdVdpV2U4N2k1SWVNcHAKSE05amswUVBrbjFaCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://10.10.14.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJWmVoaXMzR3dHVUF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBNU1URXdPREUyTlRSYUZ3MHlOVEE1TVRFd09ESXlNREJhTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEcDFvNzcKNzJaOWhERVdJQWVldm5YYXVWRG5aZkdlNGF2Ly90cDhudjlHTHBNd2hOT0JaaEpZNE55SkhhQTdYVTAvVTVBYwpSWG01VVNJb0NaRU9FQXpjNW9kOGNZNm5sOE9qMUhxa0dDZG1ZcllpMUVKRElDdnJ4d1phQ2RCRVYxNzJ1WFJYCitSY1hxaDdlcWRPRDJ4SHl0N2ZBZXFHQ3NSVjNnbFJ3ak5DMUovalFnRmtYUGs3NXdXZDRDcXh4VVEvcXdNeUUKVVRlVTg0SHFTZzRzVzRpU1gwbm83YnBlNTZNRG8rMnZYTmVDQzQwZGNJMVRWZDAyc21PbEx6VGhtd1M4U05PSAo3bUdlSkJWZ2hlaXRyR3kwSjlWT1NXdyszVHl1NEdQZm1YbmhPeU5DUVR6UVMzUDNQNmlyNzR4UGRUci9JT2JrCno3bGE3SjdIdjV6MGZUcnhBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRk44NXJtS3NWc0FlQ3FKVgozUkNBUU51U1RrRmtNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUI5dzRmd2RhS1R3L3YxODlhK01LbHM3c3NTCml5SnNDd0RDQUtHMnY4UmhNTXl6N2JkaTM0M1JOMFJHQnU5bGZKWk0zNWlpNXA1c29nK2NGcnlBb1FYVVI5cmEKS0NBS1VXNUg5eE90Qm95S3hYaXFkNTd6WWs5VVlUNDdEYkh4VllSaVgwNWZpaWM0NVNxT0pBNUdzUGNDdmlObwowYVo0MnFISTkrVnB5WXN5TGN1eGd3U1lkR1h5VjdiR1liclVDZmpwNk5USnVOTEh0NTl4VGtaNWNsZzdCTEFNCkVPV2E3WWVZWDE3VUdHbmh6YUN1WngvNzJZaXBWdkNLdjR5VXRTZVJpMm1HRnpZWXBoSmpXdm9wR1VoQUM5MWYKTlFUWEE1ZU9VTTZpOFo2OG5YRE9WUWFTb3BNbnZvZGM2UDlwSFR6YkVHdGdmdW82czI2dkoyRTZsUlVUCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBNmRhTysrOW1mWVF4RmlBSG5yNTEycmxRNTJYeG51R3IvLzdhZko3L1JpNlRNSVRUCmdXWVNXT0RjaVIyZ08xMU5QMU9RSEVWNXVWRWlLQW1SRGhBTTNPYUhmSEdPcDVmRG85UjZwQmduWm1LMkl0UkMKUXlBcjY4Y0dXZ25RUkZkZTlybDBWL2tYRjZvZTNxblRnOXNSOHJlM3dIcWhnckVWZDRKVWNJelF0U2Y0MElCWgpGejVPK2NGbmVBcXNjVkVQNnNETWhGRTNsUE9CNmtvT0xGdUlrbDlKNk8yNlh1ZWpBNlB0cjF6WGdndU5IWENOClUxWGROckpqcFM4MDRac0V2RWpUaCs1aG5pUVZZSVhvcmF4c3RDZlZUa2xzUHQwOHJ1QmozNWw1NFRzalFrRTgKMEV0ejl6K29xKytNVDNVNi95RG01TSs1V3V5ZXg3K2M5SDA2OFFJREFRQUJBb0lCQVFDcklpNlVyTmxLUk9PVAp1Szg2KzFMUFYwNmhleGRBMnhJQkVTZ2ZpbEZ5c0lWaVBlTjQwUlhlVy9xcWtyY0FtMEQ4ZHBDQ1VFcE1XTmR4Cmk4YlFEdWtMQmQva01FdGgxZzBGS216ekNRWlV4U3RkQkJEV2hZWC9VVElSMVJySjJWT1RwNWhCQmZoamhrcC8KVkxTS3pGb3ZVMHMwbjhyeUZkMkxFQ1B5RnV4cmxzWWk3dlAwSmIyMllFTTJqZXBaZ1dVU3FyR2xQWmo3YmZ2SApZbC9MSGZ3bmM5cU1VUzJHV2tDeUR5bWdCSVJ1SnA1S0FSTTNRaHd1UlhsekFiT1hSUFZyd3FXKzlEdHpzT0x3CkRITDNXYytYVG90QkJmZ2RkdG1leHNlRmp2TFF1QlQ5ZUI1RHg1dDRGQU9lRUM2T09Fa0R1RHM2dWpmYUNIK3IKa0s5RE56TTlBb0dCQU82dGkyS0xjaEkvYUZEQ3ZDcGJpUWl3RUg1QnhNeU5rTVFnbDAxbUg4Vi9VckFxUTZxawpRcTRkSkZzYnl5NC9tVjZNTGE3eDRGM1c2Z04rQ1lvZEh3WmlDOGQzM3o5RzRFYjVrZUFZTml2VUR3bHRGYU54Clp4M0hRQ0RYV05LRzV4U2NVRmxIOU1iQjAyM2h3d2Y2ZmNEa0N6c2pHanJlWHpYOVJzT2dHOGF6QW9HQkFQclAKR0N6QXJ0QUh1YUVhbUVBMm9SN1kzWDJwaEdxQ3p4eWR5VWdNb1Zab0w5S0hhdmZqb3h3UW5leElMcDlZSnpXegpSUHlKRm5qK2JmU1VWYUdxMWNGTWhyVGZUL1NOSTZadUxEbmhybzczaWNnbXNIZG5lVURidUt4Nkg5NSs3Q1RkCmRJY1dxWXpBMncxOWk1UXlyMlM4RU91K0Fmb0QzREp0dk1OM3pDbkxBb0dBZEdGQzJlWk0xUUQrQ0lNcjVTdUYKQWl0M24xaktjVU9HRjF3YzZxeWxTVlB3S2Q0eDZINzMxSlo1SjhQQnF1ZHdEVjRrMkcwd2poRkJRanF1eEIyMwpCeEcvMUo5cXlCdnpPQ2h4TE9naFlmV2c3Mk8xYldEYWV2YXhHbEpuQ1NDbWhMSkRxNFVlb2R2WkVIZEk5aGI2ClFwZnZzZ0pIdy9TeVVFMFR1RWZWdzJrQ2dZQVQwOTFvWkU4dG1QNiswcmhva3lrSHBFTldWTmxvQmpGVFpOSHQKeFRuWDkrS1g5U2FxdEM5SDM3UnNZb1IxQ21ZSEk4WDNaT3NHNDY1VG9JcG9mblhwa3lBdkdseGF5L0dlamFVbgpha1QvZm1oQkQzWHg2cGMyWG1ocUVqbUV3R253dkNVakxOSjRreUorSFllMFRwRjVHRGtLT2ZvMEJxd1l2SDRvCndjYTlJd0tCZ0RPLzVxcnFzZFZOdHljZFduQ1BhU1orR3QyMGIyMDI1QzFXZXBraVd4ZWs4eTEzNDRkWnRQaWYKOFhoTVZpNzBhYkt3QWNBeVdlWmdKNXM5bm41NlFFcHlCRFN1L2o0Tk5CczZieGNwbUZ4d3phaTFJdkd5NGRVRwpialorZ2NXMXVFQTZDQ3FrYnQ0Smh2RkVEVUZmUUlQMUZaZ1JNc2g4OFpnZ2FjV2YrRktMCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

5. Creating Deployments and Services via $YAML$

    1. For convenience, we generally do not create a new namespace and simply use the default $default$ namespace.
    2. Why not use the UI or the command line to create the deployment and service? The command line is quite limited (for example, you cannot control the exposed port or configure memory and $CPU$); as for the UI, today we use $Kuboard$ and tomorrow we might switch to another management tool, so the never-changing $YAML$ file is more dependable.

5.1 Create the Deployment

openresty-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openresty
  template:
    metadata:
      labels:
        app: openresty
    spec:
      containers:
      - name: openresty
        image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/openresty/openresty:1.25.3.1-5-alpine-fat
        resources:
          limits:
            cpu: "1"
            memory: "2Gi"
          requests:
            cpu: "1"
            memory: "2Gi"
        ports:
        - containerPort: 80
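
Apply the manifest and confirm the pod starts (a quick verification; the file and label names are the ones defined above):

kubectl apply -f openresty-deployment.yaml
kubectl get deployment openresty-deployment
kubectl get pods -l app=openresty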

5.2 Create the Service

openresty-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: openresty-service
spec:
  selector:
    app: openresty
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
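
Apply the service and check that NodePort 30080 (fixed in the manifest above) is assigned:

kubectl apply -f openresty-service.yaml
kubectl get svc openresty-service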

Access the $nginx$ (OpenResty) service via the node $IP$ plus the $NodePort$ port number

http://10.10.14.200:30080

6. Image Registry

Here I chose the free personal edition of the image registry provided by Alibaba Cloud.

https://blog.csdn.net/nuptaxin/article/details/124008353

7. Additional Notes

$K8S$ may leave garbage or zombie $pod$s behind: when an $rc$ is deleted, the corresponding $pod$s are not removed, and a manually deleted $pod$ is recreated automatically, so you usually need to delete the associated resources first.

Cause

If you delete the $pod$ first, a new $pod$ is created immediately, because the replica count is defined in the $deployment.yaml$ file.

Correct approach

List the deployments

kubectl get deployment

Delete the deployment

kubectl delete deployment <name>

Then delete the pod

kubectl delete pod <name>

If the $pod$ is still there, check the $rc$ and $rs$

kubectl get rc
kubectl get rs

Delete the ones associated with the $pod$

kubectl delete rc <name>
kubectl delete rs <name>

Create an image-pull secret for a private registry:

kubectl create secret docker-registry \
registry-secret-smokelee.com \
--docker-server=registry.i.smokelee.com:5000 \
--docker-username=opuser --docker-password=123
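
A pod then pulls from the private registry by referencing this secret through imagePullSecrets (a minimal sketch; the image path is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  imagePullSecrets:
  - name: registry-secret-smokelee.com
  containers:
  - name: app
    # hypothetical image hosted in the private registry above
    image: registry.i.smokelee.com:5000/demo/app:latest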