
# Setting Up a $k8s$ Cluster
> [Version $V1.29$]
### 1. Preparation
#### **1.1 Environment**
```shell
# Linux distribution
Rocky Linux 9.4 Mini
# Update the system (optional; skip it if you prefer)
# dnf clean all -y
# dnf update -y
# The three servers of the K8S cluster
10.10.14.200 k8s-master
10.10.14.201 k8s-node1
10.10.14.202 k8s-node2
```
#### **1.2 System Initialization**
Set the system time zone to Shanghai:
```shell
timedatectl set-timezone Asia/Shanghai
hwclock -w
# Check the time zone
ls -l /etc/localtime
```
Disable the firewall and SELinux:
```shell
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
```
Disable the $swap$ partition:
```shell
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
```
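To confirm swap is really off, a quick optional check (not part of the original steps):
```shell
swapon --show   # no output means no active swap devices
free -h
```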
Run on the $master$ node:
```shell
hostnamectl set-hostname k8s-master
```
Run on $node1$:
```shell
hostnamectl set-hostname k8s-node1
```
Run on $node2$:
```shell
hostnamectl set-hostname k8s-node2
```
Add $hosts$ entries on every node:
```shell
cat >> /etc/hosts << EOF
10.10.14.200 k8s-master
10.10.14.201 k8s-node1
10.10.14.202 k8s-node2
EOF
```
Pass bridged $IPv4$ traffic to the $iptables$ chains.
Run the following on every node:
```shell
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
```
Load the $br\_netfilter$ module on every node:
```shell
modprobe br_netfilter
sysctl --system
```
Verify that the module is loaded:
```
lsmod | grep br_netfilter
```
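You can also confirm that the sysctl values took effect (optional check):
```shell
# each of these should print "= 1"
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```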
Configure time synchronization on every node.
Install $chrony$ for time synchronization:
```shell
dnf install chrony -y
systemctl enable --now chronyd
```
Edit the configuration:
```
cat > /etc/chrony.conf << EOF
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
EOF
systemctl restart chronyd
```
Synchronize manually:
```
chronyc makestep
```
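To verify that time synchronization is actually working, an optional check:
```shell
chronyc tracking      # shows the current offset and sync status
chronyc sources -v    # lists the configured NTP sources
```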
Install $ipset$ and $ipvsadm$ on every node.
Install:
```shell
yum -y install ipset ipvsadm
```
Configure the required kernel modules:
```
mkdir -p /etc/sysconfig/modules/
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
```
Make the script executable, run it, and check that the modules are loaded:
```
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
```
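Note that the ipvs.modules script above is not run automatically at boot on Rocky Linux 9. If you also want the modules loaded after a reboot, one option (a sketch, not part of the original steps) is a systemd modules-load drop-in:
```shell
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
```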
### 2. Deployment and Installation
#### 2.1 Install $Docker$
Install $Docker$ on all nodes:
```shell
# Add the Docker repository
yum install -y yum-utils
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache
# Install
yum install -y docker-ce docker-ce-cli containerd.io
# Enable at boot and start
systemctl enable docker && systemctl start docker
```
Configure the Docker daemon (set the cgroup driver to systemd):
```shell
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
```
> **Since k8s v1.24 the call flow goes through a CRI shim: kubelet (client) -> CRI shim (built into containerd) -> containerd -> containerd-shim -> runc. To keep using Docker as the runtime, cri-dockerd is installed below as that shim.**
```shell
# https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8.amd64.tgz
# Upload cri-dockerd-0.3.8.amd64.tgz via SFTP
tar xf cri-dockerd-0.3.8.amd64.tgz
mv cri-dockerd/cri-dockerd /usr/bin/
rm -rf cri-dockerd cri-dockerd-0.3.8.amd64.tgz
# Configure the systemd units
cat > /etc/systemd/system/cri-docker.service<<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
# ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
# Specify the container image used as the base container for Pods (the "pause" image)
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.k8s.io/pause:3.9 --container-runtime-endpoint fd://
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
cat > /etc/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
systemctl daemon-reload
systemctl enable cri-docker && systemctl start cri-docker && systemctl status cri-docker
```
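Before moving on, it may be worth confirming that the socket kubeadm will be pointed at actually exists (a quick sanity check):
```shell
systemctl is-active cri-docker.service cri-docker.socket
ls -l /var/run/cri-dockerd.sock
```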
#### 2.2 Install $kubelet$, $kubeadm$ and $kubectl$
Configure the $k8s$ repository (all nodes):
```shell
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
# exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
```
Install $kubelet$, $kubeadm$ and $kubectl$ (all nodes):
```shell
# Install the default version
#yum install -y kubelet kubeadm kubectl
# Install a specific version
yum -y install kubeadm-1.29.0-150500.1.1 kubelet-1.29.0-150500.1.1 kubectl-1.29.0-150500.1.1
```
Configure the $cgroup$ driver to match $docker$'s (all nodes):
```shell
cp /etc/sysconfig/kubelet{,.bak}
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
systemctl enable kubelet
```
Install the shell auto-completion tool (optional):
```
yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
```
List the images kubeadm needs:
```
kubeadm config images list --kubernetes-version=v1.29.0
```
If your network is restricted, download the required images in advance.
```
# Important!
# A kubeadm deployment needs the k8s control-plane images plus the Calico network images.
# The default registry (k8s.gcr.io) is not reachable from mainland China and I did not find a domestic mirror,
# so it is best to download the images ahead of time and import them.
# All images that will be needed (k8s images and Calico network images):
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
calico/kube-controllers v3.27.0 4e87edec0297 12 days ago 75.5MB
calico/cni v3.27.0 8e8d96a874c0 12 days ago 211MB
calico/pod2daemon-flexvol v3.27.0 6506d2e0be2d 12 days ago 15.4MB
calico/node v3.27.0 1843802b91be 13 days ago 340MB
registry.k8s.io/kube-apiserver v1.29.0 1443a367b16d 2 weeks ago 127MB
registry.k8s.io/kube-scheduler v1.29.0 7ace497ddb8e 2 weeks ago 59.5MB
registry.k8s.io/kube-controller-manager v1.29.0 0824682bcdc8 2 weeks ago 122MB
registry.k8s.io/kube-proxy v1.29.0 98262743b26f 2 weeks ago 82.2MB
registry.k8s.io/etcd 3.5.10-0 a0eed15eed44 8 weeks ago 148MB
registry.k8s.io/coredns/coredns v1.11.1 cbb01a7bd410 4 months ago 59.8MB
registry.k8s.io/pause 3.9 e6f181688397 14 months ago 744kB
```
**Workaround**
```
# These four can be pulled directly
docker pull calico/kube-controllers:v3.27.0
docker pull calico/cni:v3.27.0
docker pull calico/pod2daemon-flexvol:v3.27.0
docker pull calico/node:v3.27.0
# Domestic mirror
# https://docker.aityp.com/
# Pull from the mirror and re-tag
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-apiserver:v1.29.8
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-apiserver:v1.29.8 registry.k8s.io/kube-apiserver:v1.29.0
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-scheduler:v1.29.8
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-scheduler:v1.29.8 registry.k8s.io/kube-scheduler:v1.29.0
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-controller-manager:v1.29.8
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-controller-manager:v1.29.8 registry.k8s.io/kube-controller-manager:v1.29.0
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-proxy:v1.29.8
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/kube-proxy:v1.29.8 registry.k8s.io/kube-proxy:v1.29.0
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/etcd:3.5.10-0
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/etcd:3.5.10-0
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/coredns/coredns:v1.11.1
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/coredns/coredns:v1.11.1
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/pause:3.9
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/pause:3.9 registry.k8s.io/pause:3.9
```
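If only one machine has internet access, one way to get the images onto the other nodes is docker save/load; a sketch (the archive name is arbitrary):
```shell
# On the machine that pulled the images
docker save -o k8s-1.29.0-images.tar \
  registry.k8s.io/kube-apiserver:v1.29.0 \
  registry.k8s.io/kube-controller-manager:v1.29.0 \
  registry.k8s.io/kube-scheduler:v1.29.0 \
  registry.k8s.io/kube-proxy:v1.29.0 \
  registry.k8s.io/etcd:3.5.10-0 \
  registry.k8s.io/coredns/coredns:v1.11.1 \
  registry.k8s.io/pause:3.9
# Copy the archive to each node (e.g. with scp), then on every node:
docker load -i k8s-1.29.0-images.tar
```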
Initialize the cluster (run on the $master$ node):
```
# Initialize the cluster
kubeadm init --apiserver-advertise-address 10.10.14.200 --kubernetes-version v1.29.0 --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
# If the network is a problem, initialize with the following command instead (I did not need it)
# --apiserver-advertise-address: the master node IP
# --pod-network-cidr: the CIDR assigned to pods
kubeadm init \
  --apiserver-advertise-address 10.10.14.200 \
  --kubernetes-version v1.29.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/cri-dockerd.sock \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
```
Record the join command printed at the end:
```shell
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.14.200:6443 --token 3yk4me.5k595v6hm2qz463s \
--discovery-token-ca-cert-hash sha256:9e83f5ebfaefa83523e16d546d56b9f3803d4083a71d18fe49217f72306a2058
```
Create the kubeconfig directory (on the $master$):
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Create a token that never expires and print the join command
kubeadm token create --ttl 0 --print-join-command
```
Run the following on each $node$ to add it to the cluster (run on the $node$s):
> **Note**: be sure to append --cri-socket unix:///var/run/cri-dockerd.sock to the join command returned above.
```shell
kubeadm join 10.10.14.200:6443 --token 3yk4me.5k595v6hm2qz463s \
--discovery-token-ca-cert-hash sha256:9e83f5ebfaefa83523e16d546d56b9f3803d4083a71d18fe49217f72306a2058 --cri-socket unix:///var/run/cri-dockerd.sock
```
#### 2.3 Deploy the cluster network plugin ($calico$)
**(run on the $master$ node)**
**The $flannel$ component is recommended:**
```shell
# Download this file and apply it directly
# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# I have already downloaded it, so just upload it
kubectl apply -f kube-flannel.yml
```
Apply the $operator$ resource manifest:
```shell
# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
kubectl create -f tigera-operator.yaml
```
Install via the custom-resources manifest:
```
#wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml
```
Edit line 13 of the file so the CIDR matches the --pod-network-cidr passed to kubeadm init:
```yaml
vi custom-resources.yaml
 11     ipPools:
 12     - blockSize: 26
 13       cidr: 10.244.0.0/16
 14       encapsulation: VXLANCrossSubnet
```
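If you prefer not to edit the file by hand, a one-liner sketch, assuming the file still contains Calico's default 192.168.0.0/16 pool:
```shell
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml
grep -n 'cidr' custom-resources.yaml   # confirm the change
```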
Apply the resource manifest:
```shell
kubectl apply -f custom-resources.yaml
```
Watch the $pod$s in the $calico-system$ namespace:
```shell
watch kubectl get pods -n calico-system
```
Check whether $calico$ is running normally:
```shell
kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7bc767bbcb-pxppk 1/1 Running 0 5m46s
calico-node-6cc8l 0/1 Running 0 5m46s
calico-node-vkvjz 1/1 Running 0 5m46s
calico-node-wvk6q 1/1 Running 0 5m46s
calico-typha-74545574b-6jpgq 1/1 Running 0 5m46s
calico-typha-74545574b-vx9kv 1/1 Running 0 5m40s
csi-node-driver-7pxtt 2/2 Running 0 5m46s
csi-node-driver-lflc6 2/2 Running 0 5m46s
csi-node-driver-r5npp 2/2 Running 0 5m46s
```
Check whether the cluster nodes are running normally:
```shell
kubectl get nodes
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 15m v1.29.0
k8s-node1 Ready <none> 12m v1.29.0
k8s-node2 Ready <none> 12m v1.29.0
```
Check whether all $pod$s are running normally:
```shell
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-6bb6cf484f-48c57 1/1 Running 0 56s
calico-apiserver calico-apiserver-6bb6cf484f-xlbh4 1/1 Running 0 56s
calico-system calico-kube-controllers-7bc767bbcb-pxppk 1/1 Running 0 6m59s
calico-system calico-node-6cc8l 1/1 Running 0 6m59s
calico-system calico-node-vkvjz 1/1 Running 0 6m59s
calico-system calico-node-wvk6q 1/1 Running 0 6m59s
calico-system calico-typha-74545574b-6jpgq 1/1 Running 0 6m59s
calico-system calico-typha-74545574b-vx9kv 1/1 Running 0 6m53s
calico-system csi-node-driver-7pxtt 2/2 Running 0 6m59s
calico-system csi-node-driver-lflc6 2/2 Running 0 6m59s
calico-system csi-node-driver-r5npp 2/2 Running 0 6m59s
kube-flannel kube-flannel-ds-qf7tg 0/1 CrashLoopBackOff 6 (5m6s ago) 11m
kube-flannel kube-flannel-ds-tlczf 1/1 Running 0 11m
kube-flannel kube-flannel-ds-xn98c 1/1 Running 0 11m
kube-system coredns-76f75df574-q6vps 1/1 Running 0 16m
kube-system coredns-76f75df574-srxnf 1/1 Running 0 16m
kube-system etcd-k8s-master 1/1 Running 0 16m
kube-system kube-apiserver-k8s-master 1/1 Running 0 16m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 16m
kube-system kube-proxy-8t78q 1/1 Running 0 13m
kube-system kube-proxy-glwfx 1/1 Running 0 16m
kube-system kube-proxy-qg4t7 1/1 Running 0 13m
kube-system kube-scheduler-k8s-master 1/1 Running 0 16m
tigera-operator tigera-operator-7f8cd97876-7s58q 1/1 Running 0 9m19s
```
From now on, all yaml files are applied only on the Master node.
Installation directory: /etc/kubernetes/
Component (static pod) manifest directory: /etc/kubernetes/manifests/
```shell
[root@k8s-master ~]# kubectl get pods -n kube-flannel
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-qf7tg 0/1 CrashLoopBackOff 30 (3m21s ago) 133m
kube-flannel-ds-tlczf 1/1 Running 0 133m
kube-flannel-ds-xn98c 1/1 Running 0 133m
```
One $pod$ has a problem and keeps going into $CrashLoopBackOff$.
Reference: https://www.cnblogs.com/williamzheng/p/18357226
```shell
[root@k8s-master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
# Note the NIC name above: ens192
```
```shell
kubectl edit ds kube-flannel-ds -n kube-flannel
```
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409111853398.png)
```shell
- --iface=ens192
:w
:q!
```
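The `--iface=ens192` argument is added to the kube-flannel container's args. After saving, the DaemonSet rolls its pods; a quick way to watch the rollout (optional):
```shell
kubectl -n kube-flannel rollout status ds/kube-flannel-ds
kubectl -n kube-flannel get pods -o wide
```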
Check again:
```
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-6bb6cf484f-48c57 1/1 Running 0 136m
calico-apiserver calico-apiserver-6bb6cf484f-xlbh4 1/1 Running 0 136m
calico-system calico-kube-controllers-7bc767bbcb-pxppk 1/1 Running 0 142m
calico-system calico-node-6cc8l 1/1 Running 0 142m
calico-system calico-node-vkvjz 1/1 Running 0 142m
calico-system calico-node-wvk6q 1/1 Running 0 142m
calico-system calico-typha-74545574b-6jpgq 1/1 Running 0 142m
calico-system calico-typha-74545574b-vx9kv 1/1 Running 0 142m
calico-system csi-node-driver-7pxtt 2/2 Running 0 142m
calico-system csi-node-driver-lflc6 2/2 Running 0 142m
calico-system csi-node-driver-r5npp 2/2 Running 0 142m
default web-76fd95c67-ckvcn 1/1 Running 0 134m
default web-76fd95c67-zl9dz 1/1 Running 0 134m
kube-flannel kube-flannel-ds-48wgr 1/1 Running 0 3m32s
kube-flannel kube-flannel-ds-9fmfz 1/1 Running 0 3m38s
kube-flannel kube-flannel-ds-nfhhb 1/1 Running 0 3m39s
kube-system coredns-76f75df574-q6vps 1/1 Running 0 152m
kube-system coredns-76f75df574-srxnf 1/1 Running 0 152m
kube-system etcd-k8s-master 1/1 Running 0 152m
kube-system kube-apiserver-k8s-master 1/1 Running 0 152m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 152m
kube-system kube-proxy-8t78q 1/1 Running 0 149m
kube-system kube-proxy-glwfx 1/1 Running 0 152m
kube-system kube-proxy-qg4t7 1/1 Running 0 149m
kube-system kube-scheduler-k8s-master 1/1 Running 0 152m
tigera-operator tigera-operator-7f8cd97876-7s58q 1/1 Running 0 145m
```
Now everything is finally running normally.
### 3. Testing
**Test whether the cluster can run $pod$s normally**
```shell
# Create a test nginx deployment
kubectl create deployment web -r 2 --image=nginx
deployment.apps/web created
# Expose the port with a NodePort service
kubectl expose deployment web --port=80 --type=NodePort
service/web exposed
```
Check the $pod$ status:
```
kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/web-76fd95c67-ckvcn 0/1 ContainerCreating 0 23s
pod/web-76fd95c67-zl9dz 0/1 ContainerCreating 0 23s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
service/web NodePort 10.107.0.63 <none> 80:31129/TCP 11s
```
Command-line test:
```
curl 10.10.14.201:31129
111-test
curl 10.10.14.201:31129
2222-test
```
```
kubectl get pods
```
### 4. Management Tool: $Kuboard$
**Official documentation**
```
https://kuboard.cn/v4/install/quickstart.html#%E9%9B%86%E6%88%90%E5%A4%96%E9%83%A8%E7%94%A8%E6%88%B7%E5%BA%93
```
```shell
docker run -d --restart=unless-stopped --name=kuboard -p 80:80/tcp -p 10081:10081/tcp -e KUBOARD_ENDPOINT="http://10.10.14.200:80" -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" -v /root/kuboard-data:/data \
eipwork/kuboard:v3
```
**Access URL**
```
http://10.10.14.200/
Username: admin
Password: Kuboard123
```
**Get the cluster information**
```shell
cat ~/.kube/config
```
**Result**
```shell
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQ3pNUkpBbEJVSzB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBNU1URXdPREUyTlRSYUZ3MHpOREE1TURrd09ESXhOVFJhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNobzZhakc4RVFiNktyUDhnd3BpOEoyMFFNRGxRNGYvYkJJSGVCVS9mN28yc3ZjcmRPUktvVGxDVnQKMHpvWmJuZHhrd0pNaURBZVJ4Z3R2NHBQTnFwNlV4M2tMZ1NqRE55Mnl4NThUOXdEc3M5YkJvNUlza01GZ2JNUQp3K3NGWEVCelN5R3B0aVA2L3FKbU5mYVlQcWhIRkRpdlJkRjFqTVJKK2JpNlNTakhPSmJ6aXhnd2VjSFMxdVY0CmZldkczNWROckZCZEI4WVNmczY0cGwvOXdiWC84S0s5M1ovWUF1K1RVNUF4T0FhY0c5U3FKWEZQajJoS01QT3gKcHYwc1ExTVIxSmduazF1MEx6T2RoWDdOOTkydThRRVZHQ1hwQmRjOWxVQSt4MmhPc2lUWjE1WW1GWmxQdStmYgo4UjhvYitNZ0NIdGNkUUtzOXFFSnB3L1h3aU1qQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUZk9hNWlyRmJBSGdxaVZkMFFnRURia2s1QlpEQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWpzQ3pXUTFnKwprc0lvNTg5Y1ZncFVOSFlIaWhVRXlpcDhUbE1ySGR3eFBuU0dOaFNyS0haTGJkVWZnWWorR2x3QlRHcXVrL3JzCmR0dWZWUmxaNXA1ZlUzZSs2SW5CY0xmWFdEaGRRMFlvSUpzSU4rbUZCL01haDBjQkxlQzZqOWN1eXArcWllUXAKMml6ZDJzb1F6cjN2TmlQM3l1T0NOU0dGdFBvdXRHUis5YWJzVC9lYUMzVTJvcFo4Tm5KenVFb2ljQnV4ZWt2SApIZVZadHhCT3l5QkpWbzEyZzdmTVhuSm1PRlR6TmhCdVpzVFZ1cndTZTdPTmtLOUVsbHBXeCtWcGRhOXdPZ3NOCmFId010Qkc1YUJVTDU0T0Fac004eXpJaldtb0dPZzBySkl1Mjk0YTNXTjJhR2UrMHFWdVdpV2U4N2k1SWVNcHAKSE05amswUVBrbjFaCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://10.10.14.200:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJWmVoaXMzR3dHVUF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBNU1URXdPREUyTlRSYUZ3MHlOVEE1TVRFd09ESXlNREJhTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEcDFvNzcKNzJaOWhERVdJQWVldm5YYXVWRG5aZkdlNGF2Ly90cDhudjlHTHBNd2hOT0JaaEpZNE55SkhhQTdYVTAvVTVBYwpSWG01VVNJb0NaRU9FQXpjNW9kOGNZNm5sOE9qMUhxa0dDZG1ZcllpMUVKRElDdnJ4d1phQ2RCRVYxNzJ1WFJYCitSY1hxaDdlcWRPRDJ4SHl0N2ZBZXFHQ3NSVjNnbFJ3ak5DMUovalFnRmtYUGs3NXdXZDRDcXh4VVEvcXdNeUUKVVRlVTg0SHFTZzRzVzRpU1gwbm83YnBlNTZNRG8rMnZYTmVDQzQwZGNJMVRWZDAyc21PbEx6VGhtd1M4U05PSAo3bUdlSkJWZ2hlaXRyR3kwSjlWT1NXdyszVHl1NEdQZm1YbmhPeU5DUVR6UVMzUDNQNmlyNzR4UGRUci9JT2JrCno3bGE3SjdIdjV6MGZUcnhBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRk44NXJtS3NWc0FlQ3FKVgozUkNBUU51U1RrRmtNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUI5dzRmd2RhS1R3L3YxODlhK01LbHM3c3NTCml5SnNDd0RDQUtHMnY4UmhNTXl6N2JkaTM0M1JOMFJHQnU5bGZKWk0zNWlpNXA1c29nK2NGcnlBb1FYVVI5cmEKS0NBS1VXNUg5eE90Qm95S3hYaXFkNTd6WWs5VVlUNDdEYkh4VllSaVgwNWZpaWM0NVNxT0pBNUdzUGNDdmlObwowYVo0MnFISTkrVnB5WXN5TGN1eGd3U1lkR1h5VjdiR1liclVDZmpwNk5USnVOTEh0NTl4VGtaNWNsZzdCTEFNCkVPV2E3WWVZWDE3VUdHbmh6YUN1WngvNzJZaXBWdkNLdjR5VXRTZVJpMm1HRnpZWXBoSmpXdm9wR1VoQUM5MWYKTlFUWEE1ZU9VTTZpOFo2OG5YRE9WUWFTb3BNbnZvZGM2UDlwSFR6YkVHdGdmdW82czI2dkoyRTZsUlVUCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBNmRhTysrOW1mWVF4RmlBSG5yNTEycmxRNTJYeG51R3IvLzdhZko3L1JpNlRNSVRUCmdXWVNXT0RjaVIyZ08xMU5QMU9RSEVWNXVWRWlLQW1SRGhBTTNPYUhmSEdPcDVmRG85UjZwQmduWm1LMkl0UkMKUXlBcjY4Y0dXZ25RUkZkZTlybDBWL2tYRjZvZTNxblRnOXNSOHJlM3dIcWhnckVWZDRKVWNJelF0U2Y0MElCWgpGejVPK2NGbmVBcXNjVkVQNnNETWhGRTNsUE9CNmtvT0xGdUlrbDlKNk8yNlh1ZWpBNlB0cjF6WGdndU5IWENOClUxWGROckpqcFM4MDRac0V2RWpUaCs1aG5pUVZZSVhvcmF4c3RDZlZUa2xzUHQwOHJ1QmozNWw1NFRzalFrRTgKMEV0ejl6K29xKytNVDNVNi95RG01TSs1V3V5ZXg3K2M5SDA2OFFJREFRQUJBb0lCQVFDcklpNlVyTmxLUk9PVAp1Szg2KzFMUFYwNmhleGRBMnhJQkVTZ2ZpbEZ5c0lWaVBlTjQwUlhlVy9xcWtyY0FtMEQ4ZHBDQ1VFcE1XTmR4Cmk4YlFEdWtMQmQva01FdGgxZzBGS216ekNRWlV4U3RkQkJEV2hZWC9VVElSMVJySjJWT1RwNWhCQmZoamhrcC8KVkxTS3pGb3ZVMHMwbjhyeUZkMkxFQ1B5RnV4cmxzWWk3dlAwSmIyMllFTTJqZXBaZ1dVU3FyR2xQWmo3YmZ2SApZbC9MSGZ3bmM5cU1VUzJHV2tDeUR5bWdCSVJ1SnA1S0FSTTNRaHd1UlhsekFiT1hSUFZyd3FXKzlEdHpzT0x3CkRITDNXYytYVG90QkJmZ2RkdG1leHNlRmp2TFF1QlQ5ZUI1RHg1dDRGQU9lRUM2T09Fa0R1RHM2dWpmYUNIK3IKa0s5RE56TTlBb0dCQU82dGkyS0xjaEkvYUZEQ3ZDcGJpUWl3RUg1QnhNeU5rTVFnbDAxbUg4Vi9VckFxUTZxawpRcTRkSkZzYnl5NC9tVjZNTGE3eDRGM1c2Z04rQ1lvZEh3WmlDOGQzM3o5RzRFYjVrZUFZTml2VUR3bHRGYU54Clp4M0hRQ0RYV05LRzV4U2NVRmxIOU1iQjAyM2h3d2Y2ZmNEa0N6c2pHanJlWHpYOVJzT2dHOGF6QW9HQkFQclAKR0N6QXJ0QUh1YUVhbUVBMm9SN1kzWDJwaEdxQ3p4eWR5VWdNb1Zab0w5S0hhdmZqb3h3UW5leElMcDlZSnpXegpSUHlKRm5qK2JmU1VWYUdxMWNGTWhyVGZUL1NOSTZadUxEbmhybzczaWNnbXNIZG5lVURidUt4Nkg5NSs3Q1RkCmRJY1dxWXpBMncxOWk1UXlyMlM4RU91K0Fmb0QzREp0dk1OM3pDbkxBb0dBZEdGQzJlWk0xUUQrQ0lNcjVTdUYKQWl0M24xaktjVU9HRjF3YzZxeWxTVlB3S2Q0eDZINzMxSlo1SjhQQnF1ZHdEVjRrMkcwd2poRkJRanF1eEIyMwpCeEcvMUo5cXlCdnpPQ2h4TE9naFlmV2c3Mk8xYldEYWV2YXhHbEpuQ1NDbWhMSkRxNFVlb2R2WkVIZEk5aGI2ClFwZnZzZ0pIdy9TeVVFMFR1RWZWdzJrQ2dZQVQwOTFvWkU4dG1QNiswcmhva3lrSHBFTldWTmxvQmpGVFpOSHQKeFRuWDkrS1g5U2FxdEM5SDM3UnNZb1IxQ21ZSEk4WDNaT3NHNDY1VG9JcG9mblhwa3lBdkdseGF5L0dlamFVbgpha1QvZm1oQkQzWHg2cGMyWG1ocUVqbUV3R253dkNVakxOSjRreUorSFllMFRwRjVHRGtLT2ZvMEJxd1l2SDRvCndjYTlJd0tCZ0RPLzVxcnFzZFZOdHljZFduQ1BhU1orR3QyMGIyMDI1QzFXZXBraVd4ZWs4eTEzNDRkWnRQaWYKOFhoTVZpNzBhYkt3QWNBeVdlWmdKNXM5bm41NlFFcHlCRFN1L2o0Tk5CczZieGNwbUZ4d3phaTFJdkd5NGRVRwpialorZ2NXMXVFQTZDQ3FrYnQ0Smh2RkVEVUZmUUlQMUZaZ1JNc2g4OFpnZ2FjV2YrRktMCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
```
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409141357727.png)
### 5. Creating Deployments and Services via $YAML$
- 1. **For convenience, no brand-new namespace is created; the default $default$ namespace is used.**
- 2. Why not create deployments and services through the UI or the command line? Because the command line is rather limited: for example, you cannot control the exposed port or configure memory and $CPU$. As for the UI, the reason is simple: today it is $Kuboard$, tomorrow it may be some other management tool, so nothing is as dependable as the never-changing $YAML$ file.
#### 5.1 Create a $Deployment$
`openresty-deployment.yaml`
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openresty
  template:
    metadata:
      labels:
        app: openresty
    spec:
      containers:
      - name: openresty
        image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/openresty/openresty:1.25.3.1-5-alpine-fat
        resources:
          limits:
            cpu: "1"
            memory: "2Gi"
          requests:
            cpu: "1"
            memory: "2Gi"
        ports:
        - containerPort: 80
```
#### 5.2 Create a $Service$
`openresty-service.yaml`
```yaml
apiVersion: v1
kind: Service
metadata:
  name: openresty-service
spec:
  selector:
    app: openresty
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
```
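Applying and checking both manifests would look roughly like this (assuming the two files are in the current directory):
```shell
kubectl apply -f openresty-deployment.yaml
kubectl apply -f openresty-service.yaml
kubectl get pods -l app=openresty
kubectl get svc openresty-service
```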
**Access the $nginx$ (OpenResty) service via the node's public $IP + NodePort$**
```shell
http://10.10.14.200:30080
```
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409151001962.png)
### 6. Image Registry
Here I chose the free personal edition of Alibaba Cloud's container image registry.
https://blog.csdn.net/nuptaxin/article/details/124008353
### 7. Installing Other Software
#### $7.1$ Deploy $mysql$
> https://www.cnblogs.com/blayn/p/16037199.html
To mount a $MySQL$ data volume in the $k8s$ cluster, an $NFS$ server needs to be installed.
Install $NFS$ on the **master node**:
```shell
yum install -y nfs-utils rpcbind
```
Create the directory on the **master node**:
```shell
mkdir -p /nfs
chmod 777 /nfs
```
Change the owning group and user (optional):
```shell
#chown -R nfsnobody:nfsnobody /nfs
```
Configure the shared directory:
```shell
echo "/nfs *(insecure,rw,sync,no_root_squash)" > /etc/exports
```
Create the $mysql$ shared directory:
```shell
mkdir -p /nfs/mysql
```
Start the services and enable them at boot:
```shell
systemctl enable --now nfs-server rpcbind
```
Check:
```shell
[root@k8s-master openresty]# exportfs
/nfs          <world>
[root@k8s-master openresty]# showmount -e 10.10.14.200
Export list for 10.10.14.200:
/nfs *
```
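To make sure a worker node can actually mount the export, a quick test mount (run on a worker node; assumes nfs-utils is installed there as well):
```shell
dnf install -y nfs-utils
mkdir -p /mnt/nfs-test
mount -t nfs 10.10.14.200:/nfs /mnt/nfs-test
df -h /mnt/nfs-test
umount /mnt/nfs-test
```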
Create the local directories for the MySQL data and logs (on the node that will run MySQL, here k8s-node1):
```
mkdir /data/mysql/data -p
mkdir /data/mysql/logs -p
```
**Create the $MySQL$ $Deployment$**
Write a `mysql.yaml` configuration file:
```yaml
apiVersion: apps/v1                  # apiserver version
kind: Deployment                     # the Deployment controller manages the Pods and the RS
metadata:
  name: mysql                        # name of the Deployment, globally unique
  namespace: default                 # namespace the Deployment lives in
  labels:
    app: mysql
spec:
  replicas: 1                        # desired number of Pod replicas
  selector:
    matchLabels:                     # labels of the RS
      app: mysql                     # Pods with this label belong to it
  strategy:                          # upgrade strategy
    type: RollingUpdate              # rolling update, replacing Pods step by step
  template:                          # Pod replicas are created from this template
    metadata:
      labels:
        app: mysql                   # Pod label, must match the RS selector
    spec:
      nodeName: k8s-node1            # pin the pod to this node
      containers:                    # container definitions of the Pod
      - name: mysql                  # container name
        image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/mysql:5.7.44  # docker image of the container
        volumeMounts:                # mount points inside the container
        - name: time-zone            # mount name
          mountPath: /etc/localtime  # mount path inside the container, can be a file or a directory
        - name: mysql-data
          mountPath: /var/lib/mysql  # mysql data directory inside the container
        - name: mysql-logs
          mountPath: /var/log/mysql  # mysql log directory inside the container
        ports:
        - containerPort: 3306        # port exposed by the container
        env:                         # environment variables written into the container
        - name: MYSQL_ROOT_PASSWORD  # variable holding the mysql root password
          value: "root"
      volumes:                       # local volumes to mount into the container
      - name: time-zone              # volume name, must match the mount name above
        hostPath:
          path: /etc/localtime       # mounting the host's localtime file gives the container the local time zone
      - name: mysql-data
        hostPath:
          path: /data/mysql/data     # local directory holding the mysql data
      - name: mysql-logs
        hostPath:
          path: /data/mysql/logs     # local directory holding the mysql logs
```
Upload the `mysql.yaml` configuration file to the /root directory of the VM and run the following in /root:
```shell
kubectl create -f mysql.yaml
```
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409162012122.png)
Run the apply command as well (skipping this step should not affect the result):
```shell
kubectl apply -f mysql.yaml
```
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409162014765.png)
Write a service that exposes MySQL externally, `mysql-svc.yaml`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30001
  selector:
    app: mysql
```
Upload the `mysql-svc.yaml` configuration file to the /root directory of the VM and run the following in /root:
```shell
kubectl create -f mysql-svc.yaml
```
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409162015173.png)
Access the database and verify that it is running normally:
```shell
kubectl get pod
```
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409162018178.png)
Delete the $pod$:
```
kubectl delete pods el-admin-mysql-rc-z2xjr
```
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409162019824.png)
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409162027325.png)
mysqld failed while attempting to check config
![](https://dsideal.obs.cn-north-1.myhuaweicloud.com/HuangHai/BlogImages/202409162039179.png)
```shell
# Inspect the pod, its events and its logs
kubectl describe pod mysql-db89ddf68-gf45x
kubectl get events --field-selector involvedObject.name=mysql-db89ddf68-gf45x
kubectl logs mysql-db89ddf68-gf45x -c <container_name>
[root@k8s-master ~]# kubectl logs mysql-db89ddf68-gf45x
2024-09-16 20:45:16+08:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.44-1.el7 started.
2024-09-16 20:45:28+08:00 [ERROR] [Entrypoint]: mysqld failed while attempting to check config
command was: mysqld --verbose --help --log-bin-index=/tmp/tmp.lFVXa084HQ
```
```shell
kubectl exec -it mysql-db89ddf68-gf45x -- mysql -uroot -proot
```
(the root password is root, as set by MYSQL_ROOT_PASSWORD above)
**Enable remote connections to MySQL**
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
flush privileges; # refresh the privilege tables so the change takes effect
**Open the firewall ports**
systemctl start firewalld.service
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=30001/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-ports
systemctl stop firewalld.service
systemctl disable firewalld.service
Port 6443 (the k8s cluster API address) and port 30001 (the cluster's MySQL NodePort) need to be opened.
**Check the deployed MySQL**
kubectl get pod
kubectl get svc
The MySQL service is installed in the default namespace; check which node it runs on:
kubectl get pod -n default -o wide
This MySQL deployment was pinned to the k8s-node1 node.
Connect to the cluster's MySQL from a local machine with SQLyog.
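A command-line equivalent, assuming a mysql client is available locally and the root password from MYSQL_ROOT_PASSWORD (root) is in effect:
```shell
# 10.10.14.201 is k8s-node1; any node IP works with a NodePort service
mysql -h 10.10.14.201 -P 30001 -uroot -proot -e "SELECT VERSION();"
```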
#### $7.2$ Deploy $redis$
> https://devpress.csdn.net/k8s/66c9a10413e4054e7e7d4f79.html
#### $7.3$ Deploy $MongoDB$
> https://www.cnblogs.com/gaoyanbing/p/17694272.html
#### $7.4$ Deploy $Minio$
> https://blog.csdn.net/baidu_35848778/article/details/131436354
### 8. Notes
**kubernetes can leave garbage or zombie pods behind: when an rc is deleted, the corresponding pods are sometimes not removed, and a pod that is deleted by hand is simply recreated. In that case you usually need to delete the associated resources first.**
**Reason**
**If you delete the pod first, a new pod is created immediately, because the deployment.yaml file defines the replica count.**
**Correct approach**
Delete the deployment first.
List the deployments:
```shell
kubectl get deployment
```
Delete the deployment:
```shell
kubectl delete deployment <name>
```
Then delete the pod:
```shell
kubectl delete pod <name>
```
If the pod is still there,
check the rc and rs:
```shell
kubectl get rc
kubectl get rs
```
Delete the ones the pod belongs to:
```shell
kubectl delete rc <name>
kubectl delete rs <name>
```