[A quick guide to building a k8s cluster from scratch](https://blog.csdn.net/m0_37719874/article/details/120966546)

https://zhuanlan.zhihu.com/p/693571878
**1.1 Prepare the environment**

```shell
k8s-master 10.10.14.200
k8s-node1  10.10.14.201
k8s-node2  10.10.14.202

# Linux distribution
Rocky Linux 9.4 Minimal

# Update the system
dnf clean all
dnf update

# The three k8s servers
10.10.14.200 k8s-master
10.10.14.201 k8s-node1
10.10.14.202 k8s-node2

# Docker image registry
K8S-IMAGES 10.10.14.203
```
Disable SELinux:

```shell
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
```
Disable the swap partition:

```shell
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
```
Run on the master:

```shell
hostnamectl set-hostname k8s-master
```
Run on node1:

```shell
hostnamectl set-hostname k8s-node1
```
Run on node2:

```shell
hostnamectl set-hostname k8s-node2
```
Add hosts entries on every node:

```shell
cat >> /etc/hosts << EOF
10.10.14.200 k8s-master
10.10.14.201 k8s-node1
10.10.14.202 k8s-node2
EOF
```
Pass bridged IPv4 traffic to the iptables chains. Run the following on every node:
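The sysctl file itself is elided in the source (only the closing `EOF` survived); the following is a minimal sketch of the usual kubeadm prerequisites, assuming the conventional file name `/etc/sysctl.d/k8s.conf`:

```shell
# Assumed reconstruction of the elided heredoc: enable filtering of
# bridged traffic through iptables and turn on IP forwarding
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
```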
Load the br_netfilter module on every node:

```shell
modprobe br_netfilter

# Verify the module is loaded
lsmod | grep br_netfilter

# Apply the settings
sysctl --system
```
Add time synchronization on every node. Install the chrony time-sync service:

```shell
dnf install chrony -y
systemctl enable --now chronyd
```

Edit the configuration:

```shell
vi /etc/chrony.conf
```

```
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
```

Restart to enable time synchronization:

```shell
systemctl restart chronyd
```

Manual synchronization:

```shell
chronyc makestep
```
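To confirm a node is actually synchronized, chrony's standard status commands can be used (a quick optional check, not part of the original steps):

```shell
# Show the current sync status and clock offset
chronyc tracking

# List the configured time sources and their reachability
chronyc sources -v
```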
Install ipset and ipvsadm on every node:

```shell
yum -y install ipset ipvsadm
```

Configure the IPVS kernel modules:

```shell
mkdir -p /etc/sysconfig/modules/
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
```
Make the script executable, run it, and check that the modules are loaded:

```shell
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
```
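Later, once kube-proxy runs in IPVS mode, the virtual server table can be inspected with ipvsadm (an optional check, not part of the original steps):

```shell
# List the IPVS virtual servers and their backend endpoints
ipvsadm -Ln
```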
**4.3 Install Docker/kubeadm/kubelet/kubectl on all nodes**

This setup uses Docker as the container runtime (CRI), so Docker must be installed first.

Add the repository:

```shell
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```

List the available Docker versions; here we pick the newest 20.10.x release:

```shell
yum list docker-ce --showduplicates | sort -r
yum list docker-ce-cli --showduplicates | sort -r
yum list containerd.io --showduplicates | sort -r
```
```
docker-ce.x86_64 3:20.10.24-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.23-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.22-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.21-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.20-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.19-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.18-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.17-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.16-3.el9 docker-ce-stable
docker-ce.x86_64 3:20.10.15-3.el9 docker-ce-stable
...
docker-ce-cli.x86_64 1:20.10.24-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.23-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.22-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.21-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.20-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.19-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.18-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.17-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.16-3.el9 docker-ce-stable
docker-ce-cli.x86_64 1:20.10.15-3.el9 docker-ce-stable
...
```
Install the chosen version:

```shell
yum install -y docker-ce-20.10.15-3.el9 docker-ce-cli-20.10.15-3.el9 containerd.io-1.6.10-3.1.el9
```

Enable on boot and start:

```shell
systemctl enable docker && systemctl start docker

# Check the version
docker version
```

Configure the registry mirror:
```shell
# Create the directory
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Reload the configuration
systemctl daemon-reload

# Restart docker
systemctl restart docker
```
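To confirm the daemon picked up the new settings, `docker info` can be grepped (a quick sanity check; the exact output wording may vary by Docker version):

```shell
# Should report the systemd cgroup driver configured above
docker info | grep -i cgroup
```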
## **3. Install kubeadm**

This chapter must complete successfully on every machine in the k8s cluster (the master and all nodes).

```shell
# Configure the k8s package source
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
```

Install kubeadm, kubelet and kubectl at a pinned version:

```shell
yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

# To keep kubelet's cgroup driver consistent with Docker's (systemd),
# edit /etc/sysconfig/kubelet and set:
# KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

# Start kubelet and enable it on boot
systemctl enable --now kubelet
```
**Check that kubeadm, kubelet and kubectl installed successfully**

```shell
kubeadm version
kubelet --version
kubectl version --client
```
**Enable the k8s service on boot**

```shell
systemctl enable kubelet
```

**4.4 Deploy the k8s master node**

> This step runs only on the master node machine

The default image registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror registry is specified instead:

```shell
# Original command (fill in the master's actual IP)
kubeadm init --kubernetes-version=1.19.0 --apiserver-advertise-address=<master-ip> --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

# Command adjusted for this environment
kubeadm init --kubernetes-version=1.20.9 --apiserver-advertise-address=10.10.14.200 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
```
```
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.14.200:6443 --token ivocyb.4f2p3qu1nc5jptwf \
    --discovery-token-ca-cert-hash sha256:e088f075df466e689b8db3ace62a7650f27a11b6f7b36ee61d1ebbbd8a720c16
```
**Then run the commands printed in the init log on the corresponding machines**

On the master:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
**4.5 Join the k8s node machines**

If needed, generate a fresh join command on the master:

```shell
kubeadm token create --print-join-command --ttl 0
```

On node1 and node2, run the join command printed when `kubeadm init` succeeded (remember to drop the `\` line continuations when copying from the log):

```shell
kubeadm join 10.10.14.200:6443 --token ivocyb.4f2p3qu1nc5jptwf --discovery-token-ca-cert-hash sha256:e088f075df466e689b8db3ace62a7650f27a11b6f7b36ee61d1ebbbd8a720c16
```
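If the original hash from the init log is lost, the discovery token CA cert hash can be recomputed on the master from the cluster CA certificate. This is the standard method from the kubeadm documentation, shown here as a sketch:

```shell
# sha256 of the CA public key in DER form, as expected by
# --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```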
**After all node machines have finished joining, run this on the master**

```shell
kubectl get nodes
```
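Immediately after joining, nodes may still show NotReady; once the CNI plugin from the next section is deployed, every node should become Ready. Illustrative output (ages and exact role names will differ):

```shell
kubectl get nodes
# NAME         STATUS   ROLES                  AGE   VERSION
# k8s-master   Ready    control-plane,master   ...   v1.20.9
# k8s-node1    Ready    <none>                 ...   v1.20.9
# k8s-node2    Ready    <none>                 ...   v1.20.9
```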
**4.6 Deploy the CNI network plugin**

Deploy the CNI network plugin on the master node:

```shell
# This URL is blocked in mainland China; use a proxy, or get the file from the resources at the end of the article
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

# Watch the rollout (image pulls may fail at first; wait a while and they usually recover)
kubectl get pods -n kube-system
```

Check cluster health:

```shell
kubectl get cs
kubectl cluster-info
```

At this point, the k8s cluster is basically up!
**5. Test the Kubernetes cluster**

Create a pod in the cluster and verify that it runs correctly, using nginx as the example:

```shell
# Create the deployment
kubectl create deployment nginx --image=nginx

# Expose it as a service first, so that "svc nginx" exists
kubectl expose deployment nginx --port=80

# Change the service type to NodePort so it is reachable from outside (editing works like vim)
kubectl edit svc nginx
```

```yaml
...
spec:
  clusterIP: 10.106.212.113
  externalTrafficPolicy: Cluster
  ports:
  # Port exposed externally: 32627 (must be in the range 30000-32767)
  - nodePort: 32627
    # Port the service exposes
    port: 80
    protocol: TCP
    # Port the container listens on
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  # Change type to NodePort
  type: NodePort
...
```
Visit http://192.168.40.136:32627:

At this point we have successfully deployed an nginx deployment: the deployment manages the lifecycle of its pods, and the service exposes them to the outside world.
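To double-check the assigned NodePort without opening the editor again (a quick check, assuming the service is named nginx as above):

```shell
# The PORT(S) column shows "80:<nodePort>/TCP"
kubectl get svc nginx
```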
**6. Deploy the Dashboard**

The Dashboard is k8s's web-based management UI: it lets you view cluster state and perform operations visually, though it is less direct than kubectl, so a quick look is enough.

First download kubernetes-dashboard.yaml. This file is also blocked now; get it via a proxy or from the resources at the end of the article.

Run the following on the master:

```shell
kubectl apply -f kubernetes-dashboard.yaml

# Start the proxy (use your actual IP)
kubectl proxy --address=192.168.40.136 --disable-filter=true &
```

Visit https://192.168.40.136:30001
Chrome may block access; this can be fixed as follows:

```shell
mkdir key && cd key

# Generate a certificate
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.246.200'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# Delete the old certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kube-system

# Create the new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system

# Look up the dashboard pod name
kubectl get pod -n kube-system

# Restart the pod (deleting it triggers an automatic restart)
kubectl delete pod <pod name> -n kube-system
```
Then log in with a token. Get the token:

```shell
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/cluster-admin/{print $1}')
```

Copy the token, click Sign in, and wait a moment.
**Conclusion**

That's it for this article. We walked through how to deploy a k8s cluster and each concrete step; if you are interested, try building one on local virtual machines. For what the individual k8s components do and how to use them, look them up online and learn by running commands in the cluster. In production, k8s is mainly used for automated deployment and automatic scaling, which improves operations efficiency; as programmers we should be familiar with its common commands and principles, and once proficient you can even try building an automated operations platform on top of k8s to broaden your knowledge and sharpen your skills.
Build your own local docker registry:

https://blog.csdn.net/gaoxiangfei/article/details/130941906

The installation succeeded when every node's STATUS is Ready.
**How to troubleshoot when a node's STATUS is NotReady**

```shell
# 1 - Check the kubelet service
systemctl status kubelet

# 2 - Inspect the node's events and conditions in detail
kubectl describe node k8s-node1
kubectl describe node k8s-node2
```

If kubectl on a node cannot talk to the cluster, copy the master's kubeconfig to the node. On the MASTER:

```shell
yum install lrzsz -y
sz /etc/kubernetes/admin.conf
```

On NODE1 and NODE2 respectively:

```shell
yum install lrzsz -y
cd /etc/kubernetes/
rz -be   # select admin.conf
```

Then configure the environment variable:

```shell
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```

After that, kubectl commands work again.

```shell
# 3 - Check API server reachability: from the node, hit the API server's
#     health endpoint with curl or a similar tool
curl -k https://10.10.14.200:6443/healthz

# 4 - Check certificates: if the cluster uses self-signed certificates, make
#     sure the node's kubelet has the correct CA certificate so it can talk
#     to the API server securely

# 5 - If none of the above reveals a problem, try restarting the kubelet service
systemctl restart kubelet

# 6 - Rejoin the node: if the problem persists, delete the node from the
#     cluster and join it again
kubectl delete node <node-name>
# then rejoin it with the kubeadm join command
```
Alternatively, set the variable system-wide:

```shell
# Add the new environment variable at the bottom of the file
vi /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf

# Reload to take effect
source /etc/profile
```

https://zhuanlan.zhihu.com/p/672518868