Deploying Kubernetes v1.23.3 from binaries (with containerd)

Preface

Why containerd?
Because Kubernetes announced back in 2021 that dockershim would be removed; see the link below for details:
Dockershim Deprecation FAQ

You will have to accept it sooner or later, so you might as well do it now.

k8s components
Kubernetes components
master node components:

Component                Role
etcd                     A consistent, highly available key-value store used as the backing database for all Kubernetes cluster data.
kube-apiserver           The single entry point for resource operations and the coordinator of all other components; provides authentication, authorization, access control, API registration and discovery. All create/update/delete/watch operations on objects go through the apiserver over its HTTP API and are then persisted to etcd.
kube-controller-manager  Maintains cluster state (failure detection, auto scaling, rolling updates, etc.) and runs the routine background tasks; each resource has its own controller, and controller-manager is what manages those controllers.
kube-scheduler           Handles resource scheduling, placing pods onto the appropriate machines according to the configured scheduling policy.

work node components:

Component           Role
kubelet             The master's agent on each work node; manages the lifecycle of containers on that machine (creating containers, mounting pod volumes, downloading secrets, reporting container and node status, etc.). kubelet turns each pod into a set of containers and is also responsible for volume (CVI) and network (CNI) management.
kube-proxy          Provides in-cluster service discovery and load balancing for services; implements the pod network proxy on each work node, maintaining network rules and doing layer-4 load balancing.
container runtime   Responsible for image management and for actually running pods and containers (CRI); docker and containerd are the most common choices.
cluster networking  The cluster network layer; flannel and calico are the most common choices.
coredns             Provides DNS for the whole cluster.
ingress controller  Provides an external entry point for services.
metrics-server      Provides resource monitoring.
dashboard           Provides a GUI.

Environment preparation

IP             Role         Kernel version
192.168.91.19  master/work  centos7.6 / 3.10.0-957.el7.x86_64
192.168.91.20  work         centos7.6 / 3.10.0-957.el7.x86_64

Service         Version
etcd            v3.5.1
kubernetes      v1.23.3
cfssl           v1.6.1
containerd      v1.5.9
pause           v3.6
flannel         v0.15.1
coredns         v1.8.6
metrics-server  v0.5.2
dashboard       v2.4.0

cfssl github
etcd github
k8s github
containerd github
runc github

The installation packages and images used in this deployment have all been uploaded to CSDN.

The master node needs at least 2 CPUs and 2 GB of RAM; a work node can get by with 1 CPU and 1 GB.

Passwordless SSH between the nodes needs to be set up beforehand; those steps are not shown here.
Out of laziness... there is only one master node.
All of the following operations only need to be run from one master node that has passwordless SSH access to the other nodes.
If the network is good, the images can simply be pulled automatically; if pulls keep failing, you can upload the image tarballs and import them into containerd locally. The image-import steps later in this article are therefore not mandatory.

Create directories

Create this path (adjust it to your own environment); it is used to store the k8s binaries and the image files.

mkdir -p /approot1/k8s/{bin,images,pkg,tmp/{ssl,service}}
Disable the firewall
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl disable firewalld"; \
ssh $i "systemctl stop firewalld"; \
done
Disable selinux

Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "setenforce 0"; \
done

Permanently:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config"; \
done
Disable swap

Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "swapoff -a"; \
done

Permanently:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab"; \
done
Enable kernel modules

Temporarily:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "modprobe ip_vs"; \
ssh $i "modprobe ip_vs_rr"; \
ssh $i "modprobe ip_vs_wrr"; \
ssh $i "modprobe ip_vs_sh"; \
ssh $i "modprobe nf_conntrack"; \
ssh $i "modprobe nf_conntrack_ipv4"; \
ssh $i "modprobe br_netfilter"; \
ssh $i "modprobe overlay"; \
done

Permanently:

vim /approot1/k8s/tmp/service/k8s-modules.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
overlay
Distribute to all nodes
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/tmp/service/k8s-modules.conf $i:/etc/modules-load.d/; \
done
Enable the systemd automatic module-loading service
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "systemctl enable systemd-modules-load"; \
ssh $i "systemctl restart systemd-modules-load"; \
ssh $i "systemctl is-active systemd-modules-load"; \
done

If it returns active, the automatic module-loading service started successfully.
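To double-check that the modules are actually loaded, you can optionally grep lsmod on each node (not part of the original steps, just a quick sanity check):

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "lsmod | grep -E 'ip_vs|nf_conntrack|br_netfilter|overlay'"; \
done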

Configure kernel parameters

The following parameters apply to 3.x and 4.x kernels.

vim /approot1/k8s/tmp/service/kubernetes.conf

Before pasting, it is recommended to run :set paste in vim first, so the pasted content matches the document exactly (no extra comments or broken indentation).

# enable packet forwarding (needed for vxlan)
net.ipv4.ip_forward=1
# let iptables process bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
# disable tcp_tw_recycle, it conflicts with NAT and breaks connectivity
net.ipv4.tcp_tw_recycle=0
# do not reuse TIME-WAIT sockets for new TCP connections
net.ipv4.tcp_tw_reuse=0
# upper limit of the socket listen() backlog
net.core.somaxconn=32768
# max tracked connections, default is nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_max=1000000
# never use swap unless the system is OOM
vm.swappiness=0
# max number of memory map areas (mapped files)
vm.max_map_count=655360
# max number of file handles the kernel can allocate
fs.file-max=6553600
# keepalive settings for long-lived connections
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
Distribute to all nodes
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/tmp/service/kubernetes.conf $i:/etc/sysctl.d/; \
done
Load the kernel parameters
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sysctl -p /etc/sysctl.d/kubernetes.conf"; \
done
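As an optional sanity check (not in the original steps), read a couple of the values back to confirm they took effect:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables"; \
done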
Flush iptables rules
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat"; \
ssh $i "iptables -P FORWARD ACCEPT"; \
done
Configure the PATH variable
for i in 192.168.91.19 192.168.91.20;do \
ssh $i "echo 'PATH=$PATH:/approot1/k8s/bin' >> $HOME/.bashrc"; \
done
source $HOME/.bashrc
Download the binaries

This only needs to be done on one node.
Downloading from GitHub can be slow; you can also upload the files from your local machine to /approot1/k8s/pkg/.

wget -O /approot1/k8s/pkg/kubernetes.tar.gz \

wget -O /approot1/k8s/pkg/etcd.tar.gz \

Extract and remove the unneeded files

cd /approot1/k8s/pkg/
for i in $(ls *.tar.gz);do tar xvf $i && rm -f $i;done
mv kubernetes/server/bin/ kubernetes/
rm -rf kubernetes/{addons,kubernetes-src.tar.gz,LICENSES,server}
rm -f kubernetes/bin/*_tag kubernetes/bin/*.tar
rm -rf etcd-v3.5.1-linux-amd64/Documentation etcd-v3.5.1-linux-amd64/*.md
Deploy the master node
Create the CA root certificate
wget -O /approot1/k8s/bin/cfssl
wget -O /approot1/k8s/bin/cfssljson
chmod +x /approot1/k8s/bin/*
vim /approot1/k8s/tmp/ssl/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
vim /approot1/k8s/tmp/ssl/ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
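Optionally, you can inspect the generated CA certificate to confirm its subject and validity period before moving on:

openssl x509 -in /approot1/k8s/tmp/ssl/ca.pem -noout -subject -dates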
Deploy the etcd component
Create the etcd certificate
vim /approot1/k8s/tmp/ssl/etcd-csr.json

Change 192.168.91.19 to your own IP here; do not copy-paste blindly.
Mind the JSON formatting.

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.91.19"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
Set up etcd as a systemd service
vim /approot1/k8s/tmp/service/kube-etcd.service.192.168.91.19

Change 192.168.91.19 to your own IP here; do not copy-paste blindly.
etcd parameters

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=

[Service]
Type=notify
WorkingDirectory=/approot1/k8s/data/etcd
ExecStart=/approot1/k8s/bin/etcd \
--name=etcd-192.168.91.19 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls= \
--listen-peer-urls= \
--listen-client-urls= \
--advertise-client-urls= \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd-192.168.91.19= \
--initial-cluster-state=new \
--data-dir=/approot1/k8s/data/etcd \
--wal-dir= \
--snapshot-count=50000 \
--auto-compaction-retention=1 \
--auto-compaction-mode=periodic \
--max-request-bytes=10485760 \
--quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.
Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i “mkdir -p /etc/kubernetes/ssl”; \
ssh $i “mkdir -m 700 -p /approot1/k8s/data/etcd”; \
ssh $i “mkdir -p /approot1/k8s/bin”; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,etcd*.pem} $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-etcd.service.$i $i:/etc/systemd/system/kube-etcd.service; \
scp /approot1/k8s/pkg/etcd-v3.5.1-linux-amd64/etcd* $i:/approot1/k8s/bin/; \
done
Start the etcd service

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.

for i in 192.168.91.19;do \
ssh $i “systemctl daemon-reload”; \
ssh $i “systemctl enable kube-etcd”; \
ssh $i “systemctl restart kube-etcd –no-block”; \
ssh $i “systemctl is-active kube-etcd”; \
done

If it returns activating, etcd is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-etcd";done again.
If it returns active, etcd started successfully. With a multi-node etcd cluster it is normal for one of the nodes not to return active; you can verify the cluster as shown below.
For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.

for i in 192.168.91.19;do \
ssh $i “ETCDCTL_API=3 /approot1/k8s/bin/etcdctl \
–endpoints=https://${i}:2379 \
–cacert=/etc/kubernetes/ssl/ca.pem \
–cert=/etc/kubernetes/ssl/etcd.pem \
–key=/etc/kubernetes/ssl/etcd-key.pem \
endpoint health”; \
done

is healthy: successfully committed proposal: took = 7.135668ms
If the output above is returned and contains successfully, the node is healthy.
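You can also list the cluster members with the same etcdctl flags; this is an optional extra check, not part of the original steps:

for i in 192.168.91.19;do \
ssh $i "ETCDCTL_API=3 /approot1/k8s/bin/etcdctl \
--endpoints=https://${i}:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem \
member list -w table"; \
done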

Deploy the apiserver component
Create the apiserver certificate
vim /approot1/k8s/tmp/ssl/kubernetes-csr.json

Change 192.168.91.19 to your own IP here; do not copy-paste blindly.
Mind the JSON formatting.
10.88.0.1 is the Kubernetes service IP; make sure it does not overlap with any existing network, to avoid conflicts.

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.91.19",
    "10.88.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
Create the metrics-server certificate
vim /approot1/k8s/tmp/ssl/metrics-server-csr.json
{
  "CN": "aggregator",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
Set up the apiserver as a systemd service
vim /approot1/k8s/tmp/service/kube-apiserver.service.192.168.91.19

Change 192.168.91.19 to your own IP here; do not copy-paste blindly.
The --service-cluster-ip-range parameter must be in the same network range as the 10.88.0.1 address in kubernetes-csr.json.
If etcd has multiple nodes, --etcd-servers must list all of the etcd nodes.
apiserver parameters

[Unit]
Description=Kubernetes API Server
Documentation=
After=network.target

[Service]
ExecStart=/approot1/k8s/bin/kube-apiserver \
--allow-privileged=true \
--anonymous-auth=false \
--api-audiences=api,istio-ca \
--authorization-mode=Node,RBAC \
--bind-address=192.168.91.19 \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--endpoint-reconciler-type=lease \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers= \
--kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem \
--secure-port=6443 \
--service-account-issuer= \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca.pem \
--service-cluster-ip-range=10.88.0.0/16 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
--requestheader-allowed-names= \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \
--proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \
--enable-aggregator-routing=true \
--v=2
Restart=always
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.
Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i “mkdir -p /etc/kubernetes/ssl”; \
ssh $i “mkdir -p /approot1/k8s/bin”; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,kubernetes*.pem,metrics-server*.pem} $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-apiserver.service.$i $i:/etc/systemd/system/kube-apiserver.service; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-apiserver $i:/approot1/k8s/bin/; \
done
Start the apiserver service

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.

for i in 192.168.91.19;do \
ssh $i “systemctl daemon-reload”; \
ssh $i “systemctl enable kube-apiserver”; \
ssh $i “systemctl restart kube-apiserver –no-block”; \
ssh $i “systemctl is-active kube-apiserver”; \
done

If it returns activating, the apiserver is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-apiserver";done again.
If it returns active, the apiserver started successfully.

curl -k –cacert /etc/kubernetes/ssl/ca.pem \
–cert /etc/kubernetes/ssl/kubernetes.pem \
–key /etc/kubernetes/ssl/kubernetes-key.pem \


If something like the following is returned, the apiserver service is running normally.

{
“kind”: “APIVersions”,
“versions”: [
“v1”
],
“serverAddressByClientCIDRs”: [
{
“clientCIDR”: “0.0.0.0/0”,
“serverAddress”: “192.168.91.19:6443”
}
]
}

List all of the k8s kinds (object categories):

curl -s -k –cacert /etc/kubernetes/ssl/ca.pem \
–cert /etc/kubernetes/ssl/kubernetes.pem \
–key /etc/kubernetes/ssl/kubernetes-key.pem \
| grep kind | sort -u
“kind”: “APIResourceList”,
“kind”: “Binding”,
“kind”: “ComponentStatus”,
“kind”: “ConfigMap”,
“kind”: “Endpoints”,
“kind”: “Event”,
“kind”: “Eviction”,
“kind”: “LimitRange”,
“kind”: “Namespace”,
“kind”: “Node”,
“kind”: “NodeProxyOptions”,
“kind”: “PersistentVolume”,
“kind”: “PersistentVolumeClaim”,
“kind”: “Pod”,
“kind”: “PodAttachOptions”,
“kind”: “PodExecOptions”,
“kind”: “PodPortForwardOptions”,
“kind”: “PodProxyOptions”,
“kind”: “PodTemplate”,
“kind”: “ReplicationController”,
“kind”: “ResourceQuota”,
“kind”: “Scale”,
“kind”: “Secret”,
“kind”: “Service”,
“kind”: “ServiceAccount”,
“kind”: “ServiceProxyOptions”,
“kind”: “TokenRequest”,
Set up kubectl
Create the admin certificate
vim /approot1/k8s/tmp/ssl/admin-csr.json
{
  "CN": "admin",
  "hosts": [
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
Create the kubeconfig

Set the cluster parameters
--server is the apiserver address: use your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.
cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
–certificate-authority=ca.pem \
–embed-certs=true \
–server= \
–kubeconfig=kubectl.kubeconfig

Set the client authentication parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials admin \
–client-certificate=admin.pem \
–client-key=admin-key.pem \
–embed-certs=true \
–kubeconfig=kubectl.kubeconfig

Set the context parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context kubernetes \
–cluster=kubernetes \
–user=admin \
–kubeconfig=kubectl.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config use-context kubernetes –kubeconfig=kubectl.kubeconfig
Distribute the kubeconfig to all master nodes

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.

for i in 192.168.91.19;do \
ssh $i “mkdir -p /etc/kubernetes/ssl”; \
ssh $i “mkdir -p /approot1/k8s/bin”; \
ssh $i “mkdir -p $HOME/.kube”; \
scp /approot1/k8s/pkg/kubernetes/bin/kubectl $i:/approot1/k8s/bin/; \
ssh $i “echo ‘source <(kubectl completion bash)' >> $HOME/.bashrc”
scp /approot1/k8s/tmp/ssl/kubectl.kubeconfig $i:$HOME/.kube/config; \
done
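Now that kubectl and its kubeconfig have been copied into place, a quick optional check from the master node (assuming /approot1/k8s/bin is already in PATH):

kubectl cluster-info
kubectl get ns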
Deploy the controller-manager component
Create the controller-manager certificate
vim /approot1/k8s/tmp/ssl/kube-controller-manager-csr.json

Change 192.168.91.19 to your own IP here; do not copy-paste blindly.
Mind the JSON formatting.

{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.91.19"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Create the kubeconfig

Set the cluster parameters
--server is the apiserver address: use your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
–certificate-authority=ca.pem \
–embed-certs=true \
–server= \
–kubeconfig=kube-controller-manager.kubeconfig

Set the client authentication parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-controller-manager \
–client-certificate=kube-controller-manager.pem \
–client-key=kube-controller-manager-key.pem \
–embed-certs=true \
–kubeconfig=kube-controller-manager.kubeconfig

Set the context parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-controller-manager \
–cluster=kubernetes \
–user=system:kube-controller-manager \
–kubeconfig=kube-controller-manager.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context system:kube-controller-manager \
–kubeconfig=kube-controller-manager.kubeconfig
Set up controller-manager as a systemd service
vim /approot1/k8s/tmp/service/kube-controller-manager.service

Change 192.168.91.19 to your own IP here; do not copy-paste blindly.
The --service-cluster-ip-range parameter must be in the same network range as the 10.88.0.1 address in kubernetes-csr.json.
--cluster-cidr is the network range pods run in; it must differ from the --service-cluster-ip-range range and from any existing network, to avoid conflicts.
controller-manager parameters

[Unit]
Description=Kubernetes Controller Manager
Documentation=

[Service]
ExecStart=/approot1/k8s/bin/kube-controller-manager \
--bind-address=0.0.0.0 \
--allocate-node-cidrs=true \
--cluster-cidr=172.20.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--node-cidr-mask-size=24 \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-cluster-ip-range=10.88.0.0/16 \
--use-service-account-credentials=true \
--v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.
Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i “mkdir -p /etc/kubernetes/ssl”; \
ssh $i “mkdir -p /approot1/k8s/bin”; \
scp /approot1/k8s/tmp/ssl/kube-controller-manager.kubeconfig $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/service/kube-controller-manager.service $i:/etc/systemd/system/; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-controller-manager $i:/approot1/k8s/bin/; \
done
Start the controller-manager service

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.

for i in 192.168.91.19;do \
ssh $i “systemctl daemon-reload”; \
ssh $i “systemctl enable kube-controller-manager”; \
ssh $i “systemctl restart kube-controller-manager –no-block”; \
ssh $i “systemctl is-active kube-controller-manager”; \
done

If it returns activating, controller-manager is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-controller-manager";done again.
If it returns active, controller-manager started successfully.
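Optionally, you can also hit the controller-manager health endpoint locally (assuming the default secure port 10257 used by v1.23); it should simply return ok:

curl -sk https://127.0.0.1:10257/healthz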

Deploy the scheduler component
Create the scheduler certificate
vim /approot1/k8s/tmp/ssl/kube-scheduler-csr.json

Change 192.168.91.19 to your own IP here; do not copy-paste blindly.
Mind the JSON formatting.

{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.91.19"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Create the kubeconfig

Set the cluster parameters
--server is the apiserver address: use your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
–certificate-authority=ca.pem \
–embed-certs=true \
–server= \
–kubeconfig=kube-scheduler.kubeconfig

Set the client authentication parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:kube-scheduler \
–client-certificate=kube-scheduler.pem \
–client-key=kube-scheduler-key.pem \
–embed-certs=true \
–kubeconfig=kube-scheduler.kubeconfig

Set the context parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context system:kube-scheduler \
–cluster=kubernetes \
–user=system:kube-scheduler \
–kubeconfig=kube-scheduler.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context system:kube-scheduler \
–kubeconfig=kube-scheduler.kubeconfig
Set up the scheduler as a systemd service
vim /approot1/k8s/tmp/service/kube-scheduler.service

scheduler parameters

[Unit]
Description=Kubernetes Scheduler
Documentation=

[Service]
ExecStart=/approot1/k8s/bin/kube-scheduler \
--authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--bind-address=0.0.0.0 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.
Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19;do \
ssh $i “mkdir -p /etc/kubernetes/ssl”; \
ssh $i “mkdir -p /approot1/k8s/bin”; \
scp /approot1/k8s/tmp/ssl/{ca*.pem,kube-scheduler.kubeconfig} $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/service/kube-scheduler.service $i:/etc/systemd/system/; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-scheduler $i:/approot1/k8s/bin/; \
done
Start the scheduler service

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.

for i in 192.168.91.19;do \
ssh $i “systemctl daemon-reload”; \
ssh $i “systemctl enable kube-scheduler”; \
ssh $i “systemctl restart kube-scheduler –no-block”; \
ssh $i “systemctl is-active kube-scheduler”; \
done

If it returns activating, the scheduler is still starting; wait a moment and then run for i in 192.168.91.19;do ssh $i "systemctl is-active kube-scheduler";done again.
If it returns active, the scheduler started successfully.
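With etcd, apiserver, controller-manager and scheduler all active, one optional overall check is the componentstatuses view (deprecated, but still available in v1.23):

kubectl get cs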

Deploy the work nodes
Deploy the containerd component
Download the binaries

When downloading containerd from GitHub, choose the file whose name starts with cri-containerd-cni; that archive contains containerd, the crictl management tool and the cni network plugins, and the systemd service file, config.toml, crictl.yaml and cni config files are already prepared, so they only need minor changes before use.
Although cri-containerd-cni also ships runc, it is missing dependencies, so download a fresh runc from the runc GitHub anyway.

wget -O /approot1/k8s/pkg/containerd.tar.gz \

wget -O /approot1/k8s/pkg/runc
mkdir /approot1/k8s/pkg/containerd
cd /approot1/k8s/pkg/
for i in $(ls *containerd*.tar.gz);do tar xvf $i -C /approot1/k8s/pkg/containerd && rm -f $i;done
chmod +x /approot1/k8s/pkg/runc
mv /approot1/k8s/pkg/containerd/usr/local/bin/{containerd,containerd-shim*,crictl,ctr} /approot1/k8s/pkg/containerd/
mv /approot1/k8s/pkg/containerd/opt/cni/bin/{bridge,flannel,host-local,loopback,portmap} /approot1/k8s/pkg/containerd/
rm -rf /approot1/k8s/pkg/containerd/{etc,opt,usr}
Set up containerd as a systemd service
vim /approot1/k8s/tmp/service/containerd.service

Pay attention to where the binaries are stored.
If the runc binary is not under /usr/bin/, the unit needs an Environment parameter that adds the runc path to PATH; otherwise, when k8s starts a pod you will get the error exec: "runc": executable file not found in $PATH: unknown.

[Unit]
Description=containerd container runtime
Documentation=
After=network.target

[Service]
Environment="PATH=$PATH:/approot1/k8s/bin"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/approot1/k8s/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
Create the containerd configuration file
vim /approot1/k8s/tmp/service/config.toml

root is the container storage path; change it to a path with plenty of disk space.
bin_dir is the path holding the containerd binaries and the cni plugins.
sandbox_image is the pause image name and tag.

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = “”
required_plugins = []
root = “/approot1/data/containerd”
state = “/run/containerd”
version = 2

[cgroup]
path = “”

[debug]
address = “”
format = “”
gid = 0
level = “”
uid = 0

[grpc]
address = “/run/containerd/containerd.sock”
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = “”
tcp_tls_cert = “”
tcp_tls_key = “”
uid = 0

[metrics]
address = “”
grpc_histogram = false

[plugins]

[plugins.”io.containerd.gc.v1.scheduler”]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = “0s”
startup_delay = “100ms”

[plugins.”io.containerd.grpc.v1.cri”]
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = “k8s.gcr.io/pause:3.6”
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = “4h0m0s”
stream_server_address = “127.0.0.1”
stream_server_port = “0”
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = “”

[plugins.”io.containerd.grpc.v1.cri”.cni]
bin_dir = “/approot1/k8s/bin”
conf_dir = “/etc/cni/net.d”
conf_template = “/etc/cni/net.d/cni-default.conf”
max_conf_num = 1

[plugins.”io.containerd.grpc.v1.cri”.containerd]
default_runtime_name = “runc”
disable_snapshot_annotations = true
discard_unpacked_layers = false
no_pivot = false
snapshotter = “overlayfs”

[plugins.”io.containerd.grpc.v1.cri”.containerd.default_runtime]
base_runtime_spec = “”
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = “”
runtime_root = “”
runtime_type = “”

[plugins.”io.containerd.grpc.v1.cri”.containerd.default_runtime.options]

[plugins.”io.containerd.grpc.v1.cri”.containerd.runtimes]

[plugins.”io.containerd.grpc.v1.cri”.containerd.runtimes.runc]
base_runtime_spec = “”
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = “”
runtime_root = “”
runtime_type = “io.containerd.runc.v2″

[plugins.”io.containerd.grpc.v1.cri”.containerd.runtimes.runc.options]
BinaryName = “”
CriuImagePath = “”
CriuPath = “”
CriuWorkPath = “”
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = “”
ShimCgroup = “”
SystemdCgroup = true

[plugins.”io.containerd.grpc.v1.cri”.containerd.untrusted_workload_runtime]
base_runtime_spec = “”
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = “”
runtime_root = “”
runtime_type = “”

[plugins.”io.containerd.grpc.v1.cri”.containerd.untrusted_workload_runtime.options]

[plugins.”io.containerd.grpc.v1.cri”.image_decryption]
key_model = “node”

[plugins.”io.containerd.grpc.v1.cri”.registry]
config_path = “”

[plugins.”io.containerd.grpc.v1.cri”.registry.auths]

[plugins.”io.containerd.grpc.v1.cri”.registry.configs]

[plugins.”io.containerd.grpc.v1.cri”.registry.headers]

[plugins.”io.containerd.grpc.v1.cri”.registry.mirrors]
[plugins.”io.containerd.grpc.v1.cri”.registry.mirrors.”docker.io”]
endpoint = [” ”
[plugins.”io.containerd.grpc.v1.cri”.registry.mirrors.”gcr.io”]
endpoint = [”
[plugins.”io.containerd.grpc.v1.cri”.registry.mirrors.”k8s.gcr.io”]
endpoint = [”
[plugins.”io.containerd.grpc.v1.cri”.registry.mirrors.”quay.io”]
endpoint = [”

[plugins.”io.containerd.grpc.v1.cri”.x509_key_pair_streaming]
tls_cert_file = “”
tls_key_file = “”

[plugins.”io.containerd.internal.v1.opt”]
path = “/opt/containerd”

[plugins.”io.containerd.internal.v1.restart”]
interval = “10s”

[plugins.”io.containerd.metadata.v1.bolt”]
content_sharing_policy = “shared”

[plugins.”io.containerd.monitor.v1.cgroups”]
no_prometheus = false

[plugins.”io.containerd.runtime.v1.linux”]
no_shim = false
runtime = “runc”
runtime_root = “”
shim = “containerd-shim”
shim_debug = false

[plugins.”io.containerd.runtime.v2.task”]
platforms = [“linux/amd64″]

[plugins.”io.containerd.service.v1.diff-service”]
default = [“walking”]

[plugins.”io.containerd.snapshotter.v1.aufs”]
root_path = “”

[plugins.”io.containerd.snapshotter.v1.btrfs”]
root_path = “”

[plugins.”io.containerd.snapshotter.v1.devmapper”]
async_remove = false
base_image_size = “”
pool_name = “”
root_path = “”

[plugins.”io.containerd.snapshotter.v1.native”]
root_path = “”

[plugins.”io.containerd.snapshotter.v1.overlayfs”]
root_path = “”

[plugins.”io.containerd.snapshotter.v1.zfs”]
root_path = “”

[proxy_plugins]

[stream_processors]

[stream_processors.”io.containerd.ocicrypt.decoder.v1.tar”]
accepts = [“application/vnd.oci.image.layer.v1.tar+encrypted”]
args = [“–decryption-keys-path”, “/etc/containerd/ocicrypt/keys”]
env = [“OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf”]
path = “ctd-decoder”
returns = “application/vnd.oci.image.layer.v1.tar”

[stream_processors.”io.containerd.ocicrypt.decoder.v1.tar.gzip”]
accepts = [“application/vnd.oci.image.layer.v1.tar+gzip+encrypted”]
args = [“–decryption-keys-path”, “/etc/containerd/ocicrypt/keys”]
env = [“OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf”]
path = “ctd-decoder”
returns = “application/vnd.oci.image.layer.v1.tar+gzip”

[timeouts]
“io.containerd.timeout.shim.cleanup” = “5s”
“io.containerd.timeout.shim.load” = “5s”
“io.containerd.timeout.shim.shutdown” = “3s”
“io.containerd.timeout.task.state” = “2s”

[ttrpc]
address = “”
gid = 0
uid = 0
Configure the crictl management tool
vim /approot1/k8s/tmp/service/crictl.yaml
runtime-endpoint:
Configure the cni network plugin
vim /approot1/k8s/tmp/service/cni-default.conf

The subnet field must match the --cluster-cidr parameter of controller-manager.

{
  "name": "mynet",
  "cniVersion": "0.3.1",
  "type": "bridge",
  "bridge": "mynet0",
  "isDefaultGateway": true,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.20.0.0/16"
  }
}
Distribute the configuration files and create the required paths
for i in 192.168.91.19 192.168.91.20;do \
ssh $i “mkdir -p /etc/containerd”; \
ssh $i “mkdir -p /approot1/k8s/bin”; \
ssh $i “mkdir -p /etc/cni/net.d”; \
scp /approot1/k8s/tmp/service/containerd.service $i:/etc/systemd/system/; \
scp /approot1/k8s/tmp/service/config.toml $i:/etc/containerd/; \
scp /approot1/k8s/tmp/service/cni-default.conf $i:/etc/cni/net.d/; \
scp /approot1/k8s/tmp/service/crictl.yaml $i:/etc/; \
scp /approot1/k8s/pkg/containerd/* $i:/approot1/k8s/bin/; \
scp /approot1/k8s/pkg/runc $i:/approot1/k8s/bin/; \
done
Start the containerd service
for i in 192.168.91.19 192.168.91.20;do \
ssh $i “systemctl daemon-reload”; \
ssh $i “systemctl enable containerd”; \
ssh $i “systemctl restart containerd –no-block”; \
ssh $i “systemctl is-active containerd”; \
done

If it returns activating, containerd is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active containerd";done again.
If it returns active, containerd started successfully.
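Once containerd is active, an optional check is to query it through crictl on each node (crictl reads its endpoint from the /etc/crictl.yaml distributed above):

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "/approot1/k8s/bin/crictl version"; \
done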

Import the pause image

ctr has a quirk when importing images: for the image to be usable by k8s you must add the -n k8s.io parameter, and it has to be written as ctr -n k8s.io image import; writing ctr image import -n k8s.io fails with ctr: flag provided but not defined: -n. A slightly odd behaviour to get used to.
If an image is imported without -n k8s.io, kubelet will re-pull the pause image when it starts a pod, and if the configured image registry does not have that tag it will fail.

for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/pause-v3.6.tar $i:/tmp/
ssh $i “ctr -n=k8s.io image import /tmp/pause-v3.6.tar && rm -f /tmp/pause-v3.6.tar”; \
done

Check the images:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i “ctr -n=k8s.io image list | grep pause”; \
done
Deploy the kubelet component
Create the kubelet certificate
vim /approot1/k8s/tmp/ssl/kubelet-csr.json.192.168.91.19

Change 192.168.91.19 to your own IP here; do not copy-paste blindly. Create one JSON file per node, and change the IP inside each file to that work node's IP; do not duplicate them.

{
  "CN": "system:node:192.168.91.19",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.91.19"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:nodes",
      "OU": "System"
    }
  ]
}
for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kubelet-csr.json.$i | cfssljson -bare kubelet.$i; \
done
Create the kubeconfig

Set the cluster parameters
--server is the apiserver address: use your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
–certificate-authority=ca.pem \
–embed-certs=true \
–server= \
–kubeconfig=kubelet.kubeconfig.$i; \
done

Set the client authentication parameters

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials system:node:$i \
–client-certificate=kubelet.$i.pem \
–client-key=kubelet.$i-key.pem \
–embed-certs=true \
–kubeconfig=kubelet.kubeconfig.$i; \
done

Set the context parameters

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
–cluster=kubernetes \
–user=system:node:$i \
–kubeconfig=kubelet.kubeconfig.$i; \
done

Set the default context

for i in 192.168.91.19 192.168.91.20;do \
cd /approot1/k8s/tmp/ssl/; \
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context default \
–kubeconfig=kubelet.kubeconfig.$i; \
done
Create the kubelet configuration file
vim /approot1/k8s/tmp/service/config.yaml

The clusterDNS IP must be in the same network range as the apiserver --service-cluster-ip-range parameter, but different from the k8s service IP. The k8s service usually takes the first IP of the range and clusterDNS the second.

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
– 10.88.0.2
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 3
containerLogMaxSize: 10Mi
enforceNodeAllocatable:
– pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 300Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 40s
hairpinMode: hairpin-veth
healthzBindAddress: 0.0.0.0
healthzPort: 10248
httpCheckFrequency: 40s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
kubeAPIBurst: 100
kubeAPIQPS: 50
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
# disable readOnlyPort
readOnlyPort: 0
resolvConf: /etc/resolv.conf
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
tlsCertFile: /etc/kubernetes/ssl/kubelet.pem
tlsPrivateKeyFile: /etc/kubernetes/ssl/kubelet-key.pem
Set up kubelet as a systemd service
vim /approot1/k8s/tmp/service/kubelet.service.192.168.91.19

Change 192.168.91.19 to your own IP here; do not copy-paste blindly. Create one service file per node, and change the IP inside each file to that work node's IP; do not duplicate them.
The --container-runtime parameter defaults to docker; when using anything other than docker it must be set to remote, and --container-runtime-endpoint must point at the runtime's sock file.
kubelet parameters

[Unit]
Description=Kubernetes Kubelet
Documentation=

[Service]
WorkingDirectory=/approot1/k8s/data/kubelet
ExecStart=/approot1/k8s/bin/kubelet \
--config=/approot1/k8s/data/kubelet/config.yaml \
--cni-bin-dir=/approot1/k8s/bin \
--cni-conf-dir=/etc/cni/net.d \
--container-runtime=remote \
--container-runtime-endpoint= \
--hostname-override=192.168.91.19 \
--image-pull-progress-deadline=5m \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.6 \
--root-dir=/approot1/k8s/data/kubelet \
--v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.
Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19 192.168.91.20;do \
ssh $i “mkdir -p /approot1/k8s/data/kubelet”; \
ssh $i “mkdir -p /approot1/k8s/bin”; \
ssh $i “mkdir -p /etc/kubernetes/ssl”; \
scp /approot1/k8s/tmp/ssl/ca*.pem $i:/etc/kubernetes/ssl/; \
scp /approot1/k8s/tmp/ssl/kubelet.$i.pem $i:/etc/kubernetes/ssl/kubelet.pem; \
scp /approot1/k8s/tmp/ssl/kubelet.$i-key.pem $i:/etc/kubernetes/ssl/kubelet-key.pem; \
scp /approot1/k8s/tmp/ssl/kubelet.kubeconfig.$i $i:/etc/kubernetes/kubelet.kubeconfig; \
scp /approot1/k8s/tmp/service/kubelet.service.$i $i:/etc/systemd/system/kubelet.service; \
scp /approot1/k8s/tmp/service/config.yaml $i:/approot1/k8s/data/kubelet/; \
scp /approot1/k8s/pkg/kubernetes/bin/kubelet $i:/approot1/k8s/bin/; \
done
Start the kubelet service
for i in 192.168.91.19 192.168.91.20;do \
ssh $i “systemctl daemon-reload”; \
ssh $i “systemctl enable kubelet”; \
ssh $i “systemctl restart kubelet –no-block”; \
ssh $i “systemctl is-active kubelet”; \
done

If it returns activating, kubelet is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kubelet";done again.
If it returns active, kubelet started successfully.

Check whether the nodes are Ready
kubectl get node

You should see output like the following; a STATUS of Ready means the node is healthy.

NAME STATUS ROLES AGE VERSION
192.168.91.19 Ready 20m v1.23.3
192.168.91.20 Ready 20m v1.23.3
Deploy the kube-proxy component
Create the kube-proxy certificate
vim /approot1/k8s/tmp/ssl/kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [],
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:kube-proxy",
      "OU": "System"
    }
  ]
}
cd /approot1/k8s/tmp/ssl/; \
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Create the kubeconfig

Set the cluster parameters
--server is the apiserver address: use your own IP and the port set by the --secure-port parameter in the service file. Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig.

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-cluster kubernetes \
–certificate-authority=ca.pem \
–embed-certs=true \
–server= \
–kubeconfig=kube-proxy.kubeconfig

Set the client authentication parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-credentials kube-proxy \
–client-certificate=kube-proxy.pem \
–client-key=kube-proxy-key.pem \
–embed-certs=true \
–kubeconfig=kube-proxy.kubeconfig

Set the context parameters

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config set-context default \
–cluster=kubernetes \
–user=kube-proxy \
–kubeconfig=kube-proxy.kubeconfig

Set the default context

cd /approot1/k8s/tmp/ssl/
/approot1/k8s/pkg/kubernetes/bin/kubectl config \
use-context default \
–kubeconfig=kube-proxy.kubeconfig
Create the kube-proxy configuration file
vim /approot1/k8s/tmp/service/kube-proxy-config.yaml.192.168.91.19

Change 192.168.91.19 to your own IP here; do not copy-paste blindly. Create one config file per node, and change the IP inside each file to that work node's IP; do not duplicate them.
The clusterCIDR field must match the controller-manager --cluster-cidr parameter.
The hostnameOverride field must match the kubelet --hostname-override parameter, otherwise you will get node not found errors.

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
clusterCIDR: "172.20.0.0/16"
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "192.168.91.19"
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"
Set up kube-proxy as a systemd service
vim /approot1/k8s/tmp/service/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic
## when --cluster-cidr or --masquerade-all is specified,
## kube-proxy SNATs requests that access the Service IP
WorkingDirectory=/approot1/k8s/data/kube-proxy
ExecStart=/approot1/k8s/bin/kube-proxy \
--config=/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute the certificates and create the required paths

For multiple nodes, just append the additional IPs after 192.168.91.19, separated by spaces; be sure to change 192.168.91.19 to your own IP instead of copy-pasting blindly.
Also make sure the directories match your own layout; if yours differ from mine, adjust them, otherwise the service will fail to start.

for i in 192.168.91.19 192.168.91.20;do \
ssh $i “mkdir -p /approot1/k8s/data/kube-proxy”; \
ssh $i “mkdir -p /approot1/k8s/bin”; \
ssh $i “mkdir -p /etc/kubernetes/ssl”; \
scp /approot1/k8s/tmp/ssl/kube-proxy.kubeconfig $i:/etc/kubernetes/; \
scp /approot1/k8s/tmp/service/kube-proxy.service $i:/etc/systemd/system/; \
scp /approot1/k8s/tmp/service/kube-proxy-config.yaml.$i $i:/approot1/k8s/data/kube-proxy/kube-proxy-config.yaml; \
scp /approot1/k8s/pkg/kubernetes/bin/kube-proxy $i:/approot1/k8s/bin/; \
done
Start the kube-proxy service
for i in 192.168.91.19 192.168.91.20;do \
ssh $i “systemctl daemon-reload”; \
ssh $i “systemctl enable kube-proxy”; \
ssh $i “systemctl restart kube-proxy –no-block”; \
ssh $i “systemctl is-active kube-proxy”; \
done

If it returns activating, kube-proxy is still starting; wait a moment and then run for i in 192.168.91.19 192.168.91.20;do ssh $i "systemctl is-active kube-proxy";done again.
If it returns active, kube-proxy started successfully.
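As an optional check that kube-proxy really came up in ipvs mode, you can query the proxy-mode endpoint exposed on the metrics port (10249, as set by metricsBindAddress above); it should print ipvs:

curl -s http://127.0.0.1:10249/proxyMode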

Deploy the flannel component

flannel github

Create the flannel yaml file
vim /approot1/k8s/tmp/service/flannel.yaml

The Network field in net-conf.json must match the controller-manager --cluster-cidr parameter.


apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
– configMap
– secret
– emptyDir
– hostPath
allowedHostPaths:
– pathPrefix: “/etc/cni/net.d”
– pathPrefix: “/etc/kube-flannel”
– pathPrefix: “/run/flannel”
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: [‘NET_ADMIN’, ‘NET_RAW’]
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
– min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: ‘RunAsAny’

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
– apiGroups: [‘policy’]
resources: [‘podsecuritypolicies’]
verbs: [‘use’]
resourceNames: [‘psp.flannel.unprivileged’]
– apiGroups:
– “”
resources:
– pods
verbs:
– get
– apiGroups:
– “”
resources:
– nodes
verbs:
– list
– watch
– apiGroups:
– “”
resources:
– nodes/status
verbs:
– patch

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
– kind: ServiceAccount
name: flannel
namespace: kube-system

apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system

kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
“name”: “cbr0”,
“cniVersion”: “0.3.1”,
“plugins”: [
{
“type”: “flannel”,
“delegate”: {
“hairpinMode”: true,
“isDefaultGateway”: true
}
},
{
“type”: “portmap”,
“capabilities”: {
“portMappings”: true
}
}
]
}
net-conf.json: |
{
“Network”: “172.20.0.0/16”,
“Backend”: {
“Type”: “vxlan”
}
}

apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
– matchExpressions:
– key: kubernetes.io/os
operator: In
values:
– linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
– operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
– name: install-cni
image: quay.io/coreos/flannel:v0.15.1
command:
– cp
args:
– -f
– /etc/kube-flannel/cni-conf.json
– /etc/cni/net.d/10-flannel.conflist
volumeMounts:
– name: cni
mountPath: /etc/cni/net.d
– name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
– name: kube-flannel
image: quay.io/coreos/flannel:v0.15.1
command:
– /opt/bin/flanneld
args:
– –ip-masq
– –kube-subnet-mgr
resources:
requests:
cpu: “100m”
memory: “50Mi”
limits:
cpu: “100m”
memory: “50Mi”
securityContext:
privileged: false
capabilities:
add: [“NET_ADMIN”, “NET_RAW”]
env:
– name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
– name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
– name: run
mountPath: /run/flannel
– name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
– name: run
hostPath:
path: /run/flannel
– name: cni
hostPath:
path: /etc/cni/net.d
– name: flannel-cfg
configMap:
name: kube-flannel-cfg
Create the flannel cni config file
vim /approot1/k8s/tmp/service/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
Import the flannel image
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/flannel-v0.15.1.tar $i:/tmp/
ssh $i “ctr -n=k8s.io image import /tmp/flannel-v0.15.1.tar && rm -f /tmp/flannel-v0.15.1.tar”; \
done

Check the images:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i “ctr -n=k8s.io image list | grep flannel”; \
done
Distribute the flannel cni config file
for i in 192.168.91.19 192.168.91.20;do \
ssh $i “rm -f /etc/cni/net.d/10-default.conf”; \
scp /approot1/k8s/tmp/service/10-flannel.conflist $i:/etc/cni/net.d/; \
done

Once the flannel cni config file has been distributed, the nodes will temporarily go NotReady; wait until they are all Ready again before deploying the flannel component.

Run the flannel component in k8s
kubectl apply -f /approot1/k8s/tmp/service/flannel.yaml
Check whether the flannel pods are running
kubectl get pod -n kube-system | grep flannel

Expected output looks something like this:
flannel is a DaemonSet, i.e. its pods live and die with their node: there is one flannel pod per k8s node, and when a node is deleted its flannel pod is removed along with it.

kube-flannel-ds-86rrv 1/1 Running 0 8m54s
kube-flannel-ds-bkgzx 1/1 Running 0 8m53s

On SUSE 12 you may see Init:CreateContainerError; run kubectl describe pod -n kube-system to find the reason. If the error is Error: failed to create containerd container: get apparmor_parser version: exec: "apparmor_parser": executable file not found in $PATH, locate apparmor_parser with which apparmor_parser and create a symlink to it in the directory that holds the kubelet binaries, then restart the pod. Note that this symlink must be created on every node that runs flannel.
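A minimal sketch of that fix, assuming apparmor_parser turns out to live in /sbin and the kubelet binaries sit in /approot1/k8s/bin as in this article (adjust both paths to what which actually reports):

which apparmor_parser
ln -s /sbin/apparmor_parser /approot1/k8s/bin/apparmor_parser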

Deploy the coredns component
Create the coredns yaml file
vim /approot1/k8s/tmp/service/coredns.yaml

The clusterIP field must match the clusterDNS field in the kubelet configuration file.

apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: “true”
addonmanager.kubernetes.io/mode: Reconcile

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
– apiGroups:
– “”
resources:
– endpoints
– services
– pods
– namespaces
verbs:
– list
– watch
– apiGroups:
– “”
resources:
– nodes
verbs:
– get
– apiGroups:
– discovery.k8s.io
resources:
– endpointslices
verbs:
– list
– watch

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: “true”
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
– kind: ServiceAccount
name: coredns
namespace: kube-system

apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
reload
loadbalance
}

apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: “true”
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: “CoreDNS”
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
priorityClassName: system-cluster-critical
serviceAccountName: coredns
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
– weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
– key: k8s-app
operator: In
values: [“kube-dns”]
topologyKey: kubernetes.io/hostname
tolerations:
– key: “CriticalAddonsOnly”
operator: “Exists”
nodeSelector:
kubernetes.io/os: linux
containers:
– name: coredns
image: docker.io/coredns/coredns:1.8.6
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 300Mi
requests:
cpu: 100m
memory: 70Mi
args: [ “-conf”, “/etc/coredns/Corefile” ]
volumeMounts:
– name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
– containerPort: 53
name: dns
protocol: UDP
– containerPort: 53
name: dns-tcp
protocol: TCP
– containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
– NET_BIND_SERVICE
drop:
– all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
– name: config-volume
configMap:
name: coredns
items:
– key: Corefile
path: Corefile

apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: “9153”
prometheus.io/scrape: “true”
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: “true”
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: “CoreDNS”
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.88.0.2
ports:
– name: dns
port: 53
protocol: UDP
– name: dns-tcp
port: 53
protocol: TCP
– name: metrics
port: 9153
protocol: TCP
Import the coredns image
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/coredns-v1.8.6.tar $i:/tmp/
ssh $i “ctr -n=k8s.io image import /tmp/coredns-v1.8.6.tar && rm -f /tmp/coredns-v1.8.6.tar”; \
done

Check the images:

for i in 192.168.91.19 192.168.91.20;do \
ssh $i “ctr -n=k8s.io image list | grep coredns”; \
done
Run the coredns component in k8s
kubectl apply -f /approot1/k8s/tmp/service/coredns.yaml
Check whether the coredns pod is running
kubectl get pod -n kube-system | grep coredns

Expected output looks something like this:
Because the replicas field in the coredns yaml is 1, there is only one pod here; if you change it to 2 you will get two pods.

coredns-5fd74ff788-cddqf 1/1 Running 0 10s
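As an optional end-to-end DNS check (assuming the busybox image can be pulled or has been imported), resolve the kubernetes service from a throwaway pod:

kubectl run dns-test -it --rm --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default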
Deploy the metrics-server component
Create the metrics-server yaml file
vim /approot1/k8s/tmp/service/metrics-server.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-insecure-tls
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
Import the metrics-server image
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/metrics-server-v0.5.2.tar $i:/tmp/
ssh $i "ctr -n=k8s.io image import /tmp/metrics-server-v0.5.2.tar && rm -f /tmp/metrics-server-v0.5.2.tar"; \
done

Check the images

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | grep metrics-server"; \
done
Run the metrics-server component in k8s
kubectl apply -f /approot1/k8s/tmp/service/metrics-server.yaml
Check whether the metrics-server pod is running successfully
kubectl get pod -n kube-system | grep metrics-server

The expected output looks similar to the following.

metrics-server-6c95598969-qnc76 1/1 Running 0 71s

Verify that metrics-server works
Check node resource usage:

kubectl top node

The expected output looks similar to the following.
metrics-server can be slow to start, depending on the machine's specs; if the output says "is not yet" or "is not ready", wait a moment and run kubectl top node again.

NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
192.168.91.19 285m 4% 2513Mi 32%
192.168.91.20 71m 3% 792Mi 21%
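If kubectl top keeps reporting that metrics are unavailable, it usually helps to check whether the metrics.k8s.io APIService has become Available and whether the raw metrics API answers. These are standard checks, not specific to this deployment:

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

The APIService should show Available=True once the metrics-server pod has passed its readiness probe.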

Check the pod resource usage of a specific namespace:

kubectl top pod -n kube-system

The expected output looks similar to the following.

NAME CPU(cores) MEMORY(bytes)
coredns-5fd74ff788-cddqf 11m 18Mi
kube-flannel-ds-86rrv 4m 18Mi
kube-flannel-ds-bkgzx 6m 22Mi
kube-flannel-ds-v25xc 6m 22Mi
metrics-server-6c95598969-qnc76 6m 22Mi
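As a small convenience, kubectl top can also sort its output, which makes it easier to spot the heaviest pods; for example:

kubectl top pod -n kube-system --sort-by=memory
kubectl top pod -A --sort-by=cpu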
Deploy the dashboard component
Configure the dashboard yaml file
vim /approot1/k8s/tmp/service/dashboard.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-read-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-read-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-read-clusterrole
subjects:
- kind: ServiceAccount
  name: dashboard-read-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-read-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - persistentvolumes
  - persistentvolumeclaims
  - persistentvolumeclaims/status
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  - services/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - controllerrevisions
  - daemonsets
  - daemonsets/status
  - deployments
  - deployments/scale
  - deployments/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - statefulsets
  - statefulsets/scale
  - statefulsets/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  - horizontalpodautoscalers/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - cronjobs/status
  - jobs
  - jobs/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - daemonsets/status
  - deployments
  - deployments/scale
  - deployments/status
  - ingresses
  - ingresses/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - replicationcontrollers/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  - poddisruptionbudgets/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  - ingresses/status
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  - volumeattachments
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - clusterrolebindings
  - clusterroles
  - roles
  - rolebindings
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster", "dashboard-metrics-scraper"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
  verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: kubernetesui/dashboard:v2.4.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        - --namespace=kube-system
        - --token-ttl=1800
        - --sidecar-host=
        # Uncomment the following line to manually specify Kubernetes API server Host
        # If not specified, Dashboard will attempt to auto discover the API server and connect
        # to it. Uncomment only if the default does not work.
        # - --apiserver-host=
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 1001
          runAsGroup: 2001
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: dashboard-metrics-scraper
        image: kubernetesui/metrics-scraper:v1.0.7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 30
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 1001
          runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      volumes:
      - name: tmp-volume
        emptyDir: {}
Import the dashboard images
for i in 192.168.91.19 192.168.91.20;do \
scp /approot1/k8s/images/dashboard-*.tar $i:/tmp/
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-v2.4.0.tar && rm -f /tmp/dashboard-v2.4.0.tar"; \
ssh $i "ctr -n=k8s.io image import /tmp/dashboard-metrics-scraper-v1.0.7.tar && rm -f /tmp/dashboard-metrics-scraper-v1.0.7.tar"; \
done

Check the images

for i in 192.168.91.19 192.168.91.20;do \
ssh $i "ctr -n=k8s.io image list | egrep 'dashboard|metrics-scraper'"; \
done
Run the dashboard component in k8s
kubectl apply -f /approot1/k8s/tmp/service/dashboard.yaml
Check whether the dashboard pods are running successfully
kubectl get pod -n kube-system | grep dashboard

The expected output looks similar to the following.

dashboard-metrics-scraper-799d786dbf-v28pm 1/1 Running 0 2m55s
kubernetes-dashboard-9f8c8b989-rhb7z 1/1 Running 0 2m55s
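Optionally, confirm that both Services picked up their pods as endpoints (empty endpoints usually point at a label-selector mismatch):

kubectl -n kube-system get endpoints kubernetes-dashboard dashboard-metrics-scraper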
Check the dashboard access port

The Service does not pin the dashboard access port, so you have to look it up yourself; you can also edit the yaml file to specify the access port (see the example further below).
The expected output looks similar to the following.
In my case, node port 30210 is mapped to the pod's port 443.

kubernetes-dashboard NodePort 10.88.127.68 443:30210/TCP 5m30s
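The line above is from listing the Services with kubectl get svc -n kube-system. If you would rather have a fixed port than a randomly assigned one, you can pin the nodePort of the kubernetes-dashboard Service, either by adding nodePort: 30210 under spec.ports in the yaml file and re-applying it, or by patching the live Service; 30210 here is just an example value and must fall inside the cluster's NodePort range (30000-32767 by default):

kubectl -n kube-system patch svc kubernetes-dashboard --type=json -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30210}]'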

Access the dashboard page through the port you obtained, for example https://192.168.91.19:30210

Check the dashboard login token

Get the name of the token secret

kubectl get secrets -n kube-system | grep admin

The expected output looks similar to the following.

admin-user-token-zvrst kubernetes.io/service-account-token 3 9m2s

Get the token content

kubectl get secrets -n kube-system admin-user-token-zvrst -o jsonpath={.data.token}|base64 -d

The expected output looks similar to the following.

eyJhbGciOiJSUzI1NiIsImtpZCI6InA4M1lhZVgwNkJtekhUd3Vqdm9vTE1ma1JYQ1ZuZ3c3ZE1WZmJhUXR4bUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp2cnN0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYTE3NTg1ZC1hM2JiLTQ0YWYtOWNhZS0yNjQ5YzA0YThmZWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.K2o9p5St9tvIbXk7mCQCwsZQV11zICwN-JXhRv1hAnc9KFcAcDOiO4NxIeicvC2H9tHQBIJsREowVwY3yGWHj_MQa57EdBNWMrN1hJ5u-XzpzJ6JbQxns8ZBrCpIR8Fxt468rpTyMyqsO2UBo-oXQ0_ZXKss6X6jjxtGLCQFkz1ZfFTQW3n49L4ENzW40sSj4dnaX-PsmosVOpsKRHa8TPndusAT-58aujcqt31Z77C4M13X_vAdjyDLK9r5ZXwV2ryOdONwJye_VtXXrExBt9FWYtLGCQjKn41pwXqEfidT8cY6xbA7XgUVTr9miAmZ-jf1UeEw-nm8FOw9Bb5v6A
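If you don't want to look up the secret name by hand, the same token can be pulled in one line; this assumes the admin-user ServiceAccount created by the dashboard.yaml above (in v1.23 a ServiceAccount still references its auto-generated token secret):

kubectl -n kube-system get secret $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d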

At this point, the containerd-based binary deployment of k8s v1.23.3 is complete.
