Fixing the error "The connection to the server ….:6443 was refused – did you specify the right host or port?"
I. How the fault occurred
The Kubernetes master node was rebooted without first stopping the Kubernetes-related services (simulating an unexpected power loss of the server).
II. Fault symptoms
After the reboot, the Kubernetes dashboard could no longer be reached remotely, and the following command returned an error.
# kubectl get nodes
The connection to the server ….:6443 was refused – did you specify the right host or port?
III. Fault handling:
1. Check the environment variables (normal)
# env | grep -i kub
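If KUBECONFIG is unset, kubectl falls back to $HOME/.kube/config. As a quick sanity check, you can confirm which endpoint kubectl is targeting; this reads only the local kubeconfig, so it works even while the API server is down (assuming a current context is configured):
# kubectl config view --minify | grep server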
2. Check the docker service (normal)
# systemctl status docker.service
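Beyond the daemon itself, it can be worth checking whether the control-plane containers exist at all and whether they keep exiting; a minimal sketch:
# docker ps -a | grep -E 'kube-apiserver|etcd'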
3. Check the kubelet service (appears normal)
# systemctl status kubelet.service
4. Check whether the port is being listened on (nothing is listening)
# netstat -pnlt | grep 6443
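On systems where netstat is no longer shipped, ss performs the same check:
# ss -pnlt | grep 6443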
5. Check the firewall status (normal)
# systemctl status firewalld.service
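If firewalld were active, you could additionally confirm whether port 6443 is open (this command only works while firewalld is running):
# firewall-cmd --list-ports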
6. Check the logs
# journalctl -xeu kubelet
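To narrow the search, it can help to filter the kubelet log for recent errors, for example:
# journalctl -u kubelet --since "10 minutes ago" --no-pager | grep -i error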
From the log analysis, the problem appears to be with the images.
6.1 Re-import the API server image.
# docker load -i kube-apiserver-amd64_v1.9.0.tar
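After loading, you can verify that the image is present locally:
# docker images | grep kube-apiserver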
6.2 Restart the docker and kubelet services
# systemctl restart docker.service
# systemctl restart kubelet.service
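A quick confirmation that both services actually came back up:
# systemctl is-active docker.service kubelet.service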
6.3 Check the service again (now normal)
# kubectl get nodes
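You can also probe the API server port directly. Any HTTP response at all (even 401/403 on clusters that reject anonymous requests) proves that port 6443 is no longer refusing connections; -k skips TLS verification:
# curl -k https://127.0.0.1:6443/healthz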
At this point, the fault has been resolved.
The following alternative solutions have been collected for the same error.

Solution 1: point kubectl at the admin kubeconfig.
export KUBECONFIG=/etc/kubernetes/admin.conf
Or, if the config has already been copied to your home directory:
export KUBECONFIG=$HOME/.kube/config

Solution 2: copy the admin kubeconfig into your home directory.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively you can also export the KUBECONFIG variable like this:
export KUBECONFIG=$HOME/.kube/config

Solution 3: turn off swap, then start kubelet and the containers.
#!/bin/bash
swapoff -a
systemctl start kubelet
docker start $(docker ps -a -q)
docker start $(docker ps -a -q)

Solution 4: check and, if necessary, stop the firewall.
sudo systemctl status firewalld    # RedHat / CentOS
sudo systemctl stop firewalld      # RedHat / CentOS
sudo ufw status verbose            # Ubuntu
sudo ufw disable                   # Ubuntu

Solution 5: rebuild the cluster with kubeadm.
master: 192.168.211.40
node1: 192.168.211.41
node2: 192.168.211.42

master $ kubeadm reset
$ kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.211.40 --kubernetes-version=v1.18.0
...
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster. Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
    --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8
master $ mkdir -p $HOME/.kube
master $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
master $ sudo chown $(id -u):$(id -g) $HOME/.kube/config

node1 $ kubeadm reset
node1 $ kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
    --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8

node2 $ kubeadm reset
node2 $ kubeadm join 192.168.211.40:6443 --token s7apx1.mlxn2jkid6n99fr0 \
    --discovery-token-ca-cert-hash sha256:2fa9da39110d02efaf4f8781aa50dd25cce9be524618dc7ab91a53e81c5c22f8

master $ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   5m18s   v1.18.6
node1    Ready
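If the original token has expired by the time another node joins (kubeadm tokens are valid for 24 hours by default), a fresh join command can be generated on the master:
master $ kubeadm token create --print-join-command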
root@master:~# kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-2xmnt          0/1     Pending   0          21m
kube-system   coredns-66bff467f8-ghj2s          0/1     Pending   0          21m
kube-system   etcd-master                       1/1     Running   0          22m
kube-system   kube-apiserver-master             1/1     Running   0          22m
kube-system   kube-controller-manager-master    1/1     Running   0          22m
kube-system   kube-proxy-dh46z                  1/1     Running   0          7m35s
kube-system   kube-proxy-jq6cb                  1/1     Running   0          21m
kube-system   kube-proxy-z6prp                  1/1     Running   0          9m14s
kube-system   kube-scheduler-master             1/1     Running   0          22m
$ kubectl apply -f
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
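The coredns pods that were Pending above should transition to Running once the calico-node pods created by this manifest (its file name is elided in the transcript) come up; one way to watch that happen:
$ kubectl get pods -n kube-system -w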