
Deploying Kubernetes on CentOS 8 with kubeadm

0. Mirror sites

Tsinghua University open source software mirror
Aliyun official mirror site
Docker official download page

1. System preparation
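A typical preparation sequence for kubeadm on each node, sketched from the standard prerequisites (assumed here; adjust to your environment):

# assumed prerequisite steps; not cluster-specific
systemctl stop firewalld && systemctl disable firewalld   # or open the required ports instead
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
swapoff -a                                                # kubeadm refuses to run with swap enabled
sed -i '/ swap / s/^/#/' /etc/fstab
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system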

2. Add the Aliyun yum repo

rm -rfv /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
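Optionally rebuild the cache so the new repo takes effect:

yum clean all && yum makecache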

3. Install common packages

yum install vim bash-completion net-tools gcc -y

4. Install docker-ce from the Aliyun repo

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
This fails with a dependency error:

Last metadata expiration check: 0:00:14 ago on Fri 01 Jan 2021 06:49:09 AM.
Error: 
 Problem: package docker-ce-3:20.10.1-3.el7.x86_64 requires containerd.io >= 1.4.1, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.4.3-3.1.el7.x86_64 is filtered out by modular filtering
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

The containerd.io build the repo offers is blocked by modular filtering, so fetch the el8 build directly and install it first:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/Packages/containerd.io-1.4.3-3.1.el8.x86_64.rpm
yum install containerd.io-1.4.3-3.1.el8.x86_64.rpm -y

This runs into the following error:

[root@localhost ~]# yum install containerd.io-1.4.3-3.1.el8.x86_64.rpm -y
Last metadata expiration check: 0:25:07 ago on Fri 01 Jan 2021 06:49:09 AM.
Error: 
 Problem: problem with installed package podman-2.0.5-5.module_el8.3.0+512+b3b58dca.x86_64
  - package podman-2.0.5-5.module_el8.3.0+512+b3b58dca.x86_64 requires runc >= 1.0.0-57, but none of the providers can be installed
  - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-68.rc92.module_el8.3.0+475+c50ce30b.x86_64
  - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-68.rc92.module_el8.3.0+475+c50ce30b.x86_64
  - conflicting requests
  - package runc-1.0.0-64.rc10.module_el8.3.0+479+69e2ae26.x86_64 is filtered out by modular filtering
  - package containerd.io-1.2.10-3.2.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.2.13-3.1.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.2.13-3.2.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.2.2-3.3.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.2.2-3.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.2.4-3.1.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.2.5-3.1.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.2.6-3.3.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.3.7-3.1.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.3.9-3.1.el7.x86_64 is filtered out by modular filtering
  - package containerd.io-1.4.3-3.1.el7.x86_64 is filtered out by modular filtering
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

The conflict comes from the preinstalled podman/runc. Run the following command, then install containerd.io-1.4.3-3.1.el8.x86_64.rpm again:

yum erase podman buildah
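Alternatively, follow the hint in the error output itself: --allowerasing lets the resolver remove the conflicting podman/runc packages in a single step:

yum install containerd.io-1.4.3-3.1.el8.x86_64.rpm -y --allowerasing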

Then install docker-ce again and it succeeds.
Configure the Aliyun Docker registry mirror:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://y4syxeu0.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
systemctl enable docker.service
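To confirm the mirror is active (a quick check; output layout may vary by Docker version):

docker info | grep -A 1 'Registry Mirrors'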

5. Install kubectl, kubelet, and kubeadm

Add the Aliyun Kubernetes repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubectl kubelet kubeadm -y
systemctl enable kubelet
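If the repo already carries something newer than the v1.20.1 initialized below, the package versions can be pinned explicitly (assumed yum version syntax):

yum install -y kubelet-1.20.1 kubeadm-1.20.1 kubectl-1.20.1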

Check the installed version:

[root@localhost ~]# kubectl version

Initialize the control plane (note the pod network CIDR; it comes up again when the network plugin is installed):
kubeadm init --kubernetes-version=v1.20.1 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

Record the kubeadm join command printed at the end of the output; other nodes run it to join the Kubernetes cluster.
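The join command has this general shape (placeholders only; use the exact line your kubeadm init printed):

kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>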
Set up kubectl as the init output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the following to enable kubectl tab completion:
source <(kubectl completion bash)
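To persist completion across sessions (bash-completion was installed in step 3):

echo 'source <(kubectl completion bash)' >> ~/.bashrc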
Check the node and pods:

[root@localhost ~]# kubectl get node
NAME                    STATUS     ROLES                  AGE     VERSION
localhost.localdomain   NotReady   control-plane,master   2m51s   v1.20.1
[root@localhost ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f89b7bc75-4snt6                        0/1     Pending   0          5m15s
kube-system   coredns-7f89b7bc75-z59c8                        0/1     Pending   0          5m15s
kube-system   etcd-localhost.localdomain                      1/1     Running   0          5m22s
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          5m22s
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          5m22s
kube-system   kube-proxy-rczvn                                1/1     Running   0          5m15s
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          5m22s

The node is NotReady because the coredns pods are still Pending: no pod network (CNI) plugin is installed yet.

6. Install the Calico network plugin
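Note that calico.yaml defaults its IP pool to 192.168.0.0/16, while kubeadm init above used --pod-network-cidr=10.122.0.0/16. Recent Calico manifests can detect the pod CIDR automatically, but to be explicit you can download the manifest and uncomment the pool setting before applying (an optional, assumed edit):

# in calico.yaml, under the calico-node container env:
- name: CALICO_IPV4POOL_CIDR
  value: "10.122.0.0/16"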

[root@localhost ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Check the pods and nodes:

[root@localhost ~]# kubectl get node
NAME                    STATUS   ROLES                  AGE     VERSION
localhost.localdomain   Ready    control-plane,master   7m47s   v1.20.1
[root@localhost ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-744cfdf676-qfj9b        1/1     Running   0          73s
kube-system   calico-node-5x22q                               1/1     Running   0          73s
kube-system   coredns-7f89b7bc75-4snt6                        1/1     Running   0          7m33s
kube-system   coredns-7f89b7bc75-z59c8                        1/1     Running   0          7m33s
kube-system   etcd-localhost.localdomain                      1/1     Running   0          7m40s
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          7m40s
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          7m40s
kube-system   kube-proxy-rczvn                                1/1     Running   0          7m33s
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          7m40s

The cluster is now healthy.

7. Install kubernetes-dashboard

The official dashboard manifest does not expose the Service as a NodePort, so download the yaml locally and add a NodePort to the Service:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

kubectl create -f recommended.yaml

[root@localhost ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.10.169.183   <none>        8000/TCP        2m19s
kubernetes-dashboard        NodePort    10.10.205.11    <none>        443:30000/TCP   2m19s
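The dashboard should now answer on port 30000 of any node (the certificate is self-signed, hence -k; <node-ip> is a placeholder):

curl -k https://<node-ip>:30000/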

View the credentials:

[root@localhost ~]# kubectl -n kubernetes-dashboard get secret
NAME                               TYPE                                  DATA   AGE
default-token-d9njz                kubernetes.io/service-account-token   3      6m42s
kubernetes-dashboard-certs         Opaque                                0      6m42s
kubernetes-dashboard-csrf          Opaque                                1      6m42s
kubernetes-dashboard-key-holder    Opaque                                2      6m42s
kubernetes-dashboard-token-gk7wt   kubernetes.io/service-account-token   3      6m42s

Log in with a token; run the following to extract it:

kubectl describe secrets -n kubernetes-dashboard kubernetes-dashboard-token-gk7wt | grep token | awk 'NR==3{print $2}'
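An equivalent way to extract the token without relying on line position (the secret name comes from the listing above):

kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-gk7wt \
  -o jsonpath='{.data.token}' | base64 -d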

List the namespaces:

[root@localhost ~]# kubectl get namespaces
NAME                   STATUS   AGE
default                Active   39m
kube-node-lease        Active   39m
kube-public            Active   39m
kube-system            Active   39m
kubernetes-dashboard   Active   21m
[root@localhost ~]# kubectl get pods --namespace=kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-c95fcf479-5xcwp   1/1     Running   0          22m
kubernetes-dashboard-5bc6d86cfd-vrxnm       1/1     Running   0          22m

After logging in, if no namespace can be selected and the dashboard complains that resources cannot be found, it is a permissions problem.
The dashboard logs confirm this:

kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-5bc6d86cfd-vrxnm
2021/01/01 00:35:19 Getting list of all pet sets in the cluster
2021/01/01 00:35:19 Non-critical error occurred during resource retrieval: statefulsets.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "statefulsets" in API group "apps" in the namespace "default"
2021/01/01 00:35:19 Non-critical error occurred during resource retrieval: pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "pods" in API group "" in the namespace "default"
2021/01/01 00:35:19 Non-critical error occurred during resource retrieval: events is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "events" in API group "" in the namespace "default"
2021/01/01 00:35:19 [2021-01-01T00:35:19Z] Outcoming response to 10.0.0.197:19134 with 200 status code
2021/01/01 00:35:19 [2021-01-01T00:35:19Z] Incoming HTTP/2.0 GET /api/v1/ingress/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 10.0.0.197:19134: 
2021/01/01 00:35:19 Non-critical error occurred during resource retrieval: ingresses.extensions is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "ingresses" in API group "extensions" in the namespace "default"
2021/01/01 00:35:19 [2021-01-01T00:35:19Z] Outcoming response to 10.0.0.197:19134 with 200 status code
2021/01/01 00:35:20 [2021-01-01T00:35:20Z] Incoming HTTP/2.0 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 10.0.0.197:19134: 
2021/01/01 00:35:20 [2021-01-01T00:35:20Z] Incoming HTTP/2.0 GET /api/v1/configmap/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 10.0.0.197:19134: 
2021/01/01 00:35:20 Getting list config maps in the namespace default
2021/01/01 00:35:20 Getting list of all services in the cluster
2021/01/01 00:35:20 Non-critical error occurred during resource retrieval: configmaps is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "configmaps" in API group "" in the namespace "default"
2021/01/01 00:35:20 Non-critical error occurred during resource retrieval: services is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "services" in API group "" in the namespace "default"
2021/01/01 00:35:20 [2021-01-01T00:35:20Z] Outcoming response to 10.0.0.197:19134 with 200 status code
2021/01/01 00:35:20 [2021-01-01T00:35:20Z] Outcoming response to 10.0.0.197:19134 with 200 status code
2021/01/01 00:35:20 [2021-01-01T00:35:20Z] Incoming HTTP/2.0 GET /api/v1/persistentvolumeclaim/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 10.0.0.197:19134: 
2021/01/01 00:35:20 [2021-01-01T00:35:20Z] Incoming HTTP/2.0 GET /api/v1/secret/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 10.0.0.197:19134:

Solution
Grant cluster-admin to every service account cluster-wide (strongly discouraged):

[root@localhost ~]# kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
clusterrolebinding.rbac.authorization.k8s.io/serviceaccounts-cluster-admin created
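A narrower alternative (a sketch; the binding name is arbitrary) is to grant cluster-admin only to the dashboard's own service account rather than to every service account in the cluster:

kubectl create clusterrolebinding kubernetes-dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:kubernetes-dashboard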