{"id":578,"date":"2021-01-01T02:29:01","date_gmt":"2020-12-31T18:29:01","guid":{"rendered":"https:\/\/qtvz.com\/?p=578"},"modified":"2021-01-01T02:29:01","modified_gmt":"2020-12-31T18:29:01","slug":"%e4%bd%bf%e7%94%a8kubeadm%e5%9c%a8centos8%e4%b8%8a%e9%83%a8%e7%bd%b2kubernetes","status":"publish","type":"post","link":"https:\/\/qtvz.com\/578.html","title":{"rendered":"\u4f7f\u7528kubeadm\u5728Centos8\u4e0a\u90e8\u7f72kubernetes"},"content":{"rendered":"<!--wp-compress-html--><!--wp-compress-html no compression--><h2>0\u3001\u955c\u50cf\u5730\u5740<\/h2>\n<h5><a href=\"https:\/\/qtvz.com\/redirect\/aHR0cHM6Ly9taXJyb3JzLnR1bmEudHNpbmdodWEuZWR1LmNuLw==\" target=\"_blank\">\u6e05\u534e\u5927\u5b66\u5f00\u6e90\u8f6f\u4ef6\u955c\u50cf\u7ad9<\/a><\/h5>\n<h5><a href=\"https:\/\/qtvz.com\/redirect\/aHR0cHM6Ly9kZXZlbG9wZXIuYWxpeXVuLmNvbS9taXJyb3Iv\" target=\"_blank\">\u963f\u91cc\u4e91\u5b98\u65b9\u955c\u50cf\u7ad9<\/a><\/h5>\n<h5><a href=\"https:\/\/qtvz.com\/redirect\/aHR0cHM6Ly9kb3dubG9hZC5kb2NrZXIuY29tL2xpbnV4L2NlbnRvcw==\" target=\"_blank\">DockerDownLoad\u5b98\u7f51\u5730\u5740<\/a><\/h5>\n<h2>1\u3001\u7cfb\u7edf\u51c6\u5907<\/h2>\n<h2>2\u3001\u6dfb\u52a0\u963f\u91cc\u6e90<\/h2>\n<pre><code>rm -rfv \/etc\/yum.repos.d\/*<\/code><\/pre>\n<pre><code>curl -o \/etc\/yum.repos.d\/CentOS-Base.repo http:\/\/mirrors.aliyun.com\/repo\/Centos-8.repo<\/code><\/pre>\n<h2>3\u3001\u5b89\u88c5\u5e38\u7528\u5305<\/h2>\n<pre><code>yum install vim bash-completion net-tools gcc -y<\/code><\/pre>\n<h2>4\u3001\u4f7f\u7528aliyun\u6e90\u5b89\u88c5docker-ce<\/h2>\n<pre><code>yum install -y yum-utils device-mapper-persistent-data lvm2<\/code><\/pre>\n<pre><code>yum-config-manager --add-repo https:\/\/mirrors.aliyun.com\/docker-ce\/linux\/centos\/docker-ce.repo<\/code><\/pre>\n<pre><code>yum -y install docker-ce<\/code><\/pre>\n<pre><code>\u4e0a\u6b21\u5143\u6570\u636e\u8fc7\u671f\u68c0\u67e5\uff1a0:00:14 \u524d\uff0c\u6267\u884c\u4e8e 2021\u5e7401\u670801\u65e5 \u661f\u671f\u4e94 
06\u65f649\u520609\u79d2\u3002\n\u9519\u8bef\uff1a\n \u95ee\u9898: package docker-ce-3:20.10.1-3.el7.x86_64 requires containerd.io &gt;= 1.4.1, but none of the providers can be installed\n  - cannot install the best candidate for the job\n  - package containerd.io-1.4.3-3.1.el7.x86_64 is filtered out by modular filtering\n(\u5c1d\u8bd5\u6dfb\u52a0 &#039;--skip-broken&#039; \u6765\u8df3\u8fc7\u65e0\u6cd5\u5b89\u88c5\u7684\u8f6f\u4ef6\u5305 \u6216 &#039;--nobest&#039; \u6765\u4e0d\u53ea\u4f7f\u7528\u6700\u4f73\u9009\u62e9\u7684\u8f6f\u4ef6\u5305)<\/code><\/pre>\n<pre><code>wget https:\/\/mirrors.aliyun.com\/docker-ce\/linux\/centos\/8\/x86_64\/stable\/Packages\/containerd.io-1.4.3-3.1.el8.x86_64.rpm<\/code><\/pre>\n<pre><code>yum install containerd.io-1.4.3-3.1.el8.x86_64.rpm -y<\/code><\/pre>\n<p>\u9047\u5230\u4e0b\u9762\u7684\u62a5\u9519<\/p>\n<pre><code>[root@localhost ~]# yum install containerd.io-1.4.3-3.1.el8.x86_64.rpm -y\n\u4e0a\u6b21\u5143\u6570\u636e\u8fc7\u671f\u68c0\u67e5\uff1a0:25:07 \u524d\uff0c\u6267\u884c\u4e8e 2021\u5e7401\u670801\u65e5 \u661f\u671f\u4e94 06\u65f649\u520609\u79d2\u3002\n\u9519\u8bef\uff1a\n \u95ee\u9898: problem with installed package podman-2.0.5-5.module_el8.3.0+512+b3b58dca.x86_64\n  - package podman-2.0.5-5.module_el8.3.0+512+b3b58dca.x86_64 requires runc &gt;= 1.0.0-57, but none of the providers can be installed\n  - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-68.rc92.module_el8.3.0+475+c50ce30b.x86_64\n  - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-68.rc92.module_el8.3.0+475+c50ce30b.x86_64\n  - conflicting requests\n  - package runc-1.0.0-64.rc10.module_el8.3.0+479+69e2ae26.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.2.10-3.2.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.2.13-3.1.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.2.13-3.2.el7.x86_64 is filtered 
out by modular filtering\n  - package containerd.io-1.2.2-3.3.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.2.2-3.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.2.4-3.1.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.2.5-3.1.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.2.6-3.3.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.3.7-3.1.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.3.9-3.1.el7.x86_64 is filtered out by modular filtering\n  - package containerd.io-1.4.3-3.1.el7.x86_64 is filtered out by modular filtering\n(try to add &#039;--allowerasing&#039; to command line to replace conflicting packages or &#039;--skip-broken&#039; to skip uninstallable packages or &#039;--nobest&#039; to use not only best candidate packages)<\/code><\/pre>\n<p>The conflict comes from the preinstalled podman and buildah. Remove them with the command below, then reinstall <code>containerd.io-1.4.3-3.1.el8.x86_64.rpm<\/code><\/p>\n<pre><code>yum erase podman buildah<\/code><\/pre>\n<p>After that, installing docker-ce succeeds.<br \/>\nAdd the Aliyun Docker registry mirror accelerator:<\/p>\n<pre><code>sudo mkdir -p \/etc\/docker\nsudo tee \/etc\/docker\/daemon.json &lt;&lt;-&#039;EOF&#039;\n{\n  &quot;registry-mirrors&quot;: [&quot;https:\/\/y4syxeu0.mirror.aliyuncs.com&quot;]\n}\nEOF\nsudo systemctl daemon-reload\nsudo systemctl restart docker\nsystemctl enable docker.service<\/code><\/pre>\n<h2>5. Install kubectl, kubelet and kubeadm<\/h2>\n<p>Add the Aliyun Kubernetes repo<\/p>\n<pre><code>cat &lt;&lt;EOF &gt; 
\/etc\/yum.repos.d\/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=https:\/\/mirrors.aliyun.com\/kubernetes\/yum\/repos\/kubernetes-el7-x86_64\/\nenabled=1\ngpgcheck=1\nrepo_gpgcheck=1\ngpgkey=https:\/\/mirrors.aliyun.com\/kubernetes\/yum\/doc\/yum-key.gpg https:\/\/mirrors.aliyun.com\/kubernetes\/yum\/doc\/rpm-package-key.gpg\nEOF<\/code><\/pre>\n<pre><code>yum install kubectl kubelet kubeadm -y\nsystemctl enable kubelet<\/code><\/pre>\n<p>Check the installed version<br \/>\n<code>[root@localhost ~]# kubectl version<\/code><\/p>\n<p>Initialize the cluster:<\/p>\n<pre><code>kubeadm init --kubernetes-version=v1.20.1 --image-repository registry.aliyuncs.com\/google_containers --service-cidr=10.10.0.0\/16 --pod-network-cidr=10.122.0.0\/16<\/code><\/pre>\n<p>Record the last part of the output; it must be run on the other nodes when they join the Kubernetes cluster.<br \/>\nCreate the kubectl config as prompted<\/p>\n<pre><code>mkdir -p $HOME\/.kube\nsudo cp -i \/etc\/kubernetes\/admin.conf $HOME\/.kube\/config\nsudo chown $(id -u):$(id -g) $HOME\/.kube\/config<\/code><\/pre>\n<p>Run the following command to enable kubectl auto-completion<br \/>\n<code>source &lt;(kubectl completion bash)<\/code><br \/>\nCheck the nodes and pods<\/p>\n<pre><code>[root@localhost ~]# kubectl get node\nNAME                    STATUS     ROLES                  AGE     VERSION\nlocalhost.localdomain   NotReady   control-plane,master   2m51s   v1.20.1\n[root@localhost ~]# kubectl get pod --all-namespaces\nNAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE\nkube-system   coredns-7f89b7bc75-4snt6                        0\/1     Pending   0          5m15s\nkube-system   coredns-7f89b7bc75-z59c8                        0\/1     Pending   0          5m15s\nkube-system   etcd-localhost.localdomain                      1\/1     Running   0          5m22s\nkube-system   kube-apiserver-localhost.localdomain            1\/1     Running   0          5m22s\nkube-system   kube-controller-manager-localhost.localdomain   1\/1     Running   0          5m22s\nkube-system   kube-proxy-rczvn                                1\/1     Running   0          5m15s\nkube-system   kube-scheduler-localhost.localdomain            1\/1     Running   0          5m22s<\/code><\/pre>
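The bootstrap token behind the join command printed by kubeadm init expires after 24 hours by default. If the output was not recorded, or the token has expired, a fresh join command can be generated on the control-plane node; a minimal sketch using standard kubeadm subcommands:

```shell
# Print a ready-to-run "kubeadm join <apiserver>:6443 --token ... --discovery-token-ca-cert-hash ..."
# line, creating a new bootstrap token in the process (run on the control-plane node).
kubeadm token create --print-join-command

# Inspect existing bootstrap tokens and their expiry times.
kubeadm token list
```

Run the printed command verbatim on each worker node to join it to the cluster.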
\n<p>The node is NotReady because the coredns pods have not started: the network pods are still missing<\/p>\n<h2>6. Install the Calico network<\/h2>\n<pre><code>[root@localhost ~]# kubectl apply -f https:\/\/docs.projectcalico.org\/manifests\/calico.yaml\nconfigmap\/calico-config created\ncustomresourcedefinition.apiextensions.k8s.io\/bgpconfigurations.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/bgppeers.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/blockaffinities.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/clusterinformations.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/felixconfigurations.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/globalnetworkpolicies.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/globalnetworksets.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/hostendpoints.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/ipamblocks.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/ipamconfigs.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/ipamhandles.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/ippools.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/kubecontrollersconfigurations.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/networkpolicies.crd.projectcalico.org created\ncustomresourcedefinition.apiextensions.k8s.io\/networksets.crd.projectcalico.org created\nclusterrole.rbac.authorization.k8s.io\/calico-kube-controllers created\nclusterrolebinding.rbac.authorization.k8s.io\/calico-kube-controllers created\nclusterrole.rbac.authorization.k8s.io\/calico-node created\nclusterrolebinding.rbac.authorization.k8s.io\/calico-node created\ndaemonset.apps\/calico-node created\nserviceaccount\/calico-node created\ndeployment.apps\/calico-kube-controllers created\nserviceaccount\/calico-kube-controllers created\npoddisruptionbudget.policy\/calico-kube-controllers created<\/code><\/pre>
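Instead of re-running kubectl get pod until the listing below settles, the rollout can be waited on explicitly. A sketch, assuming the pod label k8s-app=calico-node used by the calico.yaml manifest and the coredns deployment created by kubeadm:

```shell
# Block until every calico-node pod reports Ready, waiting up to 5 minutes.
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s

# coredns leaves Pending once the network is up; wait for its rollout to finish.
kubectl -n kube-system rollout status deployment/coredns --timeout=300s
```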
\n<p>Check the pods and nodes<\/p>\n<pre><code>[root@localhost ~]# kubectl get node\nNAME                    STATUS   ROLES                  AGE     VERSION\nlocalhost.localdomain   Ready    control-plane,master   7m47s   v1.20.1\n[root@localhost ~]# kubectl get pod --all-namespaces\nNAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE\nkube-system   calico-kube-controllers-744cfdf676-qfj9b        1\/1     Running   0          73s\nkube-system   calico-node-5x22q                               1\/1     Running   0          73s\nkube-system   coredns-7f89b7bc75-4snt6                        1\/1     Running   0          7m33s\nkube-system   coredns-7f89b7bc75-z59c8                        1\/1     Running   0          7m33s\nkube-system   etcd-localhost.localdomain                      1\/1     Running   0          7m40s\nkube-system   kube-apiserver-localhost.localdomain            1\/1     Running   0          7m40s\nkube-system   kube-controller-manager-localhost.localdomain   1\/1     Running   0          7m40s\nkube-system   kube-proxy-rczvn                                1\/1     Running   0          7m33s\nkube-system   kube-scheduler-localhost.localdomain            1\/1     Running   0          
7m40s<\/code><\/pre>\n<p>The cluster is now in a healthy state<\/p>\n<h2>7. Install kubernetes-dashboard<\/h2>\n<p>The official dashboard manifest does not expose the Service via NodePort. Download the yaml file locally and add a nodePort to the Service<\/p>\n<pre><code>wget https:\/\/raw.githubusercontent.com\/kubernetes\/dashboard\/v2.0.0-rc7\/aio\/deploy\/recommended.yaml\nvim recommended.yaml<\/code><\/pre>\n<pre><code>kind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\nspec:\n  type: NodePort\n  ports:\n    - port: 443\n      targetPort: 8443\n      nodePort: 30000\n  selector:\n    k8s-app: kubernetes-dashboard<\/code><\/pre>\n<p><code>kubectl create -f recommended.yaml<\/code><\/p>\n<pre><code>[root@localhost ~]# kubectl get svc -n kubernetes-dashboard\nNAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE\ndashboard-metrics-scraper   ClusterIP   10.10.169.183   &lt;none&gt;        8000\/TCP        2m19s\nkubernetes-dashboard        NodePort    10.10.205.11    &lt;none&gt;        443:30000\/TCP   2m19s<\/code><\/pre>\n<p>Check the credentials<br \/>\n<code>kubectl -n kubernetes-dashboard get secret<\/code><\/p>\n<pre><code>[root@localhost ~]# kubectl -n kubernetes-dashboard get secret\nNAME                               TYPE                                  DATA   AGE\ndefault-token-d9njz                kubernetes.io\/service-account-token   3      6m42s\nkubernetes-dashboard-certs         Opaque                                0      6m42s\nkubernetes-dashboard-csrf          Opaque                                1      6m42s\nkubernetes-dashboard-key-holder    Opaque                                2      6m42s\nkubernetes-dashboard-token-gk7wt   kubernetes.io\/service-account-token   3      6m42s<\/code><\/pre>
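The token secret name (kubernetes-dashboard-token-gk7wt above) carries a random suffix, so it differs per cluster. As an alternative to the describe/grep pipeline below, the token can be read from the secret data field and base64-decoded; a sketch, with the last line demonstrating the decode step on its own with a stand-in value:

```shell
# Resolve the dashboard token secret (random suffix), then decode its token field.
SECRET=$(kubectl -n kubernetes-dashboard get secret -o name | grep kubernetes-dashboard-token)
kubectl -n kubernetes-dashboard get "$SECRET" -o jsonpath='{.data.token}' | base64 -d

# The base64 decode step by itself, on a stand-in value:
printf 'stand-in-token' | base64 | base64 -d
```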
\n<p>Log in with a token; run the command below to retrieve it<\/p>\n<pre><code>kubectl describe secrets -n kubernetes-dashboard kubernetes-dashboard-token-gk7wt | grep token | awk &#039;NR==3{print $2}&#039;<\/code><\/pre>\n<p>List the namespaces<\/p>\n<pre><code>[root@localhost ~]# kubectl get namespaces\nNAME                   STATUS   AGE\ndefault                Active   39m\nkube-node-lease        Active   39m\nkube-public            Active   39m\nkube-system            Active   39m\nkubernetes-dashboard   Active   21m<\/code><\/pre>\n<pre><code>[root@localhost ~]# kubectl get pods --namespace=kubernetes-dashboard\nNAME                                        READY   STATUS    RESTARTS   AGE\ndashboard-metrics-scraper-c95fcf479-5xcwp   1\/1     Running   0          22m\nkubernetes-dashboard-5bc6d86cfd-vrxnm       1\/1     Running   0          22m<\/code><\/pre>\n<p>After logging in, if no namespace can be selected and the dashboard reports that resources cannot be found, it is a permissions problem<br \/>\nChecking the dashboard logs gives the following information<\/p>\n<pre><code>kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-5bc6d86cfd-vrxnm<\/code><\/pre>\n<pre><code>2021\/01\/01 00:35:19 Getting list of all pet sets in the cluster\n2021\/01\/01 00:35:19 Non-critical error occurred during resource retrieval: statefulsets.apps is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard&quot; cannot list resource &quot;statefulsets&quot; in API group &quot;apps&quot; in the namespace &quot;default&quot;\n2021\/01\/01 00:35:19 Non-critical error occurred during resource retrieval: pods is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard&quot; cannot list resource &quot;pods&quot; in API group 
&quot;&quot; in the namespace &quot;default&quot;\n2021\/01\/01 00:35:19 Non-critical error occurred during resource retrieval: events is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard&quot; cannot list resource &quot;events&quot; in API group &quot;&quot; in the namespace &quot;default&quot;\n2021\/01\/01 00:35:19 [2021-01-01T00:35:19Z] Outcoming response to 10.0.0.197:19134 with 200 status code\n2021\/01\/01 00:35:19 [2021-01-01T00:35:19Z] Incoming HTTP\/2.0 GET \/api\/v1\/ingress\/default?itemsPerPage=10&amp;page=1&amp;sortBy=d,creationTimestamp request from 10.0.0.197:19134: \n2021\/01\/01 00:35:19 Non-critical error occurred during resource retrieval: ingresses.extensions is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard&quot; cannot list resource &quot;ingresses&quot; in API group &quot;extensions&quot; in the namespace &quot;default&quot;\n2021\/01\/01 00:35:19 [2021-01-01T00:35:19Z] Outcoming response to 10.0.0.197:19134 with 200 status code\n2021\/01\/01 00:35:20 [2021-01-01T00:35:20Z] Incoming HTTP\/2.0 GET \/api\/v1\/service\/default?itemsPerPage=10&amp;page=1&amp;sortBy=d,creationTimestamp request from 10.0.0.197:19134: \n2021\/01\/01 00:35:20 [2021-01-01T00:35:20Z] Incoming HTTP\/2.0 GET \/api\/v1\/configmap\/default?itemsPerPage=10&amp;page=1&amp;sortBy=d,creationTimestamp request from 10.0.0.197:19134: \n2021\/01\/01 00:35:20 Getting list config maps in the namespace default\n2021\/01\/01 00:35:20 Getting list of all services in the cluster\n2021\/01\/01 00:35:20 Non-critical error occurred during resource retrieval: configmaps is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard&quot; cannot list resource &quot;configmaps&quot; in API group &quot;&quot; in the namespace &quot;default&quot;\n2021\/01\/01 00:35:20 Non-critical error occurred during resource retrieval: services is forbidden: User 
&quot;system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard&quot; cannot list resource &quot;services&quot; in API group &quot;&quot; in the namespace &quot;default&quot;\n2021\/01\/01 00:35:20 [2021-01-01T00:35:20Z] Outcoming response to 10.0.0.197:19134 with 200 status code\n2021\/01\/01 00:35:20 [2021-01-01T00:35:20Z] Outcoming response to 10.0.0.197:19134 with 200 status code\n2021\/01\/01 00:35:20 [2021-01-01T00:35:20Z] Incoming HTTP\/2.0 GET \/api\/v1\/persistentvolumeclaim\/default?itemsPerPage=10&amp;page=1&amp;sortBy=d,creationTimestamp request from 10.0.0.197:19134: \n2021\/01\/01 00:35:20 [2021-01-01T00:35:20Z] Incoming HTTP\/2.0 GET \/api\/v1\/secret\/default?itemsPerPage=10&amp;page=1&amp;sortBy=d,creationTimestamp request from 10.0.0.197:19134:<\/code><\/pre>\n<p>Solution<br \/>\nGrant super-user access to all service accounts cluster-wide (strongly discouraged)<\/p>\n<pre><code>[root@localhost ~]# kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts\nclusterrolebinding.rbac.authorization.k8s.io\/serviceaccounts-cluster-admin created<\/code><\/pre>\n<p>A safer alternative is to bind a role to the kubernetes-dashboard service account only, instead of every service account in the cluster.<\/p>\n<!--wp-compress-html no compression--><!--wp-compress-html-->","protected":false},"excerpt":{"rendered":"0. Mirror addresses Tsinghua University open-source software mirror Alibaba Cloud official mirror Docker CE official download site 1. System preparation 2. Add the Aliyun repo rm -rfv \/etc\/yum.repos.d\/* curl -o \/et \u00b7\u00b7\u00b7","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[43],"tags":[],"views":4223,"_links":{"self":[{"href":"https:\/\/qtvz.com\/api\/wp\/v2\/posts\/578"}],"collection":[{"href":"https:\/\/qtvz.com\/api\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/qtvz.com\/api\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/qtvz.com\/api\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/qtvz.com\/api\/wp\/v2\/comments?post=578"}],"version-history":[{"count":2,"href":"https:\/\/qtvz.com\/api\/wp\/v2\/posts\/578\/revisions"}],"predecessor-version":[{"id":580,"href":"https:\/\/qtvz.com\/api\/wp\/v2\/posts\/578\/revisions\/580"}],"wp:attachment":[{"href":"https:\/\/qtvz.com\/api\/wp\/v2\/media?parent=578"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/qtvz.com\/api\/wp\/v2\/categories?post=578"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/qtvz.com\/api\/wp\/v2\/tags?post=578"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}