通天战队 | Deploying a Three-Node Kubernetes v1.18.0 Cluster on CentOS 8 with Kubeadm (Part 2)


Error:
 Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

The fix is as follows:
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum install -y containerd.io-1.2.6-3.3.el7.x86_64.rpm

Then install docker-ce again; this time it should succeed.
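Alternatively, as the error message itself suggests, you can let dnf fall back to a non-best candidate instead of aborting. A small sketch, assuming you are fine with whatever older docker-ce build your mirror offers:

# Option B: accept an older docker-ce candidate rather than failing
dnf install -y docker-ce --nobest

# Check which versions were actually installed
rpm -q docker-ce containerd.io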
To speed up Docker image downloads, we need to configure a registry mirror for Docker:
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [""]
}
EOF
systemctl daemon-reload
systemctl restart docker

Fill in the mirror address of your choice in registry-mirrors. For the detailed Docker installation procedure, refer to the official Docker documentation.
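Note that kubeadm's preflight checks later warn that Docker's default "cgroupfs" cgroup driver is not the recommended one (the warning is visible in the init output below). Since we are already editing /etc/docker/daemon.json, here is a minimal sketch that sets both the mirror and the recommended "systemd" driver in one go; the mirror URL is a placeholder you must replace yourself:

# NOTE: <your-mirror-url> below is a placeholder, not a real address
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["<your-mirror-url>"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker

# Confirm both settings took effect
docker info | grep -A1 "Registry Mirrors"
docker info | grep "Cgroup Driver"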
Installing kubeadm, kubelet, and kubectl
Run the following commands on each of the three nodes:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet

  • To speed up RPM downloads, we use Alibaba's mirror repo;
  • SELinux is disabled (set to permissive) to avoid obscure errors during installation;
  • Once the install finishes, kubelet is enabled to start on boot;
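A quick sanity check, not strictly required but cheap, to confirm each node ended up with matching versions and that kubelet will auto-start:

kubeadm version -o short
kubelet --version
kubectl version --client
systemctl is-enabled kubelet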
Finally, we also need to run the following commands:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

The official documentation explains it this way:
Some RHEL/CentOS 7 users have reported issues with traffic being routed incorrectly because iptables was being bypassed. You should ensure that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.
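Note that these net.bridge.* keys only exist while the br_netfilter kernel module is loaded; if sysctl --system complains that the keys cannot be found, load the module first. A small sketch, persisted across reboots via modules-load.d:

# Load the bridge netfilter module now, and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Verify the module is loaded and the setting is active
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables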
At this point all the preparation is essentially done; next comes initializing the Kubernetes cluster with kubeadm.
Getting to Work
Initializing the master node with kubeadm
kubeadm init --kubernetes-version=1.18.0 \
    --apiserver-advertise-address=10.0.0.10 \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr=10.122.0.0/16
  • --kubernetes-version specifies the Kubernetes version; here we use the latest release, 1.18.0;
  • --apiserver-advertise-address specifies the IP address the API server advertises; we use the master node's IP, 10.0.0.10;
  • --image-repository points image pulls at Alibaba's registry so that Google's base images download quickly;
  • --pod-network-cidr specifies the IP address range the pod network may use; we use 10.122.0.0/16, a range we will need again later;
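Before the real init, you can optionally preview and pre-pull the images kubeadm will need from the Alibaba registry (the init output below also points at 'kubeadm config images pull' for exactly this):

kubeadm config images list \
    --kubernetes-version=1.18.0 \
    --image-repository registry.aliyuncs.com/google_containers

kubeadm config images pull \
    --kubernetes-version=1.18.0 \
    --image-repository registry.aliyuncs.com/google_containers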
[root@instance01 ~]# kubeadm init --kubernetes-version=1.18.0 \
> --apiserver-advertise-address=10.0.0.10 \
> --image-repository registry.aliyuncs.com/google_containers \
> --pod-network-cidr=10.122.0.0/16
W0811 10:35:41.950979    5522 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [instance01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [instance01 localhost] and IPs [10.0.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [instance01 localhost] and IPs [10.0.0.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0811 10:35:45.747365    5522 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0811 10:35:45.748616    5522 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.003165 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node instance01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node instance01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5mlkf9.ofhd9du86w4n5alp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.10:6443 --token 5mlkf9.ofhd9du86w4n5alp \
    --discovery-token-ca-cert-hash sha256:c401ca643111b710e03ecd19bf5fb257651689fd5e5cdf5efcf5739da9b14ade
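Following the instructions printed at the end of that output, set up kubeconfig for your user and check the cluster state; the master node will report NotReady until a pod network add-on matching the 10.122.0.0/16 CIDR we chose is deployed:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# NotReady is expected until a CNI plugin is installed
kubectl get nodes
kubectl get pods -n kube-system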

