Environment Preparation

Operating system: CentOS 7 (1908) minimal install

Set the hostname on each node

hostnamectl set-hostname k8s-sh-21.vm.90.vc
hostnamectl set-hostname k8s-sh-22.vm.90.vc
hostnamectl set-hostname k8s-sh-23.vm.90.vc

Configure hosts

vim /etc/hosts

10.14.230.21 k8s-sh-21.vm.90.vc
10.14.230.22 k8s-sh-22.vm.90.vc
10.14.230.23 k8s-sh-23.vm.90.vc

Firewall and SELinux

The host firewall must allow the required ports; see the "Check required ports" section of the official Kubernetes documentation for details.

For convenience, we simply disable the firewall here:

systemctl stop firewalld
systemctl disable firewalld

As of now, Kubernetes still does not support SELinux, so it must be turned off.

Set SELinux to permissive mode (effectively disabling it):

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
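
You can confirm the change; getenforce should now report Permissive:

getenforce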

Disable swap

Since Kubernetes 1.8, swap must be disabled; otherwise the kubelet will fail to start.

Edit the /etc/fstab configuration file and comment out the swap line so the change persists across reboots.
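
A one-liner sketch that comments out any fstab entry of type swap (review /etc/fstab before and after running it):

sed -ri '/\sswap\s/s/^/#/' /etc/fstab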

Then run the following to turn swap off immediately:

swapoff -a
sysctl -w vm.swappiness=0
echo "vm.swappiness=0">> /etc/sysctl.conf 
sysctl -p
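
You can verify with free -m; the Swap line should now show 0 total:

free -m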

Adjust system parameters

Make sure the br_netfilter module is loaded, which can be checked by running lsmod | grep br_netfilter. If it is not loaded, load it with:

modprobe br_netfilter
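
Note that modprobe does not persist across reboots. To load the module automatically at boot, drop a config file under /etc/modules-load.d:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF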

On RHEL/CentOS 7, bridged traffic may bypass iptables and be routed incorrectly. Therefore, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration by adding the following:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Run sysctl --system to apply the settings.
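
For example, apply and then confirm the value took effect:

sysctl --system
sysctl net.bridge.bridge-nf-call-iptables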

Install Docker

Since v1.6.0, Kubernetes has enabled the use of CRI (Container Runtime Interface) by default.

Since v1.14.0, kubeadm automatically detects the container runtime on Linux nodes by probing a list of well-known UNIX domain sockets. If both docker and containerd are detected, docker takes precedence.

Kubernetes 1.17 officially recommends Docker 19.03.4, although versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09 are also known to work.

Install the Docker dependencies and yum configuration tools, and add the Docker CE repository:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker:

yum -y install docker-ce-19.03.4-3.el7

Change the Docker cgroup driver to systemd, along with a few other Docker settings:

cat > /etc/docker/daemon.json <<EOF
{
  "graph": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

Restart Docker and run docker info | grep -i cgroup to verify that the cgroup driver is now systemd.
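
For example:

systemctl restart docker
systemctl enable docker
docker info | grep -i cgroup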

Install Kubernetes

Configure the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl:

yum -y install kubeadm kubelet kubectl
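
Also enable the kubelet so that it starts on boot; kubeadm will take care of actually starting it during initialization:

systemctl enable kubelet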

Initialize Kubernetes

Print the default cluster initialization configuration:

kubeadm config print init-defaults > k8s-init.yaml

Edit the generated k8s-init.yaml and modify the following items:

  • advertiseAddress: change it to the master's IP address.
  • imageRepository: k8s.gcr.io is unreachable from mainland China, so change the value to the Azure China mirror k8s.azk8s.cn/google_containers; otherwise the Docker images Kubernetes needs cannot be pulled and the initialization will hang indefinitely.
  • etcd settings: only the local etcd data directory is changed here.
  • serviceSubnet: the Service subnet range; keep the default.
  • podSubnet: a new item; add the Pod subnet range under the networking section.

The modified initialization configuration is as follows:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.14.230.21
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-sh-21.vm.90.vc
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /data/k8s/etcd
imageRepository: k8s.azk8s.cn/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: k8s.90.vc
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}

Run kubeadm init --config k8s-init.yaml to initialize the cluster.

The output is as follows:

W0229 21:02:26.431332   41307 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0229 21:02:26.431451   41307 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-sh-21.vm.90.vc kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.k8s.90.vc] and IPs [10.96.0.1 10.14.230.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-sh-21.vm.90.vc localhost] and IPs [10.14.230.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-sh-21.vm.90.vc localhost] and IPs [10.14.230.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0229 21:02:30.970566   41307 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0229 21:02:30.971420   41307 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.501703 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-sh-21.vm.90.vc as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-sh-21.vm.90.vc as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.14.230.21:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8248b6235e0ecf9cb8579dfefce45b6c1d2b3de928cebdf5582621e022f28963 

If the output ends with "Your Kubernetes control-plane has initialized successfully!", the initialization succeeded.

Next, follow the prompts: set up the kubectl config file, deploy a network plugin, and join the remaining nodes.

Set up the kubectl config file

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Deploy the flannel network plugin

Run the following:

curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

The following output is returned:

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

You may find that quay.io is unreachable; in that case, edit the image addresses in kube-flannel.yml so the images are pulled from a mirror inside China instead, as in the sketch below.
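
A minimal sketch, assuming the quay.azk8s.cn mirror (the same azk8s.cn service used for imageRepository above) is reachable:

sed -i 's#quay.io#quay.azk8s.cn#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml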

On each of the remaining nodes, run the following to join the Kubernetes cluster:

kubeadm join 10.14.230.21:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8248b6235e0ecf9cb8579dfefce45b6c1d2b3de928cebdf5582621e022f28963
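
If the bootstrap token has expired (the default TTL is 24 hours), generate a fresh join command on the master:

kubeadm token create --print-join-command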

Check the Kubernetes cluster status:

[root@k8s-sh-21 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s-sh-21 ~]# kubectl get node
NAME                 STATUS   ROLES    AGE    VERSION
k8s-sh-21.vm.90.vc   Ready    master   101m   v1.17.3
k8s-sh-22.vm.90.vc   Ready    <none>   99m    v1.17.3
k8s-sh-23.vm.90.vc   Ready    <none>   99m    v1.17.3
[root@k8s-sh-21 ~]# kubectl get pods -A -o wide
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE    IP             NODE                 NOMINATED NODE   READINESS GATES
kube-system   coredns-6cd559f5d5-8vkm8                     1/1     Running   0          101m   10.244.0.4     k8s-sh-21.vm.90.vc   <none>           <none>
kube-system   coredns-6cd559f5d5-clhrt                     1/1     Running   0          101m   10.244.0.3     k8s-sh-21.vm.90.vc   <none>           <none>
kube-system   etcd-k8s-sh-21.vm.90.vc                      1/1     Running   0          101m   10.14.230.21   k8s-sh-21.vm.90.vc   <none>           <none>
kube-system   kube-apiserver-k8s-sh-21.vm.90.vc            1/1     Running   0          101m   10.14.230.21   k8s-sh-21.vm.90.vc   <none>           <none>
kube-system   kube-controller-manager-k8s-sh-21.vm.90.vc   1/1     Running   0          101m   10.14.230.21   k8s-sh-21.vm.90.vc   <none>           <none>
kube-system   kube-flannel-ds-amd64-6mvlf                  1/1     Running   0          100m   10.14.230.23   k8s-sh-23.vm.90.vc   <none>           <none>
kube-system   kube-flannel-ds-amd64-dxpgk                  1/1     Running   0          100m   10.14.230.21   k8s-sh-21.vm.90.vc   <none>           <none>
kube-system   kube-flannel-ds-amd64-mjmzz                  1/1     Running   0          100m   10.14.230.22   k8s-sh-22.vm.90.vc   <none>           <none>
kube-system   kube-proxy-2xtwb                             1/1     Running   0          100m   10.14.230.22   k8s-sh-22.vm.90.vc   <none>           <none>
kube-system   kube-proxy-fl9bs                             1/1     Running   0          101m   10.14.230.21   k8s-sh-21.vm.90.vc   <none>           <none>
kube-system   kube-proxy-thhn8                             1/1     Running   0          100m   10.14.230.23   k8s-sh-23.vm.90.vc   <none>           <none>
kube-system   kube-scheduler-k8s-sh-21.vm.90.vc            1/1     Running   0          101m   10.14.230.21   k8s-sh-21.vm.90.vc   <none>           <none>

Make sure all of the nodes above are Ready and all of the pods are Running.

Switch kube-proxy to IPVS mode

First, load the required IPVS kernel modules:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Edit the kube-proxy configuration file stored in its ConfigMap and change the mode field to ipvs:

kubectl edit cm kube-proxy -n kube-system
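
In the editor, find the mode field in the config.conf section and change it:

mode: "ipvs"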

After the change, restart kube-proxy by deleting its pods; the DaemonSet will recreate them:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

Check the result: if the kube-proxy logs mention ipvs, IPVS mode is enabled; if they mention iptables, it is still running in iptables mode.

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl logs "$1" -n kube-system")}' | grep Proxier
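
Alternatively, install the ipvsadm tool to inspect the IPVS rules directly:

yum -y install ipvsadm
ipvsadm -Ln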

Installing and using Helm

Deploying the dashboard

Ingress
