Kubernetes in Practice (2): Building a k8s Cluster on macOS
1. Unified versions
CentOS 7
Docker 18.09.0
kubeadm-1.14.0-0
kubelet-1.14.0-0
kubectl-1.14.0-0
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
calico:v3.9
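Once the installs in the later sections are done, the pinned versions can be spot-checked on each node; a minimal sketch:

# Spot-check that the installed versions match the pinned list above
docker --version                     # expect 18.09.0
kubeadm version -o short             # expect v1.14.0
kubelet --version                    # expect Kubernetes v1.14.0
kubectl version --client --short     # expect Client Version: v1.14.0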
2. Prepare three CentOS 7 machines
1. Provision the CentOS machines with Vagrant
mkdir k8s-docker-centos7
cd k8s-docker-centos7
vi Vagrantfile
#======================================================================
boxes = [
    { :name => "m",  :eth1 => "192.168.50.14", :mem => "4096", :cpu => "2" },
    { :name => "w1", :eth1 => "192.168.50.15", :mem => "4096", :cpu => "2" },
    { :name => "w2", :eth1 => "192.168.50.16", :mem => "4096", :cpu => "2" }
]

Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.network :public_network, bridge: "en0: Wi-Fi (Wireless)", ip: opts[:eth1]
      config.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = opts[:mem]
        v.vmx["numvcpus"] = opts[:cpu]
      end
      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
        v.customize ["modifyvm", :id, "--name", opts[:name]]
      end
    end
  end
end
#======================================================================
2. Configure login access to the machines
(1). Once the VMs are up, enter each CentOS box so the root account can be enabled for login
cd k8s-docker-centos7
vagrant up          [start the VMs]
vagrant ssh m       [enter the master node]
vagrant ssh w1      [enter the worker01 node]
vagrant ssh w2      [enter the worker02 node]
(2). Log in to each of the three nodes and run the following so that root can log in with a password
sudo -i                    [switch to the root account]
vi /etc/ssh/sshd_config    [allow root password login]
# change to: PasswordAuthentication yes
passwd                     [set the root password]
systemctl restart sshd     [restart the sshd service]
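If you would rather not edit sshd_config by hand, the same change can be scripted; a minimal sketch, assuming the stock CentOS 7 sshd_config:

# Enable password login for root non-interactively
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd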
3. Update the system and install dependencies
Run on all three machines:
yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
4. Install Docker
Run on all three machines; the version installed is 18.09.0.
(1). Install prerequisites
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
(2). Add the repository
sudo yum-config-manager \
    --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
(3). Refresh the yum cache
yum clean all
yum makecache
yum repolist
(4). Configure a registry mirror
sudo mkdir -p /etc/docker
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["replace this with your own mirror address"]
}
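Since jq was installed in step 3, an optional sanity check that the JSON is well-formed before Docker ever reads it:

jq . /etc/docker/daemon.json    # prints the parsed JSON, or an error if malformed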
(5). Install Docker
sudo yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
(6). Start Docker
sudo systemctl start docker && sudo systemctl enable docker
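A quick check that Docker is active, pinned to 18.09.0, and picked up the registry mirror; a minimal sketch:

systemctl is-active docker                        # expect: active
docker version --format '{{.Server.Version}}'     # expect: 18.09.0
docker info | grep -A1 'Registry Mirrors'         # should list the mirror from daemon.json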
5. Edit the hosts file
(1) Master
# Set the master's hostname and edit the hosts file
sudo hostnamectl set-hostname m
vi /etc/hosts
192.168.50.14 m
192.168.50.15 w1
192.168.50.16 w2
(2) The two workers
# Set worker01's/worker02's hostname and edit the hosts file
sudo hostnamectl set-hostname w1
sudo hostnamectl set-hostname w2
vi /etc/hosts
192.168.50.14 m
192.168.50.15 w1
192.168.50.16 w2
(3) Test connectivity with ping
ping m
ping w1
ping w2
6. CentOS 7 system settings
(1) Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
(2) Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
(3) Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
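A quick verification that swap is now off and SELinux is permissive:

free -m | grep -i swap     # the Swap line should read 0 0 0
cat /proc/swaps            # should list no active swap devices
getenforce                 # expect: Permissive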
(4) Configure an iptables ACCEPT rule
Force an accept-all rule into the FORWARD chain whenever the machine boots:
a. Write a script
#!/bin/bash
# /etc/sysconfig/add-forward-iptable-rule.sh (the path referenced by the service below)
sleep 10
iptables -I FORWARD -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
For simplicity you can just run:
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
b. Add a systemd service and enable it at boot
[Unit]
Description=enable forward all for forward chain in filter table
After=network.target

[Service]
ExecStart=/bin/bash /etc/sysconfig/add-forward-iptable-rule.sh

[Install]
WantedBy=multi-user.target
systemctl enable add-iptable-rule.service
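Putting (a) and (b) together, a minimal sketch of the full wiring, assuming the unit file is saved under the name used by systemctl enable above:

# Save the script and make it executable
cat <<'EOF' > /etc/sysconfig/add-forward-iptable-rule.sh
#!/bin/bash
sleep 10
iptables -I FORWARD -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
EOF
chmod +x /etc/sysconfig/add-forward-iptable-rule.sh

# Save the unit file where systemd will find it
cat <<'EOF' > /etc/systemd/system/add-iptable-rule.service
[Unit]
Description=enable forward all for forward chain in filter table
After=network.target

[Service]
ExecStart=/bin/bash /etc/sysconfig/add-forward-iptable-rule.sh

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable add-iptable-rule.service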
(5) Set kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
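These bridge sysctls only exist once the br_netfilter kernel module is loaded; if sysctl --system reports them as missing, load the module first:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load it at boot as well
sysctl net.bridge.bridge-nf-call-iptables                     # expect: ... = 1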
7. Install kubeadm, kubelet, and kubectl
(1). Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(2). Refresh the yum cache
yum clean all
yum makecache
yum repolist
(3). Install kubeadm, kubelet, and kubectl
yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0
(4) Point Docker and the kubelet at the same cgroup driver
# docker
vi /etc/docker/daemon.json
# add the following to the existing JSON:
"exec-opts": ["native.cgroupdriver=systemd"],

systemctl restart docker
# kubelet: use the same systemd driver as Docker
sed -i "s/cgroup-driver=cgroupfs/cgroup-driver=systemd/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet && systemctl start kubelet
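To confirm the drivers actually line up (note: /var/lib/kubelet/kubeadm-flags.env only appears after kubeadm init/join, so the second check applies later; with this kubeadm version the driver is detected from Docker at that point):

docker info | grep -i 'cgroup driver'        # expect: Cgroup Driver: systemd
# After kubeadm init/join, the kubelet's detected flags are recorded here:
cat /var/lib/kubelet/kubeadm-flags.env 2>/dev/null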
8. Download the k8s images
(1). List the required images
kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
(2) Work around the unreachable k8s.gcr.io registry by pulling from an Aliyun mirror and retagging
#!/bin/bash
# kubeadm.sh
set -e

KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi  $ALIYUN_URL/$imageName
done
# Run the script
sh ./kubeadm.sh
# Check the images
docker images
9. Initialize the cluster with kubeadm init
(1). Initialize the master node
kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=192.168.50.14 --pod-network-cidr=10.244.0.0/16
[To wipe the cluster state and start over, run kubeadm reset, then repeat the step above.]
Output:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.14:6443 --token rzrqpo.1hdg1yc2eikpunx8 \
    --discovery-token-ca-cert-hash sha256:32fb668badcee3bef094b7d69d7beb97ad2e26a7106236f078c3be369e89ecbc
What kubeadm init does on the master
01 - Runs a series of preflight checks to confirm this machine can run Kubernetes.
02 - Generates the certificates Kubernetes needs to serve requests, under /etc/kubernetes/pki/*.
03 - Generates the kubeconfig files the other components need to reach the kube-apiserver:
     ls /etc/kubernetes/
     admin.conf  controller-manager.conf  kubelet.conf  scheduler.conf
04 - Generates Pod manifests for the master components:
     ls /etc/kubernetes/manifests/*.yaml
     kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
05 - Generates the etcd Pod YAML:
     ls /etc/kubernetes/manifests/*.yaml
     kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml  etcd.yaml
06 - As soon as these YAML files appear under /etc/kubernetes/manifests/, the directory the kubelet watches, the kubelet creates the pods they define, i.e. the master component containers. Once those containers start, kubeadm polls the master health-check URL localhost:6443/healthz until the master components are fully up.
07 - Generates a bootstrap token for the cluster.
08 - Saves important master-node data such as ca.crt into etcd as a ConfigMap, for use when worker nodes join later.
09 - Installs the default add-ons; kube-proxy and DNS are the two add-ons Kubernetes always installs.
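Each step above leaves an artifact you can inspect on the master once init finishes; a quick tour:

ls /etc/kubernetes/pki/                                       # step 02: certificates
ls /etc/kubernetes/*.conf                                     # step 03: component kubeconfigs
ls /etc/kubernetes/manifests/                                 # steps 04-05: static pod manifests
kubeadm token list                                            # step 07: the bootstrap token
kubectl get configmap cluster-info -n kube-public -o yaml     # step 08: cluster info for joining nodes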
(2). Run the commands from the output above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
(3). Verify it worked
kubectl get pods -n kube-system      [list the kube-system pods]
kubectl get pods --all-namespaces    [list all pods]
(4). Health check
curl -k https://localhost:6443/healthz
10. Deploy the Calico network plugin
(1). Find the Calico plugin images
curl https://docs.projectcalico.org/v3.9/manifests/calico.yaml | grep image
Result:
image: calico/cni:v3.9.5
image: calico/pod2daemon-flexvol:v3.9.5
image: calico/node:v3.9.5
image: calico/kube-controllers:v3.9.5
(2). Pull the Calico images manually
docker pull calico/cni:v3.9.5
docker pull calico/pod2daemon-flexvol:v3.9.5
docker pull calico/node:v3.9.5
docker pull calico/kube-controllers:v3.9.5
(3). Install the Calico plugin
If any node has multiple network interfaces, Calico needs the internal interface specified in the manifest (a check for which interface that is follows the snippet below):
# Add this environment variable to the calico-node DaemonSet
...
spec:
  containers:
    - env:
        - name: DATASTORE_TYPE
          value: kubernetes
        # specify the internal NIC
        - name: IP_AUTODETECTION_METHOD
          value: interface=eth0
        - name: WAIT_FOR_DATASTORE
          value: "true"
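On a typical Vagrant public_network setup the bridged 192.168.50.x address sits on eth1 rather than eth0, so confirm before committing to an interface value:

# Find which NIC carries this node's 192.168.50.x address and
# set IP_AUTODETECTION_METHOD accordingly
ip -4 addr show | grep -B2 '192.168.50'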
Install the plugin:
yum install -y wget
wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
kubectl apply -f calico.yaml
(4). Confirm the Calico plugin installed successfully
kubectl get pods --all-namespaces -w
11. Join the worker nodes
(1) On worker01 and worker02, run the join command printed by kubeadm init above
kubeadm join 192.168.50.14:6443 --token rzrqpo.1hdg1yc2eikpunx8 \
--discovery-token-ca-cert-hash sha256:32fb668badcee3bef094b7d69d7beb97ad2e26a7106236f078c3be369e89ecbc
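Bootstrap tokens expire after 24 hours by default; if yours has, generate a fresh join command on the master:

# Run on the master; prints a complete kubeadm join command with a new token
kubeadm token create --print-join-command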
(2) Check the cluster from the master node
kubectl get nodes

NAME   STATUS   ROLES    AGE     VERSION
m      Ready    master   19m     v1.14.0
w1     Ready    <none>   3m6s    v1.14.0
w2     Ready    <none>   2m41s   v1.14.0
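As a final smoke test, you can schedule a workload and confirm it lands on a worker; a minimal sketch using the stock nginx image:

kubectl create deployment nginx --image=nginx               # create a test deployment
kubectl expose deployment nginx --port=80 --type=NodePort   # expose it on a node port
kubectl get pods -o wide                                    # the pod should be Running on w1 or w2
kubectl get svc nginx                                       # note the NodePort, then curl <node-ip>:<node-port>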