Notes from a Kubernetes Setup


Kubernetes Introduction and Concepts

Kubernetes (k8s) is an open-source platform for automating container operations such as deployment, scheduling, and scaling across node clusters. If you have deployed containers with Docker before, you can think of Docker as a lower-level component that Kubernetes uses internally. Kubernetes supports not only Docker but also Rocket (rkt), another container technology. With Kubernetes you can:

  • Automate the deployment and replication of containers;
  • Scale the number of containers up or down at any time;
  • Organize containers into groups and load-balance between them;
  • Easily roll out new versions of application containers;
  • Provide container resilience: a failed container is replaced automatically; and so on.

Kubernetes Platform Component Concepts

A Kubernetes cluster contains two main types of nodes: master and minion. Minion nodes are where Docker containers run; each one interacts with the Docker daemon on its host and also provides proxying.

  • Master: the master node exposes a set of API endpoints for managing the cluster and interacts with the minion nodes to carry out cluster operations.
  • Apiserver: the entry point for users interacting with the Kubernetes cluster. It wraps create/read/update/delete operations on the core objects behind a RESTful API and uses etcd for persistence and object consistency.
  • Scheduler: responsible for scheduling and managing cluster resources; for example, when a pod exits abnormally and must be placed on another machine, the scheduler applies its scheduling algorithm to find the most suitable node.
  • Controller-manager: mainly ensures that the number of running pods matches the replica count declared by a Replication Controller, and keeps the service-to-pod mappings up to date.
  • Kubelet: runs on each minion node and talks to the local Docker daemon, e.g. starting and stopping containers and monitoring their state.
  • Proxy: runs on each minion node and provides proxying for pods. It periodically fetches service information from etcd and rewrites iptables rules accordingly (early versions forwarded traffic in a userspace process, which was less efficient), so that traffic is forwarded to the node hosting the target pod.
  • Etcd: a distributed, strongly consistent key-value store used for service registration/discovery and shared configuration; Kubernetes stores its state in it. As a highly available, consistent store it has drawn growing developer attention: in the cloud era, services need to join a compute cluster quickly and transparently, shared configuration must be discoverable by every machine in the cluster, and above all such a registry must be highly available, secure, easy to deploy, and fast to respond. etcd was created to solve exactly this problem. A minimal usage sketch follows this list.
  • Flannel: an overlay-network tool designed by the CoreOS team for Kubernetes. Flannel re-plans how IP addresses are allocated across all nodes in the cluster, so that containers on different nodes receive non-overlapping addresses within a single flat network and can reach each other directly via those internal IPs.
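
To make the Etcd bullet above concrete, here is a minimal sketch of using etcd as a shared key-value registry, with the same v2 etcdctl commands used later in this article. The /services/web key and its JSON payload are made-up illustrations, not anything Kubernetes itself writes:

# register a hypothetical service instance as a key-value pair
etcdctl set /services/web/node1 '{"host":"192.168.93.103","port":80}'
# read a single key back, or list every instance under the directory
etcdctl get /services/web/node1
etcdctl ls /services/web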

 

Deploying Kubernetes

On your cloud platform, prepare servers for the following roles (three machines are used here):

192.168.93.101 Master

192.168.93.104 Etcd

192.168.93.103 Node

Configure the yum Base Repository

Run the following commands on every server to stop and disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Etcd Deployment

# yum install etcd -y
[root@scyun-node-4 etc]# cd /etc/etcd/
[root@scyun-node-4 etcd]# ls
etcd.conf
[root@scyun-node-4 etcd]# grep -aivE "#|^$" etcd.conf 
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

#Edit etcd.conf so these lines read as follows
[root@scyun-node-4 etcd]# cat etcd.conf 
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"    # listen address
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.93.104:2379"   # client URL advertised to the cluster

Start the Etcd Service

[root@scyun-node-4 etcd]# systemctl start etcd    # start the etcd service
[root@scyun-node-4 etcd]# netstat -nltp     # check ports 2379/2380
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      7400/etcd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      6959/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      7089/master         
tcp6       0      0 :::2379                 :::*                    LISTEN      7400/etcd           
tcp6       0      0 :::22                   :::*                    LISTEN      6959/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      7089/master         
[root@scyun-node-4 etcd]# ps -ef |grep -aiE etcd     # check the process
etcd       7400      1  0 22:19 ?        00:00:00 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379
root       7414   7225  0 22:20 pts/0    00:00:00 grep --color=auto -aiE etcd
#Configure the network key from which addresses are allocated for docker
[root@scyun-node-4 etcd]# etcdctl  mk  /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}

Master Deployment

[root@scyun-node-1 yum.repos.d]# yum install kubernetes-master  flannel -y   # install the packages

#Inspect the configuration files
[root@scyun-node-1 yum.repos.d]# cd /etc/kubernetes/
[root@scyun-node-1 kubernetes]# ll
total 16
-rw-r--r--. 1 root root 767 Jul  3 2017 apiserver
-rw-r--r--. 1 root root 655 Jul  3 2017 config
-rw-r--r--. 1 root root 189 Jul  3 2017 controller-manager
-rw-r--r--. 1 root root 111 Jul  3 2017 scheduler

Configuration Files

#Configure the apiserver file

[root@scyun-node-1 kubernetes]# vim apiserver  
[root@scyun-node-1 kubernetes]# cat apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    # API listen address

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"        # insecure API port

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.93.104:2379"     # etcd address

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

#Configure the config file

[root@scyun-node-1 kubernetes]# vim config 
[root@scyun-node-1 kubernetes]# cat config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.93.101:8080"

Start the Master Services

[root@scyun-node-1 kubernetes]# systemctl start kube-apiserver 
[root@scyun-node-1 kubernetes]# systemctl start kube-controller-manager 
[root@scyun-node-1 kubernetes]# systemctl start kube-scheduler
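
Optionally, enable the three services so they come back after a reboot; a short loop over the same systemd units keeps it terse:

for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl enable $svc
done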

#Check the processes

[root@scyun-node-1 kubernetes]# ps -ef |grep -aiE kube
kube 22755 1 1 22:30 ? 00:00:00 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://192.168.93.104:2379 --insecure-bind-address=0.0.0.0 --port=8080 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota
kube 22767 1 1 22:30 ? 00:00:00 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://192.168.93.101:8080
kube 22780 1 0 22:30 ? 00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://192.168.93.101:8080
root 22789 7229 0 22:31 pts/0 00:00:00 grep --color=auto -aiE kube

#Check the listening ports
[root@scyun-node-1 kubernetes]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      6962/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      7111/master         
tcp6       0      0 :::10251                :::*                    LISTEN      22780/kube-schedule 
tcp6       0      0 :::6443                 :::*                    LISTEN      22755/kube-apiserve 
tcp6       0      0 :::10252                :::*                    LISTEN      22767/kube-controll 
tcp6       0      0 :::8080                 :::*                    LISTEN      22755/kube-apiserve 
tcp6       0      0 :::22                   :::*                    LISTEN      6962/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      7111/master
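
Before moving on, it is worth verifying that the apiserver actually answers on the insecure port. Either of the following should work; componentstatuses reports scheduler, controller-manager, and etcd health in this generation of Kubernetes:

curl http://192.168.93.101:8080/version      # returns a JSON version blob
kubectl -s http://192.168.93.101:8080 get componentstatuses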

Deploy Flannel on the Master

[root@scyun-node-1 kubernetes]# vim /etc/sysconfig/flanneld 
[root@scyun-node-1 kubernetes]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.93.104:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

#Start the flanneld service on the master
[root@scyun-node-1 kubernetes]# systemctl start flanneld
[root@scyun-node-1 kubernetes]# ps -ef |grep -aiE flanneld
root 22806 1 0 22:51 ? 00:00:00 /usr/bin/flanneld -etcd-endpoints=http://192.168.93.104:2379 -etcd-prefix=/atomic.io/network
root 22869 7229 0 22:51 pts/0 00:00:00 grep --color=auto -aiE flanneld
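
Once flanneld is running it should have leased a subnet from the /atomic.io/network range in etcd. On CentOS 7 the lease is typically written to a small environment file you can inspect:

cat /run/flannel/subnet.env
# expect FLANNEL_NETWORK=172.17.0.0/16 plus a per-host FLANNEL_SUBNET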

Node Deployment

[root@scyun-node-3 ~]# yum install kubernetes-node  docker flannel  -y

Configuration Files

[root@scyun-node-3 ~]# cd /etc/kubernetes/
[root@scyun-node-3 kubernetes]# ll
total 12
-rw-r--r--. 1 root root 655 Jul   3 2017 config
-rw-r--r--. 1 root root 615 Jul   3 2017 kubelet
-rw-r--r--. 1 root root 103 Jul   3 2017 proxy

#The config file

[root@scyun-node-3 kubernetes]# vim config 
[root@scyun-node-3 kubernetes]# cat config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.93.101:8080"

#Configure the kubelet file

[root@scyun-node-3 kubernetes]# vim kubelet 
[root@scyun-node-3 kubernetes]# cat kubelet 
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.93.103"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.93.101:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Configure Flannel on the Node

[root@scyun-node-3 kubernetes]# vim /etc/sysconfig/flanneld 
[root@scyun-node-3 kubernetes]# cat /etc/sysconfig/flanneld 
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.93.104:2379"    # etcd address

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Start the Services

[root@scyun-node-3 kubernetes]# systemctl start kubelet 
[root@scyun-node-3 kubernetes]# systemctl start kube-proxy
[root@scyun-node-3 kubernetes]# systemctl start docker
[root@scyun-node-3 kubernetes]# systemctl start flanneld 
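
As on the master, you will probably want these services enabled at boot as well; a sketch:

for svc in kubelet kube-proxy docker flanneld; do
    systemctl enable $svc
done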

#Check the processes
[root@scyun-node-3 kubernetes]# ps -ef |grep kubelet 
root 23599 1 1 22:40 ? 00:00:08 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://192.168.93.101:8080 --address=0.0.0.0 --hostname-override=192.168.93.103 --allow-privileged=false --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
root 28706 7236 0 22:48 pts/0 00:00:00 grep --color=auto kubelet
[root@scyun-node-3 kubernetes]# ps -ef |grep kube-proxy
root 23691 1 1 22:40 ? 00:00:08 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.93.101:8080
root 28779 7236 0 22:48 pts/0 00:00:00 grep --color=auto kube-proxy

[root@scyun-node-3 kubernetes]# ps -ef |grep docker
root 23457 1 0 22:40 ? 00:00:02 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay2
root 23463 23457 0 22:40 ? 00:00:00 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc --runtime-args --systemd-cgroup=true
root 28851 7236 0 22:48 pts/0 00:00:00 grep --color=auto docker

[root@scyun-node-3 kubernetes]# ps -ef |grep flanneld
root 29018 1 0 22:48 ? 00:00:00 /usr/bin/flanneld -etcd-endpoints=http://192.168.93.104:2379 -etcd-prefix=/atomic.io/network
root 29170 7236 0 22:48 pts/0 00:00:00 grep --color=auto flanneld

Verify That k8s Deployed Successfully

#A Ready status means the node registered successfully

[root@scyun-node-1 kubernetes]# kubectl get nodes
NAME             STATUS    AGE
192.168.93.103   Ready     12m
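
If the node shows NotReady instead, describing it usually reveals the cause (kubelet unable to reach the apiserver, a stopped docker daemon, and so on):

kubectl describe node 192.168.93.103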

Verify the Network

# The network is working once the master can ping the docker0 address on the node

[root@scyun-node-3 kubernetes]# iptables -t filter -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy DROP)    # forwarding is blocked; set the policy to ACCEPT
target     prot opt source               destination         
DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0           
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0           

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-SERVICES (1 references)
target     prot opt source               destination     

#Enable forwarding
[root@scyun-node-3 kubernetes]# iptables -P FORWARD ACCEPT

#Then restart the flanneld and docker services
[root@scyun-node-3 kubernetes]# systemctl restart flanneld 
[root@scyun-node-3 kubernetes]# systemctl restart docker
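
Note that docker resets the FORWARD policy to DROP each time it starts, so the iptables -P fix above will not survive a docker restart. One way to make it persistent is a systemd drop-in that re-applies the policy after docker starts (a sketch; the drop-in file name is arbitrary):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/forward-accept.conf <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload
systemctl restart docker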

# Check the node's IP addresses
[root@scyun-node-3 kubernetes]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.98.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:99:33:0b:de  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.93.103  netmask 255.255.255.0  broadcast 192.168.93.255
        inet6 fe80::c737:2983:5843:bfec  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:83:cb:bd  txqueuelen 1000  (Ethernet)
        RX packets 63582  bytes 87833784 (83.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30721  bytes 2840714 (2.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.17.98.0  netmask 255.255.0.0  destination 172.17.98.0
        inet6 fe80::c084:cd98:879a:a38c  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 1  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4  bytes 228 (228.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

#From the master, ping the node's docker0 address
[root@scyun-node-1 kubernetes]# ping 172.17.98.1
PING 172.17.98.1 (172.17.98.1) 56(84) bytes of data.
64 bytes from 172.17.98.1: icmp_seq=1 ttl=62 time=0.856 ms
64 bytes from 172.17.98.1: icmp_seq=2 ttl=62 time=1.08 ms
64 bytes from 172.17.98.1: icmp_seq=3 ttl=62 time=0.830 ms
64 bytes from 172.17.98.1: icmp_seq=4 ttl=62 time=2.09 ms
64 bytes from 172.17.98.1: icmp_seq=5 ttl=62 time=2.15 ms
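
Pinging docker0 proves the overlay route exists; to test real container reachability you can start a throwaway container on the node and ping its address from the master (this assumes a busybox image is available locally):

# on the node: run a container and print its IP
docker run -d --name nettest busybox sleep 3600
docker inspect -f '{{.NetworkSettings.IPAddress}}' nettest
# on the master: ping the IP printed above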

Kubernetes Dashboard UI Deployment

#Deploy on the node as follows:

The complete procedure for configuring the Kubernetes dashboard follows. On the Node, import these two images in advance (download them from the cloud drive):
	pod-infrastructure
	kubernetes-dashboard-amd64
#Import pod-infrastructure.tgz
[root@scyun-node-3 ~]# docker load <pod-infrastructure.tgz 
c1eac31e742f: Loading layer [==================================================>] 205.9 MB/205.9 MB
9161a60cc964: Loading layer [==================================================>] 10.24 kB/10.24 kB
6872307367a6: Loading layer [==================================================>] 12.74 MB/12.74 MB
Loaded image ID: sha256:99965fb984237718c1682830d7513926082844194567e8a3f42fd7cca1b00a09
#Retag the imported pod image
[root@scyun-node-3 ~]# docker tag $(docker images|grep none|awk '{print $3}') registry.access.redhat.com/rhel7/pod-infrastructure
[root@scyun-node-3 ~]# docker images
REPOSITORY                                            TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/rhel7/pod-infrastructure   latest   99965fb98423   3 years ago   209 MB


#Import kubernetes-dashboard-amd64.tgz

[root@scyun-node-3 ~]# docker load <kubernetes-dashboard-amd64.tgz 
5f70bf18a086: Loading layer [==================================================>] 1.024 kB/1.024 kB
dd6ff7c6d5f0: Loading layer [==================================================>] 139.3 MB/139.3 MB
Loaded image ID: sha256:9595afede088e05779a589ea3c12f09bea5ada0fefddd52d45dbfaac64f87539
#Retag the imported dashboard image
[root@scyun-node-3 ~]# docker tag $(docker images|grep none|awk '{print $3}') bestwu/kubernetes-dashboard-amd64:v1.6.3

#List the images
[root@scyun-node-3 ~]# docker images
REPOSITORY                                            TAG       IMAGE ID       CREATED       SIZE
registry.access.redhat.com/rhel7/pod-infrastructure   latest    99965fb98423   3 years ago   209 MB
bestwu/kubernetes-dashboard-amd64                     v1.6.3    9595afede088   3 years ago   139 MB

Upload dashboard-controller.yaml and dashboard-service.yaml to the Master

[root@scyun-node-1 kubernetes]# ll
total 24
-rw-r--r--. 1 root root  895 Apr 25 22:25 apiserver
-rw-r--r--. 1 root root  660 Apr 25 22:27 config
-rw-r--r--. 1 root root  189 Jul   3 2017 controller-manager
-rw-r--r--. 1 root root 1134 Apr 24 13:45 dashboard-controller.yaml
-rw-r--r--. 1 root root  274 Apr 24 13:45 dashboard-service.yaml
-rw-r--r--. 1 root root  111 Jul   3 2017 scheduler

#File contents

[root@scyun-node-1 kubernetes]# vim dashboard-controller.yaml 
[root@scyun-node-1 kubernetes]# cat dashboard-controller.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubernetes-dashboard
        image: bestwu/kubernetes-dashboard-amd64:v1.6.3
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
          - --apiserver-host=http://192.168.93.101:8080   # apiserver address
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30



###dashboard-service.yaml
[root@scyun-node-1 kubernetes]# cat dashboard-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

#Create the resources from the two files

[root@scyun-node-1 kubernetes]# kubectl create -f dashboard-controller.yaml 
deployment "kubernetes-dashboard" created
[root@scyun-node-1 kubernetes]# kubectl create -f dashboard-service.yaml 
service "kubernetes-dashboard" created

#Check pod status; Running means it is up

[root@scyun-node-1 kubernetes]# kubectl get pods -n kube-system
NAME                                   READY     STATUS    RESTARTS   AGE
kubernetes-dashboard-709530931-kfz8t   1/1       Running   0          1m
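
If the pod is stuck in ContainerCreating or CrashLoopBackOff instead, the usual first steps are to check its events and logs:

kubectl describe pod kubernetes-dashboard-709530931-kfz8t -n kube-system
kubectl logs kubernetes-dashboard-709530931-kfz8t -n kube-system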

Access the web UI at http://192.168.93.101:8080/ui
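
You can also sanity-check the UI endpoint from the command line; in this generation of Kubernetes, /ui redirects to the dashboard service proxied through the apiserver, so a 3xx (or 200) response is a good sign:

curl -I http://192.168.93.101:8080/ui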

docker-20.10.12+k8s-1.21.8 脚本·使用·说明: 使用此脚本可在线快速部署一主两从的经典k8s集群 当然可以使用此脚本部署:1mast+nnode集群 比如主机规划: 主机名 ...