1 unstable release
0.4.3 | Aug 23, 2021
#5 in #helm
320KB
3.5K SLoC
KEnv - local Kubernetes environments with KinD
KEnv lets you spin up local Kubernetes clusters with KinD and install pre-configured applications (Helm charts organized into namespaces/releases, plus Kustomize-based applications) to speed up development/testing experiments. It can also be used to run smoke tests in CI/CD pipelines.
Prerequisites
KEnv uses `kubectl`, `kind`, `helm`, and `docker` (the docker engine can run elsewhere if needed, but the docker CLI must be available). KEnv can rely on tools already present on the system, or you can run `kenv tools update` to install the latest versions regardless of what is already installed.
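You can check the prerequisites yourself before running KEnv; a minimal sketch (the tool list comes from the paragraph above):

```shell
# Report whether each required CLI tool is on PATH
for tool in kubectl kind helm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```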
Installation
Once all prerequisites are met, getting started with KEnv is just a matter of installing it. This can be done by cloning this repository and running `cargo install`, or simply by downloading a statically linked binary from the releases page.
Repository structure
But before spinning up and tearing down local Kubernetes clusters with KEnv, let's first explore the structure of an application repository. There are only two folders at the root - `charts` and `kustomize`. The `charts` folder contains folders, each representing a Kubernetes namespace, and under each namespace there are folders containing Helm charts (each folder name is used as the Helm release name). The `kustomize` folder is simpler - it consists of a set of folders, each being a separate application that KEnv will try to install. In short, the structure looks like this
.
├── charts/ # helm-based pre-configured applications
│ ├── system/ # ├── namespace (installed by default)
│ │ └── .../ # │ └── chart folder with a name used as a release name
│ ├── monitoring/ # ├── namespace (optional)
│ │ └── .../ # │ └── chart folder with a name used as a release name
│ └── ... # └── ... other optional namespaces
└── kustomize/ # kustomize-based pre-configured applications
└── ... # └── ... applications
All pre-configured applications can be found under `charts/` (Helm-based applications) and `kustomize/` (Kustomize-based applications). Under `charts/`, you will find folders corresponding to the namespaces that will be created when their applications are installed.
There is one special kind of pre-configured application that is installed unconditionally in every cluster - the charts under the `system` namespace. This namespace is meant to hold workloads that many other services depend on: cert-manager with its CRDs, the Nginx Ingress controller, and so on.
The other, optional namespaces under `charts/` exist so that developer/tester environments can be tailored to specific needs. These optional namespaces can be enabled when required.
An example repository can be found and inspected under `example_repo/`.
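Given the layout above, the mapping from a chart's path to its namespace and release name can be sketched like this (the `prometheus` chart under `monitoring` is a hypothetical example, and the `charts/<namespace>/<release>` convention is inferred from the structure description):

```shell
# Derive namespace and release name from a path under charts/
# (assumed convention: charts/<namespace>/<release>)
path="charts/monitoring/prometheus"
namespace=$(basename "$(dirname "$path")")
release=$(basename "$path")
echo "namespace=$namespace release=$release"
```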
Spinning up a new Kubernetes cluster
The main way to control local kind clusters is the `kenv cluster up` command. Let's take a closer look at `kenv cluster up --help`, since it is the most important tool in this project.
kenv-clusters-up
Starts up Kubernetes cluster
USAGE:
kenv clusters up [FLAGS] [OPTIONS]
FLAGS:
-c, --enable-calico-cni Enables Calico CNI
-i, --enable-image-registry Enables container image registry
-h, --help Prints help information
OPTIONS:
-n, --cluster-name <name> The name of the cluster to spin up [default: local]
-v, --version <version> The kubernetes version [default: latest]
-w, --workers <workers> The number of worker nodes [default: 0]
--image-registry-port <image-registry-port>
The port where container image registry is exposed [default: 5000]
--image-registry-ui-port <image-registry-ui-port>
The port where container image registry UI is exposed [default: 5001]
-p, --expose-port <expose-port>... The port to expose with format host_port:container_port
-m, --mount-volume <mount-volume>... The volume mount with format host_path:container_path
--custom-registry <custom-registry> Custom container registry for kindest/node images
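The `--expose-port` and `--mount-volume` options both take colon-separated pairs; splitting such a value can be sketched as follows (the `7080:31080` mapping is only an illustration):

```shell
# Split the host_port:container_port format accepted by --expose-port
mapping="7080:31080"
host_port=${mapping%%:*}
container_port=${mapping##*:}
echo "host=$host_port container=$container_port"
```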
The `kenv cluster up` command is designed to allow spinning up multiple coexisting local Kubernetes clusters. Let's walk through some examples.
Starting a cluster with default parameters
Run the following command
kenv cluster up
The output should look like this
############################################################################################
# >>> Starting up [local] cluster <<< #
############################################################################################
# >> Starting up registry | skip
# >> Connect registry to kind network | skip
# >> Starting up registry UI | skip
# >> Starting up cluster |
# Ensuring node image (kindest/node:v1.21.2) | ✓
# Preparing nodes | ✓
# Writing configuration | ✓
# Starting control-plane | ✓
# Installing CNI | ✓
# Installing StorageClass | ✓
# >> Starting all pods done
#-----------------------------------------------------+------------------------------------#
Note that it is perfectly fine to run `kenv cluster up` multiple times
############################################################################################
# >>> Starting up [local] cluster <<< #
############################################################################################
# >> Starting up registry | skip
# >> Connect registry to kind network | skip
# >> Starting up registry UI | skip
# >> Starting up cluster | already exists
#-----------------------------------------------------+------------------------------------#
This is what the new `local` cluster looks like (`local` is the default cluster name unless another name is set explicitly)
# only the Kubernetes API server port is published (no -p flags were passed)
$ docker ps | grep local | sed 's/ /\n/g' | grep 6443
127.0.0.1:35435->6443/tcp
# new "kind-local" context is registered
$ kubectl config get-contexts | grep local
* kind-local kind-local kind-local
# there is only one node powering "local" cluster
$ kubectl --context kind-local get nodes
NAME STATUS ROLES AGE VERSION
local-control-plane Ready control-plane,master 3m31s v1.21.2
# the following pods are running
$ kubectl --context kind-local get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-558bd4d5db-b27xf 1/1 Running 0 3m26s
kube-system coredns-558bd4d5db-z747f 1/1 Running 0 3m26s
kube-system etcd-local-control-plane 1/1 Running 0 3m43s
kube-system kindnet-5rjsl 1/1 Running 0 3m26s
kube-system kube-apiserver-local-control-plane 1/1 Running 0 3m42s
kube-system kube-controller-manager-local-control-plane 1/1 Running 0 3m42s
kube-system kube-proxy-5q6ch 1/1 Running 0 3m26s
kube-system kube-scheduler-local-control-plane 1/1 Running 0 3m41s
local-path-storage local-path-provisioner-85494db59d-hxdff 1/1 Running 0 3m26s
Starting a cluster with custom parameters
Run the following command
kenv cluster up --cluster-name wide \
--workers 3 \
--enable-calico-cni \
--enable-image-registry \
-p 7080:31080 -p 7443:31443 \
-m $(pwd)/_data:/data
This one takes longer to finish, and its output should look like this
############################################################################################
# >>> Starting up [wide] cluster <<< #
############################################################################################
# >> Starting up registry | done
# >> Connect registry to kind network | done
# >> Starting up registry UI | done
# >> Starting up cluster |
# Ensuring node image (kindest/node:v1.21.2) | ✓
# Preparing nodes | ✓
# Writing configuration | ✓
# Starting control-plane | ✓
# Installing StorageClass | ✓
# Joining worker nodes | ✓
# >> Installing Calico CNI
# Release "calico" does not exist. Installing it now.
# W0801 22:03:10.656968 64675 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
# W0801 22:03:10.762030 64675 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
# NAME: calico
# LAST DEPLOYED: Sun Aug 1 22:03:10 2021
# NAMESPACE: default
# STATUS: deployed
# REVISION: 1
# TEST SUITE: None
# >> Starting Calico CNI done
# >> Starting all pods done
#-----------------------------------------------------+------------------------------------#
As mentioned before, it is entirely possible to run multiple local Kubernetes clusters. This run spun up a cluster with the following properties
- the cluster name is `wide`
- the cluster is built from `1` master node and `3` worker nodes (instead of the single master node in the `local` cluster)
- Calico CNI is used instead of the default Kindnet CNI (useful when you need to test things like network policies locally)
- ports `7080` and `7443` expose node ports `31080` and `31443` respectively
- an extra volume mount: `$(pwd)/_data` is available inside the `wide` cluster as `/data`
- an image registry is set up as a docker container, including its UI (hosted by default at `localhost:5000` and `localhost:5001` respectively)
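Under the hood, options like these map onto a KinD cluster configuration. A rough sketch of the equivalent config (field names follow KinD's `kind.x-k8s.io/v1alpha4` API; the exact file kenv generates is an assumption):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true    # --enable-calico-cni: skip Kindnet, install Calico afterwards
nodes:
  - role: control-plane
    extraPortMappings:       # -p 7080:31080 -p 7443:31443
      - hostPort: 7080
        containerPort: 31080
      - hostPort: 7443
        containerPort: 31443
    extraMounts:             # -m $(pwd)/_data:/data
      - hostPath: ./_data
        containerPath: /data
  - role: worker             # --workers 3
  - role: worker
  - role: worker
```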
Using Calico CNI instead of the built-in Kindnet CNI can be useful when you need to implement and test certain network policies. And having an image registry that is available on your host and trusted inside the Kubernetes cluster may let you set up CI/CD pipelines.
This is what the new `wide` cluster looks like
# ports exposed to localhost (8080 and 8443) - not clashing with 7080 and 7443 for "local" cluster
$ docker ps | grep wide | sed 's/ /\n/g' | grep 6443
127.0.0.1:39675->6443/tcp, 0.0.0.0:7080->31080/tcp, 0.0.0.0:7443->31443/tcp
# new "kind-wide" context is registered
$ kubectl config get-contexts | grep wide
* kind-wide kind-wide kind-wide
# there are more nodes powering "wide" cluster
$ kubectl --context kind-wide get nodes
NAME STATUS ROLES AGE VERSION
wide-control-plane Ready control-plane,master 7m49s v1.21.2
wide-worker Ready <none> 7m12s v1.21.2
wide-worker2 Ready <none> 7m12s v1.21.2
wide-worker3 Ready <none> 7m12s v1.21.2
# the following pods are running
$ kubectl --context kind-wide get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-6bf8c44b7b-qsjgl 1/1 Running 0 7m7s
calico-system calico-node-2vmws 1/1 Running 0 7m7s
calico-system calico-node-pkk4h 1/1 Running 0 7m7s
calico-system calico-node-wnsnt 1/1 Running 0 7m7s
calico-system calico-node-z5p5s 1/1 Running 0 7m7s
calico-system calico-typha-6d5c659854-54kmk 1/1 Running 0 7m
calico-system calico-typha-6d5c659854-lmkrp 1/1 Running 0 7m7s
calico-system calico-typha-6d5c659854-swmqq 1/1 Running 0 7m
kube-system coredns-558bd4d5db-94774 1/1 Running 0 7m49s
kube-system coredns-558bd4d5db-lfw9l 1/1 Running 0 7m49s
kube-system etcd-wide-control-plane 1/1 Running 0 8m4s
kube-system kube-apiserver-wide-control-plane 1/1 Running 0 8m4s
kube-system kube-controller-manager-wide-control-plane 1/1 Running 0 8m5s
kube-system kube-proxy-5cst5 1/1 Running 0 7m31s
kube-system kube-proxy-bddjr 1/1 Running 0 7m31s
kube-system kube-proxy-f5trr 1/1 Running 0 7m49s
kube-system kube-proxy-jbt9f 1/1 Running 0 7m31s
kube-system kube-scheduler-wide-control-plane 1/1 Running 0 8m5s
local-path-storage local-path-provisioner-85494db59d-gs98q 1/1 Running 0 7m49s
tigera-operator tigera-operator-9c5c8797c-479zq 1/1 Running 0 7m28s
Installing applications
Installing some pre-configured applications from the example application repository can be done like this
$ kenv apps rollout --path $(pwd)/example_repo --extra-namespace monioting --skip-if-already-installed
############################################################################################
# >>> Rolling out applications to [local] cluster <<< #
############################################################################################
# >> Validating cluster | valid
# >> Validating repository | valid
# >> Rolling out [system] releases |
# 1 cert-manager | ...
# Release "cert-manager" does not exist. Installing it now.
# NAME: cert-manager
# LAST DEPLOYED: Sun Aug 1 22:13:44 2021
# NAMESPACE: system
# STATUS: deployed
# REVISION: 1
# TEST SUITE: None
# 2 ingress-controller | ...
# Release "ingress-controller" does not exist. Installing it now.
# NAME: ingress-controller
# LAST DEPLOYED: Sun Aug 1 22:14:24 2021
# NAMESPACE: system
# STATUS: deployed
# REVISION: 1
# TEST SUITE: None
# >> Rolling out [monioting] releases | skipped
#-----------------------------------------------------+------------------------------------#
Can you guess which cluster they were installed into? `local` or `wide`? :D The correct answer is `local`, since it is the default cluster name.
There is a special flag, `--skip-if-already-installed` (or its short form `-s`), that is passed to make sure existing releases are not processed more than once. This allows quick iterations when you are only interested in a particular namespace.
If you run the same command again, it finishes quickly
$ kenv apps rollout --path $(pwd)/example_repo --extra-namespace monioting --skip-if-already-installed
############################################################################################
# >>> Rolling out applications to [local] cluster <<< #
############################################################################################
# >> Validating cluster | valid
# >> Validating repository | valid
# >> Rolling out [system] releases |
# 1 cert-manager | exists
# 2 ingress-controller | exists
# >> Rolling out [monioting] releases | skipped
#-----------------------------------------------------+------------------------------------#
Listing applications
To list the installed applications (Helm releases) across all namespaces, just run the following command:
$ kenv apps list
############################################################################################
# >>> Listing applications in [local] cluster <<< #
############################################################################################
# >> default | ...
# >> kube-node-lease | ...
# >> kube-public | ...
# >> kube-system | ...
# >> local-path-storage | ...
# >> system | ...
# 1 cert-manager | deployed (rev: 1)
# 2 ingress-controller | deployed (rev: 1)
#-----------------------------------------------------+------------------------------------#
If needed, another cluster or a specific namespace can be targeted as well. For more details, see `kenv apps list --help`.
Shutting down clusters
Shutting down clusters is as simple as starting them up - just make sure to specify the right cluster name.
$ kenv cluster down -n local
############################################################################################
# >>> Shutting down [local] cluster <<< #
############################################################################################
# >> Shutting down cluster | done
# >> Shutting down registry UI | already down
# >> Shutting down registry | already down
#-----------------------------------------------------+------------------------------------#
$ kenv cluster down -n wide
############################################################################################
# >>> Shutting down [wide] cluster <<< #
############################################################################################
# >> Shutting down cluster | done
# >> Shutting down registry UI | done
# >> Shutting down registry | done
#-----------------------------------------------------+------------------------------------#
Dependencies
~8-22MB
~351K SLoC