Install Kubernetes on mini3

Preparing the migration from mini1 to mini3. I had installed k3s on mini3 just for fun, but I started wondering whether I really needed to learn yet another way of configuring things, so I decided to install vanilla Kubernetes again, as before. Unlike minikube it doesn't need a VM; it installs straight onto the host OS, so installation is simple enough (though of course hard to compare with k3s, which is a single binary), and the CPU isn't so weak that it can't handle the load.

Installing kubeadm | Kubernetes

# /etc/modules-load.d/k8s.conf
br_netfilter
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
$ sudo sysctl --system
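
To double-check that the module is loaded and the sysctl values actually took effect (my own sanity check, not part of the official steps):

$ lsmod | grep br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables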

Install Containerd as a Container Runtime

Using Docker as the CRI is about to be deprecated, so let's try containerd instead.

https://kubernetes.io/docs/setup/production-environment/container-runtimes

But the document above says nothing about containerd itself. A separate document does cover the installation, but it describes installing it with Docker, so I'm following another link instead.

https://www.techrepublic.com/article/how-to-install-kubernetes-on-ubuntu-server-without-docker

# /etc/modules-load.d/containerd.conf 
overlay
br_netfilter
# /etc/sysctl.d/99-kubernetes-cri.conf 
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
$ sudo sysctl --system

Now install containerd as a package. The latest containerd on GitHub is 1.4.4, while the version in the Ubuntu repository is still 1.3.3, which is a bit of a shame, but let's install it anyway. I could build and install it myself, but then every future update would probably mean another build.

$ sudo apt search containerd
...
containerd/focal-updates,focal-security,now 1.3.3-0ubuntu2.3 amd64 
  daemon to control runC
$ sudo apt install containerd -y
$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
$ sudo systemctl restart containerd
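
Just to confirm that the daemon restarted cleanly with the generated config (my own sanity check, not part of the guide):

$ systemctl is-active containerd
$ sudo ctr version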

Install kubeadm, kubelet and kubectl

Add the repository so that kubeadm and the other k8s-related packages can be installed.

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK

$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Hit:2 http://kr.archive.ubuntu.com/ubuntu focal InRelease
Get:3 http://kr.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:4 http://kr.archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Get:5 http://kr.archive.ubuntu.com/ubuntu focal-security InRelease [109 kB]
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9,383 B]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [45.5 kB]
Fetched 378 kB in 5s (77.5 kB/s)   
Reading package lists... Done
$ sudo apt-get install kubeadm kubelet kubectl -y
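
The kubeadm install docs also recommend holding these packages so that a routine apt upgrade doesn't bump the cluster components unexpectedly:

$ sudo apt-mark hold kubeadm kubelet kubectl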

Disable swap

Make sure swap is not created again after a reboot,

# /etc/fstab
...
#/swap.img
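
Instead of editing /etc/fstab by hand, a one-liner like this would also comment out the swap entry (a rough sketch; it assumes the entry is the only line containing " swap "):

$ sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab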

and turn off the currently active swap as well.

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           15Gi       292Mi       2.7Gi       2.0Mi        12Gi        14Gi
Swap:         4.0Gi       0.0Ki       4.0Gi

$ sudo swapoff -a

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           15Gi       282Mi       2.7Gi       2.0Mi        12Gi        14Gi
Swap:            0B          0B          0B

Set up the Cluster

Now that everything is ready, start the installation with kubeadm. Since I used kubeadm when installing k8s on mini1, I looked around for a different method this time, but nothing really appealed to me, so it's kubeadm again. I thought Terraform might be able to do it, but I couldn't find any material on that. There are plenty of examples of building a cluster in a public cloud, or of adding namespaces and installing tools on an already-built cluster, but surprisingly little about what I actually wanted: building a Kubernetes cluster on-premise. Maybe that's why the k8s homepage still only introduces kubeadm, Kubespray, and kops.
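
(As an aside, the same pod CIDR can also be given through a kubeadm config file instead of a command-line flag. Something along these lines should be equivalent to the flag used below, assuming the v1beta2 API that kubeadm 1.21 uses:)

$ cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.245.0.0/16
EOF
$ sudo kubeadm init --config kubeadm-config.yaml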

$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.21.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.21.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.21.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.21.0
[config/images] Pulled k8s.gcr.io/pause:3.4.1
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.0
$ sudo kubeadm init --pod-network-cidr=10.245.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local mini3] and IPs [10.96.0.1 192.168.0.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost mini3] and IPs [192.168.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost mini3] and IPs [192.168.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

For about a minute (73 seconds according to the log below) nothing seems to happen, then the installation continues, and at the end the success message pops up. A successful install is always a pleasure.

        [kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 73.009593 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node mini3 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node mini3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rc93da.7cseyuwmnfvhgyyx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.101:6443 --token qs9eij.pm7l5jzihbk2rmvs \
        --discovery-token-ca-cert-hash sha256:754b224773ada603d486d3e6652437539a847323dee6fa011ae472e85b3bcdbc 

As instructed, copy the kube configuration file into the home directory. With just this file you can inspect and modify the state of the cluster from any machine, using tools like kubectl or k9s.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Since this file stores the certificates and other information needed to access the cluster, its permissions are set to 600.

$ ls -al $HOME/.kube/config
-rw------- 1 cychong cychong 5597 Apr 12 14:02 /home/cychong/.kube/config
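
For example, to drive the cluster from another machine, copying the file over and pointing KUBECONFIG at it is enough (the hostname and file name here are just for illustration):

$ scp mini3:~/.kube/config ~/.kube/config-mini3
$ KUBECONFIG=~/.kube/config-mini3 kubectl get nodes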

Install Calico

I could have gone with flannel, the usual go-to CNI, but once again I went out of my way to install the (somewhat) familiar Calico.

Unlike my previous install (v3.8), current Calico versions are installed via an Operator: https://docs.projectcalico.org/getting-started/kubernetes/quickstart

$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

A namespace called tigera-operator is created, and a single tigera-operator pod runs in it.

$ kubectl get pods --all-namespaces
NAMESPACE         NAME                               READY   STATUS    RESTARTS   AGE
kube-system       coredns-558bd4d5db-7qm2n           0/1     Pending   0          7m3s
kube-system       coredns-558bd4d5db-vrhmk           0/1     Pending   0          7m3s
kube-system       etcd-mini3                         1/1     Running   0          7m7s
kube-system       kube-apiserver-mini3               1/1     Running   0          7m7s
kube-system       kube-controller-manager-mini3      1/1     Running   0          7m7s
kube-system       kube-proxy-ll97z                   1/1     Running   0          7m3s
kube-system       kube-scheduler-mini3               1/1     Running   0          7m7s
tigera-operator   tigera-operator-675ccbb69c-fg4k9   1/1     Running   0          2m27s

Now install the rest of Calico. Since I changed the pod IP subnet, I'll download the config file, edit it, and then apply it.

$ wget https://docs.projectcalico.org/manifests/custom-resources.yaml
--2021-04-12 14:20:49--  https://docs.projectcalico.org/manifests/custom-resources.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 3.0.239.142, 104.248.158.121, 2406:da18:880:3802:371c:4bf1:923b:fc30, ...
Connecting to docs.projectcalico.org (docs.projectcalico.org)|3.0.239.142|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 545 [text/yaml]
Saving to: ‘custom-resources.yaml’

custom-resources.yaml   100%[============================>]     545  --.-KB/s    in 0s      

2021-04-12 14:20:50 (3.36 MB/s) - ‘custom-resources.yaml’ saved [545/545]

Change the value of the "cidr" field in the file.

$ vi custom-resources.yaml

Change it to the same subnet that was used for kubeadm init earlier.
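
A non-interactive way to make the same change would be a sed one-liner (assuming the manifest's default cidr is still 192.168.0.0/16, which it was at the time):

$ sed -i 's|192.168.0.0/16|10.245.0.0/16|' custom-resources.yaml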

$ grep cidr custom-resources.yaml 
      cidr: 10.245.0.0/16
$ kubectl apply -f custom-resources.yaml 
installation.operator.tigera.io/default created

Wait until all Calico pods are in the Running state.

# watch -n2 kubectl get pods -n calico-system 
Every 2.0s: kubectl get pods -n calico-system

NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5cbf59cb6f-lg9xq   1/1     Running   0          3m7s
calico-node-dglfb                          1/1     Running   0          3m8s
calico-typha-d798686b4-hf6bb               1/1     Running   0          3m8s
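
Instead of watching, kubectl can also block until the pods become ready (something like this, with a generous timeout):

$ kubectl wait --for=condition=Ready pods --all -n calico-system --timeout=300s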

Now, listing all namespaces:

$ kubectl get pods --all-namespaces
NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
calico-system     calico-kube-controllers-5cbf59cb6f-lg9xq   1/1     Running   0          8m18s
calico-system     calico-node-dglfb                          1/1     Running   0          8m19s
calico-system     calico-typha-d798686b4-hf6bb               1/1     Running   0          8m19s
kube-system       coredns-558bd4d5db-2wgvk                   1/1     Running   0          18m
kube-system       coredns-558bd4d5db-sxmnq                   1/1     Running   0          18m
kube-system       etcd-mini3                                 1/1     Running   0          18m
kube-system       kube-apiserver-mini3                       1/1     Running   0          18m
kube-system       kube-controller-manager-mini3              1/1     Running   0          18m
kube-system       kube-proxy-bnsv7                           1/1     Running   0          18m
kube-system       kube-scheduler-mini3                       1/1     Running   0          18m
tigera-operator   tigera-operator-675ccbb69c-z9hrz           1/1     Running   0          11m

To use this as a single-node cluster, remove the master taint:

$ kubectl taint nodes --all node-role.kubernetes.io/master-
node/mini3 untainted
$ kubectl get nodes -o wide
NAME    STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
mini3   Ready    control-plane,master   14m   v1.21.0   192.168.0.101   <none>        Ubuntu 20.04.2 LTS   5.4.0-66-generic   containerd://1.3.3-0ubuntu2.3

Unlike before, CONTAINER-RUNTIME shows containerd rather than docker, so the containerd installation seems to have gone in properly.

$ kubectl describe node mini3
Name:               mini3
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=mini3
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.0.101/24
                    projectcalico.org/IPv4VXLANTunnelAddr: 10.245.211.0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 12 Apr 2021 14:12:49 +0900
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  mini3
  AcquireTime:     <unset>
  RenewTime:       Mon, 12 Apr 2021 14:32:28 +0900
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 12 Apr 2021 14:24:14 +0900   Mon, 12 Apr 2021 14:24:14 +0900   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 12 Apr 2021 14:30:07 +0900   Mon, 12 Apr 2021 14:12:47 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 12 Apr 2021 14:30:07 +0900   Mon, 12 Apr 2021 14:12:47 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 12 Apr 2021 14:30:07 +0900   Mon, 12 Apr 2021 14:12:47 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 12 Apr 2021 14:30:07 +0900   Mon, 12 Apr 2021 14:23:56 +0900   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.0.101
  Hostname:    mini3
Capacity:
  cpu:                2
  ephemeral-storage:  114336932Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16311204Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  105372916357
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16208804Ki
  pods:               110
System Info:
  Machine ID:                 7369831895a5443e9806a29d674b929b
  System UUID:                a61d4c15-ad23-4b7c-9f11-c07cd13f6216
  Boot ID:                    7bbad4d9-72fc-4bdf-8604-26cb6fb2bc99
  Kernel Version:             5.4.0-66-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.3.3-0ubuntu2.3
  Kubelet Version:            v1.21.0
  Kube-Proxy Version:         v1.21.0
PodCIDR:                      10.245.0.0/24
PodCIDRs:                     10.245.0.0/24
Non-terminated Pods:          (11 in total)
  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
  calico-system               calico-kube-controllers-5cbf59cb6f-lg9xq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
  calico-system               calico-node-dglfb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
  calico-system               calico-typha-d798686b4-hf6bb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
  kube-system                 coredns-558bd4d5db-2wgvk                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (1%)     19m
  kube-system                 coredns-558bd4d5db-sxmnq                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (1%)     19m
  kube-system                 etcd-mini3                                  100m (5%)     0 (0%)      100Mi (0%)       0 (0%)         19m
  kube-system                 kube-apiserver-mini3                        250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
  kube-system                 kube-controller-manager-mini3               200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
  kube-system                 kube-proxy-bnsv7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
  kube-system                 kube-scheduler-mini3                        100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
  tigera-operator             tigera-operator-675ccbb69c-z9hrz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  0 (0%)
  memory             240Mi (1%)  340Mi (2%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   NodeHasNoDiskPressure    21m (x5 over 21m)  kubelet     Node mini3 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     21m (x5 over 21m)  kubelet     Node mini3 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  20m (x6 over 21m)  kubelet     Node mini3 status is now: NodeHasSufficientMemory
  Normal   Starting                 19m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      19m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  19m                kubelet     Node mini3 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    19m                kubelet     Node mini3 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     19m                kubelet     Node mini3 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  19m                kubelet     Updated Node Allocatable limit across pods
  Normal   Starting                 19m                kube-proxy  Starting kube-proxy.
  Normal   NodeReady                8m40s              kubelet     Node mini3 status is now: NodeReady

That's it for the basic installation.

But since Docker isn't installed, the docker command isn't available. Hmm... should I install podman, or just install Docker? Was it even possible to use docker-compose with podman...
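
In the meantime, the containers that containerd runs for the cluster can still be inspected with ctr or crictl (crictl has to be installed separately; the runtime endpoint is given explicitly here):

$ sudo ctr -n k8s.io containers list
$ sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps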