Overview
In this article, we build a Kubernetes cluster.
Configuration
Proxy settings
Kubernetes pulls its images from the internet, but it is not particularly friendly toward proxied connections. Going through a proxy is possible, but for the kubeadm and kubectl commands used below, the environment variables of a typical Linux OS are not enough on their own; additional settings are needed elsewhere. The details are beyond the scope of this article.
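As a rough reference only, here is a minimal sketch of the kind of extra configuration this usually involves, assuming containerd as the container runtime and using http://proxy.example.com:8080 as a placeholder proxy address (both are assumptions, not values taken from this environment):
root@k8s-01:~# mkdir -p /etc/systemd/system/containerd.service.d
root@k8s-01:~# cat <<'EOF' > /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
# Proxy used by containerd when pulling images (example values)
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
# Keep node, service, and pod network traffic off the proxy
Environment="NO_PROXY=localhost,127.0.0.1,192.168.68.0/24,10.96.0.0/12,10.255.0.0/16"
EOF
root@k8s-01:~# systemctl daemon-reload && systemctl restart containerd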
Starting the Control Plane
If there are no problems, you will see output like the following. Make a note of the final "kubeadm join" command, as it is needed later when connecting each worker node to the master node.
root@k8s-01:~# kubeadm init --pod-network-cidr=10.255.0.0/16
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.68.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-01 localhost] and IPs [192.168.68.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-01 localhost] and IPs [192.168.68.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.506248 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: tyz96v.1rnjo515o6l0x8rr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.68.11:6443 --token tyz96v.1rnjo515o6l0x8rr \
--discovery-token-ca-cert-hash sha256:0511356c9b273ca10450081e93469103d163ec5c1b1a1675943cabaab626ce96
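If the join command is ever lost, it does not have to be reconstructed by hand; it can be printed again on the control plane with a standard kubeadm subcommand, for example:
root@k8s-01:~# kubeadm token create --print-join-command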
Once initialization completes successfully, run the following commands, which appear in the latter half of the output above. This step is surprisingly easy to forget.
root@k8s-01:~# mkdir -p $HOME/.kube
root@k8s-01:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-01:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@k8s-01:~# export KUBECONFIG=/etc/kubernetes/admin.conf
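To confirm that kubectl can now reach the API server, a quick check is, for example:
root@k8s-01:~# kubectl get nodes
Note that the control-plane node will typically report NotReady at this point; it becomes Ready once a CNI is installed in a later step.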
Example of an error
Here is an example of a failure. The messages say to provide at least 2 CPUs and to turn swap off.
root@k8s-01:~# kubeadm init --pod-network-cidr=10.255.0.0/16
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
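The CPU error is resolved by giving the machine at least two vCPUs. As a reference, swap is typically disabled along the following lines (the fstab edit shown is only an example; adjust it to your environment so that swap also stays off after a reboot):
root@k8s-01:~# swapoff -a
root@k8s-01:~# sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap entries in /etc/fstab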
Adding a CNI
https://docs.projectcalico.org/getting-started/kubernetes/quickstart
A Kubernetes cluster requires a CNI, but for some reason none is included by default, so a third-party CNI has to be installed. This article uses Calico.
tigera-operator.yaml
As described on the official site, apply the required YAML files with "kubectl create". It is also quite common to edit the YAML for your environment before applying it. The published YAML is commented in fair detail, so it is recommended to download it first, review the contents to some extent, and only then apply it.
root@k8s-01:~# wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
root@k8s-01:~# kubectl create -f tigera-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
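Before moving on, it can be worth confirming that the operator Deployment created above has actually come up, for example:
root@k8s-01:~# kubectl get pods -n tigera-operator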
custom-resources.yaml
In custom-resources.yaml, change the cidr setting (default: 192.168.0.0/16). This value must match the cidr specified earlier with "kubeadm init --pod-network-cidr=10.255.0.0/16".
root@k8s-01:~# wget https://docs.projectcalico.org/manifests/custom-resources.yaml
root@k8s-01:~# vim custom-resources.yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.255.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
root@k8s-01:~# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
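The Calico Pods take a short while to be created after this; one way to follow their progress is, for example:
root@k8s-01:~# watch kubectl get pods -n calico-system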
Checking the Calico status
Check the state of Calico. If everything is fine, the namespaces (calico-apiserver, calico-system, tigera-operator) have been added, and each Pod in calico-system is Running.
root@k8s-01:~# kubectl get namespace
NAME               STATUS   AGE
calico-apiserver   Active   8h
calico-system      Active   8h
default            Active   8h
kube-node-lease    Active   8h
kube-public        Active   8h
kube-system        Active   8h
tigera-operator    Active   8h
root@k8s-01:~# kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-679f78745-jq8h6   1/1     Running   0          56s
calico-node-9vrth                         1/1     Running   0          56s
calico-typha-7bdddb4586-5qml2             1/1     Running   0          56s
Besides the above, several other Calico-related objects (Deployments, Services, and so on) are added. Check them with the following command.
root@k8s-01:~# kubectl get all -A
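Since the tigera-operator CRDs created earlier include tigerastatuses.operator.tigera.io, the overall health of the Calico components can also be summarized with, for example:
root@k8s-01:~# kubectl get tigerastatus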
Connecting the Worker node to the Control Plane
On the worker node, run the following command (the command shown in the latter half of the "kubeadm init --pod-network-cidr=10.255.0.0/16" output).
root@k8s-02:~# kubeadm join 192.168.68.11:6443 --token tyz96v.1rnjo515o6l0x8rr --discovery-token-ca-cert-hash sha256:0511356c9b273ca10450081e93469103d163ec5c1b1a1675943cabaab626ce96
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Once it completes successfully, confirm with "kubectl get nodes" that both nodes are Ready.
root@k8s-01:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
k8s-01   Ready    control-plane,master   48m   v1.22.2
k8s-02   Ready    <none>                 47s   v1.22.2
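If the new node stays NotReady for a while, a useful first check is whether a calico-node Pod has been scheduled onto it, for example:
root@k8s-01:~# kubectl get pods -n calico-system -o wide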