c2sr bootcamp docs

Task 2: Verify Control Plane Installation

This guide provides a comprehensive set of steps to verify that your k3s control plane was installed successfully and is fully operational.

Before you start

Make sure that:

  • Task 1 (Control Plane Installation) has been successfully completed.

  • All participants are in the shared terminal session.

  • The designated Driver will run the commands; the others will observe.

How to Verify the Installation

The designated Driver executes the commands; observers should watch the shared screen and verify the output of each command before proceeding to the next step.

Step 1: List the nodes associated with the Kubernetes control plane

kubectl get nodes

Expected output:

NAME      STATUS   ROLES                  AGE   VERSION
k3smain   Ready    control-plane,master   32m   v1.30.6+k3s1

Possible error:

WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode or --write-kubeconfig-group to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied

Step 1.1: Fix

Run this fix only if the error above is encountered:

sudo chmod 0644 /etc/rancher/k3s/k3s.yaml
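An alternative to loosening the system file's permissions is to keep a per-user copy of the kubeconfig. The sketch below assumes the default k3s paths shown in the error message; whether you prefer this over the chmod fix is a setup choice, not part of this task.

```shell
# Alternative sketch (assumption: a per-user kubeconfig is preferred
# over making /etc/rancher/k3s/k3s.yaml world-readable).
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
export KUBECONFIG=~/.kube/config   # add to ~/.bashrc to persist
```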

Step 2: Check the more detailed output to confirm the node IPs

kubectl get nodes -o wide

Expected Output:

NAME      STATUS   ROLES                  AGE     VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k3smain   Ready    control-plane,master   7h12m   v1.30.6+k3s1   192.168.0.32   192.168.1.5   Ubuntu 24.04.2 LTS   6.11.0-26-generic   containerd://1.7.22-k3s1
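If you only need the node's addresses, the wide output can be filtered. The snippet below is an offline sketch that parses a captured sample of the output with awk (column positions taken from the table above); the commented jsonpath variant would do the same against the live cluster.

```shell
# Offline sketch: pull NAME (column 1) and INTERNAL-IP (column 6)
# out of a sample `kubectl get nodes -o wide` output.
sample='NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
k3smain Ready control-plane,master 7h12m v1.30.6+k3s1 192.168.0.32 192.168.1.5'

echo "$sample" | awk 'NR > 1 { print $1, $6 }'
# prints: k3smain 192.168.0.32

# Live-cluster equivalent (kubectl does the filtering itself):
# kubectl get nodes -o jsonpath='{.items[0].status.addresses}'
```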

Step 3: Check the control plane health from the API.

kubectl get --raw='/livez?verbose'

Expected Response:

[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
livez check passed
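For scripted checks, the verbose response can be reduced to a single pass/fail. The sketch below greps a captured sample of the output for the summary line; on the live cluster you would pipe `kubectl get --raw='/livez?verbose'` into the same grep.

```shell
# Sketch: reduce the verbose /livez response to pass/fail.
# `response` stands in for: kubectl get --raw='/livez?verbose'
response='[+]ping ok
[+]etcd ok
livez check passed'

if echo "$response" | grep -q 'livez check passed'; then
  echo "control plane is live"
else
  echo "control plane is NOT live" >&2
  exit 1
fi
```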

Step 4: Check components of the control plane

kubectl get componentstatuses

Expected output:

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
etcd-0               Healthy   ok
scheduler            Healthy   ok
controller-manager   Healthy   ok
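Because ComponentStatus is deprecated, it is worth knowing a second way to confirm the same thing. The sketch below counts Healthy rows in a captured sample of the output; the commented command is the commonly used modern alternative of inspecting the control plane pods directly.

```shell
# Sketch: count Healthy components in a sample of the output above.
status='NAME STATUS MESSAGE ERROR
etcd-0 Healthy ok
scheduler Healthy ok
controller-manager Healthy ok'

echo "$status" | awk 'NR > 1 && $2 == "Healthy" { n++ } END { print n }'
# prints: 3

# Modern alternative on the live cluster (ComponentStatus is
# deprecated since v1.19):
# kubectl get pods -n kube-system
```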

Step 5: Obtain the node token

In this step, you will retrieve the unique node token from the control-plane server. This token acts as a secure password, authenticating new worker nodes and authorizing them to join your K3s cluster.

Save the token output in a text file on your computer.

sudo cat /var/lib/rancher/k3s/server/node-token

Expected output:

K10d5453f3d93::server:74651130fff
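The token is not used in this task itself; it is consumed in a later task when a worker joins the cluster. As a preview, the standard k3s agent install command looks like the sketch below (the server address 192.168.0.32 is taken from Step 2's output, and the placeholder must be replaced with the token you saved).

```shell
# Preview only — run on a WORKER node in a later task, not now.
# K3S_URL points at the control plane's API port (6443 by default);
# K3S_TOKEN is the node token retrieved above.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://192.168.0.32:6443 \
  K3S_TOKEN=<paste-node-token-here> sh -
```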

Accomplishments

Node Health and Registration

  • The kubectl get nodes command confirmed that the k3smain node has successfully registered with the cluster.

  • It is in a Ready state and correctly identified with the control-plane and master roles.

Network Confirmation

  • The detailed output from the -o wide flag verified that the node's INTERNAL-IP and EXTERNAL-IP addresses are correctly configured.

  • This ensures proper network visibility and connectivity.

API Server Deep-Dive

  • A raw query to the /livez endpoint passed successfully.

  • All critical API server components returned a verbose and positive status.

  • This confirms the health of the Kubernetes core engine, including:

    • Connection to the etcd backend

    • Post-start hooks

    • Controllers

Core Component Health

  • The kubectl get componentstatuses command reported Healthy status for:

    • etcd

    • The scheduler

    • The controller-manager

  • This verifies that the cluster's:

    • Data store

    • Pod scheduling logic

    • State-management controllers


      are functioning correctly.

Worker Node Authentication

  • The unique Node Token required for new nodes to join the cluster was retrieved from the control plane.

With all these checks passed, the installation is validated.
The cluster is ready for the next steps, such as:

  • Joining worker nodes

  • Deploying applications

Reflection

Please take a moment to write down any questions, issues, or doubts you encountered during this milestone.
This will help guide the next discussion and ensure everyone is on the same page before moving forward.

Next Steps

  • You may now detach from the screen session: press Ctrl+A, then D.

  • Log out from the shared user account.

  • Disconnect from the control plane node's SSH session with Ctrl+D.

  • Rejoin the common Discord lobby to await further instructions or support your peers.

Last modified: 23 June 2025