Creating a cluster with kubeadm

With kubeadm, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use kubeadm to set up a cluster that passes Kubernetes compliance tests. Kubeadm also supports other cluster lifecycle features such as bootstrap tokens and cluster upgrades.

The kubeadm tool is good if you need:

  • A simple way to try out Kubernetes, possibly for the first time.
  • A way for existing users to automate setting up a cluster and test their application.
  • A building block in other ecosystem and/or installer tools with a wider scope.

You can install and use kubeadm on multiple machines: your laptop, a set of cloud servers, a Raspberry Pi, and more. Whether you’re deploying in the cloud or on-premises, you can integrate kubeadm into provisioning systems like Ansible or Terraform.

Before you begin

To follow this guide, you need:

  • One or more machines running a deb/rpm-compatible Linux operating system; for example Ubuntu or CentOS.
  • 2 GiB or more of RAM per machine; any less leaves little room for your applications.
  • At least 2 CPUs on the machine that you use as a control plane node.
  • Full network connectivity among all machines in the cluster. You can use either a public or a private network.

You should also use a version of kubeadm that can deploy the version of Kubernetes you want to use in your new cluster.

Kubernetes’ version skew and release support policies apply to kubeadm as well as to Kubernetes overall. Check those policies to learn about which versions of Kubernetes and kubeadm are supported. This page is written for Kubernetes v1.27.

The kubeadm tool’s overall feature state is General Availability (GA). Some sub-features are still under active development. The implementation of cluster creation may change slightly as the tool evolves, but the overall implementation should be fairly stable.

Objectives

  • Install a single control plane Kubernetes cluster
  • Install a Pod network on the cluster so that your Pods can communicate with each other

Instructions

Preparing the hosts

Install a container runtime and kubeadm on all the hosts. For detailed instructions and other prerequisites, see Installing kubeadm.

Preparing the required container images

This step is optional and only applies if you want kubeadm init and kubeadm join to not download the default container images, which are hosted at registry.k8s.io.

Kubeadm has commands that can help you pre-pull the required images when creating a cluster without an internet connection on its nodes.
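For example, kubeadm’s config subcommands can list and pre-pull those images. This is a minimal sketch; pass --kubernetes-version or --config if you need a non-default version:

    # List the container images kubeadm requires for this release
    kubeadm config images list

    # Pre-pull the required images on each node
    kubeadm config images pull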

See Running kubeadm without an Internet connection for details.

Kubeadm allows you to use a custom image repository for the required images. See Using custom images for more information.

Initializing the control plane node

The control plane node is the machine where the control plane components run, including etcd (the cluster database) and the API server (with which the kubectl command-line tool communicates).

  1. (Recommended) If you have plans to upgrade this single control plane kubeadm cluster to high availability, you should specify the --control-plane-endpoint argument to set the shared endpoint for all control plane nodes. Such an endpoint can be either a DNS name or an IP address of a load balancer.
  2. Choose a Pod network add-on, and verify whether it requires any arguments to be passed to kubeadm init. Depending on which third-party provider you choose, you might need to set --pod-network-cidr to a provider-specific value. See Installing a Pod network add-on.
  3. (Optional) kubeadm tries to detect the container runtime by using a list of well-known endpoints. To use a different container runtime, or if there is more than one installed on the provisioned node, specify the --cri-socket argument to kubeadm init. See Installing a runtime.
  4. (Optional) Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, for example --apiserver-advertise-address=2001:db8::101.

To initialize the control plane node, run:
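    kubeadm init <args>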

Considerations about apiserver-advertise-address and ControlPlaneEndpoint

While --apiserver-advertise-address can be used to set the advertise address for this particular control plane node's API server, --control-plane-endpoint can be used to set the shared endpoint for all control plane nodes.

--control-plane-endpoint allows both IP addresses and DNS names that can map to IP addresses. Contact your network administrator to evaluate possible solutions with respect to such mapping.

The following is an example mapping:

192.168.0.102 cluster-endpoint

Where 192.168.0.102 is the IP address of this node and cluster-endpoint is a custom DNS name that maps to this IP. This will allow you to pass --control-plane-endpoint=cluster-endpoint to kubeadm init and pass the same DNS name to kubeadm join. Later you can modify cluster-endpoint to point to the address of your load balancer in a high availability scenario.
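As a sketch, a kubeadm init invocation that uses this mapping could look like the following. The flag is real; combining it with other flags such as --pod-network-cidr depends on your chosen network add-on:

    # Uses the cluster-endpoint DNS name from the example mapping above
    sudo kubeadm init --control-plane-endpoint=cluster-endpoint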

Kubeadm does not support converting a single control plane cluster created without --control-plane-endpoint into a highly available cluster.

More information

For more information about kubeadm init arguments, see the kubeadm reference guide.

To configure kubeadm init with a configuration file, see Using kubeadm init with a configuration file.

To customize control plane components, including optional IPv6 assignment to the liveness probe for control plane components and the etcd server, provide extra arguments to each component as documented in custom arguments.

To reconfigure a cluster that has already been created, see Reconfiguring a kubeadm cluster.

To run kubeadm init again, you must first tear down the cluster.

If you join a node with a different architecture to the cluster, ensure that the deployed DaemonSets have container image support for this architecture.

kubeadm init first runs a series of prechecks to ensure that the machine is ready to run Kubernetes. These prechecks expose warnings and exit on errors. kubeadm init then downloads and installs the cluster control plane components. This may take several minutes. After it finishes, you should see:

    Your Kubernetes control plane has been successfully initialized!

    To start using your cluster, you need to run the following as a normal user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a Pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      /docs/concepts/cluster-administration/addons/

    You can now join any number of machines by running the following on each node as root:

      kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
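    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config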

Alternatively, if you are the root user, you can run:
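    export KUBECONFIG=/etc/kubernetes/admin.conf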

Make a record of the kubeadm join command that kubeadm init outputs. You need this command to join nodes to your cluster.

The token is used for mutual authentication between the control plane node and the joining nodes. The token included here is secret. Keep it safe, because anyone with this token can add authenticated nodes to your cluster. These tokens can be listed, created, and deleted with the kubeadm token command. See the kubeadm reference guide.

Installing a Pod network add-on

Several external projects provide Kubernetes Pod networks using CNI, some of which also support Network Policy.

See a list of plug-ins that implement the Kubernetes network model.

You can install a Pod network add-on with the following command on the control plane node or on a node that has the kubeconfig credentials:
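    kubectl apply -f <add-on.yaml>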

You can install only one Pod network per cluster.

Once a Pod network has been installed, you can confirm that it is working by checking that the CoreDNS Pod is Running in the output of kubectl get pods --all-namespaces. And once the CoreDNS Pod is up and running, you can continue to join your nodes.

If your network is down or CoreDNS is not in the Running state, refer to the kubeadm troubleshooting guide.

Managed node labels

By default, kubeadm enables the NodeRestriction admission controller that restricts which labels can be self-applied by kubelets on node registration. The admission controller documentation covers which labels are permitted with the kubelet --node-labels option. The node-role.kubernetes.io/control-plane label is such a restricted label, and kubeadm manually applies it using a privileged client after a node has been created. To do that manually, you can do the same by using kubectl label while ensuring that you are using a privileged kubeconfig, such as the kubeadm-managed /etc/kubernetes/admin.conf, as in the sketch below.
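A minimal sketch of that manual step; the node name my-control-plane is a hypothetical example:

    # Apply the restricted label using the privileged kubeadm-managed kubeconfig
    kubectl --kubeconfig /etc/kubernetes/admin.conf label node my-control-plane node-role.kubernetes.io/control-plane=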

Control plane node isolation

By default, your cluster will not schedule Pods on the control plane nodes for security reasons. If you want to be able to schedule Pods on the control plane nodes, for example for a single-machine Kubernetes cluster, run:
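    kubectl taint nodes --all node-role.kubernetes.io/control-plane-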

The output will look something like:

    node/test-01 untainted
    ...

This will remove the node-role.kubernetes.io/control-plane:NoSchedule taint from any nodes that have it, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.

Join the nodes

Nodes are where your workloads (containers and Pods, etc.) run. To add new nodes to your cluster, do the following for each machine:

  • SSH to the machine

  • Become root (for example, sudo su -)

  • Install a runtime if necessary

  • Run the command that was output by kubeadm init. For example:
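    kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>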

If you do not have the token, you can get it by running the following command on the control plane node:
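    kubeadm token list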

The output is similar to this:
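    TOKEN                     TTL       EXPIRES                   USAGES                   DESCRIPTION                                                EXTRA GROUPS
    abcdef.1234567890abcdef   23h       2023-06-12T02:51:21Z      authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token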

By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control plane node:
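    kubeadm token create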

The output is similar to this:
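    5didvk.d09sbcov8ph2amjw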

If you do not have the value of --discovery-token-ca-cert-hash, you can get it by running the following command pipeline on the control plane node:
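    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
       openssl dgst -sha256 -hex | sed 's/^.* //'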

The output is similar to:
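    8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78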

The output of kubeadm join should look similar to:

    [preflight] Running pre-flight checks

    ... (log output of the join workflow) ...

    This node has joined the cluster:
    * Certificate signing request was sent to the control plane and a response was received.
    * The kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the control plane to see this node join the cluster.

A few seconds later, you should notice this node in the output of kubectl get nodes when run on the control plane node.

(Optional) Controlling your cluster from machines other than the control plane node

To get a kubectl on some other computer (for example, a laptop) to talk to your cluster, you need to copy the administrator kubeconfig file from your control plane node to your workstation like this:
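    scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
    kubectl --kubeconfig ./admin.conf get nodes

Note that admin.conf gives its holder superuser privileges over the cluster, so copy and share it sparingly.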

(Optional) Proxying the API server to localhost

If you want to connect to the API server from outside the cluster, you can use kubectl proxy:
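    scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
    kubectl --kubeconfig ./admin.conf proxy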

You can now access the API server locally at http://localhost:8001/api/v1.

Clean up

If you used disposable servers for your cluster, for testing, you can switch those off and do no further cleanup. You can use kubectl config delete-cluster to delete your local references to the cluster.

However, if you want to deprovision the cluster more cleanly, you must first drain the node and ensure that the node is empty, and then deconfigure it.

Remove the node

Talking to the control plane node with the appropriate credentials, run:
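    kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets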

Before deleting the node, reset the state installed by kubeadm:
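    kubeadm reset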

The reset process does not reset or clean iptables or IPVS tables.

If you want to reset iptables, you must do so manually:
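    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X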

If you want to reset the IPVS tables, you must run the following command:
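    ipvsadm -C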

Now delete the node:
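    kubectl delete node <node name>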

If you want to start over, run kubeadm init or kubeadm join with the appropriate arguments.

Clean up the control plane

You can use kubeadm reset on the control plane host to trigger a best-effort cleanup.

See the kubeadm reset reference documentation for more information about this subcommand and its options.

What’s next

  • Verify that your cluster is running correctly with Sonobuoy.
  • See Upgrading kubeadm clusters for details about upgrading your cluster using kubeadm.
  • Learn about advanced kubeadm usage in the kubeadm reference documentation.
  • Learn more about Kubernetes concepts and kubectl.
  • See the Cluster Networking page for a more extensive list of Pod network add-ons.
  • See the list of add-ons to explore other add-ons, including tools for logging, monitoring, network policy, visualization, and control of your Kubernetes cluster.
  • Configure how your cluster handles logs for cluster events and for applications running in Pods. See Logging Architecture for an overview of what is involved.

Feedback

  • For bugs, visit the kubeadm GitHub issue tracker
  • For support, visit the #kubeadm Slack channel
  • General SIG Cluster Lifecycle development Slack channel: #sig-cluster-lifecycle
  • SIG Cluster Lifecycle SIG information
  • SIG Cluster Lifecycle mailing list: kubernetes-sig-cluster-lifecycle

Version skew policy

While kubeadm allows version skew against some components that it manages, it is recommended that you match the kubeadm version with the versions of the control plane components, kube-proxy, and kubelet.

kubeadm's skew against the Kubernetes version

kubeadm can be used with Kubernetes components that are the same version as kubeadm or one version older. The Kubernetes version can be specified to kubeadm by using the --kubernetes-version flag of kubeadm init or the ClusterConfiguration.kubernetesVersion field when using --config. This option will control the versions of kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy.

Example:

  • kubeadm is at 1.27
  • kubernetesVersion must be at 1.27 or 1.26

kubeadm's skew against the kubelet

Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is the same version as kubeadm or one version older.

Example:

  • kubeadm is at 1.27
  • kubelet on the host must be at 1.27 or 1.26

kubeadm's skew against kubeadm

There are certain limitations on how kubeadm commands can operate on existing nodes or entire clusters managed by kubeadm.

If new nodes are joined to the cluster, the kubeadm binary used for kubeadm join must match the last version of kubeadm used either to create the cluster with kubeadm init or to upgrade the same node with kubeadm upgrade. Similar rules apply to the rest of the kubeadm commands with the exception of kubeadm upgrade.

Example for kubeadm join:

  • kubeadm version 1.27 was used to create a cluster with kubeadm init
  • Joining nodes must use a kubeadm binary that is at version 1.27

Nodes that are being upgraded must use a version of kubeadm that is the same MINOR version or one MINOR version newer than the version of kubeadm used for managing the node.

Example for kubeadm upgrade:

  • kubeadm version 1.26 was used to create or upgrade the node
  • The version of kubeadm used for upgrading the node must be at 1.26 or 1.27

For more information about the version skew between the different Kubernetes components, see the Version Skew Policy.

Limitations

Cluster resilience

The cluster created here has a single control plane node, with a single etcd database running on it. This means that if the control plane node fails, your cluster may lose data and may need to be recreated from scratch.

Workarounds:

  • Perform regular backups of etcd. The etcd data directory configured by kubeadm is in /var/lib/etcd on the control plane node.

  • Use multiple nodes in the control plane. You can read Options for High Availability Topology to choose a cluster topology that provides high availability.

Platform compatibility

Kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x following the multi-platform proposal.

Multi-platform container images for the control plane and add-ons are also supported since v1.12.

Only a few of the network providers offer solutions for all platforms. Check the list of network providers above or the documentation for each provider to find out if the provider supports your chosen platform.

Troubleshooting

If you’re having trouble with kubeadm, check out our troubleshooting documents.