Kubernetes Cluster Setup Using Kubeadm on AWS

Waseem Akram
Researcher, Pentester, Dev
2025-02-14
7 min read

In this guide, I will explain how to set up a Kubernetes cluster with one master node and two worker nodes using Kubeadm on the AWS cloud. Each step below includes extra detail and explanation to help you understand the process thoroughly.

Prerequisites for this setup:

  • AWS account. (You will launch EC2 instances on AWS)
  • MobaXterm installed on your system. (Facilitates SSH and multi-execution)
  • Knowledge about Kubernetes architecture. (Familiarity with control plane, worker nodes, pods, and services)

What is Kubeadm?

Kubeadm is a tool that automates the installation and configuration of a Kubernetes cluster without complex manual setup. It bootstraps a minimum viable cluster, letting you focus on operating and managing Kubernetes rather than wiring up each component by hand. It is developed and maintained by the Kubernetes community and is suitable for both testing and production-like environments.
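
At a high level, kubeadm reduces cluster bootstrapping to two commands, which we will run in full later in this guide (the values below are placeholders, not the exact flags we use):

# On the master node: create the control plane.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# On each worker node: join the cluster using the token printed by init.
sudo kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>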

Kubeadm prerequisites:

We need to create the EC2 instances with the following configurations:

  • Launch one Ubuntu instance with a minimum of 2 vCPU and 2GB RAM for the master node (ensures enough resources for control plane components).
  • Launch two Ubuntu instances with a minimum of 1 vCPU and 2GB RAM for the worker nodes (enough to run pods and other containerized workloads).

Kubeadm port requirements:

Open the following ports in the security groups to allow traffic between the nodes (an AWS CLI example follows the lists below):

Control Plane Node

  • 6443/tcp: Kubernetes API Server
  • 2379-2380/tcp: etcd server client API
  • 10248-10260/tcp: Kubelet and other control plane APIs
  • 80, 8080, 443/tcp: Generic HTTP/HTTPS ports
  • 30000-32767/tcp: NodePort range for external access

Worker Node

  • 10248-10260/tcp: Kubelet API and related components
  • 30000-32767/tcp: NodePort range for external access
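
If you prefer the AWS CLI over the console, the same rules can be added to a security group shared by all nodes roughly as follows. The security group ID and CIDR are placeholders; adjust them to your VPC, and tighten the open rules once the cluster works.

SG_ID="sg-xxxxxxxxxxxxxxxxx"     # placeholder security group ID
VPC_CIDR="172.31.0.0/16"         # placeholder VPC CIDR for node-to-node traffic

# API server and NodePort range (reachable externally in this setup; restrict as needed)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 6443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0

# etcd and kubelet/control plane components (node-to-node traffic only)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 2379-2380 --cidr "$VPC_CIDR"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 10248-10260 --cidr "$VPC_CIDR"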

Cluster setup steps:

Below are the major steps in setting up a Kubernetes cluster with one master and two worker nodes using Kubeadm.

  • Launch 3 EC2 t2.medium instances with the defined ports open.
  • Install a container runtime (we use CRI-O) on all nodes.
  • Install Kubeadm, Kubelet, and kubectl on all the nodes.
  • Initiate Kubeadm control plane configuration on the master node.
  • Install the Calico network plugin for pod networking.
  • Join the worker nodes to the cluster using the join command.
  • Validate all cluster nodes and components.
  • Deploy a test Nginx app and confirm the setup works.

Let’s proceed step by step.

Launch EC2 instances:

Launch three t2.medium instances on AWS and SSH into them through MobaXterm (multi-execution mode lets you run commands on all nodes simultaneously). Ensure each instance is reachable and ports are configured as per the prerequisites.

1. Disable SWAP

Disable SWAP on all Ubuntu instances (master and workers) because Kubernetes requires swap to be off to avoid performance issues and scheduling problems.

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
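
To confirm swap is really off on each node:

swapon --show   # prints nothing when swap is disabled
free -h         # the Swap line should show 0B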

2. Enable Bridged Traffic for iptables on All Nodes

Adjust kernel parameters so that iptables can inspect bridged traffic properly.

Load Kernel Modules

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
 
sudo modprobe overlay
sudo modprobe br_netfilter

Configure sysctl for Bridged Traffic

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
 
sudo sysctl --system

This ensures that traffic to pods and services can be handled consistently by the kernel.
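
You can verify that the modules are loaded and the settings have taken effect:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward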

3. Install CRI-O as the Container Runtime

A container runtime is required by Kubernetes to run containers. Here, we choose CRI-O for its simplicity and lightweight approach.

Enable CRI-O Repositories (Version 1.28)

OS="xUbuntu_22.04"
VERSION="1.28"
 
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
 
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF

Add GPG Keys for CRI-O

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -

Install CRI-O and CRI Tools

sudo apt-get update
sudo apt-get install cri-o cri-o-runc cri-tools -y

Enable and Start CRI-O

sudo systemctl daemon-reload
sudo systemctl enable crio --now

CRI-O provides a minimal environment for container execution. The cri-tools package includes crictl, which is helpful for troubleshooting containers.
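
A quick sanity check that the runtime is up before moving on:

sudo systemctl status crio --no-pager
sudo crictl info    # runtime status; RuntimeReady should be true
sudo crictl ps -a   # empty for now; cluster containers will appear here later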

4. Install Kubeadm, Kubelet, and Kubectl

Kubeadm handles cluster setup, Kubelet runs on each node to manage containers, and kubectl is the CLI for interacting with the cluster.

Install the required dependencies

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg

Download the GPG key

The legacy apt.kubernetes.io repository has been retired; the packages are now served from the community-owned pkgs.k8s.io repository, pinned here to v1.28 to match the CRI-O version installed above.

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the Kubernetes APT repository

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update apt repo

sudo apt-get update -y

Install kubelet, kubeadm, and kubectl

sudo apt-get install -y kubelet kubeadm kubectl

Hold the packages

sudo apt-mark hold kubelet kubeadm kubectl

Add the node IP to KUBELET_EXTRA_ARGS

sudo apt-get install -y jq

# On AWS Ubuntu instances the primary interface is usually ens5 or eth0 rather than eth1,
# so detect it from the default route instead of hard-coding an interface name.
primary_if="$(ip route | awk '/^default/ {print $5; exit}')"
local_ip="$(ip --json addr show "$primary_if" | jq -r '.[0].addr_info[] | select(.family == "inet") | .local')"

cat <<EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=$local_ip
EOF

This ensures the correct IP is registered in Kubernetes for each node.
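
A quick check that the tools are installed and on matching versions across all nodes:

kubeadm version -o short
kubectl version --client
kubelet --version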

5. Initialize Kubeadm On Master Node

Perform these steps only on the master node. This process sets up the control plane components.

Set the environment variables

The instance's public IP is used as the control-plane endpoint so that the API server certificate is valid for it. If your worker nodes will join only over the VPC's private network, you can set IPADDR to the private IP instead.

IPADDR=$(curl -s ifconfig.me)
NODENAME=$(hostname -s)
POD_CIDR="192.168.0.0/16"

Initialize the master node

sudo kubeadm init --control-plane-endpoint=$IPADDR  --apiserver-cert-extra-sans=$IPADDR  --pod-network-cidr=$POD_CIDR --node-name $NODENAME --ignore-preflight-errors Swap

On successful initialization, you will receive a join command. Copy this for use on worker nodes.
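
If you misplace the join command, you can print a fresh one on the master node at any time:

sudo kubeadm token create --print-join-command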

Configure kubeconfig to use kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the kubeconfig

kubectl get po -n kube-system

Check for coredns, kube-proxy, and other pods. They might be in a state like Pending until a network plugin is installed.

Verify component health

kubectl get --raw='/readyz?verbose'

By default, the master node is tainted to restrict scheduling. Remove the taint if you want to run workloads on it:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

6. Install Calico Network Plugin

A network plugin is required for pod-to-pod communication. We choose Calico for its performance and network policy features.

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
kubectl create -f custom-resources.yaml
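
The Calico pods take a couple of minutes to start. Watch the calico-system namespace until everything is Running, then confirm the master node reports Ready:

watch kubectl get pods -n calico-system
kubectl get nodes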

7. Join the Worker Nodes to the Cluster

Use the join command generated by the master setup to add each worker node.

Example:

sudo kubeadm join <MASTER_PUBLIC_IP>:6443 --token <YOUR_TOKEN> \
    --discovery-token-ca-cert-hash <SHA256_VALUE>

Run this command only on the worker nodes. Once they have joined successfully, verify from the master node:

kubectl get nodes
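
The new nodes will show <none> in the ROLES column. If you prefer them to read as workers, you can optionally label them (the node name is a placeholder; use the names reported by kubectl get nodes):

kubectl label node <WORKER_NODE_NAME> node-role.kubernetes.io/worker=worker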

8. Deploy a sample Nginx App

Finally, test your cluster by deploying a simple Nginx app:

kubectl run nginx --image=nginx --port=80
kubectl expose pod nginx --port=80 --type=NodePort
kubectl get svc

You can access the Nginx pod using the assigned NodePort on any node IP.
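
To test it, fetch the assigned NodePort on the master node and then curl any node's public IP from your workstation (the NodePort range is already open in the security group):

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<NODE_PUBLIC_IP>:$NODE_PORT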

Wrapping Up!

In this guide, we walked through the process of deploying a Kubernetes cluster using Kubeadm on AWS. We covered how to disable swap, enable bridged traffic, install CRI-O, set up the master node with Kubeadm, join worker nodes, and validate the setup by deploying a sample Nginx app.

Thanks for reading, and see you in the next guide!
