As you may know, I have been heavily focused on Kubernetes in the Enterprise for the past 2 years, mainly with VMware PKS and NSX-T as the CNI.
This is a great combination for your enterprise deployments of Kubernetes as it allows you to address all three main stakeholders in large and medium-sized companies:
- Developers/DevOps – these are the users. They need a Kubernetes cluster, which really means they need a Kubernetes API to interact with and its lifecycle management (LCM) capabilities. All the underlying infrastructure has to be handled automatically, without a single ticket being opened for networking, storage, security, or load balancing. These folks also need the freedom to do whatever they want in their Kubernetes cluster, even though it is managed as a service, and they expect it to behave the same as upstream Kubernetes. (OpenShift is its own thing that some like and others less so, as they prefer straight-up upstream.)
- Network team – cares about being able to provide the required level of network services in a consistent operational model. Integration with the physical network using BGP, ECMP, path prepending, etc. is crucial.
- Security team – conformance, control, and operational consistency. This team needs to make sure that no one is doing what they are not supposed to, or (god forbid) pushing unauthorized security policies to production.
The user experience with PKS, once it is up and running, is that the folks managing the platform can push out a new cluster with a single command (pks create-cluster) while controlling elaborate networking configurations using network profiles.
While PKS and NSX-T achieve those objectives, there are cases where developers just want Kubernetes on their laptops for testing, or plain upstream Kubernetes in the datacenter. Sometimes all you want is a minimally viable solution, especially from a CNI point of view. For these fast and easy deployments, NSX-T may be overkill, which is why VMware has created a new open-source CNI project called “Antrea“.
Antrea is a nifty CNI based on Open vSwitch (OVS) instead of iptables, and it is easily deployed with a single “kubectl apply” command.

Today it supports basic network functions and network policies. In the future, VMware is planning to provide a way to manage many “Antreas” using the NSX control plane, so that if you have a ton of developers using it, you can control the security policies and network configuration they deploy. As part of the NSX Service Mesh testing I do, I decided I needed to try it out. But then I realized that I had never deployed Kubernetes outside of PKS and didn’t know what I was doing 🙂
After trying to deploy a cluster the really manual way for a day with no success, I called my go-to person for anything Kubernetes, Oren Penso (@openso). Oren gave me his raw notes on deploying Kubernetes with kubeadm, which I refined a bit, adding Antrea as the CNI and MetalLB as the load balancer. This post is about sharing those notes. So, with no further ado, here are the steps:
- I use Ubuntu as my leader and follower OSs.
(“Leader” and “follower” are what I call what were previously called master and node, following the lead of Joe Beda (@jbeda), no pun intended 🙂)
- The first thing you want to do is disable swap in the OS by commenting out the swap entry in /etc/fstab and running:
sudo swapoff -a
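If you prefer to script the fstab edit rather than opening an editor, a sed one-liner does the trick. Here is a sketch that dry-runs the pattern on a sample file, so you can check it before touching the real thing; the comment shows the equivalent edit against the real /etc/fstab:

```shell
# Dry-run the swap-commenting edit on a sample fstab (safe to experiment on).
# The real edit would be: sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
cat > /tmp/fstab.sample <<'EOF'
/dev/sda1 / ext4 defaults 0 1
/dev/sda2 none swap sw 0 0
EOF
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

Only the line containing a swap mount gets a leading “#”; the root filesystem entry is left alone.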
- Now we need Docker installed. The Docker version needs to match what is supported by the Kubernetes version you are deploying. Check the Kubernetes changelog for the supported versions; here’s the one for Kubernetes 1.16:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md . To install Docker, follow these instructions:
https://docs.docker.com/install/linux/docker-ce/ubuntu/
- Next, we want to install kubeadm, starting on the leader. Now, there are other ways to deploy Kubernetes, such as kind or even Cluster API (which I will switch to later this year), but for now kubeadm is the best way for me. One thing you need to make sure of is that the kubeadm version matches the version of Kubernetes you want to deploy. For example, to deploy the latest version, I would run the following commands:
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
But since I wanted to deploy Kubernetes 1.15, I had to install kubeadm and kubelet with a matching version, like this:
apt-get install -y kubelet=1.15.5-00 kubeadm=1.15.5-00 kubectl
It is also worth holding the packages afterwards (apt-mark hold kubelet kubeadm kubectl) so a routine apt-get upgrade doesn’t bump them out of sync with the cluster.
- The next step is to deploy Kubernetes itself on the leader by running the “kubeadm init” command, with a couple of notes. We want to specify a pod CIDR for the pods’ internal IPs. This relates to the way Kubernetes lets pods communicate internally; these IPs stay internal to the cluster, but you don’t want the range to overlap with the actual CIDR of the nodes themselves. In my lab the nodes are on 192.168.x.x, so I had to specify a different CIDR. Also, unless you want the latest Kubernetes, you need to specify the version as well; in my case I specified 1.15.5, like this:
kubeadm init --kubernetes-version 1.15.5 --pod-network-cidr 172.12.0.0/16 | tee kubeadm-init.out
- After this is done you should have Kubernetes running on your Leader.
- The terminal will display the command you will need to run later on each follower node to join the cluster. Copy it aside for later use. It will look something like this:
kubeadm join 192.168.110.121:6443 --token pcf2jy.v3z05ndfedumgrjs --discovery-token-ca-cert-hash sha256:8e771038080aeec633e2d17b20541ef18522cf47f6f01ab23ac4a1cde8a3f7d9
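If you misplace that command, you can grep it back out of the kubeadm-init.out file we saved with tee (real kubeadm prints it across two lines with a trailing backslash). And if the token itself has expired (they last 24 hours by default), running kubeadm token create --print-join-command on the leader will mint a fresh one. A sketch, with a sample of the saved output recreated so the grep can be tried anywhere:

```shell
# Sketch: recover the join command from the output captured with `tee`.
# A sample of that output is recreated here for illustration; on the
# leader you would grep the real kubeadm-init.out instead.
cat > /tmp/kubeadm-init.out <<'EOF'
Your Kubernetes control-plane has initialized successfully!
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.110.121:6443 --token pcf2jy.v3z05ndfedumgrjs \
    --discovery-token-ca-cert-hash sha256:8e771038080aeec633e2d17b20541ef18522cf47f6f01ab23ac4a1cde8a3f7d9
EOF
grep -A 1 'kubeadm join' /tmp/kubeadm-init.out
```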
Also, in order to interact with Kubernetes from the leader’s terminal, you need to run the following commands on it (these commands are also displayed in the output of the kubeadm init command):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Now that we have Kubernetes working, let’s check that we can interact with it by running:
kubectl cluster-info
The next step is to install the CNI, which in our case is Antrea! The instructions are detailed here:
https://github.com/vmware-tanzu/antrea/blob/master/docs/getting-started.md, but it is really straightforward. All you need to do to install the latest Antrea is run the following command:
kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea.yml
That’s all there is to it for the CNI! To check that it is working, verify that the kube-dns or coredns pods got IPs from the pod CIDR by running:
kubectl get pods -o wide -n kube-system
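With the CNI up, this is also a good point to try a network policy, since enforcing standard Kubernetes NetworkPolicy objects is one of the things Antrea does with OVS. As a minimal sketch (a standard upstream-style manifest, nothing Antrea-specific), the following denies all ingress traffic to pods in the default namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}     # empty selector = all pods in the namespace
  policyTypes:
  - Ingress           # no ingress rules listed, so all ingress is denied
```

Apply it with kubectl apply -f, and remove it with kubectl delete networkpolicy default-deny-ingress when you are done testing.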
The last thing we need to do on the leader before deploying the followers is to deploy a load balancer. There are plenty of those; my favorite in the enterprise is AVI, but for a lab deployment or your laptop,
Duffie Cooley (@mauilion) showed me a really cool and easy one called “MetalLB”. Just follow the instructions here:
https://metallb.universe.tf/installation/, but generally what you need to do is run:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
And then configure it by creating a ConfigMap YAML like this, giving it a few IPs to be used as VIPs (these IPs need to be routable on your node network):
vim metallb-configmap.yaml
------
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
------
kubectl apply -f metallb-configmap.yaml
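To see MetalLB actually hand out one of those VIPs, create a Service of type LoadBalancer. A minimal sketch, where the nginx name and app=nginx label are hypothetical and just for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb          # hypothetical service name, for illustration
spec:
  type: LoadBalancer      # MetalLB watches this type and assigns a VIP
  selector:
    app: nginx            # assumes a deployment labeled app=nginx exists
  ports:
  - port: 80
    targetPort: 80
```

After applying it, kubectl get svc nginx-lb should show an EXTERNAL-IP taken from the 192.168.1.240-192.168.1.250 pool configured above.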
That’s it for the leader. Now, switching to the follower: the steps are exactly the same as for the leader (disable swap, install Docker, kubeadm, kubectl, etc.), but then run the join command you saved previously. This will join the follower to its leader.
kubeadm join 192.168.110.121:6443 --token pcf2jy.v3z05ndfedumgrjs --discovery-token-ca-cert-hash sha256:8e771038080aeec633e2d17b20541ef18522cf47f6f01ab23ac4a1cde8a3f7d9
That’s it! You can see that installing the CNI with Antrea is really easy and works like a charm. Imagine your developers using it while you have central control; pretty cool stuff. Now, from the leader, you can start deploying stuff. For example, I run NSX Service Mesh and Acme demos, but that is for the next blog 🙂
If you like what you see, please share this post. If you can contribute to Antrea, do so, and also star the Antrea project, as this will help promote this new project.
Cheers, feel free to leave your comments
Niran