My Kubernetes Installation Notes Using Antrea

As you may know, I have been heavily focused on Kubernetes in the Enterprise for the past 2 years, mainly with VMware PKS and NSX-T as the CNI.

This is a great combination for your enterprise deployments of Kubernetes as it allows you to address all three main stakeholders in large and medium-sized companies:

  1. Developers/DevOps – these are the users. They need a Kubernetes cluster to use, which really only means they need a Kubernetes API to interact with and its lifecycle-management (LCM) capabilities. All the underlying infrastructure has to be handled automatically, without a single ticket being opened for network, storage, security, or load balancing. These folks also need the freedom to do whatever they want in their Kubernetes cluster, even though it is managed as a service, and they expect it to behave the same as upstream Kubernetes. (OpenShift is its own thing that some like and others less so, as they prefer just straight-up upstream.)
  2. Network team – cares about being able to provide the required level of network services in a consistent operational model. Integration with the physical network using BGP, ECMP, AS-path prepending, etc. is crucial.
  3. Security team – conformance, control, and operational consistency. This team needs to make sure that no one is doing what they are not supposed to, or (god forbid) pushing unauthorized security policies to production.

The user experience with PKS, once it is up and running, is that the folks managing the platform can push a new cluster with a single command (pks create-cluster) while controlling elaborate networking configs using network profiles.

While PKS and NSX-T do achieve those objectives, there are cases where developers just want Kubernetes on their laptops for testing, or in the datacenter with just upstream Kubernetes. Sometimes all you want is a minimally viable solution, especially from a CNI point of view. For these fast and easy deployments NSX-T may be overkill, which is why VMware has created a new open-source CNI project called “Antrea”.

Antrea is a nifty CNI based on Open vSwitch (OVS) instead of iptables, and it is easily deployed using a single “kubectl apply” command.

Today it supports basic network functions and network policies. In the future, VMware is planning to provide a way to manage many “Antreas” using the NSX control plane, so that if you have a ton of developers using it, you can control the security policies and network configuration they deploy. As part of the testing I do with NSX Service Mesh, I decided I needed to try it out. But then I realized I had never deployed Kubernetes that is not PKS and I didn’t know what I was doing 🤨
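To give an idea of the network-policy side: Antrea enforces the standard Kubernetes NetworkPolicy API, so a plain upstream policy is all it takes. A minimal sketch (all names and labels here are made up for illustration) restricting ingress to a set of pods might look like this:

```yaml
# Hypothetical example: only pods labeled app=frontend may reach app=backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Apply it with kubectl apply like any other manifest, and Antrea programs the matching OVS rules on the nodes.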

After trying to deploy a cluster the really manual way for a day with no success, I called my go-to person for anything Kubernetes, Oren Penso (@openso). Oren gave me his raw notes on deploying k8s with kubeadm, which I refined a bit, adding Antrea as the CNI and MetalLB as the load-balancer provider. This post is about sharing those notes. So, with no further ado, here are the steps:

  • I use Ubuntu as my leader and follower OSs.
    (Leader and follower are what I call what were previously called master and node, following the lead of
    Joe Beda @jbeda, no pun intended 😊)
  • The first thing you want to do is disable swap in the OS by commenting out the swap entry in /etc/fstab and running:
sudo swapoff -a
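To make the change persistent across reboots, the /etc/fstab entry needs the comment too; a sed one-liner does it. The pattern is demonstrated below against a sample fstab line (the device path is made up) rather than the real file; on the host you would run it as sudo sed -i '/ swap / s/^/#/' /etc/fstab:

```shell
# Demo of the comment-out pattern against a hypothetical swap entry:
echo '/dev/sda2 none swap sw 0 0' | sed '/ swap / s/^/#/'
# prints: #/dev/sda2 none swap sw 0 0
```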
  • Next, we want to install kubeadm, starting on the leader. Now, there are other ways to deploy Kubernetes, such as using kind or even Cluster API (which I will switch to later this year), but for now kubeadm is the best way for me. One thing to make sure of is that the kubeadm version matches the version of Kubernetes you want to deploy. For example, to deploy the latest, I run the following commands:
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

But since I wanted to deploy Kubernetes 1.15, I had to install kubeadm and kubelet with a matching version, like this:

apt-get install -y kubelet=1.15.5-00 kubeadm=1.15.5-00 kubectl
  • The next step is to deploy Kubernetes itself on the leader. We will run the “kubeadm init” command to do so, with a couple of notes. We want to specify a pod CIDR for our pods’ internal IPs. This relates to the way Kubernetes lets pods communicate internally; these IPs do not conflict with other clusters, but you don’t want the range to overlap with the actual CIDR of the nodes themselves. In my lab the nodes are on 192.168.x.x, so I had to specify a different CIDR. Also, unless you want the latest Kubernetes, you need to specify the version as well; in my case I specified version 1.15, like this:

kubeadm init --kubernetes-version 1.15.5 --pod-network-cidr 10.244.0.0/16 | tee kubeadm-init.out
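One reason for the tee into kubeadm-init.out is that the init output contains the “kubeadm join” command the followers will need later, and it can be fished back out with grep. Sketched below against a made-up sample of that output; the real IP and token come from your own init run:

```shell
# Simulate a saved init log with a hypothetical join line, then pull it out:
printf 'control-plane initialized successfully!\nkubeadm join 192.168.10.10:6443 --token abc123.example\n' > kubeadm-init.out
grep 'kubeadm join' kubeadm-init.out
# prints: kubeadm join 192.168.10.10:6443 --token abc123.example
rm kubeadm-init.out
```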

Istio, mTLS and the OSI layer

I have been playing a lot with Istio and recently tested mTLS encryption. The test, which I describe in this post, really materialized the OSI layers in front of my eyes. It is always interesting how new stuff can dust off your old basic knowledge.

The entire concept of service mesh and Istio is exciting and revolutionary in my view… but just like any new groundbreaking tech, it takes a few cycles to realize how it manifests beyond the papers, blogs, and theory, at least for me. So, as I usually do, I share my experiences on this blog and in my sessions with others, in the thought that if I can help even one person understand it better, I have achieved my goal.

Only I have the solution! and it is…

We live in a truly hyped era. Kubernetes, Docker, Istio, serverless, PaaS, CaaS, FaaS: you name the buzzword. These words draw all the attention of the dev/IT worlds; interestingly enough, only a small percentage of organizations actually employ these technologies today, in production or even at all.

Like any new tech, there are barriers of knowledge and investment to get in, and weighing the cost of moving to these platforms against the pain it solves is hard to quantify. For each one of these trends, and more that I may have forgotten, there is a group of followers who see these solutions as the be-all-end-all answer for every problem conceivable:

Going For The Double

I can’t believe I’m writing this post: I have achieved a second VCDX certification (or, as it’s referred to in the community, a 2X 🙂). This time the design was for cloud (CMA), and it came just one year and some change after I became a VCDX-DCV.

Just being a VCDX was a long time career aspiration of mine and I am so grateful I was able to work on the second one.

Short disclaimer – since I am a VCDX panelist, I am forbidden from mentoring candidates through their VCDX process or giving out advice on the design itself, so that I won’t give anyone an unfair advantage. I’ll keep this post about my personal experience of achieving the double and keep the advice away from the design process itself.

For this round I again worked with my partner from the first design, Mr. Agustin Malanco (@agmalanco), and we designed a vRealize Automation (vRA) solution on top of the previous DCV design.

When we created the DCV design (which was fictitious, just like this one) we intentionally designed it with cloud as the next phase in mind. This is actually a recommended approach discussed in the VCDX workshops as well: if you can create the first one while planning ahead for the second, do it.

That doesn’t mean it wasn’t a lot of work, hell yeah it was!

We spent nights and weekends for about 4 months, working out the design decisions, figuring out our process and installing the system to validate it and create the install guide.

So, here are a few words of advice for anyone going for the double:

  • If possible, when you are designing the first VCDX, plan ahead and build the foundation for the second and perhaps even the third.
  • In most cases, when you are going for a second VCDX you only need to submit a design; there is no defense, though there might be a phone interview if the reviewer wants some clarifications. That means your design will likely be reviewed very carefully. Make sure your documentation is top notch. Remember, there is no second chance to defend it; after submission, that’s it. Note:
    • If your first VCDX was for a non-vSphere NSX design and you submit a CMA/DCV/DT design as your second, you will need to defend again.
    • For top-notch documentation, refer to a previous VCDX article I wrote about the subject here.
  • If this is a fictitious design, or partially fictitious, validate the design by installing it in the lab.
  • This advice is true for the second VCDX as well as the first: if you can, work in a team. It worked very well for me and Agustin.