Securing A Multi-Cloud App With Service Mesh

Multi-cloud is becoming a reality for many organizations, but what is multi-cloud? Multi-cloud is a very broad term that covers any organization using more than one cloud, whether or not it runs apps across those clouds. For example, if one BU in my org is using GKE and another BU is using AWS, my business is already operating in a multi-cloud environment, and that environment needs to be operated and secured.

So we have defined multi-cloud; what are hybrid cloud and multi-cloud apps, then? The term "hybrid cloud" emerged very close to the appearance of public and private clouds. Hybrid, just like public and private, is about location: how do I connect the public and the private clouds, mostly from an infrastructure point of view? Hybrid cloud seems more and more suited to "heritage workload" cloud migrations, where the "hybridity" is about connecting distinct pieces of infrastructure together so you can either move workloads to the cloud or burst into it. Multi-cloud apps, which this article focuses on, are applications that run across multiple clouds.

My Kubernetes Installation Notes Using Antrea

As you may know, I have been heavily focused on Kubernetes in the Enterprise for the past 2 years, mainly with VMware PKS and NSX-T as the CNI.

This is a great combination for your enterprise deployments of Kubernetes as it allows you to address all three main stakeholders in large and medium-sized companies:

  1. Developers/DevOps – these are the users. They need a Kubernetes cluster to use, which really just means they need a Kubernetes API to interact with and its lifecycle management (LCM) capabilities. All the underlying infrastructure has to be handled automatically, without a single ticket being opened for network, storage, security, or load balancing. These folks also need the freedom to do whatever they want in their Kubernetes cluster, even though it is managed as a service, and they want it to behave the same as upstream Kubernetes. (OpenShift is its own thing, which some like and others less so, as they prefer straight-up upstream.)
  2. Network team – cares about being able to provide the required level of network services in a consistent operational model. Integration with the physical network using BGP, ECMP, path prepending, etc. is crucial.
  3. Security team – conformance, control, and operational consistency. This team needs to make sure that no one is doing what they are not supposed to, or (God forbid) pushing unauthorized security policies to production.

The user experience with PKS, once it is up and running, is that the folks managing the platform can push a new cluster with a single command (pks create-cluster) while still being able to control elaborate networking configs using network profiles.

While PKS and NSX-T do achieve those objectives, there are cases where developers just want Kubernetes on their laptops for testing, or plain upstream Kubernetes in the datacenter. Sometimes all you want is a minimally viable solution, especially from a CNI point of view. For these fast and easy deployments, NSX-T may be overkill, which is why VMware has created a new open-source CNI project called "Antrea".

Antrea is a nifty CNI based on OVS (Open vSwitch) instead of iptables, and it is easily deployed with a single "kubectl apply" command.

Today it supports basic network functions and network policies. In the future, VMware is planning to provide a way to manage many "Antreas" using the NSX control plane, so that if you have a ton of developers using it, you can control the security policies and network configuration they deploy. As part of my testing with NSX Service Mesh, I decided I needed to try it out. But then I realized that I had never deployed a Kubernetes cluster that isn't PKS, and I didn't know what I was doing 🤨
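Since Antrea implements the standard Kubernetes NetworkPolicy API, the policies it enforces are ordinary Kubernetes objects. As a minimal sketch (the namespace and policy names are made up for illustration), a deny-all-ingress policy looks like this:

```shell
# Write a minimal deny-all-ingress NetworkPolicy to a file; Antrea, like any
# conformant CNI, enforces this standard Kubernetes API object. The "demo"
# namespace and the policy name are illustrative.
cat > /tmp/default-deny.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}     # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress         # no ingress rules listed, so all ingress is denied
EOF
cat /tmp/default-deny.yaml
# Against a live cluster you would then run:
#   kubectl apply -f /tmp/default-deny.yaml
```

Because it is a standard object, the same manifest works unchanged if you later move the cluster to a different policy-capable CNI.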

After trying to deploy a cluster the really manual way for a day with no success, I called my go-to person for anything Kubernetes, Oren Penso (@openso). Oren gave me his raw notes on deploying k8s with kubeadm, which I refined a bit, adding Antrea as the CNI and MetalLB as the load balancer. This post is about sharing those notes. So, with no further ado, here are the steps:

  • I use Ubuntu as my leader and follower OSs.
    ("Leader" and "follower" is what I call what were previously known as master and node,
    following the lead of Joe Beda @jbeda, no pun intended 😊)
  • The first thing you want to do is disable swap in the OS by commenting out the swap entry in /etc/fstab and running:
sudo swapoff -a
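As a sketch, the fstab edit can be done non-interactively too; the sample file below stands in for a real /etc/fstab so the sed expression can be verified before touching the real thing (the entries are illustrative):

```shell
# Comment out any swap entry so swap stays off after a reboot. Shown against
# a sample copy of fstab; point it at /etc/fstab once you trust the edit.
printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swapfile none swap sw 0 0' > /tmp/fstab.sample
sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

The kubelet refuses to start with swap enabled (unless explicitly overridden), which is why both the runtime swapoff and the persistent fstab change matter.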
  • Next, we want to install kubeadm, starting on the leader. There are other ways to deploy Kubernetes, such as kind or even Cluster API (which I will switch to later this year), but for now kubeadm is the best way for me. One thing to make sure of is that the kubeadm version matches the version of Kubernetes you want to deploy. For example, to deploy the latest version, I run the following commands:
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

But since I wanted to deploy Kubernetes 1.15, I had to install kubelet and kubeadm with a matching version, like this:

apt-get install -y kubelet=1.15.5-00 kubeadm=1.15.5-00 kubectl
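A small sketch of how the pinning fits together: derive all three package pins from the one target Kubernetes version, then hold the packages so a routine upgrade can't move the node (the version string is illustrative):

```shell
# Build matching package pins for a target Kubernetes version; "-00" is the
# Debian package revision used by the upstream apt repository.
K8S_VERSION="1.15.5"
PKGS="kubelet=${K8S_VERSION}-00 kubeadm=${K8S_VERSION}-00 kubectl=${K8S_VERSION}-00"
echo "apt-get install -y ${PKGS}"
# After installing, hold the packages so apt-get upgrade leaves them alone:
#   sudo apt-mark hold kubelet kubeadm kubectl
```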
  • The next step is to deploy Kubernetes itself on the leader. We will run the "kubeadm init" command to do so, with a couple of notes. We want to specify a pod CIDR for our pods' internal IPs. This relates to the way Kubernetes lets pods communicate internally; these IPs do not conflict with other clusters, but you don't want the range to overlap with the actual CIDR of the nodes themselves. In my lab the nodes are on 192.168.x.x, so I had to specify a different CIDR (a non-overlapping range such as 10.244.0.0/16, for example). Also, unless you want the latest Kubernetes, you need to specify the version; in my case I specified 1.15, like this:

kubeadm init --kubernetes-version 1.15.5 --pod-network-cidr=<your-pod-cidr> | tee kubeadm-init.out
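Tee-ing the output to kubeadm-init.out pays off later: the end of that file contains the "kubeadm join" command the followers need. A sketch of recovering it (the sample output and token below are illustrative, not real values):

```shell
# kubeadm init prints a join command for worker nodes at the end of its
# output; since we tee'd it to kubeadm-init.out we can pull it back out
# later. A sample file stands in for the real output here.
cat > /tmp/kubeadm-init.out <<'EOF'
Your Kubernetes control-plane has initialized successfully!
Then you can join any number of worker nodes by running the following on each:
kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234abcd
EOF
grep -A1 '^kubeadm join' /tmp/kubeadm-init.out
```

Running the printed join command on each follower (as root) is what actually brings the nodes into the cluster.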

My session recordings from VMworld US 2016

It is so nice that VMworld has released the session recordings from VMworld US publicly for everyone; thanks to William Lam for publishing all the direct URLs here.

As for the sessions themselves, we had a nice turnout of about 220 folks in each session, and the reviews were great.

Here are the recordings:

VIRT7575 – Architecting NSX with Business Critical Applications for Security, Automation and Business Continuity

VIRT7654 – SQL Server on vSphere: A Panel with Some of the World’s Most Renowned Experts


Have fun,


My VMworld in 2016

This is it: this year I am finally taking a very active role at VMworld after a few years of only being an attendee (except for one session at VMworld Europe in 2009).

For this year's VMworld I am going to take on the role of booth captain for the Virtualizing Apps track booth (YES!). I will be working with a staff of four: Sudhir Balasubramanian, Vas Mitra, Agustin Malanco the man (Twitter – @agmalanco), and Ryan DaWaele. Such a great crew!

We are planning two stations this year. Station #1 is going to run the traditional demos for business critical applications with vSphere: features like DRS, vMotion, etc., and, new this year, vVols and vRA.

Station #2 is new this year: a second station solely focused on business critical apps with NSX demos. We are already working really hard on developing these demos, so I don't want to spoil it, but it is going to be epic! Really cool stuff around Oracle RAC, SQL, SAP, etc., with really cool NSX demos. Expect to be wowed.

That's not all: I have two sessions this year:

  • Architecting NSX with Business Critical Applications for Security, Automation and Business Continuity [VIRT7575] – a session covering the business critical apps use cases with NSX, where my colleague Sudhir Balasubramanian and I will present the use cases to app owners and networking folks who are interested in applying NSX goodness to their apps.
  • SQL Server on vSphere: A Panel with Some of the World’s Most Renowned Experts [VIRT7654] – I will be facilitating a panel of world-renowned SQL Server experts on anything SQL on VMware. The panelists are:
    Denny Cherry, Twitter – @mrdenny, Principal Consultant, Denny Cherry & Associates Consulting
    Allan Hirt, Twitter – @SQLHA, Managing Partner, SQLHA LLC
    David Klee, Twitter – @kleegeek, Founder, Heraflux Technologies
    Thomas LaRock, Twitter – @SQLRockstar, Head Geek, SolarWinds

    VMware NSX Question – Can You Figure it Out?

    I wrote a blog post on the official VMware blog about a demo I recorded called “Dynamically enforcing Security On a Hot Cloned SQL Server With VMware NSX“.

    A bit of a long title, but it captures the essence of the demo perfectly. You can also watch the demo here:

    I got a question from a colleague of mine who has a very keen eye:

    “I just saw the great video you made, at 0:50 second of the demo we can see the rules for the prod app

    What is the meaning of rule 6? If the source is the datacenter and is broader than the App Server in rule 5, and the rule allows for ANY service, doesn’t it make rule 5 irrelevant?”

    This is a great observation by Manuel, with a very simple explanation that perfectly demonstrates the power of VMware NSX. Can you figure out the answer?


    Rule 6 makes sense only if you know your NSX 🙂



    IOPS reservations in SIOC 6, what’s the deal?

    Storage I/O Control has been available since vSphere 4.1. If you don’t know what SIOC is, you can read about it on many blogs out there; my personal favorite for anything storage is Cormac Hogan‘s blog, and here is a link to Cormac’s post about SIOC.

    Some of you might have read about the new SIOC feature in vSphere 6 called IOPS reservations.

    In case you didn’t, let’s quickly review it. In version 5.5 VMware introduced a new I/O scheduler called mClock; this scheduler is more efficient and also has the capability to set I/O reservations on VMDKs. In vSphere 6 VMware added the ability to set those reservations at the VMDK level, not through the Web Client but by setting the “reservation” property on the VMDK; see this post by William Lam, which has a nice PowerCLI script to do this for you.

    Taking my career to the next level

    If you’re reading this, it’s probably because you are wondering where I am heading, so I’ll start with that: I recently accepted a new position with a bigger impact on VMware, its customers, and the IT community.

    Starting October 1st, I will become a Staff Solutions Architect for Microsoft enterprise applications architecture at VMware’s Global Center of Excellence, focusing mainly on MS SQL running on VMware’s platforms.

    Why, you may ask? Considering my current job is indeed wonderful, it is a fair question. Currently I am working with the best SE specialist team in the world. My team is responsible for delivering pre-sales engagements about the most strategic solutions we have at VMware to our most strategic customers. As a cloud automation specialist I learned a lot, engaged in the most interesting conversations with customers about their cloud needs and wants, conducted POCs, got to work for the best manager I have had the privilege to work for in my career, and most importantly built strong relationships and friendships.

    So again, why? First, as an SE it’s very hard for me to utilize my “VCDX skills”: there’s just not enough architecture work. Don’t get me wrong, I LOVE being an SE; working with customers and helping them understand our solutions is something I love doing very much. But there are other skills I want to develop to get to the next level in my career. If you haven’t done so, read Duncan Epping’s “How I get to the next level” post. It helped me sort things out, and especially got me to the understanding that I now need to go out of my comfort zone and take on a position in which I will do new things that will help me acquire the skills I need to advance my career.

    And I’m not a stranger to Microsoft solutions. I started my career as a Microsoft expert; I managed and architected deployments of Exchange (since 5.5), Windows (NT4), and Terminal Services/Citrix, and I am an MCSE. When I started working with VMware tech in 2001, even though it became my main “thing”, I never stopped working with MS technologies, mostly virtualizing them of course 🙂. Also, for a year and a half as a storage and performance expert at VMware, I helped many of our customers’ DBAs understand the methodology of virtualizing business critical applications, so this is not something entirely new to me.

    The position I am taking is a global one. I will contribute to writing official documentation and present at conferences in front of bigger audiences, and I will get to combine my VMware passion with Microsoft solutions architecture, extending it beyond vSphere into VMware’s newest platforms like vCloud Air and EVO.

    You may have noticed I haven’t written in this blog for a while, mainly because Agustin and I worked on our second VCDX (cloud), but also because I just didn’t have much to say. This is going to change.



    VMware openness for humanity

    Last week I attended VMware’s worldwide sales kickoff event, where all of VMware’s sales organization gathered in New Orleans to learn about the vision for the future, celebrate the wins of the past ($6 billion in revenue this year!), and network with our peers.

    What a great event it was. It started with a VMware volunteer event for Habitat for Humanity, where 300 of our finest went to build houses for people in need.

    Hey, I may be drinking my company’s Kool-Aid, but as @virtualJad says, “it’s a great Kool-Aid!” I don’t know any other company that spends so much time and energy on charity and giving-back-to-the-community initiatives, where people involved in volunteer work are praised, and where each employee can take 40 hours a year to volunteer for community work, and there is a lot more. This is part of the company’s DNA and I am very proud to be part of it.

    With that in mind, there were quite a few major announcements at PEX this week:

    • vSphere 6.0 with vVols, SMP-FT, cross-vCenter vMotion, long-distance vMotion, and more. Read about it here on Adam Eckerle’s (AKA @eck79) blog.
    • VMware Integrated OpenStack (VIO) – VMware’s own OpenStack distribution, free for ENT+ customers! Read about it here.
    • VSAN 6.0 – all-flash support, new SPBM capabilities, multi-rack support, and more, which you can read about here.


    These announcements are great and show the extent of the effort being put into integrating the solutions into a unified stack and embracing new trends in the market instead of just trying to kill them. VMware does not see OpenStack, off-prem cloud, or containers as threats, but rather as opportunities to help our customers achieve their goals (read a small preview of our vision for containers in this blog post by Kit Colbert).

    And that brings me to the statement of “openness” I titled this post with: VMware is the most OPEN company out there, period.

    When it comes down to promoting a culture of giving back to the community and volunteer work, and when it comes down to embracing new technologies that might seem like a threat at first but are also a great opportunity to solve real-world problems for our customers, VMware is not afraid of making bold decisions. VMware is not afraid to dare.



    vRA tidbit – AWS provisioning and the key pair conundrum

    One of the main advantages of vRealize Automation in the cloud management space is that it provides customers with choices. This is true in many aspects of the solution, like where to consume services from, how to deploy them, how the forms will look, etc., but in this post I want to talk about the creation of AWS key pairs.

    There are many solutions out there that provide an interface for provisioning instances to AWS; some have more capabilities than others. Without getting into a full feature-by-feature comparison, I will just say that vRA is one of the more comprehensive solutions out there, with many capabilities required for cloud management, such as a self-service portal, multi-cloud/vendor provisioning, automation and orchestration capabilities, and much more.

    One of the choices vRA gives cloud admins is how to create AWS key pairs. In a nutshell, a key pair is the set of credentials used to access an instance. Many of the CMP solutions out there will allow the creation of either a global key pair or a key pair per deployed instance.

    Having a global key pair is not granular enough for most of our customers’ requirements, and it adds management overhead, especially around billing and security. The tools that create a key pair for each instance, on the other hand, are probably too granular and create a management and maintenance nightmare: the EC2 management console gets flooded with key pairs, and this can also pose a security concern, as there are potentially thousands of issued credentials, which is quite a mess.


    vRA has a more elegant solution that also provides choice: a key pair generated for each business group, a key pair per instance, or a global key pair per reservation. In most cases it will be suitable to have just one key pair per business group, which will be secure enough and will not clutter the environment with hundreds or thousands of key pairs; but if need be, cloud admins can decide to provision certain instances with their own key pairs, or set a key pair per reservation (the scope of the reservation is decided by the admin). This might not seem like a big deal, but for those who work with AWS it is important.

    When it comes down to cloud provisioning, where instances are built and destroyed automatically, constantly, and on demand, having that choice can make a real difference in how well your CMP solution fits your requirements.




    Quick and easy vRA ASD demo – send email

    Let’s say you want to test vRealize Automation’s Advanced Service Designer (ASD), which allows you to create custom services based on Orchestrator. Sending an email from the vRA catalog is a quick and easy way to try it out.

    The following steps will guide you through creating a service that sends an email using the mail notification workflow in the embedded vRO.

    Obviously you need a fully configured vRA environment, including vRO (formerly vCO). To demonstrate the email being sent and received I used FakeSMTP, a small Java application that acts as a mail server and displays incoming emails; a pretty neat solution for demos, check it out.

    Let’s get to it; first, let’s configure the vRO workflow.

    • log in to the vRO client
    • In the “Design” pane under Library/Mail right click on the workflow named “Send notification” and click “Duplicate Workflow”
    • Call the new workflow something like “Send Email” and place it in a folder of your choice; I created a “Cloud-Abstract” folder in my lab
    • What we want to do now is hard-code a few parameters, such as SMTP host, SMTP port, and credentials, because we don’t want the system to prompt the user for these details. Browse to the new workflow you created and edit it