There are many ways to kick the tires of Kubernetes, or to set it up for production. I have written previously about why it is of interest to developers, and also about Jenkins X (an opinionated way to do continuous integration/continuous delivery and apps on Kubernetes), so it is time to look at avoiding installing it altogether.
I say avoiding it, as there is an arms race between the big cloud platforms to make Kubernetes a turnkey and automatically managed service, from very small to very large scale, so why not make use of them?
One of my mottos is:
(Michael speaking at DockerCon Barcelona, 2015)
So, in this case, that means waiting for the cloud platforms to provide great support for Kubernetes, which is happening rapidly (EKS from Amazon, announced in November 2017, made this screamingly obvious to everyone). You could set it up yourself, on VMs or locally, but if you can be lazy, why not? It seems the longer you wait, the better these cloud platforms get (so in this case, laziness pays off both short- and long-term?).
Shameless cross-promotion and product pitch: Codeship has a great guide on Kubernetes managed services.
There are excellent ways to install Kubernetes if you really want to. On desktops, there is Minikube, which is part of the Kubernetes project. There is also Kops, for running production workloads on clouds that don’t yet have full native support. The Docker desktop app also has built-in Kubernetes support.
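If you do want a purely local cluster to experiment with, a minimal sketch with Minikube looks something like this (the memory and CPU sizes are just illustrative):

```bash
# Start a single-node local cluster with a modest slice of the laptop's resources.
minikube start --memory 4096 --cpus 2

# Minikube points kubectl at itself; check the node is ready.
kubectl get nodes
```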
Let’s look at development: you want a development cluster available to kick the tires. I have a 2015-era MacBook Pro - the last of the “good models.” It has 15G of RAM. It can run things like Minikube or Docker desktop just fine, but it can feel a little warm, or get a little stressed (you may notice the hyperkit process chewing up lots of CPU or RAM). As soon as you start running several apps, you will find you want just a bit more memory. Besides, if you are like me, you are running dev tools, email apps, chat apps (so many chat apps), browsers and Slack! That 15G doesn’t go far. It would be nice to have an upgrade.
What would it cost to get me, say, a 32G machine with some headroom? An iMac Pro (https://www.apple.com/au/shop/buy-mac/imac-pro) is $5,600 AUS. Sure, it’s expensive, but I’m worth it.
What about an alternative? Google have some great computers right here in data centres in Sydney (where I live), only milliseconds away. What would that cost for a non-trivial cluster?
Three n1-standard-2 instances give me 22G of RAM and 6 CPUs, for about $145 AUS a month.
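For concreteness, a minimal sketch of creating that sort of cluster on GKE with the gcloud CLI (the cluster name and Sydney zone are just illustrative):

```bash
# Create a three-node cluster of n1-standard-2 instances.
gcloud container clusters create dev-cluster \
  --zone australia-southeast1-a \
  --machine-type n1-standard-2 \
  --num-nodes 3

# Fetch credentials so kubectl talks to the new cluster, then sanity-check it.
gcloud container clusters get-credentials dev-cluster --zone australia-southeast1-a
kubectl get nodes
```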
The fancy Mac, assuming I write it off over three years, works out at $156 a month. However, with the Google Cloud computers I get more CPUs (and distributed ones at that), more RAM, and an AMAZINGLY well-connected network, for less, without even being clever.
But I like being clever. With simple scripting, or something like Jenkins X where launching a cluster is a single `jx create cluster gke|aws` command you can run while you fetch your coffee, you could set it up and tear it down every day. That means you are really only paying for it about 25% of the time, which drops the cost further. Google also have preemptible instances, which are around 80% off, and AWS have spot instances (similar, but different…), reducing things even more.
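A sketch of what that daily cycle might look like with plain gcloud, reusing the same cluster shape as above and assuming preemptible nodes are acceptable for development:

```bash
# Morning: bring the cluster up on preemptible nodes (much cheaper, but they can be reclaimed).
gcloud container clusters create dev-cluster \
  --zone australia-southeast1-a \
  --machine-type n1-standard-2 \
  --num-nodes 3 \
  --preemptible

# Evening: tear it all down and stop paying for the nodes entirely.
gcloud container clusters delete dev-cluster --zone australia-southeast1-a --quiet
```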
Putting this as a chart (GKE = Google Kubernetes Engine, work day = running it for 10 hours each work day):
You can see that, combining these “clever” things, it gets very, very cheap, very, very quickly. More to the point of this article, with GKE I don’t even have to install Kubernetes; it comes as part of the service. If I picked Amazon, EKS with the upcoming Fargate platform takes it even a little further: I don’t even pick the number of instances, just the CPU/memory constraints, and it takes care of the rest.
It is looking better and better to not install Kubernetes, even for development purposes (let alone production). These cloud platforms will also only get cheaper, faster, and lower latency (there is likely a data centre coming to a town near you!).
I think this is kind of profound, potentially revolutionary for the developer experience. The production-side benefits of cloud are obvious, but the same applies at development time. As powerful as it is to be able to run a replica of things on a local machine, having more production-like tools available at development time (in Kubernetes, namespaces are cheap, and even new clusters are relatively cheap) will just accelerate development (and save you pocket money).
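For instance, carving out a personal, disposable namespace on a shared development cluster takes seconds; a minimal sketch (the namespace, app, and image names are just illustrative):

```bash
# A throwaway namespace per developer or per feature branch.
kubectl create namespace dev-michael

# Deploy and expose a test app inside it, without touching anyone else's work.
kubectl --namespace dev-michael create deployment hello --image=nginx
kubectl --namespace dev-michael expose deployment hello --port=80

# Clean the whole thing up in one go when finished.
kubectl delete namespace dev-michael
```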
In a drive-by discussion with James Strachan (about Jenkins X) the other week he wrote:
“So ideally I'd like to be able to edit code on my laptop and have Jenkins X automatically redeploy the app to a Preview namespace for me ASAP - while in parallel running all the tests to check I've not broken anything. Let's reuse the cloud to make developers faster at all stages of development. I love that PRs generate Preview environments and continuous delivery pipelines releasing after merging to master. But I think we could do better to reuse the same technology, containers, platform, images + pipelines for development too - not just from 'create PR' onwards."
"e.g. I'd like to move us more away from running stuff on our laptops that’s never the same as the Docker images we use for continuous integration/continuous delivery - towards reusing the exact same Docker images, build pods, tool versions, operating system, disk, network, platform at all stages of development. This would reduce lots of waste when we think there are issues - but really it’s just the operating system / local install / tool versions on our laptops, rather than the one we're gonna run production on.”
To me this is the profound shift. It happened for servers and ops some time ago, so why not the development side of things? It’s time to move things off our desktops or laptops. It also means, phew, you don’t have to install Kubernetes!
AWS Fargate and other clouds
One of the things I really like about the promise of Fargate and EKS (I say promise, as at the time of writing it is not Generally Available) is exactly this:
“With Fargate launch type, all you have to do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application.”
Well, not exactly. But the idea is that you think only in terms of apps and constraints, not servers. It seems a bit backwards in this day and age to be thinking about instances and sizes and the discreteness of servers. Instead, it should be about capacity and services.
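In Kubernetes terms, this is already how workloads are described: you declare what the app needs and let the platform find room for it. A minimal sketch (the pod name, image, and resource figures are all illustrative):

```bash
# Declare what the app needs, not which instance it runs on.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
EOF
```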
“Autoscaling” of Kubernetes nodes is well supported in GKE (and AKS from Microsoft will soon support it too). Amazon’s Auto Scaling is also well proven, and it looks like it will work well for the nascent EKS. Autoscaling helps keep costs under control and allows computation to meet demand.
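On GKE, node autoscaling is a couple of flags; for example, a sketch of turning it on for an existing cluster’s default node pool (the names and bounds are illustrative):

```bash
# Let GKE add and remove nodes within the given bounds as pod demand changes.
gcloud container clusters update dev-cluster \
  --zone australia-southeast1-a \
  --node-pool default-pool \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 5
```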
Fargate from Amazon aims to take this further, making scaling more like the serverless model of computation, where the cost of things “at rest” is closer to zero. It will be fascinating to see the uptake of EKS + Fargate and whether it validates some of these concepts.
I didn’t want to get into a shootout between the different platforms’ implementations of Kubernetes (that is a bigger subject for another time), but I do think it is worth keeping an eye on Fargate.
So, thankfully, laziness pays off both now and in the future.
Michael Neale is a development manager at CloudBees. Michael is an open source polyglot developer. For his sins, he spent time at JBoss, working on the Drools project and then Red Hat, where he continually caused trouble on various projects. Post-Red Hat, Michael was one of the co-founders of CloudBees, seeing great opportunity in cloud technologies. His interest in the cloud is natural, as he is attracted to disruptive technologies. Michael lives and works in the Blue Mountains, just outside Sydney, Australia. He has no hobbies, as his hobbies tend to turn into work anyway. Mic’s motto: “Find something you love doing, and then you will never work a day in your life again.” Follow Michael on Twitter.