Serverless and Kubernetes: Serverless Isn't Processless

Written by: Michael Neale


Hot on the heels of the rise of microservices is this new hot thing called serverless (sometimes called “Functions as a Service” or FaaS). Most would credit AWS Lambda with popularising this way of delivering server-side apps, but there is more to its history.

What makes serverless special:

  1. Zero or very low cost “at rest”

  2. No managing servers

  3. Composing your app in terms of “functions”

If we look at the first two, these have been around for a long time: Google announced Google App Engine 10 years ago this year, so an alternative title for this blog could have been Serverless is Now 10 Years Old. Recruiters could ask for people with 10 years of serverless experience! Not long after Google App Engine appeared, Heroku was on the scene providing similar benefits. These platforms kept costs low (near zero, in the case of Google App Engine) when there was no web traffic, and allowed things to burst as load increased.

The nuance seems to be around point three: whilst the preceding platforms were web/HTTP-centric, serverless positioned itself as more of an event handler. These aren’t functions in the pure mathematical sense, but more like very small (in theory) units of functionality.

Looking at the sequence of operations in a typical serverless setup:

  1. You have some kind of event source - this could be an HTTP API gateway (in the case of AWS Lambda, for example), a queue event, or a message - anything, really.

  2. The scheduler finds a place to run the “function.”

  3. The function itself is stored (often as a binary/zip) in artifact storage.

  4. Something in the magical pool of executors (which may be kept warm, or started cold) wakes up, takes the binary/artifacts and runs them.

Of course, there is more - logs, errors, monitoring, etc. - but you get the idea.
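For concreteness, one of these “functions” is usually little more than a handler with an event/context signature. A minimal Lambda-style Python handler might look like the sketch below (the payload field and the API-gateway-shaped return value are illustrative assumptions, not anyone’s canonical example):

# handler.py - a minimal Lambda-style event handler (illustrative sketch)
def handler(event, context):
    # 'event' carries the payload from the event source (API gateway, queue, message...)
    # 'context' carries invocation metadata (request id, remaining time, etc.)
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "hello, " + name}

The unit you write, version and deploy is that handler - not a server, and not even a container image.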

On some platforms, like AWS Lambda, you barely have to pay for any of this, other than when the executor pool running your functions (4) is doing work (there may be some cost associated with HTTP (1), IPs, or storage of the binaries your function needs, but it would be relatively small).

Benefits

Obviously, cost and inherent horizontal scaling under load are important to serverless, but another factor is that, for the most part, each invocation starts fresh. This makes point two (the server-less aspect) even stronger: as there are no servers running that you are responsible for, managing critical patches at the lower layers really isn’t your problem - you have no control there. However, any non-trivial serverless app will make use of many libraries and frameworks, and you clearly are responsible for health and security at that layer.

Scaling serverless functions horizontally is much finer-grained and faster than scaling microservices or applications in general, so they have the best chance of making efficient use of resources when load fluctuates a lot.

Kubernetes and serverless

A common pattern I hear is people making use of serverless (Lambda) where they can, for efficiency, and Kubernetes (e.g. EKS on Amazon, when it is available) for all the rest.

For people using Kubernetes for their app development or microservices (perhaps via one of the popular out-of-the-box Kubernetes cloud platforms), there are ways to get a serverless experience right on the Kubernetes platform. The benefits are everything stated above, plus sensible use of cluster resources: when serverless functions aren’t running, resources are freed up to serve the scaling needs of other apps or services.

Given the health of the Kubernetes community, there are at least two great solutions for this. Alarmingly, neither of them has a nautical theme. Kubeless is the one I want to give a tour of.

Installation is near-trivial if you have kubectl set up and connected to your cluster (you only need to do this once):

kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml

This will set up some controllers and machinery to enable the serverless functionality.
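Here $RELEASE is just a Kubeless release tag. If you want to follow along, the full incantation at the time of writing looked roughly like the sketch below; the specific version and the kubeless namespace are assumptions on my part, so check the Kubeless releases page and docs:

# illustrative only - pick the current version from the Kubeless releases page
export RELEASE=v1.0.0
kubectl create namespace kubeless
kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml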

Once installed, “functions” (written in a language like JavaScript or Python, for example) can be deployed via the kubeless command line utility (which makes it much easier to play with). A function can then be invoked by a pub/sub event, over HTTP, or directly on the command line:

kubeless function call get-python --data '{"echo": "echo echo"}'
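For context, the get-python function being called above would have been deployed from a handler along these lines - the file name, runtime value and handler signature here are illustrative, and the signature in particular has changed across Kubeless versions:

# test.py - a trivial echo handler for Kubeless (illustrative)
def hello(event, context):
    # Kubeless hands the request payload to the function as event['data']
    return event['data']

deployed with something like:

kubeless function deploy get-python --runtime python3.6 --from-file test.py --handler test.hello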

Each function, once deployed, is a custom resource (defined by a CustomResourceDefinition), and when invoked will launch runtimes on demand, providing the function (which is stored when you deploy it) to each runtime once it is ready to serve. Kubeless uses the Kubernetes HorizontalPodAutoscaler to make sure the functions are able to serve the load.
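Because functions are just custom resources, you can also poke at them with plain kubectl (this assumes the kubeless.io CRDs were installed as above):

kubectl get functions
kubectl describe function get-python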

Kubeless sticks very close to AWS Lambda, aiming to be CLI-compatible. Kubeless has awesome docs, articles and more - check it out.

Serverless is not processless

With serverless patterns, we may have gotten away from thinking about operating systems, or even containers, but all the rest of the software development lifecycle still applies.

What we don’t want is to throw all of that away. In all the excitement, people can forget there is a lot more to software than just deploying it: testing it, for one, plus version control, workflows, promotion, rollback, artifacts and more.

Thankfully, the Serverless Framework (https://serverless.com/) provides some of this structure. As an additional benefit, in theory it provides some portability: the framework supports functions that run on Google Cloud Functions and AWS Lambda, and it also has direct Kubeless support.
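To give a rough idea of what that looks like, a serverless.yml targeting Kubeless is structured roughly like the sketch below (per the serverless-kubeless plugin; the service name, runtime and handler are made-up examples, and details vary by version):

service: hello-kubeless
provider:
  name: kubeless
  runtime: python3.6
plugins:
  - serverless-kubeless
functions:
  hello:
    handler: handler.hello

In theory, swapping the provider block is most of what it takes to point the same function definitions at Lambda or Google Cloud Functions instead.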

Serverless Kubernetes

You will note that in the serverless world, no one talks about containers. They are abstracted behind the scenes in the “runtimes.” Some tools allow you to use containers as functions, but a container is typically considered heavier-weight, as there would be more layers of an image to pull when invoking it on demand. By thinking in terms of functions, you get the benefit of common runtimes, which means launch times can be milliseconds, even if containers are used under the covers.

I have previously mentioned the promise of AWS Fargate. I say promise, as it is still early days, but it has the potential to bring the first benefit of serverless (low or no cost at rest) to Kubernetes itself. This will make serverless patterns more attractive on Kubernetes over time, as you can choose what is a microservice, what is a function, and so on. To me this looks like a coming convergence of containers (which are an implementation concern) with functions as a service.

It is worth noting that serverless support is on the roadmap for Jenkins X.

Michael Neale is a development manager at CloudBees. Michael is an open source polyglot developer. For his sins, he spent time at JBoss, working on the Drools project and then Red Hat, where he continually caused trouble on various projects. Post-Red Hat, Michael was one of the co-founders of CloudBees, seeing great opportunity in cloud technologies. His interest in the cloud is natural, as he is attracted to disruptive technologies. Michael lives and works in the Blue Mountains, just outside Sydney, Australia. He has no hobbies, as his hobbies tend to turn into work anyway. Mic’s motto: “Find something you love doing, and then you will never work a day in your life again.” Follow Michael on Twitter.
