AWS recently rolled out EKS, and we’re excited to help you get started using it as part of your CI/CD process with Codeship.
What is EKS, though? In short, it's AWS's managed Kubernetes platform. If you're a little tired of hearing terms like "managed Kubernetes platform" lately, let's take a minute to unpack what that actually means.
Defining EKS
EKS lets you run Kubernetes without directly managing the underlying machines in your account, and without handling the Kubernetes setup itself. Whether you need more machines, fewer machines, different types of resources, or changes to your cluster configuration, EKS handles the operational work that may currently be making Kubernetes expensive in terms of your time (or even stopping you from running Kubernetes in production altogether).
It's not the first or only managed Kubernetes platform, but it is perhaps the most comprehensive: it's native to the existing AWS ecosystem, makes full use of your existing accounts and permissions, and scales across AWS availability zones for maximum availability.
Because AWS has a great EKS setup guide and a very helpful blog post, we won’t cover the AWS side of your setup here. Essentially, you’ll need a user with the right permissions and a cluster up and running!
Setting Up Codeship Pro
Hopefully you've got your EKS cluster configured and your users set up with the appropriate permissions. If not, we recommend reading AWS's EKS setup guide. With those in place, we'll begin setting up our CI/CD process on Codeship, specifically using Codeship Pro. EKS may manage your cluster, but you still need to test and deploy your application; once that's automated, everything from commit to running Kubernetes cluster happens with nothing to manually oversee.
Codeship Pro is the Docker-based offering, and uses two .yml files, the Services file and the Steps file, to define your build environment, your application, and your pipeline.
If you’re new to Codeship, the comprehensive introductory guide and the quickstart repos may be a good jumping-off point.
We'll use the Rails quickstart for our example here, but you can of course use any language, framework, or service you'd like. More quickstarts are available in the Codeship Library repo. These quickstarts are simple proof-of-concept starter apps configured to run on Codeship out of the box.
Creating the Services files
When setting up a new Codeship Pro project, the first thing you do is create your codeship-services.yml and codeship-steps.yml files. These two configuration files define your environment and your CI/CD pipeline on Codeship.
The Services file is modeled after Docker Compose and orchestrates both your application's containers and any containers specific to your CI/CD pipeline; for instance, a container with the kubectl tool.
In this case, the Services file will look like:
app:
  build:
    image: codeship/ruby-rails-quickstart
    dockerfile: Dockerfile
  environment:
    PGHOST: pg-db
    PGUSER: sample_postgres_user
    PGPASSWORD: sample_postgres_password
    PGDATABASE: postgres
    RAILS_ENV: test
  depends_on:
    - pg-db
  cached: true
  volumes:
    - ./deployment:/deploy
pg-db:
  image: healthcheck/postgres:alpine
  environment:
    POSTGRES_USER: sample_postgres_user
    POSTGRES_PASSWORD: sample_postgres_password
    POSTGRES_DATABASE: postgres
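The Services file above points the app service at a Dockerfile. As a point of reference, a minimal Rails Dockerfile might look something like the sketch below; this is illustrative only, and the Ruby version and commands are assumptions rather than the actual contents of the quickstart repo:

```dockerfile
# Hypothetical minimal Dockerfile for the app service (not the actual quickstart file)
FROM ruby:2.5

WORKDIR /app

# Install gems first so this layer caches between builds
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy in the rest of the application code
COPY . .

CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```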
Creating the Steps files
And in our Steps file, we’ll create a simple pipeline:
- name: run_rails_test
  # bash wrapper not required unless passing along env variables or combining commands
  command: /bin/bash -c 'bundle exec rake db:migrate && rails test'
  service: app
In these configuration examples, we're creating a simple container named app that uses the pg-db service as a database to run our Rails application's tests against.
Kubernetes Deployment to EKS
The next step in our CI/CD pipeline will be our Kubernetes-based deployment to EKS, and for that we'll need to define a new service as well as at least one new step. To do this, we're going to use Codeship's Kubernetes deployment service, which lets you easily encrypt and authenticate via your kubeconfig file and use the kubectl tool for any deployment commands you may need. In this case, that kubeconfig file will authenticate with your AWS account.
In our Services file, we’ll add:
kubernetes-deployment:
  encrypted_env_file: k8s-env.encrypted
  image: codeship/kubectl
There are two things to note here. First, we're pulling an image named codeship/kubectl; this is the image Codeship hosts and maintains for easy kubeconfig-based authentication in your builds. If you wanted to, you could build this service yourself or modify our approach by checking out the code.
The next thing you'll notice is that we are using the Codeship encrypted_env_file directive to set encrypted environment variables. The use case here is a little bit unique, as these encrypted environment variables will actually be our kubeconfig file contents. By providing them this way, you can keep your authentication credentials totally secure in your repo and throughout your CI/CD build process.
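We haven't inspected the exact format the helper image produces, but the general technique, packing a multi-line file into a single-line environment value, can be sketched in plain shell purely for illustration:

```shell
# Illustration only: round-trip a multi-line file through a one-line env value.
# This is NOT necessarily the exact format codeship/env-var-helper emits.
printf 'apiVersion: v1\nkind: Config\n' > kubeconfigdata

# Base64-encode to a single line, the kind of value an env file can hold
KUBE_CONFIG_DATA=$(base64 < kubeconfigdata | tr -d '\n')

# Later, inside a container, decode it back into a file kubectl can read
printf '%s' "$KUBE_CONFIG_DATA" | base64 -d > restored-config

diff kubeconfigdata restored-config && echo "round-trip ok"
rm kubeconfigdata restored-config
```

Because the value is a single line, it survives env-file parsing and shell quoting intact, which is why base64 is a common transport for whole files.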
Create your encrypted kubeconfig
Let's take a moment to create the encrypted environment variable file we'll need to authenticate with AWS EKS during our build.
First, we'll start with the kubeconfig file configured to use an AWS user with appropriate IAM permissions. Here's one example:
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
The EKS documentation is the best place to go to find more information on getting your user and your initial authentication created.
Once you have this file ready, you'll need to be sure you have both kubectl and Codeship's Jet CLI installed locally. We'll use each to run a few commands:
kubectl config view --minify --flatten > kubeconfigdata
docker run --rm -it -v $(pwd):/files codeship/env-var-helper cp kubeconfigdata:/root/.kube/config k8s-env
jet encrypt k8s-env k8s-env.encrypted
rm kubeconfigdata k8s-env
First, we're using the --minify --flatten options on the kubectl config view command to collapse our configuration file into a manageable format. Then, we're using Codeship's Kubernetes helper image, via the docker run command, to convert that file into an environment variable format that our deployment image can parse. Finally, we take the new file, where our flattened config file is set up in environment variable format, and encrypt it using the jet encrypt command of Codeship's Jet CLI.
Note that when using jet encrypt, you will need to make sure that you've grabbed the AES encryption key unique to your Codeship project. You can read our encrypted environment variables documentation for more information on how to do this.
Deployment commands
Now that we have our service defined and authenticating with AWS, we will return to the Steps file to add our Kubernetes deployment commands. Since the service we defined is just a container with the kubectl tool installed, you can use any kubectl commands you'd like. In our case, we'll reference a script file in our repo:
- name: deploy
  service: kubernetes-deployment
  command: ./kubernetes.sh
Inside this kubernetes.sh script, we can include any Kubernetes commands we like. As one example:
#!/bin/sh
kubectl apply -f ./deployment.yaml
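The deployment.yaml that script applies can be any standard Kubernetes manifest. A minimal sketch follows; the names, image, port, and replica count here are all placeholders for illustration, not values taken from the quickstart:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-app            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rails-app
  template:
    metadata:
      labels:
        app: rails-app
    spec:
      containers:
      - name: rails-app
        image: your-registry/your-rails-image:latest  # placeholder image
        ports:
        - containerPort: 3000  # Rails' default port
```

Since the deployment step is just kubectl in a container, you could equally apply Services, Ingresses, or any other resources your workflow needs.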
Conclusion
Deploying to AWS EKS via Codeship Pro is as simple as defining a container that runs kubectl and authenticates with AWS exactly as your users may already be doing locally on their own computers.
This means the full capabilities of EKS and Kubernetes are available without any complicated or proprietary setup on Codeship Pro, allowing your CI/CD tool to adapt to any Kubernetes-based workflow that AWS's powerful new platform enables.