
The Top 5 Considerations for Creating a Successful Cloud-Based Pipeline

Written by: Kiley Nichols

The following is a guest post by Evan Glazer.

Your development operation is extremely important. You already monitor your deployments and releases. So, why not automate the processes behind them?

This article will cover the top five considerations of creating and running an automated, cloud-based pipeline.

  1. Consider Your Business Needs First

  2. Develop Your Cloud-Based Pipeline as Part of Your Apps

  3. It's All About Continuous Improvement

  4. Enable Self Service Features

  5. Track Your Pipeline, Microservices and Compliance Policies

It is important to think about the whole picture of your business and application. Before development begins, make sure you understand the policies, compliance requirements, and data security obligations that apply to you, and build a plan to keep your application in line with them. There isn't an exact prescription for DevOps success when it comes to your application. Look at your team and everyone involved in your processes to figure out where automation can speed up release times. When you finish reading this article, you will have the knowledge you need to create a successful cloud-based pipeline for your applications.

1. Consider Your Business Needs First

Organizations are quickly adopting a hybrid computing model in order to continue supporting their monoliths while still staying current with newer architectures. Gaining visibility into all of your applications and environments is a key principle of hybrid computing and should be a goal for every company.

Hybrid cloud is a combination of computing, services, and data storage that can run on-premises, in a private cloud, or on public cloud services. Public cloud is widely used today through platforms such as Amazon Web Services (AWS) and Google Cloud Platform, among others.

Am I Hybrid?

If you're using a public cloud platform to send data to a private cloud, or leveraging SaaS applications and moving data between private resources, then you're already doing hybrid computing. The goal of hybrid computing is to combine services and data from various cloud models into a unified, automated, well-managed computing environment.

What Do I Need to Research?

It is important to understand that if you're a compliance-oriented organization, such as those in finance, law, healthcare, and other regulated industries, you will have limits on which cloud services you can use in your cloud-based pipeline. These organizations need private cloud services to meet security and compliance requirements. If you're running an e-commerce store, you don't have to worry as much about this until you grow and need to meet compliance requirements for a larger customer base.

2. Develop Your Cloud-Based Pipeline As Part of Your Apps

Everything As Code: Configurations, Environments, Automations, etc.

Setting up your configuration and environment files is an important step in developing a pipeline. You might have CloudFormation scripts that build certain stacks of your architecture, and it is better to use them than to set things up by hand in the AWS console. Servers set up without a configurable, repeatable way to spin up a new instance immediately will not work when you start creating automation.
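
As a rough illustration, here is a minimal sketch of launching a stack from a template with Python and boto3 instead of clicking through the console. The stack name and template path are hypothetical placeholders, not part of any particular setup.

```python
# Minimal sketch: create a CloudFormation stack from a template file
# instead of configuring resources by hand in the AWS console.
# The stack name and template path are placeholders for this example.
import boto3

def create_stack(stack_name: str, template_path: str) -> str:
    with open(template_path) as f:
        template_body = f.read()

    cloudformation = boto3.client("cloudformation")
    response = cloudformation.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed only if the template creates IAM resources
    )

    # Block until the stack is fully created so the pipeline can continue safely.
    waiter = cloudformation.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)
    return response["StackId"]

if __name__ == "__main__":
    create_stack("example-app-staging", "templates/app-stack.yaml")
```

Because the template lives in source control, the same script can rebuild the stack in any account or environment, which is the property a console-driven setup lacks.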

It's very important to model everything in your application for reuse so that it is easy to maintain. And don't forget to automate everything: scheduled jobs, migrations, compilation jobs, etc.

Put Everything in the Source Code

Keeping your application in a source code repository is key to being able to deploy. Setting up different environments in your codebase is an important part of the deploy process for your release, staging, and development branches.
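
One simple way to keep environment settings in the codebase is a small config module keyed by environment name, as in the sketch below. The variable names and URLs are hypothetical, and real secrets should still come from a secret store rather than the repository.

```python
# Sketch of environment configuration kept in source control and selected
# by an ENVIRONMENT variable. Names and values are placeholders; secrets
# should come from a secret store, not the repository.
import os

CONFIGS = {
    "development": {"api_url": "http://localhost:8000", "debug": True},
    "staging":     {"api_url": "https://staging.example.com", "debug": True},
    "production":  {"api_url": "https://app.example.com", "debug": False},
}

def load_config() -> dict:
    env = os.getenv("ENVIRONMENT", "development")
    if env not in CONFIGS:
        raise ValueError(f"Unknown environment: {env}")
    return CONFIGS[env]
```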

Process, Architecture, Deploy

Everything is a process. It is important to develop an architecture that helps your team work together to produce great software. Deployments should be quick and easy when testing or releasing software; the team shouldn't have to spend hours or days on a proper release.

3. It's All About Continuous Improvement

A pipeline implementation will always need continuous improvement. Building out your continuous integration (CI) is the time to think about the gaps between development and operations and to automate the building, testing, and deployment of your applications.
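
As an illustration of closing that gap, here is a minimal sketch of a build-test-push step driven from Python. The commands and image tag are assumptions; in practice these steps usually live in your CI system's own configuration.

```python
# Rough sketch of a CI step: run tests, build an image, and push it.
# The command list and image tag are placeholders; a real pipeline would
# normally express these steps in the CI system's own configuration.
import subprocess
import sys

IMAGE = "registry.example.com/example-app:latest"  # hypothetical registry/tag

def run(cmd) -> None:
    print(f"Running: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)  # fail the pipeline on the first error

if __name__ == "__main__":
    run(["pytest", "--maxfail=1"])              # test
    run(["docker", "build", "-t", IMAGE, "."])  # build
    run(["docker", "push", IMAGE])              # publish the artifact
```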

Steep Learning Curves

Before you begin creating your cloud-based pipeline, you might have some learning to do. It can be intimidating to learn Kubernetes, AWS products, Google Cloud products, Microsoft Azure products, etc. Or maybe your job's product timeline leaves no extra time to delve into new technologies that could really help advance your application. As a community, we must spend time digging into new technologies, learning how to put them to use, and helping advance these new skills in our industry, even if it means spending weeks figuring out how to implement them.

Keep Failure in Mind

You should investigate how your application will fail if a configuration file is changed or a file is incorrectly pushed to the pipeline. What will happen? You don't want to lock yourself out of your application or cause downtime. Taking time to plan can limit the possibility of this happening.

Take this situation for example: you know that if a configuration is updated via the console and implemented incorrectly, it may cause a defect. As a fail-safe, you should limit console access and the ability to update root-level files to specific roles.
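
One way to express that kind of fail-safe, sketched here with boto3, is a deny policy attached to roles that should never change infrastructure directly. The policy name and action list are illustrative assumptions, not a complete access model.

```python
# Sketch of a fail-safe: an IAM policy that denies stack updates and deletes,
# intended for roles that should never change infrastructure directly.
# The policy name and action list are illustrative, not a complete access model.
import json
import boto3

DENY_INFRA_CHANGES = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:UpdateStack",
                "cloudformation:DeleteStack",
            ],
            "Resource": "*",
        }
    ],
}

def create_deny_policy() -> str:
    iam = boto3.client("iam")
    response = iam.create_policy(
        PolicyName="deny-direct-infra-changes",  # hypothetical name
        PolicyDocument=json.dumps(DENY_INFRA_CHANGES),
    )
    return response["Policy"]["Arn"]
```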

4. Enable Self Service Features

Rapid Onboarding of New Teams and Pipelines

Adding a role to your architecture should be easy when you onboard new engineers. Spending countless hours figuring out access can be a tiresome process for you and your new team members as they look to start development immediately.

Visibility of Applications

Providing your team with the tools they need to monitor their application day-to-day is key to the self-service concept. It should never take hours or days to track down defects. You can also utilize integrations to run audits on your application's health.
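
As a small example of the kind of self-service check a team can run on its own, here is a sketch that polls a set of health endpoints and reports anything that is down. The service names and URLs are hypothetical.

```python
# Sketch of a self-service health audit: poll each service's health endpoint
# and report its status. Service names and URLs are placeholders.
import requests

SERVICES = {
    "api":     "https://api.example.com/health",
    "worker":  "https://worker.example.com/health",
    "website": "https://www.example.com/health",
}

def audit_health() -> dict:
    results = {}
    for name, url in SERVICES.items():
        try:
            response = requests.get(url, timeout=5)
            results[name] = "up" if response.ok else f"degraded ({response.status_code})"
        except requests.RequestException as exc:
            results[name] = f"down ({exc.__class__.__name__})"
    return results

if __name__ == "__main__":
    for service, status in audit_health().items():
        print(f"{service}: {status}")
```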

5. Track Your Pipeline, Microservices, and Compliance Policies

The Importance of Dashboards

Having a general-purpose dashboard to manage or view your clusters will allow you to troubleshoot defects and manage the health of all your applications across namespaces such as production, staging, and development.

A dashboard will also give you the ability to view cluster roles, active namespaces, nodes, pods, jobs, and more. Sometimes it is easier to view these on a web page than to pull the data from the console. Having a bird's-eye view of all running microservices in one easy-to-read dashboard will make your life much simpler.
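
If you want to pull the same kind of data programmatically, a short script with the official Kubernetes Python client can list pods by namespace. The namespace names below are assumptions and depend on how your cluster is organized.

```python
# Sketch of pulling dashboard-style data with the Kubernetes Python client:
# list pods and their status in a few namespaces. Namespace names are placeholders.
from kubernetes import client, config

NAMESPACES = ["production", "staging", "development"]

def list_pods() -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster
    v1 = client.CoreV1Api()
    for namespace in NAMESPACES:
        pods = v1.list_namespaced_pod(namespace)
        print(f"--- {namespace} ---")
        for pod in pods.items:
            print(f"{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    list_pods()
```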

Enforce Policies: Approval Gates, Compliance Checks, and ACLs

Setting up roles for your engineering team, and understanding the access each role has to the application, is very important. If an engineer is only going to browse pods or the application console and will never touch any of the development operations, their role should be different from that of someone who needs access to root-level files of the application. Finally, you should set up compliance checks to keep track of roles and their access to the application.
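
One simple compliance check, sketched below with the Kubernetes Python client, is to report every subject bound to the cluster-admin role so the list can be compared against your access policy. This is an assumption-laden example of a single check, not a full audit.

```python
# Sketch of a compliance check: list every subject bound to cluster-admin
# so the result can be compared against your access policy. Not a full audit.
from kubernetes import client, config

def report_cluster_admins() -> list:
    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()
    admins = []
    for binding in rbac.list_cluster_role_binding().items:
        if binding.role_ref.name == "cluster-admin" and binding.subjects:
            for subject in binding.subjects:
                admins.append((binding.metadata.name, subject.kind, subject.name))
    return admins

if __name__ == "__main__":
    for binding_name, kind, name in report_cluster_admins():
        print(f"{binding_name}: {kind} {name}")
```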
