DevOps isn’t just a fun amalgam of two terms (development and operations); it has its own culture within small organizations, startups, and digital factories, where there is a high demand for efficient communication between teams and for agility and flexibility at both the development and operational levels. Over the last several years, DevOps has been successfully implemented across top enterprises like Amex, Facebook, LinkedIn, Microsoft, Amazon, and too many others to count.
Getting started with DevOps
The goal of DevOps is to unify application development (Dev) and operations (Ops) throughout the software development life cycle (SDLC): from strategy, planning, coding, building, and testing, through release, deployment, operation, and monitoring. DevOps encourages automating as much of this cycle as possible with DevOps tools and scripts.
Dev and Ops in the software development lifecycle
Amazon coined the slogan ‘You build it, you run it,’ which aims to take a product from development to production and cut time-to-market by anywhere from 10 to 15 days.
Version Management with GitFlow
For the application’s source code, rather than using a monolithic repository, I recommend one repository per microservice. The GitFlow workflow then manages each repository’s branches: you can create a feature branch and a release branch, and later apply a ‘branch filter’ when you create your ‘build’ definition (more on this below).
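As a minimal sketch of this setup for one microservice repository (the repository layout and the branch names such as feature/payment-api are illustrative, not from any real project):

```shell
# GitFlow-style branch setup for a single microservice repository.
# Branch names below are placeholders for illustration.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "dev@example.com"   # local identity so commits work
git config user.name "dev"
git commit -q --allow-empty -m "initial commit"
git checkout -q -b develop                # long-lived integration branch
git checkout -q -b feature/payment-api    # feature branch cut from develop
git checkout -q develop
git checkout -q -b release/1.0.0          # release branch for the next version
git branch --list                         # shows develop, feature/*, release/*
```

A build definition’s branch filter could then target develop and release/* so that commits to those branches trigger CI builds.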
DevOps can’t be separated from the concept of CI/CD. Continuous integration (CI) is a set of practices for integrating working copies into a shared repository. It forms the beginning of the CI/CD pipeline: new changes to the source code are committed to a local repository and pushed to a remote repository; a trigger configured on that push then starts an automated build, which can also run tests.
CI/CD tools such as Visual Studio Team Services (VSTS), provided by Microsoft, let you create a build definition to turn this process into reality. You can configure different build tasks, and these tasks perform the build from the source code. There are two kinds of triggers available in VSTS:
- Continuous integration trigger: applied to a Git, TFS (recommended by Microsoft), or other source control repository; it lets you specify a branch to listen on, so that newly committed changes run an automated build.
- Scheduled trigger: applied on selected days and times to run an automated build.
Two types of build triggers
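As an illustrative sketch, the two kinds of triggers could be expressed in the YAML pipeline syntax that Azure Pipelines (the successor to VSTS) uses today; the branch names and schedule here are placeholders, not from any real project:

```yaml
# Illustrative build definition with both trigger types.
trigger:              # continuous integration trigger
  branches:
    include:
      - develop
      - release/*
schedules:            # scheduled trigger: weekdays at 02:00 UTC
  - cron: "0 2 * * 1-5"
    displayName: Nightly build
    branches:
      include:
        - develop
steps:
  - script: echo "build and test tasks run here"
```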
Other available DevOps tools that I’d like to mention are Jenkins, Ansible, GitHub, and Bitbucket.
Continuous Delivery vs Continuous Deployment
Let’s continue with CI/CD: the release part of the pipeline starts when a new, successful build is available. There are two types of CD: continuous delivery and continuous deployment.
Continuous delivery is a set of practices for delivering each change to a staging environment, from which it can then be deployed to production manually.
Continuous deployment differs from continuous delivery in that every change that passes a successful build is deployed to production automatically.
Continuous Delivery vs Continuous Deployment
Similarly, VSTS supports continuous delivery and continuous deployment through a ‘release’ definition (shown below). The definition can be based on a build from VSTS, or directly on an artifact published from Jenkins or another source:
Creating release definition from different artifacts
You can then configure different release tasks, which can be automated or manual as shown:
Task catalog in release definition
Getting Started With Containers
As a lightweight virtualization technology, containers provide an isolated and independent environment. Container platforms such as Docker virtualize at the operating-system level rather than virtualizing the underlying hardware. Docker improves the portability and agility of applications and acts as a deployment unit when deploying clusters of containers.
Docker uses a client-server architecture built from three essential parts: the Docker client, the Docker host running the Docker daemon, and the Docker registry.
- Docker client: where the Docker tooling is installed and used to build Docker images containing the target application.
- Docker host: a managed host running the Docker daemon (also known as dockerd, the persistent process that manages containers).
- Docker registry: stores and serves Docker images. The best-known public registry is Docker Hub, which hosts official images such as nginx; the Docker Store also distributes images.
The Docker client communicates with the Docker daemon through a RESTful API, over UNIX sockets or a network interface, in two ways:
- The Docker client runs on the same system as the Docker daemon.
- The Docker client connects to a remote Docker daemon.
Caption: Docker architecture, from https://docs.docker.com/engine/docker-overview/
Other important Docker-related tools are:
Docker Compose is a tool used to define and run multi-container Docker applications. A configuration file in YAML format is used to configure your application’s services.
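For example, a hypothetical docker-compose.yml defining a web service and a Redis backend might look like this (service names, ports, and images are placeholders):

```yaml
# Illustrative docker-compose.yml: a web service built from the local
# Dockerfile, plus a Redis container it depends on.
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - redis
  redis:
    image: redis:alpine
```

Running docker-compose up -d would then start both services together.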
A Dockerfile defines what goes into a Docker image; paired with the docker build command it produces the image, which can then be pushed to a Docker registry.
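As a minimal sketch (the registry name, tag, and paths are placeholders), a Dockerfile serving static content on top of the official nginx image could look like:

```dockerfile
# Start from the official nginx image and add the application's static files.
FROM nginx:alpine
COPY ./site /usr/share/nginx/html
```

It would typically be built and pushed with docker build -t registry.example.com/web:1.0 . followed by docker push registry.example.com/web:1.0.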
Docker on Azure
Microsoft Azure provides the Azure Container Service (ACS) to secure and manage enterprise container apps in the cloud, and the Azure Container Registry (ACR), with access controlled by Azure AD, to manage Docker images. Azure also provides Azure Container Instances (ACI), which offer a simpler, faster way to run a container in Azure without thinking about the infrastructure level.
Microsoft also provides some useful tools for working with Docker in Azure, such as Visual Studio Online, Visual Studio Code, and the Docker extension for Visual Studio.
Container-clustering Solutions in Azure
To run an application across container clusters with more than 100 instances, there are a number of container orchestrators that simplify the management of those clusters.
Popular open-source container orchestrators include Docker Swarm, Kubernetes, and Marathon on Mesosphere’s DC/OS (which is designed for big data analytics solutions and makes it easy to deploy Hadoop clusters), as well as OpenShift, Rancher, CoreOS Tectonic, Docker EE, and others.
Kubernetes in Azure
As time goes by, I think Kubernetes is winning the competition among container orchestrators, for many reasons. Microsoft Azure has also launched a service, currently in preview, called Azure Kubernetes Service (AKS). Microsoft considers Kubernetes the best balance between functionality and performance. Kubernetes applies a master/slave architecture, a communication model in which one device or process (the master) controls one or more others (the slaves). It facilitates the deployment of microservices, where each node can scale and work independently.
Microsoft recommends deploying multiple masters in Azure (generally three master nodes), then balancing the number of slave nodes depending on the scenario. If you’re working with Microsoft Azure and need to deploy clusters with a master/slave architecture, you can check my GitHub repository, where you’ll find some useful ARM (Azure Resource Manager) templates.
Microsoft Azure also provides a container service known as Azure Container Service, which runs popular container orchestrators such as Kubernetes as a managed service in Azure and offers many useful features:
- Easy management of containers, even with more than 100 instances.
- Easy scaling.
- Support for popular operating systems such as Linux and Windows.
- Easy roll-out and rollback.
- Combination with batch processing or cron jobs.
- Automatic bin packing (e.g., based on CPU/GPU requirements).
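As an illustration of the scaling and roll-out features above, a minimal Kubernetes Deployment manifest might look like this (the names, image, and resource request are placeholders):

```yaml
# Illustrative Deployment: three replicas of a web container with a CPU
# request (used for bin packing) and a rolling-update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.15
          resources:
            requests:
              cpu: "250m"
```

Scaling is then a one-liner (kubectl scale deployment web --replicas=5), and a bad release can be reverted with kubectl rollout undo deployment web.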
In the Azure Marketplace, standard and advanced versions of Docker EE for Azure are available, so users can deploy Docker directly in Azure. Options such as Mesosphere DC/OS on Azure serve the same purpose.
Reference Deployment Methods
To improve the resilience of the applications that we’ve deployed with container-clustering solutions, I’d like to recommend several deployment methods.
Blue/Green deployments, also known as B/G deployments or Red/Black deployments. The main principle is to run two identical environments configured in the same way. While one environment is live and in use by users, the other stays idle. A new release is deployed to the idle environment, and a load balancer then redirects incoming traffic to it; if a problem appears, traffic can be switched back to the environment still running the original version. The goal is to reduce downtime during production deployments.
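In a Kubernetes cluster, one common way to sketch this (the labels and names are illustrative) is a Service whose selector points at either the blue or the green Deployment:

```yaml
# Illustrative Service for a Blue/Green switch: changing the "slot" label
# in the selector from blue to green repoints all traffic to the new
# environment; changing it back is the rollback.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    slot: blue
  ports:
    - port: 80
```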
Canary deployments, typically used in production without impacting the current version, are implemented in a way similar to B/G deployments: the latest version of the application is deployed into production as a ‘canary.’ This version integrates with the other apps and the full infrastructure environment to simulate real production, but no public users are routed to it yet. Once all stakeholders are satisfied with the new version, more servers are released and more users are routed to it. Remember to implement a safe rollback strategy in case an issue is detected later.
Recommended Deployment Tools
Infrastructure-as-code software such as Terraform is a great choice for managing high-level configuration. Terraform reads files with a .tf extension, written in HCL (HashiCorp Configuration Language).
To apply all of your configuration, run terraform apply.
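A minimal sketch of a Terraform configuration for Azure might look like this (the resource name and location are placeholders, and it assumes the azurerm provider):

```hcl
# Illustrative Terraform configuration: a single Azure resource group.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "demo" {
  name     = "devops-demo-rg"
  location = "West Europe"
}
```

The typical workflow is terraform init to download the provider, terraform plan to preview changes, then terraform apply.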
In digital transformations, when transitioning from on-premises to the cloud, DevOps, containers, and container-clustering solutions such as Kubernetes are exciting and relevant topics for making organizations ‘cloud ready’ in both technical and cultural terms. Despite this progress, there’s still a gap between enthusiasm and execution, and I hope to see more and more organizations bring DevOps culture into their teams.
Originally from https://www.datree.io/blog/getting-started-with-devops-containers-and-kubernetes
Authored by MelonyQ
Woman in STEM. A former Microsoft senior cloud computing technology evangelist, she currently works as a cloud solution architect, teaches Azure e-learning courses, and blogs about Microsoft Azure. She holds all Azure certifications as well as the CKA and CKAD, and her community contributions focus on OSS, DevOps, containers and Kubernetes, serverless computing, and IoT on Microsoft Azure, helping businesses and partners of all sizes succeed in Azure projects, with a focus on Azure Kubernetes and Azure DevOps. She can be reached via Twitter @MelonyQ and her blog: cloud-melon.com