What is Modern Application Delivery?

- By Sara Jeanes on July 15, 2016

Development practices have come a long way since the days of Waterfall. Development shops have progressed through Agile methodologies and built a culture of continuously delivering value to their customers, both internal and external. Many shops have also adopted Scrum and are experimenting with containerization technologies.


The exact characteristics of development practices vary from organization to organization. At their core, however, is a modern delivery chain composed of modular parts. In place of the monolithic application stacks of old, organizations now deploy applications broken into their individual constituent parts. Implementing this new method requires new tools and new techniques, which help manage the delivery pipeline throughout the lifecycle of the application, from ideation through development to sunset.


Continuous integration platforms, like Shippable, are an important component of this pipeline. But they’re only one of the many components that a team needs to be successful and agile.


Below, I discuss from a broad perspective how an organization benefits from a modern, microservices-based delivery chain. I then explain how microservices can be leveraged at each stage of the delivery pipeline, from development to testing to production.

The advantages of microservices


There are some key benefits to breaking a monolithic application down into its core components. Instead of hard-wiring together the traditional tiers of the application (the database, business logic, and web layers), individual services address each other through APIs, in a pattern that closely mirrors MVC-style separation of concerns. Because each service sits behind an API, the application can scale significantly to accommodate large spikes in traffic. A service-oriented architecture also provides significantly greater visibility into the operation of the application, making it easier to troubleshoot.


Another advantage of microservices is that the images they run from can be built and signed to ensure their integrity. Verified base images can be pulled from trusted repositories like Docker Hub to speed builds, and containers built from these trusted images can then run on any suitable container platform. Containers also expose a higher level of visibility within an environment than an IT team would traditionally get with virtual machines. This added visibility lends itself to a number of useful mixed virtual machine-and-container deployment topologies, depending on the needs of the development team, and it allows an operations team to keep using a familiar suite of tools while migrating an application to a container-based microservices architecture.


Much like virtual machines, containers are language independent. The tooling to manage, deploy, and maintain containers provides a consistent framework while leaving teams free to choose the language and framework that works best in each situation. Containers also make it easier to right-size an application: because the application is broken into its constituent microservices, each element can be scaled independently by deploying additional containers of that image. And since each container can be built with only its required components, deployed container images stay small, which also reduces the potential attack surface.
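As a minimal sketch of this independent scaling, assuming a docker-compose.yml that defines hypothetical `web` and `worker` services, each element can be scaled separately from the command line:

```shell
# Scale one microservice independently of the others.
# (Sketch: service names "web" and "worker" are illustrative assumptions;
# requires a Docker daemon and a docker-compose.yml in this directory.)
docker-compose scale web=4 worker=2
```

Only the service under load gets more containers; the rest of the application is untouched.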


Last but not least, each microservice packaged into its own container is resilient against failure. In other words, the failure of one microservice does not impact another, and a container orchestrator like Docker Swarm can automatically restart failed containers. Finally, the underlying container interface is portable, providing resistance to lock-in on a particular platform or cloud. It also creates an explicit exit strategy for the team to move the application to a different cloud provider, or back to a private environment in-house.
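To illustrate the automatic-restart behavior, here is a hedged sketch using Docker Swarm mode; the service and image names are hypothetical, and the commands assume an initialized Swarm cluster:

```shell
# Create a replicated service that Swarm restarts on failure.
# (Sketch: "myorg/orders:1.2" is an assumed image name.)
docker service create \
  --name orders \
  --replicas 3 \
  --restart-condition on-failure \
  myorg/orders:1.2

# If one replica's container dies, Swarm schedules a replacement
# to keep the replica count at 3.
```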


Those are the overarching benefits of a microservices-based delivery chain. Now, let’s discuss how organizations can leverage these benefits at each particular stage of the delivery pipeline, from development to production to an app’s end of life.


In Development


In the past, it was common for engineers to set up development environments manually. Slightly more sophisticated organizations might have used tools like Vagrant to keep the environment consistent. Both of these methods are prone to "drift" between individual environments: environments that are supposed to remain identical and consistent change over time due to local updates on the host system, changes in local networking configuration, and so on.


Containers alleviate this concern: a running container is instantiated from a read-only image, so the build environment itself cannot be modified in place while builds run. Containers also make it possible to place both the code and the artifacts that define the environment into source control. Combined with the ability to destroy and rebuild an identical environment on demand, drift between environments becomes nearly nonexistent.
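The destroy-and-rebuild cycle can be sketched in two commands, assuming a docker-compose.yml in the working directory and a running Docker daemon:

```shell
# Tear down a development environment and rebuild it from scratch.
docker-compose down           # stop and remove containers and networks
docker-compose up -d --build  # rebuild images and start fresh containers
```

Because the rebuilt environment comes from the same versioned definition, it is identical every time.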


Microservices also come in handy in development because they allow a small team of engineers to work on one component of the application without significantly affecting the components that depend on it, as long as the service's interface remains backwards compatible. And since each service comprises only a portion of the full application, the time to set up a dev environment is significantly reduced: containers take less than a second to start and can be recreated quickly and easily from the command line.


To put it another way: containers form a discrete unit of development containing built code. Source control maintains and versions the code, a Dockerfile controls the container build process, and a docker-compose file controls the multi-container build process. An artifact repository can then maintain versioning on built images and shared templates. Together, these elements create an explicit definition of the application, lending visibility to the build process. With that visibility, it is easier to stay agile and iterate quickly through the build and deployment process.
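A minimal sketch of these two definitions follows; the base image, service names, and ports are illustrative assumptions, not details from any particular application:

```dockerfile
# Dockerfile: the single-container build definition (hypothetical app layout).
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

```yaml
# docker-compose.yml: the multi-container build definition
# (service names and images are hypothetical).
version: "2"
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:9.5
```

Everything needed to reproduce the environment lives in these files, so both belong in source control alongside the code.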


In Testing


Testing is the part of the delivery chain where many factors can go awry. Code drift, dependency versioning issues, and change conflicts can all create problems. These problems tend to surface issues in test that would never occur in production, while masking issues that do exist in production but go unidentified in test.


A key benefit of containers and microservices is their ability to address these challenges by guaranteeing the integrity of the artifacts and components delivered through a continuous integration workflow. With microservices, each service is packaged with its dependencies, which makes the testing procedure an engineer runs before delivering changes more concise. In addition, the services are built from a trusted source image that is maintained and patched; this alone increases the security of the environment by ensuring components are up to date. Each service is also packaged to be independent, making it more difficult for a bug in one service to grant direct access to another container. It is important to note, however, that a container, unlike a virtual machine, shares an underlying operating system with a number of other containers.
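One way to hold builds to a maintained, trusted source image is to pin the base image in the Dockerfile; the sketch below uses an illustrative base, and the digest line is a placeholder rather than a real value:

```dockerfile
# Pin the base image to a specific tag so every build starts from the
# same verified image pulled from a trusted repository.
FROM ubuntu:16.04

# Stricter still: pin to a content digest, which is immune to a tag
# being re-pushed. (Placeholder digest, not a real value.)
# FROM ubuntu@sha256:<digest>
```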


When leveraging artifact versioning, any change introduced into a test environment is captured. Validation testing can be performed on each microservice individually, and then the environment as a whole, in order to validate the versioned container. These test results can then be communicated back to the rest of the team, making it easy to keep all team members aware of the state of the test environment. This also adds extra visibility to any changes that have been made, especially if the changes were to the API for the service.
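Artifact versioning of this kind can be sketched with image tags; the image name, registry host, and version below are hypothetical, and the commands assume a Docker daemon and registry credentials:

```shell
# Version each built image so every change entering the test
# environment is captured as a distinct, addressable artifact.
docker build -t myorg/orders:1.3.0 .
docker tag myorg/orders:1.3.0 registry.example.com/myorg/orders:1.3.0
docker push registry.example.com/myorg/orders:1.3.0
```

Because each version is an immutable tag in the artifact repository, test results map unambiguously to the exact build they validated.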


In Production


One of the primary virtues of an explicit development pipeline is the ability to deliver changes to production rapidly, many times a day. The process looks like this: engineers check in code; a continuous integration server merges and builds it multiple times a day; the built code, delivered in containers, is subjected to testing; after that, it is ready for deployment. This continuous delivery method creates positive feedback for the engineer and reinforces the benefit of sticking to the pipeline rather than trying to circumvent it.
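The steps above, as a CI job might run them, can be sketched as a short script; the image name, test script, and `BUILD_NUMBER` variable are assumptions about the CI environment, not details of any specific platform:

```shell
# Build, test, and publish one change. (Sketch: requires a Docker
# daemon; "run_tests.sh" and the image name are hypothetical.)
set -e  # stop the pipeline on the first failing step

docker build -t myorg/orders:"$BUILD_NUMBER" .                       # build the container
docker run --rm myorg/orders:"$BUILD_NUMBER" ./run_tests.sh          # test inside it
docker push myorg/orders:"$BUILD_NUMBER"                             # publish for deployment
```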


Once this level of sophistication is achieved, a number of opportunities arise that make clear the worth of the effort put into developing this capability. Blue/green deployment is a method whereby an entirely separate production environment is deployed in parallel with the old version. The new environment is tested to ensure it is properly instantiated; once this consistency check is done, the primary load balancer or DNS resolver is pointed at the new instance. The new instance is checked again before the old version is destroyed. This keeps each production instance effectively read-only while allowing a zero-downtime update.
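A blue/green cutover might be sketched as follows; every name here is hypothetical, and it assumes the two stacks are defined in separate Compose files with a front-end proxy that reads its upstream target from a small environment file:

```shell
# Blue/green cutover sketch (all file, script, and stack names are assumptions).
docker-compose -f docker-compose.green.yml up -d  # deploy the new (green) stack in parallel
./smoke_test.sh green                             # verify it is properly instantiated
echo "UPSTREAM=green" > proxy/upstream.env        # repoint the load balancer at green
./smoke_test.sh green                             # re-check before tearing anything down
docker-compose -f docker-compose.blue.yml down    # destroy the old (blue) stack
```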


An additional benefit of microservices in production is that changes can be deployed to a small subset of customers and tested before being rolled out to the entire customer base. This way, multiple versions can be tested, and only the best one is deployed throughout the entire environment to all production containers.


Each of these benefits relies on production being a linked set of instances that can be disposed of as needed, possibly many times a day. The improved abstraction of containers lends itself to this ephemeral need, while enforcing the benefits of promotion through each step of the pipeline, from test to production.


Sunsetting


The sunsetting stage, which means ending development of an app and bringing it out of production, is often overlooked. But it’s an essential part of the delivery chain, too. It’s also one where microservices can be leveraged to great advantage.


There are a few things to consider when building microservices that will make sunsetting containerized apps significantly easier. Engineers should inventory all the components that compose an application and any other application that depends on them. During sunset, they should then ensure that all components of the application are decommissioned. Under these conditions, microservices make it especially easy to upgrade or decommission individual components of the application. For instance, it is relatively easy to substitute a new logging or monitoring tool for an old one that is being sunsetted.


It is also important to keep in mind that the most important element of any application sunset is feedback. Continuous improvement is a corollary to continuous delivery, and the best way to improve an application is by providing information to engineers on how to make their product better.


Conclusion


Deploying an effective delivery chain based on microservices requires organizations to think holistically about how to make the most of microservices, and about integrating them into each phase of the delivery pipeline. Continuous integration platforms like Shippable are one key ingredient, but to make the most of modern application delivery, engineers should leverage microservices across the entire delivery pipeline.

Try Shippable 

Topics: containers, continuous delivery, microservices