The CI/CD and DevOps Blog

Learn about various tried-and-tested strategies that will help you ship code faster

Are You Stuck in the New DevOps Matrix From Hell?

If you google "matrix from hell", you'll see many articles about how Docker solves the matrix from hell. So what is the matrix from hell? Put simply, it is the challenge of packaging any application, regardless of language/frameworks/dependencies, so that it can run on any cloud, regardless of operating systems/hardware/infrastructure.

The original matrix from hell: applications were tightly coupled with the underlying hardware

Docker solved the matrix from hell by decoupling the application from the underlying operating system and hardware. It did this by packaging an application and all of its dependencies, right down to the OS-level libraries, inside a Docker container. This makes Docker containers "portable", i.e., they can run on any cloud or machine without the dreaded "it works on my machine" problem. This is the single biggest reason Docker is considered the hottest new technology of the last decade.
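
As a minimal sketch of what that packaging looks like (the base image, file names, and entry point below are illustrative, not taken from any of these posts), a Dockerfile declares everything the application needs so the resulting image runs the same way on any host:

    # Illustrative Dockerfile: bake the app and its dependencies into one image
    FROM python:3.9-slim                     # OS-level userland plus language runtime
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt      # application dependencies baked into the image
    COPY . .
    CMD ["python", "app.py"]                 # same entry point on any machine or cloud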

With DevOps principles taking center stage over the last few years, Ops teams have started automating tasks like provisioning infrastructure, managing config, and triggering production deployments. IT automation tools like Ansible and Terraform help tremendously with these use cases, since they allow you to represent your infrastructure as code, which can be versioned and committed to source control. Most of these tools are configured with a YAML- or JSON-based language that describes the activity you're trying to achieve.
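
For example, a small Ansible playbook (the host group and package below are hypothetical) describes the desired state of a set of servers in YAML, and can live in source control right next to your application code:

    # Hypothetical playbook: install and start nginx on a group of web servers
    - hosts: webservers
      become: yes
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Ensure nginx is running
          service:
            name: nginx
            state: started
            enabled: yes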

Build, test and deploy applications independently from a monorepo

In our previous blog post, Our journey to microservices: mono repo vs multiple repositories, we shared our thoughts and experience with the monorepo approach. We received a few questions after that post about how CI and deployments work with a monorepo.

In this article, we will learn how to run CI, build, and deploy applications independently from a monorepo. On each PR/commit, we will run tests on the service that has changed, build a Docker image from it, and push it to a registry. This image can then be deployed to a cluster on any supported container service. We will use Shippable for this scenario.
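
The full walkthrough uses Shippable's configuration, but the core idea can be sketched in a few generic shell commands (the directory layout, registry URL, test script, and COMMIT variable are assumptions for illustration only):

    # Hypothetical sketch: test, build, and push only the services changed in the last commit.
    # Assumes each service lives in its own top-level directory with a Dockerfile and run_tests.sh.
    changed_services=$(git diff --name-only HEAD~1 HEAD | cut -d/ -f1 | sort -u)

    for svc in $changed_services; do
      if [ -f "$svc/Dockerfile" ]; then
        (cd "$svc" && ./run_tests.sh)                            # run only that service's tests
        docker build -t registry.example.com/$svc:$COMMIT "$svc"
        docker push registry.example.com/$svc:$COMMIT            # image is now ready to deploy to any cluster
      fi
    done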

7 things to consider while moving to a microservices architecture

In Part I of my four-part blog series on microservices, I explained what microservices are and the benefits you will see by adopting this architecture.

However, life is all about tradeoffs. In part II of this series, I will go over the things you need to consider while moving to microservices, as well as some challenges that crop up even when you do everything right.

Microservices for greenfield projects

Anytime your team develops a new application from scratch, it feels great not to inherit technical debt or be locked into outdated decisions made years ago. Most teams developing new apps today would probably choose to containerize them using Docker and adopt a microservices architecture for speed and agility.

Why you should adopt a microservices architecture

Microservices are the new cool kids in tech town and everyone's trying to join the party. After all, microservices are considered the panacea that brings speed, agility, and innovation to software-powered businesses.

For the most part, this is true. In Part I of my four-part blog series, we will take a look at how software architecture has evolved over the years and why you should consider adopting microservices.

What is Modern Application Delivery?

Development practices have come a long way since the days of Waterfall. Development shops have progressed through Agile methodologies and built a culture of continuously delivering value to their customers, both internal and external. Many shops have since implemented Scrum and are experimenting with containerization technologies.