The Shippable Blog

CI/CD enhancements: JaCoCo, JFrog Artifactory, and more

Happy 2017! As the new year kicks in, we wanted to start a monthly blog post that tells you about the new features we've launched over the past month. No more trees (aka features) falling silently in the forest... they'll make a big THUDDDDD in this monthly series. So without further ado.... TA DA!

Continuous Integration

Custom variables for manual builds: You can now inject custom environment variables while triggering a manual build through the UI. This is great for debugging, when you don't want to make fake commits just to trigger builds with different environment values.

Integrated JaCoCo code coverage reports: You can visualize rich reports within the Shippable UI and drill down to see which lines of code are not covered by your tests.
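As a rough idea of how this fits into a build, here is a minimal shippable.yml sketch for a Maven project. The report path and the shippable/codecoverage directory below are assumptions based on common conventions, not an official recipe, so check the docs for your project type:

    # shippable.yml -- minimal sketch, assuming a Maven project with the
    # JaCoCo plugin already configured in pom.xml
    language: java

    build:
      ci:
        # Run the tests and generate the JaCoCo report
        - mvn clean verify
        # Copy the report to the directory Shippable reads coverage from
        # (assumed location; adjust to your build's output path)
        - mkdir -p shippable/codecoverage
        - cp -r target/site/jacoco/* shippable/codecoverage/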

7 things to consider while moving to a microservices architecture

In part I of my four-part blog series on microservices, I explained what microservices are and the benefits you will see by adopting this architecture.

However, life is all about tradeoffs. In part II of this series, I will go over the things you need to consider while moving to microservices, as well as some challenges that crop up even when you do everything right.

Microservices for greenfield projects

Anytime your team develops a new application from scratch, it feels great not to inherit technical debt or be locked into outdated decisions made years ago. Most teams developing new apps today would probably choose to containerize them using Docker and adopt a microservices architecture for speed and agility.

Why you should adopt a microservices architecture

Microservices are the new cool kids in tech town and everyone's trying to join the party. After all, microservices are considered the panacea that brings speed, agility, and innovation to software-powered businesses.

For the most part, this is true. In Part I of my four-part blog series, we will take a look at how software architecture has evolved over the years and why you should consider adopting microservices.

Provisioning AWS infrastructure with Terraform

Provisioning and updating infrastructure is the first step in setting up your development, beta, or production environments. HashiCorp's Terraform is fast becoming a popular choice for this use case. We love Terraform at Shippable due to its easy declarative syntax, which is similar to our pipelines syntax. Other advantages are:

  • Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

  • Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

  • Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

  • Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

At Shippable, we use Terraform to provision all our environments and automate the provisioning using our Pipelines feature. If you're interested in taking a look at our Terraform scripts and pipelines config, we have made those repositories public so you can check them out.

Interested in trying it yourself? The following example walks you through a sample project that provisions two t2.micro instances on AWS. We've kept it simple for easy understanding, but you can also automate provisioning of complex environments as seen in our beta infra scripts above.
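Here is a minimal sketch of what the Terraform config for those two instances might look like. The region and AMI ID below are placeholders, not the values from our scripts:

    # main.tf -- minimal sketch; region and AMI are placeholders
    provider "aws" {
      region = "us-east-1"
    }

    # Two identical t2.micro instances
    resource "aws_instance" "demo" {
      count         = 2
      ami           = "ami-xxxxxxxx"   # use an AMI that exists in your region
      instance_type = "t2.micro"
    }

Running terraform plan prints the execution plan described above, and terraform apply creates the two instances.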

REST API Best Practice: OAuth for Token Authentication and Authorization

A big challenge with an API-based microservices architecture is handling authentication (authN) and authorization (authZ). If you are like most companies today, you are probably using some sort of OAuth identity provider like OpenID, Google, or GitHub. This takes care of both identity and authentication, but it does not address authorization.

In our previous blog posts, we discussed two REST API best practices: making one database call per API route, and assembling complex objects that need to be displayed in the UI. In response, one of our readers asked a great question: if the design pattern is to always make one DB call per API route and then handle joins in the UI to create complex objects, how do we manage authorization/permissions? With a finished API, you can abstract it across the lower-level APIs.

This blog describes the pros and cons of the two options we considered for handling authZ, and why we chose the approach we did. Our two possible approaches were:

- Create a user on the DB for every single user who interacted with our service and manage all permissions at the DB level

- Create a superuser DB account that has “data modification access” and no “data definition access,” and use that account to access data

We were initially hesitant to go with option 2 since it meant accessing all data with superuser credentials, which felt like we weren't enforcing permissions at the lowest level we could. 
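For concreteness, option 2 might look something like the sketch below. This is a hypothetical PostgreSQL-flavored example; the post doesn't prescribe a specific database, and the role name is made up:

    -- Hypothetical sketch of option 2: one shared service account with data
    -- modification access but no data definition access (PostgreSQL syntax).
    CREATE ROLE api_service LOGIN PASSWORD 'change-me';
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO api_service;
    -- No CREATE, ALTER, or DROP privileges are granted, so the account can
    -- change data but not the schema. Per-user permission checks then have to
    -- happen in the API layer rather than in the database.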

Let's look at both options in greater detail.