Provisioning and updating infrastructure is the first step in setting up your development, beta, or production environments. HashiCorp's Terraform is fast becoming very popular for this use case. We love Terraform at Shippable due to its easy declarative syntax, which is similar to our pipelines syntax. Other advantages are:
- Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.
- Execution Plans: Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.
- Resource Graph: Terraform builds a graph of all your resources and parallelizes the creation and modification of any non-dependent resources (see the sketch after this list). Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into the dependencies in their infrastructure.
- Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.
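To make the resource graph point concrete, here is a minimal, hypothetical sketch (the resource names, AMI ID, and security group rule are placeholders we made up, not values from our scripts). Because the instance references the security group, Terraform infers the dependency and creates the security group first, while unrelated resources are created in parallel:

```hcl
# Illustrative only: Terraform builds its dependency graph from references.
resource "aws_security_group" "web" {
  name        = "web-sg"              # hypothetical name
  description = "Allow inbound HTTP"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-0123456789abcdef0"       # placeholder AMI
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.web.id]   # implicit dependency
}
```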
At Shippable, we use Terraform to provision all our environments, and we automate the provisioning using our Pipelines feature. If you're interested in taking a look at our Terraform scripts and pipelines config, we have made our repositories public so you can check them out:
Interested in trying it yourself? The following example walks you through a sample project that provisions two t2.micro instances on AWS. We've kept it simple for easy understanding, but you can also automate provisioning of complex environments as seen in our beta infra scripts above.
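Before you open the sample repository, here's roughly what the core of such a configuration might look like. This is a hedged sketch, not the exact contents of our sample project; the region, AMI ID, and resource names below are placeholders:

```hcl
# Sketch: provision two t2.micro instances on AWS.
provider "aws" {
  region = "us-east-1"                      # placeholder region
}

resource "aws_instance" "demo" {
  count         = 2                         # two identical instances
  ami           = "ami-0123456789abcdef0"   # replace with an AMI valid in your region
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-demo-${count.index}"
  }
}
```

Running `terraform plan` against this shows the two instances that would be created, and `terraform apply` provisions them.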
A big challenge with an API-based microservices architecture is handling authentication (authN) and authorization (authZ). If you are like most companies today, you are probably using some sort of OAuth identity provider like OpenID, Google, or GitHub. This takes care of both identity and authentication, but it does not address authorization.
In our previous blog posts, we discussed two REST API best practices: making one database call per API route and assembling complex objects that need to be displayed in the UI. In response, one of our readers asked a great question: if the design pattern is to always make one DB call per API route and then handle joins in the UI to create complex objects, how do we manage authorization/permissions? With a finished API, you can abstract it across the lower-level APIs.
This blog describes the pros and cons of the two options we considered for handling authZ, and why we chose the approach we did. Our two possible approaches were:
- Create a user on the DB for every single user who interacts with our service, and manage all permissions at the DB level
- Create a superuser DB account that has “data modification access” but no “data definition access,” and use that account to access data (a rough sketch of this setup follows this list)
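As a rough illustration of option 2, here is one way such an account could be codified with Terraform's community PostgreSQL provider. Everything here (provider settings, role, database, and privilege names, and even the choice of PostgreSQL) is an assumption made for illustration, not our actual setup:

```hcl
# Sketch: a shared application role with data-modification privileges only.
provider "postgresql" {
  host     = "db.internal.example.com"   # hypothetical host
  username = "admin"
  password = "REPLACE_ME"                # use a secret store in practice
}

# The single account the API servers would connect with.
resource "postgresql_role" "app" {
  name     = "app_service"
  login    = true
  password = "REPLACE_ME"
}

# SELECT/INSERT/UPDATE/DELETE on existing tables ("data modification access"),
# but no CREATE/ALTER/DROP ("data definition access").
resource "postgresql_grant" "app_dml" {
  role        = postgresql_role.app.name
  database    = "appdb"
  schema      = "public"
  object_type = "table"
  privileges  = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
```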
We were initially hesitant to go with option 2 since it meant accessing all data with superuser credentials, which felt like we weren't enforcing permissions at the lowest level we could.
Let's look at both options in greater detail.
Everyone agrees that continuous deployment helps accelerate innovation. However, Continuous Deployment (CD) today is synonymous with fragile, homegrown solutions made of disjointed tools cobbled together with thousands of lines of imperative scripts. Avi Cavale walks you through the CD maturity model and demos an end-to-end continuous deployment with declarative pipelines for Docker applications.
As you know, we released our new implementation of continuous deployment pipelines last month. While our basic documentation is up to date, we believe that learning the new pipelines is best done with quick tutorials that demonstrate the power of CD and how easy it is to get started.
We have created a sample project and sample configuration to deploy the project to a test environment, create a release with semantic versioning, and deploy the project to production. The entire end-to-end scenario should take less than 30 minutes to try out, and while you won't learn every little trick, it will definitely make you comfortable with the configuration and how to set things up.
So read on and try it out!
In the previous part, we went over the steps of source code deployment to AWS Elastic Beanstalk using a simple Node.js app. We deployed the source code natively at first, then compared that with deploying it through Shippable. The latter approach showed how actions in the workflow are executed automatically for you by Shippable's unified CI/CD platform.
I'll take a similar approach in this part, where we'll go through deploying a Docker container of a Node.js app to AWS Elastic Beanstalk. To fully understand this tutorial, complete the previous source code deployment to AWS Elastic Beanstalk first.