The Shippable Blog

Avi Cavale

Recent Posts

Moving Up The DevOps Maturity Curve

“I don't get no respect.”
       - Rodney Dangerfield
Most DevOps automation engineers probably feel the same way Rodney Dangerfield did. While they work hard to make CI/CD frictionless and ship applications faster than ever before, other principles of DevOps, like culture and collaboration, get far more attention than automation. Organizations expect DevOps to help them accelerate software releases and ship better-quality products, but they often underestimate the time and investment needed to implement the automation that will get them close to the nirvana of Continuous Delivery.
 
There is a reason, however, why automation has failed to capture the attention of the DevOps community: relative to other aspects like culture and collaboration, automation tooling has lagged behind and hasn't matured to the point where it can accelerate the evolution of shipping software from a craft into an industry. Simply put, the automation tools available today are too primitive and sit at the lower end of the maturity curve.

REST API Best Practice: OAuth for Token Authentication and Authorization

A big challenge with an API-based microservices architecture is handling authentication (authN) and authorization (authZ). If you are like most companies today, you are probably using some sort of OAuth identity provider like OpenID, Google, or GitHub. This takes care of identity and authentication, but it does not address authorization.
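To make the authN half concrete, here is a minimal sketch of validating a bearer token against an identity provider. The introspection endpoint, response shape, and function names are hypothetical placeholders, not our production code; the point is simply that a valid token tells you who the caller is, not what they are allowed to do.

```typescript
// Minimal authN sketch: validate a bearer token with an OAuth identity provider.
// The introspection URL and response shape below are hypothetical placeholders.

interface Identity {
  userId: string;
  email: string;
}

const INTROSPECTION_URL = "https://identity.example.com/oauth/introspect"; // hypothetical

// Returns the caller's identity if the token is valid, or null otherwise.
// Note: this answers "who are you?" (authN), not "what may you do?" (authZ).
async function authenticate(authorizationHeader?: string): Promise<Identity | null> {
  if (!authorizationHeader?.startsWith("Bearer ")) {
    return null;
  }
  const token = authorizationHeader.slice("Bearer ".length);

  const res = await fetch(INTROSPECTION_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ token }),
  });
  if (!res.ok) {
    return null;
  }

  const body = (await res.json()) as { active: boolean; userId: string; email: string };
  return body.active ? { userId: body.userId, email: body.email } : null;
}
```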

In our previous blog posts, we discussed two REST API best practices: making one database call per API route and assembling complex objects that need to be displayed in the UI. In response, one of our readers asked a great question: if the design pattern is to always make one DB call per API route and then handle joins in the UI to create complex objects, how do we manage authorization and permissions? With a layer of finished APIs, you could abstract authorization across the lower-level APIs, but that layer doesn't exist in this design.

This blog describes the pros and cons of the two approaches we considered for handling authZ and why we chose the one we did. The two options were:

- Create a DB user for every single user who interacts with our service and manage all permissions at the DB level

- Create a superuser DB account that has “data modification access” and no “data definition access,” and use that account to access data

We were initially hesitant to go with option 2 since it meant accessing all data with superuser credentials, which felt like we weren't enforcing permissions at the lowest level we could. 
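As a rough illustration of what option 2 looks like in practice, data access goes through one shared account while permission checks move into the API layer. The connection settings, table names, and permission model below are hypothetical, not our actual schema:

```typescript
import { Pool } from "pg";

// Option 2 sketch: every query runs through a single "data modification only"
// account, and permissions are enforced in the API layer rather than per DB user.
// Connection string, table names, and the permission model are illustrative.
const pool = new Pool({ connectionString: process.env.APP_DB_URL });

// Hypothetical permission lookup: may this user act on this project?
async function canAccess(userId: string, projectId: string): Promise<boolean> {
  const result = await pool.query(
    "SELECT 1 FROM project_permissions WHERE user_id = $1 AND project_id = $2",
    [userId, projectId]
  );
  return result.rows.length > 0;
}

// A route-level guard: authZ happens here, before the route's data access.
async function getProject(userId: string, projectId: string) {
  if (!(await canAccess(userId, projectId))) {
    throw new Error("403: not authorized for this project");
  }
  const result = await pool.query("SELECT * FROM projects WHERE id = $1", [projectId]);
  return result.rows[0];
}
```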

Let's look at both options in greater detail.

REST API Best Practice: Assemble complex objects in the UI layer

I spent the first decade of my career at Microsoft. As a result, the only stack I was familiar with was Microsoft SQL Server on the backend, an API layer using SOAP + XML in the middle, topped with a web layer built on .NET.

I was drunk on the SOAP kool-aid and completely ignored the inefficiencies created by SOAP + .NET. For example, the view state transferred for every interaction between the API and Web layer was very heavy and led to the following:

Complicated stored procedures: We tried to minimize calls between Web and API layers, which meant that any call that retrieved complex information from multiple tables needed a SQL Server stored procedure. 

Multiple APIs to manage CRUD: API contracts did not represent the DB schema and multiple CRUD APIs interacted with the same database object. This led to confusion among developers and frequent regression issues since it was difficult to find all code locations where an object was being created or updated.

Fragile database: The above issues made us reluctant to change anything in the database, since even small changes caused bugs and regressions. This meant our database was virtually frozen.

Having experienced this as a developer, a manager, and eventually a Product Unit Manager, I made it my first priority at Shippable to pledge total allegiance to REST. One of the most important principles of REST is that every object should be addressable through an HTTP-routable endpoint. This led us to a very interesting conundrum: where should we compose the objects that need to be displayed in the UI? Should we build a layer of finished APIs that return ready-to-display objects, or should we compose the objects in the UI layer by making multiple calls to the basic CRUD APIs?
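To show what the second approach looks like, here is a minimal sketch of composing a view model in the UI layer from basic CRUD routes. The endpoints, types, and base URL are hypothetical placeholders, not our actual API:

```typescript
// Sketch of the "compose in the UI layer" approach: the client calls simple,
// per-object CRUD routes and joins the results itself.

interface Project { id: string; name: string; ownerId: string; }
interface Build   { id: string; projectId: string; status: string; }
interface User    { id: string; userName: string; }

const API = "https://api.example.com"; // hypothetical base URL

async function getJson<T>(path: string): Promise<T> {
  const res = await fetch(`${API}${path}`);
  if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
  return res.json() as Promise<T>;
}

// The "finished" object the UI needs is assembled from three basic calls.
async function loadProjectDashboard(projectId: string) {
  const project = await getJson<Project>(`/projects/${projectId}`);
  const [owner, builds] = await Promise.all([
    getJson<User>(`/users/${project.ownerId}`),
    getJson<Build[]>(`/projects/${projectId}/builds`),
  ]);
  return { ...project, owner, builds }; // the composed view model
}
```

Note that the server stays simple, with one object per route, while the join logic lives in the client.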

REST API best practice: One Database call per Route

Shippable has been on a roll for the last 8 months. We scaled our team 3X, shipped the best-ever release of our continuous integration and delivery platform on February 29, and have continued launching 10+ features every month since then.

As we scaled our development team, every new developer joined us with preconceived ideas about how software is developed. Unfortunately, software development is pretty inefficient in most places, and this is one of the main reasons we started Shippable. We do things differently: we focus on shipping code faster and faster, and we hold some principles very close to our heart.

One of our strongest beliefs is in pure REST APIs. This means we follow the cardinal rule: thou shalt not make multiple calls to DB objects from inside a single RESTable route. So when a new developer joins our team, their first question is: why do we call our API from within our API?
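Here is a minimal sketch of what that rule looks like in code. The tables, routes, and base URL are hypothetical, not our actual services; the point is that each route makes exactly one DB call, and anything else it needs comes from another route over HTTP:

```typescript
import { Pool } from "pg";

// Sketch of the "one DB call per RESTable route" rule. Table names, routes,
// and the base URL are illustrative.
const pool = new Pool({ connectionString: process.env.APP_DB_URL });
const API = "https://api.example.com"; // hypothetical internal base URL

// GET /builds/:id -- one query against one DB object, nothing more.
async function getBuild(buildId: string) {
  const result = await pool.query(
    'SELECT id, project_id AS "projectId", status FROM builds WHERE id = $1',
    [buildId]
  );
  return result.rows[0];
}

// GET /builds/:id/project -- instead of joining builds and projects in SQL,
// this route reuses the existing /projects/:id route. This is why our API
// ends up calling our API.
async function getBuildProject(buildId: string) {
  const build = await getBuild(buildId);
  const res = await fetch(`${API}/projects/${build.projectId}`);
  if (!res.ok) {
    throw new Error(`GET /projects/${build.projectId} failed: ${res.status}`);
  }
  return res.json();
}
```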

Our journey to microservices: mono repo vs multiple repositories

Microservices are currently the hottest topic in software development. The concept is simple: break your application down into smaller pieces that each perform a single business function and can be developed and deployed independently. These pieces, commonly called services, can then be assembled into an application using some flavor of service discovery and routing, such as NGINX or Consul. The microservices approach is considered the architecture of choice for teams that want to build scalable platforms and rapidly and efficiently innovate on them.

As infatuated as I am with this architecture, our journey to microservices was a long and winding road. It has finally led us to a version of the architecture that gives us the scalability and agility we require as a business. I want to share my thoughts, experiences, and lessons learned in a series of blogs around this topic so you may benefit from our experiences. Also, I would love to get your feedback or comments on our approach.

When you start moving to microservices, the first question before you write a single line of code is: How do you organize your codebase? Do you create a repository for each service, or do you create a single ‘mono repo’ for all services? The two approaches are illustrated below: