Shippable 6.5.1 Is Live: Satisfy Your Need For Speed!

- By Manisha Sahasrabudhe on May 30, 2018

We are excited to announce the launch of Shippable 6.5.1. You can find the release notes here: 6.5.1 Release Notes.

This release is geared towards making your CI and CD processes much faster and more efficient. Read on to discover some of the major features released today, including node caching that lets you cache Docker images and everything else on the node, faster nodes with more memory, and the ability to rerun only failed jobs in a build matrix.


Node caching

This is by far the most important piece of functionality we've launched in months! If turned on, your build nodes (minions) will be paused between jobs instead of being terminated and spun up again. This not only saves node spin-up time, but also means any Docker images you pulled or built during your CI or CD workflows remain available on your nodes.

Node caching is most useful for customers taking the following actions as part of their CI/CD workflows:

  • Building a Docker image: Your Docker layers are cached between builds, so Docker builds are blazing fast.
  • Using a custom Docker image: Your Docker image will be cached with the node, and only layers that have changed will be pulled again.
  • Pulling a Docker image: Similar to using a custom Docker image, your images will be cached with the node and will not need to be pulled each time unless they change.
  • Pulling large dependencies: If your build needs to pull large dependencies, you can copy them to a mounted volume so they are preserved between builds instead of being pulled each time (see the sketch below).
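For instance, a minimal sketch of what this could look like in the ci section of shippable.yml is shown below. The /tmp/host_cache path is just a placeholder for whatever host-mounted directory is available on your node, not a Shippable default, so substitute your own mount point:

```yaml
build:
  ci:
    # Restore dependencies from the host-mounted volume if a cached
    # copy exists (/tmp/host_cache is a placeholder path, not a default).
    - if [ -d /tmp/host_cache/node_modules ]; then cp -r /tmp/host_cache/node_modules .; fi
    - npm install
    - npm test
    # Copy dependencies back to the mounted volume so the next build
    # on this cached node can reuse them.
    - mkdir -p /tmp/host_cache
    - cp -r node_modules /tmp/host_cache/
```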

Node caching also makes sense for customers whose CI/CD jobs are super quick, since they no longer waste time waiting for a build node to spin up before executing their job.

To avoid situations where nodes run out of disk space, you can configure your Subscription to recycle nodes on a calendar schedule or at a specified interval. By default, nodes are recycled when they start running out of storage, i.e. when less than 20% of their disk space is free.

Node caching is a premium feature that can be purchased for any on-demand Node SKU for an additional $25/node per month.

Try it out today and if you're not satisfied, we'll be happy to refund the extra cost if you cancel within the first 14 days!


Faster nodes with more memory

With this release, our build nodes can spin up in less than a minute, a 3-4x improvement over our earlier spin-up times.

We have also doubled the memory on all our on-demand nodes that use Ubuntu or CentOS. The specs are as follows:

Size | CPU     | Memory | Price           | Price with caching
L    | 2 cores | 7.5 GB | $25/node/month  | $50/node/month
XL   | 4 cores | 15 GB  | $75/node/month  | $100/node/month
2XL  | 8 cores | 30 GB  | $150/node/month | $175/node/month


Customers on the free plan will get one L node, with a restriction of 150 builds/month for private repositories.


Running builds directly on the machine  

A few customers had requested the ability to run jobs directly on the host machine instead of inside a build container. This has been supported for a few releases now with runSh jobs, but I wanted to include it here ICYMI. Using the container: false runtime setting, you can configure your job to run directly on the host machine. This is helpful for customers who want to avoid using containers for their builds, and when combined with node caching, it gives you faster builds without needing to worry about what runs inside a container and what doesn't.
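As a rough sketch, a runSh job set up to run on the host could look like the snippet below. The job name and commands are made up for illustration, and the exact placement of the runtime block may vary, so check the runSh documentation against your setup:

```yaml
jobs:
  - name: run_on_host          # hypothetical job name
    type: runSh
    runtime:
      container: false         # run the steps directly on the build node
    steps:
      - TASK:
          script:
            - echo "Running directly on the host machine"
            - docker images    # the host Docker image cache is visible here
```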


Retrying failed matrix builds

This is another key feature that has been on our radar for over a year, and has been requested several times during conversations with customers, as well as through the support repository: Issue 2680 and Issue 794.

If you have a CI workflow that leverages matrix builds, you might have one or two jobs in the matrix that fail for a temporary reason unrelated to your code, such as a network glitch or npm mirrors being down. In these cases, you want to retry the failed jobs, but not necessarily the jobs that succeeded. The new Retry failed jobs option lets you do exactly that!
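To picture the kind of matrix this applies to, here is a minimal sketch of a shippable.yml that fans out into five jobs, one per Node.js version listed (the versions are chosen purely for illustration):

```yaml
language: node_js

# Each version below becomes one job in the build matrix,
# so this configuration produces a matrix of 5.
node_js:
  - "4"
  - "6"
  - "7"
  - "8"
  - "9"

build:
  ci:
    - npm install
    - npm test
```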

To make things clear, say you have a build matrix of 5. For build #450, you see that 450.3 and 450.4 failed, while 450.1, 450.2, and 450.5 succeeded. You click Rerun failed jobs for build #450. Here is what happens:

  • A new build is spun up; let's assume it is #451.
  • All jobs that were successful (450.1, 450.2, and 450.5) are simply copied over to the new build as 451.1, 451.2, and 451.5, along with their status.
  • The failed jobs (450.3 and 450.4) are queued to run again as 451.3 and 451.4.
  • The build status for #451 is the aggregate of the copied-over jobs and the rerun jobs.
  • The new build is clearly marked as a rerun of the original build.
  • If build #450 was for a pull request, build #451 will overwrite the pull request status in your source control provider.

While this is immensely useful and saves time for large matrix builds, there is one small caveat. If some time passes between the original build and the rerun, the successful status of the copied-over jobs might no longer be valid due to merges and other changes made in the meantime, so rerunning only failed jobs can give you a false sense of security. For this reason, if a significant amount of code has changed since the original build, we recommend rerunning the whole build instead of only the failed jobs.


Keep up-to-date with release notes

We publish release notes on a weekly basis here: Shippable Release Notes. We will also publish a similar blog post for each release to recap some of the more interesting features.

If you're not currently a customer and are interested in trying Shippable, you can sign in and start using it for free. Or schedule a demo to chat with us and get a walk-through of the platform and features:

Schedule a demo

Happy Shipping!

Topics: features, release notes, continuous delivery, continuous integration (CI), devops