Alexander Beletsky's development blog

My profession is engineering

Continuous Delivery: Overview and benefits

I first heard about continuous production systems about 5 years ago. I was amazed by the simple idea: not only build the binaries and run tests against them, but also generate documentation, deploy and release! Wow. At that time, though, I was working on a product distributed on CDs, and it was not really easy to build Continuous Production around that. The web changed the way software is distributed: you only need to update one “place” and all users immediately receive the latest version. Continuous Production suits web product development very well. A month ago I created my own Continuous Production configuration, and now I would like to share some thoughts on the topic.

(Integration + Deployment) * Continuous = Continuous Production

As I see it, the formula of continuous production is simple: decide how to do integration, decide how to do deployment, and make it run continuously. The process is triggered by an event, and the event is raised by some event source.
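To make that formula a bit more concrete, here is a minimal sketch of the loop in Python. It is purely illustrative: the deploy.bat name and the function are placeholders, not my actual configuration, which I describe below.

```python
# continuous_production_sketch.py -- purely illustrative, not my real pipeline.
# On every "push" event from the event source (GitHub in my case), do the
# integration step; if it succeeds, do the deployment step.
import subprocess

def on_push_event(branch):
    # Integration: build the binaries, run the tests, produce a package.
    integration = subprocess.run("build.bat", shell=True)
    if integration.returncode != 0:
        print("Integration failed - nothing is deployed.")
        return

    # Deployment: push the package to the target environment
    # (deploy.bat is a made-up name here).
    deployment = subprocess.run(f"deploy.bat {branch}", shell=True)
    print("Deployed." if deployment.returncode == 0 else "Deployment failed.")

# The build server (Jenkins) is what actually listens for the event and runs
# something equivalent to this on every commit.
if __name__ == "__main__":
    on_push_event("master")
```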

What did I use for that?

It is basically four components: UppercuT and RoundhousE for integration and deployment, Jenkins as the build server, and GitHub as the event source and SCM.

You should check my previous posts about the configuration of UppercuT and RoundhousE. It is rather simple and allows me to version all assemblies and the database, build all binaries and the web site, run the tests and put all build artifacts into a single package. It also generates the deployment and database migration scripts.
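Just to illustrate the “single package” idea (this is only a rough sketch with made-up folder names, not what UppercuT actually does internally): after a successful build, everything the deployment needs ends up in one versioned archive.

```python
# package_sketch.py -- a rough illustration of "one versioned package per build".
# The folder names and the version value are made up; UppercuT does this for real.
import zipfile
from pathlib import Path

VERSION = "1.2.0.345"                                        # version stamped by the build
ARTIFACTS = ["build_output", "deployment", "db_migrations"]  # hypothetical artifact folders

def make_package(version, folders):
    package = Path(f"package_{version}.zip")
    with zipfile.ZipFile(package, "w", zipfile.ZIP_DEFLATED) as archive:
        for folder in folders:
            for file in Path(folder).rglob("*"):
                if file.is_file():
                    # Keep the relative path inside the archive.
                    archive.write(file, arcname=file.as_posix())
    return package

if __name__ == "__main__":
    print(f"Created {make_package(VERSION, ARTIFACTS)}")
```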

I was really happy with Jenkins. It is easy to install, understand and configure. Even though it is a Java application, it works fine with .NET and has a huge number of plugins for every need (batch build, NAnt, Git, SVN, MSBuild etc.).

Why should I use that?

OK, to explain the value, I will describe my production process before and after the implementation of CP.

Before

  1. Prepare release branch and merge all required changes there
  2. Update version in uppercut.config
  3. Commit changes to SCM
  4. Run build.bat
  5. FTP package to deployment server
  6. RDP to deployment server
  7. Unpack the .zip content to a temp folder
  8. Manually backup staging database
  9. Stop Stage Web site in IIS manager
  10. Run migration scripts for staging database
  11. Run deployment scripts for staging environment
  12. Start Stage Web site in IIS manager
  13. Test manually on the staging server that the build works fine
  14. If something was missed (which happens in about 60% of cases), go to 1
  15. Manually backup production database
  16. Stop Production Web site in IIS manager
  17. Run migration scripts for production database
  18. Run deployment scripts for production environment
  19. Start Production Web site in IIS manager
  20. Test manually on the production server that the build works fine

Depending on how lucky I was, it took from 0.5 to 1.5 hours to update the production server. Moreover, because these are all manual changes, the down time of the web site was about 3-5 minutes. Sad figures.

After

  1. Prepare release branch and merge all required changes there
  2. Update version in uppercut.config
  3. Commit changes to SCM

That’s it! The rest of the steps are automated by the CP server.
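To give an idea of what “the rest of the steps” looks like once it is scripted, here is a simplified sketch that mirrors steps 8-12 of the manual list above. The site name, paths and script names are placeholders; the real deployment and migration scripts are generated by UppercuT and RoundhousE.

```python
# deploy_sketch.py -- a simplified sketch of what the CP server automates for one
# deployment. The site name, paths and script names below are placeholders; the
# real deployment and migration scripts come out of the UppercuT/RoundhousE build.
import subprocess
import zipfile

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"   # IIS command-line tool
SITE = "StageWebSite"                                # hypothetical IIS site name
PACKAGE = r"C:\deploy\package_1.2.0.345.zip"         # package produced by the build
WEB_ROOT = r"C:\inetpub\StageWebSite"                # hypothetical web root

def run(command):
    # Fail the whole deployment if any single step fails.
    subprocess.run(command, check=True, shell=True)

run(r"C:\deploy\scripts\backup_database.bat")         # 1. backup the database
run(f'"{APPCMD}" stop site /site.name:"{SITE}"')      # 2. stop the web site
run(r"C:\deploy\scripts\migrate_database.bat")        # 3. run database migrations
with zipfile.ZipFile(PACKAGE) as archive:             # 4. unpack the new version
    archive.extractall(WEB_ROOT)
run(f'"{APPCMD}" start site /site.name:"{SITE}"')     # 5. start the web site
```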

It takes from 1 to 2 minutes in total, and the down time of the site is now only about 1 second. That means the velocity of “going live” improved up to 45x (90 minutes down to 2 minutes), and site down time improved up to 300x (5 minutes down to 1 second). Taking into account that I spent about 10 hours configuring the whole system, I would say that was a pretty good investment of time. My staging server is updated every time I push new code changes, so I can immediately test and correct it. The production server update is run manually, with only one button click, as soon as I have a stable release branch.
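As a side note, that “one button” does not have to be a button at all: Jenkins also exposes a remote API, so the production job can be kicked off from a script. The server URL and job name below are made up for illustration, and a real instance will likely also require authentication.

```python
# trigger_production_deploy.py -- firing the Jenkins job from a script instead of
# the UI button. The server URL and job name are made up for illustration.
import urllib.request

JENKINS_URL = "http://build-server:8080"
JOB_NAME = "production-deploy"   # hypothetical Jenkins job name

request = urllib.request.Request(f"{JENKINS_URL}/job/{JOB_NAME}/build", method="POST")
# Add an Authorization header (or API token) here if the instance requires login.
with urllib.request.urlopen(request) as response:
    print("Triggered, HTTP status:", response.status)
```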

Further reading