Alexander Beletsky's development blog

My profession is engineering

Continuous Delivery: Setup and run

In my previous post I tried to make the case that continuous production is a good thing. In this post I'll show how to prepare an environment for continuous production. The idea is that you should be able to configure and test it locally. All configuration has to be part of the source code, under SCM. It should not depend on a particular machine and should run in any environment you like. The success criterion is: take a clean machine, do a checkout, run build.bat/deploy.bat, and end up with an installed web application.

Integration and database deployment

As I said, UppercuT and RoundhousE are really nice tools for this. As soon as you follow the instructions, you will have a build.bat that builds all binaries, runs the tests against them and puts all build artifacts into a package. That is a pretty good start, but we are still missing the "deployment" part.

Once you are a little more familiar with UppercuT, you'll find it provides good facilities for deployment as well. Basically, there is a deployment\templates\ folder where you can define your custom deployment scripts. A typical web application requires 2 scripts:

  • AppDeployment.bat - for web site deployment
  • DbDeployment.bat - for database deployment

These files are templates from which the scripts for a particular environment are generated. The environment is defined in the settings folder and includes information such as the deploy folder, web site name, database name and server name as a number of variables. For example:

<?xml version="1.0" encoding="utf-8" ?>
<project name="Settings">
  <!-- environment settings -->
  <property name="environment" value="PRODUCTION" />
  <!-- servers -->
  <property name="server.database" value=".\SQLEXPRESS" />
  <property name="web.deploy.folder" value="c:\trackyt.net\web\" />
  <property name="web.site.name" value="trackyt.net" />

  <property name="database.name" value="trackytdb" />
  <property name="log.level" value="DEBUG" />
  <property name="app.user.name" value="alexander.beletsky" />

  <!-- base settings -->
  <property name="project.name" value="trackyt.net" overwrite="false" />
  <property name="repository.path" value="git://github.com/alexbeletsky/trackyt.net" />
  <property name="folder.app.drop" value="${project.name}" overwrite="false" />
  <property name="folder.database" value="db" overwrite="false" />

  <!-- database deployment -->
  <property name="dirs.db" value="..\${folder.database}" />
  <property name="file.version" value="_BuildInfo.xml" overwrite="false" />
  <property name="restore.from.path" value="..\${database.name}.bak" overwrite="false" />

</project>

In a template .bat file it is possible to refer to a particular variable, which makes the templates quite generic. After the build, the template .bat files are post-processed and the actual batch files are generated. The name will look like ENVIRONMENT.AppDeployment.bat, where ENVIRONMENT is the environment type you defined.
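To make the token replacement idea concrete, here is a rough Python sketch of what such a post-processing step does. The function and sample settings are illustrative only; UppercuT's actual implementation differs.

```python
import re

def render_template(template_text, settings):
    """Replace every ${name} token with its value from the settings."""
    return re.sub(r"\$\{(\w[\w.]*)\}",
                  lambda m: settings[m.group(1)], template_text)

# Values taken from the example settings file above
settings = {"web.site.name": "trackyt.net", "environment": "PRODUCTION"}

line = "call appcmd stop site ${web.site.name}"
print(render_template(line, settings))  # call appcmd stop site trackyt.net

# The generated file is then named after the environment
print(settings["environment"] + ".AppDeployment.bat")
```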

Web site deployment script

If you build an ASP.NET (MVC) web site, in 99.9% of cases you will be happy with simple XCOPY deployment. Basically it means simply copying the web site into the configured IIS folder.

But since this is a continuous production deploy, the web site is already running. It would not be possible to overwrite some files, since they could be in use by IIS. So we need to stop the site before the update. I found that the %windir%\system32\inetsrv\appcmd command works very well for this. After the site is stopped, we copy the full content of the Web folder, remove some redundant files and start the site again. In batch code it looks like this:

@echo off

SET DIR=%~d0%~p0%

SET web.deploy.folder="${web.deploy.folder}"

echo stopping web site..
call %windir%\system32\inetsrv\appcmd stop site ${web.site.name}
if %ERRORLEVEL% NEQ 0 goto errors

echo copy application content
rmdir /s /q %web.deploy.folder%
xcopy /E /F /H /R ..\_PublishedWebSites\Web %web.deploy.folder%
xcopy ..\build_artifacts\_BuildInfo.xml %web.deploy.folder%
if %ERRORLEVEL% NEQ 0 goto errors

echo remove redundant files
del %web.deploy.folder%*Tests*.htm*
del %web.deploy.folder%Web.Debug.config
del %web.deploy.folder%Web.Release.config
del %web.deploy.folder%*packages*
if %ERRORLEVEL% NEQ 0 goto errors

echo starting web site
%windir%\system32\inetsrv\appcmd start site ${web.site.name}
if %ERRORLEVEL% NEQ 0 goto errors

goto finish

:errors
EXIT /B %ERRORLEVEL%

:finish

Database deployment script

RoundhousE does all the infrastructure work for us. All we need is to create a batch file that can run during the continuous production cycle. As with AppDeployment.bat, I defined DbDeployment.bat in the deployment\templates\ folder. But before any database update it is always good to have a backup, so you can restore from it if something goes wrong. Actually, RoundhousE should have such an ability, but unfortunately I didn't figure out how to use it. So I created a simple SQL script that does the backup.

USE $(Database);
GO
BACKUP DATABASE $(Database)
TO DISK = 'C:\backup\$(Database).bak'
   WITH FORMAT,
      MEDIANAME = 'C_SQLServerBackups',
      NAME = 'Full Backup of $(Database)';
GO

And the corresponding batch file that runs backupdb.sql:

@echo off

if '%1' == '' goto usage
if '%2' == '' goto usage

sqlcmd -S %1 -i .\scripts\backupdb.sql -v Database = %2 -e
if %ERRORLEVEL% NEQ 0 goto errors

goto finish

:usage
echo.
echo Usage: backupdb.bat [server] [database]
echo [server] - server eg. mymachine\SQLEXPRESS
echo [database] - name of database to backup
echo.
EXIT /B 1

:errors
EXIT /B %ERRORLEVEL%

:finish

Both files are placed into the deployment\scripts folder. The DbDeployment.bat template first runs the database backup and, if it is successful, runs RoundhousE to update the database.

@echo off

SET DIR=%~d0%~p0%

SET database.name="${database.name}"
SET sql.files.directory="${dirs.db}"
SET server.database="${server.database}"
SET repository.path="${repository.path}"
SET version.file="${file.version}"
SET version.xpath="//buildInfo/version"
SET environment="${environment}"

echo backup database
call .\scripts\backupdb.bat %server.database% %database.name%
if %ERRORLEVEL% NEQ 0 goto errors

echo update database
"%DIR%rh\rh.exe" /d=%database.name% /f=%sql.files.directory% /s=%server.database% /vf=%version.file% /vx=%version.xpath% /r=%repository.path% /env=%environment% --ni --simple
if %ERRORLEVEL% NEQ 0 goto errors

goto finish

:errors
EXIT /B %ERRORLEVEL%

:finish

Putting it all together

We already have build.bat as part of UppercuT; now we need to define a deploy.bat that deploys the product. It is called immediately after build.bat finishes, so the binaries are ready, the tests have passed and the code_drop folder contains all artifacts for deployment. The script is rather simple and utilizes the pieces we created previously.

@echo off

if '%1' == '' goto usage

SET ENV=%1

cd .\code_drop\deployment

echo Deploy database
call .\%ENV%.DbDeployment.bat
if %ERRORLEVEL% NEQ 0 goto errors

echo Deploy application
call .\%ENV%.AppDeployment.bat
if %ERRORLEVEL% NEQ 0 goto errors

goto finish

:usage
echo.
echo trackyt.net deploy script
echo Usage: deploy.bat [environment]
echo [environment] - deployment environment could be STAGING or PRODUCTION
echo.
EXIT /B 1

:errors
echo Build FAILED
EXIT /B %ERRORLEVEL%

:finish
echo Build SUCCESS

Notice that it receives the ENV parameter, which contains the type of environment to deploy to. For the staging environment you should call deploy.bat STAGING; for production, deploy.bat PRODUCTION.

Testing it out

That's basically it. Now you should make sure everything works as expected. Run build.bat/deploy.bat and verify that the build completes with no errors and that deploy.bat correctly backs up the database, updates the database and updates the site content.

As I said at the top, it is very important that the configuration is part of the product, part of the source code. If you follow this, it will be possible to deploy the application just by getting the sources from SCM. This is the first step of setting up your Continuous Production server.

Continuous Delivery: Overview and benefits

I first heard about continuous production systems about 5 years ago. I was amazed by the simple idea: not only build binaries and run tests against them, but also generate documentation, deploy and release! Wow. But at that time I was working on a product distributed on CDs, and it was not really easy to build Continuous Production around that. The web changed the way software is distributed - you only need to update one "place" and all users immediately receive the latest version. Continuous Production suits web product development very well, and a month ago I created my own Continuous Production configuration; now I would like to share some thoughts on that topic.

(Integration + Deployment) * Continuous = Continuous Production

For me, the formula of continuous production is simple: decide how to do integration, decide how to do deployment, and make it run continuously. The process is triggered by an event, and the event is raised by some event source.

What did I use for that?

It is basically 4 components: UppercuT and RoundhousE for integration and deployment, Jenkins as the build server, and GitHub as the event source and SCM.

You should check my previous posts about the configuration of UppercuT and RoundhousE. It is rather simple and allowed me to version all assemblies and the database, build all binaries and the web site, run the tests and put all build artifacts into a single package. It also generates deployment and database migration scripts.

I was really happy with Jenkins. It is easy to install, understand and configure. Even though it is a Java application, it works fine with .NET and has a huge number of plugins for every need (batch build, NAnt, Git, Svn, MSBuild etc.).

Why should I use that?

OK, to explain the value, I will describe my production process before and after implementing CP.

Before

  1. Prepare release branch and merge all required changes there
  2. Update version in uppercut.config
  3. Commit changes to SCM
  4. Run build.bat
  5. FTP package to deployment server
  6. RDP to deployment server
  7. Unpackage .zip content to temp folder
  8. Manually backup staging database
  9. Stop Stage Web site in IIS manager
  10. Run migration scripts for staging database
  11. Run deployment scripts for staging environment
  12. Run Stage Web site in IIS manager
  13. Test manually on the staging server that the build works fine
  14. If something was missed (note: this is 60% of all cases), go to 1
  15. Manually backup production database
  16. Stop Production Web site in IIS manager
  17. Run migration scripts for production database
  18. Run deployment scripts for production environment
  19. Run Production Web site in IIS manager
  20. Test manually on the production server that the build works fine

Depending on how lucky I was, it took from half an hour to an hour and a half to update the production server. Moreover, because these were all manual changes, the down time of the web site was about 3-5 minutes. Sad figures.

After

  1. Prepare release branch and merge all required changes there
  2. Update version in uppercut.config
  3. Commit changes to SCM

That's it! The rest of the steps are automated by the CP server.

It takes from 1 to 2 minutes in total, and the site down time is now only about a second. That means the velocity of "going live" improved roughly 45x, and the site down time improved roughly 300x. Taking into account that I spent about 10 hours configuring the whole system, I would say that was a pretty good investment of time. My staging server is updated every time I push new code changes, so I can immediately test and correct. The production server update is run manually, with a single button click, as soon as I have a stable release branch.

Prevention of JS/CSS content to be cached

To avoid wasting time in the pauses between AgileBaseCamp talks, I created a utility that I had wished for but never had time to write. It is very simple and solves the caching issue for CSS and JS that we have all suffered from many times. Because every modern browser tries to cache static content to make site rendering faster, we fall into a trap when CSS or JS is updated: after deployment to production, customers still see no changes, because they use the previous versions of the files.

Classical way

The classical way of solving the issue is to add version info to the content URL and update it after each production update.

<link rel="stylesheet" href="Content/public.css?v=123" type="text/css" media="all" />
<script src="Scripts/script.js?v=123" type="text/javascript"></script>

The problem is that you have to remember to update the version manually before each deployment to production. That was something I was doing a lot, and I finally decided to automate it.

Better way

The idea is very simple: create a helper that takes the version from the assembly and appends it to the resource URL. Now, in markup, I write:

<link rel="stylesheet" href="@Url.ContentWithVersion("~/Content/public.css")" type="text/css" media="all" />
<script src="@Url.ContentWithVersion("~/Scripts/script.js")" type="text/javascript"></script>

Code

using System.Reflection;
using System.Web.Mvc;

namespace Web.Helpers.Extensions
{
    public static class UrlContentWithVersionExtension
    {
        private static string _currentAssemblyVersion;

        public static string ContentWithVersion(this UrlHelper helper, string path)
        {
            var contentPath = helper.Content(path);
            var assemblyVersionString = GetAssemblyVersionString();

            return string.Format("{0}?ver={1}", contentPath, assemblyVersionString);
        }

        private static string GetAssemblyVersionString()
        {
            if (_currentAssemblyVersion == null)
            {
                var currentAssemblyVersion = Assembly.GetExecutingAssembly().GetName().Version;
                _currentAssemblyVersion = currentAssemblyVersion.ToString().Replace(".", "");
            }

            return _currentAssemblyVersion;
        }
    }
}

And don't forget that each page that uses the extension method should import this namespace:

<!-- For MVC2 -->
<%@Import Namespace="Web.Helpers.Extensions" %>

<!-- For MVC3 (Razor) -->
@using Web.Helpers.Extensions

Example of usage

As always, all the code is on GitHub: the extension itself is here, and the commit where I integrated the changes is here (it would be useful to review it before making your own changes).

AgileBaseCamp Kiev 2011: Dmitry Mindra: Design by Contract in .NET

Disclaimer: the text below is a compilation of notes I made at the AgileBaseCamp 2011 conference while listening to different speakers. I do this to keep the knowledge I gained at the conference and to share it with my colleagues and anyone else who is interested. It reflects only how I heard, interpreted and wrote down the original speech, and it includes my subjective opinion on some topics, so it may not 100% reflect the speaker's opinions and original ideas.

Dmitry Mindra gave a very nice talk dedicated to Code Contracts. He explained what Code Contracts are and, more importantly, how to "design by contract". This talk was especially interesting to me, since I had never heard of or used this type of design.

Design by contract in short

Design by contract was introduced by Bertrand Meyer, one of the original creators of the Eiffel language. No surprise that contracts were first added to Eiffel. A contract consists of 3 parts: preconditions, postconditions and invariants.

  • Preconditions - check that the client does everything OK
  • Postconditions - check that the server does everything OK
  • Invariants - check that the state of the object is valid for the next call

In Eiffel each method has special sections (require, do, ensure) where it is possible to put the contract definition.

By specifying the contract we specify exact constraints for the code. Typically methods have some input parameters and return results to the outside world. For instance, if we have a class Account with a method Withdraw, we expect that the argument Withdraw receives is always positive and that the returned result is also greater than zero.

Wait, I can already specify a contract with my beloved exceptions

Many of us are used to writing code like this:

public Amount Withdraw(Amount amountToWithdraw)
{
    if (amountToWithdraw == null)
    {
        throw new ArgumentNullException("amountToWithdraw");
    }

    if (amountToWithdraw.Value <= 0)
    {
        throw new ArgumentException("amountToWithdraw.Value");
    }

    var result = // .. calculate operation here ..

    if (result.Value <= 0)
    {
        throw new AccountOperationException();
    }

    return result;
}

It is definitely a way of creating a robust application (provided all those conditions are tested and the client code properly handles the exceptions). But it has several drawbacks:

  • It is only a run-time mechanism
  • It is not possible to turn the checks on or off
  • This type of contract is "method-wide"; how can you specify a "class-wide" contract?
  • All those if/throw statements are "code garbage" that makes the code difficult to read

What we have in .NET?

The Common Language Runtime (CLR) team introduced a library that allows programming with contracts in the Microsoft .NET Framework 4. Adding them as a library allows all .NET languages to take advantage of contracts. This is different from Eiffel or Spec#, a language from Microsoft Research (research.microsoft.com/en-us/projects/specsharp/), where the contracts are baked into the language. The Code Contracts system in .NET consists of 4 parts:

  • System.Diagnostics.Contracts contract library where Contract class is defined.
  • ccrewrite.exe tool that modifies the Microsoft Intermediate Language (MSIL) instructions of an assembly to place the contract checks where they belong.
  • cccheck.exe contract static checker, that examines code without executing it and tries to prove that all of the contracts are satisfied.
  • ccrefgen.exe which will create separate contract reference assemblies that contain only the contracts.

Why is it useful?

Let's rewrite the previous example with Code Contracts:

public Amount Withdraw(Amount amountToWithdraw)
{
    // preconditions..
    Contract.Requires<ArgumentNullException>(amountToWithdraw != null);
    Contract.Requires(amountToWithdraw.Value > 0);

    // postconditions (declared up front; checked when the method returns)
    Contract.Ensures(Contract.Result<Amount>().Value > 0);

    // action
    var result = // .. calculate operation here ..

    return result;
}

I like the style: this code reminds me a little of the TDD AAA (arrange/act/assert) pattern, the code is smaller, and we get the benefits of .NET Code Contracts.

What benefits here?

  • Static code analysis - during compilation the code is checked against the contracts; if somewhere I wrote account.Withdraw(null), it would be immediately caught by cccheck.exe
  • Runtime checking - if an error is missed at compile time, it is caught at run time and a proper exception is thrown
  • I can configure whether contract checks are included in the final assembly

Ok, I want to use it

To have full support for Code Contracts you need .NET Framework 4.0 and Visual Studio Premium or Ultimate. So Code Contracts are pretty expensive stuff. As far as I understood from Dmitry's talk, it is possible to use System.Diagnostics.Contracts without them, but only limited capabilities are enabled.

So contracts are great for building life-critical and highly robust applications, where the quality/cost ratio makes sense for you.

AgileBaseCamp Kiev 2011: Vitaly Stakhov: Working safety net

Disclaimer: the text below is a compilation of notes I made at the AgileBaseCamp 2011 conference while listening to different speakers. I do this to keep the knowledge I gained at the conference and to share it with my colleagues and anyone else who is interested. It reflects only how I heard, interpreted and wrote down the original speech, and it includes my subjective opinion on some topics, so it may not 100% reflect the speaker's opinions and original ideas.

Vitaly is a .NET developer who shared really nice ideas about creating a "safety net" of unit tests. He works in a software development shop with no testers in it. Despite that, they manage to keep quality at an acceptable level by means of test-driven development practices and the implementation of a safety net of tests around the product code base.

Many TDD practitioners have heard this term before, but not many fully understand it. In general, a "safety net" does not mean 100% code coverage, nor does it necessarily mean following "test first" principles when writing code. The safety net is a metaphor for the net used by acrobats in the circus. Acrobats do dangerous tricks close to the circus ceiling (that's a bit high). The more complex the trick, the higher the chance of falling, but if you fall onto a safety net you stay alive. So, with a safety net you fail, but you don't die.

Another good metaphor concerns the actual position of the safety net. If it is put too low (close to the floor), the resilience of the net might not be enough to compensate for your kinetic energy during the fall; if it is put too high (close to the ceiling), it starts to lose its sense and only interferes with doing the tricks.

Let's look one more time at why we use TDD and work so hard to create tests:

  • test-driven design - indeed, by having testable code you very much affect the code design. Design is a key factor through the whole life-cycle of a project
  • safety net creation - on the other hand, we do testing to follow the "cover my ass" principle; having good test coverage makes us feel really comfortable and confident with the application

In real life these two factors can contradict each other. When we originally create the code with TDD, we typically put too many details into the test cases. We verify method calls, expected exceptions, argument values etc. It works perfectly at the beginning. But as soon as we start refactoring, it can turn into a situation where a lot of cases become red just because implementation details changed! It leads to a situation where the changes are applied and the application is still in a workable state, but the tests fail (the safety net is put too high). That makes the tests fragile, and fragile tests are bad, because they make you feel like "tests are not useful". Such tests are typically deleted, commented out or skipped. All of this is just putting the safety net low, close to the floor. For me this is even more dangerous than putting it too high.

What to do? The way to mitigate this is to control the safety net position all the time. It is very useful to distinguish 2 types of tests:

  • Interaction - tests that we create mainly at the "design" stage of the application (detailed tests)
  • Component - tests that we create mainly to adjust the safety net position (behavior tests)

The point is that interaction tests are "classical" unit tests that we create for a particular method/class. They work great at the design stage of development, when we have no model at all and don't know what the internal structure will look like. But, as said, the problem is that interaction tests are too detailed, and as soon as a small implementation detail changes, those tests become red (I find a small analogy with the blog post I wrote recently about functional tests).

To mitigate that, we look at the same problem from a different view - components. With component tests we move the focus from details to behavior. For instance, in the case of moving money from one account to another, we are not particularly interested in which methods are called; we are interested in whether the amount was really moved. The type of test that actually checks that is called a component test.
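To illustrate the difference, here is a hypothetical sketch in Python, with toy Bank/Account classes I made up for the example (they are not from the talk):

```python
from unittest.mock import MagicMock

class Account:
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        self.balance -= amount
    def deposit(self, amount):
        self.balance += amount

class Bank:
    def transfer(self, src, dst, amount):
        src.withdraw(amount)
        dst.deposit(amount)

def interaction_test():
    # Verifies which methods were called - goes red if transfer()
    # is refactored to use different calls, even if behavior is intact
    src, dst = MagicMock(), MagicMock()
    Bank().transfer(src, dst, 100)
    src.withdraw.assert_called_once_with(100)
    dst.deposit.assert_called_once_with(100)

def component_test():
    # Verifies behavior only: the money actually moved
    src, dst = Account(500), Account(0)
    Bank().transfer(src, dst, 100)
    assert (src.balance, dst.balance) == (400, 100)
```

The interaction test pins down implementation details; the component test stays green as long as the money really moves, which is exactly the safety net position we want.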

As soon as a component test is defined, we are no longer interested in the interaction tests. Interaction tests become overhead and only slow down the development process through continuous fixing of failed tests. So, once we have component tests, it is possible to simply remove the interaction tests altogether.

Back to our safety net analogy: with a lot of interaction (detailed) cases we are putting the net too high, and we should move it lower by getting rid of the overly detailed cases. To compensate, it is important to have behavior tests that verify the application is still in a workable state. This should be a really careful and continuous process.

AgileBaseCamp Kiev 2011: Sergey Dmitriev: Setting up priorities with Kano analysis

Disclaimer: the text below is a compilation of notes I made at the AgileBaseCamp 2011 conference while listening to different speakers. I do this to keep the knowledge I gained at the conference and to share it with my colleagues and anyone else who is interested. It reflects only how I heard, interpreted and wrote down the original speech, and it includes my subjective opinion on some topics, so it may not 100% reflect the speaker's opinions and original ideas.

Sergey Dmitriev is the first Russian-speaking Certified Scrum Trainer. At AgileBaseCamp he presented a scientifically based approach for prioritizing the backlog. I liked his talk since it was very new to me and can be applied in practice.

The approach is based on Kano analysis with a weighted factor matrix. It might sound a little complex, but the whole process is rather simple and is split into 2 stages.

Kano analysis of backlog items

In a very simplified version it is basically 2 questions, each with 5 possible answers. The questions are:

  • How do you feel if that feature is PRESENT in the product?
  • How do you feel if that feature is ABSENT from the product?

And answers are:

  • I like that
  • I expect that
  • I don’t care
  • I can live with that
  • I don’t like that

Each item in the backlog is put through those questions. It is important that real customers participate in the survey. Due to the laws of statistics, the bigger the sample, the more precise the results. Sergey has created a web application that helps to perform such surveys, KanoSurvey.

Classification of results

Based on the users' answers, and using some complex math behind the scenes, KanoSurvey splits all user stories into 6 classes:

  • Mandatory (have to be done) class - features that customers see as very important/expected, which have to be done fast
  • Linear class - features you can keep improving and still not fully satisfy customers (like the battery life of a mobile phone: if it works 3 days without recharging, it is nice, but I would like 4 days; or if I get 1 GB of storage on a new service, it is nice, but I would like 2 GB, etc.)
  • Amazing class - leadership features, something that makes you unique and produces the "wow" effect
  • Inverse class - features that you might have, but customers feel bad about them
  • Don't care class - features that customers are not interested in
  • Questionable class - these are probably mistakes, when users answer "I want it" and "I don't want it" at the same time

The value lies only in the first 3 classes of features, so we use those in the further analysis; the rest of the features are just thrown away as useless.
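The classification can be sketched roughly as follows. This is my simplified reading of the standard Kano evaluation table, using the class names from the talk; KanoSurvey's actual math may well differ.

```python
def kano_class(present, absent):
    """Classify one answer pair. present/absent are the answers to
    'how do you feel if the feature is PRESENT/ABSENT?', each one of:
    'like', 'expect', 'dont care', 'live with', 'dislike'."""
    if present == absent:
        # Same extreme answer to both questions contradicts itself
        return "questionable" if present in ("like", "dislike") else "dont care"
    if present == "like":
        return "linear" if absent == "dislike" else "amazing"
    if absent == "like" or present == "dislike":
        return "inverse"      # users would rather not have the feature
    if absent == "dislike":
        return "mandatory"    # expected: its absence is painful
    return "dont care"

print(kano_class("like", "expect"))      # amazing
print(kano_class("expect", "dislike"))   # mandatory
```

A real survey would classify each backlog item by the most frequent class across all respondents' answer pairs.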

That is the analysis done. Now it is time for weighing.

Weighing

With Kano analysis we understood which features are important for our audience. Now we need to plan the features across releases, and to do that we have to prioritize them correctly. Weighing the features helps to do the right prioritization.

To perform weighing, business criteria have to be defined. What the business criteria are depends very much on the product. It is recommended to select no more than 5, which keeps the process simple. Example criteria could be:

  • Uniqueness - weight 3
  • Sales - weight 2
  • Low costs - weight 1

Each pre-selected backlog item is then weighted against these criteria.
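As a hypothetical illustration of how such a weighted factor matrix combines scores - the 0-5 rating scale and the formula here are my assumption, not necessarily what the spreadsheet uses:

```python
# Criterion weights from the example above
CRITERIA_WEIGHTS = {"uniqueness": 3, "sales": 2, "low costs": 1}

def weighted_score(ratings):
    """ratings: criterion name -> 0-5 rating for one backlog item.
    The total is the weight-multiplied sum across all criteria."""
    return sum(CRITERIA_WEIGHTS[name] * rating
               for name, rating in ratings.items())

# An item strong on uniqueness outranks one strong on low costs
a = weighted_score({"uniqueness": 5, "sales": 1, "low costs": 0})  # 3*5 + 2*1 = 17
b = weighted_score({"uniqueness": 0, "sales": 1, "low costs": 5})  # 2*1 + 1*5 = 7
```

Items are then sorted by this score (possibly divided by the estimate in points) to propose a priority order.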

Combining the results

Sergey has created an Excel spreadsheet (available here). In the spreadsheet we specify the backlog items, their estimations in points, the Kano class and the weight. Based on this input, the spreadsheet automatically proposes the desired priority. It is also possible to see the value and cost of a particular item. This is priceless information for any Product Owner.

The presentation slides are located here.

AgileBaseCamp Kiev 2011: Alexey Krivitsky: The Future is Now

Disclaimer: the text below is a compilation of notes I made at the AgileBaseCamp 2011 conference while listening to different speakers. I do this to keep the knowledge I gained at the conference and to share it with my colleagues and anyone else who is interested. It reflects only how I heard, interpreted and wrote down the original speech, and it includes my subjective opinion on some topics, so it may not 100% reflect the speaker's opinions and original ideas.

That was the opening talk of the AgileBaseCamp conference that took place in Kiev on 16 April 2011. Alexey Krivitsky, the head of ScrumGuides, shared his vision of how Agile has evolved over the last years.

The dates of this Base Camp coincided with a major event in the Agile world: 10 years since the Agile Manifesto was signed. All of us know how software development changed with this event, but was it for good or for bad? Who knows - and Alexey tried to analyze that.

Past

Alexey shared his personal story of starting with Agile. He used to work in a big Ukrainian outsourcing company. At that time (I think it was somewhere around 2003) this company was adopting CMM3 and the customer company was adopting CMM2 itself. Alexey took part in both adoption processes and did a great job there (I know that, because I worked in the same company at that time and know those efforts were successful :). That was both good experience and a key factor in changing his mind about software production. He quit the company realizing that his vision was very different from the one dictated by CMM.

At that time Agile was not as widespread as it is now (I'm pretty sure very few people in Ukraine knew anything about Agile, Scrum, Kanban etc. back then). So, with an Alistair Cockburn book printed out on the corporate printer, he started his journey into the Agile world - not only improving his own knowledge and experience, but also sharing the information with the community.

Present

One of the software development institutes gathered statistics on project success rates in 1994 and 2004. The figures are:

  • 1994 - success rate 15%
  • 2004 - success rate 34%

This is definitely an impact of Agile. There was an interesting talk about the details of this impact at Agile 2010, Henrik Kniberg's "The essence of Agile". In slightly different words: in 10 years we've learned to "suck less".

There is a lot of information about Agile nowadays. Many, many books have become available, along with a lot of conferences, trainings and blogs. New brands have become popular, such as Software Craftsmanship, Lean Startup, Management 3.0 and more. From management factors Agile is moving to engineering and business factors. Code is still the key in software (even if a lot of people think it is not).

Future

Even with its great popularity, Agile is still not mainstream. We still talk about Agile itself a lot, and we will continue to talk for at least 2 more years. Originally Waterfall was the biggest enemy of Agile, but now this enemy is weak and it is no longer interesting to fight it :). According to Mike Cohn, we will eventually stop talking about Agile at all and simply start doing it. To reach that stage we need a crowd of people who are "on the same page" in their understanding of Agile, but that time is not now.

There is no silver bullet, but there is a super glue that unites teams - and Agile is this kind of glue.

Functional tests must not be done first

First of all, by functional tests I mean tests of application features primarily from the user (UI) perspective, using frameworks like Selenium, WatiN, FuncUnit or whatever.

I've seen cases where functional tests are implemented either first or immediately after some particular feature is implemented. I think this is just a wrong practice. Even though UI testing can definitely provide value, you should think twice before getting too deep into functional testing and making it a development process rule. Functional tests have several serious drawbacks:

  • Fragility. Functional tests are extremely fragile. You can run the test suite and get green results, re-run it 5 seconds later and get a completely red sheet. And there can be thousands of reasons why that happened.
  • Speed. They are simply slow. The more tests you have, the slower they are. The more slow tests you have, the longer the feedback cycle is. The longer the feedback cycle is, the less value you get from the tests.
  • Bad isolation. Because functional tests are integration tests by nature, it is just not possible to run them in an isolated environment. Tests become dependent on each other, which in particular leads to fragility.
  • Support. Implementing a UI test is not trivial. Even with cool frameworks like FuncUnit or WebDriver, which have strict and clear interfaces, it can be hard to test some UI features of an application. But support is much harder: a really small change in code can turn many tests red. In fact, implementing a change that costs you 1 hour can cost 1 day of correcting tests.
  • Changes. We are trying to bring agility into the development world, and one of the aspects that makes development really agile is reacting to change. Reacting to change has to be fast. Changes are an integral part of development, even if developers hate them. But as I said above, even a small change can ruin UI tests, and fixing them is not fast. It means the cost of change keeps getting more and more expensive.

So, trying to implement UI tests in early development stages, or to cover all application features with UI tests, is a waste. The value you get for the effort just goes to zero with time.

My recommendations for UI testing:

  • Rely more on unit tests - all server-side code must be covered at a different level of testing. Unit tests have to be isolated and fast.
  • Keep the view as thin as possible - ideally you should have no logic in the view at all, but modern web applications heavily use client-side (javascript) logic that affects the view. Javascript code has to be unit tested with qUnit or similar frameworks.
  • Don’t start with UI tests - UI testing has to be done last. Just because a feature has been implemented does not mean it has to be tested immediately. It has to pass PO approval and QA. Ideally it should be released to a beta group and confirmed as “this is exactly what we need”. After the feature has stabilized, a test can be added.
  • Don’t try to test everything - since UI testing costs more, you should be more careful. It makes sense to automate smoke tests, critical parts of the application or parts that have high regression.
  • Clean up tests regularly - review tests regularly; if you have a bunch of tests for some area that has worked OK for a long time, you might consider removing those tests to speed up the whole test process.

Keeping focus only on the primary features of the application and avoiding testing small stuff should be the key for functional tests.
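The first two recommendations can be sketched in code. Here is a minimal, hypothetical example (the task objects and both functions are invented for illustration, not taken from any real project) of view logic extracted into plain javascript, so it can be checked without Selenium or a browser; in a real project you would run the same assertions inside a qUnit test:

```javascript
// Logic that would otherwise live in the view, extracted into plain functions.
function completedCount(tasks) {
    var count = 0;
    for (var i = 0; i < tasks.length; i++) {
        if (tasks[i].completed) {
            count++;
        }
    }
    return count;
}

// The label the view would render, computed without touching the DOM.
function progressLabel(tasks) {
    return completedCount(tasks) + ' of ' + tasks.length + ' done';
}

// Fast, isolated checks - no browser, no page load, no shared state.
var tasks = [
    { title: 'write tests', completed: true },
    { title: 'fix build',   completed: false },
    { title: 'deploy',      completed: true }
];
console.log(progressLabel(tasks)); // prints "2 of 3 done"
```

Because the functions never touch the DOM, a suite of hundreds of such checks still runs in milliseconds, and a change to the markup cannot turn them red.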

How do I seek for knowledge?

A developer's job is not so different from other engineering specialties, except for one big difference. The technologies we use change very fast, and to feel confident in your daily work you have to be up-to-date with the latest changes in the development world. That’s why I usually ask people “How do you seek knowledge?”, “How are you sure you are up-to-date?”. And today I am going to share my primary sources of information.

Blogs

I started actively following blogs several years ago. I followed any blog where I found useful information (usually by googling for some problem and finding the answer there). I knew nothing about the authors, but after much reading I feel like I know them personally, and many of them are actually very famous developers :). Through the years I filtered a lot out, so now I pay closest attention to:

  • http://weblogs.asp.net/scottgu/default.aspx - Scott Guthrie is a VP at Microsoft, so all major .NET announcements come from there. Scott shares a lot of “how-to-use” examples of new technologies and gives very useful links to other resources. A primary source of information for any .NET developer.
  • http://www.hanselman.com/blog/ - Scott Hanselman is probably one of the most famous .NET guys at Microsoft. He is a technology guru, sharing a lot of interesting stuff on his blog.
  • http://haacked.com/ - Phil Haack is a Program Manager at Microsoft, in particular responsible for the ASP.net MVC framework (which I really love). He also manages a project that has changed the .NET world a lot - NuGet - and currently he writes a lot about this area on his blog. I also like Phil’s posts on TDD and Open Source projects.
  • http://wekeroad.com/ - Rob Conery is very famous for his latest start-up project Tekpub and a bunch of other open source projects. He has one of the most interesting blogs I have ever followed. I share many of Rob's ideas on development and practices.
  • http://blog.stevensanderson.com/ - Steve Sanderson is the author of the brilliant MVC2 book that I highly recommend reading. He is also the author of Knockout.js, a javascript MVVM framework, and his recent posts are aimed at it.
  • http://odetocode.com/Blogs/scott/default.aspx - K. Scott Allen has been blogging for many years now, so his blog is full of useful info.
  • http://hackingon.net/ - Liam McLennan is primarily a .NET developer who uses a lot of Javascript, so he shares knowledge in both areas.
  • http://osherove.com/ - Roy Osherove blogs a lot about TDD and Agile software development; he is famous for his TDD book The Art of Unit Testing and his TDD String Calculator kata.

That’s a small percentage of all my reading, but I think those are the major ones.

I use Google Reader to track my RSS subscriptions; it seems the best choice for me. I try to check the reader at least 2 times per week, but in reality it can be even rarer.

Videos / Screencasts

With better internet channels, video has become an even better source of knowledge. For me, learning from videos is much more productive than reading blogs and books. There are a lot of free videos that can help you get started with a new technology. Advanced videos are typically paid, but the prices are affordable and the value for the price is very high.

  • http://www.asp.net/ - This is actually the main source of information for ASP.net developers: tons of videos, blog posts, articles. I went through almost all of the ASP.net WebForms videos when I was learning it.. and some of the ASP.net MVC ones as well. All videos are free, which makes this resource priceless :).
  • http://tekpub.com/ - Tekpub is great for .NET/Ruby developers. There are both free and paid videos. I have had the chance to watch both kinds, and they are superb.

Social networks

If you think that social networks are only for schoolgirls or people who have nothing to do.. you are right :). But there are some social networks that really help professional growth.

  • http://stackoverflow.com/ - StackOverflow is the best of the best Q&A sites for developers. It allows you to create an account, ask questions and help other people by answering their questions. It is hugely popular because of its rating system: the more correct answers you give, the higher your rating.. such a simple and good idea that pushes people to provide quality answers :). I like StackOverflow very much and post the questions I have lost hope of answering by experimenting and googling. In 95% of cases I get very precise answers.
  • http://github.com - The best place for hosting open source projects. Besides hosting the code, it is a true social network. You can use github for social coding. Recently I received my first pull request; believe me, it is a good feeling :). As a bonus, you get a good-looking CV.
  • http://twitter.com/ - I tried to avoid twitter as much as I could, but twitter won :).. Now I really feel its power. If you follow the right people, stay polite and don't tweet too much, you are on the right way. This is probably my main source of information now; RSS is slowly moving behind the scenes, and twitter has become the main information distribution system. (btw, if you are reading this line, follow me @alexbeletsky :).

Podcasts

I recently discovered the power of podcasts for myself. I spend at least several hours walking the dog or driving the car with no value :). Podcasts help to fill that gap. The stuff I like the most:

Picking a little bit from every source of information makes you more comfortable in our fast-changing world :).. If you know some great resources that you like, please share them in the comments.

ASP.NET developers disease

We are trying to hire a web developer for the Microsoft technology stack: ASP.net/C#/WebForms/MVC etc. So I do a lot of interviews nowadays, and I have found out an interesting fact. I’ve noticed symptoms of a disease that many ASP.net guys suffer from: people who have spent a lot of time with WebForms do not really understand the web.

Originally, WebForms was designed to be conveniently used by Visual Basic developers - the ones who created UIs in designers and put application logic into control event handler methods. That was a good idea, because it gave many people the possibility to switch to the web world without too much harm to their own world of Windows programming (controls, events, handlers). It worked great, because all you had to understand was the Page Lifecycle, be aware of the controls Microsoft created for you and be able to drag a control in the WYSIWYG editor. Applications created this way were mostly fine, because the Web 2.0, AJAX, CSS and HTML revolutions were still far away.

Time has passed. The web has stepped far away from a form with a submit button and tables for everything into rich UI applications with complex client-side code, styled up with CSS. WebForms' issues became really serious, because it abstracted you too much from “what is actually going on”. WebForms controls generated HTML for you and did server-side validation. You could tweak a bit of CSS on the server side as well. But this is not controlling your application; it is rather adapting to the “old-school” type of web development with ASP.net. With such a level of abstraction you were losing many important things.

I’m not saying that WebForms is bad. I’m saying that WebForms played a bad joke on the people who do not try to see a bit further than their nose. If you open your eyes, you will see how far web development has moved from ASP.net WebForms development. Modern web development actually starts on the client side. You have to understand HTML/JS really well to be efficient in it. You might not be a designer, but you have to be able to style up your HTML from a UI mock in a PSD. You should understand REST and AJAX to go from a web page to a web application. And the issue is - most ASP.net developers do not know these things. They think that knowing C# or VB.net is quite enough for creating a web application, and the rest is work for front-end developers.

Guys, seriously.. If you think: “HTML is easy, I could learn it in 5 minutes”, “CSS is nothing..”, “I don’t want to do javascript” - you have those symptoms. Please try to mark up a page that might look like that, enhance it with CSS3 transitions and shadows. Try to use jQuery for handling the UI. You will see it is not easy at all. And basically your knowledge of C# does not help you here. I would say it might be even harder than your application's server side. Knowing how to do things with WebForms is great, but you should not be limited only to WebForms.

So, if your CV says you are a web developer, please make sure that you are really able to build web products. Do not limit yourself to being a “client-side” or “server-side” developer; it does not work any more. Be universal, look at the same things from different angles - that helps sometimes.