Alexander Beletsky's development blog

My profession is engineering

I’ve released my own product - Trackyt.net

I haven't been blogging for a while, not because I'm too lazy but because I've spent all my time on my first release. And today I'm very happy, proud and excited to announce that Trackyt.net is now available online.

Trackyt.net is a simple time management application. I chose this idea while working on my self-study project simply because it is well known and something I can use myself. Later on I started to see a bigger picture, a bigger product that I want to build, of which time tracking is just a small part. At that point I created the first draft of a Product Roadmap, and now I try to follow it. It was mainly influenced by how we work in my current company and by the tools and processes we use.

It is a spare-time project. At the beginning I spent no more than 3-4 hours per week on it, mainly on weekends. But over the last several months, as I approached the release date, I put almost all my free time into it.

The project is already at milestone 2, the "Look & Feel Release", and is supposed to be public. Nevertheless, despite the time and effort, it is still an MVP. Its primary functionality can be described in one sentence: "give your task a name; add the task to the dashboard; start the timer as you begin working on the task and stop it when you finish; submit the task to the server." Originally I hadn't planned to make it publicly available at all - it was just a playground for HTML/CSS, MVC 2 and jQuery. So even now it lacks some common site functionality, such as mail notifications, "remember me" and so on.

I was inspired by other small projects that people release just for fun. I like that some people work to leave their own "footprint" on the web by having a product placed there. I like learning things not only from books but from practical exercises. I like having something to care about, plan, enhance and develop. I hope to receive feedback that will make the product better.

It is open source. You can crawl the repository and find something interesting for yourself. It would be great if you forked it and pushed some changes. I will keep it open source for as long as I can.

The site design was created by Sasha. She had never tried web design before this project. We started together with simple tutorials and articles, replicating some existing site designs and learning Photoshop. I was happy to see how quickly she delivered her first results. Even though we usually have completely different visions of things, we arrived at one result. So during the project we also formed a nice team of developer and designer.

So, since the original idea was to learn something new, what have I learned so far? A lot, I think:

  • Getting Real - the methodology and recommendations by the creators of Ruby on Rails, which fits my project perfectly.
  • ASP.NET MVC 2 - I got a great introduction to MVC 2 with a lot of help from Steven Sanderson's great book Pro ASP.NET MVC 2 Framework, Second Edition.
  • JavaScript/jQuery - JavaScript became one of my favorite languages, and jQuery one of my favorite frameworks.
  • HTML/CSS - I've never done so much HTML/CSS before. I wouldn't say I'm good at it, but I've definitely improved.
  • UppercuT and RoundhousE - good tools that help you with versioning and deployment.
  • Moq, AutoMapper, JSON, REST and more :).

What should I do now? I'll take one week of rest to refresh my eyes a bit. Then I'll start planning the next release and the next sprints to work on. I hope to get back to my normal blogging rhythm, as well as to support a product blog. So I've still got something to do :). But today is the birthday of Trackyt.net, and on the weekend I will definitely be celebrating it!

Testing database and Test database

Testing the database is important. It is absolutely required when your DAL is simply a bunch of methods that run SQL against the DB. It still needs to be done when you rely on an ORM (like LINQ to SQL, NHibernate etc.).

Testing database

By testing the database I mean unit tests that run database operations (create, update, delete, stored procedure calls) and assert that each operation completes successfully.

Even if the SQL is simple (like a plain SELECT or UPDATE), it has to be tested. More complex stuff, like JOIN, UNION etc., has to be tested with much more care (meaning different scenarios, different input sets etc.). The more complex the query, the more thorough the tests should be. I personally try to avoid complex SQL queries, because they are difficult to read and support, but in many cases they are the only way.

Here is an example of a test case for a DAL method that runs a SELECT statement:

https://gist.github.com/662906
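
In case the gist is unavailable, here is a minimal sketch of what such a test might look like (TasksDal, its methods and the connection string are illustrative names, not the actual Trackyt.net code):

[TestFixture]
public class TasksDalTests
{
  //sample connection string pointing at the test database
  private const string TestDb = @"Data Source=.\SQLEXPRESS;Initial Catalog=trackytest;Integrated Security=True";

  [Test]
  public void GetTaskById_ReturnsInsertedTask()
  {
    //arrange - insert a known row into the test database
    var dal = new TasksDal(TestDb);
    var id = dal.InsertTask("write blog post");

    //act - the method under test issues a SELECT
    var task = dal.GetTaskById(id);

    //assert - the row comes back with the values we stored
    Assert.That(task, Is.Not.Null);
    Assert.That(task.Description, Is.EqualTo("write blog post"));
  }
}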

Using an ORM simplifies life a bit. You work with objects: creating a new record in the DB is just creating a new object and calling the InsertOnSubmit() method. In many cases you have to trust that the framework does its job correctly and not write tests for the framework - that is a simple waste of time. But what usually happens is that you have wrappers around the ORM DataContext, like a Repository, and the behavior of the Repository has to be validated with tests (check out more about repositories here). Typically a repository interface looks like this:

https://gist.github.com/662918
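
The gist has the real interface; as a sketch, the typical shape of such an interface (using the blog posts example from below) might be:

public interface IBlogPostsRepository
{
  IQueryable<BlogPost> BlogPosts { get; }
  void SaveBlogPost(BlogPost blogPost);
  void DeleteBlogPost(BlogPost blogPost);
}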

All interface methods and properties are covered by tests. Moreover, if you have repository extensions that help you select a record by Id or do paging, they are also part of the testing. Please check how this IBlogPostsRepository is tested in this example.

What else should you know about testing the database?

First of all, database tests might require some initial data to be put into the database before each test starts. This could easily be solved by using the [TestSetup] and [TestTearDown] methods of the fixture. The bad side of this is that you cannot customize the data for a particular test, because Setup/TearDown run the same code for every test and cannot be parameterized. I prefer to use a static method (or methods) placed in the TestFixture and called at the beginning of the unit test. Please see SubmitTenBlogpostsToRepository in the previous example, and the sketch below.
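
A minimal sketch of that approach, assuming the IBlogPostsRepository interface sketched above (BlogPostsRepository and the factory method are hypothetical):

[TestFixture]
public class BlogPostsRepositoryTests
{
  //hypothetical factory returning a repository pointed at the test database
  private IBlogPostsRepository CreateTestRepository()
  {
    return new BlogPostsRepository(new DataContext("connection string to trackytest"));
  }

  //seed helper: unlike [TestSetup], each test decides whether (and with what data) to call it
  private void SubmitTenBlogpostsToRepository(IBlogPostsRepository repository)
  {
    for (var i = 0; i < 10; i++)
      repository.SaveBlogPost(new BlogPost { Title = "post " + i });
  }

  [Test]
  public void BlogPosts_ReturnsEverythingSubmitted()
  {
    var repository = CreateTestRepository();
    SubmitTenBlogpostsToRepository(repository);

    Assert.That(repository.BlogPosts.Count(), Is.EqualTo(10));
  }
}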

Second, you have to take care of test isolation somehow. Previous test results should not affect the next test cases; each test case has to leave the database in the same state it found it in. There are different approaches to this; I like the following one. I have a FixtureInit that does the initialization for every test case and contains a DbSetup instance. DbSetup itself holds a DataContext as well as a TransactionScope object. TransactionScope is a very nice way of handling implicit transactions. Inside the DbSetup constructor we could also put some test data initialization that should be shared across all tests. The Dispose method of DbSetup disposes the transaction without committing it. Each test case body is placed in using (var fixture = new FixtureInit("http://localhost")), so any data inserted into (or deleted from) the database during the test does not affect the database once the test is finished. It is simple and works well; please check out the implementation here.
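
A compressed sketch of the pattern (the real FixtureInit/DbSetup implementation is linked above; this just shows the TransactionScope trick):

public class DbSetup : IDisposable
{
  private readonly TransactionScope _scope;

  public DbSetup()
  {
    //everything the test does from here on joins this ambient transaction;
    //shared test data initialization could also go here
    _scope = new TransactionScope();
  }

  public void Dispose()
  {
    //disposing without calling Complete() rolls the transaction back,
    //so the database is left exactly as the test found it
    _scope.Dispose();
  }
}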

The last thing is the layout and naming of database tests. Database tests have to be separated from application unit tests; it is a good way of doing the testing. Database tests are usually too slow, and you do not need to re-run them after minor changes (except changes in the DAL). That's why it is better to place DB tests in a separate assembly whose name contains *Database.Tests. It gives a clear understanding of the assembly's goal, and the tests will not be re-run every time you run build.bat with UppercuT.

Test database

The test database is the database that the database tests run against... and typically Test Database == Developer Database, meaning the same database the developer uses to test the application is also used for the database unit tests.

This is bad for several reasons.

By doing developer testing you make the database really "dirty" - putting new records into the DB or deleting existing ones, it doesn't matter. I usually write a small test where, after inserting new object(s), I check the count of objects in the table, like Assert.That(foundTasks.Count(), Is.EqualTo(2));. But if the Tasks table has some data beforehand, I am not guaranteed that the count will be 2, while Assert.That(foundTasks.Count(), Is.EqualTo(countOfObjectsBeforeInsert + 2)); will run OK. It works, but I don't really like it, because it clutters the test with details that should not be required.

It is also possible that some complex database test leaves the database in an inconsistent state when it fails (that is bad, of course, and you have to design the tests to prevent it happening... but it happens, especially with a no-ORM, no-transactions DAL). Another bad surprise is a database test that deletes data required for development testing.

So it should be Test Database != Developer Database: the test database and the developer database are actually different instances. The test database should be restored to the same version as the developer database, but in general it is empty, and its state is not changed after testing completes.

I use RoundhousE to deploy the database. As I described in a previous post, I use initdb.bat and resetdb.bat to initialize and reset the developer database. I've just created two more scripts that do exactly the same for the trackytest database: inittestdb.bat and resettestdb.bat. At the same time, the app.config of the test project points to trackytest (instead of trackydb as before). So now the tests run against trackytest, while the application itself runs against trackydb.

It is very convenient to work like that.

Roundhouse your database

Once you've started to use UppercuT for your builds, it is definitely worth trying RoundhousE for versioning, deployment and migration of your database.

RoundhousE - setup and usage

Let's begin: just go to the project web site and download the latest package. The package contains documentation (just a draft version, not yet complete), samples and the framework itself.

Initially I spent some time getting the samples to run. The documentation says it has been developed and tested with SQL Server 2008, and there is no information on whether it runs on SQL Server 2008 Express Edition. After several fail/repeat sessions I was happy to see that it actually works with the Express edition as well!

Before you start

Before you start, you have to change something in your SQL Express configuration. Go to Start -> All Programs -> Microsoft SQL Server 2008 -> Configuration Tools -> SQL Server Configuration Manager. There you have to configure your SQL instance's network configuration to enable the "Named Pipes" protocol. So go to Protocols for <%INSTANCE_NAME%> and enable Named Pipes.

UppercuT configuration

Now you have to change a few things in the UppercuT config. First you have to modify LOCAL.settings (the file that is used for generation of the deployment scripts). You have to supply server.database (for a local Express instance it would be .\SQLEXPRESS) and database.name, the name of your DB.

After that, a small change is required in UppercuT.config: for folder.database I just used "db", short and clear.

Check out this diff, for instance.

DB folder layout and content

Now you have to create the folder that will contain all the SQL scripts. I placed it at the same level as docs, lib etc. (but it could be placed in the src folder as well). So, create a folder named "db".

The documentation currently lacks information about the folder layout, but from the samples it is pretty clear:

  • functions - SQL for function definitions
  • permissions - permissions setup for the database
  • sprocs - stored procedure definitions
  • up - update scripts (create, drop and alter tables and records here)
  • views - view definitions

If your application doesn't currently use stored procedures or views, it is OK to just leave those folders empty.

Deployment options

You have 3 deployment options: NAnt, MSBuild or the RH executable. It does not actually matter which one you choose; all options are functionally equal. I chose the executable, because it is lightweight and just one file (note that it will be included in the deployment package).

To go with RH you have to copy it from the RoundhousE package into your deployment folder, so that you have deployment/rh/rh.exe.

Correction of deployment scripts

Deployment scripts are the ones that will be used in production to update the database. They are created from templates and the settings you provide. You have to modify your DBDeployment.bat file. Since I use rh.exe, I created a template for it; it simply runs rh.exe, which does the rest of the work.

Check out this diff, for instance.

Running RoundhousE

As soon as you have built the application, go to the code_drop folder and run LOCAL.DBDeployment.bat: RoundhousE will set up the database and version it.

That's basically it! Once it is configured, you can easily add new scripts to the db folder, and RoundhousE will handle the changes.

Additional scripts I use

There are a few things I added for my personal needs. If I do a build in a new environment, I'd like to create the database first. For that purpose I created initdb.bat, so the build procedure in a clean environment is:

  1. run build.bat
  2. run initdb.bat
  3. run test.bat all

I also created a script to reset the database: resetdb.bat drops the existing database and just creates a new instance. This is very useful during development, if your database becomes "dirty".

UppercuT your builds

I like open source tools and frameworks, but even more I like open source tools and frameworks that work! There are 2 nice frameworks that I started to use really quickly and that gave me exactly what I need. They are UppercuT and RoundhousE. I planned to blog about both, but the post turned out quite long, so I'll split it in two.

It is all about build and deployment. I've seen different build and deployment systems through my career: some homemade, some commercial, some simple, some complex. I recently started to think about one for my own project. But before we start, let's make it clear: what do I personally expect from a build framework?

  • It has to use industry-standard tools - NAnt, MSBuild, NUnit, NCover etc.
  • The build has to be done with one click (key press)
  • Unit tests have to run during the build
  • Different code metrics must be collected during the build
  • It has to integrate with popular SCM systems - svn, git, Perforce etc.
  • It must have an easy-to-use versioning system; all build artifacts (binaries, docs) are labeled with one consistent version
  • It must have an automatic packaging mechanism, to deploy a package on a target machine

If your application has a database, you should also be concerned with:

  • Versioning of the database
  • Deployment of the database to any environment
  • Updating the live database with new patches

Initially I thought of writing something very simple myself, but later, while browsing some repositories on GitHub, I noticed an interesting project structure: build/deployment folders as well as a number of batch files, like build.bat, test.bat, open.bat etc. I dug a bit and found out that it is actually the scaffolding provided by UppercuT. After I checked some documentation, I realized it was probably exactly what I needed! So please meet two projects that will simplify your life a bit: UppercuT - automated build/versioning/packaging; RoundhousE - DB migration/versioning/deployment.

UppercuT setup and usage

Just go to the UppercuT web site and download the latest package. I used 1.0.5.0; in the package you will find some docs that might be interesting to read, and the framework itself in the UppercuT folder.

What's inside? Well, the package contains configuration files, batch/shell scripts, NAnt scripts and a number of tools - MbUnit, MoMA, NAnt, NCover, NUnit. Internally UppercuT uses NAnt for build automation. If you are not familiar with nant.build scripts, please don't worry: UppercuT is created in such a way that you are well isolated from the NAnt details and perform all configuration in only one place, settings/UppercuT.config - a simple XML file.

If you want to apply UppercuT to your existing project, I would recommend the following steps before you start.

  • Change your folder structure to have docs, lib and src at the top level of the directory tree.
  • Put the solution file (.sln) and all projects in the src folder.
  • Reference all 3rd parties (NUnit, Moq, Ninject etc.) from the lib folder.
  • Move the project documentation to the docs folder.

As soon as you are done, just copy all the content of the UppercuT folder into your project folder. You are ready to use it. Before actual use you have to configure it. Fortunately, that has to be done in only one place - the settings/UppercuT.config file. You have to provide project.name, path_to_solution, repository.path, source_control_type, test.framework and microsoft.framework (net-4.0 is supported).

Once you have provided all the required keys, you are able to start a build! If you have compilable sources and runnable unit tests, you should receive a SUCCESS build status (I failed to start it the first time, having problems with NUnit; please see the Issues section below).

Let's see what UppercuT does for us:

  1. It uses the source repository to generate the _BuildInfo.xml and SolutionVersion.cs files. _BuildInfo.xml contains a digest of the build, with the revision, version, path to the repository, .NET framework version etc. SolutionVersion.cs is a code file that must be referenced from all projects in the solution, so they all share a common version.
  2. It compiles all projects in the solution.
  3. It runs all unit tests in the solution (please note, it runs only unit tests - projects with *Test* in their names; it does a smart job of skipping Database and Integration tests).
  4. It copies all DB scripts and documentation to the build_output folder.
  5. It generates batch files for build deployment and DB deployment.
  6. It analyzes the output binaries with tools such as NCover, NDepend (not in the package, because it is commercial, but if you have it UppercuT will use it), MoMA etc. and places the results in the build_artifacts folder.
  7. It packages everything into the code_drop folder, where everything is in the right place - web content, binaries, DB scripts and build artifacts.

A lot of things for free, right? What more can you do?

  1. You can run open.bat, which by default opens Visual Studio with your solution.
  2. You can run test.bat to build and re-run unit tests; test.bat all will run all types of tests - unit, integration, database. test.bat is useful to run from a Continuous Integration server such as CruiseControl.
  3. You can run zip.bat. A very useful feature that runs the build/tests and zips the content of the code_drop folder into one single zip file (like trackyourtasks.net.v0.1.0.584b2ef4.zip) that can be copied to the production system for deployment, or placed in the repository to track builds.

Issues and criticism

As I said above, the first time I ran build.bat it failed. The reason was that nunit-console loaded the 2.0 runtime, but my test assemblies are built with the 4.0 framework. It was solved by modifying build\analyzers\nunit.test.step and adding one additional argument to the exec of the run_tests target: <arg value="/framework=${microsoft.framework}" />. Please check my commit for details. Ferventcoder has already contacted me regarding that issue, so I hope it will be solved in one of the next versions :).

Another problem I met was with coverage report generation: it just generates them empty. That is not UppercuT's fault but rather NCover's, which seems to have issues with .NET 4.0. At the same time, somehow, TestDriven.NET is able to run NCover with properly generated coverage reports.

No more criticism yet. What would be nice to have are predefined tasks for StyleCop and FxCop.

Conclusions

This uppercut turns out to be powerful. If you have no build system yet, UppercuT is recommended. If you have some "self-written" system that you are tired of supporting, UppercuT is recommended. If you have a build system and are happy with it, well... try UppercuT anyway: you might notice something you are missing, or else you can suggest a new feature for UppercuT :).

Update:

The next blog post is about RoundhousE, for handling your database.

Test Driven Design and levels of testing

The major benefit of TDD is not coverage, regression stability or test-first problem solving - NO. It is Test Driven Design (architecture). Test Driven Design is what is born of writing tests before code, writing tests before the functionality starts to work, writing tests before you are really clear about the rest of the system (data flow, integration etc.). Test Driven Design forces you to use good old patterns such as Factories and Template Methods... it forces you to hide implementation behind interfaces... it forces you to separate concerns and divide functionality into small objects, each responsible for its own small function and testable for exactly that function. Instead of increasing functionality by appending some code to an already big function, you encapsulate the new functionality in a new object, test it in isolation and then easily integrate it into the place where it has to start working.

The reason Test Driven Design works is that writing tests makes you a user of the class, makes you focus on its primary goal and interface, and simply makes it impossible (or difficult and annoying) to create a class with multiple responsibilities.

So how do you get the practical benefits of TDD, and what levels of testing should you consider to be sure you are following the TDD plan?

Disclaimer: all code samples were created very quickly, just to demonstrate ideas; they should not be considered working examples.

Level 1 - Unit tests

Here it all begins. Unit tests test really small parts of objects, like methods, properties and so on. This is where the actual requirements for the object are created and verified. The major point of unit tests is: they have to be isolated! That means all objects, data, views etc. that the tested object depends on must be hidden behind corresponding interfaces. Please note that in the initial state those interfaces might not be defined at all (contain no methods); their actual shape will be dictated by the needs of the object under test, and those needs are clarified as you apply more test cases to it.

You always have to focus on the business object, or primary object. This is the one you have to test first. If you are building registration functionality for a site, Registration is the class to start with. If you are building a calculation application, Calculator is the class to start with. If you are building a web crawling application, Crawler is the class to start with.

It usually goes like this: a business object typically requires 2 things, data and a view. In some particular cases the BO could depend on some external service, which would mean that an interface to the service should be passed to the constructor as well. Also, in some cases it doesn't require a view and just returns some data to an outside consumer.

public class Some
{
  public Some(IData data, IView view)
  {
  }
}

public interface IData
{
  //empty
}

public interface IView
{
  //empty
}



Then the tests are created:

[TestFixture]
public class SomeTests
{
  //factory method for the object under test
  public Some CreateSome()
  {
    return new Some(new DataMock(), new ViewMock());
  }

  [Test]
  public void SomeInScenarioOne() { }

  [Test]
  public void SomeInScenarioTwo() { }

  [Test]
  public void SomeInScenarioThree() { }
}

public class DataMock : IData
{
  //some methods here
}

public class ViewMock : IView
{
  //some methods here
}

It is good design for the constructor to receive interfaces only; the rest of the required parameters are supplied as arguments to the corresponding methods.

Once again, the business object asks the interfaces for the data it requires or for the methods to display data. Exactly what data and methods are required could be unknown at the beginning; it depends only on the current needs of the business object. As the implementation goes on, the IData and IView interfaces start to take shape. In the end they could look like this:

public interface IData
{
  IQueryable<Column> GetColumns();
  void UpdateColumn(int id, string name, Column column);
  void DeleteColumn(Column column);
}

public interface IView
{
  void ShowSuccessMessage(string message);
  void ShowFailMessage(string fail);

  void DisplayResults(IQueryable<Column> columns);
}

Note that we absolutely don't care about the actual implementation of either interface at the moment. For instance, Data could use a SQL database for the columns, and View could call some Win32 API functions to display the results. It doesn't matter - forget about this entirely; just continue with the primary object, its functionality and its tests.

Mock objects should provide the required methods in the simplest way possible, like:

public class DataMock : IData
{
  public IQueryable<Column> GetColumns()
  {
    var list = new List<Column> { new Column(1), new Column(2) };
    return list.AsQueryable();
  }

  //UpdateColumn and DeleteColumn omitted for brevity
}

For different types of test data you could have different mocks. Also, it is better to use one of the existing mock frameworks, as in the sketch below.
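
For example, with Moq (which the project already uses) the hand-written DataMock above collapses into a couple of lines; this is just a sketch of the idea:

[Test]
public void SomeInScenarioOne()
{
  var data = new Mock<IData>();
  data.Setup(d => d.GetColumns())
      .Returns(new List<Column> { new Column(1), new Column(2) }.AsQueryable());

  var some = new Some(data.Object, new Mock<IView>().Object);
  //act on some and assert here
}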

Level 2 - Data/View tests

As the Data/View interfaces became granular after level 1 testing, it is time to implement them. At this point we are really clear about what exactly we are going to implement, since there is already a defined contract between the client (the primary object) and the supplier (the data/view classes).

The Data/View implementations have to be tested as well. There are more difficulties in testing them: the data side depends on a SQL Server that must be up and running, with some test data prepared beforehand, and the view cannot be tested in many cases. Nevertheless, they are implemented in the same test-first style, and finally we have the Impl classes - the ones that will be used with the primary object after it is integrated into the application.

Implementation

public class DataImpl : IData
{
  private DataContext _context;

  public DataImpl(DataContext context)
  {
    _context = context;
  }

  public IQueryable<Column> GetColumns()
  {
    return _context.Exec("SELECT * FROM COLUMNS").AsQueryable();
  }

  //rest of the methods implementation
}

public class ViewImpl : IView
{
  public void ShowSuccessMessage(string message)
  {
    MessageBox(message);
  }

  //rest of the methods implementation
}

Tests:

[TestFixture]
public class DataImplTests
{
  [Test]
  public void GetColumns()
  {
    var data = new DataImpl(new DataContext());
    var columns = data.GetColumns();

    Assert.That(columns.Count(), Is.EqualTo(10));
  }

  //rest of the methods tests
}

Level 3 - Integration tests

Integration tests are considered to be evil. I also think so, but I don't think it is possible to avoid them completely. They are difficult to support and difficult to maintain, but I think some number of integration tests has to be present to cover typical scenarios and bugfix test cases.

In general, integration means testing the primary object with concrete instances of the objects it depends on. Integration tests also cover the place where the primary object is going to be integrated. So if we plan to use Some in SomeAnother, we have to test the impact of this class after integration.

[TestFixture]
public class SomeIntegrationTests
{
  [Test]
  public void IntegrationScenario()
  {
    //arrange
    var some = new Some(new DataImpl(new DataContext()), new ViewImpl());
    //act
    var results = some.Act(param1, param2);
    //assert
    //do asserts on results
  }

  [Test]
  public void IntegrationScenarioTwo()
  {
    //arrange (Some is integrated into SomeAnother)
    var someAnother = new SomeAnother(new DataImpl(new DataContext()), new ViewImpl());
    //act
    someAnother.Act();
    //assert on the actual influence of the Some class on the SomeAnother class
  }
}

Level 4 - Spec tests

The final level is spec tests. BDD (Behavior Driven Development) is the next generation of TDD, focusing on user acceptance criteria as the development driver. There are already quite good tools for it, like SpecFlow or StoryQ.

Having this type of testing at Level 4 doesn't mean the tests are created after the functionality is ready. It is recommended to create such tests first, working with the product owner and defining the test stories.

Personally I have no rich experience with BDD, but the ideas and the success stories with Cucumber (Ruby) make it worth looking at.

Conclusion

Testing is important. But testing just to create tests is pointless. Tests have to be created to produce good design and easy-to-read, maintainable code.

Done right, tests won't complicate your life but make it easier.

Switching from ASP.NET development server (Cassini) to IIS with MVC applications

Starting to develop new applications on IIS right away is probably a good idea, since IIS is where your application will finally land. Working with Cassini is great, because it is fast, requires no maintenance and is very easy to start. But it can hide some issues that will surprise you during deployment.

Recently I decided to switch my current ASP.NET MVC application from Cassini to IIS. It is done very easily: just go to the project properties, Web tab -> Use Local IIS Web Server. Select the option and then click "Create Virtual Directory" to create a virtual directory for the application.

But after I started it, the application just failed. I'll share the problems I met and how I fixed them.

IIS identity impersonation and path credentials

To make sure your SQL database works fine, you have to configure identity impersonation for the application pool and path credentials for the folder.

Go to IIS Manager, Application Pools, DefaultAppPool (you could use another app pool for your application), Advanced Settings, Process Model, Identity, and set it to your account (you should have admin permissions on this machine).

Then go to the virtual folder, Basic Settings, Connect As..., and use your credentials.

JavaScript references from Master page

On my master page I referenced a number of JavaScript files, such as jQuery, jPost etc. I did it like this:

<script src="/Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>    
<script src="/Scripts/jquery.blockUI.js" type="text/javascript"></script>
<script src="/Scripts/json2.js" type="text/javascript"></script>    
<script src="/Scripts/jquery.postJson.js" type="text/javascript"></script>


That worked fine on Cassini, because the virtual root is "/", but on IIS code like that behaves unexpectedly. Namely, on each view that uses this master page the script reference will be mapped differently. For http://localhost/Tracky/Home it would search for the JavaScript at a URL like http://localhost/Tracky/Home/Scripts/jquery-1.4.1.min.js, and fail to load it. Fortunately there is a way to fix that. Instead of,

<script src="/Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>

Use

<script src="<%: Url.Content("~/Scripts/jquery-1.4.1.min.js") %>" type="text/javascript"></script>

The Url.Content method will properly generate the application-rooted path to the script file.

Also, please make sure that your master page inherits from System.Web.Mvc.ViewMasterPage.

namespace Web.Areas.Public
{
  [CoverageExcludeAttribute]
  public partial class Public : System.Web.Mvc.ViewMasterPage
  {
    protected void Page_Load(object sender, EventArgs e)
    {

    }
  }
}



I spent some time trying to understand why Url.Content did not work for me; thanks to erikzaadi for the help.

All redirect URLs have to be prefixed with ~/

If you have a redirect somewhere, make sure the redirect URL is prefixed with ~/ (unless you explicitly mean otherwise). If you do Redirect("/Public/Home") from /Public/Registration, for instance, the resulting absolute URL will be http://localhost/Tracky/Public/Registration/Public/Home, something not expected. With ~/ it will correctly redirect to http://localhost/Tracky/Public/Home.
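
A minimal sketch of a controller action doing it right (the controller and action names are just for illustration):

public class PublicController : Controller
{
  public ActionResult Registration()
  {
    //"~/" makes MVC resolve the URL against the virtual directory,
    //so under IIS this lands on /Tracky/Public/Home
    return Redirect("~/Public/Home");
  }
}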

Use absolute path for AJAX posts

If you have JavaScript code like this somewhere:

$.post('/GetAllTasks/' + userId, null, callback, 'json');

it won't work either; you have to provide the application-rooted path for the resource:

$.post(api + '/GetAllTasks/' + userId, null, callback, 'json');

I get the API root from a hidden input on the page, initialized with a value from ViewData, which in its turn is initialized in the controller using the VirtualPathUtility class.
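
A sketch of that chain (the "api" key, the action and the path are illustrative, not the exact Trackyt.net code):

public ActionResult Dashboard()
{
  //resolve the application-rooted path once on the server
  ViewData["api"] = VirtualPathUtility.ToAbsolute("~/Tasks");
  return View();
}

The view then emits it into a hidden input, and the script reads it back:

<input type="hidden" id="api" value="<%: ViewData["api"] %>" />

var api = $('#api').val();
$.post(api + '/GetAllTasks/' + userId, null, callback, 'json');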

Conclusions

These are just the issues I met during my switch; after I fixed all of the above, the application started to work fine. If you hit other issues during a transition to IIS, please leave a comment.

You can review the changes I made for my project in this commit.

Agileee 2010: J. B. Rainsberger: Integrated Tests are A Scam

Disclaimer: the text below is a compilation of notes I made at the Agileee 2010 conference while listening to different speakers. I do this to keep the knowledge I got at the conference and to share it with my colleagues and anyone else who is interested. It is only my hearing, interpretation and write-up of the original speech, and it includes my subjective opinion on some topics, so it may not reflect the author's opinion and original ideas 100%.

It was a very dramatic beginning - "Integrated tests are a scam: a self-replicating virus that takes over your project and burdens you with long-running, fragile, hard-to-understand test suites" is the sentence JBrains started his speech with!

What are integrated tests?

Integrated tests check the whole system, not only the collaboration between some components. In practice it means any test whose failure does not point at a specific cause - tests that fail with unjustifiable failures. That is bad, because such failures are a distraction and give no value.

Integrated tests are slow

Integrated tests involve the DB, the network etc., and that makes them slow.

Integrated tests are brittle

If you change one small thing, you have to fix a lot of tests. JBrains gave a very nice and common example, "foreign key hell": you have foreign keys in the DB, so to insert/remove a record in one table you have to insert/remove a record in another table... and so on. Preparing test data becomes a very expensive operation, and the complexity grows exponentially. Eventually a change that used to take 1.5 hours starts to take a day.

Goal of unit testing

The goal of unit testing is to have a quick and correct result. A test failure should point to the exact problem in the object under test. With integrated tests it is possible to have multiple failures with very little actual understanding of what is wrong.

What to do?

JBrains introduced the term collaboration tests (interaction tests): tests using mock objects or test doubles. The collaboration is viewed as a client-server architecture. The object being tested is the client; the object it depends on is the server. We create interfaces to make it possible to substitute the implementation (using test doubles), and we simulate the behavior of the dependencies for the object we are testing. The client object is tested completely, while a test double simulates the behavior of the server.

This works great - the tests pass - but we find a problem. The issue is that the server object doesn't always work as the client's tests expect, so having collaboration tests only is not enough. We need another type of test: contract tests.

Contract tests focus on the server side; these are the ones that actually run SQL against the DB, make requests over the network etc. The goal here is to focus on the contract (or interface) of the object and test the interface, not the exact implementation. Once the contract tests are defined, it is possible to reuse them for any particular implementation. As an example, consider IList, which can have different implementations (an array list, a linked list etc.) but one interface with simple operations like add, remove, at. We create a number of tests for all the contract operations. Such tests can be put into a common class with an abstract object-creation operation (the template method pattern), so the tests of a particular implementation (an array list, for instance) just inherit the common class, override the creation function to return the array list, and reuse all the already defined contract tests, as sketched below.
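
A rough sketch of that template-method setup in NUnit terms (the types and names are invented for illustration):

public abstract class ListContractTests
{
  //the template method: each implementation's fixture overrides this
  protected abstract IList<int> CreateList();

  [Test]
  public void Add_ThenContains()
  {
    var list = CreateList();
    list.Add(42);
    Assert.That(list.Contains(42), Is.True);
  }

  [Test]
  public void Remove_ThenDoesNotContain()
  {
    var list = CreateList();
    list.Add(42);
    list.Remove(42);
    Assert.That(list.Contains(42), Is.False);
  }
}

[TestFixture]
public class ArrayBackedListContractTests : ListContractTests
{
  protected override IList<int> CreateList() { return new List<int>(); }
}

A linked-list backed implementation of IList<int> would inherit the same fixture and only override CreateList().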

What are the benefits?

Combining these 2 types of tests gives us fast, isolated suites where a failure points to the exact issue, avoiding the creation of complex integration tests.

Later on JBrains gave some mathematical proofs about the difference between the number of collaboration/contract tests and integrated tests, which I do not want to reproduce here, because I am afraid of interpreting the information incorrectly. The keynote of the speech is rather simple: forget about complex and heavyweight integration tests, do as much as possible with test doubles running everything in memory, and do contract testing for platform-dependent objects.

You could find more, here: http://blog.jbrains.ca/integrated_tests_are_a_scam

Agileee 2010: Pawel Lipinski: “Clean Tests” by Uncle Paul, or How to structure your tests so that they serve you well

Disclaimer: the text below is a compilation of notes I made at the Agileee 2010 conference while listening to different speakers. I do this to keep the knowledge I got at the conference and to share it with my colleagues and anyone else who is interested. It is only my hearing, interpretation and write-up of the original speech, and it includes my subjective opinion on some topics, so it may not reflect the author's opinion and original ideas 100%.

One of the few technical speeches at this conference. Pawel discussed a vision of producing clean code, with clean tests as a key factor in creating good software. It was very interesting and rather quick, so I made only a few notes.

Introduction

"The most important stuff is to create clean code" - Robert C. Martin (Uncle Bob)

Writing clean code is important! Everybody knows that, but not everybody actually produces clean code. Writing tests is vital to creating reliable, maintainable code. It is a common failing of many developers to forget that test code quality is just as important as production code quality.

When we write tests, we have to be sure that we are:

  • writing tests that are reliable
  • writing tests that document the code
  • writing tests that are maintainable
  • writing tests that are readable

Writing good code/tests is hard work! It requires time and practice.

What do tests give?

We spend effort on tests (a lot of effort, actually). Why do we do that?

  • To have a safety net and awareness of what you did
  • TDD leads to Test Driven Design!
  • TDD increases development speed
  • TDD produces documentation

Problems

The more tests you have, the more problems you have with them. So at some point they stop being documentation.

This especially matters when you have a lot of integration tests, which can affect each other and produce massive failures without giving a clear understanding of what is wrong.

Process

What is the process of creating tests, following the TDD rules?

  • Think first
  • Trivial first
  • The test specifies "what", the implementation says "how"
  • You need different levels of tests (unit, integration, end-to-end)

Conventions

Things that help to produce clean tests:

  • Coherent naming of test classes
  • Coherent naming of tests
  • Test class names focus more on functionality

Comments

If you feel you must comment, something is wrong with your code. Tests have to document themselves.

Some useful markers: //given, //when, //then (//arrange, //act, //assert).

Test setup (the Given part of the test)

DRY (Don't Repeat Yourself) is major here. We usually repeat the setup code in each test. It has to be extracted into methods that can easily be reused, and the method name should reflect its meaning, as in the sketch below.
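
A tiny sketch of what that looks like (the domain types are invented for illustration):

public class Order
{
  private bool _paid;
  public void AddItem(string name) { }
  public void MarkPaid() { _paid = true; }
  public bool Ship() { return _paid; }
}

[TestFixture]
public class OrderTests
{
  //the repeated "given" extracted into an intention-revealing method
  private static Order GivenPaidOrderWithTwoItems()
  {
    var order = new Order();
    order.AddItem("book");
    order.AddItem("pen");
    order.MarkPaid();
    return order;
  }

  [Test]
  public void PaidOrderCanBeShipped()
  {
    var order = GivenPaidOrderWithTwoItems(); //given
    var shipped = order.Ship();               //when
    Assert.That(shipped, Is.True);            //then
  }
}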

Behavior Driven Development

It is on top of TDD: we define behavior. It reflects the user stories that business people create and focuses on why the code should exist. Code has to be of value to the customer. Naming is important; the language you use reflects the way you think, and naming in code should use the same names the business people use. In BDD we say examples, not tests.

By the way, someone suggested a good BDD tool for .NET: http://specflow.org/

BDD rules

  • test names should be sentences
  • a simple, constant sentence template helps each test focus on one thing

A-TDD

Acceptance TDD. Scenarios have to be written by business people (or at least with them nearby).

Test smells

Factors that show something is wrong with a test:

  • Long setup
  • Long tests
  • Many assertions
  • Too many test methods

Maintainable tests

Some rules must be followed to have maintainable tests:

  • Reuse assertions; create "business" assertions
  • Reuse object construction methods
  • Reuse test setup
  • Tests should not depend on the environment or on the order of execution

Agileee 2010: Mikalai Alimenkau & Aleksey Solntsev: How to be proud when you are done

Disclaimer: the text below is a compilation of notes I made at the Agileee 2010 conference while listening to different speakers. I do this to keep the knowledge I got at the conference and to share it with my colleagues and anyone else who is interested. It is only my hearing, interpretation and write-up of the original speech, and it includes my subjective opinion on some topics, so it may not reflect the author's opinion and original ideas 100%.

A speech by two guys from XPInjection.com who shared their knowledge of the important Agile concept of Done. I was really interested in this speech, since this is something we are also trying to establish as common practice in our company.

Introduction

Software development involves a lot of people: developers, customers, management. And everybody has their own "dreams" (or goals). Such differentiation of goals can lead to hidden (or open) conflicts, misunderstanding between business and development, wrong communication and, as a result, the failure of the project. But if we look at it from another perspective, everybody actually has one goal - delivery of the product. Developers are happy to see their work done, managers are happy to see it within budget and meeting the customer's criteria, and so on. So in general everybody has one common goal.

The way to manage one common understanding is to create a Definition of Done (DOD).

Typical scenarios caused by lack of a DOD

"Uncommitted stuff"

A developer declares work as done, but the sources are not committed to SCM. The developer's definition of done: it works on my machine.

"Useless build"

The customer says: let's deploy to production. The developer says: we still have a lot of stuff to work on - merge, tests, DB migration scripts etc. The customer's Done is different.

"Unstable velocity"

The customer: well done, our velocity increased to 32 SP, let's use it for the next planning. The Scrum Master: I still have a lot of small bugs and documentation to finish. The customer: so how do we change our velocity? The Scrum Master: let's say it is 20. The developers lack an understanding of done.

"Unverified tasks"

The customer: there is a crash on the form, has anybody tried it before me? The developers: oops, it was a quick fix, we haven't tested it well. I'll just do another quick fix for that.

"Forgotten requirements"

The customer: my site is very slow, let's discuss it at the next meeting. The Scrum Master: who did the performance testing? Ahh... nobody. Non-functional requirements are forgotten.

What is the Definition of Done?

Definition of Done: a contract (a number of rules) between the different parties on how to treat the state of completeness of a particular activity.

How to start?

If you have decided to start with a DOD, you need some initial implementation guidelines. What should you do to "define" a DOD?

  • Take it from a previous project (but this has to be done carefully; projects are different)
  • Brainstorm with code quality in mind (start from the technical stuff)
  • Brainstorm with business problems and wishes in mind

The process of creating a DOD is brainstorming: you have to understand your needs, your problems and your goals to create it.

Who should define it

The Team plus the Customer have to decide on the definition of done.

What usually happens, in my experience, is that the DOD is created by the Team only, because the customer (or PO) is not really excited to participate, so this activity is usually skipped; the customer only understands its importance and consequences when major project issues appear. That's why it is the Scrum Master's job to involve the customers (POs) as much as possible.

When to define it

Ideally at the start of the project. But if something goes wrong, you have to stop and rework the definition of done to prevent the problem. A retrospective is a good time for changing the DOD.

Where to store it

The definition of done is a document. The electronic version is stored in SCM or on a wiki site. A printed version has to be "before the eyes" of the Team/SM/PO. It is a good idea to put a copy near the board, so everyone sees it at the stand-up meeting etc.

Different levels of granularity

A DOD cannot cover all aspects of a project, so it has to be granulated: a DOD for the release, a DOD for the iteration, a DOD for a feature, a DOD for a task and so on.

How to control the DOD

Automation

Automate everything you can; if possible, try to automate rules from the DOD (like using an SVN hook to control commit comments). Add a static analyzer to the build (code quality).

Fixed workflow

Define a workflow in the issue tracking system. Take code review: the workflow has to be changed to make "Code review" a mandatory state.

Responsible persons

Execution of the DOD is not possible without responsible persons: the Scrum Master (the person responsible for the process), the Team Leader, QA. It should be one person!

Typical problems

  • No common understanding - the definition of Done has to be reviewed from time to time, to keep it clear for everyone
  • No commitment - "I understand it is important... but I ignore it." Everybody has to agree. A good idea is to show the DOD to job candidates at the interview
  • Unrealistic criteria - for instance, 100% code coverage
  • Too ideal a DOD - you try to think about everything, and it becomes hard to apply. From time to time, clean up the DOD
  • Partially done tasks are accepted - typically during a transition. This is a problem; don't lie to yourself
  • The "broken windows" principle - if someone starts to ignore a rule, everybody starts to ignore it
  • "Development only" DOD - caring only about the development side of the process and forgetting the rest (docs, QA, deployment)

Agileee 2010: Timofey Yevgrashyn: Agile in Ukraine

Disclaimer: the text below is a compilation of notes I made at the Agileee 2010 conference while listening to different speakers. I do this to keep the knowledge I got at the conference and to share it with my colleagues and anyone else who is interested. It is only my hearing, interpretation and write-up of the original speech, and it includes my subjective opinion on some topics, so it may not reflect the author's opinion and original ideas 100%.

A speech by Timofey Yevgrashyn, a pioneer of the Ukrainian Agile movement, sharing his vision of the history of Agile in Ukraine.

Milestones

  • 2001: the Agile Manifesto is signed, the dotcom crash happens, outsourcing grows... and many Ukrainian outsourcing companies appear
  • 2005: the first Scrum/XP projects in Ukraine, the first certified Scrum Master
  • 2007: the first Scrum meeting in Ukraine
  • 2010: a lot of Agile conferences (Agileee, IT Jam, Agile Base Camp etc.)

Ukrainian Agile

Timofey ran a survey to gather information about the actual state of Agile in Ukraine, which he shared in his speech. Here are some facts from it.

  • 300 Ukrainian CSMs (Certified Scrum Masters) out of ~104,000 in the world
  • Ukrainian teams are the "industry average"
  • The weakest areas of Ukrainian teams: retrospectives and planning

Timofey emphasized that we have to move forward, but there is no new methodology; Agile is already "sold"... but not implemented. A lot of shops are ready to adopt Agile practices and try to do so, but not in the right way (eventually it ends up with blaming Agile for not working). Ukrainian Agile adopters have to avoid "cherry picking" and implement all the required practices (by the book, using the Shu approach).

Allan Kelly, who was present at the session, made several comments that Ukrainian Agile people behave and make the same mistakes as in the UK and in the whole world. Later he posted a very nice tweet: "#agileee Its reverse-Tolstoy again: All UnAgile teams are UnAgile in the same ways, all Agile teams are Agile in their own unique way".