Alexander Beletsky's development blog

My profession is engineering

Extension methods and clean code

Extension methods are one of my favorite features of the C# language. They appeared in version 3.0 and quickly became very popular.

The main goal of extension methods is to provide the ability to extend any class without creating derived classes, modifying the original type, or resorting to other hacks. They let you extend any type with new functionality in a very seamless fashion. What is also great about extension methods is that they beautifully emulate the behavior usually called pipelining (as in F# or Bash) and make it easy to implement the Chain of Responsibility pattern.
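As a quick illustration (the `Shout` method and its name are made up for this example), an extension method is just a static method in a static class whose first parameter is marked with `this`; the compiler then lets you call it with instance syntax, which is what makes chained calls read like a pipeline:

```csharp
using System;

public static class StringExtensions
{
    // Extends System.String without touching it or deriving from it.
    public static string Shout(this string text)
    {
        return text.ToUpper() + "!";
    }
}

public class Program
{
    public static void Main()
    {
        // Called as if it were an instance method of string.
        Console.WriteLine("hello".Shout()); // HELLO!
    }
}
```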

But most important for me, extension methods help me keep my code clean. Clean code is something I care about a lot nowadays.

I recently started practicing code katas, which I find essential for any developer who cares about keeping their saw sharp. After several iterations I came up with an extension-methods implementation that I'm pretty happy with. I'm using Roy Osherove's StringCalculator kata. It is about implementing a simple Add method that:

  • Takes numbers separated by a delimiter, as a string
  • Determines whether a custom delimiter is set
  • Splits the original string into an array of number tokens
  • Validates the input (no negatives allowed) and returns the sum of the numbers, ignoring numbers greater than 1000
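To make those requirements concrete, here are a few sample calls following the usual conventions of the kata (a custom delimiter is declared with a "//<delimiter>\n" prefix; the expected results below are illustrative):

```csharp
Add("1,2,3");      // -> 6
Add("1\n2,3");     // -> 6 (newline is a default delimiter too)
Add("//;\n1;2");   // -> 3 (custom delimiter ';')
Add("2,1001");     // -> 2 (numbers greater than 1000 are ignored)
Add("1,-2");       // -> throws, because negatives are not allowed
```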

I would like to show you both implementations and evaluate them from a “clean code” point of view.

The original one (this is, of course, a slightly “unwound” version; I had a different structure with smaller methods, but the idea is the same):

public int Add(string numbers)
{
    var delimiters = new[] { ",", "\n" };

    if (IsCustomDelimeterProvided(numbers))
    {
        delimiters = GetDelimitersFromNumbers(numbers);
    }

    var processed = numbers;
    if (IsCustomDelimeterProvided(numbers))
    {
        processed = processed.Substring(processed.IndexOf('\n') + 1);
    }

    if (IsContainDelimeters(processed, delimiters))
    {
        var splittedNumbers = numbers.Split(delimiters, StringSplitOptions.None);
        var validation = new NumbersValidation();

        foreach (var number in splittedNumbers)
        {
            validation.Check(number);
        }

        validation.Validate();

        return splittedNumbers.Sum(p => GetIntegerWithCondition(p, IgnoreNumbersMoreThatThousand()));
    }

    return Convert.ToInt32(processed);
}

Do you think this code is clean? I don't think so. Of course, it might not be so difficult to understand, but:

  • The method is just too long
  • A lot of if statements make it difficult to follow the control flow
  • It mixes “infrastructure” code (splitting, validation) with the primary functionality (summing the numbers)

Let’s try to read it: I get the numbers and check whether a custom delimiter is set at the beginning of the numbers string; if so, I try to extract the custom delimiters from the original string. Then I pre-process the string to remove the custom delimiter prefix, so it does not break the rest of the function. If the numbers string contains the delimiters, I split it up and perform validation with a helper object (which throws an exception if something is wrong). Then I run the Sum algorithm, which converts each string to an integer and ignores it if it is greater than 1000. Otherwise, the method just tries to convert the string to an integer and returns it.

A lot of words and a lot of ifs, isn’t it? That’s not good.

Now, my latest implementation, using extension methods:

public int Add(string numbers)
{
    var defaultDelimiters = new string[] { ",", "\n" };
    var delimiters = numbers.CustomDelimiters().Concat(defaultDelimiters).ToArray();

    return numbers.Replace(" ", "").Split(delimiters, StringSplitOptions.RemoveEmptyEntries)
        .RemoveSpecialSymbols().ToIntegersArray().ValidateIntegersArray().IgnoreIntegersGreatThanThousand().Sum();
}

Do you feel the power? I definitely do!

I believe this code is very clean. It basically does not require any comments, because it reads like a “plain English” explanation of what the functionality is all about! Anyway, let’s try to read it: I extract the custom delimiters from the numbers string and concatenate them with the default delimiters. I replace all spaces with an empty string (note: this step is not in the original requirements, but I added it to keep the code robust), then split the string on the delimiters, ignoring empty entries. After that I remove all special symbols from the number tokens and convert the result to an array of integers. I validate this array (no negatives) and ignore any numbers greater than a thousand. At the end I sum everything up and return the result.
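The actual implementations of these extension methods are in the repository linked below; to give the idea, a couple of them might look roughly like this (a sketch, not the exact code from the kata):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class StringCalculatorExtensions
{
    // Converts the split tokens into integers.
    public static int[] ToIntegersArray(this string[] tokens)
    {
        return tokens.Select(t => Convert.ToInt32(t)).ToArray();
    }

    // Throws if any negatives are present; otherwise passes the array
    // through unchanged, which is what allows the calls to chain.
    public static int[] ValidateIntegersArray(this int[] numbers)
    {
        var negatives = numbers.Where(n => n < 0).ToArray();
        if (negatives.Any())
        {
            throw new ArgumentException("Negatives not allowed: " + string.Join(",", negatives));
        }

        return numbers;
    }

    // Filters out numbers greater than 1000, as the kata requires.
    public static IEnumerable<int> IgnoreIntegersGreatThanThousand(this IEnumerable<int> numbers)
    {
        return numbers.Where(n => n <= 1000);
    }
}
```

Because each method takes the result type of the previous one and returns something the next one can consume, the whole chain composes naturally, ending in LINQ's Sum().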

What is good about this code:

  • The method is very short
  • All details are hidden
  • The control flow is very straightforward
  • All dependent methods have meaningful names
  • The method does exactly what it is supposed to do

Conclusion

Well, don’t get me wrong here. I’m not saying that extension methods are now the only way of solving such issues. Of course not. But if you smell a pipeline or the Chain of Responsibility pattern, extension methods are the right choice. As always, you should be careful not to “overplay” any particular language or framework feature.

You can use the links I gave above to read about the technical details on MSDN. Check out the full implementation and tests as my example of usage.

How to export SQL data script for SQL Express 2008 R2

Before deploying a new version of my application I wanted to make sure that everything was all right by doing a development test on my machine. I backed up the production database and copied it to my local environment. The task was easy: restore the database from the backup, run all unit/integration tests, and run a manual smoke test of the application. But as I started to restore the database, I got this error from SQL Express Management Studio:


TITLE: Microsoft SQL Server Management Studio
------------------------------

Restore failed for Server 'LOCAL\SQLEXPRESS'.  (Microsoft.SqlServer.SmoExtended)

------------------------------
ADDITIONAL INFORMATION:

System.Data.SqlClient.SqlError: The database was backed up on a server running version 10.50.1600. That version is incompatible with this server, which is running version 10.00.2531. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server. (Microsoft.SqlServer.Smo)

Basically it meant that my production and local environments were different. In production I have SQL Express 2008 R2; locally I have SQL Express 2008. It was a surprise to me, because WebPI shows that I’m upgraded to R2.

Anyway, I had no time to upgrade my local environment, so I used a very simple workaround that might work for you too, as long as you don’t have a very big database to handle.

There is a feature of SQL Express (and SQL Server) that allows you to export your database as a SQL script; you can then run this script on a local SQL instance and get exactly the same database.

It is an easy task, but doing it initially I successfully exported the schema only, not the data itself, so the generated script didn’t contain any INSERT statements. As it turns out, you have to change some default options to export both schema and data. Follow these instructions:

  1. Expand your database list and right-click the target database.
  2. Select the “Tasks” -> “Generate Scripts” context menu item. The Generate Scripts wizard will open.
  3. Press “Next” on the welcome screen and select your database from the list.
  4. Make sure you check the “Script all objects in the selected database” option if you want to script the whole database. In case you need only a particular table, you can select it later. Click “Next”.
  5. On the script options step, make sure to select the “Script Data” option (this is exactly what I missed, which is why I didn’t understand why the data was not present in the final script).
  6. Proceed to the end of the wizard to complete it.

At the end you will get a SQL script with schema and data that you can run on SQL Express 2008 and get exactly the same database as on R2.
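As a side note, if you prefer the command line for the import step, the generated script can also be executed with the sqlcmd utility that ships with SQL Express (the instance name and file name below are just examples):

```shell
# Run the exported script against the local SQL Express instance
# using Windows authentication (sqlcmd's default).
sqlcmd -S .\SQLEXPRESS -i ExportedDatabase.sql
```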

This helped me quickly work around the “version incompatibility” issue; I hope it might help you as well.

Val Gardena (Dolomites) trip 2011

It is over. It was one of my best vacations ever. We had a chance to see a lot of beautiful spots, especially in Venice and Graz, and had great snowboarding on the slopes of the Val Gardena area.

As I said in a previous post, this time we planned our trip more carefully: no long night driving, staying in hotels overnight to rest. We left Kiev early in the morning (about 3:00) and headed to the western Ukrainian border, to the small town of Astey, where we crossed into Hungary. This time we were a team of two cars. Connected by radio, we felt like professional rally drivers on a long test-drive trip :).

Dmytro (the manager of our trip) booked a very nice hotel, All-4 U, in the center of Budapest. Staying in the center has its pros and cons, of course. It is good that you can walk to the main city POIs, but it was very (very!) hard to find the hotel and a parking place for the car. The center of Budapest is under heavy construction now, so our GPS guided us through streets that are currently closed to traffic. We finally found the hotel, but it was located on a very small street with no free parking places, so we did several circles before finally hitting the entrance and the underground parking.

So, in the morning we had a quick breakfast and went to Venice, Italy. Nothing is as good as driving on European highways: it is easy, safe, and fast. Our path to Italy went through Slovenia. Slovenia is a very beautiful country, and we very much enjoyed the ride. Around 20:00 we arrived at the hotel Alteri near Venice. After a quick shower and preparation, we got bus tickets and went into the city. Since we were really hungry, the first place to visit was a restaurant. The owner of the hotel recommended a place and we went there. It was quite a small place, but the food and wine were just perfect, very tasty. I was a little nervous about the price, but as I glanced at the menu I was happy to see that the prices were not too high; they were even lower than in some expensive restaurants in the center of Kiev. But can you compare Kiev and Venice, with its number of tourists and attractions? No way.

Venice is a town on water. No transport, no cars, nothing. Only walking, or taking gondolas along the canals. Since we came at night, there were not many tourists in town. At times it really created the illusion that we were alone there! So quiet, such small streets with small bridges... so magical! If you are not very familiar with Venice, it really feels like a maze. We got lost several times, but it was really fun. Andrey, the guy who held the map and showed directions, was doing a great job saving us, but it went like this: “Now, let’s turn right, pass the bridge, turn left and right again,” and so on, and after 10 minutes we were back at the same place we started from :). Fantastic.

Venice streets

Main channel

Beautiful bridge

The next morning we had just a few hours for a walk, so we hurried to Venice again. In the morning it looks really different. You can see many more details, of course (like how old all those buildings really are, or the color of the water in the canals), but now it is overcrowded with tourists and souvenir dealers. So the magic was just gone :). But a lot of shops and churches are open, so you get a chance to visit those.

Sashka near one of the channels

Together on San Marco

Gondolas

The weather was fine, but the wind was quite cold. Late January is probably not the best time to visit Venice, but I still took a lot of pleasure in being in this legendary place.

We packed our bags and went to the target point of our journey, St. Christina. We drove the highway in very sunny weather, seeing a lot of vineyards. The view outside was so warm that it was difficult to imagine that somewhere it was cold, with snow-covered mountain peaks.

We came to St. Christina in the evening; it was -4C and we had climbed to 1650m above sea level. It was already dark, so we could not see the mountains, but the air was so fresh and so nice that it could only be the Alps. We were excited about the next 6 days! Val Gardena is huge, and you can snowboard in a different place every day. So it was a good idea to get a good rest before exploring the mountains.

Val Gardena met our expectations for sure! I’ve never seen such exceptional views as from the tops of Seceda or Marmolada! Really, really beautiful landscapes. The killer feature of the Val Gardena area is the Sellaronda, a circuit around the Sella mountain massif: a big number of slopes and lifts that make it possible to feel all the beauty of the Dolomites. You can do the circuit in two directions, the green one and the orange one. They are mostly equal; the orange one just contains a few more red and black (more difficult) slopes. The green one can be used by everyone, even if you are not so confident on skis or a snowboard. It took about 3-4 hours to complete the circle.

Sella

Great sunny day

Sashka

Me going down the slope

The biggest disadvantage of Val Gardena is that it is overcrowded. Despite the huge area, you will typically stand in long lift lines on the popular slopes (especially the ones on the green or orange route). Besides that, just before our stay the weather there had been quite warm, without big snowfalls, which made the slopes very hard and icy.

Marmolada is the highest point of the resort. According to Wikipedia, it can even be seen from Venice on a clear day. We dedicated one day to a trip to Marmolada. It took Sasha and me almost 6 hours to get to the top and back to St. Christina. Even though it took a lot of time to get there, with long waiting lines, as soon as you take in the view from the top you immediately forget about all the bad things!

Exceptional view

European Rocky Mountains :)

And I really, really enjoyed the run from Marmolada back to the valley. That was fantastic.

We spent one evening in Ortisei looking for fun public places, but found nothing, except one small bar where ski instructors were getting drunk after hard working days :). Unlike in Ischgl, après-ski is not so popular in Italy, so after snowboarding we usually went home, cooked food, and had our own small parties.

The 6 days of snowboarding went by very fast. Originally I had planned to read some books and do some small coding, but in the evenings I was so tired that after a good dinner, usually with grappa or other alcohol, all I could do was crawl in the direction of the bedroom :). Our snowboarding time was over, but the way back home, with visits to Graz and Lviv, was waiting for us. Moreover, I had left my camera in the Budapest hotel, so we had to stop there to get it back :).

Graz is quite a big city in Austria. Since we did not have a lot of time, we visited only two of Graz’s spots: the Grazer Schloßberg and the Kunsthaus. The Schloßberg is a castle on a mountain; you can get there either on foot or by lift. From the mountain you get very nice views of Graz.

Way to Schloßberg

View from mountain

Kunsthaus

The Kunsthaus is a museum of modern art. Unfortunately, the main space of the museum was closed at the time we visited, and the stuff I saw didn’t impress me much. We paid for tickets, entered the hall, saw several small rooms with devices pretending to be robots, and then a museum worker said: “That’s basically it, goodbye.” Not fair.

We spent some time finding a place to eat. Some restaurants were already closed at that hour; some were so luxurious, with men in tuxedos and ladies in long evening dresses, that we could not fit into their company :). I started to lose hope, trying to recall whether we still had those crackers at the hotel, but finally we found something. Austrian dishes differ from Italian ones. I liked the Italian ones more, but it was good and tasty anyway.

So, the next morning we headed directly back to Ukraine, with one small stop in Budapest to get the camera back :). We did the route fast and easily and were soon at Ukrainian customs. The faces of the customs workers immediately return you to “Ukrainian reality”. Fortunately, they were too lazy to check our car, because we had slightly violated the limit on the amount of alcohol you are allowed to bring in. After customs you are completely back to earth, given the roads you have to drive. Leaving Ukraine is easy, but getting back is very depressing :). The roads are awful, full of holes, and the lack of road markings and lighting makes them really hard to drive. The situation is complicated by the big number of trucks moving from the Ukrainian border at low speed, so you constantly have to practice overtaking. At night, with limited visibility, it is difficult, scary, and dangerous. The first car’s crew helped us a lot by moving ahead and giving directions and recommendations for safe overtaking.

I can hardly imagine people from Europe driving to Ukraine to see Euro 2012.

In spite of everything, we came to Lviv, one of the most beautiful places in Ukraine. Similar to Budapest, the center of Lviv is not very car-friendly: small streets, block pavement, and trams make for problems. But now we were home, feeling that most problems were in the past :). As we approached Lviv, we booked an apartment to stay in and a table at one of the best places in Lviv, called Kumpel’. They brew great beer (I especially like the dark one) and serve very nice dishes. After ~600 km of road, having a big glass of cold, tasty beer is great fun, believe me :).

Walking Lviv streets

Chimera on house

Delicious Ukrainian “salo” with dark bread and garlic - great taste

We got to Kiev, mostly without serious incidents, at 15:00 the next day. I mostly slept all day, with a little driving just before Kiev.

“East or west, home is best,” as the saying goes. We finally entered our own city. Having said goodbye to the guys over the radio with a “see you next year,” we landed near my house to unpack the bags and snowboards.

That was a great trip for sure! We did 3,908 kilometers in total. We had no injuries and no car damage. We visited the greatest places and had snowboarding that I’ll be seeing in my dreams all summer. You can find some more pictures here. Now it is time to work again, to learn new things, to blog, and to self-improve. Thanks a lot if you read to this point :).

Last workday before vacation

Today is my last day before vacation, and I’m very excited about that!

Last year my wife, my friends, and I did a great quest from Kiev to Ischgl, Austria. It was my first experience driving such a big distance (the total drive was ~4500 kilometers). We were too optimistic, trying to do the whole road at once, without any overnight stay. We made it through, but it was too difficult :). So when we finally hit Ischgl, we were so tired and exhausted that even the beauty of the snowy Alps did not please us. Fortunately, that feeling was gone after we got a good dinner and sleep (I remember I slept for 15 hours then). Regardless of the hard road, the snowboarding in Austria is just fantastic! We got a lot of joy from long runs, beautiful slopes, great weather, and fresh air! You can find some photos in my Picasa.

This year we are smarter, more experienced, and want more! In September we started to plan the next adventure, this time to the Italian Alps (the Dolomites). We will stay in Santa Cristina, Italy, which gives us access to the best snow resorts of the Val Gardena area. This is not the only place we are going to visit. We have an overnight stay in Budapest, Hungary (last time we had only a quick chance to look at it, but I would say it is one of the most beautiful cities I have ever seen) and in Venice, Italy. On the way home we are also planning to visit Retz, Austria. This time we are going to combine the fun of snowboarding with the excitement of traveling in Europe. Venice is the place I have dreamed of visiting, and I’m very happy that my dream will come true soon :).

This year I’m leaving for vacation in a very good mood. I’ve just rolled out the next version of trackyt.net, so it is upgraded to 1.0.3 :), with a lot of issues closed. But mostly it’s because of the very good progress we are currently making with my fine team at E-conomic. I’m happy to know that during my absence everything will remain under the same control and keep performing well.

So, it’s time for me to recharge my batteries a little :). See you in several weeks, guys!

Implementation of REST service with ASP.NET MVC

Now that we are clear on what REST is all about and how to verify REST methods, it is time to implement our own service. I’ll create it from scratch, only reusing some data structures from my other projects and the test framework.

Project vision and goal

Assume we are managing a blogging service with a bunch of customers. Customers are pretty happy with the service, since they can post new blog posts, collect comments, build social networks, etc. But since we have already stepped into the “API epoch,” customers have started to want more. Namely, they want an API so they can work with their data from their own applications. Vendors demand an API to create new cool editors for our blog service. The CEO wants us to create an API because he has just found out that applications without APIs are doomed. The business goal is clear, so let’s implement it. We are going to create a REST-style API based on JSON as the data exchange format. The API will allow users to get all posts, create a new post, and delete an existing post.

Set it up

I’ve created an empty ASP.NET MVC 2 application in Visual Studio and added it to GitHub (please don’t be confused by the bunch of other folders you see in the solution; they are part of the UppercuT and RoundhousE frameworks that I use for all my projects). This application is the host of the new REST service. We are going to use MVC 2 framework functionality to implement it.

Initial project content

In the Model folder of the application I added a LINQ to SQL Classes item and dragged the BlogPosts table from the restexample database onto the designer, so a new RestExampleDataContext class was created with the BlogPost entity as part of it.

[global::System.Data.Linq.Mapping.DatabaseAttribute(Name="restexample")]
public partial class RestExampleDataContext : System.Data.Linq.DataContext
{
    // implementation...

[global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.BlogPosts")]
public partial class BlogPost : INotifyPropertyChanging, INotifyPropertyChanged
{
    // implementation

I’ve added some simple data to the database, to be used by the tests:

insert into BlogPosts (Url, Title, Body, CreatedDate, CreatedBy)
values ('my-post-1', 'My post 1', 'This is first post', CAST('2011-01-01' as datetime), 'alexander.beletsky');
insert into BlogPosts (Url, Title, Body, CreatedDate, CreatedBy)
values ('my-post-2', 'My post 2', 'This is second post', CAST('2011-01-02' as datetime), 'alexander.beletsky');
insert into BlogPosts (Url, Title, Body, CreatedDate, CreatedBy)
values ('my-post-2', 'My post 3', 'This is third post', CAST('2011-01-03' as datetime), 'alexander.beletsky');

API Interface

The interface we are going to implement looks like this:

http://localhost/api/v1/posts/get/{username}/{posturl}
http://localhost/api/v1/posts/all/{username}
http://localhost/api/v1/posts/post/{username}
http://localhost/api/v1/posts/delete/{username}/{posturl}
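Once implemented, these endpoints can also be exercised quickly from the command line with curl (the user name, post data, and URLs below are just examples):

```shell
# get all posts of a user
curl http://localhost/api/v1/posts/all/alexander.beletsky

# create a new post (JSON payload in the POST body)
curl -X POST -H "Content-Type: application/json" \
     -d '{"Title":"My post","Body":"Post body"}' \
     http://localhost/api/v1/posts/post/alexander.beletsky

# delete a post by its url
curl -X DELETE http://localhost/api/v1/posts/delete/alexander.beletsky/my-post
```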

For each of these interface methods, I’m adding integration JavaScript tests in the same way I described here. Since we don’t have an implementation yet, all of them are failing now.

Please take a look at those tests before proceeding to the implementation part; it will make some things clearer.

API folder structure

It is a question of choice, but I prefer to put all API-related code into a separate folder called (who would guess?) “API”. It is really similar to an Area in its structure: it has Controllers, Models, and a registration class. It does not have any Views, since the API does not expose any UI.

Routing

If I were asked to describe what an ASP.NET MVC 2 application is about, I would answer: “It is a mapping between an HTTP request, with a particular URL, and the corresponding method of a handler class. The handler is called a controller; the method is called an action.” So the primary goal of an MVC application is to define such mappings. In MVC terms, this mapping is called routing. It is all about the routing.

Let’s take a look at our interface one more time; from it we come up with the following routing definition for the API:

using System.Web.Mvc;

namespace Web.API.v1
{
    public class ApiV1Registration : AreaRegistration
    {
        public override string AreaName
        {
            get { return "ApiV1"; }
        }

        public override void RegisterArea(AreaRegistrationContext context)
        {
            context.MapRoute(
                "ApiV1_posts",
                "api/v1/posts/{action}/{userName}/{postUrl}",
                new { controller = "APIV1", postUrl = UrlParameter.Optional });
        }
    }
}

API Controller

After the routing is defined, it is time to add the actual handler: the controller class. Initially it is empty, without any actions, just the initialization of the context object.

namespace Web.API.v1.Controllers
{
    public class ApiV1Controller : Controller
    {
        private RestExampleDataContext _context = new RestExampleDataContext();

        // actions..
    }
}

API Actions implementation

We’ve completed the infrastructure needed to start the implementation: solution, project, interface, tests, routing, controller. Now it is time for the actions.

Get all posts method

http://localhost/api/v1/posts/all/{username}

It receives a username as a parameter and is expected to return all blog posts belonging to that user. The code is:

[HttpGet]
public JsonResult All(string userName)
{
    var posts = _context.BlogPosts.Where(p => p.CreatedBy == userName);

    return Json(
        new { success = true, data = new { posts = posts.ToList() } }, JsonRequestBehavior.AllowGet
    );
}

The signature of the action method says: respond to the HTTP GET verb, get all records with the corresponding userName, and return them as JSON.

The Json method of the Controller class is a really cool feature of the MVC 2 framework. It receives an anonymous-type object and serializes it to JSON. So the new { success = true, data = new { posts = posts.ToList() } } object will be serialized into:

{"success":true,"data":{"posts":[{"Id":1,"Url":"my-post-1","Title":"My post 1","Body":"This is first post","CreatedDate":"\/Date(1293832800000)\/","CreatedBy":"alexander.beletsky","Timestamp":{"Length":8}},{"Id":2,"Url":"my-post-2","Title":"My post 2","Body":"This is second post","CreatedDate":"\/Date(1293919200000)\/","CreatedBy":"alexander.beletsky","Timestamp":{"Length":8}},{"Id":3,"Url":"my-post-2","Title":"My post 3","Body":"This is third post","CreatedDate":"\/Date(1294005600000)\/","CreatedBy":"alexander.beletsky","Timestamp":{"Length":8}}]}}

Nice and clean.

Take note of JsonRequestBehavior.AllowGet. This is a special flag you have to pass to the Json method if it is being called from a GET handler method. It exists to prevent JSON hijacking attacks. So if your API call returns user-sensitive data, you should consider POST instead of GET.

Let’s re-run the test suite and see that the first test is now green.

Create new post method

http://localhost/api/v1/posts/post/{username}

It receives a username as a parameter and the blog post content as the POST body payload. The code is:

[HttpPost]
public JsonResult Post(string userName, PostDescriptorModel post)
{
    var blogPost = new BlogPost {
        CreatedBy = userName,
        CreatedDate = DateTime.Now,
        Title = post.Title,
        Body = post.Body,
        Url = CreatePostUrl(post.Title)
    };

    _context.BlogPosts.InsertOnSubmit(blogPost);
    _context.SubmitChanges();

    return Json(
        new { success = true, url = blogPost.Url });
}

private string CreatePostUrl(string title)
{
    var titleWithoutPunctuation = new string(title.Where(c => !char.IsPunctuation(c)).ToArray());
    return titleWithoutPunctuation.ToLower().Trim().Replace(" ", "-");
}

where PostDescriptorModel is

namespace Web.API.v1.Models
{
    public class PostDescriptorModel
    {
        public string Title { get; set; }
        public string Body { get; set; }
    }
}

If you try to run this example just like that, you will see that the PostDescriptorModel instance is null: MVC 2 cannot bind a JSON payload out of the box. But if you google a little, you will find an article by Phil Haack where he addresses exactly the same issue: Sending JSON to an ASP.NET MVC Action Method Argument. Support for JSON in action method arguments is implemented in the MVC 2 Futures library (a library of useful extensions that are not yet part of the framework, but most likely will be). Download it via this link, add a reference to the project, and in Global.asax.cs add a JsonValueProviderFactory:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    RegisterRoutes(RouteTable.Routes);

    ValueProviderFactories.Factories.Add(new JsonValueProviderFactory());

    // ...

Important:

  • In case you are using the MVC 3 framework, you do not need to include the MVC Futures assembly, since JsonValueProviderFactory is already included in MVC 3.

If I re-run the tests, I see that the “create new post” test is still red. That’s because the “get post” API method is not implemented yet.

Get post by url method

http://localhost/api/v1/posts/get/{username}/{posturl}

It receives the post URL and returns the blog post object in the response. The code is:

[HttpGet]
public JsonResult Get(string userName, string postUrl)
{
    var blogPost = _context.BlogPosts.Where(p => p.CreatedBy == userName && p.Url == postUrl).SingleOrDefault();

    return Json(
        new { success = true, data = blogPost }, JsonRequestBehavior.AllowGet);
}

The last red test is the “delete post” test, so let’s implement the delete API call.

Delete post by url method

http://localhost/api/v1/posts/delete/{username}/{posturl}

It receives the post URL and returns a status in the response. The code is:

[HttpDelete]
public JsonResult Delete(string userName, string postUrl)
{
    var blogPost = _context.BlogPosts.Where(p => p.CreatedBy == userName && p.Url == postUrl).SingleOrDefault();

    _context.BlogPosts.DeleteOnSubmit(blogPost);
    _context.SubmitChanges();

    return Json(
        new { success = true, data = (string)null });
}

Now all tests are green. Fantastic!

Handle Json Errors

What happens if an exception is thrown within an API method? Let’s create a test and see:

test("fail method test", function () {

    var method = 'posts/fail';
    var data = null;
    var type = 'GET';
    var params = ['alexander.beletsky'];

    var call = createCallUrl(this.url, method, params);

    api_test(call, type, data, function (result) {
        ok(result.success == false, method + " expected to be failed");
        same(result.message, "The method or operation is not implemented.");
    });
});

And add the implementation of the failing method:

[HttpGet]
public JsonResult Fail()
{
    throw new NotImplementedException();
}

If I run the test, I’ll see this result:

This is not very graceful. It is expected that the JSON response would contain false in the success attribute and that message would contain the actual exception message.

Of course, it is possible to wrap all methods in a try/catch block and return the corresponding JSON in the catch block, but this violates the DRY (don’t repeat yourself) principle and makes the code ugly. It is much better to use MVC 2 method attributes for that.

So we define a new attribute that handles errors: if an exception is thrown within an action method, the exception is wrapped in a JSON object and returned as the response.

namespace Web.Infrastructure
{
    public class HandleJsonError : ActionFilterAttribute
    {
        public override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            if (filterContext.HttpContext.Request.IsAjaxRequest() && filterContext.Exception != null)
            {
                filterContext.HttpContext.Response.StatusCode = (int)System.Net.HttpStatusCode.InternalServerError;
                filterContext.Result = new JsonResult()
                {
                    JsonRequestBehavior = JsonRequestBehavior.AllowGet,
                    Data = new
                    {
                        success = false,
                        message = filterContext.Exception.Message,
                    }
                };
                filterContext.ExceptionHandled = true;
            }
        }
    }
}

Add this attribute to the method definition:

[HttpGet]
[HandleJsonError]
public JsonResult Fail()
{
    throw new NotImplementedException();
}

And I’m happy to see that all tests are passing now!

Since we need the same behavior for all API calls, it is better to add this attribute to the class instead of each method.

namespace Web.API.v1.Controllers
{
    [HandleJsonError]
    public class ApiV1Controller : Controller
    {
        // code..

Examples and code for reuse

All code is re-usable and available on my github repository - https://github.com/alexbeletsky/rest.mvc.example.

Testing REST services with javascript

In the previous article we reviewed the general concept of REST. Now we will implement a basic REST service, and our approach will be: test the methods first, implement them later. I'm talking about a kind of integration test - tests that act exactly like your client, making real calls to storage and returning real results. I will use jQuery and qUnit as my weapons of choice. As with FuncUnit, it is easy and fun to create these tests.

Why should I start with tests? Pretty simple: by implementing tests first, you are looking at your service as a client, not as a developer. When I was working on version 1 of my REST API I didn't do any tests, basically because I didn't know how to write them. When I was ready and started implementing the client code and documentation, I found major API issues that I had no time to solve. Those issues were related to design, security, formats and convenience of use. TDD principles work the same here: clear and simple design through a series of tests.

Simple framework

I rely on jQuery and qUnit. The jQuery $.ajax method is used to send and receive data, and all tests are written in qUnit fashion. What else is good to have: a small wrapper function for API calls that does initial verification of the results and works synchronously. Why synchronous? Because tests are not an application, and you do not need the benefits of async calls. Asynchronous behavior requires additional effort to synchronize results. Even though qUnit supports asynchronous testing, it should be avoided where possible, since it makes test code harder to write and read. So, I came up with this implementation:

function api_test(url, type, data, callback) {
  $.ajax(
    {
      url: url,
      type: type,
      processData: false,
      contentType: 'application/json; charset=utf-8',
      data: JSON.stringify(data),
      dataType: 'json',
      async: false,
      complete: function (result) {
        if (result.status == 0) {
          ok(false, '0 status - browser could be on offline mode');
        } else if (result.status == 404) {
          ok(false, '404 error');
        } else {
          callback($.parseJSON(result.responseText));
        }
      }
    });
}



The internal if/else statement could be extended with any specific result codes you expect. If the API call finishes successfully, the resulting JSON object is parsed and passed to the test callback.
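For instance, the status handling could be pulled out into a small helper. This is a sketch of my own, not code from the article's test suite - the extra status codes (401, 5xx) are just examples of codes you might decide to expect:

```javascript
// Illustrative sketch: map an XHR status to a failure message, or return
// null when the call succeeded and the test callback can be invoked.
// Codes other than 0 and 404 are assumptions about what you might expect.
function statusFailureMessage(status) {
  if (status === 0) {
    return '0 status - browser could be in offline mode';
  } else if (status === 401) {
    return '401 error - api token missing or invalid';
  } else if (status === 404) {
    return '404 error';
  } else if (status >= 500) {
    return status + ' error - server side failure';
  }
  return null; // success path: parse the JSON and call the test callback
}
```

Inside api_test, the complete handler would then call ok(false, message) when the helper returns a message, and the callback otherwise.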

Also, I found it useful to create a small helper function that constructs the API call URL, based on the base URL, method and parameters:

  // helper
  function createCallUrl(url, apiToken, method, params) {
    var callUrl = url + apiToken + "/" + method;
    for (var p in params) {
      callUrl += "/" + params[p];
    }

    return callUrl;
  }
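As a quick sanity check, here is the same helper with an illustrative usage (the token and parameter values are made up):

```javascript
// Helper from the article: compose a call URL out of the base URL,
// api token, method name and an array of positional parameters.
function createCallUrl(url, apiToken, method, params) {
  var callUrl = url + apiToken + "/" + method;
  for (var p in params) {
    callUrl += "/" + params[p];
  }
  return callUrl;
}

// Illustrative values: base URL, a fake token and one parameter.
var call = createCallUrl('api/v1.1/', 'token', 'tasks/delete', ['112']);
// call is now 'api/v1.1/token/tasks/delete/112'
```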



Setup the environment

All tests require setup. In integration testing we basically rely on an existing environment (the same one that will be used by the real application).

For security reasons, all API calls receive an api token as the first argument. The api token is obtained after successful authentication, so at the start of each test we need to log in, receive the api token and only then proceed with the method tests. In qUnit it is natural to place this code in the module setup.

  module("v11 api tests", {
    // setup method will authenticate to v.1.1. API by calling 'authenticate'
    // it will store apiToken, so rest of tests could reuse that

    setup: function () {
      var me = this;

      this.url = 'api/v1.1/';
      this.apiToken = null;

      // authenticate
      var method = 'authenticate';
      var data = { email: 'tracky@tracky.net', password: '111111' };
      var type = 'POST';

      api_test(this.url + method, type, data, function (result) {
        ok(result.success, method + " method call failed");

        me.apiToken = result.data.apiToken;
        ok(me.apiToken.length == 32, "invalid api token");
      });
    }
  }
  );



The module holds the API URL and token, so they are reusable throughout the rest of the tests. If setup fails to authenticate, all tests will fail, because no call can be made without a token.
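One way to make that failure explicit is to validate the token shape before building any calls. This guard is my own illustration, not part of the article's test suite; it just encodes the same 32-character expectation the setup already asserts:

```javascript
// Illustrative guard: the api token received from 'authenticate' is
// expected to be a 32-character string; anything else means setup failed.
function isValidApiToken(token) {
  return typeof token === 'string' && token.length === 32;
}
```

Each test could call this at the top and bail out with a clear message instead of producing a confusing 404 on a token-less URL.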

Testing methods

I have a number of REST style methods in my API:

http://trackyt.net/api/v1.1/token/tasks/all
http://trackyt.net/api/v1.1/token/tasks/add
http://trackyt.net/api/v1.1/token/tasks/delete/112
http://trackyt.net/api/v1.1/token/tasks/start/112
http://trackyt.net/api/v1.1/token/tasks/stop/112

and so on..
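All these calls follow the same URL shape: api version, token, resource, action and an optional id. Purely as an illustration (this parser is not part of the API), the convention can be made explicit like this:

```javascript
// Illustrative sketch: decompose a call URL of the form
// .../api/{version}/{token}/{resource}/{action}[/{id}].
function parseApiUrl(url) {
  var parts = url.split('/').filter(function (p) { return p.length > 0; });
  var i = parts.indexOf('api');
  return {
    version: parts[i + 1],
    token: parts[i + 2],
    resource: parts[i + 3],
    action: parts[i + 4],
    id: parts[i + 5] || null // id is optional (e.g. for tasks/all)
  };
}
```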

I'll give some examples of tests, so you can follow the main idea:

Get all tasks call test:

  test("get all tasks method", function () {
    var method = 'tasks/all';
    var data = null;
    var type = 'GET';
    var params = [];

    var call = createCallUrl(this.url, this.apiToken, method, params);

    api_test(call, type, data, function (result) {
      ok(result.success, method + " method call failed");

      var tasks = result.data.tasks;
      ok(tasks.length >= 1, "tasks has not been returned");
    });
  });



Test that the get all tasks call receives a deterministic response:

  test("get all tasks returns all required fields", function () {
    var method = 'tasks/all';
    var data = null;
    var type = 'GET';
    var params = [];

    var call = createCallUrl(this.url, this.apiToken, method, params);

    api_test(call, type, data, function (result) {
      ok(result.success, method + " method call failed");

      var tasks = result.data.tasks;
      ok(tasks.length >= 1, "tasks has not been returned");

      var task = result.data.tasks[0];
      ok(task.id !== undefined, "Id field is absent");
      ok(task.description !== undefined, "Description field is absent");
      ok(task.status !== undefined, "Status field is absent");
      ok(task.createdDate !== undefined, "CreatedDate field is absent");
      ok(task.startedDate !== undefined, "StartedDate field is absent");
      ok(task.stoppedDate !== undefined, "StoppedDate field is absent");
    });
  });



Add new task method test:

  test("task add method", function () {
    var method = 'tasks/add';
    var data = { description: 'new task 1' };
    var type = 'POST';
    var params = [];

    var call = createCallUrl(this.url, this.apiToken, method, params);

    api_test(call, type, data, function (result) {
      ok(result.success, method + " method call failed");
      ok(result.data != null, "data is null");
      ok(result.data.task.id > 0, "id for first item is wrong");
    });
  });



Delete task method test:

  test("delete task method", function () {
    var me = this;

    var method = 'tasks/all';
    var data = null;
    var type = 'GET';
    var params = [];

    var call = createCallUrl(this.url, this.apiToken, method, params);

    api_test(call, type, data, function (result) {
      ok(result.success, method + " method call failed");

      var taskId = result.data.tasks[0].id;
      ok(taskId >= 1, "could not get task for deletion");

      var method = 'tasks/delete/' + taskId;
      var data = null;
      var type = 'DELETE';
      var params = [];

      var call = createCallUrl(me.url, me.apiToken, method, params);

      api_test(call, type, data, function (result) {
        ok(result.success, method + " method call failed");
        ok(result.data.id != null, "data is null");
      });
    });
  });



The rest of the tests are available on github; check them out for additional ideas.

Running tests

Like any qUnit tests, they can easily be run in the browser.

For a continuous integration system, they have to be run from the command line. That is easily possible using FuncUnit + Selenium Server, and is described here.

Debugging the tests

Sure, you need to be able to run tests under a debugger to see what might have gone wrong. For debugging test code, there is nothing better than Firebug. Just place a breakpoint on the line you need and press F5 to restart the tests.

If you need to debug the actual API implementation code (which in my case is a C#/ASP.net MVC application), I start the web site under the debugger (F5 in VS2010), place a breakpoint in the corresponding method and press F5 in the browser to restart the tests.

Conclusions

I liked the idea of writing these integration tests in javascript. I was happy with the final results: the interface is stricter and corresponds better to REST principles. It is much faster to write tests in javascript than in C# or Java. Just compare this and this and feel the difference. Write less, get more.

Since javascript is easy to read, it can almost be treated as pseudocode, so the API test suite can serve as developer documentation. If you need to make a call, check the corresponding test; everything is there.

Let’s take a REST

Nowadays REST is becoming so popular that any web developer must take it into consideration while architecting a new application. REST is an acronym for Representational State Transfer, first formalized by Roy Fielding in his PhD dissertation.

What is REST?

It is a style, or pattern, for developing resource-oriented web applications. The beauty of REST is that it is really easy to understand; basically, you are using REST every day without noticing it. REST works on top of the HTTP protocol, but it is not a protocol itself. It seems to me that it actually appeared with HTTP/1.1, but only with Roy Fielding's work did it become well understood, defined and attractive.

REST was popularized by applications such as twitter, flickr, bloglines, technorati etc. And of course, by the Ruby on Rails framework.

Unique ID

The unique ID is a key concept in REST. Everything on the Web is a resource, every resource must be addressable, and each address is unique.

Here are some examples of IDs. Of course, in the world of HTTP, IDs are URIs (Uniform Resource Identifiers):

http://mysite.com/blog/page/1
http://mysite.com/blog/post/my-first-post

What is representation and state transfer?

Let's look at the first URI; it points to the first page of some blog. As a client, I ask for the resource, and a representation of the resource is returned to the client. By receiving the representation, the client transfers (changes) to a particular state. As I ask for the next resource, the next representation comes back, and the new representation changes the client application into yet another state. Between the previous and next states, the client stays at rest. Thus, the client application transfers state with each resource representation - which forms the concept of Representational State Transfer.

It doesn't matter what the exact representation is; it could be HTML, XML, JSON, RSS etc.

Resources and actions

You can perform different actions on resources. REST architecture maps CRUD (Create, Read, Update, Delete) to the set of operations supported by the web service, using HTTP methods (e.g., POST, GET, PUT or DELETE).

GET http://mysite.com/blog/page/1
Gets the representation of a resource. A GET request does not change the state of the server, and typically you do not submit any data with it. You can read the URL as: "get page 1 from the blog located at http://mysite.com".

POST http://mysite.com/blog/entries/entry
Posts the changed state of a resource. With POST you can change the state of the server. A POST contains data in the POST body; it is up to the web server how to treat and use this data. You can read the URI as: "post an entry to the entries collection in the blog located at http://mysite.com".

PUT http://mysite.com/blog/entries/entry/changename/211
Updates an existing resource. PUT is similar to POST, since it changes web server state and contains data in the body. You can do different change actions that update the resource; typically the resource is identified by an ID, like 211. You can read the URI as: "update the entry with id 211, by changing its name, in the entries collection in the blog located at http://mysite.com".

DELETE http://mysite.com/blog/entries/entry/211
Deletes an existing resource. DELETE changes web server state, but typically contains no data in the body. You can read the URI as: "delete the entry with id 211 in the entries collection in the blog located at http://mysite.com".
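The CRUD mapping described above can be sketched as a tiny lookup (illustrative only; the function name is mine, not a standard API):

```javascript
// Illustrative mapping of CRUD operations to the HTTP methods
// described above.
function crudToHttpMethod(operation) {
  var mapping = {
    create: 'POST',
    read:   'GET',
    update: 'PUT',
    'delete': 'DELETE'
  };
  return mapping[operation.toLowerCase()] || null;
}
```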

Logical and Physical URL’s

If you do a lot of classic ASP.net programming, you are probably used to the URL reflecting the physical structure of the application. For instance, http://mysite.com/index.aspx corresponds to c:\inetpub\wwwroot\mysite\index.aspx. REST-style URLs are not physical but logical. That means http://mysite.com/blog/post/1 doesn't have to be backed by a c:\inetpub\wwwroot\mysite\blog\post\1 file with static content.

Clean and logical URLs are one of the attractive points of REST. It moves away from ugly URLs like http://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882/ref=sr_1_1?s=books&ie=UTF8&qid=1293077887&sr=1-1.

From the implementation point of view: to enable REST-style URLs in ASP.net applications, you should either create your own HttpHandler or use existing starter kits and frameworks, like the WCF REST Starter Kit or OpenRasta.

REST vs. SOAP

Sure, REST is not the first approach to the issue of using resources on the Web; it is rather the trendy new kid on the block. We know a bunch of web technologies: SOAP, WSDL, ATOM, WS-*, WCF, OData and many more. So, what is the difference?

The major difference is that all of the above are protocols, while REST is a style. This has pros and cons. Protocols are stricter and more heavyweight, with a number of rules, formats etc. SOAP uses XML as its data exchange format; REST can work with any format, depending on client needs. SOAP uses its own security model; REST relies on HTTP and web server security. SOAP requires tools; REST has a small learning curve and less reliance on tools. SOAP was designed to handle distributed computing environments; REST assumes a point-to-point communication model.

But in my opinion, simplicity always wins against complexity. REST is popular mainly because it is simple and easy for developers to understand - and as a result, it gets implemented in applications. My belief is that SOAP and other heavyweight protocols will slowly die out, and applications will be using REST.

ASP.net MVC and REST

The developers of the ASP.net MVC framework designed it to be REST compatible. At the framework level there is a URL routing system (System.Web.Routing) that allows you to easily follow REST design principles. It gives you total control over your URL schema and its mapping to your controllers and actions, with no need to conform to any predefined pattern.

So, basically, ASP.net MVC web application development is: create a controller class (LoginController, for instance), implement a number of actions (Index, CheckCredentials) and map those actions to particular URLs. For instance, http://mysite.com/login is mapped to the LoginController.Index method, which handles a GET request and returns a view containing the login form; http://mysite.com/login/check is mapped to the LoginController.CheckCredentials method, which handles a POST and checks the user's credentials.

It is much easier to create web application APIs with the MVC framework. The ActionResult is polymorphic, so it can return HTML, JSON or XML results (and you are free to implement your own ActionResult for any format you might need).

Functional testing by javascript with FuncUnit

Functional tests are something that is better to adopt as early as possible to get valuable results. The obvious choices for ASP.net applications are Selenium for .NET, WatiN or SpecFlow. But all of them expect C# as the programming language and run their tests with special runners. From experience I have seen how cool it is to write tests in a fully dynamic language like javascript, and to be able to run the tests directly from the browser so I can debug broken functionality. Then I saw the FuncUnit project, and it really attracted my attention! Here I describe some initial experience of testing an ASP.net MVC application with FuncUnit.

What it is all about?

FuncUnit is a very compact and elegant functional testing framework. It combines the power of jQuery, qUnit, Selenium and Synthetic. It used to be a part of the JavaScriptMVC framework, but now it is available as a separate framework. The major features are:

  • It is only javascript
  • Can run tests in browser mode and from the command line by means of Selenium
  • Simple API
  • Easy to debug
  • Automate your existing qUnit tests
  • Runs in file mode and server page mode

Yes, it is simple: just by reading the documentation and downloading the package, you are ready to start your testing. I managed to cover almost all the functionality of my small application in 3 hours or so.

Integration

Download the FuncUnit package here. It contains all the required javascript files, as well as the Selenium jars, so as long as you have a Java runtime installed on your machine you are immediately able to run tests from the command line. FuncUnit does not dictate what particular framework should be behind your application; it works with any.

It requires that the test page is on the same level as the tested page. At the beginning this confused me, because in ASP.net MVC we don't have pages at all, only controllers and corresponding actions; my existing qUnit tests were placed in a Scripts/Tests/index.html runner and I expected to keep the same layout, but it had to be changed a little for FuncUnit. So, file mode testing is not really applicable for ASP.net MVC; you should go with server page mode.

I placed both the FuncUnit and qUnit test pages at the root of the application. The test code itself is referenced from Scripts/Tests/acceptance.

The contents of FuncUnit are placed in Scripts/Tests/framework, so I got something like:

Test page

It is very similar (even identical) to a qUnit page. Just reference the framework and test scripts.

<html>
  <head>
    <title>Trackyt.net Functional Test</title>

    <link rel="stylesheet" type="text/css" href="http://v3.javascriptmvc.com/funcunit/dist/qunit.css" />
    <script type='text/javascript' src='Scripts/Tests/framework/funcunit/funcunit.js'></script>
    
    <!-- Tests -->
    <script type='text/javascript' src='Scripts/Tests/acceptance/tests.home.js'></script>
    <script type='text/javascript' src='Scripts/Tests/acceptance/tests.signup.js'></script>
    <script type='text/javascript' src='Scripts/Tests/acceptance/tests.bugs.js'></script>
  </head>
  <body>

    <h1 id="qunit-header">FuncUnit Test Suite</h1>
    <h2 id="qunit-banner"></h2>
    <div id="qunit-testrunner-toolbar"></div>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
  </body>
</html>



Test code

As I said, it's fun and easy to write tests with FuncUnit. You have everything a functional testing framework should have: opening pages, reading values from and typing values into controls, waiting for events, clicking and dragging the mouse. The test code/layout is the same as you are used to with qUnit. Each test basically does the following:

  • Open the test page with S.open("page_url_from_root") (e.g. S.open("Admin/Login") will open http://localhost/app/admin/login)
  • Wait for some html elements to appear in the browser, with S.wait();
  • Type values into a control with S('#myControlId').type('bla-bla') and read values with S('#anotherControlId').val();
  • Assert results with the ok, same and equal methods.

I'll show several examples from my code.

A simple one: I check that the user is not able to log in with an empty password:

// I as user could not login with empty password
test("password is empty", function () {
  // arrange
  S('#Email').type('a@a.com');

  // act
  S('#submit-button').click();

  // assert
  S('#PasswordValidationMessage').visible(function () {
    var message = S('#PasswordValidationMessage li:nth-child(1)').html();

    ok(message == "Password is empty", "I as user could not login with empty password");
  });
});



Or that I'm not able to register with an already registered email:

// I as user could not register with same email twice
test("register twice", function () {
  // arrange
  var email = "test" + new Date().getSeconds() + new Date().getMilliseconds() + "@trackyt.net";
  S('#Email').type(email);
  S('#Password').type(email);
  S('#ConfirmPassword').type(email);
  S('#submit-button').click(function () {

    // wait till registration is done
    S.wait();

    // act
    // go back to register page and try to register with same credentials
    S.open("Registration", function () {
      S('#Email').type(email);
      S('#Password').type(email);
      S('#ConfirmPassword').type(email);
      S('#submit-button').click(function () {

        // assert
        S('#PasswordValidationMessage').visible(function () {
          var message = S('.validation-summary-errors li:nth-child(1)').html();
          same(message, "Sorry, user with such email already exist. Please register with different email.", "I as user could not register with same email twice");
        });

      });
    });
  });
});


And a more complex one, reproducing a bug:

// https://github.com/alexbeletsky/Trackyourtasks.net/issues/#issue/20
test("tasks are disappeared", function () {
  // go to sign up page
  S.open("Registration");

  // create new account
  var email = "test_bugs" + new Date().getSeconds() + new Date().getMilliseconds() + "@trackyt.net";
  S('#Email').type(email);
  S('#Password').type(email);
  S('#ConfirmPassword').type(email);

  S('#submit-button').click(function () {
    S('#tasks').exists(function () {
      // create several tasks
      S('#task-description').click().type("fix issue 20");
      S('#add-task').click();
      S.wait(500);

      S('#task-description').click().type("fix issue 20 2");
      S('#add-task').click();
      S.wait(500);

      // and log off of dashboard
      S('#sign-out').click();
    });
  });

  // now sign in with same account
  S.open("Login");
  S('#Email').type(email);
  S('#Password').type(email);
  S('#submit-button').click(function () {

    // wait till tasks are ready
    S('#tasks').exists(function () {
      S.wait(function () {
        var tasks = S('.task').size();

        // tasks created before must exist
        ok(tasks == 2, "tasks created before must exist");
      });
    });
  });
});



I found that these tests are also a kind of documentation! I could document user stories with such tests, and maybe even use tools to generate documentation straight from the test code.

Running tests from browser

During development it is preferable to run tests directly from the browser. To do that, I just select the FunctionalTests.html file in Solution Explorer and press Ctrl+Shift+W, which means "View in browser". Please note that FuncUnit tests run in a pop-up window, so you have to enable pop-ups in your browser. The browser will open http://localhost/virtualdir/FunctionalTests.html and the tests will run. For more convenience I also bookmarked the test page in the browser, so I can always re-run all tests with one click.

Running tests from command-line

A major feature of FuncUnit is, of course, its ability to run from the command line, so you can easily integrate it into your continuous integration system. There is a script, envjs.bat, that starts the tests under Selenium and provides the results as command-line output (you can also run qUnit tests from the command line with envjs.bat).

From the root of web application, I start:

.\Scripts\Tests\framework\funcunit\envjs.bat http://localhost/tracky/FunctionalTests.html

BTW, there is a small issue in envjs.bat that initially stopped me from running it successfully: it does not take into account that it could be invoked not as envjs.bat, but simply as envjs. Please see my version of this file here.

Deployment

Sure, you don't want to make those tests public on your site. There are 2 options: change Web.config to prevent users from accessing FunctionalTests.html and UnitTests.html, or just delete those 2 files before deployment to production. Since I do simple xcopy deployment, option 2 works for me.

Demonstration

To complete the picture, I prepared a small screencast that demonstrates how I use the framework. I like it; it fits my requirements (at least for now). I hope you also like it and find this information useful! You should also check the repository for test code examples.

Run and debug your tests with shortcuts

While I'm working in Visual Studio, I try to use my mouse as little as possible. You should be comfortable doing 99% of your activities with just the keyboard. It saves time.

I re-run tests a hundred times per day, so clicking with a mouse is a bad option. Whether you use Testdriven.net or Resharper, you can use shortcuts for a quick run.

Testdriven.net

Tools -> Options -> Keyboard -> Testdriven.NET.RunTests and assign it to “Ctrl + R, Ctrl + T” (run tests)

Tools -> Options -> Keyboard -> Testdriven.NET.Debugger and assign it to “Ctrl + R, Ctrl + D” (run debugger)

Resharper

Tools -> Options -> Keyboard -> Resharper.Resharper_UnitTest_ContextRun and assign it to “Ctrl + R, Ctrl + T” (run tests)

Tools -> Options -> Keyboard -> Resharper.Resharper_UnitTest_ContextDebug and assign it to “Ctrl + R, Ctrl + D” (run debugger)

You're stuck? Stop and ask for help!

I've been working on one task for the last several days. In the product I work on, there is a feature that looks like Excel - a template editor. The user is able to put data into sheets and do some simple calculations with formulas (SUM, MUL etc.). This functionality was created long ago and worked fine. But recently we changed our engine, and now we support not only a "plain" data structure but an expanded one - the user is able to say, "this column or row is expanded with such data". The data size is dynamic and can only be predicted during template editing, and the SUM function has to take into account that the data is expanded.

So, I created a number of test cases covering the new functionality and implemented a simple mechanism that processes a formula like SUM(A1:B2) and adjusts the start and finish indexes according to the size of the expansion. At the beginning that AdjustIndexes function was small and easy to read; it just performed a simple calculation based on the current row/column, the original index positions and some information from the expanded data.

But after a demo we found some bugs: if several expansions are present in a template, it fails; if non-expanded data comes after expanded data, it fails; and some more.

I extended the test cases with all the issues found and started to attack the problem. I began adjusting AdjustIndexes to work with the new cases. After a while I completed it, but the function started to look scary - a lot of nested if-else cases, index calculations, += operations etc. OK, I thought, it can be refactored later, so I did a check-in. My happiness didn't last long, because during developer testing I found another bug. OK, the usual stuff - test and fix. Let's go.

Meanwhile, I had already spent about 6 hours on it. At stand-up I said, "it seems to be alright, only one case is failing now, I'm finishing up and will commit". But I was wrong. As I changed the function to work with the new case, the new case became green but existing ones became red. I felt the stress of a task originally estimated at 1 day now being at 1.5 and heading for about 2 days. I wanted to finish quickly, so I started to play a game: "change something, re-run the tests, and if GREEN - you win!" (such an approach actually works sometimes, as long as you have good test cases and double-check the results). The function became even scarier, and what is worse, it did not satisfy all the tests. I lost control of it and understood only one thing: "I no longer understand how this function works. It does something wrong, and I have to find another idea to solve the problem".

By then I had spent about 10 hours on it. I was starting to lose the game, but just didn't want to give up. I needed another idea, another approach: additional data structures that would help map the original table indexes to the expanded ones, changes to the engine to fill that mapping, and changes to the AdjustIndexes function to take the mapping into account. I felt a little tired and stressed, but proceeded to code like crazy.

14 hours passed; it was late evening and I was finishing up my mapping mechanism when I was contacted by my teammate Carsten: "How is it going with SUM?". I explained my issues, my approach and my status, but basically it amounted to "it is not done yet". He said that it seemed like I was doing something wrong and proposed another algorithm for the adjustment, with no additional data structures. I was probably too tired to understand everything, so he just said, "Wait 30 minutes, I'll write some code and show you". I was shocked - something I had worked on for 14 hours already, he wanted to do in half an hour? OK.

He sent his first solution. Fortunately, it was only one function to change, so I could easily integrate it and re-run all the test cases I had so far. Most of them failed on the first try. We started a pairing session, and after several run/fix/test rounds it started to work! All tests green! It worked in the application. The function is quite small and easy to read.

Yeah, that was such great work (respect to Carsten), and such a shame for me. 15 hours vs. 40 minutes, 30 lines of code vs. 300, one point of integration vs. many. I had been defeated.

When I finished up and went out to walk the dog, I tried to find at least something positive in this whole situation. And you know what? I came up with another feature for trackyt.net.

If you have a task in progress and you have spent 12 (8, 6 or even 4) hours on it, the task becomes red and interacts with the user:

  • Take a rest, re-think
  • Ask for help

That would somehow prevent my situation. It would try to tell you, "Man, you are probably stuck on that, ask someone to take a look". This seems obvious, but if you are in a hurry you just don't do it, hoping to handle everything by yourself. This is not good. I was happy about the idea and the lesson learned, so I went back home in a good mood already :).

Have you been in such a situation? Do you think it would be valuable to have a stopwatch and notifier that taps you on the shoulder and says, "Stop now, ask for help!"?