Alexander Beletsky's development blog

My profession is engineering

Retrospective 2011

Last year I failed to write a retrospective blog post, so lesson learned and I started a little earlier this time: not one day, but three days before New Year :). A retrospective is a great practice, and I hope it gives me some value when I read it next year and laugh at my previous achievements.

I’ll try to cover four basic areas: Career, Development, Blog and Personal.

Career

I continue to do my job for e-conomic.com. As I said here, e-conomic is the most important thing that happened to me in 2010, and it kept its influence in 2011. I’m still a product developer there, but in general the situation has changed radically. First of all, we are now a much bigger team. Both the Ukrainian and Danish parts of the team are growing, and cool new guys are joining us. Second, we finished up a few projects at the beginning of the year and joined a very cool adventure that I described a little here. Sure, I have fun and boring days there, easy and tough ones, nice and bad ones. But the overall impression is still very good. As long as we keep doing what we are doing, with the same level of passion and team atmosphere, we are on the right track.

Besides e-conomic, there is something else that happened to me this year and had a direct impact on my career: Kiev ALT.NET. Unfortunately, I did not blog much about that community, just a brief mention here. A community is very important to any developer, and I’m really happy I found mine. I didn’t even notice how I became a speaker, actually. After a few nervous tries at Kiev ALT.NET, I managed to give up to 8 (or so) public talks this year. It might not sound like much, but it is a great achievement for me. Speaking opened new opportunities, especially for meeting new people. But above all, it is a great joy and a motivation to learn new things.

With the great atmosphere and enthusiastic people of Kiev ALT.NET, I launched another project called Kiev Beer && Code. The idea is taken from the Seattle Beer && Code community; it is nothing more than developers gathering for social coding. The community is very young and, to be honest, I don’t put much effort into its promotion, but it is just the beginning. I’m very happy with our current Beer && Code team and will be even happier if new guys join.

I became an MVB (Most Valuable Blogger) for DZone, which I’m really proud of. The value goes both ways: DZone uses my content, and I get additional traffic. I hope we stay partners for many years.

At the end of the year I tried myself in a completely new area: trainings. Thanks to the XP Injection training center in Kiev, I was invited to run a 2-day training session, “TDD in .NET”. It turned out so successful that the chief trainer offered me a place in their group of trainers. On December 22, I officially joined XP Injection. I’m very excited about that and hope I can do my best there. So far we’ve planned some further TDD in .NET trainings, but TDD definitely won’t be the only topic I work on.

Development

By this I mean everything I work on on my own: pet projects, self-education, etc. I don’t remember who said it, but I very much agree with the statement “if you don’t write code at home, you are not progressing”. You have no time to learn at work; work is the place to perform. You have to have a sharp axe if you came to chop wood.

My main axe-sharpening exercise is coding. I try to code as much as I can. For productive coding you have to have some projects. It doesn’t matter exactly what; what’s important is that you like the idea and that the technology stack corresponds to the area you want to improve in. My main areas are still C#, HTML/CSS and JavaScript.

I was reading The Pragmatic Programmer this year, with its great advice: “Learn a new programming language each year”. I formulated a Pragmatic Product Developer version, just for myself: “Release a new product each year”. Even if you release something at work, releasing something of your own is a completely different feeling. It takes a lot of effort, it’s painful.. but shipping is like a drug: you feel very happy as soon as you ship, and you feel very bad when you don’t ship for a while.

A while ago I wrote a small article where I mentioned my targets and what I was working on. Let’s quickly go through it:


  • trackyt.net: I released it in late 2010 and provided support up to May 2011. A lot of new features were committed there, but then I slowed the progress down a lot. It has very low traffic and almost 0 active users. I was about to release version 2.0, absolutely different, with all the good things I see in GTD, but I have to admit I failed at that.

  • elmah.mvc.controller: started out as a very simple helper for ASP.NET MVC applications that want to use ELMAH, but to my great surprise it turned out so popular that it finally wrapped up into a micro product. Now it has ~2,000 downloads on NuGet, I’ve received several pull requests and gave a talk about it at Kiev ALT.NET. Even though it is so small, I treat it as a success.

  • github.commits.widget: also a micro product that gathered some attention. That was my attempt at working with the GitHub API in JavaScript; I spent maybe 3 hours creating the code, but I know several sites that use the widget. I enjoyed creating it and I’m happy that some people find it useful.

  • githubbadges.com: a little application I wrote to participate in the 10K competition, where the total zipped application content should not exceed 10K. I haven’t won and I haven’t got any mentions.. but again, it was a fun small project. I used some of my knowledge of the GitHub API and tried to write very tiny JS and CSS code.

  • candidate.net: something I started this summer at a hackathon. Since then I have completely rewritten it, started to use the bounce framework inside and had plans to release it in October, but failed. I’m not throwing this project away and am going to ship it soon. I’m in a yet-another-huge-refactoring cycle now, but after that I hope an alpha version could be ready by mid-January.

So, it looks like “I did something”.. but really nothing impressive. I’ll try to be a little more focused: no more than 2 projects in parallel, plus very clear success criteria for each project. In general, though, my criteria are still simple: actually ship it, learn something new, enjoy the ride.

Blog

Thanks to Google Analytics it is very simple to do the analysis. Just take a look at these figures:


stat

So, I got 25,341 unique visitors and 36,560 visits in total. To understand what that means to me, let’s take a look at last year’s statistics for the same period of time.


stat

In 2010 I got 3,979 unique visitors. It basically means traffic grew to ~637% of the previous year’s. This is actually a huge number; I don’t expect that next year, of course.. but I hope the traffic will improve even more.

OK, here is what the most popular content was this year:

Personal

First of all, I got married. I strongly believe it is for good and for the long run.

I took several interesting trips, especially to Val Gardena with my friends at the beginning of the year, and to Japan. Unfortunately, I don’t foresee anything like that in 2012, so those will stay as good memories.

I have to admit I cut down my sports activities too much. I almost stopped my morning exercises, kyokushin karate and rollerblading, doing them only very occasionally. This is not good at all and I already feel the bad influence of it. So, my goal for next year is to make things more balanced.

It was a great year for me. I wish you a Merry Christmas and a Happy New Year, dear reader. I’m really looking forward to creating new content you will like and new products you will find useful. Let’s gather all the good things that happened this year and take them into the next one.

See you in 2012!

XP Days Ukraine 2011

Photos from the conference were taken by Andrii Matukhno.

XP Days is a 3-day event. The first 2 days are dedicated to trainings and meet-ups, and the last day is for talks. This time I actually had 3 roles: trainer, speaker and visitor. And that was extremely cool.

.NET TDD class

I had never done any trainer job before, so I was a little worried about how it would go. In keeping with XP practices, we decided to do it as a pair: Sergey Kalinets and me.

Our group consisted of 12 students. After the initial introduction I realized: wow, we’ve got pretty strong guys here.. Most of the group had some real TDD practice before and came to improve the skill and find out new techniques. We had a 2-day program covering a theoretical intro to TDD as well as practical tips and tricks.

We used the String Calculator kata to warm up on both days. This is a very productive way of learning and improving, and all the guys in the group absolutely loved it. I hope this kata becomes their every-morning exercise and gets shared among colleagues and done together, which is great fun.

Sergey and I made a great tandem. We knew each other from the Kiev ALT.NET community, but had never worked closely together. I was very surprised how similar our initial TDD experiences were, so there was absolutely no problem collaborating. Sergey is a very professional trainer; his confidence and experience showed throughout those days, and he managed to shape the training in a consistent way. We divided technologies and tools very naturally, since Sergey works mostly with desktop and WCF and I work with web.

So, it went very well, as far as I’m concerned. On Friday evening, when we closed the training, I was very happy to receive feedback. I believe everyone liked what we did. If you guys are reading this, I would ask you to leave a small comment on this post (yes, “+1” would be alright :)).

Conference

There were a lot of interesting talks. Unfortunately I had to miss several of proven quality by Sergey Kalinets and Dmitriy Pasko; I’m really sure it was great stuff. I liked two talks the most: one by Mark Seemann about conventions, which showed me how to write less code by relying on conventions.. and a lightning talk by Dmitry Mindra on software craftsmanship. It was so great and inspirational! I was touched by the story about Dmitry’s father and his attitude to the craft. At the end of the talk I was happy to get a yellow bracelet in return for my promise to:

  • Love the craft
  • Study and improve knowledge all the time
  • Share the knowledge with people around you

I hope I will do that!


dmitry mindra

My talk was dedicated to Approvals. Even though I ran a little short on time and did not manage to show the final example, I believe it worked great. The audience was very interactive and I felt the positive energy in the air.

I had a prepared introduction part and code examples that I wrote right on stage. In my opinion the best way to show something to developers is to write some code, but unfortunately it took a little more time than I expected. Moreover, I received many questions on the very first example, so I had to calm people down a little, saying “Guys, please wait.. I haven’t even started to show the cool things” :)..


alexander beletsky

One nice feature of this conference was that every participant got two small cards which they could fill in with feedback and give back to the speaker. I was amazed that people were stepping by, saying thank you and handing those cards to me. It turns out most of the audience was not familiar with Approvals, so the general comment was “Man, I would never have heard about that, thanks for sharing it”.


good job cards

It was extremely pleasant to receive that feedback through cards and Twitter. I truly appreciate every good word I got from you guys. In total I collected 22 feedback cards, and I treat that as a very good result!

Conclusions

First of all, big kudos to all the people who made this event happen: XP Injection, sponsors, speakers and visitors. The conference took place in the Parus business center, which was a great idea: it is in the center of Kyiv and has very suitable infrastructure.

As always, it was a great pleasure to meet people from other cities and countries and make new contacts. I hope that’s not the last XP Days in Ukraine, and not the last one I’m participating in.

The slides from my talk are shared on SpeakerDeck. Also, I’m happy to help you with Approvals, so do not hesitate to contact me on Skype or Twitter.

Approval Tests: Locking Down Legacy Code

Suppose you are working on a project with a lot of legacy code inside. I know it makes you sick, but as a brave developer you want to improve things. You’ve met the ugliest method of your life and there is only one thing you want to do: refactor it. But refactoring is a dangerous procedure. For safe refactoring you need good test coverage. But wait, it is legacy code; you simply have no tests. What to do? Approvals have the answer.

Legacy code is the code that…

Works! Right, it is ugly, unmaintainable, and nothing is easy to change there. But the most wonderful feature of that code is that it has worked for years. And the first thing to do is to take advantage of that fact!

Here is my “just for example” legacy method.

namespace Playground.Legacy
{
    public class HugeAndScarryLegacyCode
    {
        public static string TheUgliesMethodYouMightEverSeen(string s, int i, char c)
        {
            if (s.Length > 5)
            {
                s += "_some_suffix";
            }

            var r = new StringBuilder();
            foreach (var k in s)
            {
                if ((int)k % i == 0)
                {
                    r.Append(c);
                }
                else
                {
                    if (k == c)
                    {
                        if (r.Length <= 2)
                        {
                            r.Append('a');
                        }
                        else
                        {
                            r.Append('b');
                        }
                    }
                    if (k == '^')
                    {
                        r.Append('c');
                    }
                    else
                    {
                        r.Append(k);
                    }
                }
            }

            return r.ToString();
        }
    }
}

(isn’t it ugly enough?)

It has loops, nested if-else cases and all the other nice features of legacy code. We need to change it, but at the same time guarantee it does not get broken.

Trying first simple test

Suppose also that I don’t know much about how exactly this function works.. So, I’m creating a test like:

[Test]
public void shoudl_work_some_how()
{
    Approvals.Approve(HugeAndScarryLegacyCode.TheUgliesMethodYouMightEverSeen("someinput", 10, 'c'));
}

I run it and get a result to approve:

approvals

I approve that, because I know the function works. But something inside tells me that this is not enough. Let’s run it under coverage:

approvals

That does not make me really confident: only one hit and 76% coverage. We have to create better test cases.

Use combinations of arguments

Approvals include some tools to deal with this case. Let’s change our test and write something like:

[Test]
public void should_try_to_cover_it()
{
    var numbers = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    var chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ".ToCharArray();
    var strings = new[] { "", "approvals", "xpdays", "^stangeword^" };

    ApprovalTests.Combinations.Approvals.ApproveAllCombinations(
        (s, i, c) => HugeAndScarryLegacyCode.TheUgliesMethodYouMightEverSeen(s, i, c),
        strings,
        numbers,
        chars);
}

With only a few lines of code, I’ve got 1560 test cases and all of them are correct!

approvals

Besides, I got pretty good coverage. Ideal, I would say. Now, if even one small change happens, some of the 1560 tests will notice it.

approvals

Locking down

The process of controlling legacy code in this way is called “locking down”. After the code is locked down, you have high confidence (read: low risk) when introducing changes, because any breaking change will be noticed. Please note how little effort it took to create all those 1560 tests and how much value was gained from them.

Notice that a test like should_try_to_cover_it is not supposed to “live forever”. You probably don’t even need to check it into source control. You just do your job, either refactoring or changing the functionality, and use Approvals to notify you as fast as possible if something goes wrong.

Why New Technologies Move Your Product Faster?

This summer my company e-conomic started a new project with the working title SBA (Small Business Accounting). The project is about creating a product for small business owners like consultants, web shops, freelancers etc. Since my company already had great expertise in that area, the product vision and the initial backlog of stories were already in place. Even more, we already had a product that does nearly the same thing but for a different target user, so it was kind of natural to try to build SBA on top of the existing one. We started our journey with a very ambitious plan in mind: release in less than 6 months.

But to our great disappointment, progress over the next 2 months turned out to be very low. We underestimated one thing: the existing product is great, but some of the underlying technologies were ill-suited for the new project. Very simple things, like adding a new custom form, UI changes, data access etc., were hard and unpleasant.

The legacy code, de-motivation and low progress.. were about to bury the project. Fortunately, the company was brave enough to change things on the fly. We took a very risky path: building the new product not on top of the old technologies, but almost from scratch. And in my opinion, that was a great decision! So, somewhere in October we restarted the project.

What have we changed and how did it go?

SVN => Git

I have posted earlier about my experience of starting to use Git in an SVN-based organization. At that time there were 2-3 people trying to adopt the process; now there are only 2-3 who are not using it. We are thinking about moving to a pure Git environment, but our deployment procedure still slightly depends on SVN. As soon as we fix that, we can get rid of SVN completely.

You might argue that changing the source management tool is not exactly a technological change and does not affect development velocity! But wait a minute.. How much time did you waste resolving stupid tree conflicts? How much time did you waste waiting for a new branch during the code freeze period? How many times have you thrown away your refactoring results, simply because you were not able to commit them at that moment?

All of those factors are counter-productive. It might not even be visible at first sight, but Git improves velocity simply by getting rid of the annoying things that are natural to centralized source control management systems.


git

ASP.NET WebForms => ASP.NET MVC

Supporting WebForms code is a big mess. It is difficult to test, difficult to understand and to change. We had a bunch of custom controls that worked fine, but adapting them to something different is hard.

I have been studying ASP.NET MVC for a year and I am much inspired by its clear and powerful design. So, I was really happy we finally moved in that direction.

We are using MVC in both modes: for our web UI pages and for REST services. The same URL can produce either an HTML or a JSON response, depending on context. With that, we are much more flexible and do not duplicate code between pages and web services. MVC controllers are easily testable, and views are clean, containing only HTML with a few server-side code snippets. The switch to ASP.NET MVC boosted us quite a lot.
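To illustrate the idea (this is a hypothetical sketch, not our actual SBA code), a single MVC action can serve both the HTML page and the JSON resource by checking whether the request came from JavaScript:

using System.Collections.Generic;
using System.Web.Mvc;

namespace Sba.Web.Controllers
{
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Hypothetical controller: the same /customers URL answers a normal browser
    // request with a full HTML view and an AJAX request with JSON.
    public class CustomersController : Controller
    {
        public ActionResult Index()
        {
            // in the real application this would come from the data access layer
            var customers = new List<Customer>
            {
                new Customer { Id = 1, Name = "Acme Consulting" },
                new Customer { Id = 2, Name = "Freelance Web Shop" }
            };

            if (Request.IsAjaxRequest())
            {
                // REST-style response consumed by the client-side JavaScript
                return Json(customers, JsonRequestBehavior.AllowGet);
            }

            // regular navigation renders the HTML view
            return View(customers);
        }
    }
}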

I think the adoption of the new technology was really fast; even though we had some issues at the beginning, they have been solved. I have to say that in order to use ASP.NET MVC we had to update the production environment, namely upgrade to Windows 2008 with IIS7, which required additional investment. But I think it’s only for the good, because having environment debt is the same as having technical debt in code.


asp.net mvc

Pure jQuery => Backbone.js

jQuery is the best JavaScript framework ever. It suits so nicely until the application gets too big. As you cross some imaginary “bounds”, jQuery code gets out of control. By that time we already had a huge amount of .js files mixing up logic and UI stuff.

We were choosing between Knockout.js and Backbone.js and finally settled on Backbone. Backbone.js introduces order into client-side code. It has a clear separation of concerns for models, views and controllers.. so we can say Backbone.js is an MVC framework on the client.

Having order is the first step to productivity. Things are more predictable, which means getting things done faster. Of course, it takes a lot of effort to learn it, and no surprise, we are still having some challenges caused by lack of expertise. But having Derick Bailey as our consultant makes it better.

I would add that ASP.NET MVC and Backbone.js suit each other very nicely, since Backbone.js is REST-oriented and ASP.NET MVC exposes REST in the right way.


backbone.js

Super.Tricky.NHibernate.Wrapper => ADO.NET

One of the pain-in-the-neck points of our application is data access. We have a huge wrapper around the NHibernate framework. The words “wrapper on NHibernate” could make someone sick; what if I say “wrapper on NHibernate, which is code-generated”?

So, someone might say we are making a step back. But I don’t agree. Whatever you can do in C# code, SQL is the only language your database can speak. Having an ORM is like having a translator during a conversation with a foreigner. It works great as long as you speak simply; it gets worse as the conversation becomes complicated, and the translator can hardly translate everything you want to say. And of course, it makes almost no sense to have a translator if you can speak the language yourself.

We are not using pure ADO.NET; that would be complicated. We have a tiny wrapper on top of it that basically runs queries and returns result sets. It might seem harder, but I would say it’s very flexible. You can create whatever queries you need and map them to whatever models you have. It works fine, it works fast. It is easy to test, and you no longer need NHibernate Profiler to understand what’s wrong in your application; the code is just close to you.
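Just to give a feeling of what such a wrapper can look like (a minimal sketch under my own assumptions, not our actual code), it only needs to run a parameterized query and map every row to a model through a delegate:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// A hypothetical "tiny wrapper": open a connection, run a query with
// parameters and map each row to a model via the supplied delegate.
public class SqlQueryRunner
{
    private readonly string _connectionString;

    public SqlQueryRunner(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IList<T> Query<T>(string sql, Func<IDataRecord, T> map, params SqlParameter[] parameters)
    {
        var result = new List<T>();

        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddRange(parameters);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(map(reader));
                }
            }
        }

        return result;
    }
}

Usage is just plain SQL in, plain objects out:

var customers = runner.Query(
    "SELECT Id, Name FROM Customers WHERE Name LIKE @name",
    record => new { Id = record.GetInt32(0), Name = record.GetString(1) },
    new SqlParameter("@name", "A%"));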

After the last Kiev ALT.NET meeting I understood we are not alone here. A lot of people are starting to realize that NHibernate is becoming too heavyweight. A new movement of micro-ORMs has appeared, combining the power of SQL with the convenience of mapping tables to objects. I want to believe that some micro-ORM framework could be our next thing to adopt.


sql

De-motivation => Obsession

This is not a technological factor.. it is not even something I can easily describe. Developers take responsibility by changing things or advocating for a particular technology. You no longer have the excuse of bad quality because of legacy reasons. Sure, legacy still plays some role, but its impact is not so high.

Having new technologies on board simply makes developers happier. A happy developer is an obsessed developer; he is committed to success. He will work as much as he can to produce good results.

With a constant challenge, every day is no longer “another day in the office”, but rather “yet another day on a pirate ship”. It starts to smell like a start-up shop that is trying to get to market as soon as possible, adopting the latest available tools. The project is now not just something I have to do, but a kind of pet project, where you try things out and feel great when those things work.

That’s in particular why I think it is great to start from scratch (or close to it) once a year or so. Picking up the latest things, learning them, adopting them and building something valuable: there is no comparison with maintaining 5-year-old code.

New technologies move your product faster!

ELMAH MVC: Answering questions

I’ve received some questions recently regarding the ELMAH.MVC NuGet package. Here is a summary blog post; I hope it is helpful.

How do I change it to log errors into a database?

Indeed, the demo project on GitHub uses the simple Elmah.MemoryErrorLog, which just holds all errors in memory. That works great for a small application or just to try things out. In reality you need some persistent storage, like files or a database. And this is extremely easy to do. Take a look at this section of web.config:

<elmah>
    <security allowRemoteAccess="yes" />
    <errorLog type="Elmah.MemoryErrorLog, Elmah"/>
    <!--<errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="elmah" />-->
    <!--<errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data/Elmah.Errors" />-->
</elmah>

Just comment out Elmah.MemoryErrorLog and uncomment Elmah.SqlErrorLog (which stores errors in a SQL database) or Elmah.XmlFileErrorLog (which stores errors in XML files). The XML logger just requires a virtual path to the folder where the files will be stored. The SQL logger requires a connection string name for the ELMAH database. The same web.config contains:

<connectionStrings>
    <add name="elmah" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=elmah;Integrated Security=True" providerName="System.Data.SqlClient" />
</connectionStrings>

The database has to have the ELMAH schema inside. It is very easy to prepare that schema: just run this SQL script against the database where you want to keep the errors.

Does ELMAH.MVC handle custom error pages?

This is a misconception. ELMAH.MVC is not about custom error pages at all. There is a good answer on Stack Overflow about that. For a code example, you can refer to something I did before for my projects, just here.

ELMAH.MVC gives me FxCop/StyleCop issues?

I’ve received that kind of report in the GitHub issues. Initially I thought I’d do something about it, but then I dropped the idea. The explanation is inside the ticket.

The short answer is that ELMAH.MVC is nothing more than boilerplate code. As soon as you see it work and it works OK for you, adapt it to your custom needs. That’s it.

Approval Tests, Alternative View on Test Automation

Approval Tests, or simply Approvals, is a framework created by Llewellyn Falco and Dan Gilkerson, with support for .NET, Java, PHP and Ruby. It is not yet another unit testing framework like NUnit or MbUnit etc.; instead, those frameworks are used to run approval tests.

Broadly speaking, software is nothing more than a virtual box into which we put some inputs and expect some outputs. The outputs could be produced in a zillion ways, which differ in their implementation. Unit tests focus too much on implementation. That’s why unit tests might fail even though the code works, or the other way around. Approvals focus on output.

How does it work?

Let’s take a look at a very simple case. Say I have a class ShoppingCart. I can add some products to the shopping cart and confirm my purchase. I expect the total price to be calculated for me.
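For illustration, a minimal ShoppingCart could look like the one below (a hypothetical sketch, assuming Approvals uses the object’s ToString() output as the text it asks you to approve):

using System.Collections.Generic;
using System.Linq;
using System.Text;

public class Product
{
    public string Id { get; set; }
    public decimal Price { get; set; }
}

public class ShoppingCart
{
    private readonly List<Product> _products = new List<Product>();

    public decimal TotalPrice { get; private set; }

    public void Add(Product product)
    {
        _products.Add(product);
    }

    public void Confirm()
    {
        TotalPrice = _products.Sum(p => p.Price);
    }

    // renders the cart as readable text, which is what gets approved
    public override string ToString()
    {
        var text = new StringBuilder();
        text.AppendLine("Shopping cart:");
        foreach (var product in _products)
        {
            text.AppendFormat("  {0} - {1}", product.Id, product.Price).AppendLine();
        }
        text.AppendFormat("Total: {0}", TotalPrice);
        return text.ToString();
    }
}

The approval test for it is then just: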

[TestFixture]
[UseReporter(typeof(DiffReporter))]
public class ShoppingCartTests {

    [Test]
    public void should_calculate_the_total_price_for_shopping_cart() {
        // do
        var shoppingCart = new ShoppingCart();
        shoppingCart.Add(new Product { Id = "iPad", Price = 500 });
        shoppingCart.Add(new Product { Id = "Mouse", Price = 20 });
        shoppingCart.Confirm();

        // verify
        Approvals.Approve(shoppingCart);
    }
}

What happens if I run this test? If I’m running it for the first time, it fails. It doesn’t matter whether the code works or not; the framework simply doesn’t know that yet. To understand how correct the code is, it will actually ask you, utilizing the primary human power: recognition.

In this case it will open the TortoiseDiff application and show the actual and expected outputs.




Here, I can just read it: “OK, I have 2 products in my cart.. one iPad and one Mouse, the iPad costs 500 something and the mouse is 20 something.. and the total price is 520. Looks good! I approve that result!”.

Technically, approving is just copying the actual output file over the expected one. As soon as the test passes, the actual output file is deleted and the approved file resides next to the test code file, so you just check it into source control.

If the shopping cart is later modified and something goes wrong, there will be a failure. In the case of unit tests, that would be multiple failures across different cases, and it might not be so easy to understand what exactly is wrong. For an approval test, it would be one failure. And the cool thing is that I get a diff that shows exactly where the deviation is.




Where does it work?

It is not only simple objects you can approve. The cool thing is that you can approve against different sources: objects, enumerables, files, HTML, XML etc. And on a higher level: WPF forms, WinForms, ASP.NET pages.

For instance, code for ASP.NET:

[Test]
public void should_have_approved_layout() {
    ApprovalTests.Asp.Approvals.ApproveUrl("http://localhost:62642/customer/");
}

Or for a WPF form:

[Test]
public void should_have_approved_layout() {
    ApprovalTests.Wpf.Approvals.Approve(new Form());
}

With WPF and WinForms, the framework is able to serialize them into images, so the actual and expected results are images, and it is easy to track the differences (TortoiseDiff can do that).

When does it work?

It works best when you deal with 2 things: UI and legacy code.

Testing UI is always the difficult part. But what you typically need is to make sure the UI has not changed, and if it has, to see where exactly the change happened. Approvals solves that nicely. It takes only a one-line test to check an ASP.NET page, for instance.

Legacy is another story: you have no tests there at all, but you have to change the code to implement a new feature or to refactor. The interesting thing about legacy code is that it works! It has worked for years, no matter how it is written (remember, the virtual box). And this is a great advantage of that code. With Approvals, with only one test you can capture all possible outputs (HTML, XML, JSON, SQL or whatever the output could be) and approve them, because you know it works! Once you have such a test and an approved result, you are much safer refactoring, since you have now “locked down” all the existing behavior.

Approvals are not something you need to run all the time, like unit or integration tests. They are more like a handy tool. You create approval tests, you do your job, and at the end of the day it might happen that they are no longer needed, so you can just throw them away.

Want to hear more?

Just go and listen to this Herding Code podcast episode, or visit the project web site, or join me on 17 December at the XP Days Ukraine conference in Kiev, where I’m going to give a talk dedicated to Approvals.

Inside ASP.NET MVC: Instantiation of Controller

Controllers are created by a controller factory; by default, the IControllerFactory type is resolved to DefaultControllerFactory. Today’s post is dedicated to the details of how DefaultControllerFactory actually works and creates an instance of the required controller. Let’s start from the beginning!

Request for controller instance

Initially, we are in MvcHandler’s ProcessRequestInit method, where we extract the controller’s name from the RouteData and ask the controller factory to create the corresponding controller.

private void ProcessRequestInit(HttpContextBase httpContext, out IController controller, out IControllerFactory factory) {
 // If request validation has already been enabled, make it lazy. This allows attributes like [HttpPost] (which looks
 // at Request.Form) to work correctly without triggering full validation.
 bool? isRequestValidationEnabled = ValidationUtility.IsValidationEnabled(HttpContext.Current);
 if (isRequestValidationEnabled == true) {
  ValidationUtility.EnableDynamicValidation(HttpContext.Current);
 }

 AddVersionHeader(httpContext);
 RemoveOptionalRoutingParameters();

 // Get the controller type
 string controllerName = RequestContext.RouteData.GetRequiredString("controller");

 // Instantiate the controller and call Execute
 factory = ControllerBuilder.GetControllerFactory();
 controller = factory.CreateController(RequestContext, controllerName);
 if (controller == null) {
  throw new InvalidOperationException(
   String.Format(
    CultureInfo.CurrentCulture,
    MvcResources.ControllerBuilder_FactoryReturnedNull,
    factory.GetType(),
    controllerName));
 }
}

DefaultControllerFactory internals

CreateController is a rather elegant method: basically it does some argument checks, resolves the controller type by its name and then calls GetControllerInstance to instantiate that type.

public virtual IController CreateController(RequestContext requestContext, string controllerName) {
 if (requestContext == null) {
  throw new ArgumentNullException("requestContext");
 }
 if (String.IsNullOrEmpty(controllerName)) {
  throw new ArgumentException(MvcResources.Common_NullOrEmpty, "controllerName");
 }
 Type controllerType = GetControllerType(requestContext, controllerName);
 IController controller = GetControllerInstance(requestContext, controllerType);
 return controller;
}

Getting the controller type

GetControllerType delegates its call to the internal GetControllerTypeWithinNamespaces. It receives the route, the controller’s name and the namespaces.

private Type GetControllerTypeWithinNamespaces(RouteBase route, string controllerName, HashSet<string> namespaces) {
 // Once the master list of controllers has been created we can quickly index into it
 ControllerTypeCache.EnsureInitialized(BuildManager);

 ICollection<Type> matchingTypes = ControllerTypeCache.GetControllerTypes(controllerName, namespaces);
 switch (matchingTypes.Count) {
  case 0:
   // no matching types
   return null;

  case 1:
   // single matching type
   return matchingTypes.First();

  default:
   // multiple matching types
   throw CreateAmbiguousControllerException(route, controllerName, matchingTypes);
 }
}

The namespaces parameter is quite important. If you remember, you can explicitly specify the namespaces during route definition, with an overloaded MapRoute method:

public static Route MapRoute(this RouteCollection routes, string name, string url, string[] namespaces) {
 return MapRoute(routes, name, url, null /* defaults */, null /* constraints */, namespaces);
}

The namespaces parameter is then stored in Route.DataTokens[“Namespaces”]. The namespaces parameter matters when you have controllers with the same name. This in particular makes sense when you have different areas.
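For illustration, here is how such a route could be registered (the route name, URL and namespace are made up for this example):

using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // the last argument restricts controller lookup to the given namespaces
        // and ends up in Route.DataTokens["Namespaces"]
        routes.MapRoute(
            "Blog",
            "blog/{controller}/{action}/{id}",
            new { controller = "Posts", action = "Index", id = UrlParameter.Optional },
            new[] { "MyApp.Areas.Blog.Controllers" });
    }
}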

Caching Controller Types

The interesting thing is that the framework does not re-read the types for each request; that would be just too expensive. Instead it uses a cache, which is initialized on the very first request and used for the lifetime of the application. The call ControllerTypeCache.EnsureInitialized(BuildManager) makes sure the cache is up to date. How does MVC cache the types?

A very simple and straightforward solution: in an XML file.

public static List<Type> GetFilteredTypesFromAssemblies(string cacheName, Predicate<Type> predicate, IBuildManager buildManager) {
 TypeCacheSerializer serializer = new TypeCacheSerializer();

 // first, try reading from the cache on disk
 List<Type> matchingTypes = ReadTypesFromCache(cacheName, predicate, buildManager, serializer);
 if (matchingTypes != null) {
  return matchingTypes;
 }

 // if reading from the cache failed, enumerate over every assembly looking for a matching type
 matchingTypes = FilterTypesInAssemblies(buildManager, predicate).ToList();

 // finally, save the cache back to disk
 SaveTypesToCache(cacheName, matchingTypes, buildManager, serializer);

 return matchingTypes;
}

It tries to read the cache from the file; if there are no matching types there, it enumerates the assemblies and then saves the result back to the cache. You might be interested in where this file is actually located, so take a look:


cache file name

And for the most curious guys, here is the content:

<?xml version="1.0" encoding="utf-8"?>
<!--This file is automatically generated. Please do not modify the contents of this file.-->
<typeCache lastModified="14.10.2011 19:33:03" mvcVersionId="aa5414f4-4d8e-4f2a-a98b-7334bf15d104">
  <assembly name="MvcForDebug2, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
    <module versionId="1ad99820-dc17-4be0-9f56-6dd2bdcd7950">
      <type>MvcForDebug2.Controllers.HomeController</type>
    </module>
  </assembly>
</typeCache>

The FilterTypesInAssemblies method tries to get all the controllers it can. It goes through all the assemblies referenced by the application and matches the types using a special predicate.

private static IEnumerable<Type> FilterTypesInAssemblies(IBuildManager buildManager, Predicate<Type> predicate) {
 // Go through all assemblies referenced by the application and search for types matching a predicate
 IEnumerable<Type> typesSoFar = Type.EmptyTypes;

 ICollection assemblies = buildManager.GetReferencedAssemblies();
 foreach (Assembly assembly in assemblies) {
  Type[] typesInAsm;
  try {
   typesInAsm = assembly.GetTypes();
  }
  catch (ReflectionTypeLoadException ex) {
   typesInAsm = ex.Types;
  }
  typesSoFar = typesSoFar.Concat(typesInAsm);
 }
 return typesSoFar.Where(type => TypeIsPublicClass(type) && predicate(type));
}

So, the last interesting thing here is the predicate that is actually used:

internal static bool IsControllerType(Type t) {
 return
  t != null &&
  t.IsPublic &&
  t.Name.EndsWith("Controller", StringComparison.OrdinalIgnoreCase) &&
  !t.IsAbstract &&
  typeof(IController).IsAssignableFrom(t);
}

You can see that it matches any type that is public, ends with “Controller”, is not abstract and implements the IController interface. That’s why it is important not to forget to name all your controllers with the “Controller” suffix (yes, I made that mistake several times at the beginning of my MVC journey).

Note that if you have 2 controllers with the same name in different namespaces, but did not provide a namespace constraint, there will be several matchingTypes, so the exception created by CreateAmbiguousControllerException will be thrown. I believe each of us has seen that kind of exception at least once.

Instantiating the Type

Going back a little to the code of CreateController: now we’ve got the type (or null if the type has not been resolved). The next thing is to instantiate it. Nothing really fancy here:

protected internal virtual IController GetControllerInstance(RequestContext requestContext, Type controllerType) {
 if (controllerType == null) {
  throw new HttpException(404,
   String.Format(
    CultureInfo.CurrentCulture,
    MvcResources.DefaultControllerFactory_NoControllerFound,
    requestContext.HttpContext.Request.Path));
 }
 if (!typeof(IController).IsAssignableFrom(controllerType)) {
  throw new ArgumentException(
   String.Format(
    CultureInfo.CurrentCulture,
    MvcResources.DefaultControllerFactory_TypeDoesNotSubclassControllerBase,
    controllerType),
   "controllerType");
 }
 return ControllerActivator.Create(requestContext, controllerType);
}

If everything is all right with the controller type, it asks the ControllerActivator to create the instance. In the default case, the ControllerActivator is a DefaultControllerActivator:

public IController Create(RequestContext requestContext, Type controllerType) {
 try {
  return (IController)(_resolverThunk().GetService(controllerType) ?? Activator.CreateInstance(controllerType));
 }
 catch (Exception ex) {
  throw new InvalidOperationException(
   String.Format(
    CultureInfo.CurrentCulture,
    MvcResources.DefaultControllerFactory_ErrorCreatingController,
    controllerType),
   ex);
 }
}

As for the rest of the MVC entities, it uses the IDependencyResolver to resolve that type. IDependencyResolver is detailed here, but as you remember, by default it ends up calling the Activator.CreateInstance(type) method.
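Just to show the hook point (a minimal sketch, not tied to any particular IoC container), a custom resolver only has to implement two methods and be registered at application start:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// If the resolver returns a controller instance, DefaultControllerActivator uses it;
// when it returns null, the activator falls back to Activator.CreateInstance.
public class PoorMansDependencyResolver : IDependencyResolver
{
    public object GetService(Type serviceType)
    {
        // create controllers ourselves (normally you would ask your IoC container here);
        // return null for everything else so MVC keeps its default behavior
        if (typeof(IController).IsAssignableFrom(serviceType) && !serviceType.IsAbstract)
        {
            return Activator.CreateInstance(serviceType);
        }

        return null;
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return Enumerable.Empty<object>();
    }
}

It is registered once, for example in Application_Start: DependencyResolver.SetResolver(new PoorMansDependencyResolver());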

Conclusions

A new controller is instantiated on each HTTP request. The instantiation is indirect: first we retrieve the Type, then we create the instance. The type is searched for dynamically, using reflection information from the referenced assemblies. To optimize the performance of an ASP.NET MVC application, an internal cache of types is used; the cache is stored in a file like “Temporary ASP.NET Files\root\cdd53039\36a27802\UserCache\MVC-ControllerTypeCache.xml”. Controller namespaces matter: if two or more controllers have the same name in different namespaces and no namespace constraint is given, an exception is thrown. If the controller type cannot be resolved, an HTTP 404 response is generated. Otherwise, the controller instance is created by the IControllerActivator instance.

Previous post: Inside ASP.NET MVC: IDependencyResolver - Service locator in MVC

Kiev ALT.NET Community Thoughts

I’ve just reviewed the blog posts I created throughout this year and realized that I’m missing a very important piece of information here. Something that literally changed my developer life. I’m talking about the Kiev ALT.NET community.

From my first visit at the beginning of the year, I was amazed by the very cool and friendly atmosphere there. I tried to attend every meeting, and each of them brought some value to me. And the most valuable asset I got there is the people I met. Those guys simply rock.


kiev alt.net

Mike Chaliy, the leader of the community, does an exceptional job. I can imagine how hard it is to do all the organizational work, and he does it very well.

What’s important, Mike encouraged me to give my own talk. I remember that first time, when I was sharing my experience with continuous deployment based on the Chuck Norris tools and Jenkins. That was a good start, and I gave several more talks afterwards. I would say it happened only because of Kiev ALT.NET.

Getting knowledge, for me, is not only books, podcasts and coding practice, but also discussions, questions and talks. This is definitely a productive way of learning.

So, I’m very thankful to Kiev ALT.NET for that positive feeling of progress, the pleasure of communicating with smart guys and the fun time at after-parties. I wish the community a long life and will try to help it as much as I can.

Develop With Tests

TDD

I’ve recently been thinking about my TDD and I came to an interesting conclusion: I’m not a TDD practitioner any more!

TDD is quite a strict practice; it supposes you follow certain rules. The main rule is test first: you create the test, then you create the code. For years I’ve been using that practice as dogma. I’ve seen a lot of value in such an approach and it worked really well for me.

Do I follow those rules now? Not always.. Do I use something different from TDD? Yes, I develop with tests.

See, TDD’s main power is not in the tests themselves. The power of TDD is test-driven design: a design that makes code more readable and maintainable. Tests, of course, help with regression and general quality, but that is not always the case. Once you have practiced TDD enough, you feel what code designed for tests should look like. You do not need to create a test to prove that the code is testable; by following very simple rules, you simply guarantee that tests are possible.

Now, I simply optimize the process a bit. I might skip the first step and go straight to code. That does not mean I’m skipping tests. No, I’ll create them where I feel it necessary.. and I feel that for almost all the code I create. But this is no longer TDD; I call it develop with tests. DWT, if you like ;)


develop with tests

I think some developers might say, “Oh, I’m using DWT as well”. Others will say, “I will use DWT, since TDD is boring and requires too much time”. Please do not do that. I strongly believe it is TDD that can make you stronger as a developer and your code robust and good-looking. So follow the rules until you are absolutely sure you are ready to break them.

It’s Writing Time

There is a great series of stories called “Azazel” by Isaac Asimov. They are about a 2 cm demon named Azazel and George, a guy who is able to talk to him. Azazel actually knows nothing about demons or angels, hell or heaven.. but he understands the nature of things and is able to change them. The stories are short, fun and smart, so I would recommend reading them.

In particular, there is one great story called “Writing Time”. It’s about a man named Mordehai, who is a writer. He has a big problem: he always needs to wait for something, like waiting for doctors, waiters, cab drivers, waiting in shop lines etc. Mordehai treated that time as completely wasted, because he could not spend it on his job.

He shared that problem with George. George asked Azazel whether it was possible to help poor Mordehai. As always, Azazel was annoyed by people’s misery, but agreed to help. He could introduce a change so that Mordehai would never need to wait for anybody. That change came at a small cost: literally, the sun would stop shining 2 million years earlier, but who cares.

George met Mordehai again several months later. To his great disappointment, the writer had created simply nothing in that time! Not even a small story. Instead of becoming hyper-productive, he became not productive at all.

How could it be that a person who spends all his time on work is not able to do the work? As Mordehai realized when he sat behind his typewriter, he had no ideas. He simply didn’t know what to write about.

As it turned out, all the time he spent waiting was actually spent thinking. It was absolutely not a waste for him; on the contrary, it was his most productive time.. but he had not realized that before. Unfortunately, Mordehai’s career was totally ruined.

You know what? I think that I’m wasting time instead of doing “cool things”. I work as hard as I can, but the results aren’t impressive. Recently I could neither find a good idea nor get a good rest. I’m exactly like Mordehai, who blames the waiting line for his problems.

I’ll try not to repeat his mistake. Having a rest, spending time with family or doing sports are just as important as the job (perhaps even more important). And even if you are “doing nothing”, something is still going on in your head. If the inputs are right, the outputs will be right as well.. sooner or later.