Alexander Beletsky's development blog

My profession is engineering

The best day in a developer’s life

What would you say is the best day in a developer’s life? The first day of work, the last one? Salary or bonus payment day? All of those are pretty nice, indeed. But I think the best day in a developer’s life is Release day. It is exactly the moment when the code you created recently is pushed out to production and customers can see the changes you’ve made so far. A release gives a really good feeling of “work is done”. In reality, of course, the work does not stop but only begins after the release date - production issues, customer reports, change requests... all of these still need to be handled.

Releasing a product is a big job. And it is not all about coding. First of all, it is a well-coordinated effort of all groups - Development, QA, Documentation, Product Owners, Stakeholders etc. I’ve seen situations where products that could potentially be released just got stuck in the pipeline because of a lack of coordination. Here the team atmosphere plays an important role. If the team sees the goal, agrees on methods and tools, and communicates well, it is a big step towards success.

Yesterday, the Report Designer product that is part of E-conomic was released. It took about a year to make this happen. A huge amount of work was done, and many lessons were learned. We’ve come a long way from the specification to integrating the code into the main branch. I personally think it was just too long. With my current experience I see a lot of value in frequent releases, so I tend towards just-in-time production practices such as Kanban. In the world of web products, feedback is extremely fast and the cost of a defect is not as high as it is for desktop products.

It is a good time to exhale, inhale fresh air and self-motivate for the next interesting projects. We are now looking forward to building a better codebase, better processes and, as a result, better products. Let’s start a new adventure!

Clean tests with SharpTestsEx

I was recently adding NUnit to one of my projects and noticed an interesting framework in the NuGet channel. It is SharpTestsEx by Fabio Maulo, and its primary goal is to make your assertions shorter, cleaner and easier to read.

To start with, it is enough to install it through NuGet (or manually) and add a using statement to your test class file.

using SharpTestsEx;

SharpTestsEx adds a number of Should() extension methods. Suppose you have a case like this:

[Test]
public void Compile_Div_EmptyDivElement()
{
    // arrange
    var compiler = new Compiler();

    // act
    var result = compiler.Compile("div");

    // assert
    Assert.That(result, Is.EqualTo("<div></div>"), "expected and actual results are different");
}

With SharpTestsEx I change the assert part of the test to:

[Test]
public void Compile_Div_EmptyDivElement()
{
    // arrange
    var compiler = new Compiler();

    // act
    var result = compiler.Compile("div");

    // assert
    result.Should().Be.EqualTo("<div></div>");
}

Note, it is much shorter and reads almost like a plain English sentence: result should be equal to something. The assertion message for this test is really clean and sufficient, so you mostly won’t need any custom messages.

If I go further and implement an extension method for this Compile method:

static class CompilerTestExtension
{
    public static string Compile(this string expression)
    {
        var c = new Compiler();
        return c.Compile(expression);
    }
}

My test case would be just one line of code:

[Test]
public void Compile_Div_EmptyDivElement()
{
    // arrange / act / assert
    "div".Compile().Should().Be.EqualTo("<div></div>");
}

This is a very short and clean notation for a test case; even a non-technical person could read it. You can do more complicated assertions, combining them with And / Not conditions. There are also a bunch of useful extensions for strings and sequences.
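To illustrate the combined assertions, here is a sketch using the hypothetical Compile() extension from above; the exact SharpTestsEx string extensions may differ slightly from what I show here:

```csharp
[Test]
public void Compile_Div_ProducesWellFormedElement()
{
    // chained string assertions, combined with And
    "div".Compile()
        .Should().StartWith("<div>")
        .And.EndWith("</div>");
}
```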

Additional information is available on the project site as well as in the author’s blog.

My first year in the company

It is exactly one year since I joined the E-conomic company. Last year I decided to switch jobs, and now I realize how right that decision was. I was invited to an interview at E-conomic, where I built a small test application and presented it to management and technical guys (hi Jakob, Christian). I had no great experience with ASP.NET at that time, since I had spent the last years of my career mostly on management tasks. Really, I just reviewed the code that I created a year ago and I see it was not so good :). Fortunately, those guys believed that I could bring value and offered me a job. Since then I try to do my best to prove - you were not wrong.

I see work in a company as a kind of marriage. Indeed, to feel happy in a marriage you should have a really strong match and a level of trust in each other. It is probably at E-conomic that I first understood how important trust between development and management is. I have forgotten what “estimation negotiation” is; I have forgotten the problems with unrealistic project plans created by management and business guys. A lot of things that only created conflicts at my previous jobs are done here in a very smooth way.

We are a developer-centric company. Developers have power here, developers make decisions, developers do the job. But with great power comes great responsibility. Developers are totally responsible for product quality. Here I saw a new model: not “Software Developer” but rather “Product Developer”. A product developer’s job does not end with coding and integrating changes; it extends to verification, configuration management and release. We don’t have testers here, but we have QA. And developers are part of QA too.

I enjoy the environment here very much. Even though we are a highly distributed team, I feel the shoulder of every teammate. We work closely together; it doesn’t matter that we sit more than 1000 km from each other. It was not so quick to get used to working like that, but now I have no problems at all. I really feel that I’m part of the rest of the team, who are mainly located in Copenhagen. This environment constantly inspires: I was happy to restore my blog (which I created in 2008 but then froze), I do a lot of reading to understand modern web development, I released my small product... and I have a bunch of new ideas and feel only a lack of time to work on all of them :).

I would also say that Ciklum, as the Ukrainian representative, does its job very well. There are basically 2 major things that make Ciklum great: it handles all infrastructure and finance very professionally, and it does not interfere with working directly with the client. If I have some kind of problem, I know Ciklum will solve it; this is also a good level of trust, and I appreciate that.

Yes, I’m just a happy developer now. And I really do attribute this to E-conomic. Last year’s decision changed my life. I think I have found the place where I like to work, even if it took me more than 7 years to get there.

Integrating ELMAH into ASP.NET MVC the right way

I’ve received great feedback on this article. Thanks to all of you who provided valuable comments and pull requests on GitHub. I have edited the article several times; I hope it’s up-to-date now. You can go directly to the code repository and get the test application to see what it is all about. In case you are having some issues, just let me know.

UPDATE: Elmah.MVC is now released as a NuGet package. No need to read this long blog post, just install it.

Many of you know what ELMAH is. If you still don’t, go and read here. If you care about the monitoring and stability of your web application, you should definitely consider integrating ELMAH. There are a number of guidelines on how to do that. But none of them answers several questions that MVC developers are really concerned about:

  • How to add ELMAH as a usual MVC controller?
  • How to make the ELMAH handler secure (accessible only by authorized users) in an MVC application?

I’ll shed light on both questions here.

Quick start up

The best way to add ELMAH is to use the NuGet package manager (I really recommend installing it). I’m using the NuGet UI, so just right-click on the MVC application, select “Add Library Package Reference” and pick ELMAH from the NuGet feed. NuGet will add the reference and update web.config with all the required configuration. Basically, you can start your application with http://localhost/yourapp/elmah.axd and see the main ELMAH page.
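If you prefer the Package Manager Console over the UI, the same step is a single command (at the time of writing the package id on the official NuGet feed is elmah):

```shell
# Visual Studio > Tools > Library Package Manager > Package Manager Console
Install-Package elmah
```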

Changing the configuration

ELMAH was developed in the ancient times of Web Forms. It is nicely done to be as universal as possible and is implemented as an HttpHandler. In web.config you have several lines of configuration that register this handler. This of course works, but a URI like http://localhost/yourapp/elmah.axd is absolutely not the MVC way. So, what you have to do is simply comment out that configuration.

<system.web>

    <!--<httpHandlers>
        <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
    </httpHandlers>-->

</system.web>

<system.webServer>

    <!--<handlers>
    <remove name="Wild" />
    <add name="Elmah" verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
    </handlers>-->

</system.webServer>

Add ELMAH controller

Now add the ELMAH controller to the application; the code of the controller is:

public class ElmahController : Controller
{
    public ActionResult Index(string type)
    {
        return new ElmahResult(type);
    }
}

And set up proper routing for it:

context.MapRoute(
    "Admin_elmah",
    "Admin/elmah/{type}",
    new { action = "Index", controller = "Elmah", type = UrlParameter.Optional }
);

Implementation of the ElmahResult action result

Here we go with the most interesting part. I’ve checked the sources of ELMAH and reused Elmah.ErrorLogPageFactory to handle those custom HTTP requests. After several modifications and a pull request by seba-i, the code is:

class ElmahResult : ActionResult
{
    private string _resourceType;

    public ElmahResult(string resourceType)
    {
        _resourceType = resourceType;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var factory = new Elmah.ErrorLogPageFactory();

        if (!string.IsNullOrEmpty(_resourceType)) {
            var pathInfo = "." + _resourceType;
            HttpContext.Current.RewritePath(PathForStylesheet(), pathInfo, HttpContext.Current.Request.QueryString.ToString());
        }

        var httpHandler = factory.GetHandler(HttpContext.Current, null, null, null);
        httpHandler.ProcessRequest(HttpContext.Current);
    }

    private string PathForStylesheet()
    {
        return _resourceType != "stylesheet"
            ? HttpContext.Current.Request.Path.Replace(String.Format("/{0}", _resourceType), string.Empty)
            : HttpContext.Current.Request.Path;
    }
}

Try it now

Just build the application, navigate to http://localhost/yourapp/admin/elmah and you will get the nice-looking ELMAH main page.

Making it secure

The AuthorizeAttribute works great for this purpose. Just put this attribute on ElmahController, and now only Admin has access to it.

[Authorize(Users = "Admin")]
public class ElmahController : Controller
{
    // ...

In my applications I use a custom attribute, but the idea stays the same.

Allow viewing results remotely

You definitely want to see the results remotely, without needing access to the hosting machine. By default, ELMAH does not allow this, but it is easily changed in the configuration:

<elmah>
  <security allowRemoteAccess="yes" />
</elmah>

Code example

The full code example is located in my GitHub repository here.


UPD:

I’ve moved the controller out to a separate GitHub repository. You are free and very welcome to use/fork/pull changes :). Fork & fix it here - https://github.com/alexbeletsky/elmah.mvc.controller.

UPD2:

Instead of providing just the controller, I have created a test MVC3 application that clearly shows how to use ELMAH in MVC. Please check the code and README here.

UPD3:

The code placed in this blog post has been changed drastically based on the latest feedback I received for this controller. Please refer to the code on GitHub or the latest blog post here: Latest version of ELMAH MVC controller

SeleniumCamp conference in Kiev

I had a chance to be present at the SeleniumCamp conference that took place on the 26th of February. It was the first Selenium-dedicated conference in the world, so it would have been a bad idea to miss it. Moreover, my colleague Anton from Dnepropetrovsk was coming as well, so we had a good opportunity to meet each other again.

The event was organized by the XP Injection group. I listened to those guys last year at Agileee, and I also read their blog from time to time, so I was pretty confident about what it was going to be all about. The event took place in the Bratislava hotel. Here is a brief summary of the stuff I heard there:

  • David Burns gave the opening speech. He is a Senior Software Engineer in Test at Mozilla, working as the Automation Lead in WebQA, and one of the Selenium Core committers. David described Selenium 2 and the WebDriver ideas that form the primary part of the Selenium 2 framework. In contrast to Selenium RC, WebDriver:
    • Uses the native browser API, so it works much faster
    • Has a reduced and cleaner API
    • Is more reliable
    It means that WebDriver should be the tool of choice for functional testing in the near future. As far as I understand, Selenium 2 is not officially released yet and is currently in the Beta phase.
  • Kirill Klimov gave a speech about his experience of deploying Selenium in a company. He shared some pros and cons of each of the Selenium umbrella products, from Core to Grid. They started using Selenium from the IDE, which seems to be the easiest way to adopt functional tests. Then they switched to Core and RC. Kirill has a very good understanding of “what’s going on” and summarized the speech with several recommendations I find very useful:
    • Don’t hurry to start up
    • Understand the differences between the tools and pick the right one
    • Try to plan with a several-year perspective
    The slides from his presentation are here
  • Mairbek Khadikov shared his own vision of web application testing automation. Over 2 years of using Selenium they came up with a bunch of tests and realized the power of automation. Most importantly, the value is shared between business and development. He touched on such important issues as performance and test isolation. It is very important to write tests that can be read by non-technical people; in that case tests start to become part of the project documentation and are used not only by the development team. He also noted that tests should be architected to run in parallel as early as possible; it is just a matter of time until you run tests on a Grid (for instance) and hit a problem. He showed several examples of tests written in Java, so the developers had a little fun seeing the code. His presentation is stored here.
  • Alexey Rezhchikov gave one of the most impressive speeches, in my opinion. He works on huge, highly complex projects for eBay Motors (as far as I understood). They are dealing with complex requirements and multicultural, high-load sites with difficult configuration management. He clearly described the problems any big project meets on the way towards a release. Nevertheless, they have implemented and are successfully using a solution stack based on the Selenium platform (and not only that). Using different levels of testing to meet acceptance goals, they are very flexible in deciding “to go or not to go with an automated test” and “what kind of test is OK for this particular scenario”. I was also a bit impressed by the use of Feature Flags as an alternative to multi-branch development. His presentation is here.
  • Nikolai Alimenkov was selling the idea of using a Wiki as “live” requirements. The whole approach goes back to the Fitnesse idea of keeping requirements in Wiki-style storage, placing acceptance conditions into tables and “somehow” running Fitnesse tests against the target applications. With my big respect to Robert C. Martin, the original developer of Fitnesse, I don’t believe that stuff works. I don’t believe that a Product Owner will ever work with requirements in a Wiki and run them. Moreover, it seems like a big overhead to me to keep all those wikis up-to-date in a world of changing requirements. Nikolai showed Fitnesse-based tools such as Fitnium and Selenesse and some test examples, but it was too artificial for me. You can find his presentation here.
  • Nikolai Kolesnik seemed to have a good plan for describing BDD, but was quite nervous and shy, which prevented him from making a good speech. Nevertheless, he expressed his understanding and application of BDD to real projects, with pitfalls and solutions based on Selenium RC. You can find his presentation here.

In short, I enjoyed the conference... but I would not say it was exactly what I expected. Most of the speeches I attended were a kind of “too many words, too few real examples”. Guys, please, if you are talking about a product for developers - show me the code. The ideas of ATDD, BDD, acceptance testing and Wikis as requirements were already sold years ago; don’t do the job that is already done. Instead, show some of your projects, show real test cases, show examples.

I would like to THANK my company, which made it possible for me to visit this conference.

PS. For foreign guys: the Bratislava hotel is very cold; avoid staying there in winter time :).

Github commits activity widget

The most important factor for any open source project is how active the community around it is. It is good to understand this factor. One of the indicators of activity is, of course, commits to the repository. You probably want to demonstrate how active your project is by placing some information on a web site.

With this idea in mind I created github.commits.widget, a small piece of JavaScript that can be easily integrated into your website and shows something like this:

Note, this is not just pasted HTML... but the actual widget added to this blog post.

The code for that is really simple:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js"></script>
<script src="github.commits.widget-min.js"></script>
<script>
    $(function () {
        $('#github-commits').githubInfoWidget({
            user: 'alexbeletsky', repo: 'github.commits.widget', branch: 'master', last: 5
        });
    });
</script>

To configure it, all you need to do is specify your GitHub account, the repository and the branch to watch. For more documentation, please visit the project page. If you like it and want to use it on your website, you are very welcome to do so! Please just give a little feedback, either by GitHub, Twitter or mail.

Implementation of a REST web service adapter in .NET

REST web services are quite a common thing nowadays. A web application just exposes an API through the HTTP protocol, basically allowing any application to be integrated with it. Simple? Yes, this is the power of REST: it is just simple. But the consumer has to call the API somehow. The environment could really be anything - .NET, Java, Python, PHP etc. - and it is not so convenient to work with HTTP directly from your custom application. Instead, you expect a “native” API that you work with like the rest of your application: having a model and methods that return or change the model state. You expect an Adapter - an entity which adapts the REST HTTP methods into methods of your platform/language. I’m going to give an example of creating such an adapter for .NET code.

No matter what language you write it in, the steps of adapting are quite common; here they are:

Learn the API

Let’s take the Trackyt.net API as an example. The first thing you need to do is learn the API. REST APIs differ a lot from site to site, depending on the developers’ tools and choices. All you need is to understand the exact methods you need, their signatures and the data they operate with. Let’s take the Authenticate method: we see it takes 2 arguments, email and password, and in response it returns a JSON object containing the operation result and an API token. That means the URL:

URL: http://trackyt.net/api/v1.1/authenticate

will be transformed into a C# method like Authenticate that receives email and password as arguments and returns an ApiToken as a result. Note, ApiToken is the first model class we have identified.

And a call like this:

http://trackyt.net/api/v1.1/af005f695d300d0dfebbd574a4b1c0fa/tasks/all

is actually transformed into something like GetAllTasks that receives an ApiToken object and returns IList<Task>. Task is yet another model class we have to deal with.

This is a kind of analysis stage of the implementation; you just need to understand the interface and the model.

Define interface and model

After you are done, you are ready to define the interface:

public interface ITrackytApiAdapter
{
    ApiToken Authenticate(string email, string password);

    IList<Task> GetAllTasks(ApiToken token);
    Task AddTask(ApiToken token, string description);
    int DeleteTask(ApiToken apiToken, int taskId);
    Task StartTask(ApiToken apiToken, int taskId);
    Task StopTask(ApiToken apiToken, int taskId);
    void StartAll(ApiToken apiToken);
    void StopAll(ApiToken apiToken);
}

You see that all methods defined in the documentation are reflected as interface methods, and all data accepted/returned by the methods is defined as POCOs.

public class ApiToken
{
    public ApiToken(string token)
    {
        Value = token;
    }

    public String Value { get; private set; }
}

public class Task
{
    public int Id { set; get; }
    public string Description { set; get; }
    public DateTime CreatedDate { set; get; }
    public DateTime? StartedDate { set; get; }
    public int Status { set; get; }
}

Integration testing

Is it possible to code without tests? I don’t think so. So, what we are going to do is test all adapter methods one by one. These should not be “super-duper-complex” tests (at least during the initial implementation), but rather smoke tests: do the call and see that results come back. Here are just a few examples of tests for Authenticate and GetAllTasks.

[Test]
public void Authenticate_AuthenticationSuccess_ReturnApiToken()
{
    // arrange
    var adapter = TrackytApiAdapterFactory.CreateV11Adapter();

    // act
    var apiToken = adapter.Authenticate(_email, _password);

    // assert
    Assert.That(apiToken, Is.Not.Null);
    Assert.That(apiToken.Value, Is.Not.Null);
}

[Test]
[ExpectedException(typeof(NotAuthenticatedException))]
public void Authenticate_AuthenticationFailed_ExceptionThrown()
{
    // arrange
    var adapter = TrackytApiAdapterFactory.CreateV11Adapter();

    // act
    var apiToken = adapter.Authenticate("nouser", "noemail");
}

Implementation

There are 2 very suitable components you might use for any REST API adapter:

  • James Newton’s Json.NET library - the best framework for handling JSON in .NET (imho). I enjoy how easy it is to serialize and deserialize data with Json.NET.
  • The WebClient class - part of the .NET framework; it encapsulates all basic HTTP functions.

Here we go. Our task is to send an HTTP request to the server, check the reply for correctness and transform it into .NET objects. To do that, it is great to model all responses as POCOs (as we did with the model classes ApiToken and Task). The difference is that responses are actually internal classes, part of the implementation, and adapter users should know nothing about them. For instance, let’s see AuthenticationResponse:

class AuthenticationResponse : BaseResponse
{
    internal class ResponseData
    {
        [JsonProperty("apiToken")]
        public string ApiToken { set; get; }
    }

    [JsonProperty("data")]
    public ResponseData Data { set; get; }
}

The base response is the common part of the data that every response is supposed to contain. In my case:

class BaseResponse
{
    [JsonProperty("success")]
    internal bool Success { set; get; }

    [JsonProperty("message")]
    public string Message { set; get; }
}

We deserialize responses with Json.NET and send requests with WebClient. Let’s see a simple code example:

public ApiToken Authenticate(string email, string password)
{
    using (var client = new WebClient())
    {
        var authenticationJson = JsonConvert.SerializeObject(new { Email = email, Password = password });
        client.Headers.Add(HttpRequestHeader.ContentType, "application/json");
        var responseString = client.UploadString(ApiUrl + "authenticate", authenticationJson);
        var response = JsonConvert.DeserializeObject<AuthenticationResponse>(responseString);
        if (!response.Success)
        {
            throw new NotAuthenticatedException();
        }

        return new ApiToken(response.Data.ApiToken);
    }
}

It simply creates a new WebClient instance; the UploadString method performs a POST by default and places the string object as the POST payload. Then we receive the response as a string and try to deserialize it into the target response type. In case it cannot be deserialized, an exception will be thrown. Next, it checks the result of the operation and returns the required data back to the client.

The implementation of the rest of the methods is mostly the same, differing by the type of HTTP request (GET, POST, DELETE, PUT) and the request/response objects. Let’s see the GetAllTasks method, which does a GET request and returns all the user’s tasks:

public IList<Task> GetAllTasks(ApiToken token)
{
    using (var client = new WebClient())
    {
        client.Headers.Add(HttpRequestHeader.ContentType, "application/json");
        var responseString = client.DownloadString(ApiUrl + token.Value + "/tasks/all");
        var getAllTasksResponse = JsonConvert.DeserializeObject<GetAllTasksResponse>(responseString);

        if (!getAllTasksResponse.Success)
        {
            throw new Exception("Can't get users tasks. " + getAllTasksResponse.Message);
        }

        return getAllTasksResponse.Data.Tasks;
    }
}

As a reference, I’ll point you to the implementation of trackyt.api.csharp by me and the GithubSharp API by Erik Zaadi.

Refactoring to testability

Suppose you are working on some REST API adapter for a web service. It would basically be one class with a bunch of methods, each method representing a supported API call. During the implementation you would probably land on something like this:

public class ApiAdapter
{
    private HttpClient _client;
    private RequestFormatHelper _requestFormatHelper;
    private ResponseFormatHelper _responseFormatHelper;

    // ...
    
    public ApiAdapter()
    {
        _client = new HttpClient(/* ... */);
        _requestFormatHelper = new RequestFormatHelper(/* ... */);
        _responseFormatHelper = new ResponseFormatHelper(/* ... */);
    }

    void CreateNewTask(Task task)
    {
        // implementation
    }

    void DeleteTask(Task task)
    {
        // implementation
    }

    // rest of methods...
}

Even though the code works, it has several code smells:

  • Tight coupling - the relations between objects are really strong. Objects are aggregated, and aggregation is one of the strongest types of links between objects.
  • Violation of the Open/Closed principle - one of the SOLID object-oriented design principles.
  • Lack of testability - if you decide to unit test this code, you will be in trouble. Unit testing is supposed to be done in isolation, and with tightly coupled code you can’t get the required level of isolation. Moreover, it is not possible to substitute a concrete class with a mock object using one of the famous frameworks like JMock, RhinoMocks or Moq.
  • Lack of flexibility - if you decide to change the implementation of one of the dependent objects, say ResponseFormatHelper, you would probably have to change the implementation of ApiAdapter as well.

If you think that your code is not testable or flexible, don’t waste your time... apply the power of refactoring.

What to do? It is basically very simple to correct such code; you just need to follow this:

  • Always hide the details behind an interface - all behavior objects must conform to a particular interface, and other objects must refer to them only through knowledge of that interface. In terms of programming languages, if you pass an object to client code, you must always pass it by interface (e.g. public SomeAction(IHttpClient client, Type type, Data data);).
  • Use dependency injection for loosely coupled code - try to avoid creating dependent objects with new; inject the dependency through the constructor or a public property (e.g. public ApiAdapter(IHttpClient client, IRequestFormatHelper requestHelper, IResponseFormatHelper responseHelper)).

Here is how the code might look after refactoring:

public interface IHttpClient
{
    void WebCall(/*...*/);
}

public interface IRequestFormatHelper
{
    string Format(/*...*/);
}

public interface IResponseFormatHelper
{
    string Format(/*...*/);
}


public class ApiAdapter
{
    private IHttpClient _client;
    private IRequestFormatHelper _requestFormatHelper;
    private IResponseFormatHelper _responseFormatHelper;

    // ...

    public ApiAdapter(IHttpClient client, IRequestFormatHelper requestFormatHelper, IResponseFormatHelper responseFormatHelper)
    {
        _client = client;
        _requestFormatHelper = requestFormatHelper;
        _responseFormatHelper = responseFormatHelper;
    }

    // rest of methods...
}

Now you can easily test this code using mocks, as in this simple test:

[Test]
public void CreateNewTask_SendPutRequest_WithJson()
{
    // arrange
    var client = new Mock<IHttpClient>();
    var requestFormatter = new Mock<IRequestFormatHelper>();
    var responseFormatter = new Mock<IResponseFormatHelper>();

    // act
    var api = new ApiAdapter(client.Object, requestFormatter.Object, responseFormatter.Object);
    api.CreateNewTask(new Task());

    // assert
    client.Verify(c => c.WebCall(/*...*/), Times.Once());
}

And you can change any details of the injected objects without any changes in ApiAdapter. Refactoring to testability is important: the more testable your code is, the more flexible it is.

ASP.NET MVC controller action with the name View()

If you want to have an action named View in your controller, you will run into a small problem. The base class of any controller, Controller, already contains a method with the same name (and it is overloaded):

protected internal ViewResult View();
protected internal ViewResult View(IView view);
protected internal ViewResult View(object model);
protected internal ViewResult View(string viewName);

So, code like this:

public ActionResult View(string url)
{
    // some action logic...

    return View();
}

won’t compile, because you are adding a method with the same signature (name and parameter list). You have 2 options here: give in to the compiler and rename the action to something new like ViewPost, or insist and try to make it happen.

Fortunately, C# has a beautiful feature for doing this: the new modifier. It hides the member of the base class and explicitly says that your method is now substituting it. So, everything you need is basically this:

new public ActionResult View(string url)
{
    // some action logic...

    return View();
}

It will make the application compile and work as expected.
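A small side note on how new differs from override: hiding is resolved by the static type of the reference, so the base implementation is still reachable. A minimal sketch (Base and Derived are illustration-only names):

```csharp
class Base
{
    public string Whoami() { return "Base"; }
}

class Derived : Base
{
    // 'new' hides Base.Whoami instead of overriding it
    new public string Whoami() { return "Derived"; }
}

// var d = new Derived();
// d.Whoami();           // "Derived" - called through the derived type
// ((Base)d).Whoami();   // "Base" - hiding is not polymorphic
```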

Ninject provider as factory method

Ninject is a very nice, easy to use, open source dependency injection framework. It is very popular within the ASP.NET MVC developer community and the de-facto framework of choice for MVC applications.

I was implementing a small feature recently. When a user registers on the site, he receives a confirmation email with his registration details. Pretty common functionality across sites. So, I added a new application service, INotificationService, which took responsibility for sending the email message to the user. Nothing special, nothing complex.

namespace Trackyt.Core.Services
{
    public class NotificationService : INotificationService
    {
        private IEMailService _emailService;

        public NotificationService(IEMailService emailService)
        {
            _emailService = emailService;
        }

        public void NotifyUserOnRegistration(string usersEmail, string password)
        {
            var emailMessage = CreateEmailMessageForUser(usersEmail, password);
            _emailService.SendEmail(emailMessage, "support");
        }

        //...

As I tested and integrated it into the application, everything was just fine. Until the time I reset the database and re-ran the tests. The problem is that INotificationService itself depends on IEmailService, which uses ICredentialsRepository to extract the email server credentials (account, password, settings) from the database. After the database is reset, the Credentials table is just empty, and IEmailService throws an exception that there are no credentials, so sending email is impossible. I could not add the credentials to the database SQL script, since they depend on the configuration and would expose a private password. Doing it manually after each database reset is a boring task. Furthermore, I don’t want my application to send any emails while I just do some development testing.

The obvious design workaround is to define an INotificationServiceFactory that is responsible for NotificationService instantiation. The factory decides: if the application runs in debug mode, just a stub of NotificationService is used; otherwise the real implementation is used.

namespace Trackyt.Core.Services
{
    public class NotificationServiceFactory : INotificationServiceFactory
    {
        public INotificationService GetService()
        {
            if (HttpContext.Current.IsDebuggingEnabled)
            {
                // just stub..
                return new NotificationServiceForDebug();
            }

            // here I need to pass EmailService to constructor
            return new NotificationService(/* ??? */);
        }

        // ...

But it is not as easy as it seems... Here is the problem: NotificationService has to accept an EmailService, which has to be created by the DI framework (I could not create it with new, since I would lose all the benefits of inversion of control). So, in the factory I need to have an IKernel object - the Ninject core object for instantiating objects from the Inversion of Control container. The factory would have to be extended with a constructor taking IKernel as an argument.

Issues:

  • Circular dependency - the factory is defined in the Core assembly, the kernel is defined in the Web application. The Web application references Core; to make this work, Core would now need to reference Web (it is actually possible, but very ugly... I try to avoid such things).
  • Additional references - now Core also needs to reference Ninject to make it compile.
  • Violation of the Dependency inversion principle - one of the SOLID principles of object-oriented systems. The model must not depend on the infrastructure.

Fortunately, Ninject provides functionality to avoid the issues mentioned above! Instead of binding to an exact type, like


    Bind<INotificationServiceFactory>().To<NotificationServiceFactory>();

I can bind the creation of the type to a Provider:


    Bind<INotificationService>().ToProvider<TrackyNotificationServiceProvider>();

A Provider is a class that implements the IProvider interface, which actually has just one method, CreateInstance. CreateInstance receives an IContext object as a parameter, and that context contains the IKernel. TrackyNotificationServiceProvider is placed on the same level as the rest of the Ninject infrastructure code. The model remains clean and does not get messed up with infrastructure code.

namespace Web.Infrastructure
{
    public class TrackyNotificationServiceProvider : Provider<INotificationService> 
    {
        protected override INotificationService CreateInstance(IContext context)
        {
            if (HttpContext.Current.IsDebuggingEnabled)
            {
                return new NotificationServiceForDebug();
            }

            return new NotificationService(context.Kernel.Get<IEMailService>());
        }
    }
}

Now, whenever an INotificationService object needs to be instantiated (in my case it is injected into RegistrationController as a constructor parameter), CreateInstance is called. If web.config contains <compilation debug="true" targetFramework="4.0">, the stub of the service is created. In production, where <compilation debug="false" targetFramework="4.0">, a real instance of NotificationService is put to work.