Alexander Beletsky's development blog

My profession is engineering

Failed to generate user instance of SQL Server exception

If you have just switched from the ASP.NET development server to IIS, you might see an exception like this thrown from the data access layer of your application:

Failed to generate a user instance of SQL Server due to failure in retrieving the user’s local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed.

Don’t panic, it is easy to fix. Open the Internet Information Services (IIS) Manager console and check which Application Pool is set for your application (Basic Settings). It will probably be ASP.NET 4.0 Classic. Go to the Application Pools section, select ASP.NET 4.0 Classic and open Advanced Settings. There, under Process Model, you will see that Identity is set to ApplicationPoolIdentity.

Change it to the credentials of a user that has access to the App_Data folder of the application.

Restart IIS and start the application again.

Does TDD find bugs?

No, of course not. TDD doesn’t find bugs in your application. This is a very frequent misconception about TDD, and I would like to shed some light on it today.

Why don’t tests find bugs? Why do applications created with TDD still have bugs?

Tests are a replication of the developer’s mind. A problem can only be tested, and solved, as well as it is understood. That means tests and code are very subjective; it all depends on the developer. If a requirement is misunderstood, or implemented only partially, there will be a bug even if all tests pass. If the developer is not aware that some problem exists, he won’t be able to create a corresponding test case or fix.

Tests are limited. It is just not possible to test everything, especially in middle- or big-sized applications. Even a 100% coverage metric does not guarantee that the code is 100% tested; it only means the existing test suite runs each line at least once. There will always be corner cases that are not visible during the requirements or implementation phase and only show up during acceptance or maintenance. The chances of missing something important during development are high.

Tests have a quality. We frequently hear the term code quality: a measure of how easily code can be understood and changed. We all know that the use of design patterns, enterprise libraries, refactoring and so on is aimed at improving code quality. Tests are also code, but being easy to read and maintain is not the primary test quality factor. Test quality is a measure of how well the test code exercises and asserts against the production code. The number of asserts is the simplest metric: if a test doesn’t have any assert in it, the test makes no sense, it is useless. Quality also depends heavily on the actual test scenario; the smarter the scenario, the better the test.
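To make the assert point concrete, here is a contrived sketch in plain javascript (the calculator object is hypothetical): the first "test" runs the production code but can never fail, while the second one actually asserts against it.

// executes the (hypothetical) production code, but has no assert,
// so it can never fail and therefore proves nothing
function testAdd_noAssert() {
    calculator.add(2, 2);
}

// asserts against the result, so it can actually catch a regression
function testAdd_withAssert() {
    var result = calculator.add(2, 2);
    if (result !== 4) {
        throw new Error("add(2, 2): expected 4 but got " + result);
    }
}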

Tests are not intelligent. Tests are code, created by a developer, that proves the functionality works as expected at the current moment in time. A test suite is a snapshot of particular functionality. Tests become very useful when they are created before the code, because they guide you toward solving the problem and reaching the goal; they are also useful after the code is created, minimizing the risk of regression during any code change (fixes, refactoring, new functionality etc.). But a test by itself cannot give you any new information.

Why is TDD still important, you might ask? Even if TDD still leaves room for bugs, it radically decreases the overall number of bugs. First of all, because many silly mistakes are found while creating the tests and during the first test runs. Second, a good number of tests creates a kind of boundary that helps to keep existing functionality inside it.

Web development: Lightweight AJAX through jQuery, Json.net and HttpHandlers

If you are about to start using AJAX in your ASP.NET application, you will be pointed to some existing frameworks: ASP.NET AJAX, Anthem.net or something else. It is probably a good idea to use time-proven things, but you might also have reasons not to. First, if you are new to AJAX and need to educate yourself, using a framework is not great, because it hides a lot of the details of “how it works”. Second, you might not want the overhead of additional frameworks and prefer to keep things as lightweight as possible. If you are about to implement some simple AJAX operations, then jQuery for the client code, Json.net to handle JSON on the server side and an ASP.NET HttpHandler are all that you need!

Let’s briefly review each of these components:

  • jQuery - everybody knows jQuery, it is the best javascript framework, created by John Resig.
  • Json.net - a simple and easy to use framework for serializing/deserializing .NET objects to JSON, created by James Newton-King.
  • Generic Handlers - part of the ASP.NET framework. Simplifying a little, an HttpHandler can be thought of as a page without any overhead (like a Page with no HTML and only a Page_Load method), which makes it ideal as a handler for AJAX calls.

Preparation

We are going to create a simple admin page that can do two things: get the list of all users registered in the system and quickly add a new user. I'll use the same project as in my previous web development articles, called Concept, so as always you can get the source code on GitHub.

Generic Handler implementation

In the web project I’ve added a new folder, called handlers, that will contain all the handler code we need. Add a new "Generic Handler" item into this folder and call it users.ashx.

The skeleton code of the handler looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Company.Product.DAL;
using Company.Product.BLL;

namespace WebApplication.handlers
{
  /// <summary>
  /// Summary description for users
  /// </summary>
  public class users : IHttpHandler
  {
    private UsersOperations _operations = new UsersOperations(new UsersRepository());

    public void ProcessRequest(HttpContext context)
    {
      context.Response.ContentType = "application/json";
      var response = string.Empty;

      var function = context.Request["function"];
      switch (function)
      {
        case "list":
          response = CreateListResponse(context);
          break;

        case "add":
          response = CreateAddUserResponse(context);
          break;
      }

      context.Response.Write(response);
    }

    private string CreateAddUserResponse(HttpContext context)
    {
      return _operations.InsertUser(context.Request["Email"], context.Request["SecretPhrase"], context.Request["Password"]);
    }

    private string CreateListResponse(HttpContext context)
    {
      return _operations.GetAllUsers();
    }

    public bool IsReusable
    {
      get
      {
        return false;
      }
    }
  }
}



Two important things here: first, we set context.Response.ContentType = "application/json", meaning that the response body will contain JSON. Second, the request contains a function parameter that holds the name of the exact function we want to call. In our case there are just two functions, list and add.

Serialization of data

Json.net makes serialization of .NET objects to JSON very easy. It supports all the main types and collections, and it is also extensible for your custom needs. Here is the code that returns the list of all users:

using Newtonsoft.Json;      // JsonConvert
using Company.Product.DAL;  // User, IUsersRepository

namespace Company.Product.BLL
{
  public class UsersOperations
  {
    private IUsersRepository _data;

    public UsersOperations(IUsersRepository data)
    {
      _data = data;
    }

    public string GetAllUsers()
    {
      return JsonConvert.SerializeObject(new { status = "success", data = _data.GetAll() });
    }

    public string InsertUser(string email, string secret, string password)
    {
      var user = new User { Email = email, SecretPhrase = secret, Password = password };
      _data.InsertUser(user);

      return JsonConvert.SerializeObject(new { status = "success", data = user.Id });
    }
  }
}


The user repository’s GetAll() method returns an IEnumerable of User. JsonConvert understands such data types, so it performs the serialization without a problem.
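For illustration only, the list response the client ends up working with has roughly this shape (values invented, other User properties omitted for brevity):

// roughly what function=list returns (illustrative values only,
// other User properties omitted)
var response = {
    "status": "success",
    "data": [
        { "Id": 1, "Email": "first@example.com" },
        { "Id": 2, "Email": "second@example.com" }
    ]
};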

Aspx code

On the aspx page I use the $.ajax call, as well as a very nice component called blockUI, which works on top of jQuery and helps to block user interaction during AJAX calls and to create simple modal dialogs.

<%@ Page Title="" Language="C#" MasterPageFile="~/Concept.Master" AutoEventWireup="true"
  CodeBehind="UserOperations.aspx.cs" Inherits="WebApplication.UserOperationsView" %>


<asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
  <script type="text/javascript">
    function listOfUsers() {
      $.ajax(
      {
        url: "/handlers/users.ashx?function=list",
        beforeSend: function () {
          $("#results").slideUp();
          $.blockUI();
        },
        cache: false,
        success: function (response) {
          $.unblockUI();
          if (response.status == "success") {
            listOfUserCallback(response);
          }
        }
      });
      return false;
    }

    function listOfUserCallback(response) {
      var html = "<ul>";
      for (var key in response.data) {
        html += "<li>" + response.data[key].Id + ": " + response.data[key].Email + "</li>";
      }
      html += "</ul>";
      $("#results").html(html);
      $("#results").slideDown();
    }

    function showDialog() {
      $.blockUI({ message: $("#adduserdialog") });
      return false;
    }

    function closeDialog() {
      $.unblockUI();
    }

    function addUser() {
      var user = {};

      user.Email = $("input#email").val();
      user.Password = $("input#password").val();
      user.SecretPhrase = $("input#phrase").val();

      $.ajax(
      {
        url: "/handlers/users.ashx?function=add",
        beforeSend: function () {
          $.blockUI({ message: "<h1>Adding new user, please wait...</h1>" });
        },
        data: user,
        success: function (response) {
          $.unblockUI();
          if (response.status == "success") {
            addUserCallback(response);
          }
        }
      });
      return false;
    }

    function addUserCallback(response) {
      //renew list of user:
      listOfUsers();
    }

    $().ready(function () {
      $("#results").hide();

      //setup handlers
      $("a#list").click(listOfUsers);
      $("a#add").click(showDialog);

      //setup dialog
      $("input#adduser").click(addUser);
      $("input#cancel").click(closeDialog);
    }
    );
  </script>
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
  <div id="content">
    <div id="adduserdialog" style="display: none; cursor: default">
      <label>
        Email:</label>
      <input id="email" type="text" />
      <label>
        Secret phrase:</label>
      <input id="phrase" type="text" />
      <label>
        Password:</label>
      <input id="password" type="password" />

      <input type="button" id="adduser" value="Add user" />
      <input type="button" id="cancel" value="Cancel" />
    </div>
    <div id="left">
      <div id="box">
        <p>
          Admin operations:
        </p>
        <a id="list" href="#">List of users</a><br />
        <a id="add" href="#">Add new user</a>
      </div>
    </div>
    <div id="right">
      <div id="results">
      </div>
    </div>
  </div>
</asp:Content>


Putting all together

Now let’s review everything in conjunction. We have a generic handler that receives the HTTP request. It uses the request’s “function” parameter to understand which function the user requested. Based on the function type it delegates the call to a business object called UsersOperations. UsersOperations relies on UsersRepository to work with the data, so it gets or inserts the data and returns the results as JSON strings. The JSON is created by serializing .NET objects into JSON objects by means of the Json.net library. The client receives the output in asynchronous callbacks, checks the status of the operation and dynamically creates the HTML. The blockUI component helps to block user interaction with the UI during asynchronous calls, and the “create new user” modal dialog is also created by means of blockUI.

This approach works really well for simple AJAX applications built on plain ASP.NET and jQuery. Check out the sources on GitHub.

7 links challenge from Problogger.net

I haven’t been blogging for very long; even though I started in March 2008, I took a long pause. Originally I started the blog in Russian, but later I decided to switch to English, mainly because I wanted my colleagues from Denmark to be able to read it.. and probably to get a bit wider audience.

Problogger.net is a great place for bloggers. Even if you don’t plan to make money from your blog (like me), there is a great number of tips and tricks for bloggers, as well as some inspiring information. I started reading it recently and found it very interesting.

Darren Rowse is the man behind Problogger.net, a professional blogger who lives in Melbourne, Australia. He announced the 7 Links challenge, and I decided to pick it up. Here we go:

  • First post - Первое сообщение. It was a small introduction of myself to the world, as well as setting up the objectives for this blog. It was in Russian; what I said there is that I was going to write about the challenges I meet in my everyday development work, to keep and share the knowledge I gain. I’m still doing that.. I think.
  • The post I enjoyed writing the most - I have a lot of fun blogging (even if it is sometimes hard to finish a post), but I enjoyed GitHub Social coding the most. It was my first not-so-technical post, with elements of philosophy.. I liked that!
  • A post which had a great discussion - I haven’t yet created a post that attracted big attention and discussion. I’m OK with that; I still think I will create one.
  • A post on someone else’s blog that you wish you’d written - This is the one Anton Litvinenko created recently, called The biggest demotivator for programmer. Nicely written, and it describes very real issues for every developer. It is not only me who liked it; it was at the top of dzone.com for several days.
  • A post with a title I am proud of - Hard to say, maybe No back up… Fail!. Don’t know why, but I like that one the most.
  • A post that you wish more people had read - It might be the set of my first blog posts, related to TDD. I tried to describe my vision of TDD and why I like it. It was in Russian and was my first blogging experience.. but anyway: Разработка ведомая тестированием. Часть 1. Описание (Test Driven Development. Part 1. Description). I also tried to gather some ideas in Blogging with a GitHub and Blogspot? Ideas?.
  • My most helpful/visited post - so far it is DDD, Implementation of Repository pattern with Linq to SQL, which I created after a discussion of repositories on the asp.net forum that I try to read and write on periodically.

That’s it.. You can also join the challenge, please do!

Happy blogging!

Blogging with a GitHub and Blogspot? Ideas?

I use Blogger as my blog engine and really like it. As long as you create your blog posts in HTML and do not use the embedded editor, it is perfectly fine! So, when I do a new post I create plain HTML in Visual Studio and, once it is ready, copy-paste it to Blogspot and publish. The original HTML I commit to a special repository on GitHub.

I liked that style of work as long as I changed nothing in the posts. When I need a change, I have to edit the original HTML, re-publish it on Blogspot and commit again to GitHub. After doing that several times, I started to think about how to automate it. I came up with a simple idea: why not load the content of a post dynamically, using javascript?

It is no problem to do that, and I implemented a small script that does exactly what I needed. I called this project GithubToBlogspot. Its description:

GithubToBlogspot is for people who store their blog article sources in HTML. Instead of copy-pasting HTML between Blogspot and GitHub, it should be possible to paste a simple script on a page and have that script load the content.

It utilizes the javascript GitHub API by fitzgen: http://github.com/fitzgen/github-api

A first draft example looks like this:

<div id="content1" class="load" >
Loading your content, please wait...
<img src="http://www.sanbaldo.com/wordpress/wp-content/mozilla_giallo.gif" />
</div>
<script type="text/javascript">
  var __user = "alexbeletsky"; var __repo = "Blog"; var __sha = "16fe3ddf21925508490d91978cf581a13bc37b6c";
  var __path = "07112010/GitHubSocialCoding.htm";
  var __div = "content1";
  githubToBlogspot(__user, __repo, __path, __sha, __div);
</script>


The code itself is really simple: it opens the blob, reads its data as HTML, extracts the body and puts the body into the target div. That’s it. All of this is done asynchronously, so as you open the blog you see a progress image.. after the data has loaded, it appears on screen.
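For the curious, the "extract the body and put it into the target div" step boils down to something like this sketch (the blob fetching itself goes through the github-api library and is omitted; injectPost is an illustrative name, not the project's actual function):

// given the blob content as a string of HTML, pull out whatever sits
// between <body> and </body> and inject it into the target div
function injectPost(html, targetDivId) {
    var match = /<body[^>]*>([\s\S]*)<\/body>/i.exec(html);
    var body = match ? match[1] : html;
    document.getElementById(targetDivId).innerHTML = body;
}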

But I hadn’t taken several major considerations into account:

  • First, AJAX content is not crawlable by Google. It actually can be, but that requires changes that cannot be made, since you are on a 3rd party blog engine.
  • Second, with such dynamic loading the RSS feed of the blog will also be empty.

I still want to accomplish this. I’ve been thinking about different solutions, but have no solid one.. Do you know whether it is possible to accomplish something similar without the drawbacks mentioned above? It would be great if you shared your ideas!

JSON / JSONP and Same Origin policy issue

First of all, what is JSON? It is very, very simple! JSON is an acronym for JavaScript Object Notation. It is a data interchange format, an analog of XML. It became very popular because of AJAX applications, which receive data from web services called from javascript. Web services typically use XML as the data exchange format, which works great because of the SOAP standard and so on.. But in a javascript client application you had to put in extra effort to parse the XML data, and to create XML to post to the web service. JSON simplifies this, because it is native to javascript.

JSON is a serialization of a javascript object to a string. It is based on the basic javascript types Number, String, Boolean, Array, Object and null, and looks like this:

{
   "id": 1023,
   "description": "new assigment",
   "data": {
     "size": 117,
     "url": "/local/data/jhsr2kk"
   }
}



When you receive such a string from a web service (for instance), you can simply evaluate it (with the eval() function) and use it as a javascript object. (I have to mention that using eval() directly is not recommended because of security issues; all modern browsers have a built-in JSON parser you can use instead. The best way is to use a library, like jQuery or Prototype, which hides such details from you.)
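For example, a minimal sketch of the safer alternatives:

var text = '{ "id": 1023, "description": "new assignment" }';

// native parser, built into modern browsers
var obj = JSON.parse(text);

// or let a library do it for you, e.g. jQuery
var sameObj = jQuery.parseJSON(text);

alert(obj.description); // "new assignment"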

Exchanging data with JSON works great as long as your application and the data source service are in the same domain. If you want to receive data from outside your domain, it simply won’t work (it actually works in IE, but with a security notification, and does not work in Chrome at all). This is because of the Same origin policy, a security concept that disallows such operations. So, if you try to do something like,

$.getJSON("http://external.com/json", function(data) { });

you will not get any result. There are different approaches to working around the same origin policy (please check the nice article on that), but if the web service supports it, JSONP comes to help.

JSONP is JSON with padding (it is also called JSON with prefix). With JSONP you specify the URL together with a callback function name, like http://external.com/json?callback=f. The URL goes into a script tag, and the callback must be defined before that script loads.

<script type="text/javascript" src="http://external.com/json?callback=f">
function f(data) {

}
</script>

It is allowed and works perfectly fine.

But what if you need dynamic calls? That is also not a problem: it is possible to create the script tag dynamically and attach it to the DOM. I’ve seen a nice way of doing that in the github-api project. The original code is from here, I just removed some details and made it more reusable.

(function (globals) {
  var json = {
    __jsonp_callbacks: {},
    call: function (url, callback, context) {
      var id = +new Date;
      var script = document.createElement("script");

      json.__jsonp_callbacks[id] = function () {
        delete json.__jsonp_callbacks[id];
        callback.apply(context, arguments);
      };

      var prefix = "?";
      if (url.indexOf("?") >= 0)
        prefix = "&";

      url += prefix + "callback=" + encodeURIComponent("json.__jsonp_callbacks[" + id + "]");
      script.setAttribute("src", url);
      document.getElementsByTagName('head')[0].appendChild(script);
    }
  }
  globals.json = json;
})(window);

Example of usage:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>Test page</title>
  <script type="text/javascript" src="json.js"></script>
</head>
<body>
<script type="text/javascript">
  json.call("http://ws.geonames.org/citiesJSON?north=44.1&south=-9.9&east=-22.4&west=55.2&lang=de", function (data) {
    var d = data;
  }
  );
</script>
</body>
</html>


This code is very useful for small “depends-on-nothing” applications. Of course jQuery has its own support for JSONP, so if you are already using jQuery you should consider $.getJSON for your needs.
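A minimal sketch of the jQuery way (the URL is illustrative): the callback=? placeholder tells jQuery to make a JSONP request and to generate and wire up the callback for you.

// jQuery replaces the "?" with an auto-generated callback name
// and injects the script tag behind the scenes
$.getJSON("http://external.com/json?callback=?", function (data) {
  // data arrives here as an already-parsed javascript object
});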

No back up… Fail!

Investing in a UPS might seem like a big deal.. right up until you lose data after a power outage.

Yesterday I was in exactly that situation. The power outage lasted only 1-2 seconds, but it was enough for my PC. It turned off, and after I turned it on again the BIOS was not able to detect my hard drives. I’ve got 2 drives in my PC, and neither of them is detected.

I haven’t done a thorough diagnosis yet, and I still hope the hard drives are OK or at least that I can restore something. Fortunately, lately I mostly use my notebook for work, so all the latest information is there. On the PC I have all my old projects (on a local SVN server, not backed up to any service), a lot of books, photos and music. The photos would be the most painful thing to lose.

You always hear about the importance of backups, but you don’t take it seriously as long as it doesn’t happen to you. It happened to me, and now I don’t feel so great. For now I’m thinking about how to minimize or avoid this in the future. For sure, spending about 200 bucks on a UPS doesn’t sound like a bad idea now. I will also finally buy additional space on Google to store all my photos in Picasa at original size, so there is always a backup. All important projects, documents and other artifacts should be copied to services (Google Docs, Google Code, GitHub etc.). Music, videos and so on get backed up to an external USB hard drive; these are not so expensive now.. and there is a chance that such a hard drive could also get corrupted, but it is better than nothing.

My little experience with GitHub collaboration

Recently I started a tiny project. For this project I needed access to GitHub through its open API. The implementation was done in javascript, so I started to look for existing javascript frameworks that work with the API. On the GitHub API help page there is a link to such a framework, implemented by fitzgen. I opened the repository and checked out a small javascript file called github.js. It was implemented very nicely, accessing the API via JSONP (something I would have avoided implementing myself, since I’m not very experienced with that). That was the good news!

But the bad news was that the current implementation lacked something I required. Namely, the Object API that is supposed to provide access to repository objects was missing. So I saw it as my chance to do my first fork!

Forking is native to development with Git - when you would like to contribute to some project, you fork it, which actually creates a copy of the repository (or branch) for you to work with.. and if you commit something to your fork, you can send a notification to the author. The author is the one who decides whether your changes are worth including in the master branch or not. Fairly simple. I decided to implement the Object part of the API on my own; fortunately all the infrastructure code was already there, so what I had to do was learn the specification and implement the appropriate calls.

After I forked the github-api repository, Nick started to watch it.. which was also a motivating factor for me. I did the first part of my implementation and was ready to commit. Since my javascript experience is not great, I was not really sure about my code and worried a little about the feedback I might receive from the author. But there was no chance to step back! I committed and sent a pull request to the author. I was expecting feedback and received it very soon! Despite my worries Nick was quite happy with the contribution; moreover, he did a very thorough review and provided me with the results. So, after some corrections, we did a successful merge to the master branch. I really liked that! Moreover, he helped me to clarify my open questions regarding javascript and JSONP.

That was a really nice experience of collaboration on GitHub. I liked how it happened and I enjoyed that open source style of work.

StackOverflow usage in Chrome

It might be a well known fact, but I just discovered it for myself. StackOverflow has a nice feature together with Chrome. If you are looking for some information, just start typing stackoverflow.com in the Chrome URL bar; as soon as Chrome suggests http://stackoverflow.com, press Tab and you will see a prompt to enter your query. Just type your query and press Enter, and you will get results directly from the StackOverflow site. It seems a very nice and useful feature, since I frequently use StackOverflow as a source of information.

TestLint - a new tool from TypeMock

Since one of my favorite topics is tools that improve test quality, I would like to describe TestLint. It is a rather new tool from the famous TypeMock company, so as soon as it was announced I immediately downloaded it.

I’ve used it for a couple of weeks and want to share what I liked:

  • It is lightweight, easy to install
  • It easily detects empty test cases; in such a case it shows a warning next to the case
  • The documentation says it is extensible with custom rules, but I haven’t tried that yet

My initial feedback on it:

  • I would like to believe my tests are perfect, but the "empty test" case is the only warning I have seen from TestLint so far
  • Analysis is done on pre-built sources, so to get results you have to build your tests first
  • You get the analysis only while browsing the sources; it would be nice to get all warnings in a separate window
  • It would be nice to see all currently used rules in some UI
  • It would be nice if rules could be created not only through the API but also through some user interface

Anyway, TestLint is free, it doesn’t crash your Visual Studio, and it helps you do a better job in some way - there is no reason not to install it! I recommend it and am waiting for the next versions with improvements.