Alexander Beletsky's development blog

My profession is engineering

Company Days 2012 in E-conomic

Company Days is a yearly event at E-conomic when all employees gather together. It usually happens in the middle of September, the time when Copenhagen is especially beautiful. It’s been a great week, primarily because of the opportunity to meet the people we work so closely with. Besides the developers, primarily located in Denmark and Ukraine, business and operational people from the UK, Sweden, Norway, Spain, Germany etc. also come, which makes for a very cool international environment.

This year was a little special, since we had some additional fun: our CTO organized a hackathon, a whole-day event meant to produce some valuable hacks for the E-conomic product. All developers were split into pairs and joined by business representatives. I had a team of 4 people - 2 developers and 2 business guys. We started early in the morning near the blackboard, and it was so nice to be part of a process where value is born in such an extremely short period. We finally selected an idea that seemed very cool in terms of business and realistic in terms of a one-day implementation.

It’s been a long day of developing, debugging, demonstrations and changes. Our product owners were always near us, giving valuable feedback and supplying us with coffee and Danish chocolate sweets (which worked so well for me, since I needed sugar all the time). Around 22:00 we had something that looked like a demo version of the feature we wanted to produce. We spent about an hour more showing everything to the POs and fixing some small bugs. I felt very tired at the end.

The next day at 14:00 was the big time of presentations. That was the funniest part. I have to admit, there were several really great hacks. At the same time, some of our POs are so good at selling things that they turned the presentation into a real show (a kind of TV market, I would say). Everybody had great fun, especially after words like “I’ve been asking for this feature for 2 years, and those guys created it in 16 hours”. It’s a bit of irony, but a bit of truth as well. The jury, which included the CEO, CTO, Head of POs etc., was definitely happy to see the results, and some hacks will be turned into products really soon.




Apart from the hackathon, we had the traditional speeches from the CEO, marketing people and guys from other departments. There were also a few more activities, including a jogging run in which I took part and did an 8 km distance. And of course, the Friday boat trip and party in a restaurant. The party was so cool that I almost missed my flight in the morning (but that’s a completely different story).

I would like to say thanks to everybody involved; I was happy to meet you guys once more.

How My Wife Became a TDD Fan

My wife Sasha is not (yet) an IT person. Living with a developer for 5 years, she knows only a few details of a developer’s job (like, they write code and sit up late nights to make it work). My bad, I never spent much time explaining what my job really is.

Recently she joined a training course on software testing, with the prospect of finding a job in the IT area.

After her second lesson she came back home and we had a talk about what she is actually studying there. It appeared they started with software development processes, understanding how software is actually created. So, they covered waterfall (even if nobody uses it any more) and some agile frameworks (Scrum, of course), and were left with homework: to read about what other practices exist.

- So, did you find anything interesting?
- I read about a few, but I think I found the most interesting one..
- What’s that?
- It’s called Extreme Programming..
- Hm, I’ve never heard of it. Tell me more (troll face on).
- Alright, so it’s a set of different practices like code review, pair programming, collective code ownership.. But you know what the greatest one is?
- No, what’s that?
- It’s called Test Driven Development, or TDD.
- What’s the point of this TDD? (troll face is still on).
- Can you imagine, you create the test before you create any application code, make sure it fails, and after you fix it, you are always sure that the application works fine! That sounds so interesting and so valuable, I can’t wait to try it!

I had to put my troll face off after that. I could not believe what I was hearing. That was simply amazing!

- You know, I’m a big TDD fan, and the training course I run from time to time is “TDD in .NET”, where we teach people to use TDD practices in real projects.
- Fantastic, I never thought your job was so interesting!

The big disappointment for her was that testers are not doing TDD (she thought it’s actually the testers’ job). Nevertheless, I got one interesting thought out of it.

During the trainings on TDD we try hard to explain the value of TDD, and usually there are some guys who just deny the idea, seeing absolutely no value in it. That actually reveals a human trait called “open-mindedness”.

The less you know, the easier it is for you to absorb new ideas.. The more you know, the less agile your brain can be, and the harder it is to adopt anything new. With more experience, it gets harder and harder to adopt new things. It is only a matter of discipline and hard work to stay up to date and actually try things out. It also takes a lot of patience until some tool or technique becomes beneficial to you.

The point of my story is: a developer has to be an open-minded person during their whole career. Something you did yesterday may become obsolete tomorrow. There are many things outside your comfort zone that you might be missing by staying in it for a long time.

Back to Sasha: I always knew she is a very bright lady. Now I hope she becomes a bright engineer. Good luck to you, honey.

Why Use Backbone.js?

Even though the JavaScript language and front-end development have matured a lot in recent times, I still see a lot of confusion about the usage of different frameworks and libraries.

For half a year I have used Backbone.js and feel very satisfied with it. It’s a very popular library, probably because of the personal popularity of its creator, Jeremy Ashkenas, also well known as the author of the CoffeeScript language.

Frameworks and libraries

There is a difference between frameworks and libraries. In one sentence, “a library is when you call some code; a framework is when your code is called from somewhere”. Backbone.js is clearly a library. You usually have a lot of different options with libraries and fewer with frameworks. It’s very minimalistic, actually.

At its core it contains only 4 components (Model, Collection, View, Router). This minimalism is both the power and the weak side of Backbone.js. The power is that you can build things however you like; Backbone fits the experience really smoothly. At the same time, you are really confused when you have just started, since you simply don’t understand how to make everything work together properly.

The essence of Backbone.js

In one of the podcasts with Jeremy, he said “Backbone is something that almost every developer will come to themselves.. after dealing with JavaScript applications for a while”. I could not agree more. If you have previous experience in programming and you see the value of architecture and design, you will definitely start to invent something similar to Backbone.

After some time spent developing jQuery-based applications, you will realize many fundamental issues, like: data does not belong in the DOM, events are not limited to jQuery, and so on. But the main issue is that jQuery apps usually violate the “Single Responsibility Principle”, mixing manipulation of data and views at the same time. Experienced developers try to eliminate SRP issues as soon as possible, almost naturally arriving at the MVC pattern.

In essence, Backbone is a way to structure your application better. It makes a clear decoupling between data and the views that render that data. It’s nothing more than that.

Abstracting things out

Programming is a matter of abstractions. The better abstractions you have, the better code you have.

Every application works with data. The data does not appear in the browser just like that; it has to be fetched from somewhere. It also has to be persisted somewhere. The data can be changed, and you have to validate that the change is right. If some part of the data changes, you want to be notified of that change. Notice that this is very common functionality you want to share between all data classes: initialization, validation, persistence. This is where Backbone.Model comes in. With Backbone.Model all those things are already “in the box”; you just need to apply your own implementation strategies.
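The idea behind Backbone.Model can be sketched in a few lines of plain JavaScript. This is only a simplified illustration of the concept (the TinyModel name, its validate callback and onChange method are made up for this sketch; they are not Backbone’s actual API):

```javascript
// A minimal observable model: get/set with validation and change notification.
function TinyModel(attributes, validate) {
  this.attributes = attributes || {};
  this.validate = validate || function () { return null; }; // null = valid
  this.listeners = [];
}

TinyModel.prototype.get = function (name) {
  return this.attributes[name];
};

TinyModel.prototype.set = function (name, value) {
  if (this.validate(name, value)) {
    return false;                          // reject an invalid change
  }
  this.attributes[name] = value;
  this.listeners.forEach(function (cb) {   // notify subscribers of the change
    cb(name, value);
  });
  return true;
};

TinyModel.prototype.onChange = function (cb) {
  this.listeners.push(cb);
};
```

Backbone.Model adds persistence (fetch/save against a URL), defaults, rich events and much more on top of this basic shape.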

You typically don’t deal with one instance of a model. REST interfaces usually expose something like /api/tasks or /api/photos and so on. You need to group the models into collections. A collection (similar to a model) is able to fetch and persist itself; all it needs to know is the actual Model type and the URL. This is where Backbone.Collection shines. It keeps all the models close together; if some model gets destroyed, the collection is updated, so it is always in a consistent state.
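The consistency trick is easy to see in plain JavaScript. Again, this is a made-up sketch of the idea (TinyCollection and the destroy helper are hypothetical, not Backbone’s API):

```javascript
// A minimal collection that stays consistent when a model is destroyed.
function TinyCollection(url) {
  this.url = url;    // e.g. '/api/tasks' — where a real collection would sync
  this.models = [];
}

TinyCollection.prototype.add = function (model) {
  var self = this;
  this.models.push(model);
  model.onDestroy = function () {  // remove destroyed models automatically
    self.models = self.models.filter(function (m) { return m !== model; });
  };
};

TinyCollection.prototype.size = function () {
  return this.models.length;
};

// Destroying a model notifies the collection that holds it.
function destroy(model) {
  if (model.onDestroy) { model.onDestroy(); }
}
```

In Backbone, the same effect happens through events: a model’s destroy triggers a remove on every collection listening to it.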

A view basically renders your models as a DOM structure. So, Backbone.View is an abstraction with one primary method: render. Backbone does not say how exactly you are going to do that. You can create handcrafted HTML and append it to the DOM, or use some templating mechanism. In addition, it provides you with convenient methods for listening to view events (like click, focusout or any other DOM-supported event). Inside the handlers you can either update the related model or apply some changes to parts of the view, whatever you need.
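A render method is, at its heart, just “model attributes in, markup out”. Here is a tiny string-template sketch of that idea (TinyView and the {{name}} placeholder syntax are invented for illustration; real Backbone views manage DOM elements and usually delegate to Underscore templates or similar):

```javascript
// A minimal view: render() fills a {{placeholder}} template from the model.
function TinyView(template, model) {
  this.template = template;
  this.model = model;
}

TinyView.prototype.render = function () {
  var attrs = this.model;
  return this.template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return attrs[key] !== undefined ? attrs[key] : match; // keep unknown keys
  });
};
```

For example, rendering { name: 'Site 1' } with the template '<li>{{name}}</li>' produces '<li>Site 1</li>'.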

And finally, Backbone.Router is a facility for building so-called Single Page Applications (SPA). The most popular SPA example is GMail. You browse emails, write and send them, all without any page reloads. What changes is only the hash part of the URL (like #inbox, #sent, #all). The router handles hash change events and triggers the registered handler. Inside the handler, you can initialize the corresponding part of the application.
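The dispatching part of a router boils down to a lookup from hash fragment to handler. A plain-JavaScript sketch (TinyRouter is a made-up name; a real Backbone.Router also subscribes to the browser’s hashchange event and supports parameterized routes):

```javascript
// A minimal hash router: maps '#inbox'-style fragments to handlers.
function TinyRouter(routes) {
  this.routes = routes; // e.g. { inbox: showInbox, sent: showSent }
}

TinyRouter.prototype.navigate = function (hash) {
  var route = hash.replace(/^#/, '');  // strip the leading '#'
  var handler = this.routes[route];
  if (handler) {
    handler();
    return true;
  }
  return false;                        // no handler registered for this route
};
```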

Don’t repeat yourself

Now, please tell me: how many times did you create (read: copy-and-paste) the code for RESTful data persistence? How many times did you create code for validation? How many hours did you spend looking for the event listener that provoked a bug in your app while hiding somewhere among 200 .js files?

If your answer is more than 2, stop doing that. Don’t repeat yourself - reuse. And better, reuse from the best.

Conclusions

There are great solutions, created by people more experienced than me, amplified by the huge communities behind them. Backbone.js is one of the best representatives of such solutions. Having something that I can rely on, that works well and has places to get help, is a very strong reason for me to keep using it, instead of re-inventing the wheel yet another time.

New Candidate Overview

It’s been almost a month since I started to re-implement my continuous delivery application, Candidate. In my previous post I already mentioned the areas that are going to be improved and what technologies are considered to be used. I haven’t managed to make huge progress on the project so far, but I’ve got something to show.

Overall application architecture

I try to build it with a classical API-oriented architecture in mind. It means that the server side code exposes a simple HTTP API, and the rest of the stuff happens in the browser. Fortunately, I’ve got a really simple UI, but the UI still plays a really important role. So far, I’ve spent 80% of my time in Sublime for the front-end and 20% in Visual Studio for the back-end.

The server side technology stack is NancyFX + RavenDB; the client side is built with Backbone.js + Require.js.

The primary application change is that it’s no longer an IIS application, but just a standalone app.




So, you just launch an .exe file, and after the application warms up, Candidate is available at http://localhost:12543.




To simplify life a bit (and not harm the users’ eyes) I’m using the Twitter Bootstrap CSS framework and I’m happy with it.

Server side

NancyFX is a very lightweight framework, but it requires some skills. So far, I’ve met several interesting challenges that have been successfully solved. But Nancy does its job and does it pretty well.

My initial impressions of RavenDB are really positive. I’m currently doing very basic functionality (such as inserting, querying and deleting documents). But indeed, working with a document-oriented database feels much simpler and more natural for a C# developer. Forget about any SQL, schemas or ORMs - store and restore POCOs, that’s all you need to know. The RavenDB API is very clear and intuitive.

I used the embedded version of RavenDB, which suits me best, since I want to minimize the effort of application installation.

As I said above, the server side exposes an HTTP API which receives and responds with JSON-based payloads. There will be some interfaces for starting/stopping the deployment tasks.. but the rest happens on the client.

Client side

Since I met the Backbone.js framework I’ve thought to myself: “I will never ever write ‘scripts’ on the front-end again; instead I’ll be doing browser-powered applications”. Backbone.js introduces structure to the front-end. Front-end development with Backbone.js is no longer $.ajax calls and DOM updates inside the success: handlers. With Backbone’s main entities you build front-end apps in a similar way as you would desktop applications.

Being much inspired by Addy Osmani’s Writing Modular JavaScript blog post and speeches, I’m trying to apply some of these practices. In particular, I’m using RequireJS for AMD and Backbone’s views as a kind of widgets (or modules).

The front end is being built in SPA (Single Page Application) style, which is kind of new to me. Backbone.Router is a great component though, and I don’t need anything more, at least for now. I want to keep the UI as responsive and fast as possible.

As always, you are welcome to review the server side or client side code; let me know your opinion or ask questions.

IT Jam 2012 in Kiev

This was the second time I attended IT Jam, the biggest IT gathering in Ukraine. This time, the rules were completely changed. Instead of traditional speeches, the organizers decided to make community spots, where people had a chance to group and discuss different issues. There were a number of spots there: .NET, JavaScript, iOS, UX etc.




Besides the community spots, there were several stages where visionary speakers gave their presentations. I had a chance to listen to Christopher Marsh, and really enjoyed his vision of lean product development. At 13:00 the community spots started to operate, so I went there.

The .NET spot is the closest to me, since it’s led by guys I personally know, and many of the participants are well known guys from the local .NET community. Mike Chaliy asked me to give a brief talk on.. Node.js. So, together with Dmytro Mindra we did a kind of introduction to Node.js for .NET developers, covering the history of this rather new technology and its main features, and showing some real code. Planned for 15 minutes, we spent about an hour talking about it. Besides the pure technology talk, I shared some of the experience of E-conomic’s switch from .NET to the Node.js stack for the latest products and how it worked for us (mainly my message was: throw away your boring C# and switch to JS.. so I risked being kicked off the spot, but everything was fine in the end). The .NET spot was very active and attracted many participants. People were talking about the newest .NET framework features, like Async, Azure, Kinect etc.




The JavaScript spot, led by my team mate Eldar Djafarov, was also very exciting. The guys did a very nice job, preparing some cool JavaScript applications that people could play with. I joined it, having a quite interesting discussion of front end development issues with different JavaScript MVC frameworks. Again, the experience we’ve got in the company regarding Backbone.js, unit testing and tools was very interesting to developers.

Almost 4 hours at the community spots felt like one minute to me.

I really liked how things went. From my perspective the idea of community spots was really cool. It’s probably too difficult for introverts to feel comfortable in this environment, so a mix of formal speeches and spots could be a good idea for future conferences.

I wish to say thanks to the organizers of this event and to everybody involved; great effort, guys. As always, it was so great to meet colleagues from other cities and shake hands with guys whom I knew only virtually before. See you next year!

Developing Web Applications Faster

If you haven’t seen Bret Victor’s talk, please find 54 minutes and do it. This video gives many interesting ideas, inspiring people to create new things. The main idea of his talk (at least as I understood it) is that whatever you do, you have to see the result of your work as soon as possible. This short feedback cycle is important, because it leads to better ideas and good productivity.

As developers, we write code. Applying Victor’s principle, ideally we should have this code immediately running and showing the results of its execution. Some experimental IDEs like Light Table are trying to adopt this principle now.

If you develop web applications, you spend much of your time working with HTML/CSS/JS, which is executed by the browser. The typical workflow is to open your text editor, correct some markup or JavaScript code, then switch to the browser, press F5 and see the results. It doesn’t sound like a big job, but believe me, it gets annoying and boring very quickly.

LiveReload comes to help

The ideal workflow is something like: you apply changes to the JS, press Ctrl + S in your text editor, and the browser gets immediately reloaded, showing the results of the execution. That sounds really cool, and fortunately there is a great solution for that. It’s called LiveReload, from Andrey Tarantsov.

It can be installed either as a <script src="/libs/livereload.js"></script> tag in your app, or as a browser plug-in. I prefer the plug-in way, since it works with any app and requires no code modifications. There are plugins for Chrome, Firefox and Safari.

How does it work? Take a look at the diagram.




The LiveReload plug-in (or livereload.js script) expects that there is a Web Socket server it can connect to and start listening for changes. The Web Socket server is responsible for tracking changes; if a change occurs, it sends a data package with information about the change back through the web socket. LiveReload receives the data and reloads the browser.

Once you have installed the plug-in and pressed the “LR” button, it will light up green if the connection to the server is successful; otherwise it will pop up a message that the server is not available.




So, to make LiveReload work properly, we have to provide a Web Socket server that conforms to the LiveReload interface.

LiveReload + Sublime Text 2

There is a great plug-in for Sublime Text 2 from Janez Troha called LiveReload-sublimetext2. It’s easily installed through the standard ST2 package installer (the package name is “LiveReload”). Basically, it instantiates a Web Socket server inside Sublime, listens to the editor’s events and pushes data to the socket.

As soon as it’s installed and the browser is connected to it, you can modify any HTML/JS/CSS file. When a file gets changed and saved, LiveReload-sublimetext2 will immediately send the command and the browser gets refreshed.




When I tried that for the first time, I could not believe how smooth it was. I can’t even believe how I lived without it before. Nothing to add here, just nice and easy.

LiveReload + any IDE

“Any IDE” in my case means Visual Studio, where I spend some time when I’m not in Sublime. Unfortunately, there is no dedicated plugin for VS (and it seems there is none for IntelliJ IDEA or Eclipse either, so that’s a good opportunity for developers). Even so, you have a chance to get the benefits of LiveReload. All you need is Node.js or Ruby installed on your machine.

Originally, LiveReload comes with a Ruby-based server that you can install and use from gems. I’m not that comfortable with Ruby, so I went the Node.js way.

There is a Node-based LiveReload web socket server from Joshua Peek. It’s written in CoffeeScript and is actually really easy to adopt. Here is an example of how I used it for my project. With a simple .cmd, I start up livereload, specifying which folder to observe.

    @echo off
    node ./tools/livereload/server.js ./src/Candidate.Nancy.Selfhosted/Client/

It will start up the server, wait until the browser connects, and watch all files in the folder for changes.




Not everything is so perfect with that approach, though. There are 2 reasons why I did not like it in the end:

  1. It is slow. Internally node-livereload uses fs.watchFile, which is quite slow (at least on Windows). That means if you save a file, it takes 1-1.5 seconds for the browser to refresh.
  2. It does not pick up new files. You have to restart the server to make it watch newly created files.

Mixing things up

The reasons above prevented me from going on with node-livereload. So, even if I work on a .NET-based application, I do C# in Visual Studio, but HTML/CSS/JS in Sublime, getting all its benefits. So, I basically have 2 things opened side-by-side.




Conclusions

I felt a big productivity boost with LiveReload. I have Sublime opened on one monitor and Chrome + WebKit Tools on another. As soon as I change something, I see the results immediately, catching errors in the console as early as possible. It works so great when you are TDD’ing your JavaScript code, as you write tests and implementation and they are re-run with each save (like continuous testing with AutoTest, NCrunch etc.).

That works very well for me; I don’t want to press F5 anymore. Do you?

CamelCase JSON Formatting for NancyFX Application

Since I have chosen the NancyFX framework for my pet project, I’ve spent yet another week playing with it. My original expectation was that I’d be super-duper-fast with it, since it’s so comprehensive and powerful. One more time I have to admit: every technology has its learning curve; it could be smooth, but it is still a curve. Time needs to be invested to learn it.

The good thing is, it’s very interesting to fight new challenges, and it’s so cool that Nancy just “does not contain everything”, but actually allows you to customize things as you wish.

Nancy’s default JSON formatter

Nancy uses Marek Habersack’s version of JsonSerializer, which is a smart wrapper over JavaScriptSerializer. It works fine.. but the problem is, it does not provide any facilities to change the default formatting settings. That means that if you have a module with a method like this,

public class SitesModule : NancyModule
{
    public SitesModule() : base("/api/sites")
    {
        Get["/"] = parameters => Response.AsJson(new[] {new { Name = "Site 1", DeployStatus = "Deployed" }});
    }
}

you will get valid JSON, but it will serialize all C# properties as they are:

    "[{"Name":"Site 1","DeployStatus":"Deployed "}]"

I don’t know about you, but it makes me sick to work with uppercase object fields in JavaScript. Fortunately, advanced .NET serialization libraries such as JSON.NET have formatting features. So, our goal is to create a custom serializer (based on JSON.NET) and integrate it into Nancy’s pipeline.

JSON.NET based serialization

In order to do that, we need to implement a special interface (guess its name?) - ISerializer. Without further explanation, I’ll just copy and paste the code here:

public class JsonNetSerializer : ISerializer
{
    private readonly JsonSerializer _serializer;

    public JsonNetSerializer()
    {
        var settings = new JsonSerializerSettings
                        {
                            ContractResolver = new CamelCasePropertyNamesContractResolver()
                        };

        _serializer = JsonSerializer.Create(settings);
    }

    public bool CanSerialize(string contentType)
    {
        return contentType == "application/json";
    }

    public void Serialize<TModel>(string contentType, TModel model, Stream outputStream)
    {
        using (var writer = new JsonTextWriter(new StreamWriter(outputStream)))
        {
            _serializer.Serialize(writer, model);
            writer.Flush();
        }
    }
}

Changing the configuration

The NancyFX bootstrapper has a special property which has to be overridden in order to change the internal configuration. There is a very convenient helper that is useful for changing particular bits of the config.

protected override NancyInternalConfiguration InternalConfiguration
{
    get
    {
        return NancyInternalConfiguration.WithOverrides(c => c.Serializers.Insert(0, typeof(JsonNetSerializer)));
    }
}

Please note, I’m inserting it at position 0, just to make sure it’s *before* the default JsonSerializer type, since the AsJson() response formatter uses a FirstOrDefault strategy to find the corresponding serializer class. Just build and re-start your app, and from now on your JSON will look really good:

    "[{"name":"Site 1","deployStatus":"Deployed "}]"

Much better now!

ELMAH.MVC 2.0.1 Update is Out

I’ve just pushed a new version of the ELMAH.MVC NuGet package - 2.0.1. It covers some interesting parts that I can’t wait to share.

VB.NET projects support

For quite a while, I’ve been asked to provide VB.NET support. Originally, ELMAH.MVC was shipped as a single .cs file, which was not possible to use in a VB.NET project at all. VB.NET support was planned for 2.0, but unfortunately it did not happen. I’ve just tested 2.0.1 and it works great with VB.NET, so all VB.NET developers - you are welcome to use it.

Custom ELMAH route

Another demanded feature was to provide a custom path to the ELMAH controller. By default ELMAH is available under /elmah, which is nice, but sometimes you want the freedom to change it. It’s now possible; web.config has an additional configuration setting, <add key="elmah.mvc.route" value="elmah" />. By setting the elmah.mvc.route parameter, you can tweak the default one, like <add key="elmah.mvc.route" value="secure/admin/errors" /> for instance.

Besides that…

ELMAH.MVC no longer depends on WebActivator. I’ve removed the App_Start.cs code and used the PreApplicationStartMethodAttribute attribute. That makes the package tinier + actually made it possible to avoid a separate NuGet package for VB.NET projects.

Customizing Folders Layout for NancyFX Application

NancyFX applications are full of conventions. The default conventions are good enough, especially if you use ASP.NET hosting, where you probably don’t even care. If you plan to make a self-hosted application, you typically want to change some bits.

In my case, during the Candidate application re-write, I wanted to have a custom folder layout that would be clean and not confuse application users with too many details. In fact, I wanted something that in the deployed state looks like this:


folder layout

The ‘Bin’ folder contains the application itself, as well as all referenced assemblies. The ‘Client’ folder contains all the client side code - HTML/CSS/JavaScript. Initially I thought about making all of those embedded resources, but firstly that did not work on Nancy v.0.11.0, and secondly, having those files placed separately makes easy application updates (patching) possible.

Fortunately, it is easy to apply any folder layout you want. There are 2 types of resources that Nancy looks for from outside: views and static content. Both of them are resolved by default conventions, and those conventions are applied against the application root.

Changing the application root

In my case, the executables are placed in the Bin folder, but the resources are placed one level above. By default, the application root is Environment.CurrentDirectory, and I needed to change that. In order to make this happen, you have to implement an IRootPathProvider instance.

namespace Candidate.Nancy.Selfhosted.App
{
    public class PathProvider : IRootPathProvider
    {
        public string GetRootPath()
        {
            return Path.GetFullPath(Path.Combine(Environment.CurrentDirectory, @"..\"));
        }
    }
}

There is a little pitfall that I got into. Please make sure that the root folder is an absolute path (Path.GetFullPath returns an absolute path). If you don’t do that, some parts of the application could work incorrectly. I’ve sent a pull request for the StaticContentConventionBuilder class, but the more appropriate fix is actually having the root rooted as an absolute path.

After that, just expose the type of the path provider in the Nancy bootstrapper class, as an overridable property:

protected override Type RootPathProvider
{
    get
    {
        return typeof(PathProvider);
    }
}

Changing the default conventions

Now, we have to teach Nancy to look for views and static resources in the right places.

The bootstrapper contains a corresponding virtual method called ConfigureConventions. There I’ll override the default conventions, like this:

protected override void ConfigureConventions(NancyConventions nancyConventions)
{
    // static content
    nancyConventions.StaticContentsConventions.Clear();
    nancyConventions.StaticContentsConventions.Add(StaticContentConventionBuilder.AddDirectory("scripts",
                                                                                                "Client/scripts"));
    nancyConventions.StaticContentsConventions.Add(StaticContentConventionBuilder.AddDirectory("content",
                                                                                                "Client/content"));
    // view location
    nancyConventions.ViewLocationConventions.Clear();
    nancyConventions.ViewLocationConventions.Add((viewName, model, context) => string.Concat("Client/views/", viewName));
    nancyConventions.ViewLocationConventions.Add((viewName, model, context) => string.Concat("Client/views/", context.ModuleName, "/", viewName));
}

For view location conventions you just need to provide the folder (relative to the root folder) where the views are located. I used only 2 simple conventions: 1) the view is placed right under the /views folder; 2) the view is placed under the /views/moduleName folder.

For static resources, it’s a little bit trickier. You have to return the *response* object for the corresponding request. Fortunately, the StaticContentConventionBuilder class has some helper methods that make it simpler.

Re-thinking Candidate application

I released version 0.0.1 of Candidate a pretty long time ago. I was actually quite happy with how things went, until I collected some initial feedback. Even if I can see the application as useful, almost all respondents had a different opinion. Let’s briefly look at the major concerns.

IIS hosting

The original idea of Candidate was that it works as a usual ASP.NET application, hosted under IIS. That worked really badly, though. Candidate performs some operations that require extended permissions (for git, msbuild, file system operations etc.). That means that the IIS web site has to be configured to run under your personal account (or any other administrative account, with SSH keys set up, access to msbuild etc.).

All of that created a bit of overhead. First of all, Candidate has to be installed and configured on IIS. Even if it is possible to create some kind of installer to automate this job, it does not sound good. On the other hand, some people might have very strict policies on their machines, so it is not even possible to change any IIS settings.

So, it turned out to be a bad idea for this particular application.

Deployment scenarios

The primary deployment scenario implemented in Candidate is the local one. It means it’s able to build and test the site and deploy it to local IIS. I did so since I use the same scenario for the deployment of trackyt.net, which is hosted on a VPS and deployed by Jenkins running on the same server.

But many people are not using a VPS, but rather shared hosting where they are not even able to install any other software. So, they are more interested in remote deployment, not local. With the re-invention of Azure, you might consider scenarios of deploying an existing web application to it seamlessly.

Local deployment was good for a prototype, but not good at all for a product.

Technical stuff

I stuck to a framework named Bounce. It’s a very powerful product that basically allows you to write deployment scripts in .NET languages. When I originally saw it, I thought it was great, since it does everything I need, including git operations, msbuild and IIS site deployment. Even more, Bounce is some of the best .NET code I’ve ever seen, so respect to refractalize.

But.. Bounce is not that good in the long run. It requires a lot of configuration code, and it is hard to change logging options or extend it further.

For a long-running product I need a more lightweight and robust approach.

Starting it over again

Considering all the things above, I decided to re-start the project, almost from scratch.

It will be based on 3 different technologies that, in my opinion, are agile enough to allow me to build what I want.

  • NancyFX - will be used as the application host. Currently Nancy is at version 0.11, which has a lot of improvements compared to the 0.9 that I used. The main reason to choose it is that Nancy allows you to create a self-hosted application. It’s basically an .exe application that you can run, and it starts the HTTP server. I plan to have an all-in-one executable that does not require any installation and will prepare all the required infrastructure on first run. That would solve both the installation and the IIS limitation problems.
  • RavenDB - as the application document storage. In the previous version I didn’t use any kind of database at all, being happy with a simple SettingsManager that stores .NET objects as JSON files. The reason is that I didn’t want dependencies on any software (such as SQL Express). With Raven it is possible to use Embedded mode, so it does not require any service running, just the assemblies.
  • PowerShell - for deployment scripts. No more VCS, build, test and deploy code in C#. I plan to have generic PowerShell scripts that would be easy to run and maintain. With PowerShell I hope to extend the deployment scenarios to support Local, FTP, MS Deploy and Azure deployments.

Unfortunately, I’m not yet proficient with any of these technologies. That makes me very excited to learn something new. I’ve already created a branch where some basic infrastructure is prepared. My goal is to create a first prototype in 2-3 weeks.

And I want to thank everyone who provided feedback and useful technical suggestions. I hope it will turn into something good.