Posted 16 Jun 2012

From the Trenches - Improving Scalability in .NET for Paycento

Improving scalability in .NET for Paycento.


As most of you might know, I am currently in the process of improving the scalability of the Paycento backend.

As a true lean adept, the idea is to optimize where it hurts most. As the creator of Node.js pointed out perfectly, the biggest pain is in disk and network access: blocking threads hurt big time. So I started searching for easy, low-cost optimizations that would not require too much effort.

Before We Begin: Phase 0 - Measuring = Knowing; Ask Heracles

I spent a big part of last month coding a command-line helper app called "Heracles". It supports the following commands:

  • checkout: downloads the source and all solutions from SVN and puts them in folders following our convention
  • build: builds all the source and publishes it to the publish & publishweb folders
  • db: we have "db drop xxx" and "db restore xxx", which download a backup of our reference db from the web and restore it in SQL Express
  • install: a simple wrapper that invokes Chocolatey or downloads installer packages from the web, so we all have the same dev environment. "Heracles install *" is all one needs to get all the required prerequisites
  • performancetest: restores the db, fires up the API WCF service against this db, fires up memcached, and then runs all integration tests tagged with the categories "integration" and "speed" (a simple MSTest invocation, capturing relevant output). We redirect trace output to the console and append performance output to logs, so we have a log that contains every single performance test we ever ran.

This tool allows us to effectively measure our code adjustments in a few minutes, using two simple commands ("heracles build" and "heracles performancetest").

This implies we can measure if our effort is actually resulting in some real improvements...
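The measurement loop described above boils down to two commands (command names taken from the list above; the comments are my own summary):

```shell
# Typical optimization cycle with Heracles:
heracles build            # rebuild all solutions and publish them
heracles performancetest  # restore the db, start the services, run the timed tests
```

Every run appends to the performance log, so regressions between code adjustments show up immediately.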

First Things First: Caching

The easiest way to speed up network/disk access is simply to avoid it, so I started with caching the part that requires scalability.

Caching is an essential part of scalable websites, and there is no need to reinvent the wheel here. I started off with a simple static in-memory dictionary to verify my hunches, and then considered established options. After reading up on the different caching solutions, I found it hard to decide which one to pick, but luckily I found the wonderful CacheAdapter, written and maintained by Paul Glavich. It allows you to switch between different caching types by altering web/app.config. The current cache options are:

  • No cache
  • .NET 4.0 ObjectCache
  • ASP.NET Cache
  • AppFabric Cache
  • Memcached

This improved performance big time without a lot of effort.
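To give an idea of what cache-or-load code looks like with CacheAdapter, here is a minimal sketch. The `AppServices.Cache.Get<T>` call is based on my reading of the library; the exact namespaces and method names may differ slightly between versions, and `LoadUserNameFromDatabase` is a hypothetical placeholder.

```csharp
using System;
using Glav.CacheAdapter.Core.DependencyInjection;

class UserService
{
    public string GetUserName(int userId)
    {
        // Get<T> returns the cached item if present; otherwise it runs the
        // delegate, caches the result until the given expiry, and returns it.
        // Which cache backs this (memory, AppFabric, memcached) is decided
        // purely by web/app.config.
        return AppServices.Cache.Get<string>(
            "user-name-" + userId,
            DateTime.Now.AddMinutes(5),
            () => LoadUserNameFromDatabase(userId));
    }

    string LoadUserNameFromDatabase(int userId)
    {
        // Placeholder for the real (slow) data access call.
        return "user-" + userId;
    }
}
```

The nice part is that the calling code never changes when you swap the cache implementation in config.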

Up Next: Blocking IO Requests Hurt Big Time

After considering the option of converting our WCF service into an async one (way too much effort for now), I discovered that one can easily improve the performance of parallel requests by replacing simple blocking calls with their async counterparts and just waiting for them to complete...

I wrote a simple benchmark to download my homepage 20 times using async and sync approaches, and the results were unbelievable:

The Async Method was 28 Times Faster Than the Sync Method

How does it work?

It is actually quite simple; I wrote a little helper method, Wait.Async, in a static class. Here is the complete sample code with the stats included; let us take a look first:

using System;
using System.Diagnostics;
using System.Net;
using System.Threading;
using System.Threading.Tasks;

namespace AsyncTest
{
    class Program
    {
        // Sample output:
        // 10 11 13 15 17 19 3 5 7 9 12 4 2 6 8 14 16 0 18 1
        // 0 2 4 12 10 16 14 8 18 1 6 5 3 13 7 17 11 9 15 19
        // Blocking: 28270 msec; non-blocking: 1107 msec
        static void Main(string[] args)
        {
            var uri = "";
            var n = 20;

            var blockingsw = Stopwatch.StartNew();
            Parallel.For(0, n, i =>
            {
                var s = DownloadString(uri);
                Console.Write(i + " ");
            });
            blockingsw.Stop();
            Console.WriteLine();

            var nonblockingsw = Stopwatch.StartNew();
            Parallel.For(0, n, i =>
            {
                var s = DownloadStringNonBlocking(uri);
                Console.Write(i + " ");
            });
            nonblockingsw.Stop();
            Console.WriteLine();

            Console.WriteLine("Blocking: {0} msec; non-blocking: {1} msec",
                blockingsw.ElapsedMilliseconds, nonblockingsw.ElapsedMilliseconds);
        }

        static string DownloadString(string uri)
        {
            return new WebClient().DownloadString(uri);
        }

        static string DownloadStringNonBlocking(string uri)
        {
            string result = null;
            var wc = new WebClient();
            Wait.Async(done =>
            {
                wc.DownloadStringCompleted += (s, e) => { result = e.Result; done(); };
                wc.DownloadStringAsync(new Uri(uri));
            });
            return result;
        }
    }

    static class Wait
    {
        // Invokes 'what', handing it a callback that signals completion;
        // blocks the calling thread until that callback is invoked.
        public static void Async(Action<Action> what)
        {
            var re = new ManualResetEvent(false);
            what(() => re.Set());
            re.WaitOne();
        }
    }
}

So, the code is really simple; I simply wait for the ManualResetEvent to be set... As this is quite repetitive code, I wrote a little helper for it.

Can We Use It to Optimize Database Access? Even LINQ to SQL?

It is a little hackish for LINQ to SQL, and it has its limitations regarding the queries, but it is actually quite easy to do. As we are big fans of the community and like to give back, we offer you the code we use to improve LINQ to SQL scalability.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

namespace Paycento.API.Tasks
{
    public static class Wait
    {
        public static void Async(Action<Action> what)
        {
            var re = new ManualResetEvent(false);
            what(() => re.Set());
            re.WaitOne();
        }

        public static IEnumerable<T> AsAsync<T>(this IQueryable<T> what,
                      System.Data.Linq.DataContext db)
        {
            var cmd = db.GetCommand(what) as System.Data.SqlClient.SqlCommand;
            if (cmd == null) return what;
            // Use a separate connection with asynchronous processing enabled.
            var conn = new System.Data.SqlClient.SqlConnection(
                       db.Connection.ConnectionString + ";Asynchronous Processing=True;");
            conn.Open();
            cmd.Connection = conn;
            IAsyncResult res = null;
            Wait.Async(done => res = cmd.BeginExecuteReader(x => done(), null));
            var rdr = cmd.EndExecuteReader(res);
            return db.Translate<T>(rdr);
        }
    }
}

Unfortunately, non-selects are AFAIK (close to) impossible to make async this way, so for now we simply opt to update the cached values directly and process the SQL on a background thread...
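The "update the cache now, run the SQL later" idea can be sketched as follows. This is an illustrative write-behind pattern, not Paycento's actual code; the class and member names are made up.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

static class WriteBehind
{
    // Stand-in for the real cache layer.
    static readonly ConcurrentDictionary<string, object> Cache =
        new ConcurrentDictionary<string, object>();

    public static void Update(string key, object value, Action executeSql)
    {
        // 1. Make the new value visible to readers immediately.
        Cache[key] = value;

        // 2. Fire the non-select SQL on a background thread, so the
        //    request thread never blocks on the database write.
        Task.Factory.StartNew(executeSql);
    }
}
```

The trade-off is eventual consistency: readers see the cached value before the database write completes, which is acceptable for our execution paths.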

What's up next?

For now, we have the tools and options in place to start optimizing the code. We applied these principles to two execution paths that require performance, and reached our initial performance goal, so now we need to optimize the other paths as well. As my pseudo-fulltime consultancy period for Paycento is about to end next week, I can only assume it will be quite a busy week. Fortunately, I will still be a member of the Paycento team, even though it will (for now) be more on an ad-hoc basis...


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Tom Janssens
Founder Virtual Sales Lab
Belgium Belgium

Article Copyright 2012 by Tom Janssens