The benefits of providing solid abstractions

An article on how to identify solid abstractions and their increased importance as your code is accessed from more than one place.

Introduction

It is increasingly common to provide external access to an application's API via an exposed services layer - typically a web service. Some recent examples are Microsoft's MapPoint, Amazon's retail services for providing access to a virtual shop-front, and Google's mapping and search APIs. Providing these services makes a lot of sense in the long run, as it means that certain application services can be shared without having to be re-written in every application. However, as your code is accessed from more points, it becomes increasingly important from a maintainability and security standpoint that you have factored your code correctly and that your abstractions are sound. In this article, I want to discuss some of the thinking I did in this area when designing a recent web application of mine.

ProjectDistributor backgrounder

I designed ProjectDistributor in mid-2004 whilst working as a consultant. At that time, I found myself thrown - usually at short notice - into many different locations, always with a requirement to produce fast results. During this period, I relied heavily upon tools and code snippets within my "bag-of-tricks" to deliver working solutions quickly and with low error rates. These ranged from my code-generator to a tool I wrote for testing regular expressions, build scripts, and silly little macros used for creating sample data when deploying applications out of the development environment. I would even keep little prototypes demonstrating tasks such as multi-threaded programming, or code snippets for serializing and de-serializing object graphs. Basically, I wanted to keep hold of anything that was "expensive" to create and therefore "expensive" to re-create.

ProjectDistributor was born when I finally grew tired of leaving my memory-stick at home - and therefore not having access to my tools - and decided that my tools needed to be available on the web somewhere. When I started designing a site to host the tools, I decided to make the "repository" generic enough that others could host their tools there too. The design went something like this: users could create "Groups" to host their "Projects", and they could then upload multiple "Releases" of each project. Furthermore, for security purposes, users could choose whether or not their items would be publicly visible at the Group, Project or Release level.

Lastly, I had a design goal of exposing every API "function" via the site's web service. This would allow people to consume and manage their projects however they liked. For example, people could write a smart client to work with the site, and they could also upload new Projects or Releases directly from things such as automated build scripts.

Data Layer Architecture

My data layer was a pretty simple, CRUD-like set of classes representing each of my main application entities - User, Group, Project, Release, etc. For each new data-related feature that I needed, I would hang a new method off the related data class. The data access methods were implemented as public static methods, and the name of the containing class was the plural of the entity that it represented. For example, if I needed a method to search for projects, I would add a method to the Projects class named Projects.ListBySearch(arg0…argN), and if I needed a method for creating a new project release, I would add a method to the Releases class named Releases.Save( Release release ).
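
To make the shape of that concrete, here is a minimal sketch of what one of those data classes might have looked like - the bodies and helpers here are assumptions for illustration, not the actual ProjectDistributor code:

C#
// Hypothetical sketch of a data class following the conventions
// described above (plural class name, public static methods).
public class Projects {

    // Read operation: list the projects matching the search arguments
    public static ProjectCollection ListBySearch( string searchTerm ) {
        ProjectCollection projects = new ProjectCollection() ;
        // ... query the database and Fill() the collection here
        return projects ;
    }

    // Write operation: insert or update the given project
    public static void Save( Project project ) {
        // ... persist the entity here
    }
}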

Factoring in value added features

One of my first design tasks was to find a way of organizing the code so that things such as caching and row-level security could slot in seamlessly. I wanted to ensure that these things were factored in at a level below any calling code - such as web service code or page-level code. To decide what patterns I would build into my code, I looked at my data operations in two groups: read operations and write operations.

For read operations, I decided that the best place for these services was the point where the data is loaded in my data layer: I could simply wrap the actual load operation within any value-added service such as caching and security. Semantically, data retrieval looked like this:

C#
public static ProjectCollection ListByGroup( int groupId ) {
    
    // Key the cache by group *and* current user, so each user only
    // ever pulls entries that passed their own permission check
    string cacheKey = string.Format( "ProjectListByGroup_{0}_{1}", 
               groupId, Security.CurrentUserId ) ;
    
    if( !CacheHelper.Exists( cacheKey ) ) {
        
        ProjectCollection projects = new ProjectCollection() ;
        
        // make db call (elided) - assume it yields a data reader
        while( reader.Read() ) {
            Project project = Fill( reader ) ;
            
            // row-level security: only viewable rows are cached
            if( PermissionAPI.UserCanView( project ) ) {
                projects.Add( project ) ;
            }
        }
        
        CacheHelper.Encache( cacheKey, projects ) ;
    }
    
    return CacheHelper.Retrieve( cacheKey ) as ProjectCollection ;
}

Having a cache key which includes the ID of the current user means that I can confidently pull from the cache, knowing that I've only stored items that the user has access to against that key. And right at the heart of that method you can see that, before any data is added to the cached list, it is subjected to a permission assertion - this is the row-level security check that I talked about previously.
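
The CacheHelper class itself isn't shown in the article; a minimal sketch of how such a wrapper over the ASP.NET cache might look - the implementation details here are my assumption - is:

C#
// Hedged sketch of a CacheHelper wrapper over the ASP.NET cache.
public class CacheHelper {

    private static System.Web.Caching.Cache Cache {
        get { return System.Web.HttpRuntime.Cache ; }
    }

    public static bool Exists( string key ) {
        return Cache[ key ] != null ;
    }

    public static void Encache( string key, object item ) {
        // the 20 minute sliding expiration is an assumed policy
        Cache.Insert( key, item, null,
            System.Web.Caching.Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes( 20 ) ) ;
    }

    public static object Retrieve( string key ) {
        return Cache[ key ] ;
    }

    // Invalidate( id ), used by the write operations below, would
    // evict any entries whose keys reference the given entity ID.
}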

For write operations, I again wrapped the actual data operations within the caching and a row-level security assertion. This ensured up front that the logged-in user had "Read and Write" permissions for the data object they were operating upon before any work was done on the data, which looks like this:

C#
public static void Delete( Project project ) {

    // Assert "edit" permission up front; throws if the user lacks it
    PermissionAPI.EnsureUserCanEdit( project ) ;

    //… Code for deleting the data object goes here
    
    // Evict any cached entries for this project so stale data isn't served
    CacheHelper.Invalidate( project.Id ) ;
}

Minimize the attack surface

Now that the caching and security-related operations have been encapsulated within the interface of the data layer, I can call those methods from anywhere within the application and have confidence that only secure data will be returned based on the user in the current context. Having the code at this level also ensures that I don't have the same logic duplicated in multiple places; repeated code for tasks such as this is how holes are created, because it increases the attack surface. For example, you might find a logic error and fix it in one place but forget to fix it in another.

So, the code in my pages and within my web methods simply calls straight through to the data layer methods, i.e.:

Web Method

C#
[WebMethod]
public void DeleteGroup( Group group ) {
    Groups.Delete( group ) ;
}

Page Level Method

C#
this.rptProjectList.DataSource = 
            Projects.ListByGroup( groupId ) ;
this.rptProjectList.DataBind() ;

Authentication and Authorization

I've highlighted the main points relating to how the security of the data is put in place - that is, security is applied closest to where the data is accessed. The other two major aspects to the security model are authentication and authorization.

Authentication occurs when a user offers their credentials for checking by the application and, in the case of ProjectDistributor, this can occur at two places: via the UI of the web application, and via a call directly to the public Authenticate method of the web service. Once authenticated, ProjectDistributor makes use of a custom principal so that user data required for authorization operations - such as a list of the user's groups and application roles - can be easily accessed from a single place.
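
The principal class isn't listed in the article, but a minimal sketch of the shape implied by the code later in the article might be as follows - TestPrincipal and GroupIds appear in that code; the other members are assumptions:

C#
// Hedged sketch of the custom principal described above.
public class TestPrincipal : System.Security.Principal.IPrincipal {

    private System.Security.Principal.IIdentity identity ;
    private int userId ;
    private string[] roles = new string[0] ;  // e.g. "SiteAdministrator"

    public TestPrincipal( System.Security.Principal.IIdentity identity,
                          int userId ) {
        this.identity = identity ;
        this.userId = userId ;
    }

    // Group IDs harvested from the encrypted cookie
    public int[] GroupIds ;

    public System.Security.Principal.IIdentity Identity {
        get { return identity ; }
    }

    // Application role check, e.g. IsInRole( "SiteAdministrator" )
    public bool IsInRole( string role ) {
        return System.Array.IndexOf( roles, role ) >= 0 ;
    }
}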

When a user is authenticated, the application retrieves all of the associated data for that user, encrypts it, packs it into an HttpCookie and attaches it to the outgoing HttpResponse. As mentioned earlier, that response will be going either to a user of the web application or to a consumer of the web service. In the case where the user is coming from the web application, the browser takes responsibility for persisting the cookie and re-sending it on all future requests. In the event that the user is a web service consumer, they will need to provide their own logic for managing the cookie storage and re-sending. In .NET applications, this is a trivial task achieved by simply attaching a System.Net.CookieContainer to the web service proxy prior to authenticating the user.
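
As a sketch of what that packing step might look like using the standard Forms Authentication APIs - the ticket contents and lifetime here are assumptions based on the description above:

C#
// Hedged sketch: pack the user's group IDs into an encrypted
// forms authentication ticket and attach it to the response.
// 'userId' is assumed to be in scope at this point.
string groupIdCsv = "3,17,42" ;     // comma-delimited group IDs

FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
    1,                              // version
    userId.ToString(),              // identity name - the user's ID
    DateTime.Now,                   // issued
    DateTime.Now.AddMinutes( 30 ),  // expires (assumed lifetime)
    false,                          // not persistent
    groupIdCsv ) ;                  // UserData - the group list

string encrypted = FormsAuthentication.Encrypt( ticket ) ;
HttpContext.Current.Response.Cookies.Add(
    new HttpCookie( "Test_UserData", encrypted ) ) ;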

Attaching a CookieContainer to a web service proxy

C#
ProjectDistributorWS service = new ProjectDistributorWS();
service.CookieContainer = new System.Net.CookieContainer();

Once you've authenticated, the credentials are secured by encryption, attached to the CookieContainer, and then passed back and forth across subsequent web method calls. This is the beauty of the CookieContainer: it is all managed transparently.
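
For example, a consumer might authenticate once and then make further calls through the same proxy instance. A hedged usage sketch - the Authenticate signature here is an assumption based on the description above:

C#
ProjectDistributorWS service = new ProjectDistributorWS();
service.CookieContainer = new System.Net.CookieContainer();

// The encrypted ticket comes back as a cookie and is captured
// by the CookieContainer...
service.Authenticate( "username", "password" );

// ...and is re-sent transparently on each subsequent call
service.DeleteGroup( someGroup );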

At runtime, the ProjectDistributor application checks for the existence of the cookie right up front, in the Application_AuthenticateRequest event handler. Basically, if the cookie exists, it is decrypted and the details are stripped out and used to construct the custom principal. The custom principal contains a list of the groups that the user is a member of, and also a list of any application roles that the user may have - such as "SiteAdministrator".

Example of attaching a custom principal to the current thread

C#
protected void Application_AuthenticateRequest(Object sender, EventArgs e)
{
    HttpContext ctx = HttpContext.Current;
    
    if (ctx.Request.IsAuthenticated) {
        int[] groupIds = null;
        // The identity name carries the user's ID
        int userId = int.Parse( ctx.User.Identity.Name ) ;
        string cookieName = "Test_UserData";
        
        FormsAuthenticationTicket ticket = null;
        // Get the encrypted ticket holding the user's groups
        if ((ctx.Request.Cookies[cookieName] != null) && 
           (ctx.Request.Cookies[cookieName].Value.Length > 0)) {
             ticket = FormsAuthentication.Decrypt(
                 ctx.Request.Cookies[cookieName].Value);
        }
        
        if ( ticket != null && !ticket.Expired ) {
            // UserData holds a comma-delimited list of group IDs
            string userData = ticket.UserData.Trim(new char[] {','});
            if ( userData.Length > 0 ) {    // guard against an empty list
                string[] groups = userData.Split(',');
                groupIds = new int[groups.Length];
                
                for(int i = 0; i < groups.Length; i++) {
                    groupIds[i] = int.Parse(groups[i]);
                }
            }
        } 
        
        // Swap in the custom principal so that group membership
        // travels with the request
        Code.TestPrincipal principal = 
          new Code.TestPrincipal(ctx.User.Identity, userId);
        principal.GroupIds = groupIds;
        ctx.User = principal;
    }
}

Using the principal for permission assertions

Earlier I showed some examples of how the security code appears within the data layer methods.

C#
PermissionAPI.EnsureUserCanEdit( project ) ;

//and...

if( PermissionAPI.UserCanView( project )  ) {
    projects.Add( project ) ;
}

PermissionAPI is a business logic class that is responsible for determining, at runtime, the credentials required to perform an operation and matching them against the credentials of the current principal. Methods that begin with the term "Ensure" - such as EnsureUserCanEdit - throw an application exception if the minimum credential requirement is not met.
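
As a sketch, an "Ensure" method is little more than a guard around its boolean counterpart - the UserCanEdit check and the exception message here are assumptions:

C#
// Hedged sketch of an "Ensure" style assertion
public static void EnsureUserCanEdit( Project project ) {
    if( !UserCanEdit( project ) ) {
        throw new ApplicationException(
            "The current user does not have edit permissions for this project." ) ;
    }
}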

As an example of how this logic plays out, let's look at the high-level semantics for determining whether or not a user can view a project - UserCanView( project ):

if the project is publicly visible
         and the project's group is publicly visible
    return true
   
if the user is not authenticated
    return false
   
if the custom principal is an administrator
    return true
   
if the custom principal has a group
       which is the same as the project's group
    return true
   
return false

As you can see, there is first a check to see whether any permissions are required at all, before checking the application roles of the principal - to determine whether or not they are an administrator - and then checking the list of groups attached to the principal to determine whether the user is in the correct group to be viewing the project. All of these lookups are performed against data in the custom principal, which was retrieved from the cookie once, at the beginning of the request. The fact that this data is held on the principal means that the price of re-accessing it is minimal.
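
Translated into C#, the pseudocode above might look roughly like this - a sketch only, in which the Project members (IsPublic, Group, GroupId) are assumed names:

C#
// Hedged sketch of UserCanView, following the pseudocode above
public static bool UserCanView( Project project ) {

    // Anonymous access is allowed when both levels are public
    if( project.IsPublic && project.Group.IsPublic ) {
        return true ;
    }

    TestPrincipal principal = 
        HttpContext.Current.User as TestPrincipal ;
    if( principal == null ) {
        return false ;    // not authenticated
    }

    // Site administrators can view everything
    if( principal.IsInRole( "SiteAdministrator" ) ) {
        return true ;
    }

    // Members of the project's group can view it
    if( principal.GroupIds != null &&
        System.Array.IndexOf( principal.GroupIds, project.GroupId ) >= 0 ) {
        return true ;
    }

    return false ;
}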

Conclusion

In this article, I discussed some of the key design considerations behind the ProjectDistributor architecture. The idea was to show how proper planning and code organization can help you minimize the attack surface and reduce the risk of introducing security logic errors in the future. I isolated code in my data layer to show how caching and row-level data security can play nicely together, and then showed how permission logic was isolated away and run off a single, custom principal object. Correctly factored code greatly reduces the risk that future code iterations introduce silly logic errors which open up security holes in your application.



