Recently, I had a client who was redesigning a website that had hundreds of static files and a few sections of dynamic content. They asked me for options on how to make the site maintainable without writing, purchasing, or installing a CMS (Content Management System, such as DotNetNuke). I initially recommended they create a Master Page and then move the content of the static pages into individual .aspx pages that used the Master Page. They did not have a lot of ASP.NET skills, so this was somewhat unappealing to them. They liked the Master Page concept; however, they preferred to simply edit the static HTML files and put the new layout into the files.
This seemed like a poor solution to me, because the website would still have hundreds of files to edit manually whenever it was redesigned in the future. I recommended a different solution: create an IHttpModule that checks whether the request is for a valid page. If it is, pass it through. If not, check whether a "content" file exists for the requested URL. If so, pass the request to a page that knows how to read that "content" file and place its contents into a Master Page layout.
This article describes a very simple implementation of this architecture.
Before continuing, you might want to read up on ASP.NET Master Pages and the IHttpModule interface.
Using the Code
Let's assume that our website has the following files:
and we want all of them to have the same layout. We could create a MasterPage and have the project look like this:
but this is not a model that scales well, since you will have to create a new .aspx page for every single page.
What I propose is that you create a MasterPage and extract the content from static pages into .con files (or whatever extension you like). The project layout would then look like this:
WrapperForm.aspx is a web page that uses the MasterPage and knows how to read the .con file and shove it into its Content control.
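To make the idea concrete, a .con file is just the inner markup of the old static page with the shared layout stripped away; something like this (the file name and contents here are hypothetical):

```html
<!-- test.con: what remains of the old test.html once the
     site-wide header, footer, and navigation are removed -->
<h1>Test Page</h1>
<p>This is the body content that used to live in test.html.</p>
```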
There are two other things that are needed.
First is the control that understands how to read in the .con file and write it to the response (WrapperForm.aspx can't do this itself, since it can't write out specifically in the Content control). This control is placed in the Content control on WrapperForm.aspx. The Wrapper control (which derives from WebControl) has this code:
string path = Context.Request["path"];
if (!string.IsNullOrEmpty(path))
{
    using (FileStream stream = File.OpenRead(Context.Server.MapPath(path)))
    {
        ASCIIEncoding encoding = new ASCIIEncoding();
        byte[] b = new byte[4096];
        int count;
        while ((count = stream.Read(b, 0, b.Length)) > 0)
            writer.Write(encoding.GetString(b, 0, count));
    }
}
It simply gets the .con file from the file system and writes it out to the response.
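As a sketch of how this control might be wired in, WrapperForm.aspx could look like the following; the master page name, tag prefix, namespace, and ContentPlaceHolder ID are assumptions for illustration, not taken from the download:

```aspx
<%@ Page Language="C#" MasterPageFile="~/MasterPage.master" %>
<%@ Register TagPrefix="cc" Namespace="StaticWrapper" %>

<asp:Content ID="MainContent" ContentPlaceHolderID="ContentPlaceHolder1"
    runat="server">
    <!-- The Wrapper control streams the .con file's markup into this region -->
    <cc:Wrapper ID="Wrapper1" runat="server" />
</asp:Content>
```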
Secondly, we'll need a way to get IIS to hit our WrapperForm.aspx for pages that don't really exist. We'll use an IHttpModule for that.
RequestHandler listens to every request, and if the physical file requested isn't found, it checks for a corresponding .con file. If the content file exists, then it transfers the request to the WrapperForm.aspx page. The block of code that accomplishes this feat is:
HttpApplication application = (HttpApplication)source;
HttpContext context = application.Context;
if (!File.Exists(context.Server.MapPath(context.Request.FilePath)))
{
    string path = Path.ChangeExtension(context.Request.FilePath, ".con");
    if (File.Exists(context.Server.MapPath(path)))
        context.Server.Transfer("~/WrapperForm.aspx?path=" + path, true);
}
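For completeness, the surrounding IHttpModule plumbing is roughly the following. The class shape and the choice of the BeginRequest event are a sketch of a standard module, not necessarily identical to the download:

```csharp
using System;
using System.IO;
using System.Web;

public class RequestHandler : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Inspect every incoming request before ASP.NET maps it to a handler.
        application.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private void OnBeginRequest(object source, EventArgs e)
    {
        HttpApplication application = (HttpApplication)source;
        HttpContext context = application.Context;

        if (!File.Exists(context.Server.MapPath(context.Request.FilePath)))
        {
            string path = Path.ChangeExtension(context.Request.FilePath, ".con");
            if (File.Exists(context.Server.MapPath(path)))
                context.Server.Transfer("~/WrapperForm.aspx?path=" + path, true);
        }
    }

    public void Dispose() { }
}
```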
The next issue is that, for extensions not handled by ASP.NET, IIS will not pass the request into .NET code. This means that if we want all .html files to be handled, they will never make it to our RequestHandler, because IIS serves .html files itself and throws a 404 when the file is missing. You can either make the fake URLs .aspx pages, or change the application's document mapping to route .html (or whatever) pages through ASP.NET (which might be difficult in hosted environments).
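For reference, on classic ASP.NET (the httpModules section used by the IIS 6-era pipeline), the module is registered in web.config roughly as follows; the type name assumes the RequestHandler class lives in App_Code:

```xml
<configuration>
  <system.web>
    <httpModules>
      <!-- Run RequestHandler against every request ASP.NET receives -->
      <add name="RequestHandler" type="RequestHandler" />
    </httpModules>
  </system.web>
</configuration>
```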
In the included source, you'll find the following content structure:
Using this solution, I am able to browse to:
- default.aspx and the default page is hit
- test.html (and test.aspx) and the test.con data is reproduced in the correct layout
- dir/junk.html (and junk.aspx) and the junk.con data is reproduced in the correct layout
Points of Interest
This isn't a particularly special solution. It is very basic, and it really only solves one problem: the "skinning" of static content. Even so, it adds quite a bit of value:
- The content can easily be migrated to a database (just change the Wrapper control to get it from the database rather than the file system)
- Old, existing, deep links still work
- Search engine links are maintained
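As a sketch of the first point, the file read inside the Wrapper control could be swapped for a database lookup. The table name, column names, and connection string below are invented for illustration:

```csharp
// Requires: using System.Data.SqlClient;
// Hypothetical database-backed replacement for the Wrapper control's file read.
string path = Context.Request["path"];
if (!string.IsNullOrEmpty(path))
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT Body FROM PageContent WHERE Path = @path", conn))
    {
        cmd.Parameters.AddWithValue("@path", path);
        conn.Open();
        object body = cmd.ExecuteScalar();
        if (body != null)
            writer.Write((string)body);  // writer: the HtmlTextWriter passed to Render
    }
}
```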
If I were to implement this for a client, I would give the content files a little more meat. I would make them XML, containing the title of the page (this solution defaults to the title of the file), some SEO meta content, and breadcrumb trail information, all things a CMS would provide.
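Such a richer content file might look something like this; the element names are hypothetical:

```xml
<page>
  <title>Test Page</title>
  <meta name="description" content="Example page served through WrapperForm" />
  <breadcrumbs>
    <crumb href="default.aspx">Home</crumb>
    <crumb>Test</crumb>
  </breadcrumbs>
  <body><![CDATA[
    <h1>Test Page</h1>
    <p>The page's body markup goes here.</p>
  ]]></body>
</page>
```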
No updates yet.
I'm the owner and principal consultant of a small (one man!) shop here in the Dallas, Texas area.
I mostly work with Microsoft technologies, but run a lot of Linux at home.