Agent Blue was assigned the task of hiring a consultant to work with him to create a web
application that would replace the paper methods they used to track their covert UFO
sighting operation - and they had to do it as cheaply as possible. They wanted it to
be simple and to scale easily without much modification, because the work they were doing
could expand at any time.
His operation had a recently acquired web server running the .NET Framework and ASP.NET;
an application (business logic) server, also featuring .NET; and a SQL Server 2000 database box.
Due to the fringe nature of his operation and its budget constraints, they could not buy
any fancy development tools to help him out, let alone use any to maintain the application
down the road.
I received an encrypted IP telephone call from Agent Blue, asking me to assist in creating
the application. I met him at our usual information exchange location, and after an hour or
two's discussion of the application over Bistec a la Brasas and Chilean Merlot,
I agreed to help. The requirements were simple, and the constraint on hand-coding and timely
delivery was a welcome challenge.
The operation consisted of field agents filing paper reports on UFO sightings from their
various locations. Agent Blue told me that agents could also follow up on their own reports,
or on reports filed by other agents. The operation leaders wanted the application to mirror how they
currently operated, but also wanted to be able to provide location-specific reports and follow ups
to outside agencies.
Agent Blue outlined the policy about agents, reports and follow ups:
- An agent can file a report, but once it is filed, can only file follow ups.
- An agent may file follow ups to his or her own reports, as well as to reports filed
by other agents.
- Once filed, a report cannot be edited by an agent, only followed up on.
- Once filed, a follow up cannot be edited by an agent; a separate follow up must be
filed instead.
- Reports can be searched for text in the title, report body, location, or agent. The
results shall not be paged.
- All reports can be listed for a single location. The results shall not be paged.
- Reports must be sorted by date, descending, and by no other criteria.
He also relayed the policy about the web application:
- Agents access the application over a trusted TCP/IP network where security isn't that big an
issue, but the network is huge, so the number of potential users will be a factor. It must scale well.
- An agent signing on will use their code name and a password.
- An agent signed on can enter a report from a location, and review reports by
location, following up on reports if they choose to do so.
- Outside agencies (but within the same overseeing organization) can utilize the
application services to get reports and follow ups for use in their own applications.
They can only read information, not add reports or follow ups.
- It had to be easy to maintain and upgrade.
- The operation leaders fancied .NET, so a mandate came down that the implementation
was to be done in C# and ASP.NET, using no fancy development tools since they
could not budget for them.
- We had a week to design, build, test, and roll out the application.
Architecting the solution, which we decided to call NetUFO, didn't prove to be
too difficult, given the policies Agent Blue outlined. It would follow a modified Model View
Controller [MVC] structural design pattern as a way of separating the user view of the
application from its rules and its data:
- The View: ASP.NET pages with code-behind to pass requests on to the Controller.
- The Controller: C# components which would enforce the "business logic" behind the Model.
- The Model: the SQL Server database where the actual agent, location, report, and
follow up data would be stored.
Web Services would be part of the View, and would be created to
support (via requests to the Controller):
- Retrieving a list of available reports for a location
- Retrieving a report
- Retrieving follow ups for a report
Model View Controller [MVC] is a design pattern that comes from the Smalltalk world, where
Object Oriented Analysis and Design was born. The original MVC was geared towards
graphical user interfaces, and its Model could notify the View directly. The
way we used MVC is similar, but tailored to our need to separate the NetUFO
components. In true MVC, the user tells the Controller something, the Controller modifies the
Model, and results come back through the View. Our version uses the View to interact
solely with the Controller, and is one way to describe the internal boundaries of an
application. You could also call this an n-tiered application, where n
just happens to be three.
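To make the boundary concrete, here is a minimal sketch of what the View-facing surface of the Controller could look like; the interface and method names are hypothetical, for illustration only:
<pre lang=cs>
using System;
using System.Data;

// Hypothetical sketch of the View/Controller boundary -- not the actual
// NetUFO interface. The View (pages and Web Services) calls only these
// methods; the Controller alone touches the Model.
public interface IReportController
{
    // Agents may add reports and follow ups...
    Guid FileReport(Guid agentId, Guid locationId, string title, string text);
    Guid FileFollowUp(Guid agentId, Guid reportId, string text);

    // ...and everyone reads through the Controller as well.
    DataSet GetReportsForLocation(Guid locationId);
    DataSet GetFollowUpsForReport(Guid reportId);
}
</pre>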
Following this paradigm, the "layers" of the application would be discrete, allowing for
better maintainability and a way to distribute the application:
- The View: On the ASP.NET web server(s)
- The Controller: On the business logic server(s)
- The Model: On the database server(s)
If and when the UFO reporting operation expanded, they could add more web servers to
handle the requests. As the web server requests increased, they could add additional
application servers to serve groups of web servers, and as those increased, database
boxes to serve groups of application servers.
From here we would design an efficient model (database), come up with C# objects to
bring to life our controller, design the view to look good and interact with the
controller, and finally, implement the Web Services.
Agent Blue was a little worried about having to hand-code the whole thing, and as
daunting as it may seem in this day of GUI development tools, hand-coding isn't so bad
if you're used to doing it on the command line, as I am on Linux using
gcc. So, with that knowledge, I knew I could use the .NET
C# compiler, csc, to do something similar, but
that's a little digression. First things first...
The Model (Database)
I like to make sure that the data which needs to be stored is stored efficiently, and that
accessing the raw data is a clean, simple affair. For this project, that meant
coming up with the tables, and with ways to get data in and out of them. I won't bore you
with Set Theory and all the valid and glorious methods Database Administrators use to create
and design databases. Suffice it to say, each table we would end up with would have a
unique identifier and contain only the particular set of data associated
with that identifier. The relationships between the tables would be meaningful and clean.
Getting data in and out of them should pose no surprises or threat to one's sanity.
It's common to develop applications that access databases using dynamically generated
SQL statements. While this is highly flexible, the cost is that the server has to compile the
SQL each time, and if the tables changed, code maintenance would become an issue. With that
in mind, a decision was made that any updates to the data store would be via
SQL stored procedures. That way, the SQL statements would be compiled once, and execute much
faster, which means the application would scale better. It also meant that even if the table
structure changed, as long as the parameters to the stored procedures and the expected
results didn't change, we would only have to alter the internals of the procedures and no
other part of the system!
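To make the contrast concrete, here is a minimal sketch of a stored procedure call through ADO.NET; the procedure and parameter names are assumptions for illustration, not the actual NetUFO ones:
<pre lang=cs>
using System;
using System.Data;
using System.Data.SqlClient;

public class StoredProcExample
{
    public static void FileReport(SqlConnection conn, Guid reportId, string title)
    {
        // Instead of pasting together an INSERT statement on the fly,
        // invoke a stored procedure whose execution plan stays compiled:
        SqlCommand cmd = new SqlCommand("report_insert", conn); // name assumed
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@report_id", SqlDbType.UniqueIdentifier).Value = reportId;
        cmd.Parameters.Add("@title", SqlDbType.VarChar, 100).Value = title;
        cmd.ExecuteNonQuery();
    }
}
</pre>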
Agent Blue and I were careful to ensure that only pure operations on data would be part
of the stored procedures' design. They are concerned only with what's stored,
not with the business or process rules about the data; that is not the Model's responsibility -
business logic should exist where it belongs, outside the data logic. The same
went for defaults and constraints on the data.
Along with the stored procedures, we agreed it would be nice to create a couple
of SQL Views to aid in the aggregation of data.
With the future in mind, NetUFO will most likely end up using multiple databases and
passing data between them. That led us to use a SQL uniqueidentifier (GUID)
as each table's primary key. This ensured that records created on one server would still
be unique if and when replicated to another.
NetUFO consists of four tables: agent, location,
report, and followup. Their relationships are:
- One agent to many reports
- Each report would be associated with one location
- One report to many follow ups
- One agent to many follow ups
One can create a database diagram in SQL Server Enterprise Manager, importing all the
tables, to view them and their relationships.
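As a rough sketch of how one of these rows looks to the rest of the application (the column names here are assumptions, not the actual schema):
<pre lang=cs>
using System;

// Hypothetical shape of a report row -- illustrative only.
public class Report
{
    public Guid     ReportId;    // uniqueidentifier primary key
    public Guid     AgentId;     // one agent files many reports
    public Guid     LocationId;  // each report has exactly one location
    public string   Title;
    public string   ReportText;
    public DateTime Filed;       // reports are listed by this, descending
}
</pre>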
Agent Blue and I decided the contact point between the Controller objects and the Model
would be an object acting as an intermediary using ADO.NET. This would further optimize
NetUFO: data access would go through this object, instead of ADO.NET
code being rolled into each object. The object would be capable of caching
SqlParameter arrays in a
Hashtable, so we would not
always have to create parameters for the stored procedures on the fly. This would decrease the memory
and processor usage footprint, as well as speed up requests for data. A
SqlDataReader would be used for result sets, for speed.
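A minimal sketch of that intermediary follows; the class name and details are assumptions (the pattern resembles Microsoft's later Data Access Application Block), not the actual NetUFO source:
<pre lang=cs>
using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;

public sealed class DataAccess
{
    // Cached SqlParameter arrays, keyed by stored procedure name.
    private static readonly Hashtable paramCache =
        Hashtable.Synchronized(new Hashtable());

    public static void CacheParameterSet(string spName, params SqlParameter[] parms)
    {
        paramCache[spName] = parms;
    }

    // Clone the cached array so each call can set its own values.
    private static SqlParameter[] GetCachedParameterSet(string spName)
    {
        SqlParameter[] cached = (SqlParameter[])paramCache[spName];
        if (cached == null)
            return null;

        SqlParameter[] clones = new SqlParameter[cached.Length];
        for (int i = 0; i < cached.Length; i++)
            clones[i] = (SqlParameter)((ICloneable)cached[i]).Clone();
        return clones;
    }

    // Execute a stored procedure, returning a fast, forward-only reader.
    // CloseConnection ties the connection's lifetime to the reader's.
    public static SqlDataReader ExecuteReader(string connString, string spName,
                                              params object[] values)
    {
        SqlConnection conn = new SqlConnection(connString);
        conn.Open();
        SqlCommand cmd = new SqlCommand(spName, conn);
        cmd.CommandType = CommandType.StoredProcedure;

        SqlParameter[] parms = GetCachedParameterSet(spName);
        if (parms != null)
        {
            for (int i = 0; i < parms.Length; i++)
            {
                parms[i].Value = values[i];
                cmd.Parameters.Add(parms[i]);
            }
        }
        return cmd.ExecuteReader(CommandBehavior.CloseConnection);
    }
}
</pre>
The Controller components would then register each procedure's parameter set once, and simply call ExecuteReader from then on.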
The other components would map to the rules in a straightforward manner:
- Location management and lookup
- Agent management / lookup / authorization
- Report submission / retrieval
- Followup submission / retrieval
- Support classes for the above
Now that the components were designed to fit the UFO sighting report rules, we had a nice
framework around which we could build the View, which is how the human (or other) agents
would interact with NetUFO.
We had used our version of MVC in designing NetUFO, so the look and feel of the
application could be entirely changed, enhanced, mutated, or replaced without affecting
the rest of the system. We could even write a desktop or telephone application and use
the Controller from there; the end result would still be a View interacting with the
Controller, which updates the Model, thus forming the "Application."
I knew, as an architect and programmer, that we programmers often make
lousy user interfaces. This is no jibe at our technical prowess, but comes down to something that
often gets overlooked in projects: the end user, who always sees things
differently than we do. So I asked a friend (with Agent Blue's approval, of course) to
design the user interface for the web. Over the next day he gladly drew up several designs,
and we finally settled on one, from which he created a template HTML file, images, and a
style sheet. Using this template, we would construct all the pages where we would
"wire up" the Controller.
As far as the View security went, we formulated a strategy: the default page and login
page would be accessible directly without a login, and the rest of the pages would be
locked down using ASP.NET's built-in authentication and authorization support.
The design also included caching of data in the View components, so that requests
did not have to go all the way to the Model via the Controller. This was ideal
for this project because of the rules stating that once filed, reports and follow ups
couldn't be edited. It would be implemented using the ASP.NET built-in caching mechanism.
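A minimal sketch of how a View component might use it, with the cache key scheme and the five-minute window chosen purely for illustration:
<pre lang=cs>
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public class ReportCache
{
    public static DataSet GetReportsForLocation(Guid locationId)
    {
        string key = "reports:" + locationId.ToString();
        Cache cache = HttpRuntime.Cache;

        DataSet reports = (DataSet)cache[key];
        if (reports == null)
        {
            reports = LoadFromController(locationId);
            // Filed reports never change, so caching is safe; a short absolute
            // expiration just bounds how long a brand-new report stays unseen.
            cache.Insert(key, reports, null,
                         DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration);
        }
        return reports;
    }

    // Stand-in for the call into the Controller.
    private static DataSet LoadFromController(Guid locationId)
    {
        return new DataSet();
    }
}
</pre>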
Finally, the Web Services can actually be thought of as a part of the View, because they,
too, are another way of interacting with the application. We agreed that these would be
read-only, in keeping with the policy for outside agencies.
In the Beginning was the Command Line
After creating the database, stored procedures, and views, it was time to
write some real code, so I set up a project directory structure and fired up
a shell. I must say that I love the command line. It's where I began my career, on
a friend's SCO Unix box, and not long after, on my own i386 running Linux 0.99.
(I wish I still had the bible-thick stack of 5 1/4" floppies!) The .NET Framework SDK's
nmake is quite similar to the
make used by many a command-line freak.
I had to follow the same rules for setting up the internal variables, targets, and
whatnot that modern-day GUI development suites handle for you quite conveniently - at least
compiling C# in .NET is simpler than all the fun gyrations required for C++. The initial "makefile"
for the Controller ended up being quite compact, and would serve as a template for
the rest of the project.
I wanted to make sure I did things properly, and this meant having proper .NET
assembly information for the Controller. I wrapped this up in an AssemblyInfo.cs
file, but before I did, I generated a key pair for the application using the
sn.exe tool. This ensured the application assembly would have a strong,
unique name for deployment. I saved the resulting file in the root directory of my
project, along with a "top-level" Makefile to build the whole project.
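For the curious, here is roughly what that looks like; the attribute values and the key file name are assumptions for illustration:
<pre lang=cs>
// AssemblyInfo.cs (sketch) -- the key pair comes from running, say:
//   sn -k NetUFO.snk
using System.Reflection;

[assembly: AssemblyTitle("NetUFO Controller")]
[assembly: AssemblyVersion("1.0.*")]
// Pointing the compiler at the key pair gives the assembly its strong name.
[assembly: AssemblyKeyFile(@"..\NetUFO.snk")]
</pre>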
For the Controller, I broke out the stored procedure parameter names
(and lengths, where applicable) and the stored procedure names into static string constants.
Instead of allocating a string on the fly, they'd be compiled in. I'm a firm believer
in defining constants, especially for "magic" numbers; that way the meaning can be
conveyed through the constant name. This habit formed because of code reviews where
I'd try to glean vaporous information from a programmer's code, such as
<pre lang=c++>if (9 == x) /* some voodoo would happen */</pre>
It's definitely easier to change one number or string in one place, than the same in
possibly hundreds of locations.
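In the Controller this looked something along these lines (the identifiers are invented for illustration):
<pre lang=cs>
// Sketch of the constant-definition style -- names are illustrative.
public sealed class SqlConstants
{
    // Stored procedure names, compiled in rather than allocated per call.
    public const string SP_REPORT_INSERT = "report_insert";
    public const string SP_REPORT_SEARCH = "report_search";

    // Parameter names and, where applicable, lengths.
    public const string PARM_TITLE = "@title";
    public const int    LEN_TITLE  = 100;
    public const string PARM_AGENT = "@agent_id";
}
</pre>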
Without a snazzy GUI development suite, the benefit of auto-complete, and other niceties,
it was a bit tedious at first: writing code, ALT-tabbing over to run nmake,
ALT-tabbing back to fix typos, testing and repairing things, and looking in MSDN for parameters
and case-sensitive method names - but slogging through made it worthwhile, and brought an even
deeper understanding of what goes on in .NET. Before I knew it, I had completed the Controller.
With the Controller complete, I split the layout the designer created
for us into several components for use in our
aspx pages: header, navigation,
content, and footer. These I would implement as classes and, similar to the Controller,
created static string constants for a lot of the HTML, to reduce run-time overhead. We
were two days into the project, and far ahead of schedule.
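Each layout piece ended up as a small class along these lines; the class name and markup are a simplified, assumed example:
<pre lang=cs>
using System.Web.UI;

// Sketch of one layout piece as a lightweight control -- illustrative only.
public class PageHeader : Control
{
    // The shared chrome is compiled in as a constant,
    // rather than rebuilt on every request.
    private const string HEADER_HTML =
        "<div class=\"header\"><img src=\"images/logo.gif\" alt=\"NetUFO\"></div>";

    protected override void Render(HtmlTextWriter writer)
    {
        writer.Write(HEADER_HTML);
    }
}
</pre>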
With all the up-front design, implementation of the ASP.NET
pages was fairly trivial, even with the tedium of wiring up each page to its code-behind -
but that's why a good text editor helps. I have to admit that the temptation to use
Visual Studio .NET was greatest at this point.
I set up an initial
Web.config file, copying it from an MSDN sample and
modifying it for our needs, setting the
authorization elements for the
pages we would "lock down." I created a generic error page and added it to the
configuration, to be shown only to remote users (a nice feature in ASP.NET that helps
tremendously in debugging, since you can still see the actual error locally without
having to change code or configuration.)
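The relevant parts of such a Web.config look roughly like this; the page names (login.aspx, error.aspx) and the use of Forms authentication are assumptions for illustration:
<pre lang=xml>
<configuration>
  <system.web>
    <!-- Remote users get the generic page; local users see the real exception. -->
    <customErrors mode="RemoteOnly" defaultRedirect="error.aspx" />
    <!-- Send anonymous users to the login page. -->
    <authentication mode="Forms">
      <forms loginUrl="login.aspx" />
    </authentication>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
  <!-- Leave the default page open to everyone. -->
  <location path="default.aspx">
    <system.web>
      <authorization>
        <allow users="*" />
      </authorization>
    </system.web>
  </location>
</configuration>
</pre>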
As each page was created and tested, and then the whole site tested, Agent Blue's excitement
increased. It was day three, and we "deployed" the application for user testing while
working on the Web Services, which were probably the easiest portion of the application
to implement. They return straight
DataSets, so that clients can use
them right away.
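One of them, sketched below with an assumed class and method name, shows the general shape of the .asmx code-behind:
<pre lang=cs>
using System;
using System.Data;
using System.Web.Services;

// Sketch of a report-retrieval Web Service -- names are illustrative.
public class ReportService : WebService
{
    // Returning a DataSet lets .NET clients bind to it immediately.
    // The services are read-only by design: no add or edit methods exist.
    [WebMethod]
    public DataSet GetReportsForLocation(Guid locationId)
    {
        return new DataSet(); // would delegate to the Controller here
    }
}
</pre>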
On day five, after a couple of tweaks, it was made accessible to the UFO reporting
agency, with quiet aplomb.
Please note that, for brevity, the sample code does not include management pages for
Agents or Locations.
To install (Agent Blue and I didn't have time to create a nice installation program):
- Unzip the download to the folder of your choice (it will expand into its own subfolder).
- Run model\create.sql using the Query Analyzer on your
SQL Server instance.
- Add the ASPNET user as a database owner to the database (this is quite important.)
- Open up your Internet Information Services MMC console and create a
Virtual Directory named NetUFO, pointing it to the view subfolder
where you unzipped the files. You must also open up the Virtual Directory's properties and:
  - Create an application for it
  - Use Scripts Only for the Execute Permissions
  - Make sure default.aspx is enabled as the default document
That should just about do it, so you can surf over to http://localhost/NetUFO! The
sample code automatically fills in blue/blue as the user name/password on the login page.
You can alternatively log in as red/red. The Web Services are at
History
2002-11-01. Initial revision.