This is the second of what is planned to be many articles, spanning the scrapbook, design and architecture, and technical sections. While the story of how the project was conceived is quite interesting, you will also be reading about the corporate decisions, technical challenges and other considerations that went into the design of the product. There are also standalone components that we feel the community would find value in, for which we will provide the code in the technical discussions.
Previous Installments: If you are just coming across this article, you might want to read the previous installment(s) first: Preface.
Once the mental leap had been made to consider creating a new platform, it was time to consider the goals again, this time from a more technical perspective. What was the main priority? While there were many things that were important in the new system, the obvious standout was flexibility. There needed to be simple ways to modify virtually every aspect of the system. Some simple modifications would be performed by highly non-technical users.
Flexibility was defined in three major areas:
- Changes to data schema.
- Changes to standard business logic.
- Changes to screen layouts and client workflow.
In order to accomplish these goals, the system's design would start with a blank piece of paper. Nothing was sacred or assumed. The mantra was "Question Everything". A need was determined, and the optimal approach to address that need was devised. In an iterative process, that new approach was applied to the system design to that point. If additional functionality needed to be added to the base in order to support the new features, it was added. All of this structure was continuously reviewed. If the underlying structure became too convoluted, a new mechanism was needed that met all of the identified requirements without the complexity. Although this sounds like an unwieldy process, a conscious effort was made to use simple components, without a tremendous amount of built-in functionality. This resulted in a highly flexible design that weathered the challenges remarkably well.
An early decision was to utilize XML to specify configuration information. XML is extremely flexible, if somewhat verbose. It is also the standard du jour. Specifying anything in a platform-neutral fashion almost implies XML these days. Utilizing XML would also allow for the use of other tools for editing, validating and processing the data.
With XML, "diff and merge" functionality could be used to integrate changes into the base documents for the modifications, which would be known as "Layers". The implication of the Layers description was that more than one could be applied, and this was a driving force behind the design.
A simple illustration will help to convey this concept:
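Layers stack on top of a base document, applied in order, with each layer contributing its own changes (the layer names below are purely illustrative):

```
Base Document
  + Layer 1  (e.g., vertical-market modifications)
  + Layer 2  (e.g., customer-specific modifications)
  --------------------------------------------------
  = Effective document used by the application
```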
To handle all the complexities of layers and XML merging, additional metadata is also required. Consider, for example, that tab order in a UI is usually defined by the order in which the controls are deserialized. This adds a layer (no pun intended) of complexity to XML merging and the overall architecture and definition of the various documents and their layers.
Allowing changes to the data schema was fairly simple. Provide a specification for the "base structure", and then provide additional specifications for added fields and tables.
For example, let's look at a simple schema element defined in XML.
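A base table definition might look something like the following (the element and attribute names here are illustrative, not the product's actual specification):

```xml
<Table name="Customer">
  <Field name="CustomerID" type="int" key="true" />
  <Field name="Name" type="string" length="50" />
  <Field name="CreditLimit" type="decimal" default="0" />
</Table>
```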
If a "layer" wanted to extend this schema definition, it would define the additional fields (or removal/modifications to existing fields):
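A layer's schema document might then look like this, adding one field and modifying another (again, a hypothetical sketch of the markup):

```xml
<Table name="Customer">
  <Field name="LoyaltyCode" type="string" length="10" />
  <Field name="CreditLimit" action="modify" default="500" />
</Table>
```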
Then, using an XML "diff and merge" utility, the complete schema for the application, comprising all of the application layers, would be constructed. The specifications also allowed for referential integrity, default values and more. This approach was similar to the original product. The major change was to switch the specification to XML.
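The merge step itself can be sketched in a few lines. This is a simplified stand-in (in Python, with hypothetical element names) for the actual diff-and-merge utility, handling only added and removed fields:

```python
import xml.etree.ElementTree as ET

def merge_layer(base_xml: str, layer_xml: str) -> ET.Element:
    """Apply a layer's field additions/removals to a base table definition."""
    base = ET.fromstring(base_xml)
    layer = ET.fromstring(layer_xml)
    existing = {f.get("name") for f in base.findall("Field")}
    for field in layer.findall("Field"):
        if field.get("action") == "remove":
            # Remove the matching field from the base definition.
            for f in base.findall("Field"):
                if f.get("name") == field.get("name"):
                    base.remove(f)
        elif field.get("name") not in existing:
            base.append(field)  # add the layer's new field
    return base

base = '<Table name="Customer"><Field name="CustomerID" type="int"/></Table>'
layer = '<Table name="Customer"><Field name="LoyaltyCode" type="string"/></Table>'
merged = merge_layer(base, layer)
print([f.get("name") for f in merged.findall("Field")])  # ['CustomerID', 'LoyaltyCode']
```

A production merge would also have to handle modifications to existing fields and the ordering metadata discussed earlier.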
The XML hammer gets too big
This fascination with XML led to our first "hammer and nail" moment. Since there was a requirement to integrate business logic in layers, why not put that in XML as well? In order to answer this question, it was necessary to identify how an XML implementation would look, and how merging the business logic from other Layers would function. First, XML does not contain compiled code. This meant that the code in XML would either be interpreted or compiled "on the fly". While the latter is possible, it would be much more problematic than already-compiled code. Earlier experience with interpreted code pretty well eliminated that as an alternative. Merging was also a problem. CVS and similar systems aside, merging of modifications into code is not a perfect science. Finding problems would be extremely difficult.
Stepping back from the brink of XML disaster, it was time to re-examine what modification to business logic needed to be accomplished. For the purposes of this discussion, an element of business logic will be called a business rule. Business rules need to take input (parameters), perform some processing on them (along with other data accessible to the rule) and generate output. The first impulse when talking about modifications of business logic is to try to change the process inside the rule. That isn't necessary! The business rule itself is a "black box". Something goes in, and something comes out. If you want to change the business logic, you really don't want to change the process, you want to change either the input that the process uses or the output that it generates. If you have the capability to do those two things, you can essentially change anything that the business logic does.
Developers familiar with Aspect Oriented Programming will see a similarity between AOP concepts and this architecture.
With a new understanding of the goal of business logic modification, the answers become clearer. Compiled code can be used as long as it can be dynamically loaded at runtime. What is required is the ability to add new processes to the application's capabilities, and to specify when they are to be run. These new processes may be completely new business rules, or they may be modification logic for the input or output of existing rules. The only thing needed now is a mechanism to call the compiled rules, plus a way to pass parameters in and get results out.
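The dynamic-loading idea can be sketched with Python's importlib as a stand-in for whatever loading mechanism the platform actually uses; in the real system the module would be a separately compiled layer component:

```python
import importlib

def load_rule(module_name: str, rule_name: str):
    """Dynamically load a callable rule by name at runtime."""
    module = importlib.import_module(module_name)  # dynamic load
    return getattr(module, rule_name)              # callable rule

# Demonstration with a standard-library module and function.
rule = load_rule("math", "sqrt")
print(rule(9))  # 3.0
```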
While XML may not be the way to add business rules to the system, it is actually a great way to specify when those rules should be run. Four types of business rule executions were identified:
- Rules specifically launched by a command from the client application.
- Rules invoked from inside other rules.
- Rules that are automatically executed upon detection of a trigger event.
- Rules that modify the input or results of other rules.
As it turns out, the fourth case is just a specialization of the third. Business rules needed to be identifiable so that they could be executed based on a configuration setting, so an object would be used to identify and launch each business rule. To modify inputs or outputs, "BeforeProcess" and "AfterProcess" events would be able to launch other rules to make those changes. All that layers would then need to do is identify the places where the rules should "fire".
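A minimal sketch of the before/after event mechanism might look like this (the class and rule names are hypothetical, not the product's actual API), assuming rules share a single container-in/container-out signature:

```python
class RuleRunner:
    def __init__(self):
        self.before = {}  # rule name -> modifier rules run before it
        self.after = {}   # rule name -> modifier rules run after it

    def register_before(self, name, modifier):
        self.before.setdefault(name, []).append(modifier)

    def register_after(self, name, modifier):
        self.after.setdefault(name, []).append(modifier)

    def run(self, name, rule, container):
        for m in self.before.get(name, []):
            m(container)   # layers adjust the rule's input
        rule(container)
        for m in self.after.get(name, []):
            m(container)   # layers adjust the rule's output
        return container

def compute_total(c):
    c["Total"] = c["Price"] * c["Quantity"]

runner = RuleRunner()
# A layer overrides the output: apply a 10% discount after the base rule.
runner.register_after("ComputeTotal", lambda c: c.update(Total=c["Total"] * 0.9))
result = runner.run("ComputeTotal", compute_total, {"Price": 100, "Quantity": 2})
print(result["Total"])  # 180.0
```

Note that the base rule never changes; the layer alters only its output, which is the essence of the black-box approach described above.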
In order to use rules' before and after events to override business logic, it is necessary that the existing rules be as "atomic" as possible. That is to say that each rule should only do one thing. If rules perform an action and then use the result of that action internally within the same rule, there is no opportunity to override the result of the first action or the input to the second one. Again, this is a tenet of AOP architecture, and failing to implement modular, compartmentalized business rules impacts the flexibility of the system, which is its main goal, after all. While the new system cannot enforce the practice of "atomic" business rules, it is certainly recommended.
Containers and data objects
The final item to be settled with regards to business rules is how to pass information in and out.
Business rule parameters
Most of the methods used in software take a series of parameters, each of which is a specific type of data. These methods may then return a single piece of data in a return value. Unfortunately, the parameters being passed and the return value type constitute a method "signature", which makes it awkward to store pointers to these functions and invoke them in a standard way.
Business rule data requirements
The other problem introduced by the flexibility of the new platform is that the data a business rule needs is not clearly defined. That rule may call another, which has an overridden before or after process that needs data not required by any of its related rules. The only way to make sure that every rule has all of the data it needs is to pass all of the data that is available. The approach that was settled on was to use a "data container" holding all of the data that any of the rules may use, and to pass that container object as the sole parameter for any business rule. Results would be similarly returned by setting values in the container.
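With a single container-in/container-out convention, rules all share one signature, so they can be stored and invoked uniformly; a brief sketch, with invented rule names:

```python
def validate_order(container):
    container["Valid"] = container.get("Quantity", 0) > 0

def price_order(container):
    container["Total"] = container["Price"] * container["Quantity"]

# Every rule takes the container as its sole parameter, so all rules can
# live in one registry and be launched the same way.
rules = {"ValidateOrder": validate_order, "PriceOrder": price_order}

container = {"Price": 5, "Quantity": 3}
for name in ("ValidateOrder", "PriceOrder"):
    rules[name](container)  # one calling convention for all rules
print(container["Total"], container["Valid"])  # 15 True
```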
Having loads of data available for business rules is important, and there is not a great deal of overhead with it. Communicating that data to the client, on the other hand, may involve a great deal of overhead. Some mechanism needed to be designed that would minimize network traffic between the server and the client.
Because there is no business logic on the client (based on the separation that was mandated for these "tiers"), it is not necessary for the client to have all of the data in the server's data container. To limit the amount of data being transferred, it was necessary to identify which data was of interest to the client. To accomplish this, data elements were stored in the data container in an object simply called a "data object". Two flags in the data object coordinate when the data should be transferred.
The functionality of the data object on the client is almost identical to that on the server (relative to determining when an update is required), so a data container is also created on the client. In fact, in the implementation, both client and server share exactly the same base class. An "IsDirty" flag indicates that the value in this data object is different from the value in the data object on the other side. A companion "ClientAware" flag identifies which data objects on the server are needed on the client side. Every time a value changes in the data object, the "IsDirty" flag is set. Communications are always initiated from the client. When the client contacts the server, it builds a "command packet" to tell the server what to do.
The first part of this process is to check through the data container and place into the packet the new value for each data object being changed. Then the dirty flag is cleared. On the server side, once processing is complete, the server follows the same procedure, but adds only items that are both dirty and client aware to the response packet. In this way, each side is continuously synchronized with the other, but only the changed data is transmitted.
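The flags and packet construction described above can be sketched as follows (the names and types here are assumptions, not the original implementation):

```python
class DataObject:
    def __init__(self, value=None, client_aware=False):
        self._value = value
        self.client_aware = client_aware  # needed on the client side?
        self.is_dirty = False             # changed since last transfer?

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        if new != self._value:
            self._value = new
            self.is_dirty = True  # flag the change for the next transfer

def build_server_response(container):
    """Add only items that are both dirty and client aware, clearing flags."""
    packet = {}
    for name, obj in container.items():
        if obj.is_dirty and obj.client_aware:
            packet[name] = obj.value
            obj.is_dirty = False
    return packet

container = {
    "OrderTotal": DataObject(client_aware=True),
    "AuditTrail": DataObject(client_aware=False),  # server-only data
}
container["OrderTotal"].value = 42.0
container["AuditTrail"].value = "recalculated"
packet = build_server_response(container)
print(packet)  # {'OrderTotal': 42.0}
```

The server-only "AuditTrail" object is dirty but not client aware, so it never crosses the wire, which is exactly the traffic reduction the design was after.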
Business rules revisited
Containers were actually designed (and mostly working) before some of the other requirements for business rules were fleshed out. These decisions were all relatively straightforward so far, but they introduced a whole new series of issues that needed to be addressed:
- How would the business rules be identified?
- How would the data in the container be identified?
- How would the client request that a business rule be run?
- What events should trigger automated business rules?
The answers to these questions would be greatly influenced by communications with the client. The concept behind that part of the puzzle was also becoming clearer, but it would take several passes to arrive at a satisfactory solution.
Further discussion of these issues will be forthcoming in the next installment.