Recently, while preparing for a meeting with some of my colleagues regarding the approach for the architecture in a new project, I started writing down what I wanted to see in the new architecture. I decided to put it in a format that would be helpful for anyone else in the same position.
In most applications, there is some sort of user interface (UI) where the application interacts with its users. It has been my experience that developers with an eye for the user interface generally do not do as well in the deeper layers of the architecture. Taking this into consideration, I try to keep the data and methods that the UI developers will use as refined and simple as possible. This means business objects with names such as “house,” attributes such as “color,” and functions such as “Save.”
Given that the business objects are going to be used in the UI, putting calculations or any business process that must scale in them is not feasible. As a result, some sort of server layer that can run independently on any machine that can be used to manipulate business objects is needed. In projects I have done in the past, these servers handled all communication with the database as well as all major data processing.
As the demand on your system grows, these servers can be moved to their own machines or duplicated across multiple machines. For example, one could have a house server for the UI and a second house server that is only used for reporting. Both of them could talk to the same database server, or the reporting server could talk to a read-only mirror copy of the database server. Once one starts down this path, the possibilities are endless.
In order to implement everything mentioned here, a great deal of code will need to be written, and a code generator will be the key to success. I usually write my own generator, but there are some available commercially that work very well. One thing to keep in mind is to keep the generated code separate from the handwritten code. This way, if there is a new feature that needs to be added to all of your objects, you will be assured that you are not overwriting something you have handwritten into one of the objects. Microsoft provides a great option for doing this with partial classes in C#.
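The separation between generated and handwritten code can be sketched as follows. C# solves this with partial classes; the Python sketch below (used for brevity) gets the same effect with a generated base class that a handwritten subclass extends. All file and class names here are hypothetical.

```python
# --- generated_house.py (regenerated on every schema change; never edit) ---
class GeneratedHouse:
    """Generated from the 'house' table; do not hand-edit."""

    def __init__(self, color=None):
        self.color = color

    def save(self):
        # Generated persistence stub; a real generator would emit SQL here.
        return f"saving house with color={self.color}"


# --- house.py (handwritten; survives every regeneration) ---
class House(GeneratedHouse):
    def repaint(self, color):
        # Handwritten business logic lives here, safely outside the
        # generated file, so regenerating GeneratedHouse never clobbers it.
        self.color = color
        return self.save()


print(House("white").repaint("blue"))
```

Because the generator only ever rewrites the base class, new columns or features flow into every object without touching the handwritten layer.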
In general, objects should look like or have attributes similar to the nouns they represent. For example, a customer will have a name, address, phone number, height, weight, eye color, etc. These attributes are typically keyed into a database table, and then a code generator is used to build objects that have the attributes of the table, as well as some additional attributes or functions, such as “Save” and “Delete.” In addition, the code generator will usually generate an object that will contain multiples of the individual object. Going back to our customer example, if the code generator is run for the customer table, it will likely create a class called “Customer” as well as a class called “Customers” that may have attributes such as “Sort,” “Search,” or “Bulk Insert.”
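The generated Customer/Customers pair might take roughly the following shape. This is a hypothetical sketch in Python of what a generator could emit, not the output of any particular tool; the column names come from the example above.

```python
# Hypothetical generator output for a "customer" table: an object
# mirroring the columns, plus a collection class with Sort/Search helpers.
class Customer:
    def __init__(self, name, eye_color):
        self.name = name
        self.eye_color = eye_color

    def save(self):
        ...  # a generated INSERT/UPDATE would go here


class Customers:
    """Collection counterpart the generator emits alongside Customer."""

    def __init__(self, items=None):
        self.items = list(items or [])

    def sort(self, key):
        self.items.sort(key=key)

    def search(self, predicate):
        return [c for c in self.items if predicate(c)]


people = Customers([Customer("Ann", "brown"), Customer("Bob", "blue")])
print([c.name for c in people.search(lambda c: c.eye_color == "blue")])
```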
Objects generally reflect both the noun they represent and the screen or report in which they are used. That said, don’t shoehorn an existing object into a screen or report where it doesn’t fit; rather, develop a new object that fits the screen well. It is acceptable to use an object that does not fit perfectly, but if only a few columns are being pulled from multiple tables for search results, create a new object to handle the search results. That way, the search results will reflect exactly what you need, which will cut down on serialization times, network bandwidth usage, and database calls.
Serialization is the process of taking the values stored in an object in memory and turning them into a long sequence of ones and zeros that can be transmitted over TCP/IP or written to disk. In Microsoft’s C#, these sequences of bytes are exposed as binary streams.
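A minimal round trip makes this concrete. The article's context is C# binary streams; the sketch below uses Python's closest stdlib analogue, `pickle`, purely for illustration.

```python
import pickle

# An in-memory object becomes a byte stream that can be written to disk
# or sent over TCP/IP, then restored on the other side.
house = {"color": "blue", "rooms": 4}

stream = pickle.dumps(house)       # object -> bytes (the "binary stream")
assert isinstance(stream, bytes)

restored = pickle.loads(stream)    # bytes -> object
assert restored == house
```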
Serialization is generally looked upon as magic that the black-box architecture you are using at the time handles for you. It is also something that does not deserve much attention in small to medium-sized applications; only when scalability is a concern should the process be analyzed on a per-table, per-screen, per-job, or per-report basis.
Multi-threading can be defined in a variety of ways, but it can most easily be explained as creating multiple standalone pieces of code or work that can be passed to the various processors or cores of a machine.
Years ago, multi-threading was not even a consideration unless one needed to perform an operation in the background while the main program continued running. The adoption of multi-core processors as the standard helped bring multi-threading to the forefront of software development. With multi-threading, a few changes to one’s code could make it run two, four, or even six times faster than before. Personally, I found that as I took on bigger and bigger projects and performance became more and more of an issue, multi-threading became the difference between success and failure.
On the other hand, multi-threading can do more harm than good if it is not done properly. The number of threads created should be directly related to the number of processors at one’s disposal. Generally, if there is one processor or core in use and four threads of work have been started for that processor, the overhead of switching threads will make the code run slower than if the work from all the threads had been combined into one. However, there is one exception to this generalization: if a thread will have idle time during which the processor could do work on a second thread, it may be beneficial to keep the threads separate. It is also important to consider the processing power of any other machines on which the threads may depend. For example, if one is creating 12 threads to run on six processors, but those 12 threads depend on calls to a four-processor machine, the system is not likely optimized for ideal performance.
Additionally, it is important to consider that the machines currently in use could be replaced at a future date with new ones that have more processors. To combat the issue of processor changes, the code should be able to optimize itself for the number of processors in the machine; or, at a minimum, there should be an easy and central way to tell the code how many threads to create. Performance testing and tuning will go a long way to ensure success in this area.
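The self-sizing pool with a central override might look like this. A Python sketch under stated assumptions: `THREAD_OVERRIDE` is a hypothetical configuration hook, and the pool size defaults to the machine's core count as suggested above.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Central knob: leave as None to size the pool to the machine, or set a
# number if performance tuning reveals a better value.
THREAD_OVERRIDE = None


def worker_count():
    # Self-optimize for the current machine; survive a hardware upgrade
    # without a code change.
    return THREAD_OVERRIDE or os.cpu_count() or 1


def process_all(items, work):
    with ThreadPoolExecutor(max_workers=worker_count()) as pool:
        return list(pool.map(work, items))


print(process_all([1, 2, 3, 4], lambda n: n * n))
```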
By definition, if creating a scalable architecture, it is necessary to have some means of communication between the machines as one scales out. In past projects, I personally used Microsoft’s .NET Remoting technology to communicate between my machines. Given my belief that whatever Microsoft does, I can do better (which proved to be the case with serialization), I would use socket communication for my next project. Using socket communication would require one to write his/her own request/response protocol, but it is generic, and (in theory) anyone using any development platform could send a request object and receive a response object containing the results of that request.
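A hand-rolled request/response protocol of that kind can be very small. The sketch below is a Python illustration, not the article's actual implementation: each message is a 4-byte length prefix followed by a serialized payload, and `socket.socketpair()` stands in for a real client/server connection to keep the example self-contained.

```python
import pickle
import socket
import struct
import threading

def recv_exact(sock, n):
    """Read exactly n bytes, or fail if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def send_msg(sock, obj):
    payload = pickle.dumps(obj)
    # Network-byte-order length prefix, then the payload itself.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock):
    size = struct.unpack("!I", recv_exact(sock, 4))[0]
    return pickle.loads(recv_exact(sock, size))

client, server = socket.socketpair()

def serve():
    # A toy "house server": answer one request with a response object.
    request = recv_msg(server)
    send_msg(server, {"color": "blue", "id": request["id"]})

t = threading.Thread(target=serve)
t.start()
send_msg(client, {"op": "get_house", "id": 7})
response = recv_msg(client)
t.join()
print(response)
```

Because the framing is just a length prefix plus bytes, any platform that can open a socket could implement the same protocol, which is the portability argument made above.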
There are various methods for making serialization perform with large amounts of data. These methods all involve multi-threading. Serialization will be nearly the same in all cases, but other considerations will determine the appropriate approach.
In most cases, an array of multiple objects will be taken and divided into smaller arrays based on an optimal number of threads. Once the objects are divided into multiple arrays, multiple threads can be submitted to do the work of serialization. Now that the data is serialized and there are multiple arrays (binary streams) of ones and zeros, those ‘other considerations’ come into play. Depending on one’s objective, those multiple arrays can be combined into one and/or divided into smaller arrays that are optimized for the next step in processing. When writing a database-driven application, this would be an optimal place to pass the serialized data to another server that can prepare insert or update statements for the database. At this point in the process, one might encounter one of those aforementioned ‘dependents’ where the optimal number of database connections for the amount and type of work being performed on the database server should be taken under consideration, and adjustments to the number of serialized arrays should be made accordingly.
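The divide-then-serialize step described above can be sketched as follows. This is a minimal Python illustration; the thread count of 4 is an arbitrary example of the "optimal number of threads," and `pickle` stands in for whatever serializer the architecture uses.

```python
import pickle
from concurrent.futures import ThreadPoolExecutor

def chunk(items, parts):
    """Split one large list into per-thread slices."""
    size = -(-len(items) // parts)  # ceiling division
    return [items[i:i + size] for i in range(0, len(items), size)]

def serialize_in_parallel(objects, threads=4):
    # Each thread serializes its own slice, yielding one binary stream
    # per slice; these can then be combined or re-split for the next
    # stage (e.g., handed to a database server).
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(pickle.dumps, chunk(objects, threads)))

houses = [{"id": i, "color": "blue"} for i in range(1000)]
streams = serialize_in_parallel(houses)

# Each stream deserializes back to its slice of the original list.
restored = [obj for s in streams for obj in pickle.loads(s)]
assert restored == houses
```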
Normalizing the data or database is the process of separating one’s data into fields and tables so the duplication of data is minimized. The way one normalizes his/her data can be the difference between the success or failure of an application. Most people tend to normalize their data to extremes, which hurts performance as much as not normalizing enough.
Take into consideration that database tables do not necessarily need to reflect one’s business objects. For example, suppose a programmer is creating an application that tracks the temperature of multiple devices in 15-minute intervals throughout the day. Common knowledge dictates that one would create a table with a key, device key, date/time, and temperature. If a 24-hour day is divided into 15-minute intervals, the programmer will end up writing 96 rows of data, with 96 duplicates of the device key and date, per device, per day. That is a lot of redundant data to filter through, not to mention the work the database server is subjected to as it indexes all of those rows.

To remedy this, one should consider writing one row per device, per day: a key, a device key, a base temperature, and 95 byte fields (or a 95-byte array) that record each interval’s variation from the base. If the temperature is not likely to change more than 8 degrees in 15 minutes, the byte fields or array could even be cut in half, using the first half of each byte for one 15-minute interval and the second half for the next. In this way, the load on the database server will be drastically reduced, but the data now looks nothing like what must be presented to the users, and getting it into the correct format would put undue stress on the presentation layer.

This is where the scalable architecture will pay off. Create the temperature objects as they will be seen on the report or screen; for example, one row for every 15 minutes. The presentation layer makes a call to the temperature object serialization server using the preferred communication method; that server requests the row(s) of data from the database and uses the data to create multiple temperature objects.
If requesting multiple days or weeks of data, the architecture should automatically create multiple threads and pass the groups of days to those threads; in this way, hundreds of objects can be created and serialized simultaneously for their return to the presentation layer.
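The base-plus-deltas row layout described above can be sketched as an encode/decode pair. A Python illustration under stated assumptions: one signed byte per interval, so each reading must stay within 127 degrees of the base (the half-byte packing variant is omitted for clarity).

```python
def encode_day(readings):
    """96 quarter-hour readings -> (base, 95 signed-byte deltas)."""
    base = readings[0]
    # Store each remaining reading as its difference from the base,
    # wrapped into an unsigned byte for storage.
    deltas = bytes((r - base) & 0xFF for r in readings[1:])
    return base, deltas

def decode_day(base, deltas):
    """Rebuild the 96 presentation-layer readings from one stored row."""
    def signed(b):
        # Reinterpret the stored byte as a signed value (-128..127).
        return b - 256 if b > 127 else b
    return [base] + [base + signed(b) for b in deltas]

readings = [70, 71, 71, 72] + [72] * 92   # 96 samples for one device-day
base, deltas = encode_day(readings)
assert len(deltas) == 95                  # one row instead of 96
assert decode_day(base, deltas) == readings
```

The decode step is exactly the work the temperature serialization server would do before handing presentation-ready objects back to the UI.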
Developers tend to over-utilize whatever database they have at their disposal. A database is simply where data is stored, and it should not be used for anything other than storing data as quickly and efficiently as possible. Applications should not be written in database procedures; that is the opposite of scalable. Scalability means the data-processing code is kept in a framework where it can be run from a server (or multiple servers); it should not be put in a stored procedure of a database unless that database can scale it out to multiple servers.
A common mistake made by developers is saving an entire object even if only a single attribute has changed. A better option to consider is serializing an array of bits along with the data, telling the application which fields are and are not included in the serialized object. If only the data that has changed is serialized, and only the fields in the database that have changed are updated, the load on the hardware will be drastically reduced.
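The bit-array idea can be sketched as a dirty-field bitmask. This is a hypothetical Python illustration: the fixed field order in `FIELDS` stands in for a generated schema, and `pickle` stands in for the wire format.

```python
import pickle

# Hypothetical fixed schema; in practice the code generator would emit
# this field order alongside the object.
FIELDS = ("name", "address", "phone")

def serialize_changes(obj, dirty):
    """Pack a bitmask of included fields plus only the changed values."""
    mask = 0
    values = []
    for i, field in enumerate(FIELDS):
        if field in dirty:
            mask |= 1 << i
            values.append(obj[field])
    return pickle.dumps((mask, values))

def apply_changes(obj, stream):
    """Update only the fields the bitmask says are present."""
    mask, values = pickle.loads(stream)
    values = iter(values)
    for i, field in enumerate(FIELDS):
        if mask & (1 << i):
            obj[field] = next(values)
    return obj

customer = {"name": "Ann", "address": "1 Main St", "phone": "555-0100"}
edited = {"name": "Ann", "address": "9 Elm Rd", "phone": "555-0100"}
patch = serialize_changes(edited, dirty={"address"})
print(apply_changes(customer, patch))
```

Only the changed address travels over the wire; the receiving side (and, by extension, the database UPDATE) touches only that one field.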
Though not all of this information may be new, this document should be able to help developers of all levels of experience. The platform-specific information was kept to a minimum, but I also drew on my own personal experiences, most of which have been in Microsoft’s C#, to help illustrate the main points.