Previous articles in this series:
Understanding is critical
The development of the client/server system (even without the display part) was moving slowly. Although the overall goals had been described in the initial proposal, the bulk of the design had not. Instead, the outsourced development team was being handed very specific segments of the project without the overall big picture. Earlier difficulties in communicating the big picture to others had driven this approach, but it caused its own problems. Even the smallest segment of code, as it was created, would acquire design "features" that limited its usefulness in the larger whole. It was time to fill in the new guys on the entire design.
Communicating the design
One of the problems faced by any architect is communicating the design to the people who actually do the work. There are several facets of communication that make life difficult for the architect.
Depth vs. programmer experience
With software, a design can be communicated at high levels of abstraction all the way down to pseudo-code, barely a level higher than the actual implementation. While there are lots of good methodologies and guidelines in place to facilitate this, the goal is to optimize the amount of information a programmer needs to get the job done. Too little information results in failure, and too much information is overkill. An architect (and especially the CEO of a company that is driving the architecture) usually gets paid the "big bucks", so optimizing how much time is spent conveying information has a measurable influence on the cost of the project.
However, succeeding at this is very difficult because you have to balance the programmer's skill against the depth of information you think they need. This is hard enough to do with your own employees, even after years of working with them, and it gets harder still when they have become comfortable with the existing architecture, tools, and technologies, and you are challenging them to work with a completely new and radically different architecture, new tools, and new technologies. It's even harder when some of the work has been outsourced to a country on the other side of the world. And for a small company with a limited budget to develop prototypes, this balance is critical.
The tradeoff, of course, is that more experienced programmers, those who can hopefully take a higher level design and run with it, are also more expensive. So we have a triangle in which somewhere there is an optimal point of documentation requirements, programmer skill, and programmer cost.
Thinking outside the box
Programmer skill is actually an elusive quality. Clearly, there is technical skill that can be fairly easily determined. However, much more difficult to determine are skills in thinking about a problem from a different angle.
Abstraction doesn't come easily
One of the common problems in conveying an architecture is that the programmer, even a technically skilled one, implements to the specific problem domain that the architecture is intended to solve. What is often missing in the communication of an architecture is the design patterns that allow the architecture to be abstracted. An implementation that has incorporated some degree of abstraction will be more flexible when the original problem domain changes. The up-front investment in abstraction pays for itself tenfold when the implementation needs to accommodate new or different requirements. Again though, the questions are:
- How much is the architecture abstracted?
- Where is the architecture abstracted?
- How much of this abstraction is conveyed to the programmer?
These are questions that are not easily answered, and even worse, in order to answer them, the programmer requires a complete and detailed understanding of the architecture - a catch-22.
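To make the tradeoff concrete, here is a minimal sketch (in Python, purely illustrative since the project's own code is not shown; all names are hypothetical) of the difference between coding to the immediate problem and coding to an abstraction:

```python
from abc import ABC, abstractmethod

# "Coded to the moment": works only for the exact case requested,
# and breaks as soon as the data comes from somewhere else.
def customer_names_from_list(customers):
    return [c["name"] for c in customers]

# Abstracted: the report logic depends only on a row-source contract,
# so changing where the rows come from never touches it.
class RowSource(ABC):
    @abstractmethod
    def rows(self):
        """Yield mapping-like rows."""

class InMemorySource(RowSource):
    def __init__(self, data):
        self.data = data

    def rows(self):
        return iter(self.data)

def customer_names(source):
    # Same report logic for any RowSource: a database, a cache file, a test fixture.
    return [row["name"] for row in source.rows()]

print(customer_names(InMemorySource([{"name": "Ada"}, {"name": "Grace"}])))
# → ['Ada', 'Grace']
```

The open question from the list above remains: deciding whether `RowSource` is worth its indirection, and whether the programmer needs to see that layer at all, requires exactly the big-picture understanding being discussed.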
The Microsoft way can get in the way
Outsourcing companies pride themselves on the skills and training they have acquired using the Microsoft tools in the prescribed manner. Any "outside the box" architecture rubs up against "the Microsoft way", often with disastrous results. We encountered this problem numerous times as common techniques such as data sets, remoting, serialization, etc., were applied to the project without concern for performance, security, and flexibility. It is difficult to tell someone, "ok, forget all that you've learned about how to do X" and do it "this way" instead.
Choosing the right tasks for the right people
Outsourcing, and this can be generalized to any working relationship, ultimately boils down to knowing your staff, consultants, and other team members well enough that you can assign the right tasks to the right people. While documenting the design is important, one of the things we learned is that the design documentation needs only to be sufficient for the task and the person.
Easier said than done
The sad reality is that this is easier said than done, and it became evident as development progressed after the entire design had been communicated. The project immediately started to veer off-course. Without detailed specifications for each step, the outsourced team made assumptions that had to be undone the next day. Language did not seem to be a problem, and the skill level of the team was excellent. The two big problems were the time zone difference (they were just leaving as we came into the office) and the same problem that had been experienced with everyone else: they could not grasp the entire scope of the project as it was intended.
Startup projects require vision and skill
Only a month into the restarted project, it was starting to look like an impossible task. All the while, the search continued for a tool to help with the interface load. One of the many Google searches (it does matter how you ask...) returned an interesting link to an article on the Code Project. It described a product called MyXaml. The author of the article, Marc Clifton, had created a tool that did many of the same things that Microsoft's XAML did, but without some of the design decisions of the Avalon environment. Essentially, what MyXaml did was to read XML that contained .NET object definitions and instantiate those objects. Of course, that is a massive oversimplification, but it looked like it might be a tool that could be useful. A short email conversation followed, during which Marc was exposed to the overall vision.
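As a rough illustration of that idea (this is a Python sketch of the general technique, not MyXaml's actual API or class names), declarative instantiation boils down to mapping XML tags to classes and attributes to properties:

```python
import xml.etree.ElementTree as ET

# Hypothetical UI classes standing in for real .NET objects.
class Form:
    def __init__(self, title=""):
        self.title = title
        self.children = []

class Button:
    def __init__(self, text=""):
        self.text = text
        self.children = []

# Tag name -> class to instantiate.
REGISTRY = {"Form": Form, "Button": Button}

def instantiate(element):
    """Recursively turn an XML element tree into a live object tree."""
    obj = REGISTRY[element.tag](**element.attrib)
    for child in element:
        obj.children.append(instantiate(child))
    return obj

markup = '<Form title="People"><Button text="Add person"/></Form>'
form = instantiate(ET.fromstring(markup))
print(form.title, form.children[0].text)  # → People Add person
```

The real product did far more (event wiring, property conversion, references between objects), but the core principle is the same: the screen definition lives in markup, not in compiled code.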
The next day, a phone conversation was scheduled. During this conversation, Marc became more and more excited about the overall plan. It closely mirrored many techniques that he had already used in applications he had written, it fit well with the MyXaml product as he had designed it, and it "felt right". At one point, he mentioned that this was the way programs were supposed to be designed: the user interface was separate, and the business logic was detached from the actual database access. He wasn't the only one who was excited. Within that first phone call, it was apparent that Marc wasn't just the author of a tool that we could use; he was the first person I had encountered who actually, immediately understood the high level design! This was to be a turning point in the development of the application. It was the 29th of September, 2004.
Marc had some time available and he was interested in the project, so we started looking at fitting everything together. The original phase one mini-project from the outsource team was examined and a simple demo was devised to test the concepts. Within three days, a conceptual demo was in place, with a number of concept drawings. Putting things on paper in a clear format was one of the major contributions that Marc would make to the effort.
Within a month, the application was running with declaratively designed screens, auto-generated database access, and the data containers and objects. At this pace, it looked like everything would be done by the end of the year. As it turned out, the time of year was right, but the year was wrong. At the end of October, we put together a "to-do list" for the new project. The number of points to be addressed was so extensive that a face-to-face visit seemed appropriate.
The meetings during the second week of November were very eye-opening. Long days of constant design, clarification, concept testing, and "what ifs" identified the places where the underlying platform needed development. Even though Marc understood the high level architecture, there were difficulties with the implementation, again dealing with the right abstractions and the right class structure to support the architectural concepts. While a "demo" platform was running, it was far from the flexible engine that was originally envisioned. Even with its limitations, the base was correct. One of the problems was trying to validate the base functionality before the entire system was written. Numerous system tests were written with AUT (another Marc contribution), and it was exciting to see all of the "lights turn green", but the feedback that says "this actually does what we need" was missing.
While work continued on the server, the outsourced team was given the task of "enhancing" a set of components to support some of the additional requirements of the new platform. In the meantime, visual confirmation was accomplished with MyXaml screen definitions with some logic built in.
The first "wow" experience
Once the first few components were available, new screens were built (hand-coded in XML) to test the concepts. One screen added "people" to a table, and another used a combo box (data driven) that selected from those people. The first "Oh wow!" moment came when a form was opened with the combo box, then another form (these are non-modal, remember) was opened with the "people add" capability. When the new person was added, they showed up immediately in the available data for the combo box. Not only did this work on a single client with modeless dialogs, but firing up a second client on a separate machine, the exact same process also updated the other client, automatically.
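The pattern behind that moment can be sketched as a simple publish/subscribe relationship between the shared data and its views. This Python sketch (all names hypothetical; the real system propagated the change through the server to other machines) shows why the combo box updated without any task-specific code:

```python
# Shared, abstracted data store: views subscribe, writers publish.
class DataStore:
    def __init__(self):
        self.rows = []
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def add(self, row):
        self.rows.append(row)
        # Push the change to every subscribed view, wherever it lives.
        for callback in self.subscribers:
            callback(self.rows)

class ComboBox:
    """A data-driven view: it never knows who adds the data."""
    def __init__(self, store):
        self.items = list(store.rows)
        store.subscribe(self.refresh)

    def refresh(self, rows):
        self.items = list(rows)

store = DataStore()
combo_a = ComboBox(store)  # combo box on the first (non-modal) form
combo_b = ComboBox(store)  # combo box on a second client, in reality reached via the server
store.add("Alice")         # the "people add" form commits a new person
print(combo_a.items, combo_b.items)  # → ['Alice'] ['Alice']
```

Because the subscription lives in the generic data layer rather than in either form, adding a person updates every interested view automatically, which is exactly the behavior described above.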
The following diagram illustrates what was going on in the code to make this happen, and some of the details that this diagram glosses over will be discussed in future architecture articles.
All of this had been accomplished with declarative coding, abstracted and generalized data management, and most importantly, no specialized code written for the task on either the client or the server. The data transactions, persistent store, and cache transaction/update processes were all common functions, independent of the specific client form and database schema. There would be many more such moments, but this was the first simple validation that "this stuff really works"!
Flexible software – yoga for programmers
An ongoing challenge for this type of development is keeping the focus on the long-term goal. Most programmers are aware of how difficult it is to create re-usable code. The tendency is to "code to the moment", creating each new feature to meet the specific requirements that have been presented. Having a test environment that was meant to demonstrate specific functionality brought those old habits out of hiding. Even when we were using "server-side business rules", they were coded in ways that were very rigid, and sometimes intimately tied to client-side controls. It was necessary to do some of this in order to test other parts of the platform. The main goal was to keep this approach from infiltrating the core logic.
Even when designing very generic functionality such as "cache file synchronization", the initial approach was to add logic to the command packets to synchronize the data in the files. The cache files themselves were implemented as a special type of table at the client side. Later, as non-cached table updates were added, it became obvious that the cached table updates were just a special case of the same requirement. The entire system could be made much more robust by utilizing the exact same logic in both places. This would also make the unit testing of the system more straightforward.
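The unification described above can be sketched like this (class names are hypothetical, not from the actual platform): a cached table becomes an ordinary table whose update handler also refreshes the local copy, so a single dispatch path serves both cases:

```python
class Table:
    """Ordinary server-backed table: one generic update path."""
    def __init__(self, name):
        self.name = name
        self.rows = []

    def apply_update(self, row):
        self.rows.append(row)

class CachedTable(Table):
    """A cached table is just a special case of the same requirement."""
    def __init__(self, name):
        super().__init__(name)
        self.local_cache = []

    def apply_update(self, row):
        super().apply_update(row)     # exact same logic as any other table...
        self.local_cache.append(row)  # ...plus a refresh of the client-side cache file

def dispatch_update(table, row):
    # One code path for cached and non-cached tables alike; nothing in the
    # command packet needs cache-specific synchronization logic.
    table.apply_update(row)

orders = Table("orders")
countries = CachedTable("countries")
dispatch_update(orders, {"id": 1})
dispatch_update(countries, {"code": "US"})
```

Collapsing the two cases into one path is also what makes the unit testing more straightforward: the same tests that exercise ordinary table updates exercise the cache behavior for free.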
As with any software project, refactoring was going on continually. One area in which the team saw constant improvement was a reduction in the "just get it done" approach. As each new requirement was discovered, there was a conscious effort to step back and view it as a general purpose feature that might be useful in other areas. Simultaneously, we looked at the already-implemented logic to determine whether the new capabilities might be merged with existing logic to make the entire system more flexible.
The most difficult part of designing a general-purpose business platform is to make sure that the vast majority of requirements are both supported and easy to use. Within limits, it is also important to allow additional flexibility to support unique needs outside of the standard framework. The use of .NET as a back-end platform meant that the server can do almost anything.
In order to support true cross-platform and web-based clients without customization for each environment, it was necessary to establish a baseline for client functionality. Any client that supports all of that functionality is a "compliant client". Using the techniques from MyXaml, it is also possible to add functionality (new components, business logic, etc.) to the client. Any application that relied on that extended client functionality would be a "non-compliant" application. The goal behind the development of UltraTier is to make sure that the vast majority of applications can be implemented within the compliant framework.
The home stretch
With all of the difficulties, development is proceeding and the end is in sight. Even skeptics who have seen the system are impressed with what it can do. As developers, we are never satisfied; replacing fairly new software has been one of the hallmarks of my business in the past, and visions of UltraTier V2.0 are already being discussed! These articles have so far been mostly non-technical. That was intentional: to lay out the background behind the decisions that drove UltraTier’s design. Many more articles will follow, outlining the details behind some of these approaches. Marc Clifton will mostly be taking over that task.