Posted 14 Feb 2006

Microsoft patterns & practices Composite UI Application Block: Shell Application Start Time and Memory Usage Validation Test

This article describes a Microsoft patterns & practices Composite UI Application Block (CAB) based shell application start time and memory usage validation test performed using NUnit.

Validation Test Description

The native CAB reflection-based module loader service is used against a local startup directory only. A single test module was used as the test object, implementing the following:

  • MyTestModuleInit: minimal implementation
  • MyTestWorkItem: includes three event subscriptions (to the publications listed under MyTestPresenter below)
  • MyTestPresenter: includes three event publications and corresponding public invokers; three commands that each invoke one of the event publications; six UI extension sites, three pairs of which invoke one of the commands (each pair consists of a menu item with icon and a toolbar item with icon)
  • MyTestView: minimal implementation

It is fair to state that the test module described above exhibits a below-average to average number of extensions and extremely limited initialization otherwise (i.e., the work item contains no functional services such as a web service, database connection, or other persistence, and no presentation services with rich UI initialization). Nevertheless, a baseline shell application start time and memory usage is established per the configuration provided below. A start time test method was executed on the shell application, incrementing the number of test modules loaded on each start from 0, 1, 2, 3, … 48 (an arbitrary maximum); the test method ensures that shell application start time is less than or equal to 15 seconds, with a 0.5 second tolerance and a 0.125 second result resolution.
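The pass/fail criterion just described (15 seconds, 0.5 second tolerance, 0.125 second result resolution) can be sketched as follows. This is an illustrative Python sketch of the acceptance logic only, not the actual NUnit test code; all names are hypothetical:

```python
# Sketch (assumption, not the article's actual test code) of the pass/fail
# logic described above: start time must be <= 15 s within a 0.5 s tolerance,
# with results reported at a 0.125 s resolution.

MAX_START_SECONDS = 15.0    # acceptance threshold from the article
TOLERANCE_SECONDS = 0.5     # allowed overshoot
RESOLUTION_SECONDS = 0.125  # result resolution

def quantize(seconds: float) -> float:
    """Round a raw measurement to the nearest 0.125 s step."""
    return round(seconds / RESOLUTION_SECONDS) * RESOLUTION_SECONDS

def start_time_ok(measured_seconds: float) -> bool:
    """True if a measured start time passes the validation test."""
    return quantize(measured_seconds) <= MAX_START_SECONDS + TOLERANCE_SECONDS

# e.g. the maximum start time reported below (48 modules, 10.545 s) passes
assert start_time_ok(10.545)
```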

Validation Test Configuration

Operating System:  

  • Microsoft Windows XP Professional Version (2002) 5.1.2600 Service Pack 2 Build 2600
  • Microsoft Internet Explorer 6.0.2900.2180 Service Pack 2


Hardware:

  • Mobile Intel Pentium 4 @ 3.20 GHz with 1.00 GB of RAM

Test Assembly/Artifact Size Listing (Debug compiled):

  • Menubar/toolbar icons: 1,908
  • Extension configuration file: 4,105
  • Total per Test Assembly: 28,999




Base Shell Application Assembly Size Listing (Debug compiled):

  • ConverterBase.dll (internal object builder): 69,632
  • ConverterRegistry.dll (internal object builder): 53,248
  • (internal CAB extensions)
  • (internal event viewer module)
  • (internal shell application)

Supporting Software Version Listing:   

  • Composite UI Application Block (CAB) 1.0.51205.0
  • Microsoft .NET Framework 2.0.50727.42
  • Microsoft Visual Studio 2005 Professional Edition 8.0.50727.42
  • NUnit 2.2.5

Start Time Validation Test Results

Shell application start time exhibited the following behavior, as presented in the chart below:

  • Minimum start time: 3.614 seconds (2 test modules loaded on start)
  • Maximum start time: 10.545 seconds (48 test modules loaded on start)
  • Maximum start time increase: 1.173 seconds (observed going from 14 to 15 and from 23 to 24 modules loaded on start)
  • Average start time increase: 0.13975 seconds

[Chart: shell application start time versus number of test modules loaded on start]

Memory Usage Validation Test Results

Shell application memory usage (i.e., footprint) exhibited the following behavior, as presented in the chart below:

  • Minimum footprint: 25,820 KB (baseline; no test modules loaded on start)
  • Maximum footprint: 32,884 KB (45 test modules loaded on start)
  • Maximum footprint increase: 1,700 KB (observed going from 14 to 15 and from 19 to 20 modules loaded on start)
  • Average footprint increase: 134.75 KB

[Chart: shell application memory footprint versus number of test modules loaded on start]
Validation Test Result Analysis

In general, the results observed are not unexpected: shell application start time (and memory usage) increased slowly but steadily using a test module that performs very little initialization. The frequent start time decreases (albeit of small magnitude) are also expected, since this was a true validation environment with full system execution for each test (i.e., the test was performed on the system itself rather than against calibrated module loader, extension, or other services). Because the test module was modified only minimally to avoid module loader service error checking (assembly and assembly module attribute names were kept unique), the assembly namespaces, classes, event extensions, command extensions, and so on were identically named. This could decrease absolute initialization time (and memory usage) slightly, but not relative timing, since these characteristics remained constant across the test. Load efficiencies could also have resulted from shell application/NUnit assembly caching by the framework; in theory this was a non-factor given the standard test process: execute the shell normally to verify the expected number of modules and gather memory usage, shut down the shell, start NUnit and load/execute the unit test assembly, save the start time results, and shut down NUnit.

Finally, given all of the above, the estimated number of modules that would cause shell application start time to exceed 15 seconds (with no tolerance) is approximately 52 (extrapolating with the maximum start time increase) to 80 (extrapolating with the average start time increase, ignoring negative and zero start time increases).
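The extrapolation behind those estimates can be reproduced with a quick sketch (Python used for illustration; the constants are the measured values reported above, and the function name is hypothetical):

```python
# Reproducing the module-count estimates above: extrapolate from the
# 48-module start time of 10.545 s to the 15 s threshold using the maximum
# and average per-module start time increases.

THRESHOLD = 15.0        # seconds, no tolerance
T_48 = 10.545           # seconds, measured with 48 modules loaded
MAX_INCREASE = 1.173    # seconds per additional module (worst observed)
AVG_INCREASE = 0.13975  # seconds per additional module (average)

def modules_to_threshold(per_module_increase: float) -> float:
    """Estimated total module count at which start time reaches the threshold."""
    return 48 + (THRESHOLD - T_48) / per_module_increase

print(round(modules_to_threshold(MAX_INCREASE)))  # 52 (worst case)
print(round(modules_to_threshold(AVG_INCREASE)))  # 80 (average case)
```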

Start Time Validation Test Indirect Observations

Because of the minimal test module updates mentioned above, and the test module implementation itself, a single command invocation resulted in events broadcast and received across every loaded module. That is, the number of events broadcast/received for any one command invocation was N² for N modules loaded: 1 event for 1 module, 4 events for 2 modules, 9 events for 3 modules, up to 2,304 events for 48 modules. The useful point is that event broadcast/receive performance appeared acceptable, at a (visual event viewer display) maximum of 3-4 seconds for 48 modules (2,304 simultaneous events); most of that time consists of event viewer regeneration, given its current implementation, rather than the event broadcast/receive itself.


Most importantly, from a test driven development perspective, the start time validation test results realign the start time test case expectation for the baseline configuration from an arbitrary (and fairly lengthy) 15 seconds to a more deterministic expectation of 5 seconds as additional module development moves forward (calculated roughly from the 3.614 second minimum start time plus the 1.173 second maximum start time increase as tolerance). Similarly, in using the start time validation test results to thoroughly evaluate system behavior, the internal object builder has been removed from the baseline configuration (effectively replaced by native .NET functionality), reducing start times by about 0.5 second at the lower module counts and up to 1.5 seconds at the higher module counts. Also of broad importance, the validation test results ensure that at no point in the test coverage does the Composite UI Application Block (CAB), along with the internal extensions that provide a Smart Client architecture baseline configuration, reach an unacceptable or surprising start time. Finally, given the specific configuration under test, the validation test results provide "the facts and nothing but the facts" in terms of the approximate start time increases that may be expected as additional modules are loaded by a CAB-based shell application.
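The revised 5 second expectation follows from simple arithmetic on the measured values (illustrative sketch; variable names are hypothetical):

```python
# Sketch of the revised start-time expectation derived above: minimum
# observed start time plus the maximum observed per-module increase as
# tolerance, rounded up to a round figure.

import math

MIN_START = 3.614     # seconds (2 modules loaded on start)
MAX_INCREASE = 1.173  # seconds (worst observed per-module increase)

revised_expectation = MIN_START + MAX_INCREASE  # 4.787 s
print(math.ceil(revised_expectation))  # 5 seconds
```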


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Tom Polanski
Software Developer (Senior)
United States
Tom Polanski, Avionics Software Engineer
Abaco Systems

Abaco Systems delivers high performance rugged embedded computing solutions, based on industry standards and open architectures, to mission-critical applications in defense, aerospace and industry around the world.

Article Copyright 2006 by Tom Polanski