
The Standalone Programmer: Simple Performance Testing

3 Jan 2003
A simple framework for creating customized performance tests

Introduction

In my previous two articles, I discussed quality as a goal and methods for reaching it. One of the observations I made was the importance of high-quality code, and among my observations related to high-quality code was the importance of validating method input parameters. One of the objections that I have heard to this rule is that doing so can adversely affect application performance.

I have argued that the impact on performance of validating method input parameters is virtually non-existent. However, I have not had any hard quantitative numbers to back up this argument. This has always bothered me because some of the methods I use, such as IsBadReadPtr, are black boxes to me. I do not understand their implementation entirely, and to be fair and reasonable, I have to question the impact of these functions.

Therefore, I have created a framework for quickly assembling custom performance tests.  This framework is in its early stages right now, but I believe it is a good start.  I have decided to go ahead and release this framework at this early stage so that I can elicit some feedback from other CPians and hopefully improve it.

The problem with performance testing

As we all know, most performance tests and benchmarks do not duplicate "real world" environments.  They are typically created in what amounts to a clean-room.  This leads many software developers to discount their value and for the most part ignore them.  Another problem with many benchmarks is that they produce results that are difficult or impossible to reproduce.  And probably the most irritating problem with benchmarks is that even if they are performed in "real world" environments, they most likely do not duplicate the "real world" that we live in.

I hope to overcome these issues by producing a benchmarking framework that is flexible and customizable.  Also, by giving the source code away along with instructions on how to use it, hopefully other developers can customize the framework to better represent their "real world".

My solution

As I said before, what I have done is create a simple framework made up of two base classes: CPerformanceTestEnvironment and CPerformanceTest. Together these classes provide a basis for creating fully customized testing environments and fully customized tests.

The CPerformanceTestEnvironment class represents the basis for creating a testing environment that will include 0-N threads doing actual work of some kind.  This work will be going on as the test is performed.  This class is responsible for starting and stopping those threads as well as passing to those threads any customizations that are desired.

PUBLIC METHODS

InitBusyThreads(const int iThreadCount, const int iMinMemoryMB, const int iMaxMemoryMB)
    This function is called to prepare the environment for the test. Depending on your needs, this may be an optional method.

TerminateBusyThreads()
    This function is called to terminate any worker threads the environment has created. If InitBusyThreads is not used, there is no need to call this function.

RunTest(CPerformanceTest* pTestToRun, const int iIterations, const int iMinFunctionTimeMS, const bool bIncludeBaseLine)
    This function is called to execute a customized test scenario. All test scenarios are executed through the environment so that the environment can adapt to the scenario if needed.
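To make the flow concrete, here is a rough sketch of how a test run might be driven through the environment. The class and method names come from the framework above; the parameter values are arbitrary and chosen purely for illustration.

// Sketch only: driving one of the scenarios described below through the
// environment.  The parameter values here are made up for the example.
CPerformanceTestEnvironment Environment;

// optionally start 4 busy threads, each working over 1-16 MB of memory,
// so the test runs on a machine that is doing other work
Environment.InitBusyThreads(4, 1, 16);

// run the NULL pointer scenario: 1000 iterations, each simulated function
// body lasting at least 1 ms, and include the base-line (no-check) pass
CPerformanceTest_NullPointer NullPointerTest;
Environment.RunTest(&NullPointerTest, 1000, 1, true);

Environment.TerminateBusyThreads();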

The CPerformanceTest class is a base class which provides an interface (along with some basic support mechanisms) for creating customized test scenarios. This class does not implement any test scenario itself. Instead, it is expected that classes will be derived from it to implement the specific test scenarios that are desired.

PUBLIC METHODS

Run(const int iIterations, const int iMinFunctionTimeMS, const bool bIncludeBaseLine)
    This method should only be executed by the CPerformanceTestEnvironment class.

GetTestName()
    This method should return a constant string (LPCTSTR) giving the test scenario a meaningful textual description.
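To give an idea of what a derived scenario looks like before diving into a real one, here is a minimal skeleton, assuming Run and GetTestName are virtual methods of the base class (the scenarios described below follow this shape):

// Skeleton of a custom scenario; the overridden signatures mirror the
// CPerformanceTest_NullPointer methods shown later in the article.
class CPerformanceTest_MyScenario : public CPerformanceTest
{
public:
	virtual LPCTSTR GetTestName() { return _T("My custom scenario"); }

	virtual int Run(const int iIterations, 
	                const int iMinFunctionTimeMS, 
	                const bool bIncludeBaseLine)
	{
		// time a base-line pass, a FAILURE pass and a SUCCESS pass here,
		// following the pattern shown in the next section
		return 0;
	}
};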

Some actual test scenarios

Obviously, the framework by itself doesn't do anything particularly valuable.  Therefore I have created 11 test scenarios to be included with the framework.  9 are fully implemented while 2 need more work.  I plan to add many more as time goes on.  Each of these scenarios is implemented in a class derived from CPerformanceTest.  

The nine fully implemented test scenarios are listed below.

CPerformanceTest_NullPointer
    Measures the performance of if (!pSomePointer)

CPerformanceTest_BadReadAddress1Byte
    Measures the performance of if (IsBadReadPtr(pSomePointer, 1))

CPerformanceTest_BadWriteAddress1Byte
    Measures the performance of if (IsBadWritePtr(pSomePointer, 1))

CPerformanceTest_BadReadAddressVariable
    Measures the performance of if (IsBadReadPtr(pSomePtr, iExpectedSize)) where iExpectedSize for the test is 500,000 bytes.

CPerformanceTest_BadWriteAddressVariable
    Measures the performance of if (IsBadWritePtr(pSomePtr, iExpectedSize)) where iExpectedSize for the test is 500,000 bytes.

CPerformanceTest_MemoryTooSmall
    Measures the performance of if (IsBadWritePtr(pSomePtr, iExpectedSize)) where iExpectedSize for the test is 500,000 bytes BUT always passes in a valid pointer, just one of insufficient size.

CPerformanceTest_32bitIntConstCompare
    Measures the performance of if (iSomeInt >= 1 && iSomeInt <= 100)

CPerformanceTest_32bitIntNonConstCompare
    Measures the performance of if (iSomeInt >= iSomeMinValue && iSomeInt <= iSomeMaxValue)

CPerformanceTest_RangeCheckAndWritePtr
    Measures the performance of if (iSomeInt < iSomeMinValue || iSomeInt > iSomeMaxValue || !pSomePointer || IsBadWritePtr(pSomePtr, 48)). This attempts to approximate a little more closely the type of input parameter check that is often needed.

How a scenario is implemented

Each of these nine classes implements the same four functions. One of them is the GetTestName function, which just returns a string, so the other three are the only really important ones. Those three functions all follow a very similar pattern, so let's explore the code for the three methods of the CPerformanceTest_NullPointer scenario. Once you look at the code for the others, you will see that they are 90% the same.

int CPerformanceTest_NullPointer::Run(const int iIterations, 
                                      const int iMinFunctionTimeMS, 
                                      const bool bIncludeBaseLine)
{
	if (bIncludeBaseLine)
	{
		CPerfTimer Timer;
		Timer.Start();

		for (int iX = 0; iX < iIterations; iX++)
			BaseLineTest(iMinFunctionTimeMS, NULL);

		Timer.Stop();

		// make sure we keep track of iterations and time 
                // in function
		m_iTotalBaseLineTimeMS += Timer.Elapsedus();
		m_iTotalBaseLineIterations++;
	}

	// FAILURE case: pass NULL so the pointer check fails
	{
		CPerfTimer Timer;
		Timer.Start();

		for (int iX = 0; iX < iIterations; iX++)
			RealTest(iMinFunctionTimeMS, NULL);

		Timer.Stop();

		m_iTotalFailureCaseTimeMS += Timer.Elapsedus();
		m_iTotalIterations++;
	}

	// SUCCESS case: pass a non-NULL address so the pointer check succeeds
	{
		CPerfTimer Timer;
		Timer.Start();

		for (int iX = 0; iX < iIterations; iX++)
			RealTest(iMinFunctionTimeMS, (void*)0x0badf00d);

		Timer.Stop();

		m_iTotalSuccessCaseTimeMS += Timer.Elapsedus();
		m_iTotalIterations++;
	}

	return 0;
}

The purpose of this function is to override the Run method from the base class and perform three steps. First, it performs the base-line test for the desired number of loops, then it performs the FAILURE condition for the desired number of loops, and finally it performs the SUCCESS condition for the desired number of loops.
You're probably wondering why this flow is important. To be honest, I made it up. However, I think it makes sense. The idea is that there really are 3 eventualities that must be tested with each scenario.
* The base-line eventuality is designed to measure performance in the absence of the input parameter check.

* The FAILURE eventuality is designed to measure performance when the input parameter check fails. (NOTE: this particular scenario allows the code within the function to continue. Most of the time the function would exit with an error return code, exception or some other mechanism. I left it this way so that I could measure more precisely the cost of the if statement and bReturn = false statements. I think to adequately test, a new scenario should be created which does exactly what this one does except that on failure the function exits.)

* The SUCCESS eventuality is designed to measure performance when the input parameter check succeeds.

 

bool CPerformanceTest_NullPointer::BaseLineTest(const int iMinFunctionTimeMS, 
                                                void* pPointerToTest)
{
	bool bReturn = true;

	CPerfTimer Timer;
	Timer.Start();

	// just wait for our time to expire
	while (Timer.Elapsedms() < iMinFunctionTimeMS) 
           DoSomeSimulatedWork();

	Timer.Stop();

	return bReturn;
}
The purpose of this function is to implement the base-line eventuality (no checking). You will notice that I am using the CPerfTimer class written by Dean Wyant (http://www.codeproject.com/datetime/perftimer.asp).
The CPerfTimer class provides a wrapper for the high-resolution timing facilities provided by Windows (QueryPerformanceCounter).

You will also notice that I have a while loop in here which looks kind of stupid. If you're wondering where I got the idea to do things this way, again I must confess I made it up. One of the arguments against many performance tests is that they are completely trivial in nature. What I have tried to do here is to ensure that the functions involved in the scenario perform some type of operation that is more than trivial (but not much). The DoSomeSimulatedWork function is not magic. It is implemented in the CPerformanceTest class as a protected method. All it does is step through a series of four other functions which do very simple things like allocating and concatenating a string, allocating memory on the stack, allocating memory on the heap, and all three of the above.
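To give an idea of the flavor of that work, here is a purely hypothetical sketch of the kind of thing DoSomeSimulatedWork does; the actual implementation in the download may differ in its details.

// Hypothetical sketch of the cheap-but-not-trivial work described above;
// the real DoSomeSimulatedWork in the download may differ.
void CPerformanceTest::DoSomeSimulatedWork()
{
	// allocate and concatenate a string
	CString strWork(_T("some "));
	strWork += _T("simulated work");

	// allocate a small buffer on the stack and touch it
	char szStack[64];
	memset(szStack, 0, sizeof(szStack));

	// allocate a small buffer on the heap, touch it and free it
	char* pHeap = new char[256];
	memset(pHeap, 0, 256);
	delete [] pHeap;
}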

 

bool CPerformanceTest_NullPointer::RealTest(const int iMinFunctionTimeMS, 
                                            void* pPointerToTest)
{
	CPerfTimer Timer;
	Timer.Start();

	// do our testing code
	bool bReturn = true;
	if (!pPointerToTest)
		bReturn = false;

	// just wait for our time to expire
	while (Timer.Elapsedms() < iMinFunctionTimeMS) 
            DoSomeSimulatedWork();

	Timer.Stop();

	return bReturn;
}
The purpose of this function is to implement the actual test scenario. It is identical to the base-line function EXCEPT that the if condition is present. It is important that this function match the base-line function, because the base-line is how we isolate the cost of the check: the time the base-line takes is subtracted from the time the success and failure eventualities take.
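The arithmetic behind that subtraction is simple. As an illustration (this helper is not part of the framework, just a sketch of the calculation):

// Illustrative helper (not part of the framework): the per-call cost of a
// check is the per-call time of the success or failure case minus the
// per-call time of the base-line, which contains only the simulated work.
double CheckCostPerCallUS(const double dTotalCaseTimeUS, 
                          const double dTotalBaseLineTimeUS, 
                          const int iIterations)
{
	return (dTotalCaseTimeUS / iIterations) - (dTotalBaseLineTimeUS / iIterations);
}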

Some results

Before I give results, let me tell you about my development PC. It is a dual-processor Pentium III 450 with 512 MB of RAM running Windows 2000 Advanced Server.

So far every test I have run indicates that the cost of the simplest parameter validation techniques (32-bit integer range checking and NULL pointer checks) is extremely small. We're talking less than 5 microseconds for the most part. Obviously, for tight inner-loop type code this may be too much, but for the vast majority of code, in fact I think I can safely say 95-99% of it, this is not an issue.

The more interesting performance numbers come from the tests involving IsBadReadPtr and IsBadWritePtr. There is a definite cost to using these functions, particularly in the failure situations. On one test I performed, the cost of using IsBadReadPtr(pSomePointer, 1) for the success case was 1,576 microseconds, but the failure case required 128,893 microseconds! WOW! I did not expect this huge difference. My first thought here is that IsBadReadPtr might be catching an exception internally, which would produce such a huge difference. (IsBadWritePtr(pSomePointer, 1) seems to produce similar results.)
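For what it is worth, my guess is that something along these general lines is happening inside IsBadReadPtr. This is only a sketch of the technique I suspect is being used, not the actual implementation:

#include <windows.h>

// Sketch of how a read probe could be written with structured exception
// handling; this is a guess at the general technique, not the actual
// implementation of IsBadReadPtr.
BOOL ProbeForRead(const void* pAddress, UINT_PTR uSize)
{
	if (uSize == 0)
		return TRUE;        // nothing to probe
	if (!pAddress)
		return FALSE;

	__try
	{
		volatile const BYTE* p = (const BYTE*)pAddress;
		BYTE bDummy;

		// touch one byte per page; a bad page raises an access violation
		for (UINT_PTR u = 0; u < uSize; u += 4096)
			bDummy = p[u];
		bDummy = p[uSize - 1];
		(void)bDummy;
	}
	__except (EXCEPTION_EXECUTE_HANDLER)
	{
		// catching the access violation is expensive, which would explain
		// why the failure case is so much slower than the success case
		return FALSE;
	}

	return TRUE;
}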

The test of IsBadReadPtr(pSomePointer, 500000) revealed that there is a significant performance cost to checking larger memory areas. In one test, this check required 54,805 microseconds (54 milliseconds) in the success case. The failure condition, however, appeared to cost no more than the failure condition of IsBadReadPtr(pSomePointer, 1). This would seem to support my assumption that exceptions are being dealt with, since the time is relatively constant for the failure condition.

What my tests seem to show is that simple NULL pointer checks and 32-bit integer range checks are just about always worth the cost of using them. Calls to IsBadReadPtr and IsBadWritePtr should be reserved for cases where success is the normally expected condition and the size of the buffer being checked is relatively small. Also, IsBadReadPtr and IsBadWritePtr are clearly poor choices for inner-loop type situations, although it is unlikely a programmer would use them there anyway.
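Putting those observations together, the kind of entry check the numbers seem to justify for a typical (non-inner-loop) function looks something like this; the function and parameter names are invented for the example:

#include <windows.h>

// Invented example of the inexpensive checks the numbers above justify:
// a NULL check and a 32-bit integer range check cost almost nothing, and
// the IsBadWritePtr call is limited to the small buffer actually written.
bool FillReport(const int iDayOfMonth, void* pBuffer)
{
	// cheap checks first
	if (!pBuffer)
		return false;
	if (iDayOfMonth < 1 || iDayOfMonth > 31)
		return false;

	// the more expensive check, limited to the size we actually write
	if (IsBadWritePtr(pBuffer, 48))
		return false;

	// ... the real work of the function would go here ...
	return true;
}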

Why such "trivial" examples

The reason I chose such seemingly trivial examples is because these are some of the simplest code-quality guidelines to implement and they are often not done.  For some reason we programmers seem to find excuses not to follow this guideline (myself included).  One of these objections has been performance concerns.  I think the results I presented here and your own testing should eliminate this objection or at least give it a quantitative value. 

What's next?

The next step is to implement more involved scenarios and create a more realistic environment for those scenarios to be executed in.  You can start doing this on your own with the classes and demo project provided here.  I hope to hear from some of you with ideas for test scenarios and ways to improve my methodologies.  So, please leave comments and tell me what you think and how it can be improved.

Another thing that needs to be done is to run these same tests on other PC configurations (Windows 98, Windows XP, less memory, etc.).

Some of the test scenarios I am thinking about:

  1. Scenarios to measure the cost of catching exceptions, throwing exceptions and especially the cost of having exception catching logic when no exceptions occur
  2. Scenarios to measure the cost of complex method parameter validation involving multiple types of parameters, etc
  3. Scenarios to measure the performance of STL containers
  4. Scenarios to measure the performance of various C runtime functions
  5. Scenarios to measure the performance of various DB engines
  6. Scenarios to measure the performance of some Windows APIs
  7. Scenarios to measure the performance of several logging libraries
  8. Scenarios to measure the performance of TCP/IP and socket calls

About the demo

Before I forget, I have included a demo application: an MFC dialog-based application which provides a very simple UI for interacting with the test environment and scenarios I have provided. You will notice that some options are disabled. This is because they are not yet implemented.

When you run the demo, it will try to create a file in your temporary directory called CRunTimeCheckPerformanceDlg.txt. Once the tests are run, a message box will appear and then that text file will be opened with whatever editor you have associated with .txt files (usually Notepad).
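I have not reproduced the demo's plumbing here, but the general pattern for writing to the temporary directory and launching the associated editor is the familiar one; a rough sketch (not the demo's exact code):

#include <windows.h>
#include <shellapi.h>
#include <tchar.h>

// Hypothetical sketch of opening a results file in the temporary directory
// with the editor registered for .txt files; the demo's actual code may differ.
void OpenResultsFile()
{
	TCHAR szPath[MAX_PATH] = _T("");
	GetTempPath(MAX_PATH, szPath);
	_tcscat(szPath, _T("CRunTimeCheckPerformanceDlg.txt"));

	// the "open" verb launches whatever editor is registered for .txt
	ShellExecute(NULL, _T("open"), szPath, NULL, NULL, SW_SHOWNORMAL);
}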

The demo provides only very simple feedback while it is running.  This feedback is only updated at the beginning of each test and is not updated during the test so be patient. 

A couple of words of caution: because I have chosen an implementation that tries to simulate semi-real-world conditions, some of these tests can take a long time, especially if you specify a very high number of iterations.

A final word from the author

Please remember that this is an early release of this framework.  I have been thinking of doing it for some time, but found the time today to actually make it happen, so I wanted to get it out here before my time slipped away.  I hope to improve upon it as time goes by.


