Best Practices No. 5: Detecting .NET application memory leaks
Memory leaks in .NET applications have always been a programmer's nightmare, and they are among the biggest problems when it comes to production servers. Production servers need to run with minimal downtime, yet memory leaks grow slowly and, after some time, bring the server down by consuming huge chunks of memory. Most of the time, people reboot the system, make it work temporarily, and send a sorry note to the customer for the downtime.
Please feel free to download my free 500-question-and-answer eBook, which covers .NET, ASP.NET, SQL Server, WCF, WPF and WWF, at http://www.questpond.com .
The first and foremost task is to confirm that there actually is a memory leak. Many developers use Windows Task Manager to check whether the application has a memory leak. Using Task Manager is not only misleading, but it also gives very little information about where the leak is.
First let's try to understand how the Task Manager memory information is misleading. Task Manager shows the working set, not the memory the application actually uses. The working set is allocated memory, not used memory, and parts of it can be shared with other processes / applications. So the working set can be larger than the memory the application actually uses.
In order to get the right measure of the memory consumed by the application, we need to track the private bytes consumed by the application. Private bytes are those memory areas which are not shared with other applications. To detect the private bytes consumed by an application, we need to use performance counters.
Below are the steps we need to follow to track private bytes in an application using performance counters:-
- Start your application which has the memory leak and keep it running.
- Click Start → Run and type 'perfmon'.
- Delete all the current performance counters by selecting each counter and hitting the delete button.
- Right click, select 'Add counters', and select 'Process' from the performance objects.
- From the counter list select ‘Private bytes’.
- From the instance list, select the application you want to test for a memory leak.
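If you prefer to cross-check perfmon from code, the same number can be sampled programmatically. Below is a minimal sketch (the class and method names are mine, not from the article); `Process.PrivateMemorySize64` reports the same private, non-shared bytes that perfmon's 'Private Bytes' counter graphs:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class PrivateBytesSampler
{
    // Take a few snapshots of the process's private bytes.
    // A steady increase across snapshots is the same uptrend
    // you would watch for in the perfmon graph.
    public static long[] Sample(int count, int delayMs)
    {
        var samples = new long[count];
        var proc = Process.GetCurrentProcess();
        for (int i = 0; i < count; i++)
        {
            proc.Refresh();                        // re-read the memory counters
            samples[i] = proc.PrivateMemorySize64; // private (non-shared) bytes
            Thread.Sleep(delayMs);
        }
        return samples;
    }

    static void Main()
    {
        foreach (long s in Sample(3, 100))
            Console.WriteLine(s);
    }
}
```

In a real investigation you would log these samples over hours rather than milliseconds, exactly as the perfmon steps above suggest.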
If your application shows a steady increase in the private bytes value, we have a memory leak. You can see in the figure below how the private bytes value increases steadily, confirming that the application has a memory leak.
The above graph shows a linear increase, but in a live deployment it can take hours for the uptrend to show. To check for a memory leak, you may need to run the performance counter for hours, or even days, on the production server to confirm whether there really is a leak.
Once we have confirmed that there is a memory leak, it's time to investigate its root cause. We will divide the journey to the solution into three phases: what, how and where.
- What: - First we investigate what type of memory leak it is: a managed memory leak or an unmanaged memory leak.
- How: - What is really causing the memory leak? Is it a connection object, a file whose handle is not closed, etc.?
- Where: - Which function, routine or piece of logic is causing the memory leak?
Before we determine the type of leak, let's try to understand how memory is allocated in .NET applications. A .NET application has two types of memory: managed and unmanaged. Managed memory is controlled by the garbage collector, while unmanaged memory lies outside the garbage collector's boundary.
So the first thing we need to establish is the type of memory leak: managed or unmanaged. To detect which one it is, we need to measure two performance counters.
The first one is the private bytes counter for the application, which we have already seen in the previous section.
The second counter we need to add is 'Bytes in all Heaps'. Select '.NET CLR Memory' in the performance objects, select 'Bytes in all Heaps' from the counter list, and then select the application which has the memory leak.
Private bytes are the total memory consumed by the application, while bytes in all heaps is the memory consumed by managed code. So the equation becomes as shown in the figure below.
Unmanaged memory + bytes in all heaps = private bytes. So if we want to find the unmanaged memory, we can always subtract the bytes in all heaps from the private bytes.
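The equation can be captured as a trivial helper (a sketch; the class and method names are mine, not from the article). Feed it the two counter readings and you get the unmanaged portion:

```csharp
using System;

class LeakMath
{
    // Unmanaged memory + bytes in all heaps = private bytes,
    // rearranged to isolate the unmanaged portion.
    public static long UnmanagedBytes(long privateBytes, long bytesInAllHeaps)
    {
        return privateBytes - bytesInAllHeaps;
    }

    static void Main()
    {
        // e.g. 100 MB of private bytes with 30 MB in the managed heaps
        // leaves 70 MB of unmanaged memory.
        Console.WriteLine(UnmanagedBytes(100, 30)); // prints 70
    }
}
```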
Now we will make two statements:-
- If the private bytes increase and bytes in all heaps remain constant that means it’s an unmanaged memory leak.
- If the bytes in all heaps increase linearly that means it’s a managed memory leak.
Below is a typical screenshot of an unmanaged leak. You can see the private bytes increasing while the bytes in all heaps remain constant.
Below is a typical screenshot of a managed leak: the bytes in all heaps keep increasing.
Now that we have answered what type of memory is leaking, it's time to see how the memory is leaking; in other words, what is causing the memory leak.
So let's inject an unmanaged memory leak by calling the 'Marshal.AllocHGlobal' function, which allocates unmanaged memory. The call runs inside a timer a number of times, producing a large unmanaged leak.
private void timerUnManaged_Tick(object sender, EventArgs e)
{
    // Leak: unmanaged memory allocated on every tick and never freed.
    // The allocation size is illustrative.
    Marshal.AllocHGlobal(7000);
}
It's very difficult to inject a true managed leak, as the GC ensures that memory is reclaimed. To keep things simple, we simulate one by creating a lot of brush objects and adding them to a list held in a class-level variable. It's a simulation, not a real managed leak: once the application closes, this memory will be reclaimed.
private void timerManaged_Tick(object sender, EventArgs e)
{
    for (int i = 0; i < 10000; i++)
    {
        Brush obj = new SolidBrush(Color.Blue);
        // lstBrushes is the class-level list that keeps the brushes alive.
        lstBrushes.Add(obj);
    }
}
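For an in-process view of the managed heap (the programmatic cousin of the 'Bytes in all Heaps' counter), `GC.GetTotalMemory` can show that the simulation above really is reclaimable once the class-level list releases its references. A sketch with illustrative names, retaining byte arrays instead of brushes so it needs no UI types:

```csharp
using System;
using System.Collections.Generic;

class ManagedLeakSim
{
    // Class-level list: as long as it holds the arrays, the GC cannot reclaim them.
    static readonly List<byte[]> retained = new List<byte[]>();

    // Returns { heap before, heap while retained, heap after release }.
    public static long[] Cycle()
    {
        long before = GC.GetTotalMemory(true);   // force a full collection first
        for (int i = 0; i < 100; i++)
            retained.Add(new byte[100000]);      // ~10 MB held by the list
        long during = GC.GetTotalMemory(false);  // heap has grown
        retained.Clear();                        // drop the only references
        long after = GC.GetTotalMemory(true);    // collect: memory is reclaimed
        return new long[] { before, during, after };
    }

    static void Main()
    {
        long[] r = Cycle();
        Console.WriteLine("before={0} during={1} after={2}", r[0], r[1], r[2]);
    }
}
```

This is exactly why the simulation is not a real leak: clear the list (or close the application) and the 'Bytes in all Heaps' curve drops back down.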
In case you are interested in how leaks can really happen in managed memory, refer to the weak event pattern for more information: http://msdn.microsoft.com/en-us/library/aa970850.aspx .
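One concrete way such managed leaks arise is event subscription: a long-lived publisher's event delegate keeps every subscriber alive, which is the problem the weak event pattern solves. A small sketch (class names are illustrative, not from the article):

```csharp
using System;

class Publisher
{
    public event EventHandler Ping;

    public void Raise()
    {
        EventHandler handler = Ping;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    // While any handler is attached, the publisher references its subscribers,
    // so a long-lived publisher pins short-lived subscribers in memory.
    public bool HoldsSubscribers { get { return Ping != null; } }
}

class Subscriber
{
    public int Received;
    public void OnPing(object sender, EventArgs e) { Received++; }
}

class Demo
{
    static void Main()
    {
        var pub = new Publisher();   // imagine this lives for the application's lifetime
        var sub = new Subscriber();
        pub.Ping += sub.OnPing;      // pub now references sub via the delegate
        pub.Raise();
        Console.WriteLine(pub.HoldsSubscribers); // True: sub cannot be collected
        pub.Ping -= sub.OnPing;      // unsubscribing releases the reference
        Console.WriteLine(pub.HoldsSubscribers); // False: sub is collectable again
    }
}
```

Forgetting the `-=` on a publisher that outlives its subscribers is one of the most common real-world managed leaks.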
The next step is to download the 'DebugDiag' tool from http://www.microsoft.com/DOWNLOADS/details.aspx?FamilyID=28bd5941-c458-46f1-b24d-f60151d875a3&displaylang=en
Start the Debug Diagnostic tool, select 'Memory and handle leak' and click Next.
Select the process in which you want to detect memory leak.
Finally select ‘Activate the rule now’.
Now let the application run; the 'DebugDiag' tool will monitor memory issues in the background.
Once done, click on 'Start Analysis' and let the tool do the analysis.
You should get a detailed HTML report which shows how unmanaged memory was allocated. In our code we allocated a huge amount of unmanaged memory using 'AllocHGlobal', which shows up in the report below.
Warning
mscorlib.ni.dll is responsible for 3.59 MBytes worth of outstanding allocations. The following are the top 2 memory consuming functions:
System.Runtime.InteropServices.Marshal.AllocHGlobal(IntPtr): 3.59 MBytes worth of outstanding allocations.
Warning
ntdll.dll is responsible for 270.95 KBytes worth of outstanding allocations. The following are the top 2 memory consuming functions:
ntdll!RtlpDphNormalHeapAllocate+1d: 263.78 KBytes worth of outstanding allocations.
ntdll!RtlCreateHeap+5fc: 6.00 KBytes worth of outstanding allocations.
The managed memory leak of brushes shows up under 'GdiPlus.dll' in the HTML report below.
GdiPlus.dll is responsible for 399.54 KBytes worth of outstanding allocations.
The following are the top 2 memory consuming functions:
GdiPlus!GpMalloc+16: 399.54 KBytes worth of outstanding allocations.
Once you know what the source of the memory leak is, it's time to find out which logic is causing it. There is no automated tool to detect the logic that caused a memory leak; you need to go through your code manually, using the pointers provided by 'DebugDiag', to conclude where the issues are.
For instance, from the report it's clear that 'AllocHGlobal' is causing the unmanaged leak, while one of the GDI objects is causing the managed leak. Using these details, we then go into the code to see where exactly the issue lies.
You can download the source code from the top of this article, which can help you inject a memory leak.
It would be unfair on my part to say the above article is completely my own knowledge. Thanks to all the lovely people below who have written articles so that someday someone like me can benefit.
.NET best practice 1:- In this article we discuss how to find high memory consumption areas in .NET: http://www.codeproject.com/KB/aspnet/BestPrctice1.aspx
.NET best practice 2:- In this article we discuss how to improve performance using the finalize / dispose pattern: http://www.codeproject.com/KB/aspnet/DONETBestPracticeNo2.aspx
.NET best practice 3:- How to use performance counters to gather performance data from .NET applications: http://www.codeproject.com/KB/aspnet/DOTNETBestPractices3.aspx
.NET best practice 4:- How to improve bandwidth performance using IIS compression: DotNetBestPractices4.aspx.