Web application slowness across different environments in a software factory line is something we have all come across at some point or another. Here is my perspective on where to begin when addressing this issue, in a general sense, for a Microsoft-centric web application. Let us assume that our application was developed using ASP.NET and a SQL Server database.
1. Go through the Event Viewer log for any errors, warnings and informational messages. Watch out for messages logged by your application and by any other applications on your web server.
2. Check the IIS logs to see if there is an unusual response pattern, i.e. errors (HTTP 500, 404, etc.).
3. The application pool settings in IIS (recycling, queue length, idle timeout) can be a source of application slowness.
4. The web server could have run out of disk space (for example, due to a lack of error log rolling and a backup service).
5. IIS could crash because of a memory leak, thread locking, etc.
6. Make sure that the web server is up to date with all the latest “software patches” (oops!!! service packs).
7. Make sure that the database connection pool settings are correct.
8. Consider rendering the website content using a content delivery network (CDN) service provider like Akamai, Amazon CloudFront, Microsoft Azure, AT&T, etc.
9. If none of the above seems to be causing the issue, then read on.
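On the connection pool point above: in ADO.NET, pooling behavior is controlled from the connection string itself. A minimal web.config sketch (the server name, database name, and pool sizes below are illustrative placeholders, not recommendations):

```xml
<connectionStrings>
  <!-- Pooling is on by default; Min/Max Pool Size and Connect Timeout are tunable. -->
  <!-- DBSERVER and AppDb are placeholder names. -->
  <add name="AppDb"
       connectionString="Data Source=DBSERVER;Initial Catalog=AppDb;Integrated Security=True;Pooling=True;Min Pool Size=5;Max Pool Size=100;Connect Timeout=30"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

A Max Pool Size that is too small shows up as requests queuing for connections; one that is too large can overwhelm SQL Server instead.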
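The IIS log check above can be scripted rather than eyeballed. Here is a minimal sketch in Python that tallies status codes from a W3C extended log; the sample log lines are fabricated for illustration, and real logs declare their own column layout in a `#Fields:` directive, which the sketch honors:

```python
from collections import Counter

def count_statuses(log_text):
    """Count sc-status values in a W3C extended IIS log."""
    fields, counts = [], Counter()
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # field names follow the directive
        elif line and not line.startswith("#") and fields:
            row = dict(zip(fields, line.split()))
            counts[row.get("sc-status", "?")] += 1
    return counts

# Tiny fabricated sample for illustration.
sample = """#Fields: date time cs-method cs-uri-stem sc-status
2014-01-01 10:00:00 GET /default.aspx 200
2014-01-01 10:00:01 GET /cart.aspx 500
2014-01-01 10:00:02 GET /missing.gif 404
2014-01-01 10:00:03 GET /cart.aspx 500"""

print(count_statuses(sample))  # Counter({'500': 2, '200': 1, '404': 1})
```

A sudden spike in 500s (or a flood of 404s for a missing asset) pinpointed this way often narrows the search considerably.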
1. Analyzing the web requests and responses across multiple pages of the application using tools like Charles, Fiddler, Firebug, etc. can provide you with a lot of information that you would not know otherwise.
2. Narrow down where in the page execution the slowness occurs.
3. Not disposing of objects after use can eat up a lot of resources on the web server, causing slowness.
4. Make sure that the response times of all Ajax and web service calls are in line with expectations.
5. Run extended load tests to determine if there might be a cause of failure that was not noticed during regular load tests.
6. Always employ best practices for implementing website acceleration.
7. Consider fetching multiple result sets in one database call as
opposed to one result set per database call. This will reduce the number
of round trips to the database.
8. If none of the above seems to be causing the issue, then read on.
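On the object disposal point above: in ASP.NET this means wrapping IDisposable objects (connections, readers, streams) in C# using blocks so they are released deterministically. The same idea, sketched here in Python with a context manager playing the role of using/IDisposable (the PooledConnection class is a made-up stand-in for a real resource):

```python
class PooledConnection:
    """Stand-in for a disposable resource such as a DB connection."""
    open_count = 0  # class-wide gauge of un-disposed connections

    def __init__(self):
        PooledConnection.open_count += 1

    def close(self):
        PooledConnection.open_count -= 1

    # Context-manager protocol: guarantees close() even on exceptions,
    # the same role C#'s using/IDisposable plays.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # do not swallow exceptions

def leaky():
    conn = PooledConnection()   # never closed: stays counted forever

def tidy():
    with PooledConnection() as conn:
        pass                    # closed automatically on block exit

for _ in range(3):
    leaky()
print(PooledConnection.open_count)  # 3 leaked connections

for _ in range(3):
    tidy()
print(PooledConnection.open_count)  # still 3: tidy() leaked nothing
```

Under load, the leaky pattern is what exhausts the connection pool and makes an otherwise healthy server crawl.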
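The "multiple result sets in one database call" tip above looks like this from the data-access side: send one batch containing several SELECTs, then walk the result sets with the driver's next-set call (NextResult on an ADO.NET SqlDataReader, Cursor.nextset() in Python DB-API drivers such as pyodbc). A sketch with a fake cursor standing in for a live SQL Server connection:

```python
def read_all_result_sets(cursor):
    """Drain every result set produced by one batched query."""
    result_sets = [cursor.fetchall()]
    while cursor.nextset():          # advance to the next SELECT's rows
        result_sets.append(cursor.fetchall())
    return result_sets

class FakeCursor:
    """Mimics a DB-API cursor that ran 'SELECT ...; SELECT ...' in one round trip."""
    def __init__(self, sets):
        self._sets, self._i = sets, 0
    def fetchall(self):
        return self._sets[self._i]
    def nextset(self):
        self._i += 1
        return self._i < len(self._sets)  # truthy while more sets remain

# One round trip returned both the order header and its line items.
cursor = FakeCursor([[("order", 42)], [("item", "widget"), ("item", "gadget")]])
print(read_all_result_sets(cursor))
```

With a real driver, `read_all_result_sets` works unchanged against a cursor that executed the batched SQL; the point is that two result sets cost one network round trip instead of two.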
1. Low disk space on the SQL Server/Cluster.
2. Not following SQL Server best practices.
3. Go through the execution plans of the various suspect SQL scripts/statements to isolate the issue. A table scan can be a very costly operation as opposed to an index scan.
4. Run a SQL trace for a few hours in an environment with a lot of traffic and feed the trace file to the SQL Server Performance Tuning wizard.
5. Apply the recommendations of the SQL Server Performance Tuning wizard to the database to see if that helps.
6. Verify that the background SQL, SSIS and SSRS tasks are scheduled to run during off-peak hours.
7. Consider breaking down a huge database into smaller ones. As an example, an e-commerce website could access data from separate Catalog, Marketing, Sales and Audit databases instead of one big database.
8. Consider regular archiving and cleanup of historical data from all databases.
9. Index defragmentation can improve SQL execution times too.
10. Sorting huge record sets might be best done at the application level rather than at the database level. This can be controversial depending on who you speak to, but chances are that a modern-day, beefed-up web server in a web farm (load-balanced environment) can handle expensive data operations, which conserves SQL Server processing time for handling more requests.
11. I hope your issue has been resolved by now.
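The index defragmentation advice above is usually threshold-driven: Microsoft's commonly cited guidance is to REORGANIZE indexes at roughly 5-30% fragmentation (as reported by sys.dm_db_index_physical_stats) and REBUILD above ~30%, though the exact cut-offs vary by source and workload. A small Python helper encoding that rule of thumb:

```python
def index_maintenance(frag_percent, reorg_threshold=5.0, rebuild_threshold=30.0):
    """Pick an index maintenance action from a fragmentation percentage.

    Thresholds follow the oft-quoted 5%/30% rule of thumb; tune for your workload.
    """
    if frag_percent >= rebuild_threshold:
        return "ALTER INDEX ... REBUILD"
    if frag_percent >= reorg_threshold:
        return "ALTER INDEX ... REORGANIZE"
    return None  # fragmentation too low to be worth touching

print(index_maintenance(2.0))    # None
print(index_maintenance(12.5))   # ALTER INDEX ... REORGANIZE
print(index_maintenance(45.0))   # ALTER INDEX ... REBUILD
```

Running a check like this on a schedule, and only rebuilding what actually needs it, keeps maintenance windows short.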
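For the application-level sorting point above, pushing the ORDER BY out of T-SQL means the web tier receives the rows unsorted and orders them itself; in .NET that would be LINQ's OrderBy. The idea, sketched in Python with a made-up record shape:

```python
# Rows as they might come back from the database, deliberately unsorted.
rows = [
    {"sku": "B-7", "units_sold": 120},
    {"sku": "A-3", "units_sold": 450},
    {"sku": "C-1", "units_sold": 90},
]

# Sort on the web tier instead of ORDER BY units_sold DESC in T-SQL,
# trading web-server CPU (cheap to scale out in a farm) for SQL Server time.
top_sellers = sorted(rows, key=lambda r: r["units_sold"], reverse=True)
print([r["sku"] for r in top_sellers])  # ['A-3', 'B-7', 'C-1']
```

Note the trade-off: this only pays off when the full result set is coming over to the application anyway; sorting before a TOP/paging clause still belongs in the database.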
Good luck and happy programming!
References:
1. Cost-effective website acceleration
2. Microsoft SQL Server 2000 Index Defragmentation Best Practices