We have a web application (.NET 2008 and SQL Server 2005) with 1,000 total users, about 350 of them online at any time, used for recording daily receipts.

The end-user company has 700 branches that work with the system over the internet. The problem is that most of the reports are heavy and make the system very slow.

We currently have 2 servers:
1) IBM xSeries 3650, Intel Xeon X5365 CPU @ 3 GHz (8 processors), RAM: 16 GB
2) Intel Xeon X5650 CPU @ 2.67 GHz (24 processors), RAM: 15.9 GB

Currently we host the app on server 1 and the database (with Report Server enabled) on server 2, but the app is slow.

What is the best scenario and architecture for improving performance and speed?

Both servers are 32-bit and SQL AWE is enabled.

I'm just trying to help you out.

First, check out the article below, because I suspect you might have some 'System Architecture' issues you'll want to review. Second, you might want to know more about better caching techniques, or at least see what others think of them. For those two reasons, I invite you to read these two articles:

99.99% available ASP.NET and SQL Server SaaS Production Architecture[^]

The following one is a prize winner:

Ten Caching Mistakes that Break your App[^]
 
 
Thanks very much, code monkey.
Our problem is the reports; if we exclude them, everything is OK.
We used strong standards and performance tips when developing the application, such as:
1 - there are no index scans
2 - view state is disabled on pages
3 - pages and scripts are very small
4 - there are no images in the project
5 - on list pages we fetch just the top 100 records
6 - ...

But the report queries involve more tables and columns, we cannot index all the fields, and some reports return 1,000 records or more.

Could you explain more about how to move the reports to another database, and how to build custom denormalized tables to avoid multiple joins and heavy queries?

Does SQL Server have any built-in capability for this, or do we have to do it ourselves?

And could you explain your last paragraph?

(I actually would recommend a 3rd server. Have the database live on the largest box and have 2 identical, load-balanced web servers. Usually we don't get the hardware we need/want, so if you're stuck with two servers, the web-server/database-server split is probably your best bet. Always put your database on the most powerful box possible; it's usually the bottleneck and doesn't always scale well.)
 
 
Comments
crazedDotNetDev 18-Apr-12 15:01pm    
Sure.
1000+ records... that's getting to the point where rendering the HTML alone might be a problem. You might also want to allow paging on the report's datagrid. (http://msdn.microsoft.com/en-us/library/5aw1xfh3.aspx)
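If you'd rather page on the database side, so that only one page of rows ever crosses the wire, SQL Server 2005's row_number() works too. A minimal sketch, assuming the tblReceipts table from the example below and hypothetical @PageNumber/@PageSize values:

-- return one page of receipts; only @PageSize rows leave the server
declare @PageNumber int, @PageSize int
set @PageNumber = 1
set @PageSize = 100

select receiptDate, amount
from
(
    select
        receiptDate, amount,
        row_number() over (order by receiptDate desc) as rowNum
    from
        tblReceipts
) as numbered
where
    rowNum between (@PageNumber - 1) * @PageSize + 1 and @PageNumber * @PageSize
order by
    rowNum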

The technique I used a while back is to set up a nightly job that truncates and repopulates a series of "read only" tables dedicated to reporting. (Note: this assumes you don't need to report on items entered after the job runs, i.e. it's OK for the report to contain day-old data.) The basic idea is to find work in the reports that can be done once, nightly. Most aggregations (sum(), max(), min(), etc.) are good candidates for this process. SQL Server doesn't have this kind of functionality built in; you'll need to code it.

Example: let's say you have a year-to-date receipt report that is taking too long. Assuming the query is something like...

select
month(receiptDate) as receiptMonth, sum(amount) as totalAmount
from
tblReceipts
where
year(receiptDate) = year(getdate())
group by
month(receiptDate)

...pull that query out of the report, create a reporting table, and populate it via a nightly job...

truncate table tblReport_ReceiptsYTD

insert into tblReport_ReceiptsYTD (receiptMonth, totalAmount)
select
month(receiptDate), sum(amount)
from
tblReceipts
where
year(receiptDate) = year(getdate())
group by
month(receiptDate)

...and the report’s query becomes something like...

select
receiptMonth, totalAmount
from
tblReport_ReceiptsYTD

Even if the nightly job takes 10 minutes to finish, it's ok. The time is spent once, during the night. Anyone hitting the report during the day will see the optimized, pre-aggregated report table.
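One wrinkle: between the truncate and the insert, the table is empty, so a report that runs mid-refresh sees nothing. A sketch of one way to close that window, wrapping the refresh in a transaction inside a stored procedure (the procedure name here is hypothetical) that a nightly SQL Server Agent job can call:

-- hypothetical wrapper; schedule via a nightly SQL Server Agent job
create procedure usp_RefreshReceiptsYTD
as
begin
    set nocount on

    begin transaction

    -- truncate is transactional in SQL Server, so a rollback restores the rows;
    -- readers block for the duration instead of seeing a half-empty table
    truncate table tblReport_ReceiptsYTD

    insert into tblReport_ReceiptsYTD (receiptMonth, totalAmount)
    select
        month(receiptDate), sum(amount)
    from
        tblReceipts
    where
        year(receiptDate) = year(getdate())
    group by
        month(receiptDate)

    commit transaction
end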

SQL Server allows cross-database queries, so just reference the database explicitly (i.e. instead of "from tblReport_ReceiptsYTD" use "from databaseA.dbo.tblReport_ReceiptsYTD"). Set up a linked server and you can even include other servers. The syntax is [servername].[databasename].[owner].[tablename]... or use openquery() to get really crazy.
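To illustrate (the ReportSrv linked server and ReportingDB database names here are hypothetical):

-- same server, different database: three-part name
select receiptMonth, totalAmount
from ReportingDB.dbo.tblReport_ReceiptsYTD

-- different server, via a linked server: four-part name
select receiptMonth, totalAmount
from ReportSrv.ReportingDB.dbo.tblReport_ReceiptsYTD

-- openquery() sends the whole statement to the linked server and runs it
-- there, which avoids dragging intermediate rows across the network
select *
from openquery(ReportSrv, 'select receiptMonth, totalAmount from ReportingDB.dbo.tblReport_ReceiptsYTD')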

About the last paragraph...
First off, the database is the typical bottleneck. It's hard-drive intensive, whereas the other pieces of the system tend to get cached in RAM. A second reason the database is a typical bottleneck is locking: if one user is updating a row, that row is locked and nobody else can read it until the update commits. Always put the database on the strongest server possible.

I'd load-balance the web servers (or put them in a cluster) more for reliability than performance: if one goes down, users typically won't notice. Clustering the database server would also be nice, but it's difficult to get right. See http://en.wikipedia.org/wiki/Cluster_%28computing%29

I believe you don't have enough data to define the best architecture. Performance is never a one size fits all kinda thing. You have an existing system, so I'd...

  1. Check the hardware to make sure nothing is failing.
  2. Run Performance Monitor and watch disk/memory/CPU usage, swap space size/usage, etc. over time to make sure the servers are beefy enough for the job.
  3. Assuming 1 and 2 are OK, start looking at which component (web server, middle tier(s), or database) is the bottleneck. Basically "divide and conquer":


    • Performance Monitor includes counters for IIS; review them over time.
    • You'll have to figure out how to measure any business/data tier in the middle.
    • Use SQL Server Management Studio and SQL Server Profiler to identify SQL Server bottlenecks. Management Studio shows any detected deadlocks, so check that first. I'd then use Profiler like this: identify a report that's slow, then identify which stored procedure/query it uses. Use Profiler's filters to narrow the trace as much as possible... you're looking for the actual statement WITH the parameters used. Take that back to Management Studio: open a new window, paste in the query, turn on "Include Actual Execution Plan" (under the Query menu), and run it. It'll take longer, but SQL Server will show you the bottleneck (as a %). A sketch of that session follows below.
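For example, here's roughly what that Management Studio session looks like. The statement itself is a hypothetical capture from Profiler; the SET STATISTICS options add per-query I/O and timing numbers to the Messages tab:

-- turn on per-query measurements before running the captured statement
set statistics io on    -- logical/physical reads, per table
set statistics time on  -- compile and execution CPU + elapsed time

-- hypothetical statement captured from Profiler, parameters and all
select receiptMonth, totalAmount
from tblReport_ReceiptsYTD
where receiptMonth = 4

set statistics io off
set statistics time off

-- with "Include Actual Execution Plan" enabled, the Execution Plan tab
-- shows each operator's relative cost as a %; start with the biggest one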



This should give you enough data to know how to fix this performance issue.

Most importantly: define performance standards and measure them over time. My personal standards are: web pages must take no longer than 5 seconds, and user-facing SQL statements/stored procedures must be sub-second.
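On the SQL side, the 2005 DMVs make that measurable without leaving Profiler running all day. A sketch that lists the slowest statements by average elapsed time since the last restart (times in sys.dm_exec_query_stats are in microseconds, hence the division):

select top 20
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000 as avg_elapsed_ms,
    substring(st.text, (qs.statement_start_offset / 2) + 1,
        ((case qs.statement_end_offset
              when -1 then datalength(st.text)
              else qs.statement_end_offset
          end - qs.statement_start_offset) / 2) + 1) as statement_text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by avg_elapsed_ms desc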

Performance tuning is a set of trade-offs: you give up x to achieve y. Example: Reporting Services tends to give up space and performance for easy, user-definable reports. To make reporting faster, write and optimize the high-traffic reports yourself (thus losing the user-definable feature).

A few tools to help fix performance issues (in no particular order):

  • Obviously, fix and/or upgrade any failing hardware.
  • Recode any bottlenecks found in the web pages or middle tier(s).
  • Database tuning is a bit trickier...

    • Make sure SQL Server never does a table scan; you want either an index seek or an index scan. Add/rework indexes as needed. (Table scans on temp tables and table variables MIGHT be OK... choose carefully. See the index sketch after this list.)
    • De-normalize data as needed if the joins get too intense.
    • Some tables simply hold too much data to query effectively:

      • Look at table partitioning as a possible fix.
      • Consider pre-aggregating data. Example: assuming your system reports on receipts in the past week, month, and/or year, have a nightly process aggregate all receipts into separate weekly, monthly, and yearly tables, and base all reports on those new tables.


  • I actually would recommend a 3rd server. Have the database live on the largest box and have 2 identical, load-balanced web servers. Usually we don't get the hardware we need/want, so if you're stuck with two servers, the web-server/database-server split is probably your best bet. Always put your database on the most powerful box possible; it's usually the bottleneck and doesn't always scale well.
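On the index-seek bullet above: a common cause of table scans is a predicate that wraps the indexed column in a function, and the YTD query earlier does exactly that with year(receiptDate). A sketch of the fix, using a hypothetical index name (INCLUDE is available from SQL Server 2005 on):

-- covering index: the aggregation can be answered from the index alone
create index IX_tblReceipts_receiptDate
on tblReceipts (receiptDate)
include (amount)

-- sargable rewrite: a range on the bare column can use an index seek,
-- whereas year(receiptDate) = year(getdate()) forces a scan of every row
select
month(receiptDate) as receiptMonth, sum(amount) as totalAmount
from
tblReceipts
where
receiptDate >= dateadd(year, datediff(year, 0, getdate()), 0)        -- Jan 1, this year
and receiptDate < dateadd(year, datediff(year, 0, getdate()) + 1, 0) -- Jan 1, next year
group by
month(receiptDate)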


Side note: MAKE SURE the database is being backed up, and TEST the recovery process. Losing the database in this configuration means you're toast and (if it were me) out of a job.


...hope it helps...
 
