Hi All,

I have a complex view that inner-joins 6 tables and applies a MAX function.

Of the 6 tables, only one holds about 100 million rows of plain data (no LOBs).

I am running one query which is like
Select * 
  from ComplexView 
  where id in (Select id from AnotherView where dt between 'somedate' and 'someotherdate')

I verified the indexes and statistics for all the base tables used in ComplexView, and all of them show less than 10% average fragmentation.
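A fragmentation check along these lines is typically done against the index DMV; a sketch, assuming SQL Server and using `dbo.BigTable` as a placeholder for the actual 100-million-row table:

```sql
-- Average fragmentation for every index on one base table
-- ('dbo.BigTable' is a placeholder for the real table name)
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'),
                                    NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;
```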

When I run this query, it takes more than 2 hours. But if I rebuild the indexes and update the stats,
it returns the result in 10 seconds. Somehow, after some time (in prod), it goes back to taking 2 hours.


Select id, max(case ...)
from tab1 ... tab6 (all inner joins)
group by id

Looking at the execution plan, I'm confused about whether the WHERE condition is evaluated first or the view logic.

In general, is the WHERE clause considered first, or is the view executed first?

Can anyone please suggest what might be causing the perf problem, without altering the view and without rebuilding the indexes every 2 hours?

Thanks in advance

What I have tried:

For testing purposes, I just took the result set of
Select id from AnotherView where dt between 'somedate' and 'someotherdate'
into a temp table and ran the query below:
Select * from ComplexView where id in (select id from #tmp) 

This ran in almost 30 seconds.
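Spelled out as a full script, the test above would look roughly like this (the date literals are placeholders carried over from the question):

```sql
-- Materialize the ids from AnotherView into a temp table first
SELECT id
INTO #tmp
FROM AnotherView
WHERE dt BETWEEN 'somedate' AND 'someotherdate';

-- Then probe ComplexView with the pre-computed id list
SELECT cv.*
FROM ComplexView AS cv
WHERE cv.id IN (SELECT id FROM #tmp);

DROP TABLE #tmp;
```

That this runs in 30 seconds while the single combined query takes 2 hours suggests the optimizer picks a poor plan when it has to estimate the IN-subquery's row count up front.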
Updated 16-Jun-17 2:48am

1 solution

You say that after rebuilding the indexes and stats it takes "some time" to return to the 2-hour execution for your query. How long is "some time"? Hours, days, weeks?

It sounds like what you need is to create a maintenance plan for your DB. This includes index rebuilding, etc. With a table of 100+ million rows, if this is a transaction table with lots of writes/updates/deletes, the indexes can get fragmented quickly.
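A minimal sketch of such a maintenance step, again using the placeholder `dbo.BigTable`: reorganize at moderate fragmentation, rebuild at heavy fragmentation (the common 5%/30% rule of thumb), and refresh statistics either way, since stale statistics alone can produce this fast-then-slow pattern even when fragmentation is low:

```sql
-- One maintenance pass for a single table (name is a placeholder)
DECLARE @frag float;

SELECT @frag = avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'),
                                    1, NULL, 'LIMITED');  -- 1 = clustered index

IF @frag >= 30
    ALTER INDEX ALL ON dbo.BigTable REBUILD;      -- heavy fragmentation
ELSE IF @frag >= 5
    ALTER INDEX ALL ON dbo.BigTable REORGANIZE;   -- moderate fragmentation

-- Refresh stats regardless; often this alone is enough to fix the plan
UPDATE STATISTICS dbo.BigTable;
```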

After reading your comment: without knowing what this system is being used for, you might want to consider setting up a reporting/data warehouse system. This could help offload the work being done on your transaction system.
[no name] 16-Jun-17 9:30am    
Even MSSQL cannot perform miracles. A 5 for this.
pratap420 19-Jun-17 7:46am    
Thanks for your solution. "Some time" here is only 4 hours. The worst part is that per day we have at most 1000 rows of writes/updates/deletes on that huge table.

The maintenance plan runs twice a week.
AnvilRanger 19-Jun-17 8:21am    
Added a small note to the solution you might want to consider.
CHill60 19-Jun-17 14:02pm    
I would also suggest using a JOIN rather than an IN, and listing the specific columns rather than using *.
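That suggestion could be sketched like this (the column names are placeholders, since the question never lists the view's columns). Note that if AnotherView can return duplicate ids, a plain JOIN would multiply rows, so DISTINCT is needed to keep the same semantics as IN:

```sql
-- Hypothetical rewrite: JOIN instead of IN, explicit columns instead of *
SELECT DISTINCT cv.col1, cv.col2   -- col1, col2 are placeholder column names
FROM ComplexView  AS cv
JOIN AnotherView  AS av
  ON av.id = cv.id
WHERE av.dt BETWEEN 'somedate' AND 'someotherdate';
```

An EXISTS predicate on AnotherView would be another semantically equivalent option that avoids the DISTINCT.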
CHill60 19-Jun-17 14:05pm    
Also it may be a completely different query that is messing up the execution plan. See

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
