|
The problem is that a CTE may be materialized as a temporary table (which you will then see as a Worktable in the plan), but that is not guaranteed; the optimizer can still choose to do it the other way. And as far as I know, SQL Server doesn't have a materialize hint like Oracle's.
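As a workaround sketch (all table and column names below are made up for illustration), the usual way to force materialization in SQL Server is to write the intermediate result into a temp table yourself instead of relying on the CTE:

```sql
-- Instead of a CTE that may or may not be spooled, materialize explicitly
-- (hypothetical table and column names):
SELECT customer_id, SUM(amount) AS total
INTO #totals
FROM orders
GROUP BY customer_id;

-- The temp table can now be referenced several times without recomputation
SELECT * FROM #totals WHERE total > 1000;

DROP TABLE #totals;
```

This trades the optimizer's flexibility for a predictable plan: the aggregate is computed exactly once, at the cost of the extra write to tempdb.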
|
|
|
|
|
There are lots of possibilities. The first question that comes to mind is whether you are using an ERP product or a system built in-house. Depending on that, the options may be narrowed because of responsibility issues. Depending on the situation, I would consider (and discuss with the responsible party) tools such as (in no particular order):
- a standby database in read-only mode: the data is not up to date, but near it (depending on the backup strategy), and all ad hoc queries go there
- filegroups, to spread your data across multiple devices
- partitioning, again to spread your data across multiple devices
- replication, which is of course one option, as already mentioned
And especially if the system is built in-house:
- pinpoint the bottlenecks
- use smart indexing
- reconsider the design, even if it affects application logic etc.
The list most certainly isn't complete, but it should raise questions and bring different tools into the discussion at your place.
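To illustrate the filegroup idea from the list above, here is a minimal sketch; the database name, file name, and path are all hypothetical:

```sql
-- Add a filegroup backed by a file on a separate physical disk
ALTER DATABASE ErpDb ADD FILEGROUP Reporting;
ALTER DATABASE ErpDb ADD FILE (
    NAME = N'ErpDb_Reporting',
    FILENAME = N'F:\Data\ErpDb_Reporting.ndf'
) TO FILEGROUP Reporting;

-- Heavily-read objects can then be placed on that filegroup
CREATE TABLE dbo.ReportCache (id INT PRIMARY KEY, payload NVARCHAR(MAX))
ON Reporting;
```

The benefit only materializes when the new file really sits on a different physical spindle (or array) than the rest of the data, as noted later in this thread.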
mika
|
|
|
|
|
Thanks a lot for these tips.
We are in fact using an ERP product, so I am indeed not allowed to touch the db really. I am going to look into replication for now, but in a couple of years we are up for a new server (hardware), and I will start looking into spreading file groups over several disks then.
My advice is free, and you may get what you paid for.
|
|
|
|
|
No problem. Before going to replication, which requires a bit more administration, I would suggest having a look at standby databases in SQL Server. I'm suggesting this because, from an administrative point of view, you won't have issues if the structure of the db changes, etc. Of course, if you want to modify the data as it is transferred from the primary to the secondary database, then replication is the better option.
|
|
|
|
|
An interesting suggestion. From what I just quickly looked up on standby servers, the principal goal is failover. However, using such a "backup" database for my purposes would obviously be perfect, as I will only need to read from it.
As I also understand it, the basic idea is (and what it would mean for us):
1. make a backup (10 - 15 min)
2. copy the backup elsewhere (we wouldn't need to do this)
3. restore the backup to another database (15 - 20 min)
That would be approximately 25 - 35 min per operation.
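The three steps above can be sketched in T-SQL; the database name and paths here are hypothetical:

```sql
-- 1. Back up the transaction log on the primary
BACKUP LOG ErpDb TO DISK = N'D:\LogShip\ErpDb_log.trn';

-- 2. Copy the .trn file to the standby machine (scheduled job, robocopy, etc.)

-- 3. Restore it on the standby, keeping the database readable between restores
RESTORE LOG ErpDb_Standby
FROM DISK = N'D:\LogShip\ErpDb_log.trn'
WITH STANDBY = N'D:\LogShip\ErpDb_undo.dat';
```

The WITH STANDBY option is what leaves the secondary database in read-only mode for ad hoc queries while still allowing further log restores.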
How much or how little does the operation affect server performance during that time? Also, does it make sense to do this on one and the same physical server?
My advice is free, and you may get what you paid for.
|
|
|
|
|
Exactly. A few comments:
Johan Hakkesteegt wrote: 1. make a backup (10 - 15 min)
Log backup
Johan Hakkesteegt wrote: 2. copy the backup elsewhere (we wouldn't need to do this)
This can be automated
Johan Hakkesteegt wrote: 3. restore the backup to another database (15 - 20 min)
As well as this
Johan Hakkesteegt wrote: That would be approximately 25 - 35 min per operation
Don't know about that; it depends on the system. Typically a log backup is very fast. Of course the files must be transferred and reapplied (preferably by SQL Server), so it takes some time, but not much.
Johan Hakkesteegt wrote: How much or little does the operation affect the server performance during that time
It's a backup so yes it affects performance (a bit, again depending on the system).
Johan Hakkesteegt wrote: Also, does it make sense to do this on one and the same physical server ?
Well, if you're going to set up a standby, why not create it on a separate server while you're at it? That way you gain both a new place for ad hoc queries and a safer environment. Another reason I wouldn't put it on the same server is that you want all your resources (CPU, memory, disks) available to the OLTP database. If you install the standby on the same physical server, the instances will have to share these resources, so there's not as much benefit as when they are separated.
|
|
|
|
|
Ok thanks a lot, I am going to look into this.
My advice is free, and you may get what you paid for.
|
|
|
|
|
No problem and good luck.
|
|
|
|
|
As stated, we use log shipping for failover. This requires a second server, whereas replication can be done on the same instance (different database). Since this is an ERP, the schema should not change often, so the replication should not need to be changed often.
The backup (transaction log), copy, and restore do not interfere with the production system's workings. We have one database being backed up every fifteen minutes, one backed up every eight hours, and several in between; it depends on the amount of change to the database. I assume that yours is a typical order entry/production database that would need its log shipped more often, say every ten minutes.
|
|
|
|
|
Excellent idea to spread the load into filegroups across separate physical disks. (Notice I say physical disks: if the disks are part of an array and you move filegroups from E: to F: and both are part of the same array, then nothing is really gained.)
Also, check into SQL Profiler; use it to investigate what your report queries are actually doing. You might be surprised to find that a report query is not using an index.
Sounds like you've gotten a whole pile of advice and you are moving along.
Report back what you've found. We all may find it valuable.
Good luck.
|
|
|
|
|
You actually just mentioned a problem that we are dealing with as well. We use a RAID 5 (I think?) disk system, the one that spreads the data out over all physical disks, so any one disk failing is not a show stopper. This means that I can't dedicate a single drive just to my database to increase read/write performance. A mistake I will not make with the next server.
My advice is free, and you may get what you paid for.
|
|
|
|
|
|
Hi all,
Please have a look at the following query.
select avg(family_income) as income
from family_income
where income_type = 1 and sample_id = 10
group by family_income_name_id
The family_income table has income details of different types. Through GROUP BY I can find the average of each type. Now I want to add up all the averages, but sum(...) doesn't work for me. Any comments are really appreciated.
thanks
I appreciate your help all the time...
CodingLover
|
|
|
|
|
If you want to add all the averages without grouping, you can simply use an inline view:
select sum(iv.income)
from (
select avg(family_income) as income
from family_income
where income_type = 1
and sample_id = 10
group by family_income_name_id) iv
|
|
|
|
|
Thanks a lot. It works fine as I expected.
I'm not that clever with views in SQL. Do you have any articles you've written related to views?
I appreciate your help all the time...
CodingLover
|
|
|
|
|
CodingLover wrote: Thanks a lot. It works fine as I expected
No problem.
CodingLover wrote: Could you have any articles written by your related to views?
Good idea, perhaps someday I'll contribute an article touching this subject.
|
|
|
|
|
Mika Wendelius wrote: Good idea, perhaps someday I'll contribute an article touching this subject.
Yeah. If I learn the stuff correctly, I'll give a try too.
I appreciate your help all the time...
CodingLover
|
|
|
|
|
My company has a project database that has numerous columns. We track everything between tables with the project number, but because of how screwed up the database got in the past, there is another column which is actually the primary key, called OrderStatusID. So if I want to navigate forward through the records, I use the following SQL statement:
SELECT TOP 1 * FROM [Order Status] WHERE OrderStatusID > " & CurrentRecordID & " ORDER BY OrderStatusID"
Now there is another table that holds certificates for the projects. A project may have zero to many certificates. So I was trying to use the SQL statement:
SELECT TOP 1 * FROM [Order Status] LEFT JOIN Lead_Free_Certs ON (CAST([Order Status].[Project #] AS nvarchar(255)) = Lead_Free_Certs.[Project Num]) WHERE (OrderStatusID > " & CurrentRecordID & ") ORDER BY OrderStatusID, Lead_Free_Certs.[Cert Num] DESC
However, if a site has more than one certificate, only the first one is returned because of the SELECT TOP 1. How can this statement be modified so that it returns one row from the [Order Status] table and multiple rows from the Lead_Free_Certs table? Is this even possible? Please keep in mind that, because of the way the database was set up, the SELECT TOP 1 (or something similar) must remain in place, because I am finding the next highest OrderStatusID. Thanks in advance for any help.
|
|
|
|
|
How about
select *
from lead_free_certs
join (
SELECT TOP 1 *
FROM [Order Status]
WHERE OrderStatusID > " & CurrentRecordID & "
ORDER BY OrderStatusID
) t on (CAST(t.[Project #] AS nvarchar(255)) = Lead_Free_Certs.[Project Num])
|
|
|
|
|
When a record is being loaded from the order status table, if there isn't a matching project number in the lead_free_certs table, then no rows are returned. Would a right join work in this circumstance?
|
|
|
|
|
By using a right join it worked; the only problem was that I would have to change a lot of code, because all the column indexes are different. I tried reversing the two sections of the SQL statement and came up with:
SELECT TOP 1 * FROM [Order Status] WHERE OrderStatusID > " & CurrentRecordID & " ORDER BY OrderStatusID JOIN (SELECT * FROM Lead_Free_Certs) t ON t.[Project Num] = CAST([Order Status].[Project #] AS nvarchar(255))
However, I'm getting an error near the JOIN, but I don't know what it is.
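For what it's worth, the likely cause of that error is that ORDER BY must come after all joins in the statement, not before them. A syntactically valid rearrangement would look like this sketch (keeping the original table and column names), though note that applying TOP 1 after the join reintroduces the one-row limitation that the earlier derived-table version avoids:

```sql
SELECT TOP 1 *
FROM [Order Status]
JOIN Lead_Free_Certs t
    ON t.[Project Num] = CAST([Order Status].[Project #] AS nvarchar(255))
WHERE OrderStatusID > " & CurrentRecordID & "
ORDER BY OrderStatusID
```

In other words, the fix for the syntax error and the fix for the multiple-certificates requirement are different problems; putting the TOP 1 in a derived table restricted to [Order Status] solves both.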
|
|
|
|
|
I have an interview on Friday 25 March. Most of the subject matter I am comfortable with, but my SQL is a bit rusty from not using it for around 7 years. So I ask you denizens to pose me a question:
0. Something that would be likely for an interview,
1. Concerning SQL - restrict it to Oracle or SQL Server,
2. That I can try to answer.
I won't use any search, but will try and answer from the thick region of me head.
Thank you guys [and gals].
Panic, Chaos, Destruction.
My work here is done.
or "Drink. Get drunk. Fall over." - P O'H
OK, I will win to day or my name isn't Ethel Crudacre! - DD Ethel Crudacre
|
|
|
|
|
SQL Server question (taken from the SQL Server Central question of the day!)
CREATE TABLE #Money
(ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
Date DATETIME,
Amount INT,)
SET IDENTITY_INSERT #Money ON
INSERT INTO #Money(ID,Date,Amount)
SELECT '1', ' 2/10/2010 ','12' UNION ALL
SELECT '2', ' 2/11/2010 ','13' UNION ALL
SELECT '3', ' 2/12/2010 ','14' UNION ALL
SELECT '4', ' 2/13/2010 ','15'
SELECT SUM(AMOUNT) FROM #Money
Will the CREATE statement fail?
Will the INSERT statement fail?
Will the SELECT SUM(amount) FROM #Money fail?
Possible answers
1. The CREATE TABLE statement will fail, and hence so will all the following sql statements.
2. The INSERT INTO statements will fail and hence so will all the following sql statements
3. The SELECT SUM(Amount) will return a value of 54
4. The SELECT SUM(Amount) .. statement will fail
As barmey as a sack of badgers
Dude, if I knew what I was doing in life, I'd be rich, retired, dating a supermodel and laughing at the rest of you from the sidelines.
|
|
|
|
|
0. I'm not sure #Money is a valid table name, but I'll accept it for now.
1. The end of the CREATE statement is invalid: there is an extra comma (Amount INT,), so it and everything else fails.
2. If the CREATE were okay, then I think the rest is fine and the SELECT will return 54.
Panic, Chaos, Destruction.
My work here is done.
or "Drink. Get drunk. Fall over." - P O'H
OK, I will win to day or my name isn't Ethel Crudacre! - DD Ethel Crudacre
|
|
|
|
|
0. Table names starting with # in SQL Server are temporary tables.
1. I thought the same, but SQL Server ignores the trailing comma and creates the table. I tested the actual CREATE TABLE statement myself, but I don't know why it's allowed!
The correct answer is 54.
Explanation:
From CAST and CONVERT (Transact-SQL)[^]
Implicit Conversions:
Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function. Explicit conversions are those conversions that require the CAST or CONVERT function to be specified.
Scroll down to the illustration which shows all explicit and implicit data type conversions that are allowed for SQL Server system-supplied data types. These include xml, bigint, and sql_variant.
There is no implicit conversion on assignment from the sql_variant data type, but there is implicit conversion to sql_variant.
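The implicit conversion at work in the quiz can be demonstrated in isolation; this small sketch mirrors what happens to the quoted literals on insert:

```sql
-- '12' is a character literal, but assigning it to an INT converts it
-- implicitly, with no CAST or CONVERT required
DECLARE @Amount INT;
SET @Amount = '12';
SELECT @Amount + 2;  -- integer arithmetic on the converted value: 14
```

The same conversion happens column by column in the quiz's INSERT, which is why SUM(Amount) can then add the values as integers and return 54.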
As barmey as a sack of badgers
Dude, if I knew what I was doing in life, I'd be rich, retired, dating a supermodel and laughing at the rest of you from the sidelines.
|
|
|
|