|
Sorry I could not give more info, but I have only played around with publish replication (no actual usage). We do log shipping, which could also work, but I think the publish approach would be better for you.
|
|
|
|
|
Don't worry about it, I don't mind doing the research. That nudge in the right direction was really all I needed.
My advice is free, and you may get what you paid for.
|
|
|
|
|
There are a huge number of things you can do to tune the performance of a database. It sounds like you have a major corporate database and no real DBA-level skills to service the thing. I STRONGLY suggest hiring a consultant (finding a good one can be a challenge) to look at the infrastructure.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
I think you got the wrong end of the stick. Admittedly, I don't have the skills of a dedicated database administrator, but I am not completely useless with MS SQL. Our company is just a small (50 people) outfit, our ERP system is a small SAP product (hence, no touching db settings and such), and the database (settings, structure, etc.) in question is designed by SAP and dedicated specifically to the ERP system.
We did engage a consultant, in a sense, by having SAP support take a look at it, but they could find nothing wrong, and only suggested some basic maintenance tasks (which we already use).
The problem appears to be that normal daily use, ad hoc queries, and all the automation we have built around the system strain the database beyond its original design. This is why I am looking into lessening the strain, instead of tuning the db.
My advice is free, and you may get what you paid for.
|
|
|
|
|
If you join together a lot of tables and then make an aggregation on that join in one big query, you'll take a serious performance hit.
Have a look at whether it's possible to do the aggregation separately and then make the joins.
Something similar to this:
WITH SalAgg AS (
    SELECT DeptID,
           SUM(sal) AS sal
    FROM   Emp
    GROUP  BY DeptID
)
SELECT d.DeptID,
       s.sal
       -- plus whatever other columns you need from Dept
FROM   SalAgg s
JOIN   Dept d ON s.DeptID = d.DeptID
If you don't like CTE[^]s you can of course use inline views instead.
Do also take a look at Indexed views[^].
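For the indexed-view route, a minimal sketch might look like the following (view and index names are made up, and it reuses the Emp table from the example above). Note that SQL Server requires SCHEMABINDING, two-part object names, and COUNT_BIG(*) before a unique clustered index can be created on a view with GROUP BY:

```sql
-- Hypothetical sketch of an indexed view; names are assumptions.
CREATE VIEW dbo.vDeptSal
WITH SCHEMABINDING
AS
SELECT DeptID,
       SUM(sal)     AS TotalSal,
       COUNT_BIG(*) AS RowCnt   -- required for an indexed view with GROUP BY
FROM   dbo.Emp
GROUP  BY DeptID;
GO

-- The unique clustered index is what makes the aggregation persisted.
CREATE UNIQUE CLUSTERED INDEX IX_vDeptSal ON dbo.vDeptSal (DeptID);
```

Once the index exists, the optimizer (on Enterprise edition, or with the NOEXPAND hint) can answer the aggregate from the view without touching the base table.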
|
|
|
|
|
I use both normal joins and inline views, but I didn't know that using aggregation with straightforward joins performs differently from inline views. Also, I have never seen this WITH structure you used in your example. Thanks very much for that. I am going to check a lot of our apps and queries now, to see if I can streamline them this way.
My advice is free, and you may get what you paid for.
|
|
|
|
|
The problem is that a CTE may create a temporary table (which you will then see as a Worktable in the plan), but it is not guaranteed to do so; the optimizer can still choose to do it the wrong way. And SQL Server doesn't, as far as I know, have a materialize hint like Oracle has.
|
|
|
|
|
There are lots of possibilities. The first question I have in mind is whether you are using an ERP product or a system built in-house. Depending on that, the possibilities are perhaps narrowed because of responsibility issues. Depending on the situation I would consider (and talk with the responsible party about) tools such as (in no particular order):
- standby database in read-only mode: data not up-to-date, but near it (depending on the backup strategy); all ad hoc queries go here
- using filegroups, spread your data across multiple devices
- partitioning, again to spread your data across multiple devices
- replication is of course one option, as already mentioned
And especially if the system is built in-house:
- pinpoint the bottlenecks
- use smart indexing
- reconsider the design, even if it affects application logic etc.
The list most certainly isn't complete, but it should raise questions and bring different tools into the discussion at your place.
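As a rough illustration of the filegroup idea, a T-SQL sketch (database name, filegroup name, and file path are all made up; adjust for your environment):

```sql
-- Hypothetical: add a second filegroup backed by a file on another disk.
ALTER DATABASE ErpDb ADD FILEGROUP FG_Secondary;

ALTER DATABASE ErpDb
ADD FILE (
    NAME       = N'ErpDb_Secondary',
    FILENAME   = N'F:\SqlData\ErpDb_Secondary.ndf',  -- ideally a separate physical disk
    SIZE       = 1GB,
    FILEGROWTH = 256MB
) TO FILEGROUP FG_Secondary;

-- New tables or rebuilt indexes can then be placed on the new filegroup, e.g.:
-- CREATE TABLE dbo.SomeBigTable (...) ON FG_Secondary;
```

The gain only materializes when the filegroups really sit on separate physical spindles, as pointed out elsewhere in this thread.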
mika
|
|
|
|
|
Thanks a lot for these tips.
We are in fact using an ERP product, so I am indeed not really allowed to touch the db. I am going to look into replication for now, but in a couple of years we are up for a new server (hardware), and I will start looking into spreading filegroups over several disks then.
My advice is free, and you may get what you paid for.
|
|
|
|
|
No problem. Before going to replication, which requires a bit more administration, I would suggest having a look at standby databases in SQL Server. I'm suggesting this since from an administrative point of view you won't have issues if the structure of the db is changed, etc. Of course, if you want to modify the data when it's transferred from the primary to the secondary database, then replication is a better option.
|
|
|
|
|
An interesting suggestion. As I understand what I just quickly looked up on standby servers, the principal goal is failover. However, using such a "backup" database for my purposes would obviously be perfect, as I will only need to read from it.
As I also understand it though, the basic idea is (and what it would mean for us):
1. make a backup (10 - 15 min)
2. copy the backup elsewhere (we wouldn't need to do this)
3. restore the backup to another database (15 - 20 min)
That would be approximately 25 - 35 min per operation.
How much or how little does the operation affect the server performance during that time? Also, does it make sense to do this on one and the same physical server?
My advice is free, and you may get what you paid for.
|
|
|
|
|
Exactly. A few comments:
Johan Hakkesteegt wrote: 1. make a backup (10 - 15 min)
Log backup
Johan Hakkesteegt wrote: 2. copy the backup elsewhere (we wouldn't need to do this)
This can be automated
Johan Hakkesteegt wrote: 3. restore the backup to another database (15 - 20 min)
As well as this
Johan Hakkesteegt wrote: That would be approximately 25 - 35 min per operation
Don't know about that; it depends on the system. Typically a log backup is very fast. Of course the files must be transferred and reapplied (preferably by SQL Server), so it takes some time, but not much.
Johan Hakkesteegt wrote: How much or little does the operation affect the server performance during that time
It's a backup, so yes, it affects performance (a bit, again depending on the system).
Johan Hakkesteegt wrote: Also, does it make sense to do this on one and the same physical server ?
Well, if you're going to set up a standby, why not create it on a separate server while you're at it. That way you gain both a new place for ad hoc queries and a safer environment. Another reason why I wouldn't put it on the same server is that you want all your resources (CPU, memory, disks) to go to the OLTP database. If you install the standby on the same physical server, the instances will have to share these resources, so there's not as much benefit as when they are separated.
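A minimal sketch of one cycle of the warm-standby idea in T-SQL (database names and file paths are hypothetical; the standby database would first have been restored from a full backup WITH STANDBY):

```sql
-- On the primary server: back up the transaction log.
BACKUP LOG ErpDb TO DISK = N'E:\Backup\ErpDb.trn';

-- On the standby server: apply the log backup. The STANDBY option
-- writes uncommitted work to an undo file, so the database stays
-- readable (read-only) between restores.
RESTORE LOG ErpDb
FROM DISK = N'E:\Backup\ErpDb.trn'
WITH STANDBY = N'E:\Backup\ErpDb_undo.dat';
```

In practice the backup, copy, and restore steps would be scheduled jobs, which is essentially what SQL Server's built-in log shipping automates.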
|
|
|
|
|
Ok thanks a lot, I am going to look into this.
My advice is free, and you may get what you paid for.
|
|
|
|
|
No problem and good luck.
|
|
|
|
|
As stated, we use log shipping for failover. This requires a second server, whereas replication can be done on the same instance (different database). Since this is an ERP, it should not change often, thus the replication setup should not need to be changed often.
The backup (transaction log), copy, and restore do not interfere with the workings of the production system. We have a database being backed up every fifteen minutes and one backed up every eight hours, and several in between. It depends on the amount of change to the database. I assume that yours is a typical order entry/production database that would need logs shipped more often, say every ten minutes.
|
|
|
|
|
Excellent idea to spread the load into filegroups across separate physical disks. (Notice I say physical disks; if the disks are part of an array and you move filegroups from E: to F: and they are part of the same array, then nothing is really gained.)
Also, check into SQL Profiler; use it while investigating what your report queries are actually doing. You might be surprised to find that a report query is not using an index.
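Alongside Profiler, SQL Server's dynamic management views (available from SQL Server 2005 on) can show whether indexes are actually being used. The DMV is real; the query itself is just a sketch:

```sql
-- How often each index has been used since the last server restart.
-- Indexes with zero seeks/scans on a heavily queried table suggest
-- the optimizer is ignoring them (or the queries can't use them).
SELECT o.name AS table_name,
       i.name AS index_name,
       s.user_seeks,
       s.user_scans,
       s.user_lookups
FROM   sys.dm_db_index_usage_stats AS s
JOIN   sys.indexes AS i
       ON i.object_id = s.object_id AND i.index_id = s.index_id
JOIN   sys.objects AS o
       ON o.object_id = s.object_id
WHERE  s.database_id = DB_ID()
ORDER  BY s.user_seeks + s.user_scans;
```

Note the counters reset when the instance restarts, so check them after a representative period of activity.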
Sounds like you've gotten a whole pile of advice and you are moving along.
Report back what you've found. We all may find it valuable.
Good luck.
|
|
|
|
|
You actually just mentioned a problem that we are dealing with as well. We use a RAID 5 (I think?) disk system, the one that spreads the data out over all physical disks, so any one disk failing is not a show stopper. This means that I can't dedicate a single drive just to my database to increase read/write performance. A mistake I will not make with the next server.
My advice is free, and you may get what you paid for.
|
|
|
|
|
|
Hi all,
Please have a look at the following query.
select avg(family_income) as income
from family_income
where income_type = 1
  and sample_id = 10
group by family_income_name_id
The family_income table has income details with different types. Through the group by I can find the average of each type. Now I want to add all the averages together. sum(...) doesn't work for me. Any comments are really appreciated.
thanks
I appreciate your help all the time...
CodingLover
|
|
|
|
|
If you want to add all the averages without grouping, you can simply use an inline view:
select sum(iv.income)
from (
select avg(family_income) as income
from family_income
where income_type = 1
and sample_id = 10
group by family_income_name_id) iv
|
|
|
|
|
Thanks a lot. It works fine as I expected.
I'm not that clever with views in SQL. Do you have any articles you have written related to views?
I appreciate your help all the time...
CodingLover
|
|
|
|
|
CodingLover wrote: Thanks a lot. It works fine as I expected
No problem.
CodingLover wrote: Do you have any articles you have written related to views?
Good idea, perhaps someday I'll contribute an article touching this subject.
|
|
|
|
|
Mika Wendelius wrote: Good idea, perhaps someday I'll contribute an article touching this subject.
Yeah. If I learn the stuff correctly, I'll give it a try too.
I appreciate your help all the time...
CodingLover
|
|
|
|
|
My company has a project database that has numerous columns. We track everything between tables with the project number, but because of how screwed up the database got in the past, there is another column, called OrderStatusID, which is actually the primary key. So if I want to navigate forward through the records I use the following SQL statement:
SELECT TOP 1 * FROM [Order Status] WHERE OrderStatusID > " & CurrentRecordID & " ORDER BY OrderStatusID
Now there is another table that holds certificates for the projects. A project may have zero to many certificates. So I was trying to use the SQL statement:
SELECT TOP 1 * FROM [Order Status] LEFT JOIN Lead_Free_Certs ON (CAST([Order Status].[Project #] AS nvarchar(255)) = Lead_Free_Certs.[Project Num]) WHERE (OrderStatusID > " & CurrentRecordID & ") ORDER BY OrderStatusID, Lead_Free_Certs.[Cert Num] DESC
However, if a site has more than one certificate, only the first one is returned because of the SELECT TOP 1. How can this statement be modified so that it returns one row from the [Order Status] table and multiple rows from the Lead_Free_Certs table? Is this even possible? Please keep in mind that because of the way the database was set up, the SELECT TOP 1 (or something similar) must remain in place, because I am finding the next highest OrderStatusID. Thanks in advance for any help.
|
|
|
|
|
How about
select *
from lead_free_certs
join (
SELECT TOP 1 *
FROM [Order Status]
WHERE OrderStatusID > " & CurrentRecordID & "
ORDER BY OrderStatusID
) t on (CAST(t.[Project #] AS nvarchar(255)) = Lead_Free_Certs.[Project Num])
|
|
|
|