Can you tell us which hospital it is, so we can avoid it?
If you have no idea about even doing the ER diagram, what chance have you when it comes to coding?
Anyway, it sounds like a piece of coursework, so do it yourself and actually EARN your pass.
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
I'm scared, very very scared!
Paging Doctor Pain, paging Doctor Pain...
“If you think you are worth what you know, you are very wrong. Your knowledge today does not have much value beyond a couple of years. Your value is what you can learn and how easily you can adapt to the changes this profession brings so often.” - Jose M. Aguilar
Hello guys,
I would like to create a log of what is happening in my database. What I mean exactly is that I need to log every update, insert and delete statement that executes in my database. For example, when a delete happens I want to keep the values that have been deleted; when an update happens I want to keep the old values and the new ones, etc.
Another important thing is that I want to achieve this without the use of triggers, if possible.
How can this be done?
cdpace wrote: Another important thing is that i want to achieve this without the use of triggers is possible
Can't see an alternative to triggers myself. How else would you effectively capture the insert/update/delete events?
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
OK, got your point. Can I ask you one more thing: what are the column names for
the following tables that are used in triggers?
Table names:
inserted
deleted
Check this.[^]
I Love T-SQL
"Don't torture yourself, let life do it for you."
If my post helps you, kindly save my time by voting for my post.
www.aktualiteti.com
The column names for inserted and deleted are the same as the table the trigger is for. For inserts only inserted is populated, for deletes only deleted is populated, and for an update, guess what, they are both populated.
I often select the inserted & deleted rows as XML and store them in a single audit record.
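A minimal sketch of that kind of trigger, assuming a hypothetical MyTable and a MyTable_Audit log table (both names are illustrative, not from the thread):

```sql
-- Hypothetical audit trigger: stores the inserted/deleted rows as XML.
CREATE TRIGGER trg_MyTable_Audit
ON MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO MyTable_Audit (AuditDate, NewValues, OldValues)
    SELECT GETDATE(),
           (SELECT * FROM inserted FOR XML AUTO),  -- NULL for a delete
           (SELECT * FROM deleted  FOR XML AUTO);  -- NULL for an insert
END
```

For an update both columns end up populated, so the before and after images sit side by side in one audit row.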
Hope this helps
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
This helps a lot, thank you very much. Can I ask you one last thing: do you have any links that show how I can retrieve the data as XML? Thank you very much.
Just take a look at BOL or Google, it's not difficult.
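For a flavour of what BOL covers: the FOR XML clause turns a result set into a single XML value (table and column names here are hypothetical):

```sql
-- One XML fragment per row, columns as attributes.
SELECT OrderID, CustomerName
FROM Orders
FOR XML AUTO;

-- Finer control over the shape: named elements under a root node.
SELECT OrderID, CustomerName
FROM Orders
FOR XML PATH('Order'), ROOT('Orders');
```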
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
Triggers are hell slow, don't use them, use C2 audit.
Best Of Regards,
SOFTDEV
Sad like books with torn pages, sad like unfinished stories ...
softdev_sup wrote: Triggers are hell
True, if badly written, but we use them regularly for auditing on tables with upwards of 20 million transactions a month without problems.
softdev_sup wrote: use C2 audit
Never used it, does it actually record before and after images, the user etc.? Also I would be worried about it bringing the system down - from MSDN:
Important:
C2 audit mode saves a large amount of event information to the log file, which can grow quickly. If the data directory in which logs are being saved runs out of space, SQL Server will shut itself down. If auditing is set to start automatically, you must either restart the instance with the -f flag (which bypasses auditing), or free up additional disk space for the audit log.
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
Well, I tried one piece of software, ApexSQLAudit, for auditing SQL, and it uses triggers; system performance after installing it was so bad the server was almost dead, while before ApexSQLAudit the system was efficient. After removing it, C2 audit and a SQL Profiler trace did the job.
I am confused now, because that system has been running for more than two years and has not crashed, but with the triggers we had to remove ApexSQLAudit within one week because of the slowness.
Best Of Regards,
SOFTDEV
Sad like books with torn pages, sad like unfinished stories ...
So you used a generic logging system and it slowed your system. No big surprise there. As I said, we use triggers for auditing and have never had performance issues, but we write them ourselves so they are optimised for our system, with the audit tables properly located and indexed etc.
You may have had a bad experience with triggers, but do not assume they are bad, it's just a case of using them properly - I have no experience of ApexSQLAudit, but I wouldn't expect some generic product to be as performant as one written specifically for your database by someone who knows what they are doing.
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
In synopsis:
Go to SQL Server Management Studio, Object Explorer.
Select the server, right click on it and go to Properties.
Then navigate to Security -----> Enable C2 audit tracing.
That's all. The only disadvantage of this is that it consumes more space on the HDD, but that is the requirement, and nowadays with the size of HDDs it does not matter.
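The same setting can also be turned on from T-SQL with sp_configure (it is an advanced option, and the instance has to be restarted for it to take effect):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'c2 audit mode', 1;
RECONFIGURE;
-- Restart the SQL Server instance for the change to take effect.
```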
Best Of Regards,
SOFTDEV
Sad like books with torn pages, sad like unfinished stories ...
Hello,
I need to write a query.
I have a table with many operators (I can get them using a query) and for each of them I need to calculate its success rate (count of fail units query / count of all units query). I can do the calculation of the success rate for every operator,
but how can I write a query that will loop through all the operators and show the success rate of each operator (in one table)?
The operators and the units (counts for fail and all units) are located in one table in the DB.
The results will be as follows:
there will be an operator column and a success rate column. In the operator column it will show the operators from the table, and in the second column it will show the success rate of each operator.
How can I do it?
I use the following query for the success rate (of one operator):
select (convert(float,(select count(distinct jobid)
from v_equipattrrep where ([assigndate] > '01/01/2009' or jobcompletiontime>'01/01/2009') and
[operator] like 'mgroiser' and
equipname like '%fib%' and equipname not like '%test%' and
iscabinet = 0 and utilcatdesc = 'Debug' and labid = '37' and jobresult='success')))/(select count(distinct jobid)
from v_equipattrrep where ([assigndate] > '01/01/2009' or jobcompletiontime>'01/01/2009') and
[operator] like 'mgroiser1' and
equipname like '%fib%' and equipname not like '%test%' and
iscabinet = 0 and utilcatdesc = 'Debug' and labid = '37' and jobresult is not null) *100
I tried to write as follows:
select (convert(float,sum(case when jobresult='success' then 1 else 0 end))/ count(distinct jobid)) * 100
from v_equipattrrep where ([assigndate] > '01/01/2009' or jobcompletiontime>'01/01/2009') and
equipname like '%fib%' and equipname not like '%test%' and
iscabinet = 0 and utilcatdesc = 'Debug' and labid = '37' and jobresult is not null
group by [operator]
But it doesn't work well because I must use a distinct in the sum(case...) too.
That's one, no, two ugly queries. You have a couple of options: you can move this into a cursor and insert the operator information into a table variable. I hate cursors and would not recommend this.
I would move the operator information out of the where clause into the select area and join the two result sets by operator. This will allow you to use group by operator to get the results without using a cursor.
Research GROUP BY and INNER JOIN.
You are probably going to have to muck with the calc as well.
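One way the calc could go in a single GROUP BY, keeping the distinct and avoiding a cursor: COUNT(DISTINCT CASE ...) counts only the distinct jobids whose result was 'success'. A sketch against the poster's view, untested against the real data:

```sql
-- Success rate per operator in one pass over the view.
select [operator],
       convert(float, count(distinct case when jobresult = 'success'
                                          then jobid end))
         / count(distinct jobid) * 100 as success_rate
from v_equipattrrep
where ([assigndate] > '01/01/2009' or jobcompletiontime > '01/01/2009')
  and equipname like '%fib%' and equipname not like '%test%'
  and iscabinet = 0 and utilcatdesc = 'Debug' and labid = '37'
  and jobresult is not null
group by [operator]
```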
Never underestimate the power of human stupidity
RAH
The static query executes fine but there is a problem when I make it dynamic. Please ignore the logic of the query, only consider the syntax.
set @sSQL = 'select s.first_name from sfm_students s
left outer join
(select seq_id from sfm_classes)c
on c.seq_id = s.sfm_classes_seq_id'
print @sSQL
exec @sSQL
When executing the above procedure, for example:
exec GetInfo
the following error is produced:
Msg 203, Level 16, State 2, Procedure GetInfo, Line 15
The name 'select s.first_name from sfm_students s
left outer join
(select seq_id from sfm_classes)c
on c.seq_id = s.sfm_classes_seq_id' is not a valid identifier.
modified on Friday, August 28, 2009 5:15 AM
try
exec (@sSQL)
You need the brackets, otherwise SQL Server treats the whole string as a procedure name.
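For reference, a sketch of the difference the brackets make, plus the sp_executesql alternative (the sample string is illustrative):

```sql
declare @sSQL nvarchar(max);
set @sSQL = N'select first_name from sfm_students';

-- exec @sSQL;            -- wrong: the whole string is taken as a
                          -- procedure name, giving Msg 203
exec (@sSQL);             -- executes the string as a batch
exec sp_executesql @sSQL; -- preferred: supports parameters and plan reuse
```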
Never underestimate the power of human stupidity
RAH
Hi
I have a scenario in which I need to move data from one database to another, depending on some conditions, so that it will reduce some load on one database. For example, we have databases A and B, and let's say lots of new data comes into database A daily. Now I want to put in logic so that some of the infrequently used data in database A can be moved to database B on a daily basis. Once data is moved from A to B, it will be deleted from A, thus reducing load on database A. So this process goes on daily and data from A will be moved to B. So B contains all the data that has been moved from A daily, and no data from a previous day is overwritten in B. Will replication be helpful in this scenario?
What you are talking about is ARCHIVING, and replication can be useful here. You need to identify the database you are using; the strategy will be different for each type (a SQL Server/Oracle/MySQL strategy does not translate to Access).
Personally we use an SSIS (SQL Server) package to move the data from the active DB to the archive DB in the low processing period (2am in our case) and do any other maintenance required.
You might also look into creating an OLAP cube as your archive destination.
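A sketch of the nightly move-and-delete step, assuming hypothetical table names and a same-instance archive database. DELETE with OUTPUT INTO copies and removes the rows in one statement, so nothing can slip through between the copy and the delete:

```sql
-- Move rows not touched for 90 days from the active DB to the archive DB.
-- ActiveDB, ArchiveDB, Orders and LastAccessed are illustrative names.
DELETE FROM ActiveDB.dbo.Orders
OUTPUT deleted.* INTO ArchiveDB.dbo.Orders
WHERE LastAccessed < DATEADD(day, -90, GETDATE());
```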
Never underestimate the power of human stupidity
RAH
Hi all,
Is there something like ## in SQL Server related to the scope of temporary tables?
What decides the scope of the temp tables?
thanks
dipak
Yes, a single # prefix creates a temp table only available to the connection it was created on; a ## prefix makes the temp table available to all connections.
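A quick illustration of the two scopes:

```sql
CREATE TABLE #local   (id int);  -- visible only to this connection;
                                 -- dropped when the connection closes
CREATE TABLE ##global (id int);  -- visible to every connection;
                                 -- dropped once the creating connection
                                 -- closes and no one else references it
```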
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP