|
|
jonatan_556 wrote: I would not want to loose those two reports as the design took me one day to do. If anyone knows a solution that I could use to import/convert .rdlc reports into .rdl I would appreciate it.
This advice may help you - use some form of version control.
You need never find yourself in the position of losing any previous work if you set up the correct protocols for changes and use version control.
My personal suggestion is to use Git[^].
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
Hi,
Sorry for the delay in answering.
Many thanks for this.
|
|
|
|
|
You're welcome - version control is your friend in these sorts of situations.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
Hi Friends,
I have a database on which I have enabled CDC on some tables. As CDC captures auditing data on a per-transaction basis, I am accessing that data and writing it to a flat file using a stored procedure.
Is there any way in T-SQL to wait until a transaction finishes before capturing the data? If I capture the information halfway through, we may miss some of it.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Can't you use AFTER INSERT / AFTER UPDATE triggers in this scenario? Am I missing something that you are not explaining in detail?
|
|
|
|
|
Sure, but a trigger works at the table level, whereas I want this at the transaction level - one transaction may involve multiple tables. The other interesting and more complex thing is that Change Data Capture reads the log file asynchronously to write the values into the audit tables. Luckily, CDC writes the information per transaction - it gives the same __$start_lsn for all rows of the same transaction, which I can use.
But, for example, when I am retrieving data from the CDC audit tables and putting it into the flat tables, I want to wait if CDC is in the middle of writing a transaction to the CDC audit tables. Can I do that?
Thanks in advance.
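The per-transaction grouping described above can be sketched with the standard CDC query functions. This is only an illustration - the capture instance name dbo_Orders is hypothetical, not from the original post:

```sql
-- Sketch: list CDC changes grouped by transaction, using __$start_lsn.
-- Assumes CDC is enabled with a capture instance named dbo_Orders (hypothetical).
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Orders');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

SELECT __$start_lsn,                    -- same value for every row of one transaction
       COUNT(*) AS rows_in_transaction
FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all')
GROUP BY __$start_lsn
ORDER BY __$start_lsn;
```

Because sys.fn_cdc_get_max_lsn() returns the highest LSN already hardened into the change tables, restricting the export to LSNs at or below it is one way to avoid reading a transaction that is still being written.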
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
My suggestion - give each 'transaction' a GUID.
Create a table that holds the GUIDs of 'transactions' that have been completed. Write the relevant GUID into this table when a 'transaction' is completed.
Then when it comes to determining whether the data is ready to be sent to flat files just look to see if the GUID exists in your completed 'transaction' GUID table.
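A minimal sketch of that bookkeeping, with illustrative table and column names (none of these come from the original post):

```sql
-- Completed-'transaction' registry: a GUID appears here only once the
-- whole logical transaction has been written.
CREATE TABLE dbo.CompletedTransactions
(
    TransactionGuid UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    CompletedAt     DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- The writer records the GUID as its final step:
DECLARE @MyTransactionGuid UNIQUEIDENTIFIER = NEWID();
-- ... write all the rows tagged with @MyTransactionGuid ...
INSERT INTO dbo.CompletedTransactions (TransactionGuid)
VALUES (@MyTransactionGuid);

-- The flat-file export only picks up rows whose GUID is registered:
SELECT s.*
FROM dbo.StagingRows s                  -- hypothetical staging table
JOIN dbo.CompletedTransactions c
  ON c.TransactionGuid = s.TransactionGuid;
```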
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
|
Hello
I need some advice. Let's say we have two networks: Nprod and Ndev.
The production server is on Nprod and is connected to 80 customer databases through 80 VPNs; all is fine here.
But we also need to connect the Ndev network to the customer databases, for easy (and sometimes emergency) debugging. Sometimes we have a VPN between Ndev and the remote DBs, sometimes not, and in that case we have to connect to a remote desktop on the Nprod network, open a server connection, pass queries one by one, and try to understand what's going wrong - not very convenient, and it costs us time in reacting to the bug.
So my question is: is there a way to connect my dev desktop to Nprod, and then use the Nprod <-> customer DB VPNs - something like a connector installed on the Nprod network which could pass queries and return results to my dev network? I don't know where or what to search, so if you have some tips or tricks to orient my search, that would be great.
|
|
|
|
|
Hi All,
I have a database in which most of the tables have CreatedBy, ModifiedBy, CreateDate and ModifiedDate columns; some child tables don't. The problem is that developers are performing DML operations on those tables, such as inserts with inappropriate values or updates that don't set the ModifiedBy column. When they update something incorrectly, we are unable to track who modified it.
Is there any mechanism, via SQL scripts or otherwise, to find out who made those changes, so that I can track it down and let him/her know what incorrect change they made?
Any help - a code snippet, a link, a small script or even a suggestion - is appreciated. Thanks in advance.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
A quick Google for "sql audit trail" came up with Audit Trail Generator for Microsoft SQL[^] right here on CP. If you need to know who's messing with your data, how and when, an audit trail is the way to go.
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
You should enforce INSERT/UPDATE triggers for these auditing purposes. You cannot rely on the developers to implement this.
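A minimal sketch of such a trigger, assuming a hypothetical dbo.Orders table (the table, column, and trigger names here are illustrative only):

```sql
-- Audit table capturing who changed what, filled by a trigger the
-- developers cannot bypass with ordinary DML.
CREATE TABLE dbo.OrderAudit
(
    AuditId    INT IDENTITY PRIMARY KEY,
    OrderId    INT NOT NULL,
    ModifiedBy SYSNAME NOT NULL,
    ModifiedAt DATETIME2 NOT NULL,
    Operation  CHAR(1) NOT NULL          -- 'I' = insert, 'U' = update
);
GO
CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.OrderAudit (OrderId, ModifiedBy, ModifiedAt, Operation)
    SELECT i.OrderId,
           SUSER_SNAME(),                -- the login that made the change,
                                         -- not a column the developer can skip
           SYSUTCDATETIME(),
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted AS i;
END;
```

The key point is that SUSER_SNAME() is evaluated inside the trigger, so the modifier is recorded even when the application "forgets" to set a ModifiedBy column.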
|
|
|
|
|
Hi All,
I have a parent table and multiple child tables, and I want to create a resultant flat table from all the result columns. Is there any way to create all those columns in a single table so I can put the result into a flat table?
If the object IDs are going to be different, I am fine with putting them in as well.
Here is the script I am trying to use, and understandably it is giving me the error "Column names in each table must be unique. Column name 'ApplicationId' in table 'TempDelta' is specified more than once."
Any help is much appreciated thanks in advance friends.
select * into TempDelta from [Application] a
inner join ApplicationComponent b on a.ApplicationId=b.ApplicationId
inner join ApplicationPage c on a.ApplicationId=c.ApplicationId
inner join Audit d on a.ApplicationId=d.ApplicationId
inner join OrgEntityApplication e on a.ApplicationId=e.ApplicationId
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Don't use SELECT *.
List the columns you actually want. If you have two columns with the same name in two separate tables (i.e. the data contained in the columns is different) then you can use something similar to
select A.Col1, B.Col1 AS BCol1, A.Col2, B.Col2 AS BCol2 ...
Specifically listing the columns you want also protects your code from subsequent schema changes - imagine you have a carefully crafted gridview that is populated from the database with a query that does SELECT * FROM Table1. If, for some business reason, Table1 gets an extra column that has absolutely nothing to do with your carefully-crafted gridview, you're going to end up with an extra column that you were not expecting.
|
|
|
|
|
Yeah, thanks buddy, but some tables have 20 columns, and I have to write all the columns flat into a data warehouse table. Maybe I have to do it like you said, but that's a lot of writing.
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
indian143 wrote: May be I have to do like you said
No way around that.
If the brain were so simple we could understand it, we would be so simple we couldn't. — Lyall Watson
|
|
|
|
|
OK sure thank you
Thanks,
Abdul Aleem
"There is already enough hatred in the world lets spread love, compassion and affection."
|
|
|
|
|
Use the database information to generate the SQL for you.
For example (using the Northwind sample database from MS):
USE [Northwind]
DECLARE @listStr VARCHAR(MAX)
;WITH Source AS
(
    SELECT
        CASE WHEN TABLE_NAME = 'Orders' THEN 'O.'
             WHEN TABLE_NAME = 'Customers' THEN 'C.'
             WHEN TABLE_NAME = 'Employees' THEN 'E.'
             WHEN TABLE_NAME = 'Order Details' THEN 'OD.'
             ELSE ''
        END + COLUMN_NAME AS ColName
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME IN ('Orders', 'Customers', 'Employees', 'Order Details')
    AND TABLE_SCHEMA = 'dbo'
)
SELECT @listStr = COALESCE(@listStr + ',', '') + ColName
FROM Source
DECLARE @SQL varchar(max)
SET @SQL = 'SELECT ' + @listStr + ' FROM Orders O '
SET @SQL = @SQL + 'JOIN Customers C on O.CustomerID=C.CustomerID '
SET @SQL = @SQL + 'JOIN Employees E on O.EmployeeID=E.EmployeeID '
SET @SQL = @SQL + 'JOIN [Order Details] OD on OD.OrderID = O.OrderID '
PRINT @SQL
Produces this output (line breaks inserted for clarity):
SELECT C.CustomerID,C.CompanyName,C.ContactName,C.ContactTitle,C.Address,C.City,C.Region,
C.PostalCode,C.Country,C.Phone,C.Fax,
E.EmployeeID,E.LastName,E.FirstName,E.Title,E.TitleOfCourtesy,E.BirthDate,
E.HireDate,E.Address,E.City,E.Region,
E.PostalCode,E.Country,E.HomePhone,E.Extension,E.Photo,E.Notes,E.ReportsTo,E.PhotoPath,
OD.OrderID,OD.ProductID,OD.UnitPrice,OD.Quantity,OD.Discount,
O.OrderID,O.CustomerID,O.EmployeeID,O.OrderDate,O.RequiredDate,O.ShippedDate,
O.ShipVia,O.Freight,O.ShipName,
O.ShipAddress,O.ShipCity,O.ShipRegion,O.ShipPostalCode,O.ShipCountry
FROM Orders O
JOIN Customers C on O.CustomerID=C.CustomerID
JOIN Employees E on O.EmployeeID=E.EmployeeID
JOIN [Order Details] OD on OD.OrderID = O.OrderID
There are other ways of getting the column names (you should really use the object ID instead of the name, for example), and you can use FOR XML PATH to generate the CSV - but for a quick and dirty one-off this works.
|
|
|
|
|
I am wondering if I can sync some tables, views, and some records (e.g. the result of a SELECT ... FROM ... WHERE query of my choosing) from an Oracle DB to my SQL Server.
Thanks.
|
|
|
|
|
|
Thanks so much. I know this solution; I have linked the server, but the problem is that the Oracle DB is a big DB, and I only want to sync some data, depending on my need, so that the DB is smaller and the website will run quicker.
|
|
|
|
|
So write some queries to extract only the data you require.
You can set up jobs to make this happen on a regular basis - see Schedule a Job[^]
Note that a "smaller DB" does not necessarily mean that it will be quicker; much depends on the database design, the indexes used, and efficiently written queries, for example.
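As a sketch of the "extract only what you need" approach across a linked server (the linked server name ORA_LINK and the table/column names here are assumptions for illustration):

```sql
-- Pull only yesterday's rows from the linked Oracle server into a local table.
-- The WHERE clause runs on the Oracle side, so only the needed rows cross the wire.
INSERT INTO dbo.LocalOrders (OrderId, CustomerId, OrderDate)
SELECT ORDER_ID, CUSTOMER_ID, ORDER_DATE
FROM OPENQUERY(ORA_LINK,
    'SELECT order_id, customer_id, order_date
     FROM orders
     WHERE order_date >= TRUNC(SYSDATE) - 1');
```

A statement like this can then be wrapped in a SQL Server Agent job step to run on the schedule you need.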
|
|
|
|
|
OK, I know about scheduled jobs. I can get everything I want to get, but that only gets data; it doesn't sync data. At 8 am every day I get the data I need from yesterday. But if there is some error that requires running a procedure to update the source Oracle DB, and they don't tell me, then I don't have exactly the data I need. That's why I asked for a sync tool.
|
|
|
|
|
I'm sorry, you've lost me. What is "syn data"?
If you mean "sync" then you can have Oracle "publish" the data changes - see Create a Publication from an Oracle Database[^]
There are also plenty of synchronisation tools out there, some of which are very expensive.
Personally, I would just get the data from the original source - as I said earlier, you are not necessarily going to get your website to "run quicker" by doing this. It's more likely that the way you are getting the data requires tuning.
|
|
|
|
|