I'm going to create a rather large report that collects the data from two different sources. One is a badly normalized database, where the data I need is spread over a large number of columns in many tables. The other is an Entity–attribute–value model[^].
I don't have any problems fetching the data from either source, but the data needs to be organised into categories and subcategories, or possibly a tree, which do not currently exist in the EAV model and do not correspond at all to the data model of the other database, so they need to be created from scratch.
I'm looking for ideas on how to do this in a fairly future-proof way. I don't really have any keys to use either; it's just the column names in one source and the attribute names in the other. I realize there is no right way of doing this (but many wrong ones). So any opinions on how to do this, and, just as importantly, how not to do it, are appreciated.
And no, I can't do much about the structure of the sources, that's way out of scope.
You can use Power BI to pull data from multiple sources, build relationships, and get a "feel for the data" and perhaps produce your report in question (with or without using any number of "cleansing" techniques).
Moving forward, look into (data warehouse) "star schemas" for modeling ideas.
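To make the star-schema suggestion concrete, here is a minimal sketch using an in-memory SQLite database from Python. All table and column names here are hypothetical, invented for illustration: descriptive attributes (including the category/subcategory tree) go into dimension tables, and the measures from either source land in a fact table keyed to those dimensions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables hold descriptive attributes; the fact table holds
# measures plus foreign keys into each dimension. (Hypothetical names.)
cur.executescript("""
CREATE TABLE dim_category (
    category_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    parent_id   INTEGER REFERENCES dim_category(category_id)  -- subcategory tree
);
CREATE TABLE dim_source (
    source_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL          -- e.g. 'legacy DB' or 'EAV store'
);
CREATE TABLE fact_measurement (
    fact_id     INTEGER PRIMARY KEY,
    category_id INTEGER NOT NULL REFERENCES dim_category(category_id),
    source_id   INTEGER NOT NULL REFERENCES dim_source(source_id),
    value       REAL
);
""")

cur.execute("INSERT INTO dim_category (name) VALUES ('Hardware')")
cur.execute("INSERT INTO dim_source (name) VALUES ('EAV store')")
cur.execute(
    "INSERT INTO fact_measurement (category_id, source_id, value) VALUES (1, 1, 42.0)"
)

# A typical star-schema report query: join the fact to its dimensions.
row = cur.execute("""
    SELECT c.name, s.name, f.value
    FROM fact_measurement f
    JOIN dim_category c ON c.category_id = f.category_id
    JOIN dim_source   s ON s.source_id   = f.source_id
""").fetchone()
print(row)  # ('Hardware', 'EAV store', 42.0)
```

The point of the shape, not the names: report queries become simple joins from one fact table outward, which stays stable even as new categories or sources are added.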
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
I'm looking for ideas on how to make this in a fairly futureproof way, I don't really have any keys to use either. It's just the column names in one source and the attribute names in the other. I realize there is no right way of doing this (but many wrong). So any opinions on how to do this
You got two groups of unsorted and ungrouped column names coming from a repository that need to be mapped to a model?
...then I'd go for a relational model, even if it only exists in memory. Which would mean following the steps to normalize the data (up to BCNF). That's a tedious but proven way. Any group of properties that make up a model (regardless of the way the data is physically stored) would be a candidate.
I would seriously consider creating that theoretical model and then create an in-memory database (using SQLite or similar) to hold it. The added advantage of that approach is having an abstraction of the data that you can already use to design reporting and stuff. Another added advantage is that any redundancies or inconsistencies in the database would be caught. Downside is that you need (access to) a domain-expert to build and validate the model.
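The in-memory SQLite idea above might look something like this sketch (all names are hypothetical): a mapping table that ties each source's column or attribute name to a node in the new category tree, with constraints that catch duplicate or inconsistent mappings as the answer describes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The new category tree, plus a mapping table that ties a column name
# (legacy DB) or attribute name (EAV store) to one category node.
cur.executescript("""
CREATE TABLE category (
    category_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL UNIQUE,
    parent_id   INTEGER REFERENCES category(category_id)
);
CREATE TABLE source_field (
    field_id    INTEGER PRIMARY KEY,
    source      TEXT NOT NULL CHECK (source IN ('legacy', 'eav')),
    field_name  TEXT NOT NULL,
    category_id INTEGER NOT NULL REFERENCES category(category_id),
    UNIQUE (source, field_name)     -- catches redundant mappings early
);
""")
cur.execute("INSERT INTO category (name) VALUES ('Customer')")
cur.execute(
    "INSERT INTO source_field (source, field_name, category_id) "
    "VALUES ('legacy', 'CUST_NAME', 1)"
)
cur.execute(
    "INSERT INTO source_field (source, field_name, category_id) "
    "VALUES ('eav', 'customer_name', 1)"
)

# Fields from both sources now resolve to the same category node:
rows = cur.execute("""
    SELECT c.name, f.source, f.field_name
    FROM source_field f JOIN category c ON c.category_id = f.category_id
    ORDER BY f.source
""").fetchall()
```

A domain expert would still have to decide which names map to which node, but the constraints mean a wrong or duplicated mapping fails loudly instead of silently corrupting the report.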
Bastard Programmer from Hell If you can't read my code, try converting it here[^]
I tried this query in SQL, but I am getting errors like
'The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.'
declare @tablename nvarchar(max)
declare @sqlstring nvarchar(max)
declare cur cursor for
    select Table_Name from INFORMATION_SCHEMA.COLUMNS where column_name = 'INV_DATE'
open cur
fetch next from cur into @tablename
while @@fetch_status = 0
begin
    -- print @tablename
    set @sqlstring = 'update ' + @tablename + ' SET INV_DATE = CONVERT(varchar(10), INV_DATE, 105)'
    exec sp_executesql @sqlstring
    fetch next from cur into @tablename
end
close cur
deallocate cur
Please suggest a solution.
Current format: 2016-03-31 00:00:00.000
I tried to copy the data and change it to DateTime using a temp field, but I'm getting the same error.
alter table GL add GL_DATE1 datetime null
update GL set GL_DATE = convert(datetime, GL_DATE, 104)   -- Getting error here
update GL set GL_DATE1 = GL_DATE
alter table GL drop column GL_DATE
Please advise how to copy data from one field to another using a temp column.
Why do you want to do that? The datetime format only matters to humans, not to SQL Server. Just leave it as datetime, then format it on SELECT into whatever format or culture suits the users. Learn why you should not bother: How to format a Date or DateTime in SQL Server[^]
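In SQL Server the format-on-SELECT advice would be something like `SELECT CONVERT(varchar(10), GL_DATE, 105) FROM GL`. The same principle, sketched below with SQLite's `strftime` from Python (an analogy, not the asker's actual server): the stored value stays untouched, and only the presentation changes at query time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE GL (GL_DATE TEXT)")   # SQLite stores datetimes as text
cur.execute("INSERT INTO GL VALUES ('2016-03-31 00:00:00.000')")

# Keep the stored value intact; reformat only in the SELECT.
# strftime('%d-%m-%Y', ...) plays the role of CONVERT(varchar(10), ..., 105).
formatted = cur.execute(
    "SELECT strftime('%d-%m-%Y', GL_DATE) FROM GL"
).fetchone()[0]
print(formatted)  # 31-03-2016
```

Because the column keeps its native type, date arithmetic, sorting, and range queries keep working; the out-of-range conversion error in the question simply never arises.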
Peter Leow http://www.peterleowblog.com/ https://www.amazon.com/author/peterleow
Create the connection as late as possible, and wrap it in a using block to ensure that it's always disposed of properly.
Don't disable connection pooling, since the underlying network connections can be expensive to set up and tear down. The connection pool is there to improve the performance of your code!
Trying to share a single connection object across multiple threads will only lead to errors. To avoid the errors, you'd have to synchronise access to the shared connection, which will significantly degrade the performance of your code.
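The advice above is about ADO.NET's `using` block; here is the analogous pattern sketched in Python with `contextlib.closing` (function name and use of sqlite3 are illustrative, and note that sqlite3 itself does no pooling; server drivers typically do).

```python
import sqlite3
from contextlib import closing

def fetch_one(db_path):
    # Open the connection as late as possible and close it deterministically
    # when the work is done; with a pooling driver, reopening is cheap.
    with closing(sqlite3.connect(db_path)) as conn:
        with closing(conn.cursor()) as cur:
            cur.execute("SELECT 1")
            return cur.fetchone()[0]

# Each call gets its own short-lived connection, so there is no shared
# connection object to synchronise across threads.
result = fetch_one(":memory:")
```

The per-call connection is what makes the thread-safety point moot: nothing is shared, so nothing needs locking, and the pool (where the driver provides one) absorbs the setup cost.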
"These people looked deep within my soul and assigned me a number based on the order in which I joined." - Homer