|
Thanks for your reply. I had used avg(isnull(column),0). Is that the reason why my function is also considering NULL rows?
|
|
|
|
|
raghvendrapanda wrote: avg(isnull(column),0)..
The answer is yes, but as a test, why don't you try the following:
avg(isnull(column), 42)
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
|
|
|
|
|
I see, the "average" function needs to be calibrated first.
|
|
|
|
|
Just like the random function needs to be seeded.
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
|
|
|
|
|
I'm not sure that the WHERE UnitPrice IS NOT NULL is required here -- COUNT(UnitPrice) will only count the non-null entries anyway, right? Same with SUM and AVG?
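Right: COUNT(UnitPrice), SUM(UnitPrice), and AVG(UnitPrice) all skip NULLs, while AVG(ISNULL(UnitPrice, 0)) turns every NULL into a zero that is counted and drags the average down. A quick Python sketch of the two behaviours (with None standing in for NULL, sample values made up):

```python
values = [10, None, 20, None, 30]  # a column containing NULLs

# AVG(column): NULLs are skipped entirely
non_null = [v for v in values if v is not None]
avg_skip_nulls = sum(non_null) / len(non_null)        # 60 / 3 = 20.0

# AVG(ISNULL(column, 0)): NULLs become zeros and are counted
coalesced = [0 if v is None else v for v in values]
avg_nulls_as_zero = sum(coalesced) / len(coalesced)   # 60 / 5 = 12.0

print(avg_skip_nulls, avg_nulls_as_zero)
```

This is why wrapping the column in ISNULL changes the answer: it changes the divisor, not just the values.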
|
|
|
|
|
select a.*, b.ConstructionCompletionDate_dt
from projects a
join schedule b on a.projectid = b.projectid
where projectname like @projectname
   or country like @country
   or clientcompany like @clientcompany
   or ConstructionCompletionDate_dt like ('%(@ConstructionCompletionDate_dt)%')
This is how I'm searching, where ConstructionCompletionDate_dt is only the yyyy part of a datetime field.
If I use '% %', for example when I search 2010, fields with 012 in mm/dd/yyyy are also displayed. I want only the rows with year 2010. How can I do that?
|
|
|
|
|
test-09 wrote: where ConstructionCompletionDate_dt isonly yyyy from datetime field
A real DateTime field that you converted to a VARCHAR(4)? The LIKE operator is better suited to searching through text fields.
How about something like this;
SELECT *
FROM HumanResources.Employee
WHERE YEAR([BirthDate]) = 1972
I are Troll
|
|
|
|
|
Tell me you are storing your dates as datetime and not varchar.
Try the DATEPART function, something like
where datepart(yyyy,ConstructionDate) = 2010
Never underestimate the power of human stupidity
RAH
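The underlying point in both suggestions: a substring match on a formatted date string can accidentally hit the month or day digits, while comparing the year of a real date value cannot. A Python sketch of the difference (sample dates are hypothetical):

```python
from datetime import datetime

rows = [datetime(2010, 5, 1), datetime(2011, 1, 20), datetime(2012, 10, 3)]

# LIKE '%201%'-style substring match on the formatted string:
# hits every row, because "01/20/2011" and "10/03/2012" contain "201" too
like_hits = [d for d in rows if "201" in d.strftime("%m/%d/%Y")]

# YEAR(column) = 2010 / DATEPART(yyyy, ...)-style comparison on the date value:
year_hits = [d for d in rows if d.year == 2010]

print(len(like_hits), len(year_hits))
```

Only the comparison on the real date value returns exactly the 2010 rows.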
|
|
|
|
|
I'm storing it as varchar, as that is comfortable for my requirement.
|
|
|
|
|
You should use the correct data type then. Why are you not using datetime as the data type? You will be doing casts and converts in your application, which is error-prone.
|
|
|
|
|
Get comfortable with doing it the right way. Now!
|
|
|
|
|
test-09 wrote: m storing it as varchar
This is the most basic error in data design. I recommend that you change your data type from varchar to datetime NOW. The longer you delay, the more work it will take to change. You will change eventually or the project will die; the downstream cost of this mistake is extreme and must be fixed immediately.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
I second that.
|
|
|
|
|
I have the following result set in SQL Server 2005.
Column1  Column2  Column3  Column4
cc       cc1      cc2      cc3
dd       dd1      dd2      dd3
Now I want to convert the above result into the following table, without using a cursor, in SQL Server 2005.
Column1  cc   dd
Column2  cc1  dd1
Column3  cc2  dd2
Column4  cc3  dd3
Please help me.
|
|
|
|
|
Look into UNPIVOT. I've never actually had to do this, but PIVOT works fine, so I assume UNPIVOT will also.
Never underestimate the power of human stupidity
RAH
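For what it's worth, the question asks for a full transpose, which in T-SQL usually means an UNPIVOT of the columns into name/value rows followed by a PIVOT on the key values. The target shape itself is easy to see in Python, where a transpose is just zip:

```python
header = ["Column1", "Column2", "Column3", "Column4"]
data = [["cc", "cc1", "cc2", "cc3"],
        ["dd", "dd1", "dd2", "dd3"]]

# transpose: each original column becomes a row,
# led by its column name and followed by the values from each source row
transposed = [list(t) for t in zip(header, *data)]
for row in transposed:
    print(row)
# first row: ['Column1', 'cc', 'dd']
```

This is only a sketch of the desired output shape, not the T-SQL itself.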
|
|
|
|
|
I am trying to perform a bulk insert into SQL Server where some of the text fields may contain apostrophes, which need to be removed.
Can I bulk insert from the csv and then have the last column be the file name I am inserting from?
BULK INSERT MY_DB.dbo.SYMBOLS FROM 'C:\data.csv'
WITH ( DATAFILETYPE = 'char', FIELDTERMINATOR = ',', ROWTERMINATOR = '\r\n' )
My workaround is to query the csv file into a dataset, remove the apostrophes, and then run a ton of insert queries. I am doing this for more than a hundred files.
|
|
|
|
|
Ted2102 wrote: query to csv file to a dataset,
I do the same: load the data into a DataTable, clean out the single quotes, and use SqlBulkCopy to push the DataTable into the SQL Server table. The target table is all varchar, because bulk copy can be delicate sometimes and spits the dummy regularly.
public int BulkCopy(DataTable dtTable, string sTableName, SqlConnection oConn)
{
    // SqlBulkCopy is IDisposable, so wrap it in a using block;
    // a catch that only rethrows adds nothing, so let exceptions propagate
    using (SqlBulkCopy oBC = new SqlBulkCopy(oConn))
    {
        oBC.BulkCopyTimeout = 60000;
        oBC.DestinationTableName = sTableName;
        oBC.WriteToServer(dtTable);
    }
    return dtTable.Rows.Count;
}
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Okay, then I will make all of my strings varchar for this table. Are there any problems loading doubles or big ints that you are aware of?
|
|
|
|
|
Firstly, why do you need to remove the apostrophes? Are they not part of the text you are importing, or are they superfluous characters that shouldn't be there?
Either way, you might consider opening the csv file in code and then either doubling the apostrophes or removing them; something like (and this is a very simplistic example):
string filePath = "full_path_to_the_csv_file";
string text = File.ReadAllText(filePath);
// either double the apostrophes so they survive as SQL string literals:
text = text.Replace("'", "''");
// or remove them entirely (use one approach, not both in sequence):
// text = text.Replace("'", string.Empty);
File.WriteAllText(filePath, text);
You'll need to adjust to suit but it should get you started.
me, me, me
"The dinosaurs became extinct because they didn't have a space program. And if we become extinct because we don't have a space program, it'll serve us right!"
Larry Niven
|
|
|
|
|
The apostrophes are part of the text. Appreciate the help. I will try this later today as well and see how well it works. I am trying to load a couple hundred csv files.
|
|
|
|
|
I remove the single quotes. We regularly export the data as a csv file at some point, and text qualifiers ("") are not supported by SSIS (I think; one of MS's core technologies cannot deal with text qualifiers and therefore cannot deal with single quotes in the data, astonishing). So we get a cleaner result, and our users don't give a rat's ass about single quotes.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: users don't give a rats ass
Would have both said it all and been very precise
me, me, me
"The dinosaurs became extinct because they didn't have a space program. And if we become extinct because we don't have a space program, it'll serve us right!"
Larry Niven
|
|
|
|
|
Given the double-quote issue in the data set, I cannot use the ADO.NET csv file reading capability directly. Instead I tried using ReadAllText and WriteAllText to remove the single quotes and double quotes.
I keep getting an out-of-memory error after a few minutes of inserts. I was able to load the datasets (before discovering the double-quote issue) without any memory issues. I have 4 GB of physical memory and a 16 GB maximum virtual memory setting on my laptop. My aggregate datasets should take no more than 6 GB of space in SQL Server. I've tried rebooting a couple of times and that did not help. Any ideas?
void Cleanse_File(System::String ^Full_Path_To_File, long & ErrorCode)
{
    try
    {
        // ReadAllText pulls the entire file into memory at once
        System::String ^text = System::IO::File::ReadAllText(Full_Path_To_File);
        text = text->Replace("'", "");
        text = text->Replace("\"", "");
        System::IO::File::WriteAllText(Full_Path_To_File, text);
    }
    catch (System::Exception ^e)
    {
        System::Console::WriteLine(e->Message);
    }
}
At this point, I am thinking of using a different csv file reader and hoping that it can handle the double quotes without mismapping columns.
|
|
|
|
|
If the files are large you may need to read them in a line at a time.
me, me, me
"The dinosaurs became extinct because they didn't have a space program. And if we become extinct because we don't have a space program, it'll serve us right!"
Larry Niven
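Reading line by line keeps memory use flat no matter how big the file is. Here is a minimal Python sketch of the same cleanse step done as a stream, writing to a temporary file and then swapping it into place (the function name is made up for illustration):

```python
import os
import tempfile

def cleanse_file(path):
    """Strip single and double quotes from a file, one line at a time."""
    # write the cleaned lines to a temp file in the same directory
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as out, open(path) as src:
        for line in src:
            out.write(line.replace("'", "").replace('"', ""))
    # swap the cleaned file into place
    os.replace(tmp_path, path)
```

Only one line is ever held in memory, so a multi-gigabyte csv is no problem.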
|
|
|
|
|
I am going to start working on this shortly. I decided to remove single and double quotes, so that I would get the same results if I ran the script on a file again. If I did not have the double-quote issue and several other formatting problems, then I could have called a bulk load or copy afterwards.
It still might make sense to clean the full data first and then do a bulk load.
My problem with my data source files is that the first field is sometimes split into several columns and the last few columns are sometimes missing. So I have a gross C++ program to remap the data to try to get the correct table structure.
I noticed that the results are incorrect for rows when double quotes appear in the data.
I am specifying FMT=Delimited(,) in my connection string for a csv file.
My query is SELECT * FROM C:\X.csv
Suppose a row contained Generic 1st "LCD" Monitor,1,2,3. The query results come back as "Generic 1st", Null, Null, Null.
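For comparison, Python's csv module treats a double quote that does not open the field as a literal character, so that same line parses cleanly into four columns; a csv reader with that behaviour would sidestep the column mismapping seen above:

```python
import csv
import io

line = 'Generic 1st "LCD" Monitor,1,2,3\n'
row = next(csv.reader(io.StringIO(line)))
print(row)  # ['Generic 1st "LCD" Monitor', '1', '2', '3']
```

This is only a sketch of the parsing rule, not a statement about how the Jet/ACE text driver can be configured.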
modified on Thursday, April 1, 2010 2:47 AM
|
|
|
|