Sounds like Oracle.
If it is, you can specify multiple tables for exp, but only a single query. Used this way, the query must be applicable to all of the tables specified.
If your Oracle version is 10g or higher, you should use Data Pump instead. It is capable of exporting several tables with different queries.
Mika
The need to optimize rises from a bad design.
My articles
Dear all,
I set up transactional replication with updatable subscriptions, plus merge replication, between two SQL Server 2005 servers. Everything was working fine, but then the publisher/subscriber link broke. Since both databases (at the publisher and at the subscriber) were still intact, I rebuilt the replication. The rebuild succeeded and two-way replication was re-established, but data changed at the subscriber site is no longer replicated to the publisher; only changes at the publisher are replicated to the subscriber.
How can I solve this problem, i.e. get data replicated from the subscriber to the publisher after rebuilding a broken replication?
Best regards and thanks,
thaar
Thanks, Abdul Aleem, for your suggestion.
Hi all,
I have a requirement to import an Excel sheet into PostgreSQL using C#.
I wrote this code:
try
{
    // Open the Excel sheet via the Jet OLE DB provider
    connection = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;" + @"Data Source=" + txtfilename.Text + ";" + "Extended Properties=Excel 8.0;");
    mycommand = new OleDbDataAdapter(@"select * from [Ranjit$]", connection);
    mydataset = new DataSet();
    mycommand.Fill(mydataset, "ExcelInfo");

    // Show the imported rows in the grid
    dataGridView1.DataSource = mydataset.Tables["ExcelInfo"].DefaultView;
}
catch (Exception ex)
{
    MessageBox.Show(ex.GetBaseException().ToString());
}
So mydataset displays the data in the DataGridView; this part works fine.
Now I have to send the data to PostgreSQL, for which I wrote this code:
NpgsqlConnection strconn = new NpgsqlConnection(@"server=localhost;user id=postgres;password=thinksoft10@;database=Test;SyncNotification=true");
strconn.Open();
NpgsqlCommand cmd = new NpgsqlCommand("COPY \"Test\" FROM STDIN", strconn);
NpgsqlCopyIn cin = new NpgsqlCopyIn(cmd, strconn);
cin.Start();  // note: Start(), not start()
But this does not copy my data into the database.
So please tell me how to copy that information into the PostgreSQL database. In my application I am able to show the Excel sheet data in the DataGrid, but I am not able to send the Excel sheet data to the test table in the PostgreSQL Test database.
Any help is appreciated!
Thanks in anticipation,
Ranjit.balu
Don't cross-post. It's considered rude.
The need to optimize rises from a bad design.
My articles
In MS SQL Server, I have found that the primary key is numeric / auto-generated in many scenarios. Is the primary key made numeric or auto-generated, rather than VARCHAR, to improve performance?
Or is there some other reason to have a numeric primary key?
ch sriniw8z
Autogenerated (identity) columns are used as surrogate keys. Refer to: http://en.wikipedia.org/wiki/Surrogate_key
It's good practice to use a surrogate key so that changes in the actual data won't affect the keys (both primary and foreign). A numeric datatype is also a bit more efficient than character types and typically uses less space. But the main point is that the key is not derived from the data.
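To make the idea concrete, here is a minimal sketch using SQLite from Python (the table and column names are invented for illustration, not from any poster's schema): the surrogate id has no business meaning, so updating the business data leaves every key, and every join through it, intact.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Surrogate key: an auto-generated integer with no business meaning.
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT, "
            "customer_id INTEGER REFERENCES customer(id), amount REAL)")

cur.execute("INSERT INTO customer (name) VALUES ('Acme Ltd')")
customer_id = cur.lastrowid
cur.execute("INSERT INTO orders (customer_id, amount) VALUES (?, ?)", (customer_id, 99.0))

# The company renames itself: the natural attribute changes...
cur.execute("UPDATE customer SET name = 'Acme Corp' WHERE id = ?", (customer_id,))

# ...but the surrogate key is untouched, so the join still works.
row = cur.execute("SELECT c.name, o.amount FROM orders o "
                  "JOIN customer c ON c.id = o.customer_id").fetchone()
print(row)  # ('Acme Corp', 99.0)
```

Had the order row keyed on the customer name instead, the rename would have orphaned it.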
Mika
The need to optimize rises from a bad design.
My articles[ ^]
There are usually two reasons. First, performance: an int is quicker than a varchar/char as a primary key. Second, using an identity (auto-generated) column means no duplicate-key checking is required.
Hope this clarifies things for you.
Bob
Ashfield Consultants Ltd
Another reason: if your primary key is also the clustered index, which it is by default, the rows are physically ordered by the key. With a character data type you can cause the database engine to do a lot of disk I/O re-ordering the data on inserts. This may not be noticeable for small tables or tables without many inserts, but for high-activity tables, especially large ones (millions of rows), you want to stay away from it.

Say, for example, your primary key column is varchar(10), you started with 'a' as the first item's key value, and values go up through 'zzzzzzzzzz'. If you insert 'a' and then 'zzzzzzzzzz', they are stored physically in that order. Anything you insert afterwards may cause the database to move rows around on disk to keep the primary key values in order. I say "may" because whether the engine has to create a new page, move data from one page to another, etc. depends on how full each page is.

I worked on a project where the primary key columns were char(32) (a GUID without the dashes), and this situation, with the clustered indexes, wreaked so much havoc on performance that on almost every frequently-used table the primary key's index was changed from clustered to non-clustered. Of course, that design was in place before I saw it, and I would highly discourage anyone from repeating it. Personally, I've never found a reason not to use either int or bigint with identity(1, 1) for the primary key.
Keep It Simple Stupid! (KISS)
Good point; it's so long since I, like you, have used anything other than int/bigint that I had forgotten that horror.
Bob
Ashfield Consultants Ltd
Hi all,
I am using Visual Studio 2008 and SQL Server 2005.
I use some CLR programming in my Windows application, and that works fine.
But I got an error during a transaction in my application.
Error: Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
So, what is the problem and how can I solve this type of problem?
Thank you very much.
Arindam Banerjee
Sr. Software Developer
Rance Computer Pvt Ltd.
Kolkata (India)
For a description of deadlocks, see:
- http://en.wikipedia.org/wiki/Deadlock
- Deadlock
In order to prevent deadlocks you must, for example, ensure that operations (and lock acquisitions) are executed in the same order everywhere. This won't prevent every situation, but the number of deadlocks will be much smaller.
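The lock-ordering rule is language-independent; here is a minimal sketch in Python (the thread and lock names are invented for illustration). Both workers touch both resources, but because every thread acquires the locks in the same fixed order, the circular wait that produces a deadlock cannot arise:

```python
import threading

# Two shared resources, each protected by a lock.
lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Rule: every thread acquires lock_a before lock_b, never the reverse.
    # If one thread took b-then-a instead, the two threads could deadlock.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # ['t1', 't2']
```

In database terms, "lock acquisition order" corresponds to touching tables and rows in the same order in every transaction.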
The need to optimize rises from a bad design.
My articles
In your code, check the exception: if it is a SQL exception, then instead of abandoning the operation, pause the process for some time and call the same functionality again. By then the other process will have finished its work and released the data, and your process can continue.
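That retry pattern can be sketched as follows (generic Python, not the poster's C# code; the transient-failure test, attempt count, and delay are placeholder assumptions):

```python
import time

def run_with_retry(operation, is_transient, max_attempts=3, delay_seconds=0.01):
    """Run `operation`; on a transient failure (e.g. being chosen as the
    deadlock victim), wait briefly and rerun the whole transaction."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts or not is_transient(exc):
                raise  # give up: not transient, or out of attempts
            time.sleep(delay_seconds)  # let the other process release its locks

# Tiny demonstration: the first call fails "transiently", the second succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("deadlock victim")  # stand-in for SQL error 1205
    return "ok"

result = run_with_retry(flaky, lambda e: "deadlock" in str(e))
print(result)  # ok
```

In the real C# code you would catch SqlException and test its error number rather than matching on the message text.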
ch sriniw8z
I suppose you didn't actually want to answer my post, but the original question. When answering a question (or another post), press Reply on the post you want to answer or comment on.
The need to optimize rises from a bad design.
My articles
True Dat!
Easy to implement when the DB is new and there aren't that many queries, stored procedures, etc.
On an existing database with a lot of objects it's a little harder to implement. Lack of forethought will kill you later on a large database, with deadlocks and lock waits.
Any suggestions, ideas, or 'constructive criticism' are always welcome.
"There's no such thing as a stupid question, only stupid people." - Mr. Garrison
That's true. One basic rule is to perform operations top-down in relational order. It's quite a simple rule of thumb, but it works in many cases.
The need to optimize rises from a bad design.
My articles
I want to find which values are new.
I have table
-------------
Names
-------------
A
B
C
Now I have the data: S, W, B.
I want to know whether S and W are new, i.e. not yet registered in our database.
Is there a query that would help me?
Thanks.
Syed Shahid Hussain
Something like this...
SELECT 1 FROM Table WHERE Names = 'S'
This is a situation perfect for a LEFT OUTER JOIN or the NOT IN operator.
select n2.Names
from names_new n2
left join names_original n1 on n1.Names = n2.Names
where n1.Names is null
OR
select n.Names
from names_new n
where n.Names not in (select names from names_original)
Of the two, I prefer the LEFT OUTER JOIN, but they should both produce the desired result.
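As a quick sanity check that the two approaches agree, here is a sketch using SQLite from Python (table names follow the answer above; SQLite stands in for whatever database the poster is using):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE names_original (Names TEXT)")
cur.execute("CREATE TABLE names_new (Names TEXT)")
cur.executemany("INSERT INTO names_original VALUES (?)", [("A",), ("B",), ("C",)])
cur.executemany("INSERT INTO names_new VALUES (?)", [("S",), ("W",), ("B",)])

# LEFT OUTER JOIN: rows of names_new with no match in names_original.
join_result = cur.execute(
    "SELECT n.Names FROM names_new n "
    "LEFT JOIN names_original o ON o.Names = n.Names "
    "WHERE o.Names IS NULL ORDER BY n.Names").fetchall()

# NOT IN: same result via a subquery.
not_in_result = cur.execute(
    "SELECT Names FROM names_new "
    "WHERE Names NOT IN (SELECT Names FROM names_original) "
    "ORDER BY Names").fetchall()

print(join_result)    # [('S',), ('W',)]
print(not_in_result)  # [('S',), ('W',)]
```

One caveat worth knowing: if names_original.Names can contain NULLs, NOT IN returns no rows at all, while the LEFT JOIN form still works.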
Keep It Simple Stupid! (KISS)
Hi,
I have a problem importing a database from my local system to a web hosting server.
Can you please tell me the steps to import a database from the local machine to the web hosting server? I am using SQL Server 2000 on my system. Can we import the database without using Business Intelligence Development Studio?
If anyone has any idea, please help me solve this problem.
Thanks in advance
Pavani
If I understood the question correctly, why not simply take a backup from the local system and restore it on the hosting server?
The need to optimize rises from a bad design.
My articles
Thanks for your reply.
I have taken a backup of my database, but how can I restore it to the hosting server? Can you please tell me the steps to restore the database on the web hosting server?
Hi,
I followed that article, and now I can restore it on my local system. But I don't see how to import it to the web hosting server. Please explain.
I don't understand your question.
If you can restore the database on your local host, take the backup files to the other server and do the same restore operation there.
The need to optimize rises from a bad design.
My articles