|
|
|
|
|
|
One of our clients has just developed a set of coding standards for their .NET and SQL Server application development. For the SQL Server standards, they say the following regarding error handling:
-----------start of quote---------------
Error Handling
- Should be used in every stored procedure, with an Error Handling section
at the bottom of the procedure
- Stored procedures should return the success or failure of the stored
procedure by returning 0 (zero) upon success, or return the error number upon
error.
- @@ERROR should be checked after every INSERT or UPDATE to determine the
success or failure of that action.
- @@ROWCOUNT should be checked after every INSERT or UPDATE when at least
one record is expected to be affected.
DECLARE @Error int, @RowCount int
T-SQL here
RETURN 0
EH:
RAISERROR(@Error, 16, 1)
RETURN @Error
-----------end of quote---------------
Does this sound like a good idea? Having an error handler in every stored procedure seems excessive.
Honestly, I'm of the opinion that error handling for the sake of error handling is a waste of time and only serves to obfuscate your code. The only reason to handle an error is if you're going to do something meaningful with the exception. For example, you know what the error is and how to fix it. Or maybe you want to log it to the database. Or you need to roll back a transaction. But if all you're going to do is catch it and re-raise it, that's pointless code, because the error will bubble up the call stack anyway.
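To make the "do something meaningful" case concrete, here is a minimal sketch of a handler that exists only because there is real work to do in it (rolling back a transaction before re-raising). It assumes SQL Server 2005's TRY...CATCH; the procedure, table, and column names are invented for illustration.

```sql
-- Sketch only: handle the error because there is something meaningful
-- to do (roll back), then re-raise for the caller. Names are made up.
CREATE PROCEDURE dbo.TransferFunds
    @FromAccount int, @ToAccount int, @Amount money
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - @Amount
            WHERE AccountId = @FromAccount;
        UPDATE dbo.Accounts SET Balance = Balance + @Amount
            WHERE AccountId = @ToAccount;
        COMMIT TRANSACTION;
        RETURN 0;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        -- Re-raise with the original details in the message text; the
        -- original number cannot be reused as the RAISERROR msg_id.
        DECLARE @Msg nvarchar(2048), @Err int;
        SET @Msg = ERROR_PROCEDURE() + ': ' + ERROR_MESSAGE();
        SET @Err = ERROR_NUMBER();
        RAISERROR(@Msg, 16, 1);
        RETURN @Err;
    END CATCH
END
```

A simple one-statement SELECT, by contrast, gains nothing from a handler like this.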
Do you agree or disagree? Why or why not?
|
|
|
|
|
Seems pretty much like the standards everywhere I've contracted in the last few years. If you get an error you can enhance the message to help track down the problem, so no, I don't think it's a waste of time. My biggest gripe is that (pre-2005) you can't trap ALL errors in order to handle them. I've spent hours trying to track down such elusive messages as 'String or binary data would be truncated'.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
> If you get an error you can enhance the message to help track down the problem
What sort of information would you include to enhance the error message? For example, what would you change about this stored procedure?
--air code
CREATE PROCEDURE GetCustomerCount
AS
SELECT COUNT(*) FROM CUSTOMERS
RETURN 0
EH:
RAISERROR(@Error, 16, 1)
RETURN @Error
|
|
|
|
|
Obviously, in this example there is nothing, but if you have anything more complex, for example a proc with several update statements, then it is useful to know where the error originated. Another example would be a series of nested procs; by including the proc name in the error you know where it originated.
As with any set of standards there are cases for and against. For example, StyleCop in C# insists that using statements come after the namespace declaration, but if you create a new class they are generated before it.
As a long-time contractor I have worked at many companies. They all have slightly different standards, some good, some not so good, but at the end of the day they are called standards for a reason: consistency. The overhead they cause is minimal and they make it easy for someone to pick up the code and see what is happening.
If you think the standard is not good you need to raise it with the powers that be, but in your original post you state "One of our clients...", and at the end of the day they call the tune.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
A lot of stored procedures are simple, one-statement CRUD operations that probably don't require error handling, which is why I think that mandating that every single sproc have an error handler is excessive.
These coding standards are brand new. If we get the contract, this will be the very first project to use them. So, consistency with existing applications is not an issue.
|
|
|
|
|
It's up to you and your client. I still maintain they are called standards for a reason.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
Sure, I believe in standards too. In fact, the standards I used on my last project were actually stricter. But unless someone can come up with a valid justification, this particular standard is a bad one.
|
|
|
|
|
Wait a second! Does this error handling strategy even work? When I execute the following stored procedure, I get an error on the RAISERROR statement.
CREATE PROCEDURE dbo.spErrorHandlingTest1
AS
BEGIN

DECLARE @Error INT
DECLARE @myInt INT

SELECT @myInt = 1/0

SELECT @Error = @@ERROR

IF @Error > 0 GOTO EH

RETURN 0
EH:
RAISERROR(@Error, 16, 1)
RETURN @Error

END
Msg 8134, Level 16, State 1, Procedure spErrorHandlingTest1, Line 8
Divide by zero error encountered.
Msg 2732, Level 16, State 1, Procedure spErrorHandlingTest1, Line 16
Error number 8134 is invalid. The number must be from 13000 through 2147483647 and it cannot be 50000.
|
|
|
|
|
I think it works as expected. Since you are passing the original error number (8134) forward as the message id in a custom RAISERROR, you'll have trouble. From Books Online:
msg_id
Is a user-defined error message number stored in the sys.messages catalog view using sp_addmessage. Error numbers for user-defined error messages should be greater than 50000. When msg_id is not specified, RAISERROR raises an error message with an error number of 50000.
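One way around this (a sketch, reusing the GetCustomerCount name from the earlier air code) is to keep the original number out of the msg_id slot and embed it in the message text instead. When the first argument is a string, RAISERROR uses error number 50000, which is always legal:

```sql
-- Sketch: re-raise under a legal number by putting the captured
-- @@ERROR value into the message text via %d substitution.
DECLARE @Error int
DECLARE @myInt int

SELECT @myInt = 1/0
SELECT @Error = @@ERROR

IF @Error > 0
    RAISERROR('Error %d in GetCustomerCount', 16, 1, @Error)
```

This preserves the original number for whoever reads the message, without tripping error 2732.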
|
|
|
|
|
You mean RAISERROR is working as expected and the calling code is incorrect, right?
|
|
|
|
|
Had to re-read my post and yes, that's exactly what I meant... and yes, the answer was confusing since I didn't point out what I was referring to. Sorry about that.
The problem is that you're using reserved error numbers in a custom RAISERROR call. If you change the first argument to, for example, 50001, you'll see it behaves differently.
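For a raw msg_id like 50001 to work, the message has to exist in sys.messages first. A sketch of that variant (message text is invented):

```sql
-- Register a user-defined message once per server, then raise it by
-- number; substitution arguments fill the %d placeholder.
EXEC sp_addmessage @msgnum = 50001, @severity = 16,
    @msgtext = N'spErrorHandlingTest1 failed with error %d',
    @replace = 'replace';

-- Later, in the error handler, with @Error captured from @@ERROR:
DECLARE @Error int
SET @Error = 8134
RAISERROR(50001, 16, 1, @Error)
```

Without the sp_addmessage step, RAISERROR with a bare msg_id fails because the message is not found.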
Mika
|
|
|
|
|
Sorry to revisit an old thread, but I happened to stumble across an article that relates to this topic. According to Microsoft's Design Guidelines for Exceptions:
"Do not overuse catch. Exceptions should often be allowed to propagate up the call stack."
http://msdn.microsoft.com/en-us/library/ms229005.aspx
Of course, this guideline is in reference to .NET code and not SQL, but it's still good advice.
|
|
|
|
|
I'm using the DAAB (Enterprise Library) to access the database. My problem is that I can't read output parameters when using a DataReader. That is, I can read a stored procedure's output parameter only when I use the ExecuteNonQuery or ExecuteScalar functions. When I use the ExecuteReader function, the output parameter is 0. Do you know how I can read an output parameter when using the ExecuteReader function?
here is a sample code:
var db = new SqlDatabase(ConnectionString);
string sqlCommand = "ReadItems";
DbCommand dbCommand = db.GetStoredProcCommand(sqlCommand);
db.AddInParameter(dbCommand, "@Count", SqlDbType.Int, count);
db.AddOutParameter(dbCommand, "@Total", SqlDbType.Int, sizeof(Int32));
var reader = db.ExecuteReader(dbCommand);
while (reader.Read())
{
}
int total = Convert.ToInt32(dbCommand.Parameters["@Total"].Value);
|
|
|
|
|
You need to close the reader before you can get the output parameters. Output parameters and the return value are sent back by SQL Server after all the result sets, so they aren't populated on the client until the reader has been closed (or fully consumed). It has always been this way.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
Does anyone know if it is possible to supply multiple queries and tables to exp, so that the result is a single export file?
e.g. something like
exp USER/passwd file=db.dmp tables=a,b,c query=[different restrictions on tables a, b and c]
?
thanks!
|
|
|
|
|
Sounds like Oracle.
If it is, you can specify multiple tables for exp, but only a single query. When used this way, the query must be applicable to all the tables specified.
If the version of Oracle you use is 10g or higher, you should use Data Pump. It's capable of exporting several tables with different queries.
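With Data Pump, the per-table filters are easiest to express in a parameter file, since QUERY clauses can be qualified by table name. A sketch (file name, tables, and predicates are made up):

```
# Data Pump parameter file (sketch); run as: expdp USER/passwd parfile=exp.par
DUMPFILE=db.dmp
TABLES=a,b,c
QUERY=a:"WHERE created > SYSDATE - 30"
QUERY=b:"WHERE status = 'ACTIVE'"
# table c is exported unfiltered
```

Tables without a qualified QUERY clause are exported in full.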
Mika
The need to optimize rises from a bad design.
My articles[ ^]
|
|
|
|
|
Dear all,
I set up transactional replication with updatable subscriptions, plus merge replication, between two SQL 2005 servers. Everything was OK and working fine, but the publisher/subscriber link broke. I tried to rebuild the replication, because both databases (at the publisher and at the subscriber) were still OK. The rebuild was successful and two-way replication was re-established, but data changed at the subscriber site is not replicated to the publisher; only data changed at the publisher is replicated to the subscriber.
How can I solve this problem, i.e. replicate data from the subscriber to the publisher after rebuilding the broken replication?
best regards and thanks
thaar
|
|
|
|
|
Thanks, Abdul Aleem, for your suggestion.
|
|
|
|
|
Hi all,
I have a requirement to import an Excel sheet into PostgreSQL using C#.
I wrote this code:
try
{
connection = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;" + @"Data Source=" + txtfilename.Text + ";" + "Extended Properties=Excel 8.0;");
mycommand = new OleDbDataAdapter(@"select * from [Ranjit$]", connection);
mydataset = new DataSet();
mycommand.Fill(mydataset, "ExcelInfo");
try
{
dataGridView1.DataSource = mydataset.Tables["ExcelInfo"].DefaultView;
}
catch (Exception ex)
{
//System.Diagnostics.Debug.WriteLine(ex.GetBaseException().ToString());
//MessageBox.Show(ex.GetBaseException().ToString());
throw new Exception(ex.Message);
}
So mydataset is displaying data in the DataGridView; this is working fine.
Now I have to send the data to the PostgreSQL database.
For that I wrote this code:
NpgsqlConnection strconn = new NpgsqlConnection(@"server=localhost;user id=postgres;password=thinksoft10@;database=Test;SyncNotification=true ");
strconn.Open();
NpgsqlCommand cmd = new NpgsqlCommand("COPY \"Test\" FROM STDIN", strconn);
NpgsqlCopyIn cin = new NpgsqlCopyIn(cmd, strconn);
cin.start();
but this only sets up a COPY ... FROM STDIN operation.
So please tell me how to copy that information into the PostgreSQL database.
In my application I am able to show the Excel sheet data in the DataGrid, but I am not able to send the Excel sheet data to the test table in the PostgreSQL Test database.
Any help is appreciated!
Thanks in anticipation,
Ranjit.balu
|
|
|
|
|
Don't cross-post. It's considered rude.
The need to optimize rises from a bad design.
My articles[ ^]
|
|
|
|
|
In MS SQL Server, I have found that the primary key is numeric / auto-generated in many scenarios. Is the primary key made numeric or auto-generated, rather than VARCHAR, to improve performance?
Or is there some other reason to make the primary key numeric?
ch sriniw8z
|
|
|
|
|
Autogenerated (identity) columns are used as surrogate keys. Refer to: http://en.wikipedia.org/wiki/Surrogate_key[^]
It's good practice to use a surrogate key so that changes in the actual data won't affect keys (both primary and foreign keys). Also, a numeric datatype is a bit more efficient than character types and typically uses less space. But the main point is that it is not derived from the data.
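A minimal sketch of the idea (table and column names invented): the IDENTITY column is the primary key, while the natural business value gets a unique constraint, so it can change without touching any foreign keys:

```sql
-- Surrogate key sketch: CustomerId carries no business meaning, so a
-- customer's code can change without cascading into foreign keys.
CREATE TABLE dbo.Customers
(
    CustomerId   int IDENTITY(1, 1) NOT NULL,
    CustomerCode varchar(20)        NOT NULL,  -- natural/business key
    Name         nvarchar(100)      NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY (CustomerId),
    CONSTRAINT UQ_Customers_Code UNIQUE (CustomerCode)
);
```

Child tables would reference CustomerId, never CustomerCode.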
Mika
The need to optimize rises from a bad design.
My articles[ ^]
|
|
|
|
|
There are usually two reasons. First, performance: an int is quicker than a varchar/char as a primary key. Secondly, using an identity column (auto-generated) means no duplicate key checking is required.
Hope this clarifies things for you.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
Another reason is that if your primary key is also the clustered index, which it is by default, the rows are physically ordered by the key. If you use a character data type you can cause the database engine to do a lot of disk IO re-ordering the data on inserts. This may not be noticeable for tables without many inserts or while they are small, but for high-activity tables, especially if they get large (millions of rows), you'd want to stay away from it.
Say, for example, your primary key column is varchar(10) and you started with 'a' as the first item's key value and went up through 'zzzzzzzzzz'. If you insert 'a' and then insert 'zzzzzzzzzz', they would be stored physically in that order. Anything you insert afterwards may cause the database to move rows around on disk to keep the primary key columns in order. I say "may" because whether the engine has to create a new page, move data from one page to another, etc. depends on how full a page is.
I worked on a project where the primary key columns were char(32) (a GUID without the dashes) and this situation, with the clustered indexes, wreaked so much havoc on performance that on almost every frequently-used table the primary key column's index was changed from clustered to non-clustered. Of course, that design was in place before I looked at it and I would highly discourage anyone from doing that. Personally, I've never found a reason not to use either int or bigint set as identity 1, 1 for the primary key.
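The mitigation described above can be sketched like this (names hypothetical): keep the char(32) primary key but make it nonclustered, and cluster on an ever-increasing column instead, so new rows append at the end rather than splitting pages:

```sql
-- Sketch: avoid clustering on a random char(32) key. The ever-increasing
-- identity carries the clustered index; the GUID key stays nonclustered.
CREATE TABLE dbo.Orders
(
    OrderKey   char(32) NOT NULL,            -- GUID without dashes
    InsertedAt datetime NOT NULL DEFAULT GETDATE(),
    RowId      int IDENTITY(1, 1) NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderKey)
);

CREATE UNIQUE CLUSTERED INDEX CIX_Orders_RowId ON dbo.Orders (RowId);
```

As the post says, though, starting with an int or bigint identity as the primary key avoids the problem entirely.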
Keep It Simple Stupid! (KISS)
|
|
|
|