|
I concur. Further, once the exception has been handled there should (normally) be no need to rethrow it.
This rule leads to a useful guideline: catch exceptions at the lowest level where they can be meaningfully (i.e. completely) handled.
The exception to the no-need-to-rethrow rule is when some cleanup action is needed as the stack is unwound. In that case, catch the specific type of exception, do the local cleanup only, and rethrow (ideally using throw without a parameter, to preserve the stack trace). Do not duplicate the work that will be done in the can-meaningfully-handle-it catch block (such as logging, notifying the user, or clearing context).
Note that the using (...){...} construct implicitly follows this rule: whether or not the contained block of code throws an exception, it will automatically invoke the Dispose() method on the object created by the using() statement.
|
|
|
|
|
Yes, I always liked C++'s RAII (Resource Acquisition Is Initialization) approach for exactly this reason. Every scope in C++ acts as an implicit "using" for all names declared within it, and the destructor is implicitly called when the scope exits. This gives deterministic resource management, but does leave more room for developer mistakes than managed languages.
While "using" does allow this (to a degree) in C#, applying it to every resource would sometimes result in a cascade of nested "using" blocks. In C++, it's all automatic.
Personally, I like to think I don't need my hands held by the compiler all the time.
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
Alan Kay.
|
|
|
|
|
Is that to shorten the test period? IMHO one should go for defensive programming:
public double Divide(double term1, double term2)
{
    double division = double.NaN;
    if (term2 != 0)
    {
        division = term1 / term2;
    }
    else
    {
        division = double.PositiveInfinity;
    }
    return division;
}
instead of try/catch the DivideByZeroException.
|
|
|
|
|
Yes, that's applicable for exceptions such as the one you mentioned (DivideByZero), because the developer already knows about it and should handle it correctly in code like your example.
What about system exceptions like MemoryOverflow, etc.? Handling those is not in the developer's control.
Understand SOLID! Believe SOLID! Try SOLID! Do implement live SOLID; your Code base becomes Rock SOLID!!!
http://www.codeproject.com/Articles/593751/Code-Review-Checklist-and-Guidelines-for-Csharp-De
|
|
|
|
|
But shouldn't the developer *know* if a MemoryOverflow is a likely occurrence in a piece of code?
Otherwise we're saying that the underlying systems and framework are so flaky that our apps might just randomly bug out...
That can't be a sensible approach to take surely??
For general code in most apps you have to trust the OS and framework will do as it always does.
|
|
|
|
|
Yes, the developer does not know when system resources like memory have crossed some threshold, or when the system is short of memory. How does he/she come to know? It can happen regardless of whatever code is being written.
Please, somebody correct me if my perception is not correct.
|
|
|
|
|
Memory exceptions, about 90% of the time, come from memory leaks caused by not releasing objects.
There are a few occasions where you could anticipate them and act accordingly.
Furthermore, I hope there is some testing involved before going into production; any memory issues will come up there and you can act accordingly without necessarily needing exceptions.
Mohammed Hameed wrote: my perception is not correct.
it is not about right or wrong
|
|
|
|
|
The system can still throw these kinds of exceptions even though resources have been disposed/released.
This might happen if the system's overall memory overflows.
|
|
|
|
|
V. wrote: IMHO one should go for defensive programming:
public double Divide(double term1, double term2)
{
    double division = double.NaN;
    if (term2 != 0)
    {
        division = term1 / term2;
    }
    else
    {
        division = double.PositiveInfinity;
    }
    return division;
}
If that's C#, you don't need any of that code. If you divide a double by 0, you won't get a DivideByZeroException; you'll get double.PositiveInfinity (or NegativeInfinity, or NaN for 0/0).
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
That could very well be, didn't test that.
|
|
|
|
|
I am catching the top level exception in a large chunk of my current application. I have two reasons for this, but I'm willing to listen to anyone who thinks this is "wrong" and learn a better development practice if I can:
1. I rely heavily on a third-party webservice that is very fickle and unpredictable. I have exception handling here when making calls to it so I can gracefully handle errors from this application for the user. However, even as I typed this I realized I should be throwing specific exceptions and not general ones, so I'll be refactoring that code.
2. In general I prefer to capture exceptions and log them, so when my users (these are internal applications) call and say they had a problem I can check the log file to help resolve the issue. With CustomErrors on, the end user normally can't provide me with much in the way of useful information, so I rely on my logs for production troubleshooting.
|
|
|
|
|
Very good question.
In my view, you can catch specific exceptions in the methods where you are sure what kind of exception(s) can occur. For the other methods, where you are probably not sure of all the possible exceptions that may occur, you can catch the top-level exception.
Obviously, in any case you need to log the exception details.
Note: I would recommend going through this entire thread; you will get more information. Also check this link: http://msdn.microsoft.com/en-us/library/vstudio/ms229005(v=vs.100).aspx
http://authenticcode.com
|
|
|
|
|
Just remembered another stunning design stumble in the same area as the multiple database scenario ... they had a SQL table and decided that because the majority of requests required the output in XML format, it would be "best practice" (that weasel word again) to convert the entire table into a blob of XML. They then dropped the original table and recreated it as a table with 1 row and 1 column, into which they inserted the XML blob.
Unfortunately they had not realised that most requests wanted a subset of the table, not the entire table. So for each request, the XML blob had to be converted to a temp table, the appropriate SELECT run on it, and the results sent back to the requester as XML ... Phew!! Inserts, updates and deletes had to go through the same process: XML blob -> temp table -> apply insert/update/delete -> convert back to XML blob, save back in the "table".
|
|
|
|
|
Precious...
Are you still working there?
Careful, it might be contagious...
|
|
|
|
|
Haha, I'm too lazy a programmer to make work for myself like that
I'm on contract
|
|
|
|
|
|
Yeah, that basically sums it up
|
|
|
|
|
johnsyd wrote: Inserts, updates and deletes had to go through the same process: XML blob -> temp table -> apply insert/update/delete -> convert back to XML blob, save back in the "table".
Presumably without any locking to ensure that concurrent changes don't get lost?
|
|
|
|
|
Enterprisey. It's like the modern version of emulating a 1-M relationship using a delimited string in a text column.
|
|
|
|
|
I work for a large company which has an odd habit of splitting off logically related tables into separate databases. So instead of one database ABC, you have databases ABC_CLIENT, ABC_PORTFOLIO, ABC_PRICING, etc. To join client data to their own portfolio data, you need to go cross-database, and including pricing data means yet another cross-database connection.
A friend who works for another large company in the same industry says that his company thinks this is "best practice". No one I've talked to thinks this is a good idea, let alone "best practice". What do you think?
|
|
|
|
|
Client and pricing information in separate databases.
Wait until the company grows and the number of transactions gets to several million per day.
|
|
|
|
|
Exactly -- they get away with it because this particular product has a limited number of clients and limited transaction volumes.
|
|
|
|
|
|
Our company (banking industry) has over 500'000 credit cards (at the time I worked in that area, so by now it could have doubled), bank accounts, etc., split between credit card, bank and hybrid (for both) databases, but that's it so far in the DB2 area. There are some Oracle DBs used mainly internally and some MS SQL DBs used for internal intranet web sites, but that's OK because they're totally unrelated.
The signature is in building process.. Please wait...
|
|
|
|
|
I kind of already faced a similar problem with SOA architectures.
If you split your business logic across several self-contained services, sooner or later you'll end up needing to show, on a grid or a report, data that comes from several services.
You shouldn't, even if it's possible, do joins across services' databases, as that breaks the decoupling principle, but when performance starts to be an issue... you know how the story goes from there, don't you?!
So the only reason I see here (even if it's not a good idea) is an attempt to implement this "SOA" concept but only at the DB level (say... Service Oriented Databases?), separating "services" by database.
Now I'm curious to know if I'm right!
|
|
|
|