Throwing thousands is not what I call "expensive"; trying to shave off 1 ms because it is "faster" does not justify not using them.
And there are enough places where an exception is actually measurably cheaper than the alternative. I stated an example thereof; where are your arguments?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
Eddy Vluggen wrote: Throwing thousands is not what I call "expensive";
Your opinion isn't relevant; exceptions are considered to be expensive operations because they gather a lot of information that isn't needed if you're simply using them as a form of validation.
When there are two ways of doing something and one is ten times quicker than the other, it's a no-brainer. Especially when we're talking about websites that might have tens or hundreds of thousands of concurrent users; those 100 ms here and there really add up.
Eddy Vluggen wrote: And there are enough places where an exception is actually measurably cheaper than the alternative.
That's a straw-man argument you keep coming back to. Again, no-one is saying that exceptions are never the proper solution, we are talking about a specific implementation.
F-ES Sitecore wrote: Your opinion isn't relevant, Perhaps I should explain the difference between a measurable benefit and an opinion?
F-ES Sitecore wrote: exceptions are considered to be expensive operations, they gather a lot of information that isn't needed if you're simply using it as a form of validation. An insert is not a validation-routine. Yes, you can try to not use an exception, but if it complicates the code for an unmeasurable "speed optimization", you are still writing crappy code.
F-ES Sitecore wrote: Two ways of doing something where one is 10 times quicker than the other, it's a no-brainer. Especially when we're talking websites that might have tens or hundreds of thousands of concurrent users; those 100ms here and there really add up. They do not "add up", unless you are using exceptions for simple logic.
F-ES Sitecore wrote: That's a straw-man argument you keep coming back to That must be why you came up with the webserver-example.
F-ES Sitecore wrote: Again, no-one is saying that exceptions are never the proper solution, we are talking about a specific implementation. Does this specific implementation contain your webserver?
Eddy Vluggen wrote: An insert is not a validation-routine. Yes, you can try to not use an exception, but if it complicates the code for an unmeasurable "speed optimization", you are still writing crappy code.
The code in question regards checking whether an item is in a collection, i.e. it is validating the contents of the collection.
Eddy Vluggen wrote: if it complicates the code for an unmeasurable "speed optimization", you are still writing crappy code.
It does the opposite.
// Check-first: add the item if it's missing, then assign
if (!data.Contains(i))
{
    data.Add(i);
}
data[i].Value = x;
The above is clear to anyone who reads it what the rules and logic are.
// Exception-based: assume the item exists; add it only when the indexer throws
try
{
    data[i].Value = x;
}
catch
{
    data.Add(i);
    data[i].Value = x;
}
The above is less obvious, doesn't read as well and is more ambiguous.
So not using exceptions in this instance makes the code clearer and 10x faster. It's a no-brainer.
Eddy Vluggen wrote: They do not "add up", unless you are using exceptions for simple logic.
Which is what we're talking about; using exceptions to dictate predictable and expected logic flow.
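The cost ratio under dispute is easy to sanity-check. Here is a rough micro-benchmark sketch in Python rather than the thread's C# (a dict stands in for the keyed collection; the function names are my own, and the exact multiplier varies by language and runtime, so treat any numbers as illustrative only):

```python
import timeit

data = {i: i for i in range(100)}

def with_check(key):
    # Look-before-you-leap: membership test before access
    if key in data:
        return data[key]
    return None

def with_exception(key):
    # Exception-driven: let the failed lookup raise
    try:
        return data[key]
    except KeyError:
        return None

# Time the miss path, where the try/except actually throws
checks = timeit.timeit(lambda: with_check(-1), number=100_000)
throws = timeit.timeit(lambda: with_exception(-1), number=100_000)
print(f"check: {checks:.4f}s  exception: {throws:.4f}s")
```

On the hit path the two are close; the gap opens up on the miss path, which is exactly the case being argued about.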
F-ES Sitecore wrote: Which is what we're talking about; using exceptions to dictate predictable and expected logic flow. Wasn't said in those words, so you could expect me to pound on the obvious. Also doesn't have anything to do with your webserver, does it?
It doesn't have anything to do with web servers, no. That was just an example of why performance gains can matter even if they are small. I know you're trying to drag the discussion away from the technical because your technical arguments have fallen flat.
F-ES Sitecore wrote: It doesn't have anything to do with web servers, no. That was just an example of why performance gains can matter even if they are small. Which is why I already explained, with a simple example, that adding to a collection can be expensive; call that a straw-man argument if you will, but it is no different from the webserver-example.
F-ES Sitecore wrote: I know you're trying to drag the discussion away from the technical because your technical arguments have fallen flat. If it had, you'd be quoting it and explaining why.
But then, I can always appreciate a thread where we argue for arguing's sake.
Eddy Vluggen wrote: That must be why File.Open doesn't throw any exceptions if you read a non-existing file
You're conflating contexts. Think of things from the perspective of the class. A File class' only purpose is to manipulate a file. Singular. A developer specifies a path. You have every expectation that the file should exist. Now you're a Collection class. You handle an object set. Plural. A developer specifies an object. You have two options now: 1) the object set is well-defined, so the object should exist; 2) the object set is not well-defined, so the object might exist.
The big difference is that the Collection relies on the properties of the set while the File relies on the properties of the individual.
Eddy Vluggen wrote: Whether or not the existence of the item is expected or unexpected is up to the programmer
Collections are more abstractly complex than Files. The properties of the collection set itself determine the optimal (or only) approach. Not the developer.
Eddy Vluggen wrote: lots of designs where I can safely assume the item to exist, and where it not existing WOULD be a logical error.
Precisely my point. Each approach has a logical and/or mathematic[^] benchmark from which you can determine the appropriate approach[^].
Eddy Vluggen wrote: The idea that one should avoid using exceptions is simply wrong.
Agreed. Unless it's appropriate. In which case it's simply inefficient to use exceptions.
EDIT: Better words. The best words. Kappa.
modified 18-May-18 4:29am.
Jon McKee wrote:
You're conflating contexts. Think of things from the perspective of the class. A File class' only purpose is to manipulate a file. Singular. A developer specifies a path. You have every expectation that the file should exist. Now you're a Collection class. You handle an object set. Plural. A developer specifies an object. You have two options now: 1) the object set is well-defined, so the object should exist; 2) the object set is not well-defined, so the object might exist.
The big difference is that the Collection relies on the properties of the set while the File relies on the properties of the individual. Lots of nonsense.
Jon McKee wrote: Collections are more abstractly complex than Files. So, a string is more complex than a SQL Server database-file now? And the string determines your approach?
Jon McKee wrote: Agreed. Unless it's appropriate. In which case it's simply inefficient to use exceptions. The only example I have seen that could be called "inappropriate" is using a try-catch for a single check. Whether it is efficient depends on how often an exception is raised.
Eddy Vluggen wrote: Lots of nonsense.
Nah, was trying to simplify a thought and failed I guess. A File is a bad example because it accesses a collection at the end of the day too; it isn't truly singular.
I was attempting to explain that a collection, because it's a set of elements, has intrinsic properties that an element that isn't itself a collection doesn't have. These properties can (help) determine lots of design decisions including the best method of accessing elements of that collection. That's why File.Open throws exceptions yet File.Exists is still made available - they're for sets that have different properties. If you deal with a file or files which frequently don't exist, you can use that check instead of exceptions. If you deal with a file or files which almost always do exist, you can rely on exceptions.
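For illustration, the two access styles can be put side by side. This is a sketch in Python rather than the .NET File API, with hypothetical helper names; note that the check-first version carries a race window between the existence test and the open:

```python
import os

def read_if_exists(path):
    # Check-first style: suits files that frequently don't exist.
    # Race window: the file can vanish between the check and the open.
    if os.path.exists(path):
        with open(path) as f:
            return f.read()
    return None

def read_expecting(path):
    # Exception style: suits files that almost always exist;
    # nothing extra is paid when the open succeeds.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None
```

Same result either way; which one is cheaper depends on how often the file is actually missing.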
Jon McKee wrote: If you deal with a file or files which frequently don't exist, you can use that check instead of exceptions. If you expect that the file might not exist, then it not existing is not really something exceptional.
In an environment with a lot of people manipulating the file system, relying on the exception might be cheaper than doing the actual check (which might have race conditions attached). In a database the difference can become more pronounced, depending on how many additional checks the database server has to make.
Of course, one does not use an exception-handling routine instead of a logical if.
Eddy Vluggen wrote: Best practice means not to discourage the use of exceptions, simply because someone thinks that they slow the system. As you can see, it doesn't take much time to invoke the entire exception stack, as it can be done several thousand times in a second.
I've seen a few exceptions that consumed viscerally measurable time, but they were thrown deep in the stack, and the first exception handler that could catch them was in or near the main routine, many layers up the call stack. Hence, there was a lot of stack to unwind.
Eddy Vluggen wrote: Too many idiots avoiding exceptions altogether and using booleans instead If it is an error, throw an exception; it is that simple.
Too many people being too lazy to use their God given brains as intended. Programming is, after all, all about thinking. Best practices are intended to serve as rules of thumb, not ironclad rules.
David A. Gray
Delivering Solutions for the Ages, One Problem at a Time
Interpreting the Fundamental Principle of Tabular Reporting
An Exception that doesn't get thrown costs nothing.
A double-check for something that exists doubles your cost of retrieving it.
Looking up parameters by name is a costly operation either way, and it is easy enough to avoid anyway.
BUT, as I have to have such a method available, I choose to avoid the double-check.
I've been using ADO.net since 2002 and not until this week did I find a situation in which I might want to get a parameter by name.
(The method can be marked Obsolete with a message recommending the developer seek a better way.)
Any Exceptions thrown by such a lookup should be a sign of a bug. Once the bug is fixed, there will no longer be any attempts to retrieve a non-existent item -- at which point the double-check has the greater cost. If it makes sense for the calling code to check first, then that's a better place for the check.
I think you answered my question. Sounds like you have a well-defined set of values so not finding it is a rare occurrence. But what I was getting at is this:
private IDbDataParameter ExistsStuff(IDbCommand CMD, string Name) =>
(System.Data.IDbDataParameter) CMD.Parameters[Name];
private IDbDataParameter DoesntExistStuff(IDbCommand CMD, string Name)
{
    // CreateParameter() has no object-initializer form; set the name afterwards
    IDbDataParameter result = CMD.CreateParameter();
    result.ParameterName = Name;
    CMD.Parameters.Add(result);
    return result;
}
if (CMD.Parameters.Contains(Name))
result = ExistsStuff(CMD, Name);
else
result = DoesntExistStuff(CMD, Name);
try
{
result = ExistsStuff(CMD, Name);
}
catch ( System.IndexOutOfRangeException err )
{
result = DoesntExistStuff(CMD, Name);
}
In order to understand the differences, the statements they share in common were yanked out to normalize things. The main difference is as you said: the if front-loads the cost while the exception back-loads it. My other post[^] shows some code that tested Contains vs exceptions. Both that and another test I ran later using an invalid access to trigger the exception showed that exceptions are about 13 to 17 times as computationally expensive as a check.
Using that information, we can describe the functions mathematically. Shared code will be weighted as a 0 since it's shared (the code inside the if/try and else/catch). The if...else comes in at a weight of 1 for both branches since the check is required for both. The try...catch comes in at a weight of 0 for no exception and 15 for the exception branch. Now we'll use the variables x and y to denote the ratio that each branch is visited. Slap it all together and we get:
if/else: (1x + 1y)
try/catch: (0x + 15y)
Balance point: x + y = 15y => x = 14y => 1:14
This ratio shows that for the two methods to be equivalent x must occur 14 times as much as y . So for this example, 93.3% or more of the calls must generate no exception or you'd be better off with a check.
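The arithmetic above can also be written down as a tiny cost model. Here's a sketch in Python with my own function names, taking the 15x throw cost from the measurements mentioned above (the real multiplier is runtime-dependent):

```python
def avg_cost_check(hit_ratio, check_cost=1.0):
    # if/else pays the membership check on every call: 1x + 1y
    return check_cost

def avg_cost_exception(hit_ratio, throw_cost=15.0):
    # try/catch pays only when the lookup misses: 0x + 15y
    return (1.0 - hit_ratio) * throw_cost

def break_even_hit_ratio(throw_cost=15.0, check_cost=1.0):
    # Solve check_cost == (1 - h) * throw_cost for the hit ratio h
    return 1.0 - check_cost / throw_cost

print(break_even_hit_ratio())  # ~0.9333, matching the 93.3% figure above
```

Above that hit ratio the exception style wins on average; below it, the check wins.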
Sure, but y approaches 0.
Otherwise, you're saying that you prefer to have buggy code run faster than bug-free code.
Something not being in a collection isn't by definition a bug. It might be in your case. If y should approach 0 with your collection then exceptions would be the way to go.
This has more to do with the data your collection services and the qualities of that data. If you have a unique set for your domain, then the collection will probably rarely run into an access outside of that domain. If the domain is generic then accesses outside of the registered or active parts of the domain will probably be much more common.
Well, perhaps "bug" was over-stating my opinion a bit; "poorly-developed" is probably a better term. As I stated in my edit, searching for a parameter by name (and probably for members of other collections as well) is a code smell. Best to avoid it. In many cases, the developer will know what's in a collection. This is true of the program I was working on: I know the parameter is in there, I even know that it's at index 0, so why search for it by name?
BUT, I write a lot of library/framework code that others might use (yeah, as if), so I need to keep "lesser developers" in mind.
As luck would have it, I was just re-reading a "List of Principles" of Programming Languages and saw this:
"Localized Cost: Users should only pay for what they use; avoid distributed costs." -- MacLennan, 1987
I would apply this to the current discussion as: "Programs which call GetParameter only when the parameter is known to exist should not have to pay for the double-checking that would benefit only other programs."
Is my code still violating that Principle? Yes. Because my GetParameter function tries to be provider-neutral, it has protection against calls that return NULL (e.g. Oracle) -- a program that uses SQL Server will still be paying that cost with no benefit to itself. However, the test for NULL is much less than the double-search.
I see where you're coming from given that example. I tend to enjoy writing more dev-oriented tools as well.
Abstractions such as labels/names don't bother me though. In my mind there's no difference between representing a key via integer or string if you're free to make the choice; they're both just abstractions in this context. The only requirement for a key is a unique set of bits, however that ends up being implemented.
I recently watched a great Computerphile[^] video touching on the subject of abstraction using assemblers. Apparently John von Neumann thought assemblers were harmful because they took more processing time; he considered the overhead wholly unnecessary, because you could simply address the program manually, making a double pass by the assembler a waste of time (double because you need to scan for forward-jumping labels first). It got me thinking about the more general debate over having multiple techniques that accomplish effectively the same thing, and how the advantages and disadvantages can sometimes be relative to the developer or the architecture.
If you are concerned about finding the maximal intersection of readability, performance and reliability, it is advisable to ALWAYS use the bool TryGet( key, out value ) pattern and avoid bool Contains( key ) like the plague.
Moreover, after a TryGet returns false, do not use the Add method, since it checks AGAIN for the existence of the item. Instead, use the set indexer to assign the value to the key.
if (!items.TryGetValue( key, out T value ))  // Dictionary&lt;TKey, T&gt;'s try-get method
{
    value = GetNewValueLogic( key );
    items[key] = value;  // the indexer assignment skips Add's duplicate existence check
}
Extremely readable, and it provides the best possible performance in all cases.
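The same single-lookup idea carries over to other languages. Here is a rough Python analogue of the try-get-then-indexer pattern; get_or_create and the sentinel are my own illustrative names (the sentinel lets None remain a legitimate stored value):

```python
_MISSING = object()  # private sentinel: distinguishes "absent" from a stored None

def get_or_create(items, key, factory):
    # One hash lookup on the hit path; the miss path adds via plain
    # indexer-style assignment rather than re-checking existence.
    value = items.get(key, _MISSING)
    if value is _MISSING:
        value = factory(key)
        items[key] = value
    return value
```

The factory is only invoked on a miss, so the hit path stays as cheap as a single lookup.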
A good suggestion; but the IDataParameterCollection interface doesn't provide a TryGet method. As far as I can see, none of the concrete implementations do either.
Personally, I think ConcurrentDictionary[^] provides a better API than the standard Dictionary:
T value = items.GetOrAdd(key, GetNewValueLogic);
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
Apologies. That's not an API that I was familiar with. I'm shocked that a TryGet isn't provided!
I had an evil idea so I ran this script on my local SQL server database:
CREATE PROCEDURE TESTPROC
AS BEGIN
    SELECT 1
    EXEC TESTPROC -- deliberately calls itself with no terminating condition
END
When I executed the stored procedure, it stopped after 32 levels of nesting. My evil plan failed and I learnt something new today.
"It is easy to decipher extraterrestrial signals after deciphering Javascript and VB6 themselves.", ISanti[ ^]
If you change to a recursive CTE, you can play around with OPTION (MAXRECURSION 100).
Each time a stored procedure calls another stored procedure or executes managed code by referencing a common language runtime (CLR) routine, type, or aggregate, the nesting level is incremented. When the maximum of 32 is exceeded, the transaction is terminated.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
I will take over the world before you do.
Just letting you know!