|
MrMarco wrote: one of my workmates:
Thanks to him for the explanation and to you for posting it here... I find it awesome that a true reason has been found to justify the mere existence of this line of code.
MrMarco wrote: The macro you see is a workaround for this
So, we are speaking about using awful code to counter awful bugs. Probably the climax of coding horror (well, no, it could have been VB).
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus
Do not feed the troll ! - Common proverb
|
Rage wrote: So, we are speaking about using awful code to counter awful bugs.
That was really a feature, not a bug (see my other comment in this thread). It became a bug only after the Standard was released.
|
MrMarco wrote: That's a workaround for one of the more braindead shortcomings of Microsoft Visual C++ 6.
Exactly!
However, in defense of good ol' VC6, it was released before the Standard, and in the early 1990s some other compilers were doing the same thing.
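For readers who haven't met it, the workaround being discussed is commonly written along these lines (a sketch of the widely cited form; the exact macro from the original post isn't shown here and may differ):

```cpp
// VC6 implemented the pre-standard rule: a variable declared in the
// for-init-statement stayed visible after the loop, so two loops in the
// same block could not both declare "int i".  The classic workaround
// redefines "for" so that each loop hangs off its own if/else, which
// limits the variable to the loop, as the final standard requires.
// The "if (0) {} else" form (rather than a plain "if (1)") also keeps
// a trailing "else" in user code from binding to the wrong "if".
#define for if (0) {} else for

void twoLoops()
{
    for (int i = 0; i < 10; ++i) { /* ... */ }
    for (int i = 0; i < 10; ++i) { /* ... */ }  // compiles on VC6 once the macro is active
}
```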
|
Interesting and enlightening... but then, just for the sake of a little simplicity, it could have been written like
#define for if(true) for
and yes, this thing should have been documented.
I saw a similar thing somewhere else where there was code like..
do
{
}
while(0);
This was just to define a scope for the variable declared within the block. MS VC compilers do allow braces without any keyword preceding them, but in some flavors of C/C++ you just can't put braces in your code without a construct.
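A minimal sketch of the two forms being contrasted (the buffer name is hypothetical, purely for illustration):

```cpp
void process()
{
    {                       // bare block: legal in both C and C++,
        char buf[64];       // 'buf' exists only inside these braces
        // ... use buf ...
    }

    do {                    // same scoping effect expressed as do/while(0),
        char buf[64];       // for toolchains or coding standards that frown
        // ... use buf ...  // on braces with no keyword in front of them
    } while (0);
}
```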
|
That's a useful coding method to avoid using goto: a failure is followed by a break, and you jump out of the loop.
However, I use non-keyworded braces myself for local vars, but also to bracket code that has a particular purpose, or that I want to be thought of as distinct from the rest.
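A quick sketch of that break-on-failure shape (the helper functions are hypothetical, defined trivially here just so the sketch compiles):

```cpp
// Hypothetical helpers, named only for illustration.
bool openDevice()  { return true; }
bool loadConfig()  { return true; }
void cleanup()     {}

bool initialise()
{
    bool ok = false;
    do {
        if (!openDevice())
            break;        // failure: skip the remaining steps, no goto needed
        if (!loadConfig())
            break;
        ok = true;        // every step succeeded
    } while (0);          // the "loop" body runs exactly once

    cleanup();            // single common exit path
    return ok;
}
```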
Morality is indistinguishable from social proscription
|
A fair criticism, but remember that Visual C++ 6.0 was written before the standard was actually finalized, and at the time it was released the change to for-loop variable scoping was one of the breaking changes.
|
Similar to the macro style used for macros that need to be a complete statement.
Eg:
#define ASSERT(f) \
do { \
if (!(f) && assertFailedOnLine (THIS_FILE, __LINE__)) \
FatalExit (0); \
} while (0)
This forces the 'user' to use ASSERT(x) only as a statement, since the trailing ; is required.
See the link below for more info, or Google for "while(0)".
http://www.thescripts.com/forum/thread215019.html
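To make the point concrete, here is a small sketch (macro names invented for illustration) of why the do/while(0) wrapper matters once the macro is used inside an if/else:

```cpp
#include <cstdio>

// Brace-only version: the semicolon the caller naturally writes after the
// macro leaves an empty statement behind, so a following 'else' has no 'if'.
#define LOG_BRACES(msg)   { std::printf("%s\n", msg); }

// do/while(0) version: expands to a single statement that requires the
// trailing semicolon, so it nests cleanly inside if/else.
#define LOG_SAFE(msg)     do { std::printf("%s\n", msg); } while (0)

void report(bool failed)
{
    // if (failed) LOG_BRACES("failed"); else LOG_SAFE("ok");   // does not compile
    if (failed) LOG_SAFE("failed"); else LOG_SAFE("ok");        // compiles fine
}
```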
|
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
A few years back, I used to work on an 'enterprise' system that touted itself for the 'increased' data accuracy that it provides its clients, and one day, my employers wanted me to change their DB schema to accommodate a new feature for their system, except there was one problem: the database had no referential integrity! Each table had a primary key and some foreign keys pointing to other tables, but none of the tables were actually linked together. When I asked the 'senior' programmer why they did this, his explanation was that their system maintained the links automatically, despite the fact that the DB itself was designed to have 'soft' deletes, and none of these soft deletes actually cascaded across the entire system. When I browsed the entire code base, however, there was nothing to indicate this sort of behavior. In short, the whole DB (and the application) was a mess, and not even the upper management knew about it.
Now my first impression of this was a "WTF? That's just...immoral!", but it got me thinking...is not linking the DB tables together a viable strategy?
Traditional DBA wisdom (from "inside the box", so to speak) would say that enforcing referential integrity in the DB is important, but is it possible to do without it?
Anyway, here's my question: Is it a horror, or not? And if it isn't a horror, why would you say it isn't?
|
Once, in my previous company, in a system that someone else maintained, all primary keys and relationships were removed because "they causes problems when they need to patch data (due to bugs such as multiple rows were inserted) by using scripts"......
|
darkelv wrote: "they causes problems when they need to patch data (due to bugs such as multiple rows were inserted) by using scripts"......
That management should actually adopt the policy of abolishing all SQL Server licenses and prohibiting RDBMS in their realm. They can simply live with plain-vanilla text files, which would save them a fair amount in software license costs, recurring DBA charges and more.
Vasudevan Deepak Kumar
Personal Homepage Tech Gossips
A pessimist sees only the dark side of the clouds, and mopes; a philosopher sees both sides, and shrugs; an optimist doesn't see the clouds at all - he's walking on them. --Leonard Louis Levinson
|
Vasudevan Deepak K wrote: That management should actually adopt the policy of abolishing all SQL Server licenses and prohibiting RDBMS in their realm.
Maybe that management consists of a bunch of drunken lemurs.
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
The scary part is that they're a Microsoft Certified Gold Partner.
|
Well, that explains everything then! It is by design!!
|
At Microsoft, it's never a bug. It's just a Microsoft Certified "Gold" Feature...:P
|
That made my day!
|
Enforcing referential integrity takes clock cycles, and this is where you end up getting into a battle with DBAs. A DBA will typically point out that it is up to your application to ensure integrity, but you argue back that you have the tools in the database to do it - so why not let the database do what it is designed for? In some cases, the DBA has a point, because they have a legacy database where the referential integrity checking is a real kludge (i.e. slow). In more modern DBs, though, referential integrity checks are performed much more quickly (generally via a quick index scan).
Now, the issue becomes how to react to a referential integrity problem, and this becomes an architectural issue. If you leave it to the database to inform you, then you've gone through the whole process of submitting the data and waiting for the database to verify (or not) that the operation has succeeded. If it fails, you have to notify the user/do some remedial work. If your application checks the integrity, though, then theoretically this becomes less of an issue. There is a problem with this line of thinking, though - you could only guarantee this if the database were single-user; in the time between you performing the check and you actually attempting the insert (or update), the record could have been deleted, at which point you've broken the integrity rules. Another issue boils down to this - if you leave it to your code to check the integrity, then EVERY update/insert/delete statement must check the integrity (and in the case of deletes this can be across multiple tables - which means your selects must be redone every time a new table is added into the referential mix).
Bottom line - the DB provides the tools to do this. It's efficient, and means you don't have to worry about forgetting to perform a referential check.
|
Pete O'Hanlon wrote: Bottom line - the DB provides the tools to do this. It's efficient, and means you don't have to worry about forgetting to perform a referential check.
There must be a harmonious combination of the application and the database to minimize the heartburn on both sides.
Vasudevan Deepak Kumar
Personal Homepage Tech Gossips
A pessimist sees only the dark side of the clouds, and mopes; a philosopher sees both sides, and shrugs; an optimist doesn't see the clouds at all - he's walking on them. --Leonard Louis Levinson
|
They should both do the checks. Your application should send the type of information the database wants, and the database should expect a specific type of data.
The best way to accelerate a Macintosh is at 9.8m/sec² - Marcus Dolengo
|
Expert Coming wrote: They should both do the checks.
I disagree.
Expert Coming wrote: the database should expect a specific type of data.
It should expect nothing. Like any code, the caller should never be trusted (unless of course you are the guaranteed only caller).
|
Expect isn't the right word, but I do think that the database needs to know what it is storing, and the application needs to know what kind of data the database wants.
The best way to accelerate a Macintosh is at 9.8m/sec² - Marcus Dolengo
|
Pete O'Hanlon wrote: it is up to your application to ensure integrity
Yeah, on a previous job (using RDB on OpenVMS) we had referential integrity on the dev systems only; the code was expected to be correct and fully-tested before it was deployed to production, so the database needn't check.
They also said that metadata slows down the database, so I wasn't allowed to create functions in the database.
Now that I get to use SQL Server, I do set up referential integrity... but turning on cascaded deletes still feels like cheating.
|
PIEBALDconsult wrote: but turning on cascaded deletes still feels like cheating
It feels dirty - so dirty. And it's one of the reasons we don't do deletes - we use statuses to control whether a record is visible or not (and that way we don't worry about accidentally deleting something important).
|
Pete O'Hanlon wrote: don't do deletes
I agree with that.
|
In my line of work, speed is not as much of an issue as robustness and making the solution as error-proof as possible. So I think the database and the application that uses it must both be able to gracefully handle whatever crap is thrown at them (i.e. checks on both sides).
That works for me and is my opinion based on my experience so far. Of course, I'm always open to well-argued ideas.
|