Comments by KP Lee (Top 14 by date)
Pretty much the way I would have done it, though maybe less efficiently than your code. Maybe dispense with the list altogether: just list one directory and the two directories inside it, put the files in the right directory, and list the files to be read. No maintenance of lists and files, just files.
You forgot .WriteXml(filename), which writes a DataSet into an XML file. You may have left it out on purpose because of your additional (valid) point?
First, the "Table_Training_Detalis" alias in your example doesn't exist, so it wouldn't work. Second, it is exactly an example of when NEVER to use a cursor.
UPDATE tw
SET WHIMIS = tc.DateExpires
FROM Table_Courses tc
JOIN Table_Workers tw
  ON tw.Worker_ID = tc.Worker_ID
 AND tw.WHIMIS != tc.DateExpires
This would be about 800 times faster than a cursor. If this is a high-volume system where IO contention is a concern, you can use another technique that is about 400 times faster.
I haven't seen any evidence of a lack of awareness of the cursor command, but I've seen plenty of evidence of when to NOT use a cursor being ignored. There are cases where a cursor is very useful, but that should be the exception instead of the rule. Think SET solution first, second, and third, then cursor.
I'm willing to learn. I ASKED if it was a bug. If you had mentioned "checked" to begin with, I'd have realized that the lazy math was intentional and not a bug at all. This is a complex language; it isn't possible to know everything from the start. Help is certainly NOT a mind-reader, and I'm not a mind-reader either when it comes to knowing which keyword will find the information I'm interested in. Even searching for the checked keyword in the index required browsing to find the compiler-level version, "/checked[+ | -]", versus the scoped checked/unchecked statements. (Lazy programming is not implementing something, for performance reasons, instead of allowing the program to do what it is supposed to in special cases.)
Yes, I know the hex value of the max int value. It exactly matches the binary value, which is 2^31 - 1, or one 0 followed by 31 1s. I don't bother memorizing it, and I didn't know bit shifting was implemented. I did bother to memorize 2^10, which is 1024 decimal. On a system that checks values, 1024*1024*1024 - 1 + 1024*1024*1024 is the easiest way for me to remember how to get there mathematically. (The MaxValue property is handy for that.)
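That arithmetic is easy to check. A minimal sketch in Java, whose 32-bit int is the same two's-complement type (Integer.MAX_VALUE standing in for C#'s int.MaxValue):

```java
public class MaxIntDemo {
    public static void main(String[] args) {
        // 1024*1024*1024 is 2^30; evaluated left to right this never
        // overflows: (2^30 - 1) + 2^30 == 2^31 - 1
        int max = 1024 * 1024 * 1024 - 1 + 1024 * 1024 * 1024;
        System.out.println(max == Integer.MAX_VALUE);          // true
        System.out.println(max == 0x7FFFFFFF);                 // true: the hex value
        System.out.println((1 << 30) == 1024 * 1024 * 1024);   // true: bit shifting gets there too
    }
}
```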
barneyman "SQL is simply catching the overflow and asserting a fault - a feature of that language"
EXACTLY, except it should be a feature of the OS's mathematical process. Since it ISN'T, shouldn't the burden of doing mathematical processes correctly be handled by the language? And if it doesn't do so, isn't that a bug in that language? Say you had a 3-bit signed two's-complement int. The valid range of values is -4 to 3. Do you think it is acceptable that 3 + 1 == -4? In my book, THAT is a bug.
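The dispute here is between wrapping and trapping, and Java can show both side by side: assuming the parallel holds, default int math behaves like C#'s unchecked default, while Math.addExact raises an error the way SQL's arithmetic overflow check or a C# checked block does.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // Default int math silently wraps: two positives give a negative.
        int wrapped = Integer.MAX_VALUE + 1;
        System.out.println(wrapped == Integer.MIN_VALUE);  // true

        // Math.addExact traps instead of wrapping, like SQL's
        // "Arithmetic overflow error" or a C# checked block.
        try {
            Math.addExact(Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("trapped: " + e.getMessage());
        }
    }
}
```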
The stack fault was a separate issue. Not a red herring, but confusing when combined with the numeric problem. So adding two positive numbers together SHOULD produce a negative result? Where is that written? It violates every standard I've ever encountered. I used SQL to illustrate a system that DOES follow the mathematical standards set years ago and does NOT allow invalid mathematical results.
I intentionally set up a recursive call to be called so many times that the system couldn't hold all of the routines in memory. As each routine was called, its data was added to the memory stack until the code finally blew up with a stack overflow. I was wondering whether, when the memory overflowed, the catch would fail to interrupt the process; it was working properly. Again, it had nothing to do with the mathematical problem.
Its standard says it is OK to produce invalid mathematical results? Where is that written?
Yes, I know two's complement; I've had extensive training on both two's- and one's-complement systems, because at the time one's-complement systems still existed. It's easy to verify it's two's complement from the fact that with a signed 32-bit number you can represent negative 2 to the 31st power. The KEY word here is SIGNED!!! It is critical that the system you use UNDERSTANDS it's signed. SQL understands it is SIGNED and will not LET you get an invalid mathematical result. This is what I was trained in, so many years ago. When you reach positive 2^31 - 1 on a 32-bit number, there is one "0" followed by 31 1s. On an unsigned int, adding one is no problem because the result is 1 followed by 31 0s. Adding two positive numbers should NEVER result in a negative number, because that is an INVALID MATHEMATICAL RESULT!!! Adding 1 at this point to a 32-bit signed integer in SQL results in an arithmetic overflow error, not a stack overflow. (Yes, that was a different issue; sorry for overloading your brain by talking about two separate issues in the same note.)
Yes, 1 followed by 31 zeros is -2^31. Adding positive 1 to positive (2^31 - 1) shouldn't produce negative 2^31.
1. I said an identity int field is a great PK, not a guid.
2. A guid is 16 bytes, not 36 characters. It is represented on output as a 36 character field.
3. If you don't have a hot-spot problem using an identity field, see comment 1.
4. Every example I've seen that shows a guid is a bad choice uses the worst set-up imaginable for a guid as proof that it is a bad choice.
5. I don't see the point of having a sequential guid; it pretty much wipes out the reason for having a guid in the first place.
I beg to differ. Uniqueidentifier was originally DESIGNED to be a PK. An identity int field is a great PK: only 4 bytes, and part of the DB schema design. The only problem with it is that it is sequential. That's no problem if the table gets fewer than 40K inserts a day, but a huge problem if that insert count is the average for an hour. In comes the GUID: with random placement, the hot-spot problem disappears.
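Point 2 above (16 bytes of storage vs. a 36-character textual form) is easy to demonstrate. A small Java sketch, using java.util.UUID as a stand-in for SQL Server's uniqueidentifier:

```java
import java.util.UUID;

public class GuidSizeDemo {
    public static void main(String[] args) {
        UUID id = UUID.randomUUID();
        // The 36 characters (32 hex digits + 4 hyphens) are only the output form.
        System.out.println(id.toString().length());  // 36
        // Internally a UUID is 128 bits: two 64-bit halves = 16 bytes.
        long hi = id.getMostSignificantBits();
        long lo = id.getLeastSignificantBits();
        System.out.println(Long.BYTES * 2);          // 16
    }
}
```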
All numeric field types support TryParse; byte.TryParse would fail if -1 was entered, and that can be a valid check. I was actually lazy and used int.Tr... on a field whose only valid values are between 15 and 49.
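A sketch of that parse-plus-range check in Java; tryParse here is a hypothetical helper mirroring C#'s int.TryParse, since Java only offers the throwing Integer.parseInt:

```java
import java.util.OptionalInt;

public class TryParseDemo {
    // Hypothetical helper: empty on bad input instead of throwing,
    // roughly what C#'s int.TryParse gives you.
    static OptionalInt tryParse(String s) {
        try {
            return OptionalInt.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return OptionalInt.empty();
        }
    }

    public static void main(String[] args) {
        // Parsing alone isn't validation: -1 parses fine as an int but
        // fails the domain check (valid values here are 15..49).
        OptionalInt v = tryParse("-1");
        boolean valid = v.isPresent() && v.getAsInt() >= 15 && v.getAsInt() <= 49;
        System.out.println(valid);                        // false
        System.out.println(tryParse("abc").isPresent());  // false
        System.out.println(tryParse("30").isPresent());   // true
    }
}
```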