|
You missed my point: I suggested that you not post your question in multiple forums. Read the posting guidelines for clarification.
Unrequited desire is character building. OriginalGriff
I'm sitting here giving you a standing ovation - Len Goodman
|
|
|
|
|
I thought that it would be possible to discuss this subject in a forum.
The Questions section is meant just for answers, not discussions.
It's OK, I'll delete it from here.
|
|
|
|
|
Hi all,
I am working in Visual Studio 2010 and I am making a dialog-based application.
In a function I have declared an unsigned char array like this:
unsigned char buffer [ 524600 ];
and a smaller array, unsigned char buffer1 [ 628 ];
Now I need a bigger array in the same function to store something else, but when I declare an array bigger than or equal to buffer, I get a stack overflow error that takes me to these lines:
cs20:
sub eax, _PAGESIZE_ ; decrease by PAGESIZE
test dword ptr [eax],eax ; probe page.
jmp short cs10
I do not understand what problem I am facing.
Can anybody help me?
|
|
|
|
|
Well, your stack is only so big, and if you are declaring stack-based (local variable) arrays of that size, it's no wonder you got a stack overflow. And while it is possible to enlarge the stack (Project Properties, Stack Size[^]), I wouldn't recommend that.
Instead, you should use new or malloc to dynamically allocate the array from the larger "heap" pool. You can look up those functions on MSDN if you are unfamiliar with them.
(PS: if you new something, remember to delete it when you are done with it, otherwise you will have a memory leak and eventually you will not be able to repeat the dynamic allocation. If you use malloc instead, you will need to use free to return the memory to the pool.)
|
|
|
|
|
Chuck O'Toole wrote: it's no wonder you got a stack overflow
Hmm. One could argue that on a system with gigabytes of memory, a hardware memory management unit, and an OS built on virtual memory, the stack size limit is rather artificial; with a couple of different implementation choices the stack could have been made to grow automatically as needed, from a default size (say 1 MB) to whatever the app requires.
|
|
|
|
|
Yep, spot on. All arrays should go on the heap. It is FAR FAR safer that way.
==============================
Nothing to say.
|
|
|
|
|
+5, but I wonder why one would need arrays that large...
|
|
|
|
|
Look, here is a tip. Stack corruption is a pig to debug:
NEVER ALLOCATE AN ARRAY ON THE STACK.
That is rule one of writing good code. All arrays go on the heap. Period.
Then when you run your app, use Application Verifier. It's a free download. If you blow a heap buffer, it will barf and tell you why.
If you blow a stack buffer, you ain't got nowt but a mess, dude. DON'T DO IT! ALL ARRAYS GO ON THE HEAP!
==============================
Nothing to say.
|
|
|
|
|
I disagree about allocating ALL arrays on the heap... that's not good practice at all... allocating from the heap is significantly slower. I think this is a special case because of the size of his arrays.
|
|
|
|
|
I put in the part about delete / free for you
|
|
|
|
|
I noticed...
|
|
|
|
|
Given the increase in machine speed these days, is stability more important?
Yes.
When you use Verifier on your app, or driver, it can check a heap buffer. It can't check a stack buffer.
Rule one: all arrays go on the heap. Period.
==============================
Nothing to say.
|
|
|
|
|
|
Albert Holguin wrote: I strongly disagree.
I 100% call your disagreement wrong.
I have worked in the kernel for 14 years. If the code isn't 100%, it is crap. I never use stack-based buffers because they can be overrun, leading to an impossible-to-debug stack trace. If you believe one thing I say, it is this: all arrays go on the heap, where Verifier can catch overruns for you.
Seriously, and my kernel code has been on 30% of global products, I know what I am talking about here.
==============================
Nothing to say.
|
|
|
|
|
You can call it whatever you want. I call your method slow. Maybe you were part of the Unity development group...
|
|
|
|
|
When you have as much experience in the kernel as I have, I might listen to you.
==============================
Nothing to say.
|
|
|
|
|
As much as I hate jumping into another man's argument, I feel compelled to answer this one.
Quote: When you have as much experience as me in the kernel I might listen to you
The problem with saying things like this is that somebody comes along with more experience. I'll see your 14 years and raise you 31 more. I've been doing this since 1967 on various machine types, mostly on the kernel / monitor / core / OS, whatever you wish to call it. And yes, some of my kernel code has been into space.
Allocating from something called heap / common storage / pool / whatever is always slower than just adding / subtracting a size from the stack pointer. Plus, the pool is usually one shared resource, so for multithreaded or multiprocessor kernels some sort of locking mechanism is needed to prevent corruption of the basic allocation / deallocation structures. Add in the possibility of doing allocations at interrupt level too, and you complicate the locking mechanism further.
I've also done a fair amount of kernel performance analysis over the years, and guess what it reveals as a common bottleneck in kernel programming: heap allocation / deallocation. These are the places where one concentrates the effort to find smarter and faster allocation methods, split pools, etc. And even though better algorithms have been developed over the years, carving small arrays / structures / blocks off the stack just flies past calling pool allocation.
Now, it is true that you need kernel pool for longer-lived objects, and maybe for things too large to be of practical use on the stack, but please don't argue that all things must always go in one place over another; that's just too many absolutes. It sounds more like a religious argument than a logical one.
|
|
|
|
|
I don't know if you followed the rest of this thread, but what I recommended was using the heap so that Verifier can see where your buffers are getting overrun, then going back to a stack buffer when the code is good.
Yes, heap allocation is slow; that's why preallocation is a good idea for frequently used memory blocks of a known size. Lookaside lists, for example. I mentioned this too. It obviates any performance issues.
What you say about locking mechanisms isn't the case, though. The memory manager deals with that, not your code, so it doesn't add complication. Shared data between threads will, but that won't be the case here, since the guy is only using the buffer in one function.
Re kernel performance: lookaside lists. Plus, I/O is the real bottleneck.
But here is my central point, which is always valid: stability is more important than speed. Always.
==============================
Nothing to say.
|
|
|
|
|
Erudite__Eric wrote: then go back to using a stack buffer when the code is good
Did you read your own posts!? ...when did you say that!?
|
|
|
|
|
I said the same thing in my reply, which I was typing as you entered yours.
|
|
|
|
|
|
I think this guy may be delusional...
|
|
|
|
|
|
This is why I don't like jumping into someone else's argument: it becomes mine.
Yes, I've followed the posts, and just to be sure, I went back and re-read all of yours specifically (you should do the same to refresh your memory). You never mention going back to a stack buffer "after the code is good". In fact, quite the opposite: you were adamant that buffers should always come from the heap, always, always. You repeated that quite often. So the responses were to those statements.
Erudite__Eric wrote: What you say about locking mechanisms isn't the case, though. The memory manager deals with that, not your code, so it doesn't add complication.
Well, that's a convenient hand wave, blaming the underlying function rather than the caller who invokes it. The point is that memory allocation at that level is costly, and even if the "memory manager" has to deal with the locking, etc., you are still responsible for choosing a methodology that invokes that call over stack allocation. So the introduction of the overhead is your choice; the kernel code is just giving you what you asked for. Don't blame it.
Erudite__Eric wrote: Plus, I/O is the real bottleneck.
I/O is a "wait state" event, not one that chews up CPU cycles, which is what this discussion was all about. The kernel / application is free to do other things while I/O is going on, using any number of asynchronous I/O techniques. If you wish to have a discussion on all the things that affect application / kernel performance, we can do that too.
Erudite__Eric wrote: Stability is more important than speed. Always.
Only an idiot would argue in favor of "instability". Of course stability is important. In fact, if you're getting paid to write code, your client / employer will assume stability and will find someone else to deliver it if you fail. So most don't even bother listing stability as a priority; it's assumed you will deliver it. On the other hand, many will list speed as the priority, depending on the application. Imagine trying to defend a radar application that is too slow to keep up with the incoming phased-array radar data by saying "but it's stable!!".
|
|
|
|
|
Chuck O'Toole wrote: You never mention going back to a stack buffer "after the code is good".
I did, here: http://www.codeproject.com/Messages/4071887/Re-making-a-unsigned-char-array-gives-a-buffer-ove.aspx[^]
Chuck O'Toole wrote: I/O is a "wait state" event, not one that chews up CPU cycles
I am talking about user-mode I/O. Ring 3 to Ring 0 transitions are really heavy. Surprisingly so, in fact.
Hardware I/O is generally very quick.
Chuck O'Toole wrote: it's assumed
And surprisingly often, ignored. I cannot tell you how many times I have had Verifier barf on third-party drivers, yet it is a tool MS introduced expressly to improve quality.
You would also not believe how often I have seen memory allocation used unchecked. Guaranteed BSOD as soon as resources get a bit tight.
==============================
Nothing to say.
|
|
|
|