I have legacy code written under VC6 which uses a lot of auto_ptr, and I'm having trouble building it under VS2008/VC9 because it puts auto_ptr in a vector, which VC9 doesn't accept. I'd like to use shared_ptr instead. Can I just replace them where they are declared?
Thanks,
Hawk
|
hawkgao0129 wrote: I'd like to use shared_ptr instead. Can I just replace them where they are declared?
Yes.
The difference between auto_ptr and shared_ptr is that auto_ptr passes ownership around like a token, and it's possible for ownership to expire (and the object to be deleted) before you expect.
shared_ptr is like lots of people holding a plate - the plate doesn't break until everyone's let go of it.
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
The legacy code used auto_ptr::release() to free the resource. When I use shared_ptr, how do I release the resource? Do I use the "delete" keyword to call shared_ptr's destructor?
|
No - use the reset() method.
|
You mean shared_ptr::reset(0)? Why doesn't tr1::shared_ptr have an explicit release() function? Do auto_ptr::reset(0) and auto_ptr::release() mean the same thing?
|
Oh, shared_ptr::reset() can be called without an argument. That makes it become empty - maybe equivalent to auto_ptr::release().
|
hawkgao0129 wrote: Maybe equivalent to auto_ptr::release()
For your purposes, yes - they're equivalent
|
I have been wondering what the logic was behind the fact that the ifstream read method sets the eofbit as well as the failbit on a read failure, but I can find no explanation. I can see that if you are just checking for EOF in a loop, then setting the eofbit will stop any future reading, but this also hides the cause of the error. That is, if we have not actually reached the end of the file, there is nothing to tell us so. If you try to read a 25k file and the codecvt do_in method returns an error after reading only half a line of text, then both bits are set and there is nothing to tell us that we have not just read the entire file. I can easily hack my way around the problem, but the fact that I would have to do so is ridiculous.
Does anyone have an idea or explanation for this behavior?
INTP
"Program testing can be used to show the presence of bugs, but never to show their absence." - Edsger Dijkstra
|
Looking at the C++ standard specification for read (27.6.1.3 paragraph 28):
Characters are extracted and stored until either of the following occurs:
— n characters are stored;
— end-of-file occurs on the input sequence (in which case the function calls setstate(failbit|eofbit), which may throw ios_base::failure (27.4.4.3)).
There are two outcomes - either we can read n characters or not. If not, failbit and eofbit get set.
BTW - the idiomatic way (from what I've seen) to loop on a stream isn't to check for eof, it's to check the stream status like so:
std::ifstream f(...);
while (f)
{
    // read from f here - the loop ends once any of the error bits is set
}
|
Here is the basic problem: the while loop works great, but the test calls an override that just checks whether EOF has been reached (overloaded operators, etc.). That makes sense, except when it is not true. If an error occurs while reading, you need to know that is what happened; just saying that you have reached the EOF, when untrue, gives the wrong information. An error has occurred, true, but not because you reached the end of the file (EOF), and that little piece of information is very important.
One workaround, which applies to C as well, is to get the size of the file before reading it and compare the number of bytes read against that size. The main difference between C and C++ (I may be wrong) is that a read error can occur without reaching the EOF, so you need to check for both while reading a file. I will grant that if you try to read past the EOF, that is an error and you have reached the EOF. But before reaching the EOF, only an error flag should be set, indicating that a read error occurred. If the library sets the failbit and the eofbit every time an error occurs, it is self-defeating and is lying to the developer.
|
If the failbit is set, you know reading terminated because of an error. eofbit set alone is an indicator of end-of-file. Sounds like you just need to change the priority of checks round a bit?
|
If an error occurs before the number of elements requested is read then both eofbit and failbit are set. This is done whether you reach the end of file or not, because all the read method knows is the number of elements read and is therefore guessing.
amountRead = rdbuf()->sgetn(pElements, amountRequested);
GCount += amountRead;
if (amountRead != amountRequested)
    State |= eofbit | failbit;
// the same line as it appears in other standard library implementations:
_St |= eofbit | failbit;
State |= ios_base::eofbit | ios_base::failbit;
__err |= (ios_base::eofbit | ios_base::failbit);
I do know it terminated because of a read error, but since we cannot know how many characters the encoded source data represents, we cannot avoid the error. Therefore we do not know whether it was an attempt to read past the EOF or an encoding error.
"I have never been lost, but I will admit to being confused for several weeks." - Daniel Boone
|
Hello
Following is my code to display a chevron:
pdbi->dwModeFlags = DBIMF_NORMAL|DBIMF_BKCOLOR | DBIMF_USECHEVRON |DBIMF_BREAK;
With this code, the chevron is displayed with icons on Windows Vista with IE7, but the icons are not displayed in IE6 on Windows XP.
Is there any other flag that needs to be set for Windows XP?
Thanks in advance
AM
|
This should work:
if (pdbi->dwMask & DBIM_MODEFLAGS)
    pdbi->dwModeFlags = DBIMF_USECHEVRON | DBIMF_BREAK;
|
I am using ATL for an out-of-proc COM server. I would like a single instance of this exe to be used for all clients. Currently, a new instance is launched for each client that calls CoCreateInstance. Is there a flag or registry setting I should use to allow all clients to share the same instance?
Thanks.
Wayne
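For what it's worth, the usual way to get this with an ATL EXE server is to register the class objects with REGCLS_MULTIPLEUSE instead of REGCLS_SINGLEUSE, so one running instance serves every CoCreateInstance call. A Windows-only sketch, assuming an ATL 7-style module class (`CMyServerModule` is a hypothetical name; wizard-generated code may differ):

```cpp
#include <atlbase.h>

class CMyServerModule : public ATL::CAtlExeModuleT<CMyServerModule>
{
public:
    HRESULT RegisterClassObjects(DWORD dwClsContext, DWORD /*dwFlags*/) throw()
    {
        // REGCLS_MULTIPLEUSE: every CoCreateInstance from any client is
        // routed to this already-running EXE instead of launching a new one.
        return ATL::CAtlExeModuleT<CMyServerModule>::RegisterClassObjects(
            dwClsContext, REGCLS_MULTIPLEUSE);
    }
};
```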
|
Dear all,
I have an ActiveX control written with ATL. It supports the stock Picture property -- IPictureDisp* m_pPicture; -- and it paints the picture in the OnDraw function like this:
HRESULT OnDraw(ATL_DRAWINFO& di)
{
//draw the picture here
return S_OK;
}
Once it is built as a DLL and registered on the PC, it works well in VB: we can find the Picture property in the property panel and can initialize the picture at design time. But we cannot do that in Delphi 7. Can anybody tell me why, or help me? Thanks so much!
|
I am putting together a local COM server, which is linked to a legacy static library. My server passes notifications from this library's functions on to the client via connection point sinks (the server has the Fire_xxx methods).
My problem is that the library author has freely used CreateThread() in his functions, and he triggers from those threads the notifications that I need to pass on to the client. As you guessed, this fails with HRESULT 0x8001010E (RPC_E_WRONG_THREAD). To get around this issue I used to register the interface pointer in the GIT and retrieve it in the context of the thread. That trick won't work here: with just the base interface, I cannot reach the Fire_XXX() functions!
How do I get access to these? Thanks in advance.
|
Why not decouple the notifications from the Fire_XXX functions? Use an event, or similar to indicate to the COM thread that a notification has occurred (you can use some data to show what notification has occurred plus any parameters). Use a kernel wait in the COM thread to wait for the event to be signalled. When the event is signalled, call the appropriate Fire_XXX function.
|
I thought of this, but I am not sure how to apply it in my case. Let me explain: the server implements, let's say, "IServer" with a method "Run". I use "IServerEvents" to fire events into connected sinks.
From IServer::Run() I call one of the library functions, which does a CreateThread() and returns. So my Run() returns S_OK and things are all fine so far. But when the thread started by the library function finishes its work and wants to notify the client, it calls a callback function my server registered with it at the start. I call Fire_XXX() in this callback, which does not work because it runs in the context of another thread.
Since my IServer::Run() ended a long time ago, which COM context do I use?
|
What threading model does your local server use? Is it possible that using a multithreading model might enable your original design to work?
|
I use the MTA; still the same issue. BTW, since the thread created by the library function doesn't initialize COM, in my server callback I call CoInitializeEx with COINIT_MULTITHREADED. From this callback, when I try calling Fire_XXX (on the original server object, which is passed via a context parameter), I still get 0x8001010E.
Will creating a CWorkerThread/IWorkerThreadClient based thread automatically assume its parent thread's COM context?
|
I presume you have managed to successfully call one of the Fire_XXX functions in tests or something? It's not that they just don't work?
Kannan Ramanathan wrote: Will creating a CWorkerThread/IWorkerThreadClient based thread automatically assume its parent thread COM context?
I suspect you're more likely to know than me - I think you've done a lot more COM than me. My comfort zone is mostly in-proc servers.
|
Thanks Stuart for the reply.
Yes, Fire_xxx() events work correctly as long as they are called in the correct context.
BTW, I am trying out my good old GIT solution -- only this time I am going to store the sink interfaces in the GIT and keep the cookies in my vector. I am currently putting together a custom IConnectionPointImpl<> to store/retrieve things properly without breaking anything.
Will update you how it goes...
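For reference, the GIT approach for sinks looks roughly like this (Windows-only sketch, error handling elided; the function names are hypothetical):

```cpp
#include <atlbase.h>

// When a sink connects: register it in the Global Interface Table and
// store the returned cookie (in the vector) instead of the raw pointer.
DWORD RegisterSink(IUnknown* pSink) {
    CComPtr<IGlobalInterfaceTable> git;
    git.CoCreateInstance(CLSID_StdGlobalInterfaceTable, nullptr, CLSCTX_INPROC_SERVER);
    DWORD cookie = 0;
    git->RegisterInterfaceInGlobal(pSink, IID_IUnknown, &cookie);
    return cookie;
}

// In the library's callback thread: unmarshal a proxy for that cookie,
// then QueryInterface to the event interface and fire.
void FireFromForeignThread(DWORD cookie) {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CComPtr<IGlobalInterfaceTable> git;
    git.CoCreateInstance(CLSID_StdGlobalInterfaceTable, nullptr, CLSCTX_INPROC_SERVER);
    CComPtr<IUnknown> sink;
    git->GetInterfaceFromGlobal(cookie, IID_IUnknown, (void**)&sink);
    // QI sink for the event interface and call the event method here.
    CoUninitialize();
}
```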
|
The other suggestion I was going to make (but didn't get around to) was to start a long-running thread and use the event idea I first proposed within that thread.
|