Robert Edward Caldecott wrote:
My question is ... why?
Can't talk about MS, but at my work we have the same standard. Apparently, it's meant to better separate interface from implementation (personally, I don't like it that much, but I stick to the standard anyway).
My programming blahblahblah blog. If you ever find anything useful here, please let me know to remove it.
I've never heard or read about an MSFT policy regarding how inline member functions should be declared, but I suspect it's done for readability, as Nemanja said: "separating the interface from the implementation" so as not to clutter the interface and make it hard to read.
As far as the language's syntax is concerned, there is nothing wrong with defining a member function inside the class definition. It makes the function implicitly inline, as Blake said.
For clients to use an inline function, its definition has to be visible so the compiler can expand the body in place. The most straightforward way to accomplish this is to put the definition in the header file. As I understand it, the compiler does the same thing regardless of whether the function was defined inside the class definition or not.
Sometimes in MFC code, e.g. CDocument::GetDocument, the debug version of the function is not inline, so that you can step into it. Otherwise you have to step through the assembler code to see what happens.
Writing the preprocessor directive (#ifdef _DEBUG) inside the class definition wouldn't be very beautiful code to my eyes.
But that's a matter of taste, and the compiler doesn't seem to have any.
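To make the two styles concrete, here is a minimal sketch (the Point class is hypothetical, not taken from any of the posts): both accessors end up inline, and the only difference is where the body lives in the header.

```cpp
// point.h - hypothetical example showing both placement styles.
class Point {
public:
    int GetX() const { return m_x; }  // defined in the class body: implicitly inline
    int GetY() const;                 // declared only; body below
private:
    int m_x = 0;
    int m_y = 0;
};

// Still in the header (or in a .inl file included from it), so the
// compiler sees the body in every translation unit that calls it.
inline int Point::GetY() const { return m_y; }
```

Either way the compiler can expand the call in place; the second style just keeps the class definition uncluttered.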
--
Roger
Robert Edward Caldecott wrote:
My question is ... why?
Portability.
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
Could you elaborate? Portability is potentially a big deal for me at the moment...
I simply read it near the bottom of this.
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
Do you mean this sentence:
Furthermore you can declare it in the header (with inline keyword) and seperate easally from them (with .inl files). Microsoft often use the same technique because it's better compatible with many compilers.
This makes sense only if the poster was referring to the poor support for member function templates in VC6, but templates are a separate issue when it comes to source code organization anyway.
Otherwise, this has nothing to do with portability.
My programming blahblahblah blog. If you ever find anything useful here, please let me know to remove it.
Nemanja Trifunovic wrote:
Otherwise, this has nothing to do with portability.
Being able to move code and/or files over to another compiler has nothing to do with being portable?
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
I looked into this in detail a few years ago and came to the conclusion that the best thing to do is to make nothing inline and let the optimizer inline things as it sees fit.
The inline keyword is a suggestion to the compiler. If you use it, the compiler may or may not inline the function's code ("as it sees fit"). If you do not use it, then the function is not inlined (even if it should be).
Of course, there may be an optimization command-line argument that tells the compiler to treat all functions as if they had been designated inline, but I doubt it.
Designate any function that just returns a simple value as inline, or place the code for that function in the class body (which makes it implicitly inline).
INTP
Every thing is relative...
John R. Shaw wrote:
Of course, there may be an optimization command-line argument that tells the compiler to treat all functions as if they had been designated inline, but I doubt it.
It's /Ob2.
"One must learn from the bite of the fire to leave it alone." - Native American Proverb
Robert Edward Caldecott wrote:
Is there anything inherently wrong with inlining code directly in the class header?
No!
Robert Edward Caldecott wrote:
Are there compiler compatibility or performance reasons for splitting inline code out of the body of the class definition?
No! (well, no performance reasons)
The only reason for splitting the inline code out of the body in this way is so that the inline code (file) can be changed without touching the class body. In other words, they can change the inline code without changing the class definition in the header file.
This is similar to providing a header file for a library that contains the class definition but not the function bodies. The developer can change how the functions do their jobs and release an update to the library without changing the header files that contain the class definitions.
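A minimal sketch of that split (the Widget class and the file names are hypothetical): the class definition stays untouched while the inline bodies live in a separate .inl file that the header pulls in at the end.

```cpp
// widget.h - hypothetical example of the header/.inl split.
class Widget {
public:
    int Value() const;   // declared only; its body lives in widget.inl
private:
    int m_value = 42;
};

// widget.h would finish with:  #include "widget.inl"
// --- widget.inl (shown here in the same file for brevity) ---
inline int Widget::Value() const { return m_value; }
```

The body in the .inl file can now be edited without changing widget.h's class definition, which is exactly the separation described above.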
INTP
Every thing is relative...
I just encountered these words: Skinning, GDI/GDI+, OpenGL, DirectX, Bitmaps and Palettes. Which of these topics do I have to study if I'm going to change the overall look or design of my project (SDI/MDI or dialog)? It sounds like a theme, because even the buttons, edit boxes, other controls, and even the background, toolbars, menus, etc., will be changed.
Hello,
I think that skinning your application will suit you best. It enables certain themes to be used by your users, and even lets them make their own themes should they want to.
GDI/GDI+, OpenGL and DirectX are more for heavy graphics. GDI is more for drawing graphs and simple stuff, while OpenGL and DirectX are more for the type of graphics you see in games.
Bitmaps and palettes may come in handy if you want to change colors and the like at runtime without changing the actual files. You might want to look more into these subjects.
Behind every great black man...
... is the police. - Conspiracy brother
Blog[^]
OK, now I know what all this stuff is about. Thanks
You're welcome
Behind every great black man...
... is the police. - Conspiracy brother
Blog[^]
What function is running while a certain dialog or application is in standby mode?
I want to create a program that scans a certain folder to see whether a certain filename exists in that directory (the folder and filename will be entered by the user). While it is scanning the directory, I will display another dialog stating that it is scanning, just like a message box except that it has an OK and a Cancel button. The OK button will be disabled until the scanning is done. The Cancel button can be clicked prematurely, but will prompt the user if the scanning is not yet finished. For Cancel, OnCancel will do, but what about enabling the OK button once the scanning is complete? The scanning dialog will be on standby and will just wait until the scanning is complete. I disabled the OK button in the OnInitDialog function of the scanning dialog class. Where should I place the code that enables the OK button? Thanks
Typically, you are either halted at something like GetMessage() or MsgWaitForMultipleObjects, or else in a weird PeekMessage -> DoIdle() type of loop. It depends upon the underlying class library.
If you are using straight Win32, I would start a secondary thread to do the scanning and have it post registered messages back to your dialog to advise the dialog of the scanner's progress. If the user hits Cancel, just stop the thread and exit the dialog. Similarly for an MFC app with a dialog.
Be sure to catch all the possible ways the dialog can close by handling the WM_CLOSE messages generated by the Escape key, Alt+F4, etc.
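A platform-neutral sketch of that worker-thread pattern (the function and the use of an atomic flag are hypothetical stand-ins): the worker does the scan and raises a flag when it finishes, while the "dialog" thread keeps running and enables its OK button once the flag is set. In a real Win32/MFC app the worker would PostMessage a registered message to the dialog instead, and the wait loop would be the normal message pump.

```cpp
#include <atomic>
#include <string>
#include <thread>
#include <vector>

// Scan a (pre-fetched) directory listing for a file name on a worker
// thread; the caller's loop stands in for the dialog's message loop.
bool ScanForFile(const std::vector<std::string>& dirListing,
                 const std::string& name)
{
    std::atomic<bool> done{false};
    bool found = false;

    std::thread worker([&] {
        for (const auto& entry : dirListing)
            if (entry == name) { found = true; break; }
        done = true;                  // signal: scanning complete
    });

    while (!done) { }                 // message-loop stand-in: UI stays live
    worker.join();
    return found;                     // the dialog would now enable OK
}
```

The key point is that the scan never runs on the UI thread, so the dialog can repaint, disable OK, and react to Cancel the whole time.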
Hi all,
I'm facing an odd behaviour in a client application that I'm currently working on. Let me put you in the picture: I create a socket and then call ioctlsocket to set it to non-blocking mode. After this I try to establish a connection to a server through this socket; so far so good. When I call connect, it is supposed to return control immediately with an error such as WSAEWOULDBLOCK, but this does not happen at all and it hangs for a while.
sockaddr_in server = {0};
SOCKET sClient = 0;

server.sin_family = AF_INET;
server.sin_port = htons(ESEP_SERVER_PORT_1);
server.sin_addr.S_un.S_addr = htonl(ulIP);

if ((sClient = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == INVALID_SOCKET)
{
    FormatError();
    return ESEP_ERR_CREATE_SOCKET;
}

fd_set fdWrite = {0};
TIMEVAL time = {0};
int iRC = 0;
unsigned long ulMode = 1;

if (ioctlsocket(sClient, FIONBIO, &ulMode) == SOCKET_ERROR)
{
    FormatError();
    return ESEP_ERR_CONNECT;
}

FD_ZERO(&fdWrite);
FD_SET(sClient, &fdWrite);
time.tv_usec = ESEP_CONNECT_TIMEOUT;

iRC = connect(sClient, (SOCKADDR *)&server, sizeof(server));
if ((iRC == SOCKET_ERROR) && (WSAGetLastError() == WSAEWOULDBLOCK))
{
    if (select(0, NULL, &fdWrite, NULL, &time) <= 0)
    {
        FormatError();
        return ESEP_ERR_CONEX_REJECTED;
    }
}
The most curious thing is that this behaviour only takes place on some particular computers: if I run the application on my development machine, or on some other machines, everything works absolutely fine.
I have taken two different machines with the same configuration (everything is the same; they're almost clones) and it works on one system but not on the other. Has anybody ever had a problem like this? Any reasonable explanation? I'm a bit puzzled because I can't work out why this happens. I've been looking in different forums and discussion boards but found nothing. Could you help me, please?
Hi,
I am dealing with OLE automation. I have to convert a string to V_BSTR to use it with OLE. This string is really long, about 500 chars or more. If I convert it to V_BSTR, it gets cut off after a few hundred bytes (about 230).
How can I avoid this? I use SysAllocStringLen to convert it; with this, enough space should normally be allocated for it. But it seems it doesn't care!
VARIANT v1;
V_VT(&v1) = VT_BSTR;
V_BSTR(&v1) = SysAllocStringLen(strToWc(selStr), 1500);
strToWc() is a method by me, which converts a string to widechar. This one works. I checked, and the string is complete!
DKT
If my memory serves me right, the "B" in BSTR stands for Byte - the byte that is used to hold the length of the string. This is the Pascal-type string representation. As a consequence, a BSTR cannot hold more than 255 chars. No way around it.
Cheers
Steen.
"To claim that computer games influence children is ridiculous. If Pacman had influenced children born in the 80'ies we would see a lot of youngsters running around in dark rooms eating pills while listening to monotonous music"
Hmmm, that's strange...
If I use a direct string:
V_BSTR(&v) = SysAllocString(OLESTR("blabla"));
then it works, even if the "blabla" string is more than 500 chars long!
If I use it the way I mentioned before, then it behaves strangely:
the first time it cuts some bytes off; the second time and onwards, it works!
If I put the same command twice, then it never works...
It seems as if the memory management is totally sh*t!
DKT
I'm sorry, my memory certainly didn't serve me well. I think I confused it with the ANSI version of BSTR (I think it's called BSTRT).
Just to be sure, you say your function converts it to a widechar, you mean unicode, right?
A BSTR is a pointer to a location, where the first four bytes are the length part and the rest is the unicode string terminated by a double-zero. Could this in any way be the cause of your problem (e.g. a premature terminating double-zero?)
Otherwise, I don't have any good ideas. Perhaps you could post more code?
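To illustrate the layout described above, here is a small portable sketch (the helper names are hypothetical; this is a model of the BSTR memory layout, not the real SysAllocStringLen): a four-byte length prefix counted in bytes, the 16-bit character data, a two-byte zero terminator, and a returned pointer aimed just past the prefix.

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

using wchar16 = std::uint16_t;

// Hypothetical model of a BSTR-style allocation.
wchar16* AllocBstrLike(const wchar16* src, std::uint32_t nchars)
{
    std::uint32_t nbytes = nchars * sizeof(wchar16);
    auto* block = static_cast<unsigned char*>(
        std::malloc(4 + nbytes + sizeof(wchar16)));
    std::memcpy(block, &nbytes, 4);                      // length prefix (bytes)
    std::memcpy(block + 4, src, nbytes);                 // character data
    std::memset(block + 4 + nbytes, 0, sizeof(wchar16)); // double-zero end
    return reinterpret_cast<wchar16*>(block + 4);        // points past prefix
}

// Read the length prefix back, the way SysStringByteLen does for a real BSTR.
std::uint32_t BstrLikeByteLen(const wchar16* s)
{
    std::uint32_t n;
    std::memcpy(&n, reinterpret_cast<const unsigned char*>(s) - 4, 4);
    return n;
}
```

Because the length is stored up front, the string can contain embedded zeros and can be far longer than 255 characters, so the cut-off you see cannot come from the BSTR format itself.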
Cheers
Steen.
"To claim that computer games influence children is ridiculous. If Pacman had influenced children born in the 80'ies we would see a lot of youngsters running around in dark rooms eating pills while listening to monotonous music"
I checked the double zero, but it didn't make it better...
Here's some code.
Function to convert a string to widechar:
OLECHAR* ViaExcelConnector::strToWc(const string &cnvrtData) const
{
OLECHAR cnvrt[500];
int i = 0;
char cnvrtChr[500];
for(i=0; i
Your code got f***ed up because you didn't check the "Do not treat <'s as HTML tags" box.
Anyway, you only allocate 500 chars to do the conversion, so it's no wonder that it won't work for more than 500 chars. And you have a buffer overrun when cnvrtData is more than 500 chars, since you don't check for a maximum of 500 chars in your for loop. There's no saying what will happen, but you will definitely get your memory screwed up. Furthermore, you return a pointer to cnvrt, which is a stack variable that will go out of scope (and be overwritten) when the function returns.
Why do you move the content of cnvrtData into cnvrtChr? Can't you use cnvrtData directly? Besides, you should call MultiByteToWideChar with cchWideChar set to zero first to get the length of the widechar string, then allocate it, and then convert it:
int iLength = MultiByteToWideChar(CP_ACP, 0, cnvrtData.c_str(), -1, NULL, 0);
OLECHAR* cnvrt = new OLECHAR[iLength];
MultiByteToWideChar(CP_ACP, 0, cnvrtData.c_str(), -1, cnvrt, iLength);
return cnvrt;
but then you will have to remember to delete cnvrt (the return value from strToWc) with delete[] or you'll leak memory.
Cheers
Steen.
"To claim that computer games influence children is ridiculous. If Pacman had influenced children born in the 80'ies we would see a lot of youngsters running around in dark rooms eating pills while listening to monotonous music"