|
Don't use the SuspendThread, TerminateThread, and ResumeThread functions for the type of thread management you're doing here. From the documentation on SuspendThread[^]:
This function is primarily designed for use by debuggers. It is not intended to be used for thread synchronization. Calling SuspendThread on a thread that owns a synchronization object, such as a mutex or critical section, can lead to a deadlock if the calling thread tries to obtain a synchronization object owned by a suspended thread. To avoid this situation, a thread within an application that is not a debugger should signal the other thread to suspend itself. The target thread must be designed to watch for this signal and respond appropriately.
There are many other explanations of the kind of problems these functions can cause if used incorrectly and the reasons why, such as the following: Why you should never suspend a thread[^]
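As the documentation suggests, the safe pattern is for the worker thread to check a flag at well-defined points and block itself. Here's a minimal sketch of that idea using standard C++ primitives rather than Win32 events (the PausableWorker class and its method names are my own invention for illustration, not an API from the original post):

```cpp
#include <condition_variable>
#include <mutex>

// Cooperative suspension: the worker blocks itself at a safe point
// instead of being forcibly frozen from outside by SuspendThread.
class PausableWorker {
public:
    void request_pause() {
        std::lock_guard<std::mutex> l(m_);
        paused_ = true;
    }
    void request_resume() {
        { std::lock_guard<std::mutex> l(m_); paused_ = false; }
        cv_.notify_one();
    }
    void request_stop() {
        { std::lock_guard<std::mutex> l(m_); stopped_ = true; }
        cv_.notify_one();
    }
    // Called by the worker between units of work; blocks while paused.
    // Returns false when the worker should exit its loop.
    bool checkpoint() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this] { return !paused_ || stopped_; });
        return !stopped_;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool paused_ = false;
    bool stopped_ = false;
};
```

The worker's main loop is then just `while (w.checkpoint()) { do_one_unit_of_work(); }`, so the thread only ever sleeps at a point where it holds no locks. With Win32 primitives the same shape falls out of an event and WaitForSingleObject.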
Steve
|
|
|
|
|
As already mentioned earlier[^], TerminateThread, SuspendThread and ResumeThread should be avoided. There are much better mechanisms to handle this type of situation (events, for instance).
I think people have already mentioned it to you, but you should really have a look at this article[^] and understand it. Threads are not an easy thing for beginners, so instead of trying things on your own, it would be much more efficient to read a good article about the subject first.
|
|
|
|
|
These functions do not work the way you think they do. Once a thread is suspended, no more of its code will be executed, and it won't be scheduled any processor time until it is resumed. (A thread can suspend itself, but unfortunately cannot resume itself.) If your thread is suspended, some other thread has to wake it up so that it becomes schedulable again.
What you've designed is never going to work as it is fundamentally flawed. Please read up on threading and understand how it works. There are plenty of articles at CP and you should probably read a good book if you're serious about writing multi-threaded code. After all that, there are gotchas involved in threading that you'll only experience and learn by writing code.
It is a crappy thing, but it's life -^ Carlo Pallini
|
|
|
|
|
I want to send varying length application specific messages between my client and server application. The problem is how should I determine the length of message when receiving it?
One way I thought of was to append a termination string to the end of the message, but others said that prefixing the message length is far better.
May I know why, and how do I prefix it? Any code examples would be really appreciated. I'm using Winsock (version 2.2, if that helps).
And, by the way, what is the minimum number of bytes/bits that can be transferred in one shot using TCP/IP?
Thanks
modified on Wednesday, August 26, 2009 6:25 AM
|
|
|
|
|
Ahmed Manzoor wrote: I thought of one way was that I should append a termination string in the end of the message. But others said that prefixing the message length is far more better?
Prefix the length. That gives you a definite message length, which lets you detect errors better.
Say you had a termination string and the packet containing the termination string was lost. Your program would keep searching and either a) never see the terminator, so think the message never ended, or b) see the wrong terminator (maybe from the next message) and merge two messages into one.
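To make the idea concrete, here's a small sketch of 4-byte big-endian length prefixing, independent of any socket API (the function names frame/unframe are mine, and error handling such as a sanity cap on the decoded length is left out):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Prepend a 4-byte big-endian length, then the payload.
std::vector<uint8_t> frame(const std::string& msg) {
    std::vector<uint8_t> out;
    uint32_t n = static_cast<uint32_t>(msg.size());
    out.push_back((n >> 24) & 0xff);
    out.push_back((n >> 16) & 0xff);
    out.push_back((n >> 8) & 0xff);
    out.push_back(n & 0xff);
    out.insert(out.end(), msg.begin(), msg.end());
    return out;
}

// Try to pull one complete message off the front of the receive buffer.
// Returns false while the message is still incomplete; partial data
// simply stays buffered until more bytes arrive from recv().
bool unframe(std::vector<uint8_t>& buf, std::string& msg) {
    if (buf.size() < 4) return false;
    size_t n = (size_t(buf[0]) << 24) | (size_t(buf[1]) << 16) |
               (size_t(buf[2]) << 8)  |  size_t(buf[3]);
    if (buf.size() < 4 + n) return false;
    msg.assign(buf.begin() + 4, buf.begin() + 4 + n);
    buf.erase(buf.begin(), buf.begin() + 4 + n);
    return true;
}
```

The receive loop then appends whatever recv() returns to the buffer and calls unframe() until it reports false. That also handles TCP's stream semantics correctly: messages can arrive split across reads or glued together, and the prefix sorts them out either way.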
Ahmed Manzoor wrote: And, by the way, what is the minimum amount of bytes/bits that will successfully transfer in one shot using TCP/IP?
No minimum really - but you need to turn off the Nagle algorithm (see setsockopt and TCP_NODELAY[^]).
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
I don't understand what you mean by no minimum? Can you please explain?
|
|
|
|
|
If you turn off Nagling, then you can send a single byte in a TCP/IP packet. You can't get smaller than that (except 0, which is nothing).
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
You can send one byte at a time. You do NOT need to turn off the Nagle algorithm in order to send single bytes; Nagle only delays small packets by coalescing them, it doesn't prevent them.
Besides using a length prefix or a terminator, there is also a third possibility. If you send only one message per TCP connection, you can simply close the connection gracefully[^]. The receiving peer then knows it has the complete message when recv() reports that the connection was closed, and can close its own socket. However, if you want to send more than one message per connection, the first two options are better.
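A sketch of that third approach, using plain BSD-style calls (SHUT_WR is spelled SD_SEND under Winsock; error handling is pared down to the minimum):

```cpp
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Sender: write the whole message, then shut down the write side.
// The receiver sees recv() return 0 once everything has arrived.
void send_message(int sock, const std::string& msg) {
    size_t off = 0;
    while (off < msg.size()) {
        ssize_t n = send(sock, msg.data() + off, msg.size() - off, 0);
        if (n <= 0) break;           // error handling elided
        off += static_cast<size_t>(n);
    }
    shutdown(sock, SHUT_WR);         // SD_SEND in Winsock
}

// Receiver: keep reading until recv() returns 0 (orderly close).
std::string recv_message(int sock) {
    std::string msg;
    char buf[4096];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0) break;           // 0 means the peer closed gracefully
        msg.append(buf, static_cast<size_t>(n));
    }
    return msg;
}
```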
|
|
|
|
|
Can you please provide an example using the recv() and send() functions?
|
|
|
|
|
Hello,
I have the following problem: I must display a print stream as readable text. The stream contains the characters ASCII 13 (carriage return) and ASCII 10 (line feed). When I display it in a CRichEditCtrl, the control wraps the line there.
Even if I call
SetTargetDevice(NULL, 1);
the control still breaks the line at CR and LF.
Is there a way to suppress this wrapping?
best regards
thomas
|
|
|
|
|
Word wrap is different from carriage return/line feed.
CR/LF will move to the beginning of the next line.
Word wrap will move to the next line if the line does not fit in the current view.
If you do not want to move to the next line despite having CR/LF, you will have to remove the character pair from the stream.
«_Superman_»
I love work. It gives me something to do between weekends.
|
|
|
|
|
Thanks for your reply,
but the problem is that I can't remove them from the text, because it is important to show these characters as symbols. I have my own font to display a CR sign or an LF sign.
|
|
|
|
|
Then how about replacing these characters with your own symbols in the stream itself, so that the control will not move to the next line?
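Something along these lines, done before the text reaches the control (the function name and the placeholder characters here are arbitrary; with your own font you would substitute whatever code points map to your CR/LF glyphs):

```cpp
#include <string>

// Replace raw CR/LF bytes with visible stand-in characters, so the
// rich edit control no longer has a line break to act on.
std::string make_breaks_visible(const std::string& text,
                                char cr_symbol, char lf_symbol) {
    std::string out;
    out.reserve(text.size());
    for (char c : text) {
        if (c == '\r')      out += cr_symbol;
        else if (c == '\n') out += lf_symbol;
        else                out += c;
    }
    return out;
}
```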
«_Superman_»
I love work. It gives me something to do between weekends.
|
|
|
|
|
It doesn't work; the control wraps the text anyway.
|
|
|
|
|
You might need to look into the ES_MULTILINE, ES_AUTOVSCROLL, and ES_AUTOHSCROLL styles.
"Old age is like a bank account. You withdraw later in life what you have deposited along the way." - Unknown
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
|
|
|
|
|
This doesn't work. I tried it.
|
|
|
|
|
You tried looking into it, or you tried implementing those styles? Wrapping in an edit control is only caused by one of two things: the characters being put into the control, and the styles of the control. Both have been suggested to you.
"Old age is like a bank account. You withdraw later in life what you have deposited along the way." - Unknown
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
|
|
|
|
|
I just looked into these styles, but how do I apply a style to the control?
|
|
|
|
|
Hi everyone,
My OS is Vista 64-bit SP1 and I installed the DirectX SDK (10) on my system, but the following code fails at runtime and returns:
hr = E_INVALIDARG
DXGI_SWAP_CHAIN_DESC swapchainDesc;
ZeroMemory(&swapchainDesc,sizeof(swapchainDesc));
swapchainDesc.BufferCount=1;
swapchainDesc.BufferDesc.Width=width;
swapchainDesc.BufferDesc.Height=height;
swapchainDesc.BufferDesc.Format=DXGI_FORMAT_R8G8B8A8_UNORM;
swapchainDesc.BufferDesc.RefreshRate.Numerator=60;
swapchainDesc.BufferDesc.RefreshRate.Denominator=1;
swapchainDesc.BufferUsage=DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapchainDesc.OutputWindow=hWnd;
swapchainDesc.SampleDesc.Count=1;
swapchainDesc.SampleDesc.Quality=0;
swapchainDesc.Windowed=false;
HRESULT hr=D3D10CreateDeviceAndSwapChain(NULL,
D3D10_DRIVER_TYPE_REFERENCE,
NULL,
0,
D3D10_SDK_VERSION,
&swapchainDesc,
&pSwapChain,
&pD3DDevice);
In the call stack window it shows this message:
The application was compiled against and will only work with D3D10_SDK_VERSION (28), but the currently installed runtime is version (29).
Recompile the application against the appropriate SDK for the installed runtime.
I tried the DirectX examples, but the error is the same.
How can I solve this problem?
|
|
|
|
|
Sounds to me like the version of the DirectX 10 SDK you have is older than the version of the DirectX 10 runtime you have installed (maybe a game installed a newer version of the runtime?). I'd suggest downloading and installing a newer DirectX 10 SDK and rebuilding your program.
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Further to my previous reply - the current (March 2009) version of the DirectX SDK has D3D10_SDK_VERSION set to 29. I'm guessing you'll need to download and install that.
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|
|
Hi there,
I'm a little confused here; maybe someone can help with this noob question.
Here is a piece of code that converts bytes to an int (supposedly big-endian):
unsigned int time = ((m_aHeader[4] & 0xff) << 24) |
((m_aHeader[5] & 0xff) << 16) |
((m_aHeader[6] & 0xff) << 8) |
(m_aHeader[7] & 0xff);
Now, why is this different from
unsigned int time = ((m_aHeader[4] ) << 24) |
((m_aHeader[5] ) << 16) |
((m_aHeader[6] ) << 8) |
(m_aHeader[7] );
m_aHeader is a byte array.
Edit: I just found that my problem only occurs when I use the first piece of code in Java (in C++ it works fine).
I create the bytes in c++ with
m_aHeader[4] = (BYTE) ((time >> 24) & 0xff);
m_aHeader[5] = (BYTE) ((time >> 16) & 0xff);
m_aHeader[6] = (BYTE) ((time >> 8) & 0xff);
m_aHeader[7] = (BYTE) (time & 0xff);
If I read it back in C++ with either of the code pieces above, everything is fine.
If I try this in Java, though, the result is wrong:
long time = ((m_aHeader[4] ) << 24) |
((m_aHeader[5] ) << 16) |
((m_aHeader[6] ) << 8) |
(m_aHeader[7] );
modified on Wednesday, August 26, 2009 3:44 AM
|
|
|
|
|
Provided m_aHeader[k] (k = 4..7) is a byte, there is no difference, i.e. you may write (note there is NO 0xff next to m_aHeader[7]):
unsigned int time = (m_aHeader[4] << 24) | (m_aHeader[5] << 16) | (m_aHeader[6] << 8) | m_aHeader[7];
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
This is going on my arrogant assumptions. You may have a superb reason why I'm completely wrong.
-- Iain Clarke
[My articles]
|
|
|
|
|
I know, thanks. The 0xff on m_aHeader[7] was a typo.
And yes, there should be no difference. I just updated my post, since I found the difference only appears in Java.
|
|
|
|
|
The reason you generally want to AND with 0xff is to ensure that when your byte is promoted to int (as it will be internally in the expression), it is zero-extended, i.e. the top 24 bits of the int are set to zero, not one. Why would they be set to one? Because the language sign-extends the byte when it converts it to an int AND the top bit of the byte is set.
Now, if you use 'unsigned char' for BYTE and 'unsigned int' for the result, you should be OK. If you used 'signed char' for BYTE, then you could have issues.
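A tiny C++ demonstration of the difference, using a deliberately signed byte type (the helper names are mine). This is also exactly why the masks matter in Java, where byte is always signed:

```cpp
#include <cstdint>

// With a signed byte type, a negative byte sign-extends when promoted
// to int, and the OR then smears 1-bits across the whole result
// unless each byte is masked with 0xff first.
uint32_t pack_masked(const signed char b[4]) {
    return (uint32_t(b[0] & 0xff) << 24) | (uint32_t(b[1] & 0xff) << 16) |
           (uint32_t(b[2] & 0xff) << 8)  |  uint32_t(b[3] & 0xff);
}

uint32_t pack_unmasked(const signed char b[4]) {
    return uint32_t((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]);
}
```

For example, with the bytes {0x01, 0x02, 0x03, 0xFF}, the masked version gives 0x010203FF, while the unmasked one gives 0xFFFFFFFF because the sign-extended last byte ORs ones over everything.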
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
|
|
|
|