Hello

I'm working on an MFC program that receives UDP data and parses it into images.

There are three additional threads (besides the main thread):

1. receiving UDP data with ReceiveFrom() and saving the data into a buffer

2. reading the buffer and writing it to a text file with fprintf()

3. reading the buffer and parsing/writing it into an image file with fwrite().


Sometimes (including when other processes start to run) there is some data loss in both the text file and the image file,

so I think the data is being lost in ReceiveFrom().

To resolve the problem, I would like to give thread 1 a higher priority.

Currently no priority is set; I'm just using CreateThread().


In the reference documentation, I found PROCESS_MODE_BACKGROUND_END as a parameter of SetPriorityClass()
(and THREAD_MODE_BACKGROUND_BEGIN for SetThreadPriority()).

I think this might be relevant because my program is all about I/O.

But I have no idea what effect this parameter has, or what effect setting a priority has at all.

I want to understand how it works.

Can you advise me about setting priorities, and do you think it can solve the problem?
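
For reference, here is a minimal sketch (not code from the project; the handle names and the ABOVE_NORMAL choice are assumptions) of how these calls are used. Note that THREAD_MODE_BACKGROUND_BEGIN/END lowers the calling thread's I/O and memory priority, may only be applied to the current thread, and is intended for low-importance background work rather than for a receiver that must not miss data:

C++
#include <windows.h>

// Hypothetical handles; in the real program they would come from CreateThread().
HANDLE hReceiverThread; // thread 1 (UDP receive)

void ApplyPriorities()
{
    // Raise the receiver slightly; extreme values such as
    // THREAD_PRIORITY_TIME_CRITICAL tend to starve other threads.
    SetThreadPriority(hReceiverThread, THREAD_PRIORITY_ABOVE_NORMAL);
}

void LoggerThreadBody()
{
    // Background mode must be entered and left by the thread itself:
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);
    // ... bulk file I/O that should not compete with the rest of the app ...
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
}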



----- added -----

Global variables:

C++
#define SIZE_BUFFER 50
#define SIZE_PACKET 414
static char buffer_received[SIZE_BUFFER][SIZE_PACKET];
static unsigned int in, out_log, out_image;



Thread 1 is running:

C++
char received[SIZE_PACKET];
while (1)
    {
        // Block until receive message from a client
        int recvMsgSize = udpServer.ReceiveFrom(received, SIZE_PACKET, senderIP, senderPort);

        mu.lock();
        if (in + 1 < SIZE_BUFFER)
            memcpy_s(buffer_received[in + 1], SIZE_PACKET, received, SIZE_PACKET);
        else
            memcpy_s(buffer_received[0], SIZE_PACKET, received, SIZE_PACKET);
        in++;
        if (in == SIZE_BUFFER)
            in = 0;
        mu.unlock();

    }



Thread 2 is running:

C++
while (1)
    {
        if (in != out_log)
        {
            out_log++;
            if (out_log == SIZE_BUFFER)
                out_log = 0;

            memcpy_s(buffer, SIZE_PACKET, buffer_received[out_log], SIZE_PACKET);

            stringToHexa_WiresharkForm(buffer, SIZE_PACKET, buffer_hex);

            Decode_NavData(buffer);
            fprintf(ptr_logFile, "%s\n\n", buffer_hex);
            count++;

            if (count > 5000)
            {
                fflush(ptr_logFile);
                count = 0;
            }


        }

    }


Thread 3 is running:

C++
while (1)
    {
        if (in != out_image)
        {
            out_image++;
            if (out_image == SIZE_BUFFER)
                out_image = 0;

            memcpy_s(buffer, SIZE_PACKET, buffer_received[out_image], SIZE_PACKET);

            if (!Decode_Image(buffer))
                return 0;
        }
    }



The code above is the core part of each thread (unnecessary code was removed).

I think the important things are the timing of the increments of the variable 'in' and of the writes to 'buffer_received', but I believe that timing is handled correctly.

Sometimes data is lost, either in the reading routine (thread 1) or in the writing routines (threads 2 and 3).

The UDP data is sent from another device in chunks of 414 bytes (each called a packet), at 400 packets per second.

If neither synchronization nor priority is the problem, I think it is the design of the code:

1. the read buffer size in ReceiveFrom()?

2. the while-loop design?

I suspect these two, but I have no solution (see the sketch of an alternative consumer loop below).
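
Regarding point 2: threads 2 and 3 spin in a tight while(1) loop and read 'in' without taking the lock. Purely as an illustration (a sketch, not the project's code), here is a consumer that blocks on a Windows CONDITION_VARIABLE paired with a CRITICAL_SECTION until the producer signals new data; the function and variable names are invented:

C++
#include <windows.h>
#include <string.h>

#define SIZE_BUFFER 50
#define SIZE_PACKET 414

static char buffer_received[SIZE_BUFFER][SIZE_PACKET];
static unsigned int in = 0, out_log = 0;

static CRITICAL_SECTION cs;          // protects in, out_log and the buffer slots
static CONDITION_VARIABLE dataReady; // signaled by the producer after each packet

void InitQueue()
{
    InitializeCriticalSection(&cs);
    InitializeConditionVariable(&dataReady);
}

// Producer side (thread 1), called after ReceiveFrom() has filled 'received':
void PushPacket(const char* received)
{
    EnterCriticalSection(&cs);
    in = (in + 1) % SIZE_BUFFER;
    memcpy_s(buffer_received[in], SIZE_PACKET, received, SIZE_PACKET);
    LeaveCriticalSection(&cs);
    WakeAllConditionVariable(&dataReady); // wake both consumers
}

// Consumer side (thread 2): blocks instead of busy-waiting.
void PopPacketForLog(char* dest)
{
    EnterCriticalSection(&cs);
    while (in == out_log)                          // nothing new yet
        SleepConditionVariableCS(&dataReady, &cs, INFINITE);
    out_log = (out_log + 1) % SIZE_BUFFER;
    memcpy_s(dest, SIZE_PACKET, buffer_received[out_log], SIZE_PACKET);
    LeaveCriticalSection(&cs);
}

Whether to drop, overwrite, or block when the ring buffer wraps around under load is a separate design decision.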
Comments
[no name] 22-Jul-15 0:30am    
You have to think through the problem and work out some diagnostic steps. Why are you losing data, and how can you correct it? Is it a buffer overrun? The first thing to try is increasing the FIFO buffer size by a huge amount. If the program then seems to run OK for a while before the problem appears, it is very likely a buffer overrun. This would be because the other tasks are taking too much time. There is no free lunch, and giving the other threads more time will probably not solve the problem, only delay the onset. Displaying data / image processing is often the bottleneck (comment out the display/processing and see what happens), and if so you may have to display only every 50th or 100th record, or process offline. This is how many systems work.
Member 11499804 22-Jul-15 0:39am    
Thank you. Besides increasing the FIFO buffer size, does it make sense to increase the read buffer size ('received' in ReceiveFrom(received, SIZE_PACKET, senderIP, senderPort), which is set to receive 414 bytes)? If there is unread data in the UDP buffer before the next packet arrives (so there are now 828 bytes of data in the buffer), would calling ReceiveFrom() for 414 bytes cause data loss?
[no name] 22-Jul-15 0:43am    
http://stackoverflow.com/questions/2862071/how-large-should-my-recv-buffer-be-when-calling-recv-in-the-socket-library
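
To illustrate the buffer-size discussion above (a sketch, not code from the question): the per-call buffer passed to ReceiveFrom() only needs to hold one datagram; what actually overflows when the receiver falls behind is the socket's kernel receive buffer, which can be enlarged with SO_RCVBUF. This assumes udpServer is an MFC CAsyncSocket, so its m_hSocket member is the underlying SOCKET; the 1 MB figure is an arbitrary example:

C++
#include <winsock2.h>

// Enlarge the OS-level receive buffer of a UDP socket.
// At 414 bytes * 400 packets/s (about 165 KB/s), 1 MB gives the receiver
// thread several seconds of slack if it is briefly preempted.
bool EnlargeReceiveBuffer(SOCKET s, int bytes)
{
    return setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                      reinterpret_cast<const char*>(&bytes), sizeof(bytes)) == 0;
}

// Usage with the socket from the question (assumption):
//   EnlargeReceiveBuffer(udpServer.m_hSocket, 1024 * 1024);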

From your explanation, I see no need to change any priorities.

The effect of thread priority (and, don't forget, also the process priority, which affects the combined priority of the process's threads) in Windows is totally probabilistic. At the same time, boosting the priority to the extremes for a prolonged period of time will block other threads, even driver operation. In the absolute majority of cases, not touching priorities is the best strategy.

Changing priorities can be used for fine-tuning the probabilities, to slightly increase total performance (but what guarantees that it helps beyond the ad-hoc case?), and such situations are very rare. Also, for example, with hardware, you can use time-critical priority for a short fragment of mission-critical code, but, strictly speaking, not really to guarantee that this fragment will be executed in time, but rather to reduce the probability of failures related to bad timing; in fact, it's mostly done to compensate for some design defect of the hardware, which could be just too primitive to guarantee real-time behavior under the hood.

I hope you understand that priorities cannot and should not be used to control the order of operations. If the order of operations depends on timing, the design is incorrect and is generally a source of disasters: http://en.wikipedia.org/wiki/Race_condition

More exactly, this is not a problem of priorities, but only an indication of a wrong threading design. That is, people have a design which suffers from a race condition in the first place, and then some of them try to change the priorities to shift the race condition in the hope of achieving the desired order of operations. The most dangerous situation is when they succeed in observing the desired effect; then the system can work seemingly correctly for an undefined period of time and crash at some point in production. :-) I hope you haven't been planning that kind of abuse.

—SA
 
Comments
Member 11499804 21-Jul-15 7:26am    
Thank you very much. I'm planning to try mutex locks in critical sections and increase buffer size of UDP ReceiveFrom(), not considering priorities. Is it reasonable?
It is hard to fix because the problem occurred in some low-performance computers which other colleague has. So I have no sufficient time for trial-and-error. Your reply was a big help.
Sergey Alexandrovich Kryukov 21-Jul-15 8:27am    
I don't know all your details, but most likely it's reasonable.
About critical sections and other thread synchronization primitives: that's the whole point; they really guarantee some discipline in synchronizing execution in time, and priorities do not.
—SA
Member 11499804 21-Jul-15 10:57am    
All the examples of thread synchronization I have seen involve many threads running the same function. In my case, different threads run different functions and access a global variable (the buffer). In this case, does entering/leaving a critical section (a few lines in the function that writes data to the buffer) still work?
Sergey Alexandrovich Kryukov 21-Jul-15 12:59pm    
No, "same function" per se has nothing to do with thread synchronization. This is a huge misconception.

It's all about shared objects (shared resources, more generally), and nothing else. You should never synchronize threads merely for that reason. Look, if you don't have shared objects, you pass all objects through function parameters (and return values, too), all on the stack, and the stack of each thread is separate; this is the whole point. Generally, people say "the best synchronization is no synchronization". Don't over-synchronize! Some artists here have demonstrated how they synchronized so much that their many threads actually executed one after another.

Critical sections, mutexes and semaphores are used to make sure that a shared object is accessed by no more than one thread at a time (or, in a more general and less useful case, by no more than N threads at a time). This is all you need. Now, the best threading design has no, or almost no, shared objects, and therefore no, or almost no, mutual exclusion.

As you are apparently a bit lost here, please also read thoroughly about mutual exclusion. And, by the way, software development should not be done by trial and error.

Also, are you going to accept my answer formally? In any case, your follow-up questions will be very welcome. By the way, I was one of the earliest developers of threading, back when C++ threading was far from its modern form and, say, Linux did not have threads at all, only processes, so I know how these issues look from the inside.

—SA
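
To make the point about shared objects concrete (a sketch of my own, not part of the answer): the same critical section serializes access even though the producer and consumer are completely different functions running on different threads; what it protects is the shared data, not a particular function. The names below are invented:

C++
#include <windows.h>

static CRITICAL_SECTION g_bufferLock;        // one lock for all shared state
static unsigned int in = 0, out_log = 0;     // shared indices, as in the question

void InitLock() { InitializeCriticalSection(&g_bufferLock); }

// Called by thread 1 from one function...
void ProducerAdvance()
{
    EnterCriticalSection(&g_bufferLock);
    in = (in + 1) % 50;                      // SIZE_BUFFER from the question
    LeaveCriticalSection(&g_bufferLock);
}

// ...and by thread 2 from a completely different function; the lock still
// guarantees that only one of them touches the indices at a time.
bool ConsumerTryAdvance()
{
    bool haveData = false;
    EnterCriticalSection(&g_bufferLock);
    if (in != out_log)
    {
        out_log = (out_log + 1) % 50;
        haveData = true;
    }
    LeaveCriticalSection(&g_bufferLock);
    return haveData;
}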
Member 11499804 21-Jul-15 23:34pm    
Thank you for your considerate comment. I understand the problem more deeply now, and you have encouraged me to better understand coding and software development. Unfortunately, I now think it is not a synchronization problem. I have updated my question for you.
The UDP reading is important for getting the input and NOT missing any of it, so I would use normal priority for it. I prefer using the THREAD_PRIORITY_BELOW_NORMAL flag for non-foreground threads. If you go too low, they won't run but will wait; there is no reason to go too low, and you had better test under realistic conditions. It looks like the workload isn't too heavy.

Microsoft has its own documentation on thread priority; read and understand it.
 
Comments
Member 11499804 21-Jul-15 7:26am    
Thank you!
