|
I have two threads, and one will indicate to the other, through a boolean flag, that it should perform some operation. To set this flag I know it is necessary to have mutual exclusion, so I use a monitor to restrict access to the flag and then exit the monitor.
Here is my question: I remember from a long time ago that processors will sometimes cache their own copy of a variable and only fetch or write the value again when it is evicted from the cache, or when they know the variable is volatile. So do I have to indicate that the boolean flag I am using in my monitor is volatile?
Regards,
Did I post well? Rate it! Did I post badly? Rate that too!
|
Rather than use a boolean flag why not use a ManualResetEvent or AutoResetEvent?
only two letters away from being an asset
|
I was more interested in it from a thought experiment perspective.
|
Esmo2000 wrote: do i have to indicate this boolean flag I am using in my monitor is volatile?
Yes. Declare it like
volatile bool yourFlag = false;
Another option is to work with Thread.VolatileRead[^]. In fact, in C#, if you use the volatile keyword, reads and writes are performed as if by VolatileRead and VolatileWrite.
As Mark suggested, you can try an EventWaitHandle as well. Make sure you understand the difference between AutoResetEvent and ManualResetEvent before you make a choice.
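To make the pattern concrete, here is a minimal sketch of a volatile flag shared between two threads (the class and field names are mine, not from the original post):

```csharp
using System;
using System.Threading;

class FlagDemo
{
    // volatile ensures every read/write goes to the field itself,
    // never a stale copy the JIT has cached in a register.
    static volatile bool _shouldRun = true;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            while (_shouldRun) { /* do work */ }
            Console.WriteLine("Worker saw the flag and stopped.");
        });
        worker.Start();

        Thread.Sleep(100);   // let the worker spin briefly
        _shouldRun = false;  // signal the worker to stop
        worker.Join();
    }
}
```

Without volatile, the JIT is allowed to hoist the read of `_shouldRun` out of the loop, and the worker might never observe the change.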
|
No offense, but most of what you said is untrue on some level.
You don't need mutual exclusion for a bool.
x86 processors don't care about "volatile" and have no notion of it (MTRRs control caching, but the cache is coherent anyway). The "volatile" keyword makes sure the value is actually fetched (possibly from the cache, but that doesn't matter, since the cache is coherent) instead of reused from a register, and the same goes for writing. It also guarantees that those operations won't be reordered by the compiler (the CPU can still reorder them, but only under some circumstances).
It's all very complicated. Unless you want to do assembly programming it usually isn't even relevant, especially if you just use the existing synchronization primitives; see below.
Just making the bool volatile should usually be enough, since it's just a bool, and you may not even need that (it usually works anyway, but better safe than sorry).
However, I don't have a lot of information here to go on. Usually you should use normal synchronization primitives when dealing with synchronization, instead of making one up out of a bool: ManualResetEvent, for example, or one of the static functions in Interlocked if you need something low-level. At least, that's what I've been told.
PS: I'm sorry for any inaccuracies in this post; it's a complex subject, and I'm not even an expert on this.
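As a sketch of the "use the existing primitives" advice, here is the same signalling done with a ManualResetEvent instead of a hand-rolled flag (names are mine, purely for illustration):

```csharp
using System;
using System.Threading;

class EventDemo
{
    static readonly ManualResetEvent _go = new ManualResetEvent(false);

    static void Main()
    {
        var worker = new Thread(() =>
        {
            _go.WaitOne();   // blocks without burning CPU, unlike polling a bool
            Console.WriteLine("Signaled.");
        });
        worker.Start();

        _go.Set();           // wake the waiting thread
        worker.Join();
    }
}
```

Besides avoiding the memory-model questions entirely, the waiting thread consumes no CPU until it is signalled.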
|
Yes, you need one of two things:
- either make the flag volatile, i.e. non-cacheable, to make sure you aren't using stale data. The disadvantage is you still have to poll it, wasting a lot of CPU cycles;
- or use a real synchronization primitive such as a Manual/AutoResetEvent, as Mark indicated.
Luc Pattyn [Forum Guidelines] [My Articles]
The quality and detail of your question reflects on the effectiveness of the help you are likely to get.
Show formatted code inside PRE tags, and give clear symptoms when describing a problem.
|
Luc Pattyn wrote: i.e. non-cacheable
While it's not actually wrong (the value can't be cached in a register anymore), it might give the impression that it has something to do with the cache.
A bit of a nitpick maybe, but it would be a shame if people were to assume that "volatile" creates uncacheable memory ranges (which is, AFAIK, the only way to not cache something at all; non-temporal hints are just hints).
|
The exact meaning of volatile is a well-kept secret; manuals aren't clear on the subject. The best description would be: each read or write operation in code must result in an actual read or write from/to actual memory (in the same order), so registers and cache levels are to be ignored/disabled somehow.
Another way of putting it is: the compiler should assume the current code is NOT the only one operating on the data, hence every time the data is touched, it has to be fetched/written.
|
Luc Pattyn wrote: Another way of putting it is: the compiler should assume the current code is NOT the only one operating on the data, hence every time the data is touched, it has to be fetched/written.
It's worth noting that, at least in C, any time the value of a variable is read the compiler is required to generate a read instruction [i]even if the value is ignored[/i]. On hardware platforms with memory-mapped I/O (very rare for the x86, but common with microcontrollers), reads of certain addresses will trigger various effects. For example, on some machines reading the lower byte of a multi-byte hardware counter will latch the registers for all bytes; even if code is only interested in the upper bytes of the counter, it may have to read the lower byte to trigger the latch. Even though such hardware is rare in the PC world, accessing a memory location while ignoring the value may have two potentially useful effects:
-1- Forcing the page containing that location to be loaded into memory.
-2- Validating the address as pointing to something accessible.
Most programs won't worry about #1, but in some cases it may be good to minimize the likelihood of a page fault occurring while a resource is held. Pre-accessing a memory location before acquiring the resource may be helpful; even though there'd be no guarantee the page holding the location wouldn't get swapped out before it was actually used, the likelihood of a page fault occurring while the resource was held would be reduced.
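A minimal sketch of that pre-accessing idea in C# (the 4 KB page-size assumption and the helper name are mine, purely for illustration):

```csharp
using System;

class PrefaultDemo
{
    // Touch one byte per 4 KB page so any page faults happen now,
    // before a lock is acquired, rather than while it is held.
    static int Prefault(byte[] buffer)
    {
        int sum = 0;
        for (int i = 0; i < buffer.Length; i += 4096)
            sum += buffer[i];
        return sum;  // returning the sum keeps the reads from being optimized away
    }

    static void Main()
    {
        var buffer = new byte[64 * 1024];
        Console.WriteLine(Prefault(buffer)); // prints 0 for a freshly zeroed buffer
    }
}
```

In C#, unlike C, the compiler may elide a read whose result is unused, hence the trick of accumulating and returning the values.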
|
Hi there. In my company, we have a Windows service app that relies on multiple threads to process many socket requests at a time, using the socket's OnDataArrival event to process the arriving data.
Although the system is working properly, the server administrator is complaining that the app's process is using a very high number of handles. That is, when you open the Windows Task Manager and check the Handles column, the number is around 30,000 handles for the process. I've checked some sources, and a reasonable number of handles for a single process is around 3,000; 8,000 is already a very high number. I'm way over this.
Does anyone have some good sources relating .NET threads and events to the use of OS handles? I need some hints on how to lower the number of handles for the process. I should probably redo the app with good threading practices (the implementation is very old), but I need a faster solution for the current service.
Regards,
Leonardo Muzzi
|
You're probably not really using that many handles, but have a leak in your code somewhere where handles are not being released properly.
|
Hi,
I haven't done any mass socket stuff; however, as I understand .NET sockets, you don't need explicit threads at all. A BeginConnect, BeginAccept, or BeginReceive is handled asynchronously, and the events get handled on some other thread, typically taken from the ThreadPool. So I expect you only need one (or a few?) handles per socket.
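A hedged sketch of that asynchronous pattern, using BeginAccept/BeginReceive on a loopback socket (the structure and names are mine; real code needs error handling and a receive loop for partial reads):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class AsyncSocketSketch
{
    static readonly ManualResetEvent _done = new ManualResetEvent(false);

    static void Main()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 0)); // port 0 = pick a free port
        listener.Listen(1);

        // BeginAccept returns immediately; the callback runs on a ThreadPool thread.
        listener.BeginAccept(ar =>
        {
            Socket client = listener.EndAccept(ar);
            var buffer = new byte[64];
            client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ar2 =>
            {
                int n = client.EndReceive(ar2);
                Console.WriteLine("Received: " + Encoding.ASCII.GetString(buffer, 0, n));
                client.Close();
                _done.Set();
            }, null);
        }, null);

        // A throwaway client to exercise the server.
        int port = ((IPEndPoint)listener.LocalEndPoint).Port;
        using (var sender = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
        {
            sender.Connect(new IPEndPoint(IPAddress.Loopback, port));
            sender.Send(Encoding.ASCII.GetBytes("hello"));
            _done.WaitOne();
        }
    }
}
```

No thread is created explicitly here: both callbacks run on ThreadPool threads, which is what keeps the handle and thread counts low.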
|
Leonardo Muzzi wrote: around 30000 handles for the process
That's too much. AFAIK, the handle count includes file handles, threads, mutexes, semaphores, etc. Look at the application source and ensure the resources are properly released. No resource should live longer than expected. Get a memory profiler and analyze the object allocations and GC activity.
Another way is to log the application's activities and analyze the log. This will help you understand where it is spending more time and which resources are not getting released properly.
Leonardo Muzzi wrote: windows service app that relies on multiple threads functionality to process many socket requests at a time,
How are you doing this? One thread per request? If so, consider using the asynchronous methods provided by the socket class. Asynchronous methods make it possible to write highly thread-efficient applications.
Here[^] is a decent MSDN article which covers the subject in detail.
|
The application doesn't open and close sockets all the time: in fact, it just opens them and keeps them open. I must use different sockets to send and receive data.
I have one thread per "send" socket. The "send" sockets are always open and send data indefinitely, as long as it arrives in their queues. There are 2 to 10 sockets like this in use at the same time, so 2 to 10 threads are always active, sending whatever arrives on their (separate) queues.
The "receive" sockets (one for each "send" socket) work using the OnDataArrival event. I suppose they use thread-pool threads to fire the event.
Anyway, I agree that what I need is a complete code revision to avoid memory leaks, plus some multithreading review. The sockets don't seem to be the real problem. For example, there are too many "controls" over the queues, using some multithreading that could be avoided. I'll work in that direction.
Does anyone know if the Enterprise Library could have something to do with this? It has been used to log some data to a SQL Server database.
Thanks everyone for the support.
Regards,
Leonardo Muzzi
|
Leonardo Muzzi wrote: I must use different sockets to send and receive data.
Why don't you use the same socket for sending and receiving data? IMO, you only need one server socket, and it should be able to handle the connection and send/receive. Avoid creating unnecessary threads. Prefer the asynchronous methods over the synchronous ones.
Leonardo Muzzi wrote: Does anyone know if the Enterprise Library can have something to do with this?
Not sure.
|
Because they really are different sockets: different IP/port combinations, different connections. The legacy application that answers these sockets on the other side works like this. And I believe this is better, because that application can receive many requests at the same time. Otherwise, it would block the connection until it got the answer.
I use the OnDataArrival event to receive the data, that is, asynchronously to the sending function.
Regards,
Leonardo Muzzi
|
It's most likely a leak; I doubt you are really using all those handles. You may check for missing calls to Dispose() and the like.
Not only for sockets, since handles are used also for open files etc.
2+2=5 for very large amounts of 2
(always loved that one hehe!)
|
Thanks... 'On top of that, you try to access the row before you actually add it to the rows collection'. This helped me solve the issue and also understand some basic concepts.
|
Here's a useful snippet for you; rather than starting a new thread every time, you can reply to others on your existing thread. That way, it's all kept in one place (which incidentally should be in the VB.NET forum). Please stop creating new threads when one thread will do.
"WPF has many lovers. It's a veritable porn star!" - Josh Smith As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.
My blog | My articles | MoXAML PowerToys | Onyx
|
Private Sub addtopaymentbtn_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles addtopaymentbtn.Click
Dim i As Integer = DataGridView1.Rows.Count
DataGridView1.Rows(i-1).Cells("Details").Value=billcombo.SelectedItem
DataGridView1.Rows.Add()
End Sub
-------- I changed my code like this. Though it is not showing any error, it is overwriting the previous results. Also, I can't understand how the rows get added first and then the combobox value gets added, given that I wrote the code for adding rows at the end.
|
It's a very similar problem to your last post. i-1 is the last element in the collection, so you need to change that to i. On top of that, you try to access the row before you actually add it to the rows collection.
Between the idea
And the reality
Between the motion
And the act
Falls the Shadow
|
My previous post doesn't mean that!
Modify your code like:
Private Sub addtopaymentbtn_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles addtopaymentbtn.Click
    DataGridView1.Rows(0).Cells("Details").Value = billcombo.SelectedItem
    Dim i As Integer = DataGridView1.Rows.Count
    DataGridView1.Rows.Add()
    DataGridView1.Rows(i).Cells("Details").Value = billcombo.SelectedItem
End Sub
|
Private Sub addtopaymentbtn_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles addtopaymentbtn.Click
DataGridView1.Rows(0).Cells("Details").Value = billcombo.SelectedItem
Dim i As Integer = DataGridView1.Rows.Count
i = i + 1
DataGridView1.Rows.Add()
DataGridView1.Rows(i).Cells("Details").Value=billcombo.SelectedItem
End Sub
_______ I am selecting a value from a combobox (billcombo), and it is to be inserted into the datagrid on each addtopaymentbtn click. But the code gives the error below:
'Index was out of range. Must be non-negative and less than the size of the collection.'
Can anybody help me sort out this problem?
|
I see you are trying to access a DataGridViewRow that doesn't exist.
Don't forget that DataGridViewRowCollection.Item is a zero-based index.
It means that row number I has index I-1. Remove I = I + 1 from your code!
|
Comment out the i = i + 1 line. The Count property of a collection is always one higher than the index of its last item, because VB.NET starts counting indexes at zero instead of one.
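The zero-based indexing point applies to any .NET collection; a small C# sketch (a plain list stands in for the row collection):

```csharp
using System;
using System.Collections.Generic;

class IndexDemo
{
    static void Main()
    {
        var rows = new List<string> { "row0", "row1", "row2" };
        Console.WriteLine(rows.Count);            // 3 items in the collection...
        Console.WriteLine(rows[rows.Count - 1]);  // ...but the last valid index is Count - 1
    }
}
```

Indexing with `rows[rows.Count]` would throw the same "Index was out of range" exception seen above.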
|