|
You have a misunderstanding of Mono, my friend. Mono is a programming framework, like the .NET Framework. You can use C# in Mono just like you can in .NET. You don't need to convert C# to anything to run it on Mono. If your C# code uses some library, say a native Win32 library, then you'll need to change it to use something more cross-platform.
AFAIK, there is no converter that automatically makes your project run on Mono. If you used purely the .NET Framework, chances are your project will compile OK on Mono. You can use the Moma[^] tool to see if you need to make any changes to your code for it to work on Mono.
|
|
|
|
|
Judah Himango wrote: You can use the Moma[^] tool to see if you need to make any changes to your code for it to work on Mono.
Thanks
It is Good to be Important but!
it is more Important to be Good
|
|
|
|
|
Hey all,
I've been puzzling this over for the last hour; basically I only want to wait a certain amount of time for a method to complete before giving up (unless a certain condition is met first).
I've tried a few things involving watcher threads and so on, and finally hit upon the idea of using an asynchronous delegate (after some nosing around MSDN):
delegate string Worker(string args);

public string DoWork(string args)
{
    Worker d = new Worker(Workhandler);
    IAsyncResult result = d.BeginInvoke(args, null, null);
    if (!result.IsCompleted)
    {
        // Wait up to one second for the delegate to finish.
        result.AsyncWaitHandle.WaitOne(1000, false);
        if (!result.IsCompleted)
        {
            return "Timeout!!";
        }
    }
    return d.EndInvoke(result);
}

private string Workhandler(string args)
{
    Thread.Sleep(2500); // fake a long-running operation
    return args;
}
Now the above seems to work perfectly OK, but I'm the first to admit I don't know much about delegates. So my questions are:
1.) If a timeout occurs (as in the above faked-up example) what happens to the Workhandler? Does it continue to run, or does the .NET Framework leap in and forcibly stop any processing?
2.) I imagine that d.BeginInvoke is starting a thread; is this so, and where does this thread exist in terms of the application? Is it possible to grab hold of that thread without using the delegate to control it?
3.) Am I giving myself enough rope to hang myself with here - could this go horribly, horribly wrong in subtle ways?
4.) Are there any better/more standardized/preferred ways to achieve what I'm trying to achieve?
-- modified at 14:58 Thursday 26th July, 2007
"It was the day before today.... I remember it like it was yesterday."
-Moleman
|
|
|
|
|
martin_hughes wrote: 4.) Are there any better/more standardized/preferred ways to achieve what I'm trying to achieve?
I am unsure if you have considered this, but I believe this could possibly work; I'm not sure if it is preferable. Use a System.Timers.Timer and start it at the beginning of your method. Create an event handler that watches the timer and fires on every tick. When the ticks reach the time you specify, you could kill the method, something like the sketch below.
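To make that concrete, here is a minimal sketch, assuming the long-running work is moved onto its own thread so the timer's Elapsed handler has something it can actually stop. The method and variable names are made up for illustration:

using System;
using System.Threading;

class TimeoutDemo
{
    // Hypothetical stand-in for the call that sometimes hangs.
    static void DoLongRunningWork()
    {
        Thread.Sleep(60000);
    }

    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(DoLongRunningWork));
        worker.Start();

        // Fully qualified to avoid clashing with System.Threading.Timer.
        System.Timers.Timer timeout = new System.Timers.Timer(30000);
        timeout.AutoReset = false; // fire only once
        timeout.Elapsed += delegate
        {
            if (worker.IsAlive)
                worker.Abort(); // give up on the hung call
        };
        timeout.Start();

        worker.Join(); // wait for the worker to finish or be aborted
    }
}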
I get all the news I need from the weather report - Paul Simon (from "The Only Living Boy in New York")
|
|
|
|
|
It's a good thought, and I think I tried something similar - the problem I had here (my apologies, I wasn't very clear in the original post as to why the method may take so long to complete) was that the method couldn't easily be interrupted.
Basically the method called a synchronous function in an external library which fires a barcode scanner. Every so often the call just seems to hang - either a fault in the library or the hardware itself is my best bet - causing my application to hang; the device then needed to be reset, causing much anger and frustration among my users.
"It was the day before today.... I remember it like it was yesterday."
-Moleman
|
|
|
|
|
Use a mix of Thread and Thread.Join. Here is an example:
public void MyWorkerThread()
{
    try
    {
        // ... do the potentially hanging work here ...
    }
    catch (ThreadAbortException)
    {
        // Clean up here if the caller aborts the thread.
    }
}

public void MyWaitingMethod()
{
    Thread t = new Thread(new ThreadStart(MyWorkerThread));
    t.Start();
    // Wait up to 30 seconds for the worker to finish.
    bool threadFinished = t.Join(30000);
    if (!threadFinished)
    {
        t.Abort();
    }
}
Something along those lines.
-----
If atheism is a religion, then not collecting stamps is a hobby. -- Unknown
|
|
|
|
|
Hello,
I would like to know what happens to an object when I create it inside a loop. Is it eliminated by the GC when the iteration ends, before a new instance of the same class is created?
An example:
namespace Whatever
{
    class Model
    {
        // my fields and methods
    }

    class Program
    {
        static void Main()
        {
            // variable declarations
            ...
            while (condition)
            {
                Model temp = new Model();
                ...
                ...
            }
        }
    }
}
Am I doing it correctly?
Thanks in advance for your attention.
|
|
|
|
|
It is not garbage collected immediately, but it becomes eligible for garbage collection. When the next garbage collection occurs, it may (or may not!) be collected (possibly after being queued for other things like finalization); it is up to the GC to decide when and what to do with it. Classes that use a lot of resources should implement the Dispose design pattern so that they can be explicitly disposed of in the code, releasing expensive resources such as images, file handles, database recordsets, etc., instead of relying on the GC to eventually get to them. If you are allocating expensive resources in a tight loop, it is imperative that you dispose of the objects when you are done using them, or memory usage will explode.
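For example, a minimal sketch, assuming Model holds something expensive (an image, a file handle) and therefore implements IDisposable; the using block disposes each instance deterministically at the end of every iteration:

while (condition)
{
    using (Model temp = new Model())
    {
        // ... work with temp ...
    } // temp.Dispose() runs here, even if an exception was thrown
}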
|
|
|
|
|
Thanks!
|
|
|
|
|
PhilDanger wrote: it is imperative that you dispose of the objects when you are done using them, or memory usage will explode.
That is true, and I was thinking along the lines of the performance hit as well.
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
|
|
|
|
|
Going a bit deeper, if I have
namespace Whatever
{
    class Base
    {
        ...
    }

    class Model
    {
        List<Base> myList = new List<Base>();
        // my fields and methods
    }

    class Program
    {
        static void Main()
        {
            // variable declarations
            ...
            while (condition)
            {
                Model temp = new Model();
                ...
                ...
            }
        }
    }
}
Do I have to implement the IDisposable interface in the Base class?
I've tried to implement it on the Model class, and I get an error at execution that has something to do with infinite loops when the Model.Dispose() method is called.
Thanks for your quick answer earlier.
|
|
|
|
|
No. A List is a managed object, so you don't need to implement IDisposable to take care of that. You only need IDisposable to handle unmanaged resources.
When the reference to the object is no longer used, the object, the List inside the object, and every object in the List are all collectable (unless the objects are referenced elsewhere, of course).
The garbage collector doesn't look for objects to collect; it looks for the objects to keep. That means that the instant an object becomes collectable, all objects that it contains are also collectable.
---
single minded; short sighted; long gone;
|
|
|
|
|
Let's say you have a List<Bitmap> (or something else that is disposable). You will want to give your Base class a Dispose() method so that you can loop through the List and Dispose of everything manually.
Basically, if your class contains unmanaged resources OR contains members that implement IDisposable, you'll want your class to implement it as well.
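A minimal sketch, assuming the containing class owns a List<Bitmap>; the field name here is made up for illustration:

using System;
using System.Collections.Generic;
using System.Drawing;

class Model : IDisposable
{
    private List<Bitmap> images = new List<Bitmap>();

    public void Dispose()
    {
        // Dispose each item manually; the List itself is purely managed
        // and needs no disposal of its own.
        foreach (Bitmap b in images)
        {
            b.Dispose();
        }
        images.Clear();
    }
}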
--sorry, meant this as a reply to OP.
|
|
|
|
|
I am trying to bind a DataGridView to a collection that implements IList<T>. The entities in the collection contain properties that will be displayed as columns in the DataGridView. This all works fine.
One of the properties in the entity is itself another entity. I want the DataGridView to bind to a property of the nested entity. How would I accomplish this?
Example Description:
I want to bind a DataGridView to the EmployeeCollection. The columns I want to display are EmpNumber, EmpName.LastName, EmpName.FirstName. I have tried setting the DataPropertyName on a column in the DataGridView to EmpName.FirstName and this does not work. I would like to solve this without exposing the Name class's properties directly on the Employee class.
Example:
public class EmployeeCollection : List<Employee>
{
}

public class Employee
{
    public string EmpNumber { get; set; }
    public Name EmpName { get; set; }
}

public class Name
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
Thanks for your help.
Patrick McCoy
|
|
|
|
|
Hi guys,
I'm using an object data source in a Windows app with a DataGridView. The grid binds fine and populates all text fields OK. Now I want to be able to populate the DGV with existing data as well. The whole thing binds as usual, but the "Selected" field is now occasionally true.
Now, regardless of what I do, it refuses to set the check boxes to true. I have attempted to set the TrueValue and FalseValue to their relevant Boolean strings, as well as bit values.
Changing the value through the GUI assigns the value as expected.
This is perplexing me. Any pointers?
Cheers
Tris
Update: The values are being reset when I display the dialog, as a SelectionChange event is being thrown. When I originally bind the data I have an _isLoading flag to exit early before any processing.
Is there any way to suspend this event?
-- modified at 12:37 Thursday 26th July, 2007
-------------------------------
Carrier Bags - 21st Century Tumbleweed.
|
|
|
|
|
Hi Eliz,
Thanks for the reply, but I was using WinForms. I've fixed it now; I had to move the initialization to the OnFormLoad event, which handled things properly on the ShowDialog() call.
Cheers
T
-------------------------------
Carrier Bags - 21st Century Tumbleweed.
|
|
|
|
|
Hi,
I am essentially trying to rename the columns of a table I retrieved from a database using an OracleConnection in C#.
I have tried TableMappings.Add followed by ColumnMappings.Add, but when I try to build I get a "Child list for field server_EOD_Execution cannot be created" exception.
I'm sorry, but I'm at my wits' end with this one. Can someone please help?
Is there another way to implement the renaming?
God speed
Deji
|
|
|
|
|
You could have a look at this link[^] on MSDN; it shows you how to rename a column, the issue related to renaming it, and the workaround to fix it.
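In case a sketch helps, this is roughly what the mapping looks like, assuming an OracleDataAdapter; the connection string and all table and column names here are made up for illustration:

using System.Data;
using System.Data.Common;
using System.Data.OracleClient;

using (OracleConnection conn = new OracleConnection(connectionString))
using (OracleDataAdapter adapter = new OracleDataAdapter(
    "SELECT EMP_NO, EMP_NAME FROM EMPLOYEES", conn))
{
    // "Table" is the default source-table name the adapter uses on Fill.
    DataTableMapping mapping = adapter.TableMappings.Add("Table", "Employees");
    mapping.ColumnMappings.Add("EMP_NO", "EmployeeNumber");
    mapping.ColumnMappings.Add("EMP_NAME", "EmployeeName");

    DataSet ds = new DataSet();
    adapter.Fill(ds); // ds.Tables["Employees"] now has the renamed columns
}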
Tarakeshwar Reddy
MCP, CCIE Q(R&S)
There are two kinds of people, those who do the work and those who take the credit. Try to be in the first group; there is less competition there. - Indira Gandhi
|
|
|
|
|
Hi folks,
I'm working on something which will heavily use USB as a means of communicating with whatever program I write for it--kind of like a GUI that takes info from somewhere else outside the computer.
However, I'm not familiar with how USB even works, much less how it receives and interprets the information it gets.
What I'm wanting to do is to take a cable from an RS232 connection on the back of the machine, run it to a RS232-to-USB converter (which I have, it's one that Texas Instruments puts out that has a TUSB3410 chip on board), and then run a USB cable back to the machine again.
I already know how to send/receive information out the RS232 port, but what I want is a program that sends something out via RS232 that goes to the converter, then out of the converter into the USB port, and shows up on the screen.
Any ideas?
Thanks for your time,
Michael Fritzius
|
|
|
|
|
matrix2681 wrote: I'm not familiar with how USB even works, much less how it receives and interprets the information it gets.
Neither am I, but after some quick googling it seems that your scenario might not work due to the "power" control aspect of USB. You might want to do some research on this subject.
|
|
|
|
|
I did find something that makes it work, but it seems like more of a hack job than anything.
I found a program that I wrote before for messing with RS232, and made a connector that allowed the computer to talk to itself. So I looked at the device manager on here and saw that it considered the USB port to be "COM5". Basically it looks at it as a regular port.
So I modified the program to send data out COM1, like it used to (the RS232 port) and expect data to come in COM5, and it seems to work.
I'm not sure if this is what I want though... but it *can* take info in the port and display it... but there needs to be an RS232 connection present.
hmm...
What did you see about power control that set off a red flag for you? I know that power might be an issue, since whatever project is made will be something standalone, so maybe it'll demand too much? USB can power devices from the computer's power supply, right?
Thanks,
Michael Fritzius
|
|
|
|
|
matrix2681 wrote: What did you see about power control that set a red flag off for you?
this discussion[^]
In the case of connecting two PCs' USB ports directly, things would fail (aside from the major electrical issues) because both PCs would try to exert control and expect the other to submit... To get around this problem, PC-to-PC USB cables have a controller within that allows each PC to think that it is in charge.
|
|
|
|
|
Hi,
I am in the middle of similar things, i.e. I have a setup with some old Macs and my own little microcontroller-based network, and I want to port that to one or two PCs that don't have serial ports natively.
Some remarks:
1. In order to connect two PCs serially (also when looping back to the same PC) you need a "null modem" (that's either a null modem cable or a very small null modem adapter, both having two female DB9 connectors). I guess you figured that one out already.
2. I haven't observed/measured it yet, but the addition of a USB-to-serial converter is likely to add some latency, i.e. the characters will be transmitted/received a bit later than would be the case with a direct RS232C interface; there are two reasons for this:
- encapsulating I/O commands in USB packets has some overhead;
- the USB-to-serial converter will try to utilize USB bandwidth the best way it can, i.e. not strain it, so it will knowingly wait a while to see whether it can pack a couple of characters into one packet.
3. Also, I believe the timing of control lines (DTR/RTS/etc.) may be somewhat less accurate, so if you want to interface to special hardware that strongly relies on these timings, it might go wrong.
4. The USB-to-serial cable comes with some software (a driver) that turns it into an almost normal COM port: you can choose its number, and after that it should show up in regular lists of serial ports. As a result you can use any terminal emulator (HyperTerminal), select the port, and use it.
5. The USB-based serial port probably shows up under Device Manager in a different category, possibly named after its manufacturer (as opposed to the Ports category).
Cheers !
|
|
|
|
|
Luc Pattyn wrote: 4. The USB-to-serial cable comes with some software (a driver) that turns it into an almost normal COM port: you can choose its number, and after that it should show up in regular lists of serial ports. As a result you can use any terminal emulator (HyperTerminal), select the port, and use it.
This is what I've done. The driver software that came with this did exactly that. Any kind of project that is made will have something like this RS232-to-USB converter on board, since the solid, cheap chips I use have RS232 built in. Do you think this is a safer way to accomplish what I'm doing?
See, the thing is, I want to stay away from having to designate this device as a particular thing so that a particular driver can act on it. Nothing I make will be able to be labelled like that anyway--they will just be sending out information. If I can bypass that standard by just treating the whole USB port like a fake COM port, then that's awesome. Basically, I'm looking for a way to make a program that needs info from somewhere, and here (COM5--the USB port) is where to get it.
In fact, I just used HyperTerminal with an old project that talks to a GSM modem and it is able to receive info just as if it were looking at COM1, like it used to.
So I don't know. I mean, bearing all that in mind, would it still be necessary to do all the registering and driver matching for whatever device I make, or no?
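For what it's worth, once the converter shows up as a COM port, the code doesn't need to know USB exists at all. A minimal sketch, assuming the adapter appears as "COM5" with 9600/8-N-1 settings (both assumptions; check Device Manager):

using System;
using System.IO.Ports;

class PortDemo
{
    static void Main()
    {
        // Open the USB-to-serial adapter exactly like any other COM port.
        SerialPort port = new SerialPort("COM5", 9600, Parity.None, 8, StopBits.One);
        port.ReadTimeout = 2000; // don't block forever on a hung device
        port.DataReceived += delegate(object sender, SerialDataReceivedEventArgs e)
        {
            Console.WriteLine(port.ReadExisting()); // dump whatever arrived
        };
        port.Open();
        Console.ReadLine(); // keep the program alive while data comes in
    }
}

Beyond the converter's own driver there is no registering or driver matching to do; the application just reads and writes a serial port.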
|
|
|
|
|