Actually, this is a pretty bad idea. The right idea is this: develop the application so that it can receive a request from a second instance, change its behavior accordingly, and assume the responsibilities of that second instance. The best method of communication I know, of all the methods I have tried, is classic .NET remoting.
Each instance of the application process first tries to connect to the remoting service as a client. If this attempt fails, this instance is the only one, so it starts playing the role of the remoting service itself. When some client connects later, the service should accept the data sent by that second instance of the process and continue running with its "new mission". After the second process has passed its data to the first one, it terminates itself. This is always better: terminating a remote process from the outside is a careless termination, always bad and generally unsafe.
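The scheme above can be sketched as follows. This is only a sketch under assumptions: the channel name "MyApp.SingleInstance", the object URI "service", and the SingleInstanceService type are all illustrative names I made up, not part of any API.

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

// The remotable object the first instance exposes. A later instance calls
// PassMission(...) to hand over its command line, then returns from Main.
public class SingleInstanceService : MarshalByRefObject
{
    public void PassMission(string[] args)
    {
        // Accept the "new mission" and continue running with it.
        Console.WriteLine("Received from second instance: {0}",
            string.Join(" ", args));
    }
}

static class Program
{
    const string PortName = "MyApp.SingleInstance"; // assumed app-unique IPC port name
    const string ObjectUri = "service";

    static void Main(string[] args)
    {
        try
        {
            // Step 1: try to connect to the remoting service as a client.
            ChannelServices.RegisterChannel(new IpcClientChannel(), false);
            var service = (SingleInstanceService)Activator.GetObject(
                typeof(SingleInstanceService),
                "ipc://" + PortName + "/" + ObjectUri);
            service.PassMission(args); // throws if no server is listening
            return; // second instance: terminate itself by returning from Main
        }
        catch (RemotingException)
        {
            // Step 2: no server is listening, so this is the only instance;
            // start playing the role of the remoting service.
        }
        ChannelServices.RegisterChannel(new IpcServerChannel(PortName), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(SingleInstanceService), ObjectUri,
            WellKnownObjectMode.Singleton);
        // ...continue with the application's normal work or message loop...
    }
}
```

Note that the connection failure surfaces on the first actual call (PassMission), not on Activator.GetObject, which only builds a proxy.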
This way, there is no need to terminate any process, which is never a nice thing. However, if you are stubborn enough to reject this idea (which has actually proven very productive in practice), you can use System.Diagnostics.Process.Kill.
But listen, why would you want all that? Anyway, to create an instance of the Process class for an external process, you need to know its id in order to call the factory method Process.GetProcessById: http://msdn.microsoft.com/en-us/library/76fkb36k.aspx.
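A minimal sketch of that kill-from-outside route, assuming you have somehow obtained the other instance's process id (for example, through the same inter-process communication); the method name TerminateById is mine, not a library API:

```csharp
using System;
using System.Diagnostics;

static class Killer
{
    // Forcefully terminates the process with the given id.
    // This is exactly the careless termination argued against above.
    static void TerminateById(int id)
    {
        try
        {
            using (Process other = Process.GetProcessById(id))
            {
                other.Kill();            // abrupt termination: data loss is possible
                other.WaitForExit(3000); // give it up to 3 seconds to actually exit
            }
        }
        catch (ArgumentException)
        {
            // No running process has this id.
        }
    }
}
```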
Better not even think about getting all processes by name (with Process.GetProcessesByName(string)). How would you guarantee that the name is unique? So you will need some communication between the instances of your processes anyway. It could be, say, shared memory, or something else, such as the same remoting. But if you already have communication between your processes, why would you use it for killing, when you can simply use my schema, in which the new process terminates itself (instead of terminating the older one) simply by returning from the entry-point method? Whatever you do, that approach will always be better.
—SA