|
No. I'm using Win2k, and all requests are being sent to valid IP addresses.
|
Well, it doesn't matter whether the address is valid or invalid. But anyway, you aren't using XP.
Because, as you said, it works perfectly when run locally, I would say there is a problem in the IP stack, either on your machine or on the peer. If you open hundreds of connections simultaneously, you may have hit some SYN flood protection on the peer side or something like that. A tool like TCPView from Sysinternals can help reveal what's going on.
Anyway, what's the ping time between the machines?
|
I am trying to build a Java launcher for a Java product. Executing a Java application with a batch file looks awkward. I want to write code so that a native application written in VC++ can launch the Java application. The batch file of the application looks like this:
---------------------------------------------------------------------------------------------------
set path=%PATH%;.\java\j2re1.4.2_07\bin;.\java\j2re1.4.2_07\lib;.\JMF2.1.1e\bin;
set classpath=.\JMF2.1.1e\lib\sound.jar;.\JMF2.1.1e\lib\jmf.jar;
java -classpath Sample.jar;%CLASSPATH%;%JMFHOME% -Djava.library.path=./Samplelib NrthSample.SampleMainApplication
---------------------------------------------------------------------------------------------------
Is there any way to launch the Java application straight from the native application, without the batch file? Can someone tell me which functions to use?
Regards.
|
Why don't you just ShellExecute java.exe with all the environment variables specified as command line arguments?
Regards
Senthil
_____________________________
My Blog | My Articles | WinMacro
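A minimal sketch of that approach: extend PATH first with SetEnvironmentVariable (child processes inherit the launcher's environment), then ShellExecute java.exe with the classpath and main class on the command line. The relative paths and class name are taken from the batch file above; adjust them to your actual layout.

```c
#include <windows.h>
#include <string.h>

int main(void)
{
    /* Extend PATH so the bundled JRE and JMF DLLs are found.
       These relative paths come from the batch file; adjust as needed. */
    wchar_t path[8192] = L"";
    GetEnvironmentVariableW(L"PATH", path, 8192);
    wcscat(path, L";.\\java\\j2re1.4.2_07\\bin;.\\java\\j2re1.4.2_07\\lib;.\\JMF2.1.1e\\bin");
    SetEnvironmentVariableW(L"PATH", path);

    /* Pass the classpath and main class directly on the command line,
       so no CLASSPATH environment variable is needed. */
    HINSTANCE h = ShellExecuteW(NULL, L"open",
        L".\\java\\j2re1.4.2_07\\bin\\java.exe",
        L"-classpath Sample.jar;.\\JMF2.1.1e\\lib\\sound.jar;.\\JMF2.1.1e\\lib\\jmf.jar "
        L"-Djava.library.path=./Samplelib NrthSample.SampleMainApplication",
        NULL, SW_SHOWNORMAL);

    /* ShellExecute returns a value greater than 32 on success. */
    return ((INT_PTR)h > 32) ? 0 : 1;
}
```

In practice you may prefer javaw.exe (no console window pops up), or CreateProcess if the launcher needs to wait on the Java process or read its exit code.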
|
Is there any restriction on the amount of heap memory that can be used by a program? Does Windows put a restriction on that?
"faith, hope, love remain, these three.....; but the greatest of these is love" -1 Corinthians 13:13
|
I think it is about 4GB, the reason being that we are using a 32-bit system: a 32-bit pointer cannot address more than 4GB, so the amount of memory used by an application cannot exceed 4GB.
Cheers...
|
Well, actually, Windows reserves 2GB for internal use (other DLLs and such). Subtract from the remaining 2GB the amount required by your program (total stack size of all your threads, etc.), and what is left can be used as the heap.
But if you really need to extend the amount of heap memory, you can easily write yourself a heap manager and a pointer wrapper for a file, or whatever else simulates 'real' memory. And there you go: per instance of the class, you have 4GB of extra heap. (Maybe I'll write an article about it one day.)
I also got the blogging virus..[^]
|
Hi Bob, I would be very interested to read your article on this. Would you mind letting me know when you have published your article on this area? My email address is ryu_thomas@hotmail.com.
Thank you...
|
No, I use a different technique, which creates files that are used as a heap. This allows me to allocate virtually unlimited memory! Confused? Let me elaborate:
I have a class which uses a file that you can allocate data in. You don't get an ordinary file pointer, or just an address; you get a struct which has a heap ID (identifying the object that handles the file) and a block ID (an ID from the index). This allows me to move all the memory I want without affecting anything I don't want.
How do I access the memory like anything else, you ask? Well, I use a heap manager that manages the heap objects (the files) and a cache. When you request a certain data block, the heap manager loads the block into the cache and you get a pointer to it. When you write to the address, the block is marked as dirty, and when the cache is cleaned up, it's written back to the file.
I've almost finished the design of the little library, and if you watch my blog once in a while, you should notice when I post the first of three articles. Maybe you'll find them interesting.
Behind every great black man...
... is the police. - Conspiracy brother
Blog[^]
|
No? It is 2 gigabytes of address space for the kernel and 2 gigabytes of address space for user mode, by default.
I would recommend a much simpler and faster mechanism for a dynamic heap, such as a memory-mapped file implementation.
8bc7c0ec02c0e404c0cc0680f7018827ebee
|
I should probably elaborate on this a little bit.
Everything in Windows is a memory-mapped file. Your executables, your DLLs: everything on the system is actually treated as a memory-mapped file.
These give you two distinct advantages.
1. The memory manager tracks dirty pages for you, so you spend no effort tracking dirty regions yourself.
2. The memory manager does NOT discard a memory-mapped file just because you unmapped it! It will actually try to keep it in memory as long as it can. You can unmap a file, come back hours later, re-map it, and it may never have been re-read from disk.
These are neat optimizations, and they are provided to you for free by the operating system, so I would definitely try to take advantage of them.
|
Yes, but the addresses of this file are within your address space, so basically you consume your own 2GB of address space. My technique bypasses this problem by creating its own 'heap'. The key to the technique is that the memory doesn't get allocated on the native heap (in your 2GB address space), but somewhere on disk, which is not limited by the address space given to your process. Using sophisticated 'heap' pointers, I can move these blocks from the file to the cache, which is in the native heap, and vice versa. I could use memory-mapped files as the cache, but that would not be very efficient, because I would be moving the memory blocks from one file to another. I could also use multiple memory-mapped files and map/unmap them on demand, but that would require me to update all the pointers that point into the mapped file, which is not very feasible...
So basically my technique is a workaround for the 2GB address-space limitation.
|
Yes, that is the point: you can use memory-mapped files to access more than 2 gigabytes of address space without the overhead of a copy.
Your cache is not on disk; you said it yourself, it is in the native heap. The point is that you do not want to copy memory around, so you directly access and use the memory-mapped file, which is very flexible.
So in the case of memory-mapped files you do not need a "cache", since that is all handled for you.
Mapped files are very flexible in that you can create a single file and then map any portion of it into memory at once, creating a view.
So what you are trying to create is actually already done by memory-mapped files.
|
Take this very simple (and ugly) example; as long as you have the disk space, it will create a memory-mapped file of about 65 gigabytes. Since views can only be mapped on allocation-granularity boundaries, a thin wrapper library could make this easier to use, but one large file can back multiple mapped views while the OS takes care of caching and writing out dirty regions.
#include <windows.h>
#include <stdio.h>

int _cdecl main(void)
{
    HANDLE hFileMapping, hFile;
    PVOID pLowMemoryMap, pHighMemoryMap;

    /* OPEN_ALWAYS creates the backing file if it does not exist yet. */
    hFile = CreateFileW(L"C:\\temp.sys", GENERIC_READ | GENERIC_WRITE,
                        0, NULL, OPEN_ALWAYS, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    /* Maximum size high/low DWORDs give 0xF0000FFFF bytes (~65 GB);
       creating the mapping extends the file to this size. */
    hFileMapping = CreateFileMappingW(hFile, NULL, PAGE_READWRITE, 0xf, 0xffff, NULL);
    if (hFileMapping == NULL)
        return 1;

    /* Map two 1 MB views: one at file offset 0xF000000, one at offset 0.
       View offsets must be multiples of the allocation granularity (64 KB). */
    pHighMemoryMap = MapViewOfFile(hFileMapping, FILE_MAP_WRITE, 0, 0xF000000, 1024*1024);
    pLowMemoryMap  = MapViewOfFile(hFileMapping, FILE_MAP_WRITE, 0, 0, 1024*1024);

    *((DWORD *)pLowMemoryMap)  = 0x1234;
    *((DWORD *)pHighMemoryMap) = 0x5678;

    UnmapViewOfFile(pLowMemoryMap);
    UnmapViewOfFile(pHighMemoryMap);

    /* Remap the same regions; the values written above are still there. */
    pHighMemoryMap = MapViewOfFile(hFileMapping, FILE_MAP_WRITE, 0, 0xF000000, 1024*1024);
    pLowMemoryMap  = MapViewOfFile(hFileMapping, FILE_MAP_WRITE, 0, 0, 1024*1024);
    printf("0x%x, 0x%x\n", *((DWORD *)pLowMemoryMap), *((DWORD *)pHighMemoryMap));

    UnmapViewOfFile(pLowMemoryMap);
    UnmapViewOfFile(pHighMemoryMap);
    CloseHandle(hFileMapping);
    CloseHandle(hFile);
    return 0;
}
|
Toby Opferman wrote:
Yes, that is the point: you can use memory-mapped files to access more than 2 gigabytes of address space without the overhead of a copy.
True, as long as you have the address space. But this approach requires a whole lot of different and more difficult things to be managed, since MMFs are not designed for my purpose. They are designed for IPC and for efficiently processing large files of records, not for completely unpredictable random access.
Besides, the copying of the memory is still done, by Windows, only at a lower level.
What I'm trying to accomplish is indeed basically a memory-mapped file, only at a smaller scale than the Windows MMFs. I don't know yet if I should use MMFs for this purpose, but I'll certainly look into it.
If you want, I can notify you when my article is ready, and you can see what I mean.
Thanks for the suggestion; I can make a decision after doing some research.
|
You do not need 65 gigabytes of address space to open a 65-gigabyte memory-mapped file. You can map any portion of the file into your process for viewing. The location of the data in the file is not directly related to the address that will be used in the program. You can map the last 4KB of a 65GB file and have it appear at a low memory address!
"The flat-file database application example is useful in pointing out another advantage of using memory-mapped files. MMFs provide a mechanism to map portions of a file into memory as needed. This means that applications now have a way of getting to a small segment of data in an extremely large file without having to read the entire file into memory first. Using the above example of a large flat-file database, consider a database file housing 1,000,000 records of 125 bytes each. The file size necessary to store this database would be 1,000,000 * 125 = 125,000,000 bytes. To read a file that large would require an extremely large amount of memory. With MMFs, the entire file can be opened (but at this point no memory is required for reading the file) and a view (portion) of the file can be mapped to a range of addresses. Then, as mentioned above, each page in the view is read into memory only when addresses within the page are accessed."
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dngenlib/html/msdn_manamemo.asp[^]
|
Hello,
I don't think you completely understand what I want. To leverage all that MMFs have to offer, you need to know when certain data is going to be accessed. That way, you can traverse large files using views of only parts of the file.
I want a very large heap that allows you to allocate classes and PODs. Since the user will be using this as a heap, you never know when certain data will be accessed. So every assumption would be wrong, and the data will almost never be accessed sequentially; it will be completely random.
At this point I'll keep both options open; I might implement different allocation strategies, run some benchmarks, and describe the results in part three of the articles. But that's it for the moment.
|
I completely understand what you are trying to accomplish; however, you seem to underestimate what MMFs were designed for. They were designed for completely random access! They were designed to access large files! They are also integrated into the memory manager, so you get great flexibility and optimization!
It's fine if you want some type of front end to make it seem as if all the memory is available at the same time; however, the back end should definitely be using memory-mapped files, as that is what you are really doing on the back end anyway, except in a very limited and unoptimized way.
|
If you run benchmark tests and find that copying a file from disk into memory all the time is faster than on-demand memory-mapped files, I would send the code to Microsoft's Base OS team!
|
I'm not trying to be a prick; I just want to educate you on the operating system and what it has so you can take full advantage now and in the future.
A memory-mapped file has the advantage that the file itself can act as the "page file" for those memory locations! This is great, because if you just read a file from disk into memory, you're actually putting it into two locations on the disk, and this could be happening concurrently.
There is something called a "working set" for any particular process: the number of pages the process can have in physical memory at the same time. This means that as you copy pages, you could be swapping other pages back out if you're crossing 4KB boundaries. You are constantly dirtying your cache, and when a page is deemed dirty, the OS may be copying that memory to the page file while you are also copying it to your own file. So you are doing double paging! Memory-mapped files can back their own pages, so no data has to be copied to the page file.
If you use a memory-mapped file, mapping the view doesn't mean any data is read into memory; a page is pulled in only once you access a memory location within it. So if your user says "allocate 10MB", you would copy 10MB into a memory location. If he then changes one byte and hits "done", you would copy the 10MB back from that location.
The memory-mapped file would probably not put 10MB into physical RAM. It would put only the page containing the changed bytes into memory and then swap it back directly into the file. This is much faster.
The second optimization, which you would have to do yourself, is keeping pages of memory in RAM even after you've been told to swap them out. As an example, your user says "allocate 5MB" and you do, but now you have to swap that out for a different 5MB. In your design, you copy the first 5MB out and copy the new 5MB into the same location. With a memory-mapped file, just because the original 5MB is no longer viewable from your process doesn't mean it's not still in memory. The operating system itself will try to keep pages in memory even if they aren't being used. This is great, because if you then remap that 5MB, the OS simply needs to update your virtual addresses. In your design, you'd have to recopy all the memory into the process from disk every time.
Memory-mapped files were designed for optimal flexibility, and you'd be surprised how many implementations use them. You can imagine they would have to be flexible, since all files, executables, and DLLs on the system, in every process, are just memory-mapped files!
If you look at the article I posted, you will notice the date is 1993. Over 10 years of optimizations to the memory-mapped file implementation in Windows are also to your advantage.
I wouldn't underestimate it by thinking it was just implemented by some junior developer! Memory-mapped files are available on other operating systems too. The mechanism is very flexible, highly optimized, and implemented for this specific reason.
|
Toby Opferman wrote:
I'm not trying to be a prick; I just want to educate you on the operating system and what it has so you can take full advantage now and in the future.
I know that you are not trying to piss me off, and I thank you for your efforts in trying to educate me on this subject, since I know very little about it.
Toby Opferman wrote:
If you look at the article I posted, you will notice the date is 1993. Over 10 years of optimizations to the memory-mapped file implementation in Windows are also to your advantage.
I just printed a few articles on memory-mapped files (including the one you posted) and I will read them thoroughly during my long train ride back home.
After reading a little about the topic, I was convinced that I need more flexibility in my design. I also want to keep other options open, so I put the cache at a lower level in the design; I can now easily use different implementations of the core, even together. So for now I'll just go ahead and start implementing the thing and wait until the benchmark tests are meaningful.
Toby Opferman wrote:
They were designed for completely random access! They were designed to access large files!
I doubt this, since in all the articles I have read about MMFs, the authors keep emphasizing that they are the best solution for accessing large sequential files and for IPC.
Toby Opferman wrote:
I wouldn't underestimate it by thinking it was just implemented by some junior developer! Memory-mapped files are available on other operating systems too. The mechanism is very flexible, highly optimized, and implemented for this specific reason.
I know that MMFs have a relatively long history compared to other pieces of software, but I just want to keep some options open for now. Besides that, skeptic that I am, I'll have to see hard numbers to be completely convinced. Having those numbers will have another advantage: it will convince people who would have doubts about the solution in the first place.
|
Toby Opferman wrote:
They were designed for completely random access! They were designed to access large files!
I doubt this, since in all the articles I have read about MMFs, the authors keep emphasizing that they are the best solution for accessing large sequential files and for IPC.
Just because authors don't mention this, or don't know about it, doesn't make it untrue. Think about it logically: why would you be required to access the file sequentially? Is there a limitation that would prevent random access?
Remember, there is no physical link between the file stored on disk and RAM; there is only a logical link, created in software. The hard disk itself can be accessed quite randomly: you simply move the read/write head to the cylinder and sector you want to read. Think about the paging file for virtual memory itself; it's stored on disk as pagefile.sys. Would that be accessed sequentially?
Generally, the authors aren't talking about simulating more than 2 gigabytes of memory, and so MMFs are more commonly presented as shared memory. Most general applications, aside from databases, don't need this functionality.
How Windows NT Provides 4 Gigabytes of Memory[^]
MAPPED FILE I/O
If an application attempts to load a file that is larger than both the system RAM and the paging file combined, the mapped file I/O services of the virtual memory manager are used. Mapped file I/O enables the virtual memory manager to map virtual memory addresses to a large file, inform the application that the file is available, and then load only the pieces of the file that the application actually intends to use. Because only portions of the large file are loaded into memory (RAM or page file), this greatly decreases file load time and system resource drainage. This is a very useful service for database applications that often require access to huge files.
There are a variety of applications that use MMFs, on systems that support them, for exactly this reason; here's an example:
MatLab - Accessing Files with Memory-Mapping
[^]
When Memory Mapping Is Most Useful. Memory-mapping works best with binary files, and in the following scenarios:
For large files that you want to access randomly one or more times.
|
OK, you convinced me! But I just don't understand one thing: why wouldn't the authors of those articles mention the purpose MMFs were designed for?
Anyway, I'll implement this MMF technique as the main technology for my very large heap. So my heap will basically be a thick wrapper around MMFs.
I'll implement the other technique as well, just for the benchmarks and for the learning experience. The benchmarks will also support the use of MMFs, and maybe they will convince other users as well.
I guess I've found someone to acknowledge for his expert insight into the issue.
|