<b>Please help: I'm having a problem understanding how the transfer of data from disk to memory is conducted under programmed I/O. Can anyone explain, with examples, the steps involved in this type of data transfer? Thanks.</b>
Suppose a reference type is stored at some memory location and its data keeps growing, for example a StringBuilder whose string is appended to in a loop.
How does the memory manager handle the scenario where the object no longer has contiguous memory available at that location?
It would be great if anyone could explain this in detail.
It just gets moved to a block of memory that is large enough to hold the expanded object. This is why you should always try to estimate how big your objects are likely to grow and reserve enough space up front, to minimise these operations.
Beyond Richard's answer, note that you can run into an OutOfMemoryException when no contiguous block of the desired size is available, i.e. when the memory is fragmented. Also, the desired size is in multiples of 64k.
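The same grow-and-move behaviour is easy to observe in Java's StringBuilder (the mechanics mirror the .NET one described in the question). This sketch watches the backing capacity change as characters are appended, then shows how pre-reserving the estimated final size avoids the reallocations Richard mentions. The exact growth sequence is an implementation detail of the runtime; only the "default capacity 16, grows by at least doubling" part is documented.

```java
public class CapacityDemo {
    public static void main(String[] args) {
        // A default StringBuilder starts with room for 16 chars (documented).
        StringBuilder grown = new StringBuilder();
        int last = grown.capacity();
        for (int i = 0; i < 100; i++) {
            grown.append('x');
            if (grown.capacity() != last) {
                // A larger block was allocated and the contents copied over.
                System.out.println("reallocated: " + last + " -> " + grown.capacity());
                last = grown.capacity();
            }
        }

        // Reserving the estimated final size up front avoids every one
        // of those copy-and-move operations.
        StringBuilder reserved = new StringBuilder(100);
        for (int i = 0; i < 100; i++) {
            reserved.append('x');
        }
        System.out.println("reserved capacity unchanged: " + (reserved.capacity() == 100));
    }
}
```

Appending 100 characters to the default builder triggers several reallocations; the pre-sized builder triggers none.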
1. Gather actual requirements. For example exactly define what 'page' is, what 'navigate' is and what 'another one' is.
2. Learn to program in a language like C# or Java.
3. Learn how to design.
4. Create a design using 1 and 3.
5. Implement the design using 2.
6. Test it.
Once you are actually working on one of the above stages, ask questions about that stage by posting information about what you have done and why it isn't working.
http://msdn.microsoft.com/en-us/magazine/hh547108.aspx An aggregate root is an entity obtained by composing other entities together. Objects in the aggregate root have no relevance outside it, meaning that no use cases exist in which they're used without being passed from the root object.
How is this possible when implementing a root in VB.NET? I would require an entity called OrderItems, as a property of Order, to be publicly visible in order for the persistence framework to do its thing.
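One common answer to this tension is to make the child collection publicly *readable* but not publicly *mutable*. Here is a minimal sketch in Java (all names are invented for illustration; the idea carries over to VB.NET directly): OrderItem is only reachable through its Order root, mutation goes through the root, and outsiders, including a persistence framework that only needs to enumerate, get a read-only view.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical aggregate: OrderItem has no meaning outside its Order.
class OrderItem {
    final String product;
    final int quantity;
    OrderItem(String product, int quantity) {
        this.product = product;
        this.quantity = quantity;
    }
}

class Order {
    private final List<OrderItem> items = new ArrayList<>();

    // All mutation goes through the root, so invariants live in one place.
    public void addItem(String product, int quantity) {
        if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
        items.add(new OrderItem(product, quantity));
    }

    // Publicly visible, but unmodifiable: readable for persistence,
    // not editable from outside the aggregate.
    public List<OrderItem> getItems() {
        return Collections.unmodifiableList(items);
    }
}
```

In VB.NET the equivalent shape would be a Public ReadOnly property returning a ReadOnlyCollection(Of OrderItem) that wraps the private list.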
Some considerations which might or might not matter.
I am rather certain that you can't combine WCF, MSMQ and MSMQ transactions. You can investigate yourself, but I believe there is a fundamental problem in terms of receiving them: messages can be sent OK (with a transaction), but when receiving them there is no assurance that the transaction is preserved. (I can't remember how I came to this conclusion.)
MSMQ has a hard 4 MB limit per message. Between overhead and Unicode (not optional), that can reduce the maximum message size to less than 2 MB. Microsoft technical docs state they have no intention of changing this limit.
Despite claims to the contrary, WCF/MSMQ does not support a streaming operation. So the message must always be less than 2 MB.
MSMQ uses magic routing, which is great when it works (when you get all the ports/permissions right) but extremely difficult to figure out otherwise. Because of that, messages can take a long time to arrive at the target, for example hours.
MSMQ, at least on OSes before 2008 (and maybe that too), uses by default a single file with a 2 GB limit for persistence. MSMQ will crash if that file fills, and there is no way to monitor problems via the MSMQ API (maybe there is something in WMI).
Most of the standard MSMQ 'queues' use Active Directory. If Active Directory has problems, then MSMQ will have problems. If you use options to exclude Active Directory, then you CANNOT ensure transactions are in use. See the next note.
If one queue uses transactions and the other end doesn't, messages just disappear.
MSMQ failures can result in MSMQ exceptions which carry an enum indicating the type of error. The problem, however, is that the API can end up returning a value, via that enum, which is not a valid enum member, basically violating the contract of the method.
MSMQ queue permissions and application permissions must match, which would seem obvious. However, if they don't match, you get unhelpful errors, such as being told that the queue doesn't exist (even though it does).
MSMQ is supposed to support multiple clients consuming from a single queue. However someone I know (but not me) ran a test that suggested performance was significantly degraded in such a scenario when transactions were in use. Developing around this is complicated.
I can state that I built message streaming using chunking to break a larger message (bigger than 4 MB) into pieces. It was extremely difficult, especially since I was attempting to support multiple clients (see the limitation above).
MSMQ is tied to the OS. This of course means that if you want to upgrade from, say, MSMQ 3.0 to 4.0, then you must upgrade the OS.
I have gone a considerable way down the WCF/MSMQ route (see my articles/posts) but find its awkwardness exasperating. Transactions were presented as a 'done deal' both in the documentation I read and in the responses to my queries. See Mohamad Halabai's work on this site and elsewhere.
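The chunking idea mentioned above can be sketched independently of MSMQ's own API (which is not reproduced here): split the payload into pieces that each fit under the per-message cap, send each piece as its own message, and reassemble on the receiving side. This Java sketch shows only the framing; the genuinely hard parts called out above (ordering under multiple consumers, transactions) are not addressed.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Split a payload into chunks no larger than `limit`, then rebuild it.
// In a real system each chunk would also carry a message id, sequence
// number and total count so the receiver can reassemble out-of-order
// or interleaved chunks.
class Chunker {
    static List<byte[]> split(byte[] payload, int limit) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += limit) {
            chunks.add(Arrays.copyOfRange(payload, off, Math.min(off + limit, payload.length)));
        }
        return chunks;
    }

    static byte[] join(List<byte[]> chunks) {
        int total = chunks.stream().mapToInt(c -> c.length).sum();
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, off, c.length);
            off += c.length;
        }
        return out;
    }
}
```

With the real 4 MB cap you would pick a limit comfortably below it (the post above suggests under 2 MB once overhead and Unicode are accounted for).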
The problem with MSMQ is that the payload size is limited to 4 MB.
We actually created a framework that can transport jobs to multiple machines using:
1. MSMQ (only as an event trigger)
2. WCF + MS SQL (for managing the job distribution, transaction-based)
3. and a TCP component for async transport of the payload.
4. A base class that you inherit in each (Windows service) application you build to process a job. This class gets the payload from, and returns it to, the WCF service, which stores or retrieves the data in the database.
Which server a job is processed on, and which process handles it, is controlled by a simple XML flowlist. This flowlist can be requested from a WCF service.
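The post doesn't show the flowlist's schema, but such a file might plausibly look like the fragment below. Every element and attribute name here is invented for illustration; only the idea, an ordered mapping of job steps to servers and handler processes, comes from the description above.

```xml
<!-- Hypothetical flowlist: all names invented, schema not from the post. -->
<flowlist job="ResizeImages">
  <step order="1" server="WORKER01" process="ImageFetchService" />
  <step order="2" server="WORKER02" process="ImageResizeService" />
  <step order="3" server="ARCHIVE01" process="ArchiveService" />
</flowlist>
```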
I'm making a reportage website and I want the editor/reporter to have the ability to edit his reportage with a rich text editor; I have more or less found my rich text editor. My question is: how does it work, and how do I set it up on my website? I use ASP.NET.
Every bit of help will be appreciated.
I have an IT-architectural problem and probably not enough experience to make an objective, criteria-based decision. The problem is the following:
I have an ASP.NET web monitoring application which gathers, processes and shows information to connected users in near real time. This information also has to be processed by an AI-based expert system, and JBoss Drools was selected as the most adequate engine for this task. The selected version of this expert system engine is the Java version, while the body of the main application runs on C# (selecting the C# Drools version is not a feasible option). So I need to find the best way to connect both parts of the application.
1. The first option is to connect both processes through web services. That is, to deploy the Java part of the application as a Java web service, in order to add elements to the knowledge memory of the expert system and retrieve the "results" (alarms or warnings). Something like publishing a method such as:
public newAlarmsOrWarnings addFacts(factsToAddToTheKnowledgeBase fa, factsToRetrieveFromTheKnowledgeBase fr);
which should be called by the main C# /ASP.NET process.
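Fleshed out, option 1 could look something like the following Java sketch. Everything here except the addFacts idea is an invented placeholder (Fact, AlarmReport, the stub rule); in a real deployment the interface would be published as a web service (e.g. via JAX-WS) with the Drools session behind it, and called from the C#/ASP.NET side over SOAP or HTTP.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical fact wrapper: a named value to insert into working memory.
class Fact {
    final String name;
    final Object value;
    Fact(String name, Object value) { this.name = name; this.value = value; }
}

// Hypothetical result type: the alarms/warnings the rules produced.
class AlarmReport {
    final List<String> alarms = new ArrayList<>();
}

// The service facade the C# side would call over the web service boundary.
interface ExpertSystemService {
    AlarmReport addFacts(List<Fact> factsToAdd, List<String> factsToRetrieve);
}

// Trivial stand-in for the Drools-backed implementation: fires a single
// hard-coded "rule" so the shape of the round trip is visible.
class StubExpertSystem implements ExpertSystemService {
    public AlarmReport addFacts(List<Fact> factsToAdd, List<String> factsToRetrieve) {
        AlarmReport report = new AlarmReport();
        for (Fact f : factsToAdd) {
            if (f.name.equals("temperature") && ((Number) f.value).doubleValue() > 90.0) {
                report.alarms.add("temperature too high: " + f.value);
            }
        }
        return report;
    }
}
```

The main cost of this option is an extra network hop per update, but it keeps the database owned by a single writer, which addresses the concern raised in option 2.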
2. To maintain a record of changes in the database and regularly (e.g. every 5 seconds) poll this table looking for updates. The problem is that this way I will have two separate accesses to the database from two very independent sources, which is not a pleasant situation since the application will be maintained and expanded in the future, and it implies more work, a higher chance of problems, etc.
3. Other "mix" solutions.
Any suggestion will be very welcome; thanks in advance.
PS: English is not my native language, so please take that into account when you discover any typing errors.