|
That's why there's a warning in the Stack Overflow thread; it is not an easy task - hooking isn't, and writing a filter isn't either.
Look at it this way: there'll be few developers who can say they've tried something similar, and there'd be quite a few people waiting for an article on "how" you did it.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I look at it the way you suggest and drop that idea. Thank you though, saved me a lot of time and coffee.
|
|
|
|
|
I have developed some code in C++ which creates a bitmap, manipulates it with Direct2D, encodes it as a JPEG and returns the image as a stream. In a .NET MVC project I am importing the C++ function using Platform Invoke.
Calling the C++ function from within the C# .NET MVC app works fine and an image is received as a stream. However, currently the C++ code starts up when it is called and shuts down when it returns. I would like the C++ code to maintain state which persists from one call to the next, and the next after that, etc.
How can I achieve this on Windows? I have been investigating Windows Services, but I have not seen any examples of returning data to the calling code (the service control program). Is this possible with a Windows Service? Could an image be returned from a call to a Windows Service as a stream?
Or am I looking in the wrong direction? Is there a more natural way to develop a C++ program which can be called from C#, returns a stream and maintains state? Could this even be achieved with Platform Invoke?
Kind Regards,
Duncan.
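One thing worth noting before reaching for a Windows Service: a native DLL loaded through P/Invoke stays loaded for the lifetime of the hosting process, so ordinary global or static variables in the C++ code already persist from one call to the next. Below is a minimal sketch of that idea; every name in it (`RenderImage`, `GetCallCount`, `g_callCount`) is invented for illustration, and the real Direct2D/JPEG work is elided.

```cpp
// Sketch only: globals in a native library live as long as the library
// stays loaded in the host process. A DLL pulled in via [DllImport]
// is loaded once per process, so state here survives between calls.
#if defined(_WIN32)
  #define EXPORT extern "C" __declspec(dllexport)
#else
  #define EXPORT extern "C"
#endif

namespace {
    int g_callCount = 0;   // persists across P/Invoke calls
}

// Hypothetical stand-in for the real bitmap/Direct2D/JPEG routine.
EXPORT int RenderImage() {
    ++g_callCount;         // mutate state; it is still there on the next call
    // ... create bitmap, manipulate with Direct2D, encode JPEG ...
    return g_callCount;
}

EXPORT int GetCallCount() { return g_callCount; }
```

The caveat in an IIS-hosted MVC app is that the worker process can be recycled, at which point any in-process state is lost - so this only works if losing the state on recycle is acceptable.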
|
|
|
|
|
If you want to maintain state you can do it in one of two ways.
If your application needs to regularly perform tasks whether a method is called or not, a Windows Service is the correct approach and you can communicate with it using some form of IPC. Microsoft gives an overview of various .NET IPC techniques at:
Interprocess Communications (Windows)[^]
The other mechanism you can use is a persistence framework of some sort, e.g. a database connection or a file. The easiest route would be to create a StateTracker object that serializes to a specific file: load that file when your application is called and modify it as appropriate.
Actually, there's a third pattern you could follow: have your MVC application track state and pass it to the class library when it calls methods. Bear in mind that IIS manages memory for you, and so may clear out any cache that you define, but this way you can leverage the same database used by your MVC app to store a StateTracker.
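The serialize-to-a-file route can be sketched roughly as follows. This is only an illustration: the `StateTracker` fields, file path and plain-text format are all invented, and it is shown in C++ for brevity; in a .NET class library the same pattern would use one of the framework's serializers instead.

```cpp
// Minimal sketch of "load state file on entry, save it on exit".
#include <fstream>
#include <string>

struct StateTracker {
    int requestCount = 0;   // example field: how many calls so far
    std::string lastUser;   // example field: who called last
};

// Load previous state if the file exists, otherwise start fresh.
StateTracker loadState(const std::string& path) {
    StateTracker s;
    std::ifstream in(path);
    if (in) {
        in >> s.requestCount >> s.lastUser;
    }
    return s;
}

// Persist the current state so the next invocation can pick it up.
void saveState(const std::string& path, const StateTracker& s) {
    std::ofstream out(path, std::ios::trunc);
    out << s.requestCount << ' ' << s.lastUser << '\n';
}
```

The trade-off versus in-process state is durability: this survives process recycles, at the cost of file I/O on every call.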
|
|
|
|
|
Member 12424215 wrote: returns the image as a stream...which persists from one time it is called to the next and the next after that
Why? If it hands you a stream, what do you think it is going to do in the future? It's not like it can take the stream back.
Perhaps you want it to cache the stream and then update it if there are changes? So that the next time it is invoked, the changed or unchanged stream is returned?
But if so, how often is this called, and how often does it change? And how big is it in the first place?
Those questions are relevant because caching is done for performance reasons, which you should only pursue if you have identified an actual performance problem. The answers to the above would be relevant to that.
|
|
|
|
|
Hello Folks,
Recently I came across a situation where an application does a lot of processing and stores data inside the application domain itself, without using a cache or database for it. Obviously in the end everything goes into the DB, but for some time it stays in the application domain, and all the objects and lists of objects created reside in application memory. These objects and lists are being updated frequently as well.
Although it has not caused a problem so far, the general thinking is that we should be able to drive this application from multiple application servers, thus dividing the application load across multiple servers or processes.
What possible solutions would you suggest for this situation? Any help will be appreciated.
Thanks
|
|
|
|
|
I think you need to look closely at the application's design.
|
|
|
|
|
girishmeena wrote: Although it is not creating problem till now but it is general thinking that we should be able to drive this application from multiple application servers,
Sounds potentially like over-engineering.
Does the company have sales/growth goals or estimates which will directly impact the size of the data set? Do you have performance measurements that track the time and size of this processing, which you can correlate to those business numbers?
If you have neither, then perhaps it is time for the developers to ask sales/business for the first set. And if there are none, perhaps it is time for development to ask sales/business what they actually need to help them create more profit and/or reduce costs.
|
|
|
|
|
In most object-oriented programming languages (such as C#, Java and Visual Basic .NET), static signifies that the method or field does not exist on a particular instance in the application. Static methods/fields must only depend on their parameters and/or other static contexts, a single instance of which will exist throughout the application lifetime.
The following only for programming languages that compile libraries to object code and not native assembly code:
In executable assemblies, staticity can be inferred automatically using the following rules:
1. A method (or property) should be marked as static if and only if its body uses only its parameters and/or static fields, provided it does not override a base method/property.
2. A field should be marked as static if and only if it is never accessed through an instance.
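Rule 1 can be illustrated with a small invented example (shown here in C++, where `static` member functions behave analogously to C#'s):

```cpp
// Illustration of rule 1: a method whose body touches only its
// parameters (never `this` or instance fields) is a candidate for
// inferred static-ness. All names are made up for the example.
class Temperature {
    double celsius = 0.0;          // instance state
public:
    // Reads the instance field `celsius`, so it must stay an
    // instance method - rule 1 does not apply.
    double asFahrenheit() const { return celsius * 9.0 / 5.0 + 32.0; }

    // Uses only its parameter: a compiler applying rule 1 could mark
    // this static automatically; here it is done by hand.
    static double toFahrenheit(double c) { return c * 9.0 / 5.0 + 32.0; }
};
```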
There are advantages and disadvantages to the inference approach:
Advantages:
1. No need to specify static explicitly, which may simplify work when refactoring code.
2. Code noise is reduced.
3. Accidental omission of static for methods is catered for, speeding up method access (refer to Performance of using static methods vs instantiating the class containing the methods).
Disadvantages:
1. If the code is not structured clearly, it may be harder to understand. Proper code structure should be maintained, and method names should clearly convey whether the method is intended to be static or not.
2. Compiler errors and warnings would have to be clearer, showing exactly what caused the staticity inference conclusion to change, if it did.
3. For libraries, static should still be specified explicitly, in order to make sure that deviation from the original usage intention is not permitted unless stated otherwise.
If a library were to allow static to be inferred, the compiler would omit the specification in the object code, and the compiler/interpreter of the executable would have to infer it by analyzing the executable's code and the referenced assemblies, generating staticity specifications for any fields, methods or properties that lack them.
Static classes are really easy to infer: a class can be static if and only if it contains only static fields, methods and/or properties.
The question is: Why is automatic staticity inference not implemented in any popular language, not even optionally?
I think there are various situations that would benefit from such a feature.
Yours truly
|
|
|
|
|
MathuSum Mut wrote: The following only for programming languages that compile libraries to object code and not native assembly code:
That has nothing to do with this.
MathuSum Mut wrote: Why is automatic staticity inference not implemented in any popular language, not even optionally?
Just guessing of course.
First, the compiler must determine correctness anyway. So it must determine whether the class is static, and there is no point in throwing that information away.
Second, since the information exists at runtime, the system might as well use it. If the runtime environment supports verification, it might verify that the class is static; but if it does that, it has already derived that information, and thus the system should keep it rather than throwing it away (which would require re-deriving it later).
MathuSum Mut wrote: I think there are various situations that would benefit from such a feature.
Like?
Once the program is actually running, everything should be resolved anyway. Yes, I understand that dynamic resolution might defer resolution, but that just means it is in fact not running yet. After resolution it must be resolved, so there is no reason to need the inference then.
During resolution, however that is done, the information is only needed to correctly resolve calling semantics (again ignoring something like a verification stage). So if the information already existed, which it must for the compiler to correctly issue errors, it might as well be maintained. It isn't like the space used to keep the flag would be a problem.
|
|
|
|
|
The company I work for is looking at changing the architecture we work with. Right now we have one main application that does everything. We would like to change that into small applications that each do a smaller part of the work. These applications are in different languages (C#, Java, Erlang, Progress, ...). We are also looking at IoT, and we have some older applications written in BASIC. What I would like to know is whether anyone has any experience with ESBs, so all of these applications can communicate with each other. Our own services and applications are mainly written in C#.
|
|
|
|
|
I see a lot of buzz words and no actual business reasons.
Sounds like a developer read a single web article and decided that re-architecting the entire enterprise was the way to go.
Tom Wauters wrote: What i would like to know is if anyone has any experience in esb's
Certainly one would hope that the internet bloggers who push it, while also selling their own expertise as consultants doing exactly that, have actually done it. But your mileage may differ if you look to one of them (of course their bank account would appreciate it).
But other than that is there a question?
|
|
|
|
|
We are a company that does lots of different things. We have transporting, warehousing, some of the goods need to be repackaged and given different item numbers, and we also do customs clearance for our customers, etc. There aren't many full software packages out there that do it all. We see that at the moment with the software we are using right now. This is not a decision that is made on the spot. I am trying to get an overview of what the possibilities are before doing anything. Not even sure if it is going to be several packages. But thanks for the reply.
|
|
|
|
|
So then you start with a high level requirements view and architecture which must delve into some of the specific needs of disparate systems specifically addressing cross system usage.
After doing that then you might look into broad technologies to see which bests supports the model.
So design first followed by technology.
That of course is unfortunately mostly just rhetoric because large systems are complex. And that complexity means that solutions will never be clean. The design will not be clean and the implementation will not be clean.
Additionally, such a system will almost invariably fail to actually meet the needs of the business due to:
1. The complexity itself.
2. Failure to actually capture real business requirements.
3. Failure to actually reduce costs and/or increase profits.
One need only look at how large companies have managed to successfully fulfill their IT needs over time, which is by piecemeal improvement.
So it is perhaps much better to find a smaller piece that really needs to be improved and focus solely on that piece, while still striving to make it somewhat reasonable to use with future systems - but without over-engineering it in striving for the impossible goal of never modifying it at all in the future.
|
|
|
|
|
Follow up on my earlier posting here[^]...
Can I host my SignalR service on Azure? Anyone done this? Any pointers / thoughts??
If it's not broken, fix it until it is
|
|
|
|
|
|
Thanks.
I've been reading through that page. Not a bad site as far as learning SignalR goes. Just wanted to be sure there weren't any caveats I'm not seeing.
If it's not broken, fix it until it is
|
|
|
|
|
I am working towards developing a system that has some of its components in Microsoft Azure and the rest on premises. Some key components are the Azure Cloud Service, which has a web role (an MVC web application), and an Azure SQL DB as a local datastore. The core database systems (systems of record) are on premises. Certain transactions follow a specific flow: they need to be updated in the Azure SQL DB first, then asynchronously updated in the core DB on premises, and then this update, along with some other calculated values, flows back to the Azure SQL DB. This flow back to the Azure SQL DB is planned using Azure SQL Data Sync. Azure SQL Data Sync has a minimum delay of 5 minutes between consecutive updates. The web will always display its data from the Azure SQL DB.
In a scenario where a customer updates a field - say, name - two or more times in quick succession, the web application, since it is getting data from the Azure SQL DB, will display the last updated value. However, since the core databases are updated asynchronously and these updates flow back to the Azure SQL DB at a later point in time, the record last updated in the Azure SQL DB by the web can be replaced with values coming from the core database that were the result of the initial updates. This may cause some inconsistency in the user experience: the customer may see the latest updated "Name" as soon as he makes his final update, but this value will change (to one of the earlier updates) after the data sync, and only then will it settle on the last updated value. Any recommendations on the best way to implement such functionality?
Moreover, what would be the best option to sync an on-premises SQL Server and an Azure SQL DB, if not Azure SQL Data Sync?
|
|
|
|
|
|
In my opinion, you have a (design) problem in how you send updates from "core" to Azure.
Other than a few (new) "calculations", you're sending redundant updates; that is not how one "syncs".
Create a proper calculation transaction.
|
|
|
|
|
Rajeshjoseph wrote: In a scenario where a customer updates a field - say name - immediately two or more times, ....Any recommendations on the best way to implement such a functionality?
Best suggestion I know of - stop making up business cases.
When exactly, in the real world, is a user going to update their name twice from two different locations at the same time?
And if they do how are you going to determine which one is 'correct'?
Apply that same reasoning to any other concurrency issues that you might come up with.
Unless you can answer both questions definitively then the answer is last one wins. And it wins by default without you doing anything.
|
|
|
|
|
The architecture was proposed this way due to the following reasons:
1. There are multiple channels through which the core backend systems (systems of record) get updated - web, mobile app, etc. To have these updated records always reflected in the cloud, we bring the data from the backend to the cloud whenever a record gets updated. The web always reads its data from the cloud.
2. Since we are decoupling the core backend systems from the system of access for the web, we are building a data repository in the cloud for the web to access, which will not have direct communication with the core backend systems (asynchronous in nature).
3. Since the backend updates are asynchronous (the delay can vary from near real-time to 1 hour depending upon various factors), it may not be fair to show the customer a message like "your request will be processed later" when he/she updates profile information or similar (no customer would love to see his name change take effect after xx minutes). For this reason, the update is saved directly to the DB in the cloud and read from there by the web.
4. I was trying to work out a solution for a scenario where a customer updates his "First Name" twice within, say, 5 minutes. In this scenario, with the planned architecture, the second update will be displayed on the web as soon as it is done, while the first update is still in transit, making its way to the core database asynchronously. As I mentioned in step 1, the core backend systems will send the record back again when they are updated (the update can happen from multiple channels) and will update the cloud DB, which overwrites the second update for this record in the cloud. After a few minutes the second update also completes the round trip and reaches the cloud. But between the second update being saved in the cloud and its completing the round trip, there is a possibility that the customer sees the first update displayed for a few minutes.
Hope this explains!
|
|
|
|
|
You could try keeping timestamps with the updated values. When syncing from the core DB to Azure, check the timestamp and only update the field if the incoming timestamp is greater.
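That check amounts to "last writer wins" with a version column. A minimal sketch of the comparison, with invented field names and an assumed millisecond Unix timestamp:

```cpp
// Sketch of timestamp-based "last writer wins" for the sync step.
#include <cstdint>
#include <string>

struct VersionedField {
    std::string value;
    std::int64_t updatedAt = 0;   // e.g. Unix time in milliseconds
};

// Apply an incoming value from the core DB only if it is newer than
// what the cloud DB already holds; otherwise keep the local value.
bool applyIfNewer(VersionedField& local,
                  const std::string& incoming,
                  std::int64_t incomingAt) {
    if (incomingAt > local.updatedAt) {
        local.value = incoming;
        local.updatedAt = incomingAt;
        return true;   // incoming write won
    }
    return false;      // stale update from the sync is ignored
}
```

The assumption baked into this is that the clocks (or a logical version counter) of the web tier and the core systems are comparable; if they are not, a monotonic version number written alongside the row is the safer choice.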
|
|
|
|
|
Rajeshjoseph wrote: I was trying to work out a solution for a scenario where a customer updates his "First Name" twice in the interval of say 5 minutes - in this scenario - with the planned architecture - the second update will be displayed on the web as soon as it is done,
And how does that change what I said?
What is the exact business scenario where the user is ever going to update their name in 5 minutes?
And given that there is in fact exactly that scenario how are you going to use technology to determine that the 'second' one is right and the first one is wrong?
Let me show how contrived this is with the following business scenario
- You have a single user who is gender challenged.
- That person has two cell phones and they are on the train to work
- The two cell phones use two different service providers and one which is slower due to connectivity issues due to the provider.
- That person is using each cell phone to change their name from 'Dan' to 'Sara' on their way to work
Is the above an actual business scenario? Is this something that the company actually wants to spend real money on to support, this extreme corner case? How is your software going to determine which button the user pushed last just before they got off the train?
|
|
|
|
|
Google for "Windows Kiosk Mode", and/or try to give more details on what you are trying to do. As is, your question is too vague.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|