|
Homer77 wrote: Seems kind of messy so i wonder how others have dealt with this scenario
Normalization, to BCNF (3NF+); following those steps always leads to a decent relational design.
Bastard Programmer from Hell
|
|
|
|
|
Hi everyone,
I am doing some research on SOA design tools. Would you mind telling me some of the most popular tools, either open-source or commercial? I would appreciate your replies.
|
|
|
|
|
At present there are no established tools available for SOA design. IBM has a tool, but I don't know how successful it is. This may be an interesting read for you: http://soadecisions.org/soad.htm[^]
Eclipse also has a product on this: http://www.eclipse.org/stp/[^]
I have not used a pure SOA design tool as such; I am still using CASE/UML tools to define the design. I believe it will take some more time before we have a well-defined SOA tool.
|
|
|
|
|
Hi,
I have a requirement to develop a project involving mechanical engineering computation. I need a suggestion: would it be good practice to go with VC++ and COM?
If the project will not see many change requests, which pattern would be best suited in that case?
If I use COM, what would its benefit be?
Any suggestion will help my thought process. Thanks
|
|
|
|
|
From what you posted, COM has nothing to do with it.
You haven't described anything that relates to design patterns either.
If you understand algorithms, the math, the engineering domain and C++ very well, then in terms of performance you might be able to produce a faster solution with C++ than with another language, especially if you profile it (presuming the design is sound to start with).
This applies ONLY to the computations, not to any other required functionality.
pandit84 wrote: Any suggession will help me my thought process
Start with designing a solution and do not consider an implementation (C++/COM) until you have that design.
|
|
|
|
|
|
General architectural question: From what I've been able to gather, there are basically 3 approaches to handling code that is reusable across multiple projects:
1. Dynamic linking: build a DLL, use API functions like LoadLibrary() and GetProcAddress() to link to the desired functionality at run time.
2. Static linking: build a DLL, link against its import .lib so the loader resolves the external proc addresses when the process loads.
3. Static library: build a LIB only, use as input that becomes part of the .exe file for a given project.
It seems pretty obvious that #1 would represent the worst execution time because of the need to look up a proc address before calling it.
But the issue seems a little more fuzzy between #2 and #3. Is there a significant advantage to one over the other in general? What about in high-demand applications such as video games or real-time simulators, or where the library function is expected to be called dozens of times per second?
Thanks for your help.
|
|
|
|
|
I think that in performance terms there is very little to choose between #2 and #3. The trade-off comes when you have lots of apps running on your machine: with a DLL you have only one copy of each library function in memory, whereas with a static library you have one copy for each app.
|
|
|
|
|
That's kind of what I thought. Does having a larger executable result in slower app launch time? When exactly do statically linked DLLs get loaded -- on application start, first call, etc...?
|
|
|
|
|
Xpnctoc wrote: Does having a larger executable result in slower app launch time?
Probably, but unless you are loading and unloading thousands of times a minute it is unlikely to be an issue.
Xpnctoc wrote: When exactly do statically linked DLLs get loaded
At process load time, as far as I am aware, unless the DLL is set up for delay loading, in which case it is on first call.
|
|
|
|
|
Xpnctoc wrote: It seems pretty obvious that #1 would represent the worst execution time because
of the need to look up a proc address before calling it.
Without context, that is meaningless.
For starters, .NET and Java always load dynamically.
Second, for most business applications, performance is impacted much more significantly by requirements, architecture and design.
Third, some required business functionality cannot be implemented without dynamic loading; for example, hot-loading code into a 24x7 server.
|
|
|
|
|
Well I guess I could have been a little clearer, but since, as you said, .NET and Java always do dynamic loads, that's obviously not what I'm talking about. I'm talking about plain old C++. I also did specifically mention video games and real-time simulations.
|
|
|
|
|
Xpnctoc wrote: I also did specifically mention video games and real-time simulations.
I believe those rely on dynamic calls. Could be mistaken though.
Presumably you are familiar with 'video drivers' on PCs? The things that directly drive all the video on the box. They are pluggable components in the OS.
|
|
|
|
|
Xpnctoc wrote: It seems pretty obvious that #1 would represent the worst execution time because of the need to look up a proc address before calling it.
During initialization, one creates a method-pointer, say, a delegate. During execution, you fire it. Can be pretty darn fast.
Xpnctoc wrote: But the issue seems a little more fuzzy between #2 and #3. Is there a significant advantage to one over the other in general?
If your requirements demand that kind of speed, you'd be best off using QNX[^].
Bastard Programmer from Hell
|
|
|
|
|
I thought about using function pointers. But the sheer number of them I would need to generate and maintain at application start-up seems like more work than it's worth. Might as well just stick with static linking or a static library.
|
|
|
|
|
|
|
|
|
|
What's the difference between the Proxy pattern and the Observer pattern? And does Observer provide more flexibility for handling events compared to Proxy?
Thanks for the help and attention...
|
|
|
|
|
Do not cross-post. You've already asked this question in the C# forum. Please have patience while someone answers your question there.
|
|
|
|
|
I am trying to find the best way to structure the company website.
They want the website to be hosted off-site, BUT the database it uses is to stay here in the office, as it is integrated into the back-office systems.
The site currently has a public area and two membership-based sub-sites (these talk to the database regularly), and it also has web services that are used by external parties.
My thought is to host all of the above in the hosted location and have it talk to the database in our office through a web service, but I am not sure that this is the best approach.
Lobster Thermidor aux crevettes with a Mornay sauce, served in a Provençale manner with shallots and aubergines, garnished with truffle pate, brandy and a fried egg on top and Spam - Monty Python Spam Sketch
|
|
|
|
|
What happens to the website if it can't talk to the database in the office?
|
|
|
|
|
Then our claims handlers and brokers can't retrieve specific information from our back office database.
Lobster Thermidor aux crevettes with a Mornay sauce, served in a Provençale manner with shallots and aubergines, garnished with truffle pate, brandy and a fried egg on top and Spam - Monty Python Spam Sketch
|
|
|
|
|
The point however is that you need a business decision about how that impacts the business functionality of the site. That in turn impacts the implementation.
Some possible implementations.
1. The site is unusable, and tells the user that.
2. Some specific parts are unusable and tells the user that.
3. Data from the office is cached, with perhaps appropriate expiration times (which again might require showing the user an error if the cached data has expired).
4. Any down time at all is unacceptable thus a replication scheme is needed.
Not to mention of course that you want some way to be notified if the web site can't see the database.
|
|
|
|
|
If the web application must use the database in your office, then go for a web service (a WCF service in the case of a .NET solution) between your web site and the database; otherwise you may want to have a separate database for your web site.
I find it easy to have separate databases for each app and make them share data as needed instead of one big "do-it-all" database that is hard to maintain.
|
|
|
|
|
|
Hi..
I own a Tektronix logic analyzer that was designed for Win95/98 and supported the old APM 1.2 specification for powering the system up and down. Under Win95/98 the soft button supported both power-up and a safe shutdown/power-off mode. After I upgraded to Win2000, both of these features were 'broken'. I discovered the BIOS issue with Win2000 not being able to turn power off to the analyzer (BIOS patch), but the only way I can accomplish this now is with the traditional Start/Shutdown mouse sequence. The front-panel button will no longer initiate a shutdown/power-off sequence. I am guessing that the BIOS had an embedded call to the Windows function ExitWindowsEx. However, under Win2000 you need to enable the proper privileges for that API function to work.
So... hopefully someone is familiar with what I am asking and can recommend how to force a Windows function call from within the BIOS. I am comfortable patching the BIOS for this sort of thing; I just need some pointers if it makes sense.
If this is not possible/feasible, what other options do I have? I've studied the APM spec but don't understand how to actually force the Windows OS to initiate a shutdown/power-off sequence.
If anyone can offer some advice, I would appreciate it. I would love exchanging thoughts on this as I've not been able to find anyone knowledgeable to help.
Thanks for listening..
Jim
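I can't speak to calling into Windows from the BIOS, but the privilege issue mentioned above is real: on NT-based Windows (2000 and later), ExitWindowsEx() fails unless the caller first enables the shutdown privilege in its process token. A user-mode sketch of the usual sequence (untested here, and it does not address triggering it from firmware):

```cpp
#include <windows.h>

// Enable SE_SHUTDOWN_NAME in our token, then request shutdown + power-off.
// Returns false if any step fails (check GetLastError() for details).
bool ShutdownAndPowerOff() {
    HANDLE token;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return false;

    TOKEN_PRIVILEGES tp;
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValue(NULL, SE_SHUTDOWN_NAME,
                              &tp.Privileges[0].Luid)) {
        CloseHandle(token);
        return false;
    }

    AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
    BOOL adjusted = (GetLastError() == ERROR_SUCCESS);
    CloseHandle(token);
    if (!adjusted) return false;

    // EWX_POWEROFF asks APM/ACPI-capable hardware to cut power as well.
    return ExitWindowsEx(EWX_SHUTDOWN | EWX_POWEROFF, 0) != 0;
}
```

One alternative worth exploring: rather than patching the BIOS, a small user-mode service that watches whatever signal the front-panel button produces and then calls a routine like this might get you the same behavior.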
|
|
|
|