What you should do depends on more than what you posted here.
I can say for sure that attempts to "make my work easier" which result in dynamic/metadata solutions are always wrong. They do nothing but make the final solution MUCH harder to maintain.
That said, numerous solutions already exist for creating multiple layers based on actual data models. Those work when the real concern is not a dynamic solution but rather the work involved in writing code that is basically the same for many data entities.
Depending on the solution, it can generate any or all of the following:
1. The DDL
2. The DML - stored procs that act as a database API
3. DTOs and DAOs in your language of choice
4. DAO API layers.
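As an illustration of what such a generator typically emits for items 3 and 4, here is a minimal hand-written sketch for a hypothetical Customer entity (the class names and stored-procedure names are assumptions for illustration, not the output of any particular tool):

```java
// Hypothetical generator output for a "Customer" entity.

// Item 3: the DTO - a plain data carrier matching the table's columns.
class CustomerDto {
    private int id;
    private String name;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// Item 4: the DAO API - one method per generated stored procedure.
interface CustomerDao {
    CustomerDto findById(int id);   // would wrap e.g. usp_Customer_Select
    void insert(CustomerDto dto);   // would wrap e.g. usp_Customer_Insert
    void update(CustomerDto dto);   // would wrap e.g. usp_Customer_Update
    void delete(int id);            // would wrap e.g. usp_Customer_Delete
}
```

The point is that every entity gets the same shape of code, which is exactly why generating it beats writing it by hand.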
I have been rolling such solutions myself for decades.
The only suggestion I would add to the above is that you must not let the ease of use of the DTOs tempt you into extending their use into other layers of the application UNLESS they are free from all database hierarchy abstractions. And perhaps not even then.
We have designed a 3-tier web app for a finance application.
The business tier is further divided into layers such as manager, helper, and util layers, to modularize the code and isolate the different concerns, i.e. core business code vs. non-business code.
The util layer has non-business functions that are required during a particular process, e.g. DateUtils.java, EncryptionUtil.java, etc.
The helper layer has business logic that is specific to a particular business process and not required in other business processes, e.g. SomeThirdPartyInterestCalculationHelper.java, SpecificRequestBuilder.java.
The manager layer controls the business process flow and also implements parts of the business logic. E.g. CustomerAccountManager.java has different methods for CRUD operations on a customer account; it calls different helpers, utils, DTOs, etc. and gets the work done, and it also implements some pieces of business logic itself. So it performs a mix of the BPM role and parts of the core business process logic.
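To illustrate the current layering described above, a minimal sketch (all class and method names here are assumptions for illustration, not the actual code):

```java
// Util layer: non-business logic, reusable anywhere.
class DateUtils {
    static boolean isLeapYear(int y) {
        return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
    }
}

// Helper layer: logic specific to one business process.
class InterestCalculationHelper {
    double dailyRate(double annualRate, int year) {
        return annualRate / (DateUtils.isLeapYear(year) ? 366 : 365);
    }
}

// Manager layer: controls the flow, but also mixes in pieces of
// business logic of its own - which is exactly why it keeps growing.
class CustomerAccountManager {
    private final InterestCalculationHelper helper = new InterestCalculationHelper();

    double accrueInterest(double balance, double annualRate, int year, int days) {
        // flow control plus an embedded business rule
        return balance * helper.dailyRate(annualRate, year) * days;
    }
}
```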
As the processes become more complex and lengthy, my manager layer keeps growing and no longer looks like well-organized code.
I want separate layers with specific roles: business process controller, business process execution (core business logic), DB-specific CRUD operations, helpers (specific to processes), and non-business logic.
What can be a better design pattern to achieve this?
I am trying out the Business Object pattern to isolate the different parts of the business logic, coupled with the Application Service pattern.
So, for executing a business process, I would have:
1. ApplicationService - Would be a pure business process controller calling different business objects and controlling execution based on results of BO methods
2. BusinessObject1 - Core business logic in different methods - Called by ApplicationService
3. BusinessObject2 - Core business logic in different methods - Called by ApplicationService (if BusinessObject1 grows too big, or if BusinessObject1 and BusinessObject2 each perform specific business functions)
4. IntegrationBusinessObject - To call other third party services required in business process
5. DomainEntityBusinessObject - CRUD operations for a particular domain entity required in the process; will also perform some business-level checks required before or after the CRUD operations
6. Adaptors - To convert formats for third-party services - May be called by IntegrationBusinessObject
The idea is to make the classes more compact, each performing a specific business function, and to control the process from a single class (the ApplicationService) so that changing the process is easier.
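The proposed split can be sketched roughly like this (all names and rules are illustrative assumptions, not a definitive implementation):

```java
// BusinessObject1: one core business rule, nothing else.
class ValidationBusinessObject {
    boolean isWithdrawalAllowed(double balance, double amount) {
        return amount > 0 && amount <= balance;
    }
}

// BusinessObject2: another specific business function.
class AccountBusinessObject {
    double withdraw(double balance, double amount) {
        return balance - amount;
    }
}

// ApplicationService: a pure process controller. It only sequences
// calls to the business objects and reacts to their results; it holds
// no business rules of its own, so changing the process means
// changing only this class.
class WithdrawalApplicationService {
    private final ValidationBusinessObject validator = new ValidationBusinessObject();
    private final AccountBusinessObject account = new AccountBusinessObject();

    double execute(double balance, double amount) {
        if (!validator.isWithdrawalAllowed(balance, amount)) {
            throw new IllegalArgumentException("withdrawal rejected");
        }
        return account.withdraw(balance, amount);
    }
}
```

IntegrationBusinessObject and the adaptors would hang off the same controller in the same way, keeping each class compact and single-purpose.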
I have 3 DNN webservers behind a Citrix Load Balancer, the load balancer is configured for SSL Offloading.
I discovered that the login link no longer works: it just refreshes the page whenever it is clicked. The URL of the login link is: https://test.abc.net/User-Login?returnurl=%2f. The link, when clicked, is supposed to take users to the page where they log in.
When I changed the Citrix load balancer to HTTP, everything works normally, i.e. http://test.abc.net/User-Login?returnurl=%2f takes users to the login page.
Any suggestion on how to resolve this issue will be appreciated.
In an imagined situation we have a file without an extension. We open the file in a hex editor. Is there any chance we could interpret what we see in the hex editor and ascertain whether the file contains only instructions or only data (maybe as text, maybe in some other format)?
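One common starting point for that kind of inspection is the "magic number" at the start of the file: many formats begin with a fixed byte signature, so the first bytes shown in a hex editor often identify the format. A minimal sketch of that heuristic follows (the signatures listed are well known, but the class itself is hypothetical; it can only recognize formats it knows about, and cannot prove a file is pure instructions or pure data):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class MagicSniffer {
    // A few well-known file signatures, keyed by a human-readable label.
    private static final Map<String, byte[]> MAGIC = new LinkedHashMap<>();
    static {
        MAGIC.put("ELF executable", new byte[] {0x7F, 'E', 'L', 'F'});
        MAGIC.put("Windows PE/EXE", new byte[] {'M', 'Z'});
        MAGIC.put("PNG image",      new byte[] {(byte) 0x89, 'P', 'N', 'G'});
        MAGIC.put("ZIP archive",    new byte[] {'P', 'K', 0x03, 0x04});
    }

    // Compare the file's leading bytes against each known signature.
    static String guess(byte[] header) {
        for (Map.Entry<String, byte[]> e : MAGIC.entrySet()) {
            byte[] sig = e.getValue();
            if (header.length >= sig.length) {
                boolean match = true;
                for (int i = 0; i < sig.length; i++) {
                    if (header[i] != sig[i]) { match = false; break; }
                }
                if (match) return e.getKey();
            }
        }
        return "unknown (if most bytes are printable ASCII, it is likely text)";
    }
}
```

A file with no recognizable signature still needs deeper inspection; raw data blobs and raw machine code can look identical in a hex dump.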
The problem is that brew puts everything in a directory called Cellar and installs everything there, even if you already have the program installed somewhere else.
For example, I already have Python installed on my Mac; nevertheless, brew installed it again under Cellar. Quite strange, I think! So, given this, how can I "force" PyQt to use my original Qt installation?
"So, given this, how can I 'force' pyqt to use my original qt installation?"
I do not know the answer, but as already said, I guess it is already using the existing Qt libraries by looking in the common library directories.
It would also depend on the installation order when using brew: if you installed Qt after PyQt, PyQt would not know where to look for Qt.
I have no Mac here and can't try it out. Why not just write a simple "Hello World" Python application using PyQt and check whether it works? If so, you can later try to find out how to configure it to use specific libraries.
I'm aware of this, so I can decide whether to allow virtual machines or not.
There are "hardware fingerprints" available even in VMs (apparently).
How this is supposed to work is not clear to me; I need to investigate it more. [Edit] In case you mean a fingerprint of the underlying machine, I see a problem:
a.) Some of our customers frequently move their virtual machines from one physical server to another (for security reasons, or whatever else; I don't know exactly).
What about Azure?
No idea about it at the moment, and I also think it is not a solution for some of our conservative customers.
Thank you again very much.
It does not solve my problem, but it answers my question.