In the traditional 3-tier setup, the only thing that "should" be performing data operations is your business logic. Permissions may not necessarily make sense at the data level.
As an example, a "loan manager" may have permission to "approve loans". The loan approval process may update a loan record, an audit table, salesman performance / manager performance / sales funnel tables, and so on.
Granted, there will always be "update contact details".
I generally find that granting permissions based on roles/interactions causes less friction in implementation, especially when dealing with workflow and interception. Quite a few times I've seen teams struggling with a "well, it goes into the approval state, so you set the user to read-only, set the group to 'managers', and make that read-write...".
I just realized that my question is missing some relevant information.
In no situation will I grant permissions directly to users. Sometimes I've used implementations where the connection from the middle tier to the database doesn't use user info at all (as is traditionally done), but sometimes I use user information at all levels. In those cases typical building elements may include:
- user identification at client (AD based or not)
- secure identification and authorization to middle tier based on client user
- connection to database for the business logic, but identified by user info (in other words BL connects to the database on behalf of the user)
- database roles granted to the user
- database roles granted to the application (BL) etc...
Object privileges in the database are always granted to roles.
After that description, do you feel that the overall idea is getting better or worse?
As soon as you start connecting to the database as a different user per request (assuming basic .NET / SQL Server / ADO.NET), you will break connection pooling, which can be a performance hit.
In a moderately complex application, if you look at CRUD operations at table level, then you might find that just about every user has full access to most of the critical tables anyway :/ So you don't really gain much at that level.
Some people would say to do everything via stored procedures and permission these up...
I'd tend to put the security at the business logic level. You can even use the declarative security attribute thingos on methods called by the presentation layer if you like. :P
These were excellent things to consider, especially since in some cases it's beneficial (or even critical) to identify the user in the db, and the side effects can be managed.
Side-note: actually I don't break the connection pooling, and it's still fundamental to the application. Only the 'level' of pooling is changed, but the pool itself must remain (or else there will be lots of angry users).
This is a tough one... propagating user definitions from the UI down to the database level often makes database administration a nightmare. We use application-defined "users" at the database level to reduce this issue, and copy UI user names into database records to maintain an audit trail (and we don't let most users anywhere NEAR a direct connection to the database server and its schemas).
I am exploring an idea I would like to research further and am wondering if anyone else has any thoughts.
For high performance web applications, the database and data access code are often heavily optimised, using near and far caches, distributed caches, optimised queries, optimised data access code and so on. The database is usually separate from the other TP systems that the organisation runs its business on, and if it isn't, it could and arguably should be.
It occurred to me that the reason we use far distributed caches is mainly to avoid
a) Relational/object conversion overhead
b) Disk access overhead
I had been doing some reading about new generations of universal memory (NRAM, MRAM, FeRAM), some of which are commercially available and will eventually replace SRAM, DRAM, SDRAM, Flash, etc. as a universal type of non-volatile high speed memory. I wondered how this would affect the architecture of high performance systems. It then occurred to me that, in the meantime, we could use the (now cheaper) solid state drives (SSDs) that use standard DRAM with internal UPS and backup devices.
I thought: in this case, the need for the far cache would be reduced or removed completely. In fact, if the distributed cache could handle transactions and concurrency, and if the web app were geared less to set-based operations and more to CRUD operations, then this 'advanced cache', on non-volatile high speed memory, would also serve as the primary data store.
In short, an object database mature enough to deal with transactions, clustering and a few other things, deployed on machines with DRAM-based SSDs, could remove the need for a far cache, an RDBMS, data access logic, etc. completely, and boost performance.
There do exist object DBs that permit SQL-based relational queries, though I would imagine that many of those queries would be used for transmitting data to other systems and therefore could be done through the web app's own API.
So, in summary, what I'd like to explore is this architecture for high availability, large scale systems:
a) Web Farm running web apps with near caches
b) Distributed Object-Relational database on DRAM based SSD as backend
c) Separate API for datapumps to RDBMS for queries, integration etc
What do you think? What problems do you foresee? Would you imagine large performance gains?
Which of the following is a better design? Why or Why not?
Private Sub CheckCheckBoxes()
    CheckBox1.Checked = True
    CheckBox2.Checked = True
    CheckBox3.Checked = True
End Sub

Private Sub UncheckCheckBoxes()
    CheckBox1.Checked = False
    CheckBox2.Checked = False
    CheckBox3.Checked = False
End Sub

Private Sub CheckUncheckCheckBoxes(ByVal checkValue As Boolean)
    CheckBox1.Checked = checkValue
    CheckBox2.Checked = checkValue
    CheckBox3.Checked = checkValue
End Sub

Private Sub CheckUncheckCheckBoxes()
    Dim checkValue As Boolean = GetCheckValue() ' Has the logic for whether to be checked or not
    CheckBox1.Checked = checkValue
    CheckBox2.Checked = checkValue
    CheckBox3.Checked = checkValue
End Sub
My question is not about naming controls or variables, therefore, I did not give them very meaningful names. Of course, I would give them meaningful names in real projects.
If none of them are good designs, can you recommend a good one? I do know patterns and I use them all the time; however, I am not sure what patterns have to do with writing good subroutines. Can you please clarify or recommend some reading?
I posted this [^] on another forum but got no answer. I'm referencing it here in case you want some background on why I need to do this.
In short, I want to detect analog beacon radar pulses that have already been converted by an A/D board to values between 1 and 256 (the Y component, or amplitude levels). The data is sampled at a 2 ns rate, or 500 MHz, which comprises the X component. Typically, from what I've been told, pulse leading edge (LE) detection and time-stamping of the LE are done in hardware. I'm not planning on posting all of the requirements, but suffice it to say that I will meet them all if I can design the foundation that this topic addresses.
I'm approaching this in two ways based mostly on my empirical observations of the raw data.
Using a plot program I can plot the continuous waveform and see where hardware or software would have some problems separating the pulses. Pulses must meet a minimum width and have various pulse to pulse and other timing tolerances. Of course, individual pulses are really part of a pulse train of a particular type of message, and can be correlated by amplitude and bit position once the pulses have been extracted as pulse records with attributes for the LE, TE, pulse width, plateau or amplitude, overlapped, etc.
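To make that record concrete, here is a minimal sketch of such a pulse record in Python (the field names and the width calculation are my assumptions, taken only from the attributes listed above):

```python
from dataclasses import dataclass

@dataclass
class PulseRecord:
    le: int           # leading-edge sample index (time = index * 2 ns)
    te: int           # trailing-edge sample index
    plateau: int      # amplitude level (1..256 from the A/D board)
    overlapped: bool  # True if the TE had to be recovered by extrapolation

    @property
    def width(self) -> int:
        # Pulse width in samples; multiply by 2 ns for real time.
        return self.te - self.le
```

Once pulses are stored this way, the later correlation by amplitude and bit position can operate on these records instead of the raw waveform.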
1) Pulses of different signal level or amplitude can be intermixed, interleaved, or overlapped, sometimes causing wide pulses where two or more pulses of different amplitudes are joined and the trailing edge (TE) of the first pulse may not be detectable, though it is recoverable by extrapolating any downward slope. Pulses that do not meet minimum width constraints, even occasionally ones of significant amplitude, are most likely noise and can be discarded or not stored. Noise can also interfere with pulse width, sometimes making a TE initially indiscernible or undetectable unless a small TE can be detected first.
2) I am basing my code design on how I, a human, would interpret the individual pulses from empirical observation. Consequently, both I and the program process the data from left to right, in the direction of X, where X equals time, in an increasing time reference.
Before I get too far: my initial question was about using a state machine as part of the design to perform the pulse processing. Since then, I have already started coding some of this in the manner described in #2 above. When separating individual pulses from combined pulses, and when separating pulses whose TEs do not return to zero or the noise floor using changes in the direction of slope (downward or upward), I am running into some slight problems.
I've decided that a sliding window history (implemented as a circular queue) of the last 5 slope directions the pulse is/was travelling in could be used to build a scoring algorithm that determines how confident I am about separating the waveform into individual pulses. For example, while following a pulse's TE down to about the 3 dB point below its plateau, if I get, say, 3 consecutive hits indicating a change of direction from a downward slope to a rising slope, then depending on the pulse's width up to that point, I could be processing a noise spike or a second pulse. Suffice it to say, I now just assume that if I have reached the -3 dB point (~70% of the pulse plateau level) and the pulse meets the minimum width tolerance, I can declare it a pulse and save its characteristics.
I want to assign a higher score when there is a high degree of confidence that the change in slope direction is not momentary but sustained. I could also use a second scoring algorithm to assign higher confidence to pulses that do return closer to the noise floor, perhaps all the way to the floor or to the -6 dB point.
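A sketch of that sliding-window idea in Python (the window size of 5 and the 3-consecutive-hits threshold come from the description above; the confidence formula is purely illustrative):

```python
from collections import deque

class SlopeScorer:
    """Circular queue of recent slope directions with a simple confidence score."""

    def __init__(self, window=5):
        # deque with maxlen acts as the circular queue: old entries fall off.
        self.history = deque(maxlen=window)

    def push(self, prev_sample, sample):
        # Slope direction: +1 rising, -1 falling, 0 flat.
        direction = (sample > prev_sample) - (sample < prev_sample)
        self.history.append(direction)

    def consecutive_rising(self):
        # How many of the most recent directions were rising in a row;
        # 3 or more would suggest a real direction change, not a blip.
        count = 0
        for d in reversed(self.history):
            if d == +1:
                count += 1
            else:
                break
        return count

    def confidence(self):
        # Crude score: fraction of the window agreeing with the latest direction.
        if not self.history:
            return 0.0
        latest = self.history[-1]
        return sum(1 for d in self.history if d == latest) / len(self.history)
```

While tracking a TE, a caller could treat `consecutive_rising() >= 3` as the trigger to decide between "noise spike" and "new pulse" based on the width accumulated so far.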
Now, getting back to program design: IMHO, there is no easy way to code this. Several possibilities exist once the LE and plateau have been acquired: a normal TE could follow, resulting in a clean pulse; a noise spike could temporarily occur as part of the TE, widening the pulse slightly; a noise spike could be of significant enough amplitude or width to be interpreted as a secondary pulse adjacent to or following the pulse being processed; or an actual second pulse could occur intermixed with noise, such that the first pulse's TE never returns to the noise floor, or the noise floor is high compared to the pulse train's amplitude level. Consequently, the LE of the second pulse may only be observable from, say, the -3 dB point below that pulse's plateau level.
After thinking about these possibilities, and given that neither you nor the algorithm processing the continuous waveform really knows which condition you have until after it occurs, this does not allow for calling functions in a sequential or logical manner; I have found that most or all of the possibilities must be handled in a single function that looks for all of them. Needless to say, I have made some progress and would appreciate any suggestions on the design or help with coding.
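For what it's worth, a heavily simplified sketch of that "single function, left to right" pass might look like this. The -3 dB (~70% of plateau) trailing-edge rule and the minimum-width noise rejection follow the description above, but the concrete threshold values are placeholders, and real code would add the noise-spike and overlapped-pulse cases:

```python
# Illustrative values only; the real requirements would define these.
NOISE_FLOOR = 20
MIN_WIDTH = 4  # samples; at 2 ns/sample this is 8 ns

def segment(samples):
    """One left-to-right pass with an explicit state variable."""
    pulses = []          # list of (le, te, plateau) tuples
    state = "IDLE"       # IDLE = at noise floor, IN_PULSE = LE seen
    le = plateau = 0
    for i, y in enumerate(samples):
        if state == "IDLE":
            if y > NOISE_FLOOR:                   # candidate leading edge
                state, le, plateau = "IN_PULSE", i, y
        else:  # IN_PULSE: track the plateau and watch for the TE
            plateau = max(plateau, y)
            # Declare the TE at ~70% of plateau (-3 dB) or at the noise
            # floor, rather than waiting for a full return to zero.
            if y <= 0.7 * plateau or y <= NOISE_FLOOR:
                if i - le >= MIN_WIDTH:
                    pulses.append((le, i, plateau))
                # pulses narrower than MIN_WIDTH are discarded as noise
                state = "IDLE"
    return pulses
```

The awkward part the post describes lives inside the `IN_PULSE` branch: distinguishing a noise spike from a genuine second LE would hook into the slope-history scoring rather than a single threshold test.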
I have a form which is used for data entry; the data is saved by pressing a button. I load numerous controls in the form's load event. When the user saves, I just hide the form so I do not have to load all the data again when the form is shown next time. This also means I have to clear all the controls for the next time the form is shown.
I am using the controls' events to change properties of my classes. For example, in a textbox's TextChanged event I might have something like employee.Name = TextBox1.Text, where employee is an object. However, when I hide the form I have to clear the textbox, which raises the TextChanged event. I can have a boolean variable (something like cleaningUp) set to true while hiding the form; if it is true, the TextChanged handler exits immediately. Alternatively, I can remove the textbox's handler before clearing it and then rewire it afterwards. This is a lot of work for a form with many controls.
What am I doing wrong? Is there a better way of accomplishing my task?
This is a lot of work with a form which has many controls.
Yes it is. Littering windows with individual controls for each individual data point is what I call the Bug Splat User Interface technique. It's not user friendly and it's not coding friendly. Prefer to use controls like PropertyGrid for data entry. They are both user friendly and coding friendly.
instead of using a form that has individual controls on it?
Yes. Your other parts of the UI provide views and navigation mechanisms of the system. When the user selects something in the system that has multiple data points that can be edited you use the PropertyGrid.
I can have a boolean variable (something like cleaningUp) set to true during hiding of the form. If this variable is set to true then it will exit the text changed event handler.
There's nothing wrong with this approach; I've had to use it myself.
The problem, in my opinion, is with the design of the controls. The XXXChanged events do not, in most (all?) cases, distinguish between changes generated by the user and those generated programmatically. As a result, your event handler can't tell the difference, so you wind up having to use a kludge like a boolean flag to indicate when the event is being raised in response to the user's action as opposed to a change made programmatically.
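One small refinement to the flag kludge is to wrap it in a guard that always resets itself, so an exception while clearing controls can't leave the flag stuck. Here is a language-agnostic sketch in Python (in VB.NET the equivalent would be a Try/Finally around the flag); all names are illustrative:

```python
from contextlib import contextmanager

class Form:
    def __init__(self):
        self._cleaning_up = False
        self.employee_name = ""   # stands in for the employee object

    @contextmanager
    def suppress_events(self):
        # Set the flag for the duration of the block and always clear it,
        # even if clearing a control raises an exception.
        self._cleaning_up = True
        try:
            yield
        finally:
            self._cleaning_up = False

    def on_text_changed(self, text):
        if self._cleaning_up:
            return                    # ignore programmatic changes
        self.employee_name = text     # user-driven change: update the model

    def clear_controls(self):
        with self.suppress_events():
            self.on_text_changed("")  # simulates the control firing its event
```

The handlers stay untouched; only the cleanup code opts in to the guard, which scales better across a form with many controls than rewiring handlers one by one.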
I need to create a VB.NET service to exchange files with a PDA using the Telnet protocol. The PDA sends/receives files from an AS/400, a Unix box and a Microsoft FTP server, but none of the "chat" examples I've found can be used. I have the C code from the Unix application, and I'd like to create methods in C++ that I can call from my VB.NET project.
And I would like to transform it into a clean object-oriented design.
I've tried a lot of solutions (functors, interfaces), but the only form my compiler accepts is declaring the function arguments (funcptrABC) as static.