|
Guys,
Is it possible to sync 4 databases in MySQL? I mean, there are 4 servers at different locations, and the database on each server would be in auto-sync mode. If any one goes down, the remaining three stay in sync automatically, and when it comes back up, all 4 get synced automatically.
Please reply...!
|
|
|
|
|
You're talking about sharding, replication, and load balancing, which is great, but unless you're writing software that will perform this task, this is the wrong forum for it. I'd suggest the System Administrator or Database forums.
"There are three kinds of lies: lies, damned lies and statistics."
- Benjamin Disraeli
|
|
|
|
|
I'm working on a C#/WPF/SQL Server app.
My SQL tables all have the following columns:
CreatedById INT NOT NULL FOREIGN KEY REFERENCES Users(UserId),
CreatedDt DATETIME NOT NULL,
LastModifiedById INT NULL FOREIGN KEY REFERENCES Users(UserId),
LastModifiedDt DATETIME NULL,
DeletedById INT NULL FOREIGN KEY REFERENCES Users(UserId),
DeletedDt DATETIME NULL
When the user logs into the app, the login function returns a User object, which is stored on the MainWindowViewModel.
The question is this: what's the right way to get the Id of the logged-in user into the DAL?
I could simply pass the user Id into every DAL function, but I'd like to hear other suggestions.
Thanks
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
The "right" way is entirely contingent on what you want to achieve.
If you're trying to provide a security layer, it could be parsed through an authorization provider with full transactional change tracking posted to a completely different database. This would, by necessity, have the transaction and user info passed to an authorization (or even AAA) provider, which would pass the results of the authorization to the DAL on success in an object that should contain the UserId regardless.
If you're looking for basic audit trails, a simple M-M mapping table that triggers off of update events would be just fine (user, table, item id, SQL query OR enumeration describing transaction). You could also create multiple M-M tables for each data table to provide robust relational support. Using this method, the context information, including the UserId, can be passed through an event handler rather than as a parameter.
If you just want the UserId as a creator, you can just pass the value or have it available in a context that is passed to the DAL on construction.
And there are, of course, other approaches ad nauseam.
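To make the third option concrete, here is a minimal C# sketch of passing a context to the DAL on construction (DalContext, Order and OrderRepository are hypothetical names, not from your code):

// A minimal sketch of option 3: hand the DAL a context object at construction.
using System;

public sealed class DalContext
{
    public int UserId { get; }
    public DalContext(int userId) => UserId = userId;
}

public sealed class Order
{
    public int CreatedById { get; set; }
    public DateTime CreatedDt { get; set; }
}

public sealed class OrderRepository
{
    private readonly DalContext _context;
    public OrderRepository(DalContext context) => _context = context;

    public void Insert(Order order)
    {
        // Stamp the audit columns from the context instead of a per-call parameter.
        order.CreatedById = _context.UserId;
        order.CreatedDt = DateTime.UtcNow;
        // ... issue the INSERT here ...
    }
}

Construct the repositories once after login with the logged-in user's Id, and no individual DAL call needs the extra parameter.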
"There are three kinds of lies: lies, damned lies and statistics."
- Benjamin Disraeli
|
|
|
|
|
Kevin Marois wrote:
I could simply pass the user Id into every DAL function, but I'd like to hear other suggestions.
Compared to what alternative?
Fetching the entire table and doing the lookup in memory?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Eddy Vluggen wrote: Fetching the entire table and doing the lookup in memory?
Explain this. Fetching WHAT table??
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
Kevin Marois wrote: Explain this. Fetching WHAT table??
Ah. The "SQL tables" that you were referring to:
Kevin Marois wrote: My SQL tables all have the following columns:
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Why would I ever need to "fetch" the tables? I'm doing an Insert/Delete/Update.
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
Kevin Marois wrote: Why would I ever need to "fetch" the tables? I'm doing an Insert/Delete/Update.
Most things that are written are read at some point. Let me rephrase, then: what alternative would there be to passing the userId when you insert/delete/update?
How about Functions That Return User Names and User IDs[^]?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Your question is phrased in such a way that it is confusing.
Summary of the actual problem:
1. You have a number of tables which you want to update with information specific to the user that initiated the action.
2. You want to know how to get the user information, automatically, to the point where it is used.
First, problems with this architecture:
1. You said "all". Generally a bad idea. Often tables represent structure that has nothing to do with users, and keeping user-specific data everywhere is pointless.
2. It is better to track what a user did in the business sense, not in the data sense. This leads to several requirements: you often want to know what general action initiated the change, you want to keep the entire history (more than one user), and you need to deal with what happens when a user is deleted from the system.
3. Automated processes can be required to update a system. They have no user.
4. Some systems allow one user to act on behalf of another user. Single data collection points do not allow that action to be captured.
The solution to your specific problem is as follows:
1. A user action is initiated either as a request (thread level) or as a message (not thread level).
2. For a request, you can use the ThreadLocal class to store the user; many layers later you retrieve the user from the ThreadLocal for use in your DAL (see the sketch below).
3. A message has no way to transport that information except via a specific attribute. But the message should have originated as a request (real people must always start with real UI actions), so you use point 2 at the point where the message is created to add the user to the message. The end that processes the message (again on a thread) extracts the user and puts it into the ThreadLocal of the thread that is processing now.
I suggest that you ensure, at the request origination point, that all exit points, including exceptions, clear the user.
I suggest that you test, at all usage points, that the user does in fact exist before attempting to use it. That ensures no null pointers.
I suggest creating a user proxy object (do NOT use the original DAL user object) to contain the information about the user. That proxy object must be exposed to ALL layers, whereas the DAL user object should not be.
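A minimal C# sketch of point 2 plus the clear-and-check suggestions (UserProxy and CurrentUser are hypothetical names):

using System;
using System.Threading;

public sealed class UserProxy
{
    public int UserId { get; }
    public UserProxy(int userId) => UserId = userId;
}

public static class CurrentUser
{
    private static readonly ThreadLocal<UserProxy> _user = new ThreadLocal<UserProxy>();

    // Bind the user at the request origination point.
    public static void Set(UserProxy user) => _user.Value = user;

    // Many layers later, the DAL retrieves the user; fail fast if it is missing.
    public static UserProxy Get() =>
        _user.Value ?? throw new InvalidOperationException("No user bound to this thread.");

    // Call from every exit path, including exceptions (e.g. in a finally block).
    public static void Clear() => _user.Value = null;
}

At the request origin: CurrentUser.Set(proxy); try { /* work */ } finally { CurrentUser.Clear(); }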
|
|
|
|
|
Blockchain!
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
If you solved it, can you give the code...
|
|
|
|
|
I have solved it but I can't give it to you as you aren't a level 8 wizard.
This space for rent
|
|
|
|
|
What about a level 9 cleric?
Natch it is never usable by a barbarian regardless of level.
|
|
|
|
|
I have an old Silverlight application that I would like to convert for demonstration purposes. Converting to WPF is relatively trivial.
Does the hive mind know if it is possible to:
Deploy SQL Server to Azure - Yes
Deploy a WCF service to Azure
Use ClickOnce deployment via Azure
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
|
I feel like a ditz; I found both of them after I posted.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
The back end is C# / MVC / Knockout
I've been trying to figure this out for a while with no luck. I can get it to partially work but the session is never passed back to the end user.
What I'm trying to figure out is how to add a single sign-on (non-SAML/OAuth; that's already working) to authenticate a user to a remote site using a username/password combination. Most sites are HTML, so from the server I can issue a login and it works. If I try to do this through the browser I get cross-site scripting errors (which I would expect). I know other people are doing this successfully. I need to do this device/browser independent. Outside of writing a plugin for each browser, how is this accomplished?
|
|
|
|
|
I have these options:
A. Store more data in one single JSON document
Or
B. Distribute it across multiple documents
Storing data in one single doc gives a neat solution, but I'm not sure how big is too big for a JSON document. What if it runs to 40-50 MB? Parsing docs of this size feels insane.
UPDATED:
This document works like an accumulator. For example, let's assume a requirement like this:
I need to store all the users taking part in discussions in CodeProject.
So, I need to capture the User ID & have a counter against each of them.
[
  { "User": 1332, "PostCount": 23 },
  { "User": 1124, "PostCount": 56 },
  { "User": 2323, "PostCount": 34 }
]
(PostCount is incremented every time a user posts a reply.)
The document keeps growing as new users start posting on CP.
The count gets updated when existing users make another post.
The document is read when the user metrics need to be shown on reports.
Now I have the option to maintain it yearly, monthly, or weekly, OR just keep it as a single document where all the users' activities would be recorded in a single doc.
If you divide the document yearly, monthly, weekly, etc., the document size stays reasonable, somewhere under 4-5 MB (monthly), but there will be more documents in the collection. You'll have to run an aggregate function to collect the data spread across documents and calculate the count for every user to produce the metrics. As I just said, here the individual document size stays reasonable, since it can't keep growing; it is bounded by the start and end date.
A single document means keeping everything in one file and updating it forever. For calcs, load the whole thing into memory, run the aggregate function to parse through the document, and produce the metrics.
(A single document sounds like a big-fat approach. I think I'd rather keep them as monthly chunks and make the documents handy to handle.)
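For the monthly-chunk route, the aggregation can be a simple LINQ fold. A sketch, assuming each monthly document deserializes into a list of UserCount objects mirroring the sample above (UserCount and Metrics are names I made up):

using System.Collections.Generic;
using System.Linq;

public sealed class UserCount
{
    public int User { get; set; }
    public int PostCount { get; set; }
}

public static class Metrics
{
    // Sum each user's PostCount across all monthly documents.
    public static Dictionary<int, int> Aggregate(IEnumerable<List<UserCount>> monthlyDocs) =>
        monthlyDocs
            .SelectMany(doc => doc)
            .GroupBy(c => c.User)
            .ToDictionary(g => g.Key, g => g.Sum(c => c.PostCount));
}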
Starting to think people post kid pics in their profiles because that was the last time they were cute - Jeremy Falcon.
modified 1-Dec-17 5:28am.
|
|
|
|
|
At that size it is no longer an exchange format but a data store. In that case SQLite would probably be a better option.
I once wrote an application that used XML as a data store; every edit meant rewriting the entire file.
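A minimal sketch of that route, assuming the Microsoft.Data.Sqlite package and a made-up PostCounts table; an update touches one row instead of rewriting a 40-50 MB document:

using Microsoft.Data.Sqlite;

public static class PostCounter
{
    public static void Increment(int userId)
    {
        using (var connection = new SqliteConnection("Data Source=counts.db"))
        {
            connection.Open();

            var cmd = connection.CreateCommand();
            // Upsert: ensure the row exists, then bump only that row.
            cmd.CommandText =
                @"CREATE TABLE IF NOT EXISTS PostCounts(UserId INTEGER PRIMARY KEY, PostCount INTEGER NOT NULL);
                  INSERT OR IGNORE INTO PostCounts(UserId, PostCount) VALUES ($id, 0);
                  UPDATE PostCounts SET PostCount = PostCount + 1 WHERE UserId = $id;";
            cmd.Parameters.AddWithValue("$id", userId);
            cmd.ExecuteNonQuery();
        }
    }
}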
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Yep, it's a data-dump store. Though a single document looks neat in the design, I guess performance-wise it's a bad thing, mainly for the reason you mentioned: it has to load a whole lot of data into memory every time it has to update or read something. On the other hand, if I'm distributing it into multiple documents, it will need proper indexing/partitioning/sharding done right.
Starting to think people post kid pics in their profiles because that was the last time they were cute - Jeremy Falcon.
|
|
|
|
|
Vunic wrote: On the other hand, if I'm distributing it into multiple documents, it will need proper indexing/partitioning/sharding done right.
JSON, XML and CSV are great formats for exporting data, but they do not work well as data stores where frequent updates are expected. If your data contains images, they'll probably be stored in Base64, making the file even larger. If no updates to the data are expected, and it is merely a static export, then one of the text-based formats is preferred. Reading text is fast enough, but updating takes a severe and noticeable penalty.
Vunic wrote: Though a single document looks neat in the design
It is also a lot easier when making backups and sending stuff. I prefer to put all data in a single file; with a file-based database like SQLite, SQL CE or MS Access you can even store binary data easily. Another nicety is that you can link to these databases from SQL Server and interact with them as if they were local SQL Server databases.
If the data is not related at all, then I'd recommend zipping your various documents, so it remains "one file" in the eyes of the end user. Since you initially chose JSON, I'd expect data that is somehow related.
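To illustrate the update penalty with a concrete sketch (assuming Json.NET and a simple userId-to-count map; JsonCounterStore is a made-up name): bumping one counter still deserializes and rewrites the whole document.

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

public static class JsonCounterStore
{
    // Incrementing one user's count rewrites the entire file, however large it is.
    public static void Increment(string path, string userId)
    {
        var counts = JsonConvert.DeserializeObject<Dictionary<string, int>>(File.ReadAllText(path))
                     ?? new Dictionary<string, int>();
        counts[userId] = counts.TryGetValue(userId, out var n) ? n + 1 : 1;
        File.WriteAllText(path, JsonConvert.SerializeObject(counts));
    }
}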
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
I have updated the OP[^]. Please check the sample scenario.
Starting to think people post kid pics in their profiles because that was the last time they were cute - Jeremy Falcon.
|
|
|
|
|
You can't just look at a "number" and say it is "too big" (or small) without benchmarking.
Most internet traffic is automatically "zipped" (gzipped) by default, and text will on average compress to 10% of its original size. That addresses bandwidth.
Where else is it "too big"?
"(I) am amazed to see myself here rather than there ... now rather than then".
― Blaise Pascal
|
|
|
|
|
Gerry Schmitz wrote: You can't just look at a "number" and say it is "too big" (or small) without benchmarking.
If you look at my post, that's exactly what I did. Instead of benchmarking, I'll remind you that any text-based file needs to be rewritten to update, and it does not support random access.
Zipping such a text file means rewriting and re-zipping the entire stream, and writing that result to disk on every change. That's fine for a data dump in CSV that is imported, but it is not an ideal structure for data that tends to be modified.
Gerry Schmitz wrote: Where else is it "too big"?
GMail, if the attachment is larger than 25 MB. So yes, it would depend on context, wouldn't it?
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|