|
In my opinion you're right on track.
However, things could get complicated if the number of rows in the result set keeps changing (because rows are added or removed, the criteria change, etc.). So a straightforward solution would be to use paging nevertheless. If the connection between the application and the database isn't slow and the database isn't over-utilized, this shouldn't cause too much wait time.
Of course, if you have tables that you know are always small, fetching everything once and then paging locally would be an easy thing to do.
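As a sketch of what server-side paging can look like on SQL Server 2005/2008 (table, column, and filter names here are all hypothetical), a ROW_NUMBER-based page fetch might be:

```sql
-- Page parameters would normally come from the application.
DECLARE @PageNumber INT = 3;
DECLARE @PageSize   INT = 25;

WITH Numbered AS
(
    SELECT  OrderId,
            CustomerName,
            OrderDate,
            ROW_NUMBER() OVER (ORDER BY OrderDate DESC, OrderId) AS RowNum
    FROM    dbo.Orders
    WHERE   OrderDate >= '2009-01-01'     -- user-supplied filter
)
SELECT  OrderId, CustomerName, OrderDate
FROM    Numbered
WHERE   RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                   AND  @PageNumber * @PageSize
ORDER BY RowNum;
```

On SQL Server 2012 and later the same thing can be written more directly with `ORDER BY ... OFFSET ... FETCH NEXT`.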
|
|
|
|
|
At some point you are presenting too much data to a user.
If you are looking at a 25-row limit per page, then the sensible limit is probably well below 500 total rows.
At that point you know that the user knows what they are looking for. They are not just randomly scanning records. So make them tell you what they are looking for. Use that to create a query that restricts the total rows returned.
When creating servers I usually have a configurable maximum: user queries are run with a COUNT(...) first, and if the result is more than the maximum I return an error. The GUI screens are responsible for gathering enough user input for a single query to get below the limit.
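A minimal sketch of that count-first pattern, assuming a configurable server-side maximum and hypothetical table and parameter names:

```sql
DECLARE @MaxRows       INT = 500;              -- configurable maximum
DECLARE @SearchPattern NVARCHAR(100) = N'Smi%';  -- user-supplied criteria
DECLARE @Total         INT;

-- Count first, using the same criteria as the real query.
SELECT @Total = COUNT(*)
FROM   dbo.Customers
WHERE  LastName LIKE @SearchPattern;

IF @Total > @MaxRows
BEGIN
    -- Reject the query and tell the user to narrow the search.
    RAISERROR('Query would return %d rows; please narrow your search.', 16, 1, @Total);
END
ELSE
BEGIN
    SELECT CustomerId, LastName, FirstName
    FROM   dbo.Customers
    WHERE  LastName LIKE @SearchPattern;
END
```

Note that this runs the criteria twice (once for the count, once for the data), which is the usual trade-off with this pattern.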
|
|
|
|
|
If you are returning 10k rows to a web browser, you are sacked on the spot! One of the issues we have is that the users are always after data dumps to analyse in Excel; the continuous whine of "just give me a data dump" becomes really irritating. Now that we have volume policies it is easier to limit these.
As others have said, you need to filter the results to minimise the volume returned. I never use paging, although I do use a TOP 1000 in procs with a potential for high volume and then inform the user they have exceeded the volume policy. By the time you chuck 1000 records into a grid with local filtering and sorting, I see no requirement for paging.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: One of the issues we have is that the users are always after data dumps to analyse in excel
Does that not imply that something is missing from the application? I'm guessing because I don't know the business requirements, but if they are constantly extracting data into another tool to analyse, that suggests that they need some sort of information they can't get from the source application.
|
|
|
|
|
I have encountered this with several servers: CPU utilization with SQL Server on multi-core systems is very low, i.e. one core does all the work while the others are idle. What is worse, the queries are queued serially.
Has anyone encountered this? Is there a solution?
Its the man, not the machine - Chuck Yeager
If at first you don't succeed... get a better publicist
If the final destination is death, then we should enjoy every second of the journey.
|
|
|
|
|
Have you checked whether processor affinity is defined? It could be that only one core is allowed. It could also be that the disk subsystem is actually the bottleneck, so the CPU has to wait for the answers.
As for the serial execution, I think there are at least a few possibilities:
- again, waiting for the disk subsystem
- serial execution is forced because of locking issues
- transactions are run in the serializable isolation level.
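A few diagnostic queries that can help narrow this down on SQL Server 2005 or later (these use documented configuration options and DMVs; interpret the results against your own workload):

```sql
-- Check processor affinity and parallelism settings.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask';
EXEC sp_configure 'max degree of parallelism';

-- See whether schedulers on the other cores are online and doing work.
SELECT scheduler_id, status, current_tasks_count, runnable_tasks_count
FROM   sys.dm_os_schedulers
WHERE  scheduler_id < 255;   -- exclude hidden/internal schedulers

-- Look at what the current requests are actually waiting on
-- (disk I/O, locks, etc.) and whether anything is blocking them.
SELECT session_id, status, wait_type, wait_time, blocking_session_id
FROM   sys.dm_exec_requests
WHERE  session_id > 50;      -- skip system sessions
```

If `sys.dm_os_schedulers` shows only one scheduler with an `ONLINE` status, affinity is the likely culprit; if the requests pile up with I/O or lock wait types, the serialization is happening elsewhere.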
|
|
|
|
|
Thanks Mika, I will look into it...
In the meantime more information:
- We have a Windows application server service connecting to SQL Server via integrated security on an internal thread pool of 10 threads. Each thread creates a connection when needed and closes it when finished.
- The server is an HP with two quad-core hyperthreaded Xeons (16 logical processors in Windows) and 10k SATA RAID 5.
- Clients connect to the application server via TCP (no direct connection to SQL).
Its the man, not the machine - Chuck Yeager
If at first you don't succeed... get a better publicist
If the final destination is death, then we should enjoy every second of the journey.
|
|
|
|
|
OK, that's good background info. If you find anything suspicious, or more information regarding the problem, let us know.
|
|
|
|
|
How do you know that the application server is not the one that is queuing them serially?
|
|
|
|
|
Fair question. I presume that when you have parallel threads, each with its own connection, they query in parallel.
Its the man, not the machine - Chuck Yeager
If at first you don't succeed... get a better publicist
If the final destination is death, then we should enjoy every second of the journey.
|
|
|
|
|
Seems reasonable, as long as you have verified that, and also verified that it is not serializing the requests in some other way.
|
|
|
|
|
I just spent two hours googling best practices for storing pictures for an ASP.NET MVC project that I am starting.
There are two schools of thought. One is to store the pictures in a database (for ease of backups); the other is to use the site's file system (for speed).
This project will be for my family (a very large family, I might add). I want to keep pictures referenced to the user who uploaded them, and only share some of them with other members.
My question: what do you think about storing the files in the database? And why?
Thanks in advance.
Any good resources or links appreciated.
Frazzle the name say's it all
|
|
|
|
|
I use the file system, but then I have thousands of images. The economy of backing up the database and moving it to my dev environment alone dictates that I do not want the image files inside that backup. What do I care if the images are trashed on the server? I have less frequent backups of them elsewhere, and I certainly don't want to move them over the wire every time I take a backup of my data.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: I certainly don't want to move them over the wire every time I take a backup of my data.
That is a good point. I don't think I will have that many images, but you never know at design time whether the project will be a hit with the users.
Frazzle the name say's it all
|
|
|
|
|
I put them in the database; they're safer there. In the file system, someone may overwrite* or delete a file. I don't back up my databases, though.
* Replace your favorite shot of Granny with a picture from your cousin's bachelor party (or vice versa).
|
|
|
|
|
PIEBALDconsult wrote: * Replace your favorite shot of Granny with a picture from your cousin's bachelor party (or vice versa).
As I am almost a grandfather, this would make me upset!
Thank you for the reply. I will consider this issue when I finally decide which way to go.
Frazzle the name say's it all
|
|
|
|
|
There are a few questions you should answer to make this decision.
Choose the file system if:
- image sizes are smaller than 2 MB
- no editing of the images is required in future
- you need faster access
Choose the database if:
- image sizes are on the higher side (2+ MB)
- easy maintenance and backup are required
There is another approach, FILESTREAM, if you are planning to use SQL Server 2008.
http://msdn.microsoft.com/en-us/library/cc716724.aspx[^]
Hope it helps.
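For reference, a minimal FILESTREAM sketch on SQL Server 2008. This assumes FILESTREAM is already enabled on the instance and that the database has a FILESTREAM filegroup; the table, column, and filegroup names here are all hypothetical:

```sql
-- A FILESTREAM table requires a uniqueidentifier ROWGUIDCOL column
-- with a unique constraint.
CREATE TABLE dbo.Pictures
(
    PictureId  UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    UserId     INT              NOT NULL,
    FileName   NVARCHAR(260)    NOT NULL,
    Data       VARBINARY(MAX)   FILESTREAM NULL
) FILESTREAM_ON PicturesFS;   -- hypothetical FILESTREAM filegroup

-- From T-SQL the column behaves like an ordinary varbinary(max);
-- the engine stores the bytes on NTFS but keeps them transactional.
INSERT INTO dbo.Pictures (UserId, FileName, Data)
VALUES (1, N'granny.jpg', 0xFFD8FFE0);   -- placeholder bytes
```

The practical effect is that the image data lives on the file system for speed, while inserts, updates, and backups still go through the database.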
|
|
|
|
|
|
I just read your article. Interesting. So basically all that gets stored in the database is a GUID and, correct me if I'm wrong, a hash of the file size? And the file itself goes in the file system? I don't see the benefit in storing the info twice, other than maybe so it can be strongly typed.
Maybe I missed the point. I will reread the article.
Frazzle the name say's it all
|
|
|
|
|
One of the main points is transactionality. In case of an error, you don't have to worry whether the database contains a path to a non-existent file or vice versa.
Second, backups: when you back up the database you also back up the files. No separate backups.
Third, speed with larger files, compared to storing all the binary data inside the database.
And the fourth thing, which I haven't covered very much yet, is that you can actually stream the file to the client more efficiently when fetching.
I think those would be the main points. All comments are very welcome.
|
|
|
|
|
Mika Wendelius wrote: One of the main points is the transactionality. In case of an error you don't have to worry if the database contains a path to a non existent file or vice versa.
So it will be "strongly typed". This I like!
Mika Wendelius wrote: Second thing, backups. When you backup the database you also backup the files. No separate backups.
This I like a lot!
Mika Wendelius wrote: Thirdly, speed with larger files compared to storing all the binary info inside the database.
This should not be a problem, because I intend to resize the images.
Mika Wendelius wrote: And the fourth thing, which I haven't covered very much yet is that you can actually stream the file better to the client when fetching.
This, on the other hand, I love, but I will have to learn a bit more about it. Looks like Google time!
Frazzle the name say's it all
|
|
|
|
|
Which is faster for comparing values, <> or !=, for performance tuning?
Thanks in advance.
|
|
|
|
|
It makes no difference; both are the same.
Its the man, not the machine - Chuck Yeager
If at first you don't succeed... get a better publicist
If the final destination is death, then we should enjoy every second of the journey.
|
|
|
|
|
As said, from a performance point of view they are the same. However, since you didn't mention which database you're using: from a syntax point of view there may be differences, since not all databases understand both syntaxes.
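On SQL Server specifically, the two operators compile to the same predicate, which you can confirm yourself by comparing execution plans. A quick sketch (the table name is hypothetical):

```sql
-- != is a non-standard alias for the ANSI-standard <> operator;
-- both queries produce identical execution plans in SQL Server.
SELECT COUNT(*) FROM dbo.Orders WHERE Status <> 'Closed';
SELECT COUNT(*) FROM dbo.Orders WHERE Status != 'Closed';

-- Run SET SHOWPLAN_TEXT ON first to see that the plans match.
```

If portability matters, `<>` is the safer choice, since it is the form defined by the SQL standard.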
|
|
|
|
|
We are using SQL Server 2008.
|
|
|
|