|
Hello all,
I have a report that deals with a large set of inventory data. Inventory items are stored in bins at various locations. At each location, different team members may be responsible for keeping track of inventory in the various bins. Team members may have more than one bin. We don't store which team members are responsible for a bin - that's pretty much determined by who is around when inventory needs to be taken. We are building a report that is basically a worksheet for listing what is in each bin.
We have a user interface that allows us to determine which bins should have page breaks after them (the bins are listed in order of location and then alphabetically). What we need is a way to group the result set coming back from our stored procedure so that each set of bins between page breaks is grouped. If that makes sense...?
I've been googling trying to find an answer to this question, but I think part of my problem is in how to express the question in such a way as to get an answer. Please advise.
Thanks,
Will
|
|
|
|
|
If I understand your description correctly, I'd advise the following route. Remember that grouping is just condensing the data on a common element. I'd recommend putting the stored procedure results into a result set that can be manipulated (a temp table or table variable), but add an additional column for a 'group number'. You'll then need to go over the table, using the page break marks to assign a group number to each row: every row between the beginning of the result set and the first page break is group number 1, every row between the first and second page breaks is group number 2, and so on.
If you're on SQL Server 2005, look at the ranking functions such as ROW_NUMBER(); their OVER clause (which has its own PARTITION BY, separate from the normal query GROUP BY clause) can help generate the group numbers. If you're on SQL Server 7.0 or 2000, you may have to hand-craft the group number assignments. Either way, once the group numbers are assigned, you simply select from the result set and group by the group number column.
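A minimal sketch of the hand-crafted assignment, which works on 2000 as well as 2005. The #Bins temp table, its SortOrder column (the display order: location, then bin name) and the PageBreakAfter flag set from your UI are assumed names, not from your post:

```sql
-- Each row's group number is 1 plus the number of page breaks
-- that occur before it in display order.
SELECT b.BinName,
       b.Location,
       1 + (SELECT COUNT(*)
            FROM #Bins prev
            WHERE prev.PageBreakAfter = 1
              AND prev.SortOrder < b.SortOrder) AS GroupNumber
FROM #Bins b
ORDER BY b.SortOrder;
```

From there, grouping the worksheet by GroupNumber gives you one group per page.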
Keep It Simple Stupid! (KISS)
|
|
|
|
|
Retrieve Data from 3 tables
Schema
symptom(ID,name,description)
visitentry(visitentryID,datevisit,weight,height,bmi,medicalNotes,patientID)
visitentrysymptom(symptonentryID, patientID, symptomID, date)
Records
symptom table
1 || fever || feeling hot in the body
2 || flu || sneezing non stop
visit entry table
1 || 03/12/2008 || 50 || 1.68 || 17 || high fever 40 degree || 12
visit entry symptom table
1 || 12 || 1 || 03/12/2008
2 || 12 || 2 || 03/12/2008
I want to extract the data from visitentry and visitentrysymptom.
I wrote this SQL code, but I don't know how to extract both fever and flu, because the visitentrysymptom table shows that the same patient has 2 symptoms:
Select * from visitentry
where patientID = @patientID
|
|
|
|
|
This is a pretty straightforward join across all three tables:
select ve.*, s.name, s.description
from visitentry ve
inner join visitentrysymptom ves on ves.patientID = ve.patientID and ves.date = ve.datevisit
inner join symptom s on s.ID = ves.symptomID
where ve.patientID = @patientID
You'll get one row per symptom for the visit (fever and flu in your sample data). I think you should read up on the basics of T-SQL, because joins like this are a pretty basic thing to do in T-SQL.
Keep It Simple Stupid! (KISS)
|
|
|
|
|
Hi,
I searched on Google and found different answers to my question; however, none of them worked. I am able to read from a table, but when I try to insert into a table, nothing is inserted and no error message is given. I am using SQL CE on the Windows Mobile 6.1 OS. If you could give me a working sample of an INSERT statement in SQL CE, I would really appreciate it.
The code below neither inserts the record into the table nor produces any error.
Dim myconnection As New SqlCeConnection(ConnStr)
Dim mycommand As New SqlCeCommand
Dim InsertStr As String = Nothing
mycommand.CommandType = CommandType.Text
mycommand.Connection = myconnection
Dim xdate As String = DateTimePicker1.Value.ToShortDateString()
InsertStr = "INSERT INTO Expenses ([Date],BelongTo,Status,ExpenseAmount,ExpenseCurrency) "
InsertStr += "VALUES ('" & xdate & "'," & ExpenseIndex & ",0," & CType(ExpenseAmount.Text, Decimal) & "," & "'" & Currency.SelectedItem.ToString & "');"
mycommand.CommandText = InsertStr
Try
mycommand.Connection.Open()
mycommand.ExecuteNonQuery()
Catch ex As Exception
MsgBox(ex.ToString & vbCrLf & InsertStr)
mycommand.Connection.Close()
Exit Sub
End Try
mycommand.Connection.Close()
What a curious mind needs to discover knowledge is nothing else than a pin-hole.
|
|
|
|
|
With a quick look I didn't notice anything wrong in your code (except that you don't use parameters in the insert statement; as stated many times in this forum: no, no, no and no). So what I suggest is that after executing the insert, you fetch the inserted record in your code to see if it's in the db.
I'm suggesting this because you didn't mention how you discovered that the insertion wasn't done. One possibility is that from Visual Studio you fetch data at design time from a different database than the one you use at run time. This can happen because in some cases the database file is copied under the bin/Debug folder at compile time, and that copy is the one used at runtime. At design time the original source of the copy is used, so you wouldn't actually see any changes.
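On the parameters point, a minimal sketch of the same insert rewritten with SqlCeParameter objects (the Expenses table and column names are taken from your post; ConnStr, ExpenseIndex and the control names are assumed to be the ones you already have):

```vbnet
' Hedged sketch: parameterized INSERT for SQL Server CE.
' Parameters avoid quoting/formatting problems with dates and decimals,
' as well as SQL injection.
Using conn As New SqlCeConnection(ConnStr)
    Using cmd As New SqlCeCommand( _
        "INSERT INTO Expenses ([Date], BelongTo, Status, ExpenseAmount, ExpenseCurrency) " & _
        "VALUES (@Date, @BelongTo, @Status, @Amount, @Currency)", conn)
        cmd.Parameters.AddWithValue("@Date", DateTimePicker1.Value.Date)
        cmd.Parameters.AddWithValue("@BelongTo", ExpenseIndex)
        cmd.Parameters.AddWithValue("@Status", 0)
        cmd.Parameters.AddWithValue("@Amount", Decimal.Parse(ExpenseAmount.Text))
        cmd.Parameters.AddWithValue("@Currency", Currency.SelectedItem.ToString())
        conn.Open()
        cmd.ExecuteNonQuery()
    End Using
End Using
```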
|
|
|
|
|
I created a filestream in SQL Server 2008 and inserted a Word document into it, and now I want to display this word document in asp.net 2008. How can I do that?
|
|
|
|
|
lambo wrote: display this word document in asp.net 2008
That's going to be a neat trick: displaying a Windows document format in an ASP page. Where have you seen or heard about this being done before?
Filestream does not imply that you can universally display the contents; it is a method of storing the binary data in the file system where it is managed by SQL Server. You can retrieve it from SQL, but you can only display it in Word.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Well, you can set the MIME type etc. for the file stream you are sending to the web page and the browser will open it (using Word) in the browser window. I never really liked this approach, but you can do it.
Not sure what "ASP.net 2008" is, though...
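A minimal sketch of that approach in an ASP.NET code-behind; docBytes is an assumed variable already read from the FILESTREAM column (e.g. with a SqlDataReader), and the filename is illustrative:

```vbnet
' Hedged sketch: stream a stored Word document to the browser.
' docBytes (Byte()) is assumed to hold the document content.
Response.Clear()
Response.ContentType = "application/msword"
Response.AddHeader("Content-Disposition", "inline; filename=document.doc")
Response.BinaryWrite(docBytes)
Response.End()
```

Whether it opens inline or prompts a download depends on the client's browser and Word settings.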
|
|
|
|
|
Hi,
I am handling a VLDB (very large database), in which some tables contain billions of records, and a few applications are continuously accessing it. Right now I am using a database manager library (a C# DLL) to handle the database access. I am looking for suggestions and useful links to help me increase the performance of these applications.
What are efficient ways to handle this kind of large database? Queries are currently taking a long time to return.
|
|
|
|
|
In (as they say) no particular order, I would look at:
> partitioned tables and/or views, as these can give a massive performance gain
> whether all the data is required continually - could some of it be archived?
> the physical location of the data and indices - ensure separate controllers etc.
> the indices themselves - can they be improved?
Those are a few starting points; I'm sure other people will be able to come up with more.
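As a sketch of the first point, assuming SQL Server 2005 or later (all table, column and filegroup names here are illustrative only, not from your system):

```sql
-- Hedged sketch: a table partitioned by date range, so queries
-- touching recent data only scan the relevant partitions.
CREATE PARTITION FUNCTION pfByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01');

CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear TO (fgOld, fg2007, fg2008);

CREATE TABLE dbo.Orders (
    OrderID   bigint   NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON psByYear (OrderDate);
```

Placing the filegroups on separate physical disks combines this with the third point above.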
Bob
Ashfield Consultants Ltd
|
|
|
|
|
Thanks, I know that no one will have a ready-made answer.
Anyway, I am not sure about your 3rd and 4th points - can you shed some light on them?
|
|
|
|
|
By putting your data and indices on totally separate disks (including separate disk controllers) you will improve performance. Checking the execution plans of the most common queries will reveal whether the indices on the tables are the most efficient. There are also loads of other possibilities.
I would do as Mycroft suggests: get a DBA with strong VLDB experience in to review your system.
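For the separate-disks point, a sketch of moving nonclustered indexes onto their own filegroup (database, file, path and index names are illustrative only):

```sql
-- Hedged sketch: a dedicated filegroup on another disk for indexes,
-- so index reads don't compete with data reads on the same spindle.
ALTER DATABASE Inventory ADD FILEGROUP IndexFG;

ALTER DATABASE Inventory
ADD FILE (NAME = InventoryIdx1,
          FILENAME = 'E:\Data\InventoryIdx1.ndf',
          SIZE = 512MB)
TO FILEGROUP IndexFG;

CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
ON IndexFG;
```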
Bob
Ashfield Consultants Ltd
|
|
|
|
|
Have you engaged a DBA to optimise your database? With a VLDB it is probably worth paying for professional help. Unless a professional DBA or highly skilled developer has been involved in the DB, there are a myriad of things that can be done to improve performance.
From your lack of understanding of points 3 and 4 in Bob's suggestions, I suspect there has not been any optimisation beyond throwing some indexes at the tables. Indexes and data files can reside on multiple disks, dramatically improving response times.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Very good answers from both Ashfield and Mycroft.
To add a few things: you said that applications are accessing the database, but you didn't mention whether they mostly just read or also modify its contents. Depending on the answer, you should take somewhat different paths in optimization. For example, if the database is mostly read-only, indexing can be very aggressive, while if it is also updated frequently you will need a more balanced solution. This principle applies not just to the whole database but to different areas within it, so different tables can have different optimization goals.
As Mycroft suggested, you would benefit from getting a good DBA to look at the problems and advise you further. The solution won't be purely technical; it will include technical modifications as well as going through the logic of the applications, and possibly modifying it where necessary and possible.
One technical thing to add to the excellent list from Ashfield: you could also consider federated servers, which distribute the load across multiple instances.
|
|
|
|
|
|
sujithkumarsl wrote: but also looking to improve the performance
Start by investigating the execution plans for the problem queries. From there on you should be able to locate the bottlenecks and eliminate them with the info from all the replies.
If you later have some specific questions or problems, I think you should open a new thread with the new question. In that case, don't forget to add enough information about the problem.
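A sketch of the usual starting commands for that investigation (the SELECT is just a placeholder for one of your actual slow queries):

```sql
-- Hedged sketch: gather timing and I/O statistics for a problem query.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '2008-01-01';

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- Or capture the estimated plan without executing the query:
SET SHOWPLAN_XML ON;
GO
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '2008-01-01';
GO
SET SHOWPLAN_XML OFF;
GO
```

High logical reads or scans in the plan point you at the indexes (or queries) that need attention.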
|
|
|
|
|
Can anyone suggest an architecture for my applications to communicate with the database? Right now we are using a 3-tier architecture, and we have a database manager library to communicate with the database.
Can anyone suggest a design which can make things faster?
My small attempt...
|
|
|
|
|
This question is too vague.
As I replied earlier, break the problem into parts, post a new question on a new thread and provide enough information about the problem and the requirements.
Also remember to select correct forum in order to get good answers.
|
|
|
|
|
We have this stored procedure:
DECLARE ParentCursor CURSOR FOR
    select iitm_id_parent from z_structure where iitm_id = @pverID
open ParentCursor
fetch next from ParentCursor into @versionId
while (@@fetch_status = 0)
begin
    select @isDraft = isnull(ipve_draft, 0) from i_product_version where ipve_id = @versionId
    select @nCount = count(*)
    from i_prod_wwrk
    where wwrk_id = @workflowID and ipro_id in (select ipro_id from i_product_version where ipve_id = @versionId)
    if ((@nCount > 0) and (@isDraft = 0))
        insert into e_parents (expo_id, pver_id, pver_parent_id) values (@expoID, @pverID, @versionId)
    fetch next from ParentCursor into @versionId
end
close ParentCursor
deallocate ParentCursor
And now, we are optimizing it into:
INSERT INTO E_PARENTS (EXPO_ID, PVER_ID, PVER_PARENT_ID)
SELECT @expoID, @pverID, z.iitm_id_parent
FROM z_structure z INNER JOIN i_product_version pv ON z.iitm_id_parent = pv.ipve_id
WHERE z.iitm_id = @pverID AND ISNULL(pv.ipve_draft, 0) = 0 AND
EXISTS (SELECT 1 FROM i_prod_wwrk pw
WHERE pw.wwrk_id = @workflowID AND pw.ipro_id IN
(SELECT ipro_id FROM i_product_version WHERE ipve_id = z.iitm_id_parent)
)
Do you think this is correct? And are there any further suggestions?
Thanks!
Rafferty
|
|
|
|
|
In most cases a single statement performs much better than a cursor, so based on this you're on the correct path.
As for the actual performance, use the execution plan information and execution statistics. Compare the execution of the cursor-based logic to the statistics from the single insert. Also check that the execution plan is acceptable and doesn't contain severe bottlenecks.
|
|
|
|
|
Hey it's you again!
Are you saying that the single statement can contain bottlenecks compared to cursors?
I will try to read up on execution plan info and execution statistics.
Thanks.
Rafferty
|
|
|
|
|
Rafferty
You got Mika's statement exactly backwards: cursors are evil, try to avoid them wherever possible.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Mycroft Holmes wrote: cursors are evil
Exactly
|
|
|
|
|
Rafferty Uy wrote: Are you saying that the single statement can contain bottlenecks compared to cursors
Not at all.
What I'm saying is that these two techniques are (totally) different, and the statement-based approach is the better one. But when you create a statement (complex or not), you should always verify the performance side.
I've seen many situations where a cursor-based procedure was converted to a single statement but the result performed worse, simply because the optimization and execution verification were forgotten or ignored. So the basic idea is: always verify the results.
Mika
|
|
|
|