In the design, data will be accessed by medical institutions, so I have to consider connections to the server in advance.
That is a business requirement, not an implementation requirement. So what, exactly, do you think is going to be connecting to your database?
And did you actually attempt to size this? How many requests will your product generate? How long will it take to process them? How many users will be using it? What is the expected sustained rate? What is the burst rate?
If a request took 1 second and was made once an hour then you could handle 10,800,000 requests without reconfiguring anything on the database server.
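As a sketch of that sort of sizing arithmetic (all figures here are illustrative assumptions, not measurements), you can compare the demand side and the supply side directly:

```python
# Back-of-the-envelope capacity sketch; every number here is hypothetical.
def sustained_requests_per_hour(users, requests_per_user_per_hour):
    """Demand side: how many requests arrive per hour."""
    return users * requests_per_user_per_hour

def capacity_per_hour(seconds_per_request, concurrent_workers=1):
    """Supply side: how many requests can be serviced per hour."""
    return concurrent_workers * 3600 / seconds_per_request

# Even if every one of the ~6,000 US hospitals made one request per hour:
demand = sustained_requests_per_hour(6000, 1.0)        # 6,000 requests/hour
# ...a single serial connection at 1 second per request already services
# 3,600/hour, and two concurrent connections comfortably cover the demand.
supply = capacity_per_hour(1.0, concurrent_workers=2)  # 7,200 requests/hour
print(demand, supply)
```

The same arithmetic, run with your real per-request time and burst rate instead of these placeholders, is the sizing exercise being asked about.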
There are fewer than 6,000 hospitals in the US. There are fewer than 200,000 medical clinics. How many of those are in your market?
At least where I am, selling into medical concerns is significantly difficult, even for institutions that have money. Many institutions operate on tight budgets, so expecting to own the entire market is highly unrealistic. (And yes, I have worked on products in the medical industry.) So what is your real expected market share? What is your realistic expected growth rate?
And this of course completely ignores how these places are going to connect to you. "The internet" means that you are going to expose your database directly to the internet, which is a bad idea, and I suspect (hope) that institutions would refuse to do business under that arrangement.
Most performance problems occur due to architecture and design problems, not technological problems. Attempting to solve serious performance problems with technology is likely to fail, because technology only allows for incremental impacts on performance. And this of course presumes you use the technology right in the first place.
To capture which stored procedures are being hit in the production environment, the profiler would be used. Would it slow down the environment? Would it affect performance? If so, what are the alternatives for tracing stored procedure hits without affecting performance?
I get a SqlTransaction zombie exception in production (in a .NET Windows application, not an ASP.NET application). The article below explains how and when the exception is thrown. In production there is one server application which performs the DB operations, and there can be more than one client application interacting with the server.
The article says the exception is thrown when the connection is explicitly closed. My question is: in a real-world application, what are the possible ways the connection could get closed (it occurs in production only in a few cases)? I would like to hear the possible scenarios to reproduce it.
One of the servers is being stopped by the exception. Please do help.
Is there any possibility of stored procedures closing the connection unexpectedly? If so, please describe how. Thanks in advance.
Neither - I'd use SQL Server because that is the database I am most familiar with. If you have no skills in either, then look at the support resources for each database. The opinion of a random bunch of geeks is not a good basis for making a decision.
Never underestimate the power of human stupidity
I imported a table called sales to my Oracle database from Access. When I look in the Oracle Application Express object browser, I see the table and I can open it and see the design and the data. However, when I enter the query - select * from sales - I get the error message ORA-00942: table or view does not exist. What am I doing wrong?
I would like to thank everyone in advance for their answers, as I am extremely stuck. Also, apologies if I have posted this question in the wrong place, as it is my first attempt at this.
I am currently developing a C# application for a project at work, which uses a SQL Server 2008 database as the back end. I know it's an older way of working, but I have been using Windows Forms and ADO.NET, and through this site and various books I have managed to get quite far; now, however, I have hit a problem that I cannot find answered here. Some tables are going to be very large, so I want to implement paging, and I have done this through a stored procedure. The application works out what page it requires and passes the start row index and the number of rows to return to the procedure. This works perfectly (though I have read various posts about performance and about different ways of achieving paging in SQL).
My problem has arisen now that I want to introduce searching into the mix. Most posts/articles are geared towards ASP.NET applications in which a user specifies the search criteria and the results for that search are returned in paged form. But I want to copy the functionality of a system we currently run at work called Syspro. Syspro returns you to the page relevant to the data you have just edited, added, or searched for. For example, it opens the form with a data grid view on page one; I add new data which will be 100 pages in; after adding the data and closing the add form, it finds the page that result is on, jumps the data grid view to that page, and highlights the new row. This works the same way when you search and edit, as you can skip through data on the add/modify form as well.
Is there a way of replicating this behavior in SQL, or is this something that is handled in the C# application? I need a way of determining what page the data I require is on, but I understand that SQL only assigns row numbers after a query is run against the table. I have thought of using the primary key to save me the trouble, but some tables will allow deletes.
Is there a way of replicating this behavior in SQL o
Your question about what "behavior" you want to replicate is confusing, but yes, you can implement paging in SQL.
1. Create your SQL with appropriate search parameters. You should still use parameterized values.
...a. If you have just a couple of items you can hard code the SQL.
...b. With more than a couple items create the SQL dynamically.
2. Apply appropriate ordering to the SQL. This can include creating it dynamically.
3. Apply paging (dynamic SQL again). See the following link.
At the business level you might also want to consider what happens to paging if the data being paged changes underneath. Very common answers are
1. It won't, so there is no problem.
2. It won't in relation to the user, because the workflow and usage ensures that.
Be VERY sure that you analyze this at the business level before deciding that this could be a problem.
Also, you should require sufficient search constraints to ensure that no single result has a lot of pages. Three pages is OK; 100 isn't. If someone claims otherwise, then note that users do not randomly seek data. They know what they are looking for, so make them tell you. Paging is a convenience so they don't have to spell out everything in detail, but ignoring that can inhibit rather than help workflow.
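As a concrete sketch of steps 1-3, here is the ROW_NUMBER() pattern (which also works on SQL Server 2008; SQL Server 2012+ could use OFFSET/FETCH instead), demonstrated against an in-memory SQLite database in Python. The table, column names, and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (stock_code TEXT, description TEXT)")
conn.executemany("INSERT INTO stock VALUES (?, ?)",
                 [(f"A{i:03d}", f"item {i}") for i in range(25)])

def fetch_page(conn, search_prefix, page, page_size):
    # Number the rows over the sort order, filtered by the search
    # parameter, then keep only the requested page. Values are passed
    # as parameters, never concatenated into the SQL.
    sql = """
        SELECT stock_code, description FROM (
            SELECT stock_code, description,
                   ROW_NUMBER() OVER (ORDER BY stock_code) AS rn
            FROM stock
            WHERE stock_code LIKE ?
        )
        WHERE rn BETWEEN ? AND ?
    """
    first = (page - 1) * page_size + 1
    last = page * page_size
    return conn.execute(sql, (search_prefix + "%", first, last)).fetchall()

rows = fetch_page(conn, "A", 2, 10)  # second page of 10 matching rows
print(rows[0][0])                    # first stock code on page 2: "A010"
```

The search predicate lives inside the numbered subquery, so the row numbers always reflect the filtered result set rather than the whole table.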
Thank you jschell.
But I must apologise, as after returning to work today and looking at the programme from which I am trying to copy some functionality, I have realised I haven't explained what I am after properly, and fear I have caused some confusion. As for what you have told me above, it validates most of the code I have written; thank you for that.
With that, I'll try again to explain myself. I caused confusion by saying there was a search involved - apologies. In a nutshell, what I am after is basically "selecting and displaying a page based on a record's location". Now, I know that until a query is run against the table there is no theoretical position of a record. But what the programme does is open the pageForm on page one of the data in that table. Although there is an ID column on the table (as I have poked around in the programme files), the page is organised by the first column, which is stock-Code. Hence you can page through all the results from a stock code beginning with "A" right through to "Z", which I think is around 1,000 pages at the minute, not that we go through them all. Then a user opens a new form to edit or add a record; when the form is closed and the data returned to the database, the pageForm will display the page relevant to the stock code added, and you can page backwards and forwards from this page. The same applies to the search, which is conducted in a separate searchForm that displays the possible results for the criteria; the user selects one and returns to the pageForm, which now displays the page with that record on it.
Do you, jschell, or anyone else have any idea how this is accomplished?
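One common way to do this (not necessarily how Syspro itself does it) is to count how many rows sort at or before the record under the grid's ORDER BY column, then divide by the page size. A Python/SQLite sketch with a made-up stock table:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (stock_code TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO stock VALUES (?)",
                 [(f"S{i:04d}",) for i in range(250)])

def page_of(conn, stock_code, page_size):
    # Position = number of rows that sort at or before this record
    # under the grid's ORDER BY; its page is ceil(position / page_size).
    (position,) = conn.execute(
        "SELECT COUNT(*) FROM stock WHERE stock_code <= ?",
        (stock_code,)).fetchone()
    return math.ceil(position / page_size)

print(page_of(conn, "S0100", 20))  # 101st row at 20/page -> page 6
```

Because the count is recomputed at lookup time against the same ORDER BY the paging query uses, deletes and inserts elsewhere in the table don't break it, which sidesteps the primary-key-gap concern.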
I am trying to learn ADO.NET, so I started with an article on how to build a database using ADO with Access and VB.NET. I think this article is compatible with Visual Studio 2005, not the 2010 version which I have, so I am facing some errors, like with:
Me.BindingContext(DataSet1, "table1").Position = 0
I am trying to read the first record from the Access database.
Can anyone guide me to learning the new version of ADO.NET with MS Access?
I'm not designing a purchase order system per se, but it is the closest thing to what I'm actually doing, so that's why my question/concern involves it. In a "minimal" PO system, you have the following tables:
When an order is first created (e.g., #123), product descriptions and prices are pulled from the Products table. That order is marked as paid. A few months go by and some of the products in the Products table get updated with new prices. Now what happens when I look at order #123 and its details? The price of that order will no longer match the sum of its individual products. Is that common? Am I looking at this from the wrong angle?
"One man's wage rise is another man's price increase." - Harold Wilson
"Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons
"Show me a community that obeys the Ten Commandments and I'll show you a less crowded prison system." - Anonymous
When a new price is added, the FromDate of the new record and the ToDate of the previous price are both set to Now.
You find the current price with WHERE ToDate IS NULL.
To find the price for an order, you can use WHERE orderdate BETWEEN FromDate AND ISNULL(ToDate, Now).
You may add NextPrice and PreviousPrice fields if you need to do a lot of lookups.
(Specific syntax depends on the particular database in use.)
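A minimal sketch of that price-history lookup, using Python's sqlite3 so it is runnable (table and column names are assumptions; SQL Server would use ISNULL where SQLite uses COALESCE). The comparison here is half-open so that a row cannot match two adjacent windows on the boundary date:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE price_history (
    product_id INTEGER, price REAL, from_date TEXT, to_date TEXT)""")
conn.executemany("INSERT INTO price_history VALUES (?, ?, ?, ?)", [
    (1, 9.99,  "2023-01-01", "2023-06-01"),  # old price, closed out
    (1, 12.49, "2023-06-01", None),          # current price, open-ended
])

def price_on(conn, product_id, order_date):
    # An order dated inside a price's validity window gets that price;
    # COALESCE substitutes "forever" for the open-ended current row.
    row = conn.execute(
        """SELECT price FROM price_history
           WHERE product_id = ?
             AND ? >= from_date
             AND ? < COALESCE(to_date, '9999-12-31')""",
        (product_id, order_date, order_date)).fetchone()
    return row[0] if row else None

print(price_on(conn, 1, "2023-03-15"))  # 9.99  (historical price)
print(price_on(conn, 1, "2023-07-01"))  # 12.49 (current price)
```

With this layout, reopening order #123 months later still reproduces the price that was in force on its order date, even though the Products table has moved on.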
jschell has the right of it: put all the details required to calculate the value on the detail record. This may also include any discount/adjustment values. Your invoice report should only pull the descriptors from the product and customer tables.
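A sketch of that snapshot approach, again in Python/SQLite with made-up table names: the detail line copies the descriptor and unit price at order time, so later catalogue changes leave the stored order untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (product_id INTEGER PRIMARY KEY, name TEXT, price REAL);
CREATE TABLE order_details (order_id INTEGER, product_id INTEGER,
                            name TEXT, unit_price REAL, qty INTEGER);
INSERT INTO products VALUES (1, 'Widget', 10.00);
""")

# At order time, snapshot the descriptor and price onto the detail line.
conn.execute("""INSERT INTO order_details
                SELECT 123, product_id, name, price, 2
                FROM products WHERE product_id = 1""")

# A later price change does not touch the stored order.
conn.execute("UPDATE products SET price = 15.00 WHERE product_id = 1")

(total,) = conn.execute(
    "SELECT SUM(unit_price * qty) FROM order_details WHERE order_id = 123"
).fetchone()
print(total)  # 20.0 - still priced as at order time, not 30.0
```

The trade-off versus the price-history table is storage and denormalization in exchange for self-contained, audit-friendly order records.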
I am learning ASP.NET with C# and SQL Server. I came across a huge fixed-width text file of 1,780 columns. I have to import this file into a SQL Server table. I would appreciate any help with this.
I want to show results something like this...
The first row contains the opening balance, calculated as SUM(Balance) WHERE datefield <= DateByUser,
and the next rows show the records between the dates given by the user, evaluating the balance on each row.
The output should look like this:
. . . .
. . . .
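A sketch of that opening-balance-plus-running-balance report in Python/SQLite. The table name, columns, and the boundary convention are assumptions; here the opening balance sums every transaction before the start date, and each subsequent row carries the balance forward:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (txn_date TEXT, amount REAL)")
conn.executemany("INSERT INTO ledger VALUES (?, ?)", [
    ("2023-01-05", 100.0), ("2023-02-10", -30.0),
    ("2023-03-01",  50.0), ("2023-03-20", -20.0),
])

def statement(conn, date_from, date_to):
    # Opening balance: everything strictly before the reporting range.
    (opening,) = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM ledger WHERE txn_date < ?",
        (date_from,)).fetchone()
    rows = [("OPENING BALANCE", None, opening)]
    running = opening
    # Then each in-range row extends the running balance.
    for txn_date, amount in conn.execute(
            "SELECT txn_date, amount FROM ledger "
            "WHERE txn_date BETWEEN ? AND ? ORDER BY txn_date",
            (date_from, date_to)):
        running += amount
        rows.append((txn_date, amount, running))
    return rows

for row in statement(conn, "2023-03-01", "2023-03-31"):
    print(row)
```

On SQL Server 2012+ the loop could be replaced by SUM(amount) OVER (ORDER BY txn_date), seeded with the opening balance.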
I'm looking for a method/example/pseudo-code of importing a CSV file into a SQL Server table with several requirements:
1) I do not know the quantity of columns, the column headers, nor the data type of each column prior to the program running. Along with the header fields, I have another file which describes the type and size of each column (like integer, decimal, or string), and I can already create the destination table based on this input.
2) Because I don't know the format of the input table before running the program (i.e. the input table changes every time), I can't create a class specifically designed to describe the contents of each record from the CSV file.
3) I already have a method working where a DataTable gets fully populated and the SqlBulkCopy object's .WriteToServer() method inserts all of the records. HOWEVER, there's a problem when the source file gets too large (over 500 MB or so), and the result is an OutOfMemoryException.
4) The AdHocGeek has a partial solution which works with strings only but doesn't address integers, decimals, dates, etc.
The problem is that I can't see a way to use an IDataReader with a complex (i.e., many field types) table on the fly. I've added the .ColumnMappings() method to specify the names of the columns with which to map the input columns onto the destination SQL Server table. When the application runs, however, it gets an error indicating "The given value of type String from the data source cannot be converted to type decimal of the specified target column".
I can't determine why the DataTable works and the DataReader doesn't, and I can't find any examples to prove otherwise.
Well, I would say you are screwed, only because you are trying to transform the data during your load. IMHO transforms are the biggest time waster ever inflicted on the load process. I would split the operation into a load process and then a transform. Caveat: I am not addressing the size issue, as I have never had the problem (and 500 MB is very small beer in our environment).
I would do the following.
Read in the data file and get the header record.
Create a staging table in SQL Server exactly matching the column headers - every column varchar (or nvarchar if needed).
Convert the CSV file to a DataTable (this article [^] may help).
Use BulkCopy to load the data (everything is varchar, so it WILL load).
Use a stored proc to do the transforms from the staging table to the target table. If you have to use a dictionary file, then you are going to have to take that into account, and it will be a challenge (probably a crap load of dynamic SQL).
Drop the staging table
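The steps above can be sketched end-to-end. This Python/SQLite version stands in for the SqlBulkCopy-plus-stored-proc flow on SQL Server, using a hypothetical three-column CSV whose schema is only "discovered" at run time from the header:

```python
import csv
import io
import sqlite3

# Hypothetical CSV; in the real case the schema is unknown until run time.
raw = "id,amount,when\n1,10.50,2023-01-01\n2,7.25,2023-01-02\n"
reader = csv.reader(io.StringIO(raw))
header = next(reader)  # step 1: read the header record

conn = sqlite3.connect(":memory:")
# Steps 2-3: staging table built from the header, every column plain text,
# so the bulk load cannot fail on type conversion.
cols = ", ".join(f'"{c}" TEXT' for c in header)
conn.execute(f"CREATE TABLE staging ({cols})")
placeholders = ", ".join("?" for _ in header)
conn.executemany(f"INSERT INTO staging VALUES ({placeholders})", reader)

# Step 4: typed target table; the transform is one SQL statement with
# explicit casts (a stored procedure would do this on SQL Server).
conn.execute("CREATE TABLE target (id INTEGER, amount REAL, when_date TEXT)")
conn.execute("""INSERT INTO target
                SELECT CAST(id AS INTEGER), CAST(amount AS REAL), "when"
                FROM staging""")

conn.execute("DROP TABLE staging")  # step 5
print(conn.execute("SELECT SUM(amount) FROM target").fetchone()[0])
```

Because the staging load is all-text, the "String cannot be converted to decimal" failure moves out of the streaming load and into one set-based, debuggable transform.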
That's a very interesting idea that should work. I would need to create a temporary SP based on the fields and data types for the conversion. It also pushes the field validation into the SP, but that shouldn't be a problem.
...because one of the three subqueries returns more than one row.
(SELECT account_number FROM patients WHERE file_no = param_file_no);
(SELECT account_number FROM accounts WHERE account_id = param_account_id);
(SELECT price_list_id FROM accounts WHERE account_number = param_account_number);
Meaning that the table holds more than a single record with the requested ID.
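A runnable illustration of the failure mode, in Python/SQLite with a made-up patients table: a scalar subquery fails exactly when the lookup column is not unique, so the fix is to deduplicate the data or tighten the predicate until precisely one row matches.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (file_no INTEGER, account_number INTEGER)")
# Two records share file_no = 7 - a scalar subquery keyed only on
# file_no would return two rows (the reported error).
conn.executemany("INSERT INTO patients VALUES (?, ?)", [(7, 100), (7, 101)])

dupes = conn.execute(
    "SELECT COUNT(*) FROM patients WHERE file_no = ?", (7,)).fetchone()[0]
print(dupes)  # 2 -> the subquery is not scalar for this file_no

# Tightening the predicate (here by also keying on account_number)
# restores a single-row result; alternatively, clean up the duplicates.
one = conn.execute(
    "SELECT account_number FROM patients "
    "WHERE file_no = ? AND account_number = ?", (7, 100)).fetchall()
print(len(one))  # 1
```

Running the offending SELECT on its own with a representative parameter value, as above, is the quickest way to find which of the three subqueries is the multi-row one.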