I read somewhere that the difference between count(*) and count(1) or count(indexed_column) is not nearly as relevant now as it used to be. Basically, once the writers of the database engines learned of the problem, they quietly fixed it. The habit of writing count(1), however, outlived the bug. In the late nineties, using count(indexed_column) or count(1) would make queries a lot faster on many commercial engines (e.g. Oracle 7). At this point, however, most modern, non-toy DB engines should prefer count(*) over count(1).
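The equivalence is easy to verify on any modern engine. A quick sketch using SQLite (the table `t` is invented for illustration) showing that count(*) and count(1) agree, while count(column) differs because it skips NULLs:

```python
# Minimal sketch: COUNT(*) vs COUNT(1) vs COUNT(column) on a modern engine.
# Table name "t" and its contents are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t (x) VALUES (?)", [(i,) for i in range(5)])
conn.execute("INSERT INTO t (x) VALUES (NULL)")  # NULL row

star = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
one = conn.execute("SELECT COUNT(1) FROM t").fetchone()[0]
col = conn.execute("SELECT COUNT(x) FROM t").fetchone()[0]  # ignores NULLs
print(star, one, col)  # 6 6 5
```

Note that count(*) and count(1) both count rows (including the NULL one), whereas count(x) only counts rows where x is not NULL, so the three are not interchangeable in general.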
I don't know what the performance would be like, but your code seems a bit weird. Do table1 and table2 have any column(s) in common? It seems you're going to end up with a Cartesian product of some sort.
The "ORA-01033: ORACLE initialization or shutdown in progress" error can also happen when Oracle is attempting startup or shutdown and is hanging on a resource, such as a failed disk or a write to the redo log.
Wait a few minutes and retry.
If the error persists after a few tries, restart your machine.
♫ 99 little bugs in the code,
99 bugs in the code
We fix a bug, compile it again
101 little bugs in the code ♫
Suppose I have a customer orders table in which we have to specify a customer for each new order. The customers are mostly the same, so I want to make creating a new order easier by listing the customer names in the box.
I tried changing the DataSource, ValueMember, and DisplayMember, but it doesn't work.
I have to import an XML document larger than 10 GB into a SQL Server database. I also have to do XSD validation before processing and maintain a transaction. Please suggest the best way to do this. Should I use an SSIS package? Is it capable of processing such a large document, or do I have to write C# code to accomplish this task?
That is a big file and the vast majority of solutions deal with smaller files.
If it were me, I think I would do the following:
1. Create an app that does nothing but divide it into smaller files. It might not even need to do validation.
2. Create a second app that consumes the smaller files, validates them, logs each file processed, and then posts to the database.
Advantages of the above:
1. If an error occurs you are going to need to review it manually, and reviewing a 10 or 100 MB file is easier than reviewing a 10 GB one.
2. If an error occurs you can fix it manually and continue with the file that errored, rather than restarting everything.
3. You don't need to deal with the file being simply too big for some part of the process.
Regardless, you must have the following features:
1. Log as you complete each block of work (insert, whatever), so you not only know where something failed but can also track that it succeeded.
2. Ensure that transaction blocks are small. 1,000 rows is probably a good number.
3. Design for the possibility that something will fail, so you have a way to restart at the failure point.
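The batching-and-logging advice above can be sketched as follows. This is a minimal illustration against SQLite with invented table names (`items`, `progress`), not the poster's actual schema:

```python
# Sketch: load records in small transaction blocks, logging each completed
# block so a restart can resume at the failure point. Table/column names
# are invented for illustration; the real target would be SQL Server.
import sqlite3

BATCH = 1000  # small transaction blocks, per point 2 above

def load(records, conn, start_at=0):
    """Insert records in batches; each batch and its progress marker
    commit together, so a crash leaves a clean restart point."""
    conn.execute("CREATE TABLE IF NOT EXISTS items (val TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS progress (last_done INTEGER)")
    for i in range(start_at, len(records), BATCH):
        batch = records[i:i + BATCH]
        with conn:  # one transaction per batch; rolls back on error
            conn.executemany("INSERT INTO items (val) VALUES (?)",
                             [(r,) for r in batch])
            conn.execute("INSERT INTO progress (last_done) VALUES (?)",
                         (i + len(batch),))

conn = sqlite3.connect(":memory:")
load([f"rec{i}" for i in range(2500)], conn)
print(conn.execute("SELECT MAX(last_done) FROM progress").fetchone()[0])  # 2500
```

On restart, reading `MAX(last_done)` from the progress table gives the `start_at` value to resume from.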
I have 2 GB of RAM in my machine. I tried an SSIS package with a 1.5 GB file and it gives an error:
Error: 0xC02092AF at Data Flow Task, XML Source : The component "XML Source" (1) was unable to process the XML data. Insufficient memory to continue the execution of the program.
If SSIS is not the right choice, then I have to follow the process jschell suggests. Can anyone give me some sample code for splitting the file into smaller ones? I can split the 10 GB file into ten 1 GB files, but no smaller.
If I process ten 1 GB files, how can I maintain a transaction across all of them? I mean, if one file fails, all the other files should be rolled back.
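As a starting point for the splitting step, here is a hedged sketch using Python's streaming `xml.etree.ElementTree.iterparse`, which never holds the whole document in memory. The element name `record`, the `<root>` wrapper, and the `partN.xml` naming are assumptions about the file's shape:

```python
# Sketch: split a large XML file of repeated <record> elements into
# numbered part files via streaming parse. Element and file names are
# assumptions; adapt them to the real document.
import io
import xml.etree.ElementTree as ET

def split_xml(source, records_per_file, tag="record", prefix="part"):
    """Stream-parse `source`, writing groups of `tag` elements to
    part files; returns the list of file names written."""
    buf, part, written = [], 0, []

    def flush():
        nonlocal part
        part += 1
        name = f"{prefix}{part}.xml"
        with open(name, "w", encoding="utf-8") as f:
            f.write("<root>" + "".join(buf) + "</root>")
        written.append(name)
        buf.clear()

    for _, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == tag:
            buf.append(ET.tostring(elem, encoding="unicode"))
            elem.clear()  # release the parsed subtree as we go
            if len(buf) == records_per_file:
                flush()
    if buf:
        flush()
    return written

# tiny demo; iterparse also accepts a plain filename for the real 10 GB input
sample = "<root>" + "".join(f"<record><v>{i}</v></record>" for i in range(5)) + "</root>"
parts = split_xml(io.StringIO(sample), 2)
print(parts)  # ['part1.xml', 'part2.xml', 'part3.xml']
```

On the cross-file rollback question: a single database transaction spanning 10 GB of inserts defeats the purpose of splitting. A common alternative is to load every part into a staging table first, and only merge or swap the staging data into the live table once all parts have succeeded; if any part fails, you drop the staging data instead of rolling back a huge transaction.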
SELECT "tempBillDetails"."SampleNumber", "tempBillDetails"."Description", "tempBillDetails"."Amount",
       "tempBilling"."BillNo", "tempBilling"."BillDate", "tempBilling"."PartyName",
       "NewSampleEntrys"."NameOfSample", "NewSampleEntrys"."BatchNo",
       "tempBilling"."AgmarkCharges", "tempBilling"."DisplayName", "tempBilling"."OtherCharges",
       "tempBilling"."Discount", "tempBilling"."OtherChargesFor",
       "Charges"."FTest", "Charges"."Flag"
FROM ("Quali"."dbo"."NewSampleEntrys" "NewSampleEntrys"
      INNER JOIN (("Quali"."dbo"."tempBillDetails" "tempBillDetails"
                   LEFT OUTER JOIN "Quali"."dbo"."tempBilling" "tempBilling"
                     ON "tempBillDetails"."BillNo" = "tempBilling"."BillNo")
                  INNER JOIN "Quali"."dbo"."SampleRegistration" "SampleRegistration"
                    ON "tempBillDetails"."SampleNumber" = "SampleRegistration"."SampleNumber")
        ON "NewSampleEntrys"."QLID" = "SampleRegistration"."QlCode")
     INNER JOIN "Quali"."dbo"."Charges" "Charges"
       ON "SampleRegistration"."QlCode" = "Charges"."QlCode"
ORDER BY "tempBillDetails"."SampleNumber"
It works fine if all tables have data.
My problem is that when only "tempBilling" contains a record and there are no matching records in the other tables, the query shows nothing. I want the tempBilling data to still be displayed.
How can I solve this problem? Please help.
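The behaviour described follows from the INNER JOINs: once any inner join fails to find a match, the whole row, including the tempBilling columns, is dropped. A cut-down illustration (SQLite, simplified two-table schema, column names borrowed from the query above) of driving the query from tempBilling with a LEFT OUTER JOIN instead:

```python
# Sketch: why INNER JOIN hides the lone tempBilling row, and how a
# LEFT OUTER JOIN driven from tempBilling keeps it. Simplified schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tempBilling (BillNo INTEGER, PartyName TEXT);
    CREATE TABLE tempBillDetails (BillNo INTEGER, Amount REAL);
    INSERT INTO tempBilling VALUES (1, 'Acme');  -- no matching detail rows
""")

inner = conn.execute("""
    SELECT b.BillNo FROM tempBillDetails d
    INNER JOIN tempBilling b ON d.BillNo = b.BillNo
""").fetchall()

left = conn.execute("""
    SELECT b.BillNo, d.Amount FROM tempBilling b
    LEFT OUTER JOIN tempBillDetails d ON d.BillNo = b.BillNo
""").fetchall()

print(inner)  # [] -- the bill disappears
print(left)   # [(1, None)] -- the bill survives, detail columns are NULL
```

Applying the same idea to the full query would mean starting the FROM clause at tempBilling and replacing the inner joins with left outer joins wherever a missing match must not suppress the billing row.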
I have these related tables from Access, and it's killing me why they can't work like master/detail in two DataGridView controls when I drag them from the dataset tree onto the form in the designer.
The master works just fine, but the detail table is not loading the related records.
I want to make a database for an MLM project (a binary tree). I'm confused about which data storage model to choose. I have two options: the adjacency list model and Modified Preorder Tree Traversal. Which one is better for this kind of project?
My project may contain a large number of records. Please suggest the better model for my project.
I have developed three binary-tree-related packages (MLM and microfinance). Just storing the parent Id is enough to proceed. And if you have a left/right combination, then also store a Position column as varchar containing only 'R' or 'L' (for right or left).
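The parent-id scheme described above is the adjacency list model. A minimal sketch in SQLite, with an invented `members` table holding `parent_id` plus the 'L'/'R' position column:

```python
# Sketch of the adjacency-list binary tree: each member stores its
# parent's id and an 'L'/'R' position. Schema and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE members (
        id INTEGER PRIMARY KEY,
        name TEXT,
        parent_id INTEGER REFERENCES members(id),
        position TEXT CHECK (position IN ('L', 'R'))
    )
""")
conn.executemany("INSERT INTO members VALUES (?, ?, ?, ?)", [
    (1, 'root', None, None),
    (2, 'alice', 1, 'L'),
    (3, 'bob', 1, 'R'),
    (4, 'carol', 2, 'L'),
])

def children(conn, parent_id):
    """Fetch the left/right children of one node with a single query."""
    return conn.execute(
        "SELECT name, position FROM members WHERE parent_id = ?"
        " ORDER BY position", (parent_id,)).fetchall()

print(children(conn, 1))  # [('alice', 'L'), ('bob', 'R')]
```

The trade-off versus MPTT: the adjacency list makes inserts cheap (one row, no renumbering), which suits an MLM tree that grows constantly, but reading a whole subtree takes a recursive query; MPTT makes subtree reads a single range scan at the cost of renumbering on insert.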