|
I may be wrong, but I think LINQ-to-XML only works with .NET 3.5, and not many home users will have that installed yet. It requires a 250MB download just to install the .NET 3.5 framework, which might be a problem for some people, especially since this is just a small home-user app. ("Our new application is just 100K, which you can download and install in seconds... after you spend an hour and a half trying to download and install all this other garbage that you don't really want.")
SQLite is fast and has a very small memory footprint. That's its main advantage.
SQLite datatypes are a bit weird because you don't tell it in advance what the datatype of a column will be. SQLite figures it out dynamically. This is mostly OK, but every now and again it can surprise you (among other things, SQLite thinks it's OK to store string values in a column intended for integers without warning, it will happily convert floating point numbers to integers according to its own internal logic, and so on). Also, there is no dedicated date/time type. If you want to store date/time values, you have to pick your own representation (text, Julian day numbers, or Unix timestamps) and stick to it.
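A quick experiment with Python's built-in sqlite3 module shows both behaviours (an illustrative sketch; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER, created TEXT)")

# SQLite happily stores a non-numeric string in an INTEGER column...
conn.execute("INSERT INTO t VALUES ('not a number', '2009-06-25 03:19:00')")
# ...and a float like 3.0 is silently stored as the integer 3.
conn.execute("INSERT INTO t VALUES (3.0, '2009-06-25')")

rows = conn.execute("SELECT n, typeof(n) FROM t").fetchall()
print(rows)  # [('not a number', 'text'), (3, 'integer')]
```

Note that the date/time values above are just text as far as SQLite is concerned; nothing stops you inserting garbage into the `created` column.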
The biggest potential problem is that it is a single-user database. Every time you update something, it locks the entire database (yes, the entire database, not just the table being updated). That may not be a problem for a small home-user app, but SQLite does not scale in terms of number of users.
Personally, I don't like SQLite. It's popular with C and C++ people and some Pythonistas but I just don't see the point of using it when there are better lightweight databases available these days.
|
|
|
|
|
You're right, that's a large download if you just want to save some application data.
David Skelly wrote: SQLite datatypes are a bit weird because you don't tell it in advance what the datatype of a column will be.
Like I said, I haven't used it much just noticed that it converts all datetime values to string .
David Skelly wrote: The biggest potential problem is that it is a single user database.
That's also true for SQLCE. MSAccess has some more options, but isn't intended for this scenario either. They can, however, be used as read-only datastores by multiple users.
David Skelly wrote: Personally, I don't like SQLite
Me neither, but it is still an option that may be considered. I'd be using SQL Server Express.
I are troll
|
|
|
|
|
You could always try www.sqlite.org[^]
I don't speak Idiot - please talk slowly and clearly
I don't know what all the fuss is about with America getting it's first black president. Zimbabwe's had one for years and he's sh*t. - Percy Drake , Shrewsbury
Driven to the arms of Heineken by the wife
|
|
|
|
|
Hi, I want to split one large table into two. My problem is that I need to query both tables and get the same data as from the parent table.
For example If the large table is like
COLA COLB
1 A
2 B
3 C
4 D
5 E
and if I split this into
FIRST table
COLA COLB
1 A
3 C
5 E
Second table as
COLA COLB
2 B
4 D
Now I want to query the two tables so that they give the values ordered by COLA. The result should be like
COLA COLB
1 A
2 B
3 C
4 D
5 E
My small attempt...
|
|
|
|
|
This should do the trick:
select cola, colb
from (
    select cola, colb from table1
    union all
    select cola, colb from table2
) d
order by cola
But I don't see why you should split a table in two.
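As a sanity check, the same pattern can be run against a throwaway SQLite database from Python (a sketch using the sample data from the question; syntax details vary slightly between database engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (cola INTEGER, colb TEXT);
    CREATE TABLE table2 (cola INTEGER, colb TEXT);
    INSERT INTO table1 VALUES (1,'A'),(3,'C'),(5,'E');
    INSERT INTO table2 VALUES (2,'B'),(4,'D');
""")

# UNION ALL glues the two halves back together; the outer
# ORDER BY restores the original ordering of the parent table.
rows = conn.execute("""
    SELECT cola, colb FROM (
        SELECT cola, colb FROM table1
        UNION ALL
        SELECT cola, colb FROM table2
    ) AS d
    ORDER BY cola
""").fetchall()
print(rows)  # [(1, 'A'), (2, 'B'), (3, 'C'), (4, 'D'), (5, 'E')]
```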
Wout Louwers
|
|
|
|
|
Actually I have one billion records in this table; all are logs from a server machine, but only some specific error logs are processed. To get better performance I am planning to split it.
My small attempt...
|
|
|
|
|
You'd be better off archiving out old records and properly indexing your main table.
|
|
|
|
|
I am creating some reports by parsing the error logs in that table, so I need all the error logs in the table. That's why I planned to move the irrelevant data out.
The table is indexed...
Any other technique?
My small attempt...
|
|
|
|
|
Look into partitioning the table by periods.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
I have done that too; right now I have different partition files holding the data for this table.
My small attempt...
|
|
|
|
|
Hello,
Is there a SQL Profiler for Oracle? I want to do:
1. intercept any database/storedproc calls, examine parameters ..etc
2. identify performance bottleneck for example
Thanks
dev
|
|
|
|
|
|
Check out the tools available from Quest for Oracle. They have great products.
http://www.quest.com/spotlight-on-sql-server-enterprise/[^]
If you need a consultant, I've got a great guy for you. He is a former Oracle employee (18 years), specializing in Oracle performance tuning. He has worked magic for me in the past. Email me privately and I will give you his contact info.
|
|
|
|
|
I'm overworked and exhausted, and might actually be willing to hire extra help - this is what I am looking for (basically Oracle tuning and troubleshooting) ...
1. Use TOAD to intercept database calls, also stored proc calls, examine parameters etc.
2. Use TOAD to do performance monitoring
Your email, if you're interested in a freelance job? I won't do this now, but will probably set up an external IP (buy a new domain, hardware... etc.) and have my own business, which will happen in the next ... 2-3 months I hope; by then I can recruit your professional assistance and you can teach me the tricks remotely.
dev
|
|
|
|
|
Keep posting your questions here and I'm sure the folks will help you out.
Try this query for finding SQL statements that have more than 10,000 reads:
select disk_reads, sql_text
from v$sqlarea
where disk_reads > 10000
order by disk_reads desc
Oracle has a tremendous number of views that you can use to find performance bottlenecks.
There are tons of books out there on the subject of Oracle tuning.
Good luck with your endeavors.
David
|
|
|
|
|
|
I want to add more records to tblStaff. Before adding, I want to check whether this ID already exists... Please help me.
|
|
|
|
|
declare @Exists int
Select @Exists = count(*) from tblStaff where StaffID = @StaffID
If @Exists = 0
Begin
    -- add the record here
End
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
This query performs better:
if not exists (select * from tblStaff where StaffID = @StaffID)
Begin
    -- Add record
End
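For anyone wanting to play with the check-then-insert pattern outside SQL Server, here is a small Python/SQLite sketch (tblStaff and the column names are stand-ins; note that in a multi-user system a UNIQUE constraint or primary key on StaffID is the safer guard, since another connection could insert between the check and the INSERT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblStaff (StaffID INTEGER, Name TEXT)")

def add_staff(conn, staff_id, name):
    # Only insert when the ID is not already present,
    # mirroring the "if not exists" guard above.
    exists = conn.execute(
        "SELECT 1 FROM tblStaff WHERE StaffID = ?", (staff_id,)
    ).fetchone()
    if exists is None:
        conn.execute("INSERT INTO tblStaff VALUES (?, ?)", (staff_id, name))
        return True
    return False

print(add_staff(conn, 1, "Alice"))  # True  - inserted
print(add_staff(conn, 1, "Bob"))    # False - ID already taken
```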
Wout Louwers
|
|
|
|
|
I got halfway through the reply and started adding additional bits; I should have started again.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
First, thank you very much for your help... but can you give me more detail? Can you give me a sample? I'm a new learner.
|
|
|
|
|
All 3 of us supplied sample code - use Wout's; it is the best.
As a learner you should have a book to work through the examples! You need some basic knowledge to begin with...
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Thanks for your advice... now I'm checking my book and other samples from my class.
|
|
|
|
|
Here is the stored procedure:
ALTER PROCEDURE SP_CHECKDUPLICATEID
    -- Add the parameters for the stored procedure here
    (@ID INT)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    IF ((SELECT COUNT(*) FROM TBLSTAFF WHERE ID = @ID) = 0)
    BEGIN
        INSERT INTO TBLSTAFF(ID) VALUES(@ID)
    END
END
GO
hope it helps
Niladri Biswas
|
|
|
|
|
I have got two queries both of which generate the same execution plan:
query 1:
SELECT TOP 10 *
FROM news
CROSS APPLY (SELECT TOP 1 NetworkID FROM ItemNetwork WHERE ItemID = news.ID) itemNet
query 2:
SELECT TOP 10 *
FROM news
CROSS APPLY (SELECT TOP 1 NetworkID FROM ItemNetwork WHERE ItemID = news.ID AND ItemType = 0) itemNet
ItemNetwork table has 4 columns:
[ID] [bigint] IDENTITY(1,1) NOT NULL,
[ItemID] [bigint] NOT NULL,
[ItemType] [tinyint] NOT NULL,
[NetworkID] [int] NOT NULL
I have also created a non-clustered index on ItemNetwork table:
CREATE NONCLUSTERED INDEX [IX_ItemNetwork_ItemID_ItemType__NetworkID] ON ItemNetwork
(
[ItemID] ASC,
[ItemType] ASC
)
INCLUDE ( [NetworkID])
The first query takes one second to execute, while it takes 2 minutes for the second one to execute. The execution plan for both queries is the same. You can see the execution plan for the first query here[^] and for the second query here[^].
The only difference that can be seen between the two execution plans is the amount of data that comes out of news table. For the second query, we see a very big arrow coming out of news table. That is because the actual number of rows coming out of this table is 1534672 rows while for the first query, this number is 877 rows. For both queries, the estimated number of rows is 10 (because of top 10 clause). Look at the actual number of rows for both queries here[^] and here[^].
The only difference between the two queries is this condition:
ItemType = 0
I also updated the statistics for all the tables involved, but it didn't make any difference.
Could somebody please tell me how I can make the second query execute as fast as the first one?
p.s. the total number of rows in News table is 1576612 rows, in Network table 1820 rows and in ItemNetwork table 42164 rows
modified on Thursday, June 25, 2009 3:19 AM
|
|
|
|