|
I even tried this simple delete, but it's still not working. No error is raised, and the records are not getting deleted!
sql_connection = new MySqlConnection(ConfigurationManager.ConnectionStrings["SQLdb"].ConnectionString);
sql_command = new MySqlCommand("sp_delete_channel", sql_connection);
sql_command.CommandType = CommandType.StoredProcedure;
sql_connection.Open();
sql_command.Parameters.AddWithValue("param_channel_id", channel_id).MySqlDbType = MySqlDbType.Int32;
int result_row = sql_command.ExecuteNonQuery();
and SQL:
CREATE DEFINER=`root`@`localhost` PROCEDURE `sp_delete_channel`(IN param_channel_id int, IN param_volow_member_id int, OUT param_is_channel_created bit)
BEGIN
    DELETE FROM member_posts WHERE member_channel_id = param_channel_id;
    DELETE FROM member_channels WHERE member_channel_id = param_channel_id;
    IF NOT EXISTS (SELECT member_channel_id FROM member_channels WHERE volow_member_id = param_volow_member_id) THEN
        BEGIN
            UPDATE volow_members SET is_channel_created = FALSE;
            SET param_is_channel_created = FALSE;
        END;
    ELSE
        SET param_is_channel_created = TRUE;
    END IF;
END
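For what it's worth, the procedure above declares three parameters (two IN, one OUT), while the C# snippet only adds param_channel_id. A call that supplies all three might look like this (just a sketch; it assumes a volow_member_id variable is in scope):

```csharp
using (var connection = new MySqlConnection(ConfigurationManager.ConnectionStrings["SQLdb"].ConnectionString))
using (var command = new MySqlCommand("sp_delete_channel", connection))
{
    command.CommandType = CommandType.StoredProcedure;

    command.Parameters.Add("param_channel_id", MySqlDbType.Int32).Value = channel_id;
    command.Parameters.Add("param_volow_member_id", MySqlDbType.Int32).Value = volow_member_id;

    var isChannelCreated = command.Parameters.Add("param_is_channel_created", MySqlDbType.Bit);
    isChannelCreated.Direction = ParameterDirection.Output;

    connection.Open();
    int result_row = command.ExecuteNonQuery(); // reflects only the last statement's row count
}
```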
|
|
|
|
|
Hi everyone. I'm trying to open a connection to a SQL Server database in a background thread while showing a progress dialog box that has a Cancel button.
I would like to know how I can terminate the connection when the Cancel button is clicked while the SqlConnection.Open call is still connecting. Thanks.
|
|
|
|
|
I'm afraid you cannot do that. SqlConnection.Open is a blocking call and control will not return to the calling code until the method completes. And I'm not aware of any asynchronous way of opening a database connection.
I stand corrected. Check Eddy's answer below for an asynchronous way of doing this.
modified 4-Oct-12 15:20pm.
|
|
|
|
|
Use OpenAsync[^], or its overload[^].
--edit
For which version of Sql Server? Take note of the "supported platforms" in the documentation.
Bastard Programmer from Hell
if you can't read my code, try converting it here[^]
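On .NET 4.5, a cancellable open can be wired straight to the Cancel button; a sketch (the button handlers and connectionString are assumptions):

```csharp
private readonly CancellationTokenSource _cts = new CancellationTokenSource();

private async void connectButton_Click(object sender, EventArgs e)
{
    using (var connection = new SqlConnection(connectionString))
    {
        try
        {
            // Control returns to the UI thread while the connection is established.
            await connection.OpenAsync(_cts.Token);
        }
        catch (OperationCanceledException)
        {
            // Cancel was clicked while the connection attempt was still in flight.
        }
    }
}

private void cancelButton_Click(object sender, EventArgs e)
{
    _cts.Cancel();
}
```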
|
|
|
|
|
Thanks Eddy. The function is available only in .net framework 4.5 which I'm not using.
I tried it my own way, which works fine now. My BackgroundWorker thread creates a new thread, and that created thread calls SqlConnection.Open(). When the Cancel button is clicked, the BackgroundWorker thread aborts the thread opening the connection.
It works fine, although I don't know if there could be any related problems. I have run the application several times just to exercise that part of the code and I haven't found any problems.
|
|
|
|
|
Dan_K wrote: I tried in my own way which works fine now. My BackgroundWorker thread creates a new thread and the created thread rather calls SqlConnection.Open(). When the cancel button is clicked, the BackgroundWorker thread aborts the thread opening the connection.
That's what OpenAsync in 4.5 will do: open the connection on a background thread, polling its status.
Dan_K wrote: It works fine although I don't know if there could be any related problems.
Opening a connection can fail for multiple reasons, and exceptions aren't propagated across threads. You might want to make sure you have a good exception handler in there; if you have the time, then it'd be mighty cool to have a "try connecting again" function - that way people won't have to relaunch the app after they find out that the database server wasn't powered on.
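One way to surface such failures, for what it's worth: an exception thrown inside DoWork is handed to RunWorkerCompleted via e.Error, so a retry prompt can hang off that. A sketch (this only works if Open is called on the worker thread itself, not on a nested thread):

```csharp
worker.DoWork += (s, e) =>
{
    using (var connection = new SqlConnection(connectionString))
        connection.Open(); // throws SqlException here if the server is unreachable
};

worker.RunWorkerCompleted += (s, e) =>
{
    if (e.Error != null &&
        MessageBox.Show(e.Error.Message, "Connection failed. Try again?",
                        MessageBoxButtons.RetryCancel) == DialogResult.Retry)
    {
        worker.RunWorkerAsync(); // "try connecting again" without relaunching the app
    }
};
```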
|
|
|
|
|
We've upgraded one of our systems to SQL 2008 R2 and certain parts are running like a dog. One statement that would have taken an average of 20-40 mins on the old box took from 11:40 to 00:35. It's not every statement, just the odd one.
It's a 2-CPU box, and CPU usage sits at 50%; there's hardly any activity in Activity Monitor (no significant resource waits etc., no IO, no requests).
Unfortunately this didn't happen during conversion, system test or UAT, so we've implemented, and are now hitting these issues.
Has anybody got some hints, places to look, good resources to read up on?
One thing we would like to work out: is it a fact that 2008 does things differently, and that query design, indexes etc. need to be re-developed/analysed?
|
|
|
|
|
Rebuild all indexes, update statistics, and then try again.
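In T-SQL that could look like this, per table (the table name is a placeholder - substitute your own):

```sql
-- Rebuild every index on the table, then refresh its statistics with a full scan
ALTER INDEX ALL ON dbo.YourTable REBUILD;
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- Or refresh statistics database-wide in one go:
EXEC sp_updatestats;
```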
|
|
|
|
|
Rebuilding indexes didn't seem to help.
But updating all the stats did, the sample I was looking at now processes in 6s rather than 9h.
My concern now is that I have Auto Update Stats on, and Update Stats Async, and the way the system works is that each stage wipes the working tables before processing. So the stats would be invalidated every run.
Now we obviously could go through and do manual stat updates, but should we need to? Given we didn't for 2000.
Also, I think SQL Server is memory-starved; would this be preventing it from auto-updating stats as well?
|
|
|
|
|
Since it seems that you can isolate it down to a SQL statement, you might want to put that statement in SQL Server Management Studio and, under the Query menu, enable the option "Include Actual Execution Plan". After running the statement, you should get an idea of where the problem lies.
You can also do things like including statistics.
Good luck.
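For the statistics part, these session options print I/O and timing details to the Messages tab (a sketch):

```sql
SET STATISTICS IO ON;    -- logical/physical reads per table
SET STATISTICS TIME ON;  -- CPU and elapsed time for parse, compile and execution

-- run the suspect statement here, then inspect the Messages tab
```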
|
|
|
|
|
In our couple of runs we have identified 2 separate statements; the problem is that this system is wiped each run, and the data then passed through changes in size, so we probably have not found them all. We've also not been down all the varying processing paths.
So I was wondering if, in doing a conversion to 2k8, you will always find statements that no longer work efficiently and you just have to find and fix them all. Or is there something we might have set wrong?
If everyone says it's the former then fine, I can plan for that; I just want to make sure before saying we need to make more changes.
|
|
|
|
|
Hmmm ...
Let's look at this differently. Do you have an example of a SQL statement which took longer to execute under 2008 and have you found a solution ? (For example rebuild index)
Are you running the SQL 2008 server in some "Compatibility Level" which is causing it to work inefficiently?
For example, run
select compatibility_level from sys.databases where name=db_name()
and you should get 100 for SQL Server 2008.
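If the restored database came over at an older level, it can be raised (the database name is a placeholder):

```sql
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 100;  -- 100 = SQL Server 2008
```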
I've recently upgraded an application from SQL 2005 to SQL 2008 and things seem to be just fine with the queries, etc.
I took a full backup from SQL 2005, created a brand new database on our SQL 2008 server and restored the backup. Very straightforward. No complaints. (The application which accesses the database was also upgraded, but I have no control over that because it is an application which we purchased.)
If you need more detailed help, send me a private message and I can point you to a SQL Server professional.
You can continue to post your questions here, I will do the best I can.
|
|
|
|
|
The compatibility level is set to 100. We've not tried 80...
We do seem to have resolved the issue: the server now has 8 GB rather than 4 GB. The 2000 server had 4 GB.
One query that took hours on 2008, but came down to seconds once we went to 8 GB, was:
update outMortgageAsset
set
      OpeningBalance = z.OpeningBalance
    , ClosingBalance = z.ClosingBalance
    , AvgBalance     = z.AvgBalance
from outMortgageAsset b
join
(
    select
          GroupID        = a.GroupID
        , MonthDate      = k.MonthDate
        , OpeningBalance = sum(k.OpeningBalance)
        , ClosingBalance = sum(k.ClosingBalance)
        , AvgBalance     = sum(k.AvgBalance)
    from EIRGroupAccountsToBeTotaled a
    join
    (
        select
              monthdate      = b.ForecastMonth
            , accountid      = b.AccountID
            , OpeningBalance = isnull(b.CurrentBalance, 0.0)
            , ClosingBalance = isnull(c.CurrentBalance, 0.0)
            , AvgBalance     = (isnull(c.CurrentBalance, 0.0)
                                + isnull(b.CurrentBalance, 0.0)) / 2.00000000
        from EIRAccountBalancePerPeriod b
        left join EIRAccountBalancePerPeriod c
            on  b.ForecastMonth = c.ForecastMonth - 1
            and c.AccountID = b.AccountID
        union all
        select
              ForecastMonth - 1
            , AccountID
            , OpeningBalance = 0
            , ClosingBalance = isnull(CurrentBalance, 0.0)
            , AvgBalance     = (CurrentBalance) / 2.00000000
        from EIRAccountBalancePerPeriod a
        where ForecastMonth =
        (
            select min(ForecastMonth)
            from EIRAccountBalancePerPeriod z
            where z.AccountID = a.AccountID
        )
    ) k
        on a.accountid = k.accountid
    group by a.GroupID, k.MonthDate
) z
    on  b.GroupID = z.GroupID
    and fm = z.MonthDate
The row counts for the tables:
outMortgageAsset: 1148
EIRGroupAccountsToBeTotaled: 1499860, Clustered Index on GroupID, AccountID
EIRAccountBalancePerPeriod: 14347829, Non-Clustered Index on MonthDate, ForecastMonth, AccountID
|
|
|
|
|
Interesting.
So after you added the additional memory, the query executed faster?
What OS is the SQL 2008 Server running vs the SQL 2000 Server? (64 vs 32 bit?)
I'm thinking that the SQL 2008 server OS was struggling with memory; causing poor performance.
Does the system seem to be operating better with the 8 GB of memory? It may have been just a memory issue. (It seems like Microsoft OSes and applications need more and more memory with each new release. But memory is cheap these days...)
Hope things are better.
|
|
|
|
|
Both are 64bit!
The system's sitting at 93% physical memory used, with SQL Server taking ~6 GB for its private working set.
Unfortunately we're just running a slightly larger dataset through, and now another, earlier statement is showing the same signs: 50% CPU, but nothing else seems to be happening.
|
|
|
|
|
|
Grr, not getting very far.
Copied the DB back to our dev server, which hasn't had the 8 GB upgrade, so it's still on 4 GB.
Ran the same dataset through: the dev one finished in a reasonable time, while the other is still going and has now slowed down on yet another part of the process...
|
|
|
|
|
Might have found something: a high number of CXPACKET waits. We've set MAXDOP to 1 to see if we now get consistent performance; if we're lucky it will be consistent performance that matches the old server...
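For reference, the server-wide setting can be changed like this (it can also be applied per statement with an OPTION (MAXDOP 1) hint):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;  -- 0 = use all schedulers
RECONFIGURE;
```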
|
|
|
|
|
Hi,
I don't know why the MySQL timestamp has changed. It's the 3rd today, but it's showing the 4th when I try SELECT NOW() or SELECT SYSDATE().
How can I fix it?
|
|
|
|
|
IMO, MySQL does not keep its own clock; NOW() and SYSDATE() return the database server's system time. Check your system's date/time and time-zone settings.
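The server's clock and time-zone settings can be inspected from any client session, e.g.:

```sql
SELECT NOW(), UTC_TIMESTAMP(),
       @@system_time_zone, @@global.time_zone, @@session.time_zone;
```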
|
|
|
|
|
I don't have a working connection string for MS Access 2010 on Windows 7 64-bit.
There is an error on "provider=Microsoft.Jet.OLEDB.4.0":
The 'Microsoft.Jet.OLEDB.4.0' provider is not registered on the local machine.
|
|
|
|
|
See here[^].
One of these days I'm going to think of a really clever signature.
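For what it's worth, the Jet provider is 32-bit only, so on 64-bit Windows the usual options are to compile the app as x86 or to install the Access Database Engine and switch to the ACE provider. A connection-string sketch (the path is a placeholder):

```
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\path\to\database.accdb;Persist Security Info=False;
```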
|
|
|
|
|
We have a web application using SQL Server 2008 R2 that contains time-tracking data which another company has expressed interest in using. We initially set up the database such that all end-user tables have a foreign key to the company that row belongs to. Is there a good way to back up/restore the data for a specific company across the tables, such that if the new company says tomorrow that they want their data restored to yesterday, it can be done without impacting all the other companies' data? I'm trying to avoid writing some form of stored procedure that selects the data and then has to do an insert/update/delete to put it back. Thanks for any insight.
Dennis Daugherty
|
|
|
|
|
Could you simply use bcp[^] to 'export' or 'import' the data? In your case, when exporting, you would define a custom query for bcp to run.
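A sketch of what that could look like from the command line (server, database, table and company id are all placeholders):

```
rem Export one company's rows with a custom query (trusted connection, native format)
bcp "SELECT * FROM dbo.TimeEntries WHERE CompanyId = 42" queryout company42.dat -S myserver -d mydb -T -n

rem Re-import the rows later
bcp mydb.dbo.TimeEntries in company42.dat -S myserver -T -n
```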
|
|
|
|
|
I guess I could, if I turned identity insert on and then deleted all rows for a company before importing... I'll think on that one...
|
|
|
|