|
If it were me, I'd return the GUID so the client side could requery as needed. If you're somehow persisting the newly added data (in a session variable?), I'd return the GUID and add it to the data.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
It's a WPF app, which creates a new entity and sends it to the API. When the API call returns, I now pass back the Guid and assign it to the new entity.
Thanks
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
|
|
|
|
Hi,
I have debugged a stored proc in Sql Server, which was fairly easy to do.
I am trying the same with debugging Oracle procedures; being fairly new to this, I'm having some difficulties. I have researched how to output a cursor value, with no luck.
Also, the SQL is constructed dynamically and then used in a cursor to output results to a Crystal Report.
My final SQL looks like this, concatenated from a whole bunch of variables:
ssql := s_SEL||s_FROM1||s_WHERE||s_WH_PER||s_WH_CC||s_WH_VEN
||s_UNION||
s_SEL||s_FROM2||s_WHERE||s_WH_PER||s_WH_CC||s_WH_VEN
I tried with no luck:
execute immediate ssql
I just want to see the value of the SQL statement.
Any help is much appreciated!
|
|
|
|
|
Either use PRINT ssql;
or SELECT ssql FROM DUAL;
It really depends on what the rest of the procedure looks like.
|
|
|
|
|
Thank you for your reply.
I did the following:
==
execute immediate ssql;
SELECT ssql FROM DUAL;
==
I got the following:
[Error] Execution (313: 1): ORA-06550: line 313, column 1:
PLS-00428: an INTO clause is expected in this SELECT statement
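The PLS-00428 error is because, inside a PL/SQL block, a SELECT must have an INTO clause. To just inspect the variable, DBMS_OUTPUT is the usual route; a minimal sketch, assuming you run the block from SQL*Plus or SQL Developer with server output enabled:

```
SET SERVEROUTPUT ON SIZE UNLIMITED

BEGIN
  -- ssql is the concatenated statement built in the procedure;
  -- PUT_LINE is limited to 32767 bytes per call, so print it in
  -- chunks if the statement can be longer than that
  DBMS_OUTPUT.PUT_LINE(ssql);
END;
/
```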
|
|
|
|
|
In my opinion, if you can, create a log table in your database.
An easy way is to insert the SQL into your log table.
e.g.
Create log table:
CREATE TABLE SQLLOG_DEBUG
(
SQL_LOG VARCHAR2(4000 BYTE)
)
Insert the sql into your log table.
INSERT INTO SQLLOG_DEBUG (SQL_LOG) VALUES (ssql);
*This is just a sample; I didn't test the code.
If the SQL is big, you can change SQL_LOG's datatype to CLOB.
You can also use DBMS_OUTPUT.PUT_LINE and redirect the output to a file.
The following page shows how to do it.
[stackoverflow.com]
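One wrinkle with the log-table approach: if the procedure being debugged rolls back, the debug INSERTs roll back with it. The usual fix is to wrap the logging in an autonomous transaction; a hedged sketch against the SQLLOG_DEBUG table above (the procedure name log_sql is made up for illustration):

```
CREATE OR REPLACE PROCEDURE log_sql (p_sql IN VARCHAR2)
IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the caller
BEGIN
  -- SQL_LOG is VARCHAR2(4000), so truncate defensively
  INSERT INTO SQLLOG_DEBUG (SQL_LOG) VALUES (SUBSTR(p_sql, 1, 4000));
  COMMIT;
END;
/
```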
modified 17-Jul-19 1:00am.
|
|
|
|
|
I'm testing the mysql_real_query API. I loop executing SQL statements like 'UPDATE ** SET **', and a memory leak occurs. When I use 'top' to check, I see the system's 'used memory' keep growing until the system or the process crashes, but the %MEM values of the 'mysqld' and 'testsql' processes do not increase; the system's free memory just seems to disappear. I tried to force-kill the 'testsql' process, but the memory is still in use and cannot be released. Why? Please help me.
int ThreadExeSQL(MYSQL* lpSQLConn, char * sql, int iLen)
{
if (mysql_real_query(lpSQLConn, sql, iLen))
{
MYSQL_RES* lpGetSQLRes = mysql_store_result(lpSQLConn);
mysql_free_result(lpGetSQLRes);
return -1;
}
MYSQL_RES* lpGetSQLRes = mysql_store_result(lpSQLConn);
mysql_free_result(lpGetSQLRes);
return 0;
}
void* ThreadSQL_HexWrite(void* lpGet)
{
LPThreadParam getParam = (LPThreadParam)lpGet;
MYSQL* lpSQLConn = (MYSQL*)&getParam->lpSQLConn;
int iThreadIdx = getParam->iThreadIdx;
printf("ID:%d\n", iThreadIdx);
mysql_thread_init();
lpSQLConn = mysql_init(NULL);
if (!mysql_real_connect(lpSQLConn, g_host_name, g_user_name, g_password, g_db_name, g_db_port, NULL, 0))
{
ThreadSQLError(lpSQLConn, NULL);
return NULL;
}
else
{
printf("mysql_real_connect OK!\n");
}
for (int i = 0; i < 1000000; i++)
{
char lpCmdStr[8192] = "\0";
sprintf(lpCmdStr, "update %s set %s=0x%d where id=%d\0", "tb_Data", "Info", i, 1);
if (ThreadExeSQL(lpSQLConn, (char*)lpCmdStr, strlen(lpCmdStr)))
{
MySQLError getError = ThreadSQLError(lpSQLConn, NULL);
HandleMySqlError(getError);
continue;
}
else
{
printf("ok. ");
}
usleep(1000 * 10);
}
mysql_close(lpSQLConn);
mysql_thread_end();
printf("ThreadSQL_HexWrite OK!\n");
}
MYSQL* g_MySQLConnList[100];
int main(void)
{
if (mysql_library_init(0, NULL, NULL))
{
printf("could not initialize MySQL client library\n");
exit(1);
}
int thread_num = 1;
{
pthread_t *pTh = new pthread_t[thread_num];
for (int i = 0; i < thread_num; i++)
{
LPThreadParam lpSetParam = new ThreadParam;
lpSetParam->lpSQLConn = (MYSQL*)&g_MySQLConnList[i];
lpSetParam->iThreadIdx = i;
printf("---create thread idx:%d\n", i);
if (0 != pthread_create(&pTh[i], NULL, ThreadSQL_HexWrite, lpSetParam))
{
printf("pthread_create failed\n");
continue;
}
}
for (int i = 0; i < thread_num; i++)
{
pthread_join(pTh[i], NULL);
}
delete[] pTh;
}
mysql_library_end();
printf("All Done!\n");
}
modified 23-May-19 9:38am.
|
|
|
|
|
Doesn't seem like the code is complete.
normga wrote: MYSQL* g_MySQLConnList[100];
That is a list of uninitialized pointers.
Where do those pointers get set to actually point to something?
|
|
|
|
|
Yes.
g_MySQLConnList is filled with something like 'new MYSQL[100]'.
In fact, in my testing environment the code always queries MySQL successfully, but it leaks memory.
|
|
|
|
|
Perhaps you missed my point.
You have a list of pointers. Nothing more. The pointer must be set to point to something. Where does that happen?
normga wrote: the code always query mysql successful
That doesn't mean anything. Code can run successfully, sometimes, even with uninitialized pointers. It depends on how the memory is laid out.
|
|
|
|
|
So I need to regularly update a table with data from another table.
The problem is that if I update the normal way I get a table lock on the target table for half an hour, which is frowned upon by the users. So I need to run the update in batches.
The other problem is that the ID sequence has gaps in it, larger than the batch size.
At the moment I have this solution:
DECLARE
@LastID int = 0,
@NextID int,
@RC int = 1;
WHILE (@RC > 0)
BEGIN
SELECT TOP 5000
@NextID = s.id
FROM Source s
WHERE s.id > @LastID
ORDER BY s.id
;
UPDATE t
SET
FROM Source s
JOIN Target t ON t.id = s.id
WHERE s.id > @LastID
AND s.id <= @NextID
;
SET @RC = @@ROWCOUNT;
SET @LastID = @NextID ;
END
Which works just fine, but using two selects is getting under my skin.
Any better suggestions for how to do it?
|
|
|
|
|
How about something like:
DROP TABLE IF EXISTS #ProcessedIDs;
CREATE TABLE #ProcessedIDs (id int NOT NULL Primary Key);
DECLARE @RC int = 5000;
WHILE @RC = 5000
BEGIN
UPDATE TOP (5000)
T
SET
...
OUTPUT
inserted.id INTO #ProcessedIDs
FROM
Target As T
INNER JOIN Source As S
ON S.id = T.id
WHERE
Not Exists
(
SELECT 1
FROM #ProcessedIDs As P
WHERE P.id = T.id
)
;
SET @RC = @@ROWCOUNT;
END;
DROP TABLE IF EXISTS #ProcessedIDs;
NB: The DROP TABLE IF EXISTS syntax is new in SQL Server 2016. If you're using an earlier version, you'll need to use an alternative syntax[^].
The OUTPUT clause should work in SQL Server 2005 or later.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Ah, yes!
I always tend to forget the output clause.
Thanks!
|
|
|
|
|
Using a WHERE NOT EXISTS turned out to be very slow, because the anti-join does an index seek for every row.
I changed it to WHERE ID > (SELECT ISNULL(MAX(ID), 0) FROM @ProcessedIDs), which allows an index scan.
This is orders of magnitude faster than the original non-batched update.
The question is how to use this with composite keys?
|
|
|
|
|
When you use the TOP clause with the UPDATE statement, there's no guarantee that the rows to update will be picked in any particular order. Using the MAX(id) option, you could end up missing rows.
I notice you've replaced the temporary table with a table variable. Was there a reason for that? IIRC, execution plans for table variables tend to assume they contain a very low number of rows, which might explain the poor performance.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
Richard Deeming wrote: When you use the TOP clause with the UPDATE statement, there's no guarantee that the rows to update will be picked in any particular order. Using the MAX(id) option, you could end up missing rows.
I know, and you can't add an order by to an UPDATE or INSERT.
But you can put the SELECT with TOP and ORDER BY in a CTE.
Richard Deeming wrote: I notice you've replaced the temporary table with a table variable. Was there a reason for that?
No particular reason. I like to keep the scope as local as possible, so it's mostly a habit.
Richard Deeming wrote: IIRC, execution plans for table variables tend to assume they contain a very low number of rows, which might explain the poor performance.
Table variables don't have statistics, which obviously could affect the plan, but since all IDs are unique I don't think it would make a big difference in this case.
But I will test it.
<edit>Oh, and table variables can't go parallel, which obviously can affect performance a lot in this case.</edit>
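For clarity, the "TOP and ORDER BY in a CTE" shape I mean looks roughly like this; a sketch only, with hypothetical column names (SomeCol, NewVal) standing in for the elided SET list:

```
DECLARE @LastID int = 0, @RC int = 1;
DECLARE @Batch TABLE (id int PRIMARY KEY);

WHILE @RC > 0
BEGIN
    DELETE FROM @Batch;

    WITH NextBatch AS
    (
        -- TOP with ORDER BY inside the CTE gives the batch a defined order,
        -- which UPDATE TOP (n) on its own cannot guarantee
        SELECT TOP (5000) t.id, t.SomeCol, s.SomeCol AS NewVal
        FROM Target t
        JOIN Source s ON s.id = t.id
        WHERE t.id > @LastID
        ORDER BY t.id
    )
    UPDATE NextBatch
    SET SomeCol = NewVal
    OUTPUT inserted.id INTO @Batch (id);

    SET @RC = @@ROWCOUNT;
    SELECT @LastID = ISNULL(MAX(id), @LastID) FROM @Batch;
END;
```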
|
|
|
|
|
Done some testing now.
And as I suspected, there is no difference in either performance or plan as long as the temp table has one column with unique values.
Until the query goes parallel, that is. Then the difference quickly becomes huge.
But as long as I'm batching the query it stays the same until the batch is big enough to go parallel (which happens between 10,000 and 20,000 rows in this case). But then I also get a table lock.
And oddly enough, it's also much slower when parallel until reaching 100,000 rows per batch. I will do some more testing on this.
|
|
|
|
|
Jörgen Andersson wrote: The other problem is that the ID sequence is having gaps in it.
That doesn't change the functionality, and since it should not be visible to the outside world, it should not be a problem.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
It was important to mention so that I don't get suggestions like WHERE ID BETWEEN @LastID AND @LastID + 5000
|
|
|
|
|
Add WITH(NOLOCK) to your selects and joins:
DECLARE
@LastID int = 0,
@NextID int,
@RC int = 1;
WHILE (@RC > 0)
BEGIN
SELECT TOP 5000
@NextID = s.id
FROM Source s WITH(NOLOCK)
WHERE s.id > @LastID
ORDER BY s.id
;
UPDATE t
SET
FROM Source s
JOIN Target t WITH(NOLOCK) ON t.id = s.id
WHERE s.id > @LastID
AND s.id <= @NextID
;
SET @RC = @@ROWCOUNT;
SET @LastID = @NextID ;
END
|
|
|
|
|
|
I didn't put nolock on the update statement - I put it on the join.
You could just create a job that does the monster update at night.
|
|
|
|
|
Quote:
UPDATE t
...
FROM Source s
JOIN Target t WITH(NOLOCK) ON t.id = s.id
...
That NOLOCK hint is on the target table. It's exactly the same as the first example from the article I linked to:
UPDATE t1
SET t1.x = something
FROM dbo.t1 WITH (NOLOCK)
INNER JOIN ...;
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
We use WITH(NOLOCK) prolifically. Of course, we have indexes on all of our tables, and don't generally do massive updates in the middle of the work day. We have no issues.
|
|
|
|
|
NOLOCK can cause nonclustered index corruption, and it's also deprecated[^].
|
|
|
|
|