Multithreading the timings really distorts the results, so as suggested those are now run sequentially. The code is running about as well as I can make it at this point, and I'm pretty chuffed with it.
This is the output I'm getting using batched prepared SQL. I commented out the updatable ResultSet code: it works great for SQL Server, but MySQL was unacceptably slow, and unbatched prepared SQL was just as bad. I suspect the SQL Server JDBC driver batches automatically, while the MySQL one needs to be told to batch. So those ~18 seconds to update/delete in SQL Server may look bad, but MySQL was taking just under 30 minutes to insert 24k records using either unbatched prepared SQL or result sets.
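For reference, this is roughly what the explicit batching looks like with a PreparedStatement. Table and column names here are just placeholders modelled on the test schema below, and the batch size is an arbitrary choice; the `rewriteBatchedStatements=true` flag mentioned in the comment is the MySQL Connector/J option that collapses a batch into multi-row INSERTs, without which batching alone often doesn't help MySQL much:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertSketch {
    static final int BATCH_SIZE = 1000;

    // How many executeBatch() flushes a given row count needs (ceiling division).
    static int flushCount(int rows, int batchSize) {
        return (rows + batchSize - 1) / batchSize;
    }

    // Sketch: insert rows in batches instead of one statement per row.
    // For MySQL, the connection URL should include
    // ?rewriteBatchedStatements=true (a Connector/J option) or the batch
    // is still sent as individual statements under the hood.
    static void insertBatched(Connection con, int rows) throws SQLException {
        con.setAutoCommit(false); // commit once at the end, not per row
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO TestTable (NumKey, NumData) VALUES (?, ?)")) {
            for (int i = 0; i < rows; i++) {
                ps.setInt(1, i);
                ps.setInt(2, i * 2);
                ps.addBatch();                 // queue, don't execute yet
                if ((i + 1) % BATCH_SIZE == 0) {
                    ps.executeBatch();         // flush a full batch
                }
            }
            ps.executeBatch();                 // flush any remainder
            con.commit();
        }
    }
}
```

With 24k rows and a batch size of 1000, that's 24 round trips to the server instead of 24,000, which is where the half-hour-to-seconds difference comes from.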
All times in milliseconds.

Table (NumKey, NumData)

                    SQL Server 2019   SQL Server 2017   MySQL
Create table                      6                 3    1007
Insert 24k rows                 425               160    6057
Delete 4k rows                   36                29     462
Update 4k rows                   78                28     441
Select *                          1                 1      15
Delete all (SQL)                 33                16     234
Drop table                        2                13     191

Table (AlphaKey, NumKey, NumData, AlphaData)

                    SQL Server 2019   SQL Server 2017   MySQL
Create table                      1                 1     375
Insert 24k rows                 241               170    6107
Delete 4k rows                18496             18521     547
Update 4k rows                16399             16410     551
Select *                          1                 1      18
Delete all (SQL)                 31                21     205
Drop table                        1                 2     298
That said, I realise it's probably a horrendous legacy-like abomination that needs to be abstracted the hell out of. Thing is, I'm pretty much a horrendous legacy-like abomination myself, so I need some guidance on how to change this thing so it's not quite so legacy. Can I dump the code on this thread for comments, or do I need to start a new thread?