You have provided a lot of information, but it is hard to give exact answers without going through the environment thoroughly. Still, here are a few things that will hopefully help.
Most likely this happens because the number of rows has grown and you do not have a proper index that allows quick access to the row being updated. Try creating an index on the `Tag` column, especially if its values are unique. Also note that if the column is not unique, your update modifies every matching row.
Basically the same answer as in bullet 1 applies here.
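To illustrate the non-unique case, here is a minimal sketch using Python's built-in `sqlite3` module. The `TagValues`/`Tag` names are taken from the index example in this answer; your actual engine and schema may differ.

```python
import sqlite3

# Hypothetical TagValues(Tag, Value) schema for illustration only;
# the real engine and table layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TagValues (Tag TEXT, Value REAL)")
conn.executemany(
    "INSERT INTO TagValues (Tag, Value) VALUES (?, ?)",
    [("temp", 1.0), ("temp", 2.0), ("pressure", 3.0)],
)

# Tag is not unique here, so this single UPDATE touches two rows.
cur = conn.execute("UPDATE TagValues SET Value = 0 WHERE Tag = 'temp'")
print(cur.rowcount)  # 2
```

If you intended to update exactly one row, that is a sign the column should be unique (or the `WHERE` clause needs an additional key).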
Typically, databases write to the data file during a process called a checkpoint. During a transaction, all modifications are written to the transaction log as they are made. The exact moment of a checkpoint is often unpredictable.
If this question relates to performance, I would not worry about it. Databases optimize both read and write operations to disk, so in normal situations these are not the cause of bad performance. The problem typically lies elsewhere (for example, in missing indexes).
For background information, see for example Write-Ahead Logging.
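As a concrete, SQLite-specific sketch of write-ahead logging: in SQLite you can switch the journal to WAL mode with a `PRAGMA`. Other engines (SQL Server, PostgreSQL, etc.) manage their transaction log and checkpoints automatically, so treat this purely as an illustration of the concept.

```python
import os
import sqlite3
import tempfile

# SQLite-specific sketch: WAL mode requires a file-backed database,
# so create one in a temporary directory.
path = os.path.join(tempfile.mkdtemp(), "example.db")
conn = sqlite3.connect(path)

# Switch the journal to write-ahead logging; the PRAGMA returns
# the mode that is now in effect.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```

After this, writers append changes to the `-wal` file first, and SQLite folds them back into the main database file at checkpoints, which is exactly the behavior described above.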
Addition: an example of creating the index.

For the syntax, refer to CREATE INDEX. If you want to add an index to the `Tag` column in the `TagValues` table, try executing the following:

```sql
CREATE INDEX X_TagValues_Tag ON TagValues (Tag)
```
You don't have to change the SELECT or UPDATE statements themselves. In most cases the database automatically picks the best available index for the operation, so creating the index once should be enough to improve performance.
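If you want to verify that the optimizer actually picks the index, most engines can show the query plan. Here is a small SQLite sketch using `EXPLAIN QUERY PLAN` (the syntax varies per engine, e.g. `EXPLAIN` in PostgreSQL); the table, column, and index names mirror the example above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TagValues (Tag TEXT, Value REAL)")
conn.execute("CREATE INDEX X_TagValues_Tag ON TagValues (Tag)")

# Ask the planner how it would run the UPDATE; the detail column
# should mention the index rather than a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN UPDATE TagValues SET Value = 0 WHERE Tag = 'temp'"
).fetchall()
for row in plan:
    print(row[-1])
```

If the plan shows a scan of the whole table instead of a search using the index, that is a strong hint the index is missing or not applicable to the `WHERE` clause.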