Scenario:
I am developing a new module that generates protocol files in a special format and will be integrated into a bigger piece of software. The main software has several parts, some in C# .NET and some in C++ with MFC (including multi-threading), and the API I am using has a .c file too.
My test application is a C++ console application with MFC and ATL support, built in VS2017 Professional.
Although the main software is multi-threaded, I do not want to get involved with threads in my test app.
I would like the fastest (performance-wise) and most accurate / finest-resolution timestamp I can get, to calculate deltas / execution times.
I may still need the deltas in another part of the program, so I would like to keep the value in a variable just in case. That is why I do the measurements in code instead of using external performance tools.
What I have tried:
I have tried several approaches:
1)
time_t GetActualTime()
{
    _tzset();            // only needed once per process, not on every call
    time_t myTime;
    time(&myTime);
    return myTime;
}
I suppose this is the fastest option (in execution time), but it only returns whole seconds. For other parts of my program that is more than enough, but not for what I want to check right now.
2)
With FILETIME I should be able to get down to 100 ns steps, but the problem is the "workaround" I am using to get the timestamp. With this code:
ULARGE_INTEGER GetTimeFile100xNanoSec(int iNr)
{
    ULARGE_INTEGER uli100xNanoSec = { 0 };
    FILETIME timeCreation = { 0 };

    CStringW strFileName;
    strFileName.Format(L"D:\\Temp\\myDummyFile_%03d.txt", iNr);

    HANDLE hFile = CreateFileW(strFileName, GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile != INVALID_HANDLE_VALUE)   // guard against CreateFileW failing
    {
        GetFileTime(hFile, &timeCreation, NULL, NULL);
        CloseHandle(hFile);
        DeleteFileW(strFileName);
    }

    uli100xNanoSec.LowPart  = timeCreation.dwLowDateTime;
    uli100xNanoSec.HighPart = timeCreation.dwHighDateTime;
    return uli100xNanoSec;
}
using it in a loop like:
ULARGE_INTEGER lluOld;
ULARGE_INTEGER lluNew;
lluOld = GetTimeFile100xNanoSec(999);
for (int i = 0; i < 100; i++)
{
    lluNew = GetTimeFile100xNanoSec(i);
    wprintf(L"\nIn Loop [%d], New - old = %llu", i, lluNew.QuadPart - lluOld.QuadPart);
    lluOld.QuadPart = lluNew.QuadPart;
}
I am getting values from 10001 (about 1 ms) up to 60006 (about 6 ms); the average over the 100 tries is almost 2.5 ms. Creating and deleting the temp files is so slow that the measured deltas land in the millisecond range, which makes the 100 ns resolution pointless.
3)
With SYSTEMTIME I can only get down to milliseconds. I have not measured its speed yet, but I will do that later; if I can get a stable 1 ms step, I suppose I will use this, since #2 is not stable enough to be reliable.
Any suggestions for getting below the millisecond mark in a reliable and reusable way? By reusable I mean keeping the value in a variable that can be evaluated elsewhere.
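One option I am considering for sub-millisecond deltas without any file I/O is std::chrono::steady_clock, which VS2017 implements on top of QueryPerformanceCounter: it is monotonic, cheap to call, and the delta fits in a plain variable for later use. A minimal sketch (the helper name ElapsedNs is my own):

```cpp
#include <chrono>

// Nanoseconds elapsed between two steady_clock samples.
// steady_clock is monotonic, so the result is never negative,
// even if the wall clock is adjusted in between.
long long ElapsedNs(std::chrono::steady_clock::time_point tOld,
                    std::chrono::steady_clock::time_point tNew)
{
    return std::chrono::duration_cast<std::chrono::nanoseconds>(
               tNew - tOld).count();
}
```

In the loop from #2, tOld / tNew would be std::chrono::steady_clock::now() samples and ElapsedNs(tOld, tNew) would replace the QuadPart subtraction.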