If you had read the second paragraph in my original post, let alone the thread that Richard pointed to, you would have seen that almost the first thing I (and the OP in the other thread) did was to go back and rebuild the application, which (at least in my case) only uses standard, current, controls, and determine that the controls in the rebuilt application also showed the strange behavior.
This is not a "the application I created in 2005 doesn't run unchanged under Windows 10" issue.
Our experiences apparently differ. In general, I have found that .Net applications, and particularly their standard controls, almost always continue to work as expected through hardware and software upgrades, so long as the Framework version that they require is still present on the new system. On the very rare occasions when I have seen a problem with an old compiled application on a new system, this has always been corrected by rebuilding the application on/for the new system.
The C++ thread is at least arguably relevant because the apparently very similar problem that both threads describe is in a standard control that is part of the Framework, rather than in user code, strengthening (IMHO) the concern that the problem is one recently introduced into the Framework rather than 'user error'.
As per the other discussion, it looks like Windows has made a structural change, which is perfectly normal.
You should be able to work around the problem by just using the Windows API directly. The other poster with the problem said he was losing the last word, but I am pretty sure that is because he didn't have enough room in the box for the text, since the font obviously had slightly different spacing. I told him about using a negative height, which forces absolute height selection of the font and gives Windows no latitude to play with it. I don't know what parameters the original MFC code used in selecting a suitable font, but you just need to work that out.
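For reference, the "negative height" trick works because a negative lfHeight in a LOGFONT (or the nHeight argument to CreateFont) tells GDI to match the character height exactly rather than the cell height. A minimal sketch of the usual conversion from a point size to that negative height (the function name is mine; the formula is the documented -MulDiv(pointSize, LOGPIXELSY, 72) idiom):

```cpp
#include <cassert>

// Convert a point size to the (negative) lfHeight value for CreateFont /
// LOGFONT. A negative height asks GDI for an exact character height,
// leaving Windows no latitude to substitute a differently spaced size.
// logPixelsY is the vertical DPI, i.e. GetDeviceCaps(hdc, LOGPIXELSY).
int PointSizeToLfHeight(int pointSize, int logPixelsY)
{
    // Equivalent to -MulDiv(pointSize, logPixelsY, 72): multiply first,
    // then divide by 72 with rounding to the nearest integer.
    return -((pointSize * logPixelsY + 36) / 72);
}
```

At the common 96 DPI this gives -11 for the classic 8pt dialog font. Pass the result as the height when creating the font, then select it into the control with WM_SETFONT as usual.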
WM_SETFONT should and must work because it is an API message; if it didn't work correctly, Microsoft would give it a high priority to fix, because it could affect any and every program.
This should not be a big issue to fix. Using frameworks makes things easier; the penalty is that sometimes things change in the Windows API that break the frameworks. The alternative would be for Microsoft never to be allowed to change anything, and, as one of your comments already says, this stuff has changed multiple times over the years.
Here's the scenario: we have ASMX web services, and we auto-test them with the SoapUI tool (by SmartBear) every night after launching a new build, which also happens every night after merging all the latest changes into the TFS source code. During test-step execution, some test steps fail due to a mismatch in test parameter values; this mismatch might be due to a change in the values in the source code checked in by a developer. Some test steps fail due to a null reference, and some due to a mismatch with the test values stored in the SQL Server database. When we try to find the reason for a failed test step, it is a time-consuming task every day to find the change-set that caused the failure. Therefore, we want to automate the process of getting the change-set for the failed test steps. I'm looking for your expert suggestions for developing a solution for this.
Please let me know in case you need further clarification.
The SoapUI tool generates a log file after each execution. To find the reason for a failure, we go through the log file. The log file does not have a proper stack trace with file name, method name, line number, etc. for the failed test; rather, it gives only a generic reason for the failure of the test step. So we have to debug the application for the failed method, and this process takes a good amount of time to find the bug. I hope that clarifies the question.
In this case, I agree with Pete. You're not going to be able to write code that automatically finds which check-in caused the problem. You could even have multiple problems at once.
You have two possible paths here. The first is to debug the code and do what you've been doing. This is, obviously, time consuming and reactive.
The other is to implement a check-in process where the code is tested before you allow it to be checked in. It sounds like you've got people "breaking the build" on just about every check-in. That's not tolerable and must be prevented. Yes, it sounds like this takes more time. It does, but instead of spending that time finding and fixing bugs, you're spending it on preventing them and improving the quality of the code.
If I were on your team, I would suggest that you would be better off with a comprehensive CI process, with decent unit tests and pre-flight check-ins, to ensure that you don't regularly commit broken code. It's better to prevent problems than to look to apportion blame.
It seems to me that you could tackle this in one of two ways: (1) Finer granularity in your build process. Somebody else already suggested a full-blown CI solution, but you could (I suppose) build, say, 4 times per day, which would dice up the problem and presumably help; (2) Semi-automate the debug process by correlating changed modules with the locations where errors are arising.
Option 1 requires that you can get those merges done more often and turn the build handle. Option 2 strikes me as a good thing to do anyway, but whether it adds much value will obviously depend on how "orthogonal" the changes are that people are making to the code: if many are making changes in the same few files then it's not really going to work.
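To make option 2 concrete, here's a rough sketch of the correlation idea (the structure and names are mine, not an existing tool): score each changeset in the nightly merge window by how many of the files implicated in a failed test step it touched, and present the highest-scoring check-ins as the first suspects.

```cpp
#include <algorithm>
#include <set>
#include <string>
#include <utility>
#include <vector>

// One TFS check-in: its changeset number and the files it modified.
struct ChangeSet
{
    int id;
    std::set<std::string> files;
};

// Rank changesets by how many of the failing test's implicated files
// they touched; changesets that touched none are dropped. stable_sort
// preserves check-in order among equally scored suspects.
std::vector<int> RankSuspects(const std::vector<ChangeSet>& changes,
                              const std::set<std::string>& implicated)
{
    std::vector<std::pair<int, int>> scored; // (overlap count, changeset id)
    for (const ChangeSet& cs : changes)
    {
        int overlap = 0;
        for (const std::string& file : cs.files)
            overlap += implicated.count(file) ? 1 : 0;
        if (overlap > 0)
            scored.emplace_back(overlap, cs.id);
    }
    std::stable_sort(scored.begin(), scored.end(),
                     [](const std::pair<int, int>& a, const std::pair<int, int>& b)
                     { return a.first > b.first; });
    std::vector<int> ids;
    for (const std::pair<int, int>& s : scored)
        ids.push_back(s.second);
    return ids;
}
```

The hard part in practice is producing the implicated-file set for each failing test step, since (as noted above) the SoapUI log gives only a generic failure reason; you'd likely need a hand-maintained map from test steps to the modules they exercise, and the approach degrades exactly when many people change the same few files.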
By "finer granularity" I meant only that you could build more frequently. For example if currently you build at (say) midnight each day after everybody has finished work, then if Module "X" was changed 3 times and now fails to compile you've got to figure out which change caused it to fail. If, however, you'd done several builds during the day (e.g. every 2 hours) then you have more chance of discovering a broken build for which only one checkin has been done.
Taking this to an extreme you end up with CI = Continuous Integration in which typically there'll be a gate on the checkin process with some mandatory regression tests needing to be passed before checkin can be done. There are lots of ways to implement this in practice: you could allow checkin but advance a stable label only as each checkin is built and validated. In the ideal world CI should ensure that your code always builds. However this is a big topic which is why I suggested only a small change to your existing process rather than a fundamental change of strategy.
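If you do get to the point where any intermediate changeset can be built and tested on demand, the "which change broke it" search itself can be mechanized as a binary search over the check-in sequence — the same idea as git bisect. A sketch, assuming you have some way to build and run the regression tests at an arbitrary changeset (the buildPasses callback stands in for that and is hypothetical):

```cpp
#include <functional>

// Binary search for the first broken changeset in check-in order.
// Precondition: the build passes at index 0 and fails at index count-1
// (the same contract as `git bisect`). This needs only O(log n) builds
// rather than one per intermediate changeset.
int FirstBadChangeset(int count, const std::function<bool(int)>& buildPasses)
{
    int good = 0;        // highest index known to pass
    int bad = count - 1; // lowest index known to fail
    while (bad - good > 1)
    {
        int mid = good + (bad - good) / 2;
        if (buildPasses(mid))
            good = mid;
        else
            bad = mid;
    }
    return bad;
}
```

With 20 changesets between a good nightly build and a bad one, this narrows the culprit down in about 4-5 rebuilds, though it only works cleanly when there is a single breakage in the window.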
You're first going to have to describe what you mean by "autocorrect". If you're talking about a spell check (which is what I think of when I hear "autocorrect") there's no such thing in Visual Studio.
Visual Studio has Intellisense, which will suggest names of existing objects in your code to make things easier and faster to type.