100s of additional warnings.
(the first bunch is about how using statements should be inside the namespace - and yes, I know you can configure the warnings, but there is a joke here)
Me: Warnings fixed.
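For anyone who hasn't run into that particular warning: it asks for using directives to be moved inside the namespace block (IDE0065, if memory serves). A minimal before/after sketch - the namespace and class names here are invented for illustration:

    // Flagged by the rule: using directive outside the namespace
    using System;

    namespace Acme.Widgets
    {
        internal class Widget
        {
            public DateTime Created { get; } = DateTime.UtcNow;
        }
    }

    // What the rule wants instead: using directive inside the namespace
    namespace Acme.Widgets
    {
        using System;

        internal class Widget
        {
            public DateTime Created { get; } = DateTime.UtcNow;
        }
    }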
I think it's great!!! I use it in every .NET project I have by default. It's in my default solution-wide Directory.Build.props file, so I don't even have to think about it in any new project; it's just there already.
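For anyone who wants the same setup, a minimal sketch of such a Directory.Build.props, assuming the analyzers come in via the Microsoft.CodeAnalysis.FxCopAnalyzers NuGet package (the version shown is just an example):

    <!-- Directory.Build.props at the solution root; MSBuild picks it up
         automatically for every project underneath, so each one gets the analyzers. -->
    <Project>
      <ItemGroup>
        <!-- Example analyzer package and version; PrivateAssets keeps the
             analyzers out of consuming projects' dependency graphs. -->
        <PackageReference Include="Microsoft.CodeAnalysis.FxCopAnalyzers"
                          Version="2.9.8"
                          PrivateAssets="all" />
      </ItemGroup>
    </Project>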
Me too, I think it's great. An advanced developer might find some of the warnings unnecessary (myself included), but you can just suppress them if you know what you're doing. The real benefit is for junior devs who don't understand the implications of their code (not properly implementing IDisposable, for example). I probably won't use it in older, existing projects because I'd be overrun with warnings, but I always use it in new code.
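Since IDisposable came up: one shape of the pattern the analyzers check for, sketched for the simple case of a sealed class that owns only managed resources (the class itself is hypothetical):

    public sealed class LogReader : IDisposable
    {
        private readonly System.IO.StreamReader _reader;
        private bool _disposed;

        public LogReader(string path)
        {
            _reader = new System.IO.StreamReader(path);
        }

        public string ReadLine()
        {
            if (_disposed)
                throw new System.ObjectDisposedException(nameof(LogReader));
            return _reader.ReadLine();
        }

        public void Dispose()
        {
            if (_disposed)
                return;            // make Dispose safe to call more than once
            _reader.Dispose();     // release the owned resource exactly once
            _disposed = true;
        }
    }

A sealed class with only managed members can get away with this simple form; the fuller Dispose(bool)-plus-finalizer dance only becomes necessary once unmanaged resources or inheritance enter the picture.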
We use it and have over 75 projects in our solution, and so far it's been, well, a non-issue. (I won't say "great" since that implies too much.)
I can't speak to the startup times (they have always been a bit slow for my liking), but the most important thing for us is that we use a custom ruleset so it doesn't complain about dumb things like the way we prefer to format code (something along the lines of the sketch below).
For us it's been an important tool to help keep things consistent.
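For anyone curious what such a custom ruleset looks like: a small example .ruleset file, with rule IDs and actions that are illustrative picks rather than this team's actual configuration:

    <?xml version="1.0" encoding="utf-8"?>
    <RuleSet Name="Team Rules" Description="Downgrade rules we disagree with" ToolsVersion="15.0">
      <Rules AnalyzerId="Microsoft.CodeAnalysis.FxCopAnalyzers"
             RuleNamespace="Microsoft.CodeAnalysis.FxCopAnalyzers">
        <!-- CA1707: identifiers should not contain underscores - we like them, so turn it off -->
        <Rule Id="CA1707" Action="None" />
        <!-- CA1305: specify IFormatProvider - keep it, but only as informational -->
        <Rule Id="CA1305" Action="Info" />
      </Rules>
    </RuleSet>

Each project (or a shared props file) then points at it with a <CodeAnalysisRuleSet>TeamRules.ruleset</CodeAnalysisRuleSet> property.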
Same here, I'm pissed too - I totally agree with you, Mike - and I like FxCop. Everyone who says "oh, I had 100 warnings then" has written code that's "not good". Normally, out of 100 FxCop warnings, only 3-4 are safe to ignore…
But this update mess with VS 2019 is complete crap.
It slowed things down to a dead crawl and gave me over 8,000 warnings. Of those, roughly 7,800 were complete rubbish, yet considered so Very Very Very important (in someone's imagination) that they couldn't be turned off. I pitched it after a few hours.
It seems to me that there's a reasonable argument that the thing that creates the most bugs in software is the fear of introducing bugs into the software. The fear is not unreasonable either. If you do semi-major surgery on a large piece of code, you're almost guaranteed to introduce new bugs, and possibly even some big, hairy ones with painful consequences.
But that fear of semi-major (up to major) surgery so often seems to push folks toward making only incremental and/or localized improvements. And that localized approach also tends to create islands or layers of disparate style, technique, and tooling. Newer code wants to move forward but can't pull the rest along. That doesn't mean folks are ignoring the cracks in the mortar between the parts; they're just so loath to pull it all apart and put it back together because of the possible damage.
In the end, does that ultimately lead to worse software? I think it does. But of course companies don't sell software in the end, they sell it now and have to deal with the consequences of that. And this is one of those scenarios where there kind of isn't a middle path. The middle kind of becomes the muddle that I describe above, and there's really no moderate way to fundamentally re-tool and you just have to take the pain and get it over with.
The optimal thing would be to just start a new code base, taking all the lessons learned and building it right. But that's probably a fool's paradise. It hardly ever happens that way. There's the expense, and the complexity, and the teasing out of all the intricate details that have been woven into the code and perhaps not really remembered by anyone, etc... Not being able to give up the folks who really understand the current code base, so maybe different folks work on the new one without that deep understanding of what went wrong the first time, rolling their eyes at the old school crowd and their outdated concerns. And the time scale would likely have to be way too short in order to be commercially viable, so it may end up just being a new and expensive collection of different compromises.
Or of course Version 2 Syndrome with a vengeance is a possibility, where it takes twice as long to have enough meetings to decide on further ways to explore possibilities for a deeper understanding of possible modalities for scaling leverage of something or another than it would have taken to just give a small crew of talented people a room and a lot of coffee and leave them alone.
Maybe in the end, the above does happen, it's just that a different company does it. Anyhoo, I'm rambling while waiting for my coffee...
It shows, as in, I usually just refactor things to my liking. But this refactoring tendency of mine has been known to irk people... because, you know, we don't do things "that way" here
(whatever "that way" is... I think it stands for "I wouldn't have written it that way")
A thoughtful post. A former colleague of mine called it "verification inertia", which I think was very apt. Software would be refactored more frequently if it were easier to retest. But tests are too often lacking, or the test environment too difficult to set up.
Wholesale rewriting of code is fraught with peril. It's not often attempted, and I've seen it fail more often than succeed. Here's something I wrote about it last month: [^]
I have rewritten things a couple of times, and in my experience it was worth every second I invested in it.
Performance increases, load reductions, clarity and ease of maintenance...
But I was lucky: I was mostly a one-developer team (in two of those rewrites I had a couple of additional programmers I was training), I had fought the bugs of the legacy versions long enough to be 100% sure of what was definitively wrong, and I was pretty confident about what to change and how to change it before I got started.
Now I am facing a huge rewrite of a more or less unknown system... the original programmer (over 15 years developing this) is still here and 100% agrees with me that we can't add some of the new features as it now stands.
I think we are a good team; we think in similar ways and can talk about almost anything. We each listen to the other's ideas and analyze all aspects. We both like to think about the structure first and turn ideas around to look at them from different points of view.
He has been here a long time and knows everything about the current version, so he can tell from the very beginning whether something is worth trying or is not going to work, but he is open-minded. And I am new to the project, so I don't have old habits and am not blocked / blinded by established structures, so many of my "why not...?" or "have you considered...?" questions are a kind of fresh air that he likes and that brings him new ideas.
I really think we can make a huge improvement and expand the software's reach into several new fields, if (a big IF) we are allowed to do it. We are the only developers in the department, each with a bunch of daily routines / tasks to keep us busy, and we estimate that we will need at least 1.5 years to do the rewrite (with around 30% to 40% dedication alongside the everyday work), and maybe another year to add the features we are sure we need to keep or even increase our current share of the field in the mid-term future. And we know, too, that without that rewrite we might become "obsolete", because we won't be able to meet the increasing requirements of future projects / use cases.
But... sadly not our decision.
Let's see what happens
If something has a solution... Why do we have to worry about it? If it has no solution... For what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.