The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
1. The lounge is for the CodeProject community to discuss things of interest to the community, and a place for the whole community to participate. It is, first and foremost, a respectful meeting and discussion area for those wishing to discuss the life of a software developer.
The #1 rule is: Be respectful of others, of the site, and of the community as a whole.
2. Technical discussions are welcome, but if you need a specific programming question answered, please use Quick Answers[^]; to discuss your programming problem in depth, use the programming forums[^]. We encourage technical discussion, but this is a general discussion forum, not a programming Q&A forum. Posts will be moved or deleted if they fit better elsewhere.
4. No politics (including enviro-politics[^]), no sex, no religion. This is a community for software development. There are plenty of other sites that are far more appropriate for these discussions.
5. Nothing Not Safe For Work: nothing you would not want your wife/husband, your girlfriend/boyfriend, your mother, or your kid sister seeing on your screen.
6. Any personal attacks, any spam, any advertising, any trolling, or any abuse of the rules will result in your account being removed.
7. Not everyone's first language is English. Be understanding.
Please respect the community and respect each other. We are of many cultures, so remember that. Don't assume others understand you are joking, and don't belittle anyone for taking offense or being thin-skinned.
We are a community for software developers. Leave the egos at the door.
Honestly... it gets worse and worse and worse with Microsoft.
What was wrong with the code analysis in VS2017?
* perfectly integrated, it just runs with the build
* no additional installs (or NuGet mayhem) needed
* compiler warnings and analysis results all there for any build log
And now?
* "install this extension" _or_ "you can do it as a NuGet package"
* a deprecated-analysis warning in each and every project
* go there, click install for this, and configure that
* VS startup time has tripled since this crap was installed
In a multi-product environment with 100+ solutions, each opened at least once a week (throughout the team), the NuGet mayhem is not an option - we don't want hundreds of copies of the same crap floating around on the dev machines.
The CI server plays the song of death
The performance is down to the late 90's
WHY... microsoft... WHY?
Do you really want to push us all to the Java world with IntelliJ? It was already light-years ahead of Visual Studio, and now you've built the Hogwarts Express from VS to IntelliJ.
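For what it's worth, one way around having a copy of the analyzer package in every project (this is just a sketch, not something from the post — the package name and version shown are illustrative) is a single `Directory.Build.props` at the repository root; every project below it inherits the properties, so the analyzers are declared once instead of per-project:

```xml
<!-- Directory.Build.props at the repository root. All projects under this
     directory pick it up automatically, so the analyzer configuration lives
     in one place instead of in 100+ csproj files. -->
<Project>
  <PropertyGroup>
    <!-- With the .NET 5+ SDK these built-in switches may be enough on their own -->
    <EnableNETAnalyzers>true</EnableNETAnalyzers>
    <AnalysisLevel>latest</AnalysisLevel>
  </PropertyGroup>
  <ItemGroup>
    <!-- For older-style projects, reference the analyzer package here once;
         version is illustrative. PrivateAssets keeps it out of consumers. -->
    <PackageReference Include="Microsoft.CodeAnalysis.NetAnalyzers"
                      Version="5.0.3"
                      PrivateAssets="all" />
  </ItemGroup>
</Project>
```

It doesn't fix the startup-time complaint, but it does keep the analyzer setup in one file per repository rather than one per project.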
It seems to me that there's a reasonable argument that the thing that creates the most bugs in software is the fear of introducing bugs into the software. The fear is not unreasonable either. If you do semi-major surgery on a large piece of code, almost guaranteed you'll introduce new bugs, and possibly even some big, hairy ones with painful consequences.
But that fear of semi-major (up to major) surgery so often seems to push folks towards only making incremental and/or localized improvements. And that localized approach also tends to create islands or layers of disparate style, technique, and tools. Newer code wants to move forward but can't pull the rest along. That doesn't mean folks are ignoring the cracks in the mortar between the parts; they are just so loath to pull it all apart and put it back together because of the possible damage.
In the end, does that ultimately lead to worse software? I think it does. But of course companies don't sell software in the end, they sell it now and have to deal with the consequences of that. And this is one of those scenarios where there kind of isn't a middle path. The middle kind of becomes the muddle that I describe above, and there's really no moderate way to fundamentally re-tool and you just have to take the pain and get it over with.
The optimal thing would be to just start a new code base, taking all the lessons learned and building it right. But that's probably a fool's paradise. It hardly ever would happen that way. The expense, the complexity, and the teasing out of all the intricate details that have been woven into the code and perhaps not really remembered by anyone, etc... Not being able to give up the folks who really understand the current code base, so maybe different folks work on the new one and don't really have that deep understanding of what went wrong the first time, rolling their eyes at the old-school crowd and their outdated concerns. And the time scale would likely have to be way too short in order to be commercially viable, so it may end up just being a new and expensive collection of different compromises.
Or of course Version 2 Syndrome with a vengeance is a possibility, and it takes twice as long to have enough meetings to decide on further ways to explore possibilities for a deeper understanding of possible modalities for scaling leverage of something or another, than it would have to just have given a small crew of talented people a room and a lot of coffee and left them alone.
Maybe in the end, the above does happen, it's just that a different company does it. Anyhoo, I'm rambling while waiting for my coffee...
It shows, in that I usually just refactor things to my liking. But this refactoring tendency of mine has been known to irk people... because, you know, we don't do things "that way" here
(whatever "that way" is... I think it stands for "I wouldn't have written it that way")
A thoughtful post. A former colleague of mine called it "verification inertia", which I think was very apt. Software would be refactored more frequently if it were easier to retest. But tests are too often lacking, or the test environment too difficult to set up.
Wholesale rewriting of code is fraught with peril. It's not often attempted, and I've seen it fail more often than succeed. Here's something I wrote about it last month: [^]
I've got a mix of old projects needing new features and new projects that my customers keep coming up with. I keep having nightmares where a line of end users is waiting in front of my desk, when another comes in and breaks line to ask for another report or program tweak... a fight breaks out and I find myself paralyzed... then I wake up!
It's the new stuff that seems to take forever, mostly because there's no roadmap...unlike the rest of you, the only project specs I've ever gotten were handwritten on scratch paper or bar napkins...if I'm lucky, there might be a drawing!
Personally, I'd rather add features/fix bugs in existing projects than work on new stuff.
When asked, I always multiply a reasonable time estimate by 3.
In classic waterfall (the thing before agile/non-planning/ad hoc), writing specs is the devs' work. From that, a basic design is formed, and from that, a technical design. AFAIK, agile skips that and goes directly to implementation.
Makes estimating so much harder; it's like guessing the date you'll finish a book filled with sudoku puzzles, and putting a price tag on it. That sounds a lot like gambling.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
That should NEVER be the devs' work!
Especially in a waterfall environment, a business analyst (or more than one) should talk to users, stakeholders, etc., and only when the spec is finished hand it over to development.
In fact, if a developer strays from the written specs, a tester should make note of it, and either the software should be fixed or the specs should be changed accordingly by the analyst.
A developer is probably the least suitable person to write specs.
Agile doesn't skip it, it just doesn't plan too far ahead.
It's more like, let's write the specs for this particular feature.
Which features are getting specced or built next is up to the product owner who decides on priority (also with input from users and/or stakeholders).
The specs are then made into stories (or perhaps the stories make up the specs) which are estimated by the developers.
If any story isn't fully clear it goes back to the product owner who can then try to clear things up, probably by talking to the users.
In agile, you also don't estimate time, but complexity, in terms of story points.
The story should be done by the end of the sprint (usually two weeks) and every sprint can have x story points.
Estimating anything other than the complexity of user stories isn't agile; it's a team trying to do agile in an otherwise waterfall company.
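To make the story-point idea concrete, here's a hypothetical sketch (the function names and numbers are mine, not from the thread): track the points a team actually completes per sprint, and use that average "velocity" to forecast how many sprints a backlog will take. Note that the output is a sprint count, not a calendar promise.

```python
# Hypothetical illustration of story-point forecasting: teams estimate
# complexity in points, and past throughput ("velocity") turns a backlog
# size into a rough sprint count.
from math import ceil

def velocity(points_per_sprint):
    """Average story points completed per sprint over past sprints."""
    return sum(points_per_sprint) / len(points_per_sprint)

def sprints_needed(backlog_points, points_per_sprint):
    """Forecast how many sprints the remaining backlog will take."""
    return ceil(backlog_points / velocity(points_per_sprint))

# A team that finished 21, 19, and 23 points in its last three sprints
# has a velocity of 21, so an 80-point backlog is roughly 4 sprints.
print(sprints_needed(80, [21, 19, 23]))  # -> 4
```

At two weeks per sprint you can of course translate that into a date afterwards, but the estimate itself stays in complexity terms.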
Consulting must be a different beast. I am the analyst, stakeholder, developer, support department, and tester. I write my own specs, usually on the fly! Unfortunately, I am also in charge of the documentation.