Frequently, especially if the syntax for something is tricky (gives a stern look at C++ templates). I also tend to iterate a lot, given that our products are complicated and mistakes in the 'plumbing' aren't always easy to find.
At my current workplace, there is one hard rule: You do not commit any code to the code base unless it compiles cleanly! So the question comes down to: How often do you commit?
In earlier jobs, I was used to committing when a module was reasonably completed and tested. You wouldn't find very many updates to each file. So when I switched jobs about ten years ago, it came as a surprise to me that my colleagues couldn't understand why I hadn't committed the code change I had made before lunch - they wanted to verify that it wouldn't break their code. I was frowned upon if I didn't commit at least several times a day. Sometimes, a colleague would hang over my shoulder to watch me type in the code, compile it and commit it, before going back to his own desk to check out my new code.
I quickly learned that this was the appropriate working mode in this company. But I am not going to defend it as an absolute rule. Not even as a main one. And you wouldn't believe the amount of processing power required when 100+ developers commit a dozen times a day, and each commit triggers a complete backend rebuild, module testing, linking and integration tests of the entire system.
The documentation guys, too: For a while, they had a system that rebuilt all the volumes of all the variants of the documentation on every commit - a job requiring more than an hour of CPU time on our fastest build agent. The technical writers had made a habit of committing for every paragraph they changed. We forced them to restructure their builds so that an edit only rebuilt the volume with the changed paragraph, and later only those variants of that volume where the paragraph actually occurred. Still, the doc guys had to get their own multi-CPU blade server to get their jobs through fast enough.
With IntelliSense and other IDE features, compiling (though it no longer means what it used to) is pretty much unnecessary until you're ready to test. And of course in duck-duck-run languages, there is no such thing as compiling.
Heavily depends on the situation. When I'm doing something somewhat complicated, like mixing libraries using different types (in a type-safe language like Delphi or C#) or nesting loops or anything I don't completely see through at the given moment, I compile very iteratively (hit F9, F5 or whatever key the IDE uses every couple minutes) to make the compiler catch the harsh mistakes. But when it's something I'm comfortable with and can firmly say I know exactly what's going on with what I'm doing, I may compile half an hour later. Or an hour.
Sometimes, I create a prototype construct, compile to see if it works at all and then, now sure that it works, expand it to what I wanted in the first place. Same goes for the inevitable case where I have to repeat myself a few times (because creating a truly generic solution would take way more time).
I used to compile constantly, but I also used to be young and made lots of mistakes, and compiling felt safe.
Nowadays I just type out whatever I'm working on in a single go and rely on IntelliSense to catch my mistakes.
In general, I don't have compiler errors anymore on a day-to-day basis.*
I never liked working through compiler errors one at a time.
I'm IntelliSense-4-life now.
* I now get compiler errors when it's 2AM and I'm drunk.
Like many others, frequency of compilation depends on what I'm doing. If I'm writing a processing routine that has no visual component, I probably won't compile until I've completed the first pass on it. This could be hours or days between compilations.
If I'm doing front end work? Much more frequently, as "what you think you're gonna see is not necessarily what you're gonna get".
It's not about compiling, per se, but about testing.
Some people like to write an entire chunk of functionality, then do all their testing, iterating at that level when they find the problems.
Some, (myself included), like to write a small chunk of the overall functionality, test it thoroughly, then move on to the next small chunk. This approach probably means less unit-level testing, but more integration testing, I suppose.
I think everyone develops their own habits naturally. For me, as a Linux C/C++ developer, if it's maintenance work, I compile fairly frequently as I make my code changes. We work with Eclipse and CMake, so it's easy to just hit the build button.
But when I am adding a new feature and working from class diagrams (Dia, Visio-type tools), compiling normally happens at the end, after I've coded a nearly complete class and added it to the CMakeLists.txt file.
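For anyone unfamiliar with the CMake side of that workflow, it amounts to something like the fragment below - the target and file names here are made up for illustration, not from an actual project:

```cmake
# Hypothetical CMakeLists.txt fragment; "mymodule" and the file names
# are placeholders. Listing the new class's source file on the target
# is what makes the IDE's build button pick it up.
add_library(mymodule
    src/ExistingStuff.cpp
    src/NewFeature.cpp      # the nearly complete new class, added at the end
)
target_include_directories(mymodule PUBLIC include)
```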
I think if people created a well fleshed-out class diagram first, compiling would happen less often while you start coding, but then more often afterward.
When I was young and inexperienced, it was more like a lottery whether my code would run or crash. Knowing that on a subconscious level, I didn't want to be reminded of how inexperienced I was, so I rarely compiled and never debugged unless there actually was a bug. But finding the bugs I produced sometimes took more than twice as much time as it took to produce them, and I believe some of them were never found by anyone at all.
Today, with more than 20 years of experience, when writing production-level code (so this doesn't go for clickdummies etc.), I want to be aware of my code quality at any time, which means I compile and run *very* often and execute all newly written code in the debugger step by step, having a look at the state of all objects and local variables involved. So lottery no more!
This significantly decreases my development speed, by, let's say, a factor of 3, while increasing my code quality only by a factor of 2. But(!!) economically, this is still sensible because it spares the quality assurance and technical support people some effort and prevents customers from running away due to poor code quality and frequent crashes.
It depends, but in general, as soon as all the related changes needed for a successful compile have been made, I try compiling. In well-designed code, that time usually comes very quickly, as very little code needs to be changed. In really poorly designed code, it might be a week.
Create a project, figure stuff out, finish (or don't) project.
Create a new project, copy paste everything you need from first project because you already figured this stuff out, finish (or don't) project.
Create a third project, copy paste everything you need from second project because the first one is ([sarcasm]obviously[/sarcasm]) outdated and shouldn't be used as a reference, finish (or don't) project.
Create an n-th project, copy paste everything you need from the (n-1)-th project because the (n-2)-th project is outdated and shouldn't be used as a reference, finish (or don't) project.
It's mostly stuff like authentication, database setup, renaming default cookies, some custom routing, setting up your DI framework, etc.
Mostly the Startup class in ASP.NET Core or the Global.asax/App_Start classes in .NET Framework.
I usually shelve my changes or branch them, then copy from shelveset/branch to trunk/main, etc. I don't create separate projects unless I have to, which is rare.
You mean like, some code that you can look up later?
I didn't mean create new projects just to test/save some code, but because the business actually requires them.
I work for multiple customers and all do microservices (because I've said so)
I created at least ten new serious projects that will be/are in production this year alone and they all need "the basics" like authentication, database, DI, etc.
I've considered writing a utility package and re-use that, but it's not really worth it as it's mostly two lines here, two lines there, with usually slightly different parameters as well.
I will however sometimes create separate console apps to test services, web APIs, etc. if I don't use a unit test for that, which can be convenient, even though it is not really a "unit test".
I do that too sometimes. It's more of an integration test I guess.
At work we've got a starter solution with all the stuff built in, to be used as a baseline for new development, so we don't have to reinvent the wheel and so all our solutions stay organized somewhat similarly, making it easier for anyone switching projects to get up to speed.
Nah, I'm far too lazy for that. I would instead do the following:
Create a project, figure stuff out, finish (or don't) project.
Have requirement to do second project with similar stuff.
Put second project on hold.
Create "template project" that has all the figured out stuff in it (auth, setup, etc.)
Copy template project and rename it project 2.
Repeat as needed, and update template if new "common stuffs" are identified.
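The copy-and-rename step above could be sketched as a couple of shell commands - `TemplateProject`, `Project2`, and `Startup.cs` are made-up names for illustration, and a real rename would also cover file and folder names:

```shell
set -e
# Stand-in "template project" so the sketch is self-contained.
mkdir -p TemplateProject
printf 'namespace TemplateProject { }\n' > TemplateProject/Startup.cs

# Copy the template, then rename the placeholder in every file that uses it.
cp -r TemplateProject Project2
grep -rl TemplateProject Project2 | while read -r f; do
  sed -i.bak "s/TemplateProject/Project2/g" "$f" && rm -f "$f.bak"
done
cat Project2/Startup.cs   # namespace Project2 { }
```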
Wait, you mean like a regular project that you copy/paste and then rename?
Not like a Visual Studio template?
In that case it's almost what I do too, except I copy the code rather than the template, because I can't be bothered with renaming templates and namespaces.