|
I feel like such a tool would belong in the weird and the wonderful.
I hate to say this, but if your includes are so heavily dependent on ordering, you are almost certainly due for a restructuring of your code.
For example, it might be better to organize the includes as more of a tree, in terms of what includes what, than you currently have.
There are a number of ways to deal with it, but it all comes down to structure.
Edit: I'm not saying this is certainly the issue in your case. It just smells from here. My spidey sense is tingling.
To err is human. Fortune favors the monsters.
|
By "tree" I assume you mean nested/transitive, if that's the right term. Doesn't that leave me with the same problem, i.e. each file, whether .cpp or .h, still needs certain #includes, and in a certain order? As you are clearly a better programmer than I am, there must be something I don't understand. As for relying on the order of #includes: I gave up attempting to make each one independent of the others after scratching my head raw.
|
It leaves you with the same problem, but it potentially gives you more organization.
The better alternative is to reduce the number of cross-header dependencies, or to restructure the dependencies into common headers included by each of the downstream headers.
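For instance, a rough sketch of the common-header idea (the header names here are made up, not from your project):
// common.h -- hypothetical shared header, included first by every downstream header
#ifndef COMMON_H
#define COMMON_H
#include <cstdint>
#include <string>
#endif /* COMMON_H */

// widget.h -- a downstream header that no longer worries about ordering
#ifndef WIDGET_H
#define WIDGET_H
#include "common.h"   // pulls in the shared dependencies in one known-good order
struct Widget
{
    std::string name;
    std::uint32_t id;
};
#endif /* WIDGET_H */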
This of course isn't possible if you don't "own" the code in those headers, and in any case, it's probably a lot of work to restructure it as above.
So I'm not saying your tool doesn't have merit. I'm just saying if you need it, you might want to take a second look at how things are structured.
To err is human. Fortune favors the monsters.
|
"my spider sense..." lol. well put
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
Technically it was "spidey"; it comes from the old Spider-Man comics.
To err is human. Fortune favors the monsters.
|
I'm sure there are tools to do this. Maybe something like CPPDepend? Dependency Graph
That seems to be a commercial tool and I've not used it, so I can't comment on whether it actually works, but it looks like it might give you what you're after. Searching for "C++ include dependency graph" or similar might also lead you to what you're looking for.
Keep Calm and Carry On
|
This bloody #include hell is a huge reason why I'm looking forward to widespread standard module support. Until then, I combine all the external includes into one header & include that.
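Something along these lines, for instance (the header name and the particular third-party includes are just placeholders for whatever your project actually uses):
// external.h -- hypothetical single wrapper for all third-party includes,
// kept in the one order that is known to work
#ifndef EXTERNAL_H
#define EXTERNAL_H
#include <boost/asio.hpp>
#include <fmt/format.h>
#include <nlohmann/json.hpp>
#endif /* EXTERNAL_H */

// every .cpp then just starts with:
#include "external.h"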
|
Does the order of the includes matter?
If not, you could include all of them to check that the project compiles, then remove them one by one, adding each back if removing it breaks the compile.
40! is 8.1591528e+47 according to Google. That would take a while to work through.
|
Not a bad idea, but how do you keep circular dependencies from cropping up in this case? Not all of them are caught at compile time, so you run the risk of introducing them into your application with a "just get it to compile" mindset.
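The classic case is two headers that include each other: with guards in place it doesn't recurse forever, but one of them ends up compiling against an incomplete type, and whether that bites can depend on which header happens to be included first. A forward declaration is the usual way to break the cycle (hypothetical names):
// a.h
#ifndef A_H
#define A_H
class B;              // forward declaration instead of #include "b.h" breaks the cycle
class A
{
    B* peer;          // a pointer or reference only needs the declaration
};
#endif /* A_H */

// b.h
#ifndef B_H
#define B_H
#include "a.h"        // B stores an A by value, so it needs the full definition
class B
{
    A owner;
};
#endif /* B_H */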
|
You could try the technique used in the Git project.
Firstly, they do have two top-level `.h` files that are always included as the first lines in any other file that needs includes. This provides a level of commonality across the project, and a consistent inclusion order.
Secondly, and slightly more importantly for you, all the included `<file>.h` files have a preamble/suffix of:
#ifndef <FILE>_H
#define <FILE>_H
... stuff ...
#endif /* <FILE>_H */
Thus stuff is included only once and a hierarchy of usage is created.
You can also add an #else /* warnings */ branch for extra feedback, according to local taste.
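Concretely, for a hypothetical parser.h the pattern looks like:
#ifndef PARSER_H
#define PARSER_H
/* ... declarations for parser.h ... */
#else
/* optional: emit a diagnostic here (e.g. with #warning) if you want feedback
   every time the guard suppresses a repeat inclusion */
#endif /* PARSER_H */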
|
Instead of inserting until there are no errors, have you tried deleting until there's an error and then re-inserting it? When you can't delete any without causing an error, you've finished.
|
When I want to debug things related to header inclusion, I typically run the compiler in preprocessor-only mode and scan the output.
cl /P
... or ...
gcc -E
You've tried this? The preprocessed output sometimes shows some interesting things and might be good input for a tool chain.
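For what it's worth, both toolchains can also dump the include hierarchy directly, which pairs nicely with the preprocessed output (assuming a file called main.cpp; adjust to taste):
g++ -E main.cpp > main.ii        (full preprocessed output written to a file)
g++ -H -c main.cpp               (prints the include tree, indented by depth, on stderr)
cl /P main.cpp                   (MSVC: preprocess to main.i)
cl /showIncludes /c main.cpp     (MSVC: list every header as it is opened)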
-- Matt
|
So because I don't want to selectively choose what should be encrypted vs what I shouldn't bother encrypting, my entire backup drives have been encrypted with TrueCrypt. I've stuck with this for over a decade.
TrueCrypt development ceased years and years ago, and (it seems to me) the most popular replacement (branched off from TrueCrypt) is now VeraCrypt.
It's been long enough; I really should be moving my backup drives from TC to VC.
Has anyone here done that? What was your approach?
I have two backup drives - identical to each other. The way I see myself doing it is:
a) Format DriveA with VC
b) Mount DriveB with TC
c) Copy the content of DriveB to DriveA
d) Dismount everything
Once I'm confident all the data's been transferred, repeat the process in the opposite direction - that is,
a) Format DriveB with VC
b) Mount DriveA with VC
c) Copy the content of DriveA to DriveB
d) Dismount everything
Or just use something like CloneZilla to clone DriveA back to DriveB.
At that point, TC is completely out of the picture.
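If the old TC volume and the freshly formatted VC volume end up mounted as, say, X: and Y: (made-up drive letters), the copy step itself is just a mirror copy; a rough sketch on Windows:
robocopy X:\ Y:\ /MIR /R:1 /W:1 /LOG:tc-to-vc.log
/MIR mirrors the source (including deletions), /R:1 /W:1 stop it from stalling forever on an unreadable file, and the log makes it easy to verify afterwards. Just be sure /MIR only ever points at the empty VC volume, since it deletes anything on the destination that isn't on the source.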
The reason I want to copy backup drive to backup drive, rather than directly backing up my live drive to the first backup drive, is that in order to do a full backup I need to dismount some files (primarily VMs), and I don't want to leave my system with those VMs dismounted for the entire time it's gonna take to back up the whole thing.
Once the backups are on VC, I'll take the time to run my backup script, which only re-synchronizes modified files (and typically takes just a few minutes), so that should minimize the downtime while my backup gets re-synced.
Would you do it differently?
|
Just curiosity: Doesn't having your backups encrypted increase the risk of not being able to use them for recovery?
I'd opt for physical security (lock them up off-site, for example) instead.
Software Zen: delete this;
|
Only if the software is buggy and can't read back what it wrote itself. TC/VC are so mature at this point, I really don't worry about that.
If you're thinking about disk failures...then (a) that's why I do my backups in pairs and (b) an unreadable sector is simply unreadable, whether it's encrypted or not. And things like SpinRite don't care whether a sector is encrypted or not - these tools only concern themselves with trying to recover raw bits. They don't even know whether they're reading a FAT partition, NTFS, ReFS, ZFS, whatever.
|
I use SyncBack (free). It allows a number of different options, so you can copy one way or do a sync. It also allows you to do a simulated run and logs what will change.
Just curious, why have you encrypted the whole disk? Doesn't that make access a whole lot slower?
// TODO: Insert something here Top ten reasons why I'm lazy
1.
|
yacCarsten wrote: I use SyncBack (free).
I'm not sure how this helps here TBH. I'm talking about migrating from one encryption system (TrueCrypt) to its successor (VeraCrypt).
yacCarsten wrote: why have you encrypted the whole disk?
As per the top of my post...I don't want to take the time to selectively decide what needs to be encrypted (eg, my banking info), vs what's fine to remain unencrypted (eg, setup programs, which just happen to exist on the same disk). Just encrypt the whole thing and be done with it.
yacCarsten wrote: Doesn't that make access a whole slower?
Possibly. Could I actually measure it, especially nowadays? Maybe if I were transferring the entire content of the drive, which only happens in extremely rare circumstances like this one, and which is pretty much a one-time operation. Besides, they're backup drives; they're only used when I'm updating my backups or when I find I need to recover a few very specific files (which has happened maybe...3 times since I started maintaining backups decades ago?).
|
Every morning I open Code Project and start with the news.
Every morning I find a new application or framework or both.
My question is: how many of those are actually used by developers other than the people who created them?
I believe that some of those created last year are still in use today, but not many.
Really! I don't intend to criticize those who developed them; however, the learning curves have got to be tremendous.
Am I a hopeless luddite?
What do you think?
|
That there are too many frameworks? Or that he is a hopeless Luddite?
You could always embrace the power of “and”!
If you can't laugh at yourself - ask me and I will do it for you.
|
The same as him. 
|
A framework really helps if it's intended for your domain and has a low surface-to-volume ratio. Without one, the outcome is superfluous diversity, which makes it hard for software to interoperate without writing glue that would otherwise be unnecessary.
Ideally, a framework should be developed internally so that it can evolve to suit the needs of your applications. But if an external framework is a good fit, and if it's responsive to its users, it's worth considering.
The worst outcome is a team without a framework. It can happen because management thinks everyone should be developing features or because no developer has enough domain experience to develop a framework.
|
When I read about software code having a "low surface to volume ratio", I know I have stumbled into Pseuds Corner. 🙄☹️
|
What does that even mean: "low surface to volume ratio"???
Steve Naidamast
Sr. Software Engineer
Black Falcon Software, Inc.
blackfalconsoftware@outlook.com
|
Good question! It's just techno-babble as far as I can make out.
|