tracked down the one thing that was plaguing me.
Documentation? f***, all I can find are useless Microsoft help articles that are just informational to the point of flying you into the mountain. Return me to the days of DEC, when one bookcase held the complete knowledge.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
well, you can just read the header files.
CI/CD = Continuous Impediment/Continuous Despair
|
lol, true. But digging into the MFC afx*.h etc. header files, where "those people" raised macros to an art form (I'm being generous), is tedious.
|
Hi there,
I have a question regarding C++ "language updates". To phrase what I want to know properly:
We currently have an external employee who writes C++ libraries for us, which are used as "drivers" to handle communication via Ethernet, USB and Bluetooth.
Those libraries are currently only available as 32-bit DLLs, which sparked a discussion after we found out that in some cases we would need them as 64-bit as well.
To get to the point: said colleague also mentioned that those libraries maintain backwards compatibility with systems below Windows 10 (which is our current limit of support), namely they run even on Windows XP and probably further back, written with VS 2005 IIRC.
Since I am a .NET developer I cannot really grasp the necessity for doing so, nor estimate the potential risks, flaws or performance issues that come with that compatibility.
So I want to ask you whether my concerns are correct or totally wrong when it comes to using "very old" C++ instead of modernizing it and only ensuring runtime compatibility with "current" OSes.
I personally feel that, since C++ gets updated every now and then, there must be a reason for doing so, as well as, of course, improvements to the final product the compiler spits out if you use a newer (not newest) C++ version.
As a bonus, if someone could take the time to answer this as well: would you suggest, consider or reject switching parts of the code to C# and .NET where possible, so we can rely on the functionality Microsoft has already built into the framework?
As a note: we leave aside the fact that nobody else on the team has the hardware to keep coding or even compile this old stuff.
In any case, thanks a lot for reading and/or answering.
Rules for the FOSW
MessageBox.Show(!string.IsNullOrWhiteSpace(_signature)
? $"This is my signature:{Environment.NewLine}{_signature}": "404-Signature not found");
|
This is really a broad topic.
I can only jot down a few observations.
C++ language updates are not bug-fixes: they improve the language. If you are starting a new project from scratch, then using modern C++ is a real advantage. On the other hand, migrating a big old (working) project can be painful.
If you need a 64-bit DLL, your colleague could probably build it without using the newest C++ features. Maybe he can, at the same time, keep the existing 32-bit DLL: two builds of the same code.
If Microsoft provides the same functionality my mate's code does, by no means would I continue using the latter.
Does the compatibility argument also apply to your .NET code? I mean, DLL compatibility with old OSes is useless if you cannot run the application on such systems.
My two cents.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
Thanks for the answers, that lit up some dark corners for me.
His proposal is to simply convert the necessary parts to 64-bit as well, so he'll provide both for us, without changing or modernizing anything.
That's what I would have guessed. An example: we have code for BLE communication in such a library, and I know that the Microsoft API, or rather .NET, provides a lot of code and features for BLE communication. So my idea was, since he knows both worlds, to take the effort and convert to .NET, at least the parts that are possible. He wasn't happy...
No, I hope I mentioned it, but our application is bound to another external application that standardises things. Therefore we only support Win 10 and 11, because said "frame application" only runs on those two.
Thanks again for your answer.
|
You are welcome.
|
If you switch to 64 bits, you need to update the whole thing, including external libraries.
IMO, it's not worth converting to 64 bits unless you have real reasons for it (large datasets, hardware requirements...).
HobbyProggy wrote: We leave aside the fact that nobody else from the team would have hardware to continue coding or even compile this old stuff.
lol.
At that point, it's more a business decision than a technical decision.
Your company needs to decide whether it wants to spend money maintaining old code on old compilers, or move everything to a recent compiler and make sure everything stays continuously up to date.
Good luck.
|
Thanks, I may need that.
To answer your first statement: we are required to support both 64-bit and 32-bit. With .NET it's easy: compile for Any CPU and done. I know it's more complicated in C++.
|
As another has pointed out, the changes to C++ are mostly language extensions. For example, before C++11 there was no auto and no range-based for loop. Some of the language updates do address defects in the standard, either clarifying the language or addressing a corner case.
Two things stand out:
1) Going from 32-bit to 64-bit is rarely as simple as changing compiler flags. You may find, particularly if you need to access hardware, that you have to adjust data types: pointers and size_t grow from 4 to 8 bytes, a long grows to 8 bytes on LP64 platforms such as Linux (under Windows' LLP64 model it stays 4 bytes), and if you're using structs you may need to adjust padding.
2) You seem to have a "key man" reliance. Worse, the key man is an external entity. Hopefully you have an escrow arrangement so that, in extremis, you're not in the situation where you have to start from scratch.
It's not clear why the libs need to be compatible with older versions of Windows. One reason might be that the entity providing the library has other clients who need it. If you're the only client, then it might be time to review the deliverables and update contracts/expectations accordingly.
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
Thanks for that.
Yep, I just started there quite recently, but it seems that, because of the tasks I was assigned, I'm the one who finally gets to clean up some old things. And yes, it'll be fun if something unexpected happens.
I'll keep the last bit in mind and will raise this with my superior.
|
HobbyProggy wrote: to ensure communication via Ethernet, USB and Bluetooth.
As described, all of those already exist without customization. One might also wonder whether this custom stuff is secure. Specifically, how is it being tested to ensure that it is secure and will remain so?
HobbyProggy wrote: windows 10 (which is our current limit of support)
Windows 10 runs on 64 bit systems but it also runs on 32 bit systems.
HobbyProggy wrote: We leave aside the fact that nobody else from the team would have hardware to continue coding or even compile this old stuff.
Presumably the company at least has the source code.
And yes, there are risks to the company in not ensuring continuity in case there are problems.
|
jschell wrote: As described, all of those already exist without customization. One might also wonder whether this custom stuff is secure. Specifically, how is it being tested to ensure that it is secure and will remain so?
Yep... Uhm... exactly one of the first things I asked when I heard that. On the plus side, I was able to read the whole BLE traffic with a sniffer when I was asked to measure and verify the time needed for updates over BLE.
jschell wrote: Windows 10 runs on 64 bit systems but it also runs on 32 bit systems.
Yep, I should have mentioned we need both.
jschell wrote: Presumably the company at least has the source code.
And yes there are risks to the company in not insuring continuity in case there are problems.
I am also asking the question because I want to make sure we won't be screwed if something funny happens.
All this info will be used to lay out a plan and strategy so we don't fall off the edge.
|
HobbyProggy wrote: that we will not be screwed if something funny happens.
You need not suggest anything 'funny'.
Heart attack. Fire. Tornado. Flood. Etc.
What happens to the company if one of those happens to that single person?
|
for(int i = 0; i < 3; i++)
{
    vector<thing*>* Nodes = new vector<thing*>();
    thing* Athing = new thing();
    Nodes->push_back(Athing);
}
Do I need to delete the vector once the work is done, or is that not required?
How about the things stored inside the container: do I need to delete those too, or is calling clear() enough?
for(int i = 0; i < 3; i++)
{
    vector<thing*>* Nodes = new vector<thing*>();
    thing* Athing = new thing();
    Nodes->push_back(Athing);
    Nodes->clear();
    delete Nodes;
}
modified 9-May-24 9:22am.
|
Calin Negru wrote: do I need to delete the vector once the work is done?
for(int i = 0; i < 3; i++)
{
    vector<thing*>* Nodes = new vector<thing*>();
    thing* Athing = new thing();
    Nodes->push_back(Athing);
    // do stuff
    Nodes->clear();
    delete Nodes;
}
Yes, you do.
Since you created the vector with new, you need to delete it once the work is done.
|
What happens if I don't delete the vector? Will that cause a memory leak, or is it just allocated memory that is unused and takes extra space? The program doesn't break with an error when I run source code version No 1.
I have unexpected behavior somewhere else in my program and I was wondering if this could be the cause of it.
modified 9-May-24 10:43am.
|
With a memory leak, the memory footprint of the running process increases over time. If the conditions that lead to the leak are encountered often enough, the process will eventually run out of available memory, usually causing an exception of some sort. Unless you're in a very specialized environment, the memory associated with the process gets released when it exits. That means that the overall memory on the system doesn't get incrementally consumed over time. So you really only run the risk of the one process running out of memory, not the system as a whole. I hope I've explained that clearly enough.
Some system calls (and some user-written functions!) use this to their advantage. On the first call they allocate some memory, then reuse it on successive calls. With no cleanup routine, they just rely on program exit to do the memory release. That's why you might get notice of memory still in use at exit when running a program under a memory diagnostic tool like Valgrind. When you trace back to where the memory was allocated, it might be inside something like fopen().
|
Thanks k5054, I think I understand. What I described in the first example is a memory leak, but it's probably not causing problems elsewhere.
|
If by "not causing problems elsewhere" you mean it's not affecting other processes, that's mostly true. You can, of course, run into an Out Of Memory (OOM) situation, where all the RAM and swap is marked as "in use", and Bad Things start happening. Assuming you've got a 64 bit OS with lots of RAM and swap configured, (heck, even a 32 bit OS with good Virtual Memory), that's only likely to happen if you've got a lot of memory allocated.
As a rule of thumb, you should clean up memory when it's no longer needed. Think of it like craftsmanship. A piece of Faberge jewelry shows attention to detail from both the back and the front. Freeing up unused memory is part of the attention to detail, just like closing files after use, for example.
|
On top of what Victor said, look into using std::unique_ptr instead of "naked" pointers. Something like this:
auto Nodes = new std::vector< std::unique_ptr<Thing> >;
Nodes->push_back( std::make_unique<Thing>() );
delete Nodes;
Edit:
It is a bit unusual to "new" vectors. Given they can grow dynamically, in most cases you would write something like:
std::vector< std::unique_ptr<Thing> > nodes;
nodes.push_back( std::make_unique<Thing>() );
When nodes goes out of scope, all Things get deleted automatically.
Mircea
modified 9-May-24 9:41am.
|
Yes, you have to explicitly delete the vectors you allocated using new.
But that's not enough.
Try running the following code
#include <iostream>
#include <vector>
using namespace std;

class thing
{
public:
    thing()  { cout << "thing ctor\n"; }
    ~thing() { cout << "thing dtor\n"; }
};

int main()
{
    for(int i = 0; i < 3; i++)
    {
        vector<thing*>* Nodes = new vector<thing*>();
        thing* Athing = new thing();
        Nodes->push_back(Athing);
        Nodes->clear();   // removes the pointers, but does NOT delete the things
        delete Nodes;     // frees the vector itself
    }
    // output: "thing ctor" three times, "thing dtor" never --
    // the thing objects are leaked
}
|
Thank you guys for your feedback. I think I understand what a memory leak is now.
|
You are welcome.
As suggested, have a look at smart pointers; they could make your coding life easier.
|
Ah memory leaks (and small buffer overruns).
I would point out that your example is fairly obvious. If you have a long-running task, and this code is in some sort of processing loop, you'll see it quickly. What will really bite you in the a$$ are the small leaks. A byte here, a byte there. I live in the embedded world, where customers forget our equipment was installed under their production line 10 years ago. The engineer responsible has either died, retired or moved on to another company. I'm not being morbid; I have stories I could tell you.
The group I work in is a decent group of smart people. Sadly, they never let us into the field to see how the product is actually used. The few times I've seen examples, they always shock me with an "I didn't see that coming" sort of response. What amazes me is: if your customer never turns off your machine, why not set up an area where a test unit runs forever? I guess it falls under diminishing returns.