|
Working in C, and doing it well, requires an unusual level of discipline. For many years I worked in a similar proprietary language, but one with stronger typing and other concepts alien to C (more like Modula, say), which was one reason we could compete with larger firms that used C.
Much of our better software was rather object-oriented, but it was done manually. A struct containing function pointers would be defined; this was effectively an abstract class. A concrete class would populate it with its own functions and register it against a type index so that it could be invoked polymorphically, through an array of such structs. There were even some ad hoc examples of inheritance.
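For anyone who hasn't seen the pattern, here is a minimal sketch of what we built by hand - all names invented for illustration; it's written in the common C subset, so it compiles as C or C++:

#include <stdio.h>

#define MAX_TYPES 16

typedef struct Shape Shape;

typedef struct {                  /* the "abstract class": a struct of function pointers */
    double (*area)(const Shape *s);
    void   (*draw)(const Shape *s);
} ShapeVtbl;

struct Shape {                    /* every object carries its type index */
    int type;
    double w, h;
};

static const ShapeVtbl *registry[MAX_TYPES];   /* the array of such structs */

static double rect_area(const Shape *s) { return s->w * s->h; }
static void   rect_draw(const Shape *s) { printf("rect %gx%g\n", s->w, s->h); }
static const ShapeVtbl rect_vtbl = { rect_area, rect_draw };   /* the "concrete class" */

int main(void)
{
    registry[0] = &rect_vtbl;                  /* register against the type index */
    Shape r = { 0, 3.0, 4.0 };
    printf("area: %g\n", registry[r.type]->area(&r));   /* polymorphic dispatch */
    registry[r.type]->draw(&r);
    return 0;
}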
Building these things by hand was tedious, but it worked well when people took the time. Naturally, there were also horrors like deeply nested functions, a surfeit of global variables, obscure side effects, and bugs fixed by
IF(the conditions under which bug report #aannnn occurs are true)
THEN
update a little of this and a little of that so the code can carry on successfully;
ENDIF;
This problem isn't necessarily restricted to procedural languages, but I find that C++ does a better job of encouraging one to find the root causes of problems instead of working around them, which helps both to keep the code cleaner and to evolve the system. Not to mention that having polymorphism, inheritance, and encapsulation built into the language saves a lot of time!
|
|
|
|
|
Those are some of my thoughts as well. I'm hoping Richard gives his input, based on his experience, because I'm very curious as to why some people stick with C. For instance, I hear Linus say that C is the only language for building operating systems in (or something like that), and I just shrink at the thought of such an endeavor without the help of thinking of things in objects. In my mind I see tons of errors that would be easily avoided using higher-level constructs. But I haven't attempted that type of project, so I can't say much about it. One argument in favor of C may be a smaller memory footprint (~5%?), which might be critical for embedded code, but even then? If you have to build objects by hand like you mentioned, wouldn't those savings be moot?
|
|
|
|
|
Given that C++ is basically a superset of C, Linus is wrong that C is the only viable language for operating systems. "Too much" C++ would unduly degrade performance, but an operating system isn't the only place where this is a consideration.
For memory footprint reasons, Embedded C++ eliminated RTTI, exceptions, and templates. At the time, this made some sense. But with memory as cheap as it now is, I doubt there are many cases where it still does.
|
|
|
|
|
For embedded applications, the amount of memory is a significant factor in the power budget. For several classes of IoT devices (or similar devices on other standards), battery lifetime is essential. If a chip with less memory can give you 30% longer intervals between battery changes, and is still large enough for the job, you'd go for it.
This matters most for RAM sizes; code often resides in flash/ROM, which needs no power to retain its data. But when the flash/ROM is accessed, some solutions have power requirements that depend on the memory size (although not by 30% of the total).
Another aspect is that even though the real cost of manufacturing an SoC is almost independent of the memory size, the vendor will often differentiate customer pricing significantly: customers wanting the 256 Mi version subsidise those who go for the budget 1 Mi version.
(Historical note: Around 1980, in the age of VAX-class superminis, I was working with a company selling a series of machines in three significantly different price ranges. The CPU was identical in all models, but in the budget model the CPU cache was removed - caches were distinct chips in those days. The high-end model was delivered in a double cabinet, with lots of space for peripheral interfaces. I talked with a customer who owned the top model and was convinced that their machine was a lot faster than the midrange model. On learning that the two were identical with respect to CPU power, she seriously considered suing the vendor for fraudulent business practices.)
|
|
|
|
|
I use "embedded" for any system dedicated to a specific purpose, usually with specialized hardware. This ranges from toasters to IoT to smartphones to servers. But I've never seen a formal definition, so YMMV. Even IoT must mean many things, because I've seen articles about Linux for IoT.
|
|
|
|
|
Sure. I consider "embedded" any CPU that doesn't present itself through an explicit user interface to the computer, but instead receives commands from some source other than a human user. Maybe the user pushes a button or rotates a knob, but that is all defined by the function of the device, whether a car, rice cooker, stereo system or whatever. The user is unaware of the CPU; in theory the function could be realized by other means. (E.g. up/down buttons could, in principle, be direct power switches to motors pulling a potentiometer one way or the other.) As long as the device has plenty of power available - including cellphones acting as a central for several sensors - there is no need to worry about the power requirements of a larger RAM. My concern was with the button-cell-powered sensors and the like.
The cellphone itself has quite an extensive power management system: the circuits are organized in several power domains which are turned on and off individually. If some circuit is not required at the moment, it is turned off to save the power of keeping it available. The more (and smaller) power domains a chip defines, the more focused the power management can be, and the more power can be saved. E.g. some chips allow power to be cut to half or three quarters of the RAM if the current load does not require more. Yet cellphones have huge batteries compared to small button-cell-powered sensors.
(The Bluetooth Low Energy standard essentially reduces energy consumption because the slave/sensor and master/central make agreements: We'll talk again in exactly 875 milliseconds! In the meantime, the slave turns off all power except for a clock programmed to wake up the chip just in time for the agreed next communication.)
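In code, the idea looks roughly like this - every name below is invented for illustration, and no real BLE stack API is implied:

#include <cstdint>

constexpr std::uint32_t kIntervalMs = 875;    // "we'll talk again in 875 ms"

// Hypothetical stubs standing in for the radio and timer hardware:
void radio_exchange_with_central() { /* send reading, receive acknowledgement */ }
void program_wake_timer(std::uint32_t ms) { (void)ms; /* only this clock stays powered */ }
void sleep_until_timer_fires() { /* everything else is unpowered until the wake-up */ }

void sensor_loop()
{
    for (;;) {
        radio_exchange_with_central();        // the agreed rendezvous
        program_wake_timer(kIntervalMs);
        sleep_until_timer_fires();            // near-zero power draw in between
    }
}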
For the time being, I use the "Internet of Things" (IoT) term only for devices communicating over an IP-based protocol. There are several other wireless alternatives, some of them proprietary. Many of them have been used for years, long before the IoT term was invented.
I guess that within a few years, IoT will blur into a general term for any small device communicating with some more central unit, IP protocol or not. E.g. I've got a couple of thermometers / "weather stations" receiving information from sensors for outdoor and indoor temperature, rainfall, wind... They are old, from long before the IoT protocols were specified. Nevertheless, they will soon be called IoT devices.
"True" IoT devices may use Bluetooth as a physical carrier. Alternately, they may use BT profiles directly (with far less overhead), maybe as a pure software choice. So is it an IoT device on one protocol stack, but not on another one? Borderlines will washed out in the future. But for now, I assume that IoT refers to devices running IP based protocols from the Internet IoT protocol family.
|
|
|
|
|
Greg Utas wrote: "Too much" C++ would unduly degrade performance
I would like to understand those conditions better. I can kind of understand it if everything was modeled with virtual functions, and thousands of objects were talking to each other through virtual functions, but even in that case would the effect be much more than 5%? And if you knew something was critical, couldn't you eliminate the virtual functions? Templates and objects, even without polymorphism, are powerful tools.
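Eliminating them where it matters is the kind of thing I have in mind - something like this CRTP sketch (hypothetical names), where the "override" is resolved at compile time and can be inlined:

#include <iostream>

template <typename Derived>
struct Codec {                        // static interface: no vtable, no indirect call
    int decode(int x) { return static_cast<Derived*>(this)->decode_impl(x); }
};

struct FastCodec : Codec<FastCodec> {
    int decode_impl(int x) { return x * 2; }  // chosen at compile time, inlinable
};

int main()
{
    FastCodec c;
    std::cout << c.decode(21) << '\n';        // prints 42; no virtual dispatch
}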
|
|
|
|
|
David O'Neil wrote: ... objects, even without polymorphism, are powerful tools.
(Maybe that is a bit of a stretch when compared to C, since an object is basically only a struct with finer-grained access control. But even that helps.)
|
|
|
|
|
I meant too many objects, especially if they're short-lived and allocated on the heap. Using the heap and invoking constructors and destructors up and down the class hierarchy can add lots of overhead. It can easily be over 5%, but there's no way to quote a single number because it depends on many factors. Too many threads or messages make it far worse, but a system doesn't have to use objects to make mistakes in those areas.
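A toy illustration of the kind of overhead I mean - not a benchmark, and the names are made up:

#include <string>

struct Message {
    std::string header;
    std::string body;
};

void heap_version(int n)              // allocator traffic + ctor/dtor on every pass
{
    for (int i = 0; i < n; ++i) {
        Message* m = new Message;
        m->body = "payload";
        delete m;
    }
}

void stack_version(int n)             // constructed once, then reused
{
    Message m;
    for (int i = 0; i < n; ++i) {
        m.body = "payload";
        m.body.clear();
    }
}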
If the design calls for a virtual function, I would use one without hesitation. It doesn't add much overhead and is important to both inheritance and polymorphism. I wouldn't want to give up either of those, and would probably give up templates first if forced to make such a horrid choice.
|
|
|
|
|
Greg Utas wrote: I meant too many objects, ...
Oh. Yeah, I can see that point of view, but I don't really understand the fundamental argument! In C you are still going to have to create some type of construct to handle that memory in a safe way without allocations through new, as that will be required! Wouldn't modeling the system as an object make it substantially easier to do so? class MemPoolForMySpecialNeed, or something? (It would probably be almost the same thing as done in the C way, but objectified.) I just don't get Linus's fundamental argument! I can't see C being easier in the long run for a huge project like an OS. That's one of the reasons I asked. Modeling things as objects makes work so much easier to understand and debug, from my point of view, even if the object is something that eliminates creating and deleting objects.
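Something like this minimal sketch of that hypothetical MemPoolForMySpecialNeed - a fixed-size free list, assuming single-threaded use and a default-constructible T:

#include <cstddef>
#include <new>

template <typename T, std::size_t N>
class MemPoolForMySpecialNeed
{
public:
    MemPoolForMySpecialNeed() : free_(&slots_[0])
    {
        for (std::size_t i = 0; i + 1 < N; ++i)   // chain the slots into a free list
            slots_[i].next = &slots_[i + 1];
        slots_[N - 1].next = nullptr;
    }

    T* alloc()                                    // no call to the heap at runtime
    {
        if (free_ == nullptr) return nullptr;     // pool exhausted
        Slot* s = free_;
        free_ = s->next;
        return new (s->storage) T();              // placement new into the slot
    }

    void release(T* p)
    {
        p->~T();
        Slot* s = reinterpret_cast<Slot*>(p);     // storage sits at offset 0
        s->next = free_;
        free_ = s;
    }

private:
    union Slot {
        Slot* next;                               // used while the slot is free
        alignas(T) unsigned char storage[sizeof(T)];
    };
    Slot slots_[N];
    Slot* free_;
};

It's essentially the same free list a C programmer would write by hand, just objectified behind alloc() and release().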
|
|
|
|
|
I agree that there's no fundamental argument against using C++ for an O/S, and it's actually what I would choose. The typical C designer will use too few "objects", but some C++ designers will subdivide a problem into more objects than are needed. In those circumstances, the O/S written in C will be more efficient, even if the one in C++ is better by any other criterion.
Linux was first released almost 30 years ago. Since then, compilers have gotten much better at producing efficient code for C++. At the time, the choice of C might have been reasonable. But if the decision were being made today, it would be wrong.
|
|
|
|
|
Thank you. I will stop doubting my sanity when I hear that claim in the future. Every time I've heard it, I've thought that those individuals were ones who, given C++, chose to use it in a C manner, and didn't really understand the power of the tool in their hands. All the 'C'-style C++ code I've seen has been stuff that made me go WTF? Not clean at all. It usually used three-letter names for all the variables, too.
|
|
|
|
|
Greg Utas wrote: Linux was first released almost 30 years ago. Since then, compilers have gotten much better at producing efficient code for C++.
That reminds me of a paper I read years ago, from a "History of Programming Languages" international conference (I only read the papers; I wasn't present). One presenter, a lady, had been involved in the development of Fortran II, a pioneering compiler with respect to optimization techniques. (A significant number of the tricks we now consider the very foundation of code optimization had their debut in Fortran II.) She said that even though they had themselves programmed the optimization functions, they frequently asked each other: How the elephant did the compiler discover that it could do that? And is it valid - is the generated code functionally equivalent to the unoptimized one? ... It turned out that the compiler was right; it had discovered (valid) rewritings that they would never have thought of themselves, according to this presenter.
Fortran II was released in 1958. As you point out, compiler writers have learnt lots of supplementary tricks since then. Yet inefficient code generated by compilers from the 1980s and 1990s is mostly due to the compiler writers being unaware of methods that had been known for decades in other parts of the programming community.
Another aspect is that neither K nor R was recognized as a language/compiler designer when they set out to define the C language (with R as the driving force in the language definition). They did not design a language well suited for code optimization, unambiguity, or other "academic" qualities. It grew out of assembler, not out of high-level modeling concepts. So they created a language a lot harder to optimize than other contemporary languages. E.g. the very free use of pointers, possibly typeless, may require quite extensive flow analysis to determine whether an optimization is valid.
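A classic illustration of that pointer problem (a sketch; the __restrict spelling is a common compiler extension in C++, not standard):

// With plain pointers, the compiler must assume out and scale may alias,
// so *scale has to be reloaded on every pass through the loop:
void scale_all(float* out, const float* in, const float* scale, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * *scale;    // a store to out[i] might have changed *scale
}

// With a no-alias promise, the load of *scale can be hoisted out of the loop:
void scale_all_fast(float* __restrict out, const float* __restrict in,
                    const float* __restrict scale, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * *scale;
}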
My personal experience: I grew up with a scepticism about automatic garbage collection. When starting with C#, I was seriously considering adapting some of my old code from earlier projects for managing my own lists of discarded memory blocks, for later reuse without having to invoke the system heap management functions. Then I got hold of a description of the .NET memory management. As I read, several times I nodded: That is smart - I never thought of that myself! ... So before I had read the description to the end, I had turned into a GC devotee.
Nowadays, I classify people who claim to do memory management better than any GC along with those who claim to write C code in such a way that there is nothing left for any code optimizer to do. I grant everyone the right to such self-confidence (both wrt. code optimization and GC), but I am not willing to take their word for it. Certainly not if we are talking about general programming. Those special cases where hand carving is genuinely required are extremely few and far between. Most "special cases" are not real cases at all, but highly synthetic, constructed examples having nothing whatsoever to do with real-world applications.
|
|
|
|
|
Interesting write-up.
I still sometimes write code in a way that reflects a lack of confidence in the compiler to optimize it: caching a result or moving an invariant out of a loop, for example, when the compiler would probably do those things itself. Sometimes the code would be clearer if simply written the inefficient way! But given what I've seen, there's no way I could write code that wouldn't benefit from optimization.
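The hoisting sort of thing I mean, illustratively:

#include <cmath>
#include <vector>

void normalize_hoisted(std::vector<double>& v, double x)
{
    const double k = std::sqrt(x);    // invariant moved out of the loop by hand
    for (double& d : v)
        d /= k;
}

void normalize_simple(std::vector<double>& v, double x)
{
    for (double& d : v)
        d /= std::sqrt(x);            // clearer; the optimizer would probably hoist it
}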
When it comes to GC, I do a better job not by replacing the GC algorithm, but by not using GC at all! But that's only because I'm interested in hard/soft real-time systems, where GC can seriously affect latency. However, I have a background form of GC that can recover what an application leaks. If a system is heavily loaded, the time allotted to this GC can be reduced until the system is no longer stressed.
|
|
|
|
|
I found that working in C gave me a solid foundation in how code lays things out in memory, how to do pointer operations, and what a cast does. With C++, its vtbls, and what feels like 30 different types of casts, the water gets muddied.
But because of my C experience it's relatively easy (except when .NET makes it hard, as above) to use P/Invoke and to marshal structures and the like.
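What I mean about the casts: in C a cast was a cast; in C++ you get a zoo like this (four named casts, each with its own narrow job; an illustrative sketch):

#include <cstdint>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};

void cast_zoo()
{
    double d = 3.9;
    int i = static_cast<int>(d);                 // well-defined value conversion

    const int ci = 42;
    int* p = const_cast<int*>(&ci);              // casts away const (handle with care)

    Base* b = new Derived;
    Derived* dv = dynamic_cast<Derived*>(b);     // runtime-checked downcast via RTTI

    std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);  // raw bit reinterpretation

    (void)i; (void)dv; (void)bits;
    delete b;
}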
Real programmers use butterflies
|
|
|
|
|
I fell in love with C++ because of templates. Once I understood the full breadth of what you could do with generic programming - well beyond creating typed containers! - it was all over for me. There was no going back. I still miss it while I'm coding in C#. Generics just aren't the same.
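One tiny taste of what hooked me - a single algorithm that works on any element type in any iterable container, all checked at compile time (an illustrative sketch):

#include <cstddef>
#include <iostream>
#include <list>
#include <vector>

template <typename Range, typename Pred>
std::size_t count_if_any(const Range& r, Pred p)
{
    std::size_t n = 0;
    for (const auto& x : r)
        if (p(x)) ++n;
    return n;
}

int main()
{
    std::vector<int> v{1, 2, 3, 4};
    std::list<double> l{0.5, 1.5, 2.5};
    auto positive = [](auto x) { return x > 0; };
    std::cout << count_if_any(v, positive) << ' '
              << count_if_any(l, positive) << '\n';   // prints "4 3"
}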
Real programmers use butterflies
|
|
|
|
|
I have to admit I haven't fully mastered them, because I haven't found myself needing them. My belief is that they are handy when you have multiple types that you want to do the same thing to, varying only by type, and I haven't often found myself in that scenario. Every time, it seems like one type needs something subtly or substantially different! Way back when, they really made my head hurt, but looking at Sergey's Delegates, which was my teething experience, I can now follow what is happening, and refactor names to finally make sense to me! So maybe I'm not totally hopeless!
|
|
|
|
|
I was being slightly ironic, and am surprised you took it so seriously. I believe most languages have their good and bad points. Whether C, C++, C#, or even VB, the important thing is to choose the best one for each particular job. One of the things that C undeniably gives you is an understanding of the basics of memory structure, stacks and heaps, functions and pointers; something that is completely lost on many modern developers - just browse QA for an hour.
David O'Neil wrote: I've always wondered why so many like yourself seem to love C so much, given that its syntax is (was?) so easy to let become ugly, with all the casting involved, and can fairly easily lead to the deeply nested logic I encountered. I'm not asking to attack C, just curious as to whether I'm overlooking something big, since most C stuff can be done in C++ if you choose not to use objects. And if you slightly objectify things, the casting is reduced.
It is not difficult to write C without excessive casting or nesting; and look at all the different types of cast in C++! I agree that objects can reduce the use of casts, but only for those who really understand the language. I have seen some really bad C++ code in my time, as well as in other languages.
Incidentally, back in my mainframe days I worked briefly with Burroughs systems whose operating system was written completely in Algol-60 - now there was an innovative design.
|
|
|
|
|
Richard MacCutchan wrote: I was being slightly ironic, and am surprised you took it so seriously.
I have heard the claim from several people, including Linus, and was genuinely curious whether I was missing something big. Because of those other claims, I'm sorry to say I totally missed any irony you embedded in your statement!
Richard MacCutchan wrote: I have seen some really bad C++ code in my time
Yes.
Thanks for taking time to respond!
|
|
|
|
|
|
Just been reading your blog, which is interesting. I also learned a word I had never come across before: conniption.
|
|
|
|
|
I'm working on putting the astronomical knowledge into video form, to make it much easier to understand. If you want, I'll try to remember to send you an email through CP's system when it's done, probably in a month. Glad to have expanded your vocabulary! It's a good word! If everyone had a conniption about our current status, things would get fixed fast!
|
|
|
|
|
Sander Rossel wrote: I'm sorry, but I need some more compelling arguments
Someone else said: My compiler compiled your compiler. Checkmate!
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
I said "compelling" arguments, not "compiling" arguments
|
|
|
|
|
Ah... sorry. My left ear is not working that well.
M.D.V.
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|