I also worked in what you could call the embedded world, though not so close to the hardware, and soft rather than hard real-time. An OO rewrite saved the product I was working on, and it's still seeing development over 20 years later. We used all three (encapsulation, inheritance, and polymorphism) extensively.
The farther you get from the hardware, the more a generalized approach is useful, or even a necessity. I worked on a product with similar specifications ("not so close to the hardware, and soft rather than hard real-time") and OOP was a huge benefit: when we started adopting it there was a significant improvement in quality, development time, customization time and stability.
You are quite correct, IMO. First, forget about the promise of re-use when it comes to objects: in the real world, nobody ever does anything twice in exactly the same way, so there is rarely any re-use benefit.
In order of importance, to me:
1. Encapsulation - keeps stuff organized
2. Interfaces - defines what the class is expected to implement. I also use empty interfaces simply to indicate that the class supports some other behavior. Could use attributes for that as well, but interfaces are sometimes more convenient when dealing with a collection of classes that all support the same thing and there are methods that operate on that, hence I can pass in "IAuditable", for example.
3. Inheritance/Abstraction - mostly useless, but there are times when I want to pull out common properties among a set of logical classes. Note that I don't consider this to be true abstraction; it's using inheritance to define common properties and behaviors.
4. Polymorphism - useful, but less so now that optional default parameters do much of the work polymorphic methods were often used for.
IMO, the reality of "how useful is OO" falls quite short of the promise of OO.
I tend to take the view that isolation is a good target. Code isolated from other code is both maintainable and reusable, regardless of whether it is via an OO design or not.
Reuse is limited when using any OO language, because then any program that wants to reuse that code has to use the same language. If you're going after reuse, you have to give up OO because they are mutually exclusive.
The most widely re-used code is written in non-OO languages. Sqlite, for example, is one of the most widely deployed pieces of code in the world. All the media format readers, likewise, are widely deployed and non-OO.
If you write something really new and novel that does not exist (a new image format, a new protocol, new encryption algo, new compression format, interface to any of the above, or interface to existing daemons (RDBMSs, etc)), and you write it in Java or C#, the only way that it can become popular is if someone re-implements it in C so that Python, C, C++, Java, C#, Delphi, Lazarus, Perl, Rust, Go, Lisps, Php, Ruby, Tcl (and more) programs can use it.
The upside of producing library files (.so or .dll) that can be used by any language is that the result is also quite isolated and loosely coupled from anything else:
It can be easily extended by anyone, but not easily enhanced.
It can be easily swapped out and replaced with a different implementation without needing the programs using that library to be recompiled, redeployed or changed in any way.
Because it is a library, it will only be for a single type of task (no one would even think of putting unrelated functionality into a compression library, but I've seen devs happily put in unrelated stuff into a compression class).
Ironically, you can more easily achieve SOLID principles writing plain C libraries (.so or .dll) than you can with actual OO languages, because of the limitations of the call interface in dynamic libraries.
fwiw, the code I am modifying has not changed in 10 years. So, why bother making it general?
Well, if it has not changed in 10 years I would say it is general enough
I am in sort of the same place as you. Mostly embedded development, and whenever I tried using OO I mostly failed. Usually because I decide to make a class for something that will only have one object instance.
I know where you are coming from, but I would think it depends on what you are working on. Some projects and cases may merit it and some not, and although I don't work in embedded, I would assume it's less important there. As a 'business app guy', however, I find it great. If I really looked at it, I probably don't need it as much as I think I do, but it's the way I roll now and I quite like it. GL
I do embedded firmware full time. We use all of the OO features you have mentioned. For the most part this has been done as a positive effort, ie it was worth the effort. But lately I think we have taken it too far and the code is too difficult to follow and debug and also our performance seems really bad. We are now in the process of going back (yet again) to profile, study and do performance measurements.
But I think it is a case of having too much of a good thing. We have overused such features on an embedded design with limited memory and tight timing requirements.
We build everything from ground up including custom ASICs.
But we have a large development team, maybe in the 100's.
But what seems to be happening as a trend is an overuse of language features. Or, from another perspective, the solutions are over-engineered. I have seen developers resort to techniques that are perfectly fine in a PC application environment, where you have gigahertz processors and gigs of RAM.
But in a custom, hardened, real-time, constrained embedded system the environment is different.
While the tools will support C++ templates, all the OO features etc, there is an art to how to balance design freedom vs efficient execution.
You can definitely use OO features in this environment, we have done it successfully before.
But there is risk of going too far and again this is where experience comes into play.
The real problem with OO Programming and possibly Design is that everyone knows the parts, but most don't put them together well. OO has worked well for years and it is visible at the low level.
Consider the storage device. It can be a floppy (remember those?), a hard drive, an SSD, a write-once disc, a CD/DVD, online storage, and so many more. But the block storage protocol applies to all of them. The driver converts the hardware into an object that responds to the same inputs no matter what is hidden behind the interface. The device is the object. The functionality is abstracted and hidden. The behavior is locked behind the wall. And new types of devices are added invisibly to the higher levels of code by following the interface rules. Enhancements to the rules are added all the time by extending the interface for things that were not previously considered.
The trick is in creating the specific, but only working with the generic. The failure is in creating specific necessary behavior at the derived level that cannot be used at the generic level.
Hierarchy of Animal -> Quadruped -> Horse.
Horse whinnies and gallops. But to implement them as Horse features means that they must be addressed as features of a horse.
Instead, Animal Speaks(Friendly | Loudly | Fearfully ), Moves( Slowly | Quickly )
Now, we can say Animal->Speak(Friendly), Animal->Moves( Quickly).
We can add a Dog and a Cat and implement the interface described. Then we create the instance, treat it as an Animal, and use it without changing the calling code at all.
The biggest trick of all is to construct the Animal (or derived type) with as much of the descriptive information as possible, then use it with as little information as possible.
Compare to the original storage idea. The file Create and Open take all kinds of descriptive information, but the actions on the file take only the necessary variations: File->Read( howMuch, toWhere ), File->Seek( toPosition, fromWhere ).
It isn't really the tenets of OO that are in question, it is the organization. Put them together and use them well, and they provide an easier path to expansion, adaptation and improvements.
Print is a great example of Polymorphism.
Print (format, arg1, arg2, arg3, ...)
OO is more than a Hammer. It is a whole Toolbox that can be used to build better tools and expandable structures.
I actually have a complex example of when inheritance should have been used, but wasn't. We have a Posi-Pay application that basically takes check and deposit data from a bank and converts it to something we can use. The program has a 15k+ lines long switch statement, one case for each bank format. Much of the code is duplicated among case statements. The unique licensing code is in a separate area, but also is a giant switch statement.
I long for the day when I'll be given the time to make each bank format a separate class, the base class would have all the conversion functions needed, and the switch statements become 1 line of code each:
BankFormat.Convert(<bank format type>)
BankFormat.GetLicense(<bank format type>)
Keep all things as simple as possible, but no simpler. -said someone, somewhere
In embedded environments I'm not sure a lot of the OOD stuff makes sense.
Encapsulation - definitely as it hides the intricacies of the hardware, but be careful you don't introduce performance penalties by doing so.
Abstraction - goes along with encapsulation in this case with the same caveats.
Inheritance - depends on the application. Don't use it just because it's available. I have used Inheritance successfully, but going more than two or three levels deep simply invites inefficiencies and unreadable code. Consider that most unreadable code is because the control flow is at the wrong level and then consider that inheritance is a form of control flow.
Polymorphism - use it or don't use it as needed.
In other words, you don't have to use all of OOD if it doesn't make sense. Use what makes sense.
I still mainly like OO stuff (as a .NET/C# dev), but I can't say whether your project would benefit from the refactoring you considered. If your gut says no, then I believe you.
I come across useful inheritance examples occasionally, so I don't understand inheritance hate. Like any language feature it can be abused. My examples are often abstract classes that therefore require another class to implement/inherit abstract methods. For example, I work on stuff that I want to work equally well in a local file system and Azure blob storage. There is some common behavior that goes in an abstract base class, followed by some subclass capabilities that are environment-specific. From the .NET BCL, Streams follow this pattern. Streams are abstract, with a gaggle of concrete implementations for different situations. I don't see what's wrong with that. Now, I've seen inheritance diagrams like the old C++ MFC stuff that were over-the-top complicated. I think they had reasons at the time, but that was then.
I would say that most of the intellectual heavy lift in app development (in my world) is database design, data modeling -- understanding stakeholder requirements and translating them into relational models. I would agree this doesn't really fit an OO paradigm very well, and I don't see that it needs to. For example, I don't think inheritance translates very well in relational terms except in one narrow situation. I usually have base class to define user/timestamp columns like `DateCreated`, `CreatedBy` and so on -- and my model classes (tables) will inherit from that base class in order to get those common properties. But that's really it.
In the app space, it seems like 80% of dev effort goes into building CRUD experiences (forms and list management) of one kind or another. The other 20% is "report creation" of some kind or another, in my world. I don't think there's a perfect distinction between the two, but I would agree there's not really any OO magic in this layer. OO doesn't really make this experience better, IMO. We do have endless battles over UI frameworks and ORM layers. (I'm getting behind Blazor in the web space, and I have my own ORM opinions for sure! Another topic!)
I have also been working in embedded for some time and always wanted (unsuccessfully) to apply an OOP approach to my designs, mostly for lack of proper tools. In recent years I found Python and my point of view has changed.
At this moment I work for a company that makes battery chargers. Most of them have been migrated from totally analog control to a microcontroller-based one. There were several embedded programmers involved, each of them working on his own, so you can imagine the kind of mess that exists in the software.
The founding concepts of OOP you mention are powerful but, like any other tool, you have to choose the one that best fits your particular application.
In my case I use Python (a GUI and some scripts on the PC side) to generate C++ code for the embedded side automatically, and the OOP paradigm is the thing that glues it all together. Here the software changes frequently, and this approach allows those changes to be applied quickly.
With the previous approach, modifying an existing application usually took several days of code reading in order to understand how it worked before applying the required changes.
Practical, pragmatic feedback is that you use what works. Polymorphism and inheritance aren't needed for everything, but they can make some things easier to do. If an application has no polymorphism or explicit inheritance, does that mean it is not object oriented? It depends. Unlike a language where the advanced concepts may not be useful for what you are doing, you can have an object-based application that otherwise follows object-oriented design without polymorphism. Does it matter that you call it object oriented?
Too many software development concepts are treated more like a religion than a tool set to accomplish a goal. I currently work in an application where inheritance is used to an extreme and it works well. I don't even think about it, but I can see how it could be written completely differently without inheritance and still work. I wouldn't want to try though.
I am the exact opposite. Don't use OOP and don't plan on using it unless I have to. Why ?
I am an experienced WIN32 programmer and the low level WIN32 is not OOP based, but is procedurally based. Only later did Windows add OOP on top, but the core of Windows is not OOP based.
OOP adds complexity and overhead, and the core OS needs performance and minimal overhead. Embedded programmers face a similar situation when they have to work with minimal hardware and every clock cycle matters.
The problem with procedural programming is that many programmers did not learn the importance of reusing code by building quality libraries. Procedural code can be encapsulated. That means well-written libraries. Well-written libraries need to be built in what I like to call "the three tier" method. What is that?
(1) Low level
(2) Medium level
(3) High level
Procedural libraries start with the low level routines, which are the basis of the core functionality. After a while much of the core functionality is covered, so a more medium level set of routines needs to be built on the back of the low level routines. Medium level routines can be quite complex, but they should still be relatively simple in purpose. Once you have built the medium level part of a library, you can build the much higher level routines, which can be quite complex and extensive.
This three tiered approach produces a tight, efficient library and IMO can perform better than any OOP library, and will be significantly smaller and faster than a comparable OOP library.
Modular design existed long before OOP. OOP was just a way to "force" modular design, rather than a better solution. Well written, modular, procedural code will perform better than any OOP counterpart. Also, in the embedded world, where minimal hardware exists, such modular procedural coding can push the hardware to its limits, whereas OOP based code will hit limits.
I did a large embedded project in classic C++, a collection of a dozen devices communicating over IEEE-488 with a PC. Our project was very object-oriented. The project was code-named "brick" since we intended to stack devices together like bricks. Our use of OOP was very successful, though the product itself failed in the marketplace.
Code Reuse: Our previous similar project was coded in C, and it was a quarter-million lines of wet spaghetti that we were ordered to reuse. It, in turn, was the result of an order to reuse a previous C project. Attempts at reuse were an abject failure, and we wrote 100% new code for the brick. The old code was so undocumented and hard to read that we had to reverse-engineer the behavior of the hardware.
Inheritance: Code was successfully reused within the brick. Inheritance promotes factoring of common code into base classes rather than cut & pasting it.
We used a multiply-inherited polymorphic mixin to control communication between software subsystems. The mixin let us defer and change the decision about what code to execute on the PC and what code to execute on the brick. This was incredibly fortunate because the hardware part of the project went way behind schedule.
Polymorphism: One difference between the brick and the previous product was that in the brick, code could directly control hardware devices like A/D converters, where on the previous project, the hardware was accessed over a bit-parallel protocol using the PC's parallel printer port (uck). We were able to prototype and test a lot of hardware control code using the previous device's hardware. We had polymorphic classes with one derivation to communicate with the brick's hardware, and another to communicate with the old hardware. As I said before, this was very fortunate because the hardware was so late.
Issues: This project was long enough ago that a virtual function call was expensive. Performance was very important to us, so we worried about every virtual function.
Another issue was that the hardware of this project was mostly a big sequencer ASIC that ran its own programs written in a generic macro-assembler that we had repurposed. There was no getting around the fact that much of the code was one big-ass class with a zillion methods. Normally this would be bad style, but how do you factor hardware? The programs for this sequencer were things we absolutely had to reuse from previous projects, our "secret sauce" as it were.
We did not even understand the functioning of the sequencer. Nobody did. We had to reverse engineer it from reading the previous projects' spaghetti code and fragmentary documentation. So much for code reuse.
Our C++ compiler was far less than perfect. It mostly conformed to the ARM, but only the simplest templates worked at all. We learned to submit very detailed bug reports, with example code and citing chapter and verse of the ARM. I think we got to be their favorite customers, and got excellent turnaround on bug fixes as a result.
Summary: I think the whole team were satisfied with our use of C++, and with the performance of the software. I don't think that company ever went back to C after the brick.
I did embedded programming till I retired in 2019. I looked at OOD when it was becoming popular but it made much more sense to me to do composition. I wrote classes (eventually became templates) for things like FIFOs, Queues, timers, digit filters, tone detectors, tone generators, etc. These templates have been used in multiple projects over the last 20+ years of my career with little or no modifications (most modifications were due to compiler changes). I can say that I've had quite a bit of reuse of my personal template library.
note: Surprisingly, I've never written code for an embedded device that had a display. Also, except for an inherited legacy device, every device I coded for had way less than a meg of RAM.