The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
Practical, pragmatic feedback: use what works. Polymorphism and inheritance aren't needed for everything, but they can make some things easier to do. If an application has no polymorphism or explicit inheritance, does that mean it is not object oriented? It depends. Just as a language's advanced features may not be useful for what you are doing, you can have an object-based application that otherwise follows object-oriented design without polymorphism. Does it matter whether you call it object oriented?
Too many software development concepts are treated more like a religion than a tool set to accomplish a goal. I currently work in an application where inheritance is used to an extreme and it works well. I don't even think about it, but I can see how it could be written completely differently without inheritance and still work. I wouldn't want to try though.
I am the exact opposite. I don't use OOP and don't plan on using it unless I have to. Why?
I am an experienced WIN32 programmer and the low level WIN32 is not OOP based, but is procedurally based. Only later did Windows add OOP on top, but the core of Windows is not OOP based.
OOP adds complexity and overhead, and the core OS needs performance and minimal overhead. Embedded programmers face a similar situation when they have to work with minimal hardware and every clock cycle matters.
The problem with procedural programming is that many programmers never learned the importance of reusing code by building quality libraries. Procedural code can be encapsulated. That means well written libraries. Well written libraries need to be built in what I like to call "the three tier" method. What is that?
(1) Low level
(2) Medium level
(3) High level
Procedural libraries start with the low level routines, which are the basis of the core functionality. After a while much of the core functionality is covered, so a more medium level set of routines needs to be built on top of the low level routines. Medium level routines can be quite complex, but they should still be relatively simple in purpose. Once the medium level part of a library is in place, one can build the much higher level routines, which can be quite complex and extensive.
This three tiered approach produces a tight, efficient library, and IMO can perform better than any OOP library; it will be significantly smaller and faster than a comparable OOP library.
Modular design existed long before OOP. OOP was just a way to "force" modular design, rather than a better solution. Well written, modular, procedural code will perform better than any OOP counterpart. Also, in the embedded world, where hardware is minimal, such modular procedural code can push the hardware to its limits, whereas OOP based code will hit limits of its own.
I did a large embedded project in classic C++, a collection of a dozen devices communicating over IEEE-488 with a PC. Our project was very object-oriented. The project was code-named "brick" since we intended to stack devices together like bricks. Our use of OOP was very successful, though the product itself failed in the marketplace.
Code Reuse: Our previous similar project was coded in C, and it was a quarter-million lines of wet spaghetti that we were ordered to reuse. It, in turn, was the result of an order to reuse a previous C project. Attempts at reuse were an abject failure, and we wrote 100% new code for the brick. The old code was so undocumented and hard to read that we had to reverse-engineer the behavior of the hardware.
Inheritance: Code was successfully reused within the brick. Inheritance promotes factoring of common code into base classes rather than cut & pasting it.
We used a multiply-inherited polymorphic mixin to control communication between software subsystems. The mixin let us defer and change the decision about what code to execute on the PC and what code to execute on the brick. This was incredibly fortunate because the hardware part of the project went way behind schedule.
Polymorphism: One difference between the brick and the previous product was that in the brick, code could directly control hardware devices like A/D converters, where on the previous project, the hardware was accessed over a bit-parallel protocol using the PC's parallel printer port (uck). We were able to prototype and test a lot of hardware control code using the previous device's hardware. We had polymorphic classes with one derivation to communicate with the brick's hardware, and another to communicate with the old hardware. As I said before, this was very fortunate because the hardware was so late.
Issues: This project was long enough ago that a virtual function call was expensive. Performance was very important to us, so we worried about every virtual function.
Another issue was that the hardware of this project was mostly a big sequencer ASIC that ran its own programs written in a generic macro-assembler that we had repurposed. There was no getting around the fact that much of the code was one big-ass class with a zillion methods. Normally this would be bad style, but how do you factor hardware? The programs for this sequencer were things we absolutely had to reuse from previous projects, our "secret sauce" as it were.
We did not even understand the functioning of the sequencer. Nobody did. We had to reverse engineer it from reading the previous projects' spaghetti code and fragmentary documentation. So much for code reuse.
Our C++ compiler was far less than perfect. It mostly conformed to the ARM, but only the simplest templates worked at all. We learned to submit very detailed bug reports, with example code and citing chapter and verse of the ARM. I think we got to be their favorite customers, and got excellent turnaround on bug fixes as a result.
Summary: I think the whole team were satisfied with our use of C++, and with the performance of the software. I don't think that company ever went back to C after the brick.
I did embedded programming till I retired in 2019. I looked at OOD when it was becoming popular, but it made much more sense to me to use composition. I wrote classes (which eventually became templates) for things like FIFOs, queues, timers, digital filters, tone detectors, tone generators, etc. These templates were used in multiple projects over the last 20+ years of my career with little or no modification (most modifications were due to compiler changes). I can say that I've had quite a bit of reuse of my personal template library.
note: Surprisingly, I've never written code for an embedded device that had a display. Also, except for an inherited legacy device, every device I coded for had way less than a meg of RAM.
OOP sucks primarily because you've got all these astronauts who are obsessed with silly, bloated design patterns. One of the biggest issues is that our CS educational system is broken, because it promotes this trash as good design. The other issue is all these books, blogs, etc. that promote over-architected solutions to relatively simple problems. Gang of Four design patterns are mostly passé, and the advanced developers I know barely give them the time of day any longer. But, alas, how will narcissistic developers prove how smart they are, if not by the silly application of arcane design patterns?
OS, system software, embedded, and drivers are areas where OOD has had limited success, mostly because companies haven't figured out how to do it right in those spaces. But ironically, the rise of NeXTSTEP and the fall of OS/2 teach us that it's worth the risk of investing in OOD in those spaces.
So don’t be hard on yourself for NOT GETTING IT where others can. You just live in a different paradigm and adjusting takes time.
Just remember how many iterations Coca Cola has had with Coke Zero just trying to emulate the original taste in a sugarless environment.
I did 15 years of research, specifically on the topic of OOP, and I've come to the conclusion that abstraction is mostly pointless beyond modeling data providers. When you have 2 distinct but very similar looking problems, it's better to have 2 distinct but very similar looking functions to solve those problems.
Turns out that's the most efficient solution.
Compilers don't care about lines looking the same, they don't produce slower code because of it. At the same time, junior devs can understand similar looking code faster, because they notice both the similarity and the differences, and naturally wonder why both exist, which lowers the learning curve.
Turns out only OOP-experienced developers care about avoiding redundancy in the literal sense, because they feel it impedes either maintainability or efficiency, which is factually wrong. OOP sacrifices both of those properties for scalability, yet gets credited with them anyway.
It's kinda a thing in our field. Whenever something new and shiny arrives, people assume it solves every problem they currently have.
I can offer encouragement rather than practical advice (since I have no specifics about the problem you are trying to solve). You are basically correct: encapsulation is the most useful feature of OOP, and the only feature code needs to be OO. (Many claim that certain languages are OO simply because they support an optional 'object' construct, but unless encapsulation is enforced, with the option of strong encapsulation, a language is not OO in my opinion.)
Inheritance should only be used very sparingly - it is good for complex frameworks that have to support a multitude of applications, like the Java framework, but not useful at all in most cases.
As for generics ... I'm still thinking about that one. I would say that generics are about more than code reuse. Do you really want a type as a parameter in your application? Is it really saving hassle to avoid explicit type casts? Who is the end user of the code, and is it more important to detect type errors at compile time rather than at runtime? In the end you have to weigh up the hassle to you as coder against the hassle to the user!
That probably doesn't help, but it was an interesting query.
in my opinion OOP, FP and procedural programming are more a way of thinking. it's how you approach a problem.
i have a strong distaste for single-paradigm languages. for me, if those languages were people they would surely have been racist. not only have they chosen to do things in only one way, but they have heavily preached that their way is the only true way and ridiculed the others.
Encapsulation - i see this as only a shortening of the visible scope. aside from that, inside the encapsulated area you still deal with structural programming tools. also, classes are inferior in encapsulation to ADTs in C. when a library or a module in C exposes its ADT as an opaque struct, an opaque pointer, or should i say a handle, that is when you are really working with a "blob of data". it's not even a blob, it's just a name. it's called an opaque pointer, but it's not really a pointer at all, because you cannot dereference it yourself. when they give you the definition of a class, i.e. the data type descriptor, you already know too much for it to be called encapsulation.
Inheritance - for the type of languages we speak of, statically typed ones, OOP folks see it as liberation from a world that has nothing resembling first-class functions, while LISP folks see OO as a prison. inheritance is the liberating force in OOP because it lets you go around a type system that is too strict. not by any virtue, as it is always represented (it has never happened to me to misuse a reptile where i should have used a mammal), but because of the lack of abstraction from the von Neumann architecture and the basic CPU data types, namely integer and float. in that respect Java has not gone much further in abstraction than the C abstract machine.
inheritance is not about code reuse. every time you see a method overridden you see the breaking of the OOP promise of reuse. inheritance is there to loosen the grip of strict typing, and it's most useful for building interface compliance. inheritance is mainly a specification technique rather than an implementation technique.
and now let's look at encapsulation and inheritance together. they are antagonizing forces. the former is a restricting force, the latter is a liberating force. in my opinion, that is why it is hard to do OOP in an Algol-type language, and that raises the need for design patterns. those patterns are there to make you interact the right way in such an environment. 2/3 of those patterns have no meaning in LISP, and probably in JS, too. LISP and JS have something essentially reusable in them, because it's easy to prototype a new application and it's easy and fast to rewrite the application: first-class functions.
Polymorphism - without this, you only have function overloading. everybody knows that in C++ (et al.) the code is not inside the object. when you don't use polymorphism and virtual functions, all you get is overloading: the ability to define functions that have the same name for different types of formal parameters. signatures, if you will.
int f(int, int)
int f(double, double)
but because in Algols the f method has a hidden parameter, it's not magic. it's only overloading.
f(type cat, int)
f(type dog, int)
in C++ that is the first parameter; in Pascal it is the last parameter. in fact, that is also how Pascal does nested functions: the last hidden parameter is a delegate to the scope of the enclosing function. and thus closures are a poor man's objects, and vice versa.
there have been proposals for C++ (unified call syntax) to make these equivalent, so that whenever you can say cat.move() you could also say move(cat), though that has not been adopted so far.
so, with inheritance you get the liberty to call the last descendant's function through any ancestor, and with polymorphism you get that call resolved the right way at runtime. the only true OOP you get with Algols is when you use virtual functions, and therefore have something of the class's functionality embedded in the object. otherwise objects are just dumb data. it's not like in JS, where the object is constructed dynamically and the functions are added to that object.
it all boils down to personal preference, mostly to the type of person you are. i presume building Byzantine hierarchies of classes is for persons who like perfect, utopia-like societies.
i use my favorite language, C, any way i please. it's not too strong at OOP nor at FP, but it doesn't forbid them. Java forbids procedural programming. my understanding of FP is in using pure functions, and i use them all the time. C is more capable of pure functions than Java, because Java's composite types are not copied by value. only the reference to an object is copied by value, much like you would do with a pointer to a struct in C, and that means mutation outside the scope of the function: a side effect.
whenever i create structures that mostly have the same building blocks, i understand there is something OOP, in the C++ way, in it, and i may use its tricks.
this is an exhausting topic, and although i have written too much, plenty more could be written. but i believe it comes down to your personal liking. there is also the right tool for the job.
i have strong confidence in Dennis MacAlistair Ritchie and the compromises he chose. i like what John McCarthy did with LISP, too. the way i do things in programming is closely related to my personality, unless i get paid to do otherwise.
that is why you should not care whether you do it in OO design or not. in your domain of embedded systems, it is not strictly required of you.
I stumbled across a video from a guy who wanted to test some CDP1802 processors with an Arduino board. Interesting, I thought. And then he begins by taking the processors out of their antistatic tubes and fingering around with them in front of the camera! What part of CMOS has he never heard of? He claims that all the processors worked, but they won't for long if he keeps handling them like this.
What a horror video! Do you have your torches and pitchforks ready?
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
He may be wearing an antistatic strap: I have one ankle strap and one wrist strap which "snap on" to a grounded cable on the desk mat.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
I used to follow his channel. Used to. The whole video is terrifying; he looks like a complete noob (and he is not: he definitely has abilities, he was just out of his depth).
Then he started trolling the people who commented and ended up disabling comments to the video. Damn ID-10T.
On another topic, I follow Forgotten Weapons: when Ian has a rare gun he wears gloves, goes extra delicate, does extensive research before attempting to disassemble the piece, and if he is not sure he doesn't touch it. He never forces anything loose. The 8-bit guy resorted to a fscking Dremel not because the screws were unique and impossible to open: he just had his ass too heavy to go and get a frakking Torx from the toolbox.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
Oops, I was too lazy to get the appropriate tool, so I used a power tool on what's possibly a unique item; then, instead of doing a proper study and diagnostic on the power supply, I used a paper clip to short-circuit two connectors without knowing what they actually are: it doesn't happen.