The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
This is speculation, but my guess is no. For one thing, they're very different application domains. And although it's easy to hoot at 18000 classes, we should hoot at the managers and the corporate culture, not the developers. It could undoubtedly be done with 20% of the staff if only they had a clue whom to keep. But when you have the revenues of this lot, productivity is irrelevant. I've seen similar things. Design documents (before coding, in a waterfall methodology) running to hundreds of pages. FFS, I've never stayed true to anything beyond a high-level design that could be described in 20 pages.
When something has 18000 classes, either there's no architect or there are way too many. I don't recall which, but one of the currently fashionable methodologies says that there shouldn't be architects. Utter drivel, unless it's a very small group of skilled developers who agree on the design.
It was the same for me: I learned C++ very shortly after C, with little practice programming in any other language (and only for learning purposes, no real-world applications, not even playing around). Therefore the procedural paradigm wasn't heavily ingrained in me.
For many years I fully embraced the OO paradigm. There was even a time when I considered introducing a virtual class hierarchy to break up some deeply nested if/else structures.
I followed it for more than two decades before starting to realize that there's more to programming than OO.
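For anyone who hasn't run into that refactoring, here's a minimal sketch of the idea (in C#, with invented Shape/Circle names; the original context was C++): the branches on a "kind" value become overrides of a virtual method.

```csharp
using System;

// Invented example, not from the post: turning branches on a "kind"
// value into a small virtual class hierarchy.
static class Before
{
    public static double Area(string kind, double a, double b)
    {
        if (kind == "circle")
        {
            return Math.PI * a * a;
        }
        else if (kind == "rectangle")
        {
            return a * b;
        }
        throw new ArgumentException($"Unknown kind: {kind}");
    }
}

// After: each branch becomes an override, and adding a new shape no
// longer means editing an ever-deeper if/else.
abstract class Shape
{
    public abstract double Area();
}

class Circle : Shape
{
    public double Radius;
    public override double Area() => Math.PI * Radius * Radius;
}

class Rect : Shape
{
    public double Width, Height;
    public override double Area() => Width * Height;
}
```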
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
Go over to Q&A and you will see the real problem. For quite some time programmers have been homogenized, sterilized and, most important of all, been taught not to waste much time thinking for themselves. Instead, their heads have been stuffed with rules, conventions and dogmas. Ask them why they think that something MUST be done in a certain way. Always, no exceptions allowed.
It's a rule, they say. Or maybe a convention. Whose rules or conventions? When do they apply? What do they accomplish? Dunno, ask Guru Soandso or company xyz. Anyway, some of these mass-produced idiots tend to go overboard with the beliefs of their particular religion and make life more interesting for all who are not quite as fanatic as they are.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
Disclaimer: Big Brother is watching you! There was a time when, at best, you could lose your job for such claims; at worst, you could get killed by an angry mob of mostly rookie developers who want to show off.
I remember how impressed I was with multiple inheritance, assignment operator overloading and copy constructors... One day I realized what I had always known as a kid: programming is data processing.
"in C++ as in Simula a class is a user defined type."
"Every language that uses the word class, for type, is a descendent of Simula"
They should have called OOP "class-oriented development", because it appeals to class-obsessed chauvinists. Contrary to popular belief, objects are only data. You could have a pointer to an array of pointers to functions here and there, or a reference to a function, but that's data too.
No matter what language you use, it all comes down to the same assembly language. Even before that, in the compilation process, programs are translated to a common, language-neutral intermediate representation.
So, for EVERY program in Java you could write a program in C that gets translated into the same assembly code the CPU will execute. But, you could hardly write a Java program for ANY C program that will be translated into the same assembly code.
"The very first Java compiler was developed by Sun Microsystems and was written in C using some libraries from C++. Today, the Java compiler is written in Java, while the JRE is written in C."
"The Sun JVM is written in C"
Quoted as-is from Stack Overflow.
C implements Java, but Java cannot implement C.
Back to topic, this is what I find most appealing.
"We don’t have a mathematical model for OOP. We have Turing machines for imperative (procedural) programming, lambda-calculus for functional programming and even pi-calculus (and CSP by C.A.R. Hoare again and other variations) for event-based and distributed programming, but nothing for OOP. So the question of “what is a ‘correct’ OO program?”, cannot even be defined; (much less, the answer to that question.)"
It was given as an answer on Quora to the question 'Why did Dijkstra say that "Object-oriented programming is an exceptionally bad idea which could only have originated in California"?'
To be fair, as I've said elsewhere in the thread, I use OO in places - for example, if I expose an API to whatever I'm writing, that will often be OO.
And I tend to use OO here and there for other reasons when I'm stuck in a hard OO environment like Java or C#.
I limit its use though:
1. Does it help explain the code?
2. Does it work with the rest of the code rather than against it?
3. Does it encapsulate an abstraction such that it makes it simpler to employ?
There are so many times when the answers to those questions are no, and yet I still see people using objects. See @SanderRossel's console app upthread - he was ribbing me but it's a good example of class misuse.
The problem isn't OO; slavish fanatical adherence to anything at all screws everything up -- and it's certainly non-evolutionary. Doing things one way and one way only results in restrictions to growth and expansion.
Given the above immutable fact, rigid adherence to OO practices is obviously wrong before even going into details, so I won't waste my time going into any (plus I don't have a week to spare).
I wanna be a eunuchs developer! Pass me a bread knife!
The long version:
I tend to write a bunch of interfaces (as necessary) that explain the function of the code.
Take, for example, an IUserRepository.
When I see an (ASP.NET Core) Controller being injected with an IUserRepository, I know this Controller does something with users.
I don't know (or care) where the users come from, but I know I need them.
If you look at the specific code that uses the IUserRepository you'll find stuff like userRepository.GetUser(id), which is way more descriptive than some code that accesses a database.
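To make that concrete, here's a minimal sketch of the setup being described. IUserRepository, GetUser and the Controller come from the post; the User record, UsersController and the route are invented for illustration.

```csharp
using Microsoft.AspNetCore.Mvc;

public record User(int Id, string Name);

// The interface describes what the code needs, not where the data lives.
public interface IUserRepository
{
    User GetUser(int id);
}

// The controller only says "I need users"; DI supplies the implementation.
[ApiController]
[Route("users")]
public class UsersController : ControllerBase
{
    private readonly IUserRepository userRepository;

    public UsersController(IUserRepository userRepository)
        => this.userRepository = userRepository;

    [HttpGet("{id}")]
    public ActionResult<User> Get(int id)
    {
        // Reads as intent: get the user; no database details leak in here.
        var user = userRepository.GetUser(id);
        return user is null ? NotFound() : Ok(user);
    }
}
```

Swapping the database-backed implementation for an in-memory fake in tests is then just a different DI registration.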
So in that sense, I often use classes and methods to describe what my code is doing.
That, for me, and to lesser extent re-use of code, are the biggest pros of OOP.
I'm not a big fan of re-use anymore.
Back in the day I re-used all the things, but just because two pieces of code incidentally need the same results doesn't mean they do the same thing.
I now make a clear split of functional re-use and technical re-use.
Functional re-use is rare, because that would mean a user has two ways to do the exact same thing.
It happens, but not all that often.
I think I write my code less "OOP" than seven or even five years ago.
The OOP I still write is more architectural in nature (like I now make heavy use of DI and interfaces, but not so much of base classes and such).
I've written some simple programs in Haskell, a purely functional language, but I think that doesn't work all that well.
It comes naturally to think in objects and to have side effects at some point.
Nevertheless, I've started to write my OOP code in a more functional style, mostly without side effects.
I'm pretty sure my bug-to-code ratio has gone down since I adopted the no-side-effects approach.
A function just does its thing and produces a result, but it won't affect the overall flow or state of the program.
All the results come together in the calling function, mostly a controller, and then I do all the side effects in one spot.
Makes the code a lot easier to read and you have a lot less to think about.
It's still OOP, so it doesn't always work like that, but I try when I can.
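Roughly, the shape is this (all names here - OrderLine, Invoice, InvoiceController - are invented for illustration): pure functions do the calculating, and the calling controller does the side effects in one place.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record OrderLine(string Product, int Quantity, decimal UnitPrice);
public record Invoice(decimal Subtotal, decimal Tax, decimal Total);

// The "work" is done by a pure function: it only takes input and
// returns a result; it doesn't touch the database, the screen or
// any shared state.
public static class Invoicing
{
    public static Invoice CalculateInvoice(IEnumerable<OrderLine> lines, decimal taxRate)
    {
        var subtotal = lines.Sum(l => l.Quantity * l.UnitPrice);
        var tax = subtotal * taxRate;
        return new Invoice(subtotal, tax, subtotal + tax);
    }
}

// The caller gathers the results and performs all the side effects -
// saving, logging, responding - in one spot.
public class InvoiceController
{
    public void Create(IEnumerable<OrderLine> lines)
    {
        var invoice = Invoicing.CalculateInvoice(lines, taxRate: 0.21m);

        // Side effects live here, not inside the calculation.
        Console.WriteLine($"Invoice total: {invoice.Total}");
        // e.g. repository.Save(invoice); emailService.Send(invoice); ...
    }
}
```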
Another change in my code is the use of delegates instead of one-function interfaces.
Makes for less abstraction and classes and it's still easy to read.
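For example (invented names), the same "apply a discount" behaviour written once as a one-function interface and once as a plain Func<decimal, decimal>:

```csharp
using System;

// The one-function-interface version: a whole interface plus an
// implementing class, just to pass a single behaviour around.
public interface IDiscountPolicy
{
    decimal Apply(decimal price);
}

public class TenPercentOff : IDiscountPolicy
{
    public decimal Apply(decimal price) => price * 0.9m;
}

// The delegate version: Func<decimal, decimal> expresses the same
// contract, and a lambda replaces the implementing class.
public class Checkout
{
    public decimal FinalPrice(decimal price, Func<decimal, decimal> discount)
        => discount(price);
}

// Usage:
//   var total = new Checkout().FinalPrice(100m, p => p * 0.9m);
```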
The biggest game changer for me, though, and this saved me a lot of bugs, was when I started to use curly braces for one-line if and loop statements.
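For anyone wondering why that matters, this is the classic trap an always-use-braces rule prevents (invented example):

```csharp
using System;

static class Example
{
    static void Greet(string user)
    {
        // Without braces, only the first statement after the if is guarded.
        // If someone later "just adds a line", the added statement runs
        // unconditionally:
        //
        //     if (user == null)
        //         Console.WriteLine("no user");
        //         return;   // runs every time, whatever the condition said
        //
        // With braces the scope is explicit, so adding a line later is safe:
        if (user == null)
        {
            Console.WriteLine("no user");
            return;
        }

        Console.WriteLine($"Hello, {user}");
    }
}
```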
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.