An important point not mentioned by the article is that of "co-recursion" with inheritance (of implementation).
That is: an instance of a subclass calls a method defined on a parent class, which in turn may call a method that's been overridden by the subclass (or even another sub-subclass in the hierarchy) and that one in turn may call another parent method, and so on. It can easily become a pinball of calls around the hierarchy.
Add to that the fact that "objects" have state, and each class in the hierarchy may add more state, and modify state declared on parents. A perfect combinatorial explosion of state and control-flow complexity.
I've seen this scenario way too many times in projects, and the worst thing is: many developers think it's fine... and are even proud of navigating such a mess. Heck, many popular "frameworks" encourage this.
Basically: every time you modify a class, you must review the inner implementation of all the other classes in the hierarchy, and the call paths between them, to ensure your change is safe. That's a horrendous way to write software, against the most basic principles of modularity and low coupling.
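A minimal Python sketch of that pinball (class and method names are made up):

```python
class Parent:
    def process(self):          # entry point defined on the parent...
        return self.step()      # ...but dispatch goes through the instance

    def step(self):
        return "parent step"

    def finish(self):
        return "parent finish"


class Child(Parent):
    def step(self):             # overrides the parent's step...
        return self.finish()    # ...and bounces back into the parent

print(Child().process())        # Parent.process -> Child.step -> Parent.finish
```

Three hops across two classes for a single call; add a few more levels of state-mutating overrides and the control flow becomes genuinely hard to trace.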
This is only the case when the language does not distinguish between methods that can be overridden versus those that cannot. C++ gives you the keyword "virtual" to put in front of each member function that you want to opt into this behavior, and in my experience people tend to give some thought to which functions should be virtual. So I rarely have this issue in C++. But in languages like Python, where everything is overridable, the issue you mention is very real.
Good point. In Java and many other languages you can opt out instead... which might make a big difference. Is it more of a "cultural" thing?... again, many frameworks encourage it by design, and so do many courses/tutorials... so those devs would be happy to put "virtual" everywhere in C++
The virtual keyword in C++ is more of a compiler optimization and less of a design decision. C++ doesn't want everyone paying the overhead of virtual function calls like other languages do.
I think that's an over-simplification. There was pressure on the language to ensure that data structures were compatible with C structs, so avoiding the vtable with simple classes was a win for moving data between these languages.
Of course these days with LTO the whole performance space is somewhat blurred since de-virtualisation can happen across whole applications at link time, and so the presumed performance cost can disappear (even if it wasn't actually a performance issue in reality). It's tough to create hard and fast rules in this case.
While in Python everything is overridable, does this show up in practice outside of (testing) frameworks? I feel like this is way more common in Java. My experience in Python is limited to small microservice-style backends and data science apps.
> It can easily become a pinball of calls around the hierarchy.
This is why hierarchies should have limited depth. I'd argue some amount of "co-recursion" is to be expected: after all, the point of the child class is to reuse the logic of the parent but to override some of it.
But if the lineage goes too deep, it becomes hard to follow.
> every time you modify a class, you must review the inner implementation of all other classes in the hierarchy, and call paths to ensure your change is safe.
I'd say this is a fact of life for all pieces of code which are reused more than once. This is another reason why low coupling high cohesion is so important: if the parent method does one thing and does it well, when it needs to be changed, it probably needs to be changed for all child classes. If not, then the question arises why they're all using that same piece of code, and if this refactor shouldn't include breaking that apart into separate methods.
This problem also becomes less pressing if the test pyramid is followed properly, because that parent method should be tested in the integration tests too.
> I'd argue some amount of "co-recursion" is to be expected: after all the point of the child class is to reuse logic of the parent
That's the point: You can reuse code without paying that price of inheritance. You DON'T have to expect co-recursion or shared state just for "code-reuse".
And this, I think, is the key point: behavior inheritance is NOT a good technique for code-reuse... Type-inheritance, however, IS good for abstraction, for defining boundaries, and for enabling polymorphism.
> I'd say this is a fact of life for all pieces of code which are reused more than once
But you want to minimize that complexity. If you call a pure function, you know it only depends on its arguments... done. If you call a method on a mutable object, you have to read its implementation line-by-line, and you have to navigate a web of possibly polymorphic calls which may even modify shared state.
> This is another reason why low coupling high cohesion is so important
exactly. Now, I would phrase it the other way around though: "... low coupling high cohesion is so important..." that's the reason why using inheritance of implementation for code-reuse is often a bad idea.
If object A calls a method of object B (composition), then B cannot call back into A, and neither A nor B can override any behavior of the other. (And this is the original core tenet of OO: being all about "message-passing".)
Of course they can accept and pass other objects/functions as arguments, but that would be explicit and specific, without having to expose their whole state/implementation to each other.
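A toy Python sketch of that shape (the names are hypothetical): A holds a B and calls into it, but B has no reference back into A, and neither can override the other.

```python
class Logger:                       # "B": knows nothing about its callers
    def log(self, msg):
        return f"[log] {msg}"


class Service:                      # "A": delegates to B through plain calls
    def __init__(self, logger):
        self.logger = logger        # explicit, swappable dependency

    def run(self):
        return self.logger.log("ran")

print(Service(Logger()).run())      # [log] ran
```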
> Add to that the fact that "objects" have state, and each class in the hierarchy may add more state, and modify state declared on parents. Perfect combinatory explosion of state and control-flow complexity.
What if you are actually dealing with state and control-flow complexity? I'm curious what the "ideal" way to handle this would be in your view. I am trying to implement a navigation system stripped of interface design and all the application logic, and even at this level it can get pretty complicated.
You are always dealing with state and control-flow in software design. The challenge is to minimize state as much as possible, make it immutable as much as possible, and simplify your control-flow as much as possible. OO-style inheritance of implementation (with mutable state dispersed all over the place and pinball-style control-flow) goes against those goals.
Closer to the "ideal": declarative approaches, pure functions, data-oriented pipelines, logic programming.
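For flavor, a tiny data-oriented pipeline in Python (the steps and data are invented for illustration): state lives in plain immutable values, and each step is a pure function of its input.

```python
from functools import reduce

def normalize(readings):            # pure: new tuple out, input untouched
    return tuple(r / 100 for r in readings)

def clamp(readings):
    return tuple(min(max(r, 0.0), 1.0) for r in readings)

def total(readings):
    return sum(readings)

# control flow is just a sequence of steps applied in order
pipeline = (normalize, clamp, total)
result = reduce(lambda data, step: step(data), pipeline, (50, 150, -20))
print(result)  # 1.5
```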
If the author intended a function to be overridable and designed the class as such, none of this is a problem. I never need to look inside the parent class, let alone the entire hierarchy.
On the flip side, if the author didn't want to let me do that, I really appreciate having the ability to do it anyways, even if it means tighter coupling for that one part.
I tried to contribute a bug fix to a Common Lisp project and found this exact issue. In CL you can trace methods but if the call hierarchy is several dozen levels deep with multiple type overrides and several :around, :before and :after combinations, it’s just impossible to keep track of what does what. This is not a language issue though, CLOS is really powerful and can be a life saver in good hands, but when people use it just to try the feature it creates monstrosities.
I think the fundamental issue with implementation-inheritance is the class diagram looks nice, but it hides a ton of method-level complexity if you consider the distinction between calling and subtyping interfaces, complexity that is basically impossible to encapsulate and would be better expressed in terms of other design approaches.
With interface-inheritance, each method is providing two interfaces with one single possible usage pattern: to be called by client code, but implemented by a subclass.
With implementation-inheritance, suddenly, you have any of the following possibilities for how a given method is meant to be used:
(a) called by client code, implemented by subclass (as with interface-inheritance)
(b) called by client code, implemented by superclass (e.g.: template method)
(c) called by subclass, implemented by superclass (e.g.: utility methods)
(d) called by superclass, implemented by subclass (e.g.: template's helper methods)
And these cases inevitably bleed into each other. For example, default methods mix (a) and (b), and mixins frequently combine (c) and (b).
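For example, the classic template method is (b) plus (d); a hedged Python sketch (names invented):

```python
class Report:                        # superclass owns the algorithm
    def render(self):                # (b): called by clients, implemented here
        return f"<h1>{self.title()}</h1>{self.body()}"

    def title(self):                 # (d): helpers the superclass calls,
        raise NotImplementedError    #      meant to be supplied by a subclass

    def body(self):
        raise NotImplementedError


class SalesReport(Report):           # subclass fills in the hooks
    def title(self):
        return "Sales"

    def body(self):
        return "Q3 numbers"

print(SalesReport().render())        # <h1>Sales</h1>Q3 numbers
```

Nothing in the code marks render() as client-facing and title()/body() as subclass-facing; that knowledge lives only in documentation, or in the author's head.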
Because of the added complexity, you have to carefully design the relationship between the superclass, the subclass, and the client code, making sure to correctly identify which methods should have what visibility (if your language even allows for that level of granularity!). You must carefully document which methods are intended for overriding and which are intended for use by whom.
But the code structure itself in no way documents that complexity. (If we want to talk SOLID, it flies in the face of the Interface Segregation Principle). All these relationships get implicitly crammed into one class that might be better expressed explicitly. Split out the subclassing interface from the superclass and inject it so it can be delegated to -- that's basically what implementation-inheritance is syntactic sugar for anyway and now the complexity can be seen clearly laid out (and maybe mitigated with refactoring).
There is a trade-off in verbosity to be sure, especially at the call site where you might have to explicitly compose objects, but when considering the system complexity as a whole I think it's rarely worth it when composition and a tiny factory function provides the same external benefit without the headache.
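A sketch of that split in Python (hypothetical names): the subclassing interface becomes a separate collaborator that is injected and delegated to, with a tiny factory at the call site.

```python
class SalesParts:                   # the former "subclassing interface",
    def title(self):                # now an ordinary standalone object
        return "Sales"

    def body(self):
        return "Q3 numbers"


class Renderer:                     # the former superclass, now a plain client
    def __init__(self, parts):
        self.parts = parts          # injected: swappable, mockable, explicit

    def render(self):
        return f"<h1>{self.parts.title()}</h1>{self.parts.body()}"

# a tiny factory keeps the call site tidy
def sales_report():
    return Renderer(SalesParts())

print(sales_report().render())      # <h1>Sales</h1>Q3 numbers
```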
These are powerful tools, if used with discipline. But especially in application code interfaces change often and are rarely well-documented. It seems inevitable that if the tool is made available, it will eventually be used to get around some design problem that would have required a more in-depth refactor otherwise -- a refactor more costly in the short-term but resulting in more maintainable code.
Author here. I wrote “ But even a modestly more recent language like Java has visibility attributes that let a class control what its subtypes can view or change, meaning that any modification in a subclass can be designed before we even know that a subtype is needed.” which covers your situation: if you need to ensure that subtypes use the supertype’s behaviour in limited ways, use the visibility modifiers and `final` modifier to impose those limits.
I 100% agree. And even though I use C#, which is kind of OOP-heavy, I use inheritance and encapsulation as little as possible. I try to use a more functional workflow, with data separated from functions/methods. I keep data in immutable Records and use methods/functions to transform it, trying to isolate side effects and minimize the state I keep around.
It's a much more pleasurable and easier way to work, for me at least.
Trying to follow the flow through a gazillion objects with state changing everywhere is a nightmare, and I'd rather not return to that.
I agree that changing object state and having side effects should be avoided, but you can achieve both immutability and encapsulation very easily with C#:
    public record Thing()
    {
        private string _state = "Initial";

        public Thing Change() => this with { _state = "Changed" };
    }
Arguably the answer is "when Barbara Liskov invented CLU". It literally didn't support inheritance, just implementation of interfaces, and here we have her explaining 15-odd years later why she was right the first time.
I used to do a talk about Liskov that included the joke “CLU didn’t support object inheritance. The reason for this is that Barbara Liskov was smarter than Bjarne Stroustrup.”
I haven't encountered diamond inheritance a single time in 10 years of writing/reading C++, so I definitely don't have nightmares about it. Maybe that was really a thing in the 90s or 2000s?
I have been programming professionally in C++ for 20 years. I remember once thinking "cool, I could use virtual inheritance here". I ended up not needing it.
MI is not an issue in C++, and if it were, the solution would be virtual inheritance.
Exactly. Unlike Java, where every object inherits from Object, in C++ multiply inheriting from objects with a common base class is rare.
Some older C++ frameworks give all their objects a common base class. If that inheritance isn't virtual, developers may not be able to multiply inherit objects from that framework. That's fine, one can still inherit from classes outside the framework to "mix in" or add capabilities.
I've never understood the diamond pattern fear-mongering. It's just a rarely-encountered issue to keep in mind and handle appropriately.
> in C++ multiply inheriting from objects with a common base class is rare.
One example is COM (or COM-like frameworks) where every interface inherits from IUnknown. However, there is no diamond problem because COM interfaces are pure abstract base classes and the pure virtual methods in IUnknown are implemented only once in the actual concrete class.
Diamond inheritance is its own special kind of hell, but "protected virtual" members in Java and C# are the "evil at scale" that's still with us today. An easy pattern that leads to combinatorial explosion beyond the atoms in the universe. Trivially.
People need to look at a deck of playing cards. 52 cards, and you get 8×10^67 possible orders of the deck. Don't replicate this in code.
What is the issue with those overrides? They only affect that one path in the hierarchy of inheritance, no? Not a C++ user here, but I imagine it would be catastrophic, if an unrelated (not on path to root superclass) class could override a method and affect unrelated classes/objects.
It's also cultural, possibly. Python supports diamond inheritance, and clearly states how it handles it (it ends up virtual, in C++ terms). But in like 20 years of working with Python I can't remember encountering diamond inheritance in the wild once.
Django documentation explicitly recommended it for a short while. At a point, the Python community created all kinds of mixins on all kinds of random APIs.
Diamond inheritance is in fact highly pervasive in Python. The reason is that every class is a subclass of object since Python 3 (Python 2 allows classic classes that are different). So every single time you use multiple inheritance you have diamond inheritance. Some of this diamond inheritance is totally innocuous, but mostly not, because a lot of classes override dunder methods on object like __setattr__. It was Guido van Rossum himself that observed the prevalence of diamond inheritance that led to Python 2.3 fixing the MRO, and introducing the super() function to make multiple inheritance sane.
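A toy diamond for reference: because everything inherits from object, cooperative super() calls follow the C3 linearization, so each initializer runs exactly once.

```python
class Base:
    def __init__(self):
        self.log = ["Base"]

class Left(Base):
    def __init__(self):
        super().__init__()          # follows the MRO, not the lexical parent
        self.log.append("Left")

class Right(Base):
    def __init__(self):
        super().__init__()
        self.log.append("Right")

class Diamond(Left, Right):
    def __init__(self):
        super().__init__()
        self.log.append("Diamond")

print([c.__name__ for c in Diamond.__mro__])
# ['Diamond', 'Left', 'Right', 'Base', 'object']
print(Diamond().log)                # ['Base', 'Right', 'Left', 'Diamond']
```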
> Diamond inheritance is in fact highly pervasive in Python.
I don't think that's true, because...
> So every single time you use multiple inheritance you have diamond inheritance.
Multiple inheritance is supported but not itself "highly pervasive" in Python.
> It was Guido van Rossum himself that observed the prevalence of diamond inheritance
The essay you link does not support that claim. He doesn’t observe an existing prevalence, he describes new features being added simultaneously with the MRO fix that would present new use cases where diamond inheritance may be useful.
And, its true, diamond inheritance is more common in modern Python than it was with classic classes in ancient Python, but there is a huge leap between that and “highly pervasive”.
The MRO fix was added to Python 2.3. The new style classes that would cause diamond inheritance to be prevalent were already present in Python 2.2. So they weren’t simultaneous.
A better phrasing would be that Guido predicted the prevalence of diamond inheritance in Python and therefore found it necessary to fix the MRO.
Aside from game dev, Rust is being used in quite a lot of green field work where C++ would have otherwise been used.
Game dev world still has tons of C++, but also plenty of C#, I guess.
Agreed that it’s not really behind us though. Even if Rust gets used for 100% of C++’s typical domains going forward (and it’s a bit more complicated than that), there’s tens? hundreds? of millions (or maybe billions?) of lines of working C++ code out there in the wild that’ll need maintained for quite a long time - likely order decades.
    struct A {
        name: String,
        owned: B,
    }

    struct B {
        name: String,
    }
you can't have a writeable reference to both A and B at the same time.
This is alien to the way C/C++ programmers think. Yes, there are ways around it,
but you spend a lot of time in Rust getting the ownership plumbing right to make this work.
Now it may take you a while to figure this out if you've never done Rust before, but this is trivial.
Did you perhaps mean simultaneous partial field borrows, where you have two separate functions that return the name fields mutably, and you want to use the references returned by those functions at the same time? That's hopefully going to be solved at some point, but I've only seen the problem rarely, so you may be overstating the true difficulty of this problem in practice.
Also, even in a more complicated example you could use RefCell to ensure that you really are grabbing the references safely at runtime while side-stepping the compile time borrow checking rules.
It's kind of crazy that OOP is sold to people as 'thinking about the world as objects', and then people expect to have an object, randomly take out a part, do whatever they want with it, and just stick it back in, and voila.
This is honestly such an insane take when you think about what the physical analogue would be (which again, is how OOP is sold).
The proper thing here is that, if A is the thing, then you really only have an A, and your reference into B is just that, and should be represented as such, with appropriate syntactic sugar. In Haskell, you would keep around A and use a lens into B, and both get passed around separately. The semantic meaning is different.
I recently had this problem in some Rust code. I was implementing A and had some code that would decide which of several Bs to use. I then wanted to call an internal method on A (one that takes a mutable reference to A) with a mutable reference to the B that I selected. That was obviously rejected by the compiler, and I had to find a way around it.
Rust depends on C++; until people cut their compilers loose from LLVM, GCC, and other C++-based runtimes, it is going to stay with us for a very long time.
That includes industry standards like POSIX and Khronos, CUDA, Hip and SYCL, MPI and OpenMP, that mostly acknowledge C and C++ on their definition.
There's a growing group that believes no new projects should be started in C/C++ due to its lack of memory safety guarantees. Obviously we should be managing existing projects, but 1973 is calling, it's time to retire into long-tail maintenance mode.
I've programmed C++ for decades and I believe all sane C++ code styles disallow multiple inheritance (possibly excepting pure abstract classes which are nothing but interfaces). I certainly haven't encountered any for a long time even in the OO-heavy code bases I've worked with.
And Python didn't get it right the first time either. It wasn't until Python 2.3, when the method resolution order was determined by C3 linearization, that inheritance in Python became sane.
Inheritance being "sane" in Python is a red herring for which many smart people have fallen (e.g. https://www.youtube.com/watch?v=EiOglTERPEo). It's like saying that building a castle with sand is not a very good idea because first, it's going to be very difficult to extract pebbles (the technical difficulty) and also, it's generally been found to be a complicated and tedious material to work with and maintain. Then someone discovers a way to extract the pebbles. Now we have a whole bunch of castles sprouting that are really difficult to maintain.
Python is slightly better because it can mostly be manipulated beyond recognition thanks to strong metaprogramming, but Python's operator madness is dangerous. Random code can run at any minute. It's useful for some things, and it's a good scripting language, and a very well designed one, no question there. Still, it would be better if it supported proper type classes. It could retain the dynamic typing, just be more sensible.
I'm always surprised by how arrogant and unaware Python developers are. JavaScript/C++/etc developers are quite honest about the flaws in their language. Python developers will stare a horrible flaw in their language and say "I see nothing... BTW JS sucks so hard.".
Let me give you just one example of Python's stupid implementation of inheritance.
In Python you can initialize a class with a constructor that's not even in the inheritance chain (sorry, inheritance tree, because Python developers think multiple inheritance is a good idea).
    class A:
        def __init__(self):
            self.prop = 1

    class B:
        def __init__(self):
            self.prop = 2

    class C(A):
        def __init__(self):
            B.__init__(self)

    c = C()
    print(c.prop)  # 2, no problem boss
And before you say "but no one does that": no, I've seen it myself. Imagine you have a class that inherits from SteelMan but calls StealMan in its constructor, and Python's like "looks good to me".
I've seen horrors you people can't imagine.
* I've seen superclass constructors called multiple times.
* I've seen constructors called out of order.
* I've seen intentional skipping of constructors (with comments saying "we have to do this because blah blah blah").
* I've seen intentional skipping of your parent's constructor and instead calling your grandparent's constructor.
* And worst of all, calling constructors which aren't even in your inheritance chain.
And before you say "but that's just a dumb thing to do", that's the exact criticism of JS/C++. If you don't use any of the footguns of JS/C++, then they're flawless too.
Python developers would say "Hurr durr, did you know that if you add an object and an array in JS you get a boolean?", completely ignoring that that's a dumb thing to do, but Python developers will call superclass constructors that don't even belong to them and think nothing of it.
------------------------------
Oh, bonus point. I've seen people creating a second constructor by calling `object.__new__(C)` instead of `C()` to avoid calling `C.__init__`. I didn't even know it was possible to construct an object while skipping its constructor, but dumb people know this and they use it.
Yes, instead of putting an if condition in the constructor Python developers in the wild, people who walk among us, who put their pants on one leg at a time like the rest of us, will call `object.__new__(C)` to construct a `C` object.
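Concretely, with a toy class:

```python
class C:
    def __init__(self):
        self.prop = 1

normal = C()                  # C() runs __new__ and then __init__
bare = object.__new__(C)      # allocation only: __init__ never runs

print(type(bare) is C)        # True: it really is a C...
print(hasattr(bare, "prop"))  # False: ...with none of C's invariants
```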
> In Python you can initialize a class with a constructor that's not even in the inheritance chain
No, you can't. Or, at least, if you can, that’s not what you’ve shown. You’ve shown calling the initializer of an unrelated class as a cross-applied method within the initializer. Initializers and constructors are different things.
> Oh, bonus point. I've see people creating a second constructor by calling `object.__new__(C)` instead of `C()` to avoid calling `C.__init__`.
Knowing that there are two constructors for normal, non-native Python classes, that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
> I didn't even know it was possible to construct an object while skipping its constructor, but dumb people know this and they use it.
I wouldn’t use the term “dumb people” to distinguish those who—unlike you, apparently—understand the normal Python constructors and the difference between a constructor and an initializer.
> Knowing that there are two constructors for normal, non-native Python classes, that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
I disagree that this is basic knowledge. In Python a callable is an object whose type has a __call__() method. So when we see Class(), it's just a syntactic proxy for Metaclass.__call__(Class). That's the true (first of three?) constructor, the one that then calls instance = Class.__new__(cls), and soon after Class.__init__(instance), to finally return instance.
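That chain can be made visible with a toy metaclass (the names here are invented):

```python
calls = []

class Traced(type):
    def __call__(cls, *args, **kwargs):     # the "outermost" constructor
        calls.append("Metaclass.__call__")
        instance = cls.__new__(cls, *args, **kwargs)
        if isinstance(instance, cls):
            cls.__init__(instance, *args, **kwargs)
        return instance

class Thing(metaclass=Traced):
    def __new__(cls):
        calls.append("Thing.__new__")
        return super().__new__(cls)

    def __init__(self):
        calls.append("Thing.__init__")

Thing()
print(calls)  # ['Metaclass.__call__', 'Thing.__new__', 'Thing.__init__']
```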
> Knowing that there are two constructors for normal, non-native Python classes, that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
I didn't know most of that, and I've performed in a nightclub in Python, maintained a CSP networking stack in Python, presented a talk at a Python conference, implemented Python extensions with both C and cffi, and edited the Weekly Python-URL!
Oh I've seen one team constructing an object while skipping the constructor for a class owned by another team. The second team responded by rewriting the class in C. It turns out you cannot call `object.__new__` if the class is written in native code. At least Python doesn't allow you to mess around when memory safety is at stake.
For what it's worth, pyright highlights the problem in your first example:
    t.py:11:20 - error: Argument of type "Self@C" cannot be assigned to parameter "self" of type "B" in function "__init__"
        "C*" is not assignable to "B" (reportArgumentType)
    1 error, 0 warnings, 0 information
ty and pyrefly give similar results. Unfortunately, mypy doesn't see a problem by default; you need to enable strict mode.
1. Your first example is very much expected, so I don't know what's wrong here.
2. Your examples / post in general seem to be "people can break semantics and get at the internals to do anything", which I agree is bad, but Python works on the principle of "we're all consenting adults", and just because you can, doesn't mean you should.
I definitely don't consent to your code, and I wouldn't allow it to be merged in main.
If you or your team members have code like this, and it's regularly getting pushed into main, I think the issue is that you don't have safeguards for design or architecture
The difference with JavaScript "hurr durr add object and array" - is that it is not an architectural thing. That is a runtime / language semantics thing. One would be right to complain about that
> The difference with JavaScript "hurr durr add object and array" - is that it is not an architectural thing. That is a runtime / language semantics thing. One would be right to complain about that
Exactly. One is something in plain sight in front of ones eyes, and the other one can be well hidden, not easy to spot.
I don't understand the problem with your first example. The __init__ method isn't special and B.__init__ is just a function. Your code boils down to:
    def some_function(obj):
        obj.prop = 2

    class Foo:
        def __init__(self):
            some_function(self)

    # or really just like

    class Foo:
        def __init__(self):
            self.prop = 2
Which like, yeah of course that works. You can setattr on any object you please. Python's inheritance system ends up being sane in practice because it promises you nothing except method resolution and that's how it's used. Inheritance in Python is for code reuse.
Your examples genuinely haven't even scratched the surface of the weird stuff you can do when you take control of Python's machinery: self is just a convention, you can remove __init__ entirely, types are made up and the points don't matter. Foo() isn't even special; it's just __call__ on the class's type, and you can make that do anything.
With the assumptions typical of static class-based OO (which may or may not apply in Python programs), this naively looks like a type error, and even when it isn't, it introduces a coupling where the class making the call likely depends on the internal implementation (not just the public interface) of the called class, which is... definitely an opportunity to introduce unexpected bugs easily.
There's nothing wrong with implementation inheritance, though. Generic typestate is implementation inheritance in a type-theoretic trench coat. We were just very wrong to think that implementation inheritance has anything to do with modularity or "programming in the large": it turns out that these are entirely orthogonal concerns, and implementation inheritance is best used "in the small"!
CLU implemented abstract data types, what we commonly call generics today.
The Liskov substitution principle in that context pretty much falls out naturally, as the entire point is to substitute types into your generic data structure.
Yes it is, as it is about the semantics of type hierarchies, not their syntax. If your software has type hierarchies, then it is a good idea for them conform to the principle, regardless of whether the implementation language syntax includes inheritance.
It might be argued that CLU is no better than typical OO languages in supporting the principle, but the principle is still valid - and it was particularly relevant at the time Liskov proposed it, as inheritance was frequently being abused as just a shortcut for composition (fortunately, things are better now, right?)
No, because the LSP is specifically about inheritance, or subtyping more generally. No inheritance/subtyping, no LSP.
It is true that an interface defines certain requirements of things that claim to implement it, but merely having an interface lacks the critical essence of the LSP. The LSP is not merely a banal statement that "a thing that claims to implement an interface ought to actually implement it". It is richer and more subtle than that, though perhaps from an academic perspective, still fairly basic. In the real world a lot of code technically violates it in one way or another, though.
Except that Smalltalk is so aggressively duck-typed that inheritance is not particularly first class except as an easy way to build derived classes using base classes as a template. When it comes to actually working with objects, the protocol they follow (roughly: the informally specified API they implement) is paramount, and compositional techniques have been a part of Smalltalk best practice since forever ago (something it took C++ and Java devs decades to understand). This allows you to abuse the snotdoodles out of the doesNotUnderstand: operator to delegate received messages to another object or other objects; and also the become: operator to substitute one object for another, even if they lie worlds apart on the class-hierarchy tree, usually without the caller knowing the switch has taken place. As long as they respond to the expected messages in the right way, it all adds up the same both ways.
I mean, it's not that hard to understand why composition is to be preferred, when you could easily just use composition instead of inheritance. It's just that people who don't want to think have been cargo-culting inheritance ever since they first heard about it, as they don't think much further than the first reuse of a method through inheritance.
I have some data types (structs or objects) that I want to serialize and persist, and they have some common attributes or behaviors.
In Swift I can have each object conform to Hashable, Identifiable, Codable, etc etc...
and keep repeating the same stuff over and over, or just create a base DataObject and have the specific data objects inherit from it.
In Swift you can do it with protocols (and extensions of them), but after a while they start looking exactly like object inheritance, and nothing like composition.
Composition was preferred when many other languages didn't support object orientation out of the gate (think Ada, Lua, etc) and tooling (IDEs) was primitive, but almost all modern languages do support it, and the tooling is insanely great.
Composition is great when you have behaviour that can be widely different, depending on runtime conditions. But, when you keep repeating yourself over and over by adopting the same protocols, perhaps you need some inheritance.
The one negative of inheretance is that when you change some behaviour of a parent class, you need to do more refactoring as there could be other classes that depend on it. But, again, with today's IDEs and tooling, that is a lot easier.
TLDR: Composition was preferred in a world where the languages didn't suport propper object inheretance out of the gate, and tooling and IDEs were still rudemmentary.
> In Swift I can have each object conform to Hashable, Identifiable, Codable, etc., and keep repeating the same stuff over and over, or just create a base DataObject and have the specific data objects inherit from it.
But then if you need a DataObject with an extra field, suddenly you need to re-implement serialization and deserialization. This only saves time across classes with exactly the same fields.
I'd argue that the proper tool for recursively implementing behaviours like `Eq`, `Hashable`, or `(De)Serialize` are decorator macros, e.g. Java annotations, Rust's `derive`, or Swift's attached macros.
Yes, all behaviors should be implemented like definitions in category theory: X behaves like a Y over the category of Zs, and you have to recursively unpack the definition of Y and Z through about 4-5 more layers before you have a concrete implementation.
I'll be honest here. I don't know if any comment on this thread is a joke.
There are valid reasons to want each one of the things described, and I really need to add type reflexivity to the set here. Looks like horizontal traits are a completely unsolved problem, because every type of program seems to favor a different implementation of it.
> The one negative of inheritance is that when you change some behaviour of a parent class, you need to do more refactoring, as there could be other classes that depend on it. But, again, with today's IDEs and tooling, that is a lot easier.
It is widely known as the "unstable base class" problem.
Another is that there are cases where hierarchies simply don't work well: platypus cases.
Another is that inheritance hides where stuff is actually implemented, and it can be tedious to find out when unfamiliar with the code. It is very implicit in nature.
> TLDR: Composition was preferred in a world where languages didn't support proper object inheritance out of the gate, and tooling and IDEs were still rudimentary.
I think this is rather a rewriting of history to fit your narrative.
The fact is that at least one very modern language that is gaining in popularity doesn't have any inheritance, and seems to do just fine without it.
Many people still go about "solving" problems by making every noun a class, which is, frankly, a ridiculous methodology of not wanting to think much. This kind of has been addressed by Casey Muratori, who formulated it approximately like this: Making 1-to-1 mappings of things/hierarchies to hierarchies of classes/objects in the code. (https://inv.nadeko.net/watch?v=wo84LFzx5nI) This kind of representing things in the code has the programmer frequently adjusting the code and adding more specializations to it.
One silly example of this is the ever popular but terrible example of making "Car" a class and then subclassing that with various types of cars and then those by brands of cars etc. New brand of car appears on the market? Need to touch the code. New type of car? Need to touch the code. Something about regulations about what every car needs to have changes? Need to touch the code. This is exactly how it shouldn't be. Instead, one should be thinking of underlying concepts and how they could be represented so that they can either already deal with changes, or can be configured from configuration files and do not depend on the programmer adding yet another class.
Composition over inheritance is actually something that people realized after the widespread over-use of inheritance, not the other way around, and not because of language deficiencies either. The problems with inheritance are not merely previously bad IDE or editor support. The problem is that, in some cases, it is bad design.
Each has its place. There's some things that inheritance makes possible, and some things that are best handled by composition. I use both, quite frequently.
It Depends™.
Composition can add a lot of complexity to a design and give bugs a lot more corners to hide in, but inheritance can be such a clumsy tool that it just shouldn't be used for some tasks.
That goes for almost everything in software. Becoming zealous about "The Only Correct Way" can be quite destructive.
I dunno. It's easy to say, "there are trade-offs, it depends" any time two things are compared, and it's never entirely untrue. However, sometimes one option is just generally worse than the other.
I'm not saying it's malpractice to use inheritance or anything, but it's a tool I definitely hesitate to reach for. Go and Rust removed inheritance entirely, and I'd say those languages are better-off without it.
There’s definitely stuff that it enables. I’ve been writing software since before it was a thing, and it was almost magic, when I first learned about it.
I also saw why it fell from grace, but I already knew, by then, that it was no panacea. I learned, on my own, that composition was often a better pattern, and I learned that, back in the 1980s.
Not worth arguing about, but I do find absolutism to be almost offensive, and there’s a damn lot of that, in software development.
Rust removed inheritance only for the Rust ecosystem to generate some kind of half-inheritance system by sticking macros on everything. For every `extends Serializable`, Rust has a `#[derive(Serializable)]`. Superclasses are replaced by gluing the same combinations of traits together in what would otherwise be subclasses, with generic type guards.
The problems with bad design don't go away, they're just hidden out of plain view by taking away the keywords. Rust's solution is more powerful, but also leads to more unreadably dense code.
One clear example of this is UI libraries. Most UI libraries have some kind of inheritance structure in OO languages, but Rust doesn't have that capability. The end result is that you often end up with library-specific macros, a billion `derive`s, or some other form of code generation to copy/paste common implementations. Alternatively, Rust code just reuses C(++) code that needs some horrific unsafe{} pointer glue to force an inheritance shaped block down a trait shaped hole.
Java serialization is implemented with reflection, not inheritance. `extends Serializable` is just a marker which tells the serializer it's okay to serialize a class. Go serializes with reflection too, and there's no inheritance at all in that language.
> Rust's solution is more powerful, but also leads to more unreadably dense code.
Instead of reflection, Rust does serialization with code generation. Java does this too sometimes in libraries like Lombok. The generated code is probably quite dense, but I expect the Java standard library reflection-based serialization code is also quite dense. In both cases, you don't have to read it. `extends Serializable` and `#[derive(Serializable)]` are both equally short. And the generated code for protobuf serialization (which I have read) is pretty readable.
But classes are a really crap way to share code, since you get the is-a relationship rather than the has-a relationship which is almost always what you want to express. Rust traits and derive macros are a much better abstraction for everything I've ever done when contrasted with classes.
Yeah, it's like how OS war truces get proposed. "Depends entirely on the use case." Most of the time, the two arguing over Mac vs Windows or Linux distro A vs B have almost identical use cases.
I haven't intentionally used inheritance in forever, only in cases where some lib forces you to use it that way. Not because of some trend; it's just not something you naturally need.
Sure, but "favor x over y" or, put another way, "use y only if x is unsuitable" is compatible with this. Nothing in "prefer composition over inheritance" says that composition is the only correct way.
Objects and inheritance are good when you need big contracts. Functions are good when you want small contracts. Sometimes you want big contracts. Sometimes you want small contracts.
Exactly. Inheritance and composition are two different axes of the implementation-reuse problem, just as object-oriented programming and functional programming are two different axes of the expression problem. In both cases, you should want to have both axes available, because they do different things. I think saying "prefer composition over inheritance" makes about as much sense as "prefer the Y-axis over the X-axis in a graphics system": it's a statement that doesn't make sense on its own; it only makes sense in specific scenarios, like "... when making a document scroll".
I think the biggest "mistake" in object-oriented programming is the explanations by analogy that many people advocate. A lot of times, people attempt to use a taxonomic metaphor like the tree of life—"a dog is-a mammal" sort of stuff. As a model it falls apart because even the real world tree of life is a flawed model that doesn't fully capture the complexity of life in the way most lay people assume it does. Try getting a random person to justify a platypus being a mammal. Without specific training in biology they stumble. And most computer scientists attempting to employ this analogy are definitely broadly ignorant about biology. You can see it because a lot of the examples people employ aren't even consistent with the metaphor. You're just as likely to see people say things like, "A dog is-a four-legged animal". I think it's an extremely harmful didactic path down which to start.
A lot of these problems happen because most people don't have a good handle on graph theory. They don't understand when they are trying to force a graph with cycles into a tree. Trees are easy for people to understand and handle, graphs with their pesky cycles are much harder, so I get the appeal. But what people come to call "tech debt" or "degenerate edge cases" are really evidence that an inappropriate model was employed early in development.
In real world examples, you'll see object oriented programming and functional programming as well as inheritance and composition used extensively and successfully. I think GUI libraries are a good example here. Buttons and text boxes both inherit from a control base class. This pattern is pervasive and long standing. But you naturally shouldn't usually[1] try to make a form inherit from control as they are much more appropriately compositions of controls.
[1] I say "shouldn't usually" here because some common subforms can be useful encapsulated as a control for composition into other forms, e.g. an address entry form embedded into a user profile form.
One of the lesser known features in Kotlin is interface delegation. This lets you get away with doing multi class inheritance via composition of a class with a delegate. This kind of blurs the boundaries between inheritance and composition in a useful way.
class Foo(internal val _list: MutableList<Int>=mutableListOf()): MutableList<Int> by _list { ... }
Here Foo has a _list property that it delegates the implementation of list operations to. You can even do function overrides in the class and interact with the delegate via the _list property. However, messing with internal list state is off limits (a problem with inheritance).
val foo = Foo()
foo.add(1)
Like Java, Kotlin supports single class inheritance. But this provides a way out.
When I was researching this stuff in the nineties, I came across some papers about role-based programming by a Norwegian called Trygve Reenskaug. That shaped a lot of my thinking on this topic.
Modern Kotlin and Java look a lot like what he proposed: small interfaces (roles) and classes that implement multiple of these things, whose objects can play those roles in different contexts. Go's duck typing (having the operations means it implements the interface) is also cool for this. Traits, mixins, etc. are all variations on this topic that you can find in other languages. JavaScript is actually a really interesting language, since it is a prototype-based language (inspired by a largely forgotten language called Self). It did not have classes for a long time (that's a recent syntactic addition) and you create new objects by copying old ones. And since it is dynamically typed, it has no need for interfaces either.
I really like the idea of role based programming / mixins. I think it does not get enough attention.
[1]I know only of some programming languages that even call it roles.
To be honest I always get confused by the difference between interfaces and roles. For me it was always something like an interface/behavior that can be mixed in at runtime.
That idea is first visible in OOP systems like COM, which depending on the language, or to use a more recent term from WinRT (language projection), exposes that capability.
Since COM only allows for interface inheritance, unlike SOM from OS/2 which also did classes, the way to avoid doing from scratch all members, is to compose and delegate all unmodified methods, while implementing only the new ones in an extended interface.
MFC, ATL, VB, and Delphi provided some mechanisms to make this easier, naturally not at the same level of ease as Kotlin.
By the way, the same concept is available in Groovy, with @Delegate annotation.
Though it's important to add that composition is not a complete replacement for inheritance, see the Self problem. (The Manifold project has a good description on it).
I once worked with a library that had such a deep inheritance tree, only for ontological purposes, that I was always confused as to where anything was actually implemented. I decided to squash the layers and found almost every method was overridden two or three times.
That was the project that turned me against inheritance. It was 2009, and the project was written in Java 1.4.
What I like about the modern¹ approach (interfaces + composition) is that it cleanly untangles polymorphism from behaviour-sharing.
When you inherit from a parent class, you have to be careful to only override methods in ways that the parent expects, so the parent's invariants aren't broken². There's a whole additional set of keywords (private/protected/final) meant to express these parent-child contracts. With interfaces + composition, those are unnecessary: You can compose an object and use it however you want; then, if you want the wrapper object to uphold the inner object's contract, you can additionally implement an interface to formalize that. The behaviours you use and the polymorphic guarantees you make are totally separate.
Inheritance mixes these ideas together and ends up worse for it. Not only is the modern approach simpler, it's more powerful: (1) polymorphic extension (i.e. extending a polymorphic parent class at runtime) is doable, and (2) multiple inheritance is a non-issue.
[1]: I call it "modern" because newer languages like Go and Rust have eliminated inheritance in favour of exclusively using interfaces/traits.
Smalltalk protocols and message categories were a step toward this (for example you could classify messages as implementing a particular interface, such as the collection or stream protocols), but Smalltalk lacked the type and interface checking supported by Java and other languages.
"Composition" is a word that can mean several things, and without having read the original source I never really understood which version they mean. As a rule, I've always viewed "composition" as "gluing together things that don't know necessarily know about each other", and that definition works well enough, but that doesn't necessarily eliminate inheritance.
So then I start thinking in less-useful, more abstract definitions, like "inheritance is vertical, composition is horizontal", but of course that doesn't really mean anything.
And at some point, it seems like I just end up defining "composition" to mean "gluing together in a way that's not inheritance". Again, not really a useful definition.
I find the Monoid/Semigroup typeclass pretty concisely captures what is generally meant by "composition" in the minimal sense.
> As a rule, I've always viewed "composition" as "gluing together things that don't necessarily know about each other"
The extension to this definition, given the context of monoids, would be "combining two things of the same type such that they produce a new thing of the same type". The most trivial example of this is adding integers, but a more practical example is function composition, where two functions can be combined to create a new function. You can also think of an abstraction that lets you combine two web components to create a new one, combining two AI agents to make a new one, etc.
> "inheritance is vertical, composition is horizontal", but of course that doesn't really mean anything.
This can actually be clearly defined: what you're hinting at is the distinction between sum types and product types, the latter of which describes inheritance. The problem with restricting yourself to only product types is that you can only add things to an existing thing, but in real life that rarely makes sense, and you will find yourself backed into a corner. Sum types give you much more flexibility, which in turn makes it easier to implement truly composable systems.
I actually knew most of that (I've done a lot of Haskell). I don't really disagree with what you said, but I feel like you eliminate a lot of stuff that people would consider "composition" but that isn't as easily classified into happy categories.
For example, a channel-based system like what Go or Clojure has; to me that is pretty clearly "composition", but I'm not 100% sure how you'd fully express something like that with categories; you could use something like a continuation monad but I think that loses a bit because the actual "channel" object has separate intrinsic value.
In Clojure, there's a "compose" function `comp` [1], which is regular `f(g(x))` composition, but lets suppose instead I had functions `f` and `g` running in separate threads and they synchronize on a channel (using core.async)? Is that still composition? There are two different things that can result in a very similar output, and both of which are considered by some to be composition. So which one of these should I "prefer" instead of inheritance?
Of course this is the realm of Pi Calculus or CSP if you want to go into theory, but I'm saying that I don't think that there's a "one definition to rule them all" for composition.
I think there's still a category theoretic expression of this, but it's not necessarily easy to capture in language type systems.
The notion of `f` producing a lazy sequence of values, `g` consuming them, and possibly that construct getting built up into some closed set of structures - (e.g. sequences, or trees, or if you like dags).
I've only read a smattering of Pi theory, but if I remember correctly it concerns itself more with the behaviour of `f` and `g`, and more generally bridging between local behavioural descriptions of components like `f` and `g` and the global behaviour of a heterogeneous system that is composed of some arbitrary graph of those sending messages to each other.
I'm getting a bit beyond my depth here, but it feels like Pi theory leans more towards operational semantics for reasoning about asynchronicity and something like category theory / monads / arrows and related concepts lean more towards reasoning about combinatorial algebras of computational models.
The thing about inheritance is it limits you to one relation. Composition is not a single relation but an entire class of relations. The user above mentioned monoids. That is one very common composition that is omnipresent in computation and yet completely glossed over in most programming languages.
But there are other compositions. In particular, for something like process connection, the language of arrows or Cartesian categories is appropriate to model the choices. The actual implementation is another story
In general, when you want to model something, you first need to decide on the objects and then on the relations between those objects. Inheritance is one relation, and there's no need for it to be treated specially. You will find, though, that very few objects actually fit any model of inheritance, while many have obvious algebras that are more natural to use.
"Gluing together in a way that's not inheritance" is useful enough by itself. Most class hierarchies are wrong, and even when they're right people tend to implement th latest and greatest feature by mucking with the hierarchy in a way which generates wrongness, mostly because it's substantially easier, given a hierarchy, to implement the feature that way. Inheritance as a way of sharing code is dangerous.
The thing composition does differently is to prevent the effects of the software you're depending on from bleeding further downstream and to make it more explicit which features of the code you're using you actually care about.
Inheritance has a place, but IME that place is far from any code I'm going to be shackled to maintaining. It's a sometimes-necessary evil rather than a go-to pattern (or, in some people's books, that would make it a pattern like "go-to").
I don't think that it really is a useful enough definition. There are lots of ways to glue things together that aren't inheritance that are very different from each other.
I could compose functions together like the Haskell `.`, which does the regular f(g(x)), and I don't think anyone disputes that that is composition, but suppose I have an Erlang-style message passing system between two processes? This is still gluing stuff together in a way that is not inheritance, but it's very different than Haskell's `.`.
But both of those avoid the pitfalls of inheritance. "Othering" is a common phenomenon, and I think it's useful when creating an appropriate definition of composition.
But I don't think it's terribly useful; there are plenty of things that you could do that the people who coined the term would definitely not agree with.
Instead of inheritance, I could just copy and paste lots of different functions for different types. This would be different than inheritance but I don't think it would count as "composition", and it's certainly not something you should "prefer".
One of the most damaging things is when they teach inheritance like "a Circle is a Shape, a Rectangle is a Shape, a Square is a Rectangle" kind of thing. The problem is the real world is exceedingly rarely truly hierarchical. Too many people see inheritance as a way to model their domain, and this is doomed to failure.
Where it works is when you invent the hierarchy. Like a GUI toolkit or games. It's hierarchical because you made it hierarchical. In my experience the applications where it really works you can count on one hand, whereas the vast majority of code written is business software for which it doesn't really.
I've been building gui applications for the past 20 years and I couldn't imagine doing it without an inheritance model. There's so much scaffolding needed to build components and combine them into a working view. Sure inheritance can be bad in the data layer because you don't want to handcuff yourself to bad data expectations. But building out views and view controllers, there's a lot of logic you don't want to keep duplicating every time.
Guess what, lots of people have been building GUI applications without views, much less view controllers, for longer than that. Including Squeak, with Morphic.
GPUI is a great example of the insane amount of boilerplate needed to create a component when you don't have inheritance.
My guess is that people don't create a lot of individual components in this framework to handle different business cases, and instead overload a single text input component with a million different options. I would hate to untangle a mature app written under those conditions.
My personal preference for composition over inheritance is that it forces callers to call the owned-object’s methods directly rather than automatically through inheritance.
There is more typing/boilerplate but when you read the class file you get a full picture of what’s happening rather than some parts happening automatically in a different file.
I like to say that code should be written with a reader bias: the singular writer should do more work if it makes the class more obvious for the multitude of readers. I feel like composition is a good example of that.
I call it "read-optimized code". Inheritance is biased toward conservative writing. Once your mind becomes so enmeshed with the code base that you can no longer fathom a future where you might fall out of sync with it, inheritance becomes extremely appealing. It's all in your head! You pull the Razzle parent, sprinkle a bit of Dazzle mixin, everything is alchemized into a Fizzle class and abracadabra. Meanwhile newbies in the team have their eyes welling up from having to deal with your declarative mess.
It's been settled multiple times that a relational DB is your default choice, as opposed to an object DB. Feels like the same lesson applies to OOP. Objects are ok when you have a simple bag of properties, but otherwise begin to distract from what you really want to model. And I guess composition is more analogous to relations.
> That points to a deficiency in the “composition over inheritance” aphorism: those aren’t the only two games in town. If you have procedures as first-class types (like blocks in Smalltalk, or lambdas in many languages), then you might prefer those over composition or inheritance.
First-class procedures/functions are a form of composition. Requiring a function type behaves like requiring an interface/class type with only one method. (In languages like F#, `Func<_,_>` is literally defined internally as an abstract class with one method, `Invoke`, although there are other mechanisms to auto-inline lambdas when enough static information is available to do so.) In either case, you can place it into a field of an object or data structure, or pass it directly as an argument to a function/method.
When I first took an object-oriented programming class, it was all about inheritance, so that's what I tried to use for everything. Then I started writing real programs, realized that inheritance sucked, and finally found the succinct "favor composition over inheritance".
In mainstream/SV coding, I would say the scales just barely tipped toward composition in the late 10s... There are plenty of programmers still completely oblivious, the inertia is huge. Plus the swing back is too strong, inheritance is very powerful, just not as generic as originally thought.
i think inheritance got a bad name due to abuse of multiple inheritance and overly fragile base classes in c++ (and maybe java) codebases of the 90s and early 00s.
it's mentally satisfying to create a beautiful class hierarchy that perfectly compresses the logic with no repetition, but i think long term readability, maintainability and extensibility are much better when inheritance is avoided in favor of flat interfaces. (also easier to turn into rpcs as all the overcomplicated object rpc things of the 90s were put to bed).
The effect RPCs had on classes and inheritance really can't be overstated.
While in theory it should be straightforward to ship instance state over a wire, in practice most languages have no built-in support for it (or the support is extremely poor in the general case; I remember my first experiments with trying to ship raw Java objects over the wire using the standard library tools back in the early 2000s, and boy was that incredibly inefficient). Additionally, the ability to attach arbitrary methods to instances in some languages really complicates the story, and I think fundamentally people are coming around to the idea that the wire itself is something you have to be able to inspect and debug so being able to understand the structure in transit on the wire is extremely important.
Classes and their inheritance rules make exactly the wrong things implicit for this use case.
I never liked inheritance. It seems like something that works well in a world where you assume things don’t evolve rapidly. It also feels like it adds mental debt—every new thing needs to comply with old things to stay compatible. Every update has to take into account how old components are working. Probably, the static nature helps big teams and big companies. But I’ve found that some duplicated code is way easier to deal with, especially now that LLMs can generate new code so quickly.
It really helps me to think of it all as extensive metaphors. Math included. The point is to tell an active story using symbols as metaphorical representations of something. With a lot of assumed language implied (through teachings) by choices of naming things. (As a fun example, don't focus on the name Algebraic if you aren't going to lean in on grade school algebra for things.)
That said, I think this is also a good way to approach framing things. Agreed that the idea of "prefer composition" is often a thought termination trick. Instead, try them both! The entire point of preferring one technique over the other is that it is felt to give more workable solutions. If you don't even know what the worked solution would look like with the other technique, you should consider trying it. Not with a precommitment that you will make it work; but to see what it illuminates on the ideas.
I have been using inheritance for 15 years, and have sometimes regretted it and sometimes loved it.
It does have actual benefits if you can limit its usage and don't use the full insanity that languages like C++ allow.
I generally dismiss people who tell you to always use composition over inheritance without first understanding the problem space and how it could be modeled.
The split between inheritance-heavy OOP and composition-first OOP really just reflects how software design has shifted toward approaches that handle change better. Inheritance still solves real problems, but for most modern, fast-changing systems, composition usually offers a smoother path.
Of course, developers mix and match depending on what the situation calls for. But knowing how these two mindsets differ can make it easier to build code that stays clean and easy to evolve.
And as programming continues to pull ideas from functional, reactive, and declarative styles, the compositional way of thinking will probably stay right at the heart of how we approach object-oriented design.
Composition is ultimately more flexible and less constraining than inheritance. It reflects a practical approach of just using the types/classes you need, without having to adopt some project wide OO religion or design philosophy.
With C++, no-one needs to be told (even if good advice) to "favor composition over inheritance" - I think most people who have worked with the language for long enough on large enough projects will end up realizing for themselves that this is generally the preferred approach. Inheritance is a specialized tool, best reserved for specialized use cases.
It's a bit of a shame that the original C++ "Concepts" proposal was never adopted (only a slimmed-down version eventually arrived in C++20), since some form of compile-time polymorphism is often all that is really wanted: a compile-time guarantee that two classes will provide the same interface, without forcing them to be related by inheritance.
There are days I hate the mapping of plain English terms of art over actual in-language effects.
Considering sets: if something is, in set terms, a specific subset with a defining membership or characteristic of a definable superset, then representing that at compile time effects a hard constraint which honours the set Venn diagram.
If that set/subset constraint doesn't exist then you have to ask yourself if applying a compile time constraint is appropriate.
Hierarchy (and thus "inheritance") is a way to express that several different things share the same quality. They are different in general, but same in some way. It is a very natural way for people to express such a thing and no wonder it is so widespread. But it is not the only way nor the general way, of course.
Composition is not an opposite to inheritance. An opposite would be something like:
Message A ( ... ):
Type B: { ... }
Type C: { ... }
Or, if the body of the method is same ("a parent method"):
Type B, Type C:
Message A ( ... ): { ... }
Here we do not give B and C places in a hierarchy but merely say that they respond to the same message, or even that the procedure is the same.
I do not know if any meaningful and systematic alternative to a hierarchical way exists in any programming notations. Interface spec is a partial way, but that's all. (I know only a few notations, of course).
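One systematic alternative the comment is reaching for does exist in some notations: structural typing. A hedged sketch in Python using `typing.Protocol` (all names here are hypothetical, chosen to mirror the pseudo-notation above):

```python
from typing import Protocol

class Greets(Protocol):
    # the shared "message" A, expressed as a structural interface
    def greet(self) -> str: ...

class B:                      # Type B responds to the message...
    def greet(self) -> str:
        return "hello from B"

class C:                      # ...and so does Type C, with no common ancestor
    def greet(self) -> str:
        return "hello from C"

def send_greet(obj: Greets) -> str:
    # B and C share no base class; they merely respond to the same message
    return obj.greet()
```

Neither B nor C is given a place in a hierarchy; the checker only cares that both respond to `greet`.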
In Eiffel we have multiple inheritance.
It's such a powerful tool.
And a natural way to model the world.
For example, think of your typical OOP book. You have vehicles with engines:
* cars that move on roads
* planes that move through the air
* boats that move on water.
But then comes an aqua-plane and it breaks your inheritance tree!
But with multiple inheritance it is the most natural thing to have a plane that is also a boat and a car.
In Eiffel we favor the appropriate tool that better represents the world.
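The aqua-plane example above is in Eiffel, but the same shape can be sketched in Python, which also supports multiple inheritance (class names hypothetical):

```python
class Vehicle:
    """Common base: everything here has an engine."""

class Plane(Vehicle):
    def move(self):
        return "moves through air"

class Boat(Vehicle):
    def move(self):
        return "moves on water"

class AquaPlane(Plane, Boat):
    # multiple inheritance: an aqua-plane IS both a plane and a boat;
    # when both parents define move(), the MRO picks Plane's (left-to-right)
    pass
```

Note the caveat Eiffel handles with explicit renaming/selection: when both parents define `move`, Python silently resolves the conflict via its method resolution order.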
Coming from C++ and C#, I think interface inheritance is good, but code inheritance is bad. I always try to avoid it. The only times I need to use code inheritance are when I have to use framework classes that have a bug or broken behavior I have to repair. E.g.: the Label control in C# Windows Forms copies its text to the clipboard on double click.
The main point is the same as the Dewey Decimal System. Keep things tucked away yet findable. Make a huge code base useful to people who didn't write it themselves.
Until they find inheritance is actually worse at describing the concept, though. With composition you no longer need to implement whatever interface and bridge the implementation by proxy or whatever. You are also not limited to what the parent class has (while you can still add all the components the parent has to children if you need). Interface-plus-proxy is just composition, but worse, in my opinion.
Great article. I thought it was going to be yet another one looking at recent trends; however, it actually dives into the history of how it came to be. I say this as someone who started learning OOP with Turbo Pascal 5.5 and Clipper 5, before other OOP languages.
When we realized object models were an anti-pattern. Abstract base classes or just regular class hierarchies inherently create tightly-coupled structures. An eventual maintenance nightmare.
Modularization was the core principle of DDD and it still holds up 20 years later.
Do you dislike type inheritance? Or only implementation inheritance? My view is that type inheritance is incredibly useful, both for single system programming, and rpc. Whereas implementation inheritance creates brittle systems.
the article seems to be digging into justifications for using inheritance. one thing I've heard and it seems to work is inheritance is ok for interfaces but usually not good for implementations.
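The "inheritance for interfaces, not implementations" rule can be illustrated with Python's `abc` module (the `Storage` example is hypothetical):

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Pure interface: subclasses inherit a contract, not behavior."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class MemoryStorage(Storage):
    # implements the contract; no implementation is inherited
    def __init__(self):
        self.data = {}

    def save(self, key: str, value: str) -> None:
        self.data[key] = value
```

Callers depend only on `Storage`, so swapping in a different implementation never requires reading a parent class's code.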
I’ll be honest. I don’t really understand the point of this article. Maybe that’s just a preference thing. The philosophy behind these abstractions is the least interesting part of the question for me. What problems do these various methods of polymorphism solve and create? What solutions do they enable or prevent? That’s the only part that matters. But citing some discussion about the philosophy behind the theory from 40 years ago is not particularly enlightening. Not because it’s not relevant. But because we have 40 years more experience now and dozens of new languages that have different takes on this topic. What has been learned and what has been discovered?
I usually think of the ideas behind "composition" as "how do I assist a future developer to replace the current (exported) implementation of a type with a new one by restricting external visibility of its internal implementation through the use of private methods and data".
In "inheritance", it often feels like the programmer's mindset is static, along the lines of "here is a deep relationship that I discovered between 2 seemingly unrelated types", which ends up being frozen in time. For example, a later developer might want to make a subtle innovation to the base type; it can be quite frightening to see how this flows through the "derived" types without any explicit indication.
Of course, YMMV, but I think of "composition" as "support change" and "inheritance" as "we found the 'correct way to think about this' and changes can be quite difficult".
This matters because I think the key to building large systems handling complex requirements is "how do we support disciplined change in the future" (empowering intellectual contributions by later generations of developers rather than just drudge maintenance).
> This contrasts inheritance as a “white box” form of reuse, because the inheriting class has full visibility over the implementation details of the inherited class; with composition as a “black box” form of reuse, because the composing object only has access to the interface of the constituent object.
So, we just need devs to stop trying to be overly clever? I can get behind that, “clever” devs are just awful to work with.
Inheritance is not a fundamental concept of anything. Inheritance is just composition with syntactic sugar. The semantic meaning was always composition.
OOP is a mistake. Rust's and Python's explicit self-passing, which turns the dot operator into simple syntactic sugar, is the correct approach. We should just stop teaching everything related to this in universities and go back to fundamentals.
Implementation inheritance is not just composition. Composition on its own does not allow for open recursion (implementing methods that were called on a base class in a derived class, via an in-built dispatch step), whereas inheritance does.
A virtual table and virtual dispatch are orthogonal to inheritance. Haskell lets you do the former without the latter. I agree that syntactic sugar for virtual dispatch is a nice language feature, because it is tedious to do by hand.
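The open-recursion distinction mentioned above can be made concrete. In the sketch below (class names hypothetical), a call inside the base class dispatches on the dynamic type of `self`, which plain delegation cannot reproduce:

```python
class Base:
    def run(self):
        # open recursion: self.step() dispatches on the *dynamic* type of
        # self, so a subclass override is picked up here, inside the base
        return f"run -> {self.step()}"

    def step(self):
        return "base step"

class Derived(Base):
    def step(self):
        return "derived step"

class Wrapper:
    """Plain composition: delegate run() to a held Base instance."""
    def __init__(self):
        self.inner = Base()

    def run(self):
        return self.inner.run()

    def step(self):
        return "wrapper step"   # never reached via self.inner.run()
```

`Derived().run()` yields `"run -> derived step"`, while `Wrapper().run()` yields `"run -> base step"`: the inner object's `self` is the `Base` instance, so the wrapper's `step` is invisible to it.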
Is your code simple? Then use whatever helps you finish it fast and rewrite later if needed. Or is it complicated? Then don't rely on any canned advice. If you are implementing a virtual machine on an embedded chip, maybe parallel arrays and gotos are the way to go, nobody except you knows. Everything else is just overpaid senior architects trying to justify their own existence by not allowing working code to be merged.
I am always bemused when I see articles like these. Do people not have an understanding of fundamental software engineering principles from OGs like Parnas/Liskov/etc.?
The fundamental idea is that of Abstraction which can be defined as the discovery/invention of "higher-level concepts" from more primitive "lower-level concepts" and then reasoning and manipulating at the higher-level. This abstraction is based on structure and/or behavioural attributes.
In order to manage the complexity inherent in the building of large systems certain fundamental aspects were identified as highly desirable. They are Separation-Of-Concerns, Modularization, Reuse and Information-Hiding.
The crucial point to understand is that Abstraction does not imply any of the above aspects! A good example are Mathematical Abstractions. But because for Software we desire the above aspects for our system-as-a-whole we learn to combine them with our Abstractions. This is why we have so many different styles of Programming (i.e. Imperative/OO/Functional/Logic/etc.).
Viewed in the above light the relation between Inheritance and Composition becomes clear. They are just different ways of emphasizing different combinations of the above aspects for your abstractions based on your design needs.
References:
1) Software Fundamentals: Collected Papers by David L. Parnas.
2) Program Development in Java: Abstraction, Specification, and Object-Oriented Design by Barbara Liskov and John Guttag.
3) Multi-Paradigm Design for C++ by James Coplien.
How about not favoring anything? There are many paradigms and each one has its place. Frankly, I do not really understand why developers fight these religious wars about languages, frameworks, etc.
> There are many paradigms and each one has its place.
That's a thought-terminating cliché. The argument against inheritance has been laid out pretty clearly. It's reasonable to rebut that argument. It's not reasonable to say, "you shouldn't criticize inheritance because Everything Has Its Place." Everything does not have its place. Sometimes we discover that something is harmful and we just stop using it.
Em.. I’m quite nitpicky and want to do the opposite of “thought-terminating”.
I’m for encouraging best practices, but most things do have their place. I present to this court two examples: “premature optimization is the root of all evil” and “goto statement considered harmful”.
Both are well accepted as things to avoid, for good reasons (incl. but not limited to preserving the sanity of coworkers).
But both definitely “have their place”. The first one’s place is legitimized (with nuance) by the author himself in the second half of the same sentence. The latter (goto) is routinely used by Linux devs (random example: https://github.com/torvalds/linux/blob/master/fs/ext4/balloc...)
> we just stop using it.
We minimise/restrict the usage.
Isn't it? People have written extensively about why we should prefer composition to inheritance, and you haven't mounted any defence of inheritance beyond the thought-terminating cliché that it "has its place."
- (java) Least interesting example to rebuke “never”: exceptions, interfaces.
- (java) inheritance is used by active and successful projects (e.g. JUnit 5, the Spring framework). I would argue that success is a pragmatic vindication criterion for a tool/technology.
True; I suppose I could concede the idea that inheritance has its place if we recognize that that place is quite small and out-of-the-way. My problem is that "everything has its place," without any qualifications, is effectively a blank cheque to use inheritance anywhere and then just go, "well that was its place."
Interfaces are great; I wouldn't consider them inheritance.
Sure, good stuff has been written with inheritance, but good stuff has been written with C, and that doesn't make C unproblematic. If Postgres were being written today, the authors would probably choose something other than C—we just have better, safer languages for that kind of work now.
I use both where choosing what I believe is appropriate for particular case.
Frankly I do not give rat's ass about what "People have written extensively". From what I read most of it sounds like spoken by politician: look Jimmy, someone can do a bad thing with it. Well fuckin don't do a bad thing.
So much over very simple and primitive thing: John HAS a key vs dog IS an animal. Both are valid and proper.
>"you haven't mounted any defense"
Why would I bother? It does not need a defense. It is like saying do not use Java because it encourages FactoryFactoryFactory, 20 levels of abstraction, etc. Well, it does not. Architecture astronauts do that, and I am not one of those.
> So much over very simple and primitive thing: John HAS a key vs dog IS an animal. Both are valid and proper.
I don't think so. "Having" vs "being" are descended from an overly simulationist notion of program design. The fact that John has a key in real life does not suggest that this relationship should be represented by an object John which owns an object Key. I think this kind of ontological approach is behind a lot of bad object-oriented design.
> Architecture astronauts do it and I am not one of those
This is the same rationale used to defend memory-unsafe languages. I like that as a point of comparison because we can actually measure the relationship between the use of memory-unsafe languages and the number of dangerous memory vulnerabilities that show up even in highly-scrutinized code bases like the Linux kernel. "I write good code" doesn't fly; bad code is getting written, and the tools we have to correct that are our languages and paradigms.
> Why would I bother. It does not need a defense.
If we take our craft seriously, we need to be able to discuss the merits and drawbacks of our tools without getting defensive and refusing to engage. I'm not saying you have to defend it to me—I'm just some guy online—but if you're disinterested in defending it in general, I think that's a craft issue.
I was not talking about criticizing. Valid critique is useful and deserved. And this concerns composition as well as any other area. I was talking about crusades by programmers.
Gameplay logic inherently leans more towards composition, with a little hint of inheritance.
You can have players and monsters, which are all types of "characters" or "units", which is inheritance, but instead of having a separate FlyingPlayer and a separate FlyingMonster, which use the same code for flight, you could have a FlyingComponent, which is composition.
I've been going all in on composition and it's amazing for quickly implementing new gameplay ideas. For example, instead of a monolithic `Player` class you could have a `PlayerControlComponent` then you can move that between different characters to let the player control monsters, drones, etc.
Imagine instead of only Pac-Man being able to eat the pills, you could also give the ghosts the `PillEaterComponent` in some crazy special game modes :)
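The component idea above can be sketched in a few lines of Python (all class names, including `PillEaterComponent`, are hypothetical illustrations):

```python
class Character:
    """A bare entity; all behavior comes from attached components."""
    def __init__(self, name, *components):
        self.name = name
        self.components = list(components)

    def update(self):
        # each frame, every attached component acts on its owner
        return [c.apply(self) for c in self.components]

class PlayerControlComponent:
    def apply(self, char):
        return f"{char.name} reads player input"

class PillEaterComponent:
    def apply(self, char):
        return f"{char.name} can eat pills"

pacman = Character("Pac-Man", PlayerControlComponent(), PillEaterComponent())
ghost = Character("Blinky", PillEaterComponent())   # special game mode!
```

Moving `PlayerControlComponent` onto the ghost is one line; no `FlyingPlayer`/`FlyingMonster` class pair is ever needed.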
I've also been fantasizing about a hypothetical language that is built from the ground up for coding gameplay, that doesn't use the word "class" at all but something else that could be a hybrid of inheritance+composition.
It depends though. Learning which things don't actually work the way the textbook says is the key to leveling up from junior to senior. Some people never get it; some get it quickly.
"Single responsibility" isn't an especially useful yardstick. If you actually need to decompose a complex piece of logic into modules, the place to start is by identifying areas of high cohesion and separating them into loosely coupled functions. Ideally you can match those up to a DDD-style ubiquitous language, so your code will make intuitive sense to people familiar with the domain. "Does this have one responsibility?" really isn't the right question to ask.
The open-closed principle is straight-up wrong. Code should be easy to modify and easy to delete, and you only rarely need to add hooks for extensibility. Liskov substitution is fine, but it has more to do with correctness than cleanliness. Dependency inversion is a source of premature abstraction—you shouldn't open the door to polymorphism until you need to. Interface segregation is good, though.
In general, I think SOLID is overly enamoured with the features of object orientation. Objects themselves just aren't that big of a deal. It'd be like making the whole acronym about if-statements. If I were going to make a pithy acronym about legible code, it'd have more to say about statelessness, coupling, and unit tests. It'd reference Ousterhout's idea of deep modules, and maybe say something about "Parse, don't validate," or at least something against null values.
Thank you for taking the time to reply, instead of just hitting downvote. I feel like if we argued over a beer we’d probably end up agreeing on a lot of things. But let’s start by disagreeing. :-)
> "Does this have one responsibility?" really isn't the right question to ask.
It’s a great question to ask. As a senior engineer, the answer might be “no”, but there’s a vast difference between code where the answer is “no” because someone made a conscious choice, vs code where nobody even asked the question. Here’s the thing: a compiler and linker can join ten classes into a single executable, but even a senior engineer cannot look at a single class with ten responsibilities and figure out what the fuck is going on. There’s a doc at my company that describes the core function of one particular service. The doc describes the simplest of systems, so you would be surprised to learn that 1) it took me two years of working on the product before I could write it and 2) nobody knew. The reason it took two years was that there were 10 different pathways, and every pathway was a giant implementation, each written differently, and each, ultimately, doing the exact same fucking thing. But you’d never be sure just by looking at the code. In fact it very much looked like each of these things did very specific things differently. Over two years, while also doing my job of keeping this thing running and adding features, I refactored the thing to be SOLID. In doing so, I demonstrated that they all do the exact same thing. We haven’t finished refactoring everything, but we do now test all the pathways with a parallel implementation that verifies 80 classes and 500 instances at runtime against one class and ten instances.
I work on software that you and most people on planet Earth with at least a mobile phone are using in one way or another. I have made many pieces of this system better by evolving a clusterfuck of coupling into a system that is easy to reason about, maintain and evolve - by applying SOLID principles.
I’m currently working on a package used by over 1,000 services. The most pain has been caused by previous iterations ignoring the open-closed principle. As you say, “easy to modify and delete”. A stronger rule, which perhaps you’re alluding to, is don’t allow any extension at all, and just expose only interfaces. In that sense I could agree the open-closed principle is moot, but it’s moot for taking its argument to the logical conclusion.
I am also a fan of DDD, and for the reasons you allude to: the second half of the book is more about communicating in a large engineering organization.
An important point not mentioned by the article is that of "co-recursion" with inheritance (of implementation).
That is: an instance of a subclass calls a method defined on a parent class, which in turn may call a method that's been overridden by the subclass (or even another sub-subclass in the hierarchy) and that one in turn may call another parent method, and so on. It can easily become a pinball of calls around the hierarchy.
Add to that the fact that "objects" have state, and each class in the hierarchy may add more state, and modify state declared on parents. Perfect combinatory explosion of state and control-flow complexity.
I've seen this scenario way too many times in projects, and the worst thing is: many developers think it's fine... and are even proud of navigating such a mess. Heck, many popular "frameworks" encourage this.
Basically: every time you modify a class, you must review the inner implementation of all other classes in the hierarchy, and call paths to ensure your change is safe. That's a horrendous way to write software, against the most basic principles of modularity and low coupling.
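The pinball effect described above takes only a few lines to reproduce. A contrived Python sketch (class names hypothetical) where control and mutable state bounce between parent and child:

```python
class Parent:
    def __init__(self):
        self.count = 0

    def process(self):
        self.count += 1               # parent mutates shared state...
        return self.validate()        # ...then control bounces to the child

    def validate(self):
        return f"parent validate, count={self.count}"

class Child(Parent):
    def validate(self):
        self.count += 10              # child mutates parent-declared state...
        if self.count < 20:
            return super().process()  # ...and bounces back up again
        return f"done, count={self.count}"
```

Calling `Child().process()` ping-pongs Parent→Child→Parent→Child before returning `"done, count=22"`; to predict that, you had to read every class in the hierarchy.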
This is only the case when the language does not distinguish between methods that can be overridden and those that cannot. C++ gives you the keyword "virtual" to put in front of each member function that you want to opt into this behavior, and in my experience people tend to give some thought to which functions should be virtual. So I rarely have this issue in C++. But in languages like Python, where everything is overridable, the issue you mention is very real.
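The "everything is overridable" point is easy to demonstrate in Python: even a method the author treats as an internal detail can be replaced by a subclass, silently changing behavior the author reasoned about. (The `Pipeline` class below is a hypothetical illustration.)

```python
class Pipeline:
    """The author treats _transform and _finalize as internal details."""
    def run(self, data):
        return self._finalize(self._transform(data))

    def _transform(self, data):
        return [x * 2 for x in data]

    def _finalize(self, data):
        return sum(data)

class Surprising(Pipeline):
    # Nothing marks _transform as overridable, yet any subclass can
    # replace it and silently change what run() computes.
    def _transform(self, data):
        return data
```

`Pipeline().run([1, 2])` is 6, but `Surprising().run([1, 2])` is 3; there is no `virtual`/`final` boundary the base-class author could have enforced.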
Good point. In Java and many other languages you can opt out instead... which might make a big difference. Is it more of a "cultural" thing?... again, many frameworks encourage it by design, and so do many courses/tutorials... so those devs would be happy to put "virtual" everywhere in C++
Kotlin switches that back to opt-in -- a deliberate, design-level decision learned from observing how to improve on Java.
heh, I have seen programmers use virtual everywhere because they were too lazy to use __declspec(dllexport) on Windows :)
The virtual keyword in C++ is more of a compiler optimization and less of a design decision. C++ doesn't want everyone paying the overhead of virtual function calls like other languages do.
I think that's an over-simplification. There was pressure on the language to ensure that data structures were compatible with C structs, so avoiding the vtable with simple classes was a win for moving data between these languages.
Of course these days with LTO the whole performance space is somewhat blurred since de-virtualisation can happen across whole applications at link time, and so the presumed performance cost can disappear (even if it wasn't actually a performance issue in reality). It's tough to create hard and fast rules in this case.
It still functions as a design signal even though that isn't the reason for it.
While in Python everything is overridable, does this show up in practice outside of (testing) frameworks? I feel like this is way more common in Java. My experience in Python is limited to small micro service like backends and data science apps.
I've seen it a lot on Django projects. Maybe I was just unlucky on the Python projects I've joined.
> It can easily become a pinball of calls around the hierarchy.
This is why hierarchies should have limited depth. I'd argue some amount of "co-recursion" is to be expected: after all, the point of the child class is to reuse the logic of the parent but override some of it.
But if the lineage goes too deep, it becomes hard to follow.
> every time you modify a class, you must review the inner implementation of all other classes in the hierarchy, and call paths to ensure your change is safe.
I'd say this is a fact of life for all pieces of code which are reused more than once. This is another reason why low coupling high cohesion is so important: if the parent method does one thing and does it well, when it needs to be changed, it probably needs to be changed for all child classes. If not, then the question arises why they're all using that same piece of code, and if this refactor shouldn't include breaking that apart into separate methods.
This problem also becomes less pressing if the test pyramid is followed properly, because that parent method should be tested in the integration tests too.
> I'd argue some amount of "co-recursion" is to be expected: after all the point of the child class is to reuse logic of the parent
That's the point: You can reuse code without paying that price of inheritance. You DON'T have to expect co-recursion or shared state just for "code-reuse".
And this, I think, is the key point: behavior inheritance is NOT a good technique for code-reuse... Type-inheritance, however, IS good for abstraction, for defining boundaries, to enable polymorphism.
> I'd say this is a fact of life for all pieces of code which are reused more than once
But you want to minimize that complexity. If you call a pure function, you know it only depends on its arguments... done. If you call a method on a mutable object, you have to read its implementation line-by-line, and navigate a web of possibly polymorphic calls which may even modify shared state.
> This is another reason why low coupling high cohesion is so important
exactly. Now, I would phrase it the other way around though: "... low coupling high cohesion is so important..." that's the reason why using inheritance of implementation for code-reuse is often a bad idea.
> You can reuse code without paying that price of inheritance.
The same pinball of method calls happens at almost exactly the same way with composition.
You save some idiosyncrasies around the meaning of the object pointer, and that's all.
How so? Not sure what you mean.
If object A calls a method of object B (composition), then B cannot call back into A, and neither A nor B can override any behavior of the other. (And this is the original core tenet of OO: being all about "message-passing".)
Of course they can accept and pass other objects/functions as arguments, but that would be explicit and specific, without having to expose the whole state/impl to each other.
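The one-way nature of composition can be sketched in Python (class names hypothetical):

```python
class Engine:
    def start(self):
        # Engine knows nothing about Car; it cannot call back into it
        return "engine started"

class Car:
    def __init__(self):
        self.engine = Engine()          # composition: Car HAS an Engine
    def start(self):
        # the call goes one way; Engine never re-dispatches into Car
        return f"car: {self.engine.start()}"

class TunableEngine:
    # If the inner object needs outside behavior, it is passed in
    # explicitly and specifically, not inherited implicitly.
    def start(self, on_start):
        return on_start("engine started")
```

Contrast with inheritance: here, nothing `Car` defines can change what `Engine.start` does, and any collaboration (`TunableEngine`) is a visible, narrow parameter rather than an open override surface.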
> Add to that the fact that "objects" have state, and each class in the hierarchy may add more state, and modify state declared on parents. Perfect combinatory explosion of state and control-flow complexity.
What if you are actually dealing with state and control-flow complexity? I'm curious what the "ideal" way to do this would be in your view. I am trying to implement a navigation system, stripped of interface design and all the application logic, and even at this level it can get pretty complicated.
You are always dealing with state and control-flow in software design. The challenge is to minimize state as much as possible, make it immutable as much as possible, and simplify your control-flow as much as possible. OO-style inheritance of implementation (with mutable state dispersed all over the place and pinball-style control-flow) goes against those goals.
Closer to the "ideal": declarative approaches, pure functions, data-oriented pipelines, logic programming.
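Applied to the navigation example from the parent comment, a pure-function/data-pipeline style might look like this hedged Python sketch (`NavState`, `rotate`, `advance` are hypothetical names):

```python
import math
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class NavState:
    x: float
    y: float
    heading_deg: float

def rotate(state: NavState, delta: float) -> NavState:
    # pure: returns a new state, never mutates the old one
    return replace(state, heading_deg=(state.heading_deg + delta) % 360)

def advance(state: NavState, dist: float) -> NavState:
    rad = math.radians(state.heading_deg)
    return replace(state, x=state.x + dist * math.cos(rad),
                          y=state.y + dist * math.sin(rad))

# a navigation run is just a fold over pure steps
state = NavState(0.0, 0.0, 0.0)
for step in (lambda s: rotate(s, 90), lambda s: advance(s, 10.0)):
    state = step(state)
```

Every intermediate state is immutable and inspectable, and the control-flow is a flat pipeline rather than calls bouncing around a hierarchy.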
If the author intended a function to be overridable and designed the class as such, none of this is a problem. I never need to look inside the parent class, let alone the entire hierarchy.
On the flip side, if the author didn't want to let me do that, I really appreciate having the ability to do it anyways, even if it means tighter coupling for that one part.
I tried to contribute a bug fix to a Common Lisp project and found this exact issue. In CL you can trace methods but if the call hierarchy is several dozen levels deep with multiple type overrides and several :around, :before and :after combinations, it’s just impossible to keep track of what does what. This is not a language issue though, CLOS is really powerful and can be a life saver in good hands, but when people use it just to try the feature it creates monstrosities.
I think the fundamental issue with implementation-inheritance is the class diagram looks nice, but it hides a ton of method-level complexity if you consider the distinction between calling and subtyping interfaces, complexity that is basically impossible to encapsulate and would be better expressed in terms of other design approaches.
With interface-inheritance, each method is providing two interfaces with one single possible usage pattern: to be called by client code, but implemented by a subclass.
With implementation-inheritance, suddenly, you have any of the following possibilities for how a given method is meant to be used:
(a) called by client code, implemented by subclass (as with interface-inheritance)
(b) called by client code, implemented by superclass (e.g.: template method)
(c) called by subclass, implemented by superclass (e.g.: utility methods)
(d) called by superclass, implemented by subclass (e.g.: template's helper methods)
And these cases inevitably bleed into each other. For example, default methods mix (a) and (b), and mixins frequently combine (c) and (b).
Because of the added complexity, you have to carefully design the relationship between the superclass, the subclass, and the client code, making sure to correctly identify which methods should have what visibility (if your language even allows for that level of granularity!). You must carefully document which methods are intended for overriding and which are intended for use by whom.
But the code structure itself in no way documents that complexity. (If we want to talk SOLID, it flies in the face of the Interface Segregation Principle). All these relationships get implicitly crammed into one class that might be better expressed explicitly. Split out the subclassing interface from the superclass and inject it so it can be delegated to -- that's basically what implementation-inheritance is syntactic sugar for anyway and now the complexity can be seen clearly laid out (and maybe mitigated with refactoring).
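The "split out the subclassing interface and inject it" refactor can be sketched in Python (the `Report` example is hypothetical):

```python
# The usual template method via implementation inheritance:
class Report:
    def render(self):                    # calls down into the subclass
        return f"<header>{self.body()}</header>"
    def body(self):
        raise NotImplementedError

class SalesReport(Report):
    def body(self):
        return "sales figures"

# The same relationship made explicit: the "subclassing interface"
# is split out and injected -- roughly what inheritance is sugar for.
class SalesBody:
    def body(self):
        return "sales figures"

class InjectedReport:
    def __init__(self, body_provider):
        self.body_provider = body_provider
    def render(self):
        return f"<header>{self.body_provider.body()}</header>"
```

Both produce the same output, but in the second version the superclass/subclass relationship is an ordinary, visible dependency instead of an implicit dispatch rule.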
There is a trade-off in verbosity to be sure, especially at the call site where you might have to explicitly compose objects, but when considering the system complexity as a whole I think it's rarely worth it when composition and a tiny factory function provides the same external benefit without the headache.
These are powerful tools, if used with discipline. But especially in application code interfaces change often and are rarely well-documented. It seems inevitable that if the tool is made available, it will eventually be used to get around some design problem that would have required a more in-depth refactor otherwise -- a refactor more costly in the short-term but resulting in more maintainable code.
Author here. I wrote “ But even a modestly more recent language like Java has visibility attributes that let a class control what its subtypes can view or change, meaning that any modification in a subclass can be designed before we even know that a subtype is needed.” which covers your situation: if you need to ensure that subtypes use the supertype’s behaviour in limited ways, use the visibility modifiers and `final` modifier to impose those limits.
I 100% agree. And even though I use C#, which is kind of OOP heavy, I use inheritance and encapsulation as little as possible. I try to use a more functional workflow, with data separated from functions/methods. I keep data in immutable records and use methods/functions to transform it, trying to isolate side effects and minimize kept state.
It's a much more pleasurable and easier way to work, for me at least.
Trying to follow the flow through a gazillion objects with state changing everywhere is a nightmare, and I'd rather not return to that.
I agree that changing object state and having side effects should be avoided, but you can achieve both immutability and encapsulation very easily with C#, using records and init-only setters.
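The original C# snippet appears not to have survived. As a rough analogue of the same idea (immutability plus encapsulation), here is a Python sketch where a frozen dataclass plays the role a C# record would (the `Account` type is hypothetical):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

    def deposit(self, amount: int) -> "Account":
        # encapsulated behavior + immutability: return a new object,
        # never mutate the existing one
        return replace(self, balance=self.balance + amount)

a = Account("ada", 100)
b = a.deposit(50)     # a is unchanged; b is a new value
```

Attempting `a.balance = 0` raises an error, so state transitions must go through the type's own methods.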
Sounds like someone didn’t follow the SOLID principles
Arguably the answer is “When Barbara Liskov invented CLU”. It literally didn’t support inheritance, just implementation of interfaces, and here we have her explaining 15-odd years later why she was right the first time.
I used to do a talk about Liskov that included the joke “CLU didn’t support object inheritance. The reason for this is that Barbara Liskov was smarter than Bjarne Stroustrup.”
There is a reason C++ devs and only C++ devs have nightmares of diamond inheritance.
Oh, the damage that language has done to a generation, but at least it is largely behind us now.
I haven't encountered diamond inheritance a single time in 10 years of writing/reading C++, so I definitely don't have nightmares about it. Maybe that was really a thing in the 90s or 2000s?
I have been programming professionally in c++ for 20 years. I remember once thinking "cool, I could use virtual inheritance here". I ended up not needing it.
MI is not an issue in C++, and if it were, the solution would be virtual inheritance.
Exactly. Unlike Java, where every object inherits from Object, in C++ multiply inheriting from objects with a common base class is rare.
Some older C++ frameworks give all their objects a common base class. If that inheritance isn't virtual, developers may not be able to multiply inherit objects from that framework. That's fine, one can still inherit from classes outside the framework to "mix in" or add capabilities.
I've never understood the diamond pattern fear-mongering. It's just a rarely-encountered issue to keep in mind and handle appropriately.
> in C++ multiply inheriting from objects with a common base class is rare.
One example is COM (or COM-like frameworks) where every interface inherits from IUnknown. However, there is no diamond problem because COM interfaces are pure abstract base classes and the pure virtual methods in IUnknown are implemented only once in the actual concrete class.
Diamond inheritance is its own special kind of hell, but “protected virtual” members of java and c# are the “evil at scale” that’s still with us today. An easy pattern that leads to combinatorial explosion beyond the atoms in the universe. Trivially.
People need to look at a deck of playing cards: 52 cards, and you get 8×10^67 possible orderings of the deck. Don’t replicate this in code.
At least C# methods are not virtual by default like in Java.
Why do protected virtual methods lead to an explosion?
Protected = subclasses can call them.
Virtual = subclasses can override them.
So basically, any subclass can call the method, and that method may be overridden in any other subclass.
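A minimal sketch of the hazard in Python, where every method is effectively "protected virtual" by default (class names are made up):

```python
class Base:
    def save(self):
        # "template method": calls a hook that any subclass may override
        self.validate()
        self.log = getattr(self, "log", []) + ["saved"]

    def validate(self):
        self.log = ["base validation"]

class Child(Base):
    def validate(self):
        # this override silently changes what Base.save does internally
        self.log = ["child validation, base checks skipped"]

c = Child()
c.save()
print(c.log)  # ['child validation, base checks skipped', 'saved']
```

Base's checks never ran, and nothing at the call site hints at that. Scale this up to a deep hierarchy and you get the call-pinball described above.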
What is the issue with those overrides? They only affect that one path in the hierarchy of inheritance, no? Not a C++ user here, but I imagine it would be catastrophic if an unrelated class (not on the path to the root superclass) could override a method and affect unrelated classes/objects.
Every language that permits diamond inheritance causes the devs who dare to use this feature at least some nightmare. It's not a C++ issue.
It's also cultural, possibly. Python supports diamond inheritance, and clearly states how it handles it (it ends up virtual in C++ terms). But in like 20 years of working with Python I can't remember encountering diamond inheritance in the wild once.
Django documentation explicitly recommended it for a short while. At a point, the Python community created all kinds of mixins on all kinds of random APIs.
Then people noticed it was bad, and stopped.
Mixins are usually explicitly orthogonal and rarely get subclassed, so diamond-shaped inheritance with mixins seems rare.
Diamond inheritance is in fact highly pervasive in Python. The reason is that every class is a subclass of object since Python 3 (Python 2 allows classic classes that are different). So every single time you use multiple inheritance you have diamond inheritance. Some of this diamond inheritance is totally innocuous, but mostly not, because a lot of classes override dunder methods on object like __setattr__. It was Guido van Rossum himself that observed the prevalence of diamond inheritance that led to Python 2.3 fixing the MRO, and introducing the super() function to make multiple inheritance sane.
You should read his essay: https://www.python.org/download/releases/2.2/descrintro/
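For anyone who hasn't seen it, here's the canonical diamond in Python (toy class names): `super()` follows the C3 MRO, so `A` runs exactly once even though `D` reaches it by two paths.

```python
trace = []

class A:
    def greet(self): trace.append("A")

class B(A):
    def greet(self): trace.append("B"); super().greet()

class C(A):
    def greet(self): trace.append("C"); super().greet()

class D(B, C):  # diamond: two paths from D back to A
    def greet(self): trace.append("D"); super().greet()

D().greet()
print(trace)                            # ['D', 'B', 'C', 'A'] -- A runs once
print([k.__name__ for k in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
```

Note that `B`'s `super()` call dispatches to `C`, not to `B`'s declared base `A` — `super()` means "next in the MRO", which is what makes cooperative multiple inheritance work.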
> Diamond inheritance is in fact highly pervasive in Python.
I don't think that's true, because...
> So every single time you use multiple inheritance you have diamond inheritance.
Multiple inheritance is supported but not itself “highly pervasive” in Python
> It was Guido van Rossum himself that observed the prevalence of diamond inheritance
The essay you link does not support that claim. He doesn’t observe an existing prevalence, he describes new features being added simultaneously with the MRO fix that would present new use cases where diamond inheritance may be useful.
And, its true, diamond inheritance is more common in modern Python than it was with classic classes in ancient Python, but there is a huge leap between that and “highly pervasive”.
The MRO fix was added to Python 2.3. The new style classes that would cause diamond inheritance to be prevalent were already present in Python 2.2. So they weren’t simultaneous.
A better phrasing would be that Guido predicted the prevalence of diamond inheritance in Python and therefore found it necessary to fix the MRO.
The most evil code I’ve ever written was diamond inheritance where (some) of the base types were template parameters.
I needed it!
For reasons.
Good reasons? No… but I had my justification.
> at least it is largely passed us now
What does this mean? There doesn't seem to be a popular alternative to C++ yet, unfortunately.
Aside from game dev, Rust is being used in quite a lot of green field work where C++ would have otherwise been used.
Game dev world still has tons of C++, but also plenty of C#, I guess.
Agreed that it’s not really behind us though. Even if Rust gets used for 100% of C++’s typical domains going forward (and it’s a bit more complicated than that), there are tens? hundreds? of millions (or maybe billions?) of lines of working C++ code out there in the wild that’ll need to be maintained for quite a long time - likely on the order of decades.
The problem in Rust is that if B is inside of A, you can't have a writeable reference to both A and B at the same time. This is alien to the way C/C++ programmers think. Yes, there are ways around it, but you spend a lot of time in Rust getting the ownership plumbing right to make this work.

> you can't have a writeable reference to both A and B at the same time

> but you spend a lot of time in Rust getting the ownership plumbing right to make this work
I think you maybe meant to say something different because here's the most obvious thing:
Now it may take you a while to figure out if you've never done Rust before, but this is trivial.

Did you perhaps mean simultaneous partial field borrows, where you have two separate functions that return the name fields mutably and you want to use the references returned by those functions separately, simultaneously? That's hopefully going to be solved at some point, but in practice I've only seen the problem rarely, so you may be overstating the true difficulty of this problem in practice.
Also, even in a more complicated example you could use RefCell to ensure that you really are grabbing the references safely at runtime while side-stepping the compile time borrow checking rules.
It's kind of crazy that OOP is sold to people as 'thinking about the world as objects' and then people expect to have an object, randomly take out a part, do whatever they want with it, and just stick it back in and voila.
This is honestly such an insane take when you think about what the physical analogue would be (which again, is how OOP is sold).
The proper thing here is that, if A is the thing, then you really only have an A and your reference into B is just that, And should be represented as such, with appropriate syntactic sugar. In Haskell, you would keep around A and use a lens into B and both get passed around separately. The semantic meaning is different.
I recently had this problem in some Rust code. I was implementing A and had some code that would decide which of several 'B's to use. I then wanted to call an internal method on A (that takes a mutable reference to A) with a mutable reference to the B that I selected. That was obviously rejected by the compiler and I had to find a way around it.
It's not crazy at all, especially since the majority of programming is about digitalization of real-world things/processes.
eBay, Tinder, Youtube, Robinhood, etc, etc.
Those are all real world things that are now represented in digital world and adjusted for that.
Also "world" doesn't imply "physical", but that's different matter.
And at the end of the day that was not wildly crazy, but wildly successful!
Such school of thinking enabled generations of software engineers who created all this digital world.
Wildly successful does not mean a good idea.
> Such school of thinking enabled generations of software engineers who created all this digital world.
Same could be said for imperative or functional programming for that matter.
Rust depends on C++; until people cut their compilers loose from LLVM, GCC, and other C++-based runtimes, it is going to stay with us for a very long time.
That includes industry standards like POSIX and Khronos, CUDA, HIP and SYCL, MPI and OpenMP, which mostly acknowledge C and C++ in their definitions.
There's a growing group that believes no new projects should be started in C/C++ due to its lack of memory safety guarantees. Obviously we should be managing existing projects, but 1973 is calling, it's time to retire into long-tail maintenance mode.
https://security.googleblog.com/2025/11/rust-in-android-move...
I've programmed C++ for decades and I believe all sane C++ code styles disallow multiple inheritance (possibly excepting pure abstract classes which are nothing but interfaces). I certainly haven't encountered any for a long time even in the OO-heavy code bases I've worked with.
I'm spoiled by Python's incredibly sane inheritance and I always have to keep in mind that inheritance is a very different beast in other languages.
And Python didn't get it right the first time either. It wasn't until Python 2.3, when method resolution order was decided by C3 linearization, that inheritance in Python became sane.
http://mail.python.org/pipermail/python-dev/2002-October/029...
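One consequence of C3 you can see directly (toy classes): it refuses base orderings for which no consistent linearization exists at all, instead of silently picking a broken resolution order like the old depth-first scheme could.

```python
class A: pass
class B(A): pass

# C3 linearization rejects listing a class before its own subclass,
# because no monotonic method resolution order exists for it.
try:
    class C(A, B): pass
    err = None
except TypeError as e:
    err = str(e)

print(err)  # TypeError complaining about a consistent MRO
```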
Inheritance being "sane" in Python is a red herring for which many smart people have fallen (e.g. https://www.youtube.com/watch?v=EiOglTERPEo). It's like saying that building a castle with sand is not a very good idea because first, it's going to be very difficult to extract pebbles (the technical difficulty) and also, it's generally been found to be a complicated and tedious material to work with and maintain. Then someone discovers a way to extract the pebbles. Now we have a whole bunch of castles sprouting that are really difficult to maintain.
Python is slightly better because it can mostly be manipulated beyond recognition due to strong metaprogramming, but Python's operator madness is dangerous. Random code can run at any moment. It's useful for some things and a good scripting language, and a very well designed one, no question there. Still, it would be better if it supported proper type classes. It could retain the dynamic typing, just be more sensible.
I'm always surprised by how arrogant and unaware Python developers are. JavaScript/C++/etc developers are quite honest about the flaws in their language. Python developers will stare a horrible flaw in their language and say "I see nothing... BTW JS sucks so hard.".
Let me give you just one example of Python's stupid implementation of inheritance.
In Python you can initialize a class with a constructor that's not even in the inheritance chain (sorry, inheritance tree, because Python developers think multiple inheritance is a good idea).
And before you say "but no one does that", no, I've seen that myself. Imagine you have a class that inherits from SteelMan but calls StealMan in its constructor, and Python's like "looks good to me".

I've seen horrors you people can't imagine.
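In case that sounds abstract, this really does run without complaint (SteelMan/StealMan as in the comment; `Debater` is a made-up name):

```python
class SteelMan:
    def __init__(self):
        self.argument = "strongest"

class StealMan:
    def __init__(self):
        self.loot = "everything"

class Debater(SteelMan):
    def __init__(self):
        # calls the initializer of a class that is NOT in Debater's
        # inheritance chain -- Python accepts this without complaint
        StealMan.__init__(self)

d = Debater()
print(hasattr(d, "loot"), hasattr(d, "argument"))  # True False
```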
* I've seen superclass constructors called multiple times.
* I've seen constructors called out of order.
* I've seen intentional skipping of constructors (with comments saying "we have to do this because blah blah blah)
* I've seen intentional skipping of your parent's constructor and instead calling your grandparent's constructor.
* And worst of all, calling constructors which aren't even in your inheritance chain.
And before you say "but that's just a dumb thing to do", that's the exact criticism of JS/C++. If you don't use any of the footguns of JS/C++, then they're flawless too.
Python developers would say "Hurr durr, did you know that if you add an object and an array in JS you get a boolean?", completely ignoring that that's a dumb thing to do, but Python developers will call superclass constructors that don't even belong to them and think nothing of it.
------------------------------
Oh, bonus point. I've seen people creating a second constructor by calling `object.__new__(C)` instead of `C()` to avoid calling `C.__init__`. I didn't even know it was possible to construct an object while skipping its constructor, but dumb people know this and they use it.
Yes, instead of putting an if condition in the constructor Python developers in the wild, people who walk among us, who put their pants on one leg at a time like the rest of us, will call `object.__new__(C)` to construct a `C` object.
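For the record, the bypass really is that short (toy class):

```python
class C:
    def __init__(self):
        self.initialized = True

c1 = C()                # normal path: __new__ runs, then __init__
c2 = object.__new__(C)  # allocates a real C, but __init__ never runs
print(type(c2) is C, hasattr(c2, "initialized"))  # True False
```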
And Python developers will look at this and say "Wow, Python is so flawless".

> In Python you can initialize a class with a constructor that's not even in the inheritance chain
No, you can't. Or, at least, if you can, that’s not what you’ve shown. You’ve shown calling the initializer of an unrelated class as a cross-applied method within the initializer. Initializers and constructors are different things.
> Oh, bonus point. I've see people creating a second constructor by calling `object.__new__(C)` instead of `C()` to avoid calling `C.__init__`.
Knowing that there are two constructors that exist for normal, non-native, Python classes, and that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
> I didn't even know it was possible to construct an object while skipping its constructor, but dumb people know this and they use it.
I wouldn’t use the term “dumb people” to distinguish those who—unlike you, apparently—understand the normal Python constructors and the difference between a constructor and an initializer.
> Knowing that there are two constructors that exist for normal, non-native, Python classes, and that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
I disagree that this is basic knowledge. In Python a callable is an object whose type has a __call__() method. So when we see Class(), it's just a syntax proxy for Metaclass.__call__(Class). That's the true (first of three?) constructor, the one then calling instance = Class.__new__(cls), and soon after Class.__init__(instance), to finally return instance.
That's not basic knowledge.
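To illustrate the point, a made-up metaclass that just traces the call order:

```python
calls = []

class Meta(type):
    # Class() is really Metaclass.__call__(Class), which then runs
    # __new__ and __init__ -- the outermost "constructor".
    def __call__(cls, *args, **kwargs):
        calls.append("Meta.__call__")
        return super().__call__(*args, **kwargs)

class C(metaclass=Meta):
    def __new__(cls):
        calls.append("C.__new__")
        return super().__new__(cls)

    def __init__(self):
        calls.append("C.__init__")

C()
print(calls)  # ['Meta.__call__', 'C.__new__', 'C.__init__']
```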
> Knowing that there are two constructors that exist for normal, non-native, Python classes, and that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
I didn't know most of that, and I've performed in a nightclub in Python, maintained a CSP networking stack in Python, presented a talk at a Python conference, implemented Python extensions with both C and cffi, and edited the Weekly Python-URL!
Oh I've seen one team constructing an object while skipping the constructor for a class owned by another team. The second team responded by rewriting the class in C. It turns out you cannot call `object.__new__` if the class is written in native code. At least Python doesn't allow you to mess around when memory safety is at stake.
For what it's worth, pyright highlights the problem in your first example:
ty and pyrefly give similar results. Unfortunately, mypy doesn't see a problem by default; you need to enable strict mode.

1. Your first example is very much expected, so I don't know what's wrong here.
2. Your examples / post in general seem to be "people can break semantics and get to the internals just to do anything", which I agree is bad, but Python works on the principle of "we're all consenting adults", and just because you can doesn't mean you should.
I definitely don't consent to your code, and I wouldn't allow it to be merged in main.
If you or your team members have code like this, and it's regularly getting pushed into main, I think the issue is that you don't have safeguards for design or architecture
The difference with JavaScript "hurr durr add object and array" - is that it is not an architectural thing. That is a runtime / language semantics thing. One would be right to complain about that
> The difference with JavaScript "hurr durr add object and array" - is that it is not an architectural thing. That is a runtime / language semantics thing. One would be right to complain about that
Exactly. One is something in plain sight in front of ones eyes, and the other one can be well hidden, not easy to spot.
I don't understand the problem with your first example. The __init__ method isn't special and B.__init__ is just a function. Your code boils down to:
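Roughly this sketch (names made up):

```python
def set_fields(self):
    self.x = 1

class B:
    __init__ = set_fields  # an initializer is just a function attribute

class A:
    def __init__(self):
        B.__init__(self)   # ...so this is just an ordinary function call

print(A().x)  # 1
```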
Which like, yeah, of course that works. You can setattr on any object you please. Python's inheritance system ends up being sane in practice because it promises you nothing except method resolution, and that's how it's used. Inheritance in Python is for code reuse.

Your examples genuinely haven't even scratched the surface of the weird stuff you can do when you take control of Python's machinery—self is just a convention, you can remove __init__ entirely, types are made up and the points don't matter. Foo() isn't even special, it's just __call__ on the class's type, and you can make that do anything.
With the assumptions typical of static class-based OO (but which may or may not apply in programs in Python), this naively seems like a type error, and even when it isn't it introduces a coupling where the class where the call is made likely depends on the internal implementation (not just the public interface) of the called class, which is...definitely an opportunity to introduce unexpected bugs easily.
Curious quirk of history that C++ peaked when Gen X was coming of age, who were disproportionately affected by lead poisoning.
There's nothing wrong with implementation inheritance, though. Generic typestate is implementation inheritance in a type-theoretic trench coat. We were just very wrong to think that implementation inheritance has anything to do with modularity or "programming in the large": it turns out that these are entirely orthogonal concerns, and implementation inheritance is best used "in the small"!
If CLU only supported composition, was the Liskov substitution principle still applicable to CLU?
CLU implemented abstract data types, what we commonly call generics today.
The Liskov substitution principle in that context pretty much falls out naturally, as the entire point is to substitute types into your generic data structure.
Yes it is, as it is about the semantics of type hierarchies, not their syntax. If your software has type hierarchies, then it is a good idea for them to conform to the principle, regardless of whether the implementation language syntax includes inheritance.
It might be argued that CLU is no better than typical OO languages in supporting the principle, but the principle is still valid - and it was particularly relevant at the time Liskov proposed it, as inheritance was frequently being abused as just a shortcut to do composition (fortunately, things are better now, right?)
No, because the LSP is specifically about inheritance, or subtyping more generally. No inheritance/subtyping, no LSP.
It is true that an interface defines certain requirements of things that claim to implement it, but merely having an interface lacks the critical essence of the LSP. The LSP is not merely a banal statement that "a thing that claims to implement an interface ought to actually implement it". It is richer and more subtle than that, though perhaps from an academic perspective, still fairly basic. In the real world a lot of code technically violates it in one way or another, though.
I mean, duh. The spicier take is that Barbara Liskov is smarter than Alan Kay.
Except that Smalltalk is so aggressively duck-typed that inheritance is not particularly first class except as an easy way to build derived classes using base classes as a template. When it comes to actually working with objects, the protocol they follow (roughly: the informally specified API they implement) is paramount, and compositional techniques have been a part of Smalltalk best practice since forever ago (something it took C++ and Java devs decades to understand). This allows you to abuse the snotdoodles out of the doesNotUnderstand: operator to delegate received messages to another object or other objects; and also the become: operator to substitute one object for another, even if they lie worlds apart on the class-hierarchy tree, usually without the caller knowing the switch has taken place. As long as they respond to the expected messages in the right way, it all adds up the same both ways.
I mean, it's not that hard to understand why composition is to be preferred when you could easily just use composition instead of inheritance. It's just that people who don't want to think have been cargo-culting inheritance ever since they first heard about it, as they don't think much further than the first reuse of a method through inheritance.
No, it's not a complete replacement for inheritance.
Nor did I claim so.
Composition folks can get very dogmatic.
I have some data types (structs or objects), that I want to serialize, persist, and that they have some common attributes of behaviors.
In Swift I can have each object conform to Hashable, Identifiable, Codable, etc etc... and keep repeating the same stuff over and over, or just create a base DataObject and have the specific data object inherit it.
In Swift you can do it with both protocols (and extensions of them), but after a while they start looking exactly like object inheritance, and nothing like composition.
Composition was preferred when many other languages didn't support object orientation out of the gate (think Ada, Lua, etc.), and tooling (IDEs) was primitive, but almost all modern languages do support it, and the tooling is insanely great.
Composition is great when you have behaviour that can be widely different, depending on runtime conditions. But, when you keep repeating yourself over and over by adopting the same protocols, perhaps you need some inheritance.
The one negative of inheritance is that when you change some behaviour of a parent class, you need to do more refactoring, as there could be other classes that depend on it. But, again, with today's IDEs and tooling, that is a lot easier.
TLDR: Composition was preferred in a world where languages didn't support proper object inheritance out of the gate, and tooling and IDEs were still rudimentary.
> In Swift I can have each object conform to Hashable, Identifiable, Codable, etc etc... and keep repeating the same stuff over and over, or just create a base DataObject and have the specific data object inherit it.
But then if you need a DataObject with an extra field, suddenly you need to re-implement serialization and deserialization. This only saves time across classes with exactly the same fields.
I'd argue that the proper tool for recursively implementing behaviours like `Eq`, `Hashable`, or `(De)Serialize` are decorator macros, e.g. Java annotations, Rust's `derive`, or Swift's attached macros.
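For comparison, Python's closest analogue to a derive macro is the `dataclass` decorator generating the dunders for you (toy type):

```python
from dataclasses import dataclass

# The decorator generates __init__, __repr__, __eq__ (and __hash__,
# since frozen=True) instead of us inheriting them from a base class.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

p, q = Point(1, 2), Point(1, 2)
print(p == q)       # True: structural equality was derived
print(len({p, q}))  # 1: hashing was derived too
```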
Yes, all behaviors should be implemented like definitions in category theory: X behaves like a Y over the category of Zs, and you have to recursively unpack the definition of Y and Z through about 4-5 more layers before you have a concrete implementation.
I'll be honest here. I don't know if any comment on this thread is a joke.
There are valid reasons to want each one of the things described, and I really need to add type reflexivity to the set here. Looks like horizontal traits are a completely unsolved problem, because every type of program seems to favor a different implementation of it.
Another one is, that there are cases, where hierarchies simply don't work well. Platypus cases.
Another one is, that inheritance hides where stuff is actually implemented and it can be tedious to find out when unfamiliar with the code. It is very implicit in nature.
I think this is rather a rewriting of history to fit your narrative.

Fact is, at least one very modern language that is gaining in popularity doesn't have any inheritance, and seems to do just fine without it.
Many people still go about "solving" problems by making every noun a class, which is, frankly, a ridiculous methodology of not wanting to think much. This kind of has been addressed by Casey Muratori, who formulated it approximately like this: Making 1-to-1 mappings of things/hierarchies to hierarchies of classes/objects in the code. (https://inv.nadeko.net/watch?v=wo84LFzx5nI) This kind of representing things in the code has the programmer frequently adjusting the code and adding more specializations to it.
One silly example of this is the ever popular but terrible example of making "Car" a class and then subclassing that with various types of cars and then those by brands of cars etc. New brand of car appears on the market? Need to touch the code. New type of car? Need to touch the code. Something about regulations about what every car needs to have changes? Need to touch the code. This is exactly how it shouldn't be. Instead, one should be thinking of underlying concepts and how they could be represented so that they can either already deal with changes, or can be configured from configuration files and do not depend on the programmer adding yet another class.
Composition over inheritance is actually something, that people realized after the widespread over-use of inheritance, not the other way around, and not because of language deficiencies either. The problems with inheritance are not merely previously bad IDE or editor support. The problems are, that in some cases it is bad design.
Each has its place. There's some things that inheritance makes possible, and some things that are best handled by composition. I use both, quite frequently.
It Depends™.
Composition can add a lot of complexity to a design, and give bugs a lot more corners to hide in, but inheritance can be such a clumsy tool, that it just shouldn't be used for some tasks.
That goes for almost everything in software. Becoming zealous about "The Only Correct Way" can be quite destructive.
I dunno. It's easy to say, "there are trade-offs, it depends" any time two things are compared, and it's never entirely untrue. However, sometimes one option is just generally worse than the other.
I'm not saying it's malpractice to use inheritance or anything, but it's a tool I definitely hesitate to reach for. Go and Rust left inheritance out entirely, and I'd say those languages are better off without it.
There’s definitely stuff that it enables. I’ve been writing software since before it was a thing, and it was almost magic, when I first learned about it.
I also saw why it fell from grace, but I already knew, by then, that it was no panacea. I learned, on my own, that composition was often a better pattern, and I learned that, back in the 1980s.
Not worth arguing about, but I do find absolutism to be almost offensive, and there’s a damn lot of that, in software development.
Rust removed inheritance only for the Rust ecosystem to generate some kind of half-inheritance system by sticking macros on everything. For every `implements Serializable`, Rust has a `#[derive(Serialize)]`. Superclasses are replaced by gluing the same combinations of traits together in what would otherwise be subclasses, with generic type guards.
The problems with bad design don't go away, they're just hidden out of plain view by taking away the keywords. Rust's solution is more powerful, but also leads to more unreadably dense code.
One clear example of this is UI libraries. Most UI libraries have some kind of inheritance structure in OO languages, but Rust doesn't have that capability. The end result is that you often end up with library-specific macros, a billion `derive`s, or some other form of code generation to copy/paste common implementations. Alternatively, Rust code just reuses C(++) code that needs some horrific unsafe{} pointer glue to force an inheritance shaped block down a trait shaped hole.
Java serialization is implemented with reflection, not inheritance. `implements Serializable` is just a marker which tells the serializer it's okay to serialize a class. Go serializes with reflection too, and there's no inheritance at all in that language.
> Rust's solution is more powerful, but also leads to more unreadably dense code.
Instead of reflection, Rust does serialization with code generation. Java does this too sometimes, in libraries like Lombok. The generated code is probably quite dense, but I expect the Java standard library reflection-based serialization code is also quite dense. In both cases, you don't have to read it. `implements Serializable` and `#[derive(Serialize)]` are both equally short. And the generated code for protobuf serialization (which I have read) is pretty readable.
But classes are a really crap way to share code, since you get the is-a relationship rather than the has-a relationship which is almost always what you want to express. Rust traits and derive macros are a much better abstraction for everything I've ever done when contrasted with classes.
Just FYI. I find that I do best, when I combine methodologies.
¿Por qué no los dos?
Yeah, it's like how OS war truces get proposed. "Depends entirely on the use case." Most of the time, the two arguing over Mac vs Windows or Linux distro A vs B have almost identical use cases.
I haven't intentionally used inheritance in forever, only in cases where some lib forces you to use it that way. Not because of some trend; it's just not something you naturally need.
Yeah, it's another thought-terminating cliche, and it's on my list!
https://h2.jaguarpaw.co.uk/posts/thought-terminating-cliches...
Sure, but "favor x over y" or, put another way, "use y only if x is unsuitable" is compatible with this. Nothing in "prefer composition over inheritance" says that composition is the only correct way.
No, but I often see folks use it as a bludgeon for dogma.
Same way I see it.
Objects and inheritance are good when you need big contracts. Functions are good when you want small contracts. Sometimes you want big contracts. Sometimes you want small contracts.
Sometimes the right answer is to mix and match.
Exactly. Inheritance and composition are two different axes of the implementation reuse problem, just as object-oriented programming and functional programming are two different axes of the expression problem. In both cases, you should want to have both axes available, because they do different things. I think saying "prefer composition over inheritance" makes about as much sense as "prefer the Y-axis over the X-axis in a graphics system": it's a statement that doesn't make sense on its own; it only makes sense in specific scenarios, like "... when making a document scroll".
I think the biggest "mistake" in object-oriented programming is the explanations by analogy that many people advocate. A lot of times, people attempt to use a taxonomic metaphor like the tree of life—"a dog is-a mammal" sort of stuff. As a model it falls apart because even the real world tree of life is a flawed model that doesn't fully capture the complexity of life in the way most lay people assume it does. Try getting a random person to justify a platypus being a mammal. Without specific training in biology they stumble. And most computer scientists attempting to employ this analogy are definitely broadly ignorant about biology. You can see it because a lot of the examples people employ aren't even consistent with the metaphor. You're just as likely to see people say things like, "A dog is-a four-legged animal". I think it's an extremely harmful didactic path down which to start.
A lot of these problems happen because most people don't have a good handle on graph theory. They don't understand when they are trying to force a graph with cycles into a tree. Trees are easy for people to understand and handle, graphs with their pesky cycles are much harder, so I get the appeal. But what people come to call "tech debt" or "degenerate edge cases" are really evidence that an inappropriate model was employed early in development.
In real world examples, you'll see object oriented programming and functional programming as well as inheritance and composition used extensively and successfully. I think GUI libraries are a good example here. Buttons and text boxes both inherit from a control base class. This pattern is pervasive and long standing. But you naturally shouldn't usually[1] try to make a form inherit from control as they are much more appropriately compositions of controls.
[1] I say "shouldn't usually" here because some common subforms can be useful encapsulated as a control for composition into other forms, e.g. an address entry form embedded into a user profile form.
One of the lesser known features in Kotlin is interface delegation. This lets you get away with doing multi class inheritance via composition of a class with a delegate. This kind of blurs the boundaries between inheritance and composition in a useful way.
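A minimal sketch of what such delegation might look like (the class name and `_list` property are illustrative, reconstructed from the description below):

```kotlin
// Foo implements List<String> by forwarding every List member to _list.
class Foo(private val _list: MutableList<String> = mutableListOf()) : List<String> by _list {
    // Individual members can still be overridden, consulting the delegate.
    override fun isEmpty(): Boolean = _list.isEmpty()

    // Only the mutations we choose are exposed; _list's state stays private.
    fun add(item: String) { _list.add(item) }
}
```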
Here Foo has a _list property that it delegates the implementation of list operations to. You can even do function overrides in the class and interact with the delegate via the _list property. However, messing with internal list state is off limits (a problem with inheritance). Like Java, Kotlin supports single class inheritance, but this provides a way out.
When I was researching this stuff in the nineties, I came across some papers about role-based programming by a Norwegian called Trygve Reenskaug. That shaped a lot of my thinking on this topic.
Modern Kotlin and Java look a lot like what he proposed: small interfaces (roles) and classes that implement multiple of these, whose objects can play those roles in different contexts. Go's duck typing (having the operations means it implements the interface) is also cool for this. Traits, mixins, etc. are all variations on this topic that you can find in other languages. JavaScript is actually a really interesting language since it is a prototype-based language (inspired by a long forgotten language called Self). It did not have classes for a long time (that's a recent syntactic addition); you create new objects by copying old ones. And since it is dynamically typed, it has no need for interfaces either.
I really like the idea of role based programming / mixins. I think it does not get enough attention.
[1]I know only of some programming languages that even call it roles.
To be honest I always get confused by the difference between interfaces and roles. For me it was always something like an interface/behavior that can be mixed in at runtime.
[1]https://docs.raku.org/language/objects#Roles
That idea first became visible in OOP systems like COM, which exposes that capability depending on the language, or, to use the more recent WinRT term, the language projection.
Since COM only allows for interface inheritance, unlike SOM from OS/2 which also did classes, the way to avoid implementing all members from scratch is to compose and delegate all unmodified methods, while implementing only the new ones in an extended interface.
MFC, ATL, VB and Delphi provided some mechanisms to make this easier, naturally not with the same ease as Kotlin.
By the way, the same concept is available in Groovy, with @Delegate annotation.
Though it's important to add that composition is not a complete replacement for inheritance, see the Self problem. (The Manifold project has a good description on it).
I once worked with a library that had such a deep inheritance tree, built only for ontological purposes, that I was always confused as to where anything was actually implemented. I decided to squash the layers and found almost every method was overridden two or three times.
That was the project that turned me against inheritance. It was 2009, and the project was written in Java 1.4.
What I like about the modern¹ approach (interfaces + composition) is that it cleanly untangles polymorphism from behaviour-sharing.
When you inherit from a parent class, you have to be careful to only override methods in ways that the parent expects, so the parent's invariants aren't broken². There's a whole additional set of keywords (private/protected/final) meant to express these parent-child contracts. With interfaces + composition, those are unnecessary: You can compose an object and use it however you want; then, if you want the wrapper object to uphold the inner object's contract, you can additionally implement an interface to formalize that. The behaviours you use and the polymorphic guarantees you make are totally separate.
Inheritance mixes these ideas together and ends up worse for it. Not only is the modern approach simpler, it's more powerful: (1) Polymorphic extension (i.e. extending a polymorphic parent class at runtime) is doable, and (2) multiple inheritance is a non-issue.
[1]: I call it "modern" because newer languages like Go and Rust have eliminated inheritance in favour of exclusively using interfaces/traits.
[2]: See the fragile base class problem.
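The interfaces-plus-composition approach described above might be sketched like this (all names here are hypothetical, not from the original comment):

```kotlin
// Logger is the polymorphic contract; Sink is behaviour reused by
// composition rather than inheritance.
interface Logger { fun log(msg: String): String }

class Sink {
    // Nothing virtual, no subclass hooks, no invariants a child can break.
    fun write(s: String) = "wrote: $s"
}

class TimestampLogger(private val sink: Sink) : Logger {
    // Composition: we call Sink however we like. Implementing Logger is a
    // separate, explicit promise, kept apart from the behaviour we borrow.
    override fun log(msg: String) = sink.write("[t] $msg")
}
```

The behaviour being reused (Sink) and the polymorphic guarantee being made (Logger) never touch each other, which is the untangling the comment describes.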
> modern¹ approach (interfaces + composition)
Smalltalk protocols and message categories were a step toward this (for example you could classify messages as implementing a particular interface, such as the collection or stream protocols), but Smalltalk lacked the type and interface checking supported by Java and other languages.
So "Adding Dynamic Interfaces to Smalltalk"
https://www.jot.fm/issues/issue_2002_05/article1/index.html
"Composition" is a word that can mean several things, and without having read the original source I never really understood which version they mean. As a rule, I've always viewed "composition" as "gluing together things that don't necessarily know about each other", and that definition works well enough, but that doesn't necessarily eliminate inheritance.
So then I start thinking in less-useful, more abstract definitions, like "inheritance is vertical, composition is horizontal", but of course that doesn't really mean anything.
And at some point, it seems like I just end up defining "composition" to mean "gluing together in a way that's not inheritance". Again, not really a useful definition.
I find the Monoid/Semigroup typeclass pretty concisely captures what is generally meant by "composition" in the minimal sense.
> As a rule, I've always viewed "composition" as "gluing together things that don't necessarily know about each other"
The extension to this definition given the context of Monoids would be "combining two things of the same type such that they produce a new thing of the same type". The most trivial example of this is adding integers, but a more practical example is function composition, where two functions can be combined to create a new function. You can also think of an abstraction that lets you combine two web components to create a new one, combining two AI agents to make a new one, etc.
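The "same type in, same type out" idea can be sketched with function composition (the `compose` helper here is illustrative, not from the original comment):

```kotlin
// Combining two functions yields a new function of the same shape,
// just as adding two Ints yields another Int.
fun <A, B, C> compose(f: (A) -> B, g: (B) -> C): (A) -> C = { x -> g(f(x)) }

val double = { n: Int -> n * 2 }
val increment = { n: Int -> n + 1 }
val doubleThenIncrement = compose(double, increment)
```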
> "inheritance is vertical, composition is horizontal", but of course that doesn't really mean anything.
This can actually be clearly defined: what you're hinting at is the distinction between sum types and product types, the latter of which describes inheritance. The problem with restricting yourself to only product types is that you can only add things to an existing thing, but in real life that rarely makes sense, and you will find yourself backed into a corner. Sum types let you have much more flexibility, which in turn makes it easier to implement truly composable systems.
I actually knew most of that (I've done a lot of Haskell). I don't really disagree with what you said, but I feel like you eliminate a lot of stuff that people would consider "composition" but that isn't as easily classified in happy categories.
For example, a channel-based system like what Go or Clojure has; to me that is pretty clearly "composition", but I'm not 100% sure how you'd fully express something like that with categories; you could use something like a continuation monad but I think that loses a bit because the actual "channel" object has separate intrinsic value.
In Clojure, there's a "compose" function `comp` [1], which is regular `f(g(x))` composition, but let's suppose instead I had functions `f` and `g` running in separate threads that synchronize on a channel (using core.async)? Is that still composition? There are two different things that can result in a very similar output, and both of which are considered by some to be composition. So which one of these should I "prefer" instead of inheritance?
Of course this is the realm of Pi Calculus or CSP if you want to go into theory, but I'm saying that I don't think that there's a "one definition to rule them all" for composition.
[1] https://clojuredocs.org/clojure.core/comp
I think there's still a category theoretic expression of this, but it's not necessarily easy to capture in language type systems.
The notion of `f` producing a lazy sequence of values, `g` consuming them, and possibly that construct getting built up into some closed set of structures - (e.g. sequences, or trees, or if you like dags).
I've only read a smattering of Pi theory, but if I remember correctly it concerns itself more with the behaviour of `f` and `g`, and more generally bridging between local behavioural descriptions of components like `f` and `g` and the global behaviour of a heterogeneous system that is composed of some arbitrary graph of those sending messages to each other.
I'm getting a bit beyond my depth here, but it feels like Pi theory leans more towards operational semantics for reasoning about asynchronicity and something like category theory / monads / arrows and related concepts lean more towards reasoning about combinatorial algebras of computational models.
The thing about inheritance is it limits you to one relation. Composition is not a single relation but an entire class of relations. The user above mentioned monoids. That is one very common composition that is omnipresent in computation and yet completely glossed over in most programming languages.
But there are other compositions. In particular, for something like process connection, the language of arrows or Cartesian categories is appropriate for modeling the choices. The actual implementation is another story.
In general, when you want to model something, you first need to decide on the objects and then on the relations between those objects. Inheritance is one such relation, and there's no need for it to be treated specially. You will find, though, that very few objects actually fit any model of inheritance, while many have obvious algebras that are more natural to use.
"Gluing together in a way that's not inheritance" is useful enough by itself. Most class hierarchies are wrong, and even when they're right, people tend to implement the latest and greatest feature by mucking with the hierarchy in a way which generates wrongness, mostly because, given a hierarchy, it's substantially easier to implement the feature that way. Inheritance as a way of sharing code is dangerous.
The thing composition does differently is to prevent the effects of the software you're depending on from bleeding further downstream and to make it more explicit which features of the code you're using you actually care about.
Inheritance has a place, but IME that place is far from any code I'm going to be shackled to maintaining. It's a sometimes-necessary evil rather than a go-to pattern (or, in some people's books, that would make it a pattern like "go-to").
I don't think that it really is a useful enough definition. There are lots of ways to glue things together that aren't inheritance that are very different from each other.
I could compose functions together like the Haskell `.`, which does the regular f(g(x)), and I don't think anyone disputes that that is composition, but suppose I have an Erlang-style message passing system between two processes? This is still gluing stuff together in a way that is not inheritance, but it's very different than Haskell's `.`.
But both of those avoid the pitfalls of inheritance. "Othering" is a common phenomenon, and I think it's useful when creating an appropriate definition of composition.
But I don't think it's terribly useful; there are plenty of things that you could do that the people who coined the term would definitely not agree with.
Instead of inheritance, I could just copy and paste lots of different functions for different types. This would be different than inheritance but I don't think it would count as "composition", and it's certainly not something you should "prefer".
That's fair. I'd agree that isn't composition. I'm not sure the thing you describe is worse than inheritance.... It's not composition though.
> Most class hierarchies are wrong
One of the most damaging things is when they teach inheritance like "a Circle is a Shape, a Rectangle is a Shape, a Square is a Rectangle" kind of thing. The problem is the real world is exceedingly rarely truly hierarchical. Too many people see inheritance as a way to model their domain, and this is doomed to failure.
Where it works is when you invent the hierarchy. Like a GUI toolkit or games. It's hierarchical because you made it hierarchical. In my experience the applications where it really works you can count on one hand, whereas the vast majority of code written is business software for which it doesn't really.
I have always heard "prefer composition to inheritance" also referred to as "has a" instead of "is a." Meaning:
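A sketch of that "has a" versus "is a" contrast (the class names are illustrative):

```kotlin
class Engine { fun start() = "vroom" }

// "has a": Car owns an Engine and forwards to it.
class Car(private val engine: Engine = Engine()) {
    fun start() = engine.start()
}
// ...as opposed to "is a" (class Car : Engine()), which would claim a
// Car is a kind of Engine; that is the relationship inheritance asserts.
```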
Yep. "Composition" has many meanings, but in the context of "inheritance vs. composition" it's just referring to "x has a y".
I've been building gui applications for the past 20 years and I couldn't imagine doing it without an inheritance model. There's so much scaffolding needed to build components and combine them into a working view. Sure inheritance can be bad in the data layer because you don't want to handcuff yourself to bad data expectations. But building out views and view controllers, there's a lot of logic you don't want to keep duplicating every time.
Guess what, lots of people have been building GUI applications without views, much less view controllers, for longer than that. Including Squeak, with Morphic.
And yet somehow the Zed team managed to do it with gpui and rust.
https://github.com/zed-industries/zed/blob/main/crates/gpui/...
GPUI is a great example of the insane amount of boilerplate needed to create a component when you don't have inheritance.
My guess is that people don't create a lot of individual components in this framework to handle different business cases, and instead overload a single text input component with a million different options. I would hate to untangle a mature app written under those conditions.
My personal preference for composition over inheritance is that it forces callers to call the owned-object’s methods directly rather than automatically through inheritance.
There is more typing/boilerplate but when you read the class file you get a full picture of what’s happening rather than some parts happening automatically in a different file.
I like to say that code should be written with a reader bias: the singular writer should do more work if it makes the class more obvious for the multitude of readers. I feel like composition is a good example of that.
I call it "read-optimized code". Inheritance is biased toward conservative writing. Once your mind becomes so enmeshed with the code base that you can no longer fathom a future where you might fall out of sync with it, inheritance becomes extremely appealing. It's all in your head! You pull the Razzle parent, sprinkle a bit of Dazzle mixin, everything is alchemized into a Fizzle class and abracadabra. Meanwhile newbies in the team have their eyes welling up from having to deal with your declarative mess.
That’s a great term and I’ll probably use it instead of mine. Thanks!
The way I like to phrase it: concretion over indirection.
It's been settled multiple times that a relational DB is your default choice, as opposed to an object DB. Feels like the same lesson applies to OOP. Objects are ok when you have a simple bag of properties, but otherwise begin to distract from what you really want to model. And I guess composition is more analogous to relations.
> That points to a deficiency in the “composition over inheritance” aphorism: those aren’t the only two games in town. If you have procedures as first-class types (like blocks in Smalltalk, or lambdas in many languages), then you might prefer those over composition or inheritance.
First-class procedures/functions are a form of composition. Requiring a function type behaves like requiring an interface/class type with only one method. (In languages like F#, `Func<_,_>` is literally defined internally as an abstract class with one method, `Invoke`, although there are other mechanisms to auto-inline lambdas when enough static information is available to do so.) In either case, you can place it into a field of an object or data structure, or pass it directly as an argument to a function/method.
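The equivalence can be sketched in Kotlin, where a `fun interface` makes the one-method-interface view explicit (names here are illustrative):

```kotlin
// A function type behaves like an interface with exactly one method.
fun interface Pricer { fun price(qty: Int): Double }

// Requiring the interface and requiring the function type are
// interchangeable from the caller's point of view:
fun totalViaInterface(qty: Int, p: Pricer): Double = p.price(qty)
fun totalViaFunction(qty: Int, p: (Int) -> Double): Double = p(qty)
```

A lambda satisfies either signature, which is exactly the "one-method interface" behaviour the comment describes.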
There's a chapter in Effective Java that deals with this, and it cites https://www.cs.tufts.edu/comp/150CBD/readings/snyder86encaps... so at least as early as 1986
When I first took an object-oriented programming class it was all about inheritance, so that's what I tried to use for everything. Then I started writing real programs, realized that inheritance sucked, and finally found the succinct "Favor composition over inheritance".
In mainstream/SV coding, I would say the scales just barely tipped toward composition in the late 10s... There are plenty of programmers still completely oblivious, the inertia is huge. Plus the swing back is too strong, inheritance is very powerful, just not as generic as originally thought.
I think inheritance got a bad name due to abuse of multiple inheritance and overly fragile base classes in C++ (and maybe Java) codebases of the 90s and early 00s.
It's mentally satisfying to create a beautiful class hierarchy that perfectly compresses the logic with no repetition, but I think long-term readability, maintainability, and extensibility are much better when inheritance is avoided in favor of flat interfaces. (Also easier to turn into RPCs, as all the overcomplicated object-RPC things of the 90s were put to bed.)
The effect RPCs had on classes and inheritance really can't be overstated.
While in theory it should be straightforward to ship instance state over a wire, in practice most languages have no built-in support for it (or the support is extremely poor in the general case; I remember my first experiments with trying to ship raw Java objects over the wire using the standard library tools back in the early 2000s, and boy was that incredibly inefficient). Additionally, the ability to attach arbitrary methods to instances in some languages really complicates the story, and I think fundamentally people are coming around to the idea that the wire itself is something you have to be able to inspect and debug so being able to understand the structure in transit on the wire is extremely important.
Classes and their inheritance rules make exactly the wrong things implicit for this use case.
I never liked inheritance. It seems like something that works well in a world where you assume things don’t evolve rapidly. It also feels like it adds mental debt—every new thing needs to comply with old things to stay compatible. Every update has to take into account how old components are working. Probably, the static nature helps big teams and big companies. But I’ve found that some duplicated code is way easier to deal with, especially now that LLMs can generate new code so quickly.
It really helps me to think of it all as extensive metaphors. Math included. The point is to tell an active story using symbols as metaphorical representations of something. With a lot of assumed language implied (through teachings) by choices of naming things. (As a fun example, don't focus on the name Algebraic if you aren't going to lean in on grade school algebra for things.)
That said, I think this is also a good way to approach framing things. Agreed that the idea of "prefer composition" is often a thought termination trick. Instead, try them both! The entire point of preferring one technique over the other is that it is felt to give more workable solutions. If you don't even know what the worked solution would look like with the other technique, you should consider trying it. Not with a precommitment that you will make it work; but to see what it illuminates on the ideas.
I have been using inheritance for 15 years, and have sometimes regretted it and sometimes loved it.
It does have actual benefits if you can limit its usage and don't use the full insanity that languages like C++ allow.
I generally dismiss people who tell you to always use composition over inheritance without first understanding the problem space and how it could be modeled.
The split between inheritance-heavy OOP and composition-first OOP really just reflects how software design has shifted toward approaches that handle change better. Inheritance still solves real problems, but for most modern, fast-changing systems, composition usually offers a smoother path.
Of course, developers mix and match depending on what the situation calls for. But knowing how these two mindsets differ can make it easier to build code that stays clean and easy to evolve.
And as programming continues to pull ideas from functional, reactive, and declarative styles, the compositional way of thinking will probably stay right at the heart of how we approach object-oriented design.
https://d1gesto.blogspot.com/2025/05/the-split-in-oop-compos...
I went out of my way to implement inheritance and then make it multiple. Of course I'm going to use it.
Inheritance is just a more deeply integrated form of composition which puts the inherited parts on equal footing with the new parts.
That reduces certain indirections and frictions, which is sometimes useful when making things out of other things.
Composition is ultimately more flexible and less constraining than inheritance. It reflects a practical approach of just using the types/classes you need, without having to adopt some project wide OO religion or design philosophy.
With C++, no-one needs to be told (even if good advice) to "favor composition over inheritance" - I think most people who have worked with the language for long enough on large enough projects will end up realizing for themselves that this is generally the preferred approach. Inheritance is a specialized tool, best reserved for specialized use cases.
It's a bit of a shame that the original C++0x "Concepts" were never adopted (a lighter version only landed in C++20), since I think compile-time polymorphism is often all that is really wanted: a compile-time guarantee that two classes will provide the same interface, without forcing them to be related by inheritance.
In a way, this is similar to tags vs folders.
Folders are a hierarchical way of organizing, akin to inheritance, and tags are a compositional way of organizing.
I'm kind of waiting for any language to invent some sort of #hashtag interfaces to define contracts :)
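Small interfaces already get you most of the way toward "tags": a class can carry several of them, and callers depend only on the tag they need. A sketch (hypothetical names):

```kotlin
// Small interfaces used like tags; one class can carry several.
interface Printable { fun render(): String }
interface Persistable { fun key(): String }

class Invoice(private val id: String) : Printable, Persistable {
    override fun render() = "Invoice $id"
    override fun key() = "invoice:$id"
}
```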
There are days I hate the mapping of plain English terms of art over actual in-language effects.
Considering sets, if something is, in set terms a specific subset with a defining membership or characteristic of a definable superset, representing that at compile time effects a hard constraint which honours the set Venn diagram.
If that set/subset constraint doesn't exist then you have to ask yourself if applying a compile time constraint is appropriate.
Hierarchy (and thus "inheritance") is a way to express that several different things share the same quality. They are different in general, but same in some way. It is a very natural way for people to express such a thing and no wonder it is so widespread. But it is not the only way nor the general way, of course.
Composition is not an opposite to inheritance. An opposite would be something like:
Or, if the body of the method is the same ("a parent method"): here we do not give A and B places in the hierarchy but merely say they respond to the same message, or even that the procedure is the same.
I do not know if any meaningful and systematic alternative to a hierarchical way exists in any programming notations. An interface spec is a partial way, but that's all. (I know only a few notations, of course.)
In Eiffel we have multiple inheritance. It's such a powerful tool, and a natural way to model the world. For example, think of your typical OOP book. You have Vehicles with engines:
* cars that move on roads
* planes that move through the air
* boats that move on water
But then comes an aqua-plane and it breaks your inheritance tree!
But with multiple inheritance it is the most natural thing to have a plane that is also a boat and a car.
In Eiffel we favor the appropriate tool that better represents the world.
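In languages without Eiffel-style multiple class inheritance, interfaces with default bodies give a rough approximation of the same modelling (a Kotlin sketch, not Eiffel):

```kotlin
// Interfaces with default method bodies approximate multiple
// inheritance of behaviour.
interface Plane { fun fly() = "flying" }
interface Boat { fun sail() = "sailing" }

class AquaPlane : Plane, Boat   // one type playing both roles
```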
There's some OO design fallacy here but not sure what to call it.
The reason the aqua-plane broke the inheritance tree is because the modeling is being done backwards.
Objects should be defined by behaviour first and only incidentally contain whatever state is required to support that behaviour.
A well designed object is much more similar to a closure than it is to a data structure.
Coming from C++ and C#, I think interface inheritance is good, but code inheritance is bad; I always try to avoid it. The only times I need to use code inheritance are when I have to use framework classes with bugs or broken behavior I have to repair. E.g., the Label control in C# copies its text to the clipboard on double-click in Windows Forms.
Yeah inheritance is just not the point of OO. It’s fine but it’s not what’s really useful.
What is the point of OO?
Encapsulation
The main point is the same as the Dewey Decimal System. Keep things tucked away yet findable. Make a huge code base useful to people who didn't write it themselves.
Then they find inheritance is actually worse at describing the concept. With composition you no longer need to implement whatever interface and bridge the implementation by proxy. You are also not limited to what the parent class has (while you can still add to children all the components the parent has, if you need). Interfaces and proxies are just composition, but worse, in my opinion.
Great article. I thought it was going to be yet another one looking at recent trends; however, it actually dives into the history of how it came to be, speaking as someone who started learning OOP with Turbo Pascal 5.5 and Clipper 5, before other OOP languages.
Certainly not second born musician sons under primogeniture.
You can get Liskov from interfaces too. I rarely (like once in a career) need inheritance.
When we realized object models were an anti-pattern. Abstract base classes or just regular class hierarchies inherently create tightly-coupled structures. An eventual maintenance nightmare.
Modularization was the core principle of DDD and it still holds up 20 years later.
When someone realized that the inheritance glass castle is doomed to always get shattered upon contact with the real world.
Inheritance might be OK for formally finite domains but I can’t envision other cases where it should be favored.
Do you dislike type inheritance? Or only implementation inheritance? My view is that type inheritance is incredibly useful, both for single system programming, and rpc. Whereas implementation inheritance creates brittle systems.
Looks like this one was reupped from a week or so ago, there was another submission with three comments too:
https://news.ycombinator.com/item?id=45845505
I found the following video from CodeAesthetic explains this concept really well.
https://youtu.be/hxGOiiR9ZKg
The article seems to be digging into justifications for using inheritance. One thing I've heard that seems to work: inheritance is OK for interfaces but usually not good for implementations.
I’ll be honest. I don’t really understand the point of this article. Maybe that’s just a preference thing. The philosophy behind these abstractions is the least interesting part of the question for me. What problems do these various methods of polymorphism solve and create? What solutions do they enable or prevent? That’s the only part that matters. But citing some discussion about the philosophy behind the theory from 40 years ago is not particularly enlightening. Not because it’s not relevant. But because we have 40 years more experience now and dozens of new languages that have different takes on this topic. What has been learned and what has been discovered?
I usually think of the ideas behind "composition" as "how do I assist a future developer to replace the current (exported) implementation of a type with a new one by restricting external visibility of its internal implementation through the use of private methods and data".
In "inheritance", it often feels like the programmer's mindset is static, along the lines of "here is a deep relationship that I discovered between 2 seemingly unrelated types", which ends up being frozen in time. For example, a later developer might want to make a subtle innovation to the base type; it can be quite frightening to see how this flows through the "derived" types without any explicit indication.
Of course, YMMV, but I think of "composition" as "support change" and "inheritance" as "we found the 'correct way to think about this' and changes can be quite difficult".
Since I think that the key to building large systems handling complex requirements is 'how do we support disciplined change in the future' (empowering intellectual contributions by later generations of developers rather than just drudge maintenance).
> This contrasts inheritance as a “white box” form of reuse, because the inheriting class has full visibility over the implementation details of the inherited class; with composition as a “black box” form of reuse, because the composing object only has access to the interface of the constituent object.
So, we just need devs to stop trying to be overly clever? I can get behind that, “clever” devs are just awful to work with.
In 2006 when changing code that had lots of inheritance.
The only time I use inheritance is when I have an abstract base class, and several flavours of subtypes, all sealed.
Inheritance is not a fundamental concept of anything. Inheritance is just composition with syntactic sugar. The semantic meaning was always composition.
OOP is a mistake. Rust's and Python's explicit self passing, turning the dot operator into simple syntactic sugar, is the correct approach. We should just stop teaching everything related to this in universities and go back to fundamentals.
Implementation inheritance is not just composition. Composition on its own does not allow for open recursion (implementing methods that were called on a base class in a derived class, via an in-built dispatch step), whereas inheritance does.
A virtual table and virtual dispatch are orthogonal to inheritance. Haskell lets you do the former without the latter. I agree that syntactic sugar for virtual dispatch is a nice language feature, because it is tedious to do by hand.
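The open recursion mentioned above can be sketched in a few lines (hypothetical class names): a method defined on the base class dispatches back into whatever override the subclass provides, which plain composition does not do.

```kotlin
open class Base {
    open fun step() = "base step"
    // run() is defined here, but step() is dispatched at runtime, so it
    // may land in a subclass: that callback is open recursion. A wrapped
    // object under composition would never be re-dispatched this way.
    fun run() = "run -> " + step()
}

class Derived : Base() {
    override fun step() = "derived step"
}
```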
Is your code simple? Then use whatever helps you finish it fast and rewrite later if needed. Or is it complicated? Then don't rely on any canned advice. If you are implementing a virtual machine on an embedded chip, maybe parallel arrays and gotos are the way to go, nobody except you knows. Everything else is just overpaid senior architects trying to justify their own existence by not allowing working code to be merged.
Grepping "extends" over a new codebase is a quick way to see how fucked you are when joining a new project/team.
I am always bemused when I see articles like these. Do people not have an understanding of fundamental software engineering principles from OGs like Parnas/Liskov/etc.?
The fundamental idea is that of Abstraction which can be defined as the discovery/invention of "higher-level concepts" from more primitive "lower-level concepts" and then reasoning and manipulating at the higher-level. This abstraction is based on structure and/or behavioural attributes.
In order to manage the complexity inherent in the building of large systems certain fundamental aspects were identified as highly desirable. They are Separation-Of-Concerns, Modularization, Reuse and Information-Hiding.
The crucial point to understand is that Abstraction does not imply any of the above aspects! A good example is mathematical abstraction. But because for Software we desire the above aspects for our system-as-a-whole, we learn to combine them with our Abstractions. This is why we have so many different styles of Programming (i.e. Imperative/OO/Functional/Logic/etc.).
Viewed in the above light the relation between Inheritance and Composition becomes clear. They are just different ways of emphasizing different combinations of the above aspects for your abstractions based on your design needs.
References:
1) Software Fundamentals: Collected Papers by David L. Parnas.
2) Program Development in Java: Abstraction, Specification, and Object-Oriented Design by Barbara Liskov and John Guttag.
3) Multi-Paradigm Design for C++ by James Coplien.
How about not favoring anything? There are many paradigms and each one has its place. Frankly, I do not really understand why developers fight these religious wars about languages, frameworks, etc.
> There are many paradigms and each one has its place.
That's a thought-terminating cliché. The argument against inheritance has been laid out pretty clearly. It's reasonable to rebut that argument. It's not reasonable to say, "you shouldn't criticize inheritance because Everything Has Its Place." Everything does not have its place. Sometimes we discover that something is harmful and we just stop using it.
Em.. I’m quite nitpicky and want to do the opposite of “thought-terminating”.
I’m for encouraging best practice, but most things do have their place. I present to this court two examples: “premature optimisation is the root of all evil” and “goto statement considered harmful”.
Both are well accepted as things that should be avoided for good reasons (incl., but not limited to, preserving the sanity of coworkers).
But both definitely “have their place”. The first one’s place is legitimized (with nuance) by the author himself in the second part of the same sentence. The latter one (goto) is routinely used by Linux devs (random example: https://github.com/torvalds/linux/blob/master/fs/ext4/balloc...)
> we just stop using it.

We minimise/restrict the usage.
> Sometimes we discover that something is harmful and we just stop using it.
And that is not remotely the case here. So yeah, there are many paradigms and each has its place.
> And that is not remotely the case here.
Isn't it? People have written extensively about why we should prefer composition to inheritance, and you haven't mounted any defence of inheritance beyond the thought-terminating cliché that it "has its place."
- Wording uses “prefer”, not “forbid”.
- (Java) Least interesting examples to rebut “never”: exceptions, interfaces.
- (Java) Inheritance is used by active and successful projects (e.g. JUnit 5, the Spring framework). I would argue that success is a pragmatic vindication criterion for a tool/technology.
True; I suppose I could concede the idea that inheritance has its place if we recognize that that place is quite small and out-of-the-way. My problem is that "everything has its place," without any qualifications, is effectively a blank cheque to use inheritance anywhere and then just go, "well that was its place."
Interfaces are great; I wouldn't consider them inheritance.
Sure, good stuff has been written with inheritance, but good stuff has been written with C, and that doesn't make C unproblematic. If Postgres were being written today, the authors would probably choose something other than C—we just have better, safer languages for that kind of work now.
I use both, choosing what I believe is appropriate for each particular case.
Frankly, I do not give a rat's ass about what "People have written extensively". From what I read, most of it sounds like it was spoken by a politician: look Jimmy, someone can do a bad thing with it. Well, fuckin don't do a bad thing.
So much over very simple and primitive thing: John HAS a key vs dog IS an animal. Both are valid and proper.
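The two relationships mentioned above can be sketched directly (a hypothetical illustration; the class names just mirror the comment's examples):

```python
class Animal:
    def breathe(self):
        return "breathing"

class Dog(Animal):  # "is-a": a dog IS an animal -> inheritance
    pass

class Key:
    pass

class Person:  # "has-a": John HAS a key -> composition
    def __init__(self, key):
        self.key = key

john = Person(Key())
assert isinstance(Dog(), Animal)   # Dog participates in Animal's interface
assert isinstance(john.key, Key)   # Person merely holds a Key
```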
> "you haven't mounted any defense"

Why would I bother? It does not need a defense. It is like saying do not use Java because it encourages FactoryFactoryFactory, 20 levels of abstraction, etc. Well, it does not. Architecture astronauts do, and I am not one of those.
> So much over very simple and primitive thing: John HAS a key vs dog IS an animal. Both are valid and proper.
I don't think so. "Having" vs "being" are descended from an overly simulationist notion of program design. The fact that John has a key in real life does not suggest that this relationship should be represented by an object John which owns an object Key. I think this kind of ontological approach is behind a lot of bad object-oriented design.
> Architecture astronauts do it and I am not one of those
This is the same rationale used to defend memory-unsafe languages. I like that as a point of comparison because we can actually measure the relationship between the use of memory-unsafe languages and the number of dangerous memory vulnerabilities that show up even in highly-scrutinized code bases like the Linux kernel. "I write good code" doesn't fly; bad code is getting written, and the tools we have to correct that are our languages and paradigms.
> Why would I bother. It does not need a defense.
If we take our craft seriously, we need to be able to discuss the merits and drawbacks of our tools without getting defensive and refusing to engage. I'm not saying you have to defend it to me—I'm just some guy online—but if you're disinterested in defending it in general, I think that's a craft issue.
> "you shouldn't criticize inheritance"

I was not talking about criticizing. Valid critique is useful and deserved. And this concerns composition as well as any other area. I was talking about crusades by programmers.
Gameplay logic inherently leans more towards composition, with a little hint of inheritance.
You can have players and monsters, which are all types of "characters" or "units", which is inheritance, but instead of having a separate FlyingPlayer and a separate FlyingMonster, which use the same code for flight, you could have a FlyingComponent, which is composition.
I've been going all in on composition and it's amazing for quickly implementing new gameplay ideas. For example, instead of a monolithic `Player` class you could have a `PlayerControlComponent` then you can move that between different characters to let the player control monsters, drones, etc.
Imagine instead of only Pac-Man being able to eat the pills, you could also give the ghosts the `PillEaterComponent` in some crazy special game modes :)
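A rough sketch of the component idea described above (the component names come from the comment; the `Entity` registry keyed by component type is an invented assumption, not any particular engine's API):

```python
class Entity:
    """A game object that is just a bag of components."""
    def __init__(self, name, *components):
        self.name = name
        self.components = {type(c): c for c in components}

    def has(self, component_type):
        return component_type in self.components

class PillEaterComponent:
    def eat(self, pill):
        return f"ate {pill}"

class PlayerControlComponent:
    pass

pacman = Entity("Pac-Man", PillEaterComponent(), PlayerControlComponent())

# In a crazy special game mode, a ghost gains pill-eating by composition,
# with no new GhostThatEatsPills subclass required:
blinky = Entity("Blinky", PillEaterComponent())

assert blinky.has(PillEaterComponent)
assert blinky.components[PillEaterComponent].eat("pill") == "ate pill"
```

Moving `PlayerControlComponent` from `pacman` to `blinky` at runtime is the "let the player control monsters" trick: it's just a dictionary move, not a change to any class hierarchy.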
I've also been fantasizing about a hypothetical language that is built from the ground up for coding gameplay, that doesn't use the word "class" at all but something else that could be a hybrid of inheritance+composition.
You can do the very same with inheritance, where `Player` and `Monster` inherit `Flying`
Entity Component System:
https://en.wikipedia.org/wiki/Entity_component_system
Yeah, but all current languages still have to wrangle ECS into an inheritance-first architecture: `class` etc.
Would be nice if something like Swift's "Protocols" could be used in a more dynamic way, at the code level.
It takes about 2-3 years of experience in current enterprise scale to deeply realize that inheritance fundamentally doesn't work.
It depends though. Learning which things don't actually work the way the textbook says is the key to leveling up from junior to senior. Some people never get it, some get it quickly.
If it's a car with extra wheels, do inheritance.
If you're adding a device for navigation that could be used by other things, go for composition.
Once upon a time inheritance was a way to compose classes out of pieces of orthogonal, general functionality.
Is-a Winged, TurbinePowered, Piloted, Aircraft, etc
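That mixin style can be sketched like this (the class names are taken from the comment; the method bodies are invented purely for illustration):

```python
class Winged:
    def lift(self):
        return "lift from wings"

class TurbinePowered:
    def thrust(self):
        return "thrust from turbines"

class Piloted:
    def crew(self):
        return 2

# One concrete type composed from orthogonal pieces of behavior
# via multiple inheritance.
class Aircraft(Winged, TurbinePowered, Piloted):
    pass

jet = Aircraft()
assert jet.lift() == "lift from wings"
assert jet.thrust() == "thrust from turbines"
assert jet.crew() == 2
```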
[dead]
When they put away childish things and read about the SOLID principles. Different time for every engineer.
SOLID is a childish thing, imo. Very undergrad.
"Single responsibility" isn't an especially useful yardstick. If you actually need to decompose a complex piece of logic into modules, the place to start is by identifying areas of high cohesion and separating them into loosely coupled functions. Ideally you can match those up to a DDD-style ubiquitous language, so your code will make intuitive sense to people familiar with the domain. "Does this have one responsibility?" really isn't the right question to ask.
The open-closed principle is straight-up wrong. Code should be easy to modify and easy to delete, and you only rarely need to add hooks for extensibility. Liskov substitution is fine, but it has more to do with correctness than cleanliness. Dependency inversion is a source of premature abstraction—you shouldn't open the door to polymorphism until you need to. Interface segregation is good, though.
In general, I think SOLID is overly enamoured with the features of object orientation. Objects themselves just aren't that big of a deal. It'd be like making the whole acronym about if-statements. If I were going to make a pithy acronym about legible code, it'd have more to say about statelessness, coupling, and unit tests. It'd reference Ousterhout's idea of deep modules, and maybe say something about "Parse, don't validate," or at least something against null values.
Thank you for taking the time to reply, instead of just hitting downvote. I feel like if we argued over a beer we’d probably end up agreeing on a lot of things. But let’s start by disagreeing. :-)
> "Does this have one responsibility?" really isn't the right question to ask.
It’s a great question to ask. As a senior engineer, the answer might be “no”, but there’s a vast difference between code where the answer is “no” because someone made a conscious choice, vs code where nobody even asked the question. Here’s the thing: a compiler and linker can join ten classes into a single executable, but even a senior engineer cannot look at a single class with ten responsibilities and figure out what the fuck is going on.

There’s a doc at my company that describes the core function of one particular service. The doc describes the simplest of systems, and so you would be surprised to learn that 1) it took me two years of working on the product before I could write it and 2) nobody knew. The reason it took two years was that there were 10 different pathways, and every pathway was just a giant implementation, each written differently, and each, ultimately, doing the exact same fucking thing. But you’d never be sure just by looking at the code. In fact it very much looked like each of these things had very specific things that they did differently.

Over two years, while also doing my job of keeping this thing running and adding features, I refactored the thing to be SOLID. In doing so, I demonstrated that they all do the exact same thing. We haven’t finished refactoring everything, but we do now test all the pathways with a parallel implementation that verifies 80 classes and 500 instances at runtime with one class and ten instances.
I work on software that you and most people on planet earth with at least a mobile phone are using in one way or another. I have made many pieces of this system better by evolving a clusterfuck of coupling into a system that is easy to reason about, maintain, and evolve - by applying SOLID principles.
I’m currently working on a package used by over 1,000 services. The most pain has been caused by previous iterations ignoring the open-closed principle. As you say, “easy to modify and delete”. A stronger rule, which perhaps you’re alluding to, is to not allow any extension at all, and just expose only interfaces. In that sense I could agree the open-closed principle is moot, but it’s moot for taking its own argument to the logical conclusion.
I am also a fan of DDD, and for the reasons you allude to: the second half of the book is more about communicating in a large engineering organization.