> It’s not bad because it’s OO, but that it’s very badly done OO
This argument seems to come up for every criticism of OO. The criticism is invalid because true OO would never do that. It seems like a No True Scotsman. Notably, when you whittle away all of the things that aren’t true OOP, you seem to be left with something that looks functional or data-oriented (something like idiomatic Go or Rust). There isn’t much remaining that might characterize it as a distinct paradigm.
My problem with these arguments is that they're meaningless if this "bad OO" (or "bad <insert paradigm>") is what most developers write.
It's a big problem I have with ORMs. Every time a discussion about what's bad about ORMs comes up, there's a multitude of people saying that it's just used wrong, written wrong or otherwise done wrong, yet that's basically my experience with every non-toy codebase I've ever worked on as part of a larger team. Telling me that it's just done wrong is useless because it's out of my hands. Is it the ORM's fault? I argue yes, because it encourages that kind of programming, even though it's not technically the ORM's fault.
I see it the same with OO or anything else. Is this "bad OO" the type of OO people tend to write? If yes, then OO is the problem. If no, then OO is not the problem.
Personally, I like a mixed approach where I use some OO and a lot of functional. I'm writing a little toy game in C++ from scratch and I have a very flat design, preferring functional approaches where I can, but I have a few objects and even a little bit of inheritance where it makes sense. Sometimes dealing with an object-based API is just more convenient and I won't shy away from using OO principles, but I also don't default to them just because either.
But every programming paradigm can be (and perhaps most often is) done badly. They each just create their own set of issues when "done badly". I think it just comes from not properly understanding why you're making certain choices. Deciding to use a design pattern is going to come with a certain set of strengths and weaknesses, so ideally you'd consider your problem and choose design patterns that make the most of the strengths and minimize the impact of the weaknesses. But I don't think it's very common to put that much thought into programming decisions, so people will often make a design decision, and then invest their effort into fighting the constraints it imposes, while usually failing to realize its potential benefits. OOP can be a perfectly suitable tool to solve a problem, but if you haven't thought about the actual reason why you want to use it, then you're just going to end up creating complicated abstractions that you're not properly benefitting from. ORMs can be great tools too, but if you want to use an ORM, you have to solve your problems using the approaches implemented by the ORM. If a programmer can't/won't do that, then they've chosen the wrong tool for their problem or for the approach they wanted to take to solve it.
Perhaps. I think a more likely explanation is that it's just not a very professional field. As a field it hasn't been around for very long at all, and it's changed a lot during that period. If it were obvious how to tell the difference between bad and good programming, then perhaps things would be different. But I don't think that's obvious at all, and I don't think it's possible to create any meaningful level of agreement in that area. Look at how hard it is to agree on any sort of standard. IETF RFCs take years to process, with the typical outcome being a standard that enough people agreed to publish, but that 0 people faithfully implement.
I have another opinion about that. Most of us don't work in product companies but rather as consultants or in IT enterprise environments. In such contexts the real objective of software development is often the improvement of a process that is enabled by the software. As such, the downsides of bad software are not immediately clear to any management layer above the first. Sometimes even the first layer of management doesn't really care until they get the process improvement they want. If the new software produces a 30% increase in sales, then the fact that it is badly written is of secondary importance: the objective has been met.
The fact that bad software quality will hurt maintenance in the medium to long term is often too far in the future for anybody to really care.
As software engineers, I have seen that we usually make two mistakes in these cases. We focus only on our craft without taking into consideration the real business objectives of the customer and what they really care about. We introduce unnecessary complexity just for the sake of it, often adding unnecessary abstractions. In short, we often commit the sin of believing that the software is the objective, while in most cases there is a business objective for which the software is just the enabler, and because of that we lose focus on what the right thing to do is.
In software product companies code is part of the final product and code quality directly impacts costs and the core business of the company.
I’d still say this is at least partially attributable to not having any real measurements of what “good” software is, at least beyond a very basic level.
If you want to build a bridge, I’d assume there’s a set of standards it has to comply with. If it doesn’t meet the spec you could explain in very objective terms why the quality isn’t sufficient. If you had to sit in front of the board and explain why you think a piece of software wasn’t built to a sufficient standard of quality, you’re going to really struggle to do that in objective terms. The best you could really hope to do is present a well articulated opinion. Anybody who disagrees with you could present their own opinion, and unless they have a high degree of technical competence, they’re not going to have any objective approach for making judgements about that.
Sure, lots of companies prioritize feature velocity over code quality. But I doubt most of them are very well informed about what quality compromises they’re actually making.
In some large internet-scale firms there were ARBs, Architectural Review Boards. They served this purpose by having very senior members of staff review proposed architecture and implementation approaches PRIOR to coding occurring. The danger was that over time, this sometimes devolved into the members waiting to just beat down the people presenting to them and being internally "political".
In other fields it can be easy to say when a job is done well. There can be clear guidelines. That said, other fields cut corners and do a “bad” job, too. Construction for example. It is probably possible to explain to average Joe why a damp proof course wasn’t installed properly or the insulation used is dangerous and flammable.
Programming as it exists now seems often in service of something else. So a deadline causes cut corners. So in this context a cut corner is fine and “we’ll fix it later”.
I suppose programming differs from a physical thing like construction. Sure you can replace the wiring and install a mezzanine but it’d be insanely impractical.
Construction is a prime example, especially Home Construction. Just look at the TikTok videos showing horrendous plumbing and electrical work.
Hell, I moved into a house built 3 years ago by a major firm and the ceiling fans were ALL wired wrong, and many of the electrical plugs are upside down.
This is a house that had a specific design model. My presumption is that unskilled labor was brought in to complete work and just did a shitty job.
You can see this sometimes with outsourced programming work or local work that is poorly designed and inadequately tested.
Maybe it's true, but that just shifts the blame and doesn't change anything or solve anything. Try as we might, "fixing" programmers just isn't realistic, so the only thing we can do is look for ways to improve the tools to make them encourage better practices.
Going back to the ORM example, I'm cool with just using SQL. The language encourages a style of thinking that fits with databases, and it being a separate language makes it clear that there's a boundary there, that it's not the same as application code. However, I'd also be ok with an ORM that enforced this boundary and clearly separated query logic from application logic.
As for OOP, I don't know what the solution there is. Maybe there isn't one. I like to use OOP, but sparingly, and at least in my personal code (where I have control over this), it's worked out really well. A lot of code is transforming data structures and a functional approach maps really well to this, but for overall architecture/systems, and even for some small things that just map well to objects, it's great to have OOP too. In my toy game, I use inheritance to conveniently create external (dll/so) modules, but inside the engine, most data is transformed in a functional style. It's working quite well. I'm not sure how you could redesign the paradigms to encourage this though, outside of designing languages to emphasize these things.
> The language encourages a style of thinking that fits with databases, and it being a separate language makes it clear that there's a boundary there, that it's not the same as application code.
The separation is orthogonal to the data access style used, and you really have to make sure the engineers you're working with understand separation of concerns. I have seen many applications with controllers filled with loads of raw SQL, just as I have seen them filled with lots of ORM query building. If the programmers don't get why that's bad, they will do the wrong thing with whatever tools they have in front of them.
    # pseudocode: one extra query per matching frob, the classic N+1 pattern
    results = []
    for frob in Frob.get_all(where=something):
        if frob.foo == expected:
            results.append({
                "frob": frob,
                "quux": Quux.get_all(where=some_query_using_frob),
            })
Basically, the idea is that you fetch some data and check some condition or do some calculation on it, then fetch more data based on this condition/calculation. This entire thing could be a single logical process.
The only reason there's a separation of concerns here is that some of this is done in the database and some in the application. Logically, it's still part of the same calculation.
But the ORM hides this distinction and makes both the part that runs in the database and the part that runs locally look exactly the same and super easy to intermingle. Worse still if accessing properties on your ORM result object actually triggers further queries. It looks like a field access, but it's actually a database query. I've seen this cripple performance.
In many cases, if you step back and think not in terms of application code, but rather in terms of the data access and transformations that you want to achieve, then it can be rewritten as a query (joining in related data as needed, etc). At the very least, it makes you aware of what the boundaries are.
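For example, roughly this kind of rewrite (a sketch only, not code from any real project; the tables and columns are invented to mirror the pseudocode above):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE frob (id INTEGER PRIMARY KEY, foo TEXT);
        CREATE TABLE quux (id INTEGER PRIMARY KEY, frob_id INTEGER REFERENCES frob(id));
    """)

    expected = "bar"
    # the whole fetch-filter-fetch loop becomes one round trip
    rows = conn.execute(
        """
        SELECT f.id, f.foo, q.id
        FROM frob AS f
        JOIN quux AS q ON q.frob_id = f.id
        WHERE f.foo = ?
        """,
        (expected,),
    ).fetchall()

Whether you write that by hand or coax the ORM into emitting it, the filtering and the join happen on the database side, where the indexes are.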
I'm not saying that scrapping the ORM will magically make the problems go away and I know people also write terribly intermingled application and database logic when using SQL, but at least the boundary is more explicit and the different sides of the boundary actually look different instead of just looking like application code.
My point isn't that there's a silver bullet, but that we can nudge and encourage people to write better code by how the languages/libraries/tools structure solutions.
I'm also not necessarily saying that we have to use SQL instead of an ORM; that's just one possible suggestion that I personally find works due to the mental separation. I'm sure you can design ORMs that make the boundaries more explicit, or design frameworks that encourage thinking about application boundaries more explicitly. Same as how I'm not actually suggesting we get rid of OOP, just... if most people's OOP is so-called "Bad OOP", then we need to think about how to improve OOP, because changing "most people" is just not going to happen.
Yeah, I understand what N+1 queries are and how many ORMs make them too easy.
For me, ORMs become a problem when they're an excuse to avoid learning SQL. If you understand SQL, you will probably understand why the example you give is a bad idea. If you don't, you won't. I'm speaking from the point of view of having written, at this point, thousands of lines of SQL at minimum, and having decided that it's not how I primarily want to access data in the applications I write. The ORM queries I write are written with rough knowledge of the SQL they generate, and I try to be very careful with query builders to avoid the exact case you bring up.
I think LINQ in C# does a pretty good job of bridging this gap, actually. It could be better, but it discourages looping and encourages the declarative style that efficient SQL requires.
I think this is the problem that tools can’t solve. If you want to interface with a database, you need to know how a database works, regardless of what tools you use. I like using ORMs, and I think I write good ORM code. But I also think that I know quite a bit about how databases work, and that I’d be incapable of writing good ORM code if I didn’t.
If you give a novice developer an OOP assignment, they’re likely to struggle with the learning curve, and likely to ultimately create something that has a lot of unnecessary complexity. If you give a novice developer a MEAN assignment, they’ll create something that “works” a lot more easily. But it’ll likely be full of bugs created by not understanding the underlying problems that the framework is simply not forcing you to address.
Which is what I think these “simple” frameworks do. Allow you to create working features without necessarily considering things like consistency and time complexity. I also think it’s why things like Mongo have been slowly adding more ACID consistency, and schema design features. Because people are coming to realize that those are problems that simply can’t go unaddressed. And why ORMs and things like GraphQL have become so popular, which to me look remarkably similar to the structured approach of OOP design patterns.
I read somewhere that the number of programmers in the world doubles roughly every 5 years. That means that at any given moment, half of all programmers have less than 5 years of experience.
The biggest problem with intermediate level programmers is their love of hard and fast rules. It's like they have a backpack full of hammers ("best practices") and they're running around hammering everything in sight. For this I feel like it doesn't really matter how much experience you have in an individual piece of tech - the overall amount matters more.
I find it hilarious that you’d mention “20 years of experience”. I work on a team at the moment with one of the most dogmatic developers I’ve ever met, and when challenged his rationale for any decision is “20 years of experience”.
He was recently asked to fix a rather simple bug in a C# project, a language he doesn’t use very often, but he was certain this would be an easy task for him. The project used a small amount of unmanaged resources, and he just couldn’t figure out how the ‘using’ keyword worked. He spent a few days trying to get to the bottom of ‘using’ before demanding to be sent on a C# course, which everybody thought was a great idea because it would keep him busy for a while. Maybe he’ll have “21 years of experience” by the time he gets around to fixing this bug.
Yup. Also, unwillingness to learn, which, to me, is a crime.
Learning is difficult; maybe because it starts with admission of ignorance.
Most of my work, these days, is a ghastly chimera of techniques, from 30 years ago, grafted onto patterns from 30 minutes ago. Real Victor Frankenstein stuff.
But not insurmountable. In my own experience, all it takes is a small amount of consistent effort each day. I've learned a few programming languages that I use regularly like this, I learned sleight of hand card magic like this and most recently, I've learned to play guitar like this. This year, I hope to learn Spanish like this too. Many people don't want to put consistent regular effort in and are looking for shortcuts. That doesn't work.
> a ghastly chimera of techniques
Well, yeah, the right tool for the job, right? But it does work, so...
In order for a programmer to program in many patterns elegantly, one must first have an understanding of those patterns and how they are roughly reduced to machine code.
The skill of experiencing trumps the amount of experience.
Bad programmers need to hone their skill of experiencing in order to become good programmers. The most popular way is to provide them a safe environment to make mistakes and learn from them. But the most important way is actually knowing that the skill of experiencing and getting out of your biases matter, like in this case: "if OOP is indeed bad, why was it invented in the first place?", and the rabbit hole actually gets interesting from there.
At least that's my take after a few years in the industry.
My main point is that the things we use encourage a certain approach, and if most people are "doing it wrong [tm]", maybe it's not the people's fault; maybe the tools are encouraging wrong approaches.
I mean, sure, you can just say most developers aren't very good, aren't disciplined enough, don't think things through. Maybe it's true. But that doesn't solve the problem, it just shifts the blame. You can't fix the people, because it's a never-ending problem: move team or company and you're back at square one. Good luck trying to change all the university courses to try and teach better practices.
So if you can't fix the people, the only thing left is to take a long hard look at the tools and see if we can't improve them in a way that encourages better code.
Also, it's maybe notable that it's by a (former?) maintainer of ActiveRecord. I haven't really used RoR, but I think the design is probably quite different - Diesel is very much just SQL with a veneer of Rust, almost like an FFI wrapper.
Why should I have to remember to use '.values' instead of 'GROUP BY', or '.filter'/'.exclude' instead of a plain 'WHERE' clause? It's not helpful. I frequently have a clearer idea of the SQL I want (I'm certainly no DB expert) and have to make it at least twice as long, install a plugin, or in some cases just can't mangle it into Django. For what?
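To make that concrete (a sketch only; `Order` is a made-up model in an already-configured Django project), the GROUP BY has to be spelled as .values() plus .annotate():

    from django.db.models import Count

    # GROUP BY customer_id is implied by .values() followed by .annotate()
    per_customer = Order.objects.values("customer_id").annotate(n=Count("id"))
    # roughly: SELECT customer_id, COUNT(id) AS n FROM ... GROUP BY customer_id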
Diesel is nowhere near being able to fully map SQL to Rust. It's ok for relatively simple queries that are typical for OLTP applications. Anything beyond that and you're going to run into trouble.
If you want to do something more OLAP-ish or insert data in bulk, you have a huge problem. SQLAlchemy is far better, even though SQLAlchemy also has its own share of limitations (dealing with jsonb_to_recordset and functions like it for example).
SQLAlchemy is really special - coming from Django I wasn't convinced initially, but I think the reason it works so well is that it doesn't try to hide any magic. It's simply a nice way to represent objects in your database; you still have to explicitly ask it to commit / run queries.
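A minimal sketch of that explicitness (current 2.0-style API, with a made-up `User` model):

    from sqlalchemy import create_engine, select
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(name="alice"))  # only staged in the unit of work, no SQL yet
        session.commit()                 # the INSERT is issued here, explicitly
        rows = session.execute(select(User).where(User.name == "alice")).scalars().all()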
My main issue with ORMs is that they encourage you to mix application code and queries (by making it easy and by making the code for both look the same), blurring the boundary between your application and the database.
This is a problem, because there is a real boundary, typically including a network round trip. I've been on way too many projects where ORM-using code would pull some stuff from the database, do some application logic to filter or process the results, then loop and do another query for each result. Instead of doing it in the database query itself using joins (and possibly being able to use indexes for the filtering).
Even when people are very disciplined about this, I find that once you get to non-trivial queries, they become a lot harder to read in ORM code. I tend to have to think in terms of SQL, then translate between that and the ORM code on the fly. It's not so easy.
Sure, you could say all the teams I've ever worked on that did stuff like this are just bad at using ORMs, but after a few experiences like this on different teams, in different companies, with different developers, it's time to stop blaming the people and maybe take another look at the tools.
Do you care about performance? In my experience, N+1 queries are impossible to avoid with hibernate (or any JPA framework, generally). But maybe I’m just using it wrong.
I haven't found that to be the case. In particular, I think expecting the ORM to always do the best thing no matter how you use it is folly; ORMs are a tool that you have to take the time to understand in much of their full complexity.
Some would say this makes them a bad abstraction, but to me, data mapping is going to have to happen somewhere, and I would rather be building on someone else's work to write the data mapping for my applications than do it from scratch. You have to know when the ORM is the right tool to use, and when to drop into plain SQL for your querying, because they do not eliminate the need to write SQL, just reduce it significantly.
We are currently considering moving away from Entity Framework Core. Simple things work fine, but it generates ridiculous queries if stuff gets more complex.
Many ORMs are going to have trouble at some point when you start making more complex queries. I believe using an ORM well is largely in understanding the balance between when its tradeoffs are acceptable and when they are not. Per the other reply, I would probably start with EF Core in new applications, but would not hesitate to add Dapper or just plain SQL with ADO if I saw the need.
> My problem with these arguments is that they're meaningless if this "bad OO" (or "bad <insert paradigm>") is what most developers write.
Only if that's more true than “bad <foo> is what most developers write in paradigm <foo>” where <foo> is the presented alternative paradigm, otherwise we’re comparing the platonic ideal of <foo> against real world OO.
Sturgeon’s Law may not be quantitatively precise, but it addresses a real qualitative issue that must be considered.
> This argument seems to come up for every criticism of OO.
The same is true of functional and procedural programming, too.
The problem, as I see it, is that the profession has a history of treating programming paradigms as just being a bag of features. They're not just that, they're also sets of guiding principles. And, while there's something to be said for knowing when to judiciously break rules, blithely failing to follow a paradigm's principles is indeed doing it badly.
This is a particular problem for OO and procedural programming. At least as of this current renaissance that FP is enjoying, advocates seem to be doing a much better job of presenting functional programming as being an actual software design paradigm.
It's also the case that, frankly, a lot of influential thought leadership in object-oriented programming popularized some awful ideas whose lack of merit is only beginning to be widely recognized. If null is a billion dollar mistake, then the Java Bean, for example, is at least a $500M one.
> The same is true of functional and procedural programming, too.
There are some specific criticisms that are levied against FP for which proponents respond “that’s not true FP”. For example, the criticism “FP is too preoccupied with monads” might engender such a response; however, this response is appropriate because it’s not one of the defining features of FP. Yet for FP, there are still pretty widely-agreed upon features or conventions (functional composition, strong preference for immutability, etc).
For OOP, I can’t think of any features or styles that are widely agreed upon. If you mention inheritance, half of OOP proponents will argue that inheritance isn’t a defining characteristic because Kay didn’t mention it in his definition. If you mention message-passing, many others will object. If you mention encapsulation, then you’ve accidentally included virtually all mainstream paradigms.
If OOP is a distinct thing, and it isn’t about inheritance or message passing or encapsulation or gratuitous references between objects (the gorilla/banana/jungle problem), then what exactly is it?
The term may be muddied but it’s not meaningless. Alan Kay’s vision of OO (“it’s about message passing”) never seemed to really take off in the world. When people talk about OO they usually mean classes, objects and methods, used for almost everything as the primary unit of composition. And classes with public methods encapsulating private fields. And often with a dash of inheritance. C++ and Java seem like the flag bearers for this style of programming. It’s popular in C# too.
When people criticise OO, they’re usually criticising this “Effective Java” style, expressed in whatever language. This style is deserving of some criticism - it’s usually more verbose and harder to debug than pure functional code. And it’s usually less performant than the data-oriented / data-flow programming style you see in most modern game engines.
OOP the paradigm (messaging and memory encapsulation) is pretty different from OOP the style (access modifier, class+method+properties, inheritance).
As a strong proponent of FP the paradigm, I insist OOP the paradigm is great to study and apply in system software where there are multiple agencies. For example, in a browser there are multiple "agencies": network-facing workers, storage-facing workers, human-facing workers, etc. Each "agency" runs at a different pace to keep its "client" (network API, storage API, human) happy, so messaging and buffering between those "agencies" are inevitable.
FP also suffers the same issue.
FP the paradigm: pure functions, expression-based, recursion, first-class function, parsing > validation (universal, almost applicable in any programming language)
FP the style: monad, functional-based language, tail-call optimization
Being overly critical of style is not productive in the long term. Someday one will have to leave the tool for a new one.
Learning the universal part of paradigms is useful because it is not dependent on tools.
So, that's still focusing on features. If we're talking about it from a paradigmatic perspective, instead, then I would argue that the prototypical idea behind object-oriented design is indeed encapsulation.
It's true that this is not a distinguishing feature. I don't see that as problematic, it's just how things work. Object-oriented programming, like any paradigm, evolved over time, as people built languages with new ideas, and then spent time figuring out how best to make use of them. And there's no rule saying that nobody is allowed to subsequently adopt and adapt a useful idea in other domains. Fuzzy boundaries are part of the natural order, and that is a good thing.
But, if you go back and read the seminal papers, it's clear that the common thread is encapsulation, every bit as much as referential transparency is the core idea that unites all the seminal work in functional programming. The idea was that programs would be decomposed into modules that were empowered to make their own decisions about what code to execute in response to some instruction. And that this was supposed to liberate the programmer from needing to micro-manage a bunch of state manipulation. Not by eliminating state, as is the ideal in FP, but by delegating the responsibility to manage it.
This is why the concept of Beans bothers me. A Bean is a stateful object that throws its state into your face, and forces you to directly manage it. That sort of approach directly contradicts what OOP was originally supposed to be trying to accomplish. For my part, I am convinced that the bulk of the backlash against OOP is really a response to the consequences of that sort of approach having become orthodox in OOP. Which is deeply ironic, because this is an approach that enshrines the very kinds of practices that the paradigm originally sought to eliminate.
The problem with encapsulation in OOP is that the (mutable) state is not truly encapsulated, it is just hidden. State encapsulation would be a pure function that uses mutation internally. OOP encourages mutable state, and in general it is non-trivial to compose such effectful code. The state space of an OOP application can become a combinatorial explosion. This might be what you want if you are modelling something with emergent complexity, e.g. a simulation or game, but it doesn't sound good for a line-of-business app.
As an anecdote, I once saw a stack overflow reply from a distinguished OOP engineer advocating for modelling a bank account with an object containing a mutable balance field! That is certainly not modelling the domain (an immutable ledger). OOP might fit some problems well, but the cult of presenting it as the one-true-way (made concrete in 90's Java) is deserving of a backlash IMHO.
> But, if you go back and read the seminal papers, it's clear that the common thread is encapsulation, every bit as much as referential transparency is the core idea that unites all the seminal work in functional programming. The idea was that programs would be decomposed into modules that were empowered to make their own decisions about what code to execute in response to some instruction. And that this was supposed to liberate the programmer from needing to micro-manage a bunch of state manipulation. Not by eliminating state, as is the ideal in FP, but by delegating the responsibility to manage it.
I agree with that assessment, but I also think the problems with OO are inherent to that paradigm, and it's time to acknowledge that the paradigm is bad and has failed.
> This is why the concept of Beans bothers me. A Bean is a stateful object that throws its state into your face, and forces you to directly manage it.
I disagree.
Beans are Java's version of the ideas expressed in Visual Basic and, later, COM -- ideas often called something like "software componentry". They are objects that can be instantiated and configured without having to write custom code to invoke their methods, because the instantiation and configuration follow accepted standards, as well as the means by which they register to subscribe to and publish events. This lets them be instantiated and manipulated by other tools in the pipeline, such as a GUI builder tool or an automatic configurator like the one in Spring Framework.
If we go by Alan Kay's definition, then we can argue Elixir/Erlang is an OO language [0]. It's not the first language one would think of when talking about OOP, is it?
So is it dot-method notation (`foo.Bar()`)? If so, does that make Go and Rust OOP as well? That's just syntax though; what about the C convention of `ClassName_method_name(ClassName obj)`? That's still namespacing by the type, even if the language doesn't have a first-class notion of a type prefix.
I've found that objects work best when they are used in moderation. Most of my code is plain old functions, but occasionally I find that using an object to do some things is just much cleaner. The rest of the time, doing away with all the baggage associated with objects and using structs or other, simpler structures is just easier to wrap my head around.
Java IMO is the classic case of going way too far overboard with being object oriented. Much worse than Python.
I feel like OO is okay if objects implement good well thought out interfaces. The problem is designing good well thought out interfaces is hard. Also the rise of distributed computing causes problems for OO. Passing data between systems is pretty easy. Passing functions with that data, not so much.
> I feel like OO is okay if objects implement good well thought out interfaces.
I find that much of what I do is taking something from structure A and manipulating structure B. If you are a slave to OOP, you end up bolting that method to either one or the other object when really it affects both and belongs in a separate place entirely.
The other big issue I have with OOP is that frequently I don't want or care about reference passing; I just want value passing. The place where I do start using objects is usually when passing references is useful.
It's particularly useful if you are building a thing which requires multiple steps and you need that state to persist between steps. I've seen it done well with functional programming, but for me it's just easier to build something out with an object that holds persistent state.
My experience with distributed computing is that the biggest issue isn't passing functions with data. Instead the biggest issue is sharing state between systems. This is part of why immutability is very nice.
Object-oriented means a few different things: it could mean anything from "my program contains the 'class' keyword" to "everything must be a class which obeys SOLID".
If you make a general statement, like "OO in Python is mostly pointless", you are likely wrong. There are ways to use the 'class' keyword to make programs better (shorter, more readable), and there are libraries out there which use OO this way.
If you want to write something which can be proven true, you want something much more specific. Here are some possible titles one could have a meaningful discussion about:
You’re missing the selection bias at work here. The issue is that good OO doesn’t get you upvoted in programmer related forums like HN because it’s not terribly popular. Instead what does get upvoted is arguments against OO, and these often contain a lot of bad OO code as counter-examples.
And yet a lot of this so-called "bad OO" is representative of most of the OO I see written. If something that is bad is the norm, that says something about the paradigm itself, as well.
Bad code is the norm (regardless of paradigm), because it's a profitable industry that rewards entry but not necessarily skill. It's not really a failing of the paradigm that it's applied wrong when that is the norm in all aspects of the practice.
That is not my experience. I see a lot of boring, working, OO code. Nothing that I’d excitedly put on the front of HN as an example of anything, but code that’s reasonably easy to read, and just works day in and day out.
I think you took me to be arguing for a stronger claim than I actually was. My preferred paradigm is also roughly “idiomatic go or rust.” But the OP isn’t arguing for idiomatic-go-or-rust, as far as I can tell they’re arguing against ever using methods or virtual dispatch (which are core features of both of those languages)! My claim is that the OP’s diagnosis of the problem—“virtual dispatch causes your code to be confusing, get rid of it”—is incorrect.
The style the OP is advocating aligns well with Rust, at least. Rust still tends to use method syntax, but it is very strict about mutability and virtual dispatch is rarely used, so you tend to write your programs with a similar flavor to what the OP is arguing for.
This is an industry where the waterfall lifecycle was presented as being a bad methodology, which was then promptly adopted by almost every firm in existence.
Then Agile & SCRUM arrived and now people complain about giving status on what they are working on now, what issues they have and what they intend to work on today. You see posts here bitching about it.
I still get software from engineering that appears to lack ANY sense of what we are supposed to use it for. The developer gets tons of back-slapping from engineering management, and it leaves us STILL doing grunt work with the poorly designed tooling, while the developer is now writing perl scripts to deal with issues in his delivered work.
It's not like the criticism in this case is particularly esoteric: Literally the first explanation is that the proposed classes don't encapsulate single concepts. That's the theory of the first course you'd get about this in school, or if you're self taught something you'd see within the first 30 minutes of casual googling about the subject.
I think it's fair to question what is going on in the article if the mistakes made are of such a basic nature.
You can create an OO mess in Go and Rust that is equivalent to this too.
Dependency injection is a nice name for factory functions + dependency list + topological sort, but people still use it horribly, injecting dependencies everywhere, not writing functional and data-oriented composable code.
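To put that framing in code (a toy sketch, every name invented): "injection" is just constructing things in dependency order and passing them in.

    # hypothetical components; a DI container mostly automates this wiring
    def make_config():
        return {"dsn": "sqlite://"}

    def make_db(config):
        return {"dsn": config["dsn"]}  # stand-in for a real connection

    def make_user_service(db):
        return {"find_user": lambda user_id: (db, user_id)}

    # manual "container": call each factory after the things it depends on
    config = make_config()
    db = make_db(config)
    user_service = make_user_service(db)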
It's nice to learn about patterns but it's wrong to think about patterns as a solution.
> You can create an OO mess in Go and Rust that is equivalent to this too.
To be quite clear, I qualified with idiomatic. I don't think you can create an OO mess with idiomatic Go or Rust, but then again, the whole point of this thread is that there's no clear consensus on what OO actually means, so what does "OO mess" even mean?
Idiomatic says nothing about code design. Code design is far above what idiomatic means. You can still have dependencies where you shouldn't have them. Your services can still be designed without composability in mind.
Many software problems boil down to badly done X, where X is some paradigm, technology or technique. Then someone concludes, like this article, that X is pointless, to be avoided, etc. It would instead be better to write an article where you steelman X, then compare it to Y, and look at the comparative advantages. Badly done X or Y should always be highlighted, but not really used as a reason not to do X or Y. If you can steelman X and show that even a best effort has far too many disadvantages compared to other approaches, with no apparent way to overcome them, then sure, call it out. What I often see is that new things have less history of people doing them badly than old things, which have a lot of examples of people doing them badly, and people mistake that for the old thing being bad.
Right, but it actually is just horrible code with zero separation of concerns. Having a client class is a great example of good OO. The library you're using having a Session object is basically essential in Python. Any other paradigm would be a mess. Things storing state about themselves in a totally unambiguous way is brilliant. Most complaints about OO just focus on bad use cases. Calling that out is not a No True Scotsman.
If you saw someone only driving in reverse, complaining about car UX, would you not state the obvious?
> This argument seems to come up for every criticism of OO. The criticism is invalid because true OO would never do that.
This assertion is disingenuous. The whole point is that the reason why bad code is bad is not because it's OO; it's because it's bad code.
You do not fix bad code by switching programming paradigm, especially if it's to move back to procedural programming and wheel reinvention due to irrational class-phobia.
> This assertion is disingenuous. The whole point is that the reason why bad code is bad is not because it's OO; it's because it's bad code.
This is pretty false on its face. Let's say that some paradigm insists that inheritance should be used for every problem. Almost all programmers agree that this is bad code, thus code written in this paradigm is bad because of the paradigm.
> You do not fix bad code by switching programming paradigm, especially if it's to move back to procedural programming and wheel reinvention due to irrational class-phobia.
Agreed. Instead you should start with your current paradigm and remove the problematic aspects until you're left with something that works reasonably well. I posit that when you start with OO and drop its problematic aspects like inheritance or Armstrong's observation of banana-gorilla-jungle architecture, you end up with something that looks pretty functional or data-oriented, a la idiomatic Go or Rust. If your definition of "OO" already excludes these things, then it probably looks something like Go or Rust or perhaps even Erlang.
The issue I'm raising with "OO" is that there is no consensus about what OO is; rather every defense of OO is about what it isn't (and then there's the ever predictable straw man, "But you can write bad code in any paradigm!").
I disagree. Good OO is about encapsulation of concerns. Make a Client object and push data to it; let the class worry about the networking and sockets. Make another object to deal with the database.
For sufficiently complex systems I don't see how functional or other paradigms can manage without holding a huge amount of global state.
The functional system doesn't hold 'global' state. It holds the state in arguments given to functions. If you want a client, you still have a bit of client 'data' and you pass that around your functions.
If you wanna do stuff to the client, you call functions with the client as an argument. If you want polymorphic clients, you define a client typeclass / trait / interface called Client, and then your functions take a generic Client bit of data.
In some sense, this approach is a lot like replacing foo.bar() with bar(foo). Which doesn't do much to change your program. The interesting difference in FP is how you 'change the state' of your client.
In FP, if you want your client to e.g. count its connections you would do something like `nextClient = connect(oldClient)` instead of the OOP `client.connect()`. This means that you are a lot more explicit about your state changes. This has a lot of advantages that can be hard to wrap your head around. It also comes with some disadvantages.
As a result though, all of your state is carried very 'locally' in your functions scopes.
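A rough sketch of that in Python (the Client shape is made up, nothing from the thread):

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Client:
        host: str
        connections: int = 0

    def connect(client: Client) -> Client:
        # no hidden mutation: the caller gets back a new Client value
        return replace(client, connections=client.connections + 1)

    old_client = Client(host="example.com")
    next_client = connect(old_client)  # old_client is left untouched

The state change only exists in the new value you were handed back, which is what makes it explicit.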
FP and imperative handle data without huge amounts of global state the same way. By structuring your code properly.
Per your example, your Client instance still has to be passed around everywhere it's needed, and you call client.push_data() on it; in FP or imperative approaches, you would pass around a Client struct or tuple or similar, and call push_data(client).
OO just bundles state and functions together, as fields and methods on an object. Which has its pros, and its cons.
That was a rhetorical statement. The same reason you don't need global objects in an OO language is the reason you don't need global data structures in an imperative or functional one.
Your Sword class doesn't hold a reference to a NetState object, but then, my Sword struct doesn't need a reference to a NetState struct, either.
If I were to climb the reference hierarchies back to my 'main' function, I would find a common ancestor, just like in OO if I were to climb my reference hierarchies, I would get back to the class with my 'main' method. But that doesn't imply global state; it's all scoped down in the code where I defined and use it.
I don't usually deal with stateful closures. In OO we often have file objects, and if you call Close() on said object, subsequent Read() invocations fail because the encapsulated state has changed. What's the closure analog for this?
Separating concerns the way you describe can be accomplished leveraging the polymorphism mechanisms in an OO language (objects and subclasses) or just as easily in a functional style (multimethods, protocols, interfaces).
Show me an example of something you think is decoupled in OO style which could not be similarly decoupled in FP, and I'll give you the FP version which does the same.
Not saying one is better than the other, but you can definitely keep your code decoupled in both styles.
In an OO program all state is global, because you can't understand any given object's behaviour without knowing what its state is and what the behaviours of all the objects it communicates with are, and you can't understand those without knowing their states either.
State can and should be hierarchical - and you can do that easily in FP - but the idea that OO lets you make it somehow non-global is a myth.
>In an OO program all state is global, because you can't understand any given object's behaviour without knowing what its state is and what the behaviours of all the objects it communicates with are, and you can't understand those without knowing their states either.
By that logic all state in any program is global.
But if I'm writing an MMO server my GoldCoin class doesn't need to know about the client's connection state.
Have you actually written or worked with OO code before? Your comment reads like you have not.
> By that logic all state in any program is global.
All state in any program is global in the sense of being reachable from top-level. You can, and should, make your state hierarchical - reachable from top-level should not mean reachable from anywhere, you can have (non-toplevel) parts of your program that only access parts of your state. Unfortunately OO is uniquely bad at this, because objects are encouraged to have hidden coupling.
> But if I'm writing an MMO server my GoldCoin class doesn't need to know about the client's connection state.
But you have no way of knowing or enforcing that. "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle" - your GoldCoin might contain an ("encapsulated", i.e. hidden) reference to another class that has a reference to another class, and so on, and so eventually it does depend on the client's connection state.
> Have you actually written or worked with OO code before?
Yes, for many years, which is why I've become an advocate of FP style instead.
> All state in any program is global in the sense of being reachable from top-level.
Well, think about a random number generator. It has some internal state, which gets set by some action taken (perhaps indirectly) by the top level. And that state is "reachable" by getting the next random number, but that random number may not be a direct representation of any part of the generator's state. Also, after initialization, the generator's state should not be alterable by the rest of the program.
So to me, that's not really "reachable". The entire point of encapsulating that state in the random number generator is to make it not reachable.
That's a perfect example; in my experience that kind of RNG state is global (or at least, it has all the problems of traditional global state). Potentially any object might behave in a strange way, because it contains a reference to that RNG (hidden from you): you might find a sequence of calls that usually, but not always, reproduces a bug in test, and then it turns out that it depends on whether other tests running in parallel are affecting the RNG state. Essentially any test you write that's bigger than a single-class mock test has to be aware of the RNG and stub it out, even if you're testing something that on the face of it has nothing to do with RNG.
In a functional program you would make the RNG state an explicit value, and pass it where it's used (but not where it isn't - so there might be large program regions that don't have access to it). It'll be an explicit part of the top-level state of your program. I'd argue that that's not actually any more global - or at least, it causes fewer problems - than the OO approach.
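A tiny sketch of what "RNG state as an explicit value" can look like (a toy linear congruential generator, not any particular library):

    def next_random(state: int) -> tuple[int, int]:
        # toy LCG: returns (value, new_state); there is no hidden global state
        new_state = (1103515245 * state + 12345) % 2**31
        return new_state, new_state

    state = 42                          # seeded explicitly at the top level
    value, state = next_random(state)
    value2, state = next_random(state)  # code that isn't handed `state` simply can't touch it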