
But every programming paradigm can be (and perhaps most often is) done badly. They each just create their own set of issues when "done badly". I think it just comes from not properly understanding why you're making certain choices. Deciding to use a design pattern is going to come with a certain set of strengths and weaknesses, so ideally you'd consider your problem and choose design patterns that make the most of the strengths and minimize the impact of the weaknesses. But I don't think it's very common to put that much thought into programming decisions, so people will often make a design decision, and then invest their effort into fighting the constraints it imposes, while usually failing to realize the potential benefits of it. OOP can be a perfectly suitable tool to solve a problem, but if you haven't thought about the actual reason why you want to use it, then you're just going to end up creating complicated abstractions that you're not properly benefitting from. ORMs can be great tools too, but if you want to use an ORM, you have to solve your problems using the approaches implemented by the ORM. If a programmer can't/won't do that, then they've just chosen the wrong tool for their problem/the approach they wanted to take to solve it.


Maybe the overall takeaway is that most programmers are bad.


Perhaps. I think a more likely explanation is that it's just not a very professional field. As a field it hasn't been around for very long at all, and it's changed a lot during that period. If it were easy to tell the difference between good and bad programming, then perhaps things would be different. But I don't think it's obvious at all, and I don't think it's possible to create any meaningful level of agreement in that area. Look at how hard it is to agree on any sort of standard. IETF RFCs take years to process, with the typical outcome being a standard that enough people agreed to publish, but that zero people faithfully implement.


I have another opinion about that. Most of us don't work in product companies but rather as consultants or in IT enterprise environments. In such contexts the real objective of software development is often the improvement of a process that is enabled by the software. As such, the cons of bad software are not immediately clear to any management layer above the first. Sometimes even the first layer of management doesn't really care until they get the process improvement they want. If the new software produces a 30% increase in sales, then the fact that it is badly written is of secondary importance: the objective has been met. The fact that bad software quality will hurt maintenance in the mid-to-long term is often too far in the future for anybody to really care.

As software engineers, I have seen that we usually make two mistakes in these cases. We focus only on our craft, without taking into consideration the real business objectives of the customer and what they really care about. And we introduce unnecessary complexity just for the sake of it, often adding unnecessary abstractions. In short, we often commit the sin of believing that software is the objective, while in most cases there is a business objective for which software is just the enabler, and because of that we lose focus on what the right thing to do is.

In software product companies code is part of the final product and code quality directly impacts costs and the core business of the company.


I'd still say this is at least partially attributable to not having any real measurements of what "good" software is, at least beyond a very basic level.

If you want to build a bridge, I’d assume there’s a set of standards it has to comply with. If it doesn’t meet the spec you could explain in very objective terms why the quality isn’t sufficient. If you had to sit in front of the board and explain why you think a piece of software wasn’t built to a sufficient standard of quality, you’re going to really struggle to do that in objective terms. The best you could really hope to do is present a well articulated opinion. Anybody who disagrees with you could present their own opinion, and unless they have a high degree of technical competence, they’re not going to have any objective approach for making judgements about that.

Sure, lots of companies prioritize feature velocity over code quality. But I doubt most of them are very well informed about what quality compromises they're actually making.


In some large Internet-scale firms there were ARBs, Architectural Review Boards. They served this purpose by having very senior members of staff review proposed architecture and implementation approaches PRIOR to any coding occurring. The danger was that over time this sometimes devolved into the members waiting to just beat down the people presenting to them and being internally "political".


This resonates with me.

In other fields it can be easy to say when a job is done well. There can be clear guidelines. That said, other fields cut corners and do a "bad" job, too. Construction, for example. It is probably possible to explain to the average Joe why a damp proof course wasn't installed properly or why the insulation used is dangerous and flammable.

Programming as it exists now often seems to be in service of something else, so a deadline causes corners to be cut, and in that context a cut corner is fine and "we'll fix it later".

I suppose programming differs from a physical thing like construction in that respect. Sure, you could replace the wiring and install a mezzanine, but it'd be insanely impractical.

I’m going off on a tangent now...


Construction is a prime example, especially Home Construction. Just look at the TikTok videos showing horrendous plumbing and electrical work.

Hell, I moved into a house built 3 years ago by a major firm and the ceiling fans were ALL wired wrong, and many of the electrical plugs are upside down.

This is a house that had a specific design model. My presumption is that unskilled labor was brought in to complete work and just did a shitty job.

You can see this sometimes with outsourced programming work or local work that is poorly designed and inadequately tested.


Maybe it's true, but that just shifts the blame and doesn't change anything or solve anything. Try as we might, "fixing" programmers just isn't realistic, so the only thing we can do is look for ways to improve the tools so that they encourage better practices.

Going back to the ORM example, I'm cool with just using SQL. The language encourages a style of thinking that fits with databases, and it being a separate language makes it clear that there's a boundary there, that it's not the same as application code. However, I'd also be ok with an ORM that enforced this boundary and clearly separated query logic from application logic.

As for OOP, I don't know what the solution there is. Maybe there isn't one. I like to use OOP, but sparingly, and at least in my personal code (where I have control over this), it's worked out really well. A lot of code is transforming data structures and a functional approach maps really well to this, but for overall architecture/systems, and even for some small things that just map well to objects, it's great to have OOP too. In my toy game, I use inheritance to conveniently create external (dll/so) modules, but inside the engine, most data is transformed in a functional style. It's working quite well. I'm not sure how you could redesign the paradigms to encourage this though, outside of designing languages to emphasize these things.
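To make that concrete, here's a minimal sketch of the rough shape I mean (Python, with made-up names, and leaving out the actual dll/so loading): an OOP interface at the module boundary, and plain functions doing the data transformation inside it.

    from abc import ABC, abstractmethod

    # OOP at the boundary: every external module plugs in through one small interface.
    class Module(ABC):
        @abstractmethod
        def update(self, entities: list[dict]) -> list[dict]:
            ...

    # Functional style inside: plain functions that take data and return new data.
    def apply_gravity(entity: dict) -> dict:
        return {**entity, "vy": entity["vy"] - 9.8}

    def integrate(entity: dict) -> dict:
        return {**entity, "y": entity["y"] + entity["vy"]}

    class PhysicsModule(Module):
        def update(self, entities: list[dict]) -> list[dict]:
            return [integrate(apply_gravity(e)) for e in entities]

The inheritance gives the engine a uniform way to load and call modules, while the transformations themselves stay simple, testable functions.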


>The language encourages a style of thinking that fits with databases, and it being a separate language makes it clear that there's a boundary there, that it's not the same as application code.

The separation is orthogonal to the data access style used, and you really have to make sure the engineers you're working with understand separation of concerns. I have seen many applications with controllers filled with loads of raw SQL, just as I have seen them filled with lots of ORM query building. If the programmers don't get why that's bad, they will do the wrong thing with whatever tools they have in front of them.
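As a minimal sketch of that separation (Python with the stdlib sqlite3 module; the table and function names are invented for illustration), the point is that the controller never builds queries itself, regardless of whether the data layer underneath uses raw SQL or an ORM:

    import sqlite3

    # Data-access layer: the only place that knows how the data is fetched.
    def find_active_users(conn: sqlite3.Connection) -> list[tuple]:
        return conn.execute(
            "SELECT id, name FROM users WHERE active = 1"
        ).fetchall()

    # "Controller": application logic only, no query building in sight.
    def list_active_user_names(conn: sqlite3.Connection) -> list[str]:
        return [name for _id, name in find_active_users(conn)]

Swap the body of find_active_users for an ORM query and the controller doesn't change, which is the whole point of the boundary.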


Take this contrived pseudocode example:

    results = []
    for frob in Frob.get_all(where=something) {
        if frob.foo == expected {
            results.append({
                frob: frob,
                quux: Quux.get_all(where=some-query-using-frob)
            })
        }
    }
Basically, the idea is that you fetch some data and check some condition or do some calculation on it, then fetch more data based on this condition/calculation. This entire thing could be a single logical process.

The only reason there's a separation of concerns here is because some of this is done in the database and some in the application. Logically, it's still part of the same calculation.

But the ORM hides this distinction and makes both the part that runs in the database and the part that runs locally look exactly the same and super easy to intermingle. Worse still if you access properties on your ORM result object that actually trigger further queries behind the scenes. It looks like a field access, but it's actually a database query. I've seen this cripple performance.

In many cases, if you step back and think about it not in terms of application code, but rather in terms of the data access and transformations that you want to achieve, then it can be rewritten as a single query (joining in related data as needed, etc.). At the very least, it makes you aware of what the boundaries are.
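As a rough illustration (Python with the stdlib sqlite3 module; the table and column names are hypothetical, mirroring the pseudocode above), the loop plus per-row fetch can collapse into one round trip:

    import sqlite3

    def frobs_with_quuxes(conn: sqlite3.Connection, something, expected) -> list[tuple]:
        # One query instead of 1 + N: both the condition and the fetch of
        # related Quux rows happen in the database, via a join.
        return conn.execute(
            """
            SELECT f.id AS frob_id, q.*
            FROM   frobs f
            JOIN   quuxes q ON q.frob_id = f.id
            WHERE  f.bar = ?   -- whatever the original outer filter was
              AND  f.foo = ?   -- the check previously done in application code
            """,
            (something, expected),
        ).fetchall()

Whether you then keep this as raw SQL or express it through an ORM matters less than noticing that it's one database-side operation, not application code with some queries sprinkled in.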

I'm not saying that scrapping the ORM will magically make the problems go away and I know people also write terribly intermingled application and database logic when using SQL, but at least the boundary is more explicit and the different sides of the boundary actually look different instead of just looking like application code.

My point isn't that there's a silver bullet, but that we can nudge and encourage people to write better code by how the languages/libraries/tools structure solutions.

I'm also not necessarily saying that we have to use SQL instead of an ORM, that's just one possible suggestion that I personally find works due to the mental separation. I'm sure you can design ORMs that make the boundaries more explicit, or design frameworks that encourage thinking about application boundaries more explicitly. Same as how I'm not actually suggesting to get rid of OOP, just... if most people's OOP is so-called "Bad OOP", then we need to think about how to improve OOP, because changing "most people" is just not going to happen.


Yeah, I understand what N+1 queries are and how many ORMs make them too easy.

For me, ORMs become a problem when they're an excuse to avoid learning SQL. If you understand SQL, you will probably understand why the example you give is a bad idea. If you don't, you won't. I'm speaking from the point of view of having written, at this point, thousands of lines of SQL at minimum, and having decided that it's not how I primarily want to access data in the applications I write. The ORM queries I write are written with rough knowledge of the SQL they generate, and I try to be very careful with query builders to avoid the exact case you bring up.
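For example, here's a sketch of what "careful with the query builder" looks like, assuming SQLAlchemy 2.x (which may well not be the ORM you use) and hypothetical Frob/Quux models mirroring the earlier pseudocode: the relationship is loaded eagerly, so you get one extra query in total rather than one per row.

    from sqlalchemy import ForeignKey, create_engine, select
    from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                mapped_column, relationship, selectinload)

    class Base(DeclarativeBase):
        pass

    class Frob(Base):
        __tablename__ = "frobs"
        id: Mapped[int] = mapped_column(primary_key=True)
        foo: Mapped[str]
        quuxes: Mapped[list["Quux"]] = relationship(back_populates="frob")

    class Quux(Base):
        __tablename__ = "quuxes"
        id: Mapped[int] = mapped_column(primary_key=True)
        frob_id: Mapped[int] = mapped_column(ForeignKey("frobs.id"))
        frob: Mapped["Frob"] = relationship(back_populates="quuxes")

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        # selectinload fetches all related Quux rows in one extra query,
        # instead of one query per Frob when frob.quuxes is touched lazily.
        stmt = (
            select(Frob)
            .where(Frob.foo == "expected")
            .options(selectinload(Frob.quuxes))
        )
        for frob in session.scalars(stmt):
            print(frob.id, len(frob.quuxes))

Other ORMs have equivalents (e.g. Django's prefetch_related), but you only know to reach for them if you already know what SQL you want generated.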

I think LINQ in C# does a pretty good job of bridging this gap, actually. It could be better, but it discourages looping and encourages the declarative style that efficient SQL requires.


I think this is the problem that tools can’t solve. If you want to interface with a database, you need to know how a database works, regardless of what tools you use. I like using ORMs, and I think I write good ORM code. But I also think that I know quite a bit about how databases work, and that I’d be incapable of writing good ORM code if I didn’t.

If you give a novice developer an OOP assignment, they're likely to struggle with the learning curve, and likely to ultimately create something that has a lot of unnecessary complexity. If you give a novice developer a MEAN assignment, they'll create something that "works" a lot more easily. But it'll likely be full of bugs created by not understanding the underlying problems that the framework is simply not forcing you to address.

Which is what I think these "simple" frameworks do: allow you to create working features without necessarily considering things like consistency and time complexity. I also think it's why things like Mongo have been slowly adding more ACID consistency and schema design features: because people are coming to realize that those are problems that simply can't go unaddressed. And it's why ORMs and things like GraphQL have become so popular, which to me look remarkably similar to the structured approach of OOP design patterns.


I read somewhere that the number of programmers in the world doubles roughly every 5 years. That means that at any given moment, half of all programmers have less than 5 years of experience.


I've been programming for 30+ years. I still have less than 5 years of experience in anything that's currently popular.


The biggest problem with intermediate level programmers is their love of hard and fast rules. It's like they have a backpack full of hammers ("best practices") and they're running around hammering everything in sight. For this I feel like it doesn't really matter how much experience you have in an individual piece of tech - the overall amount matters more.


> The biggest problem with intermediate level programmers is their love of hard and fast rules

I think that’s just a problem of dogmatism, which doesn’t necessarily go away with experience.


I'll second that. I have seen that from "developers" with 20 years of experience. Dogmatism is the real enemy.


I find it hilarious that you’d mention “20 years of experience”. I work on a team at the moment with one of the most dogmatic developers I’ve ever met, and when challenged his rationale for any decision is “20 years of experience”.

He was recently asked to fix a rather simple bug in a C# project, a language he doesn’t use very often, but he was certain this would be an easy task for him. The project used a small amount of unmanaged resources, and he just couldn’t figure out how the ‘using’ keyword worked. He spent a few days trying to get to the bottom of ‘using’ before demanding to be sent on a C# course, which everybody thought was a great idea because it would keep him busy for a while. Maybe he’ll have “21 years of experience” by the time he gets around to fixing this bug.


Yup. Also, unwillingness to learn, which, to me, is a crime.

Learning is difficult; maybe because it starts with admission of ignorance.

Most of my work, these days, is a ghastly chimera of techniques, from 30 years ago, grafted onto patterns from 30 minutes ago. Real Victor Frankenstein stuff.

It seems to work, though.


> Learning is difficult

But not insurmountable. In my own experience, all it takes is a small amount of consistent effort each day. I've learned a few programming languages that I use regularly like this, I learned sleight of hand card magic like this and most recently, I've learned to play guitar like this. This year, I hope to learn Spanish like this too. Many people don't want to put consistent regular effort in and are looking for shortcuts. That doesn't work.

> a ghastly chimera of techniques

Well, yeah, the right tool for the job, right? But it does work, so...


In order for a programmer to program in many patterns elegantly, one must first have an understanding of those patterns and how they roughly reduce to machine code.

The skill of experiencing trumps the amount of experience.

Bad programmers need to hone their skill of experiencing in order to become good programmers. The most popular way is to provide them with a safe environment to make mistakes and learn from them. But the most important way is actually knowing that the skill of experiencing and getting out of one's biases matter, like in this case "if OOP is indeed bad, why was it invented in the first place", and the rabbit hole actually gets interesting from there.

At least that's my take after a few years in the industry.


And yet after 2-3 years they are appointed as "senior" in some of the most developed countries :)


My main point is that the things we use encourage a certain approach, and if most people are "doing it wrong [tm]", maybe it's not the people's fault, maybe the tools are encouraging wrong approaches.

I mean, sure, you can just say most developers aren't very good, aren't disciplined enough, don't think things through. Maybe it's true. But that doesn't solve the problem, it just shifts the blame. You can't fix the people, because it's a never-ending problem: move team or company and you're back at square one. Good luck trying to change all the university courses to try and teach better practices.

So if you can't fix the people, the only thing left is to take a long hard look at the tools and see if we can't improve them in a way that encourages better code.



