Software engineering topics I changed my mind on (chriskiehl.com)
1164 points by goostavos on Jan 24, 2021 | 686 comments


Have never agreed with a blog post more. Every single bullet point, 10/10.

Okay, okay, actually I have one qualm~

> Standups are actually useful for keeping an eye on the newbies.

Unfair. Standups are useful for communication between a team in general, if kept brief. If senior engineer X is working on Y and other engineer M has already dealt with Y (unbeknownst to X), it's a great chance for X to say "I'm currently looking for a solution to Y" and for M to say "Oh I had to solve that same problem last month!"

Seniority has nothing to do with this. Communication/coordination/knowledge-sharing matter at all levels.


Agree with this wholeheartedly. Standups are annoying but I have learned they are necessary, even as a very experienced developer. Sometimes things just come up that you wouldn’t otherwise know about, and it encourages helpful, meaningful communication amongst the team.

What’s not helpful is when standups are treated like status reports. That’s not the purpose - even uber-green newbies are responsible enough to do their work. The best kind of standups are those where you can feel free to discuss your blockers and simply state what you’re doing, so the team has a general awareness of what’s going on and how/if it impacts them.


> not helpful is when standups are treated like status reports

> simply state what you’re doing

Honest question: what's the difference?


In theory, no difference. In practice, good standups to me always feel like a casual conversation, and bad standups always feel like people speaking off a script like bad actors.


Why not just have casual conversations instead? Standups are one of those things that people disagree on endlessly without discussing context - their worth depends on the team. On my current team they're worthless. I'd rather have casual conversation, but that's like squeezing blood from a stone. Departure planning underway.


It should be, but not everyone on the team has to be confident. Not everyone has to be outspoken, not everyone has to have perfect pronunciation. Not everyone is able to structure their thoughts in 1 minute, even though they are able to work on complex systems.

What you want from the "what you did yesterday, blockers, what you will do today" script is a framework: a conversation starter, a scope for the conversation, and something you can prepare beforehand. Some people can come up with it on the spot, and some people think they should come up with it on the spot.

That is why I hate having the standup right at the start of the day, like 9:00. I usually need at least 20 minutes to check what I finished yesterday and start picking up something new, going through priorities.


Personally I'd like a stand-up where it's ok not to have anything to say, like yeah, WIP, no problems, no blockers (except being here talking about it instead of doing it..!) etc.

Fully agree, I would prefer it later. Not just 20 min - I typically start ~1h before ours anyway, but it hangs over me all that time. I'd like to spend most of the day working on something and then have the stand-up in the afternoon; I'd be more likely to have an issue someone could help me with, or otherwise on my mind to discuss.


Yeah, having standup in the afternoon is great. At my last company I worked with Americans from the EU. My standup was at ~4pm. My "last day's work" was still fresh and I still had ~2 hours to resolve blockers.


Careful what you wish for. In open-plan offices and in meetings with the business side, there's endless pointless chatter about everything but doing the job. I'm not a devotee of overwork or the squeezing-blood-out-of-a-stone type of work. However, the directionless chatter, general incompetence, and lack of inquisitiveness and awareness become energy-draining over time.

A status report is when a team says the same things every day and every week, nothing changes, and there's nothing new to be learned. This is Waste.

Daily scrum is meant to encourage collaboration, inspiration and brief sharing of information. However, when driven by business needs alone, it becomes another pointless status report. On the flip side, if the daily scrum takes off, it should be allowed to continue as a new meeting afterwards - but that is also a sign that there's not enough coherency in the group with the current practice.


Some people just don't like casual conversation and wouldn't initiate conversation on their own. If you have enough of those types, no communication would happen. Ad-hoc conversation tends to be interruptive, which is not desirable for people on maker schedules.

Either way, standups are just one communication strategy. Pick the communication strategy that works with the style your team feels comfortable with. There's rarely a one-size-fits-all solution when it comes to communication.


It’s a good question that every team should ask themselves rather than just blindly follow some scrum book. One reason that standups can be worthwhile is if your product managers are hard to get a hold of (which is common), it’s a guaranteed time when you can ask them some questions. But your mileage may vary.


> Why not just have casual conversations instead?

Scheduling it daily is how you have these casual conversations.


> In theory, no difference. In practice, good standups to me always feel like a casual conversation, and bad standups always feel like people speaking off a script like bad actors.

As someone who did sales early in my career... acting off of a script correctly feels like a casual conversation to the one you're selling to.

If your script-reading is bad, that's because you haven't practiced enough. Having a script isn't necessarily a bad thing, and is in fact very useful in keeping focus.


Programmers are not salesmen, and this is not in their control; the scrum master often demands a question/answer type of conversation.


The more I grow into senior engineer / more leadership positions, the more and more my sales training from my youth comes in handy.

Every meeting you have with people has a goal (otherwise, you wouldn't meet with them to begin with!!). Maybe the goal is to gather requirements, or maybe the goal is to convince them to do something for you. The latter is 100% sales. None of us exist within a vacuum, we rely upon APIs or libraries or frameworks to do things. And if these APIs / libraries / frameworks are company / organization specific, you'll need to convince their lead engineer that your change is worthwhile to adopt.


If one had to troubleshoot a bad standup meeting, how might you turn one that feels more scripted into one that feels more natural?


Everyone is standing up, sitting down only as a medical exception. Strict time limits. Everyone gets 30s, extension only if there is a question from the crowd. For the topics "yesterday, blockers, today" just make it 3 "words" for yesterday and today each. Like "yesterday customer contact and small bugs, today think about customer suggestions, maybe with Christine" at most. Only the blockers deserve a full sentence maybe.


I would suggest the difference is if you feel pressure about your response. Is it ok to pass, or say something like "still working on same issue I discussed a couple of days ago"? If not, it's less like a casual conversation, and more like justifying your time.


I would say that's perfectly okay, but not for the reason you think. If your status is "still working on the same issue", your team should respond with "how can we help?". If your status doesn't change, that's a sign that something's wrong at some level, whether it's because you're stalled or because the issue was poorly scoped or poorly defined.


What about the fact that some stuff just takes time, is that not conceivable to you?


Stand up is not (supposed to be) oppositional/conflict driven (but there are many toxic workplace cultures). Obviously some things "just take time", but what is the stuff that's taking up time? Assume it's all developers in the room and we're all familiar with the code base. Are you doing a stupid boring refactor of a thing that "just takes time" but that someone on the team wrote - and they have thoughts on pitfalls to avoid if they were rewriting it? Are you banging your head against an elusive bug that "will just take time" to tease out? The point of the standup is to shine light on any number of stupid pitfalls that every developer, even (especially) seasoned developers get stuck on, have dealt with in the past, and can give guidance with.

If you're just cargo-culting having a daily 15-minute meeting under the guise of agile or whatever, and it's just a status meeting, then cancel it, until after people learn to have a proper stand-up. Waking up just to go to a meeting and report "I'm still working on the thing", is a waste of everyone's time, and is a meeting that would have been better off as an email. (Provided people can send that email, which is not always possible, and is an entirely different topic.)


> your team should respond with "how can we help?"

this would kill the meeting at my company, it would go off the rails as a thing that everyone is present for and listening to. Moving it "offline" - as is often done - only occurs once it has gone sufficiently off the rails in the first place. Not saying there's anything wrong with this process, just pointing out it's counter to the "keep it short" discussion happening in this thread.


status report = you have to take responsibility for what you have done (or not done)

state what you are doing = what happens automatically if you sit in the same office with other programmers: you know what they are working on, you know if they are stuck with something because they usually just ask aloud, etc.


Not questioning your experience, but I've sat in a lot of engineering offices and had very little clue what the people around me were working on.

Standups really improved that aspect for me.


In a standup you should be able to say “i’m not really doing anything atm”, “i’m writing tests for x”, “i’m documenting y” without fear of someone asking you to justify yourself.


We use these three questions: what did you do? What will you do? Do you have any blockers? Usually the blockers part is useful to know if anyone is having issues with anything that others could help with.


What you did is typically not useful to share. This can be seen on the scrum board. What you do want to share is your experience: what was hard/easy/remarkable, or where you are stuck. Just reporting what you did quickly turns into defending your hours or something.


I do not understand why people have to wait to discuss the blockers? Discuss a blocker whenever you have one.


Wouldn't it be great if people were perfectly rational agents, gifted with objective vision and purged of all bias?

Of course you're right, you should discuss blockers whenever you have one. But people don't want to discuss blockers (or don't want to discuss at all), or don't identify something as a blocker, or would like to solve it themselves, or want to "protect the team" from this information, or think they'll get it resolved sooner without extra communication, or expect that they'll disagree on the course of action, or feel ashamed of having this blocker, or any other reason out of a hundred.

They won't rationally formulate it like I did just above, but it's just what people do: they get biased and their brain doesn't take the most rational course of action. A personal bias: I tend to prefer solving uncertainties by writing more code rather than talking to people. This is a stupid thing to do and I actively fight against it, but the fact is that I naturally tend to favour the "code" approach over the "communication" approach: the standup forces me to surface the problem and people can challenge me.


> Discuss a blocker whenever you have one.

But that would require me interrupting one or more people in the middle of whatever they are doing and possibly ruining their flow. Unless there is a very tight deadline, work on something else and bring up your blocker when you know the relevant people have time to listen.


I also prefer not to interrupt others or to be interrupted. But if you send your problem as an e-mail, people can answer at their convenience.

My own experience is that frequently, the act of thinking about an issue long enough to be able to formulate a coherent e-mail about it makes the solution jump out at me before I even send the message.


In truth I normally do both. I'll send an email flagging that there is a blocker I want to discuss at the next relevant opportunity.

And I've also found that writing that email leads to me solving the problem at least 50% of the time (same with writing forum posts or StackOverflow questions).


It sounds like your team doesn't have a chat tool (Slack, Teams, etc), or if they do, they're using it wrong.


If it can't wait, sure, ask for help right away.

But there are levels of blockers. Most are not emergencies.


this is legit. sometimes i feel like i wait a little too long to gather notes or brainstorm possible solutions when i could probably get that going faster by involving a colleague and tag teaming it. it is a balance i am trying to work on because i feel like i “don’t want to bother anyone” a lot.


Standups allow you to see blockers before they are there just by having a bigger picture of what’s happening.


I think it's the role of the lead dev to help newbies and coordinate work if needed. It can also be discussed at some meeting where the PO would present future tasks.

I don't see that as an immutable rule, it's all project and team dependent. And I see how in a remote world a "standup" can be beneficial.


It's especially useful with the newbies, as they are most likely to attempt to reinvent the wheel, due to lack of knowledge/experience.

Also, in the age of WFH stand-ups are a replacement for lunch conversations, the most rudimentary block of team building. I think that if you're not doing stand-ups or something like that since March you're probably losing team coherence.


There is not much wrong with reinventing the wheel. It is a less efficient use of time that often results in a beneficial serendipity. The opinion that reinventing the wheel is somehow a supremely evil satanic ritual is what prevents original solutions and allows expert beginners to become shitty decision makers.


Re-inventing the wheel can be useful for learning, but can have very real costs, often in the form of production outages.

* "You can usually use user metadata field X for billing" - except for those users for whom field X actually maps to something else, for tech debt reasons. (Is it stupid and bad? Yes. Is anyone going to be able to fix it this year? No. Is this going to result in Very Big Customer TM getting mad? You Betcha.)

* "Oh, I'll just roll my own fake of Foo" - congratulations, now anyone looking for a fake needs too decide between yours and the other one. (Yes, this is highly context dependent, but the moment you have multiple fakes in common/util libraries this usually starts being a problem.)

* "I can just use raw DB writes for this, because I don't want to learn how to use this API" - except the abstraction exists because it guarantees you can do safe incremental, gradual pushes and roll back the change, whereas your home-rolled implementation had a small bug and now the oncaller needs to do manual, error-prone surgery on the backup instead of the usual undo button built into the API. (Oh, and legal is going to have a field day because there's no audit record of the raw writes' content.)

Cargo-culting is bad, yes, but reusing existing abstractions is often important because they handle (or force you to handle) the various edge cases that someone learned about the hard way.

And of course, if you find a bug in the existing abstraction, well, congratulations- you just found a repro and root cause for that infamous support case that's been giving everyone data integrity nightmares for months.


> often in the form of production outages

Completely unrelated. If you have production outages resulting from new code you have serious gaps in your certification process, especially so if the new code covers existing process/requirements. You are probably insecurely reliant upon dependencies to fill gaps you haven’t bothered to investigate, which is extremely fragile.

The benefit of code reuse is simplification. If a new problem emerges with requirements that exceed the current simple solution, you have three choices:

1. Refactor the current solution, which introduces risk.

2. Roll an alternative for this edge case and refactor after. This increases expenses but is safer and keeps tech debt low.

3. Roll an alternative for this edge case and never refactor. This is safe in the short term and the cheapest option. It is also the worst and most commonly applied option.


> If you have production outages resulting from new code you have serious gaps in your certification process

If you have production outages every week, yeah. But no organization is free of production outages. When they do happen (I said when, not if) it matters a lot if you used standard libraries, code that is plugged into the infrastructure, and the like, and not hand-rolled cowboy code.


> it matters a lot if you used standard libraries

Why? From a security perspective, an outage is a security matter, the remediation plan is what’s important.


You are totally right, but probably assuming you have top-notch and easily discoverable documentation that can guarantee that newbies won't spend 15 days reimplementing "that bash script you need once a year or so" that does the job in 2 minutes.

However, even in this case you might get lucky and end up with a new script that does the job in 30 seconds, and everybody on the team will have learned that documentation is very important.


It's a matter of balance. No simple rule of thumb exists.

Sometimes it results in a much better solution, great! Sometimes it's a new wheel but with more awkward square-ish corners. Sometimes (and this is the worst because it's hard to explain) it is actually better but still likely has been a waste of the engineer's time - high cost vs low benefit, which also often co-occurs with a lack of team capacity to maintain the new thing.


We decided early on that a daily scrum was just too frequent to be effective for our team, so went to 2x per week. When we started working from home, we added a "keep in touch" meeting for the other 3 days of the week just to stay connected.


Yep, not being dogmatic about standups having to be daily, or indeed about any process, is invariably a win - do what works for the team!

I'm working in a small team just now, just 5 of us, and a short, daily standup is working well for us - and it's usually finished in 7 minutes or so.

In my last project (where I wasn't leading the standups), the teams were bigger, 10 people in each, and they went.on.forever. Neither the scrum master nor the PM was strict about curtailing them or keeping them relevant. Everyone hated it.


Our standups (for a team of four) are about 15 minutes. Then we just leave the zoom on and co-work together out loud for an hour or two. It's pretty priceless the stuff that comes up during that time.


Always thought this would be useful, but never had a chance to experience it. What kind of stuff comes up?


I'm glad I reinvented the wheel that many times as a junior; it let me learn why some frameworks and solutions are the way they are, and made it extremely easy to pick up third-party solutions later on.


We'll never know the counterfactual, but in my opinion:

You could have learned the same thing more efficiently, with more support for why you didn't need to reinvent the wheel.


I've been a professional software developer for ten years, and I'm on a hiatus right now. One thing I've been doing with my time is reinventing a bunch of wheels, implementing broken clones of this library or that algorithm, and I've gained a tremendous amount of understanding of what's going on behind the scenes of tools I'd been using for years. So yeah, it's a valuable learning tool to build with your own hands.

Also, a valuable guide when deciding whether to write code yourself or introduce yet another library to your dependencies.


Before getting sucked into the enterprise world, I reinvented a bunch of wheels and this indeed gave me a deeper understanding of the inner workings of various frameworks and technologies. I noticed that my colleagues who had ten years or more on their resume rarely had any idea what I was rambling on about or the basis for the things they were using.

They were so busy trying to get stuff done that they never had time to explore. I don't mean blog-post or tutorial explore, I mean weeks and weeks of implementation and testing of patterns and low-level engineering. Building database engines from scratch or writing a compiler or a distributed message broker, for instance.


Let’s be honest, stand ups are there so we can get together as a team. Otherwise, engineers would be heads down working on their own things.

Benefits of “Oh I yea I’ve worked on the same thing before” are usually realized outside of stand ups in over the shoulder chats or slack.

Stand ups are a waste of time. There, I said it. But I like them, especially if you have a fun team.


Agreed!

Stand ups are a theater play to make the client believe a project is moving forward, while team members use their own private channels to talk about the real work.


For our team, we just do a weekly biz/dev meeting, then break into a pure dev meeting every Tuesday morning. The process takes about 30min - 60min a week and we can go into depth when needed. Plan the week and go do what we do. It works great for our small 5 man team.

Daily meetings seem excessive to me, even if they only last 5 minutes.


> Standups are useful for communication between a team in general, if kept brief.

Just once I'd like to work for a company that tries to stay in communication without so many explicit/manual/sync check-in gates.

* No stand-up; engineers required to write a 250-word-or-less blog post 2+ times a week.

* No announcing PRs, reviews &c to each other. Make watching the board a habit, one you "pull" rather than one that is pushed to you. Or use a company-provided chat-bot/tool to help surface changes as they happen, if you need that. The issue tracker should better dashboard whatever activity is happening, in general - indicate branch updates, PR changes, &c, clearly, across the board.

There's some value to using social processes to radiate all the changes happening, but I'd really like to see some camp out there that makes a go at mechanizing themselves. I think there are a lot of interesting possibilities, more enduring & valuable forms of communication that we have failed to even begin to explore.


A place I used to work had engineers documenting their work essentially in the form of a blog. It was actually really a useful habit, and reviewing the project blogs once each week made it really easy for me to find cases when I could help a colleague who was working on something I'd had experience with before.


I'd venture that the parent was talking about engineers "documenting" their work for managers, whereas I think you mean documentation for other engineers. Very different audiences and thus different things to say. (And widely different lifespans for the information.)

The latter no doubt is hugely useful (I'm on a long slow effort myself to get my co-workers to document their work more robustly). But writing status reports for managers/PMs on a weekly basis is, in my not so humble opinion, a complete waste of time for the company, and a sign of poor organization.


I was hoping both purposes would be served, at this mandatory level.

I would adore any engineer who writes more blog posts walking through what they're up to more technically.

How do you feel about every-day stand ups as a means for managers/PMs to check in on employees? My own impression has been that this is at least 50%+ of the reason for stand up, and to me, I'd far prefer periodic write-ins, rather than ephemeral, undetailed, synchronous communication.


Every time I mention the goodness of pencil and paper I get downvoted by so many youngsters.

Some people will always disagree about some points. It's in their nature.


Some of the greatest scientific work of all time was done on paper. Doing algorithm design on paper really brings home that you’re working with a mathematical object that just happens to have a mechanical interpretation.


I disagree. The best medium is a whiteboard or a blackboard :p.

(Really though, something about paper makes me afraid to "commit" things which make the pieces of paper no longer usable. Something made to be erased seems to be the trick for me).


And I find a whiteboard a bit more intimidating. It kind of implies performing your writings in public.

A notebook is a very personal thing. =)


I recently switched from a very whiteboard/paper heavy workflow to using a reMarkable tablet.

Holy shit this thing is good.

It's like an infinite notepad/whiteboard that auto syncs to the cloud, lets you define page layout templates, and renders PDFs and ebooks.

I've had it for just a few weeks and it's already the favorite piece of tech I own.


I use a rocketbook [1] to do something similar. They recently came out with a legal pad version and I love it. It's a little more work to convert notes into PDFs (have to manually take a picture) but it's a cheaper solution.

I've never used a reMarkable tablet but there's something off-putting for me about using tablets to take physical notes. IDK how to explain it, drawing apps are fine but physically writing symbols, or making charts, or writing notes? It just feels off. I like the rocketbooks because it's just a fancier way to implement OCR for handwritten notes, and the actions between paper and their product are nearly identical for me.

Maybe the reMarkable is able to handle this, just never tried it. It does look better than using something like an iPad for note taking.

[1] https://getrocketbook.com/


I know what you mean with taking notes on a tablet, but the reMarkable is very good at that aspect. Nothing at all like an iPad: the e ink makes it look like paper, and even the tip of the (passive) stylus feels like writing on actual paper (it "scratches" ever so slightly, even if it obviously doesn't actually scratch the screen).

It also doesn't do fancy stuff: it's black and white, and doesn't do OCR on device at all. It's basically just a notepad that is synced to your other devices (without the manual picture step, and you can get svg instead, etc)

To me it's to a notepad what a Kindle is to books: a single-purpose device that does its job very well.

Oh yeah, and about two weeks of battery life is pretty good.

I realize I'm starting to sound a bit like a sales rep... but I'm just a fanboy user.


I also have a rocketbook. It is an amazing mix of low and high technology, for a very cheap price.


To this end, I have a small, paper-sized whiteboard that I can scrawl on, kept on hand at all times. I bought a couple packs of the ultra-fine expo markers, and it’s been a boon. Great for quickly noting things down that don’t need to last.

Having both is important, though: you need to be able to preserve the things that matter, that you may need later on.


You can use the rocketbook app. No need to buy the notebook.

https://getrocketbook.com/


That's why phones have cameras.


I've recently replaced this with my reMarkable tablet. Pricey but I love it as a paper replacement. Very open and hackable too (if a bit fragile).


Same here. Well everything except retrospectives, which I think are generally wasteful or better done in small pieces. Was afraid I wouldn't see something about overdoing microservices, but I think the monolith line covers it enough.


Retros should be like recall elections: always available, never scheduled, with a high but achievable barrier to entry and specific veto principles both ways. The point of a retrospective is "something big happened and we should learn and adapt". They shouldn't be routine, because most weeks/sprints, nothing that big happened (or something that big happened too frequently/urgently to wait for a calendar). They shouldn't be too hard to trigger, or you're never going to get one (or results from one). The best process for this is "shit goes down and you should make appropriate dedicated space for it".

Edit to add: I want to emphatically contradict my metaphor in one way, which is that retros should be exactly the opposite of a recall election in terms of identifying/naming/assigning fault. They should be about identifying good/bad outcomes and good/bad patterns, but not about pointing fingers at or casting aspersions on people.


We have a retro automatically if we have a production outage. This doesn't mean it's an everyone-must-attend in-person meeting. Most of the time it's just a writeup.

Otherwise, a lead or multiple senior engineers just exercise their judgment on when something serious enough has happened that the team needs to be aware of it or act on it.


It shouldn't take something big to reflect on what went well and what didn't. Or suggest a change.


Most teams are silo'd like it or not. A backend guy or two, a frontend guy or two, layers of management, product, qa, ops people.

If I'm a backend person, I'll talk to the other backend person if we messed up. If the frontend guys are lamenting among themselves, I find myself not really caring and feeling that time is being wasted.

There's zero reason that teamwide changes can't be proposed for discussion via email or slack.


It doesn't take domain knowledge to take responsibility for improvement. If your retros are taken up by lamentation maybe you should try to bring more focus.


Lots of companies have teams for back end, front end, ops, etc.

Slack is a good way for things to get lost in the noise or decided by whoever is in the channel at the time. Email doesn't have those problems but discussions can stretch out over days. And people speak more freely when there isn't a written record.


You’re right it shouldn’t. Those don’t need a recall election. Just a normal one. Elections should be like sprints too.


Agree about retrospectives. It's such a waste of time. Most of the time it's just there so that the scrum master can show off their new game and try to justify their usefulness.


A bit of a controversial take: if you need standups for this sort of communication, your work and work culture are way too siloed. With a flexible and collaborative culture, people will communicate these things naturally as part of doing their work. Issues that come up will get addressed as needed when they come up. If something is important, why would you wait for tomorrow's standup? If something isn't important, why are you dedicating an inflexible daily meeting to talking about it?

If your team is having the sort of communication problems standups are supposed to solve, it's a symptom of a deeper issue and standups are a bandaid solution. If your team already works collaboratively, standups are pure overhead at best and actively counterproductive at worst. It's easy to get into the bad habit of waiting for a standup to bring up important issues, which loses time and context. Worse yet, chances are the standup has too many people and not enough time to discuss anything in detail—I've seen so many standups where any actually useful conversation would be caught, stopped and moved to a different venue. You end up with a pro forma meeting where most of the information isn't useful to most of the attendees, but still breaks up people's schedules and focus.

In my experience, an emphasis on standups goes hand-in-hand with a view of engineering work as a ticket factory: individuals get a ticket off the queue, work just on that, get it done as soon as possible and pick up another ticket. I think that correlation is not a coincidence.


That seems like a reasonable concern to me, and something that could apply to almost any communications that are on a regular schedule, whether it’s a daily team meeting or an annual review with your boss.

My father said something to me when I was nervous before my first annual review in my first job, and it has stuck with me ever since: nothing anyone says in that review should ever be a surprise. Whether it’s good or bad, if your management are doing their job, everyone who needs to know about it should have known when it became relevant, not on the anniversary of your employment.

I suspect there is more value in some types of regular but short technical meeting at the moment, when many colleagues aren’t in close proximity at work and ad-hoc informal discussions are less likely to serve the same purpose. But as someone who’s primarily worked from home for years, I’d usually still prefer to arrange a group call or physical meeting with whoever actually needs to be there when there’s something specific to discuss, rather than assuming in advance that any particular tempo will be the right one.


When trying to schedule on an as needed basis, the next time slot where everyone is available together could be in several weeks. Especially if one or more participants are business types with impossible calendars. Standing meeting reserves a time slot and guarantees a topic can be discussed within N business days of becoming important. If there is nothing for the agenda that day then you cancel it and everyone gets some free time.


While you have a point, it's not that rare for someone to be blocked on a hard issue for a day or two, even having talked to someone, then bring that up at the stand-up. The one you talk to may not always have the solution, and it may be someone else in the team at large.


Email works great for that.


Better is better but a daily stand-up provides a common, catch all meeting with the entire team blocked off to participate.

Why wait? Well, you could shoulder tap (which we all hate), or you could email, ...or you could do something else in the meantime and bring it up in the daily.

Let's say you're blocked but don't know who to talk to. You could end up playing email tag or going out on a few manhunts as you jump from team member to team member looking for who knows what ...or you could bring it up in the daily.

It solves a lot of problems, even if it doesn't solve every problem.


> If senior engineer X is working on Y and other engineer M has already dealt with Y (unbeknownst to X), it's a great chance for X to say "I'm currently looking for a solution to Y" and for M to say "Oh I had to solve that same problem last month!"

I wonder why these "agile practices" shun expertise so much. Instead of M working on a similar problem as X in another month, why not make X an expert in the thing, so that everybody knows about him being an expert and consults with him on a regular basis.

Is it really better to have everybody with a shallow experience of everything rather than a few individuals with deep expertise in a particular thing?

And to have a meeting every day to "solve" this non-problem (somebody not knowing who the expert is, or supposed to be) seems really inefficient.


That's "Siloing" which is considered a negative pattern. If X is the expert, then only X can work on that something*. X is now a bottle neck.

Natural siloing happens, but you (team) should be striving to reduce it, not encourage it.

*More accurately: work can only proceed on that something when X is available*


My point is, it doesn't have to be black and white. If your expert is less available, or has too much of the same work, or you just want a backup, you just start training someone else to be an expert in that area too. It's not costlier than what you propose; it seems that it is always better to start with a designated expert rather than to dilute the expertise so much that no one really is an expert.


It does not work, at least it did not work for us. The problem is that in any single situation it is easier for the expert to solve it alone than to explain, and when there is a sudden need to explain, the expert can't do it effectively, because he hasn't had to explain for years.

Plus, you end up being in a limited box and have a harder time growing by learning new things - the project structure keeps you in the box and you can't easily expand it by taking on tasks to learn something new.

Finally, the expert is a sort of fake expert - an expert only because others are kept clueless. Not because he has such great knowledge objectively, but because we decided this is only his area. There is no other engineer to discuss issues with or to compete with.


Nobody has said it should be except for you. As I said it will occur but you should strive to spread information and learning as much as is reasonable.


Reason why I don’t want my team to overspecialize:

I don’t want them to isolate and develop tunnel-vision. I want everyone to be aware of the project goals and understand the work that needs to be done to deliver value.

My experience with teams where people are divided by topics for a long time is that unpleasant work that does not fit into a single topic well gets neglected.


> My experience with teams where people are divided by topics for a long time is that unpleasant work that does not fit into a single topic well gets neglected.

I suspect it's the case either way. If you neglect understanding and development of expertise, you will still end up with some people having more expertise than others, and possible blind spots. Except now you have no idea what those blind spots are. (https://news.ycombinator.com/item?id=10970937)


I think "agile practices" without pair programming misses 70% of the benefit.

If you're pairing, no one person becomes the only expert on something, and also no one person is left alone to solve all problems in an area.


Standups aren't always necessary, but you'd need a high functioning (read communicative) team. In other words, they serve primarily as a forcing function to make sure teams are acting like teams (communicating).

I find the most useful standups are asynchronous though. It's much easier for others to follow along (and ask follow up questions), and avoids statuses devoid of usefulness (or at least makes them very apparent).


Agreed. It’s a deliberate inefficiency to make sure you at least have a chance to communicate with your team on a regular basis. Otherwise you might go days or weeks without the chance to have a critical two minute conversation.


That sounds suspiciously like your team is not communicating enough in the first place though. I mean, I guess the profession does get its fair share of introverts, but I would have expected enough teamwork that everyone knows what everyone else is doing, at least roughly. At my last gig I remember we had three backenders and four frontenders on a game we were building and the stand-ups seemed superfluous, Ryan on the frontend knew all of the frontend tasks and their exact states, I on the backend knew all of the backend tasks and all of their states, stand-ups were more of a means to celebrate what folks had done and coordinate that info with our QA team.

My silver rule of meetings is “to make a meeting matter, make a decision.” If we were assigning new tickets and/or backlog, deciding who would own each of them, that meeting is valuable. Progress updates can be delivered asynchronously and consumed asynchronously, unless, say, one wants group applause.

Of course now covid exists and I changed jobs to a team that barely talks with me and daily stand-ups are kind of my only social contact with them, so that's less fun. But yeah, 100% the original vision of agile with the “developers should be meeting daily with the product users to clarify the underlying model and mold the software to their hands” should cause people to work together so much that stand-ups become something of an afterthought.


It sounds like you and your team are working on the same artifact and that your tasks are interrelated. That’s kind of a special case. At any given time our 3 engineers have maintenance tasks in flight on 4 or 5 distinct products.


If people are working on completely unrelated things, what do you get from a stand-up other than a group status update?


> That sounds suspiciously like your team is not communicating enough in the first place though.

That's probably true. There definitely exist teams that communicate well enough that the benefit of a standup is nearly nonexistent. But many teams aren't like that and have a handful of people who need a structured process for communication or they will struggle. Standups aren't the best solution, but they are an easily implemented way of getting a team part way there.


You and Ryan are set then (as far as you know). What about the other people on the team? Did they also have flawless insight into latest progress and next steps?


I agree the list is a very good list in general. My point of disagreement is:

> Software architecture probably matters more than anything else

The devil is in the details here, but the more I program, the more I feel that "software architecture", at least as it is often discussed, is actually not that important and often actively harmful.

The architecture driven approach takes the assumption that the "correct" shape of a program should fit into a pre-defined abstraction, like MVP, MVVM (or god forbid atrocities like VIPER) etc which has been delivered to us on a golden tablet, and our job as programmers is to figure out how to map our problem onto that structure. In my experience, the better approach is almost always to identify your inputs and desired outputs, and to build up abstractions as needed, on a "just-in-time" basis. The other approach almost always leads to unnecessary complexity, and fighting with abstractions.

The author also mentions SOLID - like architecture patterns, I'm always a bit suspect of true-isms about what makes good software, especially when they come in the form of acronyms. I generally agree that the principles in SOLID are sensible considerations to keep in mind when making software, but for instance is the Liskov Substitution Principle really one of the five most important principles for software design, or is it in there because they needed something starting with "L"?

After 10 years of programming, the biggest takeaway for me has been that the function of a program (correctness, performance) is orders of magnitude more important than its form. "Code quality" is important to the extent that it helps you get to a better functioning program (which can be quite a bit) but that's where its utility ends. Obsessing over design patterns and chasing acronyms is great if you want to spend most of your time debating with colleagues about how the source code should look, but the benefits are just not reality-based most of the time.


I consider software architecture to be extremely important and based on the other opinions of the author he probably agrees with you that it should be "just-in-time", as do I. Useless abstractions just get in the way and make things complex and hard to understand.

Yes, if a program doesn't function the way it's supposed to, it's useless. Unfortunately I've seen many developers take shortcuts and not even think about the software architecture (because "hey, it works doesn't it"). Good software architecture not only makes it much easier to build functioning software, it also makes the team function much better. The ability to maintain the software and quickly add new features depends on it. Even little things you do can contribute to a good software architecture.

Overengineering leads to a terrible mess, so does the "hey it works, I don't care about anyone who has to maintain it" mentality. Ideally you'd be somewhere in the middle. You shouldn't design everything up front and you shouldn't ignore things that are right around the corner either.


> If senior engineer X is working on Y and other engineer M has already dealt with Y (unbeknownst to X), it's a great chance for X to say "I'm currently looking for a solution to Y" and for M to say "Oh I had to solve that same problem last month!"

I personally hate stand-ups. We do get benefit out of them, but I think it also leads to people waiting for the next standup to communicate instead of fostering a culture of communicating more pro-actively.

In your example: why wait for a standup? Why not just drop a message in slack saying "working on Y and not sure how to proceed; any ideas?"

Personally, I don't see the need for standups as long as the team is open about sharing blockers as they come up instead of waiting for the next standup cycle.


That's exactly how Opsware Support worked when I was there: https://antipaucity.com/2011/09/15/the-ticket-smash-raw-metr...


We always run standups in the same way. What did I work on yesterday, what am I working on today and finally impediments or help required as well as general organizational stuff that might impact the team.


So do we, but generally a quick glance at the kanban board would show that for everyone. Well, it would if we had one unified board. Instead we have stuff scattered across multiple boards. Still it would only take 1-2 minutes to open them all up and glance through them. Information radiators. They work well.

Instead we have a 30-60 minute standup/sitdown/try-not-to-doze-off meeting to achieve the same thing. We used to also have additional meetings to review the boards but eventually cut those because people got tired of me saying "the status is still the same as I said an hour ago, because I've been in this meeting since then."

The exception is stuff like team announcements, reminders that someone's going to be out, requests for someone to volunteer to take a task. That can all be done async though.


> Designing scalable systems when you don't need to makes you a bad engineer.

> In general, RDBMS > NoSql

These two bullet points resonate with me so much right now. I'm a consultant and a lot of my clients absolutely insist on using DynamoDB for everything. I'm building an internal facing app that will have users numbering in the hundreds, maybe. The hoops we are jumping through to break this app up into "microservices" are absolutely astounding. Who needs joins? Who needs relational integrity? Who needs flexible query patterns? "It just has to scale"!
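
To make the "who needs joins" part concrete, here's a minimal sketch of what a relational store hands you for free on an internal app like this. The users/invoices schema is hypothetical, and SQLite (via Python) is only standing in for whatever RDBMS you'd actually use:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")

    conn.executescript("""
    CREATE TABLE users (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE invoices (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),  -- relational integrity
        amount  REAL NOT NULL,
        paid    INTEGER NOT NULL DEFAULT 0
    );
    """)

    conn.execute("INSERT INTO users (id, name) VALUES (1, 'Ada'), (2, 'Grace')")
    conn.executemany(
        "INSERT INTO invoices (user_id, amount, paid) VALUES (?, ?, ?)",
        [(1, 120.0, 1), (1, 80.0, 0), (2, 300.0, 0)],
    )

    # A join plus an ad-hoc aggregate: no need to have designed this
    # access pattern (or a secondary index) up front.
    rows = conn.execute("""
        SELECT u.name, SUM(i.amount) AS outstanding
        FROM users u
        JOIN invoices i ON i.user_id = u.id
        WHERE i.paid = 0
        GROUP BY u.name
    """).fetchall()
    print(rows)  # e.g. [('Ada', 80.0), ('Grace', 300.0)]

    # Integrity is enforced by the database, not by application code:
    try:
        conn.execute("INSERT INTO invoices (user_id, amount) VALUES (99, 10.0)")
    except sqlite3.IntegrityError as e:
        print("rejected:", e)

With a key-value-first store you end up reimplementing each of those pieces (referential checks, denormalized copies per access pattern) in application code, which is exactly the kind of complexity a hundreds-of-users internal app doesn't need.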


As an engineer-turned-manager, I spend a lot of time asking engineers how we can simplify their ambitious plans. Often it’s as simple as asking “What would we give up by using a monolith here instead of microservices?”

Forcing people to justify, out loud, why they want to use a specific technology or trendy design pattern is usually sufficient to scuttle complex plans.

Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume, even if it’s not necessarily best for the company. It doesn’t help that some companies screen out resumes that don’t have the right signals (Microservices, ReactJS, NoSQL, ...). There’s a certain amount of FOMO that makes early-career engineers feel like they won’t be able to move up unless they can find a way to use the most advanced and complex architectures, even if their problems don’t warrant those solutions.


>Forcing people to justify, out loud, why they want to use a specific technology or trendy design pattern is usually sufficient to scuttle complex plans.

Does that really work?

Usually these guys read the sales pitch from some credible source. Then you need to show them that the argument is "X works really well for scenario Y", but your scenario Z is not really similar to Y, so the reasons why X is good for Y don't really apply. To do this you usually rely on experience, so you need to expand even further.

And the other side is usually attached to their proposal and starts pushing back, and because you're the guy arguing against something and need a deep discussion to prove your point, chances are people give up and you end up looking hostile. Even if you win you don't really look good - you just shut someone down and spent a lot of time arguing; unless the rest of the team was already against the idea, you'll just look bad.

I just don't bother - if I'm in a situation where someone gives these kind of people decision power they deserve what they get - I get paid either way. And if I have the decision making power I just shut it down without much discussion - I just invoke some version of 'I know this approach works and that's good enough for me'.


Yeah, god help you if a higher-up is a zealot about a technology. They will suggest it at every opportunity, and arguing against it makes you stand out like a sore thumb, so that after a while you wonder why you even bother.


> Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume

The sad thing is, they might well be right.

People used to not get hired for a job involving MySQL because their DB experience was with Postgres, but usually more enlightened employers knew better. Today, every major cloud provider offers the basic stuff like VMs and managed databases and scalable storage, and the differences between them are mostly superficial. However, each provider has its own terminology and probably its own dashboard and CLI and config files. Some of them offer additional services that manage more of the infrastructure for you one way or another, too. There is seemingly endless scope for not having some specific combination of buzzwords on an application even for a candidate and a position that are a good fit.

I don’t envy the generation who are applying for relatively junior positions with most big name employers today, and I can hardly blame them for the kind of job-hopping, résumé driven development that seems to have become the norm in some areas.


Agreed, I found it really hard to get good roles 5 years ago. Then I worked on some cool shiny stuff - in general I don't like microservices, k8s, React/JS, but it opens up a whole new world of jobs.


> As an engineer-turned-manager, I spend a lot of time asking engineers how we can simplify their ambitious plans. Often it’s as simple as asking “What would we give up by using a monolith here instead of microservices?”

Funny you mentioned this. I have the exact opposite problem.

That is, I am an engineer trying to push back against management mandating the use of microservices and microfrontends because they are the new “hot” tech nowadays.


On my reading, this is the exact same problem, not the exact opposite problem. The break-even bar for a reasonable monolith is a lot lower than for microservices, so the GP's question is specifically asking, under a hypothetical where the team simply uses a monolith, what benefits the team would miss out on relative to microservices. If there are none, or they aren't relevant to the project scenario, then microservices probably isn't justifiable.

(I, too, am in the position of pushing back against microservices for hotness' sake.)


The point is it can be engineers pushing back against managers. Not just managers pushing back against engineers.


Aha! Opposite in that direction; I misread. (I'm also an engineer pushing back on managers who want microservices.)


This. I'm a consultant and 90% of the time the technology has already been decided, before a line of code has been written, by our fancy management team who haven't written code in 10+ years. But they know the buzzwords like the rest of us and know that they sell.

Problem is they no longer have to implement, so they are even more inclined to sell the most complicated tech stacks that have marketing pages claiming they scale to basically infinity.


In my company we store financial data for hundreds of thousands of clients in a SQL db. It's a decade-old system and we have hundreds of tables, stored procedures (some touching a dozen+ tables), and rely on transactions.

It took me weeks to convince my managers not to migrate to a new hot NoSQL solution because "it's in the cloud, it's scalable and it also supports SQL queries".


> Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume, even if it’s not necessarily best for the company.

Probably nobody is using NoSQL for their resume. It's because picking a relational database, while usually the correct choice, is HARD when you're operating in an environment that changes quickly and has poorly defined specifications.

When you start seeing engineers have difficulty reasoning about what the data model should be and nobody willing to commit to one, it's the clearest sign that organizationally things are sour and you need to start having very firm and precise conversations with product.


I'm facing this issue now. App is supposed to deliver to clients after this sprint - and the data model still isn't locked down. After arguing through about 10 hours worth of meetings this week, I think I need a new job.


The best thing about the lockdown is that you can apply, interview and change jobs without leaving your desk at home =)


My condolences.


There are only two options for me: MySQL or Postgres.

And using AWS generally means using Aurora. Then the choice is already made. Not hard at all.

Yep, my work involves heavy use of SQL and I find it better than the NoSQL insanity.


Just curious, for what reasons would you choose MySQL over Postgres?


Personally, when I want speed or easy upkeep and intend on doing dumb simple things.

Postgres is more featureful, but if you don't intend on using those features, MySQL is consistently faster and historically smoother to update and keep running.


Also in the Enterprise, if you're doing a lot of sharding and replication across networks, Percona MySQL is a very compelling product. I say that as a Postgres diehard.


> MySQL is consistently faster

Unless you want to do a join.


Traditionally it was because you needed replication or sharding that you didn't have to boil half an ocean for, or at least half decent full text indices. These days however I believe the differences are smaller and in other areas.

Most often, you choose a database because of what your application supports and is tested with, not the other way around. Or what your other applications already use. Complete green fields aren't all that common.


Replication.

And our DBAs are already familiar with the gotchas of MySQL.

You just have to write queries in a different way (subqueries are slow, so they are to be rewritten as joins).
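
For anyone who hasn't hit this particular MySQL gotcha, here's a rough sketch of the rewrite being described - a made-up orders/refunds schema, and SQLite only to show that the two forms answer the same question (the performance gap was a quirk of older MySQL optimizers, which this toy example obviously can't reproduce):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE orders  (id INTEGER PRIMARY KEY, customer_id INTEGER);
    CREATE TABLE refunds (order_id INTEGER);
    INSERT INTO orders  VALUES (1, 10), (2, 11), (3, 10);
    INSERT INTO refunds VALUES (2), (3);
    """)

    # Subquery form (the shape old MySQL versions optimized badly):
    subquery = """
        SELECT id FROM orders
        WHERE id IN (SELECT order_id FROM refunds)
    """

    # Equivalent join form (the rewrite mentioned above):
    join = """
        SELECT DISTINCT o.id
        FROM orders o
        JOIN refunds r ON r.order_id = o.id
    """

    print(sorted(conn.execute(subquery).fetchall()))  # [(2,), (3,)]
    print(sorted(conn.execute(join).fetchall()))      # [(2,), (3,)]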


Probably for the same reason people sometimes choose SQLite over MySQL: simplicity. There are many pros and cons for both of them!


> It's because picking a relational database, while usually the correct choice, is HARD when you're operating in an environment that changes quickly and has poorly defined specifications.

Wouldn't this apply if you are using a statically typed language too? What's harder about changing the schema in the DB?


You (mostly) don't have to deal with data migrations with statically typed languages. Releasing a new version of some code is usually easier than making structural changes to a database that's in active use.


> Releasing a new version of some code is usually easier than making structural changes to a database that's in active use.

Yes, and on top of that, code-only changes need to be internally consistent to make sense but DB schema changes almost inevitably require some corresponding code change as well to be useful. Then you have all the fun of trying to deploy both changes while keeping everything in sync and working throughout.
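
The usual workaround is an expand/backfill/contract rollout, so that every deploy is compatible with the schema on both sides of it. A rough sketch, with a hypothetical users table and SQLite standing in for the real database:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT);
    INSERT INTO users VALUES (1, 'Ada Lovelace');
    """)

    # 1. Expand: add the new column as nullable, so currently deployed code keeps working.
    conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

    # 2. Backfill while both columns exist.
    conn.execute("UPDATE users SET display_name = fullname WHERE display_name IS NULL")

    # 3. Deploy code that writes both and reads display_name with a fallback to fullname.

    # 4. Contract: only after every caller has switched, drop the old column.
    #    (Left commented out here; on older SQLite this meant rebuilding the table.)
    # conn.execute("ALTER TABLE users DROP COLUMN fullname")

    print(conn.execute("SELECT id, display_name FROM users").fetchall())  # [(1, 'Ada Lovelace')]

It works, but it's several deploys' worth of ceremony for what would be a one-line change in code alone, which is exactly the asymmetry being described.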


You've hit on something there as well, but essentially it comes down to forced rewrites and flexibility. We tend to choose the more flexible systems to avoid forced upfront work when changes are needed even when it's the wrong choice for the project in the long run.


Eh, not so much databases, but in terms of code, super flexible 'all things to all people' generic abstractions tend to be a lot more work and a lot more difficult to debug than a tight solution tailored to the problem it's solving, written in domain terminology.

If only I had a nickel for every hour I've spent debugging abstractions and indirection that were just there for the sake of adding flexibility that would never be needed.


I'm in agreement with you actually. I was clumsily suggesting that that kind of up-front flexibility bites us later, and that it's a bad impulse often followed.


Is every NoSQL database non-relational, and every relational database SQL? It sure seems to me that you could have a relational database without SQL. Something non-text could be nice. It might be a binary data format or compiled code.

One might even convert SQL to binary query data at build time, with a code generator. It could work like PIDL, the code generator Samba uses to convert DCE/RPC IDL files to C source with binary data. Binary data goes over the wire. Another way is that both client code and server code could be generated, with the server code getting linked into the database server.


Which parts are hard?


I like to half-jokingly assert that microservices are a psyop to sell cloud hosting.


If I were an evil tech giant, I would open source a bunch of libraries that require significantly more effort to use than necessary, and pitch them as the One True Solution. Just to slow my competitors down.


I had a nemesis who would steal all my ideas.

So I bought all the XP books, dog eared them, left them on my desk. My team nearly mutinied. I asked them to wait and see. Two weeks later, nemesis announced his team was all in for XP, Agile, pair programming, etc.

They never recovered, didn't make another release.

I tossed my copies, unread.


I'm praying this story is true, very funny.


With a competent team, XP should work really well, right? So what happened?


Ah, I see you've used k8s.


Not a joke. When I worked at Pivotal Labs, sales / executives were very excited about the synergy between helping clients build microservice architectures and selling them Cloud Foundry.


I've long thought that Java's "write once, run anywhere" really was a desperate attempt to save Sun's legacy server market from doom.


I bet you're half-right as well.


> “What would we give up by using a monolith here instead of microservices?”

I love that.

I think many people are loath to "turn the argument around", and pretend they're going the other way.

For example, imagine some legacy app used by 10 people out of 10,000 is incompatible with something like a Microsoft Office upgrade.

In many organisations, the argument goes like this: "We can't upgrade to Office 2023 because StupidApp will break!"

Turning that around: "If Office 2023 was already rolled out, would you roll that back to Office 2021 just for StupidApp?"


> Frankly, many engineers want to use the latest trends like microservices or NoSQL because they believe that’s what’s best for their resume

Then they are bad engineers. It is true that it’s best for their resume, but I also have my professional integrity to maintain.


That integrity isn’t worth much if you can’t get hired.


The question one should ask is: what $hot_tech can we adopt without the product becoming significantly worse than with $old_tech? That is, which things do we adopt only or mostly to make the project or company attractive to work with or invest in?

Saying “why would you ever do that rather than building the solution the cheapest and with the lowest risk” doesn’t fully appreciate the importance of attractiveness.

I’m selling my hours of work time for a salary, fun, and resume points. My employer pays me in all 3. I’ll always push for $fun_tech or $hot_tech despite it not always being in the short term interest of anyone but myself or my fellow developers. I’ll keep justifying this by “if we do this in $old_tech then me and half the team will leave, and that’s a higher risk than using $new_tech”.

(By tech I mean things like languages and frameworks here, not buzzwords like microservices, blockchain, AI...)


IMO it's a no-brainer to choose the complex technology: the incentives are much more attractive.

Go the simple way and, if it works, you get paid and move on to the next project; if it doesn't work, it's your fault.

But on-the-job experience with scaling tech is valued way more than doing some online course. You get paid to learn by doing on company time, and you don't lose anything by possibly wasting company resources. So you tick the "proven track record of <insert scale buzz here>" box that job postings often have, which could possibly lead to a much better salary. It's all incentives.


To me this is a little bit weird, because while OP is totally correct that monoliths are fine too when they're the best tool for the job, the default should still be microservices. They're not really harder to use once you have the practices in place, and the advantages will usually become quite visible in time. But of course there are times when there are great monoliths you can just use, and then you should use them.


There are challenges with microservices that lead me to build monoliths by default unless that's not viable.

Things that are trivial in monoliths are hard in microservices, like error propagation, profiling, line-by-line debugging, log aggregation, orchestration, load balancing, health checking and ACID transactions.

It can be done but requires more complex machinery and larger teams.


Do you mean “a monolith” or “the monolith?” The essential characteristic of monoliths is that you don’t get to start new ones for new projects.

The real skill of architecture is understanding everything your company has built before and finding the most graceful way to graft your new use case onto that. We get microservices proliferation because people don’t want to do this hard work.


I don't understand splitting an API into a bunch of "microservices" for scaling purposes. If all of the services are engaged for every request, they're not really scaled independently. You're just geographically isolating your code. It's still tightly coupled but now it has to communicate over http. Applications designed this way are flaming piles of garbage.


The idea is that you can scale different parts of the system at different rates to deal with bottlenecks. With a monolith, you have to deploy more instances of the entire monolith to scale it, and that’s if the monolith even allows for that approach. If you take the high load parts and factor them out into a scalable microservice, you can leave the rest of the system alone while scaling only the bottlenecks.

All of this is under the assumption that you need to scale horizontally. With modern hardware most systems don't need that scalability. But it's one of those "but what if we strike gold" things, where systems will be designed for a fantasy workload instead of a realistic one, because it's assumed to be hard to go from a monolith to microservices if that fantasy workload ever presents itself (imho not that hard if you have good abstractions inside the monolith).


I understand how microservices work, but I'm referring to a specific kind of antipattern where an application is arbitrarily atomized into many small services in such a manner that there's zero scaling advantage. Imagine making every function in your application a service, as an extreme example.


This seems to be an example of a more general antipattern in software development, where a relatively large entity is broken down into multiple smaller entities for dogmatic reasons. The usual justification given is how much simpler each individual entity now is, glossing over the extra complexity introduced by integrating all of those separate entities.

Microservice architectures seem to be a recurring example of this phenomenon. Separating medium to long functions into shorter ones based on arbitrary metrics like line count or nesting depth is another.


Assuming every function is called the same number of times and carries the same cost, it would indeed be silly to cut up a system like that. But in the real world some parts of the system are called more often or carry a higher execution cost. If you can scale those independently of the rest of the system, that is a definite advantage.

For me the antipattern shows itself when the cutting up into microservices is done as a general practice, without a clearly defined goal for each service that needs to be separate.

(And by the way, I've seen a talk before about an application where the entire backend was functions in a function store, exactly as you described. The developer was enthusiastic about that architecture.)


> you have to deploy more instances of the entire monolith to scale it,

That's a common argument for microservices and one that I always thought was bunk.

What does that even mean? You have a piece of software that provides ten functions, and running 100 instances of it is infeasible, but running 100 of one, 50 of three and 10 of six is somehow not a problem?

That must be a really marginal case of some VSZ-hungry monstrosity. While not an impossible situation in theory, surely it can't be very common.

There are plenty of reasons to split an application but that seems unlikely at best.


I have seen multiple production systems, in multiple orgs, where "the monolith" provides somewhere in the region of 50-100 different things, has a pretty hefty footprint, and the only way to scale is to deploy more instances, then have systems in front of the array of monoliths sectioning off input to the monolith-for-this-data (sharding, but on the input side, if that makes sense).

In at least SOME of these cases, the monolith could have been broken up into a smaller number of front-end micro-services, with a graph of micro-services behind "the thing you talk to", for a smaller total deployed footprint.

But, I suspect that it requires that "the monolith" has been growing for 10+ years, as a monolith.


> imho not that hard if you have good abstractions inside the monolith

And that is the big if! The big advantage of microservices is that they force developers to think hard about the abstractions, and they can't just reach over the border and break them when they are in a hurry. With good engineers in a well-functioning organisation, that is of course superfluous, but those preconditions are unfortunately much rarer than they should be.


Especially true when the services are all stateless. If there isn’t a conway-esque or scaling advantage to decoupling the deployment... don’t.

I had a fevered dream the other night where it turned out that the bulk of AWS’s electricity consumption was just marshaling and unmarshalling JSON, for no benefit.


I recently decided to benchmark some Azure services for... reasons.

Anyway, along this journey I discovered that it's surprisingly difficult to get an HTTPS JSON RPC call below 3ms latency even on localhost! It's mind-boggling how inefficient it actually is to encode every call through a bunch of layers, stuff it into a network stream, undo that on the other end, and then repeat on the way back.

Meanwhile, if you tick the right checkboxes on the infrastructure configuration, then a binary protocol between two Azure VMs can easily achieve a latency as low as 50 microseconds.
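
For anyone who wants to reproduce that kind of number, a rough way to measure it (a toy sketch: plain HTTP rather than HTTPS, stdlib only, and a do-nothing echo handler, so it shows the measurement approach and a lower bound rather than anything Azure-specific):

    import json
    import statistics
    import threading
    import time
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EchoHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep benchmark output clean
            pass

    server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_address[1]

    payload = json.dumps({"method": "ping", "params": [1, 2, 3]}).encode()
    timings = []
    for _ in range(500):
        start = time.perf_counter()
        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req).read()
        timings.append((time.perf_counter() - start) * 1000)

    print("median round trip: %.2f ms" % statistics.median(timings))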


A few years ago, my good friend used to say that the first two main duties of a financial quant library are string manipulation and memory allocation.


Exactly!


The first thing that comes to my mind is that there are different axes you may need to scale against. Microservices are a common way to scale when you're trying to increase the number of teams working on a project. Dividing across a service API allows different teams to use different technology with different release schedules.


I don't necessarily disagree, but I believe that you have to be very careful about the boundaries between your services. In my experience, it's pretty difficult to separate an API into services arbitrarily before you've built a working system - at least for anything that has more than a trivial amount of complexity. If there's a good formula or rule of thumb for this problem, I'd like to know what it is.


I agree. From my perspective, microservices shouldn’t be a starting point. They should be something you carve out of a larger application as the need arises.


People always talk about NoSQL scaling better, but some of the largest websites on the internet are MySQL-based. I'm sure some people have problems where NoSQL is genuinely an appropriate solution, but I find it hard to believe that most people get anywhere near that level of scale.


Exactly, and from a features standpoint Postgres can do everything Dynamo can do and so much more. I think a lot of software devs don't really know SQL or how RDBMSs work, so they don't know what they are giving up.


This is similar to how I feel about graph databases. Twitter (FlockDB) and Facebook (TAO) built scalable graph abstractions over SQL without a hitch.

Why would I want to use a graph DB directly then?


Postgres even has JSONB support, so if you really want to store whole documents NOSQL-style, you can - and you can still use all the usual RDBMS goodness alongside it!

Postgres really is a wonderful database.
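
A minimal sketch of that hybrid style, assuming a local Postgres you can connect to, psycopg2, and a made-up events table:

    import json
    import psycopg2

    # Assumes a locally running Postgres; adjust the DSN for your setup.
    conn = psycopg2.connect("dbname=demo")
    cur = conn.cursor()

    # Ordinary relational columns sitting next to a jsonb document column.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id      serial PRIMARY KEY,
            kind    text NOT NULL,
            payload jsonb NOT NULL
        )
    """)
    cur.execute("CREATE INDEX IF NOT EXISTS events_payload_gin ON events USING gin (payload)")

    cur.execute(
        "INSERT INTO events (kind, payload) VALUES (%s, %s::jsonb)",
        ("signup", json.dumps({"user": "ada", "plan": "pro", "referrer": "hn"})),
    )

    # Query into the document with the usual RDBMS machinery around it.
    cur.execute(
        "SELECT id, payload->>'user' FROM events WHERE payload @> %s::jsonb",
        (json.dumps({"plan": "pro"}),),
    )
    print(cur.fetchall())
    conn.commit()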


Those very large mysql deployments typically use it as a nosql system, with a sharded database spread over dozens or hundreds of instances, and referential integrity maintained by the business layer, not by the database.

For a good example of a high volume site using a proper rdbms approach I would look at stackoverflow. It can (and has) run on a single ms sql server instance.


Even if that's so, it still suggests RDBMSs are a good choice.

I do know that for Wikipedia, English Wikipedia is mostly a single master MySQL DB plus replicas, with most of the sharding being at the site-language level (article text contents are stored elsewhere).


A POC scales more easily; that's all that matters to win the idiot match.


hey.com is the latest one that is on mysql.


I was on a team which used DynamoDB for their hottest data set - which would trivially fit in RAM.


If I had a dollar for every senior engineer I've worked with who has never heard of SQLite...


Truth be told, I am yet to see a reason to use an in-memory database. Data structures - maps/trees/sets - yes. Concurrent/lock-free/skip lists/whatever - all great. I don't need a relational database when I can use objects/structs/etc.


I think that depends on what you're doing with the data. If you're just grabbing one thing and working with it, or looping through and processing everything, maybe not.

But if you're doing more complicated query-like stuff, especially if you want to allow for queries you haven't thought of yet, then the DB might be useful.

Sometimes a hybrid of query-able metadata in a DB along with plain old data files is good.

That depends very much on your data, how much things key to each other, and what you're doing with it.


> doing more complicated query-like stuff

That's some kind of fallacy - standard data structures would totally destroy any SQL-alike thing when it comes to performance (and memory footprint). I guess convenience depends on one's background - or how people tend to see their data. However, like I said, for close to 3 decades I have not seen a single reason to do so. On the contrary, I've had cases where an optimization of 3 orders of magnitude was possible.
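
For the simple in-process cases being discussed, that can be as little as building a plain hash index in a dict (a sketch with hypothetical order records):

    from collections import defaultdict

    # Hypothetical records you might otherwise stuff into an in-memory SQL table.
    orders = [
        {"id": 1, "customer": "ada", "total": 99.0},
        {"id": 2, "customer": "bob", "total": 15.0},
        {"id": 3, "customer": "ada", "total": 42.0},
    ]

    # "CREATE INDEX" equivalent: one pass to build a hash index by customer.
    by_customer = defaultdict(list)
    for order in orders:
        by_customer[order["customer"]].append(order)

    # "SELECT ... WHERE customer = 'ada'" equivalent: an O(1) dict lookup,
    # with no parsing, planning or row-format conversion in the way.
    print(sum(o["total"] for o in by_customer["ada"]))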


It's easier to find devs who know basic SQL than it is to find devs who know pandas or whatever your language specific SQL-like library is. And the more complicated the queries, the more the gulf widens.


I don't think pandas was the proposal here. I think "standard data structures" refers to arrays, hash tables, trees, and the like.


Performance is not god. It is not the altar at which we sacrifice all other considerations.


> for close to 3 decades I have not seen a single reason to do so

Evidently you don't have a dataset far exceeding the amount of RAM you can afford.

For a good example look at LMDB.


ACID transactions, validations & constraints, and the ability to debug/log by dumping your data to disk which can then easily be queried with SQL.

All of the same reasons you would store relational data in a dbms...


> ACID transactions, validations & constraints

There is no D from ACID here. For the D to happen, it takes transaction logs plus a write barrier (on the non-volatile memory). Doing atomic, consistent and isolated is trivial in memory (esp. in a GC setup), and a lot faster: no locks needed.

Validations and constraints are simple if-statements; I'd never think of them as SQL.


It sounds like you're talking about toy databases which don't run at a lot of TPS. Let me point out some features missing from your simple load a map in memory architecture.

You also have to do backup and recovery. And for that, you need to write to disk, which becomes a big bottleneck since besides backup and checkpointing there is no other reason to ever write to disk.

Then, you have to know that even in an in-memory database, data needs to be queried, and for that you need special data structures like a cache-aware B+tree. Implementing one is non-trivial.

Thirdly, doing atomic, consistent and isolated transaction is certainly trivial in a toy example but in an actual database where you have a high number of transactions, it's a lot harder. For example, when you have multiple cores, you certainly will have resource contention, and then you do need locks.

And last thing about gc, again, gc is great, but there has to be a custom gc for a database. You need to make sure the transaction log in memory is flushed before committing. And malloc is also very slow.

I'd suggest reading more of the in-memory database research to understand this better. But an in-memory DB is certainly not the same as a disk DB with a cache, or a simple hashmap/B+tree structure.


> And malloc is also very slow.

Isn't one of the advantages of a GC environment that malloc is basically free? Afaik the implementation of malloc_in_gc comes down to

    // Bump-pointer allocation: hand out the current free address...
    result_address = first_free_address;
    first_free_address += requested_bytes;  // ...and advance the free pointer
    return result_address;
It's the actual garbage collection that might be expensive, but since that process deals with the fragmentation, there is no need to keep a data structure with available blocks of memory around.

That's also the reason why, depending on the patterns of memory usage, a GC can be faster than malloc+free.


>It sounds like you're talking about toy databases which don't run at a lot of TPS.

The original talk was explicitly about SQLite and in-memory databases; no idea where you got the rest from.


Correct. So we're talking about in-memory databases like MongoDB, and all of the things I listed here are true about MongoDB. For example, MongoDB migrated its database memory manager away from mmap and towards a custom memory manager (the point being that GC and memory management for databases is not something you can just use JVM or operating-system constructs for).

https://docs.rocket.chat/installation/docker-containers/mong...

I'm happy to justify every single point I made with research papers.

Lastly I know I came off as a bit condescending. Just having a bad day, nothing personal. But you should read more about in mem dbs.


You _can_ have forms of durability if you wish to. You can get "good enough" (actually fairly impressive...) performance for most problems (vs. purely in-memory) with SQLite by making memory the temp store, setting synchronous=NORMAL and turning on WAL. Then fsync only gets called at checkpoints and you have durability up to the last checkpoint.
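
Roughly what that looks like in practice, as a sketch with the standard SQLite pragmas (with synchronous=NORMAL in WAL mode, syncs happen at checkpoints, so a crash can lose the tail of the log but not corrupt the database):

    import sqlite3

    db = sqlite3.connect("app.db")

    # Write-ahead log: readers don't block writers, and fsync work is deferred.
    db.execute("PRAGMA journal_mode=WAL")
    # NORMAL in WAL mode syncs at checkpoints rather than on every commit.
    db.execute("PRAGMA synchronous=NORMAL")
    # Keep temporary tables and indices in memory instead of on disk.
    db.execute("PRAGMA temp_store=MEMORY")

    db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    with db:
        db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("answer", "42"))

    # Force a checkpoint explicitly when you want a durable point on disk.
    db.execute("PRAGMA wal_checkpoint(FULL)")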


Oh, that’s nothing. My company took over a contract from another company that had two DBA writing a schema to store approximately one hundred items in a database! We converted it to a JSON file.


I've definitely had to push back on engineers wanting to use Redis for caching data they just pulled from the database. "Just store it in RAM guys..."
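
The "just store it in RAM" version can be as small as a decorator around the fetch call - a sketch with a hypothetical load_user_from_db stand-in, and only sensible when slightly stale, per-process data is acceptable:

    from functools import lru_cache

    def load_user_from_db(user_id: int) -> dict:
        # Hypothetical stand-in for the real database query.
        return {"id": user_id, "name": "user-%d" % user_id}

    @lru_cache(maxsize=10_000)
    def load_user(user_id: int) -> dict:
        return load_user_from_db(user_id)

    # First call "hits the database"; repeat calls in this process are served
    # straight from local memory - no Redis round trip, no serialization.
    assert load_user(42) is load_user(42)

    # Invalidate explicitly when the underlying row changes.
    load_user.cache_clear()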


Eh, the second one is probably the only point I was kind of meh on. You should almost always start with an RDBMS, and it will scale for most companies for a long long time, but for some workloads or levels of scale you're probably going to need to at least augment it with another storage system.


I think, it's mostly a question of education.

Universities taught SQL for years, so everyone knows it and its edge cases.

NoSQL databases are all different AND they weren't all taught for decades.

If you put real effort into learning a specific NoSQL database and it is suited for your problem things work out pretty well.


Depends? If you know you are going to need that scale you can take it into account when selecting your RDBMS technology/setup.


I can't think of a worse decision than trying to use DynamoDB just for the sake of using it.


I’ve seen similar issues where people got stuck on Mongo because it’s easy to install.


In my own different comment I highlighted the same two points with the opposite conclusion haha!

I find DynamoDB to be unnecessary, but I prefer NoSQL systems.


Are there other constraints that might make DynamoDB a good fit? For example, I made an app at a client. We could use RDS or we could use Dynamo. I went with Dynamo because it could fit our simple model. What's more, it doesn't get shut off nightly when the RDS systems do to save money. This means we can work on it when people have to time-shift due to events in life, like having to pick up the kids.


The problem with NoSQL is that your simple model inevitably becomes more complex over time and then it doesn't work anymore.

Over the past decade I've realised using an RDBMS is the right call basically 100% of the time. Now that pgsql has jsonb column types that work great, I cannot see why you would ever use a NoSQL DB, unless you are working at such a crazy scale that Postgres wouldn't work. In 99.999% of cases people are not.


There are specific cases where a non-SQL database is better. Chances are, if you haven't hit problems you can't solve with an SQL database, you should be using an SQL database. Postgres is amazing and free; why would you use anything else?


People keep saying there are specific cases where NoSQL is better, but never what any of those cases are.


Time series is one. Consider an application with 1000 time series, 1 host, and 1000 RPS. You are trivially looking at 1M writes per second per host. This usually requires something more than "[just] using an RDBMS".


Here you go, this is from a system I helped building 10 years ago that is an eternity in tech - https://qconlondon.com/london-2010/qconlondon.com/dl/qcon-lo...


A bit more context: high-velocity transactional systems (e.g. any e-commerce store with millions of users all trying to shop at the same time). I helped build such a system 10 years ago; here is the presentation - https://qconlondon.com/london-2010/qconlondon.com/dl/qcon-lo...


We just ported a system that kept large amounts of data in postgres jsonb columns over to mongodb. The jsonb column approach worked fine until we scaled it beyond a certain point, and then it was an unending source of performance bottlenecks. The mongodb version is much faster.

In retrospect we should have gone with mongo from the start, but postgres was chosen because in 99% of circumstances it is good enough. It was the wrong decision for the right reasons.


Yep, I agree there are cases where mongodb will perform better. However, many use cases also require joins and the other goodness that relations provide.

So really the use case for mongo etc is 'very high performance requirements' AND 'does not require relations'.

Many projects may be OK with just one of those. But very few need to meet both of those constraints.

FWIW I've seen many cases which are sort of the opposite: great performance with mongodb, but then because of the lack of relations for a reporting feature (for example) performance completely plummets due to horrible hacks being done to query the data model that doesn't work with the schema, eventually requiring a rewrite to RDBMS. I would guess that this is much more common.


I found that for an EAV-type database, NoSQL is a much better match, as it doesn't require queries with a million joins. But that's a very specific case indeed.


> it doesn’t get shut off nightly when the RDS systems do to save money

If your company needs to shutdown RDS to save a couple of bucks a month, there's a much larger problem at hand than RDS vs Dynamo.


At scale it’s a little bit more than a few bucks. Across the board, we spend hundreds of thousands on ec2 instances for dev/test, so turning them off at night when nobody uses them saves you quite a lot of money.


I can't speak to your specific use case, but I can tell you that a relatively small RDS instance is probably a lot more performant than you think. There is also "Aurora Serverless" now which I've just started to play with but might suit your needs.

As far as what makes Dynamo a good fit, I almost take the other approach and try to ask myself, what makes Postgres a bad fit? Postgres is so flexible and powerful that IMO you need a really good reason to walk away from that as the default.


Aurora wasn’t allowed at the time. The system is a simple stream logging app. Wonderful for our use case. Dynamo for so far. Corp politics made the RDS instance annoying to pursue.


Almost 20 years of professional experience here: I fully agree with this list. I even want to add a few things:

> Clever code isn't usually good code. Clarity trumps all other concerns.

My measurement is "simple and clean" code. Is it simple and clean? No? make it so!

> After performing over 100 interviews: interviewing is thoroughly broken. I also have no idea how to actually make it better.

If a good developer you know, recommends someone they worked with: It's almost an instant hire. But for the rest, yeah, it's incredibly tough.

One last thing I like to remind myself: "Good enough is always good enough". Sometimes "good enough" has a low bar, sometimes high. You always need to reach it, but never want to go too far above it.


I'm also at 20 years, and he has hit the nail dead on.

I can add a few things:

  * Operational considerations dominate language choice.
  * Architecture too.
  * Politics, leverage and all that MBA crap dominate all of that.
  * Language zealots are net negative idiots and need taking outside and shooting.
  * Actually apply that to all zealots.
  * The root cause of the above is often insecurity; and it can be coached / cultured away.
  * If your lead/management doesn't get that then they're in the wrong job, get out.
  * If you see shady consultant types and "thought leaders" talking about something, then it's just the latest bullshitchain.
  * If you see a junior developer say "this is easy", then let them learn the hard way, but manage expectations.
  * If you see Senior level engineers constantly say "this is easy" when the problem isn't clear, then run for the hills
  * The 10x engineer is only true in special cases and does not generalise.
  * That ^ is why you need to work together so everyone is doing their 10x task.
  * Incompetent engineers and bad actors can act as negative 10x engineers.
  * The vast majority of people on the outside of the team (consultant types) are worse than the above.
With interviewing I have decent results just having an open form conversation, looking first for personality and drive and getting an idea of their technical knowledge by seeing how deep they can go (or if they even can go deep into a subject). References from someone competent dominate, mind.


One more from me, with 20 years of experience.

Some people are force multipliers. They make everyone around them more efficient. BUT people like this are hard to determine with Standard Performance Indicators.

Personal anecdote:

I had to fight tooth and nail to keep a junior coder in my team because "she wasn't performing well". Yes, she didn't commit often and her code quality was average at best. Tasks took longer than average to complete as well.

But what she DID DO was keep the two auteur seat-of-their-pants 10x coders in her team focused and on task, made sure to ask the correct questions and had them document their work. She also took over their customer-facing stuff so the socially awkward 10x pair didn't need to do that.

None of this showed up on standard performance indicators.

Now she's a team lead at the same company, managing the two 10x-ers.


Totally agree; I'm now very similar to your example, only I came from being one of "the" 10x-ers. Then I worked with another 10x-er at the same time as we had our first kid. With no sleep and no time to learn I couldn't remain a 10x-er, so I transitioned into a Tech Lead (and am now moving towards CTO).

Now I spend my time optimising/running the system of software creation and delivery, being the face and the glue - because hey, that's me.


> * Incompetent engineers and bad actors can act as negative 10x engineers.

I worked with an engineer, let's call him Mark, whose 'talent' was to get involved in everyone else's problems during the stand up. When I first joined, he was 6 months late delivering his own project, which was eventually a year late, and that was the reason. He was a drag on the entire team. Rather than stand ups being a way of learning what everyone else was doing and offering help later, it turned it into guarded, cryptic, monosyllabic updates, lasting seconds. He'd spend half an hour at someone's desk after the stand up, trying to get up to speed on 3-6 months in 25 minutes to knock out a 5 minute solution that never worked. He was asked to leave and productivity soared.

"Don't be a Mark" is still a phrase in our company.


> * The root cause of the above is often insecurity; and it can be coached / cultured away.

It's definitely not true. Very often the root cause of zealotry is simply passion - enthusiasm, especially for something new that you start to like more and more every day - and at some point you start to hold a false conviction: that it could solve all possible problems. Usually, zealots become more reasonable with time.


In thirty five years, I have encountered three zealots that became worse over time, because their zealotry was seen by upper management as dedication and resolve, so they were rewarded. As we all know in engineering, positive feedback is bad. It was awful. But luckily only three times in thirty five years, so I have that going for me.


I’ve been saying for a while I’m pretty sure I can hire decent developers based on a conversation and not the assault course hiring process that’s normally used.

Now I’m sure for certain roles, highly specialised stuff that isn’t true. But I tend to be hiring for teams building pretty standard stuff, line of business tools, general saas products, etc. For this you just need smart, reasonably experienced people who are easy to get along with and ask decent questions.


> If a good developer you know, recommends someone they worked with: It's almost an instant hire. But for the rest, yeah, it's incredibly tough.

A talk I watched recently by Ijeoma Oluo made a point I had never considered before: statistically, most people refer friends, and most friends are of a similar background, race, culture, gender, etc. It's not intentional- people aren't going out of their way to only refer people like them- but it's measurable. And those referrals are more likely to be hired (they're good people, that's why they were referred!)

Which means referral bonuses have a perverse incentive of making a company less diverse.

I don't even have a good answer to what to do about it, because you're right: referrals are a great way to hire good developers. It's just got this big worrying downside that leaves me bothered.


What is the upside of a highly diversified workforce? Where I work we're all white males aged 20-60, except accounting, who are white females around age 30.

What's bad about this? What value does it bring to diversify, what should we look for and why is it important? Or are we too small to need diversifying yet with only about 30 employees?


Lots of advantages to diversity:

- you get a wider range of product ideas coming from all parts of the org. You'd be surprised how many engineering decisions only favor the group of people who develop them; a homogeneous team is less likely to consider non-white-male demographics, leaving those users out as prospective clients.

- without diversity you have less diverse QA and testing. That means you're only testing your products against white people. Lots of famous products struggle with being used by people of color for that reason. For example, much early photo augmentation software didn't work for anyone who wasn't white.

- diversity begets diversity. The lack of diversity may (and often will) create a work culture that prevents a suitably qualified PoC from being hired. And even if they are hired, the lack of diversity may be unwelcoming. It's almost certain that lack of diversity will mean people make innocent faux pas comments to the one diverse hire, pushing them out of groups. They therefore get disenfranchised from the workplace, simply by a non-diverse culture being unaccommodating rather than outright racism.

Basically, end of the day, by not having diversity, you're likely both pushing away people outside your demographics, both as clients and employees, and also leaving money on the table as a result.


> you get a wider range of product ideas coming from all parts of the org

> you're only testing your products against white people

I think these depend strongly on the kind of software you're building. This may be a benefit if you're building a highly user-facing consumer app, a TikTok kind of thing. It's less likely to be useful if you're building an interbank payment platform.


Maybe. Or maybe they'd highlight a bug in your system with working with different numerical and units systems that might not have been considered, so that your system can be preemptively prepared for expanding into new regions.

But yes, it largely benefits user-facing or multi-regional software.


What if we're not building software and not operating on a global scale? Is it equally important because of ethics or is it just when a different perspective might be useful once going global?


So I think it would come down to a few things then.

Certainly I think the ethics are important. Mostly because highly homogeneous groups tend not to be inviting for outsiders to the group. Exclusion may be intentional but go undetected due to the lack of diversity, and even harder to discover are the implicit biases that form more strongly in homogeneous groups. So even if you're not actively pushing out minorities, you may be passively doing so, which is IMHO unethical when knowingly allowed to fester.

But from a business perspective, this means you're dramatically reducing your hiring pool, even if unintentionally done. So you may be missing out on a lot of people who may improve your product.

Now of course hypothetical value is hard to quantify, but you can quantify how many people you're potentially excluding. A good way to do this is see how many percentage points your makeup is versus college graduates, especially local. It doesn't need to be a match but it also shouldn't be dramatically off.

Then repeat through each tier of your company. A lot of companies struggle with turnover even if their hiring is adequately diverse. This is potentially due to years of forming homogenous in crowds that promote within themselves.

Therefore diversity can help identify procedural issues in your company that could result in better hiring and promotion practices, even if it doesn't lead to diversity itself.

The best thing to do here is collect data. A good analogy may be that you shouldn't test your software with low variance data sets. So why would you test your company with low variance data sets? It would highlight bugs in the system that is your company


Hope I don't step on a landmine here, please take the following in "good faith".

This is something I've struggled with understanding. With all jobs I've had (stacking shelves, software dev) the lack of diversity hasn't been something I've at least recognised as the faults of the team / company. The reason the product hasn't achieved the deadline / high-praise is usually down to bad management, bad technical decisions, over promising, etc. Now if we had more women on the team, or more non-whites that could have changed things, but I'm still left a little sceptical that would have made a big enough difference.

The other problem is what I call the "sports-team" problem (sorry if this has an actual name): when picking players for a football (soccer) team to, let's say, compete in the World Cup, you pick the best players you can get hold of - regardless of their "identity". If a diverse player doesn't want to join your team because of all the whites, then offer them more money if you think they are worth it. Why shouldn't this translate to software teams? Do you just end up with all the 10x "bros" and a bad product?

I get that more diverse = more moral. But does that mean your competition will be able to out-compete you? If that's the case then there's no hope if "go woke, go broke".


It depends.

I’m famously non-PC, but there are certainly scenarios where diversity is actually a large benefit.

If you are making a product, it makes sense to have a diverse team working on it so that it has the broadest appeal and usability - the Apple Watch not identifying the heart rate of black people and the Facebook image identifier misclassifying black people as monkeys being the most famous or prominent examples.


> Facebook image identifier misclassifying black people as monkeys

Tangential, but it’s interesting that facebook’s reputation is now so bad, it’s become a black hole that distorts responsibility enough that google’s bugs are pinned on them too :P


Wow. I totally misremembered that. Good catch!


> Apple Watch not identifying the heart rate of black people

Do you have a credible source for this claim? The one I found—from 2015 [1]—corrected the bit about skin color in their reporting but retained the bit about how Apple Watches can fail to work due the kind of pigment typically used for tattoos.

Perhaps you misremembered and should have mentioned Fitbit [2]?

1: https://qz.com/394694/people-with-dark-skin-and-tattoos-repo...

2: https://www.statnews.com/2019/07/24/fitbit-accuracy-dark-ski...


Politically incorrect opinion incoming: the most efficient teams are homogeneous for the same reasons that military battalions are best off being homogeneous.

1. it improves communication

2. shared experiences and culture

3. overall better team cohesion and culture building

I would go almost as far as saying that diversity is a red flag for a startup and that diversity starts to have benefits only in bigger companies


This has to be one of the most outright bigoted posts I've seen on HN.

It implies that only outright homogeneous cultures are good. So is a white woman a negative in a work culture with a white man, because she cannot relate to being a man?

Or a black man can't work with a white man because he can't relate to being white?

Or do you mean if I'm from a foreign country, legally allowed to work in America, that I am a negative because I don't share a common upbringing story?

Should people from different states not work together?


Asking rhetorical questions of this nature is not achieving anything except airing your hurt sensibilities. People can have opinions other than the ones you hold and I already outlined fairly clearly in my original post what I think.

Heterogeneity is bad for startups because of the need to have no friction communication and shared goals/ideals/experiences. It does _not_ mean that diversity is bad in a big company or overall.


You haven't outlined where you consider the extents of "being homogeneous".

Otherwise "homogeneous" is whatever demographic you subscribe to, and is entirely a self-serving concept.


> Otherwise "homogeneous" is whatever demographic you subscribe to, and is entirely a self-serving concept

Whats wrong with this? I like working closely with people that understand me just by body language and completely frictionless communication. You'd be surprised how valuable that is when you're solving a P0 breakage at 3am.


So you're saying you can't get on equally well with people outside your demographic? You wouldn't even give them a chance based on not being part of your demographics?

There's also a difference between how you pick your friends and how you hire employees. You were advocating that hiring homogeneously is beneficial. In and of itself, that's discriminatory.

And still you refuse to commit to what is the extents of "homogeneous". Is it ethnicity? Language? Gender? Sexuality? Nationality?

Your post also suggested that heterogeneous workforces, even if you qualify it as applying only to startups, work at an inferior level to homogeneous ones. Again, that's implying that different demographics can't work well together. But clearly that can't apply across the board, or women and men could never work together. So what's the extent of your statement?


To be perfectly fair with the parent; It _is_ a very western ideal about heterogeneity being highly valued.

I think tying emotions to it does us little favours - a prominent, successful country that does not value heterogeneity at all is Japan.

Does Japan outcompete per capita?

(The answer is no).

Not sure if there are other examples of note here.


There's a difference between not valuing heterogeneous workforces and valuing homogeneous ones. The parent's comment is the latter.

Also, you have to view the concept of a heterogeneous workforce relative to the country's demographic makeup.

A diverse country having a non diverse work force make up is odd statistically.


> A diverse country having a non diverse work force make up is odd statistically.

I agree, but I would also add that a company exhibiting the exact diversity representation as the surrounding country is also very odd, statistically.


The research doesn't support that at all: https://hbr.org/2016/11/why-diverse-teams-are-smarter


That first study kind of put me off the article. Black defendant, white victim. Why did they choose this specific scenario, one so exposed to racism, as the way to measure whether the group was better or not? It makes sense that if you have to collaborate, whites and blacks will both have a bias towards defending their own, so they'd have to make up better arguments and highlight more facts to get their way.

I might've jumped to conclusions, but I wouldn't read too much into that example.


This is a question I wrestled a lot with. Growing up, diversity was not something I thought a lot about, and in any case I have always considered myself a hard-nosed type of thinker, for whom concerns about quality and the bottom line should _always_ outweigh the messy questions of identity politics.

But these days I work at a much more diverse organization than the one in which I started my career and what I have realized is: it's better. Better teams, better business, better tech. Far from being a sop to some sort of social-justice party line, diversity in the workforce has made every member function at a higher level.

Social science research has pretty clearly shown that more diverse organizations tend to be more profitable ones, and my own experience confirms this. I don't know the mechanism by which diversity brings this improvement about, but I have two theories as to possible contributing factors:

1) Diversity (at least intellectual diversity) forces us to defend our ideas. Homogenous groups in all pursuits tend to mistake their own way of doing things for some sort of iron law of the universe. Diversity can impose intellectual humility on members of the group and is a useful counter to our tendency to cargo-cult.

2) We pattern-match too aggressively in our evaluation of candidates, which leads us to inadvertently underrate people who don't match the pattern of the type of person we are used to thinking of as being competent. On HN, we talk about this all the time in terms of the software industry's folkloric approach to interviewing.


I actually see the cargo-cult tendencies quite a bit; I'm working as hard as I can to question everything we're doing and how we're doing it. Not a very rewarding task, but if you don't, you're obsoleted by others.


There was a paper I saw a few years ago that showed a high correlation between the diversity of a lab (racial, cultural, income, political, age, anything they thought to measure) and citations of published work.

My personal experience is the more diverse a team in software, the fewer blind spots the product will have. This might not be something you always care about, but I think in general it leads to better products, since all that input can be very valuable. I would say by the time you have 30 employees you have had a lot of opportunities to not hire an echochamber. Of course you’ll get some diversity just by hiring different roles. Even just having senior and junior devs rubbing elbows is a good start for diversity, and even if it doesn’t make your product better, you’ll find the two groups have different tasks that are morale tarpits, and you’ll tend to have a happier team.


Several benefits.

1. People with different backgrounds and experiences may notice important product features that you missed. The person who uses a screen reader is probably going to notice accessibility problems faster than the rest of the team.

2. There exist a lot of qualified people who aren't white. If your hiring process is failing to hire these qualified people due to internal biases then you are hiring suboptimally.

3. Social injustice is heritable and building a diverse workforce helps the world (in a small way) shift towards being more equitable.


> What is the upside of a highly diversified workforce?

It's harder to fail to cover use cases that the development team isn't aware of.

One example of this is name changes. Many products neglect this use case even though it's common[0] for women to change their family names after marriage.

[0] https://www.bbc.com/worklife/article/20200921-why-do-women-s...


It depends. Diversity of thought is extremely important to avoid complacency in a dynamically changing marketplace.

If your company is providing services to specifically white men and women in the 20 to 60 age group, you have a pretty good mix of people. You could use somebody who isn't in the target demographic like most of you to help you out of the demographic Johari window, but otherwise your diversity is perfect.

If you're trying to market to people who need food around the world, you will at some point hit a wall on how much you can understand all the different markets. For instance, what do a bunch of white guys from the suburbs know about how a Sub-Saharan African interacts with food markets?


If you don't see the value in having a diverse workforce and company at any scale, I doubt anything I can say will convince you. There's enough research out there showing the benefits, if you're willing to take just a few minutes to go look for it.

Your competitors will read that research.

edit: carlhjerpe is right- this is super condescending. Downvote me.


That's very condescending. I'm asking because I don't see why my colleagues would be any better than they are if they were black, brown, Jewish, Muslim, or female in various combinations - maybe I hold my colleagues in too high regard?


> That's very condescending

You're right. I apologize. Often times in tech, the people asking questions like that are uninterested in the answers.

My own view is that I have been blind to the advantages that my background (white, male, straight, upper-middle-class) has given me for my entire life over other friends and colleagues. And I'm trying to learn more, read more, and pay more attention to these things.

Ijeoma Oluo, who I mentioned above, comes from tech. She saw a lot of things that you and I wouldn't notice. Things that matter, and we don't even see it. So she writes, she talks, and she makes a lot of great points. And it's really hard to read and listen to her sometimes because she makes points that part of me does not want to hear.

So that's the ethical reason why it matters.

As to the original question: Diversity of backgrounds can lead to diversity of ideas. Not on every problem. Not every day. But often enough that it can matter. And it can happen in many cases that you and I would not expect or predict. More diversity of ideas leads to better solutions being found. That's the premise- you don't have to believe it and many don't.

In the 1980s, Frito-Lay's CEO, who had never imagined the Latino market, put out a call for ideas. A Latino janitor answered the call with his idea- Flamin' Hot Cheetos. It was a huge hit. Imagine how many markets they (and others) were missing because of a lack of diversity of ideas.


Thanks for this reply.

The Cheetos idea was good indeed. But that is a $1B company. Where I work we turn over about $7M a year. We're not going global and we're not delivering software. We're an MSP.

I intentionally left out the facts that we're never going to operate on a global scale, and that we're not from the US, but rather Sweden with 9M inhabitants. The reason I left this out is that I wanted to question the "truth" that a diversified workforce is always the best. I'm not saying it isn't important, but I'm also not saying that it always is.

I'll add Ijeoma Oluo to my to-read list. I hope you got my point though: WHY is something true? When you know the why, you also know when it is and isn't applicable to your situation, giving you the upper hand.


Now this is what I call a tone-deaf question! You shouldn't be asking whether a minority or person of color would perform the job any better. The real question is whether there are competent workers who are not white. And if your workforce is entirely white, you should be asking if you're mixing up competence with familiarity. People usually trust what is familiar.


There's barely any non-white workforce available where I live and where we operate. I stand by my question being perfectly valid: what does a diverse workforce that doesn't deliver software to the globe do better than one that isn't diverse? If the answer is, as you're suggesting, that we're missing out on a competent workforce because we're racist, then I don't see the importance of a diverse workforce, but rather the importance of not being racist and hiring whoever's best. I'm not tasked with recruitment, but I don't believe my colleagues responsible for this are racist. And we're short on people right now, so if we had a !whitemale presenting themselves with a skillset we need, we'd hire him/her in an instant, I'm sure.


I don't think it's a personal slight against your colleagues. It's not that they would be better. They're fine.

I think the idea is that the company overall would be better with a more diverse set of thought patterns, opinions, and experiences contributing to its success. One way to achieve this is through diversity of identity, gender, or culture.


Please. Those studies are a joke. Imo those competitors will go down the drain just like google is doing right now. Time will tell which one of us is right. ;)


The best solution I've heard is to just spend a lot of effort finding good hires from a variety of backgrounds early on. Then it doesn't matter as much that everyone refers people like themselves. You still manage to cover most potential hires, even if any given referrer doesn't.

The "early on" is important though, since "just try really hard" doesn't scale.


> After performing over 100 interviews: interviewing is thoroughly broken. I also have no idea how to actually make it better.

My score is similar, and I kind of agree. The eerie part is that intuition can discern a good hire within 2 minutes with fairly high probability, and within 10 minutes with near certainty. The rest of the interview process is just padding to confirm that the intuition isn't wrong (it seldom is).

That intuition is some mashup of the candidate speaking on point, pragmatism, non-equivocation, admission of ignorance on some topics, English proficiency (1), etc. It's also impossible to codify or quantify, hard to convey and non-transferable.

(1) - here in Europe, English fluency, diction and (lack of) thick accent correlate highly with overall ability


No, they do not. Or at the very least you should not draw conclusions like that. How do you correlate language skills with technical abilities? There is a large group of people who, even with perfect language skills, are unable to shake off their accent, as it is probably genetics/biology. That's why you hear an accent in "foreign" people after 20+ years.

Please, trust your intuitions a little less. Treat more of your impressions as anecdotal :)


Another 20 year veteran here. (though don't picture a greybeard, I started professionally at 15)

The big lesson for me over the last 5 years now that I also operate my code is design patterns. I think most Software people start with hating design patterns, then some fall in love with them, then eventually some of us fall out of love again.

The advice would be:

"Optimize code cleanliness and readability for reading at 2am in the middle of a production issue when you're trying to understand just exactly how the system got into that state that you didn't think was possible."

That means every jump to another class and every jump to an interface with multiple implementations is a distraction. You should still of course create modular separations for testability and clarity. But that line is way higher than the Uncle Bob "If a function is more than 4 lines long you should refactor it".

For complex mission critical pieces of logic, overindex on procedural execution, with paragraphs of comments explaining why each line is doing what it's doing.

Actually, another wisdom about comments:

I went from thinking "Comments are great!" to "Comments are terrible, and are liars, write self-documenting code" to "Comments are literally an opportunity for you to speak directly to the person coming after you, and explain in clear plain english WHY you made the choices you did, what tradeoffs you considered and dismissed, what compromises you made, and what external factors led to those decisions."

Comments don't need to be passive voice professional corporate speak. Nor do they need to make you sound smart or clever. Speak directly to your audience of future more junior engineers.
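
A contrived example of the difference (hypothetical code and history, but the shape of the comment is the point):

    import time

    def charge_with_retries(charge, attempts=3, delay=2.0):
        # What-comment (restates the code, tells the reader nothing):
        #   retry 3 times, sleeping 2 seconds between attempts
        #
        # Why-comment (talks to the person reading this at 2am):
        #   (hypothetical) The upstream billing API rate-limits bursts, and we
        #   found that exponential backoff made incident recovery slower, so we
        #   deliberately keep a short, fixed retry budget here. If you raise the
        #   attempt count, raise the surrounding job timeout as well.
        for attempt in range(attempts):
            try:
                return charge()
            except TimeoutError:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)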

Exception messages too (sanitize all exceptions before they get to the customer, of course)


> I went from thinking "Comments are great!" to "Comments are terrible, and are liars, write self-documenting code" to "Comments are literally an opportunity for you to speak directly to the person coming after you, and explain in clear plain english WHY you made the choices you did, what tradeoffs you considered and dismissed, what compromises you made, and what external factors led to those decisions."

Wholeheartedly agree. Also, debugging logs for that same purpose can be great.

> Speak directly to your audience of future more junior engineers.

Often enough, that person might be yourself. I've been very grateful about my own comments in code areas that handled some obscure edge cases.

> That means every jump to another class and every jump to an interface with multiple implementations is a distraction.

Not to mention multiple layers of abstract base classes. I'd say „write your code such that you'll only ever need one IDE jump out of the current scope (and back in) to figure out what's going on“


I have 30 years of pro experience, so I'm like you plus another 10 years of confusion. I agree with the list as well, and like your (and others') additions. I especially feel the parts about zealotry. The worst are Medium developers, those with a Medium amount (3-7 years) of experience who get their dogma from Medium articles.


25 years for me

> Clever code isn't usually good code. Clarity trumps all other concerns.

Agreed, with exceptions... the problem domain matters. HFT code definitely demands code that is clever, but isn't clear such as bit twiddling, template tricks, and very architecture specific solutions. Gaming - Carmack's fast inverse square root. Compilers - Duff's device. I'm sure there are others.

I'd add one: Someone else did it first and better. Pretty much everything you'll encounter that's non-trivial, is likely an algorithm and done by someone else better, correctly, and faster. There's nothing wrong with knowing the algorithms, but Knuth, Hoare, et al. probably did it first, and correctly. Don't be afraid to find the best algorithm implementation in your language.


Your mention of simple and clean code makes me think of the excellent talk "Transforming Code into Beautiful, Idiomatic Python" [1].

Although it relates to Python programming, if the ideas and principles (and thoughtfulness and pace) from it could be applied to much more written code, then (I think) we as an industry and all our users would be in a better place.

[1] - https://www.youtube.com/watch?v=OSGv2VnC0go


Possibly worth noting that the presentation there is a few years old now and appears to be using Python 2 for the examples, so some of the code wouldn’t be exactly the same in Python 3 even if the ideas still make sense.

With that caveat, watching almost anything that Raymond Hettinger presents is positively correlated with improving programmer skill.


Usually I disagree with these types of lists, but this one seems pretty spot on.

My only quibble would be that only code-quality static analysis is useful. Security static analysis, on the other hand (e.g. taint analysis to find security bugs like XSS), is pretty overrated most of the time, unless you work really hard to make it fit your context.

I also think linting rules are important, not for what they actually do, but just to get everyone to stfu about variations in code style that don't matter and argue about more important things instead.


People with less experience consistently undervalue linting and code formatting and style. Code linting forces one to deal with code smells and innocuous errors that can cost a lot to fix once they escape into the field. Consistent code formatting and style are invaluable in (a) reducing cognitive load in picking names, which is a hard problem in computer science, (b) clarifying the structure of the code (you begin to see the forest for the trees, because the trees are mostly uniform) and (c) onboarding or familiarising with new codebases, including new areas of large codebases.


I value a consistent enough default style. I'm not going to waste time arguing why this specific consistent style is better that some other consistent style and I'm certainly not going to police the codebase manually and force it into that style.

Either the linter catches it and fixes it automatically with little to no configuration, or it's not important enough to be worth the time.


> (c) onboarding or familiarising with new codebases, including new areas of large codebases.

It is extremely useful to lower the cognitive overhead when looking into different areas of the codebase. Inconsistent naming patterns, „magic” strings instead of enums, preferences about array iteration, using deprecated features, etc. shouldn't need to be part of a code review if they can be detected automatically instead. This is especially useful for junior devs.


I like a consistent code style (I don't care which, as long as it's consistent), but it has never helped me come up with good variable names.


Opinion, but I think "invaluable" is overstating it. I agree with OP: not that important. Again, just an opinion. (20 years of experience here)

Edit: I suppose I should clarify why. I think there are more important issues to focus on when building software with others.


I think there might be more agreement here than we think. OP is talking about “people who stress over […]”. I think a linter can be (in?)valuable without stressing over it. Just get everybody on board, don't stress over which style “is better” (as if there's such a thing) and make it your own.


I think the value is modest, but since the cost is essentially zero, it's a no-brainer to use it.


> I also think linting rules are important, not for what they actually do, but just to get everyone to stfu about variations in code style that dont matter and arguing about more important things.

Agree. I don't think that there's one "best" style, but I've worked with people who don't care about consistency at all, and their code is so much harder to read. There's bunch of value in just picking a style and sticking to it.


Reminds me of this:

"Gofmt's style is no one's favorite, yet gofmt is everyone's favorite."


Also, having consistent formatting makes it very easy to read through large amounts of code. If you know how the code is laid out, you don’t need to focus on everything and can “squint” through it to get a good idea of what’s going on.


Agreed with you and the GP. In one project I’m on, I pushed for linting and formatting checks in our CI not because I’m an “insane weirdo” who cares too much about formatting, but precisely because I don’t care, I just want it to be consistent. I don’t really have an opinion about which style or format or whatever is better, I just don’t want to have to read through 6 different coding styles when I scan through my colleagues’ code.


Having a consistent style enforced by an automatic tool set (linter/autoformatter/etc) means no time needs to be wasted in any given PR arguing about style, or waiting for someone to fix a style mistake. You might still waste time arguing about what the rules should be, but that's often less than the cumulative waste of fixing avoidable crap.


I believe, though, that you should just have some system in place to automatically check/fix these issues (for the most part). I've been in far too many environments where PR reviews are just surface-level formatting nits and don't address functionality or architecture at all.


The solution to this is to use an automated formatting tool for the entire team and have it automatically run prior to review and be done with it.
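
As a rough sketch of what "run it automatically and be done with it" can look like - assuming a Python codebase using Black and flake8 (substitute gofmt, prettier, rustfmt, etc. for your stack) - the whole gate fits in a tiny script wired into a pre-commit hook or CI:

    import subprocess
    import sys

    # Hypothetical style gate: fail the build if formatting or lint checks fail,
    # so style never has to come up in review at all.
    def main() -> int:
        checks = [
            ["black", "--check", "."],  # formatter in check-only mode
            ["flake8", "."],            # linter, using the repo's config
        ]
        for cmd in checks:
            if subprocess.run(cmd).returncode != 0:
                print("style gate failed: " + " ".join(cmd), file=sys.stderr)
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())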


This is done in many teams. Go has made it really easy via its gofmt tool. The trouble is that it's hard to do on a large existing codebase that's not consistently formatted: doing a mass format messes with the line-by-line commit history, and on occasion introduces bugs.


I generally agree. But static analysis tools are still pretty dumb and sometimes cause extra work to no benefit.

Flagging a method for cyclomatic complexity when it uses a case statement that is blatantly obvious and simple to any human seeing it, for example. Or just happening to notice a bunch of issues in file B when you commit file A but didn't even touch file B.

Overall it's good, I like it, but with the caveat that it can take some work to tweak and adjust to avoid waste like that. And that usually seems to come right on the deadline or the day before. Worthwhile, but you have to take that cost into account.


Cyclomatic complexity IMHO isn't a good metric for code complexity as understood by humans, but sometimes it can be insightful especially when it's unexpected.

My favorite static analysis issues are the ones that flip-flop between two issues, I love doing "merge this if statement with the enclosing one" only to get "expressions should not be too complex" after I've merged them.

As long as you have the power as the developer to override the tool and mark issues as won't fix (with rationale) I think it's fine. If not, the ruleset needs to be really good and as minimal as possible. Otherwise you'll just get crappy but compliant code which isn't helping anyone.

Also, it sometimes makes sense to leave suboptimal but old and correct code as-is to preserve the Git history or because the suggested change doesn't actually make the code any better. And finally, these tools do still generate false positives. While it makes sense to thoroughly review each of these with a coworker, you should still be able to mark them as false positives.
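
In Python tooling, for instance, that override can sit right next to the code with the rationale attached; the rule code below is just flake8's line-length check, used as an example:

    # Keeping the URL on one line is clearer than wrapping it, so we override the
    # line-length rule here and document why, rather than contorting the code.
    DOCS_URL = "https://example.com/a/deliberately/long/link/that/reads/better/left/on/one/line"  # noqa: E501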


> And finally, these tools do still generate false positives.

And as long as we talk about tools that do some kind of semantic analysis (not just syntactic) on a Turing-complete language, they will always have either false positives, false negatives, or both. Rice's theorem guarantees that.

They still can be useful though. I programmed a static analysis tool for PHP a long time ago and it worked surprisingly well.


> ones that flip-flop between two issues, I love doing "merge this if statement with the enclosing one" only to get "expressions should not be too complex" after I've merged them.

I'm not sure that's flip-flopping; the solution to the overly complex expression is to turn it into a function.

Unless the tool is just being overly aggressive on what it considers too complex, in that case you can try to tweak the configuration instead.


We build static code analysis tools, used by companies building safety-critical stuff (planes, cars, ...). If your failure mode is "there might be fatalities" rather than "the app crashes", this becomes really valuable.

Code quality is already quite nice, and standards like MISRA are supported by many analyzers; but proving(!) the absence of various error classes can be handy as well.

(Disclaimer: I work on the binary analyses, not on the source code analyses; so my detail knowledge on that front is not too deep).

+1 for linting. If the code looks "the same" I find reviews much easier, since I can focus on what it actually does (or fails to do), not on "decoding" it. Though nobody should freak out over minor variations.


Adherence to (imo) bad linting rules has forced the early crystallisation of architecture into arbitrary abstractions simply to please the static analysis gods.

Code style is fine to keep consistent, but sometimes concepts just aren't battle-tested enough to warrant their own abstractions yet, and committing to one just to please a cyclomatic complexity / line count score creates more work for some future engineer who is tasked with re-engineering the solution.


For the project I'm currently working on, the customer mandates that we must use SonarQube, Fortify and Nexus IQ.

Not once has any of them found any kind of security issue. In fact every single thing they have raised has been a false positive, or just plain time wasting (e.g. forcing some arbitrary level of branch coverage). Also, they can be so slow - more time is spent in the build pipeline on these tools than is spent in actual build, test and deployment combined (even including integration tests)! Fortify is particularly slow (up to 10 minutes!), and provides no value whatsoever.

I've used SonarQube on several other projects too; it seems to be very popular in enterprise circles in particular. Never once have I seen it raise a security issue. On occasion it points out the odd accessibility issue, but that's it.


I spent months at my previous employer (large enterprise) "remediating" findings from a security scanner. The majority of them were IM conversations with the security analyst discussing why they should be ignored.


> My only quible would be that only code quality static analysis is useful.

It didn't occur to me he meant anything else. I just read it as not using -Wall -Wextra -Werror makes life more painful.

> just to get everyone to stfu about variations in code style

It makes reading code from different programmers easier. But the exact format doesn't matter - just so long as it's all similar. Standardising on a formatting tool / linter from the language, like pep8, does the stfu job.

The bit I was surprised by is:

> Code coverage has absolutely nothing to do with code quality

It might be true depending on how you measure code quality, I guess, but that's a very fuzzy metric. Choose a concrete metric like the ratio of man-hours to get code out the door to bugs in that code, and it is absolutely true that automated tests are one of the few dramatic ways we have of forcing that metric up. Either you automate the tests, you spend enormous amounts of time testing manually, or you don't test.

I suspect he's arguing the difference between 95% and 100% coverage isn't great. I'd quibble, but that's defensible. But the difference between 50% and 95% on the other hand is dramatic.
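
Part of why raw coverage is such a fuzzy signal: it only measures that lines ran, not that anything was checked. A deliberately bad, made-up example:

    # This "test" executes every branch of parse_price, so coverage reports 100%
    # for it - and it would still pass if the function returned garbage.
    def parse_price(text):
        if text.startswith("$"):
            return float(text[1:])
        return float(text)

    def test_parse_price_touches_everything():
        parse_price("$3.50")
        parse_price("3.50")
        # no assertions: green build, zero information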


>Security static analysis on the other hand is pretty overrated most of the time unless you work really hard to make it fit in your context.

I think the "unless" part is key here. If used correctly, it's crazy what these tools can find, and they give you a baseline of issues to analyze/fix before digging deeper.

But yes, running them blindly just to tick a box isn't very helpful.


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

I was just thinking about this. Most PMs remind me of that scene from Office Space: "Engineers are not good at dealing with customers. I have people skills!". There's no need to insulate engineers from the products they work on and the customers they work for. Good engineers want ownership, let them have it.

A big part of the problem in my experience is that PMs tend to proliferate. I've seen many startups go from hiring their first "product" person to suddenly having a PM team that's a third the size of engineering.


In my experience, a PM handles a lot of the crap work that engineers shouldn't be working on. A PM keeps management apprised of progress. A PM handles the work involved in keeping Gantt charts up to date. A PM coordinates with dependents and dependencies to make sure resources and components are available when needed.

If you don't appreciate the value of that, you maybe have worked on smaller projects or not been in a large org that pushes those menial tasks onto engineers. PMs are there to facilitate engineers. They do not rule over them. They make targeted, efficient queries of engineers' status and then communicate and coordinate appropriately.

I would love to see PMs hired at my org. We're currently drowning in process-related tasks and it's mostly because we do not have PMs (the process is mostly necessary, but it wastes engineer time). I'd also love to have some technical writers on staff. Having engineers write documentation, beyond, say, doxygen comments in header files, is just dumb. Right now our user-facing docs have little consistency or defined scope. They don't target a well-defined audience, and don't effectively communicate to our users.

Let engineers do what they do best, and enable them to be productive by hiring helpers as needed.


I feel like a good PM is someone who makes sure everyone is on the same page and working towards the same goal. If they are good at their job, you don't notice them.

OTOH, if they are bad at their job, they really get in the way and cause a lot of resentment and extra busywork.

The end result is selection bias - lots of engineers think PMs are useless because you only really notice the useless ones.


We used to have 3 PMs in our team of 20ish people. Two of the PMs left and NOTHING changed for us or our project.

The idea of being relieved of process-related tasks is nice, but it rarely happens. You get pulled in to those timewasting meetings by your PM anyway.


A good PM, yes.

But IME, most are not.

As a dev lead, tech lead or architect, my experience of PMs has almost exclusively been terrible. It has usually involved them being a thin proxy between myself and management, with the PM constantly pushing for firm numbers and dates, pushing for forecasts, and asking for meetings to explain all of the things that have already been communicated to them...

I invariably end up feeling like a babysitter, feeling like I'm doing most of the work the PM should be doing, with frequent, pointless meetings cutting severely into my available time. Aside from that, as soon as there is any pressure from the customer, the PM projects it onto the team, lambasting and pushing for more with less.

On occasion my experience is different, with the PM actually using the data they already have available in Azure DevOps (or whatever), keeping meetings to the minimum, and shielding the team from outside stresses. Very rare though.


I said it upthread but I'll say it a little differently here. The PMs do lots of valuable work that allows us to do our work well (and prioritize it so that we're doing the right work).

If they're subject matter experts, that's best - they're your translator/interpreter, able to speak in terms of both tech and the problem domain. And they know what the user wants because they were the user of your competitor.

But even if not, they can be the best assistant you'll ever have. Dealing with customers, sales, higher-ups, etc. so you can focus on your work. Technically they may be above you in the org chart, but you can't have a better assistant than a good manager that gives you what you need, clears the way, and runs interference so you can work doing what you do best.


This!

Where I work there's lately been so much additional process, planning, keeping stakeholders and dependencies up to date, etc.

Yet the product manager position isn't really staffed, and neither is the project manager one. So all the additional work beyond design and coding also falls on the owning engineer. And since we are frugal, that engineer is lucky to get maybe one additional engineer assigned to help. But often that one will also have their own project to own, or need to prepare for owning one.

So the lead/owning engineers mostly turn into product and project managers, but are still expected to do the down-to-earth construction of the product, which means that to get anything done in the expected time frame, engineers need to put in long hours.

I just don't get it. It seems like an utterly wasteful use of resources. I'm not saying engineers cannot be good at these things, but generally speaking product and project managers cannot and do not need to be able to code in addition to owning a project/product. Adding/hiring more project managers would be the single most effective way to speed up project timelines, as it frees the engineers - the producers of tangible product deliverables - to focus on that.

In no way do I mean engineers should be cloistered away from dealing with product or project decisions or even chatting with users. But it needs to be in the correct proportion to the actual engineering work.

Add to that the addition of duplicate processes (e.g. we are dealing with 3 right now that cover the same ground but are ever so slightly different in what needs to be done, causing triple the effort) and everything grinds to a halt. A project manager could also be the right person to push back on management and dedicate resources to improving project management overall.


> Where I work there's lately been so much additional process, planning, keeping stakeholders and dependencies up to date, etc.

This is why I like the way the linked article put it: you can lose most PMs without sacrificing efficiency. I would imagine a great deal of this stuff isn't actually important and only hampers the team's velocity. A PM will make it their job to make sure it gets done; if there is no PM, the unimportant stuff will fall by the wayside.


> A PM will make it their job to make sure it gets done, if there is no PM the unimportant stuff will fall by the wayside.

You will always need to spend some time project managing your project even when things are not as process heavy.

You still need to make sure that timelines are met or flag if they are not. Keep dependencies informed, push/remind other teams you depend on to fix a bug, add the feature they promised. Communicate with management to get more resources, etc.

A PM can do all of those things with a bit of input once in a while from engineers.

These things are only unimportant if you have no timelines to meet, no dependencies, and aren't accountable for the success of your project. Outside of a hobby project, I'm not sure we ever have that luxury.

The article doesn't talk about other roles, so maybe where OP worked these things would have been assisted by other roles, e.g. the product manager. Maybe from that perspective it's not clear to the engineer what value project managers provide but I feel it's a bit of a limited perspective to say a role that can dedicate their time to tasks of type X isn't of benefit to those who'd otherwise need to do these tasks.

I mean, would you say the same about customer support? About getting clarity on compliance/legal/finance (via asking external counsel)? About internal developer tooling? There's a point where an engineer working on a product may need to do all these things in addition to building it. But you cannot claim that it'd make no difference to the engineer's efficiency if they could delegate those things to others.


>A PM keeps management apprised of progress.

In most places, project managers save managers time by not requiring managers to spend their time figuring out the progress of things. However because PMs have more bandwidth they take up even more engineer time than managers would.


IME a PM takes time at the beginning of the project to define tasks, dependencies, and time estimates. After that, you meet with them formally once every 1-4 weeks as a team to update the schedule, along with an informal, individual chat as needed for critical tasks and late dependencies. It took a lot less time than the useless daily standup meetings I now attend.

Most of my experience with PMs was at smaller orgs with smaller teams. Now that I work at a large valley company with layers of antagonistic management, I appreciate my former PMs more than ever.

It's important to hire good PMs without huge egos or career aspirations. Being a PM is kind of a shit job and it's not about wielding power or authority over people. You also need PMs who understand the uncertainty baked into software estimates, and can cope with unforeseen problems requiring schedule juggling.


> It's important to hire good PMs without huge egos or career aspirations.

This is so important. PMs are there to serve management and engineering, not the other way around. As soon as you have PMs who are trying to boost their own profile, you're screwed.


See much of FAANG over the last five or so years for a number of good examples.


If you're hiring more people to deal with rote, managerial "process" you're already lost in a mire of inefficiency.


I think it is important to point out at this point, that product managers and project managers are two different jobs. Yes, neither writes code, but the product manager focuses on which features are most important for the customer and therefore needs to know a lot about the product and what makes a good product.

Project managers, on the other hand, can be used even if there is no product. They require good people skills and (should) make things work, while having some knowledge about tools/methods that could enhance the job.

If you think SCRUM, the project manager is closest to the scrum master (someone who cares about impediments and methodology) and the product manager is likely to be the product owner (prioritizing and describing features).

However, project managers can be used for all types of assignments as long as they have a start and an end ;-)


The linked post said project manager.

Good PRODUCT managers, in small numbers, are worth their weight in gold IMO. Researching/talking to users takes time, making decks takes time, stepping back and surveying the landscape takes time. It's worth having someone doing that.


Oh god, I hate the PM abbreviation. Product managers and project managers have a bit of overlap (as the former also do project management to an extent, just like engineers), but they are different and having them provides different benefits. Why do we abbreviate both with PM? I never quite know which is referred to.


Don’t forget “program manager” too


Oh god, yes.


We should really start using PdM, PjM, PgM as disambiguation in contexts where that’s necessary


The engineers that believe they would thrive without managers are likely the ones managers exist for.

It is simply absurd to believe that companies throw billions down the tube for useless work.


Sometimes it is the customers that need shielding from the engineers.

I remember a situation where defcon was raised to 2 when the head of our unit found out that a particular engineer went to a meeting with a particular customer on his own.


I believe that many project managers are actually bad project managers. And that is not so much their fault: project managers are trained heavily on methods, but the job they have to do is not just executing these tools somehow, it is understanding the dynamics of larger organizations and acting accordingly. Many project managers just learn the tools and methods to perfection and try to bring their cure to everyone by prioritizing method adoption, when many times the missing methods are not the most important problem.

So many times, politics and emotions are part of the broken communication that causes quite a lot of havoc. So instead of teaching developers how they have to work, project managers should focus more on how to foster collaboration and use their people skills to reduce friction between people.

Having a good idea of existing methods can be advantageous, but it should never distract from the first priority: actually helping the people who do the work, as well as they can.


> After performing over 100 interviews: interviewing is thoroughly broken. I also have no idea how to actually make it better.

This has been my gut feeling for a long time, and in the last year, I finally took the time to write down my thoughts on the topic. Every week since February 2020, I wrote about some aspect of interviewing that could be improved [0], and I now feel confident there are actionable improvements to be made.

Individually, we as interviewers can shift our mindset towards looking for strengths instead of weaknesses, accommodate different backgrounds, acknowledge the best developers are not the best interviewers (both as candidates and interviewers), and put in the extra effort to hone our interview skills. At an organizational level, I'm working on training material that I hope to promote within companies that are interested in improving their hiring process.

Interviewing is thoroughly broken. But I have some ideas to make it better.

[0] https://hiringfor.tech/archive.html


> Individually, we as interviewers can shift our mindset towards looking for strengths instead of weaknesses, accommodate different backgrounds

Virtually every time I've been interviewed myself, the attitude was not to find out what my strengths are, but to discover gaps in my knoledge and skills. I think that's the standard in our industry. Sadly, this is exactly the opposite of what companies should be looking for. They need people who can do the job, and knowing some obscure (or even popular) algorithm by heart or some rarely used programming language feature has nothing to do with it.


I'm pretty sure an overarching focus on task realism fixes most interviewing woes, including this one.

There's a cultural obsession in this industry with testing things that supposedly correlate (usually poorly) with ability to do the tasks we actually want people to do and not the actual task itself.

I think it's probably rooted in goocargle culting.


> I'm pretty sure an overarching focus on task realism fixes most interviewing woes, including this one.

This can turn out pretty badly too. I was rejected by a company, apparently because I couldn't figure out the solutions to some problems they've been struggling with for years. I tried to guess, but either I didn't have enough context, or simply they didn't like my suggestions. The whole thing was very odd.


That sounds less like a focus on realism and more like a desire for free consulting.


> but to discover gaps in my knoledge

I think I've already found one :)


The blog you mentioned has interesting articles. Do you by chance also have something similar, but for the person on the other side of the desk? I’d like to sharpen my skills at being interviewed. I’m the developer who never has questions at the end.


Always have questions - interviewing is a two-way street. You want to find out what they value (what's their quality vs speed tradeoff? etc.), and what sort of things they have problems with (possibly something you could solve to really stand out?).

Those are just the most basic and obvious ones. Maybe hard to think of on the spot, but you can think of others in advance and have them ready.


Thanks! That newsletter does have articles geared toward candidates as well, though a lot fewer.

The topic of what to ask at the end is one I floated around, but I didn't find it to be a priority. In an ideal world, the questions at the end wouldn't be part of the evaluation anyway, so it should be a time for you to ask whatever you want. Unfortunately, I'm sure interviewers do judge based on those questions, so if anyone has insight into what "good" questions are, I'd love to know.


> Functional programming is another tool, not a panacea.

That's extremely true and I am not denying it. I'd like to add the nuance that we Homo sapiens seem to operate better when we are pushed in the right direction.

Giving us too much choice -- like global mutability and OOP in general -- can cause analysis paralysis and lead people to make all sorts of panicked, dumb choices in their software projects. As a guy with 19 years of experience, I seriously wish I could forget some of the codebases I've had to inspect.

So while FP is just another tool, it's also a framework for thinking that helps people make better choices. Same goes for static strong typing.


> So while FP is just another tool, it's also a framework for thinking that helps people make better choices

I’ve been getting more into functional programming this past year (via Clojure, though my day-job is mostly JavaScript/TypeScript) and this is the big takeaway for me.

Type theory and monads may have their place, but thinking about separating logic from side-effects is, I think, the most valuable aspect of functional programming.

That is a thought process that you can apply in pretty much any language that will give you the biggest bang for your buck.
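
A minimal Python sketch of that separation - the "functional core, imperative shell" idea from the Boundaries talk linked below (db and mailer here are hypothetical stand-ins):

    # Functional core: pure decision logic, trivial to test, no I/O.
    def overdue_invoice_ids(invoices, today):
        return [inv["id"] for inv in invoices
                if inv["due"] < today and not inv["paid"]]

    # Imperative shell: all the side effects live out here, kept thin.
    def send_reminders(db, mailer, today):
        invoices = db.load_invoices()          # I/O in
        for invoice_id in overdue_invoice_ids(invoices, today):
            mailer.send_reminder(invoice_id)   # I/O out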

Edit: Here are some great examples of the benefits of functional thinking without going off the deep end.

1. Boundaries by Gary Bernhardt - https://www.destroyallsoftware.com/talks/boundaries

2. Pits of Success by Mark Seemann - https://youtu.be/US8QG9I1XW0

3. Grokking Simplicity by Eric Normand - https://grokkingsimplicity.com/


I agree that separating logic from side-effects is an excellent lesson that easily translates to less functional programming languages.

Another thing I tend to do now is order function arguments as if the programming language had partial function application. You can see this in JavaScript by comparing lodash to lodash/fp. I think once this practice is established on a team, it can help bring a more uniform feel to the code.
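
In Python, for instance, putting the "configuration" arguments first and the data last keeps functools.partial pleasant to use - a tiny illustrative sketch:

    from functools import partial

    # Config-style arguments first, the data last, lodash/fp style.
    def scale(factor, values):
        return [factor * v for v in values]

    double = partial(scale, 2)
    print(double([1, 2, 3]))  # [2, 4, 6]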


In javascript it's pretty easy to write a function for partial application in either order, you can curry functions forward or backward.

    const map = (arr, mapper) => arr.map(mapper); // Non "traditional" order for arguments
    const curry = f => (...first) => (...second) => f(...first, ...second); // Classic currying: partially apply arguments to the start
    const reverseCurry = f => (...second) => (...first) => f(...first, ...second); // You might want to reverse() first and second depending on the desired syntax
    const array = [1, 2, 3];      // example data so the snippet runs as-is
    const func = x => x * 2;      // example mapper
    const arrayMapper = curry(map)(array); // Usually not that useful, more common to map over many arrays with the same function than to map over the same array with many functions
    const funcMapper = reverseCurry(map)(func); // Same benefits as having the classic argument order in the first place
If you're working in a code base that doesn't already implement "most specific argument last", or does so inconsistently, it's very easy to write partial application functions for both cases. If your functions aren't variadic, you can even get rid of the clunky double-call syntax by checking the total number of arguments passed so far: if it's greater than or equal to the function's "length" (the number of named arguments it takes), call it; otherwise return the partially applied function. This is sometimes known as "autocurry".


Yep, there also exists such a thing as "I learned language X and will not use it for my paid work but it made me a better programmer". One such example for me is Racket.


For me that's been Elixir. It's made me appreciate some of the more functional aspects in Python, like generators, map, filter and the functools library in general.


Ironically Elixir has become a personal favourite and I find it amazing for backend web work, especially API gateways -- although it does very well with server-side rendering web projects as well. It's my main work tool (very close second is Rust).

But I fully get what you mean. Working with it and learning it has been a very strong eye-opening experience.


I really enjoy the tooling as well. I feel like these days tooling can be just as important as the more “tangible” benefits of a language. The Phoenix code (and test!) generation in particular is just magnificent, not to mention the docs. Python's most lauded doc generation tool, Sphinx, on the other hand, is just complicated and confusing.


Free Pascal.

That said, I'll use anything in my paid work if the project is interesting enough. Hence the three years I spent writing PHP.


I'm curious. How did Free Pascal make you a better programmer?


Not an answer you'll like, honestly. Turbo Pascal was the first thing I learned after BASIC as a wee youngin'. Mostly it was about connecting young hobbyist-programmer me with seasoned-professional me.

Basically, sometimes it's good just to kick back and have some fun with your work. Aside from that, though, my impression of the language is that it forces you to be a little more organized about everything than C-derived languages do. In Pascal, the things you do to satisfy the compiler are things you could do for anyone reading your code in other languages (not that those languages' conventions encourage it...). And other languages do this even better... I wouldn't recommend anyone go learn it unless they had a real business need to.


Thanks!


Completely agree and I feel this every day.

Half of my job is split between a functional language on the backend and a class-based language on the frontend.

I have made a conscious effort to avoid shared state, but it's harder. For example, the friction of cloning objects before passing them around. Or dealing with sort APIs that sort in place.

I'm going against the language defaults, while with the FP language it feels natural.


Here's what I have to say about TDD (I do not practice TDD on a day-to-day basis).

Even if you don't use it, you should know what it is. If you choose to use it religiously, that's probably a bad choice, but if you never do it, you are missing out on a huge tool. In general, if I see an insurmountably complex challenge ahead of me, TDD (especially Detroit-style TDD) is sometimes a tool to refocus my attention, implement the minimal thing, and keep moving.

Also, doing test-heavy development (which TDD gives you for free) gives you insight into how that break/fix loop can supercharge your productivity by giving you those dopamine rushes as a reward for your effort.
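
A tiny, made-up illustration of that loop with pytest: the failing test comes first, then the least code that turns it green, then the next failing test drives the next change:

    # Step 1 (red): the test exists before the implementation, and fails.
    def test_slug_replaces_spaces_and_lowercases():
        assert slugify("Hello World") == "hello-world"

    # Step 2 (green): the minimal code that passes; resist doing more yet.
    def slugify(text):
        return text.lower().replace(" ", "-")

    # Step 3: the next failing test (unicode? punctuation?) drives the next change.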


> how that break/fix loop can supercharge your productivity by giving you those dopamine rushes as a reward for your effort

It's interesting to hear someone come out and say that there's a link between how "fun" a coding workflow is and how productive it is. I think most of us know it, but nobody ever talks about it. I wonder if it could be harnessed better.


It's kind of like a drug. I don't play video games any more - coding is my video game. I'm convinced that strategic use of those green dots is why I don't burn out much anymore. I just pushed through a month-ish emotional slog that in the past would have triggered full-on burnout, but I still wrote lots of code for work and my personal projects, and produced the deliverables for my job only a day-ish later than I said I would.


> It's kind of like a drug. I don't play any more video games - coding is my video game.

I feel the same way. I just make sure I take one full day off a week and play games or read. I can still get burned out, but it's generally short-lived.


I think the best part of doing test driven development (or similar) is that making your code easily testable generally also makes it architecturally sound.


I've heard this argument a lot, but some of the worst codebases I've ever worked with have been heavily test-driven. The need to have a failing test for every line of code, combined with a tunnel-vision devotion to YAGNI and working in small pieces, ended up creating these elaborate layers of functions that did no useful work except glue test cases together. Straightforward refactoring that might have taken an hour or two ended up taking days because every few keystrokes had to be validated by intermediate tests that would be invalidated by the next incremental phase of the refactor.

Granted that particular team was an extremely low-skill team in general, and I've used TDD now and again successfully when it fit my needs, but that experience was a pretty strong counter-example to the strong claims made by TDD proponents about how it improves the quality of code.


I suppose it’s possible to destroy everything. But I’d still prefer the over-tested code to the code that isn’t tested at all.


Yes. I like that TDD - when I judiciously employ it - makes me think about my interfaces. It’s like a pre-PR that forces me to question whether the intent is clear. It makes me reference guidance from DDD and single-responsibility-ish idioms.


Refactoring is quadratically more difficult if you don't have tests.


It's not that simple. Refactoring is supposed to leave behavior invariant, but the more comprehensive your tests are, the more they also test the moving parts. That means they have to be changed too. Many people forget the maintenance costs (and this has a much broader meaning than the refactoring example). Tests themselves have to be considered a software (or more general) artifact with a cost.


I agree coupling your tests to your code will cause resistance to refactoring, as the tests have to change with the code. Lack of coverage also causes resistance, due to the risk of breaking things. I think you have to find a balance.

We write almost entirely integration tests (and lots of them) and we rarely need to maintain tests since they're only tied to the UI. Most refactors happen w/ no test changes at all. Occasionally a UI change causes a big find/replace across lots of tests.

The tests are also probably less comprehensive than a suite of finer grained tests would be. Most tests tend to cover happy path and few edge cases.

When we started there were zero tests and refactoring was too risky so the early code decisions largely had to follow the architecture that was already present (which wasn't always what we wanted). As we filled out the tests for existing behavior we could refactor more.

I think the mostly-integration-testing approach led us to a pretty good balance by only coupling the test to the UI (with its own cons of course).


For this reason, I prefer going heavy with integration tests early in in a design.

This way I'm not rewriting tests every time I need to rip apart and restructure the internals but I still have the safety net of refactoring without breaking the external contract.


>can supercharge your productivity by giving you those dopamine rushes

These dopamine rushes unfortunately have a nasty side effect of turning people into TDD zealots.

TDD works great where it works, but it often has a nasty side effect of spewing out tests which don't tell you anything useful and demand near constant maintenance to keep the build pipelines alive.

The zealots seeking a dopamine high will tolerate this in pursuit of their high and will churn out unit tests. The zero-benefit-high-cost unit test maintenance eats into the entire team's testing budget without producing a commensurate return on investment.


Agree. I also found that learning TDD rewired my brain to think top down.

So in the many cases I don't do TDD, I imagine what the top level API ought to be before I dive in and start coding.


First time I have ever heard of "Detroit-style" TDD (and, from looking it up, "London-style"). Why were they given these names?



> DRY is about avoiding a specific problem, not an end goal unto itself.

To add on to that, I think the key behind DRY is to not repeat a concept within your codebase, and to not fall into the trap of trying to merge similar but slightly different code blocks that aren't exactly logically equivalent by carving your code up into nonsense. The former leads to elegant abstraction, while the latter just leads to functions with signatures like my_function(data, BITFIELD, opt1=false, opt2=true, opt3=false, BITFIELD);


"Single source of truth" is more meaningful than the overly simplistic "DRY".

If two things are _supposed_ to behave similarly, then not having duplicate code makes perfect sense.

If two things just _happen_ to behave similarly (at first), it may be a hasty generalisation to immediately refactor them to use the same code.


> To add on to that, I think the key behind DRY is to not repeat a concept within your codebase, and not failing into the trap of avoiding similar but slightly different code blocks that aren't exactly logically equivalent by carving up your code into nonsense.

DRY is about not repeating behavior. It's not about not repeating text.


I’ve always found DRY to only apply well to two things:

- business logic

- something that you're extracting as a library


It requires subtle thinking. Instead what I see a lot of the time is the "rule" of DRY applied with religious fervor, and it produces the kind of code I enjoy working with the least.


I tend to think of it in terms of bugs (or, less so, feature additions), weighted by the likelihood of a bug.

If there is a bug, will someone have to fix it in one place, or remember that it's in 7 places? If the latter, I apply DRY.


> Clever code isn't usually good code. Clarity trumps all other concerns.

Generally this is true, but with the caveat that what actually matters is 90th percentile (or maybe even 99th percentile) complexity. If you have a choice between spreading low-grade complexity across the codebase or having one module that is nearly incomprehensible but that provides an abstraction that makes the rest of the code very simple, the latter is almost always to be preferred. Once you've distilled out the evil into one place and gotten it working reliably, chances are you're never going to need to touch it again, and if you do need to, at least you only need to deal with it in one place.
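
A small, made-up example of that trade: one dense function nobody enjoys reading, so every caller stays simple.

    # All the evil in one place: bit-packing a colour into a single int.
    # Dense, but tested once and then left alone.
    def pack_rgb(r, g, b):
        return (r & 0xFF) << 16 | (g & 0xFF) << 8 | (b & 0xFF)

    # Everything else in the codebase stays this readable:
    BRAND_ORANGE = pack_rgb(255, 102, 0)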


If I understand what you're talking about, I tend to disagree - I've had a bad experience in general with "pure abstraction" layers in projects. It sometimes seems like a good idea to have this "magic" component which automates everything and takes care of all the hard stuff, but in practice it creates a whole host of problems.

For instance, maybe it works at first, and then your requirements change in a way that doesn't fit the abstraction. Now you can't make local changes, you've got to change this super complicated abstraction layer, and suddenly it requires changes all over the place in completely unrelated components, because you had to add an argument to `super_general_magic_function_that_is_used_everywhere()`.

Also things like this tend to make code super hard to trace. Instead of stepping through your functions, now all your tasks are running through some generic scheduler or something, which is there because somebody had the idea that if you ever need to run your code on a Hadoop cluster this would help, even though your code has only ever been run on a Raspberry Pi.


My rule for major new abstract components is: “if you’re going to do that, you have to get it right”.

It’s fine to have a major abstraction, but it needs to actually make a problem space disappear, for some definition of “entirely”.

That means:

- great code

- great documentation

- tested

- engineering resources assigned to maintain it full time

... for as long as it takes. Until you reach that level of “makes the problem space disappear entirely”.

So when someone suggests a new abstraction, I just ask if we’re going to invest all of that. And if not I suggest we stick to just using the tools we already have.


Interesting problem. I'm not convinced, but I don't have strong opinions on this. I feel nobody wants to touch the high-complexity code, so it would rot. Ultimately, reading code dominates writing it, so having low-grade complexity spread around seems better, as the bar for reading is lower.


On some projects I have used this method. It is usually one or two pure functions in a single file, littered with comments. It is a complete abstraction, so it has absolutely nothing in common with the DB / UI code.

Every once in a while I have to touch it. Then I ignore everything else and just focus on this one part. Pen and pencil are the most valuable tools for this.


> Typed languages are better when you're working on a team of people with various experience levels.

Even if you are working alone you should probably treat it as working with people with various experience levels if it is code that you are going to have to periodically deal with in the future.

When older you has to come back to the code, older you should be more experienced than younger you in general, but older you will also likely have forgotten much of what younger you was thinking when younger you wrote the code. In effect, older you and younger you are a team with various levels of experience.


Older me is continually pissed off when I encounter something clever that younger me did. I've tried to be less clever over time.


I think using typed languages can still be good even when working alone; assuming you are experienced enough with them, you are more likely to write everything correctly on the first try. But some other things, like writing good commit messages, I never do. I have never made use of git history to debug some personal project of mine.


> Java isn't that terrible of a language

True. It isn't a great language, but isn't terrible. What is terrible are the over-engineered, over-abstracted APIs that are relatively common in the Java ecosystem.


What I hate is that I can't understand anything when I try to read existing code, with all the abstractions going on and having to jump through hundreds of files of boilerplate and clutter, trying to understand how the logic binds together. And then there are interfaces everywhere: I can't even jump to the implementation with the IDE, I first have to find where the interface magically gets injected in order to find the actual logic.

Also, I guess I don't like massive, slow IDEs and slow compilation times. I just feel so slow when working with Java; my mind gets distracted and frustrated. Maybe it is a personality flaw of mine to not have that type of patience.


Structured programming saved us from “spaghetti code” but Java culture introduced “spaghetti class hierarchy” and practitioners seem to practice it quite religiously.


This was probably the one bit that I really disagreed with. I've never been a Java developer, but I've had to touch it a few times in my career, and have quite happily left very well-paying jobs for lower-paying jobs just to avoid having to write more Java. I can't rightly think of any language I would plausibly have to use in industry that I find more unpleasant than Java and the entire JVM ecosystem (disclaimer: I've never used PHP).

I'll buy that Java might, in purely technical terms, be not that terrible of a stack, but it's just so deeply unpleasant to use, such a profoundly nightmarish vampire of morale, so utterly and irredeemably cruel in its ability to strip all joy out of the art of software development, that I can't possibly agree that it "isn't that terrible of a language".


Did you know that Java lets you put any number of package-scoped classes in a single file; it's just that the public class has to have the same name as the file? There are a lot of things like that you can do in Java which people just don't do, and which would make many things a lot cleaner. It's just that the ecosystem standards were set by people who thought that having globally unique class names was super important, even though Java already has namespacing.

Also the idea that you need 100% super-guaranteed-unique global namespaces, giving your folder structure those horrible net.xxx.abc.adsasd.dfer.cthulu.NetXxxAbcAdsasdUtilityHelperBuilderFactory things. For some reason that wasn't needed in any other programming language, and they worked just fine, so you don't need it either.

With that, instead of importing stuff, you could just write something like this, assuming each library has a sensible top-level name:

    gzip.File g = gzip.File.FromDirectory(dir);
When I write Java personally, it looks closer to that. And when I have to follow a corporate style, I still write it as close to that as possible while staying within the style guide. I 100% believe Java could have been a nice experience if it didn't have its horrible over-engineered style guides.


> There are a lot of things like that you can do in Java which people just don't do that makes many things a lot cleaner.

The problem is not "people could use that particular Java feature to make code cleaner, but they don't".

The problem is at a higher level: if you have ever tried to read your way through some library, e.g. Spring, you'll see code spread over hundreds of files, and it takes ages just to find out where that particular bit of code you need is, because it's automatically injected in some builder class which is an inner object extending some other object that implements an interface somewhere.

And then you have several competing abstractions, and sometimes one gets used, and sometimes the other.

IDEs and type hierarchies help, but only to some degree.

And then you are still stuck with an overengineered abstraction that nevertheless does not allow you to do the obvious thing that needs to be done in your particular case. So you need to implement a fragile workaround, for example copying one of the library classes, modifying it a bit, and then injecting it instead of the one from the framework. All of which will horribly break if someone changes something in the library.

"Style" is never the problem.


> The problem is on a higher-level: If you have ever tried to read your way through some library, e.g. Spring, you'll see code spread over hundreds of files, and it takes ages to just find out where that particular bit of code you need is, because it's automatically injected in some builder class which is an inner object extending some other object that implements an interface somewhere.

There is nothing about Java the language which forces code to be written this way though.


>deeply unpleasant to use, such a profoundly nightmarish vampire of morality, so utterly and irredeemably cruel in it's ability to strip all joy out of the art of software development...

Oh my...

As a Java developer, I'm curious. Can you elaborate a bit? Was this some really old version of Java, a horrible IDE, crazy frameworks, something else? Am I living in a nightmare and just haven't woken up yet?

I realize that performance has been an issue in the past, but that has been solved. I've used plenty of other languages (also functional ones), and they all have their quirks. In what ways is Java perceived to be worse than other languages nowadays?


I’m sure a lot of it comes down to taste and personal preference, but for me everything about Java is just very high friction. I find the docs and standard library deeply unintuitive and hard to navigate; the language itself feels verbose and inflexible, and the spec for the language is very large and complex. The type system is inexpressive and overly limited. The applications themselves, by virtue of the JVM, feel deeply disconnected from the underlying system. I don’t like the fact that Java seems to assume I’m going to use an IDE with autocomplete and refactoring.

I’ve used a lot of languages professionally in my career: C, C++, Go, Perl, Python, Ruby, JavaScript, TypeScript, Erlang, Haskell, Rust, some Kotlin and Java. There are some languages I quite like working with (Haskell, C, Go), languages I can use just fine but I’m not that fond of (Python, JavaScript), and plenty of stuff in the middle, but nothing that makes me want to just walk away from the computer in disgust like the JVM languages.

Edit: a final thought.

Really, our industry is pretty new and we’ve made a lot of mistakes. All of our languages have warts and deep flaws. Happiness in your career isn’t about finding something perfect, it’s about figuring out which flaws you can live with and which ones you can’t. I could go on about technical reasons but at the end of the day the JVM is just full of flaws I can’t just live with, and for whatever reason I’ve talked to a lot of other developers who feel the same way, and many who don’t.


Interesting points, and I have to say they do ring true. The type system is definitely limited and if you compare it to Haskell I understand 100% why you'd be frustrated. And the docs are overly verbose while simultaneously not being detailed enough.

For me, I'm currently at a point where a lot of these things are „known issues“, and they don't affect my day-to-day too much. I think the last time they did matter was during the Java 1.7 -> 8 switch, but I see what you mean.

It's funny that you mention not being fond of Python & Javascript, because those are the two most "mainstream" alternatives I thought you'd compare Java to, with plenty of warts themselves.


In my 15 years of professional work, I spent 2 years having to use Java with the GWT framework, and it was the most horrible, joy-deprived experience of my career. It left me completely burned out, and since then I absolutely hate Java and anything related to it.


> Clever code isn't usually good code. Clarity trumps all other concerns.

I'm glad I'm not the only one who thinks this way. I've had countless arguments with colleagues about this, and they always made it seem like I was the crazy one for thinking this.

They would often optimize or use clever tricks in their code, to the point that unless you asked that specific engineer, nobody would understand it. In most cases, the performance benefits you get from that don't outweigh the negative impacts.


Clever code that's actually significantly faster is fine as long as you document every single thing that's clever about it, even if that means writing 20 lines of comments for a 2 line function.

The worst thing is someone who writes a clever one liner without any comments because that would "ruin" the elegance of the one liner and because their code is "self-documenting"


> The worst thing is someone who writes a clever one liner without any comments because that would "ruin" the elegance of the one liner and because their code is "self-documenting"

Sadly, this was some of my colleagues: no documentation, clever one-liners without comments, and often obscure naming conventions. And again, we didn't need the performance benefits for our use case. I think in the end it was just a play to keep some people employed, because without them we wouldn't be able to work on this part of our application.

Sadly for them the business decided to outsource and they ended up being forced to switch teams.


Faster code is not fine.

Faster code should only be written after the finished product is shown to be too slow for its purpose, and after a profiler has shown which parts of the code are worth optimizing.


This can be too late for the product, though. If you have real performance requirements - for example, you're making a game and it needs to render at 60Hz - optimising at the end can be an impossible task. This is further complicated when data dependencies also impact performance: for example, level designers need some understanding of the complexity of the scenes the game will be able to support.


You should always have a good idea of what part of the codebase will be the bottleneck when you write it. If you have that, just optimize the things it spends time on. The only time you really need a profiler before optimizing is when you are micro-optimizing to try to handle CPU ordering etc.; other than that, performance is mostly predictable.


> and they always made it seem like I was the crazy one for thinking this.

Ask them a specific question about their code three months later.


A unit test that documents the required functionality of the code helps a lot. After that, one can refactor the code with all kinds of cleverness.
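
A toy sketch of what that looks like (names invented for illustration): the test pins down the contract, and the function body is then free to become as clever as anyone wants:

    def dedupe_preserving_order(items):
        # The current, boring implementation; safe to rewrite cleverly later.
        seen = set()
        result = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result

    def test_dedupe_preserving_order():
        # Documents the required behaviour: drop duplicates, keep first-seen order.
        assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
        assert dedupe_preserving_order([]) == []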


I'm not sure if I agree with that. Unit tests help document what the code does, but they don't help when you're reading through the code. It's also important to understand how your code does what your unit tests document.


> People who stress over code style, linting rules, or other minutia are insane weirdos

I'm less concerned about which coding style, but it is important that everyone (mostly) follows the same coding style.

Not all coding style rules are minutia. I've worked on code bases which have a mix of hard and soft tabs, with tabs sometimes mixed in the middle of the line, not just at the start of line, and where different authors use tabstop=2 or 3 or 4 or 8, all in the same file. And comment lines that are more than 400 characters wide.


I think this gets solved easily by using autoformatting tools, either as pre-commit hooks or via an informal agreement among the devs to have their dev tools run them on save or whatever.
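
A minimal sketch of that kind of hook, assuming a Python codebase formatted with black (the tool choice and hook path are my assumptions, not the parent's): save it as .git/hooks/pre-commit and make it executable.

    #!/usr/bin/env python3
    # Reject the commit if any staged .py file isn't black-formatted.
    import subprocess
    import sys

    # Files staged for this commit (added, copied, or modified).
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    py_files = [f for f in staged if f.endswith(".py")]

    if py_files:
        # --check exits non-zero if any file would be reformatted.
        result = subprocess.run(["black", "--check", "--quiet", *py_files])
        if result.returncode != 0:
            print("Commit rejected: run black and re-stage your changes.")
            sys.exit(1)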


This is completely and trivially solvable via a checkin filter that refuses tabs (or expands them to a given number of spaces). Why fuss about it?


Why a fuss? Because some people think their choice to use hard tabs and a given tabstop setting are right and refuse to allow autoformatting tools or even just a hard tab replacement. Or they might argue: sure, the formatting is a mess, but if we reformat the code it will make tracing version control blame logs more difficult.

35 years ago, a friend told me that he and the other guy working on a piece of code would play tug of war with their shared files. My friend would replace tabs with spaces whenever he had to edit a given file, then the other guy would replace leading spaces with tabs whenever he edited a file.


You can easily set a server-side commit/pre-receive hook that would enforce any specific agreed upon standard.

Anyone who isn’t a team player doesn’t get to commit (open source) or paid (commercial / closed source).

It is one of those occasions where a 5-line configuration can save months of bickering.
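
A rough sketch of what such a server-side hook might look like, using the "refuse hard tabs" example from upthread (a Python stand-in for a pre-receive hook; all names here are illustrative):

    #!/usr/bin/env python3
    # Pre-receive hook sketch: reject pushes that introduce hard tabs in .py files.
    import subprocess
    import sys

    def changed_py_files(old, new):
        out = subprocess.run(
            ["git", "diff", "--name-only", old, new],
            capture_output=True, text=True, check=True,
        ).stdout
        return [f for f in out.split() if f.endswith(".py")]

    # git feeds one "<old-sha> <new-sha> <refname>" line per updated ref on stdin.
    for line in sys.stdin:
        old, new, _ref = line.split()
        if set(old) == {"0"}:  # brand-new ref: nothing to diff against
            continue
        for path in changed_py_files(old, new):
            blob = subprocess.run(
                ["git", "show", f"{new}:{path}"],
                capture_output=True, text=True,
            ).stdout
            if "\t" in blob:
                print(f"Push rejected: {path} contains hard tabs.")
                sys.exit(1)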


I dislike sacrificing purity, but I am capable of doing it.

Most software is a hogpile of sh*t destined to rot. It simply won't last. Those sections that do (if it never has to be touched, it also didn't last) are usually subpar. Especially with accumulating "ship ship ship" modes, costs, and competing via speed, it'll only get worse.

Most time spent cleaning up code and improving architecture is time wasted. The startup will fail, the startup will get bought and shutdown, the project will get scrapped, the service will no longer be used, etc. What little survives will be in a state of constant turnover, one that leads to the same junk.

Any optimizations/streamlining/elegance would be to address near-term usability issues. That is, the code is unwieldy to adjust each day and so needs to be reworked. Otherwise, it's wasted effort. Further, if it will linger, that would be the time to clean it up and file it away. On the other hand, companies may still fear refactoring at that time as it risks introducing bugs.

Most code sections that linger are not foreseen as being worthy. By then, it's too late. Other sections never matter, as they end up untouched.


> Most software is a hogpile of sh*t destined to rot. It simply won't last. Those sections that do (if it never has to be touched, it also didn't last) are usually subpar.

Look, most of my code is shit. I expect the requirements to change sometime in the next two weeks. The code isn't expected to rot, it's expected to be thrown away.

Anything that sticks around longer than that isn't subpar, it managed to actually meet the underlying requirements.

Anyway, if my shitty code ends up becoming a problem later, I don't have any qualms about sunk costs, because I didn't build a pinnacle of architecture; and I'll probably have a much better idea of what the requirements and use-cases are.

Mostly, the important thing is to reduce the amount of layers. It's a lot easier to make changes to shitty code without too many layers than polished code that has wrong or inconvenient abstractions. Of course, shitty code that's over abstracted is even worse.


> People who stress over code style, linting rules, or other minutia are insane weirdos

He seems not to have seen messed-up code written and changed by a dozen programmers over the course of years, with various notions about indenting, placing parentheses and comments and whatnot. I've seen code with tabs and spaces mixed, wrongly indented code where following the logic is hard, code with no comments at all, or code with more comments than code.

Don‘t give any guidelines and you‘ll end up with a mess.


As far as I can tell he isn't arguing for not giving any guidelines, but rather that arguing over them is pointless. Choose a set and be done with it.


We don't stress much about code style etc, because we have a pre-commit hook that complains if you don't follow (most of) pep8. It's extremely unstressful, and makes our code nicer.

I guess it could be different in languages where style is more of a matter of personal opinion.


At my last workplace someone made a change that prevented you from committing if your imports ("use") were not sorted alphabetically. They are so irrelevant that the IDE (PhpStorm) hides them by default.

It used to make me so mad.


>>> Typed languages are better when you're working on a team of people with various experience levels

I picked up the opposite qualm after working in a multi-million-line Python codebase at a large bank, one that goes back almost 2 decades.

Dynamically typed languages actually work, at all scale.

Statically typed languages are good for simple types (int/string) but they're catastrophic for complex types (dict, map, database tables, enum, nullable/optional). The guarantees they give you on simple code are great, but the convolutions they force you to go through for more complex data are equally horrific.


I think it depends on the type system. After working with languages with good inference, explicit nullability, and algebraic types, it feels extremely tedious to go back to un-typed or duck-typed languages.

With a language like python, you spend a tiny bit less time typing out types in the source code, but that savings is totally eclipsed by the extra time you need to spend reading through documentation and source files to make up for the information which is missing from function signatures, and the time you spend chasing down bugs at runtime.
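
To make the "information in the signature" point concrete, here's roughly what it looks like in Python's own optional type hints (the function and types are invented for illustration):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class User:
        name: str
        email: Optional[str]  # explicit: email may legitimately be absent

    # Untyped version: def find_user(users, key) tells you almost nothing;
    # you'd have to read the body or the docs to call it safely.

    # Typed version: the signature already says what goes in, what comes out,
    # and that the result can be None and has to be handled.
    def find_user(users: list[User], key: str) -> Optional[User]:
        return next((u for u in users if u.name == key), None)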


I strongly disagree. Static types have so many benefits they easily outweigh the only downside (you have to actually write them).

I'm sure it's possible to make large dynamically typed codebases work with a lot of effort and a ridiculous amount of testing but it would clearly be easier with static types.

There's a reason Typescript is so popular and pretty much every dynamically typed language has some effort to add type hints (including Python!).

There's also a reason why you see so many blog posts about large dynamically typed code bases being converted to statically typed languages or having type hints added (e.g. Dropbox) but I don't think I've seen a single large company go the other way, because that would be crazy.


The big benefits are types-as-documentation and automated tooling.


At year 15 or so, he'll change his mind about project managers.

Project management is one of those broken things in the industry because it's never taken seriously. People promoted to PM generally receive no training or mentorship. In fact, sometimes the only way to break into management is to quit your job and start somewhere else with a slightly exaggerated resume. And sometimes the only qualification required is a desire to give it a try.

So you have an entire class of people whose skills follow what you'd expect in a task with no discipline or training, with 50% of PMs moving the needle in a positive direction, and 50% in a negative direction (to varying degrees like a bell curve). Roughly 25% of them are people you'd actually like managing your team.


Agree wholeheartedly.

> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

I read that and instantly thought "yeah but which 90-93%" because the 7-10% that are great are not easy to differentiate on paper. And this can easily apply to most auditors, QA, or sales folks too. The ones really good at their job are hard to find and the rest are just being asked to do difficult things with little resources.


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

I agree with the majority of the article, but I feel this is a shallow thought. It really depends on the context as much as the PM. In an agency environment, I love PMs. Lots of the things they do on the developers' behalf just aren't in my job description. If I had to do them, I'd get half as much code written.

I think PMs also get a bad rap because they are essentially the interface between a developer and the client. When the client is bad the PM can help shield the team from much of it, but ultimately the developers are going to feel the worst of a client through the PM no matter what.


I agree with you 100%. Where I work now, all of our PMs are subject matter experts. Some of our software relates to chemicals and our PM for that project is a former research chemist, for example. Their input is invaluable, since they know both the problem domain and the software development domain.

In a former job, at an agency, we didn't have that but still the PMs were the ones who dealt with cranky customers and sat through long meetings with sales etc. Also, good PMs will tell you when something doesn't matter so you don't waste time on it.

In either case, they handle all the difficult stuff and get us the info we need to do our jobs well. If I was to go back to freelancing or consulting, I'd hire a manager right away to handle all that stuff so I don't have to.


The insane obsession with scaling is killing this industry. So much effort is being wasted trying to use NOSQL or K8s at companies that have DAU counts in the low hundreds. Absolutely asinine.


But is their business model targeting hundreds of DAUs, or is it targeting millions of DAUs? It doesn't make sense to architect for an amount of usage that is too small to sustain the business. If the business model requires millions of users to be successful, you should build for millions of users, even if you only have hundreds at the present moment.


The reality is that companies at an early stage don't have validation of their idea yet. That is priority #1. The scaling part comes only after you know your idea has merit. It's also "the easy part". With the right allocation of resources and talent, most scaling challenges can be solved. The same cannot be said of creating valuable / interesting products.

The right approach is to optimize for speed and flexibility. Make it as easy as possible to validate your idea. Make it easy to tear down and rebuild in a "scalable" way if you're lucky enough to make it past step #1.


If you build for millions of users the way most devs do, your business will flounder before you finish.

Facebook, Twitter, and Google all started simple and scaled only when it became necessary. If you want to do what they're doing, start by doing what they did. Don't skip to the microservices.


We don't actually disagree, but I believe Twitter is actually one of the cautionary tales. Their product went through a long period of stagnation because they were too busy toiling at scaling. Same thing seemed to happen to Github, from my perspective.


This is exactly right. Prove your business with the bare minimum tools required and then when you have enough paying customers go out and hire lots of engineers to figure out how to make it scale.


Someone told me once it's more important to design things to scale 1 or 2 orders of magnitude and be easily replaced. What you think you'll need 2 or 3 orders of magnitude from now is rarely what you need when you get there.


Yes, but it's also easy to get bogged down reimplementing, right when you have more important things to focus on to keep up with growth. Designing things to be easily replaced is easier said than done.


But when you do have proof of growth it’s much easier to finance more developers to work on the problem.


The point is you end up having to replace it anyway. Or you get bogged down maintaining and integrating with an overly complex system.


No, you should use proper abstractions to allow you to "easily" replace components down the road. There's no point in planning, designing, or building your application for a million users because by the time you get there: 1 - the business needs will be radically different from what you expected initially, and 2 - you'll have dozens more engineers who will probably do the actual work. It's exceedingly unlikely that the architecture decisions that you made will be relevant at that time.

Also, the overwhelming majority of companies that claim that they need to scale into the millions of DAU will never make it there.


I find trying to decide on "proper abstractions" can also cause the same issue as trying to design for scale: you have to make complex design decisions based on how the future might turn out (which components might you need to replace? what might their future interface requirements be?).

I think it might be more efficient to do whatever is fastest/easiest based on what you know now & always plan on refactoring when you know more. So you end up trying to write less, simpler, code knowing you're going to tear it apart soon.

Which I think still fits with your overall point.


Obsessing about scaling is what makes this industry. How else would we keep up software engineering demand?


I haven't done 100 coding interviews.

But here is a suggestion to fix the interviews: Let the candidate bring their own code!

First, this gives you a unique insight into the programming style the candidate prefers. Second, the candidate knows the code very well and is probably more relaxed when they need to answer your questions about the code (given that they didn't copy-paste the code from somewhere).

It is similar to designers showing a portfolio of their work to a potential employer. So if this works for designers, why not for programmers also? It is called "The Art of Computer Programming" for a reason. ;-)


This assumes the candidate has code they can bring in. 99% of the code I have written over the last few years has been for work and therefore can’t be shared. The little code I have written outside of work falls into 2 categories: minor tweaks to existing code bases, or experimental bits of code to try something out. Neither of those is really useful in this situation. If I was looking for code that I fully wrote myself that would work for this, I’d have to look at projects from a number of years ago, which means my ability to answer questions would be significantly diminished as I don’t remember why/how I made the choices I made.


> 99% of the code I have written over the last few years has been for work and therefore can’t be shared. The little code I have written outside of work falls into 2 categories, minor tweaks to existing code bases or experimental bits of code to try something out

Agreed wholeheartedly with this. When I'm writing code for fun, I'm not spending time thinking about spec coverage, architecture, or great data design. I'm just screwing around getting things working.


The problem with that is that we can't be sure that it is their code.

And let's be honest, it doesn't really work for designers either, because the problems of hiring programmers aren't unique to programmers. Hiring gets harder the more you need out of your workers.

Take any reasonably skilled position, hiring is incredibly difficult because it's hard to evaluate a person's skill in the blind. Hell, it can be difficult even when we do have ways to evaluate skill.

Take any major sports league. The draft is essentially teams hiring players. And they've spent the entire year evaluating hundreds, if not thousands of players to find out which ones are good enough to be hired. They have games to look at, stats to pore over, people to talk to about these players. And they get it wrong often.

Finding good people is a hard problem to solve.


I am a veterinarian and recruiting is way more straightforward. You’re not going to test the person on the spot. It’s way more about personality fit, what you like/dislike to do, and where you see yourself a few years later down the road. And then a real paid tryout.

Also I just changed jobs and was able to write code quickly because they were recruiting with tasks in mind for me. Sometimes it feels like the position is just opened to have more velocity on the sprints.


Veterinarians have boards and licenses, right? I can't hang a shingle as a vet on a whim. That's a very clear filter. All the vetting of whether or not someone can perform the job is done before they even get to you.

However, anyone can call themselves a programmer and apply to programming jobs and there's no way to tell if they're capable or not until they're at it.

And even previous job experience isn't a strong signal of competence. I've inherited code bases from people who left for other programming jobs where the code they left behind was just a mess inside.


Even boards and licences won’t keep one from being a bad recruit. It has the same value as a CS degree I guess most people applying will have, and you’ll still be whiteboarded.


1. “I am a veterinarian and recruiting is way more straightforward”

2. “Also I just changed jobs and was able to write code quickly because they were recruiting with tasks in mind for me.”

Is no one else confused by this? How common is a coding veterinarian and why do veterinarians write code?


I program as a day job and kept a small nightly gig as a veterinarian, which was indeed my degree.

How common is coding as a hobby? I used to code as a teenager as many here. Made the switch for many reasons.


I just took it to mean they were a veterinarian and now they work as a programmer.


But the first is in present tense, and the second says jobs not fields. It’s enough to be very unclear.


I think this makes for an EXCELLENT interview question.

Engineers with strong opinions are better than engineers who just do whatever others want and haven't thought deeply about anything.

But even better is an engineer whose opinion changes as more information becomes available.

You want to hire the person that is capable of both making a thorough case for one approach, and is mature enough to admit that that approach is suboptimal and then go implement the optimal one.


I'm doing my best to not be an "insane weirdo" by focusing on the correctness of code, and ignoring the extra blank lines and spaces, the lack of a space between the "//" and the first word of the comment, and the inconsistent camel casing in the work of some of my colleagues.


I used to be an "insane weirdo", but at some point I gave up.

Now, I just focus on code correctness, merge and fix the code style myself, avoiding a lot of useless back and forth.

My hope is that people like to see their name when they "git blame" (what's the PC version of "blame" again?)

So they will over time learn to follow the code style in order to keep their name there.


If you can, commit wholly to a parsing formatter and plug it into your CI pipeline. I've come to realise over the years that it's not style I care about per se, but consistency. With a parsing formatter and an opinionated linter, you're pretty much covered automatically across all development and developers.


Yep -- where I work, every language we use in any big way has a high-quality automated formatter, and we require all code to be formatted with these tools. It's been a dramatic improvement in code readability in general across the company. We have a few engineers who would consistently beat the formatter by a very small margin if they were formatting their code by hand, but they are the exception -- 95% of the code submitted across the company is improved by these tools.


> My hope is that people like to see their name when they "git blame"

Sadly, my impression is that a lot of people don't want to see their own names on git blame, because they fundamentally operate on "you break it, you buy it" mode. As long as their name isn't associated with a particular piece of code, it's not their fault that it's broken. (It doesn't matter that the code is fundamentally broken and has been broken since inception five years ago - not my fault!) Here, let me just add a quick wrapper layer on top of it so that my module can work around the bug, while at the same time making my module depend on the broken behavior...


I see two different scenarios for this:

Open source: in open source, I hope people want their contributions to be visible, so that they can brag about it, or put it in their resume.

Business: in business, maybe some people want to hide some contributions they made in order to avoid being blamed for broken code.

But at the same time, if you have code stats and you see that an engineer owns less than x% of the codebase, it could be seen as a bad sign.


> what's the PC version of "blame" again?

JetBrains uses "annotate"


Credit


I like that a lot.


I think some people just have better or worse tolerances to these things than others.

If formatting in code is as I am used to it, I can skim it very quickly. If it is not, I have to focus much more, and that adds up over the course of a day.

My brain can just recognize what's happening in a small block of code if it has a particular shape and colors. I don't know how to explain it better than that.

This is one reason I REALLY want to see editors save files as tokens rather than plain text, and see compilers ingest code as pre-parsed tokens rather than plain text. (Or have editors save the tokenized file alongside a plain text file for the compiler.)

If that ever happens, I will be able to view the code exactly as I like it, and so will everyone else, even when we each want to view it differently.

Compilers and diffing tools will need to report a token number rather than a line number for errors and differences, and I don't see that as a big problem at all.


On the other side, at my current place of work, there's one person who throws a fit every time someone doesn't follow the linting rules and the CI/CD rejects something, mostly over trailing spaces. What drives me nuts is not the fact that he's obsessed with them, but that he actually put no thought whatsoever into which rules to apply; he just took the first set he found on the internet, and is religiously following them and forcing them on the rest of us.


The bigger the codebase and the more people in an org the more important coding standards are. But enforcing them should be 100% automated with a linter failing PRs that introduce incorrectly styled code (or precommit hook). That makes it so it doesn't waste code review time.


Just use a linting tool? Using a linter allows me to care a lot about formatting (which I do) while not consuming a ton of my team’s time enforcing it.


Then you get to have many meetings about which linter and lint rules to use.


This is a tiny fraction of the time that it takes to comment on PRs/CLs in perpetuity. The alternative is just to literally not care about formatting which if you are able to do more power to you, but I could not imagine working on such a team. To me it’s somewhat akin to saying that all that matters in writing is the ideas, not the formatting of a document.


Organize one meeting, tell everyone you don't care which one but they have to decide on one by the end of the meeting. Otherwise you're deciding one for them (phrase it as "I have a PR ready to go adding this one"). Assuming you have the sufficient authority (managerial, technical, or buy-in from your manager) for such an ultimatum, of course.


This is why I like black, for Python. It doesn't have any configurable settings, which means way less arguments.
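
A small illustration (assuming a reasonably recent black release, which exposes format_str): whatever shape the input arrives in, everyone ends up with the same output, so there's nothing left to argue about.

    import black

    messy = "def add( a,b ):\n    return a+b"
    print(black.format_str(messy, mode=black.Mode()))
    # def add(a, b):
    #     return a + b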


Yes, we use golangci-lint for our few Go projects, which works well. However, most of our projects are in Java, and the mistake of our team was to never have our Checkstyle break the build. So, the result is many Java repos with a variety of coding styles.


This is just how life turns out ime. Focus on more important things.


Use an autoformatter and be done with it.

If your preferred codestyle can't be expressed in a standard formatting tool, change your code style and run the autoformatter once. Have a bot make the commit if you care about your diff histories.


I wouldn't worry. In my (long) experience there are many of you.


I find that the people that over stress code style (developer version of the grammar Nazi) end up wasting their social capital and overall energy.

They lose capital going back and forth in these squabbles and lose their ability to look for much bigger danger signs such as over abstractions, under abstractions, and convoluted code.


> YAGNI, SOLID, DRY. In that order.

What?

> Java isn't that terrible of a language.

Doesn’t mean it’s a good language though.

> Despite being called "engineers," most decision are pure cargo-cult with no backing analysis, data, or numbers

And when people ask for numbers, it is usually because they don’t like your idea

> People who stress over code style, linting rules, or other minutia are insane weirdos

That’s why you need languages with good defaults for formatting and linting (e.g. Go, rust)

> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

I admit I still don’t understand what a PM really does.


YAGNI: You aren't going to need it. http://c2.com/xp/YouArentGonnaNeedIt.html
SOLID: A list of architectural principles. https://en.wikipedia.org/wiki/SOLID
DRY: Don't repeat yourself.

A set of useful principles in software architecture. I mostly agree with his order too.


> I admit I still don’t understand what a PM really does.

In the best case they have a proper technical understanding and ground the team in reality (i.e. make sure the team doesn't over-analyse/over-engineer and add fully unnecessary parts).

Then the PM should help the team come to decisions if the team can't decide whether to do solution A or B and they _seem_ to be equally good wrt. all measurable and non-measurable aspects.

Lastly the PM should help keep away all the unhelpful and hindering things upper management might want to put on the engineering team, and similar things.

The problem is that the PM needs, IMHO, solid technical understanding of the topic in question, and at the same time they need enough soft skills to nudge the team in the right direction and protect it from upper management absurdities.

I have yet to meet a person who fulfills these qualities; often they fundamentally lack technical understanding to a degree that they do more harm than good, i.e. in many cases a self-managed team with a tech. senior team lead is a better problem. Except if the team lead is not grounded enough and prone to over-engineering, at which point a PM (or second team lead) to ground this person is essential.


> lead is a better problem

A typo, but honestly it's interesting to think about what I could have meant by it, even for myself, so I won't fix it ;-)


Another low effort blog post that Hackernews readers have upvoted because it says obvious things that they agree with.

The quality of blog posts and articles is taking a bit of a dip lately.


People with experience are acknowledging this has value for newbies. That you already learned the lessons doesn't mean they aren't valuable to others. Sure, not the most exciting and creative piece of writing we have seen this year, but there's no need to be so dismissive either, we shouldn't let our own experience blind us.


It's just a list, no arguments, so that's not high quality, indeed. But the author speaks from experience, and it resonates with experiences from others here, mine included (although I'm using a nosql db for recent projects). It offers a starting point for some discussion and elaboration, so it's fine by me.


I agree. There are maybe 1-2 articles a day that are really worth reading here. I skim the rest.


I disagree, this is healthy exchange. Programmers -- and their employers -- want to discuss what works and what doesn't in the crazy world of software development.

Although this might not be a moment of reflection for you, there are software projects big and small happening all over the world, every day. A large proportion of them fail.

Or don't fail completely as long as we learn something.


I am around 20 years into professional software dev and this list is great. Not sure what else I can add, just wanted to +1 the whole thing basically.


Very insightful post. Best for this month maybe :)

Some of which I like most:

> Clever code isn't usually good code. Clarity trumps all other concerns

> Bad code can be written in any paradigm

> So called "best practices" are contextual and not broadly applicable. Blindly following them makes you an idiot

> Designing scalable systems when you don't need to makes you a bad engineer

> In general, RDBMS > NoSql

> DRY is about avoiding a specific problem, not an end goal unto itself

Overall, software engineering is NOT a solved domain, nor does it need to be. Best practices work best at solving other people's problems, but most likely your problem is different, and you are different as well.

Don't be scared to do things in a way that you think is not approved. Exploration is necessary, because the unknown is inevitable.

Stay fluid and vigilant, be skeptical yet adaptive.


> Designing scalable systems when you don't need to makes you a bad engineer.

> In general, RDBMS > NoSql

I use NoSql because RDBMS is a premature optimization for me

My experience with ORMs, especially ORMs I didn't choose, is that they are frail, assume too much, and I often end up writing my own plain text queries anyway, all so that I can get a result from the database in object form, but without bogging down the database with inefficient queries

NoSql is object and collection based by default, and any Orm-like necessity is standardized with that database, instead of with a third party under-maintained library

My projects don't need to brag in an engineering blog about scalable performance, and the memcache solves all bottlenecks I’ve ever encountered


The problem you point to is the ORM, not the DBMS. And if you think “of course, but how can you OOP properly without objects? (And thus ORM if you use an RDBMS)” — you might join some of us in the conclusion that OOP is way overrated.

Don’t get me wrong, it is exceptionally useful when it’s a natural fit, like in GUI systems and some aspects of game programming.

But in many places, it is completely unfitting, and complicates things (similarly for FP, BTW)


My point is that I don't need to do plain text queries or need to make queries simpler or worry about what I just did to a row when I am using NoSql

Whereas I have all those concerns with a DBMS; the ORMs want to help but just add more problems


Having used ORMs, SQL and NoSQL, my experience is that NoSQL is basically never the right choice for anything. It saves you minutes in the beginning in return for hours within a week’s time, then hours in return for days, then it just keeps costing.

Especially since Postgres supports JSON properly, I’ve found no reason to use anything else except perhaps when you want eventual consistency, which couchdb may do better out of the box, but is a whole big can of worms.

Added: I find DALs are the only acceptable layer over relational data, and other than BLOBs (which unfortunately no database seems to deal with nicely, and are thus relegated to the file system), everything that fits into a NoSQL fits at least as well into a modern RDBMS (e.g. Postgres)
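
A rough sketch of the "Postgres supports JSON properly" point (table, column, and driver are my assumptions; this uses psycopg2 and a JSONB column):

    import psycopg2
    from psycopg2.extras import Json

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        # A jsonb column gives you schemaless documents inside a relational table.
        cur.execute(
            "CREATE TABLE IF NOT EXISTS events (id serial PRIMARY KEY, payload jsonb)"
        )
        cur.execute(
            "INSERT INTO events (payload) VALUES (%s)",
            [Json({"type": "signup", "plan": "pro"})],
        )
        # ...while still letting you query into the document with SQL operators.
        cur.execute(
            "SELECT payload->>'plan' FROM events WHERE payload->>'type' = %s",
            ["signup"],
        )
        print(cur.fetchall())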


yes, I’ve encountered that in other people’s projects with increasing scope

I architect my own projects to not need more than a simple data store and scrap ideas that don't fit that archetype


> People who stress over code style, linting rules, or other minutia are insane weirdos

Really great post, although I do really value good, consistent code style. Readability really matters (imho). You can even use writing concepts like parallelism if you are feeling fancy to make things even easier to read.


I don't understand this claim of readability. If it were true, then you wouldn't be able to read a code base with a different code style. In truth, the switch takes about 10 seconds for me to get used to. Which is about 10,000 times less than the time I've wasted debating code style with other developers who insist that "consistency" is a useful goal without justification.

There is exactly one concrete goal that I have found: keeping people's editors from fighting each other when editing the same file. Other than that, just let it go. I've switched everything I can find to automated formatters so that I never have to listen to someone complain about formatting again.


I have no problem diving into a new code base on a new team with a style that’s unfamiliar. I absolutely do not want to switch styles as I work on the same project. I don’t much care what the style is. Successful implementations I’ve seen include: tasking one person to pick a style, then allowing exactly one meeting in which to iron out the quibbles. Or, literally googling for a style recommendation and just choosing one of the top results, as is, no discussion.

I agree that the endless religious arguments must be avoided. All the more reason to seek a strong manager and / or tech lead.


Cognitive load is real and that 10 seconds is more load you have to put onto your mental ram to get anything done.

It’s easy to not notice it until it’s not.

That’s why autoformatters are better in general. There’s no argument, it’s just how it is, and it’s dealt with.


> TDD purists are just the worst. Their frail little minds can't process the existence of different workflows

I'm not one, but my money is on this one changing, at least in tone. It's deliciously ironic as currently written.


Maybe you've never experienced the anguish of having to pair program with a TDD purist. I honestly wonder how these people tie their shoes without performing a few deliberate face-plants first "just to make sure."


> People who stress over code style, linting rules, or other minutia are insane weirdos

Cannot agree with this one. For example, different indent size can be misleading and has introduced many bugs that are hard to notice during review. Apple had a security issue because of this.


> Pencil and paper are the best programming tools and vastly under used

I do most of my large work (not small features) on paper first in pseudo code and drawings.

It helps me think through a problem rather than code through a problem.

The TDD one seems directed at someone.

I was really anti-TDD until I lost a bet with a lead on my team about 15 years ago. I tried it for a few weeks and it really changed the way I develop - and I started delivering code faster (this was the bet).

Nowadays, I don’t always write tests first. I definitely fool around or cowboy code sometimes, especially when working with something new. Once I’ve figured out how to use the API, tool, or library, I delete my code and write tests to work through what my API will look like, then write my code.


The TDD one I sort of half agree with.

Purism and zealotry are rarely a good reaction to anything, but I've seen a lot of dodgy code that, if it had been written with at least testability in mind, or with a test first, wouldn't have been half as dodgy.


>test first it wouldn't have been half as dodgy.

I feel part of this, in my very limited experience, is that with tests the focus is on getting the smallest thing working, so there's less to go wrong.


I think another commenter said something poignant around this as well; it’s great too if you are working to a spec. For exploration it might be a cumbersome approach to go all in with.


I tend to do TDD because I hate having to pull up a full application, run the thing and then eyeball a result. The problem with TDD zealotry is that often they'll induce all sorts of architectural rot (DHH did a thing on test induced design damage a while back) in the name of testability


Coming up on 30 years. I agree with most of your observations except standups. In my experience daily standups are a waste of time. Maybe I've just had bad luck with them?


10 minutes is hardly an investment in both ensuring devs share a goal they're committed to at the start of the day, and that devs aren't going down the wrong path, and people are in sync. I have had many standups where days of person X's time was saved by person Y offering a suggestion on how to do a certain task.

Don't let a standup go beyond 10 minutes. X and Y in the above example should sync after the standup separately in case their discussion is longer than the duration of the standup


Standups make sense if people work close together on a fast-moving project. They don't make sense for people who work on unrelated things or projects that move slowly. Then they just become a "who did the most yesterday?" bragging session, since nobody has any use for the information about what others did; that is all it can be used for, and when that happens all it achieves is hurting team morale.


"10 minutes"

Are you the only one attending or something?

Standups are initially scheduled for the start of the development day. But then people come in a little late, get that first minute call, have to attend to something that hiccupped, so soon enough the standup happens 30 minutes into the day, eventually an hour into the day.

So if you did get there on time, now you just spin your wheel waiting, because first you've got that obnoxious standup that's going to disrupt everything.

Now you get there and spend ten minutes waiting for everyone to assemble, and for the S.C.R.U.M.Master to pull Jira up on the overhead. Finally you get going and everyone has an essay about their herculean effort putting a button on a form, which absolutely no one is listening to. You have the manager (because really they're a garbage way of managing) interjecting and asking questions. You have someone say something dumb they're doing and everyone gets to stand there while two people who could have taken it offline have a conversation.

40 minutes later you go back to your desk.

Invariably the standup aficionados (almost universally shitty managers) will say "you're doing it wrong!", but what I just described is the standup reality virtually everywhere.

EDIT: Exactly as expected, I made the mistake of actually replying to people who questioned my comment, which means when some angry person comes along they get to downvote all of them. That aspect of HN is weak sauce.


I am not sure where you work, but I am surprised that with such a lackadaisical attitude towards attending a meeting, any work gets done at all.

If you have a meeting at 11 AM in the morning, people are supposed to show up on time. Do all your meetings run late? Or does only the standup get this treatment?

Sidenote: if somebody's late > 2 mins, they can join midway or miss the standup altogether, just as is the case with any other meeting.

At the standup, strictly speaking, we give each developer a minute, else the master calls time on them. In a team of 7, that makes the standup 7 minutes long. Typically, people tend to keep their updates short as a way to share what they did yesterday and what they will do today. For any conversations longer than a minute, we take it offline. I have been in companies where this is done via an official master, and other places where we just follow the process ourselves as adults and don't go overboard with updates.

You get back to your desk in < 10 mins, practically for us - it was more like 7-8ish minutes.

I am no standup aficionado, and can't claim to know where you've worked to know the reality virtually everywhere with such confidence, but anybody aware of the process will help you fix it. It appears you've not met such people.


We had fines for latecomers and people that talked too much at one of my last companies, like $2 or whatever that goes towards end of week beers or the Christmas party. People pulled into shape real quick.


I've worked in three fortune 500 companies (banking and finance). Several SV tech companies (in satellite offices - not in SV). Two pure startups. An engineering firm. I have never seen it be anything but a waste of time. And it always consumes significantly more time than the boosters claim it does.

And we can look at each other's anecdotes with suspicion. But there's a paradox that when someone says "Oh it's only 1 minute per person" then my natural response is "why isn't that a status line they set in IM or something?" at whatever asynchronous time it suits them best in their workday. How could it possibly be the best way to communicate 60 seconds of information by making everyone break their flow and gather together. And if it's more than 1 minute, and you're getting into actual meat, well then there are even bigger issues afoot like an inappropriately large and inefficient audience.

If you're just stating exactly what everyone can see in JIRA or whatever product you use, then it's a waste of time.

Aside: In probably half those teams there were remote developers who'd have to call in. Boy, that added a whole new fun level where everyone does nothing while technical issues are sorted.

EDIT: After the flurry of downvotes by these scrum disciples, I can't even reply to the disingenuous reply to this that makes some astonishing projections, and again tries to moral high ground.

If you are on a team that derives value from scrums, you are probably on a shitty team.

Debating with the zealots of scrum land (who, like the guy below, will repeatedly tell you -- in between their evangelizing -- that they're not really zealots) is a Schrödinger's Cat of productivity, where not only does it take no time and impose no flow penalty, completely and fully detached with every inkling of software development knowledge industrywide, it also conveys everything to everyone and is all powerful.


You didn't address the very basic issue: Why can't your team(s) have proper meetings? People coming on time, or at least not waiting for people to come on time? Why can't they maintain the discipline of not ratholing and going off into technical issues?

As I'm not necessarily in favor of standups, nor has anyone responding to you claimed to be, you're simply blowing hot air complaining about the purpose behind standups as opposed to addressing the comments they are making.

Not respecting people's time (frequently late for a meeting, meeting organizers making the on-time people wait for the off-time ones, etc), not being able to communicate well (cannot summarize, or meanders to off topic stuff) are all qualities of poor workers and poor teams. Such behavior is quite common, in my experience, but these are legitimate reasons not to work with them. Once you hit some minimal level of technical ability, the other skills matter at least as much in a team environment.

There are plenty of pointless rituals most people have to do in their life and work. Amongst those things, being able to handle a simple 15 minute meeting a day is one of the easiest. When people have trouble with it, it's a sign of a behavioral problem. Either some team members do not have the minimum bar of discipline/communication, or they are intentionally acting this way as some sort of civil disobedience against scrum. There are usually appropriate ways to make change - in most cases such behavior is just a sign of immaturity. Of course, there is such a thing as poor management, and sometimes such behavior is the only realistic way. But then again - it's still a good turn off for me not to work there.

As I said in another comment: Following scrum does a good job of exposing problems. And everything you describe sounds like problems in the team or the management.


> virtually everywhere

I've never experienced that in the 7-8 companies I've worked at over the past 10 years.


At the last 7-8 companies I've worked at, I've described exactly how it has been. I have been personally responsible for eliminating them at the last 4 orgs. They were an echo of shitty management.

And I'm hardly alone in observing this being a common pattern.


In the one job where standups went well, our limit was 15 minutes, for about 7 developers.

> Standups are initially scheduled for the start of the development day.

There's no reason it should. We did not have it at that time.

If it's at a reasonable time (e.g. after all people have normally begun work), then have a low tolerance policy for those who come late. Sometimes our standups were about 5 minutes, because some people were on vacation and some were late. When the late people showed up, it would be to an empty room. We didn't wait for them beyond 1 minute.

There's no way we would tolerate 30 minutes, let alone an hour. If it was going beyond 15 minutes, many/most of the developers would abruptly leave. You have to have an idea what you're going to say before you enter the room, and not go into technical details - the standup is not a problem solving session. People who randomly droned in giving their update would be "coached".

For a while we had the manager in the room. Then we booted him out - he was no longer welcome (it was an amicable parting :-) ).

> but what I just described is the standup reality virtually everywhere.

I can believe your experience is more common than mine, but I can also say I wouldn't work with such sloppiness in the team. In our case, our team wasn't the standout. This discipline was drilled in at the organizational level and had support from the leaders.

It may sound bad the way I write it, but it was actually very easy to cultivate this culture, and everyone liked it. This was literally the easiest and simplest part of the work day.

Edit: Lest this come off as a pro-scrum comment, I assure you it's not. There are things I don't like about Scrum, and am not sure I'd like to go back to working scrum-style. However, the complaints in the parent post are not problems with scrum, but cultural problems. As one of the pro-scrum people in my company put it: "I'm not really sure Scrum does a great job in improving productivity. But it's excellent at exposing existing problems in teams."


We had a client doing an "agile transformation" corporate style, ie they pulled the entire floor in to talk about everything for an hour every morning. Imagine 30-40 people, all from different teams, all huddled around a whiteboard talking about their work. It was a shitshow.

I think we need to just kick all the suits out of the engineering process, because they insist on stupid shit like that and never address the core issues.


If you don't have it at the beginning of the day, then you're putting a hard gate on flow. This phenomenon is well known, and the disruption of having a scheduled event mid-workday is significantly larger than the time of the disruption itself. Everyone is going to fade out of their work long before and won't re-engage for some time after.

As to scrum exposing cultural problems, maybe that's true, but in some cases scrums are a clumsy attempt to solve cultural problems (as I said in another comment, agile was the failure mode for orgs stumbling with "waterfall").

Terrible progress visibility? Let's have a scrum! No team member communications? Scrum will fix it! People sandbagging or not doing their work? Don't worry, Scrum will save the day!

On good teams we all know what each other is working on. We're all aware of the codebase. We see the commits. We are available when someone has a problem (and people know who is a good person to asynchronously ping when they have a problem in specific domains), and are open to the reality that everyone is imperfect and doesn't know everything. On good teams scrums serve absolutely no value, and are a redundant waste of time. On bad teams they are an attempt to bandage over all of the lack of those factors.


Talking about the beginning of the day assumes everyone starts at the same time. Having it just before most people go to lunch can be less disruptive. I agree scrum isn't helpful usually though.


This is exactly my experience with scrum too.


Same here. At my current job it’s literally impossible for my team to pull off as we’re spread across North and South America, Europe, Japan and New Zealand. Instead we have two weekly meetings, one friendlier to western countries and the other better for Asia pacific, both recorded, where we review agenda items added (by anyone who wants to add something, including folks at other teams) and then a quick review of major team projects status. I personally find it much more productive than a 15-30 minute scrum stand up where people either say “same as yesterday” or drone on for 5 minutes straight, depending on their personality.


Like many things, IMHO they are a good idea that a) is not needed for every team and b) should not be overloaded. But like every agile software idea, they often become a big-A "Agile" thing that's done with little consideration of whether it's done in a useful way, just because it's the thing that's to be done, and it gets overloaded with stuff someone wants to do. So I'd say yes, you've had "bad luck" in a way, but at the same time that's kinda the norm in many places.


The problems creep in if they get too big. Too many people and it's a huge waste of time. If non devs are sneaking in, they shouldn't be permitted to speak.


I love stand ups but agree they need to stay small. When the audience is too large it becomes an exercise in bragging and is not functional.


Agree. What I found useful, although I kinda hated it at first, was Slack messages at the end of the day where I was asked what I did that day (sometimes followed by what I'll be doing the next). This includes blockers and other related aspects, like whether I need some additional manpower for the task at hand.

It was in private messages so it avoided wasting other people's time or distracting them.


They mention that it's "useful for keeping an eye on the newbies", which indicts standups. That is a positively horrendous justification for standups. Your "newbies" should have assigned mentors and helping hands, and that shouldn't be something that drags down everyone.

Standups are a shitty, passive-aggressive way of managing. Managers who are too weak/incapable of simply monitoring progress, asking questions instead make this whole show under the assumption that everyone is checking on each other. 100% of the time this is the justification for standups. In many firms, "agile" is a failure mode for "waterfall".


  > People who stress over code style, linting rules, or other minutia are insane weirdos
I suspect this has to do with a combination of personal preference and never having worked with a reliable (in terms of just never changing the semantics) code formatter like Black. Sure, I don't agree with every single thing Black does, but it's still infinitely better than trying to enforce code style manually. And don't get me started about those who just don't want a code style enforced - the diffs end up a serious mess, and forget dealing with merge conflicts in a reasonable time.


I think this is a nuanced point.

Enforcing code style manually probably means not enforcing code style (at best, selective enforcement). But, I've yet to see a good tool that only enforces format on new/changed lines; and a reformat en masse is asking for trouble.

However, the arguments about a consistent style giving you benefits seems to fall flat when you've got dependencies that don't follow your style. If you're in go, and everyone uses gofmt, and always has, I guess that solves that. But you probably still have some code that's not in go (the OS you run on, at least), and that's probably not consistent. And if you're not in Go, you may have a different required style than your libraries and your language environment and nothing is consistent anyway; so why enforce anything beyond best effort blend in with the surrounding context?


I’m the kind of guy who has a strict linter and will force you to rewrite every PR you submit, but the result is that most of the time better style means you see the problems better, and very often the diff (or additional code) is halved.

I could merge that PR as is because it seems to work, but I don’t because I don’t want to deal with those hidden bugs in 10 days on production.


Yeah this is the one point I don’t get. I’m not sure what “stress over it” means here. I don’t know why I’d stress over it. I’d stress over having indeterminate and mixed styles. Strict style greatly lowers mental overhead, particularly on a team (esp if larger) where one person needs to jump in and work on code others wrote.


Notice the author said people who stress over code style. Using a code formatter and forgetting about it is a great way to not stress about it. Fighting with your tools or your team to get things a certain way is where the folly is, I think, and that's how I interpreted their comment.


Stick with languages, tools, and techniques that respect the math theory. Category theory, type theory, set theory, relational algebra, 1st order predicate calculus, 2nd order predicate calculus.

The OP has intuitively gravitated to this side.

Theory isn't going to have an answer to every question, just don't willfully oppose it. Everything else (best practices, fads, etc.) is a convention at best, sometimes useful, but should never be regarded as having special significance.

And in this regard, don't micro-services fly in the face of Occam's Razor? Or as the OP stated:

> Monoliths are pretty good in most circumstances


In a lot of cases Microservices are the cheaper option. Keeping your code in a monolith requires a lot more investment in tooling (to manage a large monorepo, because apparently our industry has collectively forgotten that you can actually build separate libraries and keep them in different repositories) and coordination between multiple teams working on a single large project. The simplest solution is often for management to throw their hands in the air and let each team stake out their own territory and own their own services. You certainly pay a cost for it in terms of overall infrastructure complexity, but maybe if you're lucky you get some resilience and scalability out of it.


That is what I am usually annoyed by when reading discussions like that. Someone assumes a context and throws in their experience as if it were applicable to every company and every project, using their own definitions for industry buzzwords and thereby making it more confusing for others, who then start doing the same in response, which adds nothing to the general understanding of things.

When reading article I assumed author was not going to tackle project with multiple teams. I expect he was writing about a lot of projects that don't ever live to need more teams. You have a lot of projects where you have 10-15 devs as one team on it and those are huge projects but there are no downsides of a monorepo because it still works fine and is single repo/application.

I would say monorepo should be defined as: repo where you have all projects for a company that don't need to be linked in any way. Because what I understand gave start to the term was Google having such use case, where they wanted ALL things in one repo.

Having one project in one repository cannot be called a "monorepo"; that is just a normal repository, and that's what a monolith in a single repo is.


Not having a formal cs background, the comments on HN that wax about predicate calculus and algebras always raise questions for me.

Is it a higher plane of programmer thinking or just abstract technical jargon and ideas that are suited for hard core technical cs research but bear little value in practical programming tasks?

Like knowing the Latin names, the full phylogenetic tree, and the exact relation of humans to the animal snuffling outside, when all most people need to know is whether it's a small animal like a raccoon or a large one like a grizzly bear.

Like, in the course of writing a feature engineering pipeline, tuning an ML model, and then deploying it as an API or as a scheduled batch predictor for another pipeline, at what point should one start thinking of parts of this in terms of “1st order predicate calculus” during the day-to-day tasks?

At what point in working on a feature card does someone generally think of relational algebra?

What tools don’t respect these things? What will happen by using them?

Like can anyone give an ELI5 of the day to day use of explicitly using these concepts to guide average programming tasks?


It kind of boils down (for me) to choosing a strongly typed functional language and relational database by default, justified by the theory behind those.

I can be convinced, by myself or others, that a particular project calls for something else.

Because I'm familiar with them and satisfied with the tools my defaults are F# and SQL Server.


Wait, so are you saying that all that terms like predicate calculus and algebra boil down to in practical terms is "use a strongly typed language and a relational database"? Okay, so aside from choosing databases and languages, do these concepts explicitly come up in day-to-day programming or get regularly thought about?


It's more like I don't need to think about these things. F# gives you immutability by default (but lets you use mutation if you really need it for a more efficient algorithm) and eliminates nearly all null values. Nulls can still leak in from an RDBMS. Working in an environment like this gets you to "if it builds, it works" (once you get used to FP thinking). You have to experience it to believe it.
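For anyone who wants a feel for the "nulls handled at the type level" part without installing F#, here's a rough Python analogue (assuming a checker like mypy; the names are made up for illustration):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)          # frozen ~ immutable by default
    class User:
        id: int
        email: Optional[str]         # "might be missing" is visible in the type

    def send_email(address: str) -> None:
        print(f"sending to {address}")

    def notify(user: User) -> None:
        # send_email(user.email)     # a type checker rejects this: email may be None
        if user.email is not None:
            send_email(user.email)   # narrowed to str inside this branch

It's a pale imitation of what an ML-family language gives you, but the effect is similar: the "did you handle the missing case?" question is asked by the tooling, not by a bug report.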


This is why I lean towards serverless monoliths these days. It works great.


>People who stress over code style, linting rules, or other minutia are insane weirdos

Counterpoint: People whose source is sloppy (inconsistent naming, indentation, brace placement etc) are almost certainly bad programmers.


True. But I don't think he's talking about that. I don't particularly care if you prefer tabs over spaces, or if you do brace on the line or on the next line, or one of the many other trivial details that don't affect the code quality.

I do care that you have a preference though. I do care that you're consistent in applying that preference and have reasons where you break with that preference.

I have my own preferences, but if I were in a shop where the style guide was to do the opposite of them, I'd do the opposite of them because it's not worth fighting over. It's more important for the code to be uniformly styled so it's easier to read and maintain rather than it to be styled in a way I prefer.


Yes, it is the inconsistency and lack of attention to detail that is the giveaway, rather than the use any particular convention. Unless they don't vertically align their braces, then they are clearly some sort of monster. ;0)


Are the things under "Things I've changed my mind on" things you believe now, or have stopped believing?

(Can't tell which way the change of mind is.)


Directly under "Things I've changed my mind on" the author writes "Things I now believe".


There is a new sentence in the "Things I've changed my mind on" section:

> Things I now believe, which past me would've squabbled with:

That phrase was not there originally when the article was posted.


Nice.


Damn, I usually don’t agree with every bullet point in these lists but I agree with everything here (even the last bit about the weirdos).

Is the OP me?


I'll add after 20+ years:

- Being able to communicate ideas > being a great technical developer.

- For every great developer that's an asshole, you can find a great developer that won't be an asshole. Don't keep assholes. Google the "no asshole rule".

- Making the team feel safe to freely discuss stuff they don't know and ask questions that might seem stupid in other teams makes for a great team that's not afraid to challenge assumptions and break through with new skills.

- TDD is a skill just like refactoring. Learn when it makes sense.

- 80% of architects in enterprise don't have the skills needed of them, including the things stated above.

- Learning the Theory of Constraints and applying it to find bottlenecks in the process, pipelines and structures of your teams/projects is one of the most useful things you can apply to become more productive.

- Learning what to measure and what not to measure, lagging vs leading indicators, can help you communicate to management about what changes truly need to happen to make your team work more effectively.

- Whiteboards (and remotely, miro boards) are very effective and can be used for easily 50% of meetings. But aren't.


> Whiteboards (and remotely, miro boards) are very effective and can be used for easily 50% of meetings. But aren't.

I agree that it's great to have a visualization aid, especially in remote meetings. But in my experience, for the needs of most meetings, Miro boards are a cumbersome, over-featured sink of time and effort. Then they invite you to leave them standing as sole documentation of the details of a meeting's outcome, which they're not very good at. Use them wisely!


This is refreshing to read, and I don't disagree with you on any of your points. I wish you the best in your career.

If I see your name again in a list of resumes someday, I'll put yours on the top.

- 21 year R&D greybeard (37 years old)


One thing that bothers me and is only tangentially mentioned is code formatting. I don’t have particular favorites, but I do appreciate consistency. Most importantly: when introducing or changing a formatting rule, apply it to the whole project in one commit. Never, ever dare to apply it “as-you-go” when implementing changes and touching existing files. It makes reviewing a nightmare.


I love it when engineers admit changed opinions!

here's a recent thread from charity majors: https://twitter.com/mipsytipsy/status/1349262816239263747?s=...

and from me: https://twitter.com/swyx/status/1349279194446876673?s=20

basically (things I no longer strongly believe or in fact did a full 180 on)

- "Time travel debugging is only for showing off in demos"

- "HSL is better than RGB because you can control hue, saturation and lightness as independent variables."

- "We should move all errors to compile time, the earlier we catch them the more productive we are"

- "React/Preact is the best paradigm to code websites."

- "Monorepos trade off contributor/bug reporter experience for maintainer convenience."

- "If you can support dark mode you might as well support infinite themes"


Test driven development requires you to have a spec upfront.

When you already have a testable spec, test driven development seems to be a net positive. It can be a good way to explore the spec before building the implementation. At the same time, you shouldn't force yourself to write all of the tests up front; additional tests will typically occur to you as you build the implementation.

If you don't have external factors forcing a spec, I feel that test driven development is a net negative. Forcing a spec up front often requires making decisions before you fully understand the problem. Once you have tests, you have invested in the decisions and are less likely to reconsider them when encountering things that should have influenced the design. On a related note, most interfaces, particularly internal interfaces, should be influenced by the implementation to keep the implementation simple. Building a simple yet comprehensive solution typically requires a full understanding of the problem.


> Test driven development requires you to have a spec upfront.

What? TDD was created by Kent Beck as part of XP, which is about as far from up-front design and spec as you can get.


How can you write a test for something that doesn't exist without deciding how it is going to work before building it?

Tests are a form of a spec.


The point is to never test how it's working. Instead declare what it should be doing. If you can't declare ahead of time what it should be doing (maybe this is what you mean "how it is going to work") then you can't build it, can you?


Either you’re making a point that’s technically correct but missing the point, or you’re misinformed about how TDD works. TDD is an iterative process that generates tests and code at the same time, in very small (1-5 lines of code) bite-sized pieces. Although it could be argued that tests are specs, they don’t happen up front.
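A minimal sketch of that cycle, in Python/pytest purely for illustration (the slugify function is hypothetical):

    # Red: write one tiny failing test.
    def test_slugify_lowercases_and_joins_words():
        assert slugify("Hello World") == "hello-world"

    # Green: write just enough code to make it pass.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    # Refactor, then repeat with the next tiny test...
    def test_slugify_strips_punctuation():
        # ...which fails against the code above and drives the next few lines.
        assert slugify("Hello, World!") == "hello-world"

The tests and the code grow together a few lines at a time; nobody sits down and writes a full spec's worth of tests before touching the implementation.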


11 years in and my one critique is:

>Designing scalable systems when you don't need to makes you a bad engineer.

You have to have a very good understanding of the 'need' ahead of time or you're effectively just playing a kind of lotto. Do the bare minimum and it meets the need, you win. Do the bare minimum and it doesn't, get ready to start over, you lose.


You can hedge by breaking up your codebase into logical services from the start which own their data sources and controllers. When the time comes to break up into microservices, you separate each service into its own application rather than grouping them in a monolith.
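As a sketch of what that hedge can look like (FastAPI used only for illustration; the service names are invented):

    from fastapi import APIRouter, FastAPI

    # In a real layout each of these routers would live in its own package
    # (billing/, catalog/, ...) that also owns its models and data access.
    billing = APIRouter()

    @billing.get("/invoices")
    def list_invoices():
        return []

    catalog = APIRouter()

    @catalog.get("/products")
    def list_products():
        return []

    # The monolith is just a composition root that mounts every service.
    app = FastAPI()
    app.include_router(billing, prefix="/billing")
    app.include_router(catalog, prefix="/catalog")

If one service later needs to scale or deploy on its own, the seam is already there: move its package behind its own app instead of mounting it here.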


Writing services in a separable manner is, in a sense, writing scalable software.


Write the microservice, deploy the monolith. You can save on hosting costs. It's not perfect, no.


I disagree. Well-designed systems can normally be scaled; the scaling problems are generally well understood. Scaling prematurely is silly and costs a lot of time and effort to support customers that don't exist.


I'm not clear on what part of my statement you're disagreeing with.


You can do the bare minimum with a path to scale, though.

But it seems like when things 'scale' a lot of changes are required regardless, so I feel like it's a bit of a moot point.


Plus, 'scalable' can refer to the code's performance or the code's malleability, and the two are often conflated in blog posts :P


I'm at year 25, and I agree with this list. And I wish more people who think I'm too 'old' agree as well.


> 'agile' / 'scum master' driven waste of everyone's time

please don’t correct this spelling


>Standups are actually useful for keeping an eye on the newbies.

Daily updates are a chore and pretty useless, except for the "manager" who has to check on people's work. I have been in places where we had one update every week or every other week, and that was proof (along with other places I have seen) that a daily update is just a waste of time. This article summarises it for me: "Daily Stand-Up Meetings Are a Good Tool for a Bad Manager" [0]

[0]https://www.yegor256.com/2015/01/08/morning-standup-meetings...


Wise to avoid an opinion on ORMs, otherwise this would devolve rapidly. My opinion is they are potentially useful, but I have never worked a job where I'd rather have an ORM.


I think ORMs are useful only to help provide a direct 1-1 statically typed interface to the underlying db functionality in the language of choice (example: Java <-> PostgreSQL ORM), i.e. select(User.UserId, ...).From(User); etc.

Any more abstraction/generalization and it starts to suck. An ORM for Java <-> SQL databases (as a whole) is still fine as long as it allows for specialized operations on your SQL db of choice (say, Postgres-specific functionality not available on MySQL), but as ORMs start to get more ambitious by hiding away the actual SQL queries, things go south (looking at you, Hibernate). Programmers SHOULD know SQL and if they don't, they NEED to learn it. It isn't the ORM's job to abstract it away. That's a recipe for disaster.
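SQLAlchemy Core is roughly that sweet spot in Python land, if I read the parent right: a typed, composable way to build SQL that never pretends it isn't SQL (SQLAlchemy 1.4+ style shown; the table is made up):

    from sqlalchemy import Column, Integer, MetaData, String, Table, select

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("email", String),
    )

    # Reads like SQL, composes like code, and raw SQL is always one escape hatch away.
    stmt = select(users.c.id, users.c.email).where(users.c.email.like("%@example.com"))
    print(stmt)  # roughly: SELECT users.id, users.email FROM users WHERE users.email LIKE :email_1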


I like them when they simplify repetitive things, require minimal configuration and are easy to get out of your way when you and the database need to have a grown up conversation.


You should be using both.


I agree with most of the points the author makes.

> Pencil and paper are the best programming tools and vastly under used

I'm not convinced of this one. I grew up with digital tools only. Can someone give me examples where this assertion is true?


> I grew up with digital tools only.

This implies that you exclusively use digital tools, so my first question would be: have you tried using pen and paper for things?

It's really hard to give compelling examples of where pencil/pen and paper are going to be better than digital tools because everyone is different. If you want to try it out, I recommend simply keeping a pad and a pen at your desk and when you're stuck on a problem try writing something down. Whether it's mapping it out, quickly jotting down some assumption or thing you noticed etc. It won't be suitable for everything but there's something... visceral about writing with a pen that I find indescribably helpful in working through complex problems. This was how I got started with this process, at least.

For a more concrete example, I spend a lot of time in my current role thinking about high-level solutions to problems. Things that need to serve multiple different teams' needs, replace existing solutions (aiming to solve their most obvious pain points), sometimes have non-obvious simple solutions, legacy complex solutions that everyone is used to and just kind of accepts etc. These designs come from months of discussions with different people, thinking, theorising, research into existing solutions etc, but one thing I have found absolutely invaluable throughout this process is the pen and paper on my desk. My notebook is full of unorganised notes that I jotted down when a lightbulb came to me one day, rough sketches or mappings of solution ideas, brainstormed lists of features etc. Most of these don't see the light of day, but as they get refined they move to more permanent, collaborative (as necessary) mediums. I do this progressively by flipping back through my physical notes and reviewing/extending/transferring them as necessary.


> My notebook is full of unorganised notes that I jotted down when a lightbulb came to me one day, rough sketches or mappings of solution ideas, brainstormed lists of features etc.

This is to me the biggest benefit of using pen and paper, but it requires shifting your attitude towards pen and paper to treat everything you write as disposable. Just freely sketching out ideas and using paper as a place to dump thoughts without concern for quality, formatting, or even having them be coherent, is probably the most productive way for me to solve problems - so much so that I have switched to using large post-it notes, rather than a notebook. The good notes stick around for a few weeks, and the bad ones go straight into the bin.


>rather than a notebook

Enjoyed these notepads (http://www.computinghistory.org.uk/det/46261/Floppy-Disk-Not...) for quick scribbles.

More substantial than post-its, yet the pages pull out easily.


> have you tried using pen and paper for things?

Yes.

If I can express something through writing (as with the cases you mention), there is no faster, more convenient tool than a note taking app + keyboard. It's instant, archived, editable, and globally available (where there's internet of course).

If it's an idea that needs to be visually designed (shapes, graphs, space), I find that I can mostly do this by redefining the problem in a way that can be written down.

There's a small percentage of time where having a diagram is truly essential to my workflow. I'm not opposed to whiteboarding in these cases, but it's not as vital to my process as the author implies it is for them.

> indescribably helpful

I tend not to be convinced by things that are indescribable. What I'm understanding is that it's a tool that makes you happy. That's sometimes a valid enough reason to do something.


Are you a visual learner at all? Or an artist at all?

Like, do you ever draw shapes, or correlate shapes?

Have you ever drawn a flowchart for somebody?


System diagrams.

I've never seen anyone draw a system diagram using any tool more quickly or better than they could on the back of a napkin.

If you start from the get-go trying to wrangle a UML diagram or Lucidchart, then your mind is immediately guided to do only the things the tool makes easiest.

That's why when you're inventing something from scratch, a pencil and a piece of paper are best. Of course, if your handwriting is messy, go back and use a tool later for your colleagues' sake.


In my experience, pen-and-paper helps me resist the urge to “just get started already”.

I don’t know why, but when I’m in the design-phase with a keyboard in my hands, I think I get too excited about building things and go straight to the text-editor to “hammer out the details” in code.

Away from the computer, I’m much more likely to complete a thorough design. And pen-and-paper remains my favorite tool for working on ideas that are too big to fit in working-memory.


For example, when proofreading writing, it's easier to catch mistakes when reading the document aloud and jotting down the needed edits on a printed copy of the document vs just reading it on the computer.

Or when doing something like database schemas, drawing out a rough version of what you need so you have something basic to reference vs just a mental model.


For me, one of the virtues of pencil and paper is that I can go outside, sit under a tree, and think. I think differently there than I do in my cube. I can't describe exactly how I think differently, but I do. Some problems are easier to think about there (and some aren't).


Pen and paper are extremely useful for me, but I'm one of the only guys I know of that use pen and paper often.


I think this one is very much up to interpretation and will vary widely between people.

As for me, I spend way more time thinking and planning and picking apart ideas than I do writing code. Often this involves a pen and paper. Better abstractions come from planning rather than diving in and writing code.


Some explanation as to why the opinion changed for or against would be nice.


After reading these, I am wondering how anyone could've thought differently; I agree with them all, and disagreeing with any of them doesn't seem reasonable to me. For example, typed languages are of course better than untyped ones, especially for large codebases with multiple levels of experience of the people who write them.

In other words, these seem to be exactly the opinions that the majority of HNers would agree with, and it is one of the least controversial lists I've ever seen.


I agree with most of the list. It's a good distillation of software development principles. In the spirit of HN, here's what I disagree with:

> Designing scalable systems when you don't need to makes you a bad engineer.

Creating overly scalable solutions is useful for learning, especially early on. I've done this.

> Despite being called "engineers," most decision are pure cargo-cult with no backing analysis, data, or numbers.

This is true in almost every human endeavour. It's not a useful opinion. Most decisions don't matter that much so adding data is a waste of time.

> Code coverage has absolutely nothing to do with code quality

I don't buy there's no correlation between code coverage and code quality. I'd much rather take code with >80% coverage than 0 coverage. My revised take is that the value of code coverage rapidly diminishes past 80% (maybe log scale-ish). I also think code coverage metrics are not especially useful outside of library code.

> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

Google tried getting rid of managers. It didn't work. Establishing lines of communication in large orgs is hard.


I agree with most of your assessments.

> Despite being called "engineers," most decision are pure cargo-cult with no backing analysis, data, or numbers

This is indeed so, most people are developers/programmers rather than software engineers. As a discipline, writing/crafting code still lacks the systematic approach of other engineering disciplines.

If I ask for a house to be built for me, people give me a plan as well as cost & time estimates, as part of a pretty standardized process. For software, many projects instead take a zig-zag route towards a foggy end goal.

That's why I'm working on a methodology to make things more systematic (there are of course many people working on methodology, some heavy like UML, some rather light/"agile", my focus is on methodology for data-intensive systems - systems that require machine learning components, which imposes a rather special list of process requirements on the project, for which nobody appears to have written up detailed guidelines/best practices, so that's ongoing).

> After performing over 100 interviews: interviewing is thoroughly broken. I also have no idea how to actually make it better.

I've interviewed a few hundred people in recent years, and been interviewed a few times myself. I agree most people use a broken process. My idea for making it better is to cover a range of areas: motivation/goals - interpersonal - project experience - management - algorithms & data structures - coding practice. No silly logical riddles, but examples that are common in daily life. So if they really have been doing what their CV says, they should not need to practice for any of it.


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

I never cease to be surprised when an otherwise intelligent person lays judgment on something they likely do not understand. At larger companies, these types of roles are invaluable. I'm going to charitably assume that this isn't referring to product management - it's dumb either way though.


This developer only has about 6 years of experience working at Amazon. It seems surprisingly naive how little this person understands about what a decent PM shields them from as an engineer, but I've heard working at Amazon sucks altogether. So, par for the course?


> Despite being called "engineers," most decision are pure cargo-cult with no backing analysis, data, or numbers

This is the most important part. Most companies don't have a mindset of doing research and analysis before taking a decision. No training is provided around this. Most "engineers" want to play with shiny toys without being held accountable for any decision.


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

I don't know how 6 years of experience in the industry can confirm this, but nonetheless I do agree that there are indeed software engineering teams that don't need, and are perhaps even impeded by, PMs - I just think such teams are rarer than suggested. I would agree if the numbers were reversed, i.e. only 7-10% of PMs are not needed.

Many developers love to hate on PMs but I think most teams overestimate themselves and think that they're mature and self-managing enough to not need a PM. It can be true that the individuals in the team are self-managing on their own but it's different when it's scaled to a whole team.

For example, it's difficult to finalize a decision when most experienced and opinionated team members have differing opinions. A PM serving as an arbiter can help resolve this deadlock. (Yes, there are strategies out there on how to make decisions w/o an arbiter but then, how do they decide which strategy to use?)


Regarding testing, I've noticed that every time I've thought I've done enough manual testing (and I do a lot of that before I push to QA) and I can add the automated tests "later" due to time crunch or whatever, I've found issues when I got around to adding them. This experience has fundamentally changed how I view automated tests.


It sounds like where I was around my 5th year of coding. After 15 years, though, I changed my mind again on certain things.

> Typed languages are better when you're working on a team of people with various experience levels

I've changed my mind back and forth multiple times about statically versus dynamically typed languages and after 10 years I settled on dynamically typed languages as superior in almost all cases. The main case where statically typed is better is if most people on your team are juniors and you are doomed to produce poor quality code so in this case you might as well 'label the spaghetti'...

I can say with almost 100% confidence that sticking to dynamically typed languages leads to worse quality in the long run. It has to do with loss of separation of concerns and subtleties of human psychology (gives false confidence, leading to neglect, encourages over-engineered interfaces which creates tight coupling between components and thus ossifies design mistakes and makes them less interchangeable and interoperable with other systems).

> Software architecture probably matters more than anything else.

This is the most important thing. 100% true. Glad the author seems to really get this. Some people don't figure this out after 20 years.

> Designing scalable systems when you don't need to makes you a bad engineer.

I disagree about this point. Writing scalable code can be elegant and fast development wise and performance wise. It just requires more careful design to get right. A bad or even 'mid level' engineer will over-engineer it; that's the real problem.

A bigger problem is when engineers create too many unnecessary abstractions and that doesn't scale in any way. In many cases, scalability requires very few abstractions in order to work. It requires very elegant code.


>> I can say with almost 100% confidence that sticking to dynamically typed languages leads to worse quality in the long run.

I meant to say 'statically typed languages' here but the popular narrative is so strong it must have seeped into my subconscious.


Solid list. I'll disagree with this:

> People who stress over code style, linting rules, or other minutia are insane weirdos

We use Black for Python, Prettier/Standard for Typescript. Code is readable across developers and there is no question about how it should be formatted.

They make this one easier, too:

> Clever code isn't usually good code. Clarity trumps all other concerns.


I read it as people who stress over what the style should be, not people who think there should be a style.

I have my opinions on what should be in .clang-format, but that's debatable and beside the point. HAVING a .clang-format and using it is a different question and something very important (obviously you should use gofmt or rustfmt or whatever is applicable, but clang-format covers most of it).
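For what it's worth, a starter .clang-format is tiny; the specific values below are just examples, the important part is that the file exists, is checked in, and is enforced:

    # .clang-format
    BasedOnStyle: LLVM
    IndentWidth: 4
    ColumnLimit: 100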


Hmm. Good point


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

True especially when you get a project manager with no CS/IT background at all (pretty common in consulting for some reason). Then the project is really doomed from minute one.


> TDD purists are just the worst

No, the worst are the people who never produce any tests (or even anything testable).


Always found the concept of "code coverage" to be a weird phrasing of what's actually happening.

The whole point is to answer the question: do your unit tests walk all possible paths in your code?

Now, the whole point of unit tests is to verify code behavior. See if it remains consistent if you change something somewhere, that kind of stuff. Don't want the new hire to make a well-intended change that ends up breaking things. This is why unit-tests exist. Amongst other things.

One may well write tests such that all possible code paths are checked... doesn't mean the actual code is performant or written in a clear manner, or written in a maintainable manner, etc...

I guess a better term would be "behavior coverage" or something?
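A toy example of the gap (hypothetical Python): the first test yields 100% line coverage while checking essentially nothing, and the second is the one that actually covers behavior.

    def apply_discount(price: float, percent: float) -> float:
        # Bug: should be price * (1 - percent / 100)
        return price * (1 - percent)

    def test_apply_discount_runs():
        # Executes every line, so full "coverage"...
        assert apply_discount(100.0, 10) is not None   # ...but asserts nothing useful

    def test_apply_discount_behavior():
        # The assertion that captures intent, and catches the bug above.
        assert apply_discount(100.0, 10) == 90.0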


> So called "best practices" are contextual and not broadly applicable. Blindly following them makes you an idiot.

Very true. "Best practice" are most often just "common practices" - how we've done things here forever.

That's great for standardization. It gets annoying when each team member does things their own way.

But it's a stretch to call them best. The research into what is truly best, A/B tests of various methods, or building prototypes of new experimental frameworks is rarely scheduled in sprint plans. It's actually some crazy programmers who try out these experiments in their off time based on their experience with the actual problems, which leads to truly innovative things.


Is it just me or do those things seem very... obvious? I mean - almost every single one of them makes me question how you could even have a differing opinion without immediately seeming like you're mindlessly repeating hype phrases.


Common sense isn’t common, and sometimes it helps to write down the obvious.

Obviously planes should only land with the gear down, yet not all planes have. Hence the use of checklists stating the obvious: lower the gear.


Is lowering the gear something you have changed your mind on though? ;)


Lowering gear is a strong belief tightly held. I don’t think I’ll change my mind on that. :)


> Monoliths are pretty good in most circumstances

This 1M %!

I have seen a company approached by an agency pitching to design and implement an e-commerce website consisting of multiple web services, and when I was asked for feedback, I said "start small; go with WooCommerce and when you hit it big, give me a call."

They are making 500K euros selling goods while being hosted on shared hosting with a WordPress / WooCommerce website, and the server spikes at 25% at most when traffic hits the website lol!

I will paraphrase the well-known "premature optimization is the root of all evil" and say:

    Over-engineering your infrastructure before going live will definitely lead you to failure, if not bankruptcy.


> So called "best practices" are contextual and not broadly applicable. Blindly following them makes you an idiot

That's the truth. "Best Practices" often force you to ignore what your problem actually is and view it in some twisted way that fits the practice. Bad.

Your problem is your problem. Your data is your data. Write code that matches what you have and what you're trying to do.


My biggest frustration around ‘best practices’ is when you’re tasked with learning a new framework to consider adopting it for the company and you’re given a week and they want ‘best practices’ to come out of it.

There is no way I can, from that, learn how to write good code in a completely new framework that everyone should then model their code after.

I can show what worked for a toy problem and something that’s hypothetically similar to a real problem, but that says nothing of real complexity or practices that are actually going to last more than a few months on the project.


> learning a new framework to consider adopting

The best practice is to not learn a new framework. Only learn the new concepts (technical and UX, if there are any), and if there is no noticeable benefit, continue using the old framework.

In many cases new frameworks do not bring improvements to a degree that makes switching worth it. They often either obfuscate it or don't realize it themselves, but most new frameworks do not invent any new concepts; they just dress existing (often old) concepts in new clothes. That _can_ sometimes be a major improvement in UX, but very often it is not.


In the specific examples I’m thinking of, it’s transitioning from a single-threaded, non-async infrastructure to a multi-threaded and async infrastructure in a new language. We had to learn a new technology stack for that, and frameworks, among other things, are a required part of it.

I’ve also had to do “best practices” around things like introducing grafana or whatever to our stack, and what the metrics should look like.


I basically have one meta best practice, which is: everything you make should work 100% of the time (not that I live up to this).


Mine is that "this should never waste a user's time unless absolutely necessary." And it is absolutely shocking how many developers do not care if they make a user wait a minute for something that should take <1 second if they spent 10 minutes in the code improving it.

To me, it is a moral failing if my laziness costs users' time. Remember how many users you have and multiply delays by the number of users delayed by your code.


Yes, but also don't over-optimize; it's normally not worth it at all.

The relevant decisions are often about the UI/UX design and flow as well as the architecture, but less about optimizing code sections here and there.


Yes, I failed to state that, as I felt it obvious, and you're right. Over-optimizing is just as bad as under-optimizing.

I will not spend 4 hours to save 5 users 5 minutes per year.


So called "best practices" are contextual and not broadly applicable. Blindly following them makes you an idiot

I've come to associate the expression "best practices" with "a set of unwritten/badly documented rules which accumulated over the years and which we apply manually and thus inconsistently, even though we could automate this at least in part".

Sometimes I wonder what would happen if my "best practice evangelists" from various projects I've been in met.


Well hey, the opinions are mostly my own, with the exception of standups. Newbies should probably have a mentor / shepherd that communicates with them directly and often.


At my company, we have a stand-up with between 15-20 people. Personally, I think everyone is kind of disengaged, and nobody really knows each other. Everything is done remotely, obviously because of COVID. The founder believes in working together in the office, so remote working may not be embraced.

So I would add what I learnt to the list: Keep stand ups small, between people who are actually working together on a specific problem/ product.


> People who stress over code style, linting rules, or other minutia are insane weirdos

> Code coverage has absolutely nothing to do with code quality

> TDD purists are just the worst. Their frail little minds can't process the existence of different workflows.

Amen!

I have skipped the Monolith one because I'm not "that sold" on it, but I'm certainly not on the side of "put your monolith through a cheese grater and end up with thousands of services"


Interviewing is mentioned. I'm currently doing a lot of that and I agree it's challenging to assess engineering skill. We currently use a fairly simple coding challenge that shouldn't be a problem for anyone with some experience. We basically pair program with them for an hour or 90 minutes (remotely for now) to see how they work through problems and write code. It has been pretty successful so far.


Best one: "TDD purists are just the worst. Their frail little minds can't process the existence of different workflows."


They can; they just typically don't want to go roll around in the muck once they have achieved full galaxy brain.

Mostly trolling but I do think following XP dogmatically early in my career gave me an accelerated appreciation and understanding for a lot of the other bullet points. I don’t always TDD now but I can typically tell if code was written via TDD or not just by reading it.


I seem to be the only one confused by the list. Could someone clarify? When the author says "Java isn't that terrible of a language." in the "Things I've changed my mind on" are they saying they used to think Java was not a terrible language and now think the opposite? (i.e. current opinion is that Java is a terrible language.)


He introduces that list with "Things I now believe, which past me would've squabbled with". It seems pretty clear.


Ah thanks, I must have skipped over that line.


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

it’s unfortunate you haven’t had the opportunity to work with a strong product team. i agree when they are not helpful it’s a negative impact on the team. but a great PM can elevate everyone who develops that product


Product manager and project manager are different roles. I think the author was specifically designating the latter.


> it’s unfortunate you haven’t had the opportunity to work with a strong product team.

He said that 10% were useful, not that all of them were bad.


This list is self-consciously blunt/rude (correlating programming practices with intelligence and sanity) and unaccountable ("who knows what I'll believe in a few more years"). It's an annoying combination. I imagine many readers will haughtily deploy these commandments against their colleagues.


I would add just one

- Automation of everything is what you need in 99% of cases. There are almost zero cases where you automate something and it turns out you could have done it more efficiently manually (and even then, the learning experience usually makes it worth it, while such manual work provides almost zero XP).


About the first point, remember you might have to maintain or refactor code you wrote 5-10 years ago. You will have a different experience level from your past self and likely won't remember why you made a number of architectural decisions.

IOW: typed languages are better, period.


> Typed languages are better when you're working on a team of people with various experience levels

I would extend this to say that a test suite, static analysis, linting, code formatting and a CI setup become more and more important the bigger your team is.


I would also add that, when a project extends over a long period of time, your past self is effectively another team member, and these tools help keep you aligned with them.


90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

Manager and sometimes dev lead here. I agree with this, but would qualify that it’s not because teams are “self-organizing” or self managing or whatever claptrap the consultant sells. A good manager - or better said, a good leader - can make a massive positive impact on the efficiency and quality delivered by a team.

There is a massive absence of good leadership in the software industry. All manner of process and certifications and other bureaucratic scaffolding have been invented to try and work around the absence of leadership, to little avail.

Make no mistake, those practices and processes and certifications can help good managers become better. But they don’t turn bad managers into good managers and don’t compensate for an absence of leadership.


Not totally enthusiastic about this type of developer who disses other professions.


Recognizing that other professions have just as many charlatans as software engineering, just that they are harder to weed out, doesn't mean you "diss" other professions. Every role is important, but not every person working in that role can actually do the job properly.


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

That's my experience, too, but only in short term. In the long term, they can make a difference (either way).


I wonder why he changed his mind to these. They don’t seem that controversial.


They don't seem that controversial? There have been endless religious wars about lots of them. Case in point:

> Typed languages are better when you're working on a team of people with various experience levels

This is the static vs dynamically typed controversy. And if you think the consensus is "types are good" today, that wasn't at all the case 10 years ago or so.

> Java isn't that terrible of a language.

That was (and probably still is) the knee jerk reaction of many younger devs (and some older ones)

> Designing scalable systems when you don't need to makes you a bad engineer.

Not controversial in itself, but tons of people do it all the time and think it doesn't apply to their case...

> DRY is about avoiding a specific problem, not an end goal unto itself.

Many people take it religiously...

> In general, RDBMS > NoSql

The NoSQL vs RDBMS religious wars were all the rage when Mongo and co first appeared (even earlier, with BS Java projects like Prevayler).

> Functional programming is another tool, not a panacea.

You'd be surprised how many think the opposite...


> So called "best practices" are contextual and not broadly applicable. Blindly following them makes you an idiot

Following best practices will make it impossible to think outside the box (it limits creativity).


It’s interesting, I think, how many people this resonates with. Why do people enter the software business with such strong beliefs about topics like this that after a few years they realize they were wrong?


Because universities have teachers that haven't spent six years in industry to learn these things, and they pass on their unsupported-by-real-world-experience ideas to their students.


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

I'd extend this point from "project managers" to any manager


First two bullets could do without the qualification, IMHO. “keeping an eye on the newbies” is just condescending.

My take:

* Typed languages are generally better.

* Standups are actually useful.

* Process is helpful, when not used as a straitjacket.


> Pencil and paper are the best programming tools and vastly under used

This should probably be rephrased to "taking notes", if I understand what it's suggesting.


I think the author meant writing ideas down and visualizing them with pencil so you can quickly amend what you already have drawn along with freeing your cognitive capacity


Inspired by this post, and something I’ve slowly been coming around to:

“Silos probably aren’t the problem. Silos with no conveyor belts between them probably are”


I’ve got 40 years in the software biz. This is an excellent list, and the author is very fortunate to have gotten to this point so fast.


> 90% – maybe 93% – of project managers, could probably disappear tomorrow to either no effect or a net gain in efficiency.

I can't agree enough.


Fantastic post, think he'll come around on this one

> People who stress over code style, linting rules, or other minutia are insane weirdos


> Designing scalable systems when you don't need to makes you a bad engineer.

Oh no. He is 100% correct. Also I am a bad engineer.


> YAGNI, SOLID, DRY. In that order.

Hell yes.


I think I love this person.


s/best practice/good practice/g

It shows an appreciation that there may be a better practice in the reader's situation and that the writer doesn't think they know everything forevermore.


Very good list in general, but I would disagree with one point:

> Software architecture probably matters more than anything else

The devil is in the details here, but the more I program, the more I feel that "software architecture", at least as it is often discussed, is actually not that important and is often actively harmful.

The architecture-driven approach assumes that the "correct" shape of a program should fit into a pre-defined abstraction, like MVP, MVVM (or god forbid atrocities like VIPER) etc., which has been delivered to us on a golden tablet, and that our job as programmers is to figure out how to map our problem onto that structure. In my experience, the better approach is almost always to identify your inputs and desired outputs, and to build up abstractions as needed, on a "just-in-time" basis. The other approach almost always leads to unnecessary complexity and fighting with abstractions.

The author also mentions SOLID - like architecture patterns, I'm always a bit suspicious of truisms about what makes good software, especially when they come in the form of acronyms. I generally agree that the principles in SOLID are sensible considerations to keep in mind when making software, but, for instance, is the Liskov Substitution Principle really one of the five most important principles for software design, or is it in there because they needed something starting with "L"?

After 10 years of programming, the biggest takeaway for me has been that the function of a program (correctness, performance) is orders of magnitude more important than its form. "Code quality" is important to the extent that it helps you get to a better functioning program (which can be quite a bit), but that's where its utility ends. Obsessing over design patterns and chasing acronyms is great if you want to spend most of your time debating with colleagues about how the source code should look, but the benefits are just not reality-based most of the time.


So you don't think architecture matters that much, but you've felt the pain of having one that doesn't fit forced on you? That seems... kind of unaware of you.

Just-in-time architecture is a perfectly valid approach to developing an architecture. The point is to develop one that is a good match to the problem. If you don't have that, you suffer - and that's the point of what you quoted.


I think this is a bit of a stretch. The assertion I took issue with is that "Software architecture probably matters more than anything else".

Yes if you want to deconstruct the term to the point of absurdity, and say that whatever shape your program ends up in implies some "software architecture" then you can say architecture is always relevant, but I would consider it a stretch to say this means it's also the number-1 priority in software development.

If you read my comment, you would see that I'm arguing that software results, and software development as a process matter much more than thinking about architecture. I would also argue that every time the term "architecture" came up in a professional discussion over the past several years, this was a sign we were going down the wrong path.


I realize this may get lost in the comments and also that I am partially just venting from a bad week at work with a particularly stressful dev, but...

Is anyone else just sick of developers with hyper-condescending tones like this?


Interesting take, that order on: YAGNI, SOLID, DRY.


After 35 years, my opinions are mostly the same.


Was 'scum master' intended?


>Java isn't that terrible of a language.

That is going to evolve in the next six years to "languages don't usually matter much except in specific domains in a few cases"


I'm not sure about that. As an industry, we're just beginning to realize that object orientation is hurting far more than it's helping. As more people realize this, better tools and techniques will be devised to help developers think of their programs more like computers do: as data, and not as objects which only make sense to some humans.

As such, PURE OOP languages like Java are going to start to fall out of favor in the industry for all workloads. Academia won't let go of OOP until much later.


I hope Java and .NET fall into disuse, I've never enjoyed programming less. I'm not a fool, I recognize that some great software can be created with those tools, but it just feels so much more painful to work with. It must be my taste.


Except when you care about the ecosystem, build tools, willingness of the rest of the team to maintain the codebase, etc.--so the language usually matters even if you're not religiously dedicated to one in particular.


?

That seems like a rather silly prediction. There is enough code written in Java that even if people stopped writing new Java this instant, it would still matter across a broad range of domains for more than the next 6 years. There is a world outside of what languages are trendy on HN right now.


If this were true, it would make no difference whether one uses assembly, or Java, or a magnetic needle to swap bits. But it's clear to everyone that it makes a big difference.

If someone says "language doesn't matter" it usually means they suffer from the blub paradoxon.


>Java isn't that terrible of a language.

Someone had to say it. So sick of the fashionable hatred towards what actually makes things work.


I disagree on software architecture. There's not enough theory and science in organizing code and understanding complexity. Most of what goes on here is just guessing. If you created "good architecture" it only means future requirements happened to fit your design, aka you made a good guess.

Rarely does anyone design anything that is efficient in the present and adaptable to any possible future, simply because there's no science and theory on how to design such a thing. There are some rules of thumb, but even following those rules of thumb people end up with technical debt most of the time anyway. It always happens: if a project lives long enough, more and more "design mistakes" made at project inception become evident.

I have personally found tacit programming to be the solution for this, but because there's no theory I can't explain to you definitively why tacit programming is the best way to deal with tech debt of the structural nature.
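For readers who haven't met the term: tacit (point-free) style roughly means building programs by composing small functions rather than naming intermediate data. A loose Python sketch of the idea (the pipeline is made up):

    from functools import reduce

    def compose(*steps):
        """Left-to-right function composition."""
        return lambda x: reduce(lambda acc, step: step(acc), steps, x)

    # Each step is small and independently swappable, which is (I take it)
    # the parent's claim about limiting structural tech debt.
    normalize = compose(str.strip, str.lower)
    tokenize = compose(normalize, str.split)

    print(tokenize("  Hello World  "))   # ['hello', 'world']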


> Software architecture probably matters more than anything else. A shitty implementation of a good abstraction causes no net harm to the code base. A bad abstraction or missing layer causes everything to rot.

That's fair, but shouldn't we strive to make the best possible guesses in the absence of said theory and science?


"People who stress over code style, linting rules, or other minutia are insane weirdos" -- best one hahaha


>Java isn't that terrible of a language.

You’re onto something, but not yet enlightened. The next level will be realizing that languages don’t matter at all.


It's good list. Except the TDD rant.

TDD can be reframed as codified specifications with test verifications. Then, the TDD purists do have a point.


Then it’s not (edit: pure) TDD.

The TDD purists I’m familiar with think if you write any code before you write the tests, you’re doing it wrong.


That’s a common misunderstanding of TDD, but it’s incorrect. TDD is an incremental cycle of write test, write code, refactor, repeat. Typically less than 5 lines at a time.


Given the order of the words you said, writing tests comes before writing code, as I said.


Writing one test comes before writing a few lines of code. It’s iterative. Tests (plural) don’t come before code. They happen at the same time.


That guy is a fruitcake. Pencil and paper are the best tools? I don't need other tools than my fingers, a keyboard, the IDE, and sometimes a search engine that delivers relevant results.

That guy is not a developer and if he is he's not talented.

I would never hire someone like him. Pure waste of time reading this.


I can't tell if you're serious.

Anyway, if you're not using a pen and paper you're either really smart or you're solving easy problems.


The author of this seems like a real joy to work with

> “TDD purists are just the worst. Their frail little minds can't process the existence of different workflows.”

Imagine thinking about colleagues as people with “frail little minds.” Yikes.

Many of the other points betray serious attitude and perspective problems, but it was forgivable until getting to that last bullet and realizing, no, this is just a person with a shitty attitude.


>Imagine thinking about colleagues as people with “frail little minds.” Yikes.

He's not thinking about ALL colleagues, but "TDD purists".

And even so, it's obviously a turn of phrase to dismiss the hardcore purism, not something the author generally feels about such people in their entirety.


Absolutely not. Setting up a strawman “absolute purist” (and yes, this is a strawman - no, you haven’t worked with real people taking the extreme strawman TDD position that could magically justify this) and then using needlessly insulting or belittling language to condemn the fake strawman is totally ridiculous. Your attempted defense of it is not acceptable.

You could just as easily say something like, “People who take TDD to an extreme are hard to work with and ultimately their inflexibility and unwillingness to appreciate other approaches does more harm than good.”

See how that’s not dehumanizing? See how that accomplishes the same thing without disgusting juvenile insults like saying someone of a different ideology has a “frail little mind”?


>and yes, this is a strawman - no, you haven’t worked with real people taking the extreme strawman TDD position that could magically justify this

No strawman required. There are real devs with extreme TDD zealotry. It's very possible that the author has worked with such devs, and I see no reason to doubt his experience. Your use of the word "strawman" above doesn't magically make such devs, who push TDD with zealotry as some silver-bullet solution, stop existing.

The only reasonable reading of your point, then, is that those devs are not enough to justify the phrase "frail little minds". It would take some far more extreme, non-existent-in-the-world level of TDD zealotry (what you call a "strawman TDD position") to justify the use of such a phrase.

But that's just you setting an arbitrary criterion. Which neither the author nor I share. I'm fine with using the term "frail little minds" for people with real-world levels of TDD zealotry. I furthermore understand that it's just a phrase used by the author to refer to devs that can't get over their (real-world-level) TDD zealotry, that it doesn't imply anything about them otherwise, and that it's not a big fucking deal.

>See how that’s not dehumanizing?

No, I fail to see how the authors turn of phrase was dehumanizing. I leave such characterization to actually dehumanizing things, for example racism.


I think this whole thread is an example of The Weak Man argument in action, it’s neat to see it in real life just after it was posted on HN.


I think it's more about making mountains out of molehills and ignoring the context and substance of TFA.


> “ I'm fine with using the term "frail little minds" for people with real-world levels of TDD zealotry.”

This is disgusting behavior on your part. You should be ashamed. Even more so for the mental gymnastics you’ve written in this spastic and bizarre comment to defend and endorse using flippantly insulting language.

The level of intellectual dishonesty in what you say is just staggering. Absolutely shambolic on your part.


>This is disgusting behavior on your part. You should be ashamed.

This is holier-than-thou virtue signalling on your part. You should be shunned.

Also, hypocritical. You're against "frail little minds" but are OK with yourself using "disgusting behavior" and "you should be ashamed", like some tv-evangelist...

>Even more so for the mental gymnastics you’ve written in this spastic and bizarre comment to defend and endorse using flippantly insulting language.

No mental gymnastics required. Devs are adults. The occasional flippantly insulting language is fine and has been part of teamwork since forever.

Not to mention the TFA author didn't target anyone in particular, but a category of people with zealot-like behavior.

If you can't call out stupid ideas as stupid, and zealots as such, then we've regressed to childish levels of protection from the real world (which perhaps is all the rage in some rapidly infantilizing areas of the world, enabled early by helicopter parents).


"Clever code isn't usually good code. Clarity trumps all other concerns."

Nail, meet hammer.

Mic drop.

True dat.

Word!


According to the author, FP is another tool. No, functional programming is NOT a tool but a paradigm: a formal system of computation based only on functions, called the lambda calculus.


A programming paradigm, or in plain English a way of thinking about and stating solutions to problems, is just a tool.

Just like OOP is just a tool and procedural is just a tool. I'm in the camp that FP should be the default tool for expressing business problems, jumping down to procedural or OOP for caring about some kinds of low-level details, but it does not make it anything other than "just a tool".


Not sure if I follow you. I thought a tool != an abstraction. According to the English dictionary, a paradigm is a standard, perspective, or set of ideas, while a tool is a piece of equipment == something concrete. Can a mathematical expression or equation be considered a tool?


You sound either like English isn't your first language, or like you're taking words far too literally.

In the sense we're using, a tool is anything we use to do what we're trying to do. A physical tool is a tool. A software tool is a tool. But if I'm trying to ship something, FedEx and UPS are tools (in this larger sense).

So an abstraction, a way of thinking, a way of writing code, is also a tool. I can use FP for the problems that it fits well, and use other ways of programming on other problems. "It's just a tool" means "it's not The One Right Way". Use it when it fits. Don't use it when it doesn't.


People regularly say "the right tool for the job". When it comes to software that can mean any decision you make on what to use for solving your problem: language, OS, IDE, editor, architecture, low or high level design, etc.


Point (of the author) proven.


I predict the following changes:

1) Typed languages are worse, and you want people with high levels of competence.

2) Java actually IS terrible. You win very little by designing for incompetent coworkers. Plan for competence, and hire appropriately. Small elite teams beat large incompetent teams every time. See #1.

3) Thinking through scalability upfront matters in many systems. Not all systems. Many systems.

4) SOLID is one way to do things, and not always the right one. OO isn't always the right one either.

5) YAGNI is more complex... you shouldn't build what you don't need yet, but you should architect for reasonable changes to requirements. Scope out all possible features early in the system design, create an architecture that supports them, and then implement the minimum viable system which can be easily incrementally refactored into that architecture.

6) There are ways to interview well. When you figure them out, you'll notice they give you a competitive advantage (sharing them doesn't make the industry more efficient as a whole, and it makes them less effective, so if you're smart, you'll keep them to yourself).


Hard agree with 4 and 5. Soft agree with 6; I think there's nothing wrong with sharing, especially since many of the important teachings are incompatible with the kind of people for whom these lessons aren't obvious after a couple of years of experience.

But points 1 to 3 give me a visceral negative reaction. I'll go into detail:

1) If you're writing anything that's not a script with a few lines, and you care about your software not being terrible to maintain and change for other people in real production settings, typed languages wipe the floor with dynamic languages (excluding ecosystem moats, just judging the languages).

Good types act as documentation, a higher form of tests, and a development paradigm that makes it easier to iterate on code, new or old, by building and modifying stacks of machine-checkable assertions that you can pass around (a quick sketch follows after point 3). Not enough devs are taught how to think with types and extract the most out of them, but any half-decent dev can be taught how to do it in a few days, and the difference is night and day to anyone who has experienced it first-hand.

2) Java is terrible, but it's not as unworkable as some other mainstream languages given its extensibility and ecosystem. Java was sold to managers as "prevents cheap workers from making mistakes", but it was never about that and it's mostly terrible at it.

3) I would say that's a deceitful take, for only the absolute minority of systems have to care about scaling beyond a single server with failover (and a separate database if going with managed DB). For those systems that absolutely do care about horizontal scalability the answers should be absolutely obvious to any senior dev from the moment the requirements are stated. And for those cases where someone feels there may be a need for a complex scalability solution but there's not an obvious case for it yet, development will most of the time be better off starting with the simple solution and selectively optimizing as needed, just don't make it impossible to switch later down the road.
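To make point 1 a bit more concrete, here's a minimal, hypothetical sketch (mine, not from the article) of what "stacks of machine-checkable assertions" can look like in practice; Java is used only because it comes up elsewhere in the thread, and the EmailAddress/Mailer names are invented for illustration:

    // A tiny domain type that turns an informal rule ("this string must be an
    // email address") into one the constructor and compiler enforce.
    public final class EmailAddress {
        private final String value;

        private EmailAddress(String value) {
            this.value = value;
        }

        // Validity is checked in exactly one place, so every EmailAddress that
        // exists in the program is known to be well-formed.
        public static EmailAddress of(String raw) {
            if (raw == null || !raw.contains("@")) {
                throw new IllegalArgumentException("Not an email address: " + raw);
            }
            return new EmailAddress(raw);
        }

        @Override
        public String toString() {
            return value;
        }
    }

    // The signature now documents and enforces the assumption; callers can no
    // longer pass an arbitrary String where an email address is expected.
    interface Mailer {
        void send(EmailAddress to, String subject, String body);
    }

The point isn't the validation itself but that the assertion travels with the value: anything accepting an EmailAddress gets the guarantee for free, which is what lets you stack these checks and pass them around.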


> 1) If you're writing anything that's not a script with a few lines, and you care about your software not being terrible to maintain and change for other people in real production settings, typed languages wipe the floor with dynamic languages (excluding ecosystem moats, just judging the languages).

I really hope the typed-vs-not-typed debate is over... types > no types. However... I used to believe that types were essential. I now believe that they are more like a luxury. Good tools are a luxury, not fundamental—it’s way more important to have good software design, and the core of good software design is really tool-independent. It requires deep understanding of whatever problem you’re trying to solve, not a deep understanding of software tools—unless you’re unlucky enough to be building some horrible middleware.

I also like to dabble in photography, where you see a similar phenomenon. When I take a really great photo, people sometimes remark, “wow, you must have a really great camera!” No, actually, that’s the least important part of the equation. There’s a reason why Canon’s top-of-the-line lenses are the “L” series—the L is for luxury. They make the job a little easier, but they’re not really essential.


> 1) If you're writing anything that's not a script with a few lines, and you care about your software not being terrible to maintain and change for other people in real production settings, typed languages wipe the floor with dynamic languages (excluding ecosystem moats, just judging the languages).

It's sort of the other way around. Typed languages result in most code being better, but they completely prevent some design patterns. I have yet to run into a piece of Java code -- beyond the completely standard use-case of a database-backed e-commerce / employee / inventory web site -- which doesn't go through one or two architectural contortions somewhere due to being forced into a specific paradigm.

The wins throughout the code from static typing take a back seat to the architectural contortions from not being able to express what you want.

Now, with incompetent developers, perhaps you want to prevent some design patterns, but those are workplaces I avoid like the plague.

> Java was sold to managers as "prevents cheap workers from making mistakes", but it was never about that and it's mostly terrible at it.

What do you think it's about?

I think that's a fair sale. If you're making a generic business system (you have things -- employees, inventory, customers, etc. you want to manage, which map 1:1 to objects, and which are exposed through a UX), it's nearly perfect, and you'll never hire qualified devs to work on those sorts of systems.

> 3) I would say that's a deceitful take, for only the absolute minority of systems have to care about scaling beyond a single server with failover (and a separate database if going with managed DB). For those systems that absolutely do care about horizontal scalability the answers should be absolutely obvious to any senior dev from the moment the requirements are stated. And for those cases where someone feels there may be a need for a complex scalability solution but there's not an obvious case for it yet, development will most of the time be better off starting with the simple solution and selectively optimizing as needed, just don't make it impossible to switch later down the road.

My experience -- having done a few successful startups and one or two failures -- is that it's critical to plan for success down the road. A major mode of failure is that decisions made early prevent success down the line. The system doesn't need to be scalable on day 1, but it absolutely positively needs to be designed for scale. The world is unpredictable, and you don't know the moment you'll hit exponential viral growth. At that point, you need to be able to refactor the system to hit scale virtually overnight. It doesn't always happen, but if you don't plan for it, it will never happen. It's the successes that define your career (and your bottom line).

The way I design systems, they tend to run on 1-3 servers on a random cloud provider (depending on level of failover), but they use abstractions mapping onto scalable architectures (e.g. generic key-value stores and similar). If I do need to scale, it's a refactor away. A system I'm working on right now has an especially complex data pipeline. Thinking through scalability cost a few months' time since it is complex, but without that, if we do hit scale, we'd hit a glass ceiling. As is, for now, we're sitting on top of a non-scalable managed RDBMS. If we do hit scale, I can migrate seamlessly to a horizontally scalable setup. At that point, we'd need to grow the team several-fold (esp. dev ops), but if you're scaling, that's okay; it's a fine problem to have.
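As a rough illustration of what "abstractions mapping onto scalable architectures" can look like (my own sketch, not the commenter's actual code; the KeyValueStore interface, the kv table, and the PostgreSQL-style upsert are all assumptions):

    import java.util.Optional;

    // Business logic depends only on this interface; swapping the single-server
    // RDBMS for a horizontally scalable key-value store later means writing
    // another implementation, not rewriting the callers.
    interface KeyValueStore {
        Optional<String> get(String key);
        void put(String key, String value);
    }

    // Day-one implementation backed by one managed RDBMS table:
    //   CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)
    final class JdbcKeyValueStore implements KeyValueStore {
        private final javax.sql.DataSource ds;

        JdbcKeyValueStore(javax.sql.DataSource ds) {
            this.ds = ds;
        }

        @Override
        public Optional<String> get(String key) {
            try (var conn = ds.getConnection();
                 var stmt = conn.prepareStatement("SELECT v FROM kv WHERE k = ?")) {
                stmt.setString(1, key);
                try (var rs = stmt.executeQuery()) {
                    return rs.next() ? Optional.of(rs.getString(1)) : Optional.empty();
                }
            } catch (java.sql.SQLException e) {
                throw new IllegalStateException(e);
            }
        }

        @Override
        public void put(String key, String value) {
            // PostgreSQL-style upsert; adjust the SQL for whatever RDBMS is in use.
            try (var conn = ds.getConnection();
                 var stmt = conn.prepareStatement(
                         "INSERT INTO kv (k, v) VALUES (?, ?) ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v")) {
                stmt.setString(1, key);
                stmt.setString(2, value);
                stmt.executeUpdate();
            } catch (java.sql.SQLException e) {
                throw new IllegalStateException(e);
            }
        }
    }

If growth does show up, a DynamoDB- or Redis-backed implementation of the same interface can be dropped in, which is the "refactor away" described above.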

I've advised startups, and ones which don't think through the long-term (and not just scalability; total addressable market, monetization, architecture to target adjacent markets, etc.) almost always fail, and if they succeed, it's by sheer, dumb luck.


Many would desire elite teams. On the other hand, such capable programmers are not the norm. Most are stuck with what they get and have to make it work.


Typed languages can be better utilized for a self-documenting codebase. The types are used to show clear intent in the code.


There is a self-reinforcing network of assumptions in that statement:

- Good architecture beats good code.

- There are things I can express in dynamic languages which are hard to express in static languages.

- If you're not expressing those things, typed static languages win.

- When you're contorting your architecture to fit into static languages, dynamic ones win.

- Of course, if you don't have experience with dynamic languages, you won't think to express certain things or structure code certain ways.

There are, of course, academic languages which do both well (including optional static type systems for Lisp), but I'm not aware of any in widespread industry use (with an ecosystem of libraries, documentation, etc.).


Good enough code that works, at the right time, beats good architecture. Too many examples in computer science history.

Simple, clean code that works is the best way to express logical intent.

Eventually, everything compiles down to byte code. You can do it the easy way or the hard way.

Industry will use the most effective tool. Academics can experiment with high level concepts, but if it doesn't scale out, it won't be popular in the industry.


10-ish years in, professionally. I agree pretty well with the list in the linked post.

edited for clarity


I predict you will be wrong.



