Hacker News | kbenson's comments

Perhaps this is just a form of technical writing you're unfamiliar with? Those titles are pretty standard for what I consider good technical writing section headers. LLM writing tendencies are tendencies LLMs have integrated by encountering them in existing writing. If your assessment standard for AI is just "common best practices for a subset of good writers", then I think perhaps you need to adjust how you assess to be a bit more nuanced.

For some reason people frequently suggest that my problem with LLM writing is that it's too good. Allow me to restate that I find fault with how the article is written, and that I do not in any way perceive this to be good writing. The flaws happen to manifest in a way that I would expect LLM flaws to manifest, which I also do not find to be good writing. I do not find LLMs to have absorbed good technical writing tendencies at all. Instead they absorb sensationalist tendencies that are likely both more common in their dataset and that are likely intentionally selected for in the reinforcement learning phase. Writing which is effective, in the same way that clickbait headlines and Youtube thumbnails are effective, but not good. I felt as though this article was, through its headers and overuse of specific rhetorical devices, constantly trying to grab my attention in that same shallow manner. This gets tiring at length, and good technical writing does not need to engage in such tendencies.

If you disagree and find this to be good writing, you are entitled to your opinion, but nonetheless this is my own feedback on the article.


Can you please share an example of what you perceive to be good writing so we can compare?

Sure, I guess? I feel like this is getting rather in the weeds and will not necessarily lead the conversation in any kind of particularly productive direction, but I will nonetheless take the opportunity to promote what I consider to be excellent writing. Dan Luu is a favorite of mine, and offers what I find to be a much more rewarding use of reading time. A sample picked basically at random: https://danluu.com/ftc-google-antitrust/

Ok, that's fair. He's a pretty unusual writer, or at least he's a writer that cares a great deal about his writing; he's talked in the past about his writing getting passes from other people, so there's at least a quality bar there.

Thanks for clarifying. In this case it might be comparing apples to oranges, as I'd be surprised if most people approach their writing like he does.


> For some reason people frequently suggest that my problem with LLM writing is that it's too good.

> I felt as though this article was, through its headers and overuse of specific rhetorical devices, constantly trying to grab my attention in that same shallow manner.

I think perhaps you're quick to assess as AI a certain type of writing, which many see as done quite well, in a way that's approachable and good at retaining interest. Perhaps you just don't like this type of writing that many do, and AI tries to emulate it, and you're keying on specific aspects of both the original and the emulation, and because you don't appreciate either it's hard for you to discern between them? Or maybe there is no difference between the AI and non-AI articles that utilize these techniques, and it's just your dislike of them which colors your view?

I, for one, found the article fairly approachable and easy to read given the somewhat niche content and that it was half survey of the current state of our ability to handle change in systems like these. Then again, I barely pay any attention to section titles. I couldn't even remember reading the ones you presented. Perhaps I've trained myself to see them just as section separators.

In any case, nothing in this stuck out as AI generated to me, and if it was, it was well enough done that I don't feel I wasted any time reading it.


I am a technical writer. This article is not good technical writing.

Good technical writing allows you to get to and understand the point in a minimum of time, has a clear and obvious structure, and organizes concepts in such a way that their key relationships are readily apparent. In my opinion this article achieves none of these things. It is also just bad insofar as its thesis is confused and misleading in a very basic way: the relationship between functional programming philosophy and distributed systems design is far more aligned than it suggests, and it sets up a false dichotomy of FP versus systems, when really the dichotomy is just one of different levels of design. One could write the exact same slop article about what OOP "gets wrong" about systems; it gets it "wrong" because low-level programming paradigms and techniques are in fact about structuring programs, not systems, and system design is largely up to designers. The thesis is basically "why don't these pragmatic program-level techniques help me design systems at scale?", or in other words, "why don't all these hammering techniques help me design a house?"


I would only loosely categorize this as technical writing, depending on how you categorize technical writing. It seems much more a survey of problems and discussion piece, with notes about projects making inroads on the problem. It's definitely not a "this is how you solve this problem, and these are the clear steps to do so" type of article. Maybe that's some of the disconnect in how we view it. If I was hoping that this communicated a clear procedure or how to accomplish something, I would be disappointed. I don't think that was their intention.

I came away with some additional understanding of the problem, and thinking there are various nascent techniques to address this problem, none of them entirely sufficient, but that it's being worked on from multiple directions. I'm not sure the article was aiming for more than that.


I'm a highly literate reader and writer on technical topics, and there are a lot of bad technical writers who think they aren't. Except perhaps for the title, which is way too narrow, the article is excellent writing about a technical topic (which is quite different from technical writing). But then I actually read it, so I know that he doesn't talk about a dichotomy between FP and systems, but rather between single programs and systems, and he explicitly says that his points aren't restricted to FP; it's just that because FP addresses the single-program issues so well, FP programmers are particularly prone to missing the problem.

> Second don’t use databases. Databases don’t type check and aren’t compatible with your functional code written on servers.

That isn't very useful by itself. What's your suggested alternative that aligns with your advice of "don't"? How does it deal with destructive changes to data (e.g. a table drop)?


There are no alternatives. My point is the whole concept was designed with flaws from the beginning.

>How does it deal with destructive changes to data (e.g. a table drop)?

How does type checking deal with this? What? I'm not talking about this. I'm talking about something as simple as a typo in your SQL query being able to bring your system down, unless you have testing or a giant ORM that's synced with your database.

I'm not saying distributed systems are completely solved. I'm saying a huge portion of the problems exist because of preventable flaws. Why talk about the things that can't really be easily solved and why don't we talk about the things that can be solved?


Oh, I thought you were speaking more to the topic and content of the article in question, which goes to great lengths to describe the sorts of problems that are much, much harder to catch than simple compiling of queries and checking them against the database, or the message store.

Even if you were to reduce the database to a simple API, the question then remains how do you make sure to version it along with the other portions of the system that utilize it to prevent problems. The point of the article seems to be to point out that while this is a much harder problem (which I think you are categorizing as "things that can't really be easily solved"), there are actually solutions being developed in different areas that can be utilized, and it surveys many of them.


>Oh, I thought you were speaking more to the topic and content of the article in question, which goes to great lengths to describe the sorts of problems that are much, much harder to catch than simple compiling of queries and checking them against the database, or the message store

Right. But we haven't even solved square one, which is the easy stuff. That's my point.

>Even if you were to reduce the database to a simple API, the question then remains how do you make sure to version it along with the other portions of the system that utilize it to prevent problems.

I said monorepo and monodeploys in this thread. But you need to actually take it further than this. Have your monorepo be written in a MONOLANGUAGE, no application language + SQL, just one language to rule them all. Boom. Then the static check is pervasive. That's a huge section of preventable mistakes that no longer exist, now that the type that represents your table can never be out of sync.
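
To make that concrete, here's a minimal, purely illustrative TypeScript sketch of the "schema as a type, queries as ordinary code" idea. None of the names below come from a real database library; it's just a toy showing why drift between the table definition and the code that queries it becomes a compile error when both live in one language:

    // Illustrative only: a toy in-language "table" whose schema is a type.
    // Nothing here is a real database API.

    interface UserRow {
      id: number;
      email: string;
      createdAt: Date;
    }

    // The "table" is just typed data in the same program as the queries.
    const users: UserRow[] = [];

    // A "query" is an ordinary typed function. Writing `row.emial` here, or
    // dropping the email column from UserRow without updating this function,
    // fails the build instead of failing in production.
    function findByEmail(email: string): UserRow | undefined {
      return users.find((row) => row.email === email);
    }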

I know it's not "practical" but that's not my point. My point is that there's a huge portion of problems with "systems" that are literally obviously solvable and with obvious solutions it's just I'm too lazy to write a production grade database from scratch, sorry guys.


> I said monorepo and monodeploys in this thread

And how does that help when you are dealing with schema changes that need to be rolled out across AWS, your local DB, and a Kafka cluster? The whole point of this article was how to approach the problem when there are different components in the system which make a monorepo, and what it provides for this, infeasible or impossible.

> I know it's not "practical" but that's not my point. My point is that there's a huge portion of problems with "systems" that are literally obviously solvable and with obvious solutions it's just I'm too lazy to write a production grade database from scratch, sorry guys.

The article talks about database solutions that help with this problem.

I'm uncertain how to interpret your responses in light of the article, when they seem to be ignoring most of what the article is about, which is solving exactly these problems you are talking about. Is your position that we shouldn't look for solutions to the harder problems because some people aren't even using the solutions to the easy problems?


The article is about coping mechanisms for a world where we already accepted fragmented systems: polyrepos, heterogeneous languages, independently versioned databases, queues, infra, and time-skewed deployments. Given that world, yes, you need sophisticated techniques to survive partial failure, temporal mismatch, and evolution over time.

That is not what I’m arguing against.

My point is more fundamental: we deliberately designed away static safety at the foundation, and then act surprised that “systems problems” exist.

Before Kafka versioning, schema migration strategies, backward compatibility patterns, or temporal reasoning even enter the picture, we already punched a hole:

Polyrepos break global static checking by construction.

Databases are untyped relative to application code.

SQL is strings, not programs.

Deployments are allowed to diverge by default.

That entire class of failure is optional, not inherent.

When I say “we haven’t solved square one,” I’m saying: we skipped enforcing whole-system invariants, then rebranded the fallout as unavoidable distributed systems complexity.

So when you say “the article already offers solutions,” you’re misunderstanding what kind of solutions those are. They are mitigations for a world that already gave up on static guarantees, not solutions to the root design mistake.

I’m not claiming my position is practical to retrofit today. I’m claiming a huge portion of what we now call “hard systems problems” only exist because we normalized avoidable architectural holes decades ago.

You’re discussing how to live in the house after the foundation cracked.

I’m pointing out the crack was optional and we poured the concrete anyway.

I’m telling you this now so you are no longer uncertain and are utterly clear about what I am saying and what my position is. If you are unclear, please logically point out what isn’t clear, because this phrase: “The article talks about database solutions that help with this problem.” shows you missed the point. I am not talking about solutions that help with the problem; I am talking about solutions that make a lot of these problems non-existent within reality as we know it.


You say don't use databases, and that we had the option to use something different and did not, and chose this path.

I ask you what to use instead, and how to deal with datastore versioning.

You say you're talking about how we don't have type safety that extends to the remote systems we're interacting with.

I ask how that helps with the versioning problems in these systems where you need to deal with applying changes across distributed systems, which specifically is not solved by having types in lockstep definition between systems, because in applying a change there are problems to work through.

You note we did all this deliberately and we didn't have to. I keep asking you what the other option is, because you keep acting like there is one, but refusing to give an example of what that would be, because a monorepo is no solution for the problems being discussed here in the article, which to be clear, are not limited to code.

You've made it very clear you think we should have done "something" else, but refuse to articulate what that is. If it's not known, or not explored, then I posit we didn't "choose" this path, it's the path that was open to us.

> You’re discussing how to live in the house after the foundation cracked.

You keep saying we should have used something else for the foundation that wouldn't crack, but refuse to explain what this mythical material is.

What is your proposed alternative, or are you just waxing theoretical?


You still don’t get what I’m saying, and at this point it’s not a disagreement, it’s a category error.

I am not saying the problem is impossible. I am saying the problem is obvious, stupid, and solvable in theory, and that the only actual solution is to rewrite the foundation from scratch. That is precisely why there is no practical path forward. The impracticality is the point.

When you keep asking me to name an alternative, you’re implicitly assuming I’m advocating for some incremental migration or deployable fix inside the current ecosystem. I am not. There isn’t one. If you want whole system static guarantees, you need a database that is designed from the beginning to be part of the same language and type system as the application. Queries are programs. Schemas are types. The datastore is a compiled artifact, not a remote string interpreter. That requires a fundamentally different database.

That is the alternative. And it is completely unrealistic to retrofit into the existing world. Which is why we are stuck.

So when you say I’m “waxing theoretical,” you’ve missed the entire point. The theory is the indictment. The fact that the solution is obvious but unusable is exactly the problem. We built ourselves into a corner decades ago, and everything we now call “systems engineering” is about managing the consequences of that decision.

The article is fine. It surveys techniques for surviving in the world as it exists. I am not disputing their usefulness. I am saying those techniques exist because we accepted a broken foundation and normalized it. That distinction matters.

You keep arguing as if I’m refusing to answer your question. I’m answering it directly. The answer just isn’t one you like. The only real fix is a ground up rewrite of the data layer and its relationship to code, and that’s never going to happen at scale. That’s the conclusion. Understand?

If you think that means the path we’re on was the only one available, then we fundamentally disagree about what “choice” means in system design.

You’re still trying to refute a design critique by demanding an escape hatch, which only proves you never understood what was being critiqued.


> and time-skewed deployments.

Yeah, those pesky laws of physics, getting in the way of purity.

You simply cannot deploy simultaneously to an active fleet of servers.


Then don't if it's impossible. Make it have eventual consistency. The deploy window is the zone of impurity but if the deploy completes, you're pure.

The zone of impurity is very hard to constrain. Consider a phone with a client app that is out of storage space and can't download an update. Even if you "solve" the problem by forbidding older client versions from connecting, you can't make that cutoff too quick or you'll annoy users. So you're talking about a lag time of at least weeks on every deploy, which means you've already started multiple new deploys before the old ones drop out of the support window. A large system is always in a state of transition.

Then constrain it as much as you can.

Clients that are outside of your window of control obviously can't be part of the zone of purity. In that case the management practices in the article apply. You need to differentiate and control what you can, which is exactly what Haskell does.

What I'm referring to is that a HUGE number of systems that can be in the zone of purity aren't. Microservices architecture does not colloquially refer exclusively to systems outside of our control; it refers to building a constellation of services WITHIN our control (think of a monolith within our control, but broken down into microservices, with the unintended consequence of introducing all these extra impure issues).

So a huge number of problems people face in distributed systems will be EXACTLY the same as the problem faced by the phone with a client app, simply because of bad choices and NOT because the component lies outside of their control. Like, why am I facing this update problem between two services I control? Why should I even face this problem in the first place, and why did I make this problem exist when I CONTROL both services? There's no excuse here.


The test is whether you can successfully identify phishing attempts by approximating what they look like in the wild. Bypassing the test entirely means there's no data on whether you're susceptible to this, and just because someone knows there's a header and how to bypass something doesn't mean they aren't also the kind of person to be distracted and click on stuff they shouldn't.

This method of test passing wasn't okay when Volkswagen did it, and it's not appropriate for employees at a company that asks them to take the test, for the exact same reason.


Small nitpick with the title, because I still find it humorous all these years later, but it's not "Mt. Gox" like Mount Gox, it's MTGOX, which stands for Magic The Gathering Online Exchange, as it started out as a trading platform for that, and adopted bitcoin early as a way to facilitate trades of the cards without cash.


It was literally branded Mt. Gox. In the logo and everything. Also, he had already shuttered the MTG project and simply re-used the dormant mtgox domain.


The Wikipedia page agrees with you as well: https://en.wikipedia.org/wiki/Mt._Gox


There's some discussion about potential Citogenesis here: https://en.wikipedia.org/wiki/Talk:Mt._Gox#Possible_citogene...


> More importantly, McCaleb replied to my email. In response to my question "Did anyone ever actually trade card for card or money for card on Mtgox.com?", he replied "yeah they did". I've asked him some followup questions on dates & volumes & closure reason, but I guess that settles that... Does anyone recall the OTRS procedure for storing emails from primary sources? It's been years since I've last done it. --Gwern (contribs) 20:36 17 February 2014 (GMT)


> For me though Lua is clearly better than JS on many different dimensions and I don't appreciate the needless denigration of Lua, especially from someone as influential as you.

Is it needless? It's useful specifically because he is someone influential, and someone might say "Lua was antirez's choice when making redis, and I trust and respect his engineering, so I'm going to keep Lua as a top contender for use in my project because of that" and him being clear on his choices and reasoning is useful in that respect. In any case where you think he has a responsibility to be careful what he says because of that influence, that can also be used in this case as a reason he should definitely explain his thoughts on it then and now.


I noticed from reviewing my own entry (which honestly I'm surprised exists) that the idea of what it thinks constitutes a "prediction" is fairly open to interpretation, or at least that adding some nuance to a small aspect in a thread to someone else's prediction counts quite heavily. I don't really view how I've participated here over the years in any way as making predictions. I actually thought I had done a fairly good job at not making predictions, by design.


This point is driven home by Justin’s insatiable desire to uncover the mystery of his Spirit Stone, and the ancient Angelou civilization.

My mind immediately jumped to the idea that this is a play on words for the ancient Maya civilizations, and Maya Angelou. Apparently I wasn't the only one.[1]

1: https://gamefaqs.gamespot.com/boards/197483-grandia/53620555


That was my immediate thought as well, under the assumption the lazy fsync is for performance. I imagine in some situations delaying the write confirmation until the write actually happens is okay (depending on delay), but it also occurred to me that if you delay enough, and you have a busy enough system, and your time to send the message is small enough, the number of open connections you need to keep open can be some small or large multiple of the amount you would need without delaying the confirmation message to actual write time.
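
As a rough, made-up-numbers illustration of that multiple (using the Little's law approximation that in-flight work ≈ arrival rate × hold time):

    // Back-of-envelope only; the rate and delays below are invented.
    // Little's law: concurrent in-flight requests ≈ arrival rate × hold time.

    const messagesPerSecond = 10_000; // assumed incoming write rate
    const ackOnReceiveSec = 0.001;    // confirm as soon as the message arrives
    const ackAfterFsyncSec = 0.05;    // confirm only after a (lazy) fsync batch

    const openWithoutDelay = messagesPerSecond * ackOnReceiveSec; // ~10
    const openWithDelay = messagesPerSecond * ackAfterFsyncSec;   // ~500

    // Roughly a 50x multiple in connections held open, purely from when
    // the confirmation is sent.
    console.log({ openWithoutDelay, openWithDelay });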


Choosing your risk level and working within it isn't stupid. Not knowing the risk when it's easy to gather some more info and then acting in ignorance is, which is what GP was describing, and likely why they called their own actions stupid.


I think I know what this is, but the description is so much in its own context that I'm not sure. It's for web apps that also want an offline local version that works, and that deals with syncing the data when online again and either local or remote is updated?

It probably markets and explains itself perfectly fine for someone in that space and/or looking for this solution, so I'm not sure that's actually a problem. But if you also want to stick in the mind of someone who sees this and doesn't have any current interest, but may stumble into needing a solution like this in the future, a few extra words in your initial description might help it be understood more quickly and be remembered even if they don't dive into it. Or maybe it's fine and I'm just a bit slow today.


That's a fair point. The description does assume some familiarity with local-first patterns. I'll think about how to make the "why you'd want this" clearer for people outside that space. I appreciate the honest feedback.

