kg's comments

It sounds like you know this but what happened is that you didn't do 4 weeks of work over 2 days, you got started on 4 weeks of work over 2 days, and now you have to finish all 4 weeks worth of work and that might take an indeterminate amount of time.

If you find a big problem in commit #20 of #40, you'll have to potentially redo the last 20 commits, which is a pain.

You seem to be gated on your review bandwidth, and what you probably want to do is apply backpressure - stop generating new AI code if the code you previously generated hasn't gone through review yet, or limit yourself to, say, 3 PRs in review at any given time. Otherwise you're just wasting tokens on code that might get thrown out. After all, babysitting the agents is probably not 'free' for you either, even if it's easier than writing code by hand.

Of course if all this agent work is helping you identify problems and test out various designs, it's still valuable even if you end up not merging the code. But it sounds like that might not be the case?

Ideally you're still better off, you've reduced the amount of time being spent on the 'writing the PR' phase even if the 'reviewing the PR' phase is still slow.


> The fact that a computer loads it into memory does not make it automatically a ‘copy’ in the copyright sense.

IIRC this exact argument was made in the Blizzard vs bnetd case, wasn't it? Though I can't find confirmation on whether that argument was rejected or not...


Most words don't have especially precise meanings, context is everything. So emojis being imprecise is not a unique problem.

And emojis can be especially dense with information in a way that can be pretty convenient. You can scan a 96x32 pixel block of 3 emojis to quickly gather information that would have required reading 1-2 whole sentences, potentially.

Emoji are also more 'casual' in a way that can be helpful. You can tap the 'heart' emoji on a message to a colleague or friend to express your gratitude or thanks for something without having to prevaricate over exactly what language to use to avoid seeming insincere or overly affectionate.


> Emoji are also more 'casual' in a way that can be helpful. You can tap the 'heart' emoji on a message to a colleague or friend to express your gratitude or thanks for something without having to prevaricate over exactly what language to use to avoid seeming insincere or overly affectionate.

I think this is one of the few emoji uses that feels almost universal to me across all ages.

It just saves time if you can heart a message instead of typing "I agree with you."

Additionally, if the emoji is a reaction (as with the GitHub/Discord heart), it's even better at times: the conversation doesn't get derailed, since the other person has nothing to respond to, but they still see that you appreciated them. Win-win.

> Most words don't have especially precise meanings, context is everything

If someone wants words to have precise meanings, English isn't the best language for it; Sanskrit or Polish is. I was taught Sanskrit in school, and I think a language with overly precise meanings can actually take too much time to think in, which makes conversation take too long. It may also be that Sanskrit is almost extinct in spoken form, aside from religious scriptures and rituals, so it's just too hard to learn even for those of us who speak fluent Hindi. (FWIW it had 7 tenses IIRC, and singular/dual/plural forms for a single root verb.)

I'm not sure about Polish, though; I mention it only because I've heard Polish also described as a language with more precise meanings, and there was an HN post about it a while ago in the context of AI.


It's a real shame that BYOB (bring your own buffer) reads are so complex and such a pain in the neck because for large reads they make a huge difference in terms of GC traffic (for allocating temporary buffers) and CPU time (for the copies).

In an ideal world you could just ask the host to stream 100MB of stuff into a byte array or slice of the wasm heap. Alas.
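
For reference, a hedged sketch of what the current JS Streams BYOB path looks like. ReadableStreamBYOBReader and byobRequest are the real Streams API (Node 18+ / modern browsers); the toy source and the helper names (`makeByteStream`, `readAllByob`) are invented for illustration:

```javascript
// Toy byte source for illustration; in practice this would be a fetch body or
// file stream. A BYOB-capable stream must be constructed with type: 'bytes'.
function makeByteStream(data) {
  let offset = 0;
  return new ReadableStream({
    type: 'bytes',
    pull(controller) {
      if (offset >= data.length) {
        controller.close();
        // Release any outstanding BYOB request (0 bytes, stream closed).
        controller.byobRequest?.respond(0);
        return;
      }
      const view = controller.byobRequest?.view;
      if (view) {
        // Fill the reader-supplied buffer directly: no allocation, no copy
        // into an intermediate chunk.
        const n = Math.min(view.byteLength, data.length - offset);
        new Uint8Array(view.buffer, view.byteOffset, n)
          .set(data.subarray(offset, offset + n));
        offset += n;
        controller.byobRequest.respond(n);
      } else {
        // Fallback for non-BYOB readers: allocate a fresh chunk.
        controller.enqueue(data.slice(offset));
        offset = data.length;
      }
    },
  });
}

// The BYOB read loop: one scratch buffer is recycled across reads, so large
// transfers do not allocate a fresh chunk per read.
async function readAllByob(stream, bufSize = 65536) {
  const reader = stream.getReader({ mode: 'byob' });
  let buffer = new ArrayBuffer(bufSize);
  const out = [];
  for (;;) {
    const { value, done } = await reader.read(new Uint8Array(buffer));
    if (done) break;
    out.push(...value);       // copy out what we need...
    buffer = value.buffer;    // ...then reuse the same (transferred) buffer
  }
  return Uint8Array.from(out);
}
```

The trick (and the source of the complexity) is that the buffer you pass to read() is transferred and handed back via value.buffer, so one scratch buffer serves the whole transfer.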


I wonder if you can get most of the benefit of BYOB with a much simpler API:

    for await (const chunk of stream) {
        // process the chunk
        stream.returnChunk(chunk);
    }
This would be entirely optional. If you don’t return the chunk and instead let GC free it, you get the normal behavior. If you do return it, then the stream is permitted to return it again later.

(Lately I’ve been thinking that a really nice stream or receive API would return an object with a linear type so that you must consume it and possibly even return it. This would make it impossible to write code where task cancellation causes you to lose received data. Sadly, mainstream languages can’t do this directly.)
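
A userland approximation of that proposed API can be sketched with an async iterator plus a free list. Everything here is hypothetical - `withChunkRecycling`, `returnChunk`, and the `fill` callback are invented names, not anything in the standard:

```javascript
// Hypothetical sketch of the proposed API: chunks come from a free pool, and a
// consumer that calls returnChunk puts them back for reuse; a consumer that
// doesn't gets the normal allocate-and-let-GC-collect behavior.
function withChunkRecycling(fill, chunkSize = 65536) {
  const pool = [];
  return {
    async *[Symbol.asyncIterator]() {
      for (;;) {
        const chunk = pool.pop() ?? new Uint8Array(chunkSize);
        const n = await fill(chunk);   // source writes into the chunk
        if (n === 0) return;           // end of stream
        yield chunk.subarray(0, n);
      }
    },
    // Entirely optional, as in the proposal: hand the chunk back for reuse.
    returnChunk(view) {
      pool.push(new Uint8Array(view.buffer));
    },
  };
}
```

Used exactly as in the loop above: `for await (const chunk of stream) { ...; stream.returnChunk(chunk); }`.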


BYOB reads definitely add complexity, but the performance gains are significant in memory-sensitive applications. It’s frustrating that managing larger streams isn’t straightforward, especially given the increasing importance of efficiency in JavaScript.

Unfortunately, browsers "solved" this by intentionally adding APIs that enable websites to do this to you. It wasn't possible to abuse users this way until the relevant APIs for detecting focus and occlusion were added. :(


It's a huge conflict of interest for an ads company to develop a browser, let alone the browser with...(checks notes)...77% market share.



This, but as a built-in browser feature, configurable per-site, and also for all the other potentially useful/creepy web APIs.


Both could work. The API could be permission based. E.g. without consent the app would always see itself as in focus.


So one could stub out https://developer.mozilla.org/en-US/docs/Web/API/Page_Visibi... or fake it via an extension (there are actually some already that disable the API by injecting JS that always returns "visible", f.e. https://addons.mozilla.org/firefox/addon/disable-page-visibi...). Don't know if there are more.
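
The stub itself is tiny. A sketch of what such an injected script might do - the property names (`hidden`, `visibilityState`) are the real Page Visibility ones, but the override approach is illustrative and won't defeat every detection method:

```javascript
// Force a document-like object to always report itself as visible, roughly
// what the extensions above do by injecting a script into the page.
function forceVisible(doc) {
  Object.defineProperty(doc, 'hidden', {
    get: () => false,
    configurable: true,
  });
  Object.defineProperty(doc, 'visibilityState', {
    get: () => 'visible',
    configurable: true,
  });
  // A real extension would also swallow the events before the page sees them:
  // doc.addEventListener('visibilitychange',
  //   (e) => e.stopImmediatePropagation(), true);
}
```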

But just use Chrome! Our website only works in Chrome. Everyone should just be using Chrome. What's wrong with a Chrome monoculture?

:)


As someone who moved out of California a few years ago I assure you that it is exceedingly easy to move to a different state, assuming you have the money to move at all.


Good to hear, and I hope (and suspect) you are doing well. People leaving is good for other states and good for the affordability of California.


You can see https://bsky.app/profile/nytdiff.bsky.social for some examples of how the NYT frequently revises titles and abstracts after publication. Most of them seem harmless at least.


> etcd is a strongly consistent, distributed key-value store, and that consistency comes at a cost: it is extraordinarily sensitive to I/O latency. etcd uses a write-ahead log and relies on fsync calls completing within tight time windows. When storage is slow, even intermittently, etcd starts missing its internal heartbeat and election deadlines. Leader elections fail. The cluster loses quorum. Pods that depend on the API server start dying.

This seems REALLY bad for reliability? I guess the idea is that it's better to have things not respond to requests than to lose data, but the outcome described in the article is pretty nasty.

It seems like the solution they arrived at was to "fix" this at the filesystem level by making fsync no longer deliver reliability, which seems like a pretty clumsy solution. I'm surprised they didn't find some way to make etcd more tolerant of slow storage. I'd be wary of turning off filesystem level reliability at the risk of later running postgres or something on the same system and experiencing data loss when what I wanted was just for kubernetes or whatever to stop falling over.


> This seems REALLY bad for reliability? I guess the idea is that it's better to have things not respond to requests than to lose data, but the outcome described in the article is pretty nasty.

It is. If it really starts to crap out above 100ms, even a small hiccup in the network-attached storage of the VM it's running on can trigger it.

But it's not as simple as that: if you have multiple nodes and one starts to lag, kicking it out is the only way to keep latency manageable.

A better solution would be to track a cluster-wide disk latency average and only kick a node that is both slow and much slower than the other nodes; that would also auto-tune to slow setups, like someone running it on spare HDDs in a homelab.
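
That policy is easy to sketch. The thresholds and function names below are invented for illustration (median rather than mean, to keep one outlier from dragging the baseline up):

```javascript
// Hypothetical relative-latency eviction policy: a node is kicked only if it
// is slow in absolute terms AND much slower than the cluster median, so a
// uniformly slow homelab on spare HDDs evicts nobody.
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = s.length >> 1;
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function nodesToEvict(fsyncLatencyMs, { absoluteMs = 100, relativeFactor = 3 } = {}) {
  const med = median(Object.values(fsyncLatencyMs));
  return Object.entries(fsyncLatencyMs)
    .filter(([, ms]) => ms > absoluteMs && ms > relativeFactor * med)
    .map(([node]) => node);
}
```

Under this sketch, `{a: 10, b: 12, c: 500}` evicts only `c`, while `{a: 400, b: 420, c: 390}` evicts nobody.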


Yes, wouldn't their fix likely make etcd not consistent anymore since there's no guarantee that the data was persisted on disk?


Yes, but they wrote it's for a demo, and it's fine if they lose the last few seconds in the event of an unexpected system shutdown.

Also, for prod, etcd recommends running on SSDs to minimize the variance of fsync/write latencies.


Getting into an inconsistent state does not just mean “losing a few seconds”.


How would you get into an inconsistent state based on an fsync change?

Edit: I meant what sequence of events would cause etcd to go into an inconsistent state when fsync is working this way


Data corruption, since fsync on the host is essentially a no-op. The VM's filesystem thinks the data is persisted on disk, but it isn't - and the pod running on the VM thinks the same …


Yes, they totally missed the point of the fsync...


well, the actual issue (IMHO) is that this meta-orchestrator (karmada) needs quorum even for a single node cluster.

The purpose of the demo wasn't to show consistency, but to describe the policy-driven decision/mechanism.

What hit us in the first place (and I think this is what we should fix) is the fact that a brand-new NUC-like machine, with a relatively new software stack for spawning VMs (Incus / ZFS etc.), behaves so badly that it can produce such hiccups for disk I/O access...


They used a desktop platform with an unknown SSD and ZFS. There's a chance that with a proper SSD they wouldn't even have gotten into trouble in the first place.


well, indeed -- we should have found the proper parameters to make etcd wait for quorum (again, I'm stressing that it's a single-node cluster -- banging my head trying to understand who else needs to coordinate with the single node ...)


That’s a design issue in etcd.


CAP theorem goes brrr. This is CP. ZooKeeper is also CP (though its default reads can be stale). Postgres (k3s/kine translation layer) gives you roughly CA, and CP-ish with synchronous streaming replication.

If you run this on single-tenant boxes that are set up carefully (ideally not multi-tenant vCPUs, low network RTT, fast CPU, low swappiness, `nice` it to high I/O priority, `performance` over `ondemand` governor, XFS) it scales really nicely and you shouldn't run into this.

So there are cases where you actually do want this. A lot of k8s setups would be better served by just hitting Postgres, sure, and don't need the big fancy toy with lots of sharp edges. It's got a raison d'être, though. Also, you can just boot slow nodes from the cluster and run a lot of them.


It's still used for job hunting and recruiting unfortunately. I got a real message from a real recruiter for a 5k+ employee software company on it just last week. My friends and colleagues dealing with layoffs have had to update their profiles. :(


Increasingly widespread age restriction laws?


Like the ones we have in the UK? I can't even look at the craft beer Sub-Reddit anymore without handing over my ID.

