
> A redesign that gets replaced 2 years later is a catastrophe.

> Somebody Should Have Been Fired For This

This person is not a good resource. Uber was a very fast-growing company, in terms of both product and staff. Turnover in architecture happens. Calling this a catastrophe and clickbaiting about firing engineers over what amounts to a rounding error in Uber's overall finances is gross.

I understand this person is trying to grow their Substack with these inflammatory claims but I hope HN readers aren’t falling for it. This person’s takes are bad and they’re doing it to try to get you to become a subscriber. This is hindsight engineering from someone who wasn’t there.


Right on. Meta employees, fuck you for building the surveillance state we live in today. You are the fucking scourge and death of the 2000s internet. Eat shit, I care not for your "privacy concerns."

A lot of people are saying it's disconnected, but even if it were, if a string of your country's top rocket experts started disappearing, you wouldn't just sit idly by.

Or maybe the government should not require companies to KYC you for every stupid little thing you do in this world. What happened to requiring only the information that's actually needed? Why do I need to be KYC'd when buying a banana, ordering delivery, etc.?

Because of the inevitable breaches and leaks - KYC is the illicit activity. The selling point of KYC was preventing fraud and money laundering. It doesn't actually do that. Search for "largest money laundering settlements" and you will find 5 banks and one crypto scam.


As an outsider I still can't believe anybody gets this emotional about Apple.

> The most underrated skill to learn as an engineer is how to document.

Document why. I can read code. I want to know _why_ this nebulous 200-line function called "invert_parameters" even exists. What problem did you have that this function solved? Why was that problem there in the first place? Note your expectations for the code's intended lifetime. Hell, I write comments that apologize, just so a future reader knows the code wasn't meant to be great: I was in a time crunch, or a manager was breathing down my neck, or some insane downstream/upstream thing did something... well, insane.

Paint a picture of your mindset when writing something, especially if it's non-obvious; that gives the reader all the additional context not captured in the code itself.

Obviously this isn't the only good documentation rule, but I wish people - juniors and seniors alike - would do this more often, especially at the workplace.
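As a sketch of the kind of "why" comment being described (the function name, the legacy-importer scenario, and the lifetime note are all invented for illustration):

```python
def invert_parameters(params):
    """Undo the rate mangling done by the (hypothetical) legacy v2 importer.

    Why: the old exporter serialized rates as reciprocals for reasons
    nobody remembers, so we have to invert them back on read.

    Intended lifetime: delete this once every tenant is off the v2 importer.

    Apology: written in a crunch before a cutover deadline; the handling
    of zero rates (mapped to 0.0 rather than raising) is cruder than it
    should be.
    """
    return {k: (0.0 if v == 0 else 1.0 / v) for k, v in params.items()}

print(invert_parameters({"base_rate": 4.0, "surge_rate": 0}))
# → {'base_rate': 0.25, 'surge_rate': 0.0}
```

The docstring answers "which problem did this solve", "why was the problem there", and "how long should this live", none of which are recoverable from the code itself.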


Compared to a member of US Congress, or the senior executive branch, or the CEO class, they’re still nobody and the “little guy”.

Not that it’s defensible behavior.


> It's also funny to see the "the economy is roaring!" "incomes are up!". Great, have they increased by as much as inflation? Can I afford a home?

Gen Z home ownership is outpacing millennial home ownership at the same age. There's a lot of denial around this topic, because everywhere you turn there's a Reddit post or news headline about how housing is impossible to afford.

> Pay's less.

Less than the narrow window of post-COVID mania pay maybe, but inflation adjusted wages are actually up over the long term.

> Nobody can take a break. Pressure's on.

Annual working hours per worker are flat or slightly down from when your mom's generation made up most of the workforce: https://ourworldindata.org/grapher/annual-working-hours-per-...

When it comes to happiness, though, the numbers don't actually matter. Perceptions do. Your and your mom's beliefs that everything "isn't working any more," that young people can't possibly be buying homes, that real wages are down, and that working hours are up are actually very common, especially if you zoom in on demographics who read a lot of certain types of social media (Reddit especially!) where classic doomerism prevails.


Mythos is only real when it's actually available. If you're using Opus 4.7 right now, you know how incredibly nerfed Opus's autonomy is in the service of perceived safety. I'm not so confident this will be as great as Anthropic wants us to believe.

I find the scale of some companies hard to understand; they're laying off multiples of the total headcount of the largest company I've worked at.

A playable 3D dungeon arena prototype built with Codex and GPT models. Codex handled the game architecture, TypeScript/Three.js implementation, combat systems, enemy encounters, HUD feedback, and GPT-generated environment textures. Character models, character textures, and animations were created with third-party asset-generation tools.

The game that this prompt generated looks pretty decent visually. A big part of this is likely due to the fact that the meshes were created using a separate tool (probably Meshy, Tripo.ai, or similar) and not generated by 5.5 itself.

It really seems like we could be at the dawn of a new era similar to Flash, where any gamer or hobbyist can generate game concepts quickly and instantly publish them to the web. Three.js in particular is really picking up as the primary way to build games with AI, despite the fact that it's not even a game engine, just a web rendering library.


The more interesting part of the announcement than "it's better at benchmarks":

> To better utilize GPUs, Codex analyzed weeks’ worth of production traffic patterns and wrote custom heuristic algorithms to optimally partition and balance work. The effort had an outsized impact, increasing token generation speeds by over 20%.

The ability of agentic LLMs to improve computational efficiency/speed is a highly impactful domain I wish were tested with more than benchmarks. In my experience Opus is still much better than GPT/Codex in this respect, but given that OpenAI is getting material gains out of this kind of performancemaxxing, and has an increasing incentive to keep doing so given cost/capacity issues, I wonder whether they will continue optimizing for it.


Because we used to be a high-trust society where degenerate gamblers wouldn't mess with scientific equipment to rip each other off.

I would argue that individualism is the root, more than the work ethic. I'm someone with a 50th-percentile work ethic but a 99th-percentile focus on community. I only have so much energy, but I make sure I reserve a good portion of it (say, at least 30%) for acts that have no "direct" benefit to me at all. Hosting a party without worrying whether the invitees' contributions are equitable. Paying a nephew's rent for a month so he can travel. Mowing the yard for a neighbor in need. Buying presents for people I see twice a year. Calling up a distant friend just to remind them how much I like them.

Friendship and community are harder work than your job, because no one makes you do it. It pays off in peculiar ways many years later, if ever at all. It’s senseless effort, but only figuratively. The returns I get are incalculable, but only literally.


The First Amendment prevents the federal government from restricting speech or punishing people for speech (subject to a few exceptions).

This was not that.

This was a civil defamation case; the parents brought a case of actual material harm and harassment of epic proportions before two separate judges in two separate states, and both courts found that Jones had indeed caused harm and harassment... and continued to do so over years.


> I'm either in a minority or a silent majority. Claude Code surpasses all my expectations.

I looked at some stats yesterday and was surprised to learn Cursor AI now writes 97% of my code at work, mostly through cloud agents (watching it work is too distracting for me).

My approach is very simple: Just Talk To It

People way overthink this stuff. It works pretty well. Sharing .md files and hyperfocusing on various orchestrations and prompt-hacks-of-the-week feels about as interesting as going deep on vim shortcuts and IDE skins.

Just ask for what you want, be clear, give good feedback. That’s it


I feel like this time it is indeed in the training set, because it is too good to be true.

Can you run your other tests and see the difference?


"Bonus bonus chatter: The xor trick doesn’t work for Itanium because mathematical operations don’t reset the NaT bit. Fortunately, Itanium also has a dedicated zero register, so you don’t need this trick. You can just move zero into your desired destination."

Will remember for the next time I write asm for Itanium!
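For context, the "xor trick" in the quote is the classic register-zeroing idiom (`xor eax, eax` on x86); the identity it relies on can be checked in a few lines of Python:

```python
# x ^ x == 0 for any integer: each bit cancels itself out.
# This is why `xor eax, eax` is the idiomatic (and shorter) x86
# encoding for zeroing a register, rather than `mov eax, 0`.
for x in [0, 1, 0xDEADBEEF, -42, 2**64 + 3]:
    assert x ^ x == 0
print("xor-zeroing identity holds")
```

The quoted chatter's point is that on Itanium this idiom poisons the result if the source register's NaT ("not a thing") bit is set, but the dedicated zero register makes the trick unnecessary there anyway.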


It’s incredible how forgiving you guys are with Anthropic and their errors, especially considering you pay a high price for their service and receive lower quality than expected.

I do feel this trend in my life. I have a job, which I'm grateful for, but nothing feels satisfying anymore, and I feel like it is much harder to connect with people or form deep relationships, especially in this field, unless you already have a clique in your workplace.

On top of that, AI is generally demotivating to the majority of people. Despite all the hype from Altman and co., I feel like people just don't have a positive view of the future of their careers because of AI. And once you lose hope, it's just downhill from there.

Also, I feel like society still hasn't fully recovered from COVID: so many third places gone, restaurants closed, etc. It's getting there, but people are isolating more and more. I'm in my late 20s and my social life doesn't feel like even half of what it was before COVID.


If Signal wants to show you a notification with message text, it needs to put it on the screen through an OS service. That service was storing the plaintext on the device.

The "bug" discussed in the article is only part of the problem.

The main problem, that notification text is stored in a DB on the phone outside of Signal, is not addressed. To avoid that you have to change your settings.

In this case, the defendant had deleted the Signal app completely, which likely marks that app's notifications for deletion from the DB internally. The bug fixed here is that notifications were not being removed from the local database when the app that generated them was removed; now they are.

  Impact: Notifications marked for deletion could be unexpectedly retained on the device
  Description: A logging issue was addressed with improved data redaction.
  CVE-2026-28950
They classify this as a "logging issue," so it sounds like the notifications were not actually in the database itself but ended up in some log.

Why are you letting the LLM drive? Don't turn on auto-approve; approve every command the agent runs. Don't let it make design or architecture decisions; you choose how it's built, and you TELL that clanker what's what! No joke: if you treat the AI like a tool, you'll get more mileage out of it. You won't get 10x gains, but you will still understand the code.

Here, the author means the agent over-edits code. But agents also do "too much" in another sense: they touch multiple files, run tests, do deployments, run smoke tests, etc., and all of it gets abstracted away. On one hand, it's incredible. But on the other hand, I have deep anxiety over this:

1. I have no real understanding of what is actually happening under the hood. The ease of just accepting a prompt to run some script the agent has assembled is too enticing. But, I've already wiped a DB or two just because the agent thought it was the right thing to do. I've also caught it sending my AWS credentials to deployment targets when it should never do that.

2. I've learned nothing. So the cognitive load of doing it myself, even assembling a simple docker command, is just too high. Thus I repeatedly fall back on the "crutch" of using AI.


This is, in my opinion, attempting to say the right thing with entirely the wrong perspective:

The people you say are getting "shafted" always got shafted. Their works are the inspiration for all artists and people who lay eyes on them; maybe they got paid when they made the work, maybe they managed to sell it, but probably not. And still, other artists (and machines) will remember and be inspired by it, sometimes to the point of verbatim copying (which is extremely common among human artists as well, with verbatim copying and replication being an actual sought-after skill).

(Those about to shout "LICENSING", that's a very new invention and we're terrible at it. What are you going to do, cut out the part of your brain that formed new connections while touching GPL code?)

The person (singular) who is actually getting "shafted" at each use is the artist you didn't hire to do the job of making your new work, because it is their skill that got replaced: a skill built from a lifetime of studying other art and practicing themselves, replaced with a skill built from a machine studying other art and, by virtue of some closed loops, likely also "practicing" itself.

Still, there is shafting at large, but the obsession with training data is misplaced in that it entirely ignores how society and art worked beforehand.

At the same time, for most of the things you're likely using the tool for, there probably would never have been an artist in the first place: for example, if you're just making your PowerPoint prettier, or if your commission is, as is so often the case, ridiculous, offering only a single-digit dollar sum per work, which no artist should take (RIP the poor souls who take such work anyway).


I'd also add that healthcare is a serious shit-show as it currently stands, and the best strategy is to stay as healthy as you possibly can to avoid having to go to the doctor, if you can even find one who will see you.

Remote work is an interesting one. Before, you had 8-9 hours a day of serious social activity with, if you were lucky, people you enjoyed. Even if you didn't enjoy the people, you were at least social. Remote takes that away, and as the article notes, social contact is a definite plus for well-being.


> I was never under the impression that gaps in conversations would increase costs

The UI could indicate this by showing a timer before context is dumped.


Thanks, I couldn't be bothered to read the thing due to the ridiculous chest-thumping and self-aggrandizing.

There's a common conversation that goes on around AI: some people swear it's a complete waste of time and a total boondoggle, some that it's a good tool when used correctly, and others that it's the future and nothing else matters.

I see the same thing happen with Kubernetes. I've run clusters of various sizes for about half a decade now, and I've never once had an incident that wasn't caused by the deployed product itself. I recall one particular incident where we had a complete blackout for about an hour. The people predisposed to hating Kubernetes did everything they could to blame it all on that "shitty k8s system." Turns out the service in question had simply DOS'd itself by opening up tens of thousands of ports in a matter of seconds when a particular scenario occurred.

I'm in neither the "k8s is the future" camp nor the "k8s is total trash" one. It's a good system for when you genuinely need it. I've never understood the other two sides of the equation.


I was just curious, so I actually tested this.

Using fio:

  Hetzner (cx23, 2 vCPU, 4 GB): ~3900 IOPS (read/write), ~15.3 MB/s, avg latency ~2.1 ms, 99.9th percentile ≈ 5 ms, max ≈ 7 ms

  DigitalOcean (SFO1, 2 GB RAM, 30 GB disk): ~3900 IOPS (same!), ~15.7 MB/s (same!), avg latency ~2.1 ms (same!), 99.9th percentile ≈ 18 ms, max ≈ 85 ms (!!)

Using sequential dd:

  Hetzner: 1.9 GB/s

  DO: 850 MB/s

These are the low-end plans on both, but the Hetzner instance is 4 euro and the DO instance is $18.
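For anyone wanting to reproduce a similar comparison, a minimal fio job file along these lines would exercise mixed random 4k I/O. The parent didn't share their actual fio invocation, so the engine, block size, queue depth, and runtime here are all assumptions:

```ini
; Hypothetical job file -- the parent's actual fio flags are not given,
; so engine, block size, queue depth, and runtime are all guesses.
[global]
ioengine=libaio
; direct=1 bypasses the page cache so we measure the disk, not RAM
direct=1
runtime=60
time_based=1
group_reporting=1

[randrw-4k]
; mixed random 4k read/write, matching the IOPS numbers above
rw=randrw
bs=4k
iodepth=8
size=2g
```

Run it with `fio <jobfile>` on each box and compare the completion-latency percentiles; as the numbers above show, tail latency (99.9th percentile and max) is where providers with similar average IOPS can differ dramatically.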

