Hacker News | new | past | comments | ask | show | jobs | submit | karmakaze's comments

Upon reading this:

> The issue is not whether my students are valuable. In the long run, they are invaluable. The issue is that their value emerges slowly, whereas AI delivers immediate returns.

I had the thought that it's more like hiring only autistic/on-the-spectrum employees who will, on a whim, do exactly what their interpretation was, or possibly worse, do literally what you said without considering further consequences.


Sounds a bit like externalising the learning cost (of AI models) is preferred to investing the time into training the students.

You think? I'd get banned from HN if I brought up that these models are fundamentally built on theft, but we just don't put anyone in jail because they had the foresight to bribe the Trump admin, like everyone else who wanted favor did.

What good is a certification/logo? That means they passed whatever proxy was used. Smells like a cash grab, as most certifications are or become.

We'd need proof with a verifiable supply chain.


Just noticed the tagline of the paywalled page:

  Democracy Dies in Darkness

I can't see the page at the moment (I'm getting the "No digest yet" message).

I made a thing[0] that splits stories by day without mixing old popular with new stories. That helped cut down my HN visits to a few times per day. [I had originally made it to list all stories on one page and load story links at stops in a subway/metro commute.]

What I'm finding now is that there's too much noise at the top and what I really want to see are the stories not upvoted by mainstream/populist interests--if anyone knows a solution to that, please share.

[0] https://hackerer.news/


I've been thinking about microservices again in light of AIs doing most of the writing. It makes sense mainly as a forcing function. Everything in the post is true, but nothing in a monolith forces it. I can't imagine trying to keep AIs in their lane in a huge monolith.

In the end I also settled on clear module boundaries, but only for languages that have them. Ruby/Rails apps don't have such boundaries (unless you make pure functions as Gems or something), whereas I know Java/Kotlin/Scala can have enforced boundaries. Same for databases. My day job is on a monolithic app with a main database. Every module/component/whatever has access to all tables and occasionally does take advantage of it. That's hard to stop when people don't actually work together, on the same team, or even on the same continent.

What's really hard about microservices is that most people have no experience writing them to actually be self-sufficient and free of dependent failure modes. The example in the post is a classic: if you have a UserService, you're doing it wrong (aka a distributed monolith). At most you should have a UserManagementService, and every other service should keep the minimum information downstream of it that it needs to operate without changes from upstream.
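The "minimum information downstream" idea is essentially event-carried state transfer. A minimal sketch, assuming a hypothetical OrderService that replicates just the user fields it needs from events published by a UserManagementService (all service, event, and field names here are made up for illustration):

```python
# Hypothetical sketch: an OrderService keeps its own minimal copy of user
# data, updated from events published upstream by a UserManagementService,
# so it can keep operating even when the upstream service is down.

class OrderService:
    def __init__(self):
        # Local, denormalized store holding only the fields THIS service needs.
        self.user_cache = {}  # user_id -> {"email": ..., "country": ...}
        self.orders = []

    def on_user_event(self, event):
        """Consume UserCreated/UserUpdated events from the upstream service."""
        self.user_cache[event["user_id"]] = {
            "email": event["email"],
            "country": event["country"],
        }

    def place_order(self, user_id, item):
        # No synchronous call to a UserService here: if the upstream is
        # unreachable, orders still work from locally replicated data.
        user = self.user_cache.get(user_id)
        if user is None:
            raise KeyError("unknown user; event not yet replicated")
        order = {"user": user_id, "item": item, "ship_to": user["country"]}
        self.orders.append(order)
        return order


svc = OrderService()
svc.on_user_event({"user_id": 1, "email": "a@example.com", "country": "CA"})
order = svc.place_order(1, "keyboard")
```

The point is the failure mode: taking an order never depends on the user service being up, only on having seen its events at some point in the past.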


Do what Netflix did and run servers at ISPs (or at their providers or Cloudflare points).

It's kind of weird that we still don't have distributed computing infrastructure. Maybe that will be another thing where agents can run near the data they're crunching on generic compute nodes.


To quote the parent comment:

> The general simplistic answer from those who never had to design such a game or a system of “do everything on the server” is laughably bad.


If my roommate and I are both playing against each other on a server less than 10ms away, in the normal scenario at 60fps there is still ~60ms between me clicking and it appearing on your screen, and another 60ms before I get confirmation. Now add real-world conditions like "user is running YouTube in the background" or "wife opens Instagram" and that latency becomes unpredictable. You're still left with the same problems. Now multiply that by 10 people who are not all the same distance from the ISP and the problems multiply.
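The ~60ms figure falls out of a frame-aligned latency budget. A back-of-envelope sketch, where every number is an illustrative assumption rather than a measurement:

```python
# Back-of-envelope budget behind the "~60ms at 60fps" claim: each hop that
# waits for a frame or tick boundary adds up, even with a 10ms-away server.
frame_ms = 1000 / 60          # ~16.7 ms per rendered/simulated frame

input_sample = frame_ms       # click waits for the next client frame
uplink = 10                   # one-way network to a nearby server (assumed)
server_tick = frame_ms        # server processes input on its next tick
downlink = 10                 # one-way network to the other player (assumed)
render = frame_ms             # other client displays it on its next frame

total = input_sample + uplink + server_tick + downlink + render
print(f"{total:.0f} ms")      # ~70 ms worst case; averages land near 60
```

Jitter from background traffic only widens this, since a missed frame boundary costs a whole additional 16.7ms step.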

What does that have to do with solving the problem?

Sorry to say this, but I don't think you understand how any of this works. Whenever someone proposes "edge computing" as a way to solve trust problems, I know they're just stringing together fancy-sounding words they don't understand.

What "Netflix did" was ship a dead-simple static-file-serving appliance for ISPs to host, with Netflix auth on top. In their early days, Netflix had one of the simplest "auth" stories because they didn't care.


There are different levels of cheating. We can avoid the worst cases by not putting the game state/netcode on the user's computer, which basically makes it like an X server.

It would add some latency but could be opt-in for those that care enough for all players in a match to take the hit.


All the games that use kernel anti cheat have the simulation running on the server.

You can't make a competitive FPS game with a dumb terminal; it can't work because the latency is too high, which is why you have to run a local predictive simulation.

You don't want to wait for the server to ack your inputs.
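That's the core of client-side prediction with server reconciliation: apply inputs locally right away, remember them, and replay the unacknowledged ones whenever an authoritative server state arrives. A minimal 1-D sketch (class and field names are illustrative, not from any engine):

```python
# Minimal sketch of client-side prediction + server reconciliation,
# the standard FPS technique for hiding input latency.

class PredictingClient:
    def __init__(self):
        self.position = 0.0   # 1-D position for simplicity
        self.pending = []     # inputs sent but not yet acked: (seq, dx)
        self.seq = 0

    def apply_input(self, dx):
        """Apply the move locally right away instead of waiting for the server."""
        self.seq += 1
        self.position += dx                  # optimistic, predicted result
        self.pending.append((self.seq, dx))  # remember it for reconciliation
        return self.seq                      # would be sent to the server

    def on_server_state(self, acked_seq, server_position):
        """Authoritative update: rewind to the server's position, then
        replay every input the server hasn't processed yet."""
        self.pending = [(s, dx) for (s, dx) in self.pending if s > acked_seq]
        self.position = server_position
        for _, dx in self.pending:
            self.position += dx


c = PredictingClient()
c.apply_input(+1)          # moves immediately to 1
c.apply_input(+1)          # moves immediately to 2
# Server has only processed input #1 and corrected it to 0.5:
c.on_server_state(1, 0.5)  # rewind to 0.5, replay the unacked +1 => 1.5
```

The "dumb terminal" model removes exactly this replay step, which is why it feels fine on LAN and terrible over the internet.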


> All the games that use kernel anti cheat have the simulation running on the server.

There's an exception with fighting games. Fighting games generally don't have server simulations (or servers at all), but every single client does their own full simulation. And 2XKO and Dragon Ball FighterZ have kernel anti cheat.

Well, I'm just nitpicking, and it's different because fighting games are one of the few competitive genres where the clients do full game-state simulations. Another is RTS games.
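What makes the full-client-simulation model work in those genres is determinism: if every client feeds the same input stream into the same simulation, they all compute bit-identical states, and desyncs (or a client whose sim was tampered with) show up as checksum mismatches. A toy sketch of that property, with a made-up "simulation":

```python
# Toy illustration of deterministic lockstep: the game state is a pure
# function of the input stream, so every client computes the same result
# with no authoritative server. The "simulation" here is a stand-in.

def hash_cmd(cmd):
    # Deterministic stand-in for processing one player command.
    return sum(cmd.encode())

def simulate(inputs, state=0):
    """Fold the shared input stream into a game state deterministically."""
    for frame, cmd in inputs:
        state = (state * 31 + frame + hash_cmd(cmd)) % 1_000_003
    return state

inputs = [(1, "punch"), (2, "block"), (3, "kick")]
# Both "clients" run the full simulation locally on the same inputs...
state_p1 = simulate(inputs)
state_p2 = simulate(list(inputs))
# ...and end up identical, so clients can cross-check state checksums.
assert state_p1 == state_p2
```

Rollback netcode builds on the same property: clients can rewind and re-simulate past frames when a remote input arrives late, because re-running the sim always yields the same answer.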


Go play the original Quake (not QuakeWorld) online and you will soon realise why developers decided that concept was flawed as soon as it was implemented.

It works fine for LAN but as soon as the connection is further than inside your house, it’s utterly horrible.


I don't trust languages named after the creator, Philip.

As for the safety dispute, my quick search said that Fil-C has managed memory (a GC named FUGC) and runs about 2-4x slower: you decide.


At those prices I wonder if it also reviews the design for performance problems or poor decomposition into maintainable units, besides catching the bugs.

Also, the examples are weird IMO. Unless it was an edge/corner case, the authentication bug would be caught by even a smoke test. And for the ZFS encryption refactor I'd expect a statically typed language to catch type errors unless they're casting from `void*` or something. Seems like they picked examples by how important/newsworthy the areas were rather than by the technicality of the finds.


Here's an idea: allow downvotes for green posts with published guidelines on when downvoting is and is not appropriate. We can collectively filter out the pure spam efficiently to make it less worthwhile to post.

These details don't detract from the efficiency. The postal code can prefilter every other field, which can frequently narrow each down to one option. I would leave the user the ability to override with free-form data entry, as the data isn't perfect and changes over time.
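The prefilter-with-override approach can be sketched in a few lines. A hypothetical version, assuming a made-up lookup table (real postal data would come from a service, and the zip values below are illustrative):

```python
# Hypothetical prefill: the postal code narrows city/state, but never
# blocks free-form entry. ZIP_TABLE is stand-in data, not a real dataset.

ZIP_TABLE = {
    "10001": [("New York", "NY")],
    "90210": [("Beverly Hills", "CA")],
    "12345": [("Springfield", "NY"), ("Rotterdam", "NY")],  # not unique
}

def prefill(zip_code):
    """Return (city, state) only when the zip narrows to exactly one match;
    otherwise return None and leave the fields open for manual entry."""
    matches = ZIP_TABLE.get(zip_code, [])
    if len(matches) == 1:
        return matches[0]
    return None  # ambiguous or unknown: don't guess

assert prefill("90210") == ("Beverly Hills", "CA")
assert prefill("12345") is None  # falls back to free-form entry
```

Returning None rather than a best guess is the key design choice: the prefill only saves keystrokes in the unambiguous case and never overrides what the user knows.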

I don't remember asking for "efficiency" in typing out an address, something we teach children how to do. It doesn't seem like a societal problem worth iterating over.

These tools are more often wrong than not, and cause more grief for the user than any potential help they could provide.

There is no developer in the world that knows this data better than the person typing it into the form.


I'd like any person or system asking for my information one field at a time to minimize my time and effort to give it to them.

When they make erroneous assumptions, which they often do, they steal more of your time and effort than it would take without "assistance".

I bet a large majority of Americans have their city and state uniquely identified by their zip code

if it's not unique, a trivial fallback would be to not populate anything, and that's where we are today

