Hacker News | advael's comments

To "decenter [one's] self worth from [one's] job" would presumably require the fact that one's "access to basic rights is intermediated by a corporate job", no? This is a policy problem that needs to be solved by collective action, not a mindset problem that can be solved by personal growth

No, you have to decenter yourself from your pay to achieve it. People accept things like cuts to Medicaid because they see their job success as a moral success. There's a lot you can do to start being the good in the world that doesn't require Washington's permission.

I can't parse what distinction you're trying to draw here. When I say "collective action" I mean exactly things like exerting political pressure toward objectives like expanding rather than reducing the healthcare coverage provided by governments, such as Medicaid. The notion that solutions which involve changing government policy "require Washington's permission" seems to reject the idea that we exert power collectively via democracy, but your proposed example of what someone shouldn't accept suggests the issue is that people fail to see the value in exerting that power, for which the prescription would presumably be to exert it? I don't understand what you're driving at, or even why you think we're in disagreement.

Sorry, I think you're completely right; I'm frustrated that collective action here is so hard to come by. I think we feel really isolated because we don't see ourselves as part of a larger whole, and becoming less isolated involves leaving behind this identity tied to our economic output.

I think system package managers do just fine at wrangling static library dependencies for compiled languages, and if you're building something that somehow falls through their cracks, then I think you should probably just be using git or some kind of VCS for whatever you're doing, not a package manager.

But on the other hand, I'm used to Arch, which both does package management à la carte as a rolling-release distro and has a pretty extensively used secondary open community ecosystem for non-distro-maintained packages, so maybe this isn't as true in the "stop the world" model the author talks about.


One is an operating system maintained and run by a diffuse community of people (albeit a flavor of Linux with the explicit backing of at least one large company), whose primary goal is to create and maintain a functional operating system that can be used by many people for many different purposes. The other is a product whose primary goal is to convince investors that the company is on a growth trajectory that will continue into the next quarter by extracting value from that product's customers. We've now seen decades of data suggesting the disparate results that stem from these priorities in a lot of contexts. Viewed through that lens, I think the reason is obvious and was inevitable over time no matter what your threshold of "sucks" is, so long as it has something to do with the thing's function as an operating system.

I feel vindicated by this article, but I shouldn't. I have to admit that I never developed the optimism to keep at this for two years, but I've increasingly been trying to view that as a personal failing of closed-mindedness, brought on by an increasing number of commentators and colleagues coming around to "vibe-coding" as each "next big thing" in it dropped.

The deepest I can say I've dived in was in the last week. I wrangled some resources to build myself a setup with a completely self-hosted, agentic workflow and used several open-weight models that people around me had specifically recommended, and I had a work project that was self-contained and small enough to build from scratch. There were a few moving pieces, but the models gave me what looked like a working solution within a few iterations, and I was duly impressed until I realized that it wasn't quite working as expected.

As I reviewed and iterated on it more with the agents, this Rube Goldberg machine eventually started filling in gaps with print statements designed to trick me, and with sneaky block comments that mentioned, in oblique terms three lines into a boring description of what the output was supposed to be, that it was placeholder code not meant for production. This should have been obvious, but even at this point, four days in, I was finding myself missing more things, not understanding the code because I wasn't writing it. This is basically the automation blindness I feared from proprietary workflows that could be changed or taken away at any time, only arriving much faster than I had assumed, and the promise of being able to work through it at this higher level, this new way of working, seemed less and less plausible the more I iterated. Even starting over with chunks of the problem in new contexts, as many suggest, didn't really help.

I had deadlines, so I gave up and spent about half of my weekend fixing this by hand. I found it incredibly satisfying when it worked, but all in, this took more time and effort and, perhaps more importantly, caused more stress than just writing it in the first place probably would have.

My background is in ML research, which perhaps makes it easier to predict the failure modes of these things (though surprisingly many don't seem to), and it also makes me want to be optimistic, to believe this can work. But I've also done a lot of work as a software engineer, and my intuition remains that doing precision knowledge work of any kind at scale with a generative model is A Very Suspect Idea, one that comes more from the dreams of the wealthy executive class than from a real grounding in what generative models are capable of and how they're best employed.

I do remain optimistic that LLMs will continue to find use cases that better fit the niche of state-of-the-art natural language processing that is nonetheless probabilistic in nature. Many such use cases exist. Taking human job descriptions and pretending LLMs can fulfill them entirely seems like a poorly-thought-out one, and we've, to my mind, poured enough money and effort into it that we can say it at the very least needs radically new breakthroughs to stand a chance of working as (optimistically) advertised.


My days of not believing people's gushing "just works" praise for any proprietary technology are certainly coming to a middle.

It's so funny watching the proprietary OS folks despair as their software falls to pieces to make shareholders richer; meanwhile, my Linux distro has "just worked" since 2007. You don't have to keep running on this treadmill, folks.

The delusion of the “Mac just works” crowd is only matched by the delusion of the Linux “year of the Desktop” crowd.

They share the same trait of “it works on my machine, I like it, therefore it’s my identity and everything else is wrong”

I've used Linux regularly as a second OS for more than 15 years. Driver compatibility has improved; software design and quality haven't. In fact, it suffers from more or less the same problems as other mainstream OSs.

Linux on the desktop has been winning because the others got bad at a faster pace; I'm not sure there is anything to be celebrated.


It's probably not worth it to say more than that my experience simply differs from yours. I've found it incredibly unproductive to quibble with people who have jumped, right out of the gate, to the conclusion that some difference of opinion must stem from some kind of identity-justification/confirmation-bias delusion. This seems to be the most common mindless kneejerk criticism people reach for these days when they're engaging not with the person they're talking to, but with a strawman or stereotype they believe that person conceptually represents, which in turn seems to be the most common failure mode of internet argumentation in general. It's interesting to see how the real phenomenon of confirmation bias and some relatively well-respected theories about opinion-as-identity have jumped from the psychological literature to being basically pervasive thought-terminating cliches. But I like writing out my thoughts, so... against my better judgment I'll write 'em out here. As a treat.

My experience with the Linux ecosystem overall, which seems consistent with that of the person you're responding to from what little information that post gives, has been one of consistent improvement over a long timescale, with an increasingly capable stack of open-source software whose exact pieces have shifted with various community and maintainer dramas and the natural process of new projects being born and old ones dying over time. I have my preferences within that ecosystem; I settled on Arch Linux as a distro about ten years ago and haven't really seen a strong reason to switch, despite periodically working with other popular distros in the course of a career as a software engineer and researcher. I have strong reasons to prefer a modular, composable operating system that I control, so I wouldn't consider using proprietary software if there's a working FOSS alternative. This is a bias for sure! But I find my frustration with these things has decreased in aggregate over time, even as I've changed tools and suffered switching costs numerous times, and dealt with the general hostility with which a lot of manufacturers seem to view open-source software running on their hardware, and their attempts to make this more difficult. However, the aggregate experience of proprietary software users seems to have significantly degraded over the same period. They generally insist that this is still worth it to them over doing what I do, and again, I've been in enough dumb internet arguments to know that unless I know them personally, it's not worthwhile to do more than gently suggest that alternatives exist and may be worth trying.

I do still get a window into proprietary ecosystems, because I don't yet feel I can replace the use cases required of me on mobile phones with an open-source alternative, and I've seen my frustrations steadily increase over time with both those and the SaaS products I've been required to use for work. I also got frustrated enough with game consoles that I've entirely switched over to using PCs, running Linux, for any games I want to play. At every turn, I have found that while computers are always error-prone in some way or another, and using them extensively will result in some frustration, this is significantly less when I have more control over the computer, and it has become less rather than more frequent as open-source projects mature. Not only has my own experience with proprietary products followed the opposite pattern, but more and more people talking about tech companies with scorn rather than effusive praise, yelling at their phones, and public discourse adopting terms like "platform decay", "enshittification", and "tech rot" all suggest that this is a general trend rather than just my bias.

Again, your mileage may vary, but I do find it odd that you are so immediately dismissive of this perspective, accusing a pretty innocuous comment of reactionary identity-defense without engaging with it at all. If you're inclined to listen to a zealot like me at all, I would only urge you to consider why you assumed this so quickly, and why you are so adamant that this is the only sort of person who could form such an opinion.


I think by now roughly half of us have grown up in a world where global reach is simply taken for granted. Personally, I don't think it's particularly onerous to say that there should be some oversight of what a business can and can't do when that business relies on public infrastructure and can affect the whole-ass world.


There is oversight. Just not oversight of their UI design and algorithm, which is what people are calling for here. Regulation of the feed algorithm would be a massive 1A violation.

Not sure what public infrastructure has to do with it. Access to public infrastructure doesn't confer the right to regulate anything beyond how the public infrastructure is used. And in the case of Meta, the internet infrastructure they rely on is overwhelmingly private anyway.


If algorithm output is protected by the First Amendment, then perhaps Section 230 [0] protections should no longer apply, and they should be liable for what they and their algorithms choose to show people.

[0] https://en.wikipedia.org/wiki/Section_230


That seems a hell of a lot better than repealing Section 230 altogether. I also agree with the rest of this argument: either the editorial choices made by an algorithm are those of a neutral platform or they're protected speech. They certainly aren't just whichever is convenient for a tech company in any given moment.


What has that got to do with anything? Not at all related to the topic of research or algorithmic control. All that does is make companies potentially liable if somebody slanders somebody else on their site.


Yeah, I think my own preference for self-hosting boils down to a distrust of continuous dependencies on services controlled by companies, and a desire to minimize such dependencies. While there are FOSS and self-hostable alternatives to Tailscale, or indeed Claude Code, using those services themselves simply replaces old dependencies on externally controlled cloud-based services with new ones.


I dunno, I spend less time fighting with any of my several Linux systems than with the MacBook I'm required to use for work, even without trying to do anything new with it. I choose to view this charitably and assume most of the time investment people perceive when switching operating systems is a familiarity penalty, essentially a switching cost. The longer this remains the case, the less charitably I'm willing to view it.


You can also mitigate a lot of the "familiarity penalties" by planning ahead. For example, by the time I made the decision to switch from Windows around 15 years ago, I'd already been preferring multi-platform FOSS software for many years because I had in mind that I might switch one day. This meant that when it came time to switch, I was able to go through the list of all the software I was using and find that almost all of it was already available in Linux, leaving just a small handful of cases that I was able to easily find replacements for.

The result was that from day 1 of using Linux I never looked back.


Of course, MS seems to enjoy inflicting familiarity penalties on its established user base every couple of years anyway. After having your skills negated in this way enough times, the jump to Linux might not look so bad.


We live in a world with the internet and distributed version control, so essentially every piece of software in the world comes with a tradeoff: the people maintaining it might push an update that breaks something at any time, but those updates often do good things too, like adding functionality, making things more efficient, fixing bugs, or, probably most crucially, patching out security vulnerabilities.

My experience with FOSS has mostly been that mature projects with any reasonably sized userbase tend to break things in updates less often than proprietary software does, whether it's an OS or just some SaaS product. YMMV. However, I think the most potent way to keep problems like this from ever actually mattering is a combination of doing my updates manually (or at least on an opt-in basis) and being willing to go back a version if something breaks. Usually this isn't necessary for more than a week or so for well-maintained software, even in the worst case. I use Arch with downgrade (which lets you go back and choose an old version of any given package) and need to actually use downgrade maybe once a year on average, less in the last five.
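For anyone unfamiliar with that workflow, here's a rough sketch of what it tends to look like on Arch; it assumes the AUR "downgrade" helper is installed, and the package name is just a placeholder:

    # update only when I decide to, not on the OS's schedule
    sudo pacman -Syu

    # if the update breaks something, interactively pick an older version
    # of the offending package from the pacman cache or the Arch archive
    sudo downgrade some-package

    # or roll back by hand from whatever is still in the local cache
    sudo pacman -U /var/cache/pacman/pkg/some-package-<old-version>.pkg.tar.zst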


I like it, though I do notice that, like most LLMs, it seems to take criticism of the LLM rollout kinda personally (circa 2023 this was actually considerably less true across every popular model).

I also think the predictions section seems kinda generic, whereas the roast section felt better personalized, which seems like a prompting issue.

