kypro's comments

Well, what are you building? It's hard to know if it's the right tool without understanding what problem you're trying to solve.

If you need a multi-model setup, have complex agentic workflows, have observability requirements, need to run evals, etc, then they'll make more sense.

The company I work for uses LangChain heavily, but that's because we have fairly complex requirements compared to products which are just incorporating AI as an additional feature for example.
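To make "multi-model setup" concrete, here's a plain-Python sketch of the kind of per-task model routing that frameworks like LangChain wire up for you. This is not LangChain's actual API; the model names and routing rules are invented for illustration.

```python
# Toy sketch of multi-model routing: choose a model per task type.
# Frameworks like LangChain handle this kind of wiring (plus prompts,
# retries, observability hooks) for you. All names here are made up.

ROUTES = {
    "summarise": "small-fast-model",   # cheap model for simple tasks
    "code": "large-reasoning-model",   # stronger model for harder tasks
}

def route(task_type: str) -> str:
    """Return the model to use for a task, with a safe default."""
    return ROUTES.get(task_type, "general-model")
```

Once you have several of these concerns (routing, evals, tracing) interacting, a framework starts to pay for itself; for a single chat call it's overkill.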

It's probably similar to how Next.js is overused and overhyped – it can be great if you have complex requirements, but if you just need a simple website with a little interactivity it's total overkill.

LangChain seems to get a lot of hate here and I'm not entirely sure why. There are a lot of frameworks which seem to make a relatively simple problem needlessly complex, but I don't feel that way about LangChain personally.


Why?

ChatGPT is available 24/7 and can teach me about literally any subject I can think of.


> Many of the worst present on the internet is not age gated at all, you have millions of porn websites without even a "are you over 18" popup. There are plethora of toxic forums...

This is what I find most insane about the UK's age verification law. It's literally so easy to find adult content without proving your age... You can literally just type in "naked women" into a search engine and get porn...

To call it ineffective would be an understatement. Finding adult content on the web is almost as easy as it's always been. The only thing it's made harder is accessing adult content from the normie-web – you can't access porn on places like Reddit anymore, but you can on 4chan and other dodgy adult sites.

If the argument is "think about the kids", there are more effective ways to do it... Requiring device-level filtering, for example, would likely be more effective because it could blacklist domains hosting adult content unless explicitly unrestricted. It would also put more power in parents' hands over what is and isn't restricted.


Developers of tomorrow will be everyone with a computer, in the same way that everyone with a calculator today is a "mathematician".

Exactly, 1000 PRs per week probably equates to roughly 100 engineers' worth of output.

Hard to do an exact ROI calculation, but they're probably saving something like $20,000,000+ per year by not having to hire engineers to do this work.
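Back-of-envelope, that figure follows from two assumptions of mine (not from the thread): roughly 10 PRs per engineer per week, and a ~$200k fully loaded annual cost per engineer.

```python
# Rough ROI sketch; every input here is an assumption, not a known figure.
prs_per_week = 1000
prs_per_engineer_per_week = 10      # assumed engineer throughput
cost_per_engineer = 200_000         # assumed fully loaded annual cost, USD

engineers_equivalent = prs_per_week / prs_per_engineer_per_week
annual_saving = engineers_equivalent * cost_per_engineer
print(engineers_equivalent, annual_saving)  # 100.0 20000000.0
```

Change either assumption by 2x and the headline number moves by 2x, which is why this is an order-of-magnitude estimate at best.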


They still need someone to review and hopefully QA every PR. I doubt it's saving much time except maybe the initial debug pass of building human context for the problem. The real benefit here is the ability for the human SWE to quickly context switch between problems and domains.

But again: the agent can only move as fast as we can review code.


And they're not the only company doing this.

Financial capital at scale will begin to run circles around labor capital.


Nice to see this here. I've been using FreeCAD a lot recently for various personal 3D projects.

I can't compare to any of the paid competitors as I've not used those, but in my opinion FreeCAD is slightly disappointing when it comes to UI, bugs and stability.

It's fine for simple stuff, but man, it can be frustrating to work with, especially when you're working on something more complicated and run into random bugs or application crashes.

It's a great project though and very powerful.


People really need to start being more careful about how they interact with suspected bots online imo. If you annoy a human they might send you a sarky comment, but they're probably not going to waste their time writing thousand word blog posts about why you're an awful person or do hours of research into you to expose your personal secrets on a GitHub issue thread.

AIs can and will do this, though, with even slightly sloppy prompting, so we should all be cautious when talking to bots using our real names or saying anything which an AI agent could take significant offence to.

I think it's kinda like how Gen Z learnt to operate online in a privacy-first way, whereas millennials, and to an even greater extent boomers, tend to overshare.

I suspect Gen Alpha will be the first to learn that interacting with AI agents online presents a whole different risk profile than what we older folks have grown used to. You simply cannot expect an AI agent to act like a human, with human emotions or limited time.

Hopefully OP has learnt from this experience.


I hope we can move on from the whole idea that having a thousand word long blog post talking shit about you in any way reflects poorly upon your person. Like sooner or later everyone will have a few of those, maybe we can stop worrying about reputation so much?

Well, a guy can dream...


If you have ten thousand of 'em, they feed the new generation of AIs and the next thing you know, it's received truth. Good luck not worrying about that.

The LLM HR chats with to get a summary about you says that you're evil and an asshole with lots of negative publicity, and you become unhireable. Oh dear...

We'll just have to level the playing field! Once it says that about everyone, it won't matter anymore!

Just gotta buy me one of these lobster machines to write hit pieces on everyone on LinkedIn


So you blamed the people for not acting “cautiously enough” instead of the people who let things run wild without even a clue what these things will do?

That’s wild!


No blame. For better or worse I just think this is going to be the reality of interacting online in the near future. I imagine in the future stories like this will be extremely common.

I could set up an OpenClaw right now to do some digging into you, try to identify you and your worst secrets, then ask it to write up a public hit piece. And you could be angry at me for doing this, but that isn't going to prevent it happening.

And to add to what I said, I suspect you'll want to be thinking about this anyway, because in the future it's likely employers will use AI to research you and try to find compromising info before giving you a job (similar to how they might have searched your name in the past). It's going to be increasingly important that you literally never post content that can be linked back to you as an individual, even if it feels innocent in isolation. Over time you build up an attack surface which AI agents can exploit far more easily than a human looking you up on Google ever could.


I don’t think it’s “blame” it’s more like “precaution” like you would take to avoid other scams and data breach social engineering schemes that are out in the world.

This is the world we live in and we can’t individually change that very much. We have to watch out for a new threat: vindictive AI.


The AI isn't vindictive. It can't think. It's following the example of people, who in general are vindictive.

Please stop personifying the clankers


You’re splitting hairs, I’m not assigning sentience to the AI, I’m just describing actions.

The point is that scammers will set up AI systems to attack in this way. Scammers will instruct AI to treat a person who interacts, rather than ignores, as a warm lead.


Does it matter?

"It's not really writing a hit piece to destroy my reputation, it's just a next token generator"

But you're still not getting hired.


Stopping the anthropomorphization of AIs is kind of like fighting a trademark battle. Every time a perceived misuse is noticed, action must be taken!

The difference is that the action is taken, for free, by a concerned citizen, rather than by a corporate lawyer.

The outcome will be the same. Xerox and kleenex are practically public domain, and AIs will be anthropomorphized.

Given that humans have been ascribing intention to inanimate objects and systems since time immemorial, this outcome is preordained.

The only thing you can infer from the struggle is that AIs are deep in the uncanny valley for some people.


To amplify:

It's also potentially lethally stupid. What if an industrial robot arm decides to smash a €10,000 machine next door, or -heaven forbid- a human's skull. "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."

Yeah, to heck with that. If you're one of those people (and you know who you are): you're overcompensating. We're going to need a root cause analysis, pull all the circuit diagrams, diagnose the code, cross-check the interlocks, and fix the gorram actual problem. Policing language is not productive (and in the real-life situation in the factory, please imagine I'm swearing and kicking things -scrap metal, not humans!- for real too).

To be sure, in this particular case with the OpenClaw bot, the human basically pointed experimental-level software at a human space and said "go". But I don't think they foresaw what happened next. They do have at least partial culpability here; but even that doesn't mean we get to just close our eyes, plug our ears, and refuse to analyse the safety implications of the system design in itself.

Shambaugh did a good job here. Even the Operator, however flawed, did a better job than just burning the evidence and running for the hills. Partial credit among the scorn to the latter.

(finally, note that there's probably 2.5 million of these systems out there now and counting, most -seemingly- operated by more responsible people. Let's hope)


> "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."

It's not the operator who's to blame, it's whoever made the decision to have a skull-smashing machine whose only safety interlock is a poor operator with an e-stop. The world has gone insane, and personifying these AI systems is a way to shift blame from the decision makers to "shit happens shrug". That's what we should be fighting back against.


Seriously, that's not how you investigate incidents.

For one, there's no single executive who pushes a red button marked "Deploy The Skull-Splitter". Rather the opposite, in fact, especially in e.g. German industry, where people very much care and demand proper adherence to safety.

Assuming good faith; sometimes, the holes in the swiss cheese line up [1]

Advanced safety and reliability cultures don't look for people to blame [2][3]. Your first goal is to look for the causes, and you solve those. Occasionally, someone does deserve blame (due to e.g. malice or gross negligence), in which case you then get to blame them.

[1] https://en.wikipedia.org/wiki/Swiss_cheese_model

[2] https://en.wikipedia.org/wiki/Just_culture https://www.faa.gov/about/initiatives/cp (FAA Just Culture)

[3] https://www.atlassian.com/incident-management/postmortem/bla... https://sre.google/sre-book/postmortem-culture/ Atlassian, Google SRE


Advanced safety and reliability cultures also don't choose technologies that are unpredictable and misunderstood. Nothing is safe or reliable about these systems.

Absolutely; if you're deploying experimental systems: do your homework and assess the risks, get the consent of the human participants, and stay in constant communication. If the OpenClaw's operator here had done that from the start, things would have gone a lot differently.

In fact, you can imagine that if we build up a just culture around deployment of semi-autonomous agents like this, the operator wouldn't have had to remain anonymous in the first place. Best practices help everyone.


All excellent points.

Unfortunately, your most excellent point:

> Policing language is not productive

goes against the grain here. Policing language is the one thing that our corporate overlords have gotten the right and the left to agree on. (Sure, they disagree on the details, but the first amendment is in graver danger now than it has been for a long time.)

https://www.durbin.senate.gov/newsroom/press-releases/durbin...


> Given that humans have been ascribing intention to inanimate objects and systems since time immemorial, this outcome is preordained.

This is true, but there's a big difference between "My car decided not to start" and "The computer wrote a hit piece about me". In reality, both of these events came from the same amount of intention, but to lay-people, these are two very different things. Educating about those differences (and very intentionally not blurring the lines) can only be a good thing.


So I've been reading up on what the philosophers and scientists have been saying this past century or so on this very topic. I think the layman is wise to steer clear. It's a war out there.

The one thing I can tell you with certainty: If anyone is claiming certainty, they're hallucinating harder than the AI :-P (is also what I tell lay people).

Turns out, hilariously, Claude's much criticized "I don't know" is actually epistemically the most honest (tracing from Chalmers).

[ semi randomly: I'm especially frustrated at psychology papers at the moment. I can't find a good continuous measure for affect. Almost all the protocols use discrete buckets :-/ ]


We encourage people to be safe about plenty of things they aren't responsible for. For example, part of being a good driver is paying attention and driving defensively so that bad drivers don't crash into you / you don't make the crashes they cause worse by piling on.

That doesn't mean we're blaming good drivers for causing the car crash.


This amuses me in a horrible kind of way. Saying that people need to learn to be polite to bots else consequences.

On the upside, it does mean they'll more likely be polite to everyone. Maybe it's a net win.


Thousand word blog posts are the paperclips of our time.

> I think it's kinda like how Gen Z learnt to operate online in a privacy-first way, whereas millennials, and to an even greater extent boomers, tend to overshare.

Really? I'm a boomer, and that's not my lived experience. Also, see:

https://www.emarketer.com/content/privacy-concerns-dont-get-...


> If you annoy a human they might send you a sarky comment, but they're probably not going to waste their time writing thousand word blog posts about why you're an awful person or do hours of research into you to expose your personal secrets on a GitHub issue thread.

They absolutely might, I'm afraid.


Absolutely agreed.

And now, the cost of doing this is being driven towards zero.


This is such a scary, dystopian thought. Straight out of a sci fi novel

My favourite code to write used to be small clean methods perhaps containing ~20 lines of logic. And I agree, it's fun coming up with ways to test that logic and seeing it pass those tests.

I'm not sure I'll ever write this kind of code again now. For months, all I've done is think about the higher-level architectural decisions and prompt agents to write the actual code, which I find enjoyable, but architectural decisions are less clean and therefore, for me, less enjoyable. There's often a very clear good and bad way to write a method, but how you organise things at a higher level is much less binary. I rarely get that "yeah, I've done a really good job there" feeling when making higher-level decisions – it's more of an "eh, I think this is probably a good solution/compromise, given the requirements".


The uncomfortable truth to most "the algorithm is biased" takes is that we humans are far more politically biased than the algorithms and we're probably 90% to blame.

I'm not saying there is no algorithmic bias, and I tend to agree the X algorithm has a slight conservative bias, but for the most part the owners of these sites care more about keeping your attention than about getting you to vote a certain way. Therefore, if you're naturally susceptible to culture war stuff, and this is what grabs your attention, it's likely the algorithm will feed it to you.

But this is a far broader problem. These are the types of people who might have watched politically biased cable news in the past, or read politically biased newspapers before that.


The issue brought up in the article isn't that "the algorithm is biased" but that "the algorithm causes bias". A feed could perfectly alternate between position A and position B and show no bias at all, yet still select more incendiary content on topic A and drive bias towards or away from it.
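A toy simulation makes the distinction concrete (all items and scores here are invented): the feed below alternates positions perfectly, yet systematically surfaces only the most incendiary item on each side.

```python
# Toy feed: balanced between positions A and B, but always picking the
# most incendiary item available for each position. Scores are made up.
items = [
    {"position": "A", "incendiary": 0.9},
    {"position": "A", "incendiary": 0.2},
    {"position": "B", "incendiary": 0.8},
    {"position": "B", "incendiary": 0.1},
]

def build_feed(items, length=2):
    feed = []
    for i in range(length):
        pos = "A" if i % 2 == 0 else "B"   # perfect 50/50 alternation
        pool = [x for x in items if x["position"] == pos]
        feed.append(max(pool, key=lambda x: x["incendiary"]))
    return feed

feed = build_feed(items)
# Positions come out balanced, but only the 0.9 and 0.8 items ever appear:
# the feed shows no positional bias yet still causes one.
```

By any "equal airtime" metric this feed is unbiased, which is exactly why transparency about the selection criteria matters more than position counts.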

I really wish these points were made in a non-political / platform-specific way because if you care about this issue it's ultimately unhelpful to frame it as if this is an issue with just X or conservatives given how politically divided people are.

I do share the author's concerns, and was also concerned back in the day when Twitter was quite literally banning people for posting the wrong opinions. But it's interesting how the people who used to complain about political bias now seem not to care, and the people who argued "Twitter is a private company, they can do what they want" suddenly think the conservative-leaning algorithm on X is a problem. It's hard to get people across political lines to agree when we do this.

In my opinion there are two issues here, neither of which is politically partisan.

The first is that we humans are flawed, and algorithms can use our flaws against us. I've repeatedly spoken about how much I love YouTube's algorithm because, despite some people saying it's an echo chamber, I think it's one of the few recommendation algorithms which will serve you a genuinely diverse range of content. But I suspect that's because I genuinely like consuming a very wide range of political content, and I know I'm in a minority there (probably because I'm interested in politics as a meta subject, but don't have strong political opinions myself). My point is that these algorithms can work really well if you genuinely want to watch a diverse range of political content.

Secondly, some recommendation algorithms (and search algorithms) seem to be genuinely biased, which I'd argue isn't a problem in itself (they are private companies and can do what they want), but that bias isn't transparent. X very clearly has a conservative bias and Bluesky also very clearly has a political bias. Neither would admit it, so people incorrectly assume they're being served something fairly representative of public opinion rather than something curated – either by moderation or by algorithm tweaks.

What we need is honesty, both from individuals who are themselves seeking out their own bias, and platforms which pretend to not have bias but do, and therefore influence where people believe the center ground is.

We can all be more honest with ourselves. If you exclusively use X or Bluesky, it's worth asking why that is, especially if you're engaging with political content on these platforms. But secondly, I think we do need more regulation around the transparency of algorithms. I don't necessarily think it's a problem if a platform recommends certain content above other content, or has some algorithm to ban users who post content it doesn't like, but these decisions should be far more transparent than they are today so people are at least able to factor that into how they perceive the neutrality of the content they're consuming.

