monegator's comments

simple: search that user. He's a grifter with many failed ventures who recently started flooding big projects with useless PRs

everybody should collectively tell him to fuck off


That may or may not be the case - I really was just going off this one thread, and how I personally read it. I completely appreciate that others read it differently.

*grifters

Successful grifters are not this stupid.

Terminal AI users are genuinely *slow* individuals.

These people are functionally illiterate; they are not able to extract information by reading a bit of text in the same ways that you or I are.

These people are very, very *gullible*.


let's make it even better: why not set up a donation mechanism to get on the list?

Because I want people to get paid for writing code, not to pay to write code.

my bad, forgot to /s

No worries, the italics did the heavy lifting.

What could go wrong?!

> In March, after $400k in sales through our crowdfunding campaign, I had to figure out how to manufacture 500 units for our first batch. I had no prior experience in hardware; I was counting on being able to pick it up quickly with the help of a couple of mechanical/electrical/firmware engineers.

And I wish these kinds of people would fail miserably. Too many times they've tried to make me the scapegoat for their bullshit project: already sold on paper with nothing more than a render to show, now looking for a hardware guy to point the finger at and blame.


This feels a bit harsh! I get the impression that the author considered their project far more carefully than this, especially given the Ben Kuhn blog posts they say that they were inspired by.

Unironically: Win 10, or KDE Plasma - Breeze Dark.

What is it with everybody and the goddamn rounded corners?

> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

ever had a client second-guess you by replying with a screenshot from GPT?

ever asked anything in a public group only to have a complete moron reply with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?

no, people have no shame. they have a need for a little bit of (borrowed) self-importance and validation.

Which is why I applaud every code of conduct that has public ridicule as punishment for wasting everybody's time


Problem is people seriously believe that whatever GPT tells them must be true, because… I don't even know. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction do not make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.


Last year I had to deal with a contractor who sincerely believed that a very popular library had some issue because it was erroring when parsing a ChatGPT-generated JSON... I'm still shocked; this is seriously scary


"SELECT isn't broken" isn't new advice, and it exists for a reason.


My boss says it's because they are backed by trillion dollar companies and the companies would face dire legal threats if they did not ensure the correctness of AI output.


Point out to your boss that trillion dollar companies have million dollar lawyers making sure their terms of service put all responsibility on the user, and if someone still tries to sue them they hire $2000/hour litigators from top law firms to deal with it.


Your boss sounds hilariously naive about how the world works.


In a lot of ways he is, despite directly witnessing a lot of how the sausage is made. Honestly, I think at least half of it is wanting to convince himself that the world still functions in ways that make sense to him rather than admit that it's mostly grifters grifting all the way down.


The Gervais Principle framework calls this type of person a Clueless. They sit in middle management as a buffer between the Sociopaths who run the world, and the Losers who know the world sucks but would just like to get their paycheck and go home. I'm surprised to hear this actually play out — the Gervais Principle doesn't seem very empirical.


The high-trust Boomer brain cannot comprehend the actual low-trust society of grifters in which we live.


I don't agree with this blanket statement. The internet is low trust for lots of reasons, but regular (read small, proximal/spatiotemporally constrained) communities still exist and are not grifters all the way down. Acknowledging that distant strangers are not trustworthy in the traditional sense seems reasonable, but is categorically different than addressing natural social groups (small and local).


Yes, and most young Americans are locked out of those small, high-trust suburbs due to high housing prices. So instead they get to experience the magic of low-trust America first-hand, hence the disconnect between the young and the boomers.


Exactly. Sadly, low-trust America has become the default where most people live. There are still nice, small-town, local shopping, suburban high-trust enclaves here and there, but as soon as you go online or deal with a business with more than a handful of locations, you're back in the low-trust grifting zone.


This is a good heuristic, and it's how most things in life operate. It's the reason you can just buy food in stores without any worry that it might hurt you[0] - there's potential for million ${local currency} fines, lawsuits, customer loss and jail time serving as strong incentive for food manufacturers and vendors to not fuck this up. The same is the case with drugs, utilities, car safety and other important aspects of life.

So their boss may be naive, but not hilariously so - because that is, in fact, how the world works[1]! And as a boss, they probably have some understanding of it.

The thing they miss is that AI fundamentally[2] cannot provide this kind of "correct" output, and more importantly, that the "trillion dollar companies" not only don't guarantee that, they actually explicitly inform everyone everywhere, including in the UI, that the output may be incorrect.

So it's mostly failure to pay attention and realize they're dealing with an exception to the rule.

--

[0] - Actually hurt you, I'm ignoring all the fitness/healthy eating fads and "ultraprocessed food" bullshit.

[1] - On a related note, it's also something security people often don't get: real world security relies on being connected - via contracts and laws and institutions - to "men with guns". It's not perfect, but scales better.

[2] - Because LLMs are not databases, but - to a first-order approximation - little people on a chip!


> It's the reason you can just buy food in stores without any worry that it might hurt you[0] - there's potential for million ${local currency} fines, lawsuits, customer loss [...]

We are currently facing a political climate trying to tear many of these safeguards down. Some people really think "caveat emptor" is some kind of natural, efficient, ideal way of life.


> [1]

Cybersecurity is also an exception here.

"men with guns" only work for crimes where the criminal has to be physically present in the jurisdiction for the crime to occur at all.

If you rob a bank in London, you must be in London, and the British police can catch you. If you rob a bank somewhere else, the British police don't care. If you hack a bank in London, though, you may very well be in North Korea.


That's a fair point, and I suppose it is a major reason cybersecurity looks the way it does. The Internet as it is ignores the jurisdictional borders. But I still think cybersec is going overboard with controls, constraining use cases where international cybercrime is not a major factor in the threat model.


For this logic I like to point out that every AI service has text that says, essentially, "AI can be wrong, double check your answers". If the same disclaimer appeared on your food ("This food's quality is not assured"), would you feel comfortable buying it, or would you pause until you'd built up trust with the seller and manufacturer?

There's so much CYA because there is an A that needs C'ing


He just doesn't understand the scale of money.

Maybe a million dollar company needs to be compliant. A billion dollar company can start to ward off any loopholes with lawsuits instead of compliance.

A trillion dollar company will simply change the law and fight governments over the law to begin with, rather than worrying about compliance.


And just how many rs does your boss think are in strawberry?


If only every LLM shop out there would put disclaimers on their page that they hope will absolve them of the responsibility of correctness, so that your boss could make up his own mind... Oh wait.


I think people's attitude would be better calibrated to reality if LLM providers were legally required to call their service "a random drunk guy on the subway"

E.g.

"A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SQL Server version"

"Huh, I guess that's worth testing"


There's a non-zero number of people who would get a chuckle out of a browser extension that replaces every occurrence of "LLM" or "AI" with "a random drunk guy on the subway".


It could be the same extension that replaces every occurrence of "the cloud" with "my butt".


That's the one I was trying to think of; I could only remember the 'butt' part.


People's trust in LLMs imo stems from a lack of awareness of AI hallucinating. Hallucination benchmarks are often hidden or talked about hastily in marketing videos.


I think it's better to say that LLMs only hallucinate. All the text they produce is entirely unverified. Humans are the ones reading the text and constructing meaning.


[flagged]


To quote Luke Skywalker: Amazing. Every word of what you just said is wrong.


Which is why I keep saying that anthropomorphizing LLMs gives you good high-order intuitions about them, and should not be discouraged.

Consider: GP would've been much more correct if they'd said "It's just a person on a chip." Still wrong, but much less so, in a qualitative fashion, than they are now.


No, it does not; it just adds to the risk that you'd be fooled by them, or by the corporations that produce them and surveil you through their SaaS models.

It's a person in the same sense as a Markov chain is one, or the bot in the reception on Starship Titanic, i.e. not at all.


Just a weird little guy.



Similar analogy, yes.

FWIW, I prefer my "little people on a chip" because this is a deliberate riff on SoC, aka. System on a Chip, aka. an actual component you put when designing computer systems. The implication being, when you design information processing systems, the box with "LLM" on it should go where you'd consider putting a box with "Person" on it, not where you'd put "Database" or any other software/hardware box.


No, it is not. It's a funny way of compressing and querying data, nothing else.


It is probabilistic, unlike a database, which is not. It is also a lossy way to compress data. We could go on about the differences, but those two things make it not a database.

Edit: unless we are talking about MongoDB. It will only keep your data if you are lucky and might lose it. :)


No, it is still just a database. It is a way to store and query information; it is nothing else.

It's not just the weirdness in Mongo that could exhibit non-deterministic behaviour, some common indexing techniques do not guarantee order and/or exhaustiveness.

Let it go. LLMs and related compression techniques aren't very special, and neither are chatbots or copy-paste-oriented software development. Optimising them for speed or manipulation does not change this, at least not from a technical perspective.


> It's just a database. There is no difference in a technical sense between "hallucination" and whatever else you imagine.

It's like a JPEG. Except instead of lossy compression on images, which gives you a pixel soup that only vaguely resembles the original when you're resource-bound (and even modern SOTA models are, when it comes to LLMs), you get stuff that looks more or less correct but just isn't.


It would be like JPEG if opening JPEG files involved pushing in a seed to get an image out. It's like a database: it just sits there until you enter a query.


This comes from not having a specific area of understanding; if you ask it about an area you know well, you'll see.


I get what you're saying but I think it's wrong (I also think it's wrong when people say "well, people used to complain about calculators...").

An LLM chatbot is not like querying a database. Postgres doesn't have a human-like interface. Querying SQL is highly technical; when you get nonsensical results out of it (which is more often than not), you immediately suspect the JOIN you wrote or whatever. There's no "confident vibe" in results spat out by the DB engine.

Interacting with a chat bot is highly non-technical. The chat bot seems to many people like a highly competent person-like robot that knows everything, and it knows it with a high degree of confidence too.

So it makes sense to talk about "hallucinations", even though it's a flawed analogy.

I think the mistake people make when interacting with LLMs is similar to what they do when they read/watch the news: "well, they said so on the news, so it must be true."


No, it does not. It's like saying 'I talk to angels' because you hear voices in the humming from the ventilation.

It's precisely like a database. You might think the query interface is special, but that's all it is, and if you let it fool you, fine: go ahead and keep it public that you do.


I don't remember exactly who said it, but at one point I read a good take: people trust these chatbots because there are big companies and billions behind them, and surely big companies test and verify their stuff thoroughly?

And (as someone else described), GPTs and other current-day LLMs are probabilistic. But 99% of what they produce seems feasible enough.


> But 99% of what they produce seems feasible enough.

This being a big part of the problem-- their false answers are more plausible and convincing than the truth. The output almost always seems feasible-- whether it's true or not is an entirely different matter.

Historically, when most things fail they produce nonsense, and if they don't, they produce something related to the truth (but perhaps biased or mis-calibrated). LLM output can be both highly plausible and unrelated to reality.


Billions of dollars of marketing have been spent to enable them to believe that, in order to justify the trillions of investment. Why would you invest a trillion dollars in a machine that occasionally randomly gave wrong answers?


I think in science fiction it’s one of the most common themes for the talking computer to be utterly horribly wrong, often resulting in complete annihilation of all life on earth.

Unless I have been reading very different science fiction I think it’s definitely not that.

I think it’s more the confidence and seeming plausibility of LLM answers


People are literally taking Black Mirror storylines and trying to manifest them. I think they did a `s/dys/u/` and don't know how to undo it...


They codysld start by trying to dysndo it.

I'm sorry. That was a terrible joke.


Sure, but this failure mode is not that. "AI will malfunction and doom us all" is pretty far from "AI will malfunction by sometimes confabulating stuff".


In terms of mass exposure, you're probably talking about things like Cmdr Data from Star Trek, who was very much on the 'infallible' end of the fictional AI spectrum.


Data was also famous for getting things embarrassingly wrong, particularly when interacting with his human colleagues.


The stories I read had computers being utterly, horribly right, which resulted in attempts (sometimes successful) to annihilate humanity.


This is probably more of an AGI achievement, but we definitely need confidence levels when it comes to queries with factual responses.

But yes, look at the US c.2025-6. As long as the leader sounds assertive, some people will eat the blatant lies that can be disproven even by the same AI tools they laud.


This sounds a bit like the "Asking vs. Guessing culture" discussion on the front page yesterday. With the "Guesser" being GP who's front-loading extra investigation, debugging and maintenance work so the project maintainers don't have to do it, and with the "Asker" being the client from your example, pasting the submission to ChatGPT and forwarding its response.


>> In Guess Culture, you avoid putting a request into words unless you're pretty sure the answer will be yes. Guess Culture depends on a tight net of shared expectations. A key skill is putting out delicate feelers. If you do this with enough subtlety, you won't even have to make the request directly; you'll get an offer. Even then, the offer may be genuine or pro forma; it takes yet more skill and delicacy to discern whether you should accept.

delicate feelers are like octopus arms


Or octocat arms in this context?

Still, I meant that in the other direction: not request, but a gift/favor. "Guess culture" would be going out of your way to make the gift valuable for the receiver - matching what they need, and not generating extra burden. "Ask culture" would be like doing whatever's easiest that matches the explicit requirements, and throwing it over the fence.


I've also had the opposite.

I raise an issue or PR after carefully reviewing someone else's open source code.

They ask Claude to answer me; neither they nor Claude understands the issue.

Well, at least it's their repo, they can do whatever.


Not OP, but I don't consider these the same thing.

The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.


I consider them to be the same attitude. Machine made it / Machine said it. It must be right, you must be wrong.

They are sure they know better because they get a yes man doing their job for them.


Our CEO chiming in on a technical discussion between engineers: by the way, this is what Claude says: *some completely made-up bullshit*


I do want to counter that in the past, before AI, the CEO would just chime in with some completely off-the-wall bullshit from a consultant.


Hi CEO, thanks for the input. Next time we have a discussion, we will ask Claude instead of discussing with whoever wrote the offending code.


Hasn't happened to me yet.

I'm not looking forward to it...


Random people don’t do this. Your boss however…


On Windows 10, too. Firefox 147.0.1 (you may want to update your "supported" chart! Firefox has WebGPU now)


> ultimately the fault of bad app devs

More like Google's fault. They made a huge mess of completely different permission and behaviour changes between 11, 12, and 13. At least since 14 they have stopped fucking around so much.

It would really be much simpler for us to cut off all versions before 12, but it's unfeasible: so many devices are still on 10/11. For now we cut off at 8.1, but we will increase that every year starting next year, as Google mandates an increase of the minimum SDK version.


I don't like how companies behave like that and basically push users to upgrade their phones.

Garmin in particular makes it mandatory to use their app for SOME connected functionality (while the rest works just fine over wifi or wifi tethering). They dropped support for old versions of Android in the Garmin Connect app pretty fast (my mom's phone was incompatible within 4 years of its release), yet they don't support connecting older devices to newer phones and admit they know it doesn't work.

As a user, I don't care whose fault it is.

I ditched both Google, in favour of de-Googled Android on older Xiaomi and Pixel phones that support custom ROMs, and Garmin, for any sports equipment.

My next phone will be a Fairphone if they make something with a smaller screen.

I don't know which app you're making, but if it stopped working for me I would most likely just never download it again, or find an open-source alternative, as no app is essential. Pay attention to the user base, in particular if your app is supposed to work with a web of users.


While I always try to look for open-source utility apps (I use several), our userbase simply doesn't care.

Context: our apps are a means to connect to our devices via BLE; they are free and without ads (fuck ads, fuck all ads), with no integrity checks. We don't publish the API, but we know of a couple of clients that reverse-engineered the protocol and made their own. Good for them. (One of them even came by the office with a friend and showed us his app, which glued together the functionality of several modules: ours and our competitors'. Cool!)

But given what we do, our customers are complete normies; doing what Google asks of us is the path of least resistance and gets us the biggest audience.

Those who don't want to use the Play Store can find the APK on the usual sites; we don't care.

If I made an app for myself, I would indeed distribute it differently.


I haven't done Android dev in a while, but I remember the Android SDK offered a 'backwards compatibility pack': you selected which version you wanted to target and how old a version you wanted to support (you could go back to like Android 5), and it gave you all the polyfills necessary. The only downside was that your app size would balloon to crazy levels.
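
A minimal sketch of the modern equivalent in a module-level build.gradle.kts; the version numbers here are made up, and this assumes the Android Gradle plugin is already applied. Today the androidx/AppCompat libraries play roughly the role of that old compatibility pack:

    // build.gradle.kts (module level; hypothetical version numbers)
    android {
        compileSdk = 34          // SDK the app is compiled against
        defaultConfig {
            minSdk = 21          // oldest Android release the app will install on
            targetSdk = 34       // release whose behaviour changes we opt into
        }
    }

    dependencies {
        // AppCompat backports newer widgets/APIs to older releases,
        // at the cost of extra APK size
        implementation("androidx.appcompat:appcompat:1.7.0")
    }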


It's more or less what minSdk does, but there may be libraries that require you to bump the minimum.

For example, there are APIs that make feasible something that should be trivial (like autosizing a font based on the available space, the way it happens on iOS), but they are only available from 8.0, so you cut out anything below that.

Or: we use BLE a lot, and there are newer methods that make our life easier but, again, are not available in older SDK versions.
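
For the autosizing example, a minimal Kotlin sketch: the framework attribute (android:autoSizeTextType) does need API 26 / Android 8.0, but androidx ships a backport through TextViewCompat that works on older releases, provided the view is an AppCompatTextView (the default in AppCompat-themed apps). The helper name enableAutoSize is made up:

    import android.widget.TextView
    import androidx.core.widget.TextViewCompat

    // Shrink/grow the font so the text fits the view's bounds.
    // On API 26+ this delegates to the framework implementation; below
    // that it relies on the AppCompat backport, so the view must be an
    // AppCompatTextView for it to have any effect.
    fun enableAutoSize(textView: TextView) {
        TextViewCompat.setAutoSizeTextTypeWithDefaults(
            textView,
            TextViewCompat.AUTO_SIZE_TEXT_TYPE_UNIFORM
        )
    }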


> Forty 9003s were installed, offered via lease between 1959 and 1964. The first 9003 was installed at Marzotto in Valdagno

I pass by Marzotto almost every weekend during winter (GREAT spots for goulottes). I didn't know that one of the first computers in Italy was installed there.

Such a shame, the rise and fall of Marzotto and Recoaro.

