Hacker News | glaslong's comments

Noted. Handing bro a cool $20 instead when he asks me to dial 911

Thank you. I've had trouble articulating this sense, but it's strong. An uncanny valley.

I like this framing. At least as "nondeterministic" vs "deterministic" layers for the folks who flinch at "bullshit." Also "broadly capable but lossy" versus "limited capability but reliable."

When building structures of dependencies, the interface between each pair seems to collapse to the reliability of the lesser of the two. So there's a ton of work right now going into TLA+, structured I/O, etc., to force even a bit of reliability back into the LLM/program boundaries — to have any hope of chaining multiple LLM dependencies in a stack without the whole thing toppling chaotically.
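The "structured I/O at the boundary" idea can be sketched roughly like this — a minimal, hypothetical example (no particular library; the field names and `validate_boundary` function are made up for illustration) of treating the LLM as a nondeterministic, lossy layer and rejecting anything malformed before the deterministic program consumes it:

```python
import json

# Hypothetical schema the deterministic side of the boundary expects.
REQUIRED = {"city": str, "temp_c": (int, float)}

def validate_boundary(llm_reply: str) -> dict:
    """Parse an LLM reply and reject anything that doesn't match the schema."""
    data = json.loads(llm_reply)  # raises ValueError on free-text chatter
    for key, typ in REQUIRED.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"bad or missing field: {key}")
    return data

# A well-formed reply passes through; conversational chatter does not.
ok = validate_boundary('{"city": "Oslo", "temp_c": -3}')
try:
    validate_boundary("Sure! The temperature in Oslo is -3C.")
except ValueError:
    ok_rejected = True
```

The point is that the boundary, not the model, is where the reliability is enforced: the deterministic layer only ever sees data that passed validation.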


Surprising; I would have thought the target demo for gambling ad buys would skew MUCH more young male.

Buy some lava lamps

All those lawsuits against students who downloaded but didn't even redistribute mp3s. Less than a fair use transformation. Just the file download itself. ... Lesson learned: those students should have stolen millions instead!

The real distinguishing criterion is whether you are super rich or not.

That may have been an information-shaping campaign. If the end user can get prosecuted, the discourse inherently shifts from a positive to a negative connotation, which helps the powers-that-be curb the behavior.

So if we want to run this strategy, I suppose we need to fine Meta out of existence, as an example to the rest.

Actually, it was the uploads that were a problem. Not downloads.

You mean the courts found a technical legal explanation for why very large scale copyright infringement by multinationals was legal, but infringement by individuals of multinational owned copyrights was to be HEAVILY punished?

No shit.

Of course, the excuse doesn't even apply: the offense of the tech bosses is not training these models (they had that declared legal the second it became clear only the big companies would be training big models); the offense of all these tech companies is running a piracy site. Taking copyrighted works, storing them, reproducing them and then publishing the results to third parties, in many cases for payment, and organizing this practice knowingly and willingly as a company. Paying others to help them do it. This is the worst copyright offense one can possibly commit. It is what one public prosecutor referred to in the Napster case as "organizing a criminal cartel to violate criminal law on a huge scale".

Tech bosses weren't sued for downloading, in other words, they were sued for uploading. For asking payment for publishing copyrighted works, without any money going to the authors.

When Kim Dotcom did that, in the words of the US Attorney General, it meant "charges of criminal copyright infringement, racketeering, and money laundering" (you see, getting paid for criminal activity is money laundering, a charge that was also made against teenagers selling warez CDs).

ChatGPT tells me, unaware and unwilling to discuss the INCREDIBLE unfairness, that in the US, first-time offenders can face up to 5 years in prison, while repeat offenders can face up to 10 years PER OFFENSE.

The courts are also unwilling to discuss this, but no worries! New technicality: only a public prosecutor gets to ask ...

Dario Amodei wilfully committed large-scale copyright infringement, as did all the tech bosses from Musk to Bezos ... and "strangely" nobody in any court even mentioned how much 10 years times 500,000 is, despite systematically threatening that punishment repeatedly in the cases against teenagers.

Note that the law is extremely clear that company management IS NOT shielded when ordering criminal actions (violating criminal law, as opposed to violating a contract). In that case, company management carries full criminal culpability, INCREASED compared to if they had done it themselves. Of course, this is only ever applied for refusing to pay tax or court fees.

If the law were applied alike and fairly to individuals and tech bosses, Amodei would have to be VERY lucky the human race still exists by the time his corpse leaves prison.


So it isn't disrespectful, and neither is it respectful. A perfect nothing, not a thought or care involved. Like a 1-click eCard mailer.

Genuine question. Let's say you are bad with words.

If you ask AI to generate a hundred different paragraphs and choose the one which best conveys what you actually feel and want to communicate.

Is it still a perfect nothing?


Why are you a CEO if you are bad with words? If a CEO's work can be reduced to picking the best option from AI-generated text, why do they make so much money, and why would anyone choose to invest in a company that could be led by anyone picking from a list of AI responses?

It's further north of nothing.

It's a little less than if you got up and stood in the convenience store to pick 1 of 12 "Get Well!" cards. That's a few meaningful steps up (literally) from the 1-click eCard. The output will be a bit better for it, but more importantly, it shows more that the interaction mattered to you.

The effort is meaningfully part of the output. I think many would still prefer if you scratched a couple non-perfect words yourself. I know I would. Those words are You, and if you're in a place to send me a card, what matters is that you showed up and offered Your words. The language and the card are transfer media.

In the same way, you could go the opposite extra mile and craft a very elaborate "you suck, here's your severance, loser!" message that would tip towards disrespectful :p


The problem with AI is that it tells you to say things you don't think, and can't tell you to say things which are original to you. Some things you will only say because they were presented to you by the bot. Others you won't say because they only exist in your head.

If you are bad enough with words that you can't write an authentic message, you are also bad enough with words that you won't understand the options with enough nuance to know what you are saying. The bot will put words in your mouth that aren't true.

It is generally better to write poorly and from the heart than to outsource your heart to a really big algorithm. What you accidentally say from the heart will still echo your thoughts, while the AI will not. ChatGPT can't suddenly remember the time when you and your wife went to the beach together and saw a penguin, and she was worried it wouldn't be able to reach the ocean, and then it was totally fine and she got embarrassed, but you felt really in love with her because she cared so much.


> Is it still a perfect nothing?

You do get how that's worse, right? The person would rather spend their time arguing with the clanker than thinking about the person and putting those thoughts into words, however unstructured they are.


Yeah, but communication is a two-way street. It might not matter to me that my words are unstructured, but it will to the person I'm writing to if they can't make head nor tail of what I'm saying, or worse, misunderstand it as being insulting when it isn't.

There is a whole industry built around the [mis]conception that people will take less offense at the content if it is presented differently. The predictable result is that what actually gets rewritten is the content, not the presentation or tone. No amount of LinkedInese corporate fluffery will wash off the core message that people are getting laid off, unless you outright hide the message under the ambiguity of double-speak like "slimming down operations", which can mean multiple things.

So essentially you have three choices:

1. Spend time writing (or have written by a copywriter) in corporate fluff dialect, where the actual message is still understandable by all parties. At the cost of appearing tone deaf.

2. Spend time iterating with a bot that speaks some undefined sub-dialect of LLMinese, where the reception of the message is unknown. At the cost of appearing even more tone-deaf and insulting than a corporate cog.

3. Spend time restructuring the message in your genuine voice. At the cost of maybe being heard more harshly than intended.

I fail to see how option 2 can be perceived as anything but the worst, unless you assume that the target audience does not distinguish LLMinese from actual speech.


Totally agree. I don't understand why people are averse to working on their communication "soft" skills compared to other "hard" skills. People who find it hard to express themselves have my sympathy, but at the same time I'm flabbergasted at how they function in a team or in the workplace. Not to mention people for whom English is not their native language treating LLMs like the Star Trek universal translator instead of as help with language acquisition.

And yeah, I know my tone is harsh and appears to lack empathy, and I have only my writing skills and a lack of time to blame. That said, I won't be the one to throw it into an LLM for "refinement"; otherwise, how would I improve? I'm not sure LLMs are to communication as forklifts are to lifting and moving stuff.

As a side note, the general advice regarding code review, in my experience, was not to take it personally, and it's kinda funny to me, for reasons I can't pinpoint, how people (like me) have started giving unsolicited advice or criticism in regard to writing, when in actuality both (code and writing) reflect personally on the human on the other side of the screen.

Anyway, I pretty much went off on my own tangent here, with an apparent lack of empathy to boot, but if we end up disregarding such fundamental human skills, then what's to stop us from becoming dunces in a few generations? Sure, I'll add another abstraction layer, even if it has a lot in common with reading tea leaves (it's not like I manually flip switches to input a program), but I'll try my best to keep my individuality where it matters to me, specifically when it comes to expressing myself.

Thank you for coming to my TED rant.


You are contradicting yourself: either presentation is not important, so LLM use does not matter (as long as the core message is still there), or it is important, and an LLM can change how the message is received (by improving the presentation or making it worse).

I don't see a contradiction. What they are saying is that no amount of non-ambiguous presentation can make poor content acceptable. They never said the presentation was meaningless.

Example: A friend has died and consolation is given. No amount of consolation makes the death a good thing for you, but there is still a difference in how that consolation is presented to you.


It's fascinating to watch org managers decide that AI has made people management easy and cheap, instead of org management.

Couldn't have happened to a nicer airline

Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,

