Hacker News | slowmovintarget's comments

I hear this, and I want to sympathize, but I always have trouble reconciling it with the philosophy pointed to by the phrase "information wants to be free."

Should the people be compensated for their work? Yes. Is this piracy? Yes, we know that many of these big organizations just outright copied the work without proper payment. (No, it isn't theft; theft deprives the owner of property, not just a portion of the revenue.)

But tokenization and vectorization of this data is a fundamental transformation. Information wants to be free. Again, I don't know if I can reconcile this with fair use doctrine, but it smells like it, especially for books that would be in a library, or images of paintings.

If we embodied these learning systems in a robot, sent it to the library, and had it read every book, would it be piracy? That's how humans learn. They take away a fundamental summary of the thing. When I read a book I'm affected by it. My vocabulary, even my writing style sometimes. The fact that these are machine systems capable of doing this at scale... does that make a difference?


Fair use doctrine is explicitly for humans, for the better overall state of humankind. It is not "fair" in terms of some abstract logic, since we are giving preference to one part of the population rather than treating everyone equally. So the fairness is not in the perfect mathematical equality of the application of fair use law; it is in the inequality of it, and that is what makes it fair. In short: fair use is a hack, which we humans invented for ourselves, for our better living.

There is no law of the universe saying that a hack we invented for ourselves must be extended to literally everything everywhere; we don't "owe" this to anyone. So we don't "owe" it to our computers that the fair use doctrine be extended to computer programs. The word "fair" in the fair use doctrine doesn't mean that every entity should automatically benefit from the law. For now, it's only for humans.

The point of this long rant is that making fair use universally fair automatically makes it unfair for humans, the original beneficiaries of the law. And there is no compromise here: we either ignore human rights or computer-program rights.


Fair use would be applied to the humans who caused it to be tokenized and vectorized. (Transformation.)

But I take your point about a robot doing it.


The real issue should not be whether they're paying the government. The issue is whether they're paying us for taking the human content of the last two thousand years and baking it into their generators.

How do we get royalties on this, like our share of the oil proceeds if we were citizens of Qatar? How do we trade our share of the contribution? There's twenty years of my posting on Reddit, Slashdot, HN, and other forums, that we know for a fact has been used in these frontier models. Great... where's my royalty check?

Pay us, not the government. We'll have to pay taxes regardless, and yes, close the tax loopholes on security-based capital gains (don't tax me for all the investment in my primary residence, that's a double dip).

I heard this called "Coasian" economics (as in Coase). I'm not sure what that actually means, though.


> where's my royalty check?

I support the idea of UBI with zero conditions, but not this. You didn't get royalties before AI when someone was heavily influenced by your work/content and converted that into money. If you expected compensation, then you shouldn't have given away your work for free.


> you shouldn't have given away your work for free.

Almost none of the original work I've ever posted online has been "given away for free", because it was protected by copyright law that AI companies are brazenly ignoring, except where they make huge deals with megacorporations (e.g., OpenAI and Disney) because they do in fact know what they're doing is not fair use. That's true whether or not I posted it in a context where I expected compensation.


> Almost none of the original work I've ever posted online has been "given away for free", because it was protected by copyright law that AI companies are brazenly ignoring.

I just don't think the AI is doing anything differently than a human does. It "learns" and then "generates". As long as the "generates" part is actually connecting dots on its own and not just copy & pasting protected material then I don't see why we should consider it any different from when a human does it.

And really, almost nothing is original anyway. You think you wrote an original song? You didn't. You just added a thin layer over top of years of other people's layers. Music has converged over time to all sound very similar (same instruments, same rhythms, same notes, same scales, same chords, same progressions, same vocal techniques, and so on). If you had never heard music before and tried to write a truly original song, you can bet that it would not sound anything like any of the music we listen to today.

Coding, art, writing...really any creative endeavor, for the most part works the same way.


Conjecture on the functional similarities between LLMs and humans isn't relevant here, nor are sophomoric musings on the nature of originality in creative endeavors. LLMs are software products whose creation involves the unauthorized reproduction, storage, and transformation of countless copyright-protected works—all problematic, even if we ignore the potential for infringing outputs—and it is simple to argue that, as a commercial application whose creators openly tout their potential to displace human creators, LLMs fail all four fair use "tests".

It's not just "heavily influenced" though, it's literally where the smarts are coming from.

I don't think royalties make sense either, but we could at least mandate some arrangement where the resulting model must be open. Or you can keep it closed for a while, but there's a tax on that.


originally we all posted online to help each other, with problems we mutually have. it was community, and we always gave since we got back in a free exchange.

now, there is an oligarchy coming to compile all of that community to then serve it at a paid cost. what used to be free with some search, now is not and the government of the people is allowing no choice by the people (in any capacity).

once capital comes for things at scale (with the full backing of the government), and they monetize that and treat it as "their own" i would consider that plagiarism.

how can we be expected to pay taxes on every microtransaction, when we get nothing for equally traceable contributions to the new machine?


The work was given to other humans. They paid taxes.

> The work was given to other humans. They paid taxes.

Says who? I mean, what if black artists said they gave blues to black people, and now it's white people making rock'n'roll? Black people spent money in black communities; now it's white people making it and spending it in theirs.

In essence they are the same point about outflows of value from the originating community. How you define a community, and what is integral is subjective.

I'm not convinced either way, but this line of reasoning feels dangerous.

I'd rather say that all ownership is communal, and as a community we allow people to retain some value to enable and encourage them further.


Still humans giving to humans. "White rock n roll artists" in your example paid taxes and those taxes benefited everyone.

That is your distinction because you chose to draw the line around all humans. But who is to say that the line shouldn't be drawn around black-people, or just men, or just Christians?

And no, taxes don't just magically benefit everyone. That's actually the point of them: they are redistributive.


Who is to say the line should be drawn using discrimination?

Taxes fund the state. The state provides a minimum set of services - law and order, border security, fire safety - to everyone regardless of ability to pay. That others may derive additional state benefits is beside the point. Everyone gets something.


[flagged]


Curious, what is your solution to this situation? Imagine all labor has been automated - virtually all facets of life have been commoditized, how does the average person survive in such a society?

I would go further and ask how a person who is unable to work survives in our current society. Should we let them die of hunger? Send them to Ecuador? Of course not; only Nazis would propose such a solution.

Isn't this the premise of some sci-fi books and such?

(We in some way, in the developed world, are already mostly here in that the lifestyle of even a well-off person of a thousand years ago is almost entirely supported by machines and such; less than 10% of labor is in farming. What did we do? Created more work (and some would say much busy-work).)


No, it's not and we don't. The numbers we do have suggest that it's great in developing societies and terrible in developed.

Perhaps then the person you are responding to is focusing on developed societies.

Stop saying "we" like all of your schizophrenic identities are posting at once.

I'm suspicious of UBI as well. I guess I don't believe it brings about the best in human nature—nor does Capitalism in many regards.

Trials show that UBI is fantastic and does bring the best in people, lifting them from poverty and addiction, making them happier, healthier and better educated.

It is awful for the extractive economy as employees are no longer desperate.

Here’s a discussion with a historian who has done a lot of research on the topic https://www.reddit.com/r/IAmA/comments/8de9u2/i_am_a_histori...


This is not true. Don't ask a historian, ask an economist:

https://knowledge.insead.edu/economics-finance/universal-bas...

With some exceptions, UBI generally doesn't seem to work.


Maybe I'm misreading this article, but where does it actually say that anything UBI-related failed? The titular "failure" of the experiment is apparently:

> While the Ontario’s Basic Income experiment was hardly the only one of its kind, it was the largest government-run experiment. It was also one of the few to be originally designed as a randomised clinical trial. Using administrative records, interviews and measures collected directly from participants, the pilot evaluation team was mandated to consider changes in participants’ food security, stress and anxiety, mental health, health and healthcare usage, housing stability, education and training, as well as employment and labour market participation. The results of the experiment were to be made public in 2020.

> However, in July 2018, the incoming government announced the cancellation of the pilot programme, with final payments to be made in March 2019. The newly elected legislators said that the programme was “a disincentive to get people back on track” and that they had heard from ministry staff that it did not help people become “independent contributors to the economy”. The move was decried by others as premature. Programme recipients asked the court to overturn the cancellation but were unsuccessful.

So according to the article, a new government decided to stop the experiment not based on the collected data, but on their political position and vibes. Is there any further failure described in the article?


Ronald Coase. https://economics.com.au/2013/09/03/coasian-thinking/ ; more broadly he has some great writing on the "theory of the firm".

Mmm, Coase is my jam.

"How do we get royalties on this, like our share of the oil proceeds if we were citizens of Qatar?"

Alaska as a state managed to do just that and more or less has an annual UBI.


This nails my primary frustration with all gen AI - why are we all seemingly okay with a few massive companies and their billionaire CEOs training models on the output of all human civilization and then selling it back to us with the promise of putting all workers out of a job? How’s that not theft?

Well, you as a human train and live on the output of all human civilization, and if you are very efficient you put people out of work. 99% of human jobs were physical labor, and now machines do the work of thousands with one human supervisor (oil tanker, machine loom, dump truck, train, etc.). If humans are put completely out of the loop, that is a new problem for humans (a superintelligent AI that takes over will likely be very bad for us), but avoiding that problem is another ball of wax.

> The issue is whether they're paying us for taking the human content of the last two thousand years and baking it into their generators.

Payment for content access is a sure way to limit progress and freedom. Should I pay you based on quantity, quality, or usage that relates to your content? How about the ideas you took from other people, should you pay them? Where does it stop?

I think the copyright system as it exists today is just absurd, a complete inversion of what it was supposed to do. It was meant to promote progress by protecting expression. Now look at what's happened: total concept and feel protects aesthetic gestalt, structure and arrangement protects how elements relate, the Whelan test protects the entire logical skeleton, while AFC (abstraction-filtration-comparison) enables hierarchical abstraction protection.

Each rung up the ladder takes us further from "I wrote this specific thing" toward "nobody else can solve this problem in similar ways". This is how platforms get rich while common people, readers and creators, lose their freedoms and are exploited.



"Never let a good crisis go to waste."

The "think of the children!" argument has long been used by people in government to give themselves more power. In this case there's been a global effort to shut down unapproved speech. The government gains the power to censor and arrest for "bad speech" but it also gets to decide how the labels for the same are applied. There have been panel discussions and speeches on this at the WEF, and discussions of tactics for selling or pushing through this kind of legislation for at least a decade.

That's how we got that video of John Kerry lamenting the U.S. Constitution's First Amendment.

So under the aegis of "think of the children!" (which may or may not have come from "grass roots" organizations) you get a committee with the power to decide what speech is badthink or wrongthink, label it as such, and hand out arrest warrants for it.

Disagree with policy: that's "hate" or "misinformation" or "inflammatory."

Voice a moral opinion: that's "hate" or "bigotry" or "intolerance."

Express doubt over a leader's actions: that's "misinformation" or "inflammatory."

Fascinating that they're more worried about VPN use than about shutting down rape gangs.



While I'd certainly prefer raw human authorship on a forum like this, I can't help but think this is the wrong question. The labeling up front appears to be merely a disclosure style. That is, commenters say that as a way of notifying the reader that they used an LLM to arrive at the answer (at least here on HN) rather than citing the LLM as an authority.

"Banning" the comment syntax would merely ban the form of notification. People are going to look stuff up with an LLM. It's 2025; that's what we do instead of search these days. Just like we used to comment "Well Google says..." or "According to Alta Vista..."

Proscribing quoting an LLM is a losing proposition. Commenters will just omit disclosure.

I'd lean toward officially ignoring it, or alternatively ask that disclosure take on less conversational form. For example, use quote syntax and cite the LLM. e.g.:

> Blah blah slop slop slop

-- ChatGippity


Basic economics. If value is based on scarcity of a resource, and you lift the bottleneck that makes the resource scarce, the value is reduced.

In the case of software, the resource is time (you could build/host/operate that software yourself, but it takes a heck of lot more time than you're willing to spend so you trade money for the product instead), and you can't reduce the scarcity of time.


Seemingly, and those are often caused by diet and lack of exercise (or related sleep issues) like so many other disorders.


> AI failed at Microsoft...

Let's be fair. Microsoft has not succeeded at mainstream consumer AI products... yet.

Copilot (the real one, for developers) was, last I heard, the most subscribed generative AI tool for software developers; that's not failure. It's wild that most developers in a corporate environment pay Microsoft to use ChatGPT, Gemini, and Claude. Their other gen-AI components in Office, like PowerPoint and Word, are pretty darn good. But again, unless you're a corporate user in a work setting, you probably don't care.

This push to lease you your own computer is what hasn't worked very well so far. I dearly hope it pushes more people to Linux (though more likely they'll flee to Mac, which is a more palatable version of the clumsy crap MS is trying to do).

So consumer AI... perhaps that has failed. But the money isn't in you and me paying for a Windows license. The money's in big corporations paying for ten thousand seats at a time for their suites.


> It's wild that most developers in a corporate environment pay Microsoft to use Chat GPT, Gemini, and Claude

It definitely didn't help back when my manager asked me to recommend a subscription to buy everyone on our team that Anthropic didn't offer any plan with a predictable monthly cost for Claude Code with SSO/externally managed billing (I think that changed fairly recently).

Github Copilot for Business with an easily digestible flat monthly rate + straightforward per request rate beyond the quota (for devs who actually ended up using it heavily) made it extremely painless to get approved.

Cursor was really the only other subscription offering that checked all those boxes but our team uses the official Microsoft VS Code extensions and there was 0% chance of getting buy-in if it meant disrupting everyone's workflow for a 6 month trial period.


The thing that our management likes about Copilot is that the data stays with us. We can't say that about ChatGPT.


That's because Microsoft understands business while it fails at private customers.


Many cases of ADHD are allergic reactions. When I get a dose of something I'm allergic to (molds, perfumes...) I'll end up being scatter-brained for the day.

My child was the same way due to a wheat allergy (U.S. wheat is heavier than European wheat, and the flour comes out differently. It's actually tougher to digest for many).

As for letting the doctor tell you... The last few decades, I find I've been making the diagnosis first. Chronic fatigue, adult chicken pox, whooping cough... In each case, the medical history I gave was a reasonable path for ordering tests to get the correct diagnosis, but in each case the doctors missed it.

The whooping cough one was particularly annoying. I had traveled to Amsterdam, which had an outbreak a couple of years ago. I told the doctors this, and they still diagnosed me as having an ordinary sinus infection and gave me the wrong antibiotics. My wife looked up the medical alerts in Amsterdam, and I messaged the doctor asking for the correct antibiotics (just in time).

Doctors in the U.S. follow protocols. When they don't they have to file extra paperwork. The entire system is designed to punish deviation from protocol, and the protocols don't get adjusted based on evidence or circumstance. They're handed down from a committee that the insurance companies take as law. Insurance companies in particular are geared to denying payment for deviation from protocol without prior approval. So doctors adjust their behavior to go with the flow... unless you find a great doctor, who knows better.

These days, you really do need to take your health in your own hands. Doctors become a path to confirmation and treatment, at least on the illness side of things. Injury tends to be a little more cut-and-dried, but even there you have to find the right experts. For example for sports injuries, you likely want doctors that see many patients that play that particular sport because they have experience with how things can go wrong.

Sorry for the rant. I've just seen the need for shopping a lot more in recent years.


we should probably take what those useless doctors say about ADHD with a similar pinch of salt then?

