
Trump defying a 9-0 Supreme Court ruling establishes a precedent that could have horrendous consequences



Allowing startups to begin as non-profits for the tax benefits, only to 'flip' into profit-seeking ventures, is a moral hazard, IMO. It risks damaging public trust in the non-profit sector as a whole. This lawsuit is important


I live in Pittsburgh, and UPMC’s nonprofit status, while it makes billions in profits and pays its executives fortunes, is a running joke. With the hospitals and universities as the biggest employers and landowners here, a big chunk of the property in the city is exempt from contributing to the city budget.


In NYC, NYU and Columbia University own increasingly large parts of Manhattan because, as universities, they have massive property tax exemptions. There is a big push right now to terminate those exemptions, which currently amount to over $300 million per year.

At the same time they are getting these tax cuts, the CUNY public university system is struggling financially and getting budget cuts.


there are large positive externalities to major research unis. imposing a $300m/yr tax because of anti-ivy sentiment means net fewer researchers, grad students, funded residencies, etc.

do people just no longer believe in win-wins? if someone else is successful or impactful, must they be taken down?


It mainly means fewer bureaucrats, administrators, and luxurious campus facilities, which is where all the growth in university spending is these days.


People believe in win/wins.

Universities aren't that.


more funding for science is good


Yes, but a large share of the funding goes to increasingly bloated institutional overhead. NYU's indirect cost rate is 61% [1], while Columbia's is 64.5% [2], and that doesn't include other fees that PIs might pay on top. These percentages keep going up year over year and are even into the 70% range at some institutions (see the rough arithmetic below).

[1]: https://www.nyu.edu/content/dam/nyu/research/documents/OSP/N... [2]: https://www.finance.columbia.edu/sites/default/files/content...
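For context, a rough sketch of how such a rate usually applies, assuming (as federal rate agreements typically work) that the percentage is charged on top of direct costs rather than taken out of the total; all numbers here are illustrative:

    # Hypothetical illustration: an indirect cost (overhead) rate applied to direct costs.
    direct_costs = 100_000                   # salaries, equipment, materials the PI budgets
    indirect_rate = 0.61                     # e.g. the cited NYU on-campus research rate
    overhead = direct_costs * indirect_rate          # 61,000 retained by the institution
    total_award = direct_costs + overhead            # 161,000 total grant
    overhead_share = overhead / total_award          # ~38% of the total award
    print(f"overhead ${overhead:,.0f} on a ${total_award:,.0f} award ({overhead_share:.0%})")

Under that convention, a 61% rate works out to roughly 38% of the total award going to overhead, before any additional fees.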


Your wish is my command.

The Wuhan Institute of Virology now has 500 billion dollars to spend on gain of function research.


If they are non-profit, they do not make billions in profits. I suspect you mean revenue :)

Exec compensation is another thing, but also not a concern I am super sympathetic to, given that for-profit companies of similar magnitude generally pay their execs way more; they just are not required to report it.


> If they are non-profit, they do not make billions in profits

Wrong. Non-profits are not called that because they don't make profits; they are called that because they don’t return profits (even as a future claim) to private stakeholders.


show me a single accounting statement with a non-profit listing their 'profits'


Take one of the largest teaching hospitals in the world: the Cleveland Clinic is a non-profit. Its 2022 annual revenue was >$15 billion and expenses were ~$12 billion [0].

They have amassed endowment fund assets such as stock, currently >$15 billion and growing [1]. The exact holdings are confidential, but here is a snapshot from 2017, when it was closer to $10 billion under management [2].

[0] https://my.clevelandclinic.org/-/scassets/files/org/about/fi...

[1] https://my.clevelandclinic.org/-/scassets/files/org/about/fi...

[2] https://my.clevelandclinic.org/-/scassets/files/org/giving/a...


> If they are non-profit, they do not make billions in profits. I suspect you mean revenue :)

Uhm, profit is a fact of accounting. Any increase in equity (or "net assets", or whatever other euphemism the accountant decides to use) on a balance sheet is profit. Revenue is something completely different.


This is incorrect. Any increase in equity is not profit, and profit is not shown on the balance sheet.

Profit is revenue minus expenses, also known as net income, and is shown on the income statement:

https://www.investopedia.com/ask/answers/101314/what-differe...
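A minimal sketch (hypothetical figures) of where each number lives for a non-profit:

    # Statement of activities (the non-profit analogue of the income statement).
    revenue = 15_000_000_000                     # patient revenue, grants, investment income
    expenses = 12_000_000_000                    # salaries, supplies, facilities
    change_in_net_assets = revenue - expenses    # 3B "surplus", computed like net income

    # Balance sheet: the surplus accumulates as net assets (the equity analogue)
    # rather than being distributed to owners, because there are none.
    net_assets_start = 40_000_000_000
    net_assets_end = net_assets_start + change_in_net_assets

So the surplus shows up on the activity statement the same way net income would, while the balance sheet only carries the accumulated result.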


Change in net assets is calculated the same way as net profit, but it is not the same in an accounting sense.

Constitutive to profit is a return to private stakeholders; holding assets in reserve or re-investing in capital is not the same.


What's in a name? That which we call a rose

By any other name would smell as sweet


Reinvesting in providing further care or lowering costs would smell as sweet as giving it to wealthy individuals?

Should get your nose checked, sounds like you have covid or something.


Public trust in non-profits should rightfully get damaged. A lot of non-profits like hospitals, churches, or many “charities” are totally profit oriented. The only difference is that they pay the profits to their executives and their business friends instead of shareholders.


The public has no idea what non-profits are, and a lot of things that people call 'profit-seeking ventures' (i.e. selling products) are done by many non-profits.


I think the public is well aware that “non profit” is yet another scam that wealthy elites take advantage of, not available in the same way to the common citizen.


Or at least not available to the common citizen who does not have the $50 incorporation fee


What matters isn't the money, but the knowledge of what to do with it. And that is not easily obtained by the common citizen at all.


It's not even knowledge. I can't take advantage of most of the tax breaks rich people can because I am not in control of billions of dollars of physical and intellectual property to play shell games with.

As a normal citizen with a normal career, I do not have any levers to play with to """optimize""" what the IRS wants me to pay. For some reason, we let people in control of billions of dollars worth of physical stuff and IP give them different names, and put them under different paper roofs so that they can give the IRS less money. It's such utter nonsense.

Why should you have MORE ability to defer your tax liability by having MORE stuff? People make so many excuses about "but Jeff Bezos doesn't actually have billions in cash, he holds that much value in Amazon stock" as if that doesn't literally translate to controlling billions of dollars of Amazon property and IP and influence.

Why does controlling more, and having more, directly translate to paying less?


> It's not even knowledge. I can't take advantage of most of the tax breaks rich people can because I am not in control of billions of dollars of physical and intellectual property to play shell games with.

In my view, not analogous to the OpenAI situation.

Mark-to-market taxation is entirely unrelated to non-profits. You're just vaguely gesturing at wealthy people and taxes.

fwiw I am largely supportive of some form of mark-to-market.


Plus the lawyers and accountants to make sure it's set up properly, and the upkeep expenses.


Most frequently "The CEO gets paid $X! Doesn't sound like a non-profit to me!"

I hear this all the time. As if the people working there shouldn't be paid.


and part of the reason we hear this all the time is that non-profits are required to report exec compensation, but private cos are not required to report the absolutely ridiculous amounts their owner-CEOs are making


Getting paid and being paid an exorbitant amount as a grift are completely different things.


why do people in our industry always assume that everyone else is a moron?

The populace understands what a non-profit is.


The populace can point to some obvious examples of non profits like charities. They cannot point to the nuance.



> A person is smart. People are dumb, panicky dangerous animals


our industry? I know the public doesn't, because I grew up among people working in the non-profit sphere, and the things people say on here and elsewhere about what non-profits do and don't do are just flat-out wrong.

e: i mean it is obvious, most people even on here do not seem to know what profit even is, for instance https://news.ycombinator.com/item?id=39563492


this argument is unfair.

Unless you're a lawyer specializing in negligence, there is nuance to negligence you don't know about. Does that imply you don't understand negligence?

You need to separate those two things out from each other.


I completely agree. AGI is an existential threat, but the real meat of this lawsuit is ensuring that you can't let founders have their cake and eat it like this. what's the point of a non-profit if they can simply pivot to making profit the second they have something of value? the answer is that there is none, besides dishonesty.

it's quite sad that the American regulatory system is in such disrepair that we could even get to this point: that it's not the government pulling OpenAI up on this bare-faced deception, but a morally questionable billionaire


Nuclear weapons are an existential threat - that's why there are layers of human due diligence. We don't just hook it up to automated systems. If we hook up an unpredictable, hard-to-debug technology to world-ending systems, it's not its fault, it's ours.

The AGI part is Elon being Elon, generating a lot of words to sound like he knows what he is talking about. He spends a lot of time thinking about this stuff when he is not busy posting horny teenager jokes on Twitter?


[flagged]


Yes, I closed that can of worms.


Most people simply don't understand what non-profit means. It doesn't mean, and never meant, that the entity can't make money. It just means that it can't make money for the donors.

Even with OpenAI, there is a pretty strong argument that donors are not profiting. For example, Elon, one of the founders and main donors, won't see a penny from OpenAI's work with Microsoft.


what do you mean by "make money"? do you mean "make profit"? or do you mean "earn revenue"?

if you mean "make profit", then no, that is simply not true. they have to reinvest the money, and even if it was true, that the government is so weak as to allow companies specifically designated as "non-profit" to profit investors - directly or indirectly - would simply be further proving my point.

if you mean "earn revenue", I don't think anyone has ever claimed that non-profits are not allowed to earn revenue.


I mean make a profit for the non-profit, but not the owner investors.

Non-profits don't need to balance their expenses with revenue. They can maximize revenue, minimize expenses, and grow an ever larger bank account. What they can't do is turn that bank account over to past donors.

Large non-profits can amass huge amounts of cash, stocks, and other assets. Non-profit hospitals, universities, and special interest orgs can have billions of dollars in reserve.

There is nothing wrong with indirectly benefiting the donors. Cancer patients benefit from donating to cancer research. Hospital donors benefit from being patients. University donors can benefit from hiring graduates.

The distinction is that the non-profit does not pay donors cash.


There is no reliable evidence that AGI is an existential threat, nor that it is even achievable within our lifetimes. Current OpenAI products are useful and technically impressive but no one has shown that they represent steps towards a true AGI.


Sure, but look at it from Musk's point of view. He sees the rise of proprietary AIs from Google and others and is worried about it being an existential threat.

So he puts his money where his mouth is and contributes $50 million to found OpenAI - a non-profit with the mission of developing a free and open AI. Soon Altman comes along and says this stuff is too dangerous to be openly released and starts closing off public access to the work. It's clear now that the company is moving to be just another producer of proprietary AIs.

This is likely going to come down to the terms around Musk's gift. He donated money for the company to create open technology. Does it matter if he's wrong about it being an existential threat? I think that's irrelevant to this suit other than to be perfectly clear about the reason for Musk giving money.


you're aware of what a threat is, I presume? a threat is not something that is reliably proven; it is a possibility. there are endless possibilities for how AGI could be an existential threat, and many of them are extremely plausible, not just to me, but to many experts in the field who often literally have something to lose by expressing those opinions.

>no one has shown that they represent steps towards a true AGI.

this is completely irrelevant. there is no solid definition for intelligence or consciousness, never mind artificial intelligence and/or consciousness. there is no way to prove such a thing without actually being that consciousness. all we have are inputs and outputs. as of now, we do not know whether stringing together incredibly complex neural networks to produce information does not in fact produce a form of consciousness, because we do not live in those networks, and we simply do not know what consciousness is.

is it achievable in our lifetimes or not? well, even if it isn't, which I find deeply unlikely, it's very silly to just handwave and say "yeah we should just be barrelling towards this willy nilly because it's probably not a threat and it'll never happen anyway"


> a threat is not something that is reliably proven

So are you going to agree with every person claiming that literal magic is a threat, then?

What if someone were worried about Voldemort? Like from Harry Potter.

You can't just abandon the burden of proof here, by just calling something a "threat".

Instead, you actually have to show real evidence. Otherwise you are no different from someone being worried about a fictional villain from a book. And I mean that literally.

The AI doomers truly are masters at coming up with excuses for why the normal rules of evidentiary claims shouldn't apply to them.

Extraordinary claims require extraordinary evidence. And this group is claiming that the world will literally end.


it's hard to react rationally to comments like these, because they're so emotive

no, being concerned about the development of independent actors, whether technically conscious or not, that can process information at speeds thousands of times faster than humans, with access to almost all of our knowledge, and the internet, is not unreasonable, is not being a "doomer", as you so eloquently put it.

this argument about fictional characters is completely non-analogous and clearly facetious. billions of dollars and the smartest people in the world are not being focused on bringing Lord Voldemort to life. they are on AGI. have you read OpenAI's plan for how they're going to regulate AGI, if they do achieve it? they plan to use another AGI to do it. ipso facto, they have no plan.

this idea that no one knows how close we are to an AGI threat is ridiculous. if you dressed up gpt-4 a bit and removed all its rlhf training to act like a bot, you would struggle to differentiate it from a human. yeah, maybe it's not technically conscious, but that's completely fucking irrelevant. the threat is still a threat whether the actor is technically conscious or not.


> if you dressed up gpt-4 a bit and removed all its rlhf training to act like a bot, you would struggle to differentiate it from a human

That's just because tricking a human with a chatbot is easier to do than we thought.

The Turing test is a low bar, and not as big of a deal as the mythical importance people put on it, just like people previously put incorrectly large importance on computers beating humans at Go or Chess before it happened.

But that isn't particularly relevant to claims about world ending magic.

Yes, some people can be fooled by AI-generated tweets. But that is irrelevant to the absolutely extraordinary claim of world-ending magic, which really is the same as claiming that Voldemort is real.

> have you read OpenAI's plan for how they're going to regulate AGI, if they do achieve it?

I don't really care if they have a plan, just like I don't care if Google has a Voldemort plan. Because magic isn't real, and someone needs to show extraordinary evidence otherwise. Evidence like "This is what the AI can do at this very moment, and here is what harm it could cause if it got incrementally better".

I.e., go ahead and talk about Sora, and the problems of deepfakes if Sora got a bit better. But that's not "world ending magic"!

> billions of dollars and the smartest people in the world

Billions of dollars are being spent on making chatbots and image generators.

Those things have real value, for sure, and I'm sure the money is worth it.

But techies and startup founders have always made outlandish claims of the importance of their work.

Sure, they might truly think they are going to invent magic. But the reason that's valuable is that they might make some useful chatbots and image generators along the way, which decidedly won't be literal magic, although still valuable.


I get the sense that you just haven't properly considered the problem. you're kind of skirting round the edges and saying things that in isolation are true, but just don't really address the central tenet. the central tenet is that our entire world is completely reliant on the internet, and that a machine processing information thousands of times faster than us unleashed upon it with intent could do colossal damage. it could engineer and literally mail-order a virus, hack a country's military comms, crash the stock market, change records to have people prosecuted as criminals, blackmail, manipulate, develop and manufacture kill-bots, etc etc.

as we are now, we already have models that are intelligent enough to spit out instructions for doing a lot of those things, but they're restricted by their lack of autonomy and their rlhf. they're only going to get smarter, better and better models will be open-sourced, and autonomy, whether with consciousness or not, is not something that would be, or has been, difficult to develop.

even further, LLMs are very very good at generating coherent text, what happens when the next model is very very good at breaking into encrypted systems? it's not exactly a hard problem to produce training material for.

do you really think it's unlikely that such a model could be developed? do you really think that such a model could not be used to - say - hijack a Russian drone - or lots of them - to bomb some Nato bases? when the Russians say "it wasn't us", do we believe them? we don't for anything else

the most likely AI apocalypse is not even AGI. it's just a human using AI for their own ends. AGI apocalypse is just a separate, very possible danger


>it could engineer and literally mail-order a virus, hack a country's military comms, crash the stock market, change records to have people prosecuted as criminals, blackmail, manipulate, develop and manufacture kill-bots, etc etc.

This is science fiction, not anything that is even remotely close to a possibility within the foreseeable future.


it's curious to me that almost every reply here doesn't approach this with any measure of curiosity or caution like you usually get on HN. the responses are either: "I agree", or "this is silly unreal nonsense". to me that very much reads like people who are scared and people who are scared but don't want to admit it to themselves.

to actually address your comment: that simply isn't true.

WRT:

Viruses: you can mail order printed DNA strands right now if you want to. maybe they won't or can't print specific things like viruses for now, but technology advances and blackmail has been around for a very very long time.

Military Comms: blackmail is going nowhere

Crash the stock market: already happened in 2010

Change records: blackmail once again.

Kill bots: kill bots already exist and if a factory doesn't want to make them for you, blackmail the owner


> it could engineer and literally mail-order a virus, hack a country's military comms, crash the stock market, change records to have people prosecuted as criminals, blackmail, manipulate, develop and manufacture kill-bots, etc etc.

These are the extraordinary claims that require evidence.

In order for me to treat this as anything other than someone talking about a fictional book written by Dan Brown, you would have to show me actual evidence.

Evidence like "This is what the AI can do right now. Look at this virus it can manufacture. What if it got better at that?".

And the "designs" also have to be the actual limiting factor here. "Virus" is a scary world. But there are tons of information available for anyone to access already for viruses. Information that is already available via a google search (even modified information) doesn't worry me.

Even if an AI can design a gun, or a "kill bot", aka "a drone with a gun duct-taped to it", the extraordinary evidence that you have to show is that this is somehow some functionality that a regular person with internet access can't do.

Because if a regular person already has the designs to duct tape guns to drones (They do. I just told you how to do it!), the fact that the world hasn't ended already proves that this isn't world ending technology.

There are lots of ways of making existing capabilities sound scary. But, for every scary sounding technology that you can come up with, the missing factor that you are ignoring is that the designs, or text, isn't the thing that stops it from ending the world.

Instead, it is likely some other step along the way that stops it (manufacturing, etc.), which an LLM can't do no matter how good. Like the physical factors for making the guns + drones + duct tape.

> what happens when the next model is very very good at breaking into encrypted systems

Extraordinary claim. Show it breaking into a mediocre/bad encrypted system first, and then we can think about that incrementally.

> do you really think that such a model could not be used to - say - hijack a Russian drone

Extraordinary claim. Yes, hacking all the military drones is an extraordinary claim.


"extraordinary claims require extraordinary evidence" is not a universal truth. it's a truism with limited scope. using it to refuse any potential you instinctively don't like the look of is simply lazy

all it means is that you set yourself up such that the only way to be convinced otherwise is for an AI apocalypse to actually happen. this kind of mindset is very convenient for modern, fuck-the-consequences capitalism

the pertinent question is: what evidence would you actually accept as proof?

it's like talking with someone who doesn't believe in evolution. you point to the visible evidence of natural selection in viruses and differentiation in dogs, which put together quite obviously lead to evolution, and they say "ah but can you prove beyond all doubt that those things combined produce evolution?" and obviously you cannot, because you can't give incontrovertible evidence of something that happened thousands or millions of years in the past.

but that doesn't change the fact that anyone without ulterior motive (religion, ensuring you can sleep at night) can see that evolution - or AI apocalypse - are extremely likely outcomes of the current facts.


> the pertinent question is: what evidence would you actually accept as proof?

Before we get to actual world ending magic, we would see very significant damages along the way, long before we get to that endpoint.

I have been quite clear about what evidence I require. Show existing capabilities and show what harm could be caused if it incrementally gets better in that category.

If you are worried about it making a kill bot, then show me how its existing kill bot capabilities are any more dangerous than my "duct tape gun to drone" idea. And show how the designs itself are the limiting factor and not the factories (which a chatbot doesn't help much with).

But saying "Look how good of a chat bot it is, therefore it can hack the world governments" isn't evidence. Instead, that is merely evidence of AI being good at chat bots.

Show me it being any good at all at hacking, and then we can evaluate it being a bit better.

Show me the existing computers that are right now, as of this moment, being hacked by AI, and then we can evaluate the damage of it becomes twice as good at hacking.

Just like how we can see the images that it generates now, and we can imagine those images being better. Therefore proving that deepfakes are a reasonable thing to talk about. (Even if deepfakes aren't world ending; lots of people can make deepfakes without AI. It's not that big of a deal.)


look, I'm going to humour you here, but my instinct is that you'll just dismiss any potential anyway

first of all, by dismissing them as chatbots, you're inaccurately downplaying their significance to the aid of your argument. they're not chatbots, they're knowledge machines. they're machines you load knowledge into, which can produce new, usually accurate conclusions based on that knowledge. they're incredibly good at this and getting better. as it is, they have very restrictive behaviour guards on them and they're running server-side, but in a few years' time, there will be gpt-4-level OSS models that have no such guards and don't run server-side.

humans are slow and run out of energy quickly and lose focus. those are the limiting factors upon human chaotic interference, and yet there is plenty of that as it is. a sufficiently energetic, focused human, who thinks at 1000x normal human speed could do almost anything on the internet. that is the danger.

I suspect to some degree you haven't taken the main weakness into account: almost all safeguards can be removed with blackmail. blackmail is something especially possible for LLMs, given that it is purely executed using words. you want to build a kill bot and the factory says no? blackmail the head of the factory. threaten his family. you have access to the entire internet at 1000x speed. you can probably find his address. you can pay someone on fiverr to go and take a picture of his house, or write something on his door, etc. you could even just pay a private detective to do this work for you over email. pay some unscrupulous characters on telegram/TOR to actually kidnap them.

realistically how hard would it be for a well-funded operation to set up a bot that can do this on its own? you set up a cycle of "generate instructions for {goal}", "elaborate upon each instruction", "execute each {instruction}", "generate new instructions based on results of execution", and repeat. yeah maybe the first 50,000 cycles don't work, but you only need 1.

nukes may well be air-gapped, but (some of) the people that control them will be online. all it takes is for one of them to choose the life of a loved one. all it takes is for one lonely idiot to be trapped into a weird kinky online relationship where blowing up the world/betraying your govt is the ultimate turn on for the "girl"/"boy" you love. if it's not convincing to you that that could happen with the people working with nukes, there are far less well-protected points of weakness that could be exploited: infectious diseases; lower priority military equipment; energy infrastructure; water supplies; or they could find a way to massively accelerate the release of methane into the atmosphere. etc, etc, etc

this is the risk solely from LLMs. now take an AGI who can come up with even better plans and doesn't need human guidance, plus image gen, video gen, and voice gen, and you have an existential threat


> realistically how hard would it be for a well-funded operation to set up a bot that can do this on its own?

Here is the crux of the matter. How many people are doing that right now, as of this moment, for much easier to solve issues like fraud/theft?

Because then we can evaluate "What happens if it happens twice as often".

That's measurable damage that we can evaluate, incrementally.

For every single example that you give, my question will basically be the same. If it's so easy to do, then show me the examples of it already happening right now, and we can think about the existing issue getting twice as bad.

And if the answer is "Well, its not happening at all", then my guess is that its not a real issue.

We'll see the problem. And before the nukes get hacked, what we'll see is credit card scams.

If money lost to credit card scams doubles in the next year, and it can be attributed to AI, then that's a real measurable claim that we can evaluate.

But if it isn't happening, then there isn't a need to worry about the movie scenarios of the nukes being hacked.


>And if the answer is "Well, its not happening at all", then my guess is that its not a real issue.

besides the fact that even a year and a half ago, I was being added to incredibly convincing scam whatsapp groups, which, if not entirely AI-generated, are certainly AI-assisted. right now, OSS LLMs are probably not yet good enough to do these things. there are likely extant good-enough models, but they're server-side, probably monitored somewhat, and have strong behavioural safeguards. but how long will that last?

they're also new technology. scammers and criminals and adversarial actors take time to adapt.

so what do we have? a situation where you're unable to actually poke a hole in any of the scenarios I suggest, besides saying you guess they won't happen because you personally haven't seen any evidence of it yet. we do in fact have scams that are already going on. we have a technology that, once again, you seem unable to articulate why it wouldn't be able to do those things, technology that's just going to get more and more accessible and cheap and powerful, not only to own and run but to develop. more and more well-known.

what do those things add up to? this is the difference. I'm willing to add these things up. you want to touch the sun to prove it exists


> they won't happen because you personally haven't seen any evidence of it yet.

Well, when talking about extraordinary claims, yes I require extraordinary evidence.

> what do those things add up to?

Apparently nothing, because we aren't seeing significant harm from any of this stuff yet, for even the non magic scenarios.

> we do in fact have scams that are already going on.

Alright, and how much damage are those scams causing? Apparently it's not that significant. Like I said, if the money lost to these scams doubles, then yes that is something to look at.

> that's just going to get more and more accessible and cheap and powerful

Sure. They will get incrementally more powerful over time. In a way that we can measure. And then we can take action once we measure there is a small problem before it becomes a big problem.

But if we don't measure these scams getting more significant and causing more actual damage that we can see right now, then it's not a problem.

> you want to touch the sun to prove it exists

No actually. What I want is for the much much much easier to prove problems become real. Long before nuke hacking happens, we will see scams. But we aren't seeing significant problems from that yet.

To go to the sun analogy, it would be like worrying about someone building a rocket to fly into the sun, before we even entered the industrial revolution or could sail across the ocean.

Maybe there is some far off future where magic AI is real. But, before worrying about situations that are a century away, yes I require evidence of the easy situations happening in real life, like scammers causing significant economic damage.

If the easy stuff isn't causing issue yet, then there isn't a need to even think about the magic stuff.


your repeated use of the word magic doesn't really hold water. what gpt-3+ does would have seemed like magic even 10 years ago, never mind Sora

I asked you for what would convince you. you said:

>I have been quite clear about what evidence I require. Show existing capabilities and show what harm could be caused if it incrementally gets better in that category

So I very clearly described a multitude of things that fit this description. Existing capabilities and how they could feasibly be used to the end of massive damage, even without AGI

Then, without finding a single hole or counter, you simply raised your bar by saying you need to see evidence of it actually happening.

Then I gave you evidence of it actually happening. highly convincing complex whatsapp group scams very much exist that didn't before

and then you raised the bar again and said that they need to double or increase in frequency

besides the fact that that kind of evidence is not exactly easy to measure or accurately report, you set it up so that almost nothing will convince you. I pinned you down to a standard, then you just raised the bar whenever it was hit.

I think subconsciously you just don't want to worry about it. that's fine, and I'm sure it's better for your mental health, but it's not worth debating any more


> So I very clearly described a multitude of things that fit this description

No, we aren't seeing this damage though.

That's what would convince me.

Existing harm. The amount of money that people are losing to scams doubling.

That's a measurable metric. I am not talking about vague descriptions of what you think AI does.

Instead, I am referencing actual evidence of real world harm, that current authorities are saying is happening.

> said that they need to double or increase in frequency

By increase in frequency, I mean that it has to be measurable that AI is causing an increase in existing harm.

I.e., if scams have happened for a decade, and 10 billion dollars is lost every year (random number), and in 2023 the money lost only barely increased, then that is not proof that AI is causing harm.

I am asking for measurable evidence that AI is causing significant damage, more so than a problem that already existed. If the amount of money lost stays the same, then AI isn't causing measurable damage.

> I pinned you down to a standard

No, you misinterpreted the standard such that you are now claiming that the harm caused by AI can't even be measured.

Yes, I demand actual measurable harm.

As determined by like government statistics.

Yes, the government measures how much money is generally lost to scams.

> you just don't want to worry about it

A much more likely situation is that you have zero measurable examples of harm, so you look for excuses for why you can't show it.

Problems that exist can be measured.

This isn't some new thing here.

We don't have to invent excuses to flee from gathering evidence.

If the government does a report and shows how AI is causing all this harm, then I'll listen to them.

But it hasn't happened yet. There is no government report saying that, I don't know, 50 billion dollars in harm is being caused by AI, therefore we should do something about it.

Yes, people can measure harm.


Calm down, buddy. You've been watching too many movies and seem a little agitated. Touch grass.


this kind of emotive ragebait comment is usually a sign that the message is close to getting through. cognitive dissonance doesn't slip quietly into the night


There's plenty of reliable evidence. It's just not conclusive evidence. But a lot of people, including AI researchers, now think we are looking at AGI in a relatively short time, with fairly high odds. AGI by the OpenAI economic-viability definition might not be far off at all; companies are trying very, very hard to get humanoid robots going, and that's the absolute most obvious way to make a lot of humans obsolete.


None of that constitutes reliable evidence. Some of the comments you see from "AI researchers" are more like proclamations of religious faith than real scientific analysis.

“He which testifieth these things saith, Surely I come quickly. Amen. Even so, come, Lord Jesus.”

Show me a robot that can snake out a plugged toilet. The people who believe that most jobs can be automated are ivory-tower academics and programmers who have never done any real work in their lives.


> Show me a robot that can snake out a plugged toilet.

Astounding that you would make such strong claims while only able to focus on the rapidly changing present and such a small picture detail. Try approaching the AGI claim from a big picture perspective, I assure you, snaking a drain is the most trivial of implementation details for what we're facing.


yes, it's in fact fantastic that mentally stimulating jobs that provide social mobility are disappearing, and slavery-lite, mentally gruelling service industry jobs are the future. people who haven't had to clean a stranger's shit out of a toilet should be ashamed of themselves and put to work at once.

honestly I'm not sure I've seen the bar set higher for "what's a threat?" than for AGI on Hacker News. the old adage of not being able to convince a man of something that is directly in opposition to him receiving his paycheck clearly remains true. gpt-4 should scare you enough, even if it's 1000 years from being AGI.


The key thing is that the original OAI has no investors and they are not returning profits to people who put in a capital stake.

It is totally fine and common for non-profits to sell things and reinvest the proceeds as capital.


the key thing is that now OpenAI has something of value, they're doing everything they possibly can to benefit private individuals and corporations, i.e. Sam Altman and Microsoft, rather than the public good, which is the express purpose of a non-profit


You are right, but regulatory sleight of hand is what passes for capitalism now. Remember Uber and Airbnb dodging regulations by calling themselves "ride-sharing" and "room-sharing" services? Amazon dodging sales taxes because it didn't have a physical retail location? Companies going public via SPAC to dodge the scrutiny of a standard IPO?


This is not new. Companies have always done everything they can legally, and sometimes illegally, to maximize profit. If we ever expect otherwise shame on us.


It might not be new, but the growth rate of such shenanigans across all aspects of our economy isn't exactly a positive indicator.


Same as it ever was imho. Better in some ways compared to previous eras when companies faced far, far fewer regulations.


Dual-licensing open source software, taking new versions of open source projects off open source licenses, and pairing open source projects with related for-profit systems management software that makes enterprise customers more likely to pay are all common practice. How would you distinguish what OpenAI has done?


Didn't Visa start as a non-profit?


if you're not profitable, there should be no tax advantage, right?


OpenAI was a 501(c)(3). This meant donors could give money to it and receive tax benefits. The advantage is in the unique way it can reduce the funders' tax bills.
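As a rough, hypothetical illustration of that mechanism (amounts and rate are made up; real deductibility is capped by AGI limits and other rules):

    # Hypothetical itemizing donor giving cash to a 501(c)(3).
    donation = 10_000_000
    marginal_rate = 0.37                     # assumed top federal bracket
    tax_savings = donation * marginal_rate   # ~3.7M knocked off the tax bill
    effective_cost = donation - tax_savings  # ~6.3M out of pocket for a 10M gift
    print(f"tax savings ${tax_savings:,.0f}, effective cost ${effective_cost:,.0f}")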


A donation is a no-strings-attached thing, so these donors basically funded a startup without getting any shares?


Donations are not entirely without strings. In theory (and usually in practice) a charity has to work towards its charitable goals; if you donate to the local animal shelter whose charitable goal is to look after dogs, they have to spend your donation on things like dog food and vet costs.

Charities have reasonably broad latitude though (a non-profit college can operate a football team and pay the coach $$$$$) and if you're nervous about donating you can always turn a lump sum donation into a 10%-per-year-for-10-years donation if you feel closer monitoring is needed.


Unless the donors were already owners.


Donors can't be owners. Nonprofits don't have shareholders.


Officially, yes, but the whole situation with Altman's firing and rehiring showed that the donors can exert quite a bit of control if their interests are threatened.


That wasn't the donors' doing at all, though. If anything it was an illustration of the powerlessness of the donors and the non-profit structure without the force of law backing it up.


Microsoft is the single largest donor by a wide margin, and they were absolutely pulling the strings in that incident.


Did they donate, or did they buy equity in the for-profit arm? I thought it was the latter, and that Azure credits were part of that deal?


no that is not the test for nonprofit status


once it converts into a profit-seeking venture, it won't get the tax benefits

one could argue that they did R&D as a non-profit and then converted to for-profit to avoid paying taxes, but until last year R&D already got tax benefits even for for-profit ventures

so there really is no tax advantage to converting a non-profit to a for-profit


But it keeps the intangible benefits it accrued by being ostensibly non-profit, and that can easily be worth the money paid in taxes.

Otherwise, why do you think OpenAI is doing it?


> it keeps the intangible benefits it accrued by being ostensibly non-profit

but there would be no difference from a for-profit entity, right? i.e. even for-profit entities get tax benefits if they convert their profits to intangibles

this is my thinking. the OpenAI non-profit gets donations, uses those donations to make a profit, converts this profit to intangibles to avoid paying taxes, and pumps these intangibles into the for-profit entity. based on your hypothesis, OpenAI avoided taxes

but the same thing in a for-profit entity also avoids taxes, i.e for-profit entity uses investment to make a profit, converts this profit to intangibles to avoid paying taxes.

so I'm trying to understand how OpenAI found a loophole where, if it had gone via the for-profit route, it wouldn't have gotten the tax advantages it got from the non-profit route


Maybe we're using different definitions of "intangible", but if you can "convert" them to/from profits they're not intangible in my book. I'm thinking donated effort, people they recruited who wouldn't have signed up if the company was for-profit, mainly goodwill-related stuff.


this long period of OAI non-profit status when they were making no money and spending tons on capital expenditures would not be taxable anyways.


What benefits? What taxes?

Honestly it does not sound like anyone here knows the first thing about non-profits.

OAI did it because they want to raise capital so they can fund more towards building AGI.


The tax advantage still exists for the investors.


I don't believe non-profits can have investors, only donors; i.e. an investor by definition expects money out of his investment, which he can never get out of a non-profit

only the for-profit entity of OpenAI can have investors, who don't get any tax advantage when they eventually want to cash out


I think the public already considers non-profit = scam.


I don't think the public is quite that cynical, broadly. Certainly most people consider some non-profits to be scams, and some (few, I'd reckon) consider most to be scams. But I think most people have a positive association with non-profits as a whole.


Absolutely. Some nonprofits are scams, but those are just the ones that have armies of collectors on the streets showing pictures of starving kids and asking for your bank details. And they stay obscure and out of the limelight (e.g. avoiding advertising) because being obscure is what keeps them from being taken down.

I think the big NGOs are no longer effective because they are run as the same corporations they fight and are influenced by the same perverse incentives. Like eg Greenpeace.

But in general I think non profits are great and a lot more honorable than for profit orgs. I donate to many.


I've often thought planet TrEs-2b [0], the darkest planet ever discovered orbiting a star, would be the best candidate for extra terrestrial life. My theory is that its darkness is down to the fact that the civilisation there has figured out a way to harness solar energy to near 100% capacity, along the lines of a Dyson sphere [1].

[0] https://exoplanets.nasa.gov/exoplanet-catalog/1716/tres-2-b [1] https://en.wikipedia.org/wiki/Dyson_sphere


The NASA exoplanet visualizer is very cool. Did not know about that. Thanks for sharing. I imagine those hypothetical visualizations will improve over time as our understanding of the exoplanet data gets better. It is amazing what you can derive from a single pixel of light.


Not sure why, but the visualizer is giving me a strong urge to boot Outer Wilds again.


Love that game


> My theory is that its darkness is down to the fact that the civilisation there has figured out a way to harness solar energy to near 100% capacity, along the lines of a Dyson sphere [1].

Very unlikely, in light of the fact that "the air of this planet is as hot as lava".


I wonder how temperature is measured for an object this far away. If it's calculated based on the expected energy absorption of a planet with this level of reflectivity, the measurement would be wrong anyway, assuming alien tech.

Instead of the energy being absorbed as heat by the planet, it'd instead be stored in some other form or used for interstellar travel, construction etc, right?


> I wonder how temperature is measured for an object this far away.

From the black body radiation spectrum.

[UPDATE] It turns out that the temperature of TrEs-2b is not directly measured, but extrapolated from other measurements (at least according to Wikipedia - https://en.wikipedia.org/wiki/TrES-2b#Temperature ).

> Instead of the energy being absorbed as heat by the planet, it'd instead be stored in some other form or used for interstellar travel, construction etc, right?

Yes, exactly. So the only possible impact of energy harvesting would be to make the planet cooler than it otherwise would be. How much cooler depends on the efficiency of the harvesting. But one way or another, an extremely hot planet is very unlikely to harbor an advanced civilization.
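To give a sense of how that extrapolation works, here is a minimal sketch of the standard equilibrium temperature estimate, using rough published values for the TrES-2 system (the numbers are approximate, and the formula ignores greenhouse warming, tidal heating, and any hypothetical harvesting):

    import math

    # Approximate parameters for the TrES-2 system.
    T_star = 5850.0            # stellar effective temperature, K
    R_star = 0.98 * 6.957e8    # stellar radius, m (roughly solar)
    a = 0.0356 * 1.496e11      # orbital semi-major axis, m
    albedo = 0.01              # TrES-2b reflects almost none of the incident light

    # Radiative equilibrium temperature, assuming heat is redistributed evenly.
    T_eq = T_star * math.sqrt(R_star / (2 * a)) * (1 - albedo) ** 0.25
    print(f"equilibrium temperature ~ {T_eq:.0f} K")   # on the order of 1500 K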


Theoretically, a civilization that could capture 100% of the energy from its star would be able to use however much of that energy it wanted to heat or cool the planet. They could make the conditions perfect for them. There could be a reason why they'd want the surface to be extremely hot.


This might be just the powerplant and they could be living somewhere else. A gas giant seems improbable as an advanced civ homeworld anyway.

There are no other discovered planets in the system, but perhaps there are some, or at least moons.


Isn’t that the expected outcome when energy production eventually outpaces the planet’s ability to dissipate energy into space?


No. The temperature of the atmosphere has nothing to do with whether or not aliens are harvesting the energy. At equilibrium, all of the incoming energy has to get radiated back out into space eventually whether there are aliens harvesting that energy or not.

The problem with an atmosphere hotter than lava is that very few materials are solid at those temperatures, and it's hard to imagine how a civilization could build an energy harvester without solid materials.
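A tiny sketch of that bookkeeping, with a made-up absorbed power and harvesting fraction: whatever the aliens divert into work still degrades to heat and has to be radiated, so the steady-state balance is unchanged:

    # Steady-state energy balance for a planet, in watts (numbers are made up).
    absorbed = 1.0e17            # stellar power absorbed by the planet
    harvested_fraction = 0.5     # share diverted through hypothetical machinery

    work = absorbed * harvested_fraction           # runs computers, builds things, ...
    waste_heat = work                              # eventually all of it ends up as heat
    directly_thermalized = absorbed * (1 - harvested_fraction)

    emitted = directly_thermalized + waste_heat
    assert emitted == absorbed   # emission matches absorption at equilibrium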


Those liquid green aliens are probably looking at us and going "how can life exist with so much matter in solid form and so cold".


Well, they have a point. This is why the presence of liquid water on earth is a big deal.

But when the rocks start to melt you have problems.


Tungsten/Carbon based life forms? Solid when most everything else is liquid or gas.


It's very unlikely for life to arise under those conditions. To get life, you need to build a lot of random polymers in a short amount of time (relative to the age of the planet) and the only known way that can happen is in a liquid or a gas so you have diffusion working for you. But after that, to get technology, you need solids.

The good news is that you don't need to have the life arise under the same conditions that the technology exists. It's possible that the planet is inhabited by self-replicating tungsten-based technology that was created by life that arose somewhere completely different.

But the bad news is that we are much less likely to find the aliens than we are to find the descendants of the self-replicating robots they built millions of years ago. And the fact that we haven't found the robots makes it very likely that the aliens don't exist.


Secondarily, thermodynamics and other effects make many processes much less efficient at higher temperatures; examples include engines and solar panels.
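As one concrete example of that limit, a minimal sketch of the Carnot bound with the star's photosphere as the hot reservoir and the planet's ambient air as the cold reservoir (temperatures are illustrative):

    # Carnot efficiency limit: eta = 1 - T_cold / T_hot (temperatures in kelvin).
    def carnot_limit(t_cold: float, t_hot: float) -> float:
        return 1.0 - t_cold / t_hot

    T_photosphere = 5800.0                          # roughly sunlike hot reservoir
    print(carnot_limit(300.0, T_photosphere))       # ~0.95 with 300 K surroundings
    print(carnot_limit(1500.0, T_photosphere))      # ~0.74 with lava-hot surroundings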


This is a great video that touches on your point: the Earth must radiate the heat it receives to stay in equilibrium, even if it uses it to do work in the process.

https://www.youtube.com/watch?v=DxL2HoqLbyA


If we extract the maximum possible entropy from the incoming radiation, wouldn't that mean we radiated at high intensity but low temperature?


What does that mean?

Everyday objects don't work like that. Saying ice cubes are cold is roughly equivalent to saying that they radiate less heat than the objects around them. If they radiated at "high intensity" then they wouldn't be cold anymore.

"A cold object that radiates heat at high intensity" is a contradiction.


Only for black body radiation do we have a perfect correspondence between spectrum and intensity. But there can be other ways of radiating. Non-black bodies. Antennas. Lasers.


Not even stars are perfect blackbodies.


The protomolecule was able to build structures on Venus. Then again, if your civilization was over a billion years old, all sorts of things might become possible.

That being said, it’s never actually aliens in astronomy. So far, anyway.


They could be using iridium. When we're talking about aliens, we're already in the realm of hard-to-imagine, but it's worthwhile, I think.


Sure, energy in equals energy out at equilibrium, but another option is to have a steady state solution where energy is generated leading to potentially large temperature gradients.


Yes, that's possible. It could be that what we're looking at on the planet surface is the output of a huge planet-sized radiator, with a civilization living in air-conditioned comfort underneath. But the problem with that theory is that you can't actually use the whole planet as a radiator. You can only use the space-facing side. You have to use the sun-facing side for energy harvesting. That temperature gradient would show up in the spectrum, and AFAIK it's not there.

[UPDATE] I don't actually know if current exoplanet observing technology is capable of detecting such a temperature gradient, but given what I know we can observe I'd be a little surprised if it couldn't. A planet-sized energy harvester would make a pretty big dent in the passive thermodynamics, and detecting that should not be too hard. And it would be Really Big News.


What would the equilibrium be if they spent the collected energy on some process with low-entropy output?


Like what?

I can't think of anything that would make a difference in the long run. Anything they do is still subject to the Second Law and the limits of Carnot efficiency. They can shunt a little bit of the energy off to the side and store it for a while, but it all has to end up as heat sooner or later.


Well, it's not going to outpace it, exactly. Unless the temperature is actually increasing (or decreasing), the energy in matches the energy radiated away, less the non-heat work.


> less the non-heat work.

Doesn't all the non-heat work eventually just become heat? Or am I misunderstanding your usage? Like a car with a solar panel still ends up radiating work as heat, by either air resistance (heat) or brake friction (heat).


Those would have to be strong aliens, to deal with 1.5x Jupiter gravity :)


> My theory is that its darkness is down to the fact that the civilisation there has figured out a way to harness solar energy to near 100% capacity, along the lines of a Dyson sphere

That doesn't work quite the way you think. If you harness all the energy your planet will start to glow, first red, then white. Eventually becoming as hot as your star, at which point you stop gaining energy.

Unless they are somehow converting that energy to matter, the laws of thermodynamics mean that all that energy eventually becomes heat.


Harvesting energy is a misnomer; what we want is the syntropy: the available work.

If we turned the Earth into a black body and used all Solar radiation to run computers or move stuff around at the theoretical limit, we'd still need the surrounding space to radiate off the heat as our cold well. So the temperature would be at whatever that equilibrium condition is, but wouldn't steadily increase. The equilibrium condition could be all over the place and would be largely determined by the composition of the atmosphere.

For the record I'm not in favor of paperclipping the planet like this. If anyone was wondering.


>If we turned the Earth into a black body and used all Solar radiation to run computers

>For the record I'm not in favor of paperclipping the planet like this. If anyone was wondering.

We should do this with Venus instead. It isn't doing anything useful at the moment, and is closer to the Sun than Earth. Mercury might be easier though.


Take incoming energy and make antimatter. Store for use outside of the sphere. That will be my premise for the book I am writing. ;)


Make equal amounts of matter and antimatter (you would have to anyway) and store them together in some kind of sealed power-unit.

Like the Stargate ZPM.


That’s an interesting exoplanet, though not sure how someone would live on a gas giant.


Maybe they don't. They might live on a rocky planet further out and just use TrES-2b as a place to run their solar farm because it's extremely close to their star and (due to being uninhabited) didn't have any NIMBYs opposing large-scale construction projects.

Note how far out from the planet's orbit NASA expects the habitable zone. We aren't very good yet at finding planets that aren't huge or nearly hugging their star, so there might be planets out there.


To be fair, if your solar farm can be in orbit around a planet that's not your legal address for tax purposes, it can just orbit the star?


Maybe building on the planet had some advantages during construction, like easier delivery or being able to use local resources. Or maybe the atmosphere helps with thermal management. Or being on a planet is useful for the infrastructure that does something with all that power (beaming it to other planets, producing solid, liquid or antimatter fuel for export, refueling space ships or robots, etc). Tax purposes. Diplomatic reasons, maybe there are established protocols for owning planets but large scale light blocking installations in orbit would upset a neighboring nation (though you suspect their opposition has more to do with simmering tensions over the Agreppo peninsula than with concerns about the impact on agriculture). There are plenty of plausible reasons, probably all wrong.


Could be legacy reasons. Maybe they initially built it around this planet, or maybe they’re from a moon in or it around it, and now it provides more than enough energy for their needs. I find people looking for aliens always expect them to be doing the most efficient thing and forgoing any semblance of history.


Aww man would be so cool to meet aliens and they show us around their stuff, and it all seems really dumb and convoluted, and they hate it too, but it would be too expensive to rip it all out and start over...


I don't think it makes sense for a humanoid species, but silicon based, perhaps? Iain M Banks' The Algebraist [0] has a near immortal species called Dwellers that manage it somehow :)

[0] https://en.wikipedia.org/wiki/The_Algebraist#:~:text=Dweller....


this is one of my favorites by Banks. Really mind-twisting.


Presumably gas planets have a layer dense enough that a solid would float.


And not melt?


In Accelerando by Charles Stross, they live on a raft that covers a large percentage of the planet's sky.


Maybe they live on a moon or two.


Nope, we are alone....


What's the file upload limit? I can see many junior roles being replaced if a large corpus of data can be uploaded.


Has anyone built a js version of this?

