Hacker News | missedthecue's comments

The supply is relatively fixed. The number of NFL seats has grown somewhat since 1990 because there are a few new teams, the season is a little longer, and lots of teams have built new stadiums with a little bit more capacity. But the supply of annual NFL seats is still similar to what it used to be.

But meanwhile, top-quartile-income Americans earn way more (35% more, inflation-adjusted), and there are 40% more of them than in 1990. This is why the average NFL seat is $170 and the NFL sold out almost every game last year.

You'll notice this in all kinds of areas. Airlines are filling their planes with a greater and greater percentage of first-class seating, for example. The top 25% of American earners just have an insane amount of spending power.


"AI can hallucinate on any data you feed it, and it's been proven that it doesn't summarize, but rather abridges and abbreviates data."

Have you ever met a human? I think one of the biggest reasons people become bearish on AI is that their measure of whether it's good/useful is that it needs to be absolutely perfect, rather than simply superior to human effort.


> one of the biggest reasons people become bearish on AI is that their measure of whether it's good/useful is that it needs to be absolutely perfect, rather than simply superior to human effort.

Meanwhile people bullish on AI don't care if it's perfect or even vastly inferior to human effort, they just want it to be less expensive/troublesome and easier to control than a human would be. Plenty of people would be fine knowing that AI fucks up regularly and ruins other people's lives in the process as long as in the end their profits go up or they can still get what they want out of it.


I'm not saying it needs to be perfect, but the guy in this article is putting a lot of blind faith in an algorithm that's proven time and time again to make things up.

The reason I have become "bearish" on AI is that I see people repeatedly falling into the trap of believing LLMs are intelligent and actively thinking, rather than just very, very fine-tuned random noise. We should pay more attention to the A in AI.


> putting a lot of blind faith in an algorithm that's proven time and time again to make things up

Don't be ridiculous. Our entire system of criminal justice relies HEAVILY on the eyewitness testimony of humans, which has been demonstrated time and again to be entirely unreliable. Innocents routinely rot in prison and criminals routinely go free because the human brain is much better at hallucinating than any SOTA LLM.

I can think of no more critical institution that ought to require fidelity of information than criminal justice, and yet we accept extreme levels of hallucination even there.

This argument is tired, played out, and laughable on its face. Human honesty and memory reliability are a disgrace, and if you wish to score points against LLMs, comparing their hallucination rates to those of humans is likely going to result in exactly the opposite conclusion that you intend others to draw.


> the human brain is much better at hallucinating than any SOTA LLM

Aren't the models trained on human content and human intervention? If humans hallucinated that content, and LLMs then hallucinate even slightly on top of that fallible human content, wouldn't the LLMs' hallucinations still be, even if only slightly, greater than humans'? Or am I missing something here, where LLMs somehow correct the original human hallucinations and thus produce less hallucinated content?


It's ridiculous and laughable to say LLMs hallucinate because the justice system isn't flawless?

That's a cognitive leap.


Right now AI is inferior, not superior, to human effort. That's precisely why people are bearish on it.

I don't think that's obvious. In 20 minutes, for example, deep research can produce a report on a given topic that's much better than what an analyst can produce in a day or two. It's literally cheaper, better, and faster than human effort.

Faster? Yes. Cheaper? Probably, but you need to amortize in all the infrastructure and training and energy costs. Better? Lol no.

> but you need to amortize in all the infrastructure and training and energy costs

The average American human consumes 232 kWh of all-in energy (food, transport, HVAC, construction, services, etc.) daily.

If humans want to get into a competition over lower energy input per unit of cognitive output, I doubt you'd like the result.
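
To put rough numbers on that (a back-of-envelope sketch; the per-query figure is an often-cited public estimate, not a measurement, and it varies enormously by model and workload):

    # Back-of-envelope: one person-day of all-in energy vs. LLM queries.
    # Both inputs are rough assumptions for illustration only.
    HUMAN_KWH_PER_DAY = 232   # all-in US per-capita figure from above
    LLM_WH_PER_QUERY = 0.3    # often-cited rough estimate for one chat query

    queries_per_person_day = (HUMAN_KWH_PER_DAY * 1000) / LLM_WH_PER_QUERY
    print(f"{queries_per_person_day:,.0f} queries per person-day of energy")
    # ~773,333 queries under these assumptions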

> Better? Lol no

The "IQ equivalent" of the current SOTA models (Opus 4.5, Gemini 3 Pro, GPT 5.2, Grok 4.1) is already a full 1SD above the human mean.

Nations and civilizations have perished or been conquered all throughout history because they underestimated and laughed off the relative strength of their rivals. By all means, keep doing this, but know the risks.


What do you mean by “better” in this context?

It synthesizes a more comprehensive report, using more sources, more varied sources, more data, and broader insights than a human analyst can produce in 1-2 days of research and writing.

I'm not confused about this. If you don't agree, I will assume it's probably because you've never employed a human to do similar work in the past. Because it's not particularly close. It's night and day.

*Note that I'm not saying 20 minutes of deep research beats 9 months of investigative journalism with private interviews with primary sources or anything like that. I'm talking about asking an analyst on your team to do a deep dive into XYZ and have something on your desk tomorrow EOD.


Weird, I'm an attorney and no one is getting rid of associates in order to have LLMs do the research, not least because LLMs actually hallucinate sources (something associates won't do). I can't imagine that being significantly different in other domains.

> I can't imagine that being significantly different in other domains.

It’s not. There is no industry where AI performs “better” than humans reliably without torturing the meaning of the word (for example, OP says AI is better at analysis iff the act of analysis does not include any form of communication to find or clarify information from primary sources)


> It synthesizes a more comprehensive report, using more sources, more varied sources, more data, and broader insights than a human analyst can produce in 1-2 days of research and writing.

> Note that I'm not saying 20 minutes of deep research beats 9 months of investigative journalism with private interviews with primary sources or anything like that.

I like the idea that AI is objectively better at doing analysis if you simply assume that it takes a person nine months to make a phone call


It has more words strung together in seemingly correct sentences, so it's long enough that his boss won't actually read it closely enough to proof it.

"It’s verifiable. The books either balance or they don’t. Ledgers either reconcile or they don’t. There’s almost always a “ground truth” to compare against (bank feeds, statements, prior periods). It’s boring and repetitive. Same vendors, same categories, same patterns every month. Humans hate this work. Software loves it."

These are all true statements, but all of those things are solvable with classic software. Quickbooks has done this for decades now. The parts of accounting that aren't solvable with classic computing are generally also not solvable by adding LLMs into the mix.
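
To make "solvable with classic software" concrete, here's a minimal deterministic reconciliation sketch in Python. The (date, amount_cents, reference) record shape is hypothetical, and real bank feeds need normalization first, but the point stands: matching a ledger against a bank feed is exact multiset arithmetic with no model in the loop.

    # Deterministic reconciliation: exact multiset matching, no guessing.
    # The (date, amount_cents, reference) record shape is hypothetical.
    from collections import Counter

    def reconcile(ledger, bank_feed):
        l, b = Counter(ledger), Counter(bank_feed)
        return l & b, l - b, b - l  # matched, ledger-only, bank-only

    ledger = [("2024-03-01", -4999, "ACME-118"), ("2024-03-02", 120000, "CLIENT-A")]
    bank = [("2024-03-01", -4999, "ACME-118")]

    matched, ledger_only, bank_only = reconcile(ledger, bank)
    print(dict(ledger_only))  # {('2024-03-02', 120000, 'CLIENT-A'): 1}

Either every entry matches exactly or the discrepancies fall out as data; there's nothing for an LLM to improve on here.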


This conviction doesn't seem to acknowledge the problem at scale. Decades of great UI development will still leave out edge cases that users will need to use the tool for. This happens fundamentally because the people who need to use the tools are not the people who make them; they rarely even talk to each other (instead, users are "studied" via analytics).

When /humans/ bring up the idea of integrating LLMs into UIs, I think most of the time the sentiment comes from legitimate frustration with how the UI is currently designed. To be clear, this is a very different thing from a company shimming Copilot into the UI, because the way these companies use LLMs is by delegating tasks away from users rather than improving the existing interfaces through which users complete those tasks themselves. There are /decades/ of HCI research on adaptive interfaces that address this, dating back to expert systems and long before LLMs -- it's more relevant than ever, yet in most implementations it's all going out the window!

My experience with accounting ^H^H^H^H^H^H^H^H^H^H bookkeeping / LLMs in general resonates with this. In GnuCash I wanted to bulk re-organize some transactions, but I couldn't find a way to do it quickly through the UI. All the books are kept in a SQL DB, and I didn't want to study the schema by hand. I decided to experiment by getting the LLM to emit a Python script that would make the appropriate manipulations to the DB (a sketch of that kind of script is below). This seemed to take the best from all worlds -- the script was relatively straightforward to verify, and even though I used a closed-source model, it had no access to the DB that contained the transactions.

Sure, other tools may have solved this problem directly. But again, the point isn't to expect someone to make a great tool for you, but to have a tool help you make it better for you. Given the verifiability, maybe this /is/ in fact one of the best places for this.
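
For the curious, here's a sketch of the kind of script involved, assuming a GnuCash SQLite book. The table and column names (accounts, splits, account_guid) are how I remember GnuCash's SQL backend; verify them against your own schema, and only ever run something like this on a copy of the book.

    # Bulk re-categorization against a COPY of a GnuCash SQLite book.
    # ASSUMPTION: 'accounts' and 'splits' tables with 'guid' and
    # 'account_guid' columns, per GnuCash's SQL backend -- check first.
    import sqlite3

    BOOK = "books-copy.gnucash"     # hypothetical path to a copy
    OLD, NEW = "Misc", "Groceries"  # hypothetical leaf account names

    con = sqlite3.connect(BOOK)
    cur = con.cursor()

    def guid_for(name):
        row = cur.execute("SELECT guid FROM accounts WHERE name = ?",
                          (name,)).fetchone()
        assert row, f"no account named {name!r}"
        return row[0]

    cur.execute("UPDATE splits SET account_guid = ? WHERE account_guid = ?",
                (guid_for(NEW), guid_for(OLD)))
    print(f"moved {cur.rowcount} splits")
    con.commit()
    con.close()

The appeal is exactly the verifiability: the whole change is two SELECTs and one UPDATE that you can read before committing.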


They might not be solvable, but you can get a 5-10% improvement on them; unfortunately, you can't ship a new product that is exactly like QuickBooks but 5% better at reconciliation, etc.

LLMs by their inherent nature cannot be relied on to be true and correct, which by coincidence are the only traits that matter in accounting.

If you want better software, then sure, maybe a coding assistant can help you write it faster, but when it comes to actually doing accounting I would not rely on an LLM in any way shape or form any more than I would do so for law.


Bingo! You found the prize! Putting tech that is prone to hallucination in charge of anything that has serious consequences when it's wrong is a terrible idea. You do not want hallucinated payments or receipts, or legal citations. You want these things to be both true and correct, EVERY TIME.

“We had to re-state our financials and amend our taxes because the AI screwed up and we didn’t have anyone who understood accounting look at our books.”

I go the other way and say this is a vast oversimplification of the job by a tech bro doing what tech bros typically do.

I am not an accountant, but for many years I worked adjacent to them as a developer. I got a lot of time to ask questions and I was genuinely curious. Even at the SMB level, accountants don't necessarily always have a "ground truth". There are so many ways to bury financial data that it needs almost constant vigilance. Yes, in theory, there is one ground truth (inputs should balance with what's there), but in practice humans are SHOCKINGLY good at committing accounting fraud. GAAP is another thing.


Compensation for employees is not based solely on revenue. CEOs of major global organizations cost a lot of money.

This site in general has a massive hate boner for any part of a corporate structure that isn't the engineering department. Sales, admin, marketing, legal, HR, etc... all get flak from the HN community for being irredeemably idiotic wastes of space.

"Hacker News commenters are frequently unaware that their use cases and customer preferences do not reflect the average customer demand in the market." - https://news.ycombinator.com/item?id=46192577

There's a reason I put that in my profile. :^)


Sounds like HN users represent an underserved and untapped market and are being rational market actors while discussing their preferences.

One of my favorite examples of this is when HNers insist that if only an auto-manufacturer would make a simple car with tactile buttons and no screen or creature comforts it would sell like hotcakes.

I think those could sell, but you'd have to make the screens a luxury trim item again. Which could honestly happen if vehicle Right to Repair laws happen.

But we're way off topic here. :D


In addition to the AOL mentioned elsewhere -

1. GE took a $22B impairment in 2018.

2. Shell took a $22B write down in 2020.

3. ConocoPhillips incurred a $34B impairment in 2009.


How did they lose $20B on this? Gigafactory was $10B.

Jamie Dimon is lining up his successor


I don't know if any of you have washed soiled clothes by hand, but that's shockingly intensive labor.


No offense, but how is that not obvious by second grade? Don't have a big mouth if you don't have a big stick too. Ireland doesn't have quiet opinions, but a rather big mouth about other nations' foreign policy.


At my school there were a lot of big mouths and no big sticks. Non-armed debate is a thing.


"Armed debate" is a misread. The point of my comment was that there is little sympathy for people that bite the hand that feeds them or talk themselves into situations they don't have the wherewithal to navigate.


Biting the hand that feeds is a nonsense characterization of disagreeing - even loudly - with the person who aids you. We do not own each other, as much as many of us would like to.

>Non-armed debate is a thing

Until your mouth writes a check that your ass can't cash.


In your school. Where the kids were looked after by the teachers.


We can try having a non-armed debate with Putin, but I don’t think it’s going to be very productive.


Not that I'm recommending it, but Putin's regime seems to behave much like the mafia and will get along with people who pay it protection money or ally with it.


You're absolutely right that this immoral principle applies in second grade. But humans (usually) advance from that point rather than stagnating.

I find this position abject, but I'm curious which opinions you're talking about, specifically. Can you elaborate?


Sure, they have very extreme opinions about the Ukraine situation that they make very clear at every EU Parliament meeting.


Can you point me to some examples? I have not followed closely, but it seems that Ireland is on the same page as the other EU states with regard to supporting Ukraine.

I'm also curious about what you consider a "very extreme opinion" to be in this regard.


Here is a specific example where an Irish MEP speaks against sanctions on Russia and against NATO donating any weapons to Ukraine in front of the EU Parliament.

https://www.youtube.com/watch?v=hpieZnTQorQ

Here is a different Irish MEP saying similar things.

https://www.youtube.com/watch?v=qo1tgWr0KXI


They are two MEPs; they can say whatever they want. Sadly, they don't reflect the position of Ireland, and I hope you are not trying to say that Ireland or Europe should abolish their internal democracy.

I say "sadly" because they're perfectly right. Daly: "the more arms you pump into Ukraine, the more the war will be prolonged, and the more Ukrainians will die [...] We will sit down with Russia, there will be a negotiated peace and this organisation should promote it earlier".

She said this three years ago: in the meantime, hundreds of thousands of Ukrainians and Russians have died, Ukraine has lost its territory anyway, we are sitting down with Russia and there is going to be a negotiated peace, and Europe is not part of it because it was never able to promote any diplomacy. Time proved her right on all points.

