Hacker News | boroboro4's comments

It's unclear why the next-token distribution given the context "please pick a random number" couldn't be uniform across all the possible numbers (in the end, it's entirely possible for an LLM to return ten roughly equal logits for the digits 0..9, for example).
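To make that concrete, here's a minimal sketch (not tied to any particular model, and the logit value is arbitrary): a softmax over ten equal logits is exactly the uniform distribution over ten tokens.

```java
// Sketch: equal logits produce a uniform next-token distribution.
public class UniformLogits {
    static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l);
        double sum = 0.0;
        double[] p = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            p[i] = Math.exp(logits[i] - max); // subtract max for numerical stability
            sum += p[i];
        }
        for (int i = 0; i < p.length; i++) p[i] /= sum;
        return p;
    }

    public static void main(String[] args) {
        double[] logits = new double[10];
        java.util.Arrays.fill(logits, 3.7); // any equal value gives the same result
        for (double p : softmax(logits)) System.out.printf("%.3f ", p); // 0.100 each
    }
}
```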


Total US wealth is ~$170T, so obviously it would be enough to cover federal and state government spending for a year (more like 20 years, even).

Even accounting for the obvious issue of wealth cratering in such a hypothetical scenario, in the end it would be enough, because ultimately it's all part of the same economy.


That’s including everyone. The wealth of U.S. billionaires is about $8 trillion total, while the government at all levels spends about $10 trillion annually.


You missed the "total wealth of U.S. billionaires" part. Billionaires own a tiny fraction of total US wealth. Most of the wealth is owned by people like me, who own a house and stocks in a retirement account.


My bad.

However, billionaires don't own a tiny part of US wealth; it's more like 5%-10%. And the top 1% (and the grandparent was talking about rich people) own about 1/3 of US wealth.


The point is that billionaire wealth is not that much compared to the government’s current spending, much less what you’d need to support large numbers of immigrants on welfare (as suggested by OP above).

The top 1% have a lot more, but the cutoff for that is $11 million, and that includes home equity, family farms, etc. The bulk of those people are retired professionals and small business owners. For example, 4% of 75-79 year olds are in the top 1% of wealth. These are rich people, but not the kind of rich that AOC is talking about taxing.

I’m a huge supporter of taxing upper-middle-class people, but we should just tax them instead of playing games with wealth. The top 5%, that is, people making above $260,000 a year, have a combined income of $5.6 trillion a year. They only pay $1.3 trillion in income taxes. Just double that.


While I mostly agree with you, it's worth noting that modern LLMs are trained on 10-30T tokens, which is quite comparable to their size (especially given how compressible the data is).


This one doesn’t even have a warmup phase for Java, which makes the results complete nonsense.

Those benchmarks should just be forbidden for their misleading nature.


How much difference does it make for tiny programs?

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


It's not an issue of warmup time; it's an issue of JIT compilation.

On my server (AMD EPYC 7252):

1) the base time of the Java program from the repo is 3.23s (~2x worse than the one on the linked page, so I assume my CPU is about 2x slower, and the corresponding best C++ result would be ~450ms);

2) if you time from inside the Java program you get 3.17s (so about 60ms of startup overhead);

3) but if you run it 10 times (inside the same Java program) the time drops to 1570ms.

It's still much slower than the C++ version, but it lands between Rust and Go. And this is not me optimizing anything; it's just measuring things correctly.

Update: running the vectorized Java version from the same repo brings the runtime down to 392ms, which is literally the fastest of all the solutions, including C++.

Update 2: I ran the C++ version on the same hardware and it takes 400ms, so I'd say it's fair to call C++ and vectorized Java on par (and given the "allows vectorization" comment in the C++ code, I assume that's the best one can get out of it).
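For reference, here's a minimal sketch of the kind of measurement described in point 3 above: repeating the workload inside one JVM process so the JIT gets a chance to compile the hot path. The `work()` method is a hypothetical stand-in, not the actual benchmark from the repo.

```java
// Sketch: timing repeated runs inside one JVM; early runs include
// interpretation and JIT compilation, later runs reflect compiled code.
public class WarmupTiming {
    // Hypothetical stand-in for the benchmark body (deterministic busy work).
    static long work() {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) sum += i ^ (i >>> 3);
        return sum;
    }

    public static void main(String[] args) {
        long best = Long.MAX_VALUE;
        for (int run = 0; run < 10; run++) {
            long t0 = System.nanoTime();
            long result = work();
            long ms = (System.nanoTime() - t0) / 1_000_000;
            best = Math.min(best, ms);
            System.out.println("run " + run + ": " + ms + " ms (result " + result + ")");
        }
        System.out.println("best: " + best + " ms");
    }
}
```

Printing `result` keeps the JIT from eliminating the loop as dead code; serious measurements would use a harness like JMH, which handles warmup and dead-code elimination for you.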


Sorry, now I remember past performance variation with that program, seemingly caused by switching the order of flip*= and sum+=.

Not enough program to care about.


> the java program

Which java program?



He’s not a king who can do whatever he promised as-is; he’s bound by laws and the Constitution (and laws are passed by Congress).

Also, as you were already corrected, there's constant goalpost-moving in terms of whom exactly should be deported and how.

If you’re really interested in public opinion: people don’t support ICE, and especially not how they do what they do.


You can find stats including pending charges: https://bsky.app/profile/reichlinmelnick.bsky.social/post/3m... The main uptick in recent arrests is mostly people without any criminal charges, pending ones included.

You can also see that a lot of the charges aren’t that “criminal”: they're traffic violations or immigration offenses themselves.


There might be more than one reason for an ongoing crisis, and different takes on who’s responsible. However, Maduro is responsible for the huge number of refugees fleeing Venezuela, and we (and some other countries in the region) have obligations to help asylum seekers.


Obviously the one that makes the laws; it's also the one the Constitution's first article is dedicated to.


Language is just a form; what exactly is encoded inside the model can be very different. And encoding logical reasoning in the weights, with activation functions, is more than possible.
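As a toy illustration (hand-wired weights, not a trained model): a two-layer ReLU network that computes XOR, a function a single linear layer can't represent.

```java
// Sketch: logic encoded purely in weights + ReLU activations.
public class XorNet {
    static double relu(double x) { return Math.max(0, x); }

    static double xor(double a, double b) {
        double h1 = relu(a + b - 1.0); // fires only when both inputs are 1 (AND-like)
        double h2 = relu(a + b);       // counts active inputs (OR-like)
        return h2 - 2.0 * h1;          // OR minus 2*AND = XOR on {0,1} inputs
    }

    public static void main(String[] args) {
        System.out.println(xor(0, 0) + " " + xor(0, 1) + " "
                + xor(1, 0) + " " + xor(1, 1)); // prints 0.0 1.0 1.0 0.0
    }
}
```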

Models solving IMO-level problems, imo, proves it.

I also think you greatly overestimate human intelligence; the fact that we got general intelligence is little more than a side effect of evolution.


Isn’t this what Tao is addressing in the link: that LLMs haven’t encoded reasoning? Success on the IMO is misleading because these are synthetic problems with known solutions that are subject to contamination (answers to similar questions are available in textbooks and online).

He also discusses his view on the similarities and differences between mathematics and natural language. Tao says mathematics is driven entirely by efficiency, so presumably using natural language to do mathematics is a step backwards.


That being said, in this setup of 2-4 H100s you’ll be able to generate with a batch size of around 128, i.e. it's 128 humans and not one. And just like that, the difference in efficiency isn't that high anymore.

