Using suicide rates as a measure of population happiness is very peculiar, given that the people who commit suicide represent a fraction of a percent of the population and would only ever amount to a rounding error.

It's not that peculiar if you assume all countries follow the same type of happiness distribution that is simply shifted/stretched lower or higher.

Then, the share of the population beyond a fixed bottom or top absolute threshold is highly meaningful. Even if it's a fraction of a percent, populations are huge, and suicide rates are not rounding errors at all -- they're actually quite statistically significant.
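
To make that concrete, here's a minimal sketch (with made-up numbers) of how a small shift in the average produces an outsized relative change in the extreme tail, assuming happiness is roughly normally distributed:

    # Toy model: happiness ~ Normal(mean, 1), in standard-deviation units.
    # A fixed "despair" cutoff sits deep in the left tail; shifting the
    # whole distribution down slightly multiplies the tail mass.
    from math import erf, sqrt

    def frac_below(cutoff, mean, sd=1.0):
        """P(X < cutoff) for X ~ Normal(mean, sd), via the error function."""
        z = (cutoff - mean) / sd
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    cutoff = -3.0
    for mean in (0.0, -0.3):  # second "country" shifted 0.3 SD lower
        print(f"mean={mean:+.1f}: {frac_below(cutoff, mean):.5f} below cutoff")
    # mean=+0.0: 0.00135 (0.135% of the population)
    # mean=-0.3: 0.00347 (0.347%) -- a ~2.6x difference in the tail

So under this (strong) shifted-distribution assumption, a tail statistic like the suicide rate can amplify small differences in average well-being.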

And as macabre as it is, suicides are objective facts mostly unaffected by methodology, and unaffected by translation issues, cultural differences, etc.

This is why suicide rates are actually a powerful mental health statistic, just like height is a powerful physical health statistic, at the population level. There's obviously still a lot both of these metrics don't say, but the fact that they are highly objective makes them extremely valuable.


The World Happiness Report discusses this:

"The large variations in the systems and processes to define mortality causes imply there may be very different numbers of deaths that are registered with a specific cause. This creates a problem for cross-country comparisons of mortality by cause in general, and even more so for deaths of despair, and suicides in particular.

The person responsible for writing the cause of death on the death certificate may be different across countries. In some countries, the police are responsible, while in others a medical doctor, coroner, or judicial investigator takes on this role. Differences in doctors’ training, access to medical records, and autopsy requirements contribute to these discrepancies. The legal or judicial systems that decide causes of death also vary. For instance, in some countries suicide is illegal and is not listed as a classifiable cause of death, leading to underreporting or misclassification of suicides as accidents, violence, or deaths of “undetermined intent.”[25]

Data on suicides, even when reported, can be inaccurate due to social factors as well. In some countries, suicide might be taboo and highly stigmatised, so the families and friends of the person who committed suicide might decide to misreport or not disclose the mortality cause, causing underreporting of its incidence. In other societies, such as Northern Europe, there is less stigma attached to suicides, and alcohol and drug use."

https://www.worldhappiness.report/ed/2025/supporting-others-...


I don't think it would be that difficult to reconcile suicide figures between G20 countries. Outside of that, sure, data collection methods and quality differ heavily. But many people are interested in the varying levels of happiness among the G20, and there it doesn't seem that difficult to compare.

> And as macabre as it is, suicides are objective facts mostly unaffected by methodology, and unaffected by translation issues, cultural differences, etc.

I wouldn't be surprised if cultural differences are actually the largest factor that explains a country's suicide rate. Not easy to prove, of course, but I would be very careful drawing any conclusions from differences in suicide rates between countries with vastly different cultures.

I think you can also expect large differences in how countries report their suicide rates.


> if you assume all countries follow the same type of happiness distribution that is simply shifted/stretched lower or higher.

That's a pretty strong assumption; it seems more likely that there's variation at the extremes than not. For example, if a small percentage of the population deals badly with extended nighttime in long winters, then it'll affect Finland's most-unhappy stats (and suicide rates) without meaning much for the average happiness.


I won't go into too much detail on the topic, as it's loaded with triggering elements. Let's just say that if you were to study how different cultures apprehend and conceptualize life and death (whether philosophically or religiously), I'm fairly sure that you'd come out the other end questioning a lot of your original assumptions (which I only presume you hold based on your comment). Our collective outlook can have significant and far-reaching influence on individual decisions.

Suicides are hugely affected by cultural norms. In certain Asian cultures this has quite the history, so this can't be a correct assumption.

Most Asian cultures with suicide problems acknowledge and try very hard to bring those rates down. It isn't just a cultural norm and is in fact a good indicator of the happiness of a population.

> It isn't just a cultural norm and is in fact a good indicator of the happiness of a population.

Prove it


Here's [1] the Japanese Ministry of Health, Labor, and Welfare's page on preventing suicides. The motto is 誰も自殺に追い込まれることのない社会の実現を目指して, or "Aiming to realize a society where no one is driven to suicide."

[1]: https://www.mhlw.go.jp/stf/seisakunitsuite/bunya/hukushi_kai...


That's a straw man. There are many cultures that place a strong emphasis on honor/shame mechanics, which in turn drive suicides in those cultures, and which match cultural expectations in a grim kind of way.

The fact that people want to change their culture is possibly an early indication of a shift, which could take decades or centuries to actually occur. And such a cultural shift can also lose momentum and be stillborn.

---

I find counting suicides innovative. But if you do it in a global context without treating culture as a confounding factor, it's wrong.

There are many other confounding factors, such as a forgiving national (personal) bankruptcy regime. The USA has a pretty forgiving regime compared to other countries, but that doesn't mean you can say it correlates with how happy people are, because - like suicides - the number of people who go bankrupt might not significantly correlate with the average happiness rate, since only a (small) minority of people go bankrupt / commit suicide.

It's in fact perfectly reasonable and possible to suppose that a country with a higher suicide rate and harsher penalties for bankruptcy still ends up higher on the happiness index, because perhaps health and social-contact / family factors impact the rating more, on average.


QoL certainly has its effect on suicide rates. I assume that the shittier life is, the more people opt to leave on their own terms. Just look at Russia: an absolute shithole, and it's ranked 11th.

If people are happy, you have fewer suicides. I don't need a study for that.


The author would do well to educate themselves on the difference between Scandinavia and the Nordics.

It doesn't matter. Finland is often included when talking about Scandinavia, which in modern days just makes sense culturally. There's no value in trying to cling to the "historically correct" meaning of a particular term. Languages evolve, dictionaries change.

"evolve" meaning "diluted because lots of people are dumb"

That mindset will make you a grumpy old soul. Language is dynamic, not something you can force upon people. From every mouth to ear (or screen to eye), the word is interpreted slightly differently.

I mean, the original meaning of the word Scandinavia certainly doesn't make sense anymore:

1765, from Late Latin Scandinavia (Pliny), Skandinovia (Pomponius Mela), name of a large and fruitful island vaguely located in northern Europe, a mistake (with unetymological -n-) for Scadinavia, which is from a Germanic source (compare Old English Scedenig, Old Norse Skaney "south end of Sweden"), from Proto-Germanic skadinaujo "Scadia island." The first element is of uncertain origin; the second element is from aujo "thing on the water" (from PIE root *akwā- "water;" see aqua-). It might have been an island when the word was formed; the coastlines and drainage of the Baltic Sea changed dramatically after the melting of the ice caps.



Good point. As presented, I thought it was very easy.


What's a sign that it's ever going to happen?


I used to believe in AGI but the more AI has advanced the more I’ve come to realize that there’s no magic level of intelligence that can cure cancer and figure out warp drives. You need data, which requires experimentation, which requires labor and resources of which there is a finite supply. If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources. Isn’t that what the greatest minds in cancer research would say as well? Why do we think that just being more rational or being able to compute better than humans would be sufficient to solve the problem?

It’s very possible that human beings today are already doing the most intelligent things they can given the data and resources they have available. This whole idea that there’s a magic property called intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with, increasingly just seems like the fantasy of people who think they’re very intelligent.


Generally, I agree, but it also depends on perspective. Intelligence exists on many levels and manifests differently across species. From a monkey's standpoint, if they were capable of such reflection, they might perceive themselves as the most capable creatures in their environment. Yet humans possess cognitive abilities that go far beyond that: abstract reasoning, cumulative culture, large-scale cooperation, etc.

A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.

As humans, we can easily visualize and reason about 2D and 3D spaces, it's natural because our sensory systems evolved to navigate a 3D world. But can we truly conceive of a million dimensions, let alone visualize them? We can describe them mathematically, but not intuitively grasp them. Our brains are not built for that kind of complexity.

Now imagine a form of intelligence that can directly perceive and reason about such high dimensional structures. Entirely new kinds of understanding and capabilities might emerge. If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all, it could simply simulate outcomes internally.

Of course that's speculative, but it just illustrates how deeply intelligence is shaped and limited by its biological foundation.


> If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all, it could simply simulate outcomes internally.

It likely couldn't, though, that's the problem.

At a basic level, whatever abstract system you can think of, there must be an optimal physical implementation of that system, the fastest physically realizable implementation of it. If that physical implementation were to exist in reality, no intelligence could reliably predict its behavior, because that would imply access to a faster implementation, which cannot exist.

The issue is that most physical systems are arguably the optimal implementation of whatever it is that they do. They aren't implementations of simple abstract ideas like adders or matrix multipliers, they're chaotic systems that follow no specifications. They just do what they do. How do you approximate chaotic systems which, for all you know, may depend on any minute details? On what basis do we think it is likely that there exists a computer circuit that can simulate their outcomes before they happen? It's magical thinking.

Note that intelligence has to simulate outcomes, because it has to control them. It has to prove to itself that its actions will help achieve its goals. Evolution doesn't have this limitation: it's not an agent, it doesn't have goals, it doesn't simulate outcomes, stuff just happens. In that sense it's likely that certain things can evolve that cannot be intelligently designed (as in designed, constructed and then controlled). It's quite possible intelligence itself falls in that category and we can't create and control AGI, and AGI can't improve itself and control the outcome either, and so on.


I agree that computational irreducibility and chaos impose hard limits on prediction. Even if an intelligence understood every law of physics, it might still be unable to simulate reality faster than reality itself, since the physical world is effectively its own optimal computation.

I guess where my speculation comes in is that "simulation" doesn’t necessarily have to mean perfect 1:1 physical emulation. Maybe a higher intelligence could model useful abstractions/approximations, simplified but still predictive frameworks that are accurate enough for control and reasoning even in chaotic domains.

After all, humans already do this in a primitive way, we can't simulate every particle of the atmosphere, but we can predict weather patterns statistically. So perhaps the difference between us and a much higher intelligence wouldn't be breaking physics, but rather having much deeper and more general abstractions that capture reality's essential structure better.

In that sense, it's not "magical thinking", I just acknowledge that our cognitive compression algorithms (our abstractions) are extremely limited. A mind that could discover higher order abstractions might not outrun physics, but it could reason about reality in qualitatively new ways.


> A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.

Humans existed in the world for hundreds of thousands of years before they did any of those things, with the exception of the wooden hut, which took less time than that. But even that wasn't instant.

Your example doesn't entirely contradict the argument that it takes time and experimentation as well, that intellect isn't the only limiting factor.


My point wasn't so much about how fast humans achieved these things, but about what's possible at all given a certain cognitive architecture. Chimpanzees could live for another million years and still wouldn't build a factory, not because they don't have enough time, but because they lack the cognitive and cultural mechanisms to accumulate and transmit abstract knowledge.

So I completely agree that intelligence alone isn't the only factor, it's the whole foundation.


> Chimpanzees could live for another million years and still wouldn't build a factory, not because they don't have enough time, but because they lack the cognitive and cultural mechanisms

Given a million years, that could change.


I think I see what you’re getting at, but the difference between apes and humans isn’t that we can reason in 3D. If someone could actually articulate the intellectual breakthrough that makes humans smarter than apes, then maybe I would accept there’s some intellectual ability AI could achieve that we don’t have, but I don’t see how it could be higher dimensional reasoning.


Agreed.

And, if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory.

Isn’t that what the greatest minds in physics would say as well? Yes, yes it is.

No debate will be entered into on this topic by me today.


Actually, no, it isn't. They say it isn't necessarily possible, but not self-contradictory as far as we know. It's good that you aren't going to debate this.

https://en.wikipedia.org/wiki/Alcubierre_drive


You failed reading comprehension.


You think I'm the one who's failing here?

You said:

"(...) if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory."

"Isn’t that what the greatest minds in physics would say as well? Yes, yes it is."

That is not in fact what the greatest minds in physics would say. Your meta-knowledge of physics has failed you here, resulting in you posting embarrassing misinformation. I'm just having to correct it to prevent you from misleading anyone else.


You failed to realise that I'm not debating you, I'm berating you. Some people see statements like "not debating" as a personal challenge, a reason to get aggressive. Let's be clear: they are not nice people, and you don't want to be trolls like them.


Yes, I can see that you're just trolling, not debating. I appreciate the fact that you aren't debating, because I don't want to have to correct more of your misinformation. I don't think your berating is productive either, although it does demonstrate that—as you said—you are not a nice person.


AGI isn't a synonym for smarter-than-human.


What’s your point? I’m saying there’s no level of smartness that can cure cancer; the bottleneck is data and experimentation, not a shortage of smartness/intelligence.


And I'm saying that AGI doesn't imply a level of smartness at all.


Eliezer’s short story “That Alien Message” provides a convincing argument that humans are cognitively limited, not data-limited, through the device of a fictional world where people think faster: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...

> Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message, cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity, with which it has correlation exactly zero, across the probable worlds you imagine.

> But nothing I've depicted this human civilization doing, even begins to approach the theoretical limits set by the formalism of Solomonoff induction.

This is also a commonplace in behavioral economics; the whole foundation of the field is that people in general don't think hard enough to fully exploit the information available to them, because they don't have the time or the energy.
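
For the quoted bound itself, a toy sketch (made-up hypothesis count): n observed bits can, at absolute best, cut the hypothesis space in half n times, so they can never distinguish more than 2**n equally likely hypotheses.

    # Best case: each observed bit rules out exactly half of the
    # remaining, equally likely hypotheses.
    hypotheses = 1_000_000
    bits_seen = 0
    while hypotheses > 1:
        hypotheses = (hypotheses + 1) // 2  # halve (rounding up)
        bits_seen += 1
    print(bits_seen)  # 20, since 2**20 = 1,048,576 >= 1,000,000

The story's claim is that humanity operates nowhere near this best case, not that the limit itself can be beaten.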

——

Of course, that doesn't mean that great intelligence could figure out warp drives. Maybe warp drives are actually physically impossible! https://en.wikipedia.org/wiki/Warp_drive says:

> A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek,[1] and a subject of ongoing real-life physics research. (...)

> The creation of such a bubble requires exotic matter—substances with negative energy density (a violation of the Weak Energy Condition). Casimir effect experiments have hinted at the existence of negative energy in quantum fields, but practical production at the required scale remains speculative.

——

Cancer, however, is clearly curable, and indeed often cured nowadays. It wouldn't be terribly surprising if we already had enough data to figure out how to solve it the rest of the time. We already have complete genomes for many species, AlphaFold has solved the protein-folding problem, research oncology studies routinely sequence tumors nowadays, and IHEC says they already have "comprehensive sets of reference epigenomes", so with enough computational power, or more efficient simulation algorithms, we could probably simulate an entire human body much faster than real time with enough fidelity to simulate cancer, thus enabling us to test candidate drug molecules against a particular cancer instantly.

Also, of course, once you can build reliable nanobots, you can just program them to kill a particular kind of cancer cell, then inject them.

Understanding this does not require believing that "intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with", which I think is a strawman you have made up. It doesn't even require believing that sufficient intelligence can solve every problem if it has sufficient data and resources to work with. It only requires understanding that being able to do the same thing regular humans do, but much faster, would be sufficient to cure cancer.

——

There does seem to be an open question about how general intelligence is. We know that there isn't much difference in intelligence between people; 90+% of the human population can learn to write a computer program, make a pit-fired pot from clay, haggle in a bazaar, paint a realistic portrait, speak Chinese, fix a broken pipe, interrogate a suspect and notice when he contradicts himself, fletch an arrow, make a convincing argument in courts, program a VCR, write poetry, solve a Rubik's cube, make a béchamel sauce, weave a cloth, sing a five-minute lullaby, sew a seam, or machine a screw thread on a lathe. (They might not be able to learn all of them, because it depends on what they spend time on.)

And, as far as we know, no other animal species can do any of those things: not chimpanzees, not dolphins, not octopodes, not African grey parrots. And most of them aren't instinctive activities even in humans—many didn't exist 1000 years ago, and some didn't exist even 100 years ago.

So humans clearly have some fairly flexible facility that these other species lack. "Intelligence" is the usual name for that facility.

But it's not perfectly general. For example, it involves some degree of ability to imagine three-dimensional space. Some of the humans can also reason about four- or five-dimensional spaces, but this is a much slower and more difficult process, far out of proportion to the underlying mathematical difficulty of the problem. And it's plausible that this is beyond the cognitive ability of large parts of the population. And maybe there are other problems that some other sort of intelligence would find easy, but which the humans don't even notice because it's incomprehensible to them.


Regarding "Alien Message", I don't find that story particularly convincing. I think it's muddled and contrived.

The basic issue is that we have to deduce stuff about the world we live in, using resources from the world we live in. In the story, the data bandwidth is contrived to be insanely smaller than the compute bandwidth, but that's not realistic. In reality, we are surrounded by chaotic physical systems that operate on raw hardware. They are, in fact, quite fast, and probably impossible to simulate efficiently. For instance, we can obviously never build a computer that can simulate the behavior of its own circuitry, using said circuitry, faster than it operates. But I think there's a lot of physical systems that are just like that.

Being data-limited means that we get data slower than we can analyze and process it. It is certainly possible to improve our ability to analyze data, but I don't think we can assume that the best physically realizable intelligence would overcome data limitation, nor that it would be cost-effective in the first place, compared to simply gathering more data and experimenting more.


> Regarding "Alien Message", I don't find that story particularly convincing. I think it's muddled and contrived.

Well, yes, it's from Eliezer Yudkowsky. The kind of people who generally find him persuasive will find this persuasive too. Those who don't find him convincing, or even find him somewhat of a crank like the other self-proclaimed "rationalists", will do likewise. "Muddled" is correct: he lacks rigour in everything, but certainly brings the word count.


You're the guy who in https://news.ycombinator.com/item?id=45517647 I demonstrated was a physics crank: unskilled and unaware of it, dismissing the Alcubierre metric as "fantasy, nigh nonsensical and self-contradictory", unlike actual physicists. And, when I presented the evidence that that's not what actual physicists say about it, you responded by heaping personal abuse on me. Perhaps you posted this comment later as an additional form of ego defense, since it implicitly calls me a crank, by implying that I'm a "rationalist"?


Those are odd claims, but they don't interest me. You have not and are not demonstrating anything outside of your own fixations. Project much?


You seem to be agreeing with the story's thesis, rather than disagreeing. The story claims that we get an enormous amount of data from which we could compute much more than we do. You, too, are claiming that we get an enormous amount of data from which we could compute much more than we do. If that's true, then we aren't limited by our data, which is what I meant by "data-limited"—although you seem to mean the opposite, "we get data slower than we can analyze and process it", in which we are limited not by the data but by the processing. This tends to rebut the claim above, "If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources."

It may very well be true that you could cure cancer even faster or more cheaply with more experimental data, but that's irrelevant to the claim that more experimental data is necessary.

It may also be the case that there's no "shortcut" to simulating a human body well enough to test drugs against a simulated tumor faster than real time—that is, that you need to have enough memory to track every simulated atom. (The success of AlphaFold suggests that this is not the case, as does the ability of humans to survive things like electric shocks, but let's be conservative.) But a human body only contains on the order of 10²⁴ atoms, so you can just build a computer with 10²⁸ words of memory, and processing power to match. It might be millions of times larger than a human body, but that's okay; there's plenty of mass out there to turn into computronium. It doesn't make it physically unrealizable.

Relatedly, you may be interested in seeing Mr. Rogers confronting the paperclip maximizer: https://www.youtube.com/watch?v=T-zJ1spML5c


It's not a strawman, it's a thought experiment: if the premise of AGI is that a superintelligence could do all these amazing things, what could it do today if it existed but only had its superintelligence? My suggestion is that even something a billion times more intelligent than a human being might not be able to cure cancer with the information it has available today. Yes it could build simulations and throw a lot of computing power at these problems, but is the bottleneck intelligence or computing power to run the algorithms and simulations? You're conflating the two, no one disagrees that one billion times more computing power could solve big problems, the disagreement is whether one billion times more intelligence has any meaningful value which was the point of isolating that variable in my thought experiment.


It's fair that I'm conflating raw computational power with strategic usage of that power. And it is at least theoretically conceivable that brute force computational power is not something that could be replaced by clever algorithms.

But if you agree that with 10²⁸ times more computational power we could almost surely cure cancer without gathering much more data, then you agree that we have enough empirical data and just need to analyze it better. We're sort of arguing about the details of what kinds of approaches to analyzing the data better would work best.

I'll continue that argument about details a bit more here. So far, even with merely human intelligence, hard computational problems like car crash simulation, protein folding, and mixed integer-linear programming (optimization) have continued to gain even more efficiency from algorithmic improvements than from hardware improvements.

According to our current understanding of complexity theory, we should expect this to continue to be the case. An enormous class of practically important problems are known to be NP-complete, so unless P = NP, they take exponential time: solving a problem of size N requires k**N steps. Hardware advances and bigger compute budgets allow us to do more steps, while algorithmic improvements reduce k.

To be concrete, let's say k = 1.02, we have a data center full of 4096 1-teraflops GPUs, and we can afford to wait a month (2.6 megaseconds) for an answer. So we can apply about 10²² operations to the problem, which lets us solve problems up to about size N = 2600. Now suppose we get more budget and build out 1000 such data centers, so we can apply 10²⁵ ops, but without improving our algorithms. This allows us to handle N = 2900.

But suppose that instead we improve the heuristics in our algorithm to reduce k from 1.02 to 1.01. Suddenly we can handle N = 5100, twice as big.

We can easily calculate how many data centers we would need to reach the same problem size without the more intelligent algorithm. It's about 6 × 10²¹ data centers.

For NP-complete problems, unless P = NP, brute-force computing power lets you solve logarithmically larger problems, while intelligence lets you solve linearly larger problems, equivalent to an exponentially larger amount of computation.
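
As a sanity check on those numbers (same k values and compute budgets as above), the largest solvable problem size is just log(ops)/log(k):

    # N_max solves k**N = ops, so N_max = log(ops) / log(k).
    from math import log

    def max_problem_size(ops, k):
        return log(ops) / log(k)

    print(max_problem_size(1e22, 1.02))  # ~2558: one data center, one month
    print(max_problem_size(1e25, 1.02))  # ~2907: 1000 data centers
    print(max_problem_size(1e22, 1.01))  # ~5091: better heuristics, k = 1.01

    # Data centers of 1e22 ops each needed to reach N ~ 5091 at k = 1.02:
    print(1.02 ** 5091 / 1e22)  # ~6e21 data centers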


Humans. There are arrangements of atoms that, if constructed and activated, act perfectly like human intelligence. Because they are human intelligence.

Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term. If human intelligence is deterministic, then it can be written in software.

Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen. Failures to date can be attributed to various factors, but the gist is that we haven't yet identified the principles of intelligent software.

My guess is that we need less than 5 million years further development time even in a worst-case scenario. With luck and proper investment, we can get it down well below the 1 million year mark.


> Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term.

No, not all processes follow deterministic Newtonian mechanics. They could also be random, unpredictable at times. Are there random processes in the human brain? Yes, there are random quantum processes in every atom, and there are atoms in the brain.

Yes, this is no less materialistic: Humans are still proof that either you believe in souls or such, or that human level intelligence can be made from material atoms. But it's not deterministic.

But also, LLMs are not anywhere close to becoming human level intelligence.


>It could also be random, unpredictable at times.

It isn't. But if it were, we can also write that into the algorithm.

>But also, LLMs are not anywhere close to becoming human level intelligence.

They're no farther than about 5 million years distant.


"Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term. "

Determinism is a metaphysical concept like mathematical platonism or ghosts.


> Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen.

~200 years of industrial revolution and we already fucked up beyond the point of no return, I don't think we'll have resources to continue on this trajectory for 1m years. We might very well be accelerating towards a brick wall, there is absolutely no guarantee we'll hit AGI before hitting the wall


>We might very well be accelerating towards a brick wall, there is absolutely no guarantee we'll hit AGI before hitting the wall

We've already set the course for human extinction, we're about 6-8 generations away from absolute human extinction. We became functionally extinct 10-15 years ago. Still, if we had another 5 million years, I'm one hundred percent certain we could crack AGI.


> if deterministic, then can be done in software.

You just need a few Dyson spheres and someone omniscient to give you all the parameter values. Easy peasy.

Just like cracking any encryption: you just brute force all possible passwords. Perfectly deterministic decryption method.

</s>


There need to be breakthrough papers or hardware that can expand context size exponentially, or a new model that can address long-term learning.


You are planning to pay up to 40k USD per year, with no equity, for western European hours?


$25k – $40k to be exact.


Yes.


Would Americans consider learning to bake instead?


The guy quite obviously is diversely talented. A computer genius, but well below generally agreed upon levels of mental deficiency in areas that most people care about.


Yet you expect them to act in a way that would make them lose money?


I don't expect them to do so, but I will make them


This is a ridiculous take.


This was really fun.

After about 10 questions, though, I started getting the same question every time. Like five times in a row.


I'm sorry for that, I should put up a different screen when all questions are answered. The archive doesn't go back any further than that as of now.


This is fixed now and you can't play questions on repeat.

