Hacker News | youoy's comments

To me that graph seems to say that the pure "subconscious" or "ML-like" stuff peaks earlier, but comprehension peaks much later. So you perfect your tools in the brain at around 25, but then it takes another 20 years to really learn how to use them correctly.


I would go even further: not just the vast majority, but 100% of non-pacifists like AI weapons.


I finished reading this comment wondering what I should take away from it. Is it better to include alarming titles and be read? Or the other way around? Or where would the sweet spot in the middle be?


I'm really curious how a blogpost titled “Millennials are killing ham sandwiches” would fare, in comparison.


Don't get me started on how delicious Subway was 30 years ago compared to the pale version we have now.


I quote for context:

> But what about those of us who are well into the flattening part of the curve, what can we do for ourselves? You can seek new experiences perhaps. If time goes faster because your life has fewer firsts and more routine, then it can be extended by adding firsts. You can learn new things, travel, take up hobbies, or new careers.

> This works, to a point, but there are only so many firsts for you, and chasing this exclusively seems to lead to resentment. You remember the things you had as a kid. You remember the excitement and warmth of that world, how immediate and raw everything felt, and you want to go back. You start to regret that the world has changed, even though what changed the most is you.

I like to think that life speeds up once you form a stable image and story of yourself. The more you convince yourself that that image is fixed, the faster time will go by. That would explain why childhood seems longer, since that image seems to form around adolescence.

Experiencing new "firsts" while keeping that image of yourself fixed only works for a while. That is why it may lead to resentment, as the article says.

So don't fool yourself: some image of who you are gives you stability, but use it just for that, so that you don't go crazy with all the options.

If you treat every event as something that might reshape your ego, then suddenly a large number of experiences are new, and time slows down. It may even appear to disappear from time to time.


It completely depends on the way you prompt the model. Nothing prevents you from telling it exactly what you want, to the level of specifying the files and lines to focus on. In my experience anything other than that is a recipe for failure in sufficiently complex projects.
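For instance, a rough sketch of the level of detail I mean (the file name, line numbers and fix below are made up for illustration):

    # Vague prompt: leaves the model to guess where and what to change.
    prompt_vague = "Fix the rounding bug in the billing module."

    # Precise prompt: names the file, the lines, the change and the boundaries.
    prompt_precise = (
        "In src/billing/invoice.py, function compute_total (lines 120-145):\n"
        "- replace the float equality check on line 131 with decimal.Decimal\n"
        "- keep the function signature unchanged\n"
        "- do not modify any other file\n"
        "Return only the updated function."
    )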


Several comments can be made here: (1) You only control what the LLM generates to the extent that you specify precisely what it should generate. You cannot reason about what it will generate for what you don't specify. (2) Even for what you specify precisely, you don't actually have full control, because the LLM is not reliable in a way you can reason about. (3) The more you (have to) specify precisely what it should generate, the less benefit using the LLM has. After all, regular coding is just specifying everything precisely.

The upshot is, you have to review everything the LLM generates, because you can't predict the qualities or failures of its output. (You cannot reason in advance about what qualities and failures it definitely will or will not exhibit.) This is different from, say, using a compiler, whose output you generally don't have to review, and whose input-to-output relation you can reason about with precision.

Note: I'm not saying that using an LLM for coding is not workable. I'm saying that it lacks what people generally like about regular coding, namely the ability to reason with absolute precision about the relation between the input and the behavior of the output.


I think the main misunderstanding is that we used to think programming = coding, but this is not the case. LLMs allow people to use natural language as a programming language, but you still need to program. As with every programming language, it requires you to learn how to use it.

Not everyone needs to be excited about LLMs, in the same way that C++ developers don't need to be excited about Python.


In the end this depends on your definition of "fair". What percentage of your generated production do you think is fair for the company to take? 95%? 50%? 10%?


That depends on the value of your generated production, among many other things, and ultimately isn't the right question to ask.

Can an employee obtain better employment terms elsewhere (which is a complex concept to define in itself)? If so, they are underpaid; if not, they aren't.


You were talking about exploitation. Using the fact that the employee cannot obtain better employment elsewhere to extract as much production or value from them as possible smells a lot like exploitation to me.


If an employer offers an employee $100 per hour, and the next best offer that employee can obtain elsewhere is $90 for an otherwise equivalent job, should the employee take that job for granted? Is the employer exploiting them with their pay rate?


That would be the case in an idealized world. As with everything, this depends on the circumstances and the economic activity of the place where the person lives. I guess that through North American eyes it is the employee's fault if they cannot find another job, since the only constraint is personal drive. But there are other economic/educational constraints that don't give people the mobility needed for your example to be efficient and accurate.


Put down the Ayn Rand BS books. What if the employers make $10k per unit of work while they pay you only $10 per unit of work, and they have all talked to each other to never pay more than $10? What do you do then? Complain? Go to court? Who do you think has more influence over the politicians/courts? You, making $10, or your bosses, who are all millionaires because of your severely underpaid work?


Your scenario is the equivalent of Ayn Rand books for the lazy and entitled.

What's the point of inventing a non-existent situation where you're obviously correct, other than self-gratification?


Non-existent situation? How about you read a little history? You can start with the word "collusion".


Notation and symbology come out of a minmax optimisation: minimizing complexity while maximizing reach. As with every local critical point, it is probably not the only state we could have ended up in.

For example, for your point 1: we could probably start there, but once you get familiar with the notation you don't want to keep writing a huge list of parameters, so you would probably come up with a higher-level, more abstract data structure to use as the input. And then the next generation would complain that the data structure is too abstract / takes too much effort to communicate to someone new to the field, because they did not live first hand the problem that made you come up with that solution.
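A toy illustration of that kind of abstraction (the symbols are made up, just to show the pattern): instead of writing the full parameter list every time,

    f(x;\, \mu_1, \sigma_1, \mu_2, \sigma_2, \pi)

you eventually bundle everything into \theta = (\mu_1, \sigma_1, \mu_2, \sigma_2, \pi) and just write f(x;\, \theta), and now the newcomer has to look up what \theta packs.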

And for your point 2: where do you draw the line with your hyperlinks? If you mention the real plane, do you reference the construction of the real numbers? And dimension? If you argue a proof by contradiction, do you reference the axioms of logic? If you say "let {x_n} be a converging sequence", do you reference convergence, natural numbers and sets? Or just convergence? It's not that simple, so we came up with a minmax solution, which is what everybody does now.
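To see how deep that chain goes (this is just the standard definition, nothing beyond it): spelling out "let {x_n} be a converging sequence" in full is already

    \exists L \in \mathbb{R},\ \forall \varepsilon > 0,\ \exists N \in \mathbb{N},\ \forall n \ge N:\ |x_n - L| < \varepsilon,

and that still leaves \mathbb{R}, \mathbb{N} and the absolute value undefined. You have to cut the references somewhere.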

Having said this, there are a lot of articles and books that are not easy to understand. But that is probably more an issue of them being written by someone who is bad at communicating than of the notation.


> As Venkatesh concludes in his lecture about the future of mathematics in a world of increasingly capable AI, “We have to ask why are we proving things at all?” Thurston puts it like this: there will be a “continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true.”

This type of reasoning becomes void if instead of "AI" we used something like "AGA", or "Artificial General Automation", which is a closer description of what we actually have (natural language as a programming language).

Increasingly capable AGA will do things that mathematicians do not like doing. Who wants to compute logarithmic tables by hand? Calculators solved that. Who wants to compute chaotic dynamical systems by hand? Computer simulations solved that. Who wants to improve a real analysis bound over an integral by 2% to get closer to the optimal bound? AGA is very capable of doing that. We only want to do it ourselves if it actually helps us understand why, and surfaces some structure. If not, who cares whether it's you who does it or a machine that knows all of the olympiad-type tricks.


> Right now, even people who reject meritocracy understand its logic. You develop rare skills, you work hard, you create value, and you capture some of that value.

The premise is that AI no longer allows you to do this, which is completely false. It may not allow you to do it in the same way, so it's true that some jobs may disappear, but others will be created.

The article is too alarmist, written by someone who has drunk all of the corporate hype. AI is not AGI. AI is an automation tool, like any other we have invented before. The cool thing is that now we can use natural language as a programming language, which was not possible before. If you treat AI as something that can think, you will fail again and again. If you treat it as an automation tool that cannot think, you will get all of the benefits.

Here I am talking about work. Of course AI has introduced a new scale of AI slop, and that has other psychological impacts on society.


Yes, but I don't think it's about the present, necessarily.

AI is still shit. There are prompt networks, what some people call agents, but presently models are still primarily trained as singular models, not made to operate as agents in different contexts, with RL on each agent being used to improve the whole indirectly.

Tokens will eventually become cheap enough that it will be possible to actually train proper agents. So we probably will end up with very powerful systems in time, systems that might actually be at least some kind of AGI-lite. I don't think that is far off. At most a decade.

