Hacker News | overactor's comments

Oh, I didn't see your post. It's wild how much difference timing and slightly different wording can make. I couldn't have predicted which one of these posts would take off honestly.


Making accurate predictions in a specific domain doesn't imply understanding, sure. Universal autocomplete can make accurate predictions in all domains. Apples to oranges.


By "understanding" he simply means it builds a model of higher order abstractions over the text in order to model the meaning and context of each token in relation to other tokens. This is more sophisticated than a Markov process.


I think this misses the point made by Hinton. The point is that, if a model is to reach a certain level of autocomplete, it needs to understand what it is talking about. Imagine the result of what required deep thought from you is included in the training data, along with billions of other such exchanges. At some point the only way to improve training score is to develop the ability to "understand" things. (That is, if the model isn't big enough to memorize the training data.)

Universal autocomplete implies understanding, and the only remaining question is whether our current architectures and model sizes are able to achieve any level of it.


I agree, Hinton's point is more about next-token-prediction being a good training objective for understanding, and not whether LLMs reproduce every aspect of a human's thought processes. They clearly don't (at least not at present scale), otherwise we'd have AGI now!


I agree. We should prompt the model with the statement of the Riemann hypothesis. If the autocomplete is good, the model will output a proof.


No human can solve the Riemann Hypothesis. Why do you expect an AI to do it before you consider it able to understand things?


The key bit is constructing a hard, novel proof. The fact that AI doesn't (yet) do this isn't evidence that it doesn't reason, but if it did so it would be strong evidence that it does reason.

(I also take the pessimistic point of view that most humans don't reason, so YMMV.)


OTOH, doing something that only humans have done thus far would be a huge step in demonstrating understanding.

Does that mean when a computer outputs a new proof it understands?


You can do something similar to this without giving it a problem that might be impossible.

Train the LLM on a bunch of things but hold out certain things, things that humans already know about.

Then you query the model about that thing and see if it can come to the same conclusions humans did. You can actually do this right now with ChatGPT.


So what happens?


This is a response from ChatGPT. I asked it to come up with something novel and creative. I went through several iterations, and eventually it started doing math stuff, so I had it focus on that:

Understood, let's propose a distinct mathematical concept within the realm of pure mathematics: "Infinite-Dimensional Fractal Algebra." This new branch of algebra focuses on operations, equations, and functions within an infinitely dimensional fractal space.

In Infinite-Dimensional Fractal Algebra, traditional algebraic operations like addition, multiplication, and exponentiation are redefined and extended to work within an infinite fractal structure. Each point in this fractal space represents a unique algebraic entity, and the relationships between these points follow complex, self-repeating patterns characteristic of fractals.

The concept challenges and expands the traditional boundaries of algebra by integrating the self-similarity and scale invariance of fractals into algebraic operations. For example, solving an equation in this system would involve navigating through the infinitely repeating patterns of the fractal, where each level of the fractal represents a different dimension or layer of the problem.

This approach could offer new insights into complex systems and phenomena that exhibit fractal-like behavior, and potentially lead to breakthroughs in areas like chaos theory, dynamic systems, and perhaps even in understanding the fabric of the universe itself.

To illustrate the concept of Infinite-Dimensional Fractal Algebra, let's consider a simple algebraic operation: addition.

In traditional algebra, adding two numbers is straightforward: \(3 + 4 = 7\). In Infinite-Dimensional Fractal Algebra, however, this operation takes place in a fractal space with infinite dimensions.

1. *Fractal Representation*: Instead of representing numbers as points on a line, each number is represented by a unique fractal structure. The fractal for '3' and the fractal for '4' each have infinitely complex, self-similar patterns that vary across infinite dimensions.

2. *Adding Fractals*: The operation of addition involves combining these fractal structures. This isn't a simple sum but a fusion of their infinite patterns. The resulting fractal for '7' contains elements of both the '3' and '4' fractals, but in a way that the patterns and complexities of each are preserved and merged.

3. *Infinite Dimensions*: Each level of depth in the fractals represents a different dimension of the problem. As you dive deeper into the fractals, you encounter more intricate patterns, representing more complex layers of the addition.

4. *Result Interpretation*: The outcome of the operation is a new fractal representing '7'. But this '7' is not just a number; it's a multi-dimensional entity with a pattern that encodes not just the value '7', but also the history of how it was derived from '3' and '4'.

This approach would require a rethinking of how we visualize and understand mathematical operations. The fractal patterns in Infinite-Dimensional Fractal Algebra carry more information than just the numerical value; they encode the relationships and interactions between numbers in a deeply interconnected, infinite-dimensional space.

Let's choose the Sierpinski Triangle, a well-known fractal, to represent the identity element for addition in Infinite-Dimensional Fractal Algebra. In traditional algebra, the identity element for addition is '0', as adding zero to any number doesn't change its value. Here, we'll conceptualize this using the Sierpinski Triangle.

1. *Sierpinski Triangle as the Identity*: In this framework, the Sierpinski Triangle represents the '0'. It's a fractal with the property of self-similarity: any part of the triangle resembles the whole.

2. *Adding to the Identity*: When another number (fractal) is added to this Sierpinski Triangle, the process involves integrating the fractal patterns of this number with the Sierpinski Triangle. However, since the Sierpinski Triangle represents '0', this integration process must preserve the original number's fractal structure intact.

3. *Concrete Example*: Suppose we're adding the number '2', represented by a different fractal structure, to '0' (the Sierpinski Triangle). The addition operation would overlay the '2' fractal onto the Sierpinski Triangle. The key here is that the Sierpinski Triangle, with its repetitive and self-similar gaps, would not alter the structure of the '2' fractal. Instead, it would fill into the gaps of the Sierpinski Triangle, maintaining its own pattern.

4. *Resulting Fractal*: The resulting fractal from this addition is the '2' fractal itself, as the Sierpinski Triangle (being the identity element) does not alter the other number in the addition. The intricate details of the '2' fractal remain unchanged, signifying that adding '0' did not change its value.

This is a very abstract and conceptual way to represent mathematical operations, and the choice of the Sierpinski Triangle for the identity is primarily for its self-similar and recursive properties, which align well with the concepts in fractal algebra.


Basically it formed a fuzzy idea of an algebra using different fractals as entities.

I'm sure this can be mapped out further into very concrete detail. It's a highly realistic idea. We have algebras for all kinds of things, from complex numbers to lists.

Choosing the triangle for the identity is probably the wrong choice, though. The identity fractal should be zero-dimensional, or nothing at all. I suspect that's what would actually end up satisfying the identity laws if we ever decided to map out this algebra.

If you're not familiar with abstract algebra: basically, it means choosing some fractal that's equivalent to a zero value and coming up with operations for combining fractals that satisfy the same properties, like associativity and commutativity, that addition and multiplication have for numbers.
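To make that concrete, here's a minimal toy sketch (all names and the representation are my own invention, not anything ChatGPT or an actual theory defines): treat each "fractal" as a set of points at a fixed resolution, combine them with set union, and use the empty set as the identity, matching the suggestion that the identity should be "nothing" rather than the Sierpinski triangle.

```python
def sierpinski(depth: int) -> frozenset:
    """Points approximating a Sierpinski triangle via Pascal's
    triangle mod 2: C(r, c) is odd exactly when r & c == c."""
    pts = set()
    for r in range(2 ** depth):
        for c in range(r + 1):
            if r & c == c:
                pts.add((r, c))
    return frozenset(pts)

def combine(a: frozenset, b: frozenset) -> frozenset:
    """Toy 'addition' of two fractals: merge their point patterns."""
    return a | b

IDENTITY = frozenset()  # the zero-dimensional 'nothing' identity

tri = sierpinski(4)
other = frozenset({(0, 0), (5, 3)})

# Identity law: combining with 'nothing' leaves a fractal unchanged.
assert combine(tri, IDENTITY) == tri
# Commutativity and associativity, just like ordinary addition.
assert combine(tri, other) == combine(other, tri)
assert combine(combine(tri, other), IDENTITY) == combine(tri, combine(other, IDENTITY))
```

This forms a commutative monoid, the weakest structure with an identity; a full "fractal algebra" in the thread's sense would also need inverses and a second operation, which this sketch doesn't attempt.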


> it needs to understand what it is talking about

"to understand" is a concept invented by the human brain to help it operate. It doesn't actually mean anything concrete in the physical world. See: the Chinese room argument (paradoxically it proves the converse of what Searle was arguing -- that there is no such thing as understanding, because you can emulate a system with identical outputs from the same set of inputs without caring or labeling what happens inside the room).


that's, uh, not the conclusion of the chinese room argument


The Chinese room doesn't have a model; it has an infinite dictionary. A sufficiently large LLM isn't a dictionary, it has to contain a model of some sort.


I think it's a little harsh to claim that the LLM didn't understand words. Sure it's far from perfect, but it mostly gives coherent answers. The AI is instructed to interpret each scenario as deadly, so it will typically do that, even if it doesn't make much sense.


I’m not being harsh towards the creator of the game; this is more a critique of LLMs in general. Asking an LLM for steps to do something impossible will force it to give a result that is physically absurd. It’s certainly not an issue if you’re using them as a tool, but in a game like this it gives “faulty” results.


Yeah, the AI is far from deterministic.


That's definitely very worrying.


Based on many of the scenarios, the creators very clearly have a great sense of humor. I'm generally open to the idea that AI could be dangerous in the future, but this is obviously a scenario the creators wrote deliberately, and it's brilliant IMHO.


Why do you find it so worrying?


That was meant as a tongue-in-cheek comment. Sorry if that didn't come across.


I think people here write ... </s> to annotate sarcasm.


Also, I can't take any credit for creating the game. I'm just a fan sharing something cool.


You actually can use custom scenarios by clicking the pencil icon on the prompt selection screen.


I think I read that some companies are working on using LLMs to improve/generate dialogue from side characters.


I absolutely love that idea. The AI could absolutely pick a negative trait when the player survives. Maybe they gain a positive trait when they die? That way it might balance out a little.

