Sorry, wasn't there a post literally a week ago about this being a long-term experimental branch and how we needed to not kick the hatchling while it's still an egg?
I've noticed a certain type of comment on Hacker News in the past year: a deep suspicion that first calls out a surface behavior, then psychoanalyzes strangers with whatever the flavor-of-the-month "deep observation" is.
You can't be a dick on this platform without fancy prose I guess.
Doesn't symbolic AI have a lot of philosophical problems? Think back to Quine's two dogmas - you can't just say, "Let's understand the true meanings of these words and understand the proper mappings". There is no such thing as fixed meaning. I don't see how you get around that.
Deep learning is admittedly an ugly solution, but it works better than symbolic AI at least.
This reverse-engineering effort matters, even just between you and me, in this exchange right here. It's a battle that can never be won, but fighting it is how we make progress in most things.
I mean, Quine invented (the term) holism. I don't think we're on different pages. Maybe I should've specified a bit more what I was getting at.
This has very specific implications in symbolic AI specifically, where historically the goal was mapping out the "correct" representation of the space and then running formal analysis over it. That's why it's not a black box - you can trace all of the steps. The issue is that symbolic AI just doesn't work, at least not compared to all the DL wins we have, to my knowledge.
I think the win of transformers proves that symbolic AI isn't the way. At the very least, the complex interactions that arise from in-context learning clearly don't imply any fixed, universal meaning for words, which is a big problem for symbolic AI.
I don't know why our science education is like this, but it seems like everybody's understanding of science bottoms out at a straw-man version of Popper and positivism.
And to be clear, falsification and being empirical and skeptical about theoretical claims are great. What I see all too often on the Internet is just pattern matching to the words "observable" and "falsification" without a second thought, without actually looking into how science develops; any and all narratives get historically rewritten to fit only those two categories.
Which is why it's even more impressive to be a real scientist: to actually navigate the muddy waters properly, where it's not just some simple adjective checklist to run through. (Speaking as a non-scientist.)
I feel the same, but then I remind myself of how I used to feel about dark matter (I really disliked it). Having an arena where no scientific question is out of bounds is great.
I'm pretty sure most people take issue with AGI because we've been raised in a culture that believes AGI is a super-entity, a complete superset of humans, that could never ever be wrong about anything.
In some sense, this isn't really different from where society was headed anyway? The trend was already there: more and more sections of the population getting deemed irrational, where you're just stupid/evil for disagreeing with the state.
But that reality was still probably at least a century out, without AI. With AI, you have people making that narrative right now. It makes me wonder if these people really even respect humanity at all.
Yes, you can ride the slippery slope from "superintelligent beings exist" to effectively totalitarianism, but you'll find a lot of bad commitments along the way.
It's not intuitive to humans, even after learning parsing theory. I can do basic name refactorings. I've even written neovim plugins to do one specific thing with the AST (DFS down and delete one subtree I understand). Those are fine.
I would not be comfortable doing an on-the-fly "rewrite all subtrees that match this pattern" kind of edit.
It seems like a tool that's good for LLMs, though.
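To make the "DFS down and delete one subtree" idea concrete: the actual plugin would be Lua/tree-sitter, but the same kind of tree-aware edit can be sketched with Python's ast module. (The DropPrints class and the print-stripping pattern here are just illustrative assumptions, not anything from the thread.)

```python
import ast

class DropPrints(ast.NodeTransformer):
    """DFS the tree and delete every statement that is a bare print(...) call."""
    def visit_Expr(self, node):
        call = node.value
        if (isinstance(call, ast.Call)
                and isinstance(call.func, ast.Name)
                and call.func.id == "print"):
            return None  # returning None removes this subtree
        return node

src = "x = 1\nprint(x)\ny = x + 1"
tree = DropPrints().visit(ast.parse(src))
cleaned = ast.unparse(ast.fix_missing_locations(tree))
print(cleaned)  # the print(x) statement is gone
```

This is exactly the "one specific thing I understand" shape of edit: the match condition is narrow and explicit, unlike an on-the-fly "rewrite all subtrees matching this pattern" where you have to trust a pattern language.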
Totally agree with the absorption thing. Ever since I was a kid, I've found a great calm in sitting in transit and looking out the window. A train ride is great for this reason. I think about things - actively. Often hard problems, not daydreams. I know what those feel like, and it's definitely different from depressive rumination or furiously working through tasks.
Again, I want to emphasize that in none of these are you explicitly practicing the act of leashing your mind.
All in all, I think the popular conception of meditation, YouTube-ized since the 2010s, has more nuance than it gets credit for. Maybe people see this distinction and think it's obvious. But as someone who unironically feels that self-help content has been a net negative for me rather than a net positive, this matters to me, personally.
If you want to get mystical, there are plenty of stories of deep Eastern masters practicing their craft every day. They certainly are thinking about their act - they are not trying their best to "get rid of all their thoughts". These are different activities, each with its own merits, and both very different states from the common state of the modern person.
That being said, meditation and the surrounding ideas have helped me overall, if only because the specific influencers I do value had a good attitude when approaching it. But nowadays I'd imagine it's been silently incorporated into the very underlying forces it was trying to escape ("I have to meditate because it makes me a more improved human being compared to my peers!").
Do you think it's akin to Ilya's [1] claim that next token prediction is reality? E.g. any deeper claims about the structure of that intelligence or comparing to humans?
To be clear, I'm 100% with you that "next token predictor" is stupid to call what these machines are now. We are engineers and can shape the capability landscape to give rise to a ton of emergent behavior. It's kind of amazing. In that sense, being precise about what's going on, rather than being essentialist (technically, yes, the 'actual' algorithm, whatever that even means, is text prediction), is just good epistemology.
I still think it's a very interesting question, though, to ask about deeper emergent structures. To me, this is evidence for a more embedded-cognition kind of theory of intelligence (admittedly this is not very precise). But I don't know how into philosophy you are.
I try really hard not to think about this stuff because I've seen how people talk when they get too deep into it. My mental model, or mental superstructure, if you will, for all of this stuff is that we've discovered a fundamentally novel and effective way of doing computing. Computer science is fascinating and I'm there for it, and prickly when people are dismissive of it. I'm generally not interested in the theory of human intelligence (it's a super interesting problem I just happen not to engage with much), which spares me from a lot of crazy Internet stuff.
I think you're actually making a point, but I still disagree overall.
I do think LLMs are evolving toward this kind of embodied-cognition intelligence, by virtue of how well they interoperate with text. I mean, you don't need to "make the text intelligible" to the LLM; the LLM just understands all kinds of garbage you throw at it.
Now the question is: Is intelligence being able to interoperate?
In the traditional sense, no. Well, in a loose sense, yes, because people would've said intelligence is the ability to do anything, but that's not a useful category (otherwise traditional computer programs would be "intelligent"). When I hear "intelligence", I think something like: the model can represent an objective reality well, it makes correct predictions more often than not, it's one of those fictional characters that gets anything and everything right. That's how it's framed in a lot of pop culture and in a lot of "rationalist" (LessWrong-style) spaces.
But if LLMs can understand a ton of unstructured intent and interoperate with all of our software tools pretty damn well... I mean, I would not call that "a bunch of hacks". In some sense, this is an appeal to the embedded-cognition program: the brain-in-a-vat approach to intelligence fails.
But it clearly enables new capabilities that previously were only possible with human intelligence. In a very blatant negative form: the surveillance state is 100% possible now with AI. It doesn't take deep knowledge of quantum physics to implement - just a large amount of engineering effort, data pipelines and data lakes, and LLMs spread throughout the system, monitoring victims.
So I'd call it intelligence, but with a qualifier, to avoid sliding down slippery slopes. It may even be valid to call the previous notion of intelligence a bad one, sure. But I think the issue you may be running into is that people are conflating all sorts of notions of intelligence.
Now, you can add an ad hoc hypothesis here: in order to interoperate, you have to reason over some kind of hidden latent space in a way no human could before. Being able to interoperate is not orthogonal to general intelligence - it could be argued that intelligence is interoperation.
If you're arguing for embodied cognition, fine, we agree to some extent :)
The fear assumes the AI must be able to emulate, internally, a latent space that reflects some "objective notion of reality". If it could do that, then shit, that just breaks all the victories of empiricism, man. Show me a language model that can sit in a vat and objectively derive quantum mechanics just by thinking about it really hard, with only data from before the 1900s.
I don't think you need to be this caricature of intelligence to be intelligent, is what I'm saying, and interoperability is definitely a big aspect of intelligence.
Now this I can agree with. One thing that is extremely important to maintain with this technology is nuanced perspective. Otherwise, it will lead you astray quickly. It's also a difficult thing for us to maintain.
A one-week turnaround is what they meant, I guess.