Hacker News | orobus's comments

I'm disappointed this wasn't a Nelson Goodman reference.


I can’t figure out how to get VisiData’s frequency-analysis histograms to render properly. Some render as blocks, as expected, but others show up as diamond question marks (the Unicode replacement character). I know it’s a silly hang-up, but it’s what’s keeping me on iTerm. It’s also seemingly impossible to troubleshoot by searching; at least I haven’t figured out the right keywords.


Sounds like your main and fallback fonts may be missing the code points in question.

I am not a Ghostty user, but I think it has a setting for that, if you can figure out which font iTerm2 is using to render those glyphs.


Try asking on https://github.com/saulpw/visidata; the community there can help.

I am seeing this issue on a Mac with Ghostty but not with WezTerm, using the same binary. Ghostty shows some characters correctly and others as diamond question marks, and a given line usually contains one or the other. The Ghostty inspector confirms the diamonds are not the block characters.


The conditional relation represented in Prolog, and in any deductive system, is material implication (¬P ∨ Q), not causation. You can encode causal relationships with material implication, but you’re still going to need to discover those causal relationships in the world somehow.
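To make that concrete, here is a minimal truth-table sketch (my own illustration) of material implication as a pure truth function, with no causal content:

```python
# Material implication "if P then Q" is truth-functionally (not P) or Q.
# It holds whenever P is false, regardless of any causal link between P and Q.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5} P->Q={implies(p, q)}")
```

Note the two bottom rows: the implication is true whenever P is false, which is exactly the behavior a causal reading cannot explain.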


Conditional statements don't really work here, because "if A, then B" means that A is sufficient for B, while "A causes B" doesn't imply that A is sufficient for B. Consider "smoking causes cancer": smoking is only a partial cause of cancer, and cancer only partially an effect of smoking.

"A causes B" usually implies that A and B are positively correlated, i.e. P(A and B) > P(A)×P(B), but even that isn't always the case, namely when there is some common cause which counteracts this correlation.

Thinking about this, it seems that if A causes B, the correlation between A and B is at least stronger than it would have been otherwise.

This counterfactual difference in correlation strength is plausibly the "causal strength" between A and B. Though it doesn't indicate the causal direction, as correlation is symmetric.
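A quick Monte Carlo sketch of the partial-cause point (toy probabilities of my own choosing): A raises the probability of B without being sufficient for it, and the two still come out positively correlated:

```python
import random

random.seed(0)
N = 100_000

# Toy model: A is a partial cause of B, like "smoking causes cancer".
# P(B|A) = 0.3 and P(B|not A) = 0.1, so A is far from sufficient for B.
count_a = count_b = count_ab = 0
for _ in range(N):
    a = random.random() < 0.5
    b = random.random() < (0.3 if a else 0.1)
    count_a += a
    count_b += b
    count_ab += a and b

p_a, p_b, p_ab = count_a / N, count_b / N, count_ab / N
# P(A and B) ≈ 0.15 exceeds P(A)·P(B) ≈ 0.10: positive correlation
print(p_ab, p_a * p_b)
```

Flipping the sign of the dependence (a common cause pushing B down when A is present) would give the counteracted-correlation case mentioned above.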


I didn't say one does not need to discover the causal relationships; but once discovered, such relationships can be explored, followed, and _inferred_ on in a very syllogistic manner. My comment was really about the proposal in the article.

On the other hand, what we seem to have with LLMs, and the transformer approach in particular, is a sort of probable statistical correlation, computed by brute force and approximation (gradient descent). So this is not true causation either; it becomes causation only after a human observes it and agrees that it follows a certain causality.

/Not sure whether I can say it is also "material" in some other, non-logical sense; that would perhaps sound nonsensical. But the apparent logical structure in LLM output emerges from training patterns, not from explicit logical operations./

There's nothing wrong with having a graphical structure that models causality, and of course this needs to be discovered first. But consider LZW/Sequitur, which use a very brute-force approach to find a minimal grammar for compressing data losslessly, thus discovering some logical structure (and correlation); this is not yet causation. Indeed, finding patterns != finding causal relationships.
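For illustration, here is a sketch of plain LZW (not Sequitur): the encoder discovers repeated substrings purely by pattern matching, and nothing in the resulting code table distinguishes causal structure from mere repetition:

```python
# Minimal LZW encoder: greedily extends a dictionary of previously seen
# substrings. It finds repeated structure (correlation) in the data, but
# the code table says nothing about why the repetitions occur.
def lzw_encode(data: str) -> list[int]:
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for ch in data:
        if w + ch in table:
            w += ch
        else:
            out.append(table[w])
            table[w + ch] = len(table)
            w = ch
    if w:
        out.append(table[w])
    return out

print(lzw_encode("abababab"))  # [97, 98, 256, 258, 98]
```

Codes 256 and 258 stand for the learned substrings "ab" and "aba": pure repetition detection, no causal content.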

My gut feeling is that we want something that results in a correct Prolog-like set of inference rules, but based on actual causality, not conflated correlation. And then we would apply this to a larger corpus, the world's knowledge; but we don't have the means (yet) to figure out the causal structure at that scale, even though approaches exist for smaller corpora.

It is perhaps gradient descent, and the fact that this composition of tensor algebra is differentiable, that is the ingenious thing about the ML we deal with now. But everyone is dreaming of some magic algorithm that would allow finding the causation, so that it results in a non-probabilistic graphical model, or at least a model whose stochastic branching we can follow in an observable manner.
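A one-variable toy (my own, not a claim about any particular system) shows how little gradient descent needs beyond a local slope:

```python
# Gradient descent on f(x) = (x - 3)^2: a purely local, numeric update rule
# that finds the minimum with no symbolic or logical reasoning involved.
def minimize(lr: float = 0.1, steps: int = 100) -> float:
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3)  # analytic derivative of (x - 3)^2
        x -= lr * grad
    return x

print(minimize())  # converges to about 3.0
```

Everything the procedure "knows" is the derivative at the current point; scale that up to billions of parameters and you have the training loop, still with no causal model anywhere in sight.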

It is indeed ingenious to fold multi-dimensional spaces, multiple times, in order to disambiguate the curvature of a bunny's ear from that of a bear's ear. But it just does not feel right to do logic and causation by means of differential calculus and stochastic structures.


If you were actually traveling at the speed of light it wouldn't feel like anything at all! Photons don't 'experience' time—any length trip would be instantaneous from the traveler's point of view.
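The standard time-dilation formula makes this concrete; in this sketch v is given as a fraction of c:

```python
import math

# Proper time experienced by a traveler: tau = t * sqrt(1 - v^2/c^2).
# As v approaches c the factor goes to 0, so a photon's "trip time" is zero.
def proper_time(t_seconds: float, v_fraction_of_c: float) -> float:
    return t_seconds * math.sqrt(1 - v_fraction_of_c ** 2)

print(proper_time(1.0, 0.99))    # about 0.141 s per coordinate second
print(proper_time(1.0, 0.9999))  # about 0.014 s
```

At exactly v = c the factor is zero, which is the sense in which the trip "wouldn't feel like anything at all."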


Does Amazon Neptune not count?


One of my professors liked to translate it as “un-stirred-up-ness” which I always loved.


I’m not a neuroscience expert, but I do have a degree in philosophy. The Russell quote immediately struck me as misleading (especially without a citation). The author could show more integrity by including Russell’s full quote:

> Language serves not only to express thoughts, but to make possible thoughts which could not exist without it. It is sometimes maintained that there can be no thought without language, but to this view I cannot assent: I hold that there can be thought, and even true and false belief, without language. But however that may be, it cannot be denied that all fairly elaborate thoughts require words.

> Bertrand Russell, Human Knowledge: Its Scope and Limits, Part II: Language, Chapter I: "The Uses of Language", p. 60 (Simon and Schuster, New York).

Of course, that would contravene the popular narrative that philosophers are pompous idiots incapable of subtlety.


Is Russell aligned with Ludwig Wittgenstein’s statement, "The limits of my language mean the limits of my world."? Is he talking about how to communicate his world to others, or is he saying that without language internal reasoning is impossible?

Practically, I think the origins of fire-making abilities in humans tend to undermine that viewpoint. No other species is capable of reliably starting a fire with a few simple tools, yet the earliest archaeological evidence for fire (1 mya) could mean the ability predated complex linguistic capabilities. Observation and imitation could be enough for transmitting the skill from the first proto-human who successfully accomplished the task to others.

P.S. This is also why Homo sapiens should be renamed Homo ignis IMO.


I think it’s a nicely summarised challenge to boot.

I have no doubt that thinking happens without intermediary symbols; but it’s also obvious that I can’t think deeply without the waypoints and context that symbols provide. I think it is a common-sense opinion.


"Language" is a subset of "symbols". I agree with what you said, but it's not representative of the quote in GP.

Just a few days ago there was "What do you visualize while programming?", and a few of us in the comments noted that, when programming, we think symbolically without language: https://news.ycombinator.com/item?id=41869237


I've got a background in philosophy and I'm constantly asking myself these questions too. There seems to be a two-way failure: first, the ML folks fail to engage with the extant literature, and second, academic philosophy fails to produce anything remotely resembling concrete, practical, or even relevant philosophical work. The former is pretty much par for the course (in academic philosophy you get used to being ignored early on), but it's the latter I find egregious, especially given (as others here have pointed out) the almost universally lazy and magical thinking in the ML space. For example, there's much hand-wringing about the benefits and perils of "AGI" with barely any attempt to establish that "AGI" is even a coherent concept. I'm skeptical that it is, but I'd be happy to entertain arguments to the contrary, if there were any! "AI" has become a marketing term for increasingly sophisticated statistical methods for approximating functions. Some sober discussion about whether such brute-force induction is the right sort of thing to warrant the term "AI" would be a welcome addition.

