drdeca's comments | Hacker News

They didn’t say “statistical model”, they said “linear algebra”.

It very much appears that time evolution is unitary (with the possible exception of the Born rule). That’s a linear algebra concept.
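
For reference, a minimal sketch of the standard statement (textbook quantum mechanics, not anything specific to the parent comment): the state evolves by a unitary operator, and unitarity is precisely a linear-algebra property.

    |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \quad U(t) = e^{-iHt/\hbar}, \quad U(t)^{\dagger}\,U(t) = I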

Generally, the structure you describe doesn’t match the structure of the comment you’re attributing it to.


Ok, how about "a pile of linear algebra [that is vastly simpler and more limited than systems we know about in nature which do experience or appear to experience subjective reality]"?

Context is important.


> If that were the case, we would only really love the games we grew up with.

I’m not sure that’s true? Like, perhaps the preference generalizes from the several games one did play as a child to other games similar to those, with the preference still being a result of which games one played as a child.



That’s not what moral relativism is.

Utilitarianism, for example, is not (necessarily) relativistic, and would (for pretty much all utility functions that people propose) endorse lying in some situations.

Moral realism doesn’t rule out general principles that are usually right about what is right and wrong but that have some exceptions. It means that, for at least some cases, there is a fact of the matter as to whether a given act is right or wrong.

It is entirely compatible with moral realism to say that lying is typically immoral, but that there are situations in which it may be morally obligatory.


I think you are interpreting “absolute” in a different way?

I’m not the top level commenter, but my claim is that there are moral facts, not that the morally correct behavior in every situation is determined by simple rules such as “Never lie”.

(Also, even in Kant’s argument about that case, his claim isn’t that you must let the axe murderer in, or even that you must tell him the truth, only that you mustn’t lie to him. Don’t attack a straw man. He does say it is permissible for you to kill the axe murderer in order to save the life of your friend. I think Kant was probably incorrect in saying that lying to the axe murderer is wrong, and in such a situation it is probably permissible to lie to the axe murderer. Unlike most forms of moral anti-realism, moral realism allows one to have uncertainty about what things are morally right.)

I would say that if a person believes that, in the situation they find themselves in, a particular act is objectively wrong for them to take (wrong independently of whether they believe it to be), and if that action is not in fact morally obligatory or supererogatory, and the person is capable (in some sense) of not taking that action, then it is wrong for that person to take that action in that circumstance.


An RT is visible in the feed when following someone. Public likes are visible when going to their account and viewing their list of likes. (When they put both in the feed, it’s just dumb.)

Private likes are different from bookmarks in that a post shows how many likes it got, but not how many bookmarks.


“prompt topology”?

This all sounds like spiralism.


I’m also an introvert, I think, but I do feel I would be better off with more socialization than I’m getting.

Though, I’m also single, so, maybe I wouldn’t feel such a need if I were married? Idk.


This comment I'm making is mostly useless nitpicking, and I overall agree with your point. Now I will commence my nitpicking:

I suspect that it may merely be infeasible, not strictly impossible. There has been work on automatically proving that an ANN satisfies certain properties (iirc e.g. some kinds of robustness to some kinds of adversarial inputs, for handling images).
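
As a concrete toy illustration of that kind of property-proving (a sketch only; the network, weights, and the certify helper are made up here, not taken from any of the work alluded to above), interval bound propagation can certify that no input within a small L-infinity ball changes a tiny ReLU network’s prediction:

    import numpy as np

    def interval_linear(lo, hi, W, b):
        # Propagate an axis-aligned input box through the affine map W @ x + b.
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def certify(x, eps, W1, b1, W2, b2, target):
        # True if every input within L-infinity distance eps of x is provably
        # classified as `target` by the two-layer ReLU net. Sound but incomplete:
        # False only means "could not prove it".
        lo, hi = x - eps, x + eps
        lo, hi = interval_linear(lo, hi, W1, b1)
        lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone
        lo, hi = interval_linear(lo, hi, W2, b2)
        return all(lo[target] > hi[i] for i in range(len(lo)) if i != target)

    # Hypothetical toy usage:
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
    x = rng.normal(size=4)
    pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))
    print(certify(x, 0.01, W1, b1, W2, b2, pred))

Real verification tools are far more sophisticated than this, but the basic shape (sound bounds, or exact constraints, propagated layer by layer) is similar.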

It might be possible (though infeasible) to have an effective LLM along with a proof that e.g. it won't do anything irreversible when interacting with the operating system (given some formal specification of how the operating system behaves).

But, yeah, in practice I think you are correct.

It makes more sense to put the LLM+harness in an environment that ensures you can undo whatever it does if it messes things up, than to try to make the LLM such that it certainly won’t produce outputs that mess things up in a way that isn’t easily revertible, even if the latter does turn out to be possible in principle.
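
A minimal sketch of that “undoable environment” idea (the run_agent callable is hypothetical and stands in for whatever the LLM+harness does): run the agent against a disposable copy of the working directory, and only promote the result if you approve of it.

    import shutil
    import tempfile
    from pathlib import Path

    def run_in_sandbox(workdir, run_agent, keep=False):
        # Copy the real directory somewhere disposable and let the agent act there.
        tmp = Path(tempfile.mkdtemp(prefix="llm-sandbox-"))
        sandbox = tmp / "work"
        shutil.copytree(workdir, sandbox)
        run_agent(sandbox)                    # the agent's file changes land in the copy
        if keep:                              # promote only after review
            shutil.rmtree(workdir)
            shutil.copytree(sandbox, workdir)
        shutil.rmtree(tmp)                    # otherwise the changes are simply discarded

(Copy-on-write filesystems, VM snapshots, or containers get the same effect with less copying; the point is just that reversibility comes from the environment rather than from a proof about the model.)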


And? Do the arguments differ for LLMs vs. the other models?

I guess the arguments sometimes mention languages. But I feel like the core of the arguments is pretty much the same regardless?


The discussion is about training an LLM on old text and then asking it about new concepts.


AlphaGo was trained on old games and then presented with a game it had never seen before. It came up with a move that a human would not have played.

