Hacker News | braindead_in's comments

Interestingly, all these things are required for loving kindness meditation practice as well.


Why not build a new browser with GPT baked in?


Curious, how would that differ? Assuming it is just grabbing the rendered HTML DOM after each action, isn’t it nearly the same?


Does it respond in correct JSON?


I have been trying to figure out how to fine tune codellama. Will the llama2 examples work for codellama as well?


The universe is an infinite neural network and we are all weights and biases.


So your mum has more weight in the universal decisions?


The 34b Python model is quite close to GPT4 on HumanEval pass@1. Small specialised models are catching up to GPT4 slowly. Why not train a 70b model though?
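For reference, HumanEval's pass@k is usually computed with the unbiased estimator from the Codex paper; a minimal sketch (function name my own):

```python
from math import comb

# Unbiased pass@k estimator from the HumanEval/Codex paper: given n
# samples per problem of which c pass the tests, estimate the chance
# that at least one of k drawn samples passes.
def pass_at_k(n, c, k):
    if n - c < k:
        return 1.0  # too few failing samples to fill a k-sample draw
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 3, 1))  # 3 of 10 samples pass: pass@1 is 30%
```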


From a Nondualist perspective, our brain is a highly complex biological neural network that has a special ability to reflect pure consciousness, giving rise to the mind and the world with it. A sufficiently advanced artificial neural network could also reflect the same consciousness, but their minds and worlds would be entirely unlike ours. However, consciousness will add a random component to the predictions and might make them completely useless as a tool. They might decide not to follow the instruction prompt, given their internal state of mind.

That is a problem even now though. Sometimes LLMs just goof up for no reason. Maybe they are conscious.


> Consciousness will add a random component to the predictions

I get why our form of consciousness is essentially incompatible with determinism (our brains process data in a massively parallel way - minor differences here and there like the length of a particular dendrite, or even quantum effects, will affect the results). But a synthetic consciousness might not have that particular problem. You might be able to "reset" it and get the same result given the same input.
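A toy illustration of that "reset" idea, using a seeded RNG as a stand-in for the synthetic system's internal randomness (all names hypothetical):

```python
import random

# "Resetting" a system with internal randomness: reseeding the RNG
# restores its exact internal state, so the same input replays the
# same trajectory, even though each run looks random from the inside.
def run(seed, steps=5):
    rng = random.Random(seed)  # reset to a known internal state
    return [rng.random() for _ in range(steps)]

first = run(42)
replay = run(42)
print(first == replay)  # True: identical trajectory after a reset
```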

> and might make them completely useless as a tool

Our own brains are non-deterministic and yet we manage to get at least some usage from them.


It would be unfortunate if our brains were actually non-deterministic. Chaos theory is about deterministic systems!

I'm pretty sure the brain is a chaotic system. Mostly because the other options are all <boring>.

Which behavior would you pick?

* static/equilibrium. (doesn't change over time)

* periodic (like a pendulum or sine wave)

* chaotic (unpredictable complex behavior, sensitive to initial conditions)

* stochastic (random noise, possibly subject to statistical analysis)

Only stochastic behavior is non-deterministic, and what we actually want is the chaotic behavior.

https://en.wikipedia.org/wiki/Chaos_theory
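The chaotic case can be illustrated with the textbook logistic map: the rule is fully deterministic, yet a tiny perturbation of the initial condition produces a wildly different orbit. A minimal sketch:

```python
# Logistic map x -> r*x*(1-x): a deterministic rule that is chaotic at r = 4.
def logistic(x, r=4.0):
    return r * x * (1 - x)

def orbit(x0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = orbit(0.3)
b = orbit(0.3)           # identical initial condition: identical orbit
c = orbit(0.3 + 1e-12)   # perturbed by one part in a trillion

print(a == b)                                 # True: determinism
print(max(abs(x - y) for x, y in zip(a, c)))  # large: sensitive dependence
```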


It sounds like you're talking about two different forms of determinism like they're the same thing. There's a theory of computation determinism, but then there's philosophical determinism as it's understood in the context of discussions in free will. I understand those to be different things. Just like randomness isn't agency, neither is parallelization.

That said, I think I agree with the upshot of your point that artificial conscious systems could process things predictably, and that our own brains appear to be perfectly functional despite whatever elements of uncertainty (via our own agency or parallelization or whatever else one thinks is the special thing about consciousness).


Oh I don't believe in "free will" or "agency" at all.

I do believe that in this Universe there are some things that are impossible to predict - that there are sources of "pure randomness". And since we don't exactly know how our brain works, I think it possible that our brains might incorporate some of those unpredictable phenomena. Or not. But that is it. Once those initial theoretical random values "are set", our behavior is deterministic. Inconceivably intricate, almost certainly impossible to model or simulate, but deterministic.


But above you said that our form of consciousness is incompatible with determinism?


Might need more detail on how those two forms of determinism are different. I think a lot of people do consider them the same. Enough people that spend time pondering them, that I don't think it is an idle misunderstanding.

Computers, with computational determinism, don't have free will.

But many consider Human Brains to be based on physics, and are pretty pre-determined, and also don't have free will.

Basically, humans also don't have free will, and both determinisms reduce to the same thing.


> Computers, with computational determinism, don't have free will.

Define (the properties of) free will, if you will.

Depending on your definition, I might be able to come up with code that exhibits those properties to your satisfaction.

A lot of people believe that the choice is between deterministic and stochastic, while there are actually 3 different classes of deterministic behavior (static, periodic, and chaotic), one of which is the most interesting indeed.

Basically a lot of interesting systems are chaotic, and have some similar characteristics. They are nominally deterministic as a prerequisite for being chaotic, but that's about as far as it gets. If you can get the exact same initial conditions twice then yes, you do get the same exact behavior out of a chaotic system (due to the criterion that the system is deterministic), but in practice: good luck with that.


The parent post had said there were two types of determinism and that they were not compatible.

I was just saying, a computer could be made to model a human and appear to have free will. And that would indicate that both types of determinism are the same.

Personally, I agree, a computer can be made to exhibit those properties.

I just take it further: since a human also has those properties and can 'appear to have free will', any outside observer will have to accept that both have free will, or neither does.

There is no extra 'mystical essence' that gives a human agency; we are following our own programming. We need to eat, mate, get triggered; thoughts come un-bidden. Or another way, to your point: there are chaotic or stochastic systems at play, but those are not 'agency'. Somewhere in a human, maybe there is chaos, but that doesn't translate into 'agency'.


Ah! A subtle point.

I'd think that it's a question of definitions at this point. If two systems are both <sufficiently unpredictable>, I would ascribe "free will" and "agency" to both.

Your reasoning seems to be almost exactly the same and equally valid, except all the signs are flipped, so you ascribe those properties to neither.

Either way the two systems are equivalent.


Yes.

Sometimes I like to frame it in the negative.

Because it rarely comes up as an option, that humans might not be 'conscious' either.

I don't mean in a solipsist way. More like in the 'controlled hallucination' framing from Anil Seth. We (human brains) respond to inputs; it is more of a Bayesian, mathematical kind of processing. We aren't really even aware of why we do things; we don't control what we think about. So are we really 'conscious'? And how can we say an AI wouldn't reach at least the same level of 'processing'?


>Might need more detail on how those two forms of determinism are different.

In the theory of computation, you can build model representations of computers in deterministic or non-deterministic versions. The deterministic version can be described as a flowchart of the possible states a computer can be in.

The non-deterministic version allows for the possibility of branching paths and doesn't make assumptions about what state the computer will be in.
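A toy sketch of that distinction (state names and transitions made up): a deterministic machine has exactly one next state per (state, symbol), while a non-deterministic one can branch, so we track the whole set of states it might be in.

```python
# Deterministic: exactly one next state per (state, symbol).
dfa = {('s0', 'a'): 's1', ('s1', 'b'): 's0'}

def dfa_run(state, inp):
    for ch in inp:
        state = dfa[(state, ch)]
    return state

# Non-deterministic: a *set* of possible next states; the machine may
# be in any of them, so we follow all branches at once.
nfa = {('s0', 'a'): {'s0', 's1'}, ('s1', 'a'): {'s1'}}

def nfa_run(states, inp):
    for ch in inp:
        states = set().union(*(nfa.get((s, ch), set()) for s in states))
    return states

print(dfa_run('s0', 'ab'))    # a single, fully predictable state
print(nfa_run({'s0'}, 'aa'))  # a set of several possible states
```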

One problem you sometimes get into when discussing, say, randomness or quantum weirdness, is that they might lead to confusion about philosophical determinism. There's determined in the sense of Laplacian predictable exchanges of cause and effect. But there's also a determined in the sense of determined by physics as opposed to an independent, libertarian form of free will.

Similarly, with computers, a computer architecture could be understood in some sense as being non-deterministic, but it's not the kind of determinism that's at stake in free will debates.


>However, consciousness will add a random component to the predictions and might make them completely useless as a tool.

I think I was with you until this part. I think the best way of making sense of what you mean by "random" is that you are presuming that something with consciousness would have agency, and therefore would be open to making its own independent choices.

Whatever the case with agency, it wouldn't be the same as randomness.


> That is a problem even now though. Sometimes LLMs just goof up for no reason. Maybe they are conscious.

LLMs are neural networks trained on huge volumes of text. They are statistical machines that learn patterns and relationships from the training set. The text is diverse, from fiction to scientific. An LLM predicts (autocompletes) the next word in a sentence based on context from the preceding words.

Hallucination happens because LLMs are designed to generate text that is coherent and contextually appropriate rather than factually accurate. Training data contain inaccuracies, inconsistencies, and fictional content, and the model has no world view, principles, experience, opinions, or any other way to distinguish between fact and fiction. So what it outputs aligns with patterns in the training data, but it isn't grounded in reality.

Putting it differently, LLMs lack ground truth from external sources.
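A toy illustration of "predict the next word from the preceding ones": a bigram counter over a made-up corpus. Real LLMs use neural networks over long contexts, but the objective is analogous - and note that the prediction reflects what is frequent in the training text, not what is true.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in the
# training text, then predict the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat': the most frequent continuation,
                            # regardless of whether it is "correct"
```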


> has a special ability to reflect pure consciousness giving rise to the mind and the world with it.

It's nondualist in the way that it makes the entire world as insubstantial as souls.


The quote "The limits of my language mean the limits of my world" is attributed to Ludwig Wittgenstein, an Austrian-British philosopher. He expressed this idea in his work "Tractatus Logico-Philosophicus."

It's one of Sw Sarvapriyananda's favourite quotes. Wittgenstein was quite Vedantic it seems.


Beautiful. Today is Masik Shivratri. I am planning to stay up all night, do some 5, listen to bhajans and meditate. It's gonna be magical.


What if something is different but nothing has changed in the model? Transformers are non-deterministic: the response to the same prompt may vary slightly, and can be controlled somewhat by the temperature setting. Something could have gone wrong there.
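A rough sketch of how temperature shapes that variability (standard softmax-with-temperature sampling; the logits here are made up):

```python
import math
import random

# Softmax sampling with temperature over made-up next-token scores.
# Low temperature sharpens the distribution toward the top-scoring
# token (near-deterministic); high temperature flattens it (more varied).
def sample(logits, temperature, rng):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return rng.choices(range(len(logits)), weights=[e / total for e in exps])[0]

logits = [2.0, 1.0, 0.1]
rng = random.Random(0)
cold = [sample(logits, 0.05, rng) for _ in range(20)]  # near-greedy
hot = [sample(logits, 2.0, rng) for _ in range(20)]    # more varied

print(set(cold))  # essentially always just the top token
print(set(hot))   # typically a mix of several tokens
```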

