
Can you describe how you test for independent thinking? What does 'independent' mean, anyway? A decision tree can rationalize its outcomes; does that count towards sentience? Is there an academic source for this definition of sentience?

> Telling this extra information is just communicating that we spent enough time on it to give a somewhat informed discussion.

Doing so is an appeal to authority. When a claim of authority is made, it's very natural for it to be questioned. And those questions are based on a subjective perception of authority.



> Can you describe how you test for independent thinking?

Let's take a simple case: if I give a set of toy blocks to an infant, they could attempt a variety of tasks unprompted: building new shapes, categorizing the blocks by color, putting them back on a shelf, calling out their shapes. If you gave the same setup, without any further a priori information, what would you expect an ML model or a robotic device embodying a learning algorithm to do? Precisely nothing, unless a task is designated. At the current state of ML advancement, this setup would lead nowhere; we aren't close to building the independent-thinking capabilities of a toddler. If we define a purpose, a model can match or exceed expectations - that is the point of the embodied VQA direction in current research.
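To make that concrete, here's a toy sketch (the block names, action set, and update step are entirely made up for illustration, not any real system): a learning agent's behaviour is arbitrary until someone designates a task, because the update step needs an objective and the block world by itself doesn't supply one.

    # Toy sketch (hypothetical throughout): why an agent does "precisely nothing"
    # useful unless a task is designated - the learning step needs a reward,
    # and the block world alone doesn't provide one.
    import random

    BLOCKS = ["red_cube", "blue_cube", "green_cylinder"]
    ACTIONS = ["stack", "sort_by_color", "put_on_shelf", "name_shape", "noop"]

    def policy(observation, weights):
        # Untrained weights: behaviour is arbitrary, not purposeful.
        return random.choice(ACTIONS)

    def update(weights, observation, action, reward):
        # Placeholder learning step: it cannot run without a designated reward.
        if reward is None:
            raise ValueError("no task designated - nothing to optimize")
        return weights  # a real implementation would apply a gradient step here

    weights = None
    action = policy(BLOCKS, weights)
    print("unprompted action:", action)   # arbitrary, not independent thinking

    try:
        update(weights, BLOCKS, action, reward=None)  # nobody defined a task
    except ValueError as err:
        print("learning stalls:", err)

The infant brings its own objectives to the blocks; the agent has to be handed one.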

> Doing so is an appeal to authority. When a claim of authority is made, it's very natural for it to be questioned.

You're welcome to question any claims; that's an incentive for me and it makes me happy. It shows someone is willing to constructively discuss what I've learned. It's a win-win, as I see it.

But I see mentioning the credential disclaimer as a form of self-preservation. It doesn't feel nice to explain something to others with utmost sincerity and then be called a "garden variety fraud" for no rhyme or reason whatsoever (it happened somewhere in this very HN post).


> If you gave the same setup, without any further a priori information, what would you expect an ML model or a robotic device embodying a learning algorithm to do? Precisely nothing, unless a task is designated.

Google's DeepDream liked to draw dogs.

But also, we don't really run most of these ML models in a way that gives them an opportunity to form their own thoughts.

A typical GPT-3 run consists of instantiating the model, forcing it to experience a particular input, reading its 'reaction' off a bunch of output neurons, then euthanizing it.
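The whole 'life' of the model during such a run looks roughly like this (a sketch using GPT-2 via Hugging Face transformers as a stand-in for GPT-3; the specific model, prompt, and generation settings are my assumptions - the point is only the statelessness):

    # Sketch: instantiate, force an input on it, read the "reaction", discard.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")         # "wake it up"

    inputs = tokenizer("The toy blocks on the table are", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)   # make it 'experience' the input
    print(tokenizer.decode(outputs[0]))                     # read off its 'reaction'

    del model  # and that's it - nothing persists into the next run

There's no persistent state between calls, so there's nowhere for a train of thought to live.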

If you did the same sort of thing with a human mind - waking it from a coma, blasting a blipvert of information into the visual cortex, reading the motor-neuron states off, then pulling the plug on it again - you likely wouldn't see much sign of 'independent thinking' either.



