A.I. Has Grown Up and Left Home (nautil.us)
67 points by zbravo on Jan 22, 2015 | hide | past | favorite | 54 comments


I emailed Chomsky a month back about thoughts and language. I hope he doesn't mind that I'm sharing it. My email:

I was fascinated by your talk at the British Academy. I'm no linguist, so please excuse my ignorance.

If formal language is designed not for communication but for our thoughts, is it not the case that there are two different mechanisms of perception: one to understand internal language (if I can call it a language), and one for the external language we use socially? It is bewildering. As I type this email, I'm thinking of the words that need to follow, whether through pattern recognition or through a conceptual understanding of the rules of the external language. I'm not sure if this ability is a natural phenomenon or a self-trained one (as far as English is concerned). English is not my native language; it was acquired by study. Recently my thoughts have been primarily in English, and I can switch between English and my native language for thinking.

Is language the only way to think? Is it crazy to think that our brain does not use language to think at all? Can thought/perception, like language, have two different mechanisms, one for understanding external logic and one for internal logic? The internal system might be visual, physical, etc. It seems to lead to a structure problem: do internal thoughts have a far more complex structure? If dreams are thoughts, they are extremely complex, which might be evidence that there are different systems. I strongly suspect this is because we have different involuntary mechanisms of perception, like the sense of smell.

Chomsky's reply:

Very little is understood about the interesting questions you’re raising. To study them we’d have to have a way of accessing thinking by some means that does not involve conscious use of language – which is probably a superficial reflection of something happening within. But so far there are no good ways.


I tend to think I am an animal with a language faculty, that there is some common experience I share with animals, and that animals are capable of many of the same kinds of practical cognition.

I deny the idea that animals are morally inferior to humans, in the sense that they make worse decisions as to practical ethics.

For instance, a horse will express affection if you feed it, but if you feed a child, it might complain that it isn't right and then wonder why its mom would rather feed the horses.

I think on any continent you will have a much easier time getting killed by annoying a cop than by annoying an animal.


I’ve always found it interesting how children learn a language. After 3 months of crying as the only form of communication, they babble for about 6 months. Then they start to articulate single words (mama, ball etc.)—quite often even with meaning, context and while pointing at something. Once again 6 months later they start to form 2-word sentences (big bubble, my ball, mama ball, ball up etc.).

It seems that babies are already able to reason about the world in a rather complex way; for example, a baby can already think that someone has the ball, but can only express it using its limited vocabulary (mama ball etc.). It seems unlikely to me that once the baby can express the subject–verb–object sentence "mama has (the) ball", the thought processes would suddenly be replaced by grammatical constructs.

It does seem, however, that grammar can be used as a tool to verify thought processes. It casts a thought in a mould, so to speak, and puts it on the examination table of our echoic memory, which has a capacity of about 5-20 seconds. There also seems to be a higher motivation to form logically coherent thoughts when they are articulated, presumably because they are naturally intended to be interpreted by other individuals.


If you're interested in these questions, you should read http://www.amazon.com/Whos-Charge-Free-Science-Brain/dp/0061... to learn more about how brains work.

There is good evidence that different parts of our brain think in very different ways, and we suppress awareness of how different they are. Verbal thinking in particular relies heavily on a section of the brain that is very good at making rationalizations, and is not necessarily well connected to the parts of your brain that are making the decisions you are trying to understand. Therefore our verbal descriptions of our thinking are interesting, but not particularly reliable or informative.

It is worth noting that many negative reviews come from people who think that current thought from philosophical schools should have been included. Given my temperament and beliefs, I took this as a buy recommendation and am happy that I did.


Thanks. Reminds me of this book, which I also highly recommend in the same vein: http://www.amazon.com/The-Master-His-Emissary-Divided/dp/030... .


Thanks, that book looks interesting as well.


Thanks!


Language is not the basis of thought. If someone thinks only with a mental narrative (in words, some language), then they're not really thinking in the broadest sense.


At the risk of a Sapir-Whorf debate, there is newer research that suggests your polemical statement isn't accurate[1], to say nothing of a vast body of Postmodernist and Postcolonial theory[2].

Being emergent/adaptive organisms, we are taught and learn models and metrics, and in turn project those outward upon the world.

Not shocking that the models and metrics form the scaffolding for thought as expressed in language.

“Does Language Shape Thought?: Mandarin and English Speakers' Conceptions of Time”

[1] http://www.sciencedirect.com/science/article/pii/S0010028501... [2] Which, also likely not a coincidence, calls into question the veracity of narrative structure, and instead frames narrative as an ideological tool and construct.


My statement wasn't polemical. And contrary to the body of literature that likes to tout that "all is literature/language", there are many conceptual schemes that do not lend themselves to linguistic expression, which in turn simply implies that such schemes are cognized not in linguistic terms but in meta-linguistic (or non-linguistic) terms. Diagrammatic reasoning is a case in point. Appealing to the cognitive limitations brought on by acculturation isn't a satisfactory refutation. Let there be no doubt: diagrammatic reasoning will figure in AGI.


Same.

Visual, wherever the arbitrarily privileged line is drawn, is another convention.

To suggest that somehow the nebulous “visual domain” doesn't require reading would be to ignore much of art history and theory.

Again, there is no “visual” realm that is not steeped in vernacular; cliche, symbology, context, etc.

In this way, the visual domain is not somehow more elevated / pure / true than any other language, despite the sad reality that many have been unexposed to understanding it as such.


Visual thinking, a more closely related subset of diagrammatic reasoning (which need not be visual, incidentally), is in no shape or form "privileged". Visual thought is in fact closer to the general form of thinking that might be found in other sentient creatures who are not "privileged" with human language. Stated another way, diagrammatic thought is closer in shape and form to domain-general pattern recognition. Reasoning, or "illation", is of a peculiarly non-linguistic nature, such that you'd be hard-pressed to point out, like Achilles to the Tortoise, just what it is that makes a conclusion of reasoning one that necessarily follows from some set of premises, unless you made use of some diagrammatic method.

Not sure why you're arguing a priori that all thought must be "vernacular" - code for "linguistic", I'd assume. But thought is not merely verbalizing or grammatizing everything.


On the same subject: George Lakoff on Embodied Cognition and Language

https://www.youtube.com/watch?v=XWYaoAoijdQ


Language is the basis of thought.


At least in my own experience (admittedly subjective) I am rarely aware of thinking in words.


Some problems can also be solved by visualisation, like finding the shortest route between two points in the city.

Maybe thinking without obvious support is what we call intuition?


Any visualization is also language, and ultimately convention[1]. The privileging of the visual, or even the division of the senses, is a construct.

“I have had an experience of this kind, similar to that reported from many parts of the world by those who have had occasion to show photographs to persons who had never seen a photograph before. To those of us accustomed to the idiom of the realism of the photographic lens, the degree of conventionalization that inheres in even the clearest, most accurate photograph, is something of a shock. For, in truth, even the clearest photograph is a convention; a translation of a three-dimensional subject into two dimensions, with color transmuted into shades of black and white. In the instance to which I refer, a Bush Negro woman turned a photograph of her own son this way and that, in attempting to make sense out of the shadings of greys on the piece of paper she held. It was only when the details of the photograph were pointed out to her that she was able to perceive the subject.” Herskovits, Art and value, from Aspects of Primitive Art, New York, 1959

[1] Most of Herskovits' work focuses on this.


You should have a look at Ithkuil [1].

[1] http://en.wikipedia.org/wiki/Ithkuil


Computers are just as dumb as they were in the 50's. They're just dumb a lot faster.


Some algorithms are smarter. The computer system that figures out how to book flights for millions of people every day is quite intelligent when it comes to that specific task.

The problem is, that intelligence is more like an insect's brain in that it doesn't generalize to any other higher level problems. We're still trying to figure out how to organize knowledge so the intelligence is transferable. Doesn't seem too difficult to do in the next 30 years given currently available tools.


I think the point OP is making is that the underlying technology and "thinking process" behind most AI hasn't changed much in the last 50-60 years. We look at a task like figuring out how to book millions of flights and think "That seems like it would be very difficult for me to do, therefore a computer must be intelligent if it can do it". But that isn't true for all definitions of 'intelligent'. The basic optimization process, once figured out (by humans), is fundamentally the same and is just being applied by the computer at greater scales and speeds. If computers were getting smarter, for some definitions of the term 'smarter', they would be identifying problems to be solved on their own, or at least they would be coming up with novel solutions without any human involvement.

A pen and paper can be used to make powerful calculations, but we don't generally attribute intelligence to the tools themselves. The promise of artificial intelligence was that computers would eventually become capable of taking independent action and doing things that humans couldn't predict or do themselves. To say that we're still trying to figure out how to do that is a bit of an understatement. The time frame for fulfilling that expectation is completely unknowable, at best.


Have you heard about deep learning? Computers are coming up with novel solutions without human involvement. It was done before in the 70s and 80s with neural networks, but it only worked on simple tasks. Now the main limits seem to be computing power (and learning how to train it).


Ah yes. Lucy's infamous "in the next 30 years" moving goal post.


When it comes to CPU cycles or training data, as in so many other things, quantity has a quality all of its own[1].

[1] http://www.quora.com/Who-said-Quantity-has-a-quality-all-its...


Historical correction: The article mentions "AI Winter" as occurring in the 1970s. AI Winter actually refers to the period from about 1990-1997 when the Expert Systems approach was deemed to be a failure, funding for AI dropped considerably, and it was a mistake to put AI on your resume.

This opened a window that allowed the statistical-based AI techniques that currently dominate the field to mature.


As someone who went through Stanford CS in 1983-1985, when the "expert systems" people, led by Feigenbaum, were in charge, I can say that the expert systems crowd was stuck by then, but in denial about it. Feigenbaum was testifying before Congress: "The US will become an agrarian nation" if Congress didn't fund a big national AI lab headed by him. The expert systems crowd were claiming they were headed for strong AI in the near term. They were in complete denial about the fact that expert systems mostly give you back what you put in. You can code up some kinds of "how-to" information as rules, but it's just a form of declarative programming.
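
To make the "declarative programming" point concrete, here is a toy sketch of the flavour of those systems (illustrative only, not the architecture of any real shell such as MYCIN): a handful of hand-written if-then rules plus a forward-chaining loop. Everything such a system can conclude was put in by its authors.

    # Toy rule base and forward-chaining loop (illustrative, not a real expert-system shell).
    RULES = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "see_doctor"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "short_of_breath"}))
    # adds "flu_suspected" and "see_doctor" -- nothing the rule authors didn't put in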

It was frustrating hearing those professors. I'd spent time in industry before going to Stanford for a MSCS, and I could tell that they were stuck and covering for it.

A few years later, Stanford moved computer science from Arts and Sciences to Engineering. This pushed the CS department into more practical directions. The School of Engineering had deans, supervision, and planning. CS under Arts and Sciences was sort of what each professor wanted to teach. There was no undergraduate CS at Stanford before that move; prospective undergrad students were told "get a liberal education".

That didn't end the "AI Winter" at Stanford, though. It took the DARPA Grand Challenge, in 2003-2005, to do that. The head of DARPA, Dr. Tony Tether, used the Grand Challenge to force the big-name AI schools to get some results. DARPA had been funding automatic driving work at Stanford and CMU since the 1960s, with disappointing results. The big-name schools were quietly told that if the private sector and amateurs outdid them in the Grand Challenge, DARPA was turning off their AI funding. Suddenly entire CS departments were devoted to the Grand Challenge. Stanford had to bring in machine learning people from CMU to compete. That's what turned the department around and finally pushed the logicians into the background.


The (extensive) Wikipedia article[1] does not agree: "There were two major winters in 1974–80 and 1987–93 and several smaller episodes..." [1] https://en.wikipedia.org/wiki/AI_winter


We're still figuring out probabilistic knowledge representation. So much depends on context.

Earlier approaches are fraught with problems. Expressing an encoded fact that translates to "Obama is the president" is far too simplistic. It does not take into account the proper context (president of the USA), the time period (from January 20th, 2009 to the present), and so on.

These contexts, such as the conversation (were we talking about politics?) and the location (USA, Russia, etc.), all need to be captured as well as possible and considered appropriately when trying to apply a learned fact. Maybe the context is hypothetical (talking about an alternate history where McCain won the election) or fantastical (talking about President Garrett Walker in the TV show House of Cards).
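
To make that concrete, here is a minimal sketch (the Assertion class and its field names are purely illustrative, not any existing system's schema) of how much context a single encoded fact has to carry before it can be applied safely:

    # Illustrative only: a context-qualified "fact" rather than a bare triple.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Assertion:
        subject: str                      # "Barack Obama"
        predicate: str                    # "holds office"
        value: str                        # "President"
        scope: Optional[str] = None       # "United States"
        valid_from: Optional[str] = None  # "2009-01-20"
        valid_to: Optional[str] = None    # None = still current
        modality: str = "actual"          # "actual", "hypothetical", "fictional"
        source: Optional[str] = None      # conversation, document, TV show, ...

    facts = [
        Assertion("Barack Obama", "holds office", "President",
                  scope="United States", valid_from="2009-01-20"),
        Assertion("Garrett Walker", "holds office", "President",
                  scope="United States", modality="fictional", source="House of Cards"),
    ]

    def lookup(subject, modality="actual"):
        """Return only the assertions that match the conversational context."""
        return [f for f in facts if f.subject == subject and f.modality == modality]

    print(lookup("Garrett Walker"))  # [] unless the caller asks for modality="fictional"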

The existing systems I've seen can't really handle all of that.

Somehow we humans are generally good at figuring out which sets of contexts are appropriate to apply in a given situation, and at doing so quickly. What's tricky is that even in a single conversation with one person, the context may switch from historical to hypothetical and back again quickly.


This is one of the better explanations of this fundamental shift in AI research. I remember even a decade ago this would have been a controversial stance.

Now it is very clear the neats have lost and the scruffies are on the rise. http://en.wikipedia.org/wiki/Neats_vs._scruffies


The article mentions Marvin Minsky as an advocate of symbolic logic approaches, and says that Perceptrons (the book) was an attack on perceptrons, neural nets, and connectionist theories. This is false on both counts.

1) Minsky was an emblematic Scruffy, and to this day Scruffy approaches are associated with Minsky and MIT. He didn't believe that statistical or connectionist approaches were bad, just that they were not a silver bullet and were not going to solve every problem in AI. He has embraced nearly every prominent approach to AI, and simultaneously criticized the near-religious beliefs that they spawn.

2) Minsky didn't attack perceptrons and neural nets. His book was an objective criticism of them, laying out very clearly their mathematical limitations, as well as the philosophical problems with treating them as a catch-all solution.



This shows the folly of modern approaches:

> Pornographic images, for instance, are frequently identified not by the presence of particular body parts or structural features, but by the dominance of certain colors in the images

Neural networks are unsuccessful because they are not detecting anything. They're just making statistical correlations that happen to be right for the specific set of training data they have received. There's no robustness whatsoever. Artificial gullibility.
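
For what it's worth, the kind of colour-dominance heuristic the quoted passage describes can be sketched in a few lines (a toy illustration, not how any production or state-of-the-art system works), and the sketch also shows why such a classifier is so easy to fool: shifting the colour balance alone changes its answer.

    # Toy colour-dominance "classifier" (illustrative only): flag an image when a
    # large fraction of its pixels fall in a rough skin-tone range. Requires Pillow.
    from PIL import Image

    def skin_fraction(path):
        img = Image.open(path).convert("RGB").resize((64, 64))
        pixels = list(img.getdata())
        skin = sum(1 for r, g, b in pixels
                   if r > 95 and g > 40 and b > 20 and r > g and r > b and r - min(g, b) > 15)
        return skin / len(pixels)

    def naive_flag(path, threshold=0.4):
        # The threshold is arbitrary, which is part of the fragility being criticised.
        return skin_fraction(path) > threshold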

If we want to teach computers to think, then we have to teach them to model the physical world we live in. Plus the emotional, social world.

And even that is entirely separate from heavyweight abstract reasoning. Just because a computer can pass the Turing test doesn't mean it can spontaneously diagnose your medical problems, design a suspension bridge, or play Go.


You make some pretty strong attacks on the state of the art approach to computer vision. Could you please provide citations?

As someone working on neural networks, although not really on computer vision, I do not believe several of the things you said are true.

> > Pornographic images, for instance, are frequently identified not by the presence of particular body parts or structural features, but by the dominance of certain colors in the images

> Neural networks

This is definitely not what state of the art neural networks for computer vision do, although I'm sure there are many models out there that do function in that manner.

We know that neural networks learn to detect higher level features. For example, see the excellent visualizations in Matthew Zeiler and Rob Fergus' Visualizing and Understanding Convolutional Networks paper. (http://arxiv.org/pdf/1311.2901.pdf)

> Neural networks are unsuccessful because they are not detecting anything. They're just making statistical correlations that happen to be right for the specific set of training data they have received.

Are you saying that the correlations they learn aren't true on non-training data?

If so, that is false. Any competent person in ML tests on different data than they train on. Well-trained neural networks generalize to new data very well. That's why we use them.
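
To spell out what "test on different data than you train on" means in practice, here is a minimal sketch; scikit-learn, the toy digits dataset, and the small MLP are just convenient stand-ins, not a claim about any particular system.

    # Held-out evaluation: fit on one slice of the data, score on a slice never seen.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)

    print("held-out accuracy:", clf.score(X_test, y_test))  # the number that matters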

> There's no robustness whatsoever. Artificial gullibility.

Are you referring to the adversarial counterexamples section of Szegedy et al's Intriguing properties of neural networks paper? (http://arxiv.org/abs/1312.6199)

This paper shows that one can craft noise that will fool a neural network. But it isn't noise that is found in real life. It's created by an optimization process which is basically looking for ways to fool the network; in effect, it discovers an optical illusion for the network.

(From the manifold perspective, it moves us in a direction perpendicular to the manifold.)
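
For anyone curious what that optimization looks like, here is a sketch of the general idea in PyTorch. It uses the simpler "fast gradient sign" formulation that followed shortly after this paper, not the paper's exact box-constrained L-BFGS procedure, so treat it as an illustration rather than a reproduction.

    # Nudge each pixel slightly in whichever direction increases the network's loss;
    # the result looks unchanged to a human but can flip the model's prediction.
    import torch
    import torch.nn.functional as F

    def adversarial_example(model, image, label, epsilon=0.01):
        """image: (1, C, H, W) tensor in [0, 1]; label: (1,) tensor with the true class."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()  # step along the sign of the loss gradient
        return perturbed.clamp(0.0, 1.0).detach()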


> This is definitely not what state of the art neural networks for computer vision do, although I'm sure there are many models out there that do function in that manner.

I was just quoting the article really.

> Neural networks are unsuccessful because they are not detecting anything. They're just making statistical correlations that happen to be right for the specific set of training data they have received.

Neural networks are moderately good at replicating the part of the brain that identifies visuals, because that is just statistics and neural networks are a slightly awkward way of doing statistics.

But only up to a point, because humans don't sit around all day looking at static images. We humans move around, and the objects we are classifying move. Our world is 4-dimensional, not 2-dimensional.

Perhaps a neural network can tell what a cat looks like, but can it tell the difference between a dead cat and a sleeping one, by watching it breathe?

Neural networks are not good at replicating the parts of the brain that deal with physical properties of the world, because that is not a statistical operation. It's an algorithmic one.

Unfortunately for them, such physical properties are not an afterthought but an inseparable part of the task they are attempting to achieve.


>Perhaps a neural network can tell what a cat looks like, but can it tell the difference between a dead cat and a sleeping one, by watching it breathe?

Yes, yes it can: http://arxiv.org/abs/1411.4389 (Long-term Recurrent Convolutional Networks for Visual Recognition and Description). It's certainly surprising how much of perception can be reduced to classification.


But you can tell porn is porn based on a single 2d image.


Actually, humans often struggle to even agree on what is porn.


Well you can't expect a computer to solve that.


> Neural networks are unsuccessful because they are not detecting anything. They're just making statistical correlations that happen to be right for the specific set of training data they have received. There's no robustness whatsoever. Artificial gullibility.

This is what the brain does, too. All inductive reasoning comes down to statistical correlations. The human brain has NEVER had "robustness".


By robustness I mean the brain is pretty hard to exploit. You can't trick it into thinking an image is pornographic just by changing the color balance.



There's an entire subreddit devoted to a technique for fooling the brain into thinking an image is pornographic by overlaying patterns that obscure clothed sections [1] (and at least one more about more generic optical illusions).

Not quite the color balance, but how anyone can assert the brain is hard to exploit is beyond me.

[1] It's this technique (link is probably NSFW): https://richardwiseman.wordpress.com/2010/09/07/naked-women-... I don't remember the name of the subreddit.


> The brain is pretty hard to exploit.

No it's not, just ask any advertiser.

> You can't trick it into thinking an image is pornographic just by changing the color balance.

If the only data someone has about an image is its colour balance and you asked them to classify which images they think are pornographic, then yes you can.

It's a pretty stupid idea to impose such limitations, though, on humans or ANNs. Even if you did only care about colour balance, you wouldn't need an ANN to tell you that; just scale the image down to 1 x 1 pixel and look at its colour.
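
That "1 x 1 pixel" trick is literally a couple of lines with Pillow (assuming a recent Pillow where the BOX filter is available; BOX simply averages all the pixels):

    from PIL import Image

    def average_colour(path):
        img = Image.open(path).convert("RGB")
        return img.resize((1, 1), Image.BOX).getpixel((0, 0))  # (r, g, b) mean colour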


> > By robustness I mean the brain is pretty hard to exploit. You can't trick it into thinking an image is pornographic just by changing the color balance.

> No it's not, just ask any advertiser.

One of the great things about humans is that they can consider the context of a remark, although apparently this is not universal.


Maybe it's an AI.


The phenomena outlined in http://en.wikipedia.org/wiki/Optical_illusion would seem to instantly belie the claim in your first sentence.


> By robustness I mean the brain is pretty hard to exploit.

As others have pointed out, the complete opposite is true.


> This is what the brain does

Good to know you know what our brain does; science hasn't figured it out yet, but you have...


Well, I don't know it, but if my brain does operate at a higher certainty, that certainty certainly doesn't reach my thoughts.


I am downvoting you because you are factually incorrect on several points, while maintaining an aggressive, self-important tone. Neural networks are incredibly successful. The state of the art in computer vision performs /as well as or better than/ human beings at image classification.


> Just because a computer can pass the turing test, doesn't mean it can spontaneously diagnose your medical problems, design a suspension bridge or play Go.

I think you may have too narrow a view of the Turing test. Part of any reasonably tough Turing test would be the ability of the test subject to learn something new and apply it.


A computer might be able to learn something new. That doesn't mean it'll be particularly good at it.

Our goal is presumably to design computers that can do the specialised jobs that people do. A computer that can pretend to be a novice is not much use in the grand scheme of things.


Seems to me you can buy and sell a face detector (or find one open source on GitHub); all the better if somebody or something somewhere can identify the concept of a face detector.




