Hacker News | new | past | comments | ask | show | jobs | submit | postnihilism's comments

And when that company commits a crime, it should be punished and its stock should lose value.

It seems like you're arguing that shareholders should reap rewards for actions that a company takes but be shielded from negative financial repercussions that result from criminal actions it takes.

This creates a moral hazard similar to the "too big to fail" situation of banks. It incentivizes risky and potentially criminal behavior because ownership is able to capture the value of any upside while being shielded from the downsides of the behavior.


> And when that company commits a crime, it should be punished and its stock should lose value.

I agree. That is part of the risk investors take.


It’s not enough to lose your investment. You should also bear your share of the penalty. The idea of limited liability is heinous.


> "This 'Skinnerism' has been discredited in cognitive psychology decades ago and makes absolutely no biological sense whatsoever for the simple reason that any organism trying to adapt in this way will be eaten by predators before minimizing its "error function" sufficiently."

> "Living learning organisms have limited resources (energy and time), and they cut the search space drastically through shortcuts and heuristics and hardcoded biases instead of doing some kind of brute force optimization."

But those heuristics and hardcoded biases were developed through brute force optimization over the course of billions of years, a massive amount of energy input and many organisms being devoured.
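To make the "brute force over billions of years" framing concrete, here's a made-up toy: a minimal genetic algorithm evolving bit strings toward an arbitrary target pattern, counting how many individuals get "evaluated" (i.e. eaten) along the way. Everything here (the target, population size, mutation rate) is invented purely for illustration.

```python
# Illustrative only: evolution as expensive population-level search.
# Every discarded individual is an "organism devoured" on the way to
# one genome that fits its environment.
import random

random.seed(1)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary "environment"

def fitness(genome):
    # Number of positions matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit independently with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
evaluations = 0

while True:
    evaluations += len(population)
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Top 5 survive unchanged; the rest are replaced by mutated offspring
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(evaluations)  # total "organisms" spent finding one good genome
```

Even for an 8-bit "environment" the search burns through many candidates; the point is that the cost scales with the search space, not with any single organism's lifetime.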


> But those heuristics and hardcoded biases were developed through brute force optimization over the course of billions of years, a massive amount of energy input and many organisms being devoured.

This is true in the context of the universe as a whole, not by the organism itself.


Except no organism is born a blank slate. The parent is correct in that our prior was massively expensive to construct.


So we can expect our ANNs to yield AGI in a few million or billion years? That doesn’t sound like a good place to put our current efforts, then.


That does not necessarily follow, as I imagine you well know.


Why wouldn't it follow? Human intelligence evolved in the real world with all its vast information content. Deep learning systems are only trained on a few terabytes of data of a single type (images, text, sound, etc.). Even if they can be trained faster than the rate at which animals evolved, their training data is so poor compared to the "data" that "trained" animal intelligence that we'll be lucky if we can arrive at anything comparable to animal intelligence by deep learning in a billion years.

Or unlucky, as the case may be.


You elided the "necessarily".

One can rationally argue either way over the speculative proposition that reinforcement learning will yield AI in less than a few million years, but that it took evolution half a billion years is hardly conclusive, and certainly not grounds for stopping work.


Not grounds for stopping work[1], but perhaps grounds to explore other avenues[2] to see if something else might yield faster results.

I’m no expert, but my personal opinion is that AGI will probably be some hybrid approach that uses some reinforcement learning mixed with other techniques. At the very least, I think an AGI will need to exist in an interactive environment rather than just be trained on preset datasets. Prior context or not, a child doesn’t learn by being shown a lot of images; it learns by being able to poke at the world to see what happens. I think an AGI will likely require some aspect of that (and apply reinforcement learning that way).

But like I said, I’m no expert and that’s just my layperson opinion.

[1] if the goal is AGI, if it’s not then of course there’s no reason to stop

[2] some people are doing just that, of course
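The "poke at the world" idea above is basically the standard reinforcement-learning loop. Here's a hypothetical minimal sketch (all names and numbers invented for illustration): a tabular Q-learning agent in a tiny corridor world, which learns which way to walk purely from acting and observing outcomes, never from a preset dataset.

```python
# Toy sketch of learning by interaction: tabular Q-learning
# in a 5-cell corridor with a reward at the far right end.
import random

random.seed(0)

N_STATES = 5          # corridor cells 0..4; reward waits at cell 4
ACTIONS = [1, -1]     # step right or left

def step(state, action):
    """The 'world' the agent pokes at: move, clip to bounds, reward at the end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit, occasionally poke randomly
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        # Standard Q-learning update (learning rate 0.5, discount 0.9)
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += 0.5 * (r + 0.9 * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy heads right from every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never shown labeled examples; the policy emerges entirely from trial, error, and reward, which is the contrast the comment draws with training on static datasets.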


Fair enough, though I do not think the evidence from evolution moves the needle much with respect to the timeline. For one thing, evolution was not dedicated to the achievement of intelligence.


Sounds reasonable.


>> You elided the "necessarily".

Well, if it follows, then it follows necessarily. But maybe that's just a déformation professionnelle? I spend a lot of time working with automated theorem proving, where there are no ifs and buts about conclusions following from premises.


If I am not mistaken, it does not necessarily follow unless it turns out to be a sound argument in every possible world.


Ah, so you are making a formal argument? In that case you should stick to formal language. And probably publish it in a different venue :)


No, I am simply responding to your rather formal point, in kind. Unless you are arguing for it being an established fact that the time evolution took to produce intelligent life rules out any form of reinforcement learning producing AI in any remotely reasonable period of time, then that original point of yours does not seem to be going anywhere.

In your work on theorem proving, am I right in guessing that there are no 'ifs' or 'buts' because the truth of premises is not an issue? In the "evolution argument", the premises/lemmas are not just that evolution took a long time, but also something along the lines of significant speedup not being possible.

You might notice that in another comment, I suggested that we might still be in the AI Cambrian. I'm not being inconsistent, as no-one knows for sure one way or the other.


I didn't make a formal point - mine is a comment on an internet message board, where it's very unlikely to find formal arguments being made. But perhaps we do not agree on what constitutes a "(rather) formal point"? I made a point in informal language and in a casual manner and as part of an informal discussion ... on Hacker News. We are not going to prove or disprove any theorems here.

But, to be sure, as is common when this kind of informal conversation suddenly sprouts semi-formal language, like "argument", "claim", "proof", "necessarily follows" etc, I am not even sure what exactly it is we are arguing about anymore. What exactly is your disagreement with my comment? Could you please explain?


"Necessarily" has general usage as well, you know... why would you read it otherwise, especially given the reasonable observation you make about this site? And my original point is not actually wrong, either: whether reinforcement learning will proceed at the pace of evolution is a topic of speculation - it is possible that it will, and possible that it will not.

Insofar as I have an issue with your comment, it is that it is not going anywhere, as I explained in my previous post.


>> Insofar as I have an issue with your comment, it is that it is not going anywhere, as I explained in my previous post.

I see this god-modding of my comment as a pretend-polite way to tell me I'm talking nonsense, one that seems designed to avoid criticism for being rude to one's interlocutor on a site that has strong norms against that sort of thing, but without really trying to understand why those norms exist, i.e. because they make for more productive conversations and less wasting of everyone's time.

You made a comment to say that unless I claim that X (which you came up with), then my comment is not going anywhere. The intellectually courteous and honest response to a comment with which one does not agree is to try to understand the reasoning of the comment, not to claim that there is only one possible explanation and therefore the comment must be wrong. That is just a straw man in sheep's clothing.

And this is not surprising given that it comes on the heels of nitpicking about supposedly important terminology (necessarily!). This is how discussions like this one go, very often. And that's why they should be avoided, because they just waste everyone's time.


"Necessarily", when read according to your own expectations for this forum, made an important difference to my original post (without it, I would have been insisting that the issue is settled already), so it was reasonable for me to point out its removal. The nitpicking over it began with your response to me doing so, and you have kept it going by taking the worst possible reading of what I write. This is, indeed, how things sometimes go.

Meanwhile, in a branching thread, I had a short discussion with the author of the post I originally replied to, in which I agreed with the points he made there. Both of us, I think, clarified our positions and reached common ground. That is how it is supposed to go.

I did not set out to pick a fight with you, and if I had anticipated how you would take my words, I would have phrased things more clearly.


I think the point huh is making is that individual people (or models) don't learn that way. It’s not like models training models, all the way down.


Individual people are not trained from scratch. ML models often have to be (modulo fine-tuning) since the field is still young.


That's already changing. That we have only relatively recently moved beyond always starting from scratch might indicate that we are still in the Cambrian of AI, however...
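The "not starting from scratch" point can be caricatured with a deliberately tiny, made-up example: gradient descent on a one-parameter squared error, comparing a blank-slate start against a start inherited from a related task. All numbers here are invented; the only claim is the qualitative one, that a good prior cuts the remaining search.

```python
# Made-up caricature of fine-tuning vs. training from scratch:
# a parameter that begins near the target converges in far fewer steps.

def steps_to_fit(w, target, lr=0.1, tol=1e-3):
    """Gradient descent on (w - target)**2; return steps until within tol."""
    steps = 0
    while abs(w - target) > tol:
        w -= lr * 2 * (w - target)  # gradient of the squared error
        steps += 1
    return steps

from_scratch = steps_to_fit(0.0, 2.0)   # blank slate
fine_tuned = steps_to_fit(1.9, 2.0)     # "prior" inherited from a related task

print(from_scratch, fine_tuned)
```

Real fine-tuning operates on millions of parameters rather than one, but the economics are the same: the pretrained starting point pays for most of the journey.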


kbensons is explicitly referencing a legal case: https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...


Given the relationship the current US administration has with Russia, it seems like Facebook would be a less welcome alternative for a pro-democracy Belarusian protestor.


Agreed, but the parking lot example seems like a clear example of public information.


And immediately the single most important aspect of financial success in the United States becomes having golf buddies that work in finance and M&A.


> ... becomes ...

What do you think the best predictor is at the moment?

Do you think it is even hypothetically possible that these people aren't comparing notes over golf?

And why is it objectionable? There are enormous information imbalances out there in the market - why be upset about this one? CEOs aren't the biggest fish in the pond.


It's currently a good predictor (socio-economic clustering), but it's less directly causal than "find the right buddy -> immediately become wealthy".

It's objectionable because it introduces yet another mechanism for dramatically increasing inequality and economic stratification while also making capital markets more corrupt and less efficient, which undermines core aspects of our economic system.


True, but it's not due to insider trading, rather to all the other 'regular' insider stuff.


General intelligence in its biological form was achieved with hundreds of millions of years of evolution, which required the "evaluation" of trillions and trillions of instantiations of nervous systems. The total energy consumption of all those individual organisms was many, many orders of magnitude more than all of the energy that has been produced by the entirety of humanity.


Are you describing yourself as European with a capital E?


You think his marketing team randomly chose from a selection of polygons to run as the primary image on a national ad campaign? The Trump administration loves their dog whistles.


If Trump got a swastika face tattoo and posted a video of himself goose-stepping on the White House lawn, do you think we should see that, or should the media censor and hide that from us?


Are you asking me if I think Facebook should have to host that content? Naw, seems like a better fit for tiktok.


I support your position of outlawing lobbyists, overturning Citizens United and instituting a blanket ban on political contributions by corporations.

