
TL;DR, but the summary's claim of "Socially acquired" seems like a leap to me. Couldn't it also be as simple as children not fully understanding the differences between themselves and animals? For example, my 3-year-old probably thinks dogs can talk and pilot helicopters.


With all of the Disney movies that show animals acting like humans, and knowing how kids love to watch movies on infinite loop, it's not really hard to imagine why.


I often wonder to what degree frequent early exposure to humanoid animals is doing some kind of damage to people's social cognition. I think there's plenty of evidence to be had online that it's doing pretty significant damage to at least some people. But I wonder if it's also behind widespread harmful social effects like the concept of "fur babies".


Now that's a rabbit hole of a theory. As with anything, a "stable" person can live in a world with this kind of make-believe and enjoy the storytelling devices being used, while someone less "stable" may be less able to make these distinctions. Does that mean none of us can have it because of a small number of people? If so, then I would apply this same rule to any and all social media. Clearly, some people can't handle it, so nobody can use it.


Finally getting it.


All this discussion is spot on. The problem is that all these startups now think they're FAANG companies too. You're not Google, dude, and you're probably passing on great hires and making the hiring process harder on everyone.

We recently did a bunch of hiring. Our coding exercise was practical: it involved working through some existing code that had a bug and then extending it, and candidates had to be able to state the problem back to us clearly. Solutions were take-home and submitted via git. It was simple to weed through candidates by looking at the code for 30 seconds. We looked for things like clean code and well-thought-out solutions, and we skipped the silly big-O optimization stuff that every company is obsessed with. We've been very happy with our hires.


From my perspective, the tech hiring process is still mostly broken and cold-hearted. I don't think hiring companies are specifically discriminating against older developers, though; I think it's just difficult to leave a candidate feeling respected after passing on them. I'm not 100% sure, but I imagine the reason the author was passed on had more to do with other things than being senior. You'd have to be stupid to pass on a solid developer who was a good fit just because of age or (reasonable) senior-level salary requirements. That's not how a company wins.

Recently we hired several new developers, an effort I led. I had (mostly) free rein on hiring, so I set out to fix things I felt were broken based on my past experience as a candidate. Following are my major complaints about the tech interview process.

* Most coding exercises I've been given were mostly impractical BS.
* Coding exercises are often used as an easy scapegoat rather than a true measure of skill.
* There were several instances where I thought the coding exercise was in place solely for the interviewer to demonstrate their wit. In one case, after embarrassingly fumbling on the whiteboard for 30 minutes trying to explain an advanced dynamic programming algorithm I'd thrown together, I watched the interviewer smugly walk through the answer, seemingly to show off how much smarter he was. He had clearly spent hours practicing and rehearsing it.
* Companies mostly fail at giving respectful reasons for passing on a candidate.
* Companies focus too much on specific technologies and stacks versus the capability to ramp up.
* Good fit is underrated. I would much rather have a mostly capable developer with a positive can-do attitude than a poisonous genius.
* A college degree is an accomplishment, but there are other ways to reach your goals and achieve competency. I have a master's in software engineering, and I know at least one or two developers with only high school diplomas who were as capable as me, maybe more so.

We are finished with our hiring process and I like to think it was somewhat better than my past experience. We split every interview up into 3 parts.

1) 30-minute phone intro
2) Untimed take-home programming exercise, because nobody develops well like this: https://cdn-static.denofgeek.com/sites/denofgeek/files/2/95/...
3) 1-hour follow-up to the programming exercise (the candidate must explain their submission in detail)

Here are my takeaways from our process and how we tried to rethink the process.

* It's actually hard to properly follow up with every single candidate and give the reasons you're passing on them, and doing it in a timely and respectful manner is harder still. I wish I had done a better job at this.
* The coding exercise we provided was practical and told us everything we needed to know about the candidate's programming level.
* The exercise was provided via GitHub. It had 3 steps, each building on the previous one: simple, medium, harder.
* Afterwards, the candidate was asked to explain the exercise back to us as if we had no prior knowledge and weren't techies, and then to explain the technical implementation.
* Candidates had to write at least 1 test case for each step, because our actual code is heavily tested.
* We gave the candidate at least a full 24 hours to complete it, and they could request more if needed. It's more important to get the answer right than to rush it; people work at different paces. I'm personally a slow developer because I like to take my time thinking through everything, and I like to think I usually get it right at the end of the day. That's how we write software: correct is better than rushed.
* We valued curiosity and honesty and devalued fudging experience. Admit when you don't know something; it probably won't count heavily against you unless it's core to our project.
* Consistent, clean, readable code is very important.


Vim worked out of the box for me on my macbook. I guess someone fixed it?


The built-in vim (/usr/bin/vim), as well as all the ones I've custom-built, respects/uses the ASCII control character values instead of requiring platform-specific keyboard scan codes. Of course, YMMV and all that ;-).
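
If you want to see what vim is actually handed, here's a minimal sketch of my own (plain Python using the standard termios/tty modules, nothing vim-specific): put the terminal in raw mode, read one keystroke, and Ctrl-&lt;key&gt; shows up as an ordinary ASCII control byte.

    # Unix-only: switch the terminal to raw mode and read a single keystroke.
    import os, sys, termios, tty

    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)              # raw mode: keystrokes pass through untranslated
        byte = os.read(fd, 1)       # press e.g. Ctrl-A or Ctrl-[
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)

    print(hex(byte[0]))             # Ctrl-A -> 0x1, Ctrl-[ (Esc) -> 0x1b

No scan codes anywhere; the terminal has already turned the keyboard into plain bytes by the time vim (or this script) reads them.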


I believe an equilibrium would be achieved at a single point of state/time for any of the games you mentioned, the equilibrium being the best next move given what you know about your risk and your opponent.


I believe that the key point here is that as adversarial networks compete against each other repeatedly they eventually will converge on a Nash Equilibrium.

Nash Equilibrium from Wikipedia: "Amy is making the best decision she can, taking into account Phil's decision while Phil's decision remains unchanged, and Phil is making the best decision he can, taking into account Amy's decision while Amy's decision remains unchanged."

At the point this happens, a stable solution exists, which is almost like saying that we have made the most rational decision given the information we had. Nash's novel contribution was to say that if the participants are rational, they will converge on an equilibrium and not get lost in endless recursion, endless recursion being something like the participants endlessly guessing each other's moves... if Amy knows Phil knows Amy knows Phil knows Amy knows Phil... Obviously it would be problematic for DL to plunge into infinite recursion.

I like to think of a Nash equilibrium as a balanced teeter-totter. When the teeter-totter becomes unbalanced, rational systems will relearn to balance the odds again based on the properties of risk (what Nash referred to as utility, I believe). Once they do so, they once again converge on an equilibrium.
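
To make that concrete, here's a small sketch of my own (not from the article): fictitious play on Matching Pennies, where each player best-responds to the empirical frequency of the other's past moves. In a two-player zero-sum game like this, the empirical mixes are known to settle at the Nash equilibrium, here 50/50, where neither player gains by unilaterally changing.

    # Fictitious play on Matching Pennies (row player wins +1 on a match).
    import numpy as np

    A = np.array([[ 1, -1],
                  [-1,  1]])            # row player's payoffs; column player gets -A

    counts_row = np.array([1.0, 0.0])   # seed "beliefs": how often each player has played each move
    counts_col = np.array([0.0, 1.0])

    for _ in range(100_000):
        # Each player best-responds to the opponent's empirical mixed strategy.
        row_move = np.argmax(A @ (counts_col / counts_col.sum()))
        col_move = np.argmin((counts_row / counts_row.sum()) @ A)  # zero-sum: minimize row's payoff
        counts_row[row_move] += 1
        counts_col[col_move] += 1

    print(counts_row / counts_row.sum())  # -> roughly [0.5, 0.5]
    print(counts_col / counts_col.sum())  # -> roughly [0.5, 0.5]

Once both mixes sit at 50/50, neither best response moves them off it - that's the balanced teeter-totter.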


The problem with the statement "as adversarial networks compete against each other repeatedly they eventually will converge on a Nash Equilibrium" is that they often won't, and a big problem in training GANs is ensuring that they converge.

Intuitively, what can happen is that one of the problems is simpler to learn than the other, and when one of the "players" becomes overwhelmingly good, the other part of the network stops receiving useful feedback, can't find a direction for improvement, and learning stops. A "sufficiently smart" method would be able to reach a Nash equilibrium even in this case, but current GAN methods cannot, so you need to take steps to ensure that it doesn't happen - e.g. extra normalization or less training for the "simpler to learn" component.
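
To make the "less training for the simpler to learn component" idea concrete, here's a hedged toy sketch of my own (PyTorch on 1-D data, not taken from any particular GAN paper): the discriminator gets a smaller learning rate and only every other update, so it stays beatable enough to keep handing the generator a useful gradient.

    # Toy GAN: G tries to mimic samples from N(4, 1).
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)   # handicap #1: smaller steps for D
    bce = nn.BCEWithLogitsLoss()

    for step in range(5000):
        real = torch.randn(64, 1) + 4.0                 # real data ~ N(4, 1)
        fake = G(torch.randn(64, 8))

        if step % 2 == 0:                               # handicap #2: D trains half as often
            loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

        loss_g = bce(D(fake), torch.ones(64, 1))        # G tries to get its fakes labeled "real"
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    print(G(torch.randn(1000, 8)).mean().item())        # should drift toward ~4

The extra-normalization route would instead wrap D's layers in something like torch.nn.utils.spectral_norm to cap how sharp D can get.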

It's not enough to train the "adversarial" networks to compete during each inference; on a meta-level you must ensure a form of cooperation so that they are effective teachers for each other during the learning process. For a real-life analogy, there's a difference between effective behavior during combat and effective behavior during sparring.


Yeah, makes sense. It seems like when a network stops providing useful adversarial feedback, that in and of itself is a learning experience for the smarter network. While it didn't reach an equilibrium, it now knows it's at least smarter than the other guy. I feel like it should be able to use that experience of winning to beat the next guy.

It seems like for AI a perfect equilibrium wouldn't be easy to reach; it's often hard for the human brain to reach one, after all. It's the fact that an equilibrium exists, though, and that the network is trained to find it, that generates the knowledge along the way. Kinda like a journey-not-the-destination learning method. I'm just theorizing though.


Doesn't the caveat at the end of your citation from Wikipedia prevent endless recursion? Both players assume that the other player will not change their decision.

