Hacker News | dhm's comments

Should political contributions made in private be grounds for demoting or firing someone?


I am surprised the committees were "tasked with a 22.5% acceptance rate". Couldn't more than 77.5% of the submissions have been of poor quality?


NIPS gets a ton of submissions, so the law of large numbers governs pretty strongly. Imagine that each paper submitted is independently either good or bad, with 22.5% probability of being good. With 1660 submissions, the total number of good papers follows a Binomial(1660, 0.225) distribution, which has mean 374 and standard deviation 17. Under this model, the fraction of good papers would be somewhere in the range 20.5-24.5% (corresponding to a two-standard-deviation window around the mean) in 95% of reviewing cycles. So even though the quality of the individual papers is totally random, the randomness mostly "cancels out" and the overall number of good submissions is relatively constant.
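The back-of-the-envelope numbers above can be checked directly. This is a sketch of the idealized model described in the comment (each submission independently "good" with probability 0.225), not NIPS's actual review process:

```python
import math

# Idealized assumption from the comment: each of n submissions is
# independently "good" with probability p, so the number of good
# papers is Binomial(n, p).
n, p = 1660, 0.225

mean = n * p                       # expected number of good papers
sd = math.sqrt(n * p * (1 - p))    # standard deviation of that count

# Two-standard-deviation window around the mean, expressed as a
# fraction of all submissions (covers ~95% of cycles under the model).
lo = (mean - 2 * sd) / n
hi = (mean + 2 * sd) / n

print(round(mean))             # 374
print(round(sd))               # 17
print(f"{lo:.1%} - {hi:.1%}")  # 20.5% - 24.5%
```

This reproduces the figures in the comment: a mean of about 374 good papers, standard deviation about 17, and a 95% range of roughly 20.5–24.5% of submissions.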

Of course this is assuming an objective standard for what constitutes a "good" paper. As others have pointed out, the only really meaningful standard is "how does this paper compare to other work being done in this field"? So it's also reasonable to think of NIPS's goal as just trying to present the best papers that were written in any given year, not as bestowing a strictly-defined stamp of objective quality.


When I've arranged conferences, we had a certain number of time slots. It's a bit flexible, in that we can decide to allocate a longer time for two talks, or shorter time for three, depending on the talks.

It could be that they had a first pass at a schedule, used that to set a first cut for the reviewers, then adjusted the schedule once they figured they needed to add another 42 papers.

Also, not being accepted does not mean that a paper is poor. They used a rank system, so it only means that others had papers which appeared to be better.


I am more curious about the inverse: what if more than 22.5% of the papers were at acceptable quality levels? Wouldn't that leave each committee to pick and choose, thus artificially inflating their disagreements?


Yes, and I think that is essentially why we're seeing these disagreements.

I've heard from lots of professors that a good conference gets a lot of "very-good-but-not-great" submissions, and the job of the program committee is to pick the best among them. I wouldn't be surprised at all if minor personal preferences (which from the outside look rather random) ended up having a big say in the fate of a particular paper. Maybe some reviewers are more forgiving of poorly written but technically strong papers, maybe some reviewers consider certain fields "dead" and so are biased against them, reviewers have wildly different standards for how extensive an experimental analysis must be to be acceptable, ...


Highly (almost vanishingly) unlikely at a "top-tier" venue like NIPS.


Can you say more about why you believe this is true?


Consider the demographic submitting to NIPS. It's a self-selected group among the top researchers in the world in that area. The best people in the field don't want to be seen publishing in so-called "second-tier" conferences, so they will submit exclusively to the likes of NIPS. And if you're an up-and-coming researcher or research group, you will want to establish credibility by publishing in these sorts of venues, and you will almost surely send your best work there. Add to this the fact that this is a "hot" field, with more and more researchers and research groups entering it and trying to publish papers, and I think it's very likely that NIPS gets far more good papers than it can possibly accept.


What does "poor quality" mean? There is no absolute standard for quality. "Poor" is something like "less good than usual compared to the recent work in this community". So the top-scoring third-ish of papers sent to the currently-converged-on favourite venue of a community are pretty much by definition not poor. Unless something very weird indeed happens one year. There are usually only very few really excellent papers, though. Most papers are filler in retrospect.

Also, conferences need to accept a decent number of papers so that people will show up and cover the costs of the meeting. Venues are usually booked long before the program is fixed.


Ok, we've gone from "top tier venue, basically impossible to have a large fraction of poor papers submitted" to "Most papers are filler in retrospect" and "conferences need to accept a decent number of papers so that people will show up and cover the costs of the meeting". I guess if I am deciding whether or not to hire a professor I would be tempted to disregard publications in this conference.


It depends what you are optimizing for.

Number of publications is a proxy for how much funding a professor can generate. Not much else.

> "Most papers are filler in retrospect" and "conferences need to accept a decent number of papers so that people will show up and cover the costs of the meeting"

These aren't conflicting. Conferences are often more about networking than the papers. Many papers are filler, but often only in retrospect. They are not obviously filler when presented.


> I would be tempted to disregard publications in this conference.

That was not something I suggested. NIPS is a very good conference, and a paper there is suggestive of quality work. Lots of past NIPS authors have been acqui-hired or regular-hired by Google and Facebook recently in their machine-learning spending sprees, for example.


I think it's a bit harsh to call the papers "filler", but the reality is that most papers (in CS, anyway) are incremental work on important but well-studied problems, or work on problems that are fairly narrow or not universally considered to be important. Reviewers tend to have wildly divergent opinions on how important or interesting that kind of work is.


The "in retrospect" was an important part of that point. Reviewers don't have access to it when reviewing.

Some conferences and journals have a retrospective prize for the best paper of, say, ten years ago. It's a neat way to recognize papers that turned out to be useful.


"Listen to me, get out of here and move forward. This never happened. It will shock you how much it never happened." -Don Draper, Mad Men, Season 2 Episode 5.


What is the context?


He says it to Peggy, who is distraught and in the hospital, after just giving birth to a son and immediately giving him up for adoption.

I had to look it up too. You are right, the quote doesn't exactly explain itself.


Since no one is responding...a major character gives birth to an illegitimate child fathered by a married co-worker, has a sort of mental breakdown, and her family takes the child away and tries to cover the whole thing up. Don finds her in a hospital and gives her that advice.


[deleted]


This has to be one the most pretentious comments I've ever read on Hacker News.


I especially appreciate that his TL;DR was a reference to one of the more dense pieces of literary theory that I was ever forced to digest...


One of the great quotes from Mad Men, and it does apply perfectly to this situation.


I think it's a huge hit to their brand image with consumers if this is true, because they haven't sold woefully underprovisioned current-model-year iPhones before, at least not that I can recall.

I have heard the argument that they offer the smaller storage devices for institutional bulk purchase (think high school iPads with a narrower range of use cases and a managed and/or limited base of installed software), but if that's the case they should stop offering those models to consumers in general.


I suspect what he is saying is that people who probably had no visibility into the state of Sony's security, and certainly had no ability to influence it, are unfortunate victims here. While security is difficult to measure and therefore difficult to manage and improve, it remains the responsibility of executives to allocate resources against that problem, and it is they who ultimately bear the majority of the blame when the security posture falls short.


This.

IMO the higher up the chain you are, the more responsibility you have to secure the systems you are responsible for.

I feel like it's a perceived lack of accountability (from the perspective of the hackers) of the executive team that leads to these kinds of leaks. When they feel they aren't seeing justice - as defined by them - then I think they're more motivated to do something about it themselves.


I think it's important to at least acknowledge the desire a "razor/razorblades" device manufacturer has for maintaining the quality of their brand by controlling to some extent the user experiences that are possible with their tool. To me, this seems similar to Sony and Nintendo wanting the right to certify titles that run on their consoles. You can argue about whether removing freedom from the user is worth the trade for a reliable user experience, and you can argue about the right place to draw the quality line, but they're trying to guarantee a certain minimum level of user experience by doing this.

If Keurig coffee was somehow astoundingly good out of their machine, with their pods, would we have less of a problem with what they are doing?

What Keurig is doing also doesn't prevent another manufacturer from competing with an unencumbered alternative. Shouldn't we expect such a system to compete in the marketplace on its merits?


> What Keurig is doing also doesn't prevent another manufacturer from competing with an unencumbered alternative

Not sure if this fits the definition of irony, but Keurig is the company that came up with the unencumbered 1.0 coffee pod standard. The original DRM was the idea that a scoop of ground coffee beans was incompatible with the pod brewer. 2.0 is exactly the same, but with the RFID (I'm assuming. I haven't cared to look into it) "protection". I'd imagine 3.0 will have some kind of boolean logic much like inkjet cartridges have these days that will make the thing complain that the pod has already been used.

Though, I entirely agree with you. This DRM is only present because Keurig wants to protect its brand. It came up with the pod-brewer concept, much like Apple came up with the iPhone and its app store. Rejecting what it deems inferior or contrary to its goals is its objective. I wish everything were more open and available for interoperability, but it's their product up until I purchase it, so the design is out of my control.


In many situations the actual cost to a vendor does fluctuate over time, but continuously updating prices has costs that outweigh the upside in demand, and perhaps brand loyalty, that comes from transparency. Example costs include (a) customer frustration from unpredictable prices and (b) consumers deferring purchasing decisions while waiting for prices to reach some threshold they hope will arrive one day.


I feel like Steam, although the practice is successful for them, is pushing all consumerism in this direction. I defer all my gaming purchases until games come on sale on Steam. I can't help it; by this point I have been trained to act this way. So much so that it's almost Pavlovian.


I'm certainly in the (b) category. I don't have to have most things right away. I have a list of things that I check pricing on frequently, and when the price is right, I buy. Sometimes I never buy, because the price never meets the threshold. Of course, that's only for things I desire and don't necessarily need.

Of course, as morley pointed out in a separate reply, this seems like it may only be limited to art and memorabilia, which tend not to have a set value. If limited to these types of items, then Amazon's approach may make sense.


Try a holiday in India, Nepal, or Morocco. Nothing has a set price. It's annoying at first, but after a while you may appreciate that you pay based on what you value, not on some price set by someone else.


Or if you're American and want to try it closer to home, buy some tickets from a scalper.


The graphic in the "Per-Country IPv6 adoption" tab appears to be implemented with scalable vector graphics.


I guess the way to approach it would be to compare the rate of sexual assault by drivers of taxis or similar for-hire travel with the rate we see in the case of Uber, and try to discern whether the two are meaningfully different.
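The comparison described above is a standard two-proportion test. Here is a minimal sketch; the counts are purely hypothetical placeholders, not real statistics:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled z-statistic for comparing two incident rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example numbers, for illustration only:
# 30 incidents per 1,000,000 trips in one group vs 40 per 1,000,000
# in the other.
z = two_proportion_z(30, 1_000_000, 40, 1_000_000)
# |z| < 1.96 means the difference is not significant at the 5% level.
```

With these made-up counts the z-statistic comes out around -1.2, so rates of 30 vs 40 per million trips would not be distinguishable at the usual 5% significance level; detecting a real difference at rates this low would require data on a very large number of trips.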

