You changed the context pretty substantially by moving it to the end of the piece and putting it in scare quotes though. I also read it as intentionally pejorative. You might want to edit the post if that's not what you meant.
Elon talks about IP addresses like my wife, who is blissfully unaware of their intricacies. I mean, each person and company gets one IP address so we can track them, right? Right?
It's not a bad post per se, but we've been reading similar anecdotal blog posts about making the interview process kinder for decades. Yet the only companies in a position to run a rigorous statistical test - large tech cos - stick with the traditional, somewhat adversarial whiteboarding process. I would even suggest that a strictly technical whiteboarding process can be less biased than what the author describes, because you can reliably grade everyone against the exact same rubric. That's tougher when "pair programming" or doing a take-home.
Also, stop giving take-home projects. Bad candidates will cheat on them and good candidates won't even do them. If one of the random startup names listed on the author's site sent me a 12 hour take-home project I would delete the email. Do you think they pay twice as much as the bigger company that only makes you waste 6 hours doing a whiteboard? I doubt it.
Larger tech cos with very high TC know that they will always have a deep pipeline of qualified candidates, so a false negative has essentially no cost to them, whereas a false positive has a significant one. I think they run these processes because they have a very low false positive rate, and as long as that holds, it doesn't matter to them how high the false negative rate is.
And I think that smaller companies copy this as part of the general tendency to copy large companies without asking whether the thing being copied actually makes sense at their scale. In this case it can be very damaging, because a false negative for a startup with a limited pipeline is far more costly.
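To put toy numbers on that asymmetry, here's a back-of-envelope sketch in Python; every figure in it is invented purely for illustration, not taken from any study:

    # Toy expected-cost-per-candidate model. All numbers are made up
    # just to show how the tradeoff shifts with pipeline depth.
    def expected_cost(false_neg_rate, false_pos_rate,
                      cost_missed_hire, cost_bad_hire):
        return (false_neg_rate * cost_missed_hire
                + false_pos_rate * cost_bad_hire)

    # Big co: deep pipeline, so a missed good candidate is nearly free.
    print(expected_cost(0.5, 0.02, cost_missed_hire=1_000,
                        cost_bad_hire=500_000))   # 10500.0

    # Startup: thin pipeline, so each missed candidate delays shipping.
    print(expected_cost(0.5, 0.02, cost_missed_hire=100_000,
                        cost_bad_hire=500_000))   # 60000.0

Same process, same error rates; the only thing that changed is what a missed candidate costs you.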
Also, as far as I know, nobody measures false negatives in hiring. How would you even do it? Keep track of everyone you rejected, and then 5, 10, and 20 years later check on their career? I'd be fascinated to see the results of such a study. I'm sure it would be super valuable if you could find some kind of pattern where some filter is falsely excluding candidates that are actually great.
I think the goal of avoiding false positives is actually more important for smaller companies. A bad hire at a startup can significantly shorten the company's runway, while a bad hire at a bigger company tends to get isolated and managed out without doing much harm (except to morale of course).
You're right about the copycat behavior. This goes all the way to the top of the funnel: these small, even trivial-scale web application startups just don't have hard engineering problems. Many imagine they do, or imagine they will once they take off, but the work they're offering the high-powered candidates they claim to want is, like, wiring up CRUD apps and making JavaScript buttons. It's not technically deep work; it's product work. A little humility about whether your tech startup is truly doing "tech" problems would, I think, fix some of the expectation/reality mismatch people have when they complain about how hard it is to hire engineers.
And sure, lots of people join Google to work on world-scale problems and end up wiring CRUD stuff anyway. But it can at least plausibly offer some technical depth (or could, anyway; perhaps Google's reputation as a great place to develop an engineering career has been fading).
(This is not to knock "CRUD" work, but to highlight that technical problem solving is an overlapping but not identical skill set to working with a fast-moving team to quickly and reliably ship product changes.)
I agree about the false positives. What I'm saying is that false negatives are more costly at a startup because you have a much thinner pipeline of good candidates. Passing on good candidates drags out your hiring process and may even push you to later hire someone you otherwise wouldn't have, because you need engineers. Google doesn't care; they'll have 100 more candidates the next day.
A bubble sort at startup co is annoying but unlikely to cost the company millions; at Google or Amazon scale it very well could. Extreme example, but I think the point stands.
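To make the bubble sort point concrete, a quick timing sketch (exact numbers will vary by machine):

    import random, time

    def bubble_sort(a):
        # Classic O(n^2) bubble sort.
        a = list(a)
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    for n in (1_000, 10_000):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        bubble_sort(data)
        bubble = time.perf_counter() - t0
        t0 = time.perf_counter()
        sorted(data)  # O(n log n) built-in
        builtin = time.perf_counter() - t0
        print(f"n={n}: bubble={bubble:.2f}s builtin={builtin:.4f}s")

10x the input is roughly 100x the bubble sort time. At startup traffic nobody notices; at Google/Amazon request volumes the quadratic blowup is real money.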
Or there aren't objective standards for what "low, medium, and high confidence" actually mean, allowing one agency to look at the evidence and call it low confidence while another looks at the same data and calls it moderate confidence.
Or they are coming to different conclusions given the same ambiguous/incomplete information. It could simply be disagreement, or just the lack of a standard metric that would let them compare notes easily.
I read that they did massively increase sharing after 9/11, but there was a reassessment after the Manning leaks. Manning had access to way more information than someone at their level needed for their job, and they concluded this came about from going a bit too far on the post 9/11 sharing and so they dialed it back a bit.
Organizations like the CIA don't make factual statements, they make strategic statements. Any connection with reality is purely coincidental.
If the CIA came out tomorrow and said they were 100% confident it was a lab leak from China we would have no way of knowing if that's based on facts or political propaganda.
Maybe in 40-50 years we'll get an answer. Until then, it's too politically and strategically useful to keep it up in the air. At least one goal is to keep people arguing about it.
I have not been inside the CIA, but that could just mean they haven't done any research on that or that their research is ongoing.
An institution will not have assessments on everything at any point in time. Two different institutions might have different assessments for a billion reasons, e.g. they could have done the research at different points in time, the underlying evidence could differ, the interviewed experts could have said different things, and so on.
Two different institutions reaching different conclusions is not as surprising as it seems.
And for the record, the National Intelligence Council and four other unidentified agencies say with "low confidence" that the virus came about through natural transmission.
This is exactly why I can't give this article, or any of those multiple intelligence agency assessments, any real credence. I don't have access to any of the data that went into them. What I do have access to doesn't point very strongly at a lab leak origin, despite its plausibility.
It's perfectly reasonable for the government to support one theory based on non-conclusive evidence that they have and reasonable for the public to support another based on the non-conclusive evidence available to them.
It does make it a lot more likely that public support for the non-lab-leak theories will change once we get the chance to see the government's new evidence, though.
It's more about connections to acquaintances and mutual friends... maybe you just had to be there in ~2009 but there's really nothing like it today. For better or worse.
With what money? For this to work the way you want, the company would have to be trading below the book value of its assets, in which case you might as well liquidate it. If you get the money from debt, you now have to focus on servicing the debt, which is probably worse than what the street wants (revenue growth).
The top 1% by household income in the US starts at about $500k a year. There are more than 100 million households in the US, so that's conservatively at least 1 million families who could afford to support a five-figure microtransaction whale. That's a lot of money!
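Spelling out the arithmetic (the $10k/year figure is just an assumed low end of "five figures"):

    households = 100_000_000           # conservative; the real count is ~130M
    whales = int(households * 0.01)    # households above the ~$500k threshold
    spend = 10_000                     # low end of a five-figure annual spend

    print(whales)           # 1,000,000 families
    print(whales * spend)   # $10,000,000,000/yr of potential whale money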
And uptime is important, so you want to have that secondary running and ready, with a proxy in front of everything so you can switch as soon as you detect a failure. That's three hosts, plus your alerting has to be separate too, so that's four. Now, to orchestrate all this, we'll first get out Puppet...
If you're going from one machine to two and you add an automatic failover mechanism, chances are your failover mechanism is going to cause more downtime than just running on your single machine and manually switching on failure (after being paged).
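A sketch of what the naive automatic version looks like, to show where the extra downtime comes from (the hosts, health endpoints, and thresholds here are all made up):

    import time
    import urllib.request

    PRIMARY = "http://10.0.0.1/health"     # hypothetical health endpoints
    SECONDARY = "http://10.0.0.2/health"
    active = PRIMARY

    def healthy(url):
        try:
            return urllib.request.urlopen(url, timeout=2).status == 200
        except OSError:
            return False

    while True:
        # Naive: flip on a single failed check. One dropped packet or a
        # GC pause now moves all traffic, and flipping back and forth
        # ("flapping") can easily cause more downtime than paging a
        # human and switching manually. A real setup needs hysteresis
        # (N consecutive failures before switching) and no automatic
        # fail-back without a human in the loop.
        if active == PRIMARY and not healthy(PRIMARY):
            active = SECONDARY
        elif active == SECONDARY and healthy(PRIMARY):
            active = PRIMARY
        time.sleep(5)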
What's a situation where Earth is destroyed and Mars is habitable? It isn't climate change, since we could just use whatever Terraforming Magic Technology we used on Mars to fix Earth. It isn't an alien attack, since they'd notice us on Mars too. It isn't war, since landing people on Mars also means we could send weapons. What exactly are we solving for here?
Earth remains more habitable than Mars even after staggeringly large asteroid impacts. As evidence, note that 25% of species on Earth survived Chicxulub; those species would not have survived transplantation to Mars.
And humans are well situated to be among the survivors of a future large impact, by virtue of our tendency to burrow into the ground and cache food. Indeed, one silver lining of the Cold War is that the major powers spent tremendous sums building fortifications with independent power and life support, substantial food stockpiles, and numerous nuclear submarines that would weather the impact.
Another Chicxulub might kill 8 billion people, and would be an immense humanitarian tragedy -- putting people on Mars won't save us from that. But we have the technology for survivors to ride it out today.