
I worked at a very large Ruby shop where errors in production were very expensive. This meant that we spent many times more money on instances running the test suite for every build than we spent on all production servers combined.


Let's be realistic here: The US Supreme Court tells us what the law is, every time, regardless of how clearly the law is written or how the court has ruled in the past. The same can happen in lower courts, as federal circuits come back with head-scratching rulings whenever it suits the judge's aesthetic preferences. Judge shopping is quite popular in expensive cases for good reasons.

So we shouldn't be surprised when anything changes, ever, given how much activism we are seeing in courts today. The question is, how much do the people who actually decide what a law means really like cryptocurrencies? I suspect the only good chance most of those companies have is to rely on the courts' dislike for government agencies, regardless of what the laws say. But as far as I am aware, the good friends of the court tend to be very involved in old banking, and thus they aren't fond of crypto companies either.

So maybe those companies should start lobbying Harlan Crow and his circle of friends.


I worked with Ferrucci at BW. The internals of the employee rating system that preceded him were hilarious. In short:

Everyone is encouraged to rate anyone else in a variety of categories, as often as possible. Every rating is public. You know who is rating you, and how they did it. Those ratings are put together to get a score in every category, and can be seen by anyone. It's your Baseball Card.

The problem is that not everyone is equally 'credible' in their ratings. If I am bad at underwater basketweaving, my opinions on the matter are useless. But if I become good, suddenly my opinion is very important. You can imagine how, as one accumulates ratings, the system becomes unstable: My high credibility gives someone else low credibility, which changes the rankings again. How many iterations do we run before we consider the results stable? Maybe there are two groups of people that massively disagree on a topic. One will end up with high credibility and the other with low, and that determines the final scores. Maybe the opinion of one other random employee just changes everyone else's scores massively.

So the first thing to know is that the way we decide an iteration is good involves whether certain key people are rated highly or not, because anything that said, say, that Ray is bad at critical thinking was obviously faulty. So ultimately, winners and losers on anything contentious are determined by fiat.

So then we have someone who is highly rated, and is 100% aware of who is rating them badly. Do you really think it's safe to rate them badly? I don't. Therefore, if you don't have very significant clout, your rating of a person should be simple: Go look at that person's baseball card, and rate them something very similar in each category. Anything else is asking for trouble. You are supposed to be honest... but honestly, it's better to just agree with those who are credible.

So what you get is a system where, if you are smart, you mostly agree with the bosses... but not too much, so as to show that you are an independent thinker. But you had better disagree only in places that don't matter much.

If anything is surprising, it's that so many people involved in the initiative stayed on board as long as they did, because it's clear that the stated goals and the actual goals are completely detached from each other. It's unsurprising that it's not a great place to work, although the pay is very good.


> My high credibility gives someone else low credibility, which changes the rankings again. How many iterations do we run before we consider the results stable?

Tangential to your point, but this is actually a solved problem, and you only have to run it once. You might even recognize the solution [1] [2].

[1]: https://en.wikipedia.org/wiki/PageRank [2]: https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors
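
For the curious, here is a minimal sketch of the idea (all numbers made up for illustration): collect the ratings into an endorsement matrix, column-normalize it the way PageRank normalizes outgoing links, and the stable credibility scores fall out as the principal eigenvector, computed directly rather than by looping:

    import numpy as np

    # Toy endorsement matrix: R[i, j] = how much person j endorses person i.
    R = np.array([[0.0, 1.0, 2.0],
                  [3.0, 0.0, 1.0],
                  [1.0, 2.0, 0.0]])

    # Column-normalize so each rater's outgoing endorsements sum to 1,
    # exactly like outgoing links in PageRank (M is column-stochastic).
    M = R / R.sum(axis=0)

    # Stable credibility = eigenvector for the largest eigenvalue (which is
    # 1 for a column-stochastic matrix). No hand-rolled iteration needed.
    eigvals, eigvecs = np.linalg.eig(M)
    v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    print(v / v.sum())  # per-person credibility scores summing to 1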


Why only once? When we did PageRank in class, we chose a precision factor and then iterated PageRank until the delta between iterations was less than the precision factor.
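
That classroom loop is power iteration: it converges to the same principal eigenvector, just numerically instead of via a direct eigendecomposition. A minimal sketch, with the damping factor and tolerance chosen purely for illustration:

    import numpy as np

    def pagerank(M, damping=0.85, tol=1e-9):
        """Power-iterate on a column-stochastic matrix M until the
        update delta drops below the chosen precision factor."""
        n = M.shape[0]
        v = np.full(n, 1.0 / n)  # start from a uniform distribution
        while True:
            v_next = damping * (M @ v) + (1.0 - damping) / n
            if np.linalg.norm(v_next - v, 1) < tol:
                return v_next    # deltas below precision: call it stable
            v = v_next

Either way you land on the same ranking; the direct eigenvector computation just does the "iteration" inside the linear algebra library.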


More generally, factor graph optimization / message passing, if you don't need the constraint that it must be an eigenvector/eigenvalue operation. The number of iterations is bounded by graph width but is locally constant, if memory serves.


I'm surprised by your comment. It's basic knowledge in UX research, or any social research, that social pressure influences responses. That's why most feedback surveys are anonymous. Even in politics, there is an old theory called the Spiral of Silence about how people shut down their contrarian opinions in public.

It looks like Ray's fixation on "radical transparency" ignored the basics of human nature.


The people who practice/advocate radical transparency believe they are above human nature.


Doesn't look like anything I've ever called radical transparency.


Unfortunately, no one will trust that a work survey is anonymous. They send it to an email address that includes my name, with a unique link. And it's often possible to just infer who a respondent is.


I like my employer very much but still do not trust that their frequent employee surveys are anonymous. But since managers get grief for low response rates, I do the surveys, but skip each of the questions.


Well, anonymous to whom?

Do I trust that big bosses, IT, and HR can't deanonymise my answers? Never. Obviously they can, and will if they are interested.

Can the rank and file do the same? Probably not.



One of my absolute favorite episodes. Right up there with the Asscrack Bandit episode.

#SixSeasonsAndAMovie!


We have Six Seasons already, and the movie is apparently in production now (although the strike has delayed things a bit and may cause downstream trouble).

[1]: https://www.imdb.com/title/tt21958386/

BTW: Man the Asscrack Bandit was such a memorable episode.


Another aspect of this kind of system is that it favours visible over invisible jobs. The people working their asses off in the technology sewer might literally keep everything running, but because they are:

A) underfunded and understaffed

B) have no time and energy for niceties

all those in more comfy positions might be inclined to vote against them, while they probably don't vote at all because they got no time for that shit.

As you said, if you let a pigeon vote on how good your chess moves were, you will get pigeon shit on a chessboard. A costly form of bureaucratic performance art, sans the art.


I worked at a place where publicly praising folks via company wide emails was encouraged. It was a painful experience. Almost immediately mutual admiration societies started up. It was like a competition between these small groups to send these nauseating emails every Friday. Mostly it was sales, HR, and the executive team sending these emails.

Most folks just didn't participate and it faded away pretty fast.


This is a classic example of what happens when you give people a metric (one that is meant to be a condensation of the things that truly matter) on which they are evaluated: they will just start optimizing for and gaming that metric instead of focusing on the things that truly matter.


I don't think there ever was a time this metric was good.


Incentives are a hell of a drug.

It would be so cool if someone could apply game theory for mortals that could digest these systems and show how to make them more "fair". It may be an impossible task but it seems worthy of exploration.


> It would be so cool if someone could apply game theory for mortals that could digest these systems and show how to make them more "fair". It may be an impossible task but it seems worthy of exploration.

I consider this to be a hard, but clearly not impossible, task. What I consider nigh impossible is getting a good definition of what we actually want to achieve by creating these systems. That is where, in my opinion, the true difficulties lie.

Just to give some shallow examples:

Do we want the purpose to be quite fixed (say, for decades) and create a system around it? Then it will be quite hard to change the ship's direction if some event changes the economic environment a lot.

On the other hand: do you want to make the system's purpose very flexible, so that the system can react to such circumstances? You can bet that this will be gamed towards the political whims of the people inside the system.

Then again: if you want a glimpse of system ideas that did "work out", look at long-existing religions, or entrepreneurial dynasties with centuries of history. It should be obvious that these examples are far from a "democratic", "participative" spirit (which perhaps should be a lesson for anybody designing such systems).


My goal would be to "maximize prosperity for all". It could be better qualified and defined, but making sure honest hard-working people don't get screwed over and cheaters don't prosper.

I think the goal should be the goal, as the circumstances will continue to evolve. That is, the process should serve the people rather than the other way around.

Sorry, I'm just an old naive idealist and would like to cling to hope that we can move society forward in a positive direction.


>Everyone is encouraged to rate anyone else in a variety of categories, as often as possible.

You'd think someone with as much experience as Ray Dalio would be vigilant against overfitting a model, and that's exactly what this is.

Just goes to show you that even the most successful are susceptible to ego-driven blind spots.



The people who survive in that culture, are they interesting, and do they go on to do anything interesting afterwards?


I have met, or value relationships with, plenty of STEM BW alums, but they tend to speak about it as a funky place; it seemed to color their views about buy-side jobs, and they suggest that you should know what you're getting into with a job there.

Anecdotally, it seems to be used as a stepping stone into leadership, or as a way to bank savings.

Several tech leaders I know are out of BW.

An SRE I knew there was at ~$600k TC and got a retention offer at ~$900k when they left to do a passion job.


If those scores can be seen by everybody, it should be possible to mine the data and reverse-engineer the algorithm.

A bad actor could use this to craft a "malicious review".


Apparently there was no algorithm.


Is it true that Ray has a bias against STEM people, and prefers elite liberal arts pedigrees?


It's a turbulent time in games media: see how Yahtzee just quit The Escapist, taking with him most of the video team.

Either way we slice it, we'll all soon see what it is that brings people to certain publications. The brand? Long-form, heavily researched articles that just take too much work to produce? The wokeness/anti-wokeness posturing? Is it a matter of just a few extremely talented people carrying a publication?

We all can make our guesses, but the market will say who is right.


Oh wow, I didn't know about Yahtzee. I really need my Zero Punctuation dose, and it turns out he doesn't own it.

https://www.gamesindustry.biz/the-escapist-staff-resign-foll...


I worked as a contract writer for The Escapist from 2005-2009 and consider Yahtzee's resignation the end of the publication. Zero Punctuation was most likely the primary revenue driver; I know he had an outsized impact even back in the heyday, when all of the content was getting accolades.


That's crazy, I didn't know either.


I've been a part of a system like this (a financial system that happens to not technically be a bank). The system was constantly under attack by fraudsters: you could find subreddits with guides on how to try to outright rob us. So we had teams building detection systems that tried to detect said fraud as early as possible, hopefully before we handed the fraudsters any money. Depending on how bad the score was, there might not even be a manual review step before we closed the account, because the numbers were that blatant.
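
Conceptually, the routing described here looks something like the sketch below. This is purely illustrative: the thresholds and names are hypothetical, not the actual system, and real systems tune the cutoffs against measured false-positive/false-negative costs.

    # Hypothetical tiered routing on a model's fraud score in [0, 1].
    AUTO_CLOSE = 0.98      # so blatant that no human reviews it
    MANUAL_REVIEW = 0.80   # suspicious enough for an analyst queue

    def route(account_id: str, fraud_score: float) -> str:
        if fraud_score >= AUTO_CLOSE:
            return f"close {account_id} immediately, no manual review"
        if fraud_score >= MANUAL_REVIEW:
            return f"queue {account_id} for a human analyst"
        return f"allow {account_id}, keep monitoring"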

As with any classification system though, 100% accuracy isn't going to happen. But there's always some customer service rep that can look at the details of the account, and see why in the world the system said what it did. But a detailed explanation of why we thought something was fraudulent could (and sometimes would!) just lead to another fun reddit post where someone describes how to hide the fraud a little better.

For any given system like this, how much harm is actually being done, vs how much is being prevented (as fraud just leads to raising prices to cover for it: financial companies are not charities)? I've read way too many CSR conversations where a blatant fraudster with world-class chutzpah would claim that we were destroying their family for no reason, when the data was damning. But this doesn't mean that everyone who isn't a fraudster really reaches out to the CSRs, and has the energy to prove there was no fraud. The actual levels of damage are just hard to measure.

We should have sensible, mandatory, available customer service access, which costs just enough to not be hammered by bots, but where the fee is completely refunded in case of error. But what is really causing this is that many companies have lowered the barrier of interaction so much that we are letting a lot of fraud through the door. Remember how getting a merchant account at a real bank was a multi-day affair? How getting hired as a delivery driver needed an interview with a real person, and a manager checking in between deliveries? The price of not having to interact with a human to sign up is fraud detection in place of a boss you interact with every day, who makes sure you are working, and who is paid from the work you do. Companies with billions of customers, and probably hundreds of millions of suppliers, aren't exactly workable without automating a lot of those intermediate jobs away.

Maybe we made the wrong call across the board, and lower-productivity, but far higher trust commerce is the way to go... but a lot of that commerce is losing in the market, right now. So if we like it, we have to be willing to pay extra for it.


> But a detailed explanation of why we thought something was fraudulent could (and sometimes would!) just lead to another fun reddit post where someone describes how to hide the fraud a little better.

If I were in charge, it'd be too bad for your company, and they'd have to give a detailed explanation every time even if that would be the result, because the alternative is way worse.


> Maybe we made the wrong call across the board, and lower-productivity, but far higher trust commerce is the way to go... but a lot of that commerce is losing in the market, right now.

Plenty of people in the comments here want to enforce that approach via laws and regulation..

> So if we like it, we have to be willing to pay extra for it.

.. and I wonder if they are taking this into account.


In the same way that nobody knows which milk bottles are full of chalk, nobody knows enough about the hundreds of businesses they interact with on a daily basis to TOS comparison-shop. The understanding of what Google could do to your online life by closing your Gmail account is near to nonexistent in the consumer population.


> In the same way that nobody knows which milk bottles are full of chalk, nobody knows enough about the hundreds of businesses they interact with on a daily basis to TOS comparison-shop.

That's what brands and reputation are for. And it works: brands often do command a premium.


It didn't work for milk bottles; that's why I used it as an example.


Interestingly, it's working for milk in China.

(Mainland) China has regulations for milk, including a ban on chalk. Alas, those regulations aren't enforced with much teeth in practice, so consumers rely on reputation and brands. Specifically, they buy milk from overseas, like Australia, because of its superior reputation.

A big part of that reputation is that Australian companies won't be protected by the Chinese government when they screw up. So it's harder for them to hide blemishes on their reputation (at least to hide them from Chinese customers).


> But a detailed explanation of why we thought something was fraudulent could (and sometimes would!) just lead to another fun reddit post where someone describes how to hide the fraud a little better.

I worked for a company doing this sort of fraud detection. Knowing features and weights really would make it easier for fraudsters.


And it gets worse the smaller the market is: there is a chance that a YouTuber with a sufficiently large following could actually choose to buy said Mac Pro, because their revenue might be pretty large. But then you look at, say, board game reviews. Nobody, ever, buys a game. But the number of views isn't good enough to dedicate the time to it as anything other than a hobby. So anyone posting enough to make it their job is also getting sponsored on top of the free product, but nobody wants to tell you that. Thus, all you are seeing is 100% ad, just shaped as a review, or as entertainment.


It's a matter of letting things degrade until the maintenance becomes outright firefighting. I am currently working on a project where a processing pipeline has a maximum practical throughput of 1x, and a median day's load for said pipeline is... 0.95x. So any outage becomes unrecoverable. Getting that project approved six months earlier would have been basically impossible. Right now, it's valued at promotion-level difficulty instead.
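
The arithmetic behind "unrecoverable" is brutal: with 0.95x load against 1x capacity, only 5% of capacity is left to drain any backlog. A back-of-the-envelope sketch, using the numbers from the comment (the formula is just backlog over headroom):

    def catchup_hours(outage_hours: float, capacity: float = 1.0,
                      load: float = 0.95) -> float:
        """Hours needed to drain the backlog an outage creates."""
        backlog = load * outage_hours   # work that queued up while down
        headroom = capacity - load      # spare capacity to drain it with
        return backlog / headroom

    print(catchup_hours(1))   # 19.0 -> a 1-hour outage takes ~19 hours
    print(catchup_hours(8))   # 152.0 -> a workday outage takes ~6 days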

At another job, at a financial firm, I got a big bonus after I went live on November 28th with an upgrade that let a system 10x its max throughput and scale linearly, instead of being completely stuck at 1x. Median number of requests per second received on December 1st? 1.8x... the system would have failed under load, causing significant losses to the company.

Prevention is underrated, but firefighting heroics are so well regarded that sometimes it might even be worthwhile to be the arsonist.


Intuitively, "fixing life-or-death disasterss is more visible and gets better rewards than preventing them" doesn't seem like it should be a unique problem of software engineering. Any engineering or technical discipline, executed as part of a large company, ought to have the potential for this particular dysfunction.

So I wonder: do the same dynamics appear in any non-software companies? If not, why not? If yes, have they already found a way to solve them?


Outside of software, people designing technology are engineers. Although by no means perfect, engineers generally have more ability to push back against bad technical decisions.

Engineers are also generally enculturated into a professional culture that emphasizes disciplined engineering practices and technical excellence. On the other hand, modern software development culture actively discourages these traits: for example, taking the time to do design is labeled "waterfall", YAGNI sentiment, opposition to algorithms interviews, opposition to "complicated" functional programming techniques, etc.


That's a very idealistic black-and-white view of the world.

A huge number of roles casually use the "engineer" moniker and a lot of people who actually have engineering degrees of some sort, even advanced degrees from top schools, are not licensed and don't necessarily follow rigid processes (e.g. structural analyses) on a day to day basis.

As someone who does have engineering degrees outside of software, I have zero problem with the software engineer term--at least for anyone who does have some education in basic principles and practices.


I have yet to see, with the exception of the software world, engineering with such a loose process.


As someone who was a mechanical engineer in the oil business, I think you have a very positive view of engineering processes in general.


It's common to start constructing buildings before the design is even complete. And there can be huge "tech debt" disasters in civil engineering. Berlin Airport is one famous example.


>It's common to start constructing buildings before the design is even complete.

Of which the Sydney Opera House is one of the better known examples.


> If yes, have they already found a way to solve them?

A long history of blood, lawsuits, and regulations.

Preventing a building from collapsing is done ahead of time, because buildings have previously collapsed, and cost a lot of lives / money etc.


I remember my very first day of studying engineering, the professor said: "Do you know the difference between an engineer and a doctor? When a doctor messes up, people die. When an engineer messes up LOTS of people die."


How do you think we got into this climate change mess?


Yeah, but if you had a release target of Dec 15 and it crashed Dec 1st, and you could have brought it home by the 7th, you would have been a bigger winner. Tragedy prevented is tragedy forgotten. No lessons were learned.


No modification is perfect: while all GMO modification tries to be as narrow as possible, it's not like things are ever so perfectly narrow as to express only the bonus protein you wanted.

The problem is assuming that any other change that happens is dangerous and untested. There's mutation all over the place in perfectly organic plants, just like with GMOs. There are also changes in gene expression from those mutations, also just like with GMOs. An overwhelming majority of changes either do nothing or make the plant unviable. The practical risk of a change that does something, doesn't harm yields, and yet somehow makes the parts of the plant that a human eats somewhat toxic is a huge stretch. It's even less likely when one considers the actual regulatory processes that happen later. It might seem crazy, but people who work on GMOs tend to be uninterested in poisoning the public.

If I were afraid of poisoning due to mutation (which I am not), I'd be more afraid of someone who has been crossing plants with localized, ancestral wildtypes that have been planted in just one village for the last hundred years or something. Those are more likely to be untested. But it's like the risk of getting hit by lightning for the 5th time this week.

I am far more likely to be poisoned by a detergent, or someone that has let bacteria run amok in their packing facility, and is somehow selling, say, premixed salads that land in my local supermarket.


I’d argue back, but I’m too ignorant. You write as if you are supremely confident, so can you back this up? Are you working in the field, or can you point me to (recent) commentary from someone who is?

I have no dog in this fight, I literally know nothing and care very little. However, I can’t just ignore the opinions that have come to me from someone who is an expert in this space. That’s why I raised it; because I was assuming there would be someone working in the field who could provide detail and nuance. Wasn’t expecting “this is all crap” as the response.


There was an old patent on making subsequent generations completely infertile, but yes, it was never used. Bayer's corn seeds are optimized enough that, while the next generation grows just fine, the yield loss makes it not worthwhile over just buying new seed.


It's been that way for a lot of crops for almost a century at this point. The phenomenon is called hybrid vigor. Many things we grow are clonally propagated to maintain these hybrid genetics.


This tells me you don't know much about Monsanto / Bayer Crop Science.

Roundup Ready is a relatively good idea: Roundup applies easily, and after the first few generations (when control over how the GMO genes were added was so bad that yields were impacted), it became a very narrow change to plants that makes farmers a lot of money. The vast majority of soybeans in the world carry Roundup Ready genes for a reason.

Terminator seeds sounded like a scary patent, but it was mostly useless. In corn, for instance, you'd never want to run it at all, as the seeds that are sold are almost perfect hybrids of 2 inbreds, which lose a whole lot of yield in the next generation without anyone really trying. There's no need for a gene when the next generation yields worse naturally.

If you want a sick situation with Bayer, forget Roundup and look at what's been happening with dicamba, the next-generation pesticide that new GMOs are protected from. It's not a new pesticide, but it's very aggressive and it drifts: you spray a field, and many other fields around it are going to get hit. Supposedly Bayer is telling everyone that, in tests, the new formulations show no drift when applied properly under the right weather conditions... but reality disagrees. Therefore, a whole lot of fields that aren't planted with dicamba-protected seeds are getting wrecked by not-so-close neighbors who haven't mastered the really difficult art of spraying dicamba on the calmest of days. We aren't talking about a pesticide that drifts 50 feet here, or 100, but people relatively far away having their crops ruined. This is happening often enough that we'll see bans, while more and more generations of Roundup GMOs go out of patent.

I am pretty sure that this one is what is scaring Bayer's lawyers, not Roundup.

