I expect better from the FT. Tether is very similar to prime money market funds, in both structure and portfolio composition. I guess the Fidelitys of the world aren't keen on competition.
That's a bit of a mischaracterization. The 75% that they're describing as "Cash & Cash Equivalents" might be similar to money market funds in composition, but we can't know whether it is or not because they don't give any details about the commercial paper. They don't even make the claim that it's asset-backed. Because they aren't saying anything about that, and they're not audited, they could very well be making riskier bets.
But more importantly, there's the 25% of their funds that bears absolutely no resemblance to anything a money market fund would do. The quote the FT article publishes from Martin Walker looks pretty reasonable to me:
> It is pretty clear looking at the makeup of the reserves — a tiny proportion of the reserves are cash on account at banks — that Tether is operating like a bank but with none of the normal disclosure.
> They are creating a dollar substitute and basically running a banking and payments business but without the oversight that anyone else doing a similar kind of business would have.
You could take that word-for-word from the debate that raged after Treasury bailed out the prime funds in 2008.
Money market funds arguably act like a banking business, but are subject to regulation of their marketing material that obliges them to explain that they're not, and they certainly don't operate like a payments business.
Money market deposit accounts are a different story, and are highly regulated (and secured).
That being said, even if your comparison were accurate, "hey, look at this similar business that caused harm and chaos during the last financial crisis" isn't exactly a glowing endorsement, is it?
Why not team up with the ProtonMail people to build a browser extension that verifies and logs javascript sigs/hashes? Corporate clients may like it. Gives them an IOC for the next big supply chain issue.
I don't know enough about browsers or JS to know whether it's difficult or not.
Articles like this answer the question, "what if 1920s eugenicists got hold of 1980s magazine relationship tests?"
The scary thing is a lot of commercial "people analytics" systems marketed to HR departments, lenders, and government agencies are little more than dressed up relationship tests from 1980s magazines.
The main useful thing about this sort of thing, including the more modern Tilt 365 for example, is that it can be a useful exercise in helping people understand that the way they think about and approach the world differs from how others do.
Years ago, I remember a book (Tog on Interface) that had a chapter discussing how Apple engineers tested on MBTI versus the general population. He went on to discuss the implications: it was easier for engineers than for users to form mental models of how systems operated. Was this a correct explanation based on MBTI? Who knows. But in this case it was a useful reminder that your users may not be like you.
I definitely found Myers-Briggs to be helpful, for your exact reasons. I know it gets a lot of hate here on HN, but as someone who was introduced to it in my first job right out of school, it made me more self-aware of how I naturally process information and get energized, and how that can differ from others, and that bit of knowledge has absolutely helped me at times in my career. To me it was never sold as "this is the only mold you fit in and it will define your role"; it was more about understanding the ways one might naturally process information vs. the ways one might need to actively work to process information differently.
I don't lie about my astrology sign or Myers-Briggs result.
I judge which one will likely help create an intimate consensual scenario, and I judge when to avoid criticizing either school of thought, as disagreement is usually counterproductive to a consensual reproductive scenario.
Pick your battles wisely
Out of even more adaptive curiosity, how do you read that as lying?
I dunno. “I didn’t say the things I thought you wouldn’t like so that we could have sex” somehow strikes me as disingenuous. Even though that’s just being considerate by a different name.
I guess it comes from the goal you are trying to achieve.
I get the exact same response when I try to debunk horoscopes to someone who believes in them: "that's just classic Virgo behaviour," haha.
Controversial, but also true, is that I get the same response trying to debunk diversity hiring etc. because I happen to be a white, straight male. I am also practically blind in one eye and grew up poor, but I refuse to play the victimhood game or even agree to that frame to win an argument. Also I don’t want to get into the pros and cons of that topic in this thread as that’ll inevitably be a train wreck as always, but I just want to point out how frustrating it is to “lose” an argument because of who you are, not what you actually say. Also, for the diversity people, it’s super insulting to essentially suggest “you’re only disagreeing to protect your own race”. I honestly think the appropriate response is to tell them to fuck off after they bring up your race, gender, sexuality, etc. in an argument.
On the flip side of that, I typically don't add a disclaimer about my heritage or political leaning before going into similar discussions, and it is hilarious that people assume I am a cisgender white male in order to invalidate anything I say. I also find it completely appalling how the "allies" act like female, trans and people of color have no ability to think for themselves or have different goals and opinions. To me, this is just as hurtful if not more than the "silent" and overtly racist people. Of course, now just add "allies" to the people that will argue about their worldview being altered. In person, people look at me like I have grown two heads when I suggest a minority might have listened to divisive comments from the former head of state. As if that is the biggest absurdity. People don't even want answers.
As someone who implemented over 70 different indicators and close to 20 tools for drawing Gann fans, Fibonacci retracements and so on, I've heard a similar thing about technical analysis and astrology and very much concur :)
Yeah, I half expected a couple downvotes on that one, lol. :P
But, seriously, IMO, to the extent that technical analysis "works," it's probably just exploiting short term momentum. Momentum strategies have been shown to generate alpha. Other than that, you'd probably be just as well off reading tea leaves and goat entrails to guide your trading strategy.
Unless an HR department is doing something wrong, the test results aren't the point of the tests. It's the conversation that follows, where prospective employees are confronted with their answers and results, that's the point.
It's just easier to get there when you use a disguised relationship test from a 1980s magazine. Not only does it touch on relevant areas, most people you'd spend the HR resources on secretly love taking those things and then talking about themselves.
As with most things, it can obviously go wrong or get used in the wrong situations. HR is a department that’s there to help managers, but they really shouldn’t waste their resources testing people for a position as a software developer. If your company is doing that, something went wrong. Maybe HR found a way to keep themselves busy, or you’re not bolstering a management culture or people who can make hiring related decisions without consulting HR.
Because you don’t really need the full personality profile show when you are interviewing people who aren’t going to be managers.
It’s not an insignificant amount of resources you put into the process. Not only do you take up HR resources, the team that is helping you recruit is also going to need to spend time going over the results with HR and the candidate.
In my view, that is an unnecessary waste of resources. Not only does the process not really show you anything team-related; you should frankly be capable of judging people just as well from the conversations you have with them.
The in-depth personality stuff only gets relevant when the people you hire have to manage other people.
It depends on what you mean by a "full personality profile". I think a skillfully chosen selection of tests, which may sometimes include personality tests, can aid in the selection of the proper candidate.
I agree that the battery of tests used should not be the same in every case. It should be tailored to the situation and to the position you are hiring for.
> Not only do you take up HR resources, the team that is helping you recruit is also going to need to spend time going over the results with HR and the candidate.
I have second-hand experience that this can be made very streamlined and does not take much time at all. Especially when you have a large number of prospective candidates and will not be able to see them all in person, a pre-filtering done by a skilled HR person is better than a random draw.
Well, because it's hard for them to come up with questions and judge responses.
But I guess this means HR shouldn't skill-test anyone. We'd all agree they shouldn't test a racecar driver, a CEO, a mathematician.
But the guy saying that probably doesn't work there: HR never tests anyone. They handle the process. He may be confusing it with the wrong job postings people attribute to HR (like "N years' experience" in a technology that is N−5 years old), which are usually the product of idiotic managers, not HR, who don't give a fig what your team needs as long as you express it to them.
Maybe we have different ways of defining things, or maybe my English isn’t good enough to convey the meaning.
But isn't HR handling the process of personality testing HR testing people? When we utilise personality testing, an HR consultant actually tests the candidates. It's their test, they spent millions developing it, and you have to be certified to perform it.
Yes, that's what I observe in my locale as well. HR people are usually trained psychologists. Administering and interpreting psychological tests properly is one of the major topics of the profession.
Yes, of course it is. Working out how to work with people is the core competency of most jobs. In this case, if you've worked out how to give the 'correct' answers on such a test, then you've got a good start on knowing how to navigate the corporate machine - and get your real job done despite corporate garbage (which in my opinion is what makes it ethically ok).
Related to this, often I see a survey question along the lines of say, "which cup do you think would hold more water" with one that looks like it'd hold a lot more than the other. I'm never sure whether to answer with the meta-analysis of the fact that it's a survey question included in my answer.
That is, if I saw the two cups in my example naturally I'd say the obvious one holds more water, but often there'd be no point in putting it as a survey question if the obvious answer was the case. Therefore I'm almost sure the correct answer is the seemingly wrong one.
So I put that as the answer and get it right. But for whoever's running the survey, I'm probably messing up their results by meta-analysing the questions.
I don't know if this question is an example, but people intentionally include "obvious" questions in surveys to check whether people are actually paying attention to the survey. There's a name for these, I just don't recall it at the moment—which shows you I don't have any survey-design experience myself :).
I think it’s perfectly ethical to game the tests, but it can be hard to tell what’s actually being tested.
One of ours was deliberately set up so that prospective hires applying for a leadership position would end up with very, very little in their empathy score if they chose answers that sound like the "right" ones for a leadership position. It was done to catch people off guard in their next interview, and see how they handled being shown a result they'd very likely not agree with.
HR departments know these things are bullshit. They know people game the systems, exactly like people practice for the coding interview. The test and the results are often extremely irrelevant to anyone getting hired, it’s how people handle their results that’s important.
Or, perhaps even more cynically, the HR people are just making up Important Tests to look like they’re doing something important, because they too just want to feed the corporate machine enough so they can feed their families.
If you want my personal opinion, I think HR exists solely to come up with reasons for HR to exist. Well, minus the administrative part of handling employees' payments (including stuff like maternity/sick leave) and the lawyers. But both of those functions could be folded into finance and general legal.
The entire branch of team-building/personality matching/management consulting/management training/management network facilitating is a fat unnecessary cow that enterprise organisations will never get rid of because they are very good at being “people skills” and positioning themselves with top management.
After a year of lockdowns, during which they've been completely unable to execute their regular function, their absence has left no mark on our organisation, though.
I want a job at a place where I can noodle around with <my hipster technology of choice> all day, but I'll settle for one that asks me to actually ship stuff, and pays the bills.
Whether or not their HR department is run by a licensed phrenologist is a distant second concern.
Maybe there's a Maslow's hierarchy of hiring needs to be defined somewhere, so after money and benefits comes work/life balance, then near the top of the pyramid comes working on things I like, and finally respecting the company I work for?
Is it ethical to give the "right" answer to a regular interview question? An interview might be a stressful situation for some people, who often suffer from a bias of downgrading themselves compared to others. So a calibrated interpretation of the questions may be necessary to provide a faithful result in the first place.
Whether Monty Hall is counterintuitive is a function of how the question is phrased.
When you phrase it in a way that underlines the mechanical nature of the host's decision, people get it right. When you phrase it in a way that suggests the host's choice is itself random, people get it wrong.
I think the first formulation primes people to think of it from the perspective of the host, which is the right perspective for this problem.
> When you phrase it in a way that suggests the host's choice is itself random, people get it wrong.
In other words, they still get it right; they get it right for the separate question that that phrasing implies.
If the host's choice is random, so that when you initially picked wrong it's equally probable that the host opens the door with the car and says "sorry, looks like you lost" (which is what I assumed when I first heard this problem, not being familiar with the show), then even if the host happened to open a door with a goat and give you a chance to switch, it doesn't matter whether you take it. People are correct that, for that question, the probabilities are one in two for each of the remaining doors.
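Both variants are easy to check by simulation. Here's a small sketch (door numbering and the host's tie-breaking rule are arbitrary choices of mine; when the host knows where the car is, which goat door he opens doesn't affect the result):

```python
import random

def play(host_random, switch, trials=100_000):
    """Estimate the win probability, conditioned on the host revealing a goat."""
    wins = games = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        if host_random:
            # Ignorant host: opens one of the two unpicked doors at random.
            opened = random.choice([d for d in range(3) if d != pick])
            if opened == car:
                continue  # host revealed the car; this game is voided
        else:
            # Standard Monty: knowingly opens an unpicked goat door.
            opened = next(d for d in range(3) if d != pick and d != car)
        games += 1
        final = (next(d for d in range(3) if d not in (pick, opened))
                 if switch else pick)
        wins += final == car
    return wins / games
```

With the standard (knowing) host, `play(False, True)` comes out near 2/3; with the random host, `play(True, True)` comes out near 1/2, exactly as described above.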
> Probably the worst example I've seen first-hand was an entire retail banking loan-approval process
There's some real doozies out there. The Reinhart-Rogoff error, which was used to justify imposing austerity on Greece [0]. The UK's COVID tracing fiasco [1]. The list goes on. People using Excel have no business making decisions that affect other people's lives.
There are. But it’s also not an exaggeration to say that entire industries run on Excel, especially the financial sector where it originated.
That’s not changing soon but it’s also worth bearing in mind that spreadsheets were the first killer apps starting way back with VisiCalc, and for good reason. Having worked with old school paper spreadsheets it’s not hard to understand why and with every update I do see these issues being addressed in really useful ways.
you're doing it wrong. first you drink all the booze. then you hack all the things. if your investors aren't down with that, they're in the wrong line of work. maybe they should consider haberdashery.
RNG quality is probably the least of your worries.
The curse of dimensionality means you often need a huge number of samples to guarantee a useful level of precision. Additionally, many quantities you'd like to know are themselves needed inside Monte Carlo simulations, and nesting one simulation inside another creates an intractable O(N^2) problem.
Here's an example that I encountered at work yesterday:
You want to know P(x>=a and y>=b) where (x,y) is a standard bivariate normal with correlation R.
You can:
1. Simulate a bunch of (x,y) and take the mean of the indicator function 1[x>=a, y>=b]. This will take whole seconds to compute.
2. Use an adaptive quadrature method like scipy.integrate.nquad. This will take dozens to hundreds of milliseconds.
3. Write a fixed-grid Gauss-Legendre quadrature routine in C. This will take less than a microsecond.
I needed this value inside a Monte Carlo simulation. If I had to simulate the solution, the problem would be intractable.
https://www.nytimes.com/2008/09/20/business/20moneys.html
https://www.nytimes.com/2020/03/19/business/coronavirus-mone...