I also agree with the parent comment here, but wanted to shed some light on your input.
The problem with your "negative" vs "positive" distinction is that it's quite subjective and not at all clear cut.
> “We want to empower marginalized groups” is positive and doesn’t much bother people who don’t think it’s necessary, because it’s not a political position you can disagree with per se
There are groups who have very big problems with this statement per se. See the numerous court cases over the past decade challenging affirmative action in university admissions. Many view helping one group as oppressing another (a view I find quite misinformed given the inequities in our society, but not completely irrational, given bad priors).
Maybe what you're trying to get at is progressive politics vs. oppressive politics. That distinction I can understand more clearly. However, when you choose to oppress a group that is itself oppressing others, maybe that is justified?
For instance, banning ISIS from access to Twitter seems like a pretty morally sound decision to me as an American. Given that, is there really a difference between banning ISIS and banning white supremacists who discuss violence? Or Presidents who promote violence against journalists? What's the moral outcome of NOT banning any of these groups?
Any choice, including choosing not to act, is morally liable. I think Gitlab's stance here is quite morally lazy, if not wrong.
Yeah, I was trying to figure out whether GitLab's position is a bit more morally consistent than it might seem at first glance, but I agree with everything you said. Perhaps we can say a progressive-vs.-oppressive split is more consistent, though not fully consistent.
Regarding affirmative action, for example: that also raises a good point, that implementing an affirmative/positive/progressive position may get hairier than a simple platitude. Platitudes and moral statements are easy to defend, but once you begin to take concrete action based on them, people may disagree with how you rationalize those actions.
I'm normally one to see both sides in a situation like this, but this one struck me differently.
YouTube appears scared of extremists. They claim to fear an anti-conservative bias. If policing blatant bigotry and hate makes you anti-conservative, there may be an issue with conservatism, not with your policy.
If mainstream conservatives have gotten to the point where they'd defend blatant bigotry, disrespect, and downright hateful behavior, and we yield to them, then we need a reality check. This type of behavior does not make our society any better, promote discussion, or spread any kind of well-being.
The same behavior would not be tolerated in many of the community spaces we share: schools, churches, theaters, restaurants. Why do we allow such things online? Why do we allow women to be harassed endlessly with death and rape threats on Twitter but not in person? I do not understand. Dismissing poor online behavior as less important and less influential on our culture may be the gravest mistake we keep making.
There's a good faith, honest, and respectful way to have differing opinions and discuss important topics. But this isn't it, and it's not ambiguous. If we want a world where goodness prevails, we have to stop being scared and start making choices.
Unambiguous definitions do not exist outside of mathematics. Furthermore, modern fascist communication strategies rely on existing within this ambiguity.
Most of the author's points here derive from the fact that C has been around for nearly 50 years and Rust a mere 8. This post should really be titled "Rust is not a good C replacement _right now_".
Yes, Rust still has a long way to go to be the right tool for all the things you can do with C today, but that doesn't make it lesser. It clearly has benefits when writing concurrent and safe code. You pay for this with a learning curve, but the Rust team has been shaving that curve down through better tooling and documentation for the past few years.
> Yes, Rust is more safe. I don’t really care. In light of all of these problems, I’ll take my segfaults and buffer overflows.
I suspect that for the author's work, these things may not matter much. But to a programmer working on any software where security is paramount (many web infrastructure pieces), this feature is golden.
I expect that as Rust matures, we'll see a solidified spec, competing implementations, and expanded build tooling. C has a long head start; not acknowledging that is just wrong.
On the other hand, Go has a very readable specification that was written early and is maintained to always be up to date.
Why couldn't the Rust team maintain a spec and keep it up to date by changing it in the same pull request that changes a language feature in the compiler? This seems like mostly a matter of discipline.
> Why couldn't the Rust team maintain a spec and keep it up to date by changing it in the same pull request that changes a language feature in the compiler? This seems like mostly a matter of discipline.
It's not that, it's that we take the idea of a spec very seriously. We want to have a spec that's extremely solid, and so there's a lot of foundational work that needs to be done first. That work has been ongoing. Not all specs are created equal, and good ones take a lot of time.
For comparison, C was created in 1972, and the first specification happened in 1989.
Okay, but this seems like letting the perfect get in the way of the good? A maintained, readable, informal spec should be quite useful as a starting point for writing a more formal spec later.
It need not be definitive. It would be quite reasonable to point out areas that aren't nailed down yet - this is also useful for the reader.
It's useful but I'd like to see a bit more top-level completeness. Every feature should be listed and have at least a cursory description, even if it's not entirely nailed down.
(Maybe it already is like that, but the disclaimers say it's not.)
I don't know what the current state of affairs is, but this is something they have rightly delayed speccing. The original borrow-checking rules were quite byzantine compared to what we have today.
I don't know whether there is active work toward relaxing them further (NLL landed, I believe). However, a ton of work has gone into getting them out of the way as much as possible.
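As a concrete illustration of how far the borrow-checking rules have been relaxed (my own sketch, not code from the thread; `nll_demo` is a made-up name): under the original lexical borrow checker, a shared borrow lasted until the end of its scope, so the pattern below was rejected. With non-lexical lifetimes (NLL), the borrow ends at its last use, and this compiles on any recent stable Rust.

```rust
// Hypothetical sketch: pre-NLL, the lexical borrow checker kept the
// shared borrow `first` alive until the end of the enclosing scope,
// so the later mutable `push` was rejected. NLL ends the borrow at
// its last use, making this compile.
fn nll_demo() -> (i32, Vec<i32>) {
    let mut v = vec![1, 2, 3];
    let first = &v[0];      // shared borrow of `v`
    let first_val = *first; // last use of the borrow
    v.push(4);              // mutable use: OK under NLL, error pre-NLL
    (first_val, v)
}

fn main() {
    let (first, v) = nll_demo();
    assert_eq!(first, 1);
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```

The point is that no runtime behavior changed; the compiler simply stopped rejecting code that was always safe.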
The Rust team adopted a rule in December 2016 saying that all new language features must be documented in the language reference before landing on the stable release branch.
It's more complex than that. From a development standpoint, doing it this way was a real pain, because the reference lives in a different repo than the change itself.
We decided to relax this rule last year; instead, issues are filed and then filled in afterward. This has its own set of drawbacks.
It's not unreasonable to expect Rust to have looked at history and improved upon it. What does it mean to start out as less than C? We don't want to wait 50 years for another language to 'mature'. Why can't it start out as good as, or even a step ahead of, what came before?
That may be the source of the disappointment surrounding deployment of Rust - why is it behind and not ahead of other tools?
That doesn't make any sense. What do you mean "less than C"? Rust definitely looked at history and improved upon it; that's the whole point of the language! The author's gripe was that they feel Rust is evolving quickly, well beyond C (more like C++), not that it's failing to catch up to C. And do you really expect a language to just appear one day, fully grown, like Venus from a seashell?
The author's view that Rust is lesser is just that: an opinion. It does have large advantages over C: memory safety, an incredibly powerful type system, a compiler with detailed error messages, a powerful build tool (Cargo), an expressive macro system, etc.
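To make the type-system point concrete, here is a small hedged sketch (`find_even` is an illustrative helper I made up, not anything from the article): `Option<T>` pushes the "no value" case into the type, so the C pattern of returning a possibly-null pointer and hoping the caller checks it becomes a compile-time obligation.

```rust
// Illustrative only: Option<T> makes "not found" explicit in the
// signature, so the caller cannot dereference a missing value. The
// compiler forces handling of both Some and None.
fn find_even(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}

fn main() {
    assert_eq!(find_even(&[1, 3, 4, 5]), Some(4)); // value present
    assert_eq!(find_even(&[1, 3, 5]), None);       // no null, no crash
}
```

In C, the equivalent would typically be a pointer return or a sentinel value, with the "did you check it?" question left entirely to discipline.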
Others who have built applications with Rust seem to be coming away with a much different impression than the author. He does mention some gaps (lack of a spec, an unstable ABI), but these are not hard stops for all use cases. The main thread of complaints appears to stem from the lack of choice in tooling and compilers.
I'd wager that the advantages outweigh the drawbacks for many projects, though not all.
There's a huge difference between what Nielsen does and what Facebook did.
1) Nielsen doesn't explicitly target children.
2) The data that Nielsen collects is far less intrusive than what FB collects.
3) The consumer is much more likely to be informed about the data Nielsen collects, whereas with FB, it's unlikely that a user (especially a minor) understands the extent of what FB was collecting.
And yes, Facebook was requiring "parental consent" to collect this data, but, as we all know, that is very hard to verify, and children have been ticking the "I'm 13 or older" box for years without their parents knowing.
What Facebook did clearly crossed a line. End of story.
I've been sent those packets offering to become a Nielsen family, and looked through the included description of how it works.
1) Nielsen does explicitly target children, insofar as Nielsen families are supposed to give them data on the usage habits of every member of the family, including the kids. That said, the decision of whether or not to become a Nielsen family remains firmly in the hands of the heads of the family, perhaps regardless of the consent of its younger members.
2) They do also now track participating families' Internet usage at large, like Facebook's app was doing. I don't know whether it relied on a VPN or some other technology.
3) I think that most people could understand the TV consumption tracking that used to be Nielsen's bread and butter. But, at least based on the recruitment materials that were sent to me, I didn't have a clear understanding of the extent or nature of Internet usage data collection. I assume the story would be similar for most other users, especially minors.
Based on that, I think a lot of these comparisons are comparing what Facebook is doing now to what Nielsen was doing 20 or 30 years ago. That is a fair comparison to explore, but let's be careful not to absolve the Nielsen of today from any scrutiny in the process.
They're really pushy about it too. They selected my house and sent a gift basket and some guy came to the house three times emphasizing the "prestige" of being a Nielsen house because you're supposedly helping to define what shows get made. I can't imagine what kind of person would be swayed by that argument.
My uncle used to tell a story about taking a studio tour in the 1960s where part of the tour was being a test audience for Lost in Space (he was a kid at the time). The whole family had a pad with a dial and you could turn it one way to display approval and the other to give a thumbs down.
He hated the show and tried to indicate as much throughout the showing. But when the lights came back on he realized that he'd had the pad backwards the whole time.
He never forgave himself for that one time he "got Lost in Space green-lit".
I could see it being compelling decades ago. Nowadays, though, I'm guessing fans of niche programming are increasingly cord cutters who don't need Nielsen to ensure their TV consumption is being tracked.
Totally non-scientific evidence: The only acquaintances I can think of who still have cable TV subscriptions do so because their TV consumption is dominated by sports.
Good information; based on that, I agree Nielsen is doing similarly bad things. One distinction is that a child is unlikely to sign up for these services without their parents' knowledge.
I'm not here to defend Nielsen at all, but I do think Facebook has a bit more responsibility to make the right decisions here given their ubiquity, reach, AND the invasiveness of a root certificate that gives them access to encrypted traffic and even text messages (really?).
I'm not sure what you mean by this, because Nielsen absolutely targets children. The parents explicitly consent to having the box in the home, but the box constantly monitors what is on the TV and invasively forces you to tell it, every 30 minutes or so, exactly who is watching the screen.
My family was a Nielsen family for a time when I was in college and my 8-12 year old brothers were living at home.
Nielsen asks the parents to consent to monitoring. The parents are adults, and adults are in a position to be able to give such consent. Parents routinely make decisions for their children that the children are not in a position to make on their own. This ensures that children, who do not have the education and life experience to be able to make such decisions on their own, have their interests looked out for by responsible adults.
Facebook skipped the parents and pitched their app to the kids directly.
There is no invasion like what you're mentioning in current (or at least recent) systems. I was in a Nielsen household. They use audio tracking via HDMI/optical audio to "see" what's being watched, and they can of course tell which TV it's coming from, but that's the extent of it.
I wonder what the actual effects of saying “Period! End of Story!” are in a discussion forum.
Obviously someone is still free to respond, and then that won’t be the end of the discussion. So what’s the point of saying it? Basically, it seems to escalate the stakes: “if you disagree then you are a LABEL!”
I understand your sentiment here, but the broader point is that we as an industry have been historically timid about taking hardline ethical stances. In my opinion, Facebook's behavior here is clearly wrong, and I'm going to say so.
By taking a hardline stance, I'm opening the opportunity to prove me wrong. This is an open forum and I'm not calling anyone names for disagreeing with me. In fact if you do have a valid counterargument, PLEASE DO disagree. I'm more concerned about getting to the truth than being right.
But if there isn't a counterargument, then I want my comment to stand out as a stark reminder that we should not accept or be complicit to these types of practices going forward. If we don't take these types of stances, I do not think we will change the culture in tech.
Agreed. If the original commenter cannot make a cohesive and convincing argument for why what happened is wrong, then they ought not say anything. If they believe their argument is convincing, then these kinds of statements are unnecessary.