So has the internet completely done away with innocent until proven guilty?
The primary benefit of things like MeToo was supposed to be people being able to take action against individuals who otherwise would have been expected to squash things due to undue influence on law enforcement, the media, and politics - like Harvey Weinstein.
But in cases like this, it seems quite dystopic that a D-list celeb, likely with little to no major influence, is suddenly getting completely cancelled across an entire swath of avenues and platforms, based solely on accusations.
"innocent until proven guilty" only applies in a court of law. Similar to when people cite the 1st amendment in situations where a private company is taking action, this phrase is meaningless here. A private company can do what it wants within the bounds of the law.
> "innocent until proven guilty" only applies in a court of law.
“Innocent until proven guilty” is a philosophical concept that many legal systems subscribe to in the context of criminal law.
> Similar to when people cite the 1st amendment in situations where a private company is taking action
Indeed, it’s very similar in the sense that the concept of the freedom of speech goes way beyond the 1st amendment. It existed before it. And it is the first amendment that exists because of the freedom of speech, not the other way round.
> A private company can do what it wants within the bounds of the law.
Yeah, including immoral actions that others may disagree with.
The whole philosophical backing of both "freedom of speech" and "innocent until proven guilty" is that the government doesn't itself have civil rights, only the rights explicitly outlined to it in the founding documents of that government (e.g. US Constitution).
Once you venture into private parties evaluating other private parties, you encounter a collision of rights. It's still freedom of speech and association to not want to do business with certain people, and as long as those certain people aren't of a protected class, this falls well within the moral concepts of both free speech and presumption of innocence.
Let's go more extreme. Tech companies are free to not host Nazi content. The US govt is NOT free to lock someone up for being a Nazi. That's the power of the 1st amendment.
Why can I never find you 'Corporations are people' advocates when corporate manslaughter is being discussed, for example when Boeing killed 200 people with a faulty plane?
Thanks for making the correction. Luckily we don’t live in a world based purely on Lockean principles, but rather a practical one, with a society much larger than existed in his day.
John Locke is renowned for being pragmatic, not an ideologue. I'm not sure why societal size would matter, but he lived in a period of social upheaval with lots of negative effects from intolerance and partisanship. Not really that different from today.
> Yeah, including immoral actions that others may disagree with.
The morality in this instance does not follow this principle. If people find these allegations credible—and most should—the morally correct action is to deplatform him and delete his content.
> If people find these allegations credible—and most should
Why should most people find these allegations credible? I do not believe there is a police report, an arrest, let alone a trial. These are currently just allegations; their credibility has not been adjudicated.
One might evaluate the situation based on what I think is called a "preponderance of evidence", combined with an understanding that the legal system is both slow and tends towards innocence unless a crime is proven "beyond a reasonable doubt".
A person may recognize how slow the legal process is, and how its conclusions can differ from what seems obvious and a reflection of reality, and therefore arrive at a conclusion well before a system designed to be conclusive would.
The law is more about what can be proven than it is about what is true, and for people who know that, legal judgement stands separately from moral evaluation.
What evidence has been provided to meet this preponderance of evidence standard you are putting forward for moral evaluation?
You have one party making an allegation and claiming they have documents to back it up, and the other party denying the allegations with claims of their own exculpatory evidence. Nothing has been shared with the public by either party for me to evaluate who has the preponderance of evidence.
I do believe YouTube (or any other private platform) can and should be able to set its own rules for participation, so I see no issue with what they did here. If being on that platform is a right, then we should not rely on a private party to guarantee it; we should make the necessary legislative changes.
I would just love to understand why I should be outraged at this individual before anything has been presented before me so that I can evaluate for myself.
> ...to what may be obvious and a reflection of reality
And how exactly is it obvious that the guy is guilty? Just because he makes click-baity divisive videos, might allegedly have been a playboy in the past, and you don't like him, doesn't equate to "obviously he must have done it".
> "innocent until proven guilty" only applies in a court of law
No, it doesn't "only" apply in a court of law. I choose to apply it in my own psyche (which breaks the "only"), and I choose to do so because I understand the reasons why a court of law applies the principle.
Just because the whole village is wielding pitchforks doesn't mean it's rational for you to also do the same.
They used it in the context of the question of whether "we, the internet" have done away with it. So clearly it never referred to the legal system, and diverting to that is just kicking up sand to stop the people who are interested in discussing the question as it was asked from doing so in peace.
How do you propose this actually work out? Every time youtube, twitter, facebook, etc wants to ban someone they have to submit a request to the government or be subject to its oversight? That's far more dystopian.
Personally, I wouldn't mind if the judicial branch was in charge of arbitration.
These companies are not obligated to pay creators. They pay them because it's profitable, but the moment money exchanges hands and someone's livelihood depends on them, the relationship changes.
At that point, if you leave creators without recourse, you have only changed the labels and thrown hundreds of years' worth of labor rights down the toilet.
This is a good point that I don’t see very often. Video producers who have an explicit (or even an implicit) agreement with YouTube and depend financially on the earnings that it provides are not just “creators” who can “go somewhere else”. Surely, one could say that to any worker: don’t like the job? Go somewhere else. And still we have fought so hard for labor rights that give employees more agency and some level of protection against abuse.
Makes me wonder whether receiving regular earnings from any online service should legally redefine the relationship between the user and the service into something closer to an employment contract.
You say this as a person with no fear of getting unpersoned when the wind changes, or a cosmic ray flips a bit. It never happened to you and you don't have empathy for the wide range of people it happens to (some of them as innocent as snow), so you don't quite have the fear of it in your bones. Until you're the one to get unpersoned, and then it's too late.
Aren't they still publishing his content, just not running ads and paying? The US government will do fuckall about that, even if platforms are forced to be quasi-national entities subject to the First Amendment.
Or alternatively companies have to provide clear and explicit rules about what is permissible on their platform and if you feel you're wrongly censored or removed from the platform you should be able to take legal action.
I'm fine with YouTube not wanting to provide a platform for people who they feel are harmful, but they need to define that in an explicit way so that these decisions are not made arbitrarily.
I believe Brand's primary job for the last few years has been as a content creator. Given this, I think it's reasonable to expect he should have some legal rights. Personally I don't see a huge amount of difference between an Uber gig worker and a YouTube content creator. Both should have some basic rights regardless of whether they're technically classed as "employees".
Define "clear and explicit rules". Does the constitution of say United States qualify as examples of clear and explicit rules? If yes, then even after roughly a quarter millennium, there are still hundreds of thousands of cases filed each year.
I don't need to define it. It would be open to reasonable interpretation.
If an online platform creates an unclear or vague rule and uses that rule to remove a user, then that user could pursue legal action. If a court agrees that the rule (or rules) used to remove the user from the platform is unclear or too vague from the perspective of a reasonable person, then the platform would need to pay out for its mistake.
Therefore it would be in their interest to ensure they have clear and explicit rules.
I don't think this is hard, and we shouldn't pretend it is. It's just that regulators in the West would rather force Apple to adopt USB-C and destroy E2E than protect us from arbitrary corporate censorship.
If by "rules" you mean vague references to "harm" then sure.
My use of the word "explicit" here was intentional. As it stands, the "rules" may as well just read "if we don't like what you're doing on or off our platform, we reserve the right to fire you as a content creator". And again I'll note: if you're fired as a content creator for some arbitrary reason, you have no way to challenge the decision.
I don't think this is acceptable. I think Google should ultimately be able to run their platform however they like, but they have a responsibility to make those rules clear when people are dependent on them for their income.
Usenet was a set of fiefdoms mostly administered by academics in CompSci departments, and proved utterly unequal to its first real crisis*. Distributed systems work great as long as they're new and everyone is participating in good faith most of the time. In adversarial situations, they're rarely able to adapt flexibly enough, partly because the networked structure imposes a severe decision-time penalty on consensus formation. A negligent or malicious attacker just has to overwhelm nodes with high betweenness centrality and the whole network fails.
Immediately following crises everyone talks about making the network more resilient and so on, but it never fully recovers, because everyone intuitively knows that establishing consensus is slow and bumpy, and that major restructuring/retooling efforts are way easier to accomplish unilaterally. So people start drifting away, because unless there's a quick technical fix that can be deployed within a month or two, It's Over. Distributed systems always lose against coherent attackers with more than a threshold level of resources, because the latter have a much tighter OODA loop.
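To make the point about high-betweenness nodes concrete, here is a rough sketch (my own illustration, not something from the comment above) of the kind of targeted-removal experiment network scientists run. It uses Python with networkx; the graph model (Barabási–Albert), the size, and the "top 1% of nodes" threshold are all arbitrary assumptions chosen just to show the effect:

    # Illustrative sketch only: how removing a handful of high-betweenness
    # nodes fragments a network. Graph model, sizes and the "1% of nodes"
    # threshold are arbitrary assumptions.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)  # synthetic scale-free graph

    def giant_component_fraction(graph):
        """Fraction of nodes in the largest connected component."""
        if graph.number_of_nodes() == 0:
            return 0.0
        largest = max(nx.connected_components(graph), key=len)
        return len(largest) / graph.number_of_nodes()

    print("before attack:", giant_component_fraction(G))

    # "Attack": take out the 1% of nodes with the highest betweenness
    # centrality, i.e. the nodes most traffic has to route through.
    centrality = nx.betweenness_centrality(G)
    targets = sorted(centrality, key=centrality.get, reverse=True)[:10]
    G.remove_nodes_from(targets)

    print("after attack:", giant_component_fraction(G))

The exact numbers depend on the graph model, but targeted removal of the most central nodes degrades connectivity far faster than the same number of random failures would, which is the asymmetry the comment above is pointing at.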
Exactly, and look what happened to Usenet. People abused the commons and we lost it to spam. Unmoderated networks always fall to bad actors.
I'm building a p2p social network and struggling hard with how to balance company needs, community needs, and individual freedom. A free-for-all leads to a tyranny of structurelessness in which the loudest and pushiest form a de facto leadership that doesn't represent the will of the majority. On the flip side, overly restrictive rules stifle expression and cause resentment. These are hard questions and there is no one answer, except that unmoderated networks always suck eventually, so the question is one of line drawing and compromise.
This is such a common thing for people to say I have to wonder if it's propaganda from big corporations. The idea that core tenets of our civilisation are invalid because "it's a private company" is insane. These principles are based on practicalities, not technicalities.
There's this really neat section of law known as administrative law. One of its tenets is that you are allowed to have pretty much any rule you want, but you have to apply it in a reasonably impartial and fair manner. This means you can't set up arbitrary rules that you enforce capriciously.
It doesn't have to apply everywhere but it's still a good policy in a lot of contexts. I think a massive general audience platform is a good example. If this were, let's say, an online community of survivors of abuse, maybe that sort of prudence could reasonably take a back seat.
Even then it’s a rather unfortunately named legal principle.
It would’ve been better for it to be called “not-guilty until proven guilty” since criminal courts aren’t in the business of establishing innocence nor do they have the power to declare someone innocent.
But I guess that doesn’t roll as nicely off the tongue.
Then let’s change the law. It’s obvious over the past few years that companies can’t be trusted with freedom of association or freedom of speech. Let’s strip them of both.
If you are incorporated (and therefore benefit from government-provided protection from liability and lower tax rates) then you no longer get to choose your customers; you’re a common carrier and must provide the same service to all customers. You can only terminate a customer for non-payment (if you’re a paid service) or if the customer takes actions that directly threaten your business (eg attempts to hack your service).
Social media companies may no longer promote or suppress content; they can only provide tools to let users do so themselves (eg filter/block/subscribe/tag). Advertisers can use similar filters for ad placement.
In the beginning YouTube was popular and had very little moderation. You could watch illegal streams of many films and movies and you could find some porn before it’d be taken down.
Advertisers are the ones demanding moderation, not users, so as to protect their bottom line. It’s disingenuous to say otherwise, and it ignores a multitude of services that became, and still are, incredibly popular with little moderation.
>YouTube was popular and had very little moderation.
Emphasis on the AND. There is some correlation between Youtube's popularity and the lack of moderation but that isn't what made them popular.
I do agree on advertisers demanding moderation, and I honestly don't blame them. If I made a product and I'm paying good money for advertising, I wouldn't want my products to be even remotely associated with anything that might promote controversy AND lower sales. Emphasis on the AND. The company's job is to make money, and if that means embracing censorship or decrying it, then they'll do it. Hell, they'll even do both at the same time. Advertisers are a leech on society and I hate that I'm defending them. But they pay the bills so....
That doesn't mean the vast majority of users don't want moderation. Every "free-speech" alternative to an already existing platform that I've visited has been complete shit, filled with nutjobs that couldn't play nice with the normal folk.
You're making the exact same logical fallacy you're pointing out. The reason free speech alternatives tend to be filled with less than desirable types is precisely because they're alternatives. Who are you going to disproportionately attract as early adopters? It's the same reason anti-Musk driven alternatives to Twitter are also failing. Instead of having a normal sampling of society, you end up with a hardcore bias which is off-putting to almost everybody except those who share that bias.
I also think Threads is perhaps a reasonable challenge to the idea that society wants moderation. Unlike the anti-Musk Twitter alternatives, it started with a massive and mixed userbase and was a completely viable alternative, yet it almost immediately collapsed. It's really hard to see why without looking to the fact that it was also featuring the sort of "moderation" that historically only comes as a bait-and-switch after a platform is extremely well established.
The reason people stopped using it was that, after the initial install, they realised it was missing basic features like a web app, search, a chronological feed, etc.
Those have now been added, and reports from popular users are that engagement across the board is increasing again. Far from collapsing, it's well on its way to being a true Twitter alternative.
Multiple third-party reports [1] are showing the site has lost ~80% of daily active users, and among the < 10 million daily active users left, time spent in the app has decreased from nearly 20 minutes to less than 3. I'm left referencing third-party sites since Meta stopped officially reporting its numbers when the app started cratering. That scale of collapse is unlikely to be due to the lack of effective search or a chronological feed.
> Since when has advertisement even implied endorsement of nearby content?
It's not an endorsement. People make associations all the time consciously or not. There are obviously positive and negative associations. And if it's within your power to reduce the negative associations which might impact the perception of your product then why won't you do it? Advertising is primarily an appeal towards emotion not logic. It's manipulative by nature.
I don't know what I'm saying that's so unreasonable.
Also, I can't control whether some homeless person pees next to my billboard, but if my competitors also have billboards in the area then I may still come out on top. But if I can throw my weight around to move those homeless people elsewhere, preferably to my competitors' billboards, then I'll do it. This isn't a moral argument.
Nobody associates Coke with the reek of bum piss because they encountered a messy billboard. This is simply an unreal line of argument.
It certainly would be interesting if we lived in a world where advertisers refused to run ads in stadiums of losing teams, ran their ads only on sunny days, and only on positive, uplifting tv episodes while entirely avoiding shows about serial killers. We can fantasize, but the actual world has never worked this way.
This seems like a generalization with as many counterexamples as examples. Also, users don't actually want censorship, they want a tailored experience that filters out whatever content they don't like.
> Social media companies may no longer promote or suppress content; they can only provide tools to let users do so themselves (eg filter/block/subscribe/tag)
Users don’t want the responsibility of filtering out CP, gore, sexual violence, etc. I would bet the average user actively wants that content suppressed. Just look at any of the cases of social media moderators developing PTSD from their work.
So if I run a social media site, I would be required by law to carry hate speech, incitement to overthrow the government, rape threats, heretical religious statements, fascist propaganda, and covid conspiracy videos? That's gonna be a no from me. Freedom of speech does not imply a mandate for others to broadcast your speech.
"innocent until proven guilty" and "freedom of speech" are principles codified in law.
The position that only the government is bound by "freedom of speech" is, at the very least, weird in an international context where things that are not the US government are expected to respect people's freedoms.
It is also perfectly legal to do a lot of bad things, e.g. buying the products of slave labor in other countries, or blood diamonds, or buying stock in companies known to pollute with wild disregard.
Also in the US:
> "innocent until proven guilty" only applies in a court of law.
is misleading; the more precise version is that "innocent until proven guilty" only applies in criminal courts.
This is not true. The independent corroborating evidence is also material. Contemporaneous records from a rape clinic is powerful evidence.
More generally, innocent until proven guilty is a legal concept, not a social one. From a social perspective, that's never been the standard, nor should it be. Bad folks have often been shunned without convictions - that's why the norm has been "resign in disgrace," not "get thrown in prison"
> innocent until proven guilty is a legal concept, not a social one
Yes, legalism is often taken too far, but that doesn't mean that mob rule is a good thing.
> Bad folks have often been shunned without convictions
Are you sure about that? I'd sooner say that only losers get "shunned". Powerful politicians don't get "shunned" for their corruption, actually sometimes it seems to help with their popularity. Likewise with mobsters?
Mobs go after the weak, not after the guilty. Whether they're lynching and necklacing their neighbors or "canceling" minor celebrity cranks.
Your rhetoric doesn't sound far off from that of people who called BLM protests mob riots. But they were protesting against militarized police, hardly the weak.
Or hell, from the other side of the political spectrum, Jan 6th was some real mob mentality behavior. But I'd hardly consider the "US government" weak.
Watergate happened a long time ago, not sure how relevant it is nowadays. It seems like the standards that politicians are held to have since crumbled, IMO. Nowadays it seems to be quite difficult for a politician or party to harm the public good or democracy enough to decrease their chances of reelection. The USA seems somewhat better in this respect than the EU, though.
> Anthony Weiner
Is there any hint that he was actually corrupt?
> Roy Moore
As far as I can see, Moore remained successful despite his corruption, even though he was actually sanctioned for it. In the end, his fall was caused only by moralizing allegations about how he spent his private time.
> John Edwards
Again, it seems like he only lost his popularity due to his immoral actions as a family man, not as an official.
> compare YouTube demonetization to historical racial violence
Various kind of (physical, murderous) mob violence still happen regularly around the world. Some necklacing videos are available on the Web.
Think about what makes this alleged crime "really bad", and then consider if that might make it difficult for a victim to come forward. There is no statute of limitations for sexual assault in the UK.
“Innocent until proven guilty” is an incredibly high burden of proof that we reserve for criminal trials. In other contexts, this is not the appropriate standard: civil suits, for example, use a “preponderance of evidence” standard. Non-state actors using a lower burden of proof is entirely appropriate.
Yes, but the courts are legally empowered to lock someone in a cage for years. So they should be working by a different standard than a company firing someone.
I don’t know many people that would prefer the latter, since being locked in a cage also comes with losing your job, a horrible accusation proven true (or admitted to) in court, and a public criminal record.
Losing your income and being publicly shamed sucks, but you still can rely on close friends and family, a public safety net and lawsuits (if you’ve been defamed or illegally fired), while enjoying sunshine, fresh air and freedom of movement.
Yes, false accusations are indeed a bad thing. But the point is that people choosing to no longer associate with you is very different to people forcing you to live in a cage under the threat of violence.
Both are unpleasant. One is worse. Thus the burden of proof is different.
The only system? Which courts? Not all courts use the same system. The UK court system is different than the US system. Criminal court is different than civil.
I disagree with the premise of your comment but on a factual note: Russell Brand has been litigious on this very issue, he has threatened to take legal action and taken legal action against people who have spoken up about him. He has been widely "known" to be a predatory rapist for years but has used his money to intimidate those who wanted to speak up.
Innocent until proven guilty is a legal framework, it has nothing to do with popular actions, and never has. All it basically says is that Russell Brand cannot go to jail until he is proven guilty.
There are no laws requiring the public to treat an accused person as if they never committed a crime until said crime has been proven. It is up to the public whether they believe the victim or the accused. In this case YouTube has decided to believe the victim. Perhaps YouTube, like so many others, has deemed the accusations credible, and they are in their full right to act on these beliefs.
FTFY: Youtube has decided to believe the multiple independent lines of evidence which came out of a four year investigation by multiple journalists across more than one organisation.
This is not currently a legal matter, but a matter that concerns a public figure's ethical standards. Multiple independent lines of evidence is a powerful thing.
I’m under no legal obligation either to deny the allegations until proven. And in this case I choose to believe the victims. And I will keep calling them victims until proven otherwise.
You are saying it is wrong of me to call the victims, victims, and should instead call them ‘accusers’. I’m saying I am under no legal obligations to do so. I believe their stories and I believe they are victims, so I am allowed to call them victims.
Now, I think it might be slander to call the accused something like an abuser, so I don’t do that (yet). However, there are no slander laws which disallow me from using words which indicate that I believe the victims, so I’m not calling them ‘accusers’; I call them victims, because that is what I believe they are.
Of course you're allowed - there's no parent enforcing things and you aren't a child, and shouldn't be thinking that way.
But this is a strange response: "It's not illegal for me to say this!" Would you accept that as a response from a flat earther if you challenged something they stated as fact?
> So has the internet completely done away with innocent until proven guilty?
Yes. But to be fair, it wouldn't be out of character for Russell, if you actually know who he is, so maybe that's why the internet finds it so easy to ignore silly things like "evidence" and "proof".
Your assumption is that his content was removed because of the allegations, which is potentially not true. While it's very likely the allegations are what drew attention to it, that doesn’t mean there wasn’t a bunch of stuff there already that violated policies – especially given the content he had doubled down on.
All Youtube did was cite their “Creator responsibility“ clause[1] as the reason. This could have included a myriad of violations, especially considering the type of content he was producing.
Also, if you read the allegations, he very much was in the protected status you mention. “Open secret”, lots of people covering for him, running interference, etc etc. Calling him a “D-list celeb, likely with little to no major influence” illustrates your lack of research into the issue.
> there wasn’t a bunch of stuff there already that violated policies
Are you suggesting that it could be that his existing videos were in violation of community guidelines? Is there any evidence for this? I've watched some of his videos, and this seems like a rather silly accusation.