Hacker News: safety1st's comments

I appreciate that there are people out there working on stuff like /e/OS, but the number one question I have when I learn about a mobile OS that isn't iOS or "Googled" Android is: will the banking and payment apps I need to operate in the modern world run on this OS?

A lot of people don't think this way because they haven't had any problems. But then one day it happens to you and you realize, ok, this is the one thing that matters - you're in a cashless store and the only way you can pay for your meal is to use Approved Apple or Approved Google operating systems.

Where I live, the app my electricity utility provides for viewing and paying my account DISABLES ITSELF FOREVER if you so much as enable USB debugging on your phone (even after you've disabled it again).

To their credit, GrapheneOS maintains a global database of which of these apps work and which don't. They're the only ones I know of who do, so a thousand upvotes to GrapheneOS.

But for my banks, the records in that database are grim. They won't run on Graphene, and they don't respond to reports about it.

One of my banks just discontinued its web UI because "people don't use it anymore, they use the app only."

This is how they're going to get us, folks. This is how we're going to lose it all. Writing code alone will not solve this. It will require some kind of collective action to defend our liberties. Some parts of the world are already lost. So this situation will likely come to a jurisdiction near you eventually: to make a transaction you will need permission from Google, Apple, Visa, Mastercard, or it won't happen. Then that four company list will start to shrink.


> the app my electricity utility provides for viewing and paying my account DISABLES ITSELF FOREVER if you so much as enable USB debugging on your phone (even after you've disabled it again).

These problems are self-inflicted by the apps themselves. They have nothing to do with the OS; the apps simply don't work. Complain to the companies that push these broken apps on you.

Would you buy a microwave oven that kills itself if you play the wrong kind of music in your kitchen?


The problems may be inflicted by these apps but the reality is that in many cases you're stuck with them. Electric company freezes your account if you enable USB debugging? Well, you can't choose a new electric company. We can complain to these vendors all we want but they just ignore us.

So these problems become problems of the OS, not because the OS has a problem, but because it affects the reality of using the OS.


Is it such a burden to write them a letter stating, "Because you have decided to disable my electronic access, I am notifying you that I withdraw my consent to e-delivery. Please provide me statements and directions to mail you a check for payment." Maybe spend 20-30 min to find the specific laws that give you the right to do that and remind them of their timelines to comply.

Send a letter like that by certified mail. It gets attention, and if you batch your bills, the time to write and mail a check really isn't more than the time spent using an app.

We do have ways to push their inconvenience back on them.


It is great that you have the right in your jurisdiction to do that. Where I am, they just shut off your power if you don't pay.

It's a big and hairy world out there. Having lived on three continents and traveled to some pretty wild places, I always get a kick out of seeing which rights people have and assume that the rest of the world also has.


This only works if the company cares though.

This is a pretty general recipe to make a company care.

A professional letter letting them know that you know your rights, and that they know your rights (their receipt of your letter is your proof of that), is what the beginning of someone losing his bonus over a compliance incident looks like.

Companies don't care about you, or even shareholders, they care about the incentives of leadership.


Not everyone has the time and resources to battle their utilities and bank(s). I know it’s important and sustained effort is necessary even if it’s hard, but we are talking about massive populations here and most people simply can’t or won’t fight that battle on their own. Organizing a large pushback is also a huge effort. And at the end of the day, there is an easy solution for folks: buy a “proper” smart phone that “just works” because it solves the problem now.

We’ve gotten to the point where unfortunately it is a luxury to fight for your privacy and consumer rights.


> Not everyone has the time and resources to battle their utilities and bank(s).

They do, they just don't want to. Typing a short letter and mailing it is very little effort. Less so with AI these days.


Fighting for your rights is usually not the easy path, yes. It's been like that since forever.

Well, we gotta choose our battles, right? It's easy to rally collective support against visible oppression and fascism. Everyone sees it on the news. It's hard to get support for "lemme use a smartphone that isn't Apple or Android." The average person doesn't care.

Not saying that we should just give up. But as the above poster said, it's a luxury that takes a lot of time and resources.


Yes that is correct. So what do you suggest people do? What is a realistic way to move the needle? Because I can tell you now that (as I detailed in another comment) asking someone to change their banks, utilities, etc. to accommodate their smartphone choice is not a serious suggestion, nor is asking everyone to wage war with all the services they engage with. They’re simply not going to do it no matter how many passionate speeches or flippant comments you throw out there. They’re going to buy the thing that solves the immediate problem of not having access to critical services in their lives. If their amazing open source phone can’t pay their bills, it’s going in the bin.

To be clear I want the same thing you do. But just going “do it it’s important” is not going to make it happen.


It obviously depends on where you live. In my country you certainly can choose a new electric company. I mention that because we really should use consumer choice to overcome these types of problems where we can. I.e., if you can switch to a bank/electricity provider/whatever that has a less terrible app, it's really good to do so.

I agree on principle. I'm not sure if everywhere in the US is like this, but everywhere I've lived in California basically had a monopolistic electric and gas provider.

For things where we do have a choice, yes I agree.


You’re implying we have more choice than we do and asking “the average joe” to change banks to accommodate their smartphone is not a serious suggestion.

My utility company, for instance, literally won’t let you navigate their site with a VPN running. These kinds of practices are commonplace and becoming standard.


I promise your electric company accepts payments outside of an app on your phone. I further promise that other banks are available that don't have terrible apps. These problems are way more surmountable than you're painting them here.

Plus, you can still do electronic banking and payments. Use your computer; it's a much better experience anyway.

Until they start locking that behind shitty proprietary "security" solutions too.

The alternative they accept is traveling down to their office and handing them cash, no joke. Phone app or cash. No website, never has been one. No snail mail because they "modernized" and discontinued it some time ago.

But I'm OK because one of my banking apps has some method of reading my contract number from the disabled electricity company app, and telling me how much I should pay and then it fires off a payment to them. Even though I can no longer use the electricity app directly because I enabled USB debugging once, the banking app is somehow still able to pick up this info from it.

Of course, said banking app refuses to run on Graphene or any of these other Google Play-less OSes, and the bank doesn't respond to inquiries about that issue, multiple people have tried.

The other bank I use does respond, and says they'll never run on "alternative OSes" because "alternative OSes are too insecure." They don't respond to followups.

I'm just saying man. A lot of people think this stuff is trivially solved because there is an option available to them in their home country. You don't know how big and nuts this world of 8 billion people and 200 countries is. This stuff varies beyond imagination, sometimes for the much worse.


Can't you pay with a card?

My main takeaway from all of this is that Hegseth seems deeply unfit for his job. First there was the Signal leak and now this.

Look, Anthropic is not going to be designated a supply chain risk. 80% of the Fortune 500 have contracts with them. Probably a similar percentage of defense contractors. Amazon is a defense contractor for example. They'd have to remove Claude from their AWS offerings. Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts, and for what? Why? Because Hegseth had a bad day? Because he's a sore loser?

If he's decided he doesn't like the DoW's contract then he can cancel it, fine. To try and exact revenge on the best American frontier model along with 80% of the Fortune 500 in the process, to go out of his way to harm hundreds or perhaps thousands of American firms, defies all reason. This is behavior you would expect any adult would understand as petty and foolish, let alone one who's made it to the highest ranks of government.

So I think it's just not going to happen; Trump's statement on the matter notably didn't mention a supply chain risk designation. This suggests to me that Hegseth went off half-cocked. The guy is a liability for Trump at this point; I'm guessing he won't last much longer.


> Everyone running Claude on AWS, boom gone. The level of disruption to the US economy would be off the charts

seriously? :)


My first reaction is that this is an insanely bad law:

* The signal has to be made available to both apps and websites

* So if you dutifully input valid ages for your computer users, now any groomer with a website or an app can find out who's a kid and who isn't. You just put a target on your kid's back.

* A fair share of parents will realize this, and in order to protect their children, will willfully noncomply. So now we'll have a bunch of kids surfing the net with a flag saying they're an adult and it's okay to show them adult content.

* Some apps/websites will end up relying on this signal instead of some real age verification, which means that in places like porn sites where there's a decent argument for blocking access from kids, it'll get harder. Or your kid will get random porn ads on websites or something.

So basically unless this thing is thrown out by the courts, California lawmakers have just increased the number of kids who get groomed and the number of kids who get shown porn.

Mind boggling that something this bad passed.


I'm not sure what the solution is, but to steel man a bit, the alternative is kids have access to all the adult spaces, where they will be groomed. A website/app serving grooming content to a kid is just so incredibly unlikely compared to a kid being groomed as the result of having unrestricted access.

Since I do not see a solution, and you see identifying children as a risk, what do you see as a solution for kids being in the same spaces as adults? Do you see a reasonable implementation to separate them, that doesn't have the "we know which accounts are children" problem? Maybe there's something in between?

Also, I think it's important to understand the life of a modern child, who's in front of a screen 7.5 hours a day on average [1], with that increasingly being social media, half having unrestricted access to the internet [2].

I hate government control/nanny state, but I think 5 year olds watching gore websites, watching other children die for fun, is probably not ok (I saw this at the dentist). People are really stupid, and many parents are really shitty. What do you do? Maybe nothing is the answer?

[1] https://www.aacap.org/AACAP/Families_and_Youth/Facts_for_Fam...

[2] https://fosi.org/parental-controls-for-online-safety-are-und...


The solution is parental liability.

So say one of the 50% of children that have unrestricted access goes somewhere they shouldn't, or interacts with people they shouldn't. How is it detected so the parents can be held liable? What does the implementation look like to you?

The same way anything illegal is detected: a police report.

You misread my comment.

How is it detected? A police report is for after it's detected.


So never.

As the problem is adults trying to groom kids, the answer is robust detection and enforcement of the current anti-grooming laws.

It's ironic that people supposedly care about this when there's also a child rapist/murderer being kept safe as President without being held accountable for his crimes.

I suppose this law could be used as a defense against getting caught grooming minors - "I thought they were adult as surely a kid wouldn't be able to access that chat group"


> robust detection and enforcement

How, exactly, does one accomplish "robust detection of a child"? I assume your answer would include complete surveillance of all internet communication? Could you expand on your idea of the implementation?


Sorry if I wasn't clear - I am proposing that the adults face the robust detection and enforcement of anti-grooming laws. One method is to set up honey-pots with law enforcement officers playing the part of an innocent child (i.e. avoiding entrapment) and then throwing the full weight of the law behind any adult showing predatory behaviour.

What I propose is rather than putting all the effort into preventing children from entering dangerous adult spaces, it's better to put the effort into ensuring that sex criminals are prosecuted and trying to make adult spaces less dangerous.


I think an obvious problem for this method is scaling, partly from grooming not being a local phenomenon. It would require worldwide cooperation, especially in a few countries that are statistical offenders.

Instead, websites should voluntarily put content ratings on their own content; most would, either because they don't intend to harm children or because of societal pressure.

Then, software on the user's computer can filter without revealing any information about the user.
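As a sketch of how that local filtering could work, assuming a hypothetical, voluntary rating vocabulary ("general"/"teen"/"mature"): the decision runs entirely on the user's machine, so the site learns nothing about who is viewing.

```python
# Sketch of client-side filtering on voluntary content ratings.
# The rating labels and threshold scheme are invented for illustration;
# a real scheme would need to be standardized.

RATING_ORDER = {"general": 0, "teen": 1, "mature": 2}

def allowed(declared_rating: str, local_max: str) -> bool:
    """Decide locally whether to render content.

    Nothing about the user (age, identity) ever leaves the machine;
    the site only publishes its own self-declared rating.
    """
    # Unrated or unknown labels are treated as "mature" to fail closed.
    level = RATING_ORDER.get(declared_rating, RATING_ORDER["mature"])
    return level <= RATING_ORDER[local_max]

print(allowed("general", "teen"))  # True
print(allowed("mature", "teen"))   # False
print(allowed("unrated", "teen"))  # False (fails closed)
```

The key property is the direction of disclosure: the site declares a rating about itself, and the filter compares it against a locally stored threshold, so no age signal is ever transmitted.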


> So if you dutifully input valid ages for your computer users, now any groomer with a website or an app can find out who's a kid and who isn't. You just put a target on your kid's back.

I'm not going to say that's impossible but the number of sites that do the right thing and reduce risk are going to vastly outnumber that. And 90% of those kids already have targets on their backs by virtue of the sites they visit.


What risk exists from sites that are doing to do the right thing?

This smells strongly of "I just made it harder for those that do the right thing and did nothing to solve any problem."


> What risk exists from sites that are doing to do the right thing?

To be clear, I'm talking about sites for adults that are doing their best right now, but have no idea who is 18 and who is 8. If they have communication between users, it's not set up to be filtered and moderated in a way that protects an 8 year old. If they could cut out a big majority of 8 year olds with the flip of a switch, that would be a good thing.

That's a lot of risk that exists right now and could be reduced.

> This smells strongly of I just made it harder for those that do the right thing and did nothing to solve any problem.

There is no meaningful difficulty in storing two bytes of extra data on the OS account and turning it into a two bit flag that programs can access and pass on to websites. And for most websites that let users communicate it makes their job a lot easier, even if the flag isn't always right.
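To make the idea concrete, here is a minimal sketch of what such an OS-level flag might look like. Everything here (the `AgeBand` names, the two-bit packing, the gating helper) is invented for illustration; no real OS API is being described.

```python
# Hypothetical sketch: the OS stores a coarse age band per account and
# exposes it to apps as a two-bit flag. All names are invented.

from enum import IntEnum

class AgeBand(IntEnum):
    UNKNOWN = 0   # never set by a parent/guardian
    CHILD   = 1   # under 13
    TEEN    = 2   # 13-17
    ADULT   = 3   # 18+

def encode_flag(band: AgeBand) -> int:
    """Pack the band into the low two bits of a byte."""
    return band & 0b11

def decode_flag(raw: int) -> AgeBand:
    return AgeBand(raw & 0b11)

# A site that lets users communicate could gate features on the flag:
def allow_open_dms(raw_flag: int) -> bool:
    return decode_flag(raw_flag) == AgeBand.ADULT

flag = encode_flag(AgeBand.TEEN)
print(decode_flag(flag).name)   # TEEN
print(allow_open_dms(flag))     # False
```

As the comment notes, the storage and plumbing are trivial; the hard parts are getting parents to set the flag honestly and deciding who may read it.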


I think there's room for both visions. Big Tech is generating more toxic sludge than ever, and yeah sure this is because they're greedy, but more precisely the root cause is how they lobbied Washington and our elected officials agreed to all kinds of pro-corporate, anti-human legislation. Like destroying our right to repair, like criminalizing "circumvention" measures in devices we own, like insane life-destroying penalties for copyright infringement, like looking the other way when Big Tech broke anti-trust laws, etc.

The Big Tech slop can only be fixed in one way, and actually it's really predictable and will work - we need to fix the laws so that they put the rights and flourishing of human beings first, not the rights and flourishing of Big Tech. We need to fix enforcement because there are so many times that these companies just break the law and they get convicted but they get off with a slap on the wrist. We need to legislate a dismantling of barriers to new entrants in the sectors they dominate. Competition for the consumer dollar is the only thing that can force them to be more honest. They need to see that their customers are leaving for something better, otherwise they'll never improve.

But our elected officials have crafted laws and an enforcement system which make 'something better' impossible (or at least highly uneconomical).

Parallel to this if open source projects can develop software which is easier for the user to change via a PR, they totally should. We can and should have the best of both worlds. We should have the big companies producing better "boxed" software. Plus we should have more flexibility to build, tweak and run whatever we want.


And then they will take away your right to boot whatever you want. For national security reasons and the children, of course.

Very good points; I agree, and would add: "interoperability" is the key to bringing back competition and opening the ecosystem again.

And being able to fire employees for profit gain when they already make a profit; that's illegal in other countries.

Just in case you weren't aware, Gsuite has a clone of Docusign built into it now.

Hate to say it, because who likes monopolies, but it's easier to send people DocuSign, because then they don't go "WTF?"

In my opinion people are fixating a little too much on the automation part, maybe because most people don't have a lot of experience with delegation... I mean, a VP worth his salt isn't generally having critical emails drafted and sent on his behalf without his review. It happens with unimportant emails, but with the stuff that really impacts the business far less often, unless he has found someone really, really great.

Give me a stack of email drafts first thing every morning that I can read, approve and send myself. It takes 30 seconds to actually send the email. The lion's share of the value is figuring out what to write and doing a good job of it. The LLMs are facilitating that with research and suggestions, but haven't been amazing at doing it autonomously so far.


You might be right, but not for long. Once my agent is interacting directly with your agent (as opposed to doing drafts of your work on your behalf), expectations will shift to 24/7 operation.

This is uncharted territory and very interesting. We humans live with a strong requirement of reputation management, which shapes the way we do things.

Once we have agents openly doing things on our behalf but not in our voice, it will be interesting to see how much subpar performance or bad etiquette gets accepted, just because agents don't have an individual reputation to maintain.


There's no rude way to call an API. As more of human communication and commerce gets refactored into cold agentic interactions, the issue of reputation just vanishes.

But there's more than shifting etiquette standards at stake. Every BigCorp is currently reworking their APIs to be agent-friendly. CAPTCHAs and "Contact Sales" forms are being ripped out because they have no place in a world where the customer expects a complete transaction in the next 300 milliseconds. Agentic customers will demand agentic support, or else they'll take their RPCs elsewhere.

So what happens when you're CEO of BigCorp, and 90% of your customers are code, served by code, and the rest are messy humans who forget their passwords, complain that your website layout is confusing, and demand to speak to the manager? Is that last 10% worth keeping? Can you imagine Amazon in 2030 deprecating support for human customers?

Maybe this sounds cool, especially if OpenClaw agents have been doing all your domestic online chores for the past couple years. But along the way social grace was refactored out.

You take a life-saving prescription drug via an off-label usage, and your employer's PBM updates to Care Schema 2.3, which makes it semantically impossible to get a refill. Or you bend down to get the mail on your front porch, the wind slams your front door shut, and your fingerprint no longer works to open the door, because as of noon, your mortgage payment was past due. You could easily pay, but your phone is inside, next to your sleeping infant's crib. The system is operating as designed.

This is how the world would work when it's intended for agentic interactions and humans are an afterthought.


This is the first time I've ever heard somebody claim that section 230 exists to deter child predators.

That argument is of course nonsense. If the platform is aware of apparent violations including enticement, grooming etc. they are obligated to report this under federal statute, specifically 18 USC 2258A. Now if you think that statute doesn't go far enough then the right thing to do is amend it, or more broadly, establish stronger obligations on platforms to report evidence of criminal behavior to the authorities. Either way Section 230 is not needed for this purpose and deterring crime is not a justification for how it currently exists.

The final proof of how nonsensical this argument is, is that even if the intent you claim was true, it failed. Facebook and Instagram are the largest platforms for groomers online. Nazi and white supremacy content are everywhere on these websites as well. So clearly Section 230 didn't work for this purpose. Zuck was happy to open the Nazi floodgates on his platforms the moment a conservative President got elected. That was all it took.

The actual problem is that Meta is a lawless criminal entity. The mergers which created the modern Meta should have been blocked in the first place. When they weren't, Zuck figured he could go ahead and open the floodgates and become the largest enabler of CSAM, smut and fraud on earth. He was right. The United States government has become weak. It doesn't protect its people. It allows criminal perverts like the board of Meta and the rest of the Epstein class to prey on its people.


Reporting blatant criminal violations is not the same thing as moderating otherwise-protected speech that could be construed as misleading, offensive, or objectionable in some other way.

Indeed. However, there is no universal definition for what offends people, and never will be. People are individuals who form their own opinions and those opinions are diverse.

Ergo if you start to moderate speech which is offensive from one point of view, it will inevitably be inoffensive to others, and you've now established that you're a publisher, not a platform, because you're making opinionated decisions about which content to publish and to whom. At that point the remedy lies in reclassifying said platform as a publisher, and revisiting how we regulate publishers.

They can be publishers. They can censor material they object to. That's fine. But they don't need special exemptions from the rules other publishers follow.

I think it's good to have publishers in the world who are opinionated. There are opinions I don't like and don't want to see very often. Where we get into trouble is when these publishers get classified as platforms by the law, claim to be politically neutral entities, and enjoy the various legal privileges assigned to platforms by Section 230 of the CDA. The purpose of that section was to encourage a nascent tech industry by assigning special privileges to the companies in it. That purpose is now obsolete, those companies are now behaving like publishers, and reform of our laws is necessary.


There's no reason that a sizeable portion of LLM usage can't and won't end up free/ad-sponsored. Cutting edge stuff for professional use will probably be monetized via subscription or API credits for a long time to come. But running an older and less resource intensive model works just fine for tasks like summarization. These models will just become another feature in a "free" product that people pay for by watching or clicking ads.

I imagine the split will look a lot like b2b vs b2c in other technologies, b2b customers tend to be willing to pay for tech when it offers a competitive advantage, reduces their operating costs etc. b2c customers mostly just guzzle free slop.


It's actually pretty bonkers when you think about how basically every cutting edge professional you deal with is getting ads for all of their top search results for all of their work.

(Not quite "every", but outside of tech, most professional workplaces don't support ad blocking or Kagi.)


When you pick apart what's actually going on in Meta's revenue pipeline it's hideous. Think about this and compare it to what the world was like say 30 years ago:

* There are literally thousands of IG profiles that are essentially softcore porn which serves as a lead gen device for an OnlyFans account. Meta promotes these profiles to its users heavily because sex sells. Meta profits from the engagement with the profile, OnlyFans profits from signups sent to it by Meta.

* This is one of the primary ways OnlyFans has grown its pornography business to $8B a year

* Once users sign up for OnlyFans a common mode of engagement is that a managerial company lies and pretends to be the porn actress, and texts with the user under fraudulent pretense as the user consumes porn

Now... what was the world like 30 years ago?

* You couldn't buy porn mags without showing ID, Internet porn not really a thing for most people yet

* Even softcore stuff was mostly relegated to late night Cinemax

* Far fewer women had body image disorders and mental health disorders

* Far fewer young men had ED

This stuff is evil, when you connect the dots, it's crime, evil, lies and perversion all lined up to make a small number of companies a staggering amount of money. Somehow government and industry are OK with this, I guess this is the world the Epstein class built for us so no surprise. I am not a religious guy, and I would hardly call myself a prude, but this all exists and is widespread because it enables profit and fraud and exploitation, and I find that disgusting. Zuck's a porn baron. He knows what's going on. The fucker's on the take.

If anything should be in the dictionary next to the word evil, it's the 2026 state of affairs


> Far fewer young men had ED

Do you have some reference? The one (rather simple/incomplete) that I could find at : https://worldpopulationreview.com/country-rankings/erectile-... shows that overall ED dropped, maybe it is different for young men but would be curious to see an actual study.


If there was any increase in reported incidents of ED over those 30 years, I would hazard a guess that it has to do with the various medications released over the same period to address it. Fewer people will report an embarrassing issue when there is only a narrow chance it can be fixed.

I’m here before some pedantic person replies “correlation without causation.”

People repeat that phrase constantly, forgetting that a lack of proof of causation is not proof of no causation. It means the question could go either way, not that the claim has been debunked.


Oh sweetie, Meta's revenue pipeline has included knowingly playing a crucial supporting and fomenting role in a genocide in Myanmar, and continues to rely on a huge number of actual scam ads from China that are intentionally ignored to protect revenue. Besides of course the "developing algorithms that detect when teen girls are at their most vulnerable to manipulate them".

But you're right. Ellison and Thiel get all the attention, while Zuckerberg has caused magnitudes more societal destruction than both combined. Not because the former two are better people, far from it, just hard real-world impact from the companies they've founded.

In tech, nothing comes close to the damage of Meta. Not even the most despicable of companies like ClearView, as while their products might be worse on paper their actual impact pales in comparison.


I'm sure I'm in the minority here, but I read the announcement and other than the risk of a slippery slope into more invasive ID demands, I'm not sure I have a huge problem with it.

The default experience will be the "teen" experience - they list what that entails - stuff that's flagged as adult/NSFW/etc. is blurred out until your age is verified, which for most(?) people will require ID or face scan. DMs/friend requests from people you don't know take some extra clicks to view. Fine.

It depends on how broad the definition of adult content ends up being I guess, but I'm simply not convinced that requiring ID to view "adult" content is the end of the world. If that means porn, I'm 100% OK with it, put porn behind gates. It has become far too easy to access. It's 2026 and we now have a generation of gooning addicts out there who never have actual sex and it's basically a guarantee that they won't find partners or start families any time soon, exacerbating an already problematic decline in the birth rate. This is not a version of society or anyone's "rights" that I care to defend. You want to goon, show ID. That's how it was before the Internet anyway.

On the other hand if it means any speech that the platform deems to be "controversial" will be blurred out then my response will not be to submit ID, I'll simply limit how I use the platform. Anonymous speech continues to matter and needs protection. But Discord was never the entity that was going to provide that protection.

I mean, Discord is a gaming chat room. Expectations should be set by that fact. I don't need a gaming chat room to be NSFW, or even to host, e.g., political speech. I get that people have used it for more than gaming, but it was always pretty clear what it was. If people don't like that this gaming chat room no longer supports other uses, they should switch to an alternative.


We're all going to have to scan our faces and upload our IDs just to use the internet because of your weird obsession with birth rates? Wonderful.


> we're all

Speak for yourself, I didn't touch discord for 3 years.


I don't use it, but it doesn't start or stop at Discord. Age checking is already implemented as live face video & ID uploads and already deployed by every large tech company all over the world. They just have to flip a switch in our market.

To use my phone, Google wants me to verify my identity and age[1][2].

They're boiling the frog, give it a few years, and if you want to use any internet connected device at all, you'll need to sacrifice your face and ID as tribute. If you want to talk to someone else, you'll need to identify yourself with the platform or network on which you communicate. If you want to run an app that serves you any user generated content in any capacity, you'll need to identify yourself first.

[1] https://www.zdnet.com/article/google-play-users-are-starting...

[2] https://support.google.com/accounts/answer/10071085?hl=en


Clearly the outrage is about the slippery slope and the current techno-fascism gripping the US. I'm not being sarcastic.

You do it for the children now, you poo-poo concerns with "who uses Discord for non-gaming anyway," and you're just letting the foxes into the henhouse.

Twelve months from now and they'll want it for every chat.


The problem with the slippery slope argument is that it's a fallacy. That is the origin of the term, it describes a type of argument that's logically invalid. Yeah I am concerned that things could get worse and this might be the first step to broader censorship that we don't want, but a fallacious argument alone is unpersuasive to anyone who tries to form opinions rationally. Specific evidence needs to exist for the claim for it to be convincing.

Since slippery slopes are invalid by nature they're a type of argument that can be made for pretty much anything. If the case here is that a slippery slope is being used to defend pornographers and the "right to goon," I'm not on board. I think we have a long way to go to roll back porn's grip on teens and adults alike and reduce the harm it does to relationships, and this is just the beginning. Take for instance how Instagram at this point is basically a lead generation service for fraudulent OnlyFans businesses that sell parasocial relationships with a porn model's image where the customers aren't actually talking to her, they're talking to a team of guys in a basement in Eastern Europe somewhere. I think you shut down OnlyFans, you prosecute Meta, and to the extent where Discord is doing the same thing IG is, you prosecute Discord too. There's a long long list of things that needs to happen and shutting down the porn pipeline for teens on Discord is just the beginning.


First it was “just extreme porn”, then “just porn”, then “anywhere that could potentially contain adult content”, then VPNs, now all social media, all in about a year. You’re claiming slippery slopes aren’t real while in the gift shop at Splash Mountain.


In Russia none of that slippery slope stuff happened. They just murdered journalists and the opposition, installed TSPU (DPI) equipment at every ISP, and passed a law making any VPN-related advice illegal. And people are fine with that, apparently.

In Thailand porn was straight up illegal for ages and everything else was sane and open... until a new government decided to kill freedom of speech.

So the slippery slope is an illusion. If a government is bad, it doesn't need to be so complicated and gradual. It can't even think that far ahead; whoever starts the slide will no longer be in office when that time comes.

As for banning social media for teens, that's just common sense. Social media is a fuming pile of garbage designed to make people feel miserable so that corporate overlords make $$$ https://www.bbc.com/news/technology-58570353.amp


There’s a trivial way of fixing social media without mass surveillance or free speech restrictions: Just put a punitive tax on advertising revenue. People can say whatever they want, but the incentives behind social media disappear. This won’t be implemented because this was never about making society a better place.

And your examples only show that where there are no safeguards, governments don't need to be subtle; but in semi-functional democracies, they still need to at least pretend to be electable.


No, it started with "protecting the children" around 2010, and followed the bit-by-bit step-by-step boiling the frog approach for years, until the grip on the internet (as well as offline publications) became strong enough to do you know what to you know whom.


Please explain how 2010 is related to current censorship.

It's not a slippery slope when these are things that happened at different times. There are examples where X did not lead to Y, as well as where Y happened without X happening before it.


Outside of formal logic an argument does not need to be logically sound to have merit. You are extrapolating from "logical fallacy" to (something approximating) "invalid line of reasoning in most or all cases" which is simply not correct.

There are many potentially slippery slopes in politics. The extent to which they prove to be a problem in practice depends entirely on context. Approximately none of those cases will involve formal logic.


You're displaying the fallacy fallacy[1], the assumption that because an argument contains a logical fallacy, it must be false.

[1] https://en.wikipedia.org/wiki/Argument_from_fallacy


Taking away porn access would be great, except you can't do it at scale without either eliminating porn from the Internet altogether and prosecuting anyone who shares any, or eliminating privacy and anonymity from the Internet altogether.

I agree with your take on the damage porn does to the youth, but I don't yet agree that asking the government to watch every conversation is worth it. (That's what you're enabling long term.)


In order to make sure businesses aren't giving porn to teens, you can require they do meaningful age verification at the time they want to provide the porn. You can impose criminal penalties on a domestic business which doesn't do this, and other penalties on foreign businesses (such as locking them out of the payments network). You don't need to get 100%, even partial success will act as a deterrent. This is how the world worked before the Internet, you needed to show ID to buy porn, and public opinion is in favor of the world working this way again. Crucially, penalties on businesses (not consumers, and starting with the biggest ones) are the way you need to go because this is the only way this can feasibly be enforced.

The libertarian concerns around privacy, freedom of expression and surveillance are all valid, but they're downstream. We have hard evidence that porn damages sexual health and relationships, and it has basically zero value to society; it's like digital cigarettes in this sense. We can't allow ourselves to be paralyzed on this issue because of a theoretical slippery slope. Whether Discord is going about this the right way is open for debate, and whether legislation solves the porn problem without introducing surveillance risks is also a good discussion to have. But the porn, as well as the fraud and exploitation which always seem to accompany that industry, need to go. Libertarians would be wise not to conflate the endorsement of privacy with an endorsement of porn -- most people support the former to some degree, but when people come forward with enthusiastic support for the latter, more often than not their motivation is addiction or profit, not a crowd the defenders of privacy want to be lumped in with.


I don't care what degenerate stuff you look at; you are free to do so.

Privacy is a fundamental right, and in my opinion one of the more important ones: when the right to privacy is removed, the other rights are impossible to keep.

Giving up the right to privacy because you don't want kids looking at degenerate stuff on the internet is stupid; besides, the kids will work around your barriers.

How about we teach kids (and adults) the dangers, putting the responsibility on the consumer instead of micromanaging and censoring everyone's information intake?

If a minor drives a car without a license, we don't require the carmaker to install license and age verification in every car. We punish the kid who did it.


Why do you want the children to grow up in an Orwellian dystopia?
