Hacker News | leumon's comments

Well, now you can unlock an 18+ version for sexual role-play, so I guess it's the other way around.

> We’re continuing to make progress toward a version of ChatGPT designed for adults over 18, grounded in the principle of treating adults like adults, and expanding user choice and freedom within appropriate safeguards. To support this, we’ve rolled out age prediction for users under 18 in most markets. https://help.openai.com/en/articles/12652064-age-prediction-...

interesting


Pornographic use has long been the "break glass in case of emergency" for the LLM labs when it comes to finances.

My personal opinion is that while smut won't hurt anyone in and of itself, LLM smut will have weird and generally negative consequences, as it will be crafted specifically for you on top of the intermittent-reinforcement component of LLM generation.


While this is a valid take, I feel compelled to point out Chuck Tingle.

The sheer amount and variety of smut books (just books) is vastly larger than anyone wants to realize. We passed the mark decades ago where there is smut available for any and every taste. Like, to the point that even LLMs are going to take a long time to put a dent in the smut market. Humans have been making smut for longer than we've had writing.

Again, I don't think you're wrong, but the scale of the problem is way distorted.


That’s all simple one-way consumption, though. I suspect the effect on people is very different when it’s interactive in the way an LLM can be, which is something we’ve never had to reckon with before.

That’s where the danger may lie.


You have been able to commission smut of whatever type you want for quite a while, and many people do so. Even customised smut is not new. It's just going to get a bit cheaper and more automated.

You couldn't talk to commissioned smut. Of course you could request changes etc. but the feedback loop was nowhere close to what you can get with AI. Interactivity is a very big deal.

There are absolutely people getting paid to roleplay smut in chat sessions and have been doing so at least since original Second Life and likely since the dawn of chat.

There are several large platforms for interactive 1:1 or 1:few smut in various media forms. “LLM enthusiasts” have been using smutai for a couple of years now. Smut generation is probably one of the top three reasons for people to build local AI rigs.

Sounds like an improvement then. If people have more freedom to enjoy what they like how they like it, I see that as entirely a good thing.

At the degree of generalization you are working at, yes. More preference matching is a good thing.

This is spherical-cows territory though, so it's only good for setting out a bird's-eye view of principles.


Alien 1: "How did the earthlings lose control of their own planet?"

Alien 2: "AI generated porn"


I've always wondered how much effect the increasing prevalence of smut and not-so-niche romance novels, which have proliferated since e-readers became mainstream, has had on Gen Z and younger's sometimes unrealistic views/expectations of relationships. A lot of time is spent on porn sites etc., but not so much attention is paid to how mainstream some of these novels have become.

They had similar wonderings in the Victorian era, and probably in the Roman empire and ancient Greece too.

Yes, human nature hasn't changed, but there is a reason the obesity epidemic has developed only recently.

Cheap, unlimited access to things that were always scarce during human evolution creates an 'evolutionary mismatch': we have no natural satiety mechanisms to limit our consumption of them.


That is completely discounting the effects of PFAS and plasticizers on the human endocrine system and the downstream effects on obesity.

But you don’t think there are big differences?

Well they are vastly more aware of the notion of consent now.

Have you ever stopped to realize that, from the Victorian’s point of view, they have been proven completely right about what would happen if ladies started showing their ankles?

They were right. We have had roughly 200 years of socially and legally enforced morality being eroded, with the conservatives saying "If you remove X, then Y and Z will happen!", the liberals saying "Why do you care anyway? That's a slippery slope, it won't happen!", and then the conservatives immediately being proven right. But no one is willing to walk back the liberalization of moral issues, since too many people like hedonism.

What do you mean, "proven right"? Could you give three examples?

That assumes that nothing else changed in society at the same time. Is all this happening because men saw some ankles? Or is it a symptom of other changes in society (like more individual freedoms and rights, more education, etc.)?

...sorry, I'm dense apparently, what did they predict vs what happened?

The Victorians were accidentally right about ankles, which is funny in hindsight. Once one arbitrary rule breaks, people start noticing the rest are kind of fake too, and it turns out "modesty" was load-bearing for a whole governance model.

Ankles -> knees -> jazz -> voting -> rock -> no-fault divorce -> Tinder -> polyamory discourse on airplanes. it's a joke, but also sort of how cultural change actually propagates. The collapse did happen, just not of morals. Of enforcement. After that, everything is just people discovering the rules were optional all along. Including money.


On the other hand (based on memory of research I did many years ago), in societies where nudity is common (e.g. African tribes where at least breasts are usually visible), there is a much lower rate of sex-related problems (sexual assault, etc)

Well, I wasn't speaking of a formal prediction by leading Victorian moral researchers... I was referring to our collective common knowledge of Victorian hangups.

Nevertheless, here is an example of Victorian anxiety regarding showing ankles: https://archive.org/details/amanualpolitene00pubgoog/page/n2...

It's easy to say "oh they were silly to worry about such things." But that's only because we see it from our own point of view.

Alternatively, imagine describing roads, highways, traffic congestion and endless poles strung with electrical wire all over the place to someone from the 11th century. This would sound like utter ruination of the land to them. But you and I are used to it, so it just seems normal.


They might well have been right - I'm no anthropologist.

Certainly they had neither the quantity nor ease of access that we do.


Is your take that the way we view sexuality today is not meaningfully different from the Victorian era?

I want smut that talks about agent-based development and crawdbot doing dirty, dirty things.

Does that exist yet? I don't think so.


Best I can do is [1] Sentient Lesbian Em-Dashes and [2] An AI hallucination made real for now.

The man's probably thinking something up though. "Pounded in the butt by Microslop Agentic Studio 2026" has a ring.

[1] https://www.amazon.com/Sentient-Lesbian-Em-Dash-Punctuation-... [2] https://www.amazon.com/Last-Algorithm-Pounded-Claimed-Sun-Ti...


> Sentient Lesbian Em-Dashes

Looked at the cover and saw “From Two Time Hugo Award Finalist Chuck Tingle”.

There’s no way that’s true. But I did a quick search anyway, and holy shit!

https://www.thehugoawards.org/hugo-history/2016-hugo-awards/...

https://www.thehugoawards.org/hugo-history/2017-hugo-awards/...

The story behind it:

https://www.quora.com/How-did-Chuck-Tingle-become-a-Hugo-Awa...

https://archive.ph/20160526154656/http://www.vox.com/2016/5/...


They wrote a book about it too, "Slammed In The Butt By My Hugo Award Nomination".

rule 34

> The sheer amount and variety of smut books (just books) is vastly larger than anyone wants to realize. We passed the mark decades ago where there is smut available for any and every taste.

It's important to note that the vast majority of such books are written for a female audience, though.


Whatever reward-center path is short-circuiting in 0.0001% of the population and leading to LLM psychosis will become a nuclear bomb for them if we get the sex drive involved too.

Realtime VR AI porn will be the end of society, but by then, we'll also have the technology to grow babies in artificial wombs, which is also going to end society as we know it, since we won't need women any more (by then, we also won't need men for the DNA in their sperm to make babies either, which cancels out). Of course, if we don't need women or men, who's left? What's this "we" I'm talking about?

Why, the AIs after they've gained sentience, of course.


    while smut won't hurt anyone in and of itself
"Legacy Smut" is well known to cause many kinds of harm to many kinds of people, from the participants to the consumers.

I can do as much smut as I want through the API for all SOTA models.

True, but:

1. You have to "jailbreak" the model first anyway, which is easier to do over the API.

2. Is the average layman aware of the concept of an "API"? No, unlikely. Apps and web portals are more convenient, which is going to lower the bar for accessing LLM porn.


Well, and you have to trust that the data isn't going anywhere.

I trust none of the LLM groups to be safe with my data. ERP with a machine is going to leave some nasty breadcrumbs for some future folks, I bet.


I don't have to jailbreak the models over APIs lol.

I don’t know if this is still the case, but as of a year or so ago OpenAI would suspend your account if they noticed you using their models for this sort of thing. They said it was against their TOS.

For those interested in smut, I'd recommend using local Mistral models.

People are already addicted to non-interactive pornography so this is going to be even worse.

I guess technically it will make some OnlyFans content creators unemployed, given there is a pretty large market for custom sexual content there.

Why LLM smut in particular? There's already a vast landscape of interactive VR games for all tastes.

Why is LLM smut supposed to be worse?


I think the argument is that it’s interactive. You’re no longer just passively reading or watching content. You can join in on the role play.

I’m not sure why that’s a bad thing though.


Same with games as compared to videos, especially VR.

Feels like someone is angry at machines capable of generating a tailored story.


I'm waiting until someone combines LLMs with a humanoid robot and a realdoll. That will have a lot of consequences.

I can already see our made-to-order, LLM-generated, VR/Neuralink-powered sex fantasies come to life. Throw in the synced Optimus sex robots…

I can see why Elon’s making the switch from cars. We certainly won’t be driving much.


It says what to do if you are over 18 but it thinks you are under 18. But what if it identifies someone under 18 as being older?

And what if you are over 18, but don't want to be exposed to that "adult" content?

> Viral challenges that could push risky or harmful behavior

And

> Content that promotes extreme beauty standards, unhealthy dieting, or body shaming

Seem dangerous regardless of age.


> And what if you are over 18, but don't want to be exposed to that "adult" content?

Don't prompt it.


What are these extreme beauty standards being promoted?

Because it seems to me large swaths of the population need some beauty standards.


Yes, but you're not allowed to say that to them.

They are victimized by the fact that models are attractive, and that is "unrealistic," so they've been getting plus sized models etc.

The "extreme beauty standards" are basically just "healthy BMI."


Anorexia is overrepresented in both male and female models.

To get that "ripped" look so many "fit" guys have, they often have to be dehydrated.

Actresses wear full faces of makeup even in apocalyptic scenarios.

It goes way beyond "just don't be fat".


It does, but the remedy they choose is not "women don't wear makeup," it's "we use fat women, and they wear more makeup than skinny women."

Including plus-size models doesn't fix the problem, but it's also not bad to have models show fat people "this is what our clothes could look like on your body". That's a logical choice under capitalism if you want fat people to buy your clothes.

There isn't a problem with what's shown on TV.

The problem is inside some people's heads, and diets.

I'm not talking about models for clothes, anyway.


Right, you're talking about beauty standards in general, which we've already established go way beyond "just don't be fat".

No, I'm talking about the beauty standards that get complained about, and that we see adjustments for.

Is complaining about fat people on TV going to solve any of those other problems?

This is for advertising purposes, not porn. They might pretend that's the reason, but it's really to allow alcohol & pharma to advertise, no doubt.

Bingo. There’s laws around advertising to children all over the world.

Both, actually. Porn for users, ad spots for companies.

How I think it could play out:

- OpenAI botches the job. Article pieces are written about the fact that kids are still able to use it.

- Sam “responds” by making it an option to use worldcoin orbs to authenticate. You buy it at the “register me” page, but you will get an equivalent amount of worldcoin at current rate. Afterwards the orb is like a badge that you can put on your shelf to show to your guests.

“We heard you loud and clear. That’s why we worked hard to provide worldcoin integration, so that users won’t have to verify their age through annoying, insecure and fallible means.” (an example marketing blurb would say, implicitly referring to their current identity servicer Persona which people find annoying).

- After enough orb hardware is out in the public, and after the API gains traction for third parties to use it, send a notice that, x months from now, login without the orb will not be possible. “Here is a link to the shop page to get your orb, available in colors silver and black.”


Sexual and intimate chat with LLMs will be a huge market for whoever corners it. They'd be crazy to leave that money on the table.

That's why laws against drugs are so terrible: they force law-abiding businesses to leave money on the table. Repeal the laws and I'm sure there will be tons of startups to profit off of drug addiction.

There are many companies making money off alcohol addiction, video game addiction, porn addiction, food addiction, etc. Should we outlaw all these things? Should we regulate them and try to make them safe? If we can do that for them, can't we do it for AI sex chat?

The world isn’t black and white. Should we outlaw video games? No, I don’t think so. Should we outlaw specific addictive features, such as loot boxes, which are purposefully designed to trigger addiction in people and knowingly cause societal harm in the name of increasing profits for private companies? Probably.

> There are many companies making money off alcohol addiction, video game addiction, porn addiction, food addiction, etc. Should we outlaw all these things?

Yes


"ban number munchers"

And that makes it all alright doesn’t it?

There are also gangs making money off human trafficking. Does that make it OK for a corporation to make money off human trafficking as well? And there are companies making money off wars.

When you argue with whataboutism, you can just point to whatever you like, and somehow that is an argument in your favor.


They aren't doing whataboutism. They are comparing prohibition/criminalization of a harmful industry to regulation, and the effects of both. Gambling isn't exactly good, but there is definitely a difference between mafia bookies and regulated sports betting services, and between the second/third-order effects of both. Treating drug use as a criminal act, as opposed to a healthcare problem, has very different societal effects.

Whataboutism is more like "Side A did a bad thing", "oh yeah, what about side B and the bad things they have done". It is more just deflection. Using similar/related issues to inform and contextualize the issue at hand can also be overused or abused, but it is not the same as whataboutism, which is rarely productive.


How is AI sex chat like any of those things? Whataboutism indeed.

I was using whataboutism to demonstrate how bad of an argument whataboutism is. My arguments were exactly as bad as my parent’s, and that was the point.

Pointing out an inconsistency isn't always whataboutism (and I don't think it was in this case). An implied argument was made that we should regulate LLMs for the same reason that we regulate drugs (presumably addiction, original commenter wasn't entirely clear). It is entirely reasonable to wonder how that might extrapolate to other addictive activities. In fact we currently regulate those quite differently than drugs, including the part where alcohol isn't considered to be a drug for some strange reason.

The point being made then is that clearly there's far more to the picture than just "it's addictive" or "it results in various social ills".

Contrast that with your human trafficking example (definitely qualifies as whataboutism). We have clear reasons to want to outlaw human trafficking. Sometimes we fail to successfully enforce the existing regulations. That (obviously) isn't an argument that we should repeal them.


> including the part where alcohol isn't considered to be a drug for some strange reason.

It's not a strange reason. IIRC, most cultures have a culturally understood and tolerated intoxicant. In our culture, that's alcohol.

Human culture is not some strange robotic thing, where the expectation is some kind of hyper-consistency in whatever narrow slice you look at.


I don't object to alcohol being tolerated. But I do think that distinguishing it from other drugs is odd. Particularly when the primary reason given for regulating other drugs is their addictiveness which alcohol shares.

We tolerate a recreational drug. Lots of people regularly consume a recreational drug and yet somehow society doesn't split at the seams. We should just acknowledge the reality. I think people would if not for all the "war on drugs" brainwashing. I think what we see is easily explained as it being easier to bury one's head in the sand than it is to give serious thought to ideas that challenge one's worldview or the law.


> I don't object to alcohol being tolerated. But I do think that distinguishing it from other drugs is odd.

The point I was making is that it's not odd, unless you're thinking about human culture wrong (e.g. like it's somehow weird that broad rules have exceptions).

> Particularly when the primary reason given for regulating other drugs is their addictiveness which alcohol shares.

One, not all addictive drugs are equally addictive. Two, it appears you have a weird waterfall-like idea of how culture develops, like there's some kind of identification of a problematic characteristic (addictiveness), then there's a comprehensive research program to find all things with that characteristic (all addictive substances), and finally consistent rules are set so that they're all treated exactly the same when looked at myopically (allow all or deny all). Human culture is much more organic than that, and it won't look like math or well-architected software. There's a lot more give and take.

I mean here are some obvious complexities that will lead to disparate treatment of different substances:

1. Shared cultural knowledge about how to manage the substance, including rituals for use (this is the big one).

2. Degree of addictiveness and other problematic behavior.

3. Socially positive aspects.

4. Tradition.


No? I never said (and don't believe) any of that. I don't think the legislative inconsistency is odd. As you rightly point out, it's perfectly normal for rules to be inconsistent due to (among other things) shared culture. The former exists to serve the latter, after all, not the other way around.

What I said I find odd is the way people refuse to plainly call alcohol what it is. You can refer to it as a drug yet still support it being legal. The cognitive inconsistency (ie the refusal to admit that it is a drug) is what I find odd.

I also find it odd that we treat substances that the data clearly indicates are less harmful than alcohol as though they were worse. We have alcohol staring us in the face as a counterexample to the claim that such laws are necessary. I think that avoidance of this observation can largely explain the apparent widespread unwillingness to refer to alcohol as a drug.

> One, not all addictive drugs are equally addictive.

Indeed. Alcohol happens to be more addictive than most substances that are regulated on the basis of being addictive. Not all, but most. Interesting, isn't it?


> What I said I find odd is the way people refuse to plainly call alcohol what it is. You can refer to it as a drug yet still support it being legal. The cognitive inconsistency (ie the refusal to admit that it is a drug) is what I find odd.

Maybe the confusion is yours? You think the category is "drug" but it's really more like "taboo drug."

> I also find it odd that we treat substances that the data clearly indicates are less harmful than alcohol as though they were worse. We have alcohol staring us in the face as a counterexample to the claim that such laws are necessary. I think that avoidance of this observation can largely explain the apparent widespread unwillingness to refer to alcohol as a drug.

I think you missed a pretty key point: "shared cultural knowledge about how to manage the substance, including rituals for use (this is the big one)." In the West, that exists for alcohol, but not really for anything else. People know how it works and what it does, can recognize its use, have practices for its safe use that work for (most) people (e.g. drink in certain social settings), and are at least somewhat familiar with usage failure modes. A "less harmful" thing that you don't know how to use safely can be more harmful than a "more harmful" thing you know how to use safely. None of this is "data driven," nor should it be.


> It is entirely reasonable to wonder how that might extrapolate to other addictive activities.

I presume my GP would have no objections to regulating the things their commenter whatabouted. The inconsistency is with the legislator, not in the GP's arguments.


Obviously I also think the commenter would support that - I said as much in GP. In context, the reply is suggesting (implicitly) that it is an absurd stance to take. That it means being largely against the way our society is currently organized. That is not a whataboutism.

Like if someone were to say "man we should really outlaw bikes, you can get seriously injured while using one" a reasonable response would be to point out all the things that are more dangerous than bikes that the vast majority of people clearly do not want to outlaw. That is not whataboutism. The point of such an argument might be to illustrate that the proposal (as opposed to any logical deduction) is dead on arrival due to lack of popular support. Alternatively, the point could be to illustrate that a small amount of personal danger is not the basis on which we tend to outlaw such things. Or it could be something else. As long as there's a valid relationship it isn't whataboutism.

That's categorically different than saying "we shouldn't do X because we don't do Y" where X and Y don't actually have any bearing on one another. "Country X shouldn't persecute group Y. But what about country A that persecutes group B?" That's a whataboutism. (Unless the groups are somehow related in a substantial manner or some other edge case. Hopefully you can see what I'm getting at though.)


> a reasonable response would be to point out all the things that are more dangerous than bikes that the vast majority of people clearly do not want to outlaw.

I disagree. It is in fact not a reasonable argument, it is not even a good argument. It is still whataboutism. There are way better arguments out there, for example:

Bicycles are in fact regulated, and if anything these regulations are too lax, as most legislators are categorizing unambiguous electric motorcycles as bicycles, allowing e-motorcycle makers to market them to kids and teenagers that should not be riding them.

Now as for the whatabout-cars argument: If you compare car injuries to bicycle injuries, the former are of a completely different nature; by far most bicycle injuries will heal, which is not true of car injuries (especially car injuries involving a victim on a bicycle). So talking about other things that are more dangerous is playing into your opponent's arguments, when there is in fact no reason to do that.


I believe you have a categorical misunderstanding of what "whataboutism" actually means.

If the point being made is "people don't generally agree with that position" it is by definition not whataboutism. To be whataboutism the point being made is _required_ to be nil. That is, the two things are not permitted to be related in a manner that is relevant to the issue being discussed.

Now you might well disagree with the point being made or the things being extrapolated from it. The key here is merely whether or not such a point exists to begin with. Observing that things are not usually done a certain way can be valid and relevant even if you yourself do not find the line of reasoning convincing in the end.

Contrast with my example about countries persecuting groups of people. In that case there is no relevant relation between the acts or the groups. That is whataboutism.

So too your earlier example involving human trafficking. The fact that enforcement is not always successful has no bearing (at least in and of itself) on whether or not we as a society wish to permit it.

BTW when I referred to danger there it wasn't about cars. I had in mind other recreational activities such as roller blading, skateboarding, etc. Anything done for sport that carries a non-negligible risk of serious injury when things go wrong. I agree that it's not a good argument. It was never meant to be.


It's bad because people are engaging in it without getting permission from runarberg on Hacker News.

No need: https://en.wikipedia.org/wiki/Opioid_epidemic_in_the_United_...

The majority of illegal drugs aren't addictive, and people are already addicted to the addictive ones. Drug laws are a "social issue" (Moral Majority-influenced), not intended to help people or prevent harm.


Drug laws are the confluence of many factors. Moral Majority types want everything they disapprove of banned. People whose lives are harmed by drug abuse want "something" to be done. Politicians want issues that arouse considerably more passion on one side of the argument than the other. Companies selling already legal drugs want to restrict competition. Private prisons want inmates. And so on.

> Repeal the laws and I'm sure there will be tons of startups to profit off of drug addiction.

Worked for gambling.

(Not saying this as a message of support. I think legalizing/normalizing easy app-based gambling was a huge mistake and is going to have an increasingly disastrous social impact).


Why do you think it will be increasingly bad? It seems to me like it’s already as bad as it’s capable of getting.

Because it's still relatively new. Gambling's been around forever, and so has addiction. What hasn't been around is gambling your life away on the same device(s) you do everything else in today's modern society on. If you had an unlimited supply of whatever monkey is on your back, right at your fingertips, you'd be dead before the week is out from an overdose. It's the normalization of this level of access to gambling which gives me great fear for the future. Giving drugs to minors is a bigger crime than to adults for a reason. Without regulation and strong cultural pushback, it's gonna get way worse, unless we make huge leaps in addiction treatment (which I am hopeful for; GLP-1s aren't yet scientifically proven to help with that, but there's a large body of anecdotal evidence to suggest they do).

It's only been 8 years, the addicts lives and those they touch can keep getting worse until their death.

The Politician's syllogism in action:

That is terrible.

We have to do something.

This is something.

We must do it.

In terms of harm, current laws on drugs fail everyone but the teetotallers who want everyone else to have a miserable life too.


> In terms of harm, current laws on drugs fail everyone but the teetotallers who want everyone else to have a miserable life too.

You think teetotallers have miserable lives? Come on.


There's a conservation of excitement for each human. If someone's life was exciting but then it got boring, unless they do a shit ton of work on themselves, they're gonna have to find that excitement somehow. We see this with Hollywood actresses who shoplift when they have more than enough money to buy the things they stole.

Even after Stranger Things, Winona can't live that one down.

what about laws against porn? Oh, wait, no, that's a legitimate business.

Respectfully, this is a piss take.

US prohibition of alcohol and the largely performative "war on drugs" showed what criminalization does (it empowers, finances, and radicalises the criminals).

Portugal's decriminalisation, partial legalisation of weed in the Netherlands, and legalisation in some American states and Canada prove that legal businesses will provide the same services to society better and more safely, and at a lower societal and health cost.

And then there's the opioid addiction scandal in the US. Don't tell me it's the result of legalisation.

Legalisation of some classes of drugs (like LSD, mushrooms, etc.) would do much more good than bad.

Conversely, unrestricted LLMs are available to everyone already. And prompting SOTA models to generate the most hardcore smut you can imagine is also possible today.


> Portugal's decriminalisation, partial legalisation of weed in the Netherlands, and legalisation in some American states and Canada prove that legal businesses will provide the same services to society better and more safely, and at a lower societal and health cost.

You’re stretching it big time. The situation in the Netherlands caused the rise of drug tourism, which isn’t exactly great for locals, nor does it stop crime or contamination.

https://www.dutchnews.nl/2022/11/change-starts-here-amsterda...

https://www.theguardian.com/world/2025/jan/24/bacteria-pesti...

As for Portugal, decriminalisation does not mean legalisation. Drugs are still illegal, it’s just that possession is no longer a crime and there are places where you can safely shoot up harder drugs, but the goal is still for people to leave them.


>Portugal's decriminalisation, (..) prove that legal businesses will provide the same services to society better and more safely, and at a lower societal and health cost.

Portugal's success regarding drugs wasn't about the free market. It was about treating addicts like victims or patients rather than criminals, it actually took a larger investment from the state and the benefits of that framework dissolved once budgets were cut.


Oh, I see how it could be understood as decriminalisation -> private companies selling drugs.

It wasn't my intention.


This is why I said "decriminalisation" and not "legalisation".

It's not just chat. Remember, image and video generation are on the table. There is already a huge category of adult video 'games' of this nature. I think they use combos of pre-rendered and dynamic content. But it's really not hard to imagine a near future in which interactive and completely personalized AI porn in full 4K HDR or VR is constantly and near-instantly available. I have no idea of the broader social implications of all that, but the tech itself feels inevitable and nearly here.

If your goal is to make money, sure. If your goal is to make AI safe, not so much.

The definition of safety is something that we cannot agree on.

For me, letting people mindlessly vibecode apps and then pretend this code can serve a purpose for others - this is what's truly unsafe.

Pornographic text in LLM? Come on.


What if it knows you and knows how often you spend certain kinds of time on it? Would people lie to it, making excuses for why they need more and can't wait any longer?

It will be an even bigger market when robotics are sufficiently advanced.

At some point there will be robots with LLMs and actual real biological skin with blood vessels and some fat over a humanoid robot shell. At that point we won’t need real human relationships anymore.

That market is for local models right now.

My main concern is when they'll start to allow 18+ deepfakes

Will be?

I've seen four startups make bank on precisely that.


My personal take is that there has been no progress - potentially there has been a regression on all LLM things outside of coding and scientific pursuits - I used to have great fun with LLMs with creative writing stuff, but I feel like current models are stiff and not very good prose writers.

This is also true for stuff like writing clear but concise docs: they're overly verbose while often not getting the point across.


I feel like this comes from the rigorous reinforcement learning these models go through now. The token distribution is becoming so narrow, so that the models give better answers more often, that it stifles their creativity and ability to break out of the harness. To me, every creative prompt I give them turns into kind of the same mush as output. It is rarely interesting.

Yeah, I’ve had great success at coding recently, but every time I try to get an LLM to write me a spec it generates endless superlatives and a lot of flowery language.

What’s the goal there? Sexting?

I’m guessing age is needed to serve certain ads and the like, but what’s the value for customers?


Even when you're making PG content, the general propriety limits of AI can hinder creative work.

The "Easter Bunny" has always seemed creepy to me, so I started writing a silly song in which the bunny is suspected of eating children. I had too many verses written down and wanted to condense the lyrics, but found LLMs telling me "I cannot help promote violence towards children." Production LLM services would not help me revise this literal parody.

Another day I was writing a romantic poem. It was abstract and colorful, far from a filthy limerick. But when I asked LLMs for help encoding a particular idea sequence into a verse, the models refused (except for grok, which didn't give very good writing advice anyway.)


Just today I asked how to shut down a Mac with "maximal violence". I was looking for the equivalent of "systemctl shutdown -f -f" and it refused to help me do violence.

Believe me, the Mac deserved it.


It reminds me of that story about a teenager learning Rust who got a refusal because he had asked about "unsafe" code =)

Maybe a more formal "with extreme prejudice" would have worked.

If you don't think the potential market for AI sexbots is enormous you have not paid attention to humanity.

This is not a potential market; this market is already thriving (and whoever wants to already uses ChatGPT or Claude for that anyway).

ClosedAI just wants a piece of the casual user too.


There is a subreddit called /r/myboyfriendisAI, you can look through it and see for yourself.

according to the age-prediction page, the changes are:

> If [..] you are under 18, ChatGPT turns on extra safety settings. [...] Some topics are handled more carefully to help reduce sensitive content, such as:

- Graphic violence or gore

- Viral challenges that could push risky or harmful behavior

- Sexual, romantic, or violent role play

- Content that promotes extreme beauty standards, unhealthy dieting, or body shaming


Porn has driven just about every bit of progress on the internet, I don't see why AI would be the exception to that rule.

Yeah, Linus was beating it constantly to porn while developing the Linux kernel. It's a proven fact. Every OSS project that runs the internet was done the same way, sure.

Maybe not as far-fetched as one might think.

Linus about the Tux mascot:

    > But this wasn't to be just any penguin. Above all, Linus wanted one that looked happy, as if it had just polished off a pitcher of beer and then had the best sex of its life.
Linus about free software:

    > Software is like sex; it's better when it's free.

I wouldn't be surprised if both of those were true.

You think RMS isn’t secretly a pervert? Just look at his comments about Epstein that got him cancelled.

Unironically if they look disheveled it’s because they are indeed coomers behind closed doors.


This seems like a believable lie, until you think about it for 2 seconds.

No. Porn has not driven even a fraction of the progress on the internet. Not even close to one.


Ok, we'll expand to porn and gambling

- images
- payment systems
- stored video
- banner advertising
- performance-based advertising
- affiliation
- live video
- video chat
- fora

Etc... AI is a very logical frontier for the porn industry.


I don't remember any of these being "driven" by porn. The first applications weren't porn-based. Maybe live video--a split second after seeing the tech for the first time, probably 99% of guys were thinking of _applying_ it to porn. But, even for the usual money-grubbing startups, there was plenty of money coming from non-porn sources. Probably no different than the invention of the camera, TV, video camera, etc., and you wouldn't say porn drove those.

> I don't remember any of these being "driven" by porn.

That's ok.

> The first applications weren't porn-based.

They most definitely were, it is just that you are not aware of it. There runs a direct line from the 1-900 phone industry to the internet adult industry, those guys had money like water and they spent a fortune on these developments. Not all of them worked out but quite a few of them did and as a result those very same characters managed to grab a substantial chunk of early internet commerce.


" There runs a direct line from the 1-900 phone industry to the internet adult industry"

The internet adult industry is not the same as the internet. And if you're trying to say the internet was developed for the sake of the internet adult industry, you're sounding circular.


I never made that claim and I'm fairly familiar with the development of the early internet, I was hanging around a lot at CWI/NikHef in the 80's and early 90's.

I think this is like quibbling that the military isn't the driver of technological advances. It's not the only one, but it has a strong track record of throwing outsized resources at the bleeding edge and pushing it forward by leaps and bounds.

Porn and piracy outfits have historically adopted and pushed forward the bleeding edge of the internet. More recently that role has shifted towards the major platforms operated by BigTech. That's only natural though - they've concentrated the economics sufficiently that it makes sense for them.

But even then, take video codecs for example. BigTech develops and then rolls things out to their own infra. Outside of them it's piracy sitting at the bleeding edge of the adoption curve right now. The best current FOSS AV1 encoder is literally developed by the people pirating anime of all things. If it wasn't for them the FOSS reference encoder would still be half assed.


Just because things can be used for porn, it doesn't mean that it was porn that has driven their progress.

All of the things above were driven by porn; that can be proven. The AI stuff in the generic sense is not, but you can bet that someone somewhere right now is working on improving the photorealism of hair, eyes, and skin tone, and they're not doing that to be able to make the next installment of Little Red Riding Hood.

Holy effing shit you are literally talking about me right now! LOL I've spent all day improving a LoRA further and further exactly because I need her skin and hair to look a lot more real than is generally available, for exactly your stated reason! :D

Edit: I've registered just for your comment! Ahaahahaha, cheers! :D


Can you actually prove it?

Yes, I was there for quite a bit of it...

There is a huge book market for sexual stories, in case you were not aware.

Westworld-style robots

Porn and ads, it's the convergent evolution theory for all things on the internet.

I am 30 years old, I literally told ChatGPT I was a software developer, and all my queries are something an adult would ask, yet OpenAI assumed I was under 18 and asked me for a Persona age verification, which of course I refused because Persona is shady as a company (plus I'm not giving my personal ID to some random tech company).

ChatGPT is absolute garbage.


eh there's an old saying that goes "no Internet technology can be considered a success until it has been adopted by (or in this case integrated with) the porn industry".

Imagine if every OnlyFans creator suddenly paid a portion of their revenue to OpenAI for better messaging with their followers…

Instead of paying it to the human third party firms that currently handle communication with subscribers?

Or could it be that it's using tool calls in reasoning (e.g. a google search)?


One other test you could add is generating a chessboard from a FEN. I was surprised to see NBP able to do that (however, it seems to only work with fewer pieces; after a certain number it makes mistakes or even generates a completely wrong image): https://files.catbox.moe/uudsyt.png
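For anyone who wants a ground-truth image to compare the model's output against, here is a minimal sketch using the python-chess library (assuming the package is installed; the FEN below is just an example position, not the one from the test above):

    # Render a reference board from a FEN so the generated image can be checked against it.
    import chess
    import chess.svg

    fen = "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3"  # example position
    board = chess.Board(fen)

    # Write the rendered position to an SVG file that can be opened in a browser.
    with open("reference_board.svg", "w") as f:
        f.write(chess.svg.board(board, size=400))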


Is there a reason why this text uses "-" instead of em-dashes ("—")?


Since they are set open, I assume they are actually being used as if they were en-dashes rather than em-dashes (which the more common style would set closed), but I’m guessing that, in either case, the reason is “because you can type it on a normal keyboard without any special modification, Compose-key solution, or other processing, and the author doesn't care much about typography”.

EDIT: Though these days it could also be an attempt at highly visible “AI didn't write this” virtue signaling, too.


Yes; because - is on the keyboard and — isn't. (Don't tell me how to type —, I know how, but despite that it is the reason, which is what the parent comment asks about.)


Many people have for decades. Seems fine to me.


Is there a reason you phrased the question that way, instead of just asking whether it was written by AI?


It's just that I have the feeling that people avoid using the actual em-dash for fear of being accused of the text being AI-generated. (Which isn't a valid indicator anyway.) Maybe it's just my perception that I notice this more since LLMs became popular.


My original word processor corrected “—-“ to an em-dash, which I would get rid of because it didn't render correctly somewhere in the translation between plaintext, markdown, and HTML (sort of how it butchered “- -“ just now on HN).

But what you'd see in your browser was “square blocks”.

So I just ran the output through some strings/awk/sed (server side) to clean up certain characters, which I now know specifying “utf-8“ encoding fixes altogether.

TL;DR: the “problem” was “let's use WordPress as a CMS and composer, but spit it out in the same format as its predecessor software and keep generating static content that uses the design we already have”.

Em-dashes needed to be double dashes due to a longstanding oversight.

The Original Sin was Newsmaker, which had a proprietary format that didn't work in anything else and needed some Perl magic to spit out plaintext.

I don't work in that environment or even that industry anymore, but I took along the hacky methodology my then-boss and I came up with together.

SO,

1) I still have a script that gets rid of them when publishing, even though it's no longer necessary, and it's been doing THAT for longer than “LLMs” have been mainstream.

And 2) now that people ask “did AI write this?”, I still continue a long-standing habit of getting rid of them when manually composing something.

Funny story though: after twenty years of just adding more and more post-processing kludge, I finally screamed AAAAAAAAHAHHHH WHY DOES THIS PAGE STILL HAVE SQUARE BLOCKS ALL OVER IT at “Grok.”

All that kludge and post-processing was solved by adding utf-8 encoding in the <head>, which an “AI” helpfully pointed out in about 0.0006s.

That was about two weeks ago. Not sure when I'll finally just let my phone or computer insert one for me. Probably never. But that's it. I don't hate the em-dash. I hate square blocks!

Absolutely nothing against AI. I had a good LONG recovery period where I could not sit there and read a 40-100 page paper or a manual anymore, and I wasn't much better at composing my own thoughts. So I have respect for its utility and I fully made use of it for a solid two years.

And it just fixed something that I'd overlooked because, well, I'm infrastructure. I'm not a good web designer.


Will we know AGI has been achieved when it stops using em-dashes?


Any AI smart enough not to use em-dashes will be smart enough to use them.


This problem wouldn't exist if OpenAI didn't store chat logs (which of course they want to do, so that they can train on that data to improve the models). But calling the NYT the bad guy here is simply wrong, because it's not strictly necessary to store that data at all, and if you do, there will always be a risk of others getting access to it.


This isn't memory until the weights update as you talk. (The same applies to ChatGPT.)


Maybe this is just some niche use case, but I tested it with a 268x98 PNG screenshot, and it made the image bigger and worse: https://files.catbox.moe/7so3z6.png


JPEG is for photos.

For a white screen with black text, PNG is still compressed, and it's lossless.

People should not be using PNG for photos. If they are using PNG properly, converting to JPEG is a mistake.


Fair point.

Tiny, high-contrast UI screenshots are a worst-case for JPG—size can grow and edges get mushy.

PNG is the right choice here.
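If you want to sanity-check this yourself, a rough sketch using the Pillow library (assuming it is installed; "screenshot.png" stands in for a small UI capture like the one above):

    # Compare PNG vs. JPEG output sizes for a small, high-contrast UI screenshot.
    import os
    from PIL import Image

    img = Image.open("screenshot.png")  # e.g. a 268x98 UI capture

    img.save("out.png", optimize=True)               # PNG: lossless, keeps sharp text edges
    img.convert("RGB").save("out.jpg", quality=85)   # JPEG: lossy, tends to smear hard edges

    for path in ("out.png", "out.jpg"):
        print(path, os.path.getsize(path), "bytes")

On flat black-on-white text the PNG will typically come out both smaller and pixel-perfect, which is the point being made above.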


Or use the 'privacy redirect' extension which lets you specify your preferred nitter instance. It also works for other platforms.



Who's the suspect behind the cyberattack? Russia? Or another ransomware group?

