
This is advocating for expanding the pool of CSAM victims to include every child who has ever appeared in a public photo. That does not reduce the number of victims; it amounts to deepfaking done to children on a global scale, in the desperate hope of justifying nuance and ambiguity in an area where none can exist. That's not harm reduction; it is explicitly harm normalization and legitimization. There is no such thing (and never will be such a thing) as victimless CSAM.


What if there were a way to generate it without involving any pictures of real children in the training set?


This is hoping for some technical means to erase the transgressive nature of the concept itself. It simply is not possible to reduce harm to children by legitimizing provocative imagery of children.


How so? No children involved, no harm done.



