Did you mean to take that course on human sarcasm?
I can understand how a one- or two-sentence comment that is wholly sarcastic is readily misinterpreted, à la Poe's Law. But my context should have been pretty clear.
I don't see how talking about abject failures of leadership and policy response in the first term is "outdated" when the second term is essentially a doubling down on these derelictions of duty.
Are we just supposed to forget those past failures in favor of focusing on the current catastrophe? What tariff tantrum? What Greenland treachery? Don't you know we've always been at war with Iran?
Not defending any party, it's a basic ethological expectation: a creature that tries to beat another should expect an aggressive response in return.
Of course, never aggressing against anyone, and transforming any aggression against oneself into an opportunity to acculturate the aggressor into the same empathic behavior, is the mark of a paragon of virtue. But paragons of virtue are not the median norm, by definition.
> Not defending any party, it's a basic ethological expectation: a creature that tries to beat another should expect an aggressive response in return.
Another basic ethological expectation is that the strong dominate the weak, but maybe we shouldn’t base our moral framework around how things are, and rather on how they should be.
You don't think non-consensually revealing somebody's identity is a problem?
Resorting to DDoS is not pretty, but "why is my violent behavior met with violence" is a little oblivious and a reversal of the victim and perpetrator roles.
If it's information that's medium-difficult to get, and the only people who would use the information to cause harm can easily put in more effort than that, then I don't think it's "violence" to post that information.
I think this is a weak framing. Lots of things are moral or immoral under specific circumstances. We should protect people from being murdered. I think murder is usually wrong. But we also likely agree that there are circumstances in which killing someone can be justified. If we can find context for taking a life, I'm quite sure we can find context for a DoS.
So their desire to not be used to commit a cyberattack doesn’t factor in? As long as they aren’t legally liable, it doesn’t matter?
Also a checkbox that says something like “I would like to help commit a crime using my internet traffic” would keep people from having their traffic used without consent.
There's an old legal maxim "in pari delicto potior est conditio defendentis", that is "in a case of mutual fault the position of the defending party is the better one."
This is far too general a philosophy to be applied in the dark. You can't say, without any familiarity with what the actual software looks like, whether a rewrite is the best way to proceed. There are plenty of projects that get bogged down in the inertia of a system that wasn't built right in the first place.
Also, modern AI is only a few years old at this point. Whatever has been built so far is hardly load bearing.
This post's title is hyperbolic. At best the author is noticing what most people have known for a long time: there are bots on the internet. Most interactions I have online are with real people. Maybe we will end up with a dead internet, but moderation is still possible currently.
The elephant in the room is that a lot of social media companies have a conflict of interest. They can juice their user metrics by not moderating bots as well as they could.
And I finally figured out how to get links to answers instead of just inlining the content as before. Anyways, there it is. We live in a time where questions like "Does inference or training use more compute?" can be answered quickly by just pasting it into a search box.