The global workforce benefits from higher salaries and higher demand for labor, not from zero- or negative-sum moves of jobs from one place to another.
This works when protesting an unjust law with known penalties. King knew he would be arrested and had an approximate idea of how long he could be incarcerated. I don't know if it's the same bargain when you are subjecting yourself to an actor that does not believe it is bound by the law.
What? No, he didn't. The police went after peaceful civil rights protesters with clubs and dogs. They knew they could be badly hurt or killed and did it anyway.
Oh, apologies, I'm not saying that King didn't risk considerable injury or death. I'm saying that I don't think he is talking about that in this particular passage. The passage gp quoted is about how accepting lawful penalties from an unjust law venerates and respects the rule of law.
I think it's different with illegal "penalties" like being mauled by a dog or an extrajudicial killing. While leaders of the civil rights movement faced those risks, I don't think King is asking people to martyr themselves in that passage, but to respect the law.
In contrast to accepting punishments from unjust laws, I think there is no lawless unjust punishment you should accept.
I think we're never going to be able to have robust AI detection, and current models are as bad as they'll ever be. Instead we really need the ability to sign images on cameras, proving these are the bits that came off the hardware unedited, so that professional news outlets can verify them.
But it's going to cost money to make and market all these new cameras, and I just don't know how we incentivize or pay for that, so we're left unable to trust any images or video in the near future. I can only think of technical solutions, not the social changes that need to happen before the tech is wanted and adopted.
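To make the verification step concrete, here's a minimal sketch in Python (using the cryptography library) of what a news outlet's check could look like, assuming each camera holds an Ed25519 private key in a secure element and the manufacturer publishes the matching public key. The key handling and names here are purely illustrative, not any real vendor's scheme, and note that a valid signature only proves the bytes weren't changed after capture, not that the scene in front of the lens was real.

```python
# Illustrative sketch only: hypothetical hardware-signed image verification.
# Assumes the camera signs the raw sensor bytes with a private key it never
# exposes, and the manufacturer publishes the matching public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- inside the camera (hypothetical): sign the raw bytes at capture time ---
camera_key = Ed25519PrivateKey.generate()     # in reality: burned into a secure element
image_bytes = b"...raw sensor output..."      # stand-in for the actual image data
signature = camera_key.sign(image_bytes)

# --- at the news outlet: verify against the manufacturer's published key ---
published_key = camera_key.public_key()
try:
    published_key.verify(signature, image_bytes)
    print("Signature valid: these bytes match what the camera claims to have produced.")
except InvalidSignature:
    print("Signature invalid: the file was altered after capture (or the key is wrong).")
```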
Thank you, this is fantastic to know! I think we have to normalize requiring this or similar standards for news; it will go a long way.
Ideally we would have a similar attestation from most people's cameras (on their smartphones), but that's a much harder problem, especially if you also want to support third-party camera apps.
OK, but then the conversation switches from "was this actually taken by a camera" to "is this a photo of a printout," and we're not really any further along in being able to establish trust in what we're seeing. My point is the goalposts will always get moved: unless we see something in person these days, we can't really trust it.
This sounds like a good idea on its face, but it will have the effect of both legitimizing altered photos and delegitimizing photos of actual events.
You will need camera DRM with a hardware security module all the way down to the image sensor, where the hardware is in the hands of the attacker. Even when that chain is unbroken, you'll need to detect all kinds of tricks where the incoming photons themselves are altered. In the simplest case: a photo of a photo.
If HDCP has taught us anything, it's that vendors of consumer products cannot implement such a secure chain at all; it suffered ridiculous security vulnerabilities for years. HDCP has been given up on and has become mostly irrelevant, except perhaps for the criminal liability it places on 'breaking' it. Vendors are also pushed to rely on security by obscurity, which makes such vulnerabilities harder for researchers to find than for attackers.
If you have half of such a 'signed photos' system in place, it becomes easier to dismiss photos of actual events on the basis that they're unsigned. If a camera model, or a security chip shared by many models, turns out to be broken, or a new photo-of-a-photo trick becomes known, a huge number of photos produced before that become immediately suspect. If you gatekeep (the proper implementations of) these features to professional or expensive models, citizen journalism will be disincentivized.
But even more importantly: if you choose to rely on technical measures that are poorly understood by the general public (and that are likely to blow up in your face), you erode a social system of trust that is already in place, which is journalism. Although the rise of social media, illiteracy, and fascism tends to suggest otherwise, the journalistic chain of custody of photographic records mostly works fine. But only if we keep maintaining and teaching that system.
Then you can have a signed picture of a screen showing an AI image. And the government will have a secret version of OpenAI that has a camera signature.
But especially when a party has been shown, with evidence, to alter photos even for "memetic" reasons, they've poisoned their own reliability. As far as I'm concerned the DOJ is no longer a reliable source of evidence until a serious purge of leadership, given their intimate connection with the parties who posted this edited photo.
Because they bought a Claude subscription on a personal account and the error message said that they belong to a "disabled organization" (probably leaking some implementation details).
Anthropic banned the author for doing nothing wrong, and called him an organisation for some reason.
In this case, all he lost was access to a service which develops a split personality and starts shouting at itself until it gets banned, rather than completing a task.
Google also provides access to LLMs.
Google could also ban him for doing nothing wrong, and could refer to him as an organisation, in which case he would lose access to services providing him actual value (e-mail, photos, documents, and phone OS).
There's another possibility (which was my first reading before I changed my mind and wrote the above):
Google routes through 3rd-party LLMs as part of its service ("link to a google docs form, with a textbox where I tried to convince some Claude C"). The author does nothing wrong, but the Claude C reading his Google Docs form could start shouting at itself until it gets Google banned, at which point Google's services go down, and the author again loses actually valuable services.
Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed. Anthropic itself is not an organization in this sense, nor is Google, so I would say that referring to them as "non-disabled organizations" is an equivocation fallacy. Besides that, I can't tell if it's a joke, if it's some kind of statement, or what is being communicated. To me it's just obtuseness for its own sake.
It's a joke because they do not see themselves as an organization: they bought a personal account, were banned without explanation, and their only communication refers to them as a "disabled organization".
Anthropic and Google are organizations, and so an "un-disabled organization" here is using that absurdly vague language as a way to highlight how bad their error message was. It's obtuseness to show how obtuse the error message was to them.
Some things are obtuse but still clear to everyone despite the indirection, like the error message they got back. Their description of what caused it is obtuse but, based on this thread, is not clear to quite a few people (myself included). It's not dunking on the error message to reuse the silly but clear terminology in a way that's borderline incoherent.
Last time I went on X, my feed, which I had curated from ML contributors and a few politicians, had multiple white nationalist memes and engagement slop. Fact checks are frequently added only after millions of impressions.
I am sure there are very smart, well-meaning people working on it, but it certainly doesn't feel better than the BBC to me. At least I know that's state media of the UK, and when something is published I see the same article as other people.
>"but it certainly doesn’t feel better than the BBC to me"
The BBC was cutting-edge in creating and fostering methodologies that went on to become most of the "impartial reporting" practices journalists use. So, even if it doesn't feel any "better" than the BBC, that's still a pretty good step in the right direction!
The parent poster was saying X was better than the BBC. I certainly wouldn't have picked that one, but it's likely because they get their news from conservative outlets outraged by the recutting of Trump's speech on January 6th.
That phrasing sounds like you're not yourself outraged by it. It wouldn't be surprising given the institutional attitudes seen at the BBC (and Channel 4, which got caught doing something even worse) - clearly, leftists have decided that framing politicians and publishing entirely fake news is acceptable if it's to attack right-wing people.
Anyone who knows about that event and is still watching the BBC afterwards is saying they don't care about the truth of their own beliefs. Dangerous stuff.
>So, even if it's not feeling any "better" than BBC, that's still a pretty good step in the right direction!
The step in the direction of decentralized filter bubbles isolating society? With no channels to hold information accountable and checked/updated for accuracy?
GP made a pretty good case for X being a good-faith attempt at a new distributed structure for mass media that at least TRIES to have conflicting viewpoints or objective "fact checks", even if it occasionally misses the mark. I was VERY early on the "hate-Elon" bandwagon and even earlier on not being an active Twitter/X user (search my username).
In a post-Fairness Doctrine world, what else would satisfy you?
I don't think we're in a post-Fairness Doctrine world, for one. So no, I haven't given up on the idea of the 4th estate. Your solution to bias is, as always, to not take any one source for granted. Take time to actually read articles from multiple angles that fall in line with the Fairness Doctrine. Then from there, use your own lived experiences to form your own viewpoint.
Outsourcing that to soundbites from randos on Twitter with middle-school literacy is insanity. But let me use a charitable lens here.
Any notion of X being a good-faith attempt at a community-led fact checker got broken with the introduction of Grok. Then those hopes were shattered to pieces when Grok was shown to be massively compromised by yet another central figure, one who, yes, has the literacy of a middle schooler. We somehow ended up with the worst of both worlds: centralization around a bad knowledge hub, and stupidity.
>what else would satisfy you?
If using our brains is out of the equation and lack of censorship is truly the most important metric of "free discussion": let's just bring back 4chan. No names or personalities, 99% free-for-all, and it technically has threading support to engage in conversations. There is centralization, but compared to the rest of the internet, the moderators and admins stay very quiet.
There's a lot I hate about modern social media, but surprisingly 4chan only has like 2 things I strongly dislike. Big step up from the 20+ reasons I can throw at nearly every other site.
This wouldn't bother me as much, but it's really more like 5-7 days depending on freight use of the lines, and somehow they can't tell you ahead of time what it's going to be?