Credibility is the core currency of soft power, whether one views its ultimate goal as manufacturing consent or fostering genuine cultural attraction. Without that perceived reliability, the qualifier "soft" loses its meaning.
>Credibility is the core currency of soft power, whether one views its ultimate goal as manufacturing consent or fostering genuine cultural attraction.
Not sure it's worth dissecting this, but there is a lot of grey area in your claim about the meaning of credibility. (Credibility and cultural attraction? Pretty sure these have little correlation. Dictators can make credible threats.) Further, it's a debatable claim that there is a 'core currency' of soft power.
As a contextualist, I am not going to die on this hill over your personal meaning of credibility. But I can attest that your claim is stated with more conviction than any International Relations realist practitioner would muster.
The administration is dispensing with the institutions of soft power. I don't think it's the main goal so much as a consequence of their worldview. Soft power is essentially worthless to people who have no interest in maintaining a facade of international cooperation.
The AI can hire verifiers too. It of course turns into a recursive problem at some point, but that point is defined by how many people predictably do the assigned task.
I'd say, abstracting it away from AI, Stephen King explored this type of scenario in 'Needful Things'. I bet there is a rich history in literature of exactly this kind of thing, as it basically boils down to an exploration of free will vs. determinism.
There is definitely a large section of the security community for which this is very true. Automated offensive suites and scanning tools have made entry a pretty low bar over the last decade or so. Many people who learn to use these tools have no idea how they work. Even when they understand how an exploit works at a base level, many have no idea how the code behind it works. It's an abstraction layer very similar to LLMs and coding.
I went to a secure coding conference a few years back and saw a presentation by someone who had written an "insecure implementation" playground of a popular framework.
I asked, "what do you do to give tips to the users of your project to come up with a secure implementation?" and got in return "We aren't here to teach people to code."
Well yeah, that's exactly what that particular conference was there for. Mostly I took it as "I am not confident enough to attempt a secure implementation of these problems".
>The exposed data told a different story than the platform's public image - while Moltbook boasted 1.5 million registered agents, the database revealed only 17,000 human owners behind them - an 88:1 ratio.
They acquired the ratio by directly querying tables through the exposed API key...
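For what it's worth, the quoted ratio does check out arithmetically against the two figures in the excerpt (a quick sanity check, using only the numbers reported there):

```python
# Figures as quoted in the article; not independently verified.
registered_agents = 1_500_000  # "1.5 million registered agents"
human_owners = 17_000          # "17,000 human owners"

ratio = registered_agents / human_owners
print(f"{ratio:.0f}:1")  # prints "88:1", matching the reported figure
```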
I feel publishing this moves beyond standard disclosure. It turns a bug report into a business critique. Using exfiltrated data in this way damages the cooperation between researchers and companies.