despite this being something practically everybody wants, the fact that it hasn't happened is not a coincidence and speaks to the power of police unions/guilds and their lobbying arms. outside a few toothless instances, those groups are extremely good at reframing these attempts and mobilizing their bases to vote against the broader public interest.
> despite this being something practically everybody wants,
No, everybody does not want police accountability. Half the population will fall on a grenade to prevent it. They know that the purpose of the police is to keep the undesirables in line, and they never envision that they themselves will fall into that category.
oh, i generally don't disagree with you on that point; i specifically meant that when presented with the question "do you want your tax dollars to pay for police liabilities?" the answer is probably almost always "no".
Sure. But when you ask "Do you want the police to be unable to do their job, so you live in a lawless hellscape run by gangbangers and ISIS cartels?", the answer is also "No."
The problem is that the mass media sets the framing of acceptable discourse, and that mass media is in large part an ideological monoculture. And even when it's not, it is happy to present absolutely insane batshit lunacy as 'one of the two sides' of an issue.
for the same reason `terraform apply` asks for confirmation before running - states can conceivably change without your knowledge between planning and execution. maybe this is less likely working with Claude by yourself but never say never... clearly, not all behavior is expected :)
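the confirm-before-apply pattern can be sketched in a few lines of Python. this is not how terraform is implemented internally, just an analogy: snapshot the state at plan time, then refuse to apply if the state has drifted since. the function and variable names here (`snapshot`, `apply_plan`) are hypothetical.

```python
import hashlib
import json

def snapshot(state: dict) -> str:
    """Hash the state at plan time so drift can be detected later."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_plan(plan, state: dict, planned_hash: str) -> dict:
    """Refuse to apply a plan if the state changed after it was made."""
    if snapshot(state) != planned_hash:
        raise RuntimeError("state drifted since planning; re-plan before applying")
    return plan(state)

# usage: plan against one state, apply only while it still matches
state = {"instances": 1}
planned = snapshot(state)
new_state = apply_plan(lambda s: {**s, "instances": 2}, state, planned)
```

the same check fails loudly if someone mutates `state` between planning and applying, which is roughly the situation the confirmation prompt guards against.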
similarly, i remember at least one organization (pre-Songtradr Bandcamp, i think) who didn't publish some of its open technical roles anywhere except in HTML comments on their website. they only wanted to attract folks who liked to poke around and look under the hood.
> an AI system is literally a machine that can think and do things itself
why do so many writers claim this as a matter of fact? are we losing (or did we never have) a shared definition of the word "think"? can an LLM, at this time, function with zero human input whatsoever?
edit to add: these are genuine questions, not meant to be rhetorical :)
it's hard for me to gauge a broader understanding of AI/LLMs since most of the conversations i experience around them are here, or in negative contexts with people i know. and i'll admit i'm one of those negative people, but my general aversion to AI mostly has to do with my own anxiety around my mental health and cognitive ability in a use-it-or-lose-it sense, along with a disdain for its use in traditionally-creative fields.
>are we losing (or did we never have) a shared definition of the word "think"
People have been saying “the computer is thinking” while webpages load or software runs for as long as I’ve been consciously aware. I agree there’s something new about describing AI as “literally a machine that can think,” but language has always had fuzzy borders.
It's wild to watch documentaries from the 1980s where a primitive computer is said to be "a thinking machine" that is "taking most of the work out of a job".
yeah, for sure. i really think some people are under the impression that LLMs are a form of general AI that actually processes thought rather than being an admittedly-impressive exponential autocomplete.
though i'm not by any means an AI booster, my question wasn't really meant to be taken as a gotcha - more a general taking stock of where we're at in terms of broader understanding of these technologies outside of the professional AI/hobbyist world.
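the "autocomplete" framing can be made concrete with a toy next-word predictor. to be clear, real LLMs are transformers trained on vast corpora, not bigram counters; this sketch (with hypothetical names `train_bigrams` and `complete`) only illustrates the shared idea of repeatedly predicting the most likely next token from preceding context.

```python
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """Count which word follows each word in the corpus (toy model)."""
    words = text.split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def complete(model, prompt: str, n: int = 3) -> str:
    """Greedily append the most frequent next word, n times."""
    words = prompt.split()
    for _ in range(n):
        nxt = model.get(words[-1])
        if not nxt:
            break  # no continuation seen in training data
        words.append(nxt.most_common(1)[0][0])
    return " ".join(words)

model = train_bigrams("the cat sat on the mat the cat ran on the mat")
print(complete(model, "the", 3))  # → "the cat sat on"
```

the point of contention upthread is whether scaling this predict-the-next-token loop up by many orders of magnitude amounts to "thinking" - the mechanism is the same shape, the capabilities clearly aren't.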
within a profit-driven economy, problems are simply profit centers to exploit for short-term revenue. solving those problems removes short-term revenue streams in favor of long-term stability, which of course would lead to long-term revenue streams... but who has time to wait?
Comcast has a very strict peering policy as well. They, like Deutsche Telekom, like to proverbially hold their customers hostage to make other networks pay to peer.