> It's like being mad at your bank that somebody stole your credit card on the subway and made purchases with it.
It's being mad that a store sold me a counterfeit rolex, actually. Spotify might claim to just be a "marketplace" like every other platform these days, but they're still the ones hosting that page that passes off slop as legitimate work by another artist. Spotify has a responsibility to govern what is hosted and sold on their platform.
> He was directly told that he won't be promoted because he was a white man.
Even if that were true (I don't believe his allegation), that's just _one company_. He obviously considered himself a very intelligent and capable person, so wouldn't the obvious next step be to go work basically anywhere else? The Dilbert comics never seemed to push the ideal of company loyalty, so I don't think he felt trapped by obligation there.
One only needs to look at the upper management and board of any Fortune 500 company to disprove the idea that only non-white women are getting promoted.
Good for them; it's nice to see some management that hasn't totally bought into this "no workers, only subscription AI bots" vision of the future that so many tech CEOs are selling.
Personally I would never pay for tabletop miniatures or lore books generated by AI. It's the same core problem as publishing regurgitated LLM crap in a book or using ChatGPT to write an email - I'm not going to spend my precious time reading something that the author didn't spend time to write.
I am perfectly capable of asking a model to generate a miniature, or a story, or a dumb comment for reddit. I have no desire to pay a premium for someone else to do it and get no value from the generated content - the only original part is the prompt, so if you try to sell AI-generated "content" you might as well just sell the prompt instead.
Cool. That sure sounds nice and simple. What do you do when multiple LLMs disagree on what the correct tests are? Do you sit down and compare 5 different diffs to see which one has the tests you actually want? That sure sounds like a task you would need an actual programmer for.
At some point a human has to actually use their brain to decide what the actual goals of a given task are. That person needs to be a domain expert to draw the lines correctly. There's no shortcut around that, and throwing more stochastic parrots at it doesn't help.
Just because you can't (yet) remove the human entirely from the loop doesn't mean that economising on the use of the human's time is impossible.
For comparison, have a look at compilers: nowadays approximately no one writes machine code by hand; we write a 'prompt' in something like Rust or C and ask another computer program to create the actual software.
We still need the human in the loop here, but it takes much less human time than creating the ELF directly.
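To make the analogy concrete, here's a minimal sketch in Python (standing in for Rust or C, with CPython bytecode standing in for the ELF): the human writes the high-level 'prompt', and a program they never inspect instruction-by-instruction produces the artifact that actually runs.

```python
import dis

# The human-written "prompt": high-level source code.
source = "print(sum(x * x for x in range(10)))"

# Another program (CPython's compiler) turns it into the low-level
# artifact -- bytecode here, where a C compiler would emit an ELF.
code = compile(source, "<prompt>", "exec")

dis.dis(code)  # the generated instructions no human wrote by hand
exec(code)     # run the "software" the machine created for us
```

The human still decides what to build and checks the result, but the translation work is fully delegated.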
It’s not “economizing” if I have to verify every test myself. To actually validate that tests are good I need to understand the system under test, and at that point I might as well just write the damn thing myself.
This is the fundamental problem with this “AI” mirage. If I have to be an expert to validate that the LLM actually did the task I set for it, and isn’t just cheating on the tests, then I might as well code the solution myself.
I can see how LLMs can help with testing, but one should never compare LLMs with deterministic tools like compilers. LLMs are entirely a separate category.
> otherwise they would have paid for the service they were "depending fundamentally" on.
It's a "*.ai" company. Deductive probably spent more human time on their fancy animated landing page than on engineering their actual system. If they vibe-coded most of their product, I wouldn't be surprised if they didn't even know they were using Datadog until they got the email.
Doesn’t “doxxing” refer to publicly linking a private/anonymous identity to a public one? It’s disingenuous to imply the congresswoman had a problem with “linking to a public web page.” She had a problem with linking to a public web page in the context of unmasking the anonymous identity of a US military member. The linkage, not the linking, is the doxxing.
I don’t agree with Luna here, and my exposure to this story is limited to what’s in the article. But I do agree with the GP comment regarding the lack of impartiality in this article.
It’s the military’s responsibility to protect the identity of their operators by ensuring they don’t publish information that could lead to doxxing. If they miss something, that’s on them. And prosecuting a private citizen is a deflection of responsibility that ignores the risk of actual motivated attackers (e.g. Maduro loyalists) uncovering the same information and, rather than publishing it to Twitter, using it to threaten or harm the doxxed victim.
That's not sufficient. If a user copies customer data into a public Google Sheet, I can reprimand and otherwise restrict the user. An LLM cannot be held accountable, and cannot learn from mistakes.
It's structurally impossible. LLMs, at their core, mix trusted system input (the prompt) indiscriminately with untrusted input from users and the internet at large. There is no separation between the two, and there cannot be with the way LLMs work. They will always be vulnerable to prompt injection and manipulation.
The _only_ way to create a reasonably secure system that incorporates an LLM is to treat the LLM output as completely untrustworthy in all situations. All interactions must be validated against a security layer and any calls out of the system must be seen as potential data leaks - including web searches, GET requests, emails, anything.
You can still do useful things under that restriction but a lot of LLM tooling doesn't seem to grasp the fundamental security issues at play.
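As a concrete illustration of that last point, here's a minimal sketch of such a validation layer in Python. Everything here is hypothetical: the JSON tool-call format, the `http_get` tool name, and the allowlist are stand-ins for illustration, not any real framework's API.

```python
import json
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts the system may ever reach.
ALLOWED_HOSTS = {"api.internal.example.com"}

def validate_tool_call(raw_llm_output: str) -> dict:
    """Treat the LLM's output as untrusted: parse it strictly and
    reject anything that could exfiltrate data."""
    try:
        call = json.loads(raw_llm_output)
    except json.JSONDecodeError:
        raise ValueError("output is not valid JSON; refusing to act on it")

    if call.get("tool") != "http_get":
        raise ValueError(f"tool {call.get('tool')!r} is not permitted")

    host = urlparse(call.get("url", "")).hostname
    if host not in ALLOWED_HOSTS:
        # Even a GET can leak data through the path or query string,
        # so requests to unknown hosts are refused outright.
        raise ValueError(f"host {host!r} is not on the allowlist")

    return call
```

The key design point is that the deterministic gate makes the decision, not the model: the model's text never directly triggers a web search, a GET request, an email, or anything else that leaves the system.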
So many people here want to bury their heads in the sand, as if that will protect them. It's disgraceful to see people calling themselves "hackers" while supporting the feds with no reservations or critical thought.
A good number of people on this site don’t mind the boot as long as they are wearing it. It’s not particularly rare for groups of software engineers to have American-style libertarians amongst them.
I really want an HN overlay where we can act in public. The rank cowardice here runs deep: the same people hiding and cloaking the truth again and again, drawing a veil over it. Being able to post publicly what we are up to, and to see more directly, without these obfuscating folks hiding behind their anonymity, feels necessary. What a sad, pathetic foe humanity faces. It's so sad that HN allows a couple of people to destroy our ability to understand and see the world.
The stock seems completely disconnected from Musk's antics. I would think that having a CEO who is clearly a heavy ketamine user and spends more time playing politician than actually running the company would have a negative impact, but Tesla's stock has been divorced from reality for a long time.