
AI will definitely, without a doubt, make executive decisions. It already makes lower-level decisions. The company that runs the AI can be held accountable (meaning less likely OpenAI or the foundation LLM provider, and more likely the company calling LLMs to make decisions on car insurance, etc.).


Executives have always used decision-making tools. That’s not the point. The point is that the executive can’t point to the computer and say “I just did what it said!” The executive is the responsible party. She or he makes the choice to follow the advice of the decision-making tool or not.


The scary thing for me is when they've got an 18 year old drone operator making shoot/no-shoot decisions on the basis of some AI metadata analysis tool (phone A was near phone B, we shot phone B last week...).

You end up with "Computer says shoot" and so many cooks involved in the software chain that no one can feasibly be held accountable except maybe the chief of staff or the president.


More than any other organization, the military can literally get away with murder, and they're motivated to recruit and protect the best murderers. It's only by political pressure that they may uphold some moral standards.


There is not a finite amount of blame for a given event. Multiple people can be fully at fault.


In most cases today, if we don't attribute a direct crime solely to one person but instead to an organisation, everyone avoids criminal prosecution. It's only the people who didn't manage to spread the blame through the rest of the organisation that go down.


Yeah but it's fine because nobody cares if you kill a few thousand brown people extra.


Thing is, the chain of responsibility gets really muddled over time, and blame is hard to dish out. Let's think about denying a car insurance claim:

The person who clicks the "Approve" / "Deny" button is likely an underwriter looking at info on their screen.

The info they're looking at gets aggregated from a lot of sources. They have the insurance contract. Maybe one part is an AI summary of the police report. And another part is a repair estimate that gets synced over from the dealership. A list of prior claims this person has. Probably a dozen other sources.

Now what happens if this person makes a totally correct decision based on their data, but that data was wrong because the _syncFromMazdaRepairShopSFTP_ service got the quote data wrong? Who is liable? The person denying the claim, the engineer who wrote the code, AWS?

In reality, it's "the company" in so far as fault can be proven. The underlying service providers they use don't really factor into that decision. AI is just another tool in that process that (like other tools) can break.
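A minimal Python sketch of the aggregation pipeline described above, just to make the failure mode concrete. All names and shapes here (ClaimFile, the injected fetchers, the payout-limit rule) are hypothetical illustrations, not anything from the thread; the point is that the underwriter's screen is only as trustworthy as the worst upstream feed.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ClaimFile:
    """Everything the underwriter sees on one screen (hypothetical shape)."""
    contract_terms: dict
    police_report_summary: str  # e.g. an LLM-generated summary
    repair_estimate: float      # synced from the dealership over SFTP
    prior_claims: list

def build_claim_file(
    claim_id: str,
    fetch_contract: Callable[[str], dict],
    summarize_police_report: Callable[[str], str],
    sync_repair_estimate: Callable[[str], float],  # the _syncFromMazdaRepairShopSFTP_ role
    fetch_prior_claims: Callable[[str], list],
) -> ClaimFile:
    # Each feed is a separate system with its own failure modes; the aggregator
    # has no way to tell a wrong-but-plausible quote from a correct one.
    return ClaimFile(
        contract_terms=fetch_contract(claim_id),
        police_report_summary=summarize_police_report(claim_id),
        repair_estimate=sync_repair_estimate(claim_id),
        prior_claims=fetch_prior_claims(claim_id),
    )

def underwriter_decision(file: ClaimFile, payout_limit: float) -> str:
    # The human applies the rules correctly over possibly-bad inputs:
    # if the synced estimate is wrong, a "correct" denial is still a bad outcome.
    return "approve" if file.repair_estimate <= payout_limit else "deny"

if __name__ == "__main__":
    # The SFTP sync returns a garbled quote, so the decision is wrong
    # even though the underwriter did nothing wrong.
    file = build_claim_file(
        "claim-123",
        fetch_contract=lambda cid: {"deductible": 500},
        summarize_police_report=lambda cid: "Rear-end collision, no injuries.",
        sync_repair_estimate=lambda cid: 48_000.0,  # true estimate was 4_800.0
        fetch_prior_claims=lambda cid: [],
    )
    print(underwriter_decision(file, payout_limit=20_000.0))  # -> "deny"
```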


_syncFromMazdaRepairShopSFTP_ failing is also just as likely to cause a human to deny a claim.

Just because an automated decision system exists does not mean an OOB (out-of-band) correctional measure should not exist.

In other words, if AI fixes a time sink for 99% of cases but fails on 1%, then let 50% of that 1% of angry customers get a second decision because they emailed the staff. That fallback system still saves the company millions per year.
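A hedged sketch of that out-of-band fallback: the automated path handles the bulk of claims, and any customer who appeals through a side channel gets a human re-decision. The volumes and rates in the comments are illustrative assumptions chosen to match the 99%/1%/50% framing above, not real figures.

```python
def decide_claim(claim, auto_decide, human_decide):
    """Automated decision first; an appeal forces an out-of-band human re-decision."""
    decision = auto_decide(claim)
    if claim.get("appealed"):           # OOB channel: email, phone call, regulator...
        decision = human_decide(claim)  # the human override is the correction path
    return decision

if __name__ == "__main__":
    # Back-of-envelope volumes (assumptions, not data from the thread):
    total_claims = 1_000_000
    auto_error_rate = 0.01          # the 1% the automated system gets wrong
    appeal_rate_among_errors = 0.5  # half of those customers bother to email
    human_reviews = int(total_claims * auto_error_rate * appeal_rate_among_errors)
    print(f"Manual second decisions needed: {human_reviews}")  # 5,000 of 1,000,000 claims
```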



