
As an attorney, I don't see what the value of that would be if I have to double-check its work. How else could I verify that the output is correct?


The key is not to give it any agency over the work product, but rather to have it act as an editor or advisor that can offer suggestions; everything that goes into the document is typed by human hands.

Giving it a document and asking it about edge cases or things that may not be covered in the document. Asking it for various ways that one could argue against a given pleading, and then considering ways that those could be headed off before they could even be raised.

In my own case (writing short fiction), having it act as an editor, identifying grammatical mistakes, contradictory statements, ambiguous sentences, and tone mismatches for a given character, has been very helpful... but I don't have it write the short fiction for me.

---

For software, where it may be used to generate some material ("write a short function that does..."), the key word is short: something that I can verify and reason about without too much effort.
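To put a size on "short" (my own illustration, not anything from this thread): a function like the one below can be read and verified end to end in under a minute.

    # The kind of "short function" that's easy to verify by eye:
    # small, pure, no hidden state, obvious edge cases.
    def clamp(value: float, low: float, high: float) -> float:
        """Clamp value to the inclusive range [low, high]."""
        if low > high:
            raise ValueError("low must not exceed high")
        return max(low, min(value, high))

    assert clamp(5, 0, 10) == 5
    assert clamp(-3, 0, 10) == 0
    assert clamp(42, 0, 10) == 10

Anything much past that scale, and the review stops being cheaper than writing it myself.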

However, changes on the scale of hundreds of lines are exhausting to review whether an LLM or a junior dev wrote them. I would expect the same to be true of several paragraphs or pages of legalese that need additional levels of reading, reasoning, and verifying.

If it's too much to reason about and verify - it's asking too much.

I'd no more trust an LLM to find citations to cases than I'd trust it to program against a lesser-known framework (where they've been notorious for hallucinating functions that don't exist).
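One cheap first filter for unfamiliar suggestions (a Python sketch of my own; json.parse_lenient below is a deliberately made-up name) is to check that the call even exists before reasoning about whether it's correct:

    import importlib

    def api_exists(module_name: str, attr_name: str) -> bool:
        """True if module_name actually exposes attr_name.
        A quick sanity check for LLM-suggested calls."""
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            return False
        return hasattr(module, attr_name)

    assert api_exists("json", "loads")              # real
    assert not api_exists("json", "parse_lenient")  # plausible-sounding, but hallucinated

That catches the invented-function failure mode, though not the subtler one where the function exists but doesn't do what the LLM claims it does.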


>The key is not to give it any agency over the work product, but rather to have it act as an editor or advisor that can offer suggestions; everything that goes into the document is typed by human hands.

>Giving it a document and asking it about edge cases or things that may not be covered in the document.

As an attorney, how am I supposed to trust that it gave a proper output on the edge cases without reading the document myself?

>Asking it for various ways that one could argue against a given pleading, and then considering ways that those could be headed off before they could even be raised.

Do people think attorneys don't know how to do their day-to-day jobs? We generally do not have trouble coming up with ways to argue against a pleading. Maybe if you're some sort of small-time generalist working on an issue you hadn't handled before, but that's not most attorneys. And even then I'd be worried: you basically lack the expertise needed to verify the model's output for correctness anyway. This is why attorneys work in networks. I'd just find a colleague, or a network of attorneys specializing in that area, and find out from them what is needed, rather than trusting that an LLM knows all of that because it digested the entire public Internet.

I've said it here before too: I think people talking about using AI as an attorney don't really understand what attorneys do all day.


The value, at least in my field, is that checking for correctness is often a less tiring task than writing the code. With proper prompting, the LLM will also cover more corner cases than a human would think to cover. And to be honest, I really like the names of internal identifiers that LLMs come up with; naming is a skill that is notoriously lacking in software development.
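One pattern that keeps the checking cheap (my own sketch with stand-in functions, not any particular tool): have the LLM write the clever version, then compare it against an obviously correct reference on the corner cases.

    def reference_dedupe(items):
        """Obviously correct: quadratic, keeps first occurrence, preserves order."""
        out = []
        for x in items:
            if x not in out:
                out.append(x)
        return out

    def llm_dedupe(items):
        """Stand-in for an LLM-written linear-time version of the same spec."""
        seen = set()
        return [x for x in items if not (x in seen or seen.add(x))]

    for case in [[], [1], [1, 1], [2, 1, 2, 3, 1], list("abracadabra")]:
        assert llm_dedupe(case) == reference_dedupe(case), case

Reading and running a check like that is far less tiring than writing the fast version myself.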

Additionally, code review with the proper tools can be done relatively quickly, and it's always a good idea to get a second opinion - even that of an LLM. I suppose the human could write the code first and then ask the LLM for a code review - but that is not common practice.


That's great for people who write code!



