
Can't believe the pushback I'm getting here. The use case is stunningly obvious.

Companies want to dump all their Excels in it and get insights that no human could produce in any reasonable amount of time.

Companies want to dump a zillion help desk tickets into it and gain meaningful insights from them.

Companies want to dump all their Sharepoints and Wikis into it that currently nobody can even find or manage, and finally have functioning knowledge search.

You absolutely want a privately trained company model.



None of the use cases you're describing require training a new model. You really don't want to train a new model: it's not a good way to get one to learn reliable facts, and it tends to lose other knowledge in the process. The fine-tuning guidance for GPT-3.5 suggests something like under a hundred examples.

What you want is to get an existing model to search a well built index of your data and use that information to reason about things. That way you also always have entirely up to date data.
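A minimal sketch of what "search an index and reason over the results" looks like, using plain keyword overlap in place of a real search index or embedding store (all names and documents here are illustrative, not any particular vendor's API):

```python
# Retrieval-augmented prompting sketch: score documents by keyword
# overlap with the question, then paste the best match into the prompt
# handed to an off-the-shelf model. A production system would use a
# proper search index or embedding store instead of this toy scorer.

def score(question: str, doc: str) -> int:
    """Count how many of the question's words appear in the document."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def build_prompt(question: str, docs: list[str]) -> str:
    """Retrieve the best-matching document and prepend it as context."""
    best = max(docs, key=lambda d: score(question, d))
    return (f"Context:\n{best}\n\n"
            f"Question: {question}\n"
            f"Answer using only the context above.")

docs = [
    "Expense reports over 500 EUR need VP approval.",
    "The VPN gateway address is vpn.example.com.",
]
prompt = build_prompt("What is the VPN gateway address?", docs)
print(prompt.splitlines()[1])  # prints the VPN document, not the expense one
```

Because the model only ever sees what the index returns at query time, updating the data means updating the index, with no retraining at all.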

People aren't missing the use cases you describe, they're disagreeing about how to achieve them.


>>Companies want to dump all their Excels in it and get insights that no human could produce in any reasonable amount of time.

>>Companies want to dump a zillion help desk tickets into it and gain meaningful insights from them.

>>Companies want to dump all their Sharepoints and Wikis into it that currently nobody can even find or manage, and finally have functioning knowledge search.

Mature organizations already have solutions for all of these things. If you can't mine your own data competently, you've got bigger problems than not having AI doing it for you. It means you don't have humans who understand what's going on. AI is not the answer to everything.


So these "mature" orgs are using something better than OpenAI? Can you explain?


I wish I lived in the same universe as you


I wonder if corporations would train it on emails/Exchange as well, since they are often technically company property and could contain valuable information not found in tickets/wikis.


I think those are examples of prompting, not modeling. You'd use the API to develop an app where the end user's question gets prefaced with that stuff. Modeling is more like teaching it how to sensibly use language, which can be centralized instead of each enterprise having experts in that. It would be like having in-house English teachers instead of sending people to school, based on a desire to have a corporate accent -- interesting but probably not useful in most cases.
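The "preface the end user's question" approach described above can be sketched as plain message assembly. The role/content dict shape follows the common chat-completions convention; the actual network call to a vendor API is deliberately omitted:

```python
# Prompt-assembly sketch: the app prefaces the user's question with
# company data pulled from an index, then sends the messages to a
# chat-style API. Only the assembly step is shown here.

def make_messages(context_snippets: list[str], question: str) -> list[dict]:
    """Build a chat request that grounds the answer in company documents."""
    context = "\n\n".join(context_snippets)
    return [
        {"role": "system",
         "content": "Answer using only the provided company documents."},
        {"role": "user",
         "content": f"Documents:\n{context}\n\nQuestion: {question}"},
    ]

msgs = make_messages(
    ["Ticket #4821: printer offline, fixed by power-cycling the spooler."],
    "How was the printer outage resolved?",
)
print(msgs[1]["content"].startswith("Documents:"))  # True
```

Note that nothing here touches model weights: the "corporate knowledge" lives entirely in the prompt, which is the parent's point about prompting versus modeling.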


ChatGPT doesn’t work for this. There is a huge GIGO problem here: it’s missing the organizational knowledge to disambiguate. Unless you’ve pre-told it which Excel sheets are correct, this is DOA.

ChatGPT only works as well as it does because it’s been trained on a corpus of “internet accepted” answers. It can’t fucking reason about raw data. It’s a language model.



