Hacker News

I know a few people who are privately unhappy that OpenAI API compatibility is becoming a community standard. Apart from some awkwardness around data.choices.text.response and similar unnecessarily defensive nesting in the schema, I don't really have complaints.

I wonder what pain points people have around the API becoming a standard, and whether anyone has taken a crack at an alternative standard that people should consider.



I want it to be documented.

I'm fine with it emerging as a community standard if there's a REALLY robust specification for what the community considers to be "OpenAI API compatible".

Crucially, that standard needs to stay stable even if OpenAI have released a brand new feature this morning.

So I want the following:

- A very solid API specification, including error conditions

- A test suite that can be used to check that new implementations conform to that specification

- A name. I want to know what it means when software claims to be "compatible with OpenAI-API-Spec v3" (for example)

Right now telling me something is "OpenAI API compatible" really isn't enough information. Which bits of that API? Which particular date-in-time was it created to match?
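To make that concrete, a conformance test suite could start with simple shape checks on captured responses. Here's a minimal sketch for one chat-completion response; the required fields listed are my assumption of a baseline, not an agreed spec:

```python
def check_chat_completion_shape(resp):
    """Collect conformance errors for one chat-completion response dict.
    The required fields below are an assumed baseline, not an agreed spec."""
    errors = []
    for field in ("id", "object", "created", "model", "choices"):
        if field not in resp:
            errors.append(f"missing top-level field: {field}")
    for i, choice in enumerate(resp.get("choices", [])):
        if "message" not in choice:
            errors.append(f"choices[{i}] missing 'message'")
    return errors

sample = {"id": "x", "object": "chat.completion", "created": 0,
          "model": "m", "choices": [{"message": {"role": "assistant", "content": "hi"}}]}
print(check_chat_completion_shape(sample))  # -> []
```

A real suite would do this per endpoint and per error condition, pinned to a named spec version.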


It's a JSON API... JSON APIs tend to be more... 'flexible'.

To consume them, just assume that every field is optional and extra fields might appear at any time.
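In Python, that defensive posture looks something like this (the payload shape here is illustrative):

```python
# Hypothetical response payload; extra/unknown fields may appear at any time.
payload = {"choices": [{"message": {"content": "hi"}}], "surprise_field": 42}

def extract_text(resp):
    # Treat every field as optional: chained .get() calls never raise,
    # and fields we don't know about are simply ignored.
    choices = resp.get("choices") or []
    first = choices[0] if choices else {}
    return (first.get("message") or {}).get("content")

print(extract_text(payload))  # -> hi
print(extract_text({}))       # -> None
```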


and disappear at any time... I was a leetle bit unsettled by the sudden deprecation of "functions" for "tools" with only minor apparent benefit


Hey swyx — I work at OpenAI on our API. Sorry the change was surprising, we definitely didn't do a great job communicating it.

To confirm, the `functions` parameter will continue to be supported.

We renamed `functions` to `tools` to better align with the naming across our products (Assistants, ChatGPT), where we support other tools like `code_interpreter` and `retrieval` in addition to `function`s.
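For anyone comparing the two, the rename roughly looks like this in a request body (the weather schema is made up for illustration; the `tools` wrapper follows the documented shape):

```python
# An example function definition; the schema itself is illustrative.
weather_fn = {
    "name": "getWeather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Old style: bare function definitions (still supported).
legacy_fragment = {"functions": [weather_fn]}

# New style: each tool is tagged with a type, so non-function tools can coexist.
tools_fragment = {"tools": [{"type": "function", "function": weather_fn}]}
```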

If you have any other feedback for us, please feel free to email me at [email protected]. Thanks!


Might be a good idea to have API versions for this... Then when someone builds a product against "version 1", they can be sure that new features might be added to version 1, but no fields will be removed/renamed without OpenAI releasing version 2.


and what does `auto` even mean?


Hey nl — I work at OpenAI on our API. Do you mean `tool_choice="auto"`? If so, it means the model gets to pick which tool to call. The other options are:

- `tool_choice={type: "function", function: {name: "getWeather"}}`, where the developer can force a specific tool to be called.

- `tool_choice="none"`, where the developer can force the model to address the user, rather than call a tool.
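A sketch of how an OpenAI-compatible server might dispatch on those values (the helper and its logic are my illustration, not OpenAI's implementation):

```python
def resolve_tool_choice(tool_choice, available_tools):
    """Return the tool names the model is allowed to call (illustrative logic)."""
    if tool_choice == "none":
        return []                         # model must answer the user directly
    if tool_choice == "auto":
        return list(available_tools)      # model may pick any tool, or none
    if isinstance(tool_choice, dict) and tool_choice.get("type") == "function":
        return [tool_choice["function"]["name"]]  # force this specific tool
    raise ValueError(f"unsupported tool_choice: {tool_choice!r}")

print(resolve_tool_choice("auto", ["getWeather", "search"]))  # -> ['getWeather', 'search']
print(resolve_tool_choice("none", ["getWeather"]))            # -> []
```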

If you have any other feedback, please feel free to email me at [email protected]. Thanks!


Amen! The lack of decent errors from OpenAI is the most annoying. They'll silently return 400 with no explanation. Let's hope that doesn't catch on.

OpenAI compatible just seems to mean 'you can format your prompt like the `messages` array'.
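For reference, that `messages` array is just a list of role/content pairs (the model name is an example):

```python
# The minimal request shape that "OpenAI API compatible" servers tend to accept.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
request_body = {"model": "gpt-4", "messages": messages}
```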


Hi te_chris — I work at OpenAI and am currently working to improve our error messages. Would you be willing to share more about what errors you find annoying? My email is [email protected] (or feel free to reply here). Thanks!


TBH, we debated this a lot before adding it. It's weird being beholden to someone else's API, which can dictate what features we should (or shouldn't) be adding to our own project. If we add something cool/new/different to Ollama, will people even be able to use it, since there isn't an equivalent thing in the OpenAI API?


That's more of a marketing problem than a technical problem. If there is indeed a novel use case with a good demo example that's not present in OpenAI's API, then people will use it. And if it's really novel, OpenAI will copy it into their API and thus the problem is no longer an issue.

The power of open source!


You're right that it's a marketing problem, but it's also a technical problem. If tooling/projects are built around the compat layer it makes it really difficult to consume those features without having to rewrite a lot of stuff. It also places a cognitive burden on developers to know which API to use. That might not sound like a lot, but one of the guiding principles around the project (and a big part of its success) is to keep the user experience as simple as possible.


At some point (probably in the relatively near future), will there be an AI Consortium (AIC) to decide what enters the common API?


That's why it's good as an option to minimize friction and reduce lock-in to OpenAI's moat.


I would take an imperfect standard over no standard any day!


There is a difference between a standard and a monopoly, though.


It's so trivially easy to create your own web server in your language of choice that calls directly into llama.cpp through its bindings that it doesn't really matter all that much. If you want more control, you can get it with just a little more work. You don't really need these plug-and-play things.
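As a proof of concept, here's a minimal stdlib-only server with the model call stubbed out; in real use you'd replace `generate` with a call into your llama.cpp bindings of choice (e.g. llama-cpp-python), and the endpoint path is just an example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt):
    # Stub: swap this for a call into llama.cpp bindings,
    # e.g. llama-cpp-python's Llama(model_path=...)(prompt).
    return f"echo: {prompt}"

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        out = json.dumps({"text": generate(body.get("prompt", ""))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Demo: bind to a free port, serve in the background, make one request.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/v1/completions",
    data=json.dumps({"prompt": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
reply = json.loads(urllib.request.urlopen(req).read())
server.shutdown()
print(reply)  # -> {'text': 'echo: hello'}
```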



