
I dunno, I get a response back for 100k tokens regularly. What is the point you are trying to make?


With which model are you getting 100k-token responses? The models are limited and can't produce output that long (4k max). The point I'm trying to make is written three times in my previous messages: GPT-4 is too slow over the API to be useful.


As expected, you don't know anything about its API limits. The maximum output is 4096 tokens with any GPT-4 model. I'm getting tired of HN users bs'ing at every opportunity.


1. Your original wording, "getting a response _for_ n tokens", does not parse as "getting a response containing n tokens" to me.

2. Clearly, _you_ don't know the API: you can get output up to the total context length of any of the GPT-4 32k models. I've received output of up to 16k tokens from gpt-4-32k-0613 (see the sketch below).

3. I am currently violating my own principle of not correcting stupid people on the Internet, which is a Sisyphean task. At least make the best of what I am communicating to you here.
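
For reference, a minimal sketch of the kind of request I mean, against the pre-1.0 openai Python package. The API key, prompt, and exact max_tokens figure are placeholders, and the achievable output is ultimately bounded by the model's context window minus the prompt, not by a universal 4096 cap:

    import openai  # pre-1.0 interface (openai==0.27.x)

    openai.api_key = "sk-..."  # placeholder; use your own key

    # Ask a 32k-context snapshot for output well beyond the alleged 4096-token cap.
    resp = openai.ChatCompletion.create(
        model="gpt-4-32k-0613",  # limited-access 32k-context snapshot
        messages=[{
            "role": "user",
            "content": "Write a detailed, chapter-by-chapter summary of War and Peace.",  # illustrative prompt
        }],
        max_tokens=16000,  # bounded by context window minus prompt tokens
    )
    print(resp["choices"][0]["message"]["content"])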


You might want to see a specialist about your behavioral issues. Also, gpt-4-32k is not open to the public.


I've had access for many, many months now.


Skill issue.


You bullsh*tted with "I dunno, I get a response back for 100k tokens regularly", citing a model that doesn't even exist, and then you talk about a non-public 32k API. Stop lying. It's just the internet; you don't need to lie to people. Get a life.




