Hacker News | printvoid's comments

Does it always need an API key to work with any model? Why can't you provide GPT-3.5, which is free, or any other model that works without an API key?


As @meiraleal pointed out, there is no free API.

And the only model I would recommend using is Sonnet 3.5. I would still recommend adding an OpenAI API key for embeddings, so code search and Google search work.


There is no free GPT-3.5 API officially.


What do you mean by small codebase? I hope not the usual todo repos or basic apps. Can this be run on a production codebase, like a Java application with dozens of microservices inside it?


Yes, this just depends on the model you're using. Small-to-medium codebases would fit inside Claude's 200K context window, and Gemini 1.5 has a 1M-token context window, which would fit essentially 99% of codebases.

For reference:

- The Flask web framework for Python: 131,880 tokens

- The Spring Framework for Java: 11,070,559 tokens


What is the advantage? The model is gpt-3.5-turbo anyway, not GPT-4, which costs money.


The model is in fact "gemini-pro" (and "gemini-pro-vision"). And in the case of OpenAI, "gpt-3.5-turbo" also costs money if you want to use it via the API.


I had a similar question in mind.


Can someone explain what those statistics mean in general, and what the optimal values should be? I know it can be application-specific, but assume I have a fairly medium-complexity app with 1 page and 2 tabs making around 10 API calls. What do these numbers tell me about my app's performance?


Nothing beats Notion when it comes to note taking, but maybe that's just my personal opinion. I haven't tried the new version of Bear, but the last version was no match for what Notion provides.


You need more than 15,000 signups just to build a counter? By that logic you are not going to get Reddit with all its functionality at 1M signups. lol


I'd say your logic is a bit flawed if you think that the logic for what to do when reaching 15K signups needs to be identical to the logic for what to do when reaching 1M signups. The reasoning for each goal can be totally different.

That said, personally I'm not going to use a site branded around the name spez.


I agree. This is absurd. A reddit clone is a few hours work. The technology is not the difficult part, and they haven't even done it?

Anyway, the idea as a whole is stupid, because the problem with Reddit is not its CEO but its entire design. It was an interesting idea, but it has failed. People are upset because third-party mobile apps will be gone, but the site as a whole was far better before the influx of phoneposters anyway! Phoneposting is fundamentally incompatible with text beyond a couple of sentences. Phoneposters want, and get, short, easily consumed memes, photos, and videos. They post short, easily consumed, low-value comments. Phoneposters have turned Reddit from "the front page of the internet" into a site dominated by screenshots of tweets.


Does Bard even provide an API for public consumption at this point?


Does it accept only PDF docs at the moment? What's the maximum size of a PDF file I can give it to analyze without breaking it?


Yeah, it's only PDFs for now. What other file types did you want to try? Max file size is 600 pages / 50 MB.


Cool.

Can it ingest multiple PDFs in the same 'context', or would I have to assemble it all into one (under the 50mb limit)?

What is it using to

Can we (provoke you to) set the model temperature in the conversation to either minimize the hallucinating or increase the 'conjecture/BS/marketing-claims' factor?


Right now it's just one pdf per bot but yeah you could hack it by merging the pdfs and then generating a new bot. Interesting suggestion - did you notice a particular hallucination? What kind of docs would be high vs. low temperature?


Yes. See this comment. [0] Another HNer with API access tried just ingesting the paper, without the context and extra instructions, with model temperature 0, and got better results.

I've also found in my area that it'll happily hallucinate stuff -- after all, it has zero actual understanding, it just predicts the most likely filler in the given context.

Tamping that down and just getting it to cut out the BS/overconfidence response patterns and reply with "I know X and I don't know Y" would be incredibly useful.

When we get back an "IDK", we can probe in a different way, but falsely thinking that we know something when we are actually still ignorant is worse than just knowing we've not yet got an answer.

[0] https://news.ycombinator.com/item?id=35392685
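For anyone unfamiliar with why temperature 0 tamps down the "creative" filler: temperature divides the model's logits before softmax, so low temperatures sharpen the distribution toward the single most likely token. A minimal toy sketch of that sampling step (the function name and logit values are illustrative, not any provider's actual API):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample an index from logits after temperature scaling.
    Low temperature -> near-deterministic (picks the argmax);
    temperature 0 is treated as pure argmax, as most APIs do."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1
```

At temperature 0 the "hallucinated but plausible" continuations never get sampled; at high temperatures their probability mass grows, which is where the BS/overconfidence patterns come from.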


Pick a thread and post your reply, then let GPT generate another reply on your behalf. The number of upvotes you get for both replies will give you a rough answer. But honestly speaking, HN should maybe ban content generated via GPT, in my opinion.


This is brilliant, but I'm not sure if you rushed it to launch. I am unable to upload any docs. I tried to add a URL for scraping; the import was successful, but it never appeared in my Library section. Connecting to GitHub gave me issues, and even when I did finally connect to GitHub, the import is never successful. I keep getting a 500 service error and then a bunch of console.log messages with `hi` as null and sometimes `hi` having an object value. Let me know if you need more info regarding these issues. But this is a great product, definitely.


Hi! I definitely did not expect this reception, so I've been running into quite a few issues (,:

Send me an email and I'll debug it for you https://libraria.dev/about

