Hacker News | PokeyCat's comments

It's just the digits of pi, likely not their real SSN.


Very cool! I did a similar project with an accompanying app that connected to one of these inexpensive printers over Bluetooth in order to play a Discord, Lord of Disharmony commander deck in person.

The portability of the whole setup makes it easy to bring to an LGS, and it's always a treat to see people's reactions when I pull out the printer and they see how the deck works!


It's likely RAG / augmented with web data. I'd be interested to see whether local execution returns the same results.


It is. You can see a little "G" icon indicating that it searched the web with Google.


Interesting, mine has a different icon and used Bing.


It certainly doesn't; I'm running the 7B locally with Ollama.

It provided a lot more detail about the case, but doesn't have current information. It hallucinated the answer to the juror-count question, or, more likely, confused it with a different case: one of the E. Jean Carroll cases, or the SDNY Trump Org financial fraud case?


You: how many jurists have been selected in the Trump trial in New York?

Meta AI: A full jury of 12 people has been selected for former President Donald Trump's trial in New York City, in addition to one alternate ¹. The selection process will continue in order to select five more alternates, though it is hoped that the selection process will be finished tomorrow ². Once all alternates have been selected, opening statements can begin ¹.

--- https://www.nbcnews.com/politics/donald-trump/jury-selection... https://www.cbsnews.com/news/donald-trump-new-york-court-cri...


Yup, the Meta-hosted system is much more than Llama 3 alone. It seems to layer RAG, search, and/or tool usage on top.
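For anyone unfamiliar with the pattern: a minimal sketch of what a RAG layer does, in Python. The function names and the toy keyword-overlap scoring are purely illustrative, not Meta's actual pipeline (which presumably uses embeddings and a real search backend).

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant
# snippets first, then prepend them to the prompt the LLM actually sees.
# The scoring scheme here is a toy stand-in for a real vector search.

def score(query: str, doc: str) -> int:
    """Crude relevance: count doc words that also appear in the query."""
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt that gets sent to the model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Use these sources:\n{context}\n\nQuestion: {query}"

corpus = [
    "A full jury of 12 has been selected in the New York trial.",
    "The 64k demo scene compresses entire productions into 65536 bytes.",
    "Thermal printers can be driven over Bluetooth from a phone app.",
]
prompt = build_prompt("how many jurors were selected in the trial", corpus)
print(prompt)
```

The bare model only ever sees the assembled prompt, which is why the hosted version can cite fresh news while a local copy of the same weights cannot.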


Would love to experiment with this for work, but the following clause in the license (notably absent in the Llama 2 license) would make this really hard:

> i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name.

Really impressive HumanEval results for the 8B model, though. I'd love to plug this into Continue for tab completion, since the current benchmark numbers for Llama 3 8B blow every other 8B model out of the water.


This is about the mildest clause they could have included short of making the whole thing public domain. Heck, the MIT license has a similar requirement ("The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.")


deepseek-coder-instruct 6.7B still looks like it's better than Llama 3 8B on HumanEval [0], and deepseek-coder-instruct 33B is still within reach to run on a 32 GB MacBook M2 Max. Llama 3 70B, on the other hand, will be hard to run locally unless you really have 128 GB of RAM or more. But we'll see in the coming days how it performs in real life.

[0] https://github.com/deepseek-ai/deepseek-coder?tab=readme-ov-...


With quantized models you can run 70B models comfortably on 64 GB of RAM.


Probably pouet.net. I'm still amazed by so many of the 64k demos on there... incredible what you can do with that little space.

