guardiang's comments

Every human is different; don't generalize the interface. Dynamically customize it on the fly.


Exactly why expert steering should be valued.


Open source just caught up to GPT-4, which was released over a year ago. Don't you think all of the advances in hardware also play into their hands? They have GPT-5 in the pipeline and are likely hard at work planning and prepping for GPT-6. A year or two beyond the competition, at this point, is their moat.


> A year or two beyond the competition, at this point, is their moat.

That doesn't seem like much of a moat.

> They have GPT-5 in the pipeline and are likely hard at work planning and prepping for GPT-6.

I wouldn't be surprised if there's an element of diminishing returns here. The improvement from GPT-4 to GPT-5 is likely much less than from GPT-3 to GPT-4.


And they are going to run into source data problems. The internet is not producing that much more quality content, and now they have a problem with AI-generated clickbait.


I decided to use ChatGPT as my journal when they made it so they wouldn't be training on my data. Was talking to it all the time anyway, now I can pull the export whenever and just vectorize the whole thing... timestamps and all.
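
For anyone curious what that looks like in practice, here's a minimal sketch of the "vectorize the export" step. It assumes the ChatGPT data export's conversations.json layout (a list of conversations, each with a "mapping" of message nodes carrying create_time timestamps) and uses the sentence-transformers all-MiniLM-L6-v2 model as one possible embedder; both the file layout and the model choice are assumptions for illustration, not the only way to do it.

    # Minimal sketch: flatten a ChatGPT export into timestamped journal entries
    # and embed them. Assumes the conversations.json layout described above.
    import json
    from datetime import datetime, timezone

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    with open("conversations.json") as f:
        conversations = json.load(f)

    records = []
    for convo in conversations:
        for node in convo.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue
            parts = (msg.get("content") or {}).get("parts") or []
            text = " ".join(p for p in parts if isinstance(p, str)).strip()
            if not text:
                continue
            ts = msg.get("create_time")
            records.append({
                "text": text,
                "role": (msg.get("author") or {}).get("role", "unknown"),
                # keep the timestamp so each entry still reads like a dated journal line
                "timestamp": datetime.fromtimestamp(ts, tz=timezone.utc).isoformat() if ts else None,
            })

    # One embedding per journal entry, index-aligned with `records`.
    vectors = model.encode([r["text"] for r in records])
    print(f"Embedded {len(records)} messages; vector dimension {vectors.shape[-1]}")

From there the records and vectors can go into whatever vector store you like; keeping the per-message timestamp is what makes the result usable as a journal.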


You have to end your reliance on others to have control.


Like, die?


Fear is the mind killer. Fear is the little death that brings total obliteration.


Google is old, like MetLife, relative to their respective industries. Both are carrying too much baggage and are top-heavy. As a result, I personally don't think Google will be able to keep pace with OpenAI in the long run.


It's way past time that consumer-first became the modus operandi for tech. LLMs (GPT-4 quality or better) have the potential to enable that future. It'll take an avalanche of high-quality products coming out of small-footprint companies, each utilizing LLMs to shore up the limitations of its size, and every one of them having a consumer-first mindset from the get-go. It can be done.


Large Language Models are no joke, and if you think they are, then I suggest you learn how to use them properly.

Even if AI doesn't ever go "evil" itself, that doesn't mean people that don't like other people won't use it to more effectively kill those people. All that's really needed is a capable programmer willing to give the LLM the tools it needs, along with funds and access. The LLM could hire humans to do the work it can't do itself. These things are already capable of all that, and our world went mostly digital during the pandemic... they have easy egress because humans built it that way, even though that access was meant for other humans to use. If you don't think they are scary, in the wrong hands, then you are lacking in imagination.


> If you don't think they are scary, in the wrong hands, then you are lacking in imagination.

Imagination is a great resource to apply to writing a sci-fi novel. When it comes time to actually predict the future, imagination should be tempered with a solid understanding of the technology and a good understanding of history. Neither of those resources persuades me that AI poses "the greatest of dangers".

> Even if AI doesn't ever go "evil" itself, that doesn't mean people that don't like other people won't use it to more effectively kill those people.

This I can absolutely see happening, but I don't see LLMs being even in the top ten most dangerous technologies we've invented for more efficient killing. It's a big enough risk that people working on the tech should seriously consider the ethical implications, but not big enough that the rest of us need to lose sleep over it.


He likely can't without heavy paraphrasing and/or not providing full context for the quote. They've said stuff along the lines of "good luck trying, but we're gonna win so...". That's just the kind of confidence you want to see in the frontrunner (imo). They've also encouraged regulation, but it's a smart idea to be the one to frame the conversation.

