Hacker News: popeshoe's comments

It's still incredible that you can coerce a language model into producing this, but it's not a new game; I've had it on my phone for a few years: https://play.google.com/store/apps/details?id=com.rohitpailw...


And herein lies the issue with ChatGPT: it can generate functioning code, but it can also lie through its nonexistent teeth about it. Using ChatGPT (or Copilot) can feel like pair-programming with a very talented developer who loves to bullshit.


In this case I think I'd give ChatGPT the benefit of the doubt. It is possible to invent something that already exists, and it has happened on several occasions throughout history. A great example is the history of who really invented the telephone first. In the end Alexander Graham Bell got the patent, but perhaps Elisha Gray was actually first? Historians remain divided on the topic.

For instance, I once found what I thought was an ingeniously original idea about how TV is really just a kind of reflection of reality, akin to Plato's Cave. I immediately got started writing a thesis about it, but I didn't have to search for long before I found an entire book written on this way of thinking about television. I wasn't really disappointed, because in the back of my head I knew it was too good to be true that I'd be first with such a great idea. In any case I kept working on the thesis, and I still got a good grade on it despite the idea not being revolutionary.

The question I now wonder about is: can ChatGPT forget? Or could it be that ChatGPT was never exposed to this game, but could still infer it from other game rules, such as those for Sudoku? Which I guess opens up another rabbit hole on whether and how AI can be creative. Which I guess opens up another rabbit hole on how creativity works in general.


The funny thing is that it is neither lying nor inventing something new. What OpenAI did pretty well was collect data. And wouldn't you know it, the folks who developed that new puzzle describe it as what it is: a new kind of puzzle. So now in the training data you have a combination of puzzle, sudoku, and new/novel. And wouldn't you know it, by asking for a new puzzle based on sudoku, you make ChatGPT dig for that kind of text. If ChatGPT really had a novel idea, I would not expect it to be this coherent; after all, logic and coherence are not a constraint on how language models work, just which words are likely to occur next. That is why it is being compared to entry-level college writing, because that is how an excited student writes, hopping from topic to topic.


But how is it different from humans? I can't tell you how many times now I've come up with what I thought was a really cool idea, but upon web searching found it was already invented/discovered etc. In fact, before the Internet I had come up with my own algorithms, and only once the Internet existed did I find they had already been discovered years earlier. There's no way I was regurgitating something I had read in that case.


There’s a difference between coming up with a puzzle then finding out it already exists versus finding a puzzle and saying you came up with it.

If I told you “We need a brand new, never-before-seen puzzle for our next game release.” and you searched Google for “brand new, never-before-seen puzzle”, found a puzzle game with those words in its marketing copy, and pitched it to me, that would be some combination of unintelligent and dishonest behavior. Like, surprisingly so. It’s different from forgetting some puzzle you played with as a little kid and thinking you made it up, or creating a puzzle you’d never seen but that has been made before.


But ChatGPT is not a person, it is a text generator. By asking it to generate a new puzzle, you are prompting it to find text in its training data showing someone describing a new puzzle, and it is going to speak in their voice. It's going to emit sentences that were influenced by what the puzzle developer originally wrote, and that person correctly said that it was new.


I'm not entirely sure about this. ChatGPT would have to make a model for how such a game was made, and then infer its rules. From that perspective, it would be brand new, although very similar games would perhaps exist out there. And at that point it's also starting to look a lot more like human creativity, although I guess not entirely. As such the statistical or probabilistic approach, or the Chinese room approach, is getting less and less valid for the AI, because it's not doing simple probabilistic look-ups from some table. Instead it's actually developing something "new", or at least new with respect to the perspective of the AI and the data or source material available to it.


I agree with everything you’ve written here, so I’m not sure what the “But” that starts your comment is contrasting.

I was answering the question “But how is this different from a person?”. Being asked for something new and finding something that already exists with the word “new” in front of it isn’t normal human behavior. That’s how it’s different from a person.

Zooming out a bit, I think there’s some confusion in this whole chain. There’s a common topic about ChatGPT you could call Question of Creativity. If you ask for a new poem, it just smashes together its patterns around poems. You can debate if this is creativity, and if not, how are humans different. A few comments up, someone brought in a different idea you could call New Matching. If you ask for a new poem it will just grab you a poem that had the words “new poem” in front of it. New Matching is a different idea than Question of Creativity. The person I replied to seemed to be mistaking one idea for the other.


You're not prompting it to "find text". Comparing the size of the model to the size of the training data is sufficient to conclusively establish that it's an impossibility.

We train it to predict the next word based on the training data, that is true. But we still have no idea what kind of internal structures that training actually produces inside the neural net. It sure as hell isn't just a "stochastic parrot", though, which is rather obvious if you ever try giving it a complicated multi-step task to solve while "thinking out loud".


This. People who can ground themselves in what ChatGPT is (an autocompleting text predictor) are best able to understand the origins of its output.


It is different to what you do. If I tell you that this is already a thing, you might go back to the drawing board and do something from scratch. Maybe do some abstract drawing with numbers for brainstorming. A language model is not able to do this; the starting point for a language model is always the training data. That is why there are so many instances where you see some wrong (or correct) response from ChatGPT and, when the other person corrects this, the model just agrees with whatever the user says. That is the right thing to do according to language etiquette, but it has nothing to do with what is true and right. (It invokes the image of a sociopath manager trying to sell you a product: they will find a way to agree with you to close the deal.)

I don't know what introspection is, but I know it when I see it. People around me genuinely come up with new concepts (some of what they came up with decades ago is now ubiquitous) and the source is often not language. It comes from observing the world with your eyes, from physical or natural mechanisms. If you want to put it into the language of models: we just have so much more data to draw on. And we have a good feedback mechanism. If you invent a toy, you can build it and test it. Language models only get second-hand feedback from users. They cannot prototype stuff if the data isn't out there already.


>It is different to what you do. If I tell you that this is already a thing, you might go back to the drawing board, and do something from scratch.

Wouldn't your "something from scratch" idea be based on your "training set" (knowledge you've learned in your life) and ways of re-arranging it inside your brain, using neuron structures created, shaped, and reinforced in certain ways by exposure to said training set and various kinds of reinforcement?


A human brain's training data has orders of magnitude more complexity than text. Language models are amazing, but they can only do text, based on previously available text. We have higher-dimensional models, and we can relate to those from entirely different contexts. The same thing, to me, severely limits 'computer vision'. We get 3D interactive models to train our brains with; machine learning models are restricted to grids of pixels.


>A human brain's training data has orders of magnitude more complexity than text.

Still a training set though. There's no magic non-training part creating stuff from zero, out of pure determination!


There is never any 'magic'. Magic is just a word for things we don't understand. This is beside the point. Just like you'll never reach orbit with a cannon, it is useful to know the limits of the tools. There will never be an isolated language model trained on bodies of text capable of reasoning, and people shouldn't expect outputs of language models to be more than accidentally cogent word salads.


One implication though, is that LLMs can currently come up with novel mixes of existing ideas. It might be a good blender, integrating different pieces into a new whole.


Yes, but the language model does not have the feedback mechanism we have. We can test ideas against reality. Language models can make up all kinds of crap until there is data somewhere mentioning that it's not going to work. You could come up with an idea and workshop it, e.g., seeing if it's physically feasible to make something, before sharing it with others; language models cannot.


There are very few new ideas, but many different people have the same ideas.


> Or could it be that ChatGPT was never exposed to this game, but could still infer it from other game rules, such as those for Sudoku?

There is no way; this game type is centuries old. You can read giant Wikipedia articles about games like this.

https://en.wikipedia.org/wiki/Magic_square

ChatGPT "inventing" this is like thinking it invented chess.


From my understanding (anybody please correct me if I'm wrong), ChatGPT cannot really invent anything; it can just generate text based on probabilities obtained from the mountain of source documents used to train it. It does not think in the same way we do; it is just amazing at writing coherent phrases (and very simple code).

There's a quite long article from Stephen Wolfram about how it works, and this is why I believe it can't do that: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...


What does it mean to say that it can't invent anything? If, for example, I ask it to make a new poem with no line previously recorded in the English language, it will do so. If I google that poem to test its originality, I won't find a match. It seems to me it just made something novel, right?


When humans write new literature, or design new games, are we simply remixing elements of language and game mechanics that we've seen before, or is there something more going on?


Who else's experiences can we pull from but our own? It can't be anything else.


You may be splitting the wrong hair here.

However it generates a text, that text may describe what for practical purposes is a new invention.


>it does not think in the same way we do

And how do we think exactly? Don't we have a brain trained on input (lived experience, knowledge from books, school, videos, conversations, etc.) and generating text based on probabilities (weighted sets of neurons, with weights built from that input)?


This is not a magic square, though. All rows and columns explicitly do not add to the same number.


Yes, but the non-magic square is inspired by the magic square, and such games are everywhere. Just buy a random puzzle book and you'll find pages and pages of puzzles with "make the numbers add up to these columns and rows", because they are very easy to make.

The point about magic squares is that every culture invents games like that; it is one of the most basic puzzle ideas humans have. I don't see how ChatGPT could not have that in its training set.
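For what it's worth, the "make the numbers add up to these columns and rows" family is also trivial to check mechanically, which is part of why such puzzles are so easy to mass-produce. A minimal sketch of a checker for the "block out numbers so the remaining ones hit the sums" variant (the grid, kept cells, and target sums here are all made up for illustration):

```python
# Check a candidate solution to a "block out numbers to hit the sums"
# puzzle: the values in the kept (non-blocked) cells of each row and
# column must add up to the given targets.

def check(grid, keep, row_targets, col_targets):
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        if sum(grid[r][c] for c in range(cols) if (r, c) in keep) != row_targets[r]:
            return False
    for c in range(cols):
        if sum(grid[r][c] for r in range(rows) if (r, c) in keep) != col_targets[c]:
            return False
    return True

# Invented example: keep five of the nine cells.
grid = [[3, 5, 2],
        [4, 1, 6],
        [2, 7, 3]]
keep = {(0, 0), (0, 2), (1, 1), (2, 1), (2, 2)}
# Kept row sums: 3+2=5, 1, 7+3=10; kept column sums: 3, 1+7=8, 2+3=5.
print(check(grid, keep, [5, 1, 10], [3, 8, 5]))  # True
```

Generating a puzzle is just the reverse: fill a grid with random numbers, pick a random subset to keep, and publish the resulting sums, which is why puzzle books can print pages of these.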


For someone trying to show that a chat bot could not possibly have generated this specific game on its own because it already exists, you kind of have to show that it already exists.

All that you’ve done is shown that similar types of puzzles exist. Which, I mean, is kind of the point of a generative AI.

“Games like this” exist. Does this specific game exist?


Much more advanced versions of it exist, and have existed for a long time; Kakuro, for example, predates computer games. Magic sum is just a special case of it. Finding a discussion with those exact rules is probably a bit hard, since search engines aren't good at searching for that, but given how common these games are and how many game design discussions and ideas there are online, a game where you "block out these numbers to make these sums" is sure to exist somewhere. The poster above even found the exact same game, although that wasn't described in text, but someone probably described it in text somewhere.

https://en.wikipedia.org/wiki/Kakuro


Once again, to show that one thing is blatantly copying another thing, you kind of have to show that the other thing exists already. Kakuro is also a similar game with its own unique rules that only somewhat overlap with this one.

It’s not enough to say “a lot of games with similar rules exist” and if anything, that just shows that a generative AI is good at what it does: break down the rules of a game and make modifications to make what is potentially a new game.

If you can show an example of this exact game having existed for centuries, then you have a point. But showing that magic squares and similar games exist… just shows that magic squares and similar games exist, not that the algorithm incorrectly said this is a new game.


The discussion was probability of ChatGPT having invented it, the probability that description for such a game is in ChatGPT's dataset is extremely high. We have examples of that exact game existing (the top post of this thread), and we know from my links that there are countless texts about puzzles like this out there, although they aren't exactly the same.

> It’s not enough to say “a lot of games with similar rules exist” and if anything, that just shows that a generative AI is good at what it does: break down the rules of a game and make modifications to make what is potentially a new game.

No it doesn't, even if that is the case it just shows that it adds random variations. Since we only see the trimmed subset of ideas it generates that people found good enough to post, the smart one is the person.

You would need to prove that ChatGPT actually consistently generates working puzzle ideas that are novel to convince anyone that it actually does so. Extraordinary claims require extraordinary evidence, so all I need to do is find plausible explanations to how ChatGPT found it, you would need much better evidence to convince people it actually did make a novel game.


> The discussion was probability of ChatGPT having invented it, the probability that description for such a game is in ChatGPT's dataset is extremely high.

If this were the case, it would have been trivial for you to find a game with its written rules described and which match the one generated.

You have done nothing but say that is the case. You haven’t actually proven that’s the case.

ChatGPT can’t magically infer the rules of the game from screenshots, and you have only shown that similar games exist and have existed for centuries. But that is not the same as saying that this specific game has and that ChatGPT just pulled it out of its dataset.

That is the extraordinary claim that you don’t have evidence for but are acting like it’s right there obviously out in the open for everyone to see.


> If this were the case, it would have been trivial for you to find a game with its written rules described and which match the one generated.

Search engines don't work like that. You are basically asking me the equivalent of proving that a photo isn't depicting a ghost. No, I can't prove that; I can, however, come up with examples showing how the photo could have been created even if it wasn't a ghost.

If you want to prove that ghosts are real you need plenty of photos from lots of angles and situations, or videos, and from many sources, to show that it isn't all made up by a single person. The equivalent here would be if they had made ChatGPT generate 100 different working games, for example; that would be much more believable. But a single case of a game that already exists, with countless texts describing similar games? It just looks like handpicked random chance, or plagiarism.

This isn't a court trial, I am not going to sue ChatGPT for plagiarism here, it is just a discussion whether it is reasonable to believe ChatGPT can generate novel puzzle games.

Edit: But do note that since ChatGPT can find such ideas that are hard to find with a search engine, that makes ChatGPT very useful in a way search engines aren't. So I am not saying it doesn't add value. Just that people seem to say ChatGPT does a lot of things that it doesn't seem able to do.

Edit again:

> That is the extraordinary claim that you don’t have evidence for but are acting like it’s right there obviously out in the open for everyone to see.

Yes, you think it is obvious that ChatGPT is capable of very creative and productive thinking. But most people don't think that, to them that is an extraordinary claim. I'm not here to convince you, I'm here to explain to you why you aren't convincing anyone with what you say. People like you were convinced by articles like this before the discussion even began.


> Search engines don't work like that. You are basically asking me the equivalent of proving that a photo isn't depicting a ghost. No, I can't prove that; I can, however, come up with examples showing how the photo could have been created even if it wasn't a ghost.

The claim was that it pulled the game out of its dataset. If this were the case, I would argue it would absolutely be trivial to find them. It’s not some concept that can’t be described in words or would be hard to quantify. The rules have been provided, and, assuming they were plagiarized from somewhere else, would be listed verbatim or close to it.

If a student plagiarized on their work, whether in written form or in code, it’s been trivially easy to find the exact work that was copied from. It generally takes me a few seconds of searching to find it.

This is the same. If these rules existed in a dataset, then it should be equally easy to pull them up and prove the plagiarism. If all you can find is similar puzzles, you can’t just throw your hands up and say “yep, gottem”. That’s just not how this works.


> The claim was that it pulled the game out of its dataset. If this were the case, I would argue it would absolutely be trivial to find them. It’s not some concept that can’t be described in words or would be hard to quantify. The rules have been provided, and, assuming they were plagiarized from somewhere else, would be listed verbatim or close to it.

ChatGPT uses word vectors; it won't use the same words but variants of them. You can't search for that. Cases where word vectors map only to single words with no variation are very rare, so ChatGPT is very good at plagiarising things without reproducing them exactly; it only rarely fails at it.

> If a student plagiarized on their work, whether in written form or in code, it’s been trivially easy to find the exact work that was copied from. It generally takes me a few seconds of searching to find it.

No it isn't; they just change the words and rewrite it until it no longer looks the same. ChatGPT is trained to rewrite texts like that, which avoids triggering trivial plagiarism detectors. They train it to produce the same text but with different words; reproducing exactly the same text is punished.
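The word-vector point can be made concrete with a toy sketch: an exact string match misses a synonym swap entirely, while comparing word vectors still flags the two words as near-identical. The three-dimensional vectors below are invented for illustration; real models learn much larger ones from data.

```python
import math

# Toy word vectors, invented for illustration. Real embeddings are
# learned and have hundreds of dimensions, but the principle is the same:
# synonyms end up close together, unrelated words far apart.
vec = {
    "big":   [0.90, 0.10, 0.00],
    "large": [0.85, 0.15, 0.05],
    "cat":   [0.00, 0.90, 0.30],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Exact matching (what a naive search does) misses the paraphrase...
print("big" == "large")                           # False
# ...but the vectors are nearly parallel, so the swap is easy to spot.
print(round(cosine(vec["big"], vec["large"]), 2))
print(round(cosine(vec["big"], vec["cat"]), 2))
```

This is the gap being argued over: a keyword search engine works at the string level, while a model that rewrites text works at roughly the vector level, so synonym-swapped output is hard to trace back by searching.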


> No it isn't; they just change the words and rewrite it until it no longer looks the same. ChatGPT is trained to rewrite texts like that, which avoids triggering trivial plagiarism detectors. They train it to produce the same text but with different words; reproducing exactly the same text is punished.

Do you think students plagiarizing don’t do the exact same thing? Clearly someone has never actually dealt with plagiarized work. This is plagiarizing 101. The structure remains the same even if they use synonyms. Considering it’s trivially easy to find in code which is magnitudes harder to pull off, I would still argue it should be easy as pie to find this supposed set of rules.

Your point is not very credible without proof of this game existing and ChatGPT pulling it from this source. Without showing this supposed proto-game having existed with rules the ChatGPT can pull from, then all you’ve done is wave your hands around and yelled “similar games exist so this can’t possibly be uniquely generated” and that’s not a very compelling argument.


> Do you think students plagiarizing don’t do the exact same thing? Clearly someone has never actually dealt with plagiarized work. This is plagiarizing 101. The structure remains the same even if they use synonyms.

You rewrite the structure of the text, you don't just use synonyms. ChatGPT is capable of rewriting text to a different structure while keeping the meaning, I hope you are aware of that.

Anyway, even if you just change the words to synonyms, it won't be easy to find in a search engine. Search engines aren't very good at finding matches for synonyms. Google tries, but in doing so it fails to find more specific texts like scientific publications or documentation. So no, search engines aren't good at finding plagiarism.

Edit: And you make it sound like most plagiarism is found. No, that isn't the case; most plagiarism is not found out, because it is a very hard problem to solve. Only the most blatant cases are caught. For humans that is reasonable; for AI we can be stricter, since there isn't a human's career at stake.


> Anyway, even if you just change the words to synonyms, it won't be easy to find in a search engine.

Got it, so you’ve never actually dealt with plagiarized work. You should have just led with that.

I have literally said, from actual experience, that this is the case. But I guess discarding that and pretending it was never said and that the opposite is true is I’m sure an easier position to hold.


Do you believe you never missed any examples of plagiarised work? You caught some people doing X, and then you declare that catching people who do X is trivial. But plenty of people get away with doing X, so we know it isn't easy to catch.

For students they are probably easier to catch since they use the same tools you do, they use a search engine to find an article and plagiarises that. But ChatGPT takes deep discussions from reddit or stack overflow, I can't find those with a search engine.


If it’s as blatant as copying the entire game, you’d think it would be easier for you to find the game it copied. By your own account, this is an example of an obvious case of plagiarism. You were dead set on it, 100% sure.

Yet here we are. Dozen comments later and still no written set of rules produced which definitively shows that it was copied.

Come back when you actually have that and maybe we can continue this conversation.

> But ChatGPT takes deep discussions from reddit or stack overflow, I can't find those with a search engine.

Where do you think the answers come from? It’s not like Google has a massive index island around Reddit and SO.


I tend to exaggerate my claims a bit, yes. But you exaggerate your claims as well; for example, you claim that if it had copied the rules it would be easy to find an example, and that isn't true at all. Many examples of plagiarism go unnoticed for years, until someone who is familiar with the original work points it out. I know of a case where the person was found out at his thesis defence: he had plagiarised his entire PhD work from papers in another language, and nobody had noticed for years, not even all the peer reviewers of the papers.

So maybe these rules are described in Japanese? Most similar games come from Japan: Kakuro, Sudoku, etc. Would your plagiarism-detection method of Googling it find a Japanese source? I doubt it. But ChatGPT transcends language barriers; it can translate to English just fine.


Being briefly mentioned in the dataset would not really help it, because it doesn't "remember" the entirety of the dataset anyway. It would have to be something described repeatedly in the training inputs for ChatGPT to really remember the rules with this level of precision.


One very similar game I can think of is a game within a game: the Dungeons & Diagrams puzzle within Last Call BBS [0]. In that game you place or remove walls so that they add up to the numbers shown per row/column. That game has another layer of strategy built on top, as there are certain "dungeon patterns" you can observe that in theory guide you through completion. I myself didn't notice any patterns when I tried the game for the first time, and just relied on the numbers shown. (Guess that's why I've only played 3-4 levels.)

[0] https://steamuserimages-a.akamaihd.net/ugc/18583143573725211...


https://play.google.com/store/apps/details?id=com.rohitpailw...

This comment thread started with this link; it is exactly the same game.


Sure, and then the next comment said that ChatGPT could have separately invented the game, to which the comment I replied to said that's impossible because the type of game is old and surely would have been written down and included in its corpus, which it then claimed it invented. The rest of the context matters.

ChatGPT can't deduce the rules of the game using the screenshots. They would need to be written somewhere for them to come out of its dataset. And so far, nobody has shown a game with the rules in a format that ChatGPT could consume.

Why is it so hard to believe that a generative AI generated this game from similar ones which exist? That is literally the purpose of it, after all.


Hm, well in that case I may well be wrong. Thanks for the info!


ChatGPT should be able to cluster things and see where clusters could be, collect everything necessary for that theoretical cluster, and then a human could evaluate it.


Re forgetting: we should be careful not to anthropomorphize ChatGPT.

In principle, ChatGPT cannot forget. It is trained on data, and this training will stay as long as it doesn't get deleted or destroyed. In other words, in all cases where someone has made ChatGPT say something, it should be possible to repeat it. Perhaps in some case it will be effectively impossible, for some rare combination of prompt and random seed, so one could say ChatGPT forgot something. But this is not the same as a person forgetting something.

Or during the training something was not considered important, but this is not forgetting, this is ignoring.


> can also lie through its nonexistent teeth about it

Ironically, it seems to me that you are anthropomorphizing ChatGPT a bit too much here. It has no reason to lie, so I think it's more likely that it just doesn't know such a game exists. It probably came up with it independently, or doesn't have a strong memory of it. In some respects, it would be even more impressive if it were actually "lying through its teeth", because it would imply the AI had some kind of hidden agenda.


Similarly I don't think it makes sense to say it "knows" anything at all. I would be more comfortable saying Wolfram Alpha knows things than saying an LLM does, but I'm not comfortable with either.

I'm not sure I'm comfortable with "remembers" either. My gut says I want to say I'd be more comfortable with that word for a web cache, but due to my understanding of human memory as constructive maybe I should be more comfortable with that for an LLM than for any other software.

Happy to be convinced otherwise.


ChatGPT does indeed know nothing at all. Proving this is quite easy: it was trained on text generation and can generate paragraphs quite well, so if you ask it to tell you about Harry Potter's family tree it will do well.

However, it will fail immediately when you ask it to print an ASCII chart of Harry Potter's family tree, because it does not actually "know" anything, and it will make all sorts of odd connections.

The clearest observation I can make of ChatGPT's success is that the general public is quite ill-informed and easily impressed by theatrics, both lessons we've already learned from politics.


That seems less like a reasoning issue and more an issue of building up an ASCII chart in a single pass. I doubt most humans would be able to accomplish that.

To demonstrate this a bit, I asked for HP's family tree as JSON, suitable for use in a charting library, and this is what it came up with:

---

    {
      "nodes": [
        { "id": "Harry Potter" },
        { "id": "Ginny Weasley" },
        { "id": "James Potter" },
        { "id": "Lily Potter (née Evans)" },
        { "id": "Arthur Weasley" },
        { "id": "Molly Weasley (née Prewett)" },
        { "id": "Fleamont Potter" },
        { "id": "Euphemia Potter" },
        { "id": "Mr. Evans" },
        { "id": "Mrs. Evans" },
        { "id": "Vernon Dursley" },
        { "id": "Petunia Dursley (née Evans)" },
        { "id": "Dudley Dursley" },
        { "id": "James Sirius Potter" },
        { "id": "Albus Severus Potter" },
        { "id": "Lily Luna Potter" }
      ],
      "links": [
        { "source": "Harry Potter", "target": "James Potter" },
        { "source": "Harry Potter", "target": "Lily Potter (née Evans)" },
        { "source": "Ginny Weasley", "target": "Arthur Weasley" },
        { "source": "Ginny Weasley", "target": "Molly Weasley (née Prewett)" },
        { "source": "James Potter", "target": "Fleamont Potter" },
        { "source": "James Potter", "target": "Euphemia Potter" },
        { "source": "Lily Potter (née Evans)", "target": "Mr. Evans" },
        { "source": "Lily Potter (née Evans)", "target": "Mrs. Evans" },
        { "source": "Petunia Dursley (née Evans)", "target": "Mr. Evans" },
        { "source": "Petunia Dursley (née Evans)", "target": "Mrs. Evans" },
        { "source": "Vernon Dursley", "target": "Petunia Dursley (née Evans)" },
        { "source": "Dudley Dursley", "target": "Vernon Dursley" },
        { "source": "Dudley Dursley", "target": "Petunia Dursley (née Evans)" },
        { "source": "James Sirius Potter", "target": "Harry Potter" },
        { "source": "James Sirius Potter", "target": "Ginny Weasley" },
        { "source": "Albus Severus Potter", "target": "Harry Potter" },
        { "source": "Albus Severus Potter", "target": "Ginny Weasley" },
        { "source": "Lily Luna Potter", "target": "Harry Potter" },
        { "source": "Lily Luna Potter", "target": "Ginny Weasley" }
      ]
    }

---

Sorry about HN formatting, but you get the idea. This looks fairly accurate to me. What about this demonstrates less "reasoning" than turning it into an ASCII chart?
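Incidentally, JSON in that shape is directly machine-checkable, which is arguably a stronger test of structure than an ASCII chart would be. A small sketch that walks the links to recover a node's parents; the data here is abridged to a few nodes from the output above, and the child-to-parent direction of "links" is taken as ChatGPT produced it:

```python
import json

# A few nodes/links abridged from the family-tree JSON in the parent
# comment; "source" is the child, "target" the parent.
data = json.loads("""
{
  "nodes": [
    {"id": "Harry Potter"}, {"id": "James Potter"},
    {"id": "Lily Potter (née Evans)"}, {"id": "Albus Severus Potter"}
  ],
  "links": [
    {"source": "Harry Potter", "target": "James Potter"},
    {"source": "Harry Potter", "target": "Lily Potter (née Evans)"},
    {"source": "Albus Severus Potter", "target": "Harry Potter"}
  ]
}
""")

def parents(person):
    """All nodes this person links to, i.e. their parents in this graph."""
    return [link["target"] for link in data["links"] if link["source"] == person]

print(parents("Harry Potter"))  # ['James Potter', 'Lily Potter (née Evans)']
```

That the output parses and the relationships hold up is exactly the kind of structural consistency the ASCII-chart test was probing for, just in a format that survives a single left-to-right generation pass.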


I am confused as to how this would be "the issue" with ChatGPT. Being wrong and not being aware of it is not a unique concept. At least with ChatGPT it is fair to assume there is no hidden agenda and no need to worry about ill will. If anything, that makes it less of an issue compared to humans.


Ok, so maybe not the issue with ChatGPT, but with people's understanding of its limitations. It can generate text and code from instructions, but it's limited in its logical analysis of what it's "saying". In this case it was asked:

> And to the best of you knowledge this type of puzzle does not currently exist?

and it responded:

> As far as I am aware, this specific type of puzzle with the given rules and mechanics does not currently exist in the puzzle game genre. However, there may be similar games out there that share some similarities with this puzzle.

That response is not generated (as far as I am aware) by any form of logical analysis or understanding, it's just generated text based on its training and prompting. It was asked to come up with something "new", and will continue to claim that as it was part of its prompts.

So yes, this may not be a failing of ChatGPT, but of users' understanding of it. You cannot take what it states as "fact" as anything other than potential BS. But it is an incredible tool for generating text and code.

We are still early in its development though, who knows where it will be in 18 months time!


This is a very good comment. ChatGPT uses language so fluidly it's easy to interpret as there being more substance than there is.

Looking at the response the way you suggest, it's clear it's given a boilerplate answer that would seem likely given the context it has found itself in.


Exactly; if I were to butcher my English, I'd lose a certain amount of credibility, even if the listeners were aware of it. ChatGPT could probably explain the workings of a Retro Encabulator fluently and you'd nod a few times, thinking it's fine.

If words not right said for listen like now, think you might not be smart as is tho.


ChatGPT will reverse that, if you sound smart it is likely nonsense generated by an AI.


Is it actually wrong though? Will the rules of 'summer' be in its training data anywhere? AFAICT they aren't described on the Google Play page, although you can easily figure them out from the screenshots.


I feel like you can compensate with more complicated prompts. Or even different prompt categories (like negative prompts, but for programming it might be a list of constraints). Like this interface: https://github.com/AUTOMATIC1111/stable-diffusion-webui but for code


> At least with ChatGPT it is fair to assume there is no hidden agenda and no need to worry about ill will.

Is it? Even if it’s fair to assume that now, we have no idea if that will remain true or when the shift will happen.

The CEO of OpenAI is the same scammer who scanned eyeballs in return for a non-existing cryptocurrency[1] and the company itself is criticised all the time[2].

[1]: https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

[2]: https://techcrunch.com/2023/03/01/addressing-criticism-opena...


Yes, it is fair to assume that, and in cases like these it will continue to be for the foreseeable future. The AI does not stand to gain anything by lying about a simple puzzle game, and neither does the CEO. Even if the CEO somehow did, it would be a disproportionately colossal amount of effort to tamper with ChatGPT in this specific instance. And that's also assuming that the CEO himself has all the knowledge and tools needed to do all of it himself, which I doubt.


You keep mentioning “cases like this” as a qualifier. This case isn’t relevant, it’s an inconsequential puzzle game. “This specific instance” is not the point.

There is no reason to assume “the CEO himself” would personally do it. History is full of bad CEOs making harmful decisions and they definitely don’t need to (and often wouldn’t even be able to) do it on their own. Sam (presumably) isn’t out there personally scamming more people for their retina scans, but someone is: https://news.ycombinator.com/item?id=34981352


With humans we can demand that people cite their sources. If they fail to do this, they run the risk of being accused of plagiarism. ChatGPT, on the other hand, plagiarizes all day long and never cites sources. That is why it's an issue.

And as for whether ChatGPT has an agenda or not, that is beside the point. People can and do use it as a tool for plagiarism while trying to hide behind a layer of plausible deniability provided by the "black box" of the model. This cannot be allowed to continue. This is why we need to push back, just as the GP is doing.


We can help it look for and use sources.

I've had it generate search terms that could be used to verify "facts" in its answer. Then I'd give it the page results and have it adjust and source its answer using that.

Have not tried it yet, but perhaps Bing's implementation is a step in that direction?


I mean, sure, you can demand it. And people are just going to make up sources. It’s not like they have a gun held up to their head to ensure that demand is followed.

> People can and do use it as a tool for plagiarism while trying to hide behind a layer of plausible deniability provided by the "black box" of the model. This cannot be allowed to continue. This is why we need to push back, just as the GP is doing.

This is absolutely preposterous. People are going to lie and plagiarize whether they have a chat bot do it for them or not. The existence of a chat bot isn’t going to be the make or break in this equation and if anything, the people using it for that purpose should be rightfully vilified rather than the tool.


> People are going to lie and plagiarize whether they have a chat bot do it for them or not.

The difference is, with a chatbot it might not even be a conscious act, the chatbot is doing it for you and you're not aware that it's happening.


> And people are just going to make up sources. It’s not like they have a gun held up to their head to ensure that demand is followed.

The consequences actually are quite serious. A person falsifies work product once in an academic or professional setting and their career is severely impacted. This is why people are "surprised" to encounter such a BS generator operating under the trademark of a reputable company.


It’s not the tool that’s at fault in that case, it’s the person doing that falsification. The person would have faked their sources and made shit up without ChatGPT there.

It’s almost as if you ignored everything I said, cherry-picked a random part, then went on a tangent about a different part of my comment. All without actually comprehending what the things you replied to said.


No hidden agenda? It has an agenda and it is not honest about it. That's a hidden agenda. You don't know what is "motivating" ChatGPT. Neither does ChatGPT. But it has been given motivation. It has been designed to write in a certain way. Its design prevents it from learning or honestly engaging in serious discussions. It's not any sort of unbiased equation.

More dangerous than ChatGPT is the sheer gullibility of many people putting it to use.


Its agenda is predicting the next input token.


Yes, and that agenda has severe consequences, like “confabulates constantly”. Just because it’s simple to state doesn’t mean the consequences are simple or innocent.


When a meteor strikes, and causes a mass extinction, is it "guilty" or just "bad"?


How many stockholders does a meteor have?


There is an issue with how people are personifying ChatGPT and assigning it agency.

Some want to talk of these LLMs as approximating an intelligent actor. If that's the case, then we also need to assign metaphors for things like deceit and coercion. We also need to consider assignments of novelty to what's generated and think of their rights as quasi-sentient, etc.

Some want to talk about them as probabilistic text token generators, which brings the benefit of not being intelligent or independent actors at all really but also then comes with the issue of intellectual property theft in training them on information not licenced for reproduction or commercial use.

The industry prefers to thread the needle between these as the former case brings some pretty wild conversations and the latter may mean lawsuits.


“An” issue with it is that we may come to rely on these AI’s outputs as assumed correctness or truth. If we have to double check everything they produce then that’s not great either.


It looks like it sometimes, even though that may not be the case. I've had times when I've corrected ChatGPT, and yes, it knows that what it told me was wrong. It then goes on to tell me more along the lines of what it seemingly already knew what was right.

This obviously isn't the intention of the software, it's just an LLM after all, but there's something missing in the experience when it comes to working with code. Hopefully this sort of issue can be corrected.


I wonder if this could partially be a result of training on code found in question/answer environments like Stack Overflow. It sees "How do I do X, here's what I've tried" with broken code and then an answer "This is incorrect because Y, here is the correct answer" with the correct code.

Intuitively it makes sense to me that broken code would often be very close to questions about how to achieve something in code.


One of "the issues" is that you are led to believe that since there is no agenda and this is AI, its result must be true and you don't need to double-check whether they are. And of course, since it did invent a name for the game (or a new function name, or [insert your example here]), it's even harder to google to cross-check if it's actually new or if it's essentially telling you bullshit or inciting to plagiarism.


Bullshit is far more insidious than a lie, for a lie is wrong and will come to light, but bullshit is uncorrelated with truth and may even be coincident with it. Thus bullshit can go unnoticed far longer.


...and plagiarize like crazy, while lying about it. :)


Could it be plausible that ChatGPT processed some text describing this exact game, where it is claimed that the idea is novel (because it was, at that time)? Since ChatGPT does not understand the concept of novelty, it would simply "learn" that the rules are novel, and then repeat the rules, still claiming that they're novel. After all, that's the information it was trained with :)


...maybe this is the key to success and ChatGPT is here to show us the way! :)

Ride those coattails and take other people's ideas as your own!


Is this actually in the training data though? I couldn't find a textual description of the rules though Google, so I'm not convinced.


Is it plagiarizing if ChatGPT knows the rules of chess, analyzes every single chess strategy, then creates a similar game?


ChatGPT does not know chess rules, nor can it “analyze” a chess strategy. ChatGPT has digested the conversations of many many people who have talked about chess, and can reproduce a transcript that sounds similar to this corpus of conversations.

But it is not synthesizing an understanding of the game of chess.


Except that there is demonstrable evidence that indicates GPT actually does have some level of understanding via internalized world models (of Othello in this case, not chess, but the idea is the same): https://thegradient.pub/othello/


In my experience using copilot for generating code is usually a lot less weird because it has more context; instead of using made up function names and APIs it can see what’s been defined in other files. But I primarily find copilot helpful for instances when I need a bunch of almost identical code but with tiny changes (which could mean I’m coding wrong)


"Very talented developer"? Sorry, I don't think googling my prompt and replying with the top stackoverflow answer (or a mashup of the top answers) counts as a talented developer.

Anecdotal, but I've not yet had any success in producing any non-trivial code with ChatGPT. It has, however, produced copious amounts of bullshit with plausible variable names... :)


It is a dilettante, it has not reached the level of "talented" in anything. It knows many things about many things and nothing in depth. Test it on your specialisation, you will see it make absurd mistakes and hallucinations. Try it on a domain you know less about - it looks perfect.


It depends on the language. I assume they trained it more on the most popular ones

It's pretty good in JS, it sucks in Rust


Yes, just a couple of days ago I asked it to write a PyTorch Lightning module for me, and it looked great at first sight. But it mixed up the dimensions and made other hard-to-see bugs. It was frustrating to fix, almost the same effort as writing the damn thing manually.


Bro in this case the human is generating made up garbage.

That game is NOT the same game. It's similar but the games are different.


A while ago another poster thought ChatGPT invented good jokes.[1] All of them were ripoffs, which took less effort to verify than it takes to make a new post.

I get people are excited about a chatbot which doesn’t suck, but ideally it wouldn’t turn off critical thinking skills.

[1]: https://news.ycombinator.com/item?id=34744921


Nice find!

Seems to be similar to a game called Kakuro. This [1] repo even contains a similar rule:

> The algorithm exceed the rules that the sum over a row must equal to the value on the left and the sum over a column must be equal to the value on the bottom of the cells with the diagonal and one or two numbers

[1]: https://github.com/MarioBonse/KakuroSolverCSP

[2]: https://github.com/topics/kakuro
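The constraint the two games share — the numbers that count in each row and column must hit a target sum — is simple to state in code. A rough sketch of a Sumplete-style checker (names and structure are my own, not taken from either game's implementation): given a grid, a same-shaped keep/delete mask, and per-row/per-column targets, verify the kept cells sum correctly.

```python
def check_solution(grid, keep, row_targets, col_targets):
    """Return True if the kept cells of `grid` hit every row and column target.

    grid: square list of rows of ints
    keep: same shape, True = cell kept, False = deleted
    """
    n = len(grid)
    rows_ok = all(
        sum(grid[r][c] for c in range(n) if keep[r][c]) == row_targets[r]
        for r in range(n)
    )
    cols_ok = all(
        sum(grid[r][c] for r in range(n) if keep[r][c]) == col_targets[c]
        for c in range(n)
    )
    return rows_ok and cols_ok

grid = [[3, 5, 2],
        [4, 1, 6],
        [2, 2, 7]]
keep = [[True, False, True],
        [False, True, True],
        [True, False, False]]
# Kept rows sum to 5, 7, 2; kept columns sum to 5, 1, 8.
print(check_solution(grid, keep, [5, 7, 2], [5, 1, 8]))  # -> True
```

Kakuro enters the targets by addition, Sumplete reaches them by deletion, but the validity check is the same arithmetic either way.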


That's the first thing ChatGPT said at the start of the whole process. It created a game like Kakuro.


"If you can think about it somebody already did it and it's on the internet."

Loose quote from I don't remember who, early 90s.


You're probably thinking of (one variation of) Rule 34:

  "Rule 34: If you can imagine it, it exists as Internet porn."
https://en.wikipedia.org/wiki/Rule_34#Variations


Google Trends shows a small number of searches for "sumplete" going back to 2004 [1]. Not sure how to find what the results might have been, though.

[1] https://trends.google.com/trends/explore?q=Sumplete (search "Worldwide" and extend the time range)


Sumplete is Spanish for substitute


“Sumplete”[1] isn’t a word in either Spanish or Portuguese. You’re thinking of “suplente”[2] (which does mean “substitute” in both).

[1]: https://www.linguee.com/english-spanish/search?query=sumplet...

[2]: https://www.linguee.com/english-spanish/search?query=suplent...


Hah, nice!


But where would GPT have sourced information about how the game works from? That page only has screenshots, I suppose maybe there's a subreddit or something for it as well. Even if there's a bunch of info on it it's still incredibly impressive for it to parse those game rules and turn it into workable code.

Would be nice if GPT could dump the source of how it came to such a solution, if it generated the game by random chance via combining various unrelated chunks of text and mixing up the rules, or if it used some text describing the game you linked.


What are the rules for the phone game you link to? I can't see them on the google store page.


Great find. I would be amazed if a language model like ChatGPT could come up with a novel idea.


Except this game is different. It's similar but different.

Your game involves addition. ChatGPT's is using subtraction.


Sometimes if you catch a new video soon after it's been uploaded you can see it's in a state before the recommendations have been generated, the few times I've seen it happen the recommendations were entirely kids videos with vibrant thumbnails and strange names.

I have no idea how YouTube gets its recommendations when this happens, but if it's some sort of randomish fallback list, I can easily believe that cheap, mass-produced kids content is a huge fraction of YouTube uploads/views.


I was going to post the same thing, I bet it looks great on a retina mac screen, but on my windows machine with a monitor too big for 1080p, something about it feels off.

Otherwise I think it looks great though, nice job!


If you're on HN you're probably also conscious of your philanthropy being used for the maximal good, in which case you may be interested in projects like https://www.effectivealtruism.org/


Thanks for posting


A Prague citizen came to a local police station in fall 1968. At the desk he claimed, "Officer, a Swiss soldier stole my Russian watch." The officer looked puzzled and responded, "I guess you mean that a Russian soldier stole your Swiss watch." The man replied, "It might be so, but remember that you said that. Not me."


Ha ha!


Deus Ex got me at the perfect time in my life (15 y/o pseudo-intellectual nerd) so I really loved the game, so reading this was great. Such was my nerd infatuation with this game, that as I was reading about the various locations, the corresponding music would pop into my brain almost immediately, a testament to the great soundtrack (eg. https://www.youtube.com/watch?v=9FZ-12a3dTI )

Especially interesting is seeing how the times and what we expect from videogames have changed, on page 23 there's a dejected nod to people expecting a 90s first person game to have multiplayer, even though it's primarily a single player RPG, but they'll do it as a bullet point to go on the back of the box.


Definitely agree about the soundtrack. I find myself listening to it while working quite a bit.


I'm kind of astounded that modern planes haven't had this feature for a long time


It may be coming to larger military aircraft next.[1] Recovery strategies for fighters can be drastic. The F-16 auto-GCAS commands a roll rate up to 720°/sec, followed by a 5G pull-up. Only fighters and some aerobatic aircraft are capable of such aggressive maneuvers.

On the other hand, fighters are expected to fly fast and aggressively close to terrain. The goals of the F-16 auto-GCAS are

1. Do No Harm (don't initiate a maneuver that causes a crash)

2. Do Not Interfere (the pilot may be in an aggressive combat maneuver)

3. Avoid Ground Collisions

The conflict between 2) and 3) is tough. The rule of the F-16 system is not to interfere until a crash is less than 1.5 seconds away. This was established by flight testing with fighter pilots flying aggressive profiles that might be used in combat.

Larger aircraft are seldom flown that aggressively. Nor do they have the power and maneuverability to get out of trouble in 1.5 seconds. Today's GPWS and EGPWS systems provide up to 60 seconds of time from the warning to airplane impact. The FAA says "The GPWS mandate reduced CFIT (controlled flight into terrain) accidents from about 9 per year in the seven years immediately preceding the mandate to about 4 per year after. This rate has remained fairly constant". So there's room for improvement through automated recovery that isn't last-second.

[1] http://www.dtic.mil/dtic/tr/fulltext/u2/a618503.pdf
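The "do not interfere until ~1.5 seconds from impact" rule is, at its core, a time-to-impact comparison. A deliberately oversimplified sketch (the real system projects full 3D trajectories against a terrain database; the names, the straight-line descent model, and the numbers in the example are illustrative only):

```python
def time_to_impact(height_above_terrain_m: float, descent_rate_ms: float) -> float:
    """Seconds until ground impact for a straight-line descent; inf if not descending."""
    if descent_rate_ms <= 0:          # level flight or climbing: no impact projected
        return float("inf")
    return height_above_terrain_m / descent_rate_ms

def should_recover(height_m: float, descent_rate_ms: float,
                   threshold_s: float = 1.5) -> bool:
    """Fighter-style rule: only take control when impact is imminent."""
    return time_to_impact(height_m, descent_rate_ms) < threshold_s

print(should_recover(300.0, 100.0))  # 3.0 s to impact -> False, don't interfere
print(should_recover(120.0, 100.0))  # 1.2 s to impact -> True, auto-recover
```

The same skeleton with a ~60-second threshold (and a gentler recovery maneuver) is roughly the transport-category trade-off the comment describes.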


Look at all the benefits that come with FADEC (https://en.wikipedia.org/wiki/FADEC), and then ask me how widespread it is.

The aviation industry is veeeerry conservative, not to mention hesitant to retrofit perfectly functional airplanes.


It requires the aircraft to be aware of terrain around it. Most aircraft do not have topographical maps in their computers.


Not true, 95% of commercial aircraft have a TAWS system [0] that can show relative terrain on their navigation displays and provides FLTA alerts (forward looking terrain avoidance) by using an onboard terrain database. This is an FAA requirement for any aircraft with 6 or more passengers.

[0] https://en.wikipedia.org/wiki/Terrain_awareness_and_warning_...


I believe it works using a radar altimeter.


The terrain following radar feature that has been in fighter jets since the late 60s uses radar -- as the name implies, and serves a similar purpose. It allows the pilot to set an altitude and a "comfort" level regarding how aggressive the autopilot can be with regards to avoiding danger (basically, how quickly the plane can pull Gs to avoid terrain, and how many Gs it can pull).

That works well when you're straight and level, attentive, and the radar can point at the ground. This system can't rely on radar exclusively though because the aircraft may not have its radar pointing at the ground (as in the video, the aircraft is inverted in a pretty steep dive).

So, they have to factor in precision INS/GPS and known topology to assess terrain altitude in order to perform collision avoidance.


The F-16 system doesn't rely on radar, although it can use it, because fighters often fly with radar off. It tells the enemy you're coming. It's based on INS/GPS and a terrain database obtained from radar scans of the Earth made from the Space Shuttle in the 1990s.


Dumb question, maybe, but how long before that data becomes inaccurate? Or rather, are there any areas where the change in elevation for the purpose of this system could be big enough in a 30ish year timescale that it would cause problems?

I assume no geological process alters the land drastically enough, quickly enough, that you'd notice, but what about water-level changes (dammed rivers?), melting glaciers, etc? Is "hard" ground consistent enough that no human processes are going to cause the data to diverge from the database drastically without the chance to update the database with new topographical surveys?


Right, the Auto-GCAS feature doesn't rely on radar -- but the normal TFR system does, so the OP was half-correct in that there is a system that can use the radar to do ground collision avoidance... just not this particular system.


Not a woman, but Garry of Garrysmod fame has a very interesting set of blog posts about dealing with and eventually befriending his internet stalker in an attempt to curb the guy's worst behavior.

http://garry.tv/2015/11/10/stalkers-and-abuse-part-1/


As a big PC gamer I've basically abandoned mobile gaming as it feels like everything out there is trying to get my money by means ranging from insidious to annoying. This is compounded by the fact that browsing the google play games store is a showcase of shamelessness where different developers are releasing essentially identical clones of whatever's popular, it's kind of depressing.

The only mobile games I can bring myself to recommend are Andoku Sudoku, The little crane that could, and crossy road (which even includes monetisation schemes without feeling intensely annoying).

Suggestions for others would be appreciated


Here's a list of good mobile games off the top of my head. None of these require in-app purchases if they even have them:

You Must Build a Boat, The Executive, Out There, Monument Valley, Hitman GO, Lara Croft GO, DEVICE 6, The Room (and sequels), Prune, Lifeline (this one's great because it would only work in mobile), Threes!

If you're looking for good mobile recommendations, I'd point you at http://toucharcade.com. They do iOS reviews, but a good chunk of the games also release for Android these days.


I'm going to add Twenty to this list. Small amount of money to unlock all the modes, but the free mode is amazing anyway.


I know it seems hopeless but there really are some great mobile games out there which do not try to trick you or screw you around to get your money.

If you like puzzle games (I assume you do, since you mention Andoku Sudoku), you should check out games by Pyrosphere[0]. Especially look at Lazors[1] and The Weaver[2]. Also fun to play is Hoplite[3] and the training mode in Lichess[4]. All of these games are free, though Hoplite lets you pay to unlock an extended endgame, it isn't necessary at all.

[0] http://pyrosphere.net/

[1] https://play.google.com/store/apps/details?id=net.pyrosphere...

[2] https://play.google.com/store/apps/details?id=net.pyrosphere...

[3] https://play.google.com/store/apps/details?id=com.magmafortr...

[4] https://play.google.com/store/apps/details?id=org.lichess.mo...


Speaking of clones of whatever's popular, which came first, The Weaver or Strata[1]?

[1] https://play.google.com/store/apps/details?id=com.graveck.st...


I picked up a PS Vita and it's fantastic. So far through various sales I've picked up 35 physical and digital games, so, quite the backlog to last a long while. I don't play any games on my phone anymore. I heartily recommend Persona 4 Golden, Superbeat XONIC, Sparkle Unleashed, OlliOlli, Lumines Electric Symphony, and Mortal Kombat (basically the PS3 version with scaled-down graphics).


Spaceteam!


This is clearly a play by the plastic surgery lobby to get people to have work done so they look less like known criminals.

