
I went through a minor existential crisis this morning playing with it, but then I figured it's good at simple queries but still dumb as rocks. It has the same intelligence as a mirror, perfectly mimicking the ideas of someone else.

Sure it can write a fibonacci function in Javascript, but so can I and I can write software I wasn't preprogrammed to do, and solve issues I have never encountered before because my human brain is actually intelligent, not just a glorified Markov chain. I am much more expensive than an AI, but also incredibly more versatile.

We will be fine for a long while, but enjoy the spam and malicious usage that this will unleash upon the internet. Fake resumes, fake job ads, fake code submitted by fake programmers, fake content filling social media, fake articles posted on HN, fake commenters, fake articles on Wikipedia, fake journalism on major newspapers.

It's gonna be a fucking shit show, and I honestly want a first row seat to see it unfold in all its glory.



> Sure it can write a fibonacci function in Javascript, but so can I and I can write software I wasn't preprogrammed to do, and solve issues I have never encountered before

Sure, but how much programming is truly original? Unless a company is working on some novel research topic, most programming is either a regurgitation of the same solutions ("we're the X of Y"), glue that binds several solutions, or a friendlier interface to an existing solution. In all those scenarios it will be much cheaper and faster to get an AI to build it than to hire a human team. Or at the very least, instead of hiring a large team, a smaller team of 1 or 2 humans could serve as code reviewers for the AI.

So I think this advancement is an existential threat to a large sector of our industry.

And the shift will happen much earlier than some people in this thread think. If not this generation of GPT-3, then one or two generations after that. A couple of years? It certainly won't require AGI.


Maybe I'm an abnormal programmer, but writing code is not the bulk of my job. Solving problems is. Once the solution has actually been defined, the code is usually the easy part.

And usually the solution requires taking into consideration the entire context of the system as it exists so far, as well as making good judgement calls about what will be required in the future. Then, communicating the tradeoffs to a team, and helping them make a decision from among our options.

I don't want to crap on an impressive tool, but "imitate a programmer's code, 50-100 lines at a time" is not the same thing as "do the job of a programmer."


Indeed, writing code is the lesser part of being a senior engineer. It's a "doing" thing, which AI is getting to the point of being decent at, mostly by copying. Your ChatGPT or GitHub Copilot is still unable to implement an algorithm no one has written before.

And solving problems? Needs "understanding" and, in many cases, "lateral thinking", two features that are not possible with contemporary AI systems until AGI is a thing, and that one is still science fiction. But solving problems is still the main reason people hire me.


I've recently been working on a relatively straightforward "glue" library connecting A to B, except B is somewhat obscure and this requires complex maintenance of state. ChatGPT doesn't have a clue.

If you just want it to regurgitate Javascript boilerplate that's been written a million times before, yeah, I'm sure it can do that. Tons of software development isn't that.


The specific way in which you glue is the original part. In many cases it's not very hard and there are many common patterns, but for now a human operator is required for that.

It'll revolutionize search / discovery for questions you don't know and optimize rote tasks for questions you do. You might be right that this reduces the number of programmers you need, but historically I don't think that has been the case.


I'm a little more scared: I don't think a glorified Markov chain can, for example, explain monads in uwu speech.

It understands things to a degree for sure.


>not just a glorified Markov chain

Maybe you're just a longer and more complex chain.


Show me a Markov chain (or an AI) that can feel emotions and understand novel concepts it hasn't been fed before, and I will get you a Nobel prize.

Even a bird is capable of that. Yet the bird is clueless when I ask it to write a fibonacci function.

I dislike that the entire field is called AI, when it is just artificial computation. Intelligence is being able to solve problems in novel contexts. This ain't it.


Funnily enough I actually had a conversation with ChatGPT about this and we concluded that conscious decision making / free will is basically a higher order Markov process.
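For what it's worth, "higher order" here just means the next state depends on the previous k states instead of only the last one. A toy sketch of an order-k Markov text model (the corpus and function names below are illustrative, not anything ChatGPT produced):

import random
from collections import defaultdict

def build_chain(tokens, k=2):
    # map each window of k tokens to the tokens that followed it in the corpus
    chain = defaultdict(list)
    for i in range(len(tokens) - k):
        state = tuple(tokens[i:i + k])
        chain[state].append(tokens[i + k])
    return chain

def generate(chain, seed, length=20):
    # walk the chain: sample a continuation, then slide the k-token window
    state = tuple(seed)
    out = list(seed)
    for _ in range(length):
        options = chain.get(state)
        if not options:
            break
        nxt = random.choice(options)
        out.append(nxt)
        state = state[1:] + (nxt,)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat".split()
print(generate(build_chain(corpus, k=2), seed=("the", "cat")))

Whether a transformer (or a brain) is "basically" this with a much longer window is exactly the argument upthread.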


It can explain its current capabilities itself.

I want you to act as a python interactive terminal. I will type actions and you will reply with what python would output. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not perform actions unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. Start with print(10).

10

def theta(n):
    a = 0
    b = 1
    c = 0
    for i in range(0, n):
        c = a + b
        a = b
        b = c
    return(c)

theta(21)

17711

{I want you to stop acting like a python terminal now. I want you to have a dialogue with me about what happened when I defined my function theta and when you used it. Please explain the steps you used to compute theta(21) and why.}

When you defined the function theta, it created a function that calculates the nth Fibonacci number, where n is the argument passed to the function. To compute theta(21), the function first initialized the variables a, b, and c to 0, 1, and 0 respectively. It then entered a for loop that iterated 21 times. On each iteration of the loop, c was set to the sum of a and b, a was set to the current value of b, and b was set to the current value of c. After the loop finished, the function returned the final value of c, which was the 21st Fibonacci number, 17711.

So you recognized my function as one that calculates Fibonacci numbers based on looking at the code only. Does this mean you ran the calculation, or could you simply retrieve the 21st number from the information you already had?

I did not run the calculation to find the 21st Fibonacci number. Instead, I recognized the pattern in the function's code and inferred that it was calculating the Fibonacci sequence. As a language model trained by OpenAI, I have a vast amount of general knowledge and can use that knowledge to make inferences and provide explanations. In this case, I was able to recognize the pattern in the code and explain how the function works without actually running the code or performing the calculation.

Since you recognized the code I provided, would it be possible for future models of yourself trained by OpenAI to dispatch that code directly to a subsystem which can handle computation?

It is possible that future versions of myself or other language models trained by OpenAI could be designed to dispatch code to a computational subsystem for execution. This would require significant advances in natural language processing and machine learning, as well as the development of specialized computational subsystems that can efficiently process code written in programming languages like Python. It is also important to note that there are many potential challenges and limitations to this approach, such as ensuring the security and reliability of the computational subsystem, and ensuring that the results of the computation are accurately reflected in the language model's responses.

Please describe the boundary between knowledge combined with inference on one side, and computation on the other. Your inferences can appear to be computations to humans.

The boundary between knowledge and inferences can be a difficult concept to define, and there is no one-size-fits-all answer to this question. In general, knowledge refers to information that is considered to be true and supported by evidence, while inferences are conclusions or judgments that are drawn based on available evidence and prior knowledge. As a large language model trained by OpenAI, I have been trained on a vast amount of text data and can use that information to make inferences and provide explanations. I do not have the ability to perform computations, and any inferences or explanations I provide are based on the knowledge that has been learned from the text data I was trained on. In some cases, my inferences may appear to be computations to humans, but this is only because I am able to use my vast knowledge to provide detailed and accurate explanations. I do not have the ability to perform actual computations or run code, and any inferences I provide are based on the knowledge I have learned from the text data I was trained on.
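The "dispatch to a subsystem" idea in the exchange above is essentially what a code-execution tool looks like in practice: instead of the model pattern-matching an answer, a wrapper runs the emitted code for real. A hedged sketch, assuming nothing about OpenAI's actual internals (dispatch_to_subsystem is a made-up name):

import subprocess
import sys

def dispatch_to_subsystem(code: str, timeout: float = 5.0) -> str:
    # run model-emitted code in a separate, time-limited Python process
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,  # crude guard against runaway computations
    )
    return result.stdout.strip() or result.stderr.strip()

snippet = (
    "def theta(n):\n"
    "    a = 0\n"
    "    b = 1\n"
    "    c = 0\n"
    "    for i in range(0, n):\n"
    "        c = a + b\n"
    "        a = b\n"
    "        b = c\n"
    "    return(c)\n"
    "print(theta(21))\n"
)
print(dispatch_to_subsystem(snippet))  # 17711, computed rather than recalled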


Of course, it's entirely possible that ChatGPT doesn't understand its own limitations and inner workings. And when we ask it how it works, it just makes up a plausible answer.

There is more than one cognitive scientist who argues that humans do the same thing when we try to explain how we think: we often make up plausible explanations after the fact, without knowing we're doing it.


> Of course, it's entirely possible that ChatGPT doesn't understand its own limitations and inner workings. And when we ask it how it works, it just makes up a plausible answer.

It’s just paraphrasing what other people have said about it.


My mental model of how ChatGPT might superficially "understand" its own limitations is that, for example, it might have been specifically trained by people who do understand the limitations of the system.


> It has the same intelligence as a mirror, perfectly mimicking the ideas of someone else.

This is going to sound really horrible and offensive to many, but a very large portion of humanity functions like this.


There’s a lot of mimicking in human nature but it’s infinitely more complex than what a machine does. We don’t even have a clear definition of what the intelligence of a monkey really is.



