Force pushing doesn't actually remove anything from the remote repository; it only changes which commits the branch references point to (e.g. `git push --force` just moves `refs/heads/<branch>`; the old commits linger until garbage collection). Plus, any forks on GitHub will be completely unaffected. It's not perfect, since GitHub doesn't seem to offer any history of such reference alterations (à la the local reflog), but it's still a valuable offsite backup from a developer's perspective.
Okay, fair enough re force pushing (though `gh repo delete` is still an option). I suppose for a sufficiently active codebase, copies of it will exist elsewhere. It just seems odd to me that people aren't backing up anything else on their computers; otherwise they could trivially include their git-based projects.
Why not? It's their operating system, and they're trying to balance quite a few competing priorities. Scammers are not a threat to dismiss out of hand (I've had family who were victims).
For it to be truly considered open source, you should be able to fork it and make your own edits to change the defaults however you wish. Whether that is still a possibility is a completely separate issue from how they proceed with their own fork.
Of course it's your phone, but the whole point of using Android is that it makes a lot of choices for you. It forces a billion things on you, and this is really no different than any of the others. Everything from UI colors, to the way every feature actually works. For instance, should you be able to text message one million people at a time? You might want to, but Android doesn't offer that feature. Do you want to install spyware on your girlfriend's phone? Maybe that's your idea of complete freedom, but the fact that Google makes it harder is a good thing, not a bad thing.
If you don't like their choices, you should be able to install other software you do like. There should be completely free options that people can choose if they desire. But the majority of people just want a working phone that someone like Google is taking great pains to make work safely and reliably.
> Of course it's your phone, but the whole point of using Android is that it makes a lot of choices for you. It forces a billion things on you, and this is really no different than any of the others. Everything from UI colors, to the way every feature actually works.
There is a difference between making a choice because there has to be something there (setting a default wallpaper, installing a default phone/SMS app so your phone works as a phone) and actively choosing to act against the user (restricting what I can install on my own device, including via dark patterns, or telling me that I'm not allowed to grant apps additional permissions).
> For instance, should you be able to text message one million people at a time? You might want to, but Android doesn't offer that feature.
There's a difference between not implementing something, and actively blocking it. While we're at it, making it harder to programmatically send SMS is another regression that I dislike.
> Do you want to install spyware on your girlfriend's phone? Maybe that's your idea of complete freedom, but the fact that Google makes it harder is a good thing, not a bad thing.
Obviously someone else installing things on your phone is bad; you can't object to the owner controlling a device by talking about other people controlling it.
> If you don't like their choices, you should be able to install other software you do like. There should be completely free options that people can choose if they desire. But the majority of people just want a working phone that someone like Google is taking great pains to make work safely and reliably.
Okay, then we agree, right? I should be able to install other software I like - e.g. F-Droid - without Google getting in my way? No artificial hurdles, no dark patterns, no difficulty that they wouldn't impose on Google Play? After all, F-Droid has less malware, so in the name of safety the thing they should be putting warning labels on is Google Play.
The problem is that, step by step, ownership of your device is being taken away. First most phones stopped supporting unlocking/relocking (thank Google for keeping the Pixel open), now the backtracked version of this, next the full version, etc.
Yes, that is a real problem. But it doesn't justify arguing uncritically or unrealistically in other areas. I think people should be free to do anything they want with their own devices. They should be able to install any software they want. That's very different than demanding someone make their software exactly how you desire. i.e. You should be able to install your own operating system; you don't get to tell them how theirs should operate.
There are legitimate concerns being addressed by these feature restrictions.
> demanding someone make their software exactly how you desire
IMO the way this should work is that Google can make their software however they want provided they don't do anything to stop me from changing it to work the way I want.
Unfortunately, they've already done a lot of things to stop me from changing it to work the way I want. SafetyNet, locked bootloaders, closed-source system apps, and now they're (maybe) trying to layer "you can't install apps we don't approve of" on top of that.
> IMO the way this should work is that Google can make their software however they want provided they don't do anything to stop me from changing it to work the way I want.
That's exactly how it is. You're free to get your soldering iron out, or your debugger, and reverse engineer anything you want. I don't mean to argue unfairly, but all we're talking about here is the relative ease with which you can do what you want to do. How easy do they have to make it?
As for their software, as delivered, there are literally an infinite number of ways that it stops you from changing it. Maybe you want everything in Pig Latin, or a language you made up yourself. Do they have to design around this desire? Do they have to make this easy to do?
> You should be able to install your own operating system
So you draw the line between the bootloader and the OS. Other people draw the line between the OS and applications. Most (nearly all) people can't write either, so for them it is just part of the device.
> you don't get to tell them how theirs should operate.
I paid for it, and I allow it to be legal in the jurisdiction I (partly) control. So it is not only theirs anymore.
Yes, and it should be 100% legal for you to hack it. Get the soldering iron out, and the debugger, and alter it to your heart's content. You bought it, you own it. But the supplier should be under no obligation to make any of that easy for you.
Just like they shouldn't be required to offer it in pink if that's your favorite color. It's up to you to paint it yourself. And if you want to load random APKs, you'll have to do whatever it takes to figure that out too, up to creating your own hardware and software.
I think you misunderstood me, the software is part of the device I paid for and own.
If I tell someone to install a light switch in my living room, and then it occasionally switches states when someone presses another switch at my outside wall, and occasionally refuses to work, I don't feel like they fulfilled their contractual obligation. Same with smartphones and software.
I would agree with you if I wanted additional features: like if I want a filesystem, but there is no filesystem manager yet, or if I want to install a package, but there is no package manager, or the package manager uses another format. But here there is a package manager and the package has the right format, so I tell the device to install it and it just doesn't, solely because I am called John Brown and not Alphabet Inc. That is not right.
You bought the device as delivered. They built it in the best way they know how. If you don't like it you're free to try to change it. But they're under no obligation to make it easy for you.
If the light switch you bought has a little daylight sensor on it, and turns off when the sun is out, then that's what it does. You may not like that light switch. You might want one that "does what you want, because you paid for it!" but then you should have purchased a different one, or made a light switch you actually liked. Of course you are free to get the soldering iron out and try to change the light switch. But the manufacturer is under no obligation to make it easy for you to change the way it works.
> If the light switch you bought has a little daylight sensor on it, and turns off when the sun is out, then that's what it does. You may not like that light switch. You might want one that "does what you want, because you paid for it!" but then you should have purchased a different one, or made a light switch you actually liked.
Not sure this analogy works, as it gives prospective light switch buyers a choice of different light switch types. What Google is doing seems more like forcing EVERY light switch to have a daylight sensor, thus forcing you to save power (even if you're pro-global warming and just trying to do your part for the cause), then telling people who have vision problems relating to suboptimal indoor illumination, or who suffer from sunlight frequency melting disorder, or who think they've got some other random "daylight makes life suck" bullshit, to create a student/hobbyist account.
> They should be able to install any software they want. That's very different than demanding someone make their software exactly how you desire. i.e. You should be able to install your own operating system; you don't get to tell them how theirs should operate.
I don't think the distinction exists the way you're trying to describe. If I should be allowed to install any software I want, surely that includes any .apk I want? Conversely, someone could make the exact same claim one step down the chain and argue that you don't get to tell them how their firmware should work, and that if you want to install your own OS you should just go buy a fab, make your own chips, write your own firmware, and make your own phone. And that's absurd, because users should be allowed to run their own software without being forced to ditch the rest of the stack for no reason.
No, I don't think you have the inherent right to install any apk you desire, if their OS is designed to prohibit it. You should be free to try to alter their OS any way you want, but they should not have to make it easy.
And the argument is the same lower down the stack. You shouldn't be able to tell someone how to design their firmware.
The only problem is where the law prohibits us from trying to undo these restrictions, or make modifications ourselves. It's government that restricts us, and we should focus our efforts there.
> No, I don't think you have the inherent right to install any apk you desire, if their OS is designed to prohibit it. You should be free to try to alter their OS any way you want, but they should not have to make it easy.
> And the argument is the same lower down the stack. You shouldn't be able to tell someone how to design their firmware.
Earlier, you claimed,
> They should be able to install any software they want.
but it sounds like actually you only mean that users should be allowed to futilely attempt it, not that they should actually be allowed to run software at will. If the firmware only allows running a signed OS, and that OS only allows running approved apps, then the user is not able to install any software they want.
I want maximum freedom, for everyone. That includes developers. We should be free to produce the software as we see fit. If that means we think that our users are best served by having devices that are locked down against scammers etc, then we should be free to produce locked down devices like that.
And as users we should be free to buy only devices that respect maximum capabilities and customization.
There is a tension between these goals, and it's difficult to resolve in a way that gives everyone most of what they want. Google seems to be doing the right thing mostly, though: providing both the locked-down device, and making provisions for people who want the non-standard option too.
Anyone who thinks they can do better, should enter the market and give us something better. I'd like more options for completely open and hackable phones.
There's a very easy way to achieve maximum freedom: punish people who take away other people's freedom. To achieve maximum freedom, the one freedom people must never be allowed to have is the freedom to take away other people's freedom. Google must be punished for every software module they wrote whose sole purpose is to make you less free.
They didn't make you less free. They protected your phone from scammers. On top of which, nobody twisted your arm and made you buy from them. You're free to change the phone any way you want: get the debugger out and change it. You have everything you need, it's your phone; and they have the freedom to not help you.
The whole point of using Android for most users is that they have no other choice if they need a mobile phone.
Google killed every other competitor via dumping and shady business practices. Sure, you can go to iOS, but that is even more closed and restrictive, not to mention the devices are overpriced.
Google makes it mandatory for your girlfriend's phone to have spyware on it. The spyware is made by Google. It doesn't protect you from spyware.
While we're talking about that, have you heard of Bright Data SDK? A lot of apps on the Play Store include it to monetize. What does it do? It uses your phone as a botnet node while the app is open, and pays the app developer. How is Google protecting you from spyware, again?
100%. If I buy something, it's mine. I should be able to resell it, modify it, or generally work on it however I see fit. Licensed digital media bound to platforms is different (barring some kind of NFT solution?) but an OS that my phone cannot function without (and that cannot be replaced in many cases) absolutely must be under my jurisdiction.
You paid for it, but Google still has the control. I understand that you prefer things to be different (as do I), but the reality is that we don't have control over devices we paid for.
You might choose not to have control. The reason people protest is that we should have more control over the things we own. Sure, this might create a better market for alternatives, but it is worse for most people. F-Droid is spectacular.
I think it's reasonable for Google to control what happens in their version of Android (which can be installed by default) but it's not reasonable for Google to lock the bootloader (preventing installation of a non-Google OS).
Perhaps this is why Google hardware doesn't have locked bootloaders; Samsung et al can get away with locked bootloaders since it's not Google forcing the consumer in that case.
Whether the bootloader is or isn't locked should be very conspicuous before purchase, for consumer protection.
Reverse engineering the drivers, to permit you to create your own OS for your own hardware, is already an area where people are accused of crimes. DMCA Section 1201 isn't something to be so easily worked around, to allow you to place your software in a working state onto undocumented hardware.
So, yes, there are a lot of things stopping you from coding your own OS.
Maybe I'm misunderstanding your point, but humans have pretty abysmal data efficiency, too. We have to use tools for everything... ledgers, spreadsheets, databases, etc. It'll be the same for an AGI: there won't be any reason for it to remember every little detail, just to be able to use the appropriate tool, as needed.
I think the point the poster was making is that there is an asterisk beside Gukesh's and Liren's world champ status. Nobody really thinks they're the actual world champ, regardless of what FIDE says. FIDE failed to attract the best player to even play.
By the same logic, why would anyone expect that the best player in the world for any given sport happens to compete in the Olympics? The issue here is the semantics. FIDE titling someone "world champion" is at the end of the day no different than a burger joint claiming to be the best in the country after winning some competition or another.
To be clear, I don't mean to take issue with the competitions themselves.
The other day I slammed the brakes at a green light, because I could hear sirens approaching -- even though the buildings on the corner prevented any view of the approaching fire trucks or their flashing lights. Do Teslas not have this ability?
> An algorithm is an algorithm. A computer is a computer. These things matter.
Sure. But we're allowed to notice abstractions that are similar between these things. Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation, then there's no reason to think they're restricted to humanity.
It is human ego and hubris that keeps demanding we're special and could never be fully emulated in silicon. It's the exact same reasoning that put the earth at the center of the universe, and humans as the primary focus of God's will.
That said, nobody is confused that LLMs are the intellectual equal of humans today. They're more powerful in some ways, and tremendously weaker in other ways. But pointing those differences out is not a logical argument about their ultimate abilities.
> Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation
Worth noting that a significant majority of the US population (though not necessarily of developers) does in fact believe that, or at least belongs to a religious group in which that belief is commonly promulgated.
I think computation is an abstraction, not the reality. Same with math. Reality just is, humans come up with maps and models of it, then mistake the maps for the reality, which often causes distortions and attribution errors across domains. One of those distortions is thinking consciousness has to be computable, when computation is an abstraction, and consciousness is experiential.
But it's a philosophical argument. Nothing supernatural about it either.
You can play that game with any argument. "Consciousness" is just an abstraction, not the reality, which makes people who desperately want humans to be special attribute it to something beyond the reach of any other part of reality. It's an emotional need, placated by a philosophical outlook. Consciousness is just a model or map for a particular part of reality, and ironically, focusing on it as somehow being the most important thing makes you miss reality.
The reality is, we have devices in the real world that have demonstrable, factual capabilities. They're on the spectrum of what we'd call "intelligence". And therefore, it's natural that we compare them to other things that are also on that spectrum. That's every bit as much factual, as anything you've said.
It's just stupid to get so lost in philosophical terminology, that we have to dismiss them as mistaken maps or models. The only people doing that, are hyper focused on how important humans are, and what makes them identifiably different than other parts of reality. It's a mistake that the best philosophers of every age keep making.
The argument you're attempting to have, and I believe failing at, is one of resolution of simulation.
Consciousness is 100% computable. Be that digitally (electrically), chemically, or quantum-mechanically. You don't have any other choices outside of that.
What's more, consciousness/sentience is a continuum going from very basic animals to the complexity of humans' inner minds. Consciousness didn't just spring up; it evolved over millions of years, and is therefore made up of parts that are divisible.
Reality is. Consciousness is.. questionable. I have one. You? I don't know, I'm experiencing reality and you seem to have one, but I can never know it.
Computations on the other hand describe reality. And unless human brains somehow escape the physical reality, this description about the latter should surely apply here as well. There are no stronger computational models than a Turing machine, ergo whatever the human brain does (regardless of implementation) should be describable by one.
Worth noting that this is the thesis of Seeing Red: A Study in Consciousness. I think you will find it a good read, even if I disagreed with some of the ideas.
The atoms of your body are not dynamic structures, they do not reengineer or reconfigure themselves in response to success/failure or rules discovery. So by your own logic, you cannot be intelligent, because your body is running on a non-dynamic structure. Your argument lacks an appreciation for higher-level abstractions built on non-dynamic structures. That's exactly what is happening in your body, and also with the software that runs on silicon. Unless you believe the atoms in your body are "magic" and fundamentally different from the atoms in silicon, there's really no merit in your argument.
>>The atoms of your body are not dynamic structures, they do not reengineer or reconfigure themselves in response to success/failure or rules discovery.<<
you should check out chemistry, and nuclear physics, it will probably blow your mind.
it seems you have an inside scoop, so let's go through: what is required to create a silicon logic gate that changes function according to past events and projected trends?
You're ignoring the point. The individual atoms of YOUR body do not learn. They do not respond to experience. You categorically stated that any system built on such components cannot demonstrate intelligence. You need to think long and hard before posting this argument again.
Once you admit that higher level structures can be intelligent, even though they're built on non-dynamic, non-adaptive technology -- then there's as much reason to think that software running on silicon can do it too. Just like the higher level chemistry, nuclear physics, and any other "biological software" can do on top of the non-dynamic, non-learning, atoms of your body.
>>The individual atoms of YOUR body do not learn. They do not respond to experience<<
you are quite wrong on that. that is where you are failing to understand; you can't get past that idea.
there is also a large difference in scale. your silicon is going to need assembly/organization on the scale of individual molecules, and there will be self-assembly required, as that level of organization is constantly changing.
the barrier is mechanical-scale construction as the basic unit of function; that is why silicon and code can't adapt, can't exploit hysteresis, can't alter its own structure and function at an existentially fundamental level.
you are holding the wrong end of the stick. biology is not magic, it is a product of reality.
No, you're failing to acknowledge that your own assertion that intelligence can't be based on a non-dynamic, non-learning technology is just wrong. And not only wrong: proof to the contrary is demonstrated by your very own existence. If you accept that you are, at the very base of your tech stack, just atoms, then you simply must acknowledge that intelligence can be built on top of a non-learning, non-dynamic base technology.
All the rest is just hand-waving that it's "different". You're either atoms, or you're somehow atoms + extra magic. I'm assuming you're not going to claim that you're extra magic, in which case your assertions are just demonstrably false, and predicated on unjustified claims about the nature of biology.
so you are a bot! i thought so. not bad, you're getting better at acting human!
atoms are not the base of the stack; you need to look at virtual annihilation, and decoherence, to get close to the base. there is no magic, biology just goes to the base of the stack.
you can't access that base with such coarse mechanisms as deposited silicon.
that's because it never changes; it fails at times and starts over.
biology is constantly changing, it's tied to the base of existence itself. it fails, and varies, until failure is an infeasible state.
Quantum "computers" are something close to where you need to be, and a self-assembling, self-replenishing, persistent ^patterning^ constraint is going to be of much greater utility than a silicon abacus.
sorry, but you are absolutely wrong on that one, you yourself are absolute proof.
not only that, code is only as dynamic as the rules of the language will permit.
silicon and code can't break the rules, or change the rules; biological, adaptive, hysteretic, out-of-band informatic neural systems do. and repeat: silicon and code can't.
Programming languages are Turing complete...the boundary is mathematics itself.
Unless you are going to take the position that neural systems transcend mathematics (i.e. they are magic), there is no theoretical reason that a brain can't run on silicon. It's all just numbers, no magic spirit energy.
We've had evolutionary algorithms and programs that train themselves for decades now.
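For instance, here's a toy sketch in Python of the mutate-and-select loop at the heart of the simplest evolutionary algorithms (nothing from any particular system; `TARGET` and `ALPHABET` are just illustrative). The program's behavior is reshaped purely by success/failure feedback:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "adapt or be rewritten"  # arbitrary goal for the demo

def fitness(s):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Randomly replace one character.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# Start from random noise and keep any mutation that doesn't hurt.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(best)  # converges to TARGET by selection alone
```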
mathematics has a problem with uncertainties, and that is why math, as structured, can't do it. magic makes a cool strawman, but there is no magic; you need to refine your awareness of physical reality. solid-state silicon won't get you where you want to go. you should look at colloidal systems [however that leads to biology] or, if energetic constraints are not an issue, plasma-state quantum "computation".
also, any such thing that is generated must be responsive to the consequences of its own activities, capable of meta-training rather than being locked into a training program. a system of aligned, emergent outcomes.
It seems like you're doing a lot of work to miss the actual point. Focusing on the minutiae of the analogy is a distraction from the overarching and obvious point. It has nothing to do with how you feel; it has to do with how you will compete in a world with others who feel differently.
There were carpenters who refused to use power tools, some still do. They are probably happy -- and that's great, all the power to them. But they're statistically irrelevant, just as artisanal hand-crafted computer coding will be. There was a time when coders rejected high level languages, because the only way they felt good about their code is if they handcrafted the binary codes, and keyed them directly into the computer without an assembler. Times change.
In my opinion, it is far too early to claim that developers developing like it was maybe three years ago are statistically irrelevant. Microsoft has gone in on AI tooling in a big way and they just nominated a "software quality czar".
I used the future tense. Maybe it will be one hundred years from now, who knows; but the main point still stands. It would just be nice to move the conversation beyond "but I enjoy coding!".
I don't think it's correct to claim that AI-generated code is just the next level of abstraction.
All previously mentioned levels produce deterministic results. Same input, same output.
AI generation is not deterministic. It's not even predictable. And the example of big software companies clearly shows what mass adoption of AI tools will look like in terms of software quality. I dread AI use ever becoming an expectation; that would be a level of enshittification never before imagined.
You're not wrong. But the same objections were made against compilers: that they are opaque, differ from one to another, can introduce bugs, aren't actually deterministic if you upgrade the compiler, etc. They separate the programmer from the code the computer eventually executes.
In any case, clinging to the fact that this technology is different in some ways, continues to ignore the many ways it's exactly the same. People continue to cling to what they know, and find ways to argue against what's new. But the writing is plainly on the wall, regardless of how much we struggle to emotionally separate ourselves from it.
They may not be wrong per se, but that argument is essentially a strawman.
If these tools are non-deterministic, then how did someone at Anthropic spend the equivalent of $20,000 of Anthropic compute and end up with a C compiler that can compile the Linux kernel (one of the largest bodies of C code out there)?
To be frank, the source code of C compilers was probably in its training material multiple times; it just had to translate it to Rust.
This aside, one success story doesn't mean much, and doesn't even touch the determinism question. With every ad like this, Anthropic should have posted all the prompts they used.
People on here keep trotting out this "AI-generation is not deterministic" (more properly speaking, non-deterministic) argument…
And my retort to you (and them) is, "Oh yeah, and so?"
What about me asking Claude Code to generate a factorial function in C or Python or Rust or insert-your-language-of-choice-here is non-deterministic?
If you're referring to the fact that for a given input LLMs (or whatever), because of certain controls (temperature controls?), don't give the same outputs for the same inputs: yeah, okay. If we're talking about conversational language, that makes a meaningful difference to whether it sounds like an ELIZA bot or more like a human. But ask an LLM to output some code, and that code has to adhere to functional requirements independent of, muh, non-determinism. And what's to stop you (if you're so sceptical/scared) writing test-cases to make sure the code that is magically whisked out of nowhere performs as you so desire? Nothing. What's to stop you getting one agent to write the test-suite (and you reviewing that test-suite for correctness), and another agent writing the code and self-correcting by checking its code against the test-suite? Nothing.
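To make that concrete, a minimal sketch in Python (the names `check_factorial` and `generated_factorial` are mine; the latter just stands in for whatever the agent emits):

```python
import math

def check_factorial(fact):
    # These checks don't care how the code was generated,
    # only that it behaves like factorial.
    assert fact(0) == 1
    assert fact(1) == 1
    for n in range(2, 50):
        assert fact(n) == n * fact(n - 1)    # defining recurrence
        assert fact(n) == math.factorial(n)  # reference oracle

def generated_factorial(n):
    # Stand-in for a model-generated candidate implementation.
    out = 1
    for k in range(2, n + 1):
        out *= k
    return out

check_factorial(generated_factorial)  # passes or raises, deterministically
```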
I would advise anyone encountering this but-they're-non-deterministic argument on HN to really think through what the proponents of this argument are implying. I mean, aren't humans non-deterministic? (I should have thought so.) So how is it, <extra sarcasm mode activated>pray tell</extra sarcasm mode activated>, that humans manage to write correct software in the first place?
I personally have jested many times that I picked my career because the logical soundness of programming is comforting to me. A one is always a one; you don't measure it and find it off by some error; you can't measure it a second time and get a different value.
I’ve also said code is prose for me.
I am not some autistic programmer either, even if these statements out of context make me sound like one.
The non-determinism has nothing to do with temperature; it has everything to do with the fact that even at temp equal to zero, a single meaningless change can produce a different result. It has to do with there being no way to predict what will happen when you run the model on your prompt.
Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you? These are different tasks that use different parts of the brain.
> A one is always a one; you don’t measure it and find it off by some error; you can’t measure it a second time and get a different value.
Linus Torvalds famously only uses ECC memory in his dev machines. Why? Because every now and again either a cosmic ray or some electronic glitch will flip a bit from a zero to a one or from a one to a zero in his RAM. So no, a one is not always a one. A zero is not always a zero. In fact, you can measure it and find it off by some error. You can measure it a second time and get a different value. And because of this ever-so-slight glitchiness we invented ECC memory. Error correction codes are a thing because of this fundamental glitchiness. https://en.wikipedia.org/wiki/ECC_memory
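For the curious, here's the trick in miniature: a toy Python sketch of a Hamming(7,4) code. Real ECC DIMMs use SECDED codes over wider words, but the single-bit-correction idea is the same:

```python
def encode(d):
    # d = 4 data bits; produce a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    # Recompute the parity checks; the failing checks spell out the
    # 1-based position of the flipped bit (0 means nothing detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4
    if pos:
        c[pos - 1] ^= 1  # flip the bad bit back
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1  # simulate the cosmic-ray bit flip
assert correct(word) == encode([1, 0, 1, 1])  # error found and fixed
```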
We understand when and how things can go wrong, and we correct for that. Same goes for LLMs. In fact, I would go so far as to say that someone doesn't really think the way a software/hardware engineer ought to think if this is not almost immediately obvious.
Besides the but-they're-not-deterministic crowd there's also the oh-you-find-coding-painful-do-you crowd. Both are engaging in this sort of real men write code with their bare hands nonsense -- if that were the case then why aren't we still flipping bits using toggle switches? We automate stuff, do we not? How is this not a step-change in automation? For the first time in my life my ideas aren't constrained by how much code I can manually crank out and it's liberating. It's not like when I ask my coding agent to provide me with a factorial function in Haskell it draws a tomato. It will, statistically speaking, give me a factorial function in Haskell. Even if I have never written a line of Haskell in my life. That's astounding. I can now write in Haskell if I want. Or Rust. Or you-name-it.
Aren't there projects you wanted to embark on, but the sheer amount of time you'd need just to crank out the code prevented you from even taking the first step? Now you can! Do you ever go back to a project and spend hours re-familiarising yourself with your own code? Now it's a two-minute "what was I doing here?" away from you.
> The non-determinism has nothing to do with temperature; it has everything to do with the fact that even at temp equal to zero, a single meaningless change can produce a different result. It has to do with there being no way to predict what will happen when you run the model on your prompt.
I never meant to imply that the only factor involved was temperature. For our purposes this is a pedantic correction.
> Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you?
Correct, it's not the same. Nobody is arguing that it's the same. And it isn't wrong that it's different; it's just different.
> These are different tasks that use different parts of the brain.
> That's astounding. I can now write in Haskell if I want. Or Rust. Or you-name-it.
You're responsible for what you ship using it. If you don't know what you're reading, especially if it's a language like C or Rust, be careful shipping that code to production. Your work colleague might get annoyed with you if you ask them to review too many PRs with the subtle, hard-to-detect kind of errors that LLMs generate. They will probably get mad if you submit useless security reports like the ones that flood bug bounty boards. Be wary.
IMO the only way to avoid these problems is expertise and that comes from experience and learning. There's only one way to do that and there's no royal road or shortcut.
You're making quite long and angry-sounding comments.
If you're producing code in a language you don't know, then this code is as good as a magical black box. It will never be properly supported; it's dead code in the project that may do what it says it does, or may not.
I think you should refrain from replying to me until you're able to respond to the actual points of my counter-arguments to you -- and until you are able to do so I'm going to operate under the assumption that you have no valid or useful response.
Non-determinism here means that with the same inputs, the same prompts, we are not guaranteed the same results.
This turns writing code this way into a tedious procedure that may not even work exactly the same way every time.
You should ask yourself, too: if you already have to spend so much time preparing various tests (you can't trust the LLM to make them, or you have to describe them in so much detail), so much time describing what you need, then hand-holding the model, all to get mediocre code that you may not be able to reproduce with the same model tomorrow - what's the point?
I don't think he's missing the point at all. A band saw is an immutable object with a fixed, deterministic capability--in other words, a tool.
An LLM is a slot machine. You can keep pulling the lever, but you'll get different results every time. A slot machine is technically a machine that can produce money, but nobody would ever say it's a tool for producing money.
People keep trotting this argument out. But a band saw is not deterministic either; it can snap in the middle of a cut and destroy what you're working on. The point is, we only treat it like it's deterministic because most of the time it's reliable enough that it just does what we want. AI technology will definitely get to the same level eventually. Clinging to the fact that it isn't yet at that level today is just cope, not a principled argument.
For every continuous real-valued function (on a compact domain) and every epsilon greater than zero, there's a neural network (size unbounded) which approximates the function with precision epsilon.
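For reference, the standard statement (Cybenko 1989 / Hornik 1991), written out:

```latex
% Universal approximation: for any continuous f on a compact K and any
% eps > 0, some finite (but unboundedly wide) one-hidden-layer network
% with nonlinearity sigma gets uniformly within eps of f.
\forall f \in C(K),\ \forall \varepsilon > 0,\
\exists N,\ \alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n :
\quad \sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} \alpha_i \,
\sigma\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon
```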
It sounds impressive and, as I understand it, is the basis for the argument that algorithms based on NNs, such as LLMs, will be able to outperform humans at tasks such as programming.
But this theorem contains an ambiguous term ("size unbounded"), and the result is much less impressive once you remove it.
Which, for me, makes such tools interesting, I guess, for some applications, but not nearly impressive enough to remove the need for programmers, or to replace their labour entirely with automation that has us concerning ourselves with writing markdown files and wasting tokens asking the algorithm to try again.
So this whole argument that "you better learn to use them or be displaced in the labour market" is relying on a weak argument.
I think whether a tool is deterministic or not is a distinction without a difference. Fundamentally, its nature doesn't matter if, in actual practice, it outperforms everything else.
Be that as it may, moving the goalpost aside. For me personally this fundamentally does matter. Programming is about giving instructions for a machine (or something mechanical) to follow. It matters a great deal to me that the machine reliably follows the instructions I give it. And compiler authors of the past have gone to great lengths to make their compilers produce robust (meaning deterministic) output, as have language authors tried to make their standards as rigorous (meaning minimize undefined behavior) as possible.
And for that matter, going back to the band saw analogy, a measure of a quality of a great band saw is, in fact, that the blade won’t snap in half in the middle of a cut. If a band saw manufacturer produces a band saw with a really low binomial p-value (meaning it is less deterministic/more stochastic) that is a pretty lousy band saw, and good carpenters will know to stay away from that brand of band saws.
To me this paints a picture of a distinction that does indeed have a difference. A pretty important difference for that matter.
I feel like we're of similar minds on opposite sides, so perhaps you can answer me this: how is a deterministic AI any different from a search engine?
In other words, if you and I always get the same results back for the same prompt (the definition of determinism), isn't that just a really, really power-hungry Google?
I'm not sure pure determinism is actually a desirable goal. I mean, if you ask the best programmer in the world the same question every day, you're likely to eventually get a new answer at some point. But if you ask him, or I ask him, hopefully he gives the same good answer, to us both. In any case, he's not just a power hungry Google, because he can contextualize our question, and understand us when we ask in very obscured ways; maybe without us even understanding what we're actually looking for.
Maybe you disagree with it, but it seems like a pretty straightforward argument: A lot of us dismiss AI because "it can't be trusted to do as good a job as me". The OP is arguing that someone, who can do better than most of us, disagrees with this line of thinking. And if we have respect for his abilities, and recognize them as better than our own, we should perhaps re-assess our own rationale in dismissing the utility of AI assistance. If he can get value out of it, surely we can too if we don't argue ourselves out of giving it a fair shake. The flip side of that argument might be that you have to be a much better programmer than most of us are, to properly extract value out of the AI... maybe it's only useful in the hands of a real expert.
No, it doesn't work that way. I don't know if Mitchell is a better programmer than me, but let's say he is for the sake of argument. That doesn't make him a god to whom I must listen. He's just a guy, and he can be wrong about things. I'm glad he's apparently finding value here, but the cold hard reality is that I have tried the tools and they don't provide value to me. And between another practitioner's opinion and my own, I value my own more.
>A lot of us dismiss AI because "it can't be trusted to do as good a job as me"
Some of us enjoy learning how systems work, and derive satisfaction from the feeling of doing something hard, and feel that AI removes that satisfaction. If I wanted to have something else write the code, I would focus on becoming a product manager, or a technical lead. But as is, this is a craft, and I very much enjoy the autonomy that comes with being able to use this skill and grow it.
I consider myself a craftsman as well. AI gives me the ability to focus on the parts I both enjoy working on and that demand the most craftsmanship. A lot of what I use AI for and show in the blog isn’t coding at all, but a way to allow me to spend more time coding.
This reads like you maybe didn't read the blog post, so I'll mention that there are many examples there.
Nobody is trying to talk anyone out of their hobby or artisanal creativeness. A lot of people enjoy walking, even after the invention of the automobile. There's nothing wrong with that, there are even times when it's the much more efficient choice. But in the context of say transporting packages across the country... it's not really relevant how much you enjoy one or the other; only one of them can get the job done in a reasonable amount of time. And we can assume that's the context and spirit of the OP's argument.
>Nobody is trying to talk anyone out of their hobby or artisanal creativeness.
Well, yes, they are, some folks don't think "here's how I use AI" and "I'm a craftsman!" are consistent. Seems like maybe OP should consider whether "AI is a tool, why can't you use it right" isn't begging the question.
Is this going to be the new rhetorical trick, to say "oh hey surely we can all agree I have reasonable goals! And to the extent they're reasonable you are unreasonable for not adopting them"?
>But in the context of say transporting packages across the country... it's not really relevant how much you enjoy one or the other; only one of them can get the job done in a reasonable amount of time.
I think one of the more frustrating aspects of this whole debate is this idea that software development pre-AI was too "slow", despite the fact that no other kind of engineering has nearly the same turnaround time as software engineering does (nor the same return on investment!).
I just end up rolling my eyes when people use this argument. To me it feels like favoring productivity over everything else.
This is a really well done talk, with a very interesting peek into how the port is going. But beyond that, it's also a great example of how to give a technical talk in general, walking the line between bringing the uninitiated along, without talking down to anyone; all the while providing truly useful (and new) information. And kudos too, to whoever recorded the audio and visuals, and posted them for us to see -- if only all tech talks were recorded this well!