I agree that LLMs inherently lack agency. It's a pretty subtle distinction, but an extremely consequential one. An AI cannot self-initiate anything; it can only produce output in response to input. The impetus for its action is always an idea that came from a human's mind, however indirect the path. That much is inarguable imo. The possible point of contention is whether the same can be said about humans, and that's a very philosophical question, but I'm convinced it cannot (I won't get into my own philosophical beliefs here).
Even though the distinction is philosophical, I think the implications are very practical. Without its own initiating energy, everything an AI produces will be a response to an input, and its response will be constrained by the bounds implied by that input. The specific kind of dialectic between the programmer and the person giving requirements, the one that produces the ACTUAL requirements, cannot happen with an AI, because a dialectic requires two opposed agents/forces, and an AI is incapable of being an opposing force: it is only a derivative of whatever force is providing its input. Basically, it is constrained inside a box defined by the input it is given, and what true synthesis requires (new ideas/thoughts, as opposed to an analytic breaking down of the ideas already proposed) is precisely a whole separate box interacting with the one defined by the input.
My explanation is extremely abstract and will probably only make sense to someone who almost agrees with me already, but that's the best I could do. I'm sure there's a more down-to-earth way to explain this, but I guess my understanding isn't good enough to find it yet. In my defense, I do think this particular issue of agency in AI is one of the most subtle and philosophical problems in the world right now that actually has practical implications.
I agree with most of this. The part I question is whether both sides of the requirements-exploration process need to have agency, or whether it's sufficient to have just one human participant (ideally the customer/user).
My thinking is that the adversarial or opposing agency could be provided by strict business rules written with human intent. So when the customer makes a request to the model, a tool could get called that raises an exception, also designed by a human. But physically only one person is involved in that transaction.
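A minimal sketch of that idea (all names and the discount rule here are invented for illustration, not taken from any real system): a human writes the rule ahead of time, the model can only invoke it as a tool, and the exception is the "opposing force" in the exchange.

```python
# Hypothetical sketch: a human-authored business rule exposed as a tool,
# so the model's side of the conversation is bounded by fixed human intent.

class BusinessRuleViolation(Exception):
    """Raised when a request breaks a rule a human wrote in advance."""

# Assumed example rule, set by a human, never by the model.
MAX_DISCOUNT_PERCENT = 20.0

def apply_discount(price: float, discount_percent: float) -> float:
    """Tool the model would call; the check inside encodes human intent."""
    if discount_percent > MAX_DISCOUNT_PERCENT:
        raise BusinessRuleViolation(
            f"{discount_percent}% exceeds the {MAX_DISCOUNT_PERCENT}% limit"
        )
    return price * (1 - discount_percent / 100)
```

So if the customer talks the model into offering 50% off, the tool call fails and the model has to walk the offer back: the transaction gets an opposing force even though only one person is physically involved.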
You gotta explain why the analogy is applicable, otherwise I could do the same thing and say "I don't understand why so many people are convinced that hovercars won't replace automobiles one day." Infinite analogies can be made to support any conclusion; they aren't worth anything without reasoning (implied or explicit) for why your analogy is particularly relevant compared to others.
Tbh I'm completely unsure about the AI-programmer debate; I don't have the knowledge of the AI landscape to make an informed decision. For that reason I do what I often do and make a meta-judgement based on the types of arguments made by each side.
Who is arguing that AI will replace programmers? People who are invested in AI, or people who want cheaper labor.
Who is arguing against that? Programmers who want to keep their job.
Not much to draw from that angle.
What KIND of arguments is each side making?
Programmers: specific points that touch reality directly.
AI-programmer supporters: typically abstract arguments that never touch reality directly, and that seem motivated by hype more than experience. In the past there have been cases where the abstract dreamer-hype crowd was right, but typically there are many pre-emptive waves that are wrong (as you'd expect: unless you assume people are incapable of anticipating a thing before its time, there will be waves of people who pick up on it before it arrives, and they will be pre-emptive).
For this reason, plus the very limited amount of AI-generated code I've seen applied to non-trivial projects (which doesn't and shouldn't hold much weight, because I'm not super familiar with the latest tech), I feel like AI replacing programmers is at least a decade off.
I also feel like people are framing the problem wrong in general. They jump from our current state to a state where we have capable AI programmers without imagining the incremental transformations in workplace structure along the way. Since the days of punch cards there's been a trend of programming languages moving closer to human language, and programmers will exist as a job until that trend reaches the point where they're "squeezed out." By that I mean: a programmer's job is to convert the intentions (not the words, an important distinction) of the product manager into code, so from this perspective they can be considered middlemen. Programmers will exist until AI is so good that a middleman is no longer needed, i.e. until a product manager can talk directly to an AI and get the desired results. Knowing how bad product managers are at explaining what they ACTUALLY need on a concrete, literal level, I think this problem is harder than people assume. Even if we had AI that produced perfect code doing exactly what was asked of it, I'm not sure that'd be good enough, precisely because it does EXACTLY what is asked of it.
For a second I thought the title was implying that they were giving the "all clear" TO the asteroid, like, "all good, we won't stop you if you wanna destroy us all." And it didn't even stand out to me as an especially crazy headline considering all the other shit I've been seeing lately.
My wife has a "Dr Dennis Gross Red Light Mask" that has a setting for 650nm + 850nm red light. It's supposed to help with aging or something, but the specs match up perfectly.
It's probably way too expensive for people to buy for this purpose (probably too expensive for skin care too; it's just a mask with red LEDs, and if I'd known that's what she was buying I probably would've just made one, which is probably why she didn't tell me). Anyway, for anyone out there looking to buy a product, it's worth checking whether your wife/gf already has something. There are lots of products like this for anti-aging as well as for hair growth.
The problem is that if this becomes more common, your argument won't be an option. It'd be akin to saying "just ignore the company's offer of health insurance": our whole system is now designed around employer-sponsored health insurance, and you'll be at a massive disadvantage by not accepting it.
It's because a lot of people are already struggling with the cost of housing. It's hard to tell people they should be worried about this company exploiting them when their only other option is to pay someone else even more money to keep a roof over their head.
Sure. But "if this were scaled up several orders of magnitude and continued for years, there would likely be bad side effects" is an argument against doing pretty much anything. Ever.
It's also a really good argument for not putting all our eggs in one basket and for avoiding total homogenization of the systems by which we live. This is not currently practiced at scale, as anyone who has boarded a plane in one state and landed in identical surroundings in another can attest.
I think people who don't like it are just afraid of where this appears to be heading, and are resisting the idea of employers gaining any more control over other aspects of people's lives. Of course, the people who do this are technically choosing to give up that control of their own volition, but it's possible to imagine a future where this becomes very normal and difficult to avoid.
My objection is the additional leverage it grants over employees. If reporting a safety issue in the workplace, reporting harassment, or even just landing on a bad boss puts me at risk of getting evicted from my house, I'm going to be far more motivated to suck it up and deal with it. Strike? Eviction. Complain? Eviction. Try to unionize? Turbo-eviction.
Now, that's less likely when the employee owns the house, as the article says, but that'll change too. And of course it ties your fates together. If the company goes under, your house becomes worthless too. If you want to move to a new employer, chances are you'll have to relocate anyway.
I guess next step is company scrip? What's old is new again.
In the specific case above you have purchased and own the house. Nobody can evict you. So you're comparing company-owned rental housing to companies buying tracts of land and constructing housing for purchase by employees, which is totally different.
The trouble with owning a house in a company town is that it's hard to sell it.
If the company won't buy it back, and they're the only local employer, and employment is declining, you're stuck. This is the story of many coal mining towns in West Virginia.
More likely you have a mortgage, and now there’s all the incentive in the world for the bank backing that mortgage to have a sweetheart arrangement with the company that “sold” you that property.
Has this ever happened in American history? That someone who was paying their mortgage was evicted because the bank had a sweetheart deal with someone? That's not how I understand mortgages to work.
- You buy your house from the company via a mortgage owned by the company bank.
- You start to build equity in your house.
- You talk on your lunchbreak with your coworker about whether a union would be a good idea.
- The company fires you for no particular reason.
- There are no other employers in commuting range, and your skill-set is not conducive to working from home. As a result:
- You fall behind on your mortgage. The company bank notes that you are delinquent and immediately starts foreclosure.
- The company real estate agency doesn't show your house to people. There are no other real estate agencies in this town.
- The company bank takes possession. You leave. The bank sells the house at auction. There is one bidder: the company-owned house leasing company. They bid less than the amount the bank is owed. The bank takes all of that money, as first creditor, and you get no money from the sale.
- The company-owned house leasing company cleans it out and leases it or sells it to another employee.
Fannie Mae just settled a foreclosure-discrimination class action in which they were failing to maintain foreclosed properties in minority neighborhoods. This caused a chain reaction where nearby homes went underwater and were subsequently foreclosed on.
Mortgage lenders have remarkable legal resources with which to make a homeowner’s life miserable. While they usually have a financial incentive not to, that doesn’t always pan out.