There seems to be a fair amount of enthusiasm in the UIs of these tools for hiding code from coders. It's as if the prompt interaction is the true source and the actual code is some annoying intermediate runtime inconvenience to cover up. I get that a lot of this can improve productivity for non-developers; I'm just not sure 'code' is the right term for it.
> There seems to be a fair amount of enthusiasm in the UIs of these tools for hiding code from coders. It's as if the prompt interaction is the true source and the actual code is some annoying intermediate runtime inconvenience to cover up.
I've finally started getting into AI with a coding harness, but I've taken the opposite approach. Usually I have the structure of my code in my mind already and talk to the prompt like I'm pairing with it. While it's generating the code, I'm telling it the structure of the code and the individual functions. It's sped me up quite a lot while I still operate at the level of the code itself. The final output ends up looking like code I'd write, minus syntax errors.
This is the way to do it if you're a serious developer: you use the AI coding agent as a tool, guiding it with your experience. Telling a coding agent "build me an app" is easy, but you get garbage. Telling an agent "I've stubbed out the data model and flow in the provided files, fill in the TODOs for me" gives you the control over structure that AI lacks. The code in the functions can usually be tweaked yourself to suit your style. They're also helpful for processing 20 different specs, docs, and RFCs together to help you design certain code flows, but you still have to understand how things work to get something decent.
Note that I program in Go, so there is only really 1 way to do anything, and it's super explicit how to do things, so AI is a true help there. If I were using Python, I might have a different opinion, since there are 27 ways to do anything. The AI is good at Go, but I haven't explored outside of that ecosystem yet with coding assistance.
My workflow is quite similar. I try to write my prompts and supporting documentation in a way that it feels like the LLM is just writing what is in my mind.
When I'm in implementation sessions I try not to let the LLM do any decision making at all, just faster writing. This is way better than manually typing, and my crippling RSI has been slowly getting better with the use of voice tools and so on.
The "power to the people" here isn't us, the developers and coders.
We already know how to do a lot of things, how to automate, etc.
A billion people don't, and they probably benefit a lot more initially.
When I did a PowerPoint presentation, I browsed around and dragged images from the browser to the desktop, then dragged them into PowerPoint. My colleague looked at me, bewildered at how fast I did all of that.
I've helped an otherwise very successful and capable guy (an architect) set up a shortcut on his desktop to shut down his machine. Navigating to the power-down option in the menu was too much of a technical hurdle. The gap in needs between the average HNer and the rest of the world is staggering.
This. I’m sure everyone has a similar story of how difficult it was to explain the difference between a program shortcut represented as a visual icon on a desktop versus the actual executable itself to somebody who didn’t grow up in the age of computing. And this was Windows… the purported OS for the masses not the classes.
Initially I thought you meant “software architect” and I was flabbergasted at how that’s possible. Took me a minute to realize there’s other architects out there lol.
Oh boy, the gap between the average IT professional and the AI pros here is already staggering, let alone the rest of the world. I feel like an alien, no matter where I am.
Yes! Even closing the windows of programs that users no longer need is hard.
It's easy to develop a disconnect with the level that average users operate at when understanding computers deeply is part of the job. I've definitely developed it myself to some extent, but I have occasional moments where my perspective is getting grounded again.
It's been a while since I've used Windows, but I seem to remember it giving a choice of sleep, logout, switch session, etc. I can totally see someone wanting a single button for it.
In UI, I’m pretty sure that replacement is already here. We’ll be lucky if at least backend stays a place where people still care about the actual source.
I'd say the opposite: the frontend code is so complex these days that you can't escape the source code.
If you stick to Tailwind + server-side rendered pages, you can probably go pretty far with just AI and no code knowledge, but once you introduce modern TS tooling, I don't think it's enough anymore.
Yes, the code is still important. For example, I tasked Codex with implementing function calling in a programming language, and it decided the way to do this was to spin up a brand-new sub-interpreter on each function call, load a standard library into it, execute the code, destroy the interpreter, and then continue -- despite a partial and much more efficient solution already being there in comments. The AI solution "worked" and passed all the tests the AI wrote for it, but it was still very, very wrong. I had to look at the code to understand that it had done this. To get it right, you have to indicate how to implement it, which requires a degree of expertise beyond prompting.
Do you ask it for a design first? Depending on complexity I ask for a short design doc or a function signature + approach before any code, and only greenlight once it looks sane.
I understand the "just prompt better" perspective, but this is the kind of thing my undergraduate students wouldn't do, so why is the PhD-expert-level coder that's supposed to replace all developers doing it? Having to explicitly tell it not to do certain boneheaded things leaves me wondering: what else is it going to do that's boneheaded which I haven't been explicit about?
Because it's not "PhD-expert level" at all, lol. Even the biggest models (Mythos, GPT-Pro, Gemini DeepThink) are nowhere near the level of effort that would be expected in a PhD dissertation, even in their absolute best domains. Telling it to work out a plan first is exactly how you would supervise an eager but not-too-smart junior coder. That's what AI is like, even at its very best.
I understand that, but 1) expert-level performance is how they are being sold; and moreover 2) the level of hand-holding is kind of ridiculous. I'll give another example: Codex decided to write two identical functions, linearize_token_output and token_output_linearize. Prompting it not to do things like that feels like plugging holes in a dike. And through prompting, can you even guarantee it won't write duplicate code?
I'll give a third example: I gave Codex some tests and told it to implement the code that would make the tests pass. Codex wrote the tests into the testing file, but then marked them as "shouldn't test", and confirmed all tests pass. Going back I told it something to the effect "you didn't implement the code that would make the tests work, implement it". But after several rounds of this, seemingly no amount of prompting would cause it to actually write code -- instead each time it came back that it had fixed everything and all tests pass, despite only modifying the tests file.
In each example, I keep coming back to the perspective that the code is not abstracted, it's an important artifact and it needs/deserves inspection.
> the code is not abstracted, it's an important artifact and it needs inspection.
That's a rather trivial consideration though. The real cost of code is not really writing it out to begin with, it's overwhelmingly the long-term maintenance. You should strive to use AI as a tool to make your code as easy as possible to understand and maintain, not to just write mountains of terrible slop-quality code.
Yep, all models today still need prompting that requires some expertise. Same with context management, it also needs both domain expertise as well as knowing generally how these models work.
Code isn't going anywhere. Code is multiple orders of magnitude cheaper and faster than an LLM for the same task, and that gap is likely to widen rather than contract because the bigger the AI gets the sillier it gets to use it to do something code could have done.
Compare the actual operations done for code to add 10 8-digit numbers to an LLM on the same task. Heck, I'll even say, forget the possibility the LLM may be wrong. Just compare the computational resources deployed. How many FLOPS for the code-based addition? How many for the LLM? That's a worst-case scenario in some ways but it also gives you a good sense of what is going on.
Humans may stop looking at it but it's not going anywhere.
Everyday people can now do much more than they could, because they can build programs.
The idea that code is something sacred and only devs can somehow do it is dying, and I personally love it, as I am watching it enable so many of my friends and family who have no idea how to code.
Today, when we think of someone "using the computer" we gravitate towards people using apps, installing them, writing documents, playing games. But very rarely have we thought of it as "coding" or "making the computer do new things" -- that's been reserved, again, for coders.
Yet, I think that a future is fast approaching where using the computer will also include simply coding by having an agent code something for you. While there will certainly still be apps/programs that everyone uses, everyone will also have their own set of custom-built programs, often even without knowing it, because agents will build them, almost unprompted.
To use a computer will include _building_ programs on the computer, without ever knowing how to code or even knowing that the code is there.
There will of course still be room for coders, those who understand what's happening below. And of course software engineers should know how to code (less and less as time goes on, though, probably), but there's no doubt to me that human-computer interaction will now include this level of sophistication.
> The idea that code is something sacred and only devs can somehow do it is dying, and I personally love it, as I am watching it enable so many of my friends and family who have no idea how to code.
People on HN are seriously delusional.
AI removed the need to know the syntax. Your grandma does not know JS but can one-shot a React app. Great!
Software engineering is not and has never been about the syntax or one-shotting apps. Software engineering is about managing complexity at a level that a layman could not. Your ideal world requires an AI that's capable of reasoning over 100k to 1 million lines of code without making ANY mistakes, with all edge cases covered or clarified. If (when) that truly happens, software engineering will not be the first profession to go.
I never said Software Engineering is dying or needs to go. I'm not the least bit afraid of it.
In fact, in the very message you're replying to, I hinted at the opposite (and have since in another post stated explicitly that I very much think the profession will still need to exist).
My ideal world already exists, and will keep getting better: many friends of mine already have custom-built programs that fit their use case, and they don't need anything else. This also didn't "eat" any market of a software house -- this is "DIY" software, not production-grade. That's why I explicitly stated this is a new way of human-computer-interaction, which it definitely is (and IMO those who don't see this are the ones clearly deluded).
I WISH we weren't phoning with them anymore, but people keep trying to send me actual honest-to-god SMS in the year 2026, collecting my phone number for everything including the hospital, and expecting me not to have calls from non-contacts blocked by default even though there are 7 spam calls a day.
In what world would I prefer to give someone access to me via a messaging app rather than a fully-async text SMS message? I don't even love that people can see if you've read their texts now.
It's not the code I write, it's what I've noticed from people in 25 years of writing code in the corner.
All of my friends who, two years ago, would have died before using AI now call themselves AI/agentic engineers because the money is there. Many of them don't understand a thing about AI or agents, but CC/Codex/Cursor can cover up for a lot.
Consequently, if Claude Code/"coding agents" is a hot topic (which it is), people who know nothing about any of this will start raising money and writing articles about it, even (especially) if it has nothing to do with code, because these people know nothing about code, so they won't realize what they're saying makes no sense. And it doesn't matter, because money.
Next thing you know, your grandma will be "writing code" because that's what the marketing copy says. That's all it takes for the zeitgeist to shift for the term "code". It will soon mean something new to people who had no idea what code was before, and it will be infuriating to people who do know (but aren't trying to sell you something).
I know that's long-winded but hopefully you get where I'm coming from :D.
Totally this. People who don't see this seem to think we're in some sort of "bubble" or that we don't "ship proper code" or whatever else they believe in, but this change is happening. Maybe it'll be slower than I feel, but it will definitely happen. Of course I'm in a personal bubble, but I've got very clear signs that this trend is also happening outside of it.
Here's an example from just yesterday. An acquaintance of mine who has no idea how to code (literally no idea) spent about 3 weeks working hard with AI (I've been told they used a tool called emergent, though I've never heard of it and therefore don't personally vouch for it over alternatives) to build an app to help them manage their business. They created a custom-built system that has immensely streamlined their business (they run a company to help repair tires!) by automating a bunch of tasks, such as:
- Ticket creation
- Ticket reporting
- Push notifications on ticket changes (using a PWA)
- Automated pre-screening of issues from photographs using an LLM for baseline input
- Semi-automated budgeting (they get the first "draft" from the AI and it's been working)
- Deep analytics
I didn't personally see this system, so I'm surely missing a lot of detail. The person who did see it is a friend I trust, who called me to relay how amazed they were with it. They saw that it was clearly working as intended. The acquaintance was thinking of turning this into a business of its own, and my friend advised them that they likely won't be able to, because this is very custom-built software, really tailored to their use case. But for that use case, it's really helped them.
In total: ~3 weeks + around 800€ spent to build this tool. Zero coding experience.
I don't actually know how large the "gains" are, but I don't doubt they'll be worth it. And I'm seeing this trend more and more everywhere I look. People are already starting to use their computers by coding without knowing it; it's so obvious this is the direction we're going.
This is all compatible with the idea of software engineering existing as a way of building "software with better engineering principles and quality guarantees", as well as still knowing how to code (though I believe this will be less and less relevant).
My experience using LLMs in contexts where I care about the quality of the code, as well as in personal projects where I barely look at the code (i.e. "vibe coding"), is also very clearly showing me the direction for new software. Slowly but surely, we won't care so much about the actual code, as long as the requirements are clear, there's a plethora of tests, and LLMs are around to work with it efficiently; the big "if" being: "as the codebase grows, developing a feature with an LLM is still faster than building it by hand". It is scary in many ways, but agents will definitely become the medium through which we build software, and my hot take (as others have said too) is that, eventually, the actual code will matter very little -- as long as it works, is workable, and meets requirements.
For legacy software, I'm sure it's a different story, but time ticks forward, permanently, all the time. We'll see.
Fully agree. Non-dev solutions are multiplying, but devs also need to get much more productive. I recently asked myself "how many prompts to rebuild Doom on Electron?" I got a working result on the third one. Still buggy, though.
The devs who'll stand out are the ones debugging everyone else's vibe-coded output ;-)
That’s a pretty far cry from “nobody makes phone calls”. You can also find people who spend 6+ hours on phone calls everyday, including people under 30.
On the flip side, I cause a medium panic in my daughter when I text "please call me when you can" without a why attached. She assumes someone's in the hospital or dying or something.
My mom had to lay down a rule that if I called her at a weird hour I needed to open with whether or not I was okay. Almost 30 now and still do the same thing.
Agree with that; those seem like really good limits for daily use on something like a $20 ChatGPT account. I'm in the curious situation of using the Codex CLI within the Cursor IDE and not really getting value out of my $60 Cursor sub. Plus, at every update Cursor seems to break more of their UI in the 'not a cloud agent chat UI' mode, i.e. the more traditional VSCode-style code-first layout. I should probably cancel.
This looks interesting, and I already use Codex a fair bit in VS Code etc., but I'm having trouble moving from a 'code editor with AI' to an environment that looks like it treats the code as a hidden secondary artefact. I guess the key thing is the multi-agent spinning-plates part.
I find that to be the case too. For more complex things, my future ask would be something that formalizes verification/testing into the AI dev cycle. My confidence in not needing to see code is directly proportional to my level of comfort with test coverage (even if it's quite high-level UI/integration coverage rather than 1 != 0 unit stuff).
I did too -- I did the SF-back-to-NYC leg that goes the north route as well. Amazing experience; I was only 19 at the time. My favorite memory is that on the westbound leg we met up with the eastbound one in (I think) Yellowstone, and while parking the buses they managed to slowly crash into each other (just a dent, nothing serious). I liked that they both started from separate coasts and ended up colliding.
Can I ask what year that was in? Several people in here seem to be intimately familiar with this company, but it's my first time hearing about it, despite me being quite interested in travel and all sorts of weird transit methods. It made me wonder if everyone here used them when they were at their peak many decades ago, or if this varied.
1989. I saw a paper flyer in the New York Public Library and turned up a few days later at the bus station on impulse. It was basically a moving commune, where you slept on flat boards, cooked together, and moved glacially slowly towards the destination. The NY -> SF route went south via FL, TX, etc. It was mainly young people, mostly not actually from the US: Europe, Australia, etc.
So is gpt-5.2 with reasoning set to 'none' maybe identical to gpt-5.2-chat-latest in capabilities, but perhaps with a different system prompt? I notice chat-latest doesn't accept temperature or reasoning parameters (which makes sense), so something is certainly different underneath.
> ChatGPT is widely used for practical guidance, information seeking, and writing, which together make up nearly 80% of usage. Non-work queries now dominate (70%). Writing is the main work task, mostly editing user text. Users are younger, increasingly female, global, and adoption is growing fastest in lower-income countries
That's funny, the way I interpreted this sentence is that usage was already high in older, male, and high-income countries so most of the new users are coming from outside these demographics. Which, ironically, is the exact opposite of what you're saying.
You read "Users are younger, increasingly female, global, and adoption is growing fastest in lower-income countries" and gathered that "Young moms with no money in poor countries use this product the most". Do I really need to spell out the fact that you completely failed to understand basic English here?
If they mostly ask how to raise their children and follow the received advice... Then yeah, in some 20 years we'll see what kind of return we get. People raised on social media are one thing; people raised by (with the assistance of) ChatGPT may be even worse off because of it.
My initial assumption would be that there are a lot, likely a majority, of parents who have had next to no advice on how to raise kids. Furthermore, I would posit that many of them were not raised in particularly nurturing circumstances themselves.
As such, I would expect that the advice ChatGPT gives (i.e. an average from parenting advice blogs and forums), would on average result in better parenting.
That's obviously not to say that ChatGPT gives great advice, but that the bar is very low already.
You're right, as much as I'd like not to be aware of it. Indeed, the bar is very low.
Whether heeding ChatGPT advice would be better or worse than no advice at all, I honestly cannot say. On the one hand, getting some advice would probably help in many, many cases - there's a lot of low-hanging fruit here; on the other, low-quality advice has the potential to ruin the lives of multiple people at any moment. This is like medical or lawyer advice: very high stakes in many cases. Should we rely on a model that doesn't really understand the underlying logic for advice on such matters? The "average" of parenting blogs can be a mish-mash of different philosophies or approaches glued together, making up something that sounds plausible but leads to catastrophic results years or decades later.
I don't know. Parenting is a complex problem in itself; then you have people generally not looking for advice or being unable to recognize good advice. It doesn't look like adding a hallucinating AI model to the mix would help much, but I may be wrong on this. I guess we'll find out the hard way: through people trying (or not) it out and then living with consequences (if any).
No amount of LinkedIn speak can fix the "poor" part of it.
In 2025, it's abundantly clear that the mask is off. Only the whales matter in video games. Only the top donors matter in donation funding. Modern laptops with GPUs are all $2k+ machines. Luxury condos are everywhere. McDonald's revenues and profits are up despite pricing out a lot of low-income people.
The poor have less of the nothing they already have. You can make a hundred affordable cars, or get as much, if not orders of magnitude more, profit with just one luxury vehicle sale.
Most political donors are $25/month Actblue donations, and it doesn't matter because the campaigns with the most donations regularly lose.
> McDonald's revenues and profits are up despite pricing out a lot of low-income people.
They didn't really raise prices, they just put coupons in the app.
> Luxury condos are everywhere.
Houses don't cost more because they have "luxury" features. A nicer countertop doesn't hypnotize people into paying more for a house. Prices are negotiated between buyer and seller and most of the development cost is the land price.
> The poor have less of the nothing they already have.
Wage inequality in the US is lower than it was in 2019. In general income inequality hasn't increased since 2014.
You have no idea if they’re ambitious or educated. Absolutely no idea. Is it just commonplace to inject “facts” into conjecture? Comes off as desperate.
I get a lot of productivity out of LLMs so far, which for me is a simple good sign. I can get a lot done in a shorter time and it's not just using them as autocomplete. There is this nagging doubt that there's some debt to pay one day when it has too loose a leash, but LLMs aren't alone in that problem.
One thing I've done with some success is use a Test-Driven Development methodology with Claude Sonnet (or, recently, GPT-5), moving the feature forward in discrete steps, with initial tests and within the red/green loop. I don't see a lot written or discussed about that approach so far, but reading Martin's article made me realize that the people most proficient with TDD are not really in the Venn-diagram intersection with those wanting to throw themselves wholeheartedly into agentic coding with LLMs. The 'super Clippy' autocomplete is not the interesting way to use them; it's with multiple agents and prompt techniques at different abstraction levels that you can really cook with gas. Many TDD experts take great pride in the art of code, communicating like a human and holding the abstractions in their heads, so we might not get good guidance from the same set of people who helped us before. I think there's a nice green field of 'how to write software' lessons coming up with these tools, with many cautionary tales and lessons being learnt right now.
It feels like the TDD/LLM connection is implied -- "and also generate tests" -- though it's not canonical TDD, of course. I wonder if it'll turn the tide towards tech that's easier to test automatically, like maybe SSR instead of React.
Yep, it's great for generating tests, and so much of that is boilerplate that it feels like great value. As a super lazy developer, it's nice having the burden of all that mechanical 'stuff' just spat out. Test code as baggage feels lighter when it's churned out as part of the process: no guilt in deleting it all when what you want to do changes. That in itself is nice. Plus of course MCP things (Playwright etc.) are great for integration testing.
But like you said, I meant TDD more as 'test first' -- a sort of 'prompt-as-spec' that produces the test/spec code first, and then you iterate on that. The code design itself is different, because it's influenced by how it is prompted to be testable. So rather than 'prompt -> code', there's an in-between stage of prompting the test initially and then evolving it, making sure the agent plays the game of only writing testable code and automating the 'gate' of passing before expanding something: a 'prompt -> spec -> code' loop, repeated until shipped.
The only thing I dislike is what it chooses to test when asked to just "generate tests for X": it often builds those "straitjacket for your code" style tests which aren't actually useful for catching bugs; they just act as "any change now makes this red".
As a simple example, a "buildUrl"-style function that used one host for prod and a different host for staging (based on an "environment" argument) had that argument "tested" by exactly comparing the function's entire return string, encoding all the extra functionality into it (which was tested earlier anyway).
A better test would check startsWith(prodHost) or similar, which is what I changed it into, but I'm still trying to work out how to get coding agents to do that on the first or second attempt.
But that's also not surprising: people write those kinds of too-narrow not-useful tests all the time, the codebase I work on is littered with them!
LLMs (Sonnet, Gemini from what I tested) tend to “fix” failing tests by either removing them outright or tweaking the assertions just enough to make them pass. The opposite happens too - sometimes they change the actual logic when what really needs updating is the test.
In short, LLMs often get confused about where the problem lies: the code under test or the test itself. And no amount of context engineering seems to solve that.
I think part of the issue is that the LLM does not have enough context. The difference between a bug in the test and a bug in the implementation is purely a matter of the requirements, which are often not in the source code but stored somewhere else (ticket system, documentation platform).
Without providing the actual feature requirements to the LLM (or the developer), it is impossible to determine which is wrong.
Which is why I think it is also sort of stupid to have the LLM generate tests by just giving it access to the implementation. That is at best testing the implementation as it is, but tests should be based on the requirements.
Oh, absolutely, context matters a lot. But the thing is, they still fail even with solid context.
Before I let an agent touch code, I spell out the issue/feature and have it write two markdown files - strategy.md and progress.md (with the execution order of changes) inside a feat_{id} directory. Once I’m happy with those, I wipe the context and start fresh: feed it the original feature definition + the docs, then tell it to implement by pulling in the right source code context. So by the time any code gets touched, there’s already ~80k tokens in play. And yet, the same confusion frequently happens.
Even if I flat out say "the issue is in the test/logic", even if I point out _exactly_ what the issue is, it just apologizes and loops.
At that point I stop it, make it record the failure in the markdown doc, reset context, and let it reload the feature plus the previous agent’s failure. Occasionally that works, but usually once it’s in that state, I have to step in and do it myself.
Not sure. If the Flash image output is $30/M [1], then that's pretty similar to gpt-image-1 costs. So a faster and better model, perhaps, but not really cheaper?
Since I can't edit: it seems Flash image comes to about 23% (4 cents vs 17 cents) of the cost of OpenAI's gpt-image-1, if you're putting an image and prompt in and getting out, say, a 1024x1024 generated image. With the quicker production time, that makes it interesting. I expect OpenAI to respond at least on pricing, e.g. with a flat-rate output cap price or something comparable.
Maybe if there was some political will for building stuff but there isn't. Canada should be an absolute AI and energy powerhouse, but our politicians are some of the most incompetent buffoons on the planet.
I don't know enough about Canada to know if this is a reasonable take or not, but I think you'd get downvoted less if you took a few sentences to articulate what the politicians' main failings are.
That would take more than a few sentences, but in general there is a lack of willingness to build new infrastructure. Canada has endless opportunities to both export energy (and not just oil!) and use it domestically - we should be utilising this untapped potential to build datacenters and invest in AI companies and research. Instead we can't even build new houses or hospitals for our exploding population.
So the question becomes whether these countries truly want to move off of the platform, or if this is all more of a bargaining chip in the trade negotiations.
JD Vance pretty much single-handedly destroyed most trust in the US with his speech at the Munich Security Conference. Europe (and probably Canada and Australia) was shaken for days after it and realized that the US is not a reliable ally (or even an ally) anymore. This was confirmed by the disastrous meeting with Zelensky in the White House and by the US stopping intelligence sharing with Ukraine and F-16 updates (F-16s which were provided by European countries, not the US).
The pathetic little show you saw at the White House last week (with Macron, Merz, etc.) is just a strategy to appease the US as long as needed so that Europe can speed up its own weapons production, increase independence, etc. It's damage control. The reason countries have stopped buying the F-35 is that nobody trusts the US anymore. And one or two sane presidents are not going to fix it (the US elected Trump a second time, after all).
It is interesting how it is basically an indictment of the American people's ability to manage their hard and soft power and military capability. That said, populist right-wing movements are taking root in Europe as well. This threatens long-term strategic planning in general, not just with the US, when critical positions of world power are handed over every few years by a subset of the population increasingly susceptible to technology-amplified propaganda. In some ways, regimes like North Korea are the most stable on earth, due to careful control of the reins of power and the lack of any inroads for third-party influence.
It's crazy that you're acting like this is some kind of policy failure for the US, when this administration has been telling Europe it shouldn't rely on the United States to this degree. This isn't some "gotcha" you're describing; it's exactly what the administration wanted Europe to do. Wake up and start innovating instead of being Disneyland for American tourists.
We Europeans are just baffled by the fact that this 'administration' wants this. The EU is a big economy that's relatively easy to deal with. Why would you alienate us?
But yeah, so far Trump has been relatively true to his word, as far as it goes. Not so much in practical policy, but in going further down the road of a, dare I say, fascist outlook. I think Europeans still can't believe it's happening, much less that it's intentional.
You are effectively saying that Europe should be a vassal state of the US and cannot have its own laws. Europe has a different vision of privacy and competition. The regulatory demands on e.g. Apple are peanuts compared to what China asks. Apple bends over backwards to please China, but when Europe has some requirements for doing business, parts of the US trot out the tired trope "the US innovates, Europe regulates".
> We have too many problems at home to be daddy with a credit card.
First, this is rich coming from a country living on borrowed money (which it can only get away with because the rest of the world uses its currency as the default).
Second, a lot of the problems of the US are caused by the lack of proper wealth redistribution and the lack of efficient health care (no, the US doesn't subsidize European healthcare; European countries spend far less on healthcare with better outcomes). None of that is solved by throwing lifelong allies under the bus and trading them for some dictator friends.
Finally, the security situation also arose because the US did not want European militaries to become too powerful and pushed hard for them to be dependent on the US and US tech. For instance, countries have to buy US fighters for nuclear sharing, etc. The primary exception is France, because they never wanted to be reliant and have their own nuclear force.
Also let’s not forget Article 5 was only invoked once (by the US) and we were happy to help, because that’s what friends do. We have been in Afghanistan for over 20 years as a result and a lot of our soldiers died and were injured.
> The fact that you can't understand this is exactly why this administration is doing this.
> because you believe that excellence is not worthy of being rewarded. Your culture has the mindset that excellence is not a product of hard work and determination, it's a product of luck and nepotism, so any hint of excellence gets taken away and diminished.
This administration does not believe in rewarding excellence, hard work, or determination. It’s an administration by the most malicious, incompetent people who have ever led this country.
I don't know why Europe wants so badly to be reliant on the US. It's bad for them, it's bad for us. It's embarrassing for Europe that Ukraine is relying on the US instead of Europe for defense. It's embarrassing for Europe how little they contribute to NATO. The US isn't a partner, it's a caretaker. And as they say, if someone provides what you need, they also have the power to take it away.
Outsourcing your defense is stupid.
Europe should be thanking Trump for waking them up to the reality that has always been the case through his boorish negotiation.
Defense is a bit like advertising or finance. It has some aspects of a zero-sum game and a negative-sum game. All the money you invest in it is wasted. But if your enemy/competitor chooses to waste more money, you may be in trouble.
From a European perspective, the entire purpose of NATO from 1992 to 2022 was to avoid wasting too much money on defense. Because, for some reason, Americans were willing to do it instead.
Then Russia invaded Ukraine, and the calculus changed. Now European countries are rebuilding their defensive capabilities, while Russia is still bogged down in Ukraine. Given the lack of credible short-term threats, limiting defense spending was clearly the right choice until 2022.
Also, it makes sense to have a capability only once within an alliance. If the US has the command, space, and air capabilities, why would anyone else need them? You can add to their capability by buying F-35s and hosting their air bases.
Now that we are not allies anymore, we need to wastefully build up our own command, space, and air capabilities, resulting in duplicated effort.
>> Ukraine is relying on the US instead of Europe for defense.
Is it? Especially in 2025.
It is embarrassing how little (and how old the) heavy equipment was that the USA provided to Ukraine. North Macedonia provided the same number of main battle tanks as the USA; Poland provided ten times more. And zero fighter jets.
Anyway, the people of Ukraine are thankful for any support, and the USA was the biggest donor during the first years of the war.
So it’s true that Europe/Canada spent less, but it comes with a big fat asterisk: the US also wants to project power in the Pacific/Asia, whereas European defense is primarily focused on deterring Russian aggression (plus peace missions and supporting the US in various operations to give them more legitimacy).
> Europe should be thanking Trump for waking them up to the reality that has always been the case through his boorish negotiation.
That credit should go to Putin; European spending has grown rapidly since the annexation of Crimea.
The credit Trump should get: we will stop buying US weapons as quickly as we can and focus on non-US alternatives. It’s going to take a while, but US materiel has certainly become less attractive.