I had to upgrade the firmware in my HP printer a couple years ago.
It’s a printer that I think was released in ~2009 (I am not able to check right now), and in order to upgrade the RAM to 256MB I needed to do a firmware update.
I dreaded this, but then I found out that all you had to do to update the firmware was FTP a tarball to the printer over the network. I dropped it in with FileZilla, it spent a few minutes whirring, and my firmware was updated.
Then I got mad that firmware updates are ever more complicated than that. Let me FTP or SCP or SFTP a blob there, do a checksum or something for security reasons, and then do nothing else.
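For what it's worth, that whole dance is also scriptable in a few lines. Here's a minimal sketch using Apache Commons Net's FTPClient; the address, credentials, and filename are invented, so adjust for whatever your device actually expects:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FirmwareUpload {
    public static void main(String[] args) throws IOException {
        FTPClient ftp = new FTPClient();
        // Hypothetical address and credentials; many printers of that era
        // accepted anonymous FTP on the LAN.
        ftp.connect("192.168.1.50");
        ftp.login("anonymous", "");
        // Binary mode, so ASCII translation doesn't mangle the image.
        ftp.setFileType(FTP.BINARY_FILE_TYPE);

        try (InputStream blob = new FileInputStream("firmware_update.tar")) {
            // The printer notices the upload and flashes itself.
            boolean ok = ftp.storeFile("firmware_update.tar", blob);
            System.out.println(ok ? "Uploaded; now let it whir." : "Upload failed.");
        } finally {
            ftp.logout();
            ftp.disconnect();
        }
    }
}
```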
My favorite firmware update story is from the time I had to reflash firmware on an old IBM Fibre Channel/SCSI gateway because it had become corrupted and wouldn't boot.
Fortunately the first stage bootloader (which may have been in ROM) was intact, and had debugging commands that allowed reading and writing bytes of memory one at a time, and jumping to a specific memory address.
After using IDA to find the compressed firmware in the update blob and figure out how the update process worked, I wrote an expect script that used the bootloader commands to slowly poke into RAM, a byte at a time, the firmware along with the code that decompressed and copied it to flash (extracted from the firmware itself after decompressing it with zlib), and then jump to the uploaded code to finish the installation.
Worked like a charm, and enabled me to continue using the device for several years until I no longer had a use for it.
I think my favorite is wifi access points that support TFTP to load a firmware image (with some kind of hardware switch to enable this state). These can be made effectively unbrickable, which is really nice for experimenting.
I'm not sure if it was what OP meant, but it's arguably a good availability technique (as long as you can generate the checksum, that is). Like, if I want to run custom firmware and flash it, having a checksum which verifies that the firmware isn't corrupted may help prevent bricking.
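To make the checksum half concrete: verifying a digest before flashing is only a few lines. A sketch in Java, where the expected digest is a placeholder you'd copy from the vendor's release notes or a signed manifest:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class FirmwareChecksum {
    // Placeholder; in practice this comes from the vendor's release notes
    // or a signed manifest, not from the same place as the blob itself.
    static final String EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000";

    public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
        byte[] blob = Files.readAllBytes(Path.of("firmware_update.bin"));
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(blob);
        String actual = HexFormat.of().formatHex(digest);
        if (!actual.equals(EXPECTED_SHA256)) {
            throw new IllegalStateException("Checksum mismatch; do NOT flash this image.");
        }
        System.out.println("Checksum OK, safe to upload.");
    }
}
```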
I've never actually touched weed, but I would see this with my friends in high school and college.
In the better case, they just become insufferable and pseudo-intellectual because they started watching Alan Watts and Carl Sagan while stoned and would become convinced that they know everything about physics and philosophy.
In a lot of cases though, and this is more obvious in hindsight, it feels like they were using weed as a means of dealing with the fact that they were deeply unhappy and depressed people. Instead of confronting their problems and seeing a therapist/psychiatrist or any of the other things that they could do to actively improve their life, they would spend their evenings and weekends getting high.
I don't inherently have an issue with people using recreational drugs; I've gotten drunk before [1], but it should be done in moderation.
[1] I never did it that much and I haven't had anything to drink at all in years.
On the opposite end of that is the plethora of psychiatry/psychology professionals who are terrible at their profession and are likely causing more harm than good.
I see it along the same lines as brands, your typical Great Value psychologist will greatly underperform the Kirkland psychologist who will greatly underperform the ... and so on.
Then there's the subset of the population who have been abused in the most horrific ways by psychologists.
Not to counter your point, just as additional discussion.
Great, how does one fix the therapists? Or is this another one of those tricky systemic generational problems that mean I'll have to YOLO something for my own life before they're fixed?
I'm not claiming I know how to fix it, but I am quite confident that the solution to issues with depression, focus, or any number of other psychological problems is not reading a Reddit post about weed and/or mushrooms and pretending you understand enough about pharmacology to fix these problems.
Therapists and psychologists and psychiatrists require training and as such will still be considerably more likely to help you than weed. Obviously there are bad professionals; I've hired bad electricians before but that does not imply I should try and do all the wiring in my house myself.
If only there were some sort of system where everyone who had money gave it to some organization, percentage based so rich people gave more, and then this organization used it to pay for therapists, and everyone had access to therapy and other stuff like that? Totally insane crazy idea, I know. It could never possibly work. But just imagine if it did! Everyone would be able to get the therapy and help they deserve. What a world that would be. Alas.
Just because you can name counterexamples doesn't really undermine my point. I know people who were alcoholics in college and then stopped being alcoholics later but that doesn't mean we should encourage people to be alcoholics in college.
I don't think the president can pardon away a lawsuit. He could pardon away a crime, and sometimes the crime can be a basis for a civil lawsuit, but in this case I don't think anyone has seriously considered criminally charging Jones with anything here.
> I don't think the president can pardon away a lawsuit.
Never underestimate Trump's ability to decree something and hope it sticks long enough to cause real damage before the courts eventually strike it down. It took almost a year until the Supreme Court struck down the tariffs; by the time the first large corporations get their refunds it will be over a year, and honestly I'd be surprised if the first consumers get refunds by the end of 2026.
Trump's ability to do that is caused solely by a lot of people across all branches and levels of government being too afraid to say "no" to him and end up on the receiving end of "you're fired".
That was an impressively stupid and/or lazy fuck-up, to the point where I think Jones could have a lawsuit against his attorneys there.
IANAL but it does seem like "sending an entire copy of your client's phone and making no effort to redact it" could be a thing that, you know, is bad counsel.
No, the lawyers can argue about the scope of what they show the jury during trial. There are plenty of rules against biasing or inflaming the jury with unrelated material.
I think the theoretical idea is that we want to ensure that there is always a very large domestic corn supply, in the event of war or something. Corn is a crop that has a lot of uses; it can be refined into sugar, it can be used as flour, it's relatively energy dense food-wise, and it can be fairly easily converted into fuel if necessary.
I'm not saying I fully agree with the reasoning but I at least kind of get it.
right. view it instead as "we need to keep domestic corn production above a certain threshold." the result is that we have a lot of extra corn. now the question becomes: what do we do with this extra corn? we can either throw it away, or turn it into fuel.
it's not an efficient course if the target is fuel, but that's not the target. it is a decent use if we have lots of corn that nobody wants, which we do.
I wasn't suicidal when I was in high school, but I absolutely understand people's depression around the school year.
As an adult I actually don't hate school as much, but I really did view it like a prison when I was a teenager. I didn't like homework, I didn't like most of my teachers, and while I liked learning, schools have to go at a pace slow enough for the dumbest person they want to pass, so I would get very bored during class; high school in general was existentially dreadful every day. Even when I got home, I would dread the fact that in ~15 hours I would have to go back to school again.
It didn't help that there was a dread around grades in general; I wasn't abused or anything, and I think my parents were in general pretty ok at parenting, but as report card season came nearer and nearer, I would get more and more depressed, because when I inevitably got middling-to-bad grades, I would get a lecture and/or get grounded by my parents. This meant no computer, no games, and not being allowed to hang out with my friends, in the hope that it would force me to study more. It's not dumb logic, but it just didn't work. I would just be sad and angry and still wouldn't do the homework.
No doubt a large chunk of this was just hormonal, but I really think that the typical American school system is not a good fit for a lot of people, myself included. I don't think anyone has ever seriously called me stupid, but I was in the camp that endlessly frustrated teachers: I would do well on the tests, I would do well on the AP exams, no one disputed that I understood the course material well enough, but I just didn't care enough to do the homework, so they were forced to give me bad grades. I don't blame the teachers for this at all; they were just doing their jobs.
Despite being in AP classes and having skipped two grades in math, I was seriously considering dropping out of high school and just trying for the GED so I wouldn't have to go anymore, and I probably would have done that if I didn't think that my parents would freak out.
I didn't want to kill myself, but very few things brought me more joy in my life than knowing I wouldn't ever have to go back to high school again. I know a lot of people say that these are the best times of their lives, and power to them for that, but they were decidedly not for me.
I've thought about doing stuff like that (or mounting S3), but I've never done it because I figured that the latency would be enormous and it would be too slow to actually be useful.
That said, I could actually kind of see it being useful for stuff that doesn't need to be super fast but does allocate a lot of RAM (though I'm drawing a blank on what that would be at the moment).
"Enormous latency" is all relative. A network drive on a local network holding a swap file would still outperform some of the computers I've owned that put their swap on the much, much slower hard drives of their time. Of course, nobody was trying to swap two gigabytes to those drives, as that would have been 10 times their capacity...
I was required to use Cursor for my job when I first started, but once I figured out how to use the command line version of Codex, I kind of stopped seeing the point. It just kind of seemed like a bloated, overpriced wrapper around what I could do with the included ChatGPT membership I already had for work.
Maybe I was missing something, but I do not understand how it is worth sixty billion dollars.
The anti-Cursor sentiment here is baffling to me given how useful I find it. I use it interactively and actively review everything it produces. I like how I can plan a feature and refine the plan before instructing the agent to implement it. Last I checked, vscode had none of those features. Do (seemingly most) people prefer Codex because it gives a greater degree of autonomy to the agents?
> I like how I can plan a feature and refine the plan before instructing the agent to implement it
You can do that with claude code, github copilot (built into vs code) and codex, in any of their IDE versions, plugins for other ides (jetbrains, vscode, anything else you care to name) and also, of course, the CLI versions of all of them. They're also integrated into github, jira, and everything else.
Seriously, try other tools, if only to get a more balanced perspective.
This all being said, it's been a long time since I last tried Cursor... I'll give it a go.
I am personally not a fan of VS Code regardless, but I guess I don’t understand what it buys you over having one code editor window and a Codex window both open.
I have, right now, a tmux session with Codex on the bottom and Neovim on the top. It does what I was doing in Cursor just fine.
I am not really “anti Cursor”, I just genuinely am confused as to what it actually buys me over the setup I just described.
Here's why I use Cursor. My company pays for it, although I could switch to Claude Code or use Codex more since I also have a ChatGPT enterprise account.
* Perhaps this could be solved with the right terminal software, but I like the GUI for seeing my running agents and viewing all my conversations
* Works with multiple model providers in the same tool. I probably worry about cost optimization more than my employer would care for me to, but I frequently switch between openai/anthropic and switch between model sizes to use the tool that I think can get the job done for the least money. Another thing I like is having a long conversation with an expensive model, then switching to 5.4-nano to cheaply extract some little piece of information or summary from the conversation. Really, the big thing is being able to switch model providers over the months without having to change my interface.
* Good support for the various ways of providing context: rules, AGENTS.md/CLAUDE.md files (if you want it to automatically read those), skills. Good hook support.
* I think the agent diff review experience is pretty good, but maybe it works similarly when you hook the cli agents into an editor, IDK.
* The default shell sandbox behavior is quite good. Every shell command runs in some sort of sandbox so that read-only commands work without approval. The model asks for more permissions when it tries to do something that needs them, like network access or writing outside of the workspace directory. I know Claude Code has a similar feature you can use.
* Good fork / revert conversation to checkpoints, with the option of reverting the code or just reverting the conversation.
* Feels decent that I am an API customer through Cursor. I don't hit Claude limits. Cursor doesn't have an incentive to limit reasoning or token usage; if anything, they have the opposite incentive.
* They are reasonably responsive to bugs and feature requests through their forum.
* Works well with a lot of repos / folders added to your workspace. I probably should organize all my stuff under a single directory, but alas I have like 8 different folders added to my workspace and it handles this well. Perhaps Claude --add-dir support works fine too.
DOWNSIDES:
* They are not quickly adding the best open source models to Cursor. Like Kimi 2.6 or whatever. Possibly they're not incentivized to, given their Composer models.
* Don't love the subagent support. I can define custom subagents, although it is not easy to get models to use mine instead of the builtin ones. The builtin ones do not allow me to control what model they run, so they will always run something like composer-2-fast, which is a fine model for all I know, but I would like to control it. Also, I would like it if the subagent experience could optionally be more first class: browse all the subagents, continue conversations with them, switch their model, etc., although that is probably tricky/weird.
My employer pays for Cursor and Claude but not Codex. I often find Claude dumb (yes, even Opus), thus I'm using Cursor with GPT-5.4. If you have Codex, you don't miss anything.
and I'm being completely neutral and objective in saying this: Elon Musk has been a horrible capital allocator but great at financial engineering. X is still struggling to win back advertisers (they will never come back) and is still in the red. I have little reason to believe this is another careful and shrewd financial decision.
He spun that story into "he was saving democracy" so it sounds like he paid for that reason. He will do the same here; he never makes a wrong move, you just can't see the 76D chess.
I do think the Codex harness is a bit better than others.
Doesn't make a ton of difference with OpenAI models, but with Google and Anthropic models the difference is quite noticeable I think.
The biggest issue I have with premature optimization is when it's about stuff that really doesn't matter.
For example, in Java I usually use ConcurrentHashMap, even in contexts where a regular HashMap might be fine. My reasoning for this is simple: I might want to use it in a multithreaded context eventually, and the performance differences really aren't that much for most things; uncontended locks in Java are nearly free.
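Concretely, the whole "optimization" under debate is one word in a declaration behind the Map interface. A toy sketch (names invented):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCounts {
    // Callers only ever see the Map interface, so the concrete type is an
    // implementation detail; swapping HashMap for ConcurrentHashMap is a
    // one-word change here and nowhere else.
    private final Map<String, Integer> counts = new ConcurrentHashMap<>();

    public void record(String word) {
        counts.merge(word, 1, Integer::sum);
    }

    public int countOf(String word) {
        return counts.getOrDefault(word, 0);
    }
}
```

One genuine behavioral difference worth knowing: ConcurrentHashMap rejects null keys and values, so it isn't always a pure drop-in.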
I've gotten pull requests rejected because regular HashMaps are "faster", and then the comments on the PR end up with people bickering about when to use which.
In that case, does it actually matter? Even if HashMap is technically "faster", it's not much faster, and maybe instead we should focus on the thing that's likely to actually make a noticeable difference like the forty extra separate blocking calls to PostgreSQL or web requests?
So that's the premature optimization that I think is evil. I think it's perfectly fine at the algorithm level to optimize early.
I can fully understand "bickering" about someone sprinkling their favourite data type over a codebase which consistently used a standard data type before. The argument that it might be multithreaded in the future does not hold if the rest of the code base clearly was not written with that in mind. That could even be counterproductive, should someone get the misguided idea that it was ready for it.
Make a (very) good argument, and suggest a realistic path to change the whole codebase, but don't create inconsistency just because it is "better". It is not.
I don’t expose it at the boundaries, just within functions. For everything outside the function, I take a Map interface in and/or return a Map out.
You are communicating with future readers of the code. The presence of ConcurrentHashMap will lead future engineers into believing the code is threadsafe. This isn't true, and believing it is dangerous.
It’s actually a lot better. That’s literally the whole point of interfaces and polymorphism: to make it so the outside does not care about the implementation.
Ironically to your point, I think adding a ConcurrentHashMap because it might be multithreaded eventually IS premature optimisation.
The work can be done in the future to migrate to ConcurrentHashMap when multithreading support is actually added. There's no sense in adding groundwork for unplanned, unimplemented features - that is premature optimisation in a nutshell.
The point of the premature saying is to avoid additional effort in the name of unnecessary optimization, not to avoid optimization itself. Using a thing that's already right there because it might be better in some cases and is no more effort than the alternatives is not a problem.
Typing ten extra characters is an investment that pays off vs the overhead of doing any piece of work in the future. There's no real downside here, it doesn't make the code more complex, it gives less surprising results.
I think you are conflating YAGNI and premature optimisation and neither apply in this case.
I would agree if I were adding a dependency to Gradle or something. In this case it’s just something that conforms to the same interface as the other Map and has been built into every Java release for the last 20+ years. The max amount of effort I am wasting here is the ten extra characters I used to type “Concurrent”.
My point was that even if it is not optimal, there’s really no utility in bickering about it because it doesn’t really change anything. The savings from changing it to regular hashmap will, generously, be on the order of nanoseconds in most cases, and so people getting their panties in a bunch over it seems like a waste of time.
I think you are in the wrong. But my reason is that when you support concurrency, every access and modification must be checked more carefully. By using a concurrent Map you are making me review the code as if it must support concurrency which is much harder. So I say don’t signal you want to support concurrency if you don’t need it.
Locks are cheap performance-wise (or at least they can be) but they’re easy to screw up and they can be difficult to performance test.
ConcurrentHashMap has the advantage of hiding the locking from me and, more importantly, of being correct, and it can still sit behind the same Map interface, so if it’s eventually used downstream somewhere, stuff like `compute` will work and be thread safe without my having to fiddle with mutexes.
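To illustrate what I mean about `compute`: on a ConcurrentHashMap it runs atomically per key, so a toy example like this (names invented) never loses increments, whereas the same code over a plain HashMap would race:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ComputeDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> hits = new ConcurrentHashMap<>();

        // Two threads bumping the same key: compute() runs atomically per key
        // on ConcurrentHashMap, so no increments are lost. The same code over
        // a plain HashMap would race (and can even corrupt the table).
        Runnable bump = () -> {
            for (int i = 0; i < 100_000; i++) {
                hits.compute("page", (k, v) -> v == null ? 1 : v + 1);
            }
        };

        Thread a = new Thread(bump);
        Thread b = new Thread(bump);
        a.start(); b.start();
        a.join(); b.join();

        System.out.println(hits.get("page")); // always 200000 with ConcurrentHashMap
    }
}
```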
The argument I am making is that it is literally no extra work to use ConcurrentHashMap, and in my benchmarks with JMH it doesn’t perform significantly worse in a single-threaded context. It seems silly to use a regular HashMap just to save a nanosecond in most cases.
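A sketch of what such a JMH comparison looks like (this is a reconstruction, not the actual benchmark; the class name and map size are invented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

// Single-threaded get() on the two map types, measured in ns/op.
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MapBench {
    private Map<Integer, Integer> hashMap;
    private Map<Integer, Integer> concurrentMap;
    private int key;

    @Setup
    public void setup() {
        hashMap = new HashMap<>();
        concurrentMap = new ConcurrentHashMap<>();
        for (int i = 0; i < 10_000; i++) {
            hashMap.put(i, i);
            concurrentMap.put(i, i);
        }
    }

    @Benchmark
    public Integer hashMapGet() {
        key = (key + 1) % 10_000;
        return hashMap.get(key);
    }

    @Benchmark
    public Integer concurrentMapGet() {
        key = (key + 1) % 10_000;
        return concurrentMap.get(key);
    }
}
```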
As soon as you have a couple of those ConcurrentHashMaps in scope with relationships between their elements, your concurrency story is screwed unless you've really thought everything out. I'd encourage using just plain HashMap to signal that there's no thread safety so you know to rethink everything if/when it comes up.