I think that in a world where code has zero marginal cost (or close to zero, for the right companies), we need to be incredibly cognizant of the fact that more code is not more profit, nor is it better code. Simpler is still better, and products with taste omit features that detract from the vision. You can scaffold thousands of lines of code very easily, but that makes your codebase hard to reason about, maintain, and work in. It's like unleashing a horde of mid-level engineers with spec documents and coming back in a week to find everything refactored wrong. Sure, you have some new buttons, but does anyone (or any AI agent, for that matter) understand how it works?
And to another point: work-life balance is a huge challenge. Burnout happens in all departments, not just engineering. Managers can burn out just as easily. If you manage AI agents, you'll just burn out from that instead.
I find it interesting that this leads to a pattern that consumes more tokens (and by extension, more usage and money). If you don't interrupt something going wrong, you'll burn through tokens faster. Food for thought, but it does seem like a perverse incentive.
I think not; it's not somewhere you can conduct a nuclear test without starting a war with Mexico.
However, it is interesting to look at the TFR area in Google Maps; it looks just like a nuclear test site, but the craters are natural volcanoes.
Mexico isn’t going to start a war with the US. It would last a week at most, and they’d end up glowing even more than if the US ‘downwinded’ them all year.
If Mexico went to war with America, it would rely on asymmetric insurgency tactics. It has no shortage of sympathetic people in America, not just Mexican nationals but native-born Americans as well. America hasn't dealt with a genuine domestic insurgency before.
Welcome. Tremendous to have you here. Really historic. Some people said it couldn’t happen, but I said keep an open mind, and now look. Intergalactic diplomacy. Nobody’s ever seen anything like it. We’re ready to make a deal, a fair deal, maybe the best deal in the galaxy.
Wouldn’t the Nevada Test Site be much better for this? Huge, government controlled, no major airports or cities, and moreover, already used for this sort of thing.
> I've only been in tech for like 20 years or so but I feel like either I'm missing something substantial or some kind of madness is happening to people.
People are so eager for a helpful AI assistant that they are willing to sacrifice security for it. Prompt injection attacks feel theoretical until they hit you; until then, you're just having fun riding the wave.
Have you used the log viewer? Because I swear the log viewer is the biggest letdown. I love that GitHub Actions is deeply integrated into GitHub. I hate the log viewer, and that's like one of the core parts of it.
> However, when describing and managing our company, we resort to digital paper and tidbits of info distributed across people in the building.
The perception that ISO/IEC 27001:2022 is simply an exercise in document creation and curation is frustrating. It is not, but an auditor cannot be in your company for a year or three, so the result is the next best thing: your auditor looks at written evidence, with things like timestamps, resumes, meeting minutes, agendas, and calendars, and concludes, based on that evidence, that you are doing the things you said you're doing in your evidence reviews and interviews.
The consequences of not doing these things show up when you get sued, when you get yelled at by the French data protection regulator, or when you go bankrupt from a security incident you didn't learn from while your customers are breathing down your neck.
All the documentation in the world doesn't prove you actually do the things you write down, but we have to be practical: until you consider these things, you aren't aware of them. You can read the standard and just follow the best practices, and you'll be fine. The catch is that if you want the piece of paper, you go to an auditor; people buy things because that paper means there is now an accountability trail, and someone theoretically gets in trouble if it turns out to be false.
It's like the problem with smart contracts: the "smart" aspect falls apart when you can't actually tether them to real-world outcomes (hence relying on some external oracle to tell the contract what to do). Your customers care about ISO because your auditor was accredited by a body like ANAB to audit you correctly, and that reduces the risk of you botching some information security practice. This means their data is, in theory, safer. And if it isn't, there is a lawsuit waiting on the other end when things go awry.
Jason Meller is the former CEO of Kolide, which 1Password bought. I doubt he's beholden to anything like word-count requirements. There is human-written text in here, but it's not all human-written -- and since this is basically an ad for 1Password's enterprise security offerings, odds are it's mostly intended as marketing, not as a substantive article.
Author here. I did use AI to write this, which is unusual for me. The reason was that I organically discovered the malware myself while doing other research on OpenClaw. I used AI primarily for speed; I wanted to get the word out on this problem. The other challenge was that I had a lot of specific information that was unsafe to share generally (links to the malware, URLs, how the payload worked), and I needed help generalizing it so it could be both safe and easily understood by others.
I very much enjoy writing, but this was a case where I felt that even if my writing came off as overly AI-generated, it was worth it for the reasons above.
I'll continue to explore how to integrate AI into my writing, which is usually pretty substantive. All the info was primarily sourced from my own investigation.
As a longtime customer (I have my challenge coin right here) and fan of your writing, I implore you to consider that your writing has value without AI. I would rather read an article with a fifth of the words that expresses your thoughts than something fluffed out.
> The other challenge was I had a lot of specific information that was unsafe to share generally (links to the malware, URLs, how the payload worked) and I needed help generalizing it so it could be both safe and easily understood by others.
What risk would there be in sharing it? Like, sure, s/http/hXXp/g like you did in your comment upthread, to prevent people from accidentally loading/clicking anything, but I'm not immediately seeing the risk beyond that.
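For what it's worth, the s/http/hXXp/g rewrite mentioned above is the common "defanging" convention used when sharing indicators of compromise. A minimal sketch in Python (the function name and the bracketed-dot step are my own choices, not something from this thread):

```python
import re

def defang(url: str) -> str:
    """Rewrite a URL so it is no longer clickable or auto-loadable.

    Replaces the scheme prefix ("http" -> "hXXp") and wraps dots in
    brackets, so readers can see the indicator without any risk of a
    browser or chat client turning it into a live link.
    """
    url = re.sub(r"^http", "hXXp", url)  # handles both http:// and https://
    return url.replace(".", "[.]")       # example.com -> example[.]com

print(defang("https://evil.example.com/payload"))
# hXXps://evil[.]example[.]com/payload
```

The reverse ("refanging") is just the same substitutions undone, which is why this protects only against accidental clicks, not against anyone determined to visit the URL.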
Thank you for the heartfelt reply. I wish to apologize for the crude assumptions I made.
My view of how people are being affected by AI, choosing to degrade values that should matter for a bit of convenience, has become a little jaded.
While we should keep trying to correct course when we can, I should also remember that there's still a person on the other side, and respond with kindness.
I think no matter how you slice it, though, it's unethical and reprehensible to coordinate a DDoS (even a shoddy one) that leverages your visitors as middlemen. This is effectively running a botnet, and we shouldn't condone this behavior as a community.
It's definitely interesting to see this roll around, since the only people who see the CAPTCHA page mentioned are users of Cloudflare's DNS services (knowingly or not).
P.S. Shout-out to dang for dropping the flags. I have a small suspicion that there may be some foul play, given the contents...
I use my ISP's default DNS servers and have consistently gotten the CAPTCHA page for weeks now. The CAPTCHA seems to be broken too, rendering archive.today entirely inaccessible.