Should have been handled better, but some context is necessary:
If your name is associated with a startup in a visible leadership position, you will get mass-spammed by people claiming to have discovered critical vulnerabilities in your system. When you engage with them, the conversation turns into requests to hire them for their services.
So the CEO handled it poorly, but it's also not a great choice to withhold the details of the vulnerability in the initial contact. If the goal was to get something fixed, the details should have been included in an easy-to-forward e-mail that could be sent to someone able to act on it.
Anyone who works with security or bug bounties can tell you that the volume of bad reports was a problem before LLMs. Now that everyone thinks they're going to use LLMs to land gigs as pentesters, the volume of reports is completely out of control.
The number of spam "I found a vulnerability" emails you get as a SaaS operator is ridiculous. They never offer any proof of a vuln; they just want you to confirm you have a bug bounty program (in which case they'll start scanning afterwards), or they want you to pay up front for the information, threatening to release it otherwise.
Their response isn't damning to me. It sounds like they just assumed the reporter was one of these spammers.
I keep getting emails like: "I found a critical bypass vulnerability in your app. What is the appropriate channel to disclose it, and do you have a bounty program?"
I tried engaging and replying to them, and it inevitably turns into: "Yeah, we don't actually have a vulnerability, but you are totally vulnerable, so just let us do a security audit for you."
I have a pre-written reply for these kinds of messages now.
Yeah, the signal-to-noise ratio on vulnerability reports is very low, especially when the initial report withholds all detail.
I get tons of these messages too, and the ones that do include details are the kind of junk you get from free "website vulnerability scanners": garbage findings that mean nothing -- "missing headers" for things I didn't set on purpose, "information disclosure vulnerabilities" for things that are intentionally there, etc. You can put google.com into these tools and get dozens of results.
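For a sense of how shallow these "scanners" are, here's a minimal sketch of what the header check amounts to: diff response headers against a fixed checklist, with no idea whether any given header is even relevant to the site. The header list and function name here are illustrative, not taken from any particular tool.

```python
# Checklist a typical free scanner complains about. Absence of any of these
# gets reported as a "vulnerability" regardless of context.
SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def missing_headers(headers):
    """Return checklist headers absent from a response (case-insensitive)."""
    present = {h.lower() for h in headers}
    return [h for h in SECURITY_HEADERS if h.lower() not in present]

# Even a deliberately minimal static site "fails" most of the checklist:
print(missing_headers({"Content-Type": "text/html", "X-Frame-Options": "DENY"}))
```

That's the entire depth of the analysis -- which is why pasting google.com into one of these tools produces a pile of "findings" too.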
I run the bug bounty for a fairly large OSS project, and the amount of shitty/bad-actor spam, beg bounties, etc. we get is huge.
Like 95% of the emails to security@ are straight garbage.
Sure, that is perhaps a good way to inquire about the appropriate channel for disclosing a security vulnerability, but email is not a secure communication method for sending the details themselves.
When the "good Samaritan" doesn't go to the vendor, they go to the client (i.e., they don't contact the DIB company, they contact the government agency).
I have seen government contractors get pilloried and lose their livelihood when this happened. And yes, there is always a "quick fix" offer from the "good Samaritan" to the vendor, with promised reassurance to the government agency, if only the misguided vendor would go with their solution.
It is also not unusual to find out later that the identification, or even the resource reported on, was wrong -- but by that time the government agency has already punished the contractor, and the reporting "good Samaritan" is laughing (sometimes all the way to the bank).
They can get away with unethical vulnerability disclosure because "think of the children", "the threat to the nation", "grandma off the cliff", and <insert your favorite cliché justification of malfeasance>.
If you sell something to someone and they commit computer crimes, you're going to have to prove that you couldn't have known they were a computer criminal.
It's the same thing with selling general offensive security tools. You have to proactively make it clear that it's for testing and not criminal use. Otherwise, cops are going to assume you're complicit and make things shitty.
That would be even worse than our already bad system.
The system is already pretty bad because vendors underinvest in security, and then, to get it fixed, researchers have to volunteer their time to investigate with no guarantee of payment. If vendors could force researchers to hand over findings for free, nobody would want to do security research except hobbyists having fun. They'd basically be signing up for hours of tedious forced labor explaining vulnerabilities to the vendor.
I wish there were legislation that allowed the government to fine vendors for security vulnerabilities like this, with the amount scaling based on how much user data was leaked. It could function like other whistleblower systems: a researcher who spots a leak reports it to the government and collects 50%. That way, if the vendor says, "We're not paying you," the researcher can turn around and collect the money from the fines.
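The proposed mechanism is simple arithmetic. A toy sketch, where the per-record rate and the function name are invented for illustration; only the 50% researcher share comes from the proposal itself:

```python
# Hypothetical fine/bounty split: the fine scales with the number of user
# records leaked, and the reporting researcher collects half of it.
# rate_per_record is a made-up figure for illustration only.
def fine_and_share(records_leaked, rate_per_record=10.0, researcher_share=0.5):
    fine = records_leaked * rate_per_record
    return fine, fine * researcher_share

# A breach leaking 100,000 user records:
fine, share = fine_and_share(100_000)
print(fine, share)  # 1000000.0 500000.0
```

The point of the scaling is incentive alignment: the bigger the leak, the more it costs the vendor and the more it pays the researcher who found it.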
Gemma4:e4b is crazy good and quite usable on 10-year-old midrange hardware.
Not sure about its security capabilities, and I haven't tested it all that thoroughly, as I usually just use hosted models, but I do find myself using it, and it's been quite successful for parsing unstructured data, writing small focused scripts, and translation.
The fact that I retain control of the data itself makes it incredibly useful, as I work in an environment where I can't just paste internal stuff into Codex.
But since it runs locally on a toaster, testing it is out of scope for me. It takes a fairly long time to do anything.
Local models are 6-12 months behind the “frontier” models. This means Anthropic, OpenAI, and Google don’t have a moat; they’re on a treadmill, running to stay ahead. Treadmills don’t justify their valuations.
I thought it was true too, for a couple of months. Then the honeymoon phase ended and now I only use Claude to write commit message drafts (which I rewrite myself) and review PRs.
It’s not their code, and it’s not for them to understand. The endgame here is that code as we know it today is the “ASM” of tomorrow. The programming language of tomorrow is natural human-spoken language used carefully and methodically to articulate what the agent should build. At least this is the world we appear to be heading toward… quickly.
But the endgame is not here and likely never will be, because unlike ASM, LLMs are not deterministic. So what happens when you need to find the bug in the 100k LoC you generated over a few weeks and have never read, and the agent can't help you? And it happens a lot. I am not doing this myself so I can't comment firsthand, but I've heard many vibe coders say that a lot of their commits are about fixing the slop they output a week prior.
Personally, I keep trying OpenCode + Opus 4.6 and I don't find it that good. I mean, it does an OK job, but the code is definitely lower quality, and at the moment I care too much about my codebase to let it turn into slop.
Well that’s pretty damning.