Long answer: Most DBs key on lastname, so MegaZone is my last name, officially, and I have no first name. Then I leave the first name blank if it'll let me, but more often I need to put something in there - so these days I use 'MZ' as that's what I have people call me. I used to use 'Mr', and I still need to use that on some government forms so everything will match up.
License and passport have no first name and MegaZone for a last name.
My SSN Card has 'Mr MegaZone' - when I changed it, back in 2000, the SSA said their computers just could not handle a blank first name. They wanted to put in something like NFN MegaZone, FNU MegaZone, etc. (No First Name, First Name Unknown) I suggested 'Mr' because it tickled me to make the government call me Mister MegaZone. ;-)
Airline tickets are something where I use Mr MegaZone. I have Global Entry/PreCheck, and even though my IDs have no first name, the ticketing systems do NOT like that. So my PreCheck stuff is under Mr MegaZone. I made the mistake of booking once under MZ MegaZone, and even though all my PreCheck info was on there, it didn't work and I got stuck in the normal security line. Lesson learned.
My passport does break other systems sometimes - I go on cruises with my wife and they, of course, base things on your passport. So I usually end up as FNU MegaZone, which leads to hilarity as crew members try to pronounce 'FNU' as a name. Usually once I'm onboard I can get into the system and edit things, but such is life.
Internally at F5: I work as a Principal Security Engineer in the F5 SIRT, and I was one of the people responsible for making the call on assigning the CVEs.
I haven't read the content of the patches to understand the impact of the bugs, but from my own experience [0] I can suggest a few reasons:
- CVEs are gold to researchers and organizations like citations are to academics. In this case, the CVEs were filed based on "policy" but it's unclear if they are just adding noise to the DB.
- The severity of the bug is not as severe as greater powers-that-be would like to think (again, they see it as doing due diligence; developers who know the ins and outs might see it as an overreaction).
- Bug is in an experimental feature.
I'm not saying one way is right or not in this case, just pointing out my experience has generally been that CVEs are kind of broken in general...
To summarize: the more CVEs a "security researcher" can claim on his resume, the more impressive he thinks he looks. Therefore, the incentive to file CVEs for any stupid little problem is very high. This creates a lot of noise for developers, who are forced to address nonsense issues that get filed as "high" or "critical".
If you run a web app of any sort, and you don't have "X-Frame-Options: Deny" in your headers, you'll get lots of "researchers" (that are probably bots) e-mailing you that you have a CRITICAL security issue.
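(For what it's worth, if you want to silence those reports, the header is a one-line addition. A sketch for an nginx config, assuming a standard server block:)

```nginx
# Inside a server (or http) block; 'always' makes nginx send the
# header on error responses too, not just 2xx/3xx.
add_header X-Frame-Options "DENY" always;
```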
The issue you linked to is an excellent example of why everyone and their dog is becoming a CNA these days. It's the only way to keep CVE spam at bay. The system has been broken by the gamification of CVEs and is in desperate need of reform.
"Denial of service" is never a security bug; it's a huge mistake people have started classifying these things as such to start with. Serious bug? Sure. Loss of security? Not really.
That very much depends on what service is being denied. Nginx is _everywhere_. While not a direct security concern for nginx itself (it's an availability issue), it could have security or safety implications for wider systems. What if knocking out nginx breaks a service for logging & monitoring security information? Or an ambulance call-out management system? Or a payment processing system for your business at the busiest time of your trading year? There are many other such examples. This sort of thing is why availability can be considered a security matter, and therefore why DoS vulnerabilities, particularly those affecting common software, are handled as security issues of significant severity.
Almost every bug can be considered a security bug under the wrong set of circumstances.
With the fairly cheap DDoS services you can "just" order, you can knock most servers offline anyway. Internet reachability is rarely safety-critical, and if it is, that's probably a huge design flaw somewhere, because there are tons of reasons outside of your control that can make the internet not work for either the server or the clients.
Is all of this inconvenient and (potentially) a serious problem? Sure. But not "zomg criminals have credit card records / can spoof random domains / read private data / etc. etc." type serious.
> Almost every bug can be considered a security bug [...] With fairly cheap ddos services...
A DoS bug and a DDoS attack are very different things. One is a flaw that can bring a service down; the other is a brute-force technique for making a service unusable. You can DDoS services without exploiting bugs.
I am aware; my point is that "denying the service" is pretty easy even without the presence of any bugs in the service. Stealing credit cards on the other hand...
We could argue that about almost anything though. There are always secondary effects possible and sometimes even likely. I can only think of the proverb/poem - "For want of a nail".
In those cases you just know that any problem can cause you trouble, so you pay attention to all problems including low severity ones like DoS, performance slowdowns or lack of bells and whistles.
Many security specialists view security as described by the CISSP material (Certified Information Systems Security Professional). Loosely speaking, that means ensuring the confidentiality, integrity, and availability of the system (including data received, data stored, and data sent).
Viewed in this light, a bug that enables a successful Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack is a security bug. A bug that causes a DoS or DDoS but is not exploitable would not be a security bug (e.g., some idiot added an infinite loop to the startup code). That's where issue triage comes in; a bug should never be assigned before it's triaged. Sometimes triage results in 'we don't know enough' and someone gets assigned to evaluate the bug and answer specific questions before triage can be finished. After triage it gets assigned - or even better, a developer with a matching skill set chooses it to work on for the next release/sprint/etc.
Almost any bug in those kinds of systems is a potential security bug. Not having the service available at all is probably among the least critical types of bug that can happen.
> The most recent "security advisory" was released despite the fact that the particular bug in the experimental HTTP/3 code is expected to be fixed as a normal bug as per the existing security policy, and all the developers, including me, agree on this.

> And, while the particular action isn't exactly very bad, the approach in general is quite problematic.
Yeah, I've been with F5 since 2010 - gotta love those old PortMasters though, Livingston was good times, until Lucent took over. I was there 95-98.
I don't know what else there is to say really. The QUIC/HTTP/3 vuln was found in NGINX OSS, which is also the basis for the commercial NGINX+ product. We looked at the issue and decided that, by our disclosure policies, we needed to assign a CVE and make a disclosure. And I was firmly in that camp - my personal motto is "Our customers cannot make informed decisions about their networks if we do not inform them." I fight for the users.
Anyway, Maxim did not seem to agree with that position. There wasn't much debate about it - the policy was pretty clear and we said we're issuing a CVE. And this is the result, as near as I can tell.
Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.
Oof. Presumably Dounin had other gripes about the company that had been building up? This seems like a pretty weird catalyst for a fork. Feels more like this was the last straw among many.
I get that CVEs have been politicized and weaponized by a bunch of people, but it seems weird to object that strenuously to something like this.
Oh my god, the Internet is such a small place. Good to hear you're doing well - we interacted a bit when I was running an ISP in the 90s as well. (Dave Andersen, then at ArosNet -- we ran a lot of PM2.5e and then PM3s).
And appreciate the clarification about the CVE disagreement.
Those were great times. I learned a hell of a lot working at Livingston, because we had to. We were basically a startup selling to ISPs right as the Internet exploded and we grew like crazy. Suddenly we're doing ISDN BRI/PRI, OSPF, BGP, PCM modems, releasing chassis products (PM-4)... Real fun times, always something new happening. I even ended up our corporate webmaster since I'd been playing with web tech for a few years and thought it'd be a good idea if we had a site. Quite a way to jumpstart a career.
I don't know much about this situation, but from what I've read, you were clearly in the right. It doesn't matter if the feature is in optional/experimental code. If it's there and has a vulnerability, give it a CVE. The customers/users can choose how much they care about it from there.
> Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.
I recently did exactly that when a vendor refused to obtain a CVE themselves. In my case, I was doing it as part of an effort to educate the vendor on how CVEs worked.
You bring up NGINX+, a commercial product with a CVE reporting policy, but from reading the docs, it doesn't support QUIC or HTTP/3. So I guess I can see why the maintainer would be mad about a commercial policy applying to noncommercial work in the absence of any real threat.
> Honestly, anyone could have gone to a CNA and demanded a CVE and he would not have been able to stop it. That's how it works.
Even if third parties can file CVEs, do you think it hits different when the parent organization decides to do so against the developer's wishes? Why do he and F5 view the bugs differently? It sounds like the fork decision was motivated less by the actual CVEs and more about how the decision was negotiated (or not at all).
Personally, I think it's more honest if the parent org does not try to contest a CVE being assigned to a legitimate issue. If a CNA gets a report of a vulnerability in code, even if it's an uncommon configuration, they should be assigning a CVE to it and disclosing it. The entire point of the CVE program is to identify, with a precise identifier (the CVE), each vulnerability that was shipped in code that is generally available.
Based on my observation of various NGINX forums and mailing lists, the HTTP/3 feature, while experimental, is seeing adoption by the leading edge of web applications, so I don't think it could be argued that it's not being slowly rolled into production in places.
We (F5) published two CVEs today against NGINX+ & NGINX OSS. Maxim was against us assigning CVEs to these issues.
F5 is a CNA and follows CVE program rules and guidelines, and we will err on the side of security and caution. We felt there was a risk to customers/users and it warranted a CVE, he did not.
I worked there before and after the acquisition. F5 Security was woefully incompetent. We spent 3 months trying to get approval for a web hook from Gitlab -> Slack, including endless documents (Threat Model Assessment), and meetings - god, the meetings - at one point on a call with 35 people. So I feel Maxim’s pain trying to deal with that team at F5.
On the other hand nginx core developers (the Russians) were arrogant to the point of considering anyone else as inferior and unworthy of their attention or respect, unless they contributed to nginx oss. They managed that project secretively and rewrote most “outside” contributions. They also ignored security issues - one internal developer spotted security issues with NGINX Unit (a failed oss project 20 years out of date before it started) and was told to fix the issues quietly and not to mention “security” anywhere in the issue messages or commit history.
So I can imagine exactly how these meetings would have gone, I’m sure it was the last straw!
I can agree to this. I worked there too, and it took 2 months to get a simple approval for a similar project, despite preparing extensive TMA documents, etc
This is confusing. The CVE doesn't describe the attack vector with any meaningful degree of clarity, except to emphasize how you'd have to have a known unstable and non-default component enabled. As far as CVEs go, it definitely lacks substance, but it's not some catastrophic violation of best practices. It hardly reflects poorly on Maxim or anything he's done for Nginx. This seems like an extreme move, and it makes me wonder if there's something we're missing.
Maybe, but he only mentioned disagreements on security policies. Doesn't sound very convincing as a last straw, especially from a marketing standpoint when trying to gain more traction for his fork.
Yes, those are the two CVEs I was referring to. All I know is he objected to our decision to assign CVEs, was not happy that we did, and the timing does not appear coincidental.
QUIC in Nginx is experimental and not enabled by default. I tend to agree with him here that a WIP codebase will have bugs that might have security implications, but they aren't CVE worthy.
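For context, the HTTP/3 code isn't even present in a default build; you have to opt in at configure time. A sketch (flag name per the nginx docs; build paths and options will vary):

```shell
# HTTP/3/QUIC is off unless explicitly compiled in:
./configure --with-http_v3_module
make && make install
```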
We know a number of customers/users have the code in production, experimental or not. And that was part of the decision process. The security advisories we published do state the feature is experimental.
When in doubt, err on the side of doing the right thing for the users. I find that's the best approach. I don't consider CVE a bad thing - it shouldn't be treated like a scarlet letter to be avoided. It is a unique identifier that makes it easy to talk about a specific issue and get the word out to customers/users so they can protect themselves. And that's a good thing.
The question I ask is "Why not assign a CVE?" You have to have a solid reason not to do it, because our default is to assign and disclose.
I don't think having the CVEs should reflect poorly on NGINX or Maxim. I'm sorry he feels the way he does, but I hold no ill will toward him and wish him success, seriously.
FWIW, in my project the main reason we don't issue security advisories for "unsupported" code ("experimental" or "tech preview") is to reduce the burden for our downstreams: many of our immediate downstreams are expected by their users to apply every single security patch, regardless of whether they even use the affected functionality. For cloud providers doing this across a massive fleet, this is a fair amount of work that's worth avoiding if we can.
On the other hand, since the definition of "supported" is specifically designed to help downstreams, if it were known that some bit of code was widely used in production, we'd be open to declaring it "security supported", regardless of whether we thought it was "finished" or not.
Recently I had to support a client who had a "no CVEs in a production deploy, ever" policy.
The stack included Linux, Java, Chromium, and MySQL. It took multiple person-years of playing whack-a-mole with dependencies to get it into production because we'd have to have conversations like:
Client: there's a CVE in the this module
Us: that's not exploitable because it's behind a configuration option that we haven't enabled
Client: somebody could turn it on
Us: even if they somehow did and nobody noticed, they would have to stand up a server inside your VPC and connect to that
Client: well what if they did that?
Us: then they'd already have root and you are hosed
Client: but the CVE
Us:
So I definitely appreciate any vendor that tries to minimize CVEs.
I mean, yeah, but that's the way big bureaucratic organizations get sometimes. Bigger means more likely to have a brain-dead policy like this, but also more money... so, do you give up the money, or do you accommodate their policy while trying to minimize the cost?
There are tons of reasons why you wouldn't, but the core reason for this fork probably isn't really about the CVEs as such. It's either the final straw in a long line of disagreements, or the entire thing was handled so badly that he no longer wants to work with these people. Or, most likely, a combination of both.
I once quit after a small disagreement because the owner cut off my explanation of why I built something the way I did with "I don't care, just do what I say". This was after he ignored the discussion on how to design it, and ignored requests for feedback while I was building it. And look, I don't mind redoing it even if I don't agree it's better, but I did put quite a lot of thought and effort into it and thought it worked very well. If you don't even want to spend 3 minutes listening to the reasons why it's like that, then kindly go fuck yourself.
It's not the disagreement as such that matters, it's the lack of basic respect.
As an outsider to this whole thing (having discovered this issue in this thread, like pretty much anyone), the CVE rules simply say that you cannot assign a CVE to vulnerabilities in a product that is not publicly available or licensable. Experimental, but publicly available features are still in scope.
This makes sense IMHO: experimental features may be buggy, but they may work in your limited use case. So you may be inclined to use them...except you don't know they expose you in a critical way.
Exactly - this very question came up. And pretty much everyone looked at me as I'm the one who sits on every CVE.org working group (BTW, the CVE rules are currently being revised and in comment period for said revision) and I explained exactly that - just because it is experimental doesn't mean it is out of scope.
Also, something that keeps getting lost here, the CVE is NOT just against NGINX OSS, but also NGINX+, the commercial product. And the packaging, release, and messaging on that is a bit different. That had to be part of the decision process too. Since it is the same code the CVE applies to both. This was not a rash decision or one made without a lot of discussion and consideration of multiple factors.
But one of our guiding principles that we literally ask ourselves during these things is "What is the right thing to do?" Meaning, what is the right thing for the users, first and foremost. That's part of the job, IMHO. Some vendors never disclose anything, but that's not how we operate. I've written a few articles on F5's DevCentral site about this - "Why We CVE" and "CVE: Who, What, Where, and When" are particularly on topic for this, I think.
All features have limited use cases, but experimental features may be buggy in all use cases, which is exactly what happened here. A CVE is uninformative there; defects are implied. Might as well create a CVE for every commit: "something happened, don't forget to redeploy".
That's a whole different discussion - which isn't as dramatic as it is being made out to be.
Other hats I wear (outside of my day job) include being on every (literally, every) CVE.org Working Group and being the newly elected CNA Liaison to the CVE Board. This has been a subject of discussion and things are a bit overblown right now, IMHO. Some of the initial communications were perhaps not as clear as they could have been. But it isn't going to be every kernel bug being a CVE - not every bug is a vuln.
I'm also one of the co-chairs for the upcoming VulnCon in Raleigh, NC. Just a plug. ;-)
Answering the original question you posted to me a bit down-thread with this important context. The answer to "why not issue a CVE?" is the same reason that you don't call every random car burglary or graffiti an act of terrorism.
While I agree the whole Linux CVE thing is a bit overblown, as an outside observer the new policy [1] does not read like they are super happy with CVE in general.
Too bad the CFP is closed for VulnCon, it might be fun to do a "Assume everything is wrong and you can't do anything the way you do it now - how do you build CVE 2.0" (also that title is too long).
We got around 150 submissions for 30ish panel slots over three days, so we're good there. Schedule should be out soon.
The CVE program has grown and changed a lot the past few years, and the rules are undergoing a major revision right now (comment period currently) taking in a lot of the feedback. And the rate of CNAs joining has been picking up rapidly as global interest in the program has increased.
No one thinks it is perfect, but that's why a lot of us are active in the working groups and trying to keep moving things forward.
I think you'd have to ask Maxim. My take is he felt experimental features should not get CVEs, which isn't how the program works. But that's just my take - I'm the primary representative for F5 to the CVE program and on the F5 SIRT, we handle our vuln disclosures.
I'm inclined to agree with your decision to create and publish CVEs for these, honestly. You were shipping code with a now-known vulnerability in it, even if it wasn't compiled in by default.
Incorrect. Features available to users still require a minimum, standard level of support. This is like the deceptive misnomer of 'staging' and 'test' environments provided to internal users that are used no differently than production in all but name.
If the feature is in the code that's downloaded, regardless of whether or not the build process enables it by default, the code is definitely being shipped.
This is an insane standard and attempting to adhere to it would mean that the CVE database, which is already mostly full of useless, irrelevant garbage, is now just the bug tracker for _every single open source project in the world_.
Why is it insane? The CVE goal was to track vulnerabilities that customers could be exposed to. It is used…in public, released versions. Why wouldn’t it be tracked?
It's in the published source code, as a usable feature, just flagged as experimental and not compiled by default. It's not like this is some random development branch. It's there, to be used en route to being stable. People will have downloaded a release tagged version of the source code, compiled that feature in and used it.
By what definition is that not shipped?
> I am actually completely shocked this needs to be explained. Legitimate insanity.
I've had an optional experimental feature marked with a CVE. It's not a big deal as it just lets folks know that they should upgrade if they are using that experimental feature in the affected versions.
Where did you get this info? It might be that the feature is actively being worked on and the DoS is a known issue which would be fixed before merge. Lots of projects have a contrib folder for random scripts and other things which wouldn't get merged without some review, but users are free to run the scripts if they want to. Experimental compile-time build flags are experimental by definition.
You're all also missing the fact that the vuln is also in the NGINX+ commercial product, not just OSS. Which has a different release model.
Being the same code it'd be darn strange to have the CVE for one and not the other. We did ask ourselves that question and quickly concluded it made no sense.
"made no sense" from a narrow, CVE announcement perspective, but Maxim disagrees from another perspective:
> [F5] decided to interfere with security policy nginx uses for years, ignoring both the policy and developers' position.
>
> That's quite understandable: they own the project, and can do anything with it, including doing marketing-motivated actions, ignoring developers position and community. Still, this contradicts our agreement. And, more importantly, I no longer able to control which changes are made in nginx within F5, and no longer see nginx as a free and open source project developed and maintained for the public good.
I'm not sure what "contradicts our agreement" means but the simple interpretation is that he feels that F5 have become too dictatorial to the open source project.
The whole drama seems very short-sighted from F5's perspective. Maxim was working for you for free for years and you couldn't find some middle ground? I imagine there could have been some page on the free nginx project that listed CVEs that are in the enterprise product but that are not considered CVEs for the open source project given its stated policy of not creating CVEs for experimental features, or something like that.
To nuke the main developer, cause this rift in the community, and create a fork seems like a great microcosm of the general tendency of security leads to wield uncompromising power. I get it. Security is important. But security isn't everything and these little fiefdoms that security leads build up are bureaucratic and annoying.
I hope you understand that these uncompromising policies actually reduce security in the end because 10X developers like Maxim will start to tend to avoid the security team and, in the worst case, hide stuff from their security team. I've seen this play out over and over in large corporations. In that sense, the F5 security team is no different.
But there should be a collaborative, two-way process between security and development. I'm sure security leads will say that they have that, but that's not what I find. Ultimately, if there's an escalation, executives will side with the security lead, so it is a de facto dictatorship even if security leads will tend to avoid the nuclear option. But when you take the nuclear option, as you did in this case, don't be surprised by the consequences.
OK - I need to make very clear that I'm speaking for myself and NOT F5, OK? OK.
Ask yourself why this matters? What is the big deal about having a CVE assigned? A CVE is just a unique identifier for a vulnerability so that everyone can refer to the same thing. It helps get word out to users who might be impacted, and we know there are sites using this feature in production - experimental or not. This wasn't dictating what could or could not go into the code - my understanding was the vuln wasn't even in his code, but from another contributor. So, honestly, how does issuing the CVEs impact his work, at all?
That's what I, personally, don't understand. At a functional level, this really has no impact on his work or him personally. This is just documentation of an existing issue and a fix which had to be made, and was being made, CVE or no CVE. And this is worth a fork?
What you're suggesting is the best thing to do is to allow one developer to dictate what should or should not be disclosed to the user base, based on their personal feelings and not an analysis of the impact of that vulnerability on said user base? And if they're inflexible in their view and no compromise can be reached then that's OK?
Sometimes there's just no good compromise to be reached and you end up with one person on one side, and a lot of other people on the other, and if that one person just refuses to budge then it is what it is. Rational people can agree to disagree. In my career there have been many times when I have disagreed with a decision, and I could either make peace with it or I could polish my resume. To me it seems a drastic step to take over something as frankly innocuous as assigning a CVE to an acknowledged vulnerability. Clearly he felt differently, and strongly, on the matter. Maybe he is just very strongly anti-CVE in general, or maybe he'd been feeling the itch to control his own destiny and this was just the spur it took to make the move.
His reasons are his own, and maybe he'll share more in time. I'm comfortable with my personal stance in the matter and the recommendations I made; they conform with my personal and professional morals and ethics. I'm sorry it came to this, but I would not change my recommendation in hindsight as I still feel we did the right thing.
Only time will tell what the results of that are. I think the world is big enough that it doesn't have to be a zero sum game.
I guess a vulnerability doesn't count unless it's default, lol. Just don't make it default and you never have any responsibility, nor do those who use it or use a vendor version that has added it in their product.
>I guess a vulnerability doesn’t count unless it’s default lol.
It's still being tested. It's not complete. It's not released. It's not in the distribution. The amount of people that have this feature in the binary AND enabled is less than the amount of people that agree that this should be a CVE.
CVE's are not for tracking bugs in unfinished features.
It IS in the code that anyone can compile to use or integrate in projects as is the OSS way. Splitting hairs because it’s not in the default binary is absurd. Guess all the extra FFMPEG compilation flags and such shouldn’t count either.
(not explicitly asking you, MZMegaZone) Does anyone understand why a disagreement about this would be worth the extra work in forking the project?
I'm not very familiar with the implications, so it seems like a relatively fine hair to split, as though the trouble of dealing with these as CVEs would be less than the extra work of forking.
It probably wasn't. There's likely something else going on. Either Dounin had already decided to fork for other reasons, and the timing was coincidental, or there were a lot of reasons building up, and this was the final straw.
Or he's just a very strange man, and for some reason this pair of CVEs was oddly that important to him.
If you have more information, share it (I don’t think you do, as all you could say was “I’m sure”.). People actually involved sharing their side is a unique advantage of HN. Empty ad hominem attacks are not allowed here, and you have no right to tell anyone to “get out of here”.
Could you expand on your reasoning here? I'm genuinely curious what makes you react in this way?
To me it seems like a very simple disagreement over policies, compounded by the implications of the decision that was made and the impact it has on the agreed relationships.