Noteworthy that this wasn't a bug, but a "feature":

  void downloadThread::ignoreSslErrors(QNetworkReply* reply,QList<QSslError> errors) {
    // Ignore all SSL errors
    reply->ignoreSslErrors(errors);
  }
https://github.com/qbittorrent/qBittorrent/commit/9824d86a3c...


Is the motivation behind this known?


Since the commit message was "Fix HTTPS protocol support in torrent/rss downloader", I suppose it was a quick fix to make things work, and since things worked, no one ever took a look at it again until now.

EDIT: The author of the PR[0] that fixed this (who is one of the top qBittorrent contributors according to GitHub[1]) came to the same conclusion:

> I presume that it was a quick'n'dirty way to get SSL going which persisted to this day. It's also possible that back in the day Qt4 (?) didn't support autoloading ca root certificates from the OS's store.

[0]: https://github.com/qbittorrent/qBittorrent/pull/21364 [1]: https://github.com/qbittorrent/qBittorrent/graphs/contributo...


To be fair, the ignoreSslErrors function is not from the authors of qBittorrent; it comes from the Qt framework. The idea behind the function is that you provide it a small whitelist of errors you wish to ignore. For example, in a dev build you may well want to ignore self-signed certificate errors in your dev environment. The trouble is that you can also call it with no arguments, which means you will ignore every error. This may have been misunderstood by the qBittorrent maintainers, maybe not.
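
For illustration, here is a minimal sketch of how the Qt API is meant to be used (the handler name and the dev-only whitelist are hypothetical): connect QNetworkAccessManager::sslErrors to a handler that re-approves only the specific errors you expect, so anything else still aborts the handshake.

  #include <QNetworkReply>
  #include <QSslError>
  #include <QList>

  // Hypothetical handler connected to QNetworkAccessManager::sslErrors.
  // Instead of echoing the whole error list back (which waves everything
  // through), it whitelists the one failure mode expected in a dev build.
  void onSslErrors(QNetworkReply *reply, const QList<QSslError> &errors) {
      QList<QSslError> allowed;
      for (const QSslError &e : errors) {
          if (e.error() == QSslError::SelfSignedCertificate)  // dev-only exception
              allowed.append(e);
      }
      reply->ignoreSslErrors(allowed);  // anything not whitelisted still fails
  }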

Much more likely is that someone knowingly implemented this as a temporary solution while adding OpenSSL to a project which previously never had SSL support (a major change with a lot of work involved), and every programmer knows that there is nothing more permanent than a temporary solution. Especially in this case. I can understand how such code would make it into the repo (I think you do too), and it's very easy for us to say we would then have immediately amended it in the next version to properly verify certs.

Having been in contact with the maintainers, I have to say I was disappointed in how seriously they took the issue. I don't want to say any more than that.

Source: author of the article


Temporary solutions can become more dangerous with time. Years ago, in one of our projects, someone wrote a small helper class, HTTPClient, to talk to one of our internal subsystems. The subsystem in the dev environment used self-signed certificates, so one of the devs just disabled SSL validation. Whether SSL errors were ignored or not was specified in a config. Later, someone messed up while editing the configs, and SSL validation got disabled in the live environment, too. No one noticed, because nobody writes tests to check whether SSL validation is enabled. But that's only part of the story: at that point, the HTTPClient class was still only used to communicate with our internal subsystem on our own network.

The real problem came later when the next generation of developers saw this HTTPClient class and thought, "Hey, what a nifty little helper!", and soon they were using it to talk to pretty much everything, including financial systems. I was shocked when I discovered it. An inconsequential temporary workaround had turned into a huge security hole.
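
The "nobody writes tests for it" part is fixable, by the way. A minimal black-box sketch (using Qt for consistency with the thread; expired.badssl.com is a public bad-certificate test host): request an endpoint with a known-bad certificate and fail the build if the request succeeds.

  #include <QCoreApplication>
  #include <QNetworkAccessManager>
  #include <QNetworkRequest>
  #include <QNetworkReply>
  #include <QEventLoop>
  #include <QUrl>
  #include <QDebug>

  // Smoke test: a host with a deliberately bad certificate must NOT load.
  // If this request succeeds, certificate validation has been disabled
  // somewhere (e.g. by a config mix-up like the one described above).
  int main(int argc, char *argv[]) {
      QCoreApplication app(argc, argv);
      QNetworkAccessManager manager;
      QNetworkReply *reply =
          manager.get(QNetworkRequest(QUrl("https://expired.badssl.com/")));

      QEventLoop loop;  // block until the request finishes
      QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
      loop.exec();

      if (reply->error() == QNetworkReply::NoError) {
          qCritical() << "FAIL: bad certificate was accepted";
          return 1;
      }
      qInfo() << "OK: rejected as expected:" << reply->errorString();
      return 0;
  }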


This is interesting. I haven't ever used the Qt framework, but I'm surprised that it would even have an SSL implementation; that sounds a bit out of scope for a GUI toolkit. I think I'd prefer to do all my networking separately and provide the fetched data to Qt.

Edit (just noticed this was the author): I'm curious, what torrent client do you prefer? I like Deluge but mostly go to it because it's familiar.


Qt isn't just a GUI toolkit; it's an everything toolkit, intended to be usable on its own with C++ to build a wide variety of apps. It includes modules like Bluetooth, Network, Multimedia, OAuth, Threading and XML.

See a full list: https://doc.qt.io/qt-6/index.html


It's in the spirit of the C++ compiler frameworks that were quite common during the 1990s, before C++98. After that we got quite a thin standard library instead, plus a mess around managing third-party dependencies that is still being sorted out.


(Most?) programming languages have a way to flag these scenarios, something like `#warning | TODO | FIXME` ...

I understand temporary, but 14 years seems a bit... too long.
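
For what it's worth, C and C++ can even promote such a marker from a comment to a build-time nag. A tiny sketch (assumes GCC/Clang or any C++23 compiler, where #warning is supported):

  // Shows up in every build log instead of hiding in a comment nobody reads.
  #warning "TEMPORARY: all SSL errors ignored for dev self-signed certs; remove before release"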


How much notification did you give the developers before you disclosed? Did you enforce a timeline?


In total it was about 45 days from the initial conversation. I waited for a patched version to be released, because the next important milestone after that would be finished backports to the older versions still in use; that is clearly going to take a long time, as it is not being prioritized, so I wanted to inform users.

Initially I had said 90 days from the initial report, but it seemed like they were expanding the work to fill that time. I asked a number of times for them to publish a security advisory and got no answer. Some discussions on the repo showed they were treating this as a theoretical issue. Now it's CVE-2024-51774, which got assigned within 48 hours of disclosure.


> Some discussions on the repo showed they were treating this as a theoretical issue.

That's hilarious. It's all theoretical until it's exploited in the wild...


Any proof that actually happened, or are you just wearing a tinfoil hat? Crypto enforcement en masse matters; intercepting highly specific targets via BitTorrent does not.


I feel as though there is a generational gap developing between people who do and do not remember how prevalent Firesheep used to be.


Lol, wait till you get personally targeted by a 0-day in extremely popular software; that sentiment makes you look stupid both ways.


I think a better question is: why are you asking me for evidence (not proof!) of something you are supposing?


Honestly, I think full disclosure with a courtesy heads-up to the project maintainers/company is the most ethical strategy for everyone involved. “I found a thing. I will disclose it on Monday. No hard feelings.” With ridiculous 45-90 day windows it’s the users that take on almost all the risk, and in many ways that’s just as unethical, if not more so, than some script kiddies catching wind before a patch is out. Every deployment of software is different, and downstream consumers should be able to make an immediate call as to how to handle vulns that pop up.


Strongly disagree. 45 days to allow the authors to fix a bug that has been present for over a decade is not really much added risk for users; in this case, 45 days is about 1% additional time for the bug to be around. Maybe someone was exploiting it, but that extra time is a drop in the bucket, whereas releasing the bug immediately puts all users at high risk until a patch can be developed and released and users update their software.

Maybe immediate disclosure would cause a few users to change their behavior, but no one is tracking security disclosures on all the software they use and changing their behavior based on them.

The caveat: if you have evidence of active exploitation, then immediate disclosure makes sense.


What if we changed the fundamental equation of the game: no more "responsible" disclosures, or rather, define "responsible" as immediate and as widely published as possible (ideally with a PoC). If anything, embargoes and timelines are irresponsible, as they create unacceptable information asymmetry. An embargo is also an opportunity to back-room sell the facts of the embargo to the NSA or other national security apparatus on the down-low. An embargoed vulnerability will likely have a premium valuation model following something which rhymes with Black-Scholes. Really, really think about it...


Warning shots across the bow in private are the polite and responsible way, but malicious actors don't typically extend such courtesies to their victims.

As such, compared to the alternative (bad actors having even more time to leverage and amplify the information asymmetry), a timely public disclosure is preferable, even with some unfortunate and unavoidable fallout. Typically security researchers are reasonable and want to do the right thing with regard to responsible disclosure.

On average, the "bigger party" inherently has more resources to respond than the reporter does. This remains true even in open source software.


This is a pretty dangerous take. The reality is that the vast majority of security vulnerabilities in software are not actively exploited, because no one knows about them. Unless you have proof of active exploitation, you are much more likely to hurt users by publicly disclosing a 0-day than by responsibly disclosing it to the developer and giving them a reasonable amount of time to come out with a patch. Even if the developers are acting badly. Making a vulnerability public puts a target on every user, not on the developer.


Your take is the dangerous one. I don’t disagree that

> the vast majority of security vulnerabilities in software are not actively exploited

However I’d say your explanation that it’s

> because no one knows about them

is not necessarily the reason why.

If the vendor or developer isn’t fixing things, going public is the correct option. (I agree some lead time / attempt at coordinated disclosure is preferable here.)


> (I agree some lead time / attempt at coordinated disclosure is preferable here.)

Then I think we are in agreement overall. I took your initial comment to mean that as soon as you discover a vulnerability, you should make it public. If we agree that the process should always be to disclose it to the project, wait some amount of time, and only then make it public - then I think we are actually on the exact same page.

Now, for the specific amount of time: ideally, you'd wait until the project has a patch available, if they are collaborating and prioritizing things appropriately. However, if they are dragging their feet and/or not even acknowledging that a fix is needed, then I also agree that you should set a fixed time as a last ditch attempt to get them to fix it (say, "2 weeks from today"), and then make it public as a 0-day.


Indeed, we’re in agreement. Though I’d suggest a fixed disclosure timeframe at time of reporting. Maybe with an option to extend in cases where the fix is more complex than anticipated.


> Unless you have proof of active exploitation

Wouldn’t a “good criminal” just exploit it forever without getting caught? Your timeline has no ceiling.


My point is: if you found a vulnerability and know that it is actively being exploited (say, you find out through contacts, or see it on your own systems, or whatever), then I would agree that it is ethical to publicize it immediately, maybe without even giving the creators prior notice: the vulnerability is already known to at least some bad actors, and users should be made aware immediately so they can take action.

However, if you don't know that it is being actively exploited, then the right course of action is to disclose it privately to the creators and work with them to coordinate a timely patch before any public disclosure. Exactly how timely will depend on your and their judgement of many factors. Even if the team is showing very bad judgement from your point of view, and acting dismissively; even if you have a history of exactly this with them - you still owe it to the users of the code to at least try, and to at least give some unilateral but reasonable timeline in which you will disclose.

Even if you don't want to do this free work, the alternative is not to publicly disclose: it's to do nothing. In general, the users are still safer with an unknown vulnerability than they are with a known one that the developers aren't fixing. You don't have any responsibility to waste your own time to try to work with disagreeable people, but you also don't have the right to put users at risk just because you found an issue.


100%

It’s unethical to users who are at risk to withhold critical information.

If McDonald's had an E. coli outbreak and a keen doctor picked up on it, you wouldn't withhold that information from the public while McDonald's developed a nice PR strategy and quietly waited for the storm to pass, would you?

Why is security, which seriously is a public safety issue, any different?


It's different because bad actors can take advantage of the now-public information.

The point of a disclosure window is to allow a fix before _all_ bad actors get access to the vulnerability.


And some may already be taking advantage. This is a perfect example where users are empowered to self-mitigate. You’re relatively okay on private networks, but definitely not on public networks. If I know when the bad actors know, then I can, e.g., not run qBittorrent at a coffee shop until it’s patched.


What about a pre-digital bank? If you came across knowledge of a security issue potentially allowing anyone to steal stuff from their vault, would you release that information to the public? Would everyone knowing how to break in make everyone's valuables safer?

Medicine and biosafety are PvE. Cybersecurity is PvP.


Another point against the “security” of open source software.

“Oh, it’ll have millions of eyes on it”… except no one looks.


As opposed to the “security” of closed source software? Where severe vulns are left in as long as they aren't publicized, because it would take too much development time to justify fixing them and the company doesn't make money fixing vulns; it makes money creating new features. And since it isn't a security-related product, any lapse in security is an "oopsy woopsy, we screwed up" and everyone moves on with their lives?

Even companies that are supposed to get security right have constant screw-ups that are only fixed when someone goes poking around where they probably shouldn't and thankfully happens not to be malicious.


I think your comment works as a reply to the claim that closed source is more secure than open source - you try to bring them both to the same level.

I don't think it replies to what the user asked, though. It seems reasonable to expect widely used open source software to be studied by many people. If that's true, it's fair to ask why this wasn't caught by anyone sooner. Ignoring all SSL errors is not something you need to be an expert to know is bad...


Codebases outside of security contexts are rarely audited, much less professionally so. The culture of code-reviewing PRs 14 years ago was a little different from today's, which is also why any "quick hack to make things work" should always have some form of "//HACK: REVIEW OR REMOVE BY <DATE>" attached to it to make it easy to find.

From a security perspective there are only two kinds of code bases: open & closed. By deduction one of those will have more eyeballs on the codebase than the other even if "nobody looks".

Case in point: it may have taken 14 years, but someone looked. Had the code base been closed source, that might never have happened, because it might never have been possible to happen. It's also very easy to point to the number of security issues that never made it into production because they were caught in an open source code review by passersby and other contributors while the PR was waiting to be merged.

The fact it was caught at all is a point for open source security - not against it. Even if it took 14 years.


> From a security perspective there are only two kinds of code bases: open & closed. By deduction one of those will have more eyeballs on the codebase than the other even if "nobody looks".

Is that the classification that matters? I'd think that there are only the following two kinds of code bases: those that come with no warranty or guarantee whatsoever, and those attached to a contract (actual or implied) that gives users legal recourse against a specific party in case of damages caused by issues with that code (security or otherwise).

Guess which kind of code, proprietary or FLOSS, tends to come with legal guarantees attached? Hint: it's usually the one you pay for.

I say that because it's how safety and security work everywhere else - they're created and guaranteed through legal liability.


Can you cite an example where a company was sued over bad code? I want to agree with you and with your reasoning (which is why I upvoted you, as I think it is a good argument) but cannot think of any example where this has been the case. Perhaps in medical/aviation/government niches, but not in any niche I've worked in or can find an example of.

The publicly known lawsuits seem to come from data breaches and the large majority of those data breaches are due to non-code lapses in security. Leaked credentials, phished employees, social engineering, setting something Public that should be Internal-only, etc.

In fact, many proprietary products rely on FLOSS code, and when that code results in an exploit, the company owning the product may be sued for the resulting data breaches. But that's an issue with their product contract and their use of FLOSS code without code review. As it turns out, many proprietary products aren't code-reviewing the FLOSS projects they rely on either, despite their supposed potential legal liability to do so.

> I say that because it's how safety and security work everywhere else - they're created and guaranteed through legal liability.

I don't think the legal enforcement or guarantees are anywhere near as strong as in other fields, such as, say... actual engineering or the medical field. If a doctor fucks up badly enough, they can no longer practice medicine. If a SWE fucks up badly enough, they might get fired? But they can certainly keep producing new code and may find a job elsewhere if they are let go. Software isn't a licensed field and so is missing a lot of safety and security checks that licensed fields have.

Reheating already-cooked food to sell to the public requires a food handler's card, which is already a higher bar than exists in the world of software development. Cybersecurity isn't taken all that seriously by seemingly anyone. I wouldn't have nearly as many conversations with my coworkers or clients about potential HIPAA violations if it were.


> Can you cite an example where a company was sued over bad code?

Crowdstrike comes to mind? A quick web search tells me there are a bunch of lawsuits in flight, some aimed at Crowdstrike itself, others just between parties caught in the fallout. Hell, Delta Air Lines and Crowdstrike are apparently suing each other over the whole mess.

> The publicly known lawsuits seem to come from data breaches and the large majority of those data breaches are due to non-code lapses in security.

Data breaches don't matter IMO; there is rarely, if ever, any obvious, real damage to the victims, so unless the stock price is at risk, or data protection authorities in some EU countries start making noises, nobody cares. But the bit about "non-code lapses", that's an important point.

For many reasons, software really sucks at being a product, so as much as possible it's seen and traded as a service. "Code lapses" and "non-code lapses" are not the units of interest. The vendor you license some SDK from isn't going to promise you the code is flawless - but they do promise you a certain level of support, responsiveness, or service availability, and are incentivized to fulfill it if they want to keep the money flowing.

When I mentioned lawsuits, that was a bit of a shorthand for an illustration. Of course you don't see that many of them happening - lawsuits in the business world are like military actions in international politics; all cooperation ultimately is backed by threat of force, but if that threat has to actually be made good on, it means everyone in the room screwed up real bad.

99% of the time, things get talked out without much noise. Angry e-mails are exchanged, lawyers get CC'd, people get put on planes and sent to do some emergency fixing, contractual penalties are brought up. Everyone has an incentive in getting themselves out of trouble, which may or may not involve fixing things, but at least it involves some predictable outcomes. It's not perfect, but nothing is.

> I don't think the legal enforcement or guarantees are anywhere near as strong as in other fields, such as, say... actual engineering or the medical field. If a doctor fucks up badly enough, they can no longer practice medicine. If a SWE fucks up badly enough, they might get fired? But they can certainly keep producing new code and may find a job elsewhere if they are let go. Software isn't a licensed field and so is missing a lot of safety and security checks that licensed fields have.

Fair. But then, SWEs aren't usually doing blowtorch surgery on live gas lines. They're usually a part of an organization, which means processes are involved (or the org isn't going to be on the market very long (unless they're a critical defense contractor)).

On the other hand, let's be honest:

> Cybersecurity isn't taken all that seriously by seemingly anyone.

Cybersecurity isn't taken all that seriously by seemingly anyone, because it mostly isn't a big problem. For most companies, the only real threat is a dip in the stock price, and that's if they're publicly traded. Your random web SaaS isn't really doing anything important, so their cybersecurity lapses don't do any meaningful damage to anyone either. For better or worse, what the system understands is money. Blowing up a gas pipe, or poisoning some people, or wiping some retirement accounts, translates to a lot of $$$. Having your e-mail account pop up on HIBP translates to approximately $0.

The point I'm trying to make is, in the proprietary world, software is an artifact of a mesh of companies, bound together by contracts. Down the link flows software, up the link flows liability. In between there's a lot of people whose main concern is to keep their jobs. It's not perfect, and corporate world is really good at shifting liability around, but it's doing the job.

In this world, FLOSS is a terminating node. FLOSS authors have no actual skin in the game - they're releasing their code for free and disclaiming responsibility. So while "given enough eyeballs, all bugs are shallow", most of those eyes belong to volunteers. FLOSS security relies on good will and care of individuals. Proprietary security relies on individual self-preservation - but you have to be in a position to threaten the provider to benefit from it.


> As opposed to the “security” of closed source software?

No, I don't think that's what they were saying.


Security by obscurity works. It works, no matter how hard people regurgitate the BS that it doesn't.


The context of security by obscurity is usually data that would attract people who would specifically target you as a mark worth a lot of money to them, rather than opportunistically target you because you are an easy mark worth a quick profit of unknown value.

If someone wants to rob you, a door lock isn't going to stop them. Likewise, if someone wants to pwn you, a little obfuscation isn't going to stop them.

Security by obscurity only works when you aren't known to be worth the effort of targeting specifically, so nobody bothers. Much like very few people bother to beat my CTF. I'm sure if I offered a $1,000 reward for beating it, the number would increase tenfold, because it would suddenly be worth the effort to spend a bit of time attacking. But as it stands, with no monetary incentive, the vast majority (>99%) give up after a few days.


Yeah, but how will an attacker know to target you if they don't even know you have anything valuable, and you are flying under the radar, hm?


Except this was found eventually.

How many fifteen-plus-year-old problems exist in closed source codebases?


You mean the ones that also "get found eventually"?

Ignoring bad SSL certs in particular is one issue that can be reliably and easily tested regardless of how available the source of a given piece of software is. It's a staple of Android app security testing, even.


Seems like something like this might be searchable by regexes? "/.*ignore.*ssl/i", at least in reasonably popular packages like qBittorrent or Transmission. I'm sure some regex gurus could come up with some good ones.
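
A hedged sketch of that idea in C++ (in practice you'd just use grep -riE; std::regex stands in for it here, and the pattern is the one above): read source lines on stdin and flag anything matching ignore...ssl, case-insensitively.

  #include <iostream>
  #include <regex>
  #include <string>

  // Flags any line matching the case-insensitive pattern, e.g. when fed
  // a source tree via: cat src/*.cpp | ./scan
  int main() {
      const std::regex pattern(".*ignore.*ssl.*", std::regex::icase);
      std::string line;
      std::size_t lineNo = 0;
      while (std::getline(std::cin, line)) {
          ++lineNo;
          if (std::regex_match(line, pattern))
              std::cout << lineNo << ": " << line << '\n';
      }
      return 0;
  }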


A guess that's probably correct: many torrent sites (from which the client downloads .torrent files when given a URL) have infrastructure that sucks. This includes expired certificates. Users don't want to deal with that shit. Developers don't want to deal with users complaining. It's not really considered a risk, because lots of those torrent sites (used to) just use HTTP to begin with, so who cares, right?



