OPNsense and HardenedBSD are parting ways (opnsense.org)
75 points by zdw on April 23, 2021 | 49 comments


> Since Shawn has been a core team member due to the involvement into our operating system, we decided to remove him from our core team as well.

I am but an ignorant bystander, but the way this is phrased, it sounds like a quorum of OPNsense leadership unilaterally closed ranks against someone who likely had significant emotional investment in the project, and did so specifically after determining that they had extracted the maximum value they could from them.

As written, it sounds deeply unempathetic and, at least for me, actively fosters a desire to see the project burn, replacing the blossoming curiosity I had prior.

Is this just badly worded?


From #OPNsense on freenode, after the forum post was posted this morning:

    <> fitch: But I really do hope the parting is on good terms with lattera
    <@fitch> no, we are cool. obviously it's not a short term decision that breaks everthing next week. For the rest of the year this doesn't change much
    <lattera> yeah, we're on good terms. I'm somewhat disappointed, but I understand
    <lattera> once opnsense switches to fbsd, I'll likely migrate to bare hbsd on my firewalls
(fitch is Franco Fichtner. lattera is Shawn Webb.)


Thank you, that seems reassuringly friendly in the presence of philosophical differences. It's heartening to hear.


Consider though

https://twitter.com/lattera/status/1385299526270046209?s=20

> I resigned shortly after the notification. My resignation was separate from their decision to transition to FreeBSD.

https://twitter.com/lattera/status/1385299908656345095?s=20

> Ah, re-reading the original announcement sent to me, they did decide to remove me from the core team. I had misread the private email.


Wouldn't OpenBSD be a better choice than either FreeBSD or HardenedBSD if security is a focus? It also has better wifi support and a much more modern PF firewall. OpenBSD is really "the" router OS. It makes a lot more sense. (no pun intended)


OpenBSD security is overstated in my opinion (and the opinion of others[1]), and I have serious doubts it would be worth the work to migrate to unless FreeBSD is absolutely terrible.

1: https://isopenbsdsecu.re/quotes/


It's interesting that OpenBSD's security reputation is such an article of faith that this comment is light-grey now. I get that it's not supremely substantive, but "OpenBSD's security is overrated" is not a controversial statement in the software security community. They built their reputation on the 1990s OpenBSD security audit, which was a real innovation, but one that every other mainstream operating system has since picked up. Linux probably gets substantially more audit attention now than OpenBSD does, and is quicker and more open to deploying kernel countermeasures. I'd probably trust a minimal Linux distro with a decent kernel configuration more than I trust OpenBSD.

Later

The comment I'm responding to isn't light grey anymore; the perils of flouting the guidelines and talking about downvotes! I wasn't objecting to the votes, though, just noting that the C.W. on HN seems different than in specialized security communities.


Though security isn't just about how secure a thing could be, but how secure it is in common practice. I would trust a minimal Linux distro with a decent kernel configuration as well, but I wouldn't trust Ubuntu server with systemd and who knows what else by default. I'd much rather have OpenBSD base; and OpenBSD base is fundamentally a minimal install with decent kernel defaults.

The people in the OpenBSD community tend to focus on the minimalist configuration by default, which helps security in practice; and things are easy to configure. Running a postgresql database on top of OpenBSD base with PF blocking everything except whitelisted clients will go a long way for security. For a lot of servers, OpenBSD base with a few libraries is enough for a server application, and the work put into making base minimal and secure by default is great for that.
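A minimal sketch of that setup, as a hypothetical pf.conf (the client addresses are made up, and "egress" is pf's automatic group for the default-route interface): everything is blocked by default, and only whitelisted clients may reach PostgreSQL.

```
# Hypothetical pf.conf: default deny, whitelist PostgreSQL clients.
table <pg_clients> { 192.0.2.10, 192.0.2.20 }
set skip on lo
block all
pass out on egress                # the host itself may make outbound connections
pass in on egress proto tcp from <pg_clients> to (egress) port 5432
```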


I would trust a distribution with systemd _more_ than I would trust one without it. Systemd has a lot of knobs for sandboxing services now.


OpenBSD has much easier to use features such as pledge and unveil as part of libc, and led the way in W^X.


I know, but just setting MemoryDenyWriteExecute=yes is even easier.
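For comparison, a hypothetical unit fragment (the service name and binary path are made up) showing that knob alongside a few of the related ones:

```ini
# /etc/systemd/system/example.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/example-daemon
# Refuse mappings that are both writable and executable (the W^X analogue):
MemoryDenyWriteExecute=yes
# Allow only a baseline syscall set, similar in spirit to pledge promises:
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM
# A few of the other sandboxing knobs:
ProtectSystem=strict
PrivateTmp=yes
NoNewPrivileges=yes
```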


I’m not an active OpenBSD user, but I’m reasonably confident that the OpenBSD W^X check kills any violating process since about 2016. Applies to all executables (not just system services/daemons with conscientious maintainers who think to apply that setting) except if the filesystem is mounted wxallowed. I think that’s a strict improvement.

I like systemd on Linux, but I think the systemd sandboxing is chasing OpenBSD, not the other way around.


Doesn't systemd's sandboxing do seccomp-bpf? Pledge is an interesting system, but it's not BPF.


systemd’s syscall filtering is pretty good and comparable to pledge in terms of ease of use — both let you use nice aliases for sets of syscalls, like stdio or @aio. I’ve only _played_ with both, so my opinions are weakly held, but pledge+unveil excel for designing a system where I have full source code control/authorship, and systemd’s sandbox excels for deploying third party services. pledge+unveil just make it so easy to incrementally drop privileges after your initialization phase(s), where for systemd, so far as I know, you’d have to have separate service files for the smaller sandboxes of subsequent phases — or better yet, use seccomp-bpf directly, or even better... a pledge-like wrapper on top.

Ideally your app wouldn’t need any more syscalls at startup than it needs later on though, so it’s quite legitimate to argue it’s no big deal at all, or that systemd encourages better design discipline.


Can you link to any resources where the current security community hangs out? Blogs, articles...? Maybe my google-fu is failing, but I mostly find someone trying to sell me a product whenever I search for such topics.

[1] During the '90s and early '00s I was a sysadmin interested in security; for the past decade and a half I have been a dev/devops for various small and medium firms and have had security "beaten" out of me. I am now back in a sysadmin role, but most of the forums, mailing lists, etc. are gone or shadows of their former selves.


> is not a controversial statement in the software security community.

Funny how often this happens. Maybe we need to do a better job of conveying these sorts of things; it happens all the time that the security community at large considers X or Y obvious and well known, and then you find that very large portions of the broader tech community don't know it at all.


I want to be careful not to be seen as speaking for that whole community; I'm just saying, if you were in a gathering of vulnerability researchers, and you said "OpenBSD's security reputation is really overrated", you'd get shrugs, not gasps.


Or maybe that’s how consensus works in general. There are propositions that are truthful in the end, but until they have been written down, individually verified, and/or accepted by the majority, they remain unproven.

So you might know something as an obvious fact, like “hey, it’s raining”, and you might be objectively right, but it could be literally not-a-fact until everyone in the room also agrees.

Some security researchers might “just know” that OpenBSD security is “duh, ha ha”, but until someone formally puts it in writing as “OpenBSD security is overrated”, they don’t necessarily have the words to put it together. And once it has been written down, the security sub-community can agree, and the greater software engineering community then has to verify the phrase and the agreement process within the security-minded, and only then can the phrase become a “well-known fact that everyone knew about since forever”.

But I think that’s how facts and consensus work.


The issue here is that there is seldom consensus in the software security community when it comes to opinions of certain projects. This is doubly so when the project makes some good choices, and some not-so-good choices, like OpenBSD does. For a similar discussion, ask your local security engineer what they think about WhatsApp or Signal.


C.W.? Common wisdom?


“Conventional wisdom”


I've seen this before and my thoughts on it are simple: ok, so OpenBSD does some silly and archaic things, but where are the exploits?

I keep hearing that the security community has doubts about OpenBSD, but I don't hear about those doubts being validated.


Well, that could also be related to OpenBSD barely being used...


Publishing content relating to security can be seen as a kind of challenge. See the blogs of many security researchers for examples. I can't see why this same effect would not apply to OpenBSD to a certain extent.


Oh, it does. It’s just that it doesn’t apply to an extent necessary for someone to spend enough time on it.


Where are the exploits for TempleOS? Come to think of it, where are the users, and where do most security researchers direct their attention to?


If SerenityOS can be a target for exploits then surely OpenBSD can.

https://devcraft.io/2021/02/11/serenityos-writing-a-full-cha...


OpenBSD is targeted and does get CVEs, though... the question is about the amount of attention compared to other platforms.


Well, popping a toy OS for a CTF challenge does make it a target, yes.


That site doesn't make a very cogent point and likes quoting tweets without substance; just because you have credentials doesn't mean you don't have to make logical arguments. It does have some criticisms that can be taken into account, though some are obsolete, and it does show that lots of other OSes have security mitigations too (many of which had them before OpenBSD), but it really does appear to be written by someone with an axe to grind. If you want the other side, [1][2][3] from the OpenBSD mailing list have some discussion.

From my understanding, OpenBSD is partly a research OS and written by developers for developers. They want other OSes to grow and get better and adopt more security features; I don't get the feeling they are trying to compete with other OSes, but rather cooperate. If you like it great, if not, then you are free to go somewhere else. Linux appears to have a very competitive environment with a lot of toxic power games and the desire to prove that it is the One True OS. The BSDs thankfully have less of that; I use Linux, BSD, Windows, and Mac OSX. I like all of them for different reasons; I hope they all continue to improve. There is room enough for all.

[1] https://marc.info/?l=openbsd-misc&m=158906273407900&w=2

[2] https://marc.info/?l=openbsd-misc&m=158897658715925&w=2

[3] https://marc.info/?l=openbsd-misc&m=158886904708799&w=2


Intrigued as to why you think it's overstated. It seems pretty robust to me. Those quotes are fairly snipey, and there's quite a bit of history behind the people that may bias their opinions.


The website includes actual analysis on OpenBSD security as well.

OpenBSD has this reputation of having some god-like security that's leaps and bounds above every other OS. I believe its security is actually fairly similar to alternative OSes like Linux, and perhaps even weaker now: containerization is getting really popular on Linux, which lets Linux somewhat catch up on what I consider one of the biggest advantages OpenBSD had (pledge).

Most of the advocacy I see about OpenBSD security seems to be misleading and/or vague. OpenBSD's own website[1] proudly claims that OpenBSD has had "Only two remote holes in the default install, in a heck of a long time!". What is "a heck of a long time", and where's the info about any vulnerabilities other than an RCE in the default installation?

1: https://www.openbsd.org/


One example in my wheelhouse illustrates the not-always-clever attitude OpenBSD takes toward security.

If you need certificates from the Web PKI (for your SMTP server, or your IRC server, or any other TLS server, but most obviously your web server) you likely want to talk ACME to Let's Encrypt or another CA.

The popular way to do this is Certbot, a program written in Python. Clearly that's a lot of dependencies and thus potential attack surface area so not ideal.

A popular alternative is acme.sh which is Unix shell code. So this is potentially a far smaller attack surface but shell is hardly the most auditable or secure language by default...

OpenBSD ships acme-client which is written in C.

Now, acme-client uses lots of OpenBSD-flavour tooling to improve its security: components of acme-client are sandboxed off from each other and the rest of the system using pledge() and so on, such that the components with access to your private keys are not also doing TCP/IP. On the surface, then, although it's written in an unsafe language, this looks like a good trade, and it exactly fits OpenBSD's approach.

However, the thing about ACME is it's built out of existing standard components such as Certificate Signing Requests. So instead of attack surface in Python, or Bash, or C, and trying to mitigate it with local sandboxing, you can choose to hand the CSRs to a completely separate system and so to literally airgap your private keys and all the TLS server code from your ACME implementation if you want.

You can use CSRs with Certbot, you can use CSRs with acme.sh, but, they aren't an option with OpenBSD's acme-client at all.
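That separation can be sketched with standard tools (file names and domain are hypothetical): the key and CSR are generated on the protected host, and only the CSR is handed to the machine that speaks ACME.

```shell
# On the protected host: the private key is created here and never leaves.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.org.key -out example.org.csr \
    -subj "/CN=example.org"

# On the ACME-facing host, hand certbot only the CSR; it never needs the key.
# (Commented out: it requires a real domain and an ACME account.)
# certbot certonly --standalone --csr example.org.csr -d example.org
```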

acme-client is the Beast: armored so that a US President can be driven somewhere very dangerous and most likely get out of there alive. Just using CSRs is having him make a video call and never leave the safety of the Oval Office.


> However, the thing about ACME is it's built out of existing standard components such as Certificate Signing Requests. So instead of attack surface in Python, or Bash, or C, and trying to mitigate it with local sandboxing, you can choose to hand the CSRs to a completely separate system and so to literally airgap your private keys and all the TLS server code from your ACME implementation if you want.

> You can use CSRs with Certbot, you can use CSRs with acme.sh, but, they aren't an option with OpenBSD's acme-client at all.

acme-client uses a privilege separated process model (using pledge and unveil) so that private keys are never loaded or accessible to the network-facing process.

And it's not just acme-client that does this. Most OpenBSD service daemons, like httpd and smtpd, perform private key operations in a privilege-separated subprocess. The technique won't win any awards for request throughput, but it reflects OpenBSD's "secure by default" ethos, which of course always has to be understood in the context of what they're trying to accomplish.

Air-gapping the long-term account private keys for a semi-automated Let's Encrypt renewal process sounds more aspirational than anything. If you're going to attempt that, the lack of support in acme-client is the least of your problems in terms of initial and long-term operational complexity.[1]

OpenBSD tools are primarily designed for the needs of OpenBSD maintainers, and secondarily for the needs of typical OpenBSD users. These people aren't trying to build the next AWS using OpenBSD, nor are they trying to secure Apple's or the NSA's proprietary key escrow system root of trust key. "Secure by default" has to be understood in that context. And so the fact other tools support more features, some of which can be used to theoretically implement hypothetically more secure systems is sort of beside the point. Moreover, a general rule of thumb is complexity is anathema to security, and OpenBSD is rigorous about providing the fewest knobs possible while minimizing administrative burden and effective risk for their typical user base. This is why the utility and pervasive use of pledge and unveil are simply incomparable to the Linux equivalents, seccomp and filesystem namespaces; and why the fact that OpenBSD services privilege separate private key operations is something most OpenBSD users, let alone non-OpenBSD, don't even realize.

I don't really care about the exploit writers' opinions of OpenBSD. How many of them host and use their own public-facing web or e-mail servers? Attackers and defenders have completely different perspectives that sometimes lead to completely different opinions. And I definitely don't care about the consensus opinion of the security community considering that over the past decade the majority of self-described and even employed "security engineers" are neither experienced systems programmers nor experienced system administrators. Prowess with nmap and Wireshark seem to make for "hacker" credentials in this community. The depth and sophistication of their knowledge is about the same as many Node.js-cum-Rust programmers who write "parsers" and "servers" by stitching together a bunch of crates. That's not nothing, but I'm not going to attach much worth to their opinions regarding complex architectural or algorithmic problems. How many Kubernetes engineers can even understand the value of, let alone conceptualize or even implement, a pipelined series of network daemons that rigorously preserves backpressure? Why would I value their opinion on how to efficiently scale services when their measure of scalability and survivability is the ease with which you can throw more hardware or AWS account credits at a problem, and generate more Prometheus data sets?

People who use OpenBSD over the long term know perfectly well what "secure by default" means, even when they themselves might have made different choices.

[1] Not that I think air gapping is generally impossible or impractical. I've defended DNSSEC on multiple occasions for preserving this ability. And I do a lot of work with HSM, secure enclave, and smartcard integration, so I very much agree with and promote the idea of moving private key operations entirely off network-facing hosts, whether air-gapped or not. But WebPKI infrastructure tends to be operationally quite shallow and quite dynamic, making manually administered key signing operations less practical at any scale. But if someone can make it work, more power to them. In general this use case is too niche for OpenBSD to emphasize--either not enough potential utility, or invites too much complexity. The exception would be OpenSSH's recent embrace of FIDO HID USB keys, but it's not really an exception considering typical interactive ssh workflows, and considering the implementation complexity relative to the traditional ecosystem involving PKCS#11, PC/SC, PIV, etc. I don't think those latter standards are deal killers, but I totally understand and appreciate why they wouldn't see much attention from OpenBSD developers. OpenBSD continues to embrace and enhance IPSec and IKE despite the complexity, even as they integrate Wireguard; they're capable of making context-sensitive analyses.


> Attackers and defenders have completely different perspectives that sometimes lead to completely different opinions. And I definitely don't care about the consensus opinion of the security community considering that over the past decade the majority of self-described and even employed "security engineers" are neither experienced systems programmers nor experienced system administrators. Prowess with nmap and Wireshark seem to make for "hacker" credentials in this community.

TYVM for this healing missive, but especially the nugget I pull-quoted. That sums up a lot of what I had come to think about "security" as a career: You need to be an engineer first.


The unfair ad hominem in the middle of your comment detracts from your overall message, IMO. Plus, I disagree with the technical assertion: when crafting defenses, you absolutely want to be asking attackers to evaluate them. Time and time again it has been shown that mitigations designed in a vacuum often fail “surprisingly” when put to the test, or sometimes even actively hurt security because they add complexity and open up holes. And in OpenBSD’s case, I generally agree with the criticism that some of the “novel features” to improve security seem to have inaccurate or outdated threat models behind them, and I say this as someone who is just tangentially familiar with the edge of exploit development.


> The unfair ad hominem in the middle of your comment that detracts from your overall message, IMO

I regretted it shortly after adding that part, but decided to leave it as I feel it dishonest to hide such comments after the fact. FWIW, I think Node.js, Rust, and Kubernetes are great technologies that fully merited their enthusiastic adoption. But it's precisely their ability to enable engineers to implement often sophisticated functional solutions that means the fact of doing so is much less a reflection of an engineer's experience and knowledge with the underlying problem space than was the case with different (often older) systems. Case in point: somebody arguing that Make is the dumbest, most useless tool and/or syntax, Go or Rust/Cargo are the gold standards for build systems, and then adding as an aside the observation that they don't work well in multi-language contexts or when needing to apply ad hoc (i.e. non-blessed) source transformations, which can sometimes be a hassle. It makes your head explode. Their opinion isn't even wrong; it's just oblivious to the design goals and the broader problem space, and that in some respects they're comparing apples and oranges despite providing nominally similar high-level functionality. How many Go or Rust/Cargo advocates even point out the problems with relying on timestamps? It's totally outside their focus despite being one of the most indisputable short-comings. (Though, at the same time also being one of the easiest to explain and justify given context--lack of forward monotonic change counters provided by filesystem APIs and the costs of adding a default file watcher capability to a tool that by original design was stateless between invocations.) Bazel engineers, of course, are quick to point this out, but they're also perfectly willing to boil the oceans in an attempt to more comprehensively solve the various problems in wrangling the chaotic world of FOSS software; more power to them.

> some of the “novel features” to improve security seem to have inaccurate or outdated threat models behind them

Can you name one technique? I'll even make it easier for you--name one that the Chrome architecture team would outright categorically oppose adding today were they maintaining OpenBSD? (There are plenty of things the team can't or won't add to Chrome for very pragmatic reasons.) The only one that comes to mind might be syscall-origin-verification, but even then it's too early to tell, and implications are beyond inaccurate that the OpenBSD team was oblivious to its limitations. (And FWIW, when I first read about it also seemed completely pointless to me until I dug up context from the mailing list to understand the motivations and goals.)

OpenBSD was one of the first systems (arguably the first, depending on how you define things), to comprehensively implement and integrate ASLR, stack protectors, W^X, and various privilege separation techniques. (And I say that having contemporaneous knowledge of pre-existing examples, such as patches to GCC; that is, I know these techniques didn't originate with OpenBSD by any stretch.) At the time people said everything they say now about modern mitigations--circumventable, little effective security, more rigorous alternatives, etc--despite the fact that they've become table stakes today even while circumventions have become more sophisticated and in some cases even automatable.

For years people argued you absolutely had to distinguish /dev/urandom from /dev/random, and only in the past couple years have people finally come around to understanding that theoretical and practical security are different, and that the practical benefits of a unified approach are undeniably superior.

I mean, just go through the list here: https://www.openbsd.org/innovations.html

Which ones even today are so useless that their continued use is unsupportable? Obviously in many cases there can be differences of opinion regarding relative merit, both then and now, but that's a far cry from saying that OpenBSD developers were naive or ignorant about their utility. In fact, in every case I can think of the naysayers were proven wrong--not because they were wrong about basic facts like circumvention, but because their calculus regarding cost/benefit was mistaken, often for precisely the reasons I stated before--e.g. exploit writers aren't implementing and hosting services, don't appreciate the relative costs and burdens from a systems programmer perspective (particularly one interested in targeting OpenBSD), and tend to assume that OpenBSD is choosing one approach in lieu of another. IIRC, on ARM and MIPS, for example, OpenBSD has removed almost all ROP gadgets even while continuing to add substantially less comprehensive ROP mitigations to cover other architectures and subsystems where it's been more difficult to remove gadgets.

AFAIU, the Linux PaX team has had a low opinion of OpenBSD. Of course, they had a low opinion of many mainline Linux kernel maintainers, too. I never cared to follow the debates and recriminations, but objectively speaking much like OpenBSD their choices seemed to have generally been vindicated over the long term. AFAIU, almost every mitigation they implemented and advocated for the Linux kernel was eventually added in substantially similar form to Linux and its ecosystem, even though it took well over a decade in some cases.

If history is any guide, criticisms of syscall-origin-verification will soften. I'm hardly of the opinion that OpenBSD design decisions are the most rigorous and appropriate answers to more abstract security dilemmas. But if you start from the perspective of maintaining and extending the received Unix application programming environment, and doing so for something more than running and orchestrating hypervisors, Erlang servers, WASM modules, etc, then their track record is incredible. I still think Capsicum is underappreciated, and if I had the time and inclination (which I don't), were I an OpenBSD maintainer it's something I would emphasize. (Ditto for FreeBSD, where the Capsicum API is nearly complete--I'd focus on increasing usage throughout the system.) But that's different than saying I think unveil and pledge are inferior alternatives.


I am not on the Chrome architecture team, so I cannot predict with certainty what they would choose to do, but I can at least provide my opinion and hope that it doesn't stray too far from what they think ;)

As for an overall summary, I think it is accurate to say that the OpenBSD people are obviously not idiots, and that they do make a lot of right choices. However, those decisions are (as I have mentioned elsewhere: https://news.ycombinator.com/item?id=26911420) mixed with some questionable ones. Overall I think they do adopt a lot of the right things–the ones you mentioned–and I do agree that they are fairly quick to do so, although sometimes they seem like they want to claim a bit more credit about being "first" or designing something than they really should. But those are social issues, so it's not really worth talking about them here.

My main issue is that some of the OpenBSD mitigations seem to be designed by people who either have an outdated view of how modern exploits work, or are just entirely wrong. They sound plausible if you think about them for a bit, but if you look at real exploits you realize that they are not worth it, or that they are just not what real exploit chains look like. OK, let's break this apart a bit.

What does it mean to have an outdated view of exploits? It means that you're protecting against things that nobody does these days, because other mitigations exist to prevent them. If your mitigation protects against something that people used to do when NX didn't exist and ASLR was nonexistent or weak (perhaps due to lack of VA space), then it's clear that you're living in the past. Similarly, if your mitigation is intended to protect against some form of attack, but these kinds of attacks never actually happen in the wild, or protecting against them opens up an alternative vector that you leave open, then your model is incomplete. Finally, when judging a mitigation, you don't look at what fraction of some exploit it mitigates when it is working perfectly; you need to consider it in the context of other things failing. ASLR is a "weak" mitigation because one partial, mildly controlled read might be enough to break it. Privdrop is a "strong" mitigation because it requires a full kernel chain to undo. That doesn't mean they aren't both good to have, but it is important to keep this in mind when evaluating this kind of stuff. Anyway, onto some of the worse mitigations.

Syscall origin verification seems like a weak hardening technique, designed without a clear understanding of the kind of attacker it's meant to stop. The reasoning seems to be to prevent an attacker from writing a syscall instruction into a JIT region, but attackers at that point can usually craft shellcode that gives them arbitrary read/write (or they have it already), which lets them jump through libc anyway. As such, the general understanding is that it is pretty much useless at stopping an attacker with the level of control it claims to protect against.

ROP gadget removal is similarly problematic, because "removal" in this case doesn't mean "full removal", it means "reduction in their frequency". A ROP chain doesn't actually take that many gadgets, though, so you can do things like remove half or even more of all gadgets with pretty much zero impact on the ability to ROP. (And I'm not even going to get into the fact that using dumb gadget finding scripts, like OpenBSD did, is not how you measure gadget reduction.) And even once you have all the ROP opportunities removed, you need to then start looking at JOP, because that's what attackers are going to pivot to (heh), which AFAIK is not yet being considered.

I'll do one more, but hopefully you get the idea by now: trapsleds are basically considered a joke. They are meant to protect against NOP sleds in ROP payloads, but those don't exist: when people ROP, they jump precisely to where they want to go. It sounds good on paper, but in reality the mitigation just doesn't match up with any real attacks.

There's a lot of good going on in OpenBSD, but there are also places where they seem not to have caught up with what is going on in the real world. And I think people's general criticism of the mitigations follows along those lines.


The "trick" is the default install bit. OpenBSD's default install is very limited. Since there's basically nothing enabled, the surface is limited.

They still pop up every few years or so.


IIRC, "a heck of a long time" refers to the time elapsed since the initial release of OpenBSD in July 1996.

Patches for vulnerabilities are listed per release on https://www.openbsd.org/errata.html


You need to be aware that while pledge is a security technology, Linux containers aren't. Pledge was designed as another security layer, while containers were designed as a "management" or "separation" layer, not strictly as a security measure; read up on how bad it is to run things as root in a container.


They're like the Raspberry Pis of OSes: great to tinker with, and great to use if you have a use case that fits. The main focus of OpenBSD's security seems to be a low attack surface and slower adoption of certain things; the network stack is still single-threaded, for example. It's great for a home firewall, but their philosophy is paranoid and slow-moving to the point where they're great for what they are but not very useful in actual commercial data centers.


Perhaps, but it is different enough that migrating would carry a significant cost.


My take, as an infrequent BSD OS analyst but a lifelong Linux OS analyst, is that this is a grsecurity redux.

Sad case of a landmass breaking off into an island with a much smaller ecosystem, befitting only the current biodiversity (or should I say, cyber-diversity).


Not sure this is a great move.

HardenedBSD has some additional security mechanisms that vanilla FreeBSD does not:

https://vermaden.wordpress.com/2018/04/06/introduction-to-ha...


One of the most important reasons FreeBSD hasn't incorporated the patches from HardenedBSD was that their code quality was disputed, and historically the author had issues with undergoing reviews and applying requested changes. HardenedBSD may do a very good job of PR about their "superior security", but one may wonder whether a system without a badly implemented feature isn't more secure than one with it.


> the author had issues with undergoing reviews and applying requested changes.

Does this mean the author would send code for review and never actually action the change requests? Or was it more that they sent patches and expected them to just be merged?


> Over time we have seen that building on top of HardenedBSD not always guarantees interoperability,

It sounds like interop was the deciding factor here. Without speculating, does anyone know details? What kinds of issues were difficult to resolve on HardenedBSD compared to FreeBSD?


From #OPNsense on freenode, after the forum post was posted this morning:

    <> > issues we or our users run into are not always very widespread and have the tendency to complicate tracking issues
    <> ... what issues? is this kernel, base, or packages?
    <@fitch> https://github.com/opnsense/ports/issues/95 https://github.com/opnsense/src/issues/91 https://github.com/opnsense/core/issues/4263 are just three of those



