Thanks for flagging this. You’re right that the issue sat too long without a response, and that’s on us. We’ve now replied on GitHub and are actively looking into it. It appears the crash may be related to CPU feature mismatches (e.g. missing AVX support when using pgvector), especially in emulated environments like ARM Macs running x86 containers. We’ve asked for system details to help confirm. Happy to dig in and resolve it quickly from here.
This is debatable. Perhaps the poor quality management and lack of rigour seen in various Boeing production facilities are a case of normalization of deviance. However, the original problem with the 737 MAX stemmed from top management's decision not to invest in a new airframe design for cost/strategic reasons, which obliged designers to implement various unsafe workarounds to accommodate larger, more fuel-efficient engines that made the plane unstable, and from the ruthless silencing of engineers who argued that this was unsafe. The problem was compounded by the FAA's move toward increased delegation of safety oversight to designer-manufacturers, which left it with insufficient ability to assess the new design independently. These are both big, important decisions made by the top leaders of the two organizations, rather than the slow, progressive evolution, driven by people's efforts to optimize their own bit of the workplace, that characterizes drift to danger.
Yes, but it was also a problem of relying on just one AOA sensor. That was approved across the chain of command and deemed "good enough". It's pretty clear to anyone that a critical system such as this should not rely on just one sensor, which may be faulty.
But they thought that, since the pilots would just have to do the 'runaway stabilizer procedure', it would be enough. They didn't consider how startling and different the system would feel.
"Being cheap" is a pressure from management to increase production in order to avoid economic failure. "Being lazy" is an effort by frontline workers to improve efficiency and avoid being swamped by production demand. One of the points of the diagram is that these partially cancel each other out, but the net effect of adding these two "vectors" is to push the system towards the failure boundary.
I have a smartphone app that scrapes replay TV listings for a few shows that I like to watch at the gym and allows me to download the low-quality media stream to the phone to view offline ad-free.
I released the Rust library that downloads and reassembles media segments from a DASH stream (https://github.com/emarsden/dash-mpd-rs). Won't release the web scraping bits because they are against website terms and conditions, and because annoying countermeasures will be implemented if too many people use them.
From the README: “Stirling PDF does not initiate any outbound calls for record-keeping or tracking purposes”. Beyond auditing the code, how could a potential user verify this claim in advance, and how can a web-based app help support such a claim (in particular when the app does need to make some web requests to operate, but only to a restricted list of URLs that might be listed in a manifest along the lines of a Content-Security-Policy for instance)?
This is a concrete problem when deploying apps that need the user to “upload” some sensitive content.
If you're self-hosting on kubernetes, you can set up network policies with deny-all egress rule for this deployment/pod. This would block all outward network calls.
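A minimal sketch of such a policy, assuming the deployment's pods carry a label like `app: stirling-pdf` (adjust the selector to match your own labels): listing `Egress` in `policyTypes` while providing no `egress` rules denies all outbound traffic from the selected pods.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector:
    matchLabels:
      app: stirling-pdf   # assumed label; match your deployment
  policyTypes:
    - Egress
  # no egress rules listed, so all egress from matching pods is denied
```

Note that NetworkPolicy objects are only enforced if your cluster's CNI plugin (e.g. Calico or Cilium) supports them.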
> in particular when the app does need to make some web requests to operate
A web app doesn't need to make any outbound web requests to operate. The user interacting with the web app is the one initiating the requests.
You can give access to the app through an HTTP proxy and filter out any outbound requests from the web app, or even avoid configuring network routing for the server hosting the app. That leaves you with only JS-initiated requests in the rendered pages of the app.
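One way to sketch the proxy-based filtering with Squid (the domain below is a placeholder for whatever endpoints the app legitimately needs):

```
# squid.conf fragment: allow outbound requests only to an explicit allowlist
acl app_dests dstdomain .api.example.com
http_access allow app_dests
http_access deny all
```

Anything the app tries to reach outside the allowlist is then refused at the proxy, and the denials show up in Squid's access log for auditing.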
Sure, but these types of applications are running in a web browser sandbox, which benefits from enormous engineering resources to protect the host computer from malicious actions by the remote code. I'm wondering whether this execution environment (augmented with some policy mechanism allowing apps to declare their URL access needs, a little like an AppArmor profile or a network firewall policy) could also provide some guarantees concerning privacy or information security.
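The closest existing browser mechanism is probably Content-Security-Policy: a `connect-src` directive constrains which origins the page's scripts may contact (the domain below is illustrative):

```
Content-Security-Policy: default-src 'self'; connect-src 'self' https://api.example.com
```

The catch for this use case is that CSP is set by the server delivering the app, i.e. by the party you are trying not to trust; supporting the kind of verifiable claim discussed above would need the policy to be pinned or imposed on the user's side instead.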
Just put a sniffer or network capture tool like Wireshark in between. Additionally, you could restrict the app's network access entirely to just your local home network.
It seems that there is some missing tooling to make this convenient.
You can run a local bundle of HTML/JS/WASM in a web browser instance that you isolate (for example with firejail) to prevent network access. You distribute it as a zip/tgz, but it's not obvious how to handle updates without a full redownload. Distributing with a full Electron-like wrapper is obviously overkill.
If you're running a web app that's hosted elsewhere (which will be much more convenient for most people), your web browser or the software isolation functionality (or firewall/proxy) needs to distinguish between the initial resource loads (approve) and later sneaky logging requests (ban).
There are Android applications such as TrackerControl that have related functionality (operates as a local VPN to filter all network requests and block tracking) but I don't know of convenient tools for the desktop (Linux, in particular).
Historically, the fact that lives of the public are clearly on the line was part of the argument for allowing partial delegation of safety oversight. The idea has been that any engineer/manager making these decisions will be regularly flying themselves/loved ones, so is going to be suitably cautious. This differed from the situation in coal mines for example, where the miners exposed to the risk were socially/culturally disjoint from engineers/managers.
There are several good reasons for allowing it. One is that it's difficult for a public inspectorate/regulator to maintain the necessary levels of expertise to assess such complex systems (and increasingly so with technological progress). Furthermore, people working inside the industry have much better access to information about the risks than an outside inspector has.
A second reason is simply costs to the public. In 2019, the acting FAA administrator Dan Elwell testified to the US Senate after the 737 MAX disasters that bringing all delegated oversight back into the FAA would require 10000 extra staff and USD 1.8B in costs. There are fairness/democratic arguments for having the costs borne by the industry (and thus indirectly by the privileged portion of taxpayers who consume air travel) rather than by all taxpayers.
There are indeed several other examples of this partial delegation of authority for decisions concerning safety, either to industry players or to third parties, though the FAA "ODA" mechanism is probably the most prominent. EASA's mechanism is similar in many respects, but more focused on properties of the oversight entity within the designer-manufacturer firms than on properties of the individuals doing the work. I wrote a discussion paper on precisely these issues a few weeks ago, which discusses some of the questions related to independence from commercial pressure, access to expertise, and so on:
https://github.com/emarsden/pgmacs
(self-plug)