There's a better, longer video on Reddit. At the beginning of the video, it sounds like the Ram Air Turbine (RAT) is deployed, which would suggest a dual engine failure.
I'm assuming the Ram Air Turbine gives extra evidence of a dual engine failure (or at least failure of the engine that was generating power). The engine spools down, power is lost, and the air turbine has to be deployed.
Those things are tiny and very transparent (since it's pretty much a propeller) and video compression, plus it's a video of a screen, will eat it up against a clear sky.
I jokingly mentioned something like that to my wife when Trump was elected. Now, given the current circumstances, I may not get another chance to visit the US. Still, I’m grateful for the time I spent in California, the people I met, and all the beautiful roads I had the chance to explore.
In 2021 the company received €200 million from the Croatian government and the EU to develop a self-driving car, presumably to compete with US companies. However, they ended up using Mobileye technology. The initial goal was to have the cars on the road by 2024, which everyone knew was unrealistic. Now, the launch is supposedly targeted for 2026, but that's probably unrealistic as well.
During the presentation, they attempted to summon the car twice, but it failed to move. A photo from the event shows an employee holding what appears to be an industrial-grade remote controller.
You don't achieve fault tolerance solely by using Erlang; Erlang does not inherently 'achieve fault tolerance' for you. Instead, you make your system fault-tolerant through deliberate engineering. Erlang provides tools and design guidelines, but the responsibility for achieving fault tolerance ultimately lies with you. Source: I implemented and operated a large Erlang system for approximately 3 years.
that's always true. i think the author is interested in code examples of such.
and unlike many other frameworks/tools, erlang provides a great pit of success for implementing fault tolerance - e.g. if you follow common/best practices, you'll achieve fairly good fault tolerance.
The big benefit in my experience was that I could have a program with real users that did have errors (from me being new to Elixir and not knowing better) and still not experience downtime.
Instead, CPU or memory usage would climb over time, hit the VM limit, and the process would be killed and restarted.
So later when I noticed this, I could debug and fix it without simultaneously fighting a prod incident.
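The pattern these comments describe (let the buggy worker crash, have a supervisor restart it, keep serving users in the meantime) is what OTP supervision trees give you. As a rough illustration only, here is a minimal in-process Python analogue of a one_for_one restart strategy with a bounded restart count; none of this is OTP, and all names and numbers are invented for the sketch:

```python
def supervise(child_fn, max_restarts=5):
    """Minimal 'one_for_one'-style supervisor loop: run the child, and if it
    crashes, restart it instead of letting the failure take everything down.
    Gives up once the restart budget is exhausted (OTP's restart intensity)."""
    restarts = 0
    while True:
        try:
            return child_fn()
        except Exception as exc:
            restarts += 1
            if restarts > max_restarts:
                raise RuntimeError("restart intensity exceeded") from exc
            # A real supervisor would emit a crash report/log here.


def make_flaky_worker(crashes_before_success):
    """Simulated buggy worker: crashes the first N runs, then recovers.
    Note the 'let it crash' style: the worker itself does no defensive
    error handling; recovery is the supervisor's job."""
    state = {"runs": 0}

    def worker():
        state["runs"] += 1
        if state["runs"] <= crashes_before_success:
            raise RuntimeError("boom")
        return f"ok after {state['runs']} runs"

    return worker


if __name__ == "__main__":
    # Worker crashes twice, supervisor restarts it, third run succeeds.
    print(supervise(make_flaky_worker(2)))  # -> "ok after 3 runs"
```

The point of the parent comments is exactly this separation: the bug still exists and still fires, but users see continued service instead of an outage, and you can debug the root cause later without fighting a live incident.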
I would argue that a lot of public APIs are a mess. We used to joke that instead of firing the bottom 10%, Google just reallocates them to the Android team.
Except you're talking to a former Nokia employee who knows enough about Symbian and J2ME, and has had enough of Dalvik falsehoods regarding JVM implementation techniques on constrained devices.
Yes, I remember meeting many Nokia employees at that time (when there was still a Nokia Symbian vs. iPhoneOS vs. Android race in progress) and they all seemed to be on the wrong planet when it came to developer mindset.
I distinctly remember Nokia trying to sell us on Qt app development... where apps were only actually able to run on like 2 devices out of their 100+ device portfolio. It was hilarious how misguided they were.
That's a really good point but thinking about it that way brings the decision to use Java at all into question. Clearly iOS/iPhones did great with ObjC in the same era.
This guy sounds like a textbook example of a CV-driven developer.
I've delivered a large amount of value to businesses with boring technologies like Java and Django, and made good money in the process. Sometimes all the client needs is a simple CRUD app. Coincidentally, I turned 40 this year.