I understand it's maybe a bit of an uphill battle to support a software stack that the OS/hardware vendor itself doesn't support, but:
1. Telegram and WhatsApp do, so it's definitely possible.
2. The point is not to add new features, just to leave the current version alone without deprecating it.
There's probably some development burden in keeping old versions up, but I don't think it's that big: they could even let the community take care of that…
3. But most importantly: "the iPhone 6 had a good run", so the natural continuation of that sentence is "so let's discard all those working devices"? wtf?
I think a company that defends "public interests" should give more thought to not wasting energy, time, and money by forcing people to upgrade working smartphones…
My guess would be that they want to dump the messy and easy-to-screw-up CommonCrypto code and switch fully to CryptoKit, which was introduced in iOS 13.
If you don't like their way of running their project, you have other options.
I think you have an idealistic interpretation of Apple's PR. No one is forced to upgrade, your phone presumably still works as a phone and as a camera, music player, etc. Apple has nothing to do with Signal. They do offer a trade-in program, though an iPhone 6 won't get much.
Yeah, I'm thinking of getting a OnePlus 6 since it's kind of the best-supported device on pmOS, and I can also just install some privacy-friendly Android ROM in the meantime.
One thing I am slightly wary about after listening to quite a few episodes is that Melvyn Bragg often seems a bit too patriotic.
It's nothing really obvious, but you hear a lot of "we" when talking about positive aspects of English history, and a lot of focus on English innovations compared to other countries'. Maybe I'm wrong though.
Seems pretty reasonable for a national broadcaster, no? Also British encompasses more than just the English, unless you specifically mean he’s excluding the other Home Nations…
Sorry, I didn't mean English rather than British (the distinction being a bit too subtle for me to make). It's par for the course, I guess, for a national broadcaster, but:
* That doesn't mean it isn't less than ideal.
* It's something one might want to bear in mind when listening, since it might introduce some kind of bias into the whole thing (maybe the other comment about the number of episodes on European vs African history is a good argument in that direction).
Wrt bias it's actually a refreshing change for some of us to hear the voice of a traditional (old-school) leftist, as they tend to be more interested in actual social problems rather than the made-up ones that infest modern discourse, and - perhaps - shift the centre so much that plain common-sense is now perceived as "bias".
It's hard to understand what TigerBeetle is about. Can anyone ELI5 it for me?
As far as I can tell, it's some kind of library/system geared toward distributed transactions? But is it a blockchain, a DB, a program? (I did look at the website.)
Hey thanks for the feedback! We've got concrete code samples in the README as well [0] that might be more clear?
It's a distributed database for tracking accounts and transfers of amounts of "thing"s between accounts (currency is one example of a "thing"). You might also be interested in our FAQ on why someone would want this [1].
The faq helped, thanks!
So a typical use would be, say, as the internal ledger for a company like (Transfer)Wise, with lots of money moving around between accounts?
But I understand it's meant to be used internally within an entity, with all nodes in your system trusted, and not as a means of dealing with transactions from one party to another, right?
Yes, exactly. You can think of TigerBeetle as your internal ledger database, where perhaps in the past you might have had to DIY your own ledger with 10 KLOC around SQL.
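To make the "internal ledger" idea concrete, here's a toy double-entry sketch in Python (invented API, nothing TigerBeetle-specific): every transfer debits one account and credits another, so balances always net to zero. Getting this invariant right, under concurrency and failures, is the part people end up hand-rolling around SQL.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    # Toy in-memory ledger; real systems need durability and replication.
    balances: dict = field(default_factory=dict)

    def transfer(self, debit: str, credit: str, amount: int) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        if debit == credit:
            raise ValueError("accounts must differ")
        # Both sides move together, or not at all (double entry).
        self.balances[debit] = self.balances.get(debit, 0) - amount
        self.balances[credit] = self.balances.get(credit, 0) + amount

ledger = Ledger()
ledger.transfer("operator", "alice", 100)
ledger.transfer("alice", "bob", 40)
assert sum(ledger.balances.values()) == 0  # double-entry invariant holds
```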
And to add to what Phil said, you can also use TigerBeetle to track transactions with other parties, since we validate all user data in the transaction—there are only a handful of fields when it comes to double-entry and two-phase transfers between entities running different tech stacks.
The TigerBeetle account/transfer format is meant to be simple to parse, and if you can find user data that would break our state machine, then it's a bug.
First, because I'm a strong believer in defense-in-depth. Secondly because both disk corruption and network packet corruption happen. Alarmingly often, in fact, if you're operating at large scale.
This means that we fully expect the disk to be what we call “near-Byzantine”, i.e. to cause bitrot, or to misdirect or silently ignore read/write I/O, or to simply have faulty hardware or firmware.
Where Jepsen will break most databases with network fault injection, we test TigerBeetle with high levels of storage faults on the read/write path, probably beyond what most systems, write-ahead log designs, or even consensus protocols such as Raft (cf. “Protocol-Aware Recovery for Consensus-Based Storage” and its analysis of LogCabin) can handle.
For example, most implementations of Raft and Paxos can fail badly if your disk loses a prepare, because then the stable-storage guarantees that the proofs for these protocols assume are undermined. Instead, TigerBeetle runs Viewstamped Replication along with UW-Madison's CTRL protocol (Corruption-Tolerant Replication), and we test our consensus protocol's correctness in the face of unreliable stable storage using deterministic simulation testing (à la FoundationDB).
Finally, in terms of network fault model, we do end-to-end cryptographic checksumming, because we don't trust TCP checksums with their limited guarantees.
So this is all at the physical storage and network layers.
"Zero deserialization? That sounds rather scary."
At the wire protocol layer, we:
* assume a non-Byzantine fault model (that consensus nodes are not malicious),
* run with runtime bounds-checking (and checked arithmetic!) enabled as a fail-safe,
* apply protocol-level checks to ignore invalid data, and
* work only with fixed-size structs.
At the application layer, we:
* have a simple data model (account and transfer structs),
* validate all fields for semantic errors so that we don't process bad data,
* for example, here's how we validate transfers between accounts: https://github.com/tigerbeetledb/tigerbeetle/blob/d2bd4a6fc240aefe046251382102b9b4f5384b05/src/state_machine.zig#L867-L952.
No matter the deserialization format you use, you always need to validate user data.
In our experience, zero-deserialization using fixed-size structs the way we do in TigerBeetle is simpler than variable-length formats, which can be more complicated (imagine a JSON codec), if not more scary.
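For a rough feel of why fixed-size structs are simpler, here's a sketch in Python (the field layout is invented for illustration; TigerBeetle's real structs are larger and defined in Zig): a single length check replaces a whole variable-length parser, and semantic validation still runs after the trivial decode.

```python
import struct

# Hypothetical 32-byte wire format: debit_id, credit_id, amount, flags,
# all little-endian u64. NOT TigerBeetle's actual layout.
TRANSFER = struct.Struct("<QQQQ")

def parse_transfer(buf: bytes):
    if len(buf) != TRANSFER.size:  # protocol-level check: one comparison
        raise ValueError("bad length")
    debit, credit, amount, flags = TRANSFER.unpack(buf)
    # Semantic validation still happens after the (trivial) decode:
    if amount == 0:
        raise ValueError("zero amount")
    if debit == credit:
        raise ValueError("accounts must differ")
    return debit, credit, amount, flags

wire = TRANSFER.pack(1, 2, 500, 0)
assert parse_transfer(wire) == (1, 2, 500, 0)
```

Contrast with JSON, where the decoder itself has failure modes (nesting depth, number precision, duplicate keys) before you even get to semantic checks.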
> Where Jepsen will break most databases with network fault injection, we test TigerBeetle with high levels of storage faults on the read/write path, probably beyond what most systems, write-ahead log designs, or even consensus protocols such as Raft (cf. “Protocol-Aware Recovery for Consensus-Based Storage” and its analysis of LogCabin) can handle.
Oh, nice one. Whenever I speak with people who work on "high reliability" code, they seldom even use fuzz-testing or chaos-testing, which is... well, unsatisfying.
Also, what do you mean by "storage fault"? Is this simulating/injecting silent data corruption or simulating/injecting an error code when writing the data to disk?
> validate all fields for semantic errors so that we don't process bad data,
Ahah, so no deserialization doesn't mean no validation. Gotcha!
> In our experience, zero-deserialization using fixed-size structs the way we do in TigerBeetle is simpler than variable-length formats, which can be more complicated (imagine a JSON codec), if not more scary.
That makes sense, thanks. And yeah, JSON has lots of warts.
Not sure what you mean by variable length. Are you speaking of JSON-style "I have no idea how much data I'll need to read before I can start parsing it" or entropy coding-style "look ma, I'm somehow encoding 17 bits on 3.68 bits"?
> Also, what do you mean by "storage fault"? Is this simulating/injecting silent data corruption or simulating/injecting an error code when writing the data to disk?
Exactly! We focus more on bitrot/misdirection in our simulation testing. We use Antithesis' simulation testing for the latter. We've also tried to design I/O syscall errors away where possible. For example, using O_DSYNC instead of fsync(), so that we can tie errors to I/Os.
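A minimal illustration of the O_DSYNC idea (using Python's os module on a POSIX system; this is my sketch, not TigerBeetle's actual I/O path): with O_DSYNC, each write carries its own durability, so an I/O error surfaces on that write rather than on some later fsync() call covering who-knows-which writes.

```python
import os
import tempfile

# Open with O_DSYNC: every write() returns only once the data is durable,
# so a storage error is tied to the specific write that hit it.
fd, path = tempfile.mkstemp()
os.close(fd)
fd = os.open(path, os.O_WRONLY | os.O_DSYNC)
try:
    written = os.write(fd, b"prepare")  # error (if any) surfaces right here
    assert written == len(b"prepare")
finally:
    os.close(fd)
    os.remove(path)
```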
> Ahah, so no deserialization doesn't mean no validation. Gotcha!
Well said—they're orthogonal.
> Not sure what you mean by variable length. Are you speaking of JSON-style "I have no idea how much data I'll need to read before I can start parsing it"
Yes, and also where this is internal to the data structure being read, e.g. both variable-length message bodies and variable-length fields.
There's also an interesting example of how variable-length message bodies can go wrong, which we give in the design decisions for our wire protocol, explaining why we have two checksums, one over the header and another over the body (instead of one checksum over both!): https://github.com/tigerbeetledb/tigerbeetle/blob/main/docs/...
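The two-checksum layout can be sketched like this (field sizes invented for illustration, not the real protocol): the body checksum is stored in the header, and the header checksum covers the header *including* the body checksum and length. A corrupt length or a swapped body then fails fast, without ever trusting untrusted bytes to tell you how much to read.

```python
import hashlib
import struct

# Toy frame: [16B header checksum][4B body length][16B body checksum][body]

def frame(body: bytes) -> bytes:
    body_sum = hashlib.sha256(body).digest()[:16]
    rest = struct.pack("<I", len(body)) + body_sum
    head_sum = hashlib.sha256(rest).digest()[:16]  # covers length + body sum
    return head_sum + rest + body

def check(msg: bytes) -> bytes:
    head_sum, rest = msg[:16], msg[16:36]
    if hashlib.sha256(rest).digest()[:16] != head_sum:
        raise ValueError("corrupt header")      # length can't be trusted
    (length,) = struct.unpack("<I", rest[:4])
    body = msg[36:36 + length]
    if hashlib.sha256(body).digest()[:16] != rest[4:]:
        raise ValueError("corrupt body")
    return body

assert check(frame(b"hello")) == b"hello"
```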
But Zig is the charm. TigerBeetle wouldn't be what it is without it. Comptime has been a game-changer for us, and the shared philosophy around explicitness and memory efficiency has made everything easier. It's like working with the grain; the std lib is pleasant. I've also learned so much from the community.
My own personal experience has been that Andrew has made a truly stunning number of successively brilliant design decisions. I can't fault any of them. It's all the little things together: seeing this level of conceptual integrity in a language is such a joy.
Even if you think so, Python is a really easy language, and you can easily port the code to something else.
If you already have the basic ideas about the parts of a neural-network pipeline, you can just search Google for "implement part-X in Y language" and you will find well-written articles/tutorials.
Many learners and practitioners of deep learning, once they have a big enough picture, write an NN training loop in their favorite language(s) and post it online. I remember seeing a pretty good "Neural Network in APL" playlist on YouTube: it implements every piece in APL and reaches 90%+ accuracy on MNIST.
I also remember seeing articles in Lisp (of course!), C, Elixir, and Clojure.
I suggest the book Programming Machine Learning. I'm slowly going through the book using another language, and it's easy to translate since the book doesn't use Python's machine learning libraries.
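For a sense of how little language-specific machinery such a from-scratch loop needs, here's a single sigmoid neuron learning logical OR in plain Python (my own toy example, not from the book): no NumPy, no ML libraries, just arithmetic, which is exactly what makes translation to another language straightforward.

```python
import math
import random

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Training data for logical OR (linearly separable, so one neuron suffices).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
lr = 1.0

for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        grad = p - y              # dLoss/dz for sigmoid + cross-entropy
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

for (x1, x2), y in data:
    assert round(sigmoid(w[0] * x1 + w[1] * x2 + b)) == y
```

Every line here is portable: `sigmoid`, the gradient step, and the loop map one-to-one onto Lisp, C, Elixir, APL, or whatever you prefer.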
You know how when you hear a Dave Matthews song, you know instantly that it's him, and also that you're not going to hear anything interesting? Don't write like that. If you start noticing you have a style, cut those bits out.
A cursory glance shows that the densest urban centers have the lowest carbon footprint. The suburbs are much higher, with affluent suburbs often being more than twice as high. Only some very rural areas approach levels as low as the cities.