The commenter says pre-rendered/server-side-rendered mathematics (via KaTeX) is great - I’ve found the opposite. It’s probably great if you have an article with one or two equations. But if you have an article which uses mathematics pervasively, like many pure mathematics articles, it quickly becomes far more space-efficient to render the mathematics on the client side. Pre-rendering can easily produce 200 kB+ pages.
My experience with dynamically rendered math has been the opposite: if you have lots of equations, rendering inevitably takes some milliseconds, which makes the whole page shift and shake as rendering takes place.
Of course, if the page uses more symbols in various sizes, then a few more font files (.woff2) need to be pulled in, in which case the weight of KaTeX increases a bit too. Each font file weighs between 4 kB and 28 kB.
“Pixi instead of uv” would be a fairer comparison, as Pixi is a more modern tool which still uses the conda package format and ecosystem, much like uv is a modernised pip which still uses the PyPI package format.
One thing a conda package can do which a PyPI package cannot is have binary dependencies: a conda package is linked upon installation, and packages can declare dependencies on shared libraries. A common example is numeric libraries depending on a BLAS implementation: in a conda/pixi environment you get exactly one BLAS shared library linked into your process, used by numpy, scipy, optimisers, etc. For foundational libraries like BLAS which have multiple implementations, the user even has the power to consistently switch the implementation within the environment, e.g. from OpenBLAS to Intel’s MKL.
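If I remember the conda-forge mechanism correctly, that switching works through a BLAS metapackage whose build string selects the implementation; the exact spelling below is my recollection of the conda-forge docs, so treat it as a sketch rather than gospel:

```shell
# conda-forge ships BLAS as a metapackage; requesting a different build
# variant swaps the shared library for every consumer in the environment
# at once (numpy, scipy, etc. all pick up the new implementation).
conda install "libblas=*=*mkl"       # switch the whole environment to MKL
conda install "libblas=*=*openblas"  # and back to OpenBLAS
```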
The PyPI package format does not allow binary dependencies: wheels must be self-contained when it comes to binary code (though not when it comes to Python code, which hopefully makes it clear that something here is inconsistent). Take any numerical Python environment and enumerate the copies of BLAS you have: it is probably 3-5, all running their own threadpools.
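You can get a rough count with nothing but the standard library. This is a heuristic sketch (the filename patterns and the assumption that wheels vendor their libraries under site-packages are mine), not a definitive audit:

```python
import glob
import os
import sysconfig

def bundled_blas_copies():
    """Roughly enumerate vendored BLAS/LAPACK shared libraries in site-packages.

    Wheels that need BLAS (numpy, scipy, ...) each bundle their own copy,
    typically under package-specific .libs / .dylibs directories.
    """
    site = sysconfig.get_paths()["purelib"]
    patterns = ["**/*openblas*", "**/*lapack*", "**/*mkl_rt*"]
    hits = set()
    for pat in patterns:
        for path in glob.glob(os.path.join(site, pat), recursive=True):
            # Keep only shared libraries (including versioned .so files).
            if path.endswith((".so", ".dylib", ".dll")) or ".so." in path:
                hits.add(path)
    return sorted(hits)

for lib in bundled_blas_copies():
    print(lib)
```

In a typical numerical environment each printed path is a separate copy of BLAS loaded into the same process.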
Another very simple example is built-in modules that depend on native code, like the sqlite3 module. In a conda/pixi installation you are guaranteed that the Python binary links against the same sqlite3 code as the command-line sqlite3 tool in the same environment. Things like this remove many cross-language and cross-tool hassles.
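A quick way to see which SQLite your interpreter is actually linked against, for comparing with the output of `sqlite3 --version` on the command line:

```python
import sqlite3

# Version of the SQLite C library this interpreter is linked against
# (not the version of the Python sqlite3 wrapper module).
print("linked SQLite library:", sqlite3.sqlite_version)

# In a conda/pixi environment this should match `sqlite3 --version` from
# the same environment; with a system Python, all bets are off.
```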
I prefer uv or poetry if I’m doing anything simple or pure Python (or perhaps with a small binary dependency like an event loop). But pixi is the way to go for large environments with lots of extra tools and numerical libraries.
If a country has the capability to "lock down ports", they're probably shipping ports - do you think Australia is just suddenly going to (or has the capability to) block all IP traffic on certain ports? A notable exception is China.
Thanks for making this! I’ve found that as a pedestrian in Sydney, up to half my walking time is spent waiting for traffic lights, and have always wondered what could be done about it.
The compare-how-big-a-lookup-table-is argument is a bit of a red herring when comparing how complex things are. For example, a 3x3 matrix implements a map from 3 floats to another 3 floats, a huge space of possibilities (with 4-byte floats, this function space has (2^96)^(2^96) elements). From this perspective, representing that map with 9 numbers is an amazing compression ratio. But surely one cannot argue that matrices “have more going on” than arbitrary functions.
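The parenthetical count can be checked with a few lines of integer arithmetic, as a back-of-the-envelope sketch:

```python
# Three 4-byte floats in, three 4-byte floats out: 96 bits each way.
domain_size = 2 ** 96                 # number of possible inputs
# Number of functions from a 96-bit domain to a 96-bit codomain:
#   |codomain| ** |domain| = (2**96) ** (2**96)
# That number is far too large to materialise, but its base-2 logarithm
# is easy: log2((2**96) ** (2**96)) = 96 * 2**96.
log2_function_count = 96 * domain_size
print(log2_function_count)            # bits needed to index one such function

# Against that, the matrix representation is tiny:
matrix_bits = 9 * 4 * 8               # nine 4-byte floats
print(matrix_bits)                    # 288 bits
```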
I would interpret this as showing that matrix multiplication code is carefully engineered to correctly implement... well, matrix multiplication. That specific mapping of 96 input bits to 96 output bits would be hard to pick out of a hat by chance from the set of all possible mappings. Learning that precise mapping, starting from a uniform prior and given only a finite set of examples, could be seen as an impressive task, although less impressive than sorting. If a model learns the correct mapping, and better yet needs only 9 parameters to implement it, then I think it's fairer to say the model does matrix multiplication rather than that it convincingly imitates the statistics of matrix multiplication.
The Mazda software works OK with the wheel, but using something like CarPlay with it is almost impossible without taking your eyes off the road for a long time. It’s worse than touchscreens in that respect: what will the spinning knob select next on a screen with three separate panes?
Yes, it's terrible; CarPlay was designed with touch in mind. You spend more time looking at the screen to figure out where your "cursor" is. I found the Tesla touchscreen much safer to use.
SQLite is so modular that someone could write a replacement for the filesystem layer that ended up sending requests across HTTP to query a database on another server [1]. Without touching any other layer of the code. How much more modular do you want a database to be, without making other compromises?
The IntelliJ git interface makes a lot of sense, and makes many helpful operations easy, like “compare what I have now to this particular commit”. The VSCode git interface, even with plugins like GitLens, seems to make these operations hard to get to, and how VSCode manages diffs with the staging area involved is totally bananas.
Aside from that, PyCharm has a slightly better debugging interface but is otherwise quite close to VSCode for Python development. Sane version control is a separate matter, though.