It talks about trimming 'legacy' features and specifically says they are omitting 'font-related' operations. That obviously means no useful core X11 application will work (unless you count xlogo and xeyes). Whether the XRender glyph cache mechanism is included is unclear. It also says only DRI is *currently* supported, but maybe that's incidental?
XRender isn't part of the core protocol, so it could be implemented later; there is already some XRender code in there. Almost no applications use X11 core-protocol fonts, except for cursors, since core fonts (on the X.Org server at least) are rendered without anti-aliasing and don't really support Unicode.
Core fonts absolutely support Unicode; my (non-Xft) xterm windows are full of Unicode characters right now. It is true that anti-aliasing is not supported by the X.Org server, although scalable fonts have been supported for a while (https://www.x.org/archive/X11R7.5/doc/fonts/fonts.html#AEN49...). But you don't need anti-aliasing on a high-DPI display, and on a low-DPI display you can use any of the many beautiful bitmap fonts, unlike a lot of ‘modern apps’ these days.
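For the skeptical, here is a minimal sketch of what "core fonts doing Unicode" looks like from the client side, using a locale fontset (the same mechanism xterm uses). Assumptions on my part: a running X server, libX11, and a UTF-8 locale with suitable bitmap fonts installed; compile with `cc demo.c -lX11`. Note that Xutf8DrawString is a libX11 extension to the Xmb/Xwc functions rather than core protocol itself, but the text is still drawn server-side with core fonts, no XRender involved.

```c
#include <X11/Xlib.h>
#include <locale.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    setlocale(LC_ALL, "");  /* fontset selection is locale-driven */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 480, 120,
                                     0, BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask);
    XMapWindow(dpy, win);

    /* Wildcard XLFD pattern: the server supplies core bitmap fonts covering
       the charsets the current locale needs; missing charsets are reported,
       not fatal. */
    char **missing; int nmissing; char *defstr;
    XFontSet fs = XCreateFontSet(dpy, "-*-*-medium-r-*-*-16-*-*-*-*-*-*-*",
                                 &missing, &nmissing, &defstr);
    if (!fs) { fprintf(stderr, "no usable fontset\n"); return 1; }
    if (missing) XFreeStringList(missing);

    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, BlackPixel(dpy, scr));

    const char *msg = "Unicode via core fonts: λ → ∞";
    for (XEvent ev;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == Expose)  /* server-side text drawing, no client glyphs */
            Xutf8DrawString(dpy, win, fs, gc, 20, 60, msg, (int)strlen(msg));
    }
}
```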
I always liked the idea of literate programming, but it never seemed to get a foot in the door.
A good start would be just commenting code! Almost all the code I've looked at recently has been startlingly bare: the only comments are the licence boilerplate at the top of each file!
I can think of only one product/library/package that was commented to explain what was happening. Go look at the source for a random package that you depend on. If you're really lucky, there might be something hinting at the meaning of function arguments, but like as not, not even that ;(
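To be concrete about what I mean, here is a hypothetical snippet (names and rationale invented for illustration) showing the kind of comment that actually helps: one that records intent and constraints instead of restating the code:

```c
#include <stdio.h>

/* Parse a dotted-quad IPv4 address into a 32-bit host-order value.
 *
 * Returns 0 on success, -1 on malformed input. We roll our own instead
 * of using inet_aton() because this code also runs on a firmware target
 * with no socket library -- that constraint is the *why* a comment
 * should capture, since nothing in the code itself says so.
 */
static int parse_ipv4(const char *s, unsigned long *out) {
    unsigned long value = 0;
    for (int octet = 0; octet < 4; octet++) {
        unsigned long part = 0;
        int digits = 0;
        while (*s >= '0' && *s <= '9') {
            part = part * 10 + (unsigned long)(*s++ - '0');
            digits++;
        }
        /* Reject empty octets ("1..2.3") and values over 255. */
        if (digits == 0 || digits > 3 || part > 255)
            return -1;
        value = (value << 8) | part;
        /* Dots separate octets; the last octet must end the string. */
        if (octet < 3 && *s++ != '.')
            return -1;
    }
    if (*s != '\0')
        return -1;
    *out = value;
    return 0;
}

int main(void) {
    unsigned long addr;
    if (parse_ipv4("192.168.0.1", &addr) == 0)
        printf("0x%08lx\n", addr);  /* prints 0xc0a80001 */
    return 0;
}
```

Nothing in the code itself would tell you about the inet_aton() constraint; that's exactly the information that vanishes when the only comment in the file is the licence header.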
And it's been possible to run Android on x86 for years. It's just that nobody wants to, except for app developers ... because you wouldn't/couldn't/shouldn't develop on a phone ;)
>> a solution that seems correct under their heuristic reasoning, but they arrived at that result in a non-logical way
Not quite ... LLMs are not HAL (unfortunately). They produce output statistically associated with the input: something that looks like an acceptable answer. A correct answer will be acceptable, and so will any answer that has been associated with similar input. And so will anything that fools some of the people, some of the time ;)
The unpredictability is a huge problem. Take the geoguessing example: it has come up with a collection of "facts" about Paramaribo. These may or may not be correct, but some are not even visible in the image. Very likely the "answer" is derived from completely different factors, and the "explanation" is spurious (perhaps an explanation of how other people made a similar guess!)
The questioner has no way of telling whether the "explanation" was actually the logic used. (It wasn't!) And when genuine experts trace the trail of token activations, the answer and the explanation turn out to be quite independent.
> Very likely the "answer" is derived from completely different factors, and the "explanation" is spurious (perhaps an explanation of how other people made a similar guess!)
This is a very important and often overlooked idea. And it is 100% correct; even Anthropic themselves admit it. When a user asks an LLM to explain how it arrived at a particular answer, it produces steps that are completely unrelated to the actual mechanism inside the model. The explanation is just another generated output, based on the training data.
Yes BUT "I paid it back, so nothing bad ever happened" is not sufficient.
Someone (or a company) does something bad: yes, pay it back, but there also needs to be some punishment for doing evil. Pay back 100x. Or repay the purchaser plus a fine.
Just paying back the cost of the fraud is saying "it's fine if you don't get caught, and if you do get caught, there's no real cost to you."