
Your comment history is some of the worst stuff on HN and some of the best stuff on HN. Could you attempt to leave out the worst bits?

The most important operation in QNX is MsgSend, which works like an interprocess subroutine call. It sends a byte array to another process and waits for a byte array reply and a status code. All I/O and network requests do a MsgSend. The C/C++ libraries handle that and simulate POSIX semantics. The design of the OS is optimized to make MsgSend fast.

A MsgSend goes to another service process, hopefully one already waiting on a MsgReceive. For the case where the service process is idle, waiting on a MsgReceive, there is a fast path: the sending thread is blocked, the receiving thread is unblocked, and control is immediately transferred without a trip through the scheduler. The receiving process inherits the sender's priority and CPU quantum. When the service process does a MsgReply, control is transferred back in a similar way.

This fast path offers some big advantages. There's no scheduling delay; the control transfer happens immediately, almost like a coroutine. There's no CPU switch, so the data being sent is already in the cache the service process will need. This minimizes the penalty for data copying; the message being copied is usually in the highest-level cache.

Inheriting the sender's priority avoids priority inversions, where a high-priority process calls a lower-priority one and stalls. QNX is a real-time system, and priorities are taken very seriously. MsgSend/Receive is priority-based; higher priorities preempt lower ones. This gives QNX the unusual property that file system and network access are also priority-based. I've run hard real-time programs while doing compiles and web browsing on the same machine. The real-time code wasn't slowed by that. (Sadly, with the latest release, QNX is discontinuing support for self-hosted development. QNX is mostly being used for auto dashboards and mobile devices now, so everybody is cross-developing. The IDE is Eclipse, by the way.)

Inheriting the sender's CPU quantum (time left before another task at the same priority gets to run) means that calling a server neither puts you at the end of the line for CPU nor puts you at the head of the line. It's just like a subroutine call for scheduling purposes.

MsgReceive returns an ID for replying to the message; that's used in the MsgReply. So one server can serve many clients. You can have multiple threads in MsgReceive/process/MsgReply loops, so you can have multiple servers running in parallel for concurrency.
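
In rough outline, a client/server pair looks something like the sketch below (a minimal sketch based on the QNX Neutrino documentation, error handling omitted; the call names are real QNX calls, but check the docs for exact arguments):

    /* Minimal sketch of the QNX send/receive/reply pattern described above.
     * Assumptions: QNX Neutrino headers and a same-process demo, with the
     * server running in a thread. Not production code. */
    #include <sys/neutrino.h>   /* ChannelCreate, ConnectAttach, MsgSend, ... */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static int chid;            /* channel id owned by the server thread */

    static void *server(void *arg)
    {
        char msg[64], reply[64];
        for (;;) {
            /* Block until a client's MsgSend arrives; rcvid identifies the
             * sender, so one server can serve many clients. */
            int rcvid = MsgReceive(chid, msg, sizeof msg, NULL);
            if (rcvid == -1)
                break;
            snprintf(reply, sizeof reply, "got \"%s\"", msg);
            /* Unblocks the client and transfers control back. */
            MsgReply(rcvid, 0, reply, strlen(reply) + 1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        char reply[64];

        chid = ChannelCreate(0);
        pthread_create(&tid, NULL, server, NULL);

        /* nd 0 and pid 0 mean "this node, this process"; _NTO_SIDE_CHANNEL
         * keeps connection ids out of the file-descriptor range. */
        int coid = ConnectAttach(0, 0, chid, _NTO_SIDE_CHANNEL, 0);

        /* Blocks until the server replies; request and reply are byte arrays. */
        if (MsgSend(coid, "hello", 6, reply, sizeof reply) != -1)
            printf("server replied: %s\n", reply);

        ConnectDetach(coid);
        return 0;
    }

The client's MsgSend leaves it SEND-blocked until the server receives, then REPLY-blocked until the MsgReply, which is the synchronous round trip described above.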

This isn't that hard to implement. It's not a secret; it's in the QNX documentation. But few OSs work that way. Most OSs (Unix-domain sockets, System V message queues) have unidirectional message passing: when the caller sends, the receiver is unblocked, and the sender continues to run. The sender then typically reads from a channel for a reply, which blocks it. This approach means several trips through the CPU scheduler and behaves badly under heavy CPU load. Most of those systems don't support the many-to-one or many-to-many case.

Somebody really should write a microkernel like this in Rust. The actual QNX kernel occupies only about 60K bytes on an IA-32 machine, plus a process called "proc" which does various privileged functions but runs as a user process. So it's not a huge job.

All drivers are user processes. There is no such thing as a kernel driver in QNX. Boot images can contain user processes to be started at boot time, which is how initial drivers get loaded. Almost everything is an optional component, including the file system. Code is ROMable, and for small embedded devices, all the code may be in ROM. On the other hand, QNX can be configured as a web server or a desktop system, although this is rarely done.

There's no paging or swapping. This is real-time, and there may not even be a disk. (Paging can be supported within a process, and that's done for gcc, but not much else.) This makes for a nicely responsive desktop system.


The halting problem is more than that. While the proof constructs a relatively trivial Turing machine that basically puts the toddler's "Nuh-uh!" on solid mathematical ground, that's just the proof that there exists at least one Turing machine for a given halting oracle that can not be predicted by that halting oracle.

To get to where you're complaining about people going, it needs to be paired with Rice's Theorem, as mentioned in the article: https://en.wikipedia.org/wiki/Rice%27s_theorem

It is true that the halting problem itself is perhaps not directly what people seem to think it is, but people turn out to be right anyhow, because Rice's Theorem can be largely summed up as "it can be proven that we cannot have nice things", if you'll allow me a bit of mathematician-style humor, and that corresponds more or less to the way people talk about the Halting Problem.

This also explains why the Goldbach conjecture is in fact directly related to the halting problem and not orthogonal at all. Through Rice's theorem we can extend the halting problem quite easily to the solution to that and a lot of other problems.
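
To make the Goldbach connection concrete: a program like the sketch below halts if and only if the conjecture is false, so a working halting decider would settle the conjecture (naive trial division; efficiency is beside the point).

    /* Sketch: this program halts iff Goldbach's conjecture is false, so a
     * halting decider for it would settle the conjecture. */
    #include <stdio.h>
    #include <stdbool.h>

    static bool is_prime(unsigned long n)
    {
        if (n < 2) return false;
        for (unsigned long d = 2; d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    int main(void)
    {
        for (unsigned long n = 4; ; n += 2) {        /* every even number >= 4 */
            bool has_sum = false;
            for (unsigned long p = 2; p <= n / 2; p++)
                if (is_prime(p) && is_prime(n - p)) { has_sum = true; break; }
            if (!has_sum) {                          /* counterexample found */
                printf("Goldbach fails at %lu\n", n);
                return 0;                            /* ...and the program halts */
            }
        }
    }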

By computer science standards, Rice's theorem is relatively basic stuff. I don't recall whether the sophomore-level class where I learned this did a full proof of Rice's theorem, but I certainly recall several homework problems applying the idea of the theorem to specific problems by proving reductions between those problems and the halting problem. I also recall they really weren't particularly difficult problems. It's not hard to relate that sort of interesting question to the halting problem. This is low-level undergrad stuff, down in the very basics of computer science qua computer science, suitable for making up several problems in the weekly homework assignment for an intro to (real) computer science course, not complicated abstruse stuff that can't be applied by anyone less than a doctoral candidate with a few months to burn. All sorts of problems are reducible like this.

And that is why the "halting problem" in fact is the reason why we can't have certain nice things we'd really like. For example, the mentioned ability to prove that two functions really are equivalent would have practical utility in a wide variety of places (assuming somewhat reasonable efficiency), like compiler optimizations.

Your intuition is correct in one respect though; there is a sense in which it isn't really "The Halting Problem (TM)". You could take a wide variety of other problems and call them "the" fundamental problem, and reduce the halting problem to those problems. The Halting Problem just turns out to be a particularly concise expression of the general problem that is easy to state for Turing Machines. It just makes for nicer proofs than the alternatives. "The" core problem could arguably be something more like Rice's Theorem proved on its own as a fundamental result without reference to the halting problem. You could call the Halting Problem a historical accident that we hold on to due to a path dependency in how math happened to develop in our history. An alien race might have a reasonably different view of the "fundamental problem", but then they'd have something fairly similar to Rice's Theorem sitting on top of it that we'd all recognize.


The Library of Congress does currently archive limited collections of the internet[0]. They have a blog post[1] breaking down the effort; currently it's 8 full-time staff plus a team of part-time members. According to Wikipedia[2], it's built on Heritrix and Wayback, which are both developed by the Internet Archive (the blog post also mentions "Wayback software"). Current archives are available at: http://webarchive.loc.gov/

[0] https://www.loc.gov/programs/web-archiving/about-this-progra...

[1] https://blogs.loc.gov/thesignal/2023/08/the-web-archiving-te...

[2] https://en.m.wikipedia.org/wiki/List_of_Web_archiving_initia...


Here's what Helen Keller had to say about this in _The World I Live In_:

"Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith.

It was not night—it was not day.

. . . . .

But vacancy absorbing space, And fixedness, without a place; There were no stars—no earth—no time— No check—no change—no good—no crime.

My dormant being had no idea of God or immortality, no fear of death.

I remember, also through touch, that I had a power of association. I felt tactual jars like the stamp of a foot, the opening of a window or its closing, the slam of a door. After repeatedly smelling rain and feeling the discomfort of wetness, I acted like those about me: I ran to shut the window. But that was not thought in any sense. It was the same kind of association that makes animals take shelter from the rain. From the same instinct of aping others, I folded the clothes that came from the laundry, and put mine away, fed the turkeys, sewed bead-eyes on my doll's face, and did many other things of which I have the tactual remembrance. When I wanted anything I liked,—ice-cream, for instance, of which I was very fond,—I had a delicious taste on my tongue (which, by the way, I never have now), and in my hand I felt the turning of the freezer. I made the sign, and my mother knew I wanted ice-cream. I "thought" and desired in my fingers. If I had made a man, I should certainly have put the brain and soul in his finger-tips. From reminiscences like these I conclude that it is the opening of the two faculties, freedom of will, or choice, and rationality, or the power of thinking from one thing to another, which makes it possible to come into being first as a child, afterwards as a man.

Since I had no power of thought, I did not compare one mental state with another. So I was not conscious of any change or process going on in my brain when my teacher began to instruct me. I merely felt keen delight in obtaining more easily what I wanted by means of the finger motions she taught me. I thought only of objects, and only objects I wanted. It was the turning of the freezer on a larger scale. When I learned the meaning of "I" and "me" and found that I was something, I began to think. Then consciousness first existed for me. Thus it was not the sense of touch that brought me knowledge. It was the awakening of my soul that first rendered my senses their value, their cognizance of objects, names, qualities, and properties. Thought made me conscious of love, joy, and all the emotions. I was eager to know, then to understand, afterward to reflect on what I knew and understood, and the blind impetus, which had before driven me hither and thither at the dictates of my sensations, vanished forever.

I cannot represent more clearly than any one else the gradual and subtle changes from first impressions to abstract ideas. But I know that my physical ideas, that is, ideas derived from material objects, appear to me first an idea similar to those of touch. Instantly they pass into intellectual meanings. Afterward the meaning finds expression in what is called "inner speech." When I was a child, my inner speech was inner spelling. Although I am even now frequently caught spelling to myself on my fingers, yet I talk to myself, too, with my lips, and it is true that when I first learned to speak, my mind discarded the finger-symbols and began to articulate. However, when I try to recall what some one has said to me, I am conscious of a hand spelling into mine.

It has often been asked what were my earliest impressions of the world in which I found myself. But one who thinks at all of his first impressions knows what a riddle this is. Our impressions grow and change unnoticed, so that what we suppose we thought as children may be quite different from what we actually experienced in our childhood. I only know that after my education began the world which came within my reach was all alive. I spelled to my blocks and my dogs. I sympathized with plants when the flowers were picked, because I thought it hurt them, and that they grieved for their lost blossoms. It was two years before I could be made to believe that my dogs did not understand what I said, and I always apologized to them when I ran into or stepped on them.

As my experiences broadened and deepened, the indeterminate, poetic feelings of childhood began to fix themselves in definite thoughts. Nature—the world I could touch—was folded and filled with myself. I am inclined to believe those philosophers who declare that we know nothing but our own feelings and ideas. With a little ingenious reasoning one may see in the material world simply a mirror, an image of permanent mental sensations. In either sphere self-knowledge is the condition and the limit of our consciousness. That is why, perhaps, many people know so little about what is beyond their short range of experience. They look within themselves—and find nothing! Therefore they conclude that there is nothing outside themselves, either.

However that may be, I came later to look for an image of my emotions and sensations in others. I had to learn the outward signs of inward feelings. The start of fear, the suppressed, controlled tensity of pain, the beat of happy muscles in others, had to be perceived and compared with my own experiences before I could trace them back to the intangible soul of another. Groping, uncertain, I at last found my identity, and after seeing my thoughts and feelings repeated in others, I gradually constructed my world of men and of God. As I read and study, I find that this is what the rest of the race has done. Man looks within himself and in time finds the measure and the meaning of the universe."


I think the major difference is that F# did not ignore the things that came before it, whereas Python did. Python came about in the early 90s, and even then it ignored things like Lisp, Scheme, Smalltalk, Erlang, and Standard ML (SML). As it evolved into the 2000s and 2010s, it continued to ignore those developments. Compare that to Clojure, Elixir, and F#, and there's a stark difference. All three of those languages were designed to be highly pragmatic rather than academic, but all three stood on the shoulders of what came before them. If you read interviews with van Rossum, he shows a complete unwillingness to accept the functional programming paradigm as pragmatically useful and basically rejects it wholesale. In his Slashdot interview, he explicitly said "functools (which is a dumping ground for stuff I don't really care about :-)".

https://developers.slashdot.org/story/13/08/25/2115204/inter...

It's frustrating to me, because if Python had adopted at least some of the things that languages like SML, then OCaml, and then F# were doing, it would have been dramatically elevated. At the time of the interview, in which he goes on to state "Admittedly I don't know much about the field besides Haskell", the year was 2013; by that point F#, Clojure, and OCaml had already been around for a while, Erlang had been released under a permissive license, and Elixir had actually been released the year before. Further, even before that, Haskell was hardly the only functional language. SML, Scheme, and even Racket (then known as PLT Scheme) had been around for a while.

Python is a wolf in sheep's clothing to me. It has a deceptively simple look on the surface, but it is actually a quite complicated language when you get into it. I've done concurrent programming my entire programming life in F#, Elixir, and LabVIEW. I simply do not understand how to do concurrency in Python in a reliable and composable way (for one, there are several different implementations of "concurrent" programming in Python, with various impedance mismatches and incompatibilities). I.e., it's complicated.


Endgame: After creating a language that encourages programmers to make robust and optimal software, using a toolchain that exemplifies these ideals by providing an order of magnitude faster development iteration speed, point this energy towards the ecosystem, with a focus on our core principle of prioritizing the needs of end users. Building upon this rich ecosystem of high quality software, create and maintain free, libre, and open-source applications that outcompete proprietary versions. I want to see the next Blender, the next Postgresql, the next Linux. This is my vision.

Mitchell's Ghostty project is a perfect example of this movement. At least, it will be when it is open sourced.


I created https://lowbackgroundsteel.ai/ in 2023 as a place to gather references to unpolluted datasets. I'll add wordfreq. Please submit stuff to the Tumblr.

A university spinoff using the interaction between RF and nearby devices, https://www.physec.de/en

https://www.sciencedirect.com/journal/computer-networks/vol/...

> We describe the first MITM-resistant device pairing protocol purely based on a single wireless interface with an extensive adversarial model and protocol analysis. We show that existing wireless devices can be retro-fitted with the VP protocol via software updates, i.e. without changes to the hardware.


Any time someone who does not understand the practical outworkings of a given topic tries to "teach" it, you end up with a "presentation" - mostly in the form of propositional statements (dogs are mammals; barges are boats), or key-value pairs (in 1492, Columbus sailed the ocean blue; Caracas is the capital of Venezuela)

Propositional statements and key-value pairs are absolutely vital aspects of learning and understanding - but seeing how those ideas, facts, and statements will work themselves out in practical application is absolutely mandatory for proper understanding of the topic

If you never learn how you might use these facts, you are going to have an incredibly difficult time "learning" them

This is the fundamental problem of why "teaching to the test" (or, for that matter, relying too heavily on multiple-choice, true|false, matching, etc type "tests" to evaluate "learning" or "knowledge") is problematic: I can teach pretty much anyone how to pass a multiple-choice test with zero knowledge of the subject material (did it myself years ago when I passed the first-tier amateur radio licensing exam without ever looking at any study guide)

When you "teach to the test" (which is what, fundamentally, exclusive reliance on propositional statements (and drills over them) and key-value pairs are), you create people who may (or may not) end up being good at trivia challenges ... but have no mental framework for connecting all those dots into anything coherent - iow, you create human-based data lakes: it is all sitting there, but no connective lines have been drawn

To get someone to want to learn, you have to show they why it "matters" - you have to get them to develop intrinsic motivation to learn (vs the purely extrinsic "you have to to pass")

To develop that intrinsic motivation, you need to show the 'why' and the 'where' of the 'what'

And to do that you have to understand it yourself


Nah, that's not it. Browsers do statically detect if eval is there or not, and react accordingly. This is possible because of https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

speaking of hard, the DOE actually funds a project that has been around for 20+ years now (ROSE) that involves (among other things) doing static analysis on and automatically translating between C/C++/CUDA and even high-level languages like Python, as well as HPC variants of C/C++. They have a combined AST that supports all of those languages with essentially the same set of node types. Quite cool. I got to work on it when I was an intern at Livermore, summer of 2014.

and it's open source as well! http://rosecompiler.org/ROSE_HTML_Reference/index.html


An x86 processor can detect when it makes a serious error in some pipelines and rerun these steps until things go right. This is the first line of recovery (this is why temperature spikes start to happen when a CPU reaches its overclocking limits. It starts to make mistakes and this mechanism kicks in).

Also x86 has something called “machine check architecture” which constantly monitors the system and the CPU and throws “Machine Check Exceptions” when something goes very wrong.

These exceptions divide into “recoverable” and “unrecoverable” exceptions. An unrecoverable exception generally triggers a kernel panic, and recoverable ones are logged in system logs.

Moreover, a CPU can lose (fry) some caches (e.g., half of L1), and it'll boot with whatever is available and report what it can access and address. In some extreme cases it loses its FPU or vector units, and instead of getting upset, it tries to do the operations at the microcode level or with whatever units are available. This manifests as extremely low LINPACK numbers. We had a couple of these; I didn't run accuracy tests on those specimens, but LINPACK didn't complain about the results. The performance was just very low compared to normal processors.

Throttling is a normal defense against poor cooling. Above mechanisms try to keep the processor operational in limp mode, so you can diagnose and migrate somehow.


yes! also when you only have two or three things in them, they are faster than hashing, especially on machines without a data cache. clojure uses hamts for this, thus getting this fp-persistence† as well as efficient lookup in large dictionaries, and chris wellons posted a concise, fast c hamt implementation last year at https://nullprogram.com/blog/2023/09/30/

______

† 'persistent' unfortunately also means 'stored in nonvolatile memory', so i feel the need to disambiguate
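
a minimal sketch of the hamt lookup idea mentioned above (illustrative only, not wellons's implementation): consume the hash five bits per level, and use a popcount over the node's bitmap to index the packed child array.

    /* hamt lookup sketch: 32-bit hash, 5 bits per level, single-key leaves.
       construction and collision handling past ~7 levels are omitted. */
    #include <stdint.h>
    #include <string.h>

    typedef struct hamt hamt;
    struct hamt {
        int is_leaf;
        union {
            struct { const char *key; void *val; } leaf;
            struct { uint32_t bitmap; hamt **child; } node; /* popcount(bitmap) children */
        } u;
    };

    static uint32_t fnv1a(const char *s)        /* any 32-bit hash will do */
    {
        uint32_t h = 2166136261u;
        while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
        return h;
    }

    void *hamt_lookup(const hamt *n, const char *key)
    {
        uint32_t h = fnv1a(key);
        for (int shift = 0; n != NULL; shift += 5) {
            if (n->is_leaf)
                return strcmp(n->u.leaf.key, key) == 0 ? n->u.leaf.val : NULL;
            uint32_t bit = 1u << ((h >> shift) & 31);
            if (!(n->u.node.bitmap & bit))
                return NULL;                    /* slot absent: key not present */
            /* index into the packed child array = popcount of the lower bits */
            n = n->u.node.child[__builtin_popcount(n->u.node.bitmap & (bit - 1))];
        }
        return NULL;
    }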


yeah, that seems quite reasonable

reversibly transforming a data sequence into a more easily compressible one of the same size (sometimes called 'whitening' or 'predicting') is a very general technique for data compression, and first differences are a very commonly used transformation here. others include the burrows–wheeler transform, paeth prediction, and linear predictive coding
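
as a concrete sketch, here's the first-differences transform on bytes, with wraparound arithmetic so it's exactly reversible and the output is the same size as the input:

    #include <stddef.h>
    #include <stdint.h>

    /* replace each byte with its difference from the previous byte (mod 256);
       slowly-varying data turns into runs of small values near zero, which
       entropy coders and lz compressors handle much better. */
    void delta_encode(uint8_t *buf, size_t n)
    {
        uint8_t prev = 0;
        for (size_t i = 0; i < n; i++) {
            uint8_t cur = buf[i];
            buf[i] = (uint8_t)(cur - prev);   /* wraps mod 256, so it's lossless */
            prev = cur;
        }
    }

    /* exact inverse: a running sum restores the original bytes. */
    void delta_decode(uint8_t *buf, size_t n)
    {
        uint8_t prev = 0;
        for (size_t i = 0; i < n; i++) {
            prev = (uint8_t)(prev + buf[i]);
            buf[i] = prev;
        }
    }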


Neither.

* I meant s-types.

* There is no such thing. The problem, as Clinton put it: "It depends on what the meaning of the word 'is' is."

S-types WERE a thing, but never a major one.

I ran across them as a high school student on a random research project at a university, as a summer internship of some kind. They were used by a handful of research groups for some data interchange format. Literally the only person who would find them interesting is myself, and not for what they were, but because they were my first exposure to Lisp / Scheme concepts, which I had not heard of before, and which triggered a series of (high-school-grade) epiphanies for me (which would probably be obvious to 50+% of the people here).

As far as I know, S-types no longer exist. A quick web search brought up zero references. There is absolutely no reason to know about them in 2023. However, if you want to know about them:

- They were quite literally S-expressions. All S-types were S-expressions

- However, they were well-specified, so useful as an import/export format

- In particular, they had research-grade libraries to do the above from at least two languages (I would guess Java and LISP, but I no longer recall; perhaps it was C++ and Scheme).

- There was some attempt to simplify, but I no longer recall what it was. I recall that I was told it had something to do with lists versus function calls, but looking back on it, the explanation I recall being told makes zero sense. That's more likely a function of my memory than the explanation.

I believe on that internship, my direct use of them was probably limited to looking at some data in that format. This was really at the outskirts of the project I was doing. They did inspire a lot of work during the school year, when I schemed for the first time, not really knowing what I was doing but figuring out a lot of stuff in the process.

Looking back at it, though, having a standardized Python / JavaScript / etc. import/export library for some standardized version of S-expressions might be helpful. Or it might not be. Who knows.


> libc isn't really getting in the way here.

It depends. For the standard set of system calls, the libc is pretty great. For Linux-specific features, it could take years for glibc to gain support, if it ever does. All the libcs will get in the way if you try to use something like the clone system call:

https://sourceware.org/bugzilla/show_bug.cgi?id=10311

> Ulrich Drepper 2009-06-22 19:35:54 UTC

> If you use clone() you're on your own.

My obsession with Linux system calls started years ago when I read about an episode where glibc literally got into Linux's way. The LWN wrote extensively about the tale of the getrandom system call and the quest to get glibc to support it:

https://lwn.net/Articles/711013/

A kernel hacker wrote in an email:

> maybe the kernel developers should support a libinux.a library that would allow us to bypass glibc when they are being non-helpful

That made a lot of sense to me. I took that concept and kind of ran with it. Started a liblinux project, essentially a libc with nothing in it but the thinnest possible system call wrappers. Researched quite a bit about glibc's attitude towards Linux to justify it:

https://github.com/matheusmoreira/liblinux#why

The more I used this stuff, the more I enjoyed it. I was writing freestanding C and interfacing directly with the kernel. The code was so clean. No libc baggage anywhere, not even errno. And I knew this could do literally anything when I wrote a little freestanding program to print my terminal window dimensions. When I did that I knew I could write code to mount disks too if I really wanted to. I was talking to the kernel.
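
Something like this sketch, for flavor (not the original program; the syscall numbers and TIOCGWINSZ value are the standard x86-64 Linux ones):

    /* Sketch of a freestanding "terminal size" program for x86-64 Linux:
     * no libc, just raw system calls.  Build with something like
     *   gcc -nostdlib -static -ffreestanding -o winsz winsz.c  */

    #define SYS_write  1
    #define SYS_ioctl 16
    #define SYS_exit  60
    #define TIOCGWINSZ 0x5413

    struct winsize { unsigned short ws_row, ws_col, ws_xpixel, ws_ypixel; };

    /* three-argument system call: rax = number, rdi/rsi/rdx = arguments */
    static long sys3(long nr, long a, long b, long c)
    {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a), "S"(b), "d"(c)
                          : "rcx", "r11", "memory");
        return ret;
    }

    static void put_num(unsigned n)            /* tiny decimal printer */
    {
        char buf[12];
        int i = sizeof buf;
        buf[--i] = '\n';
        do { buf[--i] = '0' + n % 10; n /= 10; } while (n);
        sys3(SYS_write, 1, (long)&buf[i], sizeof buf - i);
    }

    void _start(void)
    {
        struct winsize ws = {0};
        sys3(SYS_ioctl, 0, TIOCGWINSZ, (long)&ws); /* fd 0 = the terminal */
        put_num(ws.ws_col);
        put_num(ws.ws_row);
        sys3(SYS_exit, 0, 0);
    }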

Eventually I discovered Linux was already doing the same thing with their own nolibc.h file, which they were already using in their own tools. It was a single file back then; by now it's become a sprawling directory full of code:

https://github.com/torvalds/linux/tree/master/tools/include/...

Even asked Greg Kroah-Hartman on reddit about it once:

https://old.reddit.com/r/linux/comments/fx5e4v/im_greg_kroah...

Since the kernel was already developing their own awesome headers, I decided to drop liblinux and start lone instead. :)


I write safety critical code, so I've worked with many of these: frama-C, TIS, Polyspace, KLEE, CBMC, and some that aren't on the list like RV-match, axivion, and sonarqube.

With virtually any tool in this category, you need to spend a good amount of time ensuring it's integrated properly at the center of your workflows. They all have serious caveats and usually false positives that need to be addressed before you develop warning blindness.

You also need to be extremely careful about what guarantees they're actually giving you and read any associated formal methods documentation very carefully. Even a tool that formally proves the absence of certain classes of errors only does so under certain preconditions, which may not encompass everything you're supporting. I once found a theoretical issue in a project that had a nice formal proof because the semantics of some of the tooling implicitly assumed C99, so certain conforming (but stupid) C89 implementations could violate the guarantees.

But some notes:

Frama-C is a nice design, but installing it is a pain, ACSL is badly documented/hard to learn, and you really want the proprietary stuff for anything nontrivial.
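
For a taste of what ACSL looks like, here's a toy contract plus loop annotations (a sketch of the general flavor, not from any real project, and not a claim about what any particular Frama-C version proves automatically):

    #include <stddef.h>

    /*@ requires n > 0;
        requires \valid_read(a + (0 .. n-1));
        assigns \nothing;
        ensures 0 <= \result < n;
        ensures \forall integer k; 0 <= k < n ==> a[k] <= a[\result];
    */
    size_t max_index(const int *a, size_t n)    /* index of a maximum element */
    {
        size_t best = 0;
        /*@ loop invariant 1 <= i <= n;
            loop invariant 0 <= best < i;
            loop invariant \forall integer k; 0 <= k < i ==> a[k] <= a[best];
            loop assigns i, best;
            loop variant n - i;
        */
        for (size_t i = 1; i < n; i++)
            if (a[i] > a[best])
                best = i;
        return best;
    }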

TrustInSoft (TIS) has nice enterprise tooling, but their "open source" stuff isn't worth bothering with. Also, they make it impossible to introduce to a company without going through the enterprise sales funnel, so I've never successfully gotten it introduced because they suck at enterprise sales compared to e.g. Mathworks.

RV-match (https://runtimeverification.com/match) is what I've adopted for my open source code that needs tooling beyond sanitizers because it's nice to work with and has reasonable detection rates without too many false positives.

Polyspace, sonarqube, axivion, compiler warnings, etc are all pretty subpar tools in my experience. I've never seen them work well in this space. They're usually either not detecting anything useful or falsely erroring on obviously harmless constructs, often both.


This is a good opportunity to point out this strategy of climate denial, and why it is not a rational approach to refuting climate change.

Let's suppose that all the other commenters are wrong, and that the river has not changed course in the past. The implication, from a probabilistic perspective, is that P(evidence | climate change) is the same as P(evidence | natural variation), and your implied assumption is that, because your prior belief in no climate change is higher than your prior belief in climate change, this ultimately works as evidence in favor of there being no climate change.

The reason this does not yield interesting results, even if all these other assumptions we're making hold, is that the hypothesis of climate change doesn't simply explain this one observation; it explains an enormous range of phenomena.

To explain away all of the observations explained by climate change, you end up building a huge list of different hypotheses, one to explain away each fact. Even if individually these alternate hypotheses do explain each piece of evidence better (which in practice they often don't), and even if your prior for each of these other hypotheses is higher than for climate change, because climate change is a single hypothesis that explains many observations, the joint probability of the individual hypotheses quickly becomes much lower than that of the climate change hypothesis.
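
Sketched in rough Bayesian terms (assuming, purely for illustration, that the observations are conditionally independent and that the ad-hoc alternative hypotheses get independent priors), the comparison looks like

    % posterior weight of one hypothesis CC explaining everything,
    % vs. a separate ad-hoc hypothesis A_i per observation E_i
    \[
      P(\mathrm{CC}) \prod_{i=1}^{n} P(E_i \mid \mathrm{CC})
      \qquad \text{vs.} \qquad
      \prod_{i=1}^{n} P(A_i)\, P(E_i \mid A_i)
    \]

Each new observation multiplies the right-hand side by another prior P(A_i) < 1, while P(CC) is paid only once, so even if every individual P(A_i) exceeds P(CC) and every P(E_i | A_i) matches P(E_i | CC), the composite explanation falls behind as the number of observations grows.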

In order to effectively argue against climate change you would need a competing singular hypothesis (or small set of very likely hypotheses) that ultimately does a better job explaining all of the observations we have collected.

I don't know if this fallacy has a name, but it sure should since it comes up in a lot of arguments about systemic problems.


Thanks! I celebrate weird hobby web sites. Here are some of my favorites:

* 59828 different definitions of "triangle center" https://faculty.evansville.edu/ck6/encyclopedia/ETC.html

* Amazing reference on bezier curves https://pomax.github.io/bezierinfo/

* Sunbeam radiant toaster http://automaticbeyondbelief.org/

* How to pack geometric shapes inside other shapes https://erich-friedman.github.io/packing/

* Pictures of library cards https://librarycards.tripod.com/id1.html

* All about pc fans https://nick-black.com/dankwiki/index.php?title=PC_Fans

* Cross sections of candy bars https://scandybars.tumblr.com/

* All kinds of info about bicycles https://www.sheldonbrown.com/

* Every type of plastic used by lego https://bricknerd.com/home/every-type-of-plastic-used-by-leg...

* Ian's Shoelace Site https://www.fieggen.com/shoelace/


Perfectly synchronized input event delivery across all applications was one of the strong and important guarantees that the NeWS window system was designed from the ground up to provide (ever since it was originally called "SunDew" in 1985).

But ever since then, no other window system really gave a flying fuck about that, and just blithely drops events on the floor or delivers them to the wrong place, gaslighting and training the users to make up for it by clicking slowly and watching the screen carefully and waiting patiently until it's safe to click or type again, before proceeding.

This kind of loosey-goosey race condition input handling problem that's intrinsic to every "modern" window system and web browser and UI toolkit is exactly why bugs like this tooltip bug appear across all platforms, and go unfixed for 22 years, because everybody is gaslighted into thinking that's just the way it has to be, and they're the only one with the problem, and it's unfixable anyway, and even if it were fixable, they deserve it, etc...

What could ever go wrong with the occasional indestructible floating randomly worded tooltip blocking your desktop or video player or game? It's "Tooltip Roulette"! Just hope you don't accidentally screen share a naughty tooltip with your mom during a zoom meeting.

(Not that NeWS was without its own embarrassingly stuck popup windows, but NeWS had an essential utility for removing embarrassing windows called "pam", named after the sound you made when you used it, or maybe the original easy cleanup canola oil spray ideal for use in cooking and baking.)

Failing to properly support synchronous event distribution makes it impossible for window managers (ESPECIALLY asynchronous outboard X11 window managers running in a different process than the window system) to properly and reliably support "type ahead" and "mouse ahead".

For example, when a mouse click on a window or function key press changes the input focus, or switches applications, or moves a different window to the top, or pops up a dialog, or opens a new window, the subsequent keyboard and mouse events might not be delivered to the right window, because they are not synchronously blocked until the results of the first input event are handled (changing where the next keyboard or mouse events should be delivered to), so clicking and typing quickly delivers the keystrokes to the wrong window.

I find it extremely annoying to still be forced to use flakey leaky "modern" window systems for 37 years after getting used to NeWS's perfect event distribution model, which is especially important on slow computers or networks (i.e. dial-up modems), or due to paging or thrashing because of low memory (NeWS competing with Emacs), or any other system activity, and especially for games and complex real time applications.

There are still to this day many AAA games that force you to hold a key down for at least one screen update, because they're only lazily checking for key state changes on each draw or simulation tick, instead of actually tracking input events, otherwise they don't register sometimes if you just tap the key, and you have to slowly mash and wait, especially when the game gets slow because there's a lot of stuff on the screen, or has a hiccough because of garbage collection or autosave or networking or disk io or...

James Gosling first wrote about the importance of safe synchronous event distribution in 1985 in "SunDew - A Distributed and Extensible Window System":

http://www.chilton-computing.org.uk/inf/literature/books/wm/...

>5.3.3 User Interaction - Input

>The key word in the design of the user interaction facilities is flexibility. Almost anything done by the window system preempts a decision about user interaction that a client might want to decide differently. The window system therefore defines almost nothing concrete. It is just a loose collection of facilities bound together by the extension mechanism.

>Each possible input action is an event. Events are a general notion that includes buttons going up and down (where buttons can be on keyboards, mice, tablets, or whatever else) and locator motion.

>Events are distinguished by where they occur, what happened, and to what. The objects spoken about here are physical, they are the things that a person can manipulate. An example of an event is the E key going down while window 3 is current. This might trigger the transmission of the ASCII code for E to the process that created the window. These bindings between events and actions are very loose, they are easy to change.

>The actions to be executed when an event occurs can be specified in a general way, via PostScript. The triggering of an action by the striking of the E key in the previous example invokes a PostScript routine which is responsible for deciding what to do with it. It can do something as simple as sending it in a message to a Unix process, or as complicated as inserting it into a locally maintained document. PostScript procedures control much more than just the interpretation of keystrokes: they can be involved in cursor tracking, constructing the borders around windows, doing window layout, and implementing menus.

>Synchronization of input events: we believe that it is necessary to synchronize input events within a user process, and to a certain extent across user processes. For example, the user ought to be able to invoke an operation that causes a window on top to disappear, then begin typing, and be confident about the identity of the recipient of the keystrokes. By having a centralized arbitration point, many of these problems disappear.

[...]

>Hopgood:

>How do you handle input?

>Gosling:

>Input is also handled completely within PostScript. There are data objects which can provide you with connections to the input devices and what comes along are streams of events and these events can be sent to PostScript processes. A PostScript process can register its interest in an event and specify which canvas (a data object on which a client can draw) and what the region within the canvas is (and that region is specified by a path which is one of these arbitrarily curve-bounded regions) so you can grab events that just cover one circle, for example. In the registration of interest is the event that you are interested in and also a magic tag which is passed in and not interpreted by PostScript, but can be used by the application that handles the event. So you can have processes all over the place handling input events for different windows. There are strong synchronization guarantees for the delivery of events even among multiple processes. There is nothing at all specified about what the protocol is that the client program sees. The idea being that these PostScript processes are responsible for providing whatever the application wants to see. So one set of protocol conversion procedures that you can provide are ones that simply emulate the keyboard and all you will ever get is keyboard events and you will never see the mouse. Quite often mouse events can be handled within PostScript processes for things like moving a window.

NeWS Window System:

https://en.wikipedia.org/wiki/NeWS

>Design [...]

>Like the view system in most GUIs, NeWS included the concept of a tree of embedded views along which events were passed. For instance, a mouse click would generate an event that would be passed to the object directly under the mouse pointer, say a button. If this object did not respond to the event, the object "under" the button would then receive the message, and so on. NeWS included a complete model for these events, including timers and other automatic events, input queues for devices such as mice and keyboards, and other functionality required for full interaction. The input handling system was designed to provide strong event synchronization guarantees that were not possible with asynchronous protocols like X.

NeWS 1.1 Manual, section 3.6, p. 25:

http://www.bitsavers.org/pdf/sun/NeWS/NeWS_1.1_Manual_198801...

>Processing of input events is synchronized at the NeWS process level inside the NeWS server. This means that all events are distributed from a single queue, ordered by the time of occurrence of the event, and that when an event is taken from the head of the queue, all processes to which it is delivered are given a chance to run before the next event is taken from the queue. When an event is passed to redistributeevent, the event at the head of the event queue is not distributed until processes that receive the event in its redistribution have had a chance to process it. No event will be distributed before the time indicated in its TimeStamp.

>In some cases, a stricter guarantee of synchronization than this is required. For instance, suppose one process sees a mouse button go down and forks a new process to display and handle the menu until the corresponding button-up. The new process must be given a chance to express its interest before the button-up is distributed, even if the user releases the button immediately. In general, event processing of one event may affect the distribution policy, distribution of the next event must be delayed until the policy change has been completed. This is done with the blockinputqueue primitive.

>Execution of blockinputqueue prevents processing of any further events from the event queue until a corresponding unblockinputqueue is executed, or a timeout has expired. The blockinputqueue primitive takes a numeric argument for the timeout; this is the fraction of a minute to wait before breaking the lock. This argument may also be null, in which case the default value is used (currently 0.0083333 == .5 second). Block/unblock pairs may nest; the queue is not released until the outermost unblock. When nested invocations of blockinputqueue are in effect, there is one timeout (the latest of the set associated with current blocks).

>Distribution of events returned to the system via redistributeevent is not affected by blockinputqueue, since those events are never returned to the event queue.

The X-Windows Disaster:

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

>Ice Cube: The Lethal Weapon [...]

>The ICCCM is unbelievably dense, it must be followed to the last letter, and it still doesn’t work. ICCCM compliance is one of the most complex ordeals of implementing X toolkits, window managers, and even simple applications. It’s so difficult, that many of the benefits just aren’t worth the hassle of compliance. And when one program doesn’t comply, it screws up other programs. This is the reason cut-and-paste never works properly with X (unless you are cutting and pasting straight ASCII text), drag-and-drop locks up the system, colormaps flash wildly and are never installed at the right time, keyboard focus lags behind the cursor, keys go to the wrong window, and deleting a popup window can quit the whole application. If you want to write an interoperable ICCCM compliant application, you have to crossbar test it with every other application, and with all possible window managers, and then plead with the vendors to fix their problems in the next release.

Window Manager Flames:

http://www.art.net/~hopkins/Don/unix-haters/x-windows/i39l.h...

>Why wrap X windows in NeWS frames? Because NeWS is much better at window management than X. On the surface, it was easy to implement lots of cool features. But deeper, NeWS is capable of synchronizing input events much more reliably than X11, so it can manage the input focus perfectly, where asynchronous X11 window managers fall flat on their face by definition. [...]

>If NeWS alone manages the input focus, it can manage it perfectly. An X window manager alone cannot, because it runs in a foreign address space, and is not in a position to synchronously block the input queue and directly effect the distribution of events the way NeWS is. But even worse is when an X window manager and NeWS both try to manage the input focus at once, which is the situation we are in today. The input focus problem could be solved in several ways: OWM solves the problem elegantly, as PSWM did in the past; OLWM could be made NeWS aware, so that when our own customers run our own external X window manager on our own server that we ship preinstalled on the disks of our own computers, OLWM could download some PostScript and let NeWS handle the focus management the way it was designed.

>It's criminally negligent to ship a product that is incapable of keeping the input focus up to date with the cursor position, when you have the technology to do so. Your xtrek has paged the window manager out of core, and the console beeps and you suddenly need to move the cursor into the terminal emulator and type the command to keep the reactor from melting down, but the input focus stays in the xtrek for three seconds while the window manager pages in, but you keep on typing, and the keys slip right through to xtrek, and you accidentally fire off your last photon torpedo and beam twelve red shirt engineers into deep space!


Easy! Start by chopping the asteroid into tessellating hendecahedra, then use one of the faces of each solid as a reference surface to project a cube of known size.


Quoting my comment on this paper from 8 months ago on https://lobste.rs/s/pc1v4g/coq_the_world_s_best_macro_assemb..., where it is full of links to the things I'm referring to:

This is a really interesting and thought-provoking idea. In writing httpdito, and in particular in putting a sort of tiny peephole optimizer into it as gas macros, it occurred to me that (some) Forth systems are awfully close to being macro assemblers, and that really the only reason you’d use a higher-level language rather than a macro assembler is that error handling in macro assemblers, and in Forth, is terrible verging on nonexistent. Dynamically-typed languages give you high assurance that your program won’t crash (writing the Ur-Scheme compiler very rarely required me to debug generated machine code), while strongly-statically-typed languages give you similarly weak assurances with compile-time checks; but it seems clear that the programmer needs to be able to verify that their program is also free of bugs like memory leaks, infinite loops, SQL injection, XSS, and CSRF, or for that matter “billion-laughs”-like denial-of-service vulnerabilities, let alone more prosaic problems like the Flexcoin bankruptcy due to using a non-transactional data store.

Against this background, we find that a huge fraction of our day-to-day software (X11, Firefox, Emacs, Apache, the kernel, and notoriously OpenSSL) is written in languages like C and C++ that lack even these rudimentary memory-safety properties, let alone safety against the trickier bogeymen mentioned above. The benefit of C++ is that it allows us to factor considerations like XSS and loops into finite, checkable modules; the benefit of C is that we can tell pretty much what the machine is going to do. But, as compiler optimizations exploit increasingly recondite properties of the programming language definition, we find ourselves having to program as if the compiler were our ex-wife’s or ex-husband’s divorce lawyer, lest it introduce security bugs into our kernels, as happened with FreeBSD a couple of years back with a function erroneously annotated as noreturn, and as is happening now with bounds checks depending on signed overflow behavior.

We could characterize the first of these two approaches as “performing on the trapeze with a net”: successful execution is pleasant, if unexpected, but we expect that even a failure will not crash our program with a segfault — bankrupt Flexcoin, perhaps, and permit CSRF and XSS, but at least we don’t have to debug a core dump! The second approach involves performing without a net, and so the tricks attempted are necessarily less ambitious; we rely on compiler warnings and errors, careful program inspection, and testing to uncover any defects, with results such as Heartbleed, the Toyota accelerator failure, the THERAC-25 fatalities, the destruction of Ariane 5, and the Mars Pathfinder priority-inversion lockups.

So onto this dismal scene sweep Kennedy, Benton, Jensen, and Dagand, with a novel new approach: rather than merely factoring our loops and HTML-escaping into algorithm templates and type conversion operators, thus giving us the opportunity to get them right once and for all (but no way to tell if we have done so), we can instead program in a language rich enough to express our correctness constraints, our compiler optimizations (§4.1), and the proofs of correctness of those optimizations. Instead of merely packaging up algorithms (which we believe to be correct after careful inspection) into libraries, we can package up correctness criteria and proof tactics into libraries.

This is a really interesting and novel alternative to the with-a-net and the without-a-net approaches described earlier; it goes far beyond previous efforts to prove programs correct; rather than attempting to prove your program correct before you compile it, or looking for bugs in the object code emitted, it attempts, Forth-like, to replace the ecosystem of compilers and interpreters with a set of libraries for your proof assistant — libraries which could eventually allow you to program at as high a level as you wish, but grounded in machine code and machine-checkable proofs. Will it turn out to be practical? I don’t know.

(End quote from lobste.rs comment.)

Also, WHAT IN THE LIVING FUCK IS WRONG WITH YOU PEOPLE THAT THE ONLY THING YOU HAVE TO SAY ABOUT THIS PAPER IS ABOUT WHETHER YOU LIKE COCK. HAVE YOU EVEN READ THE FUCKING PAPER. JESUS FUCKING CHRIST WITH A SHOTGUN IN A SOMBRERO, PEOPLE. FUCK YOU, YOU DROOLING MONSTERS. MAKE YOUR COCK JOKES BUT SAY SOMETHING SUBSTANTIVE.


Speaking of Emacs:

https://news.ycombinator.com/item?id=22849522

DonHopkins on April 12, 2020 | parent | context | favorite | on: Enemy AI: chasing a player without Navigation2D or...

James Gosling's Emacs screen redisplay algorithm also used similar "dynamic programming techniques" to compute the minimal cost path through a cost matrix of string edit operations (the costs depended, e.g., on the number of characters to draw, the length of the escape codes to insert/delete lines/characters, padding for slow terminals, etc.). https://en.wikipedia.org/wiki/Gosling_Emacs

>Gosling Emacs was especially noteworthy because of the effective redisplay code, which used a dynamic programming technique to solve the classical string-to-string correction problem. The algorithm was quite sophisticated; that section of the source was headed by a skull-and-crossbones in ASCII art, warning any would-be improver that even if they thought they understood how the display code worked, they probably did not.

https://donhopkins.com/home/archive/emacs/skull-and-crossbon...

Trivia: That "Skull and Crossbones" ASCII art is originally from Brian Reid's Scribe program, and is not copyrighted.

https://donhopkins.com/home/archive/emacs/mw/display.c

    /*  1   2   3   4   ....            Each Mij represents the minumum cost of
          +---+---+---+---+-----        rearranging the first i lines to map onto
        1 |   |   |   |   |             the first j lines (the j direction
          +---+---+---+---+-----        represents the desired contents of a line,
        2 |   |  \| ^ |   |             i the current contents).  The algorithm
          +---+---\-|-+---+-----        used is a dynamic programming one, where
        3 |   | <-+Mij|   |             M[i,j] = min( M[i-1,j],
          +---+---+---+---+-----                      M[i,j-1]+redraw cost for j,2
        4 |   |   |   |   |                           M[i-1,j-1]+the cost of
          +---+---+---+---+-----                        converting line i to line j);
        . |   |   |   |   |             Line i can be converted to line j by either
        .                               just drawing j, or if they match, by moving
        .                               line i to line j (with insert/delete line)
     */

https://donhopkins.com/home/documents/EmacsRedisplayAlgorith...

https://dl.acm.org/doi/10.1145/1159890.806463

A redisplay algorithm, by James Gosling.

Abstract

This paper presents an algorithm for updating the image displayed on a conventional video terminal. It assumes that the terminal is capable of doing the usual insert/delete line and insert/delete character operations. It takes as input a description of the image currently on the screen and a description of the new image desired and produces a series of operations to do the desired transformation in a near-optimal manner. The algorithm is interesting because it applies results from the theoretical string-to-string correction problem (a generalization of the problem of finding a longest common subsequence), to a problem that is usually approached with crude ad-hoc techniques.

[...]

6. Conclusion

The redisplay algorithm described in this paper is used in an Emacs-like editor for Unix and a structure editor. It's performance has been quite good: to redraw everything on the screen (when everything has changed) takes about 0.12 seconds CPU time on a VAX 11/780 running Unix. Using the standard file typing program, about 0.025 seconds of CPU time are needed to type one screenful of text. Emacs averages about 0.004 CPU seconds per keystroke (with one call on the redisplay per keystroke).

Although in the interests of efficency we have stripped down algorithm 5 to algorithm 6 the result is still an algorithm which has a firm theoretical basis and which is superior to the usual ad-hoc approach.

carapace on April 12, 2020 [–]

I wonder how the ratio of display overhead to cost to compute the minimal cost path has changed over the years? At some point it must be cheaper to do a "dumb" redraw if the display hardware is fast enough?

Also, it seems like this might also be useful for computing minimal diffs, eh?

DonHopkins on April 12, 2020 | parent [–]

It was definitely worth it if you had a 300 baud modem and a lightly loaded VAX mostly to yourself. But it became overkill with higher baud rates and high speed networks.

It's based on the "string-to-string correction problem" (edit distance / Levenshtein distance), combined with "dynamic programming" (finding the best cost path through a cost matrix), which is (kind of) how diff works, on a line-by-line basis for inserted and deleted lines, and then on a character-by-character basis between changed lines. Each "edit" has a "cost" determined by the number of characters you have to send to the terminal (length of the escape codes to perform an edit in multiple possible ways, versus just redrawing the characters). Emacs computes its way through a twisty maze of little escape codes each time it updates the screen.

R. A. Wagner and M. J. Fischer. "The string-to-string correction problem." JACM 21, 1 (January 1974), 168-173

Citation: https://dl.acm.org/doi/10.1145/321796.321811

PDF: https://dl.acm.org/doi/pdf/10.1145/321796.321811?download=tr...

https://en.wikipedia.org/wiki/String-to-string_correction_pr...

>In computer science, the string-to-string correction problem refers to determining the minimum number of edit operations necessary to change one string into another (i.e., computing the shortest edit distance). A single edit operation may be changing a single symbol of the string into another, deleting, or inserting a symbol. The length of the edit sequence provides a measure of the distance between the two strings.

https://en.wikipedia.org/wiki/Diff

>In computing, the diff utility is a data comparison tool that calculates and displays the differences between two files. Unlike edit distance notions used for other purposes, diff is line-oriented rather than character-oriented, but it is like Levenshtein distance in that it tries to determine the smallest set of deletions and insertions to create one file from the other. The diff command displays the changes made in a standard format, such that both humans and machines can understand the changes and apply them: given one file and the changes, the other file can be created.

https://en.wikipedia.org/wiki/Levenshtein_distance

>In information theory, linguistics and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after the Soviet mathematician Vladimir Levenshtein, who considered this distance in 1965.

https://en.wikipedia.org/wiki/Edit_distance

>In computational linguistics and computer science, edit distance is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other. Edit distances find applications in natural language processing, where automatic spelling correction can determine candidate corrections for a misspelled word by selecting words from a dictionary that have a low distance to the word in question. In bioinformatics, it can be used to quantify the similarity of DNA sequences, which can be viewed as strings of the letters A, C, G and T.

Here's a Levenshtein demo that shows you the cost matrix with the shortest path highlighted. Try calculating the distance between "fotomat" and "tomato", and you can see the shortest path which has a cost of 3 edits (1: delete f at beginning, 2: delete o at beginning, 3: insert o at end):

http://www.let.rug.nl/kleiweg/lev/

>Levenshtein distance is obtained by finding the cheapest way to transform one string into another. Transformations are the one-step operations of (single-phone) insertion, deletion and substitution. In the simplest versions substitutions cost two units except when the source and target are identical, in which case the cost is zero. Insertions and deletions cost half that of substitutions. This demonstration illustrates a simple algorithm which basically looks at all of the different ways for operations to transform one string to another.
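For concreteness, here's a minimal sketch of the Wagner-Fischer dynamic-programming table under plain unit costs (the demo above charges 2 per substitution instead); Go is used purely for illustration, and this is of course not Emacs's actual redisplay code:

    package main

    import "fmt"

    // levenshtein fills the Wagner-Fischer cost matrix: d[i][j] is the cheapest
    // way to turn the first i runes of a into the first j runes of b, with
    // insert/delete/substitute each costing 1.
    func levenshtein(a, b string) int {
        ra, rb := []rune(a), []rune(b)
        d := make([][]int, len(ra)+1)
        for i := range d {
            d[i] = make([]int, len(rb)+1)
            d[i][0] = i // i deletions
        }
        for j := 0; j <= len(rb); j++ {
            d[0][j] = j // j insertions
        }
        for i := 1; i <= len(ra); i++ {
            for j := 1; j <= len(rb); j++ {
                sub := d[i-1][j-1]
                if ra[i-1] != rb[j-1] {
                    sub++ // substitution (the linked demo would charge 2 here)
                }
                d[i][j] = minInt(sub, minInt(d[i-1][j]+1, d[i][j-1]+1))
            }
        }
        return d[len(ra)][len(rb)]
    }

    func minInt(x, y int) int {
        if x < y {
            return x
        }
        return y
    }

    func main() {
        // 3: delete 'f', delete 'o', insert 'o' at the end
        fmt.Println(levenshtein("fotomat", "tomato"))
    }

The full screen-update problem layers terminal-specific costs (the lengths of the escape sequences for each possible edit) on top of this same kind of table, which is the part the redisplay code sweats over.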


Erlang mailboxes:

1. Are tied to processes. They are not first-class values. (The processes are, and the mailboxes go with them.)

2. Can receive messages out of order using pattern matching. This is critically used all over the place for RPC simulations: you send a message out to some other process, and then receive the reply by matching on the mailbox. You may receive other messages in the meantime in that mailbox, but the RPC is safe because the receive block waiting for the answer will keep waiting for that message, and the rest will stay there for later reception. (A rough Go analogue of this pattern is sketched just after this list.)

3. Are zero-or-one delivery. Messages going to the same OS process are reliable enough to be treated as reliable exactly-once delivery, but you are not really supposed to count on that. (This is one of the ways you can write an Erlang system that accidentally can't run on a cluster.) Messages going across nodes may not arrive because the network may eat them. If you're really excited you can examine a process ID to see if it's local, but you're generally not supposed to do that.

As part of this, mailboxes are fundamentally asynchronous. You cannot wait on "the message has arrived at the other end", because in a network environment this isn't even a well-defined concept. The only way to know that a message arrived is to be explicitly sent an acknowledgement, an acknowledgement that may itself get lost (the Two Generals problem).

4. Send Erlang terms, only Erlang terms, and exactly Erlang terms. Erlang terms form an internal dynamically-typed language with no support for user-defined types. This is important because it is how Erlang is built around upgradeability and having multiple versions of things in the same cluster. Since there are no user-defined types, you never have problems with mismatched type definitions. (You can still have mismatched data, of course; if a format changes, it changes. But the problem is at least alleviated by not supporting complicated user-defined types. It is, arguably and in my opinion, a throwing-the-baby-out-with-the-bathwater situation, but I do admit against interest (as the lawyers say) that practically it works out reasonably well. Especially since the architect of the system really ought to know this is how it works up front.)

Because of the dynamically-typed nature, they are effectively untyped. Any message can be sent to a mailbox.

5. Are many-to-one communication, across an entire Erlang cluster. A given mailbox is only visible to one Erlang process; it is part of that process. Again, the process ID is a first-class value that can be passed around but the mailbox is not.

6. There are debug mechanisms that allow you to look into a mailbox live, on the cluster's shell. You can see what is currently in there. You really shouldn't be using these debug facilities as part of your system, but you can use them as a devops sort of thing. (As hard as I've been on Erlang, the devops story is pretty powerful for fixing broken systems. That said, the dynamic typing meant I had to fix more broken systems live than I ever have for Go; I haven't missed this because my Go systems generally don't break. Still, if they are going to break, Erlang has a lot of tools for dealing with it live.)
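Before moving on to Go: the closest Go analogue of the RPC pattern from point 2 is to carry a fresh reply channel inside every request, because Go has no selective receive. A minimal, purely illustrative sketch (all names are made up, not from any library):

    package main

    import "fmt"

    // request pairs a payload with a per-request "return address"; the caller
    // waits on its own reply channel, so it can't accidentally consume some
    // other message the way a shared mailbox without selective receive could.
    type request struct {
        payload string
        reply   chan string
    }

    func server(requests <-chan request) {
        for req := range requests {
            req.reply <- "echo: " + req.payload // the answer goes only to this caller
        }
    }

    func main() {
        requests := make(chan request)
        go server(requests)

        reply := make(chan string, 1) // buffered so the server never blocks on a slow caller
        requests <- request{payload: "hello", reply: reply}
        fmt.Println(<-reply) // blocks until exactly this request's answer arrives
    }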

Go channels:

1. Are first-class values not tied to any goroutine. One goroutine may create a channel and pass it off to two others who will communicate on it, and they may further pass it around.

2. Are intrinsically ordered, at least in the sense that receivers can't go poking along the channel to see what they want to pull out of it.

However, an aspect of Go channels is that with a "select" statement, a single goroutine can wait on an arbitrary combination of "things I'm trying to send" and "things I'm trying to receive". This is entirely unlike pattern matching on a mailbox, and is probably a really good example of how you need solutions to certain communication problems, but they don't have to be the exact Erlang solution in order to work. Complicated communication scenarios that Erlang might achieve with selective matching can be done in Go with multiple channels multiplexed with a select. (A minimal select sketch appears after this list.) There are tradeoffs in either direction and it isn't clear to me one dominates the other at all.

3. Are synchronous exactly-once delivery. This further implies they only work on a single machine, and in Go they only work within a single process. This further implies that there is no such thing as a "network channel" in Go. You can, of course, have all sorts of things that sort of look like channels that work over a network, but it is fundamentally impossible to have a channel (of the "chan" type that can participate in the "select" statement) that goes over the network because no network can maintain the properties required for a Go channel.

It is also in general guaranteed that if you proceed past a send on a channel, that some other goroutine has received the value. This makes it useful for synchronization.

(There are buffered channels that can hold a certain fixed number of values without their having actually been received, but in general I think they should be treated as exceptions, precisely because losing this property is a bigger deal than people often realize. A lot of things in Go are built on it. Contrary to popular belief, buffered channels are not asynchronous, because they are fixed size; they're just asynchronous up to a certain point. Erlang mailboxes are asynchronous until you run out of memory, which is not unheard of or impossible but isn't terribly common, especially if you follow the normal OTP patterns.)

4. Are typed. Each channel sends exactly one type of Go value, though that value can be an interface value, meaning it can theoretically carry multiple concrete types. (Generally I define the channel with the type I want, though there is the occasional use case where I have a channel of a closed interface that basically uses the interface value like a sum type. I am still not sure whether this is better or worse than having a channel per possible type, and I've been doing this for a long time. I'm still going back and forth.)

5. Are many-to-many communication, isolated to a single OS process. It is perfectly legal and valid to have a single channel value that has dozens of producers and dozens of consumers. The performance implications depend on the rate at which those goroutines are trying to communicate; at low rates, for instance, it's no problem at all.

6. Are completely opaque, even within Go. There is no "peek", which would after all break the guarantee that if an unbuffered channel has had a "send", there has been a corresponding "receive".
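As promised in point 2, here is a minimal sketch of a single goroutine waiting on a send and a receive at the same time. Everything here is illustrative, not a recommended architecture:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        in := make(chan string)
        out := make(chan string)

        // Peer 1 will be ready to receive from `out` after 50ms.
        go func() {
            time.Sleep(50 * time.Millisecond)
            fmt.Println("peer 1 received:", <-out)
        }()

        // Peer 2 will be ready to send on `in` after 100ms.
        go func() {
            time.Sleep(100 * time.Millisecond)
            in <- "hello from peer 2"
        }()

        for i := 0; i < 2; i++ {
            select {
            case msg := <-in: // something I'm trying to receive
                fmt.Println("main received:", msg)
            case out <- "hello from main": // something I'm trying to send
                fmt.Println("main handed off a value")
            }
        }
        time.Sleep(100 * time.Millisecond) // let peer 1's print land before exit
    }

Whichever peer becomes ready first decides which case fires; Erlang would express this kind of coordination with selective receive plus explicit acknowledgement messages instead.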

Contra another comment I see, no, you cannot implement one in terms of the other. You can get close-ish, but there are certain aspects of them that simply do not cross, notably their sync/async nature, their differing network transparencies, and the inability of an Erlang mailbox to be "many-to-many", particularly in the way the "many-to-many" still guarantees "exactly once" delivery. (You can set up certain structures in Erlang that get close to this, but no matter what you do, you cannot build an abstraction on a zero-or-one delivery system that creates an exactly-once delivery system.)

You can solve almost any problem you have with either of these. You can use either to sort of get close to the other, but in both directions you'll sacrifice significant native capabilities in the process. It's really an intriguing study in how the solution to very similar problems can almost intertwine like an Asclepius staff, twisting around the same central pole while having basically no overlap. And again, it's not clear to me that either is "better"; both have small parts of the problem space where they are better than the other, both solve the vast bulk of problems you'll encounter just fine.


It's fully rejuvenated in Clojure as "Emmy", with Sussman's support and a bunch of 2D and 3D graphing extensions. See Emmy-Viewers: https://emmy-viewers.mentat.org/ and Emmy: https://emmy.mentat.org/

Thanks to https://2.maria.cloud, everything in SICM and FDG works in the browser as well: https://2.maria.cloud/gist/d3c76ee5e9eaf6b3367949f43873e8b2


Let's see. The two big codes I worked with, BLAST and AMBER, have competitors. For BLAST there has been a long history of codes that attempted to do better than it, and I don't think anybody really succeeded at that except possibly HMMER2. Both BLAST and HMMER2 had decades of effort poured into them. BLAST was rewritten a few times by its supporting agency (NCBI), and the author(s) of HMMER rewrote it to be HMMER2. I worked with the guy who wrote the leading competitor to HMMER; he was an independently wealthy computer programmer (with a physics background). In the case of AMBER, there are several competitors: gromacs, NAMD, and a few others are all used frequently. AMBER has been continuously developed for decades (I first used it in '95 and already it was... venerable).

All the major players in these fields read each other's code and papers and steal ideas.

In other areas there are no competitors, there's just "write the minimal code to get your idea that contributes 0.01% more to scientific knowledge, publish, and then declare code bankruptcy". And a long tail of low to high quality stuff that lasts forever and turns out to be load-bearing but also completely inscrutable and unmodifiable.

After typing that out I realize I just recapitulated what you said in your first paragraph. My knowledge of finance doesn't extend much beyond knowing that "jane street capital has been talking about FP for ages", and most of the people I've talked to say their work in finance (mostly HPC) is C++ or hardware-based.


Of course! And referencing your other comment, during the ~2 year period I've been working on Emmy (on top of work by Colin Smith), I was keen to make the implementation more accessible and well-documented than the original.

There's still not a great map of the project (from primitives to general relativity), but many of the namespaces are written as literate programming explorations: https://emmy.mentat.org/#explore-the-project

Here's the automatic differentiation implementation/essay, for example: https://sritchie.github.io/emmy/src/emmy/differential.html

A rough sketch of the tower is:

- `emmy.value` and `emmy.generic` implement the extensible generic operations

- `emmy.ratio`, `emmy.complex` and `emmy.numbers` flesh out the numeric tower

- `emmy.expression` and `emmy.abstract.number` add support for symbolic literals

Next we need an algebraic simplifier...

- `emmy.pattern.{match,rule,syntax}` give us a pattern matching language

- `emmy.simplify.rules` adds a ton of simplification rules, out of which

- `emmy.simplify` builds a simplification engine

Actually the simplifier has three parts... the first two start in `emmy.rational-function` and `emmy.polynomial` and involve converting an expression into either a polynomial or a rational function and then back out, putting them into "canonical form" in the process. That will send you down the rabbit hole of polynomial GCD etc...

And on and on! I'm happy to facilitate any code reading journey you go on or chat about Emmy or the original scmutils, feel free to write at sam [at] mentat.org, or else visit the Discord I run for the project at https://discord.gg/hsRBqGEeQ4.


Historically, the Rebol family has its roots in the ideas of symbolic programming, denotational semantics and distributed computing. It was never an academic project (so you won't find any papers or articles), but rather a mix of pragmatism and nonacceptance of the current state of software, so there's no magic to it: you reject the bloat and keep things simple.

The core idea is that, in Red/Rebol, you don't write "code" in the usual sense of the word, but rather load a data structure in memory that interpreters/compilers then process.

This data structure is a concrete-syntax tree (CST), the product of lexing a syntactically rich data format (think of a human-centered, futuristic JSON with unique literal forms for things like IP addresses, RGBA tuples, dates in various ISO formats, monetary values, URLs, e-mails, etc). The result of evaluation, then, is the end game of interpreters/compilers "walking" this CST, and each one can interpret the same tree differently.

So, Red/Rebol is both a programming language and a data format; there is no distinction between code and data. Rebol called itself a "messaging language" because this CST, coupled with semantic rules for its interpretation, constitutes an embedded DSL (aka dialect): a message that you can exchange not only between machines (as a data structure), but between machines and humans (as a readable format).

Implementation-wise, the leverage comes from a runtime library that reuses what the OS provides, a high degree of polymorphism, everything being a first-class citizen, and the above-mentioned syntactically rich homoiconicity, with a general orientation towards linguistic abstraction.

With all of that we end up with a tool (speaking of Red) that covers the whole spectrum of software development, from metal to meta: user-land functions that seamlessly operate on CST values and enable meta-programming at run-time without macros, and a set of dialects that process said CST and encapsulate the problem domain's complexity, e.g. VID for declarative GUI specification, Draw for raster drawing, Parse for creating other DSLs (think of it as Meta-II on speed, wearing jet-rollers) and Red/System for low-level programming. All packed into a few megabytes.

These are the principles. Rebol was an interpreted language bootstrapped from C; Red is both compiled and interpreted: the runtime is written in Red/System, and the bootstrapping compiler and the Red/System toolchain are written in Rebol2.

So, Red is written in itself, Red/System and Rebol; Red/System is a dialect of Red; Rebol is 90% compatible with Red (roughly the same data format and language semantics). Let this meta-circularity sink in.

Balbaert's book is IMO a very shallow explanation of the language basics, often plain wrong and technically incorrect, so I cannot in good faith recommend it, especially if you are a newcomer to Rebol family with ambitions to go advanced and master the tool.

WRT pointers toward design-centered discussions around Red/Rebol, consider joining the community Gitter channel and reading the wiki articles. E.g. one of them (shameless plug) goes a bit deeper into the conceptual underpinnings I described above.

https://gitter.im/red/red

https://github.com/red/red/wiki/%5BDOC%5D-How-Red-works,-a-b...

