Can't somebody just invent a pluggable PC architecture, à la Kubernetes?
So you'd connect several PCs or notebooks to a switch, or over WiFi, then run a desktop (GNOME or something), but with a layer in between; heck, it could even just be a wrapper that runs before the binary.
The layer or the wrapper would check each machine's load and memory, pick whichever one has enough free resources, run the app there, and show you a window with your app running on a PC/notebook other than your "main" one, as fast as the network connection you're using to cluster your assembled PCs/notebooks allows.
I think the X Window System and some Go code could do the trick? There's prior art in sharing clipboards/mouse/keyboard input (Synergy).
What would really be necessary is a scheduler à la Kubernetes (though a somewhat clever bash script could do the job as well), to be able to orderly tap the free resources of the connected PCs/notebooks.
No need to rewrite thousands of apps or design a new desktop environment from scratch, and you could actually run Chrome on its own hardware, just to keep a hundred tabs open if you feel like it.
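To make the idea concrete, here's a minimal sketch of that "clever bash script" scheduler. It assumes each node can report its free memory somehow (a real script might collect it with `ssh host free -k`); the host names and numbers below are made up for illustration:

```shell
#!/bin/sh
# Toy placement logic for the "cluster of PCs" idea: given "host free_kb"
# pairs (one per node), pick the host with the most free memory, then run
# the app there with X forwarding so its window shows up on the local desktop.

pick_host() {
    # stdin: one "host free_kb" line per node; stdout: the best host
    sort -k2,2 -rn | head -n1 | cut -d' ' -f1
}

# Hypothetical usage (hosts and figures are assumptions, not real machines):
#   host=$(printf 'desk1 412000\nlaptop2 8240000\n' | pick_host)
#   ssh -X "$host" chromium   # window appears locally, process runs remotely
```

The `ssh -X` part is what makes the "window from another machine" bit work with existing, unmodified apps: X11 forwarding ships the display back over the network, so only the placement decision needs new code.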
Almost nobody really knows how advanced the state-of-the-art theory / applied technology is in the confidential work the usual suspects (DeepMind, OpenAI, Baidu, the NSA, etc.) may have already done.
AGI could already have been achieved, at least theoretically, somewhere. It would be like when Edison got a light bulb working while the rest of us were still burning oil, knowing nothing about electricity, light bulbs, or energy distribution networks and infrastructure.
That would be the actual current technology level: new, and mostly still unimplemented.
Back then you wouldn't have believed it if someone had told you, "hey, city nights won't be dark anymore in ten years."
This is an AI equivalent of believing that the NSA has proved that P=NP and can read everyone's traffic.
There's no way to disprove it, but given that in the open literature people haven't even found a way to coherently frame the question of general AI, let alone theorize about it, it becomes just another form of magical thinking.
You're partially right (because AGI really does look VERY far away given the publicly known state of theory), but it's not exactly "magical thinking".
There are several public examples of theory/technology radically more advanced than what was publicly thought possible at the time, kept secret by governments/corporations for a very long time (decades).
Lockheed built the Blackbird decades before it was even admitted that technology like that could exist. Looking backwards it may seem like an "incremental" advance, but it wasn't: the engineering required to make the Blackbird fly was revolutionary for the time it was invented (back in the 50s/60s).
The Lockheed F-117 and its tech followed a similar path, only somewhat admitted in the late 80s (and this was 70s technology, probably based on theoretical concepts from the 60s).
More or less the same could be said about the tech at Bletchley Park: current theory/technology propelled to extraordinary capabilities by radical, top-secret advances in engineering. The hardware, events, and advances at Bletchley Park were kept secret for years (I think only in the 50s did they start to be carefully mentioned, though not fully admitted, and nothing even close to the details currently found on Wikipedia).
At any given time there could be a lot of theory/technology leaps being achieved out there, several decades ahead of the publicly published, supposedly current, state of the art.
> It's unbelievable how much code does not react properly to communication failures / init / lag.
I think this is the main reason those of us who do some kind of engineering work have jobs.
Those issues are prevalent in anything built under some time and/or money constraint, which is to say almost everything built and in production on Earth.
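A tiny illustration of the kind of handling that's usually missing under those constraints: a retry wrapper with exponential backoff, so a transient network hiccup isn't treated as a fatal error. This is a sketch, not any particular tool's API:

```shell
#!/bin/sh
# retry MAX CMD [ARGS...]: run CMD, and on failure retry up to MAX attempts,
# doubling the delay between attempts (1s, 2s, 4s, ...).
retry() {
    max=$1; shift
    delay=1
    n=1
    until "$@"; do
        [ "$n" -ge "$max" ] && return 1   # out of attempts: propagate failure
        sleep "$delay"
        delay=$((delay * 2))
        n=$((n + 1))
    done
}

# Hypothetical usage (the URL is made up):
#   retry 5 curl -fsS https://example.invalid/health
```

It's a dozen lines, yet a surprising amount of production code calls the network once and gives up, which is exactly the complaint quoted above.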
I can confirm that from the point of view of a Spanish-speaking viewer here in LATAM. We had our share of time, decades, of a fully military-controlled society, so now, living in democracy, many expressions or even words/verbs have subtly changed their usual meaning (as it appears in the dictionary) to something else.
E.g. the verb "to order": in Spanish it's used more frequently for the action of putting things in their intended configuration ("to order the kitchen", i.e. to tidy it). But in our formerly military-controlled LATAM societies there's an extra meaning:
"to establish and enforce/assert a NEW set of rules"
So we use "to order" as an unofficial synonym of those verbs (enforce/assert), plus the notion of NEW rules:
"Something different from what we've been doing until now is going to be implemented somehow."
"Ordino" already had both meanings (arrange and command) in Latin, and I think that remains true across Romance and Germanic languages alike. I'm not sure that's really "an extra meaning" in Latin America.
This is exactly how some guys in gyms used to take anabolic steroids; I'm talking about the practice around the 90s, when for them this was just a dark pattern in sports (heavy weightlifting). Some would train several months a year while on the drugs, then stop, and after a while (months to a year or more) return to competition.
The street theory, widely supported by old users and ex-competitors, was that, OK, the "super-size" could be gone after some months, but the strength increase would not. So you had to, and would be able to, train really hard in those months: taking steroids and supplements, controlling diet, etc.
It could have been the best available shot at avoiding a neo-totalitarian society, and the current abuse could actually be a minor consequence (and completely manageable for nation states; take a look at the EU's GDPR).
That moral view matches Peter Thiel's opinion about the good in mass surveillance: his take was that after 9/11, powerful Western entities leaned toward really harsh, intrusive countermeasures (and quite probably the restriction of existing liberties) that could fundamentally change Western societies as they existed at that moment.
The alternative to that way of doing things was to allow the existence of public and private entities like Facebook and their surreptitious intrusiveness: permitting mass surveillance, but also making it possible to avoid other, more gruesome measures.
And that could explain why nobody can pull off a Facebook takedown.
I hope it's clear from my comment that I'm neither apologizing for Facebook nor sympathizing with Thiel and his crap, etc. I think assuming good faith and looking for charitable explanations is good practice, even if I don't actually believe these people deserve the charity.