> is Alan Kay for real? Where's the Lisp equivalent to this C code?
Have you ever heard of the STEPS project? It's a complete OS, with application libraries and a compiler collection, all in 20,000 lines of code. Pretty real if you ask me. Here are the reports (latest first).
I happen to have read many of VPRI's papers and have some domain expertise in a few of their topics, or know close friends with such expertise. Btw, my collection of their links is at https://news.ycombinator.com/item?id=19844088
The goal is a noble one, though it's unclear to me whether their research is the right way to go.
The end results of Alan Kay's group's research are often great, but the implementations are usually geared more toward prototyping than production. For example, the GUI innovation is great, whereas its implementation in Smalltalk is clearly geared toward fast iteration and not meant to compete with production languages (let's not enter that debate, as it's another rabbit hole; plenty of other HN threads on this). But I think so far these implementations have gotten a pass because the end result is futuristic enough for someone else to come along and implement, e.g., the GUI in more production-oriented languages.
In the context of VPRI, however, the goal wasn't to create more futuristic end results, but to recreate existing results using, hopefully, drastically more concise paradigms and DSLs. In this case, the implementation itself is the thing we should examine, and imo they kinda fall short.
For example:
- Yes, it's very appealing that Nile can reduce the antialiasing logic by so much. Is it resilient to real-world requirement changes, though? Unclear (and the Nile paper still isn't out, as the author seems busy with some startup).
- The OMeta parser tool is cool until you encounter more complex grammars and need proper error messages (in most non-hand-rolled parser tools, error messages and error recovery are treated as secondary concerns; see the sketch after this list).
- Reusing the same state machine from OMeta (afaik) for TCP is clever and theoretically sound, but likely also not shippable in production the moment you want to tweak some low-level details.
- The dynamic language used for the final product bakes in some pretty hardcore first-class features, but... this is too long to explain; experienced folks reading this probably know what would happen to such a language in production.
- The bulk of the final line-count reduction wouldn't come from language-level concision, but from VPRI's rehash of OpenDoc, i.e. the end product is a Word/PowerPoint hybrid put together from massively (over)reused components. The moment such an app hits the real world and a component needs to deviate from its use-case at other callsites, that amount of reuse is gonna decrease quickly. HN has plenty of comments regarding OpenDoc, so we can check history here. The final use-case being an OpenDoc rehash is also slightly cheating imo. Demo a game. The line count there is a better stress test.
- Real-world perf requirements are gonna increase the line count by a lot. I understand they're employing the classic PARC strategy of "iterate as if you have the supercomputer of tomorrow, today", but plenty of languages today can already reduce their line count by a lot if perf weren't a consideration.
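To make the error-message point concrete, here's a tiny sketch in Python (emphatically not OMeta itself; every name here is invented for illustration) of the extra machinery PEG-style tools typically bolt on for diagnostics: remembering the farthest failure position and what was expected there. The grammar formalism gives you none of this for free, which is why error quality tends to be an afterthought in generated parsers.

    # A minimal sketch (not OMeta, not VPRI's code) of "farthest failure"
    # error reporting for a toy PEG-style recursive-descent parser.
    class Parser:
        def __init__(self, text):
            self.text = text
            self.farthest = 0      # farthest offset at which a rule failed
            self.expected = set()  # what we hoped to see at that offset

        def fail(self, pos, what):
            if pos > self.farthest:
                self.farthest, self.expected = pos, {what}
            elif pos == self.farthest:
                self.expected.add(what)
            return None

        def char(self, pos, allowed, what):
            if pos < len(self.text) and self.text[pos] in allowed:
                return self.text[pos], pos + 1
            return self.fail(pos, what)

        # toy grammar:  expr <- num ('+' num)*   num <- [0-9]+
        def num(self, pos):
            digits, p = "", pos
            while (r := self.char(p, "0123456789", "a digit")):
                digits, p = digits + r[0], r[1]
            return (int(digits), p) if digits else None

        def expr(self, pos):
            r = self.num(pos)
            if not r:
                return None
            total, p = r
            while (plus := self.char(p, "+", "'+'")):
                rhs = self.num(plus[1])
                if not rhs:
                    return None
                total, p = total + rhs[0], rhs[1]
            return total, p

        def parse(self):
            r = self.expr(0)
            if r and r[1] == len(self.text):
                return r[0]
            raise SyntaxError("at offset %d: expected %s"
                              % (self.farthest, " or ".join(sorted(self.expected))))

    print(Parser("1+2+3").parse())      # -> 6
    try:
        Parser("1+*3").parse()
    except SyntaxError as e:
        print(e)                        # -> at offset 2: expected a digit

Real tools need much more than this (error recovery, resynchronization points, filtering expectations from failed alternatives), which is where grammar-driven parsers usually start to hurt compared to hand-rolled ones.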
All in all, I'm not sure VPRI's respectable goal can be concretized through their method. I am, however, extremely on board with the goal itself (shrinking code to the point where very few people are needed to understand the whole system). Here's an alternative take on the same goal: https://www.youtube.com/watch?v=kZRE7HIO3vk and of course Casey's friend Jon Blow, whose language Jai has so far obsoleted a bunch of other DSLs he'd usually have needed in his workflow (https://www.youtube.com/watch?v=uZgbKrDEzAs). Now that's a more realistic way of shrinking and simplifying code, imo. Between:
- a great language that can demonstrably do low- to high-level coding, with an in-language metaprogramming facility that removes the need for DSLs, and
- VPRI's opposite direction of proliferating many DSLs (which in the real world are gonna cause nontrivial accidental complexity),
I think I agree with him at this point. Which is a bit sad. 12 years ago I went into academia because I wanted to do self-driven research, but then I realized academia isn't a good place to do research unless you are at the top of the hierarchy.
Then ~6 years ago I tried starting a startup and working for startups, but as Kay points out, that's not a great place to do research either. Really, there you need to be ruthlessly market- and production-focused.
So I guess my first two guesses for where I could do my own research were wrong. My new theory: The best place to do research is in your garage.
Let's see if I still feel the same way 6 years from now.
Final Viewpoint Research Institute report, where the entire system (antialiasing, font rendering, SVG, etc.) was rendered using Nile into Frank, the Word/Excel/PowerPoint equivalent written completely in Nile + Gezira + KScript: http://www.vpri.org/pdf/tr2012001_steps.pdf
Since Alan Kay himself is on Hacker News, maybe he can comment with more links I couldn't find. It's hard to fire up the live Frank environment, though understandably reproducing the working env might not have been a top goal. Maybe someone more cunning can merge these into a single executable repo or whatever so that I can stop playing archaeologist =)
Actually, we didn't claim to have successfully done this.
More careful reading of the proposal and reports will easily reveal this.
The 20,000 lines of code was a strawman, but we kept to it, and got quite a bit done (and what we did do is summarized in the reports, and especially the final report).
Basically, what didn't get finished was the "bottom-bottom" by the time the funding ran out.
I made a critical error in the first several years of the project because I was so impressed with the runnable graphical math -- called Nile -- by Dan Amelang.
It was so good and ran so close to actual real-time speeds, that I wanted to give live demos of this system on a laptop (you can find these in quite a few talks I did on YouTube).
However, this committed Don Knuth's "prime sin": "premature optimization is the root of all evil".
You can see from the original proposal that the project was about "runnable semantics of personal computing down to the metal in 20,000 lines of code" -- and since it was about semantics, the report mentioned that it might have to be run on a supercomputer to achieve the real-time needs.
This is akin to trying to find "the mathematical content" of a complex system with lots of requirements.
This is quite reasonable if you look at Software Engineering from the perspective of regular engineering, with a CAD<->SIM->FAB central process. We were proposing to do the CAD<->SIM part.
My error was to want more too early, and try to mix in some of the pragmatics while still doing the designs, i.e. to start looking at the FAB part, which should be down the line.
Another really fun sub-part of this project was Ian Piumarta's from-scratch runnable TCP/IP in less than 200 lines of code (which included parsing the original RFC about TCP/IP).
It would be worth taking another shot at this goal, and to stick with the runnable semantics this time around.
And to note that if it wound up taking 100K lines of code instead of 20K, it would still be helpful. One of the reasons we picked 20K as a target was that -- at 50 lines to a page -- 20K lines is a 400 page book. This could likely be made readable ... whereas 5 such books would not be nearly as helpful for understanding.
This strikes me as very much a 21st century sort of comment, which looks at the "what" and totally fails to consider the "why" and the historical context.
> The thick borders took up valuable screen space and weren’t necessary.
Define "necessary". I think you are not considering why they were present in the historical context.
> They weren’t present prior to Mac OS 8
Yes, but there are reasons for that which I will go into.
> and they weren’t present after Mac OS 9.
There was no "after"; MacOS 9 (no space) was the last version. It was replaced.
> You might consider the era of thick borders as a 5-year blip on the timeline from 1997 to 2002.
Which fails to consider what happened in that timeframe.
Up to System 7.x you could only resize a MacOS (note, again, no space; that was important) window from the bottom right corner, where there was quite a big widget for this sole purpose... but in a brilliant bit of UI design, it was at the intersection of the vertical and horizontal scrollbars. Continuing either of them into the space past the end of the other would imply priority and that was a bad thing; the classic MacOS UI thought about this.
Now consider what happened in 1996. Apple was in big trouble, bought NeXT, and Steve Jobs came back. The primary reason was to replace MacOS with Mac OS. (Note: that's why the space is important. MacOS = classic; Mac OS = OS X.)
Jobs cancelled Copland, the planned MacOS 8, and directed the internal Apple team to start salvaging what could be taken from Copland into what was really MacOS 7.7 or something, renaming it to MacOS 8 in order to make it look big and important.
It wasn't; it was UI tweaks and stuff. E.g. the multithreaded Finder from Copland, and the Appearance control panel that allows skinning of the OS, which MacOS couldn't do before.
(All this while the new NeXT team are porting OpenStep to PowerPC and building a VM to run Classic in, stuff that has no customer impact or benefit yet.)
Important point #1: this is adding a lot of customisability to the MacOS UI that wasn't there before. This is not some minor trivial point of graphical design.
Important point #2: this is Jobs aping a Microsoft tactic.
Windows 98 is the same timeframe. Win98 is the same basic OS as 95, but with UI tweaks. Why? Because NT 4 is late, and not ready for consumers, but also, because at the time, MS is fighting the US DoJ over monopoly claims, because MS is bundling IE with Windows.
To fight this MS rewrites bits of the Windows Explorer in IE. Gaining, oh hey look, what a coincidence, a multithreaded Explorer, because window contents are rendered in HTML... which means it gets a selling point to upsell W95 customers to W98.
Apple borrows the adaptable UI stuff from Copland and backports it to MacOS 7.
Result: now you can resize a window from any side, like Windows. Jobs comes back and Apple starts "borrowing" ideas from Windows UI and MS GUIs, something pre-Jobs Apple wouldn't do, and only fair as Microsoft "borrowed" so much from Apple.
So how do you show that a window can be resized from any side, not just from the bottom right corner? Answer: you put big fat draggable window borders on it, just like Windows has.
Because Apple was recycling tech from its own, very expensive, failed new-OS development project, so that it could:
[1] offer UI tweaks that [a] enabled it to upsell customers an OS facelift and [b] showed that it had learned both UI and business methods from MS.
[2] as a byproduct, kill the Mac clones, since the clone licences only covered System 7
[3] find a use for the hundreds of millions of dollars it spent on Copland
[4] show people it could adapt and survive and sell new stuff while the NeXT team worked on Rhapsody, which would in time become Mac OS X.
In other words, there are very good, strong reasons why those windows got big fat borders, important reasons that helped to save the company.
Second lump of history you failed to consider.
Why didn't Mac OS X (note, with a space), have those?
Two reasons.
[1] Because NeXTstep didn't have fat window borders, and Mac OS X is NeXTstep. NeXTstep only let you resize from the bottom left or bottom right, and to do that, it had a big fat bottom window border with marked bits at the left and right ends to show you where to grab.
And why didn't NeXTstep use the bottom right corner? Because NeXTstep doesn't put scroll bars on the right. It puts them on the left.
Why? Because in the mid-1980s, when Jobs started NeXT, Apple had recently sued Digital Research over GEM, because GEM looked too much like MacOS (no space), and it won, and PC GEM was crippled as a result.
So the little startup company founded by the departed leader of the hostile litigious one does things as differently as it can so it can't get sued by the CEO's former employers. Or by Microsoft.
So, no menu bar, menus stack up on the left.
So, scroll bars are somewhere else.
So, windows aren't resizable by the edges and don't have fat window edges.
Aside:
Fun fact: Motif had those fat window edges and resizing from all corners and all edges too, because the design of Motif was licensed from Microsoft by the OSF. This is also why Motif menu bars are inside the window, like on MS Windows. Because back then MS was trying to be Not Like Apple, and Apple wasn't like anyone by default, but both sued anyone who copied their designs.
And that's also why almost all Linux desktops today are recycling the same tired old ideas. Because since the dawn of GUIs on xNix, it's been under the influence of Apple and Microsoft designs.
Sun did interesting stuff in OpenWindows and OpenLook but it wasn't really "open" despite the name and it's gone now. Damned shame.
SGI did some, in other areas, in IRIX. Also not really open. Also gone now. Also a shame, although there is the Maxx desktop, but nobody cares because everyone else did an end-run around it and the industry moved on. By 1993 it was a little throwaway gag in _Jurassic Park_ -- "hey, this is Unix, I know this!"
End of aside.
Mac OS X dropped that clunky bottom bar, because when the litigious company that sues copiers owns you, you don't need to worry. So, scrollbars move back to the right. Menu bars go on the top screen edge, like MacOS and GEM and AmigaOS, and where Fitts's Law of ergonomics says they are easy to hit.
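(For reference, the commonly cited Shannon formulation of Fitts's Law models the time to acquire a target of width W at distance D as

    MT = a + b \log_2\left(1 + \frac{D}{W}\right)

A menu bar pinned to the screen edge behaves as if W were effectively unbounded along the approach axis, since the cursor can't overshoot it, which is why edge-of-screen targets are so fast to hit.)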
And then Mac OS X one point zero, sold as Mac OS X 10.0, uses 3d shading to show the edges of windows, because by the 21st century you could assume the display was in 24-bit colour and could do stuff like that.
And truecolour icons and things, and shading everywhere, because when you control the hardware you can assume stuff like this and show it as the only way of showing window edges.
Fun fact: we early adopters of Mac OS X used little utilities to turn off the window shadows because it sapped performance on the low-powered old Macs we were using to run this stuff.
But oh hey look, the window corner widget has come back because we need to tempt Mac users onto the new OS, too.
Because Jobs _thought_ of stuff like this. How can I use any of the failed OS project my company rendered obsolete? How can I salvage some of all that wasted R&D budget? How can I sell interim releases? How can I make MacOS (no space) a little bit more familiar to Windows users? MS is making money selling small UI innovations to Windows customers, so how can I do that too?
There are good solid reasons for all this stuff, reasons totally driven by business models and IP ownership and tech of the time.
It wasn't accidental. It wasn't a blip. It was all for very good reasons, all of which you just skipped over with your rash assumption that it was a glitch of late-1990s design cosmetics.
> It's almost like some tiny extremist faction has gained control of Windows
This has been the case for a while. I worked on the Windows Desktop Experience Team from Win7 to Win10. Starting around Win8, the designers had full control, and, most crucially, essentially none of the designers use Windows.
I spent far too many years of my career sitting in conference rooms explaining to the newest designer (because they seem to rotate every 6-18 months) with a shiny Macbook why various ideas had been tried and failed in usability studies because our users want X, Y, and Z.
Sometimes, the "well, if you really want this it will take N dev-years" approach got avoided things for a while, but just as often we were explicitly overruled. I fought passionately against things like the all-white title bars that made it impossible to tell active and inactive windows apart (was that Win10 or Win8? Either way user feedback was so strong that that got reverted in the very next update), the Edge title bar having no empty space on top so if your window hung off the right side and you opened too many tabs you could not move it, and so on. Others on my team fought battles against removing the Start button in Win8, trying to get section labels added to the Win8 Start Screen so it was obvious that you could scroll between them, and so on. In the end, the designers get what they want, the engineers who say "yes we can do that" get promoted, and those of us who argued most strongly for the users burnt out, retired, or left the team.
I probably still know a number of people on that team, I consider them friends and smart people, but after trying out Win11 in a VM I really have an urge to sit down with some of them and ask what the heck happened. For now, this is the first consumer Windows release since ME that I haven't switched to right at release, and until they give me back my side taskbar I'm not switching.
If Parc had failed at dissemination we'd all still be using the command line.
The windows/icons/mouse/menus paradigm disseminated just fine. It wasn't necessarily completely original - see also, Mother of All Demos - but they showed it could be implemented in real systems with real usability benefits.
Laser printers and networking also disseminated just fine.
What didn't disseminate was the development environment. That's not necessarily a surprise, because an environment that delights creative people with PhDs isn't going to translate well to the general public.
Ultimately Smalltalk, MESA, etc were like concept cars. You could admire them and learn from them, but they were never going to be viable as mainstream products.
Windows and Mac filled the gap by producing dumbed-down and overcomplicated versions of the original vision, which - most importantly - happened to be much cheaper.
Xerox get a lot of criticism for missing the potential, but it's easy to forget that in the 70s word processors typically cost five figures, and minicomputers equivalent to the Dorado cost six figures.
Maybe the future would have been different if someone had said "We need to take these ideas and work out how to build them as cheaply as possible."
But realistically commodified computing in the 70s ran on Z80s and still cost five figures for a business system with a hard drive.
The technology to make a cheaper version didn't exist until the 80s.
The problem since Parc is more that there has been no equivalent concentration of smart, creative, playful people. Technology culture changed from playful exploration to hustle and "engagement", there's less greenfield space to explore, and - IMO - there are very few people in the industry who have the raw IQ and the creativity to create a modern equivalent.
It's pretty much the opposite problem. Instead of exploring tech without obvious business goals, business goals are so tightly enforced it's very hard to imagine what would happen if completely open exploration was allowed again.
One interesting argument I came across recently is that most of the great science and innovation of the world has been produced by institutionally independent scientists and inventors:
--------------------------
The university as an institution has approached irrelevance several times in its history. Most of the new learning of the Renaissance, the great revival of classical studies, took place amongst small groups of learned men and leisured aristocrats outside the universities. They formed "academies" and learned societies in which they could discuss the literature and philosophy that had been rediscovered, not in the universities, but by men of letters like Petrarch and by wealthy collectors of manuscripts like Cosimo di Medici. After a century or so, the classics were finally co-opted by the universities, where they soon ceased to be the province of readers for whom they contained living wisdom, and fell into the hands of the gerund-grinders.
Early natural science was burgeoning at about the time the classics started to be sapped of life by the university, and for a couple of blessed centuries the sciences flourished in similar learned cenacles, like the Accademia dei Lincei or the Royal Society, while the inmates of the colleges considered such mere mechanical activities unworthy to soil their dainty fingers. Galileo, Boyle, Huyghens, etc., were not university men; Newton's university appointment was in mathematics, not the natural sciences. As late as the early nineteenth century, most scientists were not of the university, but operated outside it, e.g., Rumford, Davy, or Faraday. While there are isolated examples here and there of university instruction in the sciences as early as the late 17th century (e.g., Barchusen at Utrecht, Boerhaave at Leyden), laboratory education in physics and chemistry at universities did not really gain acceptance until after 1850. Independent scientist-inventors, e.g., Lammot du Pont, the Maxim brothers, Edison, or Tesla, continued to be dominant in this country until World War I gave the first government funding to institutionalized research. (source: http://unqualified-reservations.blogspot.com/2007/07/my-navr...)
-----------------------------------
You can also add to the list above Darwin, the Wright Brothers, and Einstein.
Wholesale government funding of research worked well at first, as it essentially put the independent thinkers of old on steroids. But over time, grant money started going to a new generation of scientists whose main ability was to work the government grant system. By now, the modern university has essentially become a government bureaucracy where top scientists spend most of their time hunting for grants and doing paperwork. Worse, because of high taxes and the high cost of living in high-IQ cities, the gentleman scientist is going extinct, thus cutting off the flow of new innovation.
What do people think of this argument? There are thousands of professors researching computer science, and tens of millions in grant money going into the field. But does anyone know what they are working on? Has anything useful come out of the computer science departments in the past twenty years? (I'm asking seriously, not rhetorically.) Will modern independent scientists like Trevor Blackwell and Jeff Hawkins by themselves outproduce all the computer science professors in the country combined?
I realize that it's an aside in the article but I wanted to contradict something:
> What’s less well understood is that Apple’s board and investors were absolutely right to fire Jobs. Early in his career, he was petulant, mean, and destructive
This idea (Jobs needed to be away from Apple to improve himself) gets mentioned a lot but I think it's nonsense.
First: Jobs was petulant, mean and destructive to people he disagreed with – for his whole life. He never dropped this trait. It might have appeared that he mellowed but he really just left an Apple that he no longer controlled and started another company where he had full control (he owned Pixar as well as NeXT but never exerted full control there).
Second: Was Apple really right to fire Jobs? In his first 7 years at Apple, Jobs oversaw (not designed or engineered) the only successful product lines in Apple's first 20 years: the Apple 2, the Mac, and the LaserWriter. The latter two happened against the wishes of the board, who only wanted to focus on the Apple 2. In the 12 years after Jobs left, Apple never launched another successful hardware product line; it merely upgraded the existing products (Mac II, PPC Mac) or pursued unsuccessful ideas like the Newton. Apple's R&D in the early 90s in particular was completely chaotic and directionless, sinking billions into Pink, Taligent, OpenDoc, CHRP, and other doomed initiatives.
Jobs didn't need to leave Apple to fix himself. He left Apple because he disagreed with everyone (in hindsight: probably rightfully so) and he couldn't fire them. When he returned, he had the authority to fire everyone he disliked (and he did).
As for the comparison to Larry Page – I don't think they were as similar as the article implies. Jobs – for better and worse – was his own special brand of crazy.
From the very proper English lady across the table from me at a supper at Douglas Adams' house: "You Americans have the best high school education in the world -- what a pity you have to go to college to get it!"
I think this reads too much into it, and into the specific, and very different, views of people like Alan Kay and Richard Gabriel. Alan was a visionary who actually thought a lot about making programming and computing accessible - even to children. In the Lisp/AI community, SOME were working on similar things (Minsky influenced LOGO, for example). Richard Gabriel was running a Lisp vendor, which addressed the UNIX market with a higher-end Lisp development and delivery tool: Lucid Common Lisp. You'd shell out $20k, and often much more, for a machine and an LCL license. Customers were wealthy companies and government - the usual Lisp/AI customers, who also wanted to deploy stuff efficiently.
> See the Lisp community practiced the Right Thing software philosophy which was also know as "The MIT Approach" and they were also known as "LISP Hackers".
A typical mistake is to believe that there is a single homogeneous Lisp community, a single approach or a single philosophy. In fact the Lisp community was and is extremely diverse. If you look at the LISP hackers, their approach wasn't actually to do the 'right thing' (whatever that is), but to tackle interesting problems and have fun solving them. The Lisp hackers at MIT (and other places like Stanford) were working for government labs swimming in money, and people like Marvin Minsky provided a fantastic playground for them - which then clashed with the 'real world' when DARPA wanted to commercialize the research results it funded, to move the technology into the field of military usage (also doing technology transfer into other application areas like civilian logistics). If you've ever looked at the Lisp Machine code, you see that it is full of interesting ideas, but the actual implementation is extremely diverse and grown over time. Often complex, under-documented, sketchy - not surprising, since much of it was research prototypes and only some was developed for actual commercial markets. The 'MIT Approach' was creating baroque designs. Is it the 'right thing' to have a language standard telling you how to print integers as Roman numerals?
Thus 'the right thing' might not be what you think - I think it is more 'image' than reality.
> The destiny of computers is to become interactive intellectual amplifiers for everyone in the world pervasively networked worldwide
That was Alan Kay's vision, not the vision of the Lisp community. Kay's vision was personal computing - and not the crippled version of Apple, IBM and others. Much of the Lisp community was working on AI and related fields, which had very different visions, and Lisp for them was a tool - one they loved or hated. Lisp/AI developers think of it as 'AI assembler' - a low-level language for implementing much higher-level languages ( https://en.wikipedia.org/wiki/Fifth-generation_programming_l... ). When no effective systems were available to serve as powerful development environments for small and medium-sized research groups, they invented their own networked / interactive development systems using the technology they knew best: Lisp. They hacked up development environments and even operating systems. But it was not necessary to keep them once similar platforms were available on the market. With Lucid CL one could develop and deploy a complex Lisp application on a UNIX workstation without being bound to a Lisp Machine - which was still more expensive, used special hardware/software and was less general as a computing platform. Lucid CL was quite successful in its niche for a while - but Gabriel then tried to make that technology slightly more mainstream by addressing C++ developers with a sophisticated development environment - sophisticated, and expensive. This tool ended up sinking the company. But, anyway, much of the commercial AI development moved to C++ - for example most of the image/signal processing stuff.
Parts of the Lisp community shared different parts of the Kay vision: OOP as basic computing paradigm (Flavors, Object Lisp, CLOS, ...) , accessible programming (LOGO as an educational Lisp dialect), intellectual amplifiers (AI assistants) - but where Kay developed an integrated vision (Smalltalk and especially Smalltalk 80), the Lisp community was walking in all directions at the same time and this created literally hundreds of different implementations. Simple languages like Scheme were implemented a zillion times - but only sometimes with an environment like that of Smalltalk 80.
The Lisp community addressed both medium- and high-end needs. Something like Interlisp-D (developed right next to Alan Kay - but as a development tool for Lisp/AI software, not as programming for children and the like) was a very unique programming tool, but its user base wasn't large and sat more towards the higher end of development - most of it still in AI. There was no attempt to make that thing popular in a wider way, for example by selling it to a larger audience. It was eventually commercialized, but Xerox quit the market at the first signs of the AI winter in the 80s. Its actual and practical impact was and is also very limited, since only very few living developers have really seen it, and almost no one has an idea what to do with it or even how to use it. It's basically lost art. I saw these systems at the end of the 80s, when they were on their way out.
I doubt that out of a hundred authors of advocacy articles even one has used something like Interlisp-D or Lucid CL to develop or ship software. I know only very few people who have actually even started it up, and far fewer have seen it on a Xerox Lisp Machine. So much of this is based on some old people telling stories about it, and very few have ever checked how much of what they hear is actually true and how useful that stuff actually is. One reason for it: it's no longer available.
The 'worse is better' paper was slightly misaddressed at the Lisp users - since they were not after the operating system and base development tool market (like UNIX and C were) and were not married to a particular system or environment. They used Lisp on mainframes in the 60s/70s, on minicomputers in the 70s/80s, on personal workstations (even developing their own) in the late 70s / 80s, and on personal computers from the 80s onwards - unfortunately Lisp never really arrived on mobile systems, though it participated in an early attempt (transferring a lot of technology to the early Apple Newton projects - or whatever it was called before it was brought to market). The main Lisp influence on what we see as web technology was the early influence on JavaScript.
A related thought: I sometimes think how cool it would be if some of the empowering ideas from the Smalltalk and Lisp communities (think Xerox, Symbolics, etc.) could push personal and end-user computing forward.
HCI seems generally stagnant, or at least its real-world influence does. Apple managed to extract and distill the UX portions, but their products lack the empowerment aspect. Linux is kind of the opposite of that.
I feel like there are so many problems with end-user computing, specifically because every program defines its own little world. Nothing is integrated well; most things are incredibly hard and shaky to integrate and extend. So much time and effort is wasted because of lock-in effects and feature bloat.
Also we’re missing out on large portions of users solving their own problems, bringing in their expertise and creativity to computing instead of their frustrations.
I'm responding to my own post in order to answer already-posted objections as well as to flesh out some ideas.
The X-Windows partisans -- particularly those trumpeting the use of Linux in the 1990s -- made predictions about which kind of graphical user interface would see more adoption, more innovation, and achieve greater ease of use. These predictions were consequent to the design principles of X-Windows. My point is that we should see if those predictions came true and, if not, how far they diverged from reality.
One of the design principles of X-Windows was the classic Unix tenet of "mechanism, not policy." This principle was most visible in the separation of the window system from the window manager and optional desktop manager. The prediction consequent to this principle was that this separation -- combined with the healing, cleansing power of open source -- would cause a thousand flowers to bloom. Unconstrained by the diktats of The Man, neck-bearded GNU hippies would unleash innumerable window managers and desktop managers showcasing a dizzying number and kind of user interface enhancements. These ideas would compete and combine in Darwinian frenzy faster than the suits at Redmond or Cupertino could cope with.
What actually happened? For the past fifteen years, the greater number of Linux users stuck to programs that aped the interface of Windows 95 or NeXT. This is especially hilarious in the case of Windows because Windows 95 was loudly reviled by the Slashdot crowd on release. "You have to press 'Start' to shut down? M$ is so stupid, LOL!" Then they paid Microsoft the ultimate compliment by implementing Start buttons, task bars, and even CUA keyboard shortcuts on Gnome and KDE.
When people are free to do as they please, they mostly imitate each other. And it's not just the so-called 'mundanes,' either; most nerds are no different.
Aside from Gnome, KDE, and perhaps WindowMaker, other user interfaces for X-Windows have remained in the minority. That includes wm2, ratpoison, xmonad, Symfony, and so forth. The Shell for Gnome3 came very late in the game and has received "mixed reviews" to put it charitably.
More to the point, if the prediction were true, then user-interface innovation should have been immediate, obvious, and embraced by nearly all. Didn't happen.
Now look at Microsoft and Apple. Their user interfaces were never as configurable as what Linux had. Closed source. Little separation between mechanism and policy. Everything was tightly coupled. And in 2003, RedHat advised general users to stick to Microsoft Windows. Hardy har har.
Microsoft and Apple understand that the purpose of a graphical user interface is not just to put pixels on a screen; it is to make computers easier to use. That requires research into human-computer interaction. That requires establishing interface guidelines and getting people to abide by them. That means that depending on "Don't Tread on Me" hackers may not be the best approach.
Another design principle -- really, a body of design principles -- was the X-Windows approach to network transparent display. It is central to X-Windows. X-Windows may have been the first software subsystem to pioneer this concept. X-Windows partisans declared that this feature put X-Windows far ahead of Microsoft Windows and the Apple Macintosh.
What actually happened? Here we are, living in the future, using applications delivered over the Internet. Almost none of them are delivered using the X-Windows protocol. Instead, they use the Hypertext Transfer Protocol. It's easy to see why. Elsewhere on this page, a commenter noted that Hacker News would crawl if it had to be delivered over X-Windows. What X-Windows is best at, it isn't much good at.
As Don Hopkins has noted (sorry, I don't have a link handy), the ideas behind NeWS, X-Windows' competitor, were so good that they got re-invented. The client side does the rendering and input management. The back end is often written in an entirely different programming language. This bifurcation between the front end and back end is what put off so many people from adopting NeWS, even though Sun provided tools for transitioning from PostScript to C. Here we are with a far clunkier substitute in the form of HTML+CSS+JavaScript because it's still better than making it work on X-Windows.
Once again, the Unix-Haters were right. As usual.
Go ahead. Say that X-Windows addresses problems that web pages can't. Make whatever objections you like. Come back to this: What do we observe? Are cross-network apps delivered more often by X or by HTTP? Again, go ahead and say that X-Windows is better for some critical use case. As funky as HTML+CSS+JavaScript can be, no venture capitalist will throw money at you to port it to GNOME. They haven't been that stupid since the tech bubble.
The centrality of network-transparent display has hindered the adoption of any replacement to X-Windows. The barrier to entry is simply too high. Whenever such a replacement was proposed on Slashdot, the inevitable comments got posted: "Does it do cross-network display? No? That's when I stopped reading." Don't tell me that it's a life-saver when you really need it. That's a tautology. If you "really need something," then, by the law of identity, you really need that thing. The fact remains that the overwhelming majority of people never needed cross-network display implemented in the way that X-Windows implemented it. Microsoft's and Apple's customers were not screaming for this feature.
The final prediction was adoption. The combination of the above features would make X-Windows so compelling, that Microsoft and Apple would have to concede or the time would come for "The Year of Linux on the Desktop."
What do we observe? Microsoft Windows remained the dominant desktop operating system without adopting any of those features. X-Windows clients for operating systems other than Unix never took off. Apple and Microsoft continued to refine their user interfaces. Apple practically came back from the dead, spurned X-Windows in favor of Display PDF, and cemented its reputation for ease-of-use and good design.
Most damning of all, it wasn't just "the mundanes" who voted with their wallets; it was hackers, too. Paul Graham has written about how many of his hacker friends switched from Linux to Macintosh. Go to any hacker or start-up meetup and note how many Macintosh notebooks you see. Think back to the drama that occurred when Jamie Zawinski switched to the Macintosh. Ask yourself: Was it marketing? Was it Steve Jobs' Reality Distortion Field?
X-Windows has failed. The most parsimonious explanation for this failure is that the principles behind its design are false.
It often feels like the mainstream computer industry did a single dip into PARC, took the ideas that could be reasonably implemented on a 1982 Motorola 68k in assembly and Pascal with minimal RAM, and forgot about everything else.
Soon thereafter PCs became obsessed with being either Unix or VMS, and over time that created a harsh immune response to the influence of Smalltalk/Self because it's such a different paradigm.
With word-for-word translations, romanized transliteration, and extensive notes. They also had lots of articles on Japanese culture, advertisements, etc.
I couldn't wait for the next issue to come out. I wonder if there's any equivalent for learning other languages through comics?
Nostalgia for the early days of the Web, combined with increasing distrust of the mega-tech companies and with the pendulum swinging back from centralized to decentralized systems (the pendulum constantly swings back and forth), is the zeitgeist of the moment. The problem is that the money markets (VC, private equity, crypto-hordes) simultaneously reject and passively support this decentralized approach, meaning that the only way to get real traction for an Old School Style Decentralized Internet is to do it outside of the current realm of venture-backed entrepreneurship.
These financial markets support decentralization in that it serves to create new markets of opportunity not subservient to current market leaders, but at the same time reject decentralization in ways that would limit the ability for those new networks to monetize in the ways that would provide incentives for the investors and founders. Since decentralization favors the individual over an authority of any type, it's very difficult to gain support for decentralization from within current markets.
I really hope that we rethink the VC and public-market-based underpinnings of our current tech ecosystem. Many of the problems we're experiencing today, from the steady decay of open, good-quality Internet systems to the erosion of trust in authority, can really be traced to the way that money (and fortunes) are generated today. Walled gardens. Captive audiences. Standards balkanization. A shift from pay-to-own to rental subscription models. Locking down the ability for individuals to service their own systems in order to coerce continued vendor purchasing. All of these trends favor centralization, single-vendor ownership, and market leaders acquiring their competition and entrenching themselves, thereby restricting choice.
I don't want to get too macro, but it's possible even many of the ills of modern day capitalism can be traced to financial markets that motivate bad behavior. But I'll keep it on subject and say that the Internet of the past was the Internet of the past not only because of decentralized technology but because of personal, non-financial interest motivations and technophile centric development that pursued the "what if" instead of the "what's in it for me".
I agree these are rose-tinted glasses, but it can be argued that the Internet would not have developed as it has today if it wasn't for the tech-centric naivete of the early Web innovators.
This is happening because these large tech companies - not just tier-1 unicorns but the FAANGs themselves - have been presenting themselves as paternalistic employers for the past decade. If you provide employees with all the free food, on-campus gyms, and massages-by-appointment they want, then present them with supposedly open cultures of free inquiry, your employees are going to use it. You satiate someone's lower levels of Maslow's hierarchy, and they're going to want to self-actualize. So the freethinkers and activists motivated to join these companies are going to try to engage in activities beyond, well, work.
So the companies reaped what they sowed. They wanted their employees to stay at work and did so by making the office a desirable place not only to work but to live, and now their employees want to bring their lives into work.
This totalistic capture of big tech over employee lifestyles mirrors what these companies are also seeking to do at the marketplace. When a company manages not only the technological needs of their customers, but everything from entertainment, to healthcare, to home security, to shopping, logistics, and everything else, then they set themselves up to be veritable empires. That's why big tech companies are now dealing with every issue including free speech, environmentalism, worker's rights, what activities should be eligible for payment services. That's just the price of being a business that wants to be involved in all facets of human society. It includes politics. You can't be an empire and still pretend to be above it all. Both your customers and your employees are going to treat you as one.
I think software ecosystems are currently infested by corporate interests. I feel kinda funny for saying this. It sounds like some kind of left-wing politics. But software has become increasingly consolidated. There are 5-10 companies in the US that drive most of its development. They have even picked up the most popular open source projects. When you're part of an ecosystem, your objective is to work the system. Making software has lower priority.
It's extremely hard to make money being an independent software developer. There's a lot of noise and money in the market. It's hard to compete with marketing from big companies. You have to work for a corporation or a startup with funding. You have to be part of an ecosystem. When you are accepted to a program like YC, you win an entry to an ecosystem.
Can you be a software artisan nowadays? Can a small team develop and sell software without having an ecosystem behind it? I've seen some examples of this, like Ruby on Rails, 37signals. But they are rare exceptions.
I'm currently working on an open source project. Let's see how long I can be independent for. Check out my project :)
As a UX designer & executive for 30 years, I’ll respond.
I agree that UX/UI is sometimes swayed more by fashion than empirical goals in service of the user. E.g., Jony Ive’s sad obsession with flat (featureless) design in iOS 7 is something we are still paying a price for.
However, the majority of UX research these days goes into things that are explicitly not in service of the user. Facebook doesn’t want you to be happy, they want you to keep using their product. Pay-to-play games don’t want you to have a good life, they want to squeeze micro-transactions from you at every opportunity.
Creating and propagating these manipulative dark patterns is a huge amount of leading-edge UX these days. It works. We know how to manipulate people towards goals that are antithetical to their well-being. The tech industry as a whole makes billions of dollars a day doing exactly this thing.
So yes, the research exists. UX continues to get much better. Just not in service of goals that you (or I, frankly) embrace.
This isn’t the fault of UX as a discipline or UX designers generally. Just like a coder intentionally optimizing a ratio of negative to positive stories to keep you fearful and scrolling, UX designers are driven by the same constraints — the product direction of their parent organizations.
Should UX designers individually, or as a discipline, rise up in revolt? Exactly as much, or as little, as programmers should. We’re all in the same boat. We can choose to serve the manipulators or not. Trouble is, there’s a fuckton of money in this manipulation, and you don’t have to spend much time here on HN to see how motivating that is, and the extent to which individuals will hold their noses and do what they’re told, as long as they’re motivated richly enough.
There seems to be a critical role for consequence-free (or at least consequence-reduced) exploration in creativity.
Richard Feynman's account of being burned out and playing with the physics of a wobbling plate, which ultimately fed into his Nobel Prize work, comes to mind. Recently featured on HN (and something of a perennial favourite): https://news.ycombinator.com/item?id=26931359
You'll find similar observations regarding corporate research and development. See David Hounshell's work on R&D at DuPont: Science and Corporate Strategy: Du Pont R&D 1902-1980, originally published in the 1980s.
Similar stories exist for AT&T's Bell Labs, Xerox PARC, IBM Research, and even Ford Motor Company's research division. In the case of the latter, Henry Ford (in)famously gave his engineers significant discretion, but somewhat-less-than-optimal equipment, the latter apparently a spur to creativity in order to overcome limitations.
Geoffrey West has discussed this regarding the Santa Fe Institute, I believe in the Q&A of this presentation: https://youtube.com/watch?v=w-8sbSPf4ko (At 1:09:00)
West quotes the late Max Perutz's guidelines for organizing research, in full:
Impishly, whenever he was asked whether there are simple guidelines along which to organise research so that it will be highly creative, he would say: no politics, no committees, no reports, no referees, no interviews; just gifted, highly motivated people picked by a few men of good judgment. Certainly not the way research is usually run in our fuzzy democracy but, from a man of great gifts and of extremely good judgment, such a reply is not elitist. It is simply to be expected, for Max had practised it and shown that this recipe is right for those who, in science, want to beat the world by getting the best in the world to beat a path to their door.
As an anti-creative environment, the most effective obstruction is to ensure that everything is not only consequential, but a path to failure. This tends to emerge in oppressive and excessively bureaucratic environments. John Cleese outlines the general parameters toward the end of this presentation: https://youtube.com/watch?v=Pb5oIIPO62g
The start-up world seems to me to increasingly exemplify the high-consequence, no-win environment.
This is like the Grand Unification Theory of Deming for me. I've been thinking about it for years.
This is a classic question. It doesn't come up only in the software realm - in every company there are always high performers, and there is always tension between treating output as the product of a system and focusing on those individuals, rewarding them and seeking them out.
My thoughts are that, fundamentally, the relative performance of your workers is irrelevant to Deming's prescription for quality. Your job is to improve the output of the system as a whole, and by no small coincidence, improving all your processes is also the best way to optimize individual worker output. Continual improvement of process, as it applies to the product, management, and worker alike, will result in higher gains than a focus on individual performance. This is the difference in kind - the Profound Knowledge Deming talked about.
If you think of it like that, it's more about human systems than it is about unleashing individual performance. Traditional management is about motivating the individual; Deming management is about recognizing that most of the output results from the system, and thus, improving the systems surrounding the individual.
The really fun part is how you look at performance. Firstly, you have to use statistics (after all, this is Deming we're talking about). Recognize that the performance of a group of humans is a sample, and as in most natural distributions it will resemble a bell curve. You will always have outliers: mythical people who seem to outperform all others (most people focus all their attention on these). But most of the group will be somewhere within 2 standard deviations of the average—AKA, normal. You will always have a handful of low-performers, too. This appears to match reality in my experience at several companies, as I'm sure it does for you.
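A toy sketch of that statistical point, with invented numbers (purely illustrative, not real data): sample a large team from a bell curve and you reliably get roughly 95% of people within 2 standard deviations of the mean, plus a handful of apparent stars, with nothing special going on.

    # Hypothetical team of 1000 people whose "performance" is normally
    # distributed around 1.0. The outliers appear as a matter of course.
    import random

    random.seed(1)
    mean, sd, n = 1.0, 0.3, 1000
    scores = [random.gauss(mean, sd) for _ in range(n)]

    within = sum(abs(s - mean) <= 2 * sd for s in scores)
    print("within 2 sd of the mean: %.0f%%" % (100 * within / n))      # ~95%
    print("apparent top performer:  %.1fx the average" % (max(scores) / mean))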
Now, keep in mind that Deming never talked about reducing variation in people. He talked about continual improvement, he talked about psychological effects of management on workers, he talked about enabling pride. His message was to understand variation in the workforce, not to abolish it. Reducing variation is the goal for the output (as it should be in software also!), but not for the employee. He never applied those numerical style goals to workers at all, and in fact decried them as inhuman, and more importantly, ineffective. See Principle #10: "Eliminate slogans, exhortations, and targets for the work force asking for zero defects and new levels of productivity." In other words, recognize and understand statistical variation; don't demand a reduction in variation from individuals, improve the system to improve the output. Remember that: I'll come back to it.
To your second point, that software somehow does not have a "common cause" consistent enough to build upon. I also want to correct this a bit. Deming's first principle, "Create constancy of purpose toward improvement of product and service, with the aim to become competitive, stay in business and to provide jobs" applies to absolutely all companies imaginable. This constancy of purpose is not a constancy of performance or constancy of variability, but a shared knowledge and goal that everyone strives for. It's an abstract and unmeasurable concept. This was Deming's only foray into "common cause," though it was very important.
Back to the individual versus the system. Improve the system to improve the output. If ever there was a sentence to sum up Deming, that's it.
Yet you think you want to "unleash your 10x performers." That this is somehow the goal in Software, or in any company for that matter. It's not. And here's why.
Your 10x performers are your outliers: the top end of the bell curve. The instinctual reaction when you get your hands on one or two is that you've gotta hire more people like this, and to reward and bonus and hold them up as pinnacles of achievement. After all, they perform better than everyone, they should be rewarded. And if we reward this kind of performance, we'll motivate other people to achieve more, right?
Wrong. This instinct is incorrect, and Deming based this (quite correct) conclusion on his extensive knowledge of psychology and sociology. Firstly, while you're focusing on one or two high performers, you lose the recognition that your output and your entire company is actually systematic. You might have 50 other programmers in the body of your bell curve, the lot of whom are still collectively producing 5x what your one 10x programmer supposedly outputs, and possibly more due to complex interrelated factors. Yet, by focusing on individual performance, you unintentionally demotivate the great majority of your employees, who are almost by statistical certainty "just average." The result of your efforts at improving and rewarding the performance of your 10x'er might be simply to keep them, or perhaps motivate them into being an 11x'er (big deal). You may motivate one or two people who happen to be motivated by money to improve their performance, but it probably won't last. The main outcome will be a slew of unintentional negative consequences throughout your company, including corporate politics, ladder-climbing, blame, mistrust, fear, and demotivation. Not good. (Side note: this is Corporate America in a nutshell, the result of a century of individually focused management. We've become complacent about it, sadly).
Deming would look at this in many ways. There would be laughter and enlightenment. There has to be a paradigm shift.
First, look at the performance as a statistical anomaly. It will sometimes happen, quite by accident, that two programmers will have random variation in performance month-to-month or project-to-project. These are not unexpected occurrences. Sometimes the right state of mind aligns with the right motivation and personal circumstances to produce extraordinary performance. It has happened to all of us at times, surely. By looking at it this way, you get many realizations: that circumstances impact performance; that individual variation will occur naturally; that you might be rewarding an individual for performance which is an anomaly. This might sound like a small step, but in fact it's the root of many cultural problems and corporate politics, mistrust of management, power games, and infighting.
Second, how are you measuring performance? How do you determine 10x? Deming would scoff at the very idea. His 3rd "Deadly Disease" was "Evaluation by performance, merit rating, or annual review of performance." For similar reasons to the above, these imprecise methods cause more harm than good. They'll cause secrecy, fear, more infighting and game playing in an effort to reach the performance or review goal, instead of the true goal: quality work. Removing these false metrics reduces fear, improves culture, improves collaboration and allows people to be prideful of their work without ulterior motives. This unmeasurable metric is more important than it sounds.
Third, and most importantly, Deming would surely say: ignore the outliers. Focus on improving the system. Focus on the majority of your people, and the whole of your organization to improve quality of the final product. Implement improvements system-wide, remove rewards and quotas and individual expectations and replace them with leadership, training, continual improvement, and knowledge. Deming would tell you that most of the quality of your output is not tied to individual worker performance, but rather the systems of management, technology, and collective improvement that the company has implemented.
The point of this is to improve your entire organization: your average 50 "1x" programmers might double their performance, adding 50x worth of output - far more than you'd gain from hiring one holy-grail 10x programmer, or from unleashing your existing one. That is a simplistic way to think about it, however. The actual improvement is much greater. By focusing on the company as a whole, and focusing on the needs and performance of all employees, you create something far greater: a cohesive culture driven by a desire to do good work as a whole, and produce quality output toward a clear purpose. That should be the holy grail, not "unleashing your 10x performers."
Having 10x performers is surely not a bad thing; they will occur, and you should attempt to hire and keep the best employees you can, of course. But the end of this story gets even better: high performers like one thing above all else, in my experience, and that is working in an environment in which they flourish, are able to take pride in their work, and are able to work with other high performers. By enabling all your programmers to perform at their best, you simultaneously enable your best performers to perform better and flourish as well, which is exactly what they're looking for. Focusing on the whole system does unleash your best performers; and everyone else too! It's counter-intuitive, but that's the lesson here.
The result is positive on all fronts, and not only that, but exponentially better than an individual-reward system due to the intertwined self-reinforcing effects of systematic improvements. Instead of having a handful of 10x workers, you instead build a 10x company which regularly nurtures its employees into becoming them. A 10x culture of performance.
This was the true genius of Deming's perception of the workplace, and of quality. This was never a sticking point for him; it was right there in his perspective if you look for it.
(Apologies for the length; I've been meaning to start a blog, and will sum this and other concepts up and submit to HN at some point...)
In 1916, J.J. Thomson (as in Thomson scattering; discoverer of the electron) said it perfectly [1]:
“If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible result being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate a different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want one kind of research, but, if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it."
One of the problems is that experience is highly dependent on hardware choice and/or distro/software choice in rather unpredictable ways.
Generally I agree with the grandparent: Linux for the desktop is a lot more reliable than for the laptop. The kind of stuff that needs to work on a desktop/server has considerably more testing and polish behind it. With laptops it's about as hit-and-miss as things used to be on the desktop in the late 90s. A surprising number of people just accept some stuff not working or working unreliably on their laptops (some of my mates just "deal with it" - for instance, one has a webcam that simply isn't supported, so he got an external one, and the same for the microphone; another has trouble with external monitors not keeping their config or even crashing the machine sometimes: "it's ok, I don't need to use an external monitor" - he eventually managed to make it work after some research - but I'd rather not have to deal with that sort of thing... etc etc).
Having said that, Apple is so far gone that I'm going to have to move to Linux for my next laptop (Linux is already my main choice on the desktop for a long time).
But the thing is, the way computers work nowadays, the "choose-your-own-OS" model is broken. It's "less broken" for desktop hardware because it moves much more slowly and incrementally than it used to, especially at the interface level, but laptop hardware moves faster and mobile a lot faster. What works is hardware-OS combos with OEM pre-made troubleshooting and tailor-made workarounds (hardware nowadays is very buggy, but the user is sheltered from this fact mainly by kernels and drivers). This is much worse in mobile, btw; people are not expected at all to alter hardware or even connect peripherals beyond strong constraints.
So the situation is that you usually get a machine with something installed that has been tested as a combo, and "it works" even if the "internal components" (hardware, OS sub-services, etc.) don't quite work to spec. You break this link and someone has to do the patching work, which is often "the community", driver/kernel hackers, etc. But this is a lot harder than working with fixed configurations, and stuff keeps breaking, and that's when the end user comes in with some final, hopefully trivial fixes. Or, if the machine is popular enough, "the community" again. But few machine-Linux combos are really popular these days, especially compared to Apple laptops.
TL;DR: server computing is pretty solid, desktop computing is rather solid, laptop computing is a mess, mobile computing is a messy hack. The more things are "integrated" and not expected to be interchangeable, the more likely you are to find hiccups along the way, and the shorter hardware cycles don't help - so it's not a problem with Linux per se (the work behind Linux is amazing in terms of adapting to huge ranges of hardware, even when hardware vendors didn't facilitate things) but a problem with installing and troubleshooting your own OS rather than having the OEM do it, with the flexibility to just change their hardware to make the system work best.
Have you ever heard of the STEPS project? It's a complete OS, with application libraries and a compiler collection, all in 20,000 lines of code. Pretty real if you ask me. Here are the reports (latest first).
http://www.vpri.org/pdf/tr2012001_steps.pdf
http://www.vpri.org/pdf/tr2011004_steps11.pdf
http://www.vpri.org/pdf/tr2010004_steps10.pdf
http://www.vpri.org/pdf/tr2009016_steps09.pdf
http://www.vpri.org/pdf/tr2008004_steps08.pdf
http://www.vpri.org/pdf/tr2007008_steps.pdf