Yeah. I used to work as a phone surveyor, the kind you hate. Our software was a terminal connected to a mainframe. I got used to it after a few weeks and was very productive.
Costco Canada vision shops still use a terminal connected to an AS/400 machine, as I noticed while snooping around last month.
In the late 90s I was required to slowly replace dumb terminals with PCs. One of the older ladies taking phone orders was most put out by this, understandably: she was lightning fast on that terminal. She'd never used a PC before (I hit on the idea of using Solitaire to teach her the mouse, which worked amazingly well), and she was never able to reach the same speed on one as she had on her dumb terminal. It's hard to beat the performance of dedicated devices.
While I agree that dedicated devices can be more efficient than Windows-style user interfaces, and even more so than browser-based user interfaces, many people don't use those modern interfaces in efficient ways.
I have observed countless times how people fill in a field, then move their hand to the mouse to move the focus to the next field or button, then move their hand back to the keyboard, instead of just pressing Tab to move the focus. It's painful to watch. Knowing just a few keyboard shortcuts makes filling in forms so much faster.
Things are getting worse, unfortunately. Modern user interfaces, especially web interfaces, are made by people who have no idea about these efficient ways of working, and they make it more and more difficult to use any method other than keyboard -> mouse -> keyboard -> mouse -> ... Tab and Shift-Tab often don't work, or don't work right. You can't expand comboboxes with F4, only with the mouse. You can't type dates, but have to painstakingly select all the parts in inefficient pickers. You can't toggle options with the spacebar. You can't commit with Enter or cancel with Esc.
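The maddening part is that much of this is fixable with a handful of lines of script. A rough sketch of what restoring Enter/Esc handling could look like on a web form (the "orderForm" id is made up, and a real app would hang this off its own form logic):

    // Rough sketch: restoring classic commit/cancel keys on a web form.
    // The "orderForm" id is hypothetical.
    const form = document.getElementById("orderForm") as HTMLFormElement;

    form.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key === "Enter" && !(e.target instanceof HTMLTextAreaElement)) {
        e.preventDefault();
        form.requestSubmit(); // Enter commits, like a dialog's default button
      } else if (e.key === "Escape") {
        form.reset();         // Esc cancels, reverting the fields
      }
    });

Much of the rest comes for free if you stick to native controls: a native <input type="date"> lets you type the date, and a real checkbox toggles with the spacebar. It's the custom div-based widgets that break all of this.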
It's for this reason that I dream of us going back to keyboard-first HCI. I wish the underlying BIOS could easily boot and run multiple operating systems simultaneously, and that there were keys hardwired to the BIOS to switch out of whatever GUI crap you were in to the underlying "master control mode".
I wish we'd made a better correspondence between the GUI and the keys on the keyboard. For example, ESC should always be at the top-left of the keyboard, and every dialog box should have an escape action that always does the same thing (go back/cancel) and is wired to that hardware key. Instead of drop-down menus at the top of the screen, we could have had pop-up menus at the bottom of the screen that positionally correspond to the F1-F12 keys.
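As a toy sketch of that F-key idea in a browser (the handlers here are invented placeholders; F3 = exit and F12 = cancel are the old IBM midrange conventions):

    // Toy sketch: hardwiring F-keys to on-screen actions so the
    // bottom-of-screen menu buttons and the physical keys always match.
    // All three handlers are placeholders.
    const showHelp = () => console.log("help");
    const exitScreen = () => console.log("exit");
    const goBack = () => console.log("cancel/back");

    const fkeyActions: Record<string, () => void> = {
      F1: showHelp,   // F1 = help, everywhere, always
      F3: exitScreen, // F3 = exit (the old 5250 convention)
      F12: goBack,    // F12 = cancel/back
    };

    window.addEventListener("keydown", (e: KeyboardEvent) => {
      const action = fkeyActions[e.key];
      if (action) {
        e.preventDefault(); // keep the browser from grabbing F1, F3, etc.
        action();
      }
    });

The point is the fixed, positional mapping: the same key always means the same thing, no matter which screen you're on.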
I recall reading somewhere that the entire point of Solitaire (at least the original implementation that came with Windows 3.0) was to teach users how to click and drag, so I'm not surprised it was good for teaching your colleague to use a mouse.
An inventory management app was one of my first paid software engineering projects. Sometime in the early 00s I had to rewrite it for Windows because the ancient DOS codebase had a bunch of issues running on then-modern Windows versions. I sat down with the users and watched how they used the DOS version, including the common patterns of keyboard navigation, and then meticulously recreated them in the WinForms version.
For example, much of the time was spent in a search dialog with a textbox on top and a grid of items right below. In the TUI version, all navigation was with arrow keys, and pressing the down arrow in the textbox would move the focus to the first item in the grid. Similarly, if you used the up arrow to scroll through the items in the grid all the way to the top, another press would move the cursor to the textbox. This was not the standard focus behavior for Windows apps, but it was very simple to wire up, and the users were quite happy with the new WinForms version in the end.
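For anyone curious, that wiring amounts to very little code. Here's a rough web-flavored equivalent (the original was WinForms, which isn't shown here; the "searchBox"/"resultGrid" ids and the [tabindex] row markup are made up):

    // Web-flavored sketch of the focus wiring described above.
    // Element ids and row markup are hypothetical.
    const searchBox = document.getElementById("searchBox") as HTMLInputElement;
    const grid = document.getElementById("resultGrid") as HTMLElement;
    const rows = () =>
      Array.from(grid.querySelectorAll<HTMLElement>("[tabindex]"));

    // Down arrow in the textbox jumps to the first grid row.
    searchBox.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key === "ArrowDown") {
        e.preventDefault();
        rows()[0]?.focus();
      }
    });

    // Arrows move within the grid; going up past the top row hands
    // focus back to the textbox, mirroring the old TUI behavior.
    grid.addEventListener("keydown", (e: KeyboardEvent) => {
      if (e.key !== "ArrowUp" && e.key !== "ArrowDown") return;
      e.preventDefault();
      const items = rows();
      const i = items.indexOf(document.activeElement as HTMLElement);
      if (e.key === "ArrowDown" && i < items.length - 1) items[i + 1].focus();
      else if (e.key === "ArrowUp" && i > 0) items[i - 1].focus();
      else if (e.key === "ArrowUp" && i === 0) searchBox.focus();
    });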
The world needs more of this. It is nowadays rare for programmers to sit down with users and observe what they are doing. Instead we have UX designers designing the experience and programmers implementing that.
It is so frustrating that I'm not good enough to create software for myself. Maybe I should just buckle down and start working on that.
I use an iPhone and have found a lot of usability issues. Some apps, such as Stocks, are perhaps not too difficult to recreate in Obj-C. I'm kind of an old-timer, so I prefer Obj-C even though I don't know anything about it.
The sit-down-with-users part seems to be the most crucial one. Sadly, nowadays the developers of such software are often not even on the same continent, and a Zoom call can't substitute for it easily.
In my own little world, I saw this first with mail and news readers. It was fast and simple to read mail and news with pine and tin: The same keystroke patterns, over and over, to peruse and reply to emails and usenet threads.
As the network ebbed and flowed, email too often became unreadable without a GUI, and what was once a good time of learning things on usenet became browsing web forums instead. It sucked. (It still sucks.)
In the greater world, I saw it happen first at auto parts stores.
One day, the person behind the counter would key in make/model/year/engine and requested part in a blur of familiar keystrokes on a dumb terminal. It was very, very fast for someone who was skilled -- and still pretty quick for those who hadn't yet gotten the rhythm of it.
But then, seemingly the next day: The terminals were replaced by PCs with a web browser and a mouse. Rather than a predictable (repeatable!) series of keystrokes to enter to get things done, it was all tedious pointing, clicking, and scrolling.
I saw this at an airport. I took the same plane twice, one year apart; in between, they had replaced the terminal with a web UI. On the first trip, it took the agent (well into her 50s) 15 seconds to find my booking and print my boarding pass. On the second trip, with the web UI, it took four agents teaming up for what felt like a good 5 minutes to do the same thing.
I doubt it; it's probably just running on a regular Power ISA rack-mount server from IBM. Though I guess technically everything on IBM i, aka AS/400, is running on an emulator of sorts.
Nope, we still have an IBM i deployment kicking around at $DAYJOB, and it's running natively on POWER hardware. Way back in the days of the original OS/400 running on AS/400 hardware, IBM had the foresight to have applications compile to MI (Machine Interface) code, which is a bytecode format closer to something like LLVM IR than to JVM or CLR bytecode. When a PGM object is copied or created on an IBM i system, TIMI (Technology Independent Machine Interface) takes the MI code and translates it to a native executable for the underlying platform.
We probably still have a couple of PGM objects kicking around on our modern POWER hardware that were originally compiled on an old AS/400 system, but they run as native 64-bit POWER code like everything else on the machine.
The IBM midrange line gets a lot of undue disgust these days. It's not sexy by any means, sure, but just like anything running on modern-day z/OS, you know that anything you write for it is going to continue to run decades down the line. Well, as long as you limit the amount of stuff running in 'modern' languages, because Java, Node, Python, Ruby, etc. are all going to need upgrades, while anything written in 'native' languages (RPG, COBOL, C/C++, CL) compiles right down to MI and will keep working forever without changes.
In some ways the IBM mainframe line is an amazing piece of engineering. My understanding is that the emulation layers can even reproduce hardware bugs/quirks from specific lines of long-dead equipment, so that ancient code (written with those issues in mind) will still function as expected.
It's funny that they keep renaming it and everyone still calls it AS/400. I remember when they wanted people to call it iSeries but everyone just still used AS/400. I didn't even know about the others you posted and I still use the AS/400 occasionally.