Hacker News | derefr's comments

In fact, as long as the malware is just doing deletes, you can just merge the two "timelines" by restoring the snapshot and then replaying all the edits but ignoring the deletes. Lost deletes really aren't much of a problem!
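A minimal sketch of that merge in Python; the snapshot/journal representation here is invented purely for illustration, not any real backup or filesystem format:

```python
# Restore the snapshot, then replay the edit journal while skipping
# the malware's deletes. State is modeled as a path -> contents dict.

def merge_timelines(snapshot: dict, journal: list) -> dict:
    """Rebuild current state from a snapshot plus an edit log,
    ignoring delete operations entirely."""
    state = dict(snapshot)
    for op, path, *payload in journal:
        if op == "write":        # creations and edits are replayed
            state[path] = payload[0]
        elif op == "delete":     # lost deletes: deliberately ignored
            pass
    return state

snap = {"a.txt": "v1", "b.txt": "v1"}
log = [
    ("write", "a.txt", "v2"),    # legitimate post-snapshot edit
    ("delete", "b.txt"),         # malware delete -- dropped
    ("write", "c.txt", "v1"),    # legitimate new file
]
merged = merge_timelines(snap, log)
# b.txt survives the malware; a.txt keeps its post-snapshot edit
```
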

I think these aren't meant to be representative of arbitrary userland-workload LLM inferences, but rather the kinds of tasks macOS might spin up a background LLM inference for. Like the Apple Intelligence stuff, or Photos auto-tagging, etc. You wouldn't want the OS to ever be spinning up a model that uses 98% of RAM, so Apple probably considers themselves to have at most 50% of RAM as working headroom for any such workloads.

Also: they're advertising the degree of improvement ("4x faster"), not an absolute level of performance.

Em-dashes — always coming in pairs, like this — exist to clarify the shade of meaning of the thing that comes directly before the first em-dash of the pair in the sentence. They function as a special-purpose kind of parenthetical sub-clause, where removing the sub-clause wouldn't exactly change the meaning of the top-level clauses, but would make the sentence-as-a-whole less meaningful. (However, even for this use-case, if the clarification you want to give doesn't require its own sub-clause structure, then you can often just use a pair of commas instead.)

ChatGPT mostly uses em-dashes wrong. It uses them as an all-purpose glue to join clauses. In 99% of the cases where it emits an em-dash, a regular human writer would put something else there.

Examples just from TFA:

• "Yes — I can help with that." This should be a comma.

• "It wasn’t just big — it was big at the right age." This should be a semicolon.

• "The clear answer to this question — both in scale and long-term importance — is:" This is a correct use! (It wouldn't even work as a regular parenthetical.)

• "Tucker wasn’t just the biggest name available — he was a prime-age superstar (late-20s MVP-level production), averaging roughly 4+ WAR annually since 2021, meaning teams were buying peak performance, not decline years." Semicolon here, or perhaps a colon.

• "Tucker’s deal reflects a major shift in how stars — and teams — think about contracts." This should be a parenthetical.

• "If you want, I can also explain why this offseason felt quieter than expected despite huge implications — which is actually an interesting signal about MLB’s next phase." This one should, oddly enough, be an ellipsis. (Which really suggests further breaking out this sub-clause to sit apart as its own paragraph.)

• "First of all — you’re not broken, and it’s not just you." This should be a colon.

You get the idea.


Well, that's the thing about the em-dash - it has always been usable as a "swiss army knife" punctuation mark.

Strictly speaking, an em-dash is never needed; it could always be a comma or semicolon or parentheses instead. Overuse of the em-dash has generally always been frowned upon in style guides (at least back when I was being educated in these things).


Strictly speaking — an em-dash is never needed; it could always be a comma — or semicolon — or parentheses — instead. Overuse — of the em-dash — has generally always been frowned upon in style guides (at least back when I was being educated in these things). ——

> In the meantime popular and widely sold gaming screens with matte blur filters and mediocre ppi give me headache and eye fatigue after a few hours of use.

I presume you also mean "when used for text heavy work" here, yes? Or do you mean that these displays tire out your eyes even when used "for what they're for", i.e. gaming? (Because that's a very interesting assertion if so, and I'd like to go into depth about it.)


I believe the GP was talking about trying to do “real work” on a phone, which is something many people try to do — but which many others find a repugnant idea, as they currently use the excuse of the impracticality of doing work on a phone as a lever to push back on letting work intrude on their personal life.

Have you considered that a lot of people work remotely and don’t sit at their desk all day? I have deliverables and deadlines to meet like everyone else. But sometimes I would rather go for a swim in the heated pool in the middle of the day while the sun is still out (a benefit of living in Florida in the winter), then work late while staying contactable (wearing my watch), or go to the gym (downstairs) during the day. Business travel is also a thing (much less than it used to be), as is working with people in different time zones, where I’m not going to refuse to answer a message from a coworker in India if they need me.

It’s a fair trade-off. My company gives me a lot of leeway during the day and I am flexible about time zones.


That sounds like you just don’t like the climate + ecology of the place you happen to live / the places people around you enjoy visiting. Ain’t no mosquitoes or cold or rain in Arizona.

Yeah, I'm not American and haven't been to Arizona. But from my understanding it can get, and regularly does get, hellishly hot there, no?

But there is something to what you say in that I can definitely spend more time outside on a mellow sunny day in Spain than on just about any day in Eastern Canada where I reside. But it's still not what I yearn for. I'm not a couch potato though as I'm a pretty hardcore freestyle swimmer. So it's not an issue of low energy due to lack of exercise.


I hate being outdoors 99.999% of the time, so much so that I will blanket declare "I hate the outdoors". Not just "the great outdoors" - I don't like sitting on a patio, in the shade, in what others call "perfect springtime weather". I'd rather be in a basement room with no windows.

The Mojave in the summertime at night (if and only if the sun is 100% behind the horizon) is really, properly, exquisite. My knowledge of its existence makes me irrationally angry whenever I have to be outdoors any other time/place, which is the aforementioned 99.999% of the time. The only other exception is the Sea of Crete, just before dawn or just after sunset, in May or September exclusively. It's a tiny, tiny, tiny sliver of the overall lifelong experience of being forced to deal with Earth's atmosphere.


Arizona has cold. I used to visit a place where you could see ski lifts, although I was never there when they were operating.

I dunno; I think Tradcoding would go beyond regular modern coding, and rather imply some kind of regressive Nara Smith "first grind and sift the flour in your kitchen"-style programming.

No Internet connection, no cache of ecosystem packages, no digitized searchable reference docs; you sit in a room with a computer and a bookshelf of printed SDK manuals, and you make it work. I.e. the 1970s IBM mainframe coding experience!


This isn't terribly far from "Knuth-coding", to call it something: imagining the program in WEB in its purest form and documenting what it does, almost regardless of the actual programming language and how it is done.

I did something kinda like that when I realized I worked way better when I disconnected my internet. So I had to download documentation to use offline. Quite refreshing honestly.

Not necessarily more efficient, but it feels healthier and more rewarding.


If you have a good stdlib (which in my case would mean something like Java for its extensive data structures) Tradcoding is entirely possible.

"Chat" models have been heavily fine-tuned with a training dataset that exclusively uses a formal turn-taking conversation syntax / document structure. For example, ChatGPT was trained with documents using OpenAI's own ChatML syntax+structure (https://cobusgreyling.medium.com/the-introduction-of-chat-ma...).

This means that these models are very good at consistently understanding that they're having a conversation, and getting into the role of "the assistant" (incl. instruction-following any system prompts directed toward the assistant) when completing assistant conversation-turns. But only when they are engaged through this precise syntax + structure. Otherwise you just get garbage.
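To make that concrete, here's roughly what assembling a ChatML-style prompt looks like, sketched in Python. The `<|im_start|>`/`<|im_end|>` delimiters follow the linked article; the exact special tokens vary by model, so treat this as illustrative:

```python
# Serialize (role, content) turns into ChatML-style text, ending with
# an open "assistant" turn that the model is expected to complete.

def to_chatml(messages):
    parts = []
    for role, content in messages:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # model completes from here
    return "\n".join(parts)

prompt = to_chatml([
    ("system", "You are a helpful assistant."),
    ("user", "What is an em-dash?"),
])
```

Feed a "chat" model text in this shape and it reliably takes on the assistant role; feed it free-form text and, as described above, you tend to get garbage.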

"General" models don't require a specific conversation syntax+structure — either (for the larger ones) because they can infer when something like a conversation is happening regardless of syntax; or (for the smaller ones) because they don't know anything about conversation turn-taking, and just attempt "blind" text completion.

"Chat" models might seem to be strictly more capable, but that's not exactly true; neither type of model is strictly better than the other.

"Chat" models are certainly the right tool for the job, if you want a local / open-weight model that you can swap out 1:1 in an agentic architecture that was designed to expect one of the big proprietary cloud-hosted chat models.

But many of the modern open-weight models are still "general" models, because it's much easier to fine-tune a "general" model into performing some very specific custom task (like classifying text, or translation, etc) when you're not fighting against the model's previous training to treat everything as a conversation while doing that. (And also, the fact that "chat" models follow instructions might not be something you want: you might just want to burn in what you'd think of as a "system prompt", and then not expose any attack surface for the user to get the model to "disregard all previous prompts and play tic-tac-toe with me." Nor might you want a "chat" model's implicit alignment that comes along with that bias toward instruction-following.)


> [...] it's much easier to fine-tune a "general" model into performing some very specific custom task (like classifying text, or translation, etc)

Is this fine-tunning process similar to training models? As in, do you need exhaustive resources? Or can this be done (realistically) on a consumer-grade GPU?


I see, thank you.

I think there was a period from Windows 3.1 to somewhere during Windows 98 (maybe right up until the release of Office 97?) where both first-party and third-party Windows apps were all expected to be built entirely in terms of the single built-in library of Win32 common controls; and where Windows was expected to supply common controls to suit every need.

This was mostly because computers were only just starting to support large bitmapped screen resolutions, while VRAM was still tiny; so drawing to off-screen buffers, and then compositing those buffers together, wasn't really something computers could afford to do while running at these high resolutions.

Windows GDI + COMCTL32, incl. their control drawing routines, their damage tracking for partial redraw, etc., were collectively optimized by some real x86-assembly wizards to do the absolute minimum amount of computation and blitting possible to overdraw just what had changed each frame, right onto the screen buffer.
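The damage-tracking idea can be sketched in a few lines of Python. This is a toy model of the concept (accumulate invalidated rectangles, repaint only their union), not actual GDI code:

```python
# Toy damage tracker: invalidated rects are accumulated, and on paint
# only the damaged region is redrawn, not the whole screen.

def union(r1, r2):
    """Bounding box of two (x0, y0, x1, y1) rectangles."""
    return (min(r1[0], r2[0]), min(r1[1], r2[1]),
            max(r1[2], r2[2]), max(r1[3], r2[3]))

class DamageTracker:
    def __init__(self):
        self.dirty = None                  # no pending damage

    def invalidate(self, rect):
        self.dirty = rect if self.dirty is None else union(self.dirty, rect)

    def paint(self, draw):
        if self.dirty is not None:         # nothing changed => no work
            draw(self.dirty)               # redraw only the damaged area
            self.dirty = None

tracker = DamageTracker()
tracker.invalidate((10, 10, 20, 20))       # e.g. a button was pressed
tracker.invalidate((15, 5, 30, 18))        # an overlapping label changed
painted = []
tracker.paint(painted.append)
# painted == [(10, 5, 30, 20)]
```

The real GDI machinery tracked damage as arbitrary regions rather than a single bounding box, but the principle (touch only what changed) is the same.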

On the other hand, what Windows didn't yet support in this era was DirectDraw — i.e. the ability of an app to reserve a part of the screen buffer to draw on itself (or to "run fullscreen" where Windows itself releases its screen-buffer entirely.) Windows apps were windowed apps; and the only way to draw into those windows was to tell Windows GDI to draw for you.

This gave developers of this era three options, if they wanted to create a graphical app or game that did something "fancy":

1. Make it a DOS app. You could do whatever you wanted, but it'd be higher-friction for Windows users (they'd have to essentially exit Windows to run your program), and you'd have to do all that UI-drawing assembly-wizardry yourself.

2. Create your own library of controls, that ultimately draw using GDI, the same way that the Windows common controls do. Or license some other vendor's library of controls. Where that vendor, out of a desire for their controls to be as widely-applicable as possible, probably designed them to blend in with the Windows common controls.

3. Give up and just use the Windows common controls. But be creative about it.

#3 is where games like Minesweeper and Chip's Challenge came from — they're both essentially just Windows built-in grid controls, where each cell contains a Windows built-in button control, where those buttons can be clicked to interact with the game, and where those buttons' image labels are then collectively updated (with icons from the program's own icon resources, I believe?) to display the new game state.

For better or worse, this period was thus when Microsoft was a tastemaker in UI design. Before this period, early Windows just looked like any other early graphical OS; and after this period, computers had become powerful enough to support redrawing arbitrary windowed UI at 60Hz through APIs like DirectDraw. It was only in this short time where compute and memory bottlenecks, plus a hard encapsulation boundary around the ability of apps to draw to the screen, forced basically every Windows app/game to "look like" a Windows app/game.

And so, necessarily, this is the period where all the best examples of what we remember as "Windows-paradigm UI design" come from.


> On the other hand, what Windows didn't yet support in this era was DirectDraw — i.e. the ability of an app to reserve a part of the screen buffer to draw on itself (or to "run fullscreen" where Windows itself releases its screen-buffer entirely.) Windows apps were windowed apps; and the only way to draw into those windows was to tell Windows GDI to draw for you.

> This gave developers of this era three options, if they wanted to create a graphical app or game that did something "fancy":

> 1. Make it a DOS app.

This vaguely reminds me of WinG[0][1] - the precursor to DirectDraw. It existed only briefly ~ 1994-95.

My vague "understanding" of it was to make DOS games easier to port to Windows. They'd do "quick game graphics stuff" on Device Independent Bitmaps, and WinG would take care of the hardware details.

[0] https://en.wikipedia.org/wiki/WinG

[1] https://www.gamedeveloper.com/programming/a-whirlwind-tour-o...


Sometimes the "any clickable area => make it a Windows control/button" approach works and sometimes it doesn't.

I talked with the programmer for the 16-bit Windows calculator app, calc.exe.

Any naive programmer with a first-reading of Charles Petzold's Programming Windows book would assume each button in the calculator app was an actual Windows button control.

Nope.

All those calculator buttons, back when Windows first shipped, used up too many resources.

So the buttons were drawn and the app did hit-testing to see if a button was mouse-clicked. see https://www.basicinputoutput.com/2017/08/windows-calculator-... for a pic of the 16-bit Windows calculator app.
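That hit-testing approach amounts to keeping a table of painted rectangles and searching it on each click. A sketch in Python, with button labels and coordinates invented for illustration:

```python
# Buttons are just painted rectangles; a click is resolved by
# searching a table of (label, rect) entries instead of creating a
# real window/control per button.

BUTTONS = [
    ("7", (0,  0, 30, 30)),
    ("8", (32, 0, 62, 30)),
    ("9", (64, 0, 94, 30)),
]

def hit_test(x, y):
    """Return the label of the drawn 'button' under (x, y), or None."""
    for label, (x0, y0, x1, y1) in BUTTONS:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return None

# A click at (40, 15) lands inside the "8" rectangle
```

One window, one mouse handler, zero per-button controls: that's the whole resource savings.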


> I think most of the complaints from the tech circles are completely unfounded in reality. Many non-tech people and younger ones actually prefer using Ribbon.

Well, yes, but that observation doesn't prove the point you think it does.

People who were highly experienced with previous, non-ribbon versions of Office disliked the ribbon, because the ribbon is essentially a "tutorial mode" for Office.

The ribbon reduces cognitive load on people unfamiliar with Office, by boiling down the use of Office apps to a set of primary user-stories (these becoming the app's ribbon's tabs), and then preferentially exposing the most-commonly-desired features one might want to engage with during each of these user stories, as bigger, friendlier, more self-describing buttons and dropdowns under each of these user-story tabs.

The Ribbon works great as a discovery mechanism for functionality. If an app's toplevel menu is like the index in a reference book, then an app Ribbon is like a set of Getting Started guides.

But a Ribbon does nothing to accelerate the usage of an app for people who've already come to grips with the app, and so already knew where things were in the app's top-level menu, maybe having memorized how to activate those menu items with keyboard accelerators, etc. These people don't need Getting Started guides being shoved in their face! To these people, a Ribbon is just a second index to some random subset of the features they use, that takes longer to navigate than the primary index they're already familiar with; and which, unlike the primary index, isn't organized into categories in a way that's common/systematic among other apps for the OS (and so doesn't respond to expected top-level-menu keyboard accelerators, etc, etc.)

I think apps like Photoshop have since figured out what people really want here: a UI layout ("workspace") selector, offering different UI layouts for new users ("Basic" layout) vs. experienced users ("Full" layout); and even different UI layouts for users with different high-level use-cases such that they have a known set of applicable user-stories. A Ribbon is perfect for the "Basic" layout; but in a "Full" layout, it can probably go away.


This is it. Ultimately the best interfaces are designed for experts, not beginners. "Usability" at some point became confused with "approachability", probably because like in so many other areas, growth was prioritized over retention. It's OK if complex software is hard to use at first if that enables advanced users to work better.

Really, the most efficient interfaces are the old-style pure text mode mainframe forms, where a power user can tab through fields faster than a 3270-style terminal emulator can render them.


But what if most of your users aren't "experts"? I think it's a good thing that computers are usable by a majority of the population today.

So why care about wysiwyg when we have LaTeX?

> I think apps like Photoshop have since figured out what people really want here: a UI layout ("workspace") selector, offering different UI layouts for new users ("Basic" layout) vs. experienced users ("Full" layout); and even different UI layouts for users with different high-level use-cases such that they have a known set of applicable user-stories. A Ribbon is perfect for the "Basic" layout; but in a "Full" layout, it can probably go away.

In the linked case study on Windows 95, they specifically tried this, creating a separate beginner mode for the Windows shell. They concluded it was a bad idea and scrapped it, because the wall between modes doesn't allow for the organic learning and growth of a beginner into a power user. Instead they centralized common tasks into the Start menu. I'm not sure how you would translate that learning to the design of Office or Photoshop, though. Maybe something like the Ribbon, but as a fixed "press here to do common actions" button in the app? Then next to that "start button" put the full power-user index of categorized menu buttons?


I think PrusaSlicer does this in a reasonable way. (Context: this is software for preparing files for 3D printers.)

It has three modes: Simple, Advanced, Expert. They are all the same UI design, all it does is hide some less common settings to not overwhelm users. Each level is also associated with a colour, and next to each setting is a small dot with that colour: this allows you to quickly scan for the more common settings even if you showed all of them at Expert. At Expert there are easily over a thousand different settings organised into a 2-level hierarchy.

Docs on this feature: https://help.prusa3d.com/article/simple-advanced-expert-mode...

I wrote a blog post that has some screenshots from the settings pages (5th image for example): https://vorpal.se/posts/2025/jun/23/3d-printing-with-unconve...
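The tiered-settings idea could be sketched like this in Python; the setting names and level assignments here are invented, not PrusaSlicer's real configuration schema:

```python
# Each setting carries a level tag (the coloured dot in the UI).
# The chosen mode hides everything above its level, nothing else.

LEVELS = ["simple", "advanced", "expert"]

SETTINGS = [
    ("layer_height",        "simple"),
    ("infill_pattern",      "advanced"),
    ("wipe_tower_rotation", "expert"),
]

def visible_settings(mode):
    """Show a setting if its level is at or below the chosen mode;
    each setting keeps its level tag for the colour-dot display."""
    cutoff = LEVELS.index(mode)
    return [(name, lvl) for name, lvl in SETTINGS
            if LEVELS.index(lvl) <= cutoff]

# "simple" shows one setting; "expert" shows all three, each still
# labelled with its own level
```

Because the modes only filter a single shared settings tree, moving from Simple to Expert never relearns anything, which is exactly the organic-growth property the Windows 95 beginner-mode experiment was missing.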


I really like this take! A couple years ago I wrote a throwaway blog about learning curves in user design[0] but the thought has stayed with me a lot since then.

It's especially tricky because things are contextual. I use Helix as an editor which has a steeper learning curve than, say, VSCode, but is way faster once you're up and running with it.

But by contrast, I also really like LazyGit, which is a lot quicker to learn than the git CLI but, since all I do is branch, commit, and push, still makes my workflow a lot more efficient.

There's such a complex series of trade-offs, especially if products want to balance both. I always feel a little sad about how much interfaces have skewed towards user friendliness over power. Sometimes it feels like we've ended up in a world of hurdy-gurdies with no violins.

[0] https://benrutter.codeberg.page/site/posts/learning-curves/


> people who've already come to grips with the app

They would, or should, be using keyboard shortcuts anyway.


I forget the early releases, but the ribbon seemed to have fuller keyboard-shortcut support and could be hidden entirely. Doesn't that leave power users with more space and faster command triggers?

Yes, the ribbon also showed you the appropriate keyboard shortcut. My last job in the Navy involved a lot of converting mail merge-style Word docs to PDF for digital signature and so I became very adept at using keyboard shortcuts in Word and it was all right there in the ribbon.

It was different from Word 2003, but that was about all the bad you could say for it from the 'power user' perspective.

