AceJohnny2's comments | Hacker News

You're not getting Copilot on the self-hosted version, which is what the parent was focusing on.

Incorrect: you can use GitHub Connect to sync licenses, which lets you license users under both GHEC and GHES at the cost of a single seat. You will need an entitlement for Copilot, but the fact is you can absolutely get access while storing none of your code on .com.

> It seems unlikely [Apple] would throw away that investment and move to Rust.

Apple has invested in Swift, another high-level language with safety guarantees, which happens to have been created under Chris Lattner, otherwise known for creating LLVM. Swift's huge advantage over Rust for application and system programming is that it supports a stable ABI [1], which Rust famously does not (other than falling back to the C ABI, which degrades its promises).

[1] For more on that topic, I recommend this excellent article: https://faultlore.com/blah/swift-abi/ (side note: the author of that article wrote Rust's std::collections API).
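To make the C-ABI fallback concrete, here is a minimal Rust sketch (the names Point and point_length are made up for illustration): anything crossing such a boundary has to be expressed in C-compatible terms, so generics, trait objects, and lifetimes don't survive the crossing.

    // A C-compatible struct: its memory layout is pinned down with #[repr(C)].
    #[repr(C)]
    pub struct Point {
        pub x: f64,
        pub y: f64,
    }

    // Exported with the C calling convention. Only C-expressible types can
    // appear in the signature: no generics, no trait objects, no lifetimes.
    #[no_mangle]
    pub extern "C" fn point_length(p: Point) -> f64 {
        (p.x * p.x + p.y * p.y).sqrt()
    }

Swift's stable ABI, by contrast, lets resilient types and generic functions cross dynamic-library boundaries without being frozen into a C layout, which is what the linked article digs into.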


Swift does not seem suitable for OS development, at least not as much as C or C++.[0] By default, Swift manages a lot of memory through reference counting, as I understand it, which is not always suitable for OS development.

[0]: Rust, while no longer officially experimental in the Linux kernel, does not yet have any major OS written purely in it.


What matters is what Apple thinks, and officially it is suitable, to the point that this is explicitly written in the documentation.

The practical reality is arguably more important than beliefs. Apple has, as it turns out, invested in trying to make Swift more suitable for kernel and similar development, such as trying to optimize away reference counting where possible, and also by offering Embedded Swift [0], an experimental subset of Swift with significant restrictions on what is allowed in the language. Maybe Embedded Swift will be great in the future, and it is true that Apple investing in it is significant, but it doesn't seem to be there yet.

> Embedded Swift support is available in the Swift development snapshots.

And considering Apple made Embedded Swift, even Apple does not believe that regular Swift is suitable. Meaning that you're undeniably, completely wrong.

[0]: https://github.com/swiftlang/swift-evolution/blob/main/visio...


You show a lack of awareness that ISO C and C++ are also not applicable here, because in those domains the full ISO language standard isn't available either, which is why freestanding implementations are a thing.

But freestanding C is not experimental, unlike Embedded Swift (according to Apple itself). And there are full, large OS kernels written in C and C++.

You continue being undeniably, completely wrong.


Is it really? It always depends on which specific C compiler and target platform we are talking about.

For Apple it suffices that it is fit for purpose for Apple itself; it is experimental for the rest of the world.

I love to be rightly wrong.


There's an allocation-free subset.

https://www.swift.org/get-started/embedded/

Rust's approach is overkill, I think. A lot of reference counting and stuff is just fine in a kernel.
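For what it's worth, here's a minimal userspace Rust sketch (illustration only, not kernel code, and the "device table" is invented) of the shared-ownership pattern being debated: an atomically reference-counted object shared across several workers, which is roughly what kernels hand-roll with per-object refcounts.

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // A shared, immutable "device table" handed out to several workers.
        let devices = Arc::new(vec!["disk0", "eth0", "tty1"]);

        let handles: Vec<_> = (0..3)
            .map(|i| {
                let devices = Arc::clone(&devices); // atomic refcount +1
                thread::spawn(move || println!("worker {i} sees {:?}", devices))
            })
            .collect();

        for h in handles {
            h.join().unwrap(); // each worker's clone is dropped here: refcount -1
        }
        // The table is freed when the last Arc owner goes away.
    }

The real debate is about when the per-object counter and its atomic updates are an acceptable cost, not whether the pattern belongs in a kernel at all.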


But at least a lot of tasks in a kernel would require something other than reference counting, unless it can be guaranteed that the reference counting is optimized away or something, right?

There are some allocations where it doesn't make sense for them to have multiple owners (strong references), but I wouldn't say it makes sense to think about it as being optimized away or not.

Nothing wrong with using reference counting for OS development.

Even kernel development? Do you know of kernels where reference counting is the norm? Please do mention examples.


Is this even a fair question? A common response to pointing out that Oberon and the other Wirth languages were used to write several OS's (using full GC in some cases) is that they don't count, just like Minix doesn't count as proof of microkernels. The prime objection being that they are not large commercial OS's. So, if the only examples allowed are Linux and Windows (and maybe macOS), then 'no', there are no widespread, production-sized OS's that use GC or reference counting.

The big sticking point for me is that for desktop- and server-style computing, hardware capabilities have increased so much that a good GC would be acceptable at the kernel level for most users. The other side of that coin is that OS's would then need to be built on different kernels for large embedded/tablet/low-power/smartphone use cases. I think tech development has benefitted from Linux being used at so many levels.

A push to develop a new breed of OS, with a microkernel and using some sort of 'safe' language, should be on the table for developers. But outside of proprietary military/finance/industrial areas (and a lot of the work in these fields is just using Linux), there doesn't seem to be any movement toward a less monolithic OS situation.


Apple is extending Swift specifically for kernel development.

... yes?

>> Yes, you can use it everywhere. Is that what you consider a success?

> ... yes?

Then perhaps you and I define success differently? As I've said in other comments above, C persisting, or really standing still, is not what I would think of as a winning and vibrant community. And moving embedded and kernel development from what was previously a monoculture to something more diverse could be a big win for developers. My hope is that competition from Rust makes using C better/easier/more productive, but I have my doubts as to whether it will push C to make changes.


Sometimes it's nice to know that something will compile and run reliably far into the future. Wide support for the language and its relatively unchanging nature make C reliable in that respect.

> Sometimes it's nice to know that something will run and compile reliably far into the future.

I'm not sure why you think this is a problem that Rust has? Perhaps you mean something different, but the Rust project compiles the publicly available code on crates.io when preparing a new release.[0] C compilers may assume their new release doesn't break old software; Rust takes that extra step, so we know it won't.

Now, the kernel's Rust code currently uses beta and nightly features which are on track for inclusion in the stable Rust compiler. So, yes, right now compilation is tied to a specific compiler/kernel version pairing, and the code may need to be updated if a feature changes. However, any C compiler used to compile the Linux kernel has to support non-standard GCC extensions that clang only recently adopted. Imagine if the C standards committee chose to change the syntax/semantics of a non-standard extension. Do you not imagine the non-standard extension would be similarly deprecated?

The issue seems to be that Rust tells you what is non-standard, and you're yelling "Look, it's non-standard!". But consider that the kernel in practice uses lots of non-standard C features too, and should the C standard adopt that non-standard behavior in a different form, that likely means having to change lots of code.

[0]: https://github.com/rust-lang/crater
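A hedged illustration of the "Rust tells you what is non-standard" point: nightly-only functionality has to be opted into with an explicit feature gate, and a stable compiler refuses the gate outright, so it's always visible which code depends on unstable behavior. The feature below (never_type) is just a convenient real example, not necessarily one the kernel uses; parse_infallible is made up.

    // Nightly-only: the `!` (never) type usable as an ordinary type.
    // A stable rustc rejects this file outright:
    //   error[E0554]: `#![feature]` may not be used on the stable release channel
    #![feature(never_type)]

    // A "parse" that cannot fail, encoded in the type: the error type is `!`.
    fn parse_infallible(s: &str) -> Result<String, !> {
        Ok(s.to_uppercase())
    }

    fn main() {
        match parse_infallible("hello") {
            Ok(v) => println!("{v}"),
            Err(e) => match e {}, // `!` has no values, so this arm is provably dead
        }
    }

Compare that with a GCC statement expression or inline-assembly extension in kernel C, where nothing in the source itself marks the construct as being outside the ISO standard.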


> My guess is that C will be around long after people will have moved on from Rust to another newfangled alternative.

If only due to the Lindy effect:

https://en.wikipedia.org/wiki/Lindy_effect


Sure, but the reason tends to be that if you want something to run everywhere, C is your best bet.

> I mean I'm considered old here, in my mid 30's

sigh


I feel like a grandpa after reading that comment now.

This year my in-your-face-old-fart moment was realising I had been contributing to Wikimedia projects for longer than some fellow Wikimedians have existed. XD

I got a coat older than that (and in decent nick).

I used to tell the “kids” that I worked with that I have a bowling ball older than them.

I was greeted with blank stares by the kids on my team when they wanted to rewrite an existing program from scratch and I said that it would work about as well as it did for Netscape. Dang whippersnappers.

I own '90s comic books and video games older than most Gen-Z users on HN.

But am I wrong? I am joking, but good jokes have an element of truth...

Depends what you mean by "old". If you mean elderly, then obviously you're not. If you mean "past it", then it might reassure you to know the average expecting mother is in her 30s now (in the UK). Even if you just mean "grown up", recent research [1] on brain development identifies adolescence as typically extending into the early thirties, with (brain) adulthood running from there to the mid sixties, and only then entering the "early aging" stage.

For my part, I'm a lot older than you and don't consider myself old. Indeed, I think prematurely thinking of yourself as old can be a pretty bad mistake, health-wise.

[1] https://www.nature.com/articles/s41467-025-65974-8


FWIW I doubt I'd consider you old were I to know your actual age. I still think I'm quite young

"inside every old person there is a young one wondering what happened."

I assume you're on the younger end

No need to assume, I already told everyone my age

The assumption implies that the median age of people who frequent HN is higher.

It'd be interesting to know the median age of HN commenters.

I guess the median age of YCombinator cohorts is <30?


Aw, I was actually a bit disappointed by how on the nose the usernames were, relative to their postings. Like the "Rust Linux Kernel" by rust_evangelist, "Fixing Lactose Intolerance" by bio_hacker, fixing a 2024 Framework by retro_fix, etc.

Somewhat tangential:

> What I find really interesting about this change on Apple’s part is how it seemingly goes against their own previous human interface guidelines (as pointed out to me by Peter Gassner).

> They have an entire section in their 2005 guidelines titled “Using Symbols in Menus”

2005?? Guidelines evolve.


Here's one from 2020: https://web.archive.org/web/20201027235952/https://developer...

> Use text, not icons, for menu titles. Only menu bar extras use icons to represent menus. See Menu Bar Extras. It’s also not acceptable to use a mixture of text and icons in menu titles.

> Avoid using custom symbols in menus. People are familiar with the standard symbols. Using nonstandard symbols introduces visual clutter and could confuse the user.

The notable thing here is how recent a shift this is, and how longstanding the prior rule was. Navigating the Internet Archive is slow/tedious, but I think the rule/guideline was explicitly called out up until a year or two ago. So it was probably the guideline for ~20 years on macOS and has only just now been changed.


They sure do, and just like biological evolution it is not a principled process. Sometimes evolution results in a worse outcome.

Though styles and capabilities have changed, the same basic principles apply when using a mouse pointer and keyboard.

"Titans", huh?

... anyone here familiar with the RPG Eclipse Phase?


I'm not, but I'm familiar with the mythology of the eastern Mediterranean they're likely getting the word from.

There the Titans did incest, birthed the Olympians, then the youngest of the Titans castrated his dad and took all power for himself, and then Zeus and the Olympians waged a decade-long war against him, which they won.


In Eclipse Phase:

> The acronym TITAN stands for Total Information Tactical Awareness Network. These were a group of highly advanced, self-improving seed Artificial Intelligences (AIs) that are responsible for the catastrophic event known as The Fall.

Someone else has already made the mandatory Torment Nexus quote.


And how do you feel about HDD vendors (and Apple) using giga-/tera- in their strict SI power-of-ten meaning rather than the power-of-two one?

This ship sailed out of view a long time ago; the only GB you'll see that is still base-2 is RAM. And that's only because you literally can't address physical RAM in non-power-of-2 blocks on most architectures.
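For anyone who wants the actual numbers, a throwaway Rust sketch of the gap between the SI (power-of-ten) and binary (power-of-two) units, which is why a "1 TB" drive shows up as roughly 931 "GB" in tools that report binary units:

    fn main() {
        let tb: u64 = 1_000_000_000_000; // 1 TB (SI): 10^12 bytes
        let tib: u64 = 1 << 40;          // 1 TiB (binary): 2^40 bytes
        let gib = (1u64 << 30) as f64;   // 1 GiB = 2^30 bytes

        println!("1 TB  = {tb} bytes");
        println!("1 TiB = {tib} bytes");
        println!("a 1 TB drive is about {:.1} GiB", tb as f64 / gib); // ~931.3
        println!("TiB exceeds TB by {:.1}%", (tib - tb) as f64 / tb as f64 * 100.0); // ~10%
    }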

> And those companies all realized they can make billions more dollars making RAM just for AI datacenter products, and neglect the rest of the market.

I wouldn't ascribe that much intent. More simply, datacenter builders have bought up the entire supply (and likely future production for some time), hence the supply shortfall.

This is a very simple supply-and-demand situation; nothing nefarious about it.


That makes it sound like they are powerless, which is not the case. They don't have to have their capacity fully bought out; they could choose to keep a proportion of capacity for maintaining the existing PC market, which they would do if they thought it would benefit them in the long term.

They’re not doing that, because it benefits them not to.


$20B, 5 years and you can have your own DDR5 fab to print money with.

Jokes aside, if the AI demand actually materializes, somebody will look at the above calculation and say "we're doing it in 12 months" with a completely straight face; incumbents' margin will be the upstart's opportunity.

