I wrote this book so you can spend a boring weekend writing an operating system from scratch. You don’t have to write it in C - you can use your favorite programming language, like Rust or Zig.
I intentionally made it not UNIX-like and kept only the essential parts. Thinking about how the OS differs from Linux or Windows can also be fun. Designing an OS is like creating your own world—you can make it however you like!
BTW, you might notice some paragraphs feel machine-translated because, to some extent, they are. If you have some time to spare, please send me a PR. The content is written in plain Markdown [1].
If you want PRs, I suggest you set up Semantic Linefeeds* sooner rather than later. Each sentence gets its own line, which really helps avoid merge conflicts.
Unfortunately not. Maybe I'll write another online book like this in English, after building a more practical microkernel-based OS. It's a very interesting topic indeed :D
Is it really more complicated to make a video game than a kernel?
I don't consider myself knowledgeable enough to make a full kernel by myself, but I did manage to make a full 3D video game by myself.
Also I'm not speaking about making a game engine along with the game.
Having a kernel dev job is different than making your own kernel, speaking on a "world builder" perspective.
How does your book compare with the classic "Operating systems design and implementation" by Andrew S. Tanenbaum and Albert S. Woodhull implementing MINIX?
I think this 1,000-line OS book is a good start for beginners before diving into the MINIX book.
The MINIX book describes more practical designs, with a more feature-rich implementation. However, UNIX features such as fork, brk, and tty are not intuitive for beginners. By writing a toy OS first, readers can compare the toy OS with MINIX and understand that UNIX-like is just one of many possible designs. That's an important perspective IMO.
Also, readers can actually implement better algorithms described in the MINIX book. It makes the MINIX book more interesting to read.
This is so much easier. You are implementing printf on page 5 and it can handle formatted output to the screen for integers, hexadecimals, and strings. Minix eventually gets around to the write system call but even then it is actually just a way of sending bytes from a buffer to a file descriptor. To a what? That is a lot more complexity for, in some ways, less functionality. For write, you still have to write printf on top.
The Tanenbaum book is great, but it is a particle physics textbook compared to this OS cookbook.
Hi OP, this looks super cool. I remember hearing about this (https://www.linuxfromscratch.org/) many years ago but have never done it.
Curious, what are the prerequisites for this? Do I have to know how kernels work? How memory management or protection rings work, or how processes are queued? Some of these I'd definitely like to learn about.
LFS is about building a Linux distribution from scratch (i.e. using the Linux kernel.)
The book in question is about how to build your own operating system (i.e. a non-Linux) kernel from scratch.
> We'll implement basic context switching, paging, user mode, a command-line shell, a disk device driver, and file read/write operations in C. Sounds like a lot, however, it's only 1,000 lines of code!
Thank you! If you've written some programs (ideally in C), you're good to go. You might get stuck on some concepts, but you can learn them one by one. IMO, implementing it makes you understand the concepts more deeply, and you learn things you won't notice when you just read textbooks.
Also, because the implementation in this book is very naive, it should stimulate your curiosity.
I thought the written text was very high quality and didn't show many of the usual tells of a non-native writer. Could you share some details of how you used AI tools to help write it?
Copy-and-pasting to ChatGPT. Machine translation (JP to EN) was already high quality before this LLM era, but LLMs do a really great job. They don't translate as is; they understand the context, drop unnecessary expressions (important!), and output a more natural translation.
That said, LLMs are ofc not perfect, especially when the original text (or Japanese itself) is ambiguous. So I've heavily modified the text too. That's why there are some typos :cry:
But seriously, it's not that hard. Change build options to generate 64-bit ELF, replace all 32-bit-wide parts (e.g. uint32_t, lw/sw instructions), and implement a slightly more complicated page table (e.g. Sv48).
Author here. I've made the demo system public [1], but it's written just for me, so it may be painful to set up the same environment.
The mechanism is pretty simple: a Node.js server (running on genuine Linux) listens on tcp:22. Once you connect, it boots a dedicated Firecracker microVM instance and forwards network packets between your SSH client and VM.
Regarding the command history, others (including me) can't see what you type. If they could, that would be a vulnerability.
Author here. I'm surprised to see my hobby project on Hacker News.
I know this kind of stuff sparks the "it's meaningless to rewrite everything (especially Linux) in Rust" debate. I agree 100% that rewriting Linux in Rust (or your favorite language) is just a waste of time and such a project won't live long.
That said, what I'd say here is that it's fun. Really fun. Implementing ABI compatibility requires you to understand under the hood of the kernel. You can learn how printf(3) works. You can learn what happens before main().
Especially in Rust, it's inspiring to explore ways to write a kernel safely and expressively. One of my favorite parts is the implementation of the read system call [1].
Lastly, you can try this OS over SSH. Hope you enjoy :)
> I know this kind of stuff spark the ``it's meaningless to rewrite everything (especially Linux) in Rust'' debate. I agree 100% that rewriting Linux in Rust (or your favorite language) is just a waste of time and such a project won't live long.
Considering that this is exactly how Linux was born (just a hobby project for fun), I wouldn't assume so fast that it's useless. Moreover, you don't need to justify yourself if you just want to have fun!
Also, the Rust community is too nice to openly admit it, but its goal IS to displace C as the main low-level language, especially for the kernel. That is bound to attract a lot of attention.
I don't see why we need to pretend that every hobby has the potential for greatness. Being a hobby is a good enough end to itself.
In fact in this instance I think it's a little disingenuous to quote Linux and say it could happen again. The industry is totally different now. There's much more competition than there was when Linux was released and that competition is much more mature too. Plus the bar for a production-quality kernel is a lot higher than it was when Linux was released.
> In fact in this instance I think it's a little disingenuous to quote Linux and say it could happen again.
Seems apropos to me, given the fact that a Linux ABI compatible hobby project is under discussion - and that everyone here is familiar with the famous Usenet announcement.
> The industry is totally different now. There's much more competition than there was when...
I wonder how you define "competition"? Because there were way more operating systems in use then, and the industry was far more fractured. Fractured in a way that was meaningful - not like today where you can spin up a VM and be productive in short order, thanks to the significant lack of distinguishing difference. That is the really interesting thing about these hobby projects - they introduce possibilities that are either completely ignored by organizations suffering from inertia constraints, or can't be mimicked because they're diametrically opposed to present designs.
> I wonder how you define "competition"? Because there were way more operating systems in use then, and the industry was far more fractured.
There is way more competition now:
+ Linux (countless distributions)
+ FreeBSD
+ OpenBSD
+ NetBSD
+ DragonflyBSD
+ HardenedBSD
+ Darwin
+ Minix 3 (which wasn't free when Linux was released)
+ Illumos
+ OpenIndiana
+ Nexenta OS
+ SmartOS
+ ...and many others based off OpenSolaris / Illumos
This isn't even an exhaustive list of UNIX-like platforms that are new since Linux and free.
Don't conflate standardisation of the industry with a lack of options. More options do exist today and are in use (eg some games consoles run FreeBSD, Netflix uses BSD, Nexenta is used in some enterprise storage solutions, Darwin may not be used in any free capacity but macOS is clearly used heavily by HN readers, and so on and so forth).
Moreover, I've used FreeBSD, Solaris, OpenSolaris, Nexenta and OpenBSD on production systems over the last 10 years (and the list gets more esoteric if we look past 10 years). So just because you might exist in a Linux-only ecosystem it doesn't mean that's the case for the entire IT industry.
Odd, you purportedly know the difference between an OS and a distro - but that doesn't stop you from generating the above nonsense list. Just look at the last 5 items... seriously - all these Illumos derivatives are distinct operating systems in competition with one another? And to a degree that is no different from Netware vs OS/2?! Be serious, that list only further proves my point about the total lack of distinguishing difference in the present offerings - relative to the pre-Linux environment.
> So just because you might exist in a Linux-only...
> but that doesn't stop you from generating the above nonsense list.
lol it's not nonsense. Every item I've listed offers something unique.
> Just look at the last 5 items... seriously - all these Illumos derivatives are distinct operating systems in competition with one another?
Actually the differences in the Illumos derivatives are quite fundamental:
- Nexenta is aimed at being an enterprise storage solution with a Debian userland. That's massively unlike Illumos
- SmartOS is designed to be a smarter virtualisation and containerisation host OS. It even bundles Linux's KVM virtualisation. It's not intended to be run as a desktop platform
- OpenIndiana and Illumos are probably the most similar, however OpenIndiana aims to be more true to OpenSolaris while Illumos aims to be more of a hybrid upstream. So while they're both multi-paradigm (like how Debian can be a desktop or server OS) there are some major differences in their commit tree.
Honest question: how close are you to the Illumos projects? Or are you just an outsider looking in and making assumptions about equivalence based on your experience with Linux? I ask because Illumos forks are much more akin to BSD forks than they are Linux distros.
> Be serous, that list only further proves my point about the total lack of distinguishing difference in the present offerings - relative to the pre-linux environment.
Even if you want to pare down the list to upstreams, you still have half a dozen BSDs (I really hope you at least have enough experience with BSD to realise these aren't just respins like Linux distros), Illumos, Minix, and Darwin. Versus your point in the 90s, which was basically non-free BSD, non-free SysV, non-free QNX, non-free Minix, etc. You still have those options now PLUS the ones I've listed.
This is the problem with your argument. You're assuming the old choices have gone away, which they haven't... well, apart from SCO, and Minix is now free. And in addition we have dozens of interesting new platforms, some of which I've exampled, too.
There's no way on Earth we have less choice now than in the 90s. We might have the industry largely standardising on a subset, but that's an entirely different argument, and it's only representative of the lowest common denominator solving common problems. However, if you look slightly outside the status quo, you'd see there is a lot of variety still happening in the industry. I know this first hand too -- as I've said in my previous post, I've worked in plenty of places that weren't just Linux shops :)
You then go on to characterize components grafted in from another OS as "unique". That is a laughably low bar, making every potential permutation of existing operating system components distinct offerings in competition for mindshare.
> Honest question...
Honest answer: you are running code I've written. I've been using ZFS in production for as long as one possibly can. My home file server has been SmartOS for many years, before that it was FreeBSD.
> I really hope you at least have enough experience with BSD to realise...
I'm keenly aware of their differences and commonality, having tracked the introduction of a terminal mode bug all the way back to a time when "Melbourne getty" was a thing.
> There's no way on Earth we have less choice now than in the 90s.
Well, I suppose that if you are including abandonware and dead product offerings in the list of competing operating systems...
> We might have the industry largely standardising on...
It seems like you are trying really hard to avoid using the more appropriate word for what has happened: "consolidating".
> You then go on to characterize components grafted in from another OS as "unique". That is a laughably low bar, making every potential permutation of existing operating system components distinct offerings in competition for mindshare.
That's literally no different to how Unix evolved. You talk about diversity in the 90s when most of the platforms were being trolled by SCO for containing the same code base. Don't you see the hypocrisy of your argument there?
> Honest answer: you are running code I've written. I've been using ZFS in production for as long as one possibly can. My home file server has been SmartOS for many years, before that it was FreeBSD.
Then it seems very strange that you don't acknowledge the differences in current platforms while proclaiming that Unix was more diversified in the 90s (if it was that clear cut SCO wouldn't have been trolling everyone).
> Well, I suppose that if you are including abandonware and dead product offerings in the list of competing operating systems...
You're overstating things once again. :)
> It seems like you are trying really hard to avoid using the more appropriate word for what has happened: "consolidating".
No. Consolidating means the removal of options. Those options still exist; they're just not as commonly used. Thus the term I used, "standardisation", is more apt.
Look, I do understand the rose tinted glasses you're wearing. There are aspects of 90s era systems administration and development that I miss too. But I still feel you're way off the mark with your opinions here. In some places you are exaggerating a nuanced point to such an extent that as much as I'd love to cheer on for the "good old days of computing", your comments simply don't represent my experiences then nor now.
> That's literally no different to how Unix evolved.
Uh, that is exactly my point. You know that you were just arguing that they were unique operating systems, right? You don't see how the comparison totally undermines that position?
> ...while proclaiming that Unix was more diversified in the 90s...
I'll never understand why anyone bothers fabricating strawmen in thread based mediums - I never made that claim, and that is plainly obvious to anybody who has the ability to scroll up. I have a hard time believing that you could, in good faith, read "there were way more operating systems in use then" and honestly think "oh, he is obviously only talking about Unix!"
> No. Consolidating means the removal of options.
Yes... you remember what the option was selecting from? I'll save you the trouble of manipulating your scroll wheel: "The industry is totally different now. There's much more competition... the bar for a production-quality kernel is a lot higher..."
You still wanna keep saying that stuff from before, about options?
> Uh, that is exactly my point. You know that you were just arguing that they were unique operating systems, right? You don't see how the comparison totally undermines that position?
I never said they were unique operating systems. I said we have more choice now than ever with regards to Unix-like systems, and I said they were distinct platforms, code bases, even upstreams. All of which is true. I have no interest in entering a philosophical debate about what changes constitute a "unique operating system"; however, we can at least have a technical discussion about choice and the makeup of the Unix ecosystem.
> > ...while proclaiming that Unix was more diversified in the 90s...
> I'll never understand why anyone bothers fabricating strawmen in thread based mediums - I never made that claim, and that is plainly obvious to anybody who has the ability to scroll up.
All these comments you make about strawman arguments, and now I can see that the original reason for our disagreement was that you replied arguing a different point to me from the outset!
I was only ever talking about Unix-like platforms (given we're talking about the Linux ABI), and you misunderstood that post to think we were talking about operating systems in the broader sense. In fact, the language I used should have made the scope explicit, so I'm giving you the benefit of the doubt here that you weren't intentionally just pulling an asshole move of subtly changing the scope to win a different argument (which is the very definition of a straw man -- since we're already arguing about who's trying to pull a straw man).
> I have a hard time believing that you could, in good faith, read "there were way more operating systems in use then" and honestly think "oh, he is obviously only talking about Unix!"
I was though. You were the one who replied to my point where I was only talking about Unix so good faith would dictate you continue with the same scope and context as the conversation started out with. So yes, I did assume you were still only talking about Unix. Because that's what the conversation was about prior to you joining it.
-----
If we are to discuss your point, then I agree there are fewer operating systems in general. And I agree that is a great loss. I'm also happy to talk at length about that too... but I don't really see what relevance that has within a discussion about whether yet another POSIX kernel will "become the next Linux", because even if Kerla were to "become the next Linux" it still wouldn't satisfy your argument about diversity. So why make it in the first place?
Like I said before, I'm using a lot of good faith here assuming your misunderstanding was a genuine one and that you weren't intentionally trying to derail the conversation.
> There's much more competition than there was when Linux was released and that competition is much more mature too.
I think it's the opposite. When Linux was released, basically every major tech company had their own variant of Unix. Now almost everyone has migrated to Linux or Windows.
Competing with modern Linux by creating a drop-in replacement for Linux would be a pretty difficult task given that people have been optimizing Linux for decades now, but on the other hand I think there are ways to improve dramatically on the traditional POSIX-style API that Linux mostly adheres to. For a random example, why can't a given process have more than one "current working directory" at a time? That seems like an arbitrary limitation imposed by an implementation detail in early Unix systems, and it causes problems for modularity. There are many other little details like that. I think if Linux is replaced by something eventually, it'll most likely be because the new thing has a cleaner, more powerful, or more generally usable API.
(Kerla is apparently not trying to invent a new API; I'm just saying that's the direction I would recommend to any project with a goal of seriously competing with Linux.)
Follow up to add: another way to compete with Linux is on security. The Linux kernel generally has a pretty good security record I think, but there have been plenty of serious bugs over the years. How many exploitable array-out-of-bounds errors or use-after-free errors remain in the Linux kernel? No one knows. If you can rule those out by using a safer language, that might be compelling to a lot of users who care about security above other concerns.
Of course that's hard to pull off in practice. Linux might have classes of errors that wouldn't exist if it were written in Rust, but even if using Rust eliminates three fourths of the code defects, the end result could be less secure if Linux gets ten or a hundred times as much scrutiny from people actively looking through the code for bugs to fix.
If OpenBSD has taught us anything, it's that when you need to start hardening at that level, C stops being the weakest link and the design of the broader UNIX ABIs becomes the bigger problem. This is why things like SELinux and cgroups exist in Linux -- POSIX ABIs are about as secure as Win32 APIs in Windows, and thus you need to take additional steps to isolate your running processes if you really care about them behaving.
> I think it's the opposite. When Linux was released, basically every major tech company had their own variant of Unix. Now almost everyone has migrated to Linux or Windows.
There weren't many UNIXes that targeted x86 aside from 386BSD. And even those that were available were often expensive. Even Minix wasn't free at that time, which is exactly the reason Linus created his hobby OS.
Now we have a dozen different flavours of BSD, countless Linux distributions and several OpenSolaris spin offs too. Plus many other underdogs used in speciality domains. And that's before even looking at the commercial offerings like QNX, Solaris and macOS.
Just because there are a couple of industry heavyweights that take up the majority of common use cases, don't be fooled into thinking there is a lack of choice, or that the industry is scared to use anything outside of Linux and Windows.
> Competing with modern Linux by creating a drop-in replacement for Linux would be a pretty difficult task given that people have been optimizing Linux for decades now, but on the other hand I think there are ways to improve dramatically on the traditional POSIX-style API that Linux mostly adheres to. For a random example, why can't a given process have more than one "current working directory" at a time? That seems like an arbitrary limitation imposed by an implementation detail in early Unix system, and it causes problems for modularity. There are many other little details like that. I think if Linux is replaced by something eventually, it'll most likely be because the new thing has a cleaner, more powerful, or more generally usable API.
That's not the goal of this nor any other project that aims for ABI compatibility with Linux and there are plenty of research kernels out there if that's your thing.
> (Kerla is apparently not trying to invent a new API; I'm just saying that's the direction I would recommend to anyone project with a goal of seriously competing with Linux.)
The only way to seriously compete with Linux would be to get large commercial backing. And even then, good luck. Linux won not because it is the best but instead because it's "good enough". Same is true with Windows. Any engineer with aspirations of creating a "better kernel" or "better OS" needs to remember that.
Aside from the "rewrite it in Rust" debate, which I was unaware of, there seems to be this pervasive attitude among the HN hivemind that the end goal for all projects, side- or main-, is "launch", and that every idea therefore needs a market analysis to decide its "worth". It ends up being rather silly.
For example, I've written a Mandelbrot visualizer so many times I've lost count. Not because the world needs another poorly written or optimized rainbow-ladybug-simulator, but because it serves as a slightly-non-trivial hello-world. For example, it's the first end-to-end thing I made in Common Lisp. https://git.sr.ht/~amtunlimited/mandelbrot-plot
I've actually been considering adding a toy compiler to my collection of getting-up-to-speed projects, do you mind sharing what you think are good features?
Not OP, but there's also "crafting interpreters". In the second half of the book you emit bytecode for whatever language you designed in the first half, and also implement a VM for said bytecode.
Wow, I really like the implementation of that syscall. I've got my own toy OS project in C, for the M68k, and my `read` is "a little" less clean [0, 1].
Even "rewriting Linux in Rust" is far from a waste if you make it run and if your code quality is good enough for others to join.
If only you could write an entirely new kernel (meant to be better, for some cases at least) compatible with Linux hardware drivers - that would be no waste at all but in fact fantastic.
The most challenging point is, as others said, the lack of compatibility with the Linux driver API. I believe it would be really hard to implement and to keep following changes in Linux.
An idea in my mind is to use Kerla for virtualized environments like KVM where only limited device drivers are needed (virtio for example).
>> The most challenging point is, as others said, the lack of the compatibility with Linux Driver API.
IMHO getting a minimal set that works is good enough to get people on board. This is still nontrivial. But for example, getting FUSE working would be useful. Even if the driver itself was not ABI compatible, it would bring functionality and someone might then aim for ABI compatibility afterward which would open even more doors.
> The most challenging point is, as others said, the lack of the compatibility with Linux Driver API. I believe it would be really hard to implement and keep following changes in Linux.
Especially since, as I understand it, even Linux isn't compatible with the Linux driver API across versions; they can and will change internals at will and just update in-tree drivers to match. Hence some of the difficulty doing things like getting a newer kernel on assorted embedded devices (ex. 99% of phones) because you have to port the vendor's drivers to the new version and both sides changed stuff.
In addition to its huge contribution to OS development in Rust, as a microkernel enthusiast I find it exciting to see a microkernel written in Rust around the "Everything is a URL" principle. Moreover, it can run a good-looking GUI on real machines [1]! I know it's very hard to realize.
Aside from Redox, I'd mention Tock [2], an embedded operating system. It introduces a novel component-isolation scheme using Rust's power. I believe this field is where Rust shines, and I'm looking forward to seeing it in production.
As an embedded service processor OS for a big server rack, Oxide Computer is working on 'Hubris'. They don't seem to have released it yet, but it will be open sourced.
Hubris is targeted at microcontrollers. Much more information (and, importantly, the source code!) will be available in our talk at the Open Source Firmware Conference[0], the abstract for which elaborates on our motivations and use case:
On Hubris and Humility: when "write your own OS" isn't the worst idea
Hubris is a small open-source operating system for deeply-embedded computer systems, such as our server's replacement for the Baseboard Management Controller. Because our BMC replacement uses a lower-complexity microcontroller with region-based memory protection instead of virtual memory, our options were limited. We were unable to find an off-the-shelf option that met our requirements around safety, security, and correctness, so we wrote one.
Hubris provides preemptive multitasking, memory isolation between separately-compiled components, the ability to isolate crashing drivers and restart them without affecting the rest of the system, and flexible inter-component messaging that eliminates the need for most syscalls -- in about 2000 lines of Rust. The Hubris debugger, Humility, allows us to walk up to a running system and inspect the interaction of all tasks, or capture a dump for offline debugging.
However, Hubris may be more interesting for what it doesn't have. There are no operations for creating or destroying tasks at runtime, no dynamic resource allocation, no driver code running in privileged mode, and no C code in the system. This removes, by construction, a lot of the attack surface normally present in similar systems.
This talk will provide an overview of Hubris's design, the structure of a Hubris application, and some highlights of things we learned along the way.
I think it's an FPGA with a custom open-source OpenTitan chip on it (RISC-V). It is not really a traditional BMC; it's more like a service processor that does secure boot and gets you into the OS. It does a few other things, but I think they really want it to have minimum functionality.
This is a very reasonable inference, as it absolutely was when I gave that talk. ;) Very shortly after that talk, however, we came to the realization that the OpenTitan was not going to be what we needed when we needed it, and moved to a Cortex M7-based microcontroller for our service processor (and a separate M33-based microcontroller for our root of trust); Hubris is the operating system that runs on those two MCUs.
Not that it matters all that much, but why a Cortex when you can get RISC-V chips from SiFive (or whoever) that do about the same stuff?
Does that mean the product will not have an FPGA? I kind of liked that idea of updating the hardware.
P.S.: Twitter Spaces about the history of computing are really fun. I'd love to hear more about all the dead computer companies you researched. There is not enough content about computer history out there.
Yeah... a bunch of reasons. We definitely have a couple of FPGAs in the product (Bluespec FTW!), and we anticipate that we will have more over time -- but not for the RoT or SP, or at least not now. The reasons are different for each, but include:
1. ASICs are out of the question for us for a bunch of economic reasons
2. FPGAs by and large do not have a good security story
3. The ones that do (or rather, the one that does) has an entirely proprietary ecosystem and a terrible toolchain -- and seems to rely on security-through-obscurity
4. Once you are away from a softcore, the instruction set is frankly less material than the SoC -- and the RISC-V SoC space is still immature in lots of little ways relative to the Cortex space
5. Frankly, there's a lot to like about ST: good parts, good docs, good eval boards, pretty good availability (!), minimum of proprietary garbage.
We tried really hard (and indeed, I think my colleagues would say that I probably tried a little too hard) to get FPGAs to be viable, and then to get FPGAs + hardcores to be viable, and then multicore hardcores to be viable (one for the SP, one for the RoT). Ultimately, all of these paths proved to be not yet ready. And while we're big believers in FPGAs and RISC-V, we're even bigger believers in our need to ship a product! ;)
I have to say, I feel a bit “dirty” carrying so much unused and legacy code around with Linux, so I like people trying to reinvent the wheel just for the pleasure of a fresh start. For the aesthetics. Even if it’s merely a fantasy and not replacing anything soon, realistically. They are also keeping OS development accessible to new generations of geeks. The unfriendliness of C, the gigantic codebase and seemingly distinct culture make the Linux kernel quite off putting, filtering possible engagement by unfortunate parameters IMO. Novel OS development in Rust takes away at least some of those barriers and some of the gained knowledge may be applicable with the Linux kernel later.
> That said, what I'd say here is that it's fun. Really fun. Implementing ABI compatibility requires you to understand under the hood of the kernel.
> You can learn how printf(3) works. You can learn what happens before main().
Yes! It's such a wonderful experience. This is exactly what I most enjoy doing, just seeing how things work, maybe making my own version. I hope you have lots of fun.
You're reimplementing Linux's kernel-userspace binary interface, right? The system call interface is stable and language agnostic, it's really nice. Some pointers for anyone who'd like to know more:
I honestly think there is real value here for Firecracker-style containers. Running memory-safe code for your whole stack, with a minimal number of virtio devices? Fantastic!
The permissive license would also be interesting in some applications.
I'm new to Rust and am much less far along than you are, as evidenced by this project. I noticed that in the few files I spot-checked, there aren't many (or any) tests. Without any context or opinion, I'm curious whether unit and integration testing are hard for a project like this.
nice project! curious how the rust ownership model works in a monolithic kernel context. are the lifetimes of kernel structures associated with a process "owned" by the process itself somehow?
sortof. i'm just kinda curious if any of rust's cool memory management features work particularly well or have to be bypassed in a monolithic kernel context.
About 3-4 months in total: it took 1.5 months to run a simple Hello World program, 1 month to implement a bunch of system calls, and another month to implement the features essential to run the Dropbear SSH server (tty/pty, bug fixes, …).
I was wondering that too. I'll update it with other examples (x86 and Arm).