Is "SubscriberLink" a way to share paywalled LWN stuff with non-subscribers? Damn, I had no idea. I just presumed they un-paywalled stuff every now and again (like after N days or so).
I can honestly say the resources at LWN have helped my career immensely. The in-depth series on topics like namespaces and io_uring directly allowed me to tackle some hard projects - without those articles I would have steered clear of "magic I don't understand" and the relevant projects would have suffered.
It's definitely worth subscribing if you have the means (and the low cost of subscriptions suggests a lot of readers here have the means).
Just to show support for corbet and his team. He could keep the articles paywalled forever, but is making them free after a week. Becoming a paying member is our way to say thank you for his fair attitude (and for keeping ads to a minimum).
I work in a totally different field of IT; most articles are just "wow, that's interesting" for me, and I still keep my LWN subscription. It is the only one I have.
Indeed. They even provide a mailing list you can subscribe to that will notify you whenever an article goes free. I think you need to have an account with them first, though.
> I just presumed they un-paywalled stuff every now and again (like after N days or so).
All of LWN's paywalled articles become free for everyone to view after a week. During the week that articles are paywalled any subscriber can share currently paywalled content by creating a SubscriberLink.
LWN produces some of the best highly technical Linux and Linux adjacent content on the web.
It's ok to post stories from sites with paywalls that have workarounds.
In comments, it's ok to ask how to read an article and to help other users do so. But please don't post complaints about paywalls. Those are off topic. More here.
It should be pointed out that Corbet himself has come out in support of subscriber links posted here and at other places. That's why they provided that capability. That said, it shouldn't be abused, and I suspect that if it ends up becoming too much they might remove that ability.
I agree with one of the other posters: we should support quality journalism, and LWN definitely is that.
I recommend browsing their index pages, and pause and ponder where else you'd find such high quality, curated, approachable content, in such concentration. You could suggest the kernel's own documentation, but that's more developer-facing.
> As long as the posting of subscriber links in places like this is occasional, I believe it serves as good marketing for LWN - indeed, every now and then, I even do it myself. We just hope that people realize that we run nine feature articles every week, all of which are instantly accessible to LWN subscribers.
> Where is it appropriate to post a subscriber link?
> Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared.
I think posting this on HN does not defeat the purpose of gaining subscribers, but supports them. At least for me it worked. After reading some LWN articles on HN I decided to become a paying subscriber.
Telling people they are not allowed to post such links here actually IMHO hurts LWN.
I only found out about LWN via links shared on HN. I haven't subscribed yet but will consider it once I've landed a job. I don't think that sharing their most interesting articles is hurting them; more like the opposite.
Thanks for the reminder. I was a subscriber for a few years, and then I stopped when I hit a long crunch with no time to read much news. I just subscribed now that I feel I have some leisure time again for this and HN in general.
If they wanted to keep it subscriber only, they could achieve that by requiring users to be logged in.
It's clear that the intention is to allow for sharing (although there is the question of "at what scale?").
Then there's the page itself:
> Welcome to LWN.net
> The following subscription-only content has been made available to you by an LWN subscriber. Thousands of subscribers depend on LWN for the best news from the Linux and free software communities. If you enjoy this article, please consider subscribing to LWN. Thank you for visiting LWN.net!
The people who run the site seem to have less of a problem with sharing its content than you do.
I'm a huge fan of Rust (I write in it for a living), but I still can't help wonder how they are going to tolerate the compile times. A couple of drivers...no problem, but what happens once they get a large number of drivers in Rust? Compile times will definitely start going up considerably. I wonder how much this has been considered, or possibly they have no plans to do anything this extensive?
It might not be a big hurdle. OS developers aren't as sensitive to compile times as app devs. An hour of futzing to bring up a build on a hardware dev board isn't unheard of (although 5-10 minutes would be more typical).
There was a big regression in compile times in the Linux 2.2 to 2.4 time frame (the kernel grew a lot of functionality), but I don't recall it getting much more than a little grumbling.
That was in the time of the Linux From Scratch book and Gentoo stage 0 builds. Some people don't mind overnight compiles if it means they can specify just exactly the right combo of compiler flags to eke out the max performance of their CPU.
I worked on a (legacy now) large-ish code base (mostly C++ '03 with some self-imposed restrictions, but extensive usage of templates) for an 'embedded' system (enterprise NAS). A full build took north of 6h on the reasonably beefy desk-side workstations available to us (we considered distributed builds, but never got to test it, afair). Every once in a while there was some grumbling about the build times, but it never really improved (every three years or so we got better build machines, but by that time the code base had grown as well) - partly, I suspect, due to https://xkcd.com/303/ and partly because we learned to work around it by finding the Makefile in the tree which (likely) gave us a proper build in the shortest possible time for the files we were actually working on (not helped by the most complex Makefile tree I've ever seen).
I very much doubt that such large build times would go over well in the Linux kernel community.
Kernel compiles are a classic benchmark by which Linus has judged performance-related patches. Kernel developers do care (mostly about incremental builds).
Given the disposition of kernel developers and the bespoke needs of drivers, I assume that Rust-written drivers will be neither pulling in dependencies willy-nilly nor using generics all that heavily (but feel free to correct me if I assume wrongly). In that instance I don't think the Rust compilation process would impose any undue burden, especially since Rust can only be used for optional components of the kernel, as it doesn't compile for all the targets that the kernel supports.
The Asahi GPU driver which is currently being developed uses a proc macro to deal with the different versions of firmware that the driver must support (i.e. fields are added/removed in firmware updates, and the driver must support both).
That probably will never compile especially quickly. I'm not sure how sensitive kernel devs are to clean build times. Presumably doing a non-incremental build is relatively rare?
> That probably will never compile especially quickly.
Can you link the code? Proc macros are not inherently slow to compile, but most proc macros pull in the `syn` and `quote` crates for convenience, which are fairly heavyweight (and then most people experience proc macros most commonly via Serde, which is doing a ton of work on its own). In simple cases you can forego most of that work, and some libraries like https://github.com/Manishearth/absolution exist as lightweight alternatives in these cases. Depending on what the Asahi driver needs it could also benefit from something like this (although without seeing the code I'm not sure why it would need proc macros when it sounds like conditional compilation via `cfg` would suffice there).
If you watch the actual talk and check the code at https://github.com/Rust-for-Linux you will see that it's actually pretty heavy on generics and macros, e.g. the trait that defines file system operations (https://github.com/Rust-for-Linux/linux/blob/459035ab65c0ebb...) and other bindings to C code. I was actually surprised to see those, since they certainly don't help with telling people a story that Rust is an easy language. But it's probably the best one can do when having to interface with predefined APIs.
But you are certainly right that external dependencies won't be an issue.
The trait there doesn't look concerning to me - e.g. none of its methods are generic. Its methods do use associated types, but those are pretty trivial to resolve and generate code for: there's only ever one set of associated types per impl, whereas with generic functions there's the potential for combinatorial code explosion depending on usage.
However, I do see a custom `vtable` macro in there, and indeed macros can be a concern for compilation speed if they're not implemented with compiler-friendliness in mind. But other than that, I'd say that file shouldn't represent a problem for the compiler.
For the professional work I have done in Rust, the compile times are workable when using Cargo workspaces and incremental compilation - making sure, when possible, to work directly in a member crate and compile only that for the iteration cycle. That requires a DAG structure for all dependencies, so it does not work everywhere.
Getting CI up to speed is a completely different (awful) topic.
Increased compile times are a reasonable tradeoff for correctness, no? Hardware can always be thrown at the problem. As soon as someone does a C oopsie, you're going to lose whatever time you saved to debugging.
When tightly controlled, Rust compilation times are quite good. Without attention it can bloat tremendously with proc macros and excessive generics (turn them into dyns!), but it's totally manageable. Depending on your corners of the ecosystem, your default experience could be a few orders of magnitude better or worse than someone else's.
I haven't done much experimentation, but I've heard the same. Currently, for new projects I try to design with as many crates as possible in order to a) keep them small and b) make as many 'spokes' (i.e. crates not dependent on other crates) as possible. This helps a ton and is something I wish I'd known before starting some of my large projects. I also wish I had used more trait objects and less generics - the overhead rarely matters in practice.
Traits vs generics: yeah, personally I hope to one day see a language that allows application developers to decide what gets monomorphized, rather than library developers.
It seems like you could usually just synthesize the boxing needed. So why would you ever leave that choice in the library's hands, when the library has no idea how it will be used?
I believe they mean spokes as opposed to "wheels with other spokes", i.e. leaves vs trees.
The best dependency trees are as short and wide as possible. It means fewer changes with each upgrade, fewer chances of conflicts between them, and generally more-parallel compilation and better build caching.
Which is true in every language, but every community seems to need to relearn it from scratch.
I did not expect Plan 9, the Linux kernel, and Rust to intersect like that! Would adding this module mean that I could finally mount a 9P filesystem on my Linux machine without the need for third-party software and FUSE?
You already could, actually! zgrep CONFIG_9P_FS /proc/config.gz to see if your kernel's configured for it, modprobe 9p if it's built as a module. The article's talking about doing an in-kernel 9P _server_ instead; I think this is just so that it can act as an abstraction of an SMB server, because you ought to be able to do a 9P server in userspace with no loss of capabilities...
> I think this is just so that it can act as an abstraction of an SMB server, because you ought to be able to do a 9P server in userspace with no loss of capabilities...
Correct, 9P file servers can live in user space. At some point you have to mount the fs, which on Linux/Unix is done in-kernel.
In Plan 9 the kernel is just a VFS that lets you build arbitrary file trees from collections of mounts and binds. On-disk file systems like FAT and EXT are served via user-space programs which open the disk partition (served as a file by sd(3)) and translate the 9P messages to EXT, FAT, etc. operations. File servers can also provide synthetic files, e.g. an Arduino can serve its I/O bits and/or internal variables over 9P (ninepea on GitHub, written by echoine).
Since it's a VFS all the way down and data moves via two-way pipes mapped to fds, you can boot and pull root from ANY 9P server. It doesn't matter if it's CWFS, HJFS, FAT, EXT, SMB, SSHFS, NFS, etc. The kernel doesn't care as long as it sees files through a 9P connection over whatever two-way medium you are using, including RS-232. You can't touch that level of flexibility with other operating systems.
Question: The author of the 9P server module "created an executor that can run async code in a kernel workqueue"; is this made a lot easier by the fact that Rust doesn't include an async executor as part of the core/stdlib?
The work on Rust in the Linux kernel doesn't include the standard library, and even if it did, having an executor around doesn't prevent you from creating and using a different one.
I think they are pointing out that because the Rust design has decoupled the functionality from the implementation, these kind of replacements are even possible in the first place.
Of course, if you use an alternative executor there are large swaths of the (non-std) async ecosystem that are not available to you (thinking of anything like relying on a tokio::timer::Delay as an example). But as you point out Rust in the Linux kernel already doesn't rely on the std, so they wouldn't even begin to imagine to use an off the shelf crate and expect it to work unmodified for their use case. I just think it is worth pointing out that this is a constraint we currently have.
Thank you, you're too kind. I don't know how much that experience of "make the compiler try to talk to a human in a way they can understand it" translates to other parts of life, though :)
Right, yes: if we had a design that built a single executor into the language or we had a runtime that included it, you wouldn't be able to replace that as easily. Because executors are, at most, a library mechanism, you can always replace them or run more than one.
It doesn't in Rust, but in many other languages there would have been a single blessed async library, and trying to use anything else would have been very painful. The fact that Rust doesn't particularly force the use of its standard runtime is key to why Rust is useful for kernel work.
Yeah, but I think what they're getting at is that Rust could provide a standard executor in the library and also be modular enough that you can use whichever one you want. The questions are orthogonal
How do these Rust kernel modules handle out of bounds access in an array?
In a C module it would be undefined behaviour leading to a kernel panic or weird bugs; in a Rust userspace binary it would be a panic that terminates the process with an error message. What happens here? Would it be a kernel panic as well, or is there a possibility of handling it?
I believe Rust in Linux was made so that it never panics. Here's a patch that removes panicking allocations, for example: https://lore.kernel.org/lkml/20210704202756.29107-1-ojeda@ke... (but I think all other instances of panicking were removed as well).
EDIT: Look at replies. "Linus considers panics acceptable in some cases".
Linus considers it acceptable to panic for some programming mistakes, since after all the C code also blows up if you make some programming mistakes.
One I ran into (in the sense of having read about it, not experienced it): if I flatten a Vec of arrays, it's theoretically possible that the flattened structure has too many items to represent as a machine-word integer. If this happens, the flatten operation will panic.
This can't happen in a language like C (or C++) because their smallest type has size 1, so all the arrays can't possibly be bigger in total size than the amount of memory, that's nonsense. But Rust has two smaller sizes than this. The relevant one here is the Zero Size Type, Rust has no problem with the idea of an array of empty tuples, such an array could have say, a billion empty tuples in it, yet on a 32-bit system it just needs 4 bytes (to remember how many empty tuples are in it).
We can see that flattening a Vec of arrays of empty tuples is a pretty wild thing to choose to do, and nevertheless even if we do it, it only panics when the total amount of empty tuples won't fit in the integers of our native word size. But the function could be asked to do this, and so it might panic.
[ You might be wondering how can there be two sizes smaller than C's single byte types in Rust. The answer is the Never type ! and its pseudonym Infallible. The Never type is Empty, no values of this type are possible, so not only does Rust never need to store this type, it doesn't even need to emit machine code to handle this type - the code could never run. This makes sense in Generic programming, we can write Generic error handling code, but it evaporates when the error type was Infallible ]
So accessing an array out of bounds will hit a runtime check that calls the panic handler, and that panic handler calls BUG(), which means a kernel panic.
Worth noting that one doesn't need to use raw array accessing in Rust nearly so much as in C because you have things like iterators and for..in loops that will ensure correct access.
But I would assume it would be a kernel panic. It definitely won't be UB.
How does the Rust compiler check bounds and ensure nothing bad can happen with a dynamic array that is passed to a Rust module from the kernel, another C module, or a C userspace program?
If an array and a length can be passed to a Rust module, I can just lie about its size and, unless there is a runtime bounds check (which can be slow), I guess bad things can happen too.
The Rust for Linux implementation converts a Rust panic into a Linux kernel BUG macro call. I believe this will expand to an invalid CPU instruction (at least on popular architectures), and if you're a kernel thread you die immediately with the kernel reporting the state where this happened. Obviously in some cases this is fatal and the report might only be seen via say, a serial port hooked up to a device in a test lab or whatever.
So, it's not a kernel panic, but it's extremely bad, which seems appropriate because your code is definitely wrong. If you're not sure whether the index is correct you can use get() or get_mut() to have an Option, which will be None if your index was wrong (or of course you could ask about the length of the array since Rust remembers how long arrays are).
I guess that's quite drastic for a checked out-of-bounds access, when there's no actual memory safety issue and the compiler could simply return an error from the function, or do something else less drastic.
In Rust code, if you're not able to locally reason that an array index is valid, it should be written with .get() and the None case handled appropriately.
It's impossible to claim there's "no actual memory safety issue" when a program's invariants have been broken: all bets are off at that point.
Some people have argued that indexed access is a wart, but it would be quite heavyweight to always have to unwrap an option when accessing a known-good index:
let foo = [0_u8, 1, 2];
foo[0].unwrap(); // really?
Instead, indexing on arrays/vecs is (essentially) sugar for .get(index).unwrap(); if you don't want the unwrap behavior, use get. This is very similar to Python, though Python throws an exception, which obviously isn't available to Rust.
Because in the common case you assume that, if your code is correct, all of your indexing will be in bounds, but for memory safety reasons we need a bug to be reported if memory safety would have been violated. So we allow direct indexing with a panic on out of bounds because it's the most ergonomic for that common case
I've come to believe ergonomics is a siren song here, mostly because recently I've been considering panics as forbidden as memory unsafety is... it's never okay for your embedded system or web server to panic, so don't act like that style is somehow preferable.
If you "know" the index is in bounds, you can get_unchecked. Otherwise you should get. Either would be a sane choice for the index operator.
Because it's convenient and familiar to most programmers. Not providing bounds-checked indexing makes some kinds of code very hard to write.
But note his problem also happens with integer division.
In Rust, a[x] on an array or vec is really, roughly, shorthand for a.get(x).unwrap() (with a different error message).
Likewise, a / b on integers is a kind of shorthand for a.checked_div(b).unwrap().
The thing is, if the index is ever out of bounds, or if the denominator is zero, the program has a bug, 100% of the time. And if you catch a bug with an assertion, there is seldom anything better to do than interrupt the execution (the only thing I can think of is restarting the program or the subsystem). If you continue execution past a programming error, you may corrupt data structures or introduce bizarre, hard-to-debug situations.
Doing a pattern match on a.get(x) doesn't help because if it's ever None (and your program logic expects that x is in bounds) then you are kind of forced to bail.
The downside here is that we aren't catching this bug at compile time. And it's true that sometimes we can rewrite the program to not have an indexing operation, usually using iterators (eliding the bounds check will make the program run faster, too). But in general this is not possible, at least not without bringing formal methods. But that's what tests are for, to ensure the correctness of stuff type errors can't catch.
Now, there are some crates like https://github.com/dtolnay/no-panic or https://github.com/facebookexperimental/MIRAI that will check that your code is panic free. The first one is based on the fact that llvm optimizations can often remove dead code and thus remove the panic from a[x] or a / b - if it doesn't, then compilation fails. The second one employs formal methods to mathematically prove that there is no panic. I guess those techniques will eventually be ported to the kernel even if panics happen differently there (by hooking on the BUG mechanism or whatever)
> It's impossible to claim there's "no actual memory safety issue" when a program's invariants have been broken: all bets are off at that point.
When the underlying _runtime's_ invariants have been broken, not when the program invariants have been broken. i.e. you can recover from almost everything save for a VM error in a VM language like Java, since there's no way for the program to mess up the VM's data structures in a way that they cannot be brought back to a defined state.
Would a kernel module be written as a normal Linux-targeting Rust program, or would it be more like a bare metal target with its own (user-provided) panic handler?
The 9P file protocol, he said, comes from the Plan 9 operating system. The kernel has a 9P client, but no 9P server. There is a 9P server in QEMU that can be used to export host filesystems into a guest. The protocol is simple, Almeida said, defining a set of only ten operations. His 9P server implementation works now, in a read-only mode, and required just over *1,000* lines of code.
... I wonder how many lines of code the Plan9 (C) server takes? (much less I suspect)
From previous example drivers, I'd expect that it's not dissimilar overall, and mostly the differences will come down to style preferences - e.g. maybe the C programmer loves one-line for loops and the Rust programmer chooses the default Rust style, which doesn't do that; or, contrariwise, the C programmer finds it helpful to write out complicated variable types across multiple lines while the Rust programmer is comfortable letting them be inferred.
If the C ends up hand-rolling something Rust just has built in, that adds up pretty quickly. For example, Rust's arrays, strings, etc. know how big they are; C's do not, so you need extra code to re-calculate or to explicitly store and retrieve those sizes. On the other hand, it may be tempting to express a Rust type as having methods defined on it, rather than, as you must in C, using only free functions throughout. There's also a Rust discipline of always deriving common traits, e.g. Clone, Debug, and Eq, when they're appropriate, as a convenience - that's one extra line in typical style.
This Linux 9P server is a 9P-to-Linux-VFS translator. No such software component exists in Plan9.
You can think of the Plan9 kernel as a 9P multiplexer. 9P in, 9P out. The closest matching thing would be e.g. a 9P-to-ext4-in-a-file translator -- but it's always "9P server talking to $FOO" (or "9P server that makes files up on the fly").
Most of the complexity of this Linux 9P server would be in translating between the unrelated worlds of 9P and Linux VFS. A more apt comparison is the size of this 9P server vs an in-kernel NFS server -- they both translate something unrelated to Linux VFS.
More seriously: many different security improvements are filtering into the kernel idea-by-idea, insofar as folks working on kernel security do the actual work of making them fit in with the kernel, and/or coming up with better alternatives.
(Oddly, grsec itself has expressed opposition to Rust in the kernel, I think on the grounds that their custom GCC plugins can't cross the language boundary. Now that there's work on supporting Rust in GCC itself, I'm not sure if that objection still applies.)
Vaguely descriptive attacks that serve no purpose other than religious wars. When I see statements like that, I see another person who thinks it's simply fashionable to hate popular languages because they heard someone like Torvalds, Thompson, or Stallman say it. If you have a fundamental explanation, as they do, then it's at least reasonable, disregarding the irrelevant context of a Linux kernel forum thread. But I rarely see that; instead I find muttering like "cpp is bloated, unstable, overcomplicated crap with ugly syntax, that's why I use rust, or nim, or any other pretty new thing that was conceived 35+ years later".
For instance:
> c++ is so brittle that a gentle breeze could cause a cacophony of soundness issues
Okay, how so, anything about C++ or just C in general?
There's a design principle here, rather than just some sort of hubris.
In languages as powerful as Rust and C++ you can express programs which fall into three categories, two of these aren't very interesting, the programs which are obviously valid, and the programs which are obviously nonsense. We know what to do with these programs, the former should result in emitting correct machine code, the latter should cause a diagnostic (error message). The problem is the last group, programs whose validity is difficult to discern. The bigger and more complicated your software the more likely it may end up in this last group.
In C++ the third category are treated as valid. In the ISO standard this is achieved by having clauses which declare that in certain cases the program is "Ill-formed, no diagnostic required" which means the standard can't tell you what happens, but you won't necessarily get an error message, your compiler may spit out a program that does... something. Maybe it does what you expected, and maybe it doesn't, the standard asks nothing more.
In Rust these are treated as invalid. If you try hard enough (or cheat and Google for one) you can write Rust programs which you can reason through why they should work but the compiler says no. You get an error message explaining why the compiler won't accept this program.
Now, if Rust's compiler doesn't like your program, you can rewrite it so that it's valid. Once you do that, which is often very easy - you're definitely in that first category of correct programs, hooray. The program might well do something you didn't intend, the compiler isn't a mind reader and has no idea that you meant to write "Fnord" in that text output and "Ford" is a typo, but it's definitely a valid Rust program.
In the C++ case we can't tell. Maybe our program is the ravings of a lunatic, the compiler isn't obliged to mention that and we are in blissful ignorance. This also provides little impetus for the standard's authors or compiler vendors to reduce the size of the third category of programs, after all they seem to compile just fine.
Obviously it'd be excellent in theory to completely eliminate the third category. Unfortunately Rice's Theorem says we cannot do that.
Inherently. Because of Rice's theorem you can't just ensure all the valid programs are correctly identified (basically you need to solve the halting problem to pull that off). But the ISO document doesn't allow you to do what Rust does and just reject some of the valid programs because you aren't sure.
Now of course you could build a compiler which rejects valid programs anyway and is thus not complying with the ISO document. Arguably some modes of popular C++ compilers are exactly that today. But now we're straying from the ISO C++ standard. And I'm pretty sure I didn't see a modern C++ compiler which reliably puts all the "No diagnostic required" stuff into the "We generate a diagnostic and reject the program" category even with a flag but if you know of one I'm interested.
So one person said creating the Rust interface is very difficult to do correctly, and the other said Rust on bare metal isn't complete enough for his use case.
I called both of these (and compile times) when people first tried to get Rust into the Linux kernel. Maybe it's better to acknowledge these problems before you start, instead of beating your head against them? I'm no psychic, and I know I wasn't the only one who called these.
> Maybe it's better to acknowledge these problems before you start instead of beating your head?
What's unacknowledged? They're doing exploratory work. That often involves developing under known and expected limitations to understand the problem space and have something to test with when progress is made towards addressing limitations.
The fact that writing unsafe Rust code is harder than unsafe C due to more rules? The fact that Rust has little support for no_std, which is what the kernel is using? There's also the fact that most of the time in drivers (in my limited experience) you almost always want unsafe things, but I assumed no one would think of or admit that one.
This was a pretty weak article? All it basically says is that it can be done, and it adds some speculation that it might make things better. But not much testing has been carried out, and the comparisons to existing code that have been made show the Rust implementation performing worse.
Here in the Linux Plumbers Conference talk regarding the NVMe driver (the same one discussed in the OP), the benchmarks show nearly identical performance to the existing implementation. At the end of the talk the co-author of the NVMe spec (Matthew Wilcox) made a point to stand up and comment with how unexpectedly impressed he is with the performance: "I was not expecting to see these performance numbers, they are amazing." (at this timestamp: https://youtu.be/Xw9pKeJ-4Bw?t=9486 ).
Linux Weekly News focusses heavily on work going on in the Linux kernel, and often looks at features and patch sets which aren't quite ready for mainline yet, but could be an interesting shift in direction for the kernel in future.
The inclusion of Rust in the kernel is one of these ongoing bits of work which isn't ready for mainline yet, but it has still generated a lot of interesting discussion in the past on whether Rust is suitable at all, or whether it's suitable in its default form, or, if it is suitable, what the best way to start introducing it is, or, or, etc...
And this is another step along that (long, convoluted) path - the proposed 9P implementation specifically, to me at least.
If you mostly care about the Linux kernel insofar as what features it can offer you right now then, sure, this might not seem that interesting. And that's entirely fair - there are plenty of other technologies out there that I have that level of interest in.
But for the people who take a deep interest in the upcoming work and future direction of Linux, I think it's quite interesting.
I'd agree that the article probably is a bit too niche, or forward-looking, for a generic tech website/magazine, and doesn't have a huge target audience. But for a focussed site like LWN, I think it fits in pretty well, and matches the audience that LWN targets. It's definitely the kind of article that I keep my subscription for.
Folks, there is a good reason this is subscriber-only content. It takes time and effort (and money!) to keep us up to date. For such a long time.
This is the 5th SubscriberLink over the last week on HN: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Please, folks, subscribe, and let Corbet put the subscriber links here himself. The LWN weekly editions are definitely worth every penny.
Edit: I have been paying for LWN since 2012 and intend to keep paying for as long as there is an LWN.
Is "SubscriberLink" a way to share paywalled LWN stuff with non-subscribers? Damn, I had no idea. I just presumed they un-paywalled stuff every now and again (like after N days or so).
I believe all paywalled content becomes free to read two weeks after initial publication
Ah, ok. I should still probably sub; I tend to read LWN when it pops up here and never really gave it a moment's thought before :O
In my humble opinion it is definitely worth it. The people over at LWN do a tremendous job and should really be supported.
I haven’t even used Linux for a few years now. Still pay for LWN. It’s so worth supporting.
I can honestly say the resources at LWN have helped my career immensely. The in depth series on topics like namespaces and io_uring directly allowed me to tackle some hard projects - without those articles I would have steered clear of "magic i don't understand" and the relevant projects would have suffered.
It's definitely worth subscribing if you have the means (and the low cost of subscriptions suggests a lot of readers here have the means).
I'd agree that for new features and subsystems, recent LWN articles are often the best documentation around.
Highly recommend.
Just to show support for corbet and his team. He could keep the articles paywalled forever, but he makes them free after a week. Becoming a paying member is our way to say thank you for his fair attitude (and for keeping ads to a minimum).
I work in a totally different field of IT, so most articles are just "wow, that's interesting" for me - and I still keep my LWN subscription, and it is the only one I have.
Indeed. They even provide a mailing list you can subscribe to which will notify you whenever an article goes free. I think you need to have an account with them first, though.
It's (usually?) less than that.
https://lwn.net/op/FAQ.lwn
> Must I subscribe to read the Weekly Edition?
> No, the Weekly Edition becomes freely available to all readers one week after its publication.
> I just presumed they un-paywalled stuff every now and again (like after N days or so).
All of LWN's paywalled articles become free for everyone to view after a week. During the week that articles are paywalled any subscriber can share currently paywalled content by creating a SubscriberLink.
LWN produces some of the best highly technical Linux and Linux adjacent content on the web.
Hacker News is not the place to post paywalled content, whether you believe the reasons are valid or not.
Rules are a bit more subtle: https://news.ycombinator.com/newsfaq.html (see: "Are paywalls ok?")
In fact, the only thing the rules call out as not ok with regard to paywalls is complaining about them.
... or posting paywalled links that don't have work-arounds.
It is according to the faq[0]
---
Are paywalls ok?
It's ok to post stories from sites with paywalls that have workarounds.
In comments, it's ok to ask how to read an article and to help other users do so. But please don't post complaints about paywalls. Those are off topic. More here.
---
[0] https://news.ycombinator.com/newsfaq.html
I really hope Substack takes off; the strategy behind its paid content is much more responsible than JavaScript "walls".
It should be pointed out that Corbet himself has come out in support of subscriber links posted here and at other places. That's why they provided that capability. That said, it shouldn't be abused, and I suspect that if it ends up becoming too much they might remove that ability.
I agree with one of the other posters: we should support quality journalism, and LWN definitely is that.
If LWN disappears tomorrow, I'll get my news elsewhere. Thank you for your concerns.
Where else do you get in depth kernel reporting?
Like the other poster said, in the source code.
That's not reporting.
I recommend browsing their index pages, and pause and ponder where else you'd find such high quality, curated, approachable content, in such concentration. You could suggest the kernel's own documentation, but that's more developer-facing.
> This is the 5'th SubscriberLink over the last week on HN
It's the first post to gain any meaningful traction.
> Please, folks, subscribe, and let Corbet put the subscriber links here himself. The LWN weekly editions are definitely worth every penny.
According to corbet it's totally fine if every now and then a subscriber link pops up on HN: https://news.ycombinator.com/item?id=1966033
> As long as the posting of subscriber links in places like this is occasional, I believe it serves as good marketing for LWN - indeed, every now and then, I even do it myself. We just hope that people realize that we run nine feature articles every week, all of which are instantly accessible to LWN subscribers.
The LWN FAQ also considers it no problem: https://lwn.net/op/FAQ.lwn#slinks
> Where is it appropriate to post a subscriber link?
> Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared.
I think posting this on HN does not defeat the purpose of gaining subscribers, but supports them. At least for me it worked. After reading some LWN articles on HN I decided to become a paying subscriber.
Telling people they are not allowed to post such links here actually IMHO hurts LWN.
But they didn't tell them that it's not allowed; they just encouraged people to subscribe to this excellent publication.
"and let Corbet put the subscriber links here himself"
Another, more recent statement from corbet:
https://news.ycombinator.com/item?id=31852477
I only found out about LWN via links shared on HN. I haven't subscribed yet but will consider it once I've landed a job. I don't think that sharing their most interesting articles is hurting them - more like the opposite.
Thanks for the reminder. I was a subscriber for a few years and then I stopped when I hit a long time crunch with no time to read much news. I just subscribed now that I feel I have some leisure time again for this and HN in general.
If they wanted to keep it subscriber only, they could achieve that by requiring users to be logged in.
It's clear that the intention is to allow for sharing (although there is the question of "at what scale?").
Then there's the page itself:
> Welcome to LWN.net
> The following subscription-only content has been made available to you by an LWN subscriber. Thousands of subscribers depend on LWN for the best news from the Linux and free software communities. If you enjoy this article, please consider subscribing to LWN. Thank you for visiting LWN.net!
The people who run the site seem to have less of a problem with sharing its content than you do.
I'm a huge fan of Rust (I write in it for a living), but I still can't help wonder how they are going to tolerate the compile times. A couple of drivers...no problem, but what happens once they get a large number of drivers in Rust? Compile times will definitely start going up considerably. I wonder how much this has been considered, or possibly they have no plans to do anything this extensive?
It might not be a big hurdle. OS developers aren’t as sensitive to compile times as app devs. An hour of futzing to bring up on a hardware dev board isn’t unheard of (although 5-10 mins would be more typical).
There was a big regression in compile times in the Linux 2.2 to 2.4 time frame (the kernel grew a lot of functionality), but I don't recall it getting much more than a little grumbling.
That was in the time of the Linux From Scratch book and Gentoo stage-0 builds. Some people don't mind overnight compiles if it means they can specify just exactly the right combo of compiler flags to eke out the max performance of their CPU.
I worked on a (now legacy) large-ish code base (mostly C++ '03 with some self-imposed restrictions, but extensive usage of templates) for an 'embedded' system (enterprise NAS). A full build took north of 6h on the reasonably beefy desk-side workstations available to us (we considered distributed builds, but never got to test them, AFAIR). Every once in a while there was some grumbling about the build times, but it never really improved (every three years or so we got better build machines, but by that time the code base had grown as well) - partly, I suspect, due to https://xkcd.com/303/ and partly because we learned to work around it by finding the Makefile in the tree which (likely) gave us a proper build in the shortest possible time for the files we were actually working on (not helped by the most complex Makefile tree I've ever seen).
I very much doubt that such large build times would go over well in the Linux kernel community.
Kernel compiles are a classic benchmark by which Linus has judged performance-related patches. Kernel developers do care (mostly about incremental builds).
Given the disposition of kernel developers and the bespoke needs of drivers, I assume that Rust-written drivers will be neither pulling in dependencies willy-nilly nor using generics all that heavily (but feel free to correct me if I assume wrongly). In that case I don't think the Rust compilation process would impose any undue burden, especially since Rust can only be used for optional components of the kernel, as it doesn't compile for all the targets that the kernel supports.
The Asahi GPU driver which is currently being developed uses a proc macro to deal with the different versions of firmware that the driver must support (i.e. fields are added/removed in firmware updates, and the driver must support both).
That probably will never compile especially quickly. I'm not sure how sensitive kernel devs are to clean build times. Presumably doing a non-incremental build is relatively rare?
> That probably will never compile especially quickly.
Can you link the code? Proc macros are not inherently slow to compile, but most proc macros pull in the `syn` and `quote` crates for convenience, which are fairly heavyweight (and then most people experience proc macros most commonly via Serde, which is doing a ton of work on its own). In simple cases you can forego most of that work, and some libraries like https://github.com/Manishearth/absolution exist as lightweight alternatives in these cases. Depending on what the Asahi driver needs it could also benefit from something like this (although without seeing the code I'm not sure why it would need proc macros when it sounds like conditional compilation via `cfg` would suffice there).
A big source of compile time used to be proc macros — the kernel could just disallow those?
If you watch the actual talk and check the code at https://github.com/Rust-for-Linux you will see that it's actually pretty heavy on generics and macros, e.g. for the trait that defines file system operations (https://github.com/Rust-for-Linux/linux/blob/459035ab65c0ebb...) and other bindings to C code. I was actually surprised to see those, since they certainly don't help with telling people a story that Rust is an easy language. But it's probably the best one can do when having to interface with predefined APIs.
But you are certainly right that external dependencies won't be an issue.
The trait there doesn't look concerning to me (e.g. none of its methods are generic (its methods do use associated types, but those are pretty trivial to resolve or generate code for; there's only ever one set of associated types per impl, whereas with generic functions there's the potential for combinatorial code explosion depending on usage)).
However, I do see a custom `vtable` macro in there, and indeed macros can be a concern for compilation speed if they're not implemented with compiler-friendliness in mind. But other than that, I'd say that file shouldn't represent a problem for the compiler.
For the professional work I have done in Rust, the compile times are workable when using Cargo workspaces and incremental compilation - making sure, when possible, to work directly in a member crate and compile only that crate during the iteration cycle. That requires a DAG structure for all dependencies, so it does not work everywhere.
Getting CI up to speed is a completely different (awful) topic.
Increased compile times are a reasonable tradeoff for correctness, no? Hardware can always be thrown at the problem. As soon as someone does a C oopsie you're going to lose whatever time you saved to debugging.
When tightly controlled, Rust compilation times are quite good. Without attention it can bloat tremendously with proc macros and excessive generics (turn them into dyns!), but it's totally manageable. Depending on your corners of the ecosystem, your default experience could be a few orders of magnitude better or worse than someone else's.
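The "turn them into dyns" point can be sketched in a few lines (hypothetical describe_* functions, not from any real codebase): both forms behave identically, but the generic version is monomorphized into a separate copy of machine code per concrete type, while the trait-object version compiles once and dispatches through a vtable.

```rust
use std::fmt::Display;

// Generic: the compiler emits one copy of this function's code
// for every concrete T it is called with (monomorphization).
fn describe_generic<T: Display>(x: T) -> String {
    format!("value: {x}")
}

// Trait object: one copy total; the call to Display::fmt goes
// through a vtable at runtime instead of being specialized.
fn describe_dyn(x: &dyn Display) -> String {
    format!("value: {x}")
}

fn main() {
    assert_eq!(describe_generic(7), "value: 7");
    assert_eq!(describe_generic("hi"), "value: hi");
    assert_eq!(describe_dyn(&7), "value: 7");
    assert_eq!(describe_dyn(&"hi"), "value: hi");
}
```

The tradeoff is exactly the one discussed above: generics cost compile time and code size for the chance of faster, inlinable calls; trait objects cost an indirect call for much less code to compile.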
I haven't done much experimentation, but I've heard the same. Currently, for new projects I try to design with as many crates as possible in order to a) keep them small and b) make as many 'spokes' (i.e. crates not dependent on other crates) as possible. This helps a ton and is something I wish I'd known before starting some of my large projects. I also wish I had used more trait objects and fewer generics - the overhead rarely matters in practice.
Trait objects vs. generics: yeah, personally I hope to one day see a language that allows application developers to decide what gets monomorphized, rather than library developers.
It seems like you could usually just synthesize the boxing needed. So why would you ever leave that choice in the library's hands, which has no idea how it will be used?
Mind expanding on the b) a bit?
If by that you mean writing things from scratch instead of pulling a dependency that does it already, it sounds a bit counterintuitive.
I believe they mean spokes as opposed to "wheels with other spokes", i.e. leaves vs trees.
The best dependency trees are as short and wide as possible. It means fewer changes with each upgrade, fewer chances of conflicts between them, and generally more-parallel compilation and better build caching.
Which is true in every language, but every community seems to need to relearn it from scratch.
I did not expect Plan 9, the Linux kernel, and Rust to intersect like that! Would adding this module mean that I could finally mount a 9P filesystem on my Linux machine without the need for third-party software and FUSE?
You already could, actually! Run "zgrep CONFIG_9P_FS /proc/config.gz" to see if your kernel's configured for it, and "modprobe 9p" if it's built as a module. The article's talking about doing an in-kernel 9P _server_ instead; I think this is just so that it can act as an abstraction of an SMB server, because you ought to be able to do a 9P server in userspace with no loss of capabilities...
/proc/config.gz seems to be unavailable to Debian 11, at least by default, but "modinfo 9p" does show the module. Thanks for the clarification!
> /proc/config.gz seems to be unavailable to Debian 11
For clarification:
> /proc/config.gz isn't available in Debian, because the config is provided in /boot/config-*, no need for the in-memory variant
https://wiki.debian.org/KernelFAQ#line-46
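Once the 9p client module is available, mounting is a one-liner. A sketch (the server address, port, and mountpoint here are placeholders; run as root, with a 9P server actually listening on the other end):

```sh
# Load the 9p filesystem module and its TCP transport, if built as modules.
modprobe 9p 9pnet_tcp

# Mount a remote 9P export over TCP using the 9p2000.L protocol variant.
mount -t 9p -o trans=tcp,port=564,version=9p2000.L 192.0.2.10 /mnt/9p
```

The trans= option also accepts virtio (how QEMU's host-filesystem sharing works) and unix, per the kernel's v9fs documentation.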
> I think this is just so that it can act as an abstraction of an SMB server, because you ought to be able to do a 9P server in userspace with no loss of capabilities...
Correct, 9p file servers can live in user space. At some point you have to mount the fs, which on Linux/Unix is done in-kernel.
In Plan 9 the kernel is just a VFS that lets you build arbitrary file trees from collections of mounts and binds. On-disk file systems like FAT and EXT are served via user-space programs which open the disk partition (served as a file by sd(3)) and translate the 9p messages to EXT, FAT, etc. operations. File servers can also provide synthetic files, e.g. an Arduino can serve its IO bits and/or internal variables over 9p (ninepea on GitHub, written by echoine).
Since it's a VFS all the way down and data moves via two-way pipes mapped to fds, you can boot and pull root from ANY 9p server. It doesn't matter if it's CWFS, HJFS, FAT, EXT, SMB, SSHFS, NFS, etc. The kernel doesn't care as long as it sees files through a 9p connection over whatever two-way medium you are using, including RS-232. You can't touch that level of flexibility with other operating systems.
Question: The author of the 9P server module "created an executor that can run async code in a kernel workqueue"; is this made a lot easier by the fact that Rust doesn't include an async executor as part of the core/stdlib?
The work on Rust in the Linux kernel doesn't include the standard library, and even if it did, having an executor around doesn't prevent you from creating and using a different one.
Thank you!
I think they are pointing out that because the Rust design has decoupled the functionality from the implementation, these kind of replacements are even possible in the first place.
Of course, if you use an alternative executor there are large swaths of the (non-std) async ecosystem that are not available to you (thinking of anything relying on a tokio::timer::Delay, as an example). But as you point out, Rust in the Linux kernel already doesn't rely on std, so they wouldn't even begin to imagine using an off-the-shelf crate and expecting it to work unmodified for their use case. I just think it is worth pointing out that this is a constraint we currently have.
You're better at expressing my thoughts than I am :)
estebank reads other people's mind, that's why he's so good at writing helpful compiler errors.
Thank you, you're too kind. I don't know how much that experience of "make the compiler try to talk to a human in a way they can understand it" translates to other parts of life, though :)
Right, yes: if we had a design that built a single executor into the language or we had a runtime that included it, you wouldn't be able to replace that as easily. Because executors are, at most, a library mechanism, you can always replace them or run more than one.
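To make that concrete, here's a minimal userspace sketch of an executor built only on std - not the kernel-workqueue executor from the article, just an illustration that an executor is ordinary library code you can swap out. The block_on and ThreadWaker names are my own.

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the thread which is polling the future.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// A minimal executor: poll the future on the current thread,
// parking between polls until the waker fires. The kernel version
// would drive polling from a workqueue instead of thread parking.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker: Waker = Arc::new(ThreadWaker(thread::current())).into();
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    // An async block with no await points completes on the first poll.
    let answer = block_on(async { 21 * 2 });
    assert_eq!(answer, 42);
}
```

Nothing here is privileged: the Future trait and Waker machinery are the only language-level pieces, and any number of executors like this can coexist in one program.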
It doesn't in Rust, but in many other languages there would have been a single blessed async library, and trying to use anything else would have been very painful. The fact that Rust doesn't particularly force the use of its standard runtime is key to why Rust is useful for kernel work.
Yeah, but I think what they're getting at is that Rust could provide a standard executor in the library and also be modular enough that you can use whichever one you want. The questions are orthogonal
How do these Rust kernel modules handle out of bounds access in an array?
In a C module it would be undefined behaviour leading to a kernel panic or weird bugs; in a Rust userspace binary it would be a panic that terminates the process with an error message. What happens here? Would it be a kernel panic as well, or is there a possibility of handling it?
You can catch the panic. It will panic, but I don't know if the driver will catch it. I hope so :)
This is not actually substantively different from throwing an exception.
Indeed, but it's much better than undefined behavior :)
I believe Rust in linux was made so that it never panics. Here's a patch that removes panicking allocations for example: https://lore.kernel.org/lkml/20210704202756.29107-1-ojeda@ke... (but I think all other instances of panicking were removed as well).
EDIT: Look at replies. "Linus considers panics acceptable in some cases".
I presume it will still Oops...
I'm not sure what you mean by "it will still Oops".
https://en.wikipedia.org/wiki/Linux_kernel_oops
Gotcha. Thanks!
Linus considers it acceptable to panic for some programming mistakes, since after all the C code also blows up if you make some programming mistakes.
One I ran into (in the sense of having read about it, not experienced it) was that if I flatten a Vec of arrays, it's theoretically possible that the flattened structure has too many items in it to represent as a machine-word integer. If this happens, the flatten operation will panic.
This can't happen in a language like C (or C++) because their smallest type has size 1, so the arrays can't possibly be bigger in total size than the amount of memory - that's nonsense. But Rust has two smaller sizes than this. The relevant one here is the zero-size type: Rust has no problem with the idea of an array of empty tuples, and such an array could have, say, a billion empty tuples in it, yet on a 32-bit system it just needs 4 bytes (to remember how many empty tuples are in it).
We can see that flattening a Vec of arrays of empty tuples is a pretty wild thing to choose to do, and nevertheless even if we do it, it only panics when the total amount of empty tuples won't fit in the integers of our native word size. But the function could be asked to do this, and so it might panic.
[ You might be wondering how can there be two sizes smaller than C's single byte types in Rust. The answer is the Never type ! and its pseudonym Infallible. The Never type is Empty, no values of this type are possible, so not only does Rust never need to store this type, it doesn't even need to emit machine code to handle this type - the code could never run. This makes sense in Generic programming, we can write Generic error handling code, but it evaporates when the error type was Infallible ]
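A small userspace sketch of the zero-size behavior described above, using Infallible as the stable stand-in for the Never type:

```rust
use std::convert::Infallible;

fn main() {
    // A Vec of a billion empty tuples stores no element data at all;
    // only the count (plus pointer and capacity) is kept, so this
    // "allocates" instantly.
    let v = vec![(); 1_000_000_000];
    assert_eq!(v.len(), 1_000_000_000);
    assert_eq!(std::mem::size_of_val(v.as_slice()), 0);

    // Infallible has no values, so the Err arm below can never run;
    // generic error-handling code written against it simply evaporates.
    let r: Result<u32, Infallible> = Ok(7);
    let n = match r {
        Ok(n) => n,
        Err(never) => match never {}, // statically unreachable
    };
    assert_eq!(n, 7);
}
```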
This is exactly the kind of stuff I come on HN for. Thank you!
Can you, in kernel context?
After comments below, I'm not so sure. I was talking about regular Rust, I didn't know that Linux Rust is patched. Sorry.
You can also use .get(idx), which gives you either Some(data) or None in case of out-of-bounds access.
From what I understand, a Rust panic will just call BUG(). There is no support for unwinding as such.
Most likely you would have to use .get() which returns an Option rather than [] array index which panics.
Exactly. A Rust panic will call the panic_handler, implemented here: https://github.com/Rust-for-Linux/linux/blob/459035ab65c0ebb...
So accessing an array out of bounds will hit a runtime check that calls the panic handler, and that panic handler calls BUG(), which means a kernel panic.
Worth noting that one doesn't need to use raw array accessing in Rust nearly so much as in C because you have things like iterators and for..in loops that will ensure correct access.
But I would assume it would be a kernel panic. It definitely won't be UB.
How does the Rust compiler check bounds and ensure nothing bad can happen with a dynamic array that is passed to a Rust module from the kernel, another C module, or a C userspace program?
If an array and a length can be passed to a Rust module, I can just lie about its size, and unless there is a runtime bounds check (which can be slow), I guess bad things can happen too.
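That concern is real: at the FFI boundary the length is just a claim, and safe Rust's bounds checks only protect against the *claimed* length. A sketch (the sum_buffer function is hypothetical, just illustrating the shape of such an entry point):

```rust
use std::slice;

// What a C-facing entry point might look like. `from_raw_parts`
// trusts `len` completely: if the caller lies about it, that is
// undefined behavior, exactly as it would be in C. That's why this
// function is `unsafe` - the caller must uphold the contract.
unsafe fn sum_buffer(ptr: *const u32, len: usize) -> u64 {
    let data = unsafe { slice::from_raw_parts(ptr, len) };
    // From here on, safe Rust bounds-checks every access against
    // the claimed length; iteration can't run past it.
    data.iter().map(|&x| u64::from(x)).sum()
}

fn main() {
    let v = [1u32, 2, 3, 4];
    // An honest caller passes the real length.
    let total = unsafe { sum_buffer(v.as_ptr(), v.len()) };
    assert_eq!(total, 10);
}
```

So the honest answer to the question above is: the compiler can't verify a foreign length; the unsafe conversion is the point where a human (or a validating wrapper) takes responsibility.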
The Rust for Linux implementation converts a Rust panic into a Linux kernel BUG macro call. I believe this will expand to an invalid CPU instruction (at least on popular architectures), and if you're a kernel thread you die immediately with the kernel reporting the state where this happened. Obviously in some cases this is fatal and the report might only be seen via say, a serial port hooked up to a device in a test lab or whatever.
So, it's not a kernel panic, but it's extremely bad, which seems appropriate because your code is definitely wrong. If you're not sure whether the index is correct you can use get() or get_mut() to have an Option, which will be None if your index was wrong (or of course you could ask about the length of the array since Rust remembers how long arrays are).
BUG() will panic the kernel.
https://elixir.bootlin.com/linux/latest/source/include/asm-g...
I guess that's quite drastic for a checked out-of-bounds access, when there's no actual memory safety issue and the compiler could simply return an error from the function, or do something else less drastic.
In which case you can call the get method (which returns an Option - i.e., either the value or nothing) rather than indexing, and return an error value.
In Rust code, if you're not able to locally reason that an array index is valid, it should be written with .get() and the None case handled appropriately.
It's impossible to claim there's "no actual memory safety issue" when a program's invariants have been broken: all bets are off at that point.
Why allow indexed access at all if the compiler is emitting a conditional check anyway?
Some people have argued that indexed access is a wart, but it would be quite heavyweight to always have to unwrap an Option when accessing a known-good index.
Instead, indexing on arrays/vecs is (essentially) sugar for .get(index).unwrap(); if you don't want the unwrap behavior, use get. This is very similar to Python, though Python throws an exception, which obviously isn't available to Rust.
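The equivalence, and the panic, can both be observed in plain userspace Rust (in the kernel the panic path is different, as discussed elsewhere in the thread):

```rust
fn main() {
    let v = vec![10, 20, 30];

    // In bounds: indexing and .get().unwrap() agree.
    assert_eq!(v[1], *v.get(1).unwrap());

    // Out of bounds: .get() returns None where v[9] would panic.
    assert!(v.get(9).is_none());

    // In userspace we can even observe the panic from indexing.
    let result = std::panic::catch_unwind(|| v[9]);
    assert!(result.is_err());
}
```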
But in real code you never want unwrap, so why provide sugar for it?
Because in the common case you assume that, if your code is correct, all of your indexing will be in bounds, but for memory safety reasons we need a bug to be reported if memory safety would have been violated. So we allow direct indexing with a panic on out of bounds because it's the most ergonomic for that common case
The bug is not just reported here; the whole computer shuts down and all your unsaved work gets lost. That's not very ergonomic either.
I've come to believe ergonomics is a siren song here, mostly because recently I've been considering panics as forbidden as memory unsafety is... it's never okay for your embedded system or web server to panic, so don't act like that style is somehow preferable.
If you "know" the index is in bounds, you can get_unchecked. Otherwise you should get. Either would be a sane choice for the index operator.
Because it's convenient and familiar to most programmers. Not providing bounds-checked indexing makes some kinds of code very hard to write.
But note that this problem also happens with integer division.
In Rust, a[x] on an array or vec is really, roughly, shorthand for a.get(x).unwrap() (with a different error message).
Likewise, a / b on integers is a kind of shorthand for a.checked_div(b).unwrap().
The thing is, if the index is ever out of bounds, or if the denominator is zero, the program has a bug, 100% of the time. And if you catch a bug with an assertion, there is seldom anything better to do than interrupt the execution (the only alternative I can think of is restarting the program or the subsystem). If you continue execution past a programming error, you may corrupt data structures or introduce bizarre, hard-to-debug situations.
Doing a pattern match on a.get(x) doesn't help because if it's ever None (and your program logic expects that x is in bounds) then you are kind of forced to bail.
The downside here is that we aren't catching this bug at compile time. And it's true that sometimes we can rewrite the program to not have an indexing operation, usually using iterators (eliding the bounds check will make the program run faster, too). But in general this is not possible, at least not without bringing in formal methods. That's what tests are for: to ensure the correctness of stuff type errors can't catch.
Now, there are some crates like https://github.com/dtolnay/no-panic or https://github.com/facebookexperimental/MIRAI that will check that your code is panic-free. The first is based on the fact that LLVM optimizations can often remove dead code, and thus remove the panic from a[x] or a / b - if they don't, compilation fails. The second employs formal methods to mathematically prove that there is no panic. I guess those techniques will eventually be ported to the kernel, even if panics happen differently there (by hooking into the BUG mechanism or whatever).
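The division analogy in code (userspace sketch):

```rust
fn main() {
    // a / b on integers is analogous to a.checked_div(b).unwrap():
    assert_eq!(10i32.checked_div(3), Some(3));
    assert_eq!(10i32.checked_div(0), None); // 10 / 0 would panic instead

    // Pattern-matching the Option forces the caller to decide what a
    // zero denominator means, rather than continuing with a bad value.
    let denom = 0;
    let msg = match 10i32.checked_div(denom) {
        Some(q) => format!("quotient {q}"),
        None => "bug: zero denominator".to_string(),
    };
    assert_eq!(msg, "bug: zero denominator");
}
```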
> It's impossible to claim there's "no actual memory safety issue" when a program's invariants have been broken: all bets are off at that point.
When the underlying _runtime's_ invariants have been broken, not when the program's invariants have been broken. I.e., you can recover from almost everything save for a VM error in a VM language like Java, since there's no way for the program to mess up the VM's data structures such that they cannot be brought back to a defined state.
Would a kernel module be written as a normal Linux-targeting Rust program, or would it be more like a bare metal target with its own (user-provided) panic handler?
More like the latter. Kernel modules don’t run in userspace.
The 9P file protocol, he said, comes from the Plan 9 operating system. The kernel has a 9P client, but no 9P server. There is a 9P server in QEMU that can be used to export host filesystems into a guest. The protocol is simple, Almeida said, defining a set of only ten operations. His 9P server implementation works now, in a read-only mode, and required just over *1,000* lines of code.
... I wonder how many lines of code the Plan9 (C) server takes? (much less I suspect)
From previous example drivers, I'd expect that it's not dissimilar overall, and mostly the difference will come down to style preferences - e.g. maybe the C programmer loves one-line for loops and the Rust programmer chooses the default Rust style, which doesn't do that; or, contrariwise, the C programmer finds it helpful to write out complicated variable types across multiple lines and the Rust programmer was comfortable letting them be inferred instead.
If the C ends up hand-rolling something Rust has built in, that adds up pretty quickly. For example, Rust's arrays, strings, etc. know how big they are; C's do not, so you need extra code to recalculate or to explicitly store and retrieve those sizes. On the other hand, it may be tempting to express a Rust type as having methods defined on it, rather than, as you must in C, using only free functions throughout. There's also a Rust discipline of always deriving common traits, e.g. Clone, Debug, and Eq, when they're appropriate, as a convenience - that's one extra line in typical style.
This Linux 9P server is a 9P-to-Linux-VFS translator. No such software component exists in Plan9.
You can think of the Plan9 kernel as a 9P multiplexer. 9P in, 9P out. The closest matching thing would be e.g. a 9P-to-ext4-in-a-file translator -- but it's always "9P server talking to $FOO" (or "9P server that makes files up on the fly").
Most of the complexity of this Linux 9P server would be in translating between the unrelated worlds of 9P and Linux VFS. A more apt comparison is the size of this 9P server vs an in-kernel NFS server -- they both translate something unrelated to Linux VFS.
Grsec in the kernel eta?
I can only see that happening if spender goes insane.
*goes sane
More seriously: many different security improvements are filtering into the kernel idea-by-idea, insofar as folks working on kernel security do the actual work of making them fit in with the kernel, and/or coming up with better alternatives.
https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Pr... is basically trying to upstream grsec's good ideas in upstream-friendly ways.
(Oddly, grsec itself has expressed opposition to Rust in the kernel, I think on the grounds that their custom GCC plugins can't cross the language boundary. Now that there's work on supporting Rust in GCC itself, I'm not sure if that objection still applies.)
> and unlike in C++, we can write it and actually trust that it's correct
Some of these people are so obnoxious...
obnoxious about what? c++ is so brittle that a gentle breeze could cause a cacophony of soundness issues
> obnoxious about what?
Vaguely descriptive attacks that serve no purpose other than religious wars. When I see statements like that I see another person who thinks it's simply fashionable to hate popular languages because they heard someone like Torvalds, Thompson, or Stallman say it. If you have a fundamental explanation like they do, then it's at least reasonable, disregarding the irrelevant context of a Linux kernel forum thread. But I rarely see that; instead I find muttering like "cpp is bloated, unstable, overcomplicated crap with ugly syntax, that's why I use rust, or nim, or any other pretty new thing that was conceived 35+ years later".
For instance:
> c++ is so brittle that a gentle breeze could cause a cacophony of soundness issues
Okay, how so, anything about C++ or just C in general?
There's a design principle here, rather than just some sort of hubris.
In languages as powerful as Rust and C++ you can express programs which fall into three categories. Two of these aren't very interesting: the programs which are obviously valid, and the programs which are obviously nonsense. We know what to do with those: the former should result in emitting correct machine code, and the latter should cause a diagnostic (an error message). The problem is the last group: programs whose validity is difficult to discern. The bigger and more complicated your software, the more likely it ends up in this last group.
In C++ the third category is treated as valid. In the ISO standard this is achieved by clauses which declare that in certain cases the program is "ill-formed, no diagnostic required", which means the standard can't tell you what happens, but you won't necessarily get an error message; your compiler may spit out a program that does... something. Maybe it does what you expected, and maybe it doesn't; the standard asks nothing more.
In Rust these are treated as invalid. If you try hard enough (or cheat and Google for one) you can write Rust programs which you can reason through why they should work but the compiler says no. You get an error message explaining why the compiler won't accept this program.
Now, if Rust's compiler doesn't like your program, you can rewrite it so that it's valid. Once you do that, which is often very easy, you're definitely in that first category of correct programs, hooray. The program might well do something you didn't intend; the compiler isn't a mind reader and has no idea that you meant to write "Fnord" in that text output and "Ford" is a typo. But it's definitely a valid Rust program.
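To make that concrete, here's a well-known illustrative case (my own hedged sketch, not anything from the article): a lookup-or-insert function a human can reason through as safe, which today's borrow checker still rejects, together with the easy rewrite it accepts.

```rust
use std::collections::HashMap;

// The rejected version: a human can argue it's fine, because the borrow
// from `map.get` is dead by the time the `None` arm runs, but the current
// borrow checker says no:
//
// fn get_default(map: &mut HashMap<u32, String>, key: u32) -> &String {
//     match map.get(&key) {
//         Some(v) => v,
//         None => {
//             map.insert(key, String::new()); // error[E0502]: cannot borrow
//             map.get(&key).unwrap()          // `*map` as mutable
//         }
//     }
// }

// The easy rewrite the compiler accepts, using the entry API:
fn get_default(map: &mut HashMap<u32, String>, key: u32) -> &String {
    map.entry(key).or_default()
}

fn main() {
    let mut m = HashMap::new();
    m.insert(1, "one".to_string());
    assert_eq!(get_default(&mut m, 1), "one");
    assert_eq!(get_default(&mut m, 2), ""); // inserted on demand
    println!("ok");
}
```

The rejected version sits squarely in the "hard to discern" category; the rewrite moves the same logic into the "obviously valid" one.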
In the C++ case we can't tell. Maybe our program is the ravings of a lunatic, the compiler isn't obliged to mention that and we are in blissful ignorance. This also provides little impetus for the standard's authors or compiler vendors to reduce the size of the third category of programs, after all they seem to compile just fine.
Obviously it'd be excellent in theory to completely eliminate the third category. Unfortunately Rice's Theorem says we cannot do that.
>In the C++ case we can't tell
Inherently, or in the compiler implementations you've seen?
I'm not sure if there's a formal proof, but I believe it's inherently undecidable.
There's some discussion here https://stackoverflow.com/questions/7237963/a-c-implementati...
Inherently. Because of Rice's theorem you can't just ensure all the valid programs are correctly identified (basically you need to solve the halting problem to pull that off). But the ISO document doesn't allow you to do what Rust does and just reject some of the valid programs because you aren't sure.
Now of course you could build a compiler which rejects valid programs anyway and is thus not complying with the ISO document. Arguably some modes of popular C++ compilers are exactly that today. But now we're straying from the ISO C++ standard. And I'm pretty sure I haven't seen a modern C++ compiler which reliably puts all the "no diagnostic required" stuff into the "we generate a diagnostic and reject the program" category, even with a flag, but if you know of one I'm interested.
So one person said creating the Rust interface is very difficult to do correctly, and the other said bare-metal Rust isn't complete enough for his use case.
I called both of these (and compile times) when people first tried to get Rust into the Linux kernel. Maybe it's better to acknowledge these problems before you start instead of beating your head against them? I'm no psychic, and I know I wasn't the only one who called these.
> Maybe it's better to acknowledge these problems before you start instead of beating your head?
What's unacknowledged? They're doing exploratory work. That often involves developing under known and expected limitations to understand the problem space and have something to test with when progress is made towards addressing limitations.
The fact that writing unsafe Rust code is harder than unsafe C due to more rules? The fact that Rust has little support for `no_std`, which is what the kernel uses? There's also the fact that most of the time in drivers (in my limited experience) you almost always want unsafe things, but I assumed no one would think of, or admit, that one.
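The "more rules" point is worth illustrating (hedged sketch of my own, ordinary userspace Rust rather than kernel code): even inside an `unsafe` block, Rust imposes invariants that C never states explicitly, and violating any of them is immediate undefined behavior, not just a latent bug.

```rust
// Summing values through a raw pointer, the kind of thing a driver does
// constantly. The SAFETY comment lists obligations that are Rust-specific,
// on top of the usual C-style "don't pass a bad pointer".
fn sum_raw(ptr: *const u32, len: usize) -> u32 {
    // SAFETY: the caller must guarantee that `ptr` is non-null, aligned
    // for u32, points to `len` initialized values, and that no &mut
    // reference aliases the range for the duration of this call.
    let slice = unsafe { std::slice::from_raw_parts(ptr, len) };
    slice.iter().sum()
}

fn main() {
    let data = [1u32, 2, 3, 4];
    assert_eq!(sum_raw(data.as_ptr(), data.len()), 10);
    println!("sum = 10");
}
```

In C, an equivalent function has only the informal "pointer must be valid" rule; in unsafe Rust you additionally have alignment, initialization, and aliasing obligations to uphold, which is exactly why people describe it as harder to get right.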
This was a pretty weak article? All it basically says is that it can be done, plus some speculation that it might make things better. But not much testing has been carried out, and the comparisons to existing code that have been made show the Rust implementation performing worse.
> show the Rust implementation to perform worse
Here in the Linux Plumbers Conference talk regarding the NVMe driver (the same one discussed in the OP), the benchmarks show nearly identical performance to the existing implementation. At the end of the talk the co-author of the NVMe spec (Matthew Wilcox) made a point to stand up and comment with how unexpectedly impressed he is with the performance: "I was not expecting to see these performance numbers, they are amazing." (at this timestamp: https://youtu.be/Xw9pKeJ-4Bw?t=9486 ).
EDIT: here's the timestamp of the benchmarks themselves: https://youtu.be/Xw9pKeJ-4Bw?t=8626
Linux Weekly News focusses heavily on work going on in the Linux kernel, and often looks at features and patch sets which aren't quite ready for mainline yet, but could be an interesting shift in direction for the kernel in future.
The inclusion of Rust in the kernel is one of these ongoing bits of work which isn't ready for mainline yet, but has still generated a lot of interesting discussion in the past on whether Rust is suitable at all, or whether it's suitable in its default form, or, if it is suitable, what the best way to start introducing it is, etc...
And this is another step along that (long, convoluted) path; the proposed 9P implementation specifically, to me at least.
If you mostly care about the Linux kernel insofar as what features it can offer you right now then, sure, this might not seem that interesting. And that's entirely fair - there are plenty of other technologies out there that I have that level of interest in.
But for the people who take a deep interest in the upcoming work and future direction of Linux, I think it's quite interesting.
I'd agree that the article probably is a bit too niche, or forward-looking, for a generic tech website/magazine, and doesn't have a huge target audience. But for a focussed site like LWN, I think it fits in pretty well, and matches the audience that LWN targets. It's definitely the kind of article that I keep my subscription for.