Hector Martin on Nostr
So I just pushed a kernel fix for Asahi Linux to (hopefully) fix random kernel panics.
The fix? Increase kernel stacks to 32K.
We were running out of stack. It turns out that when you have zram enabled and are running out of physical RAM, a memory allocation can trigger a ridiculous call chain through zram and back into the allocator. This, combined with one or two large-ish stack frames in our GPU driver (2-3K), was simply overflowing the kernel stack.
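For context, the arm64 kernel stack size boils down to a single constant in arch/arm64/include/asm/memory.h, so a 16K-to-32K bump is essentially a one-liner. A sketch of what such a change looks like against the stock upstream definitions (not the literal Asahi patch; the exact lines may differ):

    /* Sketch based on stock arch/arm64/include/asm/memory.h, not the actual patch. */
    #define KASAN_THREAD_SHIFT  0    /* 1 when KASAN is enabled, doubling the stack */

    /* Was (14 + KASAN_THREAD_SHIFT), i.e. 2^14 = 16K; 15 gives 2^15 = 32K. */
    #define MIN_THREAD_SHIFT    (15 + KASAN_THREAD_SHIFT)

    /* With vmapped stacks, the stack is rounded up to a whole number of pages. */
    #if defined(CONFIG_VMAP_STACK) && (MIN_THREAD_SHIFT < PAGE_SHIFT)
    #define THREAD_SHIFT        PAGE_SHIFT
    #else
    #define THREAD_SHIFT        MIN_THREAD_SHIFT
    #endif

    #define THREAD_SIZE         (UL(1) << THREAD_SHIFT)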
Here's the thing though: if we're hitting this with simple GPU stuff (which, yes, has a few large stack frames because Rust, but it's a shallow call stack, and all it's doing is a regular memory allocation that triggers the rest of the chain all the way into the overflow), I *guarantee* there are kernel call paths that would also run out of stack, today, in upstream kernels with zram (i.e. vanilla Fedora setups).
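As an aside, one way to see how close a given box runs to the edge is the ftrace stack tracer: with CONFIG_STACK_TRACER enabled, writing 1 to /proc/sys/kernel/stack_tracer_enabled makes the kernel record the deepest stack it has observed, which you can then read back from stack_max_size and stack_trace under the tracing directory (/sys/kernel/tracing or /sys/kernel/debug/tracing, depending on your setup).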
I'm honestly baffled that, in this day and age, 1) people still think 16K is acceptable, and 2) we still haven't figured out dynamically sized Linux kernel stacks. If we're so close to the edge that a couple KB of extra stack from Rust nonsense causes kernel panics, you're definitely going over the edge with long-tail corner cases of complex subsystem layering *already* and people's machines are definitely crashing already, just perhaps less often.
I know there was talk of dynamic kernel stacks recently, and one of the issues was that implementing it is hard on x86 due to a series of bad decisions made many years ago, including the x86 double-fault model and the fact that on x86 the CPU implicitly uses the stack on faults. Of course, none of this is a problem for ARM64, so maybe we should just implement it here first and let the x86 people figure something out for their architecture on their own ;).
But on the other hand, why not increase stacks to 32K? ARM64 got bumped to 16K in *2013*, over 10 years ago. Minimum RAM size has at *least* doubled since then, so it stands to reason that doubling the kernel stack size is entirely acceptable. Consider a typical GUI app with ~30 threads: With 32K stacks, that's less than 1MB of RAM, and any random GUI app is already going to use many times more than that in graphics surfaces.
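Worked out, with the ~30 threads assumed above: 30 × 16 KiB = 480 KiB per app today, versus 30 × 32 KiB = 960 KiB after the bump, still under 1 MiB. The delta is about 480 KiB per such app, which is noise next to the tens of megabytes it spends on graphics surfaces.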
Of course, the hyperscalers will complain because they run services that spawn a billion threads (hi Java) and they like to multiply the RAM usage increase by the size of their fleet to justify their opinions (even though all of this is inherently relative anyway). But the hyperscalers are running custom kernels anyway, so they can crank the size down to 16K if they really want to (or 8K, I heard Google still uses that).
Published at 2024-05-28 05:15:53

Event JSON
{
  "id": "7ad94d5461bb6771f851a347e2d7253514001d311df14133b43097c9ec78ac9f",
  "pubkey": "058a6d106c5e6719008ce4db3f64c846caf49925227a39533d12a846fbab21ee",
  "created_at": 1716873353,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://social.treehouse.systems/@marcan/112517012101266710",
      "web"
    ],
    [
      "proxy",
      "https://social.treehouse.systems/users/marcan/statuses/112517012101266710",
      "activitypub"
    ],
    [
      "L",
      "pink.momostr"
    ],
    [
      "l",
      "pink.momostr.activitypub:https://social.treehouse.systems/users/marcan/statuses/112517012101266710",
      "pink.momostr"
    ]
  ],
  "content": "So I just pushed a kernel fix for Asahi Linux to (hopefully) fix random kernel panics.\n\nThe fix? Increase kernel stacks to 32K.\n\nWe were running out of stack. It turns out that when you have zram enabled and are running out of physical RAM, a memory allocation can trigger a ridiculous call-chain through zram and back into the allocator. This, combined with one or two large-ish stack frames in our GPU driver (2-3K), was simply overflowing the kernel stack.\n\nHere's the thing though: If we were hitting this with simple GPU stuff (which, yes, has a few large stack frames because Rust, but it's a shallow call stack and all it's doing is a regular memory allocation to trigger the rest all the way into the overflow) I *guarantee* there are kernel call paths that would also run out of stack, today, in upstream kernels with zram (i.e. vanilla Fedora setups).\n\nI'm honestly baffled that, in this day and age, 1) people still think 16K is acceptable, and 2) we still haven't figured out dynamically sized Linux kernel stacks. If we're so close to the edge that a couple KB of extra stack from Rust nonsense causes kernel panics, you're definitely going over the edge with long-tail corner cases of complex subsystem layering *already* and people's machines are definitely crashing already, just perhaps less often.\n\nI know there was talk of dynamic kernel stacks recently, and one of the issues was that implementing it is hard on x86 due to a series of bad decisions made many years ago including the x86 double-fault model and the fact that in x86 the CPU implicitly uses the stack on faults. Of course, none of this is a problem for ARM64, so maybe we should just implement it here first and let the x86 people figure something out for their architecture on their own ;).\n\nBut on the other hand, why not increase stacks to 32K? ARM64 got bumped to 16K in *2013*, over 10 years ago. Minimum RAM size has at *least* doubled since then, so it stands to reason that doubling the kernel stack size is entirely acceptable. Consider a typical GUI app with ~30 threads: With 32K stacks, that's less than 1MB of RAM, and any random GUI app is already going to use many times more than that in graphics surfaces.\n\nOf course, the hyperscalers will complain because they run services that spawn a billion threads (hi Java) and they like to multiply the RAM usage increase by the size of their fleet to justify their opinions (even though all of this is inherently relative anyway). But the hyperscalers are running custom kernels anyway, so they can crank the size down to 16K if they really want to (or 8K, I heard Google still uses that).",
  "sig": "db2c8a4dd429df37ff41bdec626d9427a6be0e5b04d8ae6ae7d2a7f24ac4ab4251c564524fe5be2884a1e9ad71417236aaf61aa09b838586a8c9263b8d2835ad"
}