- cross-posted to:
- linux@lemmy.ml
Why is there a massive chip hanging precariously on a disorganized network rack?
This is the best summary I could come up with:
This effort has centered on optimizing cacheline consumption and adding safeguards to ensure future changes don’t regress it.
In turn, this optimization of core networking structures increases TCP performance with many concurrent connections by as much as 40% or more!
This patch series attempts to reorganize the core networking stack variables to minimize cacheline consumption during the phase of data transfer.
Meanwhile new Ethernet driver hardware support in Linux 6.8 includes the Octeon CN10K devices, Broadcom 5760X P7, Qualcomm SM8550 SoC, and Texas Instrument DP83TG720S PHY.
NVIDIA Mellanox Ethernet data center switches can also now enjoy firmware updates without a reboot.
The full list of new networking patches for the Linux 6.8 kernel merge window can be found via today’s pull request.
The original article contains 387 words, the summary contains 124 words. Saved 68%. I’m a bot and I’m open source!
Thought from the headline this was going to be tcp_bbr related, but no. This is a welcome surprise.
9th Jan …
“A hell of an improvement especially for the AMD EPYC servers”
Look closely at the stats in the headers of those three tables of test results. The NICs have different line speeds and the L3 cache sizes are different too. IPv4 and 6 for one and only IPv6 for the other.
Not exactly like for like!
This isn’t a benchmark of those systems, it’s showing that the code didn’t regress on either hardware set with some anecdotal data. It makes sense they’re not like for like.
Okay, it is up to ~40%, but the underlying changes are fundamental.
I watched a video on this, the way they managed it was by reordering variables in structs. That’s kinda insane
Good lord the comments on this one are a mess