- Red Hat Linux 5.1 - 7.x
- Slackware 7.0 - 12.0
- Ubuntu 6.10 - 9.10
- Slackware 13.37 - 14.1
- Mint 16 - 17
- Arch
Yeah god forbid people have some interesting discussion on this platform, right?
The post doesn’t answer the questions; that’s why I asked.
It says:
> All running on a krun microVM with FEX and full TSO support 💪
>
> I was not expecting Party Animals to run! That’s a DX11 game, running with the classic WineD3D on our OpenGL 4.6 driver!
Now I know some of these words, but it does not answer my question.
So how does that work given that most Steam games are x86/x64 and the M2 is an ARM processor? Does it emulate an x86 CPU? Isn’t that slow, given that it’s an entirely different architecture, or is there some kind of secret sauce?
I ran it perfectly on a 33MHz 486 with 4MB of RAM for a long time. Even Doom II with some of its heavier maps ran fine.
“Perfectly” would mean it ran at 35fps, the maximum framerate DOS Doom is capped at. In the standard Doom benchmark, a dx33 gets about half that: 18fps average in demo3 of the shareware version with the window size reduced 1 step. Demo3 runs on E1M7, which isn’t the heaviest map, so heavier maps would bog the dx33 down even more.
I’m sure you found that acceptable at the time, and that you look back on it with slightly rose-tinted glasses of nostalgia, but a dx2/66 and preferably even better definitely gave you a much better experience, which was my point.
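For anyone who wants to reproduce numbers like these, the classic methodology is Doom’s built-in timedemo mode. A rough sketch from memory (the realtics figure below is illustrative, picked to match the ~18fps result):

```bash
# Play back the built-in demo3 as fast as the machine allows
doom -timedemo demo3
# Doom exits with something like "timed 2134 gametics in 4150 realtics".
# The engine runs at 35 tics per second, so:
#   fps = gametics * 35 / realtics = 2134 * 35 / 4150 ≈ 18 fps
```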
If anyone can enlighten me: is this pretty much why you can find DooM on almost any platform, because of its Linux source port roots?
I mean yeah. Doom was extremely popular and had a huge cultural impact in the 90s. It was also the first game of that magnitude of which the source was freely released. So naturally people tried to port it to everything, and “but can it run Doom?” became a meme on its own.
It also helps that the system requirements are very modest by today’s standards.
It ran like absolute ass on 386 hardware though, and it required at least 4MB of RAM which was also not so common for 386 computers. Source: I had a 386 at the time, couldn’t play Doom until I got a Pentium a few years later.
Even on lower clocked 486 hardware it wasn’t that great. IIRC, it needed about a 486 DX2/66 to really start to shine.
What has Rocky done?
Also, I 100% understand not liking Oracle as a company, but anyone can use OEL freely without ever having to deal with Oracle the company, and it’s a damn good RHEL substitute.
Without knowing what was being hosted, the only surefire way would be pulling a complete disk image with cat or dd.
That’s not surefire, unless you’re doing it offline. If the data is in motion (like a database that’s being updated), you will end up with an inconsistent or corrupt backup.
Surefire in that case would be something like an lvm snapshot.
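To make that concrete, a minimal sketch of the snapshot approach (volume group `vg0` and logical volume `data` are made-up names; adjust sizes and paths to taste):

```bash
# Freeze a crash-consistent, point-in-time view of the live volume.
# The snapshot needs enough space to absorb writes made during the backup.
lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data

# Image the snapshot instead of the live device; the database can keep
# writing to /dev/vg0/data without corrupting the copy.
dd if=/dev/vg0/data-snap of=/mnt/backup/data.img bs=4M status=progress

# Drop the snapshot once the image is verified, before it fills up.
lvremove -y /dev/vg0/data-snap
```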
If you wanted to stay on a similar system, RHEL 9 would be a good option, or one of its “as similar as possible” derivatives like AlmaLinux.
No love for Rocky?
Also Oracle Linux is still free, and fully compatible with RHEL.
> How the fuck am I supposed to know that Network Manager won’t support DNS over TLS
Read the documentation? Use google?
The very first hit when you google “dns over tls tumbleweed” provides the answer: https://dev.to/archerallstars/using-dns-over-tls-on-opensuse-linux-in-4-easy-steps-enable-cloud-firewall-for-free-today-2job
A more generic query “dns over tls linux” gives this, which works just the same: https://medium.com/@jawadalkassim/enable-dns-over-tls-in-linux-using-systemd-b03e44448c1c
Both google searches return several more hits that basically say the same thing.
Even the NetworkManager reference manual refers you to systemd-resolved as the solution: https://www.networkmanager.dev/docs/api/latest/settings-connection.html
| Key Name | Value Type | Description |
|---|---|---|
| `dns-over-tls` | int32 | Whether DNSOverTls (dns-over-tls) is enabled for the connection. DNSOverTls is a technology which uses TLS to encrypt dns traffic. The permitted values are: “yes” (2) use DNSOverTls and disabled fallback, “opportunistic” (1) use DNSOverTls but allow fallback to unencrypted resolution, “no” (0) don’t ever use DNSOverTls. If unspecified “default” depends on the plugin used. Systemd-resolved uses global setting. This feature requires a plugin which supports DNSOverTls. Otherwise, the setting has no effect. One such plugin is dns-systemd-resolved. |
I don’t use NetworkManager, I’ve never even used Tumbleweed and I found the answer in all of 10 minutes. Of course that doesn’t help if you’re so clueless that you didn’t even know that you were using DNS-over-TLS, or that DoT is a very recent development that differs significantly from regular DNS and that it requires a DNS resolver that supports it.
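For the record, what those articles boil down to is a few lines of systemd-resolved configuration. A sketch, with Mullvad’s public resolver as the example upstream:

```bash
# Enable DNS over TLS via systemd-resolved (the plugin route the
# NetworkManager manual points at). A drop-in keeps the change
# separate from the packaged /etc/systemd/resolved.conf.
sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/dot.conf > /dev/null <<'EOF'
[Resolve]
DNS=194.242.2.2#dns.mullvad.net
DNSOverTLS=yes
EOF
sudo systemctl restart systemd-resolved

# Verify: the relevant section should show +DNSOverTLS
resolvectl status
```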
> when every other operating system does?
Like Windows 10? (Hint: it doesn’t)
> You use Arch. Mr skillful
Who cares what I use. When I’m messing with something I don’t understand, I at least read the documentation first instead of complaining on the internet and calling the whole community toxic and, I quote, “Butthurt Linux gobblers” when you get the slightest bit of pushback.
> I have had so many instances of having to spend hours upon hours upon hours just to figure out how to do some basic shit on Linux that I can do on every operating system within a matter of 5 minutes
skill issue.
Read the post. The user obviously didn’t even know that Mullvad uses DNS over TLS and that the other providers use regular DNS, nor did he know how to properly troubleshoot a DNS issue, which is a skill you should have on any OS if you’re going to mess about with DNS settings.
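Basic DNS triage isn’t distro-specific either. A quick sketch of the kind of checks I mean (on a systemd-resolved system; the upstream address is just an example):

```bash
# Which resolver is each interface actually using, and over what protocol?
resolvectl status

# Does the configured stub resolver answer at all?
resolvectl query mullvad.net

# Bypass the local stub and query a known-good upstream directly,
# to tell "my resolver is broken" apart from "DNS itself is down"
dig @1.1.1.1 mullvad.net
```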
LOL this isn’t even a Linux issue. This is an “I’m confused about how DNS works” issue.
There was a short period of time when Enlightenment was the default window manager for Gnome, later to be replaced by Sawfish. It was a hideous experience, by the way.
Early Gnome was weird. The Gnome File Manager was also originally based on the terminal program Midnight Commander.
It was a bit rocky coming over from Plasma 5, but it has settled in nicely now.
Oh, and don’t forget to take backups of your `/home`. That’s good practice for every desktop environment.
The config files of the major desktop environments have become a mess though. Plasma absolutely shits files all over `~/.config` and `~/.local/share`, where they sit mingled together with the config files of all your other applications, and most of it is thoroughly undocumented. I’ve been in the situation where I wanted to restore a previous state of my Plasma desktop from my backups, or just start with a clean default desktop, and there is just no straightforward way to do that, short of nuking all your configurations.
Doing a quick find query in my current home directory, there are 57 directories and 79 config files that have either plasma or kde in the name, and that doesn’t even include all the `~/.config/*` files belonging to Plasma or KDE components that don’t have it in their name explicitly (e.g. `dolphinrc`, `katerc`, `kwinrc`, `powerdevilrc`, `bluedevilglobalrc`, …).
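For reference, the kind of query I mean; a rough sketch (it will also match unrelated files that merely contain those strings):

```bash
# Directories with plasma/kde in the name
find ~ \( -iname '*plasma*' -o -iname '*kde*' \) -type d | wc -l

# Config files with plasma/kde in the name
find ~ \( -iname '*plasma*' -o -iname '*kde*' \) -type f | wc -l
```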
It was much simpler in the old days when you just had something like a `~/.fvwmrc` file that was easy to back up and restore; even early KDE used to store everything together in a `~/.kde` directory.
`apt purge nano` is one of the first things I do on a new Debian installation. Much easier to remember than having to use `update-alternatives`, `select-editor` and the `$EDITOR` variable to convince the likes of `vigr`, `vipw`, `visudo`, `crontab -e`, … that I really want to use vim as my primary editor.
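For comparison, the “official” route looks roughly like this (a sketch; the exact vim binary path varies with which vim package is installed):

```bash
# Point the system-wide editor alternative at vim
sudo update-alternatives --set editor /usr/bin/vim.basic
# ...or pick interactively from whatever is installed
sudo update-alternatives --config editor

# Belt and braces: many tools check these variables before
# falling back to the alternatives system
export EDITOR=vim
export VISUAL=vim
```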
Not really, because you’re now going to make it do more, i.e. incorporate the functionality of sudo and expose it to user input. So unless you can prove that the newly written code is somehow inherently more secure than sudo’s existing code, the attack surface is exactly the same.
The attack surface will be a systemd daemon running with UID=0 instead, because how else are you going to hand out root privileges?
So it doesn’t really change anything to the attack surface, it just moves it to a different location.
We are talking about LTS distros, not about bridges. The context is pretty clear.