Technically, I think the worst they could do would be to record your screen. (Barring some extra fancy exploits or something.)
That depends on how you speed it up. For example, the Covid vaccines were “accelerated” compared to normal vaccines, but they did that by spending additional money to run the steps of the process in parallel. Normally they don’t do that because if one of the steps fails, they have to go back, and the work done in the parallel processes is wasted. For the Covid vaccines, the financial waste was deemed worth it to get the speed-up from parallelization.
As well as the package manager (and release type/schedule as mentioned in a different reply) you might want to look at the overall structure.
Does the distro use SELinux or AppArmor (you probably want at least one)? Does it follow a traditional distro structure like Ubuntu/Debian, or is it something unusual like an atomic distro (e.g. Silverblue) or a declarative one (e.g. NixOS)? Is it a minimalist distro (Arch is the big modern one) or a maximalist one (SUSE)? Those kinds of things can also be informative.
It sure as fuck does!
Hadn’t seen that before, but given what else I know about him it’s not really surprising…
Someone above mentioned screen reader support and related accessibility features. For users who are blind, this is critical.
See the start of this post talking about device tree models vs boot time hardware discovery.
There’s no reason an arm chip/device couldn’t support hardware discovery, but by and large they don’t for a variety of reasons that can mostly be boiled down to “they don’t want to”. There’s nothing about RISC-V that makes it intrinsically more suited to “PC style” hardware detection but the fact that it’s open hardware (instead of Apple and Qualcomm’s extremely locked down proprietary nonsense) means it’ll probably happen a lot sooner.
There’s also the fact that Arm doesn’t really work with arbitrary PC style hardware. Unless this got fixed (and there have been some pushes) you have to pretty much hard code the device configuration so you can’t just (for example) pull a failed graphics card and swap a new one and expect the computer to boot. This isn’t a problem for phone (or to an extent: laptop) makers because they’re happy to hard code that info. For a desktop, though, there’s a different expectation.
RISC-V does support this, I believe, so in that sense it fits the PC model better.
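To make the “hard code the device configuration” point concrete, here’s a minimal device tree source sketch. (The board name, address, and IRQ number are made up for illustration; real device trees are much larger.)

```
/dts-v1/;
/ {
    compatible = "vendor,example-board";

    /* The kernel only knows this UART exists because this file
     * says so; nothing probes for it at boot. Swap in different
     * hardware and this description no longer matches reality. */
    serial@10000000 {
        compatible = "ns16550a";      /* which driver to bind */
        reg = <0x10000000 0x100>;     /* fixed MMIO address */
        interrupts = <10>;            /* fixed IRQ line */
    };
};
```

Contrast that with PC-style buses like PCIe, where the firmware and OS enumerate whatever happens to be plugged in.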
All the PPA maintainers went to Arch.
I think you already got a good answer but let me throw in another:
Fedora’s dnf provides some good history and update reversion tools. You can use:
dnf history list
to get a list of all actions taken on the system since install. Use “dnf history info 5” to get info on the 5th transaction. (Get the transaction ID numbers from “dnf history list”.)
Then to revert a change use either:
dnf history rollback ID or dnf history undo ID
Using undo reverses a single transaction, so if you have one where you did something like “dnf install tmux” and then ran undo on it then that would be equivalent to running “dnf remove tmux” in terms of what it does on your system.
Rollback does what you might think: it basically goes through all the updates between the most recent and the one specified and it reverses each of them, theoretically restoring the system to the state it was in at that time.
I say “theoretically” because this isn’t a perfect system. For example, if a transaction removed some software that you had customized, a rollback will reinstall that software but may not restore the configuration you had applied, which could cause issues if that configuration was important. This gets into a lot of complicated territory; tbh it’s a powerful but imperfect system. Something like Atomic gives you more of a guarantee that a rollback will work, because the whole system state is defined by the installer, not just the packages.
There’s one more note: Fedora removes old versions of packages from its repos so you’ll need to add their historical archives repo to do certain things. I forget how to do that off the top of my head.
This may not be what you want exactly but it’s a powerful tool that’s good to be aware of.
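Putting the history commands together, a typical session might look like this (transaction ID 5 is just an example; get real IDs from the list):

```
dnf history list           # every transaction since install
dnf history info 5         # details of transaction 5
sudo dnf history undo 5    # reverse just transaction 5
sudo dnf history rollback 5  # reverse everything after transaction 5
```

The first two are read-only, so they’re safe to poke around with; undo and rollback actually change the system.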
dnf remove @gnome-desktop
dnf autoremove
For the curious.
Note that the autoremove might not do anything here. Removing @gnome-desktop removes the whole package group and should get everything in it.
I imagine something like Fedora with an RT kernel and CPU partitioning could be as reliable as an old Amiga. CPU partitioning would let you reserve one or more cores for specific applications such as music production software. Now, the software in question may not be up to the task but that’s a different problem.
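As a rough sketch of what CPU partitioning looks like (the core numbers here are just an example for a 4-core machine), you’d add something like this to the kernel command line:

```
isolcpus=2,3 nohz_full=2,3 irqaffinity=0,1
```

That keeps the scheduler and most interrupts off cores 2 and 3. Then you’d pin your audio software to the reserved cores, e.g.:

```
taskset -c 2,3 ardour
```

(Ardour is just an example program here.) Tools like tuned can manage this kind of setup for you instead of hand-editing the boot parameters.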
They used to be good, almost as good as the Windows drivers. Lately, though, they’ve been kinda trash and the AMD open driver is pretty alright now. (Performance isn’t as good but other than that it’s good.)
Doesn’t KDE basically have color management with 6 or 6.1 or something?
Ubuntu previously exempted Gnome point releases from major testing on the grounds that Gnome’s point releases are all bug fixes and thus don’t require Ubuntu’s full testing process. Gnome shipped a new major feature in a point release, so Ubuntu said “oops, guess we gotta test their point releases after all”. Practically, it means Gnome point releases take longer to get into Ubuntu than they previously did (but are better tested for bugs).
There’s a real opportunity here and we can either take it and run or we can let it pass us by.
They’re definitely going to back down. I’m guessing they’re going to back down a little (maybe create an opt out for the enterprise customers?) and then claim victory, but we’ll see.
Basically what it’s doing is booting to an alternate OS configuration to do the install. It’s way easier to just reboot again rather than tear down the installer environment and go into a normal one. That’s basically a reboot in all but name. It’s annoying to have to enter your encryption passphrase twice, though.
I feel like a lot of Linux behaviors tell me most Linux people don’t encrypt their data, which tbh should not only be the default but should be difficult to opt out of. Apple actually does this one right. Encryption is just the way it works.
Pretty sure you can configure “open as root” in some file managers. Also you can configure a gksudo (or similar) setup.
Really though, that makes me think. The file manager should detect you’re opening something you don’t have write access to and ask if you want to authenticate as root to open it.
Blue check on Twitter… Someone who’s paying $10/mo to the world’s richest person has an overinflated sense of importance… well… What’re you gonna do?