• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: October 4th, 2023

  • tal@lemmy.today to Linux Gaming@lemmy.ml · Halo infinite on Linux (edited 7 months ago)

    I’ve never played Halo Infinite or used Mint, so I’m going to have to be a bit hands-off. I don’t know whether there’s a trivial fix, whether you’re using Wayland, or whether you have an Nvidia or AMD GPU, but I can try to give some suggestions.

    The mouse movement also feels off like I have mouse acceleration on or input lag.

    Well, it could be picking up the windowing environment’s mouse acceleration. Is there an option in Halo’s video settings for something like “fullscreen” versus “borderless fullscreen”/“windowed”/“borderless windowed”? If it’s set to any of the latter, my guess is that it’s most likely using the windowing environment’s acceleration.

    A reasonable test might be turning off your desktop environment’s mouse acceleration, running the game, and seeing whether the issue goes away. I don’t think that there’s any non-desktop mouse-acceleration layer that could be causing it.
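    On a GNOME-style desktop that can be done from a terminal with gsettings; I don’t know offhand what schema Mint’s Cinnamon uses, so the schema name below is an assumption on my part, and the Mouse settings panel may be the easier route there. Roughly:

      # Check the current pointer acceleration profile (GNOME-style schema;
      # on Cinnamon the schema name may differ -- the settings GUI works too):
      gsettings get org.gnome.desktop.peripherals.mouse accel-profile

      # 'flat' disables acceleration for the test; 'default' restores it:
      gsettings set org.gnome.desktop.peripherals.mouse accel-profile 'flat'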

    I’m not familiar with the issue, but I also see some discussion online about Proton-GE – the GloriousEggroll build, not Valve’s – taking in patches for raw input.

    https://www.reddit.com/r/linux_gaming/comments/1b9sga3/wayland_mouse_sensitivity_inconsistency_in_game/

    But I don’t know whether that’s relevant to Wayland or not (or whether you’re using Wayland). Probably wouldn’t hurt to give Proton-GE a shot, though, rather than Proton Experimental, if you’re otherwise unable to resolve it.

    When I run the game on Windows I get 144 fps almost constantly; on Linux I get 70-80.

    The first thing I’d do is glance at the ProtonDB page, see if anyone has run into performance problems and has a fix. That’s a good first stop for “something under Proton isn’t working the way I want”.

    https://www.protondb.com/app/1240440/

    If that doesn’t help…

    Just to rule out anything silly like the monitor refresh rate being set low or vsync capping the framerate: I’m guessing that you have a 144Hz-capable monitor and that it’s actually running at 144Hz in the game on Linux?
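    If you’re on X11, a quick way to check is to run xrandr in a terminal; the active mode and refresh rate are the ones marked with an asterisk. (On a Wayland session you’d check the compositor’s display settings instead.)

      # X11 only: the mode and refresh rate currently in use are flagged with '*':
      xrandr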

    If so, I suppose that the next thing I’d look at is whether your CPU or GPU is the bottleneck, as it’s most likely one or the other.

    There are various in-game HUDs for looking at that; I don’t know what’s popular these days.

    Looking at the Halo Infinite ProtonDB page, I see people talking about using mangohud there, so I imagine that it’d probably work.

    The GitHub page says that it can show both CPU and GPU load in-game.

    https://github.com/flightlessmango/MangoHud
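    If you go that route, the usual way to enable it for a Steam game, per MangoHud’s documentation, is through the game’s launch options; something like:

      # Steam: right-click the game -> Properties -> Launch Options:
      mangohud %command%

      # Or wrap a game started from a terminal (path is just an example):
      mangohud /path/to/game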

    If that doesn’t work for you and you have an AMD card, there’s a utility called “radeontop” that will let you see your GPU’s load; it runs in a console. I don’t know what desktop environment you have set up in Mint or what Mint even uses by default, but if you know how to flip away from the game to another workspace, you can take a look at what it shows there. It looks like Nvidia’s equivalent is “nvidia-smi”. I’ve used those before when monitoring GPU load. The “top” command will show you how busy each of your CPU cores is; some example invocations are below.
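    Roughly what I’d run in a terminal on another workspace while the game is going (I’m assuming radeontop needs installing from Mint’s repositories; nvidia-smi ships with Nvidia’s proprietary driver):

      # AMD: live GPU utilization in the console:
      radeontop

      # Nvidia: GPU utilization and memory, refreshed every second:
      nvidia-smi -l 1

      # CPU: press '1' inside top to show each core separately:
      top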


  • I mean, I’d like games to be available, but I don’t see archive.org’s legal basis for providing them. The stuff is copyrighted, and lack of commercial availability doesn’t change that.

    Yeah, some abandonware sites might try to just fly under the radar, and some rightsholders might just not care, since there might not be much value left there for them. But once you’re in a situation where a publisher is fighting a legal battle with you, you’re clearly not taking that route.

    You can argue that copyright law should be revised. Maybe copyright on video games should be shorter or something. Maybe there should be some provision that if a product isn’t offered for sale for longer than a certain period of time, copyright goes away. But I don’t think that this is the route to get that done.


  • Apparently the backdoor reverts back to regular operation if the payload is malformed or the signature from the attacker’s key doesn’t verify. Unfortunately, this means that unless a bug is found, we can’t write a reliable/reusable over-the-network scanner.

    Maybe not. But it does mean that you can write a crawler that slams the door shut for the attacker on any vulnerable systems.

    EDIT: Oh, maybe he just means that it reverts for that single invocation.


  • Also, even aside from the attack code here having unknown implications, the attacker made extensive commits to liblzma over quite a period of time, and added a lot of binary test files to the xz repo similar to the one that hid the exploit code here. He was also signing releases for some time prior to this, and could have released a signed tarball that differed from the git repository, as he did here. The 5.6.0 and 5.6.1 releases appear to be limited to this backdoor aimed at sshd, but it’s not impossible that he added vulnerabilities prior to this. Xz is used during the Debian packaging process, so code he could change is active at some sensitive points on a lot of systems.

    It is entirely possible that this is the first vulnerability that the attacker added, and that all the prior work was to build trust. But…it’s not impossible that there were prior attacks.


  • Honestly, while the way they deployed the exploit helped hide it, I’m not sure that they couldn’t have figured out some similar way to hide it in autoconf stuff and commit it.

    Remember that the attacker had commit privileges to the repository, was a co-maintainer, and the primary maintainer was apparently away on a month-long vacation. How many parties other than the maintainer are going to go review a lot of complicated autoconf stuff?

    I’m not saying that your point’s invalid. Making sure that what goes out to downstream matches what’s in the git repository is probably a good security practice. But I’m not sure that it really avoids this.

    Probably a lot of good lessons that could be learned.

    • It sounds like social engineering, including maybe use of sockpuppets, was used to target the maintainer, to get him to cede maintainer status.

    • Social engineering was used to pressure distribution package maintainers to pull the new releases in.

    • Apparently some automated testing did trip on the changes, like some fuzz-testing software at Google, but the attacker managed to get changes committed to avoid it. This was one point where a light really did get aimed at the changes. That being said, the attacker here was also a maintainer, and I don’t think that the fuzzer guys consider themselves responsible for identifying security holes. And while the fuzzing did highlight the use of ifunc, it sounds like what it tripped on was legitimately a bug. But, still, it might be possible to have some kind of security examination take place when fuzzing software trips, especially if the fuzzing software isn’t under the control of a project’s maintainer (as it was not, here).

    • The changes were apparently aimed at getting in shortly before the Ubuntu freeze; the attacker was apparently recorded asking about and confirming that Ubuntu fed off Debian testing. Maybe more attention needs to be paid to things that go in shortly before a freeze.

    • Part of the attack was hidden in autoconf scripts. Autoconf, especially with generated data going out the door, is hard to audit.

    • As you point out, using a release chain that ensures that anything that goes out to downstream is also in git would be a good idea.

    • Distros should probably be more careful about what gets linked into security-critical binaries like sshd. Apparently the library linkage was very much not necessary to achieve what they wanted to do in this case; a very small amount of code could have provided the functionality that was actually needed.

    • Unless the systemd-notifier changes themselves were done by an attacker, it’s a good bet that the Jia Tan group and similar groups are monitoring software, looking for newly-introduced dependencies like the systemd-notifier one. Looking for similar situations that might affect other remotely-accessible servers might be a good idea.

    • It might be a good idea to have servers run their auth component in an isolated module. I’d guess that it’d be possible to run the portion of sshd that accepts incoming connections (and is exposed to the outside, unauthenticated world) as an isolated process, kind of inetd-like functionality; run the portion that performs authentication (which is also exposed to the outside) as another isolated process; and have the code that runs only after authentication succeeds run separately, with only that last part bringing in most libraries.

    • I’ve seen some arguments that systemd itself is large and complicated enough that it lends itself to attacks like this. I think that maybe there’s an argument that some sort of distinction should be made between more- or less-security-critical software, and different policies applied. Systemd alone is a pretty important piece of software to be able to compromise. Maybe there are ways to rearchitect things to be somewhat more resilient and auditable.

    • I’m not familiar with the ifunc mechanism, but it sounds like attackers consider it a useful route for hiding injected code. Maybe have some kind of auditing that looks for it; a quick way to spot ifunc use is sketched after this list.

    • The attacker modified the “in the event of an identified security hole” directions to discourage disclosure to anyone except the project for a 90-day embargo period, and made himself the contact point. That would have provided time to keep using the exploit. In practice, perhaps software projects should not be the only contact point; it might be better for the norm to be notifying both the project and a separate security contact unaffiliated with the project. That increases the risk of the exploit leaking, but protects against compromise of the project maintainership.
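    On the ifunc point, one rough way to audit for it (my own suggestion, not something from the xz write-ups): GNU indirect functions show up with an IFUNC symbol type in a built library, and the source-level attribute is easy to grep for. For example:

      # IFUNC symbols in a built shared library (library path is just an example):
      readelf --syms --wide /usr/lib/x86_64-linux-gnu/liblzma.so.5 | grep -w IFUNC

      # Source-level uses of the ifunc attribute in a checked-out tree:
      grep -rn '__attribute__((ifunc' .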


  • You probably are fairly safe. Yeah, okay, from a purely technical standpoint, your server was wide open to the Internet. But unless some third party managed to identify and leverage the backdoor in the window between you deploying it and it being fixed, only the (probably state-backed) group behind it would have been able to make use of it. They probably aren’t going to risk exposing their backdoor by exploiting it on your system unless they believe that you have something that would be really valuable to them.

    Maybe if you’re a critical open-source developer, grabbing your signing keys or other credentials might be worth it to them, given that they seem to be focused on supply-chain attacks, but most people just aren’t worth the risk. It only takes them hitting one system with an intrusion-detection system that picks up on the break-in, leaving traces behind, and one determined person tracking down what happened, and they’ve destroyed their exploit.