The worst kind of Internet-herpaderp.

  • 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 24th, 2023




  • Running Galaxy with proton-ge. Sure, it doesn’t install Linux versions of games or anything, but it works.

    Basically what I did was:

    • run arch btw, obviously and loaded with sarcasm, as always
    • install https://aur.archlinux.org/packages/proton-ge-custom-bin
    • acquired the Galaxy installer (GOG’s site hides download links on Linux… why???)
    • proton gog-galaxy-installer.exe to install. It installs to ~/.local/share/proton-pfx/0/pfx/drive_c/Program Files/GOG Galaxy (or somesuch)
    • I made a shortcut to launch galaxy.exe with Proton from that directory, using the directory as the working directory
    • profit.
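A minimal launcher sketch of the steps above. The prefix path is the one from my install, and it assumes the AUR proton-ge-custom-bin package put a `proton` wrapper on your PATH; adjust both to your setup.

```shell
#!/bin/sh
# Hypothetical launcher based on the steps above; the prefix path "or somesuch"
# depends on how proton-ge-custom-bin set up the Proton prefix on your machine.
GALAXY_DIR="$HOME/.local/share/proton-pfx/0/pfx/drive_c/Program Files/GOG Galaxy"

if [ -d "$GALAXY_DIR" ]; then
    # Use the install directory as the working directory, then hand off to Proton.
    cd "$GALAXY_DIR" && exec proton galaxy.exe
else
    echo "Galaxy not found at: $GALAXY_DIR"
fi
```

Dropping that into a `.desktop` file (or a Steam non-Steam shortcut) gives you a one-click launch.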

    Seems to work fine. Some older versions of proton-ge and/or the nvidia driver under Wayland made the client a bit sluggish, but that has since fixed itself. Games like Cyberpunk work fine. The Galaxy overlay doesn’t, though.




  • yep, I’m aware. I just haven’t observed* any compilation stutters - so in that sense I’d rather keep it off and save the few minutes (give or take) on launch

    *Now, I’m sure the stutters are there and/or the games I’ve recently played on linux haven’t been susceptible to them, but the tradeoff is worth it for me either way.


  • well, I do have this one game I’ve tried to play, Enshrouded, and it does the shader compilation on its own, in-game. The compiled shaders seem to persist between launches, reboots, etc., but not across driver/game updates. So it stands to reason they are cached somewhere. As for where, not a clue.

    And if it’s the game doing the compilation, I would assume non-Steam games can do it too. Why wouldn’t they?

    But, ultimately, I don’t know - just saying these are my observations and assumptions based on those. :P
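One place such caches might end up is the GPU driver’s own shader cache directory. These are the default locations for Mesa and the NVIDIA GL cache as far as I know (a guess, not confirmed for Enshrouded specifically), and you can peek at them like this:

```shell
# Peek at common driver-side shader cache locations and their sizes.
# Paths are the usual defaults; your driver version may use different ones.
for d in "$HOME/.cache/mesa_shader_cache" \
         "$HOME/.cache/nvidia/GLCache"; do
    if [ -d "$d" ]; then
        printf '%s: %s\n' "$d" "$(du -sh "$d" | cut -f1)"
    else
        printf '%s: not present\n' "$d"
    fi
done
```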



  • Overall I’m still getting used to Steam’s “processing vulkan shaders” step pretty much every time a game updates, but it’s worth it for the extra performance.

    That can be turned off, though. Haven’t noticed much of a difference after doing so (though, I am a filthy nvidia-user). Also saves quite a bit of disk space, too.
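If you’re curious how much disk the pre-compiled shaders are actually eating, this is a quick check. The path assumes a default Steam install; your steamapps directory may live elsewhere.

```shell
# Rough check of how much disk Steam's pre-compiled Vulkan shaders use.
# Assumes a default Steam location; adjust if your library is elsewhere.
CACHE="$HOME/.steam/steam/steamapps/shadercache"
du -sh "$CACHE" 2>/dev/null || echo "no shadercache at $CACHE"
```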







  • Hastily read around in the related issue threads, and it seems like on its own vm.max_map_count doesn’t do much… as long as apps behave. It’s some sort of “guard rail” which prevents processes from getting too many “maps”. Still kinda unclear what these maps are and what happens if a process gets to have excessive amounts of them.

    That said: https://access.redhat.com/solutions/99913

    According to kernel-doc/Documentation/sysctl/vm.txt:

    • This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries.
    • While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation.
    • The default value is 65530.
    • Lowering the value can lead to problematic application behavior because the system will return out of memory errors when a process reaches the limit. The upside of lowering this limit is that it can free up lowmem for other kernel uses.
    • Raising the limit may increase the memory consumption on the server. There is no immediate consumption of the memory, as this will be used only when the software requests, but it can allow a larger application footprint on the server.

    So, at the risk of higher memory usage, applications can go wroom-wroom? That’s my takeaway from this.
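For the curious, both the limit and a process’s actual map count are visible under /proc (Linux-only), so you can see how close anything gets to the guard rail:

```shell
# Read the current vm.max_map_count limit and count how many memory map
# areas this shell itself has (one line in /proc/PID/maps per mapping).
limit=$(cat /proc/sys/vm/max_map_count)
used=$(wc -l < "/proc/$$/maps")
echo "max_map_count: $limit, this shell uses: $used"

# To raise it temporarily (as root): sysctl -w vm.max_map_count=1048576
# To persist across reboots, put the same setting in a file under /etc/sysctl.d/
```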

    edit: ofc. I pasted the wrong link first. derrr.

    edit: Suse’s documentation has some info about the effects of this setting: https://www.suse.com/support/kb/doc/?id=000016692