[-] jrgd@lemm.ee 3 points 2 months ago

The potential common cause points toward the GPU drivers (note the games in Proton, the libgtk4 segfaults, and the libnvidia-glcore segfaults). What NVidia driver version is in use? A quick search found a rough match to the shown symptoms that is both recent and matches the hardware (NVidia Polaris desktop). Perhaps the driver version in use exhibits a similar regression for such GPUs?

[-] jrgd@lemm.ee 2 points 2 months ago

The flatpak documentation has a semi-relevant page on setting up a flatpak repo utilizing GitLab Pages and GitLab's CI runners in a pipeline. Obviously, you'd need to swap GitLab Pages out for a webserver of your choice and port the CI logic over to Gitea Actions (ensuring your Gitea instance is set up for it).

A flatpak repo itself is little more than a web server with an associated GPG key for checking the signatures of assembled packages. The docs recommend setting up the CI pipeline to run less on-commit to the package repos and more along the lines of checking for available updates on an interval, though I imagine other scenarios in a fully-controlled environment such as a selfhosted one might offer some flexibility.
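A minimal sketch of what such a CI job would run, assuming flatpak and flatpak-builder are available on the runner. The manifest name, GPG key ID, and rsync target are placeholders; the flatpak-builder and build-update-repo invocations themselves follow the flatpak docs:

```shell
# Build the app from its manifest, committing the result into a local
# OSTree repo and signing it with the repo's GPG key (placeholder key ID).
flatpak-builder --force-clean --repo=repo \
    --gpg-sign=ABCDEF0123456789 builddir org.example.App.yml

# Regenerate repo metadata (and static deltas for faster client updates),
# signing the summary with the same key.
flatpak build-update-repo --generate-static-deltas \
    --gpg-sign=ABCDEF0123456789 repo

# Publish: the repo directory is plain static files, so any webserver works.
rsync -a repo/ user@webhost:/var/www/flatpak/
```

Clients would then add the published directory as a remote with the matching public key.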

[-] jrgd@lemm.ee 3 points 2 months ago

I started dual booting Linux after an upgrade to an insider preview of Windows 10 soft-bricked my Windows 7 install. I later stopped booting into Windows and eventually reclaimed the partitions to extend whatever distro was installed at that point when the actual release of Windows 10 decided to attempt automatically upgrading my Windows 7 system, soft-bricking it a second time. 2016 onwards, I haven't used Windows on my systems outside of occasionally booting LTSC in a VM.

[-] jrgd@lemm.ee 3 points 3 months ago* (last edited 3 months ago)

Running the same memory constraints on a 1.18 vanilla instance, most of the non-heap memory allocation largely comes from ramping the render distance from 12 chunks to 32 chunks. The game only uses ~0.7 GiB of non-heap memory at a sane render distance in vanilla versus ~2.0 GiB at 32 chunks. I did forget that the render distance no longer caps out at 16 chunks in vanilla. Far render distances like 32 chunks will naturally balloon the off-heap memory usage.
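To illustrate the split, here is roughly how the relevant JVM arguments divide things up (as you might set them in a launcher). The flag values are arbitrary examples, not recommendations:

```shell
# -Xms/-Xmx cap only the Java heap. Metaspace, direct buffers, and other
# native allocations sit on top of that cap, which is where the
# render-distance growth shows up.
java -Xms4G -Xmx4G \
     -XX:MaxMetaspaceSize=512M \
     -XX:MaxDirectMemorySize=1G \
     -jar minecraft.jar
```

So a 4 GiB -Xmx can still translate to ~6 GiB of real usage once the off-heap portion is counted.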

[-] jrgd@lemm.ee 2 points 3 months ago* (last edited 3 months ago)

I think the fact that RCT Classic is only worth getting on mobile because there are better options on PC doesn't help make the case that RCT Classic should be a shining example of 'mobile gaming'. RCT Classic is a bit above the bare minimum for an acceptable rerelease of an older game.

[-] jrgd@lemm.ee 2 points 3 months ago

Some additional reasons:

  • The native packages are broken or otherwise missing functionality
  • The app you're using is quite volatile and needs upstream updates faster than the native packages are pushed

Flatpak specific:

  • You want the app to be sandboxed

[-] jrgd@lemm.ee 3 points 3 months ago

Generally, yes. It's not nearly as bad as, say, 2015, but NVidia has a long-standing history of being difficult to deal with, and of users having to make constant compromises. For instance, NVidia didn't have properly working Wayland support on most environments until recently due to the awful flickering that many users experienced. Things like power saving, dual-GPU handoff, general OpenGL performance, and frame stability and tearing (X.Org) have all been historical and/or current pain points for using NVidia GPUs versus AMD or Intel GPUs.

[-] jrgd@lemm.ee 2 points 4 months ago

2-2-1 still implies having a remote backup. I don't see how this particular threat destroys a 2-2-1 setup.

[-] jrgd@lemm.ee 3 points 5 months ago* (last edited 5 months ago)

From the Github README:

Also, for the very first time, introducing tiny11 core builder! A more powerful script, designed for a quick and dirty development testbed. Just the bare minimun, none of the fluff. This script generates a significantly reduced Windows 11 image. However, it's not suitable for regular use due to its lack of serviceability - you can't add languages, updates, or features post-creation. tiny11 Core is not a full Windows 11 substitute but a rapid testing or development tool, potentially useful for VM environments.

It literally says that it cannot be updated from a built OS install. You need to reinstall tiny11 by rebuilding the install image with a newer Windows 11 base image. Obviously it would be best to do this every time there is a security patch release for Windows 11.

EDIT: Rereading further, the bigger tiny11 image might be able to be updated in-OS. I'm going to dig through the ps1 scripts to see whether the README holds up on that un-noted capability.

EDIT2: I don't see any registry edits that knock Windows Update offline. I'll test it in a VM to see if things work (from a prebuilt image, once it eventually downloads). Though I am unsure at this moment whether such an image's changes will survive a Windows update at all.

EDIT3: VM not tested yet, but an issue on the GitHub seems to corroborate my initial assumption.

EDIT4: VM tested. Things claimed to be patched out (Edge) came back with one of the cumulative updates applied shortly after install. Other cumulative updates are being blocked, erroring instantly on attempting to install after download (perhaps unintentionally). The image downloaded claimed to be 23H2, but Windows 11 22H2 was installed, seemingly with no way to actually upgrade. I think my point stands.

[-] jrgd@lemm.ee 3 points 5 months ago

Windows does have memory compression, though you can't really change the algorithm or how aggressive it is. AFAIK it is just an on/off toggle.

[-] jrgd@lemm.ee 3 points 6 months ago

I have been utilizing BunkerWeb for some of my selfhosted sites since back when it was bunkerized-nginx. It is indeed powerful and flexible, allowing multi-site proxying and hosting with semi-flexible per-site security tweaks (some security options are still forcibly global, a limitation).

I use it on podman myself, and while it is generally great for having the OWASP CRS, general traffic-filtering targets, and more built on top of nginx in a Docker container, the way BunkerWeb needs to be run hasn't really remained stable between versions. Across several version upgrades there have been severe breaking changes that required rereading the setup documentation to get the new version functional.
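For a rough idea of what a single-container podman deployment looks like: the image name is the real bunkerity/bunkerweb one, but treat the pinned tag, the internal 8080/8443 ports, and the exact setting names below as assumptions to verify against the docs for whatever version you deploy, given the breaking changes mentioned above:

```shell
# Multisite mode: per-site settings are prefixed with the site's hostname.
# Hostnames, backend address, and tag here are placeholders.
podman run -d --name bunkerweb \
    -p 80:8080 -p 443:8443 \
    -e MULTISITE=yes \
    -e SERVER_NAME="app1.example.com app2.example.com" \
    -e app1.example.com_USE_REVERSE_PROXY=yes \
    -e app1.example.com_REVERSE_PROXY_HOST=http://10.0.0.5:3000 \
    docker.io/bunkerity/bunkerweb:1.5.6
```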

[-] jrgd@lemm.ee 3 points 7 months ago* (last edited 7 months ago)

Ah, that would put a bit of complication into things. If you want to actually accomplish this, though, you should largely start with the same steps as a standard system install, using a second USB flash drive to write the distro onto the external SSD and leaving enough space to build the rest of the partitions you need. If you intend to make a Windows-shared partition, it is probably best to put that partition either first or just behind the EFI partition so that Windows systems won't have a hard time finding it. Exfat or NTFS would be a better choice than FAT32 for this type of partition.

I would still generally recommend keeping the live distros on a separate bootable drive, but you can size and reserve dummy partitions, after the rest of your normal dual-boot installs and shared data partitions, for live installers to overwrite. There will likely be some experimentation in getting the OS bootloader (GRUB as provided by Fedora, in this case) to pick them up and add them as boot entries. You should (depending on the live image) be able to dd-write an image to its reserved partition as long as the partition is sized equal to or larger than the ISO image (it's best to give the partition at least a few blocks of oversize when writing ISOs directly).
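The dd step above can be sketched as follows; the function refuses to write when the reserved partition is smaller than the image, and the device/file names are hypothetical. The demo at the end dry-runs the guard against scratch files rather than a real partition:

```shell
#!/bin/sh
set -eu

# Write a live ISO into a reserved partition, aborting if the target
# is smaller than the image.
write_live_image() {
    iso=$1; part=$2
    iso_bytes=$(stat -c %s "$iso")
    if [ -b "$part" ]; then
        part_bytes=$(blockdev --getsize64 "$part")
    else
        part_bytes=$(stat -c %s "$part")  # lets you dry-run against a plain file
    fi
    if [ "$part_bytes" -lt "$iso_bytes" ]; then
        echo "refusing: $part ($part_bytes B) is smaller than $iso ($iso_bytes B)" >&2
        return 1
    fi
    # conv=fsync flushes the data out to the device before dd exits
    dd if="$iso" of="$part" bs=4M conv=fsync
}

# Real usage would look like: write_live_image fedora-live.iso /dev/sda5
# Dry run on scratch files so the size guard can be seen working:
tmp=$(mktemp -d)
head -c 1048576 /dev/zero > "$tmp/live.iso"    # 1 MiB stand-in ISO
head -c 2097152 /dev/zero > "$tmp/part.img"    # 2 MiB stand-in partition
write_live_image "$tmp/live.iso" "$tmp/part.img" && echo "dry run ok"
rm -rf "$tmp"
```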

Edit: In a proper Fedora install, you should almost never need to disable SELinux or set it to permissive unless you know exactly why you don't want it.

