Not to step on anyone’s toes, but HTTP 4xx errors usually mean the server thinks the problem is with the request itself (like needing to log in, an expired link, or access restrictions). 5xx errors are the ones that point to real server-side failures. So in this case it might just be the hosting platform enforcing access rules rather than a problem with Summit itself. Just sharing a thought.
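If you want to double-check which class of error it actually is, curl can print just the status code. The URL here is only a placeholder, swap in one of the image links that's failing for you:

```
# placeholder URL - replace with the image link that's failing
curl -s -o /dev/null -w '%{http_code}\n' 'https://example.com/some-image.webp'
```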
Edit/afterthought:
Something that occurred to me afterwards: sometimes 400s can also pop up from network edge cases — like an interrupted or partial request reaching the server in an incomplete form (RFC 7231 §6.5.1). And since you’re on a different Lemmy instance, it could just be that side acting up. Lemmy servers typically run behind an nginx reverse proxy (default setup), and if that proxy or backend gets overloaded, it may return 400s. Add in federation quirks like sync or backlog delays, and it would explain why it sometimes works fine and sometimes doesn’t — different people see different images fail at different times.
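If anyone wants to poke at it, here's a rough sketch of how you could tell which layer is answering (placeholder URL again, and the exact headers will vary by instance):

```
# headers only (HEAD request) - the Server: line usually shows whether
# nginx in front answered or something behind it
curl -sI 'https://example.com/some-image.webp'

# full response including the body - nginx's own 400 error page tends
# to look very different from an application-level error
curl -si 'https://example.com/some-image.webp' | head -n 40
```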
There are many good answers here already, just wanted to add to them.
It sounds very much like what you’re seeing could be either a driver fault or a memory-related issue. Both can manifest as hard system freezes where nothing responds, not even Ctrl+Alt+Fx or SysRq. You mentioned this briefly before, and that still fits the pattern.
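One thing worth doing before the next freeze: many distros restrict the magic SysRq keys by default, so "SysRq doesn't respond" may just mean it's disabled. A rough sketch for enabling it (standard sysctl mechanism, paths are the usual conventions):

```
# check the current SysRq policy (0 = disabled, 1 = all functions allowed,
# other values are bitmasks of allowed functions)
cat /proc/sys/kernel/sysrq

# enable all SysRq functions for this boot
echo 1 | sudo tee /proc/sys/kernel/sysrq

# make it persistent across reboots
echo 'kernel.sysrq = 1' | sudo tee /etc/sysctl.d/90-sysrq.conf
sudo sysctl --system
```

If Alt+SysRq+B still does nothing during a freeze with this enabled, the kernel itself is likely wedged, which points more toward a driver or hardware fault than a userspace hang.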
If it’s a driver issue, it’s often GPU or storage related. A kernel module crashing without proper recovery can hang the whole system—especially graphics drivers like NVIDIA or AMD, or low-level I/O drivers handling your SSD or SATA controller. Checking dmesg -T and journalctl -b -1 after reboot for GPU resets, I/O errors, or kernel oops messages might reveal clues.
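Something along these lines is a reasonable starting point (the grep patterns are just my guesses at what to look for, and the previous-boot journal only exists if persistent journaling is enabled):

```
# current boot - won't show the freeze itself after a hard reset,
# but catches errors that recur early
sudo dmesg -T | grep -iE 'amdgpu|nvidia|drm|gpu|reset|i/o error|nvme|ata|oops|bug:'

# previous boot - this is where the freeze should show up, assuming
# journald keeps persistent logs (/var/log/journal exists)
journalctl -b -1 -p warning | grep -iE 'gpu|drm|reset|i/o error|oops|watchdog'
```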
If it’s memory pressure or the OOM killer, that can also lock a machine solid, depending on what’s being killed. When the kernel runs out of allocatable memory, it starts terminating processes to free RAM. If the wrong process goes first—say, something core to the display stack or a driver thread—you’ll see a full freeze. You can verify this by searching the logs for “Out of memory” or “Killed process” messages.
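A quick way to check, roughly:

```
# OOM-killer activity in the previous boot's kernel messages
journalctl -b -1 -k | grep -iE 'out of memory|oom-killer|killed process'

# same search across all retained boots, with timestamps
journalctl -k | grep -iE 'out of memory|oom-killer|killed process'
```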
A failing DIMM or a bad memory map region could also behave like this, even if Windows seems fine. Linux tends to exercise RAM differently, especially with heavy caching and different scheduling. Running memtest86+ overnight is worth doing just to eliminate that angle.
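memtest86+ itself has to run from boot media, but in the meantime you can at least see whether the kernel has already logged machine-check or EDAC memory errors. This only works if your platform and drivers expose them, so treat it as a best-effort check rather than a clean bill of health:

```
# machine-check exceptions and memory-controller (EDAC) error reports
sudo dmesg | grep -iE 'machine check|mce|edac|hardware error'

# corrected-error counters, if an EDAC driver is loaded for your memory controller
grep -H . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
```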
If your live USB sits idle for hours without freezing, that strongly hints it’s a driver or kernel module loaded in your main install, not a hardware fault. If it does freeze even from live media, you’re probably looking at a low-level memory or hardware instability.
The key next steps:
Check system logs after reboot for OOM or GPU-related kernel messages.
Run memtest86+ for several passes.
Try a newer (or older) kernel to rule out a regression (a quick way to see what's installed is sketched below).
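For the kernel step, assuming a GRUB plus apt-style setup (adjust for your distro), something like this shows what you have to work with:

```
# kernel you're running right now
uname -r

# kernel images installed on disk
ls /boot/vmlinuz-*

# on apt-based distros (assumption), installed kernel packages
dpkg --list 'linux-image*' | grep '^ii'
```

Older installed kernels are normally reachable from GRUB's "Advanced options" submenu at boot, so you can test one without uninstalling anything.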
If it’s indeed a driver or OOM event, both would explain the “total lockup” behavior and why Windows remains unaffected. Linux’s memory management and driver model are simply less forgiving when something goes sideways.