I have recently built a new PC, to be used as a server. For months now, I have been getting unexplained crashes, sometimes after a few minutes, sometimes after a few days, where the PC just reboots without any trace in the logs. Just normal occasional status logs, and then, a few seconds later, the log of a normal boot process.
This is slowly driving me crazy because I just can't make out the issue. I have tried multiple different Linux installs, swapped out the SSD and the PSU, and ran a RAM test, but the behaviour still persists.
Today something was different. Instead of rebooting, it showed me this blue screen, this time finally with a log. But I still can't make out the issue. Some quick internet searches turn up only very vague answers: everything from software to hardware, and PSU to CPU.
Can any Linux wizard help me fix my problem? Link to the log
Update: I have now faced an even weirder issue. I booted up, installed cpupower like a comment suggested, installed man to look up its documentation, and then the screen froze and I was forced to reboot the PC by holding the power button for 3 s. When I booted back up, my bash history was reset to a state from a few days back (~/.bash_history mod time from 2 days ago), even though I have rebooted several times since then and have not had any persistence errors like this. man was also not installed anymore. Even weirder, cpupower was still installed. So it seems like some data was saved while other files were discarded. I will now use a second SSD and try to replicate this. I now suspect some kind of storage issue, even though the two SSDs in question have never caused issues in my laptop. This seems scary; I have never witnessed such a weirdly corrupted Linux install, ever.
I would not worry too much about a somehow "forgetful" file system immediately after a hard power cycle. This is exactly what happens if data could not be flushed to disk. Thanks to journaling, your FS does not get corrupted, but data lingering in caches is still lost and discarded on fsck, to retain a consistent FS. I would recommend repeating the installations you did before the crash, and maybe shoving a manual sync behind them, to make sure you don't run into totally weird "bugs" with man later, when you no longer remember this as a possible cause. Your bash history is only saved to file on clean shell exit, and is generally a bit non-intuitive, especially with multiple interactive shells in parallel, so I would personally disregard the old .bash_history file as "not a fault, only confusing" and let that rest, too.
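For example (assuming an apt-based distro and guessing at the package name for man; adjust both for your setup):

sudo apt install man-db    # redo whatever you installed right before the crash
sync                       # block until dirty caches are actually written to disk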
Starting a long SMART self-test and keeping a keen eye on the drive's error logs (smartctl -l error <drive>), or better yet all available SMART info (smartctl -x <drive>), to see if anything seems fishy with your drive, is a good idea anyway. Keep in mind that your mainboard / drive controller or its connection may just as well be (intermittently) faulty. In ye olden times, a defective disk cable or socket messed up my system once or twice. You will see particular faults in your syslog, though - this is not invisible. A flaky disk does not just give you a kernel panic; you get some sprinkling of I/O errors as well. If your drive is SMART-OK, but you clearly get disk I/O errors, it is time to inspect and clean the SSD socket and contacts and re-seat once more. If you never saw any disk I/O errors, and your disk's logs are clean, I'd consider the SSD not an issue.

If you encounter random kernel panics, random as in "in different and unrelated call stacks that do not make sense in any other way", I agree that RAM is a likely culprit, or an electrical fault somewhere on the mainboard. It's rare, but it happens. If you can, replace (only) the mainboard, or better yet, take a working PC with compatible parts and put your suspected broken board into it, to see if the previously working machine now faults. "Carrying the fault with you" is easier/quicker than proving an intermittent fault gone.
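Roughly the smartctl invocations I mean above, as a sketch (replace /dev/sda with your actual device; an NVMe drive shows up as /dev/nvme0 or similar):

sudo smartctl -t long /dev/sda     # kick off the long self-test; it runs on the drive itself
sudo smartctl -l selftest /dev/sda # check self-test progress and results later
sudo smartctl -l error /dev/sda    # look at the drive's own error log
sudo smartctl -x /dev/sda          # or dump everything SMART has to offer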
Unless you get different kernel panics, my money's still on your C-states handling. I'd prefer the lowest level you can find to inhibit your CPUs from going to sleep, i.e. BIOS > kernel boot args > sysctl > cpupower, to keep the stack thin. If that is finicky somehow, you could alternatively boot with a single CPU and leave the rest disabled (boot arg nosmp).
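Roughly what I mean, as a sketch (parameter names are from memory and Intel-flavoured; double-check them for your platform before relying on this):

# Boot args: append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then update-grub (Debian-style):
#   intel_idle.max_cstate=0 processor.max_cstate=1   # keep the CPUs out of deep C-states
#   idle=poll                                        # heavy-handed variant: never idle at all
#   nosmp                                            # or the single-CPU experiment instead
# Or poke it at runtime instead of via boot args:
sudo cpupower idle-info        # list the idle states your system exposes
sudo cpupower idle-set -D 10   # disable idle states with latency >= 10 us (threshold is a guess; check idle-info first)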
The point is just to find out where to focus your attention, not to keep this as a long-term workaround. To keep N CPUs running, I usually just background N infinite loops in bash:
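A minimal sketch of what I mean, assuming 4 CPUs (adjust the list to your core count):

for cpu in 0 1 2 3; do
    taskset -c "$cpu" bash -c 'while true; do :; done' &
done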
In your case you might change that to:
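Again just a sketch with the same assumptions, but with a sleep so each loop only wakes its CPU once a second instead of pegging it:

for cpu in 0 1 2 3; do
    taskset -c "$cpu" bash -c 'while true; do sleep 1; done' &
done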
That way you just kick each CPU once every second; it does not have to be stressed. The taskset binds each loop to one CPU, to prevent the system from cleverly distributing the tiny load. This could also become a terrible, terrible workaround to keep running if all else fails. :)