[-] gargravarr2112@alien.top 2 points 9 months ago

YT videos get taken down for all sorts of reasons these days - fraudulent copyright claims, channel hacks, or the creator simply getting fed up with YT's policies. Entire channels vanish with no warning. Valuable videos suddenly go private or members-only. It is not an open platform; it's a monetised platform first and foremost.

If you have these videos under your control, then even if they're no longer watchable online, you still have them. That's exactly what TA is for, and it does a superb job of it. Basically every YT video I watch that I think is useful, I hit the Save button on. Some of those are indeed no longer available online. I also have entire channels set to download, so if a creator does close up shop, at least I've got their latest uploads.
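
Under the hood, TA drives yt-dlp, and you can do something similar by hand. A minimal sketch that builds the yt-dlp invocation for incrementally archiving a channel (the destination path and output template are illustrative; `--download-archive` makes repeat runs skip already-saved videos):

```python
def channel_archive_cmd(channel_url: str, dest: str) -> list:
    """Build a yt-dlp command that incrementally archives a channel.
    --download-archive records saved video IDs so repeat runs only
    fetch new uploads; the output template sorts files by uploader."""
    return [
        "yt-dlp",
        "--download-archive", f"{dest}/archive.txt",
        "-o", f"{dest}/%(uploader)s/%(title)s [%(id)s].%(ext)s",
        channel_url,
    ]

# run it with e.g. subprocess.run(channel_archive_cmd(url, "/mnt/yt"))
```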

Obviously you need a lot of storage space - mine is over 5TB and growing. But it's worth it.

Also, it avoids YT's pre-roll, mid-roll and post-roll ads.

[-] gargravarr2112@alien.top 1 points 9 months ago

This. With a proper backup strategy, you are reducing the probability of a catastrophic sequence of events. Assuming the failures are independent, it becomes P(some unlikely event) × P(some other unlikely event) × ... and so on, for as many events as you can think of and/or can afford to mitigate.
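
As a toy illustration of that multiplication (the numbers are made up, and real failure modes are rarely fully independent):

```python
def combined_failure_probability(probabilities: list) -> float:
    """Probability that every backup layer fails at once, assuming
    the failure events are independent of each other."""
    p = 1.0
    for q in probabilities:
        p *= q
    return p

# three layers that each fail 1% of the time:
# combined_failure_probability([0.01, 0.01, 0.01]) is about 1e-6
```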

As you say, the risk will never be zero. And even the best-laid plans can fail - the GitLab incident a few years back saw five layers of backups and disaster preparedness fail at once.

Really, all you can do is back up your data using standard methods, and TEST THE RESTORE before you need to rely on it!
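
A restore test can be as simple as restoring into a scratch directory and comparing checksums against the live data. A minimal sketch of the comparison step (directory names are illustrative):

```python
import hashlib
import pathlib

def sha256(path) -> str:
    """Stream a file through SHA-256 so large files don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: str, restore_dir: str) -> list:
    """Return relative paths that are missing from, or differ in, the restore."""
    mismatches = []
    for src in sorted(pathlib.Path(source_dir).rglob("*")):
        if src.is_file():
            rel = src.relative_to(source_dir)
            dst = pathlib.Path(restore_dir) / rel
            if not dst.is_file() or sha256(src) != sha256(dst):
                mismatches.append(str(rel))
    return mismatches
```

An empty return list means the restore matches; anything else is a file you'd have lost.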

[-] gargravarr2112@alien.top 1 points 9 months ago

Can recommend my APC SMT1500I. Original batteries lasted 9.5 years. Accidentally plugged a fan heater into it once and it survived. Most reliable device in my rack.

[-] gargravarr2112@alien.top 1 points 10 months ago

If someone or something malicious gets a shell account on my systems, it at least stops them doing anything system-wide. And yes, if a script is going to request admin rights to do something, it'll stop right at the sudo password prompt. With passwordless sudo, it could do things without you even being aware of it.
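
For illustration, the difference is a single line in sudoers (the `ops` group name is hypothetical; edit with `visudo`, not directly):

```
# /etc/sudoers.d/ops
# Passworded sudo - the admin must type their password at the prompt:
%ops ALL=(ALL:ALL) ALL

# The passwordless variant this comment warns about:
# %ops ALL=(ALL:ALL) NOPASSWD: ALL
```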

Whether or not this is a line of defence at all is open to debate.

[-] gargravarr2112@alien.top 1 points 10 months ago

Heavy computation rack is in an unheated conservatory with a window cracked open. Keeps the HDD temperatures around 30°C. Temperature monitoring from my PDU shows a 3°C rise from the inlet to the exhaust side of the rack. This stuff is mostly powered off when not in use. In summer, it can get to 35°C in that room, so I shut everything down at that point.

24/7 rack is in my lounge and vents the heat into the room (helps a little with heating costs). Top of the rack is about 37°C, but I've seen it around 45°C with all my hypervisors doing stuff. Nothing complains. As long as the intake air is within the manufacturer's stated range, it's fine.
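
If you want to track HDD temperatures the same way, `smartctl -A` exposes them. A sketch of pulling SMART attribute 194 out of its text output (the column layout varies a little between drive vendors, so treat this as a starting point):

```python
def parse_smart_temperature(smartctl_output: str):
    """Pull the drive temperature (deg C) out of `smartctl -A` text.
    Attribute 194 (Temperature_Celsius) carries it in the RAW_VALUE
    column, which is the 10th whitespace-separated field on the line."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[:2] == ["194", "Temperature_Celsius"]:
            return int(fields[9])
    return None  # attribute not present (e.g. NVMe or a vendor quirk)
```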

Might want to consider redirecting the heat into the house rather than venting it outside.

[-] gargravarr2112@alien.top 1 points 10 months ago
  1. Domain auth (1 place to set passwords and SSH keys), no root SSH
  2. SSH by key only
  3. Passworded sudo (last line of defence)
  4. Only open firewall hole is OpenVPN with security dialled up high
  5. VLANs - laptops segregated from servers
  6. Strict firewall rules between VLANs
  7. TLS on everything
  8. Daily update check alerts (no automatic updates, but the alert persists until I deal with it)
  9. Separate isolated syslog server for audit trails
  10. Cold backups
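
Items 1-2 above boil down to a few sshd_config directives. A minimal sketch (the option names are standard OpenSSH; values per the list):

```
# /etc/ssh/sshd_config - key-only logins, no root over SSH
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```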
[-] gargravarr2112@alien.top 1 points 10 months ago

DIY - No Regrets.

I built my NAS out of spare parts originally and then it evolved into needing dedicated purchases. I like having full control of the OS and everything on it - it helps me understand what daemons are doing what. It does a lot more than file sharing.

The likes of QNAP and Synology may make a more polished product with an easy UI, as well as offering support, but as far as I'm concerned, I am the support - I like fixing problems myself.

If you're ping-ponging between the two options: from your post, it reads like cost is the biggest problem you face. But as you say, storage is a critical part of the infrastructure, and sometimes you do have to spend money on it if you want it to be reliable. I just upgraded my main NAS with a larger chassis and motherboard (up from an ITX board) so I can expand it further. It cost me a sizeable amount that might instead have bought a low-end ready-made unit, but this is far more flexible.

[-] gargravarr2112@alien.top 1 points 10 months ago

I have Adaptec controllers and I don't recommend them. My ASR-71605 does not like heavy disk IO to multiple drives - it can't sustain full bandwidth to several drives at once, and they start lagging and piling up IO. And my ASR-78165 cannot deal with SAS devices that present multiple LUNs, like tape drives; the whole card locks up.

Definitely go for LSI-based cards. They are the industry standard.

[-] gargravarr2112@alien.top 1 points 10 months ago

Yes, this should work fine. SAS does not care what path the signal takes - it doesn't differentiate between internal and external. You can run internal connections over external cables without issue. I've done something similar by turning my old NAS chassis into a DAS and connecting it to the internal ports of the HBA. And you can connect SAS or SATA drives to the DAS (system 1).

[-] gargravarr2112@alien.top 1 points 10 months ago

Why not just use ThinOS? IIRC it supports the NX protocol that NoMachine uses. If not, there's Wyse's respin of Ubuntu, called ThinLinux, which you should be able to install.

You can download the OS image and flash utility from Dell's website.

[-] gargravarr2112@alien.top 1 points 10 months ago

Several locations.

  • bunch of metering smart plugs flashed with Tasmota, feeding into Home Assistant (power at the wall)
  • UPS being polled by LibreNMS (the difference between wall power and UPS load shows the UPS's own consumption)
  • metering PDU polled by LibreNMS shows the server load
  • PMBus PSUs in the servers report their own power consumption to the BMC

And after all this, I do... Nothing with this data.
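
If you did want to do something with it, combining those readings gives a rough breakdown. A sketch assuming the wall, UPS and PDU figures are sampled at roughly the same moment (in practice they jitter):

```python
def power_breakdown(wall_w: float, ups_output_w: float, pdu_total_w: float) -> dict:
    """Attribute wall power to the UPS itself, the servers on the
    metered PDU, and anything else hanging off the UPS."""
    return {
        "ups_overhead_w": wall_w - ups_output_w,      # conversion losses + charging
        "servers_w": pdu_total_w,                     # straight from the PDU meter
        "other_on_ups_w": ups_output_w - pdu_total_w, # switch, router, etc.
    }
```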

[-] gargravarr2112@alien.top 1 points 10 months ago

Power in the UK has gone through the roof. I've downsized my lab as much as I can and have at times wondered if I should shut it down completely.

Originally I was running an EdgeRouter 4, a Zyxel 48-port managed switch and a custom-built NAS with an i3-9100T, 32GB ECC and 6x 12TB SAS drives in a zpool. The NAS did everything - VMs, storage, backups etc. - but it was pulling quite a lot of power.

A while back I ran a USFF PC as my server, which idled at 8W. Versus my 200W Xeon machine at the time, it paid for itself in 12 months. I dug that out and moved the VMs onto it. Storage went onto an ARM NAS. I was running too many VMs for a single USFF even maxed out, so I bought another 2 identical ones; now I run them as a Proxmox cluster.

For the network, I use a passively-cooled HP 1810 managed switch and an EdgeRouter Lite, plus an Apple AirPort with its transmitter dialled down to 25%.

The ARM machine is much slower than my ZFS NAS, but it is much lighter on power - at that point, the HDDs are the significant draw, so I only run 2 non-redundant spinners and make sure they're backed up to cold storage. I also power up my ZFS machine once a month or so and sync the data across. Other than that, I keep the big x86 machines shut down until needed.
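
The payback arithmetic is simple. A sketch with illustrative figures (the electricity price and hardware cost are assumptions, and it only counts idle draw, ignoring time under load):

```python
def payback_months(old_watts: float, new_watts: float,
                   price_per_kwh: float, hardware_cost: float) -> float:
    """Months until the idle-power saving of a replacement machine
    covers its purchase price, assuming 24/7 operation (30-day months)."""
    saved_kwh_per_month = (old_watts - new_watts) / 1000.0 * 24 * 30
    monthly_saving = saved_kwh_per_month * price_per_kwh
    return hardware_cost / monthly_saving

# e.g. a 200W Xeon replaced by an 8W USFF at 0.30/kWh saves
# roughly 41/month, so a ~500 machine pays back in about a year
```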
