[-] talkingpumpkin@lemmy.world 1 points 5 days ago

For that kind of issue I'd recommend snapshots instead of backups


Prometheus Alertmanager and Grafana (especially Grafana!) seem a bit too involved for monitoring my homelab (Prometheus itself is fine: it does collect a lot of statistics I don't care about, but it doesn't require configuration, so it doesn't bother me).

Do you know of simpler alternatives?

My goals are relatively simple:

  1. get a notification when any systemd service fails
  2. get a notification if there is not much space left on a disk
  3. get a notification if one of the above can't be determined (eg. server down, config error, ...)

Seeing graphs with basic system metrics (eg. cpu/ram usage) would be nice, but it's not super-important.

I am a dev, so writing a script that checks for whatever I need is way simpler than learning/writing/testing YAML configuration (in fact, I was about to write a script that sends heartbeats to something like Uptime Kuma or Tianji before I thought of asking you for a nicer solution - see the sketch below).
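For reference, this sketch is more or less the script I was about to write (the Uptime Kuma push URL is made up - you get the real one when creating a "push" monitor; goal 3 comes for free, because if the script stops reporting for any reason Kuma alerts you):

#!/bin/sh
set -eu

# goal 1: abort if any systemd unit is in the failed state
test -z "$(systemctl list-units --state=failed --no-legend)"

# goal 2: abort if any real filesystem is more than 90% full
df --output=pcent,target -x tmpfs -x devtmpfs | tail -n +2 \
  | awk '{ gsub("%","",$1); if ($1+0 > 90) exit 1 }'

# all checks passed: send the heartbeat (run this script from a cron/systemd timer)
curl -fsS -m 10 --retry 2 "https://kuma.example.com/api/push/XXXXXXXX?status=up" > /dev/null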

[-] talkingpumpkin@lemmy.world 8 points 1 month ago

Your system will appeal to the intersection of people who like gambling and people who like donating to charities.

Even among them, I don't see why anyone would prefer putting $100 in your web3 thingie instead of just donating $50, gambling with $45, and buying a beer with the $5 they would lose to you... well, there are a lot of ~~stupid~~ peculiar people out there (especially among crypto bros), so you might actually be OK.

About the implementation: the 50% that goes to charities should be transferred automatically... what's the point of a smart contract if people must trust you to "check the total donations and create a donation on The Giving Block"?

PS:

IDK about the US, but where I live gambling is regulated very strictly: make sure to double check with a lawyer before getting into trouble.

[-] talkingpumpkin@lemmy.world 6 points 1 month ago* (last edited 1 month ago)

I don't see the ethical implications of sharing that - what would happen if you did disclose your discoveries/techniques?

I don't know much about LLMs, but doesn't removing these safeguards just make the model as a whole less useful?

[-] talkingpumpkin@lemmy.world 6 points 1 month ago

Wow, that's so neat!

On my machine it opens a fullscreen plasma splash screen and then shows the new session intermixed/overlaid with my current one instead of in a new window... basically, it's a mess :D

If I may abuse your patience:

  • what distro/plasma version are you running? (here it's opensuse slowroll w/ plasma 6.1.4)
  • what happens if you just run startplasma-wayland from a terminal as your user? (I see the plasma splash screen and then I'm back to my old session)

I'm not very hopeful, but... just in case :)

I would like to be able to start a second session in a window of my current one (I mean a second session where I log in as a different user, similar to what happens with the various ctrl+alt+Fx, but starting a graphical session rather than a console one).

Do you know of some software that lets me do it?

Can I somehow run a KVM guest that uses my host disk as the disk for the VM (without breaking stuff)?
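In case it helps, what I had in mind for the KVM option (a sketch, untested - /dev/sda stands for whatever your host disk is): qemu can use the host disk as a read-only backing file with all writes going to a throwaway overlay, so the real disk is never written to. The guest would still see a crash-consistent filesystem at best, since the host keeps writing underneath.

# copy-on-write overlay backed by the host disk; the guest's writes
# land in overlay.qcow2, never on /dev/sda itself
qemu-img create -f qcow2 -b /dev/sda -F raw overlay.qcow2

# boot the overlay in KVM (needs read access to /dev/sda)
qemu-system-x86_64 -enable-kvm -m 4G \
  -drive file=overlay.qcow2,format=qcow2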

[-] talkingpumpkin@lemmy.world 4 points 1 month ago

Read this, delete this post and try again.

[-] talkingpumpkin@lemmy.world 9 points 2 months ago

Best of luck to you!

> I’m trying to understand Git, but it’s a giant conceptual leap.

Git is not that different from svn (the biggest hurdle is going from a shared folder to any version control system at all)... I'd say the main difference is that branches live in a different namespace than files (ie. you don't have trunk/src/whatever, just src/whatever in the main branch). On top of that, commit and push are two different things (and the same goes for fetch and checkout), and merges are way easier than in svn (where you had to merge stuff manually).

If you create a repo locally and clone it twice into two different directories, you can easily simulate what happens when you and a coworker collaborate via a centralized repo (say, github) - do a few experiments and you'll see it's not as complicated as it seems. I'd recommend using the CLI instead of some GUI client: it's way easier to figure things out without the overhead of having to tell git concepts apart from the ways the GUI tries to help.
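Something along these lines (a minimal sketch, paths are arbitrary):

git init --bare -b main /tmp/central.git   # plays the role of github (bare = no working tree)

git clone /tmp/central.git /tmp/you        # your checkout...
git clone /tmp/central.git /tmp/coworker   # ...and your "coworker"'s

cd /tmp/you
echo hello > README
git add README
git commit -m "add README"   # commit: only touches /tmp/you
git push -u origin main      # push: this is what updates "github"

cd /tmp/coworker
git pull origin main         # the coworker sees nothing until they fetch/pull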

32 points, submitted 2 months ago* (last edited 2 months ago) by talkingpumpkin@lemmy.world to c/selfhosted@lemmy.world

I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?


Scenario 1

PC:                        192.168.11.101/24
Server: 192.168.10.102/24, 192.168.11.102/24

From my PC I can connect to .11.102, but not to .10.102:

ping -c 10 192.168.11.102 # works fine
ping -c 10 192.168.10.102 # 100% packet loss

Scenario 2

Now, if I disable .11.102 on the server (ip link set <dev> down) so that it only has an ip on the .10 subnet, the previously failing ping works fine.

PC:                        192.168.11.101/24
Server: 192.168.10.102/24

From my PC:

ping -c 10 192.168.10.102 # now works fine

This is baffling to me... any idea why this might be happening?


Here's some additional information:

  • The two subnets are on different vlans (.10/24 is untagged and .11/24 is tagged 11).

  • The PC and Server are connected to the same managed switch, which however does nothing "strange" (it just leaves tags as they are on all ports).

  • The router is connected to the aforementioned switch and set to forward packets between the two subnets (I'm pretty sure I've configured it so, plus IIUC the scenario 2 ping wouldn't work without forwarding).

  • The router also has the same vlan setup, and I can ping both .10.1 and .11.1 with no issue in both scenarios 1 and 2.

  • In case it may matter, the PC (machine 1) has the following routes, set up by NetworkManager from DHCP:

default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.101 metric 410
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.101 metric 410
  • In case it may matter, the server (machine 2) uses systemd-networkd and the routes generated from DHCP are slightly different (after dropping the .11.102 address for scenario 2, the relevant routes of course disappear):
default via 192.168.10.1 dev eth0 proto dhcp              src 192.168.10.102 metric 100
192.168.10.0/24          dev eth0 proto kernel scope link src 192.168.10.102 metric 100
192.168.10.1             dev eth0 proto dhcp   scope link src 192.168.10.102 metric 100
default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.102 metric 101
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.102 metric 101
192.168.11.1             dev eth1 proto dhcp   scope link src 192.168.11.102 metric 101

solution

(please do comment if something here is wrong or needs clarifications - hopefully someone will find this discussion in the future and find it useful)

In scenario 1, packets from the PC to the server are routed through .11.1.

Since the server also has an .11/24 address, packets from the server to the PC (including replies) are not routed and instead just sent directly over ethernet.

Since the PC does not expect replies from a different machine than the one it contacted, they are discarded on arrival.

The solution to this (if one still thinks the whole thing is a good idea), is to route traffic originating from the server and directed to .11/24 via the router.

This could be accomplished with ip route del 192.168.11.0/24, which would however break connectivity with .11/24 addresses (for a similar reason as above: incoming traffic would not be routed, but replies would be)...

The more general solution (which, IDK, may still have drawbacks?) is to set up a secondary routing table:

echo 50 mytable >> /etc/iproute2/rt_tables  # this defines the routing table
                                            # (see "ip rule" and "ip route show table <table>")
ip rule add from 192.168.10/24 iif lo table mytable priority 1  # "iif lo" selects only
                                                                # packets originating
                                                                # from the machine itself
ip route add default via 192.168.10.1 dev eth0 table mytable  # "dev eth0" is the interface
                                                              # with the .10/24 address,
                                                              # and might be superfluous

Now, in my mind, that should break connectivity with .10/24 addresses just like the ip route del above, but in practice it doesn't seem to (if I remember, I'll come back and explain why after studying some more).
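BTW, if I'm reading the man page right, ip route get can be used to check which table a locally-originated packet would hit:

ip route get 192.168.11.101 from 192.168.10.102
# with the rule above, this should print a route via 192.168.10.1
# coming from "table mytable"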


I want to have a local mirror/proxy for some repos I'm using.

The idea is having something I can point my reads to, so that I'm free to migrate my upstream repositories whenever I want, and also so that my stuff doesn't stop working if one of the jankier third-party repos I use disappears.

I know the various forgejo/gitea/gitlab/... (well, at least some of them - I didn't check the specifics) have pull mirroring, but I'm looking for something simpler... ideally something with a single config file where I list what to mirror and how often to update it, and which then allows anonymous read access over the network.

Does anything come to mind?
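For the record, this is the kind of thing I'd end up hacking together myself if nothing exists (a sketch - mirrors.txt and the paths are made up):

#!/bin/sh
# refresh (or create) a bare mirror for each upstream URL listed in
# mirrors.txt, one per line; run from cron however often you want updates
MIRROR_DIR=/srv/git-mirrors

while read -r url; do
    name=$(basename "$url" .git)
    if [ -d "$MIRROR_DIR/$name.git" ]; then
        git -C "$MIRROR_DIR/$name.git" remote update --prune
    else
        git clone --mirror "$url" "$MIRROR_DIR/$name.git"
    fi
done < mirrors.txt

# anonymous read-only access over git:// (run once, separately):
# git daemon --base-path=/srv/git-mirrors --export-all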

[-] talkingpumpkin@lemmy.world 6 points 2 months ago

> If going the route of a backup solution, is it feasible to install OpenWRT on all of my devices, with the expectation that I can do some sort of automated backups of all settings and configurations, and restore in case of a router dying?

My two cents: use a "full" computer as your router (with either something like OPNsense or any "regular" linux distro if you don't need the GUI) and OpenWRT on your access points.

Unless you use the GUI and backup/restore the configuration (as you would with proprietary firmware), OpenWRT is frankly a pain to configure and deploy. At the moment I'm building custom images for all my devices, but (next time™) I'm gonna ditch all that, get an x86 router, and just manually manage OpenWRT on my wifi APs (I only have two, and they both have the same relatively straightforward config).
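(By "building custom images" I mean the OpenWRT ImageBuilder - roughly this, with the profile and package list obviously depending on your device:)

# from inside an unpacked openwrt-imagebuilder-*.Linux-x86_64/ directory;
# files/ holds whatever config you want baked in (eg. etc/config/wireless)
make image PROFILE="tplink_archer-c7-v2" \
     PACKAGES="luci -ppp -ppp-mod-pppoe" \
     FILES="files/"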

> It’s a pain that I know can be solved with buying dedicated access points (…right?)

Routers and access points are just computers with network interfaces (there may be layer-2-only APs out there, but honestly I've never heard of any)... most probably your issue is that the firmware of your "routers as access points" doesn't want to be configured as a dumb AP.

[-] talkingpumpkin@lemmy.world 7 points 6 months ago

I'd say a good middle ground could be making that stuff visible only to your mom's user (or even setting up a completely separate server)?

It depends on what YOU want to do, really... personally, I would be OK hosting religious nonsense if asked, as long as it's not generally available in kids' accounts and the like (same goes for porn), but I would come clean and outright refuse if it were neonazi, racist, and/or conspiracy stuff. It depends on where you decide to draw the line.

BTW: there's also the passive-aggressive, cowardly option of saying "I'll rip them when I have time" and then sequestering all the DVDs and only ever finding the time to rip the ones you don't mind.

[-] talkingpumpkin@lemmy.world 5 points 6 months ago

man this is getting real popular (kinda like "why not both?" a while ago)

[-] talkingpumpkin@lemmy.world 30 points 6 months ago

IMHO Ansible isn't much different from a bash script... it has the advantage of being "declarative" (in quotes because it's not actually declarative at all: it just has higher-level abstractions that aggregate common sysadmin CLI operations/patterns into "declarative-sounding" tasks), but it also has the disadvantage of becoming extremely convoluted the moment you need any custom logic whatsoever (yes, you can write a python extension, but you can do the same starting from a bash script too).

Also, you basically can't use ansible unless your target system has python (technically you can, but in practice all the useful stuff needs python). That means if your distro doesn't ship python by default (eg. alpine) you'll have to install it manually or write some sort of pythonless prelude to your ansible script that does it for you, and if your target can't run python at all (eg. openwrt on your very much resource-constrained wifi APs) ansible is out of the question (technically you can still use it, but it's much more complex than not using it).
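To make the comparison concrete, the "declarative" part mostly boils down to tasks being idempotent (converge to a state instead of blindly redoing work), which you can have in a script too - a rough sketch of what something like ansible's lineinfile does for you:

# append a line only if it isn't already there (grep -x matches whole lines,
# -F takes the pattern literally); rerunning this any number of times is safe
ensure_line() {
    line=$1 file=$2
    grep -qxF "$line" "$file" || echo "$line" >> "$file"
}

ensure_line "PermitRootLogin no" /etc/ssh/sshd_config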

My two cents about configuration management for the homelab:

  • whatever you use, make sure it's something you re-read often: it will become complex and you will forget everything about it
  • keep in mind that you'll have to re-test/update your scripts at least every time your distro version changes (eg. if you upgrade from ubuntu 22.04 to 24.04), and ideally every time one of your configured services changes (because the format of their config files may change too)
  • if you can cope with a rolling-style distro, take a look at nix instead of "traditional" configuration management: nixos configuration is declarative and (in theory) guarantees that you won't ever need to recheck or update your config when updating (in reality you'll occasionally have to edit it, but the OS will tell you, so it's not like you can unknowingly break stuff).

BTW, nixos is also not beginner-friendly in the least and all in all badly documented (the documentation is extensive, but unfriendly and somewhat disorganized)... good luck with that :)

[-] talkingpumpkin@lemmy.world 5 points 8 months ago

With the very limited number of drives one may use at home, just get the cheapest ones (*), use RAID and assume some drive may fail.

(*) whose performance meets your needs, and from reputable enough sources

You can look at the backblaze stats if you like stats, but when you only have ten drives a 3% failure rate means exactly the same thing as 1% or .5% (they all just mean "use RAID and assume some drive may fail").

Also, IDK how good a reliability predictor the manufacturer would be (as in every sector, reliability varies from model to model). Plus, even if you ran drives in quantities large enough for those stats to be meaningful, you'd basically still go by price (wouldn't backblaze buy 100% from one manufacturer otherwise?)
