1
4
submitted 1 week ago by root@lemmy.world to c/homelab@lemmy.ml

I recently had my Proxmox host fail, so I re-installed and recovered all my VMs from backups.

I'm noticing that my file structure (this is on my NAS where Proxmox mounts it via SMB/CIFS) has some duplicate folders in it. The ones I highlighted are all empty. Is this normal? Can these be removed safely?

2
15
submitted 2 weeks ago by zhill29@lemmy.world to c/homelab@lemmy.ml

I've managed to get TrueNAS connected to Active Directory and created a share that I can access just fine from an AD account on a Windows client. However, when I try to mount the share on Ubuntu Server 24.04, I keep getting a permission/logon failure.

In my fstab entry I've tried every combo I can think of.

  • domain=domain,user=user,password=pass
  • domain=domain.local,user=user,password=pass
  • user=domain\user,password=pass
  • user=domain.local\user,password=pass

I've also tried a separate credentials file with every one of those combinations, as well as SMB protocol versions 2.1 and 3.0 (vers=). I've got no problem mounting shares from the Windows server without even specifying the domain.
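
For reference, here's the rough shape of what I've been trying, with the credentials moved into a root-only file (paths, hostname, share name, and the sec= option are anonymized placeholders):

# /etc/cifs-creds (chmod 600)
username=user
password=pass
domain=domain.local

# /etc/fstab
//truenas.domain.local/share /mnt/share cifs credentials=/etc/cifs-creds,vers=3.0,sec=ntlmssp,iocharset=utf8 0 0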

At this point I'm pretty sure I'm missing a setting on TrueNAS but no idea what. Any ideas?

3
8

So I just added a TP-Link switch (TL-SG3428X) and access point (EAP670) to my network, using OPNsense for routing; I was previously using a TP-Link SX-3008F switch as an aggregate (which I no longer need). I'm still within the return window for the new switch and access point, and I have to admit the sale prices were my main reason for going with these items. I understand there have been recent articles mentioning TP-Link and security risks, so I'm wondering if I should return these and up my budget to go for Ubiquiti. The AP would only be like $30 more for an equivalent, so that's negligible, but a switch that meets my needs is about 1.6x more, and still only has 2 SFP+ ports, while I need 3 at absolute minimum.

I'm generally happy with the performance, but there is a really annoying bug: if I reboot a device, the switch drops down to 1G speed instead of 10G, and I have to tinker with the settings or reboot the switch to get 10G working again. This is true for the OPNsense uplink, my NAS, and my workstation. The same thing happened with the 3008F, and support threads on the forums have not been helpful.

In any case, any opinions on whether switching to Ubiquiti would be worth it?

4
10
submitted 3 weeks ago by RadDevon@lemmy.zip to c/homelab@lemmy.ml

I'm running a Docker-based homelab that I manage primarily via Portainer, and I'm struggling with how to handle container updates. At first, I had all containers pulling latest, but I thought maybe this was a bad idea as I could end up updating a container without intending to. So, I circled back and pinned every container image in my docker-compose files.

Then I started looking into how to handle updates. I've heard of Watchtower, but I noticed the Linuxserver.io images all recommend not running Watchtower and instead using Diun. In looking into it, I learned it will notify you of updates based on the tag you're tracking for the container, meaning it will never do anything for my containers pinned to a specific version. This made me think maybe I've taken the wrong approach.

What is the best practice here? I want to generally keep things up to date, but I don't want to accidentally break things. My biggest fear about tracking latest is that I make some other change in a docker-compose file and update the stack, which pulls latest for all the containers in that stack and breaks some of them with unintended updates. Is this a valid concern, and if so, how can I overcome it?
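
One thing I've seen mentioned but haven't verified end-to-end: Diun's watch_repo label is supposed to make it report new tags in the repo, not just the one you're pinned to. A sketch of what that would look like (the service and tag are just examples):

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:10.9.11   # pinned: stack redeploys never pull a surprise
    labels:
      - "diun.enable=true"       # Diun's Docker provider watches this container
      - "diun.watch_repo=true"   # report new tags in the repo, not just the pinned one

That would keep the pin (so redeploys are safe) while still surfacing updates to act on manually.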

5
7

Hi Everyone

I will try and keep it short: my friend and I both do our own homelabs with the usual stuff (Radarr, Pi-hole, TrueNAS, Proxmox, etc.).

Now we want to do a bit of a silly thing. As the title says, we just want to be able to send faxes to each other (memes, guides, etc.). We both have a Cisco SPA2102 VoIP adapter, which to our understanding should do the trick.

I have tried to find a guide or some idea on the internet of how to do this over the modern internet, but without luck, so I was hoping to either get a bit of help or a straight answer of "just use email, man".

6
8

Hosting your own PrivateDNS for Android?

How do you run your own DNS server for Android's Private DNS feature?

I am currently using OPNsense with unbound for my DNS. My wireguard vpn is also on OPNsense.

I have LSIO Swag for my reverse proxy with Let's Encrypt and CloudFlare for my SSL and DNS.

Docker compose for my containers.

Can Pi-hole, AdGuard Home, or Technitium be used as an entry for Private DNS on Android?
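
From what I understand, Android's Private DNS only speaks DNS over TLS on port 853, so any of those should work as long as something terminates TLS in front of them with a valid cert. Since SWAG is nginx underneath, a minimal stream-module sketch (cert paths and the backend address are placeholders; the stream block sits at the top level of nginx.conf, next to the http block, not inside it):

stream {
    server {
        listen 853 ssl;
        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
        proxy_pass 192.168.1.1:53;   # plain-DNS backend, e.g. Unbound on OPNsense
    }
}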

7
35
submitted 1 month ago by Kage@discuss.tchncs.de to c/homelab@lemmy.ml

Hey there, I'm looking into setting up a DNS server in my homelab. I would like something like this:

  1. Server in Docker on my Proxmox Server
  2. Server in Docker on my NAS and
  3. Server in my "Cloud" Network

Do you guys have any recommendations on how I could accomplish this? Otherwise I will just use Pi-hole with sync again or something like it :)
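
In case it shapes the answers: the baseline I'd be replicating on all three hosts is just the stock Pi-hole compose, something like this (TZ and password are placeholders), with a sync tool keeping the instances aligned:

services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
    environment:
      TZ: "Europe/Berlin"
      WEBPASSWORD: "changeme"   # admin UI password
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped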

8
5
submitted 1 month ago by swordgeek@lemmy.ca to c/homelab@lemmy.ml

Anyone an expert in Synology here?

Synology's Hybrid RAID (SHR) is a funky little system, especially since it's built on standard Linux tools.

What I'm wondering though, is how data is distributed when you change the disks in the system.

Imagine I have 2x1TB drives and 2x4TB drives in a system.

  • First it creates a 4x1TB "chunk" which is essentially RAID5. (3TB available)
  • Next it creates a 2x3TB chunk which acts like RAID1 (although internally may be calculated like a RAID5 parity.) (3TB available from this)

Now let's say I replace those two 1TB drives with 4TBs (safely, preserving data, etc.), and tell SHR to expand to use the new drives. I can see a number of scenarios from this point:

  • It mirrors the two new blocks into another 3TB chunk, giving me 9TB total. (3 from RAID5, 3 from first mirror pair, 3 from second mirror pair)
  • It expands the 3TB mirror into a second RAID5 group, giving 12TB total. (3 initial plus 9 in the second group)
  • It does the same thing and also rewrites the data on the (former) 3TB mirror pair to be striped across all four disks
  • It expands the 3TB mirror to RAID5 and merges it with the original 3TB RAID group, giving a single 12TB RAID5.
  • Again it does the same thing but with rewriting of the data that was formerly just mirrored.

This isn't likely to be a huge deal, but I'd like to know how it works under the covers.
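
Since SHR really is standard md + LVM underneath, I suppose I could SSH in before and after the expansion and compare what the usual tools report (md2 is typically the first data array on Synology; md0/md1 are the system partitions):

cat /proc/mdstat                # every md array and the partitions it spans
sudo mdadm --detail /dev/md2    # RAID level, member count, and size of one array
sudo pvs; sudo lvs              # how the md arrays are stitched together by LVM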

9
7
submitted 1 month ago by root@lemmy.world to c/homelab@lemmy.ml

I see a lot of guides on setting up DoH (DNS over HTTPS) using things like cloudflared, but not many concrete ones on DoT (DNS over TLS).

Does anyone have any guides they'd recommend?
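
For anyone answering: as far as I can tell, the server side can be as small as a couple of Unbound options, which makes the lack of guides even stranger. A sketch (cert paths are placeholders):

server:
    interface: 0.0.0.0@853
    tls-service-pem: "/etc/unbound/fullchain.pem"   # certificate chain presented to clients
    tls-service-key: "/etc/unbound/privkey.pem"     # matching private key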

10
7

cross-posted from: https://lemmy.blahaj.zone/post/16452222

Hello friends, I've been pulling my hair out trying to figure out how to get my service to play well with Traefik.

My service is reachable at /dnd-notes/page, but the service needs to fetch additional resources and fails to do so.

i.e. the user navigates to /dnd-notes/foobar:

  • foobar loads
  • foobar fetches /.client/main.css
  • foobar fails to find this resource

Here is my static configuration:

## traefik-static.yml
providers:
  docker:     
    exposedByDefault: false
    
api:
  insecure: true
  dashboard: true

entryPoints: 
  web:
    address: :80
  websecure:  
    address: :443
    
log:
  level: DEBUG

Here is my compose:

services:
  traefik:
    image: "traefik:latest"
    container_name: "traefik"
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./traefik/traefik.yaml:/etc/traefik/traefik.yaml"

  silverbullet:
    image: zefhemel/silverbullet
    container_name: "dnd-notes"
    volumes:
      - './dnd-notes/space:/space'
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dndnotes.rule=PathPrefix(`/dnd-notes/`)"
      - "traefik.http.routers.dndnotes.service=dndnotes"
      - "traefik.http.routers.dndnotes.entrypoints=web"
      - "traefik.http.routers.dndnotes.middlewares=dndnotes_stripprefix"
      - "traefik.http.services.dndnotes.loadbalancer.server.port=3000"
      - "traefik.http.middlewares.dndnotes_stripprefix.stripprefix.prefixes=/dnd-notes"
11
22
submitted 2 months ago* (last edited 2 months ago) by tired_n_bored@lemmy.world to c/homelab@lemmy.ml

I will start first

  • I didn't notice my DIY NAS motherboard had PCIe Gen 2.0 (an old generation) before buying it. It's not a big limitation (still 500MB/s per lane) for the two spinning disks I have on it, but it would be if I ever decide to switch to SSDs.
  • I cheaped out on the PSU. I bought another one without waiting for that crap to burn down, so I eventually spent more.
  • I often break the software. Sometimes I kill the OS or mess up some BTRFS pools.

Sometimes I just feel inadequate for this. Does this kind of thing happen to you too?

12
4
submitted 2 months ago* (last edited 2 months ago) by Zozano@lemy.lol to c/homelab@lemmy.ml

First, thank you in advance.

I'm having trouble with exposing my server. I think what I need is a better understanding, as opposed to technical help (though that would be appreciated too).

At the moment I'm using the linuxserver.io suite of applications. I've got SWAG set up with DuckDNS, and I'm trying to set up Jellyfin and other applications. (they're all in the same compose.yaml).

I can access my applications on an external network via <user>.duckdns.org:<port> and it works fine (but no https).

Within my home network I can access jellyfin.<user>.duckdns.org - the https is valid and everything is working fine.

I suspect this means my router is not set up correctly? I'm using OpenWRT. What am I doing wrong?
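
For reference, my mental model is that OpenWrt needs a DNAT redirect for 443 from WAN to the SWAG host, something like this in /etc/config/firewall (the LAN IP is a placeholder), which would also explain why HTTPS only behaves inside the network:

config redirect
        option name 'swag-https'
        option src 'wan'
        option src_dport '443'
        option dest 'lan'
        option dest_ip '192.168.1.10'
        option dest_port '443'
        option proto 'tcp'
        option target 'DNAT'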

13
33
submitted 2 months ago by corroded@lemmy.world to c/homelab@lemmy.ml

This is more "home networking" than "homelab," but I imagine the people here might be familiar with what I'm talking about.

I'm trying to understand the logic behind ISPs offering asymmetrical connections. From a usage standpoint, the vast majority of traffic goes to the end-user instead of from the end-user. From a technical standpoint, though, it seems like it would be more difficult and more expensive to offer an asymmetrical connection.

While consumers may be connected via fiber, cable, DSL, etc, I assume that the ISP has a number of fiber links to "the internet." Those links are almost surely some symmetrical standard (maybe 40 or 100Gb). So if they assume that they can support 1000 users at a certain download speed, what is the advantage of limiting the upload? If their incoming trunks can support 1000 users at 100Mb download, shouldn't it also support 1000 users at 100Mb upload since the trunks themselves are symmetrical?

Limiting the upload speed to a different rate than download seems like it would just add a layer of complexity. I don't see a financial benefit either; if their links are already saturated for download, reducing upload speed doesn't help them add additional users. Upload bandwidth doesn't magically turn into download bandwidth.

Obviously there's some reason for this, but I can't think of one.

14
27
submitted 2 months ago* (last edited 2 months ago) by jack@water.house to c/homelab@lemmy.ml

Has anyone else been called crazy for home-labbing front facing stuff?

I've always had this mindset of asking, "What am I really getting out of this?" But when it came to the internet and what I posted, I held onto a bit of innocence. Over the past two years, though, that innocence has been chipped away, but I think I’ve managed to reclaim it.

I don’t fault for-profit companies like Reddit for monetizing content; honestly, it was my own oversight for not reading the terms of service carefully. But since then, I’ve realized just how much I’ve unknowingly contributed to other projects for free.

There’s nothing inherently wrong with that, but does anyone else ever feel a bit... exploited?

It's like when a recruiter asks me for a .docx version of my resume instead of the .pdf I provide. Maybe it's just to block my contact details, or maybe there's something more dubious at play. I've experienced both, and each time, I've ended up feeling a bit... used.

Now, when a recruiter asks for a .docx, I ask them why. If it's to hide contact details, I send an anonymized version. If they want to trim it down to two pages, I direct them to the summary section on my professional website. And if they want to add their bits to it, I guide them to my website, where they can explore my detailed posts.

For me, it’s about reclaiming control over what I’ve shared.

I was talking to someone about this recently, and they mentioned that they like to post everything on GitLab to showcase what they've been working on. But honestly, it's just not the same as self-hosting your own Gitea or GitLab instance. And this guy thought I was crazy for hosting a single-instance GitLab.

Okay, so take X, for example. There, I could have a super locked-down account like I do here, only contributing to communities when I want to by directly tagging them, but otherwise just using it as a personal journal like my Mastodon. But it's just not the same: when X started monetizing posts, the platform's objective changed.

I don't mind 'for-profit,' but when it's driven by short-term gains like a monetized post, eventually all engagement is funneled towards that. It ends up feeling like you're writing in someone else's diary, one that you tailor for engagement.

It's also about the love of tinkering: breaking things, fixing them, and getting everything back up to spec. It's about embracing the original idea of the internet: a decentralized space where anyone can contribute, without your work being exploited.

It’s your own little corner where you can post whatever you want, for whomever you want. A Jellyfin server for my partner, a portfolio for the hiring manager, a GitLab for my playground. Enjoying the freedom to experiment without an ops exec pulling their hair out.

It's kinda magical.

Footnote: This is my first post to this community; if it isn't a good fit, please let me know and I'll gladly adjust or remove it.

Tags for Federation: @homelab

#homelab #macroblog

15
32
submitted 2 months ago by root@lemmy.world to c/homelab@lemmy.ml

I've been using pfSense for years, and it's been pretty great, but I also have some friends who are homelabbers who like their UniFi setups.

What do you guys prefer, and why?

16
40

I was gifted a new Raspberry Pi. I already have a Pi-hole set up on a previous one and am now looking for other ideas for things to run on my network.

I was considering a network monitoring tool. Any other suggestions?

17
5
submitted 3 months ago by northendtrooper@lemmy.ca to c/homelab@lemmy.ml

Is it possible to have about 4 PoE cameras attached to a PoE switch in a network closet, which will be trunked to an L3 switch where the NVR will also be attached?

Or would it be better practice to house the NVR in the network closet so it can supply the cameras' power natively?

18
12
submitted 3 months ago by corroded@lemmy.world to c/homelab@lemmy.ml

A few months ago, I upgraded all my network switches. I have a 16-port SFP+ switch and a 1Gb switch (LAGGed to the SFP+ switch with two DACs). These work perfectly, and I'm really happy with the setup so far.

My main switch ties into a remote switch in another building over a 10Gb fiber line, and this switch ties into another switch of the same model (on a different floor) over a Cat6e cable. These switches are absolute garbage: https://www.amazon.com/gp/product/B084MH9P8Q

I should have known better than to buy a cheap off-brand switch, but I had hoped that Zyxel was a decent enough brand that I'd be okay. Well, you get what you pay for, and that's $360 down the toilet. I constantly have dropped connections, generally resulting in any attached devices completely losing network connectivity, or if I'm lucky, dropping down to dial-up speeds (I'm not exaggerating). The only way to fix it is to pull the power cable to the switch. Even under virtually no load, the switch gets so hot that it's painful to touch. The fact that my connection is far more stable when the switch is sitting directly in front of an air conditioner tells me just about all I need to know.

I'm trying to find a pair of replacement switches, but I'm really striking out. I have two ancient Dell PowerConnect switches that are rock solid, but they're massive, they sound like jet engines, and they use a huge amount of power. Since these are remote from my homelab and live in occupied areas, they just won't work. All I need is a switch that has:

  • At least 2 SFP+ ports (or 1 SFP+ port for fiber and a 10Gb copper port)
  • At least 4 1Gb ports (or SFP ports; I have a pile of old 1Gb SFP adapters)
  • Management/VLAN capability

Everything I find online is either Chinese white-label junk or is much larger than what I need. A 16-port SFP+ switch would work, but I'd never use most of the ports, and I'd be wasting a lot of money on overkill hardware. As an example, one of these switches is in my home office; it exists solely so I have a connection between my server rack, two PCs, and a single WAP. I am never going to need another LAN connection in my home office; any hardware is going to go in the server rack, but I do need 10Gb connectivity on at least one of those PCs.

Does anyone have a suggestion for a small reliable switch that has a few SFP+ ports, is made by a reputable brand, and isn't a fire hazard?

19
7
submitted 3 months ago* (last edited 3 months ago) by MetaCubed@lemmy.world to c/homelab@lemmy.ml

In the past, I've used Nessus for vulnerability scanning my lab, but as my service count has grown, the 16-IP limit is becoming a little unwieldy.

Is anyone able to recommend an alternative that fits at least most of the requirements I have?

  • Free (preferably in both senses of the word)

  • Doesn't use Docker; even if containerized, I'd prefer to avoid having my scanner share a host with another service... and I'm not incredibly well versed with Docker

  • Scans multiple systems (I tried Trivy, but as far as I can tell it only scans the system you install it on)

  • Has a webui for management of scans

Alternatively, if anyone is willing to lend some advice on the configuration of Wazuh: I deployed the service months ago with the expectation that it could be used for vulnerability scanning (the dev was in a few Reddit threads suggesting it had the capability), but I haven't been able to configure it properly.
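
For reference, this is roughly the manager-side ossec.conf block I've been iterating on, based on the 4.x vulnerability-detector docs (the module was reworked into "vulnerability-detection" around 4.8, so treat this as a sketch for older releases; the provider and OS names are examples):

<vulnerability-detector>
  <enabled>yes</enabled>
  <interval>5m</interval>
  <run_on_start>yes</run_on_start>
  <provider name="canonical">
    <enabled>yes</enabled>
    <os>jammy</os>
    <update_interval>1h</update_interval>
  </provider>
</vulnerability-detector>

My understanding is the agents also need syscollector enabled, since the package inventory it collects is what actually gets scanned.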

I appreciate any advice people are willing to offer!

Edit: fixed formatting

20
3

Is there a way to easily create Gotify notifications from critical system errors (journalctl -p 3)? I recently had a bunch of out-of-memory errors and it would've been great to be notified about them. There must be a pre-built solution for this, right? Ideally also dockerized. Thanks in advance!
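
In case nothing pre-built turns up, here's a minimal sketch of the glue I have in mind, assuming a Gotify app token in GOTIFY_TOKEN and the server URL in GOTIFY_URL (run as a systemd service or a small container):

#!/bin/sh
# Follow new journal entries of priority err (3) or worse; one Gotify push per line.
journalctl -p 3 -f -n 0 -o cat | while IFS= read -r line; do
    curl -fsS -o /dev/null "$GOTIFY_URL/message?token=$GOTIFY_TOKEN" \
        -F "title=journal error on $(hostname)" \
        -F "message=$line" \
        -F "priority=8"
done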

21
12
submitted 4 months ago by atfergs@lemmy.world to c/homelab@lemmy.ml

I've got a homelab running a number of services in Docker. Everything works beautifully internally, but access from outside the network is very slow. I'm using Nginx Proxy Manager and Cloudflare DDNS for the external access. It's not a bandwidth issue; I'm on fiber with a very solid upload.

Jellyfin and Overseerr are the main services that I'm having trouble with. Oddly, once you manage to get a video going in Jellyfin, it works fine.

I could use some guidance in what to look for, what tools I can use, or any other advice on how to track down the issue. Thanks!
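
Two checks I plan to start with (hostname is a placeholder): whether the name resolves to Cloudflare's proxy or straight to my WAN IP, and where the time actually goes on a slow request:

dig +short jellyfin.example.com   # Cloudflare IPs here = traffic is proxied through Cloudflare

curl -o /dev/null -s -w 'dns %{time_namelookup}s  tcp %{time_connect}s  tls %{time_appconnect}s  ttfb %{time_starttransfer}s\n' https://jellyfin.example.com/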

22
9
Question about NAT (lemmy.world)
submitted 4 months ago by root@lemmy.world to c/homelab@lemmy.ml

I am hosting a couple of services (a Matrix chat server and a game server). I know NAT's job is to translate external requests into internal addresses, so that traffic hitting the WAN ultimately makes it to the internal service that's expected to handle it; however, I'm wondering if my setup is correct.

Everything is working as expected, but I'm just wondering how the traffic knows which service to go to. If an outside request comes in, is it just the destination port that is used to route it to the correct internal IP? Do I need to do anything else here for best practices?
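
To make my mental model concrete: I assume each port forward boils down to a DNAT rule keyed on the destination port, something like (interface, IPs, and ports are placeholders):

iptables -t nat -A PREROUTING -i wan0 -p tcp --dport 8448 \
    -j DNAT --to-destination 192.168.1.20:8448    # Matrix
iptables -t nat -A PREROUTING -i wan0 -p udp --dport 27015 \
    -j DNAT --to-destination 192.168.1.21:27015   # game server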

23
25
submitted 4 months ago* (last edited 4 months ago) by umami_wasbi@lemmy.ml to c/homelab@lemmy.ml

Lesson learnt: don't ever buy a used server from Quanta.

Also, doesn't Epyc have an eFuse that pairs the CPU with the mobo?

24
10
Dell Boss N1 questions (lemmy.sdf.org)
submitted 4 months ago by krakenfury@lemmy.sdf.org to c/homelab@lemmy.ml

I've recently picked up an Intel P4000 and I'm purchasing some parts to set it up. Since it's an older platform, I get that there are some limitations on what I can use, so I'm worried about buying things that aren't compatible.

I'm interested in installing a Dell Boss N1 Monolithic to run Proxmox in RAID1, but have some concerns:

  • Will it even work with my system board? Maybe my search skills suck, but I can't glean from the internet how tightly controlled server hardware ecosystems are. Would my motherboard even recognize a component like this, or the drives installed on it?

  • What drives work with it? According to the user manual, there are only three supported drives, and they have to be 480GB or 960GB in size. Has anyone tested using different NVMe M.2 drives?

25
54
submitted 4 months ago* (last edited 4 months ago) by possiblylinux127@lemmy.zip to c/homelab@lemmy.ml

Help, I now have several LANs
