one_knight_scripting

joined 1 year ago

Do it over tmux. When the screen goes out, switch to another TTY and reattach the tmux session to see if it is finished.
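Roughly (the session name is just an example):

tmux new -s build    # start the long job inside a named tmux session

# screen dies: switch TTY with Ctrl+Alt+F3, log in, then
tmux attach -t build    # reattach and check on the job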

Or is it dead even from that?

Not if you need custom error bars on a scatter plot in Excel.

[–] one_knight_scripting@lemmy.world 4 points 1 day ago (1 children)

That is true, but I would define standard practice like this: ll = ls -l and la = ls -la.
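In .bashrc terms, a minimal sketch:

alias ll='ls -l'
alias la='ls -la'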

[–] one_knight_scripting@lemmy.world 3 points 1 day ago (3 children)

Doesn't show dotfiles.

[–] one_knight_scripting@lemmy.world 12 points 1 day ago (8 children)

I like the la alias.

[–] one_knight_scripting@lemmy.world 2 points 1 day ago (2 children)

IoT Enterprise LTSC fully works for running Windows games. It just doesn't have a lot of the bloatware. I've tried it, and I'm dual-booting it with Arch.

If it is just meant as a Steam machine, I recommend looking at Nobara for Nvidia GPUs and Bazzite for AMD GPUs. I will admit that I haven't tested VR games yet.

Personally, I'm maining Arch, and it plays most games in HDR at 4K 120Hz. My Windows install is just so I have access to Microsoft Office.

So, since I understand this is LAN only, I will leave out NextCloud.

I would personally say Ceph. It is a storage solution meant to be spread among a bunch of different hosts. Basically, it supports both RAID 5-style parity (erasure coding) AND replicated storage.

Personal setup: a single host with twelve 10TB HDDs.

To start, it generates the parity data for the storage bucket. On top of that, I am running a 2x replicated bucket. Since I am running a single host, this data is replicated amongst OSDs (read: HDDs), but in a multi-host cluster it would be replicated amongst hosts instead.
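As a rough sketch of that setup (the pool name and PG count are made up, and on a single host you have to drop the failure domain from host down to OSD):

ceph osd pool create mybucket 128 replicated
ceph osd pool set mybucket size 2    # keep two copies of every object
ceph osd crush rule create-replicated rep-osd default osd    # replicate across OSDs, not hosts
ceph osd pool set mybucket crush_rule rep-osd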

One of the benefits of an array like this is that other types of services are easily implemented. NFS overall is pretty good, and it is possible to implement it through the UI or the command line. I understand that Samba is not your favorite, but that is also possible. Personally, I am using RADOS to connect my Apache CloudStack hypervisor.
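The CloudStack piece is just RBD on top of RADOS; roughly (pool and image names are placeholders):

ceph osd pool create cloudstack 64
rbd pool init cloudstack    # tag the pool for RBD use
rbd create cloudstack/vm-disk-1 --size 100G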

I will admit, it is not the easiest to set up, but using Docker containers to manage storage is an interesting concept. On top of that, you can designate different HDDs to different pools; perhaps you want your solid-state storage to be shared separately. Ceph is also capable of monitoring your HDDs with smartctl.
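That SSD/HDD split works through CRUSH device classes; something like this (rule and pool names are invented):

ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool create fastpool 64
ceph osd pool set fastpool crush_rule ssd-only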

Proper installation does give you a web UI to manage it, if someone of your skill even needs it. ;)

Does an image like that exist?! Asking for a friend.

Forgive me, but doesn't yay -S neofetch do the thing still?

Hypervisor? Gotta say, I personally like a rather niche product: I love Apache CloudStack.

Apache CloudStack is actually meant for companies providing VMs and K8S clusters to other companies. However, I've set it up for myself in my lab, accessible only over VPN.

What I like best about it is that it is meant to be deployed via Terraform and cloud-init. Since I'm actively pushing myself into that area and seeking a role in DevOps, it fits me quite well.

Standing up a K8S cluster on it is incredibly easy. Basically, it is all done with cloud-init, and the process is quite automated. In fact, it took me 15 minutes to stand up a 25-node cluster with 5 control nodes and 20 worker nodes.
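Through CloudMonkey (cmk) it is roughly one call; the UUIDs below are placeholders you would look up first, and exact parameter names vary a bit between CloudStack versions:

cmk create kubernetescluster name=lab-k8s zoneid=<zone-uuid> \
  kubernetesversionid=<k8s-version-uuid> serviceofferingid=<offering-uuid> \
  size=20 controlnodes=5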

Let's compare it to other hypervisors, though. CloudStack is meant to handle global operations: typically it is split into regions, then zones, then pods, then clusters, and finally hosts. Let's just say that it gets very, very large if you need it to. Only it's free. Basically, if you have your own hardware, it is more similar to Azure or AWS than to VMware. And none of that costs any licensing.

Technically speaking, CloudStack Management is capable of handling a number of different hypervisors if you would like it to. I believe that includes VMware, KVM, Hyper-V, OVM, LXC, and XenServer. That is interesting because even if you currently run another hypervisor that you prefer, it should still work. This is mostly meant as a transition path to KVM, though I haven't tested it.

I have, however, tested it with Ceph for storage, and it does work. Perhaps doing that is slightly more annoying than with Proxmox. But you can actually create a number of different types of storage if you want to take the cloud-provider route, e.g. HDD vs. SSD tiers.

Overall, I like it because it works well for IaaS. I have 2000 VLANs primed for use with its virtual networking. I have one host currently joined, and a second host in line for setup.

Here is the article I used to get it initially set up, though I will admit that I personally used a different VLAN for the management IP than for the public IP VLAN. http://rohityadav.cloud/blog/cloudstack-kvm/

[–] one_knight_scripting@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

This is not a programming language; this is bash.

>> does not right-shift bits; it appends to a file.
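For example:

echo hello >> out.txt    # appends a line to out.txt
echo $(( 8 >> 1 ))       # prints 4; arithmetic context is where >> actually shifts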

29
DIY Sonos Project (lemmy.world)
submitted 5 months ago* (last edited 5 months ago) by one_knight_scripting@lemmy.world to c/selfhosted@lemmy.world
 

Hey Self Hosted!

Got a shower thought I wanna bounce off youse guys. It's half-baked but itching to become real: DIY Sonos-like surround sound using Raspberry Pis (or maybe other SBCs if Pi's not cut out for it). Need your brains to kick things off!

The Vision:

Server Pi

  • Acts as the brain. Takes 5.1 audio input from the TV (SPDIF? HDMI? Still figuring that out).

Client Pis

  • Wireless speakers running balenaSound or similar. Each handles a specific channel (front left, rear right, etc.). I picture each of these being connected to an amplifier board, with some fancy wiring to give the Raspberry Pi and the amplifier the voltages and power they require. (Something like this: https://a.co/d/fwkXuCm)

The Hurdles:

5.1 Audio Input

Can a Pi even handle 5.1 audio input? Do I need a fancy sound card/HAT? Or should I ditch the Pi for something beefier?

Channel Remapping Sorcery

Wiring all speakers the same (e.g., left channel only) but using Linux wizardry to assign which channel each speaker plays. Like, plug in a "rear right" speaker, tell the Pi "yo, you’re rear right now," and boom—it works. Possible? Or am I dreaming?

Why? Swapping speakers without rewiring = less headache. Plus, modularity.
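An untested sketch of what I mean, using ALSA's route plugin on one of the client Pis (the device name and channel index are guesses):

cat >> /etc/asound.conf <<'EOF'
# this Pi is "rear right": pull channel 3 out of the 5.1 stream
pcm.rearright {
    type route
    slave.pcm "hw:0,0"
    slave.channels 2
    ttable.3.0 1    # 5.1 input channel 3 (rear right) -> amp left
    ttable.3.1 1    # mirror it to amp right
}
EOF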

First roadblock: Getting clean 5.1 into a Pi. Second headache: Software channel routing.

Anyone tackled something like this before? Am I reinventing a wheel that’s already on fire?

Edit: I think I may actually have found a solution even cheaper than I intended. Has anyone here ever heard of WiSA? Long story short, it is a solution for wireless cinema audio. Mostly it is used in very expensive speakers, I'm talking like $5K USD for a whole system. However, I have found a much cheaper option: https://a.co/d/fXkaMEX. This would be a good starting point for me because the server side already does everything that I want it to. The client side (the speakers) is just about there... but I want better drivers and amplifiers. If I were to purchase this, I would use it as-is initially, but eventually cannibalize the WiSA adapter, attach it to a stronger amplifier, and mount the result in a better set of speakers.

 

Hello there Selfhosted community!

This is an announcement of the completion of a project I've been working on: a script for installing Ubuntu 24.04 on a ZFS RAID 10. I'd like to describe why I chose to develop this and how other people can have access to it as well. Let us start with the hardware.

Now, I am using an old host. My host in particular was originally a BCDR device based on a ZFS raidz implementation. Since it was designed for ZFS, it doesn't even have a RAID card; it only has an HBA anyway. So for redundancy, ZFS is a good way to go. Even though this was a backup appliance, it did not have root on ZFS. Instead, it had a separate hard drive for the operating system and three individual disks for the zpool. This was not my goal.

So I did a little research and testing. I looked at two particular guides (Debian/Ubuntu). I performed those steps dozens of times because I kept messing up the little things. So, to eliminate the human error (that's me), I decided to just go ahead and script the whole thing.

The GitHub repository I linked contains all the code needed to set up a generic ubuntu-server host using a ZFS RAID 10.
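For reference, a ZFS RAID 10 is just a stripe of mirrors, so the heart of what the script builds is equivalent to something like this (device names are examples; the real script also handles the boot pool and datasets):

zpool create rpool \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd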

Instructions for starting the script are easy. Boot up a live CD (https://ubuntu.com/download/server). Hit CTRL+ALT+F2 to go into the shell. Run the following command:

bash <(wget -qO- https://raw.githubusercontent.com/Reddimes/ubuntu-zfsraid10/refs/heads/main/tools/install.sh)

This command clones the repository, changes directory into it, and runs the entrypoint (sudo ./init.sh). Hopefully, this should be easy to customize to meet your needs.

More engineering details are on the GitHub repo.
