
I am setting up a Linux server (probably NixOS) where my VM disk files will be stored on top of an NTFS partition. (Yes, I know NTFS sucks, but it has to be this way.)

I am asking which guest filesystem will have the best performance for a very mixed workload. If I had access to the extra features of Btrfs or ZFS I would use them, but I have no idea how CoW interacts with an NTFS host partition; that is why I am asking here.

Also I would like some NTFS performance tuning pointers.
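
For context, my current plan (untested) is something like the following, assuming the in-kernel ntfs3 driver (kernel 5.15+); the device name and paths are placeholders:

    # Mount the NTFS partition with the in-kernel ntfs3 driver.
    # "prealloc" is meant to cut fragmentation when files grow during parallel writes.
    mount -t ntfs3 -o noatime,prealloc /dev/sdb1 /mnt/vmstore

    # Fully preallocate the VM disk image so NTFS is not constantly extending it.
    qemu-img create -f raw -o preallocation=full /mnt/vmstore/guest.img 100G

My thinking is that a fully preallocated raw image sidesteps most of the host-side fragmentation, so the guest filesystem choice matters less. Corrections welcome.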

[-] anon2963@infosec.pub 3 points 2 months ago

Usually these drives will be mounted on Linux. But occasionally they will be mounted on Windows 10 where I do not have admin or developer mode access, so I cannot depend on symlinks.


This is more of a system config question than a programming one, but I think this community is the best one to ask about anything Git-related.

Anyway, I am setting up a new project with hardware that has 2 physical drives. The "main" drive will usually be mounted and have 10-20 config files on it, maybe 50-100 LOC each. The "secondary" drive will be mounted only occasionally, and will have 1 small config file on it, literally 2 or 3 LOC. When mounted, this file will be located in a specific directory close to the other config files.

I would like to manage all of these files using git, ideally with a single repo, as they are all part of the same project. However, as the second drive (and thus the config file on it) will sporadically appear and disappear, Git will constantly see the file as added and then deleted.

Right now I think the most realistic solution is to make a repo for each drive and make the secondary drive a submodule of the main. But I feel like it is awkward to make a whole repo for such a simple file.

What would you do in this situation, and what is best practice? Is there a way to make this one repo?
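
One single-repo idea I have been toying with (untested; the path is a placeholder) is to track the file normally and then set skip-worktree so Git stops reporting it when the drive is unmounted:

    # Track the file once, then tell Git to ignore its presence/absence
    # in the working tree.
    git add secondary-drive/small.conf
    git commit -m "add secondary drive config"
    git update-index --skip-worktree secondary-drive/small.conf

    # To change it later (with the drive mounted), temporarily undo that:
    git update-index --no-skip-worktree secondary-drive/small.conf
    git commit -am "update secondary drive config"
    git update-index --skip-worktree secondary-drive/small.conf

I am not sure how well skip-worktree behaves across checkouts and pulls, though, which is part of why I am asking.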

[-] anon2963@infosec.pub 2 points 3 months ago

Thanks for the wonderful info. I think I will go with the iStorage datAshur PRO+C because it has the best speeds out of all of them. It is slightly more involved to activate read-only mode than a simple switch, but it should be negligible compared to the time to boot the system and other overhead.

There is no way for me to verify how the write-protect works with this drive, but that is true for all of them, so I have to trust one. However, this company seems very competent, and importantly there are many 3rd party reviews of this and similar iStorage products. Also, the firmware is supposedly signed, so it should be immune to BadUSB. But you do make a good point that there is no way to be sure.

I plan to use root on LUKS anyway (I want persistent storage), so I can keep / encrypted and checksum my /boot every boot to search for anomalies. Once LUKS is decrypted, theoretically malware could get embedded in there, but I feel like it would be unlikely for malware to infect one partition and not the other.

I wonder if there is a way to set up a "honeypot" partition which holds no useful data but exhibits traits that are appealing for malware to embed itself in. It would be checksummed regularly while the system was running and alert me if anything changed.
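
The checksumming side seems simple enough; a rough sketch of what I mean (paths are placeholders, and the baseline file would have to live somewhere malware cannot reach):

    #!/usr/bin/env bash
    # Watch a "honeypot" partition for any modification.
    set -euo pipefail

    BASELINE=/var/lib/honeypot.sha256   # baseline hash list
    TARGET=/mnt/honeypot                # the bait partition

    case "${1:-check}" in
      init)
        # Record a hash of every file currently on the partition.
        find "$TARGET" -type f -print0 | sort -z | xargs -0 sha256sum > "$BASELINE"
        ;;
      check)
        # --quiet prints only files that fail verification; missing files
        # also fail. (Newly added files would need a separate check.)
        if ! sha256sum --quiet -c "$BASELINE"; then
          logger -p auth.alert "honeypot partition changed!"
        fi
        ;;
    esac

Run "init" once, then "check" from a cron job or systemd timer.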

That open source flash drive looks awesome, and I will keep my eye on it, maybe I would consider it if my threat model was tougher.


I am looking for a fast USB drive which has a physical write-protect enable switch on it. I would also want a BadUSB-resistant USB controller. I want this for 2 reasons:

  • So I can diagnose issues on machines where the problem may or may not be malware. This way, I can plug it into several machines without risking spreading malware.

  • So I can carry around a TailsOS drive wherever I go, and use it on public computers and friends' computers without risk of infection.

So far, I have only found one company making these things, Kanguru. There are almost no reviews of their products by reputable sources, at least not for their write-protecting drives.

Their BadUSB firmware detection module is NIST certified, which is great (given that you trust proprietary cryptography modules at all), but no certs for the main storage write protection. Also Kanguru products are very overpriced.

And no, I am not using SD cards: their write-protect switch is purely advisory (the host is free to ignore it), and they are too slow for me.

I am specifically looking at the Kanguru FlashTrust. My questions are:

  • Has anyone used Kanguru products and how was your experience?

  • Are there other companies that make decent quality drives with hardware write-protect switches? (Ideally ones that have FOSS firmware and are third-party tested, but I will take anything).

  • Are there any companies that make USB writeblockers which are small enough to fit in a wallet and <$50? (Example of one that is too big). That way I can use a standard, cheaper USB drive.

Oh how I wish Nitrokey made these!

submitted 3 months ago by anon2963@infosec.pub to c/nix@programming.dev

I am just setting up my NixOS config for the first time, and I know that it will be fairly complex. I also know it will only stay manageable and scalable if I have sane conventions.

I have read a number of example configs, but there do not seem to be consistent conventions among them for where to store custom option declarations, how to handle enabling/disabling modules, etc. They all work, but they do it in different ways.

Are there any official or unofficial conventions/style guides to NixOS config structure, and where can I find them?

For example, should I make a lib directory where I put modules that are easily portable and reusable in other people's configs? When should I break modules up into smaller ones? Etc. These are the kinds of questions I hope to get addressed.

[-] anon2963@infosec.pub 2 points 3 months ago* (last edited 3 months ago)

Note: I haven't tested this yet, but I don't see any reason why it wouldn't work.

You can have the best of both worlds by importing modules and then enabling/disabling them with config options.

The idea is that every single module, whether you want to be able to toggle it on/off or not, gets imported into your configuration.nix. For options that you want permanently enabled, there is no more work to do. For options or groups of options that you want to be able to toggle on/off, you put them behind a lib.mkIf.

In the following video, Vimjoyer essentially makes an option that enables/disables an entire module, even though it is already imported. He creates an options.module1.enable option, and then hides the entire contents of module1 behind a lib.mkIf on that option.

https://youtu.be/vYc6IzKvAJQ?t=147

submitted 3 months ago by anon2963@infosec.pub to c/nix@programming.dev

I have started using NixOS recently and I am just now creating conventions to use in my config.

One big choice I need to make is whether to include a unique identifier as the most significant attribute in any options that I define for my system.

For example:

Let's say I am setting up my desktop so that I am easily able to switch between light and dark modes system-wide. Therefore, I create the boolean option:

visuals.useDarkMode

Let's say I also want to toggle Tor and other privacy technologies on/off all at once easily, so I create the boolean:

usePrivateMode

Although these options do not do related things, they are still both custom options that I have made. My first instinct is to somehow segregate them from the built-in NixOS options. Let's say my initials are "RK"; I could make them all sub-attributes of an "rk" attribute.

rk.visuals.useDarkMode

rk.usePrivateMode

I feel like this is either a really good idea or an antipattern. I would like your opinions on what you think of it and why.

submitted 3 months ago by anon2963@infosec.pub to c/linux@lemmy.ml

My question is whether it is good practice to include a unique wrapper phrase for custom commands and aliases.

For example, let's say I use the following command frequently:

apt update && apt upgrade -y && flatpak update

I want to save time by shortening this command. I want to alias it to the following command:

update

And let's say I also make up a command that calls a bash script to scrub all of my ZFS and Btrfs pools:

scrub

Let's say I add 100 other aliases. Maybe I am overthinking it, but I feel there should be some easy way to distinguish these from native Unix commands, some abstraction layer.

My question is whether converting these commands into arguments behind a wrapper command is worth it.

For example, let's say my initials are "RK". The above commands would become:

rk update

rk scrub

Then I could even create the following to list all of my subcommands and their uses:

rk --help

I would have no custom commands that exist outside of rk, so I add a total of one executable to my system.

I feel like this is the "cleaner" approach, but what do you think? Is this an antipattern? Is it just extra work?
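
To make it concrete, the wrapper I am imagining is just a dispatcher script (the scrub script path is a placeholder):

    #!/usr/bin/env bash
    # rk - single entry point for all of my custom commands.
    set -euo pipefail

    case "${1:-}" in
      update)
        apt update && apt upgrade -y && flatpak update
        ;;
      scrub)
        # Placeholder path: would call my pool-scrubbing script.
        /usr/local/lib/rk/scrub-pools.sh
        ;;
      --help|"")
        echo "Usage: rk <subcommand>"
        echo "  update   apt update/upgrade plus flatpak update"
        echo "  scrub    scrub all ZFS and Btrfs pools"
        ;;
      *)
        echo "rk: unknown subcommand '$1'" >&2
        exit 1
        ;;
    esac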

[-] anon2963@infosec.pub 2 points 3 months ago

Not yet, I am looking to buy my printer in a couple of months. If I get this one then I will test it, do a write-up of it, and post it somewhere.

submitted 4 months ago by anon2963@infosec.pub to c/3dprinting@lemmy.ml

I am new to 3D printing, but have always wanted to get into it. Unfortunately, I have very limited space and no dedicated area that I could call my workshop. I also travel frequently, and I would like something I could take with me for the day.

Therefore, I would like a portable, or at least very small printer. AFAIK, the new Positron V3.2 is purpose-built to solve this kind of problem.

I am asking whether that model is a good idea for a beginner. My main concern is the price, which I am willing to put up with if there really is no other portable printer.

My other concern is just the fact that it is new and I may be too inexperienced with printers to deal with problems that are natural in first-gen products. I have a decent amount of experience soldering and other electronics work, but nothing with small moving parts. Also IDK if sourcing parts would be an issue.

If, in your experiences, these make it not worth it as a first printer, what would you recommend as a portable printer?


I am planning to build a multipurpose home server. It will be a NAS, virtualization host, and have the typical selfhosted services. I want all of these services to have high uptime and be protected from power surges/blackouts, so I will put my server on a UPS.

I also want to run an LLM server on this machine, so I plan to add one or more GPUs and pass them through to a VM. I do not care about high uptime on the LLM server. However, this of course means that I will need a more powerful UPS, which I do not have the space for.

My plan is to get a second power supply to power only the GPUs. I do not want to put this PSU on the UPS. I will turn on the second PSU via an Add2PSU.

In the event of a blackout, this means that the base system will get full power and the GPUs will get power via the PCIe slot, but they will lose the power from the dedicated power plug.

Obviously this will slow down or kill the LLM server, but will this have an effect on the rest of the system?

submitted 4 months ago by anon2963@infosec.pub to c/electricians@lemmy.ca

I am not an electrician, but an end user.

I am planning to build a very powerful server for running LLMs. It will have many GPUs and can realistically hit a 1500 watt sustained load. The PSU in my computer can handle 240v but I do not have access to a 240v circuit.

My question is whether it is a good idea to somehow balance the load between 2 or 3 120v circuits. If so, what are some methods to safely do this?

[-] anon2963@infosec.pub 2 points 5 months ago

In the past I have used Proxmox with ZFS RAID on a basic mini PC. With ZFS RAID it syncs everything except /boot. Proxmox has a tool, proxmox-boot-tool refresh, which syncs /boot between drives. The ZFS kernel module can be loaded in the initramfs, so it will boot fine even if a drive is missing.

For this project I do not plan to use ZFS, but AFAIK software RAID is now standard. Here is a popular video from Level1Techs talking about the flaws of hardware RAID: https://youtu.be/l55GfAwa8RI


I have an 11th gen Framework mainboard which I would like to repurpose as a server. Unfortunately (unless I do some super janky stuff), I can only connect 1 drive to it over M.2, and any additional ones must be over USB.

I am thinking of just using some portable hard drives and plugging them in over USB. I plan to RAID1 them and use them as boot drives and data storage, and use the M.2 slot for something unrelated.

In your experiences, is USB reliable enough nowadays to run a RAID array for a server like this? If it is, does it depend on the specific drive used?
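
For reference, I am picturing plain mdadm RAID1 over the USB drives, something like this (device names and mount point are placeholders):

    # Mirror the two USB drives, then format and mount the array.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext4 /dev/md0
    mount /dev/md0 /srv/storage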

submitted 5 months ago by anon2963@infosec.pub to c/opensource@lemmy.ml

I am currently learning to be a sysadmin and I have no software development skills. I love FOSS very much and want to contribute to several projects, including non-networked ones.

How can I do this with my skillset? I have a very small (16GB RAM) server that I could offer to these projects as a build server or web host. IDK what else I could do.

submitted 5 months ago* (last edited 5 months ago) by anon2963@infosec.pub to c/opensource@lemmy.ml

I am wondering what the standard tool is for sending and receiving SMS and MMS on a device that does not have a SIM card in it.

Is there some tool that can do it natively? Is there a specific carrier that is commonly used for this? Is there some sort of selfhosted service that bridges it to email, and if so do I need to put a SIM card in my server?

Bonus points if I can do it within Emacs.

[-] anon2963@infosec.pub 2 points 5 months ago

Thank you for the detailed reply. You seem very knowledgeable. I will implement your suggestions as I redesign my network.

[-] anon2963@infosec.pub 2 points 5 months ago

Thanks. Maybe 20% of these entries have IOMMU groups listed under "lspci_all", but it is extremely awkward to search through. So maybe I will put a feature request on the forum to make IOMMU more searchable. Still, this is likely the largest database of IOMMU groupings on the web, even if it is not easily searchable.

[-] anon2963@infosec.pub 1 points 5 months ago

Thanks but these are only lists of CPUs and motherboards that support IOMMU, not the IOMMU groups. For me (and many others) the groupings are just as important as whether there is support at all.

The groupings are defined by the motherboard. In my experience, all motherboards that support IOMMU will put at least 1 PCIe slot in its own group, which is good for graphics card passthrough. However, the grouping of other stuff like SATA controllers and NICs varies wildly between boards, and that is what I am interested in.

[-] anon2963@infosec.pub 2 points 5 months ago

Thank you, that is a very good point; I never thought of that. Just to confirm: best standard practice is for every connection, even something as simple as a Nextcloud server accessing an NFS server, to go through the firewall?

Then I could just have one interface per host but use the Proxmox guest ID as the VLAN tag so they are all unique. Then I would make a trunk to the guest OPNsense VM. That way it is a router on a stick.

I was a bit hesitant to write firewall rules based on IP addresses, as a compromised host could change its IP address. However, if each host is on its own VLAN, then I could add a firewall rule to only allow through the one "legitimate" IP per VLAN. The per-subnet rules would still work though.

I feel like I may have to allow a couple CT/VMs to communicate without going through the firewall simply for performance reasons. Has that ever been a concern for you? None of the routing or switching would be hardware accelerated.
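
For reference, what I am picturing on the Proxmox side is a single VLAN-aware bridge, with each guest NIC tagged with its own VLAN and the OPNsense VM attached untagged as the trunk; a sketch of /etc/network/interfaces on the host (the physical NIC name is a placeholder):

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094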

[-] anon2963@infosec.pub 2 points 6 months ago

Search eBay for used gaming laptops. Comes with a built-in UPS.

[-] anon2963@infosec.pub 4 points 7 months ago

To my knowledge there is no way to index Tor v3 hostnames unless the owner of the address explicitly shares them. Therefore, even if an attacker knew that I was behind Tor, they would have no way to find out the hostname of my service and connect to it, so it is not security through obscurity. They would have to get into my password manager and steal my public key. Am I wrong about this?

Whatever the case of the hostname being public or not, do you think it is important to add another layer of security such as Wireguard in this case, or is hardening the SSH config enough?
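
For reference, by "hardening the SSH config" I mean something along these lines (sketch; paths are the Debian defaults):

    # /etc/tor/torrc - expose sshd as a v3 onion service
    HiddenServiceDir /var/lib/tor/ssh
    HiddenServicePort 22 127.0.0.1:22

    # /etc/ssh/sshd_config - key-only auth, no root logins
    PasswordAuthentication no
    PermitRootLogin no
    AuthenticationMethods publickey

Tor v3 client authorization would be one more layer on top of that, closer to what Wireguard would add.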
