[-] algernon@lemmy.ml 1 points 1 day ago

I'm one of those crazy people with / and /home on tmpfs. Setting that up is very easy with Impermanence, but it does require some care and self control. That is precisely the reason I set it up: I have no self control, and need the OS to force my hand. Without Impermanence, my root and home fill up with garbage fast. I tend to try and play with a lot of things, and I abandon most of them. With Impermanence, I don't need to clean up after myself: I delete the git checkout, and all the state, cache, and whatnot the software littered around my system is gone on reboot.

In short, Impermanence makes my system have that freshly installed, clean and snappy feeling.

The whole thing sounds scarier and more complicated than it really is.

[-] algernon@lemmy.ml 2 points 1 day ago

So instead of commenting inside of nix files, you put nix files into .org documents and collate them so you can make your nix files an OS and a website and a zettelkasten-looking set of linked annotated nodes.

Yup! And writing it in Org allows me to structure the configuration any way I like. It makes it a whole lot easier to group things that belong together close to each other, and I never have to fight the Nix language to do so. I can also generate easily browsable, rich documentation that explains what's what and why, which helps me tremendously, because a year after I installed and configured something, I will not remember how and why I did it that way, so my own documentation will help me remember.

Generating code from docs (rather than the other way around) also means that I'm much more likely to document things, because the documentation part is the more important part. It... kinda forces a different mindset on me. And, like I said, this allows me to structure the configuration in a way that makes sense to me, and I am not constrained by the limitations of the Nix language. I can skip a tremendous amount of boilerplate this way, because I don't need to use NixOS modules, repeating the same wrapping for each and every one of them. Also feels way more natural, to be honest.
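For a flavor of what that looks like, here's a hypothetical Org snippet (the section, prose, and option are made up for illustration, not from my actual config) — the prose lives next to the code, and the source block gets tangled into the flake:

```org
* Networking
Why: this laptop roams between networks, so NetworkManager it is.

#+begin_src nix :tangle flake.nix
  networking.networkmanager.enable = true;
#+end_src
```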

You have home on tmpfs. Isn’t that volatile? Where do you put your data/pictures/random git projects? Build outputs? How’s your RAM? (Sorry if I’m missing something obv)

It is volatile, yes, in the sense that if I reboot, it's lost. I am using Impermanence for both /home and /. The idea is that anything worth saving is recorded in the configuration, stored in a persistent location, and gets bind mounted or symlinked back. So data, pictures, source code, etc. live on an SSD and get symlinked into my home. For example, I configured the various XDG userdirs (~/Downloads, etc.) to live under ~/data, and that dir lives on persistent storage and gets symlinked back.

My root and /home are both set to 128MB, intentionally small, so that if anything starts putting random stuff there, it will run out of space very fast, start crashing and complaining loudly, and I'll know that I need to take care of it: either by moving the data to persistent storage, or by asking whatever is putting stuff there to stop doing that. My /tmp (where temporary builds end up) is 2GB, and sometimes I need to remount it at 10GB (hi nerdfonts!), but most of the time, 2GB is more than enough.
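For reference, a minimal sketch of what this setup can look like with the Impermanence module (sizes, paths, and the username are illustrative, not my exact config):

```nix
{
  # Root and /home live in RAM, kept deliberately tiny.
  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "size=128M" "mode=755" ];
  };
  fileSystems."/home" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "size=128M" "mode=755" ];
  };

  # Anything worth keeping is listed explicitly, and gets
  # bind mounted or symlinked back from the persistent SSD.
  environment.persistence."/persist" = {
    directories = [ "/var/log" "/var/lib/docker" ];
    users.alice.directories = [ "data" ".ssh" ];
  };
}
```

Anything not on one of those lists simply evaporates on reboot, which is the whole point.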

I have 32GB RAM, but only ~2.5GB is used for tmpfs purposes (2GB of it on /tmp itself), and most of the time, the majority of that is unused and as such, available for other things. My wife's laptop with 16GB RAM uses a similar setup, with 512MB for /tmp, and that works just as well.

I do have 64GB of swap on a dedicated SSD, though, and that helps a lot. I currently have 3GB of RAM free and 37GB of swap used, but I don't feel any issues with responsiveness. I don't even know what's using my swap! Everything feels snappy and responsive enough.

What’s your bootup like?

A few seconds from poweron to logging in. By far the slowest part of it is the computer waiting for me to enter my password.

❯ systemd-analyze
Startup finished in 8.667s (kernel) + 29.308s (userspace) = 37.975s
graphical.target reached after 29.307s in userspace.

Looking at systemd-analyze blame and systemd-analyze critical-chain, most of that userspace time is spent waiting for the network to come online (18s) and for docker to start up (7s). Most of that happens in parallel, though. Boot to gdm is waaay faster than that.

Another commenter mentioned difficulties in setting up specialized tools w/o containerizing, and another mentioned that containers still have issues. Have you run into a sitch where you needed to workaround such a problem? (e.g. something in wine, or something that needs FHS-wrangling)

I haven't run into any issues with containers, and I'm using a handful of them. docker, podman, and flatpak all work fine out of the box (after setting up persistent storage for their data, so they don't try to pull 10GB containers into my 128MB root filesystem :D). Wine... I'm using Wine via Lutris to play Diablo IV, and it has worked out of the box so far; I didn't have to fight to make it work.

I did run into a few problems with some stuff. AppImages, for example, require running them with appimage-run, but you can easily set up binfmt_misc to do that for you automatically, so you can continue your `curl https://example.com/dl/Example.AppImage -o Example.AppImage && chmod +x Example.AppImage && ./Example.AppImage` practices after that.
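On recent NixOS releases this can be a couple of lines, assuming the `programs.appimage` module is available in your release (it is in newer ones; check your version's options first):

```nix
{
  programs.appimage = {
    enable = true;
    # Register a binfmt_misc handler so AppImages are
    # transparently executed via appimage-run.
    binfmt = true;
  };
}
```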

There are also cases where downloaded binaries don't work out of the box, because they can't find the dynamic linker. I... usually don't download random third-party binaries, so I don't often run into this problem. The one case where I did was Arduino tooling. I have a handy script in my (Arduino-powered) keyboard firmware to patch those with patchelf. But if need be, there's buildFHSEnv, which lets you build a derivation that simulates an FHS environment for the software being packaged. So far, I have not needed to resort to that. Come to think of it... using buildFHSEnv would likely be simpler for my keyboard firmware than the patching. I might play with that next time I'm touching that repo.
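A sketch of what the buildFHSEnv route might look like for a case like that — the name and the dependency list here are hypothetical, you'd fill in whatever libraries the binary actually needs:

```nix
{ pkgs ? import <nixpkgs> {} }:

# Wrap a shell in a simulated FHS environment, so a prebuilt
# binary dropped into it can find /lib64/ld-linux and friends.
pkgs.buildFHSEnv {
  name = "arduino-fhs";
  targetPkgs = pkgs: with pkgs; [ zlib libusb1 stdenv.cc.cc ];
  runScript = "bash";
}
```

Running the resulting wrapper gives you a shell where the downloaded tooling behaves as if it were on a conventional distro.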

[-] algernon@lemmy.ml 2 points 1 day ago

It does support a number of gestures, yeah. Can't comment on how well they work, because I do not use a touchpad. But if the quality of the rest of the compositor is any indication, they should work really darn well.

[-] algernon@lemmy.ml 10 points 2 days ago

I'm a big fan of niri, which is a scrolling tiling compositor. I always had a soft spot for tiling wms/compositors, but couldn't stick with any of them for long until I tried niri, and wholeheartedly embraced the scrolling tiling world.

Very friendly upstream & community, and written in a modern language, too.

[-] algernon@lemmy.ml 22 points 3 days ago

I've been daily driving NixOS for about a year now, switched from over two decades of running Debian. I'll try to answer your questions from my perspective:

How much can I grok in a week?

If you have some experience with functional programming or declarative configs (think Ansible), then it's a lot easier. You can definitely learn enough in a week to get started. One year in, my Nix knowledge is still very light, and I get by fine. On the other hand, there's a lot of Nix I simply don't use. I don't write reusable Nix modules, and my NixOS configuration isn't split into small, well manageable files. It's a single flake.nix, about 3k lines and 130kB in size. Mind you, it's not complete chaos: it is generated from an Org Roam document (literate programming style; my Org Roam files are 1.2MB in size, clocking in at a bit below 10k lines).

With that said, it took me about a month of playing and experimenting with NixOS in a VM casually, a couple of hours a week, to get comfortable and commit to switching. It's a lot easier once you've switched, though.

How quick is it to make a derivation?

For most things, a couple of minutes tops. I found it easier to create derivations than Debian packages, and I was a Debian Developer for two decades, with a far deeper understanding of Debian packaging practices. It's not trivial, but it's also not hard. The first derivation is maybe a bit intimidating, but the 10th is just routine.

Regarding make install & co, you can continue doing that. I use project-specific custom flakes and direnv to easily set up a development environment. That makes development very easy. For installing stuff... I'd still recommend derivations. A simple ./configure && make && make install is usually very easy to write a derivation for. And nixpkgs is huge, chances are, someone already wrote one.
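A minimal sketch of such a per-project flake (the Go toolchain here is just an example; swap in whatever your project needs):

```nix
{
  description = "Per-project dev shell (illustrative)";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # Entered automatically by direnv, or manually via `nix develop`.
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.go pkgs.gopls ];
      };
    };
}
```

With nix-direnv installed, a one-line `.envrc` containing `use flake` makes direnv drop you into that shell whenever you cd into the project.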

How quick is it to install something new and random?

With a bit of self control and liberal use of direnv & flakes, near instant.

How long do you research a new package for?

https://search.nixos.org/packages lets you search for a package and explore its derivation. The same page also provides search for NixOS options, so you can explore the available NixOS modules that help you configure a package.

Can you set up dev environments quickly or do you need to write a ton of configs?

Very easy, with a tiny amount of practice. Liberal use of flakes & direnv, and you're good to go. I can't comment much on Python, because I don't do much Python nowadays, but JavaScript, Go, Rust, C, C++ have been very easy to build dev environments for.

What maintenance ouchies do you run into? How long to rectify?

None so far. If it builds, it usually works. I do need to read release notes for packages I upgrade, but that's also reasonably easy, because I can simply "diff" the package versions between my running system and the configuration I just built: I can see which packages were upgraded, and look up their release notes if need be. In short, about the same effort as upgrading Debian (where I also rarely ran into upgrade/maintenance gotchas).
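One way to do that diff, assuming flakes and the newer nix CLI (the hostname is illustrative):

```shell
# Build the new system without activating it, then compare
# per-package versions against the currently running one.
nixos-rebuild build --flake .#myhost
nix store diff-closures /run/current-system ./result
```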

Do I need to finagle on my own to have /boot encrypted?

If you use the NixOS installer, then yeah, you do have to fiddle with that a bit more than one would like. If you install via other means (e.g., build your own flake and use something like nixos-anywhere to install it), then it's pretty easy, well supported, and documented.

Feel free to ask further questions, I'm happy to elaborate on my experience so far.

[-] algernon@lemmy.ml 184 points 6 months ago

Sadly, that's not code Linus wrote. Nor one he merged. (It's from git, copied from rsync, committed by Junio)

[-] algernon@lemmy.ml 59 points 6 months ago

...and here I am, running a blog that won't even bat an eye if it gets 15k hits a second, and I could run it on a potato. Probably because I don't serve hundreds of megabytes of garbage to visitors. (The preview image is also controllable iirc, so just, like, set it to something reasonably sized.)

[-] algernon@lemmy.ml 33 points 6 months ago

There are no bugs. Just happy little accidental features.

[-] algernon@lemmy.ml 36 points 8 months ago

Steam Deck, because it is handheld, and can run a lot of my Steam games. I can also dock it to a big screen and attach a controller.

[-] algernon@lemmy.ml 110 points 8 months ago

The single best thing I like about Zed is how they unironically put up a video on their homepage where they take a perfectly fine function and butcher it with irrelevant features using Copilot, and in the process:

  • Make the function's name not match what it is actually doing.
  • Hardcode three special cases for no good reason.
  • Write no tests at all.
  • Update the documentation, but make the short version of it misleading, suggesting it accepts all named colors, rather than just three. (The long description clarifies that, so it's not completely bad.)
  • Show how engineering the prompt to do what they want takes more time than just writing the code in the first place.

And that's supposed to be a feature. I wonder how they'd feel if someone sent them a pull request done in a similar manner, resulting in similarly bad code.

I think I'll remain firmly in the "if FPS is an important metric in your editor, you're doing something wrong" camp, and will also steer clear of anything that hypes up the plagiarism parrots as something that'd be a net win.

[-] algernon@lemmy.ml 42 points 9 months ago

Very bad, because the usability of such a scheme would be a nightmare. If you have to unzip the files every time you need a password, that'd be a huge burden. Not to mention that unzipping it all would leave the files there, unprotected, until you delete them again (if you remember to delete them in the first place). If you do leave the plaintext files around, and only encrypt & zip for backing up, that's worse than just using the plaintext files in the backup too, because it gives you a false sense of security. You want to minimize the amount of time passwords are in the clear.

Just use a password manager like Bitwarden. Simpler, more practical, more secure.

[-] algernon@lemmy.ml 31 points 9 months ago

There are worse things out there than Great Old Ones. You might invoke Perl by accident.

