Dishonor on you! Dishonor on your cow!
I mean, that's precisely the ideal case and goal of many tariffs.
“We had a huge chunk of our engineering staff spending time improving FreeBSD as opposed to working on features and functionalities. What’s happened now with the transition to having a Debian basis, the people I used to have spending 90 percent of their time working on FreeBSD, they’re working on ZFS features now … That’s what I want to see; value add for everybody versus sitting around, implementing something Linux had years ago. And trying to maintain or backport, or just deal with something that you just didn’t get out of the box on FreeBSD.”
I still hold much love for FreeBSD, but this is very much indicative of my experience with it as well. The tooling in FreeBSD, specifically dtrace, bhyve, jails, and zfs, was absolutely killer while Linux was still experiencing teething problems with a nonstandard myriad of half-developed, half-documented tools. But Linux has since matured, and its equivalents have been adopted and standardized. And the strength of the community is second to none.
They'll be happier with Linux.
It was the bad old days of sysadmin, where literally every critical service ran on an iron box in the basement.
I was on my first oncall rotation. Got my first call from helpdesk, Exchange was down, it's 3AM, and the oncall backup and Exchange SMEs weren't responding to pages.
Now I knew Exchange well enough, but I was new to this role and this architecture. I knew the system was clustered, so I quickly pulled the documentation and logged into the cluster manager.
I reviewed the docs several times: we had Exchange server 1 named something thoughtful like exh-001, and server 2 named exh-002 or something.
Well, I'd reviewed the docs, and helpdesk and stakeholders were desperate to move forward, so I initiated a failover: out of clustered mode with 001 as the primary, and into unclustered mode pointing directly at server 10.x.x.xx2.
What's that you ask? Why did I suddenly switch to the IP address rather than the DNS name? Well that's how the servers were registered in the cluster manager. Nothing to worry about.
Well... Anyone want to guess which DNS name 10.x.x.xx2 was registered to?
Yeah. Not exh-002. For some crazy legacy reason the DNS names had been remapped in the distant past.
So anyway, that's how I turned a 15-minute outage into a 5-hour one.
On the plus side, I learned a lot and didn't get fired.
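The takeaway: before trusting how a cluster manager labels its nodes, cross-check that forward and reverse DNS actually agree. A two-line sanity check like this would have caught it; all the names and addresses below are made up for illustration:

    # Pre-failover sanity check: does the IP the cluster manager shows
    # actually belong to the server you think it is?
    # Hypothetical names/addresses; substitute your own.
    import socket

    ip = "10.0.0.12"                  # address as registered in the cluster manager
    expected = "exh-002.example.com"  # the server you believe that is

    forward = socket.gethostbyname(expected)  # name -> IP
    reverse = socket.gethostbyaddr(ip)[0]     # IP -> name (raises if no PTR record)

    if forward != ip:
        print(f"Mismatch: {expected} resolves to {forward}, not {ip}")
    else:
        print(f"OK: {expected} <-> {ip} (PTR says {reverse})")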
I mean, it is Alabama. The nearest civilization is Atlanta.
ZFS is a very robust choice for a NAS. Many people, myself included, as well as hundreds of businesses across the globe, have used ZFS at scale for over a decade.
Attack the problem: check your system logs, htop, and zpool status.
When was the last time you ran a zpool scrub? Is there a scrub, or other zfs operation in progress? How many snapshots do you have? How much RAM vs disk space? Are you using ZFS deduplication? Compression?
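If you want a starting point, here's a rough sketch of that checklist as a script. It assumes the zfs/zpool CLI tools are installed and uses a placeholder pool name "tank"; substitute your own:

    # Quick ZFS health checklist (sketch). Pool name "tank" is a placeholder.
    import subprocess

    POOL = "tank"

    def run(*cmd):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)

    run("zpool", "status", POOL)  # health, errors, any scrub/resilver in progress
    run("zpool", "list", POOL)    # capacity; pools near full tend to slow down
    run("zfs", "get", "dedup,compression", POOL)  # dedup is notoriously RAM-hungry

    # Snapshot count; very large numbers of snapshots can slow some operations
    out = subprocess.run(["zfs", "list", "-t", "snapshot", "-o", "name", "-H"],
                         capture_output=True, text=True)
    print("snapshots:", len(out.stdout.splitlines()))

None of these commands change anything; they only read state, so they're safe to run on a live pool while you narrow things down.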
Amazing how many products just don't.
Well that's super fucking cool
SearxNG is still here to help
College (if they go) is when these boys are pulled out of their comfort zone and thrown into a huge mixer with a wide variety of new people and ideas. I imagine there's a reason they only see this trend in "high school boys".
Oh yes, let's never do anything good, because there might be something else even more impossible that would be better.
I know several large companies looking at Microsoft, Xen, and Proxmox. Though the smart ones are more interested in the open source solutions, to avoid future rug-pulls.