submitted 3 months ago by Tekkip20@lemmy.world to c/linux@lemmy.ml

After some google-fu, I'm still puzzled as to how the Finnish man has done it.

What I mean is: Linux is widely known and praised for being more efficient and lighter on resources than the greasy, obese NT slog that is Windows 10/11.

To the big-brained ones out there: is this because the Linux kernel is more "stripped down" than a Windows-based kernel, with the bits of bloated code that could affect speed and operations removed?

I'm no OS expert or comp sci graduate, but I'm guessing it has a better handle on processes and the CPU tasks it gets given, and "more refined programming" under the hood?

If I remember rightly, Linux was a server/enterprise OS first, before shipping with desktop approaches, hence it's used in a lot of institutions and the educational sector due to it being efficient as a server OS.

Hell, despite GNOME and Ubuntu getting flak for being chubby RAM hog bois, they're still snappier than Windows 11.

macOS? I mean, it's snappy because it's a descendant of UNIX, which sorta bled into Linux.

Maybe that's why? All of the snappiness and concepts were taken from the UNIX playbook in designing a kernel and OS that isn't a fat RAM hog gobbling your system resources the minute you wake it up.

I apologise in advance for any possible techno gibberish, but I would really like to understand the "Linux is faster than a speeding bullet" phenomenon.

Cheers!

[-] jet@hackertalks.com 45 points 3 months ago* (last edited 3 months ago)

You're going to want to read the foundational published papers on operating system design. Especially kernel design and considerations. You can just Google any random graduate operating system class, and look at their reading list to get started.

E.g. https://www.cs.jhu.edu/~huang/cs718/spring20/syllabus.html

The big thing you want to look at is the different types of kernels there are, microkernels, monolithic kernels. How they divide memory, how they do IPC, how they incorporate drivers. All of these have different trade-offs.

BSD/macOS, Linux, NT/Windows, Xen/hypervisors... They all currently take different approaches, and they're all actually quite performant.

A while ago, process multiplexing and scheduling had a huge impact on the perceived performance of a system; now, with multicore machines extremely common, it's still important but not as impactful.
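The scheduling idea can be illustrated with a toy round-robin simulator (a deliberately simplified sketch, nothing like a real kernel scheduler): each task gets a fixed time slice and is preempted and requeued until its work is done, so short interactive tasks finish quickly even behind long ones.

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate round-robin scheduling.

    tasks: dict mapping task name -> remaining work units.
    Returns task names in the order they finish."""
    queue = deque(tasks.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                  # run for one time slice
        if remaining > 0:
            queue.append((name, remaining))   # preempted, back of the queue
        else:
            finished.append(name)             # done within this slice
    return finished

# A short task finishes early even though a long task arrived first.
print(round_robin({"long": 10, "short": 2}, quantum=2))  # ['short', 'long']
```

Real schedulers juggle priorities, CPU affinity, and fairness accounting on top of this basic idea, which is where the perceived-responsiveness trade-offs come from.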

Approaches to memory management, virtual memory, swapping to disk, the aggressiveness of this also has an impact on perceived system performance.
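One visible knob for that aggressiveness on Linux is `vm.swappiness` (higher values make the kernel swap more eagerly). A small sketch for reading it; the path is Linux-specific, so the function returns `None` elsewhere:

```python
from pathlib import Path

def read_swappiness(path="/proc/sys/vm/swappiness"):
    """Return the kernel's swappiness setting (0-200 on recent kernels),
    or None when the file doesn't exist (i.e. not on Linux)."""
    p = Path(path)
    if not p.exists():
        return None
    return int(p.read_text().strip())

value = read_swappiness()
if value is not None:
    print(f"vm.swappiness = {value}")
```

Tuning it (e.g. `sysctl vm.swappiness=10`) is a common way to trade throughput for desktop responsiveness.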

......

As you alluded to in your post, a lot of the perceived performance is not the operating system and kernel itself, but the user interface and extra services offered. Windows 11 is going to feel like a clunker for any retail user just due to all of the network-driven advertisements incorporated, which slow down the core interaction loop. If you click on the start menu and everything lags for a second while it pulls in new advertisements, you're going to feel that.

Start adding in background scanning for viruses and indexing for AI features, and you're adding a lot of load to the system that's not necessary.

[-] jet@hackertalks.com 9 points 3 months ago

Because everything's a trade-off, people optimize different systems for different things. If you have a real-time operating system that runs a power plant, it doesn't matter if the interface is clunky as long as it hits its timing targets for its tasks.

If you're running a data center server, you're probably worried more about total throughput over time, rather than immediate responsiveness to a terminal.

For a computer that does lots of machine learning and vector math, you might spend a massive amount of time making certain programs run a few percentage points faster by changing how memory is managed, how cache is allocated across CPUs, or how the network is accessed: you find your critical path and performance bottleneck and optimize that.
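A classic example of that kind of memory tuning is loop order: traversing a matrix in the order it is laid out in memory keeps the CPU cache warm. A sketch below; in pure Python the cache effect is small, but in C or with large NumPy arrays the traversal order can change runtimes severalfold:

```python
def sum_row_major(matrix):
    """Visit elements in memory order (cache-friendly for C-style layouts)."""
    total = 0
    for row in matrix:
        for x in row:
            total += x
    return total

def sum_col_major(matrix):
    """Visit elements column by column (strided access, cache-unfriendly)."""
    total = 0
    for j in range(len(matrix[0])):
        for row in matrix:
            total += row[j]
    return total

# Same answer either way; only the memory access pattern differs.
m = [[1, 2, 3], [4, 5, 6]]
assert sum_row_major(m) == sum_col_major(m) == 21
```

The result is identical, which is exactly why this kind of optimization is invisible until you profile the critical path.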

When we're talking about a general-use desktop computer, we tend to focus on minimizing the loop for anything a human would interact with. But because people could do anything, this is difficult to do perfectly in all scenarios. Just ask anybody who's run Chrome for a while without restarting and has a thousand tabs open: because all the RAM is being consumed, the computer starts to feel slow, because virtual memory management becomes more demanding...

TL;DR: all of the operating systems are capable of being very performant, and all of the kernels are really good; it's all the extra stuff people run at the same time that makes them feel different.

[-] DaGeek247@fedia.io 20 points 3 months ago

Because everything's a trade-off, people optimize different systems for different things

And Microsoft has chosen to optimize Windows 11 for online advertisers at or above the priority of the user experience.

[-] jet@hackertalks.com 12 points 3 months ago* (last edited 3 months ago)

Yeah, they seem hell-bent on making people hate windows. Not a great long-term strategy.

Before, you could argue most retail people wouldn't know a better experience; they'd just accept it. But now everybody has a phone, and that phone gives them a better experience than Windows. So the tolerance for this b******* is going to go down.

[-] Max_P@lemmy.max-p.me 41 points 3 months ago

Linux encourages users to send patches while Microsoft is the sole company that can modify Windows.

It's very common to see patches from Google/Meta/Cloudflare/Amazon squeezing more performance for their particular use cases. That benefits everyone in the end.

Microsoft, on the other hand, is more concerned about its enterprise sales and overall profits, so they don't care that much. Windows 7 was horribly bloated, and they didn't address it until Windows 8, and only because they realized it was too bloated to run on their new tablet PCs and had to do something about it.

Apple cares a lot, because their thing is energy efficient fanless netbooks, and phones, and tablets. macOS and iOS are very close in how they work, so Apple has all the incentive to keep it efficient because their software will also affect the hardware side of the business. Microsoft doesn't, it's the hardware partners that get stuck dealing with it.

The NT kernel is fairly good, it just doesn't get the attention it deserves. Microsoft mostly adds features on top of older features; they never go in and say "this sucks" and rewrite a feature, because that's very risky and may break millions of applications and affect their bottom line. Linux doesn't have to care about that.

I'd say, if Windows were open-source, we'd have some pretty solid Windows distributions, because the community would care enough to go in and fix a ton of bottlenecks that aren't worth it for Microsoft as a company to even review patches for, let alone develop and test. It's much more lucrative for them to release AI crap like Copilot than to make Windows 10% snappier, because most Windows purchasing decisions are made by corporate people based on marketing and business items rather than on it being an enjoyable experience. Less frustrated users? Nah. More productive employees with crappy AI features that barely work? Hell yeah 🤑

TL;DR: Windows sucks because Microsoft's business interests don't require Windows to be that good, merely good enough.

[-] balder1993@programming.dev 8 points 3 months ago

This is the right answer. To complement it, I'd just add that I've read someone say that at Microsoft there's no incentive to squeeze out performance, so why bother if it won't help you get promoted or get a bonus? All these things add up over time, so Windows only gets attention when there is actually a huge bottleneck.

It’s also worth noting (for non programmers out there) that speed has no correlation with the amount of code. Often it’s actually the opposite: things start simple and begin to grow in complexity and amount of code exactly to squeeze more optimizations for specific use-cases.

[-] henfredemars@infosec.pub 27 points 3 months ago

As I'm sure you've gathered, this is a complex and nuanced discussion, but to me the biggest factors making Linux fast are:

  • Absence of telemetry/data collection monitoring your use of the device
  • Open source development model encouraging the entire world to contribute to make the Linux kernel better

There's something to be said for a development model where a consistent 5% performance improvement in a filesystem or process scheduler is taken as a huge win. How would you even go about finding or contributing such a change to something closed-source like Windows 11? Academics write papers about the kernel's performance and how it can be improved, whereas I tend to think Microsoft takes more of a 'good enough' approach to such details.

[-] SuperSpruce@lemmy.zip 25 points 3 months ago

I think both the Windows NT Kernel and the Linux Kernel are solid speedy parts of the OS. The main bloat is what's on top.

Windows seems to have gone through progressively more bloated phases. Newer stock Windows programs are built from much heavier components.

  1. There's the Win32 phase, which is super fast and lightweight. Few programs from this phase are still alive; WordPad (RIP) is one of them.

  2. Then there's the broad Win64 phase, comprising mostly Win Vista/7/8/10-era parts. Word, Excel, and the old Outlook are examples. Slow at their inception, they have become considerably faster thanks to better hardware, but still aren't very snappy.

  3. And finally there's the new phase, Windows 11. Horribly bloated and laughably slow when pushed beyond the absolute basics. Examples include File Explorer, Notepad, Teams, and the new Outlook. Notepad is mostly fine, but even File Explorer takes multiple seconds to load basic things on midrange hardware. We all know how bad Teams is, and the new Outlook takes 30 seconds to launch and open an email even on high end hardware.

Much of the modern bloat comes from this latest phase, but somehow other parts of the system have seriously bloated as well, like all of the disk activity on startup, and even the windowing system, which used to be near-instant on crappy hardware back in the Win2000 era and now takes up to a second on modern midrange hardware on 11.

Linux has fared better against the onset of bloat than Windows, which is the main reason why it feels much snappier and uses less memory. Despite this, you can still see Linux getting significantly heavier over the years, from the super lightweight Trinity Desktop to what we have now. But, web browsers powering many greedy tabs can easily out-bloat GNOME, to the point where Linux only feels slightly faster than Windows because everything is in a browser.

[-] AVincentInSpace@pawb.social 9 points 3 months ago* (last edited 3 months ago)

Or, to use their proper names, Win32, WinForms, and (shudder) UWP/Metro

[-] SuperSpruce@lemmy.zip 6 points 3 months ago

I thought UWP/Metro was Win8/10. Win11 is "Fluent". Perhaps there were 4 phases, not just 3, but my post was already getting too long and the WinForms phase has been pretty much fully conquered by today's fast hardware.

[-] mvirts@lemmy.world 7 points 3 months ago

I agree wholeheartedly and want to add that with Linux systems it's much easier to clean out the bloat and be left with a usable system. It's possible on Windows but might as well not be since the bloat is so deeply integrated into the OS and SDK.

[-] t_378@lemmy.one 18 points 3 months ago* (last edited 3 months ago)

This may be a minor point, but I often think in discussions like these, people are talking about the entire OS rather than just the kernel. And while you can take a fully featured desktop ~~system~~ environment for a spin, and it's pretty good, a lightweight window manager is lightning quick.

If you stick to minimalistic apps for things like photo viewing, you can open folders with 1000s of images in thumbnail mode at incredible speeds, or enormous PDFs. Those are the types of tasks that seemingly slow W10 to a crawl.
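That "minimalistic apps" speedup has an analogue down at the system-call level: for example, Python's `os.scandir` returns name and type information in a single directory scan instead of issuing one `stat()` per file, which matters a lot with thousands of entries. A small sketch (the file names here are made up for the demo):

```python
import os
import tempfile

def list_images(directory, exts=(".png", ".jpg")):
    """Collect image filenames with one directory scan.

    scandir yields DirEntry objects carrying cached type info,
    avoiding a separate stat() call for every file."""
    names = []
    with os.scandir(directory) as it:
        for entry in it:
            if entry.is_file() and entry.name.lower().endswith(exts):
                names.append(entry.name)
    return sorted(names)

# Demo on a throwaway directory containing a few fake files.
with tempfile.TemporaryDirectory() as d:
    for name in ("a.png", "b.jpg", "notes.txt"):
        open(os.path.join(d, name), "w").close()
    print(list_images(d))  # ['a.png', 'b.jpg']
```

A lightweight file manager doing essentially this (and lazily decoding thumbnails) is why it can stay responsive where heavier shells crawl.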

In general I also have pretty good luck with stability on my machine. I don't find myself needing to kill apps that start misbehaving for unexplained reasons, except Firefox... But usually an update sorts it out.

[-] digdilem@lemmy.ml 14 points 3 months ago

Windows has an entirely different set of objectives. The coders have to layer on so many services that are insisted upon by marketing that no matter how optimised they make the kernel, it's always going to be a little boat carrying far too much cargo.

There are also fairly reliable rumours that the Windows codebase is very messy: evolved and complicated, supporting many obsolete things, and having suffered from different managers changing styles and objectives over the years. We don't know for sure because it's proprietary.

But that said, I use both and find each good for different things. Windows is much more stable than it used to be, and speed is adequate for most things, largely because we've become used to buying better hardware every few years.

[-] flork@lemy.lol 12 points 3 months ago

Windows has an entirely different set of objectives.

I never thought of it this way. My first reaction was "What do you mean 'different objectives', they're both operating systems!" But Windows is an operating system with the objective of making profit for Microsoft. Linux is an operating system with the goal of... being an operating system.

It really puts it in perspective. Windows (and Mac) can and will only be useful to the consumer up to a point.

[-] TimeSquirrel@kbin.melroy.org 10 points 3 months ago* (last edited 3 months ago)

Windows also still runs software unchanged from 20 or more years ago, while software on Linux has to be constantly updated to use new libraries and APIs, or else it's considered "dead" and very soon will no longer run or even compile in its current form.

It has a lot of baggage that Linux doesn't need to worry about. Up until Vista, you could even still natively run 16 bit DOS software from the 80s.

[-] independantiste@sh.itjust.works 8 points 3 months ago* (last edited 3 months ago)

That doesn't really explain why the file explorer compiled for 64-bit computers is slow as balls

[-] soundconjurer@mstdn.social 1 points 3 months ago* (last edited 3 months ago)

@independantiste @TimeSquirrel I could be wrong, but Windows' NTFS is also incredibly bad at reading/writing large numbers of small files. Windows Explorer can now be opened in separate processes; at least that's some improvement.

Edit: There's a reason why game developers create an archive of the files for the game rather than reading them from the FS itself.
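That workaround can be sketched in a few lines: bundling many small assets into one archive replaces thousands of filesystem lookups with a single open and a few reads (illustrative only; real game engines use custom pack formats, and the asset names below are made up):

```python
import io
import zipfile

def pack(files):
    """Bundle {name: bytes} into a single in-memory zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
        for name, data in files.items():
            zf.writestr(name, data)   # uncompressed, fast to seek into
    return buf.getvalue()

def read_asset(archive_bytes, name):
    """One archive open + one read, instead of per-file FS metadata lookups."""
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        return zf.read(name)

archive = pack({"textures/grass.dds": b"...", "sounds/step.ogg": b"\x01\x02"})
print(read_asset(archive, "sounds/step.ogg"))  # b'\x01\x02'
```

The archive's central directory acts as one compact index, so the per-file metadata cost that hurts NTFS with many small files is paid only once.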

[-] independantiste@sh.itjust.works 4 points 3 months ago

The question really is: why do they keep clinging to NTFS? It's like 156 years old at this point. There are so many newer alternatives, like btrfs, that are faster, support bigger drives, and have more features, like snapshots.

[-] i_am_not_a_robot@feddit.uk 3 points 3 months ago

Not sure about DOS, but Windows 10 will happily run 16-bit Windows software. You have to use the 32-bit version of Windows though - the 64-bit version dropped support.

[-] msage@programming.dev 2 points 3 months ago

You can run Wine and it will probably work better than on Windows.

[-] dan@upvote.au 2 points 3 months ago* (last edited 3 months ago)

You could still run 16-bit apps on the 32-bit version of Windows 10! You just had to manually install NTVDM from the optional features dialog. It was completely unsupported by Microsoft, though.

They never ported NTVDM to 64-bit Windows, so it died once Windows became 64-bit only.

[-] wuphysics87@lemmy.ml 8 points 3 months ago

Read The Cathedral and the Bazaar and you'll have your answer

[-] alphapuggle@programming.dev 7 points 3 months ago

This is just a theory; I don't have knowledge of the inner workings of either Linux or Windows (beyond the basics). While Microsoft has been packing tons of telemetry into their OS since Windows 10, I think they fucked up the I/O stack somewhere along the way. Windows used to run well enough on HDDs, but can barely boot off one now.

This is most easily highlighted by using an optical disc drive. I was trying to read a DVD a while ago on a very modern system and noticed my whole machine was locked up. Just having the drive plugged in would prevent Windows from opening anything if it was already on, or from getting past the spinner on boot.

The same wasn't observed on Linux. It took a bit to mount the DVD, but at no point did it lock up my system until the disc was removed. I used to use CDs and DVDs all the time on XP and 7 without this happening, so I can only suspect that they messed up something with I/O, and that it has gone unnoticed because of their willingness to dismiss the issues in the belief that they're caused by telemetry.

[-] mrvictory1@lemmy.world 3 points 3 months ago

I had a USB stick with a faulty sector. Windows 10 froze for hours when I plugged it in, and I got an error similar to "loading ctrl alt del interface failed".

[-] possiblylinux127@lemmy.zip 4 points 3 months ago* (last edited 3 months ago)

It just doesn't have a ton of bloat. Windows is bloated while everything else is just normal

[-] dan@upvote.au 2 points 3 months ago* (last edited 3 months ago)

What does "just normal" mean? lol

I'm sure some people would disagree with the "doesn't have a ton of bloat" part in some cases... I've seen people complain about the number of apps preinstalled on the Fedora KDE spin, for example.

[-] possiblylinux127@lemmy.zip 1 points 3 months ago

I don't really have an answer

[-] bastion@feddit.nl 2 points 3 months ago* (last edited 3 months ago)

This is a bug report for the above comment:

Expected behavior: interesting or funny comment

Actual behavior: word salad

[-] possiblylinux127@lemmy.zip 2 points 3 months ago

Please listen carefully as our options have recently changed

[-] bastion@feddit.nl 2 points 3 months ago

Nice response for the edit. :-D

[-] dan@upvote.au 1 points 3 months ago

Your call is important to us. Please stay on the line, as calls are answered in the order they were received.

[-] possiblylinux127@lemmy.zip 1 points 3 months ago

If you have a question about a product please call our support line

[-] Eheran@lemmy.world 3 points 3 months ago

If there is free RAM, is there a reason not to use it?

[-] halm@leminal.space 10 points 3 months ago

Well, true up to the point where the OS itself uses up most of the available RAM just for basic processes (like tracking and reporting the users' data to the manufacturer's data centre).

[-] Sonotsugipaa@lemmy.dbzer0.com 2 points 3 months ago

Yes, caches. Lots of caches.
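On Linux you can see those caches directly in `/proc/meminfo`: the `Cached` line is RAM holding recently read file data, which the kernel drops instantly when applications need the memory. A small parsing sketch (the sample numbers below are made up for illustration):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into {field: kilobytes}."""
    info = {}
    for line in text.splitlines():
        field, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[field] = int(parts[0])   # first token is the value in kB
    return info

# Hypothetical snapshot of a machine with a well-warmed page cache.
sample = "MemTotal:  16000000 kB\nMemFree:   1000000 kB\nCached:    9000000 kB"
info = parse_meminfo(sample)
# Over half of this (made-up) machine's RAM is reclaimable page cache.
print(info["Cached"] / info["MemTotal"])  # 0.5625
```

This is why "free" RAM looks alarmingly low on a healthy Linux box: most of it is cache, not memory that's actually unavailable.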

this post was submitted on 15 Jul 2024
89 points (94.1% liked)
