this post was submitted on 03 Jun 2025
1377 points (99.1% liked)

linuxmemes

25381 readers
957 users here now

    you are viewing a single comment's thread
    view the rest of the comments
    [–] SatyrSack@lemmy.sdf.org 314 points 4 days ago (8 children)

    Immediately get noticed

    Realistically, though, we are only aware of that one because it was noticed in that unlikely scenario and then widely reported. For all we know, most open source backdoors are alive and well in our computers, having gone unnoticed for years.

    [–] towerful@programming.dev 124 points 4 days ago (3 children)

    Yup.
    But in open source it CAN be noticed, by anyone determined enough to dig into its side effects.
    Proprietary software? You file a regression bug that startup takes 500ms longer, and it might get looked at.

    Also, backdoors that are discovered in open source software improve automated software auditing.

    [–] alaphic@lemmy.world 72 points 4 days ago

    500ms longer, and it might get looked at.

    Why would you even lie to the poor fellow like that? 🤣 lol

    [–] jj4211@lemmy.world 6 points 3 days ago

    Yeah, open a bug like that against proprietary software and it will immediately get rationalized away as having no business case to address, likely with someone who has zero direct development responsibility writing a BS explanation, like claiming the small impact was due to a number of architectural changes.

    Speaking as someone with years of exposure to business-managed issue handling.

    [–] magic_lobster_party@fedia.io 23 points 4 days ago (1 children)

    The flaw also highlighted a social engineering exploit. It’s not the first time a vulnerability has entered open source software through social pressure on the maintainer; the EventStream exploit is a notable example.

    This is difficult to account for. You can’t build automated tooling for social engineering exploits.

    [–] Ack@lemmy.ca 65 points 4 days ago* (last edited 4 days ago) (2 children)
    [–] meme_historian@lemmy.dbzer0.com 71 points 4 days ago (1 children)
    [–] Ack@lemmy.ca 28 points 4 days ago

    Wow, thanks, that’s way better than the link I found.

    [–] SatyrSack@lemmy.sdf.org 57 points 4 days ago

    Yes, this particular incident.

    https://en.wikipedia.org/wiki/XZ_Utils_backdoor

    In February 2024, a malicious backdoor was introduced to the Linux build of the xz utility within the liblzma library in versions 5.6.0 and 5.6.1 by an account using the name "Jia Tan".[b][4] The backdoor gives an attacker who possesses a specific Ed448 private key remote code execution through OpenSSH on the affected Linux system. The issue has been given the Common Vulnerabilities and Exposures number CVE-2024-3094 and has been assigned a CVSS score of 10.0, the highest possible score.[5]

    Microsoft employee and PostgreSQL developer Andres Freund reported the backdoor after investigating a performance regression in Debian Sid.[8] Freund noticed that SSH connections were generating an unexpectedly high amount of CPU usage as well as causing errors in Valgrind,[9] a memory debugging tool.[10]
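
    As an aside, the kind of regression Freund chased down can be surfaced with very ordinary tooling. A minimal sketch, with a made-up host, port, and run count (nothing here is from his actual investigation), that just times repeated SSH logins and reports the average:

```python
# Hypothetical sketch: time repeated SSH connection attempts to spot a latency
# regression like the one that exposed the xz backdoor. The host, port, and
# number of runs are illustrative assumptions.
import statistics
import subprocess
import time

HOST = "localhost"  # assumption: some host running sshd
PORT = 22
RUNS = 20

samples = []
for _ in range(RUNS):
    start = time.perf_counter()
    # Connect and immediately exit; BatchMode avoids interactive prompts.
    subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
         "-p", str(PORT), HOST, "exit"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    samples.append(time.perf_counter() - start)

print(f"mean: {statistics.mean(samples) * 1000:.1f} ms, "
      f"stdev: {statistics.pstdev(samples) * 1000:.1f} ms")
```

    A backdoored sshd that adds a few hundred milliseconds to every connection stands out immediately in numbers like these, which is roughly what made the regression visible in the first place.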

    [–] haui_lemmy@lemmy.giftedmc.com 48 points 4 days ago (3 children)

    That's not really how open source works. If you use an open source tool like, say, nano, it has been looked at and improved for many years by many people who have worked up an understanding of the code.

    I realize that this can only be natively understood by a programmer.

    What we (I) do when we work on open source projects is read through the code until we "get it", meaning we start to understand what does what. If you want to change something, you must locate it, which means finding out what it is and what it is not. The chance that someone stumbles across something that then sparks a full-blown investigation isn't that low. Of course you can hide something in extremely long and boring code, but it's also automatically tested by most software shops.

    In short: we haven't been doing this since yesterday, and that open source is so many universes better than closed source is a truth that only a fool would disregard.

    [–] squaresinger@lemmy.world 46 points 4 days ago (1 children)

    Are you sure?

    All I'm saying is leftPad, if you still remember.

    As a programmer I do not believe you when you claim that you read through all the code of all the libraries you include.

    Especially with more hardcore dependencies (like OpenSSL), hardly anyone reads through them.

    [–] rtxn@lemmy.world 25 points 4 days ago (1 children)

    That's assuming the attacker is stupid enough to put the exploit in the source code where it can be easily discovered.

    The Xz exploit was not present in the source code.

    It was hidden in the makefile as an obfuscated string and injected into the object file during the build process.
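
    Purely as an illustration of what "an obfuscated string in the build scripts" might look like to a reviewer, here is a rough sketch that flags long, high-entropy tokens in Makefiles and autotools files. The file patterns and the entropy threshold are arbitrary assumptions, not anything used in the real xz investigation.

```python
# Illustrative sketch: flag unusually long, opaque-looking tokens in build
# scripts, the kind of "gibberish" discussed above. The name list, suffix list,
# token pattern, and entropy threshold are all arbitrary assumptions.
import math
import pathlib
import re

def entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    counts = {c: s.count(c) for c in set(s)}
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

TOKEN = re.compile(r"[A-Za-z0-9+/=_.-]{40,}")  # long, opaque-looking strings
BUILD_NAMES = ("Makefile", "Makefile.in", "configure")
BUILD_SUFFIXES = (".m4", ".mk", ".ac", ".am")

for path in pathlib.Path(".").rglob("*"):
    if not path.is_file():
        continue
    if path.name not in BUILD_NAMES and path.suffix not in BUILD_SUFFIXES:
        continue
    text = path.read_text(errors="ignore")
    for match in TOKEN.finditer(text):
        token = match.group()
        if entropy(token) > 4.0:  # ordinary identifiers rarely score this high
            print(f"{path}: suspicious token {token[:32]}...")
```

    Plenty of legitimately generated build code will trip a crude check like this, which is part of why gibberish in that layer is so easy to wave through.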

    [–] haui_lemmy@lemmy.giftedmc.com 9 points 4 days ago (1 children)

    I saw the code. It was pretty obvious once you look at that particular piece. You have to adapt the makefile pretty often, so you would also see the gibberish. If you're a programmer and you encounter what YOU think is gibberish, all alarms go off.

    I don't know your experience in coding, but I don't see how a huge number (a given with old and popular code) of experienced people could overlook something like this.

    [–] r00ty@kbin.life 24 points 4 days ago (1 children)

    But this is the crucial thing. It wasn't in the repository. It was in the tarball. It's a very careful distinction, because people generally reviewed the repository and assumed that what's there is all that matters.

    Having the changes to the make process only present in the tarball was actually quite an ingenious move, because they knew that the process many distro maintainers use is to pull the tarball and work from that (likely with some automated scripting to make the package for their distro).

    This particular path will probably be harder to reproduce in the future. Larger projects, I would expect, have some verification process in place to ensure the tarball and the repository match (with the backup of people independently doing the same).
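
    A minimal sketch of that kind of verification, using a hypothetical project, tag, and tarball (none of the names below come from the thread): list the files inside the release tarball, list the files git tracks at the matching tag, and report anything that exists only in the tarball.

```python
# Hypothetical sketch: report files that are present in a release tarball but
# not tracked in the repository at the corresponding tag. All names below are
# made-up assumptions for the example.
import subprocess
import tarfile

TARBALL = "project-1.2.3.tar.gz"  # hypothetical release artifact
REPO = "./project"                # hypothetical local clone of the repository
TAG = "v1.2.3"                    # hypothetical tag the release claims to match
PREFIX = "project-1.2.3/"         # top-level directory inside the tarball

with tarfile.open(TARBALL) as tar:
    tar_files = {
        member.name[len(PREFIX):]
        for member in tar.getmembers()
        if member.isfile() and member.name.startswith(PREFIX)
    }

repo_files = set(
    subprocess.run(
        ["git", "-C", REPO, "ls-tree", "-r", "--name-only", TAG],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
)

for name in sorted(tar_files - repo_files):
    print(f"only in tarball: {name}")
```

    For autotools projects the output will include legitimately generated files (configure and friends), which is exactly the noise the malicious changes hid behind, so the listing still needs a human eye.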

    But that's not to say there won't in the future be some other method of attack that happens out of sight of the main repository and is missed by the existing processes.

    Absolutely understand the point. They had a good idea. They failed. Done. My point stands. FOSS is superior.

    [–] 0x0@lemmy.zip 1 points 3 days ago (1 children)

    automatically tested by most software shops.

    Really?

    I feel like it's a mixed bag. Certainly there's an infinitely higher chance of someone randomly noticing a backdoor in OSS than in closed source, simply because any OSS project in use has someone looking at it. Many closed systems have dusty corners that haven't had programmer eyes on them in years.

    But also, modern dev requires either more vigilance than most of us have to give or more trust than most of us would ideally be comfortable offering. Forget leftpad: I've had npm dependencies run a full Python script to compile and build sub-dependencies. Every time I run npm update, it could be mining a couple of bitcoins for all I know, in addition to installing gigs and gigs of other people's code.

    The whole industry had deep talks after leftpadgate about what needed to be done, and ultimately not much changed. NPM changed its policy so that people couldn't just disappear their packages, but we didn't come up with some better way.

    Pretty much every language has its own NPM now, so the problem is more widespread than ever. With Rust, a crate can run arbitrary macros and Rust code in its build files, and it can embed C dependencies. I'm not saying it would be super easy to hide something in Cargo; I haven't tried, so I don't know. But I do think the build system is incredibly vulnerable to supply chain attacks. A dependency chain could easily pull in some backdoored native code, embed it deep into your app, and you might never realize it's even there.
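
    For the npm side of that worry, a throwaway first-pass audit might look like the sketch below (the node_modules path is an assumption; run it from a project root with dependencies installed): it lists which installed packages declare lifecycle scripts, i.e. code that runs at install time.

```python
# Rough sketch: list installed npm packages that declare lifecycle scripts
# (preinstall/install/postinstall). The node_modules location is an assumption.
import json
import pathlib

LIFECYCLE = ("preinstall", "install", "postinstall")
root = pathlib.Path("node_modules")  # assumption: run from a JS project root

for manifest in root.rglob("package.json"):
    try:
        data = json.loads(manifest.read_text(errors="ignore"))
    except (OSError, json.JSONDecodeError):
        continue
    if not isinstance(data, dict):
        continue
    scripts = data.get("scripts")
    if not isinstance(scripts, dict):
        continue
    hooks = {name: cmd for name, cmd in scripts.items() if name in LIFECYCLE}
    if hooks:
        print(f"{data.get('name', manifest.parent.name)}: {hooks}")
```

    Pairing a listing like this with npm's --ignore-scripts option at least makes install-time code an explicit choice, though it says nothing about what a package does once you actually import it.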

    [–] jj4211@lemmy.world 6 points 3 days ago (1 children)

    Evidence suggests this isn't the case.

    We know of far more closed source backdoors, despite their being harder to notice in practice, whether they were found before they became a problem or after they had been used in an attack. So we know backdoors can get noticed even without access to source code.

    Meanwhile we have comparatively fewer backdoor-type findings in major open source software, despite, and thanks to, the increased scrutiny. So many people want to pad their resume with "findings" that they hit up open source software relentlessly. This can be obnoxious, because many of the findings are flat-out incorrect or have no actual security implications, but among the noise is a relatively higher likelihood that real issues get noticed.

    The nature of the xz attack shows the increased complexity of trying to backdoor open source: sneaking a malicious binary patch into test data, because the source code would be too obvious, and hiding the application of that patch behind obfuscated build scripts that would only trigger under specific circumstances. Meanwhile, closed source backdoors have frequently been pretty straightforward, yet still managed to ship and go undetected.

    Even if we fail to detect unused backdoors, at some point someone will actually want to use their backdoor, and then it is likely to be found.

    [–] TheKMAP@lemmynsfw.com 1 points 2 days ago (1 children)

    I'm not sure how you can provide evidence that one thing has fewer unknown unknowns than another thing.

    [–] jj4211@lemmy.world 2 points 2 days ago

    By the relative volume of the known things. It's not a guarantee, but it's highly suggestive: the more observed instances of something there are, the more not-yet-observed instances of the same thing are probably out there.

    There are factors that can knock that out of balance, like not having access to source code making things harder to find, but those confounding factors would hide more on the closed source side than the open source side.
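
    A toy back-of-the-envelope version of that argument, with entirely made-up numbers: if each existing backdoor is discovered independently with some probability, then the known counts scale with the true counts, and a lower discovery rate on the closed source side only partially masks the difference.

```python
# All numbers below are invented purely to illustrate the reasoning above.
true_closed, true_open = 200, 50   # hypothetical true numbers of backdoors
p_closed, p_open = 0.3, 0.6        # discovery assumed harder without source access

known_closed = true_closed * p_closed  # expected discovered closed-source backdoors
known_open = true_open * p_open        # expected discovered open-source backdoors

# Prints 60.0 and 30.0: more known closed-source cases even though each one
# was only half as likely to be found, consistent with the caveat above about
# confounding factors hiding more on the closed source side.
print(known_closed, known_open)
```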

    [–] possiblylinux127@lemmy.zip 24 points 4 days ago (2 children)

    I haven't really seen any evidence to support this

    [–] Gladaed@feddit.org 16 points 4 days ago (1 children)
    [–] limer@lemmy.dbzer0.com 2 points 3 days ago

    Which in itself is worrying to me; given that there are now tens of thousands of in-use libraries and millions of programmers, the chances are high that someone tried at least once more than we have heard about.

    And I know there have been several attempts, but there seems to be a lack of information about them all in one easy-to-read place.

    [–] SatyrSack@lemmy.sdf.org 1 points 3 days ago (1 children)

    There doesn't need to be any evidence. This is something that is impossible to prove one way or the other, like Last Thursdayism.

    [–] possiblylinux127@lemmy.zip 1 points 3 days ago* (last edited 3 days ago)

    That's just called a conspiracy theory

    [–] pinball_wizard@lemmy.zip 9 points 3 days ago

    For all we know...

    This isn't something we need to speculate about. The vulnerability histories of popular closed and open source tools are both part of public data sets.

    Looking into that data, the thing that stands out is that certain proprietary software vendors have terrible security track records, and open source tools from very small teams may be a mixed bag.

    [–] Peffse@lemmy.world 17 points 4 days ago (2 children)

    Reminds me of the old Debian OpenSSL vulnerability that went unnoticed for 2 years... but it did eventually get noticed.

    https://lists.debian.org/debian-security-announce/2008/msg00152.html

    [–] Samskara@sh.itjust.works 1 points 3 days ago

    OpenSSL has a whole list of serious security issues; Heartbleed and goto fail are what I remember right away.