You're right to think that "since it's open source, people can see what it's doing and would notice something malicious right away" is bullshit, 'cause it pretty much is. I sure as hell don't spend weeks analyzing the source code of every third-party open source package or program that I use. But just like with closed-source software, there's a much bigger story of trust and infrastructure in play.
For one, while the average Joe Code isn't analyzing the source of every new project that pops up, there are people whose job is literally that. Think academic institutions and security companies like Kaspersky. You can probably argue that stuff like that is underfunded, but it definitely exists. And new projects that gain enough popularity to matter, but don't come from established, trusted developers, are gonna be subject to extra scrutiny.
For two, in order for a malicious (new) project to be a real problem, it has to gain enough popularity to reach its targets, and the open source ecosystem is pretty freakin' huge. There are two main ways that happens: A) it's developed, at least partially, by an established, trusted entity in the ecosystem, or B) it catches the eye of enough trusted or influential entities to gain momentum. On point B, in my experience, the kind of person who takes chances on small, unknown, no-name projects is just naturally the "exceptionally curious" type. "Hmm, I need to do X, I wonder what's out there already that could do it. Hey, here's something. Is it worth using? I wonder how they solved X. Lemme take a look..."
For three, the open source ecosystem relies heavily on distribution systems, stuff like GitHub, NuGet, npm, and Docker Hub, and they take on a big chunk of responsibility for the security and trustworthiness of the stuff they distribute. They do things like code scanning, binary validation, and identity verification, and of course they take punitive measures against identified bad actors (i.e. banning).
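To make "binary validation" a bit more concrete, here's a minimal sketch of the core idea: hash a downloaded artifact and compare it against a published checksum. This isn't any registry's actual pipeline, and the script name and usage are made up for illustration; real registries and package managers do far more than this, at scale.

```python
# Minimal sketch: verify a downloaded artifact against a published SHA-256
# checksum. Hypothetical usage: python verify.py <artifact> <expected-sha256>
import hashlib
import sys


def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, streaming it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


if __name__ == "__main__":
    artifact, expected = sys.argv[1], sys.argv[2]
    actual = sha256_of(artifact)
    if actual == expected.lower():
        print("OK: checksum matches")
    else:
        print(f"MISMATCH: got {actual}, expected {expected}")
        sys.exit(1)
```

Package managers bake this kind of integrity check into their install flows (lockfiles with content hashes, signed metadata, and so on), so most users never have to run it by hand.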
All that being said, none of the above is perfect, and malicious actors absolutely do still manage to plant malware in open source software that we all rely on. The hope is that, with all of the above (plus whatever I've missed), the odds of it happening are low, and that when it DOES happen, it's way easier to identify and fix than when we have to trust a private party to do it behind closed doors.
Great recent example, from last year: https://www.akamai.com/blog/security-research/critical-linux-backdoor-xz-utils-discovered-what-to-know
Me, I see this story as rather uplifting. I think it shows that the ecosystem we have in place does a pretty good job of controlling even the worst malicious actors, 'cause this story involves just about the worst kind of malicious actor you could imagine. They spent a full two years doing REAL open source work to build the community trust I talked about, while maintaining a small army of fake accounts filing support requests to pressure the project into adding more maintainers, all of which ended with a VERY sophisticated, VERY severe backdoor getting added. And they still got found out relatively quickly.