I get your point and it's funny, but it's different in important ways that are directly relevant to the OP article. The parent uses the instrumental theory of technology to dismiss the article, which argues, roughly, that antidemocracy is a property of AI. I'm saying that not only is that a valid argument, but that these kinds of properties are important, cumulative, and can fundamentally reshape our society.
I don't like this way of thinking about technology, which philosophers of tech call the "instrumental" theory. Instead, I think that technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that simpler tools, like a hammer or whatever, don't help us explain. Similarly, society shapes the way that we make technology.
In making technology, engineers and designers are constrained by the rules of the physical world, but those rules underconstrain the design. There are lots of ways to solve the same problem, each of which is equally valid, but those decisions still have to get made. How those decisions get made is the process through which we embed social values into the technology, and those values accumulate over time. To return to the example of mass transit vs cars, these obviously have different values embedded within them, which then go on to shape the world that we build around them. We wouldn't even be fighting about self-driving cars had we made different technological choices a while back.
That said, on the other side, just because technology is more than just a tool, and does have values embedded within it, doesn't mean that the use of a technology is deterministic. People find subversive ways to use technologies that go against the values built into them.
If this topic interests you, Andrew Feenberg's book Transforming Technology argues this at great length. His work is generally great and mostly on this topic or related ones.
Honestly I should just get that slide tattooed to my forehead next to a QR code to Weizenbaum's book. It'd save me a lot of talking!
I agree with you so strongly that I went ahead and updated my comment. The problem is general and out of control. Orwell said it best: "Journalism is printing something that someone does not want printed. Everything else is public relations."
These articles frustrate the shit out of me. They accept both the company's own framing and its selectively-released data at face value. If you get to pick your own framing and selectively release the data that suits you, you can justify anything.
I am once again begging journalists to be more critical ~~of tech companies~~.
> But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.
>
> [...] Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.
This is the wrong comparison. These are taxis, which means they're driving taxi miles. They should be compared to taxis, not to ordinary drivers who drive almost exclusively during their commutes (which is probably the most dangerous time to drive, since it's precisely when everyone else is on the road too).
We also need to know how often Waymo intervenes in the supposedly autonomous operations. The latest we have on this, which was leaked a while back, is that Cruise (a different company) cars are actually less autonomous than ordinary taxis, requiring more than one employee per car.
edit: The leaked data on human interventions was from Cruise, not Waymo. I'm open to self-driving cars being safer than humans, but I don't believe a fucking word from tech companies until there's been an independent audit with full access to their facilities and data. So long as we rely on Waymo's own published numbers without knowing how the sausage is made, they can spin their data however they want.
edit2: Updated to say that journalists should be more critical in general, not just of tech companies.
David Graeber's Debt: The First 5000 Years. We all take debt for granted. It's fascinating to learn how differently we've thought about it over the millennia and how much of our modern world makes more sense when understood through its lens.
No need to apologize for length with me basically ever!
I was thinking of it the way you set it up in your second paragraph, but even more stripped down. The algorithm has N content buckets to choose from; once it chooses, its success is how much of the video the user watched. For simplicity, users can only keep watching or log off. For small N, I think that @kersplomp@programming.dev is right that it's the multi-armed bandit problem, if we assume that user preferences are static. If we introduce the complexity that users prefer familiar things (which I think is pretty fair), so that users are more likely to keep watching a bucket they've already been shown a lot, then I assume exploration gets heavily disincentivized while exploitation becomes much more favorable, and the system exhibits some pretty weird behavior. What I like about this is that, with only a small deviation from a classic problem, it helps explain what you also describe: getting stuck in corners.
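Here's a minimal sketch of that toy model, assuming an epsilon-greedy recommender and a made-up familiarity bonus on the watch fraction (the bucket count, weights, and reward shape are all invented for illustration, not taken from any real system):

```python
import random

def simulate(n_buckets=10, steps=5000, epsilon=0.1, familiarity_weight=0.5, seed=0):
    """Toy recommender: pick one of n_buckets each step; the 'user' returns a
    watch fraction in [0, 1] mixing a fixed base preference with a bonus for
    buckets they've already been shown a lot (familiarity)."""
    rng = random.Random(seed)
    base_pref = [rng.random() for _ in range(n_buckets)]  # static tastes
    shown = [0] * n_buckets                               # exposure counts
    est_value = [0.0] * n_buckets                         # recommender's running estimates

    for t in range(1, steps + 1):
        # epsilon-greedy: mostly exploit the best-looking bucket, occasionally explore
        if rng.random() < epsilon:
            arm = rng.randrange(n_buckets)
        else:
            arm = max(range(n_buckets), key=lambda a: est_value[a])

        familiarity = shown[arm] / t  # share of past recommendations that were this bucket
        reward = ((1 - familiarity_weight) * base_pref[arm]
                  + familiarity_weight * familiarity)     # watch fraction
        shown[arm] += 1
        est_value[arm] += (reward - est_value[arm]) / shown[arm]  # incremental mean

    return shown  # how often each bucket ended up being recommended

# familiarity_weight=0 is the plain static-preference bandit; raising it makes
# early exposure self-reinforcing, so recommendations pile onto whichever
# bucket happened to get ahead first.
print(simulate(familiarity_weight=0.0))
print(simulate(familiarity_weight=0.7))
```

With the familiarity weight at zero this is just the classic problem; turning it up is the small deviation that produces the corner-seeking behavior.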
Once you allow user choice beyond consume/log off, I think your way of thinking about it, as a turn based game, is exactly right, and your point about bin refinement is great and I hadn't thought of that.
Yeah, I really couldn't agree more. I harped on the importance of other properties of the medium, like brevity, and on how those too are structurally right wing, when I reviewed the book #HashtagActivism. There are a lot of scholars doing these kinds of network studies, and imo they far too often emphasize user-to-user dynamics and de-emphasize, if not totally omit, the fact that all of these interactions are heavily mediated. Just this week I watched a talk that I thought had many of these same problems.
I knew you were the person to call :)
Thanks!
I feel enlightened now that you called out the self-reinforcing nature of the algorithms. It makes sense that an RL agent solving the bandits problem would create its own bubbles out of laziness.
You're totally right that it's like a multi-armed bandit problem, but maybe with so many possibilities that searching is prohibitively expensive, since the space of options is vastly bigger than the amount of content a human can ever consume. In another way, though, it's dissimilar, because the agent's reward depends on its own past choices (people watch more of what they've already been recommended). It would be really interesting to know if anyone has modeled a multi-armed bandit problem with this kind of self-dependency. I bet that, in that case, the exploration behavior is pretty chaotic. @abucci@buc.ci this seems like something you might just know off the top of your head!
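To put a rough scale on "prohibitively expensive", here's a back-of-envelope with invented numbers (the bucket count and viewing rate are pure assumptions, not figures from any real platform):

```python
# Back-of-envelope: why brute-force exploration is hopeless when the option
# space dwarfs the rate at which anyone can consume content.
n_buckets = 1_000_000   # assume the recommender distinguishes ~1M content buckets
videos_per_day = 100    # assume a heavy user watches ~100 videos a day

days_for_one_pull_each = n_buckets / videos_per_day
print(f"{days_for_one_pull_each:,.0f} days "
      f"(~{days_for_one_pull_each / 365:.0f} years) to try every bucket just once")
# -> 10,000 days, roughly 27 years, before the bandit has even one sample per arm.
```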
Maybe we can take advantage of that laziness to incept critical thinking back into social media, or at least have it eat itself.
If you have any ideas for how to turn social media against itself, I'd love to hear them. I worked on this post for an unusually long time, for a lot of reasons, but one of them was trying to think of a counter-strategy. I came up with nothing, though!
Great comment. Taking it further, making "politics" inherently negative has a lot of propaganda value for those in power. The people in charge generally want to defend the status quo, so they'd rather depoliticize the populace. This is why you get such strange contradictions as the people in charge constantly attacking "political elites" or "the swamp" or whatever. They're trying to discredit politics itself to consolidate their power. Similarly, when they do want to change something, they say "it's not politics; it's common sense." They want a population that feels like politics is something inherently dubious, or at least just not worth their time and effort.
Inclusion has always been and will always be a political project, because there are people who want power and who will use it to exclude people for whatever reason.