To me, the idea of using market power as a key argument here seems quite convincing, because if there were meaningful competition in the search engine market, Google would probably have had much more difficulty imposing this slop on all users.
I disagree with the last part of this post, though (the idea that lawyers, doctors, firefighters, etc. are inevitably going to be replaced by AI as well, whether we want it or not). I think this is precisely what AI grifters would want us to believe, because if they could somehow force everyone in every part of society to pay for their slop, this would keep stock prices up. So far, however, AI has mainly been shoved into our lives by a few oligopolistic tech companies (and some VC-funded startups), and I think the main purpose here is to create the illusion (!) of inevitability, because that is what investors want.
Completely unrelated, but isn't the prevalence of cocaine use among U.S. adults also estimated to be more than 1%?
(Referring to this, of course - especially the last part: https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/)
Stock markets generally love layoffs, and they appear to love AI at the moment. To be honest, I'm not sure they thought beyond that.
Yes, they will create security problems anyway, but maybe, just maybe, users won’t copy-paste sensitive business documents into third-party web pages?
I can see that. It becomes kind of a protection racket: Pay our subscription fees, or data breaches are going to befall you, and you will only have yourself (and your chatbot-addicted employees) to blame.
At this point, it’s an even bet that they are doing this because Copilot has groomed the executives into thinking it can do no wrong.
This, or their investors (most likely both).
"Reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’"
True (even if this answer sounds like something a chatbot would generate). I have come across a few human slop generators/bots myself. However, making up entire titles of books or papers appears to be a specialty of AI. Humans would not normally go to this much trouble, I believe. They would either steal text directly from their sources (without proper attribution) or "quote" existing works without having read them.
So what kind of story can you tell? Perhaps a movie with a lot of dream sequences? Or a drug trip?
Maybe something like time travel, because then it might be okay if the protagonists kept changing their appearance to some degree. But even then, there wouldn't be enough consistency, I guess.
This has become a thought-terminating cliché all on its own: "They are only criticizing it because it is so much smarter than they are and they are afraid of getting replaced."
I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.
I am fully aware of this. However, in my experience, it is sometimes the IT departments themselves that push these chatbots onto others in the most aggressive way. I don't know whether they have found them useful for their own purposes (and therefore assume the same must apply to everyone else) or whether they are just pushing LLMs because this is what management expects of them.
First, we are providing legal advice to businesses, not individuals, which means that the questions we are dealing with tend to be even more complex and varied.
Additionally, I am a former professional writer myself (not in English, of course, but in my native language). Yet even I often find myself using complicated language when dealing with legal issues, because the matters tend to be very nuanced. "Dumbing down" something without understanding it very, very well creates a huge risk of getting it wrong.
There are, of course, people who are good at expressing legal information in layperson's terms, but they have usually studied their topic very intensively beforehand. If a chatbot explains something in “simple” language, its output usually contains serious errors that are easy for experts to spot, because the chatbot operates on the basis of stochastic rules and does not understand its subject at all.
This is, of course, a fairly blatant attempt at cheating. On the other hand: Could authors ever expect a review that's even remotely fair if reviewers outsource their task to a BS bot? In a sense, this is just manipulating a process that would not have been fair either way.