It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of energy and a machine that cost thousands of dollars, and still not being able to beat an average club player.
Now a phone will cream the world's best at chess, and even Go.
Give it twenty years to become good. It will certainly do more with smaller, more efficient models as it improves.
We really need to work out the implications of the fact that Moore's Law is dead, and that technology doesn't necessarily advance on an exponential path like that anyway. Not in all cases.
The cost per component of an integrated circuit (the original formulation of Moore's Law) is barely going down at all anymore. We're orders of magnitude away from where we "should" be if we start from the Intel 8008 and cut the cost in half every 24 months. New nodes are producing smaller components, but they're not getting cheaper. The fact that it took decades to get to this point is impressive, but it was already an exception in all of human history. Why can't we just be happy that the computers we already have are pretty damned neat?
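A quick back-of-the-envelope shows the size of that gap. A sketch, assuming the 8008's 1972 debut as the starting point and a clean 24-month halving:
```python
# Hypothetical cost-per-component factor if the original Moore's Law
# (cost halves every 24 months) had held from the Intel 8008 (1972) on.
start_year = 1972
current_year = 2025
halvings = (current_year - start_year) / 2  # one halving per 24 months
factor = 2 ** halvings
print(f"{halvings:.1f} halvings -> ~1/{factor:,.0f} of 1972 cost per component")
# ~26.5 halvings -> roughly a 95-million-fold reduction "expected"
```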
Anyway, AI is not following anything like that path. This might mean a big breakthrough tomorrow, or it could be decades from now. It might even turn out not to be possible; I think there is some way we can do AGI on computers of some kind, but that's not even the consensus among computer scientists. In any case, there's no particular reason to think LLMs will follow anything like the exponential growth path of Moore's Law. They seem to have hit a point of diminishing returns.
The 19th and 20th centuries saw so much technological advancement and we got used to that amount of change.
That’s why people were expecting Mars by the mid 80s and flying cars and other fanciful tech by now.
The problem is that the rate of advancement is slowing down, and economies that demand infinite, compounding growth are not prepared for this.
It might, but:
Stockfish on ancient hardware will still mop up any human GM
Umm... OK, but that's a bit beside the point?
Unless you mean to include those 1980 computers, in which case Stockfish won't run on them... a home computer more than about ten years old would likely be unable to run it.
If you want to argue in favor of your slop machine, you're going to have to stop making false equivalences, or at least understand how they're false. You can't gain ground with things that are just tangential.
A computer in 1980 was still a computer, not a chess machine. It did general-purpose processing and followed whatever you guided it to do. Neural models don't do that; they're each highly specialized and take a long time to train. And the issue isn't with neural models in general.
The issue is neural models being purported to do things they functionally cannot, because that's not how models work. Computing is complex, code is complex, and adding new functionality that operates off fixed inputs alone is hard. And now we're supposed to buy that something that creates word-relationship vector maps is going to create something new?
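To be concrete about what a "word-relationship vector map" is: the "relationships" are just distances between vectors. A toy sketch (the vectors here are made up for illustration; real models use hundreds of dimensions):
```python
import numpy as np

# Made-up 3-d embeddings standing in for a real model's learned vectors.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.3]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: "related"
print(cosine_similarity(vectors["king"], vectors["apple"]))  # low: "unrelated"
```
Everything the model "knows" is relationships like these, which is exactly why "create something new" is the wrong job for it.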
For code generation, it's the equivalent of copying and pasting from Stack Overflow with a find/replace, or just splicing multiple projects together. It isn't something new; it's kitbashing at best, and that's assuming it all works flawlessly.
With art, it's taking creation away from people, and jobs with it. I like how you ignored literally every point raised except the one you could dance around with a tangent. All these CEOs act like "no one likes creating art or music." No: THEY just don't want to spend time creating, or pay someone who does enjoy it. I love playing with 3D modeling and learning how to make the changes I want consistently. I like learning more about painting when texturing models, and taking the time to create intentional masks. I like taking time when I'm baking to learn and create; otherwise I could just buy a Duncan Hines box mix and get something that's fine, but not what I can make when I take the time to learn.
And I love learning guitar. I love feeling that slow growth of skill as I find I can play cleaner the more I do. And when I can close my eyes and strum a song, there's a tremendous feeling from making this beautiful instrument sing like that.
It's because the tech bros have zero empathy or humanity. LLM slop is perfect for them.
Oh my God, that's perfect. It's kitbashing. That's exactly how it feels.
Stockfish can't play Go. The resources you spent making the chess program didn't port over.
In the same way you can use a processor to run a completely different program, you can use a GPU to run a completely different model.
So if current models can't do it, you'd be foolish to bet against models twenty years from now being able to do it.
I think the problem is that you think you're talking like a time traveler heralding the wonders of sliced bread, when really it's more like telling a small Victorian child about the wonders of Applebee's. In the unlikely event they survive long enough to see it, they find everything is a lukewarm, microwaved, pale imitation of just buying the real thing at Aldi and cooking it themselves: less time, far tastier, and a fraction of the cost.
Show me the chess machine that caused rolling brown outs and polluted the air and water of a whole city.
I'll wait.
It probably would have, if IBM had decided that every household in the USA needed chess-playing compute capacity and made everyone dial up to a single facility in the middle of a desert where land and taxes were cheap, so they could charge everyone a monthly fee for the privilege...
Servers were eating up a significant portion of electricity for years before AI. Whether we get something useful out of it is what matters.
That's the hangup, isn't it? It produces nothing of value. Stolen art. Bad code. Even more frustrating phone experiences. Oh, and millions of lost jobs and ruined lives.
It's the most American way they could possibly have set trillions of dollars on fire, short of carpet-bombing poor brown people somewhere.
Not even remotely close to this scale... At most you could compare the energy usage to the miners during the crypto craze, but I'm pretty sure even that is just a tiny fraction of what's going on right now.
Very wrong
In 2023 AI used 40 TWh of energy in the US out of a total 176 TWh used by data centers
https://davidmytton.blog/how-much-energy-do-data-centers-use/
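That works out to about a fifth of data center load:
```python
ai_twh, datacenter_twh = 40, 176  # 2023 US figures from the post above
print(f"AI share of US data center energy, 2023: {ai_twh / datacenter_twh:.1%}")
# -> 22.7%
```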
From the blog you quoted yourself:
(And likewise, the last graph of predictions for 2028)
From a quick read of that source, it is unclear to me if it factors in the electricity cost of training the models. It seems to me that it doesn't.
I found more information here: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
So I'm not sure those 2023 numbers paint the full picture. And adoption of AI-powered tools was definitely not as high in 2023 as it is nowadays, so I wouldn't be surprised if the real share were now much higher than the reported 22.7% of total server power usage in the US.
Crypto miners wish they could be this inefficient. No, literally, they do. They're the "rolling coal" mfers of the internet.
Not the same. The underlying tech of LLMs has massively diminishing returns. You could already see it a year ago if you looked, both in computing power and in required data; and we do not have enough data, literally have not created enough in all of history.
This is not "AI", it's a profoundly wasteful capitalist party trick.
Please get off the slop and re-build your brain.
That's the argument Paul Krugman used to justify his opinion that the internet peaked in 1998.
You still need to wait for AI to crash, for a bunch of research to happen, and for the next wave to come. You can't judge the internet by the dot-com crash; it became much more impactful later on.
No. No, I don't. I trust Alan Turing.
NB: Alan Turing famously invented ChatGPT
One of the major contributors to early versions. Then they did the math and figured out it was a dead end. Yes.
Also, one of the other contributors (Weizenbaum, I think?) pointed out that not only was it stupid, it was dangerous: it made people deranged, fanatical devotees impervious to reason, who would discard their entire intellect and education to cult about this shit, in a madness no logic could breach. And that's just from ELIZA.
We're talking about 80 years ago
~1948-52, yeah
Edit: The underlying math and method. Not alone, of course. The main difference between then and now is the data set and some tuning, not a fundamentally new method or kind of thing.
It seems like you are implying that models will follow Moore's Law, but as someone working on "agents," I don't see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would reliably get exponential amounts of training data is another issue. We may get "AI," but it isn't going to be based on LLMs.
You can't predict how the next twenty years of research improves on the current techniques because we haven't done the research.
Is it going to be specialized agents? Because you don't need a lot of data to do one task well. Or maybe it's a lot of data but you keep getting more of it (robot movement? stock market data?)
We do already know about model collapse, though: genAI is essentially eating its own training data. And we do know that you need a TON of data to do even one thing well. Even then it only does well on things strongly matching the training data.
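Model collapse is easy to see even with the simplest possible "model". A toy sketch (not any particular paper's setup): a Gaussian repeatedly refit to samples from the previous fit loses its spread over generations:
```python
import numpy as np

rng = np.random.default_rng(0)
mean, std = 0.0, 1.0  # generation 0: the "real" data distribution

# Each generation fits on data sampled from the previous generation's
# model instead of from the original distribution.
for gen in range(1, 101):
    samples = rng.normal(mean, std, size=20)
    mean, std = samples.mean(), samples.std()
    if gen % 20 == 0:
        print(f"generation {gen:3d}: std = {std:.3f}")
# The fitted spread drifts toward zero: the chain of models gradually
# forgets the tails of the data it started from.
```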
Most people throwing around the word "agents" have no idea what they mean versus what the people building and promoting them mean. Agents have been around for decades, but what most are building now is just genAI doing natural-language processing to call scripted Python flows. The only way to make them look reliably coherent is to remove as much responsibility from the LLM as possible. Multi-agent systems just compound the errors. The current best practice for building agents is "don't use an LLM, and if you do, don't use multiple." We will never get beyond the current techniques essentially being seeded random generators, because that's what they are intended to be.
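To sketch what "calling scripted Python flows" means in practice (call_llm here is a hypothetical stand-in for whatever model API you'd use; the scripted flows are the part that actually works):
```python
# The "LLM as router" pattern: the model only picks a label from a
# fixed menu, and scripted Python does all of the real work.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return "refund"

FLOWS = {
    "refund":  lambda req: f"Started the refund flow for: {req}",
    "billing": lambda req: f"Looked up billing records for: {req}",
    "other":   lambda req: f"Escalated to a human: {req}",
}

def handle(request: str) -> str:
    label = call_llm(
        f"Classify this request as exactly one of {sorted(FLOWS)}: {request}"
    ).strip().lower()
    # Anything unexpected falls through to a scripted default, keeping
    # responsibility out of the LLM's hands.
    return FLOWS.get(label, FLOWS["other"])(request)

print(handle("I was double-charged last month."))
```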
Twenty years is a very long time, and "good" is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.
There will inevitably be a crash in AI, and people will forget about it for a while. Then some will work on innovative techniques and make breakthroughs without fanfare.