this post was submitted on 12 Mar 2025
Technology
Isn’t the much simpler and more likely explanation that they know these models will soon be far more efficient and won’t need that much compute power?
This efficiency-focused perspective makes more business sense than the pessimistic view that "AI is a nothingburger." If Microsoft truly believed AI had no future, they wouldn't have invested so heavily in OpenAI in the first place.
The decision to cancel 1,000 GW of future builds could simply reflect Microsoft's confidence that coming efficiency improvements will let it achieve its AI goals with less infrastructure, rather than fundamental doubt about AI's potential.
If AI were actually going to replace every human worker on earth, then even with huge efficiency gains you'd still want to build out absolutely ungodly amounts of compute capacity.
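To make that intuition concrete, here's a hypothetical back-of-envelope calculation (all figures are illustrative assumptions, not sourced numbers): if workloads grow faster than efficiency improves, total compute demand still rises.

```python
# Illustrative arithmetic only: efficiency gains don't reduce total
# compute demand if usage grows faster (the Jevons paradox).
# Every number below is a made-up assumption.

current_compute = 1.0    # today's fleet, normalized to 1
efficiency_gain = 10.0   # assume models become 10x cheaper to run
demand_growth = 100.0    # assume AI workloads grow 100x

needed_compute = current_compute * demand_growth / efficiency_gain
print(needed_compute)  # 10.0 -> still 10x today's capacity
```

Under those assumed numbers, a 10x efficiency win still leaves you needing ten times the compute you have today.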
Also, it should be noted that while DeepSeek has demonstrated that it's possible to substantially reduce the compute requirements of transformer-based models, doing so relies heavily on a "good enough" approach that moves the results further away from being enterprise-capable. It's not a cut-and-dried solution to the backend costs of running these models at the scale investors want to see them running at.
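The general shape of that trade-off can be sketched with a toy example. This is not DeepSeek's actual method (their savings come from techniques like mixture-of-experts routing and distillation); it's a minimal illustration of naive int8 weight quantization, where each weight takes a quarter of the memory at the cost of a small reconstruction error:

```python
import random

# Toy sketch of the accuracy-for-efficiency trade: quantize fp32-style
# weights down to int8. One byte per weight instead of four, but every
# weight picks up a small rounding error -- the "good enough" cost,
# which compounds across billions of parameters in a real model.

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(1000)]  # stand-in weights

scale = max(abs(w) for w in weights) / 127   # map the range onto int8
quantized = [round(w / scale) for w in weights]
dequantized = [q * scale for q in quantized]

# Rounding error per weight is bounded by scale / 2.
max_err = max(abs(w - d) for w, d in zip(weights, dequantized))
print(f"max reconstruction error: {max_err:.5f}")
```

The point of the sketch is only that the cheaper representation is lossy: the error per weight is tiny, but whether the aggregate degradation is acceptable at enterprise scale is exactly the open question.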