this post was submitted on 03 Oct 2025
21 points (100.0% liked)
Alphane_Moon@lemmy.world | 8 points | 2 weeks ago

Microsoft is reportedly in the process of bringing a second-generation Maia accelerator to market next year that will no doubt offer more competitive compute, memory, and interconnect performance.

But while we may see a change in the mix of GPUs to AI ASICs in Microsoft data centers moving forward, they're unlikely to replace Nvidia and AMD's chips entirely.

Over the past few years, Google and Amazon have deployed tens of thousands of their TPUs and Trainium accelerators. While these chips have helped secure some high-profile customer wins (Anthropic, for example), they are more often used to accelerate the companies' own in-house workloads.

If Google's TPUs or Amazon's Trainium systems were as good (and as flexible) as the competition's offerings, no one would be buying Nvidia and AMD's enterprise GPUs.