submitted 2 months ago by vegeta@lemmy.world to c/technology@lemmy.world
[-] brucethemoose@lemmy.world 12 points 2 months ago* (last edited 2 months ago)

People overblow the importance of ISA.

Honestly, a lot of the differences come down to business decisions. There is a balance between price, raw performance, and power efficiency. Apple tends to focus exclusively on the latter two at the expense of price, while Intel (and AMD) have a bad habit of chasing cheap raw performance.

[-] InvertedParallax@lemm.ee 3 points 2 months ago

Decode overhead is fairly fixed, and has become proportionally tiny over the decades. Most of the larger instructions dispatch to microcode, and compilers know better than to use them much.

There's a price to x86, but for larger cores it's pretty small, we've learned to work around it.

Apple bothered to do the things Intel was too lazy to do for so long: in particular, beefing up the out-of-order (OoO) machinery and other resources where Intel didn't want to spend the silicon. Intel has always been cheap, nickel-and-diming its way out of performance, and this is the cost.

[-] GamingChairModel@lemmy.world 2 points 2 months ago

Apple does two things that are very expensive:

  1. They use a huge physical area of silicon for their high-performance chips. The "Pro" line of M chips has a die size of around 280 square mm, the "Max" line is about 500 square mm, and the "Ultra" line is possibly more than 1000 square mm. This is incredibly expensive to manufacture and package.
  2. They pay top dollar for exclusive access to TSMC's new nodes. They lock up the first year or so of TSMC's manufacturing capacity at any given node, after which there is enough capacity to accommodate designs from other TSMC clients (AMD, NVIDIA, Qualcomm, etc.). That means you can go out and buy an Apple device made on TSMC's latest node before AMD or Qualcomm have even announced the lines that will use those nodes.

Those are business decisions that others simply can't afford to follow.
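To put a rough number on that expense, here's a toy estimate of good dies per 300 mm wafer under a simple Poisson yield model. The defect density, the edge-loss approximation, and the hypothetical 1000 mm^2 monolithic die are all illustrative assumptions, not Apple or TSMC figures:

```python
import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.07  # assumed defect density for a mature node (illustrative)

def dies_per_wafer(die_area_mm2: float) -> int:
    """Crude candidate-die count for a 300 mm wafer, with a simple edge-loss correction."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2: float) -> float:
    """Probability a die has zero defects under a Poisson defect model."""
    return math.exp(-DEFECTS_PER_CM2 * die_area_mm2 / 100)

for name, area in [("~Pro", 280), ("~Max", 500), ("hypothetical 1000 mm^2 die", 1000)]:
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{name}: {dies_per_wafer(area)} candidates, "
          f"yield {poisson_yield(area):.0%}, ~{good:.0f} good dies/wafer")
```

The point is just the trend: yield falls off exponentially with area, so each step up in die size costs disproportionately more per good chip.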

[-] InvertedParallax@lemm.ee 1 points 2 months ago

~800 mm² is the reticle limit; they're not past that, it doesn't make sense.

They chiplet past 500, the economics break down otherwise.
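A back-of-the-envelope sketch of why the economics break down (the defect density and bond yield below are assumed illustrative numbers, not real figures): with chiplets, each die can be tested before bonding, so a defect only scraps one small die instead of the whole part.

```python
import math

DEFECTS_PER_CM2 = 0.07  # assumed defect density (illustrative)
BOND_YIELD = 0.98       # assumed yield of the die-to-die bonding step (illustrative)

def die_yield(area_mm2: float) -> float:
    """Zero-defect probability under a simple Poisson defect model."""
    return math.exp(-DEFECTS_PER_CM2 * area_mm2 / 100)

# Hypothetical monolithic 1000 mm^2 die: one defect kills the whole die.
silicon_per_good_monolithic = 1000 / die_yield(1000)

# Two 500 mm^2 chiplets: only known-good dies are bonded, so a defect
# scraps 500 mm^2 rather than 1000.
silicon_per_good_pair = (2 * 500 / die_yield(500)) / BOND_YIELD

print(f"monolithic: ~{silicon_per_good_monolithic:.0f} mm^2 of wafer per good part")
print(f"chiplets:   ~{silicon_per_good_pair:.0f} mm^2 of wafer per good part")
```

Under these toy numbers the chiplet route burns roughly a third less silicon per good part, and the gap widens as the target area grows.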

[-] GamingChairModel@lemmy.world 1 points 2 months ago

They chiplet past 500

I don't know if I'm using the right vocabulary, maybe "die size" is the wrong way to describe it. But the Ultra line packages two Max SoCs with a high performance interconnect, so that the whole package does use about 1000 mm^2 of silicon.

My broader point is that much of Apple's performance comes from their willingness to actually use a lot of silicon area to achieve that performance, and it's very expensive to do so.

[-] InvertedParallax@lemm.ee 2 points 2 months ago

You could say total die size, but you wouldn't say die; that implies a single piece of silicon exposed and cut as one unit.

But agreed: Apple just took all the tricks Intel dabbled with and turned them up to 11. Intel was always too cheap, because they had crazy volumes (and once upon a time a good process), so there was no point.

this post was submitted on 03 Sep 2024
129 points (96.4% liked)