Any program written for the .NET CLR ought to just run out of the box. There’s also an x64-to-ARM translation layer that works much like Apple’s Rosetta: it translates the binary and executes the translated code. I have one of the Windows ARM dev units, and in my limited experience it works relatively well, except with some games.
Both of them?
Except for the performance bit.
ARM processors use a weak memory model, whereas x86 uses a strong memory model. That means x86 guarantees the actual order of writes to memory is the same as the order in which those writes execute, while ARM is allowed to reorder them.
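Rough C++ sketch of the difference (names made up; relaxed atomics used just to keep the example well-defined, and this is about hardware reordering, so treat it as illustrative):

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

// Writer: two relaxed stores; nothing forces the CPU to commit them in order.
void writer() {
    x.store(1, std::memory_order_relaxed);
    y.store(1, std::memory_order_relaxed);
}

// Reader: on x86 hardware, seeing y == 1 implies x == 1, because stores are
// not reordered with other stores. On ARM, ending up with r1 == 1 && r2 == 0
// is a real, observable outcome.
void reader() {
    r1 = y.load(std::memory_order_relaxed);
    r2 = x.load(std::memory_order_relaxed);
}

int main() {
    std::thread a(writer), b(reader);
    a.join(); b.join();
    std::printf("r1=%d r2=%d\n", r1, r2); // "r1=1 r2=0" only on weak ordering
}
```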
Usually the order in which data is written to RAM doesn’t matter, and allowing writes to be reordered can boost performance. When it does matter, a developer can insert a so-called memory barrier, which ensures all writes before the barrier have finished before the code continues.
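For example, here’s a minimal C++ sketch of the classic data-plus-flag pattern (names made up). Without the release/acquire ordering, a weakly ordered CPU could make `ready` visible before `payload`:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

int payload = 0;                 // ordinary data
std::atomic<bool> ready{false};  // flag telling the reader the data is there

void writer() {
    payload = 42;
    // Release ordering: everything written above must be visible before the
    // store to `ready`. On x86 this is effectively free; on ARM it emits an
    // actual barrier instruction.
    ready.store(true, std::memory_order_release);
}

void reader() {
    // Acquire pairs with the release above: once we see ready == true,
    // we're guaranteed to also see payload == 42.
    while (!ready.load(std::memory_order_acquire)) { }
    std::printf("%d\n", payload); // always prints 42
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join(); t2.join();
}
```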
However, since this isn’t necessary on x86, where all writes are ordered, x86 code doesn’t include memory barrier instructions at the spots where write order actually matters. So when translating x86 code to ARM, you have to assume write order always matters, because you can’t tell the difference. That means inserting a memory barrier after every write in the translated code, which absolutely kills performance.
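To see what that costs, here’s a rough C++ sketch (function names made up; the real emulator of course works on machine code, not C++) contrasting the original x86-style code with what a conservative translation effectively has to produce:

```cpp
#include <atomic>

// Original x86-style code: plain stores. Ordering comes for free from the
// hardware's strong memory model, so no barrier instructions are emitted.
void x86_style(int* data, int* flag) {
    *data = 42;
    *flag = 1;
}

// What a conservative x86-to-ARM translation effectively has to do: a full
// barrier after every store, because the translator can't know which stores
// actually needed the ordering. Each fence becomes a real ARM instruction
// (dmb), and that per-store cost is what kills performance.
void translated_arm_style(std::atomic<int>* data, std::atomic<int>* flag) {
    data->store(42, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);
    flag->store(1, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);
}
```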
Apple includes a special mode in its ARM chips, used only by Rosetta, that enables an x86-like strong memory model. This means Rosetta can translate x86 to ARM without inserting those performance-killing memory barriers. Unless Qualcomm added a similar mode (and AFAIK they did not) and Microsoft added support for it in their emulator, the performance of translated x86 code is going to be nothing like Rosetta’s.
The biggest advantage Apple has is that they’ve been breaking legacy compatibility every couple of years, training devs to write more portable code and setting a consumer expectation of change. I can’t imagine how the emulator will cope with 32-bit software written for the Pentium II.