He's right. I think it was really a mistake for RISC-V to support it at all, and any RISC-V CPU that implements it is badly designed.
This is the kind of silly stuff that just makes RISC-V look bad.
Couldn't agree more. RISC-V even allows configurable endianness (bi-endian): you can run machine mode little-endian, supervisor mode big-endian, and user mode little-endian, and software can flip any of them on the fly. And don't forget that instruction fetch ignores all of this and is always little-endian.
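For anyone who hasn't looked at the privileged spec: the per-mode endianness control lives in mstatus (the MBE/SBE/UBE bits). Here's a minimal bare-metal sketch of what "flipping it on the fly" could look like from M-mode on a bi-endian RV64 hart. The bit positions and helper names are my own reading/assumptions, so check them against your spec version:

    /* Sketch only: flipping data endianness at run time from M-mode on a
     * bi-endian RV64 hart.  Bit positions follow my reading of the
     * privileged spec (mstatus.UBE = bit 6, SBE = bit 36, MBE = bit 37);
     * treat them, and the helper names, as assumptions.  Bare metal, so
     * it needs a RISC-V cross toolchain and M-mode to actually run. */
    #include <stdint.h>

    #define MSTATUS_UBE (1ULL << 6)   /* U-mode data endianness        */
    #define MSTATUS_SBE (1ULL << 36)  /* S-mode data endianness (RV64) */
    #define MSTATUS_MBE (1ULL << 37)  /* M-mode data endianness (RV64) */

    /* Make M-mode data accesses big-endian.  Only explicit loads and
     * stores change; instruction fetch stays little-endian, so the code
     * keeps executing as before. */
    static inline void m_mode_big_endian(void)
    {
        uint64_t bit = MSTATUS_MBE;
        __asm__ volatile ("csrs mstatus, %0" :: "r"(bit) : "memory");
    }

    static inline void m_mode_little_endian(void)
    {
        uint64_t bit = MSTATUS_MBE;
        __asm__ volatile ("csrc mstatus, %0" :: "r"(bit) : "memory");
    }

    int main(void)
    {
        volatile uint32_t word = 0x01020304;  /* stored LE: bytes 04 03 02 01 */
        uint32_t reread;

        m_mode_big_endian();
        reread = word;            /* same bytes reassembled BE: 0x04030201 */
        m_mode_little_endian();

        return reread == 0x04030201 ? 0 : 1;
    }

Note that nothing in there touches instruction fetch, which stays little-endian the whole time; only the data view of memory changes underneath the running code.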
Btw the ISA manual did originally have a justification for having big endian, but it seems to have been removed:
We originally chose little-endian byte ordering for the RISC-V memory system because little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and Windows for ARM). A minor point is that we have also found little-endian memory systems to be more natural for hardware designers. However, certain application areas, such as IP networking, operate on big-endian data structures, and certain legacy code bases have been built assuming big-endian processors, so we have defined big-endian and bi-endian variants of RISC-V.
This is a really bad justification. The cost of defining an optional big/bi-endian mode is not zero, even if nobody ever implements it (as far as I know nobody has). It's extra work in the specification (how does each new feature interact with big endian?), in verification (does your model support big endian?), and so on.
Linux should absolutely not implement this.