RISC-V

To be interesting to me, Windows on ARM would have to be a lot faster, a lot more efficient, or some combination of the two. So far, that's not happening, and RISC-V is much further out.

Linux on ARM, on the other hand, runs very nicely. Because it's source-based, and almost all C, there isn't a lot of legacy cruft to support. It just doesn't matter what the CPU is when you're running Linux; all you really detect at the user and programming levels is how fast it is or isn't. The same would apply to RISC-V; the CPU is pretty much irrelevant for almost everyone but kernel and compiler writers. (Well, as long as the CPU is little-endian, anyway. Big-endian sometimes needs programmer awareness.)
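For anyone who hasn't hit that in practice, here's a tiny C sketch (nothing RISC-V specific, just the classic byte-order assumption that only surfaces when you move to a big-endian machine):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x11223344;
    /* Reinterpret the integer as raw bytes to expose the CPU's byte order. */
    uint8_t *bytes = (uint8_t *)&value;

    /* Little-endian (x86, and ARM/RISC-V as normally configured): prints 0x44.
       Big-endian: prints 0x11, so any code that assumes the first byte in
       memory is the least-significant one quietly breaks there. */
    printf("first byte in memory: 0x%02x\n", (unsigned)bytes[0]);
    return 0;
}
```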

A fast Linux ARM box would seem perfectly achievable at this point. I guess vendors just figure there's not enough of a market there to bother. If RISC-V chips get fast, I don't see the market being any larger, so I'm not sure we're ever gonna see truly competitive RISC-V chips delivered in a desktop-targeted system.

Part of what's shutting that possibility down is the large quantities of cheap and fairly fast Intel chips in mini-PCs. The RPi5 is maybe the best example of a reasonably fast ARM desktop, and it's way overpriced compared to the N95 and N100 boxes.
 
Nvidia and WD both deliver lots of RISC-V processors inside other devices they make. Nvidia expected to ship a billion of them last year (not sure how many they actually did).
I wasn't specific enough. Nvidia does not have a track record of making CPUs with competitive PC/datacenter performance. I would expect them to get there after making a few mistakes and learning from them.
 

continuum

Ars Legatus Legionis
96,307
Moderator
Sounds like RISC-V has a pretty narrow window to jump into the desktop/client performance space for non-Windows stuff and maybe secure a niche.
I dunno...? I wonder how much of that is on the OS side (i.e. Microsoft being willing to commit resources to Windows on other architectures) vs. how much is on the architecture side (evolving RISC-V into something that is competitive enough with current high-end ARM and x86 cores). At least for the latter I would suspect it's going to be a few more generations before that happens?

They never said that they could by themselves. It is the RISC-V ecosystem that will. Jim Keller says he thinks RISC-V will take over data centers and supercomputers in 5-10 years in this interview, and Alibaba supposedly has a server-class CPU coming out soon.
Jim Keller is a hell of a lot smarter than me; if he's saying 5 years is possible on the near end, I would not be one to argue against him. OTOH, I think 10 years is much more reasonable...
 
Jim Keller says he thinks RISC-V will take over data centers
As @continuum says, Jim Keller is about as expert as anyone gets, but I think he's ignoring the proprietary lockup problem of RISC-V. I believe that will seriously impair its adoption, like BSD versus GPL. Corporations are almost always as nasty and anti-competitive as they can possibly get away with. All the good features will be behind high walls, so you'll be able to get Feature A or Feature B, but not both at once.

With ARM, you mostly can't have that, except for the founding members (e.g., Apple). Almost all the chips run all the code.
 

Drizzt321

Ars Legatus Legionis
30,810
Subscriptor++
That is indeed a possible, maybe even likely, problem: the proprietary vendor extensions. However, the basic ISA is shared by all regardless, so there's always a generic code path, right? I recall there being a number of optional official extensions, so isn't there motivation, generally, for those extensions (like vector, etc.) to be implemented based on the target audience? For the higher-performance laptop/phone/desktop targets, I'd hope most implementations would stick with the official ISA extensions and share the ISA level, like AMD did with x86_64. Although that might just have been down to the required cross-license with Intel for ISA-level stuff.
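Roughly what that generic-code-path idea looks like from the software side, as a minimal sketch assuming a Linux/RISC-V target (the function names here are mine, purely for illustration): the kernel reports single-letter extensions such as 'V' through AT_HWCAP, so a library can take an optimized route when the ratified vector extension is present and otherwise stay on the base-ISA code that every implementation shares.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#if defined(__linux__) && defined(__riscv)
#include <sys/auxv.h>   /* getauxval() */
#endif

/* Generic base-ISA path: any RISC-V core (or any other CPU) can run this. */
static int64_t sum_i32_generic(const int32_t *data, size_t n) {
    int64_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += data[i];
    return total;
}

/* On RISC-V Linux, AT_HWCAP carries one bit per single-letter ISA extension
   ('A' = bit 0 ... 'Z' = bit 25), so the ratified 'V' vector extension can be
   detected at run time. */
static int have_vector_ext(void) {
#if defined(__linux__) && defined(__riscv)
    return (int)((getauxval(AT_HWCAP) >> ('V' - 'A')) & 1UL);
#else
    return 0;
#endif
}

static int64_t sum_i32(const int32_t *data, size_t n) {
    if (have_vector_ext()) {
        /* A real library would dispatch here to a routine built for the V
           extension (intrinsics or -march=rv64gcv); omitted so this sketch
           stays runnable on the plain base ISA. */
    }
    return sum_i32_generic(data, n);
}

int main(void) {
    int32_t xs[] = {1, 2, 3, 4};
    printf("sum = %lld, vector extension: %s\n",
           (long long)sum_i32(xs, 4), have_vector_ext() ? "yes" : "no");
    return 0;
}
```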
 
That's probably why they decided to work with MediaTek. MediaTek has been making Chromebook APUs with PC-class I/O for a while.
I actually have one of them, a Kompanio 520 with 8GB, which has excellent battery life (20h+) and is great for browsing and such. The screen is horrible, though, so it never got much use, but the concept made me interested in finding a replacement with excellent battery life in the near future.
Not sure what x86 has on the cheap end to compare with that.
 
If you are interested in Tenstorrent they were at RISC-V Day Tokyo 2025 Spring conference.

Ascalon has taped out, and we should see some applications this year. But the big news was the Ascalon successor, Callandor, which will be 16-wide and have a huge ROB, loads of ALUs, etc. It's due in early 2027, and Tenstorrent hopes it will deliver leading-edge performance.

You can get the Tenstorrent and other presentations at the link.

https://riscv.or.jp/en/risc-v-day-tokyo-2025-spring-en/
 
This is more "for science!" than anything, because the RISC-V SoC in question is a rather weak one. Still, Phoronix benched Ubuntu performance history from version 21.04 to 24.04LTS. Looks like there has been substantial work on compiler and OS in the last few years.
Phoronix put up an article with benchmarks for the new SiFive HiFive Premier P550 RISC-V board they received. The board comes with 16 GB or 32 GB of RAM for $399 or $499. It is competitive in performance with the Raspberry Pi 4 in some cases and gets beaten badly by it in others.
 
Europe has a $260M project called DARE (Digital Autonomy with RISC-V in Europe) to develop RISC-V for supercomputing. From the article:
The DARE project is supported by the EuroHPC Joint Undertaking and coordinated by the Barcelona Supercomputing Center (BSC-CNS). The project aims to create three chiplets – individual chip dies that can be combined to form complete processor packages – and has already picked leaders for each effort:

  1. A vector-math accelerator die tuned for high-performance computing (HPC) workloads, led by Barcelona-based chip designer Openchip
  2. A next-gen inference chiplet from Dutch startup Axelera AI
  3. A general-purpose processor die, driven by Germany's Codasip
 
If you are interested in Tenstorrent they were at RISC-V Day Tokyo 2025 Spring conference.

Ascalon has taped out, and we should see some applications this year. But the big news was the Ascalon successor, Callandor, which will be 16-wide and have a huge ROB, loads of ALUs, etc. It's due in early 2027, and Tenstorrent hopes it will deliver leading-edge performance.

You can get the Tenstorrent and other presentations at the link.

https://riscv.or.jp/en/risc-v-day-tokyo-2025-spring-en/
Ascalon is already a well-performing processor, according to the presentation. The last time I looked at Tenstorrent, they were focused on their AI processor and were licensing SiFive cores. It is big news that they are developing their own high-performance RISC-V CPUs to go with their AI processors.
 
There's a GPU startup called Bolt Graphics that is using the RISC-V instruction set. Here's the press release. They claim it will be the fastest and most power-efficient GPU and HPC compute card when it arrives. The GPU side will do ray-traced graphics. They say developer kits will be available in 2025 and mass production will begin in late 2026. This article from Serve the Home has slides from a Bolt presentation and has the most detail of anything I've found about it. It has a bit of an Intel Larrabee/Xeon Phi feel to it based on the slides.
 

Drizzt321

Ars Legatus Legionis
30,810
Subscriptor++
Interesting. Those are some huge claims, and for a gaming GPU, honestly, a ton of the performance really is in the driver stack. I'm very skeptical of the claims until we see real hardware and drivers.

For HPC compute they might have the start of something, but I also am skeptical until things actually release.

But it's exciting to see, although I bet it's super expensive to start with.
 

evan_s

Ars Tribunus Angusticlavius
6,378
Subscriptor
Certainly, an "I'll believe it when I see it" type situation. Even if they do manage to build something that does well on the data center/ai/HPC side of things the driver stack for consumer GPUs for gaming is a massive amount of work. Just look at Intel and their Arc drivers. They've had driver for integrated GPUs for over a decade and they've still had massive problems with ARC drivers. They are mostly passable now but still have overhead issues.

Also, putting a lot of compute on a card is relatively easy. Efficiently utilizing all of it, especially in gaming, is a challenge. On the non-gaming side of things, the CUDA lock-in is going to be a challenge. Even if their hardware claims are 100% true, they could still easily end up failing due to the software side of things.
 
Interesting. Those are some huge claims, and for a gaming GPU, honestly, a ton of the performance really is in the driver stack. I'm very skeptical of the claims until we see real hardware and drivers.

For HPC compute they might have the start of something, but I also am skeptical until things actually release.

But it's exciting to see, although I bet it's super expensive to start with.
FWIW, this slide shows the software stack they say they'll have running on it. It includes the big game engines, Vulkan, DirectX, and LLVM. That's a start. It remains to be seen how buggy they'll be.
 
I wonder to what extent such a GPU will implement 3D acceleration "in software" such that it can be compiled for other architectures, and to what extent it will be tied to very specific/complex/powerful ISA extensions. Lots of geek points for someone writing a generic Vulkan backend in AVX-512.
This slide shows support for RISC-V vector extensions and unspecified Bolt accelerators.
 
On the non-gaming side of things the CUDA lock in is going to be a challenge.
Honestly, I think the only way anyone gets around that is by copying the API and offering something plug-and-play with existing CUDA code. OpenCL has made a little headway, but CUDA is still such a force that in many fields you can only consider Nvidia cards.
 
Honestly, I think the only way anyone gets around that is by copying the API and offering something plug-and-play with existing CUDA code. OpenCL has made a little headway, but CUDA is still such a force that in many fields you can only consider Nvidia cards.
I think there will be a market for them if they meet the power targets they have in the slides. I think the biggest question is if their financial backers have deep enough pockets and a commitment to see them through the inevitable early growing pains.
 

cogwheel

Ars Tribunus Angusticlavius
6,864
Subscriptor
FWIW, this slide shows the software stack they say they'll have running on it. It includes the big game engines, Vulkan, DirectX, and LLVM. That's a start. It remains to be seen how buggy they'll be.
I think you're missing the part where modern GPU drivers are pretty much JIT recompilers that wholesale replace large swaths of bad game developer code with better GPU developer code customized to each game's idiosyncratic stupidities by those GPU developers. Even Bolt working directly with Epic won't fix the screwups that third party developers make when they use UE5.
 
I think you're missing the part where modern GPU drivers are pretty much JIT recompilers that wholesale replace large swaths of bad game developer code with better GPU developer code customized to each game's idiosyncratic stupidities by those GPU developers. Even Bolt working directly with Epic won't fix the screwups that third party developers make when they use UE5.
All companies have to deal with the world as it exists and not with how they want it to be. Only time will tell if they have a place in it.
 

Drizzt321

Ars Legatus Legionis
30,810
Subscriptor++