Nvidia projected to ship roughly a billion RISC-V cores in its products by year’s end

In brief: Nvidia has been quietly using the RISC-V architecture to power numerous computing devices, deploying a substantial number of cores to paying customers. In fact, the company is nearing a historic milestone in RISC-V core deployments.

Nvidia has apparently been using the RISC-V architecture for quite some time. One of the most valuable technology companies in the world, the GPU giant employs the open-standard instruction set architecture (ISA) in many of the custom, albeit "ancillary," cores embedded within its GPUs.

Nvidia’s close relationship with RISC-V was highlighted at the recently held RISC-V Summit, where the company discussed how the open ISA is implemented in its chips. RISC-V-based cores have been used as specialized microcontroller units (MCUs) since 2015, replacing the company’s proprietary “Falcon” MCUs.

Fascinating talk at the #RISCV summit from Nvidia. Apparently there are between 10 and 40 #RISCV cores in every Nvidia chip, with Nvidia alone having shipped around a billion #RISCV cores pic.twitter.com/Uiy62YnnUy

– Nick Brown (@NickBrownHPC) October 22, 2024

Nvidia explained that the hardware and software for these RISC-V MCUs were developed in-house, and the number of cores continues to grow.

According to an “unofficial” estimate, the Santa Clara corporation is projected to ship around one billion RISC-V cores by the end of the year. RISC-V applications in Nvidia products encompass “function-level control” scenarios, including video codecs, display management, chip-to-chip interfaces, chip-level control tasks like power management and security, and network data processing.

Nvidia’s RISC-V cores feature more than 20 custom extensions. The most significant piece of silicon based on the open ISA is likely the GPU System Processor (GSP). Some GPUs include a specialized GSP unit that offloads GPU initialization and management tasks, taking over work traditionally handled by the driver running on the system’s CPU.

The GSP embedded in recent Nvidia GPUs offloads kernel driver functions, further reducing CPU utilization and enabling multiple remote users to share the same GPU unit in cloud environments. In addition to GPUs, Nvidia is also shipping RISC-V cores and MCUs in CPUs, System-on-Chip designs, and other products.
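For readers curious whether their own card is running this RISC-V-based processor, the short sketch below is an illustration rather than anything from Nvidia's talk: it shells out to nvidia-smi, which on recent drivers reports a "GSP Firmware Version" field when kernel-driver work has been offloaded to the GSP. The exact field name and its availability depend on the driver version, so treat that as an assumption.

```python
# Minimal sketch: check whether the NVIDIA driver reports GSP firmware in use.
# Assumes a recent NVIDIA driver with nvidia-smi on the PATH; the
# "GSP Firmware Version" field may be absent or "N/A" on older drivers/GPUs.
import shutil
import subprocess


def gsp_firmware_version() -> str | None:
    """Return the GSP firmware version reported by nvidia-smi, or None."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA driver/tooling installed
    output = subprocess.run(
        ["nvidia-smi", "-q"], capture_output=True, text=True, check=True
    ).stdout
    for line in output.splitlines():
        if "GSP Firmware Version" in line:
            value = line.split(":", 1)[1].strip()
            return value if value not in ("", "N/A") else None
    return None


if __name__ == "__main__":
    version = gsp_firmware_version()
    print(f"GSP firmware in use: {version}" if version else "GSP firmware not reported")
```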

The RISC-V architecture began its journey in 2010 as a project at the University of California, Berkeley. Since 2019, the Swiss-based non-profit organization RISC-V International has been managing the development of the ISA. The architecture is available under a Creative Commons or BSD license, allowing anyone in the world to theoretically develop new chip designs based on it – and many are doing just that.

Source: TechSpot
