#cuda

In a surprise move, NVIDIA is bringing CUDA to RISC-V CPUs 💥
Announced at RISC-V Summit China, this allows RISC-V processors to run CUDA drivers + logic, with NVIDIA GPUs handling compute tasks ⚙️
Enables open CPU + proprietary GPU AI systems—big for edge, HPC & China’s chipmakers 🇨🇳

A potential shift in global AI infrastructure 🌐

@itsfoss

news.itsfoss.com/nvidia-cuda-r

It's FOSS News · In a Surprise Move, NVIDIA Brings CUDA to RISC-V Processors. A surprise collaboration, I must say.
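For a sense of what the port actually has to deliver: CUDA host code itself is plain, architecture-neutral C++, so the work is in the driver, runtime and toolchain targeting a riscv64 host. A minimal sketch of the kind of program that should compile unchanged once that host support lands (illustrative only, not NVIDIA's announced toolchain):

```
// Minimal CUDA sketch: the host-side C++ is architecture-neutral, so the same
// source should build whether the host CPU is x86_64, AArch64 or riscv64 —
// what changes is the host compiler and driver stack, not this code.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;   // runs on the NVIDIA GPU regardless of host ISA
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    std::printf("GPU: %s\n", prop.name);

    const int n = 1 << 20;
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory keeps host code simple
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
    cudaDeviceSynchronize();

    std::printf("x[0] = %f\n", x[0]);           // expect 2.0
    cudaFree(x);
    return 0;
}
```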

#NVIDIA Bringing #CUDA To #RISCV
NVIDIA's drivers and CUDA software stack are predominantly supported on x86_64 and AArch64 systems, though in the past they were also supported on IBM POWER. This week at the RISC-V Summit China event, NVIDIA's Frans Sijstermans announced that CUDA will be coming to RISC-V.
#AMD, for their part, can already build the upstream #opensource #AMDKFD kernel compute driver on RISC-V, and the #ROCm user-space components can also be built on RISC-V.
phoronix.com/news/NVIDIA-CUDA-

www.phoronix.com · NVIDIA Bringing CUDA To RISC-V: NVIDIA announced this week that they are bringing their CUDA software to RISC-V processors.

Apple AI framework MLX: future support for Nvidia's CUDA

Macs may no longer run Nvidia GPUs, but Apple's MLX is nevertheless expected to run on them soon. That makes interesting ports possible.

heise.de/news/Apple-KI-Framewo

heise online · Apple AI framework MLX: future support for Nvidia's CUDA. By Ben Schwan
#Apple #CUDA #IT

#GPUHammer is the first attack to show #Rowhammer bit flips on #GPU memories, specifically on a GDDR6 memory in an #NVIDIA A6000 GPU. Our attacks induce bit flips across all tested DRAM banks, despite in-DRAM defenses like TRR, using user-level #CUDA #code. These bit flips allow a malicious GPU user to tamper with another user’s data on the GPU in shared, time-sliced environments. In a proof-of-concept, we use these bit flips to tamper with a victim’s DNN models and degrade model accuracy from 80% to 0.1%, using a single bit flip. Enabling Error Correction Codes (ECC) can mitigate this risk, but ECC can introduce up to a 10% slowdown for #ML #inference workloads on an #A6000 GPU.

gpuhammer.com/

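For context, Rowhammer boils down to re-activating the same DRAM rows so often that charge leaks in neighbouring cells and bits flip. Below is a very rough CUDA sketch of that access pattern, not the paper's actual exploit: the real GPUHammer attack additionally reverse-engineers the GDDR6 bank/row mapping and works around caching and TRR, none of which is shown here.

```
// Illustrative only: repeatedly read two "aggressor" addresses so the
// corresponding DRAM rows keep getting re-activated. The offsets are
// arbitrary placeholders, not real same-bank neighbouring rows.
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void hammer(const uint32_t *agg1, const uint32_t *agg2,
                       uint64_t iters, uint64_t *sink) {
    uint64_t acc = 0;
    for (uint64_t i = 0; i < iters; ++i) {
        // __ldcg bypasses L1; a real attack must also ensure the accesses
        // actually reach DRAM instead of being served from L2.
        acc += __ldcg(agg1);
        acc += __ldcg(agg2);
    }
    *sink = acc;  // keep the loads from being optimised away
}

int main() {
    uint32_t *buf;
    uint64_t *sink;
    cudaMalloc(&buf, 256u << 20);        // 256 MiB of device memory
    cudaMalloc(&sink, sizeof(uint64_t));
    // Hammer two locations 64 MiB apart as stand-ins for aggressor rows.
    hammer<<<1, 1>>>(buf, buf + (64u << 20) / sizeof(uint32_t), 1000000ULL, sink);
    cudaDeviceSynchronize();
    std::printf("done\n");
    cudaFree(buf);
    cudaFree(sink);
    return 0;
}
```

On workstation GPUs like the A6000, ECC (the mitigation mentioned above) can typically be toggled with `nvidia-smi -e 1` followed by a reset, at the performance cost the post describes.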

#ZLUDA Making Progress In 2025 On Bringing #CUDA To Non-NVIDIA #GPUs
ZLUDA is an #opensource effort that started half a decade ago as a drop-in CUDA implementation for #Intel GPUs, was then funded by #AMD for several years as a CUDA implementation for #Radeon GPUs atop #ROCm, was open-sourced and then reverted, and has been pushing along a new path since last year. The current take on ZLUDA is a multi-vendor CUDA implementation for non-NVIDIA GPUs for #AI workloads & more.
phoronix.com/news/ZLUDA-Q2-202

www.phoronix.com · ZLUDA Making Progress In 2025 On Bringing CUDA To Non-NVIDIA GPUs
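"Drop-in" here means unmodified CUDA programs are supposed to run as-is, with ZLUDA substituting its own implementation of the CUDA libraries underneath. For illustration, this is the kind of stock CUDA code such a layer has to handle; there is nothing ZLUDA-specific in it:

```
// An ordinary CUDA program with no vendor-specific tricks — exactly the sort
// of binary a drop-in layer like ZLUDA aims to run unmodified on a
// non-NVIDIA GPU by providing its own CUDA driver/runtime libraries.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2.0f * i; }

    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    std::printf("c[10] = %f\n", c[10]);  // expect 30.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On Linux the substitution is typically done through the dynamic loader, pointing an application at ZLUDA's replacement libraries instead of NVIDIA's; the project's README has the exact, current invocation.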

That's it, I'm recommending against AMD for #AI computers.

I don't even know how to start running something on their NPU via Linux, or how to check whether it's running at all. Windows fares better, but `llama.cpp` doesn't work there.

So, if you want to run AI on your computer: RTX, Mac, or don't bother at all.