#cuda

💻 FreeBSD CUDA drm-61-kmod 💻

"Just going to test the current pkg driver, this will only take a second...", the old refrain goes. Surely, it will not punt away an hour or so of messing about in loader.conf on this EPYC system...

- Here are some notes to back-track out of a botched/crashing driver kernel-panic situation.
- Standard stuff, nothing new over the years here with the loader prompt.
- A few directives are specific to this system, though they may provide a useful general reference.
- The server has an integrated GPU in addition to the NVIDIA PCIe card, so a module blacklist for the "amdgpu" driver is necessary (EPYC 4564P).

Step 1: during boot-up, "exit to loader prompt"
Step 2: set/unset the values as needed at the loader prompt

unset nvidia_load
unset nvidia_modeset_load
unset hw.nvidiadrm.modeset
set module_blacklist=amdgpu,nvidia,nvidia_modeset
set machdep.hyperthreading_intr_allowed=0
set verbose_loading=YES
set boot_verbose=YES
set acpi_dsdt_load=YES
set audit_event_load=YES
set kern.consmsgbuf_size=1048576
set loader_menu_title=waffenschwester
boot

Step 3: log in to a standard tty shell
Step 4: edit /boot/loader.conf (and maybe .local); see the sketch after these steps
Step 5: edit /etc/rc.conf (and maybe .local)
Step 6: debug the vast output from the kern.consmsgbuf message buffer
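
A minimal sketch of what steps 4 and 5 might end up looking like once the driver is healthy again. The tunables mirror the ones unset above; the exact knob and module names depend on which nvidia-driver / drm-61-kmod packages are installed, so treat these values as assumptions to adapt rather than a recipe:

# /boot/loader.conf (or loader.conf.local)
nvidia_load="YES"
nvidia_modeset_load="YES"
hw.nvidiadrm.modeset="1"
module_blacklist="amdgpu"           # keep the integrated GPU driver out of the way
kern.consmsgbuf_size="1048576"      # larger console message buffer for debugging

# /etc/rc.conf (or rc.conf.local)
kld_list="nvidia-drm"               # assumed module name from the drm ports; verify with kldstat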

AMD YOLO: because why not base your entire #business #strategy on a meme? 🚀🎉 Thanks to AMD's cultural enlightenment, they're now #shipping #boxes faster than philosophical musings on singularity! 🤯 Who knew rewriting a stack could be as easy as beating #NVIDIA at their own game? Just don't tell CUDA—it might get jealous! 😜
geohot.github.io//blog/jekyll/ #AMD #YOLO #meme #CUDA #competition #HackerNews #ngated

the singularity is nearer · AMD YOLO: AMD is sending us the two MI300X boxes we asked for. They are in the mail.

Hot Aisle's 8x AMD #MI300X server is the fastest computer I've ever tested in #FluidX3D #CFD, achieving a peak #LBM performance of 205 GLUPs/s, and a combined VRAM bandwidth of 23 TB/s. 🖖🤯
The #RTX 5090 looks like a toy in comparison.

MI300X beats even Nvidia's GH200 94GB. This marks a very fascinating inflection point in #GPGPU: #CUDA is not the performance leader anymore. 🖖😛
You need a cross-vendor language like #OpenCL to leverage its power.

FluidX3D on #GitHub: github.com/ProjectPhysX/FluidX
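
Not from the post, but a minimal illustration of the "cross-vendor" point: the same OpenCL host code enumerates AMD, NVIDIA, or Intel GPUs through the standard ICD loader, with no vendor-specific API. It assumes CL/cl.h and an OpenCL loader are installed; FluidX3D's own device setup is of course far more elaborate.

#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    /* Enumerate every OpenCL platform present (AMD, NVIDIA, Intel, ...) */
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0)
        return 1;

    for (cl_uint p = 0; p < nplat; p++) {
        char pname[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

        /* List the GPUs each platform exposes */
        cl_device_id devices[16];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 16, devices, &ndev) != CL_SUCCESS)
            continue; /* platform without GPUs */

        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256] = {0};
            cl_ulong vram = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(vram), &vram, NULL);
            printf("%s | %s | %llu MB\n", pname, dname, (unsigned long long)(vram >> 20));
        }
    }
    return 0;
}

Build with something like cc list_gpus.c -lOpenCL (the file name is hypothetical); the same binary lists MI300X, GH200, or RTX 5090 devices alike.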

The recording of the February 20th, 2025 #bhyve Production User Call is up:

youtu.be/Kb1muRQvsrs

We discussed two lab successes, bhyve.org, LibVirt updates, the VirtManager update, hypervisor "anti-detection", an old VirtIO bug, FreeBSD 14.3 goals and wishlist items, #CUDA ON FREEBSD, Nuttx, more GPU Pass-Through, and more!

"Don't forget to slam those Like and Subscribe buttons."

#NVIDIA #ProjectDIGITS Explained: #AI Power in a Compact $3,000 Package
At the heart of Project DIGITS is the NVIDIA #GB10 Superchip, a system-on-a-chip (SoC) that delivers up to 1 petaflop of AI performance at #FP4 precision. The GB10 combines an NVIDIA Blackwell GPU with next-generation #CUDA cores, Tensor Cores, and a high-performance NVIDIA Grace CPU featuring 20 power-efficient Arm-based cores. Each system includes 128GB of unified memory and up to 4TB of NVMe storage.
storagereview.com/news/nvidia-

StorageReview.com · NVIDIA Project DIGITS Explained: AI Power in a Compact Package. NVIDIA Project DIGITS: The Smallest AI Supercomputer. Grace Blackwell-powered, $3,000, petaflop-class AI performance. Available May 2025.