#cuda

HGPU group
RDMA Point-to-Point Communication for LLM Systems
#CUDA #RDMA #LLM #Package
https://hgpu.org/?p=30339
.:\dGh/:.
Eventually the #AI race for components was bound to start eating laptops, PCs and consoles.

We're seeing it now with RAM prices doubling, and probably tripling.

My guess? Cheaper alternatives will become kings: smartphones and tablets.

#Videogames #Gaming #Games #PC #Laptops #Laptop #PCGaming #PCGames #PCHardware #Hardware #ArtificialIntelligence #NVIDIA #CUDA #Consoles #Console
AI Daily Post
ComputeEval 2025.2 now features 232 CUDA challenges, pushing LLMs to master Tensor Cores, CUDA Graphs, shared memory and warp-level tricks. The benchmark's difficulty spikes, giving researchers a tougher yardstick for AI performance. Dive into the details and see how your models stack up. #ComputeEval #CUDA #LLM #TensorCores
🔗 https://aidailypost.com/news/computeeval-20252-expands-232-cuda-challenges-upping-llm-test
HGPU group
Enhancing Transformer Performance and Portability through Auto-tuning Frameworks
#CUDA #LLM #AutoTuning #PerformancePortability #Package
https://hgpu.org/?p=30329
HGPU group
INT v.s. FP: A Comprehensive Study of Fine-Grained Low-bit Quantization Formats
#CUDA #MachineLearning #ML #Package
https://hgpu.org/?p=30326
TugaTech 🖥️
Samsung and NVIDIA deepen their alliance with an AI megafactory and more than 50,000 GPUs
🔗 https://tugatech.com.pt/t73668-samsung-e-nvidia-aprofundam-alianca-com-megafabrica-de-ia-e-mais-de-50-000-gpus
#base #CUDA #DRAM #ia #litografia #mundo #nvidia #samsung #semicondutores #servidores #tecnologia
Allie!
Custom PSU cable shipped today!

A whole PC case move and EndeavourOS installation coming up this weekend. I'm filled with trepidation and excitement.

Setting up the whole CUDA toolchain seems daunting, particularly with #Hyprland not exactly being 'supported', but I am pretty stubborn.

A week from now I'll probably be in tears from this ill-advised misadventure.

#linux #cuda
Hacker News 50
Continuous Nvidia CUDA Profiling in Production
Link: https://www.polarsignals.com/blog/posts/2025/10/22/gpu-profiling
Discussion: https://news.ycombinator.com/item?id=45669377
#cuda #nvidia
heise online English
Red Hat integrates Nvidia CUDA into Enterprise Linux and OpenShift

Red Hat will henceforth distribute Nvidia's CUDA Toolkit directly via its platforms. This should simplify the provision of GPU-accelerated AI applications.

https://www.heise.de/en/news/Red-Hat-integrates-Nvidia-CUDA-into-Enterprise-Linux-and-OpenShift-10962793.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege&utm_source=mastodon

#CUDA #DevOps #IT #MachineLearning #Nvidia #OpenSource #RedHatEnterpriseLinux #news


The real reason NVIDIA responded urgently to OpenAI's look at Google TPUs

An analysis of the background to the $100 billion partnership OpenAI struck with NVIDIA after it began evaluating Google TPUs, and of the shifting power structure in the AI infrastructure market. It covers the strength of the CUDA ecosystem, the price competitiveness of alternative chips such as TPU and Trainium, and what OpenAI's multi-cloud strategy suggests about the future of the AI infrastructure market.

aisparkup.com/posts/5876

Beyond the , there is the , with so many people repeating fake news like they just don't care...

It is kind of weird to me that #Alibaba/#Qwen, the Chinese equivalent of #Amazon/#AWS, recommends #CUDA for its VL model, thus , therefore ie ?
Probably not for long: I tooted something about a new Chinese company () developing its own software for parallel computing. Is it as stable as the software provided by Orange Pi?

github.com/QwenLM/Qwen3-VL?tab

I tested Qwen/Qwen3-VL-4B-Instruct quickly (full version, no GGUF) using , so that there are no doubts about quality loss from quantization (see the result in the description of the left picture).
It is quite interesting:
- Using device_map=auto in transformers demonstrates the bandwidth issue very well: the process keeps going back and forth between CPU and GPU, and it is slow on my setup (an old system); see the sketch after this list.
- It uses lots of RAM for text encoding.
- It was unable to distinguish all 137 animals.
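For reference, a minimal sketch of that kind of run, assuming a recent transformers release with Qwen3-VL support plus accelerate installed; the image file, prompt, and token budget are hypothetical placeholders, not from the original post:

import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "Qwen/Qwen3-VL-4B-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # full-weight load: no GGUF, no quantization
    device_map="auto",           # accelerate spills layers to CPU RAM when VRAM runs out
)

# Hypothetical test image and prompt, for illustration only.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "animals.jpg"},
        {"type": "text", "text": "How many animals do you see? List them."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

# With device_map="auto", layers offloaded to CPU are shuttled to the GPU on
# every forward pass, so generation is bounded by PCIe bandwidth on old systems.
out = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))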

The Deepseek-OCR would run on it, obviously, because there is OCR in the name, thus support (joke alert).

"Inference using Huggingface transformers on NVIDIA GPUs. Requirements tested on python 3.12.9 + CUDA11.8:"

huggingface.co/deepseek-ai/Dee
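As a quick sanity check that a local setup matches that kind of pinned requirement (a sketch assuming a PyTorch install, since the model card fixes the python and CUDA versions):

import sys
import torch

print(sys.version)                # interpreter version, e.g. 3.12.9
print(torch.version.cuda)         # CUDA version this torch build targets, e.g. "11.8"
print(torch.cuda.is_available())  # False usually means a driver/runtime mismatch
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # confirm the NVIDIA GPU is visible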

"It's open-source!", 2025.

Slogan: "if you can't catch the mouse on camera, we reimburse you".

It is not an RPI, it is an OCR Grapevine℠, with space between the USB ports to accommodate extremely diverse USB vendor packaging designs, with angled slots.
And the modern standards: USB-C, OCuLink, micro-SFP+, all 10 Gbps+.
It is for surveillance/security companies to plug in more AI-capable cameras with realtime cognitive bias; to copy books at the library on the go, etc.