
#multithreading

Frontend Dogma<p>Worker Threads in Node.js: A Complete Guide for Multithreading in JavaScript, by @nodesource.bsky.social:</p><p><a href="https://nodesource.com/blog/worker-threads-nodejs-multithreading-in-javascript" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">nodesource.com/blog/worker-thr</span><span class="invisible">eads-nodejs-multithreading-in-javascript</span></a></p><p><a href="https://mas.to/tags/guides" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>guides</span></a> <a href="https://mas.to/tags/nodejs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nodejs</span></a> <a href="https://mas.to/tags/workerthreads" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>workerthreads</span></a> <a href="https://mas.to/tags/javascript" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>javascript</span></a> <a href="https://mas.to/tags/multithreading" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>multithreading</span></a></p>
Dr. Moritz Lehmann<p><a href="https://mast.hpc.social/tags/FluidX3D" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FluidX3D</span></a> <a href="https://mast.hpc.social/tags/CFD" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CFD</span></a> v3.2 is out! I've implemented the much requested <a href="https://mast.hpc.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> summation for object force/torque; it's ~20x faster than <a href="https://mast.hpc.social/tags/CPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CPU</span></a> <a href="https://mast.hpc.social/tags/multithreading" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>multithreading</span></a>. 🖖😋<br>Horizontal sum in <a href="https://mast.hpc.social/tags/OpenCL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenCL</span></a> was a nice exercise - first local memory reduction and then hardware-supported atomic floating-point add in VRAM, in a single-stage kernel. Hammering atomics isn't too bad as each of the ~10-340 workgroups dispatched at a time does only a single atomic add.<br>Also improved volumetric <a href="https://mast.hpc.social/tags/raytracing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>raytracing</span></a>!<br><a href="https://github.com/ProjectPhysX/FluidX3D/releases/tag/v3.2" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">github.com/ProjectPhysX/FluidX</span><span class="invisible">3D/releases/tag/v3.2</span></a></p>
Giuseppe Bilotta<p>Remember when I mentioned we had ported our <a href="https://fediscience.org/tags/fire" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>fire</span></a> propagation <a href="https://fediscience.org/tags/cellularAutomaton" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cellularAutomaton</span></a> from <a href="https://fediscience.org/tags/Python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Python</span></a> to <a href="https://fediscience.org/tags/Julia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Julia</span></a>, gaining performance and the ability to parallelize more easily and efficiently?</p><p>A couple of days ago we had to run another big batch of simulations, and while things progressed well at the beginning, we saw the parallel threads apparently hanging one by one until the whole process sat there doing who knows what.</p><p>Our initial suspicion was that we had come across some weird <a href="https://fediscience.org/tags/JuliaLang" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>JuliaLang</span></a> issue with <a href="https://fediscience.org/tags/multithreading" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>multithreading</span></a>, which seemed to be confirmed by some posts we found on the Julia forums. We tried the workarounds suggested there, to no avail. We tried a different number of threads, and this led to the hang occurring after a different percent completion. We tried restarting the simulations, skipping the ones already done. It always got stuck at the same place (for the same number of threads).</p><p>So, what was the problem?</p><p>1/n</p>
Christian Grobmeier<p>Multithreading can lead to deadlocks. Do one thing at a time, you are not the JVM. </p><p>December 13<br><a href="https://mastodon.social/tags/ZenDevAdvent" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ZenDevAdvent</span></a> <a href="https://mastodon.social/tags/java" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>java</span></a> <a href="https://mastodon.social/tags/programming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>programming</span></a> <a href="https://mastodon.social/tags/multithreading" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>multithreading</span></a></p>
Royce Williams<p>Multithreaded CLI developers: let your users configure the number of threads.</p><p>Entire classes of use cases are hiding inside that will make <em>your</em> life easier as a dev -- and <code>threads=1</code> is usually not hard to add.</p><p>One example: if your multithreaded tool works significantly faster on a single file when I force your tool to just use a single thread and parallelize it with <code>parallel --pipepart --block</code> instead, then either:</p><ol><li><p>you might decide to develop sharding the I/O of the physical file yourself, or</p></li><li><p>you might consciously decide to <em>not</em> develop it, and leave that complexity to <code>parallel</code> (which is fine!)</p></li></ol><p>But if your tool has no <code>threads=N</code> option, I have no workaround.</p><p>Configurable thread count lets me optimize in the meantime (or instead).</p><p><a href="https://infosec.exchange/tags/CLI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CLI</span></a> <a href="https://infosec.exchange/tags/multithreading" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>multithreading</span></a></p>