#llvm

Mishal: My team at Apple is currently hiring for a role that focuses on compiler tools and infrastructure. If you’re interested in this opportunity, please take a look at the job posting here: https://jobs.apple.com/en-us/details/200613714/compiler-tools-engineer?team=SFTWR #llvm #swiftlang
David Smith: Y'all wanna see an excessively cute trick LLVM's optimizer can do?

Swift String contains roughly this method:

```
func _fastCStringContents() -> UnsafePointer<UInt8>? {
    if isASCII {
        return contentsPointer
    }
    return nil
}
```

where `isASCII` is defined as `(flags & 0x8000_0000_0000_0000) != 0`.

What would you expect this to generate? (Solution in reply.)

#swiftlang #llvm
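For readers who don't want to chase the reply thread: the interesting part is that `isASCII` tests only the sign bit of `flags`, so the "pointer or nil" choice can in principle be lowered without any compare or branch. The C sketch below is purely illustrative (the function names are mine, and the optimizer may or may not emit exactly this shape); it shows one plausible branchless form, where an arithmetic shift of the sign bit produces an all-ones or all-zero mask that is ANDed with the pointer bits.

```
#include <stdint.h>
#include <stddef.h>

/* C analogue of the Swift snippet: return the pointer only when the
 * sign bit of `flags` is set, otherwise NULL. */
const uint8_t *fast_cstring_contents(uint64_t flags, const uint8_t *contents) {
    if (flags & 0x8000000000000000ull)
        return contents;
    return NULL;
}

/* One plausible branchless lowering, written by hand for illustration:
 * shifting the sign bit arithmetically yields all-ones when it is set and
 * zero otherwise, so the mask selects either the pointer or NULL. */
const uint8_t *fast_cstring_contents_branchless(uint64_t flags, const uint8_t *contents) {
    uint64_t mask = (uint64_t)((int64_t)flags >> 63);
    return (const uint8_t *)((uintptr_t)contents & mask);
}
```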
mattpd: 2025 AsiaLLVM Developers' Meeting Talks
Videos: https://www.youtube.com/playlist?list=PL_R5A0lGi1ADKfJbzpA0rMDCb5T3QGe5k
Slides: https://llvm.org/devmtg/2025-06/#program
#LLVM #MLIR
Ramkumar Ramachandra: I'm trying out #GenAI in the context of writing code. My conclusion, based on a few days of intensive use, is that it is alpha-quality, with a suggestion reject-rate of 95% on any real software project like #LLVM. The 5% of good suggestions appear when you're editing one instance of a pattern and want to change all instances; things like paren-matching are also taken care of automatically. The little automation comes at the cost of putting up with bad visual feedback nearly all the time, and it can take some time to get used to. It is by no means "smart", but this technology offers a way to automate things that can never be automated by classical software.

I also tried it on a toy project: a tree-sitter-based LLVM IR parser. In this case, the entire task is a mechanical chore of reading docs and ample examples and encoding the knowledge in the parser. For kicks, I tried to generate the entire parser with the technology, and the result turned out to be so bad I had to delete it. Then I started writing the parser myself, and the suggestions were actually quite good. The best part? I generated 300 tests to exercise the parser automatically (I had to tweak very little)! Of course, the tests aren't high-quality, with over 30% redundancy, but this is a toy project anyway, so who cares?
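As an aside on what a "tree-sitter-based LLVM IR parser" involves mechanically: tree-sitter grammars compile to C and are driven through a small C API, roughly as sketched below. The grammar entry point `tree_sitter_llvm_ir()` is a hypothetical name for whatever the generated grammar exports; the remaining calls are the standard tree-sitter API.

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <tree_sitter/api.h>

/* Hypothetical entry point produced by `tree-sitter generate` for an LLVM IR grammar. */
const TSLanguage *tree_sitter_llvm_ir(void);

int main(void) {
    const char *source =
        "define i32 @add(i32 %a, i32 %b) {\n"
        "entry:\n"
        "  %sum = add i32 %a, %b\n"
        "  ret i32 %sum\n"
        "}\n";

    TSParser *parser = ts_parser_new();
    ts_parser_set_language(parser, tree_sitter_llvm_ir());

    /* Parse the source string and grab the root of the syntax tree. */
    TSTree *tree = ts_parser_parse_string(parser, NULL, source, (uint32_t)strlen(source));
    TSNode root = ts_tree_root_node(tree);

    /* Dump the parse tree as an S-expression; the caller owns the string. */
    char *sexp = ts_node_string(root);
    printf("%s\n", sexp);
    free(sexp);

    ts_tree_delete(tree);
    ts_parser_delete(parser);
    return 0;
}
```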
Giacomo Tesio: @laund@wetdry.world

If a project *needs* #BigTech money, it is controlled by Big Tech.

The control might be tighter or looser, but it's there: the project cannot deviate from its sponsors' will, whatever the developer community thinks about it.

To me, it doesn't matter how the money comes, whether as a deductible donation, as contracts to be the default search engine, as #Google Summer of Code, or as contributor employment: *if you take their money, you serve them*.

> if you think 'working with other people whose goals don't 100% align with yours' is a bad thing, please avoid LLVM.

Well, to be fair, I try to avoid not only #LLVM but any #programming language based on it.

Sure, you can ignore all the damage these corporations are doing to people all over the world (resource extraction, pollution, energy consumption, surveillance and manipulation of people through #AdTech, workers' oppression: https://abcnews.go.com/Technology/string-suicides-apple-manufacturer-china/story?id=10789704) and work with them for years, because "working with other people whose goals don't 100% align with yours" is not such a bad thing.

But I cannot.

I guess it's a matter of priorities, isn't it?

To me an overcomplicated toolchain is a liability in itself, but if it's built with #GAFAM money, it's a cancer.

@david_chisnall@infosec.exchange @Dominix@mastodon.social
Hannes Hauswedell: Go home, Clang, you are drunk!

#LLVM #Clang #cplusplus
KaiXin: Also I am curious about #FreeBSD with #ports and #poudriere: how do you guys manage #Firefox and possibly #LibreOffice? It took me ~5h to compile the default #LLVM flavor on my laptop, so I would assume the giants listed above take more than 10 hours? I still remember the old days when I was using #Gentoo #Linux and whenever there were updates for them I had to keep my PC on overnight... But nowadays #Firefox seems to update more frequently, and I dare not compile it a few times a month.

#BSD #Unix #UseBSD #RUNBSD #FOSS
Peter N. M. Hansteen: clang(1)/llvm/lld(1) updated to version 19: https://www.undeadly.org/cgi?action=article;sid=20250612123207
#openbsd #clang #llvm #lld #development #compiler #update
Continued thread

On the other hand, I was not so lucky with C. I tried a very simple C program and compiled it with the #llvm-mos mos-mega65-clang, and it didn't work as expected. I recompiled the very same source with mos-c64-clang and it works like a charm in GO64 (C64) mode. I’m not sure yet whether it’s something related to the emulator or the compiler.
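For context, the kind of program involved might look like the sketch below; the original post doesn't show its source, so this is a hypothetical stand-in, and the command lines are illustrative rather than exact. llvm-mos ships per-target driver wrappers such as mos-c64-clang and mos-mega65-clang, so the same C source can be built for either machine:

```
/* hello.c -- hypothetical stand-in for the "very simple C program" above.
 * Illustrative builds with llvm-mos:
 *   mos-c64-clang    -Os -o hello-c64.prg hello.c
 *   mos-mega65-clang -Os -o hello-m65.prg hello.c
 */
#include <stdio.h>

int main(void) {
    puts("HELLO FROM LLVM-MOS");
    return 0;
}
```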

All I want is just a collection of #binutils, #GCC, #llvm+#clang, #glibc and #musl that are "free standing" / relocatable, which I can pack into a #squashfs image to carry around to my various development machines.

You'd think that for something as fundamental as compiler infrastructure with over 60 years of knowledge, the whole bootstrapping and bringup process would have been super streamlined, or at least mostly pain free by now.

Yeah, about that. IYKYK

TPDE Compiler Back-End Framework

arxiv.org/abs/2505.22610

"TPDE-LLVM: a standalone back-end for LLVM-IR, which compiles 10--20x faster than LLVM -O0 with similar code quality, usable as library (e.g., for JIT), as tool (tpde-llc), and integrated in Clang/Flang (with a patch)."

Holy cow! 🤯

Open Source on GitHub:
github.com/tpde2/tpde

arXiv.org: TPDE: A Fast Adaptable Compiler Back-End Framework

Fast machine code generation is especially important for fast start-up just-in-time compilation, where the compilation time is part of the end-to-end latency. However, widely used compiler frameworks like LLVM do not prioritize fast compilation and require an extra IR translation step, increasing latency even further; and rolling a custom code generator is a substantial engineering effort, especially when targeting multiple architectures. Therefore, in this paper, we present TPDE, a compiler back-end framework that adapts to existing code representations in SSA form. Using an IR-specific adapter providing canonical access to IR data structures and a specification of the IR semantics, the framework performs one analysis pass and then performs the compilation in just a single pass, combining instruction selection, register allocation, and instruction encoding. The generated target instructions are primarily derived from code written in a high-level language through LLVM's Machine IR, easing portability to different architectures while enabling optimizations during code generation. To show the generality of our framework, we build a new back-end for LLVM from scratch targeting x86-64 and AArch64. Performance results on SPECint 2017 show that we can compile LLVM-IR 8--24x faster than LLVM -O0 while being on-par in terms of run-time performance. We also demonstrate the benefits of adapting to domain-specific IRs in JIT contexts, particularly WebAssembly and database query compilation, where avoiding the extra IR translation further reduces compilation latency.
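To make the "single combined pass" idea from the abstract concrete, here is a deliberately tiny C sketch. It is not TPDE's API (every type, name, and opcode here is invented); it only illustrates instruction selection, register allocation, and encoding happening in one walk over a toy SSA IR instead of as three separate passes over intermediate data structures.

```
#include <stdint.h>
#include <stdio.h>

/* Toy SSA instruction: instruction i defines virtual register %i. */
typedef enum { IR_CONST, IR_ADD, IR_RET } IrOp;
typedef struct { IrOp op; int lhs, rhs; int64_t imm; } IrInst;

typedef struct {
    uint8_t buf[64];          /* encoded "machine code" */
    int     len;
    int     vreg_to_preg[16]; /* virtual -> physical register map */
    int     next_preg;
} Backend;

/* Trivial register allocator: hand out physical registers on first use. */
static int alloc_reg(Backend *b, int vreg) {
    if (b->vreg_to_preg[vreg] < 0)
        b->vreg_to_preg[vreg] = b->next_preg++;
    return b->vreg_to_preg[vreg];
}

static void emit(Backend *b, uint8_t byte) { b->buf[b->len++] = byte; }

/* The single combined pass: for each IR instruction, pick a (made-up) target
 * opcode, allocate registers, and encode the bytes immediately. */
static void compile(Backend *b, const IrInst *ir, int n) {
    for (int i = 0; i < n; i++) {
        switch (ir[i].op) {
        case IR_CONST:
            emit(b, 0x01); emit(b, (uint8_t)alloc_reg(b, i)); emit(b, (uint8_t)ir[i].imm);
            break;
        case IR_ADD:
            emit(b, 0x02); emit(b, (uint8_t)alloc_reg(b, i));
            emit(b, (uint8_t)alloc_reg(b, ir[i].lhs)); emit(b, (uint8_t)alloc_reg(b, ir[i].rhs));
            break;
        case IR_RET:
            emit(b, 0x03); emit(b, (uint8_t)alloc_reg(b, ir[i].lhs));
            break;
        }
    }
}

int main(void) {
    Backend b = {0};
    for (int i = 0; i < 16; i++) b.vreg_to_preg[i] = -1;
    /* %0 = 40; %1 = 2; %2 = add %0, %1; ret %2 */
    IrInst ir[] = { {IR_CONST, 0, 0, 40}, {IR_CONST, 0, 0, 2},
                    {IR_ADD, 0, 1, 0}, {IR_RET, 2, 0, 0} };
    compile(&b, ir, 4);
    for (int i = 0; i < b.len; i++) printf("%02x ", b.buf[i]);
    printf("\n");
    return 0;
}
```

In TPDE itself, per the abstract, the walk is driven through an IR-specific adapter and a semantics specification, which is what lets it plug into LLVM-IR or other SSA IRs without an extra translation step.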

#EuroLLVM is a good opportunity to talk about the #LLVM community. No, not at the conference.

Because if you ever were wondering what LLVM project's attitude towards its volunteer contributors is, just look at the ticket prices. I mean, which volunteer would spend $750 on a conference ticket?!

But yeah, we know our place. It's to spend weekends fixing what corporate contributors broke during the week, then beg them to actually review our fixes before they break more. And in the meantime, our gracious lords will debate how to mess up our future even more.

llvm.swoogo.com/2025eurollvm

llvm.swoogo.com: EuroLLVM Developers' Meeting 2025

One of the reasons I'm still using GitHub for a lot of stuff is the free CI, but I hadn't really realised how little CI actually costs when you pay for it yourself. For #CHERIoT #LLVM, we're using Cirrus-CI with a 'bring your own cloud subscription' thing. We set up ccache backed by a cloud storage thing, so incremental builds are fast. The bill for last month? £0.31.

We'll probably pay more as we hire more developers, but I doubt it will cost more than £10/month even with an active team and external contributors. Each CI run costs almost a rounding-error amount, and that's doing a clean (+ ccache) build of LLVM and running the test suite. We're using Google's Arm instances, which have amazingly good price:performance (much better than the x86 ones) for all CI, and just building the x86-64 releases on x86-64 hardware (we do x86-64 and AArch64 builds to pull into our dev container).

For personal stuff, I doubt the CI that I use costs more than £0.10/month at this kind of price. There's a real market for a cloud provider that focuses on scaling down more than on scaling up and makes it easy to deploy this kind of thing (we spent far more money on the developer time to figure out the nightmare GCE web interface than we've spent on the compute; it's almost as bad as Azure and seems to be designed by the same set of creatures who have never actually met a human).