<p>Is complex query answering really complex? A paper presented at the International Conference on Machine Learning (<a href="https://xn--baw-joa.social/tags/ICML2025" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ICML2025</span></a>) by Cosimo Gregucci, PhD student at <span class="h-card" translate="no"><a href="https://xn--baw-joa.social/@UniStuttgartAI" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>UniStuttgartAI</span></a></span> <span class="h-card" translate="no"><a href="https://xn--baw-joa.social/@Uni_Stuttgart" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>Uni_Stuttgart</span></a></span>, takes up this question.</p>
<p>In the paper, Cosimo Gregucci, Bo Xiong, Daniel Hernández (<span class="h-card" translate="no"><a href="https://mstdn.degu.cl/@daniel" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>daniel</span></a></span>), Lorenzo Loconte, Pasquale Minervini (<span class="h-card" translate="no"><a href="https://sigmoid.social/@pminervini" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>pminervini</span></a></span>), Steffen Staab, and Antonio Vergari (<span class="h-card" translate="no"><a href="https://ellis.social/@nolovedeeplearning" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>nolovedeeplearning</span></a></span>) show that the “good” performance of SoTA approaches comes predominantly from answers that reduce to single link prediction. Current neural and hybrid solvers exploit (different) forms of triple memorization, which makes many complex queries far easier than they appear. The authors confirm this with a stratified analysis of these methods' performance (sketched below), and by proposing a hybrid solver, CQD-Hybrid, which, despite being a simple extension of the older CQD method, is competitive with other SoTA models.</p>
<p>The paper also proposes a way to make query answering benchmarks more challenging, so that they measure genuine progress on complex reasoning rather than memorization.</p>
<p><a href="https://arxiv.org/abs/2410.12537" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2410.12537</span><span class="invisible"></span></a></p>
<p><a href="https://xn--baw-joa.social/tags/KnowledgeGraphs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KnowledgeGraphs</span></a> <a href="https://xn--baw-joa.social/tags/QueryAnswering" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>QueryAnswering</span></a> <a href="https://xn--baw-joa.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://xn--baw-joa.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://xn--baw-joa.social/tags/Benchmarking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Benchmarking</span></a> <a href="https://xn--baw-joa.social/tags/CQA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CQA</span></a></p>
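<p>For the intuition behind the stratified analysis, here is a minimal, hypothetical sketch (our illustration, not the authors' code) for a 2-hop path query (a, r1, ?m) AND (?m, r2, ?t): an answer is easy when all but one of the triples on some path to it are already memorized in the training graph, so only a single missing link has to be predicted. All entity and relation names are made up.</p>
<pre><code>def stratify_2hop(anchor, r1, r2, answers, train, entities):
    """Split the answers of a 2-hop query by how much inference they need.

    train    -- set of (head, relation, tail) training triples
    entities -- candidate intermediate entities
    """
    memorized, one_link, full_inference = set(), set(), set()
    for t in answers:
        both = any((anchor, r1, m) in train and (m, r2, t) in train
                   for m in entities)
        one = any((anchor, r1, m) in train or (m, r2, t) in train
                  for m in entities)
        if both:
            memorized.add(t)       # both hops are training triples
        elif one:
            one_link.add(t)        # one hop memorized, one link to predict
        else:
            full_inference.add(t)  # genuinely needs multi-hop inference
    return memorized, one_link, full_inference


# Toy example with hypothetical entities and relations:
train = {("a", "r1", "m"), ("m", "r2", "t1")}
print(stratify_2hop("a", "r1", "r2",
                    answers={"t1", "t2"},
                    train=train,
                    entities={"a", "m", "t1", "t2"}))
# -> ({'t1'}, {'t2'}, set())
</code></pre>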
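<p>And a similarly hedged sketch of the hybrid scoring idea behind CQD-Hybrid (again our illustration, not the authors' implementation): atoms whose triple is memorized in the training graph get maximal score, everything else falls back to a neural link predictor. Here <code>neural_score</code> is a hypothetical stand-in for any KG embedding scorer with outputs in [0, 1].</p>
<pre><code>def hybrid_atom_score(head, relation, tail, train, neural_score):
    # Memorized training triple: treat as certainly true.
    if (head, relation, tail) in train:
        return 1.0
    # Otherwise fall back to the neural link predictor.
    return neural_score(head, relation, tail)


def conjunction_score(atom_scores):
    # As in CQD, per-atom scores of a conjunctive query are combined
    # with a t-norm; this uses the product t-norm.
    score = 1.0
    for s in atom_scores:
        score *= s
    return score
</code></pre>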