Is complex query answering really complex? A paper presented at the International Conference on Machine Learning (#ICML2025) by Cosimo Gregucci, PhD student at @UniStuttgartAI @Uni_Stuttgart, tackles this question.
In this paper, Cosimo Gregucci, Bo Xiong, Daniel Hernández (@daniel), Lorenzo Loconte, Pasquale Minervini (@pminervini), Steffen Staab, and Antonio Vergari (@nolovedeeplearning) reveal that the "good" performance of SoTA approaches predominantly comes from answers that reduce to single link prediction. Current neural and hybrid solvers can exploit (different) forms of triple memorization that make complex queries much easier. The authors confirm this with a stratified analysis of these methods' performance, and by proposing a hybrid solver, CQD-Hybrid, which, despite being a simple extension of the older CQD method, is highly competitive with other SoTA models.
The paper also proposes a way to make query answering benchmarks more challenging, to help drive progress in the field.
https://arxiv.org/abs/2410.12537