Title:
Query Languages for Neural Networks
Contributors:
Grohe, Martin; Standke, Christoph; Steegmans, Juno; Van den Bussche, Jan; Roy, S.; Kara, A.
Publisher Information:
Schloss Dagstuhl – Leibniz Center for Informatics
Publication Year:
2025
Collection:
Document Server@UHasselt (Universiteit Hasselt)
Document Type:
Conference object
File Description:
application/pdf
Language:
English
Relation:
Leibniz International Proceedings in Informatics (LIPIcs), vol. 328, 9:1–9:18; https://hdl.handle.net/1942/47507; 001533987300009; https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2025.9
DOI:
10.4230/LIPIcs.ICDT.2025.9
Rights:
Martin Grohe, Christoph Standke, Juno Steegmans, and Jan Van den Bussche; licensed under the Creative Commons Attribution 4.0 International license (CC BY 4.0); open access (info:eu-repo/semantics/openAccess)
Accession Number:
edsbas.F10C03F8
Database:
BASE

Further Information

We lay the foundations for a database-inspired approach to interpreting and understanding neural network models by querying them with declarative languages. Towards this end, we study different query languages, based on first-order logic, that differ mainly in their access to the neural network model. First-order logic over the reals naturally yields a language that views the network as a black box: only the input-output function defined by the network can be queried. This is essentially the approach of constraint query languages. A white-box language, on the other hand, can be obtained by viewing the network as a weighted graph and extending first-order logic with summation over weight terms. The latter approach is essentially an abstraction of SQL. In general, as we show, the two approaches are incomparable in expressive power. Under natural circumstances, however, the white-box approach can subsume the black-box approach; this is our main result. We prove the result concretely for linear constraint queries over real functions definable by feedforward neural networks with a fixed number of hidden layers and piecewise linear activation functions.

Funding
Martin Grohe: Funded by the European Union (ERC, SymSim, 101054974). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
Christoph Standke: Funded by the German Research Foundation (DFG) under grants GR 1492/16-1 and GRK 2236 (UnRAVeL).
Juno Steegmans: Supported by the Special Research Fund (BOF) of UHasselt.
Jan Van den Bussche: Partially supported by the Flanders AI Program (FAIR).
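To make the black-box/white-box distinction concrete, here is a minimal Python sketch; it is our illustration of the two access modes on a toy ReLU network, not code or notation from the paper, and all names (edges, biases, f) are illustrative assumptions.

    def relu(v):
        # piecewise linear activation
        return max(v, 0.0)

    # White-box view: the network as a weighted graph.
    # Each edge is (source, target, weight); "x" is the input node.
    edges = [
        ("x", "h1", 2.0), ("x", "h2", -1.0),   # input -> hidden
        ("h1", "y", 1.0), ("h2", "y", 3.0),    # hidden -> output
    ]
    biases = {"h1": 0.5, "h2": 0.0, "y": -1.0}

    def f(x):
        # Black-box access: only this input-output function is visible.
        h1 = relu(2.0 * x + 0.5)
        h2 = relu(-1.0 * x + 0.0)
        return 1.0 * h1 + 3.0 * h2 - 1.0

    # A black-box, constraint-style query: is f(x) >= 0 on sample inputs?
    # (The logic quantifies over all reals; a finite grid is only a stand-in
    # for the black-box interface.)
    print(all(f(x / 10.0) >= 0 for x in range(-10, 11)))

    # A white-box, SQL-style aggregate: sum the weights of all edges into "y".
    # This summation over weight terms has no counterpart in black-box access.
    print(sum(w for (_, t, w) in edges if t == "y"))

In the abstract's terms, the first query only consults the input-output function (as in constraint query languages over the reals), while the weight aggregate needs the weighted-graph view; the paper's main result is that, for networks of fixed depth with piecewise linear activations, the white-box language can also express every linear constraint query.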