Title:
teex: a toolbox for the evaluation of explanations
Publisher Information:
Elsevier
Publication Year:
2023
Collection:
Universitat Politècnica de Catalunya, BarcelonaTech: UPCommons - Global access to UPC knowledge
Document Type:
Article in journal/newspaper
File Description:
application/pdf
Language:
English
ISSN:
0925-2312
Relation:
https://www.sciencedirect.com/science/article/pii/S0925231223007658; Antoñanzas, J. [et al.]. teex: a toolbox for the evaluation of explanations. "Neurocomputing", 28 October 2023, vol. 555, article no. 126642.; http://hdl.handle.net/2117/396241
DOI:
10.1016/j.neucom.2023.126642
Rights:
Attribution 4.0 International (CC BY 4.0); http://creativecommons.org/licenses/by/4.0/; Open Access
Accession Number:
edsbas.EC42ED1F
Database:
BASE

Further Information

We present teex, a Python toolbox for the evaluation of explanations. teex focuses on the evaluation of local explanations of the predictions of machine learning models by comparing them to ground-truth explanations. It supports several types of explanations: feature importance vectors, saliency maps, decision rules, and word importance maps. A collection of evaluation metrics is provided for each type. Real-world datasets and generators of synthetic data with ground-truth explanations are also contained within the library. teex contributes to research on explainable AI by providing tested, streamlined, user-friendly tools to compute quality metrics for the evaluation of explanation methods. Source code and a basic overview can be found at github.com/chus-chus/teex, and tutorials and full API documentation are at teex.readthedocs.io.

teex has been developed as part of the TAIAO project (Time-Evolving Data Science/Artificial Intelligence for Advanced Open Environmental Science), funded by the New Zealand Ministry of Business, Innovation, and Employment (MBIE).

Peer Reviewed

Postprint (published version)
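The abstract describes teex's core operation: scoring a candidate explanation against a ground-truth explanation of the same prediction. The sketch below illustrates that kind of comparison for the feature-importance-vector case using plain NumPy only; it does not use the teex API, and all function and variable names here are hypothetical.

```python
# Illustrative sketch only: this is NOT the teex API. It shows the kind of
# ground-truth comparison described in the abstract for the "feature
# importance vector" explanation type. All names are hypothetical.
import numpy as np

def cosine_similarity(gt, pred):
    """Cosine similarity between a ground-truth and a candidate importance vector."""
    return float(np.dot(gt, pred) / (np.linalg.norm(gt) * np.linalg.norm(pred)))

def f1_on_selected_features(gt, pred, threshold=0.5):
    """Binarise both vectors at `threshold` and compute the F1 score of the
    candidate's selected features against the ground truth's selected features."""
    gt_bin, pred_bin = gt >= threshold, pred >= threshold
    tp = np.sum(gt_bin & pred_bin)
    fp = np.sum(~gt_bin & pred_bin)
    fn = np.sum(gt_bin & ~pred_bin)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return float(2 * precision * recall / (precision + recall))

# Ground-truth importances (e.g. from a synthetic data generator) and the
# importances produced by some explanation method for the same prediction.
gt_explanation = np.array([0.9, 0.0, 0.8, 0.1])
candidate_explanation = np.array([0.7, 0.2, 0.6, 0.0])

print(cosine_similarity(gt_explanation, candidate_explanation))
print(f1_on_selected_features(gt_explanation, candidate_explanation))
```

In teex itself, such per-type quality metrics are computed against the ground-truth explanations supplied by the library's bundled real-world datasets and synthetic data generators; see the repository and documentation linked above for the actual interface.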