Title:
Toward Generalizable Machine Learning Models in Speech, Language, and Hearing Sciences: Estimating Sample Size and Reducing Overfitting
Language:
English
Authors:
Hamzeh Ghasemzadeh (ORCID 0000-0001-5395-1908), Robert E. Hillman (ORCID 0000-0002-7374-994X), Daryush D. Mehta (ORCID 0000-0002-6535-573X)
Source:
Journal of Speech, Language, and Hearing Research. 2024 67(3):753-781.
Availability:
American Speech-Language-Hearing Association. 2200 Research Blvd #250, Rockville, MD 20850. Tel: 301-296-5700; Fax: 301-296-8580; e-mail: slhr@asha.org; Web site: http://jslhr.pubs.asha.org
Peer Reviewed:
Y
Page Count:
29
Publication Date:
2024
Sponsoring Agency:
National Institute on Deafness and Other Communication Disorders (NIDCD) (DHHS/NIH)
Contract Number:
T32DC013017
P50DC015446
K99DC021235
Document Type:
Journal Articles; Reports - Research
DOI:
10.1044/2023_JSLHR-23-00273
ISSN:
1092-4388 (print)
1558-9102 (online)
Entry Date:
2024
Accession Number:
EJ1417845
Database:
ERIC

Abstract (As Provided):

Purpose: Many studies applying machine learning (ML) in the speech, language, and hearing sciences rely on cross-validation with a single data split. The first purpose of this study was to provide quantitative evidence that would incentivize researchers to instead use the more robust method of nested k-fold cross-validation. The second purpose was to present methods and MATLAB code for performing power analysis for ML-based analyses during study design.

Method: First, the significant impact of different cross-validation methods on ML outcomes was demonstrated using real-world clinical data. Monte Carlo simulations were then used to quantify the interactions among the cross-validation method, the discriminative power of the features, the dimensionality of the feature space, the dimensionality of the model, and the sample size. Four cross-validation methods (single holdout, 10-fold, train-validation-test, and nested 10-fold) were compared on the statistical power and confidence of the resulting ML models. Distributions of the null and alternative hypotheses were used to determine the minimum sample size required for a statistically significant outcome (5% significance level) with 80% power. Statistical confidence of a model was defined as the probability that the correct features were selected for inclusion in the final model.

Results: ML models generated with the single holdout method had very low statistical power and confidence, leading to overestimation of classification accuracy. Conversely, nested 10-fold cross-validation yielded the highest statistical confidence and power while also providing an unbiased estimate of accuracy. The sample size required by the single holdout method could be 50% higher than that needed with nested k-fold cross-validation, and the statistical confidence of the model based on nested k-fold cross-validation was as much as four times higher than that of the single holdout-based model. A computational model, MATLAB code, and lookup tables are provided to help researchers estimate the minimum sample size needed during study design.

Conclusion: The adoption of nested k-fold cross-validation is critical for unbiased and robust ML studies in the speech, language, and hearing sciences.
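The gap the abstract reports between single-holdout and nested cross-validation estimates comes down to where feature selection and hyperparameter tuning are allowed to see the data. Below is a minimal sketch of nested 10-fold cross-validation; it is not the authors' MATLAB code but a Python/scikit-learn illustration on synthetic data, with a univariate feature selector and a linear SVM chosen purely as hypothetical stand-ins for the clinical features described in the article. The key property it demonstrates is that all model selection runs in the inner loop, so outer-fold accuracy remains an unbiased performance estimate.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    # Synthetic stand-in for clinical data: 100 samples, 20 candidate
    # features, only 5 of which carry real discriminative information.
    X, y = make_classification(n_samples=100, n_features=20, n_informative=5,
                               n_redundant=0, random_state=0)

    # Feature selection lives inside the pipeline so that it is refit on
    # every training fold and never sees the corresponding test fold.
    pipe = Pipeline([("select", SelectKBest(f_classif)),
                     ("clf", SVC(kernel="linear"))])
    grid = {"select__k": [3, 5, 10], "clf__C": [0.1, 1.0, 10.0]}

    inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)  # tuning
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=2)  # estimation

    # Inner loop picks k and C; outer loop measures generalization.
    search = GridSearchCV(pipe, grid, cv=inner, scoring="accuracy")
    scores = cross_val_score(search, X, y, cv=outer, scoring="accuracy")
    print(f"nested 10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

By contrast, performing feature selection once on the full data set before a single holdout split lets the test samples influence model selection, which is the leakage that inflates the accuracy estimates described in the Results.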

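The article's power analysis pairs null and alternative distributions of the ML outcome to find the smallest sample size giving 80% power at the 5% level. The sketch below is a deliberately simplified Monte Carlo stand-in for that idea, not the published procedure: each simulated study draws two Gaussian classes separated on a single informative feature, is scored with 10-fold cross-validation, and is called significant when a one-sided binomial test of the pooled out-of-fold predictions beats chance (an approximate criterion, since cross-validation folds are not independent). The effect size, classifier, and simulation counts are arbitrary choices made for illustration.

    import numpy as np
    from scipy.stats import binomtest
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import StratifiedKFold, cross_val_predict

    rng = np.random.default_rng(0)

    def estimated_power(n, effect_size, n_features=10, n_sims=200, alpha=0.05):
        """Fraction of simulated n-sample studies whose 10-fold CV accuracy
        is significantly above chance (one-sided binomial test)."""
        hits = 0
        for _ in range(n_sims):
            y = np.repeat([0, 1], n // 2)
            X = rng.normal(size=(y.size, n_features))
            X[:, 0] += effect_size * y  # one truly discriminative feature
            cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
            pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=cv)
            n_correct = int((pred == y).sum())
            if binomtest(n_correct, y.size, p=0.5,
                         alternative="greater").pvalue < alpha:
                hits += 1
        return hits / n_sims

    # Sweep candidate sample sizes until estimated power reaches 80%.
    for n in (30, 50, 80, 120, 200):
        power = estimated_power(n, effect_size=1.0)
        print(f"n = {n:3d}  estimated power = {power:.2f}")
        if power >= 0.80:
            break

For the authors' actual computational model, lookup tables, and MATLAB implementation, consult the article itself and its supplemental materials.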