Micromodels for Efficient, Explainable, and Reusable Systems: A Case Study on Mental Health


Many statistical models achieve high accuracy on test benchmarks, but are not explainable, struggle in low-resource scenarios, cannot be reused across multiple tasks, and cannot easily integrate domain expertise. These factors limit their use, particularly in settings such as mental health, where datasets are difficult to annotate and model outputs have significant impact. We introduce a micromodel architecture to address these challenges. Our approach allows researchers to build interpretable representations that embed domain knowledge and provide explanations throughout the model’s decision process. We demonstrate the idea on multiple mental health tasks: depression classification, PTSD classification, and suicidal risk assessment. Our systems consistently produce strong results, even in low-resource scenarios, and are more interpretable than alternative methods.

Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: Findings
Jonathan K. Kummerfeld