In recent years, we have witnessed a paradigm shift in
NLP from fine-tuning large-scale pre-trained language
models (PLMs) on task-specific data to prompt-based
learning. In the latter, the task description is embedded
into the PLM input, enabling the same model to handle
multiple tasks. While both approaches have demonstrated
impressive performance across a range of NLP tasks, their
opaque nature makes it challenging for humans to comprehend
their inner workings and decision-making processes. We
conduct research to address the interpretability concerns
surrounding neural models in language understanding. Our
work includes a hierarchical interpretable text classifier
that goes beyond word-level interpretations, uncertainty
interpretation of text classifiers built on PLMs,
explainable recommender systems that harness information
across diverse modalities, and explainable student answer
scoring.