Abstract
When does a model explain? When does it promote understanding? A dominant approach to scientific explanation is the interventionist view, according to which, when X explains Y, intervening on X can produce, prevent, or alter Y in some predictable way. In this paper, I argue for two claims. First, I reject a position endorsed by many interventionist theorists: that to explain some phenomenon by providing a model is thereby to understand that phenomenon. While endorsing the interventionist view, I argue that explaining and understanding are distinct scientific achievements. Second, I defend a novel theory of scientific understanding. On this view, when a model M promotes understanding, M makes available a distinctive mental state, one of the same psychological kind as the state we occupy when we grasp the events of a narrative as bearing on some ultimate conclusion. I conclude by showing that, given this view, mechanistic explanations often provide a powerful source of understanding that many causal-historical models lack. This paper will be of interest to philosophers of science and to epistemologists working on explanation and understanding.