The problem of missing-data mechanism uncertainty can be addressed in a variety of ways; one common statistical approach is sensitivity analysis. Most sensitivity analysis procedures involve specifying a range of fixed values of a sensitivity parameter, which can reflect differences in the assumed missing-data mechanism and/or the severity of the departure from the missing at random (MAR) mechanism. A drawback of this strategy is that researchers must explicitly make a series of mathematical judgments that are somewhat subjective. In some settings, however, particularly in the behavioral sciences, comparing results obtained under different assumed missing-data mechanisms and missingness relationships may be of greater interest than comparing results obtained across fixed values of a sensitivity parameter. This is a slightly different question from the one posed by typical sensitivity analysis procedures.

In this dissertation, I develop a method for sensitivity analysis that requires less subjective decision making and aims to provide objective information that is straightforward to interpret. My procedure is designed to capture the degree to which statistical results are affected by the choice of assumed missing-data mechanism. Broadly, the method I propose is a modification of the usual sensitivity analysis procedure and provides a statistic that quantifies the stability of results in the face of missing-data mechanism uncertainty. I provide two alternatives for conducting hypothesis tests in this modified sensitivity analysis framework: a classical ANOVA approach and a Monte Carlo approach. I also examine several candidates for a statistic that can serve as an effect size of sensitivity to missing-data mechanism uncertainty. The performance of my procedure is evaluated in three Monte Carlo simulation studies.

Results suggest that the hypothesis testing aspect of my proposed procedure works as intended under a variety of circumstances; the Monte Carlo approach works particularly well. Cohen's f and the coefficient of variation work well as effect size measures. Applications of my procedure are illustrated using real data examples.
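To make the general idea concrete, the sketch below shows one way the ANOVA-style test and the two effect size candidates (Cohen's f and the coefficient of variation) might be computed from estimates of a focal parameter obtained under several assumed missing-data mechanisms. This is only an illustrative sketch under assumed inputs, not the dissertation's actual implementation; the function name, input structure, and use of repeated estimates per mechanism are hypothetical.

```python
# Illustrative sketch (not the dissertation's procedure): quantify how much a focal
# estimate varies across assumed missing-data mechanisms. Assumes that, for each of
# K assumed mechanisms, we already have replicate estimates of the same parameter
# (e.g., from repeated imputations or Monte Carlo draws).
import numpy as np
from scipy import stats

def sensitivity_summary(estimates_by_mechanism):
    """estimates_by_mechanism: list of 1-D arrays, one per assumed missing-data mechanism."""
    groups = [np.asarray(g, dtype=float) for g in estimates_by_mechanism]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    n_total = all_vals.size
    k = len(groups)

    # Classical one-way ANOVA: do mean estimates differ across assumed mechanisms?
    f_stat, p_value = stats.f_oneway(*groups)

    # Cohen's f: between-mechanism spread relative to pooled within-mechanism spread.
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    sigma_between = np.sqrt(ss_between / n_total)
    sigma_within = np.sqrt(ss_within / (n_total - k))
    cohens_f = sigma_between / sigma_within

    # Coefficient of variation of the mechanism-specific mean estimates.
    mech_means = np.array([g.mean() for g in groups])
    cv = mech_means.std(ddof=1) / abs(mech_means.mean())

    return {"F": f_stat, "p": p_value, "cohens_f": cohens_f, "cv": cv}

# Example usage with fabricated inputs: three assumed mechanisms, five estimates each.
# sensitivity_summary([np.random.normal(0.50, 0.05, 5),
#                      np.random.normal(0.48, 0.05, 5),
#                      np.random.normal(0.55, 0.05, 5)])
```

Small values of Cohen's f and the coefficient of variation would indicate that the substantive result is relatively stable across the assumed mechanisms, which is the kind of summary the proposed procedure is intended to provide.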