Over the last forty years, political science has moved toward a consensus on how to conduct research. The best evidence of this consensus is the growing acceptance of the rational choice paradigm, which deduces testable hypotheses that meet the parsimony standard of Ockham's razor. Unfortunately, in a world with multiple political actors (e.g., voters, individual politicians, leaders of special interests and political parties), such simple models often fail to explain the complexity created by the interactions among these groups. This dissertation examines how the broad acceptance of parsimony as a criterion for evaluating theory has affected the discipline. The primary consequence is the formulation of unduly narrow theories that limit the opportunity to gain insight. An assessment of the field of voting behavior yields illustrative examples of the shortcomings of pursuing parsimony at the expense of comprehensive explanation. The author outlines an alternative to the disciplinary norms of hypothesis generation that uses conditional hypotheses to produce 'thicker' explanations without sacrificing generalizability. The quantitative methods currently accepted for testing hypotheses are poorly suited to analyzing conditional hypotheses. By applying methodologies from the machine learning and artificial intelligence subfields of computer science, the author introduces a novel approach to evaluating hypotheses that leads to more descriptive and predictive theory. These recommendations are explored in detail vis-à-vis two important dilemmas of political participation: (1) over-reporting of voting behavior and (2) the influence of political sophistication on turnout. Disciplinary constraints have led scholars to accept, as part of the process, the inferential inadequacies that result from applying orthodox quantitative methodologies. The author demonstrates that political scientists need not make this accommodation.
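As a minimal sketch of the kind of approach the abstract gestures at, and not the dissertation's actual specification, the following Python example (assuming scikit-learn; the variable names, synthetic data-generating process, and model choices are all illustrative assumptions) shows how a decision tree can represent a conditional hypothesis such as "political sophistication raises turnout only among low-education respondents," a structure an additive logistic regression cannot express unless the analyst pre-specifies the interaction term.

    # Illustrative sketch only: synthetic data and model choices are assumptions,
    # not the author's actual specification.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical covariates: years of education and a 0-1 sophistication scale.
    education = rng.integers(8, 21, size=n)
    sophistication = rng.uniform(0, 1, size=n)

    # Conditional (interactive) data-generating process: sophistication boosts
    # turnout probability only among low-education respondents.
    logit = (-0.5 + 0.1 * (education - 14)
             + np.where(education < 12, 2.0 * sophistication, 0.0))
    p_turnout = 1 / (1 + np.exp(-logit))
    turnout = rng.binomial(1, p_turnout)

    X = np.column_stack([education, sophistication])
    X_train, X_test, y_train, y_test = train_test_split(X, turnout, random_state=0)

    # Additive logistic regression: the orthodox approach; without a hand-coded
    # interaction term it can only estimate an unconditional main effect.
    additive = LogisticRegression().fit(X_train, y_train)

    # Decision tree: can recover the conditional structure from the data itself,
    # e.g. splitting on education and then on sophistication within one branch.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    print("additive logit accuracy:", accuracy_score(y_test, additive.predict(X_test)))
    print("decision tree accuracy: ", accuracy_score(y_test, tree.predict(X_test)))

A fitted tree of this kind also yields directly readable conditional rules, which is one sense in which such methods could support 'thicker' yet still generalizable explanations.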