My dissertation argues that machine ethics has not paid nearly enough attention to contemporary research in cognitive neuroscience. While many fields in artificial intelligence have benefited from neuroscience, machine ethics has largely eschewed help from this quarter and, as a result, has failed to take advantage of our growing understanding of how moral cognition is implemented in the human brain. This is at least partially because moral cognition consists in a complex interaction among many different neurocognitive systems, none of which is exclusive to moral decision making. Given this complexity, there remain significant gaps in our ability to explain how these diverse brain areas and functions interact in constituting our ability to navigate the world in moral terms. Imperfect though it may be, our neuroscientific understanding of moral cognition is sufficiently advanced to help us build machines better able to participate in our moral lives. Detailed findings in cognitive neuroscience have the potential to benefit machine ethics in a number of important areas: informing the development of better machine architectures, suggesting promising approaches to solving difficult cognitive problems, and possibly even helping us to better understand our own complicated relationships with machine agents. However, human moral cognition is hardly a guarantee of morally correct behavior, and machine ethics should be careful not to build machines so biologically faithful that they share our moral failings. In this sense, the potential of cognitive neuroscience to benefit machine ethics is not a matter of modeling moral cognition in toto, but rather one of understanding how the neural implementation of certain cognitive functions might inform our best efforts to realize similar functions in machine agents.
Ultimately, the hope is that by better understanding our own brains, we might better understand a project that aspires to recreate in machines what seem to be some of the most distinctive aspects of human cognition.