Deep Neural Networks (DNNs) have achieved state-of-the-art performance in many areas and have been successfully integrated into commercial products. One of the foremost challenges in adopting DNNs for real-world mission-critical applications is their lack of competency-awareness. Even cutting-edge models fail on edge cases because of the complexity of real-world problems and the long tail of real-world conditions. More importantly, these failures happen silently. This stands in sharp contrast to humans, who are typically aware of the limits of their own competency. For DNNs to earn human trust in reliable decision-making, uncertainty estimation has been investigated as a promising path toward competency-aware neural networks.

In this dissertation, we focus on uncertainty estimation that provides quantified scores representing the confidence of a model's predictions. First, we analyze uncertainty estimation as a general function approximation problem and derive approximation error bounds in quantized settings. Second, we identify problematic evaluation methods in the recent literature on uncertainty estimation and propose new evaluation metrics; we then use these metrics to explore the relationship between uncertainty estimation and hardware resource constraints. Third, motivated by the discrepancy between a model's training objective and the objective it faces in practice, we develop uncertainty-aware training for selective medical image segmentation. Fourth, using a correction-effort prediction model enhanced by uncertainty maps, we propose an ultrasound scanning framework that optimizes data acquisition for improved automated segmentation by DNNs.
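To make the notion of a quantified confidence score concrete, consider predictive entropy of a classifier's softmax output, a common baseline uncertainty score (shown here only as an illustration, not as the specific method developed in this dissertation); a minimal sketch in PyTorch, where `model` and `x` are hypothetical placeholders:

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Entropy of the softmax distribution: one scalar uncertainty
    score per prediction (higher entropy = lower confidence)."""
    probs = F.softmax(logits, dim=-1)
    # clamp avoids log(0) for numerically saturated probabilities
    return -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)

# Hypothetical usage (model and x are placeholders, not from the text):
# logits = model(x)                    # shape: (batch, num_classes)
# scores = predictive_entropy(logits)  # shape: (batch,)
```

Scores of this kind can then be thresholded, e.g., to defer low-confidence predictions to a human reviewer, which is the setting the selective segmentation work in this dissertation targets.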