The rapid growth of machine learning has ushered in a new era characterized by two primary trends: design automation and hardware optimization. Over the past few years, neural architecture search (NAS) has proven to be one of the most successful techniques for automating the architecture engineering of deep neural networks, and it has become a key component of automated machine learning (AutoML). While NAS has already achieved state-of-the-art results on many machine learning tasks, efficiency problems arise in both the development and deployment phases. Although many techniques for accelerating NAS have been proposed, and awareness of hardware constraints during the search has shown promise for optimizing runtime performance, a variety of application-specific problems still challenge the practicality of broad deployment. Due to the intricate relationship between application specifications and the nature of NAS, it is infeasible to develop a universal methodology that boosts both search and execution efficiency.

My focus in this series of works is to enhance the applicability of NAS across different fields by adaptively improving both its software and hardware efficiency. First, I propose novel methods that tune the search framework to fit hardware-constrained platforms such as edge devices. Hardware-software co-exploration, a variant of hardware-aware NAS, is well suited to configurable devices. Combining architecture search with classic model compression techniques is a popular topic, but the resulting cost is generally high. From a different viewpoint, I reformulate the problem of quantization-aware architecture search and propose to merge the two problems to simplify the search. Second, I explore applying NAS to specific tasks where efficiency matters, ranging from irregular data and model structures (graph neural networks) to time- and power-critical real-world applications (medical intervention). The outcomes of these works demonstrate further possibilities for NAS as a replacement for handcrafted neural network design and help propel the realization of AutoML.
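To make the notion of hardware awareness during the search concrete, the following is a minimal Python sketch of one common formulation: a multi-objective reward that scales a candidate architecture's accuracy by a soft penalty on its measured latency. The names `evaluate_accuracy` and `measure_latency`, along with the weighting scheme, are hypothetical placeholders for illustration, not the exact formulation used in these works.

```python
# Minimal sketch of a hardware-aware NAS reward. The two evaluators below
# are hypothetical stand-ins for a real accuracy estimator and an on-device
# latency profiler; the weighting scheme is illustrative only.

def evaluate_accuracy(arch) -> float:
    # Placeholder: train (or estimate) the candidate and return validation accuracy.
    raise NotImplementedError

def measure_latency(arch) -> float:
    # Placeholder: profile the candidate on the target edge device, in milliseconds.
    raise NotImplementedError

def hardware_aware_reward(arch, target_latency_ms: float, w: float = -0.07) -> float:
    """Reward = accuracy * (latency / target)^w, with w < 0.

    Candidates slower than the latency target are penalized multiplicatively,
    steering the search toward architectures that fit the deployment budget.
    """
    acc = evaluate_accuracy(arch)
    lat_ms = measure_latency(arch)
    return acc * (lat_ms / target_latency_ms) ** w
```

A search controller would rank candidate architectures by this reward instead of accuracy alone, which is what allows the search itself, rather than post-hoc compression, to account for the deployment platform.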