Image blur is a common artifact that significantly degrades image quality and hampers subsequent applications. Even in a properly focused imaging system, blur can occur due to relative motion between the camera and the captured scene during the exposure time, or because of the diffraction limit of the system. Image deblurring aims to recover a sharp image from a blurred observation. This is an ill-posed inverse problem when neither the latent sharp image nor the blur kernel that causes the blur is known. To make the problem well-posed, auxiliary information is necessary. Conventionally, auxiliary information is incorporated into image deblurring through an optimization scheme that iteratively estimates the latent sharp image and the blur kernel, but such optimization-based methods are usually time-consuming. Recent progress in deep learning has inspired a variety of image deblurring networks, which outperform conventional approaches on single-image deblurring with much less processing time. However, these networks still struggle with challenging cases such as spatially varying blur. This work explores the use of auxiliary information in image deblurring networks to improve deblurring performance; its major contributions are methods for integrating different types of auxiliary information into deep learning networks.

One type of auxiliary information is a set of images captured with different exposure settings. In addition to a blurry image captured with a long exposure time, an image with a short exposure time can be captured immediately before the blurry one. This short-exposure image is typically unacceptably noisy, but it provides additional information. Since the noisy/blurry image pair is captured in quick succession, the two images depict nearly the same scene, and the noisy image preserves the large-scale structures of that scene despite the noise. This noisy frame can therefore serve as critical support for an image deblurring network when recovering a sharp image. As the first contribution, two network structures are proposed to process the noisy and blurry images in a sequential manner and a parallel manner, respectively. The complementary information in the image pair is extracted and processed using network structures such as the auto-encoder and long short-term memory (LSTM). By fusing the information from the differently exposed images, the proposed network is better able to handle local blur caused by object motion, and image quality and quantitative scores are also marginally improved.

Camera built-in inertial sensor data is another type of auxiliary information that can be useful. The inertial sensor records the camera motion during the exposure time, and this motion can be characterized by homographies computed from the sensor data. To integrate the inertial sensor data, an image deblurring network is proposed that combines the homographies with the input blurry image through per-pixel concatenation on high-level feature maps. This structure can be regarded as a network version of image warping with a homography, in which the camera motion data is tightly fused with the image data; a simplified sketch of this fusion is given below. Through this structure, the proposed network effectively handles spatially varying blur without sacrificing processing speed.
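As an illustration only, the following minimal PyTorch sketch shows one way such per-pixel concatenation of homographies with high-level feature maps could be realized. The class name, the number of homographies, and the channel sizes are assumptions for exposition, not the exact architecture proposed in this work.

    import torch
    import torch.nn as nn

    class HomographyFusionBlock(nn.Module):
        # Illustrative sketch: broadcast the flattened homography parameters to
        # every spatial location and concatenate them with the image features,
        # a network analogue of warping the image with the camera motion.
        def __init__(self, feat_channels=64, num_homographies=5):
            super().__init__()
            self.homo_dim = num_homographies * 9  # each 3x3 homography flattened to 9 values
            self.fuse = nn.Sequential(
                nn.Conv2d(feat_channels + self.homo_dim, feat_channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )

        def forward(self, feats, homographies):
            # feats: (B, C, H, W) high-level feature maps of the blurry image
            # homographies: (B, num_homographies, 3, 3) computed from inertial sensor data
            b, _, h, w = feats.shape
            homo_map = homographies.reshape(b, self.homo_dim, 1, 1).expand(-1, -1, h, w)
            # per-pixel concatenation of camera motion information with image features
            return self.fuse(torch.cat([feats, homo_map], dim=1))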
The third contribution of this work is an image restoration network that jointly performs image deblurring and super-resolution for a newly proposed microscopy imaging method. In this network, the point spread function (PSF) of the imaging method is estimated and treated as the auxiliary information. The network is trained as an inverse operation of a convolution process in which the estimated PSF is regarded as the blur kernel; a simplified sketch of this PSF-conditioned restoration is given below. The results show that, with the assistance of the estimated PSF, both the deblurring performance and the spatial resolution of the microscopy imaging method are improved.

In summary, this work improves deblurring performance by incorporating auxiliary information. All the proposed networks are evaluated on both synthetic data and real data, and comprehensive comparisons with state-of-the-art image deblurring networks demonstrate the superior performance of the proposed work.
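For illustration, the sketch below shows a simple forward model (convolution with the estimated PSF followed by downsampling) and a small network that takes the PSF as an extra input while jointly deblurring and super-resolving. The names (forward_model, PSFConditionedRestorationNet), the single-channel assumption, and the fixed scale factor are illustrative assumptions, not the exact design used in this work.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def forward_model(sharp, psf, scale=2):
        # Assumed observation model: convolve the latent sharp image with the
        # estimated PSF, then downsample; the network below learns to invert it.
        blurred = F.conv2d(sharp, psf, padding=psf.shape[-1] // 2)
        return F.avg_pool2d(blurred, scale)

    class PSFConditionedRestorationNet(nn.Module):
        # Joint deblurring and super-resolution, with the estimated PSF supplied
        # as an extra input channel so the network can adapt to the blur kernel.
        def __init__(self, channels=64, scale=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),  # restores the sharp, high-resolution grid
            )

        def forward(self, observed, psf):
            # observed: (B, 1, h, w) low-resolution blurry image
            # psf: (1, 1, k, k) estimated point spread function
            psf_map = F.interpolate(psf, size=observed.shape[-2:], mode="nearest")
            x = torch.cat([observed, psf_map.expand(observed.shape[0], -1, -1, -1)], dim=1)
            return self.body(x)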