Due to the increasing demand for force protection, intelligence gathering, and targeting systems in recent years, object tracking has received considerable attention in the research community. Tracking refers to the problem of estimating the trajectory of an object as it moves around a scene. Object tracking, in general, is a challenging problem: difficulties in robust real-world target tracking can arise from low resolution, abrupt object motion, loss of information caused by the projection of the 3D world onto a 2D image, changing appearance of both the object and the scene, non-rigid object structures, occlusion, image noise, and camera motion.

In this dissertation, several methodologies that exploit various visual cues are proposed to tackle the problem of robust target tracking. A survey of the robust target tracking literature is presented using a taxonomy of existing algorithms, along with the background material necessary to understand the contributions of this work. Using this taxonomy, the performance of state-of-the-art object tracking techniques is compared and evaluated.

First, a new robust online rigid tracking system based on a Studentized Dynamical System framework is introduced. The proposed system incorporates dense target features, fine-tunes its parameters online, and is robust enough to track very small, low-contrast targets undergoing outlier disturbances. The robust tracking performance can be attributed to the use of a Student's t-distribution, whose heavy tails adaptively downweight outliers.

Next, a novel synergistic approach for the robust contour tracking of a moving target undergoing non-rigid motion is introduced. The method unifies two powerful segmentation tools: Geodesic Active Contours (GAC) and 3D Conditional Random Fields (CRF). This contour tracking framework not only fuses various image cues efficiently but also offers an elegant inference process for adapting to changes in the scene. Experimental evaluations on typical contour tracking problems demonstrate its accuracy in delineating moving target boundaries.

As a third contribution, a pixel-level motion estimation method is developed by augmenting an L1 total variation framework with a new regularization term based on the $p$-harmonic energy. The framework uses PDEs derived from $p$-harmonic maps to smooth the optical flow angle, and an L1 total variation model to estimate the optical flow vector. The evaluation demonstrates that, by using the $p$-harmonic energy to impose an explicit smoothness prior on the orientations of the motion vectors, the proposed algorithm outperforms a recently published, top-performing L1 total variation method, particularly in average angular error, confirming the capability of the new regularization term to improve optical flow estimation.

This work improves on prior performance in both rigid and non-rigid target tracking, and identifies a new regularization tool for high-accuracy optical flow estimation. It also informs the design of new systems for unmanned reconnaissance and visual surveillance of remote targets.
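To make the role of the Student's t-distribution concrete, the following is a generic sketch of the robustness mechanism, not the dissertation's specific dynamical-system formulation: under a univariate t observation model with location $\mu$, scale $\sigma$, and degrees of freedom $\nu$,
\[
p(x \mid \mu, \sigma, \nu) \;=\; \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)\sqrt{\nu\pi}\,\sigma}\left(1 + \frac{(x-\mu)^2}{\nu\sigma^2}\right)^{-\frac{\nu+1}{2}},
\]
and EM-style inference assigns each measurement the weight
\[
w \;=\; \frac{\nu+1}{\nu + (x-\mu)^2/\sigma^2},
\]
so that large residuals receive small weights and outliers are adaptively downweighted. As $\nu \to \infty$ the model reduces to a Gaussian, the weights become uniform, and the robustness vanishes.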
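For reference, the two components unified by the contour tracker take the following standard forms; the dissertation's specific fused energy is not reproduced here. The GAC functional weights contour length by an edge-stopping function $g$ (one common choice is shown),
\[
E_{\mathrm{GAC}}(C) \;=\; \int_0^1 g\big(|\nabla I(C(q))|\big)\,|C'(q)|\,dq, \qquad g(r) \;=\; \frac{1}{1+r^2},
\]
while a CRF over pixel labels $\mathbf{y}$ given image data $\mathbf{x}$ factorizes as
\[
P(\mathbf{y}\mid\mathbf{x}) \;=\; \frac{1}{Z(\mathbf{x})}\exp\Big(-\sum_{i}\psi_i(y_i;\mathbf{x}) \;-\; \sum_{(i,j)\in\mathcal{E}}\psi_{ij}(y_i,y_j;\mathbf{x})\Big),
\]
where, in a 3D CRF, the edge set $\mathcal{E}$ typically links neighboring pixels both within a frame and across consecutive frames, allowing the inference step to exploit temporal as well as spatial cues.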
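Similarly, the optical flow contribution can be situated against the standard TV-L1 building blocks; how the dissertation couples them during minimization is not detailed in this abstract. For a flow field $\mathbf{u}=(u_1,u_2)$ between images $I_0$ and $I_1$, the L1 total variation energy is
\[
E_{\mathrm{TV\text{-}L1}}(\mathbf{u}) \;=\; \int_{\Omega} \lambda\,\big|I_1(\mathbf{x}+\mathbf{u}(\mathbf{x})) - I_0(\mathbf{x})\big| \;+\; |\nabla u_1| + |\nabla u_2| \;\, d\mathbf{x},
\]
and the $p$-harmonic energy of the flow angle $\theta = \arctan(u_2/u_1)$ is
\[
E_p(\theta) \;=\; \int_{\Omega} |\nabla \theta|^{p}\, d\mathbf{x},
\]
whose Euler-Lagrange equation is the $p$-Laplace PDE $\operatorname{div}\!\big(|\nabla\theta|^{p-2}\nabla\theta\big)=0$; smoothing $\theta$ by this PDE is what imposes the explicit prior on motion-vector orientation.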