LDPC convolutional codes have been shown to be capable of achieving the same capacity-approaching performance as LDPC block codes with iterative message-passing decoding. In this dissertation, we present several methods of deriving families of time-varying and time-invariant LDPC convolutional codes from LDPC block codes. We demonstrate that the derived LDPC convolutional codes significantly outperform the underlying LDPC block codes, and we investigate the reasons for these 'convolutional gains'.

It is well known that cycles in the Tanner graph representation of a sparse code affect the iterative decoding algorithm, with short cycles generally pushing its performance further from optimum. Hence it is common practice to design codes that do not contain short cycles, so as to obtain independent messages in at least the initial iterations of the decoding process. We show that the derived LDPC convolutional codes have better graph-cycle properties than their block code counterparts. In particular, we show that the unwrapping process used to derive LDPC convolutional codes from LDPC block codes can 'break' some cycles of the underlying LDPC block code, but cannot create any shorter cycles: we prove that any cycle in an LDPC convolutional code maps to a cycle of the same or smaller length in the underlying LDPC block code.

Minimum distance is an important code design parameter when the channel quality is high, since codewords that are minimum distance apart are the most likely to be confused with one another under ML or near-ML decoding. In this case, the minimum distance determines the so-called error floor behavior of the performance curve at high channel signal-to-noise ratios. Thus, studying the minimum distance properties of a code family gives insight into its error floor behavior.
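The unwrapping process described above, and its effect on graph cycles, can be illustrated with a small sketch. The diagonal-cut unwrapping below (splitting a parity-check matrix H into a lower part Hl and an upper part Hu along a scaled diagonal, and tiling [Hl; Hu] as a diagonal band) follows the standard construction; the particular matrix, the window length, and the helper names `unwrap` and `girth` are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np
from collections import deque

def unwrap(H, copies=3):
    """Unwrap an (m x n) block parity-check matrix H: cut it along a
    scaled diagonal into a 'lower' part Hl and an 'upper' part Hu, then
    tile [Hl; Hu] diagonally to obtain a finite window of the
    (terminated) convolutional parity-check matrix."""
    m, n = H.shape
    # Entry (i, j) goes to Hl if it lies on or below the diagonal cut.
    rows, cols = np.indices(H.shape)
    mask = cols * m <= rows * n
    Hl, Hu = np.where(mask, H, 0), np.where(~mask, H, 0)
    W = np.zeros((m * (copies + 1), n * copies), dtype=H.dtype)
    for t in range(copies):
        W[t * m:(t + 1) * m, t * n:(t + 1) * n] += Hl        # current time
        W[(t + 1) * m:(t + 2) * m, t * n:(t + 1) * n] += Hu  # delayed part
    return W

def girth(H):
    """Length of the shortest cycle in the Tanner graph of H, found by
    BFS from every node; returns None if the graph is acyclic."""
    m, n = H.shape
    adj = [[] for _ in range(n + m)]  # variables 0..n-1, checks n..n+m-1
    for i, j in zip(*np.nonzero(H)):
        adj[j].append(n + i)
        adj[n + i].append(j)
    best = None
    for s in range(n + m):
        dist, parent = {s: 0}, {s: -1}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:  # non-tree edge closes a cycle
                    cyc = dist[u] + dist[v] + 1
                    best = cyc if best is None else min(best, cyc)
    return best
```

For example, the block matrix `H = [[1,1,0,0],[1,1,1,1],[0,0,1,1]]` has girth 4 (checks 0 and 1 share variables 0 and 1), whereas in `unwrap(H)` those four edges are split between the Hl and Hu parts, so the 4-cycles are broken across time instants and the girth of the unwrapped graph is strictly larger — and, consistent with the mapping result above, no shorter cycle can appear.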
We use asymptotic methods to calculate a lower bound on the free distance of several ensembles of asymptotically good LDPC convolutional codes derived from protograph-based LDPC block codes. Further, we show that the free distance to constraint length ratio of the LDPC convolutional codes exceeds the minimum distance to block length ratio of the corresponding LDPC block codes.

Message-passing iterative decoders for LDPC block codes are known to be subject to decoding failures due to so-called pseudo-codewords. These failures can cause the high signal-to-noise ratio performance of message-passing iterative decoding to be worse than that predicted by the maximum-likelihood decoding union bound. We address the pseudo-codeword problem from the convolutional code perspective. In particular, we show that the minimum pseudo-weight of an LDPC convolutional code is at least as large as the minimum pseudo-weight of the underlying LDPC block code. This result, which parallels a well-known relationship between the minimum Hamming weight of convolutional codes and the minimum Hamming weight of their quasi-cyclic counterparts, follows from the fact that every pseudo-codeword of the LDPC convolutional code induces a pseudo-codeword in the LDPC block code with pseudo-weight no larger than that of the convolutional pseudo-codeword. More generally, we demonstrate a difference in the weight spectra of LDPC block and convolutional codes that leads to improved performance at low-to-moderate signal-to-noise ratios for the convolutional codes, a conclusion supported by simulation results.
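For concreteness, the AWGN-channel pseudo-weight referred to here is the standard quantity w_p(x) = (sum_i x_i)^2 / (sum_i x_i^2). The minimal sketch below (the function name and sample vectors are ours, not from the dissertation) shows that it reduces to the Hamming weight on 0/1 codewords, while a fractional pseudo-codeword can have pseudo-weight smaller than the size of its support — which is how pseudo-codewords can drag iterative-decoding performance below the ML union bound prediction.

```python
def awgn_pseudoweight(x):
    """AWGN-channel pseudo-weight of a nonnegative vector x:
    (sum x_i)^2 / (sum x_i^2).  For a 0/1 vector this equals the
    Hamming weight, since both sums then count the support."""
    s1 = sum(x)
    s2 = sum(v * v for v in x)
    return s1 * s1 / s2 if s2 else 0.0
```

For instance, `awgn_pseudoweight([1, 0, 1, 1, 0])` gives 3.0, the Hamming weight, while the fractional vector `[1, 0.5, 0.5, 0.5]` has pseudo-weight 6.25/1.75 ≈ 3.57, below its support size of 4.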