Imaging is a dominant strategy for data collection in neuroscience, yielding 3D stacks of images that can scale to petabytes of data for a single experiment. Machine learning-based algorithms from the computer vision domain can serve as a pair of virtual eyes that tirelessly process these images to automatically construct more complete and realistic circuits. In practice, however, such algorithms are often too error-prone and computationally expensive to be immediately useful. We therefore introduce a fast, flexible, learning-free automated method for sparse segmentation and 2D/3D reconstruction of brain micro-structures. Unlike supervised learning methods, our pipeline exploits structure-specific contextual clues and requires no extensive pre-training. The approach generalizes across imaging modalities and sample targets, including serially-sectioned scanning electron microscopy (sSEM) of genetically labeled and contrast-enhanced processes, spectral confocal reflectance (SCoRe) microscopy, and high-energy synchrotron X-ray microtomography (μCT) of large tissue volumes. Experiments on newly published and novel mouse datasets demonstrate the high biological fidelity and recall of the pipeline, producing reconstructions of sufficient quality for preliminary biological study. Compared to existing supervised methods, it is significantly faster (by up to several orders of magnitude) and yields high-quality reconstructions that are robust to noise and artifacts. After introducing this method, we extend the software with additional functionality.
We use a statistical reporting feature to compare different classes of microbiomes in the rodent enteric nervous system, evaluating the influence of the microbiome on the enteric nervous system, and we reconstruct 3D models that help scientists better assess the health of the model system. We also reconstruct more exotic systems, including several butterfly and jumping spider species, to evaluate the impact of evolution on their brains and visual systems.

A true understanding of the brain lies in multi-resolution studies that incorporate both structural and functional neural data. New imaging technologies and tissue preparation techniques are enabling such studies at an unprecedented scale, allowing the same neural volume to be imaged in vivo and ex vivo. In moving between imaging modalities, however, the shape, orientation, and resolution of the volume may change, hindering cross-modality mapping. To address this, we introduce a novel volume co-registration algorithm based on graph-theoretic principles. Starting with two 3D reconstructions, our method builds a graph from object skeletons and then compares branch points to find the closest match, yielding a transformation matrix that maps one volume onto the other. We demonstrate that this method correctly co-registers two volumes within the same modality, across modalities, and across different resolution settings. Notably, we can apply this method in sequence across volumes to co-register sSEM, μCT X-ray, and 2-photon microscopy images of the same neural volume.