id: work_mg56pigmxrfjtj5dkcxnsr4xhu
author: Richard Socher
title: Grounded Compositional Semantics for Finding and Describing Images with Sentences
date: 2014
pages: 12
extension: .pdf
mime: application/pdf
words: 7703
sentences: 779
flesch: 71
summary: Grounded Compositional Semantics for Finding and Describing Images with Sentences. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline. DT-RNNs are less sensitive to syntactic structure or word order than related models such as CT-RNNs or Recurrent Neural Networks. Figure 1: The DT-RNN learns vector representations for sentences based on their dependency trees. The models use unsupervised large text corpora to learn semantic word representations. Prior approaches have used a compositional sentence vector representation, and they require specific language generation techniques and sophisticated inference methods. In order for the DT-RNN to compute a vector representation for an ordered list of m words (a phrase or sentence), the single words are mapped to a vector space. Figure 3: Example of a DT-RNN tree structure for computing a sentence representation in a bottom-up fashion.
cache: ./cache/work_mg56pigmxrfjtj5dkcxnsr4xhu.pdf
txt: ./txt/work_mg56pigmxrfjtj5dkcxnsr4xhu.txt