id: work_waeoeckza5hvjfqri65tyffi54
author: Ikuya Yamada
title: Learning Distributed Representations of Texts and Entities from Knowledge Base
date: 2017
pages: 16
extension: .pdf
mime: application/pdf
words: 8722
sentences: 878
flesch: 63
summary: We train the model using a large corpus of texts and their entity annotations extracted from Wikipedia, and evaluate the model on three important NLP tasks (i.e., semantic textual similarity, entity linking, and factoid question answering) involving both unsupervised and supervised settings. Methods capable of learning distributed representations of arbitrary-length texts (i.e., fixed-length continuous vectors that encode the semantics of texts) place texts in a continuous vector space; a shared vector space enables us to easily compute the similarity between texts and entities, which can be beneficial for various KB-related tasks. Additionally, methods have been proposed that map words and entities into the same continuous vector space (Wang et al., 2014; Yamada et al., 2016). We apply the model to three different NLP tasks, namely semantic textual similarity, entity linking, and factoid question answering. In this section, we propose our approach to learning distributed representations of texts and entities. In addition, because the length of a text t is arbitrary in our model, we test the following two settings: t as a paragraph, and t as a sentence.
cache: ./cache/work_waeoeckza5hvjfqri65tyffi54.pdf
txt: ./txt/work_waeoeckza5hvjfqri65tyffi54.txt
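The summary notes that placing texts and entities in the same continuous vector space makes text-entity similarity easy to compute. Below is a minimal Python sketch of that idea, not the paper's actual model or API: the embedding values are random stand-ins, the names word_vecs, entity_vecs, and cosine_similarity are illustrative, and averaging word vectors is just one simple way to turn an arbitrary-length text into a fixed-length vector (the paper learns its representations jointly from Wikipedia).

import numpy as np

def cosine_similarity(query: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between a query vector and each row of a matrix."""
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q

# Hypothetical pre-trained embeddings in a shared d-dimensional space.
rng = np.random.default_rng(0)
d = 300
word_vecs = {w: rng.standard_normal(d) for w in ["apple", "released", "the", "iphone"]}
entity_vecs = {
    "Apple Inc.": rng.standard_normal(d),
    "Apple (fruit)": rng.standard_normal(d),
    "IPhone": rng.standard_normal(d),
}

# Represent an arbitrary-length text as the average of its word vectors,
# yielding a fixed-length vector regardless of text length.
text = "apple released the iphone"
text_vec = np.mean([word_vecs[w] for w in text.split()], axis=0)

# Rank entities by similarity to the text in the shared space.
names = list(entity_vecs)
scores = cosine_similarity(text_vec, np.stack([entity_vecs[n] for n in names]))
for name, score in sorted(zip(names, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")

Because both texts and entities live in one space, the same ranking loop serves tasks like entity linking (score candidate entities against the surrounding text) with no task-specific machinery.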